Multiplications are more expensive to compute than additions. Additionally, rounding error can accumulate when additions and multiplications are mixed incorrectly. To evaluate a polynomial $a_n x^n + a_{n-1} x^{n-1} + \dots + a_1 x + a_0$ at some specific $x$, the most efficient and accurate solution is to use Horner’s Method: re-write the polynomial as $(\cdots((a_n x + a_{n-1})x + a_{n-2})x + \cdots)x + a_0$ and evaluate left-to-right as $n$ multiplications and $n$ additions. On hardware with a fused-multiply-add instruction, this reduces further to just $n$ FMA operations.
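As a concrete sketch (the function name and coefficient layout are my choices, not part of the method), Horner’s rule with FMA might look like this in C:

```c
#include <math.h>   /* for fma() */
#include <stdio.h>

/* Evaluate a[0] + a[1]*x + ... + a[n]*x^n at x using Horner's rule.
   Each loop iteration is one fused multiply-add: result*x + a[i]. */
double horner(const double *a, int n, double x) {
    double result = a[n];
    for (int i = n - 1; i >= 0; i--) {
        result = fma(result, x, a[i]);  /* one rounding per step */
    }
    return result;
}

int main(void) {
    double a[] = {1.0, -2.0, 0.0, 3.0};   /* 1 - 2x + 3x^3 */
    printf("%f\n", horner(a, 3, 2.0));    /* 24 - 4 + 1 = 21 */
    return 0;
}
```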
The finite difference of some function $f$ at $x$ is defined to be $\Delta_h f(x) = f(x+h) - f(x)$. For our purposes, we will always use $h = 1$ to get the simpler $f(x+1) - f(x)$, which is sometimes called the forward finite difference of $f$ at $x$, and write it as $\Delta f(x)$.
The forward finite difference of the polynomial $x^3$ can be evaluated by expanding $(x+1)^3 - x^3$ and combining like terms to get $3x^2 + 3x + 1$. Taking the forward finite difference of that, we get $6x + 6$, and the forward finite difference of that is simply $6$. The forward finite difference of a constant like $6$ is always $0$.
Just as in this example, the finite difference of any polynomial is a polynomial of one lesser degree: if the leading term is $a_n x^n$, then $a_n(x+1)^n - a_n x^n = n a_n x^{n-1} + \cdots$, and every other term drops in degree the same way.
If we wish to evaluate a polynomial at a sequence of adjacent integer arguments, we can use forward finite differences to compute these points very efficiently. In particular, if we know $f(x)$ and $\Delta f(x)$, then we can compute $f(x+1) = f(x) + \Delta f(x)$. If $\Delta f$ is a constant we can keep adding that same value to get $f(x+2)$, $f(x+3)$, and so on. If it is not a constant then it is another polynomial we can use this same method on to find the sequence of $\Delta f(x)$s we need.
Consider the 1st-order polynomial $f(x) = 3x + 1$. Using Horner’s rule, we find $f(0) = 1$ and $f(1) = 4$.
Subtracting these, we find $\Delta f(0) = 4 - 1 = 3$. Because $f$ is linear, $\Delta f$ is constant.
We now find other values of $f$: $f(2) = f(1) + 3 = 7$, $f(3) = 7 + 3 = 10$, $f(4) = 10 + 3 = 13$, and so on.
We can also move backwards: $f(-1) = f(0) - 3 = -2$, $f(-2) = -2 - 3 = -5$, and so on.
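In code, the linear case needs just one addition per sample; a minimal sketch of the $f(x) = 3x + 1$ example (variable names are mine):

```c
#include <stdio.h>

int main(void) {
    /* f(x) = 3x + 1, evaluated at x = 0..9 with one addition each */
    int f = 1;      /* f(0), found with Horner's rule */
    int df = 3;     /* delta-f, constant because f is linear */
    for (int x = 0; x < 10; x++) {
        printf("f(%d) = %d\n", x, f);
        f += df;    /* f(x+1) = f(x) + delta-f(x) */
    }
    return 0;
}
```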
Consider the 2nd-order polynomial $f(x) = x^2 + x + 1$. Using Horner’s rule, we find
$f(0) = 1$, $f(1) = 3$, and $f(2) = 7$.
Subtracting these, we find
$\Delta f(0) = 2$ and $\Delta f(1) = 4$,
and subtracting those we get
$\Delta^2 f(0) = 2$.
Because $f$ is quadratic, $\Delta f$ is linear and $\Delta^2 f$ is constant.
Now, we know that $f(3) = f(2) + \Delta f(2)$, but we don’t yet know $\Delta f(2)$. But we know $\Delta f(2) = \Delta f(1) + \Delta^2 f = 4 + 2 = 6$, which means $f(3) = 7 + 6 = 13$. And we got that with just two additions, no multiplications. Similarly, $\Delta f(3) = 6 + 2 = 8$ gives $f(4) = 13 + 8 = 21$.
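The quadratic case keeps one extra running difference, costing two additions per sample. A sketch of the example above, under the same naming conventions:

```c
#include <stdio.h>

int main(void) {
    /* f(x) = x*x + x + 1, stepped with two additions per sample */
    int f   = 1;    /* f(0) */
    int df  = 2;    /* delta-f(0) = f(1) - f(0) */
    int ddf = 2;    /* delta^2-f, constant because f is quadratic */
    for (int x = 0; x < 10; x++) {
        printf("f(%d) = %d\n", x, f);
        f  += df;   /* f(x+1)       = f(x)       + delta-f(x) */
        df += ddf;  /* delta-f(x+1) = delta-f(x) + delta^2-f  */
    }
    return 0;
}
```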
Let $f(x) = x^3 - 2x + 3$. Evaluating using Horner’s rule, we get $f(0) = 3$, $f(1) = 2$, $f(2) = 7$, and $f(3) = 24$. Differencing these, $\Delta f(0) = -1$, $\Delta f(1) = 5$, and $\Delta f(2) = 17$; differencing again, $\Delta^2 f(0) = 6$ and $\Delta^2 f(1) = 12$; and once more, $\Delta^3 f = 6$.
We can now find $f$ at larger $x$ by adding finite differences: $\Delta^2 f(2) = 12 + 6 = 18$, $\Delta f(3) = 17 + 18 = 35$, $f(4) = 24 + 35 = 59$; then $\Delta^2 f(3) = 18 + 6 = 24$, $\Delta f(4) = 35 + 24 = 59$, $f(5) = 59 + 59 = 118$; and so on.
And at smaller $x$ by subtracting finite differences: $\Delta^2 f(-1) = 6 - 6 = 0$, $\Delta f(-1) = -1 - 0 = -1$, $f(-1) = 3 - (-1) = 4$; and so on.
The method of finite differences lets us evaluate a single-variable polynomial efficiently at integer arguments. All we need to do is
1. evaluate the degree-$d$ polynomial at $d+1$ consecutive integers using Horner’s rule;
2. subtract adjacent values repeatedly to get $\Delta f$, $\Delta^2 f$, up through the constant $\Delta^d f$; and
3. produce each additional value with just $d$ additions, no multiplications at all.
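Putting those three steps together, here is a minimal difference-engine sketch in C, seeded with the cubic example above (the array layout and names are my own choices):

```c
#include <stdio.h>

#define DEGREE 3

/* Difference-engine evaluation of f(x) = x^3 - 2x + 3 at x = 0,1,2,...
   d[0] holds f(x); d[1..DEGREE] hold delta-f ... delta^3-f at x. */
int main(void) {
    /* Seeded from the table above: f(0), delta-f(0), delta^2-f(0), delta^3-f */
    long d[DEGREE + 1] = {3, -1, 6, 6};
    for (int x = 0; x <= 10; x++) {
        printf("f(%d) = %ld\n", x, d[0]);
        /* advance every difference by one step: DEGREE additions total */
        for (int i = 0; i < DEGREE; i++) {
            d[i] += d[i + 1];
        }
    }
    return 0;
}
```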
This also generalizes naturally to multivariate polynomials. If we have $f(x,y)$ we can find both $\Delta_x f(x,y) = f(x+1,y) - f(x,y)$ and $\Delta_y f(x,y) = f(x,y+1) - f(x,y)$. Conveniently, the order of differencing does not matter: $\Delta_x \Delta_y f(x,y) = \Delta_y \Delta_x f(x,y) = f(x+1,y+1) - f(x+1,y) - f(x,y+1) + f(x,y)$.
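As a quick sanity check of that commutativity claim, the sketch below differences an arbitrary bivariate polynomial (my own example, not one from the text) both ways and compares:

```c
#include <stdio.h>

/* A sample bivariate polynomial, chosen arbitrarily for illustration. */
long f(long x, long y) { return 3*x*x*y - 2*x*y + y*y + 5; }

long dx(long x, long y) { return f(x + 1, y) - f(x, y); }  /* delta-x */
long dy(long x, long y) { return f(x, y + 1) - f(x, y); }  /* delta-y */

int main(void) {
    /* delta-x of delta-y equals delta-y of delta-x at every point */
    for (long x = -3; x <= 3; x++)
        for (long y = -3; y <= 3; y++)
            if (dy(x + 1, y) - dy(x, y) != dx(x, y + 1) - dx(x, y))
                printf("mismatch at (%ld,%ld)\n", x, y);
    printf("mixed differences agree\n");
    return 0;
}
```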
Consider the circle equation $f(x,y) = (x - x_c)^2 + (y - y_c)^2 - r^2$, so called because $f(x,y) = 0$ is a circle of radius $r$ centered at point $(x_c, y_c)$. Computing the finite differences, we have: $\Delta_x f(x,y) = 2(x - x_c) + 1$, $\Delta_y f(x,y) = 2(y - y_c) + 1$, $\Delta_x \Delta_x f(x,y) = 2$, $\Delta_y \Delta_y f(x,y) = 2$, and $\Delta_x \Delta_y f(x,y) = 0$.
Consider drawing a circle of radius 6 centered at (2,3).
We know that $(8,3)$ is on the circle, and that it is symmetric; if we can find the points between 0° and 45°, we can mirror them to get the rest of the points.
We find our initial value and differences: $f(8,3) = 0$ and $\Delta_y f(8,3) = 2(3-3)+1 = 1$. Since we will be stepping in the $-x$ direction, we also want $\Delta_{-x} f(x,y) = f(x-1,y) - f(x,y) = -2(x-2) + 1$, which at $(8,3)$ is $-11$. Each step up adds 2 to $\Delta_y f$, each step left adds 2 to $\Delta_{-x} f$, and a step in one variable never changes the other difference because $\Delta_x \Delta_y f = 0$.
Let’s abbreviate this as $[f, \Delta_{-x} f, \Delta_y f]$ so our starting state at $(8,3)$ is $[0, -11, 1]$.
Now we’ll repeatedly move up in $y$ and decide if it’s better to also move left in $x$ or not.
up to $(8,4)$ gives us $[1, -11, 3]$
left to $(7,4)$ would give us $f = 1 - 11 = -10$, but that’s further from 0 so let’s not
plot $(8,4)$ and its symmetric neighbors.
up to $(8,5)$ gives us $[4, -11, 5]$
left to $(7,5)$ would give us $f = 4 - 11 = -7$, but that’s further from 0 so let’s not
plot $(8,5)$ and its symmetric neighbors.
up to $(8,6)$ gives us $[9, -11, 7]$
left to $(7,6)$ gives us $[-2, -9, 7]$, which is closer to 0 so let’s use that
plot $(7,6)$ and its symmetric neighbors.
up to $(7,7)$ gives us $[5, -9, 9]$
left to $(6,7)$ gives us $[-4, -7, 9]$, which is closer to 0 so let’s use that
plot $(6,7)$ and its symmetric neighbors.
$\Delta_y f$ now has larger magnitude than $\Delta_{-x} f$, meaning we have reached the 45° point and are done.
The exact details of how we decide between moving in $x$ and not moving depend on whether we want to plot points inside, near, or outside the circle. For inside points, always keep $f \le 0$; this is what we’d do to fill it in. For outside points, always keep $f \ge 0$; this is what we’d do to mask the circle, coloring things outside it. For nearest points, keep the $f$ with the smaller magnitude; this is what we’d do to draw the circle as a single-pixel-width ring.
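Here is a minimal sketch of the whole walk in C, using the nearest-point rule from the example; `circle` and `plot8` are my own names, and `plot8` just prints the eight mirrored points rather than setting pixels:

```c
#include <stdio.h>
#include <stdlib.h>

/* Plot (xc+dx, yc+dy) and its seven mirror images. */
static void plot8(int xc, int yc, int dx, int dy) {
    printf("(%d,%d) (%d,%d) (%d,%d) (%d,%d) ",
           xc + dx, yc + dy, xc - dx, yc + dy,
           xc + dx, yc - dy, xc - dx, yc - dy);
    printf("(%d,%d) (%d,%d) (%d,%d) (%d,%d)\n",
           xc + dy, yc + dx, xc - dy, yc + dx,
           xc + dy, yc - dx, xc - dy, yc - dx);
}

/* Walk the 0..45-degree arc of (x-xc)^2 + (y-yc)^2 - r^2 = 0 using
   only additions, keeping the f of smaller magnitude ("nearest"). */
void circle(int xc, int yc, int r) {
    int x = xc + r, y = yc;           /* the 0-degree point, where f = 0 */
    long f = 0;
    long dxf = -2L * (x - xc) + 1;    /* delta from a step left: f(x-1,y)-f(x,y) */
    long dyf =  2L * (y - yc) + 1;    /* delta from a step up:   f(x,y+1)-f(x,y) */
    plot8(xc, yc, x - xc, y - yc);
    while (labs(dyf) < labs(dxf)) {   /* |dyf| >= |dxf| marks the 45-degree point */
        y += 1; f += dyf; dyf += 2;   /* always step up */
        if (labs(f + dxf) < labs(f)) {/* step left only if it brings f nearer 0 */
            x -= 1; f += dxf; dxf += 2;
        }
        plot8(xc, yc, x - xc, y - yc);
    }
}

int main(void) {
    circle(2, 3, 6);   /* the radius-6 circle at (2,3) from the example */
    return 0;
}
```

The only change needed for the inside and outside variants described above is the test that decides whether to take the step left.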