Newton’s Method: Explanation and Example Usage

Polynomials are complex to solve. Sometimes they can be solved quite easily, and sometimes solving them analytically seems impossible. Fortunately there exists something called Newton’s Method, which lets you approximate the roots of a polynomial (or any differentiable function) by using the Taylor series.

The Taylor series of a function f around a point a is defined as follows

f(x) = f(a) + f'(a)(x-a) + \frac{f''(a)}{2!}(x-a)^2+ \frac{f'''(a)}{3!}(x-a)^3 + \dots

By keeping only the first two terms of the Taylor series, we can approximate the function near some point x_0 at which it is differentiable.

f(x) \approx f(x_0) + f'(x_0)(x - x_0)

First you need to find an approximate location for a root, for example by looking at the function’s graph, or by stepping through some values of x and watching for values close to zero or for changes of sign.

For example, let’s take the function f(x) = 2x^3+5x^2-2.


You can see that there is a root between 0.5 and 0.75; we can take that as our initial approximation.

[Graph of f(x) showing the root between 0.5 and 0.75]
Let x_0 = 0.5 be our initial guess. Then let g(x) be the linear approximation of f at x_0, which we will solve instead.

f(x) \approx g(x) = f(0.5) + f'(0.5)(x - 0.5)

Then we find that

g(x) = 2(0.5)^3+5(0.5)^2-2 + (6(0.5)^2+10(0.5))(x - 0.5) = 6.5x - 3.75

If we plot g(x), we can see that it is tangent to f(x) at x_0. Furthermore, it has a root which is very close to the root we want to calculate. Dazzling!


Now, if we solve g(x) = 0 we get x_1 = \frac{15}{26} \approx 0.5769. If we zoom in a little more, we find that the approximation turned out to be quite effective, with an error of only about 0.006!


If you repeat this process with x_1, you get a new function h(x)=\frac{2625}{338}x - \frac{85575347}{19307236}, which has a root at x_2 \approx 0.5707. As you can see, the method is quite effective and converges very quickly.


Since we want to iterate this process on a computer, it helps to put the equation in a general form. Notice the pattern: at each step we solve

0 = f(x_n) + f'(x_n)(x_{n+1} - x_n)

when solved for x_{n+1} becomes

x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)}

which, for programming purposes, is much easier to work with than the explicit tangent-line equations above.

Example: Calculate the square root of a number

Computing the square root of some number k just means finding the positive root of f(x) = x^2 - k. Since the derivative is f'(x) = 2x, implementing such a program turns out to be pretty easy.
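Plugging these into the general formula and simplifying gives a compact update rule, which is the classic Babylonian method for square roots:

x_{n+1} = x_n - \frac{x_n^2 - k}{2x_n} = \frac{1}{2}\left(x_n + \frac{k}{x_n}\right)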

First, notice how quickly Newton’s Method converges for solving k=2.

Step | Starting at 2 | Starting at 10 | Starting at 100
  1  | 1.5           | 5.1            | 50.01
  2  | 1.4166667     | 2.7460784      | 25.0249960
  3  | 1.4142157     | 1.7371949      | 12.5524580
  4  | 1.4142136     | 1.4442381      | 6.3558947
  5  | 1.4142136     | 1.4145257      | 3.3352816

As you can see, Newton’s Method converges pretty fast, even if your initial guess is absurdly dumb. Such an algorithm can be implemented with ease, as seen below (C code).

#include <stdio.h>
#include <stdlib.h>
#include <math.h>

int main(int argc, char **argv)
{
    double x, num;
    int i;

    if(argc != 2) {
        printf("Usage: %s number\n", argv[0]);
        return 1;
    }

    num = atof(argv[1]);

    // First guess
    x = num/2.0;

    // Number of iterations
    i = 25;

    while(i--) {
        // x_{n+1} = x_n - f(x_n)/f'(x_n), with f(x) = x^2 - num and f'(x) = 2x
        x = -(pow(x, 2.0) - num)/(2.0*x) + x;
    }

    printf("%f\n", x);
    return 0;
}
The program can still return poor results for very large inputs, though. The bigger the number, the worse the approximation after a fixed 25 iterations: the crude first guess of num/2 starts so far from the root that the iteration count is only sufficient for reasonably small numbers.


There are a multitude of improvements that could be made, such as making the iteration count depend on the number, or finding a good linear approximation to the square root to use as the initial guess. I won’t discuss these methods in depth here, since my only ambition was to introduce you to Newton’s Method and give you an idea of how it works and how awesome it is.