
UNIVERSITY OF KARBALA

COLLEGE OF ENGINEERING

PETROLEUM ENG. DEP

“Newton's Method and Loops”

A Report

Submitted to the Petroleum Engineering Department of the University of Karbala

By:

Ali Mahmoud Ayal

To:

Dr. Farhan Altaee

Evening study

Date

1/9/2020

Abstract

This report concerns the Newton-Raphson method (Newton's method), a general technique for solving non-square and non-linear problems. The study also aims to compare the performance and the rate of convergence of the bisection method, the Newton method, and the secant method for root finding. It also presents an approach to calculation with nonlinear equations that is similar to the simple Newton-Raphson method, in which the inverse Jacobian matrix is used for the iteration process; this is further used for distributed power load flow calculation and is helpful in a number of applications. Finally, it discusses the difference between using a built-in derivative function and a self-derived derivative when solving a non-linear equation on a scientific calculator.

Introduction

Newton's method is very fast and efficient compared with the other methods. In order to compare performance, it is therefore very important to observe the cost and the speed of convergence. Newton's method requires only one function evaluation and one derivative evaluation per iteration. Comparing the rates of convergence of the bisection, Newton, and secant methods gives the ordering bisection method < Newton method < secant method; in terms of numbers, Newton's method is 7.678622465 times better than the bisection method, whereas the secant method is 1.389482397 times better than Newton's method [1].

Complex systems with high-speed processing and control are in demand nowadays. One solution is to divide them into subsystems, so that each subsystem can be treated individually and control and operation can be applied to each subsystem separately.

Finding roots of a nonlinear equation with the help of the Newton-Raphson method provides good results with fast convergence, MATLAB has also adopted this method for finding roots, and a scientific calculator is another tool used for such calculations.

Bracketing methods, which require the root to be bracketed by two guesses, are always convergent because they are based on reducing the interval between the two guesses. The bisection method and the false-position method make use of the bracketing approach.

Newton's method (also known as the Newton-Raphson method), named after Isaac Newton and Joseph Raphson [2], is a technique for finding successively better approximations to the roots (or zeroes) of a real-valued function. Any zero-finding method (bisection method, false position method, Newton-Raphson, etc.) can also be used to find a minimum or maximum of such a function by finding a zero of the function's first derivative; in that setting Newton's method acts as an optimization algorithm.

Explanation

The idea of the Newton-Raphson method is as follows: one starts with an initial guess which is reasonably close to the true root; the function is then approximated by its tangent line (which can be computed using the tools of calculus), and one computes the x-intercept of this tangent line (which is easily done with simple algebra). This x-intercept will typically be a better approximation to the function's root than the original guess, and the method can be iterated.

Based on collinear scaling and local quadratic approximation, quasi-Newton methods have been improved for cases where the function value is not fully used in the Hessian matrix. Because the collinear scaling factor in earlier work may become singular, a new collinear scaling factor has been studied. Using local quadratic approximation, an improved collinear scaling algorithm that strengthens stability has been presented, and the global convergence of the algorithm has been proved. In addition, numerical results from training a neural network with the improved collinear scaling algorithm showed that its efficiency is much better than that of the traditional one.

Theory

Before starting Newton's method, we first get an idea of what the function looks like. The first thing to do is always to try to plot the function and look at it. For our example, the plot looks like this:

f(x) is represented by the blue curve.
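A minimal MATLAB sketch that generates such a plot, assuming the example equation sin(x²) − x³ = 1 (introduced as equation (3.1) in the next section) rewritten in the form f(x) = sin(x²) − x³ − 1:

% Plot f(x) = sin(x^2) - x^3 - 1 over a range containing the root
x = linspace(-2, 2, 400);      % 400 sample points between -2 and 2
f = sin(x.^2) - x.^3 - 1;      % element-wise evaluation of f
plot(x, f, 'b'), grid on       % blue curve, as in the figure
xlabel('x'), ylabel('f(x)')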

From the plot we see that the root lies somewhere between −1 and −0.5, but the exact
numerical value of this root is hard to read off the graph. Still, that is already
good enough for starting Newton’s method to find a more accurate value for
the solution. We have seen a zero and we know more or less where it is.

Before we continue, a word of caution: when the function is not obviously well-behaved like it is in our example, visual inspection may actually be
misleading. Unfortunately, I’ve seen even some of my fellow scientists
argue something based on a graph, which would have needed some kind of
backup or proof. In our case, we know that things are fine, because we know
the functional form of f(x) and how its parts behave. So, if that is not the
case, proceed with caution.

Prerequisites for the use of Newton’s method

So we have an equation to solve. We need to make sure that the only variable in the equation is the one we want to know a solution for. All other
variables (or parameters, in this case), have to assume numerical values. If
need be, we set them to example values [3]. At the end of such a procedure,
we’ll arrive at a form similar to our example, which is

sin(x²) − x³ = 1 (3.1)

There is another prerequisite, namely that we are dealing with real numbers
in the equation. This goes for parameters and coefficients, but also for the
variable x itself. Newton’s method cannot find complex-valued solutions to
such an equation.

There are two more things we need to be able to do when using the method:
one is calculating derivatives and the other is finding reasonable points for
the function to start looking for the solution in the neighborhood of these
points. These two requirements become clear as soon as we start using
Newton's method.
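For instance, writing the example equation (3.1) in the form f(x) = sin(x²) − x³ − 1 = 0, the chain rule gives the derivative the method will need, f′(x) = 2x·cos(x²) − 3x², and the plot in the previous section already tells us to start looking for the solution between −1 and −0.5.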

The Newton-Raphson method as an iterative procedure

Newton's method is a step-by-step procedure at heart. In particular, a set of steps is repeated over and over again, until we are satisfied with the result or
have decided that the method failed [4]. Such a repetition in a mathematical
procedure or an algorithm is called iteration. So, we iterate (i.e. repeat until
we are done) the following idea:

Given any equation in one real variable to solve, we do the following:

• rewrite the equation such that it is of the form f(x) = 0
• convince ourselves that the function f(x) does indeed have a zero (or more)
• figure out where one of the zeros roughly is (Newton's method finds zeros one at a time)
• pick a point on the function close enough to that zero (you'll see exactly what "close enough" means)
• note the value of x at this point and call it x0
• construct the tangent to f at this point
• compute the zero of the tangent (which is simple, because the tangent is a linear function)
• use that zero of the tangent as our new x0, but in order to avoid confusion, we call it x1
• repeat with x1 instead of x0, starting with the construction of the tangent
• repeat again with x2, x3, x4, etc. until satisfied with the precision of the solution or until failure is evident
• if successful, the final xi is a good numerical approximation of the actual root of the equation.

f(x) = 0 (3.2)

As you learned in calculus, the final step in many optimization problems is to solve an equation of this form, where f is the derivative of a function, F, that you want to maximize or minimize. In real engineering problems the function you wish to optimize can come from a large variety of sources, including formulas, solutions of differential equations, experiments, or simulations [5].
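As an illustration, minimizing F(x) = x⁴/4 − 5x leads to solving F′(x) = x³ − 5 = 0, which is exactly the test equation used for the MATLAB program later in this report; the minimizer is the root x∗ = 5^(1/3).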

Newton iterations

We will denote an actual solution of equation (3.2) by x∗. There are three
methods which you may have discussed in Calculus: the bisection method,
the secant method and Newton’s method. All three depend on beginning
close (in some sense) to an actual solution x∗.

Recall Newton's method. You should know that the basis for Newton's method is the approximation of a function by its linearization at a point, i.e.

f(x) ≈ f(x0) + f′(x0)(x − x0). (3.3)

Since we wish to find x so that f(x) = 0, set the left hand side (f(x)) of this approximation equal to 0 and solve for x to obtain:

x = x0 − f(x0)/f′(x0) (3.4)

We begin the method with the initial guess x0, which we hope is fairly close to x∗. Then we define a sequence of points {x0, x1, x2, x3, . . .} from the formula:

xi+1 = xi − f(xi)/f′(xi) (3.5)

which comes from (3.4). If f(x) is reasonably well-behaved near x∗ and x0 is close enough to x∗, then it is a fact that the sequence will converge to x∗ and
will do it very quickly.
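As a concrete illustration with f(x) = x³ − 5 (the test function used for the MATLAB program below) and an illustrative starting guess x0 = 2, formula (3.5) gives x1 = 2 − 3/12 = 1.75, x2 ≈ 1.7109, and x3 ≈ 1.70998, already very close to the true root x∗ = 5^(1/3) ≈ 1.709976.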

The loop: for ... end

In order to do Newton's method, we need to repeat the calculation in (3.5) a number of times. This is accomplished in a program using a loop, which
means a section of a program which is repeated. The simplest way to accomplish this is to count the number of times through. In MATLAB, a for ... end statement makes a loop, as in the following simple function program:
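A minimal sketch of such a function program (the exact listing may differ; the name mysum matches the call shown below):

function S = mysum(n)
% MYSUM  Sum of the integers from 1 to n, computed with a for ... end loop.
S = 0;            % running total starts at zero
for i = 1:n       % index i takes the values 1, 2, ..., n
    S = S + i;    % add the current integer to the total
end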

Call this function in the command window as: > mysum(100)

The result will be the sum of the first 100 integers. All for ... end loops have the same format: they begin with for, followed by an index (i) and a range of numbers (1:n); then come the commands that are to be repeated; last comes the end command.

Loops are one of the main ways that computers are made to do calculations
that humans cannot. Any calculation that involves a repeated process is
easily done by a loop. Now let's do a program that does n steps (iterations) of Newton's method [6]. We will need to input the function, its derivative, the initial guess, and the number of steps. The output will be the final value of x, i.e. xn. If we are only interested in the final approximation, not the
intermediate steps, which is usually the case in the real world, then we can
use a single variable x in the program and change it at each step:-
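A minimal sketch of such a program (the name mynewton matches the call used later in the text; the exact signature and listing are assumed):

function x = mynewton(f, f1, x0, n)
% MYNEWTON  Do n steps of Newton's method for f starting from x0.
%   f  - the function (inline function or function handle)
%   f1 - its derivative
%   x0 - initial guess
%   n  - number of iterations
x = x0;                  % a single variable x, changed at each step
for i = 1:n
    x = x - f(x)/f1(x);  % Newton update, formula (3.5)
end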

In the command window define an inline function: f(x) = x³ − 5

i.e.

> f = inline('x^3 - 5')

and define f1 to be its derivative, i.e.

> f1 = inline('3*x^2').

Then run mynewton on this function. By trial and error, what is the lowest value of n for which the program converges (stops changing)?
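With the assumed signature of the mynewton sketch above, and a starting guess of 2 chosen for illustration, such a trial would look like:

> x = mynewton(f, f1, 2, 5)

increasing the last argument until the returned x stops changing; it should settle near the true root 5^(1/3) ≈ 1.709976.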

Convergence

Newton’s method converges rapidly when f′(x∗) is nonzero and finite, and
x0 is close enough to x∗ that the linear approximation (3.3) is valid. Let us
take a look at what can go wrong.

References

[1] Ehiwario J.C. & Aghamie S.O., "Comparative Study of Bisection, Newton-Raphson of Root-Finding Problems", Volume 04, Issue 04, April 2014.

[2] Tan Tingting, Li Ying & Jiang Tong, "The Analysis of the Convergence of Newton-Raphson Method Based on the Current Injection in Distribution Network Case", Volume 5, Issue 03, June 2013.

[3] Changbum Chun, "Iterative method improving Newton's method by the decomposition method", March 2005.

[4] Cheong Tau Han, Lim Kquatiian Boon & Tay Kim Gaik, "Solving Non-Linear Equation by Newton-Raphson Method using Built-in Derivative Function in Casio fx-570ES Calculator", Universiti Teknologi Mara, Universiti Pendidikan Sultan Idris, Universiti Tun Hussein Onn Malaysia.

[5] Autar Kaw, "Newton Raphson Method for solving non-linear equations", Saylor.org, http://numericalmethods.eng.usf.edu/

[6] Edwin, "Newton Raphson method", http://www.algebra.com
