Taylor Series Approximation

Source: http://slidepdf.com/reader/full/taylorseriesapproximation (uploaded 05-Apr-2018 by nikhilesh)

Page 1: Taylor Series Approximation

Taylor series, order of approximation, Newton's method.

Page 2: Orders of approximation

• In science, engineering, and other quantitative disciplines, orders of approximation refer to formal or informal terms for how precise an approximation is, and to indicate progressively more refined approximations: in increasing order of precision, a zeroth-order approximation, a first-order approximation, a second-order approximation, and so forth.

• Formally, an nth-order approximation is one where the order of magnitude of the error is at most n; in terms of big O notation, the error is bounded by O(x^n). In suitable circumstances, approximating a function by a Taylor polynomial of degree n yields an nth-order approximation, by Taylor's theorem: a first-order approximation is a linear approximation, and so forth.

Page 3: Usage in science and engineering

• Zeroth-order approximation (also 0th order) is the term scientists use for a first educated guess at an answer. Many simplifying assumptions are made, and when a number is needed, an order-of-magnitude answer (or zero significant figures) is often given. For example, you might say "the town has a few thousand residents", when it has 3,914 people in actuality. This is also sometimes referred to as an order-of-magnitude approximation.

• A zeroth-order approximation of a function (that is, mathematically determining a formula to fit multiple data points) will be constant, or a flat line with no slope: a polynomial of degree 0.

• First-order approximation (also 1st order) is the term scientists use for a further educated guess at an answer. Some simplifying assumptions are made, and when a number is needed, an answer with only one significant figure is often given ("the town has 4,000 residents").

• A first-order approximation of a function (that is, mathematically determining a formula to fit multiple data points) will be a linear approximation: a straight line with a slope, a polynomial of degree 1.

Page 4: TaylorSeriesApproximation


• Second-order approximation (also 2nd order) is the term scientists use for a decent-quality answer. Few simplifying assumptions are made, and when a number is needed, an answer with two or more significant figures ("the town has 3,900 residents") is generally given.

• A second-order approximation of a function (that is, mathematically determining a formula to fit multiple data points) will be a quadratic polynomial, geometrically a parabola: a polynomial of degree 2.

• While higher-order approximations exist and are crucial to a better understanding and description of reality, they are not typically referred to by number.

• A third-order approximation would be required to fit four data points, and so on.

• These terms are also used colloquially by scientists and engineers to describe phenomena that can be neglected as not significant (e.g., "Of course the rotation of the earth affects our experiment, but it's such a high-order effect that we wouldn't be able to measure it" or "At these velocities, relativity is a fourth-order effect that we only worry about at the annual calibration"). In this usage, the ordinality of the approximation is not exact, but is used to emphasize its insignificance; the higher the number used, the less important the effect.

Page 5: Taylor series

• As the degree of the Taylor polynomial rises, it approaches the correct function. This image shows sin x (in black) and Taylor approximations: polynomials of degree 1, 3, 5, 7, 9, 11 and 13.

• The exponential function (in blue), and the sum of the first n+1 terms of its Taylor series at 0 (in red).

Page 6: Definition

• In mathematics, the Taylor series is a representation of a function as an infinite sum of terms calculated from the values of its derivatives at a single point. It may be regarded as the limit of the Taylor polynomials. Taylor series are named after the English mathematician Brook Taylor. If the series is centered at zero, the series is also called a Maclaurin series, named after the Scottish mathematician Colin Maclaurin.

• The Taylor series of a real or complex function f(x) that is infinitely differentiable in a neighbourhood of a real or complex number a is the power series

  f(a) + (f'(a)/1!)(x − a) + (f''(a)/2!)(x − a)^2 + (f'''(a)/3!)(x − a)^3 + ···

• which in a more compact form can be written as

  sum_{n=0}^{∞} f^(n)(a)/n! · (x − a)^n

• where n! denotes the factorial of n and f^(n)(a) denotes the nth derivative of f evaluated at the point a; the zeroth derivative of f is defined to be f itself, and (x − a)^0 and 0! are both defined to be 1.

• In the particular case where a = 0, the series is also called a Maclaurin series.
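The defining sum above can be evaluated directly once the derivatives at a are known. A minimal sketch (the helper `taylor_poly` and the example values are illustrative choices, not from the slides):

```python
import math

def taylor_poly(derivs, a, x):
    """Evaluate the Taylor polynomial sum_n f^(n)(a)/n! * (x - a)^n,
    where derivs[n] is the nth derivative of f evaluated at a."""
    return sum(d / math.factorial(n) * (x - a) ** n for n, d in enumerate(derivs))

# For f(x) = e^x at a = 0, every derivative equals e^0 = 1.
derivs = [1.0] * 10          # f(0), f'(0), ..., f^(9)(0)
approx = taylor_poly(derivs, a=0.0, x=1.0)
print(approx, math.e)        # the degree-9 polynomial is already close to e
```

With ten terms the partial sum agrees with e to better than five decimal places, illustrating how quickly the series converges for the exponential function.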

Page 7: TaylorSeriesApproximation


• The Maclaurin series for any polynomial is the polynomial itself.

• The Maclaurin series for (1 − x)^−1 is the geometric series

  1 + x + x^2 + x^3 + ···

• so the Taylor series for x^−1 at a = 1 is

  1 − (x − 1) + (x − 1)^2 − (x − 1)^3 + ···

• By integrating the above Maclaurin series we find the Maclaurin series for −log(1 − x), where log denotes the natural logarithm:

  x + x^2/2 + x^3/3 + x^4/4 + ···

• and the corresponding Taylor series for log(x) at a = 1 is

  (x − 1) − (x − 1)^2/2 + (x − 1)^3/3 − ···

• The Taylor series for the exponential function e^x at a = 0 is

  1 + x/1! + x^2/2! + x^3/3! + ··· = sum_{n=0}^{∞} x^n/n!

• The above expansion holds because the derivative of e^x is also e^x and e^0 equals 1. This leaves the terms (x − 0)^n in the numerator and n! in the denominator for each term in the infinite sum.

Page 8: Convergence

• The Taylor polynomials for log(1 + x) only provide accurate approximations in the range −1 < x ≤ 1. Note that, for x > 1, the Taylor polynomials of higher degree are worse approximations.

• Taylor series need not in general be convergent. More precisely, the set of functions with a convergent Taylor series is a meager set in the Fréchet space of smooth functions. In spite of this, for many functions that arise in practice, the Taylor series does converge.

Page 9: Entire functions

• The limit of a convergent Taylor series of a function f need not in general be equal to the function value f(x), but in practice often it is. For example, the function

  f(x) = e^(−1/x^2) for x ≠ 0, with f(0) = 0,

is infinitely differentiable at x = 0, and has all derivatives zero there. Consequently, the Taylor series of f(x) is zero. However, f(x) is not equal to the zero function, and so it is not equal to its Taylor series.

• If f(x) is equal to its Taylor series in a neighborhood of a, it is said to be analytic in this neighborhood. If f(x) is equal to its Taylor series everywhere it is called entire. The exponential function e^x and the trigonometric functions sine and cosine are examples of entire functions. Examples of functions that are not entire include the logarithm, the trigonometric function tangent, and its inverse arctan. For these functions the Taylor series do not converge if x is far from a.

Page 10: TaylorSeriesApproximation


• Taylor series can be used to calculate the value of an entire function at every point, if the value of the function, and of all of its derivatives, are known at a single point. Uses of the Taylor series for entire functions include:

• The partial sums (the Taylor polynomials) of the series can be used as approximations of the entire function. These approximations are good if sufficiently many terms are included.

• The series representation simplifies many mathematical proofs.

• Pictured on the right is an accurate approximation of sin(x) around the point a = 0. The pink curve is a polynomial of degree seven:

  sin(x) ≈ x − x^3/3! + x^5/5! − x^7/7!

• The error in this approximation is no more than |x|^9/9!. In particular, for −1 < x < 1, the error is less than 0.000003.

• In contrast, also shown is a picture of the natural logarithm function log(1 + x) and some of its Taylor polynomials around a = 0. These approximations converge to the function only in the region −1 < x ≤ 1; outside of this region the higher-degree Taylor polynomials are worse approximations for the function. This is similar to Runge's phenomenon.

• The error incurred in approximating a function by its nth-degree Taylor polynomial is called the remainder or residual and is denoted by the function Rn(x). Taylor's theorem can be used to obtain a bound on the size of the remainder.
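The |x|^9/9! remainder bound for the degree-7 sine polynomial quoted above can be checked numerically; the helper name `sin_taylor7` is an illustrative choice:

```python
import math

def sin_taylor7(x):
    # Degree-7 Taylor polynomial of sin around 0: x - x^3/3! + x^5/5! - x^7/7!
    return x - x**3 / 6 + x**5 / 120 - x**7 / 5040

# Check the remainder bound |sin x - P7(x)| <= |x|^9 / 9! on a grid in (-1, 1).
worst = max(abs(math.sin(k / 100) - sin_taylor7(k / 100)) for k in range(-99, 100))
print(worst)  # stays below 1/9! ≈ 0.00000276, matching the 0.000003 claim
```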

Page 11: History

• The Greek philosopher Zeno considered the problem of summing an infinite series to achieve a finite result, but rejected it as an impossibility: the result was Zeno's paradox. Later, Aristotle proposed a philosophical resolution of the paradox, but the mathematical content was apparently unresolved until taken up by Democritus and then Archimedes. It was through Archimedes's method of exhaustion that an infinite number of progressive subdivisions could be performed to achieve a finite result. Liu Hui independently employed a similar method a few centuries later.

• In the 14th century, the earliest examples of the use of Taylor series and closely related methods were given by Madhava of Sangamagrama. Though no record of his work survives, writings of later Indian mathematicians suggest that he found a number of special cases of the Taylor series, including those for the trigonometric functions of sine, cosine, tangent, and arctangent. The Kerala school of astronomy and mathematics further expanded his works with various series expansions and rational approximations until the 16th century.

• In the 17th century, James Gregory also worked in this area and published several Maclaurin series. It was not until 1715, however, that a general method for constructing these series for all functions for which they exist was finally provided by Brook Taylor, after whom the series are now named.

• The Maclaurin series was named after Colin Maclaurin, a professor in Edinburgh, who published the special case of the Taylor result in the 18th century.

Page 12: Properties

• The function e^(−1/x^2) is not analytic at x = 0: the Taylor series is identically 0, although the function is not.

• If the Taylor series of f at a converges for every x in the interval (a − r, a + r) and the sum is equal to f(x), then the function f(x) is said to be analytic in the interval (a − r, a + r). If this is true for any r then the function is said to be an entire function. To check whether the series converges towards f(x), one normally uses estimates for the remainder term of Taylor's theorem. A function is analytic if and only if it can be represented as a power series; the coefficients in that power series are then necessarily the ones given in the above Taylor series formula.

Page 13: Why Taylor series is handy

• The importance of such a power series representation is at least fourfold. First, differentiation and integration of power series can be performed term by term and is hence particularly easy. Second, an analytic function can be uniquely extended to a holomorphic function defined on an open disk in the complex plane, which makes the whole machinery of complex analysis available. Third, the (truncated) series can be used to compute function values approximately (often by recasting the polynomial into the Chebyshev form and evaluating it with the Clenshaw algorithm).

• Fourth, algebraic operations can often be done much more readily on the power series representation; for instance, the simplest proof of Euler's formula uses the Taylor series expansions for the sine, cosine, and exponential functions. This result is of fundamental importance in such fields as harmonic analysis.

• Another reason why the Taylor series is the natural power series for studying a function f is that, given the value of f and its derivatives at a point a, the Taylor series is in some sense the most likely function that fits the given data.

Page 14: TaylorSeriesApproximation


List of Maclaurin series of some common functions

• Several important Maclaurin series expansions follow. All these expansions are valid for complex arguments x.

• Exponential function (for all x):

  e^x = sum_{n=0}^{∞} x^n/n! = 1 + x + x^2/2! + x^3/3! + ···

• Natural logarithm (for |x| < 1):

  −log(1 − x) = sum_{n=1}^{∞} x^n/n

• Finite geometric series (for x ≠ 1):

  (1 − x^(m+1))/(1 − x) = sum_{n=0}^{m} x^n

• Infinite geometric series (for |x| < 1):

  1/(1 − x) = sum_{n=0}^{∞} x^n

Page 15: More Maclaurin series

• Variants of the infinite geometric series (for |x| < 1):

  x^m/(1 − x) = sum_{n=m}^{∞} x^n

  1/(1 − x)^2 = sum_{n=1}^{∞} n x^(n−1)

• Square root (for |x| ≤ 1):

  sqrt(1 + x) = 1 + x/2 − x^2/8 + x^3/16 − 5x^4/128 + ···

• Binomial series (includes the square root for α = 1/2 and the infinite geometric series for α = −1):

  (1 + x)^α = sum_{n=0}^{∞} C(α, n) x^n

• with generalized binomial coefficients

  C(α, n) = α(α − 1)(α − 2)···(α − n + 1)/n!

Page 16: TaylorSeriesApproximation


• Trigonometric functions:

  sin x = sum_{n=0}^{∞} (−1)^n x^(2n+1)/(2n+1)! = x − x^3/3! + x^5/5! − ···  (for all x)

  cos x = sum_{n=0}^{∞} (−1)^n x^(2n)/(2n)! = 1 − x^2/2! + x^4/4! − ···  (for all x)

  tan x = sum_{n=1}^{∞} B_(2n) (−4)^n (1 − 4^n)/(2n)! · x^(2n−1) = x + x^3/3 + 2x^5/15 + ···  (for |x| < π/2)

• where the Bs are Bernoulli numbers.

Page 17: Compare with trigonometric functions

• Hyperbolic functions:

  sinh x = sum_{n=0}^{∞} x^(2n+1)/(2n+1)! = x + x^3/3! + x^5/5! + ···  (for all x)

  cosh x = sum_{n=0}^{∞} x^(2n)/(2n)! = 1 + x^2/2! + x^4/4! + ···  (for all x)

  tanh x = x − x^3/3 + 2x^5/15 − ···  (for |x| < π/2)

Page 18: Calculation of Taylor series

• Several methods exist for the calculation of Taylor series of a large number of functions.

• One can attempt to use the Taylor series as-is and generalize the form of the coefficients, or one can use manipulations such as substitution, multiplication or division, and addition or subtraction of standard Taylor series to construct the Taylor series of a function, by virtue of Taylor series being power series.

• In some cases, one can also derive the Taylor series by repeatedly applying integration by parts.

• Particularly convenient is the use of computer algebra systems to calculate Taylor series.
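The "multiplication of standard Taylor series" manipulation mentioned above amounts to convolving the coefficient lists of truncated series. A minimal sketch (the helper `series_mul` and the e^x · cos x example are illustrative choices, not from the slides):

```python
from fractions import Fraction
from math import factorial

def series_mul(a, b, order):
    """Multiply two truncated power series given as coefficient lists,
    discarding every term of degree >= order."""
    c = [Fraction(0)] * order
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            if i + j < order:
                c[i + j] += ai * bj
    return c

order = 5
exp_c = [Fraction(1, factorial(n)) for n in range(order)]                  # e^x
cos_c = [Fraction((-1) ** (n // 2), factorial(n)) if n % 2 == 0 else Fraction(0)
         for n in range(order)]                                            # cos x
print(series_mul(exp_c, cos_c, order))  # coefficients of e^x * cos x: 1, 1, 0, -1/3, -1/6
```

Using exact `Fraction` coefficients avoids floating-point round-off, which is the same reason computer algebra systems work symbolically.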

Page 19: First example

• Compute the 7th degree Maclaurin polynomial for the function

  f(x) = log(cos x),  x ∈ (−π/2, π/2).

• First, rewrite the function as

  f(x) = log(1 + (cos x − 1)).

• We have for the natural logarithm (by using the big O notation)

  log(1 + u) = u − u^2/2 + u^3/3 + O(u^4)

• and for the cosine function

  cos x − 1 = −x^2/2! + x^4/4! − x^6/6! + O(x^8).

Page 20: How to calculate

• The latter series expansion has a zero constant term, which enables us to substitute the second series into the first one and to easily omit terms of higher order than the 7th degree by using the big O notation:

  log(cos x) = (cos x − 1) − (cos x − 1)^2/2 + (cos x − 1)^3/3 + O(x^8)
             = −x^2/2 − x^4/12 − x^6/45 + O(x^8).

• Since the cosine is an even function, the coefficients for all the odd powers x, x^3, x^5, x^7, ... have to be zero.
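Assuming the function in this example is log(cos x), as the logarithm and cosine series here suggest, its 7th-degree Maclaurin polynomial works out to −x^2/2 − x^4/12 − x^6/45 and can be spot-checked numerically (`logcos_taylor` is an illustrative name):

```python
import math

def logcos_taylor(x):
    # Candidate 7th-degree (in fact degree-6) Maclaurin polynomial of log(cos x)
    return -x**2 / 2 - x**4 / 12 - x**6 / 45

x = 0.1
err = abs(math.log(math.cos(x)) - logcos_taylor(x))
print(err)  # an O(x^8) discrepancy: tiny at x = 0.1
```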

Page 21: Second example

• Suppose we want the Taylor series at 0 of the function

  g(x) = e^x / cos x.

• We have for the exponential function

  e^x = 1 + x + x^2/2! + x^3/3! + x^4/4! + ···

• and, as in the first example,

  cos x = 1 − x^2/2! + x^4/4! − ···

• Assume the power series is

  e^x / cos x = c0 + c1 x + c2 x^2 + c3 x^3 + ···

Page 22: TaylorSeriesApproximation


• Then multiplication with the denominator and substitution of the series of the cosine yields

  e^x = (c0 + c1 x + c2 x^2 + c3 x^3 + ···)(1 − x^2/2! + x^4/4! − ···)

• Collecting the terms up to fourth order yields

  e^x = c0 + c1 x + (c2 − c0/2) x^2 + (c3 − c1/2) x^3 + (c4 − c2/2 + c0/24) x^4 + ···

• Comparing coefficients with the above series of the exponential function yields the desired Taylor series

  e^x / cos x = 1 + x + x^2 + (2/3) x^3 + (1/2) x^4 + ···
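Assuming the function in this example is e^x / cos x, as the exponential and cosine setup suggests, the resulting polynomial 1 + x + x^2 + 2x^3/3 + x^4/2 can be checked numerically (`expcos_taylor` is an illustrative name):

```python
import math

def expcos_taylor(x):
    # Candidate 4th-order Maclaurin polynomial of e^x / cos(x)
    return 1 + x + x**2 + 2 * x**3 / 3 + x**4 / 2

x = 0.1
err = abs(math.exp(x) / math.cos(x) - expcos_taylor(x))
print(err)  # an O(x^5) discrepancy: small at x = 0.1
```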

Page 23: Taylor series as definitions

• Classically, algebraic functions are defined by an algebraic equation, and transcendental functions (including those discussed above) are defined by some property that holds for them, such as a differential equation. For example, the exponential function is the function which is equal to its own derivative everywhere, and assumes the value 1 at the origin. However, one may equally well define an analytic function by its Taylor series.

• Taylor series are used to define functions and "operators" in diverse areas of mathematics. In particular, this is true in areas where the classical definitions of functions break down. For example, using Taylor series, one may define analytic functions of matrices and operators, such as the matrix exponential or matrix logarithm.

• In other areas, such as formal analysis, it is more convenient to work directly with the power series themselves. Thus one may define a solution of a differential equation as a power series which, one hopes to prove, is the Taylor series of the desired solution.

Page 24: Taylor series in several variables

• The Taylor series may also be generalized to functions of more than one variable with

  T(x_1, ..., x_d) = sum over n_1 ≥ 0, ..., n_d ≥ 0 of
    [(x_1 − a_1)^(n_1) ··· (x_d − a_d)^(n_d)] / (n_1! ··· n_d!) · (∂^(n_1+···+n_d) f / ∂x_1^(n_1) ··· ∂x_d^(n_d))(a_1, ..., a_d)

• For example, for a function that depends on two variables, x and y, the Taylor series to second order about the point (a, b) is:

  f(a, b) + f_x(a, b)(x − a) + f_y(a, b)(y − b)
    + (1/2!)[ f_xx(a, b)(x − a)^2 + 2 f_xy(a, b)(x − a)(y − b) + f_yy(a, b)(y − b)^2 ]

• where the subscripts denote the respective partial derivatives.

Page 25: Second order Taylor series

• A second-order Taylor series expansion of a scalar-valued function of more than one variable can be compactly written as

  f(x) ≈ f(a) + ∇f(a)^T (x − a) + (1/2)(x − a)^T H(a) (x − a)

• where ∇f(a) is the gradient of f evaluated at x = a and H(a) is the Hessian matrix. Applying the multi-index notation, the Taylor series for several variables becomes

  f(x) = sum over multi-indices α of (x − a)^α / α! · (∂^α f)(a)

• which is to be understood as a still more abbreviated multi-index version of the first equation of this paragraph, again in full analogy to the single variable case.
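The second-order expansion can be tried numerically on a concrete function; the choice f(x, y) = sin(x)·e^y and the hand-computed partials below are illustrative assumptions, not from the slides:

```python
import math

def f(x, y):
    return math.sin(x) * math.exp(y)

# At (a, b) = (0, 0): f = 0, f_x = 1, f_y = 0, f_xx = 0, f_xy = 1, f_yy = 0.
def f_quad(x, y):
    # Second-order Taylor expansion about the origin using those partials.
    return 0 + 1 * x + 0 * y + 0.5 * (0 * x * x + 2 * 1 * x * y + 0 * y * y)

dx, dy = 0.05, 0.05
err = abs(f(dx, dy) - f_quad(dx, dy))
print(err)  # third-order small near the expansion point
```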

Page 26: Newton's method

• In numerical analysis, Newton's method (also known as the Newton-Raphson method), named after Isaac Newton and Joseph Raphson, is perhaps the best known method for finding successively better approximations to the zeroes (or roots) of a real-valued function. Newton's method can often converge remarkably quickly, especially if the iteration begins "sufficiently near" the desired root. Just how near "sufficiently near" needs to be, and just how quickly "remarkably quickly" can be, depends on the problem. This is discussed in detail below. Unfortunately, when iteration begins far from the desired root, Newton's method can easily lead an unwary user astray with little warning. Thus, good implementations of the method embed it in a routine that also detects and perhaps overcomes possible convergence failures.

• Given a function f(x) and its derivative f'(x), we begin with a first guess x_0. A better approximation x_1 is

  x_1 = x_0 − f(x_0)/f'(x_0)

• An important and somewhat surprising application is Newton-Raphson division, which can be used to quickly find the reciprocal of a number using only multiplication and subtraction.
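Newton-Raphson division applies Newton's method to f(x) = 1/x − a, giving the update x ← x(2 − ax), which indeed uses only multiplication and subtraction. A minimal sketch (`reciprocal` and the starting values are illustrative choices):

```python
def reciprocal(a, x0, steps=6):
    """Newton-Raphson division: iterate x <- x*(2 - a*x) to approach 1/a.
    Converges when the initial guess satisfies 0 < x0 < 2/a."""
    x = x0
    for _ in range(steps):
        x = x * (2 - a * x)
    return x

print(reciprocal(7.0, 0.1))  # converges to 1/7 = 0.142857...
```

The relative error squares on every step, so six iterations from a rough guess already reach machine precision.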

Page 27: TaylorSeriesApproximation


• Note that there are examples of infinitely differentiable functions f(x) whose Taylor series converge, but are not equal to f(x). For instance, the function defined pointwise by f(x) = e^(−1/x^2) if x ≠ 0 and f(0) = 0 is an example of a non-analytic smooth function. All its derivatives at x = 0 are zero, so the Taylor series of f(x) at 0 is zero everywhere, even though the function is nonzero for every x ≠ 0. This particular pathology does not afflict Taylor series in complex analysis. There, the area of convergence of a Taylor series is always a disk in the complex plane (possibly with radius 0), and where the Taylor series converges, it converges to the function value. Notice that e^(−1/z^2) does not approach 0 as z approaches 0 along the imaginary axis, hence this function is not continuous as a function on the complex plane.

• Since every sequence of real or complex numbers can appear as coefficients in the Taylor series of an infinitely differentiable function defined on the real line, the radius of convergence of a Taylor series can be zero. There are even infinitely differentiable functions defined on the real line whose Taylor series have a radius of convergence 0 everywhere.

• Some functions cannot be written as Taylor series because they have a singularity; in these cases, one can often still achieve a series expansion if one also allows negative powers of the variable x; see Laurent series. For example, f(x) = e^(−1/x^2) can be written as a Laurent series.

Page 28: Geometric picture

• An illustration of one iteration of Newton's method (the function f is shown in blue and the tangent line is in red). We see that x_(n+1) is a better approximation than x_n for the root x of the function f.

• The idea of the method is as follows: one starts with an initial guess which is reasonably close to the true root, then the function is approximated by its tangent line (which can be computed using the tools of calculus), and one computes the x-intercept of this tangent line (which is easily done with elementary algebra). This x-intercept will typically be a better approximation to the function's root than the original guess, and the method can be iterated.

Page 29: TaylorSeriesApproximation

• Suppose f : [a, b] → R is a differentiable function defined on the interval [a, b] with values in the real numbers R. The formula for converging on the root can be easily derived. Suppose we have some current approximation x_n. Then we can derive the formula for a better approximation, x_(n+1), by referring to the diagram on the right. We know from the definition of the derivative at a given point that it is the slope of a tangent at that point.

• That is

  f'(x_n) = (f(x_n) − 0) / (x_n − x_(n+1))

• Here, f' denotes the derivative of the function f. Then by simple algebra we can derive

  x_(n+1) = x_n − f(x_n)/f'(x_n)

• We start the process off with some arbitrary initial value x_0. (The closer to the zero, the better. But, in the absence of any intuition about where the zero might lie, a "guess and check" method might narrow the possibilities to a reasonably small interval by appealing to the intermediate value theorem.) The method will usually converge, provided this initial guess is close enough to the unknown zero, and that f'(x_0) ≠ 0. Furthermore, for a zero of multiplicity 1, the convergence is at least quadratic (see rate of convergence) in a neighbourhood of the zero, which intuitively means that the number of correct digits roughly at least doubles in every step.
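The update rule derived above can be sketched directly; the helper `newton` and the x^2 − 2 example are illustrative choices, not from the slides:

```python
def newton(f, fprime, x0, tol=1e-12, max_iter=50):
    """Newton's method: repeat x <- x - f(x)/f'(x) until |f(x)| is small."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        x = x - fx / fprime(x)
    return x

# Root of f(x) = x^2 - 2 starting from x0 = 1: the square root of 2.
root = newton(lambda x: x * x - 2, lambda x: 2 * x, 1.0)
print(root)  # 1.4142135623730951
```

Note the quadratic convergence the text describes: the number of correct digits roughly doubles on each of the handful of iterations needed.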

Page 30: Application to minimization and maximization problems

• Newton's method can also be used to find a minimum or maximum of a function. The derivative is zero at a minimum or maximum, so minima and maxima can be found by applying Newton's method to the derivative. The iteration becomes:

  x_(n+1) = x_n − f'(x_n)/f''(x_n)
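A minimal sketch of this iteration, applied to the illustrative function f(x) = x^4 − 4x^3, whose derivative vanishes at the minimizer x = 3 (the function and the names are assumptions, not from the slides):

```python
def newton_minimize(fprime, fsecond, x0, steps=20):
    """Apply Newton's method to f' to locate a stationary point:
    x <- x - f'(x)/f''(x)."""
    x = x0
    for _ in range(steps):
        x = x - fprime(x) / fsecond(x)
    return x

# f(x) = x^4 - 4x^3, so f'(x) = 4x^3 - 12x^2 and f''(x) = 12x^2 - 24x.
xmin = newton_minimize(lambda x: 4 * x**3 - 12 * x**2,
                       lambda x: 12 * x**2 - 24 * x, x0=4.0)
print(xmin)  # approaches the minimizer x = 3
```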

Page 31: History

• Newton's method was described by Isaac Newton in De analysi per aequationes numero terminorum infinitas (written in 1669, published in 1711 by William Jones) and in De metodis fluxionum et serierum infinitarum (written in 1671, translated and published as Method of Fluxions in 1736 by John Colson).

• However, his description differs substantially from the modern description given above: Newton applies the method only to polynomials.

• He does not compute the successive approximations x_n, but computes a sequence of polynomials, and only at the end does he arrive at an approximation for the root x.

• Finally, Newton views the method as purely algebraic and fails to notice the connection with calculus. Isaac Newton probably derived his method from a similar but less precise method by Vieta.

• The essence of Vieta's method can be found in the work of the Persian mathematician Sharaf al-Din al-Tusi, while his successor Jamshīd al-Kāshī used a form of Newton's method to solve x^p − N = 0 to find roots of N (Ypma 1995).

• A special case of Newton's method for calculating square roots was known much earlier and is often called the Babylonian method.

Page 32: TaylorSeriesApproximation


• Newton's method was used by the 17th-century Japanese mathematician Seki Kōwa to solve single-variable equations, though the connection with calculus was missing.

• Newton's method was first published in 1685 in A Treatise of Algebra both Historical and Practical by John Wallis. In 1690, Joseph Raphson published a simplified description in Analysis aequationum universalis. Raphson again viewed Newton's method purely as an algebraic method and restricted its use to polynomials, but he describes the method in terms of the successive approximations x_n instead of the more complicated sequence of polynomials used by Newton.

• Finally, in 1740, Thomas Simpson described Newton's method as an iterative method for solving general nonlinear equations using fluxional calculus, essentially giving the description above. In the same publication, Simpson also gives the generalization to systems of two equations and notes that Newton's method can be used for solving optimization problems by setting the gradient to zero.

• Arthur Cayley in 1879, in The Newton-Fourier imaginary problem, was the first who noticed the difficulties in generalizing Newton's method to complex roots of polynomials with degree greater than 2 and complex initial values. This opened the way to the study of the theory of iterations of rational functions.

Page 33: TaylorSeriesApproximation


• Newton's method is an extremely powerful technique -- in general the convergence is quadratic: the error is essentially squared at each step (that is, the number of accurate digits doubles in each step). However, there are some difficulties with the method.

• Newton's method requires that the derivative be calculated directly. In most practical problems, the function in question may be given by a long and complicated formula, and hence an analytical expression for the derivative may not be easily obtainable. In these situations, it may be appropriate to approximate the derivative by using the slope of a line through two points on the function. In this case, the secant method results. This has slightly slower convergence than Newton's method but does not require the existence of derivatives.

• If the initial value is too far from the true zero, Newton's method may fail to converge. For this reason, Newton's method is often referred to as a local technique. Most practical implementations of Newton's method put an upper limit on the number of iterations and perhaps on the size of the iterates.

• If the derivative of the function is not continuous, the method may fail to converge.

• It is clear from the formula for Newton's method that it will fail in cases where the derivative is zero. Similarly, when the derivative is close to zero, the tangent line is nearly horizontal and hence may "shoot" wildly past the desired root.

• If the root being sought has multiplicity greater than one, the convergence rate is merely linear (errors reduced by a constant factor at each step) unless special steps are taken. When there are two or more roots that are close together, it may take many iterations before the iterates get close enough to one of them for the quadratic convergence to be apparent.
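The secant method mentioned above replaces f'(x) in Newton's step with the slope of the line through the two most recent iterates. A minimal sketch (the helper `secant` and the cos(x) − x example are illustrative choices, not from the slides):

```python
import math

def secant(f, x0, x1, tol=1e-12, max_iter=50):
    """Secant method: Newton's step with f'(x) replaced by the slope
    of the line through the two most recent iterates."""
    for _ in range(max_iter):
        fx0, fx1 = f(x0), f(x1)
        if abs(fx1) < tol:
            break
        x0, x1 = x1, x1 - fx1 * (x1 - x0) / (fx1 - fx0)
    return x1

# Root of cos(x) - x, found without ever evaluating a derivative.
root = secant(lambda x: math.cos(x) - x, 0.0, 1.0)
print(root)  # about 0.739085
```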