Source: faculty.smu.edu/shampine/mswtalk.pdf

Symbolic/Numerical Methods for ODEs

L.F. Shampine
Department of Mathematics
Southern Methodist University, Dallas, Texas 75275
lshampin@mail.smu.edu

This talk is based on work done with Rob Corless, University of Western Ontario.

Initial Value Problems for ODEs

RKF45 (Shampine & Watts) is a FORTRAN code widely used in GSC to solve IVPs. It is the foundation of the original default IVP solvers of the PSEs

Maple — rkf45

Matlab — ode45

Mathematica — NDSolve

Matlab ODE Suite (Shampine & Reichelt)

more capable

more efficient

easy to use, even for stiff IVPs

A Maple Project

I became interested in doing something similar in Maple for several reasons:

• Anomalous results had been reported that I realized were due to the design of the solvers.

• When solving stiff IVPs, analytical Jacobians are quite advantageous. Maple had an AD capability that appeared to offer an exciting possibility of combining symbolic and numerical methods.

• Solving IVPs would be faster in hardware floating point.

Overview

In the course of this talk we’ll consider briefly

how IVPs are solved numerically

what automatic differentiation is

the use of hardware floating point arithmetic

This project illustrates the development of mathematical software with special emphasis on combining symbolic and numerical computation.

It is a snapshot in the evolution of Maple — some of the issues that we’ll discuss have been resolved or greatly improved in Maple 9.

NODES — Numerical ODEs

From my experience with Matlab, I knew I needed a native guide to the Maple forest. That was one of the roles played by Rob Corless in our development of the NODES package.

The programs we developed were added to Maple 7 as the default solvers for non-stiff and stiff IVPs by Allan Wittkopf. He made some improvements, including a better way of handling output when integrating over “long” intervals.

Symbolic vs Numerical Solutions

Symbolic (Analytical) Solution

Solution provided as a function of x.

Can get more-or-less arbitrary accuracy.

Constraints of speed and complexity cause emphasis on a few equations. Convenient to code individually.

Numerical Solution

Solution provided for discrete values of x.

Accuracy constrained by hardware and the problem itself.

Fast. Can solve large and complex systems of equations. Convenient to code as a procedure.

Solving IVPs in Maple

A collection of numerical IVP solvers was available in dsolve[numeric]. Imitating an analytical solution, they all returned a numerical solution as a function.

Roughly speaking, the function is evaluated for given x by

Start with specified initial values x∗ = a, y∗ = y(a).

With y∗ as initial values at x∗, integrate numerically to x. Return the computed y ≈ y(x). Use x∗ = x, y∗ = y for the next computation.

This can be inefficient and result in anomalies because it doesn’t account for the way IVPs are solved numerically.

Numerical Methods for IVPs

Solve

y′(x) = f(x, y(x)), y(a) = y0,

by computing yn ≈ y(xn) at a = x0 < x1 < . . . < xN = b.

At xn, select step size hn. Take a step by computing xn+1 = xn + hn, yn+1 ≈ y(xn+1).

Taylor series expansion plus the ODE,

y(xn + hn) = y(xn) + hn y′(xn) + O(hn²)
            = y(xn) + hn f(xn, y(xn)) + O(hn²),

suggests Euler’s method yn+1 = yn + hn f(xn, yn)

A Closer Look at Euler’s Method

The local solution u(x) at xn is defined by

u′(x) = f(x, u(x)), u(xn) = yn

Euler’s method amounts to approximating u(x) on [xn, xn + hn] by a tangent line:

yn+1 = yn + hn f(xn, yn) = u(xn) + hn u′(xn) ≈ u(xn + hn)

We can use the tangent line to get values between mesh points — a continuous extension: for 0 ≤ σ ≤ 1,

u(xn + σhn) ≈ yn + σhn f(xn, yn)

y′ = cos(x)y, y(0) = 1

[Figure: Euler’s method applied to y′ = cos(x)y, y(0) = 1 on 0 ≤ x ≤ 6 (y-axis from −1 to 5), built up one tangent-line step at a time over successive slides.]

Integrate in Reverse Direction

[Figure: the same problem integrated from x = 6 back to x = 0, on the same axes.]

Error Propagation

The error y(xn) − yn grows or decays as the local solutions spread apart or come together — the stability of the IVP.

As a practical matter, we cannot numerically solve IVPs that are terribly unstable.

Stability depends on the direction of integration.

Stiff problems are

super stable in the direction of integration,

hence terribly unstable in the reverse direction.

Approach of dsolve[numeric]

The answer at a given x depends on previous x.

To get an answer at x, the solvers shorten some steps to make x a mesh point. Reducing step sizes for previous x affects the answer at the current x.

Because of the stability of the IVP, the answer can be quite different if you change direction.

Inefficient when changing direction, because the solver integrates more than once over the same interval.

Inefficient when the step size must often be reduced to get answers at specific x.

You cannot reverse direction when solving a stiff problem.

Approach of IVPsolve

Specify an interval [a, b], integrate from a to b.

Choose the step size hn

small enough for an accurate approximation

as big as possible for efficiency

Save the information needed to evaluate a continuous extension on [xn, xn+1].

Auxiliary procedure IVPval to evaluate the solution at any x in [a, b] — finds xn ≤ x ≤ xn+1 and evaluates the continuous extension.

Benefits of the Approach

Because of the design and continuous extensions,

Always get exactly the same answer for given x.

Never integrate in an unstable direction.

The computation can be much more efficient:

Use as big a step size as possible.

Never repeat an integration.

You can get as many answers as you want, anywhere you want, for “free” — the continuous extensions are polynomials.

Stiff Problems

For efficiency, we want hn as big as possible.

However, hn must be small enough that

the error at each step is less than a given tolerance

the computation is stable

If accuracy determines the step size, we say that the IVP is non-stiff. IVPsolve uses a (4,5) explicit RK method.

If stability restricts the step size severely, we say that the IVP is stiff. IVPsolve uses a (3,4) Rosenbrock method.
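The stability restriction can be seen on the classic test equation y′ = λy with λ < 0 (my example, not from the talk): explicit Euler multiplies the solution by (1 + hλ) each step, so it decays only when h < 2/|λ|, while backward Euler multiplies by 1/(1 − hλ) and is stable for any h > 0. A minimal Python sketch:

```python
def explicit_euler_step(y, h, lam):
    # y' = lam*y: amplification factor (1 + h*lam)
    return (1 + h * lam) * y

def implicit_euler_step(y, h, lam):
    # backward Euler: y_{n+1} = y_n + h*lam*y_{n+1}  =>  factor 1/(1 - h*lam)
    return y / (1 - h * lam)

lam, h = -100.0, 0.1          # h is far above the 2/|lam| = 0.02 limit
ye = yi = 1.0
for _ in range(50):
    ye = explicit_euler_step(ye, h, lam)   # |1 + h*lam| = 9: blows up
    yi = implicit_euler_step(yi, h, lam)   # |1/(1 - h*lam)| = 1/11: decays
```

The true solution e^{−100x} is essentially zero almost immediately; only the implicit step can track it with a large h, which is exactly why stiff solvers accept the extra cost per step.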

Rosenbrock Methods

Semi-implicit Euler method

yn+1 = yn + [I − hn ∂f/∂y(xn, yn)]⁻¹ hn f(xn, yn)

— same accuracy as Euler’s method, much more stable.

These semi-implicit methods are much more expensive per step than explicit methods, but for stiff problems you win because you can take much longer steps.

Like explicit RK, but have to solve linear systems at each step. Excellent stability and simple, but not popular in GSC because they require an analytical Jacobian.
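For a scalar ODE the bracket above is just a number, so the step can be sketched in Python directly (my names; for systems the bracket becomes the matrix I − hn J and each step solves a linear system instead of dividing):

```python
def semi_implicit_euler(f, dfdy, a, b, y0, n):
    """Scalar semi-implicit Euler:
    y_{n+1} = y_n + [1 - h*df/dy(x_n, y_n)]^(-1) * h*f(x_n, y_n).
    dfdy is the analytical Jacobian df/dy, supplied as a procedure."""
    h = (b - a) / n
    x, y = a, y0
    for _ in range(n):
        y = y + h * f(x, y) / (1 - h * dfdy(x, y))
        x = x + h
    return y
```

On the stiff test problem y′ = −100y this takes stable steps with h = 0.05, far beyond the explicit stability limit, at the cost of one Jacobian evaluation and one division (one linear solve, for systems) per step.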

Approach of IVPsolve

The ODEs are defined by a procedure for evaluating f(x, y). To solve a stiff IVP, just set stiff=true. The solver then forms a procedure internally for evaluating the Jacobian analytically.

We planned to use AD, but we ended up using

ys := array(1..neq);

fs := f(xs, ys);

DfDy := linalg[jacobian](fs, ys);

codegen[optimize](codegen[makeproc](DfDy, [xs, ys]));

Automatic Differentiation

AD is an efficient way to evaluate derivatives of a function along with evaluating the function. For example, suppose we have evaluated intermediate quantities [u(x), u′(x)] and [v(x), v′(x)] for a specific number x. Then

[u(x)v(x), d(u(x)v(x))/dx] = [u(x)v(x), u′(x)v(x) + u(x)v′(x)]

[cos(u(x)), d cos(u(x))/dx] = [cos(u(x)), −sin(u(x))u′(x)]
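The two bracketed rules above can be implemented directly with "dual numbers" that carry the pair [u, u′] through every operation. A minimal Python sketch (an illustration of the technique, not the implementation used in the project):

```python
import math

class Dual:
    """Forward-mode AD: propagate [u(x), u'(x)] through each operation."""
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.der + o.der)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        # product rule: (uv)' = u'v + uv'
        return Dual(self.val * o.val, self.der * o.val + self.val * o.der)
    __rmul__ = __mul__

def cos(u):
    # chain rule: (cos u)' = -sin(u) * u'
    return Dual(math.cos(u.val), -math.sin(u.val) * u.der)

# Evaluate f(x) = x*cos(x) and f'(x) = cos(x) - x*sin(x) in a single pass
x = Dual(2.0, 1.0)   # seed the derivative: dx/dx = 1
fx = x * cos(x)      # fx.val = f(2), fx.der = f'(2)
```

No formula for f′ is ever formed; the derivative value simply rides along with the function value, which is the point of contrast with diff on the next slide.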

AD vs diff

diff(f(x),x) produces a formula for the derivative that we can then evaluate for given x, whereas AD evaluates the derivative at the same time that it evaluates f(x).

AD is advantageous for complex functions of many components. Unfortunately, the implementation available did not provide for

f(x, y) defined by a procedure

f(x, y) as complex as we wanted

Hardware Floating Point Arithmetic

Numerical computations in Maple are usually done with software floating point arithmetic. The faster HFP can be used via evalhf. This was like building a ship in a bottle. There were a variety of difficulties:

• Some Maple functions could not be evaluated in HFP, so we had to recognize this using

check_hf := traperror(evalhf(f(x, y)))

and include SFP versions of the methods.

• We had to write procedures for the numerical solution of a system of linear equations. Although the capability was added to Maple as we were finishing up, we couldn’t use it because it wasn’t accessible inside evalhf.

• The hfarray data structure is used for HFP numbers. Its size couldn’t be changed inside evalhf. Because the number of mesh points is not known in advance, from time to time it is necessary to come out of evalhf, copy the solution to a bigger array, and reenter evalhf with the new array. This is expensive because of type conversion.

Plotting the Solution

• Sometimes it took much longer to plot the solution than it did to compute it! That was because some builtin functions did not accept hfarrays. The CURVE data structure increased the speed by as much as two orders of magnitude.

• Sometimes it took much longer to calculate the tick marks of a logarithmic plot than it did to compute the solution! Fixing the way this was done in a low-level plot routine increased the speed of such plots by more than another order of magnitude.

Another PSE

I’ve been talking about difficulties in developing some symbolic/numerical software in Maple, but I believe that they are representative. Indeed, I encountered the same kinds of difficulties in developing software for ODEs in Matlab. This recent work is described in the report

Using AD to Solve BVPs in Matlab, L.F. Shampine, Robert Ketzscher and Shaun Forth, AMOR 2003/4, Cranfield University.

AD was added to the BVP solver bvp4c in a way that is almost seamless.

bvp4cAD

bvp4c solves BVPs of the form

y′ = f(x, y, p), 0 = g(y(a), y(b), p)

BVP codes are more likely to converge, and converge faster, if provided the analytical partial derivatives

∂f/∂y, ∂g/∂y(a), ∂g/∂y(b), ∂f/∂p, ∂g/∂p

Users have trouble with the partial derivatives, so by default bvp4cAD evaluates them by AD. When AD cannot be used, the solver automatically resorts to finite differences, the default in bvp4c.
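A forward-difference Jacobian of the kind such solvers fall back on can be sketched as follows (an illustrative sketch of the standard technique, not the actual bvp4c code):

```python
def fd_jacobian(f, x, y, eps=1e-8):
    """Approximate df/dy column by column with forward differences.
    f maps (x, y-list) to a list of derivatives; returns J as nested lists."""
    f0 = f(x, y)
    n, m = len(f0), len(y)
    J = [[0.0] * m for _ in range(n)]
    for j in range(m):
        yj = list(y)
        h = eps * max(abs(y[j]), 1.0)   # scale the increment to the component
        yj[j] += h
        fj = f(x, yj)
        for i in range(n):
            J[i][j] = (fj[i] - f0[i]) / h
    return J
```

Each column costs one extra f evaluation and the result carries O(h) truncation error plus rounding, which is precisely why analytical or AD Jacobians help BVP codes converge.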

Conclusion

Combining symbolic and numerical methods is not as easy as you might expect:

• There are conceptual differences between analytical and numerical solution.

• Basic tools may not

exist at all

have the capabilities needed

interact properly
