# Chapter 7. Ordinary Differential Equations

Matlab has several different functions for the numerical solution of ordinary differential equations. This chapter describes the simplest of these functions and then compares all of the functions for efficiency, accuracy, and special features. Stiffness is a subtle concept that plays an important role in these comparisons.

**7.1 Integrating Differential Equations**

The *initial value problem* for an ordinary differential equation involves finding a function *y*(*t*) that satisfies

$$\frac{dy(t)}{dt} = f(t, y(t))$$

together with the initial condition

$$y(t_0) = y_0.$$

A numerical solution to this problem generates a sequence of values for the independent variable, $t_0, t_1, \ldots$, and a corresponding sequence of values for the dependent variable, $y_0, y_1, \ldots$, so that each $y_n$ approximates the solution at $t_n$:

$$y_n \approx y(t_n), \quad n = 0, 1, \ldots.$$

Modern numerical methods automatically determine the step sizes

$$h_n = t_{n+1} - t_n$$

so that the estimated error in the numerical solution is controlled by a specified tolerance.

The fundamental theorem of calculus gives us an important connection between differential equations and integrals:

$$y(t + h) = y(t) + \int_t^{t+h} f(s, y(s))\, ds.$$

September 17, 2013


We cannot use numerical quadrature directly to approximate the integral because we do not know the function *y*(*s*) and so cannot evaluate the integrand. Nevertheless, the basic idea is to choose a sequence of values of *h *so that this formula allows us to generate our numerical solution.

One special case to keep in mind is the situation where *f*(*t*, *y*) is a function of *t* alone. The numerical solution of such simple differential equations is then just a sequence of quadratures:

$$y_{n+1} = y_n + \int_{t_n}^{t_{n+1}} f(s)\, ds.$$
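To make this concrete, here is a small Python sketch (our own illustration, not from the text) for the hypothetical case *f*(*t*) = 2*t* with *y*(0) = 0: because the integrand does not depend on *y*, each step of the recurrence is an ordinary quadrature, and since the trapezoid rule is exact for linear integrands, the computed solution reproduces *y*(*t*) = *t*² up to roundoff.

```python
# When f depends on t alone, advancing the ODE is just quadrature:
# y_{n+1} = y_n + integral of f over [t_n, t_{n+1}].
# Here f(t) = 2t, so the trapezoid rule is exact on each panel and
# the "numerical solution" reproduces y(t) = t^2 up to roundoff.

def f(t):
    return 2.0 * t

def integrate_f_only(f, t0, tfinal, n):
    """Advance y' = f(t), y(t0) = 0 with one trapezoid panel per step."""
    h = (tfinal - t0) / n
    t, y = t0, 0.0
    for _ in range(n):
        y += h * (f(t) + f(t + h)) / 2.0   # exact for linear f
        t += h
    return y

print(integrate_f_only(f, 0.0, 3.0, 10))   # y(3) = 3^2 = 9
```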

Throughout this chapter, we frequently use "dot" notation for derivatives:

$$\dot{y} = \frac{dy(t)}{dt}, \qquad \ddot{y} = \frac{d^2 y(t)}{dt^2}.$$

**7.2 Systems of Equations**

Many mathematical models involve more than one unknown function, and second- and higher-order derivatives. These models can be handled by making *y*(*t*) a vector-valued function of *t*. Each component is either one of the unknown functions or one of its derivatives. The Matlab vector notation is particularly convenient here.

For example, the second-order differential equation describing a simple harmonic oscillator,

$$\ddot{x}(t) = -x(t),$$

becomes two first-order equations. The vector *y*(*t*) has two components, *x*(*t*) and $\dot{x}(t)$:

$$y(t) = \begin{bmatrix} x(t) \\ \dot{x}(t) \end{bmatrix}.$$

Using this vector, the differential equation is

$$\dot{y}(t) = \begin{bmatrix} \dot{x}(t) \\ \ddot{x}(t) \end{bmatrix} = \begin{bmatrix} y_2(t) \\ -y_1(t) \end{bmatrix}.$$

The Matlab function defining the differential equation has *t* and *y* as input arguments and should return *f*(*t*, *y*) as a column vector. For the harmonic oscillator, the function could be an M-file containing

```matlab
function ydot = harmonic(t,y)
   ydot = [y(2); -y(1)];
```

A more compact version uses matrix multiplication in an anonymous function,

```matlab
f = @(t,y) [0 1; -1 0]*y;
```


In both cases, the variable t has to be included as the first argument, even though it is not explicitly involved in the differential equation.

A slightly more complicated example, the *two-body problem*, describes the orbit of one body under the gravitational attraction of a much heavier body. Using Cartesian coordinates, *u*(*t*) and *v*(*t*), centered in the heavy body, the equations are

$$\ddot{u}(t) = -u(t)/r(t)^3,$$

$$\ddot{v}(t) = -v(t)/r(t)^3,$$

where

$$r(t) = \sqrt{u(t)^2 + v(t)^2}.$$

The vector *y*(*t*) has four components:

$$y(t) = \begin{bmatrix} u(t) \\ v(t) \\ \dot{u}(t) \\ \dot{v}(t) \end{bmatrix}.$$

The differential equation is

$$\dot{y}(t) = \begin{bmatrix} \dot{u}(t) \\ \dot{v}(t) \\ -u(t)/r(t)^3 \\ -v(t)/r(t)^3 \end{bmatrix}.$$

The Matlab function could be

```matlab
function ydot = twobody(t,y)
   r = sqrt(y(1)^2 + y(2)^2);
   ydot = [y(3); y(4); -y(1)/r^3; -y(2)/r^3];
```

A more compact Matlab function is

```matlab
ydot = @(t,y) [y(3:4); -y(1:2)/norm(y(1:2))^3]
```

Despite the use of vector operations, the second M-file is not significantly more efficient than the first.

**7.3 Linearized Differential Equations**

The local behavior of the solution to a differential equation near any point $(t_c, y_c)$ can be analyzed by expanding *f*(*t*, *y*) in a two-dimensional Taylor series:

$$f(t, y) = f(t_c, y_c) + \alpha (t - t_c) + J (y - y_c) + \cdots,$$

where

$$\alpha = \frac{\partial f}{\partial t}(t_c, y_c), \qquad J = \frac{\partial f}{\partial y}(t_c, y_c).$$


The most important term in this series is usually the one involving *J*, the Jacobian. For a system of differential equations with *n* components,

$$\frac{d}{dt}\begin{bmatrix} y_1(t) \\ y_2(t) \\ \vdots \\ y_n(t) \end{bmatrix} = \begin{bmatrix} f_1(t, y_1, \ldots, y_n) \\ f_2(t, y_1, \ldots, y_n) \\ \vdots \\ f_n(t, y_1, \ldots, y_n) \end{bmatrix},$$

the Jacobian is an *n*-by-*n* matrix of partial derivatives:

$$J = \begin{bmatrix} \dfrac{\partial f_1}{\partial y_1} & \dfrac{\partial f_1}{\partial y_2} & \cdots & \dfrac{\partial f_1}{\partial y_n} \\[1ex] \dfrac{\partial f_2}{\partial y_1} & \dfrac{\partial f_2}{\partial y_2} & \cdots & \dfrac{\partial f_2}{\partial y_n} \\ \vdots & \vdots & & \vdots \\ \dfrac{\partial f_n}{\partial y_1} & \dfrac{\partial f_n}{\partial y_2} & \cdots & \dfrac{\partial f_n}{\partial y_n} \end{bmatrix}.$$

The influence of the Jacobian on the local behavior is determined by the solution to the linear system of ordinary differential equations

$$\dot{y} = J y.$$

Let $\lambda_k = \mu_k + i \nu_k$ be the eigenvalues of *J* and $\Lambda = \mathrm{diag}(\lambda_k)$ the diagonal eigenvalue matrix. If there is a linearly independent set of corresponding eigenvectors *V*, then

$$J = V \Lambda V^{-1}.$$

The linear transformation

$$V x = y$$

transforms the local system of equations into a set of decoupled equations for the individual components of *x*:

$$\dot{x}_k = \lambda_k x_k.$$

The solutions are

$$x_k(t) = e^{\lambda_k (t - t_c)} x_k(t_c).$$

A single component $x_k(t)$ grows with *t* if $\mu_k$ is positive, decays if $\mu_k$ is negative, and oscillates if $\nu_k$ is nonzero. The components of the local solution *y*(*t*) are linear combinations of these behaviors.

For example, the harmonic oscillator

$$\dot{y} = \begin{bmatrix} 0 & 1 \\ -1 & 0 \end{bmatrix} y$$

is a linear system. The Jacobian is simply the matrix

$$J = \begin{bmatrix} 0 & 1 \\ -1 & 0 \end{bmatrix}.$$

The eigenvalues of *J* are $\pm i$ and the solutions are purely oscillatory linear combinations of $e^{it}$ and $e^{-it}$.
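As a quick check (our own illustration, not part of the text), the eigenvector expansion $y(t) = c_1 e^{it}(1, i)^T + c_2 e^{-it}(1, -i)^T$ with $y(0) = (1, 0)^T$ gives $c_1 = c_2 = 1/2$, so $y_1(t) = \cos t$ and $y_2(t) = -\sin t$:

```python
import cmath

# Eigenvalues of J = [[0, 1], [-1, 0]] are +i and -i, with
# eigenvectors (1, i) and (1, -i).  Expanding y(0) = (1, 0) in this
# basis gives coefficients 1/2 and 1/2, so the solution is a purely
# oscillatory combination of exp(it) and exp(-it).

def harmonic_solution(t):
    c1 = c2 = 0.5
    y1 = c1 * cmath.exp(1j * t) + c2 * cmath.exp(-1j * t)
    y2 = c1 * 1j * cmath.exp(1j * t) + c2 * (-1j) * cmath.exp(-1j * t)
    return y1.real, y2.real   # imaginary parts cancel exactly

y1, y2 = harmonic_solution(0.7)
# y1 should match cos(0.7) and y2 should match -sin(0.7).
```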


A nonlinear example is the two-body problem

$$\dot{y}(t) = \begin{bmatrix} y_3(t) \\ y_4(t) \\ -y_1(t)/r(t)^3 \\ -y_2(t)/r(t)^3 \end{bmatrix},$$

where

$$r(t) = \sqrt{y_1(t)^2 + y_2(t)^2}.$$

In exercise 7.8, we ask you to show that the Jacobian for this system is

$$J = \frac{1}{r^5}\begin{bmatrix} 0 & 0 & r^5 & 0 \\ 0 & 0 & 0 & r^5 \\ 2y_1^2 - y_2^2 & 3 y_1 y_2 & 0 & 0 \\ 3 y_1 y_2 & 2y_2^2 - y_1^2 & 0 & 0 \end{bmatrix}.$$

It turns out that the eigenvalues of *J* just depend on the radius *r*(*t*):

$$\lambda = \frac{1}{r^{3/2}}\begin{bmatrix} \sqrt{2} \\ i \\ -i \\ -\sqrt{2} \end{bmatrix}.$$

We see that one eigenvalue is real and positive, so the corresponding component of the solution is growing. One eigenvalue is real and negative, corresponding to a decaying component. Two eigenvalues are purely imaginary, corresponding to oscillatory components. However, the overall global behavior of this nonlinear system is quite complicated and is not described by this local linearized analysis.
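This eigenvalue claim is easy to confirm numerically (our own sketch, using the block structure of *J*): the Jacobian has the form $J = \begin{bmatrix} 0 & I \\ B & 0 \end{bmatrix}$, so its eigenvalues are $\pm\sqrt{\mu}$ for the eigenvalues $\mu$ of the lower-left 2-by-2 block *B*, and the trace and determinant of *B* force $\mu = 2/r^3$ and $\mu = -1/r^3$.

```python
import math

# The two-body Jacobian has block form J = [[0, I], [B, 0]] with
# B = (1/r^5) * [[2*y1^2 - y2^2, 3*y1*y2],
#                [3*y1*y2,       2*y2^2 - y1^2]].
# Eigenvalues of J are square roots of the eigenvalues of B, which
# are mu = 2/r^3 and mu = -1/r^3, giving lambda = +-sqrt(2)/r^(3/2)
# and lambda = +-i/r^(3/2).

y1, y2 = 1.0, 2.0
r = math.hypot(y1, y2)
B = [[(2*y1**2 - y2**2)/r**5, 3*y1*y2/r**5],
     [3*y1*y2/r**5, (2*y2**2 - y1**2)/r**5]]

trace = B[0][0] + B[1][1]                  # equals 1/r^3
det = B[0][0]*B[1][1] - B[0][1]*B[1][0]    # equals -2/r^6

# Eigenvalues of B from the characteristic polynomial.
disc = math.sqrt(trace**2 - 4*det)
mu1 = (trace + disc)/2                     # 2/r^3  -> real pair
mu2 = (trace - disc)/2                     # -1/r^3 -> imaginary pair
```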

**7.4 Single-Step Methods**

The simplest numerical method for the solution of initial value problems is *Euler's* method. It uses a fixed step size *h* and generates the approximate solution by

$$y_{n+1} = y_n + h f(t_n, y_n),$$

$$t_{n+1} = t_n + h.$$

The Matlab code would use an initial point t0, a final point tfinal, an initial value y0, a step size h, and a function f. The primary loop would simply be

```matlab
t = t0;
y = y0;
while t <= tfinal
   y = y + h*f(t,y);
   t = t + h;
end
```


Note that this works perfectly well if y0 is a vector and f returns a vector.

As a quadrature rule for integrating *f *(*t*), Euler’s method corresponds to a rectangle rule where the integrand is evaluated only once, at the left-hand endpoint of the interval. It is exact if *f *(*t*) is constant, but not if *f *(*t*) is linear. So the error is proportional to *h*. Tiny steps are needed to get even a few digits of accuracy.

But, from our point of view, the biggest defect of Euler's method is that it does not provide an error estimate. There is no automatic way to determine what step size is needed to achieve a specified accuracy.
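A short Python experiment (ours, not from the text) makes the first-order behavior visible: for the hypothetical test problem $\dot{y} = y$, $y(0) = 1$, halving *h* roughly halves the error at *t* = 1.

```python
import math

def euler(f, t0, y0, tfinal, h):
    """Fixed-step Euler's method; returns the approximation at tfinal."""
    t, y = t0, y0
    while t < tfinal - 1e-12:     # guard against roundoff in t
        y = y + h * f(t, y)
        t = t + h
    return y

f = lambda t, y: y                # exact solution is exp(t)
err1 = abs(euler(f, 0.0, 1.0, 1.0, 0.01) - math.e)
err2 = abs(euler(f, 0.0, 1.0, 1.0, 0.005) - math.e)
print(err1 / err2)                # close to 2: the error is O(h)
```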

If Euler's method is followed by a second function evaluation, we begin to get a viable algorithm. There are two natural possibilities, corresponding to the midpoint rule and the trapezoid rule for quadrature. The midpoint analogue uses Euler to step halfway across the interval, evaluates the function at this intermediate point, then uses that slope to take the actual step:

$$
\begin{aligned}
s_1 &= f(t_n, y_n), \\
s_2 &= f\!\left(t_n + \tfrac{h}{2},\; y_n + \tfrac{h}{2} s_1\right), \\
y_{n+1} &= y_n + h s_2, \\
t_{n+1} &= t_n + h.
\end{aligned}
$$

The trapezoid analogue uses Euler to take a tentative step across the interval, evaluates the function at this exploratory point, then averages the two slopes to take the actual step:

$$
\begin{aligned}
s_1 &= f(t_n, y_n), \\
s_2 &= f(t_n + h,\; y_n + h s_1), \\
y_{n+1} &= y_n + h\,\frac{s_1 + s_2}{2}, \\
t_{n+1} &= t_n + h.
\end{aligned}
$$

If we were to use both of these methods simultaneously, they would produce two different values for $y_{n+1}$. The difference between the two values would provide an error estimate and a basis for picking the step size. Furthermore, an extrapolated combination of the two values would be more accurate than either one individually.

Continuing with this approach is the idea behind *single-step* methods for integrating ordinary differential equations. The function *f*(*t*, *y*) is evaluated several times for values of *t* between $t_n$ and $t_{n+1}$ and values of *y* obtained by adding linear combinations of the values of *f* to $y_n$. The actual step is taken using another linear combination of the function values. Modern versions of single-step methods use yet another linear combination of function values to estimate error and determine step size.
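Here is a small Python illustration of the pairing idea (our own, using the hypothetical test problem $\dot{y} = y^2$, $y(0) = 1$, whose exact solution is $1/(1 - t)$): one step of each second-order method, with their difference serving as a rough error estimate.

```python
# One step of the midpoint analogue and the trapezoid analogue on
# y' = y^2, y(0) = 1, exact solution y(t) = 1/(1 - t).
# The two second-order results differ slightly, and that difference
# is the same order of magnitude as the actual errors.

f = lambda t, y: y * y
t, y, h = 0.0, 1.0, 0.1

# Midpoint analogue.
s1 = f(t, y)
s2 = f(t + h/2, y + (h/2)*s1)
y_mid = y + h*s2

# Trapezoid analogue.
s1 = f(t, y)
s2 = f(t + h, y + h*s1)
y_trap = y + h*(s1 + s2)/2

estimate = abs(y_mid - y_trap)     # usable as an error estimate
exact = 1.0/(1.0 - (t + h))
```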

Single-step methods are often called *Runge–Kutta* methods, after the two German applied mathematicians who first wrote about them around 1905. The classical Runge–Kutta method was widely used for hand computation before the invention of digital computers and is still popular today. It uses four function evaluations per step:

$$
\begin{aligned}
s_1 &= f(t_n, y_n), \\
s_2 &= f\!\left(t_n + \tfrac{h}{2},\; y_n + \tfrac{h}{2} s_1\right), \\
s_3 &= f\!\left(t_n + \tfrac{h}{2},\; y_n + \tfrac{h}{2} s_2\right), \\
s_4 &= f(t_n + h,\; y_n + h s_3), \\
y_{n+1} &= y_n + \tfrac{h}{6}(s_1 + 2 s_2 + 2 s_3 + s_4), \\
t_{n+1} &= t_n + h.
\end{aligned}
$$

If *f*(*t*, *y*) does not depend on *y*, then classical Runge–Kutta has $s_2 = s_3$ and the method reduces to Simpson's quadrature rule.

Classical Runge–Kutta does not provide an error estimate. The method is sometimes used with a step size *h* and again with step size *h*/2 to obtain an error estimate, but we now know more efficient methods.
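The Simpson connection can be verified in a few lines (our own sketch, not from the text): for *f*(*t*) = *t*³, one classical Runge–Kutta step with *h* = 1 from *y* = 0 reproduces $\int_0^1 t^3\, dt = 1/4$, exactly as Simpson's rule does, because Simpson's rule is exact for cubics.

```python
# One step of classical Runge-Kutta on y' = f(t) (no y-dependence).
# Then s2 = s3 and the update is (h/6)*(s1 + 4*s2 + s4), which is
# Simpson's rule on [t, t+h].

def rk4_step(f, t, y, h):
    s1 = f(t, y)
    s2 = f(t + h/2, y + (h/2)*s1)
    s3 = f(t + h/2, y + (h/2)*s2)
    s4 = f(t + h, y + h*s3)
    return y + (h/6)*(s1 + 2*s2 + 2*s3 + s4)

f = lambda t, y: t**3
y1 = rk4_step(f, 0.0, 0.0, 1.0)   # integral of t^3 over [0,1] = 1/4
```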

Several of the ordinary differential equation solvers in Matlab, including the textbook solver we describe later in this chapter, are single-step or Runge–Kutta solvers. A general single-step method is characterized by a number of parameters, $\alpha_i$, $\beta_{i,j}$, $\gamma_i$, and $\delta_i$. There are *k* *stages*. Each stage computes a slope, $s_i$, by evaluating *f*(*t*, *y*) for a particular value of *t* and a value of *y* obtained by taking linear combinations of the previous slopes:

$$s_i = f\Bigl(t_n + \alpha_i h,\; y_n + h \sum_{j=1}^{i-1} \beta_{i,j} s_j\Bigr), \quad i = 1, \ldots, k.$$

The proposed step is also a linear combination of the slopes:

$$y_{n+1} = y_n + h \sum_{i=1}^{k} \gamma_i s_i.$$

An estimate of the error that would occur with this step is provided by yet another linear combination of the slopes:

$$e_{n+1} = h \sum_{i=1}^{k} \delta_i s_i.$$

If this error is less than the specified tolerance, then the step is successful and $y_{n+1}$ is accepted. If not, the step is a failure and $y_{n+1}$ is rejected. In either case, the error estimate is used to compute the step size *h* for the next step.

The parameters in these methods are determined by matching terms in Taylor series expansions of the slopes. These series involve powers of *h* and products of various partial derivatives of *f*(*t*, *y*). The *order* of a method is the exponent of the smallest power of *h* that cannot be matched. It turns out that one, two, three, and four stages yield methods of order one, two, three, and four, respectively. But it takes six stages to obtain a fifth-order method. The classical Runge–Kutta method has four stages and is fourth order.

The names of the Matlab ordinary differential equation solvers are all of the form odennxx with digits nn indicating the order of the underlying method and a possibly empty xx indicating some special characteristic of the method. If the error estimate is obtained by comparing formulas with different orders, the digits nn indicate these orders. For example, ode45 obtains its error estimate by comparing a fourth-order and a fifth-order formula.

**7.5 The BS23 Algorithm**

Our textbook function ode23tx is a simplified version of the function ode23 that is included with Matlab. The algorithm is due to Bogacki and Shampine [3, 6]. The "23" in the function names indicates that two simultaneous single-step formulas, one of second order and one of third order, are involved.

The method has three stages, but there are four slopes $s_i$ because, after the first step, the $s_1$ for one step is the $s_4$ from the previous step. The essentials are

$$
\begin{aligned}
s_1 &= f(t_n, y_n), \\
s_2 &= f\!\left(t_n + \tfrac{h}{2},\; y_n + \tfrac{h}{2} s_1\right), \\
s_3 &= f\!\left(t_n + \tfrac{3h}{4},\; y_n + \tfrac{3h}{4} s_2\right), \\
t_{n+1} &= t_n + h, \\
y_{n+1} &= y_n + \tfrac{h}{9}(2 s_1 + 3 s_2 + 4 s_3), \\
s_4 &= f(t_{n+1}, y_{n+1}), \\
e_{n+1} &= \tfrac{h}{72}(-5 s_1 + 6 s_2 + 8 s_3 - 9 s_4).
\end{aligned}
$$
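The essentials translate nearly line for line into code. Here is a Python sketch of one BS23 step (our own illustration, applied to the hypothetical test problem $\dot{y} = y$); the returned $s_4$ can be reused as the $s_1$ of the next step, as described above.

```python
def bs23_step(f, t, y, h, s1):
    """One Bogacki-Shampine (2,3) step.  Returns (tnew, ynew, err, s4);
    s4 can be reused as the s1 of the next step."""
    s2 = f(t + h/2, y + (h/2)*s1)
    s3 = f(t + 3*h/4, y + (3*h/4)*s2)
    tnew = t + h
    ynew = y + (h/9)*(2*s1 + 3*s2 + 4*s3)       # third-order result
    s4 = f(tnew, ynew)
    err = (h/72)*(-5*s1 + 6*s2 + 8*s3 - 9*s4)   # error estimate
    return tnew, ynew, err, s4

f = lambda t, y: y                               # exact solution exp(t)
t, y, h = 0.0, 1.0, 0.1
tnew, ynew, err, s4 = bs23_step(f, t, y, h, f(t, y))
# ynew approximates exp(0.1); err estimates the difference between
# the second- and third-order formulas.
```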

The simplified pictures in Figure 7.1 show the starting situation and the three stages. We start at a point $(t_n, y_n)$ with an initial slope $s_1 = f(t_n, y_n)$ and an estimate of a good step size, *h*. Our goal is to compute an approximate solution $y_{n+1}$ at $t_{n+1} = t_n + h$ that agrees with the true solution $y(t_{n+1})$ to within the specified tolerances.

The first stage uses the initial slope $s_1$ to take an Euler step halfway across the interval. The function is evaluated there to get the second slope, $s_2$. This slope is used to take an Euler step three-quarters of the way across the interval. The function is evaluated again to get the third slope, $s_3$. A weighted average of the three slopes,

$$s = \tfrac{1}{9}(2 s_1 + 3 s_2 + 4 s_3),$$

is used for the final step all the way across the interval to get a tentative value for $y_{n+1}$. The function is evaluated once more to get $s_4$. The error estimate then uses


**Figure 7.1.** *BS23 algorithm.*

all four slopes:

$$e_{n+1} = \tfrac{h}{72}(-5 s_1 + 6 s_2 + 8 s_3 - 9 s_4).$$

If the error is within the specified tolerance, then the step is successful, the tentative value of $y_{n+1}$ is accepted, and $s_4$ becomes the $s_1$ of the next step. If the error is too large, then the tentative $y_{n+1}$ is rejected and the step must be redone. In either case, the error estimate $e_{n+1}$ provides the basis for determining the step size *h* for the next step.

The first input argument of ode23tx specifies the function *f*(*t*, *y*). This argument can be either

*• *a function handle, or

*• *an anonymous function.

The function should accept two arguments—usually, but not necessarily, t and y. The result of evaluating the function should be a column vector containing the values of the derivatives, *dy/dt*.

The second input argument of ode23tx is a vector, tspan, with two components, t0 and tfinal. The integration is carried out over the interval

$$t_0 \le t \le t_{final}.$$

One of the simplifications in our textbook code is this form of tspan. Other Matlab ordinary differential equation solvers allow more flexible specifications of the integration interval.


The third input argument is a column vector, y0, providing the initial value of $y_0 = y(t_0)$. The length of y0 tells ode23tx the number of differential equations in the system.

A fourth input argument is optional and can take two different forms. The simplest, and most common, form is a scalar numerical value, rtol, to be used as the relative error tolerance. The default value for rtol is 10⁻³, but you can provide a different value if you want more or less accuracy. The more complicated possibility for this optional argument is the structure generated by the Matlab function odeset. This function takes pairs of arguments that specify many different options for the Matlab ordinary differential equation solvers. For ode23tx, you can change the default values of three quantities: the relative error tolerance, the absolute error tolerance, and the M-file that is called after each successful step. The statement

```matlab
opts = odeset('reltol',1.e-5, 'abstol',1.e-8, ...
              'outputfcn',@myodeplot)
```

creates a structure that specifies the relative error tolerance to be 10⁻⁵, the absolute error tolerance to be 10⁻⁸, and the output function to be myodeplot.

The output produced by ode23tx can be either graphic or numeric. With no output arguments, the statement

```matlab
ode23tx(F,tspan,y0);
```

produces a dynamic plot of all the components of the solution. With two output arguments, the statement

```matlab
[tout,yout] = ode23tx(F,tspan,y0);
```

generates a table of values of the solution.

**7.6 ode23tx**

Let's examine the code for ode23tx. Here is the preamble.

```matlab
function [tout,yout] = ode23tx(F,tspan,y0,arg4,varargin)
%ODE23TX  Solve non-stiff differential equations.
%   Textbook version of ODE23.
%
%   ODE23TX(F,TSPAN,Y0) with TSPAN = [T0 TFINAL] integrates the
%   system of differential equations dy/dt = f(t,y) from t = T0
%   to t = TFINAL.  The initial condition is y(T0) = Y0.
%
%   The first argument, F, is a function handle or an anonymous
%   function that defines f(t,y).  This function must have two
%   input arguments, t and y, and must return a column vector
%   of the derivatives, dy/dt.
%
%   With two output arguments, [T,Y] = ODE23TX(...) returns a
%   column vector T and an array Y where Y(:,k) is the solution
%   at T(k).
%
%   With no output arguments, ODE23TX plots the solution.
%
%   ODE23TX(F,TSPAN,Y0,RTOL) uses the relative error tolerance
%   RTOL instead of the default 1.e-3.
%
%   ODE23TX(F,TSPAN,Y0,OPTS) where OPTS = ...
%   ODESET('reltol',RTOL,'abstol',ATOL,'outputfcn',@PLTFN)
%   uses relative error RTOL instead of 1.e-3, absolute error
%   ATOL instead of 1.e-6, and calls PLTFN instead of ODEPLOT
%   after each step.
%
%   More than four input arguments, ODE23TX(F,TSPAN,Y0,RTOL,
%   P1,P2,..), are passed on to F, F(T,Y,P1,P2,..).
%
%   ODE23TX uses the Runge-Kutta (2,3) method of Bogacki and
%   Shampine.
%
%   Example
%      tspan = [0 2*pi];
%      y0 = [1 0]';
%      F = '[0 1; -1 0]*y';
%      ode23tx(F,tspan,y0);
%
%   See also ODE23.
```

Here is the code that parses the arguments and initializes the internal variables.

```matlab
rtol = 1.e-3;
atol = 1.e-6;
plotfun = @odeplot;
if nargin >= 4 & isnumeric(arg4)
   rtol = arg4;
elseif nargin >= 4 & isstruct(arg4)
   if ~isempty(arg4.RelTol), rtol = arg4.RelTol; end
   if ~isempty(arg4.AbsTol), atol = arg4.AbsTol; end
   if ~isempty(arg4.OutputFcn), plotfun = arg4.OutputFcn; end
end
t0 = tspan(1);
tfinal = tspan(2);
tdir = sign(tfinal - t0);
plotit = (nargout == 0);
threshold = atol / rtol;
hmax = abs(0.1*(tfinal-t0));
t = t0;
y = y0(:);
```

```matlab
% Initialize output.
if plotit
   plotfun(tspan,y,'init');
else
   tout = t;
   yout = y.';
end
```

The computation of the initial step size is a delicate matter because it requires some knowledge of the overall scale of the problem.

```matlab
s1 = F(t, y, varargin{:});
r = norm(s1./max(abs(y),threshold),inf) + realmin;
h = tdir*0.8*rtol^(1/3)/r;
```

Here is the beginning of the main loop. The integration starts at $t = t_0$ and increments *t* until it reaches $t_{final}$. It is possible to go "backward," that is, have $t_{final} < t_0$.

```matlab
while t ~= tfinal
   hmin = 16*eps*abs(t);
   if abs(h) > hmax, h = tdir*hmax; end
   if abs(h) < hmin, h = tdir*hmin; end
```

```matlab
   % Stretch the step if t is close to tfinal.
   if 1.1*abs(h) >= abs(tfinal - t)
      h = tfinal - t;
   end
```

Here is the actual computation. The first slope s1 has already been computed. The function defining the differential equation is evaluated three more times to obtain three more slopes.

```matlab
   s2 = F(t+h/2, y+h/2*s1, varargin{:});
   s3 = F(t+3*h/4, y+3*h/4*s2, varargin{:});
   tnew = t + h;
   ynew = y + h*(2*s1 + 3*s2 + 4*s3)/9;
   s4 = F(tnew, ynew, varargin{:});
```

Here is the error estimate. The norm of the error vector is scaled by the ratio of the absolute tolerance to the relative tolerance. The use of the smallest floating-point number, realmin, prevents err from being exactly zero.

```matlab
   e = h*(-5*s1 + 6*s2 + 8*s3 - 9*s4)/72;
   err = norm(e./max(max(abs(y),abs(ynew)),threshold), ...
              inf) + realmin;
```

Here is the test to see if the step is successful. If it is, the result is plotted or appended to the output vector. If it is not, the result is simply forgotten.

```matlab
   if err <= rtol
      t = tnew;
      y = ynew;
      if plotit
         if plotfun(t,y,'');
            break
         end
      else
         tout(end+1,1) = t;
         yout(end+1,:) = y.';
      end
      s1 = s4; % Reuse final function value to start new step.
   end
```

The error estimate is used to compute a new step size. The ratio rtol/err is greater than one if the current step is successful, or less than one if the current step fails. A cube root is involved because BS23 is a third-order method. This means that changing the tolerance by a factor of eight will change the typical step size, and hence the total number of steps, by a factor of two. The factors 0.8 and 5 prevent excessive changes in step size.

```matlab
   % Compute a new step size.
   h = h*min(5,0.8*(rtol/err)^(1/3));
```

Here is the only place where a singularity would be detected.

```matlab
   if abs(h) <= hmin
      warning(sprintf( ...
         'Step size %e too small at t = %e.\n',h,t));
      t = tfinal;
   end
end
```

That ends the main loop. The plot function might need to finish its work.

```matlab
if plotit
   plotfun([],[],'done');
end
```

**7.7 Examples**

Please sit down in front of a computer running Matlab. Make sure ode23tx is in your current directory or on your Matlab path. Start your session by entering


```matlab
F = @(t,y) 0;
ode23tx(F,[0 10],1)
```

This should produce a plot of the solution of the initial value problem

$$\frac{dy}{dt} = 0, \quad y(0) = 1, \quad 0 \le t \le 10.$$

The solution, of course, is a constant function, *y*(*t*) = 1.

Now you can press the up arrow key, use the left arrow key to space over to the 0, and change it to something more interesting. Here are some examples. At first, we'll change just the 0 and leave the [0 10] and 1 alone.

| F | Exact solution |
| --- | --- |
| 0 | 1 |
| t | 1+t^2/2 |
| y | exp(t) |
| -y | exp(-t) |
| 1/(1-3*t) | 1-log(1-3*t)/3 (singular) |
| 2*y-y^2 | 2/(1+exp(-2*t)) |

Make up some of your own examples. Change the initial condition. Change the accuracy by including 1.e-6 as the fourth argument.

Now let's try the harmonic oscillator, a second-order differential equation written as a pair of two first-order equations. First, create a function to specify the equations. Use either

```matlab
F = @(t,y) [y(2); -y(1)];
```

or

```matlab
F = @(t,y) [0 1; -1 0]*y;
```

Then the statement

```matlab
ode23tx(F,[0 2*pi],[1; 0])
```

plots two functions of *t* that you should recognize. If you want to produce a *phase plane* plot, you have two choices. One possibility is to capture the output and plot it after the computation is complete.

```matlab
[t,y] = ode23tx(F,[0 2*pi],[1; 0])
plot(y(:,1),y(:,2),'-o')
axis([-1.2 1.2 -1.2 1.2])
axis square
```

The more interesting possibility is to use a function that plots the solution while it is being computed. Matlab provides such a function in odephas2.m. It is accessed by using odeset to create an options structure.

```matlab
opts = odeset('reltol',1.e-4,'abstol',1.e-6, ...
              'outputfcn',@odephas2);
```

If you want to provide your own plotting function, it should be something like

```matlab
function flag = phaseplot(t,y,job)
persistent p
if isequal(job,'init')
   p = plot(y(1),y(2),'o','erasemode','none');
   axis([-1.2 1.2 -1.2 1.2])
   axis square
elseif isequal(job,'')
   set(p,'xdata',y(1),'ydata',y(2))
end
pause(0.2)
flag = 0;
```

This is used with

```matlab
opts = odeset('reltol',1.e-4,'abstol',1.e-6, ...
              'outputfcn',@phaseplot);
```

Once you have decided on a plotting function and created an options structure, you can compute and simultaneously plot the solution with

```matlab
ode23tx(F,[0 2*pi],[1; 0],opts)
```

Try this with other values of the tolerances.

Issue the command type twobody to see if there is an M-file twobody.m on your path. If not, find the two or three lines of code earlier in this chapter and create your own M-file. Then try

```matlab
ode23tx(@twobody,[0 2*pi],[1; 0; 0; 1]);
```

The code, and the length of the initial condition, indicate that the solution has four components. But the plot shows only three. Why? Hint: Find the zoom button on the figure window toolbar and zoom in on the blue curve.

You can vary the initial condition of the two-body problem by changing the fourth component.

```matlab
y0 = [1; 0; 0; change_this];
ode23tx(@twobody,[0 2*pi],y0);
```

Graph the orbit, and the heavy body at the origin, with

```matlab
y0 = [1; 0; 0; change_this];
[t,y] = ode23tx(@twobody,[0 2*pi],y0);
plot(y(:,1),y(:,2),'-',0,0,'ro')
axis equal
```

You might also want to use something other than 2*π* for tfinal.


**7.8 Lorenz Attractor**

One of the world's most extensively studied ordinary differential equations is the Lorenz chaotic attractor. It was first described in 1963 by Edward Lorenz, an M.I.T. mathematician and meteorologist who was interested in fluid flow models of the earth's atmosphere. An excellent reference is a book by Colin Sparrow [8].

We have chosen to express the Lorenz equations in a somewhat unusual way involving a matrix-vector product:

$$\dot{y} = A y.$$

The vector *y* has three components that are functions of *t*:

$$y(t) = \begin{bmatrix} y_1(t) \\ y_2(t) \\ y_3(t) \end{bmatrix}.$$

Despite the way we have written it, this is not a linear system of differential equations. Seven of the nine elements in the 3-by-3 matrix *A* are constant, but the other two depend on $y_2(t)$:

$$A = \begin{bmatrix} -\beta & 0 & y_2 \\ 0 & -\sigma & \sigma \\ -y_2 & \rho & -1 \end{bmatrix}.$$

The first component of the solution, $y_1(t)$, is related to the convection in the atmospheric flow, while the other two components are related to horizontal and vertical temperature variation. The parameter *σ* is the Prandtl number, *ρ* is the normalized Rayleigh number, and *β* depends on the geometry of the domain. The most popular values of the parameters, *σ* = 10, *ρ* = 28, and *β* = 8/3, are outside the ranges associated with the earth's atmosphere.

The deceptively simple nonlinearity introduced by the presence of $y_2$ in the system matrix *A* changes everything. There are no random aspects to these equations, so the solutions *y*(*t*) are completely determined by the parameters and the initial conditions, but their behavior is very difficult to predict. For some values of the parameters, the orbit of *y*(*t*) in three-dimensional space is known as a *strange attractor*. It is bounded, but not periodic and not convergent. It never intersects itself. It ranges chaotically back and forth around two different points, or attractors. For other values of the parameters, the solution might converge to a fixed point, diverge to infinity, or oscillate periodically. See Figures 7.2 and 7.3.

Let's think of $\eta = y_2$ as a free parameter, restrict *ρ* to be greater than one, and study the matrix

$$A = \begin{bmatrix} -\beta & 0 & \eta \\ 0 & -\sigma & \sigma \\ -\eta & \rho & -1 \end{bmatrix}.$$

It turns out that *A* is singular if and only if

$$\eta = \pm\sqrt{\beta(\rho - 1)}.$$
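A direct check (our own few lines, not part of lorenzgui): expanding the 3-by-3 determinant gives $\det A = \sigma\bigl(\beta(\rho - 1) - \eta^2\bigr)$, which vanishes, up to roundoff, at the standard parameter values when $\eta = \sqrt{\beta(\rho - 1)}$.

```python
import math

# det(A) for A = [[-beta, 0, eta], [0, -sigma, sigma], [-eta, rho, -1]]
# expands to sigma*(beta*(rho - 1) - eta^2), which is zero exactly
# when eta = +-sqrt(beta*(rho - 1)).

def det3(A):
    """Determinant of a 3x3 matrix by cofactor expansion."""
    return (A[0][0]*(A[1][1]*A[2][2] - A[1][2]*A[2][1])
          - A[0][1]*(A[1][0]*A[2][2] - A[1][2]*A[2][0])
          + A[0][2]*(A[1][0]*A[2][1] - A[1][1]*A[2][0]))

beta, sigma, rho = 8.0/3.0, 10.0, 28.0
eta = math.sqrt(beta*(rho - 1))
A = [[-beta, 0.0, eta],
     [0.0, -sigma, sigma],
     [-eta, rho, -1.0]]
print(det3(A))   # nearly zero
```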


**Figure 7.2.** *Three components of Lorenz attractor.*

**Figure 7.3.** *Phase plane plot of Lorenz attractor.*

The corresponding null vector, normalized so that its second component is equal to *η*, is

$$\begin{bmatrix} \rho - 1 \\ \eta \\ \eta \end{bmatrix}.$$

With two different signs for *η*, this defines two points in three-dimensional space.


These points are fixed points for the differential equation. If

$$y(t_0) = \begin{bmatrix} \rho - 1 \\ \eta \\ \eta \end{bmatrix},$$

then, for all *t*,

$$\dot{y}(t) = \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix},$$

and so *y*(*t*) never changes. However, these points are unstable fixed points. If *y*(*t*) does not start at one of these points, it will never reach either of them; if it tries to approach either point, it will be repulsed.

We have provided an M-file, lorenzgui.m, that facilitates experiments with the Lorenz equations. Two of the parameters, *β* = 8/3 and *σ* = 10, are fixed. A uicontrol offers a choice among several different values of the third parameter, *ρ*.

A simplified version of the program for *ρ* = 28 would begin with

```matlab
rho = 28;
sigma = 10;
beta = 8/3;
eta = sqrt(beta*(rho-1));
A = [ -beta 0      eta
       0    -sigma sigma
      -eta  rho    -1 ];
```

The initial condition is taken to be near one of the attractors.

```matlab
yc = [rho-1; eta; eta];
y0 = yc + [0; 0; 3];
```

The time span is infinite, so the integration will have to be stopped by another uicontrol.

```matlab
tspan = [0 Inf];
opts = odeset('reltol',1.e-6,'outputfcn',@lorenzplot);
ode45(@lorenzeqn, tspan, y0, opts, A);
```

The matrix *A* is passed as an extra parameter to the integrator, which sends it on to lorenzeqn, the subfunction defining the differential equation. The extra parameter machinery included in the function functions allows lorenzeqn to be written in a particularly compact manner.

```matlab
function ydot = lorenzeqn(t,y,A)
   A(1,3) = y(2);
   A(3,1) = -y(2);
   ydot = A*y;
```

Most of the complexity of lorenzgui is contained in the plotting subfunction, lorenzplot. It not only manages the user interface controls, it must also anticipate the possible range of the solution in order to provide appropriate axis scaling.


**7.9 Stiffness**

Stiffness is a subtle, difficult, and important concept in the numerical solution of ordinary differential equations. It depends on the differential equation, the initial conditions, and the numerical method. Dictionary definitions of the word "stiff" involve terms like "not easily bent," "rigid," and "stubborn." We are concerned with a computational version of these properties.

*A problem is stiff if the solution being sought varies slowly, but there are nearby solutions that vary rapidly, so the numerical method must take small steps to obtain satisfactory results.*

Stiffness is an efficiency issue. If we weren't concerned with how much time a computation takes, we wouldn't be concerned about stiffness. Nonstiff methods can solve stiff problems; they just take a long time to do it.

A model of flame propagation provides an example. We learned about this example from Larry Shampine, one of the authors of the Matlab ordinary differential equation suite. If you light a match, the ball of flame grows rapidly until it reaches a critical size. Then it remains at that size because the amount of oxygen being consumed by the combustion in the interior of the ball balances the amount available through the surface. The simple model is

ẏ = y² − y³,   y(0) = δ,   0 ≤ t ≤ 2/δ.

The scalar variable *y*(*t*) represents the radius of the ball. The y² and y³ terms come from the surface area and the volume. The critical parameter is the initial radius, *δ*, which is “small.” We seek the solution over a length of time that is inversely proportional to *δ*.

At this point, we suggest that you start up Matlab and actually run our examples. It is worthwhile to see them in action. We will start with ode45, the workhorse of the Matlab ordinary diﬀerential equation suite. If *δ* is not very small, the problem is not very stiﬀ. Try *δ* = 0.01 and request a relative error of 10^{-4}.

delta = 0.01;
F = @(t,y) y^2 - y^3;
opts = odeset('RelTol',1.e-4);
ode45(F,[0 2/delta],delta,opts);

With no output arguments, ode45 automatically plots the solution as it is computed.

You should get a plot of a solution that starts at *y *= 0.01, grows at a modestly increasing rate until *t *approaches 100, which is 1/*δ*, then grows rapidly until it reaches a value close to 1, where it remains.

Now let’s see stiﬀness in action. Decrease *δ* by three orders of magnitude. (If you run only one example, run this one.)

delta = 0.00001;
ode45(F,[0 2/delta],delta,opts);


**Figure 7.4.** *Stiﬀ behavior of* ode45.

You should see something like Figure 7.4, although it will take a long time to complete the plot. If you get tired of watching the agonizing progress, click the stop button in the lower left corner of the window. Turn on zoom, and use the mouse to explore the solution near where it ﬁrst approaches steady state. You should see something like the detail in Figure 7.4. Notice that ode45 is doing its job. It’s keeping the solution within 10^{-4} of its nearly constant steady state value. But it certainly has to work hard to do it. If you want an even more dramatic demonstration of stiﬀness, decrease the tolerance to 10^{-5} or 10^{-6}.

This problem is not stiﬀ initially. It only becomes stiﬀ as the solution approaches steady state. This is because the steady state solution is so “rigid.” Any solution near *y*(*t*) = 1 increases or decreases rapidly toward that solution. (We should point out that “rapidly” here is with respect to an unusually long time scale.)

What can be done about stiﬀ problems? You don’t want to change the differential equation or the initial conditions, so you have to change the numerical method. Methods intended to solve stiﬀ problems eﬃciently do more work per step, but can take much bigger steps. Stiﬀ methods are *implicit*. At each step they use Matlab matrix operations to solve a system of simultaneous linear equations that helps predict the evolution of the solution. For our ﬂame example, the matrix is only 1 by 1, but even here, stiﬀ methods do more work per step than nonstiﬀ methods.
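To see why implicit steps pay off, here is a small illustrative sketch (in Python rather than Matlab, and not code from this book) that applies one explicit and one implicit method to the flame equation ẏ = y² − y³ near its steady state. With a step size far beyond the explicit stability limit, forward Euler bounces away from y = 1, while backward Euler, which solves a scalar nonlinear equation at each step with a few Newton iterations, is damped toward it.

```python
# Illustrative sketch (not the book's code): explicit vs. implicit stepping
# near the stiff steady state of the flame equation y' = y^2 - y^3.
def f(y):
    return y**2 - y**3

def fprime(y):
    return 2*y - 3*y**2

def forward_euler(y, h):
    return y + h*f(y)

def backward_euler(y, h):
    # Solve y_new = y + h*f(y_new) for y_new with Newton's method.
    y_new = y
    for _ in range(10):
        g = y_new - y - h*f(y_new)
        y_new -= g / (1.0 - h*fprime(y_new))
    return y_new

h = 10.0                      # a step far beyond the explicit stability limit
y_exp = y_imp = 1.0 + 1e-4    # start slightly off the steady state y = 1
for _ in range(6):
    y_exp = forward_euler(y_exp, h)
    y_imp = backward_euler(y_imp, h)

# Backward Euler is strongly damped toward y = 1; forward Euler diverges.
print(abs(y_imp - 1.0), abs(y_exp - 1.0))
```

Each implicit step costs a short Newton iteration, but the deviation from the steady state shrinks by a factor of about 1 + h|f′(1)| = 11 per step, no matter how large h is.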


**Figure 7.5.** *Stiﬀ behavior of* ode23s.

Let’s compute the solution to our ﬂame example again, this time with one of the ordinary diﬀerential equation solvers in Matlab whose name ends in “s” for “stiﬀ.”

delta = 0.00001;
ode23s(F,[0 2/delta],delta,opts);

Figure 7.5 shows the computed solution and the zoom detail. You can see that ode23s takes many fewer steps than ode45. This is actually an easy problem for a stiﬀ solver. In fact, ode23s takes only 99 steps and uses just 412 function evaluations, while ode45 takes 3040 steps and uses 20179 function evaluations. Stiﬀness even aﬀects graphical output. The print ﬁles for the ode45 ﬁgures are much larger than those for the ode23s ﬁgures.

Imagine you are returning from a hike in the mountains. You are in a narrow canyon with steep slopes on either side. An explicit algorithm would sample the local gradient to ﬁnd the descent direction. But following the gradient on either side of the trail will send you bouncing back and forth across the canyon, as with ode45.

You will eventually get home, but it will be long after dark before you arrive. An implicit algorithm would have you keep your eyes on the trail and anticipate where each step is taking you. It is well worth the extra concentration.

This ﬂame problem is also interesting because it involves the Lambert W function, *W*(*z*). The diﬀerential equation is separable. Integrating once gives an implicit equation for *y* as a function of *t*:

1/y + log(1/y − 1) = 1/δ + log(1/δ − 1) − t.

**Figure 7.6.** *Exact solution for the ﬂame example:* 1/(lambertw(99 exp(99−t))+1).

This equation can be solved for *y*. The exact analytical solution to the ﬂame model turns out to be

y(t) = 1/(W(a e^{a−t}) + 1),

where a = 1/δ − 1. The function *W*(*z*), the Lambert W function, is the solution to

W(z) e^{W(z)} = z.
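The Symbolic Math Toolbox provides lambertw; as an illustration of the defining equation, the following Python sketch (not part of the book's software) evaluates the principal branch of W(z) by applying Newton's method to we^w = z, and uses it to check the exact flame solution at t = 0, where y should recover the initial radius δ.

```python
import math

def lambertw(z, tol=1e-14):
    """Principal branch of W(z) for z >= 0, via Newton's method on w*exp(w) = z.
    Illustrative sketch; not the Symbolic Math Toolbox implementation."""
    w = math.log(z + 1.0)            # starting guess, above the root for z >= 0
    for _ in range(50):
        e = math.exp(w)
        step = (w*e - z) / (e*(w + 1.0))
        w -= step
        if abs(step) < tol:
            break
    return w

# Check the defining identity W(z) e^{W(z)} = z at a few points.
for z in (0.5, 1.0, 10.0, 1e4):
    w = lambertw(z)
    assert abs(w*math.exp(w) - z) < 1e-9*max(1.0, z)

# Exact flame solution y(t) = 1/(W(a e^{a-t}) + 1) with a = 1/delta - 1.
# At t = 0 the argument is a*e^a, W(a*e^a) = a, so y(0) = 1/(a+1) = delta.
delta = 0.01
a = 1.0/delta - 1.0                  # a = 99
y0 = 1.0/(lambertw(a*math.exp(a)) + 1.0)
print(y0)
```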

With Matlab and the Symbolic Math Toolbox, the statements

y = dsolve('Dy = y^2 - y^3','y(0) = 1/100');
y = simplify(y);
pretty(y)
ezplot(y,0,200)

produce

            1
-------------------------------
lambertw(0, 99 exp(99 - t)) + 1

and the plot of the exact solution shown in Figure 7.6. If the initial value 1/100 is decreased and the time span 0 ≤ t ≤ 200 increased, the transition region becomes narrower.

The Lambert W function is named after J. H. Lambert (1728–1777). Lambert was a colleague of Euler and Lagrange at the Berlin Academy of Sciences and is best known for his laws of illumination and his proof that *π* is irrational. The function was “rediscovered” a few years ago by Corless, Gonnet, Hare, and Jeﬀrey, working on Maple, and by Don Knuth [4].


**7.10**

**Events**

So far, we have been assuming that the tspan interval, t_0 ≤ t ≤ t_final, is a given part of the problem speciﬁcation, or we have used an inﬁnite interval and a GUI button to terminate the computation. In many situations, the determination of t_final is an important aspect of the problem.

One example is a body falling under the force of gravity and encountering air resistance. When does it hit the ground? Another example is the two-body problem, the orbit of one body under the gravitational attraction of a much heavier body. What is the period of the orbit? The *events *feature of the Matlab ordinary diﬀerential equation solvers provides answers to such questions.

Events detection in ordinary diﬀerential equations involves two functions, *f*(*t, y*) and *g*(*t, y*), and an initial condition, (t_0, y_0). The problem is to ﬁnd a function *y*(*t*) and a ﬁnal value t* so that

ẏ = f(t, y),
y(t_0) = y_0,

and

g(t*, y(t*)) = 0.

A simple model for the falling body is

ÿ = −1 + ẏ²,

with initial conditions y(0) = 1, ẏ(0) = 0.

The code for the function *f*(*t, y*) is

function ydot = f(t,y)
ydot = [y(2); -1+y(2)^2];

**Figure 7.7.** *Event handling for falling object.*

With the diﬀerential equation written as a ﬁrst-order system, *y* becomes a vector with two components and so g(t, y) = y_1. The code for *g*(*t, y*) is

function [gstop,isterminal,direction] = g(t,y)
gstop = y(1);
isterminal = 1;
direction = [];

The ﬁrst output, gstop, is the value that we want to make zero. Setting the second output, isterminal, to one indicates that the ordinary diﬀerential equation solver should terminate when gstop is zero. Setting the third output, direction, to the empty matrix indicates that the zero can be approached from either direction.
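The solver locates the zero of gstop internally. To make the idea concrete, here is a hedged Python sketch (an illustration, not the Matlab implementation) that integrates the falling-body system with classical Runge–Kutta steps and locates the event g(t, y) = y_1 = 0 by bisection on the length of the ﬁnal step.

```python
import math

def f(t, y):
    # Falling body as a first-order system: y1' = y2, y2' = -1 + y2^2.
    return [y[1], -1.0 + y[1]**2]

def rk4_step(t, y, h):
    def axpy(a, v, w):  # componentwise v + a*w
        return [vi + a*wi for vi, wi in zip(v, w)]
    k1 = f(t, y)
    k2 = f(t + h/2, axpy(h/2, y, k1))
    k3 = f(t + h/2, axpy(h/2, y, k2))
    k4 = f(t + h,   axpy(h,   y, k3))
    return [yi + h/6*(a + 2*b + 2*c + d)
            for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]

t, y, h = 0.0, [1.0, 0.0], 0.01
while True:
    y_new = rk4_step(t, y, h)
    if y_new[0] < 0.0:          # event: g(t, y) = y1 changed sign in this step
        lo, hi = 0.0, h         # bisect on the length of the final substep
        for _ in range(60):
            mid = 0.5*(lo + hi)
            if rk4_step(t, y, mid)[0] > 0.0:
                lo = mid
            else:
                hi = mid
        tfinal = t + 0.5*(lo + hi)
        break
    t, y = t + h, y_new

# The exact solution is y(t) = 1 - log(cosh t), so the exact stopping
# time is arccosh(e) = 1.6574...
print(tfinal)
```

The production solvers do this more cleverly, using the interpolant that comes free with each step instead of re-integrating, but the structure is the same: detect a sign change of g, then refine the crossing time.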

With these two functions available, the following statements compute and plot the trajectory shown in Figure 7.7.

opts = odeset('events',@g);
y0 = [1; 0];
[t,y,tfinal] = ode45(@f,[0 Inf],y0,opts);
tfinal
plot(t,y(:,1),'-',[0 tfinal],[1 0],'o')
axis([-.1 tfinal+.1 -.1 1.1])
xlabel('t')
ylabel('y')
title('Falling body')
text(1.2, 0, ['tfinal = ' num2str(tfinal)])

The terminating value of *t *is found to be tfinal = 1.6585.

The three sections of code for this example can be saved in three separate

M-ﬁles, with two functions and one script, or they can all be saved in one function

M-ﬁle. In the latter case, f and g become subfunctions and have to appear after the main body of code.

Events detection is particularly useful in problems involving periodic phenomena. The two-body problem provides a good example. Here is the ﬁrst portion of a function M-ﬁle, orbit.m. The input parameter is reltol, the desired local relative tolerance.

function orbit(reltol)
y0 = [1; 0; 0; 0.3];
opts = odeset('events',@(t,y)gstop(t,y,y0),'reltol',reltol);
[t,y,te,ye] = ode45(@(t,y)twobody(t,y,y0),[0 2*pi],y0,opts);
tfinal = te(end)
yfinal = ye(end,1:2)
plot(y(:,1),y(:,2),'-',0,0,'ro')
axis([-.1 1.05 -.35 .35])

The function ode45 is used to compute the orbit. The ﬁrst input argument is a function handle, @twobody, that references the function deﬁning the diﬀerential equations. The second argument to ode45 is any overestimate of the time interval required to complete one period. The third input argument is y0, a 4-vector that provides the initial position and velocity. The light body starts at (1, 0), which is a point with a distance 1 from the heavy body, and has initial velocity (0, 0.3), which is perpendicular to the initial position vector. The fourth input argument is an options structure created by odeset that overrides the default value for reltol and that speciﬁes a function gstop that deﬁnes the events we want to locate. The last argument is y0, an “extra” argument that ode45 passes on to both twobody and gstop.

The code for twobody has to be modiﬁed to accept a third argument, even though it is not used.

function ydot = twobody(t,y,y0)
r = sqrt(y(1)^2 + y(2)^2);
ydot = [y(3); y(4); -y(1)/r^3; -y(2)/r^3];

The ordinary diﬀerential equation solver calls the gstop function at every step during the integration. This function tells the solver whether or not it is time to stop.

function [val,isterm,dir] = gstop(t,y,y0)
d = y(1:2)-y0(1:2);
v = y(3:4);
val = d'*v;
isterm = 1;
dir = 1;

The 2-vector d is the diﬀerence between the current position and the starting point.

The 2-vector v is the velocity at the current position. The quantity val is the inner product between these two vectors. Mathematically, the stopping function is

g(t, y) = ḋ(t)ᵀ d(t),

where

d(t) = (y_1(t) − y_1(0), y_2(t) − y_2(0))ᵀ.

Points where g(t, y(t)) = 0 are the local minima or maxima of d(t)ᵀ d(t). By setting dir = 1, we indicate that the zeros of g(t, y) must be approached from below, so they correspond to minima. By setting isterm = 1, we indicate that computation of the solution should be terminated at the ﬁrst minimum. If the orbit is truly periodic, then any minima of d occur when the body returns to its starting point.

Calling orbit with a very loose tolerance

orbit(2.0e-3)

produces

tfinal =
    2.350871977619482
yfinal =
    0.981076599011125  -0.000125191385574

and plots Figure 7.8.

**Figure 7.8.** *Periodic orbit computed with loose tolerance.*

You can see from both the value of yfinal and the graph that the orbit does not quite return to the starting point. We need to request more accuracy.

orbit(1.0e-6)

produces

tfinal =
    2.380258461717980
yfinal =
    0.999985939055197  0.000000000322391

Now the value of yfinal is close enough to y0 that the graph of the orbit is eﬀectively closed.

**7.11**

**Multistep Methods**

A single-step numerical method has a short memory. The only information passed from one step to the next is an estimate of the proper step size and, perhaps, the value of f(t_n, y_n) at the point the two steps have in common.

As the name implies, a multistep method has a longer memory. After an initial start-up phase, a *p*th-order multistep method saves up to perhaps a dozen values of the solution, y_{n−p+1}, y_{n−p+2}, . . . , y_{n−1}, y_n, and uses them all to compute y_{n+1}. In fact, these methods can vary both the order, *p*, and the step size, *h*.

Multistep methods tend to be more eﬃcient than single-step methods for problems with smooth solutions and high accuracy requirements. For example, the orbits of planets and deep space probes are computed with multistep methods.
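A minimal sketch of the multistep idea (illustrative Python, not one of the Matlab solvers): the second-order Adams–Bashforth formula y_{n+1} = y_n + h(3f_n − f_{n−1})/2 reuses the slope saved from the previous step. On a smooth test problem, its global error falls like h² while Euler's falls like h.

```python
import math

def solve(f, y0, t0, tf, n, method):
    """Fixed-step integration; method is 'euler' or 'ab2' (Adams-Bashforth 2).
    Illustrative sketch only."""
    h = (tf - t0)/n
    t, y = t0, y0
    f_prev = None
    for _ in range(n):
        fn = f(t, y)
        if method == 'euler' or f_prev is None:
            y = y + h*fn                      # Euler step (also the AB2 start-up)
        else:
            y = y + h*(3.0*fn - f_prev)/2.0   # multistep: reuses the saved slope
        f_prev = fn
        t += h
    return y

f = lambda t, y: y            # test problem y' = y, y(0) = 1, exact y(1) = e
err = lambda m, n: abs(solve(f, 1.0, 0.0, 1.0, n, m) - math.e)

# Halving h cuts the Euler error by about 2 and the AB2 error by about 4.
r_euler = err('euler', 100)/err('euler', 200)
r_ab2   = err('ab2',   100)/err('ab2',   200)
print(r_euler, r_ab2)
```

Both methods cost one new function evaluation per step; the multistep method gets an extra order of accuracy from information it has already paid for.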

**7.12**

**The MATLAB ODE Solvers**

This section is derived from the Algorithms portion of the Matlab Reference Manual page for the ordinary diﬀerential equation solvers.

ode45 is based on an explicit Runge–Kutta (4, 5) formula, the Dormand–Prince pair. It is a one-step solver. In computing y(t_{n+1}), it needs only the solution at the immediately preceding time point, y(t_n). In general, ode45 is the ﬁrst function to try for most problems.

ode23 is an implementation of an explicit Runge–Kutta (2, 3) pair of Bogacki and Shampine. It is often more eﬃcient than ode45 at crude tolerances and in the presence of moderate stiﬀness. Like ode45, ode23 is a one-step solver.

ode113 uses a variable-order Adams–Bashforth–Moulton predictor-corrector algorithm. It is often more eﬃcient than ode45 at stringent tolerances and if the ordinary diﬀerential equation ﬁle function is particularly expensive to evaluate.

ode113 is a multistep solver—it normally needs the solutions at several preceding time points to compute the current solution.

The above algorithms are intended to solve nonstiﬀ systems. If they appear to be unduly slow, try using one of the stiﬀ solvers below.

ode15s is a variable-order solver based on the numerical diﬀerentiation formulas (NDFs). Optionally, it uses the backward diﬀerentiation formulas (BDFs, also known as Gear’s method), which are usually less eﬃcient. Like ode113, ode15s is a multistep solver. Try ode15s if ode45 fails or is very ineﬃcient and you suspect that the problem is stiﬀ, or if you are solving a diﬀerential-algebraic problem.

ode23s is based on a modiﬁed Rosenbrock formula of order two. Because it is a one-step solver, it is often more eﬃcient than ode15s at crude tolerances. It can solve some kinds of stiﬀ problems for which ode15s is not eﬀective.

ode23t is an implementation of the trapezoidal rule using a “free” interpolant.

Use this solver if the problem is only moderately stiﬀ and you need a solution without numerical damping. ode23t can solve diﬀerential-algebraic equations.

ode23tb is an implementation of TR-BDF2, an implicit Runge–Kutta formula with a ﬁrst stage that is a trapezoidal rule step and a second stage that is a BDF of order two. By construction, the same iteration matrix is used in evaluating both stages. Like ode23s, this solver is often more eﬃcient than ode15s at crude tolerances.

Here is a summary table from the Matlab Reference Manual. For each function, it lists the appropriate problem type, the typical accuracy of the method, and the recommended area of usage.

*• *ode45. Nonstiﬀ problems, medium accuracy. Use most of the time. This should be the ﬁrst solver you try.


*• *ode23. Nonstiﬀ problems, low accuracy. Use for large error tolerances or moderately stiﬀ problems.

*• *ode113. Nonstiﬀ problems, low to high accuracy. Use for stringent error tolerances or computationally intensive ordinary diﬀerential equation functions.

*• *ode15s. Stiﬀ problems, low to medium accuracy. Use if ode45 is slow (stiﬀ systems) or there is a mass matrix.

*• *ode23s. Stiﬀ problems, low accuracy. Use for large error tolerances with stiﬀ systems or with a constant mass matrix.

*• *ode23t. Moderately stiﬀ problems, low accuracy. Use for moderately stiﬀ problems where you need a solution without numerical damping.

*• *ode23tb. Stiﬀ problems, low accuracy. Use for large error tolerances with stiﬀ systems or if there is a mass matrix.

**7.13**

**Errors**

Errors enter the numerical solution of the initial value problem from two sources:

*• *discretization error,

*• *roundoﬀ error.

Discretization error is a property of the diﬀerential equation and the numerical method. If all the arithmetic could be performed with inﬁnite precision, discretization error would be the only error present. Roundoﬀ error is a property of the computer hardware and the program. It is usually far less important than the discretization error, except when we try to achieve very high accuracy.

Discretization error can be assessed from two points of view, local and global.

*Local discretization error* is the error that would be made in one step if the previous values were exact and if there were no roundoﬀ error. Let u_n(t) be the solution of the diﬀerential equation determined not by the original initial condition at t_0 but by the value of the computed solution at t_n. That is, u_n(t) is the function of t deﬁned by

u̇_n = f(t, u_n),   u_n(t_n) = y_n.

The local discretization error d_n is the diﬀerence between this theoretical solution and the computed solution (ignoring roundoﬀ) determined by the same data at t_n:

d_n = y_{n+1} − u_n(t_{n+1}).

*Global discretization error* is the diﬀerence between the computed solution, still ignoring roundoﬀ, and the true solution determined by the original initial condition at t_0, that is,

e_n = y_n − y(t_n).


The distinction between local and global discretization error can be easily seen in the special case where f(t, y) does not depend on y. In this situation, the solution is simply an integral, y(t) = ∫_{t_0}^{t} f(τ) dτ. Euler’s method becomes a scheme for numerical quadrature that might be called the “composite lazy man’s rectangle rule.” It uses function values at the left-hand ends of the subintervals rather than at the midpoints:

∫_{t_0}^{t_N} f(τ) dτ ≈ Σ_{n=0}^{N−1} h_n f(t_n).

The local discretization error is the error in one subinterval:

d_n = h_n f(t_n) − ∫_{t_n}^{t_{n+1}} f(τ) dτ,

and the global discretization error is the total error:

e_N = Σ_{n=0}^{N−1} h_n f(t_n) − ∫_{t_0}^{t_N} f(τ) dτ.

In this special case, each of the subintegrals is independent of the others (the sum could be evaluated in any order), so the global error is the sum of the local errors:

e_N = Σ_{n=0}^{N−1} d_n.

In the case of a genuine diﬀerential equation where *f *(*t, y*) depends on *y*, the error in any one interval depends on the solutions computed for earlier intervals.

Consequently, the relationship between the global error and the local errors is related to the *stability* of the diﬀerential equation. For a single scalar equation, if the partial derivative ∂f/∂y is positive, then the solution y(t) grows as t increases and the global error will be greater than the sum of the local errors. If ∂f/∂y is negative, then the global error will be less than the sum of the local errors. If ∂f/∂y changes sign, or if we have a nonlinear system of equations where ∂f/∂y is a varying matrix, the relationship between e_N and the sum of the d_n can be quite complicated and unpredictable.

Think of the local discretization error as the deposits made to a bank account and the global error as the overall balance in the account. The partial derivative

*∂f /∂y *acts like an interest rate. If it is positive, the overall balance is greater than the sum of the deposits. If it is negative, the ﬁnal error balance might well be less than the sum of the errors deposited at each step.

Our code ode23tx, like all the production codes in Matlab, only attempts to control the local discretization error. Solvers that try to control estimates of the global discretization error are much more complicated, are expensive to run, and are not very successful.

A fundamental concept in assessing the accuracy of a numerical method is its *order*. The order is deﬁned in terms of the local discretization error obtained if the method is applied to problems with smooth solutions. A method is said to be of order *p* if there is a number *C* so that

|d_n| ≤ C h_n^{p+1}.

The number *C* might depend on the partial derivatives of the function deﬁning the diﬀerential equation and on the length of the interval over which the solution is sought, but it should be independent of the step number *n* and the step size h_n.

The above inequality can be abbreviated using “big-oh notation”:

d_n = O(h_n^{p+1}).

For example, consider Euler’s method:

y_{n+1} = y_n + h_n f(t_n, y_n).

Assume the local solution u_n(t) has a continuous second derivative. Then, using Taylor series near the point t_n,

u_n(t) = u_n(t_n) + (t − t_n) u′_n(t_n) + O((t − t_n)²).

Using the diﬀerential equation and the initial condition deﬁning u_n(t),

u_n(t_{n+1}) = y_n + h_n f(t_n, y_n) + O(h_n²).

Consequently,

d_n = y_{n+1} − u_n(t_{n+1}) = O(h_n²).

We conclude that *p *= 1, so Euler’s method is ﬁrst order. The Matlab naming conventions for ordinary diﬀerential equation solvers would imply that a function using Euler’s method by itself, with ﬁxed step size and no error estimate, should be called ode1.
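As a sketch of such a hypothetical ode1 (the name and interface here are invented for illustration, and the code is Python rather than Matlab), together with a numerical check that one Euler step has local error O(h²) while the error at a fixed endpoint is only O(h):

```python
import math

def ode1(f, tspan, y0, n):
    """Hypothetical fixed-step Euler solver; name invented for illustration."""
    t0, tf = tspan
    h = (tf - t0)/n
    t, y = t0, y0
    for _ in range(n):
        y = y + h*f(t, y)
        t = t + h
    return y

# Local error of a single Euler step on y' = y, y(0) = 1, is O(h^2):
# halving h reduces it by a factor of about 4.
d = lambda h: abs((1.0 + h) - math.exp(h))
ratio_local = d(0.1)/d(0.05)

# Global error at t = 1 is only O(h): doubling the step count halves it.
e = lambda n: abs(ode1(lambda t, y: y, (0.0, 1.0), 1.0, n) - math.e)
ratio_global = e(100)/e(200)

print(ratio_local, ratio_global)
```

The contrast between the two ratios is exactly the p + 1 versus p distinction discussed below: N steps, each with an O(h^{p+1}) local error, accumulate to an O(h^p) global error.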

Now consider the global discretization error at a ﬁxed point t = t_f. As accuracy requirements are increased, the step sizes h_n will decrease, and the total number of steps N required to reach t_f will increase. Roughly, we shall have

N = (t_f − t_0)/h,

where h is the average step size. Moreover, the global error e_N can be expressed as a sum of N local errors coupled by factors describing the stability of the equations. These factors do not depend in a strong way on the step sizes, and so we can say roughly that if the local error is O(h^{p+1}), then the global error will be N·O(h^{p+1}) = O(h^p). This is why p + 1 was used instead of p as the exponent in the deﬁnition of order.

For Euler’s method, p = 1, so decreasing the average step size by a factor of 2 decreases the average local error by a factor of roughly 2^{p+1} = 4, but about twice as many steps are required to reach t_f, so the global error is decreased by a factor of only 2^p = 2. With higher order methods, the global error for smooth solutions is reduced by a much larger factor.


It should be pointed out that in discussing numerical methods for ordinary diﬀerential equations, the word “order” can have any of several diﬀerent meanings. The order of a diﬀerential equation is the index of the highest derivative appearing. For example, d²y/dt² = −y is a second-order diﬀerential equation. The order of a system of equations sometimes refers to the number of equations in the system. For example, ẏ = 2y − yz, ż = −z + yz is a second-order system. The order of a numerical method is what we have been discussing here. It is the power of the step size that appears in the expression for the global error.

One way of checking the order of a numerical method is to examine its behavior if f(t, y) is a polynomial in t and does not depend on y. If the method is exact for t^{p−1}, but not for t^p, then its order is not more than p. (The order could be less than p if the method’s behavior for general functions does not match its behavior for polynomials.) Euler’s method is exact if f(t, y) is constant, but not if f(t, y) = t, so its order is not greater than one.
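This polynomial check is easy to script. The following illustrative Python sketch (not the book's code) confirms that Euler's method, viewed as the lazy man's rectangle rule, is exact for f(t) = 1 but not for f(t) = t.

```python
# Check Euler's order by testing polynomial integrands f(t) independent of y.
def euler_quad(f, t0, t1, n):
    """Integrate y' = f(t), y(t0) = 0 by Euler: the 'lazy man's rectangle rule'."""
    h = (t1 - t0)/n
    return sum(h*f(t0 + k*h) for k in range(n))

# Exact (to roundoff) for constants...
assert abs(euler_quad(lambda t: 1.0, 0.0, 1.0, 10) - 1.0) < 1e-12

# ...but not for f(t) = t: the integral of t over [0, 1] is 1/2,
# while the left-endpoint rule with n = 10 gives 0.45.
assert abs(euler_quad(lambda t: t, 0.0, 1.0, 10) - 0.5) > 0.01
```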

With modern computers, using IEEE ﬂoating-point double-precision arithmetic, the roundoﬀ error in the computed solution only begins to become important if very high accuracies are requested or the integration is carried out over a long interval. Suppose we integrate over an interval of length L = t_f − t_0. If the roundoﬀ error in one step is of size ϵ, then the worst the roundoﬀ error can be after N steps of size h = L/N is something like

Nϵ = Lϵ/h.

For a method with global discretization error of size Ch^p, the total error is something like

Ch^p + Lϵ/h.

For the roundoﬀ error to be comparable with the discretization error, we need

h ≈ (Lϵ/C)^{1/(p+1)}.

The number of steps taken with this step size is roughly

N ≈ L (C/(Lϵ))^{1/(p+1)}.

Here are the numbers of steps for various orders p if L = 20, C = 100, and ϵ = 2^{−52}:

p     N
1     4.5 · 10^17
3     5,647,721
5     37,285
10    864

These values of p are the orders for Euler’s method and for the Matlab functions ode23 and ode45, and a typical choice for the order in the variable-order method used by ode113. We see that the low-order methods have to take an impractically large number of steps before this worst-case roundoﬀ error estimate becomes signiﬁcant. Even more steps are required if we assume the roundoﬀ error at each step varies randomly. The variable-order multistep function ode113 is capable of achieving such high accuracy that roundoﬀ error can be a bit more signiﬁcant with it.

**7.14**

**Performance**

We have carried out an experiment to see how all this applies in practice. The diﬀerential equation is the harmonic oscillator

ẍ(t) = −x(t)

with initial conditions x(0) = 1, ẋ(0) = 0, over the interval 0 ≤ t ≤ 10π. The interval is ﬁve periods of the periodic solution, so the global error can be computed simply as the diﬀerence between the initial and ﬁnal values of the solution. Since the solution neither grows nor decays with t, the global error should be roughly proportional to the local error.

The following Matlab script uses odeset to change both the relative and the absolute tolerances. The reﬁnement level is set so that one step of the algorithm generates one row of output.

y0 = [1 0];
for k = 1:13
   tol = 10^(-k);
   opts = odeset('reltol',tol,'abstol',tol,'refine',1);
   tic
   [t,y] = ode23(@harmonic,[0 10*pi],y0',opts);
   time = toc;
   steps = length(t)-1;
   err = max(abs(y(end,:)-y0));
end

The diﬀerential equation is deﬁned in harmonic.m.

function ydot = harmonic(t,y)
ydot = [y(2); -y(1)];

The script was run three times, with ode23, ode45, and ode113. The ﬁrst plot in Figure 7.9 shows how the global error varies with the requested tolerance for the three routines. We see that the actual error tracks the requested tolerance quite well. For ode23, the global error is about 36 times the tolerance; for ode45, it is about 4 times the tolerance; and for ode113, it varies between 1 and 45 times the tolerance.

The second plot in Figure 7.9 shows the numbers of steps required. The results also ﬁt our model quite well. Let τ denote the tolerance 10^{−k}. For ode23, the number of steps is about 10τ^{−1/3}, which is the expected behavior for a third-order method. For ode45, the number of steps is about 9τ^{−1/5}, which is the expected behavior for a ﬁfth-order method. For ode113, the number of steps reﬂects the fact that the solution is very smooth, so the method was often able to use its maximum order, 13.

**Figure 7.9.** *Performance of ordinary diﬀerential equation solvers.*

The third plot in Figure 7.9 shows the execution times, in seconds, on an 800 MHz Pentium III laptop. For this problem, ode45 is the fastest method for tolerances of roughly 10^{−6} or larger, while ode113 is the fastest method for more stringent tolerances. The low-order method, ode23, takes a very long time to obtain high accuracy.

This is just one experiment, on a problem with a very smooth and stable solution.

**7.15**

**Further Reading**

The Matlab ordinary diﬀerential equation suite is described in [7]. Additional material on the numerical solution of ordinary diﬀerential equations, and especially stiﬀness, is available in Ascher and Petzold [1], Brennan, Campbell, and Petzold [2], and Shampine [6].

**Exercises**

7.1. The standard form of an ODE initial value problem is

ẏ = f(t, y),   y(t_0) = y_0.

Express this ODE problem in the standard form:

ü = v̇/(1 + t²) − sin r,
v̈ = −u̇/(1 + t²) + cos r,

where r = √(u̇² + v̇²). The initial conditions are

7.2. You invest $100 in a savings account paying 6% interest per year. Let *y*(*t*) be the amount in your account after *t* years. If the interest is compounded continuously, then *y*(*t*) solves the ODE initial value problem

ẏ = ry,   r = 0.06,
y(0) = 100.

Compounding interest at a discrete time interval, *h*, corresponds to using a ﬁnite diﬀerence method to approximate the solution to the diﬀerential equation. The time interval *h* is expressed as a fraction of a year. For example, compounding monthly has h = 1/12. The quantity y_n, the balance after *n* time intervals, approximates the continuously compounded balance y(nh).

The banking industry eﬀectively uses Euler’s method to compute compound interest.

y_0 = y(0),
y_{n+1} = y_n + h r y_n.

This exercise asks you to investigate the use of higher order diﬀerence methods to compute compound interest. What is the balance in your account after 10 years with each of the following methods of compounding interest?

Euler’s method, yearly.

Euler’s method, monthly.

Midpoint rule, monthly.

Trapezoid rule, monthly.

BS23 algorithm, monthly.

Continuous compounding.
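As a hedged sketch of the computation this exercise asks for (illustrative Python, and the values are computed here, not taken from an answer key): Euler compounding with r = 0.06 at yearly and monthly intervals, compared with continuous compounding.

```python
import math

def euler_balance(r, h, years, y0=100.0):
    """Compound interest by Euler's method: y_{n+1} = y_n + h*r*y_n."""
    y = y0
    for _ in range(round(years/h)):
        y = y + h*r*y
    return y

yearly  = euler_balance(0.06, 1.0,  10)    # same as 100*(1.06)**10
monthly = euler_balance(0.06, 1/12, 10)    # same as 100*(1 + 0.06/12)**120
cont    = 100.0*math.exp(0.06*10)          # continuous compounding, 100*e^{rt}

# More frequent compounding gives a larger balance, bounded above by e^{rt}.
print(yearly, monthly, cont)
```

The higher-order methods listed above (midpoint, trapezoid, BS23) land between monthly Euler and the continuous limit, approaching 100e^{0.6} as their order increases.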

7.3. (a) Show experimentally or algebraically that the BS23 algorithm is exact for f(t, y) = 1, f(t, y) = t, and f(t, y) = t², but not for f(t, y) = t³.

(b) When is the ode23 error estimator exact?

7.4. The error function erf(*x*) is usually deﬁned by an integral,

erf(x) = (2/√π) ∫_0^x e^{−x²} dx,

but it can also be deﬁned as the solution to the diﬀerential equation

y′(x) = (2/√π) e^{−x²},
y(0) = 0.

Use ode23tx to solve this diﬀerential equation on the interval 0 ≤ x ≤ 2. Compare the results with the built-in Matlab function erf(x) at the points chosen by ode23tx.

7.5. (a) Write an M-ﬁle named myrk4.m, in the style of ode23tx.m, that implements the classical Runge–Kutta ﬁxed step size algorithm. Instead of an optional fourth argument rtol or opts, the required fourth argument should be the step size *h*. Here is the proposed preamble.

36 Chapter 7. Ordinary Diﬀerential Equations

function [tout,yout] = myrk4(F,tspan,y0,h,varargin)

% MYRK4 Classical fourth-order Runge-Kutta.

% Usage is the same as ODE23TX except the fourth

% argument is a fixed step size h.

% MYRK4(F,TSPAN,Y0,H) with TSPAN = [T0 TF] integrates

% the system of differential equations y’ = f(t,y)

% from t = T0 to t = TF. The initial condition
% is y(T0) = Y0.

% With no output arguments, MYRK4 plots the solution.

% With two output arguments, [T,Y] = MYRK4(..) returns

% T and Y so that Y(:,k) is the approximate solution at

% T(k). More than four input arguments,

% MYRK4(..,P1,P2,..), are passed on to F,

% F(T,Y,P1,P2,...).

(b) Roughly, how should the error behave if the step size *h* for classical Runge–Kutta is cut in half? (Hint: Why is there a "4" in the name of myrk4?) Run an experiment to illustrate this behavior.
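The hint can be tested directly: halving *h* for a fourth-order method should reduce the error by a factor of about 2⁴ = 16. A Python sketch of the experiment, with our own helper names:

```python
import math

def rk4(f, t0, y0, h, n):
    """Classical fourth-order Runge-Kutta with fixed step h; y0 is a list."""
    t, y = t0, list(y0)
    for _ in range(n):
        k1 = f(t, y)
        k2 = f(t + h/2, [yi + h/2*ki for yi, ki in zip(y, k1)])
        k3 = f(t + h/2, [yi + h/2*ki for yi, ki in zip(y, k2)])
        k4 = f(t + h, [yi + h*ki for yi, ki in zip(y, k3)])
        y = [yi + h/6*(a + 2*b + 2*c + d)
             for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]
        t += h
    return y

def harmonic(t, y):            # y'' = -y as a first-order system
    return [y[1], -y[0]]

def period_error(nsteps):
    """Global error after one full period of the harmonic oscillator."""
    h = 2*math.pi / nsteps
    y = rk4(harmonic, 0.0, [1.0, 0.0], h, nsteps)
    return abs(y[0] - 1.0) + abs(y[1])

ratio = period_error(100) / period_error(200)   # expect roughly 2**4 = 16
```

The observed ratio near 16 is the experimental signature of fourth-order accuracy.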

(c) If you integrate the simple harmonic oscillator ÿ = −*y* over one full period, 0 ≤ *t* ≤ 2π, you can compare the initial and final values of *y* to get a measure of the global accuracy. If you use your myrk4 with a step size *h* = π/50, you should find that it takes 100 steps and computes a result with an error of about 10⁻⁶. Compare this with the number of steps required by ode23, ode45, and ode113 if the relative tolerance is set to 10⁻⁶ and the refinement level is set to one. This is a problem with a very smooth solution, so you should find that ode23 requires more steps, while ode45 and ode113 require fewer.

7.6. The ordinary differential equation problem

ẏ = −1000(y − sin t) + cos t,  y(0) = 1,

on the interval 0 ≤ *t* ≤ 1 is mildly stiff.

(a) Find the exact solution, either by hand or using dsolve from the Symbolic Toolbox.

(b) Compute the solution with ode23tx. How many steps are required?

(c) Compute the solution with the stiﬀ solver ode23s. How many steps are required?

(d) Plot the two computed solutions on the same graph, with line style ’.’ for the ode23tx solution and ’o’ for the ode23s solution.

(e) Zoom in, or change the axis settings, to show a portion of the graph where the solution is varying rapidly. You should see that both solvers are taking small steps.

(f) Show a portion of the graph where the solution is varying slowly. You should see that ode23tx is taking much smaller steps than ode23s.
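The stiffness can be seen by comparing an explicit and an implicit method on this problem; the exact solution is y(t) = sin t + e^(−1000t), which you can check by substitution. A Python sketch (not the ode23tx/ode23s comparison the exercise asks for, but the same phenomenon, with our own helper names):

```python
import math

# exact solution: y(t) = sin(t) + exp(-1000*t)
def forward_euler(h):
    t, y = 0.0, 1.0
    for _ in range(round(1.0 / h)):
        y += h * (-1000.0 * (y - math.sin(t)) + math.cos(t))
        t += h
    return y

def backward_euler(h):
    # the problem is linear in y, so the implicit step can be solved exactly
    t, y = 0.0, 1.0
    for _ in range(round(1.0 / h)):
        t += h
        y = (y + h * (1000.0 * math.sin(t) + math.cos(t))) / (1.0 + 1000.0 * h)
    return y

unstable = forward_euler(1.0 / 250)   # |1 - 1000 h| = 3 > 1: explodes
stable = backward_euler(1.0 / 100)    # implicit method is stable for any h
```

Forward Euler needs h < 2/1000 just for stability, even though the solution is essentially sin *t* after a tiny transient; the implicit method tracks sin *t* with far larger steps.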


7.7. The following problems all have the same solution on 0 ≤ *t* ≤ π/2:

ẏ = cos t,  y(0) = 0,
ẏ = √(1 − y²),  y(0) = 0,
ÿ = −y,  y(0) = 0,  ẏ(0) = 1,
ÿ = −sin t,  y(0) = 0,  ẏ(0) = 1.

(a) What is the common solution *y*(*t*)?


(c) What is the Jacobian, *J* = ∂*f*/∂*y*, for each problem? What happens to each Jacobian as *t* approaches π/2?

(d) The work required by a Runge–Kutta method to solve an initial value problem ẏ = *f*(*t, y*) depends on the function *f*(*t, y*), not just the solution, *y*(*t*). Use odeset to set both reltol and abstol to 10⁻⁹. How much work does ode45 require to solve each problem? Why are some problems more work than others?

(e) What happens to the computed solutions if the interval is changed to 0 ≤ *t* ≤ π?

(f) What happens on 0 ≤ *t* ≤ π if the second problem is changed to

ẏ = √|1 − y²|,  y(0) = 0?

7.8. Use the jacobian and eig functions in the Symbolic Toolbox to verify that the Jacobian for the two-body problem is

J =
    [        0                  0           1   0 ]
    [        0                  0           0   1 ]
    [ (2y₁² − y₂²)/r⁵       3y₁y₂/r⁵       0   0 ]
    [    3y₁y₂/r⁵       (2y₂² − y₁²)/r⁵    0   0 ]

and that its eigenvalues are

λ = (1/r^(3/2)) · { √2, −√2, i, −i }.

7.9. Verify that the matrix in the Lorenz equations

A =
    [ −β    0    η ]
    [  0   −σ    σ ]
    [ −η    ρ   −1 ]

is singular if and only if

η = ±√(β(ρ − 1)).

Verify that the corresponding null vector is

    [ ρ − 1 ]
    [   η   ]
    [   η   ].


7.10. The Jacobian matrix *J *for the Lorenz equations is not *A*, but is closely related to *A*. Find *J *, compute its eigenvalues at one of the ﬁxed points, and verify that the ﬁxed point is unstable.

7.11. Find the largest value of *ρ *in the Lorenz equations for which the ﬁxed point is stable.

7.12. All the values of *ρ* available with lorenzgui except *ρ* = 28 give trajectories that eventually settle down to stable periodic orbits. In his book on the Lorenz equations, Sparrow classifies a periodic orbit by what we might call its *signature*, a sequence of +'s and −'s specifying the order of the critical points that the trajectory circles during one period. A single + or − would be the signature of a trajectory that circles just one critical point, except that no such orbits exist. The signature '+−' indicates that the trajectory circles each critical point once. The signature '+ + + − + − − −' would indicate a very fancy orbit that circles the critical points a total of eight times before repeating itself.

What are the signatures of the four diﬀerent periodic orbits generated by lorenzgui? Be careful—each of the signatures is diﬀerent, and *ρ *= 99*.*65 is particularly delicate.

7.13. What are the periods of the periodic orbits generated for the diﬀerent values of *ρ *available with lorenzgui?

7.14. The Matlab demos directory contains an M-file, orbitode, that uses ode45 to solve an instance of the *restricted three-body problem*. This involves the orbit of a light object around two heavier objects, such as an Apollo capsule around the earth and the moon. Run the demo and then locate its source code with the statements

orbitode
which orbitode

Make your own copy of orbitode.m. Find these two statements:

tspan = [0 7];
y0 = [1.2; 0; 0; -1.04935750983031990726];

These statements set the time interval for the integration and the initial position and velocity of the light object. Our question is, Where do these values come from? To answer this question, find the statement

[t,y,te,ye,ie] = ode45(@f,tspan,y0,options);

Remove the semicolon and insert three more statements after it:

te
ye
ie

Run the demo again. Explain how the values of te, ye, and ie are related to tspan and y0.


7.15. A classical model in mathematical ecology is the Lotka–Volterra predatorprey model. Consider a simple ecosystem consisting of rabbits that have an inﬁnite supply of food and foxes that prey on the rabbits for their food. This is modeled by a pair of nonlinear, ﬁrst-order diﬀerential equations:

dr/dt = 2r − αrf,  r(0) = r₀,
df/dt = −f + αrf,  f(0) = f₀,

where *t* is time, *r*(*t*) is the number of rabbits, *f*(*t*) is the number of foxes, and *α* is a positive constant. If *α* = 0, the two populations do not interact, the rabbits do what rabbits do best, and the foxes die off from starvation. If *α* > 0, the foxes encounter the rabbits with a probability that is proportional to the product of their numbers. Such an encounter results in a decrease in the number of rabbits and (for less obvious reasons) an increase in the number of foxes.

The solutions to this nonlinear system cannot be expressed in terms of other known functions; the equations must be solved numerically. It turns out that the solutions are always periodic, with a period that depends on the initial conditions. In other words, for any *r*(0) and *f*(0), there is a value *t* = tₚ when both populations return to their original values. Consequently, for all *t*,

r(t + tₚ) = r(t),  f(t + tₚ) = f(t).

(a) Compute the solution with r₀ = 300, f₀ = 150, and α = 0.01. You should find that tₚ is close to 5. Make two plots, one of *r* and *f* as functions of *t* and one a phase plane plot with *r* as one axis and *f* as the other.

(b) Compute and plot the solution with r₀ = 15, f₀ = 22, and α = 0.01. You should find that tₚ is close to 6.62.

(c) Compute and plot the solution with r₀ = 102, f₀ = 198, and α = 0.01. Determine the period tₚ either by trial and error or with an event handler.

(d) The point (r₀, f₀) = (1/α, 2/α) is a stable equilibrium point. If the populations have these initial values, they do not change. If the initial populations are close to these values, they do not change very much. Let u(t) = r(t) − 1/α and v(t) = f(t) − 2/α. The functions u(t) and v(t) satisfy another nonlinear system of differential equations, but if the *uv* terms are ignored, the system becomes linear. What is this linear system? What is the period of its periodic solutions?

7.16. Many modiﬁcations of the Lotka–Volterra predator-prey model (see previous problem) have been proposed to more accurately reﬂect what happens in nature. For example, the number of rabbits can be prevented from growing indeﬁnitely by changing the ﬁrst equation as follows:

dr/dt = 2(1 − r/R) r − αrf,  r(0) = r₀,
df/dt = −f + αrf,  f(0) = f₀,

where *t* is time, *r*(*t*) is the number of rabbits, *f*(*t*) is the number of foxes, *α* is a positive constant, and *R* is a positive constant. Because *α* is positive, dr/dt is negative whenever r ≥ R. Consequently, the number of rabbits can never exceed *R*.

For α = 0.01, compare the behavior of the original model with the behavior of this modified model with R = 400. In making this comparison, solve the equations with r₀ = 300 and f₀ = 150 over 50 units of time. Make four different plots:

*• *number of foxes and number of rabbits versus time for the original model,

*• *number of foxes and number of rabbits versus time for the modiﬁed model,

*• *number of foxes versus number of rabbits for the original model,

*• *number of foxes versus number of rabbits for the modiﬁed model.

For all plots, label all curves and all axes and put a title on the plot. For the last two plots, set the aspect ratio so that equal increments on the *x*- and *y*-axes are equal in size.

7.17. An 80-kg paratrooper is dropped from an airplane at a height of 600 m. After 5 s the chute opens. The paratrooper's height as a function of time, *y*(*t*), is given by

ÿ = −g + α(t)/m,
y(0) = 600 m,

where g = 9.81 m/s² is the acceleration due to gravity and m = 80 kg is the paratrooper's mass. The air resistance α(t) is proportional to the square of the velocity, with different proportionality constants before and after the chute opens.

α(t) = { K₁ ẏ(t)²,  t < 5 s,
       { K₂ ẏ(t)²,  t ≥ 5 s.

(a) Find the analytical solution for free-fall, K₁ = 0, K₂ = 0. At what height does the chute open? How long does it take to reach the ground? What is the impact velocity? Plot the height versus time and label the plot appropriately.

(b) Consider the case K₁ = 1/15, K₂ = 4/15. At what height does the chute open? How long does it take to reach the ground? What is the impact velocity? Make a plot of the height versus time and label the plot appropriately.

7.18. Determine the trajectory of a spherical cannonball in a stationary Cartesian coordinate system that has a horizontal *x*-axis, a vertical *y*-axis, and an origin at the launch point. The initial velocity of the projectile in this coordinate system has magnitude v₀ and makes an angle of θ₀ radians with respect to the *x*-axis. The only forces acting on the projectile are gravity and the aerodynamic drag, *D*, which depends on the projectile's speed relative to any wind that might be present. The equations describing the motion of the projectile are

ẋ = v cos θ,
ẏ = v sin θ,
θ̇ = −(g/v) cos θ,
v̇ = −D/m − g sin θ.

Constants for this problem are the acceleration of gravity, g = 9.81 m/s², the mass, m = 15 kg, and the initial speed, v₀ = 50 m/s. The wind is assumed to be horizontal and its speed is a specified function of time, w(t). The aerodynamic drag is proportional to the square of the projectile's velocity relative to the wind:

D(t) = (cρs/2) ((ẋ − w(t))² + ẏ²),

where c = 0.2 is the drag coefficient, ρ = 1.29 kg/m³ is the density of air, and s = 0.25 m² is the projectile's cross-sectional area.

Consider four different wind conditions.

*• *No wind. *w*(*t*) = 0 for all *t*.

*• *Steady headwind. *w*(*t*) = *−*10 m/s for all *t*.

*• *Intermittent tailwind. *w*(*t*) = 10 m/s if the integer part of *t *is even, and zero otherwise.

*• *Gusty wind. *w*(*t*) is a Gaussian random variable with mean zero and standard deviation 10 m/s.

The integer part of a real number *t* is denoted by ⌊t⌋ and is computed in Matlab by floor(t). A Gaussian random variable with mean 0 and standard deviation σ is generated by sigma*randn (see Chapter 9, Random Numbers).

For each of these four wind conditions, carry out the following computations.

Find the 17 trajectories whose initial angles are multiples of 5 degrees, that is, θ₀ = kπ/36 radians, k = 1, 2, . . . , 17. Plot all 17 trajectories on one figure.

Determine which of these trajectories has the greatest downrange distance.

For that trajectory, report the initial angle in degrees, the ﬂight time, the downrange distance, the impact velocity, and the number of steps required by the ordinary diﬀerential equation solver.

Which of the four wind conditions requires the most computation? Why?
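A Python sketch of the no-wind case (fixed-step RK4 with our own helper names, rather than a Matlab solver); it integrates the four equations of motion until *y* crosses zero and interpolates the downrange distance:

```python
import math

g, m = 9.81, 15.0
c, rho, s = 0.2, 1.29, 0.25

def rhs(u, w=0.0):
    """u = [x, y, theta, v]; w is the horizontal wind speed."""
    x, y, th, v = u
    xdot, ydot = v*math.cos(th), v*math.sin(th)
    D = 0.5*c*rho*s*((xdot - w)**2 + ydot**2)
    return [xdot, ydot, -g*math.cos(th)/v, -D/m - g*math.sin(th)]

def shoot(theta0, v0=50.0, h=1e-3):
    u = [0.0, 0.0, theta0, v0]
    while True:
        k1 = rhs(u)
        k2 = rhs([a + h/2*b for a, b in zip(u, k1)])
        k3 = rhs([a + h/2*b for a, b in zip(u, k2)])
        k4 = rhs([a + h*b for a, b in zip(u, k3)])
        unew = [a + h/6*(p + 2*q + 2*r_ + t_)
                for a, p, q, r_, t_ in zip(u, k1, k2, k3, k4)]
        if unew[1] < 0.0:          # crossed the ground: interpolate x
            frac = u[1] / (u[1] - unew[1])
            return u[0] + frac * (unew[0] - u[0])
        u = unew

range_45 = shoot(math.pi / 4)                 # no-wind 45 degree shot
vacuum = 50.0**2 * math.sin(math.pi/2) / g    # drag-free range, about 255 m
```

Drag shortens the range well below the vacuum value, which is one easy sanity check on the integration.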

7.19. In the 1968 Olympic games in Mexico City, Bob Beamon established a world record with a long jump of 8.90 m. This was 0.80 m longer than the previous world record. Since 1968, Beamon’s jump has been exceeded only once in competition, by Mike Powell’s jump of 8.95 m in Tokyo in 1991. After Beamon’s remarkable jump, some people suggested that the lower air resistance at Mexico City’s 2250 m altitude was a contributing factor. This problem examines that possibility.

The mathematical model is the same as the cannonball trajectory in the previous exercise. The fixed Cartesian coordinate system has a horizontal *x*-axis, a vertical *y*-axis, and an origin at the takeoff board. The jumper's initial velocity has magnitude v₀ and makes an angle of θ₀ radians with respect to the *x*-axis. The only forces acting after takeoff are gravity and the aerodynamic drag, *D*, which is proportional to the square of the magnitude of the velocity. There is no wind. The equations describing the jumper's motion are

ẋ = v cos θ,
ẏ = v sin θ,
θ̇ = −(g/v) cos θ,
v̇ = −D/m − g sin θ.

The drag is

D = (cρs/2) (ẋ² + ẏ²).

Constants for this exercise are the acceleration of gravity, g = 9.81 m/s², the mass, m = 80 kg, the drag coefficient, c = 0.72, the jumper's cross-sectional area, s = 0.50 m², and the takeoff angle, θ₀ = 22.5° = π/8 radians.

Compute four different jumps, with different values for initial velocity, v₀, and air density, ρ. The length of each jump is x(t_f), where the air time, t_f, is determined by the condition y(t_f) = 0.

(a) "Nominal" jump at high altitude. v₀ = 10 m/s and ρ = 0.94 kg/m³.

(b) "Nominal" jump at sea level. v₀ = 10 m/s and ρ = 1.29 kg/m³.

(c) Sprinter's approach at high altitude. ρ = 0.94 kg/m³. Determine v₀ so that the length of the jump is Beamon's record, 8.90 m.

(d) Sprinter's approach at sea level. ρ = 1.29 kg/m³ and v₀ is the value determined in (c).

Present your results by completing the following table.

v0        theta0    rho       distance
10.0000   22.5000   0.9400    ???
10.0000   22.5000   1.2900    ???
???       22.5000   0.9400    8.9000
???       22.5000   1.2900    ???

Which is more important, the air density or the jumper’s initial velocity?

7.20. A pendulum is a point mass at the end of a weightless rod of length *L* supported by a frictionless pin. If gravity is the only force acting on the pendulum, its oscillation is modeled by

θ̈ = −(g/L) sin θ.

Here θ is the angular position of the rod, with θ = 0 if the rod is hanging down from the pin and θ = π if the rod is precariously balanced above the pin. Take L = 30 cm and g = 981 cm/s². The initial conditions are

θ(0) = θ₀,  θ̇(0) = 0.

If the initial angle θ₀ is not too large, then the approximation sin θ ≈ θ leads to a *linearized* equation

θ̈ = −(g/L)θ

that is easily solved.

(a) What is the period of oscillation for the linearized equation?

If we do not make the assumption that θ₀ is small and do not replace sin θ by θ, then it turns out that the period *T* of the oscillatory motion is given by

T(θ₀) = 4(L/g)^(1/2) K(sin²(θ₀/2)),

where K(s²) is the complete elliptic integral of the first kind, given by

K(s²) = ∫₀¹ dt / (√(1 − s²t²) √(1 − t²)).

(b) Compute and plot T(θ₀) for 0 ≤ θ₀ ≤ 0.9999π two different ways. Use the Matlab function ellipke and also use numerical quadrature with quadtx.

Verify that the two methods yield the same results, to within the quadrature tolerance.
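The complete elliptic integral can also be computed with the arithmetic–geometric mean iteration, K(m) = π/(2·agm(1, √(1−m))) — a well-known identity, though not the ellipke/quadtx route the exercise asks for. A Python sketch with our own helper names:

```python
import math

def K(m):
    """Complete elliptic integral of the first kind, parameter m = s**2,
    via the arithmetic-geometric mean: K(m) = pi / (2 * agm(1, sqrt(1-m)))."""
    a, b = 1.0, math.sqrt(1.0 - m)
    while abs(a - b) > 1e-15:
        a, b = (a + b)/2, math.sqrt(a*b)
    return math.pi / (2*a)

L, g = 30.0, 981.0    # cm and cm/s^2, as in the exercise

def period(theta0):
    return 4.0 * math.sqrt(L/g) * K(math.sin(theta0/2)**2)

small = period(1e-4)                   # should approach the linearized period
linear = 2*math.pi*math.sqrt(L/g)      # 2*pi*sqrt(L/g)
```

For small θ₀ the nonlinear period matches 2π√(L/g), and it grows monotonically as θ₀ approaches π.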

(c) Verify that for small θ₀ the linear equation and the nonlinear equation have approximately the same period.

(d) Compute the solutions to the nonlinear model over one period for several different values of θ₀, including values near 0 and near π. Superimpose the phase plane plots of the solutions on one graph.

7.21. What eﬀect does the burning of fossil fuels have on the carbon dioxide in the earth’s atmosphere? Even though today carbon dioxide accounts for only about 350 parts per million of the atmosphere, any increase has profound implications for our climate. An informative background article is available at a Web site maintained by the Lighthouse Foundation [5].

A model developed by J. C. G. Walker [9] was brought to our attention by Eric Roden. The model simulates the interaction of the various forms of carbon that are stored in three regimes: the atmosphere, the shallow ocean, and the deep ocean. The five principal variables in the model are all functions of time:

*p*, partial pressure of carbon dioxide in the atmosphere;
σs, total dissolved carbon concentration in the shallow ocean;
σd, total dissolved carbon concentration in the deep ocean;
αs, alkalinity in the shallow ocean;
αd, alkalinity in the deep ocean.

Three additional quantities are involved in equilibrium equations in the shallow ocean:

hs, hydrogen carbonate in the shallow ocean;
cs, carbonate in the shallow ocean;
ps, partial pressure of gaseous carbon dioxide in the shallow ocean.

The rate of change of the five principal variables is given by five ordinary differential equations. The exchange between the atmosphere and the shallow ocean involves a constant characteristic transfer time *d* and a source term *f*(*t*):

dp/dt = (ps − p)/d + f(t)/μ₁.

The equations describing the exchange between the shallow and deep oceans involve vs and vd, the volumes of the two regimes:

dσs/dt = (1/vs) ((σd − σs)w − k₁ − (ps − p)μ₂/d),
dσd/dt = (1/vd) (k₁ − (σd − σs)w),
dαs/dt = (1/vs) ((αd − αs)w − k₂),
dαd/dt = (1/vd) (k₂ − (αd − αs)w).

The equilibrium between carbon dioxide and the carbonates dissolved in the shallow ocean is described by three nonlinear algebraic equations:

hs = (σs − (σs² − k₃ αs (2σs − αs))^(1/2)) / k₃,
cs = (αs − hs)/2,
ps = k₄ hs²/cs.

The numerical values of the constants involved in the model are

d = 8.64,
μ₁ = 4.95·10²,
μ₂ = 4.95·10⁻²,
vs = 0.12,
vd = 1.23,
w = 10⁻³,
k₁ = 2.19·10⁻⁴,
k₂ = 6.12·10⁻⁵,
k₃ = 0.997148,
k₄ = 6.79·10⁻².


The source term *f *(*t*) describes the burning of fossil fuels in the modern industrial era. We will use a time interval that starts about a thousand years ago and extends a few thousand years into the future:

1000 ≤ t ≤ 5000.

The initial values at *t* = 1000,

p = 1.00, σs = 2.01, σd = 2.23, αs = 2.20, αd = 2.26,

represent preindustrial equilibrium and remain nearly constant as long as the source term *f*(*t*) is zero.

The following table describes one scenario for a source term *f *(*t*) that models the release of carbon dioxide from burning fossil fuels, especially gasoline.

The amounts begin to be signiﬁcant after 1850, peak near the end of this century, and then decrease until the supply is exhausted.

year rate

1000 0.0

1850 0.0

1950 1.0

1980 4.0

2000 5.0

2050 8.0

2080 10.0

2100 10.5

2120 10.0

2150 8.0

2225 3.5

2300 2.0

2500 0.0

5000 0.0

Figure 7.10 shows this source term and its eﬀect on the atmosphere and the ocean. The three graphs in the lower half of the ﬁgure show the atmospheric, shallow ocean, and deep ocean carbon. (The two alkalinity values are not plotted at all because they are almost constant throughout this entire simulation.) Initially, the carbon in the three regimes is nearly at equilibrium and so the amounts hardly change before 1850.

Over the period 1850 ≤ t ≤ 2500, the upper half of Figure 7.10 shows the additional carbon produced by burning fossil fuels entering the system, and the lower half shows the system response. The atmosphere is the first to be affected, showing more than a fourfold increase in 500 years. Almost half of the carbon is then slowly transferred to the shallow ocean and eventually to the deep ocean.


(a) Reproduce Figure 7.10. Use pchiptx to interpolate the fuel table and ode23tx with the default tolerances to solve the diﬀerential equations.

(b) How do the amounts of carbon in the three regimes at year 5000 compare with the amounts at year 1000?

(c) When does the atmospheric carbon dioxide reach its maximum?

(d) These equations are mildly stiﬀ, because the various chemical reactions take place on very diﬀerent time scales. If you zoom in on some portions of the graphs, you should see a characteristic sawtooth behavior caused by the small time steps required by ode23tx. Find such a region.

(e) Experiment with other Matlab ordinary diﬀerential equation solvers, including ode23, ode45, ode113, ode23s, and ode15s. Try various tolerances and report computational costs by using something like odeset(’RelTol’,1.e-6,’AbsTol’,1.e-6,’stats’,’on’);

Which method is preferable for this problem?

7.22. This problem makes use of quadrature, ordinary differential equations, and zero finding to study a nonlinear boundary value problem. The function y(x) is defined on the interval 0 ≤ x ≤ 1 by

y′′ = y² − 1,

**Figure 7.10.** *Carbon in the atmosphere and ocean.* (Upper panel: the fossil fuel source term; lower panel: atmospheric, shallow ocean, and deep ocean carbon, plotted against time in years from 1000 to 5000.)

y(0) = 0,  y(1) = 1.

This problem can be solved four diﬀerent ways. Plot the four solutions obtained on a single ﬁgure, using subplot(2,2,1),..., subplot(2,2,4).

(a) Shooting method. Suppose we know the value of η = y′(0). Then we could use an ordinary differential equation solver like ode23tx or ode45 to solve the initial value problem

y′′ = y² − 1,  y(0) = 0,  y′(0) = η

on the interval 0 ≤ x ≤ 1. Each value of η determines a different solution

y(x; η) and a corresponding value for y(1; η). The desired boundary condition y(1) = 1 leads to the definition of a function of η:

f(η) = y(1; η) − 1.

Write a Matlab function whose argument is η. This function should solve the ordinary differential equation initial value problem and return f(η). Then use fzero or fzerotx to find a value η* so that f(η*) = 0. Finally, use this η* in the initial value problem to get the desired y(x). Report the value of η* you obtain.
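A Python sketch of the shooting method, with fixed-step RK4 for the initial value problem and bisection in place of fzerotx (all helper names are ours):

```python
def rhs(x, u):
    # u = [y, y']; the equation is y'' = y**2 - 1
    return [u[1], u[0]**2 - 1.0]

def y_at_1(eta, n=200):
    """Integrate y(0) = 0, y'(0) = eta to x = 1 with classical RK4."""
    h, x, u = 1.0/n, 0.0, [0.0, eta]
    for _ in range(n):
        k1 = rhs(x, u)
        k2 = rhs(x + h/2, [a + h/2*b for a, b in zip(u, k1)])
        k3 = rhs(x + h/2, [a + h/2*b for a, b in zip(u, k2)])
        k4 = rhs(x + h, [a + h*b for a, b in zip(u, k3)])
        u = [a + h/6*(p + 2*q + 2*r + s)
             for a, p, q, r, s in zip(u, k1, k2, k3, k4)]
        x += h
    return u[0]

def f(eta):
    return y_at_1(eta) - 1.0

lo, hi = 0.0, 3.0            # f(0) < 0 < f(3) brackets a root
for _ in range(60):          # plain bisection
    mid = 0.5*(lo + hi)
    if f(lo) * f(mid) <= 0.0:
        hi = mid
    else:
        lo = mid
eta_star = 0.5*(lo + hi)
```

Bisection is slower than the zero finders the exercise suggests, but it makes the structure of the shooting method explicit.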

(b) Quadrature. Observe that y′′ = y² − 1 can be written

(d/dx) ((y′)²/2 − y³/3 + y) = 0.

This means that the expression

κ = (y′)²/2 − y³/3 + y

is actually constant. Because y(0) = 0, we have y′(0) = √(2κ). So, if we could find the constant κ, the boundary value problem would be converted into an initial value problem. Integrating the equation gives

dx = dy / √(2(κ + y³/3 − y)),

x = ∫₀ʸ h(y, κ) dy,

where

h(y, κ) = 1 / √(2(κ + y³/3 − y)).


This, together with the boundary condition y(1) = 1, leads to the definition of a function g(κ):

g(κ) = ∫₀¹ h(y, κ) dy − 1.

You need two Matlab functions, one that computes h(y, κ) and one that computes g(κ). They can be two separate M-files, but a better idea is to make h(y, κ) a function within g(κ). The function g(κ) should use quadtx to evaluate the integral of h(y, κ). The parameter κ is passed as an extra argument from g, through quadtx, to h. Then fzerotx can be used to find a value κ* so that g(κ*) = 0. Finally, this κ* provides the second initial value necessary for an ordinary differential equation solver to compute y(x). Report the value of κ* you obtain.

(c and d) Nonlinear finite differences. Partition the interval into n + 1 equal subintervals with spacing h = 1/(n + 1):

xᵢ = ih,  i = 0, . . . , n + 1.

Replace the differential equation with a nonlinear system of difference equations involving n unknowns, y₁, y₂, . . . , yₙ:

yᵢ₊₁ − 2yᵢ + yᵢ₋₁ = h²(yᵢ² − 1),  i = 1, . . . , n.

The boundary conditions are y₀ = 0 and yₙ₊₁ = 1.

A convenient way to compute the vector of second differences involves the n-by-n tridiagonal matrix A with −2's on the diagonal, 1's on the super- and subdiagonals, and 0's elsewhere. You can generate a sparse form of this matrix with

e = ones(n,1);

A = spdiags([e -2*e e],[-1 0 1],n,n);

The boundary conditions y₀ = 0 and yₙ₊₁ = 1 can be represented by the n-vector b, with bᵢ = 0, i = 1, . . . , n − 1, and bₙ = 1. The vector formulation of the nonlinear difference equation is

Ay + b = h²(y² − 1),

where y² is the vector containing the squares of the elements of y, that is, the Matlab element-by-element power y.^2. There are at least two ways to solve this system.

(c) Linear iteration. This is based on writing the difference equation in the form

Ay = h²(y² − 1) − b.

Start with an initial guess for the solution vector y. The iteration consists of plugging the current y into the right-hand side of this equation and then solving the resulting linear system for a new y. This makes repeated use of the sparse backslash operator with the iterated assignment statement

y = A\(h^2*(y.^2 - 1) - b)

It turns out that this iteration converges linearly and provides a robust method for solving the nonlinear difference equations. Report the value of n you use and the number of iterations required.
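A Python sketch of the linear iteration (our own helper names, with a hand-coded tridiagonal solve standing in for the sparse backslash):

```python
def tridiag_solve(sub, diag, sup, rhs):
    """Thomas algorithm for a tridiagonal system (sub[0], sup[-1] unused)."""
    n = len(rhs)
    cp, dp = [0.0]*n, [0.0]*n
    cp[0], dp[0] = sup[0]/diag[0], rhs[0]/diag[0]
    for i in range(1, n):
        denom = diag[i] - sub[i]*cp[i-1]
        cp[i] = sup[i]/denom
        dp[i] = (rhs[i] - sub[i]*dp[i-1])/denom
    x = [0.0]*n
    x[-1] = dp[-1]
    for i in range(n-2, -1, -1):
        x[i] = dp[i] - cp[i]*x[i+1]
    return x

n = 50
h = 1.0/(n + 1)
sub, diag, sup = [1.0]*n, [-2.0]*n, [1.0]*n   # the matrix A
b = [0.0]*n
b[-1] = 1.0
y = [(i + 1)*h for i in range(n)]   # initial guess: the straight line y = x
iters = 0
while True:
    ynew = tridiag_solve(sub, diag, sup,
                         [h*h*(yi*yi - 1.0) - bi for yi, bi in zip(y, b)])
    delta = max(abs(u - v) for u, v in zip(y, ynew))
    y = ynew
    iters += 1
    if delta < 1e-12 or iters > 500:
        break
```

The iteration is a contraction because the h² factor keeps the nonlinear term small, so it converges linearly in a few dozen sweeps.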

(d) Newton's method. This is based on writing the difference equation in the form

F(y) = Ay + b − h²(y² − 1) = 0.

Newton's method for solving F(y) = 0 requires a many-variable analogue of the derivative F′(y). The analogue is the Jacobian, the matrix of partial derivatives

J = ∂Fᵢ/∂yⱼ = A − h² diag(2y).

In Matlab, one step of Newton's method would be

F = A*y + b - h^2*(y.^2 - 1);
J = A - h^2*spdiags(2*y,0,n,n);
y = y - J\F;

With a good starting guess, Newton’s method converges in a handful of iterations. Report the value of *n *you use and the number of iterations required.
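A Python sketch of the Newton iteration (helper names are ours); since J is tridiagonal, each Newton step amounts to one tridiagonal solve:

```python
def tridiag_solve(sub, diag, sup, rhs):
    """Thomas algorithm for a tridiagonal system (sub[0], sup[-1] unused)."""
    n = len(rhs)
    cp, dp = [0.0]*n, [0.0]*n
    cp[0], dp[0] = sup[0]/diag[0], rhs[0]/diag[0]
    for i in range(1, n):
        denom = diag[i] - sub[i]*cp[i-1]
        cp[i] = sup[i]/denom
        dp[i] = (rhs[i] - sub[i]*dp[i-1])/denom
    x = [0.0]*n
    x[-1] = dp[-1]
    for i in range(n-2, -1, -1):
        x[i] = dp[i] - cp[i]*x[i+1]
    return x

n = 50
h = 1.0/(n + 1)
b = [0.0]*(n - 1) + [1.0]
y = [(i + 1)*h for i in range(n)]     # starting guess: y = x
newton_steps = 0
while True:
    # F(y) = A*y + b - h^2*(y.^2 - 1), with the boundary values folded in
    F = [(y[i-1] if i else 0.0) - 2*y[i] + (y[i+1] if i < n-1 else 0.0)
         + b[i] - h*h*(y[i]*y[i] - 1.0) for i in range(n)]
    # J = A - h^2*diag(2y) is tridiagonal
    delta = tridiag_solve([1.0]*n,
                          [-2.0 - 2.0*h*h*y[i] for i in range(n)],
                          [1.0]*n, F)
    y = [yi - di for yi, di in zip(y, delta)]
    newton_steps += 1
    if max(abs(d_) for d_ in delta) < 1e-12 or newton_steps > 50:
        break
```

From the straight-line guess, the quadratic convergence typically reaches machine precision in a handful of steps, versus the dozens needed by the linear iteration.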

7.23. The double pendulum is a classical physics model system that exhibits chaotic motion if the initial angles are large enough. The model, shown in Figure 7.11, involves two weights, or *bobs*, attached by weightless, rigid rods to each other and to a ﬁxed pivot.

There is no friction, so once initiated, the motion continues forever. The motion is fully described by the two angles θ₁ and θ₂ that the rods make with the negative *y*-axis.

**Figure 7.11.** *Double pendulum.*

Let m₁ and m₂ be the masses of the bobs and ℓ₁ and ℓ₂ be the lengths of the rods. The positions of the bobs are

x₁ = ℓ₁ sin θ₁,  y₁ = −ℓ₁ cos θ₁,
x₂ = ℓ₁ sin θ₁ + ℓ₂ sin θ₂,  y₂ = −ℓ₁ cos θ₁ − ℓ₂ cos θ₂.

The only external force is gravity, denoted by *g*. Analysis based on the Lagrangian formulation of classical mechanics leads to a pair of coupled, second-order, nonlinear ordinary differential equations for the two angles θ₁(t) and θ₂(t):

(m₁ + m₂)ℓ₁θ̈₁ + m₂ℓ₂θ̈₂ cos(θ₁ − θ₂) = −g(m₁ + m₂) sin θ₁ − m₂ℓ₂θ̇₂² sin(θ₁ − θ₂),

m₂ℓ₂θ̈₂ + m₂ℓ₁θ̈₁ cos(θ₁ − θ₂) = −g m₂ sin θ₂ + m₂ℓ₁θ̇₁² sin(θ₁ − θ₂).

To rewrite these equations as a first-order system, introduce the 4-by-1 column vector u(t):

u = [θ₁, θ₂, θ̇₁, θ̇₂]ᵀ.

With m₁ = m₂ = ℓ₁ = ℓ₂ = 1, c = cos(u₁ − u₂), and s = sin(u₁ − u₂), the equations become

u̇₁ = u₃,
u̇₂ = u₄,
2u̇₃ + c u̇₄ = −2g sin u₁ − s u₄²,
c u̇₃ + u̇₄ = −g sin u₂ + s u₃².

Let M = M(u) denote the 4-by-4 *mass matrix*

M =
    [ 1  0  0  0 ]
    [ 0  1  0  0 ]
    [ 0  0  2  c ]
    [ 0  0  c  1 ]

and let f = f(u) denote the 4-by-1 nonlinear *force function*

f =
    [ u₃ ]
    [ u₄ ]
    [ −2g sin u₁ − s u₄² ]
    [ −g sin u₂ + s u₃² ].

In matrix-vector notation, the equations are simply

M(u) u̇ = f(u).

This is an *implicit* system of differential equations involving a nonconstant, nonlinear mass matrix. The double pendulum problem is usually formulated without the mass matrix, but larger problems, with more degrees of freedom,

are frequently in implicit form. In some situations, the mass matrix is singular and it is not possible to write the equations in explicit form.
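A Python sketch of this implicit system (helper names are ours): because M(u) only couples the last two components, the angular accelerations follow from a 2-by-2 linear solve at each evaluation, and conservation of the total energy gives a check on the integration:

```python
import math

g = 1.0   # gravitational constant, as in the unit-mass, unit-length case

def accel(u):
    """Solve M(u)[a3; a4] = rhs for the angular accelerations."""
    c = math.cos(u[0] - u[1]); s = math.sin(u[0] - u[1])
    r3 = -2*g*math.sin(u[0]) - s*u[3]**2
    r4 = -g*math.sin(u[1]) + s*u[2]**2
    det = 2 - c*c                    # det of [[2, c], [c, 1]], always >= 1
    return [(r3 - c*r4)/det, (2*r4 - c*r3)/det]

def rhs(u):
    a3, a4 = accel(u)
    return [u[2], u[3], a3, a4]

def energy(u):
    # kinetic plus potential energy, conserved by the exact flow
    c = math.cos(u[0] - u[1])
    ke = u[2]**2 + 0.5*u[3]**2 + c*u[2]*u[3]
    pe = -g*(2*math.cos(u[0]) + math.cos(u[1]))
    return ke + pe

def rk4_step(u, h):
    k1 = rhs(u)
    k2 = rhs([ui + h/2*ki for ui, ki in zip(u, k1)])
    k3 = rhs([ui + h/2*ki for ui, ki in zip(u, k2)])
    k4 = rhs([ui + h*ki for ui, ki in zip(u, k3)])
    return [ui + h/6*(a + 2*b + 2*c_ + d)
            for ui, a, b, c_, d in zip(u, k1, k2, k3, k4)]

u = [1.0, -0.5, 0.0, 0.0]    # a moderate release from rest
e0 = energy(u)
for _ in range(2000):        # integrate to t = 10
    u = rk4_step(u, 0.005)
drift = abs(energy(u) - e0)
```

Eliminating the mass matrix this way is exactly the M\f shortcut mentioned below; swinger keeps M explicit to exercise the implicit facility.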

The NCM M-file swinger provides an interactive graphical implementation of these equations. The initial position is determined by specifying the starting coordinates of the second bob, (x₂, y₂), either as arguments to swinger or by using the mouse. In most situations, this does not uniquely determine the starting position of the first bob, but there are only two possibilities and one of them is chosen. The initial angular velocities, θ̇₁ and θ̇₂, are zero.

The numerical solution is carried out by ode23 because our textbook code, ode23tx, cannot handle implicit equations. The call to ode23 involves using odeset to specify the functions that generate the mass matrix and do the plotting:

opts = odeset('mass',@swingmass, ...
   'outputfcn',@swingplot);
ode23(@swingrhs,tspan,u0,opts);

The mass matrix function is

function M = swingmass(t,u)
   c = cos(u(1)-u(2));
   M = [1 0 0 0; 0 1 0 0; 0 0 2 c; 0 0 c 1];

The driving force function is

function f = swingrhs(t,u)
   g = 1;
   s = sin(u(1)-u(2));
   f = [u(3); u(4); -2*g*sin(u(1))-s*u(4)^2;
        -g*sin(u(2))+s*u(3)^2];

It would be possible to have just one ordinary differential equation function that returns M\f, but we want to emphasize the implicit facility.

An internal function swinginit converts a specified starting point $(x, y)$ to a pair of angles $(\theta_1, \theta_2)$. If $(x, y)$ is outside the circle,

$$
\sqrt{x^2 + y^2} > \ell_1 + \ell_2,
$$

then the pendulum cannot reach the specified point. In this case, we straighten out the pendulum with $\theta_1 = \theta_2$ and point it in the given direction. If $(x, y)$ is inside the circle of radius two, we return one of the two possible configurations that reach to that point.

Here are some questions to guide your investigation of swinger.

(a) When the initial point is outside the circle of radius two, the two rods start out as one. If the initial angle is not too large, the double pendulum continues to act pretty much like a single pendulum. But if the initial angles are large enough, chaotic motion ensues. Roughly what initial angles lead to chaotic motion?

(b) The default initial condition is

```matlab
swinger(0.862,-0.994)
```

Why is this orbit interesting? Can you find any similar orbits?

(c) Run swinger for a while, then click on its stop button. Go to the Matlab command line and type get(gcf,’userdata’). What is returned?

(d) Modify swinginit so that, when the initial point is inside the circle of radius two, the other possible initial configuration is chosen.

(e) Modify swinger so that masses other than $m_1 = m_2 = 1$ are possible.

(f) Modify swinger so that lengths other than $\ell_1 = \ell_2 = 1$ are possible. This is trickier than changing the masses because the initial geometry is involved.

(g) What role does gravity play? How would the behavior of a double pendulum change if you could take it to the moon? How does changing the value of g in swingrhs affect the speed of the graphics display, the step sizes chosen by the ordinary differential equation solver, and the computed values of t?

(h) Combine swingmass and swingrhs into one function, swingode. Eliminate the mass option and use ode23tx instead of ode23.

(i) Are these equations stiff?

(j) This is a difficult question. The statement swinger(0,2) tries to delicately balance the pendulum above its pivot point. The pendulum does stay there for a while, but then loses its balance. Observe the value of *t* displayed in the title for swinger(0,2). What force knocks the pendulum away from the vertical position? At what value of *t* does this force become noticeable?

