Non-standard finite difference methods in dynamical systems

Non-standard finite difference methods in dynamical systems

Non-standard finite difference methods in dynamical systems

by

Phumezile Kama

Submitted in partial fulfillment of the requirements for the degree

Philosophiae Doctor in the Faculty of Natural and Agricultural Sciences in the Department of Mathematics and Applied Mathematics

University of Pretoria

Pretoria

April 2009

Supervisor: Professor Jean M-S Lubuma

© U n i i v e r r s s i i t t y o f f P r r e t t o r r i i a

Table of Contents

Table of Contents

List of Tables

List of Figures

Acknowledgements

Declaration

Abstract

1 Introduction

2 Dynamical Systems 10

2.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . 10

2.2 Continuous Dynamical Systems . . . . . . . . . . . . . . 11

2.2.1

Generalities . . . . . . . . . . . . . . . . . . . . . 11

2.2.2

Qualitative Properties . . . . . . . . . . . . . . . 18

2.3 Discrete Dynamical Systems . . . . . . . . . . . . . . . 27

2.3.1

Generalities . . . . . . . . . . . . . . . . . . . . . 27

2.3.2

Qualitative Properties . . . . . . . . . . . . . . . 29

3 Finite Difference Methods 33

3.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . 33

3.2 Basic Concepts . . . . . . . . . . . . . . . . . . . . . . . 34

3.3 Linear Multi-step Methods . . . . . . . . . . . . . . . . . 36

3.4 Runge-Kutta Methods . . . . . . . . . . . . . . . . . . . 38 viii ix

1 v vii ii iv ii

3.5 Absolute Stability . . . . . . . . . . . . . . . . . . . . . . 40

3.5.1

Linear Multi-step Methods . . . . . . . . . . . . . 42

3.5.2

Runge-Kutta Methods . . . . . . . . . . . . . . . 43

3.6 Numerical Methods as Dynamical Systems . . . . . . . . 46

3.7 Theta Methods . . . . . . . . . . . . . . . . . . . . . . . 49

4 Non-standard Finite Difference Methods 58

4.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . 58

4.2 Generalities . . . . . . . . . . . . . . . . . . . . . . . . . 59

4.3 Elementary Stable Schemes . . . . . . . . . . . . . . . . 65

4.4 Dissipative Non-standard Theta Methods . . . . . . . . . 76

4.5 Energy Preserving Discrete Schemes . . . . . . . . . . . . 84

5 Non-standard Finite Difference Schemes for Reaction-

Diffusion Equations 92

5.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . 92

5.2 The Fisher Equation . . . . . . . . . . . . . . . . . . . . 93

5.3 Theta Methods for Reaction-Diffusion Equations . . . . . 97

5.4 Explicit Scheme . . . . . . . . . . . . . . . . . . . . . . . 99

5.5 Coupled Spectral and Non-standard Methods . . . . . . 106

6 Conclusion

Bibliography

Summary

110

113

118 iii

List of Tables

4.1

Exact schemes of some ODE’s and PDE’s

. . . . . . . . 62

4.2

Non-standard finite difference schemes

. . . . . . . . . . 64

4.3

Comparison between standard and non-standard θ -methods

73 iv

List of Figures

1.1 Exact solution . . . . . . . . . . . . . . . . . . . . . . . .

4

1.2 Forward Euler method . . . . . . . . . . . . . . . . . . .

4

1.3 Runge-Kutta method . . . . . . . . . . . . . . . . . . . .

5

1.4 Non-standard method . . . . . . . . . . . . . . . . . . .

5

2.1 Proof of Theorem 2.2.21 . . . . . . . . . . . . . . . . . . 26

4.1 Region of elementary stability for

θ ∈

[0

,

4.2 Region of elementary stability for

θ ∈

[

1

2

,

1

2

) . . . . . . . . 72

1] . . . . . . . . 72

4.3 Exact solution for the logistic equation. . . . . . . . . . . 75

4.4 Standard Euler scheme for the logistic equation. . . . . . 75

4.5 Non-standard Euler scheme for the logistic equation. . . 75

4.6 Dissipative non-standard scheme . . . . . . . . . . . . . . 80

4.7 Further dissipative non-standard scheme . . . . . . . . . 80

4.8 Non-dissipative standard forward Euler scheme . . . . . 80

4.9 Dissipative non-standard scheme . . . . . . . . . . . . . 83

4.10 Another dissipative non-standard scheme . . . . . . . . 83

4.11 Nondissipative standard scheme . . . . . . . . . . . . . . 83

4.12 Discrete energy of the Duffing equation by standard (piecewise constant) and non-standard (constant) finite difference schemes . . . . . . . . . . . . . . . . . . . . . . . . 91

5.1 Phase plane trajectories for (5.2.5) - (5.2.6),

c ≥

2. . . . . 96

5.2 Travelling wave solution for the Fisher equation,

c ≥

2. . 96

5.3 Non-standard scheme not related to exact scheme. . . . . 104

5.4 Non-standard scheme related to exact scheme. . . . . . . 104 v

5.5 Non-standard scheme with

φ

(∆

t

) = ∆

t

. . . . . . . . . . . 105

5.6 Standard scheme. . . . . . . . . . . . . . . . . . . . . . . 105

5.7 Spectral non-standard scheme based on the exact scheme.109

5.8 Spectral standard scheme. . . . . . . . . . . . . . . . . . 109 vi

Acknowledgements

I am indebted to Professor Jean M-S Lubuma, my supervisor, for introducing me to the theory of non-standard finite difference methods and for his constant support and guidance through the early period of chaos and confusion. This thesis would not have been possible without the tremendous efforts put forth by my supervisor who has devoted much of his precious time to read endless drafts, provided me with technical corrections, constructive feedback and countless suggestions for improvement of this work.

A special word of gratitude is extended to Professor R. Anguelov for his interest in the work and generating some of the figures in this thesis.

I would like to acknowledge the Numerical Analysis of Differential

Equations research group at the University of Pretoria for their joint work and friendly encouragement, which gave me a better perspective on my own results.

A hearty thank you goes to Dr TA Tshifhumulo, Dr GH Maluleke and Mr PWM Chin for proofreading this thesis.

I would like to thank the National Research Foundation of South

Africa for financial support.

Of course, I am grateful to my family, for their patience and

love

.

Without them this work would not have been fruitful.

Dedication

To all who taught me over the years.

”I humbly thank you; well, well, well.”

Shakespeare,

Hamlet

(1623).

vii

Declaration

I, the undersigned, hereby certify that the thesis submitted herewith for the degree Philosophiae Doctor to the University of Pretoria contains my own, independent work and has not been submitted for any degree at any other university.

Signature:

Phumezile Kama

Date: viii

Abstract

This thesis analyses numerical methods used in finding solutions of differential equations. Numerical methods are viewed as discrete dynamical systems that give useful information on continuous dynamical systems defined by systems of (ordinary) differential equations. We analyse non-standard finite difference schemes that have no spurious fixed-points compared to the dynamical system under consideration, the linear stability/instability property of the fixed-points being the same for both the discrete and continuous systems. We obtain a sharper condition for the elementary stability of the schemes. For more complex dynamical systems which are dissipative, we design schemes that replicate this property.

Furthermore, we investigate the impact of the above analysis on the numerical solution of partial differential equations. We specifically focus on reaction-diffusion equations that arise in many fields of engineering and applied sciences. Often their solutions enjoy the following essential properties: Stability/instability of the fixed points for the space independent equation, the conservation of energy for the stationary equation, and boundedness and positivity.

We design new non-standard finite difference schemes which replicate these properties. Our construction make use of three strategies: the renormalization of the denominator of the discrete derivative, nonlocal approximation of the nonlinear terms and simple functional relation between step sizes. Numerical results that support the theory are provided.

ix

Chapter 1

Introduction

Our main interest in this thesis is the study of numerical methods for dynamical systems defined by (ordinary) differential equations. Problems as diverse as the simulation of planetary interactions, fluid flows

[10] and mechanics [43], chemical reactions [16],[40], biological pattern formulation [2], [18], [33] and economic markets can all be modelled as dynamical systems [41]. For further applications of dynamical systems see [44]. In most of the systems modelled, all rates of change are assumed to be time independent, which makes the corresponding system autonomous.

Dynamical systems are concerned primarily with making qualitative study about the behaviour of systems which evolve in time given knowledge about the initial state of the system itself. It is important to know and study essential qualitative properties of the systems or more precisely their dynamics. Such properties include among others: the type of fixed points, oscillatory solutions, monotonicity of solutions, conservation of energy, dissipativity or dispersion of solution, positivity and boundedness of solutions. Our standard reference for dynamical systems is Stuart and Humphries [41] while Lambert [22] will also be used for numerical methods for ordinary differential equations. The framework of the study will include a wide range of concrete linear and non-linear models such as: logistic equation, decay equation, Hamiltonian system in ordinary differential equations as well as the Fisher equation, the reaction-diffusion equation in partial differential equations.

1

2

Existence theory is extensively developed for differential equations.

However, most differential equations have no analytical solutions. As a result numerical methods are of fundamental importance in gaining understanding of dynamical systems. For contemporary numerical analysts, the understanding of differential equations from numerical methods is often limited to the study of their consistency, (zero-) stability and convergence. Unfortunately such classical numerical methods do not guarantee that the dynamics of the systems are replicated. This explains why we use the monograph [41] as our standard reference on dynamical systems, since it is one of a few classical books emphasizing the similar properties of the exact solutions that numerical schemes exhibit.

To be more explicit in this introduction, we consider the following differential equation

dy dt

=

y

2

(1

− y

)

≡ f

(

y

)

.

(1.0.1)

Equation (1.0.1) is an elementary model for combustion [34]. Despite the simple nature of (1.0.1), its solution cannot be written in a closed form. The solution is expressed in the implicit form

µ ¶ ln

| y

| y |

1

|

+

1

y

=

t

+

C,

(1.0.2) where

C

is a constant. This equation defines a dynamical system on

(

−∞ ,

+

) with an asymptotically stable fixed point

y

= 1 and an unstable fixed point

y

= 0. Full analysis shows that the point

y

= 0 is attracting the solutions below it and repelling those above it. (The concepts used here will be made clear in the next chapter). All these properties that represent the exact solution of (1.0.2) are visualised in

Fig.1.1.

We employ for (1.0.1), the forward Euler method

y n

+1

t y n

=

y

2

n

(1

− y n

)

,

(1.0.3)

3 and the Runge-Kutta method

y n

+1

− y n

t

=

y n

+

t

2

[

f

(

y n

) +

f

(

y n

+ ∆

tf

(

y n

))]

.

(1.0.4)

The two classical schemes (1.0.3) and (1.0.4) are consistent, zerostable and thus convergent. However, the discrepancy between the numerical solution by these methods and the exact solution is evident as can be seen in Figs.1.2 and 1.3. We use ∆

t

= 1

.

8 in both schemes.

Our aim is to design numerical schemes that give reliable simulations, which preserve as much as possible the intrinsic properties of the dynamical systems without any limitation on the value of time step size ∆

t

. We shall do this by considering the non-standard finite difference method which was introduced by RE Mickens [26] more than two decades ago. This approach takes advantage of specific properties of solutions of involved differential equations.

For the above mentioned combustion model (1.0.1), Fig.1.4 shows that the non-standard finite difference scheme

y n

+1

− y

1

− e

t n

=

y

2

n

(1 +

y n

2

y n

+1

)

,

(1.0.5) proposed by Anguelov and Lubuma [8], displays better the properties of the exact solution.

1.5

1

0.5

0

−0.5

0 1 2 3 4

t

5 6

Figure 1.1: Exact solution

7 8

1.5

1

0.5

0

−0.5

−1

0 1 2 3 4

t

5 6

Figure 1.2: Forward Euler method

7 8

4

1.5

1

0.5

0

−0.5

−1

0 1 2 3 4

t

5 6

Figure 1.3: Runge-Kutta method

7 8

5

Figure 1.4: Non-standard method

6

Non-standard finite difference techniques developed by Mickens, have laid the foundation for designing methods that preserve the dynamics, especially the stability property of fixed points of the approximated differential system. The design of the non-standard finite difference method starts mostly with the concept of exact scheme. A major advantage of having an exact scheme for a differential equation is that questions related to the usual considerations of consistency, stability and convergence do not arise. It is to be noted that any method that is not standard could be considered non-standard. However, in this thesis when we talk about non-standard finite difference schemes, we consider those that are based on Mickens’ methodology and rules as explained in the survey paper [35].

Since the publication of the monograph [26], which is the first book on this exciting topic, several authors have contributed to the study of non-standard finite difference methods. Anguelov and Lubuma [7] provided some mathematical justification for the success of empirical procedures used so far. These authors have unambiguously defined non-standard finite difference methods using two of Mickens’ rules.

The edited volumes [17], [28] and [29] contain a wide range of applications of the non-standard finite difference methods, (for example, mathematical epidemiology, reaction-diffusion equations, non-smooth mechanics, singular perturbation problems, conservation law, etc). In addition to these, we mention the following works where the nonstandard finite difference schemes have shown great potential: [1], [6],

[7], [14] and [26].

For this thesis to be relatively self-contained, we dedicate considerable time to study classical concepts regarding dynamical systems and finite difference methods. In particular, the concept of absolute stability of linear multi-step and Runge-Kutta methods is sufficiently reviewed in view of the elementary stability which is the minimum qualitative property that non-standard finite difference methods must satisfy.

The comment made earlier about the reliable scheme (1.0.5) raises the following concerns which constitute the main focus of the thesis:

What is a non-standard finite difference method for a dynamical system?

7

How to construct a non-standard finite difference method for a dynamical system?

How powerful are non-standard finite difference methods compared to standard finite difference methods that are used for dynamical systems?

How can numerical methods be viewed as discrete dynamical systems of the continuous dynamical systems they approximate?

What is the impact of the non-standard finite difference method on concrete examples of dynamical system?

How does the study carry over to dynamical systems related to partial differential equations for dynamical systems, for example the reaction-diffusion equations?

This thesis elaborates, with extension in some cases, the author’s results in the following papers: [3], [4], [5] and [6].

The thesis is organized as follows: Chapter 2 deals with the review of basic concepts, definitions and notation relating to dynamical systems which we will be using throughout this thesis. Continuous dynamical systems defined by ordinary differential equations are presented in Section 2.2 and their discrete counterparts are discussed in Section 2.3. In each case, we present qualitative properties of dynamical systems that are of interest in our work. These include, inter alia, invariant sets, fixed points, hyperbolic fixed points, linear stability and dissipativity.

In Chapter 3 we introduce finite difference schemes for ordinary differential equations. In Section 3.2, consistency, zero-stability and convergence of finite difference methods are discussed. We give a short presentation of two classical methods, namely, the linear multi-step method in Section 3.3 and the Runge-Kutta method in Section 3.4.

The numerical methods are also required to behave asymptotically, like the solutions of the decay equation. This is the essence of the concept of absolute stability addressed in Section 3.5. In Section 3.6 we consider the numerical methods that define discrete dynamical systems. Finally,

8 the analysis in Section 3.7 is restricted to theta methods, which will be the focus for the rest of this thesis.

The first set of the author’s main contribution in this thesis appear in Chapter 4. Firstly, we extend the classical theta methods. In Section 4.2, we analyse non-standard finite difference schemes that have no spurious fixed-points compared to the dynamical system under consideration, the linear stability/instability property of the fixed-points being the same for both the discrete and continuous systems. We obtain a sharper condition for the elementary stability of the schemes, a topic discussed in Section 4.3. For more complex dynamical systems which are dissipative, we design schemes that replicate this property as presented in Section 4.4. Lastly, in Section 4.5, we consider a specific class of dynamical systems which is equivalent to the simplest model of Hamiltonian systems that occur in classical mechanics. We design a non-standard finite difference scheme that replicates the underlying principle of conservation of energy. Here we use Mickens’ rule about nonlocal approximation of nonlinear terms.

Chapter 5 is dedicated to a detailed analysis of the author’s results given in [6]. Our point of departure is the Fisher equation, in Section 5.2, which enjoys a positivity and boundedness property. Then we move to general reaction-diffusion equations for which we construct non-standard theta methods in Section 5.3. In Section 5.4, we design non-standard finite difference schemes which are elementary stable in the limit case of space independent variable and which are stable with respect to the principle of conversation of energy in the stationary case.

Furthermore, we show that our schemes replicate the positivity and boundedness properties under a more simpler functional relation between the time and space step sizes (compared to the literature).

As an alternative approach, Section 5.5 deals with approximations of the space variable by the spectral method, while the time variable is approximated via non-standard finite difference scheme. This results in what we call coupled spectral and non-standard methods which replicates the essential properties of the exact solutions.

In the last chapter, we provide concluding remarks, and a summary of our findings, a discussion on how our work fits in the literature and

9 possible extensions. Throughout the main chapters of the thesis, we provide numerical tests that support the theory and show superiority and reliability of our schemes compared to the classical ones.

Chapter 2

Dynamical Systems

2.1

Introduction

Dynamical systems are found in various fields of science. Usually they are given by an analytical specification or as sampled data. Dynamical systems are mainly represented by a state that evolves in time. The input as well as the current state of a dynamical system determine the evolution of the system.

An important characteristic of a dynamical system is whether it is continuous or discrete. Continuous systems (often called flows) are given by differential equations whereas discrete systems (often called maps) are specified by difference equations.

There are many possible ways to analyse such systems, for example, analysing their long term behaviour. For the analysis, it is very important to know whether a dynamical system is linear or not. Nonlinear systems typically have intricate dynamical behaviour.

The general setting of this thesis is that of continuous dynamical systems defined by a system of autonomous differential equations.

We present continuous dynamical systems in the next section. After specifying general concepts and terminology, we give existence results.

Thereafter, we investigate properties of dynamical systems which constitute the main qualitative properties of interest throughout this thesis. These are the stability of fixed points and their dissipative nature.

Section 2.3 provides the discrete counterpart of the above study for discrete dynamical systems.

10

11

2.2

Continuous Dynamical Systems

We recall that Stuart and Humphries [41] is our standard reference for dynamical systems. Most of the classical concepts given below can be found there.

2.2.1

Generalities

Throughout this thesis, we shall be concerned with the initial-value problem for an autonomous first-order system of ordinary differential equations

Dy

:=

dy dt

=

f

(

y

);

y

(0) =

y

0

,

(2.2.1) where

y

=

y

(

t

) = [

f

= [

1

f · · · m f

]

T

: R

1

m y · · ·

R

m m y

]

T

: [0

, ∞

)

R and

y

0

= [

1

y

0

· · · m m y

0

] is unknown, while

T

R

m

are given.

Implicitly, we assume that

f

satisfies the smoothness properties that are needed. Whenever it is necessary, we will be explicit about the smoothness of

f

. The space R

m

is equipped with the usual Euclidean structure through the norm

|| • ||

and the inner product

h• , •i

.

We begin by defining a dynamical system on a subset

E ⊆

R

m

.

Definition 2.2.1.

The equation (2.2.1) is said to define a dynamical system on a subset E ⊆

R

m solution of (2.2.1) which is defined for all t ∈

[0

, ∞

)

and y

(

t

)

∈ E for all t ≥

0

.

if, for every y

0

∈ E , there exists a unique

¥

The fact that

y

(

t

) is a solution of (2.2.1) on [0

, ∞

) implies at least the following smoothness:

y

(

t

) is differentiable on (0

, ∞

) and continuous on [0

, ∞

).

We now introduce the concept of evolution semigroup operator for a dynamical system.

Definition 2.2.2.

For a dynamical system on E , we define its evolution semigroup operator or solution operator to be the map S

(

t

) :

E → E such that y

(

t

) =

S

(

t

)

y

0

.

¥

12

The terminology in Definition 2.2.2 is motivated by the following properties that can easily be checked: i.

S

(

t

+

s

) =

S

(

t

)

S

(

s

) =

S

(

s

)

S

(

t

)

∀ t, s ≥

0, ii.

S

(0)

≡ I,

the identity operator.

The evolution semigroup operator

S

(

t

) is merely a convenient notation for advancing the solution through time

t

. In fact, for

y

0

∈ E

the set

Γ

+

(

y

0

) :=

{ S

(

t

)

y

0

;

t ∈

[0

, ∞

)

} ⊂ E

(2.2.2) is called the (positive or forward) orbit of

y

0 tory is also used for orbit.

. The terminology trajec-

At this stage we need to discuss sufficient conditions for (2.2.1) to define a dynamical system. Firstly, we consider the commonly known condition stated in the following definition.

Definition 2.2.3.

A function f

: R

m

B ⊂

R

m with Lipschitz constant L ≥

0

→ if

R

m is said to be Lipschitz on k f

(

x

)

− f

(

y

)

k ≤ L || x − y || ∀ x, y ∈ B.

If f is Lipschitz on

R

m

, then f is said to be globally Lipschitz . If f is Lipschitz on every bounded subset of

R

m

, then f is said to be locally

Lipschitz .

¥

The concept of Lipschitzian functions is important in the proof of existence and uniqueness results for many problems in mathematics (see for example [48]). In our specific context, we have the following theorem.

Theorem 2.2.4.

Let f

: R

m

R

m be globally Lipschitz. Then there exists a unique solution y

(

t

)

to (2.2.1) for all t ≥

0

. Hence (2.2.1) defines a dynamical system on

R

m

.

13

Theorem 2.2.4 and its two corollaries below are well-known results.

Given their importance in this work, we outline, for convenience, their proofs.

Proof.

We employ the Banach contraction principle, see [48]. To this end, we first introduce the space

C k

(0

, ∞

vector-valued functions

y

: [0

, ∞

)

R

m

; R

m

) consisting of continuous such that

|| y ||

C k

(0

, ∞

; R

m

)

:= sup

t ≥

0

e

− kt

|| y

(

t

)

|| < ∞ ,

where the parameter

k >

0 will be specified shortly. It is clear that

C k

(0

, ∞

; R

m

) equipped with the norm

|| • ||

C k

Secondly, we consider the operator

(0

, ∞

; R

m

) is a Banach space.

φ

:

C k

(0

, ∞

; R

m

)

→ C k

(0

, ∞

; R

m

) defined by

Z

t

φ

(

y

)(

t

) =

y

0

+

f

(

y

(

s

))

ds.

0

It is equally clear that solving (2.2.1) is equivalent to finding fixedpoints of the operator

φ

:

y

=

φy.

Using the Lipschitz condition in Definition 2.2.3 with Lipschitz constant

L

we have for

y, w ∈ C k

(0

, ∞

; R

m

):

Z

t

|| φ

(

y

)(

t

)

− φ

(

w

)(

t

)

|| ≤ || f

(

y

(

s

))

− f

(

w

(

s

))

|| ds

0

Z

t

≤ L e ks e

− ks

|| y

(

s

)

− w

(

s

)

|| ds

0

Z

t

≤ L || y − w ||

C k

=

L

(

(0

, ∞

; R

m

)

e ks ds

0

e kt k

1

)

|| y − w ||

C k

(0

, ∞

; R

m

)

.

14

Thus

e

− kt

|| φ

(

y

)(

t

)

− φ

(

w

)(

t

)

|| ≤

L k

|| y − w ||

C k

(0

, ∞

; R

m

) and

|| φy − φw ||

C k

(0

, ∞

; R

m

)

L k

|| y − w ||

C k

(0

, ∞

; R

m

)

.

For the choice

k > L, φ

is a contraction and has therefore a unique fixed-point.

In general, if

f

is only locally Lipschitz then the most we can achieve is local existence and uniqueness in the following sense:

Corollary 2.2.5.

Assume that f

: R

m

¯

B

(

y

0

, r

)

with Lipschitz constant L

B

R

m is Lipschitz on the ball

. Consider the finite time

T

B

:= sup

x ∈

¯

r

|| f

(

x

)

||

.

Then, the initial-value problem (2.2.1) has a unique solution y

(

t

)

∈ for t ∈

[0

, T

B

]

.

Proof.

We replace

C k

ous functions

y

: [0

, T

(0

, ∞

; R

B

]

→ m

) by the set

C k

(0

, T

B

; ¯ ) of continu-

¯

. Though not being a normed space,

C k

(0

, T

B

; ¯ ) is a complete metric space under the metric

d k

(

y, w

) = sup

0

≤ t ≤ T

B e

− kt

|| y

(

t

)

− w

(

t

)

|| .

For

y ∈ C k

(0

, T

B

; ¯ ), we have

Z

|| φ

(

y

)(

t

)

− y

0

|| ≤

≤ T

0

B t

|| f

sup

x ∈

¯

(

y

||

(

f s

(

))

x

)

||

|| ds

=

r,

15 which shows that the mapping

φ

defined earlier operates from

C k

(0

, T

B

; ¯ ) into

C k

(0

, T

B

; ¯ ). On the other hand, if

y ∈ C k

(0

, T

B

; ¯ ) and

w

C k

(0

, T

B

; ¯ ), we easily obtain as in the proof of Theorem 2.2.4 that

∈ d k

(

φ

(

y

)

, φ

(

w

))

L

B k d k

(

y, w

)

.

Thus, for

k > L

B

, φ

is a contraction.

Whenever, some a priori bound holds for the solution, Corollary 2.2.5

permits us to obtain a global existence result in the following precise way.

Corollary 2.2.6.

Let f

: R

m

E

² of a bounded set E ⊆

R

m

R

m be Lipschitz on an

. If for any y

0

∈ E

² -neigbourhood

, the solution y

(

t

)

of

(2.2.1) satisfies y

(

t

)

∈ E for each time t ≥

0

where the solution exists, then (2.2.1) defines a dynamical system on E.

Proof.

For each

m

= 0

,

1

,

2

, ...

define

T m

:=

sup

x ∈ E

²

|| f

(

x

)

||

and consider the complete metric space

C

functions

y

: [

T m

, T m

+1

]

→ E

² k

(

T m

, T m

+1

E

equipped with the metric

²

) of continuous

d k

(

y, w

) = sup

t ∈

[

T m

,T m

+1

]

e

− kt

|| y

(

t

)

− w

(

t

)

|| .

Fix

y

0

∈ E

. For

y ∈ C k

(

T m

, T m

+1

; ¯

²

), define the operator

φ

by

Z

t

φ

(

y

)(

t

) =

y

0

+

f

(

y

(

s

))

ds.

T m

As in the proof of Corollary 2.2.5, one can show that

φ

operates from

C k

(

T m

, T m

+1

E

²

) into

C k

(

T m

, T m

+1

E

²

) and

φ

is a contraction for the choice

k > L

E

²

where

L

E

²

is the Lipschitz constant of

f

on

E

²

.

This,

16 together with the assumption that the solution remains in

E

whenever it exists, implies that there exists a sequence

{ Y m

} m ≥

0 of functions

Y m

: [

T m

, T m

+1

]

→ E

such that each

Y m

is on [

T m

, T m

+1

] the unique solution of the differential equation in (2.2.1) that satisfies the initial condition given recursively by

Y

0

(0) =

y

0

Y m

(

T m

) =

Y m −

1

(

T m

)

, m

= 1

,

2

, ....

Since

[0

, ∞

) =

[

[

T m

, T m

+1

]

, m ≥

0 the function

y

:=

[

Y m

: [0

, ∞

)

→ E, m ≥

0 which is well defined in view of the above initial conditions is the unique solution of (2.2.1). Thus, (2.2.1) defines a dynamical system on

E

.

The following inequality introduced by Gronwall in 1918, known as the Gronwall inequality, will be useful in the analysis of continuous dynamical systems.

Lemma 2.2.7 (Gronwall Inequality).

Let z

(

t

)

be a real valued function on

[0

, ∞

)

that satisfy z t

≤ az

+

b, z

(0) =

z

0

, for a,b constants. Then for t ≥

0

z

(

t

)

≤ z

0

e at

+

b a

(

e at

1)

, a

= 0

and z

(

t

)

≤ z

0

+

bt, a

= 0

.

17

Note that an extension of the Gronwall Inequality that does not allow an exponential growth of

z

is known as the uniform Gronwall lemma and is provided in [44].

Theorem 2.2.8.

Suppose that (2.2.1) defines a dynamical system on

R

m and that f

: R

m

R

m is locally Lipschitz. Let B ⊂

R

m be a bounded set with the property

S

(

t

)

B ⊂ B f or t ∈

[0

, T

]

.

Then there exists a constant c >

0

depending on B and T such that

|| S

(

t

)

y

0

− S

(

t

)

z

0

|| ≤ c || y

0

− z

0

|| ∀ t ∈

[0

, T

]

∀ y

0

, z

0

∈ B.

(2.2.3)

Proof.

For

y

0

, z

0

∈ B

, put

y

(

t

) =

S

(

t

)

y

0 and

z

(

t

) =

S

(

t

)

z

0 which belongs to

B

for

t ∈

[0

, T

]. Using (2.2.1) and the Cauchy-Schwarz inequality, we have

1

2

d dt

|| y

(

t

)

− z

(

t

)

||

2

=

h y t

− z t

, y − z i

=

h f

(

y

)

− f

(

z

)

, y − z i

≤ || y − z |||| f

(

y

)

− f

(

z

)

||

≤ L

B

|| y

(

t

)

− z

(

t

)

||

2 where

L

B

0 is the Lipschitz constant of

f

on

B

. Application of

Gronwall inequality (Lemma 2.2.7) yields

|| y

(

t

)

− z

(

t

)

||

2

≤ e

2

L

B t

|| y

0

− z

0

||

2

∀ t ∈

[0

, T

]

.

This implies that

|| y

(

t

)

− z

(

t

)

|| ≤ e

L

B

T

|| y

0

− z

0

|| ∀ t ∈

[0

, T

]

.

18

Remark 2.2.9.

When

f

is globally Lipschitz, the inequality (2.2.3) holds on replacing

B

by R

m

. The inequality (2.2.3) means that the solution of (2.2.1) depends continuously on the initial data or that the dynamical system is continuous with respect to initial data. This motivates the terminology ”Lipschitz continuous dynamical system”.

In fact, in the rest of this thesis we deal with Lipschitz continuous dynamical systems, though the expression ”Lipschitz continuous” is sometimes left out.

¥

2.2.2

Qualitative Properties

We will often be interested in the orbits or trajectories initiated at

y

0 in any set

B ⊆

S

(

t

) on

B ⊂

R

m

R

.

m

and the action of the evolution semigroup operator

Definition 2.2.10.

For a dynamical system defined by (2.2.1) a set B is said to be positively invariant under S

(

)

if S

(

t

)

B ⊆ B for all t ≥

0

. Similarly, B is said to be negatively invariant if S

(

t

)

B ⊇ B for all t ≥

0

. If B is both positively and negatively invariant, so that

S

(

t

)

B

=

B for all t ≥

0

, then B is said to be invariant under S

(

)

.

¥

Certain distinguished orbits play a prominent role in the qualitative theory of dynamical systems. The simplest of such orbits are fixed points which turn out to be also the simplest invariant sets.

Definition 2.2.11.

A point

˜

R

m is called a fixed point of the dynamical system defined by (2.2.1) if f

(˜ ) = 0

.

¥

Remark 2.2.12.

The terminology in Definition 2.2.11 is due to the fact that ˜

R

m

is a fixed point of (2.2.1) if and only if ˜ is a fixed point of the evolution semigroup operator, that is

S

(

t

y

. Other

19 terms often substituted for the term fixed point are equilibrium point, critical point, stationary point, rest point or steady state. We shall utilize the term fixed point exclusively.

¥

Given the simplicity of fixed-points as invariant sets of the dynamical system, it is natural to wonder how other trajectories compare to them.

This is captured in the next definition.

Definition 2.2.13.

Let

˜

R

m

(2.2.1). Then

˜

is said to be be a fixed point of the dynamical system

(i) stable if, for any ² >

0

, there exists δ

=

δ

(

²

)

>

0

such that if y

0

∈ B

(˜ )

then y

(

t

)

∈ B y, ²

)

for all t ≥

0;

(ii) asymptotically stable if (i) holds and in addition, || y

(

t

)

˜

|| →

0

as t → ∞ for all || y

0

˜

|| sufficiently small;

(iii) unstable if (i) fails to hold.

¥

Remark 2.2.14.

A fixed point ˜ is stable if all nearby solutions stay nearby. It is asymptotically stable if all nearby solutions not only stay nearby, but also tend to ˜

y

.

¥

We now turn our attention to a special type of fixed point in the study of dynamical systems, called

hyperbolic fixed points

. To this end, we assume that

f

: R

m

R

m

is of class

C

1

. Here and after, we denote the Jacobian matrix of

f

at the fixed point ˜ by

J ≡ Jf

(˜ )

.

(2.2.4)

Definition 2.2.15.

If the matrix J has no eigenvalues with zero real parts, then

˜

is called hyperbolic . Otherwise the fixed point is called non-hyperbolic

.

¥

20

Suppose that

f

is a continuously differentiable function such that

(2.2.1) generates a continuous dynamical system on suppose that ˜

R

m

. Moreover,

y

is a hyperbolic fixed point of the dynamical system. If

y

solves (2.2.1) and setting

u

=

y −

˜ , we see by Taylor expansion of

f

about ˜ that

u

0

=

f

(

u

+ ˜ ) =

f

(˜ ) +

Jf

(˜ )

u

+

R

(

u

)

.

(2.2.5)

That is,

u

0

=

Ju

+

R

(

u

) (2.2.6) where

R

(

u

)

/ || u || →

0 as

|| u || →

0. Because

R

(

u

) is small when

u

is small, it is reasonable to believe that as

t → ∞

solutions of (2.2.6) behave similarly to solutions of

u

0

=

Ju

(2.2.7) for

u

near 0. Equivalently, it is reasonable to believe that solutions of

(2.2.1) behave like solutions of

y

0

=

J

(

y −

˜ ) (2.2.8) for

y

near ˜ . Equation (2.2.7) or (2.2.8) is called the

linearisation

of

(2.2.1) at ˜ .

The belief expressed earlier is indeed confirmed by the following

Hartman-Grobman theorem

or linearisation theorem which is an important result about the local behaviour of dynamical systems in the neighbourhood of a hyperbolic fixed point.

Theorem 2.2.16 (Hartman-Grobman).

Assume that f

: R

m is of class C

1

and consider a hyperbolic fixed point

˜

R

m of the dynamical system defined by (2.2.1). Then there exist δ >

0

, a neighbourhood

N of the origin, and a homeomorphism h

:

B

(˜ )

→ N such that v

(

t

) :=

h

(

u

(

t

))

solves (2.2.7) if and only if u

(

t

)

solves (2.2.6).

Basically the theorem states that the behaviour as

t → ∞

of the solution of (2.2.1) near a fixed point is the same as the behaviour of the solution of its linearisation near the origin. Therefore when dealing

21 with such fixed points we can use the simpler linearisation of the system to analyse its behaviour. This observation leads us to the following result.

Theorem 2.2.17.

Assume that f

: R

m

˜

R

m

R

m is of class C

1

and that is a hyperbolic fixed point of the dynamical system defined by

(2.2.1). Then

˜

is asymptotically stable if and only if for u

(

t

) =

e tJ u

0

, solution of (2.2.7) with || u

0

||

:=

|| y

0

˜

|| small enough, we have

lim

t →∞ u

(

t

) = 0

.

(2.2.9)

This is equivalent to

Reλ <

0

, ∀ λ ∈ σ

(

J

)

,

(2.2.10)

where σ

(

J

)

is the set of eigenvalues of the matrix J.

The fixed-point is unstable if and only if there exists λ ∈ σ

(

J

)

such that

Reλ >

0

or

lim

t →∞ u

(

t

) =

∞ .

(2.2.11)

Remark 2.2.18.

Note that Theorem 2.2.16 and Theorem 2.2.17 fail in the case of non-hyperbolic fixed-point ˜ . At the same time, Theorem

2.2.16 motivates the terminology ”linear stability” and ”linear instability” that we will often use in place of ”asymptotic stability” and instability for a hyperbolic fixed-point.

¥

Instead of considering systems for which all trajectories are asymptotic to a unique fixed point, a possible generalisation is to consider systems for which the asymptotic behaviour is confined to some bounded set, but where no restrictions are imposed on the possible dynamics within the set. Such systems are said to be dissipative and this constitutes the second type of qualitative property that we will deal with in this thesis.

Definition 2.2.19.

A dynamical system on

R

m is dissipative if there exists a bounded, positively invariant set B with the property that for any bounded set E ⊆

R

m

, there exists t

=

t

(

B, E

)

0

such that

S

(

t

)

E ⊆ B for all t > t

. The set B is called an absorbing set.

¥

22

We now want to investigate when a dynamical system defined by

(2.2.1) is dissipative. To this end, one needs a variety of structural assumptions on the vector field

f

(

), which arise naturally in applications. We will now consider two such structural assumptions.

First consider (2.2.1) under the assumption that there exist constants

α ≥

0 and

β >

0 such that

h f

(

y

)

, y i ≤ α − β || y ||

2 for all

y ∈

R

m

.

(2.2.12)

Theorem 2.2.20.

Assume that f

: R

m

R

m is locally Lipschitz and satisfies (2.2.12). Then (2.2.1) defines a dynamical system on

R

m for any ² >

0

and bounded set E ⊂

R

m there exists t

=

t

(

E, ²

)

and such that for all t > t

|| y

(

t

)

||

2

<

α

β

+

²,

(2.2.13)

where y

(

t

)

is the solution of (2.2.1). Hence the dynamical system

(2.2.1) is dissipative with an absorbing set

B

=

B

µ

0

,

r

α

β

+

²

(2.2.14)

for any ² >

0

.

Proof.

Given the importance of this theorem in our work, we prove it in detail following [41]. We first establish an a priori bound on the solution

y

(

t

) with an initial data

y

0

, whenever it exists. Note that

1

2

d dt

|| y

(

t

)

||

2

=

h f

(

y

(

t

))

, y

(

t

)

≤ α − β || y

(

t

)

||

2

i

by (2

.

2

.

12)

.

Applying the Gronwall inequality, Lemma 2.2.7, we obtain

|| y

(

t

)

||

2

α

β

+

e

2

βt

·

|| y

0

||

2

α

β

¸

.

(2.2.15)

23

Thus for

t ≥

0

,

|| y

(

t

)

|| ≤

max

µ

|| y

0

|| ,

r

α

β

.

(2.2.16)

The relation (2.2.16) shows that the solution of (2.2.1), if it exists, cannot blow up for

y

so that

y

0

R

m

0

R

m

. More precisely, since R

m

³ q belongs to some

B

0

=

B

0

,

α

β

=

+

²

S

0

´

0

B

³

0

,

q

α

β

+

²

, (2.2.16) shows

´ that the solution

y

(

t

) must remain in this

B

0

. By Corollary 2.2.6, a unique solution exists indeed for all

t ≥

0 and it remains in which means that (2.2.1) defines a dynamical system on R

m

.

B

0

R

m

,

From the above use of (2.2.16), we have in passing shown that each

B

is positively invariant:

S

(

t

)

B ⊂ B.

We now show that

B

is absorbing. In other words for any bounded set

E ⊆

R

m

, we want to show that there exists

t

=

t

(

E, ²

)

0 such that

t > t

implies that

|| y

(

t

S

)

(

|| t

)

<

E ⊆

r

α

β

B

. That is, for

+

²

for

t > t

∗ y

.

0

∈ E,

(2.2.17)

For a bounded set

E

||

,

y y

(

0

t

)

∈ E

and from (2.2.15), we have

||

2

α

β

+

e

2

βt

·

R

2

α

β

¸ where

R

= sup

y

0

∈ E ∪ B

|| y

0

|| .

Solving for

t

, the inequality

α

β

+

e

2

βt

·

R

2

α

¸

β

<

α

β

it is clear that

+

², t

=

1

2

β

ln

R

2

²

α

β

24 is the required time in (2.2.17). Thus the dynamical system is dissipative with

B

as absorbing set.

We generalize (2.2.12) by considering a weaker condition that induces slow. Notice that if

f

satisfies (2.2.12) and

R ≥ h f

(

y

)

, y i <

0 for

|| y || > R.

Thus (2.2.12) implies (2.2.18) while the contrary is not true.

(2.2.18)

The theorem below shows that if

f

satisfies (2.2.18) then (2.2.1) defines a dissipative dynamical system.

Theorem 2.2.21.

If f

: R

m

R

m is locally Lipschitz then (2.2.1),

(2.2.18) defines a dissipative dynamical system and the open ball

B

(0

, R

+

²

)

is an absorbing set for any ² >

0

.

α

β

then

Proof.

Given

² >

0, let

B

denote the open ball

B

(0

, R

+

²

). Assume that (2.2.1) has a unique solution

y

(

t

) and that

y

(

t

)

R

m

\ B

for

t ≥

0.

Then by (2.2.18), we have

d dt

|| y

(

t

)

||

2

<

0

,

because

1

2

d dt

|| y

(

t

)

||

2

≤ h

By integrating (2.2.19), we obtain

f

(

y

(

t

))

, y

(

t

)

i .

(2.2.19)

(2.2.20)

|| y

(

t

)

||

2

≤ || y

0

||

2

.

(2.2.21)

25

Combining this with the case when the solution

y

(

t

) remains in

B

, we have the a priori bound

|| y

(

t

)

|| ≤

max

{ R

+

², || y

0

||}

for

t ≥

0

.

(2.2.22)

Thus, in view of Corollary 2.2.6, Equations (2.2.1) and (2.2.18) define a dynamical system on R

m

.

To show that this dynamical system is dissipative, we proceed as follows. Clearly the ball

B

is positively invariant because of (2.2.22).

For a bounded set

E ∈

R

m

, let

r

= sup

y ∈ E ∪ B

|| y || > R

+

²

and

E

=

{ y

;

|| y || ≤ r } .

Note that

E

is positively invariant, i.e.

S

(

t

)

E

⊂ E

because of

(2.2.22) and of the relation (2.2.19) that holds for

y

(

t

)

∈ E

\ B

(see

Figure 2.1).

Furthermore, since

E

\ B

is compact, and

f

is continuous we deduce from (2.2.19) that there exists

δ >

0 such that

|| y

(

t

)

||

2

≤ − δt

+

|| y

0

||

2

.

Now if

y

0

∈ E,

we have two cases. Either

y

0 remains in

B

for

t ≥

0 because

B

is positively invariant, or

y

0 the latter case, using Definition 2.2.2, we have

∈ B

in which case,

6∈ B y

(

t

)

. In

|| S

(

t

)

y

0

||

2

=

|| y

(

t

)

||

2

≤ − δt

+

|| y

0

||

2

≤ − δt

+

r

2

<

(

R

+

²

)

2

26 whenever

t > t

:=

r

2

(

R

+

²

)

2

δ

.

This shows that the dynamical system is dissipative and that

B

(0

, R

+

²

) is an absorbing set.

Figure 2.1: Proof of Theorem 2.2.21

27

2.3

Discrete Dynamical Systems

In this section we present dynamical systems generated by mappings from R

m

to R

m

. The definitions for discrete dynamical systems are in some sense analogous to those of continuous systems on the understanding that the time variable

t ∈

[0

, ∞

) is now replaced by the discrete variable

n ∈

N

.

Given this analogy, we shall be concise and focus only on the main tools that we need. Once again, [41] is our main reference where most of the concepts below can be found.

2.3.1

Generalities

Let

G

: R

m

R

m

. Consider a sequence

{ y n

}

∞ n

=0 defined recursively by

y n

+1

=

G

(

y n

)

.

(2.3.1)

y n

We refer to such a map or iterate as explicit mapping since

y n

+1 is given explicitly in terms of

y n

. Sometimes

y n

+1 explicit mapping of the form (2.3.1), but instead

y

through an implicit mapping of the form is not given by an

n

+1 is obtained from where

H

: R

m

×

R

m

R

m

.

H

(

y n

+1

, y n

) = 0

,

(2.3.2)

Remark 2.3.1.

For (2.3.1) uniqueness of the solution sequence

{ y n

}

is guaranteed due to the explicit nature of the map, whereas for (2.3.2) it is necessary to establish existence and uniqueness of a solution

y n

+1 when

y n

is given.

¥

Definition 2.3.2.

Equation (2.3.1) defines a discrete dynamical system on a subset E ⊆

R

m if, for every y

0

∈ E , the sequence { y n

}

∞ n

=0

is such that y n remains in E for all n ≥

0

.

¥

To deal with the problem of non-uniqueness when solving certain classes of implicit numerical methods, we consider (2.3.2) in the case when there may be multiple solutions. This motivates the following definition.

28

Definition 2.3.3.

Equation (2.3.2) defines a generalised discrete dynamical system on a subset E ⊆

R

m if, for every y

0

∈ E , there exists at least one sequence { y n

} in E that satisfies (2.3.2).

¥

As far as the connection between discrete dynamical systems and generalised discrete dynamical system is concerned, the Implicit Function Theorem can be a powerful tool that reads as follows.

Theorem 2.3.4.

Assume that H

: R

m class C

1

×

R

m that satisfies the following properties:

R

m is a function of

(i) H

R

(

m y

, y

) = 0

, where

(

y

, y

)

∈ there exist open neighbourhoods

R

. Furthermore, there exists a m

U

×

R

m

(ii) The determinant of the Jacobian matrix

C

1

R

m

³

is given;

∂H i

∂z j

(

×

R

m y

, y

)

zero, where

(

z

1

, z

2

, ··· , z

2

m

)

denotes the variable on

R

´

m

1

≤ i,j ≤ m

×

R

m is not

. Then of

(

y

, y

)

and V ⊂ function G

:

V →

R

m such that

(

y, x

)

∈ U solves H

(

y, x

) = 0

if and only if y

=

G

(

x

)

, x ∈ V.

Under these conditions and provided that the range of G is contained in V , { y n

} satisfying (2.3.2) is a generalised discrete dynamical system on V if and only if { y n

} given by (2.3.1) is a discrete dynamical system on V .

The evolution semigroup operator of the dynamical system is an operator

S n

, n ≥

0, that maps R

m

into itself and enjoys the usual semigroup properties [44]. We define this more precisely.

Definition 2.3.5.

We define the evolution semigroup operator for the discrete dynamical system in Definition 2.3.2 to be the map

S n

:

E → E such that y n

=

S n y

0

.

¥

The evolution semigroup operator has the properties that i.

y n

+

m

=

S n y m

=

S m y n

=

S n

+

m y

0

,

ii.

S

0

≡ I,

the identity operator.

∀ n, m ≥

0,

29

The discrete analogue of Gronwall inequality (Lemma 2.2.7) reads as as follows:

Lemma 2.3.6 (Gronwall Inequality).

Let a positive sequence { y n

}

N n

=0

satisfy y n

+1

≤ Cy n

+

D, ∀ n

= 0

, ..., N −

1

for some constants C and D with C >

0

. Then y n

D

1

− C

(1

− C n

) +

y o

C n

, ∀ n

= 0

, ..., N, C

= 1

and y n

≤ nD

+

y

0

∀ n

= 0

, ..., N, C

= 1

.

Assuming that (2.3.1) generates a discrete dynamical system on R

m

where

G

: R

m

1

R

m

is locally Lipschitz, the set

B ⊂

R

m

having the property

S B ⊂ B

, it is easy to check that the discrete dynamical system is continuous with respect to initial data in the following sense: there exists a constant

c >

0 depending on

B

such that

|| S

1

y

0

− S

1

z

0

|| ≤ c || y

0

− z

0

|| ∀ y

0

, z

0

∈ B.

(2.3.3)

In this work, we reflect (2.3.3) by using the terminology ”Lipschitz continuous discrete dynamical system” though the expression ”Lipschitz continuous” is often omitted. The continuity with respect to initial data stated above is to be linked to the zero-stability stated in

Definition 3.2.5 below.

2.3.2

Qualitative Properties

In the results stated below, we deal once and for all with a discrete dynamical system on R

m

defined by (2.3.1) and having evolution semigroup operator

S n

.

Definition 2.3.7.

A subset B ⊆

R

m is said to be

(a) positively invariant if S n

B ⊆ B for all n ≥

0

,

30

(b) negatively invariant if S n

B ⊇ B for all n ≥

0

,

(c) invariant if B is both positively and negatively invariant, i.e.

S n

B ≡ B for all n ≥

0

.

¥

Definition 2.3.8.

A point

˜

R

m is called a fixed point of the discrete dynamical system (2.3.1) if

˜ =

S n

˜

for all n ≥

0

.

¥

Definition 2.3.9.

Let

˜

R

m system. Then

˜

is said to be be a fixed point of the discrete dynamical

(i) stable if, for any ² >

0

, there exists δ

=

δ

(

²

)

>

0

such that if

|| y

0

˜

|| < δ , then || y n

˜

|| < ² for n ≥

0

.

(ii) asymptotically stable if (i) holds and in addition there exists

η >

0

such that, || y

0

˜

|| < η implies

lim

n →∞

|| y n

˜

||

= 0

;

(iii) unstable if (i) fails to hold.

¥

In order to easily investigate the stability of a fixed-point ˜ , we assume that the map

G

is of class

C

1 and we denote by

J

=

JG

(˜ ) the

Jacobian matrix of

G

at ˜ .

u n

+1

=

Ju n

, n

= 0

,

1

, ...,

(2.3.4) is then a linearisation of (2.3.1) around ˜ where the notation

u

=

y − y

is used as in the continuous case (see (2.2.5) - (2.2.7)).

Definition 2.3.10.

A fixed-point

˜

of the discrete dynamical system is said to be hyperbolic if no eigenvalues of the matrix J lie on the unit circle: | λ | 6

= 1

, ∀ λ ∈ σ

(

J

)

. Otherwise the fixed-point is called non-hyperbolic.

¥

Remark 2.3.11.

The map

G

in (2.3.1) is hyperbolic if all fixed points are hyperbolic.

¥

31

Theorem 2.3.12 (Hartman-Grobman).

Let G

: R

m

C

1

have a hyperbolic fixed point

˜

. Then there exist

δ >

R

0

m of class

, a neighbourhood N of the origin and a homeomorphism h

:

B

(˜ )

→ N such that h

(

G

(

y

0

)) =

Jh

(

y

0

)

for all y

0

∈ B

(˜ )

.

(2.3.5)

Consequently, by setting u n

=

h

(

y n

)

for all n ≥

0

,

(2.3.6)

the mapping (2.3.1) in the neighbourhood B

(˜ )

of y is equivalent to the mapping (2.3.4) in the neighbourhood N of the origin.

In practice, Theorem 2.3.12 is used as follows.

Theorem 2.3.13.

Let G

: R

m

R

m of class C

1

have a hyperbolic fixed point

˜

. Then y is asymptotically stable if and only if for u n

=

J n u

0

,

(2.3.7)

solution of (2.3.4) with || u

0

||

:=

|| y

0

˜

|| small enough, we have

lim

n →∞ u n

= 0

,

(2.3.8)

or equivalently,

| λ | <

1

, ∀ λ ∈ σ

(

J

)

.

(2.3.9)

The fixed-point is unstable if and only if there exists at least one λ ∈

σ

(

J

)

such that

| λ | >

1

, or

lim

n →∞

|| u n

||

=

∞ .

(2.3.10)

To conclude this chapter, we present the definition of a discrete version of a dissipative dynamical system.

Definition 2.3.14.

A dynamical system on

R

m

, is dissipative if there exists a bounded, positively invariant set B with the property that for any bounded set E ⊆

R

m

, there exists n

S n

E ⊆ B for all n > n

. The set B

=

n

(

B, E

)

0

such that is called an absorbing set.

¥

32

The way the stability and the dissipativity properties regarding discrete dynamical systems are crucial in our work will appear in Chapter

4 where our main contributions are presented.

Chapter 3

Finite Difference Methods

3.1

Introduction

This thesis is devoted to the study of numerical methods for dynamical systems. In this chapter, we give a short presentation of two classical methods, namely, the linear multi-step methods in Section 3.3, and the

Runge-Kutta method in Section 3.4. The numerical methods we use are required to be consistent, zero-stable and thus convergent: this is discussed in Section 3.2.

The numerical methods are also required to behave asymptotically like the solutions of the decay equation: this is the essence of the concept of absolute stability addressed in Section 3.5. Finally the numerical methods are expected to define discrete dynamical systems, a topic considered in Section 3.6.

The requirements for the linear multi-step methods and the Runge-

Kutta methods to be absolutely stable or to define a discrete dynamical system that is continuous with respect to initial data is subjected to a constraint on the step size ∆

t

. However, the analytical form of this constraint can be complex for practical use. For this reason, the analysis in Section 3.7 is restricted to theta methods, which will be the focus for the rest of this thesis.

33

34

The books by Lambert [22] and Stuart and Humphries [41] are our standard references where the concepts recalled below can be found.

3.2

Basic Concepts

We consider the initial value problem for the autonomous first-order ordinary differential equations defined by (2.2.1). Numerical methods of (2.2.1) are obtained by replacing the continuous interval [0

, ∞

) by equally-spaced grid points

t n

given by

t n

:=

n

t, n

= 0

,

1

,

2

, ...

(3.2.1)

t

being the stepsize. We denote by

y n

an approximation to the solution

y

(

t n

) of (2.2.1) at the point

t n

:

y n

≈ y

(

t n

)

.

(3.2.2)

The sequence

{ y n

of the form

}

∞ n

=0

φ

(∆

t, y

is obtained as solution of a difference equation

n

, y n

+1

, · · · , y n

+

k

) = 0

, k ∈

N

,

(3.2.3) coupled with appropriate initial conditions.

Thus to find the approximation of the iterates

y n

+

j y n

+

k

at the time

t n

+

k

we make use

, j

= 0

,

1

, ..., k

. If

k

= 1, the numerical method is called a

one-step method

, whereas if

k >

1 we have a

multi-step method

or a

k -step method

, see for instance [22].

The method (3.2.3) can be explicit in which case

y n

+

k

recursively from the previous iterates as follows: is determined

y n

+

k

=

φ

(∆

t, y n

, y n

+1

, · · · , y n

+

k −

1

)

.

Otherwise the method is implicit.

(3.2.4)

For the scheme (3.2.3) to be useful, the following minimum property of fixed station convergence is required.

35

Definition 3.2.1.

The difference method (3.2.3) is said to be convergent if for each fixed t

(0

, ∞

)

with t

=

t n

=

n

t , we have

lim

t →

0

|| y n

− y

(

t

)

||

= 0

.

¥

Remark 3.2.2.

With the notation of Definition 3.2.1 in mind, the uniform convergence of the scheme (3.2.3) above means that sup

t

|| y n

− y

(

t

)

|| →

0 as ∆

t →

0

.

¥

Since our concern is to approximate the differential equation (2.2.1), we are mostly interested in difference equations where (3.2.3) takes the following form:

D

t y n

=

F

t

(

f

;

y n

)

.

(3.2.5)

Following the notation in [7], Equation (3.2.5) is more convenient in that

D

t y n y

(

t

) and

F

approximates the derivative

Dy

(

t n

) of the exact solution

t

(

f

;

y n

) approximates

f

(

y

(

t

indicates that the dependence of

F

on

y n n

)). The notation

F

t

(

f

;

y n

) is through

f

(see, for example

[22]), with

f n

:=

f

(

y n

)

.

(3.2.6)

With (3.2.5), the following further concepts of interest can be mentioned:

Definition 3.2.3.

The difference method (3.2.5) is said to be consistent with problem (2.2.1) if the amount by which the exact solution y

(

t

)

fails to satisfy the discrete method is infinitely small. That is, for fixed t

lim

t →

0

|| D

t y

(

t n

)

− F

t

(

f

;

y

(

t n

))

||

= 0

,

[0

, ∞

)

with t

=

t n

=

n

t .

¥

36

The quantity

T n

(∆

t

) :=

D

t y

(

t n

)

− F

t

(

f

;

y

(

t n

))

,

is called the truncation error of the scheme (3.2.5).

(3.2.7)

Lemma 3.2.4.

The difference method (3.2.5) is consistent with (2.2.1) if and only if

F

0

(

f

;

y

) =

f

(

y

)

, y ∈

R

,

(3.2.8)

where

F

0

(

f

;

y

) := lim

t →

0

F

t

(

f

;

y

)

.

Definition 3.2.5.

The difference method (3.2.5) is said to be zerostable if there exist K >

0

and

t

0

>

0

such that for all

t ∈

(0

,

t

0

]

,

|| z n

˜

n

|| ≤ K² whenever || δ n

˜

n

|| ≤ ² for a given accuracy ² >

0

and any two perturbations δ n data in (2.2.1) resulting in perturbed solutions z n and

˜

n

.

and δ n of the

¥

A more convenient way of proving convergence is contained in the following result:

Theorem 3.2.6.

Consistency and zero-stability are necessary and sufficient conditions for the difference method (3.2.5) to be convergent.

3.3

Linear Multi-step Methods

To be more explicit with (3.2.5), let us consider two classical methods.

We first look at the class of linear multi-step methods of order

k ≥

1; they read as

37

α j y n

+

j

= ∆

t β j f n

+

j

, n

= 0

,

1

,

2

, ...,

(3.3.1)

j

=0

j

=0 where

α k

= 1 and

| α

0

|

+

| β

0 a particular method. If

β k

| >

0. The parameters

α j

and

β j

define

= 0 then the method is explicit, otherwise it is implicit . In terms of (3.2.7), the associated truncation error of

(3.3.1) is

T n

(∆

t

) :=

j

=0

α j y

(

t n

+

j

)

t j

=0

β j f

[

y

(

t n

+

j

)]

.

(3.3.2)

For the multi-step method (3.3.1), consistency and zero-stability can be expressed in terms of its first and second characteristic polynomials defined by

ρ

(

z

) =

j

=0

α j z j

, σ

(

z

) =

j

=0

β j z j

.

(3.3.3)

Indeed, we have the following result:

Theorem 3.3.1.

The method (3.3.1) is consistent if and only if ρ

(1) =

0

, and σ

(1) =

ρ

0

(1) = 0

. It is also zero-stable if and only if all roots of ρ

(

z

)

have modulus less than or equal to 1 and those with modulus 1 are simple.

In the framework of this thesis numerical solutions are required to preserve the essential properties of the exact solution. The main criticism of linear multi-step methods is that they need extra initial conditions for the method to work. This could create spurious or ghost solutions, a situation which is not desirable.

For this reason we shall focus on linear one-step schemes. Specifically the two-stage

θ

-method that reads as follows,

y n

+1

− y n

= ∆

t

(

θf n

+1

+ (1

− θ

)

f n

)

,

(3.3.4)

38 where

θ ∈

[0

,

1] is a given parameter. Note that for

θ

= 0, we have the simplest method, which is referred to as the forward Euler method:

y n

+1

=

y n

+ ∆

tf n

.

(3.3.5)

3.4

Runge-Kutta Methods

The second type of classical methods we look at are the so-called Runge-

Kutta methods which have the advantage of avoiding the cost of differentiation because they do not use derivatives of

f

and are one-step methods.

A general

k

-stage Runge-Kutta method for the solution of (2.2.1) is defined by

y n

+1

=

y n

+ ∆

t i

=1

b i k i

(3.4.1) where

k i

=

f

(

y n

+ ∆

t a ij k j

)

, i

= 1

,

2

, .., k.

j

=1

Runge-Kutta methods are often represented using the Butcher tableau

c A b

T

=

c

1

c

2

a a

11

21

a a

.

.

.

12

22

.

.

.

.

.

.

c k a k

1

b

1

a k

2

b

2 where we assume that the following holds:

. . . a

1

k

. . . a

2

k

.

.

.

. . . a kk

. . . b k

(3.4.2)

c i

=

j

=1

a ij

, i

= 1

,

2

, ..., k.

(3.4.3)

39

A more convenient form of (3.4.1) is

y n

+1

=

y n

+ ∆

t i

=1

b i f

(

Y i

)

,

where

Y i

=

y n

+ ∆

t j

=1

a ij f

(

Y j

)

, i

= 1

,

2

, ..., k.

(3.4.4)

(3.4.5)

Definition 3.4.1.

The numerical method (3.4.1) - (3.4.3) is said to be explicit if a ij

= 0

for all

1

≤ i ≤ j ≤ k

(3.4.6)

and implicit otherwise.

¥

It is clear that the Runge-Kutta method (3.4.1) can be written in the compact form (3.2.5) with

D

t y n

:=

y n

+1

− y n t

.

Consequently all the concepts introduced in Section 3.3 apply to Runge-

Kutta methods. For convenience, we state them below.

In view of Definition 3.2.3 and Lemma 3.2.4, the Runge-Kutta method is consistent with (2.2.1) if and only if

We will also use the notation

i

=1

b i

= 1

.

(3.4.7)

40

A = max

i j

=1

| a ij

|

=

|| A ||

,

(3.4.8) and

|| b ||

1

= B =

j

=1

| b i

| ≥

1

.

(3.4.9)

The first characteristic polynomial of the Runge-Kutta method (3.4.1) is

ρ

(

z

) =

z −

1 (3.4.10) and it always satisfies the zero-stability property or the root condition contained in Theorem 3.3.1. Consequently, Theorem 3.2.6 can be rephrased as follows:

Theorem 3.4.2.

The Runge-Kutta method is convergent if and only if it is consistent.

A specific Runge-Kutta method that we shall deal with is the one-stage

θ

-method that reads as follows:

y n

+1

− y n

= ∆

tf

(

θy

where

θ ∈

[0

,

1] is a given parameter.

n

+1

+ (1

− θ

)

y n

)

,

(3.4.11)

Note that the two-stage

θ

-method (3.3.4) that is of interest to us and was presented in Section 3.3 as a linear multi-step method is also a onestage Runge-Kutta method. In both cases the one-stage and two-stage methods with

θ

= 0, reduce to the forward Euler method (3.3.5).

3.5

Absolute Stability

A traditional way of testing the efficiency of a numerical method is to apply it to a single model differential equation. Let us consider the model differential equation

y

0

=

Jy, y

(0) =

y

0

,

(3.5.1)

41 where

J

is a constant

N × N

matrix with

λ s

being its eigenvalues counted with their multiplicity, and satisfying the condition

Reλ s

<

0

.

(3.5.2)

The reason for choosing

J

to be a matrix is the Hartman-Grobman

Theorem (Theorem 2.2.16), which shows that the local behaviour near a fixed-point which is assumed to be ˜ = 0 of the solution of the system (2.2.1) is given by the linearised equation (2.2.7) that has the form (3.5.1).

For simplicity, we assume that

J

is diagonalizable. Then there exists a transition matrix

Q

= [

q

1

q

2

· · · q

N

] such that

Q

1

JQ

= Λ := diag(

λ

1

, λ

2

, · · · , λ

N

)

.

(3.5.3)

If we make the change of dependent variable

y

=

Qz,

(3.5.4)

(3.5.1) is equivalent to

z

0

= Λ

z,

which is an uncoupled system of

N

equations

s z

0 j

=

λ s s z j

,

1

≤ j, s ≤ N,

having the solution

z

(

t

) =

The behaviour of the solution

s z j

(0)

e

λ s t

.

y

(

t

) =

e tJ y

0

,

(3.5.5)

(3.5.6)

(3.5.7)

(3.5.8) of (3.5.1) as

t → ∞

, is equivalent to that of the functions (3.5.7). In view of this behaviour of the solution we require the numerical solution to behave in a similar manner. Schemes producing such numerical solutions are roughly speaking called absolutely stable. Below we make this concept more precise for linear multi-step and Runge-Kutta methods.

42

3.5.1

Linear Multi-step Methods

We first discuss absolute stability in the context of linear multi-step methods. Following [22], we apply the linear multi-step method (3.3.1) to the system (3.5.1) to obtain

X

(

α j

I −

tβ j

J

)

y n

+

j j

=0

= 0

.

Using (3.5.3) - (3.5.4), (3.5.9) is equivalent to

(3.5.9)

X

(

α j

I −

tβ j

Λ)

z n

+

j j

=0

= 0

.

Since both

I

and Λ are diagonal matrices, we may write

(3.5.10)

X

(

α j j

=0

tβ j

λ s

)

s z n

+

j

= 0

,

1

≤ s ≤ N.

(3.5.11)

The general solution for each of the difference equations in (3.5.11) takes the form

" #

s z n

=

d s,

1

+

d s,j n

(

n −

1)

· · ·

(

n − j

+ 2)

r n s

,

(3.5.12)

s

=1

j

=2 where

d s,j

are arbitrary complex constants and ference equation

r s

X

(

α j

tβ j

λ s

)

r j

= 0

j

=0 with multiplicity

µ s

,

1

≤ s ≤ p

and

P

p s

=1

µ s

=

N

.

are roots of the dif-

(3.5.13)

We define the

stability polynomial π

(

r, λ s

t

) of the method (3.3.1) to be

π

(

r, λ s

t

) =

X

[

α j

− λ s

tβ j

]

r j

.

(3.5.14)

j

=0

43

This polynomial can conveniently be written in terms of the first and second characteristic polynomials

ρ

and

σ

as

π

(

r,

ˆ

) =

ρ

(

r

)

ˆ

(

r

)

,

(3.5.15) where ˆ :=

λ s

t

.

The stability polynomial

π

(

r,

ˆ

) with ˆ :=

λ s

t

permits us to better compare the solution (3.5.7) or (3.5.8) with the discrete solution (3.5.12) in the following manner.

Definition 3.5.1.

The linear multi-step method (3.3.1) is called absolutely stable for a given h , if for that

ˆ

all the roots of the stability polynomials lie within the unit circle. Otherwise the method is absolutely unstable.

¥

3.5.2

Runge-Kutta Methods

We now discuss absolute stability of a

k −

stage Runge-Kutta method, following [41]. Applying the Runge-Kutta method (3.4.4) - (3.4.5) to

(3.5.6), for each 1

≤ j ≤ N

, we have

l z j,n

+1

=

Z j

=

l z j,n

+

λ l

tbZ j l z j,n e

+

λ l

tAZ j

,

where

Z j

= [

Z j,

1

, Z j,

2

, ..., Z j,k

]

T

and

e ∈

R

k

with

e

= [1

,

1

, ...,

1]

T

.

(3.5.16)

(3.5.17)

Solving the above system in

Z j

gives

l z j,n

+1

=

l z j,n

[1 +

λ l

tb

(

I − λ l

tA

)

1

e

]

,

(3.5.18) where

I

is the

k × k

unit matrix.

Note that the matrix (

I − λ l

tA

) is nonsingular for ∆

t

small enough.

Unlike the linear multi-step method where we had a stability polynomial, here we have a stability function, namely

R

(

λ l

t

) = 1 +

λ l

tb

(

I − λ l

tA

)

1

e.

(3.5.19)

44

From (3.5.18) we obtain a one-step difference equation of the form

l z j,n

+1

=

R

(

λ l

t

)

l z j,n

.

(3.5.20)

Coming back to the model equation (3.5.1), we obtain from (3.5.20) and the change of variable in (3.5.3) and (3.5.4)

y n

+1

=

R

(∆

tJ

)

y n

,

with the matrix function

R

(∆

tJ

) :=

Q

diag(

R

(

λ l

t

))

Q

1

(3.5.21)

(3.5.22) being the stability function. Clearly,

y n

0 as

n → ∞

if and only if

|| R

(∆

t

)

J || <

1

.

(3.5.23)

The analysis above on Runge-Kutta methods motivates us to state the definition of absolute stability with

R

(

λ l

t

) as is the case with linear multi-step method.

t

) instead of

π

(

r, λ s

Definition 3.5.2.

The Runge-Kutta method (3.4.4) - (3.4.5) is said to be absolutely stable for a given λ

t, Reλ <

0

, if | R

(

λ

t

)

| <

1

.

¥

Remark 3.5.3.

Our expectation is of course to have both the linear multi-step and the Runge-Kutta methods absolutely stable for all

λ l

where

(3.5.1).

{ λ l

}

N l

=1 are the eigenvalues of the diagonalizable matrix

J

t

in

¥

We now give an alternative formula for the stability function which is more suitable for its calculation as is presented in [12].

Theorem 3.5.4.

The stability function of the Runge-Kutta method

(3.4.4) - (3.4.5) is given by

R

(

λ l

t

) =

det

(

I − λ l

tA

+

λ l

tb

T det

(

I − λ l

tA

)

e

)

.

(3.5.24)

45

Proof.

The relation (3.5.16) - (3.5.17) can be written as the algebraic linear system:

1

− λ l

ta

11

− λ l

ta

12

. . .

− λ l

ta

1

k

0

− λ l

ta

21

1

− λ l

ta

22

. . .

− λ l

ta

2

k

0

.

.

.

.

− λ

− λ l l

.

.

ta k

1

tb

1

− λ l

− λ l

.

.

ta k

2

tb

2

.

.

. . .

1

− λ l

. . .

− λ l

ta kk

tb k

.

.

0

1

Z j,

1

Z j,

2

.

.

.

l

Z j,k z j,n

+1

=

l l z j,n z j,n

.

.

.

l l z j,n z j,n

(3.5.25)

The denominator in (3.5.24) is given by the determinant of the matrix in (3.5.25). The numerator in (3.5.24) is the determinant of the matrix

1

− λ

− λ l l

ta ta

11

21

.

− λ l

1

− λ

ta l

.

12

ta

22

. . .

− λ l

. . .

.

− λ l

ta

1

k

ta

2

k l l z j,n z

.

j,n

− λ

− λ l l

.

.

ta

tb k

1

1

− λ

− λ l l

.

.

ta

tb k

2

2

.

.

. . .

1

. . .

− λ

λ l l

ta

tb k kk

.

.

l l z j,n z j,n

.

Indeed subtraction of the last row from the first

k

rows leaves this determinant invariant. Cramer’s rule expresses

l z j,n

+1 as the quotient of two determinants, so we arrive at

l z j,n

+1

= det(

I − λ l

tA

+

λ l

tb

T e

) det(

I − λ l

tA

)

l z j,n

,

(3.5.26) which establishes (3.5.24).

Remark 3.5.5.

When

A

is a strictly lower triangular matrix, the matrix

I − λ l

tA

is then lower triangular with all the elements of its main

46 diagonal being unity. It follows that det(

I − λ l

tA

) = 1 and for all explicit Runge-Kutta methods the stability function is a polynomial in

λ l

t

. For implicit methods det(

I − λ l

tA

) = 1 so that the stability function is a rational function of

λ l

t

.

¥

3.6

Numerical Methods as Dynamical Systems

We now turn our attention to discussing conditions under which numerical methods studied earlier in this chapter generate discrete dynamical systems. If numerical methods are to give useful information on

Lipschitz continuous dynamical systems, it is of paramount importance that these methods are viewed as discrete dynamical systems which are continuous with respect to initial data. In this way, we are comparing dynamical systems of the same nature.

A Runge-Kutta method, applied to (2.2.1), not only defines an approximation to the solution of (2.2.1), but can also define a discrete dynamical system. We start with an explicit Runge-Kutta method

(3.4.4)-(3.4.5) which, in view of Definition 3.4.1, can be written recursively as follows

y n

+1

=

S n

+1

t y

0

,

where

S

1

t y

0

=

y

0

+ ∆

t j

=1

b j f

(

g j

(

y

0

))

, g

1

(

y

0

) =

y

0

, g i

(

y

0

) =

y

0

+ ∆

t a ij f

(

g j

(

y

0

))

, i

= 2

,

3

, · · · , k.

j

=1

Assuming that

{ z n

method from

z

0

}

is another sequence generated by the Runge-Kutta

, it is easy to prove that

|| S

1

t y

0

− S

1

t z

0

|| ≤ c || y

0

− z

0

||

for some

c >

0 whenever

f

is locally Lipschitz. Consequently, we have the following result [41].

47

Theorem 3.6.1.

Let f

: R

m

R

m be locally Lipschitz. If the Runge-

Kutta method (3.4.4), (3.4.5) is explicit, then it defines a Lipschitz continuous discrete dynamical system on

R

m

.

However, for an implicit method the Runge-Kutta method need not be uniquely solvable, and hence an implicit Runge-Kutta method need not define a discrete dynamical system. To overcome this difficulty, we impose a condition on the step size ∆

t

as illustrated in the next result

[41].

Theorem 3.6.2.

Let f

: R

m constant L . Assume that

R

m be globally Lipschitz with Lipschitz

t <

1

L

A

,

(3.6.1)

where

A

is defined in (3.4.8). Then the Runge-Kutta method (3.4.4)-

(3.4.5) is uniquely solvable for Y i

, i

= 1

, .., k . More precisely, the solution can be found as a fixed point of the iteration:

Y i s

+1

=

y n

+ ∆

t j

=1

a ij f

(

Y j s

+1

) + ∆

t j

=

i a ij f

(

Y j s

)

,

(3.6.2)

for i

= 1

,

2

, ..., k, s

= 0

,

1

,

2

, ...

. Consequently, the Runge-Kutta method is a Lipschitz continuous discrete dynamical system on

R

m

.

Proof.

Consider other iterates

Z i s

+1 can be shown that of type (3.6.2) initiated at

Z i

0

. It

|| Y s

+1

− Z s

+1

|| ≤

tL

A

|| Y s

− Z s

||

where

Y s

and

Z s

denote vectors in R

mk

comprised of the

Y i s

, Z i s

R

m

The Contraction Mapping Theorem ([48], [49]) and (3.6.1) lead to the

.

first part of Theorem 3.6.2.

48

Assuming that

{ z n

}

is another sequence generated by the Runge-

Kutta method from

z

0

, (3.6.1) implies the existence of

c >

0 such that

|| y n

+1

− z n

+1

|| ≤ c || y n

− z n

|| ,

which shows the Lipschitz continuity of

{ y n

}

with respect to the initial data.

Regarding the linear multistep method (3.3.1), we re-write it in the form

y n

+

k

=

j

=0

[∆

tβ j f

(

y n

+

j

)

− α j y n

+

j

] + ∆

tβ k f

(

y n

+

k

)

.

(3.6.3)

In the setting of dynamical systems, we can say that the action of the evolution semigroup operator

S

1

t

on the data vector

Y n

:= [

y n

, y n

+1

, · · · , y n

+

k −

1

]

T

R

mk

is the vector

Y n

+1

:= [

y n

+1

, y n

+2

, · · · , y n

+

k

]

T

R

mk

.

That is,

Y n

+1

:=

S

1

t

Y n

.

Another sequence

{ z n

}

of the form (3.6.3), where

z n

+

k

is generated from (

z n

, z n

+1

, · · · , z n

+

k −

1

) is equally considered. Proceeding as for the

Runge-Kutta method, we obtain the following result [41].

Theorem 3.6.3.

Let f

: R

m

R

m be locally Lipschitz. Assume that the multi-step method (3.3.1) or (3.6.3) is explicit, i.e.

β k

= 0

. Then

(3.3.1) or (3.6.3) defines a Lipschitz continuous discrete dynamical system

R

mk

.

On the other hand, we assume that f is globally Lipschitz with Lipschitz constant L and that

t satisfies the condition

t <

1

| β k

| L

.

(3.6.4)

49

Then the linear multi-step method (3.6.3) defines a Lipschitz continuous discrete dynamical system on

R

mk for which the solution y n

+

k generated from the data

(

y n

, y n

+1

, · · · , y n

+

k −

1

)

can be found as a fixed point of the iteration y s

+1

n

+

k

=

j

=0

[∆

tβ j f

(

y n

+

j

)

− α j y n

+

j

] + ∆

tβ k f

(

y s n

+

k

)

, s

= 0

,

1

,

2

, ....

(3.6.5)

Remark 3.6.4.

For

f

: R

m

R

m

locally Lipschitz, the implicit Runge-

Kutta method and linear multi-step method, define Lipschitz continuous discrete dynamical systems under more restrictive conditions on

t

, (see for example, [41]).

¥

3.7

Theta Methods

The structure of the Runge-Kutta and linear multi-step methods in the previous sections has shown that a restriction must be placed on the step size ∆

t

if the methods are to provide acceptable approximations to the solution of the Lipschitz continuous discrete dynamical systems.

The expression of the restriction can be complex. For this reason, we will as from now focus the rest of the thesis to the theta methods.

Within this choice, it is our aim to better understand the said restriction in order to design in the next chapters non-standard schemes which are reliable for any value of ∆

t

.

For convenience we re-define the theta methods we mentioned in

Sections 3.3 and 3.4. Consider the parameter

θ ∈

[0

,

1]

.

The one-stage theta method for approximating (2.2.1) is defined by

y n

+1

− y n

t

=

f

[

θy n

+1

+ (1

− θ

)

y n

]; (3.7.1)

50 the two-stage theta method reads as follows:

y n

+1

t y n

=

θf

(

y n

+1

) + (1

− θ

)

f

(

y n

)

.

(3.7.2)

There are two specific values of

θ

for which both theta methods reduce to the same scheme. More precisely, for

θ

= 0, we have the forward explicit Euler method

y n

+1

− y n

t

=

f

(

y n

)

,

while

θ

= 1 yields the forward implicit Euler method

y n

+1

t y n

=

f

(

y n

+1

)

.

(3.7.3)

(3.7.4)

Note that the value

θ

=

1

2 in (3.7.1) and (3.7.2) corresponds to the so-called mid-point rule and trapezoidal rule, respectively.

When

θ

is different from 0 and 1, the one-stage and two-stage theta methods are still intimately related in the sense of the following theorem.

Theorem 3.7.1.

Let { v n

} then the sequence { y n

}

∞ n

=0

∞ n

=0

satisfy the one-stage theta method (3.7.1), given by y n

= (1

− θ

)

v n

+

θv n

+1

(3.7.5)

satisfies the two-stage theta method (3.7.2). Conversely, if { y satisfies the two-stage theta method (3.7.2). Then the sequence { v n n

}

}

∞ n

=0

∞ n

=0

given by v n

=

y n

tθf

(

y n

) (3.7.6)

satisfies the one-stage theta method.

51

Proof.

We follow the proof by Stuart and Humphries, [41] p 227, but in more detail. Re-writing (3.7.1) and (3.7.2), we have

v n

+1

=

v n

+ ∆

tf

[

θv n

+1

+ (1

− θ

)

v n

] (3.7.7)

y n

+1

=

y n

+ ∆

tθf

(

y n

+1

) + ∆

t

(1

− θ

)

f

(

y n

) (3.7.8) respectively. Let us fix

θ ∈

[0

,

1]. Suppose that two-stage theta method (3.7.8) and let

{ v n

}

∞ n

=0

{ y n

}

∞ n

=0 satisfies the be given by (3.7.6) then

v n

+1

=

y n

+1

tθf

(

y n

+1

)

.

(3.7.9)

Rearranging terms in (3.7.8) and using (3.7.9) yields

v n

+1

=

y n

+ ∆

t

(1

− θ

)

f

(

y n

) =

y n

+ ∆

tf

(

y n

)

tθf

(

y n

)

.

(3.7.10)

Substituting the right hand side of (3.7.9) into (3.7.10), we obtain

v n

+1

=

v n

+ ∆

tf

(

y n

)

.

Multiplying throughout by (1

− θ

) and using (3.7.10) produces

(3.7.11)

t

(1

− θ

)

f

(

y n

) =

v n

+1

− y n

.

(3.7.12)

Making minor manipulation in (3.7.12) the result (3.7.5) follows.

Replacing

y n

in (3.7.12) by

v n

stage

θ −

method (3.7.7) for

n ≥

0

.

shows that

{ v n

}

∞ n

=0 satisfies the one-

Conversely, if

{ v n

}

∞ n

=0 satisfies the one-stage can easily be shown that

{ y n

θ −

method (3.7.7), it

}

∞ n

=0 in (3.7.5) satisfies the two-stage

θ −

method (3.7.8). Indeed from (3.7.9) we have

y n

=

θv n

+1

From (3.7.7) we have

+ (1

− θ

)

v n

=

v n

+

θ

(

v n

+1

− v n

)

.

(3.7.13)

(3.7.14)

v n

+1

− v n

= ∆

tf

[

θv n

+1

+ (1

− θ

)

v n

]

.

Using (3.7.13) and (3.7.14), we get

v n

=

y n

tθf

(

y n

) which completes the proof in view of (3.7.5).

(3.7.15)

52

We saw in Section 3.3 that the two-stage theta method is a linear one-step method. We also saw in Section 3.4 that both the one-stage and the two-stage theta methods are Runge-Kutta methods with the

Butcher tableau

θ θ

1 and

0 0 0

1 1

− θ θ

1

− θ θ,

respectively. Consequently, we have the following result.

Theorem 3.7.2.

The one-stage and two-stage theta methods for approximating the initial value problem (2.2.1) are convergent. In the particular case of the forward Euler method, assuming that f

: R

m

R

m is globally Lipschitz, with Lipschitz constant L , and that the exact solution y

(

t

)

of class C

2

with bounded second derivative, we have the error estimate

|| y

(

t n

)

− y n

|| ≤ K

t

(

e

Lt n

1) (3.7.16)

for some constant K >

0

.

Proof.

Although this result is known, we provide the proof here because we will not come back to convergence issues when dealing later on with non-standard schemes. The first and the second characteristic polynomials of the two-stage theta method (3.7.2), viewed as a linear multi-step method, are

ρ

(

z

) =

z −

1 and

σ

(

z

) = (1

− θ

)+

θz

, respectively.

In view of Theorem 3.3.1, the two-stage theta method is consistent and zero-stable and thus convergent. Furthermore, since the two-stage and one-stage theta methods are equivalent (Theorem 3.7.1) the one-stage theta method is equally convergent.

53

Regarding the forward Euler method

y n

+1

− y n

= ∆

tf

(

y n

)

,

(3.7.17) we proceed as follows. By (2.2.1) and Taylor expansion of

y

(

t n

+1

) about

t

=

t n

, we obtain

y

(

t n

+1

) =

y

(

t n

) + ∆

tf

(

y

(

t n

)) +

(∆

t

)

2

2

y

00

(

ζ n

)

, t n

< ζ n

< t n

+1

.

(3.7.18)

Letting

e n

=

y

(

t n

)

− y n

and subtracting (3.7.17) from (3.7.18), produce:

e n

+1

=

e n

+ ∆

t

[

f

(

y

(

t n

))

− f

(

y n

)] +

(∆

2

t

)

2

y

00

(

ζ n

)

.

Using the Lipschitz and boundedness assumptions, we have

(3.7.19)

|| e n

+1

|| ≤ || e n

||

(1 +

L

t

) +

c

(∆

t

)

2

.

By the discrete Gronwall inequality (Lemma 2.3.6), we have

|| e n

|| ≤ c

(∆

t

)

2

|

1

(1 +

L

t

)

|

[(1 +

L

t

)

n

1] +

|| e

0

||

(1 +

L

t

)

n

from which the estimate (3.7.16) follows.

(3.7.20)

(3.7.21)

Remark 3.7.3.

One could be a bit more precise about the consistency of the theta methods used in the proof of Theorem 3.7.2 in the following way. Assuming that the exact solution

y

(

t

) is smooth enough and has bounded derivatives, the local truncation error

τ n

=

(

y

(

t n

+1

)

− y

(

t n

)

y

(

t n

+1

t

)

− y

(

t

t n

)

− θf

(

y

(

t n

+1

))

(1

− θ

)

f

(

y

(

t n

))

− f

[

θy

(

t n

+1

) + (1

− θ

)

y

(

t n

)] of the theta methods (3.7.1) and (3.7.2) have the asymptotic behaviour

(

τ n

=

O

(∆

t

) if

θ

=

O

((∆

t

)

2

1

2

) if

θ

=

1

2

.

(3.7.22)

¥

54

These are obtained by Taylor expansion of

y

(

t n

+1

) about

t

=

t n

. For example, for the two-stage method, we have the following.

τ n

=

[

y

(

t n

+1

)

− y

(

t n

t

)

− θf

(

y

(

t n

+1

))

(1

− θ

)

f

(

y

(

t

[

y

(

t n

) + ∆

tf

(

y

(

t n

)) + (∆

t

)

2

/

2

f

0 n

(

y

(

t n

)) + (∆

t

)

3

))

/

6

f

00

(

y

(

t n

)) +

O

(∆

t

4

)]

− y

(

t n

)

=

− θf

(

y

(

t n

))

− θ

tf

0

(

y

(

t n

))

− θ

(∆

t

)

2

/

t

2

f

00

(

y

(

t n

))

− f

(

y

(

t n

)) +

θf

(

y

(

t n

)) +

O

(∆

t

3

)

= (∆

t

)

/

2

f

0

(

y

(

t n

)) + (∆

t

)

2

/

6

f

00

(

y

(

t n

))

− θ

tf

0

(

y

(

t n

))

− θ

(∆

t

)

2

/

2

f

00

(

y

(

t n

)) +

O

(∆

t

3

)

= (1

/

2

− θ

)∆

tf

0

(

y

(

t n

)) + (1

/

6

− θ/

2)(∆

t

)

2

/

2

f

00

(

y

(

t n

)) +

O

(∆

t

3

) and from this (3.7.22) follows.

We conclude this chapter by discussing when the theta methods replicate some of the qualitative properties targeted in the previous sections for the underlying differential equation (2.2.1). Firstly, are theta methods discrete dynamical systems? A positive conditional answer is given in the next theorem [41].

Theorem 3.7.4.

Let f

: R

m

R

m be globally Lipschitz with Lipschitz constant L.

Assume that

t <

1

θL

.

(3.7.23)

Then the one-stage and two-stage theta methods define Lipschitz continuous discrete dynamical systems on

R

m

.

Proof.

The theorem follows from Theorem 3.6.1 and Theorem 3.6.2.

However, this can be proved directly given the simple structure of these schemes, as we show now. Let

{ y s n

+1

} s ≥

0 and

{

defined through the one-stage theta method by

z s n

+1

} s ≥

0 be two iterates

y s

+1

n

+1

=

y n

+ ∆

tf

[

θy s n

+1

+ (1

− θ

)

y n

]

z s

+1

n

+1

=

y n

+ ∆

tf

[

θz s n

+1

+ (1

− θ

)

y n

]

.

It is easy to check by using the Lipschitz property of

f

that

(3.7.24)

(3.7.25)

|| y s

+1

n

+1

− z s

+1

n

+1

|| ≤ θ

tL || y s n

+1

− z s n

+1

|| .

(3.7.26)

55

Under the condition (3.7.23), the Contraction Mapping Theorem shows that the one-stage theta method (3.7.1) for

θ

= 0 is uniquely solvable in R

m

, with its solution

y n

+1

R

m

being found as the fixedpoint of the iteration (3.7.24).

Furthermore, if

y n

+1

=

y n

+ ∆

tf

[

θy n

+1

+ (1

− θ

)

y n

] (3.7.27)

z n

+1

=

z n

+ ∆

tf

[

θz n

+1

+ (1

− θ

)

z n

]

,

(3.7.28) we have

|| y n

+1

− z n

+1

|| ≤

1 + (1

− θ

)

L

t

1

− θL

t

|| y n

− z n

|| ,

(3.7.29) which shows the Lipschitz continuity with respect to initial data. Thus the one- stage theta method is a discrete dynamical systems on R

m

.

Given the equivalence stated in Theorem 3.7.1 between the one-stage and the two-stage theta methods, we conclude that the two-stage theta method is equally a Lipschitz continuous discrete dynamical systems on R

m

.

Remark 3.7.5.

When

θ

= 0

,

there is no restriction on ∆

t

in (3.7.23).

Thus the forward Euler method is a discrete dynamical systems on R

m

.

Actually, the forward Euler method is a discrete dynamical systems on R

m

even when

f

is locally Lipschitz, (see Theorem 3.6.1). More generally, the theta methods can be discrete dynamical systems on R

m

under flexible structural assumptions on

f

(e.g. locally Lipschitz, onesided Lipschitz condition, etc). But we will not consider these aspects, which can be found in [41].

¥

The next qualitative property is related to the absolute stability of the theta methods. When applied to the model linear equation (3.5.1),

56 both the one-stage and two-stage theta methods reduce to

y n

+1

t y n

=

J

[

θy n

+1

+ (1

− θ

)

y n

]

.

(3.7.30)

Using the factorization (3.5.3) and the change of variable (3.5.4), (3.7.30) is equivalent to

y n

+1

=

Q

diag

·

1 + ∆

t

(1

− θ

1

tθλ

1

)

λ

1

, · · · ,

1 + ∆

t

(1

− θ

1

tθλ

N

)

λ

N

¸

Q

1

y n

.

(3.7.31)

The stability function (Definition 3.5.22 or 3.5.24) of the one-stage and the two-stage theta methods, viewed as Runge-Kutta methods, is then

R

(

λ

t

) =

1 + ∆

t

(1

− θ

)

λ

1

tθλ

,

(3.7.32) while the stability polynomial (Definition 3.5.14) of the two-stage theta method, as a linear multi-step method, is

π

(

r, λ

t

) = 1 +

λ

whose unique root is

r

=

R

(

λ

t

).

t

(1

− θ

)(1

− λ

)

,

(3.7.33)

For a complex number

λ

, with

Reλ <

0, we have

| R

(

λ

t

)

|

2

=

=

¯

¯

¯

¯

1 + ∆

t

(1

− θ

)

λ

1

tθλ

¯

¯

¯

¯

2

(1

t

(1

− θ

)

| Reλ |

)

2

+ (∆

t

)

2

(1

− θ

)

2

| Imλ |

2

=

(1 + ∆

tθ k Reλ |

) 2 + (∆

t

) 2

θ

2

| Imλ |

2

1 + 2∆

tθ | Reλ | −

2∆

t | Reλ |

+ (∆

t

)

2

(1

− θ

)

2

| λ |

2

1 + 2∆

tθ k Reλ |

+ (∆

t

) 2

θ

2

| λ |

2

.

(3.7.34)

Thus we have the following result.

57

Theorem 3.7.6.

For θ ∈

[

1

2

,

1]

, the one-stage and the two-stage theta methods are (unconditionally) absolutely stable for any λ

t with Reλ <

0

: according to the standard terminology the theta methods are A − stable in this case.

For θ ∈

[0

,

1

2

)

, the theta methods are (conditionally) absolutely stable for λ

t with Reλ <

0

, whenever

t <

2

| Reλ |

(1

2

θ

)

| λ |

2

.

(3.7.35)

The last qualitative property of our interest is the dissipativity of the theta methods when the underlying dynamical system (2.2.1) is dissipative. Our point of departure is the following classical result proved in [41].

Theorem 3.7.7.

Consider (2.2.1) as a dissipative dynamical system in the setting of Theorem 2.2.21, where R >

0

is given and let θ ∈

[

1

2

,

1]

Then for any

t >

0

, the one-stage and the two-stage theta methods de-

.

fine (generalised) dynamical systems which are dissipative in the sense of Definition 2.3.3: the closed ball B

(0

, R

+

δ

+ ∆

t

(1

− θ

)

M

)

is an absorbing set for any δ >

0

and M

:= sup

v ∈

¯

(0

,R

+

δ

)

|| f

(

v

)

|| .

In the particular case when the setting is that of Theorem 2.2.20 where

α and β are given and θ ∈

(

open ball B

(0

,

1

2

θ −

1

α

β

1

2

,

1]

, the above conclusion holds with any

+

δ

)

, δ >

0

, being an absorbing set that does however not depend on the step size

t .

Remark 3.7.8.

It follows from Theorem 3.7.7 that within the range

θ ∈

[0

,

1

2

), the dissipative property of the theta methods is not guaranteed.

We will try to remedy this in the next chapter.

¥

Chapter 4

Non-standard Finite Difference

Methods

4.1

Introduction

The first set of the main contributions of this thesis appear in this chapter. The chapter is based on the author’s publications [3], [5], [6], as well as on the technical report [4] that is under review.

We present in Section 4.2 generalities on the non-standard finite difference method. In Section 4.3, we analyze non-standard finite difference schemes that have no spurious fixed-points compared to the dynamical system under consideration, the linear stability/instability property of the fixed-points being the same for both the discrete and continuous systems. The schemes we study are non-standard variants of the theta methods presented in the previous chapter and they are constructed by using Mickens’ rule about the denominator of the discrete derivatives. We obtain a sharper condition for the elementary stability of the schemes. For more complex dynamical systems which are dissipative, we design schemes that replicate this property in Section 4.4. In a second step in Section 4.5, we consider a specific class of dynamical systems which is equivalent to the simplest model of Hamiltonian systems that occur in classical mechanics. We design a non-standard finite difference scheme that replicates the underlying principle of conversation of energy. Here we use Mickens’ rules about nonlocal approximation of nonlinear terms.

58

59

4.2

Generalities

The shortcomings of the classical numerical schemes, specifically theta methods, for being reliable discrete dynamical systems were pointed out in Chapter 3. It became clear that the time step size ∆

t

should be small enough if the schemes were to replicate qualitative properties of the exact solutions. The non-standard finite difference method introduced by Mickens [26], aims at preserving the qualitative properties at no cost with regard to the value of ∆

t

. The following definition is due to [7]:

Definition 4.2.1.

Assume that the solution of (2.2.1) satisfies some property P. The difference scheme (3.2.5) is called qualitatively stable with respect to the property P (or P -stable) if for all step sizes

t >

0

, the discrete solutions for (3.2.5) satisfy the properties P

.

¥

The term dynamic consistency with respect to

P

has been introduced recently and is sometimes used instead of that in Definition 4.2.1, (see

[29] and [30]).

Significant properties of solutions of differential equations are of great importance from the practical point of view. Such properties include among others: types of fixed points, oscillatory solution, monotonicity of solutions, and conservation of energy.

The ideal situation when the discrete scheme is stable with respect to any property of the exact solution is given in the next definition.

Definition 4.2.2.

The numerical method (3.2.5) for approximating

(2.2.1) is called an exact scheme whenever the difference equation

(3.2.5) and the differential equation (2.2.1) have the same general solutions at the discrete time t

=

t n

. In particular, with y

(

t

)

being the solution of the initial value problem (2.2.1), we have y n

=

y

(

t n

)

.

¥

At this stage, it is essential to consider exact schemes of two model scalar equations that come often in this thesis. These are the exponential growth equation

y

0

=

λy, y

(0) =

y

0

, λ

= 0 (4.2.1)

60 and the logistic equation

y

0

=

λy

(1

− y

)

, y

(0) =

y

0

, λ >

0

.

(4.2.2)

Notice that (4.2.1) was the test equation for absolute stability in Chapter 3, while (4.2.2) will appear in the Fisher equation in the next chapter. The solutions at time

t

=

t n

+1 of (4.2.1) and (4.2.2) are

y

(

t n

+1

) =

y

0

e

λt n

+1

(4.2.3) and

y

(

t n

+1

) =

y

0

e − λt n

+1

+ (1

− e − λt n

+1

)

y

0

,

(4.2.4) respectively. Setting

y n

:= equivalent form as follows

y

(

t n

) permits us to re-write (4.2.3) in an

y

(

t n

+1

)

− y

(

t n

) =

y

0

e

λt n

+1

− y

(

t n

)

=

y

0

e

λ

(

t n

+∆

t

)

− y

(

t n

)

=

y

(

t n

)

e

λ

t

=

y

(

t n

)

e

λ

t

− y

(

t n

)

− y

(

t n

)

=

λy

(

t n

)(

e

λ

t

1)

and we have

y n

+1

− e

λ

t

λ

1

y n

=

λy n

.

In the similar manner from (4.2.4), we have

(4.2.5)

y

(

t n

+1

) =

=

y

0

e

λt n

+1

1 +

y

0

e

λt n

+1

− y

0

y

(

t n

)

e

λ

t

1 +

y

(

t n

)

e

λ

t

− y

(

t n

)

.

Thus

y

(

t n

+1

) =

y

(

t n

)

e

λ

t y

(

t n

+1

)

− y

(

t n

) = (

e

λ

t

1)

− y

(

t y

(

t n n

)

y

(

t n

+1

)

e

λ

t

)(1

− y

(

t n

+1

))

y

(

t n

+1

)

− y

(

t n

)

(

e λ

t −

1)

=

λy

(

t n

)(1

− y

(

t n

+1

))

+

y

(

t n

)

y

(

t n

+1

)

61 which can be written as

y n

+1

− e λ

t −

1

λ y n

=

λy n

(1

− y n

+1

)

.

(4.2.6)

Equations (4.2.5) and (4.2.6) are exact schemes of (4.2.1) and (4.2.2), respectively. Mickens [26] established exact schemes for a substantial number of differential equations of applied sciences. For convenience,

Table 4.1 of exact schemes produced in [24] is incorporated here.

62

k m k m k m k m k m k m k m k m

63

The simple examples (4.2.1) and (4.2.3), (4.2.2) and (4.2.4) as well as Table 4.1 illustrate the need for the structure of the right hand side of the differential equation to be intrinsically reflected in the discrete schemes if they are required to replicate the qualitative properties of the solution of the differential equation. Equation (4.2.6) and similar equations in the table illustrate in addition the need of approximating nonlinear terms in a nonlocal way. These comments motivate the following definition due to [7].

Definition 4.2.3.

The difference method given by Equation (3.2.5) is called a non-standard finite difference method if at least one of the following conditions is satisfied:

• In the first order discrete derivative D

t y n

, the classical denominator

t is replaced by a nonnegative function φ

: (0

, ∞

)

(0

, ∞

)

satisfying

φ

(∆

t

) = ∆

t

+

O

[(∆

t

)

2

]

.

(4.2.7)

[e.g.

φ

(∆

t

) = 1

− e

t

, φ

(∆

t

) = (

e

λ

t

1)

/λ ].

• In the expression F

t

(

f, y n

)

, nonlinear terms are approximated in a nonlocal way, i.e., by suitable function of several points of mesh.

e.g.

y

2

(

t n

) ≈

y n

+1

y n

.

¥

In [26], Mickens set the following rules for the design of non-standard schemes:

Rule 1. The orders of the discrete derivatives should be equal to the orders of the corresponding derivatives of the differential equation.

Rule 2. Denominator functions for the discrete derivatives must, in general, be expressed in terms of more complicated functions of the step-sizes than those conventionally used.

Rule 3. Nonlinear terms should, in general, be replaced by nonlocal discrete representations.

64

Rule 4. Special conditions that hold for the solutions of the differential equations should also hold for the solutions of the finite difference scheme.

Rule 5. The scheme should not introduce extraneous or spurious solutions.

Remark 4.2.4.

For an overview on non-standard finite difference schemes, we refer the reader to [24], [35] and the edited volumes [17],

[28]. In the formal Definition 4.2.3 only two of five Mickens rules are needed because most of the other rules appear as properties of the differential equation with respect to which a discrete scheme might be qualitatively stable.

¥

Table 4.2:

Non-standard finite difference schemes d

2

y dt

2

+

y

+

βy

3

= 0

d

2

dt y

2

+

y

+

²y

2

= 0

d

2

dt y

2

+

y

=

²

(1

− y

2

)

dy dt y k

+1

2

y k

4 sin

2

+

y k −

1

(

2

t

)

y k

+1

2

y k

4 sin

2

(

+

y k −

1

t

2

)

y k

+1

2

y k

+

y k −

1

4 sin

2

(

t

2

)

+

y k

+

y k

+

y k

+

β

µ

+

²

sin

2

µ

4 sin

2

(∆

t

)

(

t

2

)

¶ sin

2

4 sin

2

(∆

t

)

(

2

t

)

y y

2

k

3

k

= 0

= 0

=

²

sin(∆

t

)

2 sin

(

t

2

)

(1

− y

2

k

)

µ

y k

cos(∆

t

)

y k −

1

2 sin

(

t

2

)

Remark 4.2.5.

The above-mentioned schemes were constructed by

Mickens [26], who placed the emphasis on the structure of the discrete derivative. An indication on how the nonlinear terms could be approached in a nonlocal way is given in Section 4.5. In the paper [24], the schemes given in Table 4.2 should have been listed as non-standard finite difference schemes instead of exact schemes of the corresponding differential equations. Their exact schemes are not known since these do not have explicit solutions.

¥

65

4.3

Elementary Stable Schemes

As it was mentioned in Chapter 3, the theta methods (3.7.1) and (3.7.2) are the point of departure of our study. The popularity of the theta methods, also referred to as the weighted average method, is due in large part to their simplicity making it easy to program and efficient on large problems [9]. In this section, we introduce elementary stable nonstandard theta methods and demonstrate their theoretical and practical power over the standard ones.

The terminologies we use here were clarified in Section 2.2.2 for continuous dynamical systems and in Section 2.2.3 for discrete dynamical systems. In particular, we assume, once and for all, that all fixedpoints ˜ of the dynamical systems (2.2.1) are hyperbolic in the sense of

Definition 2.3.10, each Jacobian of

f

at ˜ being denoted by

J

.

We would like to design for (2.2.1) numerical methods the solution of which replicate the qualitative properties of the fixed-points. We start with the following definition ([7], [26]):

Definition 4.3.1.

A difference scheme (3.2.5) for approximating (2.2.1) is called elementary stable if, for any value of the step size

t , its fixed-points

˜

are exactly those of the differential system (2.2.1), and these fixed-points for the difference scheme have the same linear stability/instability properties as for the differential system.

¥

In view of Definition 2.2.11 and Definition 2.3.8, it is clear that the theta methods (3.7.1) and (3.7.2) have no spurious fixed-points

y ∈

R

m

, the constant sequence

y n

0.

= ˜ is the solution of (3.7.1) or (3.7.2) if and only if

f y

) =

However, Theorem 3.7.6, shows that the classical theta methods are not elementary stable for

θ ∈

[0

,

), due to the constraint (3.7.35) on the value of ∆

t

when

λ

is an eigenvalue of

J

with

Reλ <

0. On the other hand, when

λ

is an eigenvalue of

J

with

Reλ >

0 and

θ ∈

(

1

2

,

1], we have from (3.7.32)

1

2

66

| R

(

λ

t

)

|

2

=

1 + 2∆

t

(1

− θ

)

| Reλ |

+ (∆

t

)

2

(1

− θ

)

2

| λ |

2

1

2∆

tθReλ

+ ∆

t

) 2

θ

2

| λ |

2

<

1 if and only if

t >

2

| Reλ |

(2

θ −

1)

| λ |

2

.

(4.3.1)

Under the condition (4.3.1), the discrete solution

{ y n

earised theta method (3.7.30) will tend to zero as

n → ∞

, while the solution

y

(

t

) =

e tJ y

0

}

of the linof the continuous linearization (2.2.7) diverges as

t → ∞

. This discrepancy in the linear stability/instability properties of fixed-points for the theta methods and the differential equation means that the theta methods are equally not elementary stable for

θ ∈

(

1

2

,

1]. For

θ

=

1

2

, the analysis above shows that the theta methods

(i.e. Trapezoidal rule and mid-point rule) are elementary stable. This explains why in what follows, we implicitly assume that

θ

=

1

2

.

Coming back to the general framework of the system (2.2.1), its dynamics will be captured by a fixed nonzero number

q ≥

max

½

| λ |

2

2

| Reλ |

;

λ ∈ E

¾

,

(4.3.2) where

E

=

[

{ σ

(

Jf

(˜ )); ˜

R

m

, f

(˜ ) = 0

}

(4.3.3) is the finite set of all the eigenvalues of the Jacobian matrix

Jf

(˜ ) of

f

at all fixed-points. We also consider a non-negative function

φ

satisfying the asymptotic relation (4.2.7) as well as the property

0

< φ

(

z

)

<

1

,

for

z >

0

.

(4.3.4)

A typical example is

φ

(

z

) = 1

− e

− z

.

(4.3.5)

With the number

q

in (4.3.2) and the function

φ

in (4.3.4), we associate the function

ψ

:=

φ

(

q

t

)

q

,

(4.3.6)

67 which satisfies (4.2.7). We are now in a position to introduce the following non-standard one-stage and two-stage theta methods:

y n

+1

− y n

ψ

(∆

t

)

=

f

[

θy n

+1

+ (1

− θ

)

y n

]

,

and

y n

+1

ψ

(∆

t

)

y n

=

θf

(

y n

+1

) + (1

− θ

)

f

(

y n

)

,

respectively. We have the following important result:

(4.3.7)

(4.3.8)

Theorem 4.3.2.

The non-standard theta method (4.3.7) and (4.3.8), where ψ is defined by (4.3.6) and (4.3.4), are elementary stable.

Proof.

As it was seen earlier for the classical schemes (3.7.1) and (3.7.2), the non-standard schemes (4.3.7) and (4.3.8) have no spurious fixedpoints compared to the system (2.2.1). The linearisation of the nonstandard schemes (4.3.7) and (4.3.8), about a fixed-point ˜ is

y n

+1

ψ

(∆

t

)

y n

=

J

[

θy n

+1

+ (1

− θ

)

y n

]

,

(4.3.9) instead of (3.7.30). Thus the stability function in (3.7.32) becomes

R

(

λ

t

) =

1 +

ψ

(∆

t

)(1

− θ

)

λ

1

− ψ

(∆

t

)

θλ

.

(4.3.10)

|

For

λ

=

λ

1

R

(

λ

t

)

|

2

+

ıλ

2

∈ E

, we have:

¯

¯

¯

¯

1 +

ψ

(∆

t

)(1

− θ

)

λ

1

− ψ

(∆

t

)

θλ

¯

¯

¯

¯

2

=

1 + 2

λ

1

φ

(

q

t

)(1

− θ

)

/q

+

| λ |

2

(

φ

(

q

t

))

2

(1

− θ

)

2

/q

2

1

2

λ

1

φ

(

q

t

)

θ/q

+

| λ |

2

(

φ

(

q

t

))

2

θ

2

/q

2

.

Let ˜ be a fixed-point of the differential equation (2.2.1). Two cases are possible. Firstly, ˜ can be linearly stable, which, by Theorem 2.2.16

68 and Remark 2.2.18, implies that

λ

1

Then by (4.3.4) and (4.3.2), we have:

<

0 for any eigenvalue

λ ∈ σ

(

J

)

.

| R

(

λ

t

)

|

2

=

1

2

| λ

1

| φ

(

q

t

)(1

− θ

)

/q

+

| λ |

2

(

φ

(

q

t

))

2

(1

− θ

)

1 + 2

| λ

1

| φ

(

q

t

)

θ/q

+

| λ |

2 (

φ

(

q

t

)) 2

θ

2

/q

2

<

1

2

| λ

1

| φ

(

q

t

)(1

− θ

)

/q

+

| λ |

2

φ

(

q

t

)(1

− θ

)

/q

2

2

/q

2

1

.

This shows that the fixed-point ˜ is linearly stable for the scheme (4.3.7) and (4.3.8) in view of Theorem 2.3.13. Secondly, the fixed-point ˜ of

(2.2.1) can be linearly unstable, i.e., there exists an eigenvalue

λ ∈ σ

(

J

) such that obtain

λ

1

>

0. Working out the above expression of

| R

(

λ

t

)

|

2

, we

1 + 2

λ

1

φ

(

q

t

)(1

− θ

)

/q

+

| λ |

2

(

φ

(

q

t

))

2

(1

− θ

)

2

/q

2

1

2

λ

1

φ

(

q

t

)

θ/q

+

| λ |

2 (

φ

(

q

t

)) 2

θ

2

/q

2

>

1 if and only if

2

λ

1

+

| λ |

2

φ

(

q

t

)

/q −

2

| λ |

2

φ

(

q

t

)

θ/q >

0

.

But

2

λ

1

+

| λ |

2

φ

(

q

t

)

/q −

2

| λ |

2

φ

(

q

t

)

θ/q ≥

2

λ

1

− | λ |

2

φ

(

q

t

)

/q

which, in view of (4.3.2) and (4.3.4), shows that

2

λ

1

− | λ |

2

φ

(

q

t

)

/q >

0

.

Thus the fixed-point ˜ is linearly unstable for the scheme (4.3.7) or

(4.3.8). We have thus proved that the schemes (4.3.7) and (4.3.8) are elementary stable.

Theorem 4.3.2 is given in [13], [14] in the particular case when

θ

= 0.

By construction and the way it is involved in the proof of Theorem 4.3.2, the relation (4.3.2) is the sharpest condition compared to those in the literature for capturing the dynamics of the differential equation (see

69 for example [7], [26]). Thus, Theorem 4.3.2 is theoretically interesting.

However, it is practically difficult to find

q

that meets the requirement

(4.3.2) since no lower bounds are available in general for the real parts

| Reλ |

of the eigenvalues of an arbitrary matrix. We want to overcome this difficulty. Following the idea in [25], it is convenient to use the identity

Reλ

= cos arg

λ

that, in view of (4.3.2), yields the relation

|

cos arg

λ | ≥

| λ |

2

q

for all

λ ∈ E.

(4.3.11)

The condition (4.3.2) in its equivalent form (4.3.11), implies a restriction on the location of the eigenvalues in the complex plane in the following precise way:

Theorem 4.3.3.

The condition (4.3.2) is equivalent to saying that the eigenvalues of all the matrices J are contained in some wedge in the complex plane, i.e.

E ⊂ W j

:=

{ λ ∈

C ;

|

cos arg

λ | ≥ j

2

} for some j ∈

[0

,

2]

.

(4.3.12)

Proof.

If

q

satisfies (4.3.2) and thus the inequality (4.3.11) holds, then we have the inclusion (4.3.12) with

j

:= min

{| λ |

;

λ ∈ E } q

.

Conversely, if (4.3.12) holds, then the number

q

:= max

{| λ |

;

λ ∈ E } j

satisfies (4.3.2).

In the following result, we present a somewhat refined version of the inclusion (4.3.12); the particular case when

j

= 1 was analysed in [6] and [25].

70

Theorem 4.3.4.

With a fixed real number

0

< j ≤

2

, we associate the wedges in the left and right hands complex plane defined by

W l

1

:=

{ λ ∈

C

; Reλ <

0

and |

cos arg

λ | ≥ j

2

}

(4.3.13)

and

W r

1

:=

{ λ ∈

C

; Reλ >

0

and |

cos arg

λ | ≥ j

2

} .

(4.3.14)

Let the dynamics of the differential equation be captured by a number q satisfying q ≥

max

{| λ |

;

λ ∈ E } j

.

(4.3.15)

Then, the non-standard theta methods (4.3.7) and (4.3.8) are elementary stable whenever we have the inclusions

E ⊂ W l

1

∪ { λ ∈

C ;

Reλ >

0

} for θ ∈

[0

,

1

2

) (4.3.16)

and

E ⊂ W r

1

∪ { λ ∈

C ;

Reλ <

0

} for θ ∈

(

1

2

,

1]

.

(4.3.17)

The region of elementary stability on the right hand side of (4.3.16) and (4.3.17) are shown on Fig 4.1 and Fig 4.2 for

j

= 1.

Proof.

The proof works as that of Theorem 4.3.2, observing that we have to consider four cases in (4.3.16) and (4.3.17).

More precisely, in view of (4.3.10), we have for

λ ∈

C

, Reλ <

0,

| R

(

λ

t

)

|

2

=

1

2

| λ | φ

(

q

t

)(1

− θ

)

/q |

cos arg

λ |

+

| λ |

2

(

φ

(

q

t

))

2

(1

− θ

)

2

/q

2

1 + 2

| λ | φ

(

q

t

)

θ/q |

cos arg

λ |

+

| λ |

2

(

φ

(

q

t

))

2

θ

2

/q

2

(4.3.18) while for

Reλ >

0

,

| R

(

λ

t

)

|

2

=

1 + 2

| λ | φ

(

q

t

)(1

− θ

)

/q |

cos arg

λ |

+

| λ |

2

(

φ

(

q

t

))

2

(1

− θ

)

2

/q

2

1

2

| λ | φ

(

q

t

)

θ/q |

cos arg

λ |

+

| λ |

2

(

φ

(

q

t

))

2

θ

2

/q

2

.

(4.3.19)

71

Consider now the case when

θ ∈

[0

,

1

2

) and let

λ ∈

hand side of (4.3.16). This means that either

λ ∈ W

E l

1 be in the right or

Reλ >

0

.

If

λ ∈ W l

1

,

then we have from (4.3.18)

| R

(

λ

t

)

|

2

<

1

2

| λ | φ

(

q

t

)(1

− θ

)

/q |

cos arg

λ |

+

| λ |

2

(

φ

(

q

t

))

2

(1

− θ

)

2

/q

2

<

1

2

| λ | φ

(

q

t

)(1

− θ

)

/q |

cos arg

λ |

+

j | λ | φ

(

q

t

)(1

− θ

)

/q

= 1 +

| λ | φ

(

q

t

)(1

− θ

)

/q

(

j −

2

|

cos arg

λ |

) by 4

.

3

.

15

1

.

If

Reλ >

0

,

then it follows from (4.3.19) and the fact that

θ ∈

[0

,

that

| R

(

λ

t

)

|

2

>

1

.

1

2

)

Consider finally the case when

θ ∈

(

1

2

,

1] and let

λ ∈

right hand side of (4.3.17), which means that either

λ ∈ W r

1

E

or be in the

Reλ <

0

.

When

λ ∈ W r

1

,

we use (4.3.19) and (4.3.15) to obtain

| R

(

λ

t

)

|

2

>

1

1

2

| λ | φ

(

q

t

)

θ/q |

cos arg

λ |

+

| λ |

2

(

φ

(

q

t

))

2

θ

2

/q

2

1

1

2

| λ | φ

(

q

t

)

θ/q |

cos arg

λ |

+

j | λ |

(

φ

(

q

t

))

θ/q

1

=

1 +

| λ | φ

(

q

t

)

θ/q

(

j −

2

|

cos arg

λ |

)

1

.

For

Reλ <

0, we infer directly from (4.3.18) and from

θ ∈

(

1

2

,

1] that

| R

(

λ

t

)

|

2

<

1

.

Thus, the non-standard theta methods (4.3.7) and (4.3.8) are elementary stable .

72

-1 -0.5

Figure 4.1: Region of elementary stability for

θ ∈

[0

,

1

2

)

-1

-0.5

Figure 4.2: Region of elementary stability for

θ ∈

[

1

2

,

1]

73

Remark 4.3.5.

Unlike (4.3.2), the choice of the number

q

in (4.3.15) is not so critical if the system is non-stiff. In practice, we may take

jq

:= max

k J

(

g

)( e )

k

,

where the supremum norm on R

m

.

k·k

is the matrix norm associated with

¥

Remark 4.3.6.

With the definition (4.3.13)-(4.3.14) of the wedges, the inclusions (4.3.16)-(4.3.17) for elementary stability of the scheme under consideration are in line with what is done in the classical theory of absolute stability of numerical methods for ordinary differential equations (see [22]). This observation permits us to link the extreme cases when

j

= 0 and

j

= 2 in (4.3.12) to the classical concepts of

A/A

0

stable schemes. In [25], the terminology

A −

and

A

0

-elementary stable schemes is used when

j

= 0 and

j

= 2, respectively.

Furthermore, from the comparative analysis in [25], it follows that the non-standard theta methods have much larger regions of absolute elementary stability than the standard ones. Some of the advantages of the non-standard theta methods over the standard ones are summarised in Table 4.3 where we recall that the case

θ

=

1

2 is excluded as the corresponding standard schemes preserve all the involved properties.

¥

Table 4.3:

Comparison between standard and non-standard θ -methods

Explicit

θ

-method

(

θ

= 0)

Std. Non-std.

No Yes Elementary stability

A

-Elementary stability No No

A

0

-Elementary stability No Yes

θ ∈

Implicit

θ

-method

(0

,

1

2

)

θ ∈

(

1

2

,

1]

Std. Non-std. Std. Non-std.

No Yes No Yes

No No

No Yes

Yes Yes

Yes Yes

Remark 4.3.7.

It should be noted that the non-standard theta methods (4.3.7) and (4.3.8) enjoy the consistency and convergence properties stated in Theorem 3.7.2 and Remark 3.7.3 for the classical schemes.

74

This is due to the property (4.2.7) that the denominator

ψ

satisfies.

The analogy of the error estimate (3.7.16) is proved in [6].

¥

To conclude this section, we consider an example that confirms the superiority of the non-standard approach over the standard one.

Example 4.3.8.

A typical example is the logistic equation

y

0

= 25

y

(1

− y

)

, y

(0) =

y

0

,

(4.3.20) whose exact solution and exact scheme are

y

(

t

) =

y

0

+ (1

y

0

− y

0

)

e

25∆

t

,

(4.3.21) and

y n

+1

− y n e

25∆

t

25

1

= 25

y n

(1

− y

The forward Euler difference scheme yields

n

+1

)

.

y n

+1

t y n

= 25

y n

(1

− y n

)

.

(4.3.22)

(4.3.23)

In accordance with (4.3.7) - (4.3.8) for

θ

= 0, we introduce the nonstandard scheme

y n

+1

− y

1

− e

25∆

t

25

n

= 25

y n

(1

− y n

)

.

(4.3.24)

The exact solution for the logistic equation, the Euler forward difference scheme (4.3.23) and the non-standard finite difference schemes

(4.3.24) are visualised in Fig. 4.3 for ∆

t

= 0

.

01, Fig. 4.4 for ∆

t

= 0

.

067, and Fig. 4.5 for ∆

t

= 0

.

067, respectively using various initial conditions.

On comparing Figures 4.3, 4.4, and 4.5, it is evident that the nonstandard scheme in Fig.4.5 gives a more reliable simulation of the exact solution in Fig.4.3 than the standard Euler Scheme in Fig. 4.4.

1.4

1.2

1

0.8

1.8

1.6

0.6

0.4

0.2

0 0.1

0.2

0.3

0.4

0.5

0.6

0.7

0.8

0.9

1

Figure 4.3: Exact solution for the logistic equation.

1.3

1.2

1.1

1

0.9

0.8

0.7

0.6

0.5

0.4

0.0

0.1

0.2

0.3

0.4

0.5 0.6

0.7

0.8

0.9

1.0

Figure 4.4: Standard Euler scheme for the logistic equation.

1.2

1.1

1

0.9

0.8

0.7

0.6

0.5

0.4

0.3

0.2

0.0

0.1

0.2

0.5

0.6

0.7

0.8

0.9 1.00

Figure 4.5: Non-standard Euler scheme for the logistic equation.

75

76

4.4

Dissipative Non-standard Theta Methods

This section is a follow up of the concern mentioned in Remark 3.7.8

regarding the dissipativity of theta methods for

θ ∈

(

1

2 and we state it here for convenience.

θ ∈

[0

,

1

2

). Firstly, when

,

1], Theorem 3.7.7 carries over easily to the non-standard setting

Theorem 4.4.1.

Consider (2.2.1) as a dissipative dynamical system in the setting specified in Theorem 3.7.7. We approximate this dynamical system by the non-standard theta methods (4.3.7) or (4.3.8) where the only requirement on the denominator ψ

(∆

t

)

is the asymptotic behaviour

(4.2.7). Then, for θ ∈

(

1

2

,

1]

, these non-standard schemes are dissipative in the sense of Definition 2.3.3. The absorbing sets are those given in Theorem 3.7.7 on the understanding that

t is replaced by ψ

(∆

t

)

wherever this is applicable.

Regarding the case when

θ ∈

[0

,

1

2

), we managed to deal with the marginal case

θ

= 0. More precisely, we show in what follows how the non-standard approach can help to successfully modify the simple

Euler method so that it is dissipative.

To this end, we suppose that

f

: R

m

R

m

satisfies the structural assumption (2.2.12) involving

α >

0 and

β >

0 and we assume without loss of generality that

β <

1

.

(4.4.1)

Furthermore, we assume that there exist positive constants

γ

and

c >

1 such that, for every

y ∈

R

m

:

|| f

(

y

)

||

2

≤ γ

+

c || y ||

2

.

(4.4.2)

Remark 4.4.2.

The condition (4.4.2) holds if the function

f

is Lipschitz, which is one of the widely used requirement for (2.2.1) to define a dynamical system on R

m

.

¥

We have the following important result:

77

Theorem 4.4.3.

For θ

= 0

, the non-standard finite difference scheme

(4.3.7) or (4.3.8) where ψ

(∆

t

)

is given by (4.3.6) with q

:=

c

β

, is a dissipative dynamical system.

Proof.

From (4.3.7) or (4.3.8) with

θ

= 0, we have

y n

+1

ψ

(∆

t

)

y n

=

f

(

y n

)

.

Multiplying (4.4.3) by

y n

+1

, we obtain

h y n

+1

− y n

, y n

+1

i

ψ

(∆

t

)

=

h f

(

y n

)

, y n

+1

i .

(4.4.3)

(4.4.4)

We use (4.4.3) on the left hand side of (4.4.4) and on the right hand side we apply

h u − v, u i

=

1

2

(

|| u ||

2

− || v ||

2

+

|| u − v ||

2

)

.

1

2

ψ

(∆

t

)

(

|| y n

+1

||

2

1

2

ψ

(∆

t

)

(

|| y n

+1

||

2

− || y

− || n

||

2

y n

||

2

+ (

ψ

+

(∆

|| t y n

+1

))

2

− y n

||

2

) =

h f

(

y n

)

, y n i

+

h f

(

y n

)

, y n

+1

− y n i

|| f

(

y n

)

||

2

) =

h f

(

y n

)

, y n i

+

ψ

(∆

t

)

h f

(

y n

)

, f

(

y n

)

i

=

h f

(

y n

)

, y n i

+

ψ

(∆

t

)

|| f

(

y n

)

||

2

.

From (2.2.12), (4.3.4), (4.4.1) and (4.4.2), we obtain

|| y n

+1

||

2

− || y n

||

2

ψ

(∆

t

)

2

α −

2

β || y

<

2

α

+

βγ c

− n

||

2

β ||

+

y n

β

|| c

2

φ

(

c

t/β

)(

γ

+

c || y n

||

2

)

.

Thus

|| y n

+1

||

2

<

µ

2

α

+

βγ

c

ψ

(∆

t

) + [1

− βψ

(∆

t

)]

|| y n

||

2

.

Applying the discrete Gronwall inequality (Lemma 2.3.6) yields

|| y n

||

2

µ

2

α

β

+

γ c

[1

(1

− βψ

(∆

t

))

n

] +

|| y

0

||

2

(1

− βψ

(∆

t

))

n

.

78

Thus lim sup

n →∞

|| y n

||

2

2

α

β

+

γ c

0

,

2

α

β

+

γ c

+

²

being an absorbing set for every

² >

0.

We have up to this point demonstrated numerically the power of the non-standard finite difference schemes over the standard ones as far as elementary stability is concerned. We now turn our attention on the dissipative property by considering two examples.

Example 4.4.4. We consider the dynamical system defined by
$$\frac{dy_1}{dt} = 1 + 5y_2 - y_1, \qquad (4.4.5)$$
$$\frac{dy_2}{dt} = 1 - 5y_1 - y_2, \qquad (4.4.6)$$
whose fixed point is $\left(\tfrac{3}{13}, -\tfrac{2}{13}\right)$. The right hand side of the system is the vector function
$$f(y) = \begin{pmatrix} 1 + 5y_2 - y_1 \\ 1 - 5y_1 - y_2 \end{pmatrix},$$
which satisfies the structural assumptions (2.2.12) and (4.4.1) in the following precise form:
$$\langle f(y),\, y\rangle = (1 + 5y_2 - y_1)y_1 + (1 - 5y_1 - y_2)y_2 = y_1 + y_2 - \|y\|^2 \le \tfrac{1}{2}(1 + y_1^2) + \tfrac{1}{2}(1 + y_2^2) - \|y\|^2 = 1 - \tfrac{1}{2}\|y\|^2.$$
Hence $\alpha = 1$ and $\beta = \tfrac{1}{2}$. Furthermore, the norm of $f(y)$ can be estimated as follows:
$$\begin{aligned}
\|f(y)\|^2 &= (1 + 5y_2 - y_1)^2 + (1 - 5y_1 - y_2)^2\\
&= 2 + 26(y_1^2 + y_2^2) + 8y_2 - 12y_1\\
&\le 2 + 26\|y\|^2 + \left(5y_2^2 + \tfrac{16}{5}\right) + \left(5y_1^2 + \tfrac{36}{5}\right)\\
&\le 13 + 31\|y\|^2.
\end{aligned}$$
Hence, the requirement (4.4.2) is met with $\gamma = 13$ and $c = 31$. With $\phi(z) = 1 - e^{-z}$, the non-standard scheme considered in Theorem 4.4.3 reads as
$$\frac{y_{n+1} - y_n}{(1 - e^{-q\Delta t})/q} = f(y_n), \qquad (4.4.7)$$
where $q = \frac{c}{\beta} = 62$. Taking the step size $\Delta t = 0.1$, Fig. 4.6 and Fig. 4.7 give the phase diagrams of the numerical solutions of the system (4.4.5)-(4.4.6) by the non-standard finite difference scheme (4.4.7) using the initial conditions $y(0) = (10, 10)$ and $y(0) = (\pm 10, \pm 10)$, respectively. The dissipativity of the scheme is apparent.
For comparison, we apply to the system (4.4.5)-(4.4.6) the standard forward Euler method (3.7.17) with the same step size and initial condition $y(0) = (10, 10)$. The phase diagram of the numerical solution given in Fig. 4.8 is not indicative of dissipativity.

Figure 4.6: Dissipative non-standard scheme.
Figure 4.7: Further dissipative non-standard scheme.
Figure 4.8: Non-dissipative standard forward Euler scheme.
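The qualitative behaviour shown in the figures above can be reproduced with a few lines of Python. The sketch below is illustrative only (it is not the code used to produce the figures, and the output format is an assumption); it integrates the system (4.4.5)-(4.4.6) with the non-standard scheme (4.4.7) and with the standard forward Euler method, and prints the norms of the iterates so that the contraction towards the absorbing ball is visible while the standard iterates grow.

import numpy as np

def f(y):
    """Right-hand side of (4.4.5)-(4.4.6)."""
    y1, y2 = y
    return np.array([1.0 + 5.0 * y2 - y1, 1.0 - 5.0 * y1 - y2])

dt, q = 0.1, 62.0                       # step size and q = c/beta from Example 4.4.4
psi = (1.0 - np.exp(-q * dt)) / q       # non-standard denominator of (4.4.7)
y_ns = np.array([10.0, 10.0])           # non-standard Euler iterate
y_st = np.array([10.0, 10.0])           # standard forward Euler iterate

for n in range(1, 201):
    y_ns = y_ns + psi * f(y_ns)         # scheme (4.4.7)
    y_st = y_st + dt * f(y_st)          # standard forward Euler
    if n % 50 == 0:
        print(f"n={n:4d}  ||y_ns||={np.linalg.norm(y_ns):10.4f}"
              f"  ||y_st||={np.linalg.norm(y_st):12.4e}")

The non-standard iterates approach the fixed point $(\tfrac{3}{13}, -\tfrac{2}{13})$, well inside the absorbing ball of Theorem 4.4.3, whereas the standard forward Euler iterates slowly diverge for this step size.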

Example 4.4.5. We consider the dynamical system defined by
$$y_1' = -y_1 - 5y_2 + \frac{y_1}{\sqrt{y_1^2 + y_2^2}}, \qquad (4.4.8)$$
$$y_2' = 5y_1 - y_2 + \frac{y_2}{\sqrt{y_1^2 + y_2^2}}. \qquad (4.4.9)$$
Once again the conditions (2.2.12) and (4.4.1) hold. Indeed, for the right hand side
$$f(y) = \begin{pmatrix} -y_1 - 5y_2 + \dfrac{y_1}{\sqrt{y_1^2 + y_2^2}} \\[2mm] 5y_1 - y_2 + \dfrac{y_2}{\sqrt{y_1^2 + y_2^2}} \end{pmatrix},$$
we have
$$\langle f(y),\, y\rangle = \left(-y_1 - 5y_2 + \frac{y_1}{\sqrt{y_1^2 + y_2^2}}\right)y_1 + \left(5y_1 - y_2 + \frac{y_2}{\sqrt{y_1^2 + y_2^2}}\right)y_2 = \|y\| - \|y\|^2 \le \tfrac{1}{2}\left(1 + \|y\|^2\right) - \|y\|^2 = \tfrac{1}{2} - \tfrac{1}{2}\|y\|^2,$$
i.e., $\alpha = \tfrac{1}{2}$ and $\beta = \tfrac{1}{2}$ in (2.2.12). Furthermore,
$$\|f(y)\|^2 = \left(-y_1 - 5y_2 + \frac{y_1}{\sqrt{y_1^2 + y_2^2}}\right)^2 + \left(5y_1 - y_2 + \frac{y_2}{\sqrt{y_1^2 + y_2^2}}\right)^2 = 1 + 26(y_1^2 + y_2^2) - 2\sqrt{y_1^2 + y_2^2} \le 1 + 26\|y\|^2.$$
Hence (4.4.2) holds with $\gamma = 1$ and $c = 26$. Then the non-standard scheme considered in Theorem 4.4.3 is given by (4.4.7), where $q = \frac{c}{\beta} = 52$. We take $\Delta t = 0.1$ and $y(0) = (5, 5)$ or $y(0) = (0.1, 0)$. On Fig. 4.9 and Fig. 4.10 one can observe that the non-standard numerical solutions eventually belong to the absorbing set $B(0, 1.4277\ldots + \epsilon)$ given in Theorem 4.4.3. The ball with radius 1.55 is plotted on the figures by a dotted line. The numerical solution on Fig. 4.9 originates outside this ball and enters it after a certain number of time steps, while the numerical solution on Fig. 4.10 originates inside the ball and does not leave it. We notice from Fig. 4.11 that the standard Euler method with the same step size and initial condition $y(0) = (0.1, 0)$ is not dissipative.

Figure 4.9: Dissipative non-standard scheme.
Figure 4.10: Another dissipative non-standard scheme.
Figure 4.11: Non-dissipative standard scheme.
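A similar sketch (again illustrative, not the thesis code; the iteration count is an arbitrary choice) shows the non-standard iterates of (4.4.8)-(4.4.9) being attracted to a neighbourhood of the unit circle: the radius of the iterate settles close to 1 whether the orbit starts outside or inside the circle.

import numpy as np

def f(y):
    """Right-hand side of (4.4.8)-(4.4.9)."""
    y1, y2 = y
    r = np.hypot(y1, y2)
    return np.array([-y1 - 5.0 * y2 + y1 / r, 5.0 * y1 - y2 + y2 / r])

dt, q = 0.1, 52.0                      # q = c/beta from Example 4.4.5
psi = (1.0 - np.exp(-q * dt)) / q      # non-standard denominator of (4.4.7)

for y0 in (np.array([5.0, 5.0]), np.array([0.1, 0.0])):
    y = y0.copy()
    for _ in range(500):
        y = y + psi * f(y)             # non-standard scheme (4.4.7)
    print(f"y0 = {y0},  ||y_500|| = {np.linalg.norm(y):.4f}")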

Remark 4.4.6. The absorbing sets in Examples 4.4.4 and 4.4.5 are determined by two different kinds of global attractors. In Example 4.4.4 the attractor is a hyperbolic fixed point, a case which can also be dealt with through the concept of linear stability. More precisely, equating both (4.4.5) and (4.4.6) to zero, we arrive at the fixed point $(y_1, y_2) = \left(\tfrac{3}{13}, -\tfrac{2}{13}\right)$. The Jacobian matrix of the system (4.4.5)-(4.4.6) is
$$J(y_1, y_2) = \begin{bmatrix} -1 & 5 \\ -5 & -1 \end{bmatrix}. \qquad (4.4.10)$$
The eigenvalues of $J\!\left(\tfrac{3}{13}, -\tfrac{2}{13}\right)$ are $\lambda = -1 \pm 5j$, so that $\mathrm{Re}\,\lambda = -1$ for all $\lambda \in \sigma(J)$. Since $\mathrm{Re}\,\lambda < 0$, we have a linearly stable fixed-point by Theorem 2.2.17.
However, linear stability does not yield results for Example 4.4.5, since the system does not have fixed points. In fact, it can be shown that the unit circle is a global attractor for this system. Notice that (see [41]) a set $A$ is said to be an attractor if it is compact and invariant and attracts a neighbourhood of itself. Furthermore, a compact invariant set $A$ is a global attractor for the semigroup operator $S(t)$ if it is an attractor which attracts every bounded set in $\mathbb{R}^m$. Note also that the global attractor of a dynamical system is unique if it exists. The terminology local attractor is sometimes used for attractors which are not global attractors. □

4.5 Energy Preserving Discrete Schemes

So far, our study has been concerned with systems (2.2.1) having only hyperbolic fixed-points. Non-standard schemes for such systems were designed by using mainly the first part of Definition 4.2.3 on the renormalization of the denominator $\Delta t$ of the discrete derivative.
In this section, we consider the specific system
$$\frac{dy_1}{dx} = y_2, \qquad \frac{dy_2}{dx} = -r(y_1), \qquad (4.5.1)$$
where it is assumed that $(0, 0)$ is the only fixed-point and that the smooth function $r : \mathbb{R} \to \mathbb{R}$ satisfies $r(0) = 0$ and $r'(0) = 1$. The eigenvalues of the corresponding Jacobian matrix
$$J = \begin{bmatrix} 0 & 1 \\ -1 & 0 \end{bmatrix} \qquad (4.5.2)$$
are $\lambda_{1,2} = \pm i$, and this shows that the fixed-point $\tilde{y} = (0, 0)$ is nonhyperbolic. Consequently, the analysis of the previous sections does not apply. Nevertheless, by using the change of dependent variable
$$(y_1, y_2) = (u, u'), \qquad u' \equiv \frac{du}{dx}, \qquad (4.5.3)$$
the system (4.5.1) is equivalent to the scalar equation
$$\frac{d^2u}{dx^2} + r(u) = 0, \qquad (4.5.4)$$
which is the simplest model of the Hamiltonian systems that occur in classical mechanics. Equation (4.5.4) is indeed equivalent to
$$H(u(x), u'(x)) = \frac{1}{2}(u')^2 + K(u) = \text{constant}, \qquad (4.5.5)$$
where
$$K(u) = \int r(u)\, du. \qquad (4.5.6)$$
Physically, $H$ represents the sum of the kinetic energy and the potential energy of the mechanical system, and (4.5.5) is the statement of the conservation of energy ([41], p. 200). Consequently, (4.5.5) is one of the more important features of the system (4.5.1). Our aim is to derive finite difference methods which are stable with respect to the principle of conservation of energy. We will see that the approximation in a non-local way of nonlinear terms plays an essential role in achieving this aim.
Equation (4.5.4) is coupled with the initial conditions
$$u(0) = u_0 \quad \text{and} \quad u'(0) = v_0. \qquad (4.5.7)$$
Let $u$ be a solution of (4.5.4) or (4.5.5). Fix a point $x^*$ that can be written in the form $x^* = m\Delta x = x_m$ for different values of $m \in \mathbb{Z}$ and of the space step size $\Delta x$. Let $\gamma$ be a real-valued function on $\mathbb{R}^3$ that meets the consistency condition
$$\lim_{\Delta x \to 0,\; m\Delta x = x^*} \gamma\big(u(x_{m-1}), u(x_m), u(x_{m+1})\big) = r(u(x^*)) \qquad (4.5.8)$$
as well as the symmetry property
$$\gamma(u_{m-1}, u_m, u_{m+1}) = \gamma(u_{m+1}, u_m, u_{m-1}). \qquad (4.5.9)$$
The notation used here is self-explanatory: $u_m$ is an approximation of the solution $u$ at the grid point $x_m$.

Theorem 4.5.1. Let $\psi$ be a function satisfying (4.2.7). The non-standard finite difference scheme
$$\frac{u_{m+1} - 2u_m + u_{m-1}}{(\psi(\Delta x))^2} + \gamma(u_{m-1}, u_m, u_{m+1}) = 0 \qquad (4.5.10)$$
for (4.5.4) is equivalent to the discrete principle of conservation of energy
$$\frac{1}{2}\left(\frac{u_{m+1} - u_m}{\psi(\Delta x)}\right)^2 + K_{\Delta x}(u_m) = \frac{1}{2}\left(\frac{u_m - u_{m-1}}{\psi(\Delta x)}\right)^2 + K_{\Delta x}(u_{m-1}), \qquad (4.5.11)$$
where the discrete potential energy is given by
$$K_{\Delta x}(u_m) = \begin{cases} 0 & \text{if } m = 0,\\[1mm] \displaystyle\sum_{i=1}^{m} \frac{(u_{i+1} - u_{i-1})\,\gamma(u_{i-1}, u_i, u_{i+1})}{2} & \text{if } m > 0,\\[3mm] \displaystyle\sum_{i=1}^{|m|} \frac{(u_{m-1+i} - u_{m+1+i})\,\gamma(u_{m+1+i}, u_{m+i}, u_{m-1+i})}{2} & \text{if } m < 0. \end{cases} \qquad (4.5.12)$$

Proof. A discrete principle of conservation of energy has the form
$$V_{\Delta x}(u_m) = V_{\Delta x}(u_{m-1}), \qquad \forall\, m \ge 1, \qquad (4.5.13)$$
with the discrete energy
$$V_{\Delta x}(u_m) = \frac{1}{2}\left(\frac{u_{m+1} - u_m}{\psi(\Delta x)}\right)^2 + K_{\Delta x}(u_m), \qquad (4.5.14)$$
where $K_{\Delta x}(u_m)$ is given by (4.5.12). Expansion and simple manipulation of (4.5.11) yield the equivalent relation
$$\frac{u_{m+1}^2 - u_{m-1}^2 - 2u_m(u_{m+1} - u_{m-1})}{(\psi(\Delta x))^2} + 2\big(K_{\Delta x}(u_m) - K_{\Delta x}(u_{m-1})\big) = 0,$$
that is,
$$(u_{m+1} - u_{m-1})\,\frac{u_{m+1} - 2u_m + u_{m-1}}{(\psi(\Delta x))^2} + 2\big(K_{\Delta x}(u_m) - K_{\Delta x}(u_{m-1})\big) = 0,$$
or again
$$\frac{u_{m+1} - 2u_m + u_{m-1}}{(\psi(\Delta x))^2} + \frac{2\big(K_{\Delta x}(u_m) - K_{\Delta x}(u_{m-1})\big)}{u_{m+1} - u_{m-1}} = 0. \qquad (4.5.15)$$
Identification of (4.5.10) with (4.5.15) reduces to the expression
$$\frac{2\big(K_{\Delta x}(u_m) - K_{\Delta x}(u_{m-1})\big)}{u_{m+1} - u_{m-1}} - \gamma(u_{m-1}, u_m, u_{m+1}) = 0, \qquad (4.5.16)$$
which yields
$$K_{\Delta x}(u_m) = K_{\Delta x}(u_{m-1}) + \frac{(u_{m+1} - u_{m-1})\,\gamma(u_{m-1}, u_m, u_{m+1})}{2}. \qquad (4.5.17)$$
By induction on $m$, with the initial value $K_{\Delta x}(u_0) = 0$, we have
$$K_{\Delta x}(u_m) = \sum_{i=1}^{m} \frac{(u_{i+1} - u_{i-1})\,\gamma(u_{i-1}, u_i, u_{i+1})}{2}, \qquad m \ge 1. \qquad (4.5.18)$$
Thus, (4.5.10) is equivalent to the discrete law of conservation of energy (4.5.11).

Remark 4.5.2. For $m > 1$ the discrete law of conservation of energy to which (4.5.10) is equivalent reads as
$$\frac{1}{2}\left(\frac{u_{m+1} - u_m}{\psi(\Delta x)}\right)^2 + \frac{1}{2}\sum_{i=1}^{m}(u_{i+1} - u_{i-1})\,\gamma(u_{i-1}, u_i, u_{i+1}) = \frac{1}{2}\left(\frac{u_1 - u_0}{\psi(\Delta x)}\right)^2. \qquad (4.5.19)$$
Notice that the scheme (4.5.10) is equally equivalent to the non-standard finite difference scheme
$$\frac{y_{1,m+1} - y_{1,m}}{\psi(\Delta x)} = y_{2,m+1}, \qquad \frac{y_{2,m+1} - y_{2,m}}{\psi(\Delta x)} = -r(y_{1,m}), \qquad (4.5.20)$$
which is closely related to (4.5.1). □

Remark 4.5.3. From (4.5.4) and (4.5.15), the natural choice of $\gamma(\cdot, \cdot, \cdot)$ is given in terms of (4.5.6) by the mean-value theorem:
$$\gamma(u_{m-1}, u_m, u_{m+1}) = \frac{2\big(K_{\Delta x}(u_m) - K_{\Delta x}(u_{m-1})\big)}{u_{m+1} - u_{m-1}} \qquad (4.5.21)$$
$$= \frac{K(u_{m+1}) - K(u_{m-1})}{u_{m+1} - u_{m-1}}. \qquad (4.5.22)$$
This is the approach proposed in Anguelov and Lubuma [7]. More precisely, if $r(u) = u\,g(u^2)$, these authors worked simply with $G(s) = \int g(s)\,ds$ instead of $K$. In this case, the above leads to the scheme
$$\frac{u_{m+1} - 2u_m + u_{m-1}}{(\psi(\Delta x))^2} + u_m\,\frac{G(u_m u_{m+1}) - G(u_m u_{m-1})}{u_m u_{m+1} - u_m u_{m-1}} = 0, \qquad (4.5.23)$$
equivalent to its energy preserving form
$$\frac{1}{2}\left[\left(\frac{u_{m+1} - u_m}{\psi(\Delta x)}\right)^2 + G(u_m u_{m+1})\right] = \frac{1}{2}\left[\left(\frac{u_m - u_{m-1}}{\psi(\Delta x)}\right)^2 + G(u_{m-1} u_m)\right].$$
Other non-standard finite difference schemes for conservative oscillators are investigated in [26]. □
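As a concrete illustration of (4.5.23) and of its discrete invariant, the following Python sketch (an illustrative reconstruction, not the thesis code; the choices of step size, starting values and number of steps are assumptions) treats $r(u) = u^3$, so that $g(s) = s$ and $G(s) = s^2/2$. The non-local term then becomes $u_m^2(u_{m+1} + u_{m-1})/2$, the scheme is linear in $u_{m+1}$, and the discrete energy $\tfrac{1}{2}\big[((u_{m+1} - u_m)/\psi)^2 + G(u_m u_{m+1})\big]$ is constant up to rounding.

import numpy as np

def G(s):                        # G(s) = integral of g, with g(s) = s for r(u) = u^3
    return 0.5 * s * s

dx = 0.1
psi = dx                         # an admissible denominator function; psi(dx) = dx here
u_prev, u_curr = 0.0, 0.1        # starting values u_0 and u_1, e.g. from (4.5.28)

def energy(a, b):
    """Discrete energy 0.5*[((b - a)/psi)^2 + G(a*b)] conserved by (4.5.23)."""
    return 0.5 * (((b - a) / psi) ** 2 + G(a * b))

E0 = energy(u_prev, u_curr)
for m in range(1, 2001):
    # Scheme (4.5.23) with g(s) = s:
    # (u_{m+1} - 2 u_m + u_{m-1})/psi^2 + u_m^2 (u_{m+1} + u_{m-1})/2 = 0,
    # which is linear in u_{m+1}.
    a = 1.0 / psi ** 2 + 0.5 * u_curr ** 2
    b = (2.0 * u_curr - u_prev) / psi ** 2 - 0.5 * u_curr ** 2 * u_prev
    u_prev, u_curr = u_curr, b / a

print(f"energy drift after 2000 steps: {abs(energy(u_prev, u_curr) - E0):.2e}")

The printed drift is at the level of round-off, which is the discrete analogue of (4.5.5) for this cubic oscillator.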

Remark 4.5.4. The schemes (4.5.10) and (4.5.23) are non-standard in the sense of both Mickens rules in Definition 4.2.3. Firstly, the exact scheme (see Table 4.1)
$$\frac{u_{m+1} - 2u_m + u_{m-1}}{4\sin^2\frac{\Delta x}{2}} + u_m = 0 \qquad (4.5.24)$$
of the simple harmonic oscillator
$$\frac{d^2u}{dx^2} + u = 0 \qquad (4.5.25)$$
motivates the need to renormalise the denominator of the discrete derivatives in the schemes (4.5.10), (4.5.22) and (4.5.23). Secondly, the nonlinear terms that arise in $r(u)$ are approximated in a non-local way. For example, if $r(u) = u^3$, we have by (4.5.22) and (4.5.23) the respective approximations
$$r(u(x)) \approx \frac{(u_{m+1} + u_{m-1})(u_{m+1}^2 + u_{m-1}^2)}{4} \qquad \text{and} \qquad r(u(x)) \approx \frac{u_m^2(u_{m+1} + u_{m-1})}{2}. \qquad \square$$

Remark 4.5.5. An advantage of the second choice of $\gamma$ is that the three arguments $u_{m-1}$, $u_m$ and $u_{m+1}$ appear explicitly in the analogous term of (4.5.23), contrary to (4.5.22). Furthermore, for (4.5.25), the second choice, with $g(u^2) = 1$, yields the exact scheme (4.5.24). But the first choice yields the scheme
$$\frac{u_{m+1} - 2u_m + u_{m-1}}{(\psi(\Delta x))^2} + \frac{u_{m-1} + u_{m+1}}{2} = 0. \qquad (4.5.26) \qquad \square$$

Remark 4.5.6. For the implementation of (4.5.10), the initial values $u(0) = u_0$ and $u'(0) = v_0$ are usually given as indicated in (4.5.7). However, a value for $u_1$ is needed in order to start the scheme. In analogy with a classical procedure, we utilize the approximation
$$u'(0) = \frac{u_1 - u_{-1}}{2\psi(\Delta x)}. \qquad (4.5.27)$$
In (4.5.10), put $m = 0$ and replace $u_{-1}$ with the expression obtained from (4.5.27); this gives the missing starting value whenever the structure of $\gamma$ makes (4.5.10) an explicit scheme. When the scheme (4.5.10) is not explicit, one could use the less accurate approximation
$$u'(0) = \frac{u_1 - u_0}{\psi(\Delta x)}. \qquad (4.5.28) \qquad \square$$
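A small illustration of this starting procedure is given below (an illustrative sketch, not taken from the thesis; the choice $\gamma = u_m$ corresponds to the harmonic case of Remark 4.5.5 and is not the general situation). Putting $m = 0$ in (4.5.10), eliminating $u_{-1}$ via (4.5.27), and solving for $u_1$ reproduces the exact value $\cos(\Delta x)$ when the exact-scheme denominator is used.

import numpy as np

dx = 0.1
psi = 2.0 * np.sin(dx / 2.0)     # exact-scheme denominator for u'' + u = 0, cf. (4.5.24)
u0, v0 = 1.0, 0.0                # initial data u(0) and u'(0)

# For gamma(u_{-1}, u_0, u_1) = u_0 (harmonic case of Remark 4.5.5), putting m = 0
# in (4.5.10) and eliminating u_{-1} via (4.5.27), u_{-1} = u_1 - 2*psi*v0, gives:
u1 = u0 + psi * v0 - 0.5 * psi ** 2 * u0
print(f"starting value u_1 = {u1:.6f}   (exact cos(dx) = {np.cos(dx):.6f})")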

Example 4.5.7. As a numerical example, we consider the Duffing conservative oscillator
$$\frac{d^2u}{dx^2} + 25u(1 + 15u^2) = 0, \qquad u(0) = 0, \quad u'(0) = 1. \qquad (4.5.29)$$
With $\Delta x = 0.1$, Fig. 4.12 illustrates both the stability of the non-standard scheme (4.5.23), where $\psi(\Delta x) = \frac{2}{5}\sin\frac{5\Delta x}{2}$, and the instability of the standard scheme
$$\frac{u_{m+1} - 2u_m + u_{m-1}}{(\Delta x)^2} + 25u_m(1 + 15u_m^2) = 0 \qquad (4.5.30)$$
with respect to the principle of conservation of energy. Other examples can be found in [15], where the non-standard schemes discussed in this section have been extended to more complex problems, namely vibro-impact mechanical systems.

Figure 4.12: Discrete energy of the Duffing equation by standard (piecewise constant) and non-standard (constant) finite difference schemes.
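The comparison behind Fig. 4.12 can be reproduced qualitatively with the following Python sketch (illustrative only; the starting value is obtained with the first-order approximation (4.5.28), and the number of steps and the printed diagnostic are assumptions). Here $r(u) = u\,g(u^2)$ with $g(s) = 25 + 375s$, so $G(s) = 25s + 187.5s^2$; the non-standard energy stays fixed to machine precision while the energy of the standard scheme fluctuates.

import numpy as np

# Duffing oscillator (4.5.29): u'' + 25*u*(1 + 15*u^2) = 0, u(0) = 0, u'(0) = 1.
def G(s):
    return 25.0 * s + 187.5 * s * s

dx = 0.1
psi = 0.4 * np.sin(2.5 * dx)           # psi(dx) = (2/5) sin(5*dx/2)

def energy(a, b, h):
    """Discrete energy 0.5*[((b - a)/h)^2 + G(a*b)]."""
    return 0.5 * (((b - a) / h) ** 2 + G(a * b))

u0 = 0.0
un_prev, un_curr = u0, u0 + psi * 1.0  # starting values for the non-standard scheme (4.5.23)
us_prev, us_curr = u0, u0 + dx * 1.0   # starting values for the standard scheme (4.5.30)
E_ns, E_st = [], []

for m in range(1, 301):
    # (4.5.23): (u_{m+1}-2u_m+u_{m-1})/psi^2 + 25 u_m + 187.5 u_m^2 (u_{m+1}+u_{m-1}) = 0
    a = 1.0 / psi ** 2 + 187.5 * un_curr ** 2
    b = (2.0 * un_curr - un_prev) / psi ** 2 - 25.0 * un_curr - 187.5 * un_curr ** 2 * un_prev
    un_prev, un_curr = un_curr, b / a
    # (4.5.30): standard central differences
    us_next = 2.0 * us_curr - us_prev - dx ** 2 * 25.0 * us_curr * (1.0 + 15.0 * us_curr ** 2)
    us_prev, us_curr = us_curr, us_next
    E_ns.append(energy(un_prev, un_curr, psi))
    E_st.append(energy(us_prev, us_curr, dx))

print(f"non-standard energy spread: {max(E_ns) - min(E_ns):.2e}")
print(f"standard energy spread:     {max(E_st) - min(E_st):.2e}")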

Chapter 5
Non-standard Finite Difference Schemes for Reaction-Diffusion Equations

5.1 Introduction

This chapter is a dedicated analysis of the author's results in [6]. We investigate the impact of the analysis of the previous chapters on the numerical solution of partial differential equations. We will specifically deal with the one-dimensional reaction-diffusion equations whose solutions $u$ enjoy a positivity and boundedness property:
$$0 \le u \le 1. \qquad (5.1.1)$$
A typical example is the Fisher equation, for which (5.1.1) is proved in Section 5.4. In Section 5.3, we design non-standard finite difference schemes which are elementary stable in the limit case of the space independent variable and which are stable with respect to the principle of conservation of energy in the stationary case. Furthermore, we show in Section 5.4 that our schemes replicate the property (5.1.1) under a certain functional relation between the time and space step sizes.
As an alternative approach, we approximate in Section 5.5 the space variable by the spectral method, while the time variable is approximated via the non-standard finite difference scheme. This results in what we call coupled spectral and non-standard methods. Numerical tests that show the reliability of these coupled schemes are provided.

5.2 The Fisher Equation

The simplest classic example of a non-linear reaction-diffusion equation is the Fisher equation
$$\frac{\partial u}{\partial t} = \frac{\partial^2 u}{\partial x^2} + \lambda u(1 - u), \qquad \lambda > 0. \qquad (5.2.1)$$
The material presented in this section is based on the books [23] and [33]. Equation (5.2.1) is also referred to as the Fisher-Kolmogoroff equation. It is the natural extension of the logistic growth model discussed in Section 4.3. It was suggested by Fisher (1937) as a deterministic version of a stochastic model for the spatial propagation of a mutant gene in a population. Equation (5.2.1) can also be used in the analysis of travelling waves in chemical reactions.
It is indeed well known that when reaction kinetics and diffusion are coupled, travelling waves of chemical concentration exist and can effect a biochemical change very much faster than the straight diffusional processes governed by (5.2.1) without the term $\lambda u(1 - u)$. Thus, we look for travelling wave solutions of (5.2.1). This means solutions of the form
$$u(x, t) = U(z), \qquad z = x - ct, \qquad (5.2.2)$$
where, $c$ being the constant speed of the wave moving in the positive $x$-direction, it is assumed that $U(z)$ is non-negative and bounded for all $z \in \mathbb{R}$.
The space independent Fisher equation (5.2.1) has the fixed-points $u = 0$ and $u = 1$, which are unstable and asymptotically stable, respectively. This suggests that we look for a travelling wave solution which satisfies the boundedness and positivity condition (5.1.1) as well as the conditions
$$\lim_{z \to \infty} U(z) = 0, \qquad \lim_{z \to -\infty} U(z) = 1. \qquad (5.2.3)$$

Substituting the travelling wavefront (5.2.2) into (5.2.1) (after a rescaling in which we may take $\lambda = 1$) yields a second order ordinary differential equation for $U(z)$:
$$U'' + cU' + U(1 - U) = 0, \qquad (5.2.4)$$
where the range of $c \ge 0$ is to be determined. Since this equation cannot be solved in closed form, we reduce it to a pair of first order equations by defining $V = U'$, leading to the autonomous system
$$U' = V, \qquad (5.2.5)$$
$$V' = -cV - U(1 - U). \qquad (5.2.6)$$
The Jacobian matrix of the system (5.2.5)-(5.2.6) is
$$J(U, V) = \begin{bmatrix} 0 & 1 \\ 2U - 1 & -c \end{bmatrix}. \qquad (5.2.7)$$
The fixed-points of the system (5.2.5)-(5.2.6) are $(0, 0)$ and $(1, 0)$. The eigenvalues of $J(1, 0)$ are
$$\lambda_{\pm} = \frac{-c \pm \sqrt{c^2 + 4}}{2} \qquad (5.2.8)$$
and those of $J(0, 0)$ are
$$\lambda_{\pm} = \frac{-c \pm \sqrt{c^2 - 4}}{2}, \qquad (5.2.9)$$
showing that the two fixed-points are hyperbolic. Thus, the Hartman-Grobman theorem (Theorem 2.2.16) applies. We can therefore conclude that the fixed-point $(1, 0)$ is a saddle point for any $c$, while $(0, 0)$ is an asymptotically stable node for $c \ge 2$ and a stable spiral if $c < 2$ (see [46]).
The case $c \ge 2$ is of interest, as it follows by continuity arguments or by heuristic reasoning from the phase plane $(U, V)$ that there exists a trajectory from the fixed-point $(1, 0)$ to the fixed-point $(0, 0)$ lying entirely in the quadrant $U \ge 0$, $V \le 0$, with $0 \le U \le 1$, for all $c \ge 2$ (see Figs 5.1-5.2). In summary, we have the following result.

Theorem 5.2.1. For each $c \ge 2$ there exists a unique travelling wave solution $u(x, t) = U(x - ct)$ of the Fisher equation (5.2.1) with the property that, in the wave variable $z = x - ct$, $U(z)$ is monotonically decreasing and satisfies (5.2.3).

Figure 5.1: Phase plane trajectories for (5.2.5)-(5.2.6), $c \ge 2$.
Figure 5.2: Travelling wave solution for the Fisher equation, $c \ge 2$.
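The heteroclinic trajectory behind Theorem 5.2.1 can also be checked numerically. The sketch below is illustrative (the integrator, the perturbation size along the unstable eigenvector of $J(1,0)$, and the stopping tolerance are ad hoc choices, not taken from the thesis); it integrates (5.2.5)-(5.2.6) with $c = 2$ starting close to the saddle $(1, 0)$ and confirms that $U$ decreases monotonically towards $0$ while $V$ stays non-positive.

import numpy as np

c = 2.0

def rhs(w):
    """Phase-plane system (5.2.5)-(5.2.6): U' = V, V' = -c V - U(1 - U)."""
    U, V = w
    return np.array([V, -c * V - U * (1.0 - U)])

# Unstable eigenvalue/eigenvector of J(1, 0) = [[0, 1], [1, -c]].
lam = (-c + np.sqrt(c * c + 4.0)) / 2.0
w = np.array([1.0, 0.0]) + 1e-3 * np.array([-1.0, -lam])

dz, U_vals = 0.01, [w[0]]
while w[0] > 1e-3 and len(U_vals) < 50000:
    k1 = rhs(w); k2 = rhs(w + 0.5 * dz * k1)
    k3 = rhs(w + 0.5 * dz * k2); k4 = rhs(w + dz * k3)
    w = w + dz / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)   # classical RK4 step
    U_vals.append(w[0])

U_vals = np.array(U_vals)
print("U monotonically decreasing:", bool(np.all(np.diff(U_vals) < 0.0)))
print("final (U, V):", w)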

5.3 Theta Methods for Reaction-Diffusion Equations

The Fisher equation considered in the previous section motivates that we now study the general one-dimensional reaction-diffusion equation
$$\frac{\partial u}{\partial t} = \frac{\partial^2 u}{\partial x^2} + r(u), \qquad u(x, 0) = g(x), \qquad (5.3.1)$$
and we assume that there exists a unique solution satisfying
$$0 \le g \le 1 \;\Longrightarrow\; 0 \le u \le 1. \qquad (5.3.2)$$
Equation (5.3.1) is used extensively in many areas of engineering and the applied sciences to model systems in which reaction processes $r(u)$ lead to the diffusion in time of the quantity $u$ (see, for instance, [19] and [33]). We are interested in numerical schemes that produce reliable approximations $u^k_m$ of the solution $u$ at the time $t_k = k\Delta t$ and the space grid point $x_m = m\Delta x$. To achieve this, we use non-standard finite difference schemes following the methodology of sub-equations in [7] and [26] to address partial differential equations.

More precisely, we design non-standard schemes for the space independent equation on the one hand and for the stationary equation on the other hand. After that, we assemble them in suitable schemes for the reaction- diffusion equation.

Energy-preserving schemes for the stationary case of (5.3.1) were discussed in Section 4.5. Thus, for the equation

d

2

u dx

2

+

r

(

u

) = 0

,

we have in view of (4.5.10) the non-standard scheme

(5.3.3)

u m

+1

2

u m

+

(

ψ

(∆

x

)) 2

u m −

1

+

γ

(

u m −

1

, u m

, u m

+1

) = 0

.

(5.3.4)

98

The space independent equation of (5.3.1) is

du dt

=

r

(

u

)

, u

(0) =

u

0

(5.3.5) which is the scalar case of (2.2.1). We approximate it using the nonstandard one-stage (4.3.7) and two-stage (4.3.8) theta methods, which in this case read as follows:

u k

+1

− u k

φ

(

q

t

)

q u k

+1

− u k

φ

(

q

t

)

q

=

=

r

[

θr

θu

(

u k k

+1

+1

+ (1

) + (1

θ

)

θ u

)

r k

]

(

, u k

)

.

(5.3.6)

(5.3.7)

By combining (5.3.4) and (5.3.6)-(5.3.7), we arrive at the following non-standard finite difference methods for (5.3.1):
$$\frac{u^{k+1}_m - u^k_m}{\phi(q\Delta t)/q} = \theta\,\frac{u^{k+1}_{m+1} - 2u^{k+1}_m + u^{k+1}_{m-1}}{(\psi(\Delta x))^2} + (1 - \theta)\,\frac{u^k_{m+1} - 2u^k_m + u^k_{m-1}}{(\psi(\Delta x))^2}$$
$$\qquad\qquad + \gamma\Big(\theta u^{k+1}_{m-1} + (1 - \theta)u^k_{m-1},\; \theta u^{k+1}_m + (1 - \theta)u^k_m,\; \theta u^{k+1}_{m+1} + (1 - \theta)u^k_{m+1}\Big), \qquad (5.3.8)$$
$$\frac{u^{k+1}_m - u^k_m}{\phi(q\Delta t)/q} = \theta\,\frac{u^{k+1}_{m+1} - 2u^{k+1}_m + u^{k+1}_{m-1}}{(\psi(\Delta x))^2} + (1 - \theta)\,\frac{u^k_{m+1} - 2u^k_m + u^k_{m-1}}{(\psi(\Delta x))^2}$$
$$\qquad\qquad + \theta\,\gamma(u^{k+1}_{m-1}, u^{k+1}_m, u^{k+1}_{m+1}) + (1 - \theta)\,\gamma(u^k_{m-1}, u^k_m, u^k_{m+1}). \qquad (5.3.9)$$
Notice that the denominator function $\phi$ and the number $q$ that captures the dynamics of the system are chosen in the manner discussed in Sections 4.3 and 4.4, namely (4.3.2) or (4.3.15) or Theorem 4.4.3, together with (4.3.4) and (4.3.6). Equally, the denominator function $\psi$ and the appropriate forms of the function $\gamma$ are discussed in Section 4.5.
By construction, and assuming that the function $r(u)$ satisfies the conditions of the relevant theorems in Chapter 4, we have the following result.

Theorem 5.3.1. The non-standard finite difference schemes (5.3.8) and (5.3.9) are qualitatively stable with respect to the principle of conservation of energy in the limit case of the stationary equation. Furthermore, these non-standard schemes are elementary stable in the limit case of the space independent variable. In this case, they are also qualitatively stable with respect to the dissipativity property for $\theta = 0$ or $\theta \in (\tfrac{1}{2}, 1]$ whenever the continuous system is in the setting of Theorem 4.4.1 and Theorem 4.4.3.

5.4 Explicit Scheme

We would like to design schemes related in one way or another to (5.3.8)-(5.3.9) which are stable with respect to the positivity and boundedness property (5.3.2). That is,
$$0 \le u^0_m \le 1 \;\Longrightarrow\; 0 \le u^k_m \le 1. \qquad (5.4.1)$$
We consider the explicit case (i.e. $\theta = 0$), for which (5.3.8) and (5.3.9) reduce to
$$\frac{u^{k+1}_m - u^k_m}{\phi(q\Delta t)/q} = \frac{u^k_{m+1} - 2u^k_m + u^k_{m-1}}{(\psi(\Delta x))^2} + \gamma(u^k_{m-1}, u^k_m, u^k_{m+1}). \qquad (5.4.2)$$
It will be necessary to modify (5.4.2) into a new formula of a somewhat more convenient form. A proper choice of the function $\gamma$ in (5.4.2) and Theorem 4.5.1 is essential in what follows. To this end, we assume that the function $\gamma$ may be represented as
$$\gamma(u_{m-1}, u_m, u_{m+1}) = (1 - u_m)\,\Gamma(u_{m-1}, u_m, u_{m+1}) \qquad (5.4.3)$$
for some function $\Gamma$ satisfying $\Gamma(u_{m-1}, u_m, u_{m+1}) \ge 0$ for nonnegative arguments. We also assume that the symmetry property
$$\Gamma(u_{m-1}, u_m, u_{m+1}) = \Gamma(u_{m+1}, u_m, u_{m-1})$$
holds, as in (4.5.9), and that the scheme
$$\frac{u^{k+1} - u^k}{\phi(q\Delta t)/q} = (1 - u^{k+1})\,\Gamma(u^k, u^k, u^k) \qquad (5.4.4)$$
for (5.3.5) is elementary stable. We can, in place of (5.3.4), consider
$$\frac{u_{m+1} - 2u_m + u_{m-1}}{(\psi(\Delta x))^2} + (1 - u_m)\,\Gamma(u_{m-1}, u_m, u_{m+1}) = 0. \qquad (5.4.5)$$

One possible combination of (5.4.4) and (5.4.5), in the spirit of (5.3.9), is
$$\frac{u^{k+1}_m - u^k_m}{\phi(q\Delta t)/q} = \frac{u^k_{m+1} - 2u^k_m + u^k_{m-1}}{(\psi(\Delta x))^2} + (1 - u^{k+1}_m)\,\Gamma(u^k_{m-1}, u^k_m, u^k_{m+1}). \qquad (5.4.6)$$
In classical finite difference methods, the quantities $\Delta t$ and $\Delta x$ do not vary independently [32]. It is therefore not surprising to require a certain functional relation between $\Delta t$ and $\Delta x$ for the scheme (5.4.6). In view of our objective to have property (5.4.1), we impose the condition
$$\frac{\phi(q\Delta t)/q}{(\psi(\Delta x))^2} = \frac{1}{2}. \qquad (5.4.7)$$
Solving (5.4.6) for $u^{k+1}_m$ yields
$$u^{k+1}_m = \frac{\tfrac{1}{2}\left(u^k_{m-1} + u^k_{m+1}\right) + \big(\phi(q\Delta t)/q\big)\,\Gamma(u^k_{m-1}, u^k_m, u^k_{m+1})}{1 + \big(\phi(q\Delta t)/q\big)\,\Gamma(u^k_{m-1}, u^k_m, u^k_{m+1})}. \qquad (5.4.8)$$
If $0 \le u^k_m \le 1$, it follows from (5.4.8) and the property of $\Gamma$ that $0 \le u^{k+1}_m \le 1$. In summary, we have thus shown the following result.

Theorem 5.4.1. Under condition (5.4.7), the non-standard finite difference scheme (5.4.6) is stable with respect to the boundedness and positivity property (5.3.2). Furthermore, this scheme is elementary stable in the limit case of the space independent variable and it is also stable with respect to the conservation of energy in the stationary case.

Remark 5.4.2. An essential feature of the scheme (5.4.6) proposed here is that it replicates property (5.3.2) under the simple relation (5.4.7) between the step sizes. Other schemes having the property (5.4.1) may be obtained, but at the cost of more complicated functional relations between the step sizes. For example, in the particular case of the Fisher equation (5.2.1), which satisfies property (5.3.2), an alternative scheme preserving this property is obtained in [27], but at the cost of the more complicated restriction between the step sizes, namely:
$$\frac{\phi(q\Delta t)/q}{(\psi(\Delta x))^2} = \frac{1}{3}\left(1 - \frac{q(\psi(\Delta x))^2}{3}\right)^{-1}. \qquad (5.4.9)$$
Furthermore, the relation (5.4.7) is a typical condition of Lax-Richtmyer stability of finite difference schemes for the linear diffusion equation. Let us clarify this fact with the scheme
$$\frac{u^{k+1}_m - u^k_m}{\phi(q\Delta t)/q} = \frac{u^k_{m+1} - 2u^k_m + u^k_{m-1}}{(\psi(\Delta x))^2} + u^k_m \qquad (5.4.10)$$
applied to the linear problem
$$\frac{\partial u}{\partial t} = \frac{\partial^2 u}{\partial x^2} + u.$$
In the setting of (5.3.9), the scheme (5.4.10) corresponds to $\gamma(u^k_{m-1}, u^k_m, u^k_{m+1}) = u^k_m$. We use the Fourier series method [32]. The amplification factor of the scheme (5.4.10) is
$$\rho(\xi) = 1 - 4\nu\sin^2\frac{\xi\Delta x}{2} + \phi(q\Delta t)/q, \qquad \forall\, \xi \in \mathbb{R},$$
where $\nu = \dfrac{\phi(q\Delta t)/q}{(\psi(\Delta x))^2}$. The scheme (5.4.10) is stable in the sense of Lax-Richtmyer whenever the von Neumann condition $|\rho(\xi)| \le 1 + \phi(q\Delta t)/q$ is satisfied, the $O(\Delta t)$ relaxation being due to the undifferentiated term $u$. This condition is met if
$$\nu \le \frac{1}{2} + \frac{\phi(q\Delta t)/q}{2}. \qquad \square$$
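The von Neumann computation can be verified directly. The snippet below (with illustrative values of $q$ and $\Delta t$, not prescribed by the text) evaluates $\rho(\xi)$ on a grid of wave numbers for $\nu = \tfrac{1}{2}$, the value fixed by (5.4.7), and confirms that $|\rho(\xi)|$ never exceeds $1 + \phi(q\Delta t)/q$.

import numpy as np

q, dt = 25.0, 0.01
phi_over_q = (1.0 - np.exp(-q * dt)) / q       # phi(q*dt)/q with phi(z) = 1 - e^{-z}
nu = 0.5                                        # the ratio fixed by condition (5.4.7)

xi_dx = np.linspace(0.0, 2.0 * np.pi, 2001)     # xi * dx over one full period
rho = 1.0 - 4.0 * nu * np.sin(xi_dx / 2.0) ** 2 + phi_over_q
print(f"max |rho| = {np.abs(rho).max():.6f},  1 + phi/q = {1.0 + phi_over_q:.6f}")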

Remark 5.4.3. The strategy of writing the function $\gamma(\cdot, \cdot, \cdot)$ in the form (5.4.3) and of approximating it in the non-local way shown in (5.4.4) is used extensively in the literature, specifically in mathematical biology, when the discrete solution is required to replicate the positivity property of the exact solution (see, for instance, [7], [13], [14], [29], [30]). □

To illustrate the analysis of the previous sections, we consider again the Fisher equation
$$\frac{\partial u}{\partial t} = \frac{\partial^2 u}{\partial x^2} + 25u(1 - u), \qquad u(x, 0) = 0.5 + 0.5\sin 2x, \qquad (5.4.11)$$
for which the solution satisfies (5.3.2). We apply various non-standard methods of the form
$$\frac{u^{k+1}_m - u^k_m}{\phi(\Delta t)} = \frac{u^k_{m+1} - 2u^k_m + u^k_{m-1}}{(\Delta x)^2} + 25\left(1 - u^{k+1}_m\right)\frac{u^k_{m-1} + u^k_m + u^k_{m+1}}{3}. \qquad (5.4.12)$$
With $\phi(\Delta t) = \dfrac{1 - e^{-25\Delta t}}{25}$, the solution of the scheme (5.4.12), which corresponds to (5.4.6), is displayed in Fig. 5.3, for $\Delta t = 0.061$ and $\Delta x = 0.25$.
We may also choose the denominator $\phi(\Delta t) = \dfrac{e^{25\Delta t} - 1}{25}$, which provides the exact scheme (Table 4.1) for the logistic equation
$$\frac{du}{dt} = 25u(1 - u). \qquad (5.4.13)$$
The solution of the resulting scheme (5.4.12) is displayed in Fig. 5.4, for $\Delta t = 0.0231$ and $\Delta x = 0.25$.
The discrete scheme
$$\frac{u^{k+1} - u^k}{\Delta t} = 25u^k(1 - u^{k+1}) \qquad (5.4.14)$$
for (5.4.13) is elementary stable, and so we can also take $\phi(\Delta t) = \Delta t$ in (5.4.12). The resulting solution is visualised in Fig. 5.5 for $\Delta t = 0.031$ and $\Delta x = 0.25$.
All the results are compared with the standard scheme
$$\frac{u^{k+1}_m - u^k_m}{\Delta t} = \frac{u^k_{m+1} - 2u^k_m + u^k_{m-1}}{(\Delta x)^2} + 25u^k_m(1 - u^k_m), \qquad (5.4.15)$$
whose solution is visualised in Fig. 5.6 for $\Delta t = 0.0231$ and $\Delta x = 0.25$. The three figures corresponding to non-standard schemes confirm the stability of these schemes with respect to the boundedness and positivity property. On the contrary, the standard scheme shown in Fig. 5.6 fails to replicate either of these properties.
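The boundedness and positivity property of the explicit scheme can be checked numerically with the short sketch below. It is illustrative only: the periodic boundary conditions on $[0, 2\pi)$, the grid size, and the number of time steps are assumptions that the text does not fix, and the step sizes are derived here from the relation (5.4.7) rather than copied from the figures.

import numpy as np

# Fisher test problem (5.4.11) with the non-standard scheme (5.4.12).
N = 64
x = 2.0 * np.pi * np.arange(N) / N
dx = x[1] - x[0]
phi = 0.5 * dx ** 2                      # functional relation (5.4.7): phi/dx^2 = 1/2
dt = -np.log(1.0 - 25.0 * phi) / 25.0    # step size so that phi(dt) = (1 - e^{-25 dt})/25
n_steps = 400

u = 0.5 + 0.5 * np.sin(2.0 * x)          # initial condition of (5.4.11)
for k in range(n_steps):
    left, right = np.roll(u, 1), np.roll(u, -1)
    avg = (left + u + right) / 3.0       # non-local approximation of u in 25*u*(1 - u)
    # Solving (5.4.12) for u^{k+1}_m under (5.4.7) gives the update, cf. (5.4.8):
    u = (0.5 * (left + right) + 25.0 * phi * avg) / (1.0 + 25.0 * phi * avg)
    assert np.all(u >= 0.0) and np.all(u <= 1.0)   # property (5.4.1)

print(f"dt = {dt:.4f}, after {n_steps} steps: min u = {u.min():.4f}, max u = {u.max():.4f}")

The assertion never fires, and the discrete solution tends to the stable state $u = 1$, in agreement with Theorem 5.4.1.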

Figure 5.3: Non-standard scheme not related to the exact scheme.
Figure 5.4: Non-standard scheme related to the exact scheme.
Figure 5.5: Non-standard scheme with $\phi(\Delta t) = \Delta t$.
Figure 5.6: Standard scheme.

5.5 Coupled Spectral and Non-standard Methods

So far, the approximations in the space variable $x$ were obtained by the finite difference method. In this section, we use the spectral method. We consider the reaction-diffusion problem
$$\frac{\partial u}{\partial t} - \frac{\partial^2 u}{\partial x^2} + u = R(u) \quad \text{on } (0, 2\pi) \times (0, T), \qquad (5.5.1)$$
$$u(x, 0) = u_0(x) \quad \text{for } x \in (0, 2\pi), \qquad (5.5.2)$$
$$u(0, t) = u(2\pi, t) \quad \text{for } t \in (0, T), \qquad (5.5.3)$$
where the function $R(\cdot)$, as well as the function $u_0$ in the Lebesgue space $L^2(0, 2\pi)$ with inner product $\langle \cdot, \cdot\rangle$, are given. We assume of course that problem (5.5.1)-(5.5.3) has a unique solution. In view of the numerical scheme presented below, we assume that
$$R(0) = R'(0) = 0. \qquad (5.5.4)$$
Thus, (5.5.1) corresponds, in the setting of (5.3.1), to the case when $r(u) := R(u) - u$ is linearized about $u = 0$ by $-u$.

With each integer $m \in \mathbb{N}$, we associate the Fourier-Galerkin spectral approximation of the solution $u$, which is a semi-discrete solution given by (see [10])
$$u_m(x, t) = \sum_{k=-m}^{m} \alpha_k(t)\, w_k(x) \quad \text{for } (x, t) \in (0, 2\pi) \times [0, T], \qquad (5.5.5)$$
where, for $k \in \mathbb{Z}$,
$$w_k(x) := \frac{1}{\sqrt{2\pi}}\, e^{ikx} \quad \text{for } x \in [0, 2\pi]. \qquad (5.5.6)$$
The function $u_m(x, t)$ in (5.5.5) does not satisfy (5.5.1) and (5.5.2). But this function is an approximation of the solution $u$ in the sense that, for $|k| \le m$, we have
$$\left\langle \frac{\partial u_m}{\partial t} - \frac{\partial^2 u_m}{\partial x^2} + u_m,\; w_k \right\rangle = \langle R(u_m),\, w_k\rangle, \qquad \langle u_m(0),\, w_k\rangle = \langle u_0,\, w_k\rangle. \qquad (5.5.7)$$

107

Thus, with

λ k

:=

k

2

+ 1 (5.5.8) the vector function

U m

= [

α

− m

α

− m

+1

· · · α m

]

T

of Fourier coefficients in (5.5.5) is the unique solution of the initial-value problem for the system of 2

m

+ 1 ordinary differential equations in 2

m

+ 1 unknowns

α k

:

dα k dt

+

λ k

α k

=

h R

(

u m

)

, w k i

on (0

, T

)

,

α k

(0) =

h u

0

, w k i .

(5.5.9)

(5.5.10)

Remark 5.5.1. A motivation of the spectral approximations (5.5.5) and (5.5.9)-(5.5.10) is that, in many cases, the solution $u$ of (5.5.1)-(5.5.3) admits in $L^2(0, 2\pi)$ the Fourier series expansion
$$u(t) \equiv u(\cdot, t) = \sum_{k \in \mathbb{Z}} \alpha_k(t)\, w_k(\cdot). \qquad (5.5.11)$$
This is in particular true for the linear diffusion equation, i.e. when $R$ in (5.5.1) is a function of the independent variables $x$ and $t$ only, and not of the dependent variable $u$ (see, for example, [36]). □

To obtain a full discretisation of $u$, we have to approximate (5.5.9)-(5.5.10). The main source of difficulty in (5.5.9) comes from its linearised part, which is a stiff system: from (5.5.8), $1 = \lambda_0 \ll \lambda_m$ for large values of $m$. The condition (5.5.4) on the reaction function $R(\cdot)$ guarantees that the null vector $U = 0 \in \mathbb{R}^{2m+1}$ is a hyperbolic fixed-point of the system (5.5.9). Consequently, by the Hartman-Grobman theorem (Theorem 2.2.16), this system can be qualitatively studied from its linearisation about $U = 0$, which is
$$\frac{d\alpha_k}{dt} + \lambda_k \alpha_k = 0, \qquad |k| \le m. \qquad (5.5.12)$$

The approach used in [26] to approximate first-order nonlinear differential equations is based on this connection between (5.5.9) and (5.5.12). We follow this approach to avoid the above mentioned difficulty by incorporating the stiffness feature of the system (5.5.9) into the numerical scheme. More precisely, by analogy with the exact scheme
$$\frac{\alpha_{k,n+1} - \alpha_{k,n}}{(1 - e^{-\lambda_k \Delta t})/\lambda_k} + \lambda_k \alpha_{k,n} = 0, \qquad n = 0, 1, 2, \ldots, \qquad (5.5.13)$$
of the linearised part of the system (5.5.9), we consider, for the nonlinear system (5.5.9)-(5.5.10), the non-standard forward Euler method
$$\frac{\alpha_{k,n+1} - \alpha_{k,n}}{(1 - e^{-\lambda_k \Delta t})/\lambda_k} + \lambda_k \alpha_{k,n} = \langle R(u_{m,n}),\, w_k\rangle, \qquad n = 0, 1, 2, \ldots, \qquad (5.5.14)$$
where $u_{m,0}$ is obtained from (5.5.10) by taking $t = 0$ in (5.5.5), i.e.
$$u_{m,0} = \sum_{|k| \le m} \langle u_0,\, w_k\rangle\, w_k(x). \qquad (5.5.15)$$
This then provides
$$u_{m,n}(x) = \sum_{k=-m}^{m} \alpha_{k,n}\, w_k(x) \qquad (5.5.16)$$
as the spectral non-standard finite difference approximation of the solution $u$ at the point $(x, t)$, where $t = t_n = n\Delta t$ is fixed.
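A minimal Python sketch of how (5.5.14)-(5.5.16) can be implemented with the fast Fourier transform is given below. It is illustrative only: the grid size, the number of time steps and the normalisation of the initial profile are assumptions, and no claim is made that it reproduces the figures. Since the update (5.5.14) is linear in the coefficients, it acts identically on the raw NumPy FFT coefficients, which are therefore used directly in place of the normalised coefficients $\alpha_k$.

import numpy as np

# Coupled spectral / non-standard method (5.5.14)-(5.5.16) for
# u_t - u_xx + u = R(u) with R(u) = u^2 on (0, 2*pi), periodic in x.
N, m, dt = 128, 20, 0.1
x = 2.0 * np.pi * np.arange(N) / N
k = np.fft.fftfreq(N, d=1.0 / N)          # integer wave numbers
lam = k ** 2 + 1.0                         # lambda_k = k^2 + 1, cf. (5.5.8)
denom = (1.0 - np.exp(-lam * dt)) / lam    # non-standard denominator of (5.5.14)
cutoff = np.abs(k) <= m                    # Fourier-Galerkin truncation |k| <= m

u0 = x * (2.0 * np.pi - x) ** 2            # initial datum as in the test below
u0 = u0 / u0.max()                         # normalised so that 0 <= u0 <= 1 (assumption)
c = np.fft.fft(u0) * cutoff                # coefficients at t = 0, cf. (5.5.15)

for n in range(10):                        # ten time steps of (5.5.14)
    u = np.real(np.fft.ifft(c))            # u_{m,n} on the grid, cf. (5.5.16)
    r = np.fft.fft(u ** 2) * cutoff        # Fourier coefficients of R(u_{m,n}), |k| <= m
    c = c + denom * (-lam * c + r)         # scheme (5.5.14)

u = np.real(np.fft.ifft(c))
print(f"after {10 * dt:.1f} time units: min u = {u.min():.4f}, max u = {u.max():.4f}")

Because the non-standard denominator tends to $1/\lambda_k$ for the stiff modes, the update remains stable for this step size even though $\lambda_m \Delta t$ is large, which is precisely the point of incorporating (5.5.13) into the scheme.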

Issues pertaining to the consistency, the stability and the convergence, with rates of convergence, of these coupled spectral and non-standard finite difference methods can be analysed along the lines of [10] and [36]. We do not carry out this analysis here. Our interest is rather in testing this approach numerically. To this end, we consider (5.5.1)-(5.5.3) with $R(u) = u^2$ and $u_0(x) = x(2\pi - x)^2$. The result of the non-standard scheme (5.5.14)-(5.5.16) is visualized in Fig. 5.7, for $m = 20$ and $\Delta t = 0.1$. This is to be compared with Fig. 5.8, relative to the standard scheme (5.5.16) in which the traditional denominator $\Delta t$ is used in the discrete derivative in (5.5.14), for the specific values $m = 5$ and $\Delta t = 0.075$, which satisfy the stability condition $\lambda_m \Delta t \le 2$. One observes, for instance, that the non-standard scheme is elementary stable and stable with respect to the monotonicity of the solution in the limit case of the space independent equation, contrary to the standard scheme.

Figure 5.7: Spectral non-standard scheme based on the exact scheme.
Figure 5.8: Spectral standard scheme.

Chapter 6

Conclusion

The non-standard finite difference approach was initiated more than two decades ago by Mickens. The monograph [26] constitutes a selfcontained and comprehensive treatment of the non-standard finite difference method. Since the publication of this book, the non-standard approach has extensively been applied to differential models originating from problems in engineering, physics, biology, chemistry, etc. With the great potential that the non-standard finite difference schemes have been showing in replicating the essential properties of the exact solutions of the involved differential equations, we felt strongly about focusing on dynamical systems in this thesis. Indeed, dynamical systems have a wide range of important intrinsic properties, such as fixed-points and their stability, attracting sets, limit cycles, which ideally should be preserved by numerical schemes if they are to yield reliable simulations that provide qualitative information and useful insights on the exact solutions. In particular, the following facts constitute some of the specific motivations of this thesis, which show where it fits in the literature:

1. A sharper condition given in [14] for the elementary stability of the non-standard forward Euler method and a claim made therein that the condition avoids the location of the eigenvalues of the involved Jacobian matrices in some regions of the complex plane.

2. A follow-up to the chapter [24] in order to investigate types of dissipative properties of differential models other than the dissipativity of singularly perturbed problems, which has a specific meaning in terms of the decay/variation of their solutions in layer regions;


3. A result on classical theta methods that restricts their dissipativity as discrete dynamical systems to the range $\theta \in (\tfrac{1}{2}, 1]$ (see, e.g., [41]).

4. The design in [27] of a non-standard finite difference scheme for the

Fisher equation, which is stable with respect to the boundedness and positivity property of the solution under a certain functional relation between the time and space step sizes.

For this thesis to be relatively self-contained, we dedicated considerable time to overview classical concepts on finite difference schemes, continuous dynamical systems and discrete dynamical systems. We also studied the mathematical foundations of the non-standard finite difference method summarized in [7] by the triple question below. What is a non-standard finite difference method? In which way are non-standard schemes powerful compared to the standard ones? How to construct systematically non-standard finite difference methods?

However, the main contributions of this thesis are as follows. To address the issues 1-3, we constructed non-standard one-stage and two-stage theta methods for stiff and non-stiff systems of ordinary differential equations. The schemes were obtained by using Mickens' rule about the denominator of the discrete derivatives. On the one hand, we showed that the condition in [14] is equally sufficient for the elementary stability in this general setting of non-standard theta methods. On the other hand, we proved that the stated condition is equivalent to having the eigenvalues of the Jacobian matrices located in some wedges of the complex plane and we explained how the condition can be used in practice.

For a particular class of dynamical systems, which have non-hyperbolic fixed-points and which is equivalent to some specific Hamiltonian systems, we derived energy-preserving non-standard finite difference schemes.

The schemes were constructed by using Mickens’ rule about the nonlocal approximation of nonlinear terms [26].

Unlike the work in [24], the term dissipative is used in this thesis to express the fact that the gross asymptotics of a dynamical system

are independent of initial conditions, with everything ending up inside some absorbing set. We showed that, for $\theta$ taking the smallest value 0 in the forbidden interval $[0, \tfrac{1}{2})$, our explicit scheme, i.e. the non-standard forward Euler scheme, replicates the dissipative property of the continuous dynamical system.

As for the issue no. 4, we used a much simpler functional relation between step sizes and we proposed a systematic procedure of designing new qualitatively stable schemes for the general reaction-diffusion equations that involve arbitrary reaction terms. The positivity and the boundedness of the non-standard discrete solutions was established in this general setting. Furthermore, we designed for this general case an alternative method. It consists of a spectral method (in the space variable) and a non-standard finite difference method (in the time variable) in which the stiffness feature of the linearised system of Fourier coefficients is exactly incorporated.

Throughout the thesis, we presented numerical tests that support the theory. The accomplishment of this thesis has raised some concerns for future research. Among them, we can mention the following:

1. The design of non-standard finite difference schemes for dynamical systems with non-hyperbolic fixed-points. This is actually an open problem.

2. The design of non-standard finite difference schemes that preserve global attractors of continuous dynamical systems.

3. The investigation of the dissipativity of the non-standard theta methods for any value of the parameter $\theta$.

4. The design of dissipative schemes for evolution partial differential equations.

5. The design of schemes which display the boundedness and positivity property of solutions for the convective/advection-reaction-diffusion equation considered in [21] and for the Burgers equation, as pointed out in [27].

Bibliography

[1] H. Al-Kahby, F. Dannan and S. Elaydi, Non-standard Discretization Methods for Some Biological Models, in: R.E. Mickens (Ed.), Applications of Non-standard Finite Difference Schemes, World Scientific, Singapore, 2000, 155-180.
[2] R.M. Anderson and R.M. May, Infectious Diseases of Humans: Dynamics and Control, Oxford University Press, Oxford, 1991.
[3] R. Anguelov, J.K. Djoko, P. Kama and J.M.-S. Lubuma, On Elementary Stable and Dissipative Non-standard Finite Difference Schemes for Dynamical Systems, Proceedings of the International Conference of Computational Methods in Science and Engineering (Crete, Greece, 27 October-1 November 2006), Lecture Series on Computer and Computational Sciences, Vol. 7A, VSP International Science Publishers, Utrecht, 2006, 24-27.
[4] R. Anguelov, J.K. Djoko, P. Kama and J.M.-S. Lubuma, On Finite Difference Schemes Having the Correct Linear Stability and Dissipative Properties of Dynamical Systems, University of Pretoria Technical Report No. UPWT 2007/02, submitted.
[5] R. Anguelov, P. Kama and J.M.-S. Lubuma, Non-standard Theta Methods and Related Discrete Schemes for the Reaction-Diffusion Equations, in: T.E. Simos (Ed.), Proceedings of the International Conference of Computational Methods in Science and Engineering (Kastoria, Greece, 12-16 September 2003), World Scientific, Singapore, 2003, 24-27.
[6] R. Anguelov, P. Kama and J.M.-S. Lubuma, On Non-standard Finite Difference Models of Reaction-Diffusion Equations, Journal of Computational and Applied Mathematics, 175 (2005), 11-29.
[7] R. Anguelov and J.M.-S. Lubuma, Contributions to the Mathematics of the Non-standard Finite Difference Methods and Applications, Numerical Methods for Partial Differential Equations, 17 (2001), 518-543.
[8] R. Anguelov and J.M.-S. Lubuma, Nonstandard Finite Difference Method by Nonlocal Approximation, Mathematics and Computers in Simulation, 61 (3-6) (2003), 465-475.
[9] G.J. Barclay, D.F. Griffiths and D.J. Higham, Theta Method Dynamics, LMS Journal of Computation and Mathematics, 3 (2000), 27-43.
[10] C. Canuto, M.Y. Hussaini, A. Quarteroni and T.A. Zang, Spectral Methods in Fluid Dynamics, Springer-Verlag, Berlin, 1988.
[11] R. Dautray and J.-L. Lions, Mathematical Analysis and Numerical Methods for Science and Technology, Vol. 6, Springer, New York, 1988.
[12] K. Dekker and J.G. Verwer, Stability of Runge-Kutta Methods for Stiff Nonlinear Differential Equations, Elsevier Science Publishers, Amsterdam, 1984.
[13] D.T. Dimitrov and H.V. Kojouharov, Positive and Elementary Stable Non-standard Numerical Methods with Applications to Predator-Prey Models, Journal of Computational and Applied Mathematics, 189 (2006), 98-108.
[14] D.T. Dimitrov, H.V. Kojouharov and B.M. Chen-Charpentier, Reliable Finite Difference Schemes with Applications in Mathematical Biology, in: R.E. Mickens (Ed.), Advances in the Applications of Nonstandard Finite Difference Schemes, World Scientific, Singapore, 2005, 249-285.
[15] Y. Dumont and J.M.-S. Lubuma, Non-standard Finite Difference Schemes for Vibro-Impact Problems, Proceedings of the Royal Society A, 461 (2005), 1927-1950.
[16] L.C. Evans, Partial Differential Equations, Vol. 19, American Mathematical Society, Providence, Rhode Island, 1998.
[17] A.B. Gumel (Ed.), Journal of Difference Equations and Applications, Volume 9, 2003, Special Issue no. 11-12 dedicated to Prof. R.E. Mickens on the occasion of his 60th birthday.
[18] H.W. Hethcote, The Mathematics of Infectious Diseases, SIAM Review, 42 (2000), 599-653.
[19] D.S. Jones and B.D. Sleeman, Differential Equations and Mathematical Biology, Chapman and Hall/CRC, New York, 2003.
[20] A. Katok and B. Hasselblatt, Introduction to the Modern Theory of Dynamical Systems, Cambridge University Press, New York, 1995.
[21] H.V. Kojouharov and B.M. Chen, Non-standard Methods for Advection-Diffusion-Reaction Equations, in: R.E. Mickens (Ed.), Applications of Nonstandard Finite Difference Schemes, World Scientific Publishing Company, London, 2000, 55-108.
[22] J.D. Lambert, Numerical Methods for Ordinary Differential Systems, John Wiley and Sons, New York, 1991.
[23] J.D. Logan, Nonlinear Differential Equations, Wiley-Interscience, New York, 1994.
[24] J.M.-S. Lubuma and K.C. Patidar, Contributions to the Theory of Non-standard Finite Difference Methods and Applications to Singular Perturbation Problems, in: R.E. Mickens (Ed.), Advances in the Applications of Nonstandard Finite Difference Schemes, World Scientific, Singapore, 2005, 513-560.
[25] J.M.-S. Lubuma and A. Roux, An Improved Theta Method for Systems of Ordinary Differential Equations, Journal of Difference Equations and Applications, 9 (11) (2003), 1023-1035.
[26] R.E. Mickens, Nonstandard Finite Difference Models of Differential Equations, World Scientific, Singapore, 1994.
[27] R.E. Mickens, Relation Between the Time and Space Step-Sizes in Nonstandard Finite Difference Schemes for the Fisher Equation, Numerical Methods for Partial Differential Equations, 13 (1) (1997), 51-55.
[28] R.E. Mickens (Ed.), Applications of Nonstandard Finite Difference Schemes, World Scientific, Singapore, 2000.
[29] R.E. Mickens, Nonstandard Finite Difference Methods, in: R.E. Mickens (Ed.), Advances in the Applications of Nonstandard Finite Difference Schemes, World Scientific, Singapore, 2005, 1-9.
[30] R.E. Mickens, Discrete Models of Differential Equations: the Roles of Dynamic Consistency and Positivity, in: L.J.S. Allen, B. Aulbach, S. Elaydi and R. Sacker (Eds), Difference Equations and Discrete Dynamical Systems (Proceedings of the 9th International Conference, Los Angeles, USA, 2-7 August 2004), World Scientific, Singapore, 2005, 51-70.
[31] A.R. Mitchell and D.F. Griffiths, Finite Difference Methods in Partial Differential Equations, Wiley, New York, 1980.
[32] K.W. Morton and D.F. Mayers, Numerical Solution of Partial Differential Equations, Cambridge University Press, New York, 1994.
[33] J.D. Murray, Mathematical Biology, Springer-Verlag, Berlin, 1989.
[34] R.E. O'Malley Jr., Thinking About Ordinary Differential Equations, Cambridge University Press, New York, 1997.
[35] K.C. Patidar, On the Use of Non-standard Finite Difference Methods, Journal of Difference Equations and Applications, 11 (8) (2005), 735-758.
[36] P.A. Raviart and J.M. Thomas, Introduction à l'analyse numérique des équations aux dérivées partielles, Masson, 1983.
[37] R.D. Richtmyer and K.W. Morton, Difference Methods for Initial-Value Problems, Wiley-Interscience, New York, 1967.
[38] A. Roux, Fourier Series and Spectral-Finite Difference Methods for the General Linear Diffusion Equation, University of Pretoria Internal Report UPWI 2002/03.
[39] G.D. Smith, Numerical Solution of Partial Differential Equations: Finite Difference Methods, Oxford University Press, New York, 1985.
[40] J. Smoller, Shock Waves and Reaction-Diffusion Equations, Springer-Verlag, Berlin, 1983.
[41] A.M. Stuart and A.R. Humphries, Dynamical Systems and Numerical Analysis, Cambridge University Press, New York, 1998.
[42] A.M. Stuart and A.T. Peplow, The Dynamics of the Theta Methods, SIAM Journal on Scientific and Statistical Computing, 12 (1991), 1351-1372.
[43] R. Temam, Navier-Stokes Equations: Theory and Numerical Analysis, North-Holland, Amsterdam, 1984.
[44] R. Temam, Infinite-Dimensional Dynamical Systems in Mechanics and Physics, Springer-Verlag, Berlin, 1988.
[45] J.W. Thomas, Numerical Partial Differential Equations, Springer, New York, 1988.
[46] W. Walter, Ordinary Differential Equations, Springer-Verlag, New York, 1998.
[47] S. Wiggins, Introduction to Applied Nonlinear Dynamical Systems and Chaos, Springer-Verlag, New York, 1990.
[48] E. Zeidler, Applied Functional Analysis: Applications to Physics, Springer-Verlag, New York, 1995.
[49] E. Zeidler, Applied Functional Analysis: Main Principles and their Applications, Springer-Verlag, New York, 1995.

Summary

Non-standard finite difference methods in dynamical systems

Student: Phumezile Kama

Supervisor: Professor Jean M-S Lubuma

Department: Mathematics and Applied Mathematics

Degree: Philosophiae Doctor

Date submitted: April 2009

This thesis is devoted to the study of numerical methods for dynamical systems. The numerical methods are expected to define discrete dynamical systems that preserve the essential properties of the exact solution. The shortcoming of the classical numerical methods, specifically the theta methods, as reliable discrete dynamical systems is that the step size is subject to a constraint: the time step size must be small enough if the schemes are to replicate the qualitative properties of the exact solutions.

The schemes we study are non-standard variants of the theta methods. The non-standard finite difference method aims at preserving the qualitative properties at no cost with regard to the value of the time step size. We analyse non-standard finite difference schemes that have no spurious fixed-points compared to the dynamical system under consideration, the linear stability/instability property of the fixed-points being the same for both the discrete and continuous systems. We obtain a sharper condition for the elementary stability of the schemes.

For more complex dynamical systems which are dissipative, we design schemes that replicate this property.

We consider a specific class of dynamical systems which is equivalent to the simplest model of Hamiltonian systems that occur in classical mechanics. We design a non-standard finite difference scheme that replicates the underlying principle of conservation of energy.

We analyse the Fisher equation, which enjoys a positivity and boundedness property. For the reaction-diffusion equation we obtain non-standard finite difference schemes that are elementary stable in the limit case of the space independent variable and which are stable with respect to the principle of conservation of energy in the stationary case. As an alternative approach, we approximate the space variable by the spectral method, while the time variable is approximated via the non-standard finite difference scheme.

Throughout the thesis, we provide numerical experiments that support the theory.
