A Critical Study of the Finite Difference
and Finite Element Methods for the
Time Dependent Schrödinger Equation

Simen Kvaal

Thesis Submitted for the Degree
of Candidatus Scientiarum

Department of Physics
University of Oslo
March 2004
Preface
This thesis is perhaps a bit lengthy compared to the standards of a cand. scient. degree
(or Master's degree, as it will be called in the future in order to comply with international
standards). It is, however, aimed at a broad audience, from my fellow physics students
to mathematicians and other non-physicists who may have an interest in the area. I have
tried to cut away material in order to make it a little shorter, and some less central
material has been moved to appendices for the specially interested reader.
The title of the thesis pretty much describes the aim of this cand. scient. project.
The finite difference and finite element methods are two widely used approaches to
solving partial differential equations. Traditionally, the finite element method has been
reserved for engineering projects and has seen little use in basic research fields such as
atomic physics, where the finite difference method dominates. One reason
may be that finite element methods are complicated to implement and that they
draw on a wide range of sophisticated numerical tools, such as sparse matrices and iterative
solvers for linear equations. For this reason, finite element solvers are usually expensive
commercial products (such as FemLab, see Ref. ) whose operation hides the numerical
details from the user, an approach that in a way makes scientists feel that they lose control
of the situation.
Some work was done on the Schrödinger equation with finite element methods
in the early eighties and nineties, see for example Refs. [2–4], but for some reason
the development seems to have stagnated. One reason might be the above-mentioned
complexity of implementation. There is a huge threshold to climb if one wants to
generalize a simple formulation that can be coded in a few hundred lines, experiment
with different element types, and so on. Fortunately, the programming library Diffpack
is tailored for scientific problems, making available very powerful and flexible class
hierarchies, interfaces and visualization support.
The newly initiated Centre of Mathematics for Applications (CMA) tries to join
the forces of mathematics, physics and informatics, among others, and a thorough yet
basic exposition of quantum mechanics might be in the spirit of such collaborative
work. Chapters 1 and 2 contain the fundamentals of quantum mechanics; physicists
may skip these chapters and go directly to chapter 3, in which I discuss more
specific physical problems. Others may find these chapters illuminating and
interesting. A lot of work has been put into making an understandable discussion
aimed at practitioners of the natural sciences with a foundation in mathematics. The text
sits somewhere between an intermediate course in quantum mechanics and a guide for
mathematically trained people with an interest in physics.
Chapter 4 deals with ordinary and partial differential equations and numerical
methods. A recipe-based approach is taken, retaining a touch of rigor and analysis along
the way. Chapter 5 is a short review of numerical methods for problems from linear
algebra, more specifically square linear systems of equations and eigenvalue problems.
We also discuss the implementations of these methods in Diffpack, at the same time
giving an introduction to this powerful programming library for partial differential
equations and finite element methods.
Chapter 6 turns to quantum mechanical eigenvalue problems and discusses their
importance and their applications in time dependent problems, which are presented in
chapter 7. Some numerical experiments are performed and analyzed, yielding insight
into the behavior of both the finite difference and finite element discretizations of
the eigenvalue problem. Chapter 7 is the pinnacle of the thesis, around which all the other
material builds up. We discuss the solution of the time dependent Schrödinger equation
for a two-dimensional hydrogen atom with a laser field perturbation, a system which
is numerically, physically and conceptually interesting. The finite element implementation
is flexible and allows for generalizations such as different gauges, different
geometries, new time integration methods and so on.
Chapter 8 concludes and sums up the contents of the thesis, discussing important
results and ideas for further work. There are a few appendices; the ﬁrst one picks
up some technical details of calculations and formalism, while appendix B contains
complete program listings for every program written for the thesis.
As is often the case with the work on a cand. scient. degree, the creative process
accelerates enormously towards the final days before the deadline. Ideas pop up, intensive
research is done and many good (and some bad) ideas emerge. Alas, one cannot
follow each and every one. Rather late in the work with the thesis I realized that the
mass matrix of finite element discretizations is in fact (obviously!) positive definite.
This allows for converting every generalized eigenvalue problem arising from finite
element discretizations into a standard eigenvalue problem by means of the Cholesky
decomposition. Incorporating this into the HydroEigen class definition
should only amount to another hundred lines of code, but with proper testing rituals
and debugging sessions it would require another week of work while not adding to the
flexibility of the program. On the other hand, this optimization would allow the use of much
faster eigenvalue searches, which in turn would mean more numerical experiments and
more detailed results.
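The Cholesky reduction mentioned here fits in a few lines. The following is an illustrative Python sketch (not the thesis' Diffpack/C++ setting, and the 2×2 matrices are made-up toy data): since the mass matrix M is symmetric positive definite it factors as M = LLᵀ, so the generalized problem Hv = λMv becomes the standard symmetric problem (L⁻¹HL⁻ᵀ)u = λu, with v = L⁻ᵀu.

```python
from math import sqrt

# Toy 2x2 symmetric matrices standing in for the "Hamiltonian" and the
# symmetric positive definite mass matrix of a finite element discretization.
H = [[2.0, 1.0], [1.0, 3.0]]
M = [[2.0, 0.5], [0.5, 1.0]]

# Cholesky factor of M = L L^T (closed form for the 2x2 case):
l11 = sqrt(M[0][0])
l21 = M[1][0] / l11
l22 = sqrt(M[1][1] - l21 * l21)

# Inverse of the lower-triangular factor L:
i11, i21, i22 = 1.0 / l11, -l21 / (l11 * l22), 1.0 / l22

# A = L^-1 H L^-T is symmetric; its eigenvalues solve the standard problem.
a11 = i11 * i11 * H[0][0]
a12 = i11 * (H[0][0] * i21 + H[0][1] * i22)
a22 = (i21 * (i21 * H[0][0] + i22 * H[1][0])
       + i22 * (i21 * H[0][1] + i22 * H[1][1]))
tr, det = a11 + a22, a11 * a22 - a12 * a12
disc = sqrt(tr * tr - 4.0 * det)
lam_std = sorted([(tr - disc) / 2.0, (tr + disc) / 2.0])

# Direct generalized eigenvalues from det(H - lam*M) = 0 (a quadratic):
a = M[0][0] * M[1][1] - M[0][1] * M[1][0]
b = -(H[0][0] * M[1][1] + H[1][1] * M[0][0]
      - H[0][1] * M[1][0] - H[1][0] * M[0][1])
c = H[0][0] * H[1][1] - H[0][1] * H[1][0]
d = sqrt(b * b - 4.0 * a * c)
lam_gen = sorted([(-b - d) / (2.0 * a), (-b + d) / (2.0 * a)])

# Both routes give the same spectrum.
assert all(abs(s - g) < 1e-12 for s, g in zip(lam_std, lam_gen))
```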
There are several computer programs written for this thesis in addition to essential
downloadable documentation and texts. For this reason I have created a web page, see
Ref. , giving access to all source code, reports, animations, this text and more.
I would like to thank my supervisors Prof. Morten Hjorth-Jensen and Prof. Hans
Petter Langtangen for invaluable support and guidance during the work with the thesis. Morten has spent quite a few hours of valuable time on interesting discussions,
providing constructive criticism and proofreading of the text, and making available his
deep and thorough insight into both numerical methods and quantum mechanics.
Hans Petter was co-supervisor and is responsible for this thesis being possible at all,
with his thorough knowledge of Diffpack, his offer of summer holiday work relevant to
my thesis, and his constant encouragement and confidence in my abilities as a student
and programmer.
I would also like to thank Prof. Ragnar Winther at CMA for valuable help with
section 4.6 on stability analysis of ﬁnite element schemes. In addition, 1st Am. Per
Christian Moan (also at CMA) has provided interesting discussions on gauge invariance
and time integration.
During moments of frustration, slight despair and writer’s block my girlfriend, my
family and my friends have (as always!) put up with me. They have provided immense
encouragement and support for which I am truly thankful. I dedicate this work to all
of you, for as they say, no man is an island.
Oslo, March 2004
Simen Kvaal
Contents

1 A Brief Introduction to Quantum Mechanics
1.1 A Quick Look at Classical Mechanics
1.2 Why Quantum Mechanics?
1.3 The Postulates of Quantum Mechanics
1.4 More on the Solution of the Schrödinger Equation
1.5 Important Consequences of the Postulates
1.5.1 Quantum Measurements
1.5.2 Sharpness and Commuting Observables
1.5.3 Uncertainty Relations: Heisenberg's Uncertainty Principle
1.5.4 Ehrenfest's Theorem
1.5.5 Angular Momentum
1.5.6 Eigenspin of the Electron
1.5.7 Picture Transformations
1.5.8 Many Particle Theory
1.5.9 Entanglement
1.6 Electromagnetism and Quantum Physics
1.6.1 Classical Electrodynamics
1.6.2 Semiclassical Electrodynamics

2 Simple Quantum Mechanical Systems
2.1 The Free Particle
2.1.1 The Classical Particle
2.1.2 The Quantum Particle
2.1.3 The Gaussian Wave Packet
2.2 The Harmonic Oscillator
2.2.1 The Classical Particle
2.2.2 The Quantum Particle
2.3 The Hydrogen Atom
2.3.1 The Classical System
2.3.2 The Quantum System
2.4 The Correspondence Principle

3 The Time Dependent Schrödinger Equation
3.1 The General One-Particle Problem
3.1.1 Uni-Directional Magnetic Field
3.1.2 A Particle Confined to a Small Volume
3.1.3 The Dipole Approximation
3.2 Physical Problems
3.2.1 Two-Dimensional Models of Solids
3.2.2 Two-Dimensional Hydrogenic Systems

4 Numerical Methods for Partial Differential Equations
4.1 Differential Equations
4.1.1 Ordinary Differential Equations
4.1.2 Partial Differential Equations
4.2 Finite Difference Methods
4.2.1 The Grid and the Discrete Functions
4.2.2 Finite Differences
4.2.3 Simple Examples
4.2.4 Incorporating Boundary and Initial Conditions
4.3 The Spectral Method
4.3.1 The Discrete Fourier Transform
4.3.2 A Simple Implementation in Matlab
4.4 Finite Element Methods
4.4.1 The Weighted Residual Method
4.4.2 A One-Dimensional Example
4.4.3 More on Elements and the Element-By-Element Formulation
4.5 Time Integration Methods
4.5.1 The Theta-Rule
4.5.2 The Leap-Frog Scheme
4.5.3 Stability Analysis of the Theta-Rule
4.5.4 Stability Analysis of the Leap-Frog Scheme
4.5.5 Properties of the ODE Arising From Space Discretizations
4.5.6 Equivalence With Hamilton's Equations of Motion
4.6 Basic Stability Analysis
4.6.1 Stationary Problems
4.6.2 Time Dependent Problems

5 Numerical Methods for Linear Algebra
5.1 Introduction to Diffpack
5.1.1 Finite Elements in Diffpack
5.1.2 Grid Generation
5.1.3 Linear Algebra in Diffpack
5.2 Review of Methods for Linear Systems of Equations
5.2.1 Gaussian Elimination and Its Special Cases
5.2.2 Classical Iterative Methods
5.2.3 Krylov Iteration Methods
5.3 Review of Methods for Eigenvalue Problems
5.3.1 Methods for Standard Hermitian Eigenvalue Problems
5.3.2 Iterative Methods for Large Sparse Systems

6 Quantum Mechanical Eigenvalue Problems
6.1 Model Problems
6.1.1 Particle-In-Box
6.1.2 Harmonic Oscillator
6.1.3 Two-Dimensional Hydrogen Atom
6.2 The Finite Element Formulation
6.3 Reformulation of the Generalized Problem
6.4 An Analysis of Particle-In-Box in One Dimension
6.5 The Implementation
6.5.1 Class Methods of Class HydroEigen
6.5.2 Comments
6.6 Numerical Experiments
6.6.1 Particle-In-Box
6.6.2 Two-Dimensional Hydrogen Atom
6.7 Strong Field Limit
6.8 Intermediate Magnetic Fields
6.9 Discussion and Further Applications

7 Solving the Time Dependent Schrödinger Equation
7.1 Physical System
7.2 The Implementation
7.2.1 Time Stepping
7.2.2 Description of the Member Functions
7.2.3 Comments
7.3 Numerical Experiments
7.3.1 Building and Solving Linear Systems
7.3.2 Comparing the Crank-Nicholson and the Leap-Frog Schemes
7.3.3 Comparing Linear and Quadratic Elements
7.3.4 A Simulation of the Full Problem
7.4 Discussion

8 Conclusion

A Mathematical Topics
A.1 A Note on Distributions
A.2 A Note on Infinite Dimensional Spaces in Quantum Mechanics
A.3 Diagonalization of Hermitian Operators
A.4 The Fourier Transform
A.5 Time Evolution for Time Dependent Hamiltonians

B Program Listings
B.1 DFT Solver for the One-Dimensional Problem
B.1.1 fft_schroed.m
B.2 The HydroEigen Class
B.2.1 HydroEigen.h
B.2.2 HydroEigen.cpp
B.3 The TimeSolver Class
B.3.1 TimeSolver.h
B.3.2 TimeSolver.cpp
B.3.3 main.cpp
B.4 The EigenSolver Class
B.4.1 EigenSolver.h
B.4.2 EigenSolver.cpp
Chapter 1
A Brief Introduction to
Quantum Mechanics
In this and the subsequent two chapters we will present an introduction to the quantum
mechanics of a single particle: its background, its formalism, and its application to simple
systems and examples. It will serve as an introduction to the real topic of this thesis,
which is the numerical solution of the time dependent Schrödinger equation (which we
will introduce later).
In this chapter, we will start with some basic classical mechanics to provide means
to compare quantum physical concepts to “ordinary” Newtonian concepts, such as
measurements and observables, equations of motion and probabilistic features.
Then we proceed with the postulates of quantum mechanics. By “postulates” we
shall mean concepts that are not obtainable by means of mathematics or deduction
from other areas of physics, but on the other hand are necessary and suﬃcient for
developing all quantum mechanical results.
1.1 A Quick Look at Classical Mechanics
A particle described by classical mechanics behaves in a perfectly ordinary way. That
is, it obeys the everyday mechanics of golf balls, planets and coﬀee-cups, also known
as Newtonian mechanics.
At the heart of Newtonian mechanics lies Newton's second law. This is the
well-known differential equation stating the relation between a particle's acceleration
and the force exerted on it by its surroundings:1

mẍ = F.   (1.1)
Although Newtonian mechanics is seemingly a well-established theory in this form,
mathematicians and physicists have developed other equivalent and in some respects
more complicated ways of formulating it. On the other hand, the reformulations yield
deep insight into classical mechanics. In fact, the mathematical framework of classical
mechanics is much deeper than we will ever be able to present in even a lengthy
exposition, and it represents a huge area of mathematics.
For an excellent account of classical mechanics, see Ref. .
1 Note the dot-notation for the time derivative: n dots means n differentiations with respect to time.

Figure 1.1: Particle in three dimensions with constraints

We will give a brief introduction to the Hamiltonian formulation of classical mechanics.
Quantum mechanics may be built on this formalism, and the similarities and
parallels are quite instructive. (There is also another and equivalent way of introducing
quantum mechanics through the Lagrangian formulation of classical mechanics, see for
example Ref. .)
Let us first introduce the material system: a system consisting of n point-shaped
particles moving in D-dimensional space. By point-shaped we mean that each particle's
configuration is completely determined by a single position in D-space, and that each particle
has a non-zero mass mi. The configuration of a material system is then specified
by nD real numbers q = (q1, q2, ..., qnD), the set of which constitutes the so-called
configuration space.

To specify the state of the material system, however, we also need to specify the motion of
the system at an instant, and this is done through the momenta p = (p1, p2, ..., pnD).
To put it simply, the momenta represent the velocity of each particle in each direction
in D-space.

The configuration and momentum of the system are not necessarily the usual Cartesian
coordinates (xi, yi and zi, i = 1 ... n) and momenta (mi vx,i, mi vy,i and mi vz,i),
but rather so-called generalized coordinates and momenta. They are also called canonical
variables, and p and q are called canonical conjugates of each other.
For a single particle moving in three-dimensional space, there are initially three
coordinates x, y and z in addition to the momenta px , py and pz , but if we impose
constraints on the motion, such as forcing the particle to move on an arbitrary shaped
wire like in Fig. 1.1, the number may be reduced. In this case we have only one
generalized coordinate q. The cartesian coordinates are then given as a function of q.
We will not state the general deﬁnition of generalized momenta. For the purpose
of the applications in this thesis it suﬃces to think of the pi as the velocity of qi .
However, the momenta are defined in such a way that the equations of motion
in Hamiltonian mechanics, equivalent to Newton's second law, are given by Hamilton's
equations of motion:

q̇i = ∂H/∂pi ,   ṗi = −∂H/∂qi .   (1.2)
The so-called Hamiltonian is a function of the individual coordinates and momenta
and possibly explicitly of time, viz.,
H = H(q, p, t).
The Hamiltonian may usually be thought of as the system’s energy function, i.e., its
total energy. For some of the systems we consider, however, this is not the case: we will
introduce electromagnetic forces to the system, and this alters the Hamiltonian and
the definition of the canonical momenta pi.

Figure 1.2: A classical particle and measuring its state
If H is the total energy of the system, and if it is explicitly independent of time,
we may easily derive an important conservation law:
dH/dt = ∂H/∂t + Σi [(∂H/∂qi) q̇i + (∂H/∂pi) ṗi] = ∂H/∂t + Σi (−ṗi q̇i + q̇i ṗi) = ∂H/∂t.
Thus, the energy is conserved for a material system if the Hamiltonian does not explicitly depend on time. We shall see examples of this in chapter 2 when we discuss
simple quantum mechanical systems and their classical analogies.
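This conservation law is easy to verify numerically. Below is a small Python sketch (an illustration with made-up parameters, not code from the thesis) that integrates Hamilton's equations for a one-dimensional harmonic oscillator, H = p²/(2m) + kq²/2, with a classical fourth-order Runge-Kutta method and checks that H stays very nearly constant:

```python
m, k = 1.0, 1.0   # particle mass and spring constant (toy values)

def rhs(q, p):
    # Hamilton's equations: qdot = dH/dp = p/m, pdot = -dH/dq = -k*q
    return p / m, -k * q

def rk4_step(q, p, dt):
    # One classical fourth-order Runge-Kutta step for the pair (q, p).
    k1q, k1p = rhs(q, p)
    k2q, k2p = rhs(q + 0.5 * dt * k1q, p + 0.5 * dt * k1p)
    k3q, k3p = rhs(q + 0.5 * dt * k2q, p + 0.5 * dt * k2p)
    k4q, k4p = rhs(q + dt * k3q, p + dt * k3p)
    q += dt * (k1q + 2 * k2q + 2 * k3q + k4q) / 6.0
    p += dt * (k1p + 2 * k2p + 2 * k3p + k4p) / 6.0
    return q, p

def H(q, p):
    # The Hamiltonian: kinetic plus potential energy.
    return p * p / (2.0 * m) + 0.5 * k * q * q

q, p = 1.0, 0.0
E0 = H(q, p)                      # initial energy (0.5 here)
for _ in range(1000):             # integrate up to t = 10
    q, p = rk4_step(q, p, 0.01)
assert abs(H(q, p) - E0) < 1e-8   # H has no explicit time dependence
```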
An important special case of Hamiltonian systems is that of a single
particle with mass m moving in three-dimensional space under the influence of an
external potential V(x), where x denotes the Cartesian coordinates of the particle, viz.,

H(x, p, t) = T + V = p²/(2m) + V(x, t).
Here, T is the total kinetic energy. Writing out Hamilton’s equations of motion yields
ẋ = p/m   and   ṗ = −∇V(x, t),
which is exactly Newton’s second law. (Recall that in a potential ﬁeld the force is given
by −∇V .)
We will not elaborate any further on Hamilton’s equations until chapter 2, but
only note that since they are a set of ordinary diﬀerential equations governing the
time evolution of a classical system, classical dynamics is perfectly deterministic, in the
sense that given initial conditions x (0) and p(0) for the location and momentum of
the particle, one may (at least in principle) calculate its everlasting trajectory through
conﬁguration space by solving these equations.
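As an illustration of this determinism (a Python sketch with made-up numbers, not code from the thesis), one can integrate Hamilton's equations for a particle in a uniform gravitational field, H = p²/(2m) + mgx, and compare against the exact Newtonian trajectory x(t) = x₀ + v₀t − gt²/2:

```python
m, g = 2.0, 9.81          # mass and gravitational acceleration (toy values)
x, p = 10.0, 2.0 * m      # initial position and momentum (v0 = 2.0 m/s)
steps = 10000
dt = 1.0 / steps          # integrate up to t = 1 s
for _ in range(steps):
    p += dt * (-m * g)    # pdot = -dH/dx = -m*g
    x += dt * (p / m)     # xdot =  dH/dp = p/m   (semi-implicit Euler)

exact = 10.0 + 2.0 * 1.0 - 0.5 * g * 1.0**2   # Newton: x0 + v0*t - g*t^2/2
assert abs(x - exact) < 1e-3   # the computed trajectory matches Newton's
```

Given x(0) and p(0), the entire future trajectory follows; this is precisely the determinism discussed above.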
Furthermore, the concept of ideal measurements in classical mechanics is rather
simple. If we have some dynamical quantity ω(x , p) it may be calculated at all times
using the coordinates and momenta at that time. There is no interference with the
actual system. We can picture the particle as if it had a display attached, with x and
p continuously updated and displayed for us to read, as in Fig. 1.2. Functions ω of
the canonical coordinates and momenta are called classical observables. A measurable
quantity can only be deﬁned in terms of the state of the system, hence it must be such
a function. On the other hand, x and p are measurable so that any function of these
also must be an observable.
Figure 1.3: Schematic view of the double-slit experiment (left) and the discovery of photons (right)

Newtonian mechanics is in most respects very intuitive and agrees with our ordinary
way of viewing the things around us. Things exist in a definite place, moving along definite
paths with definite velocities. Indeed, this mechanistic viewpoint was securely founded
already in the 18th century, when Pierre-Simon de Laplace (1749–1827) proposed his
deterministic world view, see Ref. :
“We may regard the present state of the universe as the eﬀect of its past and
the cause of its future. An intellect which at any given moment knew all of
the forces that animate nature and the mutual positions of the beings that
compose it, if this intellect were vast enough to submit the data to analysis,
could condense into a single formula the movement of the greatest bodies
of the universe and that of the lightest atom; for such an intellect nothing
could be uncertain and the future just like the past would be present before
its eyes.”
This view was however profoundly shaken through the advent of quantum mechanics.
One simply cannot measure the positions and velocities with inﬁnite accuracy. In
addition, the new science of chaos makes the Laplacian determinism somewhat hollow.
Many physical systems governed by Hamilton's equations of motion turn
out to exhibit chaotic behavior, i.e., small variations in the initial conditions grow
exponentially with time, making long-term predictions of the system's configuration
impossible, see Ref. .
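As a toy illustration of such sensitivity (using the chaotic logistic map, a standard textbook example rather than one of the systems treated in this thesis), two trajectories started a distance 10⁻¹² apart soon become completely uncorrelated:

```python
# Two trajectories of the chaotic map x -> 4x(1-x), started 1e-12 apart.
a, b = 0.3, 0.3 + 1e-12
sep10, max_sep = 0.0, 0.0
for n in range(1, 101):
    a, b = 4 * a * (1 - a), 4 * b * (1 - b)
    if n == 10:
        sep10 = abs(a - b)      # still tiny after 10 steps
    max_sep = max(max_sep, abs(a - b))

assert sep10 < 1e-5             # growth factor is at most 4 per step
assert max_sep > 0.1            # separation eventually saturates at order one
```

The initial difference of 10⁻¹² is amplified step by step until the two trajectories carry no memory of having started together.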
1.2 Why Quantum Mechanics?
We will not go into the historical reasons for how and why quantum mechanics came
to be, although this is very interesting in its own right. Quantum mechanics arose
in the first decades of the 20th century, when modern experiments and measurements
contradicting the classical theories created a crisis in the physics community. In
Refs. [10, 11] excellent and entertaining accounts of these matters are given.
We will instead make an illustration that is quite popular, explaining why we need
quantum mechanics and at the same time introducing some of the concepts, making it
easier to follow the introduction of the postulates of quantum mechanics in the next
section.
Imagine that we direct a beam of monochromatic light described by a wave ψI
onto a plate with two slits S1 and S2 . At some distance behind the plate we place a
photosensitive screen. The experimental setup of this double-slit experiment is shown
in Fig. 1.3. Some of the light will pass through the plate. The outgoing waves ψ1
and ψ2 coming from S1 and S2 , respectively, will interfere with each other, producing
an interference pattern I1+2 at the screen. (The subscript indicates which slits are
open.)
Figure 1.4: Classical particles scattered by double slits, and their combined arrival
distributions

This is very well understood if we assume the wave nature of electromagnetic fields,
an assumption that has been made for more than a century, after Maxwell proposed
his famous equations showing that light propagates as waves in a way very similar to
waves on the surface of water.2 In fact we may imagine an analogous experiment with
water-waves and observe the results above.
Suppose now that we close S2 and lower the intensity of the beam ψI . Then we will
notice that the light no longer arrives in a continuous fashion at the screen; we observe
small bursts of light. We have made the monumental discovery that light actually
comes in small bundles – they have a particle nature. If we observe a large number of
these so-called photons and make a histogram of their arrival-coordinate x, we will of
course produce the pattern I1 , see Fig. 1.3.
The fact that light turned out to be particles was surprising, but not in direct opposition to classical physics. Physicists are used to the idea that phenomena appearing
continuous in reality are composed of small and discrete components. A drop of water
is for example composed of tiny molecules. But classical physics was anyway in for a
surprise.
Experimentally we may ﬁnd that each photon carries the same energy E = hν,
where ν is the frequency of the light waves. Furthermore, we may ﬁnd that they all
share the same momentum p = h/λ, where λ is the wavelength, related to the frequency
by λν = c, the speed of light. The number h is called Planck’s constant, and
h ≈ 6.626 068 76 · 10⁻³⁴ Js,

a very small number.3
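These relations are easy to put numbers on. A small Python sketch (taking green light of wavelength 500 nm as an assumed example) checks that E = hν and p = h/λ are mutually consistent via E = pc:

```python
h = 6.62606876e-34    # Planck's constant, Js
c = 2.99792458e8      # speed of light, m/s
lam = 500e-9          # wavelength of green light, m (example value)

nu = c / lam          # frequency, from lambda * nu = c
E = h * nu            # photon energy E = h*nu, about 4e-19 J
p = h / lam           # photon momentum p = h/lambda

assert abs(E - p * c) < 1e-30   # consistency: E = p c for light
```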
What happens when we open S2 again? If the photons are particles, they will pass
either S1 or S2 , creating the pattern I1 + I2 . The photons passing S1 will contribute
to I1 , and the photons passing S2 will contribute to I2 . This picture of the photons as
classical particles is shown in Fig. 1.4. Instead, we observe that the pattern is
very different from I1 + I2, even at very low intensity, which can only mean that
the photons do not pass through one single slit in a well-defined way. In some strange
way they must pass through both! It turns out that the classical particle picture has some
flaws after all.
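The difference between I1 + I2 and I1+2 is just the interference cross term of complex wave amplitudes. A minimal Python sketch (with unit-amplitude waves as an assumed toy model):

```python
import cmath

def intensity(phase_diff):
    """Combined intensity of two unit waves arriving with a phase difference."""
    psi1 = 1.0 + 0.0j                    # wave from slit S1
    psi2 = cmath.exp(1j * phase_diff)    # wave from slit S2, phase-shifted
    # The waves add as amplitudes: |psi1 + psi2|^2, not |psi1|^2 + |psi2|^2.
    return abs(psi1 + psi2) ** 2

# The "classical" sum I1 + I2 would always be 2; the wave result oscillates:
assert abs(intensity(0.0) - 4.0) < 1e-12   # constructive interference
assert intensity(cmath.pi) < 1e-12         # destructive interference
```

The oscillating cross term 2 Re(ψ1* ψ2) is exactly what the classical sum I1 + I2 lacks.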
The same experiment may be performed with electrons, physical objects that until the
1920s were regarded solely as particles in the classical sense. But as the combined diffraction
pattern I1 + I2 fails to reproduce I1+2 in this case as well, we must abandon the
Newtonian particle nature of the electrons, at least when they pass the slits S1 and
S2. In fact, electron diffraction experiments have been carried out with an electron
intensity as low as one electron per minute, see Ref. .
2 See section 1.6.
3 The uncertainty of the cited h lies in the two last digits. Cited from Ref. .

In 1924 de Broglie proposed a daring hypothesis (see Refs. [10, 14]), stating that
any physical object can be ascribed a wavelength λ given by the relation

λ = h/p,   (1.3)

which is the same expression as for the wavelength of photons, i.e.,

λ = c/ν = hc/(hν) = hc/E = hc/(pc) = h/p.
Indeed, with de Broglie's hypothesis the results from the double-slit experiment
make very much sense when we assume that a wave – not a particle – passes the slits,
but that a particle is found at the screen.

Before this ground-shaking surprise, the wave ψ could be ascribed to the beam as a
whole, because the electrons or photons were so large in number. Quantum mechanics
says that each particle is also a wave, in the sense that it is ascribed a wave ψ(x)
whose squared amplitude |ψ(x)|² is the probability density for finding the particle at the
position x. Each particle is of course observed at some specific place x along the screen
when the wave arrives. However, we cannot tell what the wave itself looks like, because
the wave is only a probability density for one single particle's position. Only if a large
ensemble of particles, all starting out the same way, hits the screen may we deduce from a
histogram what |ψ(x)|² looks like. On the other hand, an intense electron beam will
completely obscure the particle nature, and we will observe a continuous wave front of
particles, very much as in the case with light. We see that the probabilistic nature
of measurements is fundamental to quantum mechanics.
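The histogram argument can be illustrated directly: draw many positions from a probability density |ψ(x)|² (here taken to be a standard normal density, an assumed toy choice) and check that the ensemble reproduces the statistics of the density, something no single detection could reveal:

```python
import random
import statistics

random.seed(42)   # a reproducible "measurement record"

# Positions sampled from |psi(x)|^2, taken here to be a standard normal density:
hits = [random.gauss(0.0, 1.0) for _ in range(100_000)]

# One hit tells us almost nothing about psi; the ensemble recovers the
# density's mean and spread to a few parts in a thousand:
assert abs(statistics.fmean(hits)) < 0.02
assert abs(statistics.pstdev(hits) - 1.0) < 0.02
```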
So why do we have Newtonian physics in the first place? The answer lies in the
number h, which is very small. If we calculate the wavelength of, say, a 1 kg
basketball moving at 1 m/s, we get λ = h/p ≈ 6.6 × 10⁻³⁴ m, a number which is
extremely small compared to the size of the basketball. To observe diffraction patterns,
the wavelength must be comparable to the size of the slits, which is absurd here. It is very
difficult to come up with an experiment showing the wave-like nature of basketballs.
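Putting numbers on the comparison (the electron speed of 10⁶ m/s below is an assumed, typical atomic-scale value): λ = h/p lands near 10⁻³⁴ m for the ball, but at atomic length scales for an electron.

```python
h = 6.62606876e-34                  # Planck's constant, Js
lam_ball = h / (1.0 * 1.0)          # 1 kg at 1 m/s: about 6.6e-34 m
m_e = 9.10938e-31                   # electron mass, kg
lam_electron = h / (m_e * 1.0e6)    # electron at 1e6 m/s: about 7e-10 m

assert 6e-34 < lam_ball < 7e-34     # hopelessly unobservable
assert 5e-10 < lam_electron < 1e-9  # comparable to atomic sizes
```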
In summary we have experimentally turned down the concept of microscopic particles moving along deﬁnite trajectories. Instead we say that they have a wave-like nature
and that their motion is associated with some wave ψ(x). Classical Newtonian physics
cannot be used, at least in cases where the De Broglie wavelength h/p is comparable
to the size of the system considered.
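The wavelength estimate above is easy to reproduce. A minimal sketch in Python (the electron values are an added illustration for contrast, not taken from the text):

```python
# De Broglie wavelength, Eqn. (1.3): lambda = h/p, SI units throughout.
h = 6.626e-34  # Planck's constant in J s

def de_broglie_wavelength(mass, speed):
    """Wavelength h/p of a non-relativistic particle with momentum p = m*v."""
    return h / (mass * speed)

lam_ball = de_broglie_wavelength(1.0, 1.0)              # 1 kg basketball at 1 m/s
lam_electron = de_broglie_wavelength(9.109e-31, 1.0e6)  # electron at 10^6 m/s

print(f"{lam_ball:.3e} m")      # ~6.6e-34 m: hopelessly small for diffraction
print(f"{lam_electron:.3e} m")  # ~7.3e-10 m: comparable to atomic scales
```

The electron case shows why wave effects are observable for microscopic particles: its wavelength is of the order of atomic dimensions.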
1.3 The Postulates of Quantum Mechanics
In this section we will introduce the postulates of non-relativistic quantum mechanics.
Even though we until now have been talking about waves distributed in space we shall
make a formulation in more abstract mathematical terms. The connection between
these abstract terms and the concrete example of the double-slit experiment will become
clear as the discussion progresses.
The language of quantum mechanics is linear algebra, with all its implications. This
connection is clearly seen through the ﬁrst three postulates.
Postulate 1
The state of a quantum mechanical particle is described by a vector Ψ in a complex
Hilbert space H. On the other hand, all the possible states of the particle are exactly
H minus the zero vector.
Not every vector of H corresponds to a distinct state. Every vector

Ψ′ = αΨ
with α a (complex) scalar, corresponds to the same state. The diﬀerent “rays” or
directions in Hilbert space thus correspond to distinct physical states. We will show
this explicitly later, when discussing observables and measurements.
Particles exist in physical space, and this is reﬂected through the fact that our
Hilbert space almost always contains L2(R3), the set of square-integrable complex-valued functions on R3. Indeed, the wave-shape of the double-slit experiment is just the vector Ψ represented as an L2-function. (See Refs. [15–17] for a discussion of L2.)
But unlike in classical physics a particle in quantum mechanics has not only spatial
degrees of freedom. It may also have so-called spin, which is an additional intrinsic
degree of freedom found in many elementary particles. In that case our Hilbert space
looks like
H = L2 (R3 ) ⊗ Cn ,
where n depends on the fundamental particle in consideration. In other words, the function Ψ(x) has n complex components Ψ(σ), σ = 1, 2, . . . , n. For so-called spin-1/2 particles we have n = 2. The most fundamental examples of such particles are
electrons, protons and neutrons, the constituents of all atoms and also most of the
known universe. Photons, by the way, are spin-1 particles, with n = 3, but they do
not obey non-relativistic quantum mechanics as they have zero rest mass.
Sometimes we reduce our Hilbert space (in some appropriate way) and consider
particles in one or two dimensions. We may also neglect the spin degrees of freedom
as in the double-slit illustration, or even neglect L2 when our particle is conﬁned to a
very small volume in space.4 In addition, other ways of approximating H by neglecting
some parts of it are used. Indeed, the ﬁnite element methods to be discussed can be
viewed in this way.
There are obviously striking diﬀerences between the classical way of describing a
particle’s state and the quantum mechanical way. The classical state is fully speciﬁed
with position x and momentum p; six degrees of freedom in total. But in quantum
mechanics the particle’s state is only completely speciﬁed with an inﬁnity of complex
numbers, as L2 is an inﬁnite-dimensional vector space. This diﬀerence stems from the
probabilistic interpretation of the wave function hinted at in the previous section; we
need a probability density for all points in space.
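Truncating this infinity of complex numbers is precisely what a numerical method does. A minimal Python sketch (the grid, interval and Gaussian wave packet are illustrative assumptions) of how a state in L2(R) is approximated by a finite vector of samples — the viewpoint behind the finite difference and finite element methods studied later:

```python
import numpy as np

# Approximate a state in L^2(R) by its values on a uniform grid, so the
# infinite-dimensional state becomes a finite complex vector.
N = 1000
x = np.linspace(-10.0, 10.0, N)
dx = x[1] - x[0]

# Illustrative Gaussian wave packet with a plane-wave factor.
psi = np.exp(-x**2 / 2) * np.exp(1j * 2.0 * x)
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)  # normalize: integral of |psi|^2 is 1

density = np.abs(psi)**2                     # probability density |psi(x)|^2
print(np.sum(density * dx))                  # ≈ 1.0
```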
The second postulate concerns physically observable quantities.
Postulate 2
Every measurable quantity (i.e., observable) is represented by an Hermitian linear
operator in H.
For every classical dynamical variable ω(x , p) there corresponds an operator Ω obtained by operator substitution of the fundamental position and momentum operators
X and P, respectively:
Ω = ω(x → X , p → P).
The components of X and P are operators defined through the fundamental quantum commutation relation

[Xi, Pj] = iħδij.

When Ψ is represented as an L2-function, the position operator Xi is just multiplication of Ψ(x) by xi. The momentum operator then becomes

Pi = −iħ ∂/∂xi.
Note however that this choice is not unique, as shown in Ref. . Furthermore, as it
stands now it really makes an assumption on the diﬀerentiability of Ψ.
4 See chapter 3 for a further discussion.
A Brief Introduction to Quantum Mechanics
We have simpliﬁed things somewhat. Given ω, the operator Ω may be ambiguous.
As an example consider the dynamical quantity ω = xp for a particle in one dimension.
Since X and P do not commute, we cannot tell whether to prescribe the operator XP or PX,
and the physical results are not equivalent when using the diﬀerent operators. In
addition, neither operator is Hermitian. The remedy is most often to use the so-called
Weyl product

Ω = (XP + PX)/2
instead, but there is no universal recipe in more general cases where this fails. We will
not elaborate any more on this, but for a more thorough discussion, see Ref. .
The observables Ω described above have a classical counterpart. But quantum particles have degrees of freedom that have no classical meaning. In this part of Hilbert
space we also have linear Hermitian operators, and these cannot be interpreted classically. Examples of such observables are the spin operators that are used to measure
the direction of the intrinsic spin of spin-1/2 particles.
The third postulate concerns (ideal) quantum mechanical measurements of observables. The meaning of an ideal measurement is delicate, see page 122 of Ref. , and
we will only consider them in a mathematical and in an operational way, without
discussing the deeper physical meanings of them, such as if ideal measurements are
possible at all.
Postulate 3
The only possible values obtainable in an (ideal) measurement of Ω are its eigenvalues
ωn. They have a probability

P(ωn) ∝ (Ψ, ΠnΨ)   (1.4)
of occurring, with Πn the projection operator onto the eigenspace of ωn . Immediately
after a measurement of Ω, the quantum state collapses into Πn Ψ.
This postulate contains a great deal of information, vital in order to understand how
quantum mechanics works. We assume for simplicity that the spectrum ωn of Ω is
discrete.
First of all we have the collapse of the state vector. When we have obtained a value
ωn for the observable Ω the state of the particle changes into Πn Ψ, the projection of
Ψ onto the eigenspace of ωn . Contrary to a classical system, measuring an observable
quantity interferes with the system, and it is inevitable as well. This is perhaps the
most fundamental diﬀerence between classical physics and quantum physics.
An implication of this in terms of the double-slit experiment, is that we cannot follow
the particle’s trajectory through the slits. Indeed, there is no such thing as a trajectory.
When we measure the position we destroy the wave-pattern that the interaction with
the slits created, and the particle will never reach the screen and contribute to the
distribution dictated by the same wave-pattern. Thus, we cannot observe both the
particle nature and the wave nature at the same time – an important fact.
But what about the proportionality factor in Eqn. (1.4)? This is where we ﬁnd out
about which vectors in H correspond to distinct physical states.
Since the only obtainable values for Ω are the ωn s, we must have

∑n P(ωn) = 1.
That is, the total probability of getting one of the eigenvalues when performing a
measurement is unity.
Since Ω is an Hermitian operator, it has a complete set of orthonormal eigenvectors
Φn so that we may expand Ψ in this basis, viz.,

Ψ = ∑n (Φn, Ψ)Φn.
For simplicity we assume that we have no degeneracy of the eigenvalues ωn , that is
they are all distinct. Since Φn are orthonormal, we get
Πn Ψ = (Φn , Ψ)Φn ,
and

1 = ∑n P(ωn) = C ∑n (Ψ, ΠnΨ) = C (Ψ, ∑n (Φn, Ψ)Φn) = C(Ψ, Ψ).

For this to hold, we must choose C = ‖Ψ‖⁻². In that case, we have an explicit formula for P(ωn):

P(ωn) = (Ψ, ΠnΨ)/(Ψ, Ψ).
Scaling Ψ with some arbitrary nonzero complex number α, we see that the probabilities
are all conserved. This must hold for every observable, and thus the probability distribution of every possible observable is conserved when we scale Ψ. This is why each
direction in Hilbert space corresponds to a distinct physical state, rather than each individual vector.5 We simply cannot observe any difference among the different vectors
along the ray.
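In a finite-dimensional Hilbert space the bookkeeping of Postulate 3 can be verified directly. A Python sketch, where the Hermitian matrix Omega and the state Psi are random illustrations rather than physical observables:

```python
import numpy as np

# Measurement probabilities: P(omega_n) = |(Phi_n, Psi)|^2 / (Psi, Psi),
# built from the eigendecomposition of a Hermitian "observable" matrix.
rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
Omega = A + A.conj().T                      # Hermitian observable

Psi = rng.normal(size=4) + 1j * rng.normal(size=4)

omega, Phi = np.linalg.eigh(Omega)          # eigenvalues, orthonormal eigenvectors
c = Phi.conj().T @ Psi                      # components (Phi_n, Psi)
P = np.abs(c)**2 / np.vdot(Psi, Psi).real   # P(omega_n)

print(P.sum())                              # ≈ 1.0: total probability is unity

# Rescaling Psi by any nonzero alpha leaves every P(omega_n) unchanged:
alpha = 2.5 - 1.3j
Psi2 = alpha * Psi
P2 = np.abs(Phi.conj().T @ Psi2)**2 / np.vdot(Psi2, Psi2).real
print(np.allclose(P, P2))                   # True
```

The second check makes the ray structure concrete: probabilities depend on the direction of Ψ only.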
We only have a probabilistic way of forecasting what value an observable Ω will
have if we measure it. The double-slit experiment demonstrated this clearly when we
measured the position of each particle.
We have assumed here that the spectrum of Ω is discrete. This is not always the
case, and in the case of a continuous spectrum, we have to change our formulation a
little bit: The only obtainable values become ω(x), where x is a parameter in some
interval (instead of a discrete index), and P (ω(x)) becomes a probability density instead
of a probability.
So far we have discussed the state Ψ of a particle and the description of observable
quantities. What is left is the time development of the system, and this is exactly what
is given in the fourth postulate.
Postulate 4
The time development of the quantum state Ψ is given by the time dependent Schrödinger equation, viz.,

iħ ∂Ψ(t)/∂t = HΨ(t).   (1.5)
We may interpret this postulate for the double-slit example. Eqn. (1.5) tells us in
what way the particle-wave will move through the slits. Compare the Schrödinger
equation with Hamilton’s equations of motion and Newton’s second law, which are
two equivalent classical counterparts of the fourth postulate. They dictate the unique
evolution in time of the state of a material system.
Since Eqn. (1.5) is a ﬁrst order diﬀerential equation, knowledge of the wave Ψ at
one instant makes us able to forecast the evolution at all times, past and future. We
may demonstrate this explicitly. Note that when we describe the state Ψ as a function
of spin and the spatial coordinates, Eqn. (1.5) becomes a coupled set of partial differential equations of second order in space. We will investigate (partial and ordinary)
diﬀerential equations later in chapter 3 and 4.
We will assume that H is independent of time. Given Ψ(t1 ), suppose we want to
ﬁnd Ψ(t2 ). Even though Ψ(t) is a vector, we may create a Taylor expansion of Ψ(t2 )
5 There is also a reason connected to the fourth postulate, on time development of Ψ. Does αΨ develop differently in time compared to Ψ? The answer is negative due to the Schrödinger equation being linear.
around t = t1:6

Ψ(t2) = ∑_{n=0}^{∞} (Δt^n / n!) ∂^nΨ/∂t^n |_{t=t1},

where Δt = t2 − t1. But according to Eqn. (1.5),

∂Ψ(t)/∂t |_{t=t1} = −(i/ħ) HΨ(t1),

so that our Taylor expansion becomes

Ψ(t2) = ∑_{n=0}^{∞} (1/n!) (−Δt (i/ħ) H)^n Ψ(t1) = exp(−Δt (i/ħ) H) Ψ(t1),
where we have used the deﬁnition of the exponential of an operator.
Defining the so-called propagator, or time evolution operator,

U(t2, t1) = exp(−(i/ħ)(t2 − t1)H),   (1.6)
we may write
Ψ(t2 ) = U(t2 , t1 )Ψ(t1 ).
In other words we have found a linear operator U that solves the time dependent Schrödinger equation, in the sense that it takes the quantum state Ψ(t1) into the state Ψ(t2) at time t2, at least formally.
Note that since H is Hermitian, that is H† = H, we have

U†(t2, t1) = exp((i/ħ)(t2 − t1)H) = U(t1, t2) = U(t2, t1)⁻¹,
so that U is a unitary operator. Alternatively, we may note that the adjoint is simply
a Taylor series backwards in time, and thus unitarity follows immediately.
For time dependent Hamiltonians things are a little bit more complicated, and we
defer its discussion until appendix A. The operator U is still a unitary operator from
the Taylor expansion argument.
An important implication of the unitarity of the propagator is that the norm of Ψ(t) is conserved in time, viz.,

‖Ψ(t2)‖² = ‖U(t2, t1)Ψ(t1)‖² = (Ψ(t1), U†(t2, t1)U(t2, t1)Ψ(t1)) = ‖Ψ(t1)‖².
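In finite dimensions the propagator (1.6) can be built explicitly from the eigendecomposition of H, and its unitarity checked numerically. A sketch with ħ = 1 and a random Hermitian stand-in for the Hamiltonian:

```python
import numpy as np

# Propagator U = exp(-i dt H) (hbar = 1) via the eigendecomposition of a
# Hermitian H; verify unitarity and conservation of the norm.
rng = np.random.default_rng(1)
B = rng.normal(size=(5, 5)) + 1j * rng.normal(size=(5, 5))
H = B + B.conj().T                         # Hermitian "Hamiltonian" (illustrative)

dt = 0.1
E, V = np.linalg.eigh(H)                   # H = V diag(E) V^dagger
U = V @ np.diag(np.exp(-1j * dt * E)) @ V.conj().T

print(np.allclose(U.conj().T @ U, np.eye(5)))  # True: U is unitary

Psi = rng.normal(size=5) + 1j * rng.normal(size=5)
print(np.isclose(np.linalg.norm(U @ Psi), np.linalg.norm(Psi)))  # True
```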
Conservation of norm is equivalent to conservation of probability. Recall that we
may decompose a state Ψ into its components along an orthonormal basis, viz.,
Ψ = ∑n cn Φn,
where the Φn may be precisely the eigenvectors of some observable Ω, again assumed
to be non-degenerate and discrete for simplicity. Then, the probability of obtaining
the eigenvalue ωn from a measurement of Ω is
P(ωn) = (Ψ, ΠnΨ)/‖Ψ‖² = |cn|²/‖Ψ‖² = |(Φn, Ψ)|²/‖Ψ‖²,

6 This requires, of course, that the solution to the differential equation exists, which we silently assume here.
and all these add up to unity, viz.,
∑n P(ωn) = ∑n |cn|²/‖Ψ‖² = ‖Ψ‖⁻² ∑n |cn|² = ‖Ψ‖⁻² ∑n (Ψ, Φn)(Φn, Ψ)
         = ‖Ψ‖⁻² (Ψ, ∑n Πn Ψ) = ‖Ψ‖⁻² ‖Ψ‖² = 1.
We have used that the eigenvectors of Ω constitute a basis, that is the sum of all the
projection operators on the eigenspaces is the identity, viz.,

∑n Πn = 1.
Thus, when ‖Ψ‖ is conserved, so is the total probability of obtaining one of the eigenvalues ωn when measuring Ω.
In short, unitarity of U(t2 , t1 ) allows for probabilistic interpretation of Ψ.
1.4 More on the Solution of the Schrödinger Equation
In this rather compact section we will take a closer look at the solution of the Schrödinger equation (1.5) for time independent Hamiltonians, i.e., where the Hamiltonian
does not contain an explicit dependence on time, viz.,
∂H/∂t ≡ 0.
Since H is Hermitian, we may find a basis of H consisting of eigenvectors of H, viz.,

HΦn = EnΦn,   (1.7)
and we assume for simplicity that the energy spectrum (i.e., the eigenvalues of H) is discrete and non-degenerate. Eqn. (1.7) is referred to as the time independent
Schrödinger equation. We expand the wave function Ψ(t) in this basis, viz.,

Ψ(t) = ∑n cn(t)Φn.

Inserting this expansion into the time dependent Schrödinger equation yields

iħ ∑n (dcn/dt) Φn = ∑n En cn(t) Φn.
Both sides represent the same element of H expanded in a basis. Hence, the terms must be equal as well, i.e.,

iħ dcn/dt = En cn(t),

which implies

cn(t) = cn(0) e^(−iEn t/ħ).
The solution to the time dependent Schrödinger equation then reads

Ψ(t) = ∑n cn(0) e^(−iEn t/ħ) Φn,   (1.8)

where Ψ(0) = ∑n cn(0)Φn.
The coefficients cn along the energy basis rotate with angular frequency En/ħ. The
magnitude |cn | is easily seen to be constant, and hence the probability of ﬁnding the
system in the state Φn remains constant. It is then easy to see why Φn is called a
stationary state. If Ψ = Φn for some n the system remains in the eigenstate at all t.
Notice that the solution (1.8) is simply the application of the time evolution operator from Eqn. (1.6) when expressed in a basis of eigenvectors of H, i.e., when H is diagonal, viz.,

Ψ(t) = e^(−(i/ħ)tH) Ψ(0).
A consequence of Eqn. (1.8) is that solving the time dependent Schrödinger equation
for time independent Hamiltonians is equivalent to diagonalizing H; that is, to finding its eigenvalues and eigenvectors. This may be looked upon as a simpler
to ﬁnding its eigenvalues and eigenvectors. This may be looked upon as a simpler
problem than solving the original Schrödinger equation (1.5), and indeed in light of
Eqn. (1.8) it is not surprising that ﬁnding an algorithm for solving the Schrödinger
equation numerically for time dependent Hamiltonians is more diﬃcult than for time
independent ones.
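This equivalence is easy to see numerically: diagonalize H once, rotate the coefficients as in Eqn. (1.8), and compare with the propagator. A sketch (ħ = 1; H and Ψ(0) are random illustrations):

```python
import numpy as np

# Solve the time dependent Schroedinger equation by diagonalization:
# expand Psi(0) in eigenvectors of H, evolve c_n(t) = c_n(0) exp(-i E_n t).
rng = np.random.default_rng(2)
B = rng.normal(size=(6, 6)) + 1j * rng.normal(size=(6, 6))
H = B + B.conj().T
E, Phi = np.linalg.eigh(H)                 # time independent equation (1.7)

Psi0 = rng.normal(size=6) + 1j * rng.normal(size=6)
c0 = Phi.conj().T @ Psi0                   # c_n(0) = (Phi_n, Psi(0))

t = 0.7
ct = c0 * np.exp(-1j * E * t)              # coefficients rotate, |c_n| constant
Psi_t = Phi @ ct                           # Psi(t) from Eqn. (1.8)

# Same state from the propagator exp(-i t H):
U = Phi @ np.diag(np.exp(-1j * t * E)) @ Phi.conj().T
print(np.allclose(Psi_t, U @ Psi0))        # True
print(np.allclose(np.abs(ct), np.abs(c0))) # True: stationary probabilities
```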
1.5 Important Consequences of the Postulates
The postulates together with the facts of linear algebra in Hilbert spaces lay the foundation for quantum mechanics, and now is the time for gaining some physical insight
based on the postulates. The goal of this section is to present some important results
whose contents are vital for understanding and working with quantum mechanics.
1.5.1 Quantum Measurements
The second and third postulate tell us about observables in quantum mechanics. When
measuring an observable ideally, in a sense that we shall not deﬁne, the quantum state
collapses onto the eigenvector (or rather onto the eigenspace) corresponding to the
eigenvalue found. Thus, we destroy our perhaps painstakingly constructed quantum
system in the process.
As we have noted, quantum mechanics is probabilistic in nature. Some things we
do not know for certain, only with a certain probability. Measuring a quantum state
destroys it, making further investigations useless. It is clear that statistical methods
will come in handy.
In statistics, if we have some quantity A taking on the values An , where n is some
index, with some probability distribution Pn, then the expectation value ⟨A⟩ is defined as

⟨A⟩ := ∑n Pn An.

When performing a very large sequence of experiments, each obtaining one of the values An, we will on average get the value ⟨A⟩.
Applying this to a quantum observable Ω and its eigenvalues with their probabilities, we get, when we assume ‖Ψ‖ = 1,

⟨Ω⟩ = ∑n P(ωn)ωn = ∑n (Ψ, Φn)(Φn, Ψ)ωn = ∑n (Ψ, ΩΦn)(Φn, Ψ) = (Ψ, ΩΨ),

where we have used that the eigenvectors Φn constitute an orthonormal basis for our Hilbert space. If Ψ is not normed to unity, then we must introduce an additional factor ‖Ψ‖⁻², as is easily seen.
Theorem 1
The expectation value of some observable A in the state Ψ is given as

⟨A⟩ = (Ψ, AΨ)/(Ψ, Ψ).
Although we have only shown this for a discrete, non-degenerate spectrum, it also
holds in general for continuous and degenerate spectra, the key reason being the orthonormality of the eigenvectors and the fact that they span the whole Hilbert space
H.
Finding the average value of an observable thus amounts to applying the observable
(i.e., its operator representation) to the state, and projecting the result back onto the
state. This is the best prediction we can make of the outcome of a measurement.
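Theorem 1 is also easy to spot-check in finite dimensions: the quadratic form (Ψ, AΨ)/(Ψ, Ψ) must agree with the probability-weighted sum of eigenvalues. A sketch with illustrative random data:

```python
import numpy as np

# Expectation value two ways: (Psi, A Psi)/(Psi, Psi) versus sum_n P(a_n) a_n.
rng = np.random.default_rng(3)
M = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
A = M + M.conj().T                         # Hermitian observable (illustrative)
Psi = rng.normal(size=4) + 1j * rng.normal(size=4)

a, Phi = np.linalg.eigh(A)                 # eigenvalues and orthonormal eigenvectors
P = np.abs(Phi.conj().T @ Psi)**2 / np.vdot(Psi, Psi).real

lhs = (np.vdot(Psi, A @ Psi) / np.vdot(Psi, Psi)).real
print(np.isclose(lhs, np.sum(P * a)))      # True
```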
1.5.2 Sharpness and Commuting Observables
When we measure an observable Ω the state collapses onto the eigenspace of the obtained value ωn . If we try to measure the same observable immediately after the
previous measurement, we will with certainty get ωn as the result, because by the third postulate we know that Ψ = Φn, the eigenstate corresponding to ωn. We say that ωn
is a sharply determined value of Ω in the state Ψ. In other words, the probability is 1
that we ﬁnd ωn when measuring the observable for the state Ψ.
If the system in consideration is in an eigenstate of an observable Ω, then we know
with certainty that a measurement will yield the corresponding eigenvalue. Conversely,
if we want P(ωn) = 1, then (Φm, Ψ) = 0 for m ≠ n. Thus, having a sharp eigenvalue
of some observable for a system is equivalent to the system being in an eigenstate of
the observable.
An important question arises, and indeed it is one of the most important: What
conditions must be fulﬁlled for two observables to have sharply determined values at
the same time?
Theorem 2
A suﬃcient and necessary condition for two observables A and B to have a common
set of orthonormal eigenvectors is that A and B commute, i.e., if
[A, B] := AB − BA = 0.
Proof: Again we show this for a discrete spectrum, but this time degeneracy is also
included.
An immediate requirement for two observables to have sharp values in the same
state at the same time is that the observables must have common eigenvectors, because
a sharp value is only obtained in the case of the system being in an eigenstate.
Assume that an and bn are the eigenvalues of A and B, respectively, and that Φn are the corresponding eigenvectors of both operators:

AΦn = anΦn,
BΦn = bnΦn.
Operating on the ﬁrst relation with B and the second with A, and subtracting yields
(AB − BA)Φn = (an bn − bn an )Φn = 0,
and since the eigenvectors Φn constitute a basis for H, we get

[A, B] = 0.
For the converse, assume that [A, B] = 0, and assume that we are given the eigenvalues and eigenvectors of A, viz.,
AΦn = an Φn .
Using commutativity we get
A(BΦn ) = B(AΦn ) = an (BΦn ),
implying that BΦn is an eigenvector of A with the eigenvalue an .
Assume that an is a non-degenerate eigenvalue. Then BΦn must be some scalar
multiple of Φn , because the eigenspace has only dimension one. Let us call this scalar
bn , viz.,
BΦn = bn Φn ,
and we are done.
Assume then that an is degenerate, and that the eigenspace has (infinite or finite) dimension g. Assume that Φn^m, m = 1, . . . , g, is a basis for the eigenspace. BΦn^m must be in the eigenspace and thus be a linear combination of the basis vectors, viz.,

BΦn^m = ∑i M_mi Φn^i.
i
This vector is not necessarily an eigenvector of B. But a linear combination Ψ might
be, viz.,
Ψ = ∑i ci Φn^i.   (1.9)
We assume that Ψ is an eigenvector for B with eigenvalue b, viz.,
BΨ = bΨ,
and writing out the left hand side, we get

BΨ = B ∑i ci Φn^i = ∑i ∑j ci M_ij Φn^j.

On the right hand side we have

bΨ = b ∑j cj Φn^j.
Since the vectors Φn^m are linearly independent, the right and left hand sides must be equal term by term as well, viz.,

∑i M_ij ci = b cj,

which is nothing more than the jth component of the matrix equation

M c = b c.
This is a new eigenvalue problem of dimension g. But since B is Hermitian, so is the
g × g matrix M , and thus we may ﬁnd g eigenvalues bj and eigenvectors c. This gives
coeﬃcients ci for Eqn. (1.9), and thus we have found g (orthonormal) eigenvectors of
B which also are eigenvectors of A. We have found that A and B have a common set
of eigenvectors, and we are finished. Note that even though the Φn^m are eigenvectors corresponding to the same eigenvalue an for A, the eigenvalues bj found for B may be different.
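A numeric illustration of Theorem 2: two Hermitian matrices built on the same (randomly chosen, illustrative) set of eigenvectors commute, and in the non-degenerate case diagonalizing one automatically diagonalizes the other:

```python
import numpy as np

# Two Hermitian matrices with a common eigenvector set commute, and the
# eigenvectors of one (non-degenerate) also diagonalize the other.
rng = np.random.default_rng(4)
M = rng.normal(size=(5, 5)) + 1j * rng.normal(size=(5, 5))
V, _ = np.linalg.qr(M)                     # random unitary: common eigenvectors

a = rng.normal(size=5)                     # generic, distinct eigenvalues
b = rng.normal(size=5)
A = V @ np.diag(a) @ V.conj().T
B = V @ np.diag(b) @ V.conj().T

print(np.allclose(A @ B - B @ A, 0))       # True: [A, B] = 0

_, Phi = np.linalg.eigh(A)                 # eigenvectors of A
D = Phi.conj().T @ B @ Phi                 # B in that basis
print(np.allclose(D, np.diag(np.diag(D)))) # True: D is diagonal
```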
1.5.3 Uncertainty Relations: Heisenberg’s Uncertainty Principle
The converse of sharpness is that Ψ is not an eigenstate for our observable A, and thus
there is some probability that we will get another value than an when measuring the
observable. As a measure of the degree of sharpness we may use the standard deviation
ΔA, viz.,

ΔA := √(⟨A²⟩ − ⟨A⟩²).   (1.10)
For an eigenstate Φn of A, we trivially get
∆A2 = (Φn , A2 Φn ) − (Φn , AΦn )2 = 0,
so we have zero uncertainty in the case of a sharp value for the observable A.
If A has standard deviation ∆A (in a given state), what is the optimal standard
deviation ∆B for another observable B? The question has fundamental importance,
and the answer is one of the most striking facts of quantum mechanics.
Theorem 3
Given two observables A and B, we have the uncertainty relation

ΔAΔB ≥ (1/2)|⟨[A, B]⟩|.   (1.11)
Proof: The standard text-book proof is a rather elegant one. Deﬁne two new operators
Â and B̂ by:
Â = A − ⟨A⟩1, and B̂ = B − ⟨B⟩1.
Of course, Â and B̂ are also Hermitian. Note that [Â, B̂] = [A, B]. Let Ψ be an
arbitrary vector in H, and deﬁne
H = Â + iαB̂,
where α ∈ R is arbitrary. We must have

‖HΨ‖² = (Ψ, H†HΨ) ≥ 0,

by definition of the norm. Calculating the product H†H yields

H†H = Â² + iα[Â, B̂] + α²B̂²,

and thus

‖HΨ‖² = ⟨Â²⟩ + iα⟨[Â, B̂]⟩ + α²⟨B̂²⟩ ≥ 0.
Note that we must have i⟨[Â, B̂]⟩ ∈ R since the norm is a real number.7 We then choose

α = −i⟨[Â, B̂]⟩ / (2⟨B̂²⟩).

Insertion into the norm gives us

⟨Â²⟩ ≥ −⟨[Â, B̂]⟩² / (4⟨B̂²⟩).
When noting that ⟨Â²⟩ = ΔA² and similarly for ΔB, we immediately get the desired result (1.11). A very important special case is the Heisenberg uncertainty principle:

ΔXi ΔPj ≥ (ħ/2) δij.   (1.12)

7 It is easy to show that [A, B]† = −[A, B], and thus i[A, B] is Hermitian.
This relation states that if the particle is localized to degree ∆Xi in the ith spatial direction, then the corresponding momentum is sharp to a degree limited by the Heisenberg
uncertainty principle.
Contrast this result to the classical principle depicted in Fig. 1.2. The classical
particle oﬀers complete information of its state at all times through the coordinates
and momenta. It has ∆Xi · ∆Pi = 0 in all situations. The quantum particle behaves
however in a more complex way. A quantum particle localized perfectly like this will
have inﬁnite ∆Pi !
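The inequality (1.11) holds for any state and any pair of Hermitian operators, which makes it easy to spot-check numerically. A sketch with random illustrative matrices (ħ does not appear here; it would enter only through a specific commutator such as [X, P] = iħ):

```python
import numpy as np

# Spot-check of Delta A * Delta B >= |<[A, B]>| / 2 for a random state.
rng = np.random.default_rng(5)

def hermitian(n):
    M = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return M + M.conj().T

A, B = hermitian(4), hermitian(4)
Psi = rng.normal(size=4) + 1j * rng.normal(size=4)
Psi /= np.linalg.norm(Psi)                 # normalize the state

def expect(Op):
    return np.vdot(Psi, Op @ Psi)

dA = np.sqrt(expect(A @ A).real - expect(A).real**2)   # Delta A, Eqn. (1.10)
dB = np.sqrt(expect(B @ B).real - expect(B).real**2)
comm = expect(A @ B - B @ A)               # <[A, B]>, purely imaginary

print(dA * dB >= 0.5 * abs(comm))          # True
```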
1.5.4 Ehrenfest’s Theorem
We will state and prove the so-called Ehrenfest’s theorem, which in some sense connects
the time development of the expectation values of Xi and Pi to Hamilton’s equations
and the movement of a classical particle.
But ﬁrst we need a more general result concerning time dependencies of expectation
values.
Theorem 4
Given an observable A, its expectation value develops in time according to the differential equation

d⟨A⟩/dt = (i/ħ)⟨[H, A]⟩ + ⟨∂A/∂t⟩.   (1.13)
The last term accounts for the explicit dependence on time of A.
Proof: We begin by noting that with an expression such as
(Ψ, AΨ),
we may use the familiar product rule for differentiation (with respect to t). This can be done since we may expand both operators and states in Taylor series in time,8 viz.,

Ψ(t + Δt) = Ψ(t) + Δt ∂Ψ(t)/∂t + O(Δt²)
and A(t + Δt) = A(t) + Δt ∂A(t)/∂t + O(Δt²).

Using these expansions when differentiating for example AΨ we get
∂/∂t (A(t)Ψ(t)) = lim_{Δt→0} (1/Δt) (A(t + Δt)Ψ(t + Δt) − A(t)Ψ(t))
               = lim_{Δt→0} (1/Δt) ((A(t) + Δt ∂A/∂t)(Ψ(t) + Δt ∂Ψ/∂t) − A(t)Ψ(t))
               = (∂A(t)/∂t) Ψ(t) + A(t) ∂Ψ(t)/∂t.
If we are able to expand some time dependent quantity in Taylor series, this argument
will hold.
Thus, we get

d⟨A⟩/dt = ∂/∂t (Ψ, AΨ) = (∂Ψ/∂t, AΨ) + (Ψ, (∂A/∂t)Ψ) + (Ψ, A ∂Ψ/∂t)
        = (−(i/ħ)HΨ, AΨ) + (Ψ, −(i/ħ)AHΨ) + ⟨∂A/∂t⟩
        = (i/ħ)(Ψ, HAΨ) − (i/ħ)(Ψ, AHΨ) + ⟨∂A/∂t⟩
        = (i/ħ)⟨[H, A]⟩ + ⟨∂A/∂t⟩.
8 Again we are assuming that the solution Ψ(t) exists.
This completes the proof.

The important special case named Ehrenfest's theorem concerns the operators Xi and Pi in particular.
Theorem 5
Assume H = L2(R3), that is, we are working with square integrable functions and without spin degrees of freedom. Assume that ∇²Ψ ∈ H. Assume that the Hamiltonian is given by

H = P²/(2m) + V(X),

where V is a differentiable function of X. Then

d⟨Xi⟩/dt = (1/m)⟨Pi⟩   (1.14)

and

d⟨Pi⟩/dt = −⟨∂V/∂Xi⟩.   (1.15)
Proof: We begin by computing a commutator:

[H, Xi] = [P²/(2m) + V, Xi] = (1/(2m))[P², Xi] = (1/(2m))[Pi², Xi]
        = (1/(2m))(Pi[Pi, Xi] + [Pi, Xi]Pi) = (ħ/(im)) Pi.
This gives us Eqn. (1.14) upon insertion into Eqn. (1.13).
Next, we compute another commutator:

[H, Pi] = [V, Pi] = −iħ [V, ∂/∂Xi] = iħ ∂V/∂Xi,

when we regard the operators as operating on functions. This immediately gives
Eqn. (1.15), when we insert the commutator into Eqn. (1.13).

In other words: Hamilton's equations of motion (1.2) are satisfied for the expectation values. This is not as strong a link to classical theory as one might think. Consider a particle with a high degree of localization. Then we may take ΔXi ≈ 0. In addition, if the particle has low speed (or high mass) we may take ΔPi ≈ 0. The distributions for position and velocity then exhibit the features of the state of a classical particle. Furthermore, the expectation value on the right hand side of Eqn. (1.15) becomes simply a
sampling of the gradient of V at the particle's now well-defined position. For a general quantum particle one must sample the gradient of the potential over the whole of space, and not locally as in Newtonian mechanics. This is a very big difference. Similar results exist regarding semiclassical electrodynamics, as considered in section 1.6. The expectation values do not sample the electromagnetic forces locally through a gradient or a curl, but through an integration of these over the whole of space.
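Theorem 4 itself (for a time independent A, so the last term vanishes) can be spot-checked by differentiating ⟨A⟩ numerically along the exact propagator. A sketch with ħ = 1 and random illustrative H, A and Ψ:

```python
import numpy as np

# Check d<A>/dt = i <[H, A]> at t = 0 (hbar = 1, A time independent)
# by a central finite difference along the exact evolution.
rng = np.random.default_rng(6)

def hermitian(n):
    M = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return M + M.conj().T

H, A = hermitian(5), hermitian(5)
Psi = rng.normal(size=5) + 1j * rng.normal(size=5)
Psi /= np.linalg.norm(Psi)

E, V = np.linalg.eigh(H)
def evolve(psi, t):                        # psi(t) = exp(-i t H) psi
    return V @ (np.exp(-1j * E * t) * (V.conj().T @ psi))

def expA(psi):
    return np.vdot(psi, A @ psi).real

dt = 1e-5
numeric = (expA(evolve(Psi, dt)) - expA(evolve(Psi, -dt))) / (2 * dt)
exact = (1j * np.vdot(Psi, (H @ A - A @ H) @ Psi)).real

print(abs(numeric - exact) < 1e-4)         # True
```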
1.5.5 Angular Momentum
An important class of observables comprises the angular momentum operators. They
are deﬁned as three Hermitian operators J1 , J2 and J3 constituting a vector operator
J obeying the following commutation relations:

[J1, J2] = iħJ3,   [J2, J3] = iħJ1,   [J3, J1] = iħJ2.   (1.16)
In short, the three operators Ji obey
[Ji, Jj] = iħ εijk Jk,

where we have assumed Einstein's summation convention (summation over repeated indices), and where the Levi-Civita symbol εijk is equal to the sign of the permutation of the numbers 1, 2 and 3 if ijk is a permutation of these, and zero otherwise.
The angular momentum operators get their names from the fact that the observables associated with each of the three components of the classical orbital angular momentum obey the relations (1.16). The orbital angular momentum is given by

L := r × p = (y pz − z py) i + (z px − x pz) j + (x py − y px) k.   (1.17)
We will return to this later, when discussing the solution of the hydrogen atom.
Let us deﬁne J 2 as the operator given by
J² := Ji Ji = J1² + J2² + J3².
We call J 2 the square length (or square norm) of the angular momentum.
The components of the quantum mechanical angular momentum cannot be measured sharply at the same time, as we know from Theorem 3. For example,

ΔJ1 ΔJ2 ≥ (ħ/2)|⟨J3⟩|.
However, ħ is a very small number in macroscopic terms, actually essentially zero according to the correspondence principle. Thus, for the classical orbital angular momentum of Newtonian mechanics we may measure each component Li simultaneously with, in reality, perfect accuracy. However, there are angular momentum operators that are not derived from classical quantities, yielding important quantum mechanical effects. The intrinsic spin of particles such as electrons will display this, as we will see
later.
It is rather straightforward to show that J 2 commutes with all three Ji .
Theorem 6
Given three angular momentum operators Ji, i = 1, 2, 3, the square norm J² = Ji Ji obeys the following commutation relation:

[J², Ji] = 0,   i = 1, 2, 3.
Proof: We begin by writing out the first argument of the commutator, viz.,

[J², Ji] = [Jj Jj, Ji] = Jj Jj Ji − Ji Jj Jj = Jj Jj Ji − ([Ji, Jj] + Jj Ji)Jj
         = Jj [Jj, Ji] − iħ εijk Jk Jj = iħ εjik Jj Jk − iħ εijk Jk Jj.

In the last expression, we are summing over both j and k. Swapping their names in one of the terms makes the expression vanish identically, viz.,

iħ εjik Jj Jk = iħ εkij Jk Jj = iħ εijk Jk Jj.
We have used that εkij = εijk, since cycling the symbols does not change the sign of the permutation.

According to Theorem 2, J² has a common set of eigenvectors with each of the
components Ji . However, since the components by deﬁnition do not commute among
themselves, we cannot have a common set of eigenvectors for J 2 and more than one
component at the same time.
We may then have a sharp value of J 2 and for example J3 , but J1 and J2 will
not be sharply determined. Comparing this to the classical orbital momentum, this is
like saying that we may know the absolute value of the angular momentum and the
z-component, but not the x and y components.
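The smallest nontrivial realization of the relations (1.16) is spin-1/2, with Ji = σi/2 in units where ħ = 1 (the standard Pauli matrices). This also illustrates Theorem 6 and the sharp J² just discussed:

```python
import numpy as np

# Spin-1/2 angular momentum: J_i = sigma_i / 2 with the Pauli matrices (hbar = 1).
J1 = 0.5 * np.array([[0, 1], [1, 0]], dtype=complex)
J2 = 0.5 * np.array([[0, -1j], [1j, 0]])
J3 = 0.5 * np.array([[1, 0], [0, -1]], dtype=complex)

def comm(A, B):
    return A @ B - B @ A

print(np.allclose(comm(J1, J2), 1j * J3))    # True: [J1, J2] = i J3
Jsq = J1 @ J1 + J2 @ J2 + J3 @ J3
for Ji in (J1, J2, J3):
    print(np.allclose(comm(Jsq, Ji), 0))     # True: [J^2, J_i] = 0 (Theorem 6)

# j = 1/2: J^2 = j(j+1) = 3/4 on the whole two-dimensional space.
print(np.allclose(Jsq, 0.75 * np.eye(2)))    # True
```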
We will now state a very important and striking theorem concerning the angular
momentum operators. The theorem puts restrictions on the possible eigenvalues of Ji
and J 2 . Its proof rests solely on the deﬁning relations (1.16).
In the proof, we use a ladder operator technique very similar to that of the solution
of the harmonic oscillator in section 2.2 later. Thus it might be of interest to read
these sections in conjunction. The ladder operators are defined as

J+ := J1 + iJ2,
J− := J1 − iJ2.

Note that J− = J+†; they are Hermitian adjoints of each other.
Theorem 7
The square of the norm of the angular momentum, J 2 , has a common set of orthonormal
eigenvectors Φjm with J3. The eigenvalues of J² and J3 may only be given by

J²Φjm = j(j + 1)ħ² Φjm
and J3Φjm = mħ Φjm,

where j ≥ 0 is integral or half-integral, and where m takes the values

m = −j, −j + 1, . . . , j − 1, j.

Thus, m is also integral or half-integral, and the eigenvalues of J² are degenerate with degeneracy (at least) 2j + 1.
Proof: At this stage, j and m are arbitrary, and we will gradually reduce their freedom to the restrictions stated in the theorem.

First of all, j and m are dimensionless numbers, since ħ has the same dimension as Ji, as can be seen from Eqn. (1.16). Furthermore, we see that j(j + 1) ≥ 0, since J² is positive definite, and we may define j ≥ 0 since the equation

j(j + 1) = x

has a non-negative solution j for all x ≥ 0.
We claim that $-j \leq m \leq j$. We prove this by considering the norms of the vectors $\Psi_+$ and $\Psi_-$, which we define as
$$\Psi_\pm := J_\pm\Phi_{jm}.$$
The norm of $\Psi_\pm$ is easily calculated, using $J_\pm^\dagger = J_\mp$:
$$\|\Psi_\pm\|^2 = (\Phi_{jm}, J_\mp J_\pm\Phi_{jm}) = (\Phi_{jm}, (J_1 \mp iJ_2)(J_1 \pm iJ_2)\Phi_{jm})$$
$$= (\Phi_{jm}, (J_1^2 + J_2^2 \pm i[J_1, J_2])\Phi_{jm}) = (\Phi_{jm}, (J^2 - J_3^2 \mp \hbar J_3)\Phi_{jm})$$
$$= \hbar^2\big(j(j+1) - m(m \pm 1)\big)\,\|\Phi_{jm}\|^2.$$
Since the norm must be non-negative, we get
$$-j - 1 \leq m \leq j$$
and
$$-j \leq m \leq j + 1,$$
A Brief Introduction to Quantum Mechanics
for the plus and the minus sign, respectively. Hence,
$$-j \leq m \leq j,$$
as was claimed.
Next, we claim that m can only diﬀer from j by an integer, and that j is either
integral or half-integral. Equivalently, j is integral or half-integral, and so is m.
To prove this, we begin by noting that $J^2$ commutes with both $J_-$ and $J_+$, so
$$J^2\Psi_\pm = \hbar^2 j(j+1)\Psi_\pm,$$
and $\Psi_\pm$ is also an eigenvector of $J^2$ with the same eigenvalue as $\Phi_{jm}$. In other words, $\Psi_\pm$ lies in the same eigenspace of $J^2$ as $\Phi_{jm}$. We then operate with $J_3$ on $\Psi_\pm$:
$$J_3\Psi_\pm = J_3(J_1 \pm iJ_2)\Phi_{jm} = \big([J_3,J_1] + J_1J_3 \pm i[J_3,J_2] \pm iJ_2J_3\big)\Phi_{jm}$$
$$= \big(i\hbar J_2 + m\hbar J_1 \pm \hbar J_1 \pm im\hbar J_2\big)\Phi_{jm} = (m\pm1)\hbar\,(J_1 \pm iJ_2)\Phi_{jm} = (m\pm1)\hbar\,\Psi_\pm.$$
That is, $\Psi_\pm$ is an eigenvector of $J_3$ with eigenvalue $(m\pm1)\hbar$; that is,
$$\Psi_\pm = J_\pm\Phi_{jm} = C\,\Phi_{j(m\pm1)}.$$
By operating successively on a single $\Phi_{jm}$, we get a ladder of increasing and decreasing eigenvalues and corresponding eigenvectors, but these cannot have eigenvalues outside the range $-j, \ldots, j$. Observe that
$$|C| = \hbar\sqrt{j(j+1) - m(m \pm 1)},$$
so that the sequence terminates at $m = j$ and $m = -j$ for $J_+$ and $J_-$, respectively. For both sequences to terminate properly, we must have $j$ integral or half-integral, because this is the only way to make the difference between the maximum and minimum values of $m$ an integer. Our claim is proven, and in fact we have also proved our theorem.

To sum up: $j$ may only take the values
$$j = 0, \tfrac{1}{2}, 1, \tfrac{3}{2}, \ldots,$$
while $m$ ranges like
$$m = -j, -j+1, \ldots, j-1, j.$$
We see that the eigenvalue $j(j+1)\hbar^2$ of $J^2$ has a degeneracy of $2j+1$.⁹ For a fixed value of $j$, the eigenspace has dimension $2j+1$. This eigenspace has a basis of eigenvectors of each $J_i$, but since the $J_i$ do not commute, their eigenvectors are not the same. It is clear that if $j$ is fixed, we may describe the operators $J_i$ as square Hermitian matrices of dimension $2j+1$, once we have chosen a basis for the eigenspace.
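The matrix representation just mentioned is easy to realize concretely. The sketch below (not from the thesis; $\hbar = 1$ and the basis is ordered $m = j, j-1, \ldots, -j$) builds $J_1$, $J_2$ and $J_3$ for a given $j$ from the ladder-operator matrix elements of the proof, and verifies the defining relation $[J_1, J_2] = i\hbar J_3$ numerically:

```python
# Build the (2j+1)-dimensional matrices J1, J2, J3 from the ladder-operator
# matrix elements |C| = sqrt(j(j+1) - m(m+1)) (hbar = 1), and verify the
# commutation relation [J1, J2] = i*J3.
from math import sqrt

def j_matrices(j):
    d = int(round(2 * j)) + 1                 # dimension 2j + 1
    ms = [j - k for k in range(d)]            # basis ordered m = j, j-1, ..., -j
    J3 = [[0j] * d for _ in range(d)]
    Jp = [[0j] * d for _ in range(d)]
    for a in range(d):
        J3[a][a] = complex(ms[a])
        if a + 1 < d:                         # <m+1| J+ |m>, with m = ms[a+1]
            m = ms[a + 1]
            Jp[a][a + 1] = sqrt(j * (j + 1) - m * (m + 1))
    Jm = [[Jp[b][a].conjugate() for b in range(d)] for a in range(d)]  # J- = J+^dagger
    J1 = [[(Jp[a][b] + Jm[a][b]) / 2 for b in range(d)] for a in range(d)]
    J2 = [[(Jp[a][b] - Jm[a][b]) / 2j for b in range(d)] for a in range(d)]
    return J1, J2, J3

def matmul(A, B):
    n = len(A)
    return [[sum(A[a][k] * B[k][b] for k in range(n)) for b in range(n)] for a in range(n)]

def commutator(A, B):
    AB, BA = matmul(A, B), matmul(B, A)
    return [[AB[a][b] - BA[a][b] for b in range(len(A))] for a in range(len(A))]

for j in (0.5, 1, 1.5, 2):
    J1, J2, J3 = j_matrices(j)
    C = commutator(J1, J2)
    n = len(C)
    err = max(abs(C[a][b] - 1j * J3[a][b]) for a in range(n) for b in range(n))
    print("j =", j, " max|[J1,J2] - i J3| =", err)
```

The same check passes for any integral or half-integral $j$, a direct numerical confirmation of the construction in Theorem 7.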
It is rather surprising that we may get so much information just from the commutation relations (1.16), and Theorem 7 has a wide range of applications: from finding the energy eigenstates of the hydrogen atom and other spherically symmetric systems to the theory of the eigenspin of elementary particles such as electrons, the latter of which we turn to right now.
⁹ "At least $2j+1$" is more correct to say; it may happen that the eigenvalues of $J_3$ are again degenerate.
1.5 – Important Consequences of the Postulates
1.5.6 Eigenspin of the Electron
A surprising feature of the electron theory is the necessity of introducing an additional
degree of freedom called eigenspin or just spin. As mentioned in section 1.3, the Hilbert
spaces for electrons, protons, neutrons and many other particles are not only composed
of L2 (R3 ), but also Cn , the latter ascribed to the spin degrees of freedom.
One of the experimental facts that led to the discovery of electron spin was the famous Stern-Gerlach experiment, in which silver atoms were sent through an inhomogeneous magnetic field. If the electrons were spinning in a classical sense, the atoms would be deflected according to a continuous distribution of spin orientations. It turned out, however, that the deflected atoms were concentrated into two spots, indicating a discrete nature of the eigenspin. See for example Ref.  for a complete account of the eigenspin discovery.
The eigenspin is described through angular momentum operators acting on vectors in $\mathbb{C}^n$. For a particle of spin $s$ we have the spin operator $S$, and $S^2$ has by definition only one eigenvalue, namely $\hbar^2 s(s+1)$. The corresponding eigenspace is then $n = 2s+1$ dimensional, according to Theorem 7, and we may choose $\mathbb{C}^n$ as the Hilbert space connected to eigenspin. The spin components become spin matrices, and we choose $S_3$ to be diagonal. In other words, the eigenvectors of $S_3$ are chosen as a basis for the spin part of Hilbert space, and the spin matrices are described in this representation.
Let us turn to the spin-1/2 particles, such as the electron. Since $s = 1/2$ we get $\mathbb{C}^2$ as our Hilbert space. Let $S_3$ be diagonal and write
$$\chi_+ := \begin{pmatrix}1\\0\end{pmatrix} \qquad \text{and} \qquad \chi_- := \begin{pmatrix}0\\1\end{pmatrix}.$$
Defining $S_3$ as the $2\times2$ matrix
$$S_3 = \frac{\hbar}{2}\begin{pmatrix}1 & 0\\ 0 & -1\end{pmatrix}$$
makes $\chi_+$ have an $S_3$ eigenvalue of $+\hbar/2$ and $\chi_-$ an eigenvalue of $-\hbar/2$, viz.,
$$S_3\chi_+ = \frac{\hbar}{2}\chi_+ \qquad \text{and} \qquad S_3\chi_- = -\frac{\hbar}{2}\chi_-.$$
Taking $S_3$ as diagonal is just a conventional choice: later, when discussing the hydrogen atom, we use spherical coordinates, in which the z-axis has a special meaning. In addition, the interactions of both the orbital momentum and the eigenspin with a magnetic field become simpler to describe mathematically if we direct the field along the diagonal axis of spin, see chapter 3.
Using the raising and lowering operators of Theorem 7, we may find $S_1$ and $S_2$ as well:
$$S_1 = \frac{\hbar}{2}\begin{pmatrix}0 & 1\\ 1 & 0\end{pmatrix} \qquad \text{and} \qquad S_2 = \frac{\hbar}{2}\begin{pmatrix}0 & -i\\ i & 0\end{pmatrix}.$$
The matrices are easily seen to obey Eqn. (1.16).
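Since the text claims the matrices obey Eqn. (1.16), a small self-contained check (with $\hbar = 1$; an illustration, not part of the thesis) confirms the algebra and the eigenvalue statements:

```python
# Check that S1, S2, S3 (hbar = 1) satisfy [S1,S2] = i S3 cyclically, and that
# chi_+ and chi_- are S3 eigenvectors with eigenvalues +1/2 and -1/2.
S1 = [[0, 0.5], [0.5, 0]]
S2 = [[0, -0.5j], [0.5j, 0]]
S3 = [[0.5, 0], [0, -0.5]]
chi_p, chi_m = [1, 0], [0, 1]

def matmul2(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def comm(A, B):
    AB, BA = matmul2(A, B), matmul2(B, A)
    return [[AB[i][j] - BA[i][j] for j in range(2)] for i in range(2)]

def matvec(M, v):
    return [M[i][0] * v[0] + M[i][1] * v[1] for i in range(2)]

# [S1, S2] = i S3 and cyclic permutations -- Eqn. (1.16) with hbar = 1
for A, B, C in ((S1, S2, S3), (S2, S3, S1), (S3, S1, S2)):
    c = comm(A, B)
    assert all(abs(c[i][j] - 1j * C[i][j]) < 1e-12 for i in range(2) for j in range(2))

assert matvec(S3, chi_p) == [0.5, 0]     # S3 chi_+ = +1/2 chi_+
assert matvec(S3, chi_m) == [0, -0.5]    # S3 chi_- = -1/2 chi_-
print("spin-1/2 matrices satisfy the angular momentum algebra")
```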
Note that Si does not act on any spatial degree of freedom. Thus, Si commutes
with both Xi and Pi . If the Hamiltonian does not contain some interaction term such
as a magnetic ﬁeld, then the spin operators also commute with H. Then the spin state
Figure 1.5: The "spinning top electron". (The figure shows the x, y and z axes, the spin vector $S$, and the circle $C$ of radius $r$.)
of the particle does not affect the time development of the spatial wave function, and we may ignore it altogether, as it then has no dynamical consequences.
On the other hand, if $H$ contains a magnetic field, then the magnetic moment associated with $S$ will interact with the field, and the spin can no longer be ignored in the evolution of the spatial wave function, as demonstrated in the Stern-Gerlach experiment.
Let us explain this a little further. Classically, when a charged particle of orbital momentum $L$ is placed in a magnetic field $B$, it gains an additional potential energy
$$V = -\frac{qg}{2mc}\,L\cdot B = -\mu\cdot B,$$
where q is the particle charge. The dimensionless number g is called the gyromagnetic
factor and classically this is related to the charge distribution of the particle. We
write Γ = qg/2mc as a shorthand later when discussing the particular form of the time
dependent Schrödinger equation to be solved, in chapter 3.
The quantity $\mu$ defined through
$$\mu := \Gamma L$$
is called the magnetic moment. When this is nonzero we have coupling between the
magnetic ﬁeld that may be present and the orbital momentum of the electron.
We assume that the eigenspin also has an associated magnetic moment µ and a
gyromagnetic factor to be determined experimentally. After all, we imagine the spin as
some spinning property (albeit non-classical) of the electron which is a charged particle.
Perhaps we may view the electron as a pure impossible-to-perceive spinning top. Thus,
when electromagnetic forces are present in our system we must account for the spin
state of the particle as well as the spatial state.
Can we get some sort of picture of the "spinning top"? If we imagine that the electron is in the state $\chi_+$, then the z component of the eigenspin has a sharp value of $\hbar/2$. The x and y components, however, are not possible to determine. But the squared norm of the spin is $3\hbar^2/4$. Consider Fig. 1.5, where we have depicted everything we know about the spin state of the electron. Even though the spin vector $S$ is not possible to measure directly, we may imagine that this "actual spin" lies on the circle $C$ with radius $r = \hbar/\sqrt{2}$ at a distance $\hbar/2$ above the xy plane.
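For completeness, the quoted radius follows from Pythagoras in spin space, using the sharp values $|S|^2 = 3\hbar^2/4$ and $S_3 = \hbar/2$:
$$r = \sqrt{|S|^2 - S_3^2} = \sqrt{\tfrac{3}{4}\hbar^2 - \tfrac{1}{4}\hbar^2} = \frac{\hbar}{\sqrt{2}}.$$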
1.5.7 Picture Transformations
In this section we shall consider so-called picture transformations, in which the vectors
are transformed by a (possibly time dependent) unitary operator T (t). It corresponds
to changing the frame of reference in Newtonian mechanics. We view our quantum
states from another point of view, so to speak.
Assume that T (t) is a unitary operator on Hilbert space with a possible explicit
time dependence. Consider the transformation
$$\Psi(t) \longrightarrow \Psi_T(t) := T(t)\Psi(t). \tag{1.18}$$
This transformation is invertible due to $T(t)$ being unitary, i.e., $T^{-1} = T^\dagger$. The time dependence of $\Psi$ is governed by the time dependent Schrödinger equation
$$i\hbar\frac{\partial}{\partial t}\Psi(t) = H(t)\Psi(t), \tag{1.19}$$
and we ask: What is the governing equation for $\Psi_T(t)$? The left hand side of Eqn. (1.19) may be rewritten as
$$i\hbar\frac{\partial}{\partial t}\left(T^\dagger(t)\Psi_T(t)\right) = i\hbar\left(\frac{\partial T^\dagger}{\partial t}\Psi_T(t) + T^\dagger\frac{\partial}{\partial t}\Psi_T(t)\right).$$
We rewrite the right hand side in a similar way, viz.,
$$H(t)\Psi(t) = H(t)T^\dagger(t)\Psi_T(t).$$
Multiplying Eqn. (1.19) with $T(t)$ from the left and reorganizing the equation, we get
$$i\hbar\frac{\partial}{\partial t}\Psi_T(t) = H_T(t)\Psi_T(t), \tag{1.20}$$
with
$$H_T(t) := T(t)H(t)T^\dagger(t) + i\hbar\frac{\partial T}{\partial t}T^\dagger(t).$$
We have also used that
$$\frac{\partial T}{\partial t}T^\dagger = -T\frac{\partial T^\dagger}{\partial t},$$
which follows from diﬀerentiating T T † = 1 with respect to t.
We may also find the picture transformation of an arbitrary operator $A$. Let
$$\Phi = A\Psi,$$
and using Eqn. (1.18) we get
$$\Phi_T = T\Phi = TA\Psi = TAT^\dagger\Psi_T,$$
and thus
$$A(t) \longrightarrow A_T(t) := T(t)A(t)T^\dagger(t) \tag{1.21}$$
is the picture transformation of an operator $A$.
As a consequence of the definition of the picture transformed operator, we get conservation of expectation values in the different pictures:
$$\langle A\rangle = (\Psi, A\Psi) = (T^\dagger\Psi_T, AT^\dagger\Psi_T) = (\Psi_T, TAT^\dagger\Psi_T) = \langle A_T\rangle.$$
We summarize these ﬁndings in a theorem.
Theorem 8
Consider a picture transformation given by
$$\Psi(t) \longrightarrow \Psi_T(t) := T(t)\Psi(t),$$
where $T(t)T^\dagger(t) = 1$. Then $\Psi_T$ obeys the picture transformed Schrödinger equation
$$i\hbar\frac{\partial}{\partial t}\Psi_T(t) = H_T(t)\Psi_T(t),$$
and the picture transformed operators obey
$$A(t) \longrightarrow A_T(t) = T(t)A(t)T^\dagger(t).$$
The two pictures are physically equivalent, in the sense that expectation values are conserved, viz.,
$$\langle A\rangle = \langle A_T\rangle.$$
Note that there is no reason to think of one picture as more fundamental than
another. The original “Schrödinger picture” may be obtained from the picture given
by T (t) with the picture transformation
ΨT (t) −→ Ψ(t) = T † (t)ΨT (t).
If T is unitary, so is of course T † !
In other words: For every unitary operator there exists a picture. And when a
unitary operator acts on an orthonormal basis, we get another orthonormal basis.
Hence, changing the picture may be looked upon as simply changing the frame of
reference, if we consider the frame of reference to be the basis we describe our states
in.
To conclude, we remark that through the following chain of identities,
$$\Psi_T(t') = T(t')\Psi(t') = T(t')U(t',t)\Psi(t) = T(t')U(t',t)T^\dagger(t)\Psi_T(t),$$
we obtain a relation for the transformed propagator $U_T$, viz.,
$$U_T(t',t) = T(t')U(t',t)T^\dagger(t).$$
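As a small numerical illustration of Theorem 8 (the operator, state and unitary below are chosen ad hoc; a sketch, not thesis material), one can verify that expectation values are conserved under a picture transformation:

```python
# An ad hoc 2x2 example: transform both the state and the operator with the
# same unitary T and check that the expectation value (Psi, A Psi) is unchanged.
from math import cos, sin

theta = 0.7
T = [[cos(theta), sin(theta)], [-sin(theta), cos(theta)]]   # real unitary (rotation)
A = [[1.0, 2.0], [2.0, -0.5]]                               # Hermitian observable
psi = [0.6, 0.8]                                            # normalized state

def matmul2(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def matvec(M, v):
    return [M[i][0] * v[0] + M[i][1] * v[1] for i in range(2)]

def expval(v, M):                   # (v, M v); everything is real here
    w = matvec(M, v)
    return v[0] * w[0] + v[1] * w[1]

Tdag = [[T[j][i] for j in range(2)] for i in range(2)]      # adjoint = transpose
psi_T = matvec(T, psi)
A_T = matmul2(matmul2(T, A), Tdag)

print(expval(psi, A), expval(psi_T, A_T))   # equal up to rounding
```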
1.5.8 Many Particle Theory
Quantum mechanics is not only able to describe a single particle. It is also applicable
to systems with many particles (i.e., material systems) in the same way as Hamiltonian
mechanics is. We will here give a brief introduction to the non-relativistic quantum
description of many particles. A thorough exposition is out of scope for this thesis;
see Ref. . Many-body theory is one of the most fundamental ingredients of various
disciplines in physics such as nuclear physics, solid state physics and particle physics.
The quantum mechanical formalism for many particles is obtained from assuming that the wave function depends on all the particle coordinates $x_i \in \mathbb{R}^3$; the Hilbert space for $N$ particles then becomes
$$\mathcal{H} = L^2(\mathbb{R}^{3N})\otimes\mathbb{C}^{2s_1+1}\otimes\mathbb{C}^{2s_2+1}\otimes\cdots\otimes\mathbb{C}^{2s_N+1}.$$
The fundamental commutator is
$$[X_{i,r}, P_{j,s}] = i\hbar\,\delta_{ij}\delta_{rs}, \qquad i,j = 1,\ldots,N, \quad r,s = 1,\ldots,3,$$
and we must bear in mind that there are 3N degrees of freedom. The commutators
are thus deﬁned for coordinates and momenta belonging to diﬀerent particles as well.
Most applications of many-body theory involve so-called identical particles. Identical particles have the property that there is nothing distinguishing the states of systems obtained from one another by permuting the positions of the particles.
For a quantum mechanical system we may describe it in this way: if $S_N$ denotes the set of permutations of $N$ symbols, and $\sigma(x_1,\ldots,x_N)$ denotes such a permutation applied to the $N$ coordinate vectors, then the physical properties of $\Psi(x_1,\ldots,x_N)$
should be identical to those of $\Psi(\sigma(x_1,\ldots,x_N))$. Here, we include the spin degrees of freedom in $x_i$, since the spin is a coordinate on an equal footing with the Cartesian position, even though it is just an integer.
Note that this is an additional postulate to the four already given in section 1.3.
It is difficult to make this notion precise. Most textbooks use the above definition of identical particles based on the permutations of positions in configuration space. Unfortunately, this is physically inconsistent and should be avoided. Indeed, if the particles are indistinguishable, the permutation of the particles is a non-observable concept.
Indistinguishability is fundamental. Introducing the postulate leads to theoretical predictions in perfect agreement with experiments. The physical identification of states obtained by permuting the particles is equivalent to the statement
$$\Psi(p_{ij}(x_1,\ldots,x_N)) = \pm\Psi(x_1,\ldots,x_N),$$
where $p_{ij}$ is a transposition of particles $i$ and $j$. In other words, $\Psi$ is either symmetric or anti-symmetric with respect to particle exchanges. Particles in nature can thus be divided into two categories: those with symmetric wave functions and those with anti-symmetric wave functions. The former are referred to as bosons and the latter as fermions. This is an experimental fact not to be overlooked, and indeed it is fundamental in most branches of microphysics.
A further experimental fact is that bosons always have integral spin and fermions
always half-integral spin. Thus electrons, protons and neutrons are fermions, while
photons are bosons.
For fermions we may immediately identify the so-called Pauli principle, stating that two (identical) fermions cannot occupy the same quantum state. If we assume that two particles $i$ and $j$ occupy the same place in space, then
$$\Psi(p_{ij}(x_1,\ldots,x_N)) = \Psi(x_1,\ldots,x_N) = -\Psi(p_{ij}(x_1,\ldots,x_N)),$$
so that
$$\Psi(x_1,\ldots,x_N) = 0$$
whenever two of the coordinates coincide. It is not obvious that this implies that two
fermions are not allowed to be in the same state in general (and what is meant by “the
same state” for two particles in a state for an N -particle system), but we have not yet
developed the necessary quantum mechanical results to establish this.
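Although the general statement requires more machinery, the two-particle version is easy to illustrate. In the sketch below (the one-particle functions phi_a and phi_b are hypothetical Gaussians, not from the thesis), the antisymmetrized wave function vanishes at coincident coordinates, and vanishes identically when the two one-particle states are equal:

```python
# Antisymmetrized two-particle wave function: zero at coincident coordinates,
# and identically zero if the two one-particle states coincide.
from math import exp

phi_a = lambda x: exp(-(x - 1.0) ** 2)     # hypothetical one-particle states
phi_b = lambda x: exp(-(x + 1.0) ** 2)

def antisym(f, g):
    return lambda x1, x2: f(x1) * g(x2) - g(x1) * f(x2)

Psi = antisym(phi_a, phi_b)
print(Psi(0.3, 0.3))                       # 0.0 -- coincident coordinates
print(antisym(phi_a, phi_a)(0.3, -0.8))    # 0.0 -- equal one-particle states
print(Psi(0.3, -0.8))                      # nonzero in general
```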
In Ref. , J.M. Leinaas and J. Myrheim show that the identical particle postulate is actually superfluous. By carefully treating the transition from classical mechanics to quantum mechanics, they managed to obtain the quantum mechanical notion of identical particles from the corresponding classical notion, which is easier to describe accurately.
The basic idea is that because of the indistinguishability of the classical particles
their conﬁguration space is not R3N but rather R3N /SN ; the quotient space obtained
by identifying conﬁgurations which diﬀer only by a permutation of the positions of
the particles. Such quotient spaces are more complicated than the Euclidean space we
normally use, as they may have non-zero curvature and singularities.
In classical systems, particles are usually not allowed to inhabit the same location in space, simply because they are mutually repelling, as electrons are. In that case the effect of using the modified configuration space is not seen. In quantum mechanics the effect is profound, and for three-dimensional systems the notion coincides with the one sketched above. For lower-dimensional systems, however, Leinaas and Myrheim showed that a different behavior is possible as well: the sign change when transposing two particles instead becomes a phase change $e^{ia}$, with $a$ real.
1.5.9 Entanglement
A famous quote by Einstein says that “God does not play dice.” There are many
possible ways to interpret this, but the original context in which it was uttered was
theoretical physics. In this short section we will discuss some of the rather esoteric
features of quantum physics. Ref.  contains a concise yet precise treatment of the
subjects of this section.
Considering the fourth postulate, quantum physics is deterministic in the sense that
the state evolves in time according to a diﬀerential equation. On the other hand, it is
non-deterministic in the way that the outcome of an experiment is completely random.
In 1935, Einstein, Podolsky and Rosen proposed a famous gedanken experiment in which the non-deterministic features yielded somewhat absurd consequences. Because of this, Einstein felt that quantum physics must be incomplete; that there had to be a bigger "super-theory" that included the present form of quantum theory. To understand this paradox we first need to furnish the concept of entanglement for many-body systems.
Consider a particle with Hamiltonian $H$, which we for simplicity assume is independent of time. We diagonalize $H$, i.e., find orthonormal $\phi_n$ such that
$$H\phi_n = \epsilon_n\phi_n,$$
where the $\phi_n$ form a basis for the Hilbert space $\mathcal{H}$. For arbitrary one-particle functions we have
$$\psi = \sum_n c_n\phi_n.$$
If we now consider an $N$-particle system with a Hamiltonian given by
$$H' = \sum_{i=1}^N H(i),$$
where $H(i)$ denotes that only operators concerning degrees of freedom belonging to particle number $i$ are present in the term, the state
$$\Phi = \phi_{n_1}(1)\otimes\phi_{n_2}(2)\otimes\cdots\otimes\phi_{n_N}(N)$$
is easily seen to be an eigenstate of $H'$, viz.,
$$H'\Phi = (\epsilon_{n_1} + \cdots + \epsilon_{n_N})\Phi.$$
The Hilbert space for the $N$-particle system may be written as
$$\mathcal{H}' = \underbrace{\mathcal{H}\otimes\mathcal{H}\otimes\cdots\otimes\mathcal{H}}_{N\ \text{terms}},$$
where $\mathcal{H}$ is the Hilbert space associated with one particle. The states $\Phi$ above are easily seen to constitute a basis for $\mathcal{H}'$. Thus we may construct quite general states of the $N$-particle system by considering the direct product of arbitrary one-particle functions, viz.,
$$\Psi = \psi_1(1)\otimes\psi_2(2)\otimes\cdots\otimes\psi_N(N).$$
These states are, however, not the most general ones. Indeed, a simple superposition of two such states cannot in general be factorized in this way. If we considered product states exclusively, then we would actually simply be studying the one-particle system. But any square-integrable function of the different coordinates may be used as a state, and such non-product states are called entangled states.
As an example, let us consider a one-particle Hamiltonian with only two eigenstates, viz.,
$$H\psi_i = \epsilon_i\psi_i, \qquad i = 1,2.$$
A basis of eigenstates for the two-particle Hamiltonian $H' = H(1) + H(2)$ is then given by the four states
$$\psi_1(1)\psi_1(2), \quad \psi_1(1)\psi_2(2), \quad \psi_2(1)\psi_1(2) \quad \text{and} \quad \psi_2(1)\psi_2(2).$$
(We have omitted the direct product symbols to economize.) Their energies are $2\epsilon_1$, $\epsilon_1+\epsilon_2$ (for two states) and $2\epsilon_2$, respectively. If we measure the total energy $H(1)+H(2)$ of the system, then these are the values we may obtain. The energy of one single particle, say $H(1)$, is also an observable, and upon measurement it will yield either $\epsilon_1$ or $\epsilon_2$. Consider then the state
$$\Psi = \frac{1}{\sqrt{2}}\left(\psi_1(1)\psi_2(2) - \psi_2(1)\psi_1(2)\right),$$
which is not an eigenfunction of the single-particle energies $H(1)$ and $H(2)$, but rather an (anti-symmetric) linear combination of product eigenstates. It is an entangled state.
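A small numerical sketch (two hypothetical levels eps1 and eps2; not thesis code) makes the example concrete: the entangled state is built from Kronecker products, its coefficient matrix has nonzero determinant (so it cannot be factorized into a product state), and it is nevertheless an eigenstate of the total energy:

```python
# Two hypothetical levels eps1 < eps2. Build the entangled state
# (psi1 psi2 - psi2 psi1)/sqrt(2) via Kronecker products and check that it is
# an eigenstate of the total energy but not a product state.
from math import sqrt

eps1, eps2 = 1.0, 2.0
psi1, psi2 = [1.0, 0.0], [0.0, 1.0]        # the two one-particle eigenstates

def kron(u, v):                             # direct product of two states
    return [a * b for a in u for b in v]

Psi = [(x - y) / sqrt(2) for x, y in zip(kron(psi1, psi2), kron(psi2, psi1))]

# A product state u (x) v has coefficients c[i][j] = u[i]*v[j], whose 2x2
# matrix has zero determinant; here the determinant is nonzero, so Psi is
# entangled.
det = Psi[0] * Psi[3] - Psi[1] * Psi[2]
print("coefficient determinant:", det)      # nonzero -> not a product state

# Psi is still an eigenstate of H(1) + H(2) with eigenvalue eps1 + eps2:
H_total = [2 * eps1, eps1 + eps2, eps1 + eps2, 2 * eps2]   # diagonal entries
assert all(abs(H_total[k] * Psi[k] - (eps1 + eps2) * Psi[k]) < 1e-12 for k in range(4))
```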
If we measure the energy of particle 1 and obtain $\epsilon_1$, the state collapses into
$$\Psi \longrightarrow \psi_1(1)\psi_2(2).$$
But then we also know the energy of particle 2, because the new state is an eigenstate of $H(2)$. In other words, there is a perfect correlation between the energies of the particles. Measuring one particle's energy will also determine the energy of the other.
This is the essential content of the Einstein-Podolsky-Rosen paradox, but what is so
paradoxical about this result, besides the fact that operating on one particle’s degrees
of freedom seemingly aﬀects the other, even though H(1) and H(2) commute? One
may prepare a quantum system in many ways. For example, one may prepare two
particles to be localized in space; one here on earth and one on the moon. We may also
prepare these particles’ spins to be in an entangled state at the same time. Substituting
“energy” with “spin” in the above argument then leads to the fact that there may be
perfect correlation between the spins of two particles very far away from each other. A measurement on the particle at home will instantaneously affect the one on the moon.
Einstein somewhat humorously called this phenomenon “spooky action at a distance.”
Einstein and others felt that such behavior was absurd; that quantum mechanics
should be local. An operation at a point should not immediately aﬀect other points in
space due to the limited speed of light. On these grounds, one may hope for some kind
of local “super-theory” that contains quantum mechanics as a special case. Mathematically, such a super-theory is called a hidden variable theory.
The Northern Irish physicist John Bell derived a series of inequalities that any local hidden variable theory must obey. These may actually be tested experimentally, and the experimental results tend to violate Bell's inequalities, in agreement with quantum mechanics. Local hidden variable theories are thus ruled out, and the "spooky action" must be a real phenomenon.
1.6 Electromagnetism and Quantum Physics

An interesting class of time dependent Hamiltonians are the ones describing the interaction between a charged particle, such as an electron, and a classical electromagnetic field. In this section we will give a very brief summary of this semiclassical theory. It is called semiclassical because only the charged particle is described in quantum mechanical terms; the electromagnetic field is still a classical field, which really should be quantized to make a consistent theory. However, when the electromagnetic field has macroscopic magnitudes, the quantum behavior becomes negligible compared to the classical features, as dictated by the correspondence principle, see section 2.4.
1.6.1 Classical Electrodynamics
Let us ﬁrst describe the electrodynamic theory in classical terms. On one hand we
have a charged particle of charge q. This particle interacts with an electric ﬁeld E
and a magnetic ﬁeld B. In this context, a ﬁeld means a vector quantity with three
components varying in space. We refer to E and B jointly as the electromagnetic ﬁeld.
The response of a charged particle of charge $q$ to the electromagnetic field is given by the Lorentz force:¹⁰
$$F = q\left(E + \frac{1}{c}\,v\times B\right). \tag{1.22}$$
The number $c$ is the speed of light in vacuum.
On the other hand, the famous Maxwell equations give the response of the electromagnetic field due to a charge, or a charge distribution $\rho$ in general:
$$\nabla\cdot E = 4\pi\rho, \tag{1.23}$$
$$\nabla\cdot B = 0, \tag{1.24}$$
$$\nabla\times E + \frac{1}{c}\frac{\partial B}{\partial t} = 0, \tag{1.25}$$
$$\nabla\times B - \frac{1}{c}\frac{\partial E}{\partial t} = \frac{4\pi}{c}\,j. \tag{1.26}$$
The charge density $\rho(x)$ represents charges distributed in space. For a point particle of charge $q$ situated at $r = r_0$ we have¹¹
$$\rho = q\,\delta(r - r_0),$$
since the total charge then becomes
$$\int_{\text{all space}}\rho\,d^3r = q.$$
The current density $j$ defines the movement of the charge density through the continuity equation
$$\frac{\partial\rho}{\partial t} + \nabla\cdot j = 0. \tag{1.27}$$
This equation must hold if Maxwell's equations are to be fulfilled at the same time. For a charged particle we obtain
$$j = q\,v\,\delta(r - r_0),$$
where $v = \dot{r}_0$ is the velocity of the particle.
Note that the Lorentz force and Maxwell’s equations are connected through the
appearance of the charge density and the current density.
If we assume that no charges are present, i.e., $\rho = j = 0$, we may easily combine Maxwell's equations and obtain the well-known wave equations
$$\nabla^2 E = \frac{1}{c^2}\frac{\partial^2 E}{\partial t^2} \qquad \text{and} \qquad \nabla^2 B = \frac{1}{c^2}\frac{\partial^2 B}{\partial t^2},$$
which show that the electromagnetic fields propagate as waves through charge-free space with velocity $c$.

¹⁰ In this thesis we employ Gaussian units for the electromagnetic field, which are defined through the proportionality constants in front of each term in Eqn. (1.22). See Ref. .
¹¹ Here we use the three dimensional Dirac delta function $\delta(x) = \delta(x_1)\delta(x_2)\delta(x_3)$.
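In the spirit of the finite difference methods studied later in the thesis, a quick sketch (pulse shape and sample point chosen arbitrarily) can confirm that a right-moving pulse $E(x,t) = f(x-ct)$ satisfies the 1D wave equation, by comparing second-order central differences in $x$ and $t$:

```python
# Verify at a sample point that E(x, t) = f(x - c t) satisfies
# d^2E/dx^2 = (1/c^2) d^2E/dt^2, using central second differences.
from math import exp

c = 3.0e8                         # wave speed
f = lambda u: exp(-u * u)         # an arbitrary smooth pulse shape
E = lambda x, t: f(x - c * t)

def d2(g, u, h):                  # second-order central second difference
    return (g(u + h) - 2.0 * g(u) + g(u - h)) / (h * h)

x0, t0 = 0.4, 1.0e-9              # sample point (x0 - c*t0 = 0.1)
lhs = d2(lambda x: E(x, t0), x0, 1e-4)
rhs = d2(lambda t: E(x0, t), t0, 1e-13) / (c * c)
print(lhs, rhs)                   # the two sides agree to discretization error
```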
Working with the fields $E$ and $B$ directly may be cumbersome, and more insight is gained if we introduce the potentials $\phi$ and $A$ as follows. Eqn. (1.24) implies that $B$ may be written as a curl, viz.,
$$B = \nabla\times A.$$
We insert this into Eqn. (1.25), and get
$$\nabla\times\left(E + \frac{1}{c}\frac{\partial A}{\partial t}\right) = 0,$$
which implies that the expression in parentheses may be written as a gradient, viz.,
$$-\nabla\phi = E + \frac{1}{c}\frac{\partial A}{\partial t},$$
or
$$E = -\frac{1}{c}\frac{\partial A}{\partial t} - \nabla\phi.$$
Note that we have reduced the six degrees of freedom of E and B to four degrees of
freedom, a considerable simpliﬁcation.
The potentials $A$ and $\phi$ are not unique. We may add to $A$ the gradient of an arbitrary function $\Lambda$ and still get the same fields $E$ and $B$, and thus there is no physical distinction between the transformed potentials and the old ones. Let us prove this claim. Let
$$A \longrightarrow A' = A - \nabla\Lambda,$$
and let
$$\phi \longrightarrow \phi' = \phi + \frac{1}{c}\frac{\partial\Lambda}{\partial t}.$$
This transformation is called a gauge transformation, and the function $\Lambda$ is called a gauge parameter. The physical field $B'$ of the transformed potential $A'$ becomes
$$B' = \nabla\times(A - \nabla\Lambda) = \nabla\times A - \nabla\times\nabla\Lambda = \nabla\times A = B,$$
since the curl of a gradient vanishes identically. For the transformed electric field $E'$ we obtain
$$E' = -\frac{1}{c}\frac{\partial}{\partial t}(A - \nabla\Lambda) - \nabla\left(\phi + \frac{1}{c}\frac{\partial\Lambda}{\partial t}\right) = -\frac{1}{c}\frac{\partial A}{\partial t} - \nabla\phi + \frac{1}{c}\nabla\frac{\partial\Lambda}{\partial t} - \frac{1}{c}\nabla\frac{\partial\Lambda}{\partial t} = E.$$
Thus we have shown the gauge invariance of the electromagnetic fields, and hence of the physical laws, under a gauge transformation. It is then clear that the gauge parameter $\Lambda$ cannot have any physical significance. Gauge invariance is a necessity for any physical theory utilizing the potentials $A$ and $\phi$ directly. (If only the electromagnetic fields $E$ and $B$ themselves occur, then gauge invariance is automatically obtained.)
The choice of $\Lambda$ is rather arbitrary. The proof rested on the differentiability of $\Lambda$ and on the interchangeability of differentiation with respect to time and with respect to space. Hence, at least twice continuously differentiable functions $\Lambda$ may be used.
The gauge parameter may be chosen in diﬀerent applications at our convenience to
simplify calculations and derivations. We will return to this later on, when discussing
the interaction of a charged quantum particle, such as an electron, with the classical electromagnetic field. We mention here, however, that two gauges are quite common in classical electromagnetic theory and in the quantum description of this (also called QED: quantum electrodynamics). These are the Coulomb gauge and the Lorentz gauge. In the former we require that $\nabla\cdot A = 0$, and in the latter that $c\nabla\cdot A = -\partial\phi/\partial t$. These conditions may always be fulfilled if we have full freedom over $\Lambda$. See Ref.  for a discussion.
We state gauge invariance as a theorem.
Theorem 9
The electromagnetic fields $E$ and $B$ consistent with Maxwell's equations (1.23)–(1.26) can be represented by the vector potential $A$ and the scalar potential $\phi$ such that
$$B = \nabla\times A \qquad \text{and} \qquad E = -\nabla\phi - \frac{1}{c}\frac{\partial A}{\partial t}.$$
Furthermore, the electromagnetic fields are gauge invariant, i.e., they are invariant under the transformation
$$A \longrightarrow A - \nabla\Lambda \qquad \text{and} \qquad \phi \longrightarrow \phi + \frac{1}{c}\frac{\partial\Lambda}{\partial t},$$
where $\Lambda(x,t)$ is any twice continuously differentiable real function.
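Theorem 9 is easy to test numerically with central differences (the vector potential and gauge parameter below are arbitrary smooth functions, chosen only for illustration): the curl of $A - \nabla\Lambda$ agrees with the curl of $A$, because the discrete difference operators commute, so the curl of a discrete gradient cancels:

```python
# Central-difference check of gauge invariance: curl(A - grad Lambda) = curl(A).
# The fields below are arbitrary smooth choices for illustration.
from math import sin, cos, exp

h = 1e-3

def A(x, y, z):                   # an arbitrary smooth vector potential
    return (sin(y * z), x * x * z, exp(x) * y)

def Lam(x, y, z):                 # an arbitrary smooth gauge parameter
    return sin(x * y) + x * z * z

def grad(f, x, y, z):             # central-difference gradient
    return ((f(x + h, y, z) - f(x - h, y, z)) / (2 * h),
            (f(x, y + h, z) - f(x, y - h, z)) / (2 * h),
            (f(x, y, z + h) - f(x, y, z - h)) / (2 * h))

def comp(F, i):
    return lambda x, y, z: F(x, y, z)[i]

def curl(F, x, y, z):             # central-difference curl
    gx = grad(comp(F, 0), x, y, z)
    gy = grad(comp(F, 1), x, y, z)
    gz = grad(comp(F, 2), x, y, z)
    return (gz[1] - gy[2], gx[2] - gz[0], gy[0] - gx[1])

def A_gauged(x, y, z):            # A -> A - grad Lambda
    a, g = A(x, y, z), grad(Lam, x, y, z)
    return (a[0] - g[0], a[1] - g[1], a[2] - g[2])

p = (0.3, -0.7, 1.1)
B1, B2 = curl(A, *p), curl(A_gauged, *p)
print(max(abs(u - v) for u, v in zip(B1, B2)))   # essentially zero: B is gauge invariant
```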
We may formulate the Lorentz force within the Hamiltonian framework in addition to the Newtonian framework, in which the force originally was given. In order to achieve this we must redefine our (classical) Hamiltonian and the canonical momenta. (This is due to the fact that the electromagnetic forces are not conservative.) We set
$$H_{\mathrm{em}} := \frac{1}{2m}\left(p_{\mathrm{em}} - \frac{q}{c}A\right)^2 + V(x) + q\phi(x), \tag{1.28}$$
where
$$p_{\mathrm{em}} := p + \frac{q}{c}A \tag{1.29}$$
is the canonical momentum for the electromagnetic Hamiltonian. Thus, the canonical momentum is no longer the regular linear momentum, but has an additional term proportional to the vector potential. It is easy to prove that this reproduces Newton's second law with the Lorentz force for the charged particle. Note also that the first term of $H_{\mathrm{em}}$ is exactly the kinetic energy, so that
$$H_{\mathrm{em}} = \text{total energy},$$
as we are used to.
1.6.2 Semiclassical Electrodynamics
The quantization process, i.e., the process of applying the postulates to the semiclassical system, follows easily. The postulates were formulated for canonical variables, i.e., generalized coordinates and their corresponding momenta, and we have now acquired momenta that look just a little bit different than usual. The canonical coordinates for the electromagnetic Hamiltonian $H_{\mathrm{em}}$ are $p_{i,\mathrm{em}}$ and $x_i$, where $p_{i,\mathrm{em}}$ is different from the linear momentum $mv_i$ (in Cartesian coordinates). This means that the quantum mechanical operator $P_{i,\mathrm{em}}$ does not correspond to the $i$th linear momentum component; rather,
$$P_{i,\mathrm{em}} = P_i + \frac{q}{c}A_i.$$
Thus, in the position representation, when we view our quantum states as $L^2$ functions,
$$P_i \longrightarrow -i\hbar\frac{\partial}{\partial x_i} - \frac{q}{c}A_i \tag{1.30}$$
is the $i$th component of the linear momentum in semiclassical electrodynamics.
What about gauge invariance in this semiclassical theory? It is not obvious that a gauge transformation will leave the physics invariant. Indeed, the vector potential $A$ appears explicitly in Eqn. (1.30), the expression for the linear momentum operator in the position representation. Of course, when we transform $A$ and $\phi$, we transform the Hamiltonian, so the Schrödinger equation also changes. We hope that this leaves room for gauge invariance, and indeed this is the case. To be more precise, if
$$A \longrightarrow A_\Lambda = A - \nabla\Lambda \qquad \text{and} \qquad \phi \longrightarrow \phi_\Lambda = \phi + \frac{1}{c}\frac{\partial\Lambda}{\partial t}$$
are gauge transformations of the potentials, this induces a gauge transformation of the Hamiltonian, viz.,
$$H = H(A,\phi) \longrightarrow H_\Lambda = H(A_\Lambda,\phi_\Lambda).$$
If the original Schrödinger equation read
$$H\Psi = i\hbar\frac{\partial}{\partial t}\Psi,$$
the new gauge-transformed Schrödinger equation will read
$$H_\Lambda\Psi_\Lambda = i\hbar\frac{\partial}{\partial t}\Psi_\Lambda.$$
The theorem below states that the gauge transformation is just a picture transformation, and we identify the unitary operator $T(t)$ in this case. Then the physics of the different gauges cannot be different, since picture transformations conserve physical measurements according to Theorem 8.
Theorem 10
Let $\Psi$ be the solution to the time dependent Schrödinger equation with the semiclassical Hamiltonian $H$. Let $\Psi_\Lambda$ be the solution to the gauge transformed Schrödinger equation corresponding to the gauge transformed $H \longrightarrow H_\Lambda$. Then $\Psi_\Lambda$ is related to $\Psi$ by a unitary transformation, viz.,
$$\Psi \longrightarrow \Psi_\Lambda = \exp\left(-\frac{iq}{\hbar c}\Lambda(X,t)\right)\Psi.$$
Furthermore,
$$H_\Lambda(t) = T(t)H(t)T^\dagger(t) + i\hbar\frac{\partial T}{\partial t}T^\dagger(t), \tag{1.31}$$
with
$$T(t) = \exp\left(-\frac{iq}{\hbar c}\Lambda(X,t)\right).$$
In short: the gauge transformation corresponds to a picture transformation.
Proof: We will prove the theorem by working in the position representation, i.e., expressing the state vector in the coordinate basis, because this is the natural choice as $\Lambda$ is expressed as a function of $X$.
It is sufficient to prove that $H_\Lambda$ is given by Eqn. (1.31), because then $T(t)\Psi$ is governed by the corresponding Schrödinger equation, and then $\Psi_\Lambda$ must be given by this unitary transformation.
Consider the picture transformation
$$H_T = THT^\dagger + i\hbar\frac{\partial T}{\partial t}T^\dagger.$$
The first term yields
$$THT^\dagger = \frac{1}{2m}T\left(P - \frac{q}{c}A\right)^2T^\dagger + qT\phi T^\dagger,$$
where the first part is the kinetic energy. The second term is easy to calculate, since $\phi$ commutes with $T$, as they are both pure functions of $X$. The kinetic energy, however, is more complicated. We consider each term in the kinetic energy when the square is expanded:
$$T\left(P - \frac{q}{c}A\right)^2T^\dagger = T\left(P^2 - \frac{q}{c}(A\cdot P + P\cdot A) + \frac{q^2}{c^2}A^2\right)T^\dagger. \tag{1.32}$$
The last $A^2$ term commutes with $T^\dagger$, because it is also a function of $X$. We will then have to compute the two remaining terms. These two terms will, as we shall see, give rise to the terms arising from the gauge transformation.
We calculate the operator $PT^\dagger$ by operating on an $L^2$ function $\Psi$:
$$PT^\dagger\Psi = -i\hbar\nabla(T^\dagger\Psi) = -i\hbar\left(\frac{iq}{\hbar c}T^\dagger(\nabla\Lambda)\Psi + T^\dagger\nabla\Psi\right) = \left(\frac{q}{c}T^\dagger(\nabla\Lambda) + T^\dagger P\right)\Psi.$$
Hence,
$$PT^\dagger = \frac{q}{c}T^\dagger(\nabla\Lambda) + T^\dagger P.$$
The middle term in Eqn. (1.32) is then easily calculated:
$$\frac{q}{c}\left(PT^\dagger\cdot A + A\cdot PT^\dagger\right) = \frac{2q^2}{c^2}T^\dagger A\cdot(\nabla\Lambda) + \frac{q}{c}T^\dagger(A\cdot P + P\cdot A).$$
For the $P^2$ term we note first that
$$P^2T^\dagger = P\cdot\left(\frac{q}{c}T^\dagger(\nabla\Lambda) + T^\dagger P\right).$$
The second term yields
$$P\cdot T^\dagger P = \frac{q}{c}T^\dagger(\nabla\Lambda)\cdot P + T^\dagger P^2,$$
while the first term is found by operating on $\Psi$, viz.,
$$\frac{q}{c}P\cdot(\nabla\Lambda)T^\dagger\Psi = \left(-\frac{i\hbar q}{c}T^\dagger(\nabla^2\Lambda) + \frac{q^2}{c^2}T^\dagger(\nabla\Lambda)^2 + \frac{q}{c}T^\dagger(\nabla\Lambda)\cdot P\right)\Psi.$$
Collecting all the terms we obtain
\[
\begin{aligned}
T\left(P - \frac{q}{c}A\right)^{2} T^\dagger
={}& T T^\dagger P^2 - \frac{i\hbar q}{c}\,T T^\dagger(\nabla^2\Lambda) \\
&+ \frac{q^2}{c^2}\,T T^\dagger(\nabla\Lambda)^2 + 2\,\frac{q}{c}\,T T^\dagger(\nabla\Lambda)P \\
&+ \frac{q^2}{c^2}\,T T^\dagger A^2 - 2\,\frac{q^2}{c^2}\,T T^\dagger(\nabla\Lambda)A \\
&- \frac{q}{c}\,T T^\dagger(AP + PA).
\end{aligned}
\]
1.6 – Electromagnetism and Quantum Physics
Using the unitarity of T and reorganizing we get
\[
T\left(P - \frac{q}{c}A\right)^{2} T^\dagger
= P^2 - \frac{q}{c}(AP + PA) + \frac{q^2}{c^2}A^2
+ 2\,\frac{q}{c}(\nabla\Lambda)P - \frac{i\hbar q}{c}(\nabla^2\Lambda)
- 2\,\frac{q^2}{c^2}A(\nabla\Lambda) + \frac{q^2}{c^2}(\nabla\Lambda)^2.
\]
Note that
\[
(\nabla\Lambda)P = P(\nabla\Lambda) + i\hbar(\nabla^2\Lambda),
\]
which we use to transform one of the (∇Λ)P terms into P(∇Λ). This cancels the ∇2 Λ
term. Then the expression reads
\[
\begin{aligned}
T\left(P - \frac{q}{c}A\right)^{2} T^\dagger
={}& P^2 - \frac{q}{c}(AP + PA) + \frac{q^2}{c^2}A^2 \\
&+ \frac{q}{c}(\nabla\Lambda)P + \frac{q}{c}P(\nabla\Lambda)
- 2\,\frac{q^2}{c^2}A(\nabla\Lambda) + \frac{q^2}{c^2}(\nabla\Lambda)^2 \\
={}& \left[P - \frac{q}{c}\bigl(A - \nabla\Lambda\bigr)\right]^{2}.
\end{aligned}
\]
The gauge transformed Hamiltonian reads
\[
H_\Lambda = \frac{1}{2m}\left[P - \frac{q}{c}\bigl(A - \nabla\Lambda\bigr)\right]^{2}
+ q\phi + \frac{q}{c}\frac{\partial\Lambda}{\partial t},
\]
and we notice that we have successfully reproduced the gauge transformed kinetic
energy. The scalar potential has an additional term proportional to ∂Λ/∂t, and this
comes from the term (∂T /∂t)T † in HT , viz.,
\[
i\hbar\,\frac{\partial T}{\partial t}\,T^\dagger
= i\hbar\,\frac{-iq}{\hbar c}\,\frac{\partial\Lambda}{\partial t}\,T T^\dagger
= \frac{q}{c}\frac{\partial\Lambda}{\partial t}.
\]
Thus, we have proved that
\[
H_\Lambda = H(A_\Lambda, \phi_\Lambda) = T H T^\dagger + i\hbar\,\frac{\partial T}{\partial t}\,T^\dagger = H_T,
\]
and the proof is complete.

We have established that performing a gauge transformation is in fact equivalent to performing a picture transformation. Hence, the gauge parameter Λ has no physical meaning in itself, just as in classical electrodynamics.
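The operator manipulations in this proof can be spot-checked symbolically. The sketch below (a one-dimensional check assuming SymPy is available; the sample choices Λ = x², A = sin x and the Gaussian test function are arbitrary illustrations, not taken from the text) verifies that conjugating the kinetic operator (P − qA/c)² with T = exp(−iqΛ/ℏc) indeed replaces A by A − ∂Λ/∂x:

```python
import sympy as sp

x = sp.symbols('x', real=True)
hbar, q, c = sp.symbols('hbar q c', positive=True)

# Arbitrary sample choices, for illustration only:
Lam = x**2            # gauge parameter Lambda(x)
A = sp.sin(x)         # vector potential A(x)
f = sp.exp(-x**2)     # test wave function

T = sp.exp(-sp.I*q*Lam/(hbar*c))   # the unitary T of Eqn. (1.31)
Td = sp.exp(sp.I*q*Lam/(hbar*c))   # its adjoint T†

def kinetic(Avec, g):
    """Apply (P - (q/c)*Avec)**2 to g, with P = -i*hbar*d/dx."""
    P = lambda h: -sp.I*hbar*sp.diff(h, x)
    once = P(g) - (q/c)*Avec*g
    return P(once) - (q/c)*Avec*once

lhs = T * kinetic(A, Td*f)              # T (P - qA/c)^2 T† applied to f
rhs = kinetic(A - sp.diff(Lam, x), f)   # (P - (q/c)(A - Lambda'))^2 f
assert sp.simplify(lhs - rhs) == 0      # gauge covariance of the kinetic term
```

The same check with generic `sp.Function` objects for Λ and A also succeeds, but takes considerably longer to simplify.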
However, the potentials A and φ have a physical meaning, with somewhat surprising consequences. In classical mechanics a particle moves along a definite trajectory, sampling the electromagnetic fields E and B along the path, i.e., sampling ∇ × A and ∇φ. The quantum particle, however, moves according to a partial differential equation, namely the Schrödinger equation, and thus A and φ influence the movement of the particle (i.e., wave) globally. Thus in a region of space where the fields vanish, and where a classical particle would feel no electromagnetic force, the quantum particle may be affected by non-vanishing potentials, both in the same region of space and in other regions!
The simplest example of this striking fact is perhaps the so-called Aharonov-Bohm effect occurring in a modified double slit experiment. See Ref.  for a treatment.
Gauge invariance is an important concept when we deal with numerical calculations. If our calculations show different results in different gauges, this indicates numerical errors accumulating in time: we have introduced an artificial physical effect of the gauge parameter. If, on the other hand, our results turn out to be gauge independent, then we have an indication of a highly accurate calculation.
Chapter 2
Simple Quantum Mechanical Systems
In this chapter we will have a look at some analytically solvable quantum mechanical
systems. We do this to gain some further insight into the physics described in the
previous chapter, and to show that even the simplest quantum mechanical systems are
rather diﬃcult to solve. This points in the direction of numerical methods.
We will have a look at the free particle; a particle propagating with no external
forces acting on it. We will study the quantum mechanical harmonic oscillator, perhaps
the most notorious and important system ever, regardless of physical discipline. The
hydrogen atom is the most complicated system we will solve, and the techniques used to solve it are common to many other problems.
More complicated systems can be solved, but it should be clear after solving the
hydrogen atom that numerical methods are well justiﬁed. In addition, when solving
time dependent systems things get even worse. There are very few systems with time
dependent Hamiltonians that can be attacked with analytical means.
In addition to the systems solved here, systems that are important when testing
a numerical approach to the time independent Schrödinger equation are considered in
chapter 6. These include a particle-in-box in two dimensions, a two-dimensional free
particle in a constant magnetic ﬁeld and a two-dimensional hydrogen atom.
2.1 The Free Particle
We begin with the free particle, which is the simplest spatial problem to solve. The
free particle provides the ﬁrst system comparable to classical Newtonian mechanics
and also oﬀers some insight into the behavior of the quantum particle.
2.1.1 The Classical Particle
A free particle is a particle free to move without inﬂuence from external forces. The
classical Hamiltonian reads
\[
H = T = \frac{p^2}{2m},
\]
i.e., it consists of kinetic energy only. Classically, this system has a very simple solution.
Writing out Hamilton’s equations of motion (1.2), we get
\[
\dot{x} = \frac{p}{m}
\qquad\text{and}\qquad
\dot{p} = 0.
\]
Thus, the particle moves with constant momentum p and traces out a straight line (or
a single point) in space, viz.,
\[
x(t) = x(0) + \frac{p}{m}\,t.
\]
2.1.2 The Quantum Particle
Let us turn to the quantum mechanical system and its solution. Let us ﬁrst note
that the particle may have spin degrees of freedom, but since H commutes with any
spin-dependent operator, we need not consider spin at all. Thus,
\[
\mathcal{H} = L^2(\mathbb{R}^3).
\]
We will use the technique of section 1.4 to solve the Schrödinger equation. We then
begin with the time independent Schrödinger equation, which is just the eigenvalue
problem for the Hamiltonian, viz.,
\[
\frac{P^2}{2m}\,\psi = E\psi.
\]
We have separated out the time dependence e^{−iEt/ℏ} and the space dependence ψ(x).
Note that the Hamiltonian is proportional to the square of P. Thus, any eigenfunction of P is also an eigenfunction for H. Let us try to ﬁnd eigenfunctions ψ(x ) of
P_i:
\[
-i\hbar\,\frac{\partial}{\partial x_i}\,\psi = p_i\,\psi.
\]
The solution to this differential equation is easy to deduce:
\[
\psi(x_i) = \frac{1}{\sqrt{2\pi\hbar}}\, e^{i p_i x_i/\hbar},
\]
which are the eigenfunctions of the ith component of P with eigenvalue p_i ∈ ℝ. The factor (2πℏ)^{−1/2} is chosen to achieve orthonormal eigenvectors, in the sense described in appendix A. The function may be scaled by an arbitrary factor (that is, independent of x_i). Such a factor may be one of the eigenfunctions corresponding to the other directions in space, viz.,
\[
\psi_p(x) = \frac{1}{(2\pi\hbar)^{3/2}}\, e^{i p_1 x_1/\hbar}\, e^{i p_2 x_2/\hbar}\, e^{i p_3 x_3/\hbar}
= \frac{1}{(2\pi\hbar)^{3/2}}\, e^{i p\cdot x/\hbar}.
\]
We have added a subscript to indicate the eigenvalue belonging to the eigenfunction.
Note that this function is an eigenfunction of all the components of P simultaneously.
For the Hamiltonian we get
\[
H\psi_p = \frac{1}{2m}\left(P_1^2 + P_2^2 + P_3^2\right)\psi_p = \frac{p^2}{2m}\,\psi_p,
\]
so that the eigenvalues of H become
\[
E = \frac{p^2}{2m}, \qquad p \in \mathbb{R}^3.
\]
Thus, the energies corresponding to p of constant length are the same. The time
dependence of ψp is given by
\[
\psi_p(t) = \psi_p\, e^{-iEt/\hbar}
= \frac{1}{(2\pi\hbar)^{3/2}}\, e^{i p\cdot x/\hbar}\, e^{-iEt/\hbar}
= \frac{1}{(2\pi\hbar)^{3/2}}\, e^{i(k\cdot x - \omega t)},
\tag{2.1}
\]
where
\[
k := \frac{p}{\hbar}
\qquad\text{and}\qquad
\omega := \frac{E}{\hbar} = \frac{p^2}{2m\hbar} = \frac{\hbar k^2}{2m}.
\]
We recognize Eqn. (2.1) as a plane wave solution; a wave that travels with constant
speed through space with wave number k . The speed of the wave is p/m, as in the
classical case.
The eigenvectors of the momentum operator are plane waves that are free to move
with constant velocity in a ﬁxed direction. Compare this to the classical particle moving
in a straight line with constant speed. In both cases we know the momentum p sharply
– it has no uncertainty. Heisenberg’s uncertainty principle (1.12) then states that the
position x of the quantum mechanical wave should have infinite uncertainty. Indeed, we have
\[
|\psi_p(x)| = \text{constant},
\]
so that every point in space has the same probability density for finding the particle there upon measurement! Note that the eigenfunctions are not proper vectors of L², because they cannot be normalized. As described in appendix A, they are instead so-called improper vectors of Hilbert space: not normalizable to a number, but distributions.
Let us discuss arbitrary states of the free particle. We have found the eigenfunctions of H for the free particle, and we know that these constitute a basis for the Hilbert space in question. An arbitrary solution may then be found by superposing these, viz.,
\[
\Psi(x) = \frac{1}{(2\pi\hbar)^{3/2}} \int \Phi(p)\, e^{i p\cdot x/\hbar}\, d^3p.
\]
As described in appendix A, we recognize this as the inverse Fourier transform of
Φ(p). Then we may write
\[
\Phi(p) = \frac{1}{(2\pi\hbar)^{3/2}} \int \Psi(x)\, e^{-i p\cdot x/\hbar}\, d^3x,
\]
which gives the momentum distribution in terms of the spatial distribution.
It is important to be aware of the fact that the momentum distribution Φ(p) and the spatial distribution Ψ(x) are equivalent. They both fully determine the wave function; the difference is that they are expressed in different bases, i.e.,
\[
\Psi(x) = (\psi_x, \Psi)
\qquad\text{and}\qquad
\Phi(p) = (\psi_p, \Psi).
\]
These expressions are the components of the vector Ψ along the eigenvector basis for
the position and momentum operator, respectively.
2.1.3 The Gaussian Wave Packet
The plane waves are not particularly suitable for describing a particle as we usually experience it. Hardly ever do we deal with particles having equal probability of being found at any place in the universe. We need some kind of localization of the particle. It is then natural to discuss the Gaussian wave packet. We will confine this discussion to one dimension for simplicity and ease of visualization.
Figure 2.1: Gaussian probability density for position and momentum
The Gaussian wave packet is a spatial wave function whose absolute square is a
Gaussian distribution. The wave function is given by
\[
\Psi(q) = \frac{1}{(2\pi\sigma_q^2)^{1/4}}\, e^{i p_0 q/\hbar}\, \exp\!\left(-\frac{(q - q_0)^2}{4\sigma_q^2}\right).
\tag{2.2}
\]
The factor e^{ip_0 q/ℏ} gives rise to an average momentum p_0, as we shall see in a minute. The probability distribution in space is given by
\[
P(q) = |\Psi(q)|^2 = \frac{1}{\sqrt{2\pi\sigma_q^2}}\, \exp\!\left(-\frac{(q - q_0)^2}{2\sigma_q^2}\right).
\]
Performing an (inverse) Fourier transform of Ψ to obtain the momentum distribution
yields
\[
\Phi(p) = \frac{1}{(2\pi\sigma_p^2)^{1/4}}\, e^{-i(p - p_0)q_0/\hbar}\, \exp\!\left(-\frac{(p - p_0)^2}{4\sigma_p^2}\right),
\tag{2.3}
\]
where σ_p = ℏ/(2σ_q) is the width of the momentum distribution, which also happens to be a Gaussian distribution. Fig. 2.1 depicts these distributions, i.e., their absolute squares.
Thus, the Gaussian wave packet represents a particle localized around the mean
q = q_0 with variance σ_q². The momentum has mean p_0 and variance σ_p² = ℏ²/(4σ_q²). Hence
\[
\Delta q\,\Delta p = \sigma_q \sigma_p = \frac{\hbar}{2},
\]
and we have achieved optimal uncertainty in q and p with the Gaussian distribution.
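This minimal uncertainty is easy to confirm numerically. The sketch below (with ℏ = 1; the packet and grid parameters are arbitrary sample values, and NumPy is assumed) samples Ψ(q) of Eqn. (2.2) on a grid, obtains Φ(p) with a fast Fourier transform, and checks that the measured widths satisfy σ_q σ_p = ℏ/2:

```python
import numpy as np

hbar = 1.0
q0, p0, sigma_q = 0.5, 2.0, 0.7        # arbitrary sample packet parameters

N = 2048
q = np.linspace(-30.0, 30.0, N, endpoint=False)
dq = q[1] - q[0]
psi = (2*np.pi*sigma_q**2)**(-0.25) \
      * np.exp(1j*p0*q/hbar - (q - q0)**2/(4*sigma_q**2))

# moments of the position density |Psi|^2
rho = np.abs(psi)**2
mean_q = np.sum(q*rho)*dq
var_q = np.sum((q - mean_q)**2*rho)*dq

# momentum-space wave function; overall phases drop out of |Phi|^2
p = 2*np.pi*hbar*np.fft.fftfreq(N, d=dq)
phi = np.fft.fft(psi)*dq/np.sqrt(2*np.pi*hbar)
dp = 2*np.pi*hbar/(N*dq)
rho_p = np.abs(phi)**2
mean_p = np.sum(p*rho_p)*dp
var_p = np.sum((p - mean_p)**2*rho_p)*dp

assert abs(np.sum(rho)*dq - 1.0) < 1e-10            # Psi is normalized
assert abs(mean_q - q0) < 1e-8 and abs(mean_p - p0) < 1e-8
assert abs(np.sqrt(var_q*var_p) - hbar/2) < 1e-8    # minimal uncertainty
```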
With Ehrenfest’s theorem we get at once
\[
\frac{d}{dt}\langle q\rangle = \frac{1}{m}\langle p\rangle
\qquad\text{and}\qquad
\frac{d}{dt}\langle p\rangle = 0.
\]
Hence, the wave packet's mean position moves like a classical particle of constant momentum p_0. It can be shown that a free wave packet that starts out Gaussian retains a Gaussian shape under time evolution.
One may calculate the time evolution of σq . The width σp is constant in time by
Eqn. (1.13), since P 2 commutes with the Hamiltonian. The time evolution of σq is
given by
\[
\sigma_q(t)^2 = \frac{\hbar^2}{4\sigma_p^2}\left(1 + \frac{4\sigma_p^4}{m^2\hbar^2}\,t^2\right)
= \sigma_q(0)^2\left(1 + \frac{\hbar^2}{4m^2\sigma_q(0)^4}\,t^2\right).
\tag{2.4}
\]
Thus, the wave packet spreads out in space as time increases. This phenomenon is called dispersion: The different plane wave components of a wave packet have different velocities (i.e., different phase velocities), so that the group velocity (the velocity of the envelope of the packet) differs from the phase velocity, viz.,
\[
v_{\mathrm{phase}} := \frac{\omega}{k} = \frac{p}{2m}
\neq \frac{p}{m} = \frac{d\omega}{dk} =: v_{\mathrm{group}}.
\]
In classical terms we may view the Gaussian wave packet as an ensemble of travelling plane wave particles. Since they have differing velocities, their distribution in space must change with time: The faster waves outrun the slower ones. On the other hand, the particles are all free, so their velocities do not change. Hence, the momentum distribution does not change with time.
The Schrödinger equation is reversible; that is, the time dependence of Eqn. (2.4) also holds for negative t. This means that it is possible to start with a somewhat more exotic wave packet U(−t_0, 0)Ψ whose spatial distribution becomes sharper before it spreads out again.
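Eqn. (2.4) can be verified directly, since the free evolution is diagonal in momentum space: one FFT, multiplication by the phase e^{−ip²t/2mℏ}, and an inverse FFT produce the evolved packet. A sketch (with ℏ = m = 1, arbitrary sample parameters, and NumPy assumed):

```python
import numpy as np

hbar, m = 1.0, 1.0
sigma0, p0, t = 0.8, 1.5, 3.0          # arbitrary sample values

N = 4096
q = np.linspace(-60.0, 60.0, N, endpoint=False)
dq = q[1] - q[0]
psi = (2*np.pi*sigma0**2)**(-0.25) \
      * np.exp(1j*p0*q/hbar - q**2/(4*sigma0**2))   # packet centered at q = 0

# exact free propagation: multiply each plane wave component by its phase
p = 2*np.pi*hbar*np.fft.fftfreq(N, d=dq)
psi_t = np.fft.ifft(np.fft.fft(psi)*np.exp(-1j*p**2*t/(2*m*hbar)))

rho = np.abs(psi_t)**2
rho /= np.sum(rho)*dq
mean_q = np.sum(q*rho)*dq
var_q = np.sum((q - mean_q)**2*rho)*dq

var_exact = sigma0**2*(1 + (hbar*t)**2/(4*m**2*sigma0**4))   # Eqn. (2.4)
assert abs(var_q - var_exact) < 1e-8
assert abs(mean_q - p0*t/m) < 1e-8     # the mean moves classically
```

Replacing t by −t confirms the reversibility remark: the width formula is even in t.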
2.2 The Harmonic Oscillator
We will now study an ever-returning problem in both classical and quantum mechanics: the harmonic oscillator. The harmonic oscillator is in some respects the most fundamental mechanical system, because in very many applications it is a good first approximation. There are dozens of examples of systems exhibiting oscillatory behavior, and in very many of these the harmonic oscillator is a good choice of approximation.
Consider Fig. 2.2 which shows some general one-dimensional potential V (q) with a
local minimum at q0 . A Taylor expansion around q0 yields
\[
V(q) \approx V(q_0) + (q - q_0)\left.\frac{dV}{dq}\right|_{q_0}
+ \frac{1}{2}\left.\frac{d^2V}{dq^2}\right|_{q_0}(q - q_0)^2
= V(q_0) + \frac{1}{2}\,m\omega^2 (q - q_0)^2,
\]
where the first derivative vanishes because q_0 is a minimum, and with ω defined through
\[
m\omega^2 = \left.\frac{d^2V}{dq^2}\right|_{q_0},
\]
and where m is the mass of the particle moving in V . This approximation is the
harmonic oscillator. (The constant term V (q0 ) is usually omitted.)
We will confine our discussion to the one-dimensional oscillator for clarity and simplicity. The full-blown general oscillator is readily obtained by generalizing the arguments we present in this section.
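The expansion above can be carried out symbolically. In this sketch (assuming SymPy; the Morse-type potential V(q) = D(1 − e^{−bq})², with minimum at q₀ = 0, is an arbitrary illustration, not a potential used in this thesis) we extract mω² = V″(q₀) and the quadratic Taylor polynomial:

```python
import sympy as sp

q, D, b, m = sp.symbols('q D b m', positive=True)
V = D*(1 - sp.exp(-b*q))**2            # sample potential, minimum at q0 = 0
q0 = 0

assert sp.diff(V, q).subs(q, q0) == 0  # q0 is a stationary point
k = sp.diff(V, q, 2).subs(q, q0)       # m*omega**2 = V''(q0)
omega = sp.sqrt(k/m)                   # frequency of the harmonic approximation
assert sp.simplify(k - 2*D*b**2) == 0  # V''(0) = 2*D*b**2 for this potential

# harmonic approximation: V(q) ~ V(q0) + k*(q - q0)**2/2
approx = sp.series(V, q, q0, 3).removeO()
assert sp.simplify(approx - k*q**2/2) == 0
```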
2.2.1 The Classical Particle
As stated above the Hamiltonian is given by
\[
H = \frac{p^2}{2m} + \frac{1}{2}\,m\omega^2 q^2.
\]
Here ω ∈ R is the frequency of oscillation in the classical sense. Sometimes the constant
k = mω 2 is used instead, and one identiﬁes ω after the solution has been found. The
coordinate name q is preferred over x in this section, because harmonic oscillators often
occur when q is not a cartesian coordinate.
Figure 2.2: Harmonic oscillator approximation
Let us solve the harmonic oscillator with Hamilton’s equations (1.2). Writing these
out yields
\[
\dot{q} = \frac{p}{m}
\qquad\text{and}\qquad
\dot{p} = -m\omega^2 q,
\]
and thus
\[
\ddot{q} = -\omega^2 q.
\]
This is a standard diﬀerential equation, and we know that its solution is
q(t) = C cos(ωt + φ),
where C is the amplitude of the oscillations and φ is an arbitrary phase shift. The
system thus undergoes periodic oscillations with period T = 2π/ω.
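This solution is easily cross-checked by integrating Hamilton's equations numerically (a sketch assuming SciPy is available; m, ω, and the initial data are arbitrary sample values):

```python
import numpy as np
from scipy.integrate import solve_ivp

m, omega = 1.0, 2.0
C, phase = 1.3, 0.4                       # amplitude and phase shift
q0, p0 = C*np.cos(phase), -m*omega*C*np.sin(phase)

def rhs(t, y):
    q, p = y                              # Hamilton's equations (1.2)
    return [p/m, -m*omega**2*q]

t_eval = np.linspace(0.0, 2*np.pi/omega, 50)   # one full period T
sol = solve_ivp(rhs, (0.0, t_eval[-1]), [q0, p0],
                t_eval=t_eval, rtol=1e-10, atol=1e-12)

q_exact = C*np.cos(omega*sol.t + phase)
assert np.max(np.abs(sol.y[0] - q_exact)) < 1e-7
# the total energy is conserved along the numerical orbit as well
H = sol.y[1]**2/(2*m) + 0.5*m*omega**2*sol.y[0]**2
assert np.max(np.abs(H - H[0])) < 1e-7
```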
From section 1.1 we know that
\[
\frac{dH}{dt} = \frac{\partial H}{\partial t} = 0,
\]
and hence the total energy is conserved for the harmonic oscillator. The situation may be illustrated through Fig. 2.3, which depicts a particle of mass m trapped in the harmonic oscillator potential; we display it here as sliding on a vertical wire shaped as the potential V(q), which makes a roller-coaster analogy.1

Figure 2.3: The classical harmonic oscillator as a roller-coaster
Since kinetic energy cannot be negative, the particle cannot be found at q coordinates outside the intersection of the graph of V(q) and the horizontal line of constant total energy T + V = H. The region ℝ \ [−C, C] is called the classically forbidden region. In quantum mechanics we shall see that the particle may still have a finite probability of being found outside this region.
We have established the fact that a classical particle moving in a harmonic oscillator performs oscillations. Indeed, from the roller-coaster analogy of the system it should be obvious to anyone who has played with small marbles and fruit bowls. When we turn to the quantum mechanical system things get a little bit different. Instead of a clear oscillatory behavior we find discrete eigenstates of the Hamiltonian. These do not "move" in the classical sense, since they are stationary states, but for systems with very high energy (compared to ℏω) they correspond to the average behavior of the classical system, as we shall see.
2.2.2 The Quantum Particle
We now have a good understanding of the classical harmonic oscillator. Turning to the
quantum mechanical system, it turns out that it is a little bit more complicated to solve.
Solving the time independent Schrödinger equation in position representation, that is
as a diﬀerential equation eigenvalue problem, requires tedious work and knowledge
of the so-called Hermite diﬀerential equation and the Hermite polynomials. We will
instead choose an approach based on operator algebra which makes the solution readily
obtainable.
The trick is to define a lowering operator a as
\[
a := \sqrt{\frac{m\omega}{2\hbar}}\, Q + \frac{i}{\sqrt{2m\omega\hbar}}\, P.
\tag{2.5}
\]
The Hermitian adjoint of a, called the raising operator, is easily seen to be
\[
a^\dagger = \sqrt{\frac{m\omega}{2\hbar}}\, Q - \frac{i}{\sqrt{2m\omega\hbar}}\, P.
\tag{2.6}
\]
The names of the operators will be justiﬁed in Theorem 11.
A number x2 +y 2 may be factorized in the complex plane as (x−iy)(x+iy), and this
is partly the idea behind the deﬁnition of a and a† : Could we write the Hamiltonian
as a product in the same way?
We deﬁne the Hermitian number operator N as
N := a† a.
The name will be justiﬁed in Theorem 12 below. When we compare H with N , we see
that since a and a† do not commute, we get an extra term proportional to [Q, P ], viz.,
\[
N = a^\dagger a
= \frac{1}{\hbar\omega}\left(\frac{P^2}{2m} + \frac{m\omega^2}{2}\,Q^2\right) + \frac{i}{2\hbar}\,[Q, P]
= \frac{1}{\hbar\omega}\,H - \frac{1}{2}.
\]
Fortunately this term is just an energy shift which we may safely ignore.2 Analyzing
the spectrum of N is much easier than analyzing H directly, as we will see. We calculate
the product in reverse order, viz.,
\[
a a^\dagger = \frac{1}{\hbar\omega}\,H + \frac{1}{2},
\]
1 This analogy is perfectly acceptable. The acceleration of a particle sliding on a surface shaped like V(q) is proportional to V′(q), as may easily be deduced.
2 Eigenvectors are unchanged if we make the transition H → H − σ1, where σ is any scalar.
and this yields the commutator relation
\[
[a, a^\dagger] = 1.
\]
The following theorem justiﬁes the names of the operators a and a† .
Theorem 11
Let Φn be orthonormal eigenvectors of N := a† a with eigenvalue n. Assume [a, a† ] = 1.
Then,
\[
a^\dagger \Phi_n = \sqrt{n+1}\;\Phi_{n+1}
\tag{2.7}
\]
and
\[
a\Phi_n = \sqrt{n}\;\Phi_{n-1}.
\tag{2.8}
\]
In other words: The operator a transforms Φn into Φn−1 , and a† promotes Φn to Φn+1 .
The lowering and raising operators produce a whole lot of eigenvectors of N once we
are given one of them. A collective term for a and a† is ladder operators.

Proof: Note that
\[
\|a\Phi_n\|^2 = (\Phi_n, a^\dagger a\Phi_n) = n\,\|\Phi_n\|^2,
\]
and that
\[
\|a^\dagger\Phi_n\|^2 = (\Phi_n, a a^\dagger\Phi_n) = (\Phi_n, ([a, a^\dagger] + a^\dagger a)\Phi_n) = (n+1)\,\|\Phi_n\|^2.
\]
We investigate the action of N on aΦ_n and a†Φ_n:
\[
N a\Phi_n = ([a^\dagger, a] + a a^\dagger)\,a\Phi_n = (-1 + a a^\dagger)\,a\Phi_n = (n-1)\,a\Phi_n,
\]
\[
N a^\dagger\Phi_n = a^\dagger([a, a^\dagger] + a^\dagger a)\Phi_n = a^\dagger(1 + N)\Phi_n = (n+1)\,a^\dagger\Phi_n.
\]
Hence aΦ_n is an eigenvector of N with eigenvalue n − 1, and a†Φ_n is an eigenvector of N with eigenvalue n + 1. With the assumption that the Φ_n are orthonormal, the result follows at once.

We will now prove that the only eigenvalues of N are exactly the non-negative integers. Note that Theorem 11 does not say this. It could happen that some eigenvalue existed between n and n + 1. Furthermore, the theorem does not say whether the Φ_n are degenerate or not, that is, whether there exists more than one Φ_n with eigenvalue n.
Theorem 12
Assume [a, a† ] = 1. The eigenvalues n of the number operator N := a† a are integral
and non-negative.
Proof: For any Ψ ∈ H, we define Φ = aΨ. The norm of Φ satisfies
\[
\|\Phi\|^2 = (\Psi, a^\dagger a\Psi) \geq 0,
\]
so a†a is positive semi-definite. (That N is also Hermitian is obvious.)
Assume Φ_n to be an eigenvector of N with eigenvalue n (where n is now not necessarily integral). From Theorem 11 we get that repeated use of a on Φ_n yields
\[
a^m \Phi_n \propto \Phi_{n-m},
\]
an eigenvector of N with eigenvalue n − m. But since N is positive semi-definite, no eigenvalue can be negative. The lowering process must therefore terminate, and by Eqn. (2.8) it terminates only when
\[
n = m
\]
for some integer m. Thus n is integral and non-negative. For the termination to occur, the lowest state must exist and have eigenvalue 0.
We still do not know whether all eigenvectors may be generated from only Φ0 by
repeated use of a† . It could happen that we missed some eigenvector in case of a
degenerate eigenvalue n. However, in one dimension we cannot have degeneracy, so the theorem applies cleanly to N = a†a in the present case.3
We have now deduced that the Hamiltonian has eigenvalues given by
\[
E_n = \hbar\omega\left(n + \frac{1}{2}\right), \qquad n = 0, 1, 2, \ldots
\]
Hence, the spectrum of the one-dimensional quantum mechanical harmonic oscillator is evenly spaced with spacing ℏω.
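The ladder-operator algebra that produced this spectrum is easy to illustrate with finite matrices. In the number basis truncated at N_max states (with ℏ = ω = 1; a sketch assuming NumPy), a is an upper shift matrix with entries √n, and the truncated N + ½ reproduces the evenly spaced eigenvalues:

```python
import numpy as np

Nmax = 40
n = np.arange(1, Nmax)
a = np.diag(np.sqrt(n), k=1)     # a|n> = sqrt(n)|n-1>    (lowering)
ad = a.conj().T                  # a†|n> = sqrt(n+1)|n+1> (raising)

comm = a @ ad - ad @ a
# [a, a†] = 1 holds exactly except in the last basis state, a truncation artifact
assert np.allclose(np.diag(comm)[:-1], 1.0)

H = ad @ a + 0.5*np.eye(Nmax)    # H/(hbar*omega) = N + 1/2
E = np.linalg.eigvalsh(H)
assert np.allclose(E[:10], np.arange(10) + 0.5)   # E_n = n + 1/2
```

The truncation artifact in the last row is a first taste of a theme that recurs throughout numerical work: a finite basis can only represent the operator algebra approximately.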
Let us find the ground state Φ_0, i.e., the state with the lowest energy E_0 = ℏω/2. As noted in the proof of Theorem 12, operating with a will annihilate it, viz.,
\[
a\Phi_0 = \left(\sqrt{\frac{m\omega}{2\hbar}}\, Q + \frac{i}{\sqrt{2m\omega\hbar}}\, P\right)\Phi_0 = 0.
\]
This yields a differential equation
\[
\frac{\partial\Phi_0}{\partial q} = -\frac{m\omega}{\hbar}\, q\,\Phi_0(q),
\]
whose solution is easily found by inspection:
\[
\Phi_0(q) = \left(\frac{m\omega}{\pi\hbar}\right)^{1/4} e^{-m\omega q^2/2\hbar}.
\]
To produce excited states (that is, states of higher energy than the ground state) of the harmonic oscillator we may operate repeatedly on Φ_0 with a†:
\[
\Phi_n = \frac{1}{\sqrt{n!}}\,(a^\dagger)^n\,\Phi_0
= \frac{1}{\sqrt{n!}}\left[\sqrt{\frac{m\omega}{2\hbar}}\left(q - \frac{ip}{m\omega}\right)\right]^n \Phi_0.
\]
We get
\[
\begin{aligned}
\Phi_n(q) &= \frac{1}{\sqrt{n!}}\left(\frac{m\omega}{2\hbar}\right)^{n/2}\left(q - \frac{\hbar}{m\omega}\frac{\partial}{\partial q}\right)^n \Phi_0 \\
&= \frac{1}{\sqrt{n!}}\left(\frac{m\omega}{2\hbar}\right)^{n/2}\left(\frac{m\omega}{\pi\hbar}\right)^{1/4}\left(q - \frac{\hbar}{m\omega}\frac{\partial}{\partial q}\right)^n e^{-m\omega q^2/2\hbar}.
\end{aligned}
\]
To simplify this expression, we introduce a change of variable from q to x, viz.,
\[
x := \sqrt{\frac{m\omega}{\hbar}}\, q.
\]
This yields
\[
\Phi_n(x) = \frac{1}{\sqrt{2^n\, n!}}\left(\frac{m\omega}{\pi\hbar}\right)^{1/4}\left(x - \frac{\partial}{\partial x}\right)^n e^{-x^2/2}.
\]
Let us summarize the results in a theorem. In addition we want to relate our
eigenfunctions to the so-called Hermite polynomials Hn (x), see Ref. . A basic
relation concerning these is
\[
\left(x - \frac{d}{dx}\right)^n e^{-x^2/2} = H_n(x)\, e^{-x^2/2}.
\]
Using this, we state our summarizing theorem.
3 A proof of this can be found in Ref. , and rests upon uniqueness of solutions of ordinary
diﬀerential equations.
Figure 2.4: Lowest eigenstates of the harmonic oscillator
Theorem 13
The energies of the harmonic oscillator are given by
\[
E_n = \hbar\omega\left(n + \frac{1}{2}\right),
\]
and the corresponding eigenfunctions are given by
\[
\Phi_n(x) = \frac{1}{\sqrt{2^n\, n!}}\left(\frac{m\omega}{\pi\hbar}\right)^{1/4}\left(x - \frac{\partial}{\partial x}\right)^n e^{-x^2/2}
= \frac{1}{\sqrt{2^n\, n!}}\left(\frac{m\omega}{\pi\hbar}\right)^{1/4} H_n(x)\, e^{-x^2/2},
\]
where H_n(x) are the Hermite polynomials and
\[
x = \sqrt{\frac{m\omega}{\hbar}}\, q.
\]
Fig. 2.4 shows the ﬁrst four eigenfunctions for the harmonic oscillator. The other
functions look very similar: An even or odd number of squiggles that fade out at each
side.
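Theorem 13 translates directly into code. In the dimensionless variable x the normalization constant becomes (2ⁿ n! √π)^{−1/2}, and orthonormality of the eigenfunctions can be verified by simple quadrature (a sketch assuming NumPy; `hermval` evaluates the physicists' Hermite polynomials Hₙ):

```python
import math
import numpy as np
from numpy.polynomial.hermite import hermval

def phi(n, x):
    """Oscillator eigenfunction of Theorem 13 in the dimensionless variable x."""
    c = np.zeros(n + 1); c[n] = 1.0     # coefficient vector selecting H_n
    norm = 1.0/np.sqrt(2.0**n * math.factorial(n) * np.sqrt(np.pi))
    return norm * hermval(x, c) * np.exp(-x**2/2)

x = np.linspace(-12.0, 12.0, 4001)
dx = x[1] - x[0]
# Gram matrix of the first six eigenfunctions; should be the identity
G = np.array([[np.sum(phi(i, x)*phi(j, x))*dx for j in range(6)]
              for i in range(6)])
assert np.allclose(G, np.eye(6), atol=1e-8)
```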
It is instructive to see what happens with the probability density |Φ_n(q)|² as n grows. According to the correspondence principle (see section 2.4), classical behavior should be obtained in the limit of large quantum numbers, that is, for large n. When we say "behavior" in this context, we mean that the probability density should approach that of the classical harmonic oscillator. Let us derive this distribution.
Imagine that the classical harmonic oscillator performs very rapid oscillations.
When we measure position, our eyes do not have any chance of catching exactly where
it is, so we cannot anticipate the position. Then it becomes clear that the probability
density P(q) dq for finding the particle between q and q + dq must be given by
\[
P(q)\,dq = \frac{|dt|}{T/2}.
\]
This is because during one half period of oscillation, which takes the time T/2, the particle will have visited all q ∈ [−C, C]. The time spent between q and q + dq is |dt|.
Diﬀerentiating q(t), we get
\[
\frac{dq}{dt} = -C\omega\sin(\omega t) = -\omega\sqrt{C^2 - q(t)^2}.
\]
Figure 2.5: Probability density for n = 20 together with the classical probability density. Dimensionless units are applied: the horizontal unit is (ℏ/mω)^{1/2} and the vertical unit is (mω/ℏ)^{1/2}. Note the dashed lines that mark the borders of the classically allowed regime.
Thus,
\[
|dt| = \frac{dq}{\omega\sqrt{C^2 - q^2}}.
\]
Remembering that T = 2π/ω we get
\[
P(q)\,dq = \frac{1}{\pi}\,\frac{dq}{\sqrt{C^2 - q^2}}
\tag{2.9}
\]
for the probability density.
In Fig. 2.5 we see the classical probability density along with the probability density
for n = 20. The probability density clearly approaches that of the classical oscillator as
we hoped. As seen in Fig. 2.3 in the discussion of the classical harmonic oscillator, q is classically confined to the region [−C, C]. On the other hand, we see that the probability density |Φ_20(q)|² is nonzero outside this region for the quantum mechanical system. This always happens in quantum systems when V is finite: There is always some probability that the particle will be found in the classically forbidden region dictated by the total energy H of the bound system.
This is an example of the characteristic phenomenon in quantum mechanics called
tunnelling. Quantum particles tunnel through classically forbidden regions.
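The size of this effect is easy to quantify with Theorem 13. In dimensionless units the state Φₙ has energy n + ½, so the classical turning points sit at C = √(2n + 1). The sketch below (assuming NumPy) integrates |Φ₂₀|² over the classically forbidden region:

```python
import math
import numpy as np
from numpy.polynomial.hermite import hermval

n = 20
C = math.sqrt(2*n + 1)                 # classical turning points for E = n + 1/2

c = np.zeros(n + 1); c[n] = 1.0        # coefficient vector selecting H_20
norm = 1.0/math.sqrt(2.0**n * math.factorial(n) * math.sqrt(math.pi))
x = np.linspace(-15.0, 15.0, 20001)
dx = x[1] - x[0]
rho = (norm*hermval(x, c)*np.exp(-x**2/2))**2

assert abs(np.sum(rho)*dx - 1.0) < 1e-8       # the state is normalized
p_forbidden = np.sum(rho[np.abs(x) > C])*dx   # probability outside [-C, C]
assert 0.001 < p_forbidden < 0.16             # small, but clearly nonzero
```

For the ground state the same computation gives erfc(1) ≈ 0.157, and the forbidden-region probability decreases slowly with n, in accordance with the correspondence principle.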
2.3 The Hydrogen Atom
The hydrogen atom is a two-body problem in which an electron orbits a proton. Classically the forces are given by the Coulomb force which is attractive for a system of
two opposite charges.
To study the hydrogen atom is interesting in several ways. It is one of the few
analytically solvable three-dimensional problems in quantum mechanics, and ﬁnding
the structure of the eigenstates of a non-trivial system could be illuminating.
In 1913 Niels Bohr proposed a modiﬁed version of Rutherford’s hydrogen atom in
which quantization principles were incorporated (though in a rather ad hoc manner)
to get rid of certain classical inconsistencies. Rutherford’s model assumed a massive
and dense nucleus of positive charge and classically orbiting electrons. (See Refs. [10,
11].) Orbiting classical electrons are accelerating. Accelerating charges emit radiation
according to Maxwell’s equations, and hence loss of energy is inevitable. The orbit is
not stable, making the electron eventually collide with the heavy nucleus. Indeed, the
large charge-to-mass ratio of the electron makes the lifetime of the atom rather short,
45
Simple Quantum Mechanical Systems
viz.,
τ ≈ 1.5 · 10−11 s.
See Ref.  for a discussion of the classical instability of the orbiting electron.
Bohr proposed a model in which the electron was allowed to orbit only in circular paths whose angular momentum was an integral multiple of ℏ, viz.,
\[
L = n\hbar.
\]
(For this reason ℏ is also called the quantum of action.) The paths were still classical,
as x and p were well-deﬁned quantities. In addition to the quantization hypothesis
Bohr assumed that radiation did not occur during orbital motion; only in transitions
between diﬀerent orbits. In that case, photons of energy equal to the energy diﬀerence
between orbits were emitted or absorbed.
However unsatisfactory for various reasons, Bohr’s model featured several important
ideas, among these quantization of energy for a material system, interaction with a
quantized electromagnetic ﬁeld and excellent correspondence with experimental data.4
To deﬁne the problem at hand, assume that the nucleus, i.e., a proton with mass
mp , is at rest at the origin in R3 . Let the electron with mass me be located at the
coordinates r relative to the proton.
As the proton mass is much larger than the electron mass, viz.,
\[
\frac{m_p}{m_e} \approx 1836,
\]
we may neglect the motion of the proton on classical grounds; the large mass ratio leads to small errors. However, a two-body problem such as the hydrogen atom is best studied in the center of mass reference frame, i.e., the moving frame in which the center of mass is at rest. Describing the electron's position relative to the center is equivalent to using the so-called reduced mass µ instead of the electron mass in the equations of motion for the electron. Instead of neglecting the proton motion in our equations we may instead use the center of mass description, and thus improve our description both theoretically and numerically.
The center of mass Hamiltonian is identical to the original proton-neglecting Hamiltonian, but with the electron mass replaced with the reduced mass µ. In other words
we will make the substitution
\[
m_e \longrightarrow \mu := \frac{m_e\, m_p}{m_e + m_p}.
\]
See for example Refs. [10, 22] for a thorough discussion of the center of mass system
and the Hydrogen atom.
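Numerically the substitution is a very small correction; a one-line check (the mass ratio 1836.15 is the standard value to the digits quoted):

```python
ratio = 1836.15                        # m_p / m_e
mu_over_me = ratio/(1.0 + ratio)       # mu/m_e = (m_p/m_e)/(1 + m_p/m_e)
assert abs(mu_over_me - 0.999456) < 1e-5   # mu differs from m_e by ~0.054%
```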
The Coulomb force is a central force, i.e., its line of action is the line joining the
proton and the electron. Its value depends only on the distance between the particles.
Such a conservative force may be expressed as a gradient, viz.,
F = −∇V (r),
in which the potential V only depends on r, the distance between the particles, i.e. the
distance from the origin.
For the Coulomb force we have
\[
V(r) = -\frac{k e^2}{r},
\tag{2.10}
\]
4 Notice that quantization of light was proposed before non-relativistic quantum mechanics even
though photons belong to a super-theory incorporating relativity.
Figure 2.6: Spherical coordinates
where e is the fundamental charge and k is a constant whose value is such that
ke2 ≈ 1.44 eV · nm.
Spherical symmetry suggests that we use spherical coordinates in our description, viz.,
\[
x = r\cos\phi\sin\theta, \qquad
y = r\sin\phi\sin\theta, \qquad
z = r\cos\theta.
\]
See Fig. 2.6 for a visualization.
We will study the general case with arbitrary potentials to emphasize that the
methods are applicable to other systems as well, such as the three dimensional harmonic
oscillator.
2.3.1 The Classical System
We will consider the classical system only brieﬂy to provide means for comparing the
quantum mechanical system to the classical system.
The Hamiltonian of the problem at hand is given by
\[
H = \frac{p^2}{2\mu} + V(r).
\]
Let us ﬁnd a constant of motion for such a system, namely the angular momentum L.
The torque on the particle is zero, viz.,
\[
\tau = r \times F = 0,
\]
because F = −∇V is parallel to r. Thus, angular momentum is conserved, viz.,
\[
\frac{dL}{dt} = v \times (\mu v) + r \times F = 0.
\]
We may decompose the velocity into a radial part v_r parallel to r and a part v_⊥ in the plane orthogonal to r; see Fig. 2.6. Then it is easily seen that
\[
L = \mu r v_\perp,
\]
and thus the Hamiltonian may be rewritten as
\[
H = \frac{p_r^2}{2\mu} + \underbrace{\frac{L^2}{2\mu r^2} + V(r)}_{V_{\mathrm{eff}}(r)}.
\tag{2.11}
\]
This is the Hamiltonian of a one-dimensional system in which r is the (generalized)
coordinate and where Veﬀ (r) is the potential.
Note that since ∂H/∂t = 0 the total energy H is conserved in this system. It can be
shown that the motion of the electron in the Coulomb potential lies on a conic section,
depending on the total energy H: For H < 0 the motion traces an ellipse, for H = 0
the motion traces a parabola and for H > 0 the trace is a hyperbola. Thus all systems
in which H < 0 yield bound motion along bounded conic sections, while H ≥ 0 yields unbound motion.
2.3.2 The Quantum System
The derivation in this section will not be complete. We include only the most important
aspects of the method and leave it to the reader to ﬁll in the details referred to. A
good account is given in Ref. , and is recommended for further reading.
Turning to the Hamiltonian again, we have in the position representation
\[
H = -\frac{\hbar^2}{2\mu}\,\nabla^2 + V(r),
\]
and we need the Laplacian in spherical coordinates. This is given by
\[
\nabla^2 = \frac{\partial^2}{\partial r^2} + \frac{2}{r}\frac{\partial}{\partial r}
+ \frac{1}{r^2}\left[\frac{\partial^2}{\partial\theta^2} + \cot\theta\,\frac{\partial}{\partial\theta} + \frac{1}{\sin^2\theta}\frac{\partial^2}{\partial\phi^2}\right],
\]
as can be looked up in for example Ref. . The expression in brackets turns out to be (up to a factor −1/ℏ²) the quantum mechanical orbital momentum operator L² as described in section 1.5.5.5 Thus, our Hamiltonian reads
\[
H = -\frac{\hbar^2}{2\mu}\left(\frac{\partial^2}{\partial r^2} + \frac{2}{r}\frac{\partial}{\partial r}\right) + \frac{L^2}{2\mu r^2} + V(r).
\tag{2.12}
\]
The ﬁrst term is just the radial kinetic energy, viz.,
\[
p_r^2 = -\hbar^2\left(\frac{\partial^2}{\partial r^2} + \frac{2}{r}\frac{\partial}{\partial r}\right).
\]
We see from Eqn. (2.12) that L2 commutes with H, that is we may ﬁnd a set of simultaneous orthonormal eigenvectors to both operators. (Thus L2 is also a quantum
mechanical constant of motion according to Theorem 4, section 1.5.4.) Since L2 commutes with the angular momentum components Li , these are also constants of motion.
We know that the eigenstates of L2 and Lz are parameterized with two integral
(or half-integral) quantum numbers l and m. For the orbital angular momentum it
turns out that l may take on all integral values and no half-integral values. To be more
speciﬁc, the eigenfunctions Ylm (θ, φ) of L2 and Lz are the so-called spherical harmonics;
a basis for the square-integrable functions deﬁned on a sphere. For a discussion of these
functions see for example Refs. [10, 22].
We may try separation of variables to obtain the eigenstates Ψnlm of H. Let us
write
Ψnlm = R(r)Ylm (θ, φ).
This works perfectly well, and upon insertion into the time independent Schrödinger
equation (1.7) we end up with the equation
\[
\frac{\partial^2}{\partial r^2}(rR) + \left[\frac{2\mu}{\hbar^2}\bigl(E - V(r)\bigr) - \frac{l(l+1)}{r^2}\right](rR) = 0.
\]
5 Given the diﬀerential operators in Cartesian coordinates, it is no diﬃcult task (albeit tedious) to
calculate the spherical coordinate version of Li .
The auxiliary function u = rR(r) therefore satisﬁes a modiﬁed time independent
Schrödinger equation called the radial equation, viz.,
\[
-\frac{\hbar^2}{2\mu}\frac{\partial^2 u}{\partial r^2}
+ \underbrace{\left[V(r) + \frac{\hbar^2\, l(l+1)}{2\mu r^2}\right]}_{V_{\mathrm{eff}}(r)} u(r) = E\,u(r).
\tag{2.13}
\]
This is clearly the quantum mechanical counterpart of Eqn. (2.11).6 The extra term in
the eﬀective potential Veﬀ is called the centrifugal term. We see that for diﬀerent l we
have diﬀerent radial equations and hence diﬀerent solutions R(r). We expect therefore
that R must be labelled with both n and l, where the former may be discrete or continuous in nature. It turns out that n is discrete for E < 0 and continuous otherwise.
We may at once devise some constraints on u(r) for the actual solution Ψ to be
physical (or rather mathematical). First of all we must have
u(0) = 0,
because we want R(r) = u(r)/r not to be divergent. Second, if we let r approach infinity, then the radial equation becomes
\[ \frac{\partial^2 u}{\partial r^2} \sim -\frac{2\mu E}{\hbar^2}\, u(r). \]
For E < 0 the only acceptable solution is
\[ u(r) \sim e^{-\alpha r}, \]
where \( \alpha = \sqrt{-2\mu E/\hbar^2} \). If we on the other hand let r approach zero, and assume that
V (r) = O(r −k ) with k < 2, the centrifugal term will dominate the radial equation,
giving
\[ \frac{\partial^2 u}{\partial r^2} \sim \frac{l(l+1)}{r^2}\, u(r). \]
The acceptable solution giving bounded R(r) in this case is
\[ u(r) \sim r^{l+1}, \]
or
\[ R(r) \sim r^{l}, \qquad r \text{ small.} \]
Let us summarize what we have found so far in a theorem.
Theorem 14
Assume that we have a central symmetric Hamiltonian of the form
\[ H = \frac{p^2}{2\mu} + V(r), \]
where
\[ V(r) = O(r^{-k}), \qquad k < 2. \]
Then the solutions to the time independent Schrödinger equation are of the form
\[ \Psi(r, \theta, \phi) = R(r)\, Y_{lm}(\theta, \phi), \]
6 The correspondence between the classical radial equation and the quantum mechanical equation
is a mathematical coincidence. As shown in Ref.  it is for example not true in two dimensions.
where Ylm are the spherical harmonics and where R(r) satisfies the radial equation
\[ -\frac{\hbar^2}{2\mu}\frac{\partial^2 (rR)}{\partial r^2} + \Bigl[V(r) + \frac{\hbar^2\, l(l+1)}{2\mu r^2}\Bigr](rR) = E\,(rR). \]
Furthermore, the asymptotic behavior of R(r) for bound states (i.e., E < 0) is
\[ R(r) \sim r^{l}, \qquad r \text{ small}, \]
and
\[ R(r) \sim \exp\Bigl(-\sqrt{-2\mu E/\hbar^2}\; r\Bigr), \qquad r \text{ large.} \]
We will now consider the Coulomb potential (2.10) in particular. We will consider bound states, states in which E < 0. These are exactly the states that classically trace out a bounded orbit. We expect these to remain bounded, that is, the solutions to the time independent Schrödinger equation are expected to be square integrable.
First we rewrite Eqn. (2.13) on dimensionless form, introducing the scalings
\[ \rho = r\sqrt{-8\mu E/\hbar^2} \qquad (2.14) \]
and
\[ \lambda = \frac{ke^2}{\hbar}\sqrt{\frac{\mu}{-2E}}. \qquad (2.15) \]
This yields
\[ \frac{\partial^2 u}{\partial \rho^2} + \Bigl(\frac{\lambda}{\rho} - \frac{l(l+1)}{\rho^2} - \frac{1}{4}\Bigr) u(\rho) = 0. \qquad (2.16) \]
The strategy uses Theorem 14: we try to incorporate the asymptotic behavior. For large ρ we have
\[ u(\rho) \sim e^{-\rho/2}, \]
and therefore we write
\[ u(\rho) = e^{-\rho/2}\, v(\rho). \]
Inserting this into the new radial equation yields a new differential equation for v(ρ), viz.,
\[ \frac{\partial^2 v}{\partial \rho^2} - \frac{\partial v}{\partial \rho} + \frac{\lambda}{\rho}\, v - \frac{l(l+1)}{\rho^2}\, v = 0. \]
We will not solve this equation explicitly as the arguments are a bit lengthy. See for
example Ref. . However, the acceptable solutions turn out to be polynomials of
degree λ − 1, thereby imposing a quantization on the energy E through Eqn. (2.15).
We deﬁne the principal quantum number n = λ, and we have
n ≥ l + 1,
so that for each n, the orbital momentum quantum number is limited. Note that the
radial function R depends on both n and l, but that the energy E only depends on n.
For an arbitrary spherical symmetric potential we would expect a dependence on l as
well.
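The quantization just described can be spot-checked numerically. Anticipating the finite difference methods of chapter 4, the sketch below discretizes the radial equation in units where the exact bound-state energies are E_n = −1/n²; the grid parameters are illustrative choices, not prescribed by the text.

```python
import numpy as np

# Finite difference sketch: discretize -u'' + [l(l+1)/r^2 - 2/r] u = E u
# on r in (0, R] with u(0) = u(R) = 0, in units where E_n = -1/n^2.
h, N, l = 0.05, 800, 0          # step size, interior points, angular momentum
r = h * np.arange(1, N + 1)     # r_j = j h, j = 1, ..., N
V = l * (l + 1) / r**2 - 2.0 / r
H = (np.diag(2.0 / h**2 + V)
     - np.diag(np.ones(N - 1) / h**2, 1)    # three-point second difference
     - np.diag(np.ones(N - 1) / h**2, -1))
E = np.linalg.eigvalsh(H)       # ascending eigenvalues of the discrete H
print(E[:3])                    # close to -1, -1/4, -1/9
```

The lowest eigenvalues reproduce the discrete spectrum for E < 0, while the positive eigenvalues are box-quantized stand-ins for the continuous part of the spectrum.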
We summarize the properties of the radial solution R(r) in a theorem, which we
leave without further proof.
Theorem 15
The hydrogen atom's radial wave function is of the form
\[ R_{nl}(r) = C \rho^{l} e^{-\rho/2} L_{n+l}^{2l+1}(\rho), \]
where ρ is given by
\[ \rho = r \cdot \frac{2}{a_0 n}, \qquad a_0 := \frac{\hbar^2}{ke^2 \mu}, \]
where a0 is called the Bohr radius and C is a normalization constant. The energy is given by
\[ E_n = -\frac{(ke^2)^2 \mu}{2\hbar^2}\,\frac{1}{n^2}. \]
The function \( L_{n+l}^{2l+1}(\rho) \) is the (2l + 1)-th associated polynomial of the (n + l)-th Laguerre polynomial, and it has degree n − l − 1.
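Theorem 15 can be sanity-checked numerically for the lowest s states. The explicit normalized forms used below are the standard textbook ones, stated here as an assumption since the theorem leaves the constant C unspecified; units are chosen so that a0 = 1.

```python
import numpy as np

# Check normalization, orthogonality and mean radius for the lowest s states,
# assuming the standard normalized forms (a0 = 1):
#   R_10(r) = 2 e^{-r},   R_20(r) = (2 - r) e^{-r/2} / (2 sqrt(2)).
dr = 1e-3
r = np.arange(0.0, 60.0, dr)
R10 = 2.0 * np.exp(-r)
R20 = (2.0 - r) * np.exp(-r / 2) / (2.0 * np.sqrt(2.0))
integ = lambda f: float(np.sum(f) * dr)   # plain Riemann sum on a fine grid

norm10 = integ(R10**2 * r**2)      # -> 1 (normalization with weight r^2 dr)
norm20 = integ(R20**2 * r**2)      # -> 1
overlap = integ(R10 * R20 * r**2)  # -> 0 (orthogonality, same l)
rmean10 = integ(R10**2 * r**3)     # <r> = 3/2 for n = 1
rmean20 = integ(R20**2 * r**3)     # <r> = 6 for n = 2
print(norm10, norm20, overlap, rmean10, rmean20)
```

The growth of the mean radius with n anticipates the discussion of Bohr's model below.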
We have now found all the solutions to the time independent Schrödinger equation,
viz.,
Ψnlm (r, θ, φ) = Rnl (r)Ylm (θ, φ).
Visualizing these three dimensional functions is not very easy, but in Ref.  there is
a whole chapter devoted to this.
This concludes our exploration of the hydrogen atom. There are however some features of the solution to notice which aid us with a general understanding of bound states in three dimensions:
– The stationary states, i.e. the wave functions Ψnlm , and the energy eigenvalues
are indexed by three quantum numbers. We choose a numbering such that higher
quantum numbers yield higher energy. See also section 2.4 on the correspondence
principle.
– There are an inﬁnite number of bound states with E < 0 for the hydrogen atom.
This is not a general fact. There are potentials without even one bound state
even though the potential has a global minimum.
– We also have eigenstates for the hydrogen atom with energy E > 0. Classically,
these are not bound, and this is also the case for the quantum mechanical system:
The eigenstates of H turn out to be non-normalizable, just as the eigenstates of
the free particle. For the unbound states the energy spectrum is continuous,
contrary to the bound states.
– As for the one-dimensional harmonic oscillator there is a ﬁnite probability of
ﬁnding the particle at arbitrary large r, thus violating the strict boundaries of
the classically allowed regime.
– The energies obtained are actually identical to those found by Bohr in his atomic
model.
– For large n, the energy difference En − En−1 approaches zero, making in a sense
a classical limit in which the energy is continuous. This is an example of the
correspondence principle, which Bohr proposed for his quantum theory. We will
say more on this in section 2.4.
– As we have explained, the square magnitude |Ψ(x )|2 of the wave function is interpreted as the probability density of locating the electron at x upon measurement.
In our case we have
|Ψnlm (r, θ, φ)|2 = Rnl (r)2 |Ylm (θ, φ)|2 .
Since \( |Y_{00}(\theta, \phi)|^2 = 1/4\pi \) is constant, we get for the l = m = 0 states
\[ |\Psi_{n00}(r, \theta, \phi)|^2 = \frac{R_{n0}(r)^2}{4\pi} = \frac{C^2}{4\pi}\, e^{-\rho} \bigl[L^1_{n+1}(\rho)\bigr]^2. \]
It is |Ψ(x)|² dxdydz that measures the probability as opposed to the probability density. With spherical coordinates we have
\[ |\Psi(\mathbf{x})|^2\, dx\, dy\, dz = |\Psi(r, \theta, \phi)|^2\, r^2 \sin\theta\, dr\, d\theta\, d\phi, \]
and thus, after integrating over the angles,
\[ R_{n0}(r)^2\, r^2\, dr = C'\, r^2 e^{-\rho} \bigl[L^1_{n+1}(\rho)\bigr]^2\, dr, \]
where C′ is a normalization constant, is the probability of finding the electron at a distance between r and r + dr from the nucleus.
Recall that ρ = 2r/a0 n, and thus the exponential term falls off more slowly with higher energy (i.e., higher n). Without analyzing the Laguerre-polynomial term further, it is reasonable to accept that the electron's average distance from the nucleus increases with energy, a feature expected from Bohr's model. Indeed, it turns out that the mean radius ⟨r⟩ grows as n², just like the model's postulated quantized radii.
2.4
The Correspondence Principle
After analyzing a few simple quantum mechanical systems we are in position to state
the correspondence principle, ﬁrst put forward by Bohr. The principle is quite simple:
In the limit of high quantum numbers classical behavior should be reproduced. Alternatively, we may take the limit ℏ → 0, as the energies always will contain Planck's constant as a factor.
If we for instance consider the harmonic oscillator, the energies are given by
\[ E_n = \hbar\omega\Bigl(n + \frac{1}{2}\Bigr), \]
and in letting ℏ → 0 we must compensate by letting n → ∞ to keep the energy of the system fixed.
As seen in Fig. 2.5 on page 45, the quantum mechanical probability density approaches the classical density in the limit n → ∞,7 equivalently if ℏ → 0 and the energy E is kept constant.
The principle simply expresses that quantum mechanics is supposed to be a super-theory for Newtonian mechanics in the sense that everything obeying Newton's laws is supposed to also obey non-relativistic quantum mechanics, but with the non-classical features of the dynamics obscured by the fact that ℏ is very small compared to macroscopic quantities.
On the other hand, if the correspondence principle is violated for some system, then we may find a macroscopic realization of a system in which quantum effects are visible.
7 We must choose a numbering of the energy eigenvalues such that energy is increasing with higher
quantum numbers.
Chapter 3
The Time Dependent
Schrödinger Equation
This chapter is a detailed description of the time dependent Schrödinger equation for
a single particle in a classical electromagnetic ﬁeld. This is the domain of the systems
we wish to study with our numerical methods.
3.1
The General One-Particle Problem
We study the quantum description of a single particle. We shall in general let µ be the
mass of the particle. In addition to existing in (at most) three dimensions the particle
also has spin s. Hence, the wave function for our particle is a square integrable function
of (at most) three real variables (i.e., Ω ⊂ Rd , d ≤ 3) into the complex vector space Cn ,
with n = 2s + 1. For each point in Ω, the wave function has n complex components.
We denote each component function by Ψ(σ) , where σ = 1, . . . , n.
We let S be the vector of spin operators Si for the particle. In our chosen representation these are n × n matrices with components \( S_i^{(\sigma,\tau)} \). We take the standard basis vectors to be eigenvectors of S3 (i.e., the spin along the z-axis) with increasing eigenvalues. Hence, |Ψ(σ)(x, t)|² is the probability density of finding the particle at x at time t with a spin eigenvalue mℏ = (σ − 1 − s)ℏ along the z-axis.1
The particle will in general have charge q and move in an electromagnetic ﬁeld, i.e.,
in an electric ﬁeld E and a magnetic ﬁeld B. Equivalently, the ﬁelds are described by
the potentials A and φ, as described in section 1.6.
A classically spinning particle gains an additional potential energy −ΓB ·S from its
rotational motion, where Γ = qg/2µc is the scaled gyromagnetic factor. For an electron
we have g ≈ 2.2 For a quantum particle we assume the same formal expression for the
potential energy of a spin, and thus we do in a way envisage the electron as spinning,
as described in section 1.5.6. In fact, the coupling with B is the only observable eﬀect
of the spin of the particle. We call this the Zeeman effect.3
The full-blown Hamiltonian of this system then reads
\[ H = \frac{1}{2\mu}\Bigl(-i\hbar\nabla - \frac{q}{c}\mathbf{A}(\mathbf{x}, t)\Bigr)^2 + q\phi(\mathbf{x}, t) + V_{\mathrm{ext}}(\mathbf{x}, t) - \Gamma\, \mathbf{B}(\mathbf{x}, t) \cdot \mathbf{S}. \qquad (3.1) \]
1 As m = −s . . . s, σ = m + s + 1 = 1 . . . 2s + 1.
2 This may be determined experimentally or by means of relativistic corrections to the quantum theory.
3 See Ref. .
Note that all but the last term commute with Si , and it becomes natural to write
\[ H = H_{\mathrm{spatial}} + H_{\mathrm{spin}}, \]
with Hspin = −ΓB · S. However, Hspin commutes with neither the momentum nor the position operator, due to the magnetic field which may vary in space. For each component Ψ(σ) of the quantum state we have the equation
\[ i\hbar\,\frac{\partial \Psi^{(\sigma)}}{\partial t} = H_{\mathrm{spatial}}\, \Psi^{(\sigma)} - \Gamma \sum_{\tau} \mathbf{B} \cdot \mathbf{S}^{(\sigma,\tau)}\, \Psi^{(\tau)}. \qquad (3.2) \]
Clearly, Hspin couples the diﬀerential equations for each component wave function. If
B = 0 this coupling vanishes and the equations become identical. Hence, there is no
need for implementing a simulator for coupled PDEs in this case. In the case of a
constant (in both space and time) magnetic ﬁeld B we may align it along the z-axis,
and this makes Hspin diagonal and again the equations become decoupled.
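The structure of the coupling can be made concrete for s = 1/2. The sketch below builds Hspin = −ΓB · S in the standard Pauli representation (which differs from the text's increasing-eigenvalue ordering only by a relabeling of the basis); Γ and the field values are illustrative placeholders. The matrix is diagonal exactly when B points along the z-axis.

```python
import numpy as np

# Zeeman term -Gamma B . S for s = 1/2, with S_i = sigma_i / 2 (hbar = 1).
S1 = 0.5 * np.array([[0, 1], [1, 0]], dtype=complex)
S2 = 0.5 * np.array([[0, -1j], [1j, 0]])
S3 = 0.5 * np.array([[1, 0], [0, -1]], dtype=complex)

def H_spin(B, Gamma=1.0):
    """Zeeman term -Gamma B . S as a matrix over the spin components."""
    return -Gamma * (B[0] * S1 + B[1] * S2 + B[2] * S3)

H_axial = H_spin([0.0, 0.0, 2.0])    # field along z: diagonal, PDEs decouple
H_tilted = H_spin([1.0, 0.0, 2.0])   # transverse component: couples the PDEs

off_diag = lambda H: np.abs(H - np.diag(np.diag(H))).max()
print(off_diag(H_axial), off_diag(H_tilted))
```

The off-diagonal entries of Hspin are exactly the terms that couple the equations for the component wave functions in Eqn. (3.2).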
Eqn. (3.2) is the most general problem we would wish to solve numerically. There
are however many special cases yielding considerable simpliﬁcations to the problem such
as the above-mentioned decoupling of the PDEs for Ψ(σ) or gauge-transformations of
the potentials.
As discussed in section 1.6 we may perform a unitary transformation of the form
\[ T = \exp\Bigl(-\frac{iq}{\hbar c}\Lambda(\mathbf{x}, t)\Bigr), \]
which is equivalent to the gauge transformation
\[ \mathbf{A} \longrightarrow \mathbf{A}_\Lambda = \mathbf{A} - \nabla\Lambda, \qquad \phi \longrightarrow \phi_\Lambda = \phi + \frac{1}{c}\frac{\partial \Lambda}{\partial t}. \]
The transformed quantum state ΨΛ = T Ψ obeys the Schrödinger equation obtained
by substituting the potentials with the transformed potentials. (Theorem 10, section
1.6, which also holds for the case in which spin is incorporated because T commutes
with the spin operators S .)
We can of course not eliminate A or φ in general and in this way obtain a simpler description of our particle. But in several circumstances, such as in the dipole
approximation discussed below, some gauges have advantages over others.
In the below discussions of diﬀerent magnetic ﬁelds it is important to bear in mind
that for the potentials A and φ to have any meaning they must represent ﬁelds consistent with Maxwell’s equations (1.23). For example, a magnetic ﬁeld depending on
time alone is not consistent.4 We discuss diﬀerent kinds of magnetic ﬁelds, such as a
uni-directional ﬁeld, but it may happen that they are inconsistent. In that case we
must be somewhat careful in our manipulations of the ﬁelds and potentials. We must
have some kind of justiﬁcation for using such a ﬁeld. One such justiﬁcation is that our
system is small compared to the spatial variations of the ﬁeld so that we may neglect
the spatial dependence of the ﬁelds.
3.1.1 Uni-Directional Magnetic Field
A uni-directional magnetic field is a field given by
\[ \mathbf{B} = B(\mathbf{x}, t)\,\mathbf{n}(t), \]
4 It is easily seen through Maxwell's equations that B = B(t) implies B = const, a contradiction: B = B(t) implies ∂E/∂t = c∇ × B = 0, which again implies E = const. This in turn implies ∂B/∂t = −c∇ × E = 0, and finally B = const.
where n(t) is a unit vector dependent on time only. The ﬁeld strength B may vary in
space as well. Hence, the direction of B only varies with time.
Assume that the unit normal is independent of time. If the magnetic field B is directed along the z-axis (if not, we just rotate the frame of reference) we note a considerable simplification in the Schrödinger equation (3.2). Let B = B(x, t)k̂, where k̂ is the unit vector in the z-direction. Then
\[ H_{\mathrm{spin}} = -\Gamma B(\mathbf{x}, t)\, S_3, \]
and Eqn. (3.2) becomes
\[ i\hbar\,\frac{\partial \Psi^{(\sigma)}}{\partial t} = H_{\mathrm{spatial}}\, \Psi^{(\sigma)} - \Gamma B(\mathbf{x}, t)\, S_3^{(\sigma,\sigma)}\, \Psi^{(\sigma)}. \qquad (3.3) \]
Hence, the diﬀerent equations are no longer coupled due to S3 being diagonal.
We may not do this if n varies with time, because the spin-diagonal Schrödinger equation (3.3) is not valid at all times. One could imagine an attempt at diagonalizing Hspin with an operator T defining a picture transformation. The columns of T would be the eigenvectors of n · S, and
\[ H_T = H_{\mathrm{spatial}} - \Gamma B S_3 + i\hbar\, \frac{\partial T^\dagger}{\partial t}\, T, \]
but the last term will not in general be diagonal. There is no easy way to decouple the PDEs for Ψ(σ) whenever n varies with time. If B is allowed to vary even more arbitrarily the diagonalization of Hspin of course also breaks down.
Since the diagonal elements of S3 are just real numbers, each PDE differs only by a scaling of the magnetic field. Thus, each component Ψ(σ) evolves like the other components except for a magnetic field of differing strength. For example, if s = 1/2 we have
\[ S_3^{(1,1)} = -\frac{\hbar}{2} \quad\text{and}\quad S_3^{(2,2)} = +\frac{\hbar}{2}, \]
and thereby
\[ i\hbar\,\frac{\partial \Psi^{(\sigma)}}{\partial t} = H_{\mathrm{spatial}}\, \Psi^{(\sigma)} \pm \frac{\hbar\Gamma}{2}\, B(\mathbf{x}, t)\, \Psi^{(\sigma)}. \]
If B in addition is independent of x (i.e., a dipole-approximation; see below), then the
PDEs become even simpler. In fact, we may integrate each PDE analytically, if we
know the eigenvectors of Hspatial .
It is easy to see that if H = H0 + f(t), where a basis Ψn of eigenvectors of H0 is known and where f(t) is a scalar function of time alone, then the Ψn are also eigenvectors of H. The time development of each eigenvector is given by
\[ \Psi_n(t) = \exp\Bigl[-\frac{i}{\hbar}\Bigl(E_n t + \int_0^t f(t')\, dt'\Bigr)\Bigr]\, \Psi_n(0), \]
where En are the eigenvalues of H0 , and hence we can calculate the time development of any linear combination of the Ψn s. In fact, if \( \Psi = \sum_n c_n \Psi_n \), then by linearity of the evolution operator we get
\[ \Psi(t) = e^{-\frac{i}{\hbar}\int_0^t f(t')\, dt'} \sum_n c_n\, e^{-iE_n t/\hbar}\, \Psi_n. \]
In other words; the wave function diﬀers from the one found by solving the time
dependent Schrödinger equations with H = H0 only by a phase factor. This means
that it is only necessary to solve one PDE in the uni-directional case with a time
dependent homogeneous magnetic field. Note that we must require the direction of the
ﬁeld to be constant in time in order to de-couple the PDEs.
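The phase-factor claim is easy to verify numerically. The sketch below evolves a diagonal H0 plus a scalar f(t) by small exact steps and compares against the analytic phase; H0, f and the step sizes are arbitrary illustrative choices (ℏ = 1).

```python
import numpy as np

# Verify that H = H0 + f(t) (f scalar) changes the H0 evolution only by the
# global phase exp(-i \int_0^t f dt'), in units hbar = 1.
E = np.array([0.3, 1.1, 2.4])                   # eigenvalues of a diagonal H0
c = np.array([0.6, 0.8j, -0.2]); c /= np.linalg.norm(c)
f = lambda t: np.sin(t)

dt, steps = 1e-3, 5000
T = dt * steps
psi = c.astype(complex)
phase = 0.0
for k in range(steps):                          # midpoint stepping; exact here
    tm = (k + 0.5) * dt                         # since H0 and f(t) commute and
    psi = np.exp(-1j * (E + f(tm)) * dt) * psi  # the same quadrature is used
    phase += f(tm) * dt                         # for the accumulated phase

ref = np.exp(-1j * phase) * np.exp(-1j * E * T) * c
print(np.abs(psi - ref).max())                  # agreement to round-off
```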
3.1.2 A Particle Conﬁned to a Small Volume
Imagine a particle conﬁned in a very small volume, i.e., approximately to a point in
space. The particle is then approximately in an eigenstate of both the position operator
and the momentum operator. The position is x0 and the momentum is zero.5 Then
we may take
H = Hspin = −ΓB(x0 , t) · S
as our Hamiltonian and ignore the spatial dependence altogether, leaving us with an
element of Cn whose components are Ψ(σ) (x0 ) as the full quantum description of the
particle.
In this case it is not necessary to solve a PDE; indeed the Schrödinger equation
becomes an ODE. Note that we may choose the spin s to be arbitrary, making arbitrary
large systems of ODEs. This system may be a good test case for ODE integrators that
should preserve qualitative features of the full Schrödinger equation.
There are also physically interesting systems of this type which are analytically solvable, in which a time dependent magnetic ﬁeld represents an explicit time dependence
in the Schrödinger equation. For example, if
B(t) = B0 (ak̂ + b(cos(ωt)î + sin(ωt)ĵ)) = B0 n(t),
the spin state performs so-called Rabi oscillations. This magnetic ﬁeld rotates with
constant angular frequency ω around the z-axis. The Hamiltonian reads
H = −ΓB(t) · S .
This system may be solved analytically for spin-1/2 particles, i.e., for a two-dimensional ODE; see Ref. . This is one of very few solvable time-dependent quantum
mechanical problems. For diﬀerent angular frequencies ω the Rabi oscillations display
resonances, and such systems are utilized in both the theory of quantum computing
and in laser cooling techniques.
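A sketch of such a test case: integrating the spin-1/2 Rabi problem with a classical Runge–Kutta integrator. The Hamiltonian below is a common textbook parameterization with illustrative parameters (ℏ = 1), not necessarily the thesis's sign conventions; on resonance the initial spin-up state flips completely after half a Rabi period.

```python
import numpy as np

# Rabi oscillations: integrate i dpsi/dt = H(t) psi for a spin-1/2 in a
# resonantly rotating field, H(t) = (w0/2) sz + (w1/2)(cos(w0 t) sx + sin(w0 t) sy).
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
w0, w1 = 1.0, 0.2

def H(t):
    return 0.5 * w0 * sz + 0.5 * w1 * (np.cos(w0 * t) * sx + np.sin(w0 * t) * sy)

def rk4_step(t, psi, dt):
    rhs = lambda t, y: -1j * (H(t) @ y)
    k1 = rhs(t, psi)
    k2 = rhs(t + dt / 2, psi + dt / 2 * k1)
    k3 = rhs(t + dt / 2, psi + dt / 2 * k2)
    k4 = rhs(t + dt, psi + dt * k3)
    return psi + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

psi = np.array([1.0, 0.0], dtype=complex)   # spin up along z
T = np.pi / w1                              # half a Rabi period: full flip
n_steps = 20000
dt, t = T / n_steps, 0.0
for _ in range(n_steps):
    psi = rk4_step(t, psi, dt)
    t += dt

print(np.linalg.norm(psi), abs(psi[1])**2)  # norm ~ 1, flip probability ~ 1
```

Note that RK4 preserves the norm only approximately; checking how well an integrator respects such qualitative features is exactly the kind of test suggested above.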
3.1.3 The Dipole-Approximation
Imagine that our particle is under inﬂuence of a source of electromagnetic radiation
from a long distance r. The power emitted from the source is then inversely proportional to the distance, and we say that the radiation is of a dipole type. If our system
is small in extent compared to the distance from the source, it is a good approximation
to set the electromagnetic ﬁelds to be independent of position, i.e.,
E = E (t),
B = B(t).
Clearly, this is a very simple form for the electromagnetic ﬁelds as they are independent
of position. We stress that these ﬁelds are not consistent with Maxwell’s equations
(1.23) as proven on page 54. Our model allows us to neglect the necessary spatial
dependence of the ﬁelds. Care must be taken when considering potentials A and φ,
since they must be derived according to the full model and not the simpliﬁed ﬁelds.
3.2
Physical Problems
We have described the features of various simplifications of the full problem represented by the Hamiltonian (3.1). In this section we will briefly describe some actual and interesting physical systems that are worth investigating numerically. Many applications can be found in atomic physics and laser physics as well as solid state physics.
5 Of course it is not possible to perfectly conﬁne a particle in this way due to Heisenberg’s uncertainty
principle, but it may be a very good approximation.
Figure 3.1: Finite element grid corresponding to a two-dimensional model of a solid with node perturbation and refinement; an atom sits at each grid node xij
3.2.1 Two-Dimensional Models of Solids
We have not yet described the ﬁnite element method, but it is nevertheless interesting
to hint at a stationary model that directly may exploit the features of the ﬁnite element
discretization in a numerical solution.
As a model of a solid (e.g., a metal, crystal or similar) we may study a two-dimensional system in which the atoms of the solid are arranged in a simple grid. We imagine a quadratic slab of length L with N² atoms, ions or similar distributed evenly, i.e., at positions
\[ \mathbf{x}_{ij} = \Bigl(\frac{i-1}{N-1}, \frac{j-1}{N-1}\Bigr), \qquad i, j = 1, \ldots, N. \]
An electron inserted into the system experiences a potential Vij (x) from each lattice site, e.g., an effective repulsive Coulomb force. The one-particle Hamiltonian then reads
\[ H = -\frac{\hbar^2}{2\mu}\nabla^2 + \sum_{i,j=1}^{N} V_{ij}(\mathbf{x}). \]
Such models are popular and some of them are in fact possible to solve analytically. One feature of the model is that, roughly speaking, the inserted electrons surprisingly enough act as if the potentials were absent; they move unhindered at constant velocity through the grid, see for example Ref.  in which a one dimensional model with infinitely many atoms is considered. The eigenfunctions in an infinite periodic model are plane waves exp(ikx) modulated with periodic functions. (Not all wave numbers k are allowed, however.) In practice, such behavior is not found in real systems, and this is due to irregularities in the grid-like structure.
To solve the system numerically we may use a rectangular ﬁnite element grid,
placing the nodes at xij and then reﬁne the grid to provide suﬃcient accuracy to
the interpolating functions between the nodes. This is illustrated in Fig. 3.1. The
ﬁnite element method ﬁnds a piecewise polynomial approximation over this grid; ﬁner
grid means a better piecewise approximation. We will return to the ﬁnite element
discretization in section 4.4; at this point we will just consider the method as one with
very ﬂexible possibilities with respect to the geometry of the problem.
The discrete Hamiltonian implies a discrete time independent Schrödinger equation.
Diagonalizing the discrete Hamiltonian arising from this process will then intuitively
give an approximation to the exact wave function and energy levels of such a system.
This is what we will do in chapter 6 for other systems. If we perturb the positions xij
randomly the discrete Hamiltonian will similarly be perturbed, yielding new eigenvalues
and eigenfunctions. The effect corresponds to perturbing the locations of the atoms or ions, and at the same time we still obtain a numerical approximation.
Doing simulations on an ensemble of such perturbed systems may yield statistical
information on the distribution of for example energies.
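The diagonalization strategy can be sketched in a few lines, here with a plain finite difference grid in place of the finite element grid (finite differences merely keep the sketch short; the thesis's point is the flexibility of the FEM grid). The Gaussian site potentials, lattice size and perturbation strength are illustrative choices.

```python
import numpy as np

# Diagonalize a discrete 2D Hamiltonian H = -Laplacian + sum_ij V_ij on the
# unit square, with repulsive Gaussian bumps standing in for the site
# potentials, for both a regular and a randomly perturbed lattice.
rng = np.random.default_rng(0)
n = 30                                  # interior grid points per direction
h = 1.0 / (n + 1)
x = h * np.arange(1, n + 1)
X, Y = np.meshgrid(x, x, indexing="ij")

def hamiltonian(sites, amp=200.0, width=0.05):
    V = np.zeros((n, n))
    for (ax, ay) in sites:              # repulsive Gaussian bump at each site
        V += amp * np.exp(-((X - ax)**2 + (Y - ay)**2) / (2 * width**2))
    D = (np.diag(2.0 / h**2 * np.ones(n))
         - np.diag(np.ones(n - 1) / h**2, 1)
         - np.diag(np.ones(n - 1) / h**2, -1))
    I = np.eye(n)
    return np.kron(D, I) + np.kron(I, D) + np.diag(V.ravel())

N = 4                                   # 4 x 4 lattice of "atoms"
grid = [(i / (N - 1), j / (N - 1)) for i in range(N) for j in range(N)]
perturbed = [(ax + 0.02 * rng.standard_normal(),
              ay + 0.02 * rng.standard_normal()) for (ax, ay) in grid]

E_free = np.linalg.eigvalsh(hamiltonian([]))[0]   # empty box: approx 2 pi^2
E_reg = np.linalg.eigvalsh(hamiltonian(grid))[0]
E_pert = np.linalg.eigvalsh(hamiltonian(perturbed))[0]
print(E_free, E_reg, E_pert)
```

Repeating the perturbed computation over an ensemble of random draws gives the kind of energy statistics mentioned above.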
Systems such as those described here are very hot topics in solid state physics. For
an introduction, see for example Ref. .
3.2.2 Two-Dimensional Hydrogenic Systems
An interesting problem is a two-dimensional hydrogenic system with an applied magnetic ﬁeld. Such a system has a Hamiltonian given by
\[ H = \frac{1}{2\mu}\Bigl(-i\hbar\nabla + \frac{e}{c}\mathbf{A}\Bigr)^2 - \frac{ke^2}{r} - \Gamma\,\mathbf{B} \cdot \mathbf{S}, \]
where −e is the electron charge. The external potential −ke2 /r is an attractive
Coulomb force.6 The full three-dimensional system has proven to be interesting in several major research ﬁelds, such as plasma physics, astrophysics and solid state physics.
The solid state interest centers on the eﬀect of a magnetic ﬁeld on shallow impurity
levels in a bulk semiconductor, see Ref.  and references therein. Ref.  is also a
comprehensive treatment of the system.
We will concentrate on the two-dimensional model of a hydrogenic system. This
system arises as a limit in the case of a bulk semiconductor with impurities, but it
also constitutes an interesting system in itself as a fairly complicated system that may
display many features of quantum mechanics. Furthermore, such electronic systems in
two dimensions constitute a hot topic when viewed as so-called quantum dots (one or
more electrons conﬁned in small two-dimensional areas).
In Ref.  one concentrates on a vertical time-dependent magnetic field arising from for example the dipole approximation, viz.,
\[ \mathbf{B} = \gamma(t)\,\hat{k}, \]
and a candidate for the vector potential is then the so-called symmetric gauge, viz.,
\[ \mathbf{A} = \frac{\gamma(t)}{2}\,(-y, x, 0). \]
Note that ∇ · A = 0, hence we have the Coulomb gauge and the vector potential commutes with the momentum, viz.,
\[ [\mathbf{A}, \mathbf{P}] = \mathbf{A} \cdot \mathbf{P} - \mathbf{P} \cdot \mathbf{A} = 0. \]
The spin-dependent term in the Hamiltonian becomes
\[ H_{\mathrm{spin}} = -\Gamma\gamma(t)\, S_3, \]
and the PDEs for the different spin components become decoupled. In fact, Hspin commutes with Hspatial (i.e., with the rest of the Hamiltonian), and hence we may ignore the spin degrees of freedom altogether. If En are the energies of Hspatial and εσ are the energies of Hspin , the energies of Hspatial + Hspin are Enσ = En + εσ .
In Ref.  the eigenvalues of the (spinless) Hamiltonian are discussed as a function
of the applied magnetic ﬁeld strength. In the limits of a vanishing ﬁeld and of a
strong ﬁeld the eigenvalues are found analytically. Perturbative methods combined
with Padé interpolation for analytic continuation of the perturbative results are used
to ﬁnd expressions for the approximate eigenvalues in the intermediate regime.
6 Note that in Gaussian units k = 1. We keep such constants however for completeness.
In Ref.  one mentions the possibility of studying the moderate magnetic ﬁeld
region by use of ﬁnite element methods to achieve eigenvalues and eigenvectors with a
high degree of accuracy. The focus here is to perform a ﬁnite element approximation to
this system and compare the results with those of the Padé approximants of Ref. .
To study the two-dimensional hydrogen atom numerically we need to write the Schrödinger equation on dimensionless form. The numerical value of, for example, ℏ in cgs or SI units is extremely small, and using such quantities in numerical computations easily leads to round-off errors. Furthermore they are less intuitive to work with. It is easier to study quantities of order unity instead of order 10⁻³⁴.
The Hamiltonian is given by
\[ H = H_{\mathrm{spatial}} + H_{\mathrm{spin}}, \]
where Hspin = −ΓB · S and
\[ H_{\mathrm{spatial}} = \frac{1}{2\mu}\Bigl(-i\hbar\nabla + \frac{e}{c}\mathbf{A}\Bigr)^2 - ke^2\,\frac{1}{r}. \]
Let us first focus on Hspatial , dropping the subscript for ease of notation. We introduce a new length scale to our problem, writing x = αx′, where α is a numerical constant with the dimension of length; hence x′ is a dimensionless vector. Then
\[ \nabla = \frac{1}{\alpha}\,\nabla', \]
where ∇′ denotes differentiation with respect to the components of x′. If we insert this into our Hamiltonian, we get
\[ H = \frac{\hbar^2}{2\mu\alpha^2}\Bigl(-i\nabla' + \frac{\alpha e}{\hbar c}\mathbf{A}\Bigr)^2 - \frac{ke^2}{\alpha}\,\frac{1}{r'}. \]
Here, r = αr′. Note that (αe/ℏc)A must be dimensionless, so we introduce A′ = (αe/ℏc)A. Next we introduce a scaled dimensionless Hamiltonian H′ given by
\[ H' = \frac{2\mu\alpha^2}{\hbar^2}\, H = (-i\nabla' + \mathbf{A}')^2 - \frac{\mu\alpha ke^2}{\hbar^2}\,\frac{2}{r'}. \]
If we require the factor in front of 2/r′ to be unity, we obtain the length scale
\[ \alpha = \frac{\hbar^2}{\mu ke^2} \approx \frac{(197.3\ \mathrm{eV\,nm})^2}{0.5107 \cdot 10^6\ \mathrm{eV} \cdot 1.440\ \mathrm{eV\,nm}} = 0.0529\ \mathrm{nm}, \]
and accordingly
\[ H' = \frac{2\hbar^2}{\mu(ke^2)^2}\, H = \beta^{-1} H, \]
where β is the energy scale. The numerical value of β is
\[ \beta = \frac{\mu(ke^2)^2}{2\hbar^2} \approx 0.5107 \cdot 10^6\ \mathrm{eV} \cdot \frac{(1.440\ \mathrm{eV\,nm})^2}{2\,(197.3\ \mathrm{eV\,nm})^2} = 13.603\ \mathrm{eV}. \]
When studying the time dependent Schrödinger equation we also need a time scale, i.e., t = τt′. Upon insertion into the Schrödinger equation we have
\[ i\,\frac{\hbar}{\beta\tau}\,\frac{\partial \Psi}{\partial t'} = H' \Psi, \]
where
\[ \tau = \frac{\hbar}{\beta} = \frac{2\hbar^3}{\mu(ke^2)^2} = \frac{6.582 \cdot 10^{-16}\ \mathrm{eV\,s}}{13.603\ \mathrm{eV}} = 4.839 \cdot 10^{-17}\ \mathrm{s} \]
quantity      definition                 numerical value
length        α = ℏ²/(µke²)              0.0529 nm
energy        β = µ(ke²)²/(2ℏ²)          13.603 eV
time          τ = 2ℏ³/(µ(ke²)²)          4.839 · 10⁻¹⁷ s
mag. field    δ = β/(gµB)                4.282 · 10⁻⁸ gauss

Table 3.1: Units for the two dimensional hydrogen atom
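The length, energy and time scales in the table follow from just the three combinations ℏc, µc² and ke² quoted above; a quick sketch reproducing them (constants as quoted in the text):

```python
# Reproduce the length, energy and time scales of Table 3.1 from the constants
# quoted in the text: hbar*c = 197.3 eV nm, mu*c^2 = 0.5107e6 eV (electron),
# k*e^2 = 1.440 eV nm, hbar = 6.582e-16 eV s.
hbarc = 197.3        # eV nm
muc2 = 0.5107e6      # eV
ke2 = 1.440          # eV nm
hbar = 6.582e-16     # eV s

alpha = hbarc**2 / (muc2 * ke2)        # Bohr radius, nm
beta = muc2 * ke2**2 / (2 * hbarc**2)  # Rydberg energy, eV
tau = hbar / beta                      # time scale, s
print(alpha, beta, tau)                # ~0.0529 nm, ~13.60 eV, ~4.84e-17 s
```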
is the natural time scale.7 Thus,
\[ i\,\frac{\partial \Psi}{\partial t'} = H' \Psi(\mathbf{x}', t'), \]
with
\[ H' = (-i\nabla' + \mathbf{A}')^2 - \frac{2}{r'}, \qquad (3.4) \]
is the dimensionless form of the time dependent Schrödinger equation. Still we need to find a suitable scale for the magnetic field and the spin operators in Hspin to complete our discussion.
The natural unit for spin is ℏ, as the spin matrices are given in terms of this constant, i.e., S = ℏS′. Hence,
\[ \beta^{-1} H_{\mathrm{spin}} = g\beta^{-1}\mu_B\,\delta\,\mathbf{B}' \cdot \mathbf{S}', \]
where we have introduced δ as the natural scale of the magnetic field. The constant µB is called the Bohr magneton and has the value
\[ \mu_B = \frac{e\hbar}{2\mu c} = 5.789 \cdot 10^{-9}\ \mathrm{eV/gauss}. \]
Requiring gµB δ/β = 1 yields
\[ \delta = \frac{\beta}{g\mu_B} = 4.282 \cdot 10^{-8}\ \mathrm{gauss}, \]
where we have used g = 2.00232 for the electron, quoted from Ref. .
From now on, we will drop the primes as the conversion in units is unambiguously given by the tabulated quantities in Table 3.1. We use the symmetric gauge A = γ(t)/2 · (−y, x, 0) for the vector potential. Note that by the chain rule,
\[ \frac{\partial}{\partial \phi} = \frac{\partial x}{\partial \phi}\frac{\partial}{\partial x} + \frac{\partial y}{\partial \phi}\frac{\partial}{\partial y} = -y\frac{\partial}{\partial x} + x\frac{\partial}{\partial y}, \]
and that A² = γ(t)²(x² + y²)/4 = γ(t)²r²/4. We obtain for the kinetic energy term in Eqn. (3.4)
\[ (-i\nabla + \mathbf{A})^2 = -\nabla^2 - 2i\mathbf{A}\cdot\nabla + A^2 = -\nabla^2 - i\gamma(t)\frac{\partial}{\partial \phi} + \frac{\gamma(t)^2}{4}\,r^2. \]
The spinless Hamiltonian now reads
\[ H = -\nabla^2 - i\gamma(t)\frac{\partial}{\partial \phi} + \frac{\gamma(t)^2}{4}\,r^2 - \frac{2}{r}. \qquad (3.5) \]
In section 6.1.3 we deal with the time independent version of this problem numerically.
7 Compare the time scale with the life time of the classical hydrogen atom losing its energy due to
radiation, τ ≈ 1.5 · 10−11 s.
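The chain-rule identity used in the kinetic term above, A · ∇ = (γ/2) ∂/∂φ for the symmetric gauge, can be spot-checked with finite differences; the test function, evaluation point and γ below are arbitrary illustrative choices.

```python
import numpy as np

# Finite difference check of A . grad f = (gamma/2) df/dphi for the symmetric
# gauge A = (gamma/2)(-y, x), at an arbitrary point and test function.
gamma = 1.7
f = lambda x, y: np.exp(x) * np.sin(2 * y) + x * y**2

x0, y0, eps = 0.8, -0.3, 1e-6
fx = (f(x0 + eps, y0) - f(x0 - eps, y0)) / (2 * eps)   # df/dx
fy = (f(x0, y0 + eps) - f(x0, y0 - eps)) / (2 * eps)   # df/dy
lhs = (gamma / 2) * (-y0 * fx + x0 * fy)               # A . grad f

r0, p0 = np.hypot(x0, y0), np.arctan2(y0, x0)
g = lambda p: f(r0 * np.cos(p), r0 * np.sin(p))        # f along the circle r = r0
rhs = (gamma / 2) * (g(p0 + eps) - g(p0 - eps)) / (2 * eps)
print(lhs, rhs)                                        # the two sides agree
```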
Chapter 4
Numerical Methods for
Partial Diﬀerential Equations
This chapter is devoted to numerical solutions of partial differential equations (PDEs). Most such equations are very hard to solve analytically, implying the need for approximate solution methods. The first section describes the main features of partial differential equations in general and ordinary differential equations (ODEs) in particular. Then we turn to different numerical methods for approximating these.
The main diﬀerence between PDEs and ODEs is that in the ODE the unknown
function we search for depends only on one parameter, i.e., the equation contains only
derivatives with respect to one variable.
Let us outline the general strategy used when attacking a PDE numerically. Consider as an example a simplified version of the Schrödinger equation, viz.,1
\[ iu_t = -u_{xx} + V(x)u. \]
This diﬀerential equation contains partial derivatives of ﬁrst order with respect to time
and of second order with respect to spatial coordinates. We use u(t = 0) = f as
initial condition and there are boundary conditions that must be fulﬁlled in the spatial
directions.
How do we solve such a problem on a computer? Clearly, the problem must be
ﬁnite in extent in order to be calculable. We must in some way transform diﬀerential
operators and functions into discrete and ﬁnite representations.
The unknown function u(x) at time t has inﬁnitely many degrees of freedom; one
for each x. Imagine that we instead assume that x only can take a ﬁnite number of
values xj , j = 1, 2, . . . , N , which is the main idea of ﬁnite diﬀerence approximations
among others. Clearly, u(t) becomes an element of CN . As a consequence the diﬀerential operator ∂ 2 /∂x2 must be replaced by an algebraic relation between the diﬀerent
function values u(xj , t). The multiplicative operator V (x) must be represented by some
appropriate variant; clearly also an algebraic relation.
The PDE is by these means converted into an ordinary diﬀerential equation of
dimension N , viz.,
iut = D(u) + V (u), u(t) ∈ CN ,
where D and V are mappings from CN into CN . Since the Hamiltonian −∂ 2 /∂x2 +V (x)
is an Hermitian operator, we also wish that our numerical methods yield Hermitian
discrete versions, i.e., that D and V can be represented by linear Hermitian matrices.
This ensures that unitarity of the Schrödinger equation is transferred to the ODE as
well.
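The conversion just described can be sketched in a few lines, using the standard three-point finite difference for ∂²/∂x² on a uniform grid (grid size and potential are illustrative choices). The sketch checks the two properties stressed above: the discrete Hamiltonian is Hermitian, and the resulting discrete propagator is unitary.

```python
import numpy as np

# Discretize i u_t = -u_xx + V(x) u on a uniform grid: the discrete
# Hamiltonian H = D + V is real symmetric (hence Hermitian), and the
# propagator exp(-i H dt), built via eigendecomposition, is unitary.
N, h, dt = 200, 0.05, 0.01
x = h * np.arange(1, N + 1)
V = 0.5 * (x - 0.5 * N * h)**2                 # some multiplicative potential
D = (2 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)) / h**2
H = D + np.diag(V)

assert np.allclose(H, H.T)                     # Hermitian discretization
lam, Q = np.linalg.eigh(H)
U = Q @ np.diag(np.exp(-1j * lam * dt)) @ Q.T  # propagator exp(-i H dt)
err = np.abs(U @ U.conj().T - np.eye(N)).max()
print(err)                                     # unitary to round-off
```

Unitarity of U is exactly the transferred unitarity of the Schrödinger evolution mentioned above.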
1 Note the subscript notation for partial derivatives.
Our attention may now be focused on solving the ODE. The following sections
outline some widely used methods for performing the conversion of a PDE to an ODE.
After this we outline methods for solving ODEs, focusing on methods applicable to
the Schrödinger equation. This is typically done with a finite difference method of some kind or by consideration of the exact flow of the ODE, such as operator splitting methods. In other words, the temporal degree of freedom is also converted into a
discrete approximation. Ultimately, we have a complete discrete formulation of our
diﬀerential equation.
There is nothing wrong with going the other way round, i.e., first discretizing the time dependence and then applying the spatial approximation. Indeed, in some cases it is clearer to take this alternate approach.2 Ultimately, the PDE is reduced to an
algebraic problem, or if we wish to view it as such, a sequence of such; one problem at
each time level.
4.1
Diﬀerential Equations
In this section we review some basic concepts regarding diﬀerential equations; both
ordinary and partial diﬀerential equations. We will not employ mathematical rigor,
because it is not necessary for our applications. There are numerous standard texts to
consult, see for example Refs. [17, 32, 33].
A diﬀerential equation is an equation in which the unknown is a function. The
equation relates the unknown’s partial derivatives. We distinguish between ordinary
differential equations (ODEs) and partial differential equations (PDEs). Partial differential equations are an extremely rich class of problems. They are divided into different types, each displaying different behavior. The different kinds of equations may behave differently with different numerical methods. Examples of PDE types are parabolic equations, elliptic equations (such as the time independent Schrödinger equation) and hyperbolic equations.
4.1.1 Ordinary Diﬀerential Equations
The unknown in an ODE is a function of only one variable t. In general we search for
a function
y : I ⊂ R −→ V,
where V is some vector space called the phase space and I = [t0 , t1 ] is some interval,
and the ODE reads
ẏ = f (y, t).
This is a ﬁrst order equation because only derivatives of ﬁrst order enter the equation.
This is not a limitation, since all higher-order ODEs may be rewritten as ﬁrst-order
ODEs by extending V . The function f (·, t) is a function from V into V , i.e., a vector
ﬁeld. We say that an ODE is autonomous if f has no explicit dependence on time.
Any non-autonomous ODE may be turned into an autonomous ODE by extending the
phase space with one dimension, viz.,
d/dt (y, τ ) = (f (y, τ ), 1).
If f is suﬃciently nice it is easy to see that the solution (if it exists) is unique, once
given the initial condition y(t0 ). We will not prove existence or uniqueness here, see
Ref. .
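The extension of the phase space is easily made concrete in code. The following sketch (in Python; the function names are our own invention, purely for illustration) wraps a non-autonomous right-hand side f(y, t) into an autonomous one on the extended phase space and integrates the extended system with the explicit Euler method:

```python
# Sketch: turning a non-autonomous ODE y' = f(y, t) into an autonomous one
# on the extended phase space z = (y, tau), with z' = (f(y, tau), 1).
# All names are our own and purely illustrative.

def autonomize(f):
    """Return F with F(z) = (f(y, tau), 1) for z = (y, tau)."""
    def F(z):
        y, tau = z
        return (f(y, tau), 1.0)
    return F

def euler(F, z0, h, steps):
    """Explicit Euler for the autonomous system z' = F(z)."""
    z = z0
    for _ in range(steps):
        Fz = F(z)
        z = tuple(zi + h * Fzi for zi, Fzi in zip(z, Fz))
    return z

# Example: y' = 2t with y(0) = 0, whose exact solution is y(t) = t^2.
y1, t1 = euler(autonomize(lambda y, t: 2.0 * t), (0.0, 0.0), h=1.0e-3, steps=1000)
print(y1, t1)  # y1 close to 1.0, t1 equal to 1.0
```

Note how the time variable is simply carried along as an extra component with unit derivative.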
2 If discretizing the time domain first, we must be aware of some existence problems in the formulation. If we ignore these, our derivations become purely formal, and the formality of the calculations vanishes when we introduce a spatial discretization. We will see this in section 4.5.
In the case of a unique solution we have a mapping U (t, s) that takes an initial
condition y(s) into the solution of the diﬀerential equation at time t, i.e.,
U (t, s)y(s) = y(t).
By first propagating to t and then to t′ we get the composition property

U (t′, s) = U (t′, t)U (t, s).
For t suﬃciently close to s we may invert the time propagation, yielding
U (t, s) = U (s, t)−1 .
Of course,
U (t, t) = 1.
This mapping is called the ﬂow, the propagator or the evolution operator. It corresponds
to the propagator of the Schrödinger equation.
This means that ordinary diﬀerential equations describe the motion of a point in
phase space. The image y(I) is called the trajectory. One might consider more general
ODEs in which y is a point on a diﬀerentiable manifold instead of a vector space, and
in which f is a tangent vector ﬁeld. The diﬀerential equation for the propagator is
such a generalized ODE. By the following identities,
ẏ(t) = (d/dt) U (t, s)y(s) = U̇ (t, s)y(s) = f (U (t, s)y(s)) = (f ◦ U (t, s))y(s),
which must hold for all y(s), we get
U̇ (t, s) = f ◦ U (t, s),
U (s, s) = 1.
Hamilton’s equations of motion (1.2) are a very important example of an ordinary differential equation.
4.1.2 Partial Diﬀerential Equations
Where an ODE seeks a (multidimensional) function of a single variable, partial differential equations (PDEs) seek a (possibly multidimensional) function of several variables. Hence, the partial derivatives with respect to all the arguments of the
unknown appear in the equation.
Assume that the function u we are seeking is deﬁned on some subset of Rn , viz.,
u : Ω ⊂ Rn −→ V,
where V again is some vector space. It is useful to consider u as an element in some
linear space, such as a Banach space or a Hilbert space.
In general we write our PDE as
L(u) = f,
where f is a function in the mentioned space, on equal footing with u. The operator
L may be complicated, but in case of a linear operator we say that the PDE is linear.
(Note that linear equations may be harder to solve than non-linear ones! Linear PDEs
form a huge class of problems.) Furthermore, the equation is said to be homogeneous if
f = 0.
Here are some simple yet important examples of PDEs.
– ut + ux = 0: A transport equation.
– ut − uxx = 0: The one-dimensional diﬀusion equation.
– iut + uxx = 0: The one-dimensional time dependent Schrödinger equation for a
free particle.
– utt − uxx = 0: The one-dimensional wave equation.
– uxx + uyy = 0: Laplace’s equation.
Common to all these examples is that they are homogeneous. The time dependent Schrödinger equation is in general also homogeneous. Some information is missing from the examples for them to represent well-defined physical problems with a unique solution. What is the initial distribution of temperature u in the heat equation? Assuming that we model a vibrating string with the wave equation, what are the reflection coefficients at the ends of the string?
Typically we separate between spatial dependence and time dependence. This is
connected to our physical way of thinking. Thus, often one silently assumes that an
initial condition problem is at hand when the PDE contains t as a parameter, and
some sort of boundary condition problem whenever x occurs. This is particularly clear
in the last two examples above. The wave equation and Laplace’s equation are almost identical, but the presence of y instead of t in the latter implies a problem of completely different character. Laplace’s equation is a stationary problem while the wave equation
is a time-dependent problem.
For equations of order n in time (i.e., when n is the highest order of the time
derivative that occurs) we need n initial conditions to “set off” the solutions, i.e., we must supply ∂^k u/∂t^k for k = 0, 1, . . . , n − 1 at t = 0.
Boundary conditions specify the asymptotic behavior of u. When solving PDEs
numerically we consider only compact domains (i.e., closed and bounded domains),
and in that case the boundary conditions specify the behavior of the solution at the
boundary ∂Ω. Most important for our purposes are Dirichlet boundary conditions in
which we specify the value of u at the boundary, i.e. we specify u(∂Ω). Other kinds of
conditions include Neumann conditions, in which the normal component of the gradient
of u is speciﬁed, i.e., n · ∇u is supplied, where n is a unit normal ﬁeld of ∂Ω.
Usually we formulate the Schrödinger equation with Ω = Rn , but for computational
purposes this is not suitable. Physically, limiting Ω to some compact region in effect sets the potential to infinity outside Ω, since the particle is not allowed to penetrate the exterior region. The wave function must then vanish at the boundary for the equation
to be fulﬁlled. In other words, when solving the time dependent Schrödinger equation
we use homogeneous Dirichlet boundary conditions, viz.,
u(∂Ω) ≡ 0.
PDEs and ODEs are closely related, especially for computational purposes. On
one hand, an ODE is a PDE in which the unknown depends only on t. On the other
hand, if we can specify u as being an element in some (inﬁnite-dimensional) Hilbert
space and if we can ﬁnd a countable basis, then a time-dependent PDE is equivalent to
an inﬁnite-dimensional ODE. An example of this is seen in the discussion on the time
dependent Schrödinger equation in section 1.4. The typical computational approach is then to limit the dimensionality of the Hilbert space in some way by ignoring all but a finite-dimensional subspace, as indicated in the introduction.
4.2 Finite Difference Methods
Perhaps the most obvious and intuitive way of discretely approximating diﬀerential operators is by means of ﬁnite diﬀerences. This method replaces the continuous function
by a discrete function defined at evenly spaced grid points in space. Partial derivatives are approximated with difference expressions derived from considerations of Taylor expansions of the original function.

Figure 4.1: A uniform grid
4.2.1 The Grid and the Discrete Functions
Assume that we are given a PDE with unknown u, viz.,
u : Ω ⊂ Rn −→ V.
Let us introduce some notation. A multi-index α is an ordered sequence of n integers,
viz.,
α = (α1 , α2 , · · · , αn ),
where n is the dimension of Ω in our applications. We may add and subtract multi-indices in a component-wise fashion, i.e.,

(α + β)i = αi + βi ,

and, without danger of confusion, if we specify a multi-index α + βj , then βj is understood to have components βi = 0 for i ≠ j.
Suppose that Ω is a rectangular domain, i.e., it is the cartesian product of intervals,
viz.,
Ω = I1 × I2 × · · · × In .
If we subdivide each interval Ii into Ni subintervals of uniform length hi , we get Ni + 1
well-deﬁned “joints” xij , j = 0, · · · , Ni . See Fig. 4.1 for an illustration. The grid G
now consists of the points xi1 i2 ...in given by
xi1 i2 ...in = (x1i1 , x2i2 , . . . , xnin ),
ij = 0, . . . , Nj , j = 1, . . . , n.
Thus, the grid G is given as
G = {xα : αj = 0, 1, . . . , Nj }.
Such a G is referred to as a uniform grid. The mesh width is the largest of the spacings
hi and intuitively measures the “ﬁneness” of the approximation.
Assuming that Ω is rectangular is not a severe restriction, because one may specify boundary conditions in the discrete formulation mimicking the real shape of Ω. This is illustrated in Fig. 4.2. However, the computational algorithm becomes much more complicated when employing such geometries; this is one of the major drawbacks of finite difference methods. Furthermore, if the discretized boundary has for example a “staircase shape”, artificial physical effects may be introduced into the numerical solution.
Figure 4.2: Discretizing a non-rectangular domain
As for the function u and other functions in the problem deﬁnition such as initial
conditions, we deﬁne a discrete approximation uh deﬁned in the grid points of G, viz.,
u(xα ) −→ uh (xα ).
Thus, uh may be viewed as an N1 N2 . . . Nn -dimensional vector whose components are
elements in V , or as a tensor of rank n with Ni components of type V in each direction.
Indeed, we write
uα := uh (xα )
for the components of this tensor or discrete function. When doing practical computations in for example two dimensions, we simplify the notation and write xij for the
grid points and uij for the components of uh , where i and j are indices in the x and y
directions, respectively.
4.2.2 Finite Diﬀerences
The diﬀerential operators occurring in the PDE problem are approximated by diﬀerence
expressions called ﬁnite diﬀerences. At a given grid point xα the partial derivatives are
approximated by a (linear) function of uα and the neighboring grid points. There is no
unique way to deﬁne ﬁnite diﬀerence expressions for the diﬀerent partial derivatives.
The choice of the approximation is highly problem dependent. In addition, more
complicated compositions of operators such as (f (x)ux )x may be approximated by
choosing a perhaps non-obvious but “smart” ﬁnite diﬀerence.
There are however some ﬁnite diﬀerences that are more widely used than others,
and we shall introduce a special notation for these. This notation makes it easy to
develop diﬀerence schemes for various PDEs and the notation follows closely that of
the actual partial derivatives.
We introduce the notation in the setting of a one-dimensional problem. The generalization to several dimensions is obvious. First we deﬁne a centered diﬀerence for the
partial derivative ∂/∂x:
[δnx u]j := (1/(nh)) (uj+n/2 − uj−n/2 ).    (4.1)
Here, h is the grid spacing and n > 0 scales the step size in the ﬁnite diﬀerence. Note
that this is clearly inspired by the deﬁnition of the partial derivative. The same goes
for the one-sided forward and backward differences, viz.,

[δ+nx u]j := (1/(nh)) (uj+n − uj ),    (4.2)

and

[δ−nx u]j := (1/(nh)) (uj − uj−n ).    (4.3)
In all three cases, the distance along the axis between the sampling points is nh.
These three ﬁnite diﬀerence expressions all approximate the ﬁrst derivative. A
widely used expression for the second derivative is given by combining δx twice, viz.,
[δx δx u]j = (1/h²)(uj+1 − 2uj + uj−1 ).    (4.4)
It is fundamental to be able to analyze the error in the numerical discretizations.
One of the concepts to consider is the truncation error, deﬁned as the error in the
numerical discretization when we insert an exact solution. Thus, if L is a diﬀerential
operator and if Lh is a numerical approximation, then the truncation error is deﬁned
as
τ := Lh (u) − L(u).
For example, the approximation (4.4) of the second derivative has truncation error

τ = (1/h²)(u(x + h) − 2u(x) + u(x − h)) − u″(x).

When we expand u(x ± h) in Taylor series we easily obtain

τ = (1/h²)(h²u″(x) + O(h⁴)) − u″(x) = O(h²),
and so the truncation error is of second order in the step size. When τ vanishes with
vanishing mesh width we say that we have a consistent approximation. The name of τ
is related to the fact that it is the error arising from truncating the Taylor series after
a few terms. The truncation errors for the other finite differences presented above are easily derived, viz.,

[δnx u]j = u′(xj ) + (1/24)(nh)² u‴(xj ) + O(h⁴),    (4.5)
[δ+nx u]j = u′(xj ) + (1/2) nh u″(xj ) + O(h²),    (4.6)
and [δ−nx u]j = u′(xj ) − (1/2) nh u″(xj ) + O(h²).    (4.7)

The centered difference provides a better approximation to u′(xj ) than the one-sided differences. Intuitively this is so because it utilizes information from both sides of xj .
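These orders are easily confirmed by a small numerical experiment. The sketch below (Python; our own throwaway code, not part of the thesis programs) applies the centered and forward differences to u(x) = sin x and checks that halving h reduces the errors by factors of roughly four and two, respectively:

```python
import math

def centered(u, x, h):
    # [delta_x u]: (u(x + h/2) - u(x - h/2)) / h, second order
    return (u(x + 0.5 * h) - u(x - 0.5 * h)) / h

def forward(u, x, h):
    # [delta_x^+ u]: (u(x + h) - u(x)) / h, first order
    return (u(x + h) - u(x)) / h

u, du = math.sin, math.cos
x = 1.0
e_c = [abs(centered(u, x, h) - du(x)) for h in (0.1, 0.05)]
e_f = [abs(forward(u, x, h) - du(x)) for h in (0.1, 0.05)]
print(e_c[0] / e_c[1])  # about 4: the error is O(h^2)
print(e_f[0] / e_f[1])  # about 2: the error is O(h)
```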
4.2.3 Simple Examples
Let us apply the ﬁnite diﬀerences to some simple model problems. We will see that
the notation introduced makes the similarity between the PDE formulation and ﬁnite diﬀerence formulation obvious. The examples will also illustrate some important
concepts, such as incorporating boundary conditions and initial conditions.
The Heat Equation.
Consider the two-dimensional heat equation, viz.,
ut = uxx + uyy .
We assume that the spatial grid is given and has points which we denote as xi,j =
(xi , yj ). Time is discretized as t^ℓ = ℓ∆t, and although we according to the above description of the grid should include it in x it is somewhat clearer to separate the
time-dependent part of the grid. We also assume that hx = hy = h. First, let us
consider a ﬁnite diﬀerence scheme applied to only the spatial degrees of freedom, viz.,
u̇i,j (t) = [(δx δx + δy δy )u]i,j .
This clearly turns the spatial derivatives into algebraic operators on the discrete function. Since we still have a time derivative in the equation we now have an ordinary
diﬀerential equation. We will employ two diﬀerent diﬀerence schemes for resolving the
derivative; namely a forward difference and a backward difference approximation. A
forward diﬀerence yields
[δt+ u]^ℓ_{i,j} = [(δx δx + δy δy )u]^ℓ_{i,j} .

Note that we have placed the time level index ℓ as a superscript. When performing an implementation we seldom keep the solution at all time levels in memory.
Writing out the diﬀerence scheme and reorganizing yields
u^{ℓ+1}_{i,j} = u^ℓ_{i,j} + (∆t/h²)(u^ℓ_{i+1,j} + u^ℓ_{i−1,j} + u^ℓ_{i,j+1} + u^ℓ_{i,j−1} − 4u^ℓ_{i,j}).
This scheme is called explicit, because updating the solution at the next time level is
explicitly given as a function of the solution of the current time level. Time stepping
with δt+ is called forward Euler or explicit Euler.
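The explicit update is only a few lines of code. Here is a sketch in Python (a list-of-lists grid with homogeneous Dirichlet values held fixed on the boundary; all names are our own, and the step size is kept within the standard stability bound ∆t ≤ h²/4 for this scheme):

```python
def heat_step(u, dt, h):
    """One forward Euler step of u_t = u_xx + u_yy on a uniform grid.
    Boundary values are kept fixed (homogeneous Dirichlet).
    u is a list of lists; a new grid is returned."""
    n = len(u)              # grid points per direction
    r = dt / h**2
    new = [row[:] for row in u]
    for i in range(1, n - 1):
        for j in range(1, n - 1):
            new[i][j] = u[i][j] + r * (u[i+1][j] + u[i-1][j]
                                       + u[i][j+1] + u[i][j-1] - 4 * u[i][j])
    return new

# Example run: a point of heat in the middle of the unit square.
n, h = 21, 1.0 / 20
dt = 0.2 * h**2             # within the stability limit dt <= h^2/4
u = [[0.0] * n for _ in range(n)]
u[n // 2][n // 2] = 1.0
for _ in range(50):
    u = heat_step(u, dt, h)
peak = max(max(row) for row in u)
print(peak)  # the heat has spread out and the peak has decayed
```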
Replacing δt+ with δt− yields a diﬀerent scheme. This way of time stepping is called
backward Euler or implicit Euler. The scheme reads
u^ℓ_{i,j} − (∆t/h²)(u^ℓ_{i+1,j} + u^ℓ_{i−1,j} + u^ℓ_{i,j+1} + u^ℓ_{i,j−1} − 4u^ℓ_{i,j}) = u^{ℓ−1}_{i,j}.
This scheme is called implicit because the solution at time level ℓ is given implicitly; we have to solve a non-trivial set of algebraic equations to find u^ℓ. It is easy to see
that the system of equations is a linear algebraic system, i.e., we may rewrite it as a
matrix equation.
Boundary conditions were neglected in the above discussion, but we include them
in the next example.
The Wave Equation. Let the domain be given as Ω = [0, 1]. Let us describe a ﬁnite
diﬀerence method for the one-dimensional wave equation, i.e.,
utt = uxx ,
with boundary conditions
u(0, t) = u(1, t) = 0.
The initial condition is
u(x, 0) = f (x),
ut (x, 0) = g(x).
We use N + 1 grid points, i.e.,
G = {xj = jh : j = 0, . . . , N }.
The time levels are given as t^ℓ = ℓ∆t as in the previous example. We devise the scheme
[δt δt u]^ℓ_j = [δx δx u]^ℓ_j ,
j = 1, . . . , N − 1.
For j = 0 and j = N we impose the boundary conditions, i.e., that u^ℓ_0 = u^ℓ_N = 0. For ℓ = 0 we use the initial condition, viz.,

u^0_j = f (xj ).

For ℓ = 1 we use the second part of the initial condition and a forward difference approximation, viz.,

u^1_j = u^0_j + ∆t g(xj ).
For the subsequent time levels we use the devised scheme, which when written out
reads
u^{ℓ+1}_j = 2u^ℓ_j − u^{ℓ−1}_j + (∆t²/h²)(u^ℓ_{j+1} − 2u^ℓ_j + u^ℓ_{j−1}).
This is an explicit scheme.
Note the way in which the initial conditions were incorporated by a special rule for
the two ﬁrst time levels. Similarly, the boundary conditions were incorporated by an
(obvious) special rule.
Let us mention some stability properties of this scheme. It turns out that the
scheme is stable if and only if the Courant number C := ∆t2 /h2 ≤ 1. In fact we obtain
the exact solution at the grid points if C = 1. (This is not true for the two-dimensional
generalization of the scheme.) Thus, taking too large time steps makes the process
unstable, a feature that we also will see when studying the Schrödinger equation. See
Ref.  for a thorough discussion.
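The remarkable C = 1 property can be verified directly. In the sketch below (Python) we seed the two first time levels from the exact standing wave u(x, t) = sin(πx) cos(πt) instead of the first-order starting rule above, so that the only error left is roundoff:

```python
import math

# Leapfrog scheme for u_tt = u_xx on [0, 1] with u = 0 at the ends.
# With Courant number C = dt^2/h^2 = 1 and exact first two levels,
# the scheme reproduces the exact solution at the grid points.
N = 50
h = 1.0 / N
dt = h                                  # C = 1
x = [j * h for j in range(N + 1)]
exact = lambda xx, tt: math.sin(math.pi * xx) * math.cos(math.pi * tt)

u_prev = [exact(xx, 0.0) for xx in x]   # level 0
u_curr = [exact(xx, dt) for xx in x]    # level 1, seeded from the exact solution

steps = 200
for _ in range(1, steps):
    u_next = [0.0] * (N + 1)            # boundary values stay zero
    for j in range(1, N):
        u_next[j] = (2 * u_curr[j] - u_prev[j]
                     + (dt**2 / h**2) * (u_curr[j+1] - 2 * u_curr[j] + u_curr[j-1]))
    u_prev, u_curr = u_curr, u_next

t_end = steps * dt
err = max(abs(u_curr[j] - exact(x[j], t_end)) for j in range(N + 1))
print(err)  # of the order of machine precision
```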
A Non-Linear PDE.
Now for a more delicate treat. Consider the PDE
ut = (f (u)ux )x ,
where f (u) may be any positive and smooth function. This equation is a non-linear heat equation. We will ignore boundary conditions for now to focus on the non-linearity, and we use the same grid in space and time as for the wave equation.
We devise the following scheme:
[δt− u]^ℓ_j = [δx (f (u)δx u)]^ℓ_j .
On the right hand side we have a small problem. When written out it reads
(1/h)( f (uj+1/2 )[δx u]j+1/2 − f (uj−1/2 )[δx u]j−1/2 ),
and the trouble is we do not know uj±1/2 . Therefore, we choose an approximation
given by
f̄j := f ((uj+1 + uj )/2).
It is easy to see that
f¯j = f (uj+1/2 ) + O(h2 ),
so this approximation is of second order.
When we write out the equation and gather the unknowns u^ℓ_j on one side we get

u^ℓ_j − (∆t/h²)( f̄^ℓ_j (u^ℓ_{j+1} − u^ℓ_j) − f̄^ℓ_{j−1} (u^ℓ_j − u^ℓ_{j−1}) ) = u^{ℓ−1}_j,

which is non-linear in u^ℓ_j. Therefore, to generate the solution at the next time level we
must be able to solve a non-linear set of algebraic equations. There are several ways to
do this, but perhaps the most popular one is Newton-Raphson iteration. See Ref. 
for a discussion.
By choosing for example forward Euler instead of backward Euler one may get an
explicit scheme instead, with no non-linear equations to solve. However, forward Euler
tends to be unstable.
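For illustration, here is the explicit variant just mentioned, sketched in Python with the choice f(u) = 1 + u² (our own, satisfying the positivity and smoothness requirements) and homogeneous Dirichlet boundaries; no algebraic equations need to be solved, at the price of a severe restriction on ∆t:

```python
import math

def fbar(f, a, b):
    # the second-order approximation f((u_{j+1} + u_j)/2) from the text
    return f(0.5 * (a + b))

def nonlinear_heat_step(u, dt, h, f):
    """One explicit (forward Euler) step of u_t = (f(u) u_x)_x, u = 0 at the ends."""
    N = len(u) - 1
    new = [0.0] * (N + 1)
    for j in range(1, N):
        flux = (fbar(f, u[j+1], u[j]) * (u[j+1] - u[j])
                - fbar(f, u[j], u[j-1]) * (u[j] - u[j-1]))
        new[j] = u[j] + dt / h**2 * flux
    return new

f = lambda u: 1.0 + u * u          # positive and smooth, as required
N = 40
h = 1.0 / N
dt = 0.1 * h**2                    # small time step, chosen for stability
u = [math.sin(math.pi * j * h) for j in range(N + 1)]
for _ in range(100):
    u = nonlinear_heat_step(u, dt, h, f)
print(max(u))  # the profile decays, as for the linear heat equation
```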
In addition to being more complicated to solve, the non-linear schemes are more
diﬃcult to analyze with respect to stability and convergence. Ref.  contains some
material on the subject.
The One-Dimensional Schrödinger Equation. As an appetizer to solving the time dependent Schrödinger equation in chapter 7 we will solve the simpliﬁed one-dimensional
version for a spinless particle, i.e.,
iut = −uxx + V (x)u,
where the potential V (x) is some scalar function that is independent of time. The
Hamiltonian is then H = −∂²/∂x² + V . We choose for simplicity Ω = [0, 1] as for the wave
equation and the corresponding grid. The standard ﬁnite diﬀerence approximation
reads
iu̇j = [−δx δx u]j + Vj uj ,
where Vj = V (xj ). Notice that δx δx can be written as a tridiagonal matrix when acting
on uh .
The big question is what discretization to use for time integration. In section 4.5
we will study the Crank-Nicolson scheme, a scheme applied to the Schrödinger equation by Goldberg et al. in Ref. . Let us write out this scheme for our example. It is defined by
i[δt+ u]^ℓ = (1/2)Hu^{ℓ+1} + (1/2)Hu^ℓ.

This yields

(1 + (i/2)∆tH) u^{ℓ+1} = (1 − (i/2)∆tH) u^ℓ,
and we see that we have an implicit scheme. This scheme preserves the norm of
uh ∈ C^N exactly. In other words, the scheme is stable.
The implicit nature of the scheme means that we have to solve a set of linear
equations of dimension N at each time step. This system is easily seen to be tridiagonal,
and hence it can be solved in O(N ) operations, see chapter 5. When going to higher dimensions the system is no longer tridiagonal and it takes much longer to solve.
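The norm preservation is easily checked numerically. The following sketch (in Python rather than the Matlab used elsewhere in this thesis; all helper names are ours) sets up H = −δxδx + V on the interior grid points, takes one Crank-Nicolson step by solving the tridiagonal system with the Thomas algorithm, and compares the norms before and after:

```python
import math

def thomas(a, b, c, d):
    """Solve a tridiagonal system: a = sub-, b = main, c = super-diagonal."""
    n = len(b)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        denom = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / denom if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / denom
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

N = 100                          # interior points on Omega = [0, 1]
h = 1.0 / (N + 1)
dt = 1.0e-3
V = [0.0] * N                    # free particle for simplicity
diag = [2.0 / h**2 + V[j] for j in range(N)]   # H = -dxdx + V: tridiagonal
off = -1.0 / h**2

# initial wave: a discrete sine mode (vanishes at the boundary)
u = [complex(math.sin(3 * math.pi * (j + 1) * h)) for j in range(N)]

alpha = 0.5j * dt                # i dt / 2
def Hu(v, j):
    left = v[j - 1] if j > 0 else 0.0
    right = v[j + 1] if j < N - 1 else 0.0
    return diag[j] * v[j] + off * (left + right)

rhs = [u[j] - alpha * Hu(u, j) for j in range(N)]   # (1 - i dt H / 2) u
a = [alpha * off] * N                                # (1 + i dt H / 2) u_next = rhs
b = [1.0 + alpha * diag[j] for j in range(N)]
c = [alpha * off] * N
u_next = thomas(a, b, c, rhs)

norm = lambda v: math.sqrt(sum(abs(z) ** 2 for z in v))
print(norm(u), norm(u_next))  # equal to within roundoff
```

The Cayley form (1 + i∆tH/2)⁻¹(1 − i∆tH/2) is unitary for Hermitian H, which is exactly what the norm comparison confirms.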
4.2.4 Incorporating Boundary and Initial Conditions
As seen in the last example, incorporating boundary conditions was no big trouble.
Let us however review the process in a more general setting.
When devising a grid G corresponding to the domain Ω, we should choose a subset
Γ ⊂ G corresponding to the boundary ∂Ω ⊂ Ω. On this set we impose our discrete
boundary conditions. Choosing Γ is not always a simple task, but on a rectangular Ω
it is of course in a natural way given by
Γ = {xα ∈ G : αj = 0 or αj = Nj for at least one j}.
Imposing Dirichlet boundary conditions is the simplest task, because one can simply set

u^ℓ_α = ψ(xα , t^ℓ) = ψ^ℓ_α ,    xα ∈ Γ, ℓ = 0, 1, . . . ,
where ψ : ∂Ω → V is the boundary condition.
Neumann conditions may impose diﬃculties. How do we interpret n, i.e., the unit
normal, when considering the discrete boundary Γ? The answer is problem dependent.
For now, let us only consider rectangular domains. In this case, the boundary of Ω
is clearly an n − 1 dimensional box in Rn consisting of 2n sides (that is hyperplanes).
If such a side is described by the condition xk = c with c a constant, then the unit
normal is ±ek ; the kth standard basis vector in Rn , the sign depending on what side
we are considering. Thus,
n · ∇u = ± ∂u/∂xk
along this side.
Let us consider a concrete example for illustration. Given a two dimensional PDE
(with perhaps an additional time dependence) and assume that we want to apply the
Neumann condition
n · ∇u = a
along the boundary of our rectangular domain Ω given by
Ω = [x0 , x1 ] × [y0 , y1 ].
Along the boundary x = x0 the unit normal is −ex and thus
n · ∇u = − ∂u/∂x = a.
Similarly,
∂u/∂x = a
along x = x1 . For the two remaining parts y = y0 and y = y1 we obtain
n · ∇u = − ∂u/∂y = a

and

n · ∇u = ∂u/∂y = a,
respectively.
Solving a transient problem of course means solving the PDE for t ∈ [t0 , t1 ], and
so the time grid is always rectangular. The initial conditions are imposed by devising
special rules for = 0, . . . , p − 1, where p is the order of the highest time derivative
occurring in the PDE. In each case we must consider exactly how, but u0 is of course
the initial function and u1 is usually calculated with either a forward Euler step and
similarly for u1 up to up−1 .
4.3 The Spectral Method
The ﬁnite diﬀerence approximations for spatial derivatives that are employed are typically of rather low order in the step size h, i.e., typically of order two. Based on the
discrete Fourier transform (DFT) we may create a method that is of order N , where N
is the number of grid points in one spatial direction. While this method is applicable
up to three dimensions, and while it provides great accuracy, there are some catches.
First of all the computational eﬀort is much higher. A ﬁnite diﬀerence approximation requires typically not much more than O(n) operations (where n is the total
number of grid points), while the spectral method requires O(n log n) operations when
we utilize the so-called fast Fourier transform algorithm (FFT) for performing the
DFT.3 In addition, the FFT algorithm requires 2k grid points in each spatial direction
with an equal spacing each, and the method requires periodic boundary conditions in a
natural way. Thus the freedom of the geometry in question gets somewhat restricted.
The idea behind the spectral method is that diﬀerentiation is a diagonal operator
in the frequency domain. Recall that the Fourier transform of a function f (x) is given
by
F [f ](k) := g(k) = (1/2π) ∫_{−∞}^{∞} f (x) e^{−ikx} dx,
and the inverse Fourier transform is given by
F^{−1}[g](x) := f (x) = ∫_{−∞}^{∞} g(k) e^{ikx} dk.
3 See Ref. [...] for a discussion of the fast Fourier transform.
Differentiating, we get

f ′(x) = ∫_{−∞}^{∞} ik g(k) e^{ikx} dk.
Thus,
F [f ′(x)](k) = ikF [f ](k) = ikg(k),
and diﬀerentiating becomes multiplication with ik in the frequency domain, i.e., it is
a diagonal operation. It is easy to see that if A is an operator combining different differentiations, i.e., A = A(∂/∂x),4 then

F [A(∂/∂x)f (x)](k) = A(ik)F [f ](k).

For example, if A = ∂²/∂x² + β ∂/∂x, then

F [Af (x)](k) = (−k² + iβk)F [f ](k).
It is not diﬃcult to imagine that if we can make a good approximation to the Fourier
transform of our discrete function, then we get a good approximation to any diﬀerentiation process with a simple diagonal (i.e., multiplicative) operator on the Fourier
transformed function. Operating with A on f (x) is then equivalent to calculating
A(∂/∂x)f = F^{−1}[A(ik)F [f ]].
4.3.1 The Discrete Fourier Transform
Assume that we have a discretely sampled function f (x) on [0, a], i.e., that we have a
one-dimensional grid G of N + 1 points (with N even), viz.,
G = {xn = hn : n = 0, 1, . . . , N }
with fn = f (xn ) and h = a/N . We shall assume f0 = fN , viz., that f is periodic.
If we imagine the discrete f as being a superposition of plane waves, i.e. of diﬀerent
eikx , then clearly we cannot have wave numbers greater than π/h, that is, waves
with wavelength smaller than 2h. If the wavelength were smaller, the discrete e^{ikx} would be indistinguishable from the discrete version of a wave of larger wavelength. This phenomenon is called
aliasing or folding. The critical wave number kc = π/h = N π/a is called the Nyquist
wave number.5
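Aliasing is easy to demonstrate: on the grid, shifting the wave number by 2kc = 2π/h reproduces exactly the same samples, since e^{i 2kc x_n} = e^{2πin} = 1. A minimal sketch (Python; the specific numbers are our own choices):

```python
import cmath, math

N = 16
a = 1.0
h = a / N
kc = math.pi / h             # the Nyquist wave number
x = [n * h for n in range(N)]

k = 3.0 * 2 * math.pi        # some wave number well below kc
wave = [cmath.exp(1j * k * xn) for xn in x]
alias = [cmath.exp(1j * (k + 2 * kc) * xn) for xn in x]

# The two waves agree at every grid point: exp(i 2 kc x_n) = exp(2 pi i n) = 1.
diff = max(abs(w - al) for w, al in zip(wave, alias))
print(diff)  # zero up to roundoff
```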
As motivation for the discrete Fourier transform we need the so-called Nyquist
theorem which we state without proof.
Theorem 16
Assume that g(k) = F [f ](k) is identically zero for |k| ≥ kc = π/h, i.e., that f (x)
is band-limited. Then f (x) is uniquely determined by the discrete version fn , n =
0, 1, . . . , N − 1.
This remarkable theorem states that the information content in f (x) is very much
less than the content in a function that is not band-limited. In fact, the complete
information is given by a ﬁnite number of values, namely fn . On the other hand, this
means that there is much redundant information in the Fourier transformed function
g(k); it would be suﬃcient to devise N numbers gm = g(km ), m = 0, 1, . . . N − 1
that can be put in a one-to-one correspondence with fn . In other words, we may
4 Of course, some care should be taken when claiming this, but at least for simple polynomial
expressions it holds trivially.
5 Some texts work with time t and angular frequency ω as conjugate variables. In that case one
speaks of the Nyquist frequency.
devise a discrete Fourier transform (that of course in some sense should be a good
approximation to the continuous transform) containing all the information in fn .
The DFT is defined by introducing a second grid G′ with N + 1 points, viz.,

G′ = {k_m = 2πm/(N h) = 2kc m/N : m = −N/2, . . . , N/2},
and by using the trapezoidal rule for approximating the Fourier transform at the grid
points km , viz.,
g_m = (1/2π) ∫_0^a e^{−ik_m x} f (x) dx ≈ (h/2π) Σ_{n=0}^{N−1} e^{−ik_m x_n} f_n = (h/2π) Σ_{n=0}^{N−1} e^{−2πinm/N} f_n ,    (4.8)
where a = N h is the length of the interval, and where we have used the periodicity of
f . If we deﬁne the matrix Z by
Zmn = e−2πinm/N ,
then
gm =
n, m = 0, 1, . . . , N − 1,
N
−1
Zmn fn .
n=0
Clearly, f has only N independent components due to f0 = fN , and we also have g_{−N/2} = g*_{N/2}. If we can prove that Z is invertible, then we have a well-defined one-to-one mapping between fn and gm . It turns out that Z is in fact unitary (up to a multiplicative constant), so in the same way as for the original Fourier transform we may interpret the transformation as an orthogonal change of basis.6 To be more specific,

Z Z† = N I  ⇒  Z^{−1} = (1/N ) Z†,
where I is the identity matrix. Let us prove this.
Theorem 17
The N × N matrix Z given by

Z_mn = e^{−2πinm/N},    n, m = 0, 1, . . . , N − 1,

obeys

Z Z† = N I,    Z^{−1} = (1/N ) Z†.

In other words, the column vectors form an orthogonal basis for C^N, their length being √N.
Proof: To prove this we must compute

[Z Z†]_mn = Σ_{j=0}^{N−1} Z_mj Z†_jn = Σ_{j=0}^{N−1} e^{−2πimj/N} e^{2πinj/N} = Σ_{j=0}^{N−1} e^{2πij(n−m)/N} ≡ Σ_{j=0}^{N−1} φ_j .

6 See Appendix A.
Note that for n = m we have φj = 1, so that the diagonal elements are equal to N. For the off-diagonal elements, let S = Σ_{j=0}^{N−1} φ_j and note that 0 < |n − m| < N. Multiplying S by e^{2πi(n−m)/N} merely shifts the summation range, viz.,

S e^{2πi(n−m)/N} = Σ_{j=1}^{N} e^{2πij(n−m)/N} = S − 1 + e^{2πi(n−m)} = S,

since e^{2πi(n−m)} = 1. As 0 < |n − m| < N, (n − m)/N is not an integer, so e^{2πi(n−m)/N} ≠ 1, and consequently we must have S = 0.
Hence, Z Z† = N I, and if ψn is the n’th column vector of Z, then (ψm , ψn ) is the inner product computed by the m’th row vector of Z†, and hence

(ψm , ψn ) = δmn N  ⇒  ‖ψn‖ = √N.
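Theorem 17 is also easily verified numerically; the sketch below (Python) builds Z for a small N and checks that Z Z† = N I up to roundoff:

```python
import cmath

N = 8
# Z_mn = exp(-2 pi i n m / N)
Z = [[cmath.exp(-2j * cmath.pi * n * m / N) for n in range(N)]
     for m in range(N)]

# compute Z Z^dagger elementwise
ZZ = [[sum(Z[m][j] * Z[n][j].conjugate() for j in range(N))
       for n in range(N)] for m in range(N)]

max_off = max(abs(ZZ[m][n]) for m in range(N) for n in range(N) if m != n)
max_diag_err = max(abs(ZZ[m][m] - N) for m in range(N))
print(max_off, max_diag_err)  # both vanish up to roundoff
```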
By virtue of the unitarity property of Z the inverse transformation of Eqn. (4.8) is
then given by
f_n = (2π/N h) Σ_{m=0}^{N−1} Z†_nm g_m = (2π/N h) Σ_{m=0}^{N−1} e^{2πinm/N} g_m .    (4.9)
Note that considered as functions deﬁned on the whole real line, both fn and gm are
periodic functions: fn ≡ fN +n and gm ≡ gN +m . Hence, the spectral range |k| ≤ kc is
not unique. For an interpretation of the inverse DFT we turn to the inverse continuous
Fourier transform and again use the trapezoidal rule for approximation, viz.,
f_n = ∫_{−kc}^{kc} e^{ikx_n} g(k) dk ≈ (2π/N h) Σ_{m′=−N/2}^{N/2−1} e^{ik_{m′} x_n} g_{m′} .    (4.10)
We have used the natural frequency range given by the Nyquist theorem.
It is convenient, however, both for implementation purposes and for the intuitive appeal of the final formulas of the DFTs, to have the same summation range in both cases. By using Eqn. (4.10) we get the wave number range |k| ≤ kc , and by using Eqn. (4.9) we get the range 0 ≤ k ≤ 2kc .
If we shift the negative indices in Eqn. (4.10) upward by N, we get Eqn. (4.9). But then we must bear in mind what the actual frequencies km are, according to Fig. 4.3: in Eqn. (4.9), taking 0 ≤ m ≤ N/2 − 1 corresponds to k ≥ 0, while taking N/2 ≤ m ≤ N − 1 corresponds to k < 0. We may of course just use the range 0 ≤ k ≤ 2kc ; the numerical results are equivalent due to the periodicity of e^{ikx}. Note that taking −kc or kc at m = ±N/2 is equivalent.
In a similar fashion, one sees that the restriction to the interval 0 ≤ x ≤ a is not necessary. Shifting the interval only leads to multiplying gm by a constant of magnitude 1.
The number h is in computational settings a quite small number. If we define hGm := gm , we may rid ourselves of this factor in the transformations. We now have the pair

G_m = (1/2π) Σ_{n=0}^{N−1} Z_mn f_n    (DFT)

and

f_n = (2π/N ) Σ_{m=0}^{N−1} Z†_nm G_m .    (inverse DFT)
Figure 4.3: Adjusting the wave numbers
Note that h does not appear anywhere in the formulas, implying that the coefficients Gm are independent of h.
Because of the trapezoidal approximation nature of the definition, it is natural to take ∂/∂x to be a diagonal operator when applied to the transformed function (which is now a vector in C^N), viz.,

[f ′]n := (1/N )[Z† D (Z f̃ )]n ,

where f̃ is the vector whose components are fn , and D is a diagonal matrix whose first N/2 diagonal elements are Dmm = ikm = 2πim/(N h), m = 0, . . . , N/2 − 1. The last N/2 diagonal elements are then given as Dmm = 2πi(m − N )/(N h), m = N/2, . . . , N − 1, corresponding to the negative wave numbers. It can be shown that the DFT provides an approximation to f ′(x) of order N, i.e., we have exponential convergence of the differential operator. Intuitively this is so because the spectral method uses all values fn to compute the derivative, not just the ones in the immediate vicinity of xn .
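The accuracy can be illustrated with a small experiment. The sketch below (pure Python with O(N²) matrix transforms; a real implementation would use the FFT) differentiates the smooth periodic function f(x) = exp(sin x) on [0, 2π] and compares the spectral derivative with a centered difference. With a = 2π we have 2π/(Nh) = 1, so km is simply the integer index with the sign adjustment of Fig. 4.3:

```python
import cmath, math

N = 32
a = 2 * math.pi
h = a / N
x = [n * h for n in range(N)]
f = [math.exp(math.sin(xn)) for xn in x]
exact = [math.cos(xn) * math.exp(math.sin(xn)) for xn in x]

# forward transform: G_m = sum_n f_n exp(-2 pi i n m / N)
G = [sum(f[n] * cmath.exp(-2j * math.pi * n * m / N) for n in range(N))
     for m in range(N)]

# multiply by i k_m, with k_m = m for m < N/2 and k_m = m - N otherwise
ik = [1j * (m if m < N // 2 else m - N) for m in range(N)]
dG = [ik[m] * G[m] for m in range(N)]

# inverse transform with the 1/N factor
df_spec = [sum(dG[m] * cmath.exp(2j * math.pi * n * m / N)
               for m in range(N)).real / N for n in range(N)]

# periodic centered difference for comparison
df_fd = [(f[(n + 1) % N] - f[(n - 1) % N]) / (2 * h) for n in range(N)]

err_spec = max(abs(d - e) for d, e in zip(df_spec, exact))
err_fd = max(abs(d - e) for d, e in zip(df_fd, exact))
print(err_spec, err_fd)  # the spectral error is vastly smaller
```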
4.3.2 A Simple Implementation in Matlab
Here we present a very simple implementation of the spectral method applied to a
time-independent model problem. More speciﬁcally, we wish to study the scattering of
a wave packet onto a square barrier potential. We will also make a brief comparison
of the DFT results with ﬁnite diﬀerence results. Further error analysis will not be
performed in this example.
The Hilbert space for this problem is given by
H = L2 ([−c, c]),
where c is chosen large enough so that we may neglect interference of the wave packet
with itself due to the boundary conditions. The Hamiltonian is
H = T + V,
with T = −∂ 2 /∂x2 and
V (x) = a for x ∈ [−b, b], and V (x) = 0 otherwise.
Here, a is a positive constant deﬁning the strength of the barrier, and 2b is the width.
As initial condition we use a Gaussian wave packet centered at x0 with initial
momentum k0 and width σ, i.e.,
$$ \psi_0(x) = \frac{1}{(2\pi\sigma^2)^{1/4}}\, e^{-(x-x_0)^2/2\sigma^2}\, e^{-ik_0 x}. $$
We will choose x_0 and k_0 so that the wave is initially negligible near the barrier and so that it is travelling towards it from the left.
Since we are going to use the DFT for implementing the diﬀerential operator, we
employ periodic boundary conditions.
For integration in time of the Schrödinger equation we use a split-operator method (see Ref. ) which approximates the propagator to third order in the time step size τ, viz.,
$$ U_\tau = e^{-i\tau T/2}\, e^{-i\tau V}\, e^{-i\tau T/2} = U(t_0 + \tau, t_0) + O(\tau^3). $$
Note that V is diagonal in position representation and that T is diagonal when applied
to the Fourier transformed wave function. Therefore, the algorithm for obtaining the
(approximate) wave function at time t + τ reads:
1. Use DFT on the numerical solution.
2. Apply e^{−iτT/2} (a diagonal operator).
3. Use inverse DFT.
4. Apply e^{−iτV} (again a diagonal operator).
5. Use DFT on the numerical solution.
6. Apply e^{−iτT/2} (a diagonal operator).
7. Use inverse DFT.
If we perform this scheme several times in succession, we are doing quite a few redundant Fourier transforms. Usually one combines the last and the ﬁrst application of
e−iτ T /2 if no output of the wave function is desired in between time steps. For this
simple problem however, it is no problem to waste a few computer cycles.
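A complete split-operator time step can be sketched in a few lines with FFTs. The following Python version is our own illustration of the algorithm above (not the Matlab program of appendix B); it uses the simulation parameters quoted below and checks that the scheme conserves the norm to round-off.

```python
import numpy as np

# Split-operator step e^{-i tau T/2} e^{-i tau V} e^{-i tau T/2} on a periodic
# grid; parameters as in the text (a = 28, b = 2.5, x0 = -10, k0 = 5, sigma^2 = 4).
N, c = 1024, 30.0
x = np.linspace(-c, c, N, endpoint=False)
h = x[1] - x[0]
k = 2 * np.pi * np.fft.fftfreq(N, d=h)

a, b, x0, k0, sigma2 = 28.0, 2.5, -10.0, 5.0, 4.0
V = np.where(np.abs(x) <= b, a, 0.0)
psi = (2 * np.pi * sigma2) ** -0.25 \
    * np.exp(-(x - x0) ** 2 / (2 * sigma2)) \
    * np.exp(1j * k0 * x)   # sign chosen so the packet moves right under this FFT convention

tau = 0.00025
expT = np.exp(-1j * tau * k ** 2 / 2)   # T = k^2 is diagonal in Fourier space
expV = np.exp(-1j * tau * V)            # V is diagonal in position space

def step(psi):
    psi = np.fft.ifft(expT * np.fft.fft(psi))
    psi = expV * psi
    return np.fft.ifft(expT * np.fft.fft(psi))

norm0 = np.sum(np.abs(psi) ** 2) * h
for _ in range(100):
    psi = step(psi)
assert abs(np.sum(np.abs(psi) ** 2) * h - norm0) < 1e-10
```

Each exponential factor has unit modulus and the FFT is unitary up to scaling, so the norm is conserved up to round-off; this is precisely the stability property demanded in section 4.5.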
The integration scheme is also applicable to finite difference methods. If we use the −δ_xδ_x difference approximation to T, then we have by the definition of the exponential operator
$$ e^{i\tau\delta_x\delta_x/2} = 1 + \frac{i\tau}{2}\,\delta_x\delta_x - \frac{\tau^2}{8}\,(\delta_x\delta_x)^2 + O(\tau^3), $$
and the error of this approximation is of the same order as the scheme itself. The δx δx
operator with periodic boundary conditions is easily implemented with a sparse matrix
in Matlab, making the implementation quick and easy to read. See appendix B for the
code.
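The sparse construction is equally short in Python with scipy; the following sketch (our own names and parameters, not the appendix B code) builds the periodic operator.

```python
import numpy as np
import scipy.sparse as sp

# Periodic second-difference operator delta_x delta_x as a sparse matrix.
N, h = 16, 0.5
main = -2.0 * np.ones(N)
off = np.ones(N - 1)
D2 = sp.diags([off, main, off], [-1, 0, 1], format="lil")
D2[0, N - 1] = 1.0   # wrap-around entries enforce periodic boundary conditions
D2[N - 1, 0] = 1.0
D2 = (D2 / h ** 2).tocsr()

# A constant function has vanishing second difference on the periodic grid.
assert np.allclose(D2 @ np.ones(N), 0.0)
```

Only 3N entries are stored, so applying the operator costs O(N) work regardless of grid size.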
In our program we implement both the spectral method and the ﬁnite diﬀerence
method for comparison.
Fig. 4.4 shows a series of numerical wave functions for a simulation with a = 28,
b = 2.5, x0 = −10, k0 = 5 and σ 2 = 4. Parameters for the numerical methods
were N = 1024 and τ = 0.00025. Note that the kinetic energy of the wave packet
is k02 = 25 < a, so that classically the particle is not allowed to pass the barrier.
Nevertheless we see that some of the packet passes the obstacle, i.e., there is some
probability that the particle upon measurement of its position will be found in the
classically forbidden regime.
Performing numerical experiments of this kind can give very rich insight into the
behavior of the system. This particular case can display resonance phenomena such as
Figure 4.4: Results from running a simple simulator using the spectral method and split-operator time stepping. (Four panels show the wave function over x ∈ [−30, 30] at t = 0.0, 0.5, 1.0 and 2.0.)
some of the probability getting caught inside the barrier. (This is actually hinted at
in the picture at t = 2.0.) Furthermore, we see an interference pattern at the moment
of collision. What is the width of the fringes? What parameters in the simulation
does it vary with? Another interesting experiment is to measure the particle’s mean
position at each side of the wall as function of t and compare this with the classical
results. Animations of systems of this kind can be reached from Ref. . These are
produced with ﬁnite diﬀerence methods and the leap-frog scheme for time integration.
(See section 4.5 for details on the leap-frog scheme.) These simulations were used in
lectures with the aim that the students should gain some insight into the behavior of
quantum mechanical systems.
In the last picture, the ﬁnite diﬀerence solution is also shown for comparison. Qualitatively we see that they match very well, indicating that both the ﬁnite diﬀerence
method and the spectral method yield reasonable results.
4.4 Finite Element Methods
In this section we introduce the finite element method, a powerful class of numerical methods for solving partial differential equations. Perhaps the most intuitively appealing aspect of the method is its handling of complicated geometries. Furthermore, the method is formulated in a highly modularized way, giving object-oriented languages such as C++ a great advantage when one wishes to implement the methods.
We start out by introducing the ﬁnite element method in a rather informal way,
emphasizing the algorithm over the numerical properties of the method. We begin
by introducing the weighted residual method of which the ﬁnite element method is
a special case. Next, we compute an example in one dimension and brieﬂy describe
instances of the method in higher dimensions.
Finally, implementations of the ﬁnite element method with the aid of the programming library Diﬀpack for C++, which is the combination of tools used in this thesis,
are discussed.
Throughout the discussion we will use the typical Hamiltonian
$$ H = -\nabla^2 + V(x) $$
as a prototype for differential operators that we need to discretize. (We omit constants such as ℏ and µ.)
Finite elements are typically used in the spatial domain. For time dependent problems we use some diﬀerence scheme in time, leading to a sequence of spatial problems
that we may solve with the ﬁnite element method. Hence, we will consider stationary
problems in the introduction to keep it simple.
Consider for simplicity the backward Euler method in time, i.e.,
$$ \frac{1}{\Delta t}\left(u^{\ell+1} - u^{\ell}\right) = -iHu^{\ell+1}, $$
where u^ℓ denotes the (complex) wave function at time t_ℓ = ℓ∆t. This equation may be written
$$ \left(1 + i\Delta t H\right)u^{\ell+1} = u^{\ell}, $$
or
$$ Au = f, \tag{4.11} $$
where A = 1 + i∆tH, u = u^{ℓ+1} is the unknown and f = u^ℓ. Eqn. (4.11) is the prototype
of the kind of equations we solve in both ﬁnite diﬀerence and ﬁnite element methods.
We see that the operator A is a simple function of the Hamiltonian, and this is the
case in all our applications. Hence, it is important to understand how operators such
as ∇2 , V (x ) and so on are discretized with the ﬁnite element method.
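As a concrete instance of Eqn. (4.11), one backward Euler step with a finite difference Hamiltonian can be sketched as follows; the grid, potential and step size are made-up illustration values, not taken from the thesis.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# One backward Euler step: solve (1 + i*dt*H) u_new = u_old,
# with H = -d^2/dx^2 + V discretized by central differences.
N, h, dt = 200, 0.1, 0.01
x = np.arange(N) * h
V = 0.5 * (x - 10.0) ** 2              # an arbitrary confining potential
lap = sp.diags([np.ones(N - 1), -2.0 * np.ones(N), np.ones(N - 1)],
               [-1, 0, 1]) / h ** 2
H = -lap + sp.diags(V)

A = sp.identity(N, dtype=complex) + 1j * dt * H
u_old = np.exp(-((x - 10.0) ** 2)).astype(complex)
u_new = spla.spsolve(A.tocsc(), u_old)

# Backward Euler damps: the norm cannot grow (cf. section 4.5).
assert np.linalg.norm(u_new) <= np.linalg.norm(u_old)
```

Whatever spatial discretization replaces the difference operator here, the step always reduces to one linear solve with a matrix that is a simple function of the Hamiltonian.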
The PDE is defined on some domain Ω ⊂ R^d, where d is the dimension of our system. Finite element methods require Ω to be a compact domain, i.e., closed and bounded, and we will also take it to be path-connected. For unbounded quantum mechanical systems we must in some way choose an appropriate bounded representative of the whole space.
We also have some kind of boundary conditions in our PDE. Typically, we may have
Dirichlet boundary conditions or Neumann conditions. Divide the boundary Γ = ∂Ω
into two parts: ΓD and ΓN , on which we impose Dirichlet and Neumann conditions,
respectively:
$$ u(x) = g(x), \qquad x \in \Gamma_D, $$
and
$$ \frac{\partial u}{\partial n} = n\cdot\nabla u(x) = h(x), \qquad x \in \Gamma_N, $$
where n is the unit normal on ΓN .
4.4.1 The Weighted Residual Method
In the weighted residual method we define a subspace V_h ⊂ V as
$$ V_h = \operatorname{span}\{N_1(x), N_2(x), \ldots, N_m(x)\}, \qquad \dim V_h = m. $$
Hence, Ni form a basis for Vh .7 In Vh we seek our discrete approximate solution uh .
7 The subscript h usually denotes some kind of discretization. Recall that h is usually used for the
mesh width in discretization formulations.
Any element u_h ∈ V_h can be written
$$ u_h = \sum_{j=1}^{m} u_j N_j, $$
and we deﬁne a vector U ∈ Cm by letting Uj = uj . Note the fundamental diﬀerence
from ﬁnite diﬀerence methods in which we ignore the behavior of u in between grid
points. In the weighted residual method the discrete function is exactly that; a function
deﬁned in the whole domain.
Next, for any u ∈ V deﬁne the residual R through
R := Au − f.
The residual vanishes if and only if u is a solution of our prototype linear PDE (4.11).
The idea of the weighted residual method is to choose m functions Wi ∈ V with
which we weight the residual R, i.e., we take the inner product. Requiring this inner
product to vanish for each i then forces R to be small (in some sense) and leads to m
equations for the m unknown coeﬃcients ui . In other words, we require that
(Wi , R) = 0,
i = 1, . . . , m.
By linearity of A we obtain
$$ \sum_{j=1}^{m} u_j (W_i, AN_j) = (W_i, f). $$
Clearly, this is a matrix equation of finite dimension. Define the matrix A_h as
$$ (A_h)_{ij} := (W_i, AN_j) = \int_\Omega W_i^*(x)\, A N_j(x)\, d\Omega $$
and the vector b as
$$ b_i := (W_i, f) = \int_\Omega W_i^*(x) f(x)\, d\Omega, $$
and the weighted residual method becomes equivalent to the matrix equation
$$ A_h U = b. $$
If this equation has a unique solution, we have found the unique discrete function
uh ∈ Vh such that the residual is orthogonal to all weighting functions Wi . If we deﬁne
W = span {W1 , W2 , . . . Wm } ⊂ V,
then R is orthogonal to all vectors in W , i.e., it belongs to the orthogonal complement
W ⊥ of W . If W spans a large portion of V then W ⊥ must be only a small portion of
V (in some intuitive sense; dim W ⊥ may easily be inﬁnite!)
If our PDE is non-linear, our system of algebraic equations also becomes non-linear. This is a more complicated problem to solve, and methods such as Newton-Raphson iteration or successive substitution are popular solution methods, see Ref. .
There are of course many diﬀerent ways of choosing the weighting functions Wi .
We mention two popular choices here.
– The collocation method, in which we require the residual to vanish at m speciﬁed
points, i.e., the weighting functions are given by
Wi (x ) = δ(x − x [i] )
for each of the m points x [i] .
– Galerkin’s method, in which we require R to be orthogonal to Vh which is equivalent to R being orthogonal to the basis functions Ni . Hence,
Wi = Ni
are the weighting functions.
We will exclusively use Galerkin's method in this text. When considering the numerical properties of the weighted residual method and the finite element method, Galerkin's method shows some remarkable and fortunate properties, such as best approximation properties in different norms and equivalence with minimization of a functional over V and V_h, respectively, in the continuous and discrete problems.
Galerkin's method written out in terms of our prototype differential equation then reads
$$ \sum_{j=1}^{m} u_j (N_i, AN_j) = (N_i, f), \qquad i = 1, \ldots, m. $$
4.4.2 A One-Dimensional Example
We have introduced the weighted residual method, of which the ﬁnite element method
is a special case. The ﬁnite element method deﬁnes the basis functions Ni and in a
quick introduction such as this it is best explained through an example.
When solving the discrete equations numerically the particular form of Ah matters.
First, if Ni are far from orthogonal we will in general obtain a dense matrix Ah which
requires a lot of computer time and storage to process. Second, the matrix may become
ill-conditioned, rendering the solution sensitive to round-oﬀ errors. Hence, we should
choose orthogonal (or nearly orthogonal) basis functions. The ﬁnite element method
in a natural way ensures this.
The ﬁnite element basis functions Ni are deﬁned in conjunction with the grid. The
grid in ﬁnite element methods is somewhat more complicated than in ﬁnite diﬀerence
methods. In addition to deﬁning grid points called nodes, the domain is also divided
into disjoint elements with which the nodes are associated. The elements play a fundamental role when deﬁning the basis functions.
For the one-dimensional example, consider Ω = [0, 1] and choose m points x^{[k]} such that
$$ 0 = x^{[1]} < x^{[2]} < \cdots < x^{[m]} = 1. $$
The grid points, or nodes, naturally divide Ω into sub-intervals Ω_e = [x^{[e]}, x^{[e+1]}]. These sub-intervals are our elements, and we see that we have n_e = m − 1 elements.
Now we deﬁne our m basis functions Ni :
– Each Ni should be a simple polynomial over each element Ωe . Thus, uh becomes
a piecewise polynomial function. In our case we choose linear functions.
– Fundamental in the ﬁnite element method is the requirement
Ni (x[j] ) = δij .
This imposes as many conditions on each polynomial as there are nodes in an
element. In our case we have two nodes per element, which implies that Ni must
be linear over each element.
Note that
$$ u_h(x^{[i]}) = \sum_j u_j N_j(x^{[i]}) = \sum_j u_j \delta_{ij} = u_i, $$
i.e., u_i is u_h evaluated at a node point.
Figure 4.5: Linear elements in one dimension. A sample basis function N_k is shown together with a piecewise linear function u_h. (The figure marks the nodes x^{[e−1]}, x^{[e]}, x^{[e+1]} and the elements Ω_{e−1}, Ω_e.)
Fig. 4.5 illustrates our one-dimensional grid. If we let h_e = |Ω_e|, then it is easy to see that for i = 2, …, m − 1, N_i(x) is given by
$$ N_i(x) = \begin{cases} 0 & x \notin \Omega_{i-1}\cup\Omega_i \\ \frac{1}{h_{i-1}}\left(x - x^{[i-1]}\right) & x \in \Omega_{i-1} \\ 1 - \frac{1}{h_i}\left(x - x^{[i]}\right) & x \in \Omega_i. \end{cases} $$
For i = 1 or i = m the definition is similar, i.e.,
$$ N_1(x) = \begin{cases} 0 & x \notin \Omega_1 \\ 1 - \frac{1}{h_1}\left(x - x^{[1]}\right) & x \in \Omega_1 \end{cases} $$
and
$$ N_m(x) = \begin{cases} 0 & x \notin \Omega_{m-1} \\ \frac{1}{h_{m-1}}\left(x - x^{[m-1]}\right) & x \in \Omega_{m-1}. \end{cases} $$
Hence, the element functions vanish identically over most of the grid. The "tent functions" are also almost orthogonal, as (N_i, N_j) = 0 whenever x^{[i]} and x^{[j]} do not belong to the same element. It is also easy to see that they can be combined to create any piecewise linear function, with the x^{[k]} defining the joints of the linear pieces.
It is easy to see that the nodal point condition implies that
$$ (N_i, N_j) = \int_\Omega N_i N_j \, d\Omega $$
vanishes when i and j are nodal indices belonging to different elements. Looking forward, we see that matrices whose elements are of the form (AN_i, BN_j), with A and B arbitrary linear operators, obtain the same sparse structure.
Let us sketch a weighted residual statement with these basis functions. We wish to solve
$$ Au = f, \qquad A = 1 - \nabla^2. $$
We choose as boundary conditions
$$ u(0) = C_1, \qquad u'(1) = C_2, $$
a Dirichlet and a Neumann condition, respectively. The Dirichlet condition automatically implies u_1 = C_1. As we will see, the Neumann condition enters when we rephrase our equations in weak form; see also section 4.6 for more on the weak form.
Note that N_i'' is not defined, since N_i' is piecewise constant and hence cannot be differentiated.^8 The weighted residual statement reads
$$ \sum_{j=1}^{m} u_j\left(N_i, N_j - N_j''\right) = (N_i, f), \qquad i = 1, \ldots, m. $$
In other words,
$$ \sum_{j=1}^{m} u_j\left(\int_\Omega N_i(x)N_j(x) - \int_\Omega N_i(x)N_j''(x)\right) = \int_\Omega N_i(x)f(x), $$
where we have used that the N_i are real functions. The second order derivative is eliminated by integrating by parts, viz.,
$$ \int_\Omega N_i(x)N_j''(x) = -\int_\Omega N_i'(x)N_j'(x) + \left[N_i(x)N_j'(x)\right]_0^1. $$
This gives
$$ \sum_{j=1}^{m} u_j\left(\int_\Omega N_i'(x)N_j'(x) + \int_\Omega N_i(x)N_j(x)\right) + N_i(0)u_h'(0) - N_i(1)u_h'(1) = \int_\Omega N_i(x)f(x). $$
The boundary term at x = 0 seems troublesome, as we do not know u_h'(0). However, equation i = 1 is eliminated due to the left boundary condition, a Dirichlet condition. As N_i(0) = 0 for the remaining equations, the term drops out. The boundary term at x = 1 involves the Neumann condition, and we have
$$ N_i(1)u_h'(1) = \delta_{im} C_2. $$
Not only did we get rid of the second derivatives by integration by parts, but we also introduced the Neumann condition into the discrete system in a convenient way.
Finally we obtain the equations
$$ u_1 = C_1, \tag{4.12} $$
$$ \sum_{j=1}^{m} u_j\left(\int_\Omega N_i'(x)N_j'(x) + \int_\Omega N_i(x)N_j(x)\right) = \int_\Omega N_i(x)f(x) - \delta_{im}C_2, \qquad i = 2, \ldots, m. \tag{4.13} $$
This is a set of m linear equations in the unknowns u_i.
There are two matrices that appear naturally in the equations: the stiffness matrix K and the mass matrix M. They are defined by
$$ K_{ij} := \int_\Omega \nabla N_i \cdot \nabla N_j \, d\Omega \tag{4.14} $$
and
$$ M_{ij} := \int_\Omega N_i N_j \, d\Omega. \tag{4.15} $$
They have a tendency to appear in finite element formulations, and it is wise to get to know them for the most used element types, such as the linear elements in one dimension. They are very sparse in large systems, which indicates that iterative methods can solve the resulting linear systems with great efficiency, see chapter 5.
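For linear elements the element matrices can be written down in closed form, and assembly reduces to a short loop. The following sketch is our own illustration with an assumed uniform mesh; it reproduces the well-known tridiagonal K and M.

```python
import numpy as np

# Element stiffness and mass matrices for a linear element of width he,
# obtained by integrating the tent functions analytically, then assembled
# element by element.
m, L = 6, 1.0                  # m nodes, ne = m - 1 elements
he = L / (m - 1)
Ke = (1.0 / he) * np.array([[1.0, -1.0], [-1.0, 1.0]])
Me = (he / 6.0) * np.array([[2.0, 1.0], [1.0, 2.0]])

K = np.zeros((m, m))
M = np.zeros((m, m))
for e in range(m - 1):
    idx = np.ix_([e, e + 1], [e, e + 1])
    K[idx] += Ke
    M[idx] += Me

# Interior rows: K ~ (1/he)[-1, 2, -1] and M ~ (he/6)[1, 4, 1].
assert np.isclose(K[2, 2], 2.0 / he) and np.isclose(K[2, 1], -1.0 / he)
assert np.isclose(M[2, 2], 2.0 * he / 3.0) and np.isclose(M[2, 1], he / 6.0)
```

Note that the interior rows of K coincide, up to the factor 1/h_e, with the familiar −δ_xδ_x finite difference stencil; the mass matrix is what distinguishes the two methods.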
Let us sum up the general results from these calculations:
^8 Strictly speaking, neither can N_i' be differentiated, as N_i' is undefined at the nodes. Whatever value we choose for N_i'(x^{[j]}) does however not contribute to the integrals. This is connected with the concept of weak derivatives, see Ref. .
– Galerkin's method leads to an m-dimensional system of linear equations,
$$ A_h U = b, $$
where A_h = M + K and b_j = (N_j, f), except for modifications due to boundary conditions.
– A Dirichlet boundary condition at node x^{[k]} is enforced by replacing equation k in the system with the boundary condition, i.e., A_kk = 1, A_kj = 0 whenever j ≠ k, and b_k = g(x^{[k]}), where g is the prescription of u at the boundary. Dirichlet conditions are called essential boundary conditions in finite element contexts because their incorporation is done directly in the linear system.
– Neumann boundary conditions are imposed by integration by parts of second
order derivatives. This leads to extra terms in the right hand side vector b.
Neumann conditions and other boundary conditions involving derivatives are
called natural boundary conditions in ﬁnite element contexts. They “appear naturally” in the process of integrating by parts.
4.4.3 More on Elements and the Element-By-Element Formulation
In this section we will brieﬂy describe the generalization of the ideas presented in the
one-dimensional example above.
In the one-dimensional case the shape of the elements Ωe becomes rather limited.
They are simply intervals of varying length. In higher dimensions we have greater
freedom of choosing element shapes. For example, if linear elements (i.e., linear Ni )
are employed, the shape of Ω_e may be any quadrangle. Furthermore, triangular shapes may be used, and indeed this is a very popular choice because very good algorithms exist for dividing a region Ω into triangles. In three dimensions we would use deformed parallelepipeds and tetrahedrons.
A simple example is depicted in Fig. 4.6. A simple rectangular grid is subdivided
into equal square slabs. The nodes are located at the corners of each slab, and an element function is depicted at a particular node. Note the obvious tensor-generalization
of the one-dimensional case, see Ref.  for details on tensor-product generalizations
of one-dimensional element types.
Clearly, the process of calculating the element basis becomes rather complicated
with increasing complexity of the geometries, location and shapes of elements and
so on. All geometric variants of an element are reﬂected in the expressions for the
corresponding basis functions over that element. Hence, it is natural to use a reference
element and map the results from this element back to the real geometry. This is
done in the element-by-element formulation, which we brieﬂy describe here. For more
details, see Ref. .
The idea is to ﬁrst rewrite the integral over Ω as a sum of the integrals over each
element Ωe , viz.,
$$ A = \sum_e A^e, \qquad b = \sum_e b^e, $$
where each term in the sums is by definition identical to A and b but with integration only over Ω_e. The mass matrix, for example, becomes
$$ M_{ij} = \sum_e M_{ij}^{e} = \sum_e \int_{\Omega_e} N_i N_j \, d\Omega_e. \tag{4.16} $$
Next, one observes that due to the localized character of each basis function Ni , almost
all of the components of Ae and be are zero. In fact, if i and j are indices that correspond
to nodes outside of element e, then the basis functions Ni are zero and hence (Ae )ij and
Figure 4.6: Linear elements in two dimensions. The nodes are located at line intersections. (The figure shows a basis function N_k(x) peaking at node x^{[k]} on a square grid over [0, L] × [0, L].)
(be )i are also automatically zero. Each element in our one-dimensional example has
two nodes, hence Ae can be represented by a 2 × 2-matrix Ãe and be can be represented
by a two-dimensional vector b̃e .
In our example, we choose a reference element ξ ∈ [−1, 1] = Ω̃. We use a linear
change of coordinates to map Ωe into Ω̃. Then only two basis functions do not vanish
over Ω̃. The two nodes ξ = ±1 correspond to x[e] and x[e+1] , respectively. Clearly,
there is a mapping q(e, r), where r = 1, 2 is the local node number, that maps the
element number e and node number r into the global node number k. This mapping
exists in more general cases as well, but usually it is only known from a table due to
complex geometries.
The mapping from local coordinates to global coordinates is
$$ x^{(e)}(\xi) = x^{[e]} + \tfrac{1}{2}(\xi + 1)\left(x^{[e+1]} - x^{[e]}\right). $$
Hence,
$$ N_i(x^{(e)}(\xi)) = N_{q(e,r)}(x) = \tilde{N}_r(\xi) $$
is the global basis function in terms of the local basis function Ñr (ξ). In Fig. 4.7 this
is illustrated. Integration over the reference element introduces the Jacobian of the
coordinate change into the integrand.
The linear one-dimensional elements are examples of so-called isoparametric elements. Such elements are characterized by the fact that the same mapping is used both for interpolating u_h and for mapping from local to global coordinates, i.e., Ñ_r(ξ) defines both the basis functions and the mapping from local to global coordinates. The
Figure 4.7: Illustration of local coordinates for one-dimensional linear elements. (The elements Ω₁ and Ω₂ on the x-axis are mapped by the coordinate change onto the reference element Ω̃ = [−1, +1], carrying the local basis functions Ñ₁(ξ) and Ñ₂(ξ).)
(inverse) coordinate change in isoparametric elements is in general given by
$$ x^{(e)}(\xi) = \sum_{r=1}^{n_{no}} \tilde{N}_r(\xi)\, x^{[q(e,r)]}, $$
where n_no is the number of nodes in each element.
The introduction of local coordinates eases the implementation of a finite element solver. We also recognize the modularized nature of the finite element method, and hence the appropriateness of object-oriented programming techniques: everything from the linear equations, via the grid and the elements, to the assembly routines can be defined in an object-oriented manner.
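The reference-element machinery for one-dimensional linear elements fits in a few lines; the following sketch (our own, with an assumed non-uniform mesh) spells out the local shape functions Ñ_r, the location mapping q(e, r) and the isoparametric map.

```python
import numpy as np

# Reference-element machinery for 1D linear elements: local shape functions
# on [-1, 1], the (here trivial) mapping q(e, r), and the isoparametric map.
nodes = np.array([0.0, 0.3, 0.7, 1.0])   # an assumed non-uniform mesh

def N_tilde(r, xi):        # local basis, r = 1, 2
    return 0.5 * (1.0 - xi) if r == 1 else 0.5 * (1.0 + xi)

def q(e, r):               # local-to-global node number (elements 0-indexed)
    return e + r - 1

def x_of_xi(e, xi):        # x^(e)(xi) = sum_r N~_r(xi) * x^[q(e,r)]
    return sum(N_tilde(r, xi) * nodes[q(e, r)] for r in (1, 2))

# xi = -1 and +1 land on the element endpoints x^[e] and x^[e+1].
assert np.isclose(x_of_xi(1, -1.0), nodes[1])
assert np.isclose(x_of_xi(1, +1.0), nodes[2])
```

On a general mesh q(e, r) would be read from a table produced by the grid generator, but the structure of the computation is unchanged; integrating over Ω̃ then only introduces the Jacobian (x^{[e+1]} − x^{[e]})/2 into the integrand.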
4.5 Time Integration Methods
In its simplest form, we again state the time dependent Schrödinger equation:
$$ \frac{\partial}{\partial t}\Psi = -iH(t)\Psi. $$
Again, Ψ is a complete quantum state, encapsulating space- and spin-degrees of freedom
that the particle might have. The Hamiltonian H(t) contains some spatial dependencies
through the kinetic energy term (i.e., spatial second-order diﬀerentiation) and the
potential energy. For example,
H(t) = −∇2 + V (x , t),
in the case of a spinless particle under the influence of the time dependent potential V(x, t). The Schrödinger equation imposes stronger smoothness constraints on the state than quantum theory in general requires; the state must be at least twice differentiable with respect to the spatial coordinates, for example.
The time dependence is our concern in this section. Naturally, we will use some
kind of ﬁnite diﬀerence approximation to the time derivative, which yields a sequence
of discrete problems, regardless of how we choose to handle the space dependencies and
their approximations.
Our weapons of choice are the leap-frog scheme, an explicit integration scheme
whose name points to the staggered nature of the time stepping, and the theta-rule, an
implicit scheme covering both the forwards and backwards Euler schemes in addition
to the popular Crank-Nicholson scheme.
What characterizes the quality of a numerical scheme? How do we decide whether
it is a good scheme or not? What do we mean by “a good approximation?” Several
factors contribute, among them the accuracy, the stability and the computational cost.
Clearly, when our time step becomes small enough our scheme should behave more and more like the continuous differential equation. This is measured through the so-called truncation error τ.
When we write our PDE in homogeneous form, i.e.,
L(Ψ) = 0,
and formulate a discretization, viz.,
L∆ (Ψh ) = 0,
we deﬁne the truncation error τ as the residual we are left with when we insert a
solution of the continuous PDE into the discrete version, viz.,
τ = L∆ (Ψ),
where
L(Ψ) = 0.
We use Ψ_h for the time-discretized wave function. It should not be confused with the spatially discretized wave function from the previous sections. The expressions for τ and for the integration scheme are purely formal: the iteration process implies that we must apply operators like the differential operator more than once, yet we have not assumed that Ψ is more than twice differentiable. However, the formality vanishes when we apply a spatial discretization. For instance, the operator δ_xδ_x may always be applied to a discrete function.
Clearly, if τ does not vanish when the temporal mesh width becomes inﬁnitely
small, L∆ cannot represent a good approximation to the equation. In the opposite
case of a vanishing τ , we say that the numerical scheme is consistent.
We also need stability. Stability may be deﬁned as the property that the numerical
solution Ψh reﬂects the qualitative properties, i.e., the physical properties, of the continuous solution Ψ. If this is not fulﬁlled, then our approximation is useless! We want
physical solutions.
This demand is somewhat informal, but it is almost always easy to devise the proper requirement for a given PDE. If both stability and consistency are present, then a famous theorem by Lax states that our numerical solution is convergent, that is, the solution of the numerical equations converges to the continuous one in the limit of vanishing grid spacings. The converse is also true. See Ref. . We study this further in section 4.6.
The stability criterion we are searching for in the case of the time dependent
Schrödinger equation is conservation of probability, i.e., the norm of the wave function. Basic quantum mechanics teaches us that the norm of the wave function Ψ is
conserved at all times, and this property we also demand from the numerical solution
in order to ensure that it converges to the true solution by Lax’ theorem.
4.5.1 The Theta-Rule
The theta-rule is a means for discretizing a first order derivative of the form
$$ \frac{\partial\psi}{\partial t} = G. \tag{4.17} $$
Here, G is some arbitrary expression that we assume to be diﬀerentiable so we may
take the Taylor expansion of G around some arbitrary time to obtain the expression
at some other time.
The θ-rule is an interpolation between the forwards and backwards Euler discretizations and is defined through
$$ \frac{1}{\Delta t}\left(\Psi_h^{\ell+1} - \Psi_h^{\ell}\right) = \theta G_h^{\ell+1} + (1-\theta) G_h^{\ell}. \tag{4.18} $$
We comment that the derivations in this section are formal, in the sense that we do not
actually know whether the quantities are deﬁned in all circumstances. For example,
we do not know whether G2 Ψh is a function. If G = −∇2 then Ψh must be four times
diﬀerentiable and so on. On the other hand, when introducing spatial discretization
the problem vanishes. For example, the diﬀerence operator is always deﬁned on any
discrete function.
Reorganizing Eqn. (4.18) yields the updating scheme for the numerical solution:
$$ \left(1 + i\theta\Delta t H^{\ell+1}\right)\Psi_h^{\ell+1} = \left(1 - i(1-\theta)\Delta t H^{\ell}\right)\Psi_h^{\ell}. \tag{4.19} $$
When discretizing a differential equation of the form of Eqn. (4.17) it is common to consider the forwards and backwards Euler schemes. For the Schrödinger equation, though, neither one is particularly suitable: forwards Euler is notoriously unstable, and the backwards variant contains damping, making the scheme non-unitary. We
recover forwards and backwards Euler with θ = 0 and θ = 1, respectively. (The claimed instability and damping will be clear after we have analyzed the θ-rule.) The case θ = 1/2 gives us the Crank-Nicholson scheme and is of special interest, as will become clear later.
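The updating scheme (4.19) is a single linear solve once H is discretized; a dense sketch with an arbitrary Hermitian matrix standing in for H (our own illustration) shows why θ = 1/2 is special.

```python
import numpy as np

# Theta-rule propagator for a time-independent Hermitian H:
# U = (1 + i*theta*dt*H)^{-1} (1 - i*(1-theta)*dt*H).
rng = np.random.default_rng(1)
B = rng.standard_normal((6, 6)) + 1j * rng.standard_normal((6, 6))
H = (B + B.conj().T) / 2.0            # any Hermitian matrix will do
dt, theta = 0.1, 0.5
I = np.eye(6)

U = np.linalg.solve(I + 1j * theta * dt * H,
                    I - 1j * (1 - theta) * dt * H)

psi = rng.standard_normal(6) + 1j * rng.standard_normal(6)
psi_new = U @ psi

# For theta = 1/2 (Crank-Nicholson) the propagator is exactly unitary.
assert np.isclose(np.linalg.norm(psi_new), np.linalg.norm(psi))
```

For θ = 1/2 the propagator is the Cayley transform of H, which is unitary whenever H is Hermitian; this is the unitarity property derived in section 4.5.3.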
Let us derive the truncation error of the θ-rule.
We start by considering the left hand side of Eqn. (4.18) and insert an exact solution. We formally expand the Taylor series of Ψ^{ℓ+1} around t_ℓ, viz.,
$$ \frac{1}{\Delta t}\left(\Psi^{\ell+1} - \Psi^{\ell}\right) = \frac{1}{\Delta t}\left(e^{\Delta t\,\partial_t} - 1\right)\Psi^{\ell}. \tag{4.20} $$
Note the notation for the Taylor series. To second order we obtain
$$ \frac{1}{\Delta t}\left(\Psi^{\ell+1} - \Psi^{\ell}\right) = \frac{\partial\Psi^{\ell}}{\partial t} + \frac{\Delta t}{2}\frac{\partial^2\Psi^{\ell}}{\partial t^2} + \frac{\Delta t^2}{6}\frac{\partial^3\Psi^{\ell}}{\partial t^3} + O(\Delta t^3). \tag{4.21} $$
This is nothing but the familiar truncation error for a forward difference, see section
4.2. Consider the right hand side in a similar fashion, viz.,
$$ \theta G^{\ell+1} + (1-\theta)G^{\ell} = \theta e^{\Delta t\,\partial_t} G^{\ell} + (1-\theta)G^{\ell}. \tag{4.22} $$
Expanding the series we obtain
$$ \theta G^{\ell+1} + (1-\theta)G^{\ell} = G^{\ell} + \theta\Delta t\,\frac{\partial G^{\ell}}{\partial t} + \theta\frac{\Delta t^2}{2}\frac{\partial^2 G^{\ell}}{\partial t^2} + O(\Delta t^3) = \frac{\partial\Psi^{\ell}}{\partial t} + \theta\Delta t\,\frac{\partial^2\Psi^{\ell}}{\partial t^2} + \theta\frac{\Delta t^2}{2}\frac{\partial^3\Psi^{\ell}}{\partial t^3} + O(\Delta t^3). \tag{4.23} $$
The PDE (4.17) written in homogeneous form is
$$ L(\Psi) = \frac{\partial\Psi}{\partial t} - G(\Psi, t) = 0, $$
and our discretization is correspondingly
$$ L_\Delta(\Psi_h) = \frac{1}{\Delta t}\left(\Psi_h^{\ell+1} - \Psi_h^{\ell}\right) - \theta G^{\ell+1} - (1-\theta)G^{\ell} = 0. $$
The truncation error τ is defined as
$$ \tau = L_\Delta(\Psi). \tag{4.24} $$
We insert the results obtained in Eqns. (4.21) and (4.23), and when noting that
$$ \frac{\partial^n G^{\ell}}{\partial t^n} = \frac{\partial^{n+1}\Psi^{\ell}}{\partial t^{n+1}}, $$
we obtain
$$ \tau = L_\Delta(\Psi) = \Delta t\left(\tfrac{1}{2} - \theta\right)\frac{\partial^2\Psi^{\ell}}{\partial t^2} + \Delta t^2\left(\tfrac{1}{6} - \frac{\theta}{2}\right)\frac{\partial^3\Psi^{\ell}}{\partial t^3} + O(\Delta t^3). \tag{4.25} $$
Hence, the theta-rule has truncation error of order O(∆t) for θ ≠ 1/2, but in the case of θ = 1/2 we obtain a truncation error of order O(∆t²). This particular case is called the Crank-Nicholson scheme and is very well suited for solving the time dependent Schrödinger equation; not only because of its higher accuracy, but we shall also see that we obtain unitarity of the scheme. Thinking about it, since the forward Euler scheme makes the solution unstable in the sense that the norm of the solution grows, and the backwards Euler scheme behaves in the opposite way, it is not unnatural to expect that for some θ these effects are exactly balanced.
That our scheme is of second order is fine, but another important question is: What is the error in Ψ_h^ℓ after ℓ integration steps in time? How much does it differ from Ψ^ℓ, which is the exact solution found from solving the continuous differential equation (4.17), when we in both cases start with the same initial condition? In the case of the θ-rule the answer is somewhat delicate, but in the leap-frog case it is rather trivial. The trouble is that Ψ_h^{ℓ+1} is implicitly defined in Eqn. (4.19).
4.5.2 The Leap-Frog Scheme
We now turn to the leap-frog scheme. In the context of the time dependent Schrödinger
equation this scheme was ﬁrst proposed by Askar and Cakmak in 1978 (see Ref. ) as
an alternative to the implicit Crank-Nicholson scheme. Given a ﬁrst order diﬀerential
equation of the form of Eqn. (4.17), we simply use a centered difference on the left hand side and evaluate the right hand side in between, viz.,
$$ \frac{1}{2\Delta t}\left(\Psi_h^{\ell+1} - \Psi_h^{\ell-1}\right) = G(\Psi_h^{\ell}, t_\ell). \tag{4.26} $$
Reorganizing this equation yields for the new wave function Ψ_h^{ℓ+1}
$$ \Psi_h^{\ell+1} = \Psi_h^{\ell-1} - 2i\Delta t H^{\ell}\Psi_h^{\ell}. \tag{4.27} $$
Notice that we need two earlier wave functions, Ψ_h^ℓ and Ψ_h^{ℓ−1}, to find the new Ψ_h^{ℓ+1}, i.e., twice the information needed in the theta-rule. In reality we will not need this double information. If we separate the real and imaginary parts of the wave function, i.e., write
$$ \Psi_h = R_h + iI_h, $$
we obtain
$$ R^{\ell+1} = R^{\ell-1} + 2\Delta t H^{\ell} I^{\ell}, \qquad I^{\ell+2} = I^{\ell} - 2\Delta t H^{\ell+1} R^{\ell+1}. $$
If we assume that H applied to a real vector again is a real vector, we have created an algorithm to update R and I alternately. In this case we say that we are using a staggered grid.^9
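The staggered update is easy to sketch. In the following illustration (our own) H is an arbitrary real symmetric matrix, so that H applied to a real vector is real as assumed above, and the first step is bootstrapped with the exact propagator.

```python
import numpy as np
import scipy.linalg as sla

# Leap-frog with real and imaginary parts updated alternately:
# R^{l+1} = R^{l-1} + 2*dt*H I^l,  I^{l+1} = I^{l-1} - 2*dt*H R^l.
rng = np.random.default_rng(2)
B = rng.standard_normal((4, 4))
H = (B + B.T) / 2.0
dt = 0.01 / np.max(np.abs(np.linalg.eigvalsh(H)))   # well inside stability

psi0 = rng.standard_normal(4) + 1j * rng.standard_normal(4)
R_prev, I_prev = psi0.real.copy(), psi0.imag.copy()
psi1 = sla.expm(-1j * dt * H) @ psi0                # bootstrap the first step
R, I = psi1.real.copy(), psi1.imag.copy()

for _ in range(200):
    R_next = R_prev + 2.0 * dt * (H @ I)
    I_next = I_prev - 2.0 * dt * (H @ R)
    R_prev, I_prev, R, I = R, I, R_next, I_next

# The norm stays close to its initial value: neutral (marginal) stability.
norm = np.linalg.norm(R + 1j * I)
assert abs(norm - np.linalg.norm(psi0)) < 1e-2
```

The step size is chosen so that ∆t times the spectral radius of H is small; the scheme being explicit, exceeding that bound makes the parasitic root of the recursion grow exponentially.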
In homogeneous form we have
$$ L_\Delta(\Psi_h) = \frac{1}{2\Delta t}\left(\Psi_h^{\ell+1} - \Psi_h^{\ell-1}\right) - G(\Psi_h^{\ell}, t_\ell). $$
To find the truncation error we insert a solution Ψ of the continuous problem in L_∆ and expand Ψ^{ℓ±1} in Taylor series around t_ℓ. This yields for the time difference
$$ \frac{1}{2\Delta t}\left(\Psi^{\ell+1} - \Psi^{\ell-1}\right) = \frac{1}{\Delta t}\sum_{n=0}^{\infty}\frac{\Delta t^{2n+1}}{(2n+1)!}\frac{\partial^{2n+1}\Psi^{\ell}}{\partial t^{2n+1}} = \frac{\partial\Psi^{\ell}}{\partial t} + \frac{\Delta t^2}{6}\frac{\partial^3\Psi^{\ell}}{\partial t^3} + \frac{\Delta t^4}{120}\frac{\partial^5\Psi^{\ell}}{\partial t^5} + O(\Delta t^6). \tag{4.28} $$
Hence, the truncation error is of second order also for the leap-frog scheme, viz.,
$$ \tau = \frac{\Delta t^2}{6}\frac{\partial^3\Psi^{\ell}}{\partial t^3} + O(\Delta t^4). \tag{4.29} $$
Note that in both the Crank-Nicholson case and the leap-frog scheme, not only is τ of second order, but the dominating term in the truncation error is also proportional to the third time derivative of Ψ, i.e.,
$$ \tau \sim \frac{\partial^3\Psi}{\partial t^3} = \frac{\partial^2 G}{\partial t^2}. $$
9 The assumption does not hold in general. If ψ is a real function and if H = −i∂/∂x, Hψ is purely
imaginary.
4.5.3 Stability Analysis of the Theta-Rule
Recall from section 1.3 that the norm of the wave function Ψ is conserved at all times
due to the unitarity of the evolution operator. In light of the probabilistic interpretation
of the wave function this means that the probability of ﬁnding the particle at some place
with some spin orientation at any time is unity. The unitarity of the evolution operator
should of course be reﬂected in the numerical solution of our discretized equations, i.e.,
in the numerical evolution operator. The updating rules for the theta-rule and the leapfrog scheme in reality are approximations to the time evolution operator U (t+1 , t ).
In particular, for the theta-rule we have
\[
U_\Delta^\ell = \left(1 + i\theta\Delta t H^{\ell+1}\right)^{-1}\left(1 - i(1-\theta)\Delta t H^\ell\right).
\tag{4.30}
\]
A simple special case is the case where $H$ is independent of time. If $\{\epsilon_n\}$ is its discrete set of eigenvalues,¹⁰ then the eigenvalues of $U_\Delta$ clearly are
\[
\lambda_n = \frac{1 - i(1-\theta)\Delta t\,\epsilon_n}{1 + i\theta\Delta t\,\epsilon_n},
\]
and the eigenvectors coincide with those of $H$. Furthermore, the norms of the eigenvalues are
\[
|\lambda_n| = \left(\frac{1 + (1-\theta)^2\Delta t^2\epsilon_n^2}{1 + \theta^2\Delta t^2\epsilon_n^2}\right)^{1/2}.
\]
It is easy to see that for θ = 1/2, |λn | ≡ 1. Furthermore, θ < 1/2 yields |λn | > 1 and
θ > 1/2 yields |λn | < 1, in other words unconditionally unstable and stable schemes,
respectively, in the sense of ampliﬁcation and damping of the solution.
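This trichotomy is easy to verify numerically. A minimal sketch (the values of $\Delta t$ and the eigenvalue $\epsilon$ below are arbitrary, chosen only for illustration):

```python
# Amplification factor of the theta-rule for one eigenvalue eps of a
# time-independent H:  lambda = (1 - i(1-theta)*dt*eps)/(1 + i*theta*dt*eps).

def amp(theta, dt, eps):
    return (1 - 1j * (1 - theta) * dt * eps) / (1 + 1j * theta * dt * eps)

dt, eps = 0.05, 7.3                      # illustrative values
mod = {th: abs(amp(th, dt, eps)) for th in (0.0, 0.25, 0.5, 0.75, 1.0)}
# theta = 1/2 gives |lambda| = 1; theta < 1/2 amplifies; theta > 1/2 damps.
```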
For time dependent Hamiltonians however, the matter is more complicated.
Let us define two operators $A$ and $B$ so that $U_\Delta^\ell = A^{-1}B$, viz.,
\[
U_\Delta^\ell = \underbrace{\left(1 + i\theta\Delta t H^{\ell+1}\right)^{-1}}_{A^{-1}}\underbrace{\left(1 - i(1-\theta)\Delta t H^\ell\right)}_{B}.
\]
Given some wave function $\Psi_h^\ell$, the norm of the new wave function after evolving it with $U_\Delta^\ell$ is
\[
\|\Psi_h^{\ell+1}\|^2 = \left(\Psi_h^\ell, (U_\Delta^\ell)^\dagger U_\Delta^\ell\,\Psi_h^\ell\right) = \left(\Psi_h^\ell, B^\dagger (AA^\dagger)^{-1} B\,\Psi_h^\ell\right).
\]
If the scheme is to be perfectly unitary, then $(U_\Delta^\ell)^\dagger U_\Delta^\ell$ must be the identity operator $1$. This happens if the Hamiltonian is independent of time and $\theta = 1/2$. For the non-autonomous case we cannot expect this to happen. We can only hope for unitarity up to some order of $\Delta t$.
First we note that
\[
AA^\dagger = 1 + \Delta t^2\theta^2 (H^{\ell+1})^2.
\]
We need the inverse of this operator. In general, in a similar fashion as with ordinary scalar numbers, we have the power series expansion
\[
(1 - X)^{-1} = \sum_{k=0}^{\infty} X^k,
\]
provided that all the eigenvalues of $X$ are less than 1 in magnitude. This puts a restriction on $\Delta t$ for the series to converge, i.e.,
\[
\Delta t^2\theta^2 (\epsilon_n^{\ell+1})^2 < 1, \qquad \forall n,
\]
10 Even though H may have a continuous spectrum, any spatially discretized version will have a
discrete spectrum with a ﬁnite number of eigenvalues.
Numerical Methods for Partial Diﬀerential Equations
where $\epsilon_n^{\ell+1}$ are the (real) eigenvalues of $H^{\ell+1}$.
We cannot invert A with a series in a general inﬁnite dimensional Hilbert space,
since the eigenvalues are not bounded in general.11 However, we are going to do some
kind of discretization in space, leaving the Hilbert space with a ﬁnite dimension and
thus a bounded spectrum. We may then choose ∆t small enough and perform the
expansion of A−1 .
Increasing the dimension of the discrete Hilbert space will typically lower the mesh width $h$ of the spatial grid used. This leaves us with a better approximation of the complete, infinite dimensional space and also more eigenvalues, which of course will grow in magnitude as we get more of them. Hence, the $\Delta t$-criterion must contain $h$ in such a way that lowering $h$ will automatically require a lower $\Delta t$.
Assume that the series is convergent, i.e.,
\[
(AA^\dagger)^{-1} = \sum_{m=0}^{\infty} (-1)^m \theta^{2m}\Delta t^{2m} (H^{\ell+1})^{2m}
= 1 - \Delta t^2\theta^2 (H^{\ell+1})^2 + \Delta t^4\theta^4 (H^{\ell+1})^4 + O(\Delta t^6).
\]
Multiplying with $B$ from the right yields
\[
\begin{aligned}
(AA^\dagger)^{-1}B = 1 &- i(1-\theta)\Delta t H^\ell - \theta^2\Delta t^2 (H^{\ell+1})^2 + i(1-\theta)\theta^2\Delta t^3 (H^{\ell+1})^2 H^\ell \\
&+ \theta^4\Delta t^4 (H^{\ell+1})^4 - i(1-\theta)\theta^4\Delta t^5 (H^{\ell+1})^4 H^\ell + O(\Delta t^6).
\end{aligned}
\]
Then, multiply with $B^\dagger$ from the left and note that the first order terms cancel, viz.,
\[
\begin{aligned}
B^\dagger(AA^\dagger)^{-1}B = 1 &+ (1-\theta)^2\Delta t^2 (H^\ell)^2 - \theta^2\Delta t^2 (H^{\ell+1})^2 \\
&- i(1-\theta)\theta^2\Delta t^3 H^\ell (H^{\ell+1})^2 + i(1-\theta)\theta^2\Delta t^3 (H^{\ell+1})^2 H^\ell \\
&- (1-\theta)^2\theta^2\Delta t^4 H^\ell (H^{\ell+1})^2 H^\ell + \theta^4\Delta t^4 (H^{\ell+1})^4 + O(\Delta t^5).
\end{aligned}
\tag{4.31}
\]
This final expression shows that the operator deviates from the identity at order $O(\Delta t^2)$. However, an examination of the second order terms reveals that if the Hamiltonian varies slowly enough, then they combine to order $O(\Delta t^3)$ in the $\theta = 1/2$ case. This is because
\[
H^{\ell+1} = e^{\Delta t\,\partial_t} H^\ell,
\]
so we get
\[
(H^{\ell+1})^2 = (H^\ell)^2 + \Delta t\left(H^\ell\frac{\partial H^\ell}{\partial t} + \frac{\partial H^\ell}{\partial t}H^\ell\right) + O(\Delta t^2).
\]
Inserting this into Eqn. (4.31) yields
\[
(U_\Delta^\ell)^\dagger U_\Delta^\ell
= 1 - \frac{\Delta t^2}{4}\left[(H^{\ell+1})^2 - (H^\ell)^2\right] + O(\Delta t^3)
= 1 - \frac{\Delta t^3}{4}\left(H^\ell\frac{\partial H^\ell}{\partial t} + \frac{\partial H^\ell}{\partial t}H^\ell\right) + O(\Delta t^4).
\tag{4.32}
\]
If we insert Eqn. (4.32) into the norm of $\Psi_h^{\ell+1}$, we obtain
\[
\|\Psi_h^{\ell+1}\|^2 = \left(1 + O(\Delta t^3)\right)\|\Psi_h^\ell\|^2
\]
in the Crank-Nicholson case. In the other cases like forward Euler, we cannot erase the
O(∆t2 ) term, and we know that this integration is unstable and explodes after just a
few integration steps. Hence, the crucial point in the stability is this term, which we
managed to get rid of.
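The qualitative difference is dramatic once the updating rule is iterated. In the sketch below (a single eigenmode with illustrative values of $\Delta t$ and $\epsilon$; not code from the thesis), forward Euler multiplies the norm by $|\lambda| > 1$ at every step and soon explodes, while the Crank-Nicholson choice keeps it at unity:

```python
# A single eigenmode evolved with the theta-rule: psi^{l+1} = lambda * psi^l,
# lambda = (1 - i(1-theta)*dt*eps)/(1 + i*theta*dt*eps).  Illustrative values.

def final_norm(theta, dt, eps, steps):
    lam = (1 - 1j * (1 - theta) * dt * eps) / (1 + 1j * theta * dt * eps)
    psi = 1.0 + 0.0j
    for _ in range(steps):
        psi *= lam
    return abs(psi)

dt, eps, steps = 0.01, 50.0, 400
norm_euler = final_norm(0.0, dt, eps, steps)   # forward Euler: blows up
norm_cn    = final_norm(0.5, dt, eps, steps)   # Crank-Nicholson: stays at 1
```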
11 For example the hydrogen atom has a spectrum divided into two parts: A discrete bounded
spectrum part of energies below zero, and an unbounded continuous spectrum part above zero.
4.5.4 Stability Analysis of the Leap-Frog Scheme
We will now analyze the stability of the leap-frog scheme, i.e., of
\[
\Psi_h^{\ell+1} = \Psi_h^{\ell-1} - 2i\Delta t H^\ell \Psi_h^\ell,
\tag{4.33}
\]
where we have reorganized Eqn. (4.26) to obtain the rule that updates $\Psi_h^\ell$. Using the
truncation error $\tau = O(\Delta t^2)$ yields
\[
\Psi_h^{\ell+1} = \Psi^{\ell+1} + O(\Delta t^3).
\]
Hence, the numerical development of the wave function in time has an error of third order in $\Delta t$. We may use this in Eqn. (4.33) to find a good approximation to $U_\Delta^\ell$, viz.,
\[
\begin{aligned}
\Psi_h^{\ell+1} &= \Psi_h^{\ell-1} - 2i\Delta t H^\ell\left(\Psi^\ell + O(\Delta t^3)\right) \\
&= \Psi_h^{\ell-1} - 2i\Delta t H^\ell e^{\Delta t\,\partial_{t^{\ell-1}}}\Psi^{\ell-1} + O(\Delta t^4) \\
&= \Psi_h^{\ell-1} - 2i\Delta t H^\ell e^{\Delta t\,\partial_{t^{\ell-1}}}\left(\Psi_h^{\ell-1} + O(\Delta t^3)\right) + O(\Delta t^4) \\
&= \left(1 - 2i\Delta t H^\ell e^{\Delta t\,\partial_{t^{\ell-1}}}\right)\Psi_h^{\ell-1} + O(\Delta t^4).
\end{aligned}
\tag{4.34}
\]
We expand the exponential operator to second order, making our operator correct to
third order. To easily see the result, we operate on an arbitrary function Ψ, viz.,
\[
\begin{aligned}
e^{\Delta t\,\partial_{t^{\ell-1}}}\Psi &= \Psi + \Delta t\left.\frac{\partial\Psi}{\partial t}\right|_{t^{\ell-1}} + \frac{\Delta t^2}{2}\left.\frac{\partial^2\Psi}{\partial t^2}\right|_{t^{\ell-1}} + O(\Delta t^3) \\
&= \Psi - i\Delta t H^{\ell-1}\Psi + \frac{\Delta t^2}{2}\left.\frac{\partial}{\partial t}(-iH\Psi)\right|_{t^{\ell-1}} + O(\Delta t^3) \\
&= \Psi - i\Delta t H^{\ell-1}\Psi + \frac{\Delta t^2}{2}\left(-i\frac{\partial H^{\ell-1}}{\partial t}\Psi - (H^{\ell-1})^2\Psi\right) + O(\Delta t^3) \\
&= \left(1 - i\Delta t H^{\ell-1} - \frac{\Delta t^2}{2}\left[i\frac{\partial H^{\ell-1}}{\partial t} + (H^{\ell-1})^2\right] + O(\Delta t^3)\right)\Psi.
\end{aligned}
\tag{4.35}
\]
With the definition
\[
A = i\frac{\partial H^{\ell-1}}{\partial t} + (H^{\ell-1})^2,
\]
our discretized time evolution operator becomes
\[
\Psi_h^{\ell+1} = \underbrace{\left[1 - 2i\Delta t H^\ell - 2\Delta t^2 H^\ell H^{\ell-1} + i\Delta t^3 H^\ell A\right]}_{U_\Delta^\ell}\Psi_h^{\ell-1} + O(\Delta t^4).
\tag{4.36}
\]
In order to find the norm of $\Psi_h^{\ell+1}$ in terms of $\Psi_h^{\ell-1}$ we must calculate the product $(U_\Delta^\ell)^\dagger U_\Delta^\ell$, where the operator $U_\Delta^\ell$ is defined in Eqn. (4.36). It is clear that the order of $1 - (U_\Delta^\ell)^\dagger U_\Delta^\ell$ is also the order of the unitarity of the leap-frog scheme.
The first order terms cancel immediately, viz.,
\[
\begin{aligned}
(U_\Delta^\ell)^\dagger U_\Delta^\ell = 1 &- i\Delta t^3 A^\dagger H^\ell + 4\Delta t^2 (H^\ell)^2 + 4i\Delta t^3 H^{\ell-1}(H^\ell)^2 \\
&- 2\Delta t^2\left(H^\ell H^{\ell-1} + H^{\ell-1}H^\ell\right) - 4i\Delta t^3 (H^\ell)^2 H^{\ell-1} + i\Delta t^3 H^\ell A + O(\Delta t^4).
\end{aligned}
\]
Introducing a commutator relation yields
\[
(U_\Delta^\ell)^\dagger U_\Delta^\ell = 1 - 2\Delta t^2\left(H^{\ell-1}H^\ell + H^\ell H^{\ell-1}\right) + 4\Delta t^2 (H^\ell)^2 + i\Delta t^3\left(H^\ell A - A^\dagger H^\ell\right) + 4i\Delta t^3\left[H^{\ell-1}, (H^\ell)^2\right] + O(\Delta t^4).
\]
The second order terms should “almost” cancel because $H^{\ell-1}$ is equal to $H^\ell$ to first order, viz.,
\[
H^{\ell-1} = H^\ell - \Delta t\frac{\partial H^\ell}{\partial t} + O(\Delta t^2).
\]
Introducing this relation transforms the second order terms into third order terms, and thus
\[
2\Delta t^2\left(H^{\ell-1}H^\ell + H^\ell H^{\ell-1}\right) = 4\Delta t^2 (H^\ell)^2 - 2\Delta t^3\left(\frac{\partial H^\ell}{\partial t}H^\ell + H^\ell\frac{\partial H^\ell}{\partial t}\right) + O(\Delta t^4).
\]
The other third order term may be simplified, viz.,
\[
H^\ell A - A^\dagger H^\ell = i\left(H^\ell\frac{\partial H^{\ell-1}}{\partial t} + \frac{\partial H^{\ell-1}}{\partial t}H^\ell\right) + \left[H^\ell, (H^{\ell-1})^2\right].
\]
The commutator $[H^\ell, (H^{\ell-1})^2]$ is of first order in $\Delta t$. To see this we Taylor expand $H^{\ell-1}$ and use
\[
(H^{\ell-1})^2 = (H^\ell)^2 + O(\Delta t),
\]
and thereby
\[
\left[H^\ell, (H^{\ell-1})^2\right] = \left[H^\ell, (H^\ell)^2\right] + O(\Delta t) = O(\Delta t).
\]
Thus, we may ignore the commutator altogether as the third order term becomes a fourth order term. By the same reasoning we also ignore the other commutator $[H^{\ell-1}, (H^\ell)^2]$. Furthermore,
\[
\frac{\partial H^{\ell-1}}{\partial t} = \frac{\partial H^\ell}{\partial t} + O(\Delta t).
\]
Combining these considerations yields
\[
(U_\Delta^\ell)^\dagger U_\Delta^\ell = 1 + \Delta t^3\left(H^\ell\frac{\partial H^\ell}{\partial t} + \frac{\partial H^\ell}{\partial t}H^\ell\right) + O(\Delta t^4).
\]
Hence we have secured unitarity to third order. For time independent problems, we see that we have at least fourth order unitarity.
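For a scalar “Hamiltonian” $h(t)$ this whole computation can be checked numerically. The sketch below (an arbitrary smooth $h(t)$, chosen only for illustration) builds the one-step operator of Eqn. (4.36) and confirms that its deviation from unitarity is $|U|^2 - 1 \approx 2h(t)h'(t)\Delta t^3$, i.e., third order:

```python
import math

# Scalar check of the unitarity order of the leap-frog operator (4.36).
# h(t) = 1 + 0.5*sin(t) is an arbitrary smooth "Hamiltonian"; for scalars
# the result above predicts |U|^2 - 1 ~ 2*h(t)*h'(t)*dt^3.

h  = lambda t: 1.0 + 0.5 * math.sin(t)
hp = lambda t: 0.5 * math.cos(t)          # dh/dt

def deviation(t, dt):
    A = 1j * hp(t - dt) + h(t - dt) ** 2  # A = i dH/dt + H^2 at t - dt
    U = (1 - 2j * dt * h(t) - 2 * dt**2 * h(t) * h(t - dt)
           + 1j * dt**3 * h(t) * A)
    return abs(U) ** 2 - 1.0

t  = 0.7
d1 = deviation(t, 1e-2)
d2 = deviation(t, 5e-3)
ratio     = d1 / d2                       # ~ 2^3 = 8 for an O(dt^3) deviation
predicted = 2 * h(t) * hp(t) * (1e-2) ** 3
```

Halving $\Delta t$ divides the deviation by roughly $2^3 = 8$, and its size agrees with the predicted leading term to within a few percent.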
Simplified Analysis in One Dimension. We will use a variant of the von Neumann stability analysis, see Ref. . Let us assume that the Hamiltonian is independent of time. Assume an initial condition on the form of an eigenfunction for the discrete Hamiltonian, viz.,
\[
u_j^0 = \sin(k\pi x_j).
\]
The eigenvalues for the Hamiltonian of a particle-in-box (in the unit interval) are
\[
\epsilon_k^{\mathrm{FDM}} = \frac{4}{h^2}\sin^2(k\pi h/2), \qquad
\epsilon_k^{\mathrm{FEM}} = \frac{12}{h^2}\,\frac{\sin^2(k\pi h/2)}{2 + \cos(k\pi h)},
\]
for the finite difference method and the finite element method with linear elements, respectively. See section 6.4. The number $h$ is the mesh width.
The leap-frog scheme reads
\[
u^{\ell+1} = u^{\ell-1} - 2i\Delta t H u^\ell.
\]
We postulate that the wave function at time $t^\ell$ is given by
\[
u^\ell = \xi^\ell \sin(k\pi x),
\]
where $\xi$ is a complex number. It is clear that the scheme is then stable if and only if $|\xi| \le 1$ and unitary if and only if $|\xi| \equiv 1$. Inserting $u_j^\ell$ into the time stepping scheme yields
\[
\xi^{\ell+1}u_j^0 = \xi^{\ell-1}u_j^0 - 2i\Delta t\,\epsilon_k\,\xi^\ell u_j^0.
\]
As the eigenvectors $u_j^0$ for the Hamiltonian form a basis for the set of discrete functions, this means that
\[
\xi^2 + 2i\Delta t\,\epsilon_k\,\xi - 1 = 0.
\]
This yields
\[
\xi = -i\Delta t\,\epsilon_k \pm \sqrt{1 - (\Delta t\,\epsilon_k)^2}.
\]
Assume that the radicand is positive, i.e., that
\[
1 - \Delta t^2\epsilon_k^2 \ge 0, \qquad \forall k.
\]
Then we have
\[
|\xi|^2 = \Delta t^2\epsilon_k^2 + 1 - \Delta t^2\epsilon_k^2 = 1,
\]
and the scheme is unitary. This imposes the constraint
\[
\Delta t \le \frac{1}{\max_k\{\epsilon_k\}}
\]
on the time step. On the other hand, assume that there exists a $k$ such that the radicand is negative, so that $\xi$ is purely imaginary, viz.,
\[
\xi = -i\Delta t\,\epsilon_k \pm i\sqrt{\Delta t^2\epsilon_k^2 - 1}.
\]
Unless the radicand vanishes, the two roots have different moduli while their product has modulus one, so one of them must satisfy $|\xi| > 1$. Hence, the leap-frog scheme is stable if and only if the time step is restricted by the inverse of the largest eigenvalue, i.e.,
\[
\Delta t \le \frac{1}{\epsilon_{\max}}.
\]
For the finite difference method we obtain for the particle-in-box
\[
\Delta t \le \frac{h^2}{4\sin^2(N\pi h/2)} = \frac{h^2}{4}.
\]
For the finite element method we obtain another estimate, viz.,
\[
\Delta t \le \frac{h^2\left(2 + \cos(N\pi h)\right)}{12\sin^2(N\pi h/2)} = \frac{h^2}{12}.
\]
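The two limits are easy to tabulate. In the sketch below ($N = 100$ is an arbitrary illustrative choice), the largest admissible mode is $k = N - 1$, so the computed bounds come out marginally above $h^2/4$ and $h^2/12$:

```python
import math

# Largest particle-in-box eigenvalues on a uniform grid with mesh width
# h = 1/N (the section 6.4 formulas) and the resulting leap-frog limits
# dt <= 1/eps_max for FDM and linear FEM.

N = 100
h = 1.0 / N

def eps_fdm(k):
    return 4.0 / h**2 * math.sin(math.pi * k * h / 2) ** 2

def eps_fem(k):
    return (12.0 * math.sin(math.pi * k * h / 2) ** 2
            / (h**2 * (2.0 + math.cos(math.pi * k * h))))

dt_fdm = 1.0 / max(eps_fdm(k) for k in range(1, N))
dt_fem = 1.0 / max(eps_fem(k) for k in range(1, N))
# dt_fdm ~ h^2/4 and dt_fem ~ h^2/12: the FEM limit is about 3x stricter.
```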
If we have an additional potential, i.e., $H = H_{\mathrm{pib}} + V(x)$, and if we assume that $V(x)$ varies slowly in space, the stability criterion should not be severely modified. If, on the other hand, $V(x)$ varies rapidly, we must lower $\Delta t$.
We see that the ﬁnite element method actually requires a smaller step size than
the ﬁnite diﬀerence method. If we use a two-dimensional uniform grid, the largest
eigenvalue has twice the magnitude as in the one-dimensional grid. Hence, ∆t must be
halved in the two-dimensional case.
Actually, when looking at Fig. 6.2 we see that precisely because the finite element method produces qualitatively better, but overestimated, eigenvalues, it is subject to a stricter stability criterion.
Notice that the number of time steps required is not linear in the number of grid
points N = 1/h, but quadratic. For the ﬁnite element method we must solve a linear
system at each time step, each requiring perhaps O(N 2 ) operations. This totals O(N 4 )
operations for the complete simulation! For the theta-rule the time step may be chosen more freely. On the other hand, it is clear that to improve the quality of the numerical solution we must also lower $\Delta t$ if we lower $h$ in the theta-rule as well, so it is not clear at this point which method is best.
4.5.5 Properties of the ODE Arising From Space Discretizations
We have seen that in general we have represented the wave function $\Psi$ with a finite-dimensional vector which we call $y \in \mathbb{C}^N$. Accordingly, the spatial part of the PDE is represented by $N$ algebraic equations in $N$ unknowns. Spatially linear PDEs become linear systems, i.e., the spatial operators become $N \times N$ matrices.
For the time dependent Schrödinger equation, the spatial operator is just the Hamiltonian, a Hermitian operator. When solving the resulting ODE it would be fortunate if this Hermiticity is preserved, i.e., if the discretized operator is still Hermitian. This implies that some qualitative properties such as unitarity are also inherited by the ODE.
Let us be more specific. The time dependent Schrödinger equation (1.5) has been converted into a linear ODE, viz.,
\[
i\dot y = H(t)y, \qquad y \in \mathbb{C}^N,
\tag{4.37}
\]
where $N$ is the total number of degrees of freedom in our discretization. Let $\|y\|$ be the Euclidean norm on $\mathbb{C}^N$. Differentiation yields
\[
\frac{d}{dt}\|y\|^2 = \dot y^\dagger y + y^\dagger \dot y = iy^\dagger H^\dagger y - iy^\dagger H y = iy^\dagger (H^\dagger - H)y,
\]
i.e., the ODE conserves the Euclidean norm of the solution if and only if $H^\dagger = H$.
Can we be sure that all the numerical methods described in this chapter yield Hermitian $H$? Let us for simplicity consider the case where the PDEs for the different spin orientations $\Psi(\sigma)$ are decoupled. The coupled case yields the same results but with a little bit more notation. Let us also consider only the operators $i\partial/\partial x_k$ and $x_k$, that is, the $k$th momentum component and the $k$th position operator. If the discretizations yield Hermitian matrices $P_k$ and $X_k$ for these, then any Hermitian combination $A(i\partial/\partial x_k, x_k)$ also yields a Hermitian discretization if this is defined by $A(P_k, X_k)$.
The spectral method is the simplest to analyze. For simplicity we confine the discussion to one dimension. Of course $x$ is Hermitian, since it is just multiplication by a diagonal matrix with real elements. The momentum operator $i\partial/\partial x$ is given by
\[
P = i\,\frac{1}{N}Z^\dagger iKZ = -\frac{1}{N}Z^\dagger K Z,
\]
where $K$ is the diagonal matrix of real wave numbers. Then $P$ is clearly Hermitian, and so is any power of $P$.
For ﬁnite diﬀerence discretizations we must be a little more careful, since we are free
to choose any consistent diﬀerence for i∇ and −∇2 as our approximation. The standard
second order diﬀerences are however easily seen to be Hermitian. The matrix for δ2x
is skew-symmetric; upon multiplication with i it becomes Hermitian. The matrix for
δx δx is real and symmetric and hence Hermitian.
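These statements are quickly confirmed by building the matrices. The following sketch (dense pure-Python matrices; the grid size and spacing are arbitrary) checks that $\delta_{2x}$ is skew-symmetric, so that $i\delta_{2x}$ is Hermitian, and that $\delta_x\delta_x$ is real symmetric:

```python
# Standard second order differences on an N-point grid with Dirichlet
# ends, as dense pure-Python matrices (N and h are arbitrary here).

N, h = 8, 0.1

D1 = [[0.0] * N for _ in range(N)]       # central difference delta_2x
for j in range(N - 1):
    D1[j][j + 1] = 1.0 / (2 * h)
    D1[j + 1][j] = -1.0 / (2 * h)

D2 = [[0.0] * N for _ in range(N)]       # second difference delta_x delta_x
for j in range(N):
    D2[j][j] = -2.0 / h**2
    if j + 1 < N:
        D2[j][j + 1] = D2[j + 1][j] = 1.0 / h**2

P = [[1j * D1[i][j] for j in range(N)] for i in range(N)]  # i * delta_2x

def is_hermitian(M):
    return all(M[i][j] == M[j][i].conjugate()
               for i in range(len(M)) for j in range(len(M)))

skew = all(D1[i][j] == -D1[j][i] for i in range(N) for j in range(N))
# D1 skew-symmetric  =>  P = i*D1 is Hermitian;  D2 real symmetric.
```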
The finite element method involves the stiffness matrix $K$ representing $-\nabla^2$. This matrix is by definition Hermitian, see Eqn. (4.14). For the momentum operator $i\nabla$, we have by integration by parts the matrix elements
\[
P_{ij} := i\int_\Omega N_i(\nabla N_j) = -i\int_\Omega (\nabla N_i)N_j + \text{boundary terms} = P_{ji}^*,
\]
because the boundary terms vanish due to homogeneous Dirichlet boundary conditions. Hence, $P$ is Hermitian.
For an operator $A$ that is a function of $x$ we have
\[
A_{ij} := \int_\Omega N_i A(x) N_j,
\]
which clearly is Hermitian.
4.5.6 Equivalence With Hamilton’s Equations of Motion
We may develop a useful analogy between Hamilton's equations of motion and the discretized time dependent Schrödinger equation (4.37). This analogy will also hold for quantum
systems of ﬁnite dimension such as pure spin systems. The analogy may be very useful
when solving the Schrödinger equation, because many good integrators for classical
Hamiltonian systems have been discovered.
The vector $y \in \mathbb{C}^N$ is complex. If we by $q$ and $p$ denote the real and imaginary parts, respectively, we have
\[
\frac{d}{dt}(q + ip) = -iH(t)(q + ip) = -iH(t)q + H(t)p.
\]
If we assume that the action of the Hamiltonian on a real vector is again a real vector, we have
\[
\frac{dq}{dt} = H(t)p, \qquad\text{and}\qquad \frac{dp}{dt} = -H(t)q,
\]
where $q$ and $p$ are vectors in $\mathbb{R}^N$ for all $t$. If we define the function $\mathcal{H}$ as
\[
\mathcal{H} = \frac{1}{2}q^T H(t)q + \frac{1}{2}p^T H(t)p,
\]
then it is easy to see that $q$ and $p$ satisfy Hamilton's equations of motion with $\mathcal{H}$ as the Hamiltonian, viz.,
\[
\dot q_i = \frac{\partial \mathcal{H}}{\partial p_i}, \qquad\text{and}\qquad \dot p_i = -\frac{\partial \mathcal{H}}{\partial q_i}, \qquad i = 1, 2, \ldots, N.
\]
Solving the Schrödinger equation with $y(0)$ as initial condition is then equivalent to solving Hamilton's equations of motion with $q(0) = \operatorname{Re} y(0)$ and $p(0) = \operatorname{Im} y(0)$ as initial conditions.
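The equivalence is mechanical to verify. The sketch below (an arbitrary real symmetric $2\times 2$ matrix $H$ and arbitrary $q$, $p$, all chosen for illustration only) differentiates $\mathcal{H} = \tfrac12 q^T H q + \tfrac12 p^T H p$ numerically and compares with $Hp$ and $Hq$:

```python
# Finite-difference check that H(q,p) = (1/2) q^T H q + (1/2) p^T H p
# generates dq/dt = H p and dp/dt = -H q through Hamilton's equations.
# The 2x2 real symmetric matrix below is an arbitrary example.

Hmat = [[2.0, 0.3], [0.3, 1.0]]

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

def energy(q, p):
    Hq, Hp = matvec(Hmat, q), matvec(Hmat, p)
    return (0.5 * sum(q[i] * Hq[i] for i in range(2))
          + 0.5 * sum(p[i] * Hp[i] for i in range(2)))

q, p, d = [0.4, -0.2], [0.1, 0.7], 1e-6
err = 0.0
for i in range(2):
    pe = p[:]; pe[i] += d
    qe = q[:]; qe[i] += d
    dHdp = (energy(q, pe) - energy(q, p)) / d   # should be (H p)_i =  dq_i/dt
    dHdq = (energy(qe, p) - energy(q, p)) / d   # should be (H q)_i = -dp_i/dt
    err = max(err,
              abs(dHdp - matvec(Hmat, p)[i]),
              abs(dHdq - matvec(Hmat, q)[i]))
```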
The unitarity of Eqn. (4.37) is equivalent to
\[
\|y\|^2 = q^Tq + p^Tp = \text{constant}.
\]
This is not a general feature of Hamiltonian mechanics. But Hamilton’s equations are
known to be symplectic, that is area preserving. If we take a ball B ⊂ R2N of initial
conditions, then the sum of the areas of the projections of $B$ onto the $q_ip_i$-planes is conserved, i.e.,
\[
A = \sum_{i=1}^{N} \operatorname{Area} P_i(B) = \text{constant},
\]
where $P_i$ is the projection onto the $q_ip_i$-plane.
Furthermore, the volume of $B$ is conserved. This is referred to as Liouville's theorem, viz.,
\[
V = \int_B dq_1\,dp_1\cdots dq_N\,dp_N = \text{constant}.
\]
This property deﬁnes a conservative ODE.
There are many useful theorems and interesting views on classical Hamiltonian
mechanics. See Refs. [6, 23, 38] for a full treatment.
4.6 Basic Stability Analysis
This short section is an introduction to the methods and concepts used when analyzing the numerical properties of time dependent and stationary ﬁnite element discretizations. The mathematical theory of ﬁnite element methods introduces advanced
function spaces such as Sobolev spaces and also concepts such as weak derivatives.
This is beyond the scope of this thesis, but we nevertheless aim at giving a sketch of
how the analysis is performed. We focus on the basic ideas, leaving out many details
and calculations.
4.6.1 Stationary Problems
A PDE is an equation in which the unknown is a function. Mathematical analysis defines different function spaces. These are linear spaces of functions, and the Hilbert spaces such as $L^2$ of quantum mechanics are examples of such.¹² The space $L^p$ is defined as the space of $p$-integrable functions (where $p > 0$ is an integer), i.e., the set of functions $u : \Omega \subset \mathbb{R}^d \to \mathbb{R}$ for which the norm
\[
\|u\|_{L^p} := \left(\int_\Omega |u|^p\right)^{1/p} < \infty
\]
is finite.
The integral in question is the Lebesgue integral. These spaces are Banach spaces, i.e., complete normed spaces. The space $L^2$ is a special case, and this space is also a Hilbert space with respect to the inner product
\[
(u, v)_{L^2} := \int_\Omega uv,
\]
and hence the norm is $\|u\| = \sqrt{(u, u)}$. The Sobolev spaces $W_p^k$ are generalizations of $L^p$ in which we assume that the partial derivatives of $u$ up to order $k$ are elements of $L^p$. The norm is defined by
\[
\|u\|_{W_p^k} := \sum_{|\alpha| \le k} \|D^\alpha u\|_{L^p},
\]
where we have used the multi-index notation from section 4.2.1. See also Refs. [16, 17] for details on Sobolev spaces and other Hilbert spaces, which are the function spaces most common in finite element analysis.¹³ The Sobolev spaces $W_2^k =: H^k$ also become Hilbert spaces with the inner product
\[
(u, v)_{H^k} := \sum_{|\alpha| \le k} (D^\alpha u, D^\alpha v)_{L^2}.
\]
The most common class of Hilbert spaces used in finite element analysis are the Sobolev spaces $H^k$, of which $L^2 = H^0$ is an example.
Whatever space we are working with, let us call it $V$. A stationary PDE is then an equation on the form
\[
Au = f, \qquad u \in V, \quad f \in W.
\]
Here, $A$ is assumed to be a linear operator from $V$ to $W$. The space $W$ may be larger than $V$. Introducing a discretization, i.e., a subspace $V_h \subset V$, gives a discrete formulation of the problem, viz.,
\[
A_h u_h = f, \qquad u_h \in V_h,
\]
where $A_h$ is a linear operator from $V_h$ into $W$. The subscript $h$ indicates a discretization of some kind with parameter $h$. For finite difference methods it may be the largest grid spacing and for finite element methods it may be the diameter of the largest
¹² To comply with the notation of Sobolev spaces we will use $L^2$ instead of $L_2$ in this section.
¹³ The derivatives appearing in the definition of Sobolev spaces are actually so-called weak derivatives, i.e., derivatives of $L^2$-functions that do not necessarily have a classical derivative. The weak derivative coincides with the classical one if the latter exists.
element. If $h \to 0$ then $V_h \to V$ (in some sense) and we expect that $u_h \to u$. (Unless otherwise stated, $u \to v$ means $\|u - v\| \to 0$.) We assume for simplicity that $A_h$ is defined on all of $V$ and that it has an inverse. Let us estimate the error of the discrete
solution $u_h$:
\[
\|u - u_h\|_V = \|A_h^{-1}A_h(u - u_h)\|_V = \|A_h^{-1}(A_h u - Au)\|_V \le \|A_h^{-1}\|_{L(W,V)}\,\|A_h u - Au\|_W.
\]
Hence, for the discrete solution $u_h$ to approach $u$ it is enough to require a uniform bound on $A_h^{-1}$, i.e., that there exists a constant $C > 0$ such that $\|A_h^{-1}\| \le C$ for all $h$. This property is called stability. Furthermore, we must require that
\[
(A_h - A)u = A_h u - f \to 0
\]
as $h \to 0$. This property is consistency. It measures the error in the equation, as we easily see.
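Stability plus consistency is exactly what makes the classical second order difference scheme converge at rate $O(h^2)$. A self-contained sketch (the model problem $-u'' = \pi^2\sin(\pi x)$ on $(0,1)$ with homogeneous Dirichlet conditions is chosen here only as an illustration):

```python
import math

# -u'' = pi^2 sin(pi x) on (0,1), u(0) = u(1) = 0, exact u = sin(pi x).
# The three-point scheme is consistent of order h^2 and stable, so the
# maximum nodal error should drop by ~4 when h is halved.

def solve(N):
    h = 1.0 / (N + 1)
    a = [-1.0 / h**2] * N                 # sub-diagonal
    b = [ 2.0 / h**2] * N                 # diagonal
    c = [-1.0 / h**2] * N                 # super-diagonal
    d = [math.pi**2 * math.sin(math.pi * (j + 1) * h) for j in range(N)]
    for j in range(1, N):                 # Thomas algorithm, elimination
        m = a[j] / b[j - 1]
        b[j] -= m * c[j - 1]
        d[j] -= m * d[j - 1]
    u = [0.0] * N
    u[-1] = d[-1] / b[-1]
    for j in range(N - 2, -1, -1):        # back substitution
        u[j] = (d[j] - c[j] * u[j + 1]) / b[j]
    return max(abs(u[j] - math.sin(math.pi * (j + 1) * h)) for j in range(N))

e1, e2 = solve(40), solve(80)
rate = e1 / e2                            # close to 4 for a second order method
```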
This formulation is the one usually used in the analysis of ﬁnite diﬀerence methods,
and we easily recognize stability and consistency as they were introduced earlier. Both
properties are easy to deduce in this case. For ﬁnite element methods, however, the
usual approach is diﬀerent, as the truncation error Ah u − f is not easy to estimate.
In ﬁnite element methods, our discrete problem is a discretization of a weak formulation of the PDE. For simplicity, we will conﬁne the discussion to real PDEs.
The above formulation of the PDE was: Find $u$ such that $Au = f$. This pointwise fulfillment ($u$ is then called a classical solution) of the PDE is replaced with a fulfillment on average, i.e., find $u$ such that
\[
a(u, v) = (f, v), \qquad \forall v \in H.
\]
Here, $a(\cdot,\cdot) : H \times H \to \mathbb{R}$ is a bilinear form on $H$, i.e., linear in both arguments. We have replaced $V$ with another space $H \supset V$, as the weak formulation places weaker constraints on the solution than the classical, pointwise PDE. To see this, consider the PDE
\[
-\nabla^2 u = f,
\]
i.e., Poisson's equation. Note that we must require $u$ to be at least twice continuously
differentiable if $f$ is continuous. Multiplying with a test function $v$ and integrating yields
\[
-\int_\Omega v\nabla^2 u = \int_\Omega \nabla v\cdot\nabla u - \int_{\partial\Omega} v\frac{\partial u}{\partial n} = \int_\Omega fv.
\]
Assuming that the boundary terms vanish, as in the case of homogeneous Neumann boundary conditions, we may identify the bilinear form
\[
a(u, v) = \int_\Omega \nabla u\cdot\nabla v.
\]
Notice that we only need $u$ and $v$ to be differentiable. Hence, a solution of $a(u,v) = (f,v)$ is weaker than the classical solution. Before integrating by parts, we had to require $u \in H^2$. Integrating by parts then reduced the constraints, i.e., $u \in H^1$.
We must assume some fundamental properties of bilinear forms to carry out our discussion. First, we need¹⁴ $a(\cdot,\cdot)$ to be symmetric, i.e., $a(u,v) = a(v,u)$. Second, we need boundedness, i.e., there exists a constant $C_1 \ge 0$ such that
\[
a(u, v) \le C_1\|u\|\|v\|, \qquad \forall u, v \in H.
\]
Boundedness of the bilinear form is easily seen to be equivalent with continuity. Third, we need coercivity, i.e., there exists a constant $C_2 > 0$ such that
\[
a(u, u) \ge C_2\|u\|^2, \qquad \forall u \in H.
\]
¹⁴ Not really, but it makes life much simpler.
The bilinear form in the example is easily seen to be symmetric, bounded and coercive. If $a(\cdot,\cdot)$ is symmetric and coercive, the weak PDE is equivalent to the minimization of the functional
\[
J(u) = \frac{1}{2}a(u, u) - (f, u),
\]
i.e., to find $u$ such that $J(u) \le J(v)$ for all $v \in H$; see Ref. . In our example,
\[
J(u) = \frac{1}{2}\int_\Omega |\nabla u|^2 - \int_\Omega fu.
\]
For this reason, the weak formulation is also called the variational formulation. When formulating a discrete problem one chooses $V_h \subset H$ as for the classical PDE and seeks $u_h$ such that
\[
a(u_h, v) = (f, v), \qquad \forall v \in V_h.
\]
This is exactly Galerkin’s method in the weighted residual method. Let us analyze the
error u − uh of the weak formulation.
Note that by coercivity, we have
C2 uh 2 ≤ a(uh , uh ) = (f, uh ) ≤ f ∗ uh ,
where f ∗ is the dual norm of f , i.e., the norm of f as a linear functional on H.
Hence,
1
f ∗ ,
uh ≤
C2
and uh is bounded automatically by f . This is the stability criterion, and we see that
in the weak formulation we get this for free, so to speak, as long as coercivity of a(·, ·)
holds.
Let us turn to consistency, i.e., whether the discrete equation converges to the exact equation as $h \to 0$. Let $\delta_h(u)$ be the distance from $u$ to $V_h$, i.e.,
\[
\delta_h(u) := \inf_{v \in V_h} \|u - v\|.
\]
This really defines what $V_h \to V$ means. If $u$ is the solution to the exact problem, then $\delta_h(u)$ must vanish as $h \to 0$ in order to make the discrete formulation consistent. We see that consistency depends on the choice of $V_h$. In finite element methods, it turns out that the error $\|u - u_h\|$ is proportional to $\delta_h(u)$. Hence, both stability and consistency are fulfilled for every finite element method. See for example Refs. [16, 17].
Finally we show the best approximation in norm property of Galerkin's method. First note that for any $v \in V_h$,
\[
a(u - u_h, v) = a(u, v) - a(u_h, v) = (f, v) - (f, v) = 0.
\]
Hence, for any $v \in V_h$,
\[
C_2\|u - u_h\|^2 \le a(u - u_h, u - u_h) = a(u - u_h, u - v) \le C_1\|u - u_h\|\,\|u - v\|.
\]
Hence,
\[
\|u - u_h\| \le \frac{C_1}{C_2}\|u - v\|, \qquad \forall v \in V_h.
\tag{4.38}
\]
We see that the discrete formulation finds the $u_h \in V_h$ that is, up to the constant $C_1/C_2$, closest to $u$. If $\delta_h(u) \to 0$ as $h \to 0$ then clearly $u_h \to u$. See also Ref. .
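For the one-dimensional model problem $-u'' = f$ this best-approximation property has a striking and well known consequence: with exact integration of the load, the Galerkin solution with linear elements is exact at the nodes. A sketch (the choice $f = 1$, so that $u = x(1-x)/2$, is made here only for illustration; the tridiagonal stiffness system is solved with the Thomas algorithm):

```python
# Linear FEM for -u'' = 1 on (0,1), u(0) = u(1) = 0 (exact u = x(1-x)/2).
# K = tridiag(-1, 2, -1)/h and, with exact load integration, F_i = h.

N = 9                        # interior nodes (arbitrary)
h = 1.0 / (N + 1)
b = [2.0 / h] * N            # diagonal of the stiffness matrix K
c = -1.0 / h                 # constant off-diagonal of K
F = [h] * N                  # load vector for f = 1

for j in range(1, N):        # Thomas algorithm, elimination
    m = c / b[j - 1]
    b[j] -= m * c
    F[j] -= m * F[j - 1]
u = [0.0] * N
u[-1] = F[-1] / b[-1]
for j in range(N - 2, -1, -1):
    u[j] = (F[j] - c * u[j + 1]) / b[j]

x = [(j + 1) * h for j in range(N)]
node_err = max(abs(u[j] - x[j] * (1 - x[j]) / 2) for j in range(N))
# node_err sits at rounding level: the Galerkin solution is nodally exact here.
```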
Let us introduce a theoretical tool which will become useful when we study the stability of time dependent problems. We define an operator $R_h \in L(V, V_h)$. This operator projects a function $u$ onto the discrete space $V_h$ with respect to the energy inner product $a(\cdot,\cdot)$, i.e.,
\[
a(R_h u, v) = a(u, v), \qquad \forall v \in V_h.
\]
Eqn. (4.38) then becomes
\[
\|(1 - R_h)u\|_{H^1} \le \frac{C_1}{C_2}\inf_{v \in V_h}\|u - v\|_{H^1}.
\]
4.6.2 Time Dependent Problems
Although we argued in the introduction to this chapter that discretization in time is independent from discretization in space, this is not entirely true as far as the numerical convergence is concerned. For example, when $h \to 0$ in space, the resulting ODE becomes larger and larger. Hence, the convergence in time is affected by the mesh width in space. We need some kind of uniformity to circumvent this.
Let us perform an analysis similar to the one above for a time dependent problem. We will consider a PDE on the form
\[
u_t + Au = f, \qquad u \in H^2,
\]
where we seek $u : [0, T] \to H^1$, with $T \le \infty$. We are given an initial condition $u(0) = g \in H^1$. A mathematical question we will not investigate here is whether $u(t)$ for a given $t$ exists. We will assume that it does.
The weak formulation reads
\[
(u_t, v) + a(u, v) = (f, v), \qquad \forall v \in H^1,
\tag{4.39}
\]
where we assume $a(\cdot,\cdot)$ to be a symmetric, bounded and coercive bilinear form on $H^1$. In particular, the weak formulation must hold for $v = u$, i.e.,
\[
(u_t, u) + a(u, u) = (f, u).
\]
We have
\[
(u_t, u) = \int_\Omega u_t u = \int_\Omega \frac{1}{2}\frac{\partial}{\partial t}(u^2) = \frac{1}{2}\frac{d}{dt}\int_\Omega u^2 = \frac{1}{2}\frac{d}{dt}\|u\|_{L^2}^2,
\]
where we assume that integration and differentiation with respect to time are interchangeable. This gives
\[
\frac{1}{2}\frac{d}{dt}\|u\|_{L^2}^2 \le (f, u),
\tag{4.40}
\]
as $a(u, u) \ge 0$ by coercivity. We note that by the Cauchy-Schwarz and Young inequalities,
\[
|(f, u)| \le \|f\|_{L^2}\|u\|_{L^2} \le \frac{1}{2}\left(\|f\|_{L^2}^2 + \|u\|_{L^2}^2\right).
\]
Integrating Eqn. (4.40) from $t = 0$ to $t = T$ and using $u(0) = g$ gives
\[
\frac{1}{2}\|u(T)\|_{L^2}^2 \le \frac{1}{2}\|g\|_{L^2}^2 + \frac{1}{2}\int_0^T \|f\|_{L^2}^2\,dt + \frac{1}{2}\int_0^T \|u(t)\|_{L^2}^2\,dt.
\]
This is a stability criterion. To see this, consider Grönwall's inequality, see Refs. [larsson2003, mcowen2003]. If $x(t) \ge 0$ obeys
\[
x(T) \le h(T) + b\int_0^T x(t)\,dt,
\]
with $h(t) \ge 0$ nondecreasing and $b \ge 0$, then
\[
x(T) \le e^{bT}h(T).
\]
Turning to a discrete version of the weak formulation, we see that like in the stationary case we need an approximation property similar to Eqn. (4.38). Consistency is built into the formulation in the case of finite element methods. The discrete version of Eqn. (4.39) reads
\[
((u_h)_t, v) + a(u_h, v) = (f, v), \qquad \forall v \in V_h,
\]
with $u_h(0) = g_h$ as initial condition. From this we immediately obtain (taking $f = 0$ for simplicity)
\[
\|u_h(t)\|_{L^2} \le \|g_h\|_{L^2},
\]
i.e., stability is built into the formulation.
We wish to say something about the time development of the error $u - u_h$. We introduce an operator $R_h : H^1 \to H^1$ that takes an element $u \in H^1$ to its projection on $V_h$ using $a(\cdot,\cdot)$ as inner product.¹⁵ I.e., $R_h u$ is defined by
\[
a(R_h u, v) = a(u, v), \qquad \forall v \in V_h.
\]
Hence, $R_h u \in V_h$. This operator is only a theoretical tool; we will never actually compute it (or its action on $u$).
The time development of the error is given by
\[
((u - u_h)_t, v) + a(u - u_h, v) = 0, \qquad \forall v \in V_h.
\]
Writing $u - u_h = u - R_h u + R_h u - u_h$, where $R_h u - u_h \in V_h$, yields
\[
((R_h u - u_h)_t, v) + a(R_h u - u_h, v) = ((R_h u - u)_t, v), \qquad \forall v \in V_h,
\]
where the right hand side may be interpreted as a source term. We have used that by definition $a(u - R_h u, v) = 0$ for all $v \in V_h$. We see that the time development of $u - u_h$ is rewritten in terms of functions in $V_h$. Letting $v = R_h u - u_h$ yields
\[
\frac{1}{2}\frac{d}{dt}\|R_h u - u_h\|_{L^2}^2 \le (R_h u_t - u_t, R_h u - u_h),
\]
in the same way as when we analyzed the stability of $u \in H^1$. Hence, if we define $e = R_h u - u_h \in V_h$ we have
\[
\|e(T)\|_{L^2}^2 \le \|e(0)\|_{L^2}^2 + \int_0^T \|R_h u_t - u_t\|_{L^2}^2\,dt + \int_0^T \|e(t)\|_{L^2}^2\,dt.
\]
By Grönwall's inequality we have
\[
\|e(T)\|_{L^2}^2 \le e^T\left(\int_0^T \|(1 - R_h)u_t\|_{L^2}^2\,dt + \|e(0)\|_{L^2}^2\right)
\le Te^T \max_{0\le t\le T}\|(1 - R_h)u_t\|_{L^2}^2 + e^T\|e(0)\|_{L^2}^2,
\]
where $e(0) = R_h u(0) - u_h(0)$ is the initial error. The initial error is not zero because the discretized initial condition is not necessarily equal to the continuous initial condition. On the other hand, we see that if $\delta_h$ becomes smaller then so must $\|e(0)\|_{L^2}$. The function $(1 - R_h)w$ is the component of $w$ orthogonal (in the energy inner product) to $V_h$, whose norm also must become smaller when $\delta_h$ decreases. We see that decreasing the mesh width improves the error bound.
¹⁵ An inner product is a symmetric, positive definite bilinear form.
Chapter 5
Numerical Methods for Linear Algebra
This chapter is a rather brief overview of the various methods that are in use for solving (square) systems of linear equations and (standard and generalized) eigenvalue problems. Linear algebra is perhaps the most fundamental tool in numerical computation,
and hence it is important to master the powerful tools that have been developed.
The theoretical aspects of linear algebra may be found in Ref. . The numerical
methods are described well in Ref.  and Ref. .
In this section we are primarily concerned with square matrices. We shall consider two kinds of equations: square systems of linear equations and eigenvalue problems.
Let $A \in \mathbb{C}^{N\times N}$ be any square matrix of dimension $N$ and let $b \in \mathbb{C}^N$ be any $N$-dimensional vector. We then seek $x \in \mathbb{C}^N$ such that
\[
Ax = b.
\tag{5.1}
\]
This equation has a unique solution if and only if $A$ has an inverse $A^{-1}$, which holds if and only if the determinant does not vanish, i.e., $\det A \ne 0$. The matrix is then called non-singular.
If $\det A = 0$, i.e., if $A$ is singular, the set of solutions to the homogeneous equation $Ax = 0$ is a nontrivial subspace of $\mathbb{C}^N$. A solution to Eqn. (5.1) may happen not to exist in the singular case, but if it does then the general solution is given by $x_0 + x_p$, where $x_0$ is any solution of the homogeneous equation and $x_p$ is a particular solution of Eqn. (5.1). We will assume that $A$ is non-singular in the following discussion.
The need for solving Eqn. (5.1) arise in a wide range of problems in computational
physics. For our part, we need to solve large systems when using the ﬁnite element
method and also when we are given implicit time integration schemes. In addition,
solving eigenvalue problems also implies solving linear systems, as will be described
later. In fact, most of the time spent in solving PDEs is invested in linear algebra
problems, and so it is of major importance to choose fast and reliable methods.
The generalized eigenvalue problem reads: Given a pair of square matrices $A$ and $B$, find the eigenpairs $(\lambda, x)$, where $\lambda$ is a scalar and $x$ is a vector, such that
\[
Ax = \lambda Bx.
\tag{5.2}
\]
The scalar λ is called an eigenvalue and x is called an eigenvector. In the case B = I,
i.e., the identity matrix, we obtain the standard eigenvalue problem, viz.,
Ax = λx.
We shall only concern ourselves with the Hermitian eigenvalue problems, i.e., both A
and B are Hermitian matrices. In this case one may always ﬁnd N eigenpairs with real
eigenvalues and orthonormal eigenvectors, i.e., a basis for CN .
5.1 Introduction to Diffpack
Before we discuss the numerical methods we give a brief introduction to Diﬀpack, the
C++ library used in the ﬁnite element implementations in this thesis. It is natural to
introduce Diﬀpack at this point, after having introduced the ﬁnite element discretizations. A basic knowledge of Diﬀpack is also useful to get an overview of how the
numerical methods are used in the program. We cannot even scratch the surface of
Diﬀpack in this thesis. We will however mention some of the key points in the structure,
class hierarchy and implementations of ﬁnite element solvers in order to understand
pros and cons, limitations and possibilities.
Diffpack is a very complex library. In addition to defining hundreds of classes and functions it also includes many auxiliary utilities, performing a wide range of
tasks such as generation of geometrically complicated grids, visualization of data and
conversion of data to and from various common formats. The programming library
and the utilities together constitute a complete environment for solving PDE related
problems; including planning and preparation, implementation, simulation and data
organizing, visualization and presentation of results.
The Diﬀpack project was started in the mid 90’s by Sintef and the University of Oslo.
As the project grew in size, complexity and reputation the ﬁrm Numerical Objects was
founded in 1997. Diﬀpack is now owned by the German company inuTech and jointly
developed with Simula Research Laboratory. See Refs. [42–44] for details.
Diﬀpack is a library developed with extensive use of object-oriented techniques, as
is a must when developing flexible finite element solvers. Many classes are templated¹ and a hierarchy of smart pointers (so-called handles) is implemented, easing the memory handling, which otherwise has a tendency to require many debugging sessions in C++. There are class hierarchies for matrices, grids and grid generators, finite element abstractions, solvers for systems of linear equations and so on.
Ref.  is an excellent introduction to both the ﬁnite element method and Diﬀpack
programming. It is an easy-to-read and informative account and a good introduction
and reference for practitioners at all levels. In addition, the online Diﬀpack documentation in Ref.  is recommended.
5.1.1 Finite Elements in Diﬀpack
Diﬀpack implements many diﬀerent ﬁnite element types, including (but not limited to)
linear and quadratic elements in one dimension, and tensor product generalizations to
more dimensions. The subclasses of ElmTensorProd exemplify this.
For example, the element class ElmB2n1D implements a linear element in one dimension, i.e., one-dimensional elements (intervals) with two nodes located at the endpoints, and the class ElmB4n2D is the two-dimensional generalization, obtained by taking the tensor product of two one-dimensional elements. Hence, it is defined on quadrilaterals and has four nodes located at the corners. The (piecewise bilinear) element functions
are depicted in Fig. 4.6.
The class ElmB3n1D is a one-dimensional quadratic element with three nodes in
each element, two located at the endpoints of the interval and the third somewhere
in-between. The two-dimensional generalization is ElmB9n2D with 9 nodes. In three
dimensions we have the class ElmB27n3D with 27 nodes distributed in a parallelepiped.
Besides methods that evaluate the basis functions and their partial derivatives, the
element classes contain Gaussian integration rules that may be used when assembling
the element matrices and vectors. This is done in the integrands() method of FEM, the
class from which our ﬁnite element simulators are derived. This method evaluates the
integrands in the one-dimensional ﬁnite element system example in Eqns. (4.12) and
1 Actually, templating is emulated with preprocessor macros and directives due to compiler-dependent behavior of the template feature in C++.
(4.13). Assembling the element matrices and vectors is done when calling FEM::makeSystem(). Numerical integration then takes place, and FEM::integrands() evaluates the integrands at the points defined by the current integration rules. Hence, numerical
integration is a very important part of ﬁnite element implementations with Diﬀpack.
A detailed description of Diffpack is outside the scope of this text. The code for the
programs written in this thesis uses many more of Diffpack's features than described here,
but the code is well commented in order to compensate for this.
5.1.2 Grid Generation
Diﬀpack is bundled with lots of utilities and scripts that typically call third-party
grid generation software (called preprocessors) in order to produce compatible grids.
Examples of such third-party software are Triangle and GeomPack. Diﬀpack comes with
classes with interfaces to such utilities in addition to scripts and utilities. With these,
both simple and complicated grids may easily be created. Ref.  is a comprehensive
introduction to ﬁnite element grid preprocessors used in Diﬀpack. In this thesis, we
employ square grids and disk-grids (i.e., approximations to circular regions).
A typical way to generate a grid is by the external utility (written with Diﬀpack)
makegrid. This is a command line based interface to the various grid preprocessors.
Here is a sequence of commands that generate a unit box in two dimensions with 9 × 9
linear elements:
set preprocessor method = PreproStdGeom
set output gridfile name = box
ok
set geometry = d=2 [0,1]x[0,1]
set partition = d=2 e=ElmB4n2D div=[10,10] grading=[1,1]
ok
For making a disk grid, one may create a ﬁle describing the outline of the circular
region in terms of a polygon and then use Triangle to triangulate it. Here is a short
snippet of Python code to create such a grid:
import os
from math import *

# Generate a disk with radius r and mesh width h.
# Uses the "Triangle" program.
def makeDiskTri(r, h):
    div = int(floor(r/h + 1))  # number of segments in outline polygon
    print "generating disk with div=%d, h=%f" % (div, h)
    nel = int(floor((div-1)*(div-1)/(2*pi)))  # desired number of elms
    el_area = pi*r*r/nel  # average element area
    # create the input file to Triangle.
    poly_file_text = "%d 2 0 1\n" % (div)
    for i in range(div):
        angle = i*2*pi/float(div)
        (x, y) = (cos(angle)*r, sin(angle)*r)
        poly_file_text = poly_file_text + "%d %g %g 20010\n" % (i+1, x, y)
    poly_file_text = poly_file_text + "%d 1\n" % (div)
    for i in range(div):
        if i == div-1:
            i2 = 1
        else:
            i2 = i+2
        poly_file_text = poly_file_text + "%d %d %d 20010\n" % (i+1, i+1, i2)
    poly_file_text = poly_file_text + "0\n"
    f = open(".disk.poly", "w")
    f.write(poly_file_text)
    f.close()
    # Call Triangle. No angles should be less than 28 deg,
    # all triangle areas should be less than el_area
    # (yielding at least nel elements).
    os.system("triangle -q28Ipa%f .disk.poly" % (el_area))
    os.system("triangle2dp .disk")
    fname = "disk_%d_%f" % (div, h)
    os.system("mv .disk.grid %s.grid" % (fname))
A grid generated with this script can be seen in Fig. 5.1.
5.1.3 Linear Algebra in Diﬀpack
It is important to be aware of the different kinds of matrices that Diffpack offers, along with their advantages and disadvantages. Here we give a brief overview of the matrix types
and their corresponding classes. Each matrix type may be complex or real. As the
matrix classes in Diﬀpack are templated this is indicated as a template parameter, e.g.,
Matrix(Complex) or Matrix(real). In general we write NUMT for the type. All matrix
classes are derived from Matrix(NUMT). See the online documentation in Ref.  for
details.
The important differences between the matrix types lie in the different structures we assume the matrices to have. This allows for optimizations when calculating for example
matrix-vector products. There is no need to store a full 1000 by 1000 matrix if we know
that only the diagonal is diﬀerent from zero, and performing a matrix-vector product
with nearly a million multiplications with zero is a waste of resources. As a matter of
fact, the matrix-vector product is very important to optimize for this thesis, as it is the
most fundamental operation when both solving linear systems of equations iteratively
and when solving eigenvalue problems.
An important property of the ﬁnite element matrices is sparsity. The element
matrices are sparse, meaning that most of the elements are known to be zero. Using
a grid with $10^8$ nodes yields a linear system of the same dimension, and storing a full matrix clearly cannot (and should not!) be done. The matrix elements $A_{ij}$ for which the nodes $i$ and $j$ do not belong to the same element are known to be zero. Hence,
the number of non-zeroes in the element matrix is of order N , yielding very sparse
matrices as the fraction of non-zeroes is actually of order 1/N . We must mention,
however, that this is a somewhat simplified picture. Higher order elements yield more
couplings between the nodes (as more nodes belong to the same element) and hence
increase the number of non-zeroes.
Fig. 5.1 shows a typical moderately-sized ﬁnite element grid with linear triangular
elements and the corresponding sparsity pattern. The ﬁgure shows a square picture
where each point corresponds to a matrix element. Black dots are non-zeroes and white
dots are zeroes.
Dense Matrices. Dense N × M matrices are the most general matrices. Every entry
is assumed to be a priori a possible non-zero. Hence, a full array of N · M (complex
or real) numbers must be allocated in an implementation.
The $i$th component of the matrix-vector product $w = Av$ is given by
$$w_i = \sum_{j=1}^{M} A_{ij} v_j,$$
and as $w$ has $N$ components, each requiring $M$ multiplications and additions, $O(NM)$ floating point operations are required for a matrix-vector product in the real case. For complex matrices, the multiplications and additions are of course complex, roughly quadrupling the number of multiplications.
Dense matrices are implemented in the class Mat(NUMT).
Diagonal Matrices. On the other extreme of dense matrices we ﬁnd the diagonal
matrices, in which only the diagonal elements are non-zeroes. The lumped mass matrix
is an example of a diagonal matrix. For a square N -dimensional matrix exactly N (real
or complex) numbers must be stored. The matrix vector product reduces to
$$w_i = A_{ii} v_i,$$
and hence only O(N ) operations are needed for a matrix-vector product. Again, complex matrices require about four times more work than the real matrices.
Diagonal matrices are supported through the MatDiag(NUMT) class.
Tridiagonal Matrices and Banded Matrices. Tridiagonal matrices are allowed to have
non-zeroes at the ﬁrst super- and subdiagonal as well as the main diagonal, i.e., only
Ai,j , j = i − 1, i, i + 1 are allowed to be nonzero. A square N -dimensional tridiagonal
matrix is stored as a 3 × N dense matrix. It is easily seen that a matrix-vector product
requires O(3N ) operations. (Again, complex matrices require a bit more.)
Tridiagonal matrices are special cases of banded matrices. A banded matrix allows
non-zeroes at the k ﬁrst sub- and superdiagonals. The total number 2k + 1 of non-zero
diagonals is called the bandwidth. It is stored as a (2k + 1) × N dense matrix and
clearly O((2k + 1)N ) operations are needed for a matrix-vector product.
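The banded matrix-vector product is easily sketched in a few lines of plain Python. This is a hypothetical illustration of the storage scheme, not Diffpack's actual MatBand implementation; in particular the convention band[d][i] = A[i][i+d-k] (with zero padding outside the matrix) is an assumption made for this sketch:

```python
def banded_matvec(band, k, v):
    """Matrix-vector product w = A v for an N x N banded matrix with k
    sub- and superdiagonals, stored as a (2k+1) x N array where
    band[d][i] holds A[i][i+d-k] (zero-padded outside the matrix)."""
    N = len(v)
    w = [0.0] * N
    for i in range(N):
        for d in range(2 * k + 1):     # only 2k+1 terms per row
            j = i + d - k
            if 0 <= j < N:
                w[i] += band[d][i] * v[j]
    return w

# Tridiagonal example (k = 1): A = [[2,1,0],[1,2,1],[0,1,2]]
band = [[0.0, 1.0, 1.0],   # subdiagonal   A[i][i-1]
        [2.0, 2.0, 2.0],   # main diagonal A[i][i]
        [1.0, 1.0, 0.0]]   # superdiagonal A[i][i+1]
w = banded_matvec(band, 1, [1.0, 1.0, 1.0])   # w = [3.0, 4.0, 3.0]
```

The inner loop runs over at most $2k+1$ entries per row, which makes the $O((2k+1)N)$ cost explicit.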
General Sparse Matrices. A matrix is called sparse if in some sense the number of
(possible) non-zeroes is low. A typical example is diagonal and tridiagonal matrices,
in which the number of non-zero elements is of order N . A matrix-vector product then
requires only O(N ) operations as is easily seen.
Fig. 5.1 shows the banded structure of an element matrix and the fact that many of the entries inside the band are zero, resulting in very many unnecessarily stored elements
if we use MatBand(NUMT). Indeed, the fraction of non-zeroes in the band decreases
rapidly for larger grids (i.e., larger element matrices), and hence we need support for
matrices with an even more general structure.
Figure 5.1: Sparsity pattern matrix for a finite element grid (inset). Non-zeroes are shown as black dots. The matrix dimension is 1148.

In general sparse matrices we store only the non-zero elements. The class MatSparse(NUMT) implements the so-called compressed sparse row storage scheme (CSR for short). It is the most compact way of storing the matrix and allows very fast
computations of matrix-vector products.
The $N \times M$ sparse matrix is stored by means of three one-dimensional vectors $a \in \mathbb{C}^n$, $c \in \mathbb{N}^n$ and $r \in \mathbb{N}^{N+1}$, where $n$ is the total number of non-zero entries and $N$ is the number of rows. The elements of $a$ are the non-zero matrix entries ordered row-wise from left to right, i.e., in reading order. The entries of $c$ are the column indices for these, i.e., $a(s)$ resides in column $c(s)$. Element number $i$ of $r$ stores the index $s$ of the first entry of row $i$. In addition, $r(N+1) = n+1$ by definition. Hence, the number of non-zero entries in row $i$ is $r(i+1) - r(i)$. For example, consider the matrix
$$A = \begin{pmatrix} 1 & 0 & 2 \\ 0 & -3 & 4 \\ 5 & 0 & 0 \\ 0 & 0 & 6 \end{pmatrix}.$$
Here, $n = 6$ and
$$a = (1, 2, -3, 4, 5, 6)$$
are the entries. The column indices are
$$c = (1, 3, 2, 3, 1, 3)$$
and the row pointers are
$$r = (1, 3, 5, 6, 7).$$
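The CSR matrix-vector product can be sketched in plain Python. This is a hypothetical illustration of the storage scheme, not Diffpack's MatSparse code; note also that 0-based indices are used here, whereas the text uses the 1-based convention:

```python
def csr_matvec(a, c, r, v):
    """w = A v for a matrix in CSR format: a holds the non-zeroes in
    reading order, c[s] is the column index of a[s], and r[i] points to
    the first entry of row i (r[-1] is the total number of non-zeroes)."""
    N = len(r) - 1
    w = [0.0] * N
    for i in range(N):
        for s in range(r[i], r[i + 1]):   # all non-zeroes of row i
            w[i] += a[s] * v[c[s]]
    return w

# The 4 x 3 example matrix from the text, shifted to 0-based indices:
a = [1, 2, -3, 4, 5, 6]
c = [0, 2, 1, 2, 0, 2]
r = [0, 2, 4, 5, 6]
w = csr_matvec(a, c, r, [1, 1, 1])   # w = [3.0, 1.0, 5.0, 6.0]
```

Only the $n$ stored entries are touched, so the product costs $O(n)$ operations, as claimed above.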
Solvers for Linear Systems of Equations. As we have seen, large sparse matrices arise
from ﬁnite element discretizations and ﬁnite diﬀerence approximations. Hence, solving
large systems of linear equations with a sparse coeﬃcient matrix is fundamental. As
we shall see, the corresponding eigenvalue problems, i.e., ﬁnding the eigenvalues and
eigenvectors of the discrete Hamiltonian, also requires solutions to linear systems in
order to be solved.
For linear systems Diﬀpack implements many solvers, from Gaussian elimination
to Krylov subspace methods with preconditioning. In the next section we will review
the most important methods that are implemented. Again, we refer to Ref.  for
details. Here we merely mention that a separate hierarchy of classes is implemented
in Diﬀpack (i.e., LinEqSolver and its descendants) that allows the user to solve linear
systems with all available methods.
Eigenvalue Problems in Diﬀpack. Unfortunately there is no direct support for solving
eigenvalue problems in Diﬀpack as of yet. To achieve this one must either implement
the proper methods or in some way connect Diﬀpack matrix classes with external
solvers. As eigenvalue computation is a fairly complicated business, we would prefer to
use ready-made libraries. In an external report written for Simula Research Laboratory
(see Refs. [5, 44]) I have investigated the possibility of using the popular ARPACK
eigenvalue package with matrices from Diﬀpack. A simple implementation was made
and the eigenvalue solvers in this thesis are an extension of the implementation found
in this work.
5.2 Review of Methods for Linear Systems of Equations
In this section we consider square linear systems of equations, i.e., ﬁnd x such that
Ax = b,
where A is an N × N matrix and x and b are N -dimensional (complex) vectors.
For large systems the time spent on solving the system might be much greater than that spent constructing it, for example via assembly of the element matrices. It is then vital to have access to a variety of numerical methods for solving linear systems. The optimal choice for solving the linear systems is highly problem dependent, and one should not rely solely on, for example, Gaussian elimination. It is then important that the programming environment makes it easy to switch between the methods.
There are three basic properties that we need to consider for each method: for which matrices it applies, the computational cost (both the number of operations and the storage requirements) of the method, and how the method actually works. See Refs. [34, 40] for details on the methods presented here.
5.2.1 Gaussian Elimination and Its Special Cases
Strictly speaking, Gaussian elimination is only one of the methods that are usually referred to by that name. Pure Gaussian elimination is rarely implemented; instead one uses, for example, the LU decomposition method.
LU Decomposition. The LU decomposition method assumes that we can rewrite our
matrix A as
A = LU,
where $L$ is lower triangular and $U$ is upper triangular. Solving a lower or upper triangular system is trivial and is done by forward substitution and backsubstitution, respectively. Let us write
$$LUx = L(Ux) = b.$$
The equation $Ly = b$ is easily solved by forward substitution, i.e.,
$$y_1 = \frac{b_1}{\alpha_{11}}, \quad \text{and} \quad y_i = \frac{1}{\alpha_{ii}}\Bigl(b_i - \sum_{j=1}^{i-1} \alpha_{ij} y_j\Bigr), \quad i = 2, 3, \ldots, N.$$
Here, $\alpha_{ij}$ are the lower triangular elements of $L$. Then we solve the equation $Ux = y$ with backsubstitution, i.e.,
$$x_N = \frac{y_N}{\beta_{NN}}, \quad \text{and} \quad x_i = \frac{1}{\beta_{ii}}\Bigl(y_i - \sum_{j=i+1}^{N} \beta_{ij} x_j\Bigr), \quad i = N-1, N-2, \ldots, 1.$$
Here, $\beta_{ij}$ are the upper triangular elements of $U$. As we see, this produces the solution of $Ax = b$ in $O(N^2)$ operations, given $L$ and $U$.
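The two substitution sweeps can be sketched directly from the formulas above. This is a minimal plain-Python illustration assuming $L$ and $U$ are given as dense lists of lists; it is not how Diffpack's GaussElim stores or computes them:

```python
def solve_lu(L, U, b):
    """Solve L U x = b: forward substitution for L y = b,
    then backsubstitution for U x = y."""
    N = len(b)
    y = [0.0] * N
    for i in range(N):                     # forward substitution
        s = sum(L[i][j] * y[j] for j in range(i))
        y[i] = (b[i] - s) / L[i][i]
    x = [0.0] * N
    for i in range(N - 1, -1, -1):         # backsubstitution
        s = sum(U[i][j] * x[j] for j in range(i + 1, N))
        x[i] = (y[i] - s) / U[i][i]
    return x

# A = L U = [[2, 6], [1, 5]]; b = A [1, 1]^T = [8, 6]
x = solve_lu([[2.0, 0.0], [1.0, 1.0]],
             [[1.0, 3.0], [0.0, 2.0]],
             [8.0, 6.0])                   # x = [1.0, 1.0]
```

Each sweep touches roughly half of a dense matrix, so the total cost is the $O(N^2)$ stated above.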
The LU decomposition is usually performed with an algorithm called Crout's algorithm. (The algorithm also shows, by construction, that it is always possible to perform an LU decomposition.) The algorithm needs $O(N^3)$ operations for a dense matrix. Clearly, with $N \sim 10^7$ the LU decomposition is not feasible in general, but for some special
cases the amount of work needed reduces drastically.
Notice that once the LU decomposition is performed, any right hand side b can be
solved with forward and backsubstitution. If we need to solve several right hand sides
this should be taken advantage of.
In Diﬀpack, Gaussian elimination with LU decomposition is implemented in the
GaussElim class, a subclass of LinEqSolver.
Gaussian elimination with LU decomposition is mathematically fool-proof. If A is
non-singular, the solution x is given by the above formulas. Numerically, however, we
cannot avoid round-oﬀ errors. One way to minimize such errors is to perform a process
called pivoting in the LU decomposition. In fact, Gaussian elimination methods are
numerically unstable if pivoting is not carried out.
Round-off errors are still a problem if the matrix $A$ is ill-conditioned, meaning that its determinant is almost zero. The condition number $\kappa$ is defined as the square root of the ratio of the magnitudes of the largest and the smallest eigenvalue, i.e.,
$$\kappa = \sqrt{\frac{|\lambda_{\max}|}{|\lambda_{\min}|}}.$$
If the condition number is infinite, the determinant is zero, as one of the eigenvalues is zero. Hence, the matrix is singular. If $\kappa^{-1}$ approaches the machine precision, i.e., the smallest $\epsilon$ for which $1 + \epsilon$ is distinguishable from $1$ in the computer arithmetic, the matrix is singular as far as the numerical methods are concerned. Hence we can get unpredictable results when using such ill-conditioned coefficient matrices.
Tridiagonal Systems and Banded Systems. The situation is greatly simpliﬁed if A is a
tridiagonal matrix. Recall that tridiagonal matrices have non-zero elements only along
the diagonal and the ﬁrst sub- and superdiagonal. Hence, it has approximately 3N
nonzero elements. For such systems LU decomposition and forward and backsubstitution take only $O(N)$ operations, which is optimal. The algorithm can be coded in
merely a handful of lines, see for example Ref. . The storage requirements are only
a vector of length N for temporary storage, contrary to an array of length N 2 needed
for the full LU decomposition.
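This tridiagonal special case is the classic Thomas algorithm. Here is a minimal plain-Python sketch (a hypothetical illustration, not the GaussElim implementation), with the three diagonals passed as separate vectors and a single temporary vector of length $N$:

```python
def solve_tridiag(sub, diag, sup, b):
    """O(N) LU solve for a tridiagonal system, where sub[i] = A[i][i-1],
    diag[i] = A[i][i] and sup[i] = A[i][i+1]."""
    N = len(diag)
    c = [0.0] * N    # modified superdiagonal (the temporary storage)
    d = [0.0] * N    # modified right-hand side
    c[0] = sup[0] / diag[0]
    d[0] = b[0] / diag[0]
    for i in range(1, N):              # forward elimination
        m = diag[i] - sub[i] * c[i - 1]
        c[i] = sup[i] / m if i < N - 1 else 0.0
        d[i] = (b[i] - sub[i] * d[i - 1]) / m
    x = [0.0] * N
    x[N - 1] = d[N - 1]
    for i in range(N - 2, -1, -1):     # backsubstitution
        x[i] = d[i] - c[i] * x[i + 1]
    return x

# A = [[2,1,0],[1,2,1],[0,1,2]], b = A [1,1,1]^T = [3,4,3]
x = solve_tridiag([0.0, 1.0, 1.0], [2.0, 2.0, 2.0],
                  [1.0, 1.0, 0.0], [3.0, 4.0, 3.0])
```

Both sweeps do a constant amount of work per row, which is where the $O(N)$ cost comes from.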
Banded systems are more general, in that the matrix A has non-zero elements
along k sub- and superdiagonals. Banded matrices are frequently output from element
assembly routines, although general sparse matrices perform better. (One complication
is however that Gaussian elimination is not implemented for complex sparse matrices,
see the paragraph below on sparse systems.)
The computational cost of banded Gaussian elimination depends on the bandwidth,
but if the bandwidth $2k + 1 \ll N$ it is much faster than full Gaussian elimination.
It is noteworthy that the tridiagonal Gaussian elimination needs almost no extra storage space. Nor does Crout's algorithm for dense matrices need extra space, if we can live with the original dense matrix being destroyed. Banded Gaussian elimination, however, creates additional non-zeroes called fill-ins at locations outside the band in A. If the bandwidth is comparable to N, the number of fill-ins is considerable.
Tridiagonal and banded Gauss elimination is also implemented in GaussElim.
Sparse Systems. Sparse matrices contain mostly zeroes. Often the number of nonzeroes is of order N . The structure of a sparse matrix may vary greatly, see for
example Fig. 2.7.1 of Ref. . The structure may for example lie somewhere between
a dense matrix and a banded matrix.
The tridiagonal LU decomposition with forward and backsubstitution can be performed in only O(N ) operations. This is due to clever application of the algorithm and
bookkeeping of the zeroes in the system. The structure of the sparse matrix plays a
fundamental role here, and for general sparse matrices one must in some way analyze
the sparsity pattern in order to implement an eﬀective algorithm. Keeping the number
of ﬁll-ins reasonable and at the same time minimizing the number of operations needed
is no simple task.
As noted above, Gaussian elimination is not implemented for Diffpack's complex sparse matrices; for complex sparse systems one must therefore resort to the iterative methods described below.
5.2.2 Classical Iterative Methods
Gaussian elimination is a direct method, i.e., an algorithm for ﬁnding x with an exactly
known complexity. We can precisely count the number of operations needed for an exact
solution to be found. Iterative methods take another approach. As the name suggests
one creates a sequence of trial vectors xk that hopefully converges to the real solution
x. Each iteration should be in some sense cheap in order to make this meaningful.
Classical iterative methods split A into A = M − N and hence
M x = N x + b,
suggesting an iterative process given by
M xk+1 = N xk + b.
This process is the same at each iteration, which is characteristic for classical methods.
We see that the system M x = b must be in some sense cheap to solve if the iteration
process should have any practical use. If we deﬁne the residual r k as
r k := b − Axk ,
(note the similarity to the ﬁnite element formulation) we obtain
xk+1 = xk + M −1 r k .
Deﬁning the matrix G = M −1 N and the vector c = M −1 b we obtain
xk+1 = Gxk + c,
(5.3)
and this makes us able to analyze the stability and convergence properties of the
iteration. Through the following chain of identities,
$$x^{k+1} - x = Gx^k + c - x = Gx^k + M^{-1}(b - Mx) = Gx^k + M^{-1}(b - Ax - Nx) = Gx^k - Gx = G(x^k - x),$$
we obtain
$$x^k - x = G^k(x^0 - x).$$
In other words, the error $e^k = x - x^k$ converges to zero if and only if
$$\lim_{k\to\infty} G^k = 0.$$
If we can diagonalize $G$, then $\|G\| = \max_{\lambda\in\sigma(G)} |\lambda| =: \rho(G)$, where $\sigma(G)$ is the set of eigenvalues and is called the spectrum of $G$. The quantity $\rho(G)$ is called the spectral radius. Hence,
$$\lim_{k\to\infty} \|e^k\| = \|e^0\| \lim_{k\to\infty} \|G^k\| = \|e^0\| \lim_{k\to\infty} \rho(G)^k,$$
and the iteration process converges if and only if the spectral radius $\rho(G) < 1$.
The asymptotic rate of convergence is defined as
$$R_\infty(G) := -\ln \rho(G),$$
and to reduce the error by a factor $\epsilon$ one needs $k = -\ln\epsilon / R_\infty(G)$ iterations. Unfortunately, the convergence rates of the different iteration methods are highly problem dependent. It is not obvious which method is the best. In Ref.  an analysis of a few cases and model problems is quoted, and the iterative methods are interpreted in terms of a model problem. The text is highly recommended for further reading.
In the following, notice that when A is a sparse matrix the iterations become exceedingly simple to perform as each iteration only costs O(n) operations, where n is
the number of non-zeroes in the matrix.
The classical iterative solvers are implemented in subclasses of BasicItSolver
(which again is a subclass of IterativeSolver derived from LinEqSolver).
Simple Richardson Iteration and Jacobi Iteration. For the Richardson iteration (or
just simple iteration) one chooses M = I. The system M x = b then becomes trivial.
This yields the iteration
xk+1 = xk + r k = xk − Axk + b.
For the Jacobi iteration we choose M = D, where D is the diagonal of A, i.e., Dii = Aii
and $D_{ij} = 0$ for $i \neq j$. Again, $Mx = b$ is very cheap to solve. This yields the iteration
$$x^{k+1} = D^{-1}((D - A)x^k + b) = x^k - D^{-1}Ax^k + D^{-1}b.$$
Writing out the equation for $x_i^{k+1}$ yields
$$x_i^{k+1} = x_i^k + \frac{1}{A_{ii}}\Bigl(b_i - \sum_{j=1}^{N} A_{ij} x_j^k\Bigr), \qquad i = 1, 2, \ldots, N.$$
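One Jacobi sweep can be sketched directly from this formula. This is a plain-Python illustration on dense storage for clarity; Diffpack's actual iterative solvers of course operate on its own matrix classes:

```python
def jacobi_step(A, b, x):
    """One Jacobi iteration: x^{k+1} = x^k + D^{-1} (b - A x^k)."""
    N = len(b)
    return [x[i] + (b[i] - sum(A[i][j] * x[j] for j in range(N))) / A[i][i]
            for i in range(N)]

# Iterate on a small diagonally dominant system with solution [1, 1]:
A = [[2.0, 1.0], [1.0, 2.0]]
b = [3.0, 3.0]
x = [0.0, 0.0]
for k in range(50):
    x = jacobi_step(A, b, x)
```

For this matrix the iteration matrix $G = -D^{-1}(L+U)$ has spectral radius $1/2$, so the error is halved in every sweep, in agreement with the convergence analysis above.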
Relaxed Jacobi Iteration. If we denote by $x^*$ the result of a Jacobi iteration, we may instead use
$$x^{k+1} = \omega x^* + (1 - \omega)x^k,$$
i.e., a weighted mean of the old iterate and the new Jacobi iterate. It may turn out that using $0 < \omega < 1$ leads to faster convergence, but this is highly problem dependent.
Gauss-Seidel Iteration. As noted above in section 5.2.1, equations involving upper or
lower triangular matrices are easy to solve by forward or backsubstitution. Hence, we
could choose M to be the upper or lower triangular part of A, not to be confused with
L or U in the LU decomposition. If we write $A = D + L + U$, where $D$ is the diagonal from the Jacobi iteration and $L$ and $U$ are the strictly lower and upper triangular parts of $A$, respectively, we may choose $M = D + L$, which defines the Gauss-Seidel iteration. Hence,
$$(D + L)x^{k+1} = -Ux^k + b.$$
Writing out the equations yields
$$x_i^{k+1} = \frac{1}{A_{ii}}\Bigl(b_i - \sum_{j=1}^{i-1} A_{ij} x_j^{k+1} - \sum_{j=i+1}^{N} A_{ij} x_j^k\Bigr), \qquad i = 1, 2, \ldots, N.$$
Hence, it is similar to the Jacobi iteration, but we reuse the new values $x_j^{k+1}$ for $j < i$.
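A Gauss-Seidel sweep differs from the Jacobi sweep only in that the updated entries are used immediately. Again, this is a plain-Python sketch on dense storage, not Diffpack's implementation:

```python
def gauss_seidel_step(A, b, x):
    """One Gauss-Seidel sweep; the entries x[j], j < i, are already
    updated when x[i] is computed."""
    N = len(b)
    x = list(x)                        # copy; updated in sweep order
    for i in range(N):
        s = sum(A[i][j] * x[j] for j in range(N) if j != i)
        x[i] = (b[i] - s) / A[i][i]
    return x

A = [[2.0, 1.0], [1.0, 2.0]]
b = [3.0, 3.0]
x = [0.0, 0.0]
for k in range(30):
    x = gauss_seidel_step(A, b, x)     # converges to [1.0, 1.0]
```

On this example Gauss-Seidel reduces the error by a factor 4 per sweep, twice the rate of Jacobi, which illustrates the benefit of reusing the new values.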
SOR and SSOR Iteration. In successive over-relaxation (SOR) we use Gauss-Seidel
iteration with relaxation, viz.,
xk+1 = ωx∗ + (1 − ω)xk ,
where x∗ is xk+1 from the Gauss-Seidel iteration. The relaxation parameter ω might
be chosen to be greater than unity, and it is in fact a good choice for Poisson-type
problems, see Ref. . We obtain
$$M = \omega^{-1}D + L, \qquad N = (\omega^{-1} - 1)D - U.$$
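A plain-Python sketch of one SOR sweep follows; this is a hypothetical illustration (not Diffpack's SOR class), and setting $\omega = 1$ recovers the Gauss-Seidel sweep:

```python
def sor_step(A, b, x, omega):
    """One SOR sweep: each Gauss-Seidel value is relaxed,
    x[i] <- omega * x_gs + (1 - omega) * x[i]."""
    N = len(b)
    x = list(x)
    for i in range(N):
        s = sum(A[i][j] * x[j] for j in range(N) if j != i)
        x_gs = (b[i] - s) / A[i][i]
        x[i] = omega * x_gs + (1.0 - omega) * x[i]
    return x

A = [[2.0, 1.0], [1.0, 2.0]]
b = [3.0, 3.0]
x = [0.0, 0.0]
for k in range(50):
    x = sor_step(A, b, x, 1.1)         # converges to [1.0, 1.0]
```

For symmetric positive definite matrices SOR converges for any $0 < \omega < 2$; the best $\omega$ is problem dependent, as noted above.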
In symmetric successive over-relaxation (SSOR) one combines SOR with a backwards sweep, i.e., one first uses the lower triangular part in a "sweep" and then the upper triangular part (i.e., performing a Gauss-Seidel iteration with $M = D + U$). The matrix $M$ is given by
$$M = \frac{1}{2-\omega}\Bigl(\frac{1}{\omega}D + L\Bigr)\Bigl(\frac{1}{\omega}D\Bigr)^{-1}\Bigl(\frac{1}{\omega}D + U\Bigr).$$
Reading from left to right (which is the order of application for $M^{-1}$) we see that first the lower triangular part is used and then the upper triangular part. Both are relaxed with $\omega$ as parameter.
5.2.3 Krylov Iteration Methods
An alternative approach to the presentation of the methods in this section can be found
in Ref. . Here we adopt the presentation in Ref. .
Krylov iteration methods are iterative methods originally proposed as direct methods for solving linear systems, but as such they are more expensive than Gaussian elimination. Nevertheless, it turns out that the methods often produce a good approximation to the exact solution in $k \ll N$ iterations. They do not fit into the framework of classical iterative methods, as we cannot devise a "constant" matrix $G$ as in Eqn. (5.3).
Again we seek a sequence of approximations $x^0, x^1, x^2$, etc. that hopefully converges to the real solution $x$. The idea is to devise a subspace $V_k \subset \mathbb{C}^N$ with a basis $q_j$, $j = 1, \ldots, k$, such that
$$x^k := x^{k-1} + \delta x^k, \qquad \delta x^k = \sum_{j=1}^{k} \alpha_j q_j,$$
i.e., $\delta x^k$ is sought in $V_k$. We must determine the coefficients $\alpha_j$ and an algorithm for creating subspaces $V_k$ at each iteration.
The subspaces employed are so-called Krylov subspaces. The iterative methods in this section are therefore referred to as Krylov iteration methods or Krylov subspace methods. The Krylov subspace is defined as
$$V_k(A, u) := \operatorname{span}\{u, Au, A^2u, \ldots, A^{k-1}u\}.$$
Note that Vk need not be of dimension k. For more information on Krylov subspaces,
see for example Ref. . Our chosen Krylov subspace is Vk = Vk (A, r 0 ), i.e., the span
of iterates of the initial residual with A. Recall that
r k := b − Axk = r k−1 − Aδxk .
To define the coefficients $\alpha_j$ at each iteration $k$, two choices are popular, and these may be viewed as a Galerkin method or a least-squares method, respectively. For the Galerkin method we require that the residual $r^k$ is orthogonal to $V_k$, i.e., that
$$(r^k, q_j) = 0, \qquad j = 1, \ldots, k.$$
This leads to
$$\sum_{j=1}^{k} \alpha_j (Aq_j, q_i) = (r^{k-1}, q_i), \qquad i = 1, \ldots, k.$$
This is clearly a linear system of (at most) dimension k. This of course implies that
at iteration $N$ we need to solve a system of the same size as $A$. On the other hand, if
dim VN = N we must have r N = 0 so that the exact solution is found. In this way
the method is a direct method, at least if r 0 and A are such that dim VN = N . The
Galerkin method is referred to as the conjugate gradient method.
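For a symmetric positive definite matrix, the Galerkin choice leads to the conjugate gradient method, which can be sketched in a textbook formulation as follows (plain Python with dense storage; this is an illustration, not Diffpack's KrylovIt code):

```python
def conjugate_gradient(A, b, x0, tol=1e-12, maxiter=100):
    """Minimal conjugate gradient sketch for a symmetric positive
    definite dense matrix A, given as a list of lists."""
    N = len(b)
    def matvec(v):
        return [sum(A[i][j] * v[j] for j in range(N)) for i in range(N)]
    def dot(u, v):
        return sum(ui * vi for ui, vi in zip(u, v))
    x = list(x0)
    r = [bi - Axi for bi, Axi in zip(b, matvec(x))]   # initial residual
    p = list(r)                                       # search direction
    rr = dot(r, r)
    for _ in range(maxiter):
        if rr < tol:
            break
        Ap = matvec(p)
        alpha = rr / dot(p, Ap)                       # step length
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * Api for ri, Api in zip(r, Ap)]
        rr_new = dot(r, r)
        p = [ri + (rr_new / rr) * pi for ri, pi in zip(r, p)]
        rr = rr_new
    return x

A = [[4.0, 1.0], [1.0, 3.0]]
b = [1.0, 2.0]
x = conjugate_gradient(A, b, [0.0, 0.0])   # exact solution is [1/11, 7/11]
```

The only contact with $A$ is through the matrix-vector product, which is exactly why the method pairs so well with sparse storage.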
For the least-squares method, we minimize the norm of the residual with respect to $\alpha_i$, i.e.,
$$\frac{\partial}{\partial \alpha_i}(r^k, r^k) = 0, \qquad i = 1, \ldots, k.$$
Assuming that $A$ is a real matrix and that $b$ is a real vector, we obtain
$$\frac{\partial}{\partial \alpha_i}(r^k, r^k) = 2\Bigl(\frac{\partial r^k}{\partial \alpha_i}, r^k\Bigr) = -2(Aq_i, r^k) = 0.$$
Hence,
$$\sum_{j=1}^{k} \alpha_j (Aq_j, Aq_i) = (r^{k-1}, Aq_i), \qquad i = 1, \ldots, k.$$
We will not go into the details of ﬁnding the new basis vector qk for Vk in the two
methods, see instead Ref. . Choosing some orthogonality condition and assuming
some properties of A will however simplify the iteration process and the algorithm for
qk . We state the types of systems for which the Krylov iteration methods described
here are eﬀective. The Galerkin method needs a symmetric (Hermitian for complex
A), positive or negative deﬁnite matrix, i.e., that all the eigenvalues of A are either
positive or negative. The least-squares method can be used for every kind of real matrix but
may require more temporary storage.
For sparse matrices each iteration needs O(n) operations, where n is the number of
non-zero entries. If the method converges in $k \ll N$ iterations, we have a method with
speed comparable to the tridiagonal Gaussian elimination. A complication is the fact
that we need a start vector x0 to set oﬀ the iteration. For time dependent problems,
choosing an initial vector x0 (and hence an initial residual r 0 ) is easy; we simply use
the solution at an earlier time step. The residual then starts out small and the method
may converge quickly. As for the classical iterative methods, the convergence rate of
the Krylov iteration methods are highly problem dependent.
In addition to the two mentioned above, there are also many other Krylov iteration methods implemented in Diﬀpack, such as SYMMLQ for non-deﬁnite symmetric
(Hermitian) systems (such as our discrete Hamiltonian from both FEM and FDM) and
BICGSTAB for non-symmetric systems. See the Diﬀpack documentation in Ref. ,
under the KrylovIt class which contains links to subclasses for the diﬀerent solvers.
5.3 Review of Methods for Eigenvalue Problems
In contrast to solvers for linear systems of equations, eigenvalue solvers are exclusively
of an iterative nature, at least for large systems. Iterative methods based on Krylov
subspaces are also very popular for large systems. Ref.  is a thorough exposition
on numerical methods for large eigenvalue problems and is highly recommended reading. For moderately-sized systems, Ref.  describes the widely-used methods of QR factorization, the Givens and Householder methods, etc., for Hermitian standard problems. In addition, the documentation to ARPACK++ in Ref.  is very informative
with respect to the Arnoldi method used in this thesis.
In this section we are concerned with the generalized eigenvalue problem, i.e.,
Ax = λBx,
where A and B are square matrices of dimension N , x is a vector and λ is a scalar.
The classical methods used for moderately-sized matrices are for standard eigenvalue
problems, i.e., for problems where B = I. We describe these ﬁrst and then move on
to the Krylov subspace methods such as the Arnoldi method of which the Lanczos
method is a special case.
If $B$ is symmetric and positive definite, it turns out that we may manipulate the eigenvalue problem to make it a standard eigenvalue problem. This will be addressed in section 6.3.
Algorithms for eigenvalue and eigenvector computations are often very complicated, typically due to intricate convergence properties, stability control and orthogonalization procedures. It is therefore usually not a good idea to implement the methods from scratch; instead one should use an existing library known to be stable and fast, and that suits one's
needs. On the other hand, a knowledge of the complexity and areas of application of
the algorithms is necessary in order to choose the right method and also to invoke it
properly. Hence, the below exposition is not meant to be complete. One should consult
the references if more details are needed.
5.3.1 Methods For Standard Hermitian Eigenvalue Problems
The methods described in this section work for Hermitian matrices (and hence real
symmetric matrices). They do not handle generalized problems.
The material in this section is taken from Ref. , and this reference is well worth
spending some time on.
Jacobi Transformations. The method described here is ﬁne for small matrices, but as
the dimension increases, so does the number of iterations. One should then instead use the Householder or Givens transformations described below.
Diagonalizing $A$ is equivalent to finding a similarity transformation such that
$$D = V^{-1}AV$$
is diagonal. Then column vector $i$ of $V$ is an eigenvector of $A$ with eigenvalue $D_{ii}$. If $A$ is Hermitian, i.e., $A^\dagger = A$, the eigenvectors are all orthogonal, which implies that $V^{-1} = V^\dagger$, i.e., $V$ is unitary.
The idea in the Jacobi method is to use a sequence of orthogonal (i.e., unitary) transformations $P^i$ to zero out off-diagonal elements in our matrix $A$, gradually reducing it to diagonal form. The diagonalizing operator is then given by
$$V = P^1 P^2 P^3 \cdots P^n.$$
The transformations $P^i$ are referred to as Jacobi rotations. A rotation is defined by $P^i_{pp} = P^i_{qq} = \cos\theta$, $P^i_{qp} = -P^i_{pq} = \sin\theta$ and $P^i_{jk} = \delta_{jk}$ otherwise. The angle $\theta$ is chosen such that $A'_{pq}$ and $A'_{qp}$ become zero under the similarity transform
$$A \longrightarrow A' = P^\dagger A P.$$
(The choice of p and q is of course dependent on i.) Unfortunately, performing a
succession V of Jacobi rotations destroys the previous "annihilations" of off-diagonal
elements, but when the rotations are chosen appropriately the sequence converges to D,
albeit in an infinite number of steps.
Each rotation requires O(N) operations for a dense matrix. The number of rotations
required to sweep through the whole matrix is about N(N − 1)/2, i.e., the number of
elements in the upper (or lower) triangular part. Typically one needs several sweeps
to achieve convergence to machine precision; typical matrices require 6 to 10 sweeps.
This totals about 20N³ operations if 10 sweeps are required, according to Ref. . This
is of course prohibitive for dense matrices of large order.
If the matrix is sparse we may obtain better convergence properties, but the rotations introduce non-zeroes at new positions in the matrix.
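As a concrete illustration, the cyclic sweep described above can be sketched in a few lines of Python. This is a dense NumPy toy, not a production implementation; the rotation angle follows from requiring the transformed A_pq to vanish:

```python
import numpy as np

def jacobi_eigen(A, sweeps=10):
    """Diagonalize a real symmetric matrix by cyclic Jacobi rotations.

    Returns (eigenvalues, V) with A V ≈ V diag(eigenvalues).
    """
    A = A.astype(float).copy()
    n = A.shape[0]
    V = np.eye(n)
    for _ in range(sweeps):              # one sweep visits every pair (p, q), p < q
        for p in range(n - 1):
            for q in range(p + 1, n):
                if abs(A[p, q]) < 1e-14:
                    continue
                # angle that annihilates A[p, q] under the similarity transform
                theta = 0.5 * np.arctan2(2 * A[p, q], A[q, q] - A[p, p])
                c, s = np.cos(theta), np.sin(theta)
                P = np.eye(n)
                P[p, p] = P[q, q] = c
                P[p, q], P[q, p] = s, -s
                A = P.T @ A @ P          # destroys earlier zeroes, but converges
                V = V @ P
    return np.diag(A), V
```

For a small matrix a handful of sweeps suffices, in line with the operation count above.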
Householder and Givens Reductions. Using the Jacobi method for reducing A to diagonal form takes an infinite number of steps. Reducing it to tridiagonal form can
however be done in a finite number of steps, and tridiagonal systems may efficiently be
diagonalized by iterative methods such as the QR algorithm described below.
The Givens method chooses a sequence of (modified) Jacobi rotations that in a finite
number of steps reduces any Hermitian matrix to tridiagonal form. The Householder
method is a little more efficient in achieving the same goal and is generally preferred.
See Ref.  for details on the Givens reduction and the Householder reduction.
The QR algorithm. The QR method eﬀectively diagonalizes a tridiagonal Hermitian
matrix. In simple problems the coeﬃcient matrix may be tridiagonal to begin with,
but usually one obtains a tridiagonal problem by means of the Househölder reduction.
The method is an iterative method that diagonalizes A through a series of unitary
transforms. As the name suggests it is based on the well-known QR factorization, i.e.,
that any N × M matrix X can be rewritten as
X = QR,
where Q is an N × N unitary matrix and R is N × M and upper triangular. Indeed,
it is simply a way of rewriting the standard Gram-Schmidt process for creating an
orthonormal basis (the columns of Q) for the column space of X. The matrix R
provides the change of coordinates. Equivalently, we may decompose X into X = QL,
where L is lower triangular. The corresponding factorization is called the QL decomposition.
It can be shown that the iteration based on the QR (and QL) decomposition preserves
both Hermiticity and tridiagonality of the matrix X.
Given our tridiagonal matrix A = A_0 we construct an iterative process as follows:
Decompose A_i = Q_i R_i and define
A_{i+1} := R_i Q_i = Q_i† A_i Q_i,
where the last equality follows from unitarity of Q_i. Hence, each iteration is a change
of coordinates to an orthonormal basis for the columns of A_i. As is easily seen, one
may choose either of the QL or QR decompositions to define the algorithm.
Under reasonably mild conditions on the eigenvalues of A this series can be shown
to converge to a diagonal matrix, and hence the diagonal contains the eigenvalues. To
improve convergence (and to weaken the assumptions on A) one may introduce shifts at
each iteration by instead factorizing A_i − σ_i I, where σ_i is a scalar.
If A is tridiagonal each iteration can be implemented in only O(N) operations,
yielding a very efficient iteration process, also for large systems. However, large systems
are not that often tridiagonal, and hence other methods are needed for such matrices, as
the Givens reduction may be too complicated.
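A bare-bones version of the unshifted iteration can be written with NumPy's built-in QR factorization (shifts and deflation, which a serious implementation needs for speed, are omitted in this sketch):

```python
import numpy as np

def qr_iterate(A, iters=500):
    """Unshifted QR iteration: A_{i+1} = R_i Q_i = Q_i^T A_i Q_i."""
    A = A.astype(float).copy()
    for _ in range(iters):
        Q, R = np.linalg.qr(A)      # factor A_i = Q_i R_i
        A = R @ Q                   # similarity transform; preserves symmetry
    return np.diag(A)               # approximate eigenvalues on the diagonal
```

On a symmetric tridiagonal matrix with distinct eigenvalue magnitudes, the off-diagonal entries decay geometrically, so a few hundred iterations reach machine precision for small examples.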
5.3.2 Iterative Methods for Large Sparse Systems
When the dimension of the matrices in our eigenvalue problem becomes very large the
Householder method described above fails badly, as it requires too many operations to
complete. Instead we must turn to pure iterative methods that work with A directly.
Common to these methods is the fact that each iteration uses (mostly) matrix-vector
products with A (or some simple modification of A) as coefficients. If each product
can be calculated fast, the iteration process is in principle fast as well.
Ref.  is a comprehensive exposition of the methods discussed in this section.
Ref.  also contains some details concerning the power method and the simple subspace iteration. The ARPACK user guide in Ref.  also describes the implicitly
restarted Arnoldi method in detail.
The Power Method and Inverse Power Method. The simplest method for finding an
eigenvalue of a standard eigenvalue problem is the power method. Given a start vector
x_0 one simply defines the iteration
x_k = (1/α^(k)) A x_{k−1},
where α^(k) = max_j |x_{k,j}|, i.e., the magnitude of the largest component of x_k. It is easy
to see that if there is only one eigenvalue of largest magnitude and if this eigenvalue
is not degenerate, the sequence will converge to the corresponding eigenvector. See
Ref.  section 1.1 for a proof. The method converges faster the larger the ratio of
the largest to the next-to-largest eigenvalue magnitude is.
The power iteration method is the basis for the inverse power iteration method.
Notice that the matrix A − σI has the same eigenvectors as A but that the spectrum
is shifted by −σ, i.e., if λ is an eigenvalue of A then λ − σ is an eigenvalue of A − σI.
Furthermore, if A − σI is non-singular, the matrix B = (A − σI)^{-1} has the eigenvalue
(λ − σ)^{-1}. Hence, if we want an eigenvalue of A around σ we may iterate B instead in
the power method, as the eigenvalues of B corresponding to eigenvalues of A close to σ
will dominate the process.
The eigenvectors of A, A − σI and (A − σI)^{-1} are all the same. The iteration with
B may be done by performing an LU decomposition of A − σI and successively solving
the linear system
(A − σI) x* = x_k,
and scaling x* with the appropriate factor to obtain x_{k+1}.
A process called deflation may be used in order to compute the next eigenvalue
once the one of largest modulus is found. One may manipulate the matrix such that
the largest eigenvalue is displaced to a lower value. Then a new iteration process will
find the eigenvalue of next highest magnitude.
The power method is appropriate if one seeks only a few eigenvalues and when
a lot of information on A is known. More sophisticated methods are used if several
eigenvalues and eigenvectors are needed, as in this thesis. Hence, similarly to Gaussian
elimination, it is rarely used except in the case where only a very few eigenvalues are
needed.
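Both iterations above fit in a few lines. The sketch below uses SciPy's LU factorization for the shift-invert step, so that the factorization is done once and reused in every iteration; the Rayleigh-quotient eigenvalue estimate at the end is a standard addition not spelled out in the text:

```python
import numpy as np
import scipy.linalg as sla

def power_method(A, iters=200):
    """Converges to the eigenvector of largest |eigenvalue|, if it is unique."""
    x = np.ones(A.shape[0])
    for _ in range(iters):
        x = A @ x
        x /= np.abs(x).max()               # the scaling factor alpha^(k)
    return (x @ A @ x) / (x @ x), x        # Rayleigh-quotient estimate, vector

def inverse_power_method(A, sigma, iters=50):
    """Shift-invert iteration: finds the eigenvalue of A closest to sigma."""
    lu, piv = sla.lu_factor(A - sigma * np.eye(A.shape[0]))   # factor once
    x = np.ones(A.shape[0])
    for _ in range(iters):
        x = sla.lu_solve((lu, piv), x)     # apply B = (A - sigma I)^{-1}
        x /= np.abs(x).max()
    return (x @ A @ x) / (x @ x), x
```

Note that the inverse iteration never forms (A − σI)^{-1} explicitly; it only solves linear systems with the stored LU factors.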
Simple Subspace Iteration. The power method and the inverse power method generate
one-dimensional information at each iteration, i.e., only one vector is generated at each
step. It would be nice if we instead chose m vectors, iterated these, and used the extra
information to obtain faster or more complete results. This is the idea of the simple
subspace iteration method.
We start out with an N × m matrix X_0 whose column vectors span an m-dimensional
subspace V_m ⊂ C^N. If we use the power method on each column vector by iterating
X_{k+1} = A · X_k · D,
where D is an m × m diagonal matrix with renormalization factors, i.e., D_ii = 1/α_i^(k),
each column vector of X_k will converge to the eigenvector with eigenvalue of highest
magnitude. In other words, the columns will gradually lose their linear independence.
The idea is then to re-establish linear independence once in a while. The column
vectors of X_k will then converge to several eigenvectors if this is done properly.
The re-orthogonalization is simply a Gram-Schmidt process, creating a set of orthonormal
vectors from a set of linearly independent vectors. Every once in a while one
computes the QR factorization of X_k and uses Q, i.e., an orthonormal basis for the
subspace spanned by the column vectors of X_k, as the starting point for further iterations.
For further details on this algorithm see for example Ref. .
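A compact sketch of the scheme, assuming a real symmetric A. The final Rayleigh–Ritz step (diagonalizing the projected m × m matrix to read off eigenvalue estimates) is a small extension beyond the plain column-convergence argument in the text:

```python
import numpy as np

def subspace_iteration(A, m, iters=300, reorth_every=5):
    """Iterate m vectors at once; re-orthogonalize periodically via QR."""
    rng = np.random.default_rng(1)
    X = rng.standard_normal((A.shape[0], m))
    for k in range(iters):
        X = A @ X                                # block power step
        if k % reorth_every == 0:
            X, _ = np.linalg.qr(X)               # re-establish independence
        else:
            X /= np.abs(X).max(axis=0)           # column renormalization D
    X, _ = np.linalg.qr(X)
    # Rayleigh-Ritz: eigenvalues of the projected m x m problem
    return np.sort(np.linalg.eigvalsh(X.T @ A @ X))
```

The periodic QR step is exactly the Gram-Schmidt re-orthogonalization described above; how often it is needed in practice depends on the eigenvalue gaps.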
Krylov Subspace Methods. The methods based on Krylov subspaces are definitely the
most complicated algorithms both to implement and use. They are however very fast,
converge quickly and are applicable to very general matrices. The algorithm used in
ARPACK is called the implicitly restarted Arnoldi method (IRAM) and is perhaps the
most powerful Krylov method around. All the problems in this text are Hermitian,
and traditionally most eigenvalue methods are developed for such systems. The reason
is twofold: most physics applications give rise to Hermitian problems, and Hermitian
problems are the easiest to solve. The IRAM finds eigenvalues and eigenvectors of real
and complex matrices, both symmetric and non-symmetric, and also handles generalized
problems easily. It is also used in the commercial computational software Matlab for
solving large and sparse eigensystems.
Similarly to the Krylov subspace methods in section 5.2.3 for solving systems of linear
equations, the eigenvalue methods employ Krylov subspaces V_k. Notice the similarity
between the definition of the Krylov subspace V_k(A, x) and the simple iteration method.
One basic idea in the Krylov subspace method is to exploit more of the information
generated by a simple iteration sequence. This is described in detail in Ref. . Some
of the information is used in the subspace iteration but there is much more to gain.
Actually, the IRAM is deﬁned in terms of a Galerkin condition and hence there is a
close relationship between the ﬁnite element methods, conjugate gradient-like methods
for linear systems of equations and the IRAM.
Diving further into the IRAM is outside the scope of this text. The implementation
in ARPACK is well-tested and robust, and we will use it without hesitation. The
important things to know are what ARPACK can compute, what information ARPACK
needs to perform the iteration process, how much each iteration costs and how much
storage space we must set aside.
As for what ARPACK can compute, it finds eigenvalues and eigenvectors in different
parts of the spectrum. It may find the largest eigenvalues, the smallest, those with
largest real or imaginary parts, those centered around a shift σ and so on. We are for the
most part interested in the lowest eigenvalues of the Hamiltonian, which is the default.
Fortunately, the only information needed to solve a standard problem is simple
matrix-vector products, used in subspace iterations inside the IRAM. Internally ARPACK
sets aside an amount of memory proportional to the space required by about Nk real
(or complex) numbers, i.e., an N × k matrix. The computational cost of each iteration
varies slightly, but it is proportional to the number k of eigenvalues (or eigenvectors)
sought and the number of operations needed for a matrix-vector product. The number
of iterations needed varies, however. In fact, seeking more eigenvalues does not necessarily
slow down the process, as it may take fewer iterations before convergence. This
is due to the extra information in the Krylov subspace generated from more search
vectors.
If we introduce shifting to the algorithm, i.e., seek eigenvalues around σ, linear
systems must be solved with coefficient matrix A − σI.
For a generalized problem we need to solve linear systems with B and A − σB as
coefficient matrices. In other respects the method is performed similarly to the standard
problem.
We will keep an empirical approach to these matters and use ARPACK as a black
box. In addition to Ref. , see the Simula report available from Ref. .
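SciPy's `eigsh` wraps ARPACK's Hermitian driver, so the black-box usage described here can be tried directly. The sketch below finds the four lowest eigenvalues of a sparse 1D finite-difference Laplacian (a stand-in for a Hamiltonian) in shift-invert mode about σ = 0:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh        # ARPACK (IRAM) wrapper

n = 500
h = 1.0 / (n + 1)
# tridiagonal finite-difference Laplacian on (0, 1) with zero boundary values
H = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format='csc') / h**2

# shift-invert about sigma = 0: ARPACK iterates with (H - sigma I)^{-1},
# so 'LM' (largest magnitude) returns the eigenvalues closest to sigma
vals, vecs = eigsh(H, k=4, sigma=0.0, which='LM')

# exact eigenvalues of the discrete operator, for comparison
exact = (4 / h**2) * np.sin(np.arange(1, 5) * np.pi * h / 2) ** 2
```

Only matrix-vector products (and, in shift-invert mode, sparse triangular solves from an internal factorization of H − σI) are performed, as described above.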
Chapter 6
Quantum Mechanical
Eigenvalue Problems
In this chapter we will return to the time independent Schrödinger equation, i.e., the
eigenvalue equation for the Hamiltonian:
HΦ = EΦ,     (6.1)
where E is a scalar. We will study discretized versions of the equation using ﬁnite
diﬀerence and ﬁnite element methods.
Assume that Φ_n and E_n are the (orthonormal) eigenvectors and eigenvalues of
H; we assume a discrete set of eigenvalues for simplicity, and we always order the
eigenvalues in increasing order, as is the convention in quantum mechanical formalism.
The time development of an initial state Ψ(0) = Σ_n c_n(0)Φ_n is given by
Ψ(t) = U(t, 0)Ψ(0) = Σ_n e^{−iE_n t/ℏ} c_n(0) Φ_n.
In other words, the coefficients evolve as
c_n(t) = e^{−iE_n t/ℏ} c_n(0).
Their change in time is only a phase change; the magnitude of c_n is conserved.
Introducing a spatial discretization leads to a natural discrete form of the eigenvalue
problem, viz.,
H_h u = Eu,
where H_h is an N × N Hermitian matrix and u is an N-dimensional complex vector.
The eigenvalues and eigenvectors of Hh will reﬂect the eigenvalues and eigenvectors of
the full Hamiltonian H. Intuitively, the better approximation Hh is to H, the better
the correspondence will be. We may also view Hh as the Hamiltonian for an altogether
diﬀerent physical system with ﬁnitely many degrees of freedom. The approximative
character of Hh to H makes us expect that the behavior of this new system reﬂects
the behavior of the original one; a line of thinking often applied in physics to capture
essential features of a very complicated model.
As for the time development of an initial state given as a linear combination of the
eigenvectors of H_h, it is of course given by the ODE (4.37). Indeed,
u(t) = Σ_{n=1}^{N} e^{−iE_n t/ℏ} c_n(0) u_n.
As discussed in chapter 5 we typically search for some k ≪ N eigenvalues and eigenvectors.
On the other hand, when starting with a sufficiently regular initial condition,
the coefficients c_n are negligible whenever n > k. In this way we may actually solve the
time dependent Schrödinger equation for stationary problems.
For time dependent problems the time dependent part is typically some perturbation
that is turned on and off. Before and after the perturbation we have a stationary
problem, and the spectrum of the initial and final state with respect to the
unperturbed Hamiltonian contains important information. For example, if we study
a hydrogen atom perturbed with a laser beam and start out in the ground state, the
spectrum after the perturbation shows us the probability of exciting the ground state
to higher states with the laser. Hence, the eigenvalue problem of the Hamiltonian is of
great relevance also for time dependent problems.
In this chapter we will focus on time independent problems, solving numerically
some analytically solvable problems. We will also treat a simple one-dimensional
problem, finding the numerical solution analytically.
6.1 Model Problems
To investigate the properties of the finite element method we will study some
model problems. These problems are analytically solvable and provide excellent means
for testing finite element discretizations with different grids; in particular we will employ
square grids and grids that approximate a circular region.
6.1.1 Particle-In-Box
First, we consider the particle-in-box problem, in which our particle is free to move
inside the domain whose boundary defines the shape of the box. The Hamiltonian is
H = −(ℏ²/2µ) ∇² + V(x),
where the potential V is zero inside the domain Ω and infinite everywhere else. This
leads to the boundary condition
Ψ(x) = 0,   ∀x ∈ ∂Ω,
and to the time independent Schrödinger equation
−∇²Ψ = λΨ,   λ = 2µE/ℏ²,   x ∈ Ω.
For some particular shapes of Ω this problem can be solved analytically. We will
consider two-dimensional geometries, namely a square and a circular domain Ω.
Square Domain. First consider the square domain Ω = [0, 1] × [0, 1]. We may use
separation of variables, i.e., Ψ(x, y) = u(x)v(y), which leads to the identical equations
−u''(x) = λ_x u(x),   u(0) = u(1) = 0,
and
−v''(y) = λ_y v(y),   v(0) = v(1) = 0.
The solution is readily obtained, viz.,
u_n(x) = C sin(nπx),
where the quantum number n = 1, 2, . . . labels the eigenfunctions and where C is an
irrelevant normalization constant. Hence,
Ψ_nm(x, y) = C² sin(nπx) sin(mπy)   and   λ_nm = π²(n² + m²).
Note that whenever n = m the eigenvalue is non-degenerate. Otherwise, we have
λ_nm = λ_mn, i.e., doubly degenerate eigenvalues. (There could also be some accidental
degeneracy due to solutions of the equation n_1² + m_1² = n_2² + m_2².)
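The degeneracy pattern is easy to enumerate. The short sketch below groups λ_nm by the integer n² + m² and exposes, for instance, the accidental triple degeneracy at n² + m² = 50 = 1² + 7² = 5² + 5²:

```python
from collections import defaultdict

# group lambda_nm = pi^2 (n^2 + m^2) by the integer n^2 + m^2
levels = defaultdict(list)
for n in range(1, 8):
    for m in range(1, 8):
        levels[n * n + m * m].append((n, m))

ordinary = levels[5]     # (1, 2) and (2, 1): the usual double degeneracy
accidental = levels[50]  # (1, 7), (5, 5), (7, 1): accidental degeneracy
```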
Circular Domain. Second, consider a circular domain, viz.,
Ω = {(x, y) : x² + y² ≤ 1}.
This problem is more complicated to solve and involves Bessel functions and Neumann
functions, see Ref. . We will sketch some of the main steps in the solution. For a
complete account, see Ref. .
Due to rotational symmetry it is wise to employ polar coordinates, i.e.,
x = r cos φ   and   y = r sin φ.
The Laplacian in polar coordinates is given by
∇² = ∂²/∂r² + (1/r) ∂/∂r + (1/r²) ∂²/∂φ²,
as is readily obtained by use of the chain rule for differentiation. We use separation
of variables and write
Ψ(r, φ) = e^{imφ} R(r).
This yields for R(r) the eigenvalue equation (called the radial equation1 )
[−d²/dr² − (1/r) d/dr + m²/r²] R(r) = λR(r).
As for the number m, it must be an integer for the wave function to be periodic and
differentiable. Furthermore, the energy λ must be positive (due to H being positive
definite) and we write λ = k². By means of changing variable to ρ = kr and multiplying
the radial equation by ρ² we obtain
ρ²R'' + ρR' + (ρ² − m²)R = 0,
which is called Bessel's equation. Its solutions are
R(ρ) = C_1 J_m(ρ) + C_2 N_m(ρ),
where J_m are called Bessel functions of the first kind, and where N_m are called Neumann
functions (Bessel functions of the second kind). As N_m(ρ) → −∞ when ρ → 0 they give
unphysical solutions, hence
R(ρ) = C J_m(ρ).
To impose the boundary condition, note that J_m has infinitely many zeroes k_ms. These
are typically tabulated, and Table 6.1 shows some of the numerical values for various
integral m. Note that J_{−m} = (−1)^m J_m, and this yields degeneracy of the eigenvalues
λ whenever |m| > 0. We obtain an eigenfunction R(r) whenever k = k_ms is a zero of J_m(ρ).
Thus, we label the energies λ_ms. The energies are given by
λ_ms = k_ms²
and are tabulated in Table 6.1. Fig. 6.1 shows the probability densities of the ground
state and the state m = 2, s = 3, i.e., |Ψ_{0,1}|² and |Ψ_{2,3}|².
1 In general, for rotationally symmetric potentials V(r) the operator on the left hand side has the
form −d²/dr² − (1/r) d/dr + m²/r² + V(r).
Figure 6.1: The two states m = 0, s = 1 (left) and m = 2, s = 3 (right) of the circular
particle-in-box.
        s=1            s=2            s=3            s=4
k_0s    2.404825558    5.520078110    8.653727913    11.79153444
k_1s    3.831705970    7.015586670    10.17346814    13.32369194
k_2s    5.135622302    8.417244140    11.61984117    14.79595178
k_3s    6.380161896    9.761023130    13.01520072    16.22346616
λ_0s    5.783185964    30.47126234    74.88700679    139.0402844
λ_1s    14.68197064    49.21845632    103.4994540    177.5207669
λ_2s    26.37461643    70.84999891    135.0207088    218.9201891
λ_3s    40.70646582    95.27757254    169.3954498    263.2008542

Table 6.1: Some zeroes k_ms of the Bessel functions J_m and the corresponding energies
λ_ms = k_ms².
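The tabulated zeros need not be typed in by hand; SciPy computes them (`scipy.special.jn_zeros` returns the first zeros of J_m), which also gives an independent check of Table 6.1:

```python
import numpy as np
from scipy.special import jn_zeros   # zeros of the Bessel function J_m

# first four zeros k_ms for m = 0, 1, 2, 3, and the energies lambda_ms = k_ms^2
ks = np.array([jn_zeros(m, 4) for m in range(4)])   # ks[m, s-1] = k_ms
lams = ks ** 2
```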
6.1.2 Harmonic Oscillator
The one-dimensional harmonic oscillator was solved in section 2.2. We now consider
the two-dimensional isotropic harmonic oscillator, i.e.,
HΨ = EΨ,
where
H = −(ℏ²/2µ) ∇² + (1/2) µω²(x² + y²).
Multiplying with 2µ/ℏ² and defining γ²/4 = µ²ω²/ℏ² and λ = 2µE/ℏ² yields
[−∇² + (γ²/4)(x² + y²)] Ψ(x, y) = λΨ(x, y).
We use separation of variables and write Ψ(x, y) = u(x)v(y). The corresponding equations read
−u''(x) + (γ²/4) x² u(x) = λ_x u(x)
and
−v''(y) + (γ²/4) y² v(y) = λ_y v(y).
These equations are simply two one-dimensional harmonic oscillators.
From section 2.2 we have E_n = ℏω(n + 1/2) for the eigenvalues of a one-dimensional
oscillator. The total energy of the two-dimensional oscillator is thus
λ_nm = γ(n + m + 1),   n, m = 0, 1, 2, . . . ,
where we have used λ = 2µE/ℏ². Hence, the eigenvalues are precisely γν where ν is
any positive integer. The multiplicity of the eigenvalue γν is easily seen to be ν.
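The multiplicity claim is easy to verify by counting: for each ν = n + m + 1 there are exactly ν pairs (n, m). A tiny enumeration (with a hypothetical cut-off n, m < 20, so only levels with ν ≤ 20 are counted completely):

```python
from collections import Counter

# count the pairs (n, m) contributing to the level nu = n + m + 1
counts = Counter(n + m + 1 for n in range(20) for m in range(20))
```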
Note that due to rotational symmetry we could write Ψ(r, φ) = eimφ R(r) as with
the circular particle-in-box. However, the radial equation for the two-dimensional
harmonic oscillator is more diﬃcult to solve.
When solving the harmonic oscillator numerically, we will choose a circular and
finite domain. This will in effect set the potential to infinity at a finite distance from
the origin, a source of error in our eigenvalues and eigenfunctions. However, if we choose
the radius of the disk sufficiently large we should be able to reproduce many of the
lowest-lying energies accurately.
6.1.3 Two-Dimensional Hydrogen Atom
The two-dimensional hydrogen atom was discussed in section 3.2.2. We did not however
discuss the time independent Schrödinger equation for this problem.
In polar coordinates (r, φ) and in the symmetric gauge the Hamiltonian is given by
H = −∇² − iγ ∂/∂φ − 2/r + (γ²/4) r²,     (6.2)
where the units of length, energy and the magnetic field are given by Table 3.1.
Weak Field Limit. In Ref.  the limits in which the field γ is weak and strong,
respectively, are treated perturbatively. The limit γ = 0 yields a pure Coulombic
Hamiltonian, viz.,
H_w = −∇² − 2/r.
Similarly to the harmonic oscillator and the particle-in-box, we employ separation of
variables, i.e., Ψ(r, φ) = e^{imφ}R(r). The radial equation then reads
[−d²/dr² − (1/r) d/dr + m²/r² − 2/r] R_n(r) = ε_nm R_n(r).     (6.3)
The energies are given in Ref.  as
ε^w_nm = −1/(n − 1/2)²,   m = −n + 1, −n + 2, . . . , n − 2, n − 1.
Hence, we have a (2n − 1)-fold degeneracy of the eigenvalues. We omit the derivations
of the energies and the radial functions as they are lengthy and not particularly
interesting.
The ground state energy in normal units is
E_0 ≈ 13.603 eV · ε^w_{1,0} = −54.41 eV.
Thus, the electron in the two-dimensional hydrogen atom is more strongly bound than in
the three-dimensional hydrogen atom, in which E_0 ≈ −13.603 eV.
Strong Field Limit. If we consider the strong field limit the Coulomb term is considered
as vanishing,2 viz.,
H_s = −∇² − iγ ∂/∂φ + (γ²/4) r².
In other words, the electron is considered free except for the influence of the magnetic
field. The energies are given by
ε^s_NM = 2γ(N + 1/2).
2 Although we have a singularity in the potential at r = 0, the expectation value ⟨1/r⟩ is dominated
by γ²r² for large γ. This justifies the perturbative treatment.
The azimuthal quantum number m is given by the quantum numbers M and N through
m = N − M.3 The number N labels the so-called Landau levels. We see that the
Landau levels are degenerate, and in fact they are infinitely degenerate.
In particular the ground state energy (i.e., the energy of the lowest Landau level) is
ε^s_{0M} = γ, and the corresponding wave function is a harmonic oscillator ground state
multiplied with an arbitrary function of x + iy, i.e.,
Ψ_0 = f(x + iy) · e^{−γr²/4},
where f is analytic. A basis for the ground state eigenspace is then
ψ_{0,m} = (x + iy)^m e^{−γr²/4} with m = 0, 1, 2, . . .. This is the solution found in the symmetric gauge, see
Ref. .
We may change the Hamiltonian by performing a gauge transformation of the
vector potential. An example of a different but physically equivalent gauge is
A = γ(−y, 0, 0),
whose corresponding magnetic field is easily seen to be identical to that of A_symm, i.e.,
B = γ k̂. The solution to a gauge transformed problem is equivalent to the original
solution and given by an explicit unitary transform, see section 1.6. The magnetic
system with the alternative gauge is solved in Ref. .
The infinite degeneracy of the ground state is easy to fathom intuitively. There is
no sense of localization in the problem definition; the electron is free and the magnetic
field is constant throughout space. If e^{−x²/2} were a ground state, so must e^{−(x−x_0)²/2} be
(although these states are not orthogonal). Every choice of x_0 gives rise to a different
ground state, and the set of these states is easily seen to span an infinite dimensional
space.
If the domain is on the other hand a disk with radius r_0 (such as in our simulations
below) the degeneracy is instead
g = r_0²/2.
See Refs. [7, 48].
In the symmetric gauge, the states ψ_{0,m} are easy to interpret with the
correspondence principle. One can show that for large m the probability density
|ψ_{0,m}|² is concentrated in a circle with radius r_m = √(2m/γ). Classically an electron in
a magnetic field moves in circular paths with a radius that increases with the kinetic
energy. The kinetic energy of the quantum particle also increases with m. Furthermore,
the classical path's radius is given by
r = v²/a = 2cT/(qvB) = √(2µT) c/(qB),
which, if we assume T = ℏ²m²/r² quantum mechanically, becomes precisely
r = √(2m/γ)
in our case.
6.2 The Finite Element Formulation
In this section we will describe how we arrive at the discretized eigenvalue problem for
the discrete Hamiltonian H_h with the finite element method.
3 The quantum number N is not to be confused with the dimension of the matrices in the numerical
problem. It should be clear from the context which N we refer to.
Assume that we are given a set of m ﬁnite element basis functions Ni (x ). Hence,
the subspace V ⊂ H has dimension m. These functions must be deﬁned appropriately
when given a grid G of m nodes representing an (approximate) subdivision of our
domain Ω. As usual with the ﬁnite element method we assume that the exact solution
Ψ is well approximated with an element in the ﬁnite element space V , viz.,
Ψ(x) ≈ Ψ̂(x) = Σ_{j=1}^{m} u_j N_j(x).
Inserting Ψ̂(x) for Ψ in the time independent Schrödinger equation, multiplying with
N_i(x) and integrating over the domain Ω yields
Σ_j u_j ∫_Ω N_i(x)[H N_j(x)] = E Σ_j u_j ∫_Ω N_i(x) N_j(x).     (6.4)
If we write u for the vector whose jth component is uj , we arrive at the generalized
eigenvalue equation
Au = λMu,     (6.5)
where M is the mass matrix and A is the element matrix obtained by integrating by
parts any ∇2 term in H. As an example, consider the two-dimensional Hydrogen atom
Hamiltonian, viz.,
2 γ2
∂
− + r2
∂φ r
4
∂
∂
γ2
2
2
+x
= −∇ − iγ −y
+ (x2 + y 2 ).
−
∂x
∂y
4
x2 + y 2
H = −∇2 − iγ
(6.6)
The first term becomes the stiffness matrix K, and the second term yields a matrix L
whose elements are
L_ij = −iγ ∫_Ω N_i(x) (−y ∂/∂x + x ∂/∂y) N_j(x).
The Coulomb term yields a matrix C given by
C_ij = −∫_Ω N_i(x) [2/√(x² + y²)] N_j(x),
and the last harmonic oscillator term yields a matrix O, viz.,
O_ij = (γ²/4) ∫_Ω N_i(x)(x² + y²) N_j(x).
The total element matrix is then A = K + L + C + O.
Note that no boundary conditions have been imposed as of yet. Hence A contains
unknown boundary terms from the integration by parts. These terms will however be
eliminated.
How do we impose the boundary conditions in an eigenvalue problem such as this?
The homogeneous boundary conditions state that u_k = 0 whenever x[k] ∈ G is a
boundary node of our grid. If we diagonalize the pair (A, M) as it stands in Eqn. (6.5)
we have no guarantee whatsoever that a boundary component u_k of an eigenvector
is zero. Furthermore, doing what is usually done in finite element applications, i.e.,
letting A_kk = 1 and A_kj = 0 whenever j ≠ k, makes no sense in this case, as we do
not deal with equations of the form Au = b. (In addition to modifying A we would
set b_k equal to the boundary condition; in our case zero.)
125
Quantum Mechanical Eigenvalue Problems
Imposing homogeneous conditions in an eigenvalue problem fortunately turns out
to be very simple in principle. It amounts to erasing row k and column k from both
matrices A and M, and at the same time erasing u_k from u. In other words, we reduce the
dimension of our problem by 1 for each boundary node and remove every reference
to u_k in the equations. As matrices may be represented in many different ways in
the computer, e.g., dense matrices (Mat in Diffpack), banded matrices (MatBand) or
sparse matrices (MatSparse), implementing this principle must be done for every kind
of matrix used in the finite element context; a task that involves non-trivial algorithms
and speed considerations.
Let us first state the boundary condition imposing process in a more abstract way
to make it clearer. Let P be the projection matrix onto the subspace W of C^m that
corresponds to the interior of Ω, i.e.,
P : C^m −→ C^{m−n},
where n is the number of boundary nodes in G, and P is hence an (m − n) × m matrix.
Indeed, it is the identity matrix I_m with row number k removed for every boundary
node x[k]. The matrix P then maps u ∈ C^m to a vector v ∈ C^{m−n} with the boundary
node values removed.
The above reduction of dimensionality for all boundary nodes can then be written
as
Ãv = λM̃v,     (6.7)
where v = Pu and where X̃ = PXP^T for any m × m matrix X. Note that X̃ equals
X with the rows and columns corresponding to boundary nodes removed.
If x[k] is a boundary node, then u_k does not appear anywhere in the equations. On
the other hand, if u_k was a priori known to be zero, then the equations would look
like Eqn. (6.7). Notice that the mentioned boundary terms in the matrix A resided
precisely in row and column k, which are now removed from the problem.
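In dense NumPy form, the operation X̃ = PXP^T is just row and column deletion. A toy sketch, with 1D three-point matrices standing in for real FEM matrices, and `scipy.linalg.eigh` solving the generalized problem (6.7) directly:

```python
import numpy as np
import scipy.linalg as sla

def impose_dirichlet(A, M, boundary):
    """Erase the rows and columns of boundary nodes: the map X -> P X P^T."""
    keep = np.setdiff1d(np.arange(A.shape[0]), boundary)
    return A[np.ix_(keep, keep)], M[np.ix_(keep, keep)]

n = 6
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # toy "element matrix"
M = np.eye(n)                                            # toy "mass matrix"
At, Mt = impose_dirichlet(A, M, boundary=[0, n - 1])     # first/last node on boundary
vals = sla.eigh(At, Mt, eigvals_only=True)               # generalized problem (6.7)
```

For sparse or banded storage the same deletion must of course be implemented per format, as discussed above.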
As for the implementation of the matrices A and M, i.e., the integrals of Eqn. (6.4),
numerical integration must be used in general. The mass matrix M is evaluated exactly
if we use Gaussian quadrature of sufficiently high order.4 In fact, Diffpack provides class
methods that create the mass matrix automatically. As in the above example, the
element matrix A may contain more complicated terms. Each potential term must be
considered individually. For example, both the harmonic oscillator term O and the
angular momentum term L contain a simple polynomial expression that is very well
handled by Gaussian integration. On the other hand, the Coulomb term C contains a
1/r factor which may or may not be well integrated with Gaussian integration, as it has
a singularity. In all our applications we will use default Gaussian integration.
6.3 Reformulation of the Generalized Problem
Solving a generalized eigenvalue problem is much more diﬃcult than solving a standard
problem. The numerical methods presented in chapter 5 were mostly only applicable
to standard problems with Hermitian matrices.
We observe that the mass matrix M is symmetric and positive definite. Let V_h
be our finite element space and let u_h ∈ V_h be arbitrary and nonzero. Let N_i, i =
1, 2, . . . , m be our finite element basis. Hence, M_ij := (N_i, N_j) and M is a real,
symmetric matrix (if we assume N_i to be real). For the norm of u_h we obtain
0 < (u_h, u_h) = (Σ_i u_i N_i, Σ_j u_j N_j) = Σ_ij u_i* u_j (N_i, N_j) = u†Mu,
4 Gaussian integration evaluates the integral of a polynomial of a certain degree exactly. See Ref. 
for a description of common Gaussian integration rules that are implemented in Diffpack.
and hence M is positive definite. Here u is the C^N vector whose components are u_j.5
Any positive definite matrix is invertible. Hence, our generalized eigenvalue problem
is equivalent to
M^{-1}Hx = λx.
Alas, the matrix M^{-1}H is not Hermitian, rendering the algorithms from chapter 5
useless. Luckily, the Cholesky decomposition comes to our rescue.
It is well-known that every symmetric and positive definite (real) matrix M can be
rewritten as
M = L · L^T,
where L is a lower triangular matrix. This factorization is called the Cholesky
decomposition.
We now state a theorem.
Theorem 18
Given the generalized eigenvalue problem
Hu = λMu,
where H is Hermitian and M is symmetric, real and positive definite. Let (u, λ) be
an eigenpair and let M = LL^T be the Cholesky decomposition of M. Then the matrix
C = L^{-1}H(L^{-1})^T is Hermitian and has the eigenpair (L^T u, λ).
Proof: Hermiticity is easily seen to hold, viz.,
C† = (L^{-1}H(L^{-1})^T)† = [(L^{-1})^T]† H† (L^{-1})† = L^{-1}H(L^{-1})^T.
Multiplying Hu = λMu with L^{-1} from the left yields
L^{-1}Hu = λL^T u,
and writing v = L^T u gives
L^{-1}H(L^{-1})^T v = λv,
and we are finished.
From this theorem it is easy to see that solving the eigenvalue problem
Cv = λv
gives the correct eigenvalues. Furthermore, obtaining u = (LT )−1 v is eﬃciently done
with backsubstitution. One potential problem is the fact that C is a dense matrix, and hence matrix-vector products, if they can be computed at all, are an O(N²) process.
However, to compute w = Cv = [L⁻¹H(Lᵀ)⁻¹]v we may follow these steps:
1. Solve the equation Lᵀx = v by backsubstitution; an O(N²) process in the worst case, but much faster if Lᵀ is banded.
2. Compute y = Hx by matrix multiplication.
3. Solve Lw = y by forward substitution.
It is easy to see that if H is a sparse matrix and M is sparse and stored in a banded
format, this process is eﬃcient.
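The reduction and the three-step product can be sketched numerically; the following is a minimal NumPy illustration (the matrix size and the random test matrices are arbitrary choices, not from the thesis):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
# a random Hermitian H and a random symmetric positive definite M
B = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
H = (B + B.conj().T) / 2
S = rng.standard_normal((n, n))
M = S @ S.T + n * np.eye(n)

L = np.linalg.cholesky(M)            # M = L L^T
Li = np.linalg.inv(L)
C = Li @ H @ Li.T                    # Hermitian standard-problem matrix
lam, V = np.linalg.eigh(C)
U = np.linalg.solve(L.T, V)          # u = (L^T)^{-1} v by backsubstitution
for j in range(n):                   # each pair solves H u = lambda M u
    assert np.allclose(H @ U[:, j], lam[j] * (M @ U[:, j]))

# the product w = C v without ever forming the dense C:
v = rng.standard_normal(n)
x = np.linalg.solve(L.T, v)          # 1. backsubstitution, L^T x = v
y = H @ x                            # 2. (sparse) matrix-vector product
w = np.linalg.solve(L, y)            # 3. forward substitution, L w = y
assert np.allclose(w, C @ v)
```

In a real sparse implementation one would of course use banded triangular solves rather than `np.linalg.inv` and dense solves.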
We mention that the matrix L of the Cholesky decomposition of a banded matrix is
again banded with the same bandwidth. Hence, a general sparse structure of M should
⁵ I apologize for the ambiguous notation here.
not be chosen, but rather a banded structure, allowing the Cholesky decomposition to be made in place.
Unfortunately, there was no time to implement this strategy in the HydroEigen solver for the two-dimensional hydrogen atom. Even though it adds no flexibility beyond the ability to solve sparse generalized eigenvalue problems, it would yield a considerable speed-up in the simulations. Implementing the Cholesky-factorized mass matrix for finite element eigenvalue problems is thus an obvious future project.
6.4 An Analysis of Particle-In-Box in One Dimension
As an introduction to the work with discretized eigenvalue problems we will analyze
the particle-in-box in one dimension, using both ﬁnite diﬀerence methods and ﬁnite
element methods. The exact eigenfunctions and eigenvalues can be obtained, and will
serve as an indicator of the behavior of both methods.
Recall the eigenvalue equation for a particle in a box, viz.,
−u″(x) = λu(x),    x ∈ [0, 1],
with the boundary conditions
u(0) = u(1) = 0.
This problem has the eigenfunctions and corresponding eigenvalues given by

uk (x) = sin(kπx),    λk = (kπ)²,
as stated in section 6.1.1.
For the discretized version, we use a uniformly spaced grid with N + 1 points, hence
the grid spacing is h = N −1 and the grid points are given by xj = hj, j = 0, 1, . . . , N .
If we use the standard diﬀerence scheme
[−δx δx v(x) = λv(x)]j ,
we obtain a ﬁnite-dimensional eigenvalue problem, viz.,
Av = λv,
(6.8)
where v is the vector of components of the discrete solution, i.e., vj = v(xj ), j = 1, . . . , N − 1.
(The components v0 and vN are identically zero due to the boundary conditions.) The
matrix A is an (N − 1) × (N − 1) symmetric and positive deﬁnite matrix.6 Hence, we
expect N − 1 positive and real eigenvalues for this problem.
Let us assume that the discrete eigenfunctions are given as
vk (x) = sin(kπx),
i.e., we guess that the eigenfunctions have the same form as in the continuous problem.
They clearly fulﬁll the boundary conditions and the components are given by
vj = sin(kπhj).
(We omit the subscript k to economize.) We will not consider the matrix A explicitly,
but instead write out the diﬀerence equations, viz.,
−(1/h²)(vj−1 − 2vj + vj+1 ) = λvj ,    j = 1, . . . , N − 1.

⁶ See for example Ref. .
Using the trigonometric identity
sin(x + y) + sin(x − y) = 2 cos y sin x,
(6.9)
we obtain
vj−1 + vj+1 = 2 cos(kπh) sin(kπhj) = 2 cos(kπh)vj ,
and hence,
−δx δx vj = (1/h²)(2 − 2 cos(kπh)) vj .

Using 1 − cos(x) = 2 sin²(x/2), we arrive at

−δx δx v(xj ) = λk v(xj ),

with

λk = (4/h²) sin²(kπh/2),
and indeed the assumed form of v(x) is valid. The integer k now numbers the eigenvalues, but there are not infinitely many, as A has at most N − 1 eigenvalues. It is easy to see that vN ≡ 0, so it is not a proper eigenfunction, and furthermore vN+k yields no further eigenvectors, due to the periodicity of vk (x). But for k = 1, . . . , N − 1 we have distinct eigenvalues.
In summary, the eigenfunctions are identical to the N −1 ﬁrst ones of the continuous
problem, but the eigenvalues are not the same. We may estimate the deviation by using
a Taylor expansion for sin2 (x), viz.,
sin²(x) = x² − (1/3) x⁴ + O(x⁶).

This yields

λk = k²π² [1 − (1/12) π²k²h² + O(k⁴h⁴)].
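This closed form is easy to verify numerically; a small sketch (dense diagonalization with NumPy, the grid size N is an arbitrary choice):

```python
import numpy as np

N = 50                                  # number of grid intervals
h = 1.0 / N
m = N - 1
# tridiagonal matrix of the difference operator with u(0) = u(1) = 0
A = (2 * np.eye(m) - np.eye(m, k=1) - np.eye(m, k=-1)) / h**2
numeric = np.sort(np.linalg.eigvalsh(A))
k = np.arange(1, N)
closed_form = 4 / h**2 * np.sin(k * np.pi * h / 2) ** 2
assert np.allclose(numeric, closed_form)   # matches (4/h^2) sin^2(k*pi*h/2)
```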
Now we turn to a simple ﬁnite element approach, employing linear and uniformly
sized elements with the node points deﬁned by xj as in the ﬁnite diﬀerence case. We
will obtain a generalized eigenvalue problem reading
Kv = λM v,
(6.10)
where K and M are the stiﬀness matrix and mass matrix, respectively. These are
(N − 1) × (N − 1) matrices given by
Ki,i = 2/h,    Ki,i±1 = −1/h,    Kij = 0 otherwise,

and

Mi,i = 4h/6,    Mi,i±1 = h/6,    Mij = 0 otherwise.
Note that K = hA, i.e., the stiﬀness matrix is essentially the ﬁnite diﬀerence operator
used in the above analysis. In this case we also expect N − 1 real eigenvalues. If we
try the same discrete eigenfunctions vk (x) as above, the left hand side becomes
[Kv]j = (4/h) sin²(kπh/2) vj .
If we write out the jth component of M v, we get
[M v]j = (h/6) (vj−1 + 4vj + vj+1 ) .
Figure 6.2: Plot of the eigenvalues (normalized as eigenvalues/N²π²) versus πkh/2 for the FDM, exact and FEM calculations.
We use the trigonometric identity (6.9) and obtain
[M v]j = (h/3) (2 + cos kπh) vj .

With the identity 1 + cos(x) = 2 cos²(x/2) we obtain

[M v]j = (h/3) (1 + 2 cos²(kπh/2)) vj .
Hence, vk (x) is an eigenfunction of M also. We may convert the generalized eigenvalue
problem to a standard one by writing
M −1 Kv = λv,
and hence we obtain
λk = (12/h²) sin²(kπh/2) / (1 + 2 cos²(kπh/2)).
Using the Taylor expansion for λ, we get
λk = k²π² [1 + (1/12) π²k²h² + O(k⁴h⁴)].
Figure 6.2 shows a plot of the normalized eigenvalues for the ﬁnite diﬀerence, ﬁnite
element and exact calculations, respectively. Clearly, the ﬁnite diﬀerence eigenvalues
underestimate the exact ones, while the ﬁnite element eigenvalues overestimate the
eigenvalues. Both numerical methods yield good approximations for the lowest eigenvalues, but the ﬁnite element calculations are clearly qualitatively more correct than
the ﬁnite diﬀerence calculations for the higher eigenvalues.
Several questions and interesting topics arise already at this point. The finite difference method is equivalent to lumping the mass matrix, i.e., replacing M by the diagonal matrix M̃ whose element M̃i,i is the sum of the elements in row i of M.
Seemingly, the ﬁnite element method yielded qualitatively better results, but is this
true in general? We shall see that the answer is not aﬃrmative: Lumping the mass
matrix may improve the convergence of the eigenvalues for some systems.
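In code, the lumping operation is essentially a one-line row sum. A sketch in one dimension with linear elements, where the interior row sum h/6 + 4h/6 + h/6 = h makes the lumped mass matrix h times the identity, so that the finite difference spectrum of section 6.4 is recovered exactly (grid size is an arbitrary choice):

```python
import numpy as np

N = 40
h = 1.0 / N
m = N - 1
K = (2 * np.eye(m) - np.eye(m, k=1) - np.eye(m, k=-1)) / h   # stiffness
M_lumped = h * np.eye(m)            # row-sum lumping of the mass matrix
A = np.linalg.solve(M_lumped, K)    # equals K/h, the finite difference matrix
lam = np.sort(np.linalg.eigvalsh(A))
k = np.arange(1, N)
fd = 4 / h**2 * np.sin(k * np.pi * h / 2) ** 2
assert np.allclose(lam, fd)         # lumped FEM = FDM, as in section 6.4
```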
Actually, even if lumping the mass matrix turns out to be fortunate, one must not immediately conclude that the finite element method is worthless in the case at hand, as for example the geometry may play a significant role in the precision. A combination of lumping the mass matrix with higher-order elements and exotic geometries may turn out to be very powerful.
Speaking of which: How fast do the eigenvalues of the discrete problem converge
to the exact values? In the above application the convergence was O(h2 ) in both
ﬁnite diﬀerence and ﬁnite element approximations, and in fact the leading terms in
the error were identical in the two cases. If we use higher-order elements, will the
convergence be correspondingly higher? This could clearly give the ﬁnite element
method an advantage.
We will address these questions when doing numerical experiments below.
Actually, the solution method used in this section may be extended to two (or more)
dimensions by using separation of variables analogously to the continuous problem. If
we write

U^nm_kl = v^n (xk ) v^m (yl ),

where the superscripts indicate the quantum numbers in the different directions, we obtain

(−δx δx − δy δy ) U^nm_kl = (λn + λm ) U^nm_kl ,

where

λnm = (4/h²) [ sin²(nπh/2) + sin²(mπh/2) ],

are the numerical eigenvalues of the two-dimensional finite difference formulation. As for
the mass matrix, one may ﬁnd similar results, making it possible to obtain the exact
eigenvalues also in the ﬁnite element case with linear elements.
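The separation-of-variables result can be verified by building the two-dimensional operator as a Kronecker sum of one-dimensional ones (a NumPy sketch; N is an arbitrary choice):

```python
import numpy as np

N = 16
h = 1.0 / N
m = N - 1
A1 = (2 * np.eye(m) - np.eye(m, k=1) - np.eye(m, k=-1)) / h**2  # 1D operator
I = np.eye(m)
A2 = np.kron(A1, I) + np.kron(I, A1)        # two-dimensional five-point operator
lam2 = np.sort(np.linalg.eigvalsh(A2))
k = np.arange(1, N)
lam1 = 4 / h**2 * np.sin(k * np.pi * h / 2) ** 2
pair_sums = np.sort((lam1[:, None] + lam1[None, :]).ravel())
assert np.allclose(lam2, pair_sums)         # eigenvalues are lambda_n + lambda_m
```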
Nevertheless, we will use the computer program described in section 6.5 to numerically diagonalize our particle-in-box, in order to test an implementation that may also be used with more complicated problems.
6.5 The Implementation
When programming with Diﬀpack one usually implements a class derived from one of
the preexisting solver classes. In this case we derive class HydroEigen from class
FEM, the base class for ﬁnite element solvers. HydroEigen is instantiated in the main()
function and the simulation is then run.
The class definition reimplements various virtual functions of class FEM which perform certain standardized tasks, such as initialization, evaluation of integrals and so on. In this way the programmer of a finite element solver does not need to bother with finite element algorithms or the sequence of functions from class FEM that are called, but can instead focus on the formulation of the problem.
Listings for class HydroEigen can be found in appendix B. The source code is
available from Ref.  as well. The main() function is not listed in conjunction with the
eigenvalue solver. When we turn to time dependent implementations in chapter 7, we
derive class TimeSolver from class HydroEigen to make a more general program.
The main() function listed instantiates class TimeSolver instead.
The eigenvalue problem implemented in our class is the two-dimensional hydrogen atom with an applied magnetic field. Furthermore, terms in the Hamiltonian
may be turned on and oﬀ at will so that other problems such as the free particle or the
harmonic oscillator may be solved as well.
We will inevitably touch upon features of Diﬀpack that are beyond the scope of this
text to describe further. Refs. [34, 45] contain a thorough description of most features
used. However, the purpose of each feature should be clear from the context. The code
is well commented and is recommended for additional reading.
The program was compiled in the Debian Linux operating system with GNU gcc
2.95.4.
We mention some important command line parameters for the application here.
These are standard for Diﬀpack programs. The name of the executable is assumed to be
SchroedSolve.x. (The makefiles are not listed in the appendix, but they can be obtained from Ref. .) If one passes --GUI as parameter, the menu system will be displayed
in a graphical user interface. Otherwise, a command based interface in the console is
used. If --casename cname is passed, the simulation casename will be set to cname .
The casename is the base name of ﬁles produced by the simulator such as the log ﬁle
(cname.log), the simres database holding fields, grids, curves etc. (.cname.simres)
and so on. The casename feature is particularly useful when performing a sequence of
simulations with varying parameters, e.g., the magnetic ﬁeld, the grid size and initial
condition. Finally, we have the --verbose 1 command line parameter that displays
various runtime statistics, such as the time spent on solving linear systems of equations.
During execution Diﬀpack also provides a lot of error and warning messages in case
of unforeseen events, such as memory overﬂow, erroneous grid parameters and so on.
6.5.1 Class methods of class HydroEigen
Here we describe the most important class methods. Hopefully this will draw a clear
picture of how the class works. But ﬁrst we list some important class members that
have a central role:
Handle(GridFE) grid; This is a handle (i.e., a smart pointer) to a ﬁnite element grid.
Handle(DegFreeFE) dof; This is a handle to a Diﬀpack object that holds the mapping
q(e, r) from local node numbers and element numbers to global node numbers. It
determines the correspondence between ﬁeld components Uj and the nodes x [k]
in the grid.
Handle(LinEqAdmFE) lineq; This object is a wrapper around linear systems of equations and solver methods. The solvers are also implemented as classes and are
connected to this object via the menu system described below.
Handle(Matrix(NUMT)) M; This is a handle to the mass matrix. Another handle K
points to the element matrix of the Hamiltonian.
The following description of the member functions is not exhaustive. It only covers the most important functions and lines of thinking. As always in large programming projects, a lot of small and big problems need to be solved, and to describe
them all in this text would be both lengthy and unnecessary. The code is however well
documented, such that algorithms for such things as removing a row and a column in
a sparse matrix should be easy to read and understand.
define(). An important feature of Diffpack is the menu system. Every parameter of a solver should be accessible via the menu system, and with proper usage, experimenting with for example solver methods for linear equations becomes very easy. The menu system may be run in command mode, in which different items are set with commands from standard input like ‘set gamma=1.0’. A list of available commands is displayed with the ‘help’ command.
If a Diﬀpack class comes with a (virtual) define() method, it usually adds some
knobs and handles to the menu system. The define() method of class HydroEigen
adds parameters that may be used to adjust the deﬁnition of the problem, desired
matrix encoding scheme and so on.
The entries deﬁned are as follows:
– gridfile File to read grid from or a preprocessor command that is used to create
the grid.
– nev Number of eigenvalues to search for.
– nucleus Boolean (with value true or false) that turns on or oﬀ the Coulomb
attraction in the Hamiltonian.
– epsilon When evaluating the Coulomb term in the Hamiltonian, 1/r must be calculated. If r is too small this may lead to overﬂow and loss of accuracy in the
calculations. Therefore, if r < epsilon, we use 1/epsilon instead in the integrand.
– gamma Strength of magnetic ﬁeld. If this is zero together with false for nucleus,
the problem will turn into particle-in-box.
– angmom A boolean that turns on or oﬀ the term in the Hamiltonian (6.2) proportional
to the angular momentum i∂/∂φ. Setting the parameter to false together with
the nucleus parameter turns the problem into a harmonic oscillator.
– nitg Number of integration points if trapezoidal integration is preferred over Gaussian integration. A zero value turns on Gaussian integration. See comments in
listing for details on the trapezoidal rule used.
– lump A boolean variable indicating whether or not lumping the mass matrix should
be done.
– warp The grading parameter w described on page 144. The grading makes the grid
points and elements concentrate around the origin if w > 1. This feature may be
used to improve the accuracy of the numerical integration around the origin.
– scale A scalar that is used to uniformly scale the grid after warping. For example,
a value of 2 will double the size of the grid in each direction.
– renum A boolean variable indicating whether or not an optimization of the q(e, r) mapping is to be done in order to minimize the bandwidth of the element matrices. If banded matrices are used this option should be set to true. For sparse matrices it has no practical significance, except that it spends some time. (A lot, actually, if the grid is large.)
– savemat A boolean that indicates if one wishes to save the element matrices after
they are assembled. Could be useful if one wishes to compare the eﬃciency of the
program with, e.g., Matlab or Maple. Only works for sparse matrices and they
are stored in Matlab compliant .m-ﬁles.
– use arpack A boolean that indicates if ARPACK is to be invoked for solving the
eigenvalue problem. Usually this is the case, but if one simply wishes to test
other features (such as timing the assembly process) it could be handy to be able
to turn it oﬀ.
– gauge A string literal with value symmetric or non-symmetric indicating the gauge
to be used for the magnetic ﬁeld.
– store evecs If set to a value outside [0, nev], all the eigenvectors are stored. They are chosen for storage in order of increasing eigenvalue. Storing eigenvectors takes up a lot of drive space, so for large problems it could be useful to store, e.g., only the ground state.
– store prob density A boolean that indicates whether or not the probability density ﬁeld is to be stored alongside the eigenvectors. The number of ﬁelds stored
is given by store evecs.
The define() method also attaches submenus accessing parameters for linear solvers and matrix storage schemes. From the menu system, the submenu controlling matrices and linear solvers is accessed with the command ‘sub LinEqAdm’ or ‘sub L’. Two submenus Matrix prm and LinEqSolver prm can be accessed from here, making available settings for matrix storage and solver methods, respectively.
The most important commands are ‘set matrix type=classname ’ and ‘set basic
method=method ’, selecting matrix storage scheme and solver method, respectively.
scan(). This method initializes the HydroEigen solver class according to the settings
from the menu system. It allocates memory for various arrays and objects, creates
(or reads from file) the grid, opens a log file to which various statistics and results are written, and so on. In short, an instance of the class is ready to solve an eigenvalue
written and so on. In short, an instance of the class is ready to solve an eigenvalue
problem when scan() is ﬁnished.
solveProblem(). This method is the most central function in the program. It creates
the matrix A (called K in the code) and the matrix M , imposes boundary conditions,
instantiates EigenSolver and solves the eigenvalue problem. The eigenvalues and
eigenvectors are then written to ﬁle by HydroEigen::report().
enforceHomogenousBCs(). Boundary conditions are implemented differently in eigenvalue problems than in regular finite element simulators. The problem-defining matrices have to be modified by erasing rows and columns corresponding to nodes on the
boundary of the grid. This member function accomplishes this with both sparse matrices and banded matrices. It uses other member functions such as eraseRowAndCol()
to do this.
getElapsedTime() and reportElapsedTime(). These functions perform simple timing tasks. getElapsedTime() simply returns the number of seconds since the ﬁrst call
to this function. reportElapsedTime() writes this to the log ﬁle as well.
report(). After solveProblem() has ﬁnished its main task, report() is called to
write the eigenvectors (i.e., the discrete eigenfunctions) to a ﬁle. In addition, the
eigenvalues and various other information is written to the log ﬁle. This information
may be used in visualization programs to show the approximate eigenfunctions found
by the program. Each eigenvector is labeled with its eigenvalue for reference.
integrands(). This method is perhaps the core of every ﬁnite element formulation. It
evaluates the integrand in the numerical integration routines used in the element matrix
assembly. Integrals in Diﬀpack are exclusively done numerically with Gauss integration
of varying order. The integrands() method evaluates the integrand (multiplied with
the integration weight and the Jacobian of the coordinate change mapping between
local and global coordinates) at a given point (passed in a class FiniteElement parameter object) and updates the element matrix and vector also passed as parameters.
calcExpectation r(). It would be useful if the simulator application could produce
information on expectation values of various observables. Given two FieldFE objects
corresponding to two discrete functions u and v, this method calculates (u, rv). The
method is implemented by integrating numerically over the elements in the grid similar
to the assembly of the element matrix A. Even though only this and one other such method (namely calcInnerProd()) are implemented here, it is easy to write new ones.
In fact, the time dependent solver implements a more general expectation value method
using a general matrix. This allows for computation of for example the total energy
without integration over the elements as this is already done in the assembly process.
6.5.2 Comments
Some comments are given here, summing up information gained during implementation and simulation, and in the aftermath of implementing the time dependent solver (chapter 7).
The Matrices. The matrices stored in the handles M and K correspond to the mass
matrix M and the matrix H, respectively. The matrices may be created in either
MatBand or MatSparse format, the latter deﬁnitely being the most favorable as the
matrices become increasingly sparse with increasing dimension of the problem. Banded
matrices take up a lot more space and require many more operations during, for example, matrix-vector products. Furthermore, if a sparse matrix is used there is no need for
optimizing the grid. In Fig. 5.1 this optimization is done. Without optimization the
non-zeroes are scattered throughout the whole matrix, making a banded storage scheme
not at all attractive.
Other matrix formats should not be used as their support is not implemented in
for example enforceHomogenousBCs().
The mass matrix can be lumped or not. In the former case the full mass matrix is not used in the diagonalization process; instead A (i.e., K) is multiplied by the inverse of the lumped matrix, so that the eigenvalue problem becomes a standard problem. This speeds up the simulation considerably.
Linear Solvers.
The linear solver method is chosen by the following commands:
sub LinEqAdm
sub LinEqSolver prm
set basic method = method
The name method can be one of many choices, e.g., GaussElim, GMRES and ConjGrad.
With complex sparse matrices Gauss elimination cannot be used because it is not
implemented in Diﬀpack. The conjugate gradient method ConjGrad supposedly works
best for positive deﬁnite systems, but it seemingly works well for other systems as well.
The method used in the simulations in this thesis was GMRES, however. This method takes longer than Gaussian elimination for small systems, but when the number of nodes increases, the method becomes much faster. Optimization of the grid does not seem to
aﬀect the eﬃciency. (This is intuitively so because only matrix-vector operations are
used in the algorithm.)
The Invocation of ARPACK. The class EigenSolver instance used for diagonalization has support for many options that are not used at all. For example one might want
to search for eigenvalues of intermediate magnitude instead of the lowest magnitudes.
If one wishes to study for example the structure of highly excited states in a classically chaotic system, this may be the case. Only minor modifications are needed to incorporate this in the class.
EigenSolver is also somewhat limited with respect to which features of ARPACK are actually used, such as monitoring of convergence, reading the Schur vectors and so
Figure 6.3: A square grid with D = 6 and sides of length L. A few linear and quadratic element shapes have been drawn to indicate their distribution.
on. In a more complex implementation these features could be taken advantage of in
HydroEigen as well.
6.6 Numerical Experiments
The HydroEigen class may be used to solve the particle-in-box, the harmonic oscillator
and the hydrogen atom. In this section we will go through some numerical experiments
with the program. There are several parameters that may aﬀect the precision of the
eigenvalues, such as mesh width, element type and order and the numerical geometry.
In addition, the presence of the mass matrix M in the problem may or may not improve
the quality of a ﬁnite element approximation in comparison with a ﬁnite diﬀerence
approximation.
6.6.1 Particle-In-Box
Square Domain
When studying the particle-in-box in a square domain, the most natural choice for grid
is a uniform square grid. The analytic eigenfunctions have the form sin(kx πx) sin(ky πy)
and do not concentrate in some regions of the square. Hence, the grid should be
uniform. Fig. 6.3 shows a square grid with sides L. Each side is subdivided into D
intervals of length h = L/D, where h is the mesh width, and the grid has (D + 1)2
nodes. When using linear elements, each element requires 2 × 2 = 4 nodes, and when using quadratic elements 3 × 3 = 9 nodes are required. If we then choose D as an even number we may use both linear and quadratic elements on the same subdivision and hence the same mesh width.
To study the quality of the ﬁnite element method (and the ﬁnite diﬀerence method
in this case) we study the relative error of the eigenvalues as function of h and the
element order p, i.e.,
δ(h, p) = λnum /λ − 1.
The qualitative behavior of the relative error, e.g., whether it increases slowly or rapidly, will also indicate the quality of the methods.
Furthermore, comparing a simulation where the mass matrix is lumped and a simulation with the full mass matrix may indicate if lumping improves the quality of the
eigenvalues.
This particular problem of a particle-in-box was solved analytically in one dimension
and also partially in two dimensions in section 6.4. These results should be set in
k      n = 20          n = 40          n = 60          n = ∞ (exact)
1      19.77982922     19.74935767     19.74371889     19.73920881
2,3    49.69408652     49.43433728     49.38636767     49.34802202
4      79.60834382     79.11931689     79.02901644     78.95683523
5      100.3720148     99.1128192      98.88109094     98.69604404

Table 6.2: Results from numerical simulations of particle-in-box with linear elements.
Figure 6.4: Relative error δk (n) of the eigenvalues versus k, linear elements, for n = 20, 30, 40, 50 and 60.
conjunction with the present discussion.
Linear Elements. A sequence of simulations was run with n = 20, 30, 40, 50 and 60
subdivisions of the sides in the grid. Table 6.2 shows the ﬁrst few numerical eigenvalues
compared to the analytic eigenvalues for some grid sizes. Fig. 6.4 shows the relative
error for each simulation. Note that many eigenvalues are pairwise equal, also in the
discrete formulation, and this shows up as many pairwise equal relative errors.
If we assume that δ(h) ∼ Chν = Cn−ν we obtain

ln δ = ln C − ν ln n,
i.e., a straight line if we plot ln δ against ln n. The constant should depend on the eigenvalue number k. Fig. 6.5 shows such plots for a few eigenvalues λnum_k . As seen from the figure, we obtain perfectly linear plots; hence the assumed form of δ fits well. Notice that increasing eigenvalues yield increasing Cs, reflecting that higher eigenvalues tend to have higher error, as seen in Fig. 6.4 as well. By inspection, ν = 2 for all the sample plots. Hence,
λnum = λ + O(h2 )
is a reasonable guess.
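The slope fit itself can be sketched as follows; for brevity this uses the one-dimensional linear-element problem from section 6.4, whose closed-form eigenvalues likewise give δ = O(h²) = O(n⁻²) (the eigenvalue number k and grid sizes are arbitrary choices):

```python
import numpy as np

def fem_eigs(n):
    """Generalized eigenvalues Kv = lambda Mv, 1D box, linear elements."""
    h = 1.0 / n
    m = n - 1
    K = (2 * np.eye(m) - np.eye(m, k=1) - np.eye(m, k=-1)) / h
    M = (4 * np.eye(m) + np.eye(m, k=1) + np.eye(m, k=-1)) * h / 6
    Li = np.linalg.inv(np.linalg.cholesky(M))
    return np.sort(np.linalg.eigvalsh(Li @ K @ Li.T))

k = 3                                    # eigenvalue number to study
ns = np.array([20, 30, 40, 50, 60])
delta = np.array([fem_eigs(n)[k - 1] / (k * np.pi) ** 2 - 1 for n in ns])
slope = np.polyfit(np.log(ns), np.log(delta), 1)[0]   # least-squares line
assert abs(slope + 2) < 0.05             # ln(delta) = ln(C) - 2 ln(n)
```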
Quadratic Elements. Performing the same experiments but with quadratic elements
yields very similar results. Simulations with n = 20, 30, 40 and 50 were done. Fig. 6.6
shows the relative error for each eigenvalue when using quadratic elements, while
Fig. 6.7 shows ln δ as function of ln n. Clearly,
δ = O(h4 ),
which is a much better convergence rate than for linear elements.
Figure 6.5: ln |δ| versus ln n for k = 1, 2, 4, 5 and 20. The graphs are straight lines with slope −2.
Figure 6.6: Relative error δk (n) of the eigenvalues versus k, quadratic elements, for n = 20, 30, 40 and 50.
The same number of nodes and hence the same dimensions of the matrices yields
much faster convergence with quadratic elements. What we have to pay is an increase
in the bandwidth of the matrices, as quadratic elements contain more nodes, and hence
couplings between nodes farther away from each other than in linear elements.
We notice that δ > 0 in all performed experiments, i.e., the finite element method seems to overestimate the eigenvalues.
Lumping the Mass Matrix. It is a well-known fact that when solving the one-dimensional
wave equation utt = uxx the standard ﬁnite diﬀerence method δt δt u = δx δx u yields the
exact solution. This diﬀerence scheme is actually equivalent to using linear elements
and lumping the mass matrix; hence an improvement of the numerical results is a
consequence of lumping in this case.
Lumping the mass matrix when using linear elements creates an eigenvalue problem
equivalent to using the ﬁnite diﬀerence method, as shown in section 6.4. Furthermore,
lumping the mass matrix makes our eigenvalue problem a standard eigenvalue problem
which is easier and quicker to solve numerically as we do not need to solve linear
systems of equations. Hence, it is of importance whether or not lumping improves or
degrades the eigenvalues and eigenvectors.
A few numerical experiments show that δ = O(h⁴) also for the lumped eigenvalue problem with quadratic elements, see Fig. 6.7. In addition, δ < 0, i.e., the lumped system tends to underestimate the eigenvalues.
Figure 6.7: ln |δ| versus ln n for k = 1, 4 and 20. The graphs are straight lines with slope −4. The dashed lines are for the lumped eigenvalue problem.
Figure 6.8: Comparison of the relative error |δ| in the eigenvalues for the lumped system and the regular system. A grid with n = 20 was used.
The qualitative behavior of the relative error is however diﬀerent in the lumped
system when compared to the regular eigenvalue problem. The regular problem has
a relative error that ﬂuctuates a great deal with increasing eigenvalue number. The
lumped version shows a smoother behavior, see Fig. 6.8 in which a comparison is made.
As we see, the relative error grows faster with increasing eigenvalue number than in
the original problem. This eﬀect can also be seen in Fig. 6.2, section 6.4.
As we see, lumping might be an option when only the lowest eigenvalues are required. Furthermore, with a lumped mass matrix the eigenvalues are in principle much easier to compute, an important fact. As the behavior of the eigenvalues is qualitatively very different when the mass matrix is lumped, it should not come as a surprise if the method is superior for some Hamiltonians. Indeed, when we study the two-dimensional hydrogen atom we see that lumping the mass matrix drastically improves the behavior of the method.
Circular Domain
When discretizing a circular domain we use triangulation. This is supported by Diﬀpack, see Ref. . Parameters to the circular grid are its radius r, the number of line
segments on the boundary and the total number of desired elements. The only element
type supported is linear triangular elements. As the node locations and element shapes
are not known a priori we will have to estimate the mesh width h. If we by Ã denote
the average area of the elements, we have Ã ∝ h², i.e.,

Ã = πr²/nel ∝ h²,

where nel is the number of elements. Thus,

h ∝ r/√nel .
In section 3.5 of Ref.  the suggested number of elements to produce a uniform grid with optimal triangle angles is

nel = (n − 1)²/(2π),

where n is the number of line segments used to discretize the border of the circular region. (In the grid generation utility, nel must be supplied, but it is treated as a “wish,” i.e., the program will try to create this number of elements.) Using the formula for nel , we obtain

h ∝ r/(n − 1),
hence using evenly spaced n will produce meshes comparable to the ones used for the
square domain. We will use n = 50, 70, 90, 110 and 130. Fig. 6.12 shows two grids
used in the simulation.
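For reference, the suggested element counts and mesh width estimates for these grids can be tabulated with a short script (the radius r = 1 is an arbitrary choice):

```python
import math

r = 1.0
for n in (50, 70, 90, 110, 130):
    nel = (n - 1) ** 2 / (2 * math.pi)   # suggested number of elements
    h = r / (n - 1)                      # mesh width estimate
    print(f"n = {n:3d}   nel ~ {nel:7.1f}   h proportional to {h:.5f}")
```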
We will do numerical simulations similar to the square domain case. We will do a
comparison between the lumped system and the regular system in addition to ﬁnding
the relative error order.
Fig. 6.9 shows the relative error δ(k) for the diﬀerent grid sizes. Fig. 6.10 shows
the same for the lumped system. The relative error is negative for the lumped system,
hence, eigenvalues are underestimated in this case. For the regular system, eigenvalues
are exclusively overestimated.
Fig. 6.11 shows ln |δ| as function of ln n for selected eigenvalues for both the regular
and lumped system. Clearly, ν = −2 in the case of linear, triangular elements (both
lumped and regular) similar to the rectangular domain case. I.e.,

λ_num = λ + O(h²).
The irregularity of the curve shapes is probably due to the definition of h. The
grids are not related in a simple way as with the rectangular grid used above.
Clearly, the lumped system performs better than the regular for this system as the
relative error is lower for all eigenvalues.
6.6.2 Two-Dimensional Hydrogen Atom
The Hamiltonian for the two-dimensional hydrogen atom with an applied magnetic
field in polar coordinates is

H = −∇² − 2/r − iγ ∂/∂φ + (γ²/4) r²,
where we have used the symmetric gauge, in which the magnetic vector potential is
given by

A = (γ/2)(−y, x, 0).
The parameter γ is the strength of the magnetic ﬁeld which is aligned along the z-axis
perpendicular to the plane of motion of the electron. The time independent Schrödinger
equation reads
H Ψ_nm = ε_nm Ψ_nm
Figure 6.9: Relative error for each eigenvalue for the diﬀerent grids in the regular
system.
Figure 6.10: Relative error for each eigenvalue for the diﬀerent grids in the lumped
system.
Figure 6.11: ln |δ| as a function of ln n for a few eigenvalues λ_k^num. Reference lines with slope −2 are shown.
where the eigenvalues are given by

ε_nm = −1/(n − 1/2)²,   n = 1, 2, . . . ,   m = −n + 1, −n + 2, . . . , n − 2, n − 1.
We see that each eigenvalue has degeneracy 2n − 1. The corresponding eigenfunctions
are obtained by separation of variables, viz.,

Ψ_nm = e^{imφ} R_nm(r),

where R_nm(r) fulfills a differential equation we cannot solve in closed form. The numerical values of the first few eigenvalues are
ε_{1,0} = −4,   ε_{2m} = −4/9 ≈ −0.4444,   ε_{3m} = −4/25 = −0.16,   ε_{4m} = −4/49 ≈ −0.08163.
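As a quick sanity check, these values and the degeneracies follow directly from the eigenvalue formula. An illustrative sketch (our own, not thesis code):

```python
def eps(n):
    """Eigenvalue of the two-dimensional hydrogen atom: eps_n = -1/(n - 1/2)**2."""
    return -1.0 / (n - 0.5) ** 2

def degeneracy(n):
    """Each level is (2n - 1)-fold degenerate, m = -n+1, ..., n-1."""
    return 2 * n - 1

# Reproduce the quoted numerical values:
assert eps(1) == -4.0
assert abs(eps(2) + 4.0 / 9.0) < 1e-12     # ~ -0.4444
assert abs(eps(3) + 0.16) < 1e-12
assert abs(eps(4) + 4.0 / 49.0) < 1e-12    # ~ -0.08163
assert degeneracy(2) == 3                  # three states near -0.4444
```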
Table 6.3 shows the standard deviation σ_nm = √⟨r²⟩, quoted from Ref. . The
width of the wave function tends to grow rapidly with higher energy. Our domain of
discretization must be large enough in order to capture the essential features of Ψnm
for as many states as we wish. On the other hand we have a singularity at r = 0 and
the potential varies rapidly here. In order to reproduce the Hamiltonian faithfully near
the origin we must have an appropriately fine mesh. As σ_00 = √(3/8) ≈ 0.612 is a small
number it is not surprising if the region r < 1 needs a quite fine mesh.
Numerically we do not use the natural numbering of the eigenvalues, simply because
we do not know a priori if they are degenerate or not. We simply choose to sort them
in increasing order and we label them with an integer k. The eigenvectors found for
the finite element matrix correspond to approximate eigenfunctions through the
mapping

u_h = Σ_j U_j N_j(x),
and therefore we will use the term eigenvector and eigenfunction interchangeably in
this section.
Figure 6.12: Grids for n = 50 (above) and n = 130 (below).
          |m| = 0               |m| = 1               |m| = 2               |m| = 3
n = 1     √(3/8) ≈ 0.612
n = 2     √(14 5/8) ≈ 3.825    √(11 1/4) ≈ 3.354
n = 3     √(103 1/8) ≈ 10.155  √(93 3/4) ≈ 9.682    √(65 5/8) ≈ 8.101
n = 4     √(385 7/8) ≈ 19.644  √(367 1/2) ≈ 19.170  √(312 3/8) ≈ 17.764  √(220 1/2) ≈ 14.849

Table 6.3: Standard deviation √⟨r²⟩ for different eigenstates.
The simulations for the two-dimensional hydrogen atom are very similar to the
particle-in-box simulations. However, in this system we do not have a bounded geometry, hence we must truncate it. The obvious first choice is to create a rotationally
symmetric domain, i.e., a disk with radius r0. However, this limits the element type
to linear triangles due to constraints in Diffpack’s grid preprocessors.
We will use triangulated disk grids for our simulations in this section. The grid
generation utility from the particle-in-box simulations had to be replaced, due to apparent instabilities of the algorithm. Nevertheless, the grids are parameterized by the
radius r0 and the typical element size h. To sum up, we have

n_el = (n − 1)²/(2π),   Ã = πr0²/n_el = h²,
where n is the number of line segments used to approximate the boundary of the disk.
To keep the number of elements ﬁxed but increase the accuracy near the origin we
may introduce a grading to our grid. The grading is a transformation of the nodes in
the grid given by

x^[k] → (|x^[k]|/r0)^{w−1} x^[k].
The effect of this mapping is to change the length of each node x into |x|^w and then
rescale such that the size r0 of the grid is conserved. With w > 1 this will create a
grid with smaller triangles near the origin. The parameter must be chosen with some
care, however, as a value which is too large will stretch some triangles and produce
undesirably small angles. (This may introduce round-off errors in the computations.)
Notice that grading with this procedure keeps h ﬁxed as the average element area is
preserved.7
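The grading map is a one-liner per node. The following is an illustrative sketch of our own (not Diffpack's grid preprocessor):

```python
def grade_node(x, y, r0, w):
    """Grade a node toward the origin: x -> (|x|/r0)**(w - 1) * x.

    The length |x| becomes |x|**w / r0**(w - 1); nodes on the rim
    |x| = r0 are left in place, so the grid size r0 is conserved.
    """
    r = (x * x + y * y) ** 0.5
    if r == 0.0:
        return (0.0, 0.0)
    s = (r / r0) ** (w - 1.0)
    return (s * x, s * y)

# With w = 2 interior nodes move toward the origin while the rim stays put:
assert grade_node(20.0, 0.0, 20.0, 2.0) == (20.0, 0.0)
assert abs(grade_node(10.0, 0.0, 20.0, 2.0)[0] - 5.0) < 1e-12
```

Applying this to every node of a uniform triangulation produces the small triangles near the origin described above, at the cost of stretched triangles when w is chosen too large.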
Importance of grading. To illustrate the importance of the mesh width near the origin
we present simulations with r0 = 20, h = 0.2 and with diﬀerent grading parameters,
viz.,
w ∈ {1.0, 1.5, 1.8, 2.0, 2.5}.
The grid had nno = 1275 nodes (of which 1174 remained after incorporating boundary
conditions) and nel = 2447 triangular elements. Fig. 6.13 shows the ﬁrst few eigenvalues
for each simulation compared to the analytic eigenvalues. We have used a lumped
mass matrix in each simulation. It is worthwhile to mention that the simulation time
increased with increasing w, even though the dimension of the system was the same in
each simulation. The simulation with w = 2.0 took 13 times as long to finish as
the homogeneous grid, i.e., w = 1.0, while w = 1.8 took 6.5 times as long.
7 More sophisticated methods for grading and/or refining the grid can be used as well, such as
specifying each element’s desired size in a second pass of grid generation. This is outside the scope
of this thesis. See for example Ref. .
(Figure annotations: “faithful reproduction of ε_nm” at low k, “linear behaviour” at high k; curves for w = 1.0, 1.5, 1.8, 2.0 and the analytic values.)
Figure 6.13: Eigenvalues for grids with diﬀerent grading factors w. Parameters for
grid: r0 = 20, h = 0.2.
The Numerical Eigenfunctions. Fig. 6.13 shows the numerical eigenvalues together
with the analytic eigenvalues. Clearly, up to a certain k, the qualitative behavior is
reproduced. At some point the numerical eigenvalues seem to grow linearly instead of
the expected 1/n(k)2 behavior. Intuitively, the eigenfunctions of such high states must
analytically go beyond the limits of the disk, i.e., σ > r0 , and this is the reason for the
wrong behavior of the eigenvalues. For high energies the particle-in-box features of the
eﬀective potential (i.e., Coulomb attraction plus inﬁnite box potential) dominate.
Fig. 6.14 shows plots of a few eigenstates (the real parts) for the grid with parameters (r0 = 20, h = 0.2). By looking at the imaginary parts (not shown here) they
are easily seen to be proportional to the real parts, hence the eigenfunctions found by
ARPACK can be written as e^{iα} Ψ(x), with α ∈ R and Ψ a real function.
When solving the eigenvalue problem analytically one typically uses separation of
variables, i.e., Ψ_nm = e^{imφ} R_nm(r). The radial function R(r) is a solution of Eqn. (6.3),
and it depends on both the quantum number n and the magnitude |m|. For a given n
we have an eigenspace of dimension 2n − 1 for which we may find an orthonormal basis,
and indeed the functions e^{imφ} R_nm(r) are orthogonal. The IRAM also finds orthogonal
eigenvectors, but this time discrete eigenvectors. Incidentally, it finds something very
similar to what we find when using separation of variables in this case. Consider the
three eigenfunctions that clearly correspond to ε₂ ≈ −0.4444. We see that the first
one is approximately independent of φ, hence it corresponds to Ψ_20. The two next are
clearly a radial function modulated with a periodic function approximately equal to
cos φ and sin φ. We obtain these by taking linear combinations of e^{iφ} and e^{−iφ}, viz.,

2 cos φ R(r) = (e^{iφ} + e^{−iφ})R(r),   2i sin φ R(r) = (e^{iφ} − e^{−iφ})R(r).
In other words,

Ψ_3^num ∝ Ψ_{2,−1} + Ψ_{2,1}
(Panel eigenvalues: ε = −3.383182, −0.407101, −0.425269, −0.426326, −0.146063, −0.155577, −0.155459, −0.151320, −0.150936, −0.068412, −0.068736, −0.050594.)
Figure 6.14: Eigenstates found with r0 = 20, h = 0.2, w = 2.0. Gray tones indicate
low values (white) and high values (black). Lowest-lying states are upper left.
and

Ψ_4^num ∝ Ψ_{2,−1} − Ψ_{2,1}.
In the same manner we proceed with the five graphs with ε ≈ −0.15, i.e., they correspond to n = 3. Again, one of these is independent of φ and corresponds to m = 0.
Two functions are proportional to sin φ and cos φ, and the last two are proportional to
cos 2φ and sin 2φ, corresponding to m = 2. In addition we see that the radial functions
R30 , R31 and R32 are diﬀerent, as expected from the radial equation’s dependence on
|m|.
We must emphasize that this structure of the eigenfunctions is partially a coincidence, probably due to the approximate rotational symmetry of the grid. There is no
other reason why ARPACK should not have chosen a different orthogonal basis for the
eigenspace corresponding to ε_n. Furthermore, the eigenvalues are only approximately
degenerate in the numerical case.
Increasing the Grid Size. It was mentioned that the eigenvalues tend to grow linearly
when k > 10. We can ﬁnd an explanation in terms of the eigenfunction plots. For
the states with the lowest eigenvalues the dominating features are near the origin. For
higher-lying states we see that the features approach the rim of the disk. Therefore the
particle-in-box part of the numerical model starts to show up in both the eigenvalues
and eigenfunctions. Already for n = 4 we have σnm ∼ r0 and hence we are losing
features of the functions.
To study this behavior in more detail simulations with increasing disk sizes were
carried out. The grids had radii given by
r0 ∈ {20, 30, 40, 50},
and they all had h = 0.2 and w = 1.5. As seen in Fig. 6.16 the eigenvalues get progressively better as we increase the grid size, also for the lowest-lying states. Physically,
this corresponds to the slow decay of the Coulomb potential. Fig. 6.15 shows the
10th eigenstate for r0 = 50. If we compare this to the corresponding plot in Fig. 6.14
we see that more of the features are captured by this large grid.
6.7 Strong Field Limit
We have studied the eigenvalue problem for the weak-ﬁeld limit, i.e., γ = 0. We now
turn to the limit in which the Coulomb interaction is treated as a perturbation, i.e., we
turn it oﬀ and set γ = 1 in order to study the numerical properties of the eigenvalue
problem.
If we ﬁnd a common grid in which both the weak-ﬁeld limit (i.e., the pure Coulomb
system) and the strong-ﬁeld limit (i.e., the pure magnetic system with γ = 1) yield
results that agree with analytic results, it is reasonable to believe that the combined
system (i.e., Coulomb attraction and magnetic field) will also be solved accurately.
Our goal in this thesis is not to achieve high-precision results in this respect. In
order to achieve this we must perform simulations that are too heavy for the resources
available at this stage in the work. Instead we aim at a good understanding of the behavior of the numerical problem for diﬀerent grid parameters and physical parameters.
When more resources are available we can solve the eigenvalue problem with higher
accuracy.
The Hamiltonian for the pure magnetic system reads

H = (−i∇ + A_symm)² = −∇² − iγ ∂/∂φ + (γ²/4) r².
(Depicted eigenvalue: ε = −0.080439.)
Figure 6.15: Sample eigenstate of a simulation with r0 = 50, h = 0.2, w = 1.5. Grid
size from Fig. 6.14 shown for reference.
(Curves: analytic, r = 20, 30, 40, 50, 60.)
Figure 6.16: The eigenvalues for successively larger grids. Here, h = 0.2, w = 1.5. The
inset graph shows the continuation for larger k.
(Curves: r = 30, h = 0.2, w = 1.5; r = 40, h = 0.2, w = 1.5; r = 30, h = 0.1, w = 1.5.)
Figure 6.17: Eigenvalues for three simulations of the magnetic Hamiltonian.
The eigenvalues (called the Landau levels) are infinitely degenerate if the domain is R².
The symmetric gauge is employed here. Recall that any other gauge will give different
eigenfunctions but the same eigenvalues. Recall also that the degeneracy of each Landau
level, if the domain is a disk with radius r0 as in our simulations, is

g = r0²/2.   (6.11)
There are two important facts to be aware of. First, we do not know which basis
of eigenvectors our numerical method will choose for us. (As we shall see it is a quite
interesting basis.) Second, we do not know if the numerical methods are gauge invariant like the original problem. If we diagonalize in a different gauge, will
we ﬁnd the same eigenvalues? We will exclusively use the symmetric gauge in our
simulations.
First we consider two simulations with r0 = 30, h = 0.2, and w = 1.0 and 1.5,
and compare the results. The eigenvalues are shown in Fig. 6.17. Clearly, the graded
grid yielded better eigenvalues than without grading. Next, we do a simulation with

r0 = 30,   h = 0.1,   w = 1.5,   (6.12)
i.e., we reduce the mesh width. The eigenvalues are shown in Fig. 6.17. Clearly,
reducing the mesh width has a dramatic impact on the approximate degeneracy of the
lowest Landau level. We also see the effect on the energy levels that correspond to ε = 2N + 1.
Analytically, the eigenvalue ε = 1 is multiply degenerate. Numerically, the eigenvalues start out close to 1 but rapidly increase. Clearly, a smaller mesh width enhances
the convergence of the lowest eigenvalues. It is intuitively clear that not all g = r0²/2 states
for every Landau level will be found with a finite mesh width, as the number of
states equals the number of internal grid points in the mesh, which may well be much
less than g to start with. It also becomes clear that decreasing the mesh width should
increase the approximate degeneracy of the lowest Landau level in particular.
(Panels: Re(Ψ) and Ψ*Ψ; ε_17 = 2.094214.)
Figure 6.18: The 17th eigenfunction of the Landau Hamiltonian in the symmetric
gauge. Light colors are low values and black colors are high values.
We notice that large fractions of the numerical eigenvalues tend to cluster around
the analytical eigenvalues 2N +1, creating a (crude) approximation to a ladder function.
In fact, as h → 0 one expects that the function ε(k) converges to

ε(k) = 2⌊k/g⌋ + 1,

representing the exact eigenvalues, although we have no proof of this.
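With this floor reading of the formula, the ladder is easy to sketch. An illustrative snippet of our own (eigenvalues numbered from k = 0 here):

```python
def landau_ladder(k, r0):
    """Limiting ladder eps(k) = 2*floor(k/g) + 1 with g = r0**2 / 2 states per level."""
    g = r0 ** 2 / 2.0
    return 2 * int(k // g) + 1

# For r0 = 30 each Landau level holds g = 450 states:
assert landau_ladder(0, 30.0) == 1      # lowest Landau level
assert landau_ladder(449, 30.0) == 1    # still the lowest level
assert landau_ladder(450, 30.0) == 3    # first state of the next level
```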
Recall that for the hydrogen atom ARPACK tended to ﬁnd eigenfunctions that were
closely related to the eigenfunctions found analytically by separation of variables. This
is also the case in the Landau system and in a both interesting and amusing way.
Fig. 6.18 shows the state Ψ16 from the simulation with h = 0.1. It belongs to the
lowest Landau level as seen from Fig. 6.17. First of all the quantum number m = 16
is readily identiﬁed if we assume Ψ0M =√eimφ R0M (r). Second, notice the localization
of the probability density around r16 = 2 · 16 ≈ 5.6.
This state is qualitatively representative of the eigenfunctions found for the lowest
Landau level. The angular quantum number m is readily identified and the probability
is concentrated in rings. As the momentum is proportional to the gradient of Ψ, it is
easily seen that in this case it is tangent to the ring at the centre. (At the edges of the
ring the gradient points in a slightly different direction.) Hence the probability density
corresponds to that of a particle moving in a circular path. To sum up, ARPACK
finds a basis that is “identical” to the analytical basis, and in addition we can see the
correspondence principle at work in the numerical results!
6.8 Intermediate Magnetic Fields
The ﬁnal experiment is a series of simulations with a constant grid but with varying magnetic ﬁeld γ. Both the Landau level simulations in the previous section and
the pure hydrogen atom simulations yielded qualitatively correct eigenvalues for the
lowest-lying states. As we have pointed out it is therefore natural to expect that with
an intermediate magnetic ﬁeld γ ∈ [0, 1] we will also obtain qualitatively correct eigenvalues and eigenfunctions. More information on the intermediate magnetic ﬁeld regime
can be found in Refs. [29, 30].
We shall not perform a thorough analysis. The simulation data is too coarse to
ﬁnd numerical values that are very interesting to present. The simulation produces
Figure 6.19: The energy levels of the hydrogen atom for intermediate magnetic ﬁelds
however qualitatively very interesting results.
We will reuse the grid from Eqn. (6.12) for our simulations. We use a magnetic
field with values γ ∈ {0, 0.05, 0.1, . . . , 0.95, 1.0}.
Fig. 6.19 shows the eight lowest energy levels as function of the magnetic ﬁeld.
To produce the ﬁgure the eigenvalues from each simulation were sorted before they
were plotted. We easily recognize the degenerate energy levels at γ = 0, and as the
ﬁeld is turned on, the degeneracy is broken, producing fork-like structures in the graph.
We can also easily recognize crossings of the eigenvalues. As argued in Ref. 
there must be inﬁnitely many crossings. Furthermore, the eigenvalues accumulate at
the Landau levels 2γ(N + 1/2). Intuitively this is so because the Landau levels are
inﬁnitely degenerate, needing inﬁnitely many hydrogen eigenvalues that are ﬁnitely
degenerate to converge. For example, we see a crossing around γ = 0.15 of the ε_{2,−1}
and ε_{3,−2} hydrogen levels. Here, we interpret the curves in terms of the states Ψ_nm
found with separation of variables, and use the well-known fact that it is indeed these
states that wander to the split energy levels, see Ref.  for the perturbative treatment
of the magnetic field. We also see hints of crossings with curves not shown in the graph,
such as the abrupt change of slope of the ε_{2,0} hydrogen state around γ = 0.6.
6.9 Discussion and Further Applications
Was This a Good Idea? We have spent quite a lot of time ﬁnding eigenfunctions and
eigenvalues of the two-dimensional hydrogen atom with numerical methods that are
quite complicated. We could have found much better eigenvalues and eigenfunctions by
concentrating on the radial equation instead of the full Hamiltonian. However, when
introducing more complicated Hamiltonians we can no longer use the separation of
variables, and the full Hamiltonian must be considered anyway. For example, consider
a two-dimensional non-isotropic harmonic oscillator potential with an applied magnetic
ﬁeld. This system has no rotational symmetry, and hence techniques such as separation
of variables are no longer applicable.
Furthermore, we have gained a thorough insight into the eigenvalue problem in the
ﬁnite element context. Besides its value as a catalyst for insight into the numerical
methods, information can be extrapolated to time dependent problems, as the quality
of the eigenvalues has a profound effect on the quality of the time dependent simulations.
Time Evolution of Time-Independent Systems. If the system under consideration is
independent of time, i.e.,

∂H/∂t ≡ 0,
we know from section 1.4 that the solution to the time-dependent Schrödinger equation
is easy to calculate:
1. Diagonalize H_h, i.e., find the k (where k ≪ N is a possibility) lowest eigenvalues
   and their corresponding eigenvectors, viz.,

   H_h Φ_n = E_n Φ_n,   n = 1, 2, . . . , k.

2. Pick an initial condition Ψ(0) assumed to be sufficiently smooth, i.e.,

   Ψ(0) ≈ Σ_{n=1}^{k} c_n(0) Φ_n.

   The coefficients are calculated by

   c_n(0) = (Φ_n, Ψ(0)) / (Φ_n, Φ_n).

3. To find Ψ(t), form

   Ψ(t) ≈ Σ_{n=1}^{k} e^{−iE_n t/ℏ} c_n(0) Φ_n.
This can be used to calculate the time development from the initial state to the final
state. If the initial state is equal to a superposition of the k first eigenstates the
evolution is perfect. The cost of diagonalizing Hh may, however, be much higher than
that of using for example the leap-frog scheme or the theta rule. Then again, these stepping
methods introduce numerical errors in the phases of the various components.
If the wave function is desired at very many instants, the cost of time stepping may or
may not exceed that of simply forming the superposition in point 3.
Forming the linear combination of eigenvectors requires O(kN) operations. A time
stepping procedure typically includes a forward or back substitution of an LU factorized
matrix (costing O(N²) operations) or using an iterative method for solving the linear
systems at hand (with a priori unknown cost).
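To make points 1–3 concrete, the following pure-Python sketch propagates a state of a two-level stand-in for H_h, with closed-form eigenpairs in place of an ARPACK solve and ℏ = 1 (illustrative only, not the thesis's Diffpack code):

```python
import cmath
import math

# Step 1: "diagonalize" H = [[a, b], [b, a]]; its eigenpairs are known in
# closed form: E = a + b with (1, 1)/sqrt(2), and E = a - b with (1, -1)/sqrt(2).
a, b = 1.0, 0.5
E = [a + b, a - b]
s = 1.0 / math.sqrt(2.0)
phi = [(s, s), (s, -s)]

# Step 2: expand the initial state psi(0) = (1, 0) in the orthonormal eigenvectors.
psi0 = (1.0, 0.0)
c = [phi[n][0] * psi0[0] + phi[n][1] * psi0[1] for n in range(2)]

# Step 3: psi(t) = sum_n c_n * exp(-i E_n t) * phi_n.
def psi(t):
    out = [0j, 0j]
    for n in range(2):
        amp = c[n] * cmath.exp(-1j * E[n] * t)
        out[0] += amp * phi[n][0]
        out[1] += amp * phi[n][1]
    return out

# This propagation is exact and conserves the norm for any t:
p = psi(3.7)
assert abs(abs(p[0]) ** 2 + abs(p[1]) ** 2 - 1.0) < 1e-12
```

The only error sources left in a real computation are the spatial discretization (the quality of E_n and Φ_n) and the truncation to k states; the time stepping itself is exact here.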
Consequences for the Time Dependent Problems. In addition to providing a means
for analyzing probabilities for excitations in time dependent problems, knowledge of
the eigenvalues and eigenvectors of the numerical Hamiltonian can tell us a lot about
the development of an arbitrarily chosen initial state. Such a state can be written as a
linear combination of the eigenvectors.
Let us assume that the Hamiltonian is independent of time. In that case, it was
explained in section 1.4 that solving the time dependent Schrödinger equation was
equivalent to diagonalizing the Hamiltonian.
Assuming that we can solve the spatially discretized Schrödinger equation exactly,
i.e., that we can solve
ẏ = −iHh y,
we know that its solution operator is given by
U(t) = e^{−itH_h}.
Expanding y in the eigenvectors φ_n of H_h, i.e., the discrete eigenvectors of H, viz.,

y(0) = Σ_n c_n φ_n,

then yields

y(t) = U(t) y(0) = Σ_n c_n e^{−itε_n} φ_n.
For the particle in box we know that linear elements reproduce the exact eigenvectors.
It is reasonable to believe that with quadratic (or higher-order) elements the eigenvectors are still exact (or very close to exact). Hence, for the time development it is easy to see
that the better the eigenvalues we have obtained, the better the time development of
a wave packet will be.
If our ODE solving algorithm is exact, or close to exact, it is clear that the eigenvalues of H_h will enter the numerical solution operator of the ODE. For example, the
Crank–Nicolson scheme has a solution operator U_∆ explicitly given by the eigenvectors
and eigenvalues of H_h; see section 1.4. The better the eigenvalues we obtain, the better
the solution of the time dependent Schrödinger equation will look as well.
This gives us a means for predicting whether or not a spatial discretization of some
system is reasonable. Thus, if we find that the eigenvalues of, say, the two-dimensional hydrogen atom are way off the analytic eigenvalues, we must think twice
before we start solving time dependent problems with this discretization.
Doing Large-Scale Simulations. The simulations in this chapter were done on various
computers, but in general they were ordinary workstations that happened to be available at the time with moderate CPU speed and amounts of memory. For example,
the hydrogen simulations were performed on a machine running Debian Linux with 1
GB of memory and a 2.4 GHz Intel Pentium 4 processor. The simulation times ranged
from a few minutes to two hours, and the simulation time of course increases with the
dimension of the matrix. The increase is not optimal (i.e., linear) but neither is it
quadratic. (Detailed results are not sought in this thesis.)
For realistic simulations, however, we will typically use a cluster with dozens of
CPUs and lots of memory. Both Diffpack and ARPACK have parallelization support, and
this is possible to take advantage of (not only in principle), even though some implementation details are not yet sorted out.
Simulating, for example, the eigenvalues of a hydrogen atom with an arbitrary magnetic field on a grid with, say, 10⁶ grid points is then easily done, yielding
results superior to those presented in this text. Of course, the various details of memory and storage requirements, the expected accuracy of such simulations and so on
must be investigated before embarking on such a mission.
Quantum Chaos. Quantum chaos is the study of quantum mechanical systems corresponding to classical systems whose trajectories are chaotic, e.g., a double pendulum,
three-body systems, a particle in a stadium-shaped box, etc. In section 15.6 of Ref. 
eigenstates of the latter system are depicted, and perhaps the most characteristic feature is the so-called scarring of the highly-excited states. The classically periodic paths,
which are unstable in the sense of Lyapunov, “shine through” the otherwise noisy structure of the eigenstates. This is obviously connected with the correspondence principle,
but the fundamental mechanisms are not fully understood at the time of writing.
When we studied the strong-field limit of the two-dimensional hydrogen atom with
an applied magnetic field we saw the correspondence principle at work, as the classically
periodic trajectories were seen in the quantum mechanical eigenstates. The idea is
then to use the techniques presented here for arbitrary systems to produce scarred
eigenstates of high energy. In addition one may do statistical computations on the
eigenvalues, e.g., as function of mesh width.
To what extent the finite element method has been applied to such systems I have
not investigated, but it is clear that rich insight can be gained into both the finite
element method itself and the spectra and eigenfunctions of the finite element matrices.
Gauge Invariance. We know that quantum physics is invariant with respect to gauge
transformations as described in section 1.6. What we have not studied up to this
point is whether or not the numerical methods are gauge invariant. By gauge
invariance of the numerical methods in the eigenvalue context we mean that the eigenvalues are left unchanged under a gauge transformation and that the eigenfunctions
are related by a unitary change of basis similar to that of the continuous problem. Our
HydroEigen simulator class actually implements a non-symmetric gauge in addition to
the symmetric gauge used in the simulations in this chapter. Due to time limitations
no simulations worth mentioning have been done for this thesis.
Gauge invariance of numerical methods in general is a very interesting subject for
further study. First of all, it has a profound impact on our understanding
of numerical methods. Second, it will automatically yield valuable conﬁrmative or
dismissive information on the quality of the results of a numerical experiment. If the
methods are known to be gauge invariant up to a term of, for example,
O(∆t⁴), we know that it is much smaller than the error from a simulation with the leap-frog
scheme or the split-operator method used in section 4.3.
Improvement of the Program and the Methods. Even though we have used the ﬁnite
element method quite generally, the complete picture has not been drawn. So-called
adaptive ﬁnite element methods may be used to successively improve the grid based
on local a posteriori error estimates. Such estimates must be derived for each PDE
in question and allow for estimating, for example, the norm of the error
over each element. If the error is above some threshold one refines the grid around
the element and performs a new simulation. In Ref.  such a technique for the
Schrödinger equation is presented. Adaptive ﬁnite element methods are treated to
some extent in Ref. .
One can imagine that adaptive FEM can be used to ﬁnd a suitable grid for the
lowest-lying states of the physical problem, and then use this grid in a time-dependent
simulation.
As for the program, many improvements can be made. Besides cosmetic changes, a
more robust handling of matrix types would be valuable. As of now, only MatSparse
and MatBand are supported, and only in a limited manner. Not all storage options
inside the matrices are handled properly. A complete and robust eigenvalue program
for the Hamiltonian could also serve as a starting point for more general eigenvalue
problem implementations in Diffpack, as the built-in support for this is virtually nonexistent at the time of writing.
Chapter 7
Solving the Time Dependent Schrödinger Equation
7.1 Physical System
We will implement the solver for a single charged particle in an attractive Coulomb
potential with an applied time-dependent magnetic ﬁeld. We will choose a simple
model for an ultra-short laser pulse with amplitude γ0, viz.,

γ(t) = γ0 sin²(πt/T) cos[2πω(t − T/2 + δ)].   (7.1)
Here, ω is the frequency of the laser and δ is a phase shift. The envelope rises from
zero to γ0 at t = T /2 and falls back again to zero at t = T , i.e., at the end of the
simulation.
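The pulse model of Eqn. (7.1) is straightforward to evaluate; an illustrative sketch of our own:

```python
import math

def gamma_pulse(t, gamma0, T, omega, delta):
    """Laser pulse of Eqn. (7.1): sin^2 envelope times a carrier wave."""
    envelope = math.sin(math.pi * t / T) ** 2
    return gamma0 * envelope * math.cos(2.0 * math.pi * omega * (t - T / 2.0 + delta))

# The envelope vanishes at t = 0 and t = T and peaks at t = T/2:
T = 100.0
assert gamma_pulse(0.0, 1.0, T, 0.1, 0.0) == 0.0
assert abs(gamma_pulse(T, 1.0, T, 0.1, 0.0)) < 1e-12
assert abs(gamma_pulse(T / 2.0, 1.0, T, 0.1, 0.0) - 1.0) < 1e-12
```

With δ = 0 the carrier has a maximum at the envelope peak, so the pulse attains the full amplitude γ0 at t = T/2.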
Such laser pulses are actually possible to create experimentally, see Ref. . Usually one considers very long pulses in which ω ≫ 1/T. With this model perturbative
methods may yield very accurate answers, but when ωT ∼ 1 we are outside the regime
for such treatment. The physical system is also possible to create in a laboratory,
and variants include electrons trapped in anharmonic oscillator potentials, ions in Paul
traps, etc., see Ref. . Therefore it poses an interesting starting point for doing
simulations. Furthermore, simpler systems such as the free particle can be created as
special cases in the program by turning on and oﬀ the various parameters available.
In this thesis we will only consider this magnetic ﬁeld as a toy model with which
we test our simulator.
We will implement the possibility of either using an eigenfunction of the Hamiltonian (typically with γ = 0) or a Gaussian wave packet as initial condition. This is
described further in the next section.
We have to make a comment on the Hamiltonian. The vector potential A in both
the symmetric and non-symmetric gauges is obtained via the dipole approximation, i.e.,
one assumes that the electromagnetic ﬁelds depend only on time. This is reasonable if
one imagines a very distant source of radiation. However, as was pointed out on page
54, we are actually neglecting Maxwell’s equations. A spatially independent magnetic
(or electric) field must be a constant field in order to be consistent. We should have an
extra term proportional to the scalar potential in the Hamiltonian. The electric ﬁeld
is given by

E = −(1/c) ∂A/∂t,

but this is easily seen to be spatially dependent! It is also easy to see that the physical
electric field becomes very different in the two gauges.
We will ignore these difficulties in this chapter, and simply note that introducing rigor
to this topic is an interesting extension of the discussion.
7.2 The Implementation
The implementation of the time dependent solver (class TimeSolver) is derived from
class HydroEigen. There are several advantages to reusing the code from the eigenvalue solver. First, many features of the problems are common, e.g., the Hamiltonian
and the finite element assembly process. This reduces the amount of code needed. Indeed, the
source code for the time dependent solver is a great deal shorter than the HydroEigen
class definition, even though one can hardly say that the problem is less complex for
that reason. Second, the final application may be used to solve both the eigenvalue
problem and the time dependent problem. It is also easier to keep the interfaces common for the two simulation types, which is highly desirable. Unfortunately some loss
of efficiency is inevitable with this approach.
7.2.1 Time Stepping
Although we have described the numerical methods in detail elsewhere, here we present in explicit terms the updating algorithms for the numerical wave functions in the leap-frog and theta-rule methods.
The Leap-Frog Scheme. The updating rule for the leap-frog scheme (4.27) in matrix form reads

u^{l+1} = u^{l−1} − 2i∆t M^{−1} H^l u^l,
where H is the ﬁnite element Hamiltonian used in the eigenvalue solver and where M
is the mass matrix; a positive deﬁnite and symmetric matrix. The strategy is to build
a linear system M v = b at each time step, where
b = H^l u^l .
The solution at the next time step then becomes
u^{l+1} = u^{l−1} − 2i∆t v.
The linear system becomes very easy to solve explicitly if we lump the mass matrix.
This is taken advantage of in the implementation with a special hand-coded diagonal
solver.
Given the initial condition u^0 (which may be a Gaussian wave packet or an eigenstate of the Hamiltonian) we need a special rule to find u^1 . This special rule is simply
a single iteration with the theta-rule.
The implementational details of building the linear system are somewhat complicated due to the structure of Diffpack's solution algorithms and the wish to reuse the code for calculating the Hamiltonian from class HydroEigen. The resulting code should be quite efficient, though. The overhead of extra function calls does not dominate the
time spent in the new element assembly routine TimeSolver::makeSystem().
The Theta-Rule. In matrix form the updating rule (4.19) for the theta-rule reads

u^{l+1} = [M + iθ∆t H^{l+1}]^{−1} [M − i(1 − θ)∆t H^l] u^l .

If we define b = [M − i(1 − θ)∆t H^l] u^l , our new wave function becomes the solution to the linear system A u^{l+1} = b, with

A = M + iθ∆t H^{l+1} .
7.2 – The Implementation
The theta-rule is somewhat trickier than the leap-frog scheme to implement in an
eﬃcient manner. This is due to the fact that the Hamiltonian at diﬀerent time levels
is needed on the left hand side and the right hand side of the linear system.
We will always use θ = 1/2 as we know this is the best choice. For this reason we
refer to the Crank-Nicholson method instead of the theta-rule in the following.
Solving Linear Systems. For the time dependent solver the total time spent at the
assembly process can be considerable. The linear systems must also be solved in an
eﬃcient manner as we deal with matrices of potentially very large sizes. It is obvious
for two reasons that we must avoid the use of banded matrices. First, banded matrices
store a lot of zeroes for very sparse systems, as can be seen in Fig. 5.1. Second, the linear
solvers are very slow with banded matrices due to numerous superﬂuous multiplications
and additions with zeroes. For our simulations we will therefore exclusively use iterative
methods and sparse matrices.
For the simulations in this chapter we stick to the Generalized Minimal Residual
method (GMRES in Diffpack, see Ref. ). It performs well on both types of linear equations we deal with.
Keeping Track of the Solution. For the purpose of analyzing data from the simulation,
the log ﬁle is written as a Matlab script. The script deﬁnes arrays and ﬁlls them with
simulation data, such as the time (the t array), the square norm of the solution (the
u norm array) and so on. The arrays are located in a struct variable casename data.
Some of the variables and arrays that are deﬁned are:
– casename: The case name for the simulation.
– n rep: The number of reports. Equals the number of rows in each array.
– t: The time array.
– gamma: The magnetic ﬁeld array.
– dt: The time step.
– theta: The parameter for the theta rule.
– gamma0, omega and delta: The parameters for the magnetic ﬁeld.
– gaussian params: A string with the parameters for the Gaussian initial condition.
– u norm: Array with the norm of the solution.
– u energy: Array with the energy of the solution, i.e., the expectation value of
the Hamiltonian.
– u pos: Array with the expectation value of the position. The ﬁrst column is the
x-coordinate and the second column is the y-coordinate.
– system time: An array with the time for each assembly of the linear system.
– solve time: The time for each solution of a linear system. (Not reported if the
leap-frog method is used with a lumped matrix.)
– solve niter: The number of iterations taken for the solution to converge.
– solve total and system total: The total time for the solution of linear systems
and the assembly of the systems, respectively.
7.2.2 Description of the Member Functions
Here we describe the most important new methods of class TimeSolver. Many member functions are inherited from class HydroEigen and are therefore not described.
define(). The menu system is extended with several parameters. The menu items
from class HydroEigen such as the parameters for turning on and oﬀ terms in the
Hamiltonian are kept. All the new parameters except for simulation in time are
added inside a submenu to distinguish them from the eigenvalue problem parameters.
The submenu can be reached with the command ‘sub time’.
– simulation in time This boolean parameter selects the time dependent solver if
its value is true and solves an eigenvalue problem otherwise.
– time method This parameter is used to select the time integration method. The
only legal values are the strings theta-rule and leap-frog.
– T This is the time at which the simulation is to be stopped. In other words the
simulation is performed for t ∈ [0, T].
– dt This is the time step. Thus, about T/dt steps are taken in the simulation.
– n rep This parameter is an integer signifying how many reports are desired throughout the simulation. Thus, at the end of every time interval of length T/n rep ﬁelds
and various physical quantities are written to the ﬁeld database and the log ﬁle.
A report is automatically issued before any time integration is done, making the
total number of reports n rep + 1.
– theta This is the parameter for the theta-rule.
– gamma0, delta and omega These are the parameters for the time dependent magnetic ﬁeld in Eqn. (7.1).
– ic type Two kinds of initial conditions are implemented. If ic type is set to
gaussian, a Gaussian wave packet will be used. If ic type is equal to field, a
ﬁeld is read from a database and used as initial condition.
– field database This is the casename of a previously stored set of eigenvectors from
an eigenvalue simulation. The fields are read with the method loadFields(), which overwrites any grid defined in gridfile, see section 6.5. If the value of the
parameter is none, no ﬁeld database will be loaded.
– ic field no This integer is the index of the ﬁeld to use as initial condition. If
the index is larger than the number of ﬁelds in the field database, the ground
state, i.e., ﬁeld number one, is automatically used.
– gaussian This is a string with parameters for the Gaussian initial condition. It is a
string with switches such as -x followed by a real number. The default value of
gaussian is ‘-x -10 -y 0 -sx 2 -sy 2 -kx 0 -ky 1’. The switches -x and -y set the mean position, the switches -sx and -sy set the standard deviations in the x and y directions, and finally -kx and -ky set the mean momentum.
fillEssBC(). This method imposes the essential boundary conditions, i.e., the homogeneous Dirichlet conditions. It loops through the grid nodes and gives the DegFreeFE
*dof object information on where to prescribe zeroes. This object is then responsible
for altering the element matrix before any linear system of equations is solved.
setIC(). This method sets the initial condition according to the choices in the menu.
The two ﬁelds u and u prev are set equal to a Gaussian or an eigenvector from the
field database.
The two ﬁelds represent the current solution (u) and the previous solution (u prev).
In the theta-rule the u prev field is not strictly necessary, because we only need the current solution to find the next.
scan(). The scan() function is updated heavily, with support for reading previously
stored ﬁelds from a simres database. It is important to notice that if such a database
is loaded, the grid deﬁned with the gridfile menu entry is overruled with the grid
stored in the database.
The scan() function also allocates memory for various objects, such as scratch
vectors needed in the time integration. In other respects the function is similar to the
scan() function in class HydroEigen.
gammaFunc(). This method updates the magnetic field according to the time passed as parameter.
calcExpectationFromMatrix(). This function calculates the matrix element of a
Matrix(NUMT) object with respect to two given ﬁelds. This is particularly useful if one
wants to calculate the expectation value of observables available as matrices, e.g., the
Hamiltonian.1
calcExpectation pos(). It is useful to be able to calculate the mean position of the
wave function during the course of a simulation. This is the purpose of this method
which returns a Ptv(real) object holding x and y.
loadFields(). This function reads ﬁelds stored in a simres database. The ﬁelds are
assumed to have been stored by HydroEigen as the method selects only ﬁelds with an
integer as label.
If ﬁelds are actually read, the GridFE object in the simulator is overwritten with
the grid from the ﬁelds.
integrands(). The new integrands() method calculates the integrand used in the
assembly of the left and right hand side of the linear equations. It can also compute
the Hamiltonian; this is necessary if one wants to solve an eigenvalue problem instead
of a time dependent problem.
We will not go into details on the integrands() method; the code should contain
more than enough comments. We mention though that the use of the local-to-global
mapping of nodes q(e, r) is central when calculating the right hand side terms. This
mapping can be found in the (public) member VecSimple(int) loc2glob u of class
ElmMatVec. This class contains an n × n matrix Mat(NUMT) A and an n-dimensional
vector Vec(NUMT) b, where n is the number of nodes in the current element. These are
the local element matrix and element vector, respectively, and in the assembly process
each term Ae and be of Eqn. (4.16) is given in terms of these smaller matrices and the
q(e, r) mapping, in a way generalizing the one-dimensional example from section 4.4.2.
solveProblem(). This is a central class member function. It ﬁrst checks whether a
time dependent simulation actually is wanted (and branches to HydroEigen::solveProblem() if not) and then loops through the desired time levels.
1 The matrix used is the Hamiltonian modified due to essential boundary conditions and then symmetrized, see Ref. . Whether this matrix has the same expectation value as the original matrix when we know that the BCs are fulfilled is unclear at the moment, but when the wave function is essentially zero near the boundary we may assume that it is very close.
reportAtThisTimeStep(). During the simulation it is desirable that the simulator
provides some quantitative results, such as the norm of the solution, the mean position,
the energy and so on. This is provided by the reportAtThisTimeStep() method. It
combines dumping the solution u and the corresponding probability density ﬁeld to
the simres database (named with casename ; see the description of class HydroEigen)
with writing statistical data to stderr and to casename.log in Matlab format.
7.2.3 Comments
More on the Initial Conditions. The wave packet used for initial conditions is implemented as a subclass GaussFunc of the Diﬀpack class FieldFunc, i.e., a functor that
encapsulates functions that can be evaluated at a space-time point. For example, a
FieldFE object such as the current solution u can be initialized to a Gaussian with
this code:
GaussFunc gaussian; // create functor
gaussian.scan(menu.get("gaussian")); // init functor
u.fill(gaussian); // fill u
The mathematical expression used for the nodal values of the Gaussian is
(x[j] − x0 )2
(y [j] − y0 )2
[j]
Uj = exp −
−
+ ik · x
.
2σx2
2σy2
The parameters x0 , y0 , σx and σy correspond to the switches -x, -y, -sx and -sy of
the initialization string, respectively. The momentum k corresponds to -kx and -ky.
Numerical normalization is provided by the setIC() method and calcInnerProd().
As described above one can use a ﬁeld read from a ﬁeld database as initial condition
as well as a Gaussian wave packet. In the eigenvalue solver class HydroEigen the ﬁelds
are stored and labeled with the integer index k of the corresponding eigenvalue. (The
corresponding probability densities are labeled prob k .) The loadFields() method
reads the ﬁelds with integer labels only and stores them in an array of Handle(FieldFE)
objects. The ﬁeld objects are useful when one wants to compute the spectrum of a
numerical state, i.e., the overlap with the eigenstates of the (stationary) Hamiltonian.
This can be done easily with HydroEigen::calcInnerProduct().
Analytical Integration. It is claimed in this thesis that Diﬀpack solely relies on numerical integration to compute the systems of linear equations. This is not entirely true as
one can override the FEM::calcElmMatVec() method with another one which actually
computes the element vectors and matrices analytically. For the Coulomb interaction
term this is actually possible, and we may avoid any numerical inaccuracy due to the
limited order of the Gaussian quadrature.
This is not investigated any further in this thesis, but might be interesting in a
future project.
Obtaining a Finite Diﬀerence Scheme. We may obtain the standard ﬁnite diﬀerence
scheme (at least for some classes of Hamiltonians) with the ﬁnite element formulation
and a special set of parameters. More speciﬁcally, we use a uniform grid, linear elements
and simple nodal-point numerical integration. Let us demonstrate this for a one-dimensional example.
Let h be the grid spacing. Let there be m nodes such that Ω = [0, (m − 1)h] and
the grid points are given by xk = (k − 1)h. The nodal-point integration is deﬁned by
∫_Ω f(x) dx ≈ h Σ_{k=1}^{m} f(x_k).
The mass matrix becomes
M_ij = h Σ_k N_i(x_k) N_j(x_k) = h Σ_k δ_ik δ_kj = h δ_ij ,
i.e., we obtain the lumped mass matrix. For an arbitrary operator V (x) we similarly
obtain the matrix
V_ij = h δ_ij V(x_i),
i.e., a diagonal operator. Notice that the nodal point integration corresponds to zeroth-order Gaussian quadrature in local coordinates. Hence, it integrates constant functions analytically. The derivatives N_i′(x) are constant over each element. Therefore, the stiffness matrix is integrated analytically, and yields the standard finite difference operator
(multiplied by h), viz.,
K_ii = 2/h,   K_{i,i±1} = −1/h.
Thereby, any Hamiltonian of the form

H = −∂²/∂x² + V(x)
is discretized with a scheme identical to the standard ﬁnite diﬀerence scheme.
The argument generalizes to more dimensions, but we must be careful with the
angular momentum operator and other first order differential operators which do not fit in so easily. In any case, we can consider the finite element method with linear elements and
nodal point integration (and a lumped mass matrix) as corresponding to a (modiﬁed)
ﬁnite diﬀerence scheme. This rule also in some sense generalizes the ﬁnite diﬀerence
concept if we include non-uniform meshes and perhaps triangular elements as well.
7.3
Numerical Experiments
In this section we do a collection of numerical experiments with various settings to
ﬁgure out some properties of the numerical methods. We do not focus on physics right
now, because it is more important to know whether our methods work and which methods are best to use.
We will use a mixture of analytical reasoning and physical intuition when analyzing
the results. The aspects we wish to study are:
– The time spent on building and solving linear systems. When using ﬁnite element
methods (and implicit ﬁnite diﬀerence methods) most of the computational eﬀort
is put into solving linear systems of equations. For time dependent problems
the time spent on building the systems is also an important factor. We try to
determine the relationship between the size (i.e., the number of nodes) in the
grid and the time spent on these operations for diﬀerent methods.
– Comparison of the Crank-Nicholson scheme and the leap-frog scheme. The schemes
may seem similar when it comes to computational cost, at least for ﬁnite element
methods. There are however some diﬀerences, and we will try to study some of
them.
– Diﬀerent element types. The ﬁnite element method allows a wide range of element
types, such as triangular elements, quadratic elements and so forth. We do a quick
comparison between linear and quadratic ﬁnite elements.
– A simulation of the full physical model. Finally we are ready to do a simulation of the full system. We will describe the results qualitatively and compare
simulations with diﬀerent methods.
Hopefully, these experiments will provide us with valuable information on the performance of the numerical methods. Along the way we will also come across several
interesting questions that may lay grounds for exciting future work, both physical and
numerical.
It is important to keep in mind that the experiments performed here are very simple and do not exploit the full capability of the finite element method, such as the extremely flexible geometry of the grid. Analyzing numerical results is much simpler on highly structured grids. When one wants to perform “real” experiments, the flexibility in the
structured grids. When one wants to perform “real” experiments, the ﬂexibility in the
location and size of the elements should be exploited.
7.3.1 Building and Solving Linear Systems
[Plots omitted; the fitted lines shown in the figure are polyfit(log(n), log(t_system)) = [2.0005 −8.0297] and polyfit(log(n), log(t_solve)) = [2.0007 −8.9127].]
Figure 7.1: Lower left: the time for creating linear systems for each time step. Upper
left: Log-log plot of the average time spent on building the system. Lower right: The
time for solving the linear systems with n = 160 and the number of iterations needed.
Upper right: Log-log plot of the average time spent on the solving.
The most time consuming part of a ﬁnite element simulation is the solution of linear
systems. These tend to be very large and sparse. Here we will investigate the increase
in the time spent on solving linear equations as function of the number of nodes in the
grid. Here and in the rest of the simulations in this section, we will employ a “finite
diﬀerence grid,” i.e., a grid with uniform spacing in every direction and elements with
a square shape.
The time spent on building linear systems should be approximately the same from
time step to time step. In the theta-rule we build two Hamiltonians and mass matrices
(for the left and right hand side, respectively) at each time step, roughly doubling the time spent relative to the leap-frog scheme.
The linear systems are solved with the GMRES method. The iteration process requires
a start vector, and we use the previous solution u^l as initial vector when solving for u^{l+1}. At least for the Crank-Nicholson scheme it is intuitively close to the new solution and therefore we expect that the iteration process converges fast. One may also expect that the time step affects the convergence rate: The smaller the time step, the faster the convergence. This can turn out to be very important if true! Indeed, then the simulation time does not increase linearly with the total number of time steps, but slower than linearly, i.e., as O(∆t^{−1/p}) with p ≥ 1.
We perform a series of simulations with constant physical problem deﬁnition but
with increasing number of subdivisions n in each spatial direction.
– We use the full Hamiltonian with the Coulomb term and the time-dependent
magnetic ﬁeld with ω = δ = 0 and γ0 = 1 in order to capture eﬀects from every
aspect of the full problem.
– Our initial condition is a Gaussian wave packet located at x_0 = (−10, −10) with momentum k = (√2, √2) and width σ = (4, 4). It is located safely away from the origin but moving towards it with velocity 2k. (Its speed is |2k| = 4.)
– We simulate for t ∈ [0, 1] with ∆t = 0.002.
– We utilize a square domain Ω = [−20, 20] × [−20, 20] subdivided into n2 squares
with sides 40/n.
– We let n vary over a series of single simulations, viz.,

n = (40, 80, 120, 160).    (7.2)
– We utilize the leap-frog scheme in these simulations.
We monitored the time t_system(n, l) spent on building the linear systems and the time t_solve(n, l) spent on solving the linear equations, where l is the time step number. It is expected that the time spent on building the linear system is independent of l because it is the exact same sequence of operations that is performed each time. This is also observed, see Fig. 7.1 in which we have graphed the time spent on building the linear systems for each simulation. All the simulations produced qualitatively the same time-dependent solutions. (They are not very interesting so we do not show them here.) The time spent on solving the equations is also expected to not depend too much on the time step number l, and this is also seen in Fig. 7.1.
The time step was unfortunately chosen arbitrarily and believed to be within the
stability condition for the leap-frog scheme with linear elements, while it actually violated the condition for the largest grids. The simulations for the two ﬁnest grids
developed instabilities near the middle and end of the simulations. This is the reason
why the corresponding graphs are shorter.
The average (over l) time in seconds spent on building linear systems is

t̄_system = (0.5226, 2.0796, 4.7194, 8.3457).
The order of the numbers is from low to high n. The average time spent on solving
the systems is
t̄_solve = (0.2346, 0.7656, 1.7548, 3.9920).
We conjecture that the average time t̄(n) = O(np ) for some p ≥ 1 for both cases.
To check this we graph the logarithm of t̄ versus the logarithm of n, similar to what
we did in section 6.6. Fig. 7.1 shows these plots. The data were run through Matlab’s
regression function polyfit and the resulting polynomials are shown as well. (For
t̄system the linear function is not plotted because it is indistinguishable from the data.)
Clearly, we have
t̄_system, t̄_solve ∼ n² = N
[Plot omitted; it shows ⟨x(t)⟩ for the cases LF 0.0015, LFL 0.0015, CN 0.0015, CN 0.01, CN 0.02 and CN 0.1, together with the expected straight line.]
Figure 7.2: Expectation value ⟨x⟩ for the different simulations
where N is the dimension of the assembled system matrix. In other words we have approximately a linear scaling of the computational cost with the leap-frog method. It is not
unreasonable to guess that this holds also for the Crank-Nicholson scheme, because the
matrices have exactly the same structure in the two cases.
Contrast the O(N) result of solving linear systems with the O(N³) result for Gaussian elimination of dense matrices. It would be interesting to check this scaling property with other element types, such as quadratic elements that generate denser matrices.
As for Fig. 7.1 a few comments are in order. First, the plots with t on the horizontal axis (the lower plots) show small fluctuations. This may be due to, for example, other processes running on the computer. The number of iterations n_iter is clearly proportional to t_solve, reflecting that each iteration requires a fixed amount of numerical work.
7.3.2 Comparing the Crank-Nicholson and the Leap-Frog Schemes
The Crank-Nicholson scheme and the leap-frog scheme might look similar at ﬁrst when
it comes to numerical cost. If we do not lump the mass matrix in the leap-frog scheme,
we need to solve a linear system of equations at each time step. Considering the stability
criterion for each scheme, the leap-frog scheme might look useless when compared to
the Crank-Nicholson scheme. As we saw in section 4.5.4, the unitarity of the scheme
(for a time-independent Hamiltonian) is only secured if the time step is smaller than the inverse of the largest eigenvalue of H, viz.,

∆t ≤ 1/λ_max .
The Crank-Nicholson scheme has no such restrictions in the time-independent case.
However, there are two questions that must be answered. First, is the accuracy of
the Crank-Nicholson scheme just as good as the leap-frog scheme if we take longer time
steps in the former? Maybe we need to reduce the time step in order to achieve the
same precision as the leap-frog scheme. Second, are the solutions to the linear equations
utilizing M as the coefficient matrix just as expensive as the ones used in the Crank-Nicholson scheme? Intuitively no, because M is positive definite and symmetric, while
the coeﬃcient matrix in the Crank-Nicholson scheme is non-symmetric. Furthermore,
lumping the mass matrix in the leap-frog scheme makes the linear systems trivial to
solve, improving the eﬃciency drastically. (If we also use nodal-point integration and
linear elements, we obtain a scheme equivalent to the standard ﬁnite diﬀerence scheme.)
To determine some answers experimentally we perform a series of simulations with
a time independent Hamiltonian with diﬀerent methods and time steps. We use the
particle-in-box Hamiltonian and a Gaussian initial condition, because we know (at least before the packet hits the boundary) the exact qualitative behavior of the wave packet and expectation values of position. Indeed, the free wave packet has mean
momentum k , so the mean position should be
⟨x⟩ = x_0 + 2tk,
where x0 is the initial mean position and where k is the momentum.
The wave packet used for our simulations starts out at x_0 = (0, 0) with momentum
k = (2, 0). The expected velocity is therefore v = 2k = (4, 0). The width of the wave
packet is σ = (5, 5).
The grid is a “ﬁnite diﬀerence grid” discretizing the domain Ω = [−16, 16]2 with
n = 160 subdivisions along each side. We use linear elements. The stability criterion
for the leap-frog scheme is
∆t ≤ h²/8 = 0.005
for the ﬁnite diﬀerence method, and roughly
∆t ≤ h²/24 = 0.0016666
for the linear ﬁnite elements. (The latter is only an estimate because the highest energy
in two dimensions in the ﬁnite element method is not exactly twice the one-dimensional
energy.) Therefore we use ∆t = 0.0015 for the leap-frog method, hopefully staying on
the right side of the stability criterion. In the simulations we let t ∈ [0, 4].
We perform six diﬀerent simulations with the following numerical parameters:
1. The leap-frog method with the full mass matrix, ∆t = 0.0015. This case is
referred to as LF 0.0015 in the figures etc.
The total simulation time (in seconds) for this case was ttotal ≈ 15.5 · 103 .
2. The leap-frog method with a lumped mass matrix and nodal point integration,
i.e., the ﬁnite diﬀerence method, and using the same time step as before. This is
referred to as LFL 0.0015.
The total simulation time was ttotal ≈ 10.1 · 103 .
3. The Crank-Nicholson method with ∆t = 0.0015. This is referred to as CN 0.0015.
The total simulation time was ttotal ≈ 32 · 103 .
4. Same as the previous, but with time step ∆t = 0.01. This is referred to as CN
0.01.
The total simulation time was (somewhat surprisingly) ttotal ≈ 2.3 · 103 .
5. Same as the previous, but with time step ∆t = 0.02. This is referred to as CN
0.02.
The total simulation time (in seconds) for this case was ttotal ≈ 1.1 · 103 .
[Plot omitted; it shows log10(abs(||u||² − 1)) versus t for the cases LF 0.0015, LFL 0.0015, CN 0.0015, CN 0.02, CN 0.01 and CN 0.1.]
Figure 7.3: Deviation from unitarity in the simulations
6. Finally, we use a time step ∆t = 0.1 in the Crank-Nicholson method. This we
refer to as CN 0.1.
The total simulation time was ttotal = 665.8.
Animations of the simulations can be reached from Ref. . These are only interesting inasmuch as they are all qualitatively the same, and they reproduce the expected qualitative behavior.2
The simulation times must be taken with a pinch of salt. The finite difference scheme LFL 0.0015 was not optimized in the implementation. In fact, building linear systems and solving them (even if they are trivial) represents much more work than needed in an implementation of the method. A proper implementation is
much faster. The Crank-Nicholson method on the other hand needs solutions to linear
systems, even in the ﬁnite diﬀerence case.
In Fig. 7.2 the mean position ⟨x⟩ as a function of t is plotted for each simulation. The
expected behavior is plotted as a dashed straight line. We notice that the expected
behavior is largely reproduced. The slope of the curves drops near the end of the
simulations due to collision with the boundary. Contrary to a classical particle which
bounces right oﬀ the wall, the quantum particle turns around in a smooth fashion.
The different simulations show small but important variations in the mean position ⟨x⟩. Some of the simulations overestimate the velocity of the packet while others underestimate it. The simulations that overestimate the velocity are LF 0.0015, CN 0.0015, CN 0.01 and CN 0.02. The simulation LFL 0.0015, i.e., the finite difference method, yielded qualitatively the same result as CN 0.1, i.e., Crank-Nicholson with a very large time step. This indicates (but it does not prove) that the finite difference leap-frog method is similar to Crank-Nicholson with a (much) larger time step than the stability criterion allows.
2 Actually, behavior characteristic of quantum mechanical collisions shows up in the animations, namely the packet's interference with itself as it collides with the wall. See also the link to other animations and descriptions in Ref.  with further investigation of this phenomenon.
[Plots omitted; each panel compares linear and quadratic elements.]
Figure 7.4: Various simulation data as function of time: The magnetic ﬁeld (top left),
the square norm (bottom left), the number of iterations per linear system (top right)
and the time per linear system (bottom right)
The leap-frog method for finite element methods (LF 0.0015) used roughly half the simulation time of the corresponding Crank-Nicholson simulation.
We will not show it here, but for larger time steps, solving linear equations in the
Crank-Nicholson method takes more time, indicating that the closeness of the previous
solution plays some role in the performance of the time stepping. This is a subject for
later studies.
The Crank-Nicholson method preserves the norm of the solution exactly, but as
we employ iterative solvers there might be some ﬂuctuations. The leap-frog scheme is
not exactly unitary because we have violated the assumption we used when deriving
the stability criterion by using the Crank-Nicholson integration for the ﬁrst time step.3
Fig. 7.3 shows the logarithm of the deviation from unity of the square norm of the
numerical solution. Clearly, Crank-Nicholson preserves unitarity better, with variations
around 10−6 . The leap-frog method shows clear ﬂuctuations, but never larger than
10−3 . This is acceptable if we normalize the wave function before calculating physical
quantities such as expectation values.
In summary, it seems that the Crank-Nicholson method can take bigger time steps
while reproducing the same results as the leap-frog scheme. The explicit ﬁnite diﬀerence
method is quick to implement, but one loses the advantages of the finite element
method, such as geometric freedom, the mass matrix (which yields higher accuracy of
the eigenvalues), and so on.
7.3.3 Comparing Linear and Quadratic Elements
We have several parameters that aﬀect the ﬁnite element method, most important is
perhaps the element order. Increasing the element order will increase the number of
3 The “hole” in the LF 0.0015 curve is due to missing data because of a mishap during the simulation.
Solving the Time Dependent Schrödinger Equation
Figure 7.5: Stills from simulation with linear (left) and quadratic (right) elements. The
time levels are (from bottom to top) t = 0.0000, t = 1.9825, t = 2.6825 and t = 4.0000.
non-zeroes in the matrices of the problem, thereby increasing the time needed for each
iteration in the linear solver. On the other hand, increasing the element order (while
keeping the number of grid points ﬁxed) may aﬀect the accuracy of the solution.
We will do a simulation with a varying magnetic field with linear and quadratic
elements, respectively, to check if they give very different results. If they do, then
one or both of them experience loss of numerical precision, because the two simulations
certainly cannot both be correct. This will indicate whether or not the Schrödinger equation
is sensitive to the order of the elements.
There are of course many more aspects to study, such as the grading of the grid,
the choice of geometry and so on.
Again a square grid was used. This time the domain was given by
Ω = [−40, 40] × [−40, 40],
with n = 200 subdivisions along each side. Two simulations with linear elements
and quadratic elements, respectively, were performed. Thus, we had 200² = 40,000
elements in the linear case and 100² = 10,000 elements in the quadratic case. The
magnetic ﬁeld parameters were ω = δ = 0 and γ0 = 1 and the initial condition was a
Gaussian with
x0 = (0, 0), k = (0, 2), and σ = (10, 10).
The Coulomb attraction was turned oﬀ; hence we had a free particle with an applied
magnetic ﬁeld. The time interval was t ∈ [0, 4] with ∆t = 0.0025. We used the
Crank-Nicholson scheme in both cases.
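The initial condition just described can be set up along the following lines; the width convention of the Gaussian is an assumption, since the packet's functional form is not restated in this section:

```python
import numpy as np

# A sketch of the Gaussian initial condition on the grid used above:
# Omega = [-40, 40] x [-40, 40] with n = 200 subdivisions, x0 = (0, 0),
# k = (0, 2), sigma = (10, 10). The width convention
# exp(-((x - x0)^2/(2 sx^2) + (y - y0)^2/(2 sy^2))) is an assumption.
n = 200
x = np.linspace(-40.0, 40.0, n + 1)
y = np.linspace(-40.0, 40.0, n + 1)
X, Y = np.meshgrid(x, y, indexing="ij")

x0, y0 = 0.0, 0.0        # packet center
kx, ky = 0.0, 2.0        # mean wave number
sx, sy = 10.0, 10.0      # packet widths

psi = (np.exp(-((X - x0)**2 / (2 * sx**2) + (Y - y0)**2 / (2 * sy**2)))
       * np.exp(1j * (kx * X + ky * Y)))

h = x[1] - x[0]
psi /= np.sqrt(np.sum(np.abs(psi)**2) * h * h)   # normalize ||psi||^2 to 1
```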
The only differing setting was the element type. Hence, if the simulations agree
we must conclude that the Schrödinger equation appears insensitive to the element
order (even though two experiments are too few to draw decisive conclusions). If
the simulations disagree, we must conclude that the element order does have some
importance.
The particular system and initial condition we tested turned out to show very
interesting dynamics: The wave packet moves like a free particle as expected until
the magnetic ﬁeld starts to become signiﬁcant. The wave packet then shrinks very
quickly, concentrating into almost a single point. As the magnetic ﬁeld diminishes it
spreads out again. When concentrated in an area covering only a few elements, all the
information from the original wave packet is reduced to only a handful of numbers, i.e.,
the components of the numerical wave function. Hence, there is loss of information.
This means that we are on thin ice when studying the wave function dynamics after
this event.
Animations showing the development of the probability density can be reached
from Ref. . Fig. 7.5 shows some still images of the probability density for the two
simulations at key points in the development. Fig. 7.4 shows the development of the
square norm of the wave function, the magnetic ﬁeld γ(t) and the time and number
of iterations spent on solving the linear systems for each time step. The time levels at
which we have plotted the probability density are shown as circles on the graphs.
The qualitative behavior of the probability density is similar for linear and quadratic
elements, at least before the “collapse” of the wave function into approximately a
single point around t = 2.6825. A noteworthy difference is that the quadratic element
simulation tends to develop high-frequency spatial oscillations near the edge of the wave
packet. These can be seen at t = 1.9825. Whether this is a physical or a numerical effect
is not easy to say, but it certainly deserves an investigation. If it is a numerical effect,
it indicates that (at least for this magnetic Hamiltonian) higher order elements
introduce noise. On the other hand, if it is a physical effect, it indicates that
higher order elements capture features of the wave function that would need a refinement of
the grid in the finite difference method.
The wave functions collapse approximately at the same time and place as can be
seen from the ﬁgure. After the collapse the wave functions do not bear resemblance
to each other, except for spreading rapidly. The spreading is easily understood in
terms of Heisenberg’s uncertainty principle. Highly localized wave packets have a
high uncertainty in the momentum, making the packet spread fast. The fact that the
wave functions are so diﬀerent means that the error must be large in either or both
simulations after the collapse.
Turning to the graphs in Fig. 7.4, the norm is seen to be approximately conserved for
both linear and quadratic elements, but shows a tendency to drop somewhat when the
magnetic ﬁeld is strong. The time spent on solving linear systems and the corresponding
number of iterations show the opposite behavior. When the magnetic field is strong,
several times more iterations are needed for solving the systems than when γ = 0. At
the time of collapse of the wave function (where the wave function changes very rapidly),
the solver does not seem to behave differently in this respect.
7.3.4 A Simulation of the Full Problem
As a ﬁnal study we will do a numerical experiment with the complete physical system.
We will start with a Gaussian wave packet in a Coulomb ﬁeld and add an ultra-short
laser pulse. We will not discuss the accuracy of this simulation, for that we need much
more time and space. However, we will describe the results qualitatively and generate
a motion picture of the resulting “trajectory” of the wave packet. We will do the
simulations for three diﬀerent parameter sets for the time integration for comparison.
The magnetic ﬁeld has parameters
γ0 = 1,
ω = 1.23,
δ = 0.13
and is shown in Fig. 7.7. The initial condition has parameters
x0 = (−5, −5),
k = (2, 0),
σ = (3, 3).
The initial condition along with other stills from one of the simulations is shown in
Fig. 7.6.
We will use three representative time integration schemes.
– We use the leap-frog method with ∆t = 0.001 and nodal point integration, i.e.,
we use the ﬁnite diﬀerence method. The time step is chosen based on h = 80/150
which gives
∆t < 0.03555.
We assume that the Coulomb potential has no eigenvalue of magnitude greater
than the maximum energy for the particle-in-box. This is reasonable, since the
ground state has magnitude |E0| ≤ 4 and since the system “looks like” a particle
in box for very highly excited states. Due to the changing magnetic ﬁeld we
choose ∆t even lower, namely
∆t = 0.001.
We refer to this simulation as LFL 0.001 in the ﬁgures.
– For the second simulation we use the Crank-Nicholson method with larger time
step, viz.,
∆t = 0.02.
We use linear elements and Gaussian quadrature and refer to the simulation as
CN 0.02.
Figure 7.6: Stills from simulation. Time levels are t = 0.00 (bottom left), t = 0.98
(bottom right), t = 1.98 (middle left), t = 2.98 (middle right), t = 3.48 (top left) and
t = 3.98 (top right).
– The third and last simulation also uses Crank-Nicholson but with quadratic elements. We refer to the simulation as CN 0.02 quad.
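The quoted time-step bound for the leap-frog run can be reproduced from the grid spacing alone. With h = 80/150, a stability limit of the form ∆t < h²/8 gives exactly the quoted figure; reading the constant as 1/8 (a largest Laplacian eigenvalue of 8/h² entering the stability criterion) is an assumption about the earlier derivation:

```python
# Reproduce the quoted leap-frog stability bound Delta_t < 0.03555 from the
# grid spacing h = 80/150, assuming the bound has the form h^2/8.
h = 80.0 / 150.0
dt_max = h**2 / 8.0
print(dt_max)   # 0.035555..., matching the quoted 0.03555
```

The chosen ∆t = 0.001 is then well inside the stable region, leaving a margin for the time-varying magnetic field.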
Stills from the LFL 0.001 simulation are shown in Fig. 7.6. Animations of all three
simulations can be reached from Ref. . The diﬀerent simulations yielded qualitatively
similar results for the probability density. Actually it is hard to spot any diﬀerence
at all. Furthermore, we cannot see any high-frequency oscillations in the quadratic
elements case as we did in Fig. 7.5. In Fig. 7.7 we have graphed the magnetic field γ,
the square norm ‖u‖² of the wave function, the energy ⟨H⟩ and the expectation value
⟨x⟩ of the position as function of time.
All the plots indicate that the Crank-Nicholson simulations behave very similarly to
each other. Furthermore, they are qualitatively diﬀerent from the leap-frog simulation
in all plots but the energy plot.
The square norm has large ﬂuctuations in the Crank-Nicholson scheme. The ﬂuctuations clearly follow the time development of the magnetic ﬁeld. Recall from section
Figure 7.7: Plots of dynamical quantities: The magnetic ﬁeld (top left), the mean
position (top right), the mean energy (bottom left) and the square norm (bottom
right).
4.5.4 that the norm should experience O(∆t³) fluctuations depending on ∂H/∂t for
both methods. This is clearly the case for all the simulations, and in fact the smaller
ﬂuctuations for the leap-frog method must be due to the smaller time step.
Even though we have fluctuations in the norm, the expectation value ⟨H⟩ seems
insensitive to these fluctuations as all three simulations give the same result, both
qualitatively and quantitatively.4 The energy fluctuates and follows the “beat” of the
magnetic field. This indicates that violating strict unitarity does not necessarily destroy
the physical content of the wave.
In the graph of the expectation value ⟨x⟩ (and in the animations) the wave packet
actually reverses its motion, indicating that the kinetic energy should have local
minima. This is clearly seen (if we assume that the potential energy is negligible). The
changing direction of the magnetic field (as γ goes from positive to negative or vice
versa) means that a changing torque is exerted on the particle. This classical line of
thinking is of course possible due to the correspondence principle and indicates that our
simulations behave according to the laws of quantum mechanics.
The expectation value ⟨x⟩ of the position shows that the wave packet is being
deflected by the Coulomb potential, similar to a classical system. An interference
pattern is also observed, connecting the wave packet to the origin with long “fingers”
of probability.
At the end of the simulation we observe something interesting. Similar to the system
in which the Coulomb potential was absent the wave packet starts to shrink. Thinking
classically we would expect the opposite to happen, i.e., that the packet would spread
out due to the uncertainty in the linear momentum. But in our simulations it seems
that the magnetic ﬁeld’s eﬀect on the particle is to localize it. This localization feature
is not present in the pure Coulomb system, see Ref. . An additional animation
showing a simulation in which γ = 0 is also found in Ref. .
4 The program output is not normalized, but this is done prior to plotting.
We have obtained these qualitative results in diﬀerent systems with diﬀerent numerical integration schemes, indicating that there might be some interesting physics
behind this behavior as well. Studying wave packet dynamics in magnetic ﬁelds should
be an interesting future project, also because it poses a numerical challenge if the wave
collapses into a very small area as in Fig. 7.5.
From chapter 6 we know that the time development of the wave function (at least
for a time independent Hamiltonian) is given directly in terms of the eigenvalues and
eigenvectors of the system. If the discretization reproduces the eigenvalues then the
time development of the exact system will be accurately reproduced. On the other
hand, if the eigenvalues are wrong, the time development of the discrete system will
not reﬂect the exact development. Even though our diﬀerent numerical methods all
yielded very similar results we cannot say that it reﬂects the true time development.
For that we need to know in some way whether the eigenvalues are correct. In the
present case we could do a diagonalization of the Hamiltonian over the current grid for
γ = 0. This would give further indications of a good numerical method and simulation.
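The proposed check, diagonalizing the γ = 0 Hamiltonian over the grid, can be sketched for a small 2D particle-in-box with finite differences; the grid size and the Hamiltonian H = −½∇² are illustrative assumptions chosen for speed:

```python
import numpy as np

# Diagonalize a finite difference discretization of H = -1/2 * Laplacian over
# a small grid and compare the lowest eigenvalue with the known exact value.
n = 30                      # interior grid points per direction (small, for speed)
L = 80.0                    # box side, as in Omega = [-40, 40]^2
h = L / (n + 1)

# 1D second-difference matrix with Dirichlet boundary conditions.
T = (np.diag(2.0 * np.ones(n))
     + np.diag(-np.ones(n - 1), 1)
     + np.diag(-np.ones(n - 1), -1)) / h**2
I = np.eye(n)
H = 0.5 * (np.kron(T, I) + np.kron(I, T))   # 2D Kronecker-sum Laplacian

vals = np.linalg.eigvalsh(H)
# Exact ground state of the continuous box: (pi^2/2)(1/L^2 + 1/L^2) = pi^2/L^2.
exact_ground = np.pi**2 / L**2
print(vals[0], exact_ground)   # the two should agree closely
```

Agreement of the lowest eigenvalues with known values would give further confidence in the grid before trusting the time-dependent results.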
7.4 Discussion
Even though the experiments were limited in extent, we have come across several important indications of the performance and behavior of the numerical methods. Can
we draw any conclusions from our work?
The most important beneﬁt of the Crank-Nicholson method over the leap-frog
scheme is stability. The method is stable for much larger time steps than the leap-frog
method. The time step required to reproduce the accuracy of the leap-frog scheme
(with ﬁnite diﬀerences in space) is much larger than the stability criterion for the
explicit scheme, making this beneﬁt even more important.
On the other hand, the ease of implementing the leap-frog scheme with finite differences makes it very attractive. The linear systems in the finite element method are
cheaper to solve with the leap-frog method, which is an important fact.
As for the spatial discretization, ﬁnite diﬀerences are without doubt the fastest to
implement and run. But we have seen that the ﬁnite element approach reproduces
eigenvalues that are qualitatively more correct than the ﬁnite diﬀerence method. Furthermore, there might be something to gain from using higher-order elements. We saw
that the time development of a wave function with quadratic elements was qualitatively
diﬀerent than with linear elements. Then again, the systems of linear equations are
harder to solve.
This takes us to the next important fact we have learned. Choosing the linear
solver appropriately is very important. The eﬃciency of the implicit time stepping
improves drastically if we choose a somewhat more sophisticated linear solver (i.e.,
iterative methods) than the usual banded Gaussian elimination that is used in most
two-dimensional approaches. In three dimensions it becomes vital. With the generalized minimal residual method the time spent on solving linear systems scaled linearly
with the matrix dimension, and this is optimal. Fine-tuning the choice and parameters
of the linear solver is an interesting future project.
Our simulations exclusively used a Gaussian wave packet as the initial condition even
though we implemented the possibility of using the eigenvectors of the Hamiltonian
over the current grid. This was mostly due to limitations in computing time when
the submission deadline approached. If the time-dependent simulations done here were
interesting, using for example the ground state of the hydrogen atom over a properly
graded and ﬁne-tuned grid would be really cool, to put it that way.
Chapter 8
Conclusion
This short chapter summarizes the results and ﬁndings of this cand. scient. project.
Along the way valuable experience with the numerical methods has been gained, and
some ideas for possible future work have emerged as well. The focus has not been
as much on physics as on numerical methods, but this is not a limitation. On the
contrary, I believe that the experience gained with the numerical methods may serve
as an invaluable basis when complicated simulations and work are to be done.
The Eigenvalue Problem. The eigenvalue problem of the ﬁnite element discretized
Hamiltonian is closely related to the development of the time-discrete wave function, as
we have seen. The eigenvalues dictate the evolution, but apart from this the eigenvalue
problem has few consequences for the time dependent simulations. Of course, we still
have the usefulness of being able to analyze the time dependent state with respect to
the spectrum of the Hamiltonian, but this is not directly linked with the numerical
methods.
On the other hand, the eigenvalue problem by itself is very interesting to consider.
Many questions one might have concerning a physical system can be answered in terms
of the allowed energy levels. Take for example the hydrogen atom, in which the
spectrum tells us the frequency of light needed to excite or ionize the
electron from its ground state. By exploiting the flexible geometry specification in the
ﬁnite element method we are able to concentrate our numerical work in areas where
the potential energy varies rapidly. We saw examples of this when trying to ﬁnd the
spectrum of the two-dimensional hydrogen atom. With other methods such as ﬁnite
diﬀerence methods, the choice of geometry is very limited.
Because of the ﬂexible geometry in the ﬁnite element method, the possibility of
studying quantum chaotic systems was mentioned in chapter 6. This implies the study
of highly excited states, i.e., eigenvectors of the Hamiltonian far from the ground state.
We saw that for the particle-in-box, the ﬁnite element method produced eigenvalues
that qualitatively followed the analytic eigenvalues better than the ﬁnite diﬀerence
method. If this is a general feature of ﬁnite element methods (or variational methods
in general) they must be superior when studying such systems. Furthermore, if the
energy of the wave function in a time dependent simulation is very high, we should
use whatever method that most faithfully reproduces the higher eigenvalues. Hence,
further studies of the finite element eigenvalue problem are very attractive, as they
have implications that are far-reaching.
It must be mentioned that our approach to the eigenvalue problem was rather naïve
and based on the idea that we could learn something about the time development. Very
sophisticated methods for ﬁnding eigenstates of the Hamiltonian called adaptive ﬁnite
element methods have been developed and applied to some classes of systems, even
though they are not completely general. Doing a critical study of adaptive methods
could prove interesting, perhaps also for the time development of the wave function.
As a starting point for the work on eigenvalue problems one could extend the one-dimensional analysis in section 6.4 to include higher order elements
in several dimensions. This will then indicate lines of action with respect to time integration as well.
An additional study is the energy levels for the two-dimensional hydrogen atom
at intermediate magnetic ﬁelds. We discussed this system to some extent, ﬁnding
interesting qualitative results. Extending this analysis to for example spatially varying
magnetic ﬁelds could be interesting, as well as doing one-dimensional calculations on
the radial equation. For the three-dimensional atom this has been done with finite
element methods as early as in the eighties, see Refs. [52, 53].
We have also mentioned gauge invariance of the eigenvalues and eigenvectors as an
important physical and numerical concept. Establishing results concerning this may
prove fruitful. This has never been properly done before.
Time Integration Methods. We have just scratched the surface of the wide range of
time integration methods that exist for classical Hamiltonian systems and the Schrödinger
equation, see Ref. . A concept that becomes more and more emphasized in numerical analysis of ordinary diﬀerential equations is geometric integration, i.e., integration
methods that preserve certain qualitative features of the diﬀerential equations, thereby
incorporating stability in a natural way. It is clear that a thorough insight into what
methods exist and to what kinds of systems they have been applied would form an
invaluable basis for future work on solving the time dependent Schrödinger equation.
Trying out these methods with diﬀerent spatial discretizations and physical systems is
very interesting and should form interesting projects.
As for the methods studied in this text, there are still some unanswered questions.
Most important is perhaps the question of gauge invariance, which in the literature is
not even asked. For the time dependent methods this must be viewed in conjunction
with the spatial discretization, since both spatial and temporal degrees of freedom enter
the problem. A gauge invariant integration scheme will provide more accurate results
in a simulation, simply because the original formulation is gauge invariant and
the chosen gauge is arbitrary in principle. If we must “choose the correct gauge” in
order to be able to trust our results in each case, we automatically have a lot more
parameters to adjust before starting the numerical simulations, and we must be able
to ﬁnd out what gauge is best in the ﬁrst place.
In the extension of this analysis we may study what kinds of schemes might in general
prove gauge invariant, such as implicit schemes, explicit schemes and unitary
schemes, if any.
We stress that gauge invariance is an area of numerical analysis that never before
has been studied properly. On the other hand the concept of covariance of numerical
methods is a ﬁeld of research in progress.
Linear Solvers and the Schrödinger Equation. A lot of simulation time was spent on
solving linear systems. Linear systems pervade most computational scientist’s work,
and having a thorough knowledge of what solver to use and when to use it really
distinguishes an experienced practitioner from the rest. In this thesis we simply used
a fixed solver because it worked and spent reasonably little time doing its job.
The Schrödinger equation and the particular time integration method yield linear
systems of a particular kind. The leap-frog scheme (and any other explicit scheme)
has the mass matrix M as coefficient matrix. The Crank-Nicholson scheme has the
coefficient matrix M + i∆tH/2. As we saw, the solution of the latter systems required
more time than the former, indicating that knowing what method to use is important.
Is there any way of deciding which solver is best for the Schrödinger equation?
What about modifying an existing solver to ﬁne-tune it for the Schrödinger equation?
Such questions also become increasingly important if higher-dimensional systems are
to be solved, in which the matrix dimensions approach the limits of what is possible
to handle with today’s computer technology.
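A minimal sketch of what such a Crank-Nicholson system looks like when solved iteratively; the 1D model Hamiltonian and the identity mass matrix (i.e., nodal integration) are illustrative assumptions, and GMRES stands in for whatever iterative solver one would fine-tune:

```python
import numpy as np
from scipy.sparse import diags, identity
from scipy.sparse.linalg import gmres

# Sketch of the Crank-Nicholson system (M + i*dt/2 * H) u^{n+1} = (M - i*dt/2 * H) u^n
# with a 1D model Hamiltonian H = -1/2 d^2/dx^2 and M = I (nodal integration).
n = 400
h = 80.0 / n
dt = 0.02
x = np.linspace(-40.0, 40.0, n, endpoint=False)

H = 0.5 * diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n)) / h**2
M = identity(n, dtype=complex, format="csc")

A = (M + 0.5j * dt * H).tocsc()   # coefficient matrix M + i*dt*H/2
B = (M - 0.5j * dt * H).tocsc()

u = np.exp(-x**2 / 20.0).astype(complex)   # a Gaussian initial state
u /= np.sqrt(np.sum(np.abs(u)**2) * h)     # normalize so that ||u||^2 = 1

u_next, info = gmres(A, B @ u)             # info == 0 means convergence
print(info, np.sum(np.abs(u_next)**2) * h) # norm stays close to 1
```

The coefficient matrix is well-conditioned for small ∆t, which is why an iterative solver converges in few iterations here; how the conditioning degrades with ∆t and dimension is exactly the open question raised above.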
The technique of preconditioning the linear systems in order to improve the convergence rate for the Schrödinger equation is also a topic which could be interesting to
analyze further, see Ref. .
As mentioned in the concluding remarks of chapter 6, both Diffpack and ARPACK
support parallelization. Studying the parallelized versions of the numerical methods
will certainly enable us to do much more detailed simulations and give better
physical results.
Physical Applications. In this thesis we have focused on two-dimensional systems.
They were chosen for several reasons, the most prominent perhaps being their ease of visualization and their intermediate difficulty of implementation. We can use moderately
large grids and obtain accurate results, something that would be difficult in a three-dimensional problem. Two-dimensional systems also arise in a variety of applications,
such as solid-state physics and plasma physics.
In Ref.  several atomic systems are considered, in particular Rydberg atoms
(i.e., highly excited atoms) in electromagnetic ﬁelds. Several interesting conclusions
and results are presented, several of which could prove interesting to reproduce and
study in further detail, such as the excitation of Rydberg atoms with laser pulses.
With the finite element method and accurate integration schemes we may study
these aspects in a systematic manner and provide further insight into an exciting (no
pun intended) area of physics. Atomic physics describes systems that we actually can
create and control in a laboratory, providing basic insight into quantum mechanical
systems and also possible technological advances in the future.
In the extension we may also study systems of ions trapped in arbitrary geometrically shaped potentials, such as anharmonic and anisotropic harmonic oscillators,
quadrupole potentials, Paul traps and even quantum wires and nanotubes. See Ref. 
for details.
We could also study the Schrödinger equation on more exotic manifolds, such as
a graph with several one-dimensional manifolds that meet at common points (see
Ref. ), a curved two-dimensional manifold such as a sphere, and so on. Point-interaction Hamiltonians (i.e., Hamiltonians with δ-function potentials) could also be
attacked in a direct manner with ﬁnite element methods in which the singularities are
integrated out of the formulation, leaving a well-deﬁned matrix equation instead.
Closely related to the point-interaction Hamiltonians are the systems from solid-state physics mentioned in section 3.2.1. These systems encourage in a natural way
the use of the finite element methods. If the system is very regular we may find the
exact solution, but if the particles in the model are displaced in a random fashion
the behavior of the system is largely unknown. On the other hand we know that
irregularities in such systems are responsible for many well-known phenomena, such as
superconductivity at low temperatures.
The Schrödinger equation is linear, but in several areas of physics non-linear variants of the equation arise. Bose-Einstein condensates can be described by the Gross-Pitaevskii equation, see Ref. . This equation has an extra term proportional to
|Ψ(x)|² and describes the effective one-particle wave function for the interacting bosons.
Solving the time-dependent Gross-Pitaevskii equation is a challenge and will certainly
provide valuable insight into both the physics and the numerics of linear and non-linear
systems.
Appendix A
Mathematical Topics
This appendix discusses some mathematical and technical points in the text in detail.
For a thorough discussion on linear operators on Hilbert spaces and in quantum
mechanics, see . The physics textbook  contains mathematically oriented appendices that are easier to grasp at first reading.
A.1 A Note on Distributions
The most famous example of a distribution is perhaps the Dirac distribution, also
(improperly) called the δ-function. Originally, this was defined as the “function” δ(x)
satisfying
δ(x) = 0 for all x ∈ R, x ≠ 0,
and
∫_{−∞}^{∞} δ(x) dx = 1.
If δ(x) is a function, then its value at x = 0 must be infinite in order to make the
integral different from zero.
The Dirac δ-function is an example of a distribution. Distributions play a fundamental role in the mathematical theory of ﬁnite elements as well as quantum mechanics.
The rigorous deﬁnition is beyond the scope of this text, see Ref.  for an introduction.
Roughly speaking they represent a way to take a weighted average of a function.
From the “definition” of the Dirac distribution we must have
∫_{−∞}^{∞} δ(x)f(x) dx = f(0),
and so
δ : L² → C,   δ(f) = (δ, f) = f(0).
Here, we must assume that f (0) is well-deﬁned, but it is not! Any element f in L2 is
considered identical to elements g whose function values diﬀer at a set of (Lebesgue)
measure zero. Thus, the δ-distribution is not well-deﬁned on the whole space L2 .
A.2 A Note on Infinite Dimensional Spaces in Quantum Mechanics
Quantum mechanics uses linear algebra in Hilbert spaces as the framework for the theory.
More speciﬁcally, the Hilbert spaces employed are inﬁnite-dimensional and separable.
Separable means that one may always find a countable orthonormal basis. Every finite-dimensional vector space is separable.
As stated in section 1.3, the Hilbert space for one particle is
H = L²(R³) ⊗ C^{2s+1},
where s is the intrinsic spin of the particle. The space L² consists of the square-integrable
functions from R³ into the complex numbers, viz.,
L²(R³) := { f : R³ → C : ∫ |f|² < ∞ }.
Usually, physicists work with H as if it is a ﬁnite-dimensional space, in the sense that
Hermitian operators always may be diagonalised (see below) and that operators are
well deﬁned for every vector in H. This is a rough simpliﬁcation of the real situation.
Physicists also work with the elements as if they have well-deﬁned values at each
point in space. Because of this, quantum mechanical calculations are actually often
merely formal calculations, or “juggling with symbols.” For example, consider the
point-interaction Hamiltonian in one dimension, viz.,
H = −∂²/∂x² + µδ(x − x0),
where µ is some coupling constant. This is not an operator on L²! First, the kinetic
energy term is not defined on the whole space, and δ is not even an operator, much
less a function.
Consider the momentum operator in one dimension. We write this as
P = −i ∂/∂x,
but strictly speaking this is an abuse of notation and not correct. There are many
vectors (i.e., functions) in H that cannot be diﬀerentiated, i.e., the operator is not
deﬁned for all vectors. (It is not even deﬁned on a closed subspace of H, since a sequence
of differentiable functions does not necessarily converge to something differentiable.)
If we on the other hand consider the weak derivative of an element f in L2 , the
operator is well-deﬁned. If f happens to be diﬀerentiable in the classical sense, the
weak derivative is the same as the classical derivative. This is why physics calculations
with the momentum operator usually pose no diﬃculties.
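A simple illustration, not in the original text: the absolute value function has a weak derivative even though it is not classically differentiable at the origin. For every smooth test function φ with compact support, integration by parts on each half-line gives

```latex
\int_{-\infty}^{\infty} |x|\,\varphi'(x)\,dx
  = \int_{0}^{\infty} x\,\varphi'(x)\,dx - \int_{-\infty}^{0} x\,\varphi'(x)\,dx
  = -\int_{-\infty}^{\infty} \operatorname{sgn}(x)\,\varphi(x)\,dx ,
```

so |x| has the weak derivative sgn(x); on classically differentiable functions the two notions coincide, as stated above.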
A.3 Diagonalization of Hermitian Operators
As discussed in chapter 1, quantum mechanical observables are represented as linear
operators A on the Hilbert space H. All observables are Hermitian operators, that is
A† = A. This is equivalent to the requirement:
(Ψ, AΦ) = (AΨ, Φ),
for all Ψ, Φ ∈ H.
Note that we like to consider operators not deﬁned for all vectors, such as the
momentum operator or the position operator. Many of these (including the momentum
operator) do not have an orthonormal basis of eigenfunctions in L2 .
Diagonalization of an operator A means ﬁnding an orthonormal basis of vectors φn
and a set of scalars an such that
Aφn = an φn,   n ∈ I,
where I is some set. Usually it is a countable set, i.e. a subset of the natural numbers,
or it is uncountable as a subset of the real numbers.
All ﬁnite-dimensional (complex) Hilbert spaces are isomorphic to Cn where n =
dim(H). Thus linear operators become n × n matrices, and Hermitian operators in
finite-dimensional spaces may in this way always be diagonalized, since Hermitian
matrices may always be diagonalized.
For infinite-dimensional Hilbert spaces such as L², the situation is not so simple.
Many operators may be diagonalized, but others cannot. Take for example the operator
P = −i ∂/∂x,
acting on L²(R)-vectors. Writing out the eigenvalue equation (using the classical
derivative!) yields
−i ∂φ/∂x = pφ(x).
This diﬀerential equation has the solutions
φp (x) = Aeipx .
These solutions are however not vectors of L²(R), since
∫_{−∞}^{∞} |e^{ipx}|² dx
does not exist. As another example, consider the operator X, multiplication by x. The
eigenvalue equation becomes
xφ(x) = x′φ(x),
whose only solution is
φ(x) = Bδ(x − x′).
This is not a function!
Both operators considered above are fundamental in quantum mechanics. Even
though their “eigenvectors” are not proper vectors of H, it is very useful to consider
them as such. Physicists use the name eigenvectors for the above solutions, even though
it is mathematically incorrect. Actually, they are distributions.
Do the “eigendistributions” form a basis for H in some way? In the cases of P and
X they do, since any square integrable function ψ(x) obeys
ψ(x) = ∫ δ(x − x0)ψ(x0) dx0   and   ψ(x) = ∫ e^{ipx} φ(p) dp,
so that there exists a superposition (i.e., an integral) of the eigendistributions that equals
ψ(x). The above equations are nothing more than the formal definitions of the Dirac
δ-function and the Fourier transform, respectively. In most cases physicists encounter,
the eigendistributions yield a basis for H in this sense.
A.4 The Fourier Transform
Given a sufficiently nice function f : R^n → C we may define the Fourier transform as
\[ g(\mathbf{k}) = \mathcal{F}[f] := \frac{1}{(2\pi)^{n/2}} \int f(\mathbf{x})\, e^{-i\mathbf{k}\cdot\mathbf{x}} \, d^n x. \tag{A.1} \]
This transforms the function f(x) into another function g(k), and the transformation may be inverted, viz.,
\[ f(\mathbf{x}) = \mathcal{F}^{-1}[g] := \frac{1}{(2\pi)^{n/2}} \int g(\mathbf{k})\, e^{i\mathbf{k}\cdot\mathbf{x}} \, d^n k. \tag{A.2} \]
In other words: a function f(x) may be viewed as equivalent to g(k) and conversely. Furthermore, f(x) is the Fourier transform of g(−k), as may easily be seen.
There is a strong connection between the Fourier transform and quantum mechanics. The inverse Fourier transform may be interpreted in the following way. The function f(x), a function of space coordinates, is represented as a superposition of plane waves exp(ik·x) of wavenumber k with coefficients g(k). It is the description of the spatial wave function in the basis of plane waves.
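On a grid, the transform pair (A.1)–(A.2) is approximated by the discrete Fourier transform, which is also how the solver in Appendix B applies operators that are diagonal in k-space. A NumPy sketch (illustrative only) of the round trip F⁻¹[F[f]] = f:

```python
import numpy as np

# Sample f(x) = exp(-x^2/2) on a uniform grid.
N, L = 256, 20.0
x = np.linspace(-L/2, L/2, N, endpoint=False)
f = np.exp(-x**2 / 2)

g = np.fft.fft(f)        # discrete analogue of F[f]
f_back = np.fft.ifft(g)  # discrete analogue of F^{-1}[g]

# The round trip reproduces f: the plane-wave coefficients g(k)
# are a complete description of f(x).
assert np.allclose(f_back, f)
```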
A.5 Time Evolution for Time Dependent Hamiltonians
Consider the time dependent Schrödinger equation, viz.,
\[ i\frac{\partial \Psi}{\partial t} = H(t)\Psi(t), \]
where the Hamiltonian H (which is linear and Hermitian) may depend explicitly on time. We should really make some assumptions on what function space Ψ belongs to, but let us instead consider the calculations as formal.
Assume that the solution Ψ(t) exists in an interval t ∈ [0, T]. Let U(t) be the operator that takes the initial condition Ψ(0) into Ψ(t). As the Schrödinger equation is linear, U is a linear operator. This yields
\[ i\frac{\partial U}{\partial t}\Psi(0) = H(t)U(t)\Psi(0). \]
This holds for all Ψ(0), so that
\[ i\frac{\partial U}{\partial t} = H(t)U(t) \]
with U(0) = 1 is the differential equation governing the solution operator (or the propagator). Integration yields
\[ U(t) = 1 - i\int_0^t H(t_1)U(t_1)\, dt_1. \tag{A.3} \]
This suggests an iteration process called Picard iteration. The first iteration is
\[ U(t) = 1 - i\int_0^t H(t_1)\Big[ 1 - i\int_0^{t_1} H(t_2)U(t_2)\, dt_2 \Big] dt_1 = 1 - i\int_0^t H(t_1)\, dt_1 + (-i)^2 \int_0^t\! dt_1 \int_0^{t_1}\! dt_2\, H(t_1)H(t_2)U(t_2). \]
The successive iterations are given by replacing U(t_n) with Eqn. (A.3). Performing infinitely many iterations then gives
\[ U(t) = \sum_{n=0}^{\infty} (-i)^n \int_0^t dt_1 \int_0^{t_1} dt_2 \cdots \int_0^{t_{n-1}} dt_n\, H(t_1)H(t_2)\cdots H(t_n). \]
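For a scalar equation the Picard iteration can be carried out numerically. A pure-Python sketch (illustrative; the choice h(t) = cos t is arbitrary), where the exact propagator is U(t) = exp(−i sin t):

```python
import cmath
import math

# Picard iteration for i dU/dt = h(t) U, U(0) = 1, with h(t) = cos(t).
# Exact solution: U(t) = exp(-i sin(t)).
M = 2000                        # number of grid intervals on [0, 1]
h_step = 1.0 / M
t = [j * h_step for j in range(M + 1)]
h = [math.cos(tj) for tj in t]

U = [1.0 + 0j] * (M + 1)        # zeroth iterate U_0(t) = 1
for _ in range(20):             # successive substitutions into Eqn. (A.3)
    integrand = [h[j] * U[j] for j in range(M + 1)]
    # cumulative trapezoidal integral of h(s) U(s) from 0 to t_j
    cum = [0j]
    for j in range(1, M + 1):
        cum.append(cum[-1] + 0.5 * h_step * (integrand[j - 1] + integrand[j]))
    U = [1.0 - 1j * cum[j] for j in range(M + 1)]

# Compare with the exact propagator at t = 1.
exact = cmath.exp(-1j * math.sin(1.0))
assert abs(U[M] - exact) < 1e-5
```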
If [H(t_1), H(t_2)] = 0 we can rewrite this as
\[ U(t) = \sum_{n=0}^{\infty} \frac{(-i)^n}{n!} \int_0^t dt_1 \int_0^t dt_2 \cdots \int_0^t dt_n\, H(t_1)H(t_2)\cdots H(t_n). \]
Unfortunately, we cannot. The Hamiltonians at different time levels may be arbitrarily different from each other. We therefore introduce the time ordering operator T. For H(t_1) and H(t_2) it is defined by
\[ T[H(t_1)H(t_2)] = H(\max\{t_1, t_2\})\, H(\min\{t_1, t_2\}). \]
For a set of n time levels S = {t_i, i = 1, …, n} it is defined by
\[ T \prod_{t \in S} H(t) = H(\max\{S\})\; T \prod_{t \in S'} H(t), \]
where S' = S − {max{S}}. We also define T[A + B] = T A + T B. With this definition, we have
\[ U(t) = \sum_{n=0}^{\infty} \frac{(-i)^n}{n!}\, T \int_0^t dt_1 \int_0^t dt_2 \cdots \int_0^t dt_n\, H(t_1)H(t_2)\cdots H(t_n) = T \exp\Big[ -i\int_0^t H(t')\, dt' \Big]. \]
This is the formal expression for the propagator in the case of a time dependent Hamiltonian. As we see, the expression reduces to
\[ U(t) = \exp[-itH] \]
in the case of a time independent Hamiltonian.
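Numerically, the time-ordered exponential may be approximated by an ordered product of short-time propagators, with later times applied to the left. A NumPy sketch (illustrative, not from the thesis) for a two-level system H(t) = σ_x + t σ_z, showing that the result is unitary but differs from the naive, unordered exponential:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def H(t):
    # a Hamiltonian that does not commute with itself at different times
    return sx + t * sz

def expm_herm(A):
    # exp(-i A) for Hermitian A, via eigendecomposition
    w, V = np.linalg.eigh(A)
    return V @ np.diag(np.exp(-1j * w)) @ V.conj().T

def ordered_product(T, n):
    # T exp(-i int_0^T H dt) ~ exp(-i H(t_n) dt) ... exp(-i H(t_1) dt),
    # later times as the leftmost factors.
    dt = T / n
    U = np.eye(2, dtype=complex)
    for k in range(n):
        U = expm_herm(H((k + 0.5) * dt) * dt) @ U
    return U

T = 2.0
U = ordered_product(T, 4000)
# The propagator is unitary...
assert np.allclose(U.conj().T @ U, np.eye(2))
# ...but differs from the unordered exp(-i int_0^T H(t) dt) = exp(-i(T sx + T^2/2 sz)),
# precisely because [H(t1), H(t2)] != 0.
U_naive = expm_herm(T * sx + 0.5 * T**2 * sz)
assert np.linalg.norm(U - U_naive) > 0.1
```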
Appendix B
Program Listings
B.1 DFT Solver For One-Dimensional Problem
This is the Matlab source code for the simulations in chapter 4. The code is very simple
and easy to adapt to new problems.
B.1.1 fft schroed.m
% Simple program for solving the time dependent Schroedinger eqn. in
% one dimension with the spectral method. The PDE is given by
%
%   i psi_t = H psi,   psi(t0) = phi.
%
% The hamiltonian H is
%
%   H = T + V = -del^2 + V(x)
%
% The exact solution is given by
%
%   psi(t) = U(t,t0) phi,
%
% where the propagator U is given as
%
%   U(t,t0) = exp(-i (t-t0) H)
%
% We employ a split-operator method in time, in which U(t0+tau, t0) is
% approximated by
%
%   U = DED,   D = exp(-i tau/2 T) and E = exp(-i tau V).
%
% Note that T (and hence also D) is diagonal in the Fourier
% transformed representation, viz.,
%
%   T g(k) = -k^2 g(k),   g(k) = FT(psi(x)).
%
% Thus D becomes a diagonal op. with diagonal elements
%
%   D(k,k) = exp(-i k*k*tau/2).
%
% V is diagonal in position-rep., so
%
%   E(x,x) = exp(-i V(x)*tau).
% solve equation on interval [xmin, xmax].
% Use N points with spacing h.
xmin = -30;
xmax = 30;
L = xmax - xmin;
N = 1024;
h = L/(N-1);
b = 2.5;  % half width of barrier
a = 28;   % height of barrier
% choose time step and end-of-simulation time.
tau = 0.00025;
t_final = 2.0;
plot_at_time = [0, 0.5, 1, 1.5, 2, t_final];
plot_colors = ['g', 'r', 'g', 'r', 'g', 'r'];
% make a gaussian initial cond. centered at x0 with initial momentum k0.
% note: kinetic energy <T> = k0^2.
% width of gaussian: sqrt(sigma2)
x0 = -10.0;
sigma2 = 4;
k0 = 5;
X = linspace(xmin, xmax, N);  % x-coordinates of grid points.
psi0 = exp(-(X-x0).*(X-x0)/sigma2) .* exp(j*k0*X);
% also create a potential V.
V = zeros(1, N);
for i = 1:N
  if (X(i) >= -b) & (X(i) <= b)
    V(i) = a;
  end;
end;
rel_energy = k0*k0/a;  % relative energy of initial wave packet
% set ICs.
% psi is the solution with DFT, psi_fd with FDM
psi = psi0;
psi_fd = psi0';
% define frequencies for the D operator.
freqs = [linspace(0, N-1, N)] * 2*pi/L;
freqs(N/2+1:N) = freqs(N/2+1:N) - N*2*pi/L;
% define diagonal operators for split-operator scheme
D = exp(-j*tau*freqs.*freqs/2);
E = exp(-j*tau*V);
% build the finite difference operator as a sparse matrix.
e = ones(N, 1);
FD = spdiags([e -2*e e], -1:1, N, N);
FD(N,1) = 1;
FD(1,N) = 1;
FD = -FD/(h*h);
hold off
newplot
plot(X, V/a, 'b');
plot_indices = floor(plot_at_time/tau);
% propagate...
for step = 0:round(t_final/tau)
  n = find(plot_indices == step);
  if ~isempty(n)
    hold on
    plot(X, rel_energy + real(conj(psi).*psi), plot_colors(n));
    plot(X, rel_energy + real(conj(psi_fd).*psi_fd), strcat(':', plot_colors(n)));
  end;
  % propagate the spectral method wave function psi.
  phi = fft(psi);
  psi = ifft(D.*phi);
  psi = E.*psi;
  phi = fft(psi);
  psi = ifft(D.*phi);
  % propagate the finite difference method wave function psi_fd.
  tempFD1 = j*tau/2*FD*psi_fd;
  tempFD2 = -tau*tau/4*FD*(FD*psi_fd);
  temp = psi_fd + tempFD1 + tempFD2;
  psi_fd = (E.*(temp'))';
  tempFD1 = j*tau/2*FD*psi_fd;
  tempFD2 = -tau*tau/4*FD*(FD*psi_fd);
  psi_fd = psi_fd + tempFD1 + tempFD2;
end;
xlabel('x');
ylabel('|\psi(x,t)|^2');
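The split-operator step in the script can be mirrored in a few lines. A NumPy sketch (following the script's grid and parameters, not an exact port) checking that one D E D step conserves the norm, as a product of unitary factors must:

```python
import numpy as np

# Grid and operators, following the Matlab script's conventions.
N, L = 1024, 60.0
x = np.linspace(-30.0, 30.0, N)
tau = 0.00025

k = np.arange(N) * 2 * np.pi / L
k[N//2:] -= N * 2 * np.pi / L                # wrap to negative frequencies

V = np.where(np.abs(x) <= 2.5, 28.0, 0.0)    # square barrier
D = np.exp(-1j * tau * k**2 / 2)             # exp(-i tau/2 T) in k-space
E = np.exp(-1j * tau * V)                    # exp(-i tau V) in x-space

# Gaussian wave packet centered at x0 = -10 with momentum k0 = 5.
psi = np.exp(-(x + 10.0)**2 / 4) * np.exp(1j * 5 * x)
norm0 = np.linalg.norm(psi)

# One split-operator step: psi <- D E D psi (D applied in Fourier space).
psi = np.fft.ifft(D * np.fft.fft(psi))
psi = E * psi
psi = np.fft.ifft(D * np.fft.fft(psi))

# Each factor is unitary, so the norm is conserved to machine precision.
assert abs(np.linalg.norm(psi) - norm0) < 1e-10
```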
B.2 The HydroEigen class
This is the class definition of HydroEigen. It is not used directly in the main program but rather through the subclass TimeSolver.
B.2.1 HydroEigen.h
#ifndef HydroEigen_h_IS_INCLUDED
#define HydroEigen_h_IS_INCLUDED

#include <FEM.h>          // includes FieldFE.h, GridFE.h
#include <TimePrm.h>
#include <DegFreeFE.h>
#include <Arrays_real.h>  // for MatDiag
#include <SaveSimRes.h>
#include <FieldFormat.h>
#include <FieldSummary.h>
#include <LinEqAdmFE.h>
#include <MatBand_Complex.h>
#include <IsOs.h>
//
// simple inner product integrand. inlined.
//
class InnerProdIntegrandCalc : public IntegrandCalc {
protected:
  Handle(FieldFE) u, v;
  NUMT result;
public:
  InnerProdIntegrandCalc() { result = 0; }
  virtual void setFields(FieldFE& u_, FieldFE& v_) { u.rebind(u_); v.rebind(v_); }
  virtual NUMT getResult() { return result; }
  virtual void integrandsG(const FiniteElement& fe) {
    real detJxW = fe.detJxW();
    NUMT uval = u->valueFEM(fe);
    NUMT vval = v->valueFEM(fe);
    result += conjugate(uval) * vval * detJxW;  // integrate and store in result.
  }
};
//
// simple integrand for <r>. inlined.
//
class IntegrandOfExpectation_r : public InnerProdIntegrandCalc {
public:
  IntegrandOfExpectation_r() { }
  virtual void integrandsG(const FiniteElement& fe) {
    real detJxW = fe.detJxW();
    NUMT uval = u->valueFEM(fe);
    NUMT vval = v->valueFEM(fe);
    Ptv(real) x = fe.getGlobalEvalPt();
    real r = sqrt(x(1)*x(1) + x(2)*x(2));
    result += conjugate(uval) * vval * r * detJxW;  // integrate and store in result.
  }
};
//
// **********************************************************************
// Class declaration of HydroEigen -- a class that solves the time indep.
// Schrödinger equation for the two-dimensional hydrogen atom with an
// applied magnetic field in two different gauges.
// **********************************************************************
//
class HydroEigen : public FEM
{
protected:
  // Properties that concern geometry of the system
  // and the structure of the linear equations.
  Handle(GridFE)    grid;  // handle of the grid
  Handle(FieldFE)   u;     // a field over the grid
  Handle(DegFreeFE) dof;   // mapping: field <-> equation system

  // Properties that concern matrices, vectors and fields used.
  // Note: the LinEqAdmFE obj is not actually used for eqns in this class.
  // However it is a convenient means for building the element
  // matrix with makeSystem(*dof, *lineq);
  Vec(NUMT)          linsol;    // a vector that is used for linear eqns
  Handle(LinEqAdmFE) lineq;     // keeps linear equations Ax = b
  Handle(Matrix(NUMT)) M;       // mass matrix
  Handle(Matrix(NUMT)) K;       // element matrix
  Handle(SaveSimRes) database;  // for dumping fields to disk
  // Parameters that are read from the menu system and initialized
  // in scan().
  real gamma;    // strength of magnetic field
  bool nucleus;  // coulomb on/off
  bool angmom;   // ang-mom op. on/off
  bool lump;     // lump mass matrix or not.
  real epsilon;  // singularity tolerance

  String mat_type;  // chosen storage scheme for matrices.
  String gridfile;  // gridfile from menu system
  String gauge;     // indicates gauge. "symmetric" or "non-symmetric"
  int nev;            // number of eigenvalues/vectors to compute.
  bool store_evals;   // store the eigenvalues or not.
  int store_evecs;    // number of eigenvectors to store.
  bool savemat;       // save matrices or not.
  bool store_prob;    // store prob. density or not.
  bool arpack_solve;  // use arpack to solve system.
  bool* erase_log;    // array that indicates unknowns to be erased
                      // when incorporating BCs.

  void* the_solver;   // a void pointer used to store the EigenSolver object.

  Os logfile;          // writes info to casename.log.
  String real_format;  // contains e.g. "%10.10g"
public:
  // Constructors, destructors and "compulsory" methods
  // for a FEM class.
  HydroEigen();
  virtual ~HydroEigen();
  virtual void adm(MenuSystem& menu);
  virtual void define(MenuSystem& menu, int level = MAIN);
  virtual void scan();
  virtual void solveProblem();
  virtual void report();
  virtual void resultReport();

  // Helps keeping track of time spent on computations.
  time_t getElapsedTime();
  void reportElapsedTime();

  // Add a string to casename.log and stderr.
  void addToLogFile(const String& s);

  // Methods that erase rows and columns from matrices.
  void eraseRowAndCol(MatSparse(NUMT)& A, int k);
  void eraseRowAndCol(MatDiag(NUMT)& A, int k);
  void eraseRowAndCols(MatBand(NUMT)& New, MatBand(NUMT)& Old);

  // fillEraseLog() marks the degrees of freedom that correspond to
  // boundary nodes, uses bool* erase_log declared above.
  void fillEraseLog();

  // enforceHomogenousBCs erases the rows and columns from the matrices
  // indicated by bool* erase_log.
  void enforceHomogenousBCs(Handle(Matrix(NUMT))& K, Handle(Matrix(NUMT))& M);  // erase dofs stored in erase_log (shrink matrices)

  // Saves a matrix in Matlab compatible format. (Diffpack's own is not reliable...)
  void saveMatrix(MatSparse(NUMT)& A, const String& Aname, const String& fname);

  // Creates the mass matrix, lumped or not, in appropriate storage format.
  void makeMassMatrix2(Handle(Matrix(NUMT))& Dest, const Handle(Matrix(NUMT))& WithPattern);

  // Multiplies A with inverse of diagonal matrix D.
  void multInvMatDiag(Handle(Matrix(NUMT)) D, Handle(Matrix(NUMT)) A);

  // Methods that calculate expectation values.
  NUMT calcExpectation_r(FieldFE& u, FieldFE& v);
  NUMT calcInnerProd(FieldFE& u, FieldFE& v);

  // Calculate the (approximate) probability density. (u_j --> |u_j|^2)
  void calcProbabilityDensity(const FieldFE& u, FieldFE& prob, bool redim = false);

protected:
  // Computes the integrand in the FEM matrix formulation. Called in FEM::makeSystem().
  virtual void integrands(ElmMatVec& elmat, const FiniteElement& fe);
};
#endif
B.2.2 HydroEigen.cpp
#include <HydroEigen.h>
#include <ElmMatVec.h>
#include <FiniteElement.h>
#include <readOrMakeGrid.h>
#include <SparseDS.h>
#include <IsOs.h>
#include <time.h>
#include <MatSparse_Complex.h>

#include <RenumUnknowns.h>
#include <AMD.h>
#include <GibbPooleStockm.h>
#include <Puttonen.h>

#include <IntegrateOverGridFE.h>

#include "EigenSolver.h"
// **********************************************************
//
// Class definition of EigenSolver. See also EigenSolver.h
//
// **********************************************************

// minimal constructor
HydroEigen::HydroEigen() { erase_log = NULL; }

// destructor
HydroEigen::~HydroEigen() { if (erase_log) delete[] erase_log; }

// adm
void HydroEigen::adm(MenuSystem& menu) {
  SimCase::attach(menu);
  define(menu);
  // let user come up with some choices.
  menu.prompt();
  // initialize solver.
  scan();
}
//
// Set up menu system extensions.
//
void HydroEigen::define(MenuSystem& menu, int level) {
  menu.addItem(level, "gridfile", "file or prepro command",
               "P=PreproStdGeom | DISK r=40 degrees=360 | e=ElmT3n2D nel=1000 resol=100");
  menu.addItem(level, "nev", "no of eigenvalues", "100");
  menu.addItem(level, "gamma", "magnetic field strength", "0.0");
  menu.addItem(level, "nucleus", "turn on/off coulomb term", "true");
  menu.addItem(level, "epsilon", "singularity tolerance", "-1.0");
  menu.addItem(level, "angmom", "turn on/off angmom term", "false");
  menu.addItem(level, "nitg", "number of integration points (0 uses GAUSS_POINTS)", "0");
  menu.addItem(level, "lump", "lump mass matrix or not", "false");
  menu.addItem(level, "warp", "warp factor for grid", "1.0");
  menu.addItem(level, "scale", "scale factor for grid", "1.0");
  menu.addItem(level, "renum", "try to renumber nodes?", "true");
  menu.addItem(level, "savemat", "save matrices as Matlab files", "true");
  menu.addItem(level, "store prob density", "store probability density fields", "false");
  menu.addItem(level, "use arpack", "diagonalize with arpack or not", "true");
  menu.addItem(level, "gauge", "gauge", "symmetric");  // or "non-symmetric"
  menu.addItem(level, "store_evecs",
               "number of evecs to store", "-1");  // -1 means all...
  menu.addItem(level, "real format", "real format for streams used", "%10.10g");
  // These commands make the different classes
  // fetch default settings from the menu system.
  // For example, the default solver type for linear
  // systems of equations handled by LinEqAdm(FE)
  // can be set in the menu, and these choices are
  // passed on when constructing solver objects.
  SaveSimRes::defineStatic(menu, level+1);
  // LinEqAdmFE::defineStatic(menu, level+1);
  LinEqAdm::defineStatic(menu, level+1);
  FEM::defineStatic(menu, level+1);
}
//
// Initialize simulator
//
void HydroEigen::scan() {
  MenuSystem& menu = SimCase::getMenuSystem();
  // Get the matrix type.
  mat_type = menu.get("matrix type");
  // If savemat == true, the matrices of the ev-problem will be
  // saved in Matlab compatible files.
  savemat = menu.get("savemat").getBool();
  // If store_prob == true, then the probability density
  // for each eigenvector that is stored is also stored.
  store_prob = menu.get("store prob density").getBool();
  // If arpack_solve == true, then ARPACK will be called
  // to actually diagonalize the problem. Usually this is done,
  // but to compare the diagonalization times with e.g. Matlab or
  // LAPACK one might want to instead save the matrices and
  // skip ARPACK diagonalization.
  arpack_solve = menu.get("use arpack").getBool();
  // gauge indicates what gauge to use for the potentials.
  // Usually set to "symmetric".
  gauge = menu.get("gauge");
  if (!((gauge == "symmetric") || (gauge == "non-symmetric"))) {
    // unknown gauge
    s_e << "Unknown gauge! must be 'symmetric' or 'non-symmetric'. Now set to 'symmetric'." << endl;
    gauge = "symmetric";
  }
  //
  // Set up the logfile.
  //
  logfile.rebind(*(new Os(aform("%s.log", casename.c_str()), NEWFILE)));
  //
  // Get the grid from the menu system.
  //
  gridfile = menu.get("gridfile");
  grid.rebind(new GridFE());
  readOrMakeGrid(*grid, gridfile);
  //
  // If the grid is a disk, the first node is always
  // at the edge. We should really loop through the nodes
  // to be sure... :)
  // r is only used when warping (grading) the grid.
  //
  real r = grid->getCoor(1).norm();  // get the radius of the disk, if it is a disk.
  // If renum == true, we renumber the nodes in the
  // grid to optimize the bandwidth of the matrix.
  // This is important (!) if we use banded matrices,
  // but not that important if MatSparse is being used.
  if (menu.get("renum").getBool()) {
    // try to renumber the nodes to optimize...
    s_e << "Optimizing grid..." << endl;
    Handle(RenumUnknowns) renum_thing;
    // not stable?? crashes for some disk grids. :(
    // renum_thing.rebind(new GibbPooleStockm());
    renum_thing.rebind(new Puttonen());
    renum_thing->renumberNodes(grid());
  }
  // Get grid modification parameters.
  // See thesis text for details on the
  // grading (warping) process.
  real warp = menu.get("warp").getReal();
  real scale = menu.get("scale").getReal();
  if (warp != 1) {
    s_e << "Warping grid with factor " << warp << "..." << endl;
    for (int i = 1; i <= grid->getNoNodes(); i++) {
      real d = pow(grid->getCoor(i).norm(), warp-1);
      grid->putCoor(grid->getCoor(i) * d * pow(r, 1-warp), i);
    }
  }
  if (scale != 1) {
    s_e << "Scaling grid with factor " << scale << "..." << endl;
    for (int i = 1; i <= grid->getNoNodes(); i++) {
      grid->putCoor(grid->getCoor(i) * scale, i);
    }
  }
  // Save the grid to .temp.grid.
  // Usually not necessary as dumping fields
  // will create casename.grid as well...
  Os gridf(".temp.grid", NEWFILE);
  grid->print(gridf);  // write warped/optimized grid to file
  s_e << "Saving the warped/optimized grid to .temp.grid..." << endl;
  // Get number of eigenvectors to find.
  String nev_string = menu.get("nev");
  nev = atoi(nev_string.c_str());
  // nucleus turns on/off the Coulomb interaction
  // in the Hamiltonian. gamma is the magnetic field.
  // If lump == true, we lump the mass matrix.
  // angmom turns on/off the term prop. to d/d phi in H.
  nucleus = menu.get("nucleus").getBool();
  gamma = menu.get("gamma").getReal();
  lump = menu.get("lump").getBool();
  angmom = menu.get("angmom").getBool();
  // Get number of eigenvectors to store.
  // A negative value automatically saves all vectors.
  store_evecs = atoi(menu.get("store_evecs").c_str());
  if ((store_evecs < 0) || (store_evecs > nev))
    store_evecs = nev;
  // Init the simres database.
  // We store fields (i.e., eigenvectors) with
  // database->dump(field);
  FEM::scan(menu);
  database.rebind(new SaveSimRes());
  database->scan(menu, grid->getNoSpaceDim());
  // Init the field u and the dof.
  // dof is actually not used in the simulator,
  // needed in the DegFreeFE object.
  u.rebind(new FieldFE(*grid, "u"));
  dof.rebind(new DegFreeFE(*grid, 1));  // 1 for 1 unknown per node
  // Init the linear system. (Holds matrix in eigenvalue formulation)
  // FEM systems are not solved in HydroEigen, but the structure is needed.
  lineq.rebind(new LinEqAdmFE());
  lineq->scan(menu);
  linsol.redim(grid->getNoNodes());
  linsol.fill(0.0);
  lineq->attach(linsol);
  // Set up integration rules.
  // This is somewhat tentative: A trapezoidal rule
  // is set up if nitg >= 2. This rule should ONLY
  // be used with triangular elements.
  s_e << "Setting up integration rules..." << endl;
  String n_string = menu.get("nitg");
  int n = atoi(n_string.c_str());
  int N = n*(n+1)/2;
  if (n >= 2) {
    // set up trapezoidal rule.
    // NB: ONLY WORKS FOR TRIANGULAR ELEMENTS!
    s_e << "Using " << N << " integration points w/ weights." << endl;
    s_e << "(Are you sure that your elements are triangular?)" << endl;
    VecSimple(Ptv(real)) itg_x(N);
    VecSimple(real) itg_w(N);
    int i = 1, j = 1;
    real dx = 1.0/(real)(n-1);
    real dx2 = dx*dx;
    for (int c = 1; c <= N; c++) {
      // set up coordinates
      real xi1 = dx*(i-1);
      real xi2 = dx*(j-1);
      itg_x(c).redim(2);
      itg_x(c)(1) = xi1;
      itg_x(c)(2) = xi2;
      // set up weight
      if (((i==1) && (j==1)) || ((i==1) && (j==n)) || ((i==n) && (j==1))) {
        if (((i==1) && (j==n)) || ((i==n) && (j==1)))
          itg_w(c) = 0.125*dx2;  // sharp corners (1/8 square)
        else
          itg_w(c) = 0.25*dx2;   // (1/4 square)
      } else
        if ((i==1) || (j==1) || (i==(n-j+1)))
          itg_w(c) = 0.5*dx2;    // edges (1/2 square)
        else
          itg_w(c) = dx2;        // interior (1/1 square)
      // advance to next point
      if (i == (n-j+1)) {
        j++;
        i = 1;
      } else i++;
    }  // end of c=1..N loop.
    // Save integration rule into ElmItgRules FEM::itg_rules.
    itg_rules.ignoreRefill();
    itg_rules.setPointsAndWeights(itg_x, itg_w);
  } else {
    // use gaussian quad if n<2.
    s_e << "Using quadrature from menu system. (Phew. I don't like experimental setups.)" << endl;
  }
  // Output format from menu.
  real_format = menu.get("real format");
  s_o->setRealFormat(real_format);
  s_e->setRealFormat(real_format);
  logfile->setRealFormat(real_format);
}

//
// Function for logging both to s_e and logfile.
//
void HydroEigen::addToLogFile(const String& s) {
  // I could write a joke here; did you get it?
  logfile << s;
  s_e << s;
}
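The trapezoidal rule assembled in scan() above can be prototyped and sanity-checked in isolation. A Python sketch (the function name triangle_trapezoid_rule is mine; the i, j traversal and the weights follow the C++ loop) verifying that the weights sum to 1/2, the area of the reference triangle:

```python
def triangle_trapezoid_rule(n):
    """Points and weights on the unit reference triangle, n points per edge."""
    N = n * (n + 1) // 2
    dx = 1.0 / (n - 1)
    dx2 = dx * dx
    pts, wts = [], []
    i = j = 1
    for _ in range(N):
        pts.append((dx * (i - 1), dx * (j - 1)))
        if (i == 1 and j == 1) or (i == 1 and j == n) or (i == n and j == 1):
            # corners: 1/8 square at the two sharp corners, 1/4 at the right angle
            w = 0.125 * dx2 if (i, j) != (1, 1) else 0.25 * dx2
        elif i == 1 or j == 1 or i == n - j + 1:
            w = 0.5 * dx2    # edges (1/2 square)
        else:
            w = dx2          # interior (1/1 square)
        wts.append(w)
        if i == n - j + 1:   # advance to next point
            j += 1
            i = 1
        else:
            i += 1
    return pts, wts

# The rule integrates constants exactly: the weights tile the triangle.
for n in (2, 3, 4, 5):
    pts, wts = triangle_trapezoid_rule(n)
    assert len(wts) == n * (n + 1) // 2
    assert abs(sum(wts) - 0.5) < 1e-12
```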
//
// This function erases rows/cols of a BANDED matrix according to erase_log[].
// Some assumptions are made on the matrix storage format that I really
// have no reason to believe in. However, I have never experienced a
// counterexample.
//
void HydroEigen::eraseRowAndCols(MatBand(NUMT)& New, MatBand(NUMT)& Old) {
  int rows = Old.getNoRows();
  int bw = Old.getBandwidth();
  bool symm_stor = false;
  bool pivot_allow = false;
  int cols2 = 2*bw - 1;
  int i, j, k;
  int ndiag = cols2 >> 1;  // number of sub/super diagonals.
  int middlecol = ndiag + 1;
  for (k = rows; k >= 1; k--) {
    // if row/col k is to be deleted, fix matrix data in place!
    if (erase_log[k-1]) {
      // "move up" rows > k by one.
      for (i = k; i <= rows-1; i++) {
        for (j = 1; j <= cols2; j++) {
          Old(i,j) = Old(i+1,j);
        }
      }
      // fix upper part of matrix
      for (i = k-1; i >= k-ndiag; i--) {  // totals ndiag rows
        if (i >= 1) {
          for (j = middlecol + (k-1) - i + 1; j <= cols2-1; j++)
            Old(i,j) = Old(i,j+1);
          Old(i,cols2) = 0;
        }
      }
      // fix lower part of matrix
      for (i = k; i <= k+ndiag-1; i++) {  // totals ndiag rows
        if (i <= rows) {
          for (j = middlecol - 1 + (k-i); j >= 2; j--)
            Old(i,j) = Old(i,j-1);
          Old(i,1) = 0;
        }
      }
      rows--;  // we have one less row now.
    }  // end of if
  }  // end of k-loop
  // update other matrix parameters --> New.
  New.redim(rows, rows, bw, symm_stor, pivot_allow);
  for (i = 1; i <= rows; i++)
    for (j = 1; j <= cols2; j++)
      New(i,j) = Old(i,j);
}
//
// Erase row and col number k from sparse matrix _in place_.
//
void HydroEigen::eraseRowAndCol(MatSparse(NUMT)& A, int k) {
  SparseDS& pat = A.pattern();
  int n = pat.getNoRows();
  int m = pat.getNoColumns();
  // NOTE: could not use getNoNonzeroes() because of
  // not correct number of entries. Instead use definition of irow(n+1).
  // int nnz = pat.getNoNonzeroes();  // length of entries array.
  int nnz = pat.irow(n+1) - 1;
  //
  // *** first, erase row k. ***
  //
  // must erase entries(s) from s = irow(k) to
  // s = irow(k+1)-1, and also same indices in jcol.
  // then, we must correct the entries in irow,
  // by subtracting delta from irow(i),
  // i > k, and remove irow(k) altogether.

  // delete entries(s) and jcol(s)
  int first = pat.irow(k);
  int last = pat.irow(k+1) - 1;
  int delta = last - first + 1;  // number of elements to remove from j
  for (int s = first; s <= nnz - delta; s++) {
    A(s) = A(s + delta);
    pat.jcol(s) = pat.jcol(s + delta);
  }
  // fix irow(i)
  // take a copy.
  VecSimple(int) irow_copy(n-1);
  for (int i = 1; i < k; i++)
    irow_copy(i) = pat.irow(i);
  for (int i = k; i <= n-1; i++)
    irow_copy(i) = pat.irow(i+1) - delta;
  // redim pat.irow and fill with modified copy.
  // cols are set to m-1, anticipating removal of column...
  pat.redimIrow(n-1, m-1);
  for (int i = 1; i <= n-1; i++)
    pat.irow(i) = irow_copy(i);
  // last entry of irow must be set properly.
  pat.irow(n+1-1) = nnz - delta + 1;
  //
  // *** next, erase column k ***
  //
  // run backwards, s = nnz down to 1.
  // for each s such that jcol(s) = k, jcol(s)
  // and entries(s) must be deleted.
  // furthermore, jcol(s') must be lessened
  // by one for jcol(s') > k. we have erased an
  // element, so irows must be fixed. we subtract
  // 1 from every irow(i) >= s.

  // update variables
  n = pat.getNoRows();
  nnz = pat.irow(n+1) - 1;
  // loop backwards...
  for (int s = nnz; s >= 1; s--) {
    if (pat.jcol(s) > k)
      pat.jcol(s)--;
    else if (pat.jcol(s) == k) {
      // remove entry if at column k.
      // (slight special case if s = nnz. nothing
      // needs to be done with entries/jcol then.)
      for (int t = s; t <= nnz-1; t++) {
        pat.jcol(t) = pat.jcol(t+1);
        A(t) = A(t+1);
      }
      // update irows..
      for (int i = 1; i <= n+1; i++)
        if (pat.irow(i) > s) pat.irow(i)--;
    }
  }
}
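The same row/column deletion can be prototyped on a conventional 0-based CSR triple (`indptr`, `indices`, `data`). A Python sketch (an analogue of the 1-based Diffpack routine above, not a port; the helper names are mine), checked against dense row/column removal:

```python
def erase_row_and_col(indptr, indices, data, k):
    """Remove row k and column k (0-based) from a CSR matrix."""
    # first, erase row k: drop its entries and shift the row pointers by delta.
    start, end = indptr[k], indptr[k + 1]
    delta = end - start
    data = data[:start] + data[end:]
    indices = indices[:start] + indices[end:]
    indptr = indptr[:k + 1] + [p - delta for p in indptr[k + 2:]]
    # next, erase column k: drop entries in column k, shift larger
    # column indices down by one, and recount each row.
    new_data, new_indices, new_indptr = [], [], [0]
    for r in range(len(indptr) - 1):
        count = 0
        for s in range(indptr[r], indptr[r + 1]):
            c = indices[s]
            if c == k:
                continue
            new_indices.append(c - 1 if c > k else c)
            new_data.append(data[s])
            count += 1
        new_indptr.append(new_indptr[-1] + count)
    return new_indptr, new_indices, new_data

def to_dense(indptr, indices, data, ncols):
    A = [[0.0] * ncols for _ in range(len(indptr) - 1)]
    for r in range(len(indptr) - 1):
        for s in range(indptr[r], indptr[r + 1]):
            A[r][indices[s]] = data[s]
    return A

# 4x4 test matrix, converted to CSR form.
dense = [[1, 0, 2, 0],
         [0, 3, 0, 4],
         [5, 0, 6, 0],
         [0, 7, 0, 8]]
indptr, indices, data = [0], [], []
for row in dense:
    for c, v in enumerate(row):
        if v:
            indices.append(c)
            data.append(float(v))
    indptr.append(len(data))

# Erase row 1 and column 1; compare with deleting them from the dense matrix.
ip, ix, dt = erase_row_and_col(indptr, indices, data, 1)
assert to_dense(ip, ix, dt, 3) == [[1, 2, 0], [5, 6, 0], [0, 0, 8]]
```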
//
// Erase row/col no. k in a MatDiag(NUMT) object. very simple indeed...
//
void HydroEigen::eraseRowAndCol(MatDiag(NUMT)& A, int k) {
  Vec(NUMT) temp(A.getNoRows() - 1);
  for (int i = 1; i < k; i++)
    temp(i) = A(i);
  for (int i = k; i < A.getNoRows(); i++)
    temp(i) = A(i+1);
  A.redim(A.getNoRows() - 1);
  for (int i = 1; i <= A.getNoRows(); i++)
    A(i) = temp(i);
}
//
// Simple stop-watch functions.
// This should be improved; only seconds are counted.
//
time_t HydroEigen::getElapsedTime() {
  static time_t start = time(NULL);
  return time(NULL) - start;
}

void HydroEigen::reportElapsedTime() {
  addToLogFile(aform("--- elapsed time so far: %d s.\n", getElapsedTime()));
}
//
192
B.2 – The HydroEigen class
// e r a s e _ l o g is an a r r a y of b o o l s ; one per each u n k n o w n in l i n e a r s y s t e m .
// e r a s e _ l o g [ k ] is set to true if u n k n o w n k c o r r e s p o n d s with a b o u n d a r y node .
//
void HydroEigen :: fillEraseLog () {
int no_nodes = grid - > getNoNodes () ;
int no_elms = grid - > getNoElms () ;
erase_log = new bool [ no_nodes ];
for ( int k =0; k < no_nodes ; k ++) erase_log [ k ] = false ;
// fill erase_log with true for boundary nodes .
for ( int e =1; e <= no_elms ; e ++) {
int no_dof_per_elm = dof - > getNoDofInElm ( e ) ;
for ( int e_dof =1; e_dof <= no_dof_per_elm ; e_dof ++) {
int n = grid - > loc2glob (e , e_dof );
if ( grid - > boNode ( n) ) {
int k = dof - > loc2glob (e , e_dof ) - 1;
erase_log [ k ] = true ;
}
}
}
}
//
// this function enforces the homogenous BCs by removing all rows / cols in the
// matrices K and M that correspond to boundary nodes . uses fillEraseLog () .
//
void HydroEigen :: enforceHomogenousBCs ( Handle ( Matrix ( NUMT ) ) & K , Handle ( Matrix ( NUMT ) ) & M ) {
addToLogFile ( " Erasing rows and columns ...\ n " ) ;
// find the rows / cols that should be removed from the system .
// loops through the nodes in the grid .
fillEraseLog () ;
addToLogFile ( aform ( " Total number of nonzeroes before == % d \ n " , K - > getNoNonzeroes () ) ) ;
//
// different treatment of MatBand and MatSparse ...
//
if ( mat_type == " MatBand " ) {
Handle ( MatBand ( NUMT )) K_new , M_new ;
K_new . rebind ( new MatBand ( NUMT ) ( 1 ,1 ) ) ;
M_new . rebind ( new MatBand ( NUMT ) ( 1 ,1 ) ) ;
eraseRowAndCols ( K_new () , ( MatBand ( NUMT ) &) K () ) ;
if (! lump )
eraseRowAndCols ( M_new () , ( MatBand ( NUMT ) &) M () ) ;
addToLogFile ( " MatBand ( NUMT ) object K after BC incorporation :\ n" ) ;
addToLogFile ( aform ( "   bandwidth == %d\n   dimension == %d\n   nonzeroes == %d\n" ,
K_new - > getBandwidth () , K_new - > getNoRows () ,
K_new - > getNoNonzeroes () ) ) ;
// copy back ...
if (! lump )
M . rebind ( M_new () ) ;
K . rebind ( K_new () ) ;
} else if ( mat_type == " MatSparse " ) {
for ( int k =K - > getNoRows () ; k >=1; k - -)
if ( erase_log [k -1]) {
s_e < < ". " ;
eraseRowAndCol (( MatSparse ( NUMT ) &) (* K) , k ) ;
if (! lump )
eraseRowAndCol (( MatSparse ( NUMT ) &) (* M ) , k ) ;
}
s_e < < endl ;
addToLogFile ( " MatSparse ( NUMT ) object K after BC incorporation :\ n " ) ;
int nz = (( MatSparse ( NUMT ) &) K () ) . pattern () . irow (K - > getNoRows () +1) -1;
int rows = K - > getNoRows () ;
addToLogFile ( aform ( "   dimension == %d\n   nonzeroes == %d (%g percent)\n" ,
rows , nz , 100.0* nz /( real ) ( rows * rows ) ) ) ;
}
// BCs for lumped mass matrix ...
if ( lump ) {
addToLogFile ( " Mass matrix M is lumped . Enforcing BCs ...\ n " ) ;
for ( int k =K - > getNoRows () ; k >=1; k - -)
if ( erase_log [k -1])
eraseRowAndCol (( MatDiag ( NUMT ) &) (* M ) , k ) ;
}
}
//
// This function is the main function of the solver .
// Calculates matrices , solves eigenvalue problem and reports .
//
void HydroEigen :: solveProblem () {
int n , m ; // matrix dimensions .
// Init timer .
getElapsedTime () ;
reportElapsedTime () ;
// Make the matrix K , i . e . , the left matrix .
addToLogFile ( " Creating the K matrix ...\ n " ) ;
makeSystem (* dof , * lineq );
// fetch size .
lineq - > A () . size (n , m ) ;
// Make the mass matrix M
addToLogFile ( " Creating the M matrix ... " ) ;
makeMassMatrix2 (M , lineq - >A () ) ;
// Fetch a copy of K .
// For huge systems this may be problematic .
// I should investigate the possibility of
// simply making a reference instead .
lineq - > A () . makeItSimilar (K ) ;
* K = lineq - >A () ;
reportElapsedTime () ;
// Tell user what's the deal with the grid .
addToLogFile ( " Grid properties :\ n " ) ;
addToLogFile ( aform ( "   gridfile == %s\n" , SimCase :: getMenuSystem () . get ( " gridfile " ) . c_str () ) ) ;
addToLogFile ( aform ( "   number of nodes == %d\n   number of elements == %d\n" ,
grid - > getNoNodes () , grid - > getNoElms () ) ) ;
// Enforce BCs .
addToLogFile ( " Enforcing homogenous Dirichlet BCs ...\ n" ) ;
enforceHomogenousBCs ( K , M ) ;
reportElapsedTime () ;
//
// Here we start the real problem solving part .
//
addToLogFile ( " Instantiating the EigenSolver object ...\ n " ) ;
EigenSolver * solver ;
if (! lump ) {
// If M is full then the problem is a generalized problem .
solver = new EigenSolver ( K () , M () , nev , SimCase :: getMenuSystem () ) ;
} else {
// If M is lumped we eliminate the right hand matrix
// by multiplying with its inverse , which is really
// simple to do ...
multInvMatDiag (M , K ) ;
// if ( mat_type == " MatBand " ) {
//   for ( int p =1; p <= K - > getNoRows () ; p ++)
//     for ( int q =1; q <= K - > getNoColumns () ; q ++)
//       if ( (( MatBand ( NUMT ) &) K () ) . insideBand ( p , q ) )
//         K - > elm ( p , q ) /= M - > elm ( p , p ) ;
// } else if ( mat_type == " MatSparse " ) {
//   MatSparse ( NUMT ) & X = ( MatSparse ( NUMT ) &) (* K ) ;
//   SparseDS & pat = X . pattern () ;
//   for ( int p =1; p <= pat . getNoRows () ; p ++) {
//     int first = pat . irow ( p ) ;
//     int last = pat . irow ( p +1) -1;
//     for ( int q = first ; q <= last ; q ++)
//       X ( q ) /= M - > elm ( p , p ) ;
//   }
// }
solver = new EigenSolver ( K () , nev , SimCase :: getMenuSystem () ) ;
}
// Save matrices if desired .
// No need to save M if lumped ; inv ( M ) is already multiplied into K .
if ( savemat && ( mat_type == " MatSparse " ) ) {
addToLogFile ( " Saving matrices in Matlab format ...\ n" ) ;
saveMatrix ( ( MatSparse ( NUMT ) &) (* K ) , " K " , aform ( " % s_K . m " , casename . c_str () ) ) ;
if (! lump )
saveMatrix ( ( MatSparse ( NUMT ) &) (* M ) , " M " , aform ( " % s_M . m " , casename . c_str () ) ) ;
}
// Skip diagonalization ?
if (! arpack_solve ) {
addToLogFile ( " Skipping diagonalization with ARPACK !\ n " ) ;
} else {
// set up computational mode . Note that EigenSolver
// in fact supports finding , e . g . , the _largest_ eigenvalues
// instead of the lowest .
// The reason why we use a void * is that
// due to not - so - good templating used
// in ARPACK ++ we cannot include the class
// definitions in HydroEigen . h ...
the_solver = ( void *) solver ;
solver - > setCompMode ( MODE_REGULAR ) ;
solver - > setMatrixKind ( MATRIX_COMPLEX );
solver - > setSpectrumPart ( SPECTRUM_SMALLEST_REAL ) ;
// Go !
addToLogFile ( " Finding eigenvalues and eigenvectors ...\ n " ) ;
solver - > solveProblem () ;
addToLogFile ( " Done !\ n " );
reportElapsedTime () ;
// Report on the results and findings .
report () ;
}
}
//
// To calculate K matrix ...
// evaluates the integrands in the finite element formulation .
//
void HydroEigen :: integrands ( ElmMatVec & elmat , const FiniteElement & fe ) {
/*
The Hamiltonian for this problem is :
H = - nabla ^2 + gamma ^2 r ^2/4 + i gamma ( -y , x ) . nabla - 2/ r
*/
int i ,j ,k ;
const int nsd = fe . getNoSpaceDim () ;
const int nbf = fe . getNoBasisFunc () ;
const real detJxW = fe . detJxW () ;
const real gamma2 = gamma * gamma ;
Ptv ( real ) coords = fe . getGlobalEvalPt () ;
real X = coords (1) ;
real Y = coords (2) ;
static int gauge_int = ( gauge == " symmetric " ? 1 : 2) ;
real nabla2 ;
real r2 = X * X + Y * Y ;
real r2c = sqrt ( r2 ) ;
if ( r2c <= epsilon ) r2c = epsilon ;
// Complex Im (0 ,1) ;
Complex Im (0 ,1) ;
for ( i = 1; i <= nbf ; i ++)
for ( j = 1; j <= nbf ; j ++) {
nabla2 = 0;
for ( k = 1; k <= nsd ; k ++)
nabla2 += fe . dN (i , k ) * fe . dN (j , k) ;
// add contrib . from nabla ^2 ( stiffness matrix K )
elmat . A (i , j ) += nabla2 * detJxW ;
if ( gauge_int == 1) {
// *** symmetric gauge ***
// add contrib . from harmonic oscillator term
elmat . A (i , j ) += fe . N ( i) * fe . N ( j ) * r2 *0.25* gamma2 * detJxW ;
// add contrib . from <A , nabla >.
if ( angmom )
elmat .A (i , j ) += - Im * fe . N ( i ) * gamma *(
-Y * fe . dN (j ,1) + X * fe . dN (j ,2)
) * detJxW ;
} else {
// *** non - symmetric gauge ***
elmat . A (i , j ) += 2* Im * gamma * fe . N ( i ) * Y* fe . dN (j ,1) * detJxW ;
elmat . A (i , j ) += gamma2 * fe . N ( i ) * fe . N (j ) * Y * Y * detJxW ;
}
// add contrib . from coulomb term ( use r2c , clamped to epsilon , to avoid the singularity )
if ( nucleus )
elmat . A (i , j ) += - fe . N ( i ) * fe . N ( j ) *( 2.0/ r2c ) * detJxW ;
}
}
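In equation form, the element-matrix entries assembled above are, for the symmetric gauge and with the Coulomb term included when `nucleus` is set (the quadrature weight `detJxW` supplies the measure):

```latex
A_{ij} \mathrel{+}= \int_{\Omega_e} \Big[ \nabla N_i \cdot \nabla N_j
  + \frac{\gamma^2 r^2}{4}\, N_i N_j
  - i\gamma\, N_i \big( -y\,\partial_x + x\,\partial_y \big) N_j
  - \frac{2}{r}\, N_i N_j \Big] \,\mathrm{d}\Omega .
```

The non-symmetric-gauge branch replaces the second and third terms by $\gamma^2 y^2 N_i N_j + 2i\gamma\, N_i\, y\,\partial_x N_j$, matching the two `elmat.A` lines in that branch.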
//
// Simple report function . Writes eigenvectors
// as fields over grid . Also writes eigenvalues .
//
void HydroEigen :: report () {
// Fetch the EigenSolver object pointer and cast it .
EigenSolver & solver = *(( EigenSolver *) the_solver );
// In evec_field we store a field corresponding to the eigenvector
// found .
Handle ( FieldFE ) evec_field ;
evec_field . rebind ( new FieldFE () ) ;
// In this we store the nodal values of the eigenvector field .
Handle ( ArrayGen ( NUMT ) ) evec_values ;
evec_values . rebind ( new ArrayGen ( NUMT ) ( grid - > getNoNodes () ) ) ;
// Get the eigenvectors and eigenvalues .
Mat ( NUMT ) & eigenvectors = solver . getEigenvectors () ;
Vec ( real ) eigenvals ( solver . getEigenvalues () . getNoEntries () ) ;
// Copy eigenvalues . ( We will modify this array , so we take a
// copy instead of a reference .)
for ( int k =1; k <= eigenvals . getNoEntries () ; k ++)
eigenvals ( k ) = solver . getEigenvalues () ( k ) . Re () ;
// index -- permutation of indices that sort the eigenvalues .
// inv_index -- the inverse permutation .
VecSimple ( int ) index ( eigenvals . size () ) , inv_index ( eigenvals . size () );
// Make the sorting indices and the inverse .
eigenvals . makeIndex ( index ) ;
for ( int i =1; i <= index . size () ; i ++)
inv_index ( index ( i ) ) = i ; // Fancy huh ! Thanks , HPL !
// Sort according to index .
eigenvals . sortAccording2index ( index ) ;
addToLogFile ( " Expectation values of <r >:\ n " ) ;
for ( int i = 1; i <= store_evecs ; i ++) {
//
// Create the field corresponding to eigenvector .
//
int j2 = 1; // index into eigenvector components .
// loop through each grid point .
for ( int j =1; j <= grid - > getNoNodes () ; j ++) {
if (! erase_log [j -1]) {
evec_values () ( j ) = eigenvectors ( inv_index ( i ) , j2 ) ;
j2 ++; // go to next component ...
} else {
// do nothing with j2 , but store BC .
evec_values () ( j ) = 0;
}
}
// Redim field and put the field values into it ...
evec_field - > redim ( grid () , evec_values () , aform ( " % d " , i , eigenvals ( i ) ) . c_str () );
// ... and off you go !
database - > dump ( evec_field () ) ;
// Write the expectation value <r >.
addToLogFile (
aform ( real_format . c_str () , calcExpectation_r ( evec_field () , evec_field () ) . Re () ) ) ;
if ( i < store_evecs )
addToLogFile ( " , " ) ;
else
addToLogFile ( " \ n " ) ;
// Store probability density if needed .
if ( store_prob ) {
// calculate probability density :
// for ( int j =1; j <= grid - > getNoNodes () ; j ++)
//   evec_values () ( j ) = pow ( evec_values () ( j ) . Re () ,2) + pow ( evec_values () ( j ) . Im () ,2) ;
// Dump it .
evec_field - > redim ( grid () , aform ( " prob_ % d " , i , eigenvals ( i ) ) . c_str () ) ;
calcProbabilityDensity (* evec_field , * evec_field , false ) ;
database - > dump ( evec_field () ) ;
}
}
// Get eigenvalues , print them to stdout ...
// a python - evalable file .
Os ev_file = Os ( aform ( " % s. eigenvalues " , casename . c_str () ) , NEWFILE );
addToLogFile ( " Eigenvalues ( real parts ) :\ n " ) ;
for ( int k =1; k <= eigenvals . getNoEntries () ; k ++) {
addToLogFile ( aform ( " %10.10 g " , eigenvals ( k ) ) ) ;
ev_file < < aform ( " %10.10 g " , eigenvals ( k ) ) ;
if ( k < eigenvals . getNoEntries () ) {
addToLogFile ( " , " ) ;
ev_file < < " , " ;
}
}
addToLogFile ( " \ n " ) ;
ev_file < < endl ;
}
//
// This function is not implemented .
//
void HydroEigen :: resultReport () {}
//
// calculate expectation value of r .
//
NUMT HydroEigen :: calcExpectation_r ( FieldFE & u , FieldFE & v ) {
IntegrateOverGridFE integrator ;
IntegrateOverGridFE integrator2 ;
IntegrandOfExpectation_r integrand ;
InnerProdIntegrandCalc integrand2 ;
integrand . setFields (u , v );
integrand2 . setFields (u , v) ;
integrator . volumeIntegral ( integrand , * grid ) ;
integrator2 . volumeIntegral ( integrand2 , * grid ) ;
return integrand . getResult () / integrand2 . getResult () ;
}
//
// calculate inner product of u and v .
//
NUMT HydroEigen :: calcInnerProd ( FieldFE & u , FieldFE & v ) {
IntegrateOverGridFE integrator ;
InnerProdIntegrandCalc integrand ;
integrand . setFields (u , v ) ;
integrator . volumeIntegral ( integrand , * grid ) ;
return integrand . getResult () ;
}
//
// Create ( approximate ) probability density field .
//
void HydroEigen :: calcProbabilityDensity ( const FieldFE & u , FieldFE & prob , bool redim ) {
real R , I ;
int nno = grid - > getNoNodes () ;
if ( redim )
prob . redim (* grid , " prob " ) ;
for ( int i =1; i <= nno ; i ++) {
R = u . values () ( i ) . Re () ;
I = u . values () ( i ) . Im () ;
prob . values () ( i ) = R * R + I * I ;
}
}
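The nodal operation above is simply prob(i) = Re(u_i)² + Im(u_i)² = |u_i|². A free-standing sketch with `std::complex` standing in for the Diffpack field values (the name `probabilityDensity` is hypothetical):

```cpp
#include <cassert>
#include <complex>
#include <vector>

// For each node i, return |u_i|^2; std::norm gives the squared magnitude.
std::vector<double> probabilityDensity(const std::vector<std::complex<double>>& u) {
    std::vector<double> prob(u.size());
    for (std::size_t i = 0; i < u.size(); ++i)
        prob[i] = std::norm(u[i]);   // Re^2 + Im^2
    return prob;
}
```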
//
// save a sparse matrix in a Matlab - readable format .
// Matrix ( NUMT ) :: save is not entirely reliable ...
//
void HydroEigen :: saveMatrix ( MatSparse ( NUMT ) & A , const String & Aname , const String & fname ) {
Os f ( fname , NEWFILE ) ;
SparseDS & pat = A . pattern () ;
int rows = pat . getNoRows () ;
int cols = pat . getNoColumns () ;
NUMT dummy ;
bool reals = ( sizeof ( dummy ) == sizeof ( real ) ) ;
f < < " % This is a matrix saved by HydroEigen :: saveMatrix () " < < endl ;
f < < " %   gridfile == " < < gridfile < < " . " < < endl ;
f < < endl ;
f < < Aname < < " = sparse ( " < < rows < < " , " < < cols < < " ) ; " < < endl < < endl ;
for ( int i =1; i <= rows ; i ++) {
for ( int s = pat . irow (i ) ; s <= pat . irow ( i +1) -1; s ++) {
f < < Aname < < " ( " < < i < < " , " < < pat . jcol ( s ) < < " ) = " ;
if ( reals )
f < < A ( s) < < " ; " < < endl ;
else
f < < " complex " < < A ( s ) < < " ; " < < endl ; // complex ( re , im ) ;
}
}
f - > close () ;
}
//
// makeMassMatrix2 . First argument is ref . to
// matrix that shall hold mass matrix , second
// argument has the correct pattern .
//
void HydroEigen :: makeMassMatrix2 ( Handle ( Matrix ( NUMT )) & Dest , const Handle ( Matrix ( NUMT ) ) & WithPattern ) {
// Make mass matrix according to menu choices et . c .
// Lumped matrices are always MatDiag ( NUMT ) objects .
// Not lumped are same format as right hand side matrix .
int n , m ; // matrix size .
WithPattern - > size (n , m) ;
if (! lump ) {
// Make a full mass matrix .
addToLogFile ( " ( not lumped ) \ n " ) ;
if ( mat_type == " MatSparse " ) {
// must COPY the sparsity pattern . makeItSimilar only
// copies a _reference_ to the SparseDS object . Hence ,
// modifying one matrix will destroy the structure of the other .
MatSparse ( NUMT ) & orig = ( MatSparse ( NUMT ) &) ( WithPattern () ) ;
int nnz = orig . getNoNonzeroes () ;
Dest . rebind ( new MatSparse ( NUMT )(n ,m , nnz ) ) ;
SparseDS & pattern = (( MatSparse ( NUMT ) &) Dest () ). pattern () ;
for ( int i =1; i <= m +1; i ++)
pattern . irow ( i ) = orig . pattern () . irow ( i ) ;
for ( int i =1; i <= nnz ; i ++)
pattern . jcol ( i ) = orig . pattern () . jcol ( i ) ;
} else {
// The structure of MatBand is simpler to copy ...
WithPattern - > makeItSimilar ( Dest );
}
} else {
// Create a lumped mass matrix , i . e . , a MatDiag .
addToLogFile ( " ( lumped ) \ n " ) ;
Dest . rebind ( new MatDiag ( NUMT ) ( n ) ) ;
}
makeMassMatrix (* grid , * Dest , lump ) ;
}
//
// Multiply A with inverse of D , which is diagonal .
//
void HydroEigen :: multInvMatDiag ( Handle ( Matrix ( NUMT ) ) D , Handle ( Matrix ( NUMT ) ) A ) {
if ( mat_type == " MatBand " ) {
for ( int p =1; p <= A - > getNoRows () ; p ++)
for ( int q =1; q <= A - > getNoColumns () ; q ++)
if ( (( MatBand ( NUMT ) &) A () ) . insideBand (p , q ) )
A - > elm (p , q ) /= D - > elm (p , p ) ;
}
else if ( mat_type == " MatSparse " ) {
MatSparse ( NUMT ) & X = ( MatSparse ( NUMT ) &) (* A ) ;
SparseDS & pat = X . pattern () ;
for ( int p =1; p <= pat . getNoRows () ; p ++) {
int first = pat . irow ( p ) ;
int last = pat . irow ( p +1) -1;
for ( int q = first ; q <= last ; q ++)
X ( q ) /= D - > elm (p , p ) ;
}
}
}
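The MatSparse branch above scales each row p of A by 1/D(p,p), i.e. it forms inv(D)·A for a diagonal D. On a bare 1-based CSR representation (plain vectors rather than the Diffpack types; all names hypothetical) the same operation reads:

```cpp
#include <cassert>
#include <vector>

// Scale row p of a 1-based CSR value array by 1/diag[p], for p = 1..n.
// irow[1..n+1] are the row pointers; index 0 of each vector is padding.
void scaleRowsByInvDiag(const std::vector<int>& irow,
                        std::vector<double>& val,
                        const std::vector<double>& diag, int n) {
    for (int p = 1; p <= n; ++p)
        for (int q = irow[p]; q <= irow[p + 1] - 1; ++q)  // entries of row p
            val[q] /= diag[p];
}
```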
//
// End of class definition .
//
B.3 The TimeSolver class
This is the class definition of TimeSolver, which solves the time dependent Schrödinger
equation. The main program is main.cpp, and this instantiates the TimeSolver class.
Notice that the full functionality from the base class HydroEigen is retained.
B.3.1 TimeSolver.h
# ifndef TimeSolver_h
# define TimeSolver_h
# include " HydroEigen . h "
//
// simple integrand for <x >. inlined .
//
class IntegrandOfExpectation_x : public InnerProdIntegrandCalc {
public :
IntegrandOfExpectation_x () { }
virtual void integrandsG ( const FiniteElement & fe ) {
real detJxW = fe . detJxW () ;
NUMT uval = u - > valueFEM ( fe ) ;
NUMT vval = v - > valueFEM ( fe ) ;
Ptv ( real ) x = fe . getGlobalEvalPt () ;
result += conjugate ( uval ) * vval * x (1) * detJxW ;
// integrate and store in result .
}
};
//
// simple integrand for <y >. inlined .
//
class IntegrandOfExpectation_y : public InnerProdIntegrandCalc {
public :
IntegrandOfExpectation_y () { }
virtual void integrandsG ( const FiniteElement & fe ) {
real detJxW = fe . detJxW () ;
NUMT uval = u - > valueFEM ( fe ) ;
NUMT vval = v - > valueFEM ( fe ) ;
Ptv ( real ) x = fe . getGlobalEvalPt () ;
result += conjugate ( uval ) * vval * x (2) * detJxW ;
// integrate and store in result .
}
};
// just some handy defs
# define THETA_RULE 1
# define LEAP_FROG 2
# define IC_GAUSSIAN 1
# define IC_FIELD 2
// * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *
//
// Class declaration of TimeSolver --- solving the time
// dependent Schrödinger equation . This is derived from
// HydroEigen because the two problems share many important
// properties , such as the Hamiltonian .
//
// * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *
class TimeSolver : public HydroEigen {
private :
Handle ( Matrix ( NUMT ) ) A ;
// usually holds matrix from MakeSystem . M is defined in HydroEigen .
// Time parameters .
int time_method ;         // indicates what method to use , THETA_RULE or LEAP_FROG
Handle ( TimePrm ) tip ;  // numerical clock ...
real dt ;                 // time step
real t_final ;            // final time
// parameters for varying magnetic field :
// gamma = gamma0 * pow ( sin ( t /( pi * T ) ) , 2) * cos ( omega * t + delta )
real omega ;
real delta ;
real gamma0 ;
real theta ;              // for theta rule , \ in [0 ,1]. 0.5 == Crank - Nicholson
bool do_time_simulation ; // indicates whether or not a time dep . simulation should be made .
// Report variables .
real dt_rep ;             // time interval between reports .
int n_rep ;               // number of reports during simulation .
// Initial condition parameters .
int ic_type ;                // IC_GAUSSIAN or IC_FIELD
String field_database ;      // name of the database that holds fields to load .
int field_no ;               // number of field to use as IC .
String gaussian_parameters ; // parameters for gaussFunc IC .
Handle ( FieldFE ) * fields ; // array of field handles . holds fields loaded from database .
int no_of_fields ;           // number of fields to load , i . e . , length of fields array .
// fields et . c .; recall that the grid is defined in HydroEigen .
// Handle ( FieldFE ) u ;     // u is defined in HydroEigen !
Handle ( FieldFE ) u_prev ;     // previous solution .
Handle ( Vec ( NUMT ) ) scratch ;
Handle ( Vec ( NUMT ) ) scratch2 ; // scratch vectors .
// for the integrands () function .
// defining these variables here saves some time
// when assembling the linear system at each time step .
MassMatIntg * mass_integrand ; // Handle (...) not impl . ???
ElmMatVec elmat2 ;
ElmMatVec elmat3 ;
public :
// constructors and destructors .
TimeSolver () { mass_integrand = NULL ; };
~ TimeSolver () { if ( mass_integrand ) delete mass_integrand ; };
//
// " compulsory " methods .
//
virtual void define ( MenuSystem & menu , int level = MAIN ) ;
virtual void scan () ;
virtual void solveProblem () ;
// virtual void report () ;
virtual void resultReport () { };
virtual void fillEssBC () ; // fill essential BCs .
virtual void setIC () ;     // set initial condition .
virtual void integrands ( ElmMatVec & elmat , const FiniteElement & fe ) ; // overloaded integrands .
//
// Provide magnetic field as function of time .
//
virtual real gammaFunc ( real t ) ;
// Calculate expectation value when given a matrix .
NUMT calcExpectationFr om M at ri x ( FieldFE & u , FieldFE & v , Matrix ( NUMT ) & A ) ;
//
// Report methods . Used during simulation .
//
virtual void initialReport () ;
virtual void finalReport () ;
virtual void reportAtThisTimeStep ( int t_index , int r_index ) ;
Ptv ( real ) calcExpectation_pos ( FieldFE & u , FieldFE & v ) ;
protected :
//
// Load fields ( and grid !) from database
//
void loadFields ( String & db_name ) ;
};
# endif
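The class declaration quotes the time-varying field as gamma = gamma0 * pow(sin(t/(pi*T)), 2) * cos(omega*t + delta). A free-standing sketch of that envelope formula (plain function, not the `TimeSolver::gammaFunc` member, which presumably also caches the result in the solver's `gamma` variable; the name `gammaEnvelope` and the explicit `T` parameter are assumptions for illustration):

```cpp
#include <cassert>
#include <cmath>

// gamma(t) = gamma0 * sin(t/(pi*T))^2 * cos(omega*t + delta),
// exactly as written in the class-declaration comment.
double gammaEnvelope(double t, double gamma0, double T,
                     double omega, double delta) {
    const double pi = std::acos(-1.0);
    double s = std::sin(t / (pi * T));
    return gamma0 * s * s * std::cos(omega * t + delta);
}
```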
B.3.2 TimeSolver.cpp
# include " TimeSolver . h "
# include < IntegrateOverGridFE .h >
// * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *
//
// Class definition of TimeSolver .
//
// * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *
//
// Main problem solving method .
//
void TimeSolver :: solveProblem () {
if (! do_time_simulation ) {
s_e < < " Skipping time dependent simulation . Diagonalizing instead ! " < < endl ;
HydroEigen :: solveProblem () ;
return ;
}
// Handle ( Matrix ( NUMT ) ) A , M ;
A . rebind ( NULL ) ;
M . rebind ( NULL ) ;
// holds the linear system and mass matrix .
// bool matrix_has_changed = true ;
bool first_step_lf = ( time_method == LEAP_FROG ) ;
// real last_report_time = 0; // used for report book - keeping .
int last_report_index = 0; // used for report book - keeping .
int report_index = 0;
int t_index = 0; // holds number of steps taken .
tip - > initTimeLoop () ; // initialize time loop .
// make M and A .
gammaFunc ( tip - > time () ) ; // update magnetic field .
s_e < < " Make M and A ... " < < endl ;
do_time_simulation = false ;
makeSystem (* dof , * lineq ) ; // make H ^0.
do_time_simulation = true ;
A . rebind ( lineq - > A () ) ; // fetch pointer to the element matrix .
s_e < < " Make mass matrix ... " < < endl ;
A - > makeItSimilar ( M ) ; // copy structure of A into M .
makeMassMatrix (* grid , M () , lump ) ;
setIC () ; // set initial condition .
linsol = u - > values () ; // start vector for first time step .
initialReport () ; // report various stuff
reportAtThisTimeStep (0 , 0) ; // report initial condition .
while (! tip - > finished () ) { // loop until t = t_final
tip - > increaseTime () ; // t -> t + dt
addToLogFile ( " % " ) ;
addToLogFile ( aform ( " Solving for t == % g \ n " , tip - > time () ) ) ;
// build Hamiltonian and mass matrix at this time step .
s_e < < " Essential BCs ... " < < endl ;
fillEssBC () ; // ess . BCs .
// first step in leap - frog should be a theta - rule step .
// create mass matrix also .
if ( first_step_lf ) {
time_method = THETA_RULE ;
}
//
// --- solve for leap frog method . --- //
if ( time_method == LEAP_FROG ) {
s_e < < " ( Using LEAP_FROG ) " < < endl ;
s_e < < " ( " ;
s_e < < " making system ... " ;
// make coefficient matrix M and right hand side -2* i * dt * H * u
makeSystem (* dof , * lineq ) ;
// solving a lumped system in the leap - frog method
// is very easy ! : -)
// therefore we separate the two cases .
if (! lump ) {
s_e < < " solving linear system ... " ;
lineq - > solve ( true ) ;
} else {
s_e < < " solving linear system ( by inspection ) ... " ;
Vec ( NUMT ) & rhs = ( Vec ( NUMT ) &) lineq - > b () ; // a l i t t l e bit ugly but ...
for ( int p =1; p <= linsol . size () ; p ++)
linsol ( p ) = pow ( lineq - > A () . elm (p , p ) , -1) * rhs ( p ) ;
}
// add u_prev to solution of linear system .
// linsol now holds the new u ' s field values .
linsol . add ( linsol , u_prev - > values () ) ;
* u_prev = * u ; // update u_prev for next time step .
dof - > vec2field ( linsol , * u ) ; // store new solution in field object .
s_e < < " ) " < < endl ;
}
//
// --- solve for theta rule method . --- //
if ( time_method == THETA_RULE ) {
s_e < < " ( Using THETA_RULE ) " < < endl ;
s_e < < " ( " ;
// make the system : rhs = [ M - i * dt *(1 - theta ) * H ( t - dt ) ]* u ,
// A = M + i * theta * dt * H ( t )
s_e < < " making system ... " ;
makeSystem (* dof , * lineq ) ;
s_e < < " solving linear system ... " ;
lineq - > solve ( true ) ;
// linsol holds solution
* u_prev = * u ; // update u_prev for next time step
dof - > vec2field ( linsol , * u ) ; // convert to field object .
s_e < < " ) " < < endl ;
}
// if we did a theta - rule step in the first leap - frog step ...
if ( first_step_lf )
time_method = LEAP_FROG ;
first_step_lf = false ;
// solved a new time step we did !
t_index ++;
report_index = ( int ) floor ( tip - > time () / dt_rep ) ;
// report
if ( tip - > finished () || ( report_index > last_report_index ) ) {
// update last_report_time
last_report_index = report_index ;
reportAtThisTimeStep ( t_index , report_index ) ;
}
}
// end of time loop
finalReport () ; // report various stuff .
}
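In formula form, the two time-stepping branches above solve the following systems (writing $u^\ell$ for the solution at $t_\ell = \ell\,\Delta t$; the sign and phase conventions follow the comments in the listing):

```latex
% theta rule (theta = 1/2 gives Crank-Nicholson):
\big[ M + i\theta\,\Delta t\, H(t_{\ell+1}) \big]\, u^{\ell+1}
  = \big[ M - i(1-\theta)\,\Delta t\, H(t_\ell) \big]\, u^{\ell} ,
% leap-frog:
M u^{\ell+1} = M u^{\ell-1} - 2i\,\Delta t\, H(t_\ell)\, u^{\ell} .
```

When $M$ is lumped (diagonal), the leap-frog solve reduces to a componentwise division, which is exactly the "by inspection" branch; and since leap-frog needs two previous time levels, the very first step is taken with the theta rule, as the `first_step_lf` logic arranges.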
//
// Report in time - loop
//
void TimeSolver :: reportAtThisTimeStep ( int t_index , int r_index ) {
Handle ( FieldFE ) prob ;
prob . rebind ( new FieldFE ) ;
String prefix = aform (" % s_data . " , casename . c_str () ) ;
s_e < < " Reporting ... " ;
// Store current solution if desired .
if ( store_evecs >0) {
database - > dump (* u , &( tip () ) ) ;
s_e < < " ( u saved ) " ;
}
// Store probability density if desired .
if ( store_prob ) {
// Calc . prob . density from current solution .
calcProbabilityDensity (* u , * prob , true ) ;
database - > dump (* prob , &( tip () ) ) ;
s_e < < " ( u * u saved ) ";
}
Ptv ( real ) pos = calcExpectation_pos (* u ,* u ) ;
addToLogFile ( " \ n % " ) ;
addToLogFile ( aform ( " reporting at t (% d ) == %10.10 g ;\ n " , t_index , tip - > time () ) );
addToLogFile ( prefix ) ;
addToLogFile ( aform ( " t (% d ) = %10.10 g ;\ n " , r_index +1 , tip - > time () ) ) ;
addToLogFile ( prefix ) ;
addToLogFile ( aform ( " u_norm (% d ) = %10.10 g ;\ n " , r_index +1 , calcInnerProd (* u ,* u ) . Re () ) ) ;
// addToLogFile ( aform ( " u_energy (% d ) = %10.10 g ;\ n " , r_index +1 , calcExpectationFromMatrix (* u , * u , * A ) . Re () ) ) ;
addToLogFile ( prefix ) ;
addToLogFile ( aform ( " gamma (% d ) = %10.10 g ;\ n " , r_index +1 , gammaFunc ( tip - > time () ) ) ) ;
addToLogFile ( prefix ) ;
addToLogFile ( aform ( " u_pos (% d ,:) = [%10.10 g , %10.10 g ];\ n " , r_index +1 , pos (1) , pos (2) ) ) ;
if (( t_index > 0) && !( lump && ( time_method == LEAP_FROG ) ) ) {
// retrieve performance info .
LinEqStatBlk & perf = lineq - > getPerformance () ;
SolverStatistics & stats = perf . solver_info ;
addToLogFile ( prefix );
addToLogFile ( aform ( " solve_time (% d ) = %10.10 g ;\ n" , r_index +1 , stats . cputime ) ) ;
addToLogFile ( prefix );
addToLogFile ( aform ( " solve_niter (% d) = % d ;\ n " , r_index +1 , stats . niterations )) ;
}
addToLogFile ( prefix ) ;
addToLogFile ( aform ( " system_time (% d ) = %10.10 g ;\ n " , r_index +1 , cpu_time_makeSystem ) ) ;
addToLogFile ( prefix ) ;
do_time_simulation = false ;
s_e < < " Making Hamiltonian ... " < < endl ;
gammaFunc ( tip - > time () );
makeSystem (* dof , * lineq ) ; // make H ^\ ell .
do_time_simulation = true ;
addToLogFile ( aform ( " u_energy (% d ) = %10.10 g ;\ n " , r_index +1 , calcExpectationFromMatrix (* u , * u , lineq - > A () ) . Re () ) ) ;
}
//
// Extend functionality of solver menu .
//
void TimeSolver :: define ( MenuSystem & menu , int level ) {
menu . addItem ( level , " simulation in time " , " set to false to solve eigenvalue problem instead " , " true " ) ;
int level1 = level +1;
MenuSystem :: makeSubMenuHeader ( menu , " Params for time dependent solve " , " time " , level1 , ’T ’) ;
menu . addItem ( level1 , " time method " , " time integration method " , " theta - rule " ) ; // or " leap - frog "
menu . addItem ( level1 , " T " , " final time " , " 1.0 " ) ;
menu . addItem ( level1 , " dt " , " time step " , " 0.1 " ) ;
menu . addItem ( level1 , " omega " , " omega " , " 0 " ) ;
menu . addItem ( level1 , " delta " , " delta " , " 0 " ) ;
menu . addItem ( level1 , " gamma0 " , " gamma0 " , " 0 " ) ;
menu . addItem ( level1 , " n_rep " , " number of reports " , " 10 " ) ;
menu . addItem ( level1 , " theta " , " theta " , " 0.5 " ) ;
menu . addItem ( level1 , " ic type " , " initial condition type " , " gaussian " ) ; // or " field "
menu . addItem ( level1 , " field database " , " field database " , " none " ) ; // database that should hold eigenvectors .
menu . addItem ( level1 , " ic field no " , " number of field to use as ic " , " 1 " ) ; // 1 should be ground state ...
menu . addItem ( level1 , " gaussian " , " gaussian parameters " , " -x -10 - y 0 - sx 2 - sy 2 - kx 0 - ky 1 " ) ; // used to initialize gaussFunc .
// Set up p r e v i o u s f u n c t i o n a l i t y as well .
HydroEigen :: define ( menu , level ) ;
}
//
// Extend functionality of scan().
//
void TimeSolver::scan() {
  MenuSystem& menu = SimCase::getMenuSystem();
  int ok = false;
  do_time_simulation = menu.get("simulation in time").getBool();
  //
  // If user sets 'simulation in time' to false, the HydroEigen solver is used instead.
  //
  if (!do_time_simulation) {
    // we do not want to do a time dependent simulation but rather a time independent simulation.
    s_e << "Calling HydroEigen::scan()." << endl;
    // call the old scan().
    HydroEigen::scan();
    s_e << "Back from HydroEigen::scan()." << endl;
    s_e << "Skipping initialization of time dependent solver." << endl;
    return;
  }
  // Get the time method from menu.
  // Abort if not "theta-rule" or "leap-frog"
  String s = menu.get("time method");
  ok = false;
  if (s == "theta-rule") {
    time_method = THETA_RULE;
    ok = true;
  } else if (s == "leap-frog") {
    time_method = LEAP_FROG;
    ok = true;
  }
  if (!ok)
    fatalerrorFP("TimeSolver::scan()", "Illegal time method!");
  // Get time parameters, set up tip.
  t_final = menu.get("T").getReal();
  dt = menu.get("dt").getReal();
  theta = menu.get("theta").getReal();
  gamma0 = menu.get("gamma0").getReal();
  omega = menu.get("omega").getReal();
  delta = menu.get("delta").getReal();
  tip.rebind(new TimePrm());
  tip->scan(aform("dt=%g, t in [%g,%g]", dt, 0.0, t_final));
  // Set up report parameters.
  n_rep = (int) menu.get("n_rep").getReal();
  if (n_rep < 1)
    n_rep = 1; // at least 1 report (at the end of sim)
  dt_rep = t_final/(real) n_rep;
  if (dt_rep < dt) {
    dt_rep = dt; // not too many reports...
    n_rep = (int) (t_final/dt);
  }
  // Set up IC parameters.
  field_database = menu.get("field database");
  field_no = (int) menu.get("ic field no").getReal();
  s = menu.get("ic type");
  gaussian_parameters = menu.get("gaussian");
  ok = false;
  if (s == "gaussian") { ic_type = IC_GAUSSIAN; ok = true; }
  if (s == "field") { ic_type = IC_FIELD; ok = true; }
  if (!ok)
    fatalerrorFP("TimeSolver::scan()", "Illegal IC type!");
  // Call the old scan(). Set up grid etc.
  s_e << "Calling HydroEigen::scan()." << endl;
  HydroEigen::scan();
  s_e << "Back from HydroEigen::scan()." << endl;
  // Load fields from database.
  // This overwrites the grid read by HydroEigen::scan() if fields are found/read.
  if (ic_type == IC_FIELD)
    loadFields(field_database);
  // set up u and u_prev.
  u_prev.rebind(new FieldFE);
  u_prev->redim(*grid, "u_prev");
  u.rebind(new FieldFE);
  u->redim(*grid, "u");
  // set up scratch vectors
  scratch.rebind(new Vec(NUMT));
  scratch->redim(grid->getNoNodes());
  scratch->fill(0.0);
  scratch2.rebind(new Vec(NUMT));
  scratch2->redim(grid->getNoNodes());
  scratch2->fill(0.0);
  // for integrands().
  mass_integrand = new MassMatIntg(lump); // deleted in destructor...
  elmat2.attach(*dof);
  elmat3.attach(*dof);
}
//
// Gamma as function of t.
//
real TimeSolver::gammaFunc(real t) {
  // gamma = 1;
  if (gamma0 == 0)
    gamma = 0;
  else
    gamma = gamma0 * pow(sin(M_PI*t/t_final), 2) * cos((omega*(t - 0.5*t_final) + delta)*2*M_PI);
  return gamma;
}
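Read directly off the code in gammaFunc, the pulse profile is

\[
\gamma(t) \;=\; \gamma_0 \,\sin^2\!\left(\frac{\pi t}{t_{\text{final}}}\right)
\cos\!\Bigl(2\pi\bigl[\omega\,(t - t_{\text{final}}/2) + \delta\bigr]\Bigr),
\]

with \(\gamma \equiv 0\) when \(\gamma_0 = 0\); the symbols correspond to the menu parameters gamma0, omega, delta and T.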
//
// Set homogeneous BCs.
//
void TimeSolver::fillEssBC() {
  dof->initEssBC();
  const int nno = grid->getNoNodes();
  for (int i = 1; i <= nno; i++)
    if (grid->boNode(i))
      dof->fillEssBC(i, 0.0);
}
//
// Simple gaussian functor.
//
class GaussFunc : public FieldFunc {
public:
  Ptv(real) r0;    // centre
  Ptv(real) sigma; // width
  Ptv(real) x;     // work vector
  Ptv(real) k0;    // momentum
  GaussFunc() { // default constructor: exp(-r^2/2)
    r0.redim(2); sigma.redim(2); x.redim(2); k0.redim(2);
    r0.fill(0.0);
    sigma.fill(0.5);
    x.fill(0.0);
    k0.fill(0.0);
  }
  virtual void init(real x0, real y0, real s1, real s2, real k1, real k2) { // exp(-(x-x0)^2/sigma1^2 - (y-y0)^2/sigma2^2)
    r0.redim(2); sigma.redim(2); x.redim(2); k0.redim(2);
    k0(1) = k1; k0(2) = k2;
    r0(1) = x0; r0(2) = y0;
    sigma(1) = s1; sigma(2) = s2;
    sigma(1) = 0.5/(sigma(1)*sigma(1));
    sigma(2) = 0.5/(sigma(2)*sigma(2));
    x.fill(0.0);
  };
  virtual NUMT valuePt(const Ptv(real)& r, real t = DUMMY) { // evaluate gaussian
    NUMT temp;
    x(1) = r(1) - r0(1);
    x(2) = r(2) - r0(2);
    temp = exp(Complex(0, 1)*x.inner(k0));
    x(1) = x(1)*x(1);
    x(2) = x(2)*x(2);
    return exp(-x.inner(sigma))*temp;
  };
  virtual void scan(String& s);
};
void GaussFunc::scan(String& s) {
  real x = 0, y = 0, sx = 1, sy = 1, kx = 0, ky = 0;
  int ntok = s.getNoTokens(" ");
  for (int i = 1; i <= ntok; i++) {
    String token, token2;
    s.getToken(token, i, " ");
    if (token == "-x")  { s.getToken(token2, i+1, " "); x  = token2.getReal(); }
    if (token == "-y")  { s.getToken(token2, i+1, " "); y  = token2.getReal(); }
    if (token == "-sx") { s.getToken(token2, i+1, " "); sx = token2.getReal(); }
    if (token == "-sy") { s.getToken(token2, i+1, " "); sy = token2.getReal(); }
    if (token == "-kx") { s.getToken(token2, i+1, " "); kx = token2.getReal(); }
    if (token == "-ky") { s.getToken(token2, i+1, " "); ky = token2.getReal(); }
  }
  init(x, y, sx, sy, kx, ky);
}
//
// Set initial conditions.
//
void TimeSolver::setIC() {
  // If ic_type == IC_GAUSSIAN, set up a gaussian initial
  // condition. If ic_type == IC_FIELD, we choose a loaded
  // eigenvector as IC.
  if (ic_type == IC_GAUSSIAN) {
    GaussFunc gaussian;
    gaussian.scan(gaussian_parameters);
    u_prev->fill(gaussian, 0.0);
  } else {
    if ((field_no > no_of_fields) || (field_no < 1))
      field_no = 1;
    // copy field from database.
    *u_prev = *(fields[field_no - 1]);
  }
  // ensure unit norm.
  u_prev->mult(1.0/sqrt(calcInnerProd(*u_prev, *u_prev).Re()));
  // set u = u_prev.
  *u = *u_prev;
}
//
// calculate (u, Av). uses scratch assumed to be of same length as u and v.
//
NUMT TimeSolver::calcExpectationFromMatrix(FieldFE& u, FieldFE& v, Matrix(NUMT)& A) {
  Vec(NUMT)& u_values = u.valuesVec();
  Vec(NUMT)& v_values = v.valuesVec();
  A.prod(v_values, *scratch);
  return u_values.inner(*scratch);
}
//
// function that loads the fields stored in database db_name.
// they are typically created with HydroEigen solvers over the
// same grid and Hamiltonian.
//
void TimeSolver::loadFields(String& db_name) {
  real t = 0.0;
  int nsd = 0;
  String field_name;
  String field_type;
  int component = 0, max_component = 0;
  int test = 0;
  if (db_name == "none") {
    no_of_fields = 0;
    return;
  }
  // open the simres file, e.g., SIMULATION.
  SimResFile s(db_name);
  s_e << "Dataset name == " << s.getDatasetName() << endl;
  int n = s.getNoFields(); // get number of fields stored.
  bool load_it[n]; // indicates what fields should be loaded.
  s_e << "I check " << n << " fields from " << db_name << "." << endl;
  no_of_fields = 0;
  for (int i = 0; i < n; i++) {
    s.locateField(i+1, field_name, t, nsd, field_type, component, max_component);
    // eigenvectors are stored with their number as field name, the number > 0.
    test = atoi(field_name.c_str());
    if (test > 0) {
      load_it[i] = true;
      no_of_fields++; // increase total number of fields.
    }
    else {
      load_it[i] = false;
    }
  }
  s_e << "I found " << no_of_fields << " appropriate fields to load";
  fields = new Handle(FieldFE)[no_of_fields];
  int j = 0;
  for (int i = 0; i < n; i++) {
    if (load_it[i]) {
      // allocate handles.
      fields[j].rebind(new FieldFE); // allocate a new field.
      SimResFile::readField(fields[j](), s, aform("%d", j+1), t); // read it.
      // ensure unit norm.
      fields[j]->mult(1.0/sqrt(calcInnerProd(*fields[j], *fields[j]).Re()));
      j++;
      s_e << ".";
    }
  }
  s_e << "ok" << endl;
  s_e << "Fetching the grid." << endl;
  grid.rebind(fields[0]->grid()); // grid of the first loaded field
  // because we have a new grid, we must update some stuff.
  // init the unknown u and the dof. actually not used in the simulator.
  // needed in the DegFreeFE object.
  // u.rebind(new FieldFE(*grid, "u"));
  dof.rebind(new DegFreeFE(*grid, 1)); // 1 for 1 unknown per node
  // init the linear system. (holds matrix in eigenvalue formulation)
  // FEM systems are not solved in HydroEigen, but the structure is needed.
  lineq.rebind(new LinEqAdmFE());
  lineq->scan(SimCase::getMenuSystem());
  linsol.redim(grid->getNoNodes());
  linsol.fill(0.0);
  lineq->attach(linsol);
}
//
// Do an initial report.
//
void TimeSolver :: initialReport () {
String prefix = aform (" % s_data . " , casename . c_str () ) ;
addToLogFile ( " %\ n " ) ;
addToLogFile ( aform ( " % s simulation log from case % s\ n " , " % " , casename . c_str () )) ;
addToLogFile ( " \ n % erase variables we use .\ n " ) ;
addToLogFile ( aform ( " clear % s_data ;\ n" , casename . c_str () ) ) ;
addToLogFile ( aform ( " % scasename = ’% s ’;\ n " , prefix . c_str () , casename . c_str () ) );
addToLogFile ( " %\ n \ n " ) ;
MenuSystem & menu = SimCase :: getMenuSystem () ;
addToLogFile ( " % simulation parameters : \ n " ) ;
addToLogFile ( prefix ) ;
addToLogFile ( aform ( " gridfile = ’% s ’;\ n " , menu . get (" gridfile " ) . c_str () ) ) ;
if ( time_method == THETA_RULE ) {
addToLogFile ( aform ( "\ n % s using the theta - rule .\ n" , " % " ) ) ;
addToLogFile ( prefix );
addToLogFile ( aform ( " theta = %10.10 g ;\ n " , theta ) );
}
else {
addToLogFile ( " % using the leap - frog method .\ n " ) ;
}
addToLogFile ( " % is mass matrix lumped ?\ n " ) ;
addToLogFile ( prefix ) ;
addToLogFile ( " lump = ") ;
if ( lump )
addToLogFile ( " 1;\ n " );
else
addToLogFile ( " 0;\ n " );
addToLogFile ( " \ n " ) ;
addToLogFile ( prefix ) ;
addToLogFile ( aform ( " dt = %10.10 g ;\ n " , dt ) ) ;
addToLogFile ( prefix ) ;
addToLogFile ( aform ( " t_final = %10.10 g ;\ n " , t_final ) ) ;
addToLogFile ( prefix ) ;
addToLogFile ( aform ( " gamma0 = %10.10 g ;\ n " , gamma0 ) );
addToLogFile ( prefix ) ;
addToLogFile ( aform ( " omega = %10.10 g ;\ n " , omega ) ) ;
addToLogFile ( prefix ) ;
addToLogFile ( aform ( " delta = %10.10 g ;\ n " , delta ) ) ;
if ( ic_type == IC_GAUSSIAN ) {
addToLogFile ( " \ n % using a Gaussian for IC .\ n " ) ;
addToLogFile ( prefix );
addToLogFile ( " gaussian_params = ’ ") ;
addToLogFile ( menu . get ( " gaussian " ) );
addToLogFile ( " ’;\ n " );
} else {
addToLogFile ( " % using a field from database for IC .\ nfield_database = ’ " ) ;
addToLogFile ( prefix );
addToLogFile ( menu . get ( " field database " ) ) ;
addToLogFile ( " ’;\ n " );
addToLogFile ( prefix );
addToLogFile ( " field_number = " ) ;
addToLogFile(menu.get("ic field no")); // menu item registered in define()
addToLogFile ( " ;\ n " ) ;
}
}
//
// Do a final report.
//
void TimeSolver :: finalReport () {
String prefix = aform ( " % s_data . " , casename . c_str () ) ;
addToLogFile ( " \ n % finally ...\ n " ) ;
addToLogFile ( prefix ) ;
addToLogFile ( aform ( " n_rep = length (% st ) ;\ n " , prefix . c_str () ) ) ;
addToLogFile ( prefix ) ;
addToLogFile ( aform ( " solve_total = sum (% ssolve_time ) ;\ n " , prefix . c_str () ) ) ;
addToLogFile ( prefix ) ;
addToLogFile ( aform ( " system_total = sum (% ssystem_time ) ;\ n " , prefix . c_str () ) ) ;
}
//
// new integrands for time dependent problems.
// reuses HydroEigen::integrands() as well.
//
void TimeSolver::integrands(ElmMatVec& elmat, const FiniteElement& fe) {
  // if we solve an eigenvalue problem, use HydroEigen::integrands() instead.
  if (!do_time_simulation) {
    HydroEigen::integrands(elmat, fe);
    return;
  }
  int nbf = fe.getNoBasisFunc();
  // initialize scratch objects.
  MassMatIntg mass_integrand(lump);
  elmat2.attach(*dof);
  elmat3.attach(*dof);
  elmat2.refill(elmat.elm_no);
  elmat3.refill(elmat.elm_no);
  if (time_method == LEAP_FROG) {
    // goal: A = M, b = -2*i*dt*H(t)*u(t)
    // create the leap-frog element matrix and vector.
    // element matrix = mass matrix.
    mass_integrand.integrands(elmat, fe); // add contribution from mass matrix.
    // element vector = -2*i*dt*H(t)*u
    // create a new ElmMatVec to hold the Hamiltonian contributions.
    gammaFunc(tip->time() - dt); // update gamma.
    HydroEigen::integrands(elmat2, fe);
    // fill elmat2.b with u's values.
    for (int i = 1; i <= nbf; i++)
      elmat2.b(i) = u->values()(elmat.loc2glob_u(i));
    elmat2.A.prod(elmat2.b, elmat3.b); // elmat3.b = elmat2.A * elmat2.b
    elmat3.b.mult(Complex(0, -2*dt)); // multiply Hu by -2i*dt
    elmat.b.add(elmat.b, elmat3.b); // update elmat.b
  } else
  if (time_method == THETA_RULE) {
    // create the theta-rule element matrix and vector
    // b = [M - i*dt*(1-theta)*H(t-dt)]*u
    gammaFunc(tip->time() - dt);
    mass_integrand.integrands(elmat2, fe); // add contribution from mass matrix.
    // add contrib to M*u in rhs.
    for (int i = 1; i <= nbf; i++)
      elmat2.b(i) = u->values()(elmat.loc2glob_u(i));
    elmat2.A.prod(elmat2.b, elmat3.b);
    elmat.b.add(elmat.b, elmat3.b); // elmat.b += M*u
    elmat2.A.fill(0.0);
    elmat2.b.fill(0.0);
    elmat3.b.fill(0.0);
    HydroEigen::integrands(elmat2, fe); // Hamiltonian.
    elmat2.A.mult(Complex(0, -dt*(1.0 - theta))); // ...times -i*dt*(1-theta)
    for (int i = 1; i <= nbf; i++)
      elmat2.b(i) = u->values()(elmat.loc2glob_u(i));
    elmat2.A.prod(elmat2.b, elmat3.b);
    elmat.b.add(elmat.b, elmat3.b); // elmat.b += -i*dt*(1-theta)*H*u
    // A = M + i*dt*theta*H(t)
    gammaFunc(tip->time()); // update gamma.
    elmat2.A.fill(0.0); // erase values.
    HydroEigen::integrands(elmat2, fe); // fill it
    elmat2.A.mult(Complex(0, dt*theta));
    elmat.A.add(elmat.A, elmat2.A); // elmat.A += i*dt*theta*H(t)
    mass_integrand.integrands(elmat, fe); // add M.
  }
}
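In matrix form, the element assembly in TimeSolver::integrands above implements the update stated in the code comments (\(M\) is the mass matrix, \(H\) the Hamiltonian, \(\Delta t\) the time step; the time indexing is inferred from the calls to gammaFunc):

\[
\bigl[M + i\,\Delta t\,\theta\,H(t_{n+1})\bigr]\,u^{n+1}
= \bigl[M - i\,\Delta t\,(1-\theta)\,H(t_n)\bigr]\,u^{n}
\]

for the \(\theta\)-rule, and, assuming the mass integrand supplies the \(M u^{n-1}\) contribution to the right-hand side,

\[
M u^{n+1} = M u^{n-1} - 2\,i\,\Delta t\,H(t_n)\,u^{n}
\]

for leap-frog.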
//
// calculate expectation value of position.
//
Ptv ( real ) TimeSolver :: calcExpectation_pos ( FieldFE & u , FieldFE & v ) {
IntegrateOverGridFE integrator ;
IntegrateOverGridFE integrator2 ;
IntegrandOfExpectation_x integrand_x;
IntegrandOfExpectation_y integrand_y;
InnerProdIntegrandCalc integrand2;
integrand_x . setFields (u , v ) ;
integrand_y . setFields (u , v ) ;
integrand2 . setFields (u , v ) ;
Ptv ( real ) result ; result . redim (2) ;
integrator . volumeIntegral ( integrand_x , * grid ) ;
integrator . volumeIntegral ( integrand_y , * grid ) ;
integrator2 . volumeIntegral ( integrand2 , * grid ) ;
result (1) = ( integrand_x . getResult () / integrand2 . getResult () ) . Re () ;
result (2) = ( integrand_y . getResult () / integrand2 . getResult () ) . Re () ;
return result ;
}
B.3.3 main.cpp
# include <TimeSolver.h>
int main(int argc, const char* argv[])
{
  initDiffpack(argc, argv);
  global_menu.init("Time solver test", "TimeSolver");
  TimeSolver sim;
  global_menu.multipleLoop(sim);
  return 0;
}
B.4 The EigenSolver class
This class extends the class definition found in Ref.  to include standard and generalized complex eigenvalue problems.
B.4.1 EigenSolver.h
# ifndef _EIGENSOLVER_H_
# define _EIGENSOLVER_H_
# include <FEM.h>
# include <Matrix_Complex.h>
# include <LinEqSolver.h>
# include <LinEqAdm.h>
// display warning messages
# define WARNING(s) { s_e << ">>> EigenSolver warning: " << s << endl; }
// constants used in the class definition
# define PROBLEM_UNDEFINED 0
# define PROBLEM_STANDARD 1
# define PROBLEM_GENERALIZED 2
# define MATRIX_SYMMETRIC 1
# define MATRIX_NONSYMMETRIC 2
# define MATRIX_COMPLEX 3
# define MODE_REGULAR 1
# define MODE_SHIFT_INVERT 2
# define MODE_BUCKLING 3
# define MODE_CAYLEY 4
# define MODE_COMPLEX_SHIFT 5
# define SPECTRUM_SMALLEST_MAGNITUDE 0
# define SPECTRUM_SMALLEST_ALGEBRAIC 1
# define SPECTRUM_SMALLEST_REAL 2
# define SPECTRUM_SMALLEST_IMAG 3
# define SPECTRUM_LARGEST_MAGNITUDE 4
# define SPECTRUM_LARGEST_ALGEBRAIC 5
# define SPECTRUM_LARGEST_REAL 6
# define SPECTRUM_LARGEST_IMAG 7
//
// class definition of EigenSolver.
//
class EigenSolver {
private:
  int n, nev;                 // dimension of problem, number of eigenvalues to seek
  Handle(Matrix(NUMT)) A, B;  // problem defining matrices
  int problem_kind;           // standard or generalized
  int matrix_kind;            // symm, non-symm or complex
  int comp_mode;              // regular, shift-and-invert etc.
  int spectrum_part;          // indicates what part of spectrum to find
  Mat(NUMT) eigenvectors;     // store the eigenvectors here
  Vec(NUMT) eigenvalues;      // store the eigenvalues here
  Handle(Vec(NUMT)) V, W;     // aux vectors for matrix-vector operations
  real sigma, sigma_im;       // real (and imaginary) part of shift
  // linear equations objects.
  Handle(Matrix(NUMT)) C, D;
  Handle(LinEqSolver) linear_solver;
  Handle(LinEqSolver_prm) linear_solver_prm;
  Handle(LinEqAdm) lineq;
  bool c_has_changed; // indicates whether we do the first linear solve or not
public:
  // constructors...
  EigenSolver(Handle(MenuSystem) menu_handle = NULL);
  EigenSolver(Matrix(NUMT)& the_A, int the_nev = 1, Handle(MenuSystem) menu_handle = NULL);
  EigenSolver(Matrix(NUMT)& the_A, Matrix(NUMT)& the_B, int the_nev = 1, Handle(MenuSystem) menu_handle = NULL);
  // methods for setting the matrices
  bool setA(Matrix(NUMT)& the_A);
  bool setB(Matrix(NUMT)& the_A);
  // methods for setting/getting number of eigenvalues to seek
  bool setNev(int the_nev);
  int getNev();
  // set the matrix characteristics
  bool setMatrixKind(int kind);
  // set the real and imaginary part of the spectrum shift
  void setSigma(real the_sigma);
  void setSigmaIm(real the_sigma_im);
  // set the computational mode
  bool setCompMode(int mode);
  bool setSpectrumPart(int part);
  // solve the problem, silly report and silly printing of matrices.
  void solveProblem();
  void report();
  void sillyPrint(Matrix(NUMT)& matrisen);
  // retrieve references to the internal eigenthings storage
  Mat(NUMT)& getEigenvectors();
  Vec(NUMT)& getEigenvalues();
protected:
  void resetAllMembers(); // clears everything, incl. pointers.
  // initialize the chain of objects constit. linear solver.
  void initLinearSolver(Handle(MenuSystem) menu_handle = NULL);
  // three methods that kill matrices...
  void removeA();
  void removeB();
  void removeMatrix(Handle(Matrix(NUMT)) h);
  // matrix-vector operations methods. passed to ARPACK++ objects.
  void multAx(NUMT*, NUMT*);
  void multBx(NUMT*, NUMT*);
  void multInvCDx(NUMT*, NUMT*);
  void multInvCx(NUMT*, NUMT*);
}; // EigenSolver
# endif
B.4.2 EigenSolver.cpp
# include "EigenSolver.h"
// include the ARSymGenEig class template
# include "argsym.h"
# include "argcomp.h"
//
// Implementation of EigenSolver class.
//
// Author: Simen Kvaal.
//
// Currently only symmetric problems are supported.
// The rest of the problems may easily be
// added in the SolveProblem method.
//
// Last update: Aug. 12, 2003
//
//
// reset all members. helps keeping things clean.
//
void EigenSolver::resetAllMembers() {
  removeA();
  removeB();
  n = 0;
  nev = 0;
  problem_kind = PROBLEM_UNDEFINED;
  matrix_kind = MATRIX_SYMMETRIC;
  comp_mode = MODE_REGULAR;
  spectrum_part = SPECTRUM_SMALLEST_MAGNITUDE;
  eigenvalues.redim(0);
  eigenvectors.redim(0, 0);
  removeMatrix(C);
  removeMatrix(D);
  sigma = 0;
  sigma_im = 0;
}
//
// initialize chain of objects constituting
// the linear solver.
//
void EigenSolver::initLinearSolver(Handle(MenuSystem) menu_handle) {
  // create the LinEqSolver object
  linear_solver_prm.rebind(LinEqSolver_prm::construct());
  if (menu_handle != NULL)
    linear_solver_prm->scan(*menu_handle);
  // initFromCommandLineArg("-s", linear_solver_prm->basic_method, "GaussElim");
  // commented out; I think we actually may use the settings in the menu system instead!
  linear_solver.rebind(linear_solver_prm->create());
  // create LinEqAdm object for solving linear systems.
  lineq.rebind(new LinEqAdm(EXTERNAL_STORAGE));
  lineq->attach(linear_solver());
  c_has_changed = true;
}
//
// default constructor
//
EigenSolver :: EigenSolver ( Handle ( MenuSystem ) menu_handle ) {
// make it clean...
resetAllMembers () ;
initLinearSolver ( menu_handle ) ;
V . rebind ( new Vec ( NUMT ) (0) ) ;
W . rebind ( new Vec ( NUMT ) (0) ) ;
}
//
// constructor for standard problems.
//
EigenSolver :: EigenSolver ( Matrix ( NUMT ) & the_A , int the_nev , Handle ( MenuSystem ) menu_handle ) {
resetAllMembers () ;
initLinearSolver ( menu_handle ) ;
V . rebind ( new Vec ( NUMT ) (0) ) ;
W . rebind ( new Vec ( NUMT ) (0) ) ;
// attempt to set A.
if (! setA ( the_A ) ) {
WARNING ( " Could not set A . Bailing out of constructor . " ) ;
return ;
}
nev = the_nev ;
}
//
// constructor for generalized problems.
//
EigenSolver :: EigenSolver ( Matrix ( NUMT ) & the_A , Matrix ( NUMT ) & the_B , int the_nev , Handle ( MenuSystem ) menu_handle ) {
resetAllMembers () ;
initLinearSolver ( menu_handle ) ;
V . rebind ( new Vec ( NUMT ) (0) ) ;
W . rebind ( new Vec ( NUMT ) (0) ) ;
// attempt to set A.
if (! setA ( the_A ) ) {
WARNING ( " Could not set A . Bailing out of constructor . " ) ;
return ;
}
// attempt to set B.
if (! setB ( the_B ) ) {
WARNING ( " Could not set B . Solver is now a standard solver . " ) ;
nev = the_nev ;
return ;
} else {
nev = the_nev ;
}
}
//
// remove matrices...
//
void EigenSolver :: removeA () {
removeMatrix ( A ) ;
}
void EigenSolver :: removeB () {
removeMatrix ( B ) ;
}
void EigenSolver :: removeMatrix ( Handle ( Matrix ( NUMT ) ) h ) {
h . detach () ; h . rebind ( NULL ) ;
}
//
// set matrices...
// also sets problem_kind and n, making a complete
// problem specification.
//
bool EigenSolver :: setA ( Matrix ( NUMT ) & the_A ) {
int N , M ;
// get the matrix size.
the_A . size (N , M ) ;
// the matrix must be square...
if ( N == M ) {
A . rebind ( the_A ) ;
n = N;
}
else {
return false ;
}
// set problem kind.
if ( B . getPtr () )
problem_kind = PROBLEM_GENERALIZED ;
else
problem_kind = PROBLEM_STANDARD ;
return true ;
}
//
// set B. A must be already set.
//
bool EigenSolver :: setB ( Matrix ( NUMT ) & the_B ) {
int N , M ;
// get the matrix size.
the_B . size (N , M ) ;
// the matrix must be square and of same size as A...
if (( N == M ) && ( N == n ) ) {
B . rebind ( the_B ) ;
problem_kind = PROBLEM_GENERALIZED ;
return true ;
}
else {
return false ;
}
}
// set nev and return true if success.
// nev must be in range 1..n-1
bool EigenSolver :: setNev ( int the_nev ) {
if (( the_nev >= 1) && ( the_nev <= n -1) ) {
nev = the_nev ;
return true ;
} else {
WARNING ( " nev is out of range . " ) ;
return false ;
}
}
// get nev ...
int EigenSolver :: getNev () { return nev ; }
// set spectrum part...
bool EigenSolver::setSpectrumPart(int part) {
  if ((part == SPECTRUM_SMALLEST_MAGNITUDE) ||
      (part == SPECTRUM_SMALLEST_ALGEBRAIC) ||
      (part == SPECTRUM_SMALLEST_REAL) ||
      (part == SPECTRUM_SMALLEST_IMAG) ||
      (part == SPECTRUM_LARGEST_MAGNITUDE) ||
      (part == SPECTRUM_LARGEST_ALGEBRAIC) ||
      (part == SPECTRUM_LARGEST_REAL) ||
      (part == SPECTRUM_LARGEST_IMAG)) {
    spectrum_part = part;
    return true;
  }
  WARNING("Invalid spectrum part.");
  return false;
}
// set matrix kind...
bool EigenSolver :: setMatrixKind ( int kind ) {
if (( kind == MATRIX_SYMMETRIC ) || ( kind == MATRIX_NONSYMMETRIC ) ||
( kind == MATRIX_COMPLEX ) ) {
matrix_kind = kind ;
} else {
WARNING ( " Illegal matrix kind . " ) ;
return false ;
}
return true ;
}
//
// solve the problem!
// note that only symmetric problems
// are implemented at this point,
// but adding more matrix kinds is
// really easy.
//
void EigenSolver::solveProblem() {
  bool supported_mode = false;
  char* descriptive_array[] = { "SM", "SA", "SR", "SI", "LM", "LA", "LR", "LI" };
  char* descriptive = descriptive_array[spectrum_part];
  // create a pointer to the base class in the
  // ARPACK++ hierarchy.
  ARrcStdEig<real, real>* solver = NULL;
  ARrcStdEig<real, arcomplex<real> >* solverComplex = NULL;
  typedef void (EigenSolver::*real_multfunc)(real*, real*);
  typedef void (EigenSolver::*Complex_multfunc)(arcomplex<real>*, arcomplex<real>*);
  // create instance of proper
  // class based on matrix kind and problem kind
  // and regular mode.
  // attempt at complex matrix implementation. seems to work!
if ( matrix_kind == MATRIX_COMPLEX ) {
if ( problem_kind == PROBLEM_STANDARD ) {
if ( comp_mode == MODE_REGULAR ) {
ARCompStdEig < real , EigenSolver > * temp =
new ARCompStdEig < real , EigenSolver >( n , nev , this , ( Complex_multfunc ) & EigenSolver :: multAx , descriptive ) ;
solverComplex = temp ;
supported_mode = true ;
WARNING ( " ARCompStdEig in regular mode created . " ) ;
}
}
if ( problem_kind == PROBLEM_GENERALIZED ) {
if ( comp_mode == MODE_REGULAR ) {
// use the regular mode constructor.
ARCompGenEig < real , EigenSolver , EigenSolver > * temp =
new ARCompGenEig < real , EigenSolver , EigenSolver >(n , nev , this , ( Complex_multfunc ) & EigenSolver :: multInvCDx ,
this , ( Complex_multfunc ) & EigenSolver :: multBx , descriptive )
;
solverComplex = temp ;
// set up helper matrices.
D . rebind ( A () ) ;
B - > makeItSimilar ( C ) ;
C() = B(); // note different linear system in the regular case.
supported_mode = true ;
WARNING ( " ARCompGenEig in regular mode created . " ) ;
supported_mode = true ;
}
}
}
// symmetric problems. --> requires real matrices!
if ( matrix_kind == MATRIX_SYMMETRIC ) {
// standard problems.
if ( problem_kind == PROBLEM_STANDARD ) {
if ( comp_mode == MODE_REGULAR ) {
// use the regular mode constructor
// must cast pointer due to different NUMT...
ARSymStdEig < real , EigenSolver > * temp =
new ARSymStdEig < real , EigenSolver >(n , nev , this , ( real_multfunc ) & EigenSolver :: multAx , descriptive ) ;
solver = temp ;
supported_mode = true ;
WARNING ( " ARSymStdEig in regular mode created . " ) ;
}
if ( comp_mode == MODE_SHIFT_INVERT ) {
// use the shift-and-invert mode constructor
ARSymStdEig < real , EigenSolver > * temp =
new ARSymStdEig < real , EigenSolver >(n , nev , this , ( real_multfunc ) & EigenSolver :: multInvCx , sigma ,
descriptive ) ;
solver = temp ;
A - > makeItSimilar ( C ) ;
Handle ( Matrix ( NUMT ) ) Eye ; A - > makeItSimilar ( Eye ) ; Eye () = 0.0;
for ( int i =1; i <= n ; i ++) Eye - > elm (i , i ) = 1.0;
add(C(), A(), -sigma, Eye()); // C = A - sigma*I
supported_mode = true ;
WARNING ( " ARSymStdEig in shift - and - invert mode created . " ) ;
}
}
// generalized problems.
if ( problem_kind == PROBLEM_GENERALIZED ) {
if ( comp_mode == MODE_REGULAR ) {
// use the regular mode constructor.
ARSymGenEig < real , EigenSolver , EigenSolver > * temp =
new ARSymGenEig < real , EigenSolver , EigenSolver >( n , nev , this , ( real_multfunc ) & EigenSolver :: multInvCDx ,
this , ( real_multfunc )& EigenSolver :: multBx , descriptive ) ;
solver = temp ;
// set up helper matrices.
D . rebind ( A () ) ;
B - > makeItSimilar ( C ) ;
C() = B(); // note different linear system in the regular case.
supported_mode = true ;
WARNING ( " ARSymGenEig in regular mode created . " ) ;
}
if ( comp_mode == MODE_SHIFT_INVERT ) {
// use shift-and-invert-mode constructor.
ARSymGenEig < real , EigenSolver , EigenSolver > * temp =
new ARSymGenEig<real, EigenSolver, EigenSolver>('S', n, nev, this, (real_multfunc) &EigenSolver::multInvCx,
                                                this, (real_multfunc) &EigenSolver::multBx, sigma,
                                                descriptive);
solver = temp ;
// set up linear system.
B - > makeItSimilar ( C ) ;
add ( C () , A () , - sigma , B () ) ;
supported_mode = true ;
WARNING ( " ARSymGenEig in shift - and - invert mode created . " ) ;
}
if ( comp_mode == MODE_BUCKLING ) {
ARSymGenEig < real , EigenSolver , EigenSolver > * temp =
new ARSymGenEig < real , EigenSolver , EigenSolver >( ’B ’ , n , nev , this , ( real_multfunc ) & EigenSolver :: multInvCx ,
this , ( real_multfunc ) & EigenSolver :: multAx , sigma ,
descriptive );
solver = temp ;
// set up l i n e a r s y s t e m .
B - > makeItSimilar ( C ) ;
add ( C () , A () , - sigma , B () ) ;
supported_mode = true ;
WARNING ( " ARSymGenEig in buckling mode created . ") ;
}
if ( comp_mode == MODE_CAYLEY ) {
// use b u c k l i n g mode c o n s t r u c t o r.
ARSymGenEig < real , EigenSolver , EigenSolver > * temp =
new ARSymGenEig < real , EigenSolver , EigenSolver >( n , nev , this , ( real_multfunc ) & EigenSolver :: multInvCx ,
this , ( real_multfunc ) & EigenSolver :: multAx ,
this , ( real_multfunc ) & EigenSolver :: multBx , sigma ,
descriptive );
solver = temp ;
// set up l i n e a r s y s t e m .
B - > makeItSimilar ( C ) ;
add ( C () , A () , - sigma , B () ) ;
supported_mode = true ;
WARNING ( " ARSymGenEig in Cayley mode created . " ) ;
}
}
}
if (!supported_mode) {
  WARNING("Your computational mode is not supported! Sorry.");
  return;
}
// find the eigenvalues and eigenvectors.
// copy them to internal arrays inside EigenSolver.
void* eval_ptr = 0;
void* evec_ptr = 0;
int converged = 0;
if (solver) {
  solver->FindEigenvalues();
  solver->FindEigenvectors();
  eval_ptr = (void*) solver->RawEigenvalues();
  evec_ptr = (void*) solver->RawEigenvectors();
  converged = solver->ConvergedEigenvalues();
}
if (solverComplex) {
  solverComplex->FindEigenvalues();
  solverComplex->FindEigenvectors();
  eval_ptr = (void*) solverComplex->RawEigenvalues();
  evec_ptr = (void*) solverComplex->RawEigenvectors();
  converged = solverComplex->ConvergedEigenvalues();
}
eigenvalues.redim(converged);
eigenvectors.redim(converged, n);
for (int i = 0; i < converged; i++)
  eigenvalues(i + 1) = ((NUMT*) eval_ptr)[i];
for (int i = 0; i < converged; i++)
  for (int j = 0; j < n; j++)
    eigenvectors(i + 1, j + 1) = ((NUMT*) evec_ptr)[n * i + j];
// clean up memory.
if (solver) delete solver;
if (solverComplex) delete solverComplex;
}
//
// matrix-vector multiplication: w <- A v
//
void EigenSolver::multAx(NUMT* v, NUMT* w) {
  s_e << ".";
  // let V use v as underlying pointer, and W use w.
  V->redim(v, n);
  W->redim(w, n);
  // multiply: w = A v
  A->prod(V(), W());
}
//
// matrix-vector multiplication: w <- B v
//
void EigenSolver::multBx(NUMT* v, NUMT* w) {
  s_e << ".";
  // let V use v as underlying pointer, and W use w.
  V->redim(v, n);
  W->redim(w, n);
  // multiply: w = B v
  B->prod(V(), W());
  // sillyPrint(V()); s_o << " --- Bx ---> ";
  // sillyPrint(W()); s_o << endl;
}
//
// matrix-vector multiplication: w <- inv(C) D v, v <- D v
//
void EigenSolver::multInvCDx(NUMT* v, NUMT* w) {
  s_e << "*";
  // let V use v and W use w as underlying pointers.
  V->redim(v, n);
  W->redim(w, n);
  // multiply: w = D v
  D->prod(V(), W());
  // let v <- w
  V() = W();
  // solve the linear system C x = b, i.e., w <- inv(C) v
  lineq->attach(C(), W(), V());
  lineq->solve(c_has_changed);
  if (c_has_changed) c_has_changed = false;
}
//
// matrix-vector multiplication: w <- inv(C) v
//
void EigenSolver::multInvCx(NUMT* v, NUMT* w) {
  s_e << "*";
  // let V use v and W use w as underlying pointers.
  V->redim(v, n);
  W->redim(w, n);
  // solve the linear system C x = b, i.e., C W = V
  lineq->attach(C(), W(), V());
  lineq->solve(c_has_changed);
  if (c_has_changed) c_has_changed = false;
}
//
// make a simple report, displaying matrices and so on.
//
void EigenSolver::report() {
  s_o << "n     == " << n << endl;
  s_o << "nev   == " << nev << endl << endl;
  s_o << "sigma == " << sigma << endl;
  s_o << "A     == " << endl;
  if (A.getPtr())
    sillyPrint(A());
  else
    s_o << "[undefined]";
  s_o << "B     == " << endl;
  if (B.getPtr())
    sillyPrint(B());
  else
    s_o << "[undefined]";
  s_o << endl;
}
//
// method that prints a matrix in a silly way.
//
void EigenSolver::sillyPrint(Matrix(NUMT)& matrisen) {
  int m, n;
  matrisen.size(n, m);
  s_o << "[ ";
  for (int i = 1; i <= n; i++)
    for (int j = 1; j <= m; j++) {
      s_o << matrisen.elm(i, j);
      if ((i == n) && (j == m))
        s_o << " ]" << endl;
      else
        s_o << ", ";
    }
}
//
// set the shift.
//
void EigenSolver::setSigma(real the_sigma) { sigma = the_sigma; }
void EigenSolver::setSigmaIm(real the_sigma_im) { sigma = the_sigma_im; }
//
// sets the computational mode.
//
bool EigenSolver::setCompMode(int mode) {
  bool success = false;
  // regular mode and shift-and-invert mode are always ok.
  if ((mode == MODE_REGULAR) || (mode == MODE_SHIFT_INVERT))
    success = true;
  // two more modes for generalized, symmetric problems.
  if ((problem_kind == PROBLEM_GENERALIZED) && (matrix_kind == MATRIX_SYMMETRIC)) {
    if ((mode == MODE_CAYLEY) || (mode == MODE_BUCKLING))
      success = true;
  }
  // one more mode for generalized, non-symmetric problems.
  if ((problem_kind == PROBLEM_GENERALIZED) && (matrix_kind == MATRIX_NONSYMMETRIC)) {
    if (mode == MODE_COMPLEX_SHIFT)
      success = true;
  }
  if (success)
    comp_mode = mode;
  return success;
}
//
// get references to eigenthings.
//
Mat(NUMT)& EigenSolver::getEigenvectors() { return eigenvectors; }
Vec(NUMT)& EigenSolver::getEigenvalues() { return eigenvalues; }
Bibliography

[1] Home page for FemLab. Web address: http://www.comsol.com/.
[2] C. Bottcher. Accurate quantal studies of ion-atom collisions using finite-element techniques. Phys. Rev. Lett., 48:85–88, 1982.
[3] T.N. Rescigno and C.W. McCurdy. Numerical grid methods for quantum mechanical scattering problems. Phys. Rev. A, 62, 2000.
[4] F.S. Levin and J. Shertzer. Finite-element solution of the Schrödinger equation for the helium ground state. Phys. Rev. A, 32:3285–3290, 1985.
[5] S. Kvaal. Home page for this cand. scient. project. Web address: http://www.fys.uio.no/~simenkva/hovedfag/.
[6] H. Goldstein. Classical Mechanics. Addison-Wesley, 1970.
[7] R. Shankar. Principles of Quantum Mechanics, 2nd ed. Plenum, 1994.
[8] Wikipedia – the free encyclopedia. Web address: http://www.wikipedia.org/.
[9] M.C. Gutzwiller. Chaos in Classical and Quantum Mechanics. Springer, 1990.
[10] J.J. Brehm. Introduction to the Structure of Matter. Wiley, 1989.
[11] H. Kragh. Quantum Generations: A History of Quantum Physics in the Twentieth Century. Princeton Univ. Press, 1999.
[12] B.E. Lian and O. Øgrim. Størrelser og enheter i fysikk. Universitetsforlaget, 2000.
[13] G.A. Biberman, N. Sushkin, and V. Fabrikant. Dokl. Akad. Nauk, 26:185, 1949.
[14] L. de Broglie. Ann. Phys., Lpz., 10:22, 1925.
[15] J.E. Marsden and M.J. Hoffman. Elementary Classical Analysis. Freeman, 1993.
[16] S.C. Brenner and L.R. Scott. The Mathematical Theory of Finite Element Methods. Springer, 1994.
[17] S. Larsson and V. Thomée. Partial Differential Equations with Numerical Methods. Springer, 2003.
[18] A. Galindo and P. Pascual. Mecánica Cuántica. Alhambra, 1978.
[19] J.M. Leinaas and J. Myrheim. On the theory of identical particles. Il Nuovo Cimento, 37B:1, 1977.
[20] J.-L. Basdevant and J. Dalibard. Quantum Mechanics. Springer, 2002.
[21] J.D. Jackson. Classical Electrodynamics. Wiley, 1962.
[22] P.C. Hemmer. Kvantemekanikk, 2nd ed. Tapir, 2000.
[23] J. Frøyland. Forelesninger i klassisk teoretisk fysikk II. UNIPUB, 1996.
[24] K. Rottmann. Matematisk Formelsamling. Spektrum forlag, 1999.
[25] G. Paz. On the connection between the radial momentum operator and the Hamiltonian in n dimensions. Eur. J. Phys., 22:337, 2000.
[26] S. Brandt and H.D. Dahmen. The Picture Book of Quantum Mechanics. Springer, 1995.
[27] J.M. Leinaas. Non-relativistic Quantum Mechanics – Lecture notes in FYS4110. (Unpublished), 2003.
[28] C. Kittel. Introduction to Solid State Physics, 6th ed. Wiley, 1986.
[29] A.H. MacDonald and D.S. Ritchie. Phys. Rev. B, 33:8336, 1986.
[30] M. Robnik and V.G. Romanovski. Two-dimensional hydrogen atom in a strong magnetic field. J. Phys. A: Math. Gen., 36:7923–7951, 2003.
[31] L.R. Ram-Mohan. Finite Element and Boundary Element Applications in Quantum Mechanics. Oxford, 2002.
[32] R.C. McOwen. Partial Differential Equations: Methods and Applications. Prentice-Hall, 2003.
[33] A. Tveito and R. Winther. Introduction to Partial Differential Equations – a Computational Approach. Springer, 1998.
[34] H.P. Langtangen. Computational Partial Differential Equations. Springer-Verlag, 2001.
[35] A. Goldberg, H.M. Schey, and J.L. Schwartz. Computer-generated motion pictures of one-dimensional quantum-mechanical transmission and reflection phenomena. Am. J. Phys., 35:177, 1967.
[36] P.C. Moan and S. Blanes. Splitting methods for the time-dependent Schrödinger equation. Phys. Lett. A, 265:35–42, 2000.
[37] A. Askar and A.S. Cakmak. Explicit integration method for the time dependent Schrödinger equation for collision problems. J. Chem. Phys., 68:2794, 1978.
[38] L.D. Landau and E.M. Lifshitz. Course of Theoretical Physics, Volume 1: Mechanics. Butterworth, 1976.
[39] J.B. Fraleigh and R.A. Beauregard. Linear Algebra, 3rd ed. Addison-Wesley, 1995.
[40] W.H. Press, S.A. Teukolsky, W.T. Vetterling, and B.P. Flannery. Numerical Recipes in C++, Second Edition. Cambridge, 2002.
[41] Y. Saad. Numerical Methods for Large Eigenvalue Problems. Wiley, 1992.
[42] Home page for inuTech. Web address: http://www.inutech.de/.
[43] Home page for Numerical Objects AS. Web address: http://www.nobjects.com/.
[44] Home page for Simula Research Laboratory. Web address: http://www.simula.no/.
[45] Online Diffpack documentation. Web address: http://www.nobjects.com/diffpack/refmanuals/current/classes.html.
[46] H.P. Langtangen, N.N.G. Pedersen, K. Samuelsson, and H. Semb. Finite element preprocessors in Diffpack. The Numerical Objects Report Series #1999-01, February 1, 2001.
[47] Various authors. ARPACK User Guide and ARPACK++ Reference Manual. Available online from http://www.ime.unicamp.br/~chico/arpack++/.
[48] R.L. Liboff. Introductory Quantum Mechanics, 4th ed. Addison-Wesley, 2003.
[49] W. Dörfler. A time- and space-adaptive algorithm for the linear time-dependent Schrödinger equation. Numer. Math., 73:419–448, 1996.
[50] L.B. Madsen. Gauge invariance in the interaction between atoms and few-cycle laser pulses. Phys. Rev. A, 65:053417, 2002.
[51] D. Leibfried, R. Blatt, C. Monroe, and D. Wineland. Quantum dynamics of single trapped ions. Rev. Mod. Phys., 75:281–306, 2003.
[52] J. Shertzer. Finite-element analysis of hydrogen in superstrong magnetic fields.
[53] J. Shertzer, L.R. Ram-Mohan, and D. Dossa. Finite-element analysis of low-lying states of hydrogen in superstrong magnetic fields.
[54] H. Møll Nilsen. Aspects of the Theory of Atoms and Coherent Matter and Their Interaction With Electromagnetic Fields. PhD thesis, University of Bergen, 2002.
[55] S. Albeverio, J.E. Fenstad, H. Holden, and T. Lindstrøm, editors. Ideas and Methods in Quantum and Statistical Physics. Cambridge, 1992.
[56] T.F. Jordan. Linear Operators for Quantum Mechanics. Wiley, 1969.
```