http://www.diva-portal.org
This is the published version of a paper published in Journal of Chemical Physics.
Citation for the original published paper (version of record):
Kieri, E., Holmgren, S., Karlsson, H. (2012)
An adaptive pseudospectral method for wave packet dynamics.
Journal of Chemical Physics, 137: 044111:1-12
http://dx.doi.org/10.1063/1.4737893
Access to the published version may require subscription.
N.B. When citing this work, cite the original published paper.
Copyright (2012) American Institute of Physics. This article may be downloaded for personal use only.
Any other use requires prior permission of the author and the American Institute of Physics.
Permanent link to this version:
http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-179058
THE JOURNAL OF CHEMICAL PHYSICS 137, 044111 (2012)
An adaptive pseudospectral method for wave packet dynamics
Emil Kieri,1,a) Sverker Holmgren,1 and Hans O. Karlsson2
1 Division of Scientific Computing, Department of Information Technology, Uppsala University, Sweden
2 Theoretical Chemistry, Department of Chemistry - Ångström Laboratory, Uppsala University, Sweden
(Received 4 May 2012; accepted 5 July 2012; published online 26 July 2012)
We solve the time-dependent Schrödinger equation for molecular dynamics using a pseudospectral
method with global, exponentially decaying, Hagedorn basis functions. The approximation properties of the Hagedorn basis depend strongly on the scaling of the spatial coordinates. Using results
from control theory we develop a time-dependent scaling which adaptively matches the basis to the
wave packet. The method requires no knowledge of the Hessian of the potential. The viability of
the method is demonstrated on a model for the photodissociation of IBr, using a Fourier basis in the
bound state and Hagedorn bases in the dissociative states. Using the new approach to adapting the
basis we are able to solve the problem with less than half the number of basis functions otherwise
necessary. We also present calculations on a two-dimensional model of CO2 where the new method
considerably reduces the required number of basis functions compared to the Fourier pseudospectral
method. © 2012 American Institute of Physics. [http://dx.doi.org/10.1063/1.4737893]
I. INTRODUCTION
The detailed understanding of the dynamics of chemical reactions is a fundamental challenge in chemical physics.
Mathematical models describing dynamic quantum mechanical processes have provided important tools for such understanding for almost one hundred years. Such models are based
on the time-dependent Schrödinger equation (TDSE) which
can only be solved analytically in very simplified settings.
Hence, numerical solution methods are of great importance
to provide a deeper theoretical understanding of reaction dynamics and to design and interpret experimental studies. In
this paper, we solve the TDSE for molecular dynamics,

iℏ ∂Ψ/∂t = ĤΨ,    (1)

where

Ĥ = − Σ_{j=1}^{d} (ℏ^2/(2m_j)) ∂^2/∂x_j^2 + V(x, t),    (2)
using an expansion of the wave function in a non-standard
time-adaptive basis set consisting of global but exponentially
decaying functions. x denotes the nuclear degrees of freedom.
We study the dynamics of a wave packet on a potential energy
surface (PES), as well as photo-induced and non-adiabatic
coupling of different PES. Common to the problems studied
is that the wave packet on a given PES stays localised and
that it occupies different parts of the computational domain at
different times, i.e., it has bounded, time-dependent support.
This opens the door for spatial adaptivity. We can save computational
work if we resolve only the part of space currently occupied
by the wave packet, instead of resolving the whole computational domain at all times.
a) Electronic mail: [email protected]
Basis function expansions have been dominant in computational methods for quantum molecular dynamics for decades. Well-established examples of such computational
methods, using different basis sets, are given in, e.g.,
Refs. 1–3. We use the basis set due to Hagedorn,4, 5 which is
a generalisation to arbitrary dimensions of complex Hermite
functions. A Hagedorn function subject to appropriate evolution of its parameters is the exact solution of the TDSE with a
quadratic potential. In Ref. 6, Faou et al. constructed a computational algorithm for the TDSE using a Galerkin approximation on spaces of Hagedorn functions. The algorithm can
be related to the time-dependent discrete variable representation (TDDVR) developed by Billing and co-workers.7–9 In
one spatial dimension, the Hagedorn and TDDVR basis functions are identical up to a variable substitution. However, the
Hagedorn basis is more flexible when used for problems with
multiple degrees of freedom and provides a promising framework for more challenging quantum dynamics problems. This
paper presents a new computational scheme for wave packet
dynamics which is based on the algorithm in Ref. 6, but
improves the approximation properties of the basis set by
introducing an adaptive scaling. By using well-established results from control theory we can match the support of the basis to the wave packet. We also improve the computational
complexity for practical computations by using a collocation
(pseudospectral) scheme instead of a Galerkin (spectral) approach.
Control theory has been used before to improve numerical methods for differential equations. In Ref. 10, Gustafsson et al. used control theory to design a step size controller for Runge-Kutta methods which is robust with respect
to stiffness. In this work, we use control theory for spatial
adaptivity through moving meshes. When working with moving meshes one faces two questions: where should the mesh
points be moved, and how should they be brought there? Moving mesh methods have been studied within the numerical
analysis community. An extensive, recent review is given in
Ref. 11. The most well-studied cases are formulated on time-independent, bounded domains, where the indicator functions
for where to move the mesh points typically are given through
local a posteriori error estimates. In Ref. 12, Pettey and Wyatt
presented a moving mesh method for the TDSE. They used
the Fourier representation for calculating spatial derivatives
and needed a uniform grid. They let the grid spacing be time-independent and did not move interior grid points. Instead,
they added or removed points near the boundaries according
to indicators for the support of the solution, given through
semiclassical trajectory tracing. Boundary treatment is an omnipresent issue for unbounded domain problems such as the
Schrödinger equation. On fixed grids one can either make
the domain big enough for the wave function to stay away
from the boundary, or use absorbing boundary conditions13, 14
which dampen outgoing waves. The present method adaptively changes the size and location of the computational domain such that the wave function is negligible at the boundaries.
In one spatial dimension, the Hagedorn functions are
complex Gaussians multiplied by Hermite polynomials. The
convergence of an approximation using a Hermite basis can
be perceived as slow compared to approximation in, e.g., the
Fourier basis.15 This is a consequence of the Hermite functions being global. While the Fourier basis lives on a finite
interval, the Hermite functions form a basis of L2 (R). If the
basis consists of K functions, the highest resolvable frequency grows as O(K) for the Fourier basis and as O(√K) for the Hermite basis, as K → ∞. However, at the same time, the support of the Hermite basis grows as O(√K).16 This compromise between the resolution of higher frequencies and larger support for the Hermite basis is an intriguing and potentially useful property. Also for functions of finite support, the approximation error can be reduced considerably by introducing a K-dependent scaling of the basis.17 In Ref. 18, a time-dependent scaling of a Hermite basis was used for parabolic
equations and the viscous Burgers’ equation. In the present
paper, we introduce an adaptive scaling which makes the support of the basis follow the support of the wave packet. As
noted by Billing and Adhikari,8 using the classical equations
of motion for propagating the shape parameters of the basis is
not in all cases a good choice. By using a basis which is better
adapted to the solution one gets better accuracy with fewer basis functions. The number of basis functions needed, and thus
also the computational complexity and memory requirements,
still scales exponentially with the number of dimensions. This
is a feature shared with other methods aiming at resolving
the wave function.2, 7, 19 However, the savings of being able
to use a smaller basis also grows with dimensionality. While
still constrained to low-dimensional problems, adaptivity can
make larger problems accessible for computation.
In Secs. II and III we review the construction of Hagedorn wave packets and their time evolution. We discuss the
Galerkin time propagation used in Ref. 6 and an alternative
collocation method. In Sec. IV we discuss the approximation
properties of the Hagedorn basis and give motivation to the
need for spatial adaptivity. We formulate a control target, i.e.,
conditions for the desired basis, which in Sec. V is developed
into a controller which dynamically adapts the basis to the
evolving wave function. We demonstrate the performance of
the new method on a photodissociation problem, an IBr model
with three PES, in Sec. VI. Coupling between PES is a strongly quantum mechanical phenomenon which makes classical propagation of the basis parameters infeasible. By letting the evolution of the solution determine the propagation of basis parameters, this problem is elegantly solved. We also consider
propagation on a multi-dimensional PES, a collinear model
of the CO2 molecule, in Sec. VII. The spatial adaptivity improves the resolution capabilities of the basis, reducing the
error substantially.
II. HAGEDORN WAVE PACKETS
A Hagedorn wave packet4, 5 is an expansion of the wave
function on a global basis which diagonalises the Hamiltonian for the quantum mechanical harmonic oscillator. In this
section we will review the construction of the Hagedorn basis
and the propagation algorithm of Ref. 6 for solving the TDSE
for quantum molecular dynamics. We consider the TDSE in
semiclassical scaling,
iε ∂Ψ/∂t = − Σ_{j=1}^{d} (ε^2/(2m_j)) ∂^2Ψ/∂x_j^2 + VΨ,    (3)
where the variables have been scaled as t → t/ε, m → m/ε^2. Ψ is the nuclear wave function and ε is a free scaling parameter, conventionally chosen as the reciprocal square root
of a characteristic nuclear mass. This scaling is convenient
since it scales the involved quantities to similar orders of
magnitude. It also allows us to classify problems, since the
value of ε gives an indication to how close to classical mechanics the system is. Note that we do not treat the method
semiclassically, considering convergence in the classical limit
ε → 0. Instead, we consider fixed, finite ε and a collocation method for fully quantum mechanical problems. We are
thus dependent on resolving the high-frequency oscillations
of the wave function. Let ϕk denote the d-dimensional basis
functions, where the index vector k = (k1 , . . . , kd )T , kj = 0,
1, . . . . We also introduce the unit vectors ej with (ej )i = δij .
The Hagedorn basis functions are characterised by the parameters ε ∈ R; q, p ∈ Rd ; Q, P ∈ C d×d , and are defined by
ϕ_0(x) = (πε)^{−d/4} (det Q)^{−1/2}
    × exp( (i/(2ε)) (x − q)^T P Q^{−1} (x − q) + (i/ε) p^T (x − q) ),    (4)

R̂_j ϕ_k = √(k_j + 1) ϕ_{k+e_j},    (5)

L̂_j ϕ_k = √(k_j) ϕ_{k−e_j},    (6)

with the raising and lowering operators

R̂ = (i/√(2ε)) ( P^* (x̂ − q) − Q^* (p̂ − p) ),    (7)
L̂ = −(i/√(2ε)) ( P^T (x̂ − q) − Q^T (p̂ − p) ).    (8)
Here, p̂ = −iε∇ is the momentum operator, and L̂ is the adjoint operator of R̂. Equations (5)–(8) can be combined into
the two-step recurrence relation
Q ( √(k_j + 1) ϕ_{k+e_j} )_{j=1}^{d} = √(2/ε) (x − q) ϕ_k − Q̄ ( √(k_j) ϕ_{k−e_j} )_{j=1}^{d},    (9)
which is convenient since it does not involve differentiation.
(·)_{j=1}^{d} denotes a d-dimensional vector. The parameters Q and
P are required to satisfy the compatibility relations
Q^T P − P^T Q = 0,    (10)

Q^* P − P^* Q = 2iI.    (11)
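As an illustration (our own sketch, not code from the paper; NumPy is assumed), the ground state (4) and the differentiation-free recurrence (9) give a cheap way to evaluate the one-dimensional basis:

```python
import numpy as np

def hagedorn_basis_1d(x, K, eps=1.0, q=0.0, p=0.0, Q=1.0 + 0.0j, P=1.0j):
    """Evaluate the 1D Hagedorn functions phi_0..phi_{K-1} at the points x,
    from the ground state (4) and the recurrence (9). The principal branch
    is used for the square root of the complex parameter Q."""
    x = np.asarray(x, dtype=float)
    phi = np.zeros((K, x.size), dtype=complex)
    phi[0] = (np.pi * eps) ** -0.25 * Q ** -0.5 * np.exp(
        1j / (2 * eps) * (P / Q) * (x - q) ** 2 + 1j / eps * p * (x - q))
    if K > 1:  # k = 0 step of (9); the phi_{-1} term vanishes
        phi[1] = np.sqrt(2.0 / eps) * (x - q) * phi[0] / Q
    for k in range(1, K - 1):
        phi[k + 1] = (np.sqrt(2.0 / eps) * (x - q) * phi[k]
                      - np.conj(Q) * np.sqrt(k) * phi[k - 1]) / (Q * np.sqrt(k + 1))
    return phi
```

With the default parameters (ε = 1, q = p = 0, Q = 1, P = i) the functions reduce to the orthonormal Hermite functions, so orthonormality under Gauss-Hermite quadrature is an easy sanity check.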
A wave packet is expressed as a linear combination of the
basis functions ϕk multiplied by a common phase factor,
Ψ(x, t) = e^{iS(t)/ε} Σ_k c_k(t) ϕ_k(x).    (12)
We split the potential in a local harmonic approximation and
a remainder,
V(x) = V(q) + (x − q)^T Ṽ1 + (1/2)(x − q)^T Ṽ2 (x − q) + W_q(x).    (13)
In Ref. 6, Ṽ1 = ∇V(q) and Ṽ2 = ∇^2 V(q), the gradient vector
and Hessian matrix of V (x) evaluated at q, were chosen. Let
M = diag(mj ). With the expansion (12) of the wave function,
the TDSE can be transformed to the following system of ordinary differential equations:
q̇ = M^{−1} p,    (14)

ṗ = −Ṽ1,    (15)

Q̇ = M^{−1} P,    (16)

Ṗ = −Ṽ2 Q,    (17)

Ṡ = (1/2) p^T M^{−1} p − V(q),    (18)

iε ċ = F c,   F_{jk} = ⟨ϕ_j | W_q | ϕ_k⟩.    (19)
If Ṽ1 and Ṽ2 are chosen as the gradient and Hessian of the
potential, respectively, Wq ≡ 0 and the propagation is exact
with time-independent c if the potential is harmonic. We refer to this potential splitting as semiclassical propagation of
(14)–(17). If we do not have a harmonic potential we get a
Galerkin-type approximation on the time-dependent Hilbert
space H spanned by the Hagedorn basis functions. This approximation approaches the exact solution when K → ∞,
since the Hagedorn basis is complete.5 Also, since F is a
Hermitian matrix, exponential time propagation of (19) as in
Ref. 6 is norm conserving. The matrix elements ⟨ϕ_j | W_q | ϕ_k⟩ can in general not be evaluated analytically, but are calculated
with Gauss-Hermite quadrature. The Gauss-Hermite quadrature rule using Kq nodes is exact for integrands consisting of
an exponential kernel and a polynomial part of order up to
2Kq − 1. Kq ∼ K is thus enough for accurate evaluation of
the integrals. Note that the matrix elements with the highest order basis functions are normally multiplied with very
small expansion coefficients. Loss of accuracy in quadrature
there typically has small impact on the solution as a whole.
To be safe one might want to use a few more quadrature
nodes than basis functions. Throughout this paper we will use
Kq = K + 3.
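For the one-dimensional Hermite special case (ε = 1, q = p = 0, Q = 1, P = i, where the ϕ_k are the orthonormal Hermite functions), the quadrature-based assembly of F can be sketched as follows (our own illustration, not the paper's code; NumPy assumed):

```python
import numpy as np

def galerkin_matrix(K, W, Kq=None):
    """Assemble F_jk = <phi_j|W|phi_k>, Eq. (19), by Gauss-Hermite
    quadrature with Kq = K + 3 nodes (Hermite special case, eps = 1)."""
    Kq = K + 3 if Kq is None else Kq
    g, w = np.polynomial.hermite.hermgauss(Kq)
    # Orthonormal Hermite functions h_0..h_{K-1} at the nodes
    h = np.zeros((K, Kq))
    h[0] = np.pi ** -0.25 * np.exp(-g ** 2 / 2)
    if K > 1:
        h[1] = np.sqrt(2.0) * g * h[0]
    for k in range(1, K - 1):
        h[k + 1] = (np.sqrt(2.0) * g * h[k] - np.sqrt(k) * h[k - 1]) / np.sqrt(k + 1)
    omega = w * np.exp(g ** 2)   # converts hermgauss weights to plain dx integration
    return (h * omega) @ (W(g)[:, None] * h.T)
```

Since the integrands are polynomials times the Gaussian kernel, Kq = K + 3 nodes integrate every element exactly; e.g., W ≡ 1 returns the identity matrix.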
III. TIME STEPPING
In this section we consider how to make the time stepping
more efficient. As in Ref. 6, we integrate the system (14)–(18)
with the second order accurate Störmer-Verlet scheme, splitting the Hamiltonian in its kinetic and potential parts. The
coefficients c were in Ref. 6 propagated using exponential integration with a Krylov subspace method. Direct application
of an exponential integrator on the Galerkin equation (19) is
expensive. In each time step we need to evaluate the K × K matrix F. Each element in F is defined as an integral, which
is evaluated with Gauss-Hermite quadrature using Kq ∼ K
points. The computational effort in each time step then scales
as O(K^3). For practical computations this quickly becomes
prohibitive and an approach for reducing the computational
cost is needed. We tackle this problem by using a collocation
approach instead of the Galerkin method. In each time step
we then need to
- evaluate the wave function in Kq quadrature points,
- apply the action of Wq, and
- project back to the Hagedorn space.
In the Hagedorn space, the wave function is given as a
linear combination of K basis functions. Evaluating in Kq
quadrature points,
Ψ(γ_j, t_n) = Σ_{k=0}^{K−1} c_k ϕ_k(γ_j),   j = 1, …, K_q,    (20)
requires O(K^2) operations. We do not need to consider the phase factor e^{iS/ε} since it will cancel out in the projection step.
In coordinate space the potential energy operator is diagonal.
Thus applying the action of Wq ,
Ψ(γ_j, t_{n+1}) = e^{−iΔt W(γ_j)/ε} Ψ(γ_j, t_n),   j = 1, …, K_q,    (21)
is done in O(K) operations. Projecting back to the Hagedorn
space requires the evaluation of K integrals,
c_k = ⟨ϕ_k | Ψ(·, t_{n+1})⟩,    (22)
at the total cost of O(K^2) operations. Hence, using collocation instead of Galerkin reduces the computational complexity from cubic to quadratic in the number of basis functions.
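The three steps above can be sketched in one dimension, again for the Hermite special case ε = 1, q = p = 0, Q = 1, P = i (our own illustration; NumPy assumed):

```python
import numpy as np

def hermite_functions(x, K):
    # Orthonormal Hermite functions h_0..h_{K-1}: the Hagedorn basis
    # for eps = 1, q = p = 0, Q = 1, P = i.
    h = np.zeros((K, x.size))
    h[0] = np.pi ** -0.25 * np.exp(-x ** 2 / 2)
    if K > 1:
        h[1] = np.sqrt(2.0) * x * h[0]
    for k in range(1, K - 1):
        h[k + 1] = (np.sqrt(2.0) * x * h[k] - np.sqrt(k) * h[k - 1]) / np.sqrt(k + 1)
    return h

def collocation_potential_step(c, W, dt):
    """One collocation step for the potential part, Eqs. (20)-(22)."""
    K = len(c)
    gamma, w = np.polynomial.hermite.hermgauss(K + 3)   # Kq = K + 3 nodes
    phi = hermite_functions(gamma, K)
    psi = phi.T @ c                              # Eq. (20): evaluate, O(K^2)
    psi = np.exp(-1j * dt * W(gamma)) * psi      # Eq. (21): pointwise phase, O(K)
    omega = w * np.exp(gamma ** 2)               # quadrature weights for int f dx
    return (phi * omega) @ psi                   # Eq. (22): project back, O(K^2)
```

With W ≡ 0 the step is an exact round trip, and for real W the projection can only decrease the norm, consistent with the discussion below.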
If we take the limit Δt → 0 we reobtain (19), i.e., the collocation method is consistent with the Galerkin approximation. However, as a consequence of the orthogonal projection L^2 → H in (22) the method is no longer norm conserving, and it is formally only first order accurate in Δt. We now argue that this is of minor importance.
[Figure 1: two log-log panels showing the ℓ^∞-error versus the time step Δt.]
FIG. 1. Error as function of the time step for Galerkin (crosses) and collocation (plain lines). Errors compared to a reference solution within the Hagedorn space are shown with dashed lines, and errors compared to a spatially well-resolved reference are shown with solid lines. The TDSE is integrated up to t = 1 with 64 basis functions in (a), and to t = 5 with 8 basis functions in (b).
When propagating the coefficients c given an approximation space H, we make two different errors: the O(Δt^2) splitting error between the kinetic and potential parts of the Hamiltonian, and the O(Δt) projection error. However, the
projection error is non-zero only when the solution extends
outside the approximation space. Then the error induced by
the truncation of the basis will be dominant anyway, since the
time discretisation error is a result of inability to represent
the solution well in the approximation space. As an example,
consider the one-dimensional (1D) TDSE for the torsion potential, V = 1 − cos x. We use ε = 0.1, and a Gaussian initial
condition with parameters q = 1, p = 0, Q = 1, and P = i. If
we integrate up to t = 1 using 64 Hagedorn basis functions,
the solution will be well resolved. Consequently, the projection error will be negligible and collocation and Galerkin will
yield essentially identical solutions, as can be seen in Figure 1(a). However, if we integrate up to t = 5 with only 8
basis functions we will not be able to resolve the solution,
and we will get an O(Δt) projection error from collocation.
If we compare to a reference solution calculated with small
time steps but the same number of basis functions, i.e., study
the accuracy of time-integration only, the Galerkin approach
gives much smaller error. It will be second order accurate,
compared to first order for collocation. But if we compare
to a spatially well-resolved solution, we see that this does
not transfer to the full partial differential equation problem.
In Figure 1(b) we see how the spatial error dominates. We
thus argue that collocation is more sensible computationally,
though Galerkin is more elegant mathematically.
The reference solution was calculated with short-iterative Lanczos (SIL) time integration with a time step Δt = 2^{−10} ≈ 0.00098, and a well-resolved Fourier basis consisting of 128 basis functions on the interval [−π, π).
IV. ADAPTIVE SCALING OF THE BASIS SET
We now turn to the approximation properties of the
Hagedorn basis and present new tools for optimising them.
The K_q optimal interpolation nodes, given a one-dimensional Gauss-Hermite basis set {ϕ_k}_{k=0}^{K−1}, are the zeros of ϕ_{K_q}, i.e.,
the nodes for the corresponding Gauss-Hermite quadrature
rule. For a motivation, as well as a description of the relation between Gauss quadrature and interpolation, we refer to
Chap. 4.7 of Ref. 20. We denote the quadrature nodes by
γ1 < γ2 < · · · < γKq . These nodes give us an indication of
our ability to resolve the wave function. The node spacing determines the highest frequency that can be resolved, and the
extension of the node set determines the support of the basis. If the wave function is not small at the most peripheral
nodes the representation will suffer from oscillations because
of the Runge phenomenon. The node distribution for Gauss-Hermite quadrature is not far from uniform on the interval
[γ1 , γKq ], but slightly denser close to the midpoint. As was
mentioned in the Introduction, the node spacing Δx decreases as O(1/√K) as the number of basis functions K increases, while the distance between the most peripheral nodes γ_1 and γ_{K_q} increases as O(√K). Here, Δx is an approximate measure since the node spacing is not fully uniform. The ability to resolve high-frequency oscillations within the support of a wave packet thus increases as O(√K), compared to O(K) for the Fourier and Chebyshev bases. The Fourier and Chebyshev bases are however defined on a finite interval I ⊂ R. If the function to be approximated is defined on an infinite domain one has to truncate it, introducing a domain truncation error. For the method to converge, I must approach R as K → ∞. If we let |I| grow as O(√K) the highest resolvable frequency would also grow as O(√K), just as for Hermite functions. For a more detailed discussion of different approximation techniques for infinite domains, see, e.g., Chap. 17 of Ref. 16.
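These scalings are easy to verify numerically (our own check, assuming NumPy's hermgauss routine): the extent of the Gauss-Hermite node set grows like √K while the finest node spacing shrinks like 1/√K.

```python
import numpy as np

# Extent of the Gauss-Hermite node set grows as O(sqrt(K)); the node
# spacing near the midpoint shrinks as O(1/sqrt(K)).
for K in (16, 64, 256):
    g, _ = np.polynomial.hermite.hermgauss(K)
    extent = g[-1] - g[0]
    dx = np.min(np.diff(g))      # finest spacing, near the midpoint
    print(K, extent / np.sqrt(K), dx * np.sqrt(K))
```

Both printed ratios settle toward constants as K grows, confirming the O(√K) compromise described above.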
Billing and Adhikari noted8 that propagation of the width
parameter of the Gauss-Hermite basis determined by a harmonic approximation was not optimal in all cases. In particular, they were concerned by the basis occasionally getting very
narrow support, resulting in limited ability to resolve peripheral parts of the wave packet. Their solution was a fixed-width
treatment where Ṽ2 was chosen such that the width parameter
stayed constant, similar to the frozen Gaussians of Heller.21
Here, we present a time-dependent scaling which adapts the
basis to the wave packet.
Any wave packet will have bounded support. If the
spatial variables are scaled, the support of the basis functions can be adapted to match the wave packet. In Ref. 17,
Tang illustrated how the convergence of interpolation can be sped up by scaling the basis functions. By choosing a K-dependent scaling factor, we can weight the importance of resolving high frequencies against that of covering large domains. As long as Δx → 0 and (γ_{K_q} − γ_1) → ∞, the approximation will converge as K → ∞. The importance of choosing an appropriate scaling factor is illustrated in Figure 2, where a function is projected to two different Hagedorn spaces. The spaces differ by a factor two in the spread parameter Q, resulting in a drastic difference in the ability to represent the function in question. In real simulations with finite-precision arithmetic, the condition of formal asymptotic convergence as K → ∞ can be relaxed. We may instead scale the basis functions so that |Ψ(x)| ≤ δ outside [γ_1, γ_{K_q}] for some small δ which determines the
[Figure 2: u(x) and its projection, panels (a) and (b).]
FIG. 2. Projection of u(x) = cos(10x) e^{−x^2} to two different Hagedorn spaces, both with K = 32 basis functions. The figures show u(x) with solid lines and its projection with dashed lines. In both cases, ε = 1, q = p = 0, P = i/Q. In (a), Q = 1 and in (b), Q = 0.5. The ℓ^∞-errors are 0.93 and 1.8 × 10^{−5}, respectively. In (b), the two lines are indistinguishable.
domain truncation error. The philosophy of this approach is
similar to that of domain truncation dependent methods, such
as the Fourier method. If the domain truncation is chosen independently of the number of basis functions in the Fourier
method there will be a non-vanishing domain truncation error, destroying formal convergence. The support of the Hagedorn basis functions is determined by the parameter Q and
the semiclassical scaling parameter ε. The variance of the
one-dimensional basis function ϕ k is (2k + 1)(ε/2)QQ*, and
we use QQ* to control the support of the basis set.
The unscaled Gauss-Hermite nodes γ̃_k are the zeros of the Hermite polynomial of order K_q, and can be calculated as the eigenvalues of a tridiagonal matrix.22 Let γ̃_k = (γ̃_{k_1}, γ̃_{k_2}, …, γ̃_{k_d})^T. Then the scaled and translated nodes corresponding to a Hagedorn wave packet are given by

γ_k = q + √(ε QQ^*) γ̃_k.    (23)

The square root of a symmetric positive definite matrix is defined through its eigendecomposition,

√(QQ^*) = U √Λ U^T,    (24)

where U is an orthogonal matrix and Λ = U^T QQ^* U is diagonal. For simplicity we will restrict Q and P to be diagonal,
i.e., we will not skew or rotate the grid. The Hagedorn basis
will still be a complete basis of L^2(R^d). We want |Ψ(x)| ≤ δ everywhere outside the convex hull of {γ_k}. Keeping track of the smallest set fulfilling this property is straightforward in one dimension, where we only need to track its two boundary points. In multiple dimensions the task is nontrivial. We settle for tracking its boundary points along a number of rays originating from q, extending in the directions of the positive and negative coordinate axes. These boundary points, indexed by i, can be expressed as

x_δ^{(i)} = q + x̂_δ^{(i)} r_δ^{(i)},    (25)

where the x̂_δ^{(i)} are time-independent unit vectors and the r_δ^{(i)} are scalar. If δ is very small, ∇Ψ(x) will be small close to x_δ^{(i)}, and the determination of r_δ^{(i)} will be ill-conditioned. We therefore choose a moderately small δ, e.g., 10^{−3}, and take advantage of the exponential asymptotic decay of the wave function. If the wave function is approximately Gaussian, |Ψ(q + x̂_δ^{(i)} η r_δ^{(i)})| ≈ δ^η, where η can be chosen arbitrarily big without loss of stability. The condition for validity of this approximation is that at r ≥ r_δ^{(i)} the polynomial part of the wave function should have negligible influence compared to the exponential part, and can be treated as a constant. There will then exist quadrature nodes beyond r_δ^{(i)} along each ray. We generate the value of the wave function at all γ_k; the values will be needed again when propagating the expansion coefficients c. Along each ray, we use the last quadrature node where |Ψ(γ_k)| > δ as approximation for x_δ^{(i)}. We then determine the desired shape of the grid at negligible cost. Let γ_{k_i} denote the Gauss quadrature node which is the most peripheral in direction x̂_δ^{(i)} and centred in all other directions. The desired value of QQ^*, which we call the control signal, reads

R = QQ^* such that |Ψ(γ_{k_i})| ≈ δ^η.    (26)

Equivalently, we have the control target γ_{k_i} = q + x̂_δ^{(i)} η r_δ^{(i)}. Since Q and P are diagonal it is sufficient to treat each dimension, and sign, separately. If we consider only the coordinate direction along which x̂_δ^{(i)} is aligned, the control signal can be expressed as

R_i = (η r_δ^{(i)})^2 / (ε γ̃_{K_q}^2).    (27)

For systems with several PES, we scale δ by the ℓ^2-norm of the wave function on each PES.

V. THE CONTROLLER

In Ref. 6, Q was propagated according to the local harmonic approximation of the potential. The splitting (13) was made according to the Taylor expansion so that Ṽ1 = ∇V(q) and Ṽ2 = ∇^2 V(q). By using a different splitting of the potential we can control the value of QQ^*, and thus the support of the basis functions. The matrix Ṽ2 can be treated as a free parameter, which we will require to be real and diagonal. An alternative splitting will no longer yield the exact solution for the harmonic oscillator with a truncated Hermite expansion, but we do not lose convergence for general potentials as long as Δx → 0 and γ_{K_q} → ∞ as K → ∞. By dynamically controlling the support of the basis we put the effort on approximating the wave packet in its support, giving a more accurate representation. Another advantage of this splitting is that we no longer need the Hessian of the potential, which normally is expensive to calculate.

We now consider the problem of how to choose Ṽ2 in order to give QQ^* the desired value. This can be achieved using standard tools from control theory.23 We reformulate Eqs. (16) and (17) for Q and P as

(d/dt)(QQ^*) = M^{−1} P Q^* + Q P^* M^{−1},    (28)

(d/dt)(P Q^*) = −Ṽ2 QQ^* + P P^* M^{−1},    (29)
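In one dimension, with scalar q and QQ*, the node construction of Eq. (23) and the control signal of Eq. (27) can be sketched as follows (our own illustration, not the paper's code; NumPy assumed, with the unscaled nodes obtained as eigenvalues of the tridiagonal Jacobi matrix as in Ref. 22):

```python
import numpy as np

def unscaled_nodes(Kq):
    # Zeros of the Hermite polynomial of order Kq, computed as the
    # eigenvalues of the symmetric tridiagonal Jacobi matrix.
    off = np.sqrt(np.arange(1, Kq) / 2.0)
    return np.linalg.eigvalsh(np.diag(off, 1) + np.diag(off, -1))

def control_signal(psi_on_nodes, gamma, gamma_tilde, q, eps, delta=1e-3, eta=1.5):
    # r_delta: distance from q to the outermost node with |Psi| > delta,
    # cf. Eq. (25); the control signal R then follows from Eq. (27).
    big = np.abs(psi_on_nodes) > delta
    if not np.any(big):
        return None                      # packet not visible on this grid
    r_delta = np.max(np.abs(gamma[big] - q))
    return (eta * r_delta) ** 2 / (eps * np.max(gamma_tilde) ** 2)

# Example: current grid from Eq. (23), Gaussian packet centred at q
eps, q, QQstar = 0.01, 0.5, 1.0
gt = unscaled_nodes(35)                  # Kq = 35 unscaled nodes
gamma = q + np.sqrt(eps * QQstar) * gt   # Eq. (23)
psi = np.exp(-(gamma - q) ** 2 / (2 * eps))
R = control_signal(psi, gamma, gt, q, eps)
```

In this example the packet is narrower than the current grid, so the returned control signal R is smaller than the current QQ*, i.e., the controller would contract the basis.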
(d/dt)(Q P^*) = M^{−1} P P^* − QQ^* Ṽ2^T,    (30)

(d/dt)(P P^*) = −Ṽ2 Q P^* − P Q^* Ṽ2^T.    (31)
By the compatibility relations (10) and (11), Im(PQ^*) = I at all times. We let X_2 = Re(PQ^*) and use the fact that all involved matrices are diagonal and therefore commute and are symmetric. The system (28)–(31) can then be rewritten as

(d/dt)(QQ^*) = 2M^{−1}X_2,    (32)

(d/dt)X_2 = −Ṽ2 QQ^* + P P^* M^{−1},    (33)

(d/dt)(P P^*) = −2Ṽ2 X_2,    (34)

where all terms are real. QQ^*, PP^*, and M are positive definite. The conditions for steady state are

P P^* = Ṽ2 QQ^* M,   X_2 = 0,   (d/dt)Ṽ2 = 0,    (35)

and the second time derivative of QQ^* is

(d^2/dt^2)(QQ^*) = 2M^{−1} P P^* M^{−1} − 2M^{−1} Ṽ2 QQ^*.    (36)

We now consider the dynamical system (32)–(34) as a control system with output signal QQ^*, input signal Ṽ2, and control signal R(t), where R is a diagonal d × d matrix. The objective is to choose the input signal Ṽ2 such that the output signal QQ^* stays close to the control signal R. We first make an exact linearisation of the control system by choosing

Ṽ2 = P P^* M^{−1} (QQ^*)^{−1} − (1/2) M G (QQ^*)^{−1},    (37)

G = G(QQ^*, X_2, R).    (38)

QQ^* is positive definite and therefore invertible. This choice of potential splitting yields

(d^2/dt^2)(QQ^*) = G,    (39)

if G is real and diagonal. Equation (39) is then the well-known double integrator, which can be controlled to arbitrary accuracy by a proportional-derivative (PD) controller,

G = −α(QQ^* − R) − 2βM^{−1}X_2.    (40)

Here, α and β are free, positive parameters. The first term is proportional to the deviation of QQ^* from the control signal, and the second term is proportional to the time derivative of QQ^*. The performance of the controller is determined by the poles of the Laplace transformed control system (39) and (40),

L{QQ^*}(s) = αL{R}(s)/(s^2 + βs + α) + ( s QQ^*|_{t=0} + β QQ^*|_{t=0} + (d/dt)QQ^*|_{t=0} ) / (s^2 + βs + α).    (41)
By choosing the parameters α and β we can place the poles
of the system arbitrarily. Pole placement is a standard procedure for controller design.23 The performance of a control
system is conventionally measured through its step response,
i.e., how the output signal reacts to a unit discontinuity in the
control signal. The more negative the real parts of the poles,
the faster the response. The angle to the real axis determines
how oscillatory the response is. If the poles have non-zero
imaginary parts there will be overshoot in the step response,
and the larger the angle, the more overshoot. By placing the
poles far enough out in the left half-plane we can get an arbitrarily fast, arbitrarily well-damped step response. The price is
increased stiffness in the ordinary differential equation (17).
Thus, the larger α and β we choose the smaller time steps
we need to take, but typically it is enough to keep them in
a regime where accuracy requirements on the solution limit
the time step. Steps in the control signal may occur if a hump
emerges in the periphery of the wave function, or if a hump
drops below |(x)| = δ. Such steps may result in substantial discontinuities in Ṽ2 , which can impair the stability of
the method. It is therefore convenient to put a cap on |G|.
Since the adaptive scheme is formulated as a double integrator, noise in the control signal is smoothed out. The method is
robust to discontinuities, and can increase the accuracy of the
spatial approximation substantially.
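As an illustration of the pole-placement argument, the PD-controlled double integrator can be simulated directly. The sketch below is our toy model, not the paper's implementation: dimensionless variables, a unit step in the control signal R, and an optional cap on |G| mimicking the safeguard described above. With α = 25 and β = 10 the polynomial s² + βs + α has a double root at s = −5, so the step response is fast and critically damped.

```python
def simulate_pd(alpha, beta, r=1.0, dt=1e-3, t_end=10.0, g_cap=None):
    """Step response of the double integrator y'' = G under the PD law
    G = -alpha*(y - r) - beta*y', with an optional cap on |G|."""
    y, v = 0.0, 0.0
    for _ in range(int(t_end / dt)):
        g = -alpha * (y - r) - beta * v
        if g_cap is not None:
            g = max(-g_cap, min(g_cap, g))
        v += dt * g           # semi-implicit Euler: update velocity first
        y += dt * v
    return y

# alpha = 25, beta = 10 places a double pole at s = -5: critically damped,
# so the step response approaches r = 1 quickly and without overshoot.
```

Capping |G| slows the initial transient but, as in the text, the controller still settles on the target once the PD output falls below the cap.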
It is important to note that the present methodology considers a fully quantum mechanical description of the molecular system. It is intended, and will only work, for resolved
computations. If important high frequency features of the
wave function are not resolved, the expansion coefficients ck
for the highest order basis functions will get significant moduli due to aliasing error. We will then suffer from the Runge
phenomenon. This is analogous to the Fourier method, where
we need the solution to be essentially zero close to the boundaries. If it is not, the periodic boundary conditions will induce error. In the present approach, the adaptivity will in this
case add fuel to the fire. It will include the features caused
by the Runge phenomenon in the effective computational domain by stretching the basis, making it even less able to resolve high frequencies. If one cannot afford to resolve the
high-frequency content of the wave function one should go for
semiclassical methods using uncoupled basis functions.5, 24
To illustrate the new methodology, we apply it to the
1D torsion potential problem discussed in Sec. III. Using
K = 32 basis functions, the time step Δt = 2⁻¹⁰, α = 25,
β = 10, δ = 10−3 , and η = 1.5, we reduce the pointwise
error by an order of magnitude to 3.9 × 10−5 , compared to
8.2 × 10−4 with semiclassical propagation of (16) and (17).
This supports our proposition that semiclassical evolution of
the basis is not necessarily optimal for a collocation scheme.
FIG. 3. (a) The time evolution of the uncertainty parameter QQ*. The solid line indicates control-based propagation and the dotted line semiclassical propagation. The dashed line indicates the control signal. (b) A lin-log plot of the squared modulus of the wave function at time t = 5. The crosses show the location of the collocation points in the control scheme.

In Figure 3 we show the time evolution of the uncertainty
parameter Q, as well as the Gauss point distribution. Note
how the wave function decays rapidly close to the peripheral
Gauss points, and is negligible outside the support of the basis. The jaggedness of the control signal is a consequence of
the crude approximation of xδ . One can make it smoother by
applying linear interpolation or a few steps of bisection on the
interval (γₖ, γₖ₊₁) where the last sign change of |Ψ(x)| − δ
occurs. This would however be of disputable benefit, judging
from the smoothness of the output signal QQ*. This amount
of noise in the control signal may appear as disturbing from a
numerical or physical standpoint, but it is a normal situation in
the engineering applications for which the PD controller was
developed.
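The scan-plus-bisection refinement of x_δ mentioned above can be sketched as follows. The function and its arguments are hypothetical, and for simplicity a uniform scan grid stands in for the Gauss points γₖ.

```python
import numpy as np

def x_delta(psi, a, b, delta, n_scan=64, n_bisect=40):
    """Rightmost x in [a, b] with |psi(x)| = delta: coarse scan for the
    last sign change of |psi(x)| - delta, then bisection on that cell."""
    xs = np.linspace(a, b, n_scan)
    f = np.abs(psi(xs)) - delta
    cells = np.nonzero(f[:-1] * f[1:] < 0)[0]
    if cells.size == 0:
        return None               # wave function never crosses the level
    i = cells[-1]
    lo, hi, flo = xs[i], xs[i + 1], f[i]
    for _ in range(n_bisect):
        mid = 0.5 * (lo + hi)
        fmid = abs(psi(mid)) - delta
        if flo * fmid <= 0:
            hi = mid
        else:
            lo, flo = mid, fmid
    return 0.5 * (lo + hi)
```

For a unit Gaussian and δ = 10⁻³ this locates the rightmost crossing at x = √(−2 ln δ) to machine-level accuracy.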
VI. PHOTODISSOCIATION OF IBr

We now move on to a more realistic problem. We solve the TDSE (1) for the IBr molecule with the three electronic states X¹Σ₀⁺, B³Π₀⁺, and Y₀⁺, as has been done previously in, e.g., Ref. 25. A schematic overview of the molecular system is given in Figure 4. The first vibrational eigenstate φ₀⁽¹⁾ of the ground electronic state X¹Σ₀⁺ is taken as initial condition. The X¹Σ₀⁺ and B³Π₀⁺ states are coupled by a laser pulse f(t), and the B³Π₀⁺ and Y₀⁺ states have a non-adiabatic coupling Vc. The full Hamiltonian of the three-state system then reads

Ĥ = ( Ĥ₁     μf(t)   0
      μf(t)  Ĥ₂      Vc
      0      Vc      Ĥ₃ ).    (42)

FIG. 4. The PES of the IBr molecule with sketched wave packets and grids. The equidistant Fourier grid of the X¹Σ₀⁺ state stays fixed, while the Gauss-Hermite grids in the two excited states follow the wave packets.

The local Hamiltonians are defined by

Ĥᵢ = −(ℏ²/2m) ∂²/∂x² + Vᵢ(x),    i = 1, 2, 3,    (43)

where m = 49.0317 a.u. is the reduced mass of the molecule and Vᵢ(x) are the PES. The potentials and pulse parameters are adopted from Refs. 26 and 25, respectively. The coupling term f(t) = g(t)cos(ωt) models a laser pulse with envelope function

g(t) = E₀ exp( −(t − t₀)² / (2σ²) ).    (44)

The dipole moment μ is taken to be constant, and the pulse parameters are λ = 2πc/ω = 500 nm, μE₀ = 219 cm⁻¹, t₀ = 60 fs, and σ = 50/(2√(2 ln 2)) fs. The non-adiabatic coupling Vc = 150 cm⁻¹. A system of this type poses several
computational difficulties:
• The excited states are dissociative. The wave packets thus move over extended distances, and a fixed-grid treatment leads to inefficient use of grid points. For long time propagation, fixed grids require the use of absorbing boundary conditions.
• We want the bases on the respective PES to be able to evolve independently. Implementing the coupling between the states for non-coinciding time-dependent grids is non-trivial.
• The coupling is a quantum phenomenon which occurs during an extended period of time. Evolving the basis according to classical mechanics would give a basis which is poorly adapted to the solution.
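For concreteness, the pulse of Eq. (44) with the quoted parameters can be evaluated as follows. This is an illustrative sketch in fs/nm units with E₀ normalised to 1; the constant names are ours, and the dipole strength μE₀ (given in cm⁻¹) is left out.

```python
import numpy as np

C = 299.792458                                 # speed of light, nm/fs
OMEGA = 2 * np.pi * C / 500.0                  # carrier frequency for 500 nm, 1/fs
T0 = 60.0                                      # pulse centre, fs
SIGMA = 50.0 / (2 * np.sqrt(2 * np.log(2)))    # sigma for a 50 fs FWHM envelope, fs

def g(t, e0=1.0):
    """Gaussian envelope of Eq. (44), E0 normalised to 1 by default."""
    return e0 * np.exp(-(t - T0) ** 2 / (2 * SIGMA ** 2))

def f(t, e0=1.0):
    """Laser coupling f(t) = g(t) cos(omega*t)."""
    return g(t, e0) * np.cos(OMEGA * t)
```

By construction g peaks at t₀ = 60 fs and has fallen to half its maximum 25 fs on either side, i.e., a 50 fs full width at half maximum.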
Simulation of molecular systems with several PES using Hagedorn wave packets has previously been considered
in Ref. 27. In the method proposed there one common set
of parameters {q, p, Q, P, S}, propagated according to one,
guiding, PES was used for the wave packets on all surfaces.
Each PES had its own set of expansion coefficients c, and the
non-diagonal part of the potential, the coupling of the surfaces, was treated together with its non-quadratic remainder.
This approach may have difficulties if the wave packets on
different PES move away from each other. A large number of
basis functions would then be needed in order to accurately
represent the wave packets on all surfaces. We propose a different approach using a coupled solver, which uses a pseudospectral method with a Fourier basis in space and Strang
splitting28, 29 in time for the bound X¹Σ₀⁺ state, and Hagedorn wave packets for the two dissociative excited states B³Π₀⁺ and Y₀⁺. We use one set of parameters per PES. The wave function Ψ = (ψ₁ ψ₂ ψ₃)ᵀ has three components, one for each electronic state. Since the ground electronic state X¹Σ₀⁺ is
bound, the corresponding component of the solution will stay
localised. We can therefore without difficulty represent it on
a fixed grid, and using a Fourier representation allows us to
take advantage of the fast transform when calculating its spatial derivatives. The B³Π₀⁺ and Y₀⁺ states are however dissociative. The support of the solution on those states is subject
to significant variation over time. We would therefore need
a lot of grid points for an accurate fixed-grid representation.
Using Hagedorn representations for the excited states allows
us to let the grids follow the time evolution of the solution.
A. Coupling of the electronic states
We treat ψ 1 and the coupling between the states with
Strang splitting between the kinetic and potential parts of the
Hamiltonian. Since the coupling is time-dependent we use a
Magnus expansion.30 ψ 2 and ψ 3 are handled with a Hagedorn
solver. The components are therefore represented on different grids. ψ 1 is represented on a uniform grid with periodic
boundary conditions, ψ 2 and ψ 3 are represented by their parameters q, p, Q, P, and S, and the coefficients c, as described
in Sec. II. A Hagedorn representation is associated to a grid
of Gauss-Hermite points used for quadrature which follows
its movements. The Hamiltonian is split as Ĥ = ĤP + ĤH, with

ĤP = ( Ĥ₁     μf(t)   0
       μf(t)  0       Vc
       0      Vc      0 ),    (45)

ĤH = ( 0   0    0
       0   Ĥ₂   0
       0   0    Ĥ₃ ).    (46)

The time stepping is made with Strang splitting of the pseudospectral and Hagedorn parts of the Hamiltonian, so that

Ψⁿ⁺¹ = e^{−iΔt ĤP/(2ℏ)} e^{−iΔt ĤH/ℏ} e^{−iΔt ĤP/(2ℏ)} Ψⁿ.    (47)

The ground electronic state is fully handled with the Fourier pseudospectral method. Before each time step for the pseudospectral part we need to generate representations of ψ₂ and ψ₃ on the uniform grid. Then a time step is made with Strang splitting of a Magnus expansion of ĤP. Since the Hagedorn method and Strang splitting are both second order, we use a second order truncation of the Magnus expansion of ĤP, which is given in Ref. 31 as

H̃P = Δt ( Ĥ₁   C₀   0
           C₀   0    Vc
           0    Vc   0 ),  where    (48)

C₀ = (1/Δt) ∫₀^Δt μf(tₙ + τ) dτ.    (49)

After the half time step with the Fourier method, the Hagedorn representations of ψ₂ and ψ₃ have to be updated according to the transfer between the states. Let

Δψᵢ = ψᵢ^{n+1/4} − ψᵢⁿ,    i = 2, 3.    (50)

This is then projected back to the Hagedorn spaces via

c_{k,i}^{n+1/4} = c_{k,i}^{n} + ⟨e^{iS/ε} φ_{k,i} | Δψᵢ⟩,    i = 2, 3.    (51)

We project the difference Δψᵢ instead of the whole excited part of the wave function since we want to be able to propagate the Hagedorn wave packet out of the computational domain of the pseudospectral scheme. We interpolate Δψᵢ to the local grid of state i with cubic splines, and evaluate the integral in (51) using Gauss-Hermite quadrature. After updating the coefficients c_{k,i}, we are ready to make a time step with the Hagedorn method. Note that transfer between the electronic states is confined to certain parts of the domain, namely close to where B³Π₀⁺ and Y₀⁺ cross, and where the separation of X¹Σ₀⁺ and B³Π₀⁺ matches the frequency of the laser pulse. There is therefore no need to include the full computational domain in the coupling scheme; we only need to consider the parts where the couplings are active.

B. Determination of parameters for the Hagedorn wave packets

When probability is excited to a PES for the first time, we need to set the parameters of a new wave packet. Spawning of Hagedorn wave packets has previously been considered in Ref. 32. Given a pointwise defined wave function ψ, we set the wave packet parameters according to

q = ⟨ψ|x̂|ψ⟩ / ⟨ψ|ψ⟩,    (52)
p = ⟨ψ|p̂|ψ⟩ / ⟨ψ|ψ⟩,    (53)
Q = ηr_δ / (√ε γ̃_{K_q}),    (54)
P = i/Q,    (55)
S = 0.    (56)

The classical coordinate and momentum are set as their expectation values. Q is set so that the most peripheral Gauss point γ_{K_q} = q + ηr_δ, and P so that the system (28)–(31) starts in steady-state. Since we are doing this with ψ represented on a uniform grid, spatial derivatives are conveniently calculated with a Fourier pseudospectral method. Taking the classical action S to be 0 is no restriction; the initial phase factor will be included in the coefficients c.
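The Strang-split time stepping described above has the same structure as the familiar split-step Fourier method. The following single-state sketch is our toy example (ℏ = m = 1, a harmonic potential, one surface), not the coupled three-state solver; it performs steps of the form exp(−iΔtV/2) exp(−iΔtT) exp(−iΔtV/2) on a periodic grid.

```python
import numpy as np

def strang_step(psi, x, dt, v):
    """One step exp(-i dt V/2) exp(-i dt T) exp(-i dt V/2) for the 1D TDSE
    i psi_t = (-(1/2) d^2/dx^2 + V) psi with hbar = m = 1, periodic grid."""
    k = 2 * np.pi * np.fft.fftfreq(x.size, d=x[1] - x[0])
    psi = np.exp(-0.5j * dt * v) * psi                                # V half step
    psi = np.fft.ifft(np.exp(-0.5j * dt * k ** 2) * np.fft.fft(psi))  # full T step
    return np.exp(-0.5j * dt * v) * psi                               # V half step

# toy run: coherent state in a harmonic well, propagated to t = 1
x = np.linspace(-10.0, 10.0, 256, endpoint=False)
v = 0.5 * x ** 2
psi = np.pi ** -0.25 * np.exp(-0.5 * (x - 1.0) ** 2)
for _ in range(1000):
    psi = strang_step(psi, x, 1e-3, v)
# each factor is unitary, so the discrete norm is conserved exactly,
# and for this coherent state <x>(t) = cos(t)
```

Since both split factors are diagonal (in real and Fourier space, respectively), each step costs two FFTs, which is what makes the pseudospectral half of the coupled scheme cheap.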
The duration of the laser pulse cannot be considered as
short compared to the vibrational motion of the nuclei. Transfer between the electronic states is localised in space, but what
is excited early in the simulation will have moved considerably by the time the pulse is turned off. The same localisation in space but long duration holds for the non-adiabatic
coupling between the two excited states, where the time of
transfer is related to the time needed for ψ 2 to cross the area
of coupling. This poses a challenge in the Hagedorn basis.
The wave functions on the excited states, ψ₂ and ψ₃, get their classical coordinates q in the parameterisation which occurs during the first time step where probability is excited. If we let this coordinate follow its classical trajectory as previously, it will follow the front of the wave packet, not its centre. We will then spend half of the resolution capabilities of the basis on an interval where the wave function is negligible. Figure 5 shows the wave function ψ₂ at the time of the peak of the laser pulse. On the first excited state the molecule dissociates. The wave packet moves towards larger x-values while more probability is excited around the same coordinate. The approximation properties of the Hagedorn basis would be better if q followed the centre of the wave packet. We achieve this using the same control approach which is used for Q.

FIG. 5. The real part of ψ₂ at time t₀, when the laser pulse peaks. The cross marks the coordinate of strongest coupling with the ground electronic state. The circle marks where the classical coordinate q would have been located if no control were used.
The time evolution of q is governed by a double integrator, just as for Q. Putting Eqs. (14) and (15) together, we get

q̈ = −(1/m) Ṽ₁.    (57)

The midpoint of the wave packet can be calculated as

⟨x⟩ = ⟨ψ|x|ψ⟩ = q + ( √(2ε) / Σ_{k=0}^{K−1} c_k* c_k ) Re( Q Σ_{k=1}^{K−1} c_k* c_{k−1} √k ).    (58)

Ṽ₂ is chosen as before, and in a similar manner we choose

Ṽ₁ = m( α(q − ⟨x⟩) + β p/m ).    (59)

We can then control q to arbitrary accuracy as well, thus being in control of both the position and the spread of the wave packet. We will use the same α and β as for Ṽ₂, but one is free to make a different choice.

FIG. 6. The solution, real parts and moduli, at the three potential energy surfaces at time T = 120 fs.
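The expectation-value formula (58) can be checked numerically in the special case ε = 1, Q = 1, where the scaled basis reduces to shifted Hermite functions. The helper below is ours, not from the paper, and only serves to verify the coefficient formula against direct quadrature.

```python
import numpy as np
from numpy.polynomial.hermite import hermval
from math import factorial, pi, sqrt

def hermite_fn(k, y):
    """Normalised Hermite function h_k(y) (helper for this check only)."""
    coef = np.zeros(k + 1)
    coef[k] = 1.0
    return hermval(y, coef) * np.exp(-y ** 2 / 2) / sqrt(2.0 ** k * factorial(k) * sqrt(pi))

rng = np.random.default_rng(0)
K, q, eps = 6, 0.7, 1.0
c = rng.standard_normal(K) + 1j * rng.standard_normal(K)
c /= np.linalg.norm(c)          # normalised coefficient vector

# psi(x) = sum_k c_k h_k((x - q)/sqrt(eps)) / eps^(1/4): quadrature of <x>
x = np.linspace(-12.0, 12.0, 4001)
dx = x[1] - x[0]
psi = sum(c[k] * hermite_fn(k, (x - q) / sqrt(eps)) for k in range(K)) / eps ** 0.25
direct = np.sum(x * np.abs(psi) ** 2) * dx

# Eq. (58) with Q = 1 and normalised coefficients
formula = q + sqrt(2 * eps) * np.real(
    np.sum(np.conj(c[1:]) * c[:-1] * np.sqrt(np.arange(1, K))))
```

The two values agree to quadrature accuracy, confirming that the off-diagonal coefficient sum reproduces the position expectation value without any grid evaluation of ψ.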
C. Computational results

We integrate the system up to the time T = 120 fs with the new method, taking time steps of length Δt = 0.1 fs. For the ground electronic state and the photo-induced coupling we consider the spatial interval x ∈ [0.20, 0.35] nm, and for the non-adiabatic coupling x ∈ [0.22, 0.40] nm. The spatial step size is Δx = 0.3/512 nm. On the excited states we use Hagedorn bases with 256 basis functions per PES. For comparison we compute a reference solution on x ∈ [0.2, 0.5) nm using a Fourier basis with 512 basis functions per PES. We integrate it in time with SIL, using a fourth order Magnus expansion of the evolution operator,31 and the same time step Δt. We use the control parameters α = 16, β = 8, δ = 10⁻³, η = 2, and put the cap Gmax = 2.5 on |G|.

The solution at T = 120 fs is shown in Figure 6. The error is then 2.1 × 10⁻³ in the maximum norm given by

‖Ψ‖∞ = maxᵢ maxⱼ |ψᵢ(xⱼ)|,    (60)

compared to the reference solution. xⱼ are the nodes of the Fourier grid; the error is measured over the full domain of support of the wave function. The maximum error occurs for ψ₁, which is expected since most of the probability is there. If we scale by the 2-norm of the solution, or population, on each state, i.e.,

‖ψ‖ = maxⱼ |ψ(xⱼ)| / ‖ψ‖₂,    (61)

the error is 2.3 × 10⁻³, 4.4 × 10⁻³, and 6.0 × 10⁻³ for ψ₁, ψ₂ and ψ₃, respectively.

On the second excited state Y₀⁺, a hump eventually appears by the front of the wave function ψ₃. x_δ at that front is the rightmost coordinate where |ψ₃(x)| = δ‖ψ₃‖₂. As more probability is excited to Y₀⁺, ‖ψ₃‖₂ grows and the hump will sink below the limit δ‖ψ₃‖₂. There will then be a discontinuity in x_δ, and hence also in the control signal. The controller responds with a smooth transition with limited overshoot, as can be seen in Figure 7.

VII. PROPAGATION IN TWO DIMENSIONS
To show the applicability of the new method to multidimensional problems we apply it to a two-dimensional anharmonic PES. We use the collinear model of an excited electronic state of CO2 presented in Refs. 33 and 34, expressed
in the symmetric and asymmetric stretch coordinates (s, a).
If R1 and R2 are the CO bond distances and mO and mC
the atomic masses, s = (R1 + R2 )/2 and a = (R1 − R2 )/
(2 + mC /mO ). The potential is symmetric with respect to the
line a = 0, and has a saddle point at (s, a) = (2.41, 0). It
has two dissociation channels corresponding to O+CO and
OC+O. The PES is illustrated in Figure 8. The masses corresponding to the chosen degrees of freedom are ms = 2mO and ma = mC(1 + mC/(2mO)).

FIG. 7. (a) The front of ψ₃ in modulus at time 45 fs. The dashed line indicates δ‖ψ₃‖₂, and the cross x_δ. As ‖ψ₃‖₂ grows, the dashed line will rise above the top of the hump, and x_δ will encounter a discontinuity. The response to this is shown in (b). The dashed line shows the control signal for |Q|, and the solid line its actual value.

The Hamiltonian operator then reads

Ĥ = −(ℏ²/2ms) ∂²/∂s² − (ℏ²/2ma) ∂²/∂a² + V(s, a).    (62)
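A minimal sketch of applying the kinetic part of Eq. (62) pseudospectrally on a periodic grid may be useful; this is our toy example with ℏ = 1 and illustrative masses, not the physical CO₂ values, and it exploits the fact that a periodic plane wave is an exact eigenfunction of the discrete operator.

```python
import numpy as np

def apply_kinetic(psi, ds, da, ms, ma):
    """Apply -1/(2 ms) d2/ds2 - 1/(2 ma) d2/da2 (hbar = 1) via 2D FFT."""
    ks = 2 * np.pi * np.fft.fftfreq(psi.shape[0], d=ds)
    ka = 2 * np.pi * np.fft.fftfreq(psi.shape[1], d=da)
    kin = ks[:, None] ** 2 / (2 * ms) + ka[None, :] ** 2 / (2 * ma)
    return np.fft.ifft2(kin * np.fft.fft2(psi))

# check: plane wave exp(i(3 s + 2 a)) on a 2*pi-periodic grid
n = 32
ds = da = 2 * np.pi / n
S, A = np.meshgrid(np.arange(n) * ds, np.arange(n) * da, indexing="ij")
ms, ma = 2.0, 1.5                        # illustrative masses
psi = np.exp(1j * (3 * S + 2 * A))
out = apply_kinetic(psi, ds, da, ms, ma) # equals (9/(2 ms) + 4/(2 ma)) * psi
```

The potential term of (62) would simply be a pointwise multiplication on the same grid, so one full Hamiltonian application costs one forward and one inverse 2D FFT.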
As initial state we use the vibrational ground state of the
ground electronic state (sudden approximation). As model
for the ground state PES we use a harmonic oscillator34
with ℏωs = 1388.3 cm⁻¹, ℏωa = 2349.0 cm⁻¹. Its vibrational ground state is a Gaussian with centrepoint (s, a)
= (s0 , 0), s0 = 2.20 a.u., situated at the bottom of the well.
After transfer to the excited state, where the simulation starts,
it will be slightly dislocated from the dissociation channels as
shown in Figure 8. The initial condition then reads

ψ₀ = ( √(ms ωs ma ωa) / (πℏ) )^{1/2} exp( −(ms ωs/2ℏ)(s − s₀)² − (ma ωa/2ℏ) a² ).    (63)
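As a sanity check, Eq. (63) integrates to one. The sketch below verifies this on a grid, using illustrative parameter values in units where ℏ = 1 rather than the physical CO₂ numbers.

```python
import numpy as np

hbar, ms, ma, ws, wa, s0 = 1.0, 2.0, 1.5, 0.5, 0.8, 2.2   # illustrative values

s = np.linspace(s0 - 8.0, s0 + 8.0, 400)
a = np.linspace(-8.0, 8.0, 400)
S, A = np.meshgrid(s, a, indexing="ij")

# Eq. (63): product of the two 1D harmonic-oscillator ground states
psi0 = (np.sqrt(ms * ws * ma * wa) / (np.pi * hbar)) ** 0.5 * np.exp(
    -ms * ws / (2 * hbar) * (S - s0) ** 2 - ma * wa / (2 * hbar) * A ** 2)

norm = np.sum(np.abs(psi0) ** 2) * (s[1] - s[0]) * (a[1] - a[0])   # ~ 1
```

The prefactor is exactly the product of the two 1D normalisation constants (mω/πℏ)^{1/4}, which is what makes the 2D norm come out to unity.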
We integrate up to the time T = 15 fs, and for the Fourier-SIL
reference solution we truncate the domain to (s, a) ∈ [1.9, 3.5)
× [ − 1.2, 1.2). We use 256 Fourier basis functions per coordinate direction, and take time steps of 2.42 × 10−3 fs for both
the Fourier and Hagedorn solutions. Errors are calculated in
the ∞-norm over the Fourier grid points, covering the full domain of support of the wave function.

FIG. 9. Time evolution of the wave function. The modulus of the wave function, |Ψ(s, a)|, is plotted with a separation of 1 between the level curves. The cross indicates the location of the saddle point in the potential. The wave function is plotted at times t = (a) 0 fs, (b) 5 fs, (c) 10 fs, and (d) 15 fs.

The control parameters
used are δ = 10−3 , η = 1.5, α = 25, and β = 10.
The time evolution of the wave function is shown in
Figure 9. Initially, the wave packet starts moving towards
growing s. Its centrepoint crosses the saddle point of the potential at around t = 7 fs. It then starts spreading in the asymmetric stretch direction. Semiclassical propagation of Q overdoes this spreading so that we get difficulties resolving the
high-frequency oscillations of the wave packet. When using
the new method, the growth of |Q2 |, and hence the spread of
the collocation points, stays more modest, as does the aliasing error. At t = 11 fs, the difference in error is between three
and four orders of magnitude. However, as the simulation proceeds, the wave packet gets more and more delocalised, and
increasingly difficult to resolve. As a result of this, also the
error for the control-based propagation starts to grow rapidly
around t = 12 fs. At the simulation end-time, t = 15 fs, the
error is about 0.020. In order to proceed further one would have to increase the number of basis functions, and eventually also introduce a mechanism to split the wave packet into two. The time evolution of |Q₂| and the pointwise error is shown in Figure 10. The performance of the controller itself is illustrated in Figure 11. In Figure 12 we illustrate the benefits of a time-dependent basis. The control-based Hagedorn computation with 32² basis functions is compared at time t = 10 fs to a computation in a fixed-grid Fourier basis on (s, a) ∈ [1.9, 3.0) × [−0.8, 0.8), with a polynomial complex absorbing potential (CAP) in (s, a) ∈ [2.8, 3.0) × [−0.8, 0.8), by the right boundary. The potential with CAP is given by

Ṽ(s, a) = V(s, a) − 10i ((s − 2.8)/0.2)⁶,    s ∈ [2.8, 3.0).    (64)

In order to get the ∞-error to the same order as for the Hagedorn solution we need 52 Fourier basis functions per coordinate direction. To calculate the pointwise error, a reference solution is calculated on (s, a) ∈ [1.9, 3.2) × [−0.8, 0.8), where the wave function is negligible by the boundary at the time of comparison, with 248 × 208 Fourier basis functions. For the Fourier solution in Figure 12(b), the error is only evaluated in the points which are not inside the absorbing region, i.e., only the part of the domain pictured in the figure. For the Hagedorn solution, the error is evaluated in all points on the reference Fourier grid. The time step was 2.42 × 10⁻³ fs in all simulations.

FIG. 8. Level curves for the excited state PES of CO₂. The location of the saddle point is indicated by a cross, and the centrepoint of the initial Gaussian wave packet by a ring. The level curves are separated by 0.015 hartree.

FIG. 10. (a) The modulus of Q₂, which determines the spread of the collocation points along the asymmetric stretch coordinate axis, plotted against time. Semiclassical propagation of Q and P is shown with solid lines, and control-based propagation with dashed lines. (b) The time evolution of the maximum pointwise error for the two propagation schemes.

FIG. 11. The performance of the controller. |Q₁| (below) and |Q₂| (above) are plotted against time. The square root of the control signal is shown with solid lines, and the actual parameter values with dashed lines.

FIG. 12. Level curves for the pointwise error at time t = 10 fs for solutions calculated with (a) a control-propagated Hagedorn basis with 32² basis functions, and (b) a Fourier basis with 52² basis functions. The curve separation is 1.0 × 10⁻⁵, and the maximum pointwise error is 8.7 × 10⁻⁵ and 7.2 × 10⁻⁵ for the Hagedorn and Fourier solutions, respectively.

VIII. CONCLUSIONS

We have presented a method for dynamic adaptation of the Hagedorn basis to the evolving wave function. By matching the support of the basis to the wave function we can achieve smaller error with fewer basis functions. Numerical experiments on one- and two-dimensional molecular systems show how the error can be reduced by orders of magnitude compared to simulation with semiclassical propagation of the basis. Deviation from the semiclassical propagation scheme rids us of the need to know the Hessian of the potential, which typically is expensive to calculate. By using a PD controller for the adaptivity we are not dependent on smoothness, or even continuity, of the control target. We can then also make use of crude support estimates for the wave function which come essentially for free. Experiments on a model of IBr illustrate how the control approach can improve accuracy substantially when several electronic states are involved. We also provide a motivation for why a collocation method, with reduced computational complexity, induces little additional error compared to direct exponential integration of the Galerkin formulation. The computation times for the numerical examples in this paper, which were implemented in Matlab and run on a laptop computer, were reduced from hours to minutes by using the collocation method. The computational work does however scale quadratically with the number of basis functions, and the computation times for the presented examples are longer than for the Fourier method. Similarly to other grid methods, the number of basis functions needed scales exponentially with the number of dimensions. Still, moving grid methods like the present provide a promising framework for higher dimensional problems since adaptivity can reduce the required number of basis functions, and thus also the amount of data. For parallel computation of big problems, the amount of data needed is usually a more severe bottleneck than the computational work due to limitations both in cache size and communication throughput. The possible savings are greatest when the wave packet stays localised, but moves over a considerable area. In a fixed grid setting one would have to resolve the entire domain at all times. With a moving grid we only need to resolve the subdomain currently occupied by the wave packet. The savings are considerable already in 1D, and increasingly so in higher dimensions.

This paper describes how our solution method can be applied to problems where the solution consists of a localised wave packet but where quantum phenomena are still important for an accurate description of the dynamics. This is also the class of problems where we expect our method to be the most useful. Such problems arise, e.g., from photodissociation.35, 36 As for other approaches using a localised representation,2, 19, 37 additional mechanisms must often be introduced when handling problems with more complex dynamics. Also, some bound state problems are normally best handled by using the standard Fourier representations, as we do for the ground state in the IBr example in Sec. VI. In some applications involving, e.g., tunnelling, a wave function may split. For efficient treatment of such cases within our framework one must introduce a methodology for the splitting of the basis, so that different "families" of basis functions follow disconnected parts of the wave function, similarly to what is done in the schemes in Refs. 19 and 37. One would then not have to resolve the unoccupied area between the wave packets. Such splitting of Hagedorn wave packets has been studied previously in a simplified setting.32 How to conduct splitting and merging automatically remains an area of future research.
ACKNOWLEDGMENTS
Discussions with Bengt Carlsson, Vasile Gradinaru,
Magnus Gustafsson, Katharina Kormann, Hans Norlander,
and Elias Rudberg are gratefully acknowledged.
1. M. F. Herman and E. Kluk, Chem. Phys. 91, 27 (1984).
2. D. Kosloff and R. Kosloff, J. Comput. Phys. 52, 35 (1983).
3. J. C. Light, I. P. Hamilton, and J. V. Lill, J. Chem. Phys. 82, 1400 (1985).
4. G. A. Hagedorn, Commun. Math. Phys. 71, 77 (1980).
5. G. A. Hagedorn, Ann. Phys. 269, 77 (1998).
6. E. Faou, V. Gradinaru, and C. Lubich, SIAM J. Sci. Comput. 31, 3027 (2009).
7. S. Adhikari and G. D. Billing, J. Chem. Phys. 113, 1409 (2000).
8. G. D. Billing and S. Adhikari, Chem. Phys. Lett. 321, 197 (2000).
9. K. L. Feilberg, G. D. Billing, and M. S. Johnson, J. Phys. Chem. A 105, 11171 (2001).
10. K. Gustafsson, M. Lundh, and G. Söderlind, BIT 28, 270 (1988).
11. C. J. Budd, W. Huang, and R. D. Russell, Acta Numerica 18, 111 (2009).
12. L. R. Pettey and R. E. Wyatt, Chem. Phys. Lett. 424, 443 (2006).
13. D. Neuhauser and M. Baer, J. Chem. Phys. 90, 4351 (1989).
14. J.-P. Berenger, J. Comput. Phys. 114, 185 (1994).
15. J. P. Boyd, J. Comput. Phys. 54, 382 (1984).
16. J. P. Boyd, Chebyshev and Fourier Spectral Methods, 2nd ed. (Dover, New York, 2001).
17. T. Tang, SIAM J. Sci. Comput. 14, 594 (1993).
18. H. Ma, W. Sun, and T. Tang, SIAM J. Numer. Anal. 43, 58 (2005).
19. M. Ben-Nun, J. Quenneville, and T. J. Martínez, J. Phys. Chem. A 104, 5161 (2000).
20. B. Fornberg, A Practical Guide to Pseudospectral Methods (Cambridge University Press, 1996).
21. E. J. Heller, J. Chem. Phys. 75, 2923 (1981).
22. G. H. Golub and J. H. Welsch, Math. Comput. 23, 221 (1969).
23. G. Franklin, J. D. Powell, and A. Emami-Naeini, Feedback Control of Dynamic Systems, 5th ed. (Prentice-Hall, Upper Saddle River, 2006).
24. H. Liu, O. Runborg, and N. M. Tanushev, "Error estimates for Gaussian beam superpositions," Math. Comput. (to be published).
25. K. Kormann and A. Nissen, in Proceedings of the 2009 ENUMATH Conference, Uppsala, 2009, edited by G. Kreiss, P. Lötstedt, A. Målqvist, and M. Neytcheva (Springer-Verlag, Berlin, 2010), pp. 523–531.
26. H. Guo, J. Chem. Phys. 99, 1685 (1993).
27. R. Bourquin, V. Gradinaru, and G. A. Hagedorn, J. Math. Chem. 50, 602 (2012).
28. G. Strang, SIAM J. Numer. Anal. 5, 506 (1968).
29. M. D. Feit, J. A. Fleck, and A. Steiger, J. Comput. Phys. 47, 412 (1982).
30. W. Magnus, Commun. Pure Appl. Math. 7, 649 (1954).
31. K. Kormann, S. Holmgren, and H. O. Karlsson, J. Chem. Phys. 128, 184101 (2008).
32. V. Gradinaru, G. A. Hagedorn, and A. Joye, J. Chem. Phys. 132, 184108 (2010).
33. K. C. Kulander and J. C. Light, J. Chem. Phys. 73, 4337 (1980).
34. K. C. Kulander, C. Cerjan, and A. E. Orel, J. Chem. Phys. 94, 2571 (1991).
35. E. A. Coronado, V. S. Batista, and W. H. Miller, J. Chem. Phys. 112, 5566 (2000).
36. D. V. Shalashilin, M. S. Child, and A. Kirrander, Chem. Phys. 347, 257 (2008).
37. Y. Wu and V. S. Batista, J. Chem. Phys. 118, 6720 (2003).