Multi-scale methods for wave propagation in heterogeneous media
HENRIK HOLST
Licentiate Thesis
2009-05-20
TRITA-CSC-A 2009:12
ISSN 1653-5723
ISRN KTH/CSC/A–09/12-SE
ISBN 978-91-7415-357-6
© Henrik Holst, 2009-05-20
Tryck: Skolan för datavetenskap och kommunikation
KTH
SE-100 44 Stockholm
SWEDEN
Abstract
Multi-scale wave propagation problems are computationally costly to solve by traditional techniques because the smallest scales must be represented over a domain determined by the largest scales of the problem. We have developed new numerical methods for multi-scale wave propagation in the framework of heterogeneous multi-scale methods. The numerical methods couple simulations on macro and micro scales with data exchange between models of different scales. With the new method we are able to consider a general class of problems, including some problems where a homogenized equation is unknown. We show that the complexity of the new method is significantly lower than that of traditional techniques. Numerical results are presented from problems in one, two and three dimensions, for finite and long time. We also analyze the method, in one and several dimensions and for finite time, using Fourier analysis.
Acknowledgements
I wish to thank my academic advisors Professor Björn Engquist and Professor Olof Runborg for their enthusiasm and guidance. I also wish to thank my co-workers at NADA for
making it a great place to work.
I thank Wallenbergstiftelserna for the generous travel grant supporting my visit in
Spring 2009 to the Institute for Computational Engineering and Sciences at the University
of Texas at Austin, and I thank Björn and his family for inviting me to their home.
Last, but not least, I thank the Swedish Foundation for Strategic Research (SSF) and the Swedish taxpayers for making my research possible.
Contents

1 Introduction
2 Homogenization
   2.1 Homogenization of the wave equation
   2.2 The effective equations for finite time and long time
   2.3 Other limits
3 Heterogeneous multi-scale methods (HMM)
   3.1 HMM for the wave equation
   3.2 Algorithms and Numerical Schemes
   3.3 Computational complexity
4 Analysis
   4.1 Convergence theory for the one-dimensional problem and finite time
   4.2 Analysis of the one-dimensional problem over long time
   4.3 Stability analysis of the macro scheme for long time
   4.4 Convergence theory for the d-dimensional problem
5 Numerical results
   5.1 1D results, finite time
   5.2 2D results
   5.3 3D results
   5.4 1D long time
6 Conclusions
A Analytic computation of harmonic average
Bibliography
Chapter 1
Introduction
It is typically very computationally costly to solve multi-scale problems by traditional numerical techniques. The smallest scale must be well represented over a domain, which is
determined by the largest scales. For wave propagation the smaller scales may originate
from high frequencies or from strong variations in the wave velocity field. We will focus
on the latter case. Examples of such variable velocity problems are seismic wave propagation in subsurface domains with inhomogeneous material properties and microwave
propagation in complex geometries.
A new class of numerical multi-scale methods couples simulations on macro and micro scales [6, 16]. We are using the framework of the heterogeneous multi-scale method (HMM) [6, 7, 5]. In HMM a numerical macro scale method gets necessary information from micro scale models that are only solved on small subdomains. This framework has been applied to a number of multi-scale problems, for example ODEs with multiple time scales [13], elliptic and parabolic equations with multi-scale coefficients [9, 21, 1], kinetic schemes [7] and large scale MD simulation of gas dynamics [17]. Other potential applications of the specific HMM method we will develop are in seismic, acoustic, electromagnetic and other wave propagation problems in cluttered domains.
Suppose we have a wave equation formulated as an initial value problem,

    u^ε_tt − ∇·(A^ε ∇u^ε) = 0,   Ω × {t > 0},
    u^ε = f,  u^ε_t = g,         Ω × {t = 0},                  (1.1)

on a smooth domain Ω ⊂ R^N and generally with boundary conditions. Let A^ε(x) oscillate very fast on a scale proportional to ε. The solution of (1.1) will then also be, locally, highly oscillatory in both time and space on the scale ε.
Computing the full solution to (1.1) is expensive with standard direct methods such as Finite Differences (FD), the Finite Element Method (FEM), Finite Volumes (FV), Discontinuous Galerkin (DG) or Spectral Methods. They are all computationally costly to use because the smallest scales must be resolved over a much larger domain Ω. Instead of trying to obtain a complete solution we will create an HMM process which has a reduced computational complexity and gives a coarse scale solution without a full resolution of the high frequency scales proportional to ε. However, this coarse scale solution incorporates fine scale effects.
The goal of our research is to better understand the HMM process with wave propagation as example and also to derive computational techniques for future practical wave
equation applications. One contribution is a convergence proof in the multidimensional
case that includes a discussion on computational complexity. The analysis is partially
based on the mathematical homogenization theory, [3, 4]. This theory gives the macro
scale limit equations as ε → 0. The numerical methods apply to a more general class of
problems for which the explicit form of the homogenized equation may not be known.
A central contribution is the development and testing of numerical HMM techniques in
one, two and three space dimensions. Finally we explore simulation over very long time
intervals. The effective equation for very long time is different from the finite time homogenized equation. Dispersive effects enter, and the effective equation must be modified [26].
It is interesting to note that our HMM approach, with just minor modifications, accurately captures these dispersive phenomena.
Chapter 2 contains relevant homogenization theory where we try to give motivation to
the HMM algorithms we develop in Chapter 3, which also includes details about numerical
schemes and the method’s complexity. Chapter 4 discusses convergence theory for the
algorithms. In Chapter 5 we show numerical experiments by comparing HMM methods
with homogenization theory and by directly solving the equations. In Chapter 6 we draw
conclusions on our results and discuss ideas for future work.
Chapter 2
Homogenization
The idea behind classical homogenization is finding limiting equations and solutions to a wide range of problems with a suitable "scale", denoted here by ε, which is allowed to pass to zero in the limit. In the setting of composite materials, consisting of two or more mixed constituents, homogenization theory gives the macroscopic properties of the composite. The macroscopic properties are often different from the average of the individual constituents that make up the composite [4]. When the scale ε in the micro structure goes to zero, the homogenized equations and the homogenized solution can be used as an approximation of the full equation, typically with an accuracy of order O(ε). Homogenization is a good idea also from a numerical perspective. The homogenized equation is often "easier" to solve than the full equation, which might have fast oscillations depending on the micro-scale composition.
The homogenized equation in d dimensions corresponding to some PDE can in certain cases be computed analytically by replacing u^ε in the PDE with a multi-scale expansion of the form

    u^ε(x) = u_0(x, y) + ε u_1(x, y) + ε² u_2(x, y) + · · ·,   where y = x/ε,      (2.1)
and where all functions are 1-periodic in their y argument. We will often denote the domain Y = [0, 1]^d and say Y-periodic instead. By plugging the series (2.1) into the PDE and collecting equal powers of ε we can form a system of equations for u_0, u_1, . . . The solution u_0 itself is what we call the homogenized solution in the cases where the formal multi-scale expansion agrees with strict homogenization theory. It can be shown that u_0(x, y) = u_0(x), i.e. u_0 does not depend on y. We also often denote u_0(x) by ū(x). This method gives the correct solution in most cases but not always. Also, the multiple-scale series method does not directly provide a rigorous proof, since there is nothing a priori that says that the right hand side of (2.1) converges to u^ε. Other homogenization techniques that generalize classical homogenization include 2-scale convergence, Γ-convergence and G-convergence. We refer to [22, 19, 2, 4, 15, 23, 18, 12, 20] for further details about these other techniques.
It should be noted that even if our numerical methods use ideas from homogenization
theory they do not solve the homogenized equations directly. The goal is to develop computational techniques that can be used when there is no known homogenized equation
available. The closed form of the homogenized equation may not be available, for example due to non-periodic coefficients. In the research presented here the homogenized equations are actually available and could in practice be directly approximated numerically.
We have chosen this case in order to be able to develop a rigorous convergence analysis
and to have a well-understood environment for numerical tests.
2.1 Homogenization of the wave equation
We consider the wave equation in d dimensions, formulated as an initial value problem,

    u^ε_tt − ∇·(A^ε ∇u^ε) = 0,   Ω × {t > 0},
    u^ε = f,  u^ε_t = g,         Ω × {t = 0}.                  (2.2)

We assume that A^ε(x) = A(x, ε⁻¹x) where A is symmetric, positive definite and locally Y-periodic. By locally Y-periodic we mean that A(x, y) is periodic with respect to y ∈ Y if x is held fixed.
A classic result, also from [3], shows that the solution to equation (2.2) will converge to the solution of the homogenized wave equation,

    ū_tt − ∇·(Ā ∇ū) = 0,   in Ω × {t > 0},
    ū = f,  ū_t = g,       on Ω × {t = 0},                     (2.3)
as ε → 0. The scale of variation in the initial data f and g is assumed to be comparable
to the slow scale. The matrix Ā is independent of ε, symmetric, positive definite and the
elements can be computed with explicit formulas. In the one-dimensional case, the matrix
Ā is equal to the harmonic average of Aε over one cell Y,
    Ā(x) = ( (1/|Y|) ∫_Y dy / A(x, y) )⁻¹.                     (2.4)
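As a concrete illustration of (2.4) (the coefficient and all names here are our own choices, not data from the thesis), the harmonic average can be computed numerically and compared with the arithmetic average:

```python
import numpy as np

# Harmonic vs. arithmetic average of a sample Y-periodic coefficient,
# cf. (2.4) with Y = [0, 1].
def harmonic_average(A, M=100000):
    y = np.linspace(0.0, 1.0, M, endpoint=False)   # periodic grid
    return 1.0 / np.mean(1.0 / A(y))               # ((1/|Y|) int dy/A)^(-1)

A = lambda y: 1.1 + np.sin(2 * np.pi * y)

harm = harmonic_average(A)
arith = np.mean(A(np.linspace(0.0, 1.0, 100000, endpoint=False)))
# For A(y) = c + sin(2 pi y) with c > 1, the harmonic average is sqrt(c^2 - 1).
print(harm, np.sqrt(1.1**2 - 1.0), arith)
```

The harmonic average (≈ 0.458 here) is strictly below the arithmetic average (1.1), which is why naive coefficient averaging over-predicts the effective wave speed.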
In a d-dimensional setting Ā is given implicitly by the relations [4, Proposition 6.8],

    Ā = (ā_ij),   ā_ij = (1/|Y|) ∫_Y ∇ŵ_{e_i} · A ∇ŵ_{e_j} dy,   1 ≤ i, j ≤ N,      (2.5)
where ŵ_λ is defined as

    ŵ_λ := −χ̂_λ + λ · y,                                       (2.6)

and χ̂_λ is the unique solution to

    −∇·(A(y) ∇χ̂_λ) = −∇·(A λ)   in Y,
    χ̂_λ Y-periodic,
    (1/|Y|) ∫_Y χ̂_λ dy = 0.                                    (2.7)
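In one dimension the cell problem (2.7) can be solved numerically and checked against the harmonic average (2.4). The sketch below is our own finite difference discretization with an illustrative coefficient: it assembles the periodic cell problem for λ = 1 and then evaluates (2.5):

```python
import numpy as np

# Cell problem in 1D with lambda = 1:  -(A(y) chi')' = -A'  on Y = [0, 1],
# chi Y-periodic with zero mean.  Solved in flux form on a uniform grid.
M = 400
h = 1.0 / M
ymid = (np.arange(M) + 0.5) * h          # midpoints y_{i+1/2}
Amid = 1.1 + np.sin(2 * np.pi * ymid)    # A sampled at midpoints

# Assemble  A_{i+1/2}(chi_{i+1}-chi_i) - A_{i-1/2}(chi_i-chi_{i-1})
#         = h (A_{i+1/2} - A_{i-1/2})   with periodic indices.
L = np.zeros((M, M))
b = np.zeros(M)
for i in range(M):
    ip, im = (i + 1) % M, (i - 1) % M
    L[i, ip] += Amid[i]
    L[i, i] += -Amid[i] - Amid[im]
    L[i, im] += Amid[im]
    b[i] = h * (Amid[i] - Amid[im])

chi = np.linalg.lstsq(L, b, rcond=None)[0]   # singular up to constants
chi -= chi.mean()                            # enforce zero mean

# Homogenized coefficient via (2.5):  abar = mean of w' A w',  w' = 1 - chi'.
wprime = 1.0 - (np.roll(chi, -1) - chi) / h
abar = np.mean(Amid * wprime**2)
print(abar, np.sqrt(1.1**2 - 1.0))   # abar matches the harmonic average
```

The computed ā agrees with the harmonic average, as the 1D theory predicts: the discrete flux A(1 − χ̂′) is constant, and that constant is exactly the discrete harmonic mean.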
In a typical situation A is not precisely periodic, so the theory above does not apply. Still, there might exist an equation of the form (2.3) which gives a smooth solution u, with no ε-scale, that is a good approximation of u^ε. However, there may not be a known expression for Ā in that case. This idea is followed in Chapter 3, where we develop a more general multi-scale method for the wave equation.
Classical Theorem: Homogenization of the Wave Equation
Let us show a typical rigorous result about homogenization of the wave equation. Suppose the wave equation is given in the form
    u^ε_tt − ∇·(A^ε ∇u^ε) = h^ε    on Ω × {0 < t < T},
    u^ε = 0                        on ∂Ω × {0 < t < T},
    u^ε(x, 0) = f^ε(x)             in Ω × {t = 0},
    u^ε_t(x, 0) = g^ε(x)           in Ω × {t = 0}.             (2.8)
Here A^ε(x) = A(ε⁻¹x) where A ∈ L^∞(Ω)^{N×N} is symmetric, Y-periodic, bounded and uniformly positive definite in x, i.e. there exist constants c > 0 and d > 0 independent of x such that

    λ^T A^ε(x) λ ≥ c‖λ‖²   ∀λ ∈ R^N,
    ‖A^ε λ‖ ≤ d‖λ‖         ∀λ ∈ R^N.                           (2.9)
We consider solutions to a variational form of (2.8):

    Find u^ε ∈ W_2 such that
    ⟨u^ε_tt, v⟩_{H⁻¹(Ω), H¹₀(Ω)} + ∫_Ω A^ε(x) ∇u^ε(x, t) · ∇v(x) dx
        = ∫_Ω h^ε(x, t) v(x) dx   in D′(0, T), ∀v ∈ H¹₀(Ω),
    u^ε(x, 0) = f^ε(x)    in Ω × {t = 0},
    u^ε_t(x, 0) = g^ε(x)  in Ω × {t = 0},                      (2.10)

where

    W_2 = { v : v ∈ L²(0, T; H¹₀(Ω)),  v′ ∈ L²(Ω × {0 < t < T}) }.
Then one can show the following Theorem. The proof is classical and can be found in [4]
and [3].
Theorem 1. Suppose that h^ε ∈ L²(Ω × {0 < t < T}), f^ε ∈ H¹₀(Ω) and g^ε ∈ L²(Ω). Let u^ε be the solution of (2.10) with A^ε defined as above. Assume that

    i)   f^ε ⇀ f   weakly in H¹₀(Ω),
    ii)  g^ε ⇀ g   weakly in L²(Ω),
    iii) h^ε ⇀ h   weakly in L²(Ω × {0 < t < T}).

Then one has the convergences

    i)   u^ε ⇀ ū           weakly* in L^∞(0, T; H¹₀(Ω)),
    ii)  u^ε_t ⇀ ū_t       weakly* in L^∞(0, T; L²(Ω)),
    iii) A^ε ∇u^ε ⇀ Ā ∇ū   weakly in L²(Ω × {0 < t < T})^N,

where ū is the solution of the homogenized problem:

    ū_tt − ∇·(Ā ∇ū) = h    in Ω × {0 < t < T},
    ū = 0                  on ∂Ω × {0 < t < T},
    ū(x, 0) = f(x)         in Ω × {t = 0},
    ū_t(x, 0) = g(x)       in Ω × {t = 0},                     (2.11)

and Ā is constant, symmetric and positive definite, given by (2.5), (2.6) and (2.7).
2.2 The effective equations for finite time and long time
The solution of the homogenized problem (2.3) can be used as an approximation to the full problem (2.2) when ε is small and for finite time. Now suppose that we want to compute the solution over long time, to T = O(ε⁻¹) or T = O(ε⁻²). Is the homogenized solution still valid
[Figure 2.1: The full solution to a wave equation (DNS), obtained with a finite difference method at time T ≈ 62/√Ā, compared to the homogenized solution (HOM). The homogenized solution does not capture the dispersive effects over long time.]
up to those T? The answer to this question is no. Figure 2.1 shows an example where the homogenized equation above fails to approximate u^ε over long time: the coarse scale of the solution is not captured by the standard homogenization described in the previous section.
The homogenized equation (2.3) has to be modified to be accurate for longer time.
Santosa and Symes [26] showed that the effective equation in one dimension and for long time is

    u_tt − Ā u_xx − β ε² u_xxxx = 0,                           (2.12)
which they derived using Bloch wave expansions. It should be noted that (2.12) is not
well-posed since β > 0. However one can solve a regularized version of it. We discuss this
in more detail in Chapter 4. The coefficient Ā is the same as before. The new coefficient, β,
represents dispersion effects generated by the micro-scale in Aε but is only seen after long
time. The expression for β given in [26] is
    β = Ω₁ Ω₃ / (12π²),                                        (2.13)
where the constants Ω₁ and Ω₃ are functionals of the tensor A [26, (36), (37)]. In particular Ω₁² = Ā, and Ω₃ is given in [26] by an explicit combination of iterated integrals of 1/A(y) and 1/(A(y)A(r)) over the domains

    Ψ = {0 ≤ y ≤ 2π, 0 ≤ r ≤ y, 0 ≤ s ≤ r},   Ψ′ = {0 ≤ y ≤ 2π, 0 ≤ r ≤ y}.      (2.14)
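The ill-posedness of (2.12) can be seen from the plane wave ansatz u = exp(i(kx − ωt)), which gives ω² = Āk² − βε²k⁴: above a cutoff wavenumber ω² turns negative and the mode grows exponentially. A quick sketch with illustrative values for Ā, β and ε (not values from the thesis):

```python
import numpy as np

# Dispersion relation of (2.12):  w^2 = Abar k^2 - beta eps^2 k^4.
Abar, beta, eps = 0.5, 0.01, 0.1   # illustrative values

def omega_squared(k):
    return Abar * k**2 - beta * eps**2 * k**4

print(omega_squared(1.0))     # positive: oscillatory, stable mode
print(omega_squared(100.0))   # negative: exponentially growing mode

# Cutoff wavenumber where w^2 changes sign:
k_cut = np.sqrt(Abar / (beta * eps**2))
print(k_cut)
```

This is why a regularized version of (2.12) is solved in practice, as discussed in Chapter 4.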
2.3 Other limits
There exist many types of limits of the problem (2.2). Let ε denote the local material scale and λ the wave length of the initial or boundary data. In [3] three main types are classified (names given here):

a. Resonance limit (λ ≈ ε).
b. Low frequency (homogenization) limit (λ ≫ ε).
c. W.K.B. (geometrical optics) limit (λ ≪ ε).

All cases are computationally difficult in a similar way. To solve either of them numerically for finite time, one has to use a number of grid points at least proportional to min(ε, λ)^−(d+1). The resonance limit (a) and the W.K.B. limit (c) also have effective equations, but of a totally different type than the low frequency limit (b). We will consider only the low frequency limit in this thesis.
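To make the grid point count concrete, a rough cost model (the constant C is an illustrative assumption) shows how quickly the direct cost grows as the small scale shrinks:

```python
# Rough operation-count model: resolving the scale min(eps, lam) in d space
# dimensions plus time needs ~ C * min(eps, lam)**-(d+1) grid points.
def direct_cost(eps, lam, d, C=10.0):
    return C * min(eps, lam) ** -(d + 1)

lam = 1.0  # O(1) wavelength of the data (low frequency limit, case b)
for eps in (1e-2, 1e-3, 1e-4):
    print(eps, direct_cost(eps, lam, d=1))
```

Each factor of 10 in ε multiplies the 1D cost by 100; in three dimensions the exponent is 4, which is what motivates the HMM approach of the next chapter.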
Chapter 3
Heterogeneous multi-scale methods
(HMM)
The heterogeneous multi-scale method (HMM) was presented in [5] as a general framework for designing multi-scale methods for many problems.
The setting that we consider is the following: We seek a macro scale solution, i.e. a coarse scale or low frequency solution, of the problem. The problem can be described accurately by a micro model, but using that to solve the entire problem is too expensive. The final assumption is that there exists an equation to model the macro behavior, but the model is incomplete in some sense and requires additional data. The role of the HMM method is to provide the missing data in the macro model using a local solution to the micro model. The HMM framework has been used for the development of multi-scale methods for elliptic equations with multi-scale coefficients [9, 21], parabolic equations [1], the wave equation [10], ordinary differential equations [13] and stochastic differential equations [8].
The structure of the HMM process consists of a number of important components:

• The macro solution U and a macro solver. The macro solver will lack some information needed to solve or time step the solution. This information is obtained from the micro solver.
• A micro solution u and a micro solver. The micro solution is initialized with data from the macro solution and the macro solver.
• A compression operator Q: u ↦ U. The compression operator Q maps (locally) a micro solution u to macro solution data U.
• A reconstruction operator R: U ↦ u. The reconstruction operator R maps the macro solution U to micro scale data, e.g. initial data for the micro solver.
• Constraints on F(U, D) = 0 applied to f(u, d) = 0. These can for instance be of the type initial data or boundary conditions.
• Data estimation from f(u, d) = 0 to fill in missing data in F(U, D) = 0.
We will formulate an HMM process which touches upon all aspects of the HMM framework, with the exception that we do not define the compression and reconstruction operators. We will define a macro and a micro solver where some needed data in the macro
solver is computed by the micro solver. The micro solver itself uses initial data based on
data from the macro solver.
3.1 HMM for the wave equation
We will formulate a general framework for d-dimensional problems on the domain Ω =
[0, 1]d . Let u ε be Ω-periodic and solving
    u^ε_tt = ∇·(A^ε(x) ∇u^ε)   in Ω × {t > 0},
    u^ε = f,  u^ε_t = g        on Ω × {t = 0},
    u^ε Ω-periodic.                                            (3.1)
We will now continue and formulate an HMM process for this equation. We follow the
same strategy as in [1] for parabolic equations and in [25] for the one-dimensional advection equation.
We assume there exists a macro scale PDE of the form

    U_tt − ∇·F(x, U, ∇U, ∇²U, . . .) = 0   in Ω × {0 < t < 1},
    U = f,  U_t = g,                       Ω × {t = 0},        (3.2)
where F is a function of x, U and higher derivatives of U. In the clean homogenization case
we would have F = Ā∇Ū, but we will not assume knowledge of a homogenized equation.
Instead we will solve the PDE (3.1), only in a small time and space box, and from the
solution extract a value for F. The form of the initial data will be determined from the
local behavior of U.
One dimension
Let us first consider one dimension and suppose F = F(x, ∂x U). We then proceed as follows:
Step 1: Macro model discretization. We choose a numerical method to solve (3.2); we will use a finite difference scheme. We discretize (3.2) using central differences, where U^n_m approximates U(X_m, T_n), with X_m = mH, T_n = nK, time step K = T/N and grid size H = |Ω|/M:

    U^{n+1}_m = 2U^n_m − U^{n−1}_m + (K²/H)(F^n_{m+1/2} − F^n_{m−1/2}),
    F^n_{m−1/2} = F(x_{m−1/2}, P^n_{m−1/2}),
    P^n_{m−1/2} = (1/H)(U^n_m − U^n_{m−1}),       1 ≤ m ≤ M,   (3.3)

where F^n_{m±1/2} is F evaluated at the point X_{m±1/2} and

    X_m = mH,   T_n = nK,   U^n_m ≈ U(X_m, T_n).               (3.4)

The quantity P^n_{m±1/2} approximates ∂_x U at the point X_{m±1/2}. It should be noted that we cannot solve this equation in this form without knowing what F is; this is the information we lack in the macro model.
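The macro update (3.3) can be sketched in code. In this sketch the micro solver is replaced by the exact flux F = ĀP with Ā = 1 (which is what the micro solver is meant to approximate), so the scheme should reproduce the homogenized wave equation; all discretization parameters are illustrative:

```python
import numpy as np

def macro_step(U_now, U_prev, K, H, flux):
    """One leapfrog step of the macro scheme (3.3) on a periodic grid."""
    M = len(U_now)
    P = (U_now - np.roll(U_now, 1)) / H          # P^n_{m-1/2}
    x_half = (np.arange(M) - 0.5) * H
    F = flux(x_half, P)                          # F^n_{m-1/2}
    Y = (np.roll(F, -1) - F) / H                 # divergence of F
    return 2 * U_now - U_prev + K**2 * Y

M = 100
H = 1.0 / M
K = 0.5 * H                                      # CFL-limited time step
X = np.arange(M) * H
Abar = 1.0
flux = lambda x, P: Abar * P                     # stand-in for the micro solver

U0 = np.sin(2 * np.pi * X)                       # f(x), with g(x) = 0
# Second-order start, cf. (3.17): U^{-1} = U^0 - K g + (K^2/2) div F.
P0 = (U0 - np.roll(U0, 1)) / H
F0 = Abar * P0
Uprev = U0 + 0.5 * K**2 * (np.roll(F0, -1) - F0) / H

Unow = U0
for n in range(20):                              # advance to t = 20 K = 0.1
    Unow, Uprev = macro_step(Unow, Uprev, K, H, flux), Unow

exact = np.sin(2 * np.pi * X) * np.cos(2 * np.pi * 20 * K)
print(np.max(np.abs(Unow - exact)))              # small discretization error
```

With Ā = 1 the exact macro solution is the standing wave sin(2πx)cos(2πt), and the leapfrog solution tracks it to second-order accuracy.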
Step 2: The micro solver. The actual evaluation of F^n_{m±1/2} at each grid point is done using a micro solver. We need to "fill in" the missing data in the macro model. This is done by solving a small micro scale problem with data from the macro model. The result from the micro scale will be used to form the quantity needed by the macro scale solver.

Given the parameters X_{m−1/2} and P^n_{m−1/2} we solve the following micro problem:

    u^ε_tt − ∂_x(A^ε u^ε_x) = 0                    in Y × {0 < t < τ},
    u^ε(x, 0) = P^n_{m−1/2} x,  u^ε_t(x, 0) = 0,   on Y × {t = 0},
    u^ε − u^ε(x, 0)  Y-periodic.                               (3.5)

The domain Y × {0 < t < τ} we call the micro box; its size in time and space will be of the order ε, i.e. τ, |Y| = O(ε).
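A minimal sketch of the micro problem (3.5), under our own illustrative discretization: substituting v = u − Px makes the unknown periodic, and for a constant coefficient the flux A^ε u_x is exactly A P, which gives a simple sanity check:

```python
import numpy as np

# Micro problem (3.5) with v = u - P x, which is Y-periodic:
#   v_tt = (A (v_x + P))_x,  v(x,0) = v_t(x,0) = 0,
# solved by leapfrog; the flux f = A (v_x + P) is recorded at half points.
def micro_solve(A, P, ymax, tau, h, k):
    M = int(round(2 * ymax / h))
    x_half = -ymax + (np.arange(M) + 0.5) * h
    a = A(x_half)                                # A^eps at the half points

    def flux_and_div(v):
        f = a * ((np.roll(v, -1) - v) / h + P)   # flux at half points
        return f, (f - np.roll(f, 1)) / h        # and its divergence at nodes

    N = int(round(tau / k))
    v_now = np.zeros(M)
    f0, y0 = flux_and_div(v_now)
    fluxes = [f0]
    v_prev, v_now = v_now, v_now + 0.5 * k**2 * y0   # second-order first step
    for n in range(1, N + 1):
        f, y = flux_and_div(v_now)
        fluxes.append(f)
        v_prev, v_now = v_now, 2 * v_now - v_prev + k**2 * y
    return x_half, np.array(fluxes)

# For constant A, u = P x is a steady solution: the flux is exactly A P.
x, f = micro_solve(lambda s: 2.0 + 0.0 * s, P=1.0, ymax=0.5, tau=0.2,
                   h=1e-2, k=2e-3)
print(np.max(np.abs(f - 2.0)))
```

For an oscillatory A^ε the flux history oscillates, and the kernel averaging of the reconstruction step below is what extracts the effective value from it.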
Step 3: Reconstruction step. After we have solved for u^ε on all of Y × [0, τ] we approximate F^n_{m−1/2} ≈ F̃(x_{m−1/2}, P^n_{m−1/2}). The function F̃ is the mean value over [−η, η] × {0 < t < τ} of the expression f^ε = A^ε u^ε_x, more precisely

    F̃(x_{m−1/2}, P^n_{m−1/2}) := (1/(2τη)) ∫₀^τ ∫_{−η}^{η} f^ε dx dt,   f^ε = A^ε u^ε_x.   (3.6)

The parameter η is chosen such that [−η, η] ⊂ Y and sufficiently small for information not to propagate into the region [−η, η]. Typically we use

    Y = [−y_max, y_max],   y_max = η + τ sup √(A^ε).           (3.7)

In this way we do not need to worry about the effect of boundary conditions. Note therefore that other types of boundary conditions could also be used in (3.5).
The approximation can be improved by using a weighted mean of f^ε in (3.6). Let K_η and K_τ be smooth kernels with support in [−η, η] and [0, τ], both with integral one. Then we can use

    F̃(x_{m−1/2}, P^n_{m−1/2}) = ∬ K_τ(t) K_η(x) f^ε dx dt,   f^ε = A^ε u^ε_x.   (3.8)

We consider kernels described in [13]. We let K^{p,q}(I) denote the kernel space of functions K such that K ∈ C^q_c(R) with supp K = I, and

    ∫ K(t) t^r dt = 1 if r = 0,   and = 0 if 1 ≤ r ≤ p.

Furthermore we will denote by K_η a scaling of K,

    K_η(x) := (1/η) K(x/η),   K ∈ K^{p,q}([−1, 1]),

and by K_τ a non-symmetric kernel,

    K_τ(t) := (1/τ) K(t/τ),   K ∈ K^{p,q}([0, 1]).             (3.9)

We conclude this section with a few remarks:
• Since τ, |Y| ∼ O(ε) the computational cost to solve each micro problem is independent of ε. Hence the cost of the entire HMM process is independent of ε.
• The convergence analysis for the HMM process, with respect to micro box sizes and kernel choice, is done in Chapter 4.
• We also show in Chapter 4 that in one dimension we only need to average in time; a single point sampling is enough in space.
• The initial data used in (3.5) gives the same results as if u^ε(x, 0) were a linear interpolant of U^n_{m−1} and U^n_m.
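The moment conditions defining K^{p,q} can be checked numerically for a concrete kernel. The polynomial bump below is our own illustrative choice (not one of the kernels from [13]):

```python
import numpy as np

# A simple symmetric kernel with supp K = [-1, 1]:
#   K(x) = (15/16) (1 - x^2)^2,
# normalized to unit integral; by symmetry its first moment vanishes,
# so it lies (at least) in K^{1,1}([-1, 1]).
def K(x):
    return np.where(np.abs(x) <= 1, (15.0 / 16.0) * (1 - x**2) ** 2, 0.0)

def K_eta(x, eta):
    """Scaled kernel K_eta(x) = (1/eta) K(x/eta), cf. (3.9)."""
    return K(x / eta) / eta

x = np.linspace(-1.0, 1.0, 200001)
dx = x[1] - x[0]
mass = K(x).sum() * dx            # ~ 1  (r = 0 condition)
moment = (K(x) * x).sum() * dx    # ~ 0  (r = 1 condition)
print(mass, moment)
```

Higher p (more vanishing moments) and higher q (more smoothness) reduce the averaging error in (3.8), which is the point of the kernel classes above.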
Two dimensions and higher
Let us now consider a d-dimensional problem and suppose F = F(x, ∇_x U). We follow the same outline as in the one-dimensional case.
Step 1: Macro model discretization. We choose a numerical method to solve (3.2); we will use a finite difference scheme. We discretize using central differences with time step K and grid size H in all directions,

    U^{n+1}_m = 2U^n_m − U^{n−1}_m + (K²/H)(F^{(1),n}_{m+e₁/2} − F^{(1),n}_{m−e₁/2}) + · · · + (K²/H)(F^{(d),n}_{m+e_d/2} − F^{(d),n}_{m−e_d/2}),
    F^n_{m−e_k/2} = F(x_{m−e_k/2}, P^n_{m−e_k/2}),   k = 1, . . . , d   (note: F^n_{m−e_k/2} is a vector),   (3.10)

where F^n_{m±e_k/2} is F evaluated at the point X_{m±e_k/2}. As in the one-dimensional case, the quantity P^n_{m±e_k/2} approximates ∇_x U at the point X_{m±e_k/2}. We show an example of the numerical scheme in two dimensions in Figure 3.1, where P^n_{m+e₂/2} is given by the expression (3.21).

[Figure 3.1: The numerical scheme (3.21) for P in two dimensions. The two components of F at two different positions are given by F_{i+1/2,j} and G_{i,j+1/2}. The U points involved in computing F^n_{m+e₂/2} = G_{i,j+1/2} and ∇U ≈ P^n_{m+e₂/2} are indicated by filled circles.]

Step 2: The micro solver. As before, the evaluation of F^n_{m±e_k/2} at each grid point is done using a micro solver to fill in the missing data in the macro model.
Given the parameters X_{m−e_k/2} and P^n_{m−e_k/2} we solve the following micro problem:

    u^ε_tt − ∇_x·(A^ε ∇_x u^ε) = 0                        in Y × {0 < t < τ},
    u^ε(x, 0) = (P^n_{m−e_k/2})^T x,  u^ε_t(x, 0) = 0,    on Y × {t = 0},
    u^ε − u^ε(x, 0)  Y-periodic.                               (3.11)
We keep the micro box size of order ε, i.e. τ, |Y| = O(ε).
Step 3: Reconstruction step. After we have solved for u^ε on all of Y × [0, τ] we approximate F^n_{m−e_k/2} ≈ F̃(x_{m−e_k/2}, P^n_{m−e_k/2}). The function F̃ is the mean value over [−η, η]^d × {0 < t < τ}, thus we need to compute the expression f^ε_k = (A^ε ∇_x u^ε)_k. We will use the same type of kernels to accelerate the convergence in the mean value computation as described in the one-dimensional case,

    F̃(x_{m−e_k/2}, P^n_{m−e_k/2}) = ∬ K_τ(t) K_η(x − x_{m−e_k/2}) f^ε dx dt,   f^ε = A^ε ∇_x u^ε,   (3.12)

where the multivariable kernel K_η(x) is defined as

    K_η(x) = K_η(x₁) K_η(x₂) · · · K_η(x_d),                   (3.13)

using the single-variable kernel, still denoted by K_η.
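The tensor product structure of (3.13) is straightforward to implement; the sketch below (with an illustrative one-dimensional kernel, not one from [13]) checks that the multivariable kernel still has unit mass:

```python
import numpy as np

def K1(z):
    # illustrative 1D kernel with support [-1, 1] and unit mass
    return np.where(np.abs(z) <= 1, (15.0 / 16.0) * (1 - z**2) ** 2, 0.0)

def K_eta_d(x, eta):
    """Multivariable kernel (3.13): K_eta(x) = prod_k (1/eta) K1(x_k/eta)."""
    x = np.atleast_2d(x)                  # shape (npoints, d)
    return np.prod(K1(x / eta) / eta, axis=1)

# Unit mass in 2D, checked on a tensor grid over the support.
eta = 0.3
g = np.linspace(-eta, eta, 801)
dg = g[1] - g[0]
X, Y = np.meshgrid(g, g)
pts = np.column_stack([X.ravel(), Y.ravel()])
mass = K_eta_d(pts, eta).sum() * dg * dg
print(mass)
```

Because the kernel factorizes, its d-dimensional integral is the d-th power of the one-dimensional integral, so normalization is inherited automatically.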
One dimension, long time
Finally, we consider the one-dimensional problem for long time. The macro problem formulation (3.2) remains the same, but we need to pass additional information, in the form of additional derivatives, to the micro problem. We suppose that F = F(x, ∂_x U, ∂_xx U, ∂_xxx U). We follow the same outline as in the one-dimensional finite time case.
Step 1: Macro model discretization. We choose a numerical method to solve (3.2); we will use a finite difference scheme. We discretize (3.2) using central differences with time step K and grid size H,

    U^{n+1}_m = 2U^n_m − U^{n−1}_m + (K²/H)(F^n_{m+1/2} − F^n_{m−1/2}),
    F^n_{m−1/2} = F(x_{m−1/2}, P^n_{m−1/2}, Q^n_{m−1/2}, R^n_{m−1/2}),   (3.14)

where F^n_{m±1/2} is F evaluated at the point X_{m±1/2}. The values P^n_{m±1/2}, Q^n_{m±1/2} and R^n_{m±1/2} are approximations of U_x, U_xx and U_xxx at the points X_{m±1/2}.
Step 2: The micro solver. As before, the evaluation of F^n_{m−1/2} at each grid point is done using a micro solver to fill in the missing data in the macro model. This is done by solving a small micro scale problem with data from the macro model. The initial data in the micro problem will be of the form

    u^ε(x, 0) = px + (q/2)(x − x₀)² + (r/6)(x − x₀)³,   u^ε_t(x, 0) = 0,   (3.15)

such that u^ε(x, 0) has u^ε_x(x₀, 0) = p, u^ε_xx(x₀, 0) = q and u^ε_xxx(x₀, 0) = r. The accuracy of the micro problem needs to be slightly higher for long time computations. That means we need to use a larger micro box: we will use τ, η ∼ ε^α, where α < 1. Besides that we make no changes to the micro problem formulation (3.5) used in the one-dimensional micro problem for finite time. We will discuss the details of the kernel needed for long time in Section 4.2.
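The cubic initial data (3.15) encodes the macro data P, Q, R as derivatives of u^ε(·, 0) at x₀; a quick numerical check with illustrative values:

```python
# Micro initial data (3.15): u(x,0) = p x + (q/2)(x-x0)^2 + (r/6)(x-x0)^3.
def u0(x, x0, p, q, r):
    return p * x + 0.5 * q * (x - x0) ** 2 + (r / 6.0) * (x - x0) ** 3

x0, p, q, r = 0.4, 1.5, -2.0, 3.0
d = 1e-3

# central differences at x0 recover p, q, r
ux = (u0(x0 + d, x0, p, q, r) - u0(x0 - d, x0, p, q, r)) / (2 * d)
uxx = (u0(x0 + d, x0, p, q, r) - 2 * u0(x0, x0, p, q, r)
       + u0(x0 - d, x0, p, q, r)) / d**2
uxxx = (u0(x0 + 2 * d, x0, p, q, r) - 2 * u0(x0 + d, x0, p, q, r)
        + 2 * u0(x0 - d, x0, p, q, r) - u0(x0 - 2 * d, x0, p, q, r)) / (2 * d**3)
print(ux, uxx, uxxx)
```

Since the data is exactly cubic, the second and third central differences are exact up to rounding, and the first difference is off only by the O(d²) term rd²/6.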
Step 3: Reconstruction step. After we have solved for u^ε on all of Y × [0, τ] we approximate F^n_{m−1/2} ≈ F̃(x_{m−1/2}, P^n_{m−1/2}, Q^n_{m−1/2}, R^n_{m−1/2}). The function F̃ is the kernel-weighted mean value over [−η, η] × {0 < t < τ} of f^ε = A^ε u^ε_x, with the same type of kernels as in the finite time case,

    F̃(x_{m−1/2}, P^n_{m−1/2}, Q^n_{m−1/2}, R^n_{m−1/2}) = ∬ K_τ(t) K_η(x − x_{m−1/2}) f^ε dx dt,   f^ε = A^ε u^ε_x.   (3.16)
3.2 Algorithms and Numerical Schemes
In this section we describe the two main HMM algorithms that together form the HMM process. We have tried to give the one-dimensional algorithm in detail in pseudo code. We will then explain how to modify the algorithm for higher dimensions or to solve for long time. The notation is nonstandard but should be self explanatory. As above, for the macro problem discretization we let N be the number of time steps and M the number of grid intervals. We define the macro grid points X_i = iH (i = 1, 2, . . . , M) and time values T_n = nK (n = 1, 2, . . . , N). For the micro problem we again use N and M to denote the number of time steps and grid intervals, respectively. For the micro problem discretization we require that M is an even number. The micro grid points are x_i = ih + x₀ (i = ±1, ±2, . . . , ±M/2) and t_n = nk + t₀ (n = 1, 2, . . . , N).

Remark. In a Fortran or MATLAB implementation we would have a natural base 1 index j = 1, 2, . . . , M where j = i + M/2. In that notation we have x_j = jh + x_O where x_O = x₀ − (M/2)h.

The macro scale solver for one dimension is described in Figure 3.2 and the corresponding micro scale solver in Figure 3.3.
Algorithms
Macro Scale Solver
We now go through the Macro Scale Solver (Figure 3.2) step by step:

• The for-loop on lines 1 to 14 is the time stepping loop.
• On lines 2-4 we partly compute the initial data; the remaining computation is done on lines 8-10.
• On line 5 we compute the approximation of U_x(x_{m−1/2}, t_n) using the central difference scheme (3.18). We describe how to do this in two and three dimensions in the schemes (3.20) and (3.24). For the long time one-dimensional problem we use the scheme (3.26). The long time solver can compute the variables Q ≈ U_xx and R ≈ U_xxx at this stage.
• Lines 8-10 are a second order accurate implementation of the initial conditions. Suppose t₀ = 0, use the PDE and write U_tt(x, t₀) as ∇·F(x, t₀). Then we have

    U(x, −K) = U(x, 0) − K U_t(x, 0) + (K²/2) U_tt(x, 0) + O(K³)
             = U(x, 0) − K g(x) + (K²/2) ∇·F + O(K³).          (3.17)
• We call the micro solver on line 6 with the computed x and P data. For the long time solver we need to pass the additional variables Q and R besides x and P. In the next section we will show how this process can be accelerated by the use of lookup tables (LUT).
• On line 7 we calculate the numerical divergence of F. This process (3.18) can, as on line 5, be replaced with (3.20) and (3.24) in two and three dimensions. For long time in one dimension we use the expression in (3.26).
• Finally, on lines 11-13, we time step the solution from t = t_n to t = t_{n+1}.
Micro Scale Solver
We describe the Micro Scale Solver (Figure 3.3) step by step:

• The for-loop on lines 1 to 14 is the time stepping loop.
• On lines 2-4 we partly compute the initial data; the remaining computation is done on lines 8-10.
• On line 5 we compute the approximation of A^ε(x_{m−1/2}) u^ε_x(x_{m−1/2}, t_n) using a central difference scheme (3.19). We describe how to do this in two and three dimensions in the schemes (3.22) and (3.25). For the long time one-dimensional problem we use the scheme (3.27).
• On line 6 we calculate the numerical divergence of f^ε. This scheme (3.19) can, as on line 5, be replaced with (3.22) and (3.25) in two and three dimensions. For long time in one dimension we use the expression in (3.26).
• On line 7 we compute the mean value of f^ε over Y (denoted f^n_mean) using the midpoint rule. This process is generalized to a d-dimensional domain Y = [0, 1]^d in the subsection "Integral computations" below.
• Lines 8-10 are a second order accurate implementation of the initial conditions, cf. Macro Solver lines 8-10.
• Finally, on line 15, we compute the weighted mean of f_mean from t = t₀ to t = t_N. The primed sum Σ′ is a weighted sum where the first and last terms are divided by 2.
CHAPTER 3. HETEROGENEOUS MULTI-SCALE METHODS (HMM)
Figure 3.2: Macro Scale Solver
1: for n = 0, 1, . . . , N do
2:   if n = 0 then
3:     U^0_m ← f(X_m)
4:   end if
5:   P^n_{m-1/2} ← (1/H)(U^n_m - U^n_{m-1})
6:   call micro_scale_solver([in] X_{m-1/2}, [in] P^n_{m-1/2}, [out] F^n_{m-1/2})
7:   Y^n_m ← (1/H)(F^n_{m+1/2} - F^n_{m-1/2})
8:   if n = 0 then
9:     U^{-1}_m ← U^0_m - K g(X_m) + (1/2) K^2 Y^0_m
10:  end if
11:  if n < N then
12:    U^{n+1}_m ← 2U^n_m - U^{n-1}_m + K^2 Y^n_m
13:  end if
14: end for
Figure 3.3: Micro Scale Solver
Require: x_0: midpoint of micro cell, P: macro slope
Ensure: F: space and time average of f^ε
1: for n = 0, 1, . . . , N do
2:   if n = 0 then
3:     u^0_m ← P x_m
4:   end if
5:   f^n_{m-1/2} ← A^ε(x_{m-1/2}) (1/h)(u^n_m - u^n_{m-1})
6:   y^n_m ← (1/h)(f^n_{m+1/2} - f^n_{m-1/2})
7:   f^n_mean ← h Σ_{j=1}^{M} K_η(x_{j-1/2} - x_0) f^n_{j-1/2}
8:   if n = 0 then
9:     u^{-1}_m ← u^0_m + (1/2) k^2 y^0_m
10:  end if
11:  if n < N then
12:    u^{n+1}_m ← 2u^n_m - u^{n-1}_m + k^2 y^n_m
13:  end if
14: end for
15: F ← k Σ'_{n=0}^{N} K_τ(t_n - t_0) f^n_mean
Numerical schemes
We now continue with a detailed description of the numerical schemes used in the macro
and micro solvers. The solvers are designed for one, two and three dimensions in finite
time. We also show a fourth order accurate scheme used in the micro solver for long time
micro problems. We keep the grid and time notation from the introduction to this Section
on page 20.
1D equation
The finite difference scheme on the macro level has the form

  U^{n+1}_m = 2U^n_m - U^{n-1}_m + K^2 Y^n_m,
  Y^n_m = (1/H)(F^n_{m+1/2} - F^n_{m-1/2}),
  F^n_{m±1/2} = F(x_{m±1/2}, P^n_{m±1/2}),   (3.18)

where P^n_{m-1/2} = (1/H)(U^n_m - U^n_{m-1}). The micro level scheme is defined analogously:
  u^{n+1}_m = 2u^n_m - u^n_{m-1}... corrected:
  u^{n+1}_m = 2u^n_m - u^{n-1}_m + k^2 y^n_m,
  y^n_m = (1/h)(f^n_{m+1/2} - f^n_{m-1/2}),
  f^n_{m+1/2} = a_{m+1/2} (u^n_{m+1} - u^n_m)/h,
  f^n_{m-1/2} = a_{m-1/2} (u^n_m - u^n_{m-1})/h.   (3.19)
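As a concrete illustration of the staggered fluxes in (3.19), the sketch below (assumed names, not the thesis implementation) evaluates the half-point fluxes and their divergence on a periodic grid; with a ≡ 1 it reduces to the standard three-point Laplacian.

```python
import numpy as np

def micro_fluxes(u, a_half, h):
    """f^n_{m+1/2} = a_{m+1/2} (u_{m+1} - u_m)/h on a periodic grid."""
    return a_half * (np.roll(u, -1) - u) / h

def micro_rhs(u, a_half, h):
    """y^n_m = (f^n_{m+1/2} - f^n_{m-1/2})/h, the discrete (a u_x)_x."""
    f = micro_fluxes(u, a_half, h)
    return (f - np.roll(f, 1)) / h

# With a ≡ 1 this is the three-point Laplacian; for u = sin(2*pi*x)
# it should approximate -(2*pi)^2 u.
m = 32
h = 1.0 / m
x = np.arange(m) * h
u = np.sin(2 * np.pi * x)
y = micro_rhs(u, np.ones(m), h)
```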
2D equation
The two dimensional problem is discretized with the following schemes. The finite difference scheme on the macro level is

  U^{n+1}_m = 2U^n_m - U^{n-1}_m + K^2 Y^n_m,
  Y^n_m = (1/H)(F^{(1)}_{m+e_1/2} - F^{(1)}_{m-e_1/2}) + (1/H)(F^{(2)}_{m+e_2/2} - F^{(2)}_{m-e_2/2}),
  F^n_{m±e_k/2} = F(x_{m±e_k/2}, P^n_{m±e_k/2}),   (3.20)

where P^n_{m+e_2/2} is given by (see Figure 3.1)

  P^n_{m+e_2/2} = [ (1/2H)((U_{m+e_1} + U_{m+e_1+e_2})/2 - (U_{m-e_1} + U_{m-e_1+e_2})/2),
                    (1/H)(U_{m+e_2} - U_m) ]^T,   (3.21)

and the other P^n_{m±e_k/2} are defined analogously. The micro level scheme is formulated as
  u^{n+1}_m = 2u^n_m - u^{n-1}_m + k^2 y^n_m,
  y^n_m = (1/h)(f^{(1)}_{m+e_1/2} - f^{(1)}_{m-e_1/2}) + (1/h)(f^{(2)}_{m+e_2/2} - f^{(2)}_{m-e_2/2}),
  f^{(1)}_{m+e_1/2} = (a^{(11)}_{m+e_1/2}/h)(u^n_{m+e_1} - u^n_m)
        + (a^{(12)}_{m+e_1/2}/2h)((u^n_{m+e_2} + u^n_{m+e_1+e_2})/2 - (u^n_{m-e_2} + u^n_{m+e_1-e_2})/2),
  f^{(1)}_{m-e_1/2} = (a^{(11)}_{m-e_1/2}/h)(u^n_m - u^n_{m-e_1})
        + (a^{(12)}_{m-e_1/2}/2h)((u^n_{m+e_2} + u^n_{m-e_1+e_2})/2 - (u^n_{m-e_2} + u^n_{m-e_1-e_2})/2),
  f^{(2)}_{m+e_2/2} = (a^{(22)}_{m+e_2/2}/h)(u^n_{m+e_2} - u^n_m)
        + (a^{(21)}_{m+e_2/2}/2h)((u^n_{m+e_1} + u^n_{m+e_1+e_2})/2 - (u^n_{m-e_1} + u^n_{m-e_1+e_2})/2),
  f^{(2)}_{m-e_2/2} = (a^{(22)}_{m-e_2/2}/h)(u^n_m - u^n_{m-e_2})
        + (a^{(21)}_{m-e_2/2}/2h)((u^n_{m+e_1} + u^n_{m+e_1-e_2})/2 - (u^n_{m-e_1} + u^n_{m-e_1-e_2})/2).   (3.22)

When approximating f^{(1)}_{m+e_1/2} we take the averages of u^n_{m±e_2} and u^n_{m+e_1±e_2} to approximate u(x_{m+e_1/2±e_2}, t^n). Then we use those two averages to approximate the y derivative of u at x_{m+e_1/2}. The scheme is second order in both space and time.
3D equation
The macro scheme for the three dimensional problem is of the form

  U^{n+1}_m = 2U^n_m - U^{n-1}_m + K^2 Y^n_m,
  Y^n_m = (1/H)(F^{(1,n)}_{m+e_1/2} - F^{(1,n)}_{m-e_1/2}) + (1/H)(F^{(2,n)}_{m+e_2/2} - F^{(2,n)}_{m-e_2/2}) + (1/H)(F^{(3,n)}_{m+e_3/2} - F^{(3,n)}_{m-e_3/2}),
  F^n_{m±e_k/2} = F(x_{m±e_k/2}, P^n_{m±e_k/2}),   (3.23)

where P^n_{m+e_3/2} is defined as

  P^n_{m+e_3/2} = [ (1/2H)((U_{m+e_1} + U_{m+e_1+e_3})/2 - (U_{m-e_1} + U_{m-e_1+e_3})/2),
                    (1/2H)((U_{m+e_2} + U_{m+e_2+e_3})/2 - (U_{m-e_2} + U_{m-e_2+e_3})/2),
                    (1/H)(U_{m+e_3} - U_m) ]^T,   (3.24)

and the other P^n_{m±e_k/2} are defined analogously. The micro level scheme is a second order accurate scheme defined analogously to the 2D scheme (3.22):
  u^{n+1}_m = 2u^n_m - u^{n-1}_m + k^2 y^n_m,
  y^n_m = (1/h)(f^{(1)}_{m+e_1/2} - f^{(1)}_{m-e_1/2}) + (1/h)(f^{(2)}_{m+e_2/2} - f^{(2)}_{m-e_2/2}) + (1/h)(f^{(3)}_{m+e_3/2} - f^{(3)}_{m-e_3/2}),

with fluxes, for k = 1, 2, 3,

  f^{(k)}_{m+e_k/2} = (a^{(kk)}_{m+e_k/2}/h)(u^n_{m+e_k} - u^n_m)
        + Σ_{l≠k} (a^{(kl)}_{m+e_k/2}/2h)((u^n_{m+e_l} + u^n_{m+e_k+e_l})/2 - (u^n_{m-e_l} + u^n_{m+e_k-e_l})/2),
  f^{(k)}_{m-e_k/2} = (a^{(kk)}_{m-e_k/2}/h)(u^n_m - u^n_{m-e_k})
        + Σ_{l≠k} (a^{(kl)}_{m-e_k/2}/2h)((u^n_{m+e_l} + u^n_{m-e_k+e_l})/2 - (u^n_{m-e_l} + u^n_{m-e_k-e_l})/2).   (3.25)

As in two dimensions, the off-diagonal terms use averages of u at neighboring points to approximate the transverse derivatives at the flux points.
1D equation - long time
The finite difference scheme on the macro level is

  U^{n+1}_m = 2U^n_m - U^{n-1}_m + K^2 Y^n_m,
  Y^n_m = (1/H)(F^n_{m+1/2} - F^n_{m-1/2}),
  F^n_{m±1/2} = F(x_{m±1/2}, P^n_{m±1/2}, Q^n_{m±1/2}, R^n_{m±1/2}),   (3.26)
where P, Q and R are defined via U,

  P^n_{m-1/2} = (-U_{m+1} + 27U_m - 27U_{m-1} + U_{m-2})/(24H) = U_x(x_{m-1/2}, t_n) + O(H^4),
  Q^n_{m-1/2} = (U_{m+1} - U_m - U_{m-1} + U_{m-2})/(2H^2) = U_xx(x_{m-1/2}, t_n) + O(H^2),
  R^n_{m-1/2} = (U_{m+1} - 3U_m + 3U_{m-1} - U_{m-2})/H^3 = U_xxx(x_{m-1/2}, t_n) + O(H^2).   (3.27)
The micro level scheme is second order in time and fourth order accurate in space,

  u^{n+1}_m = 2u^n_m - u^{n-1}_m + k^2 y^n_m,
  y^n_m = (-f^n_{m+3/2} + 27f^n_{m+1/2} - 27f^n_{m-1/2} + f^n_{m-3/2})/(24h),
  f^n_{m+3/2} = a_{m+3/2}(-u^n_{m+3} + 27u^n_{m+2} - 27u^n_{m+1} + u^n_m)/(24h),
  f^n_{m+1/2} = a_{m+1/2}(-u^n_{m+2} + 27u^n_{m+1} - 27u^n_m + u^n_{m-1})/(24h),
  f^n_{m-1/2} = a_{m-1/2}(-u^n_{m+1} + 27u^n_m - 27u^n_{m-1} + u^n_{m-2})/(24h),
  f^n_{m-3/2} = a_{m-3/2}(-u^n_m + 27u^n_{m-1} - 27u^n_{m-2} + u^n_{m-3})/(24h).   (3.28)
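The fourth order staggered stencil used for the fluxes above can be checked numerically. The sketch below (illustrative only, not thesis code) measures the convergence rate of the approximation of u_x at the half points for a smooth periodic test function; the observed rate should be close to 4.

```python
import numpy as np

def flux4(u, h):
    """Fourth order approximation of u_x at the half points x_{m+1/2}:
    (u_{m-1} - 27 u_m + 27 u_{m+1} - u_{m+2}) / (24 h), periodic grid."""
    return (-np.roll(u, -2) + 27 * np.roll(u, -1) - 27 * u + np.roll(u, 1)) / (24 * h)

def max_error(m):
    """Max error against the exact derivative of sin(2*pi*x) on m points."""
    h = 1.0 / m
    x = np.arange(m) * h
    u = np.sin(2 * np.pi * x)
    exact = 2 * np.pi * np.cos(2 * np.pi * (x + h / 2))
    return np.max(np.abs(flux4(u, h) - exact))

e1, e2 = max_error(32), max_error(64)
rate = np.log2(e1 / e2)   # observed convergence order
```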
Integral computations
In this section we describe the numerical quadrature used to compute accurate space and time averages of f^ε in the micro solver part of the HMM process. The nontrivial part of integrating f^ε is that f^ε is given only at grid points shifted by one half grid point. We formulate the result independently of its application in the micro solver:
Suppose that Y = [0, y_max]^d. Also assume that f ∈ L^∞(Y)^d, where d is the dimension of Y, and that component k of f_m (f^{(k)}_m) is defined on a grid hG_{N,k}, where hN = y_max and

  G_{N,k} = { m - (1/2)e_k : m ∈ (Z_+)^d, max m ≤ N },   (3.29)

where Z_+ = {z ∈ Z : z > 0}. We can integrate each component f^{(k)} using a combination of the midpoint rule in direction k and the trapezoidal rule in all other directions,

  ∫_Y f^{(k)} dY ≈ h^d Σ'_{i_1=0}^{N} ··· Σ_{i_k=1}^{N} ··· Σ'_{i_d=0}^{N} f_{i_1,...,i_k-1/2,...,i_d} = h^d Σ_{m ∈ G_{N,k}} f_m.   (3.30)
The first and last terms in all primed sums are divided by two. A weighted mean using a kernel K_η is computed with the formula

  ∫_Y K_η(y) f^{(k)}(y) dy ≈ h^d Σ_{m ∈ G_{N,k}} K_m f^{(k)}_m,   (3.31)

where the kernel is evaluated as K_m = K_η(hm) (m ∈ G_{N,k}). The approximation (3.30) is second order with respect to the grid size h.
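The quadrature (3.30) is straightforward to implement. The sketch below (illustrative names, not thesis code) does the d = 2, k = 1 case: the midpoint rule in direction 1 and the primed trapezoidal sum, with the first and last terms halved, in direction 2. The test integrand is linear in each variable, so both rules are exact.

```python
import numpy as np

def primed_weights(n):
    """Trapezoidal weights for the primed sum: first and last terms halved."""
    w = np.ones(n + 1)
    w[0] = w[-1] = 0.5
    return w

def integrate_k1(f, h):
    """Integrate f sampled at (i1 - 1/2, i2) points over [0,1]^2:
    midpoint rule in x1 (N midpoints), primed sum in x2 (N+1 nodes).
    f has shape (N, N+1)."""
    N = f.shape[0]
    w2 = primed_weights(N)
    return h**2 * np.sum(f * w2[None, :])

# Check on f(x, y) = x * (1 + y) over Y = [0, 1]^2; exact integral 1/2 * 3/2 = 3/4.
N = 40
h = 1.0 / N
x_mid = (np.arange(N) + 0.5) * h
y_nod = np.arange(N + 1) * h
F = x_mid[:, None] * (1.0 + y_nod[None, :])
val = integrate_k1(F, h)
```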
3.3 Computational complexity
We will discuss the computational complexity of solving the problem

  u^ε_tt - ∇·(A^ε ∇u^ε) = 0,   Ω × {0 < t < T},
  u^ε(x, 0) = f(x),  u^ε_t(x, 0) = g(x),   Ω × {t = 0},
  u^ε is Y-periodic,   (3.32)
in terms of the number of floating point operations. We will analyze the situation when the macro scale domain is Ω = [0, 1]^d, T = O(ε^{-L}) and L ∈ {0, 2}. For simplicity we assume that the time step must be proportional to ε in all direct solvers to maintain a fixed accuracy.
Using a direct solver on (3.32) with the above assumption implies that the cost to solve (3.32) is of order

  (1/ε^L)(1/ε)^{d+1} = ε^{-(L+d+1)}.   (3.33)
The total cost for HMM is of the form

  (1/ε^L)(cost of micro problem) M_d,   (3.34)

where M_d is the number of micro problems that need to be solved per macro time step. The HMM micro problem, with η = τ, costs

  (η/ε)^{d+1},   η = ε^{1-α}.   (3.35)
For finite time computations we can use a kernel with η ∼ ε (i.e. α = 0). The total cost for HMM for finite time (L = 0) is therefore of the form

  (1/ε^L)(cost of micro problem) M_d = (cost of micro problem) M_d = O(1),   (3.36)

thus independent of ε and d. For long time computations we need a kernel with η ∼ ε^{1-α}, where 0 < α < 1 is a function of (p, q) for the corresponding kernel space K_{p,q}. The reason for this is that we need to meet the accuracy requirements we describe in Section 4.2 on page 36. For long time (L = 2) the cost is given by

  (1/ε^L)(cost of micro problem) M_d = ε^{-(2+α(d+1))}.   (3.37)

We will show in the next subsection how the computational complexity can be reduced even further by precomputing the micro problems.
Accelerating the HMM process
In this section we will show that the computational cost of the HMM process can be reduced significantly. We observe that the function (3.6) is linear, and as a corollary we can apply the HMM process to a finite number of canonical micro problems and form linear combinations of those for any given F̃ computation.
The function F̃ given in (3.12) is computed in a number of steps. We will see that all steps are linear; a concatenation of linear operators is again a linear operator, and therefore F̃(x, p) is a linear operator in p. This is true also for the case in (3.16) via a similar argument. The steps in the HMM algorithm are:
1. Compute initial data u(x, 0) and u_t(x, 0) from p, u(x, 0) = p^T x.
2. Solve u^ε_tt - ∇·(A^ε ∇u^ε) = 0 for 0 < t < τ.
3. Compute the average F̃ = ∬ K_τ K_η f^ε dx dt, where f^ε = A^ε ∇u^ε.
The first operation is a linear operation since we consider polynomials where p are the coefficients. In step two we compute a solution to a linear PDE, therefore this step is linear as well. Computing the integral average in step 3 is also a linear operation. One can make a similar argument for the long time problem, where the initial data is a degree three polynomial.
Example: Optimizing the long time computation. As an example, we consider optimizing the one dimensional case for long time by a lookup table (LUT). The general idea is the following: since F̃ is linear (in p, q and r) we can express it on the form

  F̃(x, p, q, r) = pF_1(x) + qF_2(x) + rF_3(x),   (3.38)

where F_1(x) = F̃(x, 1, 0, 0), F_2(x) = F̃(x, 0, 1, 0) and F_3(x) = F̃(x, 0, 0, 1). By precomputing the table F_{m,k} = F_k(X_{m-1/2}) for all macro grid points X = X_{m-1/2} (m = 1, 2, . . . , M) we need to compute at most 3M micro problems. In the case when A^ε does not have any slow scales we actually have that F_k(x) is ε-periodic. Therefore we only need to resolve F_k over one ε-period. This can be done efficiently and accurately with very few points by using the Fourier transform.
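Because F̃ is linear in (p, q, r), the lookup table amounts to three canonical evaluations per macro point. The sketch below shows only the bookkeeping; `micro_F` stands in for the actual micro solver (not implemented here), and the toy closure used in the check is just a linear function chosen to exercise the code.

```python
def build_lut(micro_F, X):
    """Precompute F_k(x) = F~(x, e_k) at all macro points, cf. (3.38)."""
    return {x: (micro_F(x, 1, 0, 0), micro_F(x, 0, 1, 0), micro_F(x, 0, 0, 1))
            for x in X}

def F_tilde(lut, x, p, q, r):
    """Evaluate F~(x, p, q, r) = p F1(x) + q F2(x) + r F3(x) from the table."""
    F1, F2, F3 = lut[x]
    return p * F1 + q * F2 + r * F3

# Toy stand-in for the micro solver: F~(x, p, q, r) = 2p - q + 3r.
toy = lambda x, p, q, r: 2 * p - q + 3 * r
lut = build_lut(toy, [0.0, 0.5])
val = F_tilde(lut, 0.5, 1.0, 2.0, -1.0)
```

Since the stand-in is linear, the table reproduces it exactly: val = 2·1 − 2 + 3·(−1) = −3.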
Remark. For a smooth function u(x),

  F̃(x, u_x(x), u_xx(x), u_xxx(x)) = c_1 u_x(x) + c_2 u_xx(x) + c_3 u_xxx(x),   (3.39)

where c_k (k = 1, 2, 3) depend on x, ε, η and τ. For A^ε = A(x/ε), one can show, using Bloch wave theory, that c_2 approaches 0 and that c_1 and c_3 go toward Ā and βε^2 respectively, as the micro problem is solved more and more accurately [11].
We end this section on computational complexity with a complexity analysis of the HMM algorithm using a lookup table (LUT). By using a LUT we need to compute fewer micro problems. The total cost for HMM for long time (L = 2), when using a LUT, is then of the form

  (cost of micro problem) M_d + ε^{-L} = max(ε^{-α(d+1)}, ε^{-2}).   (3.40)

For finite time HMM has a much better complexity than DNS. For long time it depends on the choice of kernel. We give an example of the complexity in terms of ε in Table 3.1, where we used a kernel from K_{2,7} to get a low value α = 0.30 (cf. Table 4.1). We show the cost of using DNS, HMM and HMM with the LUT technique in Table 3.1. The conclusion is that HMM together with a lookup table has superior asymptotic behavior to DNS from the perspective of computational cost.
dimension   cost DNS   cost HMM   cost HMM (w/LUT)
1           4          2.6        2
2           5          2.9        2
3           6          3.2        2

Table 3.1: Cost κ (complexity O(ε^{-κ})) for DNS and HMM for a long time problem in one, two and three dimensions, using a kernel K ∈ K_{2,7} and α = 0.30.
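The exponents in Table 3.1 follow directly from (3.33), (3.37) and (3.40); a quick check (hypothetical helper names) for L = 2 and α = 0.30:

```python
def cost_dns(d, L=2):
    """Exponent kappa for a direct solver, cf. (3.33): L + d + 1."""
    return L + d + 1

def cost_hmm(d, alpha, L=2):
    """Exponent for plain HMM over long time, cf. (3.37): L + alpha*(d+1)."""
    return L + alpha * (d + 1)

def cost_hmm_lut(d, alpha, L=2):
    """Exponent for HMM with a lookup table, cf. (3.40)."""
    return max(alpha * (d + 1), L)

alpha = 0.30
table = [(d, cost_dns(d), cost_hmm(d, alpha), cost_hmm_lut(d, alpha))
         for d in (1, 2, 3)]
```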
Chapter 4
Analysis

4.1 Convergence theory for the one-dimensional problem and finite time
In this section we apply the HMM process to the problem (2.2) and show that it generates results close to a direct discretization of the homogenized equation (2.3). In particular we show that

  F̃(x, p) = F̄(x, p) + O((ε/η)^q).   (4.1)

The functions F̃ and F̄ are defined in (3.6) and (3.2) respectively, and we note that here F̄(x, p) = Āp. The integer q depends on the smoothness of the kernel used to compute a weighted average of f^ε in (3.8).
We will formulate the problem in the setting of elliptic operators to deduce that F̃ converges to F̄ without ε dependency in the case when A^ε = A(x/ε) and A is a Y-periodic function. For the analysis we solve the micro problem (3.5) over all of R:

  u^ε_tt - ∂_x(A(x/ε) u^ε_x) = 0,   in R × {0 < t < τ},
  u^ε = px, u^ε_t = 0,              on R × {t = 0}.   (4.2)

Note that this gives the same F̃ as in (3.8) if (3.7) holds.
Theorem 2. Let F̃(x_0, p) be defined by (3.8) where u^ε solves the micro problem (4.2), A^ε(x) = A(x/ε) and A is Y-periodic and smooth. Moreover suppose K ∈ K_{p,q}, f and g are smooth, H = nε for some integer n and τ = η. Then for p ≠ 0,

  (1/p) |F̃(x_0, p) - F̄(x_0, p)| ≤ C (ε/η)^q,

where C is independent of ε, η and p. Furthermore, for the numerical approximation given in (3.3) we have the error estimate, for 0 ≤ t_n ≤ T,

  |U^n_m - ū(x_m, t_n)| ≤ C(T)(H^2 + (ε/η)^q),

where ū is the homogenized solution to (2.3).
Proof. We will prove the Theorem in the following steps:
1. Reformulate the problem as a PDE for a periodic function.
2. Define an elliptic operator L(y).
3. Expand A_y(y) and v(y, t) (to be defined) in eigenfunctions of L(y).
4. Compute the time dependent coefficients v_j(t) in the above eigenfunction expansion.
5. Compute the integral of f^ε to get F̂.
6. Compute the solution to a cell problem and give the final estimate.
In the final step we show the error estimate in Theorem 2.
Step 1: Express the solution to (4.2) as

  u^ε(x, t) = px + v(x/ε, t).   (4.3)

We insert this into (4.2) to get a PDE for v:

  v_tt = (p/ε) ∂_y A(y) + (1/ε^2) ∂_y(A(y) ∂_y v),
  v(y, 0) = 0,  v_t(y, 0) = 0,   (4.4)

where y = x/ε. Since A is Y-periodic, so is v, and we can solve (4.4) as a Y-periodic problem.
Step 2: We define the linear operator L(y) := -∂_y A(y) ∂_y on Y with periodic boundary conditions. Denote by w_j(y) the eigenfunctions and by λ_j the corresponding (non-negative) eigenvalues of L. Since L is uniformly elliptic, standard theory on periodic elliptic operators tells us that all eigenvalues are strictly positive and bounded away from zero, except for the single zero eigenvalue,

  0 = λ_0 < λ_1 ≤ λ_2 ≤ ···,   (4.5)

and that the w_j ∈ C^∞ form an orthonormal basis for L^2_per(Y). Note also that w_0 = |Y|^{-1} is a constant function.
Step 3: We express ∂_y A(y) and v(y, t) in eigenfunctions of L:

  A_y(y) = Σ_{j=1}^∞ a_j w_j(y)   and   v(y, t) = Σ_{j=0}^∞ v_j(t) w_j(y).   (4.6)

Note that a_0 = 0 since the mean value of ∂_y A(y) is zero.
Step 4: We plug the eigenfunction expansions (4.6) into (4.4) and find that

  Σ_{j=0}^∞ v_j'' w_j = (p/ε) Σ_{j=1}^∞ a_j w_j - (1/ε^2) Σ_{j=1}^∞ λ_j v_j w_j.   (4.7)

By collecting terms of w_j we get

  v_j'' - p a_j/ε + (λ_j/ε^2) v_j = 0.   (4.8)
This is a system of ODEs of the form

  y'' + αy = β,   (4.9)

which has solutions of the form (α > 0)

  y(t) = A e^{it√α} + B e^{-it√α} + β/α.   (4.10)

Since all λ_j > 0 (j > 0), the functions v_j in the problem have the form

  v_j(t) = A_j e^{it√λ_j/ε} + B_j e^{-it√λ_j/ε} + r_j,   r_j = ε p a_j / λ_j,   (4.11)

and the special v_0 is given by

  v_0(t) = p a_0 t^2/(2ε) + Ct + D = Ct + D,   since a_0 = 0.   (4.12)
By plugging the general solution (4.11) into the initial conditions of (4.4), we can formulate equations for A_j and B_j (j > 0):

  v(y, 0) = 0 ⇒ Σ_{j=0}^∞ v_j(0) w_j(y) = 0 ⇒ v_j(0) = 0 ⇒ A_j + B_j + r_j = 0;   (4.13)
  v_t(y, 0) = 0 ⇒ Σ_{j=0}^∞ v_j'(0) w_j(y) = 0 ⇒ v_j'(0) = 0 ⇒ (i√λ_j/ε) A_j - (i√λ_j/ε) B_j = 0.   (4.14)

Similarly, for v_0(t),

  v_0(0) = 0 ⇒ D = 0,   and   v_0'(0) = 0 ⇒ C = 0,   (4.15)

thus v_0(t) ≡ 0. We solve for A_j and B_j and get

  A_j = B_j = -r_j/2 = -ε p a_j/(2λ_j),   j = 1, 2, . . .   (4.16)
All in all, the v_j(t) coefficients in explicit form are

  v_0(t) = 0,
  v_j(t) = -(ε p a_j/(2λ_j)) (e^{it√λ_j/ε} + e^{-it√λ_j/ε}) + ε p a_j/λ_j
         = (ε p a_j/λ_j)(1 - cos(t√λ_j/ε)),   j = 1, 2, . . .   (4.17)

The solution to our problem (4.4) can then be expressed as

  v(y, t) = εp Σ_{j=1}^∞ (a_j/λ_j)(1 - cos(t√λ_j/ε)) w_j(y).   (4.18)
Step 5: Now plug the expression (4.18) into the expression (4.3):

  f^ε = A(x/ε) u^ε_x = p A(x/ε) (1 + Σ_{j=1}^∞ (a_j/λ_j)(1 - cos(t√λ_j/ε)) ∂_y w_j(x/ε)).   (4.19)
We write down and analyze the function f^ε in two parts, f^ε = p(Λ_1 + Λ_2), where

  Λ_1(x/ε) = A(x/ε)(1 + Σ_{j=1}^∞ (a_j/λ_j) ∂_y w_j(x/ε)),
  Λ_2(x/ε, t) = -A(x/ε) Σ_{j=1}^∞ (a_j/λ_j) cos(t√λ_j/ε) ∂_y w_j(x/ε).   (4.20)
Step 6a: First we show that Λ_1 = Ā. To do that we need to use the so-called cell problem (or corrector problem, see [14, 4.5.4]):

  L(y)χ = -A_y   in Y,   χ Y-periodic.   (4.21)

We rewrite the cell problem (4.21) using an eigenfunction expansion:

  Σ_{j=0}^∞ λ_j χ_j w_j = -Σ_{j=1}^∞ a_j w_j  ⇒  χ_j = -a_j/λ_j,   j = 1, 2, . . . ,   (4.22)
where χ_j are the coefficients of χ in the eigenfunction expansion. For the term Λ_1 we now make good use of the eigenfunction expansion of the cell solution χ:

  ∬ K_τ(t) K_η(x) Λ_1 dx dt = ∫ K_η(x) (A(x/ε) + A(x/ε) Σ_{j=1}^∞ (a_j/λ_j) ∂_y w_j(x/ε)) dx
                            = ∫ K_η(x) (A(x/ε) - A(x/ε) χ_y(x/ε)) dx
                            = Ā.   (4.23)
Remark. In the one dimensional case and for our type of initial data the spatial averaging is not needed. In general, when A^ε = A(x, x/ε) and in higher dimensions, we will have an O((ε/η)^q) term adding to (4.23). But now, since χ solves

  ∂_y(A(y) - A(y)χ_y) = 0,   (4.24)

we have that

  A(y) - A(y)χ_y = constant,   (4.25)

thus Λ_1(x/ε) = Λ_1 is constant.
Step 6b: Now we should show that

  ∬ K_τ(t) K_η(x - x_0) Λ_2 dx dt → 0,   τ → ∞.   (4.26)

We now apply the following lemma from [13] on averaging with the kernels of Definition 3.1:
Lemma 1. Let f^ε(t) = f(t, t/ε), where f(t, s) is 1-periodic in the second variable and ∂^r f(t, s)/∂t^r is continuous for r = 0, 1, . . . , p - 1. For any K ∈ K_{p,q} there exist constants C_1 and C_2, independent of ε and η, such that

  E = |K_η ∗ f^ε(t) - f̄(t)| ≤ C_1 η^p + C_2 (ε/η)^q,   f̄(t) = ∫_0^1 f(t, s) ds.

If f = f(t/ε) then we can take C_1 = 0. Furthermore, the error is minimized if η is chosen to scale with ε^{q/(p+q)}.
With f(t, s) = cos 2πs and ε replaced by 2πελ_j^{-1/2}, we obtain

  | ∫ K_τ(t) cos(t√λ_j/ε) dt | ≤ C_2 (2πε/(√λ_j τ))^q = C_0 λ_j^{-q/2} (ε/τ)^q.   (4.27)
Let b_j and g(y) be defined as

  b_j = ∫ K_τ(t) cos(t√λ_j/ε) dt,   g(y) = Σ_{j=1}^∞ b_j χ_j w_j(y),   (4.28)

where we again used the solution to the cell problem (4.21) in the formulation of g(y). We then express ∬ K_τ K_η Λ_2 dx dt using g, followed by a change of variables:
"
Z
η
x
dx
ε
−η
Z1
ηx + x ηx + x 0
0
=−
K(x)A
g0
dx
ε
ε
−1
Z1
ηx + x ηx + x d
ε
0
0
=−
K(x)A
g
dx.
η
ε
dx
ε
−1
Kτ (t)Kη (x − x0 )Λ2 dx dt = −
Kη A(x/ε)g 0
By integration by parts, using K(1) = K(-1) = 0, together with the Cauchy-Schwarz inequality,

  | (ε/η) ∫_{-1}^{1} K(x) A((ηx + x_0)/ε) (d/dx) g((ηx + x_0)/ε) dx |
    = | (ε/η) ∫_{-1}^{1} (d/dx)(K(x) A((ηx + x_0)/ε)) g((ηx + x_0)/ε) dx |
    ≤ ( ∫_{-1}^{1} ((ε/η)(d/dx)(K(x) A((ηx + x_0)/ε)))^2 dx )^{1/2} ( ∫_{-1}^{1} g((ηx + x_0)/ε)^2 dx )^{1/2}
    ≤ C ‖g‖_{L^2_per},

where the first factor is O(1),
and C is independent of ε and η. Finally we need to show that ‖g‖ → 0. This is done by observing that

  ‖g‖^2_{L^2_per} = Σ_{j=1}^∞ b_j^2 χ_j^2 ≤ b_max^2 Σ_{j=1}^∞ χ_j^2 = b_max^2 ‖χ‖^2_{L^2_per},   (4.29)

where |b_max| is bounded by

  C_0 λ_1^{-q/2} (ε/τ)^q,   (4.30)

following the computations in (4.27). Then finally, we add our results from the calculations above and get
"
F̂(x0 , p) = p
Kτ (t)Kη (x − x0 )f ε dx dt
"
=p
Kτ (t)Kη (x − x0 )(Λ1 (t) + Λ2 (x/ε, t)) dx dt
(4.31)
!q !!
ε
= p Ā + O
.
η
Final step: Now we show the error estimate |U^n_m - ū(x_m, t_n)| ≤ C(T)(H^2 + (ε/η)^q). We observe that F̃ in the Theorem is of the form

  F̃ = Ã(x) p,   (4.32)

where Ã is ε-periodic. By (4.31),

  |Ã(x) - Ā| ≤ C (ε/η)^q.   (4.33)

By choosing H = nε for some integer n, we find that the macro scheme (3.3) is a standard second order discretization of the problem

  u_tt - Ã(0) u_xx = 0,   Ω × {0 < t < T},
  u(x, 0) = f(x), u_t = g,   Ω × {t = 0},   (4.34)

since Ã(X_m) = Ã(mnε) = Ã(0) for all m. Hence, if g = 0 (the result is true also for g ≠ 0),

  U^n_m = (1/2)(f(x_m - √(Ã(0)) t_n) + f(x_m + √(Ã(0)) t_n)) + O(H^2).   (4.35)

On the other hand, the solution of the homogenized equation (2.3) with g = 0 is

  ū(x_m, t_n) = (1/2)(f(x_m - √Ā t_n) + f(x_m + √Ā t_n)).   (4.36)

Therefore we get the error estimate

  |U^n_m - ū(x_m, t_n)| ≤ sup_{|t|≤T} |f(x + √(Ã(0)) t) - f(x + √Ā t)| + C(T)H^2   (4.37)
    ≤ |f'|_∞ T |√(Ã(0)) - √Ā| + C(T)H^2   (4.38)
    ≤ C(T)(H^2 + (ε/η)^q),   (4.39)

for 0 ≤ t ≤ T.
This proves the Theorem.
4.2 Analysis of the one-dimensional problem over long time
As we mentioned in Section 3.1 on page 16, a few changes to the HMM procedure must be made in the long time case. We discuss those changes here. First of all, for long time we get from (2.12) that F in (3.2) is of the form

  F = F(x, U, U_x, U_xx, U_xxx) = Ā U_x + βε^2 U_xxx.   (4.40)
Since we have a term U_xxx in (4.40), it is not enough to have initial data that is just a linear function. We need initial data of at least degree three to be able to approximate the third derivative.
Secondly, we need to use a larger micro box in terms of ε. We argue as follows: we assume that in general the error in computing F is given by O(η^{p+1} + (ε/η)^q). When using a kernel to average a function that is not strictly periodic, this is the expected form of the error, cf. Lemma 1. By (4.40) we now need our approximation of F to be better than O(ε^2), so we want to have

  max{ η^{p+1}, (ε/η)^q } < C ε^2.   (4.41)
Let m = η/ε be the micro box size in terms of ε, and suppose the two errors balance, such that the two terms η^{p+1} and (ε/η)^q are equal. Then we have a relation for m in terms of ε, p and q:

  m ∼ ε^{-(p+1)/(p+1+q)}.   (4.42)

If we choose such an m and require that both errors be smaller than ε^2,

  η^{p+1}, (ε/η)^q ∼ ε^{q(p+1)/(p+1+q)} < ε^2,   (4.43)

then we end up with an inequality in terms of p and q:

  E_{p,q} := q(p+1)/(p+1+q) > 2.   (4.44)
We conclude that the error only gets smaller with a larger q and that

  E_{p,q} → p + 1   (monotonically increasing) as q → ∞.   (4.45)

Also, we need p > 1, and q must be chosen such that

  q > 2(p+1)/(p-1)   (4.46)

to have an error strictly less than ε^2 in the approximation of F in (3.16). We can compute the error terms E_{p,q} and compare exponents of ε, as seen in Table 4.1. When the error only depends on ε/η, as in Theorem 2, then we simply have m ∼ ε^{-2/q} and we need q > 2, with no condition on p.
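The exponent E_{p,q} in (4.44) and the micro box scaling (4.42) are simple to tabulate; this sketch (illustrative helper names) reproduces a few rows of Table 4.1:

```python
def E(p, q):
    """Error exponent E_{p,q} = q(p+1)/(p+1+q), cf. (4.44)."""
    return q * (p + 1) / (p + 1 + q)

def eta_exponent(p, q):
    """Exponent of eps in eta/eps = m ~ eps^{-(p+1)/(p+1+q)}, cf. (4.42)."""
    return -(p + 1) / (p + 1 + q)

# A few (p, q)-pairs and their exponents, as in Table 4.1.
rows = {(2, 7): (E(2, 7), eta_exponent(2, 7)),
        (3, 5): (E(3, 5), eta_exponent(3, 5)),
        (6, 6): (E(6, 6), eta_exponent(6, 6))}
```

For instance (p, q) = (2, 7) gives E = 2.10 > 2, so this kernel class meets the long time accuracy requirement (4.44).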
Remark. It is possible to find kernels with infinite q. In [13] a kernel K_exp is given with support I = [-1, 1], p = 1 and infinite q,

  K_exp(x) = C_0 exp(5/(x^2 - 1))  for |x| < 1,   K_exp(x) = 0  for |x| ≥ 1,

where C_0 is chosen such that ∫ K_exp(x) dx = 1. We will use this kernel in our numerical tests in the cases when the weighted mean computation is independent of the number of zero moments the kernel has.
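A sketch of this kernel, with C_0 fixed numerically by Simpson's rule so that the kernel has unit mass; the implementation below is illustrative, not the code used in [13], and the normalization approach is an assumption for the example.

```python
import math

def k_exp_unnormalized(x):
    """exp(5/(x^2 - 1)) on (-1, 1), zero outside; underflows smoothly at +-1."""
    return math.exp(5.0 / (x * x - 1.0)) if abs(x) < 1.0 else 0.0

# Fix C0 by composite Simpson's rule on [-1, 1] so the kernel has unit mass.
n = 2000                                   # even number of subintervals
h = 2.0 / n
xs = [-1.0 + i * h for i in range(n + 1)]
w = [1 if i in (0, n) else (4 if i % 2 == 1 else 2) for i in range(n + 1)]
mass = h / 3 * sum(wi * k_exp_unnormalized(x) for wi, x in zip(w, xs))
C0 = 1.0 / mass

def K_exp(x):
    """Normalized kernel with compact support [-1, 1]."""
    return C0 * k_exp_unnormalized(x)
```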
(p, q)-pair   log_ε E_{p,q}   log_ε η/ε
(2,7)         2.10            -0.30
(3,5)         2.22            -0.44
(3,6)         2.40            -0.40
(3,7)         2.55            -0.36
(4,4)         2.22            -0.56
(4,5)         2.50            -0.50
(4,6)         2.73            -0.45
(4,7)         2.92            -0.42
(5,4)         2.40            -0.60
(5,5)         2.73            -0.55
(5,6)         3.00            -0.50
(5,7)         3.23            -0.46
(6,3)         2.10            -0.70
(6,4)         2.55            -0.64
(6,5)         2.92            -0.58
(6,6)         3.23            -0.54
(6,7)         3.50            -0.50
(7,3)         2.18            -0.73
(7,4)         2.67            -0.67
(7,5)         3.08            -0.62
(7,6)         3.43            -0.57
(7,7)         3.73            -0.53

Table 4.1: Kernel width η ∼ mε and error term E_{p,q} = ε^{q(p+1)/(p+1+q)} in relation to the kernel K_η ∈ K_{p,q}.
4.3 Stability analysis of the macro scheme for long time
As we described in the Introduction, the effective equation for the wave equation over long time (2.12),

  u_tt - Ā u_xx - βε^2 u_xxxx = 0,

is ill posed: the high frequency components grow without bound for wave numbers larger than ε^{-1}. However, since we are only interested in low frequency solutions, it should be possible to use a regularized version of (2.12) where high frequencies are suppressed. The equation can be regularized to a well-posed form, either with a low-pass filter P_low, projecting on frequencies smaller than ε^{-1},

  u_tt = P_low(Ā u_xx + βε^2 u_xxxx),   (4.47)

or by adding a small 6th order term to the effective equation,

  u_tt = Ā u_xx + βε^2 u_xxxx + cε^4 u_xxxxxx.   (4.48)

Another regularization technique is to use large time and space grid sizes for which a standard discretization is stable. This is similar to the low-frequency technique. We show here that this is indeed possible. An example is shown in Figure 5.15. We thus apply standard von Neumann analysis [27] to show stability of the macro scheme

  u^{n+1}_m = 2u^n_m - u^{n-1}_m + (k^2/h)(f^n_{m+1/2} - f^n_{m-1/2}),
  f^n_{m±1/2} = ((Ā ∂_x + βε^2 ∂_xxx) p_{m±1/2}(x))|_{x = x_{m±1/2}},   (4.49)
used in the HMM algorithm for the 1D problem and long time. Here we denote by u^n_m the numerical approximation of u(x_m, t_n) = u(mh, nk), where k is the time step and h is the grid size. The scheme (4.49) is second order accurate with respect to both k and h, i.e. in time and space. A numerical scheme is said to be stable if

  Σ_j (u^n_j)^2 ≤ C(T) Σ_j (u^0_j)^2,   n = 1, 2, . . . , N,  Nk = T,   (4.50)

for some constant C(T) independent of n. We also need to define the interpolation polynomial p_{m-1/2} of degree three over the 4 grid points u_{m-2}, u_{m-1}, u_m and u_{m+1}. We assume a uniform grid and write down the associated polynomial p_{m-1/2} in Newton form,

  p_{m-1/2}(x) = c_1 + c_2(x - x_{m-2}) + c_3(x - x_{m-2})(x - x_{m-1}) + c_4(x - x_{m-2})(x - x_{m-1})(x - x_m),   (4.51)
where the coefficients c_i are given by

  c_1 = u_{m-2},
  c_2 = (u_{m-1} - u_{m-2})/Δx,
  c_3 = (u_m - 2u_{m-1} + u_{m-2})/(2Δx^2),
  c_4 = (u_{m+1} - 3u_m + 3u_{m-1} - u_{m-2})/(6Δx^3).   (4.52)
For this discretization we can then show stability if the ratio h/ε is large enough.

Theorem 3. The finite difference scheme (4.49), applied to the effective equation (2.12) with 1-periodic boundary conditions, is stable for k and h such that

  ε/h ≤ √(7β/(24Ā)),   (4.53)

and

  k/h ≤ (1/√Ā) f(24Ā^{-1}β(ε/h)^2),   (4.54)

where

  f(x) = √(6/(7 - x)),   0 ≤ x ≤ 4,
  f(x) = √(2(x - 1)/3),  4 ≤ x ≤ 7.   (4.55)
Proof. Throughout the proof we will use the notation c = k/h and d = ε/h. We plug the interpolation polynomials (4.51) into the numerical fluxes f^n_{m-1/2} and f^n_{m+1/2}, which depend on u^n_{m-2}, u^n_{m-1}, u^n_m and u^n_{m+1}. By doing so, we see that the finite difference scheme (4.49) takes the form

  u^{n+1}_m = 2u^n_m - u^{n-1}_m + (c^2 Ā/24)(-u^n_{m+2} + 28u^n_{m+1} - 54u^n_m + 28u^n_{m-1} - u^n_{m-2})
            + c^2 β d^2 (u^n_{m+2} - 4u^n_{m+1} + 6u^n_m - 4u^n_{m-1} + u^n_{m-2}),   (4.56)

with c = k/h and d = ε/h.
We perform standard von Neumann analysis [27, Section 2.2] and replace u^n_m = g^n exp(imhξ) in the scheme (4.56). After dividing the expression by exp(imhξ), we get a recurrence relation for g^n,

  g^{n+1} = (2 + c^2 p(v)) g^n - g^{n-1},   (4.57)

where p(v) = Av^2 + Bv + C is a polynomial in v = cos θ (θ = hξ) and the coefficients A, B and C are affine functions of d^2:

  A = -(1/6)Ā + 4βd^2,   B = (7/3)Ā - 8βd^2,   C = -(13/6)Ā + 4βd^2.   (4.58)
As usual, the scheme is stable if the eigenvalues λ(Q) = {λ_1, λ_2} of the amplification matrix Q,

  [g^{n+1}; g^n] = [c^2 p(v) + 2, -1; 1, 0] [g^n; g^{n-1}],   (4.59)

satisfy |λ| ≤ 1. Now, we know that the eigenvalues λ(Q) are the zeros of the characteristic polynomial

  det(Q - λI) = λ^2 - (c^2 p(v) + 2)λ + 1.   (4.60)
The special form of the characteristic polynomial (4.60) gives us that λ_1 + λ_2 = c^2 p(v) + 2 and λ_1 λ_2 = 1. This also immediately gives us that |λ_1| = |λ_2|^{-1}. The two eigenvalues of Q are computed to be

  λ = (c^2 p(v) + 2)/2 ± (1/2) √(c^2 p(v)(c^2 p(v) + 4)).   (4.61)

It is a well known fact that the polynomial (4.60), having real coefficients, has either two real zeros or a pair of complex conjugate roots λ_1 = λ̄_2. This leads to two cases:
1. Both zeros are real and distinct. But then, since λ_1 λ_2 = 1, one of |λ_1|, |λ_2| exceeds 1 and the scheme is therefore unstable.
2. The zeros are complex conjugates (λ_1 = λ̄_2). This happens if and only if -4 ≤ c^2 p(v) ≤ 0.
We summarize: the scheme is stable if and only if -4 ≤ c^2 p(v) ≤ 0. The domain of p(v) is [-1, 1] since v = cos θ. The requirement for stability is therefore that -4 ≤ c^2 p(v) ≤ 0 hold for |v| ≤ 1. We now find the requirements on c and d by applying two equivalent conditions on p(v) given by the following lemma:

Lemma 2. The function p(v) ∈ C^0[-δ, δ] satisfies, for μ > 0,

  -μ ≤ c^2 p(v) ≤ 0   when |v| ≤ δ,

if and only if the two conditions

  max_{|v|≤δ} p(v) ≤ 0,   |c^2 min_{|v|≤δ} p(v)| ≤ μ,

are satisfied.

We want to prove a condition on d^2 such that

  max_{|v|≤1} p(v) ≤ 0,   (4.62)

and in addition, we also want to find a corresponding condition on c^2:

  |c^2 min_{|v|≤1} p(v)| ≤ 4.   (4.63)
We realize that, if p(v) is to satisfy p(v) ≤ 0 for |v| ≤ 1, the maximum of p(v) must be attained at |v| = 1 or at the extreme point v* = -B/(2A) if |v*| ≤ 1 and A ≠ 0. When computing p(1) = A + B + C we see that A + B + C = 0 for all d^2, so p(1) ≤ 0 is always satisfied. Therefore we have just two possible values for max p(v) and min p(v) over |v| ≤ 1 that have to be controlled:

  max_{|v|≤1} p(v) = p(-1),                  if |v*| ≥ 1,
                   = p(-1),                  if A = 0 (then p(v) = Bv + C and v* is undefined),
                   = max{p(-1), p(v*)},      if |v*| < 1 and A ≠ 0,   (4.64)

and min p(v) (|v| ≤ 1) is defined analogously. We compute the values p(-1) and p(v*):

  p(-1) = -2B = -(14/3)Ā + 16βd^2,
  p(v*) = -(2A + B)^2/(4A) = 6Ā/(1 - 24Ā^{-1}βd^2).   (4.65)
Now we wish to determine the max and min of {p(-1), p(v*)}. In the case |v*| < 1, A ≠ 0, it is clear, since p'' = 2A, that

  max{p(-1), p(v*)} = p(v*) if A < 0,   p(-1) if A > 0,   (4.66)

and that

  min{p(-1), p(v*)} = p(-1) if A < 0,   p(v*) if A > 0.   (4.67)
It is obvious that |v*| < 1 if |v*|^2 < 1. Let I be defined as I = 2A - B. It follows by the calculation

  |v*|^2 = (-B/(2A))^2 < 1  ⇔  0 < 4A^2 - B^2 = (2A + B)(2A - B),   (4.68)

where 2A + B = p'(1) and 2A - B = I.
24Ā^{-1}βd^2 interval   sgn(A)   sgn(I)   max p(v) (|v| ≤ 1)   min p(v) (|v| ≤ 1)
[0, 1[                  -1       -1       p(-1)                p(-1)
[1, 1]                  0        -1       p(-1)                p(-1)
]1, 4[                  1        -1       p(-1)                p(-1)
[4, 4]                  1        0        p(-1) = p(v*)        p(-1) = p(v*)
]4, 7]                  1        1        p(-1)                p(v*)
]7, ∞[                  unstable

Table 4.2: Sign of A = -(1/6)Ā + 4βd^2 and I = -(8/3)Ā + 16βd^2.
Thus |v*| < 1 if and only if I > 0, since p'(1) = 2Ā > 0, independently of d². We also have
that I = 0 if |v*| = 1. I itself is a function of d²,

    I = 2A − B = −(8/3)Ā + 16βd².    (4.69)

Since both A and I are affine functions (in d²) their signs are easily determined by
separately solving A = 0 and I = 0 for d². We compute that

    A = −(1/6)Ā + 4βd² = 0 ⇔ 24Ā⁻¹βd² = 1,    (4.70)

and

    I = 2A − B = −(8/3)Ā + 16βd² = 0 ⇔ 24Ā⁻¹βd² = 4.    (4.71)

We summarize the sign of A and I in Table 4.2 together with the value of max p(v) over
|v| ≤ 1.

Since max p(v) = p(−1) (|v| ≤ 1) for all stable cases in Table 4.2, the max condition is simply

    p(−1) ≤ 0 ⇔ 0 ≤ 24Ā⁻¹βd² ≤ 7.    (4.72)

And finally, we find c² such that

    c² |min_{|v|≤1} p(v)| ≤ 4.    (4.73)
For most cases in Table 4.2 both the min and the max condition on p(v) are determined by
p(−1). In those cases we have our two conditions

    0 ≤ 24Ā⁻¹βd² ≤ 4,
    c² ≤ 6Ā⁻¹ (7 − 24Ā⁻¹βd²)⁻¹.    (4.74)

In the last case, when A > 0 and I > 0, that is when 4 ≤ 24Ā⁻¹βd², we have

    4 ≤ 24Ā⁻¹βd² ≤ 7,
    c² ≤ (2/3)Ā⁻¹ (24Ā⁻¹βd² − 1).    (4.75)

We show an example of the stability bounds in Figure 4.1, for the Ā and β values derived
from a(x, y) = 1.1 + sin 2πy.
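To make the bounds concrete, the sketch below evaluates the largest admissible c² as a function of d², using the Ā and β values for a(x, y) = 1.1 + sin 2πy taken from Example six in Chapter 5; the function name c2_max and the branch bookkeeping are ours. It also checks that the two branches (4.74) and (4.75) agree where they meet, at 24Ā⁻¹βd² = 4.

```python
import math

A_bar = math.sqrt(0.21)     # harmonic mean of a(x, y) = 1.1 + sin(2*pi*y)
beta = 0.01078280318        # dispersion coefficient beta for the same a

def c2_max(d2):
    """Largest stable c^2 for a given d^2, following (4.74)-(4.75).

    s = 24 * beta * d2 / A_bar; no stable c exists when s > 7."""
    s = 24 * beta * d2 / A_bar
    if s < 0 or s > 7:
        return None                       # outside the admissible d^2 range
    if s <= 4:                            # min p(v) attained at v = -1
        return 6 / (A_bar * (7 - s))
    return 2 * (s - 1) / (3 * A_bar)      # min p(v) attained at v*

d2_meet = 4 * A_bar / (24 * beta)         # where the branches meet (s = 4)
print(c2_max(0.0), c2_max(d2_meet))
```

At s = 4 both branches evaluate to 2/Ā, so the stability bound is continuous across the switch between the two cases.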
[Figure 4.1: A graph of (x, y) where x = ε/h and y = max √Ā k/h according to the Theorem.
The x axis is 0 ≤ x ≤ (5/4)√(7Ā/(24β)); the region below the curve is marked stable.
The Ā and β values are as given by (2.12).]
4.4 Convergence theory for the d-dimensional problem

In this section we extend the theory in Section 4.1 to a d-dimensional setting. We use
slightly changed notation in this section, chosen to let us manipulate divergence and
gradient operations in the usual matrix algebra sense by the formal notation

    ∇ = [∂x1 ∂x2 ··· ∂xd]ᵀ.    (4.76)

We give two short examples of how this notation is used. Let u ∈ C¹(Ω)^d, f ∈ C¹(Ω),
Ω ⊂ R^d. As the first example we compute the divergence of u,

    ∇ᵀu = ∂x1 u1 + ∂x2 u2 + ··· + ∂xd ud.    (4.77)

As the second example we compute the gradient of f,

    ∇f = [∂x1 ∂x2 ··· ∂xd]ᵀ f = [∂x1 f ∂x2 f ··· ∂xd f]ᵀ.    (4.78)

We will also use u² to denote the inner product u² = uᵀu = u1² + u2² + ··· + ud². These
generalizations of the usual calculus notation, and more, can be found in [24].
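As a concrete illustration of the formal notation (4.76)–(4.78), a short numpy sketch (all names ours) realizes ∇ with central differences and checks that ∇ᵀ(∇f) reproduces the Laplacian of a smooth test field:

```python
import numpy as np

n, h = 128, 1.0 / 128
x = np.arange(n) * h
X1, X2 = np.meshgrid(x, x, indexing="ij")

f = np.sin(2 * np.pi * X1) * np.cos(2 * np.pi * X2)   # scalar field on [0,1)^2

# nabla f as in (4.78): a list [d/dx1 f, d/dx2 f] of central differences
grad_f = np.gradient(f, h, edge_order=2)
# nabla^T (nabla f) as in (4.77): sum of d/dxi of the i-th component
div_grad = sum(np.gradient(grad_f[i], h, edge_order=2)[i] for i in range(2))

# For this f, nabla^T nabla f = -8 pi^2 f; compare away from the boundary,
# where the one-sided difference stencils are less accurate:
interior = (slice(8, -8), slice(8, -8))
err = np.abs(div_grad - (-8 * np.pi**2) * f)[interior].max()
print(err)
```

The discretization error is small relative to the field amplitude 8π² ≈ 79, confirming that the formal column-vector ∇ composes as ordinary matrix algebra.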
As in the one dimensional problem, we will formulate the problem in the setting of
elliptic operators to deduce that F̃ will converge to F̄ without ε dependency in the case
when Aε = A(x/ε) and A is symmetric and Y-periodic. For the analysis we solve the micro
problem (3.11) over all of R^d:

    uεtt − ∇xᵀ A(x/ε) ∇x uε = 0,   in R^d × {0 < t < τ},
    uε = pᵀx,  uεt = 0,            on R^d × {t = 0}.    (4.79)

Note that this gives the same F̃ as in (3.12) if we choose a sufficiently large box (cf. the one
dimensional limit (3.7)).

Theorem 4. Let F̃(x0, p) be defined by (3.8) where uε solves the micro problem (4.79),
Aε(x) = A(x/ε) and A is Y-periodic, symmetric and smooth. Moreover suppose K ∈ K^{p,q}
and τ = η. Then for p ≠ 0,

    ‖F̃(x0, p) − F̄(x0, p)‖ ≤ C ‖p‖ (ε/η)^q,

where C is independent of ε, η and p.
Proof. We will prove the Theorem in the following steps:

1. Reformulate the problem as a PDE for a periodic function.
2. Define an elliptic operator L(y).
3. Expand ∇yᵀA(y) and v(y, t) (to be defined) in eigenfunctions of L(y).
4. Compute the time dependent coefficients vj(t) in the above eigenfunction expansion.
5. Compute the integral of f ε to get F̂.
6. Compute the solution to a cell problem and give the final estimate.
Step 1: Express the solution to (4.79) as

    uε(t, x) = pᵀx + v(x/ε, t).    (4.80)

We insert this into (4.79) to get a PDE for v,

    vtt = (1/ε) ∇yᵀA(y) p + (1/ε²) ∇yᵀA(y) ∇y v,
    v(y, 0) = 0,  vt(y, 0) = 0,    (4.81)

where y = x/ε. Since A is Y-periodic, so is v, and we can solve (4.81) as a Y-periodic
problem.
Step 2: We define the linear operator L(y) := −∇yᵀA(y)∇y on Y with periodic boundary
conditions. Denote by wj(y) the eigenfunctions and by λj the corresponding (non-negative)
eigenvalues of L. Since L is uniformly elliptic, standard theory for periodic elliptic
operators tells us that all eigenvalues are strictly positive and bounded away from zero,
except for the single zero eigenvalue,

    0 = λ0 < λ1 ≤ λ2 ≤ ···,    (4.82)

and that the wj ∈ C∞ form an orthonormal basis for L²per(Y). Note also that w0 = |Y|⁻¹ is a
constant function.
Step 3: We express ∇yᵀA(y) and v(y, t) in eigenfunctions of L:

    ∇yᵀA(y) = Σ_{j=1}^∞ ajᵀ wj(y)   and   v(y, t) = Σ_{j=0}^∞ vj(t) wj(y).    (4.83)

Note that here the aj are column vectors, and as in the one dimensional case we have a0 = 0
since the mean value of ∇yᵀA(y) is zero,

    a0 = ∫_Y ∇yᵀA(y) w0(y) dy = (1/|Y|) ∫_Y ∇yᵀA(y) dy = 0.    (4.84)
Step 4: We plug the eigenfunction expansions (4.83) into (4.81) and find that

    Σ_{j=0}^∞ vj'' wj = (pᵀ/ε) Σ_{j=1}^∞ aj wj − (1/ε²) Σ_{j=0}^∞ vj L wj
                      = Σ_{j=1}^∞ (pᵀaj/ε) wj − Σ_{j=1}^∞ (λj/ε²) vj wj.    (4.85)

By collecting terms in wj we get

    vj'' − pᵀaj/ε + (λj/ε²) vj = 0.    (4.86)
This is a system of ODEs of the form

    y'' + αy = β,    (4.87)

which has solutions of the form (α > 0)

    y(t) = A e^{it√α} + B e^{−it√α} + β/α.    (4.88)

Since all λj > 0 (j > 0), the vj functions in our problem have the form

    vj(t) = Aj e^{it√λj/ε} + Bj e^{−it√λj/ε} + rj,   rj = ε pᵀaj / λj,    (4.89)

and the special coefficient v0 is given by

    v0(t) = (pᵀa0/(2ε)) t² + Ct + D = Ct + D,   since a0 = 0.    (4.90)
By plugging the general solution (4.89) into the initial conditions of (4.81), we can
formulate equations for Aj and Bj (j > 0):

    v(y, 0) = 0 ⇒ Σ_{j=0}^∞ vj(0) wj = 0 ⇒ vj(0) = 0 ⇒ Aj + Bj + rj = 0;    (4.91)

    vt(y, 0) = 0 ⇒ Σ_{j=0}^∞ vj'(0) wj = 0 ⇒ vj'(0) = 0 ⇒ (i√λj/ε) Aj − (i√λj/ε) Bj = 0.    (4.92)

Similarly, for v0(t),

    v0(0) = 0 ⇒ D = 0,   and   v0'(0) = 0 ⇒ C = 0,    (4.93)

thus v0(t) ≡ 0. We solve for Aj and Bj and get
    Aj = Bj = −rj/2 = −ε pᵀaj / (2λj),   j = 1, 2, …    (4.94)

All in all, the vj(t) coefficients in explicit form are

    v0(t) = 0,
    vj(t) = −(ε pᵀaj/(2λj)) (e^{it√λj/ε} + e^{−it√λj/ε}) + ε pᵀaj/λj
          = (ε pᵀaj/λj) (1 − cos(t√λj/ε)),   j = 1, 2, …    (4.95)

The solution to our problem (4.81) can then be expressed as

    v(y, t) = ε pᵀ Σ_{j=1}^∞ (aj/λj) (1 − cos(t√λj/ε)) wj(y).    (4.96)
Step 5: Now plug the expression (4.96) into the expression (4.80):

    (f ε)ᵀ = (∇x uε)ᵀ A(x/ε) = pᵀ (I + Σ_{j=1}^∞ (aj/λj) (1 − cos(t√λj/ε)) ∇yᵀwj(x/ε)) A(x/ε).    (4.97)
We write down and analyze the function f ε in two parts, (f ε)ᵀ = pᵀ(Λ1 + Λ2), where

    Λ1(x/ε) = (I + Σ_{j=1}^∞ (aj/λj) ∇yᵀwj(x/ε)) A(x/ε),
    Λ2(x/ε, t) = −Σ_{j=1}^∞ (aj/λj) cos(t√λj/ε) ∇yᵀwj(x/ε) A(x/ε).    (4.98)
Step 6a: First we show that the averaged Λ1 equals Ā. To do that we need to use the
so-called cell problem (or corrector problem, see [14, 4.5.4]),

    L(y)χ = −∇yᵀA,   in Y,
    χ Y-periodic.    (4.99)

We rewrite the cell problem (4.99) using an eigenfunction expansion,

    Σ_{j=0}^∞ λj χj wj = −Σ_{j=1}^∞ aj wj ⇒ χj = −aj/λj,   j = 1, 2, …,    (4.100)

where the χj are column vectors with the coefficients of χ in the eigenfunction expansion.
For the term Λ1 we now make good use of the eigenfunction expansion of the cell solution χ:

    ∬ Kτ(t)Kη(x)Λ1 dx dt = ∫ Kη(x) (I + Σ_{j=1}^∞ (aj/λj) ∇yᵀwj(x/ε)) A(x/ε) dx
                         = ∫ Kη(x) (I − ∇yᵀχ(x/ε)) A(x/ε) dx
                         = ∫ Kη(x) (A(x/ε) − ∇yᵀχ(x/ε) A(x/ε)) dx
                         = ∫ Kη(x) (A(x/ε) − A(x/ε) ∇y χ(x/ε)) dx
                         = Ā + O((ε/η)^q),    (4.101)

where we used Lemma 1 in each coordinate direction.
Step 6b: Now we should show that

    ∬ Kτ(t)Kη(x − x0)Λ2 dx dt → 0,   as τ/ε → ∞.    (4.102)

We apply Lemma 1 to obtain

    |∫ Kτ(t) cos(t√λj/ε) dt| ≤ C0 (2πε/(√λj τ))^q = C (1/λj^{q/2}) (ε/τ)^q.    (4.103)

Let bj and the column vector g(y) be defined as

    bj = ∫ Kτ(t) cos(t√λj/ε) dt,   g(y) = Σ_{j=1}^∞ bj χj wj(y),    (4.104)

where we again used the solution to the cell problem in the formulation of g(y). We then
express ∬ Kτ Kη Λ2 dx dt using g, followed by a change of variables:
    ∬ Kτ(t)Kη(x − x0)Λ2 dx dt = −∫ Kη(x) ∇yᵀg(x/ε) A(x/ε) dx
                              = −∫ K(x) ∇yᵀg((ηx + x0)/ε) A((ηx + x0)/ε) dx
                              = −(ε/η) ∫ K(x) A((ηx + x0)/ε) ∇xᵀg((ηx + x0)/ε) dx.

By integration by parts, using K(ek) = K(−ek) = 0 (k = 1, 2, …, d), together with the
Cauchy–Schwarz inequality, we obtain

    |(ε/η) ∫_{[−1,1]^d} ∇xᵀ(K(x) A((ηx + x0)/ε)) g((ηx + x0)/ε) dx|
    ≤ ( ∫_{[−1,1]^d} (∇xᵀ((ε/η) K(x) A((ηx + x0)/ε)))² dx ∫_{[−1,1]^d} g²((ηx + x0)/ε) dx )^{1/2},    (4.105)

where the factor ∇xᵀ((ε/η) K(x) A((ηx + x0)/ε)) is an O(1) column vector,
which is bounded by C ‖g‖_{(L²per)^d} where C is independent of ε and η. Finally we need to
show that ‖g‖ → 0. This is done by observing that

    ‖g‖²_{(L²per)^d} = Σ_{j=1}^∞ bj² χj² ≤ b²max Σ_{j=1}^∞ χj² = b²max ‖χ‖²_{(L²per)^d},    (4.106)

where |bmax| is bounded by

    |bmax| ≤ C0 (1/λ1^{q/2}) (ε/τ)^q,    (4.107)

following the computations in (4.103). Then finally, we add our results from the
calculations above and get

    F̂(x0, p)ᵀ = ∬ Kτ(t)Kη(x − x0) (f ε)ᵀ dx dt
              = pᵀ ∬ Kτ(t)Kη(x − x0) (Λ1(x/ε) + Λ2(x/ε, t)) dx dt
              = pᵀ (Ā + O((ε/η)^q)).    (4.108)

This proves the Theorem.
Chapter 5
Numerical results
In this chapter we show numerical results from applying the HMM process to various
problems in one, two and three dimensions. The notation in the experiments in the
d-dimensional setting (d = 1, 2, 3) is the following: we let Y = [0, 1]^d and ε be the micro
problem scale. We denote by H and K the macro grid size and time step respectively, and
for the micro scale we denote by h and k the grid size and time step respectively.
5.1 1D results, finite time

The one-dimensional examples have the general form

    uεtt = ∂x(Aε uεx),   Y × {0 ≤ t ≤ T},
    uε = f,  uεt = 0,    Y × {t = 0},    (5.1)

where Y = [0, 1]. We show some dynamics in Figure 5.1, where we solved (5.1) for the Aε
and f given in Example one below. The homogenized solution to (5.1) satisfies an equation
of the form

    ūtt = ∂x(Ā ūx),   Y × {0 ≤ t ≤ 1},
    ū = f,  ūt = 0,   Y × {t = 0},    (5.2)

where Ā is given by the harmonic average of A(x, y) over one Y-period,

    Ā(x) = ( ∫₀¹ dy / A(x, y) )⁻¹,    (5.3)

with x held fixed.
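The harmonic average (5.3) is easy to evaluate numerically; a minimal sketch (the function name harmonic_average is ours), using the midpoint rule over one period:

```python
import math

def harmonic_average(a, n=4096):
    """(integral of 1/a over one period)^(-1), midpoint rule, as in (5.3)."""
    s = sum(1.0 / a((j + 0.5) / n) for j in range(n)) / n
    return 1.0 / s

# Example: a(y) = 1.1 + sin(2 pi y) gives the value sqrt(0.21) used below.
a = lambda y: 1.1 + math.sin(2 * math.pi * y)
print(harmonic_average(a))
```

Because the integrand is smooth and periodic, the midpoint rule converges spectrally here, so even a modest n reproduces the analytic value to machine precision.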
[Figure 5.1: The dynamics of the problem (5.1) in six snapshots at ti = i/(5√Ā),
i = 0, 1, …, 5 (T = 0.000000, 0.295444, 0.590888, 0.886332, 1.181776, 1.477220). We
observe how the initial pulse separates into one left-going and one right-going pulse.
The effect of the periodic boundary condition can be seen as the waves pass each other
at the boundaries between frames 4 and 5.]
Example one

In the first wave propagation problem we choose Aε and f as

    A(x, y) = 11/10 + sin 2πy,   Aε = A(x, x/ε),
    f(x) = exp(−(x − x0)²/σ²),   x0 = 1/2,  σ = 1/10.    (5.4)

We can compute Ā analytically (see Appendix A):

    Ā = √(21/100) ≈ 0.458257569495584…    (5.5)

This also gives us the exact solution to (5.2),

    ū(x, t) = ½ (fper(x + √Ā t) + fper(x − √Ā t)),    (5.6)

where fper is the odd periodic continuation of f onto R, defined explicitly as

    fper(x) = −fper(−x),    x < 0,
    fper(x) = f(x − ⌊x⌋),   x ≥ 0.    (5.7)

We solve the problem with a fully resolved discretization or direct numerical simulation
(DNS), a discretized homogenized equation (HOM), and our HMM method (HMM). We have used
ε = 0.01 and η = 10ε. In Figure 5.2 we show a snapshot of the solutions of these methods
after time T = 1. We use a kernel K that is a polynomial in K^{5,6}; that is, K has 5
vanishing moments and is 6 times continuously differentiable. We use the same kernel in
both time and space, where Kη and Kτ are derived from K([−1, 1]) and K([0, 1])
respectively. Here τ = η and ymax = η + τ√Amax. For this problem we also studied the
accuracy of F̃ analyzed in Chapter 4: for a fixed micro problem, we plot the error |F̃ − F̄|
for different box sizes and kernels. See Figure 5.3.
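The exact solution (5.6)–(5.7) is straightforward to implement and can serve as the reference when measuring errors; a sketch with our own names:

```python
import math

x0, sigma = 0.5, 0.1
A_bar = math.sqrt(0.21)          # homogenized coefficient (5.5)

def f(x):
    return math.exp(-(x - x0) ** 2 / sigma ** 2)

def f_per(x):
    """Continuation (5.7): odd reflection for x < 0, 1-periodic for x >= 0."""
    if x < 0:
        return -f_per(-x)
    return f(x - math.floor(x))

def u_bar(x, t):
    """Exact homogenized solution (5.6): two pulses moving with speed sqrt(A_bar)."""
    c = math.sqrt(A_bar)
    return 0.5 * (f_per(x + c * t) + f_per(x - c * t))

print(u_bar(0.5, 0.0))   # equals f(0.5) = 1
```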
Example two

We now consider a variation of (5.1) where Aε is defined as

    Aε(x, y) = 1.1 + ½ (cos 2πx + sin 2πy).    (5.8)

The homogenized operator Ā will not be constant but a function with explicit x dependence.
We can compute Ā(x) analytically (see Appendix A) to be

    Ā(x) = √(α(x)² − β²),   α(x) = 1.1 + ½ cos 2πx,   β = ½.    (5.9)

For this experiment we use ε = 0.01, K = 2H and H = 3.33 · 10⁻³. For the micro problem we
use 64h = k and 64h = ε. The kernel is from K^{5,6}. The small H is chosen to lessen the
effect of numerical dispersion. We show results from T = 1 in Figure 5.4.
5.2 2D results

In this section we present the numerical results for a two dimensional wave propagation
problem over the unit square Y = [0, 1] × [0, 1],

    uεtt = ∇xᵀ(Aε ∇uε)   in Y × {0 < t ≤ T},
    uε = f,  uεt = 0     at Y × {t = 0}.    (5.10)

We study the solution over finite time with T = 0.25. The scale parameter ε is set to
ε = 0.05, somewhat larger than in the one dimensional case. The homogenized problem is of
the form

    ūtt = ∇xᵀ(Ā ∇ū)   in Y × {0 < t ≤ T},
    ū = f,  ūt = 0    at Y × {t = 0}.    (5.11)
Example three

We let Aε(x) be defined by the diagonal matrix

    Aε(x) = diag(aε(x), aε(x)),
    aε(x) = A(x, x/ε),   a(x, y) = 11/10 + sin 2πy1,    (5.12)

and, as in 1D, the initial data f is defined as a Gaussian,

    f(x) = exp(−‖x − x0‖²/σ²),   x0 = [1/2 1/2]ᵀ,  σ = 1/10.    (5.13)

We use the exponential kernel Kexp ∈ K^{1,∞}; that is, K has 1 vanishing moment and is
infinitely many times continuously differentiable. The macro scheme uses H = 0.05 and
K = 0.25H. We show the numerical results in Figures 5.5, 5.6 and 5.7.
Example four

We let Aε(x) be defined by the diagonal matrix

    Aε(x) = diag(aε(x), 1),
    aε(x) = A(x, x/ε),   a(x, y) = 11/10 + ½ (sin 2πx1 + sin 2πy1),    (5.14)

and the homogenized equation will have

    Ā(x) = diag(ā(x), 1),
    ā(x) = √(α(x)² − β²),   α(x) = 1.1 + 0.5 sin 2πx1,   β = 0.5.    (5.15)

We show the numerical results in Figures 5.8, 5.9 and 5.10.
5.3 3D results

Here we present numerical results for a wave propagation problem in three dimensions in a
locally periodic medium. Let Y = [0, 1]³ and consider the problem

    uεtt = ∇ · (Aε ∇uε),   in Y × {t > 0},
    uε = f,  uεt = 0,      in Y × {t = 0}.    (5.16)

The homogenized operator Ā will be

    Ā = diag(α, 1, 1),   α = √0.21,    (5.17)

in the homogenized PDE

    ūtt = ∇ · (Ā ∇ū),   in Y × {t > 0},
    ū = f,  ūt = 0,     in Y × {t = 0}.    (5.18)

We compute the solution to the micro problems using a partition of uε into a linear
function p · x and a periodic function vε,

    uε(x, t) = p · x + vε(x, t),    (5.19)

where vε has initial data equal to zero.

Example five

In this three dimensional problem we consider a problem of the form where Aε(x) is a
diagonal matrix,

    Aε(x) = diag(aε(x), aε(x), aε(x)),
    aε(x) = A(x, x/ε),   a(x, y) = 1.1 + sin 2πy1,    (5.20)

where the initial data f is defined as a Gaussian,

    f(x) = exp(−‖x − x0‖₂²/σ²),   x0 = [1/2 1/2 1/2]ᵀ,  σ = 1/2.    (5.21)

In this experiment we have used ε = 0.1. The homogenized simulation uses H = 0.1,
T = 0.25, K = 0.25H. The HMM solver uses T = 0.25, H = 1/320, K = 0.25H on the macro
solver. The micro solver uses η = ε, τ = 5ε, h = 64ε, k = 0.3h.
5.4 1D long time

We take the same problem as in Example one but now for long time, T = O(ε⁻²). The
solution obtained from regular homogenization should not be trusted beyond T = O(1). As
described in Section 2.2, the effective equation for the long time solution of (5.1) is
given by [26]:

    utt = Ā uxx + βε² uxxxx,    (5.22)

where the coefficient β is defined in (2.13), (2.14).

For comparison we can solve (5.22) using an accurate spectral method. Taking the Fourier
transform of (5.22) and assuming the solution is 1-periodic, we get

    ûtt + (ω²Ā − ω⁴ε²β) û = 0,   when t > 0,
    û = f̂,  ût = ĝ,              at t = 0.    (5.23)

The functions û, f̂ and ĝ are the Fourier transforms of their respective symbols. The
solution can now be found as the solution to an ODE, parametrized by ω,

    û(ω, t) = C(ω) exp(−i√γ t) + D(ω) exp(i√γ t),   γ = ω²Ā − ω⁴ε²β.    (5.24)

The unknowns C(ω) and D(ω) are found by solving the system of equations that appears when
plugging the solution into the initial conditions,

    [  1      1  ] [C]   [f̂]
    [−i√γ   i√γ ] [D] = [ĝ]   (parametrized by ω).    (5.25)

In the case γ = 0 we use the solution

    û(ω) = C(ω) + t D(ω),   C(ω) = f̂(ω),  D(ω) = ĝ(ω).    (5.26)

We here use a cut-off frequency of |ω| ∼ ε⁻¹.
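The procedure (5.23)–(5.26) amounts to a few lines with the FFT. Below is a sketch under the stated assumptions (1-periodic data, cut-off at |ω| ∼ ε⁻¹); the function name effective_solve is ours, and C(ω), D(ω) are folded into the equivalent cosine/sine form:

```python
import numpy as np

def effective_solve(f_vals, g_vals, A_bar, beta, eps, T):
    """Solve u_tt = A_bar u_xx + beta eps^2 u_xxxx, 1-periodic, by (5.23)-(5.26).

    f_vals, g_vals: samples of the initial data on a uniform grid of [0, 1)."""
    n = len(f_vals)
    omega = 2 * np.pi * np.fft.fftfreq(n, d=1.0 / n)      # angular frequencies
    fh, gh = np.fft.fft(f_vals), np.fft.fft(g_vals)
    gamma = omega**2 * A_bar - omega**4 * eps**2 * beta
    uh = np.zeros(n, dtype=complex)
    osc = (np.abs(omega) < 1.0 / eps) & (gamma > 0)       # kept oscillatory modes
    r = np.sqrt(gamma[osc])
    # C, D from (5.25) combined into real form: cos + sin / sqrt(gamma)
    uh[osc] = fh[osc] * np.cos(r * T) + gh[osc] * np.sin(r * T) / r
    uh[omega == 0] = fh[omega == 0] + T * gh[omega == 0]  # the gamma = 0 case (5.26)
    return np.fft.ifft(uh).real

# Sanity check: with beta = 0, A_bar = 1 this is the plain wave equation with
# speed 1, so after one full period the solution returns to the initial data.
x = np.arange(256) / 256
u0 = np.sin(2 * np.pi * x)
u1 = effective_solve(u0, np.zeros_like(u0), A_bar=1.0, beta=0.0, eps=1e-4, T=1.0)
```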
Example six

For the Aε function defined in (5.4) we can derive that Ā = √(21/100). The expression for
Ω3 is a long expression involving nested integrals (cf. (2.14) and (33) in [26]). We used
numerical quadrature in Maple 12 to compute Ω3 = 1.886504502 with 10 significant digits.
We thus have numerical values for the two parameters Ā and β in the effective
equation (5.22):

    Ā = 0.458257569495584,
    β = 0.01078280318   (10 significant digits).    (5.27)

We will use ε = 0.01 and T = 91.59 ≈ 62/c, where c = √Ā is the wave speed of the
homogenized solution. The macro discretization is chosen as H = 1/300, 2K = H; this fine
grid is used to avoid numerical dispersion in the solution. In the micro scheme we use
k = 128h, ε = 64h. The kernel K ∈ K^{5,6} with η = τ = 10ε.
[Figure 5.2 (T = 1; legend: HOM, DNS, HMM): A snapshot of the superimposed solutions to
(5.1) together with a zoomed section around the top.]
[Figure 5.3 (legend: p=1, q=1; p=3, q=3; p=5, q=5; p=7, q=7; p=1, q=∞): Convergence of
the micro problem (5.1) to √0.21 for 0 ≤ η ≤ 10ε.]
[Figure 5.4 (T = 1; legend: DNS, HMM): A snapshot of two superimposed solutions to (5.8)
together with a zoomed section around the top.]
[Figure 5.5: Full numerical simulation of Equation (5.10) (Aε has only a fast scale).]
[Figure 5.6: Full numerical simulation of the homogenized Equation (5.11) (Ā is constant).]
[Figure 5.7: HMM approach on Equation (5.10) (Aε has only fast scales).]
[Figure 5.8: Full numerical simulation of Equation (5.10) with Aε defined by (5.14).]
[Figure 5.9: Full numerical simulation of the homogenized Equation (5.11) with Ā defined
by (5.15).]
[Figure 5.10: HMM approach on Equation (5.10) with Aε defined by (5.14).]
[Figure 5.11: Three dimensional solution using direct discretization of Equation (5.16).]
[Figure 5.12: Three dimensional solution using direct discretization of the homogenized
Equation (5.18).]
[Figure 5.13: Three dimensional solution of Equation (5.16) with the HMM approach.]
[Figure 5.14 (legend: EFF w/SPC, DNS): The effective equation (5.22) solved with a
spectral method compared to a DNS solution of (5.1).]
[Figure 5.15 (legend: EFF w/FDM, DNS): The effective equation (5.22) solved with a finite
difference method compared to a DNS solution of (5.1).]
[Figure 5.16 (legend: HMM, DNS): Equation (5.1) solved with HMM compared to a fully
discretized solution.]
Chapter 6
Conclusions
We have developed and analyzed numerical methods for multi-scale wave equations with
oscillatory coefficients. The methods are based on the framework of the heterogeneous
multi-scale method (HMM) and have substantially lower computational complexity than
standard discretization algorithms. Convergence proofs for finite time approximation are
presented in the case of periodic coefficients in multiple dimensions. Numerical experiments in one, two and three spatial dimensions show the accuracy and efficiency of the
new techniques. The standard analytic homogenization results that are used in the proofs
are not valid for very long time integration. However, the numerical algorithm still works
and accurately captures relevant dispersive effects. Analysis based on eigenfunction expansions and numerical results are presented.
Appendix A

Analytic computation of harmonic average

Theorem 5. Define f(x) = α + β sin(2π y1⁻¹ x) where α > β and β > 0. The harmonic average
M(f⁻¹)⁻¹ is then given by

    M(f⁻¹)⁻¹ = ( (1/|Y|) ∫₀^{y1} dx / f(x) )⁻¹ = √(α² − β²),   Y = [0, y1].    (A.1)
Proof. We start with a transformation to the interval [0, 2π] by writing

    ∫₀^{y1} dy / (α + β sin 2π y1⁻¹ y) = (|Y|/(2πβ)) ∫₀^{2π} ds / (γ + sin s),
    y = (y1/(2π)) s,   γ = α/β > 1.    (A.2)

This is a standard integral in complex analysis that can be computed as a contour integral
in the complex plane. We substitute z = exp(ix) and apply the identity

    sin x = (1/(2i)) (exp(ix) − exp(−ix)) = (z − z⁻¹)/(2i).    (A.3)

We formulate the contour integral

    ∫₀^{2π} ds / (γ + sin s) = ∮_{∂C} dz / (iz (γ + (z − z⁻¹)/(2i))) = ∮_{∂C} 2 dz / ((z − z1)(z − z2)),    (A.4)

where z1 = −iγ + √(1 − γ²) and z2 = −iγ − √(1 − γ²). We have that γ > 1, so only z1 will be
inside the unit circle C. Therefore we can apply the Residue Theorem [28] and compute

    ∮_{∂C} 2 dz / ((z − z1)(z − z2)) = 2πi Res(2(z − z1)⁻¹(z − z2)⁻¹; z1) = 2π/√(γ² − 1).    (A.5)
So, we have that M(f⁻¹)⁻¹ is

    M(f⁻¹)⁻¹ = ( (1/|Y|) · (|Y|/(2πβ)) · (2π/√(γ² − 1)) )⁻¹ = β √(γ² − 1) = √(α² − β²).    (A.6)
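Theorem 5 can be cross-checked numerically for a few parameter pairs; the sketch below (the name harmonic_mean is ours) compares a midpoint-rule quadrature of the integral in (A.1), taking y1 = 1, against the closed form:

```python
import math

def harmonic_mean(alpha, beta, n=4096):
    """Midpoint-rule approximation of M(f^-1)^-1 from (A.1), with y1 = 1."""
    s = sum(1.0 / (alpha + beta * math.sin(2 * math.pi * (j + 0.5) / n))
            for j in range(n)) / n
    return 1.0 / s

for alpha, beta in [(1.1, 1.0), (1.6, 0.5), (2.0, 0.3)]:
    exact = math.sqrt(alpha**2 - beta**2)
    print(alpha, beta, harmonic_mean(alpha, beta) - exact)
```

The printed differences are at the level of rounding error, even for (1.1, 1.0) where the integrand nearly blows up.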
Bibliography

[1] Assyr Abdulle and Weinan E. Finite difference heterogeneous multi-scale method for homogenization problems. Journal of Computational Physics, 191(1):18–39, 2003.

[2] G. Allaire, A. Braides, G. Buttazzo, A. Defranceschi, and L. Gibiansky. School on homogenization. SISSA Ref. 140/73/M, 1993.

[3] Alain Bensoussan, Jacques-Louis Lions, and George Papanicolaou. Asymptotic Analysis in Periodic Structures. North-Holland Pub. Co., 1978. ISBN 0444851720.

[4] Doina Cioranescu and Patrizia Donato. An Introduction to Homogenization. Number 17 in Oxford Lecture Series in Mathematics and its Applications. Oxford University Press Inc., 1999.

[5] Weinan E and Bjorn Engquist. The Heterogeneous Multiscale Methods. Commun. Math. Sci., pages 87–133, 2003.

[6] Weinan E, Bjorn Engquist, and Zhongyi Huang. Heterogeneous Multiscale Method: A general methodology for multiscale modeling. Phys. Rev. B, 67(9):092101, Mar 2003.

[7] Weinan E, Bjorn Engquist, Xiantao Li, Weiqing Ren, and Eric Vanden-Eijnden. Heterogeneous multiscale methods: A review. Communications in Computational Physics, 2(3):367–450, 2007.

[8] Weinan E, Di Liu, and Eric Vanden-Eijnden. Analysis of multiscale methods for stochastic differential equations. Communications on Pure and Applied Mathematics, 58:1544–1585, 2005.

[9] Weinan E, Pingbing Ming, and Pingwen Zhang. Analysis of the heterogeneous multiscale method for elliptic homogenization problems. Journal of the American Mathematical Society, 18(1):121–156, 2004.

[10] Bjorn Engquist, Henrik Holst, and Olof Runborg. Multiscale methods for the wave equation. In Sixth International Congress on Industrial Applied Mathematics (ICIAM07) and GAMM Annual Meeting, volume 7. Wiley, 2007.

[11] Bjorn Engquist, Henrik Holst, and Olof Runborg. Analysis of HMM for wave propagation problems over long time. Work in progress, 2009.

[12] Björn Engquist and Panagiotis E. Souganidis. Asymptotic and numerical homogenization. Acta Numerica, 17:147–190, 2008.

[13] Bjorn Engquist and Yen-Hsi Tsai. Heterogeneous multiscale methods for stiff ordinary differential equations. Mathematics of Computation, 74(252):1707–1742, 2005.

[14] Lawrence C. Evans. Partial Differential Equations. American Mathematical Society, 1998. ISBN 0821807722.

[15] V. V. Jikov, S. M. Kozlov, and O. A. Oleinik. Homogenization of Differential Operators and Integral Functionals. Springer, 1991.

[16] I. G. Kevrekidis, C. W. Gear, J. Hyman, P. G. Kevrekidis, and O. Runborg. Equation-free, coarse-grained multiscale computation: Enabling microscopic simulators to perform system-level tasks. Comm. Math. Sci., pages 715–762, 2003.

[17] X. Li and W. E. Multiscale modelling of the dynamics of solids at finite temperature. J. Mech. Phys. Solids, 53:1650–1685, 2005.

[18] V. A. Marchenko and E. Y. Khruslov. Homogenization of Partial Differential Equations. Progress in Mathematical Physics, 46, 2006.

[19] Gianni Dal Maso. An Introduction to Γ-Convergence. Birkhäuser Boston, 1993. ISBN 081763679X.

[20] Ana-Maria Matache and Christoph Schwab. Generalized p-FEM in homogenization. Applied Numerical Mathematics, 33, 2000.

[21] Pingbing Ming and Xingye Yue. Numerical methods for multiscale elliptic problems. Journal of Computational Physics, 214(1):421–445, 2005.

[22] Gabriel Nguetseng. A general convergence result for a functional related to the theory of homogenization. SIAM Journal on Mathematical Analysis, 20(3):608–623, 1989. ISSN 0036-1410.

[23] Grigorios A. Pavliotis and Andrew M. Stuart. Multiscale Methods: Averaging and Homogenization. Springer, 2007.

[24] Kaare Brandt Petersen and Michael Syskind Pedersen. The Matrix Cookbook. November 14, 2008.

[25] Giovanni Samaey. Patch Dynamics: Macroscopic Simulation of Multiscale Systems. PhD thesis, Katholieke Universiteit Leuven, 2006.

[26] Fadil Santosa and William W. Symes. A dispersive effective medium for wave propagation in periodic composites. SIAM Journal on Applied Mathematics, 51(4):984–1005, 1991. ISSN 0036-1399.

[27] John C. Strikwerda. Finite Difference Schemes and Partial Differential Equations (2nd ed.). SIAM, 2004. ISBN 0898715679.

[28] David A. Wunsch. Complex Variables with Applications (2nd ed.). Addison-Wesley Publishing Company, 1993. ISBN 0-201-84557-1.