Bachelor Thesis
Numerical methods for the implementation
of the Cahn-Hilliard equation in one
dimension and a dynamic boundary
condition in two dimensions
Author:
Zimo Sibbing
Supervisors:
Prof.dr.ir. C.R. Kleijn
Dr.ir. F.J. Vermolen
M.Sc. H.G. Pimpalgaonkar
28/01/2015
Delft University of Technology
Faculty of Applied Sciences
Department of Chemical Engineering
Transport Phenomena group
Abstract
This project can be divided into two parts. The goal of the first part is to numerically implement the
Cahn-Hilliard equation in one dimension both explicitly and implicitly. This will be done using Matlab.
The goal of the second part is to validate the coupled Cahn-Hilliard-Navier-Stokes equation and the
dynamic boundary condition for moving contact lines of (Carlson et al., 2011, p. 9) by considering a
two-dimensional spreading droplet case. This will be done using the CFD software OpenFOAM.
In Chapter 1, the theory of positive and negative diffusion, including the normal diffusion equation and the Cahn-Hilliard equation, is discussed. Some background is given regarding the thermodynamics of the Cahn-Hilliard equation and its steady-state solution. After that, the theory of the coupled Cahn-Hilliard-Navier-Stokes equation, the dynamic boundary condition for moving contact lines and the case which is implemented in OpenFOAM are discussed.
In Chapters 2 and 3, the diffusion equation and the Cahn-Hilliard equation are implemented in one
dimension, using the Euler Forward scheme. In implementing the Cahn-Hilliard equation, two different
discretizations are used, of which only one gives the desired results. Next, an extensive stability analysis is
done, using a linearization of the Cahn-Hilliard equation as well as numerical experiments. The stability
condition is increasingly severe with increasing interface width. Regarding the results of the evolving interface, a qualitative analysis is done which discusses three subjects: the deviation of the solutions from the steady-state solution, the interface width for different parameters and grid sizes, and the interface overshoot, which is an unphysical feature.
In Chapter 4, two semi-implicit methods and one implicit iterative method are discussed. The implementations of the two semi-implicit methods, Implicit-Explicit (ImEx) and Modified Furihata, are successful and their stability conditions are better than the stability condition of the Euler Forward scheme for most interface widths. The results regarding the evolving interface are nearly identical to the results of the Euler Forward scheme; therefore, the qualitative analysis is also similar. The implicit iterative method, which involves the use of the Gâteaux derivative, has not been successfully implemented, even though two different discretizations are used. The results regarding the evolving interface behave in a positively diffusive way, which results in an interface that flattens with time.
Scheme                    | Explicit | Semi-implicit | Implicit
Euler Forward             |    X     |               |
Implicit-Explicit (ImEx)  |          |       X       |
Modified Furihata         |          |       X       |
Gâteaux derivative        |          |               |    X
Table 0.1: An overview of all used discretizations and whether they are explicit, semi-implicit or implicit.
In Chapter 5, the coupled Cahn-Hilliard-Navier-Stokes equation and a dynamic boundary condition
for moving contact lines are used to model a spreading droplet on a flat surface. The implemented model
is validated using different cases in which the steady-state contact angle and the friction factor of the surface vary. Next, parametric studies are done regarding the interface width, the surface tension and
the ratio of the surface tension and the friction factor. The conclusions are that the modeled system
differs too much from the system in literature to make an absolute comparison but, qualitatively, the
model behaves as expected.
List of symbols
Symbol | Description | Unit
a_ij | Matrix coefficient | (-)
A, B1, B2, K | Matrices | (-)
c(x, t) | Concentration | (mol m−3)
c_steady | Steady-state concentration | (mol m−3)
c_i^n | Spatially and temporally discretized concentration | (-)
c_p^n | Discretized concentration in iteration p | (-)
C | Thermal diffusivity | (m2 s−1)
D | Diffusivity | (m2 s−1)
Dw | Friction factor | (kg m−1 s−1)
f | OpenFOAM boundary condition constant | (-)
F | Scaled Gibbs free energy | (-)
F_liquid−gas | Force along liquid-gas surface | (N)
F_liquid−solid | Force along liquid-solid surface | (N)
F_s | Force due to surface tension | (N)
F_solid−gas | Force along solid-gas surface | (N)
g | Gibbs free energy density | (J m−3)
g | Gravity constant | (m s−2)
G | OpenFOAM boundary condition constant | (-)
G_k^n | Discretized chemical potential | (-)
h | Enthalpy density | (J m−3)
h | Discretized time step size | (-)
H | Enthalpy | (J)
I | Identity matrix | (-)
J | Concentration diffusion flux | (mol m−1 s−1)
L | Dimensionless domain length | (-)
L | Initial droplet radius | (m)
M | Mobility of particles | (m2 s−1)
N | Grid size | (-)
p | Pressure | (N m−2)
p, q, r | Discretization constants | (-)
r | Spreading radius | (m)
r_0 | Initial spreading radius | (m)
R | Dimensionless spreading radius | (-)
s | Entropy density | (J K−1 m−3)
s, y | Gâteaux derivative parameter | (-)
t | Time | (s)
T | Temperature | (K)
T_B, T_I | OpenFOAM boundary condition parameter | (-)
u | Velocity of fluid | (m s−1)
~u | Convection velocity | (m s−1)
u_i^j | Discretized diffusion temperature | (-)
U(x, t) | Diffusion temperature | (-)
U_k^n | Discretized concentration | (-)
x | One-dimensional location | (m)
Symbol | Description | Unit
α, β | OpenFOAM boundary condition constants | (-)
∆S | Entropy difference | (J K−1)
∆G | Gibbs free energy difference | (J)
∆p | Pressure difference | (N m−2)
∆t | Discretization time step size | (-)
∆t_Euler | Euler Forward stable time step | (-)
∆t_Exp | Experimentally stable time step | (-)
∆t_ImEx | ImEx stable time step | (-)
∆t_MF | Modified Furihata stable time step | (-)
∆t_Theory | Theoretically stable time step | (-)
∆x | Discretization grid step size | (-)
ε | Phenomenological parameter | (m)
θ | Dynamic contact angle | (rad)
θ_s | Steady-state contact angle | (rad)
κ | Phenomenological parameter | (m2)
λ | Eigenvalue of a matrix | (-)
µ | Chemical potential | (J mol−1)
µ_i^n | Discretized chemical potential | (-)
ρ | Density | (kg m−3)
ρ_l | Density of droplet liquid | (kg m−3)
σ | Surface tension | (J m−2)
τ | Characteristic time constant | (-)
τ | Dimensionless time | (-)
~τ | Stress tensor | (N m−2)
φ | Concentration | (mol m−3)
φ_B^n, φ_I^n | Discretized concentration | (-)
ψ | Tolerance in Gâteaux derivative algorithm | (-)
Contents
Abstract
List of symbols
1 Theory
  1.1 Negative diffusion and the Cahn-Hilliard equation
  1.2 Thermodynamics of the Cahn-Hilliard equation
  1.3 The Cahn-Hilliard equation in steady-state
  1.4 The coupled Cahn-Hilliard-Navier-Stokes equation
  1.5 Dynamic boundary condition of a spreading droplet
  1.6 Scope of the project
2 Numerical methods for the diffusion equation
  2.1 Introduction
  2.2 The explicit method of Euler Forward in 1D
  2.3 The implicit method of Euler Backward in 1D
3 Modeling the Cahn-Hilliard equation with the Euler Forward discretization
  3.1 Discretizations
  3.2 Boundary and initial conditions
  3.3 Experimental and theoretical stability conditions
  3.4 Results
  3.5 Quantitative analysis of the results
4 Implicit methods for modeling the Cahn-Hilliard equation
  4.1 The semi-implicit scheme of Implicit-Explicit (ImEx)
  4.2 The semi-implicit method of Modified Furihata
  4.3 The implicit iterative method involving the Gâteaux derivative
5 The spreading droplet
  5.1 Implementation in OpenFOAM
  5.2 Validation of the implementation
  5.3 Parametric study of the dynamic boundary condition
6 Recommendations
References
1 Theory
1.1 Negative diffusion and the Cahn-Hilliard equation
Consider a system consisting of two fluids that are uniformly mixed. Will the fluids stay mixed or will
they separate as in the case of oil and water, for example? A simple experiment might give the answer,
but so does the thermodynamics of diffusion. The physics of diffusion can be divided into positive and
negative diffusion. Positive diffusion takes place against the direction of the concentration gradient,
and diminishes the gradient in this way, while negative diffusion takes place in the same direction. The
equilibrium state for positive diffusion is a uniform mixture whereas for negative diffusion, the equilibrium
state is a two-phase system separated by a fluid interface. These two cases are considered separately: in one case the dominant transport mechanism is positive diffusion, in the other it is negative diffusion.
In the case with the positive diffusion only, Fick’s diffusion law is considered, in combination with
the continuity equation. Fick’s first law of diffusion is:
J~ = −D∇c;  (1.1)
where c is the concentration, D is the diffusivity and J~ is the concentration diffusion flux. The continuity equation is:
∂c/∂t + ∇ · J~ = 0;  (1.2)
The combination of the first law of diffusion and the continuity equation for fluids results in Fick's second law of diffusion:
∂c/∂t = D∇²c;  (1.3)
From Equation (1.3), one expects that a uniform homogeneous mixture is always in steady-state: in the absence of a spatial concentration gradient, no concentration change will occur with time. However,
in the case where phase separation occurs, the diffusion is against the concentration gradient, which is
not in accordance with Equation (1.3). Hence, the concentration gradient is not the driving force for
diffusion: another force must be at work. In the negative diffusion case, the driving force is the gradient
of the chemical potential, according to (Cahn and Hilliard, 1958). A new expression for Equation (1.1)
can be derived:
J~ = −M∇µ;  (1.4)
where M is the mobility of particles (analogous to the diffusivity D) and µ is the chemical potential. Using this new driving force with the continuity equation, an alternate and more generic form of Fick's second law is found. This equation is known as the Cahn-Hilliard equation (Cahn and Hilliard, 1958):
∂c/∂t = M∇²µ;  (1.5)
According to (Schroeder, 1999), the definition of the chemical potential follows from the Gibbs free
energy density as:
µ = ∂g/∂c;  (1.6)
where g is the Gibbs free energy density and c is the concentration. The energy density of this two-phase system, which is a function of the concentration, is also derived by Cahn and Hilliard in (Cahn and Hilliard, 1958):
F = ∫_V [f(c) + κ|∇c|²] dV;  (1.7)
Here, F is a scaled Gibbs free energy, f(c) is the scaled free energy density due to the contributions of both homogeneous phases, κ|∇c|² is the free energy density due to the concentration gradient at the interface, or the interface energy, and κ = ε² is a phenomenological parameter which scales with the
interface width. The function f (c) has the form of a double well potential, according to (Lee et al., 2014,
p. 218). A double well potential can be presented by Equation (1.8), where c is a scaled concentration.
In Figure 1.1, this double well potential is shown.
f(c) = (c² − 1)²/4;  (1.8)
Figure 1.1: The double well potential function f(c) = (c² − 1)²/4. The minima at c = −1 and c = 1 represent the two phases of a system with negative diffusion.
Substituting κ = ε² in Equation (1.7) results in the Gibbs free energy density for the two-phase system:
g(c) = f(c) + ε²|∇c|²;  (1.9)
The chemical potential is obtained by substituting Equation (1.9) in Equation (1.6), where the gradient term enters as the variational derivative of the interface energy in Equation (1.7):
µ = ∂f(c)/∂c − ε²∇²c = ∂/∂c [(c² − 1)²/4] − ε²∇²c = c³ − c − ε²∇²c;  (1.10)
With the new expression for the chemical potential in Equation (1.10), Equation (1.5) can be rewritten as:
∂c/∂t = M∇²µ = M∇²(c³ − c − ε²∇²c);  (1.11)
Equation (1.11) is the expression that will be referred to as the Cahn-Hilliard equation for the rest of this report. It is a fourth-order, nonlinear partial differential equation.
1.2 Thermodynamics of the Cahn-Hilliard equation
In Equation (1.9), it was stated that the Gibbs free energy density can be expressed as:
g(c) = f(c) + ε²|∇c|² = (c² − 1)²/4 + ε²|∇c|²;  (1.12)
According to (Schroeder, 1999), the Gibbs free energy density is also defined as:
g = h − sT;  (1.13)
where h is the enthalpy density, T the temperature and s the entropy density. According to the second law of thermodynamics, the entropy of a closed system always increases. Applying this to Equation (1.13) implies that the Gibbs free energy always decreases in a closed system:
(∆S)_{H,T} ≥ 0 ⇔ (∆G)_{H,T} ≤ 0;  (1.14)
By the same reasoning, any system moves to an equilibrium with a maximum entropy and a minimum Gibbs free energy. Minimizing the Gibbs free energy as in Equation (1.12) yields the steady-state solution for this system.
For Equation (1.12) to be minimized, both contributions need to be minimized. The homogeneous phases term, f(c) = (c² − 1)²/4, is minimal at c = 1 and c = −1, as is shown in Figure 1.1. These minima represent the Gibbs free energy of the two homogeneous phases of a mixture, separated by the fluid interface. The interface energy term, ε²|∇c|², is minimal when |∇c| is, which is the case when the interface is as smooth as possible. The actual concentration of the phases in the mixture cannot exceed the range c ∈ [−1, 1], since that would imply a concentration of over 100% for one of the phases, which is not in accordance with mass conservation.
1.3 The Cahn-Hilliard equation in steady-state
In the system that is used for the implementation of the one-dimensional Cahn-Hilliard equation, Equation (1.11) converges to a steady-state solution. The initial condition of the one-dimensional system is a centered step function, which implies that the two phases of the fluid are initially fully separated by a sharp interface. The boundary conditions are defined such that the gradients of the concentration and of the chemical potential are zero on both boundaries. This also implies mass conservation for both phases. The derivation of the steady-state solution is outside the scope of this report. The steady-state solution, according to (Lee et al., 2014, p. 221), is:
c_steady(x) = tanh(x/(√2 ε));  (1.15)
The initial condition and the steady-state solution are both shown in Figure 1.2. The two phases of the system are indeed at c = −1 and c = 1 and the interface is located at x = 0. The interface is relocated to x = 1/2 later (without loss of generality), but in Equation (1.15) and in Figure 1.2 it is kept at x = 0.
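As a brief check, added here for completeness and not part of the original derivation, substituting this profile into the one-dimensional form of Equation (1.10) shows that the chemical potential vanishes identically, confirming that it is indeed a steady state of Equation (1.11):

```latex
% With s = x/(\sqrt{2}\,\varepsilon) and c_{\mathrm{steady}}(x) = \tanh(s):
\varepsilon^{2}\,c_{\mathrm{steady}}''(x)
   = -\operatorname{sech}^{2}(s)\,\tanh(s)
   = \tanh^{3}(s) - \tanh(s)
   = c_{\mathrm{steady}}^{3} - c_{\mathrm{steady}},
\qquad\text{so}\qquad
\mu = c_{\mathrm{steady}}^{3} - c_{\mathrm{steady}} - \varepsilon^{2}\,c_{\mathrm{steady}}'' = 0 .
```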
Figure 1.2: Starting from a given initial condition (for example, a discretized step function), the solution of the Cahn-Hilliard equation evolves with time towards the steady-state solution.
1.4 The coupled Cahn-Hilliard-Navier-Stokes equation
In Sections 1.1 - 1.3, a system in which the dominant mode of mass transfer is diffusion only was considered. In this section, a system in which not only diffusion but also convection is dominant is considered. The system becomes dynamic; hence, apart from the Cahn-Hilliard equation, the Navier-Stokes equation must be used to describe the combination of diffusion and convection.
The continuity Equation (1.2) is rewritten, with a convection term, as:
∂c/∂t + ∇ · (~u c) = M∇²µ;  (1.16)
where (~u c) is the convection flux, ~u is the velocity field of the fluid and all other parameters are as described before. The Navier-Stokes equation that will be used in combination with the Cahn-Hilliard equation is:
ρ(∂~u/∂t + (~u · ∇)~u) = −∇p + ∇ · ~τ + F~s + ρg;  (1.17)
where F~s is the force due to surface tension, which is added because of the interfacial stresses at the fluid interface in the binary fluid. The interfacial stresses are treated here as locally active body forces. In Equation (1.17), ~τ is the stress tensor. The interfacial tension is given by (Kim, 2004):
F~s = c∇µ;  (1.18)
In Equation (1.18), the chemical potential from Equation (1.10), which is also used in the Cahn-Hilliard equation, can be substituted.
1.5 Dynamic boundary condition of a spreading droplet
In this section, a two-dimensional dynamic convection-diffusion case, in which the coupled Cahn-Hilliard-Navier-Stokes equation is applied, is considered.
An initially spherical fluid droplet sits on a flat surface. See Figure 1.3 for this situation as modeled in OpenFOAM. With time, the droplet will spread out over the surface until it reaches a steady-state situation. The spreading radius and the dynamic contact angle are determined by the interfacial tensions along the interfaces between the fluid and the gas phase, between the gas phase and the solid surface and between the solid surface and the fluid. Young's law states, in the case of equilibrium, that:
F_solid−gas − F_liquid−solid = F_liquid−gas cos(θ);  (1.19)
See Figure 1.4 for the force balance between these surface tensions. In steady-state, there is a fixed spreading radius and a fixed contact angle for each combination of surface tensions between these three phases. However, under dynamic non-equilibrium conditions, the dynamic contact angle depends on the velocity of the contact line. While modeling this case, apart from the coupled equation as mentioned before, a dynamic boundary condition is used, as described in (Carlson et al., 2011, p. 9):
∂φ/∂t + u∇φ = −ε∇φ − φ(φ − 1) cos(θ)/(√2 Dw);  (1.20)
with φ the phase-field parameter, analogous to the concentration c, u the velocity of the fluid at the solid surface, θ the dynamic contact angle, Dw the friction factor of the surface and ε a phenomenological parameter.
Figure 1.3: The initial condition of the droplet as implemented in OpenFOAM.
Figure 1.4: The droplet at t > 0 as implemented in OpenFOAM. The dynamic contact angle θ is
determined by Fsolid−gas , Fliquid−solid and Fliquid−gas .
1.6 Scope of the project
In this project, the focus is mostly on the implementation of the negative diffusion case. However, to get familiar with the numerical methods concerning this implementation, positive diffusion is implemented first. Fick's second law of diffusion (1.3) is numerically implemented in one dimension using two temporal discretization schemes, the explicit Euler Forward scheme and the implicit Euler Backward scheme. Partially using the results of these implementations and partially applying other discretization schemes, the Cahn-Hilliard Equation (1.11) for negative diffusion is numerically implemented after that. The temporal discretization schemes used are Euler Forward, two semi-implicit schemes that are developed along the way and a fully implicit iterative scheme involving the Gâteaux derivative. After discussing several advantages and disadvantages of the various schemes, some conclusions are drawn regarding which one can be considered the best.
Finally, the dynamic boundary condition is implemented in OpenFOAM and the results are validated. Some parametric studies are done and the results are analyzed.
2 Numerical methods for the diffusion equation
2.1 Introduction
The ordinary diffusion equation is a partial differential equation used for describing simple gradient
heat or mass diffusion as a function of space and time. In this chapter, the heat variant of the diffusion
equation is examined as the physics of the concentration variant is the same. First, the diffusion equation
is numerically discretized in time with the Euler Forward discretization scheme, along with its initial
and boundary conditions. Next, a stability analysis that uses the Gershgorin theorem from (Vuik et al.,
2006, p. 106) is done to derive the stability criteria for the numerical discretization. Finally, the Euler
Backward discretization scheme is implemented and both schemes, including stability conditions, are
compared. All numerical schemes are implemented in Matlab.
The diffusion equation in one dimension involves the temperature U(x, t) and the thermal diffusivity C:
∂U(x, t)/∂t = C ∂²U(x, t)/∂x²;  (2.1)
with t ≥ 0 and x ∈ [0, 1]. From now on, the diffusivity is chosen to be C = 1 for convenience. The boundary conditions are Dirichlet boundary conditions: U(0, t) = 1 and U(1, t) = 0. These two conditions represent a hot wall at x = 0 and a cold wall at x = 1. The initial condition is U(x, 0) = 0 for 0 < x ≤ 1.
2.2 The explicit method of Euler Forward in 1D
The first method that is used is Euler Forward. This is an explicit scheme, which means that every next time step is calculated using only the information of the previous time step(s). The spatial second order derivative in Equation (2.1) is discretized using a central difference scheme. See Figure 2.1 for a schematic view of the Euler Forward scheme.
Figure 2.1: The dependence of the Euler Forward discretization scheme is as shown: for every time level n and location i, u_i^{n+1} depends on u_{i−1}^n, u_i^n and u_{i+1}^n.
The discretization of a variable is denoted by u_i^n, where n represents the time discretization and i the space discretization. This notation is used for the rest of this chapter, unless specified otherwise. Using the Euler Forward discretization scheme for the time and the central difference scheme for space, the equation becomes:
(u_i^{n+1} − u_i^n)/∆t = (u_{i−1}^n − 2u_i^n + u_{i+1}^n)/(∆x)²;  (2.2)
The allowed values of ∆x and ∆t depend on the stability analysis of Subsection 2.2.1. The discretized Dirichlet boundary conditions are: u_1^n = 1 and u_N^n = 0, where i = N corresponds to the boundary at x = 1. The discretized initial condition is u_1^1 = 1 and u_i^1 = 0 for all other i.
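A minimal Matlab sketch of this update (the variable names and the final time are illustrative choices, not taken from the original implementation):

```matlab
% Euler Forward for U_t = U_xx on [0,1] with U(0,t) = 1, U(1,t) = 0, U(x,0) = 0
N  = 101;  dx = 1/(N-1);          % dx = 1/100
dt = dx^2/2.5;                    % below the stability limit dt < dx^2/2, i.e. dt = 1/25,000
u  = zeros(N,1);  u(1) = 1;       % discretized initial and boundary conditions
for n = 1:round(0.3/dt)
    unew        = u;
    unew(2:N-1) = u(2:N-1) + dt/dx^2*(u(1:N-2) - 2*u(2:N-1) + u(3:N));  % Eq. (2.2)
    unew(1) = 1;  unew(N) = 0;    % re-impose the Dirichlet boundary values
    u = unew;
end
plot(linspace(0,1,N), u);
```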
2.2.1 The stability condition
For convenience, the Euler Forward discretization is rewritten as a system of N equations, one for every i:
(ū^{n+1} − ū^n)/∆t = Aū^n + f̄/(∆x)²;  (2.3)
The matrix A contains all the coefficients of u_i^n for i = 1...N, and the terms ū^n and ū^{n+1} on both sides of the equation represent vectors for all i. The vector f̄ represents the boundary conditions: it contains u_1^n = 1 and u_i^n = 0 for i = 2...N.
A = (1/(∆x)²) ×
[ −2   1   0  ...   0   0   0 ]
[  1  −2   1   0  ...   0   0 ]
[  0   1  −2   1   0  ...   0 ]
[  .   .   .   .   .   .   .  ]
[  0  ...   0   1  −2   1   0 ]
[  0   0  ...   0   1  −2   1 ]
[  0   0   0  ...   0   1  −2 ];  (2.4)
This matrix representation allows the use of the Gershgorin theorem from (Vuik et al., 2006, p. 106), which involves the eigenvalues of the matrix. The Gershgorin theorem is used to evaluate the stability of the system of equations.
The Gershgorin theorem
The eigenvalues of a matrix A lie within a union of circles, given by:
|λ − a_ii| ≤ Σ_{j=1, j≠i}^{N} |a_ij|, with λ ∈ C;  (2.5)
where a_ii are the main diagonal terms of a matrix and a_ij are the nondiagonal terms.
Since any symmetrical matrix like A has only real eigenvalues, the union of circles simplifies to an equation for real λ. After applying the Gershgorin theorem to Equation (2.4), the inequality becomes:
∀ i = 1...N : |λ + 2/(∆x)²| ≤ |1/(∆x)²| + |1/(∆x)²|;  (2.6)
This can be further simplified to:
∀ i = 1...N : −4/(∆x)² ≤ λ ≤ 0  ⇒  ∀ i = 1...N : 4/(∆x)² ≥ |λ|;  (2.7)
In the first part of Equation (2.7), λ is negative, which implies a stable solution, and λ is limited from below, which will lead to the stability condition in the end.
From (Vuik et al., 2006, p. 68), it is known that the characteristic amplification factor for the Euler Forward method is given by Q(hλ) = 1 + hλ, where h = ∆t. A stable system cannot have an amplification factor greater than 1 in absolute value, since any error would then also amplify over time with a factor greater than 1 in absolute value. Thus, for a stable numerical discretization, |Q(hλ)| < 1 is needed. The characteristic stability condition can be derived as follows, with the assumption of negative, real λ:
|Q(hλ)| = |1 + hλ| < 1 ⇔ −1 < 1 + hλ < 1 ⇔ −2 < hλ < 0 ⇔ 2 > h|λ| > 0 ⇔ h < 2/|λ|;  (2.8)
So ∆t < 2/|λ| is needed for a stable solution. Equations (2.7) and (2.8) are two different conditions for λ, which can be combined to conclude the stability condition in terms of ∆t as a function of ∆x:
2/∆t > 4/(∆x)² ⇒ ∆t < (∆x)²/2;  (2.9)
According to the stability condition in Equation (2.9), the choice for ∆t depends on the square of the grid size. Using ∆x = 1/100, the time step is chosen as ∆t = 1/25,000. In Table 2.1, the relation between ∆x and ∆t is shown for other values of ∆x.
Grid size ∆x | Maximum ∆t
1/10 | 1/200
1/20 | 1/800
1/50 | 1/5,000
1/100 | 1/20,000
1/200 | 1/80,000
1/500 | 1/500,000
1/1000 | 1/2,000,000
Table 2.1: The relation between the chosen grid size ∆x and the maximum ∆t, which follows from the stability analysis, is quadratic for the Euler Forward scheme. A finer grid decreases the maximum ∆t and so increases the simulation time very rapidly.
2.2.2 Results
See Figure 2.2 for the results of the Euler Forward discretized diffusion equation. As expected, the temperature of the system increases with time but is strictly decreasing for increasing x at all times. Under steady-state conditions, the diffusion results in a temperature profile that is a straight line from (0, 1) to (1, 0). The steady-state solution is also shown in Figure 2.2.
Figure 2.2: The diffusion equation as discretized with the explicit scheme of Euler Forward. Due to the stability condition and the grid with ∆x = 1/100, the time step is ∆t = 1/25,000.
Judging from Subsections 2.2.1 and 2.2.2, some advantages and disadvantages of this discretization
scheme can be stated. As mentioned before, the Euler Forward scheme is an explicit numerical scheme,
making use of information only from the previous time step. This means that the implementation is
straightforward, which is an advantage. A disadvantage, that also follows from the fact that Euler
Forward is an explicit method, is that the stability condition puts severe restrictions on the time step
size. In Equation (2.9) and in Table 2.1 it is shown that ∆t is proportional to (∆x)2 , which means
that a decreasing grid size results in an even more rapidly decreasing time step and a rapidly increasing
simulation processing time. Especially for more complex functions, such as the Cahn-Hilliard equation,
this can be a problem.
In the next section, an implicit scheme, namely the Euler Backward scheme, is considered. In
comparison to Euler Forward, the discretization and the stability condition are different, which leads to
other advantages and also disadvantages.
2.3 The implicit method of Euler Backward in 1D
Euler Backward is an implicit method, which means that, unlike with an explicit method, every next time step is calculated making use of that same time step. Any time derivative still contains terms of the current and the next time step, but spatial derivatives are expressed in terms of the next time step only. See Figure 2.3 for the dependence scheme. Generally, the stability criteria of implicit schemes are a lot easier to satisfy; the schemes are far less restricted by stiffness than the explicit ones.
Figure 2.3: The dependence of the Euler Backward discretization scheme is the opposite of the Euler Forward scheme: u_{i−1}^{n+1}, u_i^{n+1} and u_{i+1}^{n+1} depend on each other and on u_i^n, for all n and i.
The discretization of the diffusion equation, using the Euler Backward scheme, is:
(u_i^{n+1} − u_i^n)/∆t = (u_{i−1}^{n+1} − 2u_i^{n+1} + u_{i+1}^{n+1})/(∆x)²;  (2.10)
The discretization of the Dirichlet boundary conditions and initial condition are the same as with the Euler Forward scheme. The discretized boundary conditions are: u_1^n = 1 and u_N^n = 0. The discretized initial condition is u_1^1 = 1 and u_i^1 = 0 for i = 2...N.
As can be seen in Figure 2.3 and in the discretization, three different terms involving n + 1 are used, terms which are all unknown at time step n. Calculating these three terms involves solving a system of equations for the spatial discretization. When the grid size is N, this results in a system of N equations which can be put in a matrix of size N by N. The discretization becomes:
(ū^{n+1} − ū^n)/∆t = Kū^{n+1} + f̄/(∆x)²;  (2.11)
where K is the N by N matrix and the terms ū^n and ū^{n+1} are vectors of length N. The matrix K is similar to the matrix A from Equation (2.4), and the vector f̄ containing the boundary conditions is also similar to f̄ from Equation (2.4). All terms of ū^{n+1} are now moved to one side of the equation:
ū^n = (I − ∆t · K)ū^{n+1} − ∆t f̄/(∆x)²;  (2.12)
Equation (2.12) can be rewritten as:
ū^{n+1} = (I − ∆t · K)^{−1} (ū^n + ∆t f̄/(∆x)²);  (2.13)
Here, the matrix I is an identity matrix of size N by N. The expression in Equation (2.13) gives ū^{n+1} in terms of ū^n, involving an inverse matrix. This is the final expression, which is implemented in Matlab.
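A minimal Matlab sketch of Equation (2.13), restricted to the interior unknowns (variable names are illustrative, not the original code); a backslash solve is used instead of forming the inverse explicitly:

```matlab
% Euler Backward for the 1D diffusion equation, interior unknowns only
N  = 101;  dx = 1/(N-1);  dt = 1/500;
Ni = N - 2;  e = ones(Ni,1);
K  = spdiags([e -2*e e], -1:1, Ni, Ni)/dx^2;  % tridiagonal second-difference matrix
f  = zeros(Ni,1);  f(1) = 1;                  % Dirichlet value U(0,t) = 1 enters through f
u  = zeros(Ni,1);                             % initial condition U(x,0) = 0
M  = speye(Ni) - dt*K;                        % (I - dt*K) from Equation (2.13)
for n = 1:round(0.3/dt)
    u = M \ (u + dt*f/dx^2);                  % solve rather than invert
end
plot(linspace(0,1,N), [1; u; 0]);             % re-attach the boundary values for plotting
```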
Figure 2.4 shows a plot created with the Euler Backward scheme. The grid that has been used is the same as with the Euler Forward scheme, so ∆x = 1/100. The time step was set at ∆t = 1/500. Why the time step can be much larger than with the Euler Forward scheme is explained in Subsection 2.3.1.
2.3.1 Stability condition
In this stability analysis, the method to obtain the stability condition of the Euler Backward scheme is the same as the one that was used to obtain the stability condition of the Euler Forward scheme. First, the Gershgorin theorem is applied:
|λ − a_ii| ≤ Σ_{j=1, j≠i}^{N} |a_ij|;  (2.14)
The matrix K is the same as the matrix A from Equation (2.4), so the result of applying the Gershgorin theorem is also the same:
∀ i = 1...N : 4/(∆x)² ≥ |λ|;  (2.15)
According to (Vuik et al., 2006, p. 68), the characteristic amplification factor for Euler Backward is Q(hλ) = 1/(1 − hλ), where h is ∆t. As before, the discretization scheme requires |Q(hλ)| < 1 to be stable:
|Q(hλ)| = |1/(1 − hλ)| < 1 ⇔ |1 − hλ| > 1 ⇔ 1 − hλ > 1 ∨ hλ − 1 > 1 ⇔ hλ < 0 ∨ hλ > 2 ⇔ h|λ| > 0;  (2.16)
In the last step it is used that λ is negative, so hλ > 2 drops out. Combining Equation (2.15) and Equation (2.16) is actually not even necessary anymore; the conclusion follows directly from Equation (2.16):
∆t|λ| > 0;  (2.17)
This inequality is easily satisfied: any positive ∆t will suffice. The limiting factor on ∆t is the characteristic time constant of the system, over which the actual physical changes take place. This time constant is defined as follows, according to (van den Akker and Mudde, 2008):
τ = L²/(πD);  (2.18)
Here, τ is the characteristic time constant, L is the length of the domain and D is the diffusivity. All parameters are used dimensionless. Filling in the values for L and D as used here in Equation (2.18) results in τ = 0.32. All major changes occur in the time interval t = [0, τ]. To accurately resolve these changes, ∆t ≪ τ is needed.
2.3.2 Results
In plotting Figure 2.4, the stability condition of Subsection 2.3.1 resulted in ∆t = 1/500 . From a stability
point of view, the chosen time step can be as high as ∆t = 1/10 or ∆t = 1/5 , although the accuracy of
the solution decreases significantly. Figure 2.5 shows the inaccuracy of a large time step.
Figure 2.4: The Euler Backward implicit scheme makes large time steps possible. This plot is identical
to the plot in Figure 2.2 but the used time step is ∆t = 1/500 , which is a factor 50 larger.
Figure 2.5: Using the Euler Backward scheme, ∆t can be chosen as large as desired. Although the
simulation runs extremely fast with ∆t very large, the accuracy decreases significantly. This is still a
limiting factor in choosing ∆t.
Comparing Figure 2.4 and Figure 2.2, one can conclude that, for fine enough ∆t and ∆x, both
discretization schemes have the same results. The differences are in the stability conditions. A great
advantage of the Euler Backward scheme is that the time step can be chosen a lot larger than while using
the Euler Forward scheme, due to the fact that Euler Backward is an unconditionally stable scheme. The
time step is bounded though, since a very large time step results in inaccuracies and the loss of desired
data. A disadvantage, in comparison to the Euler Forward scheme, is the fact that Euler Backward is
slightly harder to implement in Matlab.
3 Modeling the Cahn-Hilliard equation with the Euler Forward discretization
In this chapter, the application of the Euler Forward discretization scheme to the one-dimensional Cahn-Hilliard equation is discussed. Furthermore, the discretization involves central difference schemes for the second and fourth order spatial derivatives. The boundary conditions are applied as described in literature and a step function is chosen as an initial condition. Next, a stability analysis is used to show what the relation between grid sizes and critically stable time step sizes is. This theoretical stability condition is compared to the stability condition obtained from numerical experiments. Finally, the results of the implementation are discussed quantitatively.
3.1 Discretizations
The Cahn-Hilliard equation in three dimensions has been derived in Chapter 1. Derived from Equation (1.11), the one-dimensional version can be written as:
∂c/∂t = D ∂²µ/∂x² = D ∂²/∂x² (c³ − c − ε² ∂²c/∂x²);  (3.1)
where D is the diffusivity (before, this was the mobility of the particles M), ε² is a phenomenological parameter which scales with the width of the interface and µ is the chemical potential. Applying the Euler Forward scheme to Equation (3.1), using D = 1, yields the following discretization:
(c_i^{n+1} − c_i^n)/∆t = (µ_{i−1}^n − 2µ_i^n + µ_{i+1}^n)/(∆x)²
  = [(c_{i−1}^n)³ − 2(c_i^n)³ + (c_{i+1}^n)³]/(∆x)² − (c_{i−1}^n − 2c_i^n + c_{i+1}^n)/(∆x)²
  − ε² (c_{i−2}^n − 4c_{i−1}^n + 6c_i^n − 4c_{i+1}^n + c_{i+2}^n)/(∆x)⁴;  (3.2)
In Equation (3.2), the term ∂²(c³)/∂x² from Equation (3.1) is discretized directly. An alternative is to rewrite Equation (3.1) so that the second derivative is evaluated and, after that, to discretize the result. Equation (3.1) is rewritten accordingly as:
∂c/∂t = ∂²/∂x² (c³ − c − ε² ∂²c/∂x²) = 6c (∂c/∂x)² + (3c² − 1) ∂²c/∂x² − ε² ∂⁴c/∂x⁴;  (3.3)
The discretization of Equation (3.3) is as follows:
(c_i^{n+1} − c_i^n)/∆t = 6c_i^n [(c_{i+1}^n − c_{i−1}^n)/(2∆x)]² + (3(c_i^n)² − 1)(c_{i−1}^n − 2c_i^n + c_{i+1}^n)/(∆x)²
  − ε² (c_{i−2}^n − 4c_{i−1}^n + 6c_i^n − 4c_{i+1}^n + c_{i+2}^n)/(∆x)⁴;  (3.4)
As will be shown later, the implementation of this alternative does not give the desired results. This might be caused by the terms 6c(∂c/∂x)² or (3c² − 1)∂²c/∂x² in Equation (3.3). From the diffusive term (3c² − 1)∂²c/∂x², it can be seen that negative diffusion only occurs for |c| < √3/3; any c outside this range results in positive diffusion. If the behavior of c is as expected, though, c is bounded in the range −1 ≤ c ≤ 1, which overlaps with the range |c| > √3/3. Therefore, the discretization of Equation (3.2), and not of Equation (3.4), is used throughout the rest of this chapter.
3.2 Boundary and initial conditions
The boundary conditions are defined as has been mentioned in Section 1.3: no gradient of the concentration or of the chemical potential on either boundary. Physically, this means that the in- and outflow of the system equal zero, or, in other words: it is a closed system, so mass conservation is satisfied. Mathematically, this means that the boundary conditions are Neumann boundary conditions, involving the derivative of both c and µ at both boundaries. It makes sense that four boundary conditions are used, since the Cahn-Hilliard equation is a fourth order differential equation. The boundaries of the (one-dimensional) system are located at x = 0 and x = 1:
∂c/∂x (0, t) = 0,  ∂c/∂x (1, t) = 0,  ∂µ/∂x (0, t) = 0,  ∂µ/∂x (1, t) = 0;  (3.5)
As stated in Section 1.3, the initial condition is a step function. This initial condition is chosen because its results can be compared with an analytical solution:
c^{t=0}(x) = −1 for 0 ≤ x ≤ 1/2,  1 for 1/2 < x ≤ 1;  (3.6)
The fluid interface that arises due to the Cahn-Hilliard equation is located at x = 1/2, as is the case with the initial condition. In terms of concentration, this interface is located at c = 0.
3.2.1 Discretized boundary and initial conditions
The boundary conditions of the Cahn-Hilliard equation are discretized in a different way than in Section 2.2, since they are Neumann boundary conditions instead of Dirichlet. The discretizations are as follows:
(c_1^n − c_{−1}^n)/(2∆x) = 0 ⇒ c_{−1}^n = c_1^n;
(c_{N+1}^n − c_{N−1}^n)/(2∆x) = 0 ⇒ c_{N+1}^n = c_{N−1}^n;
(µ_1^n − µ_{−1}^n)/(2∆x) = 0 ⇒ µ_{−1}^n = µ_1^n;
(µ_{N+1}^n − µ_{N−1}^n)/(2∆x) = 0 ⇒ µ_{N+1}^n = µ_{N−1}^n;  (3.7)
Because of these discretizations, the ghost points at i = −1 and at i = N + 1 can be substituted with known values. The discretization of the initial condition is:
c_i^{t=0} = −1 for 0 ≤ i ≤ N/2,  1 for N/2 + 1 ≤ i ≤ N;  (3.8)
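A minimal Matlab sketch of the resulting explicit update, with the mirrored ghost values of Equation (3.7) built into a small Laplacian helper (variable names, the final time and the exact boundary treatment are illustrative choices, not the original implementation):

```matlab
% Euler Forward for the 1D Cahn-Hilliard equation, Eq. (3.2), with zero-flux boundaries
N    = 129;  dx = 1/(N-1);  epsi = 1/128;       % interface parameter epsilon
dt   = 0.9*dx^2/(4 + 8*epsi^2/dx^2);            % just below the bound of Eq. (3.21)
x    = linspace(0,1,N).';
c    = ones(N,1);  c(x <= 0.5) = -1;            % step-function initial condition, Eq. (3.8)
lap  = @(v) ([v(2:end); v(end-1)] - 2*v + [v(2); v(1:end-1)])/dx^2;  % mirrored ghost points
for n = 1:round(0.01/dt)
    mu = c.^3 - c - epsi^2*lap(c);              % discrete chemical potential
    c  = c + dt*lap(mu);                        % c^{n+1} = c^n + dt * Laplacian of mu
end
plot(x, c);
```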
3.3 Experimental and theoretical stability conditions
Since the Cahn-Hilliard equation is a nonlinear fourth order differential equation, the stability condition is a lot harder to satisfy than the stability condition of the diffusion equation, Equation (2.9). Because the stability condition is also harder to derive, a combination of an experimental trial-and-error approach and a stability analysis, similar to the one in Subsection 2.2.1, is used. The experimental approach leads to an experimental numerical stability condition and the Gershgorin theorem approach leads to the theoretical stability condition. According to (Vuik et al., 2006), the Gershgorin theorem can only be applied to a linear system of equations, while a nonlinear differential equation cannot be represented as a linear system of equations. Therefore, the Cahn-Hilliard equation is linearized in Subsection 3.3.1. The stability analysis of the linearized Cahn-Hilliard equation is done in Subsections 3.3.2 and 3.3.3. Next, the theoretical stability condition is compared to the experimental numerical stability condition in Subsection 3.3.4.
3.3.1 The linearized Cahn-Hilliard equation
Because of the nonlinearity of the Cahn-Hilliard equation, the Gershgorin theorem is not applicable to it. The Cahn-Hilliard equation is therefore linearized and the Gershgorin theorem is applied to the linearization. The linearization only concerns the second derivative of the nonlinear term c³.
For the linearization, the following representation of Equation (3.1) is used:
∂c/∂t = ∂/∂x [(3c² − 1) ∂c/∂x] − ε² ∂⁴c/∂x⁴;  (3.9)
Next, the fact that 3c² ∈ [0, 3] is used, which follows from the fact that c ∈ [−1, 1]. Taking the two bounds of the range of 3c² gives an estimate of what the range of the entire differential equation is:
∂c/∂t = ∂/∂x [−1 · ∂c/∂x] − ε² ∂⁴c/∂x⁴  if 3c² = 0;
∂c/∂t = ∂/∂x [2 · ∂c/∂x] − ε² ∂⁴c/∂x⁴  if 3c² = 3;  (3.10)
which is rewritten as:
∂c/∂t = −∂²c/∂x² − ε² ∂⁴c/∂x⁴  if 3c² = 0;
∂c/∂t = 2 ∂²c/∂x² − ε² ∂⁴c/∂x⁴  if 3c² = 3;  (3.11)
Equation (3.11) can now be written as a system of equations, using a second derivative operator A:
∂c/∂t = −Ac − ε²A²c  ∨  ∂c/∂t = 2Ac − ε²A²c
⇒ ∂c/∂t = (−A − ε²A²)c  ∨  ∂c/∂t = (2A − ε²A²)c
⇒ ∂c/∂t = B1 c  ∨  ∂c/∂t = B2 c;  (3.12)
with:
A = (1/(∆x)²) ×
[ −1   1   0  ...   0   0 ]
[  1  −2   1   0  ...   0 ]
[  0   1  −2   1  ...   0 ]
[  .   .   .   .   .    . ]
[  0  ...   0   1  −2   1 ]
[  0   0  ...   0   1  −1 ];  (3.13)
and with B1 = −A − ε²A² and B2 = 2A − ε²A², which are pentadiagonal N × N matrices. Up to the common factor 1/(∆x)², the generic interior row of B1 reads
[ ... , −ε²/(∆x)², 4ε²/(∆x)² − 1, −6ε²/(∆x)² + 2, 4ε²/(∆x)² − 1, −ε²/(∆x)², ... ];  (3.14)
and the generic interior row of B2 reads
[ ... , −ε²/(∆x)², 4ε²/(∆x)² + 2, −6ε²/(∆x)² − 4, 4ε²/(∆x)² + 2, −ε²/(∆x)², ... ];  (3.15)
The first two rows (and, by symmetry, the last two rows) are modified by the Neumann boundary conditions: for B1 the first row is [ −2ε²/(∆x)² + 1, 3ε²/(∆x)² − 1, −ε²/(∆x)², 0, ... ] and the second row is [ 3ε²/(∆x)² − 1, −6ε²/(∆x)² + 2, 4ε²/(∆x)² − 1, −ε²/(∆x)², 0, ... ]; for B2 the first row is [ −2ε²/(∆x)² − 2, 3ε²/(∆x)² + 2, −ε²/(∆x)², 0, ... ] and the second row is [ 3ε²/(∆x)² + 2, −6ε²/(∆x)² − 4, 4ε²/(∆x)² + 2, −ε²/(∆x)², 0, ... ].
To the representation in Equation (3.12), the Gershgorin theorem can be applied.
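As a numerical cross-check, not present in the original text, the eigenvalues of B1 and B2 can also be computed directly in Matlab and compared with the Gershgorin bound that is derived below in Equation (3.20); a small sketch:

```matlab
% Eigenvalues of B1 = -A - eps^2*A^2 and B2 = 2A - eps^2*A^2 versus the Gershgorin bound
N  = 128;  dx = 1/N;  epsi = 1/128;
e  = ones(N,1);
A  = spdiags([e -2*e e], -1:1, N, N);  A(1,1) = -1;  A(N,N) = -1;  A = A/dx^2;
B1 = -A - epsi^2*A^2;
B2 =  2*A - epsi^2*A^2;
lamMax = max(abs([eig(full(B1)); eig(full(B2))]));
bound  = 8/dx^2 + 16*epsi^2/dx^4;               % Equation (3.20)
fprintf('max |lambda| = %.3e, Gershgorin bound = %.3e\n', lamMax, bound);
```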
3.3.2 Applying the Gershgorin theorem
Applying the Gershgorin theorem helps to find the eigenvalues of B1 and B2, which are needed in the end to find the stability conditions. For the complete Gershgorin theorem, see Equation (2.5). First, the derivation of the eigenvalues is done using the matrix B1. The matrix B2 will be used later, separately. Filling in the Gershgorin theorem for the middle rows of B1 yields:
|λ − (−6ε²/(∆x)⁴ + 2/(∆x)²)| ≤ |8ε²/(∆x)⁴ − 2/(∆x)²| + 2ε²/(∆x)⁴;  (3.16)
This equation has to be divided into three cases. These cases, and the results of Equation (3.16) in these cases, are:
ε/∆x < 1/2  ⇒ |λ| ≤ 4/(∆x)² − 12ε²/(∆x)⁴;
1/2 < ε/∆x < 1/√3  ⇒ |λ| ≤ 4/(∆x)²;
1/√3 < ε/∆x  ⇒ |λ| ≤ 16ε²/(∆x)⁴ − 4/(∆x)²;  (3.17)
Next, the eigenvalues of B2 are derived by filling in the Gershgorin theorem again:
|λ − (−6ε²/(∆x)⁴ − 4/(∆x)²)| ≤ 8ε²/(∆x)⁴ + 4/(∆x)² + 2ε²/(∆x)⁴;  (3.18)
Here, no separate cases have to be dealt with. Rewriting Equation (3.18) yields:
|λ| ≤ 16ε²/(∆x)⁴ + 8/(∆x)²;  (3.19)
After combining the inequalities of Equation (3.17) with the inequality of Equation (3.19), the only remaining inequality is:
∀ ε/∆x : |λ| ≤ 8/(∆x)² + 16ε²/(∆x)⁴;  (3.20)
Equation (3.20) is the final and only condition on λ.
3.3.3 The stability condition on ∆t
As has been discussed in Subsection 2.2.1 (and can be found in Equation (2.8)), the Euler Forward discretization prescribes that ∆t < 2/|λ| holds, which can be rewritten as |λ| < 2/∆t. Combining inequality (3.20) with the Euler Forward inequality results in:
∆t < (∆x)²/(4 + 8ε²/(∆x)²);  (3.21)
Equation (3.21) concludes with a useful stability condition for ∆t in terms of ∆x. However, literature on the Cahn-Hilliard equation states that ∆t ∝ (∆x)⁴, instead of a relation like in Equation (3.21). For ε ≫ ∆x, Equation (3.21) does indeed converge to a relation like ∆t ∝ (∆x)⁴. Validation of this hypothesis comes from the actual numerical experiments.
Equation (3.21) is just the stability condition for a linearized Cahn-Hilliard equation, not the original one. In the next subsection, the results of Equation (3.21) and the experimental data on stability are compared.
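For reference, the bound of Equation (3.21) is easy to evaluate; a small Matlab sketch (illustrative, not part of the original code) that reproduces, for example, the ε = ∆x = 1/128 entry of Table 3.1:

```matlab
% Theoretical maximum time step of Equation (3.21)
dt_max = @(dx, epsi) dx.^2 ./ (4 + 8*epsi.^2./dx.^2);
dx = 1/128;  epsi = dx;
fprintf('1/dt_max = %.0f\n', 1/dt_max(dx, epsi));   % prints 196608, cf. Table 3.1
```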
3.3.4 Comparing the linearized Cahn-Hilliard equation stability condition with the experimental data on stability
Concerning the numerical experiments, the definition that is used for a stable solution is as follows: the solution of the discretization still oscillates after t = 5,000∆t but does not oscillate anymore after t = 10,000∆t. This is not the best definition of stability, but it is at least consistent.
See Table 3.1 for the comparison of the numerical experiments and the results of Equation (3.21). The values of all ∆t_Exp and ∆t_Theory are close. This can be seen even better from the ratio ∆t_Exp/∆t_Theory, also given in Table 3.1. Because the data in Table 3.1 are so close, there is no need to plot both the theoretical data and the experimental data. A plot of only the experimental data is shown in Figure 3.1.
ε = 0.25∆x
∆x | ∆t_Exp | ∆t_Theory | ∆t_Exp/∆t_Theory
1/32 | 1/4,572 | 1/4,608 | 1.0079
1/64 | 1/18,400 | 1/18,432 | 1.0017
1/128 | 1/73,720 | 1/73,728 | 1.0001
ε = 0.5∆x
1/32 | 1/6,055 | 1/6,144 | 1.0147
1/64 | 1/24,510 | 1/24,576 | 1.0027
1/128 | 1/98,280 | 1/98,304 | 1.0002
ε = ∆x
1/32 | 1/12,040 | 1/12,288 | 1.0206
1/64 | 1/48,930 | 1/49,152 | 1.0045
1/128 | 1/196,600 | 1/196,608 | 1.0000
ε = 2∆x
1/32 | 1/36,150 | 1/36,864 | 1.0198
1/64 | 1/147,000 | 1/147,456 | 1.0031
1/128 | 1/589,800 | 1/589,824 | 1.0000
ε = 4∆x
1/32 | 1/133,100 | 1/135,168 | 1.0155
1/64 | 1/538,800 | 1/540,672 | 1.0035
1/128 | 1/2,163,000 | 1/2,162,688 | 0.9986
ε = 8∆x
1/32 | 1/523,000 | 1/528,384 | 1.0103
1/64 | 1/2,107,000 | 1/2,113,536 | 1.0031
1/128 | 1/8,451,000 | 1/8,454,144 | 1.0004
Table 3.1: The stability conditions for the Euler Forward implementation, as obtained experimentally, are compared to the theoretical stability condition, obtained from the stability analysis of the linearized Cahn-Hilliard equation. For all different ε/∆x and ∆x, the values of ∆t_Exp and ∆t_Theory are very close, which is shown by the ratio ∆t_Exp/∆t_Theory.
Figure 3.1: The relation between ∆t and ∆x for several values of ε/∆x, due to the stability condition obtained from experimental data. For constant ε/∆x, increasing ∆x by a factor 2 results in increasing ∆t by a factor 4.
From Figure 3.1 and Figure 3.2, it can be concluded that ∆t ∝ (∆x)² for all ε/∆x and that, for any constant ∆x, a larger ε results in a smaller maximum ∆t. It can also be concluded that, for given ∆x and for at least the values of ε/∆x ∈ [0.25, 8], the theoretical stability condition is a very good approximation. Therefore, the theoretical stability condition can be used as a predictor for ∆t if one is implementing the Cahn-Hilliard equation with Euler Forward for a particular ∆x and ε/∆x ∈ [0.25, 8]. More specifically, for decreasing ∆x, the ratio in Table 3.1 converges to 1. Extrapolating this result, the stability condition predictor might hold for even finer grids than ∆x = 1/128.
Figure 3.2: For each value of ∆x, the relation between ε/∆x and ∆t follows a similar curve. Comparing two curves, a factor 2 increase in ∆x results in a factor 4 increase in ∆t.
3.4 Results
In this section, the results of the evolution of the concentration with time are presented, using the discretization of the Cahn-Hilliard equation of Equation (3.2). In the implementations that are used to obtain the results, for all plots with different values of ε, the time step size is a bit smaller than the critical time step sizes of Table 3.1. The grid size is ∆x = 1/128 for all plots.
Figure 3.3 shows the plot of the concentration with ε = 1/128. The evolution of the interface, shown by different plots which are indicated in the legend, cannot be seen as well as desired since, with this particular ε, the interface is quite thin. As expected though, the plots converge to the steady-state solution with time.
Figure 3.3: The Cahn-Hilliard equation implemented with the Euler Forward discretization scheme, using ∆x = 1/128, ε = 1/128 and ∆t = 1/200,000.
Figure 3.4 shows the plot of the concentration with ε = 4/128. The evolution of the interface can be seen more clearly, compared to Figure 3.3: lower values of t consistently show steeper interfaces than higher values of t. Obviously, the steady-state solution shows the least steep interface.
Figure 3.4: The Cahn-Hilliard equation implemented with the Euler Forward discretization scheme, using ∆x = 1/128, ε = 4/128 and ∆t = 1/2,200,000. The maxima and minima near the interface are discussed in Subsection 3.5.3.
Figure 3.5 shows only the interface of the plot with ε = 0.25/128. The solutions are consequently even steeper than with ε = 1/128. Therefore, even though the plot uses only the near-interface data points, the evolution of the interface is impossible to see.
Figure 3.5: The Cahn-Hilliard equation implemented with the Euler Forward discretization scheme, using ∆x = 1/128, ε = 0.25/128 and ∆t = 1/80,000.
In Section 3.1, two different discretizations of the Euler Forward scheme were presented. The plots in this section were made using the discretization of Equation (3.2). In Figure 3.6, it is shown why the discretization of Equation (3.4) does not give the desired results. For increasing t, the solution does not converge to the steady-state solution; instead it converges to the steady-state solution with a magnitude approximately a factor √3 higher, exceeding c = 1 and c = −1 on almost the entire domain already for t = 0.01. It seems that the integral over all c is the same for all t, though, which is as expected. For lower values of ε, the solutions converge to an even higher magnitude of the steady-state solution. For higher values of ε, though, the solutions seem to converge to the desired steady-state solution. Unfortunately, this was not investigated further, since the discretization of Equation (3.2) already gives the desired results.
Figure 3.6: The Cahn-Hilliard equation implemented with the Euler Forward discretization scheme as in Equation (3.4), using ∆x = 1/128, ε = 1/128 and ∆t = 1/400,000. The solutions diverge for increasing t, which is not in accordance with the expectations and the theory of Section 1.2.
3.5 Quantitative analysis of the results
3.5.1 Steady-state deviation
As discussed in Section 3.4, the interface evolution always converges to the steady-state solution. How fast the plots converge can best be seen in a plot of the steady-state deviation, which is defined as:
Steady-state deviation(x, t) = |c_steady(x) − c(x, t)|;  (3.22)
Here, c_steady(x) is from Equation (1.15). Figure 3.7 shows the steady-state deviation of the case with ε = 1/128.
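The deviation itself is a one-line computation; a Matlab sketch, assuming the vectors x and c and the parameter epsi from the Euler Forward sketch after Subsection 3.2.1:

```matlab
% Steady-state deviation of Equation (3.22) for a computed profile c on the grid x
c_steady  = tanh((x - 0.5)/(sqrt(2)*epsi));   % Eq. (1.15), shifted to the interface at x = 1/2
deviation = abs(c_steady - c);
semilogy(x, deviation);                       % the persistent maximum near x = 0.5 stands out
```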
Figure 3.7: The steady-state deviation is defined as the absolute difference between the solution at time t and the steady-state solution. The deviation shown here is of the implementation with ∆x = 1/128, ε = 1/128 and ∆t = 1/200,000. The maximum at x = 0.5 of the solution for t = 0.01 has a value of 0.03548.
In Figure 3.7, the steady-state deviation has a clear maximum at x = 0.5 which is constant with time. This is an unfortunate result: it means that the implementation does not converge to the steady-state solution at this specific location. In the vicinity of the boundaries, the steady-state deviation decreases with time, which does show the convergence of the solutions. The grid size is an important factor for the steady-state deviation: while keeping ε constant, a finer grid gives the deviation at x = 0.5 a lower maximum, although it is still a maximum that does not change with time. In Figure 3.8, this is shown for ∆x = 1/256 and ε = 1/128.
Figure 3.8: The steady-state deviation of the implementation with ∆x = 1/256, ε = 1/128 and ∆t = 1/2,500,000. The maximum at x = 0.5 is lower (0.007425) than with ∆x = 1/128.
3.5.2 90%-interval
In Section 3.4, the steepness of the plots and the width of the interface were discussed qualitatively. In this subsection, the interface width and the slope are discussed quantitatively. The 90%-interval is a good quantity to define for the analysis of the interface width, and so of the steepness of the interface. The 90%-interval is defined as the interval where −0.9 ≤ c ≤ 0.9 holds. Since the experimental data consist of discretized points, the 90%-interval is also of a discretized form. Concluding from these results, the 90%-interval increases with increasing ε, which follows from the fact that ε scales with the interface width and vice versa. A sketch of how such an interval width can be measured from a computed profile is given after Table 3.2, which shows the width of the 90%-intervals for different ε and ∆x.
Table 3.2: The width of the 90%-interval is proportional to ε. With a finer grid, one can determine the 90%-interval more precisely, as can be seen with ε = 1/32 and ε = 1/16.
ε | ∆x | Interval width
1/32 | 1/32 | 3/32
1/16 | 1/32 | 7/32
1/64 | 1/64 | 3/64
1/32 | 1/64 | 7/64
1/16 | 1/64 | 15/64
1/128 | 1/128 | 3/128
1/64 | 1/128 | 7/128
1/32 | 1/128 | 15/128
1/16 | 1/128 | 33/128
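One way to measure this interval from a computed profile is sketched below (assuming the vectors x and c from the Euler Forward sketch after Subsection 3.2.1; counting the points inside the interval is one possible discrete definition of the width):

```matlab
% Width of the 90%-interval, defined by -0.9 <= c <= 0.9
inside  = abs(c) <= 0.9;                 % points inside the interval
width90 = sum(inside)*(x(2) - x(1));     % number of points inside times dx
fprintf('90%%-interval width = %.4f\n', width90);
```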
3.5.3 Interface overshoot
In Figures 3.3 and 3.4, close to the interface, all plots except the steady-state solution have maxima and
minima that are larger than c = 1 and smaller than c = −1, respectively. These extreme values will be
referred to as overshoot: the solutions exceed their maximum and minimum allowed values. According
to Section 1.2, this is against the mass conservation law. Therefore, it can cause errors in the results.
The overshoot is easily seen in Figure 3.4, but not in Figure 3.3. Therefore, Figure 3.9 shows a close-up
of the latter figure.
Figure 3.9: The close-up of Figure 3.3 shows the overshoot near the interface. It also shows that the
overshoot decreases with time.
The overshoot is defined for x > 0.5; the domain x < 0.5 is not taken into account because the interface is symmetrical:
Overshoot(x, t) = c(x, t) − 1, x > 0.5;  (3.23)
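A Matlab sketch of this quantity, again assuming the vectors x and c from the earlier Euler Forward sketch:

```matlab
% Interface overshoot of Equation (3.23) on the half-domain x > 0.5
right     = x > 0.5;
overshoot = c(right) - 1;
semilogy(x(right), overshoot);           % non-positive values drop out of the logarithmic plot
fprintf('maximum overshoot = %.5f\n', max(overshoot));
```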
Figure 3.10: The interface overshoot from Figure 3.3 and the close-up in Figure 3.9, with ∆x = 1/128, ε = 1/128 and ∆t = 1/200,000, using the domain x > 0.5. The maximum overshoot of the plot of t = 0.0005 is 0.01118.
Figure 3.10 shows that the overshoot has a global maximum slightly to the right of x = 0.5. It also shows that, for the lowest two values of t, the plot partially consists of dots instead of a continuous line. This indicates that those solutions oscillate around c = 1 (the logarithmic scale cannot handle negative values). Although the maximum overshoot dampens with time, it increases with increasing ε. Figure 3.11 shows this for ε = 4/128.
Figure 3.11: The interface overshoot is shown for the implementation with ∆x = 1/128, ε = 4/128 and ∆t = 1/2,200,000. The maxima are significantly higher than with ε = 1/128 in Figure 3.10. The maximum overshoot of the plot of t = 0.0005 is 0.05288.
4 Implicit methods for modeling the Cahn-Hilliard equation
In previous chapters, it has been mentioned that the Cahn-Hilliard equation is a nonlinear fourth-order partial differential equation. In an implicit implementation, the nonlinearity causes a lot of difficulties. For instance, the method that has been discussed in Section 2.3, the Euler Backward scheme, is not suitable for discretizing a nonlinear differential equation, according to (Vuik et al., 2006, p. 66-67). If one is to implement the Cahn-Hilliard equation implicitly, iterative methods, meant to solve nonlinear differential equations, have to be used. In this chapter, the implementation of two different semi-implicit discretization schemes is described, both of which are developed during this research. Next, a fully implicit, iterative discretization scheme, involving the use of the Gâteaux derivative, is discussed. All advantages and disadvantages are discussed accordingly.
4.1 The semi-implicit scheme of Implicit-Explicit (ImEx)
The first semi-implicit discretization scheme that is used is the Implicit-Explicit (ImEx) scheme. The ImEx scheme uses a combination of an explicit and an implicit discretization. The nonlinear part of the Cahn-Hilliard equation is discretized using Euler Forward, and the linear part is discretized using Euler Backward. The method has a combination of the advantages and disadvantages of both methods. First, the stability condition that has to be satisfied is less severe than with the Euler Forward method, but more severe than with the Euler Backward method. Second, the processing time comes partially from a matrix calculation and partially from an iteration over the grid points. Later, the quantitative results of this method, regarding the steady-state deviation, the 90%-interval and the interface overshoot, are discussed.
4.1.1 ImEx discretization
The discretization of the ImEx scheme in Equation (4.1) is similar to the Euler Backward discretization, with the notable difference that the nonlinear part (c_i^n)³ is discretized using an Euler Forward scheme:
(c_i^{n+1} − c_i^n)/∆t = (µ_{i−1}^{(n,n+1)} − 2µ_i^{(n,n+1)} + µ_{i+1}^{(n,n+1)})/(∆x)²
  = [(c_{i−1}^n)³ − 2(c_i^n)³ + (c_{i+1}^n)³]/(∆x)² − (c_{i−1}^{n+1} − 2c_i^{n+1} + c_{i+1}^{n+1})/(∆x)²
  − ε² (c_{i−2}^{n+1} − 4c_{i−1}^{n+1} + 6c_i^{n+1} − 4c_{i+1}^{n+1} + c_{i+2}^{n+1})/(∆x)⁴;  (4.1)
To convert the discretization in Equation (4.1) to a form which can be implemented in Matlab, all terms involving time step n + 1 are collected on one side and all remaining terms, involving time step n, on the other:
c_i^{n+1}/∆t + (c_{i−1}^{n+1} − 2c_i^{n+1} + c_{i+1}^{n+1})/(∆x)² + ε² (c_{i−2}^{n+1} − 4c_{i−1}^{n+1} + 6c_i^{n+1} − 4c_{i+1}^{n+1} + c_{i+2}^{n+1})/(∆x)⁴
  = c_i^n/∆t + [(c_{i−1}^n)³ − 2(c_i^n)³ + (c_{i+1}^n)³]/(∆x)²;  (4.2)
Equation (4.2) can be rewritten as a system of equations, using a matrix A and a vector B:
(I + ∆t · A) c^{n+1} = c^n + ∆t · B(c^n) ⇒ c^{n+1} = (I + ∆t · A)^{−1} (c^n + ∆t · B(c^n));  (4.3)
with A an N × N pentadiagonal matrix whose rows, up to the common factor 1/(∆x)², read:
first row: [ 2ε²/(∆x)² − 1, −3ε²/(∆x)² + 1, ε²/(∆x)², 0, ... ];
second row: [ −3ε²/(∆x)² + 1, 6ε²/(∆x)² − 2, −4ε²/(∆x)² + 1, ε²/(∆x)², 0, ... ];
generic interior row: [ ... , ε²/(∆x)², −4ε²/(∆x)² + 1, 6ε²/(∆x)² − 2, −4ε²/(∆x)² + 1, ε²/(∆x)², ... ];
with the last two rows mirroring the first two;  (4.4)
and:
B(c^n) = (1/(∆x)²) [ −(c_1^n)³ + (c_2^n)³, ..., (c_{j−1}^n)³ − 2(c_j^n)³ + (c_{j+1}^n)³, ..., (c_{N−1}^n)³ − (c_N^n)³ ]^T;  (4.5)
Equation (4.3) is suitable for implementation in Matlab. The boundary conditions, as described in Section 3.2, are already incorporated in A and B(c^n) and the discretized initial condition is as before:

c_i^{t=0} =
\begin{cases}
-1 & \text{for } 0 \le i \le \tfrac{1}{2}N\\
\phantom{-}1 & \text{for } \tfrac{1}{2}N + 1 \le i \le N
\end{cases}; \qquad (4.6)
This concludes the discretization of the ImEx scheme.
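As an illustration of Equations (4.3)-(4.6), a minimal Matlab sketch of the ImEx update could look as follows. The variable names, the use of sparse difference matrices and the chosen parameter values are illustrative assumptions and are not taken from the actual thesis code:

```matlab
% Illustrative sketch only: variable names and parameter values are assumptions.
N    = 128;          % number of grid points
dx   = 1/N;          % grid size
epsi = 1/128;        % interface width parameter epsilon
dt   = 1/40000;      % time step

e  = ones(N,1);
D2 = spdiags([e -2*e e], -1:1, N, N);   % second-difference matrix
D2(1,1) = -1; D2(N,N) = -1;             % no-flux boundary rows
D4 = D2*D2;                             % fourth-difference matrix, same boundaries
A  = (1/dx^2)*D2 + (epsi^2/dx^4)*D4;    % matrix A of Equation (4.4)

c = [-ones(N/2,1); ones(N/2,1)];        % initial condition (4.6)
M = speye(N) + dt*A;                    % (I + dt*A)
for n = 1:4000                          % march to t = 0.1
    Bc = (D2*(c.^3))/dx^2;              % explicit vector B(c^n), Equation (4.5)
    c  = M \ (c + dt*Bc);               % ImEx update, Equation (4.3)
end
```

Since A does not change in time, the matrix (I + ∆t·A) could also be factorised once outside the time loop, which is what makes the semi-implicit update cheap compared to a fully implicit nonlinear solve.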
4.1.2 Stability condition
In this section, the stability condition of the ImEx discretization scheme is obtained by applying a trial-and-error approach to the numerical experiments. As mentioned before, the expectation concerning the maximum ∆t that follows from the stability condition is that it lies somewhere between the Euler Forward maximum and the Euler Backward maximum. Since the Euler Backward stability condition actually states that any solution using a positive time step is stable, it is expected that the stability condition of the ImEx scheme results in a maximum ∆t higher than the Euler Forward maximum. The experimental results for the stability of the ImEx scheme are in Table 4.1, together with the results of the Euler Forward scheme from Table 3.1.
ε = 0.25∆x
  ∆x      ∆tEuler        ∆tImEx        ∆tEuler/∆tImEx
  1/32    1/4,572        1/7,630       1.669
  1/64    1/18,400       1/30,670      1.667
  1/128   1/73,720       1/122,900     1.667

ε = 0.5∆x
  ∆x      ∆tEuler        ∆tImEx        ∆tEuler/∆tImEx
  1/32    1/6,055        1/6,101       1.008
  1/64    1/24,510       1/24,560      1.002
  1/128   1/98,280       1/98,320      1.000

ε = ∆x
  ∆x      ∆tEuler        ∆tImEx        ∆tEuler/∆tImEx
  1/32    1/12,040       1/2,025       0.168
  1/64    1/48,930       1/8,200       0.168
  1/128   1/196,600      1/32,710      0.166

ε = 2∆x
  ∆x      ∆tEuler        ∆tImEx        ∆tEuler/∆tImEx
  1/32    1/36,150       1/470         0.0130
  1/64    1/147,000      1/2,015       0.0137
  1/128   1/589,800      1/8,180       0.0139

ε = 4∆x
  ∆x      ∆tEuler        ∆tImEx        ∆tEuler/∆tImEx
  1/32    1/133,100      1/84.2        6.33 · 10⁻⁴
  1/64    1/538,800      1/466         8.65 · 10⁻⁴
  1/128   1/2,163,000    1/2,010       9.29 · 10⁻⁴

ε = 8∆x
  ∆x      ∆tEuler        ∆tImEx        ∆tEuler/∆tImEx
  1/32    1/523,000      1/9.90        1.89 · 10⁻⁵
  1/64    1/2,107,000    1/84.0        3.99 · 10⁻⁵
  1/128   1/8,451,000    1/466         5.51 · 10⁻⁵

Table 4.1: The experimental data on stability for the ImEx scheme, compared to the data for the Euler Forward scheme. The maximum ∆t of the ImEx scheme increases with increasing ε, which is exactly opposite to the maximum ∆t of the Euler Forward scheme.
Figure 4.1: The maximum ∆t of the ImEx scheme due to the stability condition, for ∆x = 1/128, is shown. It increases with increasing ε, which is exactly opposite to the maximum ∆t of the Euler Forward scheme. Only for ε = 0.25∆x and ε = 0.5∆x are the stability conditions of the Euler Forward scheme favorable; for all other ε, the stability condition of the ImEx scheme is.
From the data in Table 4.1, it can be concluded that for most values of ε, the stability condition is less severe than the stability condition of the Euler Forward scheme. This validates the hypothesis regarding the relation of these stability conditions. However, an interesting fact is that the maximum ∆t due to the stability condition of the ImEx scheme increases with increasing ε, while the maximum ∆t of the Euler Forward scheme decreases with increasing ε. Figure 4.1 indicates that the stability condition of the ImEx scheme scales with ε², instead of with 1/ε², as was the case with the stability condition of the Euler Forward scheme in Equation (3.21).
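The trial-and-error search for the largest stable ∆t can also be automated. The sketch below only illustrates such a search by bisection; run_scheme is an assumed helper, not part of the thesis code, that integrates a given scheme up to a fixed end time with time step dt and returns the final solution vector:

```matlab
% run_scheme is an assumed helper (not part of the thesis code) that integrates
% one of the schemes up to a fixed end time with time step dt and returns the
% final solution vector.
dt_lo = 1e-7;                       % a time step known to be stable
dt_hi = 1e-2;                       % a time step known to be unstable
for k = 1:30
    dt_mid = sqrt(dt_lo*dt_hi);     % geometric midpoint of the bracket
    c_end  = run_scheme(dt_mid);
    if any(~isfinite(c_end)) || max(abs(c_end)) > 10
        dt_hi = dt_mid;             % solution blew up: unstable
    else
        dt_lo = dt_mid;             % solution stayed bounded: stable
    end
end
fprintf('largest stable dt is approximately %.3g\n', dt_lo);
```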
4.1.3 Results
In this subsection, the evolution of the interface with time, using the ImEx discretization scheme, is discussed. Figure 4.2 shows the result for ε/∆x = 1; the plot is very similar to the plot of the evolution using the Euler Forward scheme in Figure 3.3. Other values of ε/∆x also result in plots similar to the ones made with the Euler Forward scheme; therefore they are not shown.
Figure 4.2: The Cahn-Hilliard equation implemented with the ImEx discretization scheme, using ∆x = 1/128, ε = 1/128 and ∆t = 1/40,000. This plot is very similar to the plot, using the Euler Forward scheme, in Figure 3.3.
4.1.4 Quantitative analysis of the results
In Subsection 4.1.3, it was concluded that the evolution of the interface using either the ImEx scheme or the Euler Forward scheme results in the same plots. The steady-state deviation therefore also shows quantitatively similar behavior for both discretization schemes; it is shown in Figure 4.3 for ε/∆x = 1. Apart from some oscillations in the vertical range smaller than approximately 10⁻⁵, Figure 4.3 is identical to Figure 3.7.
Figure 4.3: The steady-state deviation of the ImEx scheme, using ∆x = 1/128, ε = 1/128 and ∆t = 1/40,000. The maximum at x = 0.5 of the solution for t = 0.01 has a value of 0.03547. A great similarity with Figure 3.7 occurs: the ImEx scheme and the Euler Forward scheme have similar results but a different stability condition.
The analyses of the overshoot and the 90%-interval are also similar for either of the two discretization schemes; hence, they are not repeated. It can be concluded that the developed ImEx discretization scheme gives the same quantitative results as the Euler Forward scheme, but has a stability condition that is entirely different and, for most ε, favorable.
4.2 The semi-implicit method of Modified Furihata
The second semi-implicit discretization scheme that is used is a modified form of the one suggested by Furihata in (Furihata, 2000, p. 1-2). Furihata suggests a fully implicit method, which cannot be implemented directly as a linear system in Matlab due to the nonlinear term in the Cahn-Hilliard equation. The modification is applied so that the discretization scheme becomes semi-implicit, as was the case with the ImEx scheme.
4.2.1 Furihata discretization
The original Furihata discretization is given in Equations (4.7) and (4.8). All terms in Equation (4.8) are implicit or semi-implicit, including the nonlinear part (the term with the constant r).
\frac{U_k^{(n+1)} - U_k^{(n)}}{\Delta t}
= \frac{1}{(\Delta x)^2}\left(
\left(\frac{\partial G}{\partial U}\right)_{k-1}^{(n+1/2)} - 2\left(\frac{\partial G}{\partial U}\right)_{k}^{(n+1/2)} + \left(\frac{\partial G}{\partial U}\right)_{k+1}^{(n+1/2)}\right); \qquad (4.7)

with k = 0, 1, ..., N and n = 0, 1, 2, ... and:

\left(\frac{\partial G}{\partial U}\right)_{k}^{(n+1/2)}
= p\,\frac{U_k^{(n+1)} + U_k^{(n)}}{2}
+ r\,\frac{(U_k^{(n+1)})^3 + (U_k^{(n+1)})^2 U_k^{(n)} + U_k^{(n+1)}(U_k^{(n)})^2 + (U_k^{(n)})^3}{4}
+ \frac{q}{(\Delta x)^2}\,\frac{U_{k-1}^{(n+1)} + U_{k-1}^{(n)} - 2\left(U_k^{(n+1)} + U_k^{(n)}\right) + U_{k+1}^{(n+1)} + U_{k+1}^{(n)}}{2}; \qquad (4.8)
Here, U is the concentration and G the chemical potential; this is the notation that is also used by Furihata in his publication, and it is used here as well for convenience and for comparison. Furihata uses three constants, p, r and q, which are chosen later, analogously to the constants in the Euler Forward and ImEx discretization schemes. Equation (4.8) is modified next so that the nonlinear term becomes explicit.
4.2.2 Modified Furihata discretization
The discretization of Modified Furihata is:
\frac{U_k^{(n+1)} - U_k^{(n)}}{\Delta t}
= \frac{1}{(\Delta x)^2}\left(
\left(\frac{\partial G}{\partial U}\right)_{k-1}^{(n+1/2)} - 2\left(\frac{\partial G}{\partial U}\right)_{k}^{(n+1/2)} + \left(\frac{\partial G}{\partial U}\right)_{k+1}^{(n+1/2)}\right); \qquad (4.9)

with k = 0, 1, ..., N and n = 0, 1, 2, ... and:

\left(\frac{\partial G}{\partial U}\right)_{k}^{(n+1/2)}
= p\,\frac{U_k^{(n+1)} + U_k^{(n)}}{2}
+ r\,(U_k^{(n)})^3
+ \frac{q}{(\Delta x)^2}\,\frac{U_{k-1}^{(n+1)} + U_{k-1}^{(n)} - 2\left(U_k^{(n+1)} + U_k^{(n)}\right) + U_{k+1}^{(n+1)} + U_{k+1}^{(n)}}{2}; \qquad (4.10)
The term with r is much simpler after this modification: it is no longer the mean of four different explicit, implicit and semi-implicit terms, but just one explicit term. Equations (4.9) and (4.10) can be written as a system of equations:
(I - \Delta t \cdot A)\,U^{(n+1)} = (I + \Delta t \cdot A)\,U^{(n)} + \Delta t \cdot B(U^{(n)})
\;\Rightarrow\; U^{(n+1)} = (I - \Delta t \cdot A)^{-1}\left((I + \Delta t \cdot A)\,U^{(n)} + \Delta t \cdot B(U^{(n)})\right); \qquad (4.11)
with:

A = \frac{1}{(\Delta x)^2} \times
\begin{pmatrix}
\frac{q}{(\Delta x)^2} - \frac{p}{2} & -\frac{3q}{2(\Delta x)^2} + \frac{p}{2} & \frac{q}{2(\Delta x)^2} & 0 & \cdots & 0\\
-\frac{3q}{2(\Delta x)^2} + \frac{p}{2} & \frac{3q}{(\Delta x)^2} - p & -\frac{2q}{(\Delta x)^2} + \frac{p}{2} & \frac{q}{2(\Delta x)^2} & \ddots & \vdots\\
\frac{q}{2(\Delta x)^2} & -\frac{2q}{(\Delta x)^2} + \frac{p}{2} & \frac{3q}{(\Delta x)^2} - p & -\frac{2q}{(\Delta x)^2} + \frac{p}{2} & \ddots & 0\\
0 & \ddots & \ddots & \ddots & \ddots & \frac{q}{2(\Delta x)^2}\\
\vdots & \ddots & \frac{q}{2(\Delta x)^2} & -\frac{2q}{(\Delta x)^2} + \frac{p}{2} & \frac{3q}{(\Delta x)^2} - p & -\frac{3q}{2(\Delta x)^2} + \frac{p}{2}\\
0 & \cdots & 0 & \frac{q}{2(\Delta x)^2} & -\frac{3q}{2(\Delta x)^2} + \frac{p}{2} & \frac{q}{(\Delta x)^2} - \frac{p}{2}
\end{pmatrix}; \qquad (4.12)
and:

B(U^{(n)}) = \frac{r}{(\Delta x)^2}
\begin{pmatrix}
-(U_1^{(n)})^3 + (U_2^{(n)})^3\\
\vdots\\
(U_{j-2}^{(n)})^3 - 2(U_{j-1}^{(n)})^3 + (U_j^{(n)})^3\\
(U_{j-1}^{(n)})^3 - 2(U_j^{(n)})^3 + (U_{j+1}^{(n)})^3\\
(U_j^{(n)})^3 - 2(U_{j+1}^{(n)})^3 + (U_{j+2}^{(n)})^3\\
\vdots\\
(U_{N-1}^{(n)})^3 - (U_N^{(n)})^3
\end{pmatrix}; \qquad (4.13)
with q = -\epsilon^2, p = -1 and r = 1. Equation (4.11) is suitable for implementation in Matlab. The boundary conditions are already incorporated in A and B(U^{(n)}) and the discretized initial condition is as before:

U_k^{t=0} =
\begin{cases}
-1 & \text{for } 0 \le k \le \tfrac{1}{2}N\\
\phantom{-}1 & \text{for } \tfrac{1}{2}N + 1 \le k \le N
\end{cases}; \qquad (4.14)
This concludes the discretization of the Modified Furihata scheme.
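As with the ImEx scheme, a minimal Matlab sketch of the Modified Furihata update in Equation (4.11) could look as follows; the variable names, parameter values and the construction of the difference matrices are illustrative assumptions and are not taken from the thesis code:

```matlab
% Illustrative sketch only: variable names and parameter values are assumptions.
N = 128; dx = 1/N; epsi = 1/128; dt = 1/110000;
q = -epsi^2; p = -1; r = 1;                 % constants as chosen above

e  = ones(N,1);
D2 = spdiags([e -2*e e], -1:1, N, N);       % second-difference matrix
D2(1,1) = -1; D2(N,N) = -1;                 % no-flux boundary rows
D4 = D2*D2;                                 % fourth-difference matrix, same boundaries
A  = (1/dx^2)*((p/2)*D2 + (q/(2*dx^2))*D4); % matrix A of Equation (4.12)

U  = [-ones(N/2,1); ones(N/2,1)];           % initial condition (4.14)
Ml = speye(N) - dt*A;                       % (I - dt*A)
Mr = speye(N) + dt*A;                       % (I + dt*A)
for n = 1:11000                             % march to t = 0.1
    BU = (r/dx^2)*(D2*(U.^3));              % explicit vector B(U^n), Equation (4.13)
    U  = Ml \ (Mr*U + dt*BU);               % Modified Furihata update, Equation (4.11)
end
```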
4.2.3 Stability condition
As in previous sections, a trial-and-error approach is applied to the numerical experiments to obtain the stability conditions. Since the Modified Furihata discretization scheme is semi-implicit, like the ImEx scheme, the stability condition is expected to be less severe than with the Euler Forward scheme. However, at this point there is no hypothesis regarding the relation between the stability conditions of the ImEx and Modified Furihata schemes. Table 4.2 shows the results of the trial-and-error approach, using the Modified Furihata scheme. The relation between the maximum values of ∆t for different ε is again totally different from the increasing or decreasing ∆t which were discussed in previous sections. Using the Modified Furihata scheme, the stability condition is relatively constant for most of the considered ε. Changes in ∆tMF are only observed at relatively large ε. This is shown, together with the stability conditions of the Euler Forward scheme and the ImEx scheme, in Figure 4.4.
ε = 0.25∆x
  ∆x      ∆tEuler        ∆tMF          ∆tEuler/∆tMF
  1/32    1/4,572        1/6,094       1.333
  1/64    1/18,400       1/24,540      1.334
  1/128   1/73,720       1/98,290      1.333

ε = 0.5∆x
  ∆x      ∆tEuler        ∆tMF          ∆tEuler/∆tMF
  1/32    1/6,055        1/6,084       1.005
  1/64    1/24,510       1/24,540      1.001
  1/128   1/98,280       1/98,300      1.000

ε = ∆x
  ∆x      ∆tEuler        ∆tMF          ∆tEuler/∆tMF
  1/32    1/12,040       1/6,080       0.505
  1/64    1/48,930       1/24,550      0.502
  1/128   1/196,600      1/98,500      0.501

ε = 2∆x
  ∆x      ∆tEuler        ∆tMF          ∆tEuler/∆tMF
  1/32    1/36,150       1/6,010       0.166
  1/64    1/147,000      1/24,600      0.167
  1/128   1/589,800      1/98,400      0.167

ε = 4∆x
  ∆x      ∆tEuler        ∆tMF          ∆tEuler/∆tMF
  1/32    1/133,100      1/5,570       4.18 · 10⁻²
  1/64    1/538,800      1/24,280      4.51 · 10⁻²
  1/128   1/2,163,000    1/97,950      4.53 · 10⁻²

ε = 8∆x
  ∆x      ∆tEuler        ∆tMF          ∆tEuler/∆tMF
  1/32    1/523,000      1/2,570       4.91 · 10⁻³
  1/64    1/2,107,000    1/21,650      1.03 · 10⁻²
  1/128   1/8,451,000    1/89,500      1.06 · 10⁻²

Table 4.2: The experimental data on stability for the Modified Furihata scheme, compared to the data for the Euler Forward scheme. The stability condition of the Modified Furihata scheme is relatively constant for different values of ε. It is also less severe compared to the stability condition of Euler Forward, for most ε.
Figure 4.4: The stability conditions of the three used discretization schemes are all totally different. The Modified Furihata scheme has a relatively constant stability condition, while the ImEx and Euler Forward schemes have an increasing and a decreasing maximum ∆t for increasing ε, respectively.
4.2.4 Results
In this subsection, the evolution of the interface with time, using the Modified Furihata scheme, is discussed. Figure 4.5 shows the result for ε/∆x = 1, a figure which has great similarity with Figure 3.3 of the evolving interface, using the Euler Forward scheme, and with Figure 4.2, using the ImEx scheme. Since plots with other values of ε/∆x also show great similarity with plots made using the other discretization schemes, they are not shown here.
Figure 4.5: The evolving interface, implemented with the Modified Furihata scheme, using ∆x = 1/128, ε = 1/128 and ∆t = 1/110,000, gives the same result as the Euler Forward scheme did in Figure 3.3.
4.2.5 Quantitative analysis of the results
In Subsection 4.2.4, it was concluded that the results concerning the evolving interface are the same for all three discretizations used in this chapter. Therefore, the steady-state deviation for different ε, as well as the interface overshoot and the 90%-interval, are also the same for all three discretizations. In Figure 4.6, this is shown for the steady-state deviation of the Modified Furihata scheme, using ε/∆x = 1.
Figure 4.6: The steady-state deviation of the Modified Furihata scheme is very similar to the steady-state deviation in Figure 3.7, using the Euler Forward scheme, or in Figure 4.3, using the ImEx scheme. In this graph, ∆x = 1/128, ε = 1/128 and ∆t = 1/110,000 are used, and the maximum at x = 0.5 of the solution for t = 0.01 has a value of 0.03548.
4.3 The implicit iterative method involving the Gâteaux derivative
In this section, the implicit iterative method involving the Gâteaux derivative is discussed. This method is fully implicit, which is a step forward in stability compared to the methods that have been used in previous sections. Another difference with previous methods is that this method uses iterations within each time step, making it well suited for the implementation of a nonlinear differential equation such as the Cahn-Hilliard equation. The function of the Gâteaux derivative itself is to linearize the Cahn-Hilliard equation so that an implicit implementation is possible. The Gâteaux derivative method is closely related to the Newton-Raphson method, in the sense that it uses a similar derivative formulation and it requires an initial guess for every time step with which the algorithm can calculate the next iteration. With that iteration, another iteration can be calculated, and so on. When successive iterations have satisfied a convergence criterion, the algorithm continues with the next time step and repeats the iteration algorithm.
4.3.1 Algorithm of the Gâteaux derivative method
The Gâteaux derivative is used as formulated in (Dugger and Lambert, 2013). The algorithm, which is
used to implement the Cahn-Hilliard equation, states the following about the Gâteaux derivative of the
Cahn-Hilliard equation:
\frac{\partial}{\partial y}\left[F(c^{n+1} + sy,\, c^n)\right]_{y=0} = -F(c^{n+1}, c^n); \qquad (4.15)
Here, y is the parameter with respect to which the derivative is taken in the Gâteaux derivative, and F(c^{n+1}, c^n) is the Cahn-Hilliard equation in a representation with all terms on one side:
F(c^{n+1}, c^n) = \frac{c^{n+1} - c^n}{\Delta t} - \frac{\partial^2}{\partial x^2}\left[(c^{n+1})^3 - c^{n+1} - \epsilon^2\,\frac{\partial^2 c^{n+1}}{\partial x^2}\right] = 0; \qquad (4.16)
and s is defined as:
s = c_{p+1}^{n+1} - c_p^{n+1}; \qquad (4.17)
Here, p+1 is the indicator for the next iteration. Since information about iteration p+1 is not at hand in
iteration p, s is unknown, initially. In equations (4.15), (4.16) and (4.17), the discretization with respect
to n is already implemented. The discretization with respect to x is implemented later. Substituting
Equation (4.16) in Equation (4.15) gives:
\frac{\partial}{\partial y}\left[\frac{c^{n+1} + sy - c^n}{\Delta t} - \frac{\partial^2}{\partial x^2}\left((c^{n+1} + sy)^3 - (c^{n+1} + sy) - \epsilon^2\,\frac{\partial^2 (c^{n+1} + sy)}{\partial x^2}\right)\right]_{y=0}
= -\frac{c^{n+1} - c^n}{\Delta t} + \frac{\partial^2}{\partial x^2}\left[(c^{n+1})^3 - c^{n+1} - \epsilon^2\,\frac{\partial^2 c^{n+1}}{\partial x^2}\right]; \qquad (4.18)
In Equation (4.18), neither c nor s is y-dependent, by the definition of the Gâteaux derivative. The derivatives with respect to y and with respect to x can therefore be swapped, so that ∂/∂y can be evaluated:
\frac{s}{\Delta t} - \frac{\partial^2}{\partial x^2}\left[3s\,(c^{n+1} + sy)^2 - s - \epsilon^2\,\frac{\partial^2 s}{\partial x^2}\right]_{y=0}
= -\frac{c^{n+1} - c^n}{\Delta t} + \frac{\partial^2}{\partial x^2}\left[(c^{n+1})^3 - c^{n+1} - \epsilon^2\,\frac{\partial^2 c^{n+1}}{\partial x^2}\right]; \qquad (4.19)
Substituting y = 0:
\frac{s}{\Delta t} - \frac{\partial^2}{\partial x^2}\left[3s\,(c^{n+1})^2 - s - \epsilon^2\,\frac{\partial^2 s}{\partial x^2}\right]
= -\frac{c^{n+1} - c^n}{\Delta t} + \frac{\partial^2}{\partial x^2}\left[(c^{n+1})^3 - c^{n+1} - \epsilon^2\,\frac{\partial^2 c^{n+1}}{\partial x^2}\right]; \qquad (4.20)
By the definition of the Gâteaux derivative, Equation (4.20) is linear in terms of s. A discretization for x can now be applied so that s can be calculated. As has been discussed in Section 3.1, two different ways of discretizing ∂²/∂x² can be applied. Either the central difference discretization is applied directly to ∂²/∂x², or the second derivative is evaluated before discretizing the new expression of Equation (4.20). Here, both discretization methods are applied and conclusions are drawn accordingly.
4.3.2 Direct central difference discretization
Equation (4.20) can be discretized directly, using an indicator i for the x-discretization:

\frac{s_i}{\Delta t} - \left[\left(3(c_i^{n+1})^2 - 1\right)\frac{s_{i-1} - 2s_i + s_{i+1}}{(\Delta x)^2} - \epsilon^2\,\frac{s_{i-2} - 4s_{i-1} + 6s_i - 4s_{i+1} + s_{i+2}}{(\Delta x)^4}\right]
= -\frac{c_i^{n+1} - c_i^n}{\Delta t} + \frac{(c_{i-1}^{n+1})^3 - 2(c_i^{n+1})^3 + (c_{i+1}^{n+1})^3}{(\Delta x)^2} - \frac{c_{i-1}^{n+1} - 2c_i^{n+1} + c_{i+1}^{n+1}}{(\Delta x)^2} - \epsilon^2\,\frac{c_{i-2}^{n+1} - 4c_{i-1}^{n+1} + 6c_i^{n+1} - 4c_{i+1}^{n+1} + c_{i+2}^{n+1}}{(\Delta x)^4}; \qquad (4.21)
Equation (4.21) can be written as a system of equations for all s_i; the right-hand side is a constant in terms of s_i:
\frac{s}{\Delta t} - A \cdot s = -\frac{c^{n+1} - c^n}{\Delta t} + B(c^{n+1})
\;\Leftrightarrow\; \left(\frac{1}{\Delta t}I - A\right)\cdot s = -\frac{c^{n+1} - c^n}{\Delta t} + B(c^{n+1})
\;\Leftrightarrow\; s = -\left(\frac{1}{\Delta t}I - A\right)^{-1}\left(\frac{c^{n+1} - c^n}{\Delta t} + B(c^{n+1})\right); \qquad (4.22)
with:

A = \frac{1}{(\Delta x)^2} \times
\begin{pmatrix}
\ddots & \ddots & \ddots & \ddots & \ddots & \\
\cdots & -\frac{\epsilon^2}{(\Delta x)^2} & \frac{4\epsilon^2}{(\Delta x)^2} + \left(3(c_i^{n+1})^2 - 1\right) & -\frac{6\epsilon^2}{(\Delta x)^2} - 2\left(3(c_i^{n+1})^2 - 1\right) & \frac{4\epsilon^2}{(\Delta x)^2} + \left(3(c_i^{n+1})^2 - 1\right) & -\frac{\epsilon^2}{(\Delta x)^2} & \cdots\\
 & & \ddots & \ddots & \ddots & \ddots & \ddots
\end{pmatrix}; \qquad (4.23)
Here, only the structure of the rows of A in the interior of the domain is shown, due to its size; row i carries the concentration-dependent coefficient (3(c_i^{n+1})^2 - 1), and the first and last rows are modified according to the boundary conditions of Section 3.2. The vector B(c^{n+1}) is defined as:
B(c^{n+1}) = \frac{1}{(\Delta x)^2} \times
\begin{pmatrix}
-(c_1^{n+1})^3 + (c_2^{n+1})^3 - \left(-c_1^{n+1} + c_2^{n+1}\right) - \epsilon^2\,\dfrac{2c_1^{n+1} - 3c_2^{n+1} + c_3^{n+1}}{(\Delta x)^2}\\
\vdots\\
(c_{i-2}^{n+1})^3 - 2(c_{i-1}^{n+1})^3 + (c_i^{n+1})^3 - \left(c_{i-2}^{n+1} - 2c_{i-1}^{n+1} + c_i^{n+1}\right) - \epsilon^2\,\dfrac{c_{i-3}^{n+1} - 4c_{i-2}^{n+1} + 6c_{i-1}^{n+1} - 4c_i^{n+1} + c_{i+1}^{n+1}}{(\Delta x)^2}\\
(c_{i-1}^{n+1})^3 - 2(c_i^{n+1})^3 + (c_{i+1}^{n+1})^3 - \left(c_{i-1}^{n+1} - 2c_i^{n+1} + c_{i+1}^{n+1}\right) - \epsilon^2\,\dfrac{c_{i-2}^{n+1} - 4c_{i-1}^{n+1} + 6c_i^{n+1} - 4c_{i+1}^{n+1} + c_{i+2}^{n+1}}{(\Delta x)^2}\\
(c_i^{n+1})^3 - 2(c_{i+1}^{n+1})^3 + (c_{i+2}^{n+1})^3 - \left(c_i^{n+1} - 2c_{i+1}^{n+1} + c_{i+2}^{n+1}\right) - \epsilon^2\,\dfrac{c_{i-1}^{n+1} - 4c_i^{n+1} + 6c_{i+1}^{n+1} - 4c_{i+2}^{n+1} + c_{i+3}^{n+1}}{(\Delta x)^2}\\
\vdots\\
(c_{N-1}^{n+1})^3 - (c_N^{n+1})^3 - \left(c_{N-1}^{n+1} - c_N^{n+1}\right) - \epsilon^2\,\dfrac{c_{N-2}^{n+1} - 3c_{N-1}^{n+1} + 2c_N^{n+1}}{(\Delta x)^2}
\end{pmatrix}; \qquad (4.24)
Substituting a rewritten version of the definition of s from Equation (4.17) in Equation (4.22):
c_{p+1}^{n+1} = c_p^{n+1} + s = c_p^{n+1} - \left(\frac{1}{\Delta t}I - A\right)^{-1}\left(\frac{c^{n+1} - c^n}{\Delta t} + B(c^{n+1})\right); \qquad (4.25)
With Equation (4.25), the algorithm is completed for one iteration. The number of iterations that is
used is determined by how fast the algorithm converges to a solution within one time step. The stop
condition for an iteration involves the root mean square and some tolerance ψ:
\sqrt{\sum_{i=1}^{N}\frac{1}{N}\left(c_{p+1,i}^{n+1} - c_{p,i}^{n+1}\right)^2} < \psi; \qquad (4.26)
The value of ψ, which is important for the accuracy of the solution, is given later. If the stop condition is satisfied, the algorithm moves on to the next time step. For every time step, an initial guess, c^{n+1}_{p=0}, is needed, which is chosen to be the result of the previous time step, c^n. This concludes the discretization and the algorithm.
4.3.3 Indirect central difference discretization
Equation (4.20) can be rewritten as:

\frac{s}{\Delta t} - \left[6\left(\left(\frac{\partial c^{n+1}}{\partial x}\right)^2 + c^{n+1}\,\frac{\partial^2 c^{n+1}}{\partial x^2}\right)s + 12\,c^{n+1}\,\frac{\partial c^{n+1}}{\partial x}\,\frac{\partial s}{\partial x} + \left(3(c^{n+1})^2 - 1\right)\frac{\partial^2 s}{\partial x^2} - \epsilon^2\,\frac{\partial^4 s}{\partial x^4}\right]
= -\frac{c^{n+1} - c^n}{\Delta t} + 6\,c^{n+1}\left(\frac{\partial c^{n+1}}{\partial x}\right)^2 + \left(3(c^{n+1})^2 - 1\right)\frac{\partial^2 c^{n+1}}{\partial x^2} - \epsilon^2\,\frac{\partial^4 c^{n+1}}{\partial x^4}; \qquad (4.27)
To Equation (4.27), the x-discretization, using an indicator i, can be applied:
 
\frac{s_i}{\Delta t} - \left[6\left(\left(\frac{c_i^{n+1} - c_{i-1}^{n+1}}{\Delta x}\right)^2 + c_i^{n+1}\,\frac{c_{i-1}^{n+1} - 2c_i^{n+1} + c_{i+1}^{n+1}}{(\Delta x)^2}\right)s_i + 12\,c_i^{n+1}\,\frac{c_i^{n+1} - c_{i-1}^{n+1}}{\Delta x}\,\frac{s_i - s_{i-1}}{\Delta x} + \left(3(c_i^{n+1})^2 - 1\right)\frac{s_{i-1} - 2s_i + s_{i+1}}{(\Delta x)^2} - \epsilon^2\,\frac{s_{i-2} - 4s_{i-1} + 6s_i - 4s_{i+1} + s_{i+2}}{(\Delta x)^4}\right]
= -\frac{c_i^{n+1} - c_i^n}{\Delta t} + 6\,c_i^{n+1}\left(\frac{c_i^{n+1} - c_{i-1}^{n+1}}{\Delta x}\right)^2 + \left(3(c_i^{n+1})^2 - 1\right)\frac{c_{i-1}^{n+1} - 2c_i^{n+1} + c_{i+1}^{n+1}}{(\Delta x)^2} - \epsilon^2\,\frac{c_{i-2}^{n+1} - 4c_{i-1}^{n+1} + 6c_i^{n+1} - 4c_{i+1}^{n+1} + c_{i+2}^{n+1}}{(\Delta x)^4}; \qquad (4.28)
Equation (4.28) can again be written as a system of equations for all si , like in Equation (4.22):
s = -\left(\frac{1}{\Delta t}I - A\right)^{-1}\left(\frac{c^{n+1} - c^n}{\Delta t} + B(c^{n+1})\right); \qquad (4.29)
Here, A and B(cn+1 ) are analogous to Equation (4.23) and Equation (4.24), respectively, but with the
matrix coefficients that follow from Equation (4.28). Again, substituting the rewritten version of the
definition of s from Equation (4.17) in Equation (4.29):
c_{p+1}^{n+1} = c_p^{n+1} + s = c_p^{n+1} - \left(\frac{1}{\Delta t}I - A\right)^{-1}\left(\frac{c^{n+1} - c^n}{\Delta t} + B(c^{n+1})\right); \qquad (4.30)
With Equation (4.30), the algorithm is completed again. The stop condition is the same as in Equation (4.26) and the initial guess for each time step is c^{n+1}_{p=0} = c^n. This concludes the discretization and the algorithm.
4.3.4 Results
In this section, the results of implementing the discretizations of Subsections 4.3.2 and 4.3.3 are discussed. Unlike the results for the ImEx and Modified Furihata schemes in Sections 4.1 and 4.2, respectively, the results for the evolving interface are not as desired here. In Figures 4.7 and 4.8, the evolving interface is shown for the discretizations in Subsections 4.3.2 and 4.3.3, respectively.
Figure 4.7: The Cahn-Hilliard equation as implemented using the discretization from Subsection 4.3.2, with ∆x = 1/128, ε = 1/128, ∆t = 1/30,000 and stop condition ψ = 1/1,000. The solutions are not as desired, since they dampen with time and do not converge to the theoretical steady-state solution.
Figure 4.8: The Cahn-Hilliard equation as implemented using the discretization from Subsection 4.3.3, with ∆x = 1/128, ε = 1/128, ∆t = 1/30,000 and stop condition ψ = 1/10,000. The solutions dampen more slowly than in Figure 4.7, but they still dampen, and they do not converge to the steady-state solution.
4.3.5 Qualitative analysis of the Gâteaux derivative method
In Subsection 4.3.4, it is shown that the iterative method involving the Gâteaux derivative did not work for either of the two discretizations. In both cases, as can be seen in Figures 4.7 and 4.8, the solutions dampen with time and do not converge to the theoretical steady-state solution. Instead, the solutions converge to the line c = 0; they behave like positive diffusion solutions. Figure 4.9 shows this for the discretization in Subsection 4.3.2 for several larger values of t.
Figure 4.9: The Cahn-Hilliard equation as implemented using the same discretization and parameter
values as used for Figure 4.7. The large values of t show convergence to the line c = 0.
It can be concluded that both the direct and the indirect discretizations from Subsections 4.3.2 and 4.3.3, respectively, do not give the desired results. Extensive research and debugging has been done to find out why. Using the indirect discretization, making a specific term zero might give an answer to the question which term causes the positive diffusion. This has been done with, for example, the terms 6((∂c^{n+1}/∂x)^2 + c^{n+1} ∂^2 c^{n+1}/∂x^2) = 0, 12 c^{n+1} ∂c^{n+1}/∂x = 0 and (3(c^{n+1})^2 − 1) = 0, and with all possible combinations of making several terms zero and several nonzero. Unfortunately, the precise reason for the positive diffusive behavior was not found. It is presumed that negative diffusive behavior only occurs in a specific range of c, as has been discussed in Section 3.1. Only for some c does the second-order derivative of c (or of s, as is the case with the Gâteaux derivative method) carry a minus sign. Therefore, a mistake with a minus sign can be the reason that the direction of the diffusion is wrong.
5 The spreading droplet
In this chapter, the implementation, validation and the results of different parametric studies on the dynamic boundary condition for a moving contact line are discussed. This is done using the spreading droplet case, which is a two-dimensional convection-diffusion problem that is solved using the coupled Cahn-Hilliard-Navier-Stokes equation. The theory regarding the dynamic boundary condition and the spreading droplet has been discussed briefly in Section 1.5. First, the implementation of the boundary condition within the OpenFOAM framework is discussed. Next, a validation study is done in which the implementation is compared to the expected behavior of a known system from literature. Finally, parametric studies are done, in which the behavior of the spreading droplet is considered with varying surface tension, droplet interface width and viscosity.
The dynamic boundary condition is stated in Equation (1.20). For convenience, it is repeated here:
\frac{\partial \phi}{\partial t} + \mathbf{u}\cdot\nabla\phi = -\frac{1}{D_w}\left(\epsilon\,\nabla\phi + \frac{\phi(\phi - 1)\cos(\theta)}{\sqrt{2}}\right); \qquad (5.1)
Here, φ is analogous to the concentration c, u is the velocity of the fluid at the solid surface, θ is the dynamic contact angle, Dw is analogous to a friction factor at the contact line between the fluid and the solid, and ε is a phenomenological parameter.
5.1 Implementation in OpenFOAM
This section has been removed intentionally, for publication reasons, at the request of my supervisor M.Sc. H.G. Pimpalgaonkar.
5.2 Validation of the implementation
In Section 5.1, the dynamic boundary condition was discretized so that the implementation in OpenFOAM is possible. In this section, the case that is implemented is discussed more thoroughly than before and the results are qualitatively compared to the data from (Bird et al., 2008).
The initial condition of the spreading droplet was already shown in Figure 1.3. In Figure 5.1, the
spreading droplet is shown for t > 0 s. The droplet sits on a solid surface along which it is spreading.
On the left and on the right side of the figure, two solid walls are placed. The upper limit of the
computational domain is modeled as atmosphere. The spreading radius, r, is an important indication
of the status of the process of spreading. It is defined as the distance along the liquid-solid contact line
from the center of the droplet to the liquid-gas interface, where φ = 0, on the left or the right side, as
shown in Figure 5.1. The rate of spreading is determined by Dw from Equation (5.1), by the contact
angle in steady-state, θs , and by the surface tension between the liquid and the gas, σ. The spreading
radius as a function of time is used as a qualitative analysis function for the droplet spreading during
the rest of this chapter.
Figure 5.1: The spreading of a droplet with initial radius L = 7.8 · 10⁻⁴ m and initial contact angle θ = 180.0° as modeled using OpenFOAM with Dw = 2.0 kg m⁻¹ s⁻¹, ε = 0.6 · 10⁻⁴ m, θs = 3.0° and σ = 0.07 J m⁻². The spreading radius here, at t = 2.5 · 10⁻⁴ s, is r = 4.47 · 10⁻⁴ m. The spreading continues in the direction of the arrows.
The actual experimental data is available from (Bird et al., 2008) and, initially, it was planned to use this data to compare with the results obtained here, rather than discussing the results qualitatively. However, it was realized very late that the system being modeled here is not the same as the system in the literature. One of the reasons for this is that the spherical droplet in the initial condition could never have been the initial condition of the experiments, since there the droplet was deposited on the surface by, for example, a pipette. This is not implemented here. More effort is needed to be able to compare the results of this research with the literature data; hence, the analyses here are strictly qualitative.
Figures 5.2, 5.3 and 5.4 show the spreading radii for the validation cases with θs = 3.0°, θs = 43.0° and θs = 117.0°, respectively. In each individual plot, solutions with various values of Dw are shown. For all figures, the spreading radius has a strange offset on the vertical axis, making it around r = 0.2 · 10⁻³ m initially. It would make sense if the spreading radius started from r = 0, since this is the value of the spreading radius in the initial condition, but it does not.
Figure 5.2: The spreading radius as a function of time for θs = 3.0°, ε = 6.0 · 10⁻⁵ m, σ = 0.07 J m⁻² and various values of Dw. The offset of r = 0.2 · 10⁻³ m on the left does not belong there; however, it does not prevent the qualitative analysis.
Figure 5.3: The spreading radius as a function of time for θs = 43.0°, ε = 6.0 · 10⁻⁵ m, σ = 0.07 J m⁻² and various values of Dw.
Figure 5.4: The spreading radius as a function of time for θs = 117.0°, ε = 6.0 · 10⁻⁵ m, σ = 0.07 J m⁻² and various values of Dw.
As explained in the introduction of this chapter, Dw is analogous to the friction factor at the contact
line between the fluid and the solid. A higher value of Dw implies a higher friction at this contact line,
which causes a slower spreading rate of the droplet. This is indeed observed in Figures 5.2 - 5.4. Also,
all solution lines approximately follow the same curve, similar to a square root curve. In Figure 5.5, the
relation between a nondimensionalized spreading radius and a nondimensionalized time is shown on a
logarithmic scale for all θs . The nondimensionalized spreading radius and time are as suggested by (Bird
et al., 2008):
R = \frac{r - r_0}{L}; \qquad (5.2)
where R is the nondimensionalized spreading radius, r0 is the initial spreading radius (the offset) and L
is the initial radius of the droplet.
\tau = \frac{t}{\sqrt{\rho_l L^3 / \sigma}}; \qquad (5.3)
where τ is the nondimensionalized time and ρl is the density of the liquid in the droplet.
Figure 5.5: The nondimensionalized spreading radius as a function of the nondimensionalized time for ε = 6.0 · 10⁻⁵ m, Dw = 1.0 kg m⁻¹ s⁻¹, σ = 0.07 J m⁻² and various values of θs. The solutions should show approximately R = c · τ^0.5 for θs = 3.0° and θs = 43.0°, and should show R = c · τ^0.33 for θs = 117.0°.
The slopes of the nondimensionalized spreading radius as a function of the nondimensionalized time in Figure 5.5 should confirm the hypothesis of the square root behavior of the spreading rate with time, according to (Bird et al., 2008). However, the slopes of the plots in Figure 5.5, as listed in Table 5.1, are not as close to the hypothetical values as desired for θs = 3.0° and θs = 43.0°.
  θs        Slope of R(τ)    Hypothetical slope of R(τ)
  3.0°      0.42             0.5
  43.0°     0.41             0.5
  117.0°    0.35             0.33
Table 5.1: The slopes of R(τ) in Figure 5.5 compared to the hypothetical slopes, according to (Bird et al., 2008), are not as close as desired for θs = 3.0° and θs = 43.0°. The slopes from Figure 5.5 are calculated using a power-law fit of the form y = a · x^b.
5.3 Parametric study of the dynamic boundary condition
In this section, the results of parametric studies, which are performed on the spreading rate using the dynamic boundary condition implementation in OpenFOAM, are discussed. These studies are performed to test a few hypotheses regarding modeling the surface heterogeneities. The tests concern the effect of the interface width ε, the surface tension σ and the ratio σ/Dw. Because of the offset in the figures in the previous section and the partially failed hypothesis regarding the square root behavior in Figure 5.5, the qualitative relations between the solutions for the varying parameters are considered, rather than the absolute values of the solutions.
First, the effect of the surface tension on the spreading rate is considered. Figure 5.6 shows the spreading radius as a function of time for varying σ. Increasing σ results in an increasing spreading radius, which is explained by the fact that the pressure difference ∆p between the fluid and the gas is proportional to σ and the spreading radius is proportional to ∆p. In other words, a higher surface tension results in a higher ∆p, which causes a greater force to be applied to the contact line of the droplet, which results in faster spreading. Applying a power-law fit to the plots of Figure 5.6 results in powers of 0.58 − 0.60 for all σ. This is again not the desired square root behavior.
Figure 5.6: The spreading radius for θs = 3.0°, ε = 6.0 · 10⁻⁵ m, Dw = 2.0 kg m⁻¹ s⁻¹ and various values of σ. Due to the proportionality between the pressure difference between the fluid and the gas and the surface tension, the spreading radius increases with increasing σ.
Next, the effect of the interface width on the spreading rate is considered. Figure 5.7 shows the spreading radius as a function of time for varying ε. The solutions for different ε are considered to be the same, within a certain inaccuracy. This is explained by the fact that ε only determines the interface width, which has no direct influence on the spreading rate. The difference in interface width can be seen best in Figure 5.8: in Figure 5.8a, ε = 0.6 · 10⁻⁴ m, while in Figure 5.8b, ε = 1.8 · 10⁻⁴ m. The transition between the red fluid and the blue gas, indicating the liquid-gas interface, is three times as wide for ε = 1.8 · 10⁻⁴ m as for ε = 0.6 · 10⁻⁴ m.
Figure 5.7: The spreading radius for θs = 3.0°, σ = 0.07 J m⁻², Dw = 2.0 kg m⁻¹ s⁻¹ and various values of ε. The parameter ε determines the interface width, which has no influence on the spreading radius. Therefore, the solutions are the same, within a certain inaccuracy.
(a) ε = 0.6 · 10⁻⁴ m (b) ε = 1.8 · 10⁻⁴ m
Figure 5.8: Two spreading droplets with Dw = 2.0 kg m⁻¹ s⁻¹, θs = 3.0° and σ = 0.07 J m⁻² at t = 2.5 · 10⁻⁴ s. In Figure 5.8a, ε = 0.6 · 10⁻⁴ m and in Figure 5.8b, ε = 1.8 · 10⁻⁴ m. The interface in Figure 5.8b is three times wider than the interface in Figure 5.8a.
Finally, the effect of σ/Dw on the spreading rate is discussed. According to (Bird et al., 2008), for a given droplet, the dynamics of the droplet spreading is determined by σ/Dw. To test this, two different cases were simulated. In these cases, σ as well as Dw was increased by a factor of four, hence σ/Dw was constant. Figure 5.9 shows the spreading radius as a function of time for this test. Again, the solutions are considered to be the same, within a certain inaccuracy. This happens because increasing σ alone would lead to an increase in the force applied to the contact line of the droplet, which causes a higher spreading rate. However, proportionally increasing Dw cancels this effect, because it increases the friction at the contact line.
Figure 5.9: The spreading radius for θs = 3.0°, ε = 6.0 · 10⁻⁵ m and a constant value of σ/Dw = 5.0 · 10⁻³ m s⁻¹. The solutions are considered to be the same, within a certain inaccuracy, because increasing σ and Dw proportionally causes their effects to cancel.
The parametric study of varying ε, varying σ and constant σ/Dw was successful: all solutions behaved as expected, apart from the square root behavior, which is not satisfied.
6 Recommendations
With the knowledge of the results of this research, several recommendations can be made concerning future research. These recommendations focus on the topics of Chapters 4 and 5.
Concerning the semi-implicit discretizations that were discussed in Chapter 4, no theoretical stability analysis was done. If this is done in future research, one can find the relation between the stability of the ImEx scheme, the Modified Furihata scheme and the Euler Forward scheme for a significantly larger range of ε and for different grid sizes.
Concerning the iterative Gâteaux derivative method, some future research is clearly possible. Apart from finding a decisive answer to what causes the positive diffusive behavior, other implicit iterative methods, such as the Newton-Raphson method, can be implemented and the results can be compared. Also, the Gâteaux derivative method can be used to implement a different nonlinear partial differential equation than the Cahn-Hilliard equation; such research might yield interesting results. Finally, a stability analysis regarding the Gâteaux derivative method can be done to learn how quantitatively valuable this method really is.
Concerning the dynamic boundary condition implementation, several improvements of the spreading droplet model can be made so that the model fits better with the experimental data from (Bird et al., 2008). First, the droplet depositing device that was used in the experiments was not modeled here. Modeling this could be done by placing a horizontal wall on top of the droplet, from which the droplet is deposited. Second, the definition of Dw that was used here is different from the definition in the experiments; this can be improved. Third, the offset of r = 0.2 · 10⁻³ m can be removed, not just in the plots, but more specifically in the model in OpenFOAM. This might lead to a more accurate spreading radius. Finally, the grid size used here is different from the implementation by (Carlson et al., 2011); this was realized very late and can be improved in future research.
References
James C. Bird, Shreyas Mandre, and Howard A. Stone. Short-Time Dynamics of Partial Wetting.
Physical Review Letters, 100, 2008.
John W. Cahn and John E. Hilliard. Free Energy of a Nonuniform System. I. Interfacial Free Energy.
Journal of Chemical Physics, 28, 1958.
A. Carlson, M. Do-Quang, and G. Amberg. Dissipation in rapid dynamic wetting. Journal of Fluid
Mechanics, 682, 2011.
Daniel Dugger and Peter J. Lambert. The 1913 paper of René Gâteaux, upon which the modern-day
influence function is based. Journal of Economic Inequality, 12:149–152, 2013.
Daisuke Furihata. A stable and conservative finite difference scheme for the Cahn-Hilliard equation.
Numerische Mathematik, 87, 2000.
Junseok Kim. A continuous surface tension force formulation for diffuse-interface models. Journal of
Computational Physics, 204, 2004.
Dongsun Lee, Joo-Youl Huh, Darae Jeong, Jaemin Shin, Ana Yun, and Junseok Kim. Physical, mathematical, and numerical derivations of the Cahn-Hilliard equation. Computational Materials Science,
81:216–225, 2014.
Daniel V. Schroeder. An Introduction to Thermal Physics. Addison-Wesley, 1999.
Harrie van den Akker and Rob Mudde. Fysische transportverschijnselen. VSSD, 2008.
C. Vuik, P. van Beek, F. Vermolen, and J. van Kan. Numerieke Methoden voor Differentiaalvergelijkingen.
VSSD, 2006.