CAHN-HILLIARD INPAINTING AND A GENERALIZATION FOR GRAYVALUE IMAGES

CAHN-HILLIARD INPAINTING AND A GENERALIZATION FOR GRAYVALUE IMAGES
CAHN-HILLIARD INPAINTING AND A GENERALIZATION FOR
GRAYVALUE IMAGES
MARTIN BURGER∗ , LIN HE† , AND CAROLA-BIBIANE SCHÖNLIEB‡
Abstract. The Cahn-Hilliard equation is a nonlinear fourth order diffusion equation originating in material
science for modeling phase separation and phase coarsening in binary alloys. The inpainting of binary images using
the Cahn-Hilliard equation is a new approach in image processing. In this paper we discuss the stationary state
of the proposed model and introduce a generalization for grayvalue images of bounded variation. This is realized
by using subgradients of the total variation functional within the flow, which leads to structure inpainting with
smooth curvature of level sets.
Key words. Cahn-Hilliard equation, TV minimization, image inpainting
AMS subject classifications. 49J40
1. Introduction. An important task in image processing is the process of filling in missing
parts of damaged images based on the information obtained from the surrounding areas. It is
essentially a type of interpolation and is referred to as inpainting. Given an image f in a suitable
Banach space of functions defined on Ω ⊂ R2 , an open and bounded domain, the problem is to
reconstruct the original image u in the damaged domain D ⊂ Ω, called inpainting domain. In
the following we are especially interested in so called non-texture inpainting, i.e., the inpainting
of structures, like edges and uniformly colored areas in the image, rather than texture.
In the pioneering works of Caselles et al. [14] (with the term disocclusion instead of inpainting)
and Bertalmio et al. [5] partial differential equations have been first proposed for digital nontexture inpainting. The inpainting algorithm in [5] extends the graylevels at the boundary of the
damaged domain continuously in the direction of the isophote lines to the interior. The resulting
scheme is a discrete model based on the nonlinear partial differential equation
ut = ∇⊥ u · ∇∆u,
to be solved inside the inpainting domain D using image information from a small strip around D.
The operator ∇⊥ denotes the perpendicular gradient (−∂y , ∂x ). To avoid the crossing of level lines
inside the inpainting domain, intermediate steps of anisotropic diffusion are implemented into the
model.
In subsequent works variational models, originally derived for the tasks of image denoising, deblurring and segmentation, have been adopted to inpainting. In contrast to former approaches
(like [5]) the proposed variational algorithms are applied to the image on the whole domain Ω.
This procedure has the advantage that inpainting can be carried out for several damaged domains
in the image simultaneously and that possible noise outside the inpainting domain is removed at
the same time. The general form of such a variational inpainting approach is
1
2
û(x) = argminu∈H1 J(u) = R(u) + kλ(f (x) − u(x))kH2 ,
2
where f ∈ H2 (or f ∈ H1 depending on the approach) is the given damaged image and û ∈ H1
is the restored image. H1 , H2 are Banach spaces on Ω and R(u) is the so called regularizing
term R : H1 → R. The function λ is the characteristic function of Ω \ D multiplied by a (large)
constant, i.e., λ(x) = λ0 >> 1 in Ω \ D and 0 in D. Depending on the choice of the regularizing
∗ Institut für Numerische und Angewandte Mathematik,
Fachbereich Mathematik und Informatik,
Westfälische Wilhelms Universität (WWU) Münster, Einsteinstrasse 62, D 48149 Münster, Germany. Email:
[email protected]
† Johann Radon Institute for Computational and Applied Mathematics (RICAM), Austrian Academy of Sciences,
Altenbergerstraße 69 A-4040, Linz, Austria. Email: [email protected]
‡ DAMTP, Centre for Mathematical Sciences, Wilberforce Road, Cambridge CB3 0WA, United Kingdom. Email:
[email protected]
1
2
M. Burger, L. He, and C.-B. Schönlieb
term R(u) and the Banach spaces H1 , H2 various approaches have been developed. The most
famous model is the total variation (TV) model, where R(u) = |Du| (Ω), is the total variation of
u, H1 = BV (Ω) the space of functions of bounded variation (see Appendix A for the definition of
functions of bounded variation) and H2 = L2 (Ω), cf. [19, 17, 34, 35]. A variational model with
2
R
∇u
a regularizing term of higher order derivatives, i.e., R(u) = Ω (1 + ∇ · ( |∇u|
) )|∇u| dx, is the
Euler elastica model [16, 28]. Other examples are the active contour model based on the Mumford
and Shah segmentation [36], and the inpainting scheme based on the Mumford-Shah-Euler image
model [22].
Second order variational inpainting methods (where the order of the method is determined by the
derivatives of highest order in the corresponding Euler-Lagrange equation), like TV inpainting
(cf. [35], [17], [18]) have drawbacks as in the connection of edges over large distances or the
smooth propagation of level lines (sets of image points with constant grayvalue) into the damaged
domain. In an attempt to solve both the connectivity principle and the so called staircasing effect
resulting from second order image diffusions, a number of third and fourth order diffusions have
been suggested for image inpainting.
A variational third order approach to image inpainting is the CDD (Curvature Driven Diffusion)
method [18]. While realizing the Connectivity Principle in visual perception, (i.e., level lines are
connected also across large inpainting domains) the level lines are still interpolated linearly (which
may result in corners in the level lines along the boundary of the inpainting domain). This has
driven Chan, Kang and Shen [16] to a reinvestigation of the earlier proposal of Masnou and Morel
[28] on image interpolation based on Euler´s elastica energy. In their work the authors present the
fourth order elastica inpainting PDE which combines CDD and the transport process of Bertalmio
et. al [5]. The level lines are connected by minimizing the integral over their length and their
squared curvature within the inpainting domain. This leads to a smooth connection of level lines
also over large distances. This can also be interpreted via a second boundary condition, necessary
for an equation of fourth order. Not only the grayvalues of the image are specified on the boundary
of the inpainting domain but also the gradient of the image function, namely the directions of the
level lines are given. Further, also combinations of second and higher order methods exist, e.g.
[27]. The combined technique is able to preserve edges due to the second order part and at the
same time avoids the staircasing effect in smooth regions. A weighting function is used for this
combination.
The main challenge in inpainting with higher order flows is to find simple but effective models
and to propose stable and fast discrete schemes to solve them numerically. To do so also the
mathematical analysis of these approaches is an important point, telling us about solvability and
convergence of the corresponding equations. This analysis can be very hard because often these
equations do not admit a maximum or comparison principle and sometimes do not even have a
variational formulation.
A new approach in the class of fourth order inpainting algorithms is inpainting of binary images
using a modified Cahn-Hilliard equation, as proposed in [7], [8] by Bertozzi, Esedoglu and Gillette.
The inpainted version u of f ∈ L2 (Ω) assumed with any (trivial) extension to the inpainting domain
is constructed by following the evolution of
1
ut = ∆(−∆u + F 0 (u)) + λ(f − u)
in Ω,
(1.1)
where F (u) is a so called double-well potential, e.g., F (u) = u2 (u − 1)2 , and - as before:
(
λ0 Ω \ D
λ(x) =
0
D
is the characteristic function of Ω \ D multiplied by a constant λ0 >> 1. The Cahn-Hilliard
equation is a relatively simple fourth-order PDE used for this task rather than more complex
models involving curvature terms. Still the Cahn-Hilliard inpainting approach has many of the
desirable properties of curvature based inpainting models such as the smooth continuation of level
Cahn-Hilliard and BV-H −1 inpainting
3
lines into the missing domain. In fact in [8] the authors prove that in the limit λ0 → ∞ a stationary
solution of (1.1) solves
∆(∆u − 1 F 0 (u)) = 0
u = f
∇u = ∇f
in D
on ∂D
on ∂D,
(1.2)
for f regular enough (f ∈ C 2 ). Moreover its numerical solution was shown to be of at least an
order of magnitude faster than competing inpainting models, cf. [7].
In [8] the authors prove global existence and uniqueness of a weak solution of (1.1). Moreover,
they derive properties of possible stationary solutions in the limit λ0 → ∞. Nevertheless the
existence of a solution of the stationary equation
1
∆(−∆u + F 0 (u)) + λ(f − u) = 0
in Ω,
(1.3)
remains unaddressed. The difficulty in dealing with the stationary equation is the lack of an
energy functional for (1.1), i.e., the modified Cahn-Hilliard equation (1.1) cannot be represented
by a gradient flow of an energy functional over a certain Banach space. This is because the fidelity
term λ(f − u) isnt symmetric with respect to the H −1 inner product. In fact the most evident
variational approach would be to minimize the functional
Z
1
1
2
(1.4)
( |∇u|2 + F (u)) dx + kλ(u − f )k−1 .
2
2
Ω
This minimization problem exhibits the optimality condition
1
0 = −∆u + F 0 (u) + λ∆−1 (λ(u − f )) ,
which splits into
0 = −∆u + 1 F 0 (u)
in D
0 = −∆u + 1 F 0 (u) + λ20 ∆−1 (u − f ) in Ω \ D.
Hence the minimization of (1.4) translates into a second order diffusion inside the inpainting
domain D, whereas a stationary solution of (1.1) fulfills
0 = ∆(−∆u + 1 F 0 (u))
in D
0 = ∆(−∆u + 1 F 0 (u)) + λ0 (f − u) in Ω \ D.
One challenge of this paper is to extend the analysis from [8] by partial answers to questions concerning the stationary equation (1.3) using alternative methods, namely by fixed point
arguments. We shall prove
Theorem 1.1. Equation (1.3) admits a weak solution in H 1 (Ω) provided λ0 ≥ C 13 , for a
positive constant C depending on |Ω|, |D|, and F only.
We will see in our numerical examples that the condition λ0 ≥ C 13 in Theorem 1.1 is naturally
fulfilled, since in order to obtain good visual results in inpainting approaches λ0 has to be chosen
rather large in general. Note that the same condition also appears in [8] where it is needed to
prove the global existence of solutions of (1.1).
The second goal of this paper is to generalize the Cahn-Hilliard inpainting approach to grayvalue images. This is realized by using subgradients of the TV functional within the flow, which
leads to structure inpainting with smooth curvature of level sets. We motivate this new approach
by a Γ−convergence result for the Cahn-Hilliard energy. In fact we prove that the sequence of
functionals for an appropriate time-discrete Cahn-Hilliard inpainting approach Γ-converges to a
functional regularized with the total variation for binary arguments u = χE , where E is some
Borel measurable subset of Ω. This is stated in more detail in the following Theorem.
4
M. Burger, L. He, and C.-B. Schönlieb
Theorem 1.2. Let f, v ∈ L2 (Ω) be two given functions, and τ > 0 a positive parameter. Let
further k·k−1 be the norm in H −1 (Ω), defined in more detail in the Notation section, and |Du| (Ω)
denote the total variation of the distributional derivative Du (cf. Appendix B). Then the sequence
of functionals
2
Z λ
1
1
λ0 λ
2
2
u− f − 1−
J (u, v) =
|∇u| + F (u) dx +
ku − vk−1 +
v
2
2τ
2
λ0
λ0
Ω
−1
Γ−converges for → 0 in the topology of L1 (Ω) to
2
λ
1
λ0 λ
2
u− f − 1−
J(u, v) = T V (u) +
ku − vk−1 +
v
,
2τ
2
λ0
λ0
−1
where
(
C0 |Du| (Ω)
T V (u) =
+∞
if u = χE for some Borel measurable subset E ⊂ Ω
otherwise,
R1p
with χE denotes the characteristic function of E and C0 = 2 0 F (s) ds.
Remark 1.3. Setting v = uk and a minimizer u of the functional J (u, v) to be u = uk+1 ,
the minimization of J can be seen as an iterative approach with stepsize τ to solve (1.1).
Now, by extending the validity of the total variation functional T V (u) from functions u = 0 or
1 to functions |u| ≤ 1 we receive an inpainting approach for grayvalue images rather than binary
images. We shall call this new inpainting approach T V − H −1 inpainting and define it in the
following way: The inpainted image u of f ∈ L2 (Ω), shall evolve via
ut = ∆p + λ(f − u),
p ∈ ∂T V (u),
(1.5)
with
(
|Du| (Ω) if |u(x)| ≤ 1 a.e. in Ω
T V (u) =
+∞
otherwise.
(1.6)
The inpainting domain D and the characteristic function λ(x) are defined as before for the CahnHilliard inpainting approach. The space BV (Ω) is the space of functions of bounded variation
on Ω (cf. Appendix B). Further ∂T V (u) denotes the subdifferential of the functional T V (u) (cf.
Appendix B for the definition). The L∞ bound in the definition of the TV functional (1.6) is quite
natural as we are only considering digital images u whose grayvalue can be scaled to [−1, 1]. It is
further motivated by the Γ− convergence result of Theorem 1.2.
A similar form of the TV-H −1 inpainting approach already appeared in the context of decomposition and restoration of grayvalue images, see for example [38], [33], and [26]. Further, in
Bertalmio at al. [6] an application of the model from [38] to image inpainting has been proposed.
In contrast to the inpainting approach (1.5) the authors in [6] only use a different form of the TVH −1 approach for a decomposition of the image into cartoon and texture prior to the inpainting
process, which is accomplished with the method presented in [5].
Using the same methodology as in the proof of Theorem 1.1 we obtain the following existence
theorem,
Theorem 1.4. Let f ∈ L2 (Ω). The stationary equation
∆p + λ(f − u) = 0,
p ∈ ∂T V (u)
(1.7)
admits a solution u ∈ BV (Ω).
We shall also give a characterization of elements in the subdifferential ∂T V (u) for T V (u)
defined as in (1.6), i.e., T V (u) = |Du| (Ω) + χ1 (u), where
(
0
if |u| ≤ 1 a.e. in Ω
χ1 (u) =
+∞ otherwise.
Cahn-Hilliard and BV-H −1 inpainting
Namely, we shall prove the following theorem.
Theorem 1.5. Let p̃ ∈ ∂χ1 (u). An element
following set of equations
∇u
p = −∇ · |∇u|
∇u
p = −∇ · |∇u|
+ p̃, p̃ ≤ 0
∇u
p = −∇ · |∇u|
+ p̃, p̃ ≥ 0
5
p ∈ ∂T V (u) with |u(x)| ≤ 1 in Ω, fulfills the
a.e. on supp ({|u| < 1})
a.e. on supp ({|u| = −1})
a.e. on supp ({|u| = 1}) .
For (1.5) we additionally give error estimates for the inpainting error and stability information
in terms of the Bregman distance. Let ftrue be the original image and û a stationary solution of
(1.5). In our considerations we use the symmetric Bregman distance defined as
DTsymm
(û, ftrue ) = hû − ftrue , p̂ − ξi ,
V
p̂ ∈ ∂T V (û), ξ ∈ ∂T V (ftrue ).
We prove the following result
Theorem 1.6. Let ftrue fulfill the so called source condition, i.e.,
There exists ξ ∈ ∂T V (ftrue ) such that ξ = ∆−1 q for a source element q ∈ H −1 (Ω),
and û ∈ BV (Ω) be a stationary solution of (1.5) given by û = us + ud , where us is a smooth
function and ud is a piecewise constant function. Then the inpainting error reads
(û, ftrue ) +
DTsymm
V
λ0
1
2
2
(r−2)/r
kû − ftrue k−1 ≤
kξk1 + Cλ0 |D|
errinpaint ,
2
λ0
with constant C > 0 and
errinpaint := K1 + K2 |D| C (M (us ), β) + 2 R(ud ) ,
where K1 , K2 , and C are appropriate positive constants, M (us ) is the smoothness bound for us ,
β is determined from the shape of D, and the error region R(ud ) is defined from the level lines of
ud .
Finally we present numerical results for the proposed binary- and grayvalue inpainting approaches and briefly explain the numerical implementation using convexity splitting methods.
Organization of the paper. In Section 2 a fixed point approach is proposed to prove
Theorem 1.1, i.e., the existence of a stationary solution for the modified Cahn-Hilliard equation
(1.1) with Dirichlet boundary conditions. In Section 3 we discuss the new T V − H −1 inpainting
approach. We prove Theorem 1.2 in Subsection 3.1. This Γ−limit is generalized to an inpainting
approach for grayvalue images, called T V − H −1 inpainting (cf. (1.5)). Similarly to the existence
proof in Section 2 we prove in Subsection 3.2 the existence of a stationary solution of this new
inpainting approach for grayvalue images, i.e., Theorem 1.4. In Section 3.3 we additionally give
a characterization of elements in the subdifferential of the corresponding regularizing functional
(1.6). In addition we present error estimates for both the error in inpainting the image by means
of (1.5) and for the stability of solutions of (1.5) in terms of the Bregman distance, i.e., the
proof of Theorem 1.6, in Subsection 3.4. Section 4 is dedicated to the numerical solution of
Cahn-Hilliard- and T V − H −1 −inpainting and the presentation of numerical examples. Finally
in Appendix A the space H∂−1 is defined and its elements are characterized in order to analyze
(1.1) for Neumann boundary conditions. In Appendix B basic facts about functions of bounded
variation are presented.
Notation. Before we begin with the discussion of our results let us introduce some notations. By k.k we
in L2 (Ω) with corresponding inner product h., .i
always
denote the norm
−1
−1
and by k.k−1 := ∇∆ . the norm in H (Ω) = (H01 (Ω))∗ with corresponding inner product
h., .i−1 := ∇∆−1 ., ∇∆−1 . where ∆−1 denotes the inverse of the negative Laplacian −∆ on H01 ,
i.e., ∆−1 : H −1 (Ω) → H01 (Ω).
6
M. Burger, L. He, and C.-B. Schönlieb
2. Cahn-Hilliard inpainting - proof of Theorem 1.1. In this chapter we prove the
existence of a weak solution of the stationary equation (1.3). Let Ω ⊂ R2 be a bounded Lipschitz
domain and f ∈ L2 (Ω) given. In order to be able to impose boundary conditions in the equation,
we assume f to be constant in a small neighborhood of ∂Ω. This assumption is for technical
purposes only and does not influence the inpainting process as long as the inpainting domain D
does not touch the boundary of the image domain Ω. Instead of Neumann boundary data as in
the original Cahn-Hilliard inpainting approach (cf. [8]) we use Dirichlet boundary conditions for
our analysis, i.e., we consider
ut = ∆ −∆u + 1 F 0 (u) + λ(f − u) in Ω
(2.1)
u = f, −∆u + 1 F 0 (u) = 0
on ∂Ω.
This change from a Neumann- to a Dirichlet problem makes it easier to deal with the boundary
conditions in our proofs but does not have a significant impact on the inpainting process as long as
we assume that D̄ ⊂ Ω. In Appendix A we nevertheless propose a setting to extend the presented
analysis for (1.1) to the originally proposed model with Neumann boundary
data. In our new set
ting we define a weak solution of equation (1.3) as a function u ∈ H = u ∈ H 1 (Ω), u|∂Ω = f |∂Ω
that fulfills
1 0
F (u), φ − hλ(f − u), φi−1 = 0, ∀φ ∈ H01 (Ω).
(2.2)
h∇u, ∇φi +
Remark 2.1. With u ∈ H 1 (Ω) and the compact embedding H 1 (Ω) ,→,→ Lq (Ω) for every
1 ≤ q < ∞ and Ω ⊂ R2 the weak formulation is well defined.
To see that (2.2) defines a weak formulation for (1.3) with Dirichlet boundary conditions we
integrate by parts in (2.2) and get
R
(−∆u + 1 F 0 (u) − ∆−1 (λ(f − u))) φ dx
ΩR
− ∂Ω ∆−1 (λ(f − u)) ∇∆−1 φ · ν dH1 = 0, ∀φ ∈ H01 (Ω),
where H1 denotes the one dimensional Hausdorff measure. This yields
∆u − 1 F 0 (u) + ∆−1 (λ(f − u)) = 0 in Ω
∆−1 (λ(f − u)) = 0
on ∂Ω.
(2.3)
Assuming sufficient regularity on u we can use the definition of ∆−1 to see that u solves
in Ω
−∆∆u + 1 ∆F 0 (u) + λ(f − u) = 0
∆−1 (λ(f − u)) = −∆u + 1 F 0 (u) = 0 on ∂Ω,
Since additionally u|∂Ω = f |∂Ω , the function u solves (1.3) with Dirichlet boundary conditions.
For the proof of existence of a solution to (2.2) we follow the subsequent strategy. We consider
the fixed point operator A : L2 (Ω) → L2 (Ω) where A(v) = u fulfills for a given v ∈ L2 (Ω) the
equation
1 −1
(u − v) = ∆u − 1 F 0 (u) + ∆−1 [λ(f − u) + (λ0 − λ)(v − u)] in Ω,
τ∆
(2.4)
u = f, ∆−1 τ1 (u − v) + λ(f − u) + (λ0 − λ)(v − u) = 0
on ∂Ω,
where τ > 0 is a parameter. The boundary conditions of A are given by the second equation in
(2.4). Note that actually the solution u will be in H 1 (Ω) and hence the boundary condition is
well-defined in the trace sense, the operator A into L2 (Ω) is then obtained
with further embedding.
We define a weak solution of (2.4) as before by a function u ∈ H = u ∈ H 1 (Ω), u|∂Ω = f |∂Ω
that fulfills
1
1 0
τ (u − v), φ −1 + h∇u, ∇φi + F (u), φ
(2.5)
− hλ(f − u) + (λ0 − λ)(v − u), φi−1 = 0
∀φ ∈ H01 (Ω).
Cahn-Hilliard and BV-H −1 inpainting
7
A fixed point of the operator A, provided it exists, then solves the stationary equation with
Dirichlet boundary conditions as in (2.3).
Note that in (2.4) the characteristic function λ in the fitting term λ(f − u) + (λ0 − λ)(v − u) =
λ0 (v −u)+λ(f −v) only appears in combination with given functions f, v and is not combined with
the solution u of the equation. For equation (2.4), i.e., (2.5), we can therefore state a variational
formulation. This is, for a given v ∈ L2 (Ω) equation (2.4) is the Euler-Lagrange equation of the
minimization problem
u∗ = argminu∈H 1 (Ω),u|∂Ω =f |∂Ω J (u, v)
(2.6)
with
J (u, v) =
Z Ω
2
1
1
λ0 2
2
u − λ f − 1 − λ v . (2.7)
|∇u| + F (u) dx +
ku − vk−1 +
2
2τ
2
λ0
λ0
−1
We are going to use the variational formulation (2.7) to prove that (2.4) admits a weak solution
in H 1 (Ω). This solution is unique under additional conditions.
Proposition 2.2. Equation (2.4) admits a weak solution in H 1 (Ω) in the sense of (2.5).
For τ ≤ C3 , where C is a positive constant depending on |Ω|, |D|, and F only, the weak solution
of (2.4) is unique.
Further we prove that the operator A admits a fixed point under certain conditions.
Proposition 2.3. Set A : L2 (Ω) → L2 (Ω), A(v) = u, where u ∈ H 1 (Ω) is the unique weak
solution of (2.4). Then A admits a fixed point û ∈ H 1 (Ω) if τ ≤ C3 and λ0 ≥ C 13 for a positive
constant C depending on |Ω|, |D|, and F only.
Hence the existence of a stationary solution of (1.1) follows under the condition λ0 ≥ C/3 .
We begin with considering the fixed point equation (2.4), i.e., the minimization problem (2.6).
In the following we prove the existence of a unique weak solution of (2.4) by showing the existence
of a unique minimizer for (2.7).
(Proof of Proposition
2.2) We want to show that J (u, v) has a minimizer in H =
Proof.
1
u ∈ H (Ω), u|∂Ω = f |∂Ω . For this we consider a minimizing sequence un ∈ H of J (u, v). To
see that un is uniformly bounded in H 1 (Ω) we show that J (u, v) is coercive in H 1 (Ω). With
F (u) ≥ C1 u2 − C2 for two positive constants C1 , C2 > 0 and the triangular inequality in the
H −1 (Ω) space, we obtain
C1
C2
1 1
2
2
2
2
J (u, v) ≥ k∇uk2 +
kuk2 −
+
kuk−1 − kvk−1
2
2τ 2
2 !
λ
λ0 1
λ
2
+
kuk−1 − f + 1 −
v
2 2
λ0
λ0
−1
C1
2
2
kuk2 +
≥ k∇uk2 +
2
λ0
1
+
4
4τ
2
kuk−1 − C3 (v, f, λ0 , , Ω, D).
Therefore a minimizing sequence un is bounded in H 1 (Ω) and it follows that un * u∗ in
H (Ω). To finish the proof of existence for (2.4) we have to show that J (u, v) is weakly lower
semicontinuous in H 1 (Ω). For this we divide the sequence J (un , v) of (2.7) in two parts. We
denote the first term
Z 1
n
n 2
n
a =
|∇u | + F (u ) dx
2
|Ω
{z
}
1
CH(un )
and the second term
1
λ0
2
b =
kun − vk−1 +
|2τ
{z
} |2
n
D(un ,v)
2
n
u − λ f − 1 − λ v .
λ0
λ0
−1
{z
}
F IT (un ,v)
8
M. Burger, L. He, and C.-B. Schönlieb
Since H 1 ,→,→ L2 it follows un → u∗ in L2 (Ω). Further we know that if bn converges strongly,
then
lim inf (an + bn ) = lim inf an + lim bn .
(2.8)
We begin with the consideration of the last term in (2.7). We denote f˜ :=
We want to show
2
n ˜2
−→ u∗ − f˜
u − f −1
−1
⇐⇒
h∆−1 (un − f˜), un − f˜i −→ h∆−1 (u∗ − f˜), u∗ − f˜i.
λ
λ0 f
+ (1 −
λ
λ0 )v.
For this we consider the absolute difference of the two terms,
|h∆−1 (un − f˜), un − f˜i − h∆−1 (u∗ − f˜), u∗ − f˜i|
= |h∆−1 (un − u∗ ), un − f˜i − h∆−1 (u∗ − f˜), un − u∗ i|
≤ |hun − u∗ , ∆−1 (un − f˜)i| + |h∆−1 (u∗ − f˜), u∗ − un i|
≤ kun − u∗ k · ∆−1 (un − f˜) + kun − u∗ k · ∆−1 (u∗ − f˜)
| {z }
| {z }
→0
→0
Since the operator ∆−1 : H −1 (Ω) → H01 (Ω) is a linear and continuous operator it follows that
−1 −1 ∆ F ≤ ∆ · kF k for all F ∈ H −1 (Ω).
Thus
|h∆−1 (un − f˜), un − f˜i − h∆−1 (u∗ − f˜), u∗ − f˜i|
≤ kun − u∗ k ∆−1 un − f˜ + kun − u∗ k ∆−1 u∗ − f˜
| {z } | {z } | {z } | {z } | {z } | {z }
→0
const
bounded
→0
const
const
−→ 0 as n → ∞,
and we conclude that F IT (un , v) converges strongly to F IT (u∗ , v). With the same argument it
follows that D(un , v) converges strongly and in sum that the sequence bn converges strongly in
L2 (Ω). Further CH(.) is weakly lower semicontinuous, which follows from the lower semicontinuity
of the Dirichlet integral and from the continuity of F by applying Fatou’s Lemma. Hence we obtain
J (u∗ , v) ≤ lim inf J (un , v).
Therefore J has a minimizer in H 1 , i.e.,
∃u∗ with u∗ = argminu∈H 1 (Ω) J (u, v).
We next assert that u∗ fulfills the boundary condition u∗ |∂Ω = f |∂Ω .To see this, note that for an
admissible function w ∈ H, un − w ∈ H01 (Ω). Now H01 (Ω) is a closed, linear subspace of H 1 (Ω),
and so, by Mazur’s theorem (cf. [23] § D.4 for example), is weakly closed. Hence u∗ − w ∈ H01 (Ω)
and consequently the trace of u∗ on ∂Ω is equal to f .
For simplicity let in the following u = u∗ . To see that the minimizer u is a weak solution of
(2.4) we compute the corresponding Euler-Lagrange equation to the minimization problem. For
this sake we choose any test function φ ∈ H01 (Ω) and compute the first variation of J , i.e.,
d J (u + δφ, v)
,
dδ
δ=0
Cahn-Hilliard and BV-H −1 inpainting
9
which has to be zero for a minimizer u. Thus we have
1
λ
λ
1
h∇u, ∇φi + hF 0 (u), φi +
(u − v) + λ0 u − f − 1 −
v ,φ
= 0.
τ
λ0
λ0
−1
Integrating by parts in both terms we get
1
1
λ
λ
−∆u + F 0 (u) − ∆−1
(u − v) + λ0 u − f − 1 −
v ,φ
τ
λ0
λ0
Z
Z
1
λ
λ
+
∇u · νφ ds +
∆−1
(u − v) + λ0 u − f − 1 −
v ∇∆−1 φ · ν ds = 0.
τ
λ
λ
0
0
∂Ω
∂Ω
Since φ is an element in H01 (Ω) the first boundary integral vanishes. Further a minimizer u fulfills
the boundary condition u = f on the boundary ∂Ω. Hence, we obtain that u fulfills the weak
formulation (2.5) of (2.4).
For the uniqueness of the minimizer, we need to prove that J is strictly convex. To do so,
we prove that for any u1 , u2 ∈ H 1 (Ω),
u1 + u2
, v > 0,
(2.9)
J (u1 , v) + J (u2 , v) − 2J 2
2
based on an assumption that F (.) satisfies F (u1 ) + F (u2 ) − 2F ( u1 +u
) > −C(u1 − u2 )2 , for a
2
1
1
2
2
constant C > 0. For example, when F (u) = 8 (u − 1) , C = 8 . Denote u = u1 − u2 , we have
1
λ0
C
u1 + u2
2
2
2
J (u1 , v) + J (u2 , v) − 2J
, v > kukH 1 +
+
kuk−1 − kuk2
2
4
4τ
4
By using the inequality
2
kuk2 ≤ kukH 1 kuk−1 ,
(2.10)
and the Cauchy-Schwarz inequality, for (2.9) to be fulfilled, we need
s 1
λ0
C
2
+
≥ .
4 4τ
4
i.e.,
3
1
λ0 +
τ
≥ C 2.
Therefore J (u, v) is strictly convex in u and our minimization problem has a unique minimizer if
τ is chosen smaller than C3 for a constant C depending on |Ω|, |D|, and F only. Because of the
convexity of J in ∇u and u, every weak solution of the Euler-Lagrange equation (2.4) is in fact
a minimizer of J . This proves the uniqueness of a weak solution of (2.4) provided τ << C3 .
Next we want to prove Proposition 2.3, i.e., the existence of a fixed point of (2.4) and with
this the existence of a stationary solution of (1.1). To do so we are going to apply Schauder’s fixed
point theorem.
Proof. (Proof of Proposition 2.3) We consider a solution A(v) = u of (2.4) with v ∈ L2 (Ω)
given. In the following we will prove the existence of a fixed point by using Schauder’s fixed point
theorem. We start with proving that
2
2
2
kA(v)k = kuk ≤ β kvk + α,
(2.11)
for a constant
β < 1. Having this
we have shown that A is a map from the closed ball K =
B(0, M ) = u ∈ L2 (Ω) : kuk ≤ M into itself for an appropriate constant M > 0. We conclude
10
M. Burger, L. He, and C.-B. Schönlieb
the proof with showing the compactness of K and the continuity of the fixed point operator A.
From Schauder’s theorem the existence of a fixed point follows.
Let us, for the time being, additionally assume that ∇u and ∆u are bounded in L2 (Ω). Hence
we can multiply (2.4) with −∆u and integrate over Ω to obtain
Z
Z
1
1
−
∆u∆−1
(u − v) − λ(f − u) − (λ0 − λ)(v − u) dx = − h∆u, ∆ui +
F 0 (u)∆u dx
τ
Ω
Ω
After integration by parts we find with the short-hand notation
w :=
1
(u − v) − λ(f − u) − (λ0 − λ)(v − u)
τ
that
Z
Z
uw dx −
Z
1
1
2
2
∇u · ν(∆−1 w + F 0 (u)) − u∇(∆−1 w) · ν dH1 = − k∆uk −
F 00 (u) |∇u| dx.
Ω
∂Ω
Ω
Now we insert the boundary conditions ∆−1 w = 0, u = f =: f1 and F 0 (u) = F 0 (f ) = f2 on ∂Ω
with constants f1 and f2 on the left-hand side, i.e.
Z
Z Z
f2
1
2
2
uw dx −
∇u · ν − f1 ∇(∆−1 w) · ν dH1 = − k∆uk −
F 00 (u) |∇u| dx.
Ω
∂Ω
Ω
An application of Gauss’ Theorem to the boundary integral implies
Z Z
Z
f2
f2
∇u · ν − f1 ∇(∆−1 w) · ν dH1 =
∆u dx + f1
w dx,
Ω
∂Ω
Ω
and we get
Z
2
uw dx = − k∆uk −
Ω
1
Z
2
F 00 (u) |∇u| dx +
Ω
f2
Z
Z
∆u dx + f1
Ω
w dx.
Ω
By further applying Young’s inequality to the last two terms we get
Z
Z
f2 δ
1
f1 δ
2
2
2
uw dx ≤
− k∆uk −
F 00 (u) |∇u| dx +
kwk + C(f1 , f2 , |Ω|, , δ).
2
Ω
2
Ω
Using the identity λ(f − u) + (λ0 − λ)(v − u) = λ(f − v) + λ0 (v − u) in the definition of w yields
Z
Z
1
f2 δ
1
f1 δ
2
2
2
u · (u − v) ≤
− k∆uk −
F 00 (u) |∇u| dx +
kwk
τ
2
2
Ω
Ω
!
Z
Z
+λ0
u(f − u) dx +
u(v − u) dx + C(f1 , f2 , |Ω|, , δ).
Ω\D
D
By applying the standard inequality (a + b)2 ≤ 2(a2 + b2 ) to the L2 norm of w = ( τ1 + λ0 )u − ( τ1 +
λ0 − λ)v − λf and by using that (1 − λ/λ0 ) ≤ 1 in the resulting L2 norm of v we get
Z
1
u · (u − v) ≤
τ
Ω
2
Z
f2 δ
1
1
2
2
2
00
− k∆uk −
F (u) |∇u| dx + f1 δ
+ λ0 kuk
2
Ω
τ
!
2
Z
Z
1
2
+2f1 δ
+ λ0 kvk + λ0
u(f − u) dx +
u(v − u) dx
τ
Ω\D
D
+C(f, f1 , f2 , |Ω|, , δ, λ0 ).
Cahn-Hilliard and BV-H −1 inpainting
11
With F 00 (u) ≥ C1 u2 − C2 for some constants C1 , C2 > 0 and for all u ∈ R, and by further applying
the Cauchy-Schwarz inequality to the last two integrals we obtain
2
Z
1
f2 δ
C1
C2
1
2
2
2
2
u · (u − v) ≤
− k∆uk −
ku |∇u|k +
k∇uk + f1 δ
+ λ0 kuk
τ
2
τ
Ω
" 2
Z
Z
δ2
1
δ1
2
2
+2f1 δ
+ λ0 kvk + λ0 − 1 −
u dx +
−1
u2 dx
τ
2
2
Ω\D
D
Z
1
2
+
v dx + C(f, f1 , f2 , |Ω|, |D|, , λ0 , δ, δ2 ).
2δ1 D
Setting δ2 = 1 and δ1 = 2 we see that
2
Z
1
f2 δ
C1
C2
1
2
2
2
2
u · (u − v) ≤
− k∆uk −
ku |∇u|k +
k∇uk + f1 δ
+ λ0 kuk
τ
2
τ
Ω
"
#
2
Z
Z
1
1
1
2
2
2
+ λ0 kvk + λ0 −
u dx +
v dx
+2f1 δ
τ
2 Ω\D
4 D
+C(f, f1 , f2 , |Ω|, |D|, , δ, λ0 ).
We follow the argumentation of the proof of existence for (1.1) in [8] by observing the following
property: A standard interpolation inequality for ∇u reads
2
2
k∇uk ≤ δ3 k∆uk +
C3
2
kuk .
δ3
(2.12)
The domain of integration in the second integral of the equation above can be taken to be smaller
than Ω by taking a larger constant C3 . Further we use the L1 version of Poincare’s inequality
applied to the function u2 . We recall this inequality in the following theorem.
Theorem 2.4. (Poincare’s inequality in L1 ). Assume that Ω is precompact open subset of
n-dimensional Euclidean space Rn having Lipschitz boundary (i.e., Ω is an open, bounded Lipschitz
domain). Then there exists a constant C, depending only on Ω, such that, for every function u in
the Sobolev space W 1,1 (Ω),
ku − uΩ kL1 (Ω) ≤ Ck∇ukL1 (Ω) ,
R
1
where uΩ = |Ω|
u(y) dy is the average value of u over Ω, with |Ω| denoting the Lebesgue measure
Ω
of the domain Ω.
Then, assuming that u 6= 0 in Ω \ D, we choose the constant C4 (which depends on the size of D
compared to Ω) large enough such that
Z
Z
Z
Z
2
2
u − u2Ω dx ≤ C4
∇u dx.
u2 dx − C4
u2 dx ≤
Ω
Ω\D
Ω
Ω
or in other words
2
kuk ≤ C4 ∇u2 L1 (Ω) + C4
Z
u2 dx.
(2.13)
Ω\D
By Hölders inequality we also have that
2
C5
α
2
∇u 1
.
≤ ku |∇u|k +
L (Ω)
2
2α
Putting the last three inequalities (2.12)-(2.14) together we obtain
Z
C3 C4 α
C3 C4
C3 C4 C5
2
2
2
k∇uk ≤ δ3 k∆uk +
ku |∇u|k +
u2 dx +
.
2δ3
δ3
2αδ3
Ω\D
(2.14)
12
M. Burger, L. He, and C.-B. Schönlieb
We now use the last inequality to bound the gradient term in our estimates from above to get
R
2
2
C 3 C4 α
2 δ3
u · τ1 (u − v) ≤ ( f2 δ+2C
− ) k∆uk + ( C2 2δ
− C1 ) ku|∇u|k
2
Ω
3
2
2 2
2
kvk
+(f1 δ τ1 + λ0 + C2δC33C4 − C42λ0 ) kuk + λ40 + 2f2 δ τ1 + λ0
+C(f, f1 , f2 , |Ω|, |D|, , δ, λ0 ).
(2.15)
22 −f2 δ
With δ3 < 2C2 and α, δ small enough the first two terms can be estimated from above by zero.
Applying the Cauchy-Schwarz inequality on the left-hand side and rearranging the terms on both
sides of the inequality we conclude
!
2
2 !
λ0
C4 λ0
1
C2 C3 C4
1
1
1
2
2
kuk ≤
+
− f1 δ
− λ0 −
+
+ 2f2 δ
+ λ0
kvk
2τ
2
τ
δ3 4
2τ
τ
+C(f, f1 , f2 , |Ω|, |D|, , δ, λ0 ).
Choosing δ small enough, C4 large enough, and λ0 ≥ CC4 13 the solution u and v fulfill
2
2
kuk ≤ β kvk + C,
(2.16)
with β < 1 and a constant C independent of v. Hence u is bounded in L2 (Ω).
To see that our regularity assumptions on u from the beginning of the proof are automatically
fulfilled we consider (2.15) with appropriate constants δ3 , δ, and α as specified in the paragraph
below (2.15). But now we only estimate the second term on the right side by zero and keep the
first term. By applying the Cauchy-Schwarz inequality and rearranging the terms as before we
obtain
2 C2 C3 C4 2
2
C4 λ0
1
1
2 δ3
+
−
f
δ
−
λ
kuk + ( − f2 δ+2C
) k∆uk
− δ3 1
0
2
τ
2
2τ
2
2
λ0
1
1
≤
kvk + C(f, f1 , f2 , |Ω|, |D|, , δ, λ0 ),
4 + 2τ + 2f2 δ τ + λ0
2 δ3
with the coefficient − f2 δ+2C
> 0 due to our choice of δ3 . Therefore not only the L2 − norm of
2
u is uniformly bounded but also the L2 − norm of ∆u. By the standard interpolation inequality
(2.12) the boundedness of u in H 1 (Ω) follows. From the last result we additionally get that the
operator A is a compact map since A : L2 (Ω) → H 1 (Ω) ,→,→ L2 (Ω). Therefore K is a compact
and convex subset of L2 (Ω).
It remains to show that the operator A is continuous. Indeed if vk → v in L2 (Ω) then A(vk ) = uk is
bounded in H 1 (Ω) for all k = 0, 1, 2, . . .. Thus, we can consider a weakly convergent subsequence
ukj * u in H 1 (Ω). Because H 1 (Ω) ,→,→ Lq (Ω), 1 ≤ q < ∞ the sequence ukj converges also
strongly to u in Lq (Ω). Hence, a weak solution A(vk ) = uk of (2.4) weakly converges to a weak
solution u of
1
1
(−∆−1 )(u − v) = ∆u − F 0 (u) − ∆−1 [λ(f − u) + (λ0 − λ)(v − u)] ,
τ
where u is the weak limit of A(vk ) as k → ∞. Because the solution of (2.4) is unique provided
τ ≤ C3 (cf. Proposition 2.2), u = A(v), and therefore A is continuous. Applying Schauder’s
Theorem we have shown that the fixed point operator A admits a fixed point û in L2 (Ω) which
fulfills
Z
1 0
F (û), φ −hλ(f − û), φi−1 +
h∇û, ∇φi+
∆−1 (λ(f − û)) ∇∆−1 φ·ν dH1 = 0, ∀φ ∈ H01 (Ω).
∂Ω
Because the solution of (2.4) is an element of H = u ∈ H 1 (Ω), u|∂Ω = f |∂Ω also the fixed point
û ∈ H.
Following the arguments from the beginning of this section we conclude with the existence of
a stationary solution for (1.1).
Cahn-Hilliard and BV-H −1 inpainting
13
By modifying the setting and the above proof in an appropriate way one can prove the existence
of a stationary solution for (1.1) also under Neumann boundary conditions, i.e.,
∇u · ν = ∇∆u · ν = 0,
on ∂Ω.
A corresponding reformulation of the problem is given in Appendix A.
3. Total Variation - H −1 inpainting. In this section we discuss our newly proposed inpainting scheme (1.5), i.e., the inpainted image u of f ∈ L2 (Ω) evolves via
ut = ∆p + λ(f − u),
p ∈ ∂T V (u),
with
(
|Du| (Ω) if |u(x)| ≤ 1 a.e. in Ω
T V (u) =
+∞
otherwise.
Before starting this section we suggest readers who are unfamiliar with the space BV (Ω) to first
read Appendix B and maybe recall the definition of the subdifferential of a function in Definition
B.9.
3.1. Γ-Convergence of the Cahn-Hilliard energy - proof of Theorem 1.2. In the
following we want to motivate our new inpainting approach (1.5) by considering the Γ−limit for
→ 0 of an appropriate time-discrete Cahn-Hilliard inpainting approach, i.e., the Γ− limit of the
functionals from our fixed point approach in (2.7). More precisely we want to prove Theorem 1.2
stated in the Introduction of this paper. Before starting our discussion lets recall the definition
of Γ-convergence and its impact within the study of optimization problems. For more details on
Γ-convergence we refer to [29].
Definition 3.1. Let X = (X, d) be a metric space and (Fh ), h ∈ N be family of functions
Fh : X → [0, +∞]. We say that (Fh ) Γ-converges to a function F : X → [0, +∞] on X if ∀x ∈ X
we have
(i) for every sequence xh with d(xh , x) → 0 we have
F (x) ≤ lim inf Fh (xh );
h
(ii) there exists a sequence x¯h such that d(x¯h , x) → 0 and
F (x) = lim Fh (x¯h )
h
(or, equivalently, F (x) ≥ lim suph Fh (x¯h )).
We write F (x) = Γ − limh Fh (x), x ∈ X, is the Γ-limit of (Fh ) in X. The formulation of the
Γ-limit for → 0 is analogous by defining a sequence h with h → 0 as h → ∞.
The important property of Γ-convergent sequences of functions Fh is that its minima converge
to minima of the Γ-limit F . In fact we have the following theorem
Theorem 3.2. Let (Fh ) be like in Definition 3.1 and additionally equicoercive, that is there
exists a compact set K ⊂ X (independent of h) such that
inf {Fh (x)} = inf {Fh (x)}.
x∈X
x∈K
If Fh Γ-converges on X to a function F we have
min {F (x)} = lim inf {Fh (x)} .
x∈X
h x∈X
After recalling these facts about Γ-convergence we continue this section with the proof of
Theorem 1.2.
14
M. Burger, L. He, and C.-B. Schönlieb
Proof. Modica and Mortola have shown in [31] and [32] that the sequence of Cahn-Hilliard
functionals
Z 1
2
CH(u) =
|∇u| + F (u) dx
2
Ω
Γ-converges in the topology L1 (Ω) to
(
C0 |Du| (Ω) if u = χE for some Borel measurable subset E ⊂ Ω
T V (u) =
+∞
otherwise
R1p
as → 0, where C0 = 2 0 F (s) ds. (The space BV (Ω) and the total variation |Du| (Ω) are
defined in Appendix B.)
Now, for a given function v ∈ L2 (Ω) the functional J from our fixed point approach (2.4), i.e.,
2
Z λ
1
λ0 1
λ 2
2
J (u, v) =
u − f − (1 − )v ,
|∇u| + F (u) dx +
ku − vk +
2
2τ {z −1} 2 λ0
λ0 −1
Ω
|
{z
}
|
|
{z
}
:=D(u,v)
:=CH(u)
:=F IT (u,v)
is the sum of the regularizing term CH(u), the damping term D(u, v) and the fitting term
F IT (u, v). We recall the following fact,
Theorem 3.3. [Dal Maso, [29], Prop. 6.21.] Let G : X → R be a continuous function and
(Fh ) Γ− converges to F in X, then (Fh + G) Γ− converges to F + G in X.
Since the H −1 -norm is continuous in H −1 (Ω) and hence in particular in L1 (Ω), the two terms in
J that are independent from , i.e., D(u, v) and F IT (u, v), are continuous in L1 (Ω). Together
with the Γ-convergence result of Modica and Mortola for the Cahn-Hilliard energy, we have proven
that the modified Cahn-Hilliard functional J can be seen as a regularized approximation in the
sense of Γ-convergence of the TV-functional
J(u, v) = T V (u) + D(u, v) + F IT (u, v),
for functions u ∈ BV (Ω) with u(x) = χE for a Borel measurable subset E ⊂ Ω. In fact we have
gone from a smooth transition layer between 0 and 1 in the Cahn-Hilliard inpainting approach
(depending on the size of ) to a sharp interface limit in which the image function now jumps from
0 to 1.
This property motivates the extension of J(u, v) to grayvalue functions such that |u| ≤ 1
on Ω and hence leads us from the Cahn-Hilliard inpainting approach for binary images to a
generalization for grayvalue images, namely our so called T V − H −1 inpainting equation (1.5).
3.2. Existence of a stationary solution - proof of Theorem 1.4. Our strategy for
proving the existence of a stationary solution for T V − H −1 inpainting (1.5) is similar to our
existence proof for a stationary solution of the modified Cahn-Hilliard equation (1.1) in Section
2. Similarly as in our analysis for (1.1) in Section 2 we consider equation (1.5) with Dirichlet
boundary conditions, namely
ut = ∆p + λ(f − u) in Ω
u=f
on ∂Ω,
for p ∈ ∂T V (u).
Now let f ∈ L2 (Ω), |f | ≤ 1 be the given grayvalue image. For v ∈ Lr (Ω), 1 < r < 2, we
consider the minimization problem
u∗ = arg
min
J(u, v),
u∈BV (Ω)
with functionals
J(u, v) := T V (u) +
λ0
λ
λ
1
||u − v||2−1 + ||u − f − (1 − )v||2−1 ,
2τ
2
λ
λ
0
0
|
{z
} |
{z
}
D(u,v)
F IT (u,v)
(3.1)
Cahn-Hilliard and BV-H −1 inpainting
15
with T V (u) defined as in (1.6), i.e.,
(
|Du| (Ω) if |u(x)| ≤ 1 a.e. in Ω
T V (u) =
+∞
otherwise.
Note that Lr (Ω) can be continuously embedded in H −1 (Ω). Hence the functionals in (3.1) are
well defined.
First we will show that for a given v ∈ Lr (Ω) the functional J(., v) attains a unique minimizer
∗
u ∈ BV (Ω) with |u∗ (x)| ≤ 1 a.e. in Ω.
Proposition 3.4. Let f ∈ L2 (Ω) be given with |f (x)| ≤ 1 a.e. in Ω and v ∈ Lr (Ω). Then
the functional J(., v) has a unique minimizer u∗ ∈ BV (Ω) with |u∗ (x)| ≤ 1 a.e. in Ω.
Proof. Let (un )n∈N be a minimizing sequence for J(u, v), i.e.,
J(un , v) →
inf
J(u, v).
u∈BV (Ω)
Then un ∈ BV (Ω) and |un (x)| ≤ 1 in Ω (because otherwise T V (un ) would not be finite). Therefore
|Dun | (Ω) ≤ M,
for an M ≥ 0 and for all n ≥ 1,
and, because of the uniform boundedness of |u(x)| for every point x ∈ Ω,
kun kLp (Ω) ≤ M̃ , for an M ≥ 0, ∀n ≥ 1, and 1 ≤ p ≤ ∞.
Thus un is uniformly bounded in Lp (Ω) and in particular in L1 (Ω). Together with the boundedness
of |Dun | (Ω), the sequence un is also bounded in BV (Ω) and there exists a subsequence, still
denoted un , and a u ∈ BV (Ω) such that un * u weakly in Lp (Ω), 1 ≤ p ≤ ∞ and weakly∗ in
BV (Ω). Because L2 (Ω) ⊂ L2 (R2 ) ⊂ H −1 (Ω) (by zero extensions of functions on Ω to R2 ) un * u
also weakly in H −1 (Ω). Because |Du| (Ω) is lower semicontinuous in BV (Ω) and by the lower
semicontinuity of the H −1 norm we get
J(u, v)= T V (u) + D(u, v) + F IT (u, v)
≤ lim inf n→∞ (T V (un ) + D(un , v) + F IT (un , v))
= lim inf n→∞ J(un , v).
So u is a minimizer of J(u, v) over BV (Ω).
To prove the uniqueness of the minimizer we (similarly as in the proof of Theorem 2.2) show
that J is strictly convex. Namely we prove that for all u1 , u2 ∈ BV (Ω), u1 6= u2
u1 + u2
, v > 0.
J(u1 , v) + J(u2 , v) − 2J
2
We have
J(u1 , v) + J(u2 , v) − 2J
u1 + u2
,v
2
!
u1 + u2 2
=
+
− 2
2
−1
u1 + u2
+T V (u1 ) + T V (u2 ) − 2T V
2
1
λ0
2
≥
+
ku1 − u2 k−1 > 0.
4τ
4
1
λ0
+
2τ
2
2
ku1 k−1
2
ku2 k−1
This finishes the proof.
Next we shall prove the existence of stationary solution for (1.5). For this sake we consider
the corresponding Euler-Lagrange equation to (3.1), i.e.,
u−v
∆−1
+ p − ∆−1 (λ(f − u) + (λ0 − λ)(v − u)) = 0,
τ
16
M. Burger, L. He, and C.-B. Schönlieb
with the weak formulation
1
τ (u − v), φ −1 + hp, φi − hλ(f − u) + (λ0 − λ)(v − u), φi−1
∀φ ∈ H01 (Ω).
A fixed point of the above equation, i.e., a solution u = v, is then a stationary solution for (1.5).
Thus, to prove the existence of a stationary solution of (1.5), i.e., to prove Theorem 1.4, we as
before are going to use a fixed point argument. Let A : Lr (Ω) → Lr (Ω), 1 < r < 2, be the operator
which maps a given v ∈ Lr (Ω) to A(v) = u under the condition that A(v) = u is the minimizer
of the functional J(., v) defined in (3.1). The choice of the fixed point operator A over Lr (Ω) was
made in order to obtain the necessary compactness properties for the application of Schauder’s
theorem.
Since here the treatment of the boundary conditions is similar as in Section 2 we will leave
this part of the analysis in the upcoming proof to the reader and just carry out the proof without
explicitly taking care of the boundary.
Proof. Let A : Lr (Ω) → Lr (Ω), 1 < r < 2, be the operator that maps a given v ∈ Lr (Ω) to
A(v) = u, where u is the unique minimizer of the functional J(., v) defined in (3.1). Existence
and uniqueness follow from Theorem 3.4. Since u minimizes J(., v) we have u ∈ L∞ (Ω) hence
u ∈ Lr (Ω). Additionally we have J(u, v) ≤ J(0, v), i.e.,
1
2τ ||u
− v||2−1 +
λ0
2 ||u
−
λ
λ0 f
− (1 −
λ
2
λ0 )v||−1
+ T V (u) ≤
≤
1
2
2τ ||v||−1
|Ω|
2τ
+
λ0 λ
2 || λ0 f
+ (1 −
λ
2
λ0 )v||−1
+ λ0 (|Ω| + |D|).
(3.2)
Here the last inequality was obtained since Lr (Ω) ,→ H −1 (Ω) and hence ||v||−1 ≤ C and ||λv||−1 ≤
0
C for a C > 0. (In fact, since H 1 (Ω) ,→ Lr (Ω) for all 1 ≤ r0 < ∞ from duality it follows
that Lr (Ω) ,→ H −1 (Ω) for, 1 < r < ∞.) By the last estimate we obtain u ∈ BV (Ω). Since
BV (Ω) ,→,→ Lr (Ω) compactly for 1 ≤ r ≤ 2 and Ω ⊂ R2 (cf. Theorem B.7), the operator A maps
Lr (Ω) → BV (Ω) ,→,→ Lr (Ω), i.e., A : Lr (Ω) → K, where K is a compact subset of Lr (Ω). Thus,
for v ∈ B(0, 1) (where B(0, 1) denotes the ball in L∞ (Ω) with center 0 and radius 1), the operator
A : B(0, 1) → B(0, 1) ∩ K = K̃, where K̃ is a compact and convex subset of Lr (Ω).
Next we have to show that A is continuous in Lr (Ω). Let (vk )k≥0 be a sequence which
converges to v in Lr (Ω). Then uk = A(vk ) solves
uk − vk
∆pk =
− (λ(f − uk ) + (λ0 − λ)(vk − uk )) ,
τ
where pk ∈ ∂T V (uk ). Thus uk is uniformly bounded in BV (Ω) ∩ L∞ (Ω) (and hence in Lr (Ω))
and, since the right-hand side of the above equation is uniformly bounded in Lr (Ω), also ∆pk is
bounded in Lr (Ω). Thus there exists a subsequence pkl such that ∆pkl * ∆p in Lr (Ω) and a
subsequence ukl that converges weakly ∗ to a u in BV (Ω) ∩ L∞ (Ω). Since BV (Ω) ,→,→ Lr (Ω) we
have ukl → u strongly in Lr (Ω). Therefore the limit u solves
u−v
∆p =
− (λ(f − u) + (λ0 − λ)(v − u)) .
(3.3)
τ
If we additionally apply Poincare’s inequality to ∆pk we conclude
k∇pk − (∇pk )Ω kLr (Ω) ≤ C k∇ · (∇pk − (∇pk )Ω )kLr (Ω) ,
R
1
where (∇pk )Ω = |Ω|
∇pk dx. In addition, since pk ∈ ∂T V (uk ), it follows that (pk )Ω = 0 and
Ω
kpk kBV ∗ (Ω) ≤ 1. Thus (∇pk )Ω < ∞ and pk is uniformly bounded in W 1,r (Ω). Thus there exists a
0
subsequence pkl such that pkl * p in W 1,r (Ω). In addition Lr (Ω) ,→ BV ∗ (Ω) for 2 < r0 < ∞ (this
2r
follows again from Theorem B.7 by a duality argument) and W 1,r (Ω) ,→,→ Lq (Ω) for 1 ≤ q < 2−r
2r
(Rellich-Kondrachov Compactness Theorem, cf. [2], Theorem 8.7). By choosing 2 < q < 2−r we
have in sum W 1,r (Ω) ,→,→ BV ∗ (Ω). Thus pkl → p strongly in BV ∗ (Ω). Hence the element p in
(3.3) is an element in ∂T V (u).
Because the minimizer of (3.1) is unique, u = A(v), and therefore A is continuous in Lr (Ω).
From Schauder’s fixed point theorem the existence of a stationary solution follows.
Cahn-Hilliard and BV-H −1 inpainting
17
3.3. Characterization of Solutions - proof of Theorem 1.5. Finally we want to compute
elements p̂ ∈ ∂T V (û). Like in [13] the model for the regularizing functional is the sum of a
standard regularizer plus the indicator function of the L∞ constraint. Especially we have T V (u) =
|Du| (Ω) + χ1 (u), where |Du| (Ω) is the total variation of Du and
(
0
if |u| ≤ 1 a.e. in Ω
(3.4)
χ1 (u) =
+∞ otherwise.
We want to compute the subgradients of T V by pretending ∂T V (u) = ∂ |Du| (Ω) + ∂χ1 (u). This
means we can separately compute the subgradients of χ1 . To guarantee that the splitting
above is
R q
2
allowed we have to consider a regularized functional of the total variation, like Ω |∇u| + δ dx.
This is sufficient because both |D.| (Ω) and χ1 are convex and |D.| (Ω) is continuous (compare [21]
Proposition 5.6., pp. 26).
The subgradient ∂ |Du| (Ω) is already well described, as, for instance, in [4] or [37]. We will
just shortly recall its characterization. Thereby we do not insist on the details of the rigorous
derivation of these conditions, and we limit ourself to mention the main facts.
It is well known [37, Proposition 4.1] that p ∈ ∂|Du|(Ω) implies
(
∇u
p = −∇ · ( |∇u|
) in Ω
∇u
·
ν
=
0
on ∂Ω.
|∇u|
The previous conditions do not fully characterize p ∈ ∂|Du|(Ω), additional conditions would be
required [4, 37], but the latter are, unfortunately, hardly numerically implementable. Since we
anyway consider a regularized version of |Du| (Ω) the subdifferential becomes a gradient which
reads

 p = −∇ · ( √ ∇u2 ) in Ω
|∇u| +δ
 √
∇u
|∇u|2 +δ
·ν =0
on ∂Ω.
The subgradient of χ1 is computed like in the following Lemma.
Lemma 3.5. Let χ1 : Lr (Ω) → R ∪ {∞} be defined by (3.4), and let 1 ≤ r ≤ ∞. Then
∗
r
p ∈ Lr (Ω), for r∗ = r−1
, is a subgradient p ∈ ∂χ1 (u) for u ∈ Lr (Ω) with χ1 (u) = 0, if and only
if
p = 0 a.e. on supp({|u| < 1})
p ≤ 0 a.e. on supp({u = −1})
p ≥ 0 a.e. on supp({u = 1}).
Proof. Let p ∈ ∂χ1 (u). Then we can choose v = u + w for w being any bounded function
supported in {|u| < 1 − α} for arbitrary 0 < α < 1. If is sufficiently small we have |v| ≤ 1.
Hence
Z
0 ≥ hv − u, pi = wp dx.
{|u|<1−α}
Since we can choose both positive and negative, we obtain
Z
wp dx = 0.
{|u|<1−α}
Because 0 < α < 1 and w are arbitrary we conclude p = 0 on the support of {|u| < 1}. If we
choose v = u + w with w is an arbitrary bounded function with
(
0 ≤ w ≤ 1 on supp({−1 ≤ u ≤ 0})
w=0
on supp({0 < u ≤ 1}).
18
M. Burger, L. He, and C.-B. Schönlieb
Then v is still between −1 and 1 and
Z
Z
0 ≥ hv − u, pi =
wp dx +
{u=−1}
wp dx
{u=1}
Z
=
wp dx.
{u=−1}
Because w is arbitrary and positive on {u = −1} it follows that p ≤ 0 a.e. on {u = −1}. If we
choose now v = u + w with w is an arbitrary bounded function with
(
w=0
on supp({−1 ≤ u ≤ 0})
−1 ≤ w ≤ 0 on supp({0 < u ≤ 1}).
Then v is still between −1 and 1 and
Z
Z
0 ≥ hv − u, pi =
wp dx +
{u=−1}
wp dx
{u=1}
Z
=
wp dx.
{u=1}
Analogue to before, since w is arbitrary and negative on {u = 1} it follows that p ≥ 0 a.e. on
{u = 1}.
On the other hand assume that
p = 0 a.e. on supp({|u| < 1})
p ≤ 0 a.e. on supp({u = −1})
p ≥ 0 a.e. on supp({u = 1}).
We need to verify the subgradient property
hv − u, pi ≤ χ1 (v) − χ1 (u) = χ1 (v) for all v ∈ Lr (Ω)
only for χ1 (v) = 0, since it is trivial for χ1 (v) = ∞. So let v ∈ Lr (Ω) be a function between −1
and 1 almost everywhere on Ω. Then with p as above we obtain
Z
Z
hv − u, pi =
p(v − u) dx +
p(v − u) dx
{u=−1}
{u=1}
Z
Z
=
p(v + 1) dx +
p(v − 1) dx.
{u=−1}
{u=1}
Since −1 ≤ v ≤ 1 the first and the second term are always ≤ 0 since p ≤ 0 for {u = −1} and p ≥ 0
for {u = 1} respectively. Therefore hv − u, pi ≤ 0 and we are done.
3.4. Error estimation and stability analysis with the Bregman distance - proof
of Theorem 1.6. In the following analysis we want to present estimates for both the error we
actually make in inpainting an image with our T V − H −1 approach (1.5) (see (3.12)) and for the
stability of solutions for this problem (see (3.13)) in terms of the Bregman distance. This section is
motivated by the error analysis for variational models in image restoration with Bregman iterations
in [13], and the error estimates for inpainting models developed in [15]. In [13] the authors consider
among other things the general optimality condition
p + λ0 A∗ (Au − fdam ) = 0,
(3.5)
where p ∈ ∂R(u) for a regularizing term R, A is a bounded linear operator and A∗ its adjoint.
Now the error that is to be estimated depends on the form of smoothing of the image contained
in (3.5). Considering this equation one realizes that the smoothing consists of two steps. The first
Cahn-Hilliard and BV-H −1 inpainting
19
smoothing is created by the operator A which depends on the image restoration task at hand,
and is actually a dual one that smooths the subgradient p. The second smoothing step is the
one, which is directly implied by the regularizing term, i.e., its subgradient p, and depends on the
relationship between the primal variable u and the dual variable p. A condition that represents
this dual smoothing property of functions, i.e., subgradients, is the so-called source condition. Let
ftrue be the original image then the source condition for ftrue reads:
There exists ξ ∈ ∂R(ftrue ) such that ξ = A∗ q for a source element q ∈ D(A∗ ),
(3.6)
where D(A∗ ) is the domain of the operator A∗ . It can be shown (cf. [12]) that this is equivalent
to require from ftrue to be a minimizer of
R(u) +
λ0
2
kAu − fdam k ,
2
for arbitrary fdam ∈ D(A∗ ) and λ0 ∈ R. Now, the source condition has a direct consequence for
the Bregman distance, which gives rise to its use for the subsequent error analysis. To be more
precise, the Bregman distance is defined as
p
DR
(v, u) = R(v) − R(u) − hv − u, pi ,
p ∈ ∂R(u).
Then, if ftrue fulfills the source condition with a particular subgradient ξ we obtain
ξ
DR
(u, ftrue ) = R(u) − R(ftrue ) − hu − ftrue , ξi = R(u) − R(ftrue ) − hq, Au − Aftrue i ,
and thus the Bregman distance can be both related to the error in the regularization functional
(R(u) − R(ftrue )) and the output error (Au − Aftrue ). For the sake of symmetry properties in the
sequel we shall consider the symmetric Bregman distance, which is defined as
symm
p1
p2
DR
(u1 , u2 ) = DR
(u2 , u1 ) + DR
(u1 , u2 ) = hu1 − u2 , p1 − p2 i ,
pi ∈ ∂R(ui ).
Additionally to this error analysis we shall get a control on the inpainting error |u − ftrue | inside
the inpainting domain D by means of estimates from [15]. Therein the authors analyzed the
inpainting process by understanding how the regularizer continues level lines into the missing
domain. The inpainting error was then determined by means of the definition of an error region,
smoothness bounds on the level lines, and quantities taking into the account the shape of the
inpainting domain. In the following we are going to implement both strategies, i.e., [13] and [15],
in order to proof Theorem 1.6.
Proof. Let fdam ∈ L2 (Ω) be the given damaged image with inpainting domain D ⊂ Ω and
ftrue the original image. We consider the stationary equation to (1.5), i.e.,
−∆p + λ(u − fdam ) = 0,
p ∈ ∂T V (u),
(3.7)
where we define T V (u) as a functional over L2 (Ω) as
(
|Du| (Ω) if u ∈ BV (Ω), kukL∞ ≤ 1
T V (u) =
+∞
otherwise.
In the subsequent we want to characterize the error we make by solving (3.7) for u, i.e., how large
do we expect the distance between the restored image u and the original image ftrue to be.
Now, let ∆−1 be the inverse operator to −∆ with zero Dirichlet boundary conditions as before.
In our case the operator A in (3.5) is the embedding operator from H01 (Ω) into H −1 (Ω) and stands
in front of the whole term A(u − fdam ), cf. (3.7). The adjoint operator is A∗ = ∆−1 which maps
H −1 (Ω) into H01 (Ω). We assume that the given image fdam coincides with ftrue outside of the
inpainting domain, i.e.,
fdam = ftrue
fdam = 0
in Ω \ D
in D.
(3.8)
20
M. Burger, L. He, and C.-B. Schönlieb
Further we assume that ftrue satisfies the source condition (3.6), i.e.,
There exists ξ ∈ ∂T V (ftrue ) such that ξ = A∗ q = ∆−1 q for a source element q ∈ H −1 (Ω). (3.9)
For the following analysis we first rewrite (3.7). For û, a solution of (3.7), we get
p̂ + λ0 ∆−1 (û − ftrue ) = ∆−1 [(λ0 − λ)(û − ftrue )] ,
p̂ ∈ ∂T V (û).
Here we replaced fdam by ftrue using assumption (3.8). By adding a ξ ∈ ∂T V (ftrue ) from (3.9)
to the above equation we obtain
λ
−1
−1
p̂ − ξ + λ0 ∆ (û − ftrue ) = −ξ + λ0 ∆
1−
(û − ftrue )
λ0
Taking the duality product with û − ftrue (which is just the inner product in L2 (Ω) in our case)
we get
λ
2
−1
(û
−
f
),
û
−
f
,
DTsymm
(û,
f
)
+
λ
kû
−
f
k
=
∇ξ,
∇∆
(û
−
f
)
+
λ
1
−
true
true
true
0
true −1
true
0
V
λ0
−1
where
DTsymm
(û, ftrue ) = hû − ftrue , p̂ − ξi ,
V
p̂ ∈ ∂T V (û), ξ ∈ ∂T V (ftrue ).
An application of Young’s inequality yields
DTsymm
(û, ftrue )
V
2
λ0
1
λ
2
2
+
kû − ftrue k−1 ≤
kξk1 + λ0 1 −
(û − ftrue )
2
λ0
λ0
−1
(3.10)
For the last term we obtain
E
D
E
D = supφ,kφk−1 =1 − ∆−1 φ, 1 − λλ0 v
1 − λλ0 v = supφ,kφk−1 =1 φ, 1 − λλ0 v
−1 D
−1
E
= supφ,kφk−1 =1 − 1 − λλ0 ∆−1 φ, v ≤Hölder kvk2 · supφ,kφk−1 =1 1 − λλ0 ∆−1 φ .
2
With ∆−1 : H −1 → H 1 ,→ Lr , 2 < r < ∞ we get
R Ω
1−
λ
λ0
=choose q= r2
2
R
2
2q q1
R
1
∆−1 φ dx = D ∆−1 φ dx ≤Hölder |D| q0 · Ω ∆−1 φ
2
r−2
r−2
r−2
2
|D| r · ∆−1 φ ≤H 1 ,→Lr C|D| r kφk = C|D| r ,
−1
p
i.e.,
2
2
1 − λ v ≤ C|D| r−2
r
kvk .
λ0
−1
(3.11)
Applying (3.11) to (3.10) we see that
DTsymm
(û, ftrue ) +
V
λ0
1
2
2
(r−2)/r
2
kû − ftrue k−1 ≤
kξk1 + Cλ0 |D|
kû − ftrue k
2
λ0
To estimate the last term we use some error estimates for T V − inpainting computed in [15]. First
we have
Z
Z
2
kû − ftrue k =
(û − ftrue )2 dx +
(û − ftrue )2 dx.
Ω\D
D
Cahn-Hilliard and BV-H −1 inpainting
21
Since û − ftrue is uniformly bounded in Ω (this follows from the L∞ bound in the definition of
T V (u)) we estimate the first term by a positive constant K1 and the second term by the L1 norm
over D. We obtain
Z
2
kû − ftrue k ≤ K1 + K2
|û − ftrue | dx.
D
Now let û ∈ BV (Ω) be given by û = us + ud , where us is a smooth function and ud is a piecewise
constant function. Following the error analysis in [15] (Theorem 8.) for functions û ∈ BV (Ω) we
have
kû − ftrue k
2
≤ K1 + K2 err(D)
≤ K1 + K2 |D| C (M (us ), β) + 2 R(ud ) ,
where M (us ) is the smoothness bound for us , β is determined from the shape of D, and the error
region R(ud ) is defined from the level lines of ud . Note that in general the error region from
higher-order inpainting models including the T V seminorm is smaller than that from T V − L2
inpainting (cf. Section 3.2. in [15]).
Finally we end up with
(û, ftrue ) +
DTsymm
V
1
λ0
2
2
(r−2)/r
kû − ftrue k−1 ≤
kξk1 + Cλ0 |D|
errinpaint ,
2
λ0
(3.12)
with
errinpaint := K1 + K2 |D| C (M (us ), β) + 2 R(ud ) .
The first term in (3.12) depends on the regularizer T V , and the second term on the size of the
inpainting domain D.
Remark 3.6. From inequality (3.12) we derive an optimal scaling for λ0 , i.e., a scaling which
minimizes the inpainting error. It reads
λ0^2 |D|^{(r−2)/r} ∼ 1,        i.e.,        λ0 ∼ |D|^{−(r−2)/(2r)}.
In two space dimensions r can be chosen arbitrarily big, which gives λ0 ∼ 1/√|D| as the optimal order for λ0.
Stability estimates for (3.7) can also be derived with an analogous technique. For ui being
the solution of (3.7) with fdam = fi (again assuming that fi = ftrue in Ω \ D), the estimate
D^{symm}_J(u1, u2) + (λ0/2) ||u1 − u2||^2_{−1} ≤ (λ0/2) ∫_D (f1 − f2)^2 dx        (3.13)
holds.
4. Numerics. In the following, numerical results for the two inpainting approaches (1.1) and (1.5) are presented. For both approaches we used convexity splitting algorithms, proposed by Eyre in [24], for the discretization in time. For more details on the application of convexity splitting algorithms to higher order inpainting see [10].
For the space discretization we used the discrete cosine transform to evaluate the finite difference approximations of the derivatives efficiently and to preserve the Neumann boundary conditions in our inpainting approaches (cf. also [10] for a detailed description).
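As an illustration of this spatial discretization (not part of the original implementation), the following Python sketch, assuming NumPy/SciPy, unit grid spacing and a 5-point Laplacian with reflecting boundaries, verifies numerically that the two-dimensional discrete cosine transform diagonalizes the Laplacian with Neumann boundary conditions; this is what makes the implicit steps of the schemes below cheap to invert.

import numpy as np
from scipy.fft import dctn, idctn

n, m = 64, 64
u = np.random.rand(n, m)                      # arbitrary test image

# 5-point Laplacian with reflecting (Neumann) boundaries, grid spacing h = 1
up = np.pad(u, 1, mode='edge')
lap = up[:-2, 1:-1] + up[2:, 1:-1] + up[1:-1, :-2] + up[1:-1, 2:] - 4.0 * u

# the same operator applied as a multiplier in the cosine basis
lx = 2.0 * (np.cos(np.pi * np.arange(n) / n) - 1.0)
ly = 2.0 * (np.cos(np.pi * np.arange(m) / m) - 1.0)
Lam = lx[:, None] + ly[None, :]               # eigenvalues of the Neumann Laplacian
lap_dct = idctn(Lam * dctn(u, norm='ortho'), norm='ortho')

print(np.max(np.abs(lap - lap_dct)))          # agrees up to round-off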
4.1. Convexity splitting scheme for Cahn-Hilliard inpainting. For the discretization
in time we use a convexity splitting scheme applied by Bertozzi et al. [8] to Cahn-Hilliard inpainting. The original Cahn-Hilliard equation is a gradient flow in H −1 for the energy
E1[u] = ∫_Ω ( (ε/2) |∇u|^2 + (1/ε) F(u) ) dx,
Fig. 4.1. Destroyed binary image and the solution of Cahn-Hilliard inpainting with switching value: u(1200) with ε = 0.1, u(2400) with ε = 0.01
Fig. 4.2. Destroyed binary image and the solution of Cahn-Hilliard inpainting with switching value: u(800) with ε = 0.8, u(1600) with ε = 0.01
Fig. 4.3. Destroyed binary image and the solution of Cahn-Hilliard inpainting with switching value: u(800) with ε = 0.8, u(1600) with ε = 0.01
while the fitting term in (1.1) can be derived from a gradient flow in L2 for the energy
E2[u] = (1/2) ∫_Ω λ (f − u)^2 dx.
We apply convexity splitting for both E1 and E2 separately. Namely we split E1 as E1 = E11 −E12
with
E11 = ∫_Ω ( (ε/2) |∇u|^2 + (C1/2) |u|^2 ) dx,
and
E12 = ∫_Ω ( −(1/ε) F(u) + (C1/2) |u|^2 ) dx.
A possible splitting for E2 is E2 = E21 − E22 with
E21 = ∫_Ω (C2/2) |u|^2 dx,
and
E22 = ∫_Ω ( −(λ/2)(f − u)^2 + (C2/2) |u|^2 ) dx.
For the splittings discussed above the resulting time-stepping scheme is
(u^{k+1} − u^k)/τ = −∇_{H^{-1}}( E11^{k+1} − E12^k ) − ∇_{L^2}( E21^{k+1} − E22^k ),
where ∇H −1 and ∇L2 represent gradient descent with respect to the H −1 inner product and the
L2 inner product respectively. This translates to a numerical scheme of the form
(u^{k+1} − u^k)/τ + ε ∆∆u^{k+1} − C1 ∆u^{k+1} + C2 u^{k+1} = (1/ε) ∆F'(u^k) − C1 ∆u^k + λ(f − u^k) + C2 u^k.        (4.1)
To make sure that E11, E12 and E21, E22 are convex, the constants have to satisfy C1 > 1/ε and C2 > λ0.
For the discretization in space we used finite differences and spectral methods, i.e., the discrete cosine transform, to simplify the inversion of the Laplacian ∆ for the computation of u^{k+1}.
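As an illustration of how (4.1) can be realized with the cosine transform, the sketch below performs one time step. It is a minimal sketch, not the authors' code: it assumes the double-well potential F(u) = u^2(1 − u)^2, unit grid spacing, and that dctn/idctn from SciPy provide the cosine transform mentioned above; the function name and its arguments are illustrative.

import numpy as np
from scipy.fft import dctn, idctn

def cahn_hilliard_step(u, f, lam, eps, tau, C1, C2):
    # One convexity splitting step of (4.1); lam is the weight lambda(x),
    # i.e. lambda_0 on Omega \ D and 0 on D. Requires C1 > 1/eps, C2 > lambda_0.
    n, m = u.shape
    lx = 2.0 * (np.cos(np.pi * np.arange(n) / n) - 1.0)
    ly = 2.0 * (np.cos(np.pi * np.arange(m) / m) - 1.0)
    Lam = lx[:, None] + ly[None, :]                  # Neumann Laplacian in DCT space

    Fp = 2.0 * u * (1.0 - u) * (1.0 - 2.0 * u)       # F'(u) for F(u) = u^2 (1 - u)^2
    # explicit part: u^k/tau + (1/eps) Delta F'(u^k) - C1 Delta u^k
    #                + lambda (f - u^k) + C2 u^k
    rhs_hat = (dctn(u, norm='ortho') / tau
               + Lam * dctn(Fp, norm='ortho') / eps
               - C1 * Lam * dctn(u, norm='ortho')
               + dctn(lam * (f - u) + C2 * u, norm='ortho'))
    # implicit part: (1/tau + eps Lam^2 - C1 Lam + C2) u^{k+1}
    return idctn(rhs_hat / (1.0 / tau + eps * Lam**2 - C1 * Lam + C2), norm='ortho')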
In [10] the authors prove that the above time-stepping scheme is unconditionally stable in the sense that the numerical solution u^k is uniformly bounded on a finite time interval. Moreover the discrete solution converges to the exact solution of (1.1) as τ → 0. These properties make (4.1) a stable and reliable discrete approximation of the continuous equation (1.1). Concerning the computational speed of the scheme, it is already remarked in [7] and [8] that (4.1) is certainly faster than numerical solvers for competing curvature-based models. Nevertheless the convexity conditions on the constants C1 and C2 damp the convergence of the scheme, and hence reaching a steady state may require a large number of iterations (cf. also the number of iterations needed to compute the examples in Figures 4.1-4.3). An investigation of fast numerical solvers for (1.1) is a matter of future research. A very recent approach in this direction was made in [11], where the authors propose a multigrid method for inpainting with CDD.
In Figures 4.1-4.3 Cahn-Hilliard inpainting was applied to three different binary images. In all of the examples we follow the procedure of [7], i.e., the inpainted image is computed in a two-step process. In the first step Cahn-Hilliard inpainting is solved with a rather large value of ε, e.g., ε = 0.1, until the numerical scheme is close to steady state. In this step the level lines are continued into the missing domain. In a second step the result of the first step is used as the initial condition for (4.1) with a small ε, e.g., ε = 0.01, in order to sharpen the contours of the image contents. The reason for this two-step procedure is twofold. First of all, in [8] the authors give numerical evidence that the steady state of (2.4) is not unique, i.e., it depends on the initial condition of the equation. As a consequence, computing the inpainted image by applying Cahn-Hilliard inpainting with a small ε only might not continue the level lines into the missing domain as desired. See also [8] for a bifurcation diagram based on the numerical computations of the authors. The second reason for solving Cahn-Hilliard inpainting in two steps is that it is computationally less expensive. Solving (4.1) for, e.g., ε = 0.1 is faster than solving it for ε = 0.01. Again, this is because of the damping in (4.1) introduced by the constant C1.
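A hypothetical two-step driver built on the step function sketched above could look as follows. It is only an illustration: the test image, the inpainting domain D, the value λ0 = 10^3 and the iteration counts are placeholders chosen in the spirit of Figures 4.1-4.3.

import numpy as np

# illustrative data: a binary edge image, destroyed on the square D
n = 128
ftrue = np.zeros((n, n)); ftrue[:, n // 2:] = 1.0
mask = np.ones((n, n)); mask[40:90, 40:90] = 0.0           # 1 on Omega \ D, 0 on D
fdam = ftrue * mask                                        # fdam = 0 inside D, cf. (3.8)
lam0 = 1e3
lam = lam0 * mask                                          # lambda(x)

u = fdam.copy()
for eps, n_iter in [(0.1, 1200), (0.01, 1200)]:            # step 1: large eps, step 2: small eps
    C1, C2 = 1.0 / eps + 1.0, lam0 + 1.0                   # convexity: C1 > 1/eps, C2 > lambda_0
    for _ in range(n_iter):
        u = cahn_hilliard_step(u, fdam, lam, eps, tau=1.0, C1=C1, C2=C2)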
4.2. Convexity splitting scheme for TV-H^{-1} inpainting. We consider equation (1.5) where p ∈ ∂TV(u) is replaced by the formal expression ∇·(∇u/|∇u|), namely
u_t = −∆( ∇·(∇u/|∇u|) ) + λ(f − u).        (4.2)
Similar to the convexity splitting for the Cahn-Hilliard inpainting we propose the following splitting
for the TV-H −1 inpainting equation. The regularizing term in (4.2) can be modeled by a gradient
flow in H −1 of the energy
E1 = ∫_Ω |∇u| dx.
We split E1 in E11 − E12 with
E11 = ∫_Ω (C1/2) |∇u|^2 dx,
E12 = ∫_Ω ( −|∇u| + (C1/2) |∇u|^2 ) dx.
The fitting term is a gradient flow in L^2 of the energy
E2 = (1/2) ∫_Ω λ (f − u)^2 dx
and is split into E2 = E21 − E22 with
E21 = ∫_Ω (C2/2) |u|^2 dx,
E22 = (1/2) ∫_Ω ( −λ(f − u)^2 + C2 |u|^2 ) dx.
Analogous to above, the resulting time-stepping scheme is
(u^{k+1} − u^k)/τ + C1 ∆∆u^{k+1} + C2 u^{k+1} = C1 ∆∆u^k − ∆( ∇·(∇u^k/|∇u^k|) ) + C2 u^k + λ(f − u^k).        (4.3)
In order to make the scheme unconditionally stable, the constants C1 and C2 have to be chosen so that E11, E12, E21, E22 are all convex. The choice of C1 depends on the regularization of the total variation we are using. With the square regularization, i.e., replacing |∇u| by √(|∇u|^2 + δ^2), the condition turns out to be C1 > 1/δ and C2 > λ0.
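In the same spirit as before, one step of (4.3) can be sketched as follows. This is again an illustration under stated assumptions, not the authors' implementation: the regularized curvature ∇·(∇u/√(|∇u|^2 + δ^2)) is evaluated here with numpy.gradient (one-sided differences at the boundary) rather than with the cosine transform, unit grid spacing is assumed, and the function name and arguments are illustrative.

import numpy as np
from scipy.fft import dctn, idctn

def tv_h1_step(u, f, lam, delta, tau, C1, C2):
    # One convexity splitting step of (4.3); requires C1 > 1/delta, C2 > lambda_0.
    n, m = u.shape
    lx = 2.0 * (np.cos(np.pi * np.arange(n) / n) - 1.0)
    ly = 2.0 * (np.cos(np.pi * np.arange(m) / m) - 1.0)
    Lam = lx[:, None] + ly[None, :]                  # Neumann Laplacian in DCT space

    # regularized curvature  kappa = div( grad u / sqrt(|grad u|^2 + delta^2) )
    ux, uy = np.gradient(u)
    mag = np.sqrt(ux**2 + uy**2 + delta**2)
    kappa = np.gradient(ux / mag, axis=0) + np.gradient(uy / mag, axis=1)

    # explicit part: u^k/tau + C1 Delta Delta u^k - Delta kappa(u^k)
    #                + C2 u^k + lambda (f - u^k)
    rhs_hat = (dctn(u, norm='ortho') / tau
               + C1 * Lam**2 * dctn(u, norm='ortho')
               - Lam * dctn(kappa, norm='ortho')
               + dctn(C2 * u + lam * (f - u), norm='ortho'))
    # implicit part: (1/tau + C1 Lam^2 + C2) u^{k+1}
    return idctn(rhs_hat / (1.0 / tau + C1 * Lam**2 + C2), norm='ortho')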
The discrete scheme (4.3) was analyzed in [10]. As for Cahn-Hilliard inpainting, the authors prove therein that the scheme is unconditionally stable. In this case the convergence of the discrete solution to the exact solution of (1.5) (where the subgradient p was replaced by its relaxed version ∇·(∇u/√(|∇u|^2 + δ^2))) only holds under additional assumptions on the regularity of the exact solution. Results for the one-dimensional case developed in [9] suggest that these regularity assumptions also hold in two dimensions with a sufficiently regular initial condition for the equation. However, a rigorous proof of this fact is currently missing and is a challenge for future research. As in the case of Cahn-Hilliard inpainting, the convexity splitting scheme (4.3) for TV-H^{-1} inpainting converges rather slowly due to the damping induced by C1 and C2. Nevertheless its solution is numerically stable and approximates the exact solution accurately (which was rigorously proven for smooth solutions of (1.5) in [10]).
In Figures 4.4-4.8 examples for the application of TV-H^{-1} inpainting to grayvalue images are shown. In Figure 4.5 a comparison of the TV-H^{-1} inpainting result with the result obtained by the second-order TV-L^2 inpainting model is presented for a crop of the image in Figure 4.4. The superiority of the fourth-order TV-H^{-1} inpainting model over the second-order model with respect to the desired continuation of edges into the missing domain is clearly visible. Other examples which support this claim are presented in Figures 4.6 and 4.7, where the line is connected by the TV-H^{-1} inpainting model but clearly split by the second-order TV-L^2 model. It would be interesting to strengthen this numerical observation with a rigorous result, as was done in [8] for Cahn-Hilliard inpainting, cf. (1.2). The authors consider this as another important contribution of future research.
Acknowledgments. This work was partially supported by KAUST (King Abdullah University of Science and Technology), by the WWTF (Wiener Wissenschafts-, Forschungs- und
Technologiefonds) project nr.CI06 003, by the FFG project Erarbeitung neuer Algorithmen zum
Image Inpainting project nr. 813610, and the PhD program Wissenschaftskolleg taking place at
the University of Vienna.
Fig. 4.4. TV-H^{-1} inpainting: u(1000) with λ0 = 10^3
Fig. 4.5. (l.) u(1000) with TV-H^{-1} inpainting, (r.) u(5000) with TV-L^2 inpainting
Fig. 4.6. TV-H^{-1} inpainting compared to TV-L^2 inpainting: u(5000) with λ0 = 10
The authors further would like to thank Andrea Bertozzi for her suggestions concerning the
fixed point approach for the stationary equation, Massimo Fornasier for several discussions on the
topic and Peter Markowich for remarks on the manuscript. We would also like to thank the editor
and the referees for useful comments.
Fig. 4.7. TV-H^{-1} inpainting compared to TV-L^2 inpainting: u(5000) with λ0 = 10
Fig. 4.8. TV-H^{-1} inpainting: u(1000) with λ0 = 10^3
Appendix A. Neumann boundary conditions and the space H∂−1 (Ω).
In this section we want to pose the Cahn-Hilliard inpainting problem with Neumann boundary
conditions in a way such that the analysis from Section 2 can be carried out in a similar way.
Namely we consider
u_t = ∆( −ε∆u + (1/ε) F'(u) ) + λ(f − u)    in Ω,
∂u/∂ν = ∂∆u/∂ν = 0    on ∂Ω.
For the existence of a stationary solution of this equation we consider again a fixed point approach
similar to (2.4) in the case of Dirichlet boundary conditions, i.e.,
(u − v)/τ = ∆( −ε∆u + (1/ε) F'(u) ) + λ(f − u) + (λ0 − λ)(v − u)    in Ω,
∂u/∂ν = ∂( ε∆u − (1/ε) F'(u) )/∂ν = 0    on ∂Ω.        (A.1)
To reformulate the above equation in terms of the operator ∆−1 with Neumann boundary conditions we first have to introduce the space H∂−1 (Ω) in which the operator ∆−1 is now the inverse
of −∆ with Neumann boundary conditions.
Thus we define the non-standard Hilbert space
H_∂^{-1}(Ω) = { F ∈ (H^1(Ω))^* | ⟨F, 1⟩_{(H^1)^*,H^1} = 0 }.
Since Ω is bounded we know 1 ∈ H 1 (Ω), hence H∂−1 (Ω) is well defined. Before we define a norm
and an inner product on H∂−1 (Ω) we have to define more spaces. Let
H^1_φ(Ω) = { ψ ∈ H^1(Ω) : ∫_Ω ψ dx = 0 },
with norm ||u||_{H^1_φ} := ||∇u||_{L^2} and inner product ⟨u, v⟩_{H^1_φ} := ⟨∇u, ∇v⟩_{L^2}. This is a Hilbert space and the norms ||.||_{H^1} and ||.||_{H^1_φ} are equivalent on H^1_φ(Ω). Let (H^1_φ(Ω))^* denote the dual of H^1_φ(Ω). We will use (H^1_φ(Ω))^* to induce an inner product on H_∂^{-1}(Ω). Given F ∈ (H^1_φ(Ω))^* with associate u ∈ H^1_φ(Ω) (from the Riesz representation theorem) we have by definition
⟨F, ψ⟩_{(H^1_φ)^*,H^1_φ} = ⟨u, ψ⟩_{H^1_φ} = ⟨∇u, ∇ψ⟩_{L^2}    ∀ψ ∈ H^1_φ(Ω).
Let us now define a norm and an inner product on H_∂^{-1}(Ω).
Definition A.1.
H_∂^{-1}(Ω) := { F ∈ (H^1(Ω))^* | ⟨F, 1⟩_{(H^1)^*,H^1} = 0 },
||F||_{H_∂^{-1}} := || F|_{H^1_φ} ||_{(H^1_φ)^*},
⟨F1, F2⟩_{H_∂^{-1}} := ⟨∇u1, ∇u2⟩_{L^2},
where F1, F2 ∈ H_∂^{-1}(Ω) and where u1, u2 ∈ H^1_φ(Ω) are the associates of F1|_{H^1_φ}, F2|_{H^1_φ} ∈ (H^1_φ(Ω))^*.
At this point it is not entirely obvious that for a given F ∈ H_∂^{-1}(Ω) we have F|_{H^1_φ} ∈ (H^1_φ(Ω))^*. That this is the case is explained in the following theorem.
Theorem A.2.
1. H_∂^{-1}(Ω) is closed in (H^1(Ω))^*.
2. The norms ||.||_{H_∂^{-1}} and ||.||_{(H^1)^*} are equivalent on H_∂^{-1}(Ω).
Theorem A.2 can be easily checked by applying the definitions and using the fact that the norms ||.||_{H^1} and ||.||_{H^1_φ} are equivalent on H^1_φ(Ω). From point 1 of the theorem we have that H_∂^{-1}(Ω) is a Hilbert space w.r.t. the (H^1(Ω))^* norm, and point 2 tells us that the norms ||.||_{H_∂^{-1}} and ||.||_{(H^1)^*} are equivalent on H_∂^{-1}(Ω). Therefore the norm in Definition A.1 is well defined and H_∂^{-1}(Ω) is a Hilbert space w.r.t. ||.||_{H_∂^{-1}}.
In the following we want to characterize elements F ∈ H_∂^{-1}(Ω). By the above definition we have, for each F ∈ H_∂^{-1}(Ω), that there exists a unique element u ∈ H^1_φ(Ω) such that
⟨F, ψ⟩_{(H^1)^*,H^1} = ∫_Ω ∇u · ∇ψ dx    ∀ψ ∈ H^1_φ(Ω).        (A.2)
Since ⟨F, 1⟩_{(H^1)^*,H^1} = 0, we see that ⟨F, ψ + K⟩_{(H^1)^*,H^1} = ⟨F, ψ⟩_{(H^1)^*,H^1} for all constants K ∈ R, and therefore (A.2) extends to all ψ ∈ H^1(Ω). We define
∆^{-1}F := u,        (A.3)
the unique solution to (A.2).
Now suppose F ∈ L^2(Ω) and assume u ∈ H^2(Ω). Set ⟨F, ψ⟩ := ∫_Ω F ψ dx. Because L^2(Ω) ⊂ H_∂^{-1}(Ω), such an F is also an element of H_∂^{-1}(Ω). Thus there exists a unique element u ∈ H^1_φ(Ω) such that
∫_Ω (−∆u − F) ψ dx + ∫_{∂Ω} (∇u · ν) ψ ds = 0    ∀ψ ∈ H^1_φ(Ω).
Therefore u ∈ H^1_φ(Ω) is the unique weak solution of the following problem:
−∆u − F = 0    in Ω,
∇u · ν = 0    on ∂Ω.        (A.4)
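Discretely, this Neumann inverse Laplacian, restricted to zero-mean data as required by the space H_∂^{-1}(Ω), can be realized with the cosine transform used in Section 4. The following sketch is an illustration under assumptions (unit grid spacing, SciPy's dctn/idctn, an illustrative function name): the zero DCT mode, which carries the mean, is projected out and the remaining modes are divided by the eigenvalues of −∆.

import numpy as np
from scipy.fft import dctn, idctn

def neumann_inverse_laplacian(F):
    # Solves -Delta u = F - mean(F) with Neumann boundary conditions and mean(u) = 0.
    n, m = F.shape
    lx = 2.0 * (np.cos(np.pi * np.arange(n) / n) - 1.0)
    ly = 2.0 * (np.cos(np.pi * np.arange(m) / m) - 1.0)
    Lam = lx[:, None] + ly[None, :]          # eigenvalues of Delta (Neumann), Lam[0, 0] = 0

    F_hat = dctn(F, norm='ortho')
    F_hat[0, 0] = 0.0                        # project onto zero-mean data
    u_hat = np.zeros_like(F_hat)
    u_hat[1:, :] = -F_hat[1:, :] / Lam[1:, :]
    u_hat[0, 1:] = -F_hat[0, 1:] / Lam[0, 1:]
    return idctn(u_hat, norm='ortho')        # u has zero mean since u_hat[0, 0] = 0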
Remark A.3. With the above characterization of elements F ∈ H∂−1 (Ω) and the notation
(A.3) for its associates the inner product and the norm can be written as
⟨F1, F2⟩_{H_∂^{-1}} := ∫_Ω ∇∆^{-1}F1 · ∇∆^{-1}F2 dx    ∀F1, F2 ∈ H_∂^{-1}(Ω),
and norm
||F||_{H_∂^{-1}} := ( ∫_Ω |∇∆^{-1}F|^2 dx )^{1/2}.
Throughout the rest of this appendix we will write the short forms h., .i−1 and k.k−1 for the inner
product and the norm in H∂−1 (Ω) respectively.
Now it is important to notice that, in order to rewrite (A.1) in terms of ∆^{-1}, we require the "right hand side" of the equation, i.e., (u − v)/τ + λ(u − f) + (λ0 − λ)(u − v), to be an element of our new space H_∂^{-1}(Ω) (cf. Definition A.1). In other words the "right hand side" has to have zero mean over Ω. Because we cannot guarantee this property for solutions of the fixed point equation (A.1), we modify the right hand side by subtracting its mean. Let
FΩ = (1/τ) FΩ^1 + λ0 FΩ^2,
FΩ^1 = (1/|Ω|) ∫_Ω (u − v) dx,
FΩ^2 = (1/|Ω|) ∫_Ω [ (λ/λ0)(u − f) + (1 − λ/λ0)(u − v) ] dx,
and consider instead of (A.1) the equation
ε∆u − (1/ε) F'(u) = ∆^{-1}[ (u − v)/τ − λ(f − u) − (λ0 − λ)(v − u) − FΩ ]    in Ω,
∂u/∂ν = 0    on ∂Ω,
where the second Neumann boundary condition ∂( ε∆u − (1/ε) F'(u) )/∂ν = 0 on ∂Ω is included in the definition of ∆^{-1}. The functional of the corresponding variational formulation then reads
J_ε(u, v) = ∫_Ω ( (ε/2) |∇u|^2 + (1/ε) F(u) ) dx + (1/2τ) ||(u − v) − FΩ^1||^2_{−1} + (λ0/2) ||u − (λ/λ0)f − (1 − λ/λ0)v − FΩ^2||^2_{−1}.
With these definitions the proof of the existence of a stationary solution for the modified Cahn-Hilliard equation with Neumann boundary conditions can be carried out similarly to the proof in Section 2. Note that every solution of (1.1) fulfills
d/dt ∫_Ω u dx = ∫_Ω λ(f − u) dx.
This means that for a stationary solution û the integral C ∫_Ω λ(f − û) dx = 0 for every constant C ∈ R, i.e., the "right hand side" has zero mean and therefore FΩ^1 = FΩ^2 = 0.
Appendix B. Functions of bounded variation.
The following results can be found in [3]. Let Ω ⊂ R2 be an open and bounded Lipschitz
domain. As in [3] the space of functions of bounded variation BV (Ω) in two space dimensions is
defined as follows:
Definition B.1. (BV (Ω)) Let u ∈ L1 (Ω). We say that u is a function of bounded
variation in Ω if the distributional derivative of u is representable by a finite Radon measure in
Ω, i.e., if
∫_Ω u (∂φ/∂x_i) dx = − ∫_Ω φ dD_i u    ∀φ ∈ C_c^∞(Ω), i = 1, 2,
for some R^2-valued measure Du = (D_1 u, D_2 u) in Ω. The vector space of all functions of bounded
variation in Ω is denoted by BV (Ω). Further, the space BV (Ω) can be characterized by the total
variation of Du. For this we first define the so called variation V (u, Ω) of a function u ∈ L1loc (Ω).
Definition B.2. (Variation) Let u ∈ L1loc (Ω). The variation V (u, Ω) of u in Ω is defined
by
V(u, Ω) := sup { ∫_Ω u div φ dx : φ ∈ [C_c^1(Ω)]^2, ||φ||_∞ ≤ 1 }.
A simple integration by parts proves that
V(u, Ω) = ∫_Ω |∇u| dx,
if u ∈ C 1 (Ω). By a standard density argument this is also true for functions u ∈ W 1,1 (Ω). Before
we proceed with the characterization of BV (Ω) let us recall the definition of the total variation of
a measure:
Definition B.3. (Total variation of a measure) Let (X, E) be a measure space. If µ is a
measure, we define its total variation |µ| as follows:
|µ|(E) := sup { Σ_{h=0}^∞ |µ(E_h)| : E_h ∈ E pairwise disjoint, E = ∪_{h=0}^∞ E_h },    ∀E ∈ E.
With Definition B.2 the space BV (Ω) can be characterized as follows
Theorem B.4. Let u ∈ L1 (Ω). Then, u belongs to BV (Ω) if and only if V (u, Ω) < ∞.
In addition, V (u, Ω) coincides with |Du| (Ω), the total variation of Du, for any u ∈ BV (Ω) and
u 7→ |Du| (Ω) is lower semicontinuous in BV (Ω) with respect to the L1loc (Ω) topology.
Note that BV (Ω) is a Banach space with respect to the norm
kukBV (Ω) = kukL1 (Ω) + |Du| (Ω).
Now we introduce so called weak∗ convergence in BV (Ω) which is useful for its compactness
properties. Note that this convergence is much weaker than the norm convergence.
Definition B.5. (Weak* convergence) Let u, u_h ∈ BV(Ω). We say that (u_h) weakly* converges in BV(Ω) to u (in signs u_h ⇀* u) if (u_h) converges to u in L^1(Ω) and (Du_h) weakly* converges to Du in Ω, i.e.,
lim_{h→∞} ∫_Ω φ dDu_h = ∫_Ω φ dDu    ∀φ ∈ C_0(Ω).
A simple criterion for weak∗ convergence is the following:
Theorem B.6. Let (uh ) ⊂ BV (Ω). Then (uh ) weakly∗ converges to u in BV (Ω) if and only
if (uh ) is bounded in BV (Ω) and converges to u in L1 (Ω).
Further we have the following compactness theorem:
Theorem B.7. (Compactness for BV (Ω))
• Let Ω be a bounded domain with compact Lipschitz boundary. Every sequence (u_h) ⊂ BV_loc(Ω) satisfying
sup { ∫_A |u_h| dx + |Du_h|(A) : h ∈ N } < ∞    ∀A ⊂⊂ Ω open,
admits a subsequence (u_{h_k}) converging in L^1_loc(Ω) to u ∈ BV_loc(Ω). If the sequence is further bounded in BV(Ω) then u ∈ BV(Ω) and a subsequence converges weakly* to u.
• Let Ω be a bounded domain in R^d with Lipschitz boundary. Then, every uniformly bounded sequence (u_k)_{k≥0} in BV(Ω) is relatively compact in L^r(Ω) for 1 ≤ r < d/(d−1), d ≥ 1. Moreover, there exists a subsequence u_{k_j} and u in BV(Ω) such that u_{k_j} ⇀ u weakly* in BV(Ω). In particular for d = 2 this compact embedding holds for 1 ≤ r < 2.
Let u ∈ L^1(Ω). We introduce the mean value u_Ω of u as
u_Ω := (1/|Ω|) ∫_Ω u(x) dx.
A generalization of the Poincaré inequality gives the so-called Poincaré-Wirtinger inequality for functions in BV(Ω).
Theorem B.8. (Poincaré-Wirtinger inequality) If Ω ⊂ R^2 is a bounded, open and connected domain with compact Lipschitz boundary, we have
||u − u_Ω||_{L^p(Ω)} ≤ C |Du|(Ω)    ∀u ∈ BV(Ω), 1 ≤ p ≤ 2,
for some constant C depending only on Ω.
Finally we introduce the notion of the subdifferential of a function.
Definition B.9. Let X be a locally convex space, X^* its dual, ⟨·,·⟩ the bilinear pairing over X × X^*, and F a mapping of X into R. The subdifferential of F at u ∈ X is defined as
∂F(u) = { p ∈ X^* | ⟨v − u, p⟩ ≤ F(v) − F(u), ∀v ∈ X }.
Every normed vector space is a locally convex space and therefore the theory of subdifferentials
applies to our framework (with X = BV (Ω)).
REFERENCES
[1] R. Acar, and C.R. Vogel, Analysis of bounded variation penalty method for ill-posed problems, Inverse Problems, Vol. 10, No. 6, pp.1217-1229, 1994.
[2] H.W. Alt, Linear Funktionalanalysis, Springer Verlag, 1992.
[3] L. Ambrosio, N. Fusco, and D. Pallara, Functions of Bounded Variation and Free Discontinuity Problems,
Mathematical Monographs, Oxford University Press, 2000.
[4] G. Aubert, and P. Kornprobst, Mathematical Problems in Image Processing: Partial Differential Equations
and the Calculus of Variations, Springer Verlag, Applied Mathematical Sciences, Vol 147, November
2001.
[5] M. Bertalmio, G. Sapiro, V. Caselles, and C. Ballester, Image inpainting, Computer Graphics, SIGGRAPH
2000, July, 2000.
[6] M. Bertalmio, L. Vese, G. Sapiro, and S. Osher, Simultaneous structure and texture image inpainting, IEEE
Trans. Image Process., Vol. 12, Nr. 8, pp. 882-889, 2003.
[7] A. Bertozzi, S. Esedoglu, and A. Gillette, Inpainting of Binary Images Using the Cahn-Hilliard Equation,
IEEE Trans. Image Proc. 16(1) pp. 285-291, 2007.
[8] A. Bertozzi, S. Esedoglu, and A. Gillette, Analysis of a two-scale Cahn-Hilliard model for image inpainting,
Multiscale Modeling and Simulation, vol. 6, no. 3, pages 913-936, 2007.
[9] A. Bertozzi, J. Greer, S. Osher, and K. Vixie, Nonlinear regularizations of TV based PDEs for image processing , AMS Series of Contemporary Mathematics, vol. 371, pages 29-40, Gui-Qiang Chen, George Gasper,
and Joseph Jerome eds, 2005.
[10] A. Bertozzi, C.-B. Schönlieb, Unconditionally stable schemes for higher order inpainting, in preparation.
[11] C. Brito-Loeza, and K. Chen, Multigrid method for a modified curvature driven diffusion model for image
inpainting, Journal of Computational Mathematics, Vol.26, No.6, pp. 856–875, 2008.
[12] M. Burger, and S. Osher, Convergence rates of convex variational regularization, Inverse Problems, Vol. 20,
Nr. 5, pp. 1411-1421(11), 2005.
[13] M. Burger, E. Resmerita, L. He, Error Estimation for Bregman Iterations and Inverse Scale Space Methods
in Image Restoration, Computing, Vol. 81, pp. 109-135, 2007.
[14] V. Caselles, J.-M. Morel, and C. Sbert, An axiomatic approach to image interpolation, IEEE Trans. Image
Processing, 7(3):376–386, 1998.
[15] T.F. Chan, and S.H. Kang, Error Analysis for Image Inpainting, J. Math. Imaging Vis. 26, 1-2, 85-103, 2006.
[16] T.F. Chan, S.H. Kang, and J. Shen, Euler’s elastica and curvature-based inpainting, SIAM J. Appl. Math.,
Vol. 63, Nr.2, pp.564–592, 2002.
[17] T. F. Chan and J. Shen, Mathematical models for local non-texture inpaintings, SIAM J. Appl. Math.,
62(3):1019–1043, 2001.
[18] T. F. Chan and J. Shen, Non-texture inpainting by curvature driven diffusions (CDD), J. Visual Comm.
Image Rep., 12(4):436–449, 2001.
[19] T. F. Chan and J. Shen, Variational restoration of non-flat image features: models and algorithms, SIAM J.
Appl. Math., 61(4):1338–1361, 2001.
[20] I. Daubechies, G. Teschke, and L. Vese, Iteratively solving linear inverse problems under general convex
constraints, Inverse Problems and Imaging (IPI), 1(1), pp.29-46, 2007.
[21] I.Ekeland, and R.Temam, Convex analysis and variational problems, Corrected Reprint Edition, SIAM,
Philadelphia, 1999.
[22] S. Esedoglu and J. Shen, Digital inpainting based on the Mumford-Shah-Euler image model, Eur. J. Appl.
Math., 13:4, pp. 353-370, 2002.
[23] L.C. Evans, Partial differential equations, Graduate Studies in Mathematics 19, American Mathematical
Society, Providence, RI, 1998.
[24] D. Eyre, An Unconditionally Stable One-Step Scheme for Gradient Systems, Jun. 1998, unpublished.
[25] E. Giusti, Minimal Surfaces and Functions of Bounded Variation, Birkhäuser Boston 1984.
[26] L. Lieu and L. Vese, Image restoration and decomposition via bounded total variation and negative Hilbert-Sobolev spaces, Applied Mathematics & Optimization, Vol. 58, pp. 167-193, 2008.
[27] O. M. Lysaker and Xue-C. Tai, Iterative image restoration combining total variation minimization and a
second-order functional, International Journal of Computer Vision, 66(1):5–18, 2006.
[28] S. Masnou and J.-M. Morel, Level-lines based disocclusion, Proceedings of 5th IEEE Int’l Conf. on Image
Process., Chicago, 3:259–263, 1998.
[29] G. Dal Maso, An introduction to Gamma-convergence, Birkhäuser, Boston, 1993.
[30] Y. Meyer, Oscillating Patterns in Image Processing and Nonlinear Evolution Equations, The Fifteenth Dean
Jacqueline B. Lewis Memorial Lectures. American Mathematical Society, Boston, MA, USA, 2001.
[31] L. Modica and S. Mortola, Il limite nella Γ-convergenza di una famiglia di funzionali ellittici, Boll. Unione
Mat. Ital., V. Ser., A 14, pp.526-529, 1977.
[32] L. Modica, S. Mortola, Un esempio di Γ− -convergenza, Boll. Unione Mat. Ital., V. Ser., B 14, pp.285-299,
1977.
[33] S. Osher, A. Sole, and L. Vese. Image decomposition and restoration using total variation minimization and
the H -1 norm, Multiscale Modeling and Simulation: A SIAM Interdisciplinary Journal, Vol. 1, Nr. 3,
pp. 349-370, 2003.
[34] L. Rudin and S. Osher, Total variation based image restoration with free local constraints, Proc. 1st IEEE
ICIP, 1:31–35, 1994.
[35] L.I. Rudin, S. Osher, and E. Fatemi, Nonlinear total variation based noise removal algorithms, Physica D,
Vol. 60, Nr.1-4, pp.259-268, 1992.
[36] A. Tsai, Jr. A. Yezzi, and A. S. Willsky, Curve evolution implementation of the Mumford-Shah functional for image segmentation, denoising, interpolation and magnification, IEEE Trans. Image Process.,
10(8):1169–1186, 2001.
[37] L. Vese, A study in the BV space of a denoising-deblurring variational problem, Applied Mathematics and
Optimization, 44 (2), pp. 131-161, 2001.
[38] L. Vese, and S. Osher, Modeling textures with total variation minimization and oscillating patterns in image
processing, J. Sci. Comput., Vol. 19, Nr. 1-3, pp. 553-572, 2003.