# Solution of conservation laws via convergence space completion

by

Dennis Ferdinand Agbebaku
Submitted in partial fulfilment of requirements for the degree of
Magister Scientiae
in the Department of Mathematics and Applied Mathematics
in the Faculty of Natural and Agricultural Sciences
University of Pretoria
Pretoria
August 2011
Declaration
I, the undersigned, hereby declare that the thesis submitted herewith for the degree Master of Science to the University of Pretoria contains my own, independent
work and has not previously been submitted by me or any other person for any
degree at this or any other University.
Name: Dennis Ferdinand Agbebaku
Date: August 2011
To the memory of my late aunty Mrs Roseline Segun Okhuiegbe, and my late
step-mother Mrs Philomina Agbebaku, both of whom passed on during the
course of this work. Memory is the one thing death cannot destroy.
Title Solution of conservation laws via
convergence space completion
Name Dennis Ferdinand Agbebaku
Supervisor Dr JH van der Walt
Co-supervisor Prof R. Anguelov
Department Mathematics and Applied Mathematics
Degree Magister Scientiae
Summary
In this thesis we consider generalized solutions of scalar conservation laws.
In this regard, the Order Completion Method for systems of nonlinear PDEs is
modified in a suitable way. In particular, with a given Cauchy problem for a scalar conservation law, we associate an injective mapping T : M −→ N, where M and N are suitable spaces of sufficiently smooth functions, independent of the given conservation law, so that the initial value problem may be expressed as the single equation

Tu = h    (1)

for a suitable h ∈ N.
Uniform convergence structures are introduced on the spaces M and N in
such a way that the mapping T is a uniformly continuous embedding. Thus there
exists a unique, injective uniformly continuous mapping T ♯ : M♯ −→ N ♯ , where
M♯ and N ♯ denote the completions of M and N , respectively, that extends T.
Thus we arrive at a generalized version of the equation (1), namely,

T♯ u♯ = h    (2)

where the unknown function u♯ is supposed to belong to M♯. Any solution of (2), if it exists, is interpreted as a generalized solution of (1). Note that, due to the injectivity of T♯, the equation (2) has at most one solution. Furthermore, the space M♯ of generalized functions may be identified in a natural way with a set of Hausdorff continuous interval valued functions. We show that the equation (2) has a solution, which agrees with the well known entropy solution.
Acknowledgements
I thank God for the grace and strength He gave me to complete this thesis. To
Him alone be all glory.
I would also like to express my sincere appreciation to every person and organization that has contributed, directly or indirectly, to the successful completion of this thesis.
First of all, I thank the University of Pretoria for their full ﬁnancial support
throughout the two-year period of this research work, under the UP M.Sc pilot
bursary programme.
My profound gratitude goes to my supervisors, Prof Roumen Anguelov and Dr Jan Harm van der Walt, who introduced me to convergence spaces and their applications to PDEs. I found their mathematical ingenuity second to none. Their patience and careful guidance throughout this research work have strengthened my understanding of mathematical research. I sincerely appreciate
their eﬀort, the attention they gave me, and the many useful discussions I had
with each of them during the course of writing this thesis. I also thank them for
reading the preliminary drafts of this thesis and making useful suggestions. I am
grateful for their supervision and recommendation for the UP M.Sc pilot bursary.
I also appreciate my virtuous wife, Mrs Grace Agbebaku, for her patience, endurance and prayers during the period of my study. My beloved children, Victor, Virtue and Virginia, have been very patient too, enduring and strong during my absence; the Lord bless you and keep you.
I would like to thank Prof Jean Lubuma, the Head of Department, and Mrs Yvonne of the Department of Mathematics and Applied Mathematics, University of Pretoria, for their kind assistance. My association with them was very profitable to this research work.
I would also like to thank Prof (Mrs) Okeke, the immediate past Dean, and Prof M. O. Oyesanya, the current Dean of the Faculty of Physical Sciences, University of Nigeria Nsukka, as well as the Head of the Department of Mathematics, Dr G. C. E. Mbah, and Prof M. O. Osilike, for their support and recommendations to the University for me to pursue this programme. I also thank my employer, the University of Nigeria Nsukka, for granting me study leave to enable me to pursue this study programme.
I would like to thank Pastor John Amoni for his prayers and encouragement, as well as the Akpor family, the Kruger family, the Agypong family, Bro Jide, Bro Baridam, the Ogunniyi family, and all the members of the Deeper Christian Life Ministry South Africa, too many to mention by name, for their prayers, concern, care and financial support during my studies.
To my Nigerian friends Kola, Bode, Mr Oke and Adaku, to mention a few: your presence has been a source of strength and encouragement. Thank you all.
I also thank my colleagues in the Department of Mathematics and Applied Mathematics, University of Pretoria, Mr Zakaria Ali and Mr Yibetal Aldane, to mention a few.
Contents

Declaration
Dedication
Summary
Acknowledgements

1 Introduction
1.1 Nonlinear Hyperbolic Conservation Laws
1.1.1 Introduction to Scalar Conservation Laws
1.1.2 Examples of Conservation Laws
1.1.3 Solutions to Scalar Conservation Laws
1.1.4 Solutions of Scalar Conservation Laws via Vanishing Viscosity
1.1.5 Compensated Compactness Methods for Nonlinear Conservation Laws
1.2 Convergence Spaces
1.2.1 Uniform Convergence Structure
1.2.2 Convergence Vector Spaces
1.3 Hausdorff Continuous Functions
1.4 The Order Completion Method
1.4.1 Main Ideas of Convergence Space Completion
1.5 Summary of the Main Results

2 Hausdorff Continuous Solution of Scalar Conservation Laws
2.1 Introduction
2.2 Convergence Vector Spaces for Conservation Laws
2.3 Approximation Results
2.3.1 Requirements for u0
2.4 Existence and Uniqueness Results

3 Concluding Remarks
3.1 Main Results
3.2 Topics for Further Research

Bibliography
Chapter 1
Introduction
The mathematical models for real-world problems occurring in Physics, Chemistry, Economics, Engineering and Biology are usually expressed in the form of
partial diﬀerential equations (PDEs) with associated initial and/or boundary values.
We consider only initial value problems, consisting of a PDE

T(x, t, D) u(x, t) = h(x, t),   x ∈ Ω, t > 0

and an initial condition

u(x, 0) = u0(x),   x ∈ Ω,

where Ω ⊆ Rn is open and h a suitable function. The fundamental mathematical question concerns the well-posedness of the problem. Recall that a given initial value problem for a PDE is well-posed if the problem has a solution, if the solution is unique, and if the solution depends continuously on the data given in the problem. Each of the three issues involved in the concept of well-posedness is nontrivial in its own right.
It is well known that an initial value problem may fail to have a classical solution on the whole domain of definition of the equation. Indeed, according to the well known Cauchy–Kovalevskaya Theorem, a nonlinear analytic PDE admits an analytic solution defined on a neighborhood of any noncharacteristic hypersurface on which analytic initial data is specified. However, outside this neighborhood the solution may fail to exist. In particular, the solution will typically exhibit singularities outside the mentioned neighborhood of analyticity. Thus, solutions cannot be guaranteed to exist on the whole domain of definition of the given PDE. In fact, even a linear equation without initial conditions may fail to have a solution, as shown by the following example due to Lewy.
Example 1.1. Consider the linear operator

A(u) = ux + i uy − 2i(x + iy) ut,   (x, y, t) ∈ R3.

There exist C∞-smooth functions h for which the equation A(u) = h has no solution, even in the space D′ of distributions, in any neighborhood of any point in R3; see Lewy's original paper for details.
In view of the local nature of solutions of initial value problems in general, it
is clear that there is an interest in solutions to PDEs that may fail to be classical
on the whole domain of deﬁnition of the respective PDE. Such generalized or
weak solutions to PDEs are obtained as elements of suitable spaces of generalized
functions, that is, objects which retain certain essential features of the usual real
or complex valued functions.
Many mathematicians specializing in nonlinear partial diﬀerential equations
(PDEs) believe that there is no general and type independent theory for the
existence and basic regularity properties of generalized solutions of PDEs. In
fact, the first chapter of the book by V. I. Arnold starts with the following
statement:
“In contrast to ordinary diﬀerential equations, there is no uniﬁed theory of
partial diﬀerential equations. Some equations have their own theories, while
others have no theory at all. The reason for this complexity is a more complicated geometry ...”
This seeming inability of mathematical theories to deal with PDEs in a unified way may be attributed to the inherent limitations of the customary linear topological theories for the solution of PDEs themselves, rather than to any fundamental conceptual obstacles. In this regard we may note that the spaces of
generalized functions that are typically used in the study of solutions of linear
and nonlinear PDEs cannot deal with suﬃciently large classes of singularities.
Indeed, due to the celebrated Sobolev Embedding Theorem, none of the Sobolev spaces can deal with even the simplest singular functions, such as the Heaviside step function

H(x) = 0 if x < 0,   H(x) = 1 if x ≥ 0.
Furthermore, Colombeau generalized functions, as well as distributions, cannot handle an analytic function with an essential singularity at a single point, such as f(z) = e^(1/z). The great Picard Theorem states that such a function will
assume every complex number, with possibly one exception, as a value in every neighborhood of the singular point. It will therefore violate the polynomial
growth conditions that are imposed on the Colombeau generalized functions near
singularities.
However, there are two recent theories that provide general and type independent results regarding the existence and basic regularity properties of solutions of large classes of PDEs, namely, the Order Completion Method (OCM) and the generalized method of Steepest Descent in suitably constructed Hilbert spaces, introduced by Neuberger [89, 90, 91, 92].
The Order Completion Method is based on the Dedekind order completion
of suitable spaces of (piecewise) smooth functions, and applies to what may be
considered all continuous nonlinear PDEs. Furthermore, the solution so obtained
satisﬁes a blanket regularity property. In particular, the solutions may be assimilated with Hausdorﬀ continuous interval valued functions . The recent
reformulation and enrichment of the OCM in terms of suitable uniform convergence spaces and their completions has greatly improved the regularity properties
as well as the understanding of the structure of solutions, [118, 119, 120, 121].
The underlying ideas upon which the method of Steepest Descent is based do not depend on the particular form of the PDE involved, and the method is therefore type independent. However, the relevant techniques involve several highly technical aspects which have, as of yet, not been resolved for a class of equations comparable to that to which the OCM applies. Nevertheless, the numerical computation of solutions based on this theory has advanced beyond the proven scope of the underlying analytical techniques.
In this thesis we study a class of first order PDEs that may serve as mathematical descriptions of physical conservation laws, such as the laws of gas dynamics
and the laws of electromagnetism. In particular, we apply the Order Completion
Method, as formulated in the context of Convergence spaces as well as uniform
convergence spaces completion [119, 120, 121] to the ﬁrst order nonlinear Cauchy
problem of conservation laws. Furthermore, we show how the Convergence Space
Completion Method can be applied to solve the initial value problem of the Burgers equation. We construct the entropy solution of the Burgers equation and show
how it can be assimilated with the space of H-continuous functions.
In the rest of Chapter 1, some of the concepts and theories that are used in
obtaining our results are discussed. The existence and uniqueness of weak solutions of the Cauchy problem of conservation laws is discussed in Section 1.1.
Some of the admissibility conditions for singling out a unique solution are considered, namely, the Lax, Oleinik and entropy conditions. Other techniques, namely,
the vanishing viscosity method and the compensated compactness technique, for
obtaining the entropy solution of a conservation law are also discussed. Some
existence and uniqueness results for solutions of conservation laws obtained by
Hopf, Lax, Oleinik and Kruzhkov, are also discussed. Section 1.2 is an introduction to the theory of convergence spaces, and an introduction to spaces of Hausdorff continuous functions is presented in Section 1.3. Section 1.4 addresses the main
ideas underlying the Order Completion Method. Chapter 1 ends with a summary
of the main results in this thesis.
1.1 Nonlinear Hyperbolic Conservation Laws

1.1.1 Introduction to Scalar Conservation Laws
A conservation law states that a particular measurable property of an isolated
physical system does not change as the system evolves. In particular, any change
in such a conserved quantity can only occur as a result of an “inﬂux” or an
“outﬂow” of this quantity into or out of the system respectively.
The exact mathematical model for a single conservation law in one spatial
dimension is given by the first order PDE

ut + (f(u))x = 0.    (1.1)
Here u is the conserved quantity while f is the flux. Integrating equation (1.1) over some interval [a, b] leads to

d/dt ∫_a^b u(x, t) dx = ∫_a^b ut(x, t) dx
                     = − ∫_a^b (f(u(x, t)))x dx
                     = f(u(a, t)) − f(u(b, t))
                     = [inflow at a] − [outflow at b].
In other words, the quantity u is neither created nor destroyed. In particular, the
total amount of u contained in the interval [a, b] can only change due to the ﬂow
of u across the two endpoints.
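This bookkeeping can be illustrated numerically. The sketch below (a crude conservative upwind scheme of our own, not part of the thesis) updates cell averages of u in conservation form, so that the total amount of u changes by exactly dt times the difference of the boundary fluxes:

```python
# Conservative upwind update for u_t + f(u)_x = 0 with the Burgers flux.
# Interior interface fluxes cancel in pairs, so the change in the total
# amount of u is exactly dt * ([inflow at a] - [outflow at b]).
def step(u, dx, dt, f):
    flux = [f(v) for v in u]          # upwind flux, assuming f'(u) >= 0
    new = list(u)
    for i in range(1, len(u)):
        new[i] = u[i] - dt / dx * (flux[i] - flux[i - 1])
    # u[0] is held fixed, acting as a reservoir at the left endpoint a
    return new, flux[0], flux[-1]

f = lambda v: 0.5 * v * v             # Burgers flux f(u) = u**2/2
dx, dt = 0.1, 0.05
u = [1.0] * 5 + [0.0] * 5             # step-like initial data on [a, b]

mass0 = sum(u) * dx
u, inflow, outflow = step(u, dx, dt, f)
mass1 = sum(u) * dx
assert abs((mass1 - mass0) - dt * (inflow - outflow)) < 1e-12
```

The final assertion holds to machine precision precisely because the update telescopes over the interior interfaces, mirroring the integral identity above.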
In general, if u = (u1, · · · , uk) is a vector of conserved quantities, depending on time t and n independent variables x1, · · · , xn, then the flux of u out of any bounded region Ω ⊆ Rn is given by

∫_∂Ω F(u) · n dS.

Here F : Rk −→ Mk×n = {A : A is a matrix of order k × n} is the flux, n denotes the outward unit normal to ∂Ω and dS the surface element on ∂Ω. Since any change in u in such a domain Ω over time can only be due to the 'inflow' or 'outflow' of u into or out of Ω, it follows that

d/dt ∫_Ω u dx = − ∫_∂Ω F(u) · n dS.    (1.2)
Note that the integral on the right of (1.2) measures the flow out of Ω, hence the minus sign. Assuming that F, u and ∂Ω are sufficiently smooth, we may apply the Divergence Theorem to equation (1.2) so that

d/dt ∫_Ω u dx = − ∫_Ω ∇ · F(u) dx.

Taking the derivative with respect to t under the integral sign, we obtain

∫_Ω [ut + ∇ · F(u)] dx = 0.

Since this holds for every such region Ω, the Mean Value Theorem implies the differential form of a conservation law, which is given by

ut + ∇ · F(u) = 0.
The study presented in this thesis is concerned mainly with the Cauchy problem for strictly hyperbolic systems in one spatial dimension. That is,

ut + (F(u))x = 0 in R × (0, ∞)    (1.3)
u(x, 0) = u0(x), x ∈ R,    (1.4)

where u = (u1, · · · , uk), F : Rk −→ Rk and u0 = (u0^1, · · · , u0^k) is the initial value of u. If A(u) = Ju F(u) is the k × k Jacobian matrix of the function F at the point u, then the system (1.3) can be written in the form

ut + A(u)ux = 0.    (1.5)

Definition 1.2. We say that a system of conservation laws is strictly hyperbolic if the matrix A(u) has k real, distinct eigenvalues, say

λ1(u) < · · · < λk(u),    (1.6)

for every u.
1.1.2 Examples of Conservation Laws

As mentioned, systems of conservation laws such as (1.3) may serve as mathematical models for certain real-world phenomena. In particular, such equations appear as precise mathematical descriptions of physical conservation laws. In this section we mention a few examples of conservation laws that arise in applications.
Example 1.3 (Traffic Flow). Let u(x, t) denote the density of cars on a highway at point x at time t. For example, u may be the number of cars per kilometer. Assume that u is continuous and that the speed s of cars depends only on their density, that is, s = s(u). We also assume that the speed s of the cars decreases as the density u increases, that is, ds/du < 0. Given any two points a and b on the highway, the number of cars between a and b varies according to the law

d/dt ∫_a^b u(x, t) dx = − ∫_a^b [s(u)u]x dx.    (1.7)

Since (1.7) holds for all a, b ∈ R, this leads to the conservation law

ut + [s(u)u]x = 0.
Here the flux is given by F(u) = s(u)u. In practice the flux F is often taken to be

F(u) = a1 ln(a2/u) u,   0 < u < a2,

for suitable constants a1 and a2.
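As a quick check on this form of the flux (the constants below are hypothetical, chosen only for illustration), the corresponding speed s(u) = a1 ln(a2/u) is strictly decreasing on (0, a2) and vanishes at the jam density u = a2:

```python
import math

a1, a2 = 60.0, 120.0                  # hypothetical speed scale and jam density
s = lambda u: a1 * math.log(a2 / u)   # speed induced by F(u) = s(u) * u

densities = [10.0 * k for k in range(1, 12)]     # 10, 20, ..., 110 < a2
speeds = [s(u) for u in densities]
assert all(x > y for x, y in zip(speeds, speeds[1:]))   # ds/du < 0
assert s(a2) == 0.0                   # cars come to a halt at density a2
```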
Example 1.4 (The p-system). The p-system is a simple model for isentropic (constant entropy) gas dynamics. If v is the specific volume and u the velocity of the gas, then the equations are

vt − ux = 0
ut + (p(v))x = 0.

The flux p is given as

p(v) = k v^(−λ),   k ≥ 0, λ ≥ 1,

where k and λ are constants. In applications λ is chosen such that λ ∈ [1, 3] for most gases; in particular λ = 7/5 for air. In the region v > 0, the system is strictly hyperbolic. Indeed, the Jacobian

A = JF = ( 0       −1 )
         ( p′(v)    0 )

has real distinct eigenvalues λ = ±√(−p′(v)).
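Strict hyperbolicity of the p-system is easy to verify numerically: the characteristic polynomial of A is x² + p′(v), so the eigenvalues are the two square roots of −p′(v). A small sketch (the values of k and λ below are sample choices of ours):

```python
import math

k, lam = 1.0, 7.0 / 5.0                         # lam = 7/5, as for air
dp = lambda v: -k * lam * v ** (-lam - 1.0)     # p'(v) for p(v) = k*v**(-lam)

for v in (0.5, 1.0, 2.0):
    assert dp(v) < 0.0                          # so -p'(v) > 0 when v > 0
    lo, hi = -math.sqrt(-dp(v)), math.sqrt(-dp(v))
    assert lo < hi                              # two real, distinct eigenvalues
    # the eigenvalues are the roots of x**2 + p'(v): their product is det(A) = p'(v)
    assert abs(lo * hi - dp(v)) < 1e-12
```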
Example 1.5 (Gas dynamics). The Euler equations for the dynamics of a compressible, non-viscous gas are given by

vt − ux = 0                       (conservation of mass)
ut + px = 0                       (conservation of momentum)
(ν + u²/2)t + (pu)x = 0           (conservation of energy).

Here v = ρ^(−1), where ρ is the density and v is the specific volume. The velocity in the gas is u, while ν is the internal energy and p the pressure. The system is closed by an additional equation p = p(ν, v), called the equation of state, which depends on the particular gas under consideration.
Example 1.6 (Electromagnetism). Let E be the electric intensity, D the electric induction, H the magnetic intensity, B the magnetic induction, I the electric current and q the heat flux in an electromagnetic system. The conservation laws of electromagnetism are

∂t B + ∇ × E = 0
∇ · B = 0
∂t D − ∇ × H + I = 0              (Ampère's law)
∂t E + ∇ · (E × H + q) = 0        (conservation of energy).
1.1.3 Solutions to Scalar Conservation Laws

In this section we are concerned with initial value problems for scalar conservation laws in one spatial dimension

ut + (f(u))x = 0 in R × (0, ∞)    (1.8)
u(x, 0) = u0(x), x ∈ R.    (1.9)
Here u : R × [0, ∞) −→ R is the unknown conserved quantity, f ∈ C ∞ (R) is the
ﬂux and u0 : R −→ R is the initial value.
When solving the Cauchy problem (1.8) - (1.9), one is typically confronted with
the following diﬃculties: Even in the case of a C ∞ - smooth initial condition u0 ,
the initial value problem (1.8) - (1.9) may not have a classical solution on the
whole domain of deﬁnition of the equation (1.8). Indeed, solutions of (1.8) - (1.9)
may develop discontinuities after a ﬁnite time.
Classical Solutions
A classical solution of the Cauchy problem (1.8) - (1.9) is a continuously differentiable function satisfying equations (1.8) - (1.9). One can obtain the classical solution of (1.8) - (1.9) by the method of characteristics. To do this, let the flux function f be given, and assume that equation (1.8) is genuinely nonlinear, that is, f′(u) is nowhere constant; in what follows we assume, in addition, the convexity condition

f′′(u) > 0 for all u.    (1.10)
If u ∈ C 1 (R × [0, ∞)) is a solution of the Cauchy problem, then we deﬁne the
characteristic curves in R × [0, ∞) as the level curves of u. That is, for any y ∈ R
the characteristic curve through the point (y, 0) consists of the set of points where
u(x, t) = u(y, 0) = u0 (y). At every point (x, t) on the characteristic curve through
(y, 0), (1.8) and (1.9) imply that
∇u(x, t) · ⟨f ′ (u0 (y)), 1⟩ = 0.
Therefore the vector

⟨f′(u0(y)), 1⟩

is tangent to the curve at every point. Thus the characteristic through (y, 0) is a straight line with equation

x(t) = y + t f′(u0(y)).
Since u(x, t) = u0 (y) for every point (x, t) on the curve, we may express the
solution of (1.8) - (1.9) implicitly as
u = u0 (x − tf ′ (u)).
The Implicit Function Theorem may now be used to solve for u. The classical solution of (1.8) - (1.9) found above is unique, but may fail to exist for all t > 0, as the following theorem shows.
Theorem 1.7. [109, Proposition 2.1.1] Assume that u0 ∈ C¹(R), together with its derivative, is bounded on R. Set

T∗ = +∞ if f′(u0) is an increasing function,
T∗ = −(inf d/dx f′(u0(x)))^(−1) otherwise.    (1.11)

Then (1.8) - (1.9) has a unique solution u ∈ C¹(R × (0, T∗)). For T > T∗, (1.8) - (1.9) has no classical solution on R × [0, T).
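Theorem 1.7 can be illustrated numerically for the Burgers flux f(u) = u²/2, for which f′(u0(x)) = u0(x), so that the infimum in (1.11) is just inf u0′. The sketch below (the sample datum u0(x) = e^(−x²) is our own choice, not from the thesis) approximates T∗ on a grid and compares it with the exact value e^(1/2)/√2:

```python
import math

u0 = lambda x: math.exp(-x * x)                 # sample initial datum
du0 = lambda x: -2.0 * x * math.exp(-x * x)     # u0'(x) = d/dx f'(u0(x))

# u0' attains its infimum -sqrt(2)*e**(-1/2) at x = 1/sqrt(2);
# a fine grid serves as a crude substitute for the exact infimum.
grid = [i / 1000.0 for i in range(-5000, 5001)]
inf_slope = min(du0(x) for x in grid)
T_star = -1.0 / inf_slope                       # T* from (1.11)

exact = math.exp(0.5) / math.sqrt(2.0)          # about 1.1658
assert inf_slope < 0.0                          # so T* is finite
assert abs(T_star - exact) < 1e-5
```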
We now give an example to illustrate the nonexistence of a classical solution beyond some time t > 0.

Example 1.8. Consider the initial value problem for the Burgers equation

ut + (u²/2)x = 0 in R × (0, ∞)    (1.12)
u(x, 0) = u0(x), x ∈ R.    (1.13)
Using the method of characteristics discussed above, we see that for a C¹-smooth function u0, a classical solution u is given by the implicit equation

u(x, t) = u0(x − t u(x, t)),   t ≥ 0, x ∈ R.    (1.14)

By the Implicit Function Theorem, we can obtain u(y, s) from (1.14) for y and s in suitable neighborhoods of x and t respectively, whenever

1 + t u0′(x − t u(x, t)) ≠ 0.    (1.15)
If u′0 (x) ≥ 0 for all x ∈ R, then condition (1.15) is clearly satisﬁed for all (x, t),
so that the Cauchy problem (1.12) - (1.13) has a unique solution on R × (0, ∞).
However, if u′0 (x0 ) < 0 for some x0 ∈ R then for certain values of t > 0 the
condition (1.15) may fail. Therefore, violation of condition (1.15) implies that
the classical solution u fails to exist for the respective values of t and x.
If we take

u0(x) = 1 if x ≤ 0,
u0(x) = 1 − x if 0 ≤ x ≤ 1,    (1.16)
u0(x) = 0 if x ≥ 1,

then the unique classical solution of (1.12) - (1.13) is given by

u(x, t) = 1 if x < t,
u(x, t) = (1 − x)/(1 − t) if t ≤ x ≤ 1, t < 1,
u(x, t) = 0 if x ≥ 1.

Clearly, the classical solution of (1.12) - (1.13), with u0 as in (1.16), breaks down at t = 1. It should be noted that the breakdown of the solution u(x, t) at t = 1 for the initial data u0 given in (1.16) is not due to a lack of smoothness of u0, but to the fact that u0′(x) = −1 < 0 for x ∈ [0, 1].
In view of the nonexistence, in general, of global classical solutions, one is forced
to consider suitable generalized solutions of (1.8) - (1.9).
Weak solutions and non-uniqueness
One well known and much studied generalized formulation of (1.8) - (1.9) is the
weak form of the initial value problem. Let us assume temporarily that u is a
classical solution of (1.8) - (1.9). The idea is to multiply equation (1.8) with a
smooth function ϕ and integrate by parts. More precisely, let ϕ be a test function, that is,

ϕ : R × [0, ∞) → R    (1.17)

has compact support and is C∞-smooth. We denote the set of all such test functions by C0∞(R × [0, ∞)). Multiply equation (1.8) by ϕ and integrate by parts to get
0 = ∫_0^∞ ∫_{−∞}^{∞} (ut + (f(u))x) ϕ dx dt
  = − ∫_0^∞ ∫_{−∞}^{∞} u ϕt dx dt − ∫_0^∞ ∫_{−∞}^{∞} f(u) ϕx dx dt − ∫_{−∞}^{∞} u(x, 0) ϕ(x, 0) dx.

In view of the initial condition (1.9), we obtain

∫_0^∞ ∫_{−∞}^{∞} u ϕt + f(u) ϕx dx dt + ∫_{−∞}^{∞} u0(x) ϕ(x, 0) dx = 0.    (1.18)
In contradistinction with equations (1.8) - (1.9), equation (1.18) does not involve
any derivative of u, thus equation (1.18) makes sense not only for smooth functions, but also for bounded and measurable functions u and u0 . We thus arrive
at the following deﬁnition of a weak solution of (1.8) -(1.9).
Deﬁnition 1.9. We say that u ∈ L∞ (R×(0, ∞)) is a weak solution of (1.8)-(1.9)
if the equation (1.18) holds for each test function ϕ ∈ C0∞ (R × [0, ∞)).
If u ∈ C 1 (R × (0, ∞)) is a weak solution of (1.8) - (1.9) then u satisﬁes (1.8) (1.9). That is, a C 1 -smooth weak solution is a classical solution of equation (1.8)
- (1.9). Thus the concept of weak solution of (1.8) - (1.9) is a generalization of
the classical notion of solution.
Remark 1.10. Equation (1.8) can also be written in the form

ut + a(u)ux = 0, with a(u) = f′(u).    (1.19)

At the level of classical solutions, equations (1.8) and (1.19) are equivalent. That is, u ∈ C¹(R × [0, ∞)) is a solution of (1.8) if and only if u is a solution of (1.19). However, if u has a discontinuity, then the left hand side of equation (1.19) may contain a product of a discontinuous function a(u) with the distributional derivative ux. Such a product is typically not well defined. Working with the equation in the form of (1.8) avoids this difficulty when dealing with weak solutions as defined in Definition 1.9.
We give some examples to illustrate the non-uniqueness of solutions to the initial value problem (1.8) - (1.9).
Examples 1.11.

(i) Consider the initial value problem for the Burgers equation (1.12) - (1.13). If we take the initial data to be (1.16), then it can be shown that the function

u1(x, t) = 1 if x < (1 + t)/2,
u1(x, t) = 0 if x > (1 + t)/2,

is a weak solution to the initial value problem (1.8), (1.16).
(ii) Again consider the initial value problem (1.12) - (1.13), now with initial data

u0(x) = 0 if x < 0,    (1.20)
u0(x) = 1 if x > 0.
The function

u1(x, t) = 0 if x < t/2,
u1(x, t) = 1 if x > t/2,

is a weak solution to the initial value problem (1.8), (1.20). However, the function

u2(x, t) = 0 if x < 0,
u2(x, t) = x/t if 0 < x < t,
u2(x, t) = 1 if x > t,

is also a weak solution to the initial value problem (1.8), (1.20), even though, being continuous but not C¹-smooth along the lines x = 0 and x = t, it cannot be classified as a classical solution.
(iii) A more spectacular example of the loss of uniqueness of solutions is the following. Consider the Burgers equation (1.12) with the initial data

u0(x) = −1 if x < 0,    (1.21)
u0(x) = 1 if x > 0.

For every α ∈ [1, ∞) the function

uα(x, t) = −1 if x < (1 − α)t/2,
uα(x, t) = −α if (1 − α)t/2 < x < 0,    (1.22)
uα(x, t) = +α if 0 < x < (α − 1)t/2,
uα(x, t) = +1 if (α − 1)t/2 < x,

is a solution of (1.12) - (1.13) with u0 as in (1.21). It can be shown that only the solution for which α = 1 satisfies the definition of a weak solution.
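The claim that only α = 1 yields a weak solution can be tested against the jump condition ρ(ul − ur) = f(ul) − f(ur) derived later in this section. A minimal sketch of ours (the helper names are not from the thesis):

```python
# Testing the jumps of u_alpha in (1.22) against the jump condition for the
# Burgers flux f(u) = u**2/2: a jump from ul to ur must travel at speed
# (f(ul) - f(ur))/(ul - ur), which equals (ul + ur)/2 for this flux.
f = lambda u: 0.5 * u * u
jump_speed = lambda ul, ur: (f(ul) - f(ur)) / (ul - ur)

for alpha in (2.0, 3.0, 10.0):
    # leftmost jump of u_alpha: from -1 to -alpha along x = (1 - alpha)*t/2
    curve_speed = (1.0 - alpha) / 2.0
    # the required speed is -(1 + alpha)/2, so the condition fails for alpha > 1
    assert abs(jump_speed(-1.0, -alpha) - curve_speed) > 0.5

# For alpha = 1 the middle regions are empty: a single stationary jump
# from -1 to +1 along x = 0, and the required speed is indeed 0.
assert jump_speed(-1.0, 1.0) == 0.0
```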
One difficulty that arises in the study of weak solutions of (1.8) - (1.9) is related to the uniqueness of such solutions. In contradistinction with classical solutions of (1.8) - (1.9), weak solutions are not unique, as shown in the following example.
Example 1.12. Consider the initial value problem for the Burgers equation (1.12) - (1.13) with initial condition

u0(x) = 0 if x < 0,    (1.23)
u0(x) = 1 if x ≥ 0.

For every α ∈ (0, 1), the function uα defined as

uα(x, t) = 0 if x < αt/2,
uα(x, t) = α if αt/2 ≤ x < (1 + α)t/2,
uα(x, t) = 1 if x ≥ (1 + α)t/2,

is a weak solution of the initial value problem.
The underlying physical laws that are modeled as mathematical conservation
laws are deterministic in nature. That is, the future state of a system that
evolves according to (1.8) is completely determined by the initial condition (1.9)
of the system. From this point of view, the non-uniqueness of weak solutions of (1.8) - (1.9), as demonstrated in Example 1.12, is unacceptable. In particular, in the context of physical systems that may be modeled through (1.8) - (1.9), the non-uniqueness of weak solutions of the Cauchy problem may be interpreted as follows: the state of the system at time t > 0 is not completely determined by the weak formulation of (1.8) - (1.9) alone. Therefore additional conditions, motivated by physical considerations, must be imposed on the weak solutions of (1.8) - (1.9) in order to obtain the unique solution that describes the evolution of the underlying physical system.
In this regard, let u be a weak solution of (1.8) - (1.9). Assume that u has
continuous ﬁrst order partial derivatives everywhere in the open set Ω ⊆ R×[0, ∞)
except on a smooth curve C in Ω with equation x = x(t).
Figure 1.1: the curve of discontinuity C, drawn as x = ρt, separating Ωl (on the left) from Ωr (on the right).

That is, limits of u from the left and from the right of the curve C exist. Hence u has a jump discontinuity across C. Let Ωl and Ωr be the parts of Ω on the left and on the right of the curve C respectively, see Figure 1.1.
Furthermore, since u is smooth on either side of the curve C, it is smooth in Ωl and Ωr. Because u is a weak solution of (1.8) - (1.9), we have

∫∫_Ω u ϕt + f(u) ϕx dx dt = 0,    (1.24)

for all ϕ ∈ C0∞(Ω). Thus, if supp ϕ ⊂ Ωr, then
0 = ∫∫_Ω u ϕt + f(u) ϕx dx dt = − ∫∫_Ωr [ut + (f(u))x] ϕ dx dt,    (1.25)

which implies

ut + (f(u))x = 0 in Ωr.    (1.26)

Similarly,

ut + (f(u))x = 0 in Ωl.    (1.27)
From (1.24) we get

0 = ∫∫_Ω u ϕt + f(u) ϕx dx dt
  = ∫∫_Ωl u ϕt + f(u) ϕx dx dt + ∫∫_Ωr u ϕt + f(u) ϕx dx dt.    (1.28)
Now using the fact that u is C¹-smooth in Ωr and Green's Theorem, we find that

∫∫_Ωr (u ϕt + f(u) ϕx) dx dt = ∫∫_Ωr [(uϕ)t + (f(u)ϕ)x] dx dt
                            = ∫_∂Ωr (−uϕ) dx + (f(u)ϕ) dt
                            = ∫_∂Ω (−uϕ) dx + (f(u)ϕ) dt + ∫_C (−uϕ) dx + (f(u)ϕ) dt.

Since ϕ = 0 on ∂Ω, we have

∫∫_Ωr (u ϕt + f(u) ϕx) dx dt = ∫_C (−ur ϕ) dx + (f(ur)ϕ) dt,    (1.29)

where ur is the right limit of u on the curve C. Similarly,

∫∫_Ωl (u ϕt + f(u) ϕx) dx dt = − ∫_C (−ul ϕ) dx + (f(ul)ϕ) dt,    (1.30)
where ul is the left limit of u on the curve C. Substituting equations (1.29) and (1.30) into equation (1.28), we have

0 = ∫_C (−ul + ur) ϕ dx + (f(ul) − f(ur)) ϕ dt
  = ∫_C ϕ [−(ul − ur) dx + (f(ul) − f(ur)) dt],    (1.31)

which further implies

−(ul − ur) dx + (f(ul) − f(ur)) dt = 0.

This implies

(ul − ur) dx/dt = f(ul) − f(ur)

along the curve C in Ω, which may be expressed as

f(ul) − f(ur) = ẋ (ul − ur).    (1.32)

We write this as

ρ [[u]] = [[f(u)]],    (1.33)

where [[u]] = ul − ur is the jump in u across the curve C, [[f(u)]] = f(ul) − f(ur) is the jump in f(u), and ρ = dx/dt is the speed of the curve C. Relation (1.33) is known as the jump condition; it is popularly known as the Rankine–Hugoniot condition.
Remark 1.13. We remark here that if u is a piecewise smooth solution to the
initial value problem (1.8) - (1.9), then u satisﬁes the jump condition if and only
if it is a weak solution. In other words, every weak solution to the initial value
problem (1.8) - (1.9) satisﬁes the jump condition. Conversely, every piecewise
smooth solution to the initial value problem that satisﬁes the jump condition
is a weak solution to the initial value problem (1.8) - (1.9). This follows from
the above derivation of the jump condition. However, if u is a weak solution which is merely bounded and measurable, then ul and ur in condition (1.32) have to be interpreted as

ul(x, t) = lim inf_{y→x} u(y, t),   ur(x, t) = lim sup_{y→x} u(y, t).
Examples 1.14.
(i) Applying the jump condition to the Burgers' equation (1.12), where f (u) = u2 /2, we find that the speed of propagation of the discontinuities is dx/dt = ρ = (ul + ur )/2.
(ii) Again, applying the jump condition to the solutions uα of Example 1.12, we see that ρ = α/2 and ρ = (1 + α)/2 along the lines of discontinuity x = αt/2 and x = (1 + α)t/2, respectively, for each α ∈ (0, 1). Thus the jump condition alone is not sufficient to determine the unique, physically relevant solution of the Cauchy problem (1.8) - (1.9).
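The jump condition (1.33) is easy to verify numerically. The following Python sketch (the function names are ours, for illustration only) computes the Rankine–Hugoniot speed ρ = [[f(u)]]/[[u]] for Burgers' flux and confirms that it equals (ul + ur)/2:

```python
def f(u):
    # Burgers' flux f(u) = u^2 / 2
    return 0.5 * u * u

def shock_speed(ul, ur):
    # Rankine-Hugoniot condition (1.33): rho * [[u]] = [[f(u)]]
    return (f(ul) - f(ur)) / (ul - ur)

# for Burgers' equation the jump condition gives rho = (ul + ur)/2
for ul, ur in [(1.0, 0.0), (2.0, -1.0), (0.3, 0.1)]:
    rho = shock_speed(ul, ur)
    assert abs(rho - 0.5 * (ul + ur)) < 1e-12
```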
1.1.3  Admissibility conditions and the Entropy Condition
From Example 1.12, it is clear that the set of weak solutions of a given initial
value problem (1.8)-(1.9) may include various solutions which are not physically
relevant. In order to single out a unique solution that is physically and/or mathematically relevant, suitable additional requirements, which we shall call admissibility conditions, are imposed on such solutions, see for instance [52, 79]. These
admissibility conditions, such as entropy conditions, are typically motivated by
some physical considerations. In the literature, various admissibility conditions
have been introduced. In this section we recall some of these conditions. The main results in which these admissibility conditions are employed to single out the unique, physically relevant solution to the Cauchy problem (1.8) - (1.9) are also discussed.
Admissibility condition 1 (The Oleinik inequality)
Oleinik  introduced the Lipschitz condition, with respect to x, for ﬁxed t for
a genuinely nonlinear single conservation law (1.8), given by
(u(x + a, t) − u(x, t))/a ≤ E/t,    a > 0, t > 0.    (1.34)
Here E = 1/ inf f ′′ is independent of x, t, and a. Using the Lax–Friedrichs finite difference scheme, Oleinik showed that if f is convex, which implies that f ′′ > 0, then there exists precisely one weak solution of the Cauchy problem (1.8) - (1.9) satisfying (1.34). Note that the weak solutions of (1.8) - (1.9) that satisfy (1.34) will, for any fixed t > 0, have x-difference quotients bounded from above. As t tends to 0, the upper bound for the x-difference quotients may tend to plus infinity.
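The Lax–Friedrichs scheme mentioned above replaces the time derivative by a forward difference, taken from the spatial average of the two neighbours, and the flux derivative by a centred difference. A minimal Python sketch for Burgers' flux with Riemann initial data — grid parameters are our own illustrative choices — reads:

```python
def lax_friedrichs(u, dt, dx, f):
    """One Lax-Friedrichs step:
    u_j^{n+1} = (u_{j-1} + u_{j+1})/2 - (dt/(2dx))(f(u_{j+1}) - f(u_{j-1}))."""
    n = len(u)
    v = u[:]
    for j in range(1, n - 1):
        v[j] = 0.5 * (u[j - 1] + u[j + 1]) \
               - 0.5 * dt / dx * (f(u[j + 1]) - f(u[j - 1]))
    return v  # boundary values kept fixed

f = lambda u: 0.5 * u * u
dx, dt = 0.02, 0.01                  # dt/dx = 1/2 satisfies the CFL condition
# step datum on [-1, 1): u = 1 left of x = 0, u = 0 right of it
u = [1.0 if j * dx - 1.0 < 0.0 else 0.0 for j in range(100)]
for _ in range(50):                  # evolve to t = 0.5; shock moves at speed 1/2
    u = lax_friedrichs(u, dt, dx, f)
```

Away from the (numerically smeared) shock near x = 0.25, the computed solution stays close to the constant states 1 and 0.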
The Oleinik inequality (1.34) was motivated by the fact that if u′0 ≥ 0, a classical solution u of (1.8) - (1.9) exists with
ux = u′0 /(1 + tf ′′ (u0 )u′0 ).
So that
ux ≤ 1/(tf ′′ (u0 )) ≤ K/t,    t > 0 and K > 0 a constant,
which is a limiting version of the Oleinik inequality (1.34). It is therefore reasonable to require a solution of the Cauchy problem (1.8) - (1.9) to satisfy the inequality (1.34). The basic idea of a finite difference scheme for a PDE is to replace derivatives with appropriate finite differences. The main result concerning solutions satisfying (1.34), which is also found in , is given below.
Theorem 1.15. [111, Theorem 16. 1] Let u0 ∈ L∞ (R), and let f ∈ C 2 (R) with
f ′′ (u) > 0 on {u : |u| ≤ ∥u0 ∥∞ }. Let M = ∥u0 ∥L∞ , µ = inf{f ′′ (u) : |u| ≤ ∥u0 ∥∞ }
and A = sup{|f ′ (u)| : |u| ≤ ∥u0 ∥∞ }. Then there exists exactly one weak solution
u of (1.8)-(1.9) satisfying the following:
(a) There exists a constant E > 0, depending only on M, µ and A, such that for
every a > 0, t > 0, and x ∈ R, the inequality
(u(x + a, t) − u(x, t))/a < E/t    (1.35)
holds.
(b) |u(x, t)| ≤ M, ∀ (x, t) ∈ R × [0, ∞).
(c) u is stable and depends continuously on u0 in the following sense: If u0 , v0 ∈
L∞ (R)∩L1 (R) with ∥v0 ∥∞ ≤ ∥u0 ∥∞ , and v is the solution of (1.8) with initial
data v0 satisfying (1.35), then for every x1 , x2 ∈ R, with x1 < x2 , and every
t > 0,
∫_{x1 }^{x2 −At} |u(x, t) − v(x, t)|dx ≤ ∫_{x1 −At}^{x2 } |u0 (x) − v0 (x)|dx.    (1.36)
Remark 1.16. (i) An immediate consequence of (1.35) is that for any t > 0, the solution u(·, t) is of locally bounded total variation, that is u ∈ BVloc , which means the total variation of u is bounded in every compact subset of R × [0, ∞). To see this, let us define a function
v(x, t) = u(x, t) − (E/t)x.
Then if a > 0, (1.35) implies
v(x + a, t) − v(x, t) = u(x + a, t) − u(x, t) − (E/t)a < 0.
That is, v is a decreasing function with respect to x and thus has locally bounded total variation with respect to x. Hence u is of locally bounded total variation, since a linear function is also of locally bounded total variation. Thus even though u0 is only L∞ , the solution u(·, t) is fairly regular. In fact, we can conclude that it has at most a countable number of jump discontinuities, and it is differentiable almost everywhere.
(ii) Theorem 1.15 is limited to single conservation laws in one spatial dimension.
An analogue of the Oleinik inequality (1.35) has not been found for systems of
conservation laws.
(iii) The Oleinik inequality (1.35) implies that ul > ur as we move across a
curve of discontinuity. To see this, note that the function v(x, t) = u(x, t) − (E/t)x is bounded in a domain (x1 , x2 ) × [0, ∞) containing the curve of discontinuity. Then
v has left and right limits with respect to x at each point since it is decreasing
with respect to x as noted in (i). Consequently u(x, t) has left and right limits
at each point. For any point c on the line of discontinuity we have
ur − ul = lim_{x−→c+ } u(x, t) − lim_{x−→c+ } (E/t)x − lim_{x−→c− } u(x, t) + lim_{x−→c− } (E/t)x
        = lim_{x−→c+ } v(x, t) − lim_{x−→c− } v(x, t) < 0,
which implies ul > ur as we move across a curve of discontinuity.
Admissibility condition 2 (The Lax inequality)
The inequality
f ′ (ul ) > ρ > f ′ (ur ) for all t > 0    (1.37)
was introduced by Lax . The inequality (1.37) implies that the characteristics
starting on either sides of the curve of discontinuity should intersect each other on
the curve, see ﬁgure 1.2. At this point of intersection, u has two values which is
impossible, so that there is a jump discontinuity at that point. Indeed, if u′0 < 0, there are two points y1 , y2 ∈ R such that y1 < y2 and u1 = u0 (y1 ) > u0 (y2 ) = u2 . If (1.37) holds then f ′ (u0 (y1 )) > f ′ (u0 (y2 )), so that the characteristics drawn from the points (y1 , 0) and (y2 , 0) intersect at the point where t = (y2 − y1 )/(f ′ (u0 (y1 )) − f ′ (u0 (y2 ))), with u having the values u0 (y1 ) and u0 (y2 ) at that point.
The Lax inequality can be obtained from the Jump condition. To see this, let
f be a convex function. Then f ′′ > 0 which implies that f ′ is increasing. Thus if
ul > ur , then
f ′ (ul ) > f ′ (ur ).
By the Mean Value Theorem there exists ζ ∈ [ur , ul ] such that
f ′ (ζ) = (f (ul ) − f (ur ))/(ul − ur ) = ρ.
Since f ′ is increasing we have that
f ′ (ul ) > f ′ (ζ) > f ′ (ur ),
which leads to the Lax inequality (1.37). However, not all weak solutions of
equations (1.8) - (1.9) satisfying the jump condition (1.33) will also satisfy the
Lax condition (1.37). For example, the lines of discontinuity in the solutions
obtained in Example 1.12 that are shown to satisfy the jump condition do not
satisfy the Lax inequality (1.37). If all the discontinuities of a weak solution satisfy condition (1.37), then no characteristic drawn backward will intersect the curve of discontinuity, see figure 1.2. A discontinuity satisfying both the jump condition (1.33) and the Lax inequality (1.37) is called a shock. A weak solution having only shocks as discontinuities is called a shock wave solution. Lax showed that there is exactly one shock wave solution u of equation (1.8) if the initial condition is such that the left initial state is greater than the right initial state, that is ul > ur . If we consider the Burgers' equation and take the initial condition
to be
u0 (x) = 1 if x < 0,  0 if x > 0,
then the shock wave solution is expressed as
u(x, t) = 1 if x < ρt,  0 if x > ρt.
Clearly, the jump condition ρ = (u1 + u2 )/2 and the Lax condition f ′ (u1 ) > ρ > f ′ (u2 ) are both satisfied if and only if ρ = 1/2.
Figure 1.2: The curve of discontinuity C, x = ρt, separating the subregions Ωl and Ωr .
A more general example is the Riemann’s problem illustrated below.
Example 1.17 (The Riemann’s Problem). The Riemann’s problem is the Cauchy
problem
ut + (f (u))x = 0 in R × (0, ∞)
u(x, 0) = u0 (x) = ul if x < 0,  ur if x > 0.
Here, ul , ur ∈ R are the left and right initial states of u. Note that ul ̸= ur . If
ul > ur , the shock wave solution to the Riemann's problem is
u(x, t) := ul if x < ρt,  ur if x > ρt.
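For Burgers' flux the Riemann problem admits a closed-form entropy solution: a shock travelling with the Rankine–Hugoniot speed when ul > ur, and a rarefaction fan when ul < ur. A sketch (our own illustration, not part of the thesis):

```python
def riemann_burgers(ul, ur, x, t):
    """Entropy solution of the Riemann problem for u_t + (u^2/2)_x = 0."""
    if t <= 0:
        return ul if x < 0 else ur
    if ul > ur:                       # shock with speed rho = (ul + ur)/2
        rho = 0.5 * (ul + ur)
        return ul if x < rho * t else ur
    # ul < ur: rarefaction fan u = x/t between the two states
    if x < ul * t:
        return ul
    if x > ur * t:
        return ur
    return x / t

# shock case: ul = 1 > ur = 0, speed 1/2
assert riemann_burgers(1.0, 0.0, 0.4, 1.0) == 1.0
assert riemann_burgers(1.0, 0.0, 0.6, 1.0) == 0.0
# rarefaction case: ul = 0 < ur = 1, fan u = x/t
assert riemann_burgers(0.0, 1.0, 0.5, 1.0) == 0.5
```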
The main result by Lax is summarized in the following theorem.
Theorem 1.18. Let f : R −→ R be a C 2 - smooth and convex function. If
u0 ∈ L1 (R) then there exists a weak solution u of the Cauchy problem (1.8) - (1.9) which satisfies (1.37). The solution u is defined as
u(x, t) = b((x − y0 )/t) for each t > 0 and a.e. x ∈ R,    (1.38)
where y0 = y0 (x, t) is the value of y that minimizes
K(x, y, t) = U0 (y) + tG((x − y)/t).
Here the function b(s) is defined as b(s) = (f ′ )−1 (s), the inverse function of f ′ , and G(s) is defined as the solution of
dG(s)/ds = b(s),    G(c) = 0, with c = f ′ (0),
and
U0 (y) = ∫_{−∞}^{y} u0 (s)ds.
The discontinuities of the solution u constructed in Theorem 1.18 are shocks, which means u satisfies the Lax inequality (1.37). In addition, u has the semigroup property: if u(·, t1 ) is taken as a new initial value, then the corresponding solution at time t2 equals u(·, t1 + t2 ).
Remark 1.19. (i) For ﬁxed t > 0, the function y0 (x, t) is an increasing function
of x, see [79, Lemma 3.3].
(ii) The shock wave solution constructed in Theorem 1.18 satisfies the Oleinik inequality (1.34). Indeed, since b and y0 (x, t) are increasing functions, then for x2 > x1 we have that
u(x1 , t) = b((x1 − y0 (x1 , t))/t)
          ≥ b((x1 − y0 (x2 , t))/t)
          ≥ b((x2 − y0 (x2 , t))/t) − α(x2 − x1 )/t
          = u(x2 , t) − α(x2 − x1 )/t,
which implies
(u(x2 , t) − u(x1 , t))/(x2 − x1 ) ≤ α/t;
here α > 0 is a Lipschitz constant for the function b.
A generalized form of the Lax condition (1.37) was given by Oleinik . For
0 ≤ α ≤ 1,
f (αur + (1 − α)ul ) ≤ αf (ur ) + (1 − α)f (ul ) if ul > ur ,    (1.39)
f (αur + (1 − α)ul ) ≥ αf (ur ) + (1 − α)f (ul ) if ul < ur .    (1.40)
The inequality (1.39) implies that f is convex. Geometrically this means that the graph of f over the interval [ur , ul ] lies below the chord of f drawn from the point (ul , f (ul )) to the point (ur , f (ur )). On the other hand, the inequality (1.40) implies that f is concave, which means that the graph of f over the interval [ul , ur ] lies above the chord of f drawn from the point (ul , f (ul )) to the point (ur , f (ur )).
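The chord condition (1.39) can be checked directly on a grid of values of α. A small Python sketch with Burgers' convex flux (the helper name is ours):

```python
def chord_condition_holds(f, ul, ur, samples=101):
    # (1.39): f(a*ur + (1-a)*ul) <= a*f(ur) + (1-a)*f(ul) for 0 <= a <= 1,
    # i.e. the graph of f lies below the chord between (ur,f(ur)) and (ul,f(ul))
    for i in range(samples):
        a = i / (samples - 1)
        lhs = f(a * ur + (1 - a) * ul)
        rhs = a * f(ur) + (1 - a) * f(ul)
        if lhs > rhs + 1e-12:
            return False
    return True

f = lambda u: 0.5 * u * u                   # convex Burgers flux
assert chord_condition_holds(f, 1.0, -1.0)  # graph below the chord
# a concave flux violates (1.39)
assert not chord_condition_holds(lambda u: -0.5 * u * u, 1.0, -1.0)
```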
We now discuss the relationship between the Lax inequality (1.37) and the
generalized Oleinik inequality (1.39). To start with, the convexity of f implies
that the inequality (1.39) is equivalent to
(f (u∗ ) − f (ul ))/(u∗ − ul ) ≥ (f (ur ) − f (u∗ ))/(ur − u∗ )    (1.41)
for every u∗ = αur + (1 − α)ul , with 0 < α < 1. Combining the inequality (1.41)
with the Mean Value Theorem, we have that there exists ζ ∈ [ur , ul ] such that
f ′ (ζ) = (f (ul ) − f (ur ))/(ul − ur ) = ρ
and
(f (u∗ ) − f (ul ))/(u∗ − ul ) ≥ f ′ (ζ) ≥ (f (ur ) − f (u∗ ))/(ur − u∗ ).    (1.42)
Now taking limits as u∗ −→ ul and u∗ −→ ur in (1.42), we have
f ′ (ul ) ≥ ρ ≥ f ′ (ur ),
which is the Lax inequality. Thus the generalized inequality (1.39) implies the Lax inequality. Conversely, if the flux function f is convex, so that f ′′ > 0, then the Lax inequality (1.37) implies the inequality (1.39).
Essentially, all the conditions considered so far require that the ﬂux function
f be convex or concave. Kruzhkov  introduced a more general admissibility condition for a flux function f that is not necessarily convex or concave. One advantage of the Kruzhkov condition is that it is also applicable to scalar conservation laws in more than one spatial dimension, whereas the Oleinik condition is limited to scalar conservation laws in one dimension. Although the Lax inequality is applicable to systems of conservation laws, it still requires convexity of the flux function f ; moreover, the Lax inequality is limited to systems in one spatial dimension. Kruzhkov's admissibility condition is given below.
Admissibility condition 3 (The Entropy condition)
The admissibility condition discussed in this section was ﬁrst introduced by
Kruzhkov . It is formulated in terms of entropy/entropy ﬂux pairs. A pair
(Φ, Ψ) of real C ∞ - smooth functions is called an entropy/entropy ﬂux pair for
the conservation law (1.8) if
Ψ′ (z) = Φ′ (z)f ′ (z) for all z ∈ R.    (1.43)
The function Ψ is called an entropy ﬂux function for the entropy function Φ. For
every convex function Φ we can ﬁnd a corresponding entropy ﬂux function Ψ
given by
Ψ(z) = ∫_{z0 }^{z} Φ′ (s)f ′ (s)ds,    z ∈ R.    (1.44)
For each entropy/entropy ﬂux pair (Φ, Ψ) one may formulate an admissibility
condition, known as entropy condition given by
∫∫_{Ω} (Φ(u)ϕt + Ψ(u)ϕx )dxdt ≥ 0,    Ω ⊆ R × [0, ∞),    (1.45)
for all ϕ ∈ C0∞ (R × (0, ∞)), ϕ ≥ 0, supp ϕ ⊂ Ω.
Deﬁnition 1.20 (Entropy solution). The function u ∈ C([0, ∞), L1 (R))∩L∞ (R×
(0, ∞)) is called an entropy solution of the Cauchy problem (1.8) - (1.9) if it
satisﬁes the entropy condition (1.45) for each entropy/entropy-ﬂux pair (Φ, Ψ),
and u(., t) → u0 in L1 (R) as t → 0.
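Given a smooth convex entropy Φ, the flux Ψ of (1.44) can be approximated by quadrature and the defining relation (1.43) checked by a finite difference. A sketch for Burgers' flux and Φ(z) = z², for which (1.44) with z0 = 0 gives Ψ(z) = (2/3)z³ exactly:

```python
def entropy_flux(z, dPhi, df, z0=0.0, n=2000):
    # Psi(z) = integral from z0 to z of Phi'(s) f'(s) ds  (midpoint rule)
    h = (z - z0) / n
    return sum(dPhi(z0 + (i + 0.5) * h) * df(z0 + (i + 0.5) * h)
               for i in range(n)) * h

dPhi = lambda s: 2.0 * s          # Phi(z) = z^2, a convex entropy
df = lambda s: s                  # f(u) = u^2/2, so f'(u) = u

z = 1.5
psi = entropy_flux(z, dPhi, df)
assert abs(psi - (2.0 / 3.0) * z ** 3) < 1e-6     # Psi(z) = (2/3) z^3

# check (1.43): Psi'(z) = Phi'(z) f'(z), via a centred difference
h = 1e-5
dpsi = (entropy_flux(z + h, dPhi, df) - entropy_flux(z - h, dPhi, df)) / (2 * h)
assert abs(dpsi - dPhi(z) * df(z)) < 1e-3
```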
The main result on the existence and uniqueness of entropy solution is given
below.
Theorem 1.21. [109, Theorem 2.3.5] For every function u0 ∈ L∞ (R), there exists one and only one entropy solution u ∈ L∞ (R × [0, T )) ∩ C([0, T ); L1loc (R)) of (1.8) - (1.9). The entropy solution u satisfies the maximum principle
∥u∥L∞ (R×[0,T )) = ∥u0 ∥L∞ (R) .
Remark 1.22. Theorem 1.21 is valid in several spatial dimensions, see , . By Theorem 1.21, one can construct a semigroup operator St associated with the entropy solution u(x, t), with respect to the initial data u0 and time t > 0, written as
St u0 (x) = u(x, t).
The semigroup S : D × [0, ∞) −→ D, with D ⊂ L1 (R) a closed domain containing all functions with bounded total variation, has the following properties [30, 102]:
(i) S0 u = u, St+s u = St Ss u;
(ii) St is uniformly Lipschitz continuous with respect to time and initial data: there exist L, L′ > 0 such that
∥St u0 − Ss v0 ∥ ≤ L∥u0 − v0 ∥ + L′ |t − s|.
In his proof, Kruzhkov considered a family of entropy-entropy ﬂux pairs (Φk , Ψk )k∈N ,
but Panov  has shown that it is not necessary to consider the whole family of entropy/entropy flux pairs. A single entropy/entropy flux pair (Φ, Ψ) is sufficient to characterize entropy solutions of (1.8) - (1.9).
We now show that the entropy solution u of the Cauchy problem (1.8) - (1.9)
is a weak solution, see . In this regard, assume u is C 1 - smooth in the left
subregion Ωl and right subregion Ωr of some region Ω ⊆ (R × [0, ∞)) divided by
a smooth curve C. Let u also satisfy the entropy condition. If we take Φ(u) = ±u and Ψ(u) = ±f (u) in (1.45) we see that
ut + (f (u))x = 0 in Ωl , Ωr .
Splitting the integral in (1.45) over Ωl and Ωr , we get
∫∫_{Ωl } (Φ(u)ϕt + Ψ(u)ϕx )dxdt + ∫∫_{Ωr } (Φ(u)ϕt + Ψ(u)ϕx )dxdt ≥ 0,
from which, integrating by parts in each subregion, we deduce
∫_{C} ϕ[(Φ(ul ) − Φ(ur ))n2 + (Ψ(ul ) − Ψ(ur ))n1 ]dS ≥ 0,    (1.46)
where n = (n1 , n2 ) is the unit normal to C pointing from Ωl to Ωr . Suppose that
the curve C is given by x = s(t) for some smooth function s : [0, ∞) −→ R. Then
n = (n1 , n2 ) = (1, −ṡ)/√(1 + ṡ2 ).
Consequently (1.46) becomes
ṡ(Φ(ur ) − Φ(ul )) ≥ Ψ(ur ) − Ψ(ul ) along C,    (1.47)
which, upon taking Φ(u) = ±u and Ψ(u) = ±f (u), leads to the jump condition
ṡ[[u]] = [[f (u)]].    (1.48)
Thus a solution satisfying the entropy condition (1.45) also satisfies the jump condition, and is therefore a weak solution.
Suppose ul > ur . Fix u∗ such that ul > u∗ > ur and define the entropy/entropy flux pair as
Φ(z) = z − u∗ if z − u∗ > 0,  Φ(z) = 0 otherwise,
and
Ψ(z) = ∫_{ul }^{z} sgn+ (v − u∗ )f ′ (v)dv.
Then
Φ(ur ) − Φ(ul ) = u∗ − ul
and
Ψ(ur ) − Ψ(ul ) = f (u∗ ) − f (ul ).
Consequently (1.47) implies
ṡ(u∗ − ul ) ≥ f (u∗ ) − f (ul ),
which, since u∗ − ul < 0, gives
ṡ ≤ (f (u∗ ) − f (ul ))/(u∗ − ul ).    (1.49)
Similarly, if ur > ul and ur > u∗ > ul then
ṡ ≥ (f (ur ) − f (u∗ ))/(ur − u∗ ).    (1.50)
Conditions (1.49) and (1.50) give condition (1.41). Thus the entropy condition
(1.45) implies the Lax condition.
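For Burgers' flux the chain (1.42), and hence (1.49) and (1.50), can be verified directly, since every chord slope of f(u) = u²/2 is the average of its endpoints. A small sketch (names are our own):

```python
f = lambda u: 0.5 * u * u                  # Burgers' flux

def chord(a, b):
    # slope of the chord of f between a and b
    return (f(a) - f(b)) / (a - b)

ul, ur = 2.0, -1.0                         # ul > ur: shock case
rho = chord(ul, ur)                        # Rankine-Hugoniot speed (1.33)
for i in range(1, 10):
    u_star = ur + i * (ul - ur) / 10       # intermediate states ur < u* < ul
    # (1.42): chord(ul, u*) >= rho >= chord(ur, u*)
    assert chord(u_star, ul) >= rho >= chord(ur, u_star)
```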
Remark 1.23. (i) Note that any entropy solution of (1.8) - (1.9) is also a weak
solution of (1.8). This follows if we set Φ(z) = z, z ∈ R, in which case Ψ = f.
(ii) If u ∈ C 1 (Ω) is a classical solution of the initial value problem, then
Φ′ (u)(ut + (f (u))x ) = 0
for any convex function Φ. This further implies
0 = Φ′ (u)ut + Φ′ (u)f ′ (u)ux = Φ′ (u)ut + Ψ′ (u)ux ,
with Ψ any entropy ﬂux associated with Φ. This veriﬁes that a classical solution
is also an entropy solution.
The theory of hyperbolic conservation laws has developed in a number of directions. One major approach consists of considering weak solutions in suitable
spaces of functions with bounded variation (BV functions). The problem, and a
very difficult one, is to prove that various approximating schemes, such as the vanishing viscosity method, the Glimm scheme and wave front tracking, converge to the entropy solution, see [3, 16, 26, 27, 64] for details. The BV approach consists of proving convergence of these schemes under assumptions on the initial condition u0 related to its total variation. Typically, one assumes that the total variation satisfies a smallness condition, see . Another approach is to construct weak
solutions through weak convergence and compensated compactness arguments,
see for instance [87, 114, 130] and Section 1.1.5. In the next section we discuss
the vanishing viscosity method for conservation laws.
1.1.4  Solutions of Scalar Conservation Laws via Vanishing Viscosity
The role of the entropy condition in conservation laws is to distinguish between
the physically relevant weak solution and other, possibly irrelevant weak solutions.
One method for obtaining and analyzing entropy solutions to hyperbolic conservation laws is to modify the given conservation law by adding a small perturbation term to the right-hand side of the equation, for example εuεxx , with ε ≪ 1, to obtain from (1.8) the regularized equation
uεt + (f (uε ))x − εuεxx = 0.    (1.51)
The motivation that is often given for the study of the Cauchy problem (1.8) - (1.9) through the regularized problem
ut + (f (u))x = εuxx in R × (0, ∞),  ε > 0,    (1.52)
u(x, 0) = u0 (x),  x ∈ R,    (1.53)
is that a physically and mathematically correct solution of (1.8) - (1.9) should
arise as the limit of the solution uε of (1.52) - (1.53), as the parameter ε tends to
zero. This method is generally known as the vanishing viscosity method [21, 23,
52, 111, 109, 113, 128, 129].
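The vanishing viscosity effect can be observed with a simple explicit discretization of (1.52): centred differences for (f(u))x and uxx, and a forward difference in time. The sketch below (grid parameters are our own choices, subject to the stability restriction dt ≤ dx²/2ε) shows the viscous transition layer around the shock sharpening as ε decreases:

```python
def viscous_burgers(eps, dx=0.02, dt=0.002, steps=200):
    """Explicit scheme for u_t + (u^2/2)_x = eps*u_xx on [-1, 1],
    with step initial data u0 = 1 for x < 0, u0 = 0 for x > 0."""
    n = int(2.0 / dx)
    u = [1.0 if j * dx - 1.0 < 0.0 else 0.0 for j in range(n)]
    for _ in range(steps):                 # evolve to t = steps * dt
        v = u[:]
        for j in range(1, n - 1):
            conv = (0.5 * u[j + 1] ** 2 - 0.5 * u[j - 1] ** 2) / (2 * dx)
            diff = eps * (u[j + 1] - 2 * u[j] + u[j - 1]) / dx ** 2
            v[j] = u[j] + dt * (diff - conv)
        u = v                              # boundary values kept fixed
    return u

def layer_width(u):
    # number of grid cells inside the transition layer 0.1 < u < 0.9
    return sum(1 for x in u if 0.1 < x < 0.9)

wide, narrow = viscous_burgers(0.05), viscous_burgers(0.01)
assert layer_width(narrow) < layer_width(wide)   # layer sharpens as eps -> 0
```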
In this regard, we may recall that the model for thermoelastic materials under
adiabatic conditions is a ﬁrst order system of hyperbolic PDEs, while that for
thermoviscoelastic, heat-conducting materials is a second order PDE, containing
a diﬀusive term, see . Every material has a degree of viscous response and
conducts heat. Classifying a material as an elastic nonconductor of heat simply
means that viscosity and heat conductivity are negligible, but not totally absent.
The consequence of this is that the theory of adiabatic thermoelasticity may be physically meaningful only as a limiting case of thermoviscoelasticity, with viscosity and heat conductivity tending to zero, see [52, 111]. In the same way, hyperbolic conservation laws are considered as a limiting case of the parabolic equation (1.52).
We note here that solutions of nonlinear PDEs are in general highly unstable
with respect to small perturbations of the equation. Thus in spite of the physical
intuition underlying such viscosity methods, the rigorous mathematical analysis
of the limiting behavior of solutions of equations like (1.52) - (1.53) as ε tends to 0 is highly nontrivial.
It is well known that for any ε > 0 and for bounded and measurable initial data, there exists a unique classical solution uε of the parabolic problem (1.52) - (1.53), see [56, 95, 130]. This unique solution uε is called a viscosity solution of (1.52) - (1.53). The following general theorem guarantees the existence of a sequence of solutions to the parabolic problem (1.52) - (1.53).
Theorem 1.24. [130, Theorem 1.0.2] (i) For any ﬁxed ε > 0, the Cauchy problem
(1.52) - (1.53) with u0 ∈ L∞ has a local classical solution uε ∈ C ∞ (R × (0, τ )) for
a small time τ, which depends only on the L∞ norm of the initial data u0 .
(ii) If the solution uε has an a priori L∞ bound ∥uε (·, t)∥L∞ ≤ M (ε, T ) for t ∈
[0, T ], then the solution exists on R × [0, T ].
(iii) The solution uε satisfies
lim_{|x|−→∞} uε (x, t) = 0 if lim_{|x|−→∞} u0 (x) = 0.
Following the standard theory for parabolic equations, the global existence of
a solution can easily be obtained by applying the contraction mapping principle
to an integral representation of the solution. Whenever there is a local solution
with a priori L∞ bound, the domain of existence of solution can be extended,
step by step, to any further time T since the step-time depends only on the L∞
norm of the initial condition. Details of the proof can be found in [77, 111].
The two fundamental questions concerning the solution uε of (1.52) - (1.53)
are the following.
(i) In what sense does the sequence of functions uε converge to a limit function
u as ε tends to 0?
(ii) Given that uε converges to some u in a speciﬁed way, in what sense can we
interpret u as a solution of the Cauchy problem (1.8) - (1.9)? In particular,
if uε is the unique classical solution of (1.52)-(1.53) and uε converges to some
function u as ε tends to 0, is u an entropy solution of the Cauchy problem
(1.8) - (1.9)?
A partial answer to the above questions is given in the following Theorem, see
[52, 58, 75].
Theorem 1.25. [52, Theorem 6.3.1] Suppose uε is the solution of (1.52), (1.53),
and assume that for some sequence {εn } with εn → 0 as n → ∞, we have that
uεn is norm bounded in L∞ and uεn (x, t) → u(x, t) as εn −→ 0 in R for almost all
(x, t) ∈ R × [0, ∞). In other words, uεn → u boundedly a.e on R × [0, ∞). Then
u is an entropy solution of (1.8)-(1.9) on R × [0, ∞).
Remark 1.26. (i) Since the weak solutions of (1.8)-(1.9) are in L∞ , and are typically not continuous, it may happen that as the smooth function uε approaches
u the functions uεx and uεxx become unbounded, in a neighborhood of a point of
discontinuity of u. Thus establishing the convergence uε → u is a highly nontrivial issue.
(ii) If uε converges to u in the weak sense only, the sequence f (uε ) will converge in the weak sense, but not necessarily to f (u). In this regard, we have the following
Theorem 1.27 (, ). If the sequence of functions un converges in the weak
sense to a limit u, then f (un ) converges in the weak sense to f (u) if and only if
un → u strongly in L1 .
The theory of scalar conservation laws via vanishing viscosity was initiated
by E. Hopf in his 1950 paper . In that paper, Hopf considered the viscous
Burgers equation
uεt + ((uε )2 /2)x = εuεxx in R × (0, ∞),    (1.54)
uε (x, 0) = u0 (x) on R × {t = 0},    (1.55)
and showed that the solution to the Cauchy problem (1.54) - (1.55) can be expressed via the explicit formula
uε (x, t) = [∫_{−∞}^{∞} ((x − y)/t)e−K(x,y,t)/2ε dy] / [∫_{−∞}^{∞} e−K(x,y,t)/2ε dy],    (1.56)
where
K(x, y, t) = (x − y)2 /(2t) + ∫_{0}^{y} u0 (s)ds.    (1.57)
The result by Hopf is stated below.
Theorem 1.28. [69, Hopf E.] Suppose u0 ∈ L1loc (R) is such that
∫_{0}^{x} u0 (ξ)dξ = o(x2 ) for |x| large.    (1.58)
Then there exists a classical solution of equation (1.54) - (1.55) given by (1.56).
The solution uε satisfies the initial condition: For all a ∈ R,
∫_{0}^{x} uε (ξ, t)dξ −→ ∫_{0}^{a} u0 (ξ)dξ as x −→ a, t −→ 0.    (1.59)
If, in addition, u0 (x) is continuous at x = a then
uε (x, t) −→ u0 (a) as x −→ a, t −→ 0.    (1.60)
A solution of (1.54) - (1.55) which is C 2 -smooth in the interval 0 < t < T and
satisﬁes (1.59) for each value of a necessarily coincides with (1.56) in the interval.
In his proof Hopf’s technique was to ﬁrst transform the equation (1.54) - (1.55)
into the heat equation
zt − εzxx = 0,
z(x, 0) = z0 (x) = e− 2ε
1
∫x
0
(1.61)
(u0 (ν))dν
(1.62)
using the transformation equation
1
−( 2ε
z=e
∫
uε dx)
with inverse
uε = −2ε(log z)x = −2ε(
zx
).
z
The solution of (1.61) - (1.62) is then obtained as
∫ ∞
(x−y)2
1
uε (y, 0)e− 4εt .
z(x, t) = √
4πεt −∞
(1.63)
(1.64)
Substituting the expression (1.64) for z(x, t) in (1.63) one obtains the formula
(1.56) as the unique solution of equations (1.54) - (1.55). The condition (1.58) on
the initial value is necessary to guarantee the convergence of the deﬁnite integral
(1.64), thus also those in the expression (1.56) for the solution of (1.54) - (1.55).
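Formula (1.56) can be evaluated by numerical quadrature. For the Riemann datum u0 = 1 on (−∞, 0) and u0 = 0 on (0, ∞) one has U0(y) = ∫₀^y u0(s)ds = min(y, 0), and for small ε the profile should be close to the shock solution u = 1 for x < t/2, u = 0 for x > t/2. A sketch (discretization choices are ours):

```python
import math

def u_eps(x, t, eps, ymin=-8.0, ymax=8.0, n=4000):
    """Hopf's formula (1.56) for u0 = 1 on (-inf, 0) and 0 on (0, inf),
    evaluated with the midpoint rule; here U0(y) = min(y, 0)."""
    h = (ymax - ymin) / n
    num = den = 0.0
    for i in range(n):
        y = ymin + (i + 0.5) * h
        K = (x - y) ** 2 / (2.0 * t) + min(y, 0.0)
        w = math.exp(-K / (2.0 * eps))
        num += (x - y) / t * w
        den += w
    return num / den

t, eps = 1.0, 0.02
assert abs(u_eps(0.2, t, eps) - 1.0) < 0.05   # left of the shock x = t/2
assert abs(u_eps(0.8, t, eps)) < 0.05         # right of the shock
```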
The function K(x, y, t) has the following properties which are used in the sequel
(P1) K(x, y, t) is a continuous function of y with x, t being ﬁxed.
(P2) For any ﬁxed value of x and t, the function K(x, y, t) attains its minimum
value at one or several values of y. Furthermore, the set
{y : K(x, y, t) = min_{z∈R} K(x, z, t)}    (1.65)
is a compact set.
(P3) K(x, ymin (x, t), t) = K(x, ymax (x, t), t) = m(x, t) is a continuous function of
x, t in the half plane t > 0, where ymin and ymax denote the minimum and
maximum values of y, respectively, for which K(x, y, t) attains its minimum.
Note that the assumption (1.58) is essential for obtaining property (P2). Indeed,
using (1.58) one can see that
lim_{|y|−→∞} K(x, y, t)/y 2 = 1/(2t) > 0,
which implies that the set (1.65) is bounded. The fact that it is closed follows
from the continuity of K.
It follows from (P2) that the set (1.65) is bounded in R. Hence the functions
ymin (x, t) = min{y : K(x, y, t) = min_{z∈R} K(x, z, t)}
and
ymax (x, t) = max{y : K(x, y, t) = min_{z∈R} K(x, z, t)}
are well deﬁned. The functions ymin and ymax have the following properties, [69,
Lemma 1 and Lemma 3 ].
(Y1) If x1 < x2 then ymax (x1 , t) ≤ ymin (x2 , t).
(Y2) ymin (x− , t) = ymin (x, t), ymax (x+ , t) = ymax (x, t), where ymin (x− , t) is the
left limit of ymin and ymax (x+ , t) the right limit of ymax , for ﬁxed t.
(Y3) lim_{x−→+∞} ymin (x, t) = +∞,  lim_{x−→−∞} ymax (x, t) = −∞.
(Y4) As functions of x and t, ymin and ymax are lower semi-continuous and upper
semi-continuous respectively. Therefore both functions are continuous at all
points (x, t) where ymin (x, t) = ymax (x, t).
Let us recall  the deﬁnition of lower semi-continuous and upper semi-continuous
functions. In this regard, let R∗ = R∪{±∞} be the set of extended real numbers.
A function u : Ω −→ R∗ is called lower semi-continuous at a point (α, θ) ∈ Ω if for every m < u(α, θ) there exists η > 0 such that
|x − α| < η, |t − θ| < η =⇒ m < u(x, t).
A function u : Ω −→ R∗ is called upper semi-continuous at a point (α, θ) ∈ Ω if for every m > u(α, θ) there exists η > 0 such that
|x − α| < η, |t − θ| < η =⇒ m > u(x, t).
A function u : Ω −→ R∗ is called lower(upper) semi-continuous in Ω if it is
lower(upper) semi-continuous at every point of Ω.
It is clear, from property (Y1), that ymin and ymax are monotone functions in
x. Since a monotone function has only a denumerable number of discontinuities
one can conclude that, for any t > 0, ymin (x, t) = ymax (x, t) for all x except at
some denumerable set of values of x where ymin < ymax . That is,
for t > 0 the set {x : ymin (x, t) < ymax (x, t)} is countable.    (1.66)
The convergence theorem for the solution uε of (1.54) - (1.55) is stated as follows.
Theorem 1.29. [69, Theorem 3] Let uε (x, t) be the solution of (1.54) - (1.55) with u0 satisfying (1.58). Then for all x and t > 0,
(x − ymax (x, t))/t ≤ lim inf_{α→x, θ→t, ε→0} uε (α, θ) ≤ lim sup_{α→x, θ→t, ε→0} uε (α, θ) ≤ (x − ymin (x, t))/t.
In particular,
lim_{α→x, θ→t, ε→0} uε (α, θ) = (x − ymax (x, t))/t = (x − ymin (x, t))/t
holds at every point (x, t), t > 0, at which ymax (x, t) = ymin (x, t).
Define the function u as
u(x, t) := lim_{α→x, θ→t, ε→0} uε (α, θ)    (1.67)
at every point (x, t), t > 0, where this limit exists. By Theorem 1.29, the limit (1.67) will exist at every point where ymin (x, t) = ymax (x, t). At these points, u is well defined and continuous. Furthermore, for each t > 0, u has denumerably many discontinuities, as we noted above. Therefore we can conclude that the set of discontinuities has measure zero. Thus the convergence of uε to u as given by (1.67) is almost everywhere.
We define u̲ and ū on R × [0, ∞) by
u̲(x, t) = lim inf_{α→x, θ→t, ε→0} uε (α, θ)
        = sup{inf{uε (α, θ) : |α − x| < η, |θ − t| < η, ε < η} : η > 0}    (1.68)
and
ū(x, t) = lim sup_{α→x, θ→t, ε→0} uε (α, θ)
        = inf{sup{uε (α, θ) : |α − x| < η, |θ − t| < η, ε < η} : η > 0}.    (1.69)
Note that
u̲(x, t) ≤ ū(x, t),  (x, t) ∈ R × [0, ∞),
and
u̲(x, t) = u(x, t) = ū(x, t) whenever u(x, t) is defined, for (x, t) ∈ R × [0, ∞).
The function u(x, t) deﬁned by (1.67) is a weak solution to the Cauchy problem
of the inviscid Burgers equation (1.12) - (1.13). To see this, let us note ﬁrst that
a solution uε of (1.54) - (1.55) satisfies the equation
∫_{0}^{∞} ∫_{−∞}^{∞} {uε ϕt + ((uε )2 /2)ϕx }dxdt + ∫_{−∞}^{∞} u0 ϕ|t=0 dx + ε ∫_{0}^{∞} ∫_{−∞}^{∞} uε ϕxx dxdt = 0,    (1.70)
for each test function ϕ ∈ C0∞ (Ω).
From the convergence Theorem 1.29 it follows
that every point (x, t) has a neighborhood in which the solutions uε of (1.54) - (1.55) are uniformly bounded as ε tends to 0. As a result, one can pass to the limit ε −→ 0 in (1.70) with ϕ being fixed. Thus we have
∫_{0}^{∞} ∫_{−∞}^{∞} {uϕt + (u2 /2)ϕx }dxdt + ∫_{−∞}^{∞} u0 ϕ|t=0 dx = 0,
which shows that u(x, t) is a weak solution of the Cauchy problem for the inviscid Burgers equation (1.12).
Furthermore, the limit function u(x, t) defined by (1.67) is an entropy solution of the equation (1.12) - (1.13). To see this, let (Φ, Ψ) be any given entropy/entropy-flux pair for the equation (1.8). Multiplying equation (1.54) by Φ′ (uε ) gives
uεt Φ′ (uε ) + uε uεx Φ′ (uε ) = εuεxx Φ′ (uε ) = ε((Φ(uε ))xx − Φ′′ (uε )(uεx )2 ).
Using (1.43) we get
(Φ(uε ))t + (Ψ(uε ))x = ε((Φ(uε ))xx − Φ′′ (uε )(uεx )2 ).    (1.71)
Now multiply equation (1.71) by ϕ ∈ C0∞ (R × (0, ∞)), ϕ ≥ 0, and integrate by parts over R × [0, ∞):
∫_{0}^{∞} ∫_{−∞}^{∞} (Φ(uε )ϕt + Ψ(uε )ϕx )dxdt
    = −ε ∫_{0}^{∞} ∫_{−∞}^{∞} Φ(uε )ϕxx dxdt + ε ∫_{0}^{∞} ∫_{−∞}^{∞} Φ′′ (uε )(uεx )2 ϕdxdt
    ≥ −ε ∫_{0}^{∞} ∫_{−∞}^{∞} Φ(uε )ϕxx dxdt,    (1.72)
since Φ′′ (uε )(uεx )2 ϕ ≥ 0.
Again from the convergence Theorem 1.29 it follows that every point (x, t) has
a neighborhood in which the solutions uε of (1.54) - (1.55) are uniformly bounded
as ε tends to 0. Moreover, the function Φ(u) is convex and thus continuous. As
such one can pass to the limit as ε −→ 0 in (1.72) with ϕ being fixed. Thus we have
∫_{0}^{∞} ∫_{−∞}^{∞} (Φ(u)ϕt + Ψ(u)ϕx )dxdt ≥ 0,
which shows that u(x, t) is an entropy solution of the Cauchy problem for the
inviscid Burgers equation (1.12).
Lax  obtained a result similar to that of Hopf by showing that the weak solution
u(x, t) = b((x − y0 )/t) for each t > 0 and a.e. x ∈ R
obtained in Theorem 1.18 can be written as a limit of solutions of a regularized Burgers equation. To see this, consider the equation
ut + (f (u))x = (1/2n)uxx ,  f (u) = u2 /2,    (1.73)
with the initial condition
un (x, 0) = u0 (x).    (1.74)
Then, similar to Hopf's result, we see that the function
un = [∫_{−∞}^{∞} b((x − y)/t)e−nK(x,y,t) dy] / [∫_{−∞}^{∞} e−nK(x,y,t) dy]    (1.75)
is a solution to the equation (1.73) - (1.74) and we deﬁne
u(x, t) = lim_{n→∞} un .
As it was in case of Hopf’s result, the convergence of un to u as n tends to ∞ is
almost everywhere, see . Likewise, deﬁne
∫∞
x−y
−nK
dy
−∞ f (b( t ))e
∫∞
fn =
.
(1.76)
−nK dy
e
−∞
Then
f = lim fn , a.e.
n→∞
Here
K(x, y, t) = U0 (y) + tG((x − y)/t),
the function b(s) is defined as b(s) = (f ′ )−1 (s), G(s) is defined as the solution of
dG(s)/ds = b(s),    G(c) = 0, with c = f ′ (0),
and
U0 (y) = ∫_{0}^{y} u0 (s)ds.
If we denote by Vn the function
Vn = log ∫_{−∞}^{∞} e−nK(x,y,t) dy,
then
un = −(1/n)(∂Vn /∂x)
and
fn = (1/n)(∂Vn /∂t),
provided that f (b(z)) = zb(z) − G(z). It then follows that
(un )t + (fn )x = 0.    (1.77)
Multiplying equation (1.77) by a test function ϕ ∈ C0∞ (Ω) and integrating by parts, we get
∫∫_{Ω} (un ϕt + fn ϕx )dxdt = 0;
letting n → ∞ we obtain the limit relation
∫∫_{Ω} (uϕt + f (u)ϕx )dxdt = 0.
This shows that u is a weak solution of equations (1.8) - (1.9), see [78, Theorem 2.1].
For an arbitrary function f, there is no explicit formula for the solution of the viscous equation. However, Oleinik  proved that for a general convex or concave flux function, the solutions of the parabolic problem (1.52) - (1.53) tend to a weak solution of (1.8). A simpler proof was given by Ladyzhenskaya in .
As mentioned in Section 1.1.3, see in particular Theorem 1.15, Oleinik [95,
96] showed that there exists a unique solution of (1.8) - (1.9) that satisfies the
admissibility condition (1.34), provided that the flux function f is convex. This
solution is constructed as a limit of solutions uε of equation (1.52) - (1.53), obtained
through a finite difference scheme introduced by Lax, see also [81, 82]. It
was subsequently shown that u is in fact the unique solution of (1.8) - (1.9)
satisfying (1.34), see [111, Theorem 16.11].
Kruzhkov introduced a new method to apply the vanishing viscosity
method to a larger class of equations. For initial data u0 ∈ L∞ , he proved
existence and uniqueness of the classical solution uε (x, t) of (1.52)-(1.53). Using
a family of entropy-entropy ﬂux pairs (Φk , Ψk )k∈R where
Φk (u) = |u − k| and Ψk (u) := sgn(u − k)(f (u) − f (k)),
he showed that the solution uε (x, t) of equations (1.52) - (1.53) converges as ε
tends to 0 almost everywhere to a weak solution u(x, t) of the Cauchy problem
(1.8) -(1.9).
Theorem 1.30. [75, Kruzhkov] Let u0 ∈ L∞ (R). Then the solution uε (x, t) of
problem (1.52) - (1.53) converges as ε −→ 0 almost everywhere in R × [0, T ) to
a function u(x, t) which is a weak solution of the problem (1.8) - (1.9).
In the proof of the above theorem, a priori bounds (independent of ε) were obtained
for the solutions uε (x, t), which ensure the compactness of the family of
functions {uε (x, t) : t > 0} with respect to the L1 -norm. This in turn guarantees
the existence of a subsequence uεn of uε that converges almost everywhere to the
weak solution u(x, t). Thus a weak solution of the Cauchy problem (1.8) - (1.9)
is constructed as the limit of solutions uε of the parabolic problem (1.52) - (1.53).
The following theorem shows that the weak solution constructed above is an
entropy solution.
Theorem 1.31. Let u0 ∈ L∞ (R). If uε (x, t) converges to a function u(x, t)
almost everywhere in R × [0, T ) as ε −→ 0, then u is the entropy solution of the
Cauchy problem (1.8) - (1.9).
The properties of the solution u(x, t) of problem (1.8) - (1.9) are addressed in
the following theorem.
Theorem 1.32. [109, Proposition 2.3.6] Let u0 , v0 ∈ L∞ and u and v be the
entropy solutions of (1.8) -(1.9) associated with u0 and v0 respectively. Let
M = sup{|f ′ (s)| : s ∈ [inf(u0 (x), v0 (x)), sup(u0 (x), v0 (x))]}.
Then the following properties are satisﬁed:
(P1) For all t > 0 and every interval [a, b], we have

∫_a^b |v(x, t) − u(x, t)| dx ≤ ∫_{a−M t}^{b+M t} |v0 (x) − u0 (x)| dx.
(P2) If u0 and v0 coincide on [x0 − δ, x0 + δ] for some δ > 0, then u and v coincide
on the triangle {(x, t) : |x − x0 | + M t < δ}.
(P3) If u0 − v0 ∈ L1 (R), then u(t) − v(t) ∈ L1 (R) ∀ t > 0, where u(t) := u(·, t)
and v(t) := v(·, t). Moreover,
∥v(t) − u(t)∥L1 (R) ≤ ∥v0 − u0 ∥L1 (R) ,
and

∫_R (v(x, t) − u(x, t)) dx = ∫_R (v0 (x) − u0 (x)) dx.
(P4) If u0 ∈ L1 (R), then u(t) ∈ L1 (R) for all t > 0, and

∥u(t)∥L1 (R) ≤ ∥u0 ∥L1 (R) ,  ∫_R u(x, t) dx = ∫_R u0 (x) dx.
(P5) If u0 (x) ≤ v0 (x) for almost all x ∈ R, then u(x, t) ≤ v(x, t) for almost all
(x, t) ∈ R × [0, ∞).
(P6) If u0 has bounded total variation, then u(t) has bounded total variation for
all t > 0 and
T V (u(t)) ≤ T V (u0 ).
Remark 1.33. The proof of the above Theorem 1.32 is based on the fact that
the semigroup operator St of (1.8) is a contraction in L1 (R) ∩ L∞ (R) with respect
to the L1 -norm. This fact is expressed in property (P3), which implies that if
u0 ∈ BV, then u ∈ BV for all t > 0 as stated in property (P6). Property (P4) is
a consequence of property (P3), and it leads to property (P5).
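Properties (P5) and (P6) can be checked numerically with a monotone first-order upwind scheme for the flux f (u) = u²/2: monotone schemes preserve the order of initial data and are total-variation diminishing, mirroring the behaviour of the exact solution operator. The scheme and the particular initial data below are illustrative assumptions of mine, not taken from the text.

```python
import numpy as np

def tv(u):
    """Total variation of a grid function."""
    return float(np.sum(np.abs(np.diff(u))))

def upwind_step(u, dx, dt):
    """One step of the first-order upwind scheme for u_t + (u^2/2)_x = 0,
    valid for nonnegative u (characteristic speeds f'(u) = u >= 0)."""
    f = 0.5 * u * u
    un = u.copy()
    un[1:] = u[1:] - dt / dx * (f[1:] - f[:-1])
    return un

def evolve(u0, dx, T, umax=1.5):
    """March to time T with a fixed CFL-stable step; using the same step
    sequence for both solutions keeps the update a monotone map."""
    dt = 0.4 * dx / umax
    u = u0.copy()
    for _ in range(int(round(T / dt))):
        u = upwind_step(u, dx, dt)
    return u

x = np.linspace(0.0, 10.0, 400)
dx = x[1] - x[0]
u0 = np.exp(-(x - 3.0) ** 2)                 # smooth hump; steepens into a shock
v0 = u0 + 0.3 * np.exp(-(x - 5.0) ** 2)      # v0 >= u0 everywhere
u, v = evolve(u0, dx, 2.0), evolve(v0, dx, 2.0)

print(tv(u) <= tv(u0) + 1e-8)   # (P6): total variation does not increase
print(bool(np.all(v >= u - 1e-8)))  # (P5): order of the data is preserved
```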
1.1.5 Compensated Compactness Methods for Nonlinear Conservation Laws
As mentioned, the vanishing viscosity method involves the construction of a
physically meaningful solution of the Cauchy problem (1.8) - (1.9) as the limit
of the solutions of the parabolic problem (1.52) - (1.53). The strategy usually
adopted in the literature is to obtain a priori bounds on the solutions uε of
(1.52) - (1.53), that is, to show that

∥uε ∥L∞ ≤ C,

with C a constant independent of ε.
Such an estimate is then used to show that uε converges to some function u, in an
appropriate sense, as ε −→ 0. The ﬁnal step is usually to show, using a suitable
compactness argument, that the limit function u is the entropy solution to the
Cauchy problem (1.8) - (1.9). From the foregoing discussion it is clear that an
essential part of the vanishing viscosity method is the study of the compactness
of the set {uε (x, t) : ε > 0} of solutions of the viscous problem (1.52) - (1.53)
with respect to the L1 topology. That is, the possibility of obtaining the strong
convergence of a subsequence uεn , where εn −→ 0 as n −→ ∞. These compactness
arguments are closely related to the decay of entropy solutions for large time, see
for instance [41, 42, 43, 44, 45, 46].
The issue of compactness of the set {uε : ε > 0} has been addressed mainly in
the following ways:
(i) The compactness approach based on a priori BV bounds, which has proven
to be most useful in the study of scalar conservation laws and systems of
conservation laws, see Section 1.1.4. A wide range of numerical examples
of the application of this compactness approach to finite difference
approximations exists in the literature. However, the approach is essentially
limited to systems in one spatial dimension, except for Kruzhkov's
multidimensional BV-based existence result, which relies on the translation
invariance of the underlying solution operator.
(ii) Compensated compactness, developed by Tartar and Murat, is based on
certain L2 -type, H −1 -compact entropy production bounds, which replace
the BV bounds of the BV compactness method, see [47, 49, 130]. So far,
existence results based on compensated compactness arguments are limited
to systems of two conservation laws in two spatial dimensions.
In the light of the importance of compactness arguments in the study of nonlinear
conservation laws, we briefly discuss some of the main points related to such
compactness methods and their applications to nonlinear conservation laws.
BV Compactness
If a collection of functions {uε (x, t) : ε > 0} satisfies

∥uε ∥L∞ ≤ C1  and  T V (uε ) ≤ C2

for all ε > 0, where C1 and C2 are constants independent of ε, then by Helly's
Compactness Theorem [28, Theorem 2.3] there exists a sequence εk −→ 0 such
that the sequence {uεk } converges almost everywhere to some u ∈ L∞ .
The use of the BV compactness framework in proving existence and uniqueness
of solutions of (1.8) - (1.9), as well as its other applications, can be found in the
literature.
Compactness in L1
Suppose that {uε : ε > 0} satisfies
(i) ∥uε ∥L1 ≤ C, with C > 0 a constant independent of ε;
(ii) {uε (x, t) : ε > 0} is equicontinuous in L1loc (R × [0, ∞)), that is, for any
compact subset Ω ⊆ R × [0, ∞),

∥uε (x + ∆x, t + ∆t) − uε (x, t)∥L1 −→ 0 uniformly on Ω

as ∆x, ∆t −→ 0.
Then there exists a sequence εk −→ 0 such that

uεk −→ u in L1loc (R × [0, ∞)).

The above compactness in L1 was applied by Kruzhkov to prove existence and
uniqueness of entropy solutions of conservation laws.
Remark 1.34. (i) Existence of a C ∞ solution uε follows from the boundedness of
the initial data and Theorem 1.24. The proof of the compactness of the entropy
bound is similar to the case where solutions are in L∞ , which was briefly shown
above.
(ii) We remark here that the concept of generalized solutions of hyperbolic
systems of conservation laws is a straightforward generalization of that for the
scalar conservation laws discussed above; we therefore omit it and refer the reader
to [2, 13, 29, 30, 31, 34, 35, 36, 37, 55, 60, 67, 110, 115] for details.
1.2 Convergence Spaces
The Hausdorff-Kuratowski-Bourbaki concept of general topology has proved to
be very useful in analysis. One such useful application is the powerful methods
of linear functional analysis initiated by Banach within the setting of metric
spaces. However, several deficiencies of the Hausdorff-Kuratowski-Bourbaki
topology emerged in the middle of the twentieth century. The most serious of
these deficiencies is the fact that there is in general no natural topological
structure for function spaces.
Recall that if X, Y and Z are sets, then the exponential law

Z^{X×Y} ≃ (Z^X )^Y  (1.78)

holds. This means there is a canonical one-to-one correspondence between the
spaces of functions

f : X × Y −→ Z  (1.79)

and

g : Y −→ Z^X = {h : X −→ Z}.  (1.80)

That is, with any function (1.79) one can associate the function

f˜ : Y ∋ y 7−→ f (·, y) ∈ Z^X  (1.81)

defined through

f˜(y) : X ∋ x 7−→ f (x, y) ∈ Z.

Conversely, with any function (1.80) one may associate the function

g̃ : X × Y −→ Z

defined by

g̃(x, y) = g(y)(x) ∈ Z.
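The passage between (1.79) and (1.80) is the familiar operation of currying. As a purely illustrative aside (the function names below are mine, not from the text), the correspondence can be expressed in a few lines of code:

```python
def curry(f):
    """Send f : X x Y -> Z to g : Y -> (X -> Z), as in (1.79)-(1.81)."""
    return lambda y: (lambda x: f(x, y))

def uncurry(g):
    """The inverse passage, sending g : Y -> (X -> Z) back to X x Y -> Z."""
    return lambda x, y: g(y)(x)

f = lambda x, y: x * 10 + y
g = curry(f)

assert g(3)(2) == f(2, 3)                   # f~(y)(x) = f(x, y)
assert uncurry(curry(f))(2, 3) == f(2, 3)   # the correspondence is bijective
```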
For topological spaces X, Y and Z, the exponential law can be written as

C(X × Y, Z) ≃ C(Y, C(X, Z)).  (1.82)

Consider a continuous function

f : X × Y −→ Z.  (1.83)

With the mapping (1.83) we associate the mapping

Ff : Y ∋ y 7−→ Ff (y) ∈ C(X, Z)  (1.84)

defined by

Ff (y) : X ∋ x 7−→ f (x, y) ∈ Z.  (1.85)
Conversely, with a continuous function

F : Y −→ C(X, Z)  (1.86)

we associate the mapping

fF : X × Y −→ Z  (1.87)

defined as

fF : X × Y ∋ (x, y) 7−→ (F (y))(x) ∈ Z.
Let C(X, Y ) be equipped with the compact open topology, which has as a subbasis
the collection of sets

S(K, U ),  K ⊆ X compact, U ⊆ Y open,

where

S(K, U ) = {f ∈ C(X, Y ) : f (K) ⊆ U }.

If X is locally compact and Hausdorff, then the mapping (1.87) associated with
the mapping (1.86) is continuous whenever the mapping (1.86) is continuous.
Hence, whenever X is locally compact and Hausdorff, the mappings (1.83) -
(1.87) define a bijection

χ : C(X × Y, Z) ∋ f 7−→ Ff ∈ C(Y, C(X, Z)).  (1.88)
Moreover, if Y and Z are also locally compact then the mapping (1.88) is a
homeomorphism. Thus (1.82) holds for locally compact spaces X, Y and Z and
the compact open topology on the relevant spaces of continuous functions. However, if the assumptions of local compactness on any of the spaces X, Y or Z are
relaxed, then either the mapping (1.87) fails to be continuous, or the mapping
(1.88) will no longer be a homeomorphism. Thus, unless all the spaces X, Y
and Z are locally compact, there is no topology on C(X, Y ) for which the above
construction holds, see for instance [25, 86, 122].
Another failure of the Hausdorﬀ-Kuratowski-Bourbaki concept of topology,
from the perspective of applications to analysis, concerns the issue of generality.
In this regard, we may mention that there are several natural and important
notions of convergence that cannot be associated with a Hausdorff-Kuratowski-
Bourbaki topology. For instance, pointwise almost everywhere convergence is
not topological. To see this, we recall the following example, see [97, 101, 123].
Example 1.35. Let M (R) denote the set of real Lebesgue measurable functions
on R. Consider the sequence (u_n^m ), where

u_n^m (x) = 1 if (m − 1)/n ≤ x ≤ m/n, and u_n^m (x) = 0 otherwise.

For any m, n ∈ N and ε > 0 we have

A_n^m (ε) = {x ∈ R : u_n^m (x) ≥ ε} ⊆ [(m − 1)/n, m/n].

Thus

lim_{m,n−→∞} mes(A_n^m (ε)) = 0,

so that (u_n^m ) converges to 0 in measure. However, (u_n^m ) does not converge
to 0 almost everywhere. Indeed, for all a ∈ R, a > 0, and N ∈ N, there exist
m, n ≥ N such that u_n^m (a) = 1. Now suppose that there is some topology τae
on M (R) such that the sequences that converge with respect to τae are precisely
those that converge almost everywhere. Since (u_n^m ) does not converge to 0
almost everywhere, it follows that there is a τae -neighborhood V of 0 such that

∀ k ∈ N : ∃ mk , nk ≥ k : u_{nk}^{mk} ̸∈ V.

In this way, we obtain a subsequence (u_{nk}^{mk}) of (u_n^m ) so that

∀ k ∈ N : u_{nk}^{mk} ̸∈ V.  (1.89)

Thus no subsequence of (u_{nk}^{mk}) converges almost everywhere to 0, for such
a subsequence would converge with respect to τae and hence eventually belong
to V, contradicting (1.89). However, a well known result [68, Theorem 11.26]
states that every sequence which converges in measure has a subsequence which
converges almost everywhere to the same limit. Since (u_{nk}^{mk}) converges to
0 in measure, it therefore has a subsequence which converges almost everywhere
to 0, which is a contradiction. Thus there is no topology which induces convergence
almost everywhere on the set of measurable functions. □
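The "typewriter" sequence of Example 1.35 can be probed numerically. The sketch below is an illustration I have added (the enumeration of the index pairs (m, n) is my own assumption): it confirms that the set where u_n^m exceeds any ε > 0 has measure 1/n, which shrinks to 0, while at a fixed point a > 0 the value 1 recurs for arbitrarily large indices, so the sequence cannot converge to 0 at a.

```python
from fractions import Fraction

def u(m, n, x):
    """The 'typewriter' functions of Example 1.35, evaluated exactly."""
    return 1 if Fraction(m - 1, n) <= x <= Fraction(m, n) else 0

def measure_of_support(m, n):
    """mes{x : u_n^m(x) >= eps} for any 0 < eps <= 1: the interval length 1/n."""
    return Fraction(1, n)

# Convergence in measure: the supports shrink to measure zero.
print(measure_of_support(7, 1000))  # a very short interval

# Failure of a.e. convergence: at the fixed point a = 1/3, the value 1
# keeps recurring, however far out in the enumeration we look.
a = Fraction(1, 3)
hits = []
for n in range(1, 60):
    for m in range(1, n + 1):
        if u(m, n, a) == 1:
            hits.append((m, n))
print(len(hits))  # u_n^m(a) = 1 for at least one m at every n
```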
One solution to the above mentioned limitations of the Hausdorff-Kuratowski-
Bourbaki topology is provided by the theory of convergence spaces, see [20, 25,
123], which is a more general notion of topology. Our main focus in this section
is to introduce some of the fundamental concepts related to convergence spaces.
A convergence space is a set together with a designated collection of filters. Recall
that a filter F on a set X is a nonempty collection of subsets of X such that
(i) the empty set does not belong to F ;
(ii) for all F ∈ F and all G ⊆ X, if G ⊇ F, then G ∈ F ;
(iii) if F, G ∈ F , then F ∩ G ∈ F.
A subset B ⊆ F is a ﬁlter basis of F if each set in F contains a set in B. The
filter F is said to be generated by B. We then write F = [B]. If A ⊆ X, the filter
generated by A is written as [A]. That is,
[A] = {F ⊆ X : F ⊇ A} .
In particular, for x ∈ X, [x] is the filter generated by {x}. The filter [x] is called
the principal ultrafilter generated by x. Recall that a filter G on X is called an
ultrafilter if G ̸⊂ F for every filter F on X. The intersection of two filters F and
G on X is defined as

F ∩ G = [{F ∪ G : F ∈ F , G ∈ G}].
If F is a ﬁlter on X, and G is a ﬁlter on Y, then the product of the ﬁlters F
and G is a ﬁlter on X × Y which is deﬁned as
F × G = [{F × G : F ∈ F, G ∈ G}]
If ﬁlters F and G on X are such that G ⊆ F , then we say that F is ﬁner than G,
or alternatively G is coarser than F. If F and G are ﬁlters on X, the ﬁlter F ∨ G
may not exist. However, if A ∩ B ̸= ∅ for all A ∈ F and all B ∈ G then
F ∨ G = [{A ∩ B : A ∈ F, B ∈ G}]
exists. If f : X −→ Y is a mapping then we deﬁne the image ﬁlter of F under f
as
f (F) = [{f (F ) : F ∈ F}].
If (xn ) is a sequence in X, then we define the Fréchet filter associated with (xn )
as

⟨(xn )⟩ = [{{xn : n ≥ k} : k ∈ N}].
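On a finite set every filter is principal, so the filter operations just defined can be modelled exactly in code. The following sketch is my own illustration (the helper names are hypothetical): it represents a filter as the set of its members, checks the filter axioms, and verifies the identity [A] ∩ [B] = [A ∪ B] for principal filters.

```python
from itertools import combinations

def powerset(X):
    s = list(X)
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

def principal_filter(A, X):
    """[A] = {F subset of X : F contains A}; on a finite set every
    filter has this form."""
    A = frozenset(A)
    return {F for F in powerset(X) if A <= F}

def filter_meet(F_, G_, X):
    """F cap G = [{F u G : F in F, G in G}], the finest filter coarser
    than both F and G."""
    base = {F | G for F in F_ for G in G_}
    return {H for H in powerset(X) if any(B <= H for B in base)}

X = {1, 2, 3, 4}
F = principal_filter({1, 2}, X)
G = principal_filter({2, 3}, X)

# Filter axioms for F: no empty set; closed under finite intersection.
assert frozenset() not in F
assert all((A & B) in F for A in F for B in F)

# The intersection of [A] and [B] is the principal filter [A u B].
assert filter_meet(F, G, X) == principal_filter({1, 2, 3}, X)
```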
Recall [20, 123] that a given topological space (X, τ ) may be completely described
by specifying the convergence associated with the topology τ. In particular, a
filter F on X converges to x ∈ X if and only if F ⊇ Vτ (x), where Vτ (x) denotes
the τ -neighborhood filter at x ∈ X. For each x ∈ X we may denote the set of all
filters converging to x with respect to τ by λτ (x). That is,

λτ (x) = {F a filter on X : F ⊇ Vτ (x)}.  (1.90)

A sequence (xn ) converges to x ∈ X with respect to the topology τ if

∀ V ∈ Vτ (x) : ∃ NV ∈ N : xn ∈ V for all n ≥ NV .  (1.91)

This implies that

⟨(xn )⟩ ⊇ Vτ (x).

Conversely, if ⟨(xn )⟩ ⊇ Vτ (x), then (1.91) must hold. Thus the definition of
filter convergence in a topological space is a straightforward generalization of
the corresponding notion of convergence for sequences in a topological space.
A convergence structure on a set X is a generalization of the topological
convergence (1.90) and is defined as follows.
Definition 1.36. Let X be a nonempty set. A convergence structure on X is a
mapping λ from X into the power set of the set of all filters on X that satisfies
the following for all x ∈ X :
(i) [x] ∈ λ(x)
(ii) If F, G ∈ λ(x), then F ∩ G ∈ λ(x).
(iii) If F ∈ λ(x), then G ∈ λ(x), for all ﬁlters G ⊇ F.
The pair (X, λ) is called a convergence space. Whenever F ∈ λ(x) we say F
converges to x and write “F −→ x”.
Remark 1.37. Let λ and µ be two convergence structures on the same set X.
Then λ is ﬁner than µ (or µ is coarser than λ) if for every x ∈ X, λ(x) ⊆ µ(x).
That is, λ has fewer convergent ﬁlters than µ.
As mentioned, convergence spaces are more general than topological spaces.
However the concepts of continuity, embedding, homeomorphisms, open set and
closure of a set generalize to the more general context of convergence spaces. In
this regard, let X and Y be convergence spaces with convergence structures λX
and λY respectively. A mapping f : X −→ Y is said to be continuous at a point
x ∈ X if
f (F) = [{f (F ) : F ∈ F}] −→ f (x) whenever F −→ x ∈ X.
The mapping f is continuous if it is continuous at every point of X. Furthermore,
f is called a homeomorphism if it is a bijection such that both f and f −1 are
continuous. It is an embedding if it is a homeomorphism onto its image.
Clearly the topological convergence (1.90) satisﬁes the conditions of Deﬁnition
1.36. Examples of non-topological convergence structures include the following.
Example 1.38 (Almost everywhere convergence structure). Let X be the set of
real-valued measurable functions on a measure space (Ω, A, µ). Let a convergence
structure λae be defined on X as follows: a filter F converges to f in (X, λae ) if
F converges to f almost everywhere in Ω. Then λae is a convergence structure.
In particular, a sequence (un ) in X converges almost everywhere to u ∈ X if and
only if ⟨(un )⟩ converges to u with respect to λae . As we have shown in Example
1.35, the convergence structure λae is not topological.
Example 1.39 (Order convergence structure). Let X be an Archimedean
vector lattice [24, 84, 117]. A filter F on X converges to u in X with respect to
the order convergence structure λ0 if and only if

∃ (αn ), (βn ) ⊂ X :
(i) αn ≤ αn+1 ≤ βn+1 ≤ βn , n ∈ N;
(ii) sup{αn : n ∈ N} = u = inf{βn : n ∈ N};
(iii) [{[αn , βn ] : n ∈ N}] ⊆ F .
A sequence (un ) in X converges to u ∈ X with respect to λ0 if and only if (un )
order converges to u. That is,

∃ (αn ), (βn ) ⊂ X :
(i) αn ≤ αn+1 ≤ un ≤ βn+1 ≤ βn , n ∈ N;
(ii) sup{αn : n ∈ N} = u = inf{βn : n ∈ N}.
The order convergence structure is not topological. To see this, consider the
Archimedean vector lattice C(R), and the sequence (un ) ⊂ C(R) given by

un (x) = 1 − n|x − qn | if |x − qn | < 1/n, and un (x) = 0 if |x − qn | ≥ 1/n,  (1.92)

where {qn | n ∈ N} = [0, 1] ∩ Q. The complement of any subset of Q ∩ [0, 1] is
dense in [0, 1]. The sequence (un ) does not order converge to 0. Indeed, for any
N0 ∈ N we have

βN0 (x) = sup{un (x) : n ≥ N0 } = 1, x ∈ [0, 1).
This means that a sequence (βn ) ⊆ C(R) such that un ≤ βn for all n ∈ N
cannot decrease to 0. Thus if there is a topology τ on C(R) that induces order
convergence, then there is some τ -neighborhood V of 0 and a subsequence (unk )
of (un ) which is always outside of V. Let (qnk ) denote the sequence of rational
numbers associated with the subsequence (unk ) according to (1.92). Since the
sequence (qnk ) is bounded, there exists a subsequence (qnki ) of (qnk ) that converges
to some q ∈ [0, 1]. Let (unki ) be the subsequence associated with the sequence of
rational numbers (qnki ). Then

∀ ε > 0 : ∃ Nε ∈ N : unki (x) = 0 whenever |x − q| > ε and nki > Nε .

For each j ∈ N set εj = 1/j and let the sequence (µnki ) ⊆ C(R) be defined as

µnki (x) = 0 if |x − q| ≥ 2εj ;  µnki (x) = 1 if |x − q| ≤ εj ;
µnki (x) = −|x − q|/εj + 2 if εj < |x − q| < 2εj ,  (1.93)

whenever Nεj < nki ≤ Nεj+1 . The sequence (µnki ) decreases to 0, and 0 ≤ unki ≤
µnki . This means that the sequence (unki ) order converges to 0. Therefore it must
eventually be in V, a contradiction. Thus the topology τ cannot exist.
Example 1.40 (Continuous convergence structure). Let X and Y be convergence
spaces, C(X, Y ) the space of all continuous functions from X to Y and

ωX,Y : C(X, Y ) × X −→ Y

the evaluation mapping. That is, ωX,Y (f, x) = f (x) for all f ∈ C(X, Y ) and
all x ∈ X. A filter H converges to f ∈ C(X, Y ) with respect to the continuous
convergence structure λc if and only if

ωX,Y (H × F) −→ f (x) for all x ∈ X and all F −→ x in X.

The universal property of the continuous convergence structure is stated as
follows: Let X, Y, Z be convergence spaces. Then a mapping h : Z −→ C(X, Y )
is continuous if and only if the associated mapping

h̃ : Z × X −→ Y

defined by h̃(z, x) = h(z)(x) is continuous.
For more examples and a detailed exposition on convergence spaces see [20, 38,
40, 48, 57, 73].
One method for constructing new convergence spaces from given ones is to
make use of initial and ﬁnal convergence structures. Subspaces, product spaces,
projective limits, quotient spaces and inductive limits are examples of initial or
ﬁnal convergence structure.
Let X be a set, (Xi )i∈I a collection of convergence spaces and, for each i ∈ I,
fi : X −→ Xi a mapping. A filter F on X converges to x in the initial convergence
structure λX with respect to the family of mappings (fi )i∈I if and only if

fi (F) −→ fi (x) in Xi for all i ∈ I.
To see that λX is a convergence structure on X, note the following:
(i) For each i ∈ I we have

fi ([x]) = [{fi (F ) : F ∈ [x]}] = [{fi (x)}] = [fi (x)] −→ fi (x) in Xi ,

which shows that [x] ∈ λX (x).
(ii) Let F, G ∈ λX (x). Then for each i ∈ I, we have

fi (F ∩ G) = [{fi (F ∪ G) : F ∈ F, G ∈ G}]
           = [{fi (F ) ∪ fi (G) : F ∈ F, G ∈ G}]
           = fi (F) ∩ fi (G) −→ fi (x),

thus F ∩ G ∈ λX (x).
(iii) Let F ∈ λX (x) and F ⊆ G. Then

fi (G) = [{fi (G) : G ∈ G}] ⊇ [{fi (F ) : F ∈ F}] = fi (F) −→ fi (x),

thus, for each i ∈ I, fi (G) −→ fi (x), so that G ∈ λX (x).
The initial convergence structure λX with respect to the family of mappings
(fi )i∈I is the coarsest convergence structure on X making each of the mappings
fi : X −→ Xi continuous. That is, for any other convergence structure λ on X
such that each fi is continuous we have

λ(x) ⊆ λX (x), x ∈ X.
Example 1.41. Let (Xi )i∈I be a family of convergence spaces, and let X be the
Cartesian product of the family (Xi ). That is,

X = ∏_{i∈I} Xi .

The product convergence structure on X is the initial convergence structure with
respect to the projection mappings

πi : X −→ Xi , i ∈ I,

defined as πi ((xj )j∈I ) = xi ∈ Xi . A filter F on X converges to x = (xi ) in X if
and only if, for each i ∈ I,

πi (F) −→ πi (x) in Xi .

That is,

∀ i ∈ I : ∃ Fi ∈ λXi (xi ) : ∏_{i∈I} Fi ⊆ F .

Here

∏_{i∈I} Fi = [{ ∏_{i∈I} Fi : Fi ∈ Fi for each i ∈ I, and Fi = Xi for all but finitely many i ∈ I }]

denotes the Tychonoff product of the family of filters (Fi )i∈I . □
Example 1.42. Let X be a convergence space and M a subset of X. The subspace
convergence structure λM on M is the initial convergence structure with respect
to the inclusion mapping

iM : M −→ X,

given by iM (x) = x for x ∈ M. A filter F on M converges to x in M if and only
if the filter

[F]X = [{G ⊆ X : F ⊆ G for some F ∈ F}]

converges to x in X.
Let X be a set, (Xi )i∈I a collection of convergence spaces and, for each i ∈ I,
fi : Xi −→ X a mapping. A filter F on X converges to a point x in the final
convergence structure with respect to the family of mappings (fi )i∈I if and only
if F = [x] or

∃ indices i1 , · · · , ik ∈ I :
∃ points xn ∈ Xin , n = 1, · · · , k :
∃ filters Fn ∈ λXin (xn ), n = 1, · · · , k :
(1) fin (xn ) = x, n = 1, · · · , k;
(2) fi1 (F1 ) ∩ · · · ∩ fik (Fk ) ⊆ F .  (1.94)
Example 1.43. Let X be a convergence space, Y a set and q : X −→ Y
a surjective mapping. The quotient convergence structure λq on Y is the final
convergence structure with respect to the mapping q. A filter F on Y converges
to y ∈ Y if and only if

∃ points x1 , · · · , xk ∈ X :
∃ filters F1 , · · · , Fk on X :
(1) Fi ∈ λX (xi ), i = 1, · · · , k;
(2) q(xi ) = y, i = 1, · · · , k;
(3) q(F1 ) ∩ · · · ∩ q(Fk ) ⊆ F .  (1.95)

If X and Y are convergence spaces, and q : X −→ Y is a surjection so that Y
carries the quotient convergence structure with respect to q, then q is called a
convergence quotient mapping. □
The final convergence structure is the finest convergence structure making all
the mappings (fi )i∈I continuous. That is, for any other convergence structure λ
on X such that each mapping fi is continuous we have

λX (x) ⊆ λ(x), x ∈ X.
Let X be a convergence space. For any x ∈ X a set V ⊆ X is a neighborhood
of x if V belongs to every filter that converges to x. That is,

V ∈ VλX (x) ⇐⇒ ( ∀ F ∈ λX (x) : V ∈ F ),

where VλX (x) denotes the neighborhood filter at x. A set V ⊆ X is open if and
only if it is a neighborhood of each of its elements.
The concept of adherence in the context of convergence spaces is the
generalization of the closure of a subset A of a topological space X. In a topological
space X, the closure of a set A ⊆ X consists of A, together with all cluster points
of A. That is,

clτ (A) = { x ∈ X : V ∩ A ̸= ∅ for all V ∈ Vτ (x) }.

Therefore, for each x ∈ clτ (A), the filter

F = [{V ∩ A : V ∈ Vτ (x)}]

converges to x and A ∈ F. Conversely, if there is a filter F ∈ λτ (x) such that
A ∈ F, then it follows from (1.90) that A intersects every neighborhood of x,
so that x ∈ clτ (A). This means that the closure of a set A ⊆ X is the set of
all points x ∈ X such that A belongs to some filter F that converges to x with
respect to τ.
In a convergence space the adherence of a set A ⊆ X is the set

aλX (A) = { x ∈ X : ∃ F ∈ λX (x) such that A ∈ F }.

That is, x ∈ aλX (A) if there is a filter that converges to x and contains A. Where
there is no confusion we shall simply denote the adherence of A by a(A). The set
A ⊆ X is closed if a(A) = A.
Many of the familiar properties of the closure operator of a topological space
also hold for the adherence operator in a convergence space. Some of these
properties are stated in the following.
Proposition 1.44.  Let X be a convergence space. Then the following hold:
(i) a(A) ⊆ a(B) if A ⊆ B for all A, B ⊂ X
(ii) a(∅) = ∅
(iii) A ⊆ a(A) for all A ⊆ X
(iv) a(A ∪ B) = a(A) ∪ a(B) for all A, B ⊆ X.
(v) f (a(A)) ⊆ a(f (A)) for all A ⊆ X, and f : X −→ Y continuous.
In a non-topological convergence space, the adherence operator is, in general,
not idempotent. That is, for some A ⊂ X, a(A) ̸= a(a(A)). If a convergence
space X is such that

∀ x ∈ X : VλX (x) ∈ λX (x),

then the convergence space is called pre-topological and the convergence structure
λX is called a pre-topology. Every topological space is pre-topological but the
converse is not true. Indeed, one of the characterizations of topological
convergence spaces is the following.
Proposition 1.45.  A convergence space X is topological if and only if X is
pre-topological and the adherence operator is idempotent.
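Proposition 1.45 can be illustrated on a three-point set. The pretopology below is an example of my own construction, not from the text: it assigns to each point a smallest neighborhood, and its adherence operator fails to be idempotent, so by Proposition 1.45 the pretopology is not topological.

```python
# A pretopology on X = {1, 2, 3} given by smallest neighborhoods; the
# adherence is a(A) = {x : every neighborhood of x meets A}, which for a
# pretopology reduces to checking the smallest neighborhood of each point.
min_nbhd = {1: {1, 2}, 2: {2, 3}, 3: {3}}

def adh(A):
    """Pretopological adherence of A with respect to min_nbhd."""
    return {x for x in min_nbhd if min_nbhd[x] & set(A)}

A = {3}
once = adh(A)        # points whose every neighborhood meets {3}
twice = adh(once)

print(once, twice)
# once != twice: a(A) != a(a(A)), so the adherence operator is not
# idempotent and this pretopology is not topological.
```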
The notions of Hausdorff, T1 and regular spaces for convergence spaces coincide
with the usual ones in the case of a topological space. A convergence space X is
called a Hausdorff space if every convergent filter converges to a unique limit. It
is called a T1 space if every finite subset of X is closed, and it is called a regular
space if

F ∈ λX (x) =⇒ a(F) = [{a(F ) : F ∈ F}] ∈ λX (x).
Note that a Hausdorﬀ space is a T1 space. To see this, let X be a Hausdorﬀ
space and let A ⊆ X be a ﬁnite set. The set A is a ﬁnite union of singleton sets.
Therefore it suﬃces to show that the singleton set {y}, for y ∈ X, is closed. If
x ∈ a({y}) then there exists a ﬁlter F −→ x and {y} ∈ F. This implies that
F ⊆ [y]. Therefore [y] −→ x. Since X is Hausdorﬀ it follows that x = y. Thus
a({y}) = {y}.
Conversely, a regular T1 space is Hausdorﬀ. This is because if X is regular and
T1 , and a ﬁlter F converges to x and y then a(F) also converges to x, and to y.
Then x, y ∈ a(F ) for all F ∈ F. So that a(F) = [{a(F ) : F ∈ F}] ⊆ [x]. Hence
[x] converges to y. This implies that y ∈ a({x}). But X is T1 , hence x = y. Thus
X is Hausdorﬀ.
Subspaces, product and projective limits of T1 spaces, Hausdorﬀ spaces and
regular spaces are also T1 , Hausdorﬀ and regular, respectively, as shown in [20,
Proposition 1.4.2].
1.2.1 Uniform Convergence Structure
In this section we discuss some of the basic aspects of the theory of uniform
convergence spaces, which is a generalization of the theory of uniform spaces.
Recall that a uniformity on a set X is a filter U on X × X such that the following
conditions are satisfied:
(i) ∆ ⊆ U for each U ∈ U ;
(ii) if U ∈ U, then U −1 ∈ U ;
(iii) for each U ∈ U there is some V ∈ U such that V ◦ V ⊆ U.
Here ∆ = {(x, x) : x ∈ X} denotes the diagonal in X × X. If U and V are subsets
of X × X, then

U −1 = {(x, y) ∈ X × X : (y, x) ∈ U }

and the composition of U and V is defined as

U ◦ V = { (x, y) ∈ X × X : ∃ z ∈ X such that (x, z) ∈ V and (z, y) ∈ U }.
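The relational operations U⁻¹ and U ◦ V, and the uniformity axioms (i) - (iii), can be checked mechanically on a finite set. The sketch below is an illustration I have added (the particular entourage is an arbitrary choice): it verifies the axioms for an equivalence-style relation, for which one may even take V = U in axiom (iii).

```python
X = {0, 1, 2}
DELTA = {(x, x) for x in X}  # the diagonal in X x X

def inverse(U):
    """U^{-1} = {(x, y) : (y, x) in U}."""
    return {(y, x) for (x, y) in U}

def compose(U, V):
    """U o V = {(x, y) : exists z with (x, z) in V and (z, y) in U}."""
    return {(x, y) for (x, z) in V for (z2, y) in U if z == z2}

# An equivalence-style entourage: the diagonal plus the symmetric pair (0, 1).
U = DELTA | {(0, 1), (1, 0)}

assert DELTA <= U          # (i)  contains the diagonal
assert inverse(U) == U     # (ii) symmetric
assert compose(U, U) == U  # (iii) V = U already satisfies V o V subseteq U
```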
A uniformity UX on X induces a topology on X in the following way: a set
A ⊆ X is open in X if

∀ x ∈ A : ∃ U ∈ UX : U [x] ⊆ A,

where U [x] = {y ∈ X : (x, y) ∈ U }. A filter F on X is a Cauchy filter if and
only if

UX ⊆ F × F.
A uniform space is complete if and only if every Cauchy filter on X converges to
some point x ∈ X. We recall the definition of uniform continuity of a function
defined on a uniform space. Let X and Y be uniform spaces. A mapping f :
X −→ Y is uniformly continuous if and only if

∀ U ∈ UY : (f −1 × f −1 )(U ) ∈ UX .

The mapping f is a uniform embedding if it is injective and its inverse f −1 is
uniformly continuous on the subspace f (X) of Y. Furthermore, f is a uniform
isomorphism if it is a uniform embedding which is surjective. The main result,
due to Weil, in connection with completeness of uniform spaces asserts that
for any Hausdorff uniform space X, one can find a complete Hausdorff uniform
space X ♯ and a uniform embedding

iX : X −→ X ♯

such that iX (X) is dense in X ♯ . Moreover, for any complete Hausdorff uniform
space Y and any uniformly continuous mapping f : X −→ Y there is a uniformly
continuous mapping

f ♯ : X ♯ −→ Y

such that

f ♯ ◦ iX = f,  (1.96)

that is, the corresponding diagram commutes.
Remark 1.46. Note that not every topology τX on a set X is induced by a
uniformity UX . In fact, a given topology τX on X is induced by a uniformity UX
if and only if the topology τX is completely regular.
Hence the class of uniform spaces is rather small in comparison to the class of all
topological spaces.
Deﬁnition 1.47. Let X be a set. A family JX of ﬁlters on X × X is called a
uniform convergence structure if the following holds:
(i) [x] × [x] ∈ JX for every x ∈ X
(ii) If U ∈ JX and U ⊆ V, then V ∈ JX
(iii) If U, V ∈ JX , then U ∩ V ∈ JX .
(iv) If U ∈ JX , then U −1 ∈ JX .
(v) If U, V ∈ JX , then U ◦ V ∈ JX whenever U ◦ V exists.
The pair (X, JX ) is called a uniform convergence space.
If U and V are ﬁlters on X × X then U −1 is deﬁned as
U −1 = [{U −1 : U ∈ U}].
If U ◦ V ̸= ∅ for all U ∈ U and V ∈ V then the ﬁlter U ◦ V exists and it is deﬁned
as
U ◦ V = [{U ◦ V : U ∈ U , V ∈ V}].
Uniform convergence spaces generalize the concept of a uniform space in the
sense that every uniformity UX on X gives rise to a unique uniform convergence
structure JUX defined through

U ∈ JUX ⇐⇒ UX ⊆ U .
Every uniform convergence structure JX on X induces a convergence structure
λJX on X defined by

∀ x ∈ X : ∀ F a filter on X : F ∈ λJX (x) ⇐⇒ F × [x] ∈ JX .
The convergence structure λJX is called the induced convergence structure. The
induced convergence structure need be neither topological nor completely regular,
but rather satisfies more general separation properties. Every reciprocal
convergence structure λX is induced by a uniform convergence structure. Recall
that a convergence structure is called reciprocal if

∀ x, y ∈ X : λX (x) = λX (y) or λX (x) ∩ λX (y) = ∅.  (1.97)

Note that if a convergence space is Hausdorff then it is reciprocal, but the
converse is not true. Given a reciprocal convergence structure λX on X, the
associated uniform convergence structure JλX on X × X, defined by
U ∈ JλX ⇐⇒
∃ x1 , · · · , xk ∈ X :
∃ filters F1 , · · · , Fk on X :
(1) Fi ∈ λX (xi ) for i = 1, · · · , k;
(2) (F1 × F1 ) ∩ · · · ∩ (Fk × Fk ) ⊆ U ,  (1.98)

is a uniform convergence structure that induces the convergence structure λX . In
particular, every Hausdorff convergence structure is induced by the associated
uniform convergence structure (1.98). A Hausdorff uniform convergence space is
characterized by the following [20, Proposition 2.1.10].
Proposition 1.48. A uniform convergence space (X, JX ) is a Hausdorff uniform
convergence space if and only if

∀ U ∈ JX : ∀ x, y ∈ X, x ̸= y : ∃ U ∈ U : (x, y) ̸∈ U.
As mentioned in Section 1.2, new convergence spaces can be constructed from
existing ones using the initial and final convergence structures. This is also true of
uniform convergence spaces. The initial uniform convergence structure is
constructed as follows: Let X be a set and (Xi , Ji )i∈I a family of uniform
convergence spaces. For each i ∈ I let fi : X −→ Xi be a mapping. The initial
uniform convergence structure J on X × X with respect to the mappings fi is
defined as

U ∈ J ⇐⇒ ( ∀ i ∈ I : (fi × fi )(U) ∈ Ji ).  (1.99)

The initial uniform convergence structure J induces the initial convergence
structure λJ , see [20, Proposition 2.2.2]. Subspaces and product uniform
convergence spaces are typical examples of initial uniform convergence structures.
Let X be a set and (Xi, Ji)i∈I a family of uniform convergence spaces. For each i ∈ I let fi : Xi −→ X be a mapping. The final uniform convergence structure J on X with respect to the mappings fi is defined as

U ∈ J ⇐⇒  ∃ U1, …, Un ∈ J0 :
           ∃ x1, …, xk ∈ X :
           U1 ∩ ··· ∩ Un ∩ ([x1] × [x1]) ∩ ··· ∩ ([xk] × [xk]) ⊆ U        (1.100)

where J0 is the family of filters V on X × X defined by

V ∈ J0 ⇐⇒  ∃ i1, …, ik ∈ I :
            ∃ Vj ∈ Jij , j = 1, …, k :
            (fi1 × fi1)(V1) ◦ ··· ◦ (fik × fik)(Vk) ⊆ V.

The quotient uniform convergence structure is an example of a final uniform convergence structure. We remark that, in general, the final uniform convergence structure does not induce the final convergence structure; refer to [20, 62] for more details.
The concepts of uniform continuity, Cauchy ﬁlters, completeness and completion extend to uniform convergence spaces in a natural way. In this regard let
X and Y be uniform convergence spaces. A mapping f : X −→ Y is uniformly
continuous if
∀ U ∈ JX
(f × f )(U) ∈ JY .
A uniformly continuous mapping f is called a uniformly continuous embedding
if it is injective and f −1 is uniformly continuous on the subspace f (X) of Y. A
uniformly continuous embedding is a uniformly continuous isomorphism if it is
also surjective.
A ﬁlter F on X is called a Cauchy ﬁlter if
F × F ∈ JX .
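As an entirely illustrative finite toy model of these notions (our construction, not from the text), take a finite set X on which every filter is principal, and a uniformity generated by an equivalence relation E. The principal filter F_A of a nonempty set A ⊆ X is then Cauchy precisely when A × A ⊆ E, and converges to x precisely when A × {x} ⊆ E:

```python
# Finite toy model (our own construction): X finite, every filter principal,
# uniformity generated by an equivalence relation E given by a partition.
X = {0, 1, 2, 3}
partition = [{0, 1}, {2, 3}]
E = {(x, y) for block in partition for x in block for y in block}

def is_cauchy(A):
    """F_A is Cauchy iff F_A x F_A contains the smallest entourage E,
    i.e. iff A x A is a subset of E."""
    return all((x, y) in E for x in A for y in A)

def limits(A):
    """Points x with F_A -> x, i.e. F_A x [x] in J, i.e. A x {x} in E."""
    return {x for x in X if all((a, x) in E for a in A)}

assert is_cauchy({0, 1})             # A lies inside one equivalence class
assert not is_cauchy({1, 2})         # A meets two classes
assert limits({0, 1}) == {0, 1}      # converges, but to two points
assert limits({1, 2}) == set()       # the non-Cauchy filter has no limit
print("finite-model checks passed")
```

In this model every Cauchy filter converges, so the model is complete, while the failure of uniqueness of limits shows it is not Hausdorff.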
Some important properties of Cauchy filters are stated in the following, see [20, Propositions 2.3.2–2.3.3].
Proposition 1.49. Let (X, JX ) be a uniform convergence space. Then the following hold:
(i) Each convergent ﬁlter is a Cauchy ﬁlter.
(ii) If F is a Cauchy ﬁlter and F ⊆ G then G is a Cauchy ﬁlter.
(iii) Let F be a Cauchy ﬁlter and let F ⊆ G. If G −→ x ∈ X, then F −→ x.
(iv) If F and G are Cauchy ﬁlters and F ∨ G exists then F ∩ G is a Cauchy ﬁlter.
(v) If F is a Cauchy ﬁlter, G a ﬁlter on X such that F × G ∈ JX then G is a
Cauchy ﬁlter.
(vi) If (Y, λY ) is a uniform convergence space, f : X −→ Y is a uniformly
continuous mapping, and F is a Cauchy ﬁlter on X then f (F) is a Cauchy
ﬁlter on Y.
A uniform convergence space X is said to be complete if every Cauchy ﬁlter
converges to a point in X.
Proposition 1.50. Let (X, JX ) be a complete uniform convergence space. Then
the following hold:
(i) Each closed subspace of a complete uniform convergence space is complete.
(ii) If (X, JX ) is Hausdorﬀ, then a subspace of (X, JX ) is complete if and only
if it is closed.
(iii) The product of complete uniform convergence spaces is complete.
Example 1.51. The associated uniform convergence space of a reciprocal convergence space is complete.
The Weil concept of completion of uniform spaces has been extended to the
more general setting of uniform convergence spaces, see . Indeed, if X is
a Hausdorﬀ uniform convergence space, then there exists a complete, Hausdorﬀ
uniform convergence space X ♯ and a uniformly continuous embedding
iX : X −→ X ♯
such that iX (X) is dense in X ♯ . Moreover, the completion X ♯ of X satisﬁes the
universal property: If Y is a complete Hausdorﬀ uniform convergence space and
f : X −→ Y is uniformly continuous, then there exists a uniformly continuous
mapping
f ♯ : X ♯ −→ Y
such that the diagram

              f
     X ────────────▶ Y
     │               ▲
  iX │               │ f♯
     ▼               │
     X♯ ─────────────┘          (1.101)
commutes. X ♯ is called the Wyler completion of X. This completion is unique
up to uniformly continuous isomorphism. In general, the Wyler completion of
uniform convergence spaces does not preserve subspaces. Indeed, the completion
of a subspace of a uniform convergence space X will, in general, not be a subspace
of the completion X♯, see . The following theorems give a characterization of the completion of a subspace X of a uniform convergence space Y.
Theorem 1.52. Let X be a subspace of the uniform convergence space Y. Let
i : X −→ Y be the inclusion mapping. Then there exists an injective, uniformly
continuous mapping i♯ : X ♯ −→ Y ♯ , which extends the mapping i. In particular,
i♯(X♯) = aY♯(iY(X)), where aY♯ denotes the adherence operator on Y♯ and iY denotes the uniformly continuous embedding associated with the completion Y♯. Furthermore, the uniform convergence structure on aY♯(iY(X)) is the smallest complete, Hausdorff uniform convergence structure with respect to inclusion, so that iY(X) is contained in aY♯(iY(X)) as a dense subspace.
Theorem 1.53. Let X and Y be uniform convergence spaces, and φ : X −→
Y a uniformly continuous embedding. Then there exists an injective uniformly
continuous mapping φ♯ : X ♯ −→ Y ♯ , where X ♯ and Y ♯ are the completions of X
and Y respectively, which extends φ.
1.2.2
Convergence vector spaces
Let V be a vector space over the scalar ﬁeld K of real or complex numbers. A
convergence structure λV on V is called a vector space convergence structure if
the vector space operations
+ : (V, λV ) × (V, λV ) −→ (V, λV )
and
· : K × (V, λV ) −→ (V, λV )
are continuous. In this case V is called a convergence vector space.
Examples 1.54.
1. Every topological vector space is a convergence vector space. Recall [99, 106]
that a vector space V over the scalar ﬁeld K of real or complex numbers is
called a topological vector space if V is endowed with a topology τV such that
+ : (V, τV ) × (V, τV ) −→ (V, τV )
and
· : K × (V, τV ) −→ (V, τV )
are (jointly) continuous.
2. Let X be a convergence space and V a convergence vector space. Then
Cc (X, V ) is a convergence vector space. In particular Cc (X) = Cc (X, R) is a
convergence vector space.
3. Let X and Y be convergence vector spaces. Then Lc (X, Y ), which is the set
L(X, Y ) of all continuous linear mappings from X to Y endowed with
the subspace convergence structure from Cc (X, Y ), is a convergence vector
space. In particular, Lc (X) = Lc (X, R) is a convergence vector space. The
space Lc (X) is the continuous dual space of the convergence vector space X.
It is the canonical dual in the setting of convergence vector spaces.
The following Lemma gives some properties of convergence vector spaces which
are well-known in the topological case, see .
Lemma 1.55. Let V be a convergence vector space. Then the following statements hold.
(i) For each a ∈ V the translation mapping
Ta : V ∋ x 7→ a + x ∈ V
is a homeomorphism.
(ii) For all x ∈ V
F ∈ λV (x) ⇐⇒ F − x ∈ λV (0)
(iii) If W is another convergence vector space then a linear mapping f : V −→ W
is continuous if and only if it is continuous at 0.
A standard procedure for constructing a vector space convergence structure
or for showing that a given convergence structure is a vector space convergence
structure, is given by the following proposition .
Proposition 1.56. Let V be a vector space over K and let V(0) be the neighbourhood filter at 0 in K. Let S be a family of filters on V satisfying the following conditions:
(i) If F ∈ S and G ∈ S then F ∩ G ∈ S.
(ii) If F ∈ S then G ∈ S for all ﬁlters G ⊇ F .
(iii) If F ∈ S and G ∈ S then F + G ∈ S.
(iv) If F ∈ S then V(0)F ∈ S.
(v) If F ∈ S then αF ∈ S for all α ∈ K
(vi) For all x ∈ V, V(0)x ∈ S.
Then the mapping λV from V into the power set of the set of filters on V, defined by

F ∈ λV (x) ⇐⇒ F − x ∈ S,

is a vector space convergence structure on V.
As mentioned, a convergence space X is topological if and only if it is pretopological and the adherence operator is idempotent. However, for a convergence
vector space to be topological it is suﬃcient for V to be pre-topological. That is,
a convergence vector space is topological if and only if it is pre-topological. Also,
a convergence vector space is Hausdorﬀ if and only if the set {0} is closed, see
.
A convergence vector space is equipped with a natural uniform convergence
structure, called the induced uniform convergence structure, which is denoted as
JV . In this regard, let V be a convergence vector space, and let U be a ﬁlter on
V × V. Then

U ∈ JV ⇐⇒  ∃ F a filter on V :
            (1) F −→ 0
            (2) ∆(F) ⊆ U        (1.102)

Here ∆(F) = [{∆(F) : F ∈ F}] and, for any set F ⊆ V,

∆(F) = {(x, y) ∈ V × V : x − y ∈ F}.        (1.103)
Lemma 1.57. Let V be a convergence vector space. Then for all A, B ⊆ V and
for all ﬁlters F, G on V we have
(i) ∆(A ∩ B) = ∆(A) ∩ ∆(B).
(ii) ∆(A ∪ B) = ∆(A) ∪ ∆(B).
(iii) ∆(F ∩ G) = ∆(F ) ∩ ∆(G).
(iv) If U is a ﬁlter on V × V, then the ﬁlter [{A ⊆ V : ∆(A) ∈ U}] is an ultraﬁlter
if U is an ultraﬁlter.
(v) For any x ∈ V,
F × [x] ⊇ ∆(G) =⇒ F ⊇ G + x
(vi) ∆(F + G) ⊆ ∆(F) ◦ ∆(G). Here
F + G = [{F + G : F ∈ F, G ∈ G}]
= [{{x + y : x ∈ F, y ∈ G} : F ∈ F , G ∈ G}]
(vii) a(∆(F)) ⊇ ∆(a(F)).
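The set-level identities of Lemma 1.57 can be checked mechanically in a toy model (our own, not from the text): take V = Z and restrict ∆(F) = {(x, y) : x − y ∈ F} to a finite window so that all sets involved are finite.

```python
# Toy check of the map ∆(F) = {(x, y) : x - y in F} of (1.103), with V = Z
# restricted to a finite window W (our assumption, so sets stay finite).
W = range(-5, 6)

def delta(F):
    """∆(F) intersected with the window W x W."""
    return {(x, y) for x in W for y in W if x - y in F}

A, B = {0, 1, 2}, {1, 2, 3}

# Lemma 1.57 (i), (ii): ∆ turns intersections/unions of sets into
# intersections/unions of the corresponding entourage-like sets.
assert delta(A & B) == delta(A) & delta(B)
assert delta(A | B) == delta(A) | delta(B)

# A (vi)-style bound: ∆(A + B) ⊆ ∆(A) ∘ ∆(B), where A + B = {a + b} and
# ∘ is relational composition {(x, z) : exists y with (x,y), (y,z) related}.
A_plus_B = {a + b for a in A for b in B}
composition = {(x, z) for (x, y) in delta(A) for (y2, z) in delta(B) if y == y2}
assert delta(A_plus_B) <= composition
print("∆-operator checks passed")
```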
The convergence structure induced by the uniform convergence structure JV
agrees with the vector space convergence structure λV , that is, λJV = λV . If
V and W are convergence vector spaces and a linear mapping f : V −→ W is
continuous then f is uniformly continuous, see for instance [20, Proposition 2.5.3].
Note that the induced uniform convergence structure of a reciprocal convergence
vector space is not in general the associated uniform convergence structure and
hence need not be complete.
In a convergence vector space Cauchy ﬁlters are characterized as follows: A
filter F on V is a Cauchy filter if and only if F − F converges to 0. The Wyler completion of uniform convergence spaces does not preserve algebraic structure. If V is a convergence vector space carrying its induced uniform convergence structure, then its completion V♯ is in a natural way a vector space; however, the uniform convergence structure on V♯ does not in general induce a vector space convergence structure, and so one has to consider its “convergence vector space modification”, which has all the desired properties of a convergence vector space completion, see [20, 61], see also [63, 100]. If V is a Hausdorff convergence vector space, it is not in general possible to embed it into a complete convergence vector space, since V may contain unbounded Cauchy filters. However, it is possible to modify its completion V♯ so that it contains V as a dense subspace, satisfies the universal extension property for linear mappings, and every bounded Cauchy filter converges; see for instance . If every Cauchy filter F in V is bounded, that is, there is some F ∈ F so that V(0)F −→ 0, then there is a complete, Hausdorff convergence vector space V♯ and a linear embedding iV : V −→ V♯ such that iV(V) is dense in V♯. Furthermore, for every complete Hausdorff convergence vector space W and
V ♯ . Furthermore, for every complete Hausdorﬀ convergence vector space W and
every continuous linear mapping f : V −→ W there exists a continuous linear
mapping f ♯ : V ♯ −→ W so that the diagram
              f
     V ────────────▶ W
     │               ▲
  iV │               │ f♯
     ▼               │
     V♯ ─────────────┘          (1.104)

commutes.
Below are some important examples of complete convergence vector spaces .
Examples 1.58.
(i) If X is any convergence space, then Cc (X) is a complete convergence vector
space.
(ii) If V is a convergence vector space, then Lc (V ) is a complete convergence
vector space.
1.3
Hausdorﬀ Continuous Functions
In this section we discuss Hausdorﬀ continuous (H-continuous) extended real interval valued functions deﬁned on a metric space X, see [5, 7, 8, 9, 108, 123].
Interval valued functions are traditionally associated with validated computing,
where they naturally appear as error bounds for numerical and theoretical computations, see for instance [1, 72]. Sendov , see also , introduced the concept
of H-continuous functions in connection with Hausdorff approximations of real functions of a real variable.
We now recall the basic notation and concepts involved in the study of H-continuous functions. In this regard, let IR denote the set of all closed real intervals [a̲, ā] = {x ∈ R : a̲ ≤ x ≤ ā}. That is,

IR = {a = [a̲, ā] : a̲, ā ∈ R},

and let IR∗ denote the set of all extended, closed real intervals. That is,

IR∗ = {a = [a̲, ā] : a̲, ā ∈ R∗},

where R∗ = R ∪ {±∞} denotes the extended real line with the usual ordering.
Clearly, IR ⊂ IR∗. Given an interval a = [a̲, ā] ∈ IR∗, the number w(a) = ā − a̲ is called the width of a, and |a| = max{|a̲|, |ā|} is called the modulus of a. An interval a is a proper interval if w(a) > 0, and a point interval if w(a) = 0. If we identify a ∈ R∗ with the point interval [a, a] ∈ IR∗, then R∗ ⊂ IR∗. On IR∗ we define the partial order through

a ≤ b ⇐⇒ a̲ ≤ b̲ and ā ≤ b̄.
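The interval notions just introduced are easy to mirror in a small sketch (an illustration of the definitions above; the class and method names are our own):

```python
import math
from dataclasses import dataclass

@dataclass(frozen=True)
class Interval:
    """A closed extended interval [lo, hi] of IR*, with lo <= hi (inf allowed)."""
    lo: float
    hi: float

    def width(self):            # w(a) = upper endpoint minus lower endpoint
        return self.hi - self.lo

    def modulus(self):          # |a| = max(|lo|, |hi|)
        return max(abs(self.lo), abs(self.hi))

    def is_point(self):         # point interval: w(a) = 0
        return self.lo == self.hi

    def __le__(self, other):    # a <= b iff lo_a <= lo_b and hi_a <= hi_b
        return self.lo <= other.lo and self.hi <= other.hi

a, b, c = Interval(-1.0, 1.0), Interval(0.0, 2.0), Interval(0.0, 0.5)
assert a.width() == 2.0 and a.modulus() == 1.0
assert a <= b and not (b <= a)
assert not (a <= c) and not (c <= a)       # the order is only partial
assert Interval(3.0, 3.0).is_point()
assert Interval(0.0, math.inf).modulus() == math.inf   # extended interval
print("interval checks passed")
```

The incomparable pair a, c shows concretely why ≤ is only a partial order on IR∗.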
Let X be a metric space. Denote by 𝔸(X) the set of all interval-valued functions defined on X. That is,

𝔸(X) = {u : X −→ IR∗}.

Since R∗ ⊂ IR∗ we have that

A(X) ⊂ 𝔸(X),

where A(X) = {u : X −→ R∗}.
On 𝔸(X) we define the pointwise partial order through

u ≤ v ⇐⇒  ∀ x ∈ X :
           u(x) ≤ v(x)        (1.105)
Note that if u ∈ 𝔸(X) then for each x ∈ X the value of u at x is the interval [u̲(x), ū(x)]. Hence the function u can be written as u = [u̲, ū], where u̲, ū ∈ A(X) and u̲ ≤ ū. The concept of an H-continuous function is formulated in terms of extended Baire operators, which are defined as follows: Let D ⊆ X be dense. For u ∈ 𝔸(X) and η > 0 we denote by I(η, D, u) the function
I(η, D, u)(x) = inf{u(y)|y ∈ Bη (x) ∩ D}, x ∈ X,
and S(η, D, u) the function
S(η, D, u)(x) = sup{u(y)|y ∈ Bη (x) ∩ D}, x ∈ X.
The function I(D, ·) : 𝔸(X) −→ A(X) is defined by

I(D, u)(x) = sup{I(η, D, u)(x) : η > 0},   x ∈ X,        (1.106)

and the function S(D, ·) : 𝔸(X) −→ A(X) is defined by

S(D, u)(x) = inf{S(η, D, u)(x) : η > 0},   x ∈ X.        (1.107)
In fact, since

I(η, D, u)(x) ≥ I(δ, D, u)(x),   x ∈ X,

and

S(η, D, u)(x) ≤ S(δ, D, u)(x),   x ∈ X,

whenever η < δ, it follows that

I(D, u)(x) = lim_{η→0} I(η, D, u)(x),   x ∈ X,

and

S(D, u)(x) = lim_{η→0} S(η, D, u)(x),   x ∈ X.

The operators I(D, ·) and S(D, ·) are called the lower and upper extended Baire operators, respectively.
The operator F(D, ·) : 𝔸(X) −→ 𝔸(X) defined by

F(D, u)(x) = [I(D, u)(x), S(D, u)(x)],   x ∈ X,        (1.108)

is called the graph completion operator. In the case when D = X the set D is omitted from the argument, and we write

I(u)(x) = I(X, u)(x),  S(u)(x) = S(X, u)(x),  F(u)(x) = F(X, u)(x).
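A numerical sketch of the operators I(η, D, u) and S(η, D, u) (our own discretisation; all names are illustrative): the real line is replaced by a fine grid with D = X, and η is taken well above the grid spacing so that the grid infimum/supremum approximates the continuum limit η → 0. Applied to the sign function, the sketch recovers the graph-completion value [−1, 1] at the jump:

```python
# Grid-based sketch (our own assumption: a fine grid stands in for R, and
# grid spacing << eta << scale of variation of u, so the grid min/max over
# B_eta(x) approximates the continuum Baire operators).
xs = [i / 1000 for i in range(-1000, 1001)]   # grid on [-1, 1], spacing 0.001

def sign(x):
    return 1.0 if x > 0 else (-1.0 if x < 0 else 0.0)

def I_eta(u, x, eta):
    """Approximation of I(eta, D, u)(x): inf over the ball B_eta(x)."""
    return min(u(y) for y in xs if abs(y - x) < eta)

def S_eta(u, x, eta):
    """Approximation of S(eta, D, u)(x): sup over the ball B_eta(x)."""
    return max(u(y) for y in xs if abs(y - x) < eta)

eta = 0.005
assert I_eta(sign, 0.0, eta) == -1.0     # lower Baire value at the jump
assert S_eta(sign, 0.0, eta) == 1.0      # upper Baire value at the jump
assert I_eta(sign, 0.5, eta) == 1.0 == S_eta(sign, 0.5, eta)  # continuity point
# graph completion at the jump: F(sign)(0) = [I(sign)(0), S(sign)(0)] = [-1, 1]
print("Baire operator sketch passed")
```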
The operators (1.106), (1.107) and (1.108) satisfy the following properties.
(C1 ) I(u) ≤ u ≤ S(u), u ∈ A(X)
(C2) I, S, F and their compositions are idempotent. That is, for all u ∈ A(X),

(i) I(I(u)) = I(u)
(ii) S(S(u)) = S(u)
(iii) F(F(u)) = F(u)
(iv) (I ◦ S)((I ◦ S)(u)) = (I ◦ S)(u)
(C3) I, S, F and their compositions are monotone. That is, for all u, v ∈ A(X),

u ≤ v =⇒  (i) I(u) ≤ I(v)
          (ii) S(u) ≤ S(v)
          (iii) F(u) ≤ F(v)
          (iv) (I ◦ S)(u) ≤ (I ◦ S)(v)
The operator F is monotone with respect to inclusion, that is
u(x) ⊆ v(x), x ∈ X
=⇒ F (u)(x) ⊆ F (v)(x), x ∈ X.
Furthermore, it is easy to see that, for u ∈ A(X), the functions I(u) and S(u)
are lower and upper semi-continuous functions, respectively, on X.
We now define the set of Hausdorff continuous functions.
Definition 1.59. A function u ∈ 𝔸(X) is called H-continuous if for every function v ∈ 𝔸(X) which satisfies the inclusion v(x) ⊆ u(x), x ∈ X, we have F(v)(x) = u(x), x ∈ X.
Denote by H(X) ⊆ A(X) the set of all H-continuous functions on X. Clearly
all continuous real valued functions are H-continuous, that is C(X) ⊆ H(X).
Indeed, if u is continuous then u is both upper and lower semi-continuous and
hence
F (u) = [I(u), S(u)] = [u, u] = u
Furthermore, let v ∈ A(X) be such that v(x) ⊆ u(x). Then v(x) = u(x), x ∈ X
and hence F (v)(x) = F (u)(x) = u(x), x ∈ X which shows that u is H-continuous.
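Definition 1.59 can also be probed numerically (a grid-based sketch under the same kind of discretisation assumptions as before; the names are our own): take the interval-valued step function u with the value [−1, 1] at the jump, and check that every real-valued selection v with v(x) ⊆ u(x) has graph completion F(v) = u.

```python
# Grid-based sketch of Definition 1.59 (our own discretisation): the grid
# stands in for R and eta is small but well above the grid spacing.
xs = [i / 1000 for i in range(-1000, 1001)]
eta = 0.005

def F_approx(v, x):
    """Approximation of F(v)(x) = [I(v)(x), S(v)(x)] on the grid."""
    near = [v(y) for y in xs if abs(y - x) < eta]
    return (min(near), max(near))

def u(x):
    """Interval-valued step function: u(0) = [-1, 1], point values elsewhere."""
    return (1.0, 1.0) if x > 0 else ((-1.0, -1.0) if x < 0 else (-1.0, 1.0))

# Any real-valued selection v(x) in u(x): only the value at 0 is free in [-1, 1].
for v0 in (-1.0, -0.5, 0.0, 1.0):
    v = lambda y, v0=v0: 1.0 if y > 0 else (-1.0 if y < 0 else v0)
    assert all(F_approx(v, x) == u(x) for x in (-0.5, 0.0, 0.5))
print("Definition 1.59 check passed")
```

Whatever value in [−1, 1] the selection takes at the jump, the graph completion restores the full interval, which is exactly the defining property of H-continuity.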
The set H(X) inherits the partial order (1.105). Equipped with this partial order,
the set H(X) is a complete lattice. That is,
∀ A ⊆ H(X)
∃ u0 , v0 ∈ H(X) :
(i) u0 = sup A
(ii) v0 = inf A
(1.109)
The supremum and infimum in (1.109) may be described as follows: If
ϕ : X ∋ x 7→ sup{u(x) : u ∈ A} ∈ R∗
and
ψ : X ∋ x 7→ inf{u(x) : u ∈ A} ∈ R∗
then
u0 = F (I(S(ϕ))), v0 = F (S(I(ψ)))
Below are some examples of H-continuous functions which are not continuous.
Examples 1.60.
(i) Let X = R. The function

u(x) =  1        if x > 0
        [−1, 1]  if x = 0
        −1       if x < 0

is H-continuous.
(ii) Let X = {(x, t) : t ≥ 0} ⊆ R2. For (x, t) ∈ X the function

u(x, t) =  1            if t ∈ [0, 1), x < t − 1
           x/(t − 1)    if t ∈ [0, 1), x ∈ [t − 1, 0]
           0            if t ∈ [0, 1), x > 0
           1            if t ≥ 1, x < (t − 1)/2
           [0, 1]       if t ≥ 1, x = (t − 1)/2
           0            if t ≥ 1, x > (t − 1)/2

is H-continuous. This function arises as a shockwave solution of a nonlinear conservation law.
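A quick sanity check (our own computation, not part of the text) that the function of Example 1.60(ii) behaves like a shockwave solution of Burgers' equation u_t + (u²/2)_x = 0: the PDE holds classically off the shock, and the shock line x = (t − 1)/2 travels at the Rankine-Hugoniot speed.

```python
# Illustrative check of Example 1.60(ii) against Burgers' equation.
def u(x, t):
    """Real-valued part of the example (the interval value sits on the shock)."""
    if t < 1:
        if x < t - 1: return 1.0
        if x <= 0:    return x / (t - 1)
        return 0.0
    return 1.0 if x < (t - 1) / 2 else 0.0

def f(v):                       # Burgers flux f(u) = u^2/2
    return 0.5 * v * v

def residual(x, t, h=1e-6):
    """Finite-difference residual of u_t + u u_x at a smooth point."""
    ut = (u(x, t + h) - u(x, t - h)) / (2 * h)
    ux = (u(x + h, t) - u(x - h, t)) / (2 * h)
    return ut + u(x, t) * ux

assert abs(residual(-0.25, 0.5)) < 1e-6    # inside the compression fan
assert abs(residual(-0.9, 0.5)) < 1e-6     # constant region u = 1
# Rankine-Hugoniot: s = (f(u_l) - f(u_r)) / (u_l - u_r) = (1/2 - 0)/1 = 1/2,
# matching the shock path x = (t - 1)/2 for t >= 1.
assert (f(1.0) - f(0.0)) / (1.0 - 0.0) == 0.5
print("Burgers example checks passed")
```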
The lower and upper Baire operators can be written in terms of u̲ and ū. Indeed, it is clear that

I(u) = I(u̲) and S(u) = S(ū).

Hence

F(u) = [I(u̲), S(ū)].

Therefore we have that

F(u) = u ⇐⇒ { u̲ = I(u̲) and ū = S(ū) } ⇐⇒ { u̲ is lower semi-continuous and ū is upper semi-continuous }.
H-continuous functions are characterized as follows:

Theorem 1.61.  Let u = [u̲, ū] ∈ 𝔸(X). The following conditions are equivalent:

(a) The function u is H-continuous.
(b) F(u̲) = F(ū) = u.
(c) S(u̲) = ū and I(ū) = u̲.
H-continuous functions may be constructed as follows:
Theorem 1.62. Let u ∈ A(X). The functions
F (S(I(u)))
and
F (I(S(u)))
are H-continuous.
The set H(X) of H-continuous functions contains the following three important subsets: the set

Hft(X) = { u ∈ H(X) : ∀ x ∈ X : u(x) ∈ IR }        (1.110)

of all finite H-continuous functions; the set

Hnf(X) = { u ∈ H(X) : ∃ Γ ⊂ X closed nowhere dense : x ∈ X\Γ =⇒ u(x) ∈ IR }        (1.111)

of nearly finite H-continuous functions; and the set

Hb(X) = { u ∈ Hft(X) : ∃ [a̲, ā] ∈ IR : u(x) ⊆ [a̲, ā], x ∈ X }        (1.112)

of bounded H-continuous functions. Since the functions in C(X) assume values which are finite real numbers, we have the following inclusions:

C(X) ⊆ Hft(X) ⊆ H(X)

and

Cb(X) ⊆ Hb(X) ⊆ Hft(X) ⊆ H(X).
Here Cb(X) denotes the space of all bounded continuous functions. It has been shown, see , that the set Hft(X) is Dedekind order complete and thus contains the Dedekind order completion of C(X) if X is an arbitrary topological space. If X is a metric space then the space Hft(X) is the Dedekind order completion of C(X).
1.4
The Order Completion Method
In this section we discuss the Order Completion Method (OCM) for nonlinear
PDEs. The OCM is a type-independent theory for the existence and basic regularity of solutions to nonlinear PDEs, based on the order completion of partially
ordered sets of functions. This theory yields the existence and uniqueness of
generalized solutions to arbitrary continuous nonlinear PDEs.
Let us consider a nonlinear PDE of order at most m of the form
T (x, D)u(x) = h(x), x ∈ Ω.
(1.113)
Here Ω ⊆ Rn is open, and h ∈ C 0 (Ω). The nonlinear operator T (x, D) is deﬁned
in terms of a jointly continuous function
F : Ω × Rm −→ R
by setting
T (x, D)u(x) = F (x, u(x), · · · , Dα u(x), · · · ), |α| ≤ m, x ∈ Ω,
(1.114)
for any u ∈ C m (Ω). We assume that the PDE (1.113) satisﬁes
h(x) ∈ int{F (x, ζ)|ζ ∈ Rm }, x ∈ Ω.
(1.115)
Under this condition, the following fundamental approximation result holds .
Theorem 1.63. Suppose that (1.115) holds. Then for all ε > 0 there exists
Γε ⊂ Ω closed and nowhere dense and uε ∈ C m (Ω\Γε ) such that
h(x) − ε < T (x, D)uε (x) ≤ h(x),
x ∈ Ω\Γε .
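As a worked illustration of condition (1.115) (our own example, not from the text), consider on Ω ⊆ R the first-order equation (u′)² = h(x), so that

```latex
F(x,u,u') = (u')^2, \qquad
\{\, F(x,\zeta) : \zeta \in \mathbb{R}^2 \,\} = [0,\infty), \qquad
\operatorname{int}\{\, F(x,\zeta) : \zeta \in \mathbb{R}^2 \,\} = (0,\infty).
```

Condition (1.115) therefore holds exactly when h is strictly positive on Ω. For h ≡ 0 the equation is still solvable pointwise (any constant u), but 0 lies on the boundary of the range of F, so (1.115) fails and Theorem 1.63 does not apply.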
The OCM consists of using Theorem 1.63, interpreted in appropriate function spaces, to construct solutions of the PDE (1.113). We summarize this construction below, see [10, 11, 93, 103, 104] for a detailed exposition. In this regard, consider the space C^m_nd(Ω) defined as follows: For any integer 0 ≤ m < ∞, set

C^m_nd(Ω) = { u ∈ A(Ω) :  ∃ Γ ⊂ Ω closed, nowhere dense :
                          (1) u : Ω\Γ −→ R
                          (2) u ∈ C^m(Ω\Γ) }        (1.116)

Clearly, C^m(Ω) ⊆ C^m_nd(Ω), 0 ≤ m ≤ ∞. Since the mapping F that defines T(x, D) through (1.114) is continuous, it follows that if u ∈ C^m(Ω\Γ) with Γ ⊂ Ω closed nowhere dense, then T(x, D)u ∈ C^0(Ω\Γ). That is, with the operator T(x, D) we can associate a mapping

T(x, D) : C^m_nd(Ω) −→ C^0_nd(Ω).        (1.117)
On C^0_nd(Ω) we define an equivalence relation as follows: For any u, v ∈ C^0_nd(Ω), we have

u ∼ v ⇐⇒  ∃ Γ ⊂ Ω closed nowhere dense :
           (1) u, v ∈ C^0(Ω\Γ)
           (2) u(x) = v(x), x ∈ Ω\Γ        (1.118)

The quotient space C^0_nd(Ω)/∼ is denoted by M^0(Ω). We also introduce an equivalence relation on C^m_nd(Ω) in the following way: For any u, v ∈ C^m_nd(Ω),

u ∼T v ⇐⇒ T u ∼ T v.        (1.119)
The space M^m_T(Ω) is defined as the quotient space C^m_nd(Ω)/∼T. The mapping (1.117) induces an injective mapping

T̂ : M^m_T(Ω) −→ M^0(Ω)        (1.120)

in a canonical way, so that the diagram

                   T
    C^m_nd(Ω) ──────────▶ C^0_nd(Ω)
        │                     │
     q1 │                     │ q2
        ▼                     ▼
    M^m_T(Ω)  ──────────▶  M^0(Ω)          (1.121)
                   T̂

commutes, with q1 and q2 the canonical quotient mappings associated with the equivalence relations (1.119) and (1.118), respectively. The mapping T̂ is defined as follows: If U ∈ M^m_T(Ω) is the ∼T-equivalence class generated by u ∈ C^m_nd(Ω), then T̂(U) is the ∼-equivalence class generated by T u.
On the space M^0(Ω), define a partial order as follows: For any H, G ∈ M^0(Ω),

H ≤ G ⇐⇒  ∃ h ∈ H, g ∈ G, Γ ⊂ Ω closed nowhere dense :
           (1) h, g ∈ C^0(Ω\Γ)
           (2) h ≤ g on Ω\Γ        (1.122)

On the space M^m_T(Ω) define a partial order ≤T through the mapping T̂ as follows: For any U, V ∈ M^m_T(Ω),

U ≤T V ⇐⇒ T̂U ≤ T̂V in M^0(Ω).        (1.123)
With respect to the partial orders (1.122) and (1.123) on M^0(Ω) and M^m_T(Ω), respectively, the mapping T̂ is an order isomorphic embedding . That is, T̂ is injective and

∀ U, V ∈ M^m_T(Ω) :
U ≤T V ⇐⇒ T̂U ≤ T̂V.

There exist unique Dedekind complete partially ordered sets (M^0(Ω)♯, ≤) and (M^m_T(Ω)♯, ≤T), and order isomorphic embeddings

i_{M^m_T(Ω)} : M^m_T(Ω) −→ M^m_T(Ω)♯

and

i_{M^0(Ω)} : M^0(Ω) −→ M^0(Ω)♯,

so that the following universal property is satisfied: For every order isomorphic embedding

S : M^m_T(Ω) −→ M^0(Ω)
there exists a unique order isomorphic embedding S♯ : M^m_T(Ω)♯ −→ M^0(Ω)♯ so that the diagram

                   S
    M^m_T(Ω) ──────────▶ M^0(Ω)
        │                    │
  i_{M^m_T(Ω)}         i_{M^0(Ω)}
        │                    │
        ▼                    ▼
    M^m_T(Ω)♯ ─────────▶ M^0(Ω)♯          (1.124)
                   S♯
commutes. In particular, there exists a unique order isomorphic embedding
T̂♯ : M^m_T(Ω)♯ −→ M^0(Ω)♯,

which is an extension of the mapping T̂, so that the diagram

                   T̂
    M^m_T(Ω) ──────────▶ M^0(Ω)
        │                    │
  i_{M^m_T(Ω)}         i_{M^0(Ω)}
        │                    │
        ▼                    ▼
    M^m_T(Ω)♯ ─────────▶ M^0(Ω)♯          (1.125)
                   T̂♯
commutes. In this way we arrive at an extension of the nonlinear PDE (1.113).
In this regard, any solution U♯ ∈ M^m_T(Ω)♯ of the equation

T̂♯U♯ = h

is considered a generalized solution of (1.113).
The main existence and uniqueness result for solutions of the PDE (1.113) is
stated below.
Theorem 1.64.  If the PDE (1.113) satisfies the condition (1.115), then there exists a unique solution U♯ ∈ M^m_T(Ω)♯ such that

T̂♯U♯ = h.
As shown in , this generalized solution to the PDE (1.113) may be assimilated with usual Hausdorff continuous functions in Hnf(Ω). Indeed, the Dedekind order completion M^0(Ω)♯ of M^0(Ω) is order isomorphic with the space Hnf(Ω) of nearly finite H-continuous functions on Ω. Thus, since

T̂♯ : M^m_T(Ω)♯ −→ M^0(Ω)♯

is an order isomorphic embedding, one may obtain an order isomorphic embedding

T̂♯ : M^m_T(Ω)♯ −→ Hnf(Ω)

so that M^m_T(Ω)♯ is order isomorphic with a subspace of Hnf(Ω).
1.4.1
Main Ideas of Convergence Space Completion
One major deﬁciency of the OCM, as formulated in Section 1.4, is that the spaces
of generalized functions containing solutions of a PDE (1.113) may to a large
extent depend on the particular nonlinear operator T (x, D). Furthermore, there
is no concept of generalized partial derivative for generalized functions.
Recently, , ,  these issues were resolved by introducing suitable
uniform convergence spaces. Here we recall brieﬂy the main ideas underlying this
new approach.
To illustrate the convergence space completion method we introduce normal lower and normal upper semi-continuous functions, which are defined through

u ∈ A(Ω) is normal lower semi-continuous at x0 ∈ Ω ⇐⇒ I(S(u))(x0) = u(x0),
u ∈ A(Ω) is normal upper semi-continuous at x0 ∈ Ω ⇐⇒ S(I(u))(x0) = u(x0).

A function is normal lower or normal upper semi-continuous on Ω if it is normal lower or normal upper semi-continuous at every point x0 ∈ Ω, , . Every continuous function is both normal upper semi-continuous and normal lower semi-continuous.
Definition 1.65. A normal lower semi-continuous function u is called nearly finite whenever the set {x ∈ Ω : u(x) ∈ R} is open and dense in Ω.

We denote by NL(Ω) the set of all nearly finite normal lower semi-continuous functions on Ω. That is,

NL(Ω) = { u ∈ A(Ω) : (1) (I ◦ S)(u) = u
                     (2) {x ∈ Ω : u(x) ∈ R} is open and dense in Ω }
Note that every continuous, real valued function is nearly ﬁnite normal lower
semi-continuous. Thus we have that
C(Ω) ⊆ N L(Ω).
Now consider the space

ML^m(Ω) = {u ∈ NL(Ω) : u ∈ C^m_nd(Ω)}.        (1.126)

The space ML^m(Ω) is a sublattice of NL(Ω). In particular, the space

ML^0(Ω) = {u ∈ NL(Ω) : u ∈ C^0_nd(Ω)}

is σ-order dense in NL(Ω). This means that for each u ∈ NL(Ω)

∃ (λn), (µn) ⊂ ML^0(Ω) :
(i) λn ≤ λn+1 ≤ µn+1 ≤ µn, n ∈ N,
(ii) sup{λn : n ∈ N} = u = inf{µn : n ∈ N}.        (1.127)
On the space ML0 (Ω) we deﬁne a uniform convergence structure as follows:
Definition 1.66. Let Λ consist of all nonempty order intervals in ML^0(Ω). Let J0 denote the family of filters on ML^0(Ω) × ML^0(Ω) defined as follows:

U ∈ J0 ⇐⇒  ∃ k ∈ N :
            ∀ j = 1, …, k :
            ∃ Λj = {I^j_n : n ∈ N} ⊆ Λ :
            ∃ uj ∈ NL(Ω) :
            (i) I^j_{n+1} ⊆ I^j_n, n ∈ N
            (ii) lim inf_{n→∞} I^j_n = uj = lim sup_{n→∞} I^j_n
            (iii) ([Λ1] × [Λ1]) ∩ ··· ∩ ([Λk] × [Λk]) ⊆ U        (1.128)
The uniform convergence structure J0 is uniformly Hausdorff, first countable, and induces the convergence structure λJ0 on ML^0(Ω) given by

F ∈ λJ0(u) ⇐⇒  ∃ (λn), (µn) ⊂ ML^0(Ω) :
               (i) λn ≤ λn+1 ≤ µn+1 ≤ µn, n ∈ N,
               (ii) sup{λn : n ∈ N} = u = inf{µn : n ∈ N}
               (iii) [{[λn, µn] : n ∈ N}] ⊆ F        (1.129)
We now consider the PDE (1.113). With the operator T(x, D) one may associate a mapping

T : ML^m(Ω) −→ ML^0(Ω)        (1.130)

defined by

T(u)(x) = (I ◦ S)(F(x, u, …, D̂^α u, …))(x),   x ∈ Ω,        (1.131)

where

D̂^α u = (I ◦ S)(D^α u).

On the space ML^m(Ω) consider the equivalence relation ∼T induced by T through

∀ u, v ∈ ML^m(Ω) :
u ∼T v ⇐⇒ T u = T v        (1.132)
Denote by ML^m_T(Ω) the quotient space ML^m(Ω)/∼T. With the mapping (1.130) one may associate in a canonical way an injective mapping

T̂ : ML^m_T(Ω) −→ ML^0(Ω)        (1.133)

such that the diagram

                   T
    ML^m(Ω) ──────────▶ ML^0(Ω)
       │                    │
    qT │                    │ id
       ▼                    ▼
    ML^m_T(Ω) ────────▶ ML^0(Ω)          (1.134)
                   T̂
commutes. Here, qT denotes the canonical quotient map associated with the
equivalence relation (1.132) and id is the identity map on ML0 (Ω).
On the space ML^m_T(Ω) we consider the initial uniform convergence structure JT with respect to the mapping T̂: For any filter U on ML^m_T(Ω) × ML^m_T(Ω),

U ∈ JT ⇐⇒ (T̂ × T̂)(U) ∈ J0        (1.135)

Since the mapping T̂ is injective, it follows that the space ML^m_T(Ω) is uniformly isomorphic to the subspace T̂(ML^m_T(Ω)) of ML^0(Ω), see . Thus the mapping T̂ is a uniformly continuous embedding. The Wyler completion of the space (ML^0(Ω), J0) is the space NL(Ω) equipped with the uniform convergence structure J0♯ defined as follows, see .
Definition 1.67. Let Λ consist of all nonempty order intervals in ML^0(Ω). Let J0♯ denote the family of filters on NL(Ω) × NL(Ω) defined as follows:

U ∈ J0♯ ⇐⇒  ∃ k ∈ N :
             ∀ i = 1, …, k :
             ∃ Λi = {I^i_n : n ∈ N} ⊆ Λ :
             ∃ ui ∈ NL(Ω) :
             (i) I^i_{n+1} ⊆ I^i_n, n ∈ N
             (ii) lim inf_{n→∞} I^i_n = ui = lim sup_{n→∞} I^i_n
             (iii) ⋂_{i=1}^{k} ( ([Λi] × [Λi]) ∩ ([ui] × [ui]) ) ⊆ U        (1.136)
The completion of the space ML^m_T(Ω) is denoted by NLT(Ω), and is realized as a subspace of NL(Ω). In particular, the mapping T̂ extends uniquely to an injective uniformly continuous mapping

T̂♯ : NLT(Ω) −→ NL(Ω).
This is summarized in the following commutative diagram:

                   T̂
    ML^m_T(Ω) ────────▶ ML^0(Ω)
        │                   │
      ϕ │                   │ ψ
        ▼                   ▼
    NLT(Ω)   ─────────▶  NL(Ω)          (1.137)
                   T̂♯
Here ϕ and ψ are the canonical uniformly continuous embeddings associated
with the completions N LT (Ω) and N L(Ω), respectively. A ﬁrst existence and
uniqueness result for the generalized solutions of the PDE (1.113) is given below.
Theorem 1.68. For every f ∈ C 0 (Ω) satisfying (1.115), there exists a unique
U ♯ ∈ N LT (Ω) such that
Tb♯ U ♯ = f.
Theorem 1.68 is essentially a reformulation of Theorem 1.64 in the context of uniform convergence spaces. Thus the mentioned deficiencies of the OCM also apply to Theorem 1.68. However, by introducing a parallel construction of spaces of generalized functions, which is independent of the particular nonlinear operator T, we may resolve these difficulties. In this regard, we introduce on ML^m(Ω) the initial uniform convergence structure Jm with respect to the partial derivatives
Dα : MLm (Ω) −→ ML0 (Ω).
(1.138)
That is,

U ∈ Jm ⇐⇒  ∀ α ∈ Nn, |α| ≤ m :
            (D^α × D^α)(U) ∈ J0.

Each of the mappings (1.138) is then uniformly continuous, so that the mapping

D : ML^m(Ω) −→ ML^0(Ω)^µ

is a uniformly continuous embedding; therefore  D extends uniquely to an injective, uniformly continuous mapping

D♯ : NL^m(Ω) −→ NL(Ω)^µ,        (1.139)

where NL^m(Ω) denotes the completion of ML^m(Ω), and µ denotes the number of multi-indices α ∈ Nn with |α| ≤ m. This gives a first and basic regularity result: the generalized functions in NL^m(Ω) may be represented, through their generalized partial derivatives, as normal lower semi-continuous functions. Indeed, the mapping (1.139) may be represented as

D♯(u) = ((D^α)♯(u))_{|α|≤m},

where (D^α)♯ denotes the extension of D^α to NL^m(Ω).
Theorem 1.69. The mapping
T : MLm (Ω) −→ ML0 (Ω)
deﬁned in (1.130) - (1.131) is uniformly continuous.
In view of Theorem 1.69 the mapping T extends to a unique uniformly continuous mapping
T ♯ : N Lm (Ω) −→ N L(Ω)
so that the diagram

                  T
    ML^m(Ω) ──────────▶ ML^0(Ω)
       │                    │
     φ │                    │ ψ
       ▼                    ▼
    NL^m(Ω) ──────────▶ NL(Ω)          (1.140)
                  T♯

commutes. Here φ and ψ are the uniformly continuous embeddings associated with the completions NL^m(Ω) and NL(Ω), respectively. The main existence result for the solutions of (1.113) in NL(Ω) is the following.
Theorem 1.70. If for each x ∈ Ω there is some ζ ∈ Rm and neighbourhoods V and W of x and ζ, respectively, so that

F(x, ζ) = f(x)

and

F : V × W −→ R

is open, then there exists u♯ ∈ NL^m(Ω) such that

T♯u♯ = f.
The relationship between Theorem 1.68 and Theorem 1.70 is summarized as follows: If

F(x, ·) : Rm −→ R

is open and surjective for each x ∈ Ω, then the PDE (1.113) admits generalized solutions U♯ ∈ NLT(Ω) and u♯ ∈ NL^m(Ω) for every f ∈ C^0(Ω), and

U♯ = {u♯ ∈ NL^m(Ω) : T♯u♯ = f}.

Thus the generalized solution in NLT(Ω) may be viewed as the set of all solutions in NL^m(Ω).
1.5
Summary of the Main Results
In chapter two of this thesis we present the main results obtained. The Order Completion Method, in particular the formulation of this theory in terms of uniform convergence spaces presented in Section 1.4.1, is modified for single conservation laws in one spatial dimension. The following points are addressed.

• Suitable convergence vector spaces are introduced for the formulation of the question of existence and uniqueness of generalized solutions of the mentioned conservation law. The completion of this space is described in terms of the set of finite H-continuous functions.
• The issue of existence of generalized solutions of conservation laws is formulated in an operator-theoretic context. It is shown that each conservation law, with a given initial condition, admits at most one generalized solution.

• Existence of a generalized solution for the Burgers equation is demonstrated. It is also shown that this solution is the entropy solution of the Burgers equation.
Chapter 2
Hausdorﬀ Continuous Solution of Scalar
Conservation Laws
2.1
Introduction
In this chapter we study the solutions of the initial value problem

ut + (f(u))x = 0  in R × (0, ∞),        (2.1)
u(x, 0) = u0(x),  x ∈ R,        (2.2)

in the context of the Order Completion Method, and in particular the formulation and extension of the theory introduced in ,  and , see also Sections 1.4 and 1.4.1 in the introduction. In particular, the general theory developed in  is adapted so as to deliver the entropy solution of (2.1)-(2.2). In this regard, we introduce suitable convergence vector spaces M and N. With the initial value problem (2.1)-(2.2) a mapping
T : M −→ N
(2.3)
is associated so that (2.1)-(2.2) may be written as one single equation
Tu = h
for a suitable h ∈ N .
The vector space convergence structures on M and N are constructed in such a way that the mapping (2.3) is uniformly continuous. In this way we obtain a canonical uniformly continuous extension

    T♯ : M♯ −→ N♯

of (2.3) to the completions M♯ and N♯ of M and N, respectively. Any solution u♯ ∈ M♯ of the equation

    T♯ u♯ = h   (2.4)

is interpreted as a generalized solution of the initial value problem (2.1)-(2.2). The main result presented in this chapter concerns the existence, uniqueness and regularity of solutions of (2.4). In this regard, we prove the following:
(A) Equation (2.4) has at most one solution u♯ ∈ M♯.

(B) M♯ may be identified with a set of H-continuous functions; thus the solution of (2.4) is H-continuous, if it exists.

(C) There exists a solution u♯ ∈ M♯ for the initial value problem (2.1)-(2.2) with f(u) = u²/2. This solution can be identified with the entropy solution of the Burgers equation.
The main novelty of the approach developed here is that the theory of entropy solutions of scalar conservation laws is developed in an operator-theoretic setting. In this regard, we may recall that weak solution methods for linear and nonlinear PDEs involve an ad hoc extension of a partial differential operator associated with a given PDE. Given topological vector spaces X and Y of sufficiently smooth functions, and a partial differential operator

    T : X −→ Y,   (2.5)

a Cauchy sequence (u_n) in X is constructed so that the sequence (T u_n) converges to some h ∈ Y. The sequence (u_n), being a Cauchy sequence in X, converges to some u♯ in the completion X♯ of X. Now, based on the convergence

    u_n −→ u♯,  T u_n −→ h

of one single sequence (u_n), u♯ is declared to be a generalized solution of the PDE T u = h. This amounts to an ad hoc extension of the mapping (2.5) to a mapping

    T♯ : X ∪ {u♯} −→ Y.

In the case of a linear PDE this approach turns out to be well founded, due to the automatic continuity of certain linear mappings on topological vector spaces. However, in the case of nonlinear PDEs such methods can, and often do, lead to nonlinear stability paradoxes, see for instance [105, Chapter 1, Section 8]. The results obtained in this chapter place the theory of entropy solutions of conservation laws on a firm operator-theoretic footing.
The rest of this Chapter is organized as follows. In Section 2.2 we discuss the
convergence vector spaces used in our result. The approximation result needed for
the existence of a solution is discussed in Section 2.3. Existence and uniqueness
results for the Burgers equation are presented in Section 2.4.
2.2 Convergence Vector Spaces for Conservation Laws
As mentioned, the novelty of the approach to the conservation law (2.1)-(2.2) developed in this section lies mostly in the different way of constructing the operator equation T u = h associated with the conservation law. It is essential for this development that the classical solution of the problem (2.1)-(2.2) is unique whenever it exists. We present below a precise formulation of this uniqueness result. In this regard, we now define the following convergence vector spaces. Let
    M = {u ∈ C^1(R × (0, ∞)) ∩ C^0(R × [0, ∞)) : u(·, 0) ∈ U_0}   (2.6)

and

    N = C^0(R × (0, ∞)) × U_0,   (2.7)

where U_0 is a set of initial conditions. In the literature U_0 is defined in different ways. Here we take

    U_0 = {h ∈ C^0(R) : lim_{x−→∞} h(x) and lim_{x−→−∞} h(x) exist}.   (2.8)
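The membership condition (2.8) can be probed numerically. The following Python sketch is an illustration only; the helper `has_limits_at_infinity` and its probe points are ad hoc choices, not part of the theory.

```python
import numpy as np

def has_limits_at_infinity(h, tol=1e-3, probes=(1e3, 1e4, 1e5)):
    """Crude numerical test of membership in U_0: h(x) should stabilise
    as x -> +inf and as x -> -inf (hypothetical helper, illustration only)."""
    right = [float(h(x)) for x in probes]
    left = [float(h(-x)) for x in probes]
    return (max(right) - min(right) < tol) and (max(left) - min(left) < tol)

# tanh has limits +-1, so it qualifies; sin oscillates, so it does not.
in_U0 = has_limits_at_infinity(np.tanh)
not_in_U0 = has_limits_at_infinity(np.sin)
```

The test is of course only a heuristic: it samples h at a few large arguments and cannot detect very slow divergence.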
The following result is an extended formulation of Theorem 6.2 in the literature.

Theorem 2.1. Let f be Lipschitz on compacta. For u, v ∈ M set

    ϕ = u_t + (f(u))_x,   (2.9)
    ψ = v_t + (f(v))_x.   (2.10)
Then there exists L such that

    ∫_a^b |v(x, t) − u(x, t)| dx ≤ ∫_{a−Lt}^{b+Lt} |u(x, 0) − v(x, 0)| dx + ∫_0^t ∫_{a−Lt}^{b+Lt} |ϕ − ψ| dx dt.   (2.11)
Proof. Since lim_{x−→∞} u(x, 0), lim_{x−→−∞} u(x, 0), lim_{x−→∞} v(x, 0) and lim_{x−→−∞} v(x, 0) exist, u and v are both bounded. Let u(x, t), v(x, t) ∈ [−d, d] for x ∈ R. Using the fact that f is Lipschitz on compacta, there exists L such that

    |f(w) − f(w′)| ≤ L |w − w′| for w, w′ ∈ [−d, d].   (2.12)
From (2.9)-(2.10) we have that

    ψ − ϕ = (v − u)_t + (f(v) − f(u))_x.   (2.13)

Multiply equation (2.13) by the function

    sgn(v − u) = 1 if v − u > 0, −1 if v − u < 0, 0 if u = v
to get

    (ψ − ϕ) sgn(v − u) = |v − u|_t + ((f(v) − f(u))_x) sgn(v − u),

which can be written as

    |v − u|_t + ((f(v) − f(u)) sgn(v − u))_x = (ψ − ϕ) sgn(v − u).   (2.14)

Now integrate (2.14) over the trapezium

    D = {(x, t) ∈ R × [0, ∞) : 0 ≤ t ≤ τ, a − L(τ − t) ≤ x ≤ b + L(τ − t)}   (2.15)

for arbitrary fixed τ > 0. Then

    ∬_D |v − u|_t dx dt + ∬_D ((f(v) − f(u)) sgn(v − u))_x dx dt = ∬_D (ψ − ϕ) sgn(v − u) dx dt.   (2.16)
Figure 2.1: the trapezium D in the (x, t)-plane, with bottom edge L2 from (a − Lτ, 0) to (b + Lτ, 0), top edge L1 from (a, τ) to (b, τ), and slanted sides L3 (joining (a − Lτ, 0) and (a, τ)) and L4 (joining (b, τ) and (b + Lτ, 0)).
Apply Green's Theorem to the left hand side of equation (2.16). Note that, see Figure 2.1, on

    L1 : t = τ, dt = 0,
    L2 : t = 0, dt = 0,
    L3 : x = a + L(t − τ), dx = L dt,
    L4 : x = b − L(t − τ), dx = −L dt.
Therefore

    ∬_D |v − u|_t dx dt + ∬_D ((f(v) − f(u)) sgn(v − u))_x dx dt
      = ∮_{L1+L2+L3+L4} [ |v − u| dx − ((f(v) − f(u)) sgn(v − u)) dt ]
      = ∫_a^b |v(x, τ) − u(x, τ)| dx − ∫_{a−Lτ}^{b+Lτ} |v_0(x) − u_0(x)| dx
        + ∫_0^τ L |v(C_3(t), t) − u(C_3(t), t)| dt − ∫_0^τ (f(v(C_3(t), t)) − f(u(C_3(t), t))) sgn(v − u) dt
        + ∫_0^τ L |v(C_4(t), t) − u(C_4(t), t)| dt − ∫_0^τ (f(v(C_4(t), t)) − f(u(C_4(t), t))) sgn(v − u) dt,

where

    C_3(t) = a + L(t − τ),  C_4(t) = b − L(t − τ).
Using the inequality (2.12) we see that

    L |v(C_3(t), t) − u(C_3(t), t)| − (f(v(C_3(t), t)) − f(u(C_3(t), t))) sgn(v − u) ≥ 0

and

    L |v(C_4(t), t) − u(C_4(t), t)| − (f(v(C_4(t), t)) − f(u(C_4(t), t))) sgn(v − u) ≥ 0.

Therefore

    ∬_D |v − u|_t dx dt + ∬_D ((f(v) − f(u)) sgn(v − u))_x dx dt
      ≥ ∫_a^b |v(x, τ) − u(x, τ)| dx − ∫_{a−Lτ}^{b+Lτ} |v_0(x) − u_0(x)| dx,
which further implies that

    ∬_D (ψ − ϕ) sgn(v − u) dx dt = ∬_D (|v − u|_t + ((f(v) − f(u)) sgn(v − u))_x) dx dt
      ≥ ∫_a^b |v(x, τ) − u(x, τ)| dx − ∫_{a−Lτ}^{b+Lτ} |v_0(x) − u_0(x)| dx.

Thus we obtain the inequality

    ∫_a^b |v(x, τ) − u(x, τ)| dx
      ≤ ∫_{a−Lτ}^{b+Lτ} |v_0(x) − u_0(x)| dx + ∬_D (ψ − ϕ) sgn(v − u) dx dt
      ≤ ∫_{a−Lτ}^{b+Lτ} |v_0(x) − u_0(x)| dx + ∬_D |ψ − ϕ| dx dt,

as required.
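The stability estimate (2.11) can be illustrated numerically. The sketch below is an illustration under simplifying assumptions: f(u) = u²/2, the affine exact classical solutions u(x, t) = (x + p)/(1 + t) (for which ϕ = ψ = 0), and an arbitrarily chosen constant L; these affine functions do not satisfy the U_0 condition and serve only to check the inequality on a bounded region.

```python
import numpy as np

def trap(y_vals, x_vals):
    # simple trapezoidal rule
    return float(np.sum((y_vals[1:] + y_vals[:-1]) * np.diff(x_vals)) / 2)

# Exact classical solutions of u_t + (u^2/2)_x = 0, so phi = psi = 0 in (2.11).
p, q = 0.3, 1.1
u = lambda x, t: (x + p) / (1 + t)
v = lambda x, t: (x + q) / (1 + t)

a, b, t = -1.0, 2.0, 0.5
L = 5.0  # a (hypothetical) Lipschitz constant of f on the relevant range

xf = np.linspace(a, b, 2001)
lhs = trap(np.abs(v(xf, t) - u(xf, t)), xf)          # left side of (2.11)

xw = np.linspace(a - L * t, b + L * t, 2001)
rhs = trap(np.abs(v(xw, 0) - u(xw, 0)), xw)          # right side of (2.11)
```

Here |v − u| = |q − p|/(1 + t) is constant in x, so the left side shrinks in time while the right side does not, consistent with the contraction character of (2.11).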
Consider the operator

    T : M −→ N   (2.17)

defined by

    T u = (u_t + (f(u))_x, u(·, 0)) = (T_1 u, T_2 u).   (2.18)
The mentioned uniqueness of a classical solution of (2.1)-(2.2) is extended in the following way.

Lemma 2.2. The operator T is injective.
Proof. The injectivity of the operator T follows from Theorem 2.1. Indeed, let T u = T v for some u, v ∈ M. Then for any t > 0 and a, b ∈ R with a < b we have

    ∫_a^b |v(x, t) − u(x, t)| dx ≤ ∫_{a−Lt}^{b+Lt} |T_2 u − T_2 v| dx + ∫_0^t ∫_{a−Lt}^{b+Lt} |T_1 u − T_1 v| dx dt = 0.

By the continuity of u and v this implies u = v.
Convergence Structures on M and N

On the spaces M and N we define the respective convergence structures as follows. On M we consider the convergence structure denoted λ_1: given a filter F on M,

    F ∈ λ_1(u) ⇐⇒ ∃ (α_n), (β_n) ⊆ C^0(R × [0, ∞)) :
        (i) α_n ≤ α_{n+1} ≤ u ≤ β_{n+1} ≤ β_n, n ∈ N,
        (ii) ∫_a^b (β_n(x, t) − α_n(x, t)) dx −→ 0 for t ≥ 0, a, b ∈ R, a ≤ b,
        (iii) [{[α_n, β_n] : n ∈ N}] ⊆ F.   (2.19)

Here the interval [α_n, β_n] is considered in M with respect to the usual pointwise order, that is, [α_n, β_n] = {v ∈ M : α_n(x, t) ≤ v(x, t) ≤ β_n(x, t), x ∈ R, t ∈ [0, ∞)}.
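The squeeze in condition (2.19) can be visualised with a concrete example. The following sketch is illustrative only: the function u and the Gaussian gap profile are arbitrary choices, and the interval defect ∫(β_n − α_n) is computed over one fixed compact interval.

```python
import numpy as np

# A sketch of (2.19): squeeze u between alpha_n and beta_n whose gap has
# vanishing integral over a compact interval [a, b].
x = np.linspace(-5.0, 5.0, 1001)
u = np.sin(x)                 # stand-in for u(., t) at a fixed t (arbitrary)
bump = np.exp(-x**2)          # fixed continuous, nonnegative gap profile

def gap_integral(n, a=-2.0, b=2.0):
    mask = (x >= a) & (x <= b)
    beta_minus_alpha = 2 * bump / n      # (u + bump/n) - (u - bump/n)
    g, xs = beta_minus_alpha[mask], x[mask]
    return float(np.sum((g[1:] + g[:-1]) * np.diff(xs)) / 2)   # trapezoid rule

gaps = [gap_integral(n) for n in (1, 10, 100)]
```

Since bump ≥ 0 and 1/n decreases, the monotonicity (2.19)(i) holds automatically for α_n = u − bump/n, β_n = u + bump/n, and the computed defects decay like 1/n.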
Proposition 2.3. The convergence structure λ_1 is a Hausdorff vector space convergence structure.

Proof. We first show that λ_1 is a convergence structure on M by verifying the definition of a convergence structure given in (1.36).

(i) Consider u ∈ M. In (2.19) set α_n = β_n = u for all n ∈ N. Conditions (2.19)(i) and (ii) are satisfied, and [{[α_n, β_n] : n ∈ N}] = [u]. Therefore [u] ∈ λ_1(u).
(ii) Let F, G ∈ λ_1(u) be filters on M. Then there exist sequences (α_n^(1)), (β_n^(1)) and (α_n^(2)), (β_n^(2)) in C^0(R × [0, ∞)) that can be associated with the filters F and G, respectively, according to (2.19). Denote α_n = inf{α_n^(1), α_n^(2)} and β_n = sup{β_n^(1), β_n^(2)}, n ∈ N. Clearly, α_n ≤ α_{n+1} ≤ u ≤ β_{n+1} ≤ β_n. Moreover, for all t ≥ 0 and a, b ∈ R, a ≤ b,

    ∫_a^b (β_n(x, t) − α_n(x, t)) dx −→ 0.

Furthermore, [α_n^(1), β_n^(1)] ⊆ [α_n, β_n] and [α_n^(2), β_n^(2)] ⊆ [α_n, β_n], so that

    [α_n^(1), β_n^(1)] ∪ [α_n^(2), β_n^(2)] ⊆ [α_n, β_n],

which implies

    [{[α_n, β_n] : n ∈ N}] ⊆ F ∩ G.
(iii) Let F ∈ λ_1(u) and let G be a filter finer than F. Then there exist sequences (α_n), (β_n) in C^0(R × [0, ∞)) satisfying (2.19)(i), (ii) and [{[α_n, β_n] : n ∈ N}] ⊆ F ⊆ G. Hence G ∈ λ_1(u).

It follows from (i)-(iii) above that λ_1 is a convergence structure.
Next, we show that addition and scalar multiplication are continuous. In this regard, let F −→ u and G −→ v with respect to λ_1. Then there exist sequences (α_n^1), (β_n^1) ⊂ C^0(R × [0, ∞)) that can be associated with the filter F according to (2.19), and sequences (α_n^2), (β_n^2) ⊂ C^0(R × [0, ∞)) that can be associated with the filter G according to (2.19). Therefore we have the following.

(a) From (2.19)(i) we have

    α_n^1 + α_n^2 ≤ α_{n+1}^1 + α_{n+1}^2 ≤ u + v ≤ β_{n+1}^1 + β_{n+1}^2 ≤ β_n^1 + β_n^2.

(b) From (2.19)(ii) we have

    ∫_a^b (β_n^1(x, t) + β_n^2(x, t) − α_n^1(x, t) − α_n^2(x, t)) dx
      = ∫_a^b (β_n^1(x, t) − α_n^1(x, t)) dx + ∫_a^b (β_n^2(x, t) − α_n^2(x, t)) dx −→ 0.

(c) From (2.19)(iii) we have that

    ∀ n ∈ N ∃ F ∈ F ∃ G ∈ G : F ⊆ [α_n^1, β_n^1] and G ⊆ [α_n^2, β_n^2].

Hence

    F + G ⊆ [α_n^1, β_n^1] + [α_n^2, β_n^2] ⊆ [α_n^1 + α_n^2, β_n^1 + β_n^2],

so that

    [{[α_n^1 + α_n^2, β_n^1 + β_n^2] : n ∈ N}] ⊆ F + G.

It thus follows from (a)-(c) above that F + G ∈ λ_1(u + v), which shows that addition is continuous.
To show that scalar multiplication is continuous, let F ∈ λ_1(u) and let (α_n), (β_n) in C^0(R × [0, ∞)) be sequences associated with F according to (2.19). Then for any constant c ∈ R, c ≥ 0, we have:

(a) cα_n ≤ cα_{n+1} ≤ cu ≤ cβ_{n+1} ≤ cβ_n;

(b) for all t ≥ 0 and a, b ∈ R, a ≤ b,

    ∫_a^b (cβ_n(x, t) − cα_n(x, t)) dx = c ∫_a^b (β_n(x, t) − α_n(x, t)) dx −→ 0;

(c) ∀ n ∈ N ∃ F ∈ F such that cF ⊆ [cα_n, cβ_n], which implies [{[cα_n, cβ_n] : n ∈ N}] ⊆ cF.

The case c < 0 is treated in a similar way. Thus cF ∈ λ_1(cu), which implies that scalar multiplication is continuous.
We now show that λ_1 is Hausdorff. To do this we need to show that the set {0} is closed. Let u ∈ a({0}). Then

    ∃ F −→ u : {0} ∈ F.

Since F −→ u, there exist sequences (α_n), (β_n) ⊆ C^0(R × [0, ∞)) satisfying

    (i) α_n ≤ α_{n+1} ≤ u ≤ β_{n+1} ≤ β_n, n ∈ N,
    (ii) ∫_a^b (β_n(x, t) − α_n(x, t)) dx −→ 0 for all t ≥ 0, a, b ∈ R, a ≤ b,
    (iii) [{[α_n, β_n] : n ∈ N}] ⊆ F.   (2.20)

Since {0} ∈ F, every member of F contains 0, so that F ⊆ [0]. Together with (2.20)(iii) this gives

    [{[α_n, β_n] : n ∈ N}] ⊆ F ⊆ [0],

which means

    ∀ n ∈ N ∃ A ∋ 0 : A ⊆ [α_n, β_n].

This implies α_n ≤ 0 ≤ β_n. Taking the limit as n −→ ∞ we obtain u = 0. Hence a({0}) = {0}, which implies that {0} is closed. This completes the proof.
Let us recall that, given a filter F on M and u ∈ M, F converges to u with respect to the subspace convergence structure induced on M by the order convergence structure, see Example 1.42, whenever

    ∃ sequences (α_n), (β_n) ⊂ C^0(R × [0, ∞)) :
        (i) α_n ≤ α_{n+1} ≤ β_{n+1} ≤ β_n,
        (ii) sup{α_n : n ∈ N} = u = inf{β_n : n ∈ N},
        (iii) [{[α_n, β_n] : n ∈ N}] ⊆ F,   (2.21)

where the infimum and the supremum are both taken in C^0(R × [0, ∞)). Denote this induced convergence structure on M by λ_s. The convergence structure λ_1 on M is closely related to λ_s. In this regard, we have the following.
Lemma 2.4. Let F converge to u with respect to λ_1. Then F converges to u with respect to the convergence structure λ_s.

Proof. Let (α_n) and (β_n) be sequences associated with F according to (2.19). Conditions (2.21)(i) and (2.21)(iii) follow from (2.19)(i) and (2.19)(iii), respectively. It follows from (2.19)(i) that u ∈ M ⊂ C^0(R × [0, ∞)) is an upper bound of {α_n : n ∈ N} and a lower bound of {β_n : n ∈ N} in C^0(R × [0, ∞)). We need to show that it is the least upper bound of {α_n : n ∈ N} and the greatest lower bound of {β_n : n ∈ N}. Let v be an upper bound of {α_n : n ∈ N} and w a lower bound of {β_n : n ∈ N} in C^0(R × [0, ∞)) such that v ≤ u ≤ w. Then for any a, b ∈ R, a ≤ b, and t ≥ 0 we have

    ∫_a^b (u(x, t) − v(x, t)) dx ≤ ∫_a^b (β_n(x, t) − α_n(x, t)) dx −→ 0 as n −→ ∞.

Therefore

    ∫_a^b (u(x, t) − v(x, t)) dx = 0.

Using the continuity of u − v and the fact that u(x, t) − v(x, t) ≥ 0 for all x ∈ [a, b] and t ≥ 0, we obtain u = v. Similarly,

    ∫_a^b (w(x, t) − u(x, t)) dx = 0,

which implies w = u. Since v and w are arbitrary, it follows that sup{α_n : n ∈ N} = u = inf{β_n : n ∈ N}. This completes the proof.
Note that the convergence structure λ1 is ﬁner than the convergence structure
λs . Indeed, it follows from Lemma 2.4 that λ1 (u) ⊂ λs (u).
The convergence vector space M is equipped with the induced uniform convergence structure JM deﬁned as follows, see (1.102): Let U be a ﬁlter on M × M.
Then

    U ∈ J_M ⇐⇒ ∃ a filter F on M : (1) F ∈ λ_1(0), (2) ∆(F) ⊆ U.   (2.22)
Lemma 2.5. The operator T : M −→ N is uniformly continuous with respect to the vector space convergence structure defined on N as follows:

    F −→ (u, h) ⇐⇒ π_1(F) −→ u weakly in L^1 and π_2(F) −→ h in L^1_loc.

Here π_1 is the projection onto C^0(R × (0, ∞)) and π_2 is the projection onto U_0.
Proof. We need to show that

    ∀ F −→ 0 in M ∃ G −→ 0 in N : (T × T)(∆(F)) ⊇ ∆(G).   (2.23)

Let F −→ 0 in M and let (α_n) and (β_n) be sequences associated with F according to (2.19). It is sufficient to prove that (T × T)(∆([{[α_n, β_n] : n ∈ N}])) ⊇ ∆(G) for some filter G on N such that G −→ 0 in N. Let ϕ ∈ C_0^∞(R × [0, ∞)) be a test function and let

    d = max{ −min_{(x,t)∈supp ϕ} α_1(x, t), max_{(x,t)∈supp ϕ} β_1(x, t) }.

Then, using (2.19)(i), we have α_n(x, t) ∈ [−d, d] and β_n(x, t) ∈ [−d, d] for (x, t) ∈ supp ϕ. Let −d ≤ u, v ≤ d. Then there exists L_ϕ such that

    |f(u(x, t)) − f(v(x, t))| ≤ L_ϕ |u(x, t) − v(x, t)| for (x, t) ∈ supp ϕ,   (2.24)

where L_ϕ is the Lipschitz constant of f on the compact interval [−d, d].
For any n ∈ N define

    G_n = { g ∈ N :
        |∫_0^∞ ∫_{−∞}^∞ π_1(g)(x, t) ϕ dx dt| ≤ ∬_Ω (|ϕ_t| + L_ϕ |ϕ_x|)(β_n − α_n) dx dt for all ϕ ∈ C_0^∞(R × [0, ∞)),
        |∫_a^b π_2(g)(x) dx| ≤ ∫_a^b (β_n(x, 0) − α_n(x, 0)) dx for any a, b ∈ R, a ≤ b }   (2.25)

and let

    G = [{G_n : n ∈ N}].   (2.26)
From (2.25) we see that the filter G −→ 0 in N. It remains to show that the inclusion in (2.23) holds. Equivalently, we need to show that

    ∀ G ∈ G ∃ F ∈ F : (T × T)(∆(F)) ⊆ ∆(G).

For n ∈ N let (u, v) ∈ ∆([α_n, β_n]), that is, u − v ∈ [α_n, β_n]. Then for every test function ϕ we have

    |∬_Ω (T_1 v − T_1 u) ϕ dx dt| = |∬_{supp ϕ} ((v − u)_t + (f(v) − f(u))_x) ϕ dx dt|
      = |∬_{supp ϕ} ((v − u) ϕ_t + (f(v) − f(u)) ϕ_x) dx dt|
      ≤ ∬_{supp ϕ} (|v − u||ϕ_t| + |f(v) − f(u)||ϕ_x|) dx dt
      ≤ ∬_{supp ϕ} (|ϕ_t| + L_ϕ |ϕ_x|) |v − u| dx dt
      ≤ ∬_{supp ϕ} (|ϕ_t| + L_ϕ |ϕ_x|)(β_n − α_n) dx dt
      = ∬_Ω (|ϕ_t| + L_ϕ |ϕ_x|)(β_n − α_n) dx dt,

where Ω = R × [0, ∞), and, for any a, b ∈ R with a ≤ b,

    |∫_a^b (T_2 v − T_2 u) dx| = |∫_a^b (v(x, 0) − u(x, 0)) dx|
      ≤ ∫_a^b |v(x, 0) − u(x, 0)| dx
      ≤ ∫_a^b (β_n(x, 0) − α_n(x, 0)) dx.

Therefore

    (T × T)(u, v) = (T u, T v) ∈ {(p, q) : p − q ∈ G_n} = ∆(G_n),

which implies that (T × T)(∆([α_n, β_n])) ⊆ ∆(G_n). Hence

    (T × T)(∆([{[α_n, β_n] : n ∈ N}])) ⊇ ∆([{G_n : n ∈ N}]) = ∆(G).

This completes the proof.
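The central estimate in this proof, |∬((v − u)ϕ_t + (f(v) − f(u))ϕ_x)| ≤ ∬(|ϕ_t| + L_ϕ|ϕ_x|)|v − u|, can be verified on a grid. The sketch below is an illustration only: u, v are arbitrary smooth functions, a Gaussian stands in for a compactly supported test function, and f(u) = u²/2 as in the Burgers case.

```python
import numpy as np

x = np.linspace(-3, 3, 301)
t = np.linspace(0, 3, 301)
X, T = np.meshgrid(x, t, indexing="ij")

f = lambda w: 0.5 * w**2                   # Burgers flux
u = np.sin(X) * np.exp(-T)                 # arbitrary smooth fields
v = np.cos(X + T)
phi = np.exp(-X**2 - (T - 1.5)**2)         # effectively compactly supported

# np.gradient with coordinate arrays returns derivatives along axis 0 (x)
# and axis 1 (t), in that order.
phi_x, phi_t = np.gradient(phi, x, t)

# Lipschitz constant of f on the range of u and v: L = max(|u|, |v|).
L = float(max(np.abs(u).max(), np.abs(v).max()))

dx, dt = x[1] - x[0], t[1] - t[0]
lhs = abs(np.sum((v - u) * phi_t + (f(v) - f(u)) * phi_x)) * dx * dt
rhs = np.sum((np.abs(phi_t) + L * np.abs(phi_x)) * np.abs(v - u)) * dx * dt
```

Since |f(v) − f(u)| = |v + u||v − u|/2 ≤ L|v − u| pointwise, the inequality holds cell by cell, so the discrete sums inherit it with ample slack.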
Corollary 2.6. Let F be a Cauchy filter on M. Then π_1(T(F)) is weakly Cauchy in L^1 and π_2(T(F)) is Cauchy in L^1_loc.

On the space N we consider the final uniform convergence structure J_{N,T}, which is defined as follows:

    U ∈ J_{N,T} ⇐⇒ ∃ V ∈ J_M ∃ ϕ_1, …, ϕ_k ∈ N :
        (T × T)(V) ∩ ([ϕ_1] × [ϕ_1]) ∩ ··· ∩ ([ϕ_k] × [ϕ_k]) ⊆ U.   (2.27)
Proposition 2.7. The uniform convergence structure J_{N,T} is Hausdorff.

Proof. We need to show that

    ∀ ϕ, ψ ∈ N, ϕ ≠ ψ ∀ U ∈ J_{N,T} ∃ U ∈ U : (ϕ, ψ) ∉ U.

Let ϕ, ψ ∈ N be such that ϕ ≠ ψ, and let

    U = (T × T)(V) ∩ ([ϕ_1] × [ϕ_1]) ∩ ··· ∩ ([ϕ_k] × [ϕ_k]),

which has a basis of sets of the form

    U = (T × T)(V) ∪ {(ϕ_1, ϕ_1), …, (ϕ_k, ϕ_k)}, V ∈ V.

Suppose that (ϕ, ψ) ∈ U for every such U. Since ϕ ≠ ψ, we have (ϕ, ψ) ∉ {(ϕ_1, ϕ_1), …, (ϕ_k, ϕ_k)}, which implies (ϕ, ψ) ∈ (T × T)(V) for every V ∈ V. Since T is injective, it follows that (T^{−1}ϕ, T^{−1}ψ) ∈ V for every V ∈ V. Since J_M is Hausdorff, this gives T^{−1}ϕ = T^{−1}ψ, so that ϕ = ψ, which is a contradiction. Hence (ϕ, ψ) ∉ U for some U ∈ U. This completes the proof.
Note that the final uniform convergence structure J_{N,T} is the finest uniform convergence structure on N making T uniformly continuous. Thus we have the following.
Corollary 2.8. If F is a Cauchy filter on N with respect to J_{N,T}, then π_1(F) is weakly Cauchy in L^1 and π_2(F) is Cauchy in L^1_loc.
Proof. The result follows from Corollary 2.6.
We now apply the completion process. In this regard, the Wyler completion
of M is constructed in the following way. Denote by C[M] the set of all Cauchy
ﬁlters on M, and deﬁne an equivalence relation on C[M] through
    F ∼_C G ⇐⇒ F ∩ G ∈ C[M].   (2.28)
Let us denote by M♯ the quotient space C[M]/ ∼C . For F ∈ C[M], denote the
equivalence class generated by F with respect to (2.28) by [F]. One may identify
Convergence Vector Spaces for Conservation Laws
83
M with a subset of M♯ by associating each u ∈ M with λ1 (u) ⊂ C[M]. From
the deﬁnition of a convergence structure given in (1.36) it is clear that λ1 (u) is
indeed a ∼C -equivalence class. Furthermore, since λ1 is Hausdorﬀ, the mapping
iM : M ∋ u 7→ λ1 (u) ∈ M♯
is injective, and we may consider the convergence space M as a subset of M♯. The Wyler completion of M is the set M♯, equipped with the following vector space convergence structure:

    G ∈ λ_1♯([F]) ⇐⇒ ∃ F_1, …, F_n ∈ [F] : i_M(F_1) ∩ ··· ∩ i_M(F_n) ∩ [F_1] ∩ ··· ∩ [F_n] ⊆ G.   (2.29)
Similarly, let C[N] be the set of all Cauchy filters on N, and define an equivalence relation on C[N] through

    F ∼_C G ⇐⇒ F ∩ G ∈ C[N].   (2.30)

The Wyler completion of (N, J_{N,T}) is the set N♯ equipped with the uniform convergence structure J♯_{N,T}, which is defined as follows:

    U ∈ J♯_{N,T} ⇐⇒ ∃ V ∈ J_{N,T} ∃ Cauchy filters F_1, …, F_k ∈ C[N] \ λ_{J_{N,T}} :
        (i_N × i_N)(V) ∩ [(i_N(F_1) × [F_1]) ∩ ([F_1] × i_N(F_1)) ∩ ··· ∩ (i_N(F_k) × [F_k]) ∩ ([F_k] × i_N(F_k))] ⊆ U,   (2.31)

where λ_{J_{N,T}} denotes the convergence structure induced by the final uniform convergence structure J_{N,T}. In general, λ_{J_{N,T}} is not a final convergence structure, see Section 1.2.1.
Since T : M −→ N is uniformly continuous, there exists a unique uniformly continuous mapping T♯ : M♯ −→ N♯ such that the diagram

            T
       M ------> N
    iM |         | iN
       v         v
      M♯ ------> N♯   (2.32)
            T♯

commutes, where i_M and i_N are the uniformly continuous embeddings associated with the completions M♯ and N♯, respectively. Furthermore, since T is injective, it follows from the definition of J_{N,T} that T is a uniformly continuous embedding; that is, T^{−1} is uniformly continuous on T(M) ⊂ N. Therefore the mapping T♯ is injective as well.
We now give a concrete description of the completion M♯ of M as a subset of the space of finite Hausdorff continuous functions H. In this regard, the following characterization of Cauchy filters is essential.
Proposition 2.9. A filter F on M is a Cauchy filter with respect to the vector space convergence structure λ_1 if and only if

    ∃ (α_n), (β_n) ⊆ C^0(R × [0, ∞)) :
        (i) α_n ≤ α_{n+1} ≤ β_{n+1} ≤ β_n,
        (ii) ∫_a^b (β_n(x, t) − α_n(x, t)) dx −→ 0 for all t > 0, a, b ∈ R, a ≤ b,
        (iii) [{[α_n, β_n] : n ∈ N}] ⊆ F.   (2.33)
Proof. Suppose that (2.33) holds. Set α_n^(1) = α_n − β_n and β_n^(1) = β_n − α_n in C^0(R × [0, ∞)). The sequences (α_n^(1)) and (β_n^(1)) satisfy the following.

(i) α_n^(1) ≤ α_{n+1}^(1) ≤ 0 ≤ β_{n+1}^(1) ≤ β_n^(1). This follows from (2.33)(i).

(ii) ∫_a^b (β_n^(1)(x, t) − α_n^(1)(x, t)) dx −→ 0. This is because

    ∫_a^b (β_n^(1)(x, t) − α_n^(1)(x, t)) dx = ∫_a^b (β_n − α_n − α_n + β_n) dx = 2 ∫_a^b (β_n − α_n) dx −→ 0.

(iii) [{[α_n^(1), β_n^(1)] : n ∈ N}] ⊆ F − F. To see this, observe that from (2.33)(iii) we have

    ∀ n ∈ N ∃ F ∈ F : F ⊆ [α_n, β_n].

It follows that

    F − F ⊆ [α_n, β_n] − [α_n, β_n] ⊆ [α_n − β_n, β_n − α_n] = [α_n^(1), β_n^(1)].

Thus

    [{[α_n^(1), β_n^(1)] : n ∈ N}] ⊆ F − F,

which implies that F − F converges to zero. Hence F is Cauchy with respect to λ_1.
Conversely, let F be a Cauchy filter on M with respect to λ_1. Then F − F ∈ λ_1(0). Let (α_n), (β_n) ⊆ C^0(R × [0, ∞)) be sequences associated with F − F according to (2.19). It follows from (2.19)(iii) that

    ∀ n ∈ N ∃ F ∈ F : F − F ⊆ [α_n, β_n].

Choose any v ∈ F. Then F ⊆ F − F + v. Since the ultrafilter [v] ∈ λ_1(v), there exist sequences (α_n^1), (β_n^1) ⊆ C^0(R × [0, ∞)) such that [{[α_n^1, β_n^1] : n ∈ N}] ⊆ [v]. Therefore

    F ⊆ F − F + v ⊆ [α_n, β_n] + [α_n^1, β_n^1] ⊆ [α_n + α_n^1, β_n + β_n^1],

which implies

    [{[α_n + α_n^1, β_n + β_n^1] : n ∈ N}] ⊆ F.

Denote α̃_n = α_n + α_n^1 and β̃_n = β_n + β_n^1. Clearly, α̃_n ≤ α̃_{n+1} ≤ β̃_{n+1} ≤ β̃_n. Moreover, for all a, b ∈ R, a ≤ b, we have

    ∫_a^b (β̃_n − α̃_n) dx = ∫_a^b (β_n + β_n^1 − α_n − α_n^1) dx = ∫_a^b (β_n − α_n) dx + ∫_a^b (β_n^1 − α_n^1) dx −→ 0.

Hence there exist sequences (α̃_n), (β̃_n) ⊆ C^0(R × [0, ∞)) satisfying (2.33). This completes the proof.
Consider some p ∈ M♯. Then there exists a Cauchy filter G on M such that G −→ p in M♯. Then G is Cauchy with respect to λ_s as well. Therefore there exists u ∈ H such that G −→ u in H with respect to the order convergence structure on H. We define the mapping η : M♯ −→ H via

    η(p) = u.   (2.34)

Theorem 2.10. The map η is well defined; that is, if G, V are Cauchy filters on M and G, V −→ p in M♯, then G and V converge to the same limit u in H.

Proof. Let G, V −→ p in M♯. Then G ∩ V −→ p in M♯. But G ∩ V is a Cauchy filter with respect to λ_1; therefore it converges in H. Let G ∩ V −→ w in H. Then G ⊇ G ∩ V implies that G −→ w. Similarly, V −→ w in H. The proof is complete.
Theorem 2.11. The map η is injective.

Proof. Let η(p) = η(q) = u for some p, q ∈ M♯. There exist Cauchy filters G_1, G_2 on M such that G_1 −→ p, G_2 −→ q in M♯ and G_1, G_2 −→ u ∈ H. Let (α_n^(i)), (β_n^(i)) be the sequences associated with G_i, i = 1, 2, in terms of (2.33). Let α_n = inf{α_n^(1), α_n^(2)} in C^0(R × [0, ∞)), that is, α_n is the pointwise minimum of α_n^(1) and α_n^(2). Similarly, let β_n = sup{β_n^(1), β_n^(2)}. Clearly, α_n, β_n ∈ C^0(R × [0, ∞)), and the sequences (α_n) and (β_n) are monotone increasing and decreasing, respectively. It is also easy to see that [{[α_n, β_n] : n ∈ N}] ⊆ G_1 ∩ G_2. In order to associate the sequences (α_n) and (β_n) with G_1 ∩ G_2 in terms of (2.33) we need to show that property (2.33)(ii) is satisfied. From the definition of the order convergence structure we have α_n^(i)(x, t) ≤ u(x, t) ≤ β_n^(i)(x, t), i = 1, 2. Using the fact that max{x, y} ≤ x + y for x ≥ 0, y ≥ 0, we obtain

    β_n(x, t) − α_n(x, t)
      = max{β_n^(1)(x, t), β_n^(2)(x, t)} − u(x, t) + u(x, t) − min{α_n^(1)(x, t), α_n^(2)(x, t)}
      = max{β_n^(1)(x, t) − u(x, t), β_n^(2)(x, t) − u(x, t)} + max{u(x, t) − α_n^(1)(x, t), u(x, t) − α_n^(2)(x, t)}
      ≤ β_n^(1)(x, t) − u(x, t) + β_n^(2)(x, t) − u(x, t) + u(x, t) − α_n^(1)(x, t) + u(x, t) − α_n^(2)(x, t)
      = β_n^(1)(x, t) − α_n^(1)(x, t) + β_n^(2)(x, t) − α_n^(2)(x, t).

Then we have

    ∫_a^b (β_n(x, t) − α_n(x, t)) dx ≤ ∫_a^b (β_n^(1)(x, t) − α_n^(1)(x, t)) dx + ∫_a^b (β_n^(2)(x, t) − α_n^(2)(x, t)) dx −→ 0.

Therefore G_1 ∩ G_2 is a Cauchy filter with respect to λ_1 on M. This means that G_1 ∩ G_2 converges in M♯. Let G_1 ∩ G_2 −→ w in M♯. Then G_1 and G_2, being finer than G_1 ∩ G_2, also converge to w. Hence p = w = q.
2.3 Approximation results
In this section we consider the Cauchy problem for the viscous Burgers equation of the form

    v_t^{δ,ε} + (1/2)((v^{δ,ε})²)_x = ε v_{xx}^{δ,ε} in R × (0, ∞),   (2.35)
    v^{δ,ε}(x, 0) = u_0(x) − 2δ, δ > 0, on R × {t = 0},   (2.36)

which is the Cauchy problem for the viscous Burgers equation with a vertical shift by 2δ in the initial condition. Using the auxiliary problem (2.35)-(2.36) and techniques for problems of monotonic type, we show how the entropy solution of the inviscid Burgers equation [39, 53, 69]

    u_t + (1/2)(u²)_x = 0 in R × (0, ∞),   (2.37)

with the initial condition

    u(x, 0) = u_0(x) on R × {t = 0},   (2.38)

is approximated from below.
Applying Hopf's technique, see also Section 1.1.4, to equations (2.35)-(2.36), we have a solution similar to (1.56), where K(x, y, t) is replaced with K^δ(x, y, t), defined below in (2.40). Theorem 1.28 may be stated as follows.

Theorem 2.12. Suppose u_0 ∈ L^1_loc(R) is such that (1.58) holds. Then there exists a unique classical solution of equation (2.35)-(2.36) given by

    v^{δ,ε}(x, t) = [ ∫_{−∞}^∞ ((x − y)/t) e^{−K^δ(x,y,t)/(2ε)} dy ] / [ ∫_{−∞}^∞ e^{−K^δ(x,y,t)/(2ε)} dy ],   (2.39)

where

    K^δ(x, y, t) = (x − y)²/(2t) + ∫_0^y u_0(s) ds − 2δy,   (2.40)

with the following properties:

(i) For all a ∈ R,

    ∫_0^x v^{δ,ε}(ξ, t) dξ −→ ∫_0^a u_0(ξ) dξ − 2δa as x −→ a, t −→ 0.   (2.41)

(ii) If u_0(x) is continuous at x = a, then

    v^{δ,ε}(x, t) −→ u_0(a) − 2δ as x −→ a, t −→ 0.   (2.42)

Furthermore, a solution of (2.35)-(2.36) which is C²-smooth in an interval 0 < t < T and satisfies (2.41) for each value of a ∈ R necessarily coincides with (2.39) in this interval.
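Formula (2.39) can be evaluated by direct quadrature. For constant data u_0 ≡ c the solution of (2.35)-(2.36) is the constant c − 2δ, which gives a convenient check; the grid and parameter values below are ad hoc choices for this illustration.

```python
import numpy as np

# Quadrature of the Hopf-type formula (2.39)-(2.40) for constant data u0 = c.
c, delta, eps, x, t = 0.7, 0.05, 0.1, 0.4, 1.3

y = np.linspace(x - 30.0, x + 30.0, 20001)
# K^delta with int_0^y u0(s) ds = c*y:
Kd = (x - y)**2 / (2 * t) + c * y - 2 * delta * y
w = np.exp(-(Kd - Kd.min()) / (2 * eps))     # shift by the minimum for stability
v_num = float(np.sum((x - y) / t * w) / np.sum(w))
```

Completing the square shows the weight is a Gaussian centred at y* = x − (c − 2δ)t, so the expectation of (x − y)/t is exactly c − 2δ, and the quadrature reproduces it to high accuracy.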
The function K^δ(x, y, t) satisfies properties (P1)-(P3) given in Section 1.1.4. Using this fact we now introduce the functions

    y_min^δ(x, t) = min{y : K^δ(x, y, t) = min_{z∈R} K^δ(x, z, t)}

and

    y_max^δ(x, t) = max{y : K^δ(x, y, t) = min_{z∈R} K^δ(x, z, t)}.

Observe that, for x and t fixed,

    K^δ(x, y, t) = K(x, y, t) − 2δy.
Lemma 2.13. For each δ > 0 we have

    y_min^δ(x, t) = y_min(x + 2δt, t),  y_max^δ(x, t) = y_max(x + 2δt, t).   (2.43)
Proof. From (1.57) we get

    K(x + 2δt, y, t) − 2δx − 2δ²t
      = (x + 2δt − y)²/(2t) + ∫_0^y u_0(s) ds − 2δx − 2δ²t
      = [(x − y)² + 4(x − y)δt + 4δ²t²]/(2t) + ∫_0^y u_0(s) ds − 2δx − 2δ²t
      = (x − y)²/(2t) + 2δx − 2δy + 2δ²t + ∫_0^y u_0(s) ds − 2δx − 2δ²t
      = (x − y)²/(2t) + ∫_0^y u_0(s) ds − 2δy
      = K^δ(x, y, t).

This implies that, for fixed x and t, the functions K^δ(x, ·, t) and K(x + 2δt, ·, t) differ by a constant. Therefore they have the same set of minimizers, which implies the statement of the Lemma.
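The minimizer identity (2.43) is easy to confirm on a grid, since K^δ(x, ·, t) and K(x + 2δt, ·, t) differ only by the constant 2δx + 2δ²t. The sketch below uses u_0 = −tanh, an arbitrary choice with limits at ±∞, for which ∫_0^y u_0 = −log cosh y.

```python
import numpy as np

delta, t, x = 0.2, 1.0, 0.3
y = np.linspace(-10, 10, 200001)
U0int = -np.log(np.cosh(y))              # int_0^y u0(s) ds for u0 = -tanh

K = lambda xx: (xx - y)**2 / (2 * t) + U0int
Kdelta = K(x) - 2 * delta * y            # K^delta(x, y, t)

y_min_delta = float(y[np.argmin(Kdelta)])
y_min_shift = float(y[np.argmin(K(x + 2 * delta * t))])
```

For this convex K^δ the minimizer is unique, so the two grid minimizers coincide up to the grid resolution.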
The functions y_min^δ and y_max^δ have the properties (Y1)-(Y4) given in Section 1.1.4. In particular, y_min^δ and y_max^δ are monotone functions in x. Moreover, for every t ≥ 0, y_min^δ(x, t) = y_max^δ(x, t) for all x ∈ R, with the possible exception of a denumerable set of values of x where y_min^δ(x, t) < y_max^δ(x, t). It follows from property (Y1) that

    (x − y_min^δ(x, t))/t ≤ (x − y_max(x + δt, t))/t.   (2.44)

From Theorem 1.29 we have that, for all x and t > 0,

    (x − y_max^δ(x, t))/t ≤ liminf_{α→x, θ→t, ε→0} v^{δ,ε}(α, θ)
      ≤ limsup_{α→x, θ→t, ε→0} v^{δ,ε}(α, θ) ≤ (x − y_min^δ(x, t))/t   (2.45)

and, in particular, that

    v^δ(x, t) = lim_{α→x, θ→t, ε→0} v^{δ,ε}(α, θ) = (x − y_max^δ(x, t))/t = (x − y_min^δ(x, t))/t   (2.46)

holds at every point (x, t) where y_max^δ(x, t) = y_min^δ(x, t).
The following Lemma shows that the functions u̲ and ū defined in (1.68) and (1.69) are lower and upper semi-continuous, respectively.

Lemma 2.14. The functions u̲ and ū defined in (1.68) and (1.69) are lower semi-continuous and upper semi-continuous, respectively.
Proof. Let u̲(x, t) > m for some m ∈ R and let µ > 0 be such that u̲(x, t) > m + µ. Since

    u̲(x, t) = sup{ inf{u^ε(α, θ) : |α − x| < η, |θ − t| < η, ε < η} : η > 0 },

it follows that

    ∃ η > 0 : inf{u^ε(α, θ) : |α − x| < η, |θ − t| < η, ε < η} > m + µ.

Therefore

    u^ε(α, θ) > m + µ if |α − x| < η, |θ − t| < η, ε < η.

Let x̃ ∈ (x − η, x + η) and t̃ ∈ (t − η, t + η). Then

    u̲(x̃, t̃) = liminf_{α→x̃, θ→t̃, ε→0} u^ε(α, θ) ≥ m + µ > m.

Since the last inequality holds for all x̃ ∈ (x − η, x + η) and t̃ ∈ (t − η, t + η), u̲ is lower semi-continuous. The proof of upper semi-continuity of ū is done in a similar way.
It follows from Lemma 2.14 that the functions

    v̲^δ(x, t) = liminf_{α→x, θ→t, ε→0} v^{δ,ε}(α, θ)   (2.47)

and

    v̄^δ(x, t) = limsup_{α→x, θ→t, ε→0} v^{δ,ε}(α, θ)   (2.48)

are lower semi-continuous and upper semi-continuous, respectively.

Lemma 2.15. The functions v̄^δ defined by (2.48) and ū defined by (1.69) satisfy the inequality

    v̄^δ(x, t) ≤ ū(x + δt, t) − δ, x ∈ R, t ≥ 0.
Proof. From the inequality (2.45) it follows that

    v̄^δ(x, t) ≤ (x − y_min^δ(x, t))/t = (x − y_min(x + 2δt, t))/t
      ≤ (x − y_max(x + δt, t))/t = (x + δt − y_max(x + δt, t))/t − δ
      ≤ liminf_{α→x+δt, θ→t, ε→0} u^ε(α, θ) − δ
      = u̲(x + δt, t) − δ
      ≤ ū(x + δt, t) − δ,

as required.
Consider the viscous problem

    w_t^{δ,ρ,ε} + (1/2)((w^{δ,ρ,ε})²)_x = ε w_{xx}^{δ,ρ,ε} in R × (0, ∞),   (2.49)
    w^{δ,ρ,ε}(x, 0) = I(ρ, u_0)(x) − 2δ on R × {t = 0}.   (2.50)

Lemma 2.16. Let w^{δ,ρ,ε} denote the solution of the Cauchy problem (2.49)-(2.50). Then

    w̄^{δ,ρ}(x, t) ≤ ū(x, t) − δ, x ∈ R, t ∈ [0, ρ/δ],   (2.51)

where w̄^{δ,ρ}(x, t) = limsup_{α→x, θ→t, ε→0} w^{δ,ρ,ε}(α, θ), as in (1.69).
Proof. Consider the viscous problem (2.35)-(2.36) with solution v^{δ,ε}, and the Cauchy problem

    z_t^{δ,σ,ε} + (1/2)((z^{δ,σ,ε})²)_x = ε z_{xx}^{δ,σ,ε},
    z^{δ,σ,ε}(x, 0) = u_0(x + σ) − 2δ = v^{δ,ε}(x + σ, 0).   (2.52)

From (2.50) we have that, for all |σ| ≤ ρ,

    w^{δ,ρ,ε}(x, 0) = I(ρ, u_0)(x) − 2δ ≤ u_0(x + σ) − 2δ = z^{δ,σ,ε}(x, 0) = v^{δ,ε}(x + σ, 0).

It follows from [125, Chapter IV 25II] that

    w^{δ,ρ,ε}(x, t) ≤ v^{δ,ε}(x + σ, t) ∀ t > 0, x ∈ R, |σ| ≤ ρ.   (2.53)

Therefore

    w̄^{δ,ρ}(x, t) = limsup_{α→x, θ→t, ε→0} w^{δ,ρ,ε}(α, θ) ≤ limsup_{α→x, θ→t, ε→0} v^{δ,ε}(α + σ, θ) = v̄^δ(x + σ, t),

that is, by Lemma 2.15,

    w̄^{δ,ρ}(x, t) ≤ v̄^δ(x + σ, t) ≤ ū(x + δt + σ, t) − δ, ∀ |σ| ≤ ρ.

Now, for fixed x and t ∈ [0, ρ/δ], take σ = −δt, so that |σ| ≤ ρ. Then

    w̄^{δ,ρ}(x, t) ≤ ū(x, t) − δ,

as required.
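The comparison principle invoked above (ordered initial data yield ordered viscous solutions) can be observed with a small finite-difference experiment for w_t + w w_x = ε w_xx. This is a numerical illustration only; the explicit scheme, grid and data are ad hoc choices, with step sizes taken small enough for the scheme to be monotone, which is what preserves the ordering.

```python
import numpy as np

eps, dx, dt, steps = 0.5, 0.05, 0.001, 200
x = np.arange(-5, 5 + dx, dx)

def step(w):
    # explicit centred advection + diffusion; endpoints held fixed (Dirichlet)
    wn = w.copy()
    wn[1:-1] = (w[1:-1]
                + eps * dt / dx**2 * (w[2:] - 2 * w[1:-1] + w[:-2])
                - dt / (2 * dx) * w[1:-1] * (w[2:] - w[:-2]))
    return wn

lo = -np.tanh(x) - 0.3        # initial data with lo <= hi everywhere
hi = -np.tanh(x)
for _ in range(steps):
    lo, hi = step(lo), step(hi)

ordered = bool(np.all(lo <= hi + 1e-12))
```

With these parameters the update is nondecreasing in each of its three stencil arguments on the relevant range, so both the ordering lo ≤ hi and the bounds of the initial data are preserved at every step.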
2.3.1 Requirements for u_0

Lemma 2.17. Assume that lim_{x−→∞} u_0(x) and lim_{x−→−∞} u_0(x) exist. Then condition (1.58) is satisfied.

Proof. Let lim_{x−→∞} u_0(x) = β. Then

    ∃ M : |u_0(x) − β| < 1 for x > M,

so that

    lim_{x−→∞} |∫_0^x u_0(s) ds| / x²
      ≤ lim_{x−→∞} ( |∫_0^M u_0(s) ds| + |∫_M^x u_0(s) ds| ) / x²
      ≤ lim_{x−→∞} |∫_0^M u_0(s) ds| / x² + lim_{x−→∞} (|β| + 1)(x − M) / x²
      = 0.

The case x −→ −∞ is treated in the same way.
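Condition (1.58) can also be checked numerically for concrete data. The sketch below (with u_0 = tanh, an arbitrary member of U_0) computes (1/x²)|∫_0^x u_0| by the trapezoidal rule for increasing x; since ∫_0^x tanh = log cosh x grows only linearly, the ratio decays like 1/x.

```python
import numpy as np

def ratio(x, n=100001):
    # (1/x^2) * |int_0^x tanh(s) ds| via the trapezoidal rule
    s = np.linspace(0.0, x, n)
    vals = np.tanh(s)
    integral = float(np.sum((vals[1:] + vals[:-1]) * np.diff(s)) / 2)
    return abs(integral) / x**2

ratios = [ratio(x) for x in (10.0, 100.0, 1000.0)]
```

Note that evaluating log cosh x directly would overflow for large x; integrating tanh avoids this.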
The following Lemmas are consequences of the condition

    lim_{x−→∞} u_0(x) and lim_{x−→−∞} u_0(x) exist   (2.54)

on u_0.

Lemma 2.18. Suppose condition (2.54) holds. Then for every ε > 0

    lim_{x→+∞, t→t̃} u^ε(x, t) = lim_{x→+∞} u_0(x)

and

    lim_{x→−∞, t→t̃} u^ε(x, t) = lim_{x→−∞} u_0(x),

where u^ε is the solution of the viscous Burgers equation (1.54)-(1.55).
Proof. Let N > 0. For any ε < 1 and x > N we have
∫ N
∫ N
(
)
2 ∫y
2
1 (x−y)
1
1 (x−y)
+
−
u
(s)ds
0
− 2ε K(x,y,t)
− 2ε 4t
2ε
4t
0
dy
u
(y)e
=
dy
u
(y)e
e
0
0
−∞
−∞
∫ N
(
)
2 ∫y
2
1 (x−y)
1 (x−y)
−
+
u
(s)ds
0
0
≤ max e− 2ε 4t
|u0 (y)|e 2ε 4t
dy
y∈(−∞,N ]
−∞
∫ N
(
)
2 ∫y
2
1 (N −y)
1 (x−N )
−
+
u
(s)ds
0 0
|u0 (y)|e 2 4t
dy
(2.55)
≤ e− 2ε 4T
−∞
Taking the limit as x −→ +∞ we have that the expression on the right of (2.55)
converges to zero, so that
∫ N
1
lim u0 (y)e− 2ε K(x,y,t) dy = 0
x−→+∞
−∞
which implies
∫
N
u0 (y)e− 2ε K(x,y,t) dy = 0
1
lim
x−→+∞
Similarly,
−∞
∫
lim x−→+∞
N
1
K(x,y,t)
− 2ε
e
−∞
(2.56)
dy = 0
which implies
∫
N
lim
x−→+∞
e− 2ε K(x,y,t) dy = 0
1
(2.57)
−∞
Now consider the solution uε of the viscous Burgers equation (1.54) - (1.55), which is given as

uε(x, t) = [ ∫_{−∞}^∞ u0(y) e^{−(1/2ε)K(x,y,t)} dy ] / [ ∫_{−∞}^∞ e^{−(1/2ε)K(x,y,t)} dy ]
 = [ ∫_{−∞}^N u0(y) e^{−(1/2ε)K(x,y,t)} dy + ∫_N^∞ u0(y) e^{−(1/2ε)K(x,y,t)} dy ] / [ ∫_{−∞}^N e^{−(1/2ε)K(x,y,t)} dy + ∫_N^∞ e^{−(1/2ε)K(x,y,t)} dy ].

Let σ > 0 and let N be such that

β − σ < u0(x) < β + σ whenever x > N,

where β = lim_{x→+∞} u0(x). Then

uε(x, t) ≤ [ ∫_{−∞}^N u0(y) e^{−(1/2ε)K(x,y,t)} dy + (β + σ) ∫_N^∞ e^{−(1/2ε)K(x,y,t)} dy ] / [ ∫_{−∞}^N e^{−(1/2ε)K(x,y,t)} dy + ∫_N^∞ e^{−(1/2ε)K(x,y,t)} dy ].
Using (2.56) and (2.57) we have

lim sup_{x→+∞, t→t̃} uε(x, t) ≤ [ lim sup_{x→+∞, t→t̃} ( ∫_{−∞}^N u0(y) e^{−(1/2ε)K(x,y,t)} dy + (β + σ) ∫_N^∞ e^{−(1/2ε)K(x,y,t)} dy ) ] / [ lim sup_{x→+∞, t→t̃} ( ∫_{−∞}^N e^{−(1/2ε)K(x,y,t)} dy + ∫_N^∞ e^{−(1/2ε)K(x,y,t)} dy ) ].

Therefore

lim sup_{x→+∞, t→t̃} uε(x, t) ≤ [ lim sup_{x→+∞, t→t̃} (β + σ) ∫_N^∞ e^{−(1/2ε)K(x,y,t)} dy ] / [ lim sup_{x→+∞, t→t̃} ∫_N^∞ e^{−(1/2ε)K(x,y,t)} dy ] = β + σ.

Similarly,

lim inf_{x→+∞, t→t̃} uε(x, t) ≥ [ lim inf_{x→+∞, t→t̃} (β − σ) ∫_N^∞ e^{−(1/2ε)K(x,y,t)} dy ] / [ lim inf_{x→+∞, t→t̃} ∫_N^∞ e^{−(1/2ε)K(x,y,t)} dy ] = β − σ.

Since σ > 0 is arbitrary, we have

lim sup_{x→+∞, t→t̃} uε(x, t) = lim inf_{x→+∞, t→t̃} uε(x, t) = β = lim_{x→+∞} u0(x),

which implies

lim_{x→+∞, t→t̃} uε(x, t) = lim_{x→+∞} u0(x),

as required. The proof of the second part of the Lemma is similar.
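The conclusion of Lemma 2.18 can be illustrated numerically from the quotient representation of uε quoted in the proof. The sketch below assumes that representation together with a hypothetical datum u0 = tanh (so β = 1); the truncation window, grid sizes, and the stabilizing subtraction of min K are implementation choices, not part of the thesis:

```python
import math

# Numerical sketch of Lemma 2.18 using the quotient representation of u^eps
# quoted in the text, with hypothetical datum u0 = tanh (beta = 1).
def u0(s):
    return math.tanh(s)

def K(x, y, t, n=400):
    # K(x,y,t) = (x-y)^2/(4t) + ∫_0^y u0(s) ds  (midpoint rule for the integral)
    h = y / n
    integral = sum(u0((i + 0.5) * h) for i in range(n)) * h
    return (x - y) ** 2 / (4 * t) + integral

def u_eps(x, t=1.0, eps=0.5, width=30.0, m=2000):
    # evaluate the quotient over y in [x - width, x + width]; subtract the
    # minimum of K before exponentiating to avoid underflow
    ys = [x - width + 2 * width * j / m for j in range(m + 1)]
    Ks = [K(x, y, t) for y in ys]
    K0 = min(Ks)
    ws = [math.exp(-(Kv - K0) / (2 * eps)) for Kv in Ks]
    num = sum(u0(y) * w for y, w in zip(ys, ws))
    den = sum(ws)
    return num / den

print(u_eps(20.0))  # close to beta = 1, as the lemma asserts
```

For large x the exponential weight concentrates where u0 is already close to β, which is exactly the mechanism of the proof.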
As an easy consequence of Lemma 2.18, we obtain

Corollary 2.19. For any t̃ ≥ 0 we have

lim_{x→+∞, t→t̃} u(x, t) = lim sup_{x→+∞, t→t̃} u(x, t) = lim inf_{x→+∞, t→t̃} u(x, t) = β = lim_{x→+∞} u0(x)

and

lim_{x→−∞, t→t̃} u(x, t) = lim sup_{x→−∞, t→t̃} u(x, t) = lim inf_{x→−∞, t→t̃} u(x, t) = lim_{x→−∞} u0(x),

where u and u are defined by (1.68) and (1.69) respectively.
2.4 Existence and uniqueness results

In this section we prove an existence result for solutions of the equation

T♯u♯ = (0, u0)ᵀ

in the case of the Burgers equation. More precisely, for every u0 ∈ U0 we construct a Cauchy sequence (wk) in M such that Twk −→ (0, u0)ᵀ in N. The approximation results in the previous section are utilized for this purpose. Let δk = 1/4^k and ρk = 1/2^k, k ∈ N. For every k ∈ N consider the problem (2.49) - (2.50) with δ = δk and ρ = ρk. Using Lemma 2.16 we obtain the following inequality:

wδk,ρk(x, t) ≤ wδk+1,ρk+1(x, t) − (δk − δk+1), x ∈ R, t ∈ [0, 2^{k+1}/3].  (2.58)

Indeed, we have

wδk,ρk(x, 0) = I(ρk, u0)(x) − 2δk = I(ρk − ρk+1, wδk+1,ρk+1(·, 0))(x) − 2(δk − δk+1).

Hence the inequality (2.58) follows from Lemma 2.16 with u = wδk+1,ρk+1, δ = δk − δk+1 and ρ = ρk − ρk+1. The upper bound for the time interval is obtained as follows:

ρ/δ = (ρk − ρk+1)/(δk − δk+1) = (1/2^k − 1/2^{k+1})/(1/4^k − 1/4^{k+1}) = ((2 − 1)/2^{k+1})/((4 − 1)/4^{k+1}) = 4^{k+1}/(3 · 2^{k+1}) = 2^{k+1}/3.
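The time-interval computation above is elementary and can be verified directly:

```python
# Check of the time-interval computation: for delta_k = 1/4^k and
# rho_k = 1/2^k, the ratio (rho_k - rho_{k+1})/(delta_k - delta_{k+1})
# equals 2^{k+1}/3.
def ratio(k):
    delta = lambda j: 1.0 / 4 ** j
    rho = lambda j: 1.0 / 2 ** j
    return (rho(k) - rho(k + 1)) / (delta(k) - delta(k + 1))

for k in range(1, 6):
    print(k, ratio(k), 2 ** (k + 1) / 3)
```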
The construction of the Cauchy sequence is based on the following

Lemma 2.20. For every k there exists εk such that wδ2k,ρ2k,εk satisfies

wδ2k−1,ρ2k−1(x, t) ≤ wδ2k,ρ2k,εk(x, t) ≤ wδ2k+1,ρ2k+1(x, t)  (2.59)

for x ∈ R and t ∈ [0, 4^k/3].
Proof. Assume the opposite, that is, there exists k > 0 such that for every ε > 0 there exists (xε, tε) with tε ∈ [0, 4^k/3] such that one of the inequalities in (2.59) is violated. Since tε is in a compact interval, there exists a sequence (εn) such that (tεn) converges. Let tεn −→ t̃ ∈ [0, 4^k/3]. At least one of the inequalities in (2.59) is violated for a subsequence of (εn). To avoid excessive notation we denote this subsequence again by (εn). Assume the second inequality is violated; the other case is dealt with in a similar way. Now let us consider the sequence (xεn).

Case 1. The sequence (xεn) has an accumulation point x̃ ∈ R. Then there is a subsequence converging to x̃. Without loss of generality we may assume that xεn −→ x̃. Then

wδ2k,ρ2k(x̃, t̃) ≥ lim sup_{n−→∞} wδ2k,ρ2k,εn(xεn, tεn) ≥ wδ2k+1,ρ2k+1(x̃, t̃),

which contradicts (2.58).

Case 2. The sequence (xεn) is unbounded. Then it has a subsequence diverging to +∞ or −∞. Let us denote this subsequence again by (xεn) and let it diverge to +∞ (the case of −∞ is treated similarly). Then, using Corollary 2.19, we have

lim_{n−→∞} wδ2k,ρ2k,εn(xεn, tεn) ≥ lim_{n−→∞} wδ2k+1,ρ2k+1(xεn, tεn) = lim_{n−→∞} I(ρ2k+1, u0)(xεn) − 2δ2k+1 = lim_{x−→∞} u0(x) − 2δ2k+1.  (2.60)

On the other hand, by Lemma 2.18,

lim_{n−→∞} wδ2k,ρ2k,εn(xεn, tεn) = lim_{n−→∞} I(ρ2k, u0)(xεn) − 2δ2k = lim_{x−→∞} u0(x) − 2δ2k.  (2.61)

The relations (2.60) and (2.61) lead to

lim_{x−→∞} u0(x) − 2δ2k ≥ lim_{x−→∞} u0(x) − 2δ2k+1,

which is impossible since δ2k > δ2k+1. The contradictions obtained in Case 1 and Case 2 prove the statement of the lemma.
Now we construct an increasing sequence (αk) in M as follows. Set

αk(x, t) = wδ2k,ρ2k,εk(x, t) for x ∈ R, t ∈ [0, 4^{k−1}].

Then αk is extended for t ∈ [4^{k−1}, ∞) in such a way that αk ∈ C¹(R × [0, ∞)) and

αk−1(x, t) ≤ αk(x, t) < inf_{p>k} wδ2p,ρ2p,εp(x, t).  (2.62)

Note that for every (x, t) the sequence (wδ2p,ρ2p,εp(x, t)) is eventually monotone increasing, so that the infimum in the inequality (2.62) is finite. The inequality αk−1(x, t) ≤ αk(x, t) is obtained from (2.59) for x ∈ R and t ∈ [0, 4^{k−1}], and from (2.62) for x ∈ R and t ∈ [4^{k−1}, ∞). It is also easy to see from Lemma 2.16 that

αk(x, t) ≤ u(x, t), x ∈ R, t ≥ 0.
Lemma 2.21. At any point (x, t) we have

αk(x, t) ≥ u(x + 3δ2k−1 t + ρ2k−1, t) − 3δ2k−1 − 2ρ2k−1/t

for sufficiently large k.
Proof. Let the point (x, t), t > 0, be fixed, and let k be so large that 4^{k−1} > t. Then

αk(x, t) = wδ2k,ρ2k,εk(x, t) ≥ wδ2k−1,ρ2k−1(x, t) ≥ (x − y^{2k−1}_max(x, t))/t,  (2.63)

where

y^{2k−1}_max(x, t) = max{ y : K^{2k−1}(x, y, t) = min_{z∈R} K^{2k−1}(x, z, t) }

and

K^{2k−1}(x, y, t) = (x − y)²/(2t) + ∫_0^y I(ρ2k−1, u0)(s) ds − 2δ2k−1 y.

Then y^{2k−1}_max(x, t) is a solution of ∂K^{2k−1}/∂y = 0, that is,

(y^{2k−1}_max(x, t) − x)/t + I(ρ2k−1, u0)(y^{2k−1}_max(x, t)) − 2δ2k−1 = 0.

Then there exists γ = γ(x, t), |γ| ≤ ρ2k−1, such that

(y^{2k−1}_max(x, t) + γ(x, t) − (x + 2δ2k−1 t + γ(x, t)))/t + u0(y^{2k−1}_max(x, t) + γ(x, t)) = 0,

or equivalently,

∂K/∂y (x + 2δ2k−1 t + γ(x, t), y^{2k−1}_max(x, t) + γ(x, t), t) = 0.

Therefore,

ymax(x + 2δ2k−1 t + γ(x, t), t) ≥ y^{2k−1}_max(x, t) + γ(x, t).

Furthermore, using the monotonicity of ymax we have

y^{2k−1}_max(x, t) ≤ ymax(x + 2δ2k−1 t + γ(x, t), t) − γ(x, t) ≤ ymax(x + 2δ2k−1 t + ρ2k−1, t) + ρ2k−1.

From (2.63) we obtain

αk(x, t) ≥ (x − ymax(x + 2δ2k−1 t + ρ2k−1, t))/t − ρ2k−1/t
 > (x − ymin(x + 3δ2k−1 t + ρ2k−1, t))/t − ρ2k−1/t
 = (x + 3δ2k−1 t + ρ2k−1 − ymin(x + 3δ2k−1 t + ρ2k−1, t))/t − 3δ2k−1 − 2ρ2k−1/t
 ≥ u(x + 3δ2k−1 t + ρ2k−1, t) − 3δ2k−1 − 2ρ2k−1/t.

This completes the proof.
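The role of the minimizer ymax in the proof can be illustrated numerically: for the Burgers equation, u(x, t) = (x − y*(x, t))/t, where y* minimizes K(x, y, t) = (x − y)²/(2t) + ∫_0^y u0(s) ds. The sketch below uses a hypothetical Riemann datum (u0 = 0 for y < 0 and 1 for y > 0, producing a rarefaction), which is not from the thesis:

```python
# Numerical sketch: u(x, t) = (x - y*(x, t))/t, where y* minimizes
# K(x, y, t) = (x - y)^2/(2t) + ∫_0^y u0(s) ds, for the hypothetical
# Riemann datum u0 = 0 (y < 0), 1 (y > 0).
def K(x, y, t):
    return (x - y) ** 2 / (2 * t) + max(y, 0.0)  # ∫_0^y u0 = max(y, 0)

def u(x, t, lo=-50.0, hi=50.0, m=200000):
    # brute-force argmin over a fine grid
    ys = [lo + (hi - lo) * j / m for j in range(m + 1)]
    ystar = min(ys, key=lambda y: K(x, y, t))
    return (x - ystar) / t

t = 2.0
for x in [-1.0, 1.0, 3.0]:
    print(x, u(x, t))  # expect 0 (left state), x/t = 0.5 (fan), 1 (right state)
```

The grid argmin recovers the left state, the rarefaction fan u = x/t, and the right state, which is the entropy solution this variational representation selects.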
In a similar way one constructs a decreasing sequence (βk) such that at any point (x, t) we have

u(x, t) ≤ βk(x, t) ≤ u(x − 3δ2k−1 t − ρ2k−1, t) + 3δ2k−1 + 2ρ2k−1/t.

Clearly, αk ≤ wδ2k,ρ2k,εk ≤ βk. In order to prove that (wδ2k,ρ2k,εk) is a Cauchy sequence in M, it remains to show that (2.33)(ii) holds. Let t > 0 and a, b ∈ R with a ≤ b. Writing c = 3δ2k−1 t + ρ2k−1, we have for all sufficiently large k

∫_a^b (βk(x, t) − αk(x, t)) dx ≤ ∫_a^b [ u(x − c, t) − u(x + c, t) + 6δ2k−1 + 4ρ2k−1/t ] dx
 = ∫_{a−c}^{b−c} u(x, t) dx − ∫_{a+c}^{b+c} u(x, t) dx + (6δ2k−1 + 4ρ2k−1/t)(b − a)
 = ∫_{a−c}^{a+c} u(x, t) dx − ∫_{b−c}^{b+c} u(x, t) dx + (6δ2k−1 + 4ρ2k−1/t)(b − a).

The last expression tends to 0 as k −→ ∞. Hence

∫_a^b (βk(x, t) − αk(x, t)) dx −→ 0 as k −→ ∞.
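The change-of-variables step above can be checked numerically: for an integrable profile f and a shift c, ∫_a^b [f(x − c) − f(x + c)] dx equals ∫_{a−c}^{a+c} f − ∫_{b−c}^{b+c} f, and both expressions shrink as c → 0. The profile f = tanh below is a hypothetical stand-in for u(·, t):

```python
import math

# Numerical check of the change-of-variables identity used above.
def f(x):
    return math.tanh(x)  # hypothetical bounded profile standing in for u(., t)

def integral(g, lo, hi, n=20000):
    # midpoint rule
    h = (hi - lo) / n
    return sum(g(lo + (i + 0.5) * h) for i in range(n)) * h

def gap(c, a=-1.0, b=2.0):
    lhs = integral(lambda x: f(x - c) - f(x + c), a, b)
    rhs = integral(f, a - c, a + c) - integral(f, b - c, b + c)
    return lhs, rhs

print(gap(0.25))   # the two expressions agree
print(gap(0.025))  # and both shrink as c -> 0
```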
Thus the Fréchet filter ⟨(wδ2k,ρ2k,εk)⟩ associated with the sequence (wδ2k,ρ2k,εk) defines an element p of M♯. Moreover, in the topology of N we have

T1 wδ2k,ρ2k,εk −→ 0 and T2 wδ2k,ρ2k,εk −→ u0.

Therefore

T♯p = (0, u0)ᵀ.

In this way we have proved the following

Theorem 2.22. For any u0 ∈ U0 there exists a unique p ∈ M♯ such that

T♯p = (0, u0)ᵀ.

This means that the initial value problem for the Burgers equation has a unique solution in M♯.
It is easy to see that (wδ2k,ρ2k,εk) order converges to the unique H-continuous function u = [u, u]. Hence we have η(p) = u. Therefore the Burgers equation has an H-continuous solution which corresponds to the well-known entropy solution. In particular, u = u almost everywhere, and any real-valued function v such that v(x, t) ∈ u(x, t) for all (x, t) ∈ R × [0, ∞) satisfies the entropy condition for the Burgers equation.
Chapter 3
Concluding Remarks
3.1 Main results
We considered the Cauchy problem for a nonlinear conservation law with smooth flux function and continuous initial condition in the context of the Convergence Space Completion Method. In particular, the Convergence Space Completion Method was applied to the nonlinear operator equation derived from the Cauchy problem for a nonlinear scalar conservation law. In this regard, suitable uniform convergence spaces were introduced. The completions of these uniform convergence spaces were obtained through the Wyler completion process. In addition, a uniformly continuous and injective mapping was obtained as an extension of the nonlinear operator derived from the Cauchy problem.

It was shown that the extended operator equation has at most one generalized solution, which can be identified with the entropy solution in the case of the Burgers equation. Thus we obtained an existence and uniqueness result for the operator equation of the Burgers equation. The uniqueness of the solution follows from the injectivity of the extended operator. It was further shown that the space of generalized solutions can be identified with the space of Hausdorff continuous functions; thus the unique solution of the Burgers equation so obtained is identified with a Hausdorff continuous function. This provides a further regularity property for the generalized solution of the Burgers equation.
3.2 Topics for further research
In this work we have applied the Order Completion Method, which is a general and type independent theory for the existence and regularity of generalized solutions of large classes of systems of nonlinear PDEs, to obtain the entropy solution of the Burgers equation. The application of the Order Completion Method to the case of a more general flux function is very important and should be considered.

Systems of conservation laws appear very often in real world problems; thus an application of the Order Completion Method to specific systems of conservation laws is an important issue that should be looked into.
Apart from conservation laws, there are other types of specific linear and nonlinear PDEs to which the Order Completion Method could be applied. This is another interesting area for further research.
Bibliography
 Alefeld G. and Herzberger J., Introduction to Interval Computations, Academic Press (1983).
 Amadori D., Initial-boundary value problems for nonlinear systems of conservation laws, NoDEA Nonlinear Differential Equations Appl. 4 (1997), 1 - 42.
 Amadori D., Baiti P., LeFloch P. G. and Piccoli B., Nonclassical shocks and the Cauchy problem for nonconvex conservation laws, J. Diff. Eqs. 151 (1999), 345 - 372.
 Anguelov R., Dedekind order completion of C(X) by Hausdorff continuous functions, Quaestiones Mathematicae 27 (2004), 152 - 169.
 Anguelov R., An introduction to some spaces of interval functions, Technical report UPWT2004/3, University of Pretoria (2004).
 Anguelov R. and Markov S., Extended segment analysis, Freiburger Intervall-Berichte, Inst. Angew. Math., Univ. Freiburg i. Br. 10 (1981), 1 - 63.
 Anguelov R., Markov S. and Sendov B., The set of Hausdorff continuous functions: the largest linear space of interval functions, Reliable Computing 12 (2006), 337 - 363.
 Anguelov R., Markov S. and Sendov B., On the normed linear space of Hausdorff continuous functions, Large-scale scientific computing, Lecture Notes in Comput. Sci. 3743, Springer, Berlin (2006), 281 - 288.
 Anguelov R. and Minani F., Hausdorff continuous viscosity solutions of the Hamilton-Jacobi equations, J. Math. Anal. Appl., to appear.
 Anguelov R. and Rosinger E. E., Hausdorff continuous solutions of nonlinear PDEs through the order completion method, Quaestiones Mathematicae 28(3) (2005), 271 - 285.
 Anguelov R. and Rosinger E. E., Solving large classes of nonlinear systems of PDEs, Computers and Mathematics with Applications 53 (2007), 491 - 507.
 Anguelov R. and van der Walt J. H., Order convergence structure on C(X), Quaestiones Mathematicae 28 (2005), 425 - 457.
 Ancona F. and Marson A., Well-posedness for general 2 × 2 systems of conservation laws, Mem. Amer. Math. Soc. 801 (2001).
 Arnold V. I., Lectures on PDEs, Springer Universitext (2004).
 Baire R., Leçons sur les fonctions discontinues, Collection Borel, Paris (1905).
 Baiti P. and Jenssen H. K., On the front-tracking algorithm, J. Math. Anal. Appl. 217 (1998), 395 - 404.
 Baiti P., LeFloch P. and Piccoli B., Uniqueness of classical and nonclassical solutions for nonlinear hyperbolic systems of conservation laws, J. Differential Equations 172 (2001), 59 - 82.
 Banach S., Théorie des opérations linéaires, Warsaw (1932).
 Barbu V. and Precupanu Th., Convexity and Optimization in Banach Spaces.
 Beattie R. and Butzmann H. P., Convergence Structures and Applications to Functional Analysis, Kluwer Academic Publishers (2002).
 Bianchini S. and Bressan A., Vanishing viscosity solutions to nonlinear hyperbolic systems, Ann. of Math. 161 (2005), 223 - 342.
 Bianchini S. and Bressan A., BV estimates for a class of viscous hyperbolic systems, Indiana Univ. Math. J. 49 (2000), 1673 - 1713.
 Bianchini S. and Bressan A., A case study in vanishing viscosity, Discrete Contin. Dynam. Systems 7 (2001), 449 - 476.
 Birkhoff G., Lattice Theory, AMS, Providence (1973).
 Bourbaki N., General Topology, Chapters 1 - 4, Springer-Verlag, Berlin-Heidelberg-New York (1998).
 Bressan A., Global solutions of systems of conservation laws by wave-front tracking, J. Math. Anal. Appl. 170 (1992), 414 - 432.
 Bressan A., The unique limit of the Glimm scheme, Arch. Rational Mech. Anal. 130 (1995), 205 - 230.
 Bressan A., Hyperbolic Systems of Conservation Laws. The One-Dimensional Cauchy Problem, Oxford University Press, Oxford (2000).
 Bressan A., Stability of entropy solutions to n × n conservation laws, Some current topics in nonlinear conservation laws, AMS/IP Studies in Advanced Mathematics 15, Amer. Math. Soc., Providence (2000), 1 - 32.
 Bressan A., BV solutions to hyperbolic systems by vanishing viscosity, C.I.M.E. Lecture Notes in Mathematics 1911 (2007).
 Bressan A. and Colombo R. M., Unique solutions of 2 × 2 conservation laws with large data, Indiana Univ. Math. J. 44 (1995), 677 - 725.
 Bressan A., Crasta G. and Piccoli B., Well-posedness of the Cauchy problem for n × n conservation laws, Mem. Amer. Math. Soc. 694 (2000).
 Bressan A. and Goatin P., Oleinik type estimates and uniqueness for n × n conservation laws, Arch. Rational Mech. Anal. 140 (1997), 301 - 317.
 Bressan A. and LeFloch P., Uniqueness of weak solutions to systems of conservation laws, Arch. Rational Mech. Anal. 140 (1997), 301 - 317.
 Bressan A. and Lewicka M., A uniqueness condition for hyperbolic systems of conservation laws, Discrete Contin. Dynam. Systems 6 (2000), 673 - 682.
 Bressan A., Liu T.-P. and Yang T., L¹ stability estimates for n × n conservation laws, Arch. Rational Mech. Anal. 149 (1999), 1 - 22.
 Bressan A. and Yang T., On the convergence rate of vanishing viscosity approximations, Comm. Pure Appl. Math. 57 (2004), 1075 - 1109.
 Bucher W. and Frölicher A., Calculus in vector spaces without norm, Lecture Notes in Mathematics 30, Springer-Verlag, Berlin-Heidelberg-New York (1966).
 Burgers J. M., A mathematical model illustrating the theory of turbulence, Advances in Appl. Mechanics 1 (1948), 171 - 179.
 Cartan H., Théorie des filtres, C. R. Acad. Sci. 205 (1937), 595 - 598.
 Chen G.-Q., Entropy, compactness and conservation laws, Lecture notes, Northwestern University (1999).
 Chen G.-Q., Compactness methods and nonlinear hyperbolic conservation laws, Some current topics in nonlinear conservation laws, AMS/IP Studies in Advanced Mathematics 15, Amer. Math. Soc., Providence (2000), 34 - 75.
 Chen G.-Q. and Frid H., Asymptotic decay of solutions of conservation laws, C. R. Acad. Sci. Paris 323 (1996), 257 - 262.
 Chen G.-Q. and Frid H., Large-time behaviour of entropy solutions in L∞ for multidimensional conservation laws, Advances in nonlinear PDE and related areas, World Scientific, Singapore (1998), 28 - 44.
 Chen G.-Q. and Frid H., Decay of entropy solutions of nonlinear conservation laws, Arch. Rational Mech. Anal. 146(2) (1999), 95 - 127.
 Chen G.-Q. and Frid H., Large time behaviour of entropy solutions of conservation laws, J. Diff. Equ. 152(2) (1999), 308 - 357.
 Chen G.-Q. and Lu Y. G., A study on the applications of the theory of compensated compactness, Chinese Science Bulletin 33 (1988), 641 - 644.
 Choquet G., Convergences, Ann. Univ. Grenoble Sect. Sci. Math. Phys. 23 (1948), 57 - 112.
 Coifman R., Lions P. L., Meyer Y. and Semmes S., Compensated compactness and Hardy spaces, J. Math. Pures Appl. 72 (1993), 247 - 286.
 Colombeau J. F., New Generalized Functions and Multiplication of Distributions, North-Holland Mathematics Studies 84 (1984).
 Conway E. and Smoller J., Global solutions of the Cauchy problem for quasi-linear first order equations in several space variables, Comm. Pure Appl. Math. 19 (1966), 95 - 105.
 Dafermos C., Hyperbolic Conservation Laws in Continuum Physics, Grundlehren Math. Wiss. 325, Springer-Verlag, New York (1999).
 De Lellis C., Otto F. and Westdickenberg M., Minimal entropy conditions for Burgers equation, Quarterly Appl. Math.
 Dilworth R. P., The normal completion of the lattice of continuous functions, Trans. Amer. Math. Soc. 68 (1950), 427 - 438.
 DiPerna R., Global existence of solutions to nonlinear hyperbolic systems of conservation laws, J. Differential Equations 20 (1976), 187 - 212.
 DiPerna R., Convergence of approximate solutions to conservation laws, Arch. Rational Mech. Anal. 82 (1983), 27 - 70.
 Dolecki S. and Mynard F., Convergence-theoretic mechanisms behind product theorems, Proceedings of the French-Japanese Conference "Hyperspace Topologies and Applications" (La Bussière, 1997), Topology Appl. 104 (2000), 67 - 99.
 Evans L. C., Partial Differential Equations, AMS Graduate Studies in Mathematics 19, AMS (1998).
 Evans L. C., A survey of entropy methods for partial differential equations, Bulletin of the Amer. Math. Soc. 41(4) (2004), 409 - 438.
 Foy L., Steady-state solutions of hyperbolic systems of conservation laws with viscosity terms, Comm. Pure Appl. Math. 17 (1964), 177 - 188.
 Fric R. and Kent D. C., Completion of pseudo-topological groups, Math. Nachr. 99 (1980), 99 - 103.
 Gähler W., Grundstrukturen der Analysis II, Birkhäuser Verlag, Basel (1978).
 Gerhard P., Completion of semiuniform convergence spaces, Applied Categorical Structures 8 (2000), 463 - 473.
 Glimm J., Solutions in the large for nonlinear hyperbolic systems of equations, Comm. Pure Appl. Math. 18 (1965), 697 - 715.
 Goodman J. and Xin Z., Viscous limits for piecewise smooth solutions to systems of conservation laws, Arch. Rational Mech. Anal. 121 (1992), 235 - 265.
 Harten A., High resolution schemes for hyperbolic conservation laws, J. Comput. Phys. 49 (1983), 357 - 393.
 Heibig A., Existence and uniqueness for some hyperbolic systems of conservation laws, Arch. Rational Mech. Anal. 126 (1994), 79 - 101.
 Hewitt E. and Stromberg K., Real and Abstract Analysis, Springer-Verlag, Berlin-Heidelberg (1965).
 Hopf E., The partial differential equation ut + uux = µuxx, Comm. Pure Appl. Math. 3 (1950), 201 - 230.
 Kelley J., General Topology, Van Nostrand, Princeton (1955).
 Kovalevskaia S., Zur Theorie der partiellen Differentialgleichungen, Journal für die reine und angewandte Mathematik 80 (1875), 1 - 32.
 Kraemer W. and Wolff von Gudenberg J. (Eds), Scientific Computing, Validated Numerics, Interval Methods, Kluwer Academic, New York-Boston-Dordrecht-London-Moscow (2001).
 Kriegl A. and Michor P. W., The Convenient Setting of Global Analysis, Mathematical Surveys and Monographs 53, AMS, Providence (1997).
 Kruzhkov S. N., Generalized solutions of the Cauchy problem in the large for nonlinear equations of first order, Dokl. Akad. Nauk SSSR 187 (1969), 29 - 32.
 Kruzhkov S. N., First-order quasilinear equations with several space variables, Math. USSR Sbornik 10 (1970), 217 - 243.
 Ladyzhenskaya O. A., On the construction of discontinuous solutions of quasi-linear hyperbolic equations as limits of the solutions of the respective parabolic equations, Dokl. Akad. Nauk SSSR 3 (1956), 291 - 295.
 Ladyzhenskaya O. A., Solonnikov V. A. and Uraltseva N. N., Linear and Quasilinear Equations of Parabolic Type, AMS Translations, Providence (1968).
 Lax P., Hyperbolic systems of conservation laws II, Comm. Pure Appl. Math. 10 (1957), 537 - 566.
 Lax P., Hyperbolic Systems of Conservation Laws and the Mathematical Theory of Shock Waves, CBMS Regional Conference Series in Mathematics 11, SIAM, Philadelphia (1973).
 Lax P., Weak solutions of nonlinear hyperbolic equations and their numerical computation, Comm. Pure Appl. Math. 7 (1954), 159 - 193.
 Lax P., On discontinuous initial value problems for nonlinear equations and finite difference schemes, L.A.M.S. 1332 (1952).
 LeVeque R. J., Numerical Methods for Conservation Laws, Lectures in Mathematics ETH Zürich, Birkhäuser Verlag, Basel-Boston-Berlin, second ed. (1992).
 Lewy H., An example of a smooth linear partial differential equation without solution, Annals of Mathematics 66 (1957), 155 - 158.
 Luxemburg W. A. and Zaanen A. C., Riesz Spaces I, North-Holland, Amsterdam (1971).
 MacNeille H. M., Partially ordered sets, Trans. Amer. Math. Soc. 42 (1937), 416 - 460.
 Munkres J. R., Topology, 2nd Edition, Prentice Hall (2000).
 Murat F., Compacité par compensation, Ann. Scuola Norm. Sup. Pisa Cl. Sci. 5 (1978), 489 - 507, and 8 (1981), 69 - 102.
 Murat F., A survey on compensated compactness, in: Contributions to Modern Calculus of Variations (Bologna, 1985), 145 - 183, Pitman Res. Notes Math. Ser. 148, Longman Sci. Tech., Harlow (1987).
 Neuberger J. W., Sobolev Gradients and Differential Equations, Springer Lecture Notes in Mathematics 1670, Springer, Berlin (1997).
 Neuberger J. W., Continuous Newton's method for polynomials, Math. Intell. 21 (1999), 18 - 23.
 Neuberger J. W., A near minimal hypothesis Nash-Moser theorem, Int. J. Pure Appl. Math. 4 (2003), 269 - 280.
 Neuberger J. W., Prospects of a central theory of partial differential equations, Math. Intell. 27(3) (2005), 47 - 55.
 Oberguggenberger M. B. and Rosinger E. E., Solution of Continuous Nonlinear PDEs Through Order Completion, North-Holland, Amsterdam (1994).
 Oleinik O. A., On Cauchy's problem for nonlinear equations in the class of discontinuous functions, Uspehi Matem. Nauk 9 (1954), 231 - 233.
 Oleinik O. A., Discontinuous solutions of nonlinear differential equations, Amer. Math. Soc. Transl. 26 (1957), 95 - 172.
 Oleinik O. A., Uniqueness and stability of the generalized solution of the Cauchy problem for a quasi-linear equation, Uspehi Matem. Nauk 14 (1959), 165 - 170; English translation in Amer. Math. Soc. Transl. Ser. 2, Vol. 33 (1964), 285 - 290.
 Ordman E. T., Convergence almost everywhere is not topological, Amer. Math. Monthly 73 (1966), 182 - 183.
 Panov E. Y., Uniqueness of the Cauchy problem for a first order quasilinear equation with one admissible strictly convex entropy, Mat. Zametki 55 (1994), 116 - 129; English translation in Math. Notes 55 (1994), 517 - 525.
 Peressini A., Ordered Topological Vector Spaces, Harper and Row, New York (1967).
 Preuß G., Completion of semi-uniform convergence spaces, Appl. Categ. Structures 8 (2000), 463 - 473.
 Preuß G., An approach to convenient topology, Kluwer Academic Publishers (2002).
 Quinn B., Solutions with shocks: an example of an L1-contraction semigroup, Comm. Pure Appl. Math. 24 (1971), 125 - 132.
 Rosinger E. E., Nonlinear Partial Differential Equations: Sequential and Weak Solutions, North-Holland Mathematics Studies 44, North-Holland, New York (1980).
 Rosinger E. E., Generalized Solutions of Nonlinear Partial Differential Equations, North-Holland Mathematics Studies 146, Elsevier, New York (1987).
 Rosinger E. E., Nonlinear Partial Differential Equations: An Algebraic View of Generalized Solutions, North-Holland Mathematics Studies 164, North-Holland, New York (1990).
 Schaefer H. H., Topological Vector Spaces, Springer-Verlag, New York (1970).
 Sendov B., Hausdorff Approximations, Kluwer Academic, Dordrecht (1990).
 Sendov B., Anguelov R. and Markov S., On the linear space of Hausdorff continuous functions, Lecture delivered at the 11th GAMM-IMACS International Symposium on Scientific Computing, Computer Arithmetic, and Validated Numerics, 4 - 8 October 2004, Fukuoka, Japan.
 Serre D., Systems of Conservation Laws I: Hyperbolicity, Entropies, Shock Waves, Cambridge University Press, Cambridge (1999).
 Serre D., Systems of Conservation Laws II, Cambridge University Press, Cambridge (2000).
 Smoller J., Shock Waves and Reaction-Diffusion Equations, Springer-Verlag, New York, second edition (1994).
 Sobolev S. L., Méthode nouvelle à résoudre le problème de Cauchy pour les équations linéaires hyperboliques normales, Mat. Sb. 1(43) (1936), 39 - 72.
 Tadmor E., Burgers equation with vanishing hyper-viscosity, Communications in Math. Sciences 2(2) (2004), 317 - 324.
 Tartar L., Compensated compactness and applications to partial differential equations, Research Notes in Mathematics 39, Nonlinear Analysis and Mechanics, Heriot-Watt Symposium (R. J. Knops, ed.), Pitman Press, 4 (1975), 136 - 211.
 Tartar L., The compensated compactness method applied to systems of conservation laws, Systems of Nonlinear PDE (J. Ball, ed.), Reidel, Dordrecht (1983).
 Tartar L., The compensated compactness method for a scalar hyperbolic equation, Carnegie Mellon Univ. Lecture Notes in Mathematics 39, Nonlinear Analysis and Mechanics, Heriot-Watt Symposium (R. J. Knops, ed.), Pitman Press, 87 - 20 (1987).
 Van der Walt J. H., Order convergence of Archimedean vector lattices with applications, M.Sc. Thesis, University of Pretoria (2006).
 Van der Walt J. H., The Order Completion Method for systems of nonlinear PDEs: Pseudotopological perspectives, Acta Appl. Math. 103 (2008), 1 - 17.
 Van der Walt J. H., The uniform order convergence structure on ML(X), Quaest. Math. 31 (2008), 55 - 77.
 Van der Walt J. H., The Order Completion Method for systems of nonlinear PDEs revisited, Acta Appl. Math. 106 (2009), 149 - 176.
 Van der Walt J. H., The completion of uniform convergence spaces and an application to nonlinear PDEs, Quaest. Math. 32 (2009), 371 - 395.
 Van der Walt J. H., The Order Completion Method for systems of nonlinear PDEs: Solution to the initial value problems, Technical report UPWT2009/03, University of Pretoria (2009).
 Van der Walt J. H., Generalized solutions of systems of nonlinear partial differential equations, PhD Thesis, University of Pretoria (2009).
 Volpert A., The space BV and quasilinear equations, Mat. Sb. 73 (1967), 255 - 302.
 Walter W., Differential and Integral Inequalities, Springer-Verlag, Berlin (1970).
 Weil A., Sur les espaces à structures uniformes et sur la topologie générale, Hermann, Paris (1937).
 Wyler O., Ein Komplettierungsfunktor für uniforme Limesräume, Mathematische Nachrichten 46 (1970), 1 - 12.
 Xin Zhouping, Theory of viscous conservation laws, Some current topics in nonlinear conservation laws, AMS/IP Studies in Advanced Mathematics 15, Amer. Math. Soc., Providence (2000), 34 - 75.
 Xin Zhouping, Some current topics in nonlinear conservation laws, AMS/IP Studies in Advanced Mathematics 15, Amer. Math. Soc., Providence (2000), xi - xxxi.
 Yunguang L., Hyperbolic Systems of Conservation Laws and the Compensated Compactness Method, Monographs and Surveys in Pure and Applied Mathematics, Chapman & Hall/CRC, Florida (2003).
```