The L∞-norm of the L2-spline-projector
is bounded independently of the knot sequence:
A proof of de Boor’s conjecture
A. Yu. Shadrin
Institut für Geometrie und Praktische Mathematik
RWTH Aachen, Germany
on leave from
Computing Center, 630090 Novosibirsk, Russia
New postal address:
DAMTP
Cambridge University
Silver Street
Cambridge CB3 9EW
England
new e-mail:
[email protected]
Abstract
We prove that the L∞-norm of the L2-projector P onto the spline space S_k(∆) is bounded independently of the knot-sequence, i.e.,
  sup_∆ ‖P_{S_k(∆)}‖_∞ < c_k.
This proves a conjecture stated by de Boor in 1972. We make use of specific properties
of matrices associated with the null-splines, various determinant identities and elements
of combinatorics. Total positivity of the matrices involved plays the key role.
Key-words: Splines, L2 -projector, de Boor’s conjecture, totally positive matrices.
AMS subject classification: primary 41A15, secondary 15A45.
Contents

0 Introduction
  0.1 Preface
  0.2 Formulation of Theorem I
  0.3 Outline of the proof

1 Main ingredients of the proof
  1.1 B-splines and their properties
  1.2 L2-projector and the inverse of the B-spline Gramian
  1.3 Analytic version of de Boor's Lemma 1.2.4
  1.4 Main idea. Formulation of Theorem Φ
  1.5 Proof of Theorem I and its corollaries
  1.6 Proof of Theorem Φ: proof of (A1)
  1.7 An invariant
  1.8 Proof of Theorem Φ: proof of (A2)
  1.9 Vectors zν. Formulation of Theorem Z
  1.10 Proof of Theorem Φ: proof of (A3)

2 Proof of Theorem Z: intermediate estimates for zν
  2.1 Notation and auxiliary statements
  2.2 Reduction to a linear system of equations
    2.2.1 Derivatives of null-splines at knots
    2.2.2 The matrices B, B′, C
    2.2.3 Linear system for zν
  2.3 First estimates for zν
    2.3.1 Total positivity of the matrices A, B, C
    2.3.2 First estimate for z0
    2.3.3 First estimate for zν
  2.4 Properties of the matrices C
  2.5 Second estimates for zν

3 Proof of Theorem Z: final estimates for zν
  3.1 Preliminary remarks
  3.2 The matrices S and A
    3.2.1 The matrix S
    3.2.2 The matrix A
  3.3 The matrices Q
  3.4 A further strategy
  3.5 Minimal and maximal paths
  3.6 Characterization of E[β,i]
  3.7 Relation between the minors of Q and C
  3.8 Index relations
    3.8.1 The statement
    3.8.2 Proof: The case l = 0
    3.8.3 Proof: The case l ≠ 0
  3.9 Completion of the proof of Theorem Z
  3.10 Last but not least

4 Comments
  4.1 A survey of earlier and related results
    4.1.1 Earlier results
    4.1.2 L2-projector onto finite element spaces
    4.1.3 A general spline interpolation problem
    4.1.4 A problem for the multivariate Dk-splines
  4.2 On de Boor's Lemma 1.2.4
    4.2.1 Gram-matrix and de Boor's Lemma 1.2.4
    4.2.2 On the choice of the null-spline σ
  4.3 Simplifications in particular cases
  4.4 Additional facts
    4.4.1 Orthogonality of φ ∈ Sk(∆) to Sk−1(∆)
    4.4.2 Null-splines with Birkhoff boundary conditions at t0
    4.4.3 Further properties of the matrices C
  4.5 On the constant ck

Bibliography
Chapter 0
Introduction
0.1
Preface
1. Preface. In this paper we prove de Boor’s conjecture concerning the L2 spline
projector. The exact formulation is given in §0.2. Since the proof is rather long, it is
divided into three chapters, with an outline given in §0.3. For the same reason, all the
comments (historical notes, motivations, analysis of other methods, etc.) are moved to
the end of the paper. The proof is almost self-contained; we cite (without proof) only
some basic spline properties and determinant identities, and two somewhat more special
lemmas (accompanied by known simple proofs).
2. Notation. There is some mixture of notations. We use the familiar i, j both as
one- and multivariate indices, and we use p as p := k − 2 when dealing with k, the order
of the splines, while in other cases p is just an integer.
3. Acknowledgements. I am grateful to Prof. W. Dahmen for giving me the opportunity to work at the RWTH Aachen, and for his constant inspiring encouragement
of my studies. Thanks are extended to Prof. H. Esser, who took a lively part in discussions and provided many constructive suggestions. It is a pleasure to acknowledge that
Prof. C. de Boor, in spite of some consequences for his finances, took an active part at all
stages of the proof’s evolution. To him I am obliged for a lot of hints and remarks, in
particular, for essential simplification of some of my arguments and notations.
0.2
Formulation of Theorem I
1. For an integer k > 0, and a partition
∆ := ∆N := {a = t0 < t1 < · · · < tN = b},
denote by
S := Sk (∆) := Pk (∆) ∩ C k−2 [a, b]
the space of polynomial splines of order k (i.e., of degree < k) with the knot sequence ∆
satisfying k − 1 continuity conditions at each interior knot.
Consider P_S, the orthoprojector onto S with respect to the ordinary inner product
  (f, g) := ∫_a^b f g,
i.e.,
  (f, s) = (P_S f, s)  ∀s ∈ S.
We are interested in P_S as an operator from L∞ to L∞, i.e., in bounds for its norm
  ‖P_S‖_∞ := sup_f ‖P_S(f)‖_∞ / ‖f‖_∞.
In this paper we prove the following fact.
Theorem I. For any k, the L∞-norm of the L2-projector P onto the spline space S_k(∆) is bounded independently of ∆, i.e.,
  sup_∆ ‖P_{S_k(∆)}‖_∞ ≤ c_k.   (0.2.1)
This theorem proves the conjecture made by de Boor in 1972 in [B2]; see also §3.10 for details.
Earlier, the mesh-independent bound (0.2.1) was proved for k = 2, 3, 4. For k > 4, all previously known results established boundedness of ‖P_S‖_∞ only under certain restrictions on the mesh ∆. (See §4.1 for a survey of earlier and related results.)
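To make the quantity behind (0.2.1) concrete (this illustration is not part of the paper; all function names are mine), here is how it can be probed numerically in the simplest case k = 2, piecewise-linear splines, for which the uniform bound is classical. The Gram matrix G = {(M_i, N_j)} is built exactly over the rationals for an arbitrary knot sequence and inverted; by Lemma 1.2.1 of Chapter 1 its inverse norm dominates the projector norm.

```python
from fractions import Fraction as F

def gram_inverse_norm(knots):
    """Exact infinity-norm of G^{-1} for the k = 2 (hat-function) B-spline
    Gram matrix G = {(M_i, N_j)} on a strictly increasing knot sequence."""
    h = [F(knots[i + 1]) - F(knots[i]) for i in range(len(knots) - 1)]
    n = len(knots)                      # one hat function per knot t_0, ..., t_N
    # mass matrix (N_i, N_j): the standard piecewise-linear formulas
    mass = [[F(0)] * n for _ in range(n)]
    for i in range(n):
        if i > 0:
            mass[i][i] += h[i - 1] / 3
            mass[i][i - 1] = h[i - 1] / 6
        if i < n - 1:
            mass[i][i] += h[i] / 3
            mass[i][i + 1] = h[i] / 6
    # M_i = N_i / ||N_i||_{L1}, and ||N_i||_{L1} = (length of supp N_i) / 2
    supp = [(h[i - 1] if i > 0 else 0) + (h[i] if i < n - 1 else 0)
            for i in range(n)]
    G = [[2 * mass[i][j] / supp[i] for j in range(n)] for i in range(n)]
    # exact inverse by Gauss-Jordan elimination over the rationals
    aug = [row[:] + [F(int(i == j)) for j in range(n)]
           for i, row in enumerate(G)]
    for c in range(n):
        piv = next(r for r in range(c, n) if aug[r][c] != 0)
        aug[c], aug[piv] = aug[piv], aug[c]
        aug[c] = [x / aug[c][c] for x in aug[c]]
        for r in range(n):
            if r != c and aug[r][c] != 0:
                aug[r] = [x - aug[r][c] * y for x, y in zip(aug[r], aug[c])]
    return max(sum(abs(x) for x in row[n:]) for row in aug)
```

Even on wildly nonuniform meshes the value stays at or below 3, in line with the k = 2 case of the theorem.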
2. Some of the earlier restrictions on ∆ included spline spaces with multiple and/or (bi-)infinite knot-sequences; therefore two corollaries of Theorem I are worth mentioning.
The first extends the result to splines of lower smoothness, the so-called splines with multiple knots. For k and ∆ = (t_i)_0^N as given above, we introduce a sequence of smoothness parameters m := (m_i)_0^N, where 0 ≤ m_i ≤ k − 1, and denote by S_k(∆, m) the space of polynomial splines of order k with the knot sequence ∆ which, for every i, have m_i − 1 continuous derivatives in a neighbourhood of t_i. If all m_i are equal to m, then
  S_{k,m}(∆) := S_k(∆, (m, . . . , m)) = P_k(∆) ∩ C^{m−1}[a, b],   S_k(∆) = S_{k,k−1}(∆).
Corollary I. For any k,
  sup_{∆,m} ‖P_{S_k(∆,m)}‖_∞ ≤ c_k.   (0.2.2)
The second corollary extends Theorem I to the splines with (bi-)infinite knot-sequence
∆∞ := (ti ) and with smoothness parameters m∞ := (mi ). We denote the space of these
splines by Sk (∆∞ , m∞ ).
Corollary II. For any k,
  sup_{∆∞,m∞} ‖P_{S_k(∆∞,m∞)}‖_∞ ≤ c_k.   (0.2.3)

0.3
Outline of the proof
The proof is divided into three parts.
1. The first part (Chapter 1) describes the main ingredients of the proof.
Let (Mν), (Nν) be the L1-, respectively the L∞-normalized B-spline bases of S_k(∆) (see §1.1). Our starting point (§1.3) is the observation that if φ is a spline such that
  (A0) φ ∈ S_k(∆);
  (A1) (−1)^ν sign(φ, M_ν) = const ∀ν;
  (A2) |(φ, M_ν)| ≥ c_min ∀ν;
  (A3) ‖φ‖_∞ ≤ c_max;
then
  ‖P_{S_k(∆)}‖_∞ ≤ d_k · c_max/c_min.
This is an analytic version of de Boor's rather simple algebraic lemma (§1.2) on the inverse of a totally positive matrix applied to the Gram-matrix {(M_ν, N_λ)}.
Our main idea (§1.4) is the choice
  φ := σ^{(k−1)},  σ ∈ S_{2k−1}(∆),   (0.3.1)
where σ is the null-spline of even degree 2k − 2 such that
  σ(t_ν) = 0,  ν = 0, . . . , N;
  σ^{(l)}(t_0) = σ^{(l)}(t_N) = 0,  l = 1, . . . , k − 2;   (0.3.2)
  (1/(k−1)!) σ^{(k−1)}(t_N) = 1.
The main claim, Theorem Φ of §1.4, is that φ so defined satisfies the properties (A0 )-(A3 )
given above.
As we show in §§1.6-1.8, the choice (0.3.1) makes the most problematic property (A1 )
almost automatically fulfilled and provides also (A2 ) quite easily. To prove (A3 ), we use
for the components of the vector
zν = (zν(1) , . . . , zν(2p+1) ),
zν(l) :=
1 (l)
σ (tν ) · |hν |l−1−p ,
l!
p := k − 2,
(0.3.3)
(where |hν | := tν+1 − tν ), the following estimate
|zν(l) | ≤ ck ,
if
l ≥ p + 1,
ν ≤ N − k.
(0.3.4)
This estimate forms the content of Theorem Z in §1.9. The rest of the proof (Chapters
2-3) consists of deriving (0.3.4).
2. In Chapter 2, we show that, for each ν, the vector z_ν in (0.3.3) is a solution of a certain system of linear equations, and provide intermediate estimates for it.
The known linear equations (§2.2) connecting the derivatives z_ν of a null-spline at neighbouring knots are of the form
  z_{ν+1} = −D(ρ_ν) A z_ν,  ν = 0, . . . , N − 1.
Here ρ_ν := h_{ν+1}/h_ν is the local mesh ratio, and D(ρ), A are certain special matrices. For a fixed ν, this gives the equations
  B′ z_ν = z_0,  C z_ν = z_N,
with the matrices B′, C being products of A and D(ρ_s) in certain combinations. Our choice (0.3.2) of the null-spline σ provides the boundary conditions
  z_0 := (0, . . . , 0, z_0^{(p+1)}, . . . , z_0^{(2p+1)}),  z_N := (0, . . . , 0, 1, z_N^{(p+2)}, . . . , z_N^{(2p+1)}),
with p leading zeros in z_0 and p + 1 prescribed components in z_N. They allow us to determine the vector z_ν as the solution of the linear system of equations
  M z_ν = (0, . . . , 0, 1)^T ∈ R^{2p+1},   M = [ B′[p, :] ; C[p + 1, :] ],
where the matrix M is composed of the first p rows of B′ and the first p + 1 rows of C
(see §2.2). We solve this system explicitly by Cramer's rule,
  z_ν^{(l)} = (−1)^{2p+1+l} det M^{(l)} / det M,
and then apply the Laplace expansion by minors of B′ and C to both determinants. Some elementary inequalities then yield (§2.3) the first estimates:
  |z_ν^{(l)}| ≤ max_{i∈J_l} C(p, i^l)/C(p + 1, i′),  l = 1, . . . , 2p + 1.   (0.3.5)
Here, J, J_l are the sets of (multi-)indices of the form
  J := {i ∈ N^p : 1 ≤ i_1 < · · · < i_p ≤ 2p + 1},  J_l := {i ∈ J : i_s ≠ l ∀s};
bold n stands for the index (1, 2, . . . , n); i′ and i^l are two different complements of i ∈ J_l,
  i ∪ i′ = (2p + 1),  i ∪ i^l = (2p + 1) \ {l},
and C(i, j) are the corresponding minors (see §2.1 for detailed notation).
The orders of the minors on the right-hand side of (0.3.5) differ by one. We use some relations to equalize them and obtain (§2.5) the second estimate:
  |z_ν^{(l)}| ≤ c_p max_{i∈J_l} C(p, i^l)/C(p, i*),  l = 1, . . . , 2p + 1.   (0.3.6)
Here i* ∈ J is the index symmetric to i ∈ J_l, i.e., i*_s = 2p + 2 − i_{p+1−s}.
3. In Chapter 3, in §§3.3-3.7, we find a necessary and sufficient condition on the indices i, j ∈ J, written here i ≼ j, for the inequality
  C(p, i) ≤ c_p C(p, j).
In §3.8 we verify that, depending on l, the indices i^l and i* satisfy this condition, namely that
  i^{l_2} ≼ i* ≼ i^{l_1},  l_1 ≤ p + 1 ≤ l_2,
which gives
  C(p, i^l) ≤ c_p C(p, i*),  l ≥ p + 1.
Combined with (0.3.6) this proves (0.3.4) and hence Theorem I.
This part of the proof is a bit long and technical, and it would be interesting to find
simpler arguments (see §§4.3-4.4 of Comments for a discussion).
Chapter 1
Main ingredients of the proof
1.1
B-splines and their properties
As before, for k, N ∈ N, and a knot sequence
∆ = {a = t0 < t1 < · · · < tN = b},
the notation
Sk (∆) := Pk (∆) ∩ C k−2 [a, b]
stands for the space of polynomial splines of order k (i.e., of degree < k) on ∆.
The subintervals of ∆ and their lengths will be denoted by
Ij := (tj , tj+1 ),
|hj | := tj+1 − tj .
Let ∆^{(k)} = (t_i)_{i=−k+1}^{N+k−1} be an extended knot sequence such that
  a = t_{−k+1} = · · · = t_0 < t_1 < · · · < t_N = · · · = t_{N+k−1} = b.
By (N_j)_{j=−k+1}^{N−1} we denote the B-spline sequence of order k on ∆^{(k)} forming a partition of unity, i.e.,
  N_j(x) := N_{j,k}(x) := ([t_{j+1}, . . . , t_{j+k}] − [t_j, . . . , t_{j+k−1}])(· − x)_+^{k−1},
and by (M_j) the same sequence normalized with respect to the L1-norm:
  M_j(x) := M_{j,k}(x) := k [t_j, . . . , t_{j+k}](· − x)_+^{k−1} = (k/(t_{j+k} − t_j)) N_j(x).
The following lemmas are well-known.
Lemma 1.1.1 ([B4], Eqs. (4.2)-(4.5)) For any k and any ∆^{(k)}, one has
  supp N_j = [t_j, t_{j+k}],  N_j ≥ 0,  Σ_j N_j = 1,   (1.1.1)
  M_j(x) = (k/(t_{j+k} − t_j)) N_j(x),  ∫_{t_j}^{t_{j+k}} M_j(t) dt = 1.   (1.1.2)
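As a quick numerical cross-check of the partition of unity in (1.1.1) (this sketch is not part of the paper; the Cox-de Boor recursion used below is equivalent to the divided-difference definition above, and the function names are mine):

```python
def bspline(j, k, x, t):
    """N_{j,k}(x) by the Cox-de Boor recursion (0/0 := 0) on the knot list t."""
    if k == 1:
        return 1.0 if t[j] <= x < t[j + 1] else 0.0
    left = 0.0 if t[j + k - 1] == t[j] else \
        (x - t[j]) / (t[j + k - 1] - t[j]) * bspline(j, k - 1, x, t)
    right = 0.0 if t[j + k] == t[j + 1] else \
        (t[j + k] - x) / (t[j + k] - t[j + 1]) * bspline(j + 1, k - 1, x, t)
    return left + right

def partition_defect(k, knots):
    """Max deviation of sum_j N_j from 1 on [a, b), for the extended sequence
    Delta^(k) with k-fold end knots (0-based B-spline index here)."""
    a, b = knots[0], knots[-1]
    t = [a] * (k - 1) + list(knots) + [b] * (k - 1)
    xs = [a + (b - a) * s / 100 for s in range(100)]
    return max(abs(sum(bspline(j, k, x, t) for j in range(len(t) - k)) - 1.0)
               for x in xs)
```

The defect stays at rounding level for any order and any mesh, as (1.1.1) asserts.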
Lemma 1.1.2 ([B4 ], Th. 3.1) The B-spline sequence (Ni ) forms a basis for Sk (∆).
Lemma 1.1.3 ([B4], Th. 5.2) For any k, there exists a constant κ_k, the so-called B-spline basis condition number, such that, for any a = (a_j) and any ∆,
  κ_k^{−1} ‖a‖_{l∞} ≤ ‖Σ_j a_j N_j‖_{L∞} ≤ ‖a‖_{l∞}.   (1.1.3)
Lemma 1.1.4 ([Schu], Th. 4.53) Any spline s ∈ Sk (∆N ) has at most N + k − 2 zeros
counting multiplicities.
Lemma 1.1.5 ([B4], Eq. (4.6))
  M_{i,1}(x) = 1/(t_{i+1} − t_i),  x ∈ [t_i, t_{i+1}),  i = 0, . . . , N − 1;   (1.1.4)
  M′_{i,k}(x) = (k/(t_{i+k} − t_i)) [M_{i,k−1}(x) − M_{i+1,k−1}(x)],  i = −k + 1, . . . , N − 1.   (1.1.5)
We will need two more lemmas.
Lemma 1.1.6 Let M_i ∈ S_k(∆) be the L1-normalized B-spline. Then
  sign M_i^{(k−1)}|_{(t_{i+ν−1},t_{i+ν})} = (−1)^{ν−1},  ν = 1, . . . , k.   (1.1.6)
Proof. Follows by induction from (1.1.4)-(1.1.5).
Lemma 1.1.7 Let I_{i′} be a largest subinterval of supp M_i = [t_i, t_{i+k}]. Then
  |M_i^{(k−1)}(x)| = const ≥ |h_{i′}|^{−k},  x ∈ (t_{i′}, t_{i′+1}).   (1.1.7)
Proof. By induction. For k = 1 the lemma is true due to (1.1.4). Let x ∈ I_{i′}. From (1.1.5)-(1.1.6) we obtain
  |M_{i,k}^{(k−1)}(x)| = (k/(t_{i+k} − t_i)) |M_{i,k−1}^{(k−2)}(x) − M_{i+1,k−1}^{(k−2)}(x)|
                       = (k/(t_{i+k} − t_i)) (|M_{i,k−1}^{(k−2)}(x)| + |M_{i+1,k−1}^{(k−2)}(x)|)
                       ≥ (1/|h_{i′}|) · |h_{i′}|^{−(k−1)} = |h_{i′}|^{−k}.
1.2
L2-projector and the inverse of the B-spline Gramian
Consider PS , the orthogonal projector onto Sk (∆) with respect to the ordinary inner
product, i.e.,
(f, s) = (PS f, s), ∀s ∈ Sk (∆).
For N′ = N + k − 1, let G be the N′ × N′ matrix
  G = {(M_i, N_j)}_{i,j=−k+1}^{N−1}.
Lemma 1.2.1 [B1] For any k, ∆, one has
  ‖P_{S_k(∆)}‖_{L∞} ≤ ‖G^{−1}‖_{l∞}.
Proof. Let f ∈ L∞ and P_S(f) = Σ_j a_j(f) N_j, so that for a = (a_i(f))
  (Ga)_i := Σ_j (M_i, N_j) a_j(f) = (f, M_i) =: b_i(f).
By (1.1.3),
  ‖P_S(f)‖_{L∞} ≤ ‖a(f)‖_{l∞},
and by (1.1.1)-(1.1.2)
  ‖b(f)‖_{l∞} := max_i |(f, M_i)| ≤ ‖f‖_{L∞} · max_i ‖M_i‖_{L1} = ‖f‖_{L∞}.
Thus
  ‖P_S‖_∞ = sup_f ‖P_S(f)‖_{L∞}/‖f‖_{L∞} ≤ sup_f ‖a(f)‖_{l∞}/‖b(f)‖_{l∞} = sup_f ‖G^{−1}b(f)‖_{l∞}/‖b(f)‖_{l∞} ≤ ‖G^{−1}‖_∞,
as claimed.
Lemma 1.2.2 [B1] The matrix G is totally positive, i.e.,
  G( i_1, . . . , i_p ; j_1, . . . , j_p ) ≥ 0.
Lemma 1.2.3 [B1] The matrix G^{−1} := (g_{ij}^{(−1)}) is checkerboard, i.e.,
  |g_{ij}^{(−1)}| = (−1)^{i+j} g_{ij}^{(−1)}.
Proof. Let G_{ji} be the algebraic adjoint of g_{ji}. By Cramer's rule,
  g_{ij}^{(−1)} = (−1)^{i+j} det G_{ji}/det G,
and by Lemma 1.2.2 both determinants det G, det G_{ji} are non-negative.
Lemma 1.2.4 [B1] Let H^{−1} be a checkerboard matrix, and let a, b ∈ R^N be vectors such that Ha = b, and
  (a1) (−1)^i sign b_i = const ∀i;
  (a2) min_i |b_i| ≥ c_min;
  (a3) ‖a‖_∞ ≤ c_max.
Then
  ‖H^{−1}‖_∞ ≤ c_max/c_min.
Proof. Let a, b satisfy (a1)-(a3), and let
  H^{−1} := (h_{ij}^{(−1)}),  |h_{ij}^{(−1)}| = (−1)^{i+j} h_{ij}^{(−1)}.
Then
  |a_i| = |(H^{−1}b)_i| := |Σ_j h_{ij}^{(−1)} b_j| = Σ_j |h_{ij}^{(−1)} b_j| ≥ min_j |b_j| · Σ_j |h_{ij}^{(−1)}|.
Therefore,
  ‖a‖_∞ := max_i |a_i| ≥ min_j |b_j| · max_i Σ_j |h_{ij}^{(−1)}| = min_j |b_j| · ‖H^{−1}‖_∞.
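The mechanism of Lemma 1.2.4 can be exercised on a concrete example (not from the paper; names are mine): the k = 2 B-spline Gram matrix on a uniform mesh, a totally positive matrix whose inverse is checkerboard (cf. Lemma 1.2.3). With the alternating vector a = ((−1)^i), the sketch forms b = Ha, checks hypotheses (a1)-(a3), and compares the bound c_max/c_min with the exact norm of the inverse.

```python
from fractions import Fraction as F

def inf_norm_inverse(H):
    """Exact infinity-norm of H^{-1} via Gauss-Jordan over the rationals."""
    n = len(H)
    aug = [[F(x) for x in row] + [F(int(i == j)) for j in range(n)]
           for i, row in enumerate(H)]
    for c in range(n):
        piv = next(r for r in range(c, n) if aug[r][c] != 0)
        aug[c], aug[piv] = aug[piv], aug[c]
        aug[c] = [x / aug[c][c] for x in aug[c]]
        for r in range(n):
            if r != c and aug[r][c] != 0:
                aug[r] = [x - aug[r][c] * y for x, y in zip(aug[r], aug[c])]
    return max(sum(abs(x) for x in row[n:]) for row in aug)

def de_boor_bound(H, a):
    """Check (a1)-(a3) for b := Ha; return (cmax/cmin, exact norm of H^{-1})."""
    n = len(H)
    b = [sum(F(H[i][j]) * a[j] for j in range(n)) for i in range(n)]
    signs = {(-1) ** i * (1 if b[i] > 0 else -1) for i in range(n)}
    assert len(signs) == 1                     # hypothesis (a1)
    cmin = min(abs(x) for x in b)              # hypothesis (a2)
    cmax = max(abs(x) for x in a)              # hypothesis (a3)
    return cmax / F(cmin), inf_norm_inverse(H)
```

For the uniform k = 2 Gram matrix the bound evaluates to exactly 3, and the true inverse norm indeed lies below it.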
1.3
Analytic version of de Boor’s Lemma 1.2.4
Let a ∈ R^{N′} and let φ ∈ S_k(∆) be a spline of order k on ∆ that has the expansion
  φ = Σ_j a_j N_j.
Then, since G := {(M_i, N_j)}, one obtains
  b_i := (Ga)_i = Σ_j (M_i, N_j) a_j = (M_i, φ).
By Lemma 1.1.3, we also have
  ‖a‖_{l∞} ≤ κ_k ‖φ‖_{L∞},
where κ_k is the B-spline basis condition number. Using these two facts, Lemma 1.2.4 applied to the matrix G, combined with Lemma 1.2.1, implies the following statement.
Lemma 1.3.1 Let φ be any spline such that
  (A0) φ ∈ S_k(∆);
  (A1) (−1)^i sign(φ, M_i) = const ∀i;
  (A2) |(φ, M_i)| ≥ c_min(k) ∀i;
  (A3) ‖φ‖_∞ ≤ c_max(k).
Then
  ‖P_{S_k(∆)}‖_∞ ≤ κ_k c_max(k)/c_min(k).

1.4
Main idea: definition of φ via a null-spline σ. Formulation of Theorem Φ
Definition 1.4.1 Define the spline σ as the spline of even degree 2k − 2 on ∆, i.e.,
  σ ∈ S_{2k−1}(∆),   (1.4.1)
that satisfies the following conditions:
  σ(t_i) = 0,  i = 0, . . . , N;   (1.4.2)
  σ^{(l)}(t_0) = σ^{(l)}(t_N) = 0,  l = 1, . . . , k − 2;   (1.4.3)
  (1/(k − 1)!) σ^{(k−1)}(t_N) = 1.   (1.4.4)
The spline σ defined by (1.4.1)-(1.4.4) exists and is unique; see [Schu], Theorem 4.67. This fact will also follow from our further considerations, where we show that σ results from the solution of a system of linear equations with a non-singular matrix.
Our main idea is to define φ as follows.
Definition 1.4.2 Set
  φ(x) := σ^{(k−1)}(x).   (1.4.5)
Example 1.4.3 For k = 2, σ is a parabolic null-spline, and its first derivative φ = σ′ is the broken line that alternates between +1 and −1 at the knots, i.e.,
  φ = Σ_i (−1)^i N_i,  k = 2.
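The k = 2 example can be checked directly. The following sketch (not part of the paper; the function name is mine) builds the parabolic null-spline piecewise as σ(x) = c_i (x − t_i)(t_{i+1} − x) on each interval and confirms that φ = σ′ takes alternating values ±1 at the knots of an arbitrary mesh.

```python
def phi_knot_values(knots):
    """k = 2: sigma is c_i (x - t_i)(t_{i+1} - x) on [t_i, t_{i+1}].
    Continuity of sigma' at interior knots forces c_{i+1} h_{i+1} = -c_i h_i,
    and sigma'(t_N) = 1 fixes the scale; return the values phi(t_i)."""
    h = [b - a for a, b in zip(knots, knots[1:])]
    N = len(h)
    c = [1.0] + [0.0] * (N - 1)           # provisional leading coefficient
    for i in range(N - 1):
        c[i + 1] = -c[i] * h[i] / h[i + 1]
    scale = 1.0 / (-c[N - 1] * h[N - 1])  # enforce sigma'(t_N) = 1
    c = [ci * scale for ci in c]
    # phi(t_i) = sigma'(t_i+) = c_i h_i for i < N, phi(t_N) = -c_{N-1} h_{N-1}
    return [c[i] * h[i] for i in range(N)] + [-c[N - 1] * h[N - 1]]
```

Whatever the mesh, the knot values come out as ±1 with alternating sign, in line with the example.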
Our main result is the following theorem.
Theorem Φ. For any k there exist constants c_max(k), c_min(k) such that for any ∆_N with N ≥ 2k the spline φ defined via (1.4.5) satisfies the relations
  (A0) φ ∈ S_k(∆_N);
  (A1) (−1)^i sign(φ, M_i) = const ∀i;
  (A2) |(φ, M_i)| > c_min(k) ∀i;
  (A3) ‖φ‖_{L∞[t_i,t_{i+1}]} < c_max(k) ∀i.
Remark. The restriction N ≥ 2k is needed only in the proof of (A3 ).
Proof of A0 . Since σ ∈ S2k−1 (∆), clearly φ := σ (k−1) ∈ Sk (∆).
1.5
Proof of Theorem I and its corollaries
Proof of Theorem I. From Theorem Φ, by Lemma 1.3.1,
  ‖P_{S_k(∆_N)}‖_∞ ≤ c_k,  N ≥ 2k.
To complete the proof, it remains to cover the case N < 2k. As is known (see, e.g., [S1]),
  ‖P_{S_k(∆_N)}‖_∞ ≤ c(k, N),
hence
  ‖P_{S_k(∆_N)}‖_∞ ≤ c′_k,  N < 2k,
and finally
  ‖P_{S_k(∆)}‖_∞ ≤ c″_k  ∀∆.
Proof of Corollary I. Let (M_i), (N_i) be the B-spline sequences for the space S_k(∆, m) of splines with multiple knots, defined on the extended knot-sequence
  (τ_0, . . . , τ_{N′}) := (t_0, . . . , t_0, . . . , t_i, . . . , t_i, . . . , t_N, . . . , t_N),
where t_0 is repeated k − m_0 times, each t_i is repeated k − m_i times, and t_N is repeated k − m_N times. Further, let (M_i^{(n)}), (N_i^{(n)}) be the B-spline sequences on knot-sequences ∆^{(n)} = (t_j^{(n)}) chosen so that
  t_j^{(n)} < t_{j+1}^{(n)},  lim_{n→∞} t_j^{(n)} = τ_j.
Then, as is known,
  lim_{n→∞} (M_i^{(n)}, N_j^{(n)}) = (M_i, N_j),
whence, for the corresponding Gramians, we have
  ‖G^{−1}‖_∞ = lim_{n→∞} ‖(G^{(n)})^{−1}‖_∞ ≤ c_k,
where the last inequality is due to Theorem I. Thus,
  ‖P_{S_k(∆,m)}‖_∞ ≤ ‖G^{−1}‖_∞ ≤ c_k.
Proof of Corollary II. Let (M_i), (N_i) be the B-spline sequences for the space S_k(∆_∞, m_∞) of splines with a multiple (bi-)infinite knot-sequence. Then also
  ‖P_{S_k(∆_∞,m_∞)}‖_∞ ≤ ‖G_{∆_∞}^{−1}‖_∞,
where G_{∆_∞} := {(M_i, N_j)} is the corresponding (bi-)infinite Gram-matrix. By Corollary I, all of its finite principal submatrices G_{∆_N} are boundedly invertible. This implies that G_{∆_∞} is invertible, too, and
  ‖G_{∆_∞}^{−1}‖_∞ ≤ lim_{N→∞} ‖G_{∆_N}^{−1}‖_∞ ≤ c_k.
1.6
Proof of Theorem Φ: proof of (A1)
Lemma 1.6.1 The spline σ changes its sign exactly at the points (t_i)_{i=1}^{N−1}, i.e.,
  (−1)^i sign σ|_{(t_{i−1},t_i)} = const,  i = 1, . . . , N.
Proof. By definition (1.4.2)-(1.4.3), the spline σ ∈ S_{2k−1}(∆) has at least N + 1 + 2(k − 2) zeros counting multiplicities, and by Lemma 1.1.4 any spline from S_{2k−1}(∆) has at most N + (2k − 1) − 2 such zeros. Therefore, σ has no zeros other than those listed in (1.4.2)-(1.4.3).
Property (A1). Let φ be the spline (1.4.5). Then
  (−1)^i sign(φ, M_i) = const ∀i.
Proof of (A1). Integration by parts yields
  (φ, M_i) := ∫_{t_i}^{t_{i+k}} σ^{(k−1)}(t) M_i(t) dt
            = (−1)^{k−1} ∫_{t_i}^{t_{i+k}} σ(t) M_i^{(k−1)}(t) dt   (1.6.1)
              + Σ_{l=1}^{k−1} (−1)^{l+1} σ^{(k−1−l)}(x) M_i^{(l−1)}(x) |_{t_i}^{t_{i+k}}.
At the point x = t_i we have
  σ^{(k−1−l)}(t_i) = 0,  if t_i = t_0,  l = 1, . . . , k − 1;
  M_i^{(l−1)}(t_i) = 0,  if t_i > t_0,  l = 1, . . . , k − 1;
and similarly for x = t_{i+k}:
  σ^{(k−1−l)}(t_{i+k}) = 0,  if t_{i+k} = t_N,  l = 1, . . . , k − 1;
  M_i^{(l−1)}(t_{i+k}) = 0,  if t_{i+k} < t_N,  l = 1, . . . , k − 1.
Thus, the sum in (1.6.1) vanishes and
  (φ, M_i) := ∫_{t_i}^{t_{i+k}} σ^{(k−1)}(t) M_i(t) dt = (−1)^{k−1} ∫_{t_i}^{t_{i+k}} σ(t) M_i^{(k−1)}(t) dt.   (1.6.2)
Since both σ(t) and M_i^{(k−1)}(t) alternate in sign on the sequence of subintervals of [t_i, t_{i+k}], we have
  (−1)^i sign(φ, M_i) = (−1)^i · (−1)^{k−1} sign σ|_{(t_i,t_{i+1})} · sign M_i^{(k−1)}|_{(t_i,t_{i+1})}
                      = (−1)^i · (−1)^{k−1} · (−1)^i const · 1
                      = (−1)^{k−1} · const.
Hence,
  (−1)^i sign(φ, M_i) = const,  i = −k + 1, . . . , N − 1.

1.7
An invariant
For the proof of (A2 ) and for some further use in §2.4, we will need the following considerations.
Definition 1.7.1 For two functions f, g and n ∈ N, set
  G(f, g; x) := Σ_{l=0}^{n+1} (−1)^l f^{(l)}(x) g^{(n+1−l)}(x),
whenever the right-hand side makes sense.
Lemma 1.7.2 Let p, q be two polynomials of degree n + 1 on I. Then
  G(p, q; x) = const(p, q)  ∀x ∈ I.
Proof. It is readily seen that G′(p, q; x) = 0 for all x ∈ R, hence the statement.
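Lemma 1.7.2 is easy to confirm in exact arithmetic; here is a sketch (not from the paper; names are mine) with polynomials represented by coefficient lists, evaluating G(p, q; x) at several points:

```python
from fractions import Fraction as F

def poly_deriv(c):
    """Derivative of a polynomial given by coefficients c[m] of x^m."""
    return [F(m) * c[m] for m in range(1, len(c))]

def poly_eval(c, x):
    return sum(cm * x ** m for m, cm in enumerate(c))

def invariant(p, q, x):
    """G(p, q; x) = sum_{l=0}^{n+1} (-1)^l p^{(l)}(x) q^{(n+1-l)}(x),
    for p, q of degree n + 1 (so n + 1 = len(p) - 1)."""
    n1 = len(p) - 1
    dp, dq = [p], [q]
    for _ in range(n1):
        dp.append(poly_deriv(dp[-1]))
        dq.append(poly_deriv(dq[-1]))
    return sum((-1) ** l * poly_eval(dp[l], x) * poly_eval(dq[n1 - l], x)
               for l in range(n1 + 1))
```

The returned value is the same at every evaluation point, as the lemma asserts.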
Lemma 1.7.3 Let s_1, s_2 be two null-splines of degree n + 1 on ∆, i.e.,
  s_1, s_2 ∈ S_{n+2}(∆),  s_1(t_i) = s_2(t_i) = 0,  i = 0, . . . , N.   (1.7.1)
Then
  G(s_1, s_2; x) = const(s_1, s_2),  x ∈ [a, b].   (1.7.2)
Proof. By Lemma 1.7.2 the function G(s_1, s_2) is piecewise constant. On the other hand, since the continuity conditions on s_1, s_2 ∈ S_{n+2}(∆) imply the inclusion s_1, s_2 ∈ C^n[a, b], we have
  s_1^{(l)} s_2^{(n+1−l)}|_{t_i−0} = s_1^{(l)} s_2^{(n+1−l)}|_{t_i+0},  l = 1, . . . , n,
and due to the null values of s_1, s_2 on ∆ also
  s_1^{(l)} s_2^{(n+1−l)}|_{t_i−0} = s_1^{(l)} s_2^{(n+1−l)}|_{t_i+0} = 0,  l = 0 and l = n + 1,
i.e., the function G(s_1, s_2) is continuous.
As a corollary, we obtain
Lemma 1.7.4 Let σ ∈ S_{2k−1}(∆) be the null-spline defined in (1.4.1)-(1.4.3). Then
  H(x) := [σ^{(k−1)}(x)]^2 + 2 Σ_{l=1}^{k−1} (−1)^l σ^{(k−1−l)}(x) σ^{(k−1+l)}(x) = (k − 1)!^2.   (1.7.3)
Proof. The function H is obtained from G(s_1, s_2) by setting s_1 = s_2 = σ and n + 1 = 2k − 2; precisely,
  H(x) = (−1)^{k−1} G(σ, σ; x).
Therefore, by (1.7.2), it is a constant function. The boundary conditions on σ at t_N are
  σ^{(l)}(t_N) = 0,  l ≤ k − 2;  σ^{(k−1)}(t_N) = (k − 1)!,
therefore for x = t_N the sum in (1.7.3) vanishes, i.e.,
  H(t_N) = [σ^{(k−1)}(t_N)]^2 := (k − 1)!^2.
Thus,
  H(x) = H(t_N) = (k − 1)!^2  ∀x ∈ [a, b].
Lemma 1.7.5 We have
  (1/(k − 1)!) |σ^{(k−1)}(t_0)| = 1.   (1.7.4)
Proof. The boundary conditions (1.4.3) on σ at t_0 are
  σ^{(l)}(t_0) = 0,  l ≤ k − 2.
Therefore, for x = t_0, the sum in (1.7.3) vanishes, i.e.,
  H(t_0) = [σ^{(k−1)}(t_0)]^2.
On the other hand, by (1.7.3), H(t_0) = (k − 1)!^2.
1.8
Proof of Theorem Φ: proof of (A2)
For the proof of (A2 ), we need the following estimate.
Lemma 1.8.1 There exists a positive constant c_k such that the inequality
  ‖σ‖_{L1[t_i,t_{i+1}]} ≥ c_k |h_i|^k   (1.8.1)
holds uniformly in i.
Proof. By (1.7.3), we have
  (k − 1)!^2 = H(t_i) := [σ^{(k−1)}(t_i)]^2 + 2 Σ_{m=1}^{k−2} (−1)^m σ^{(k−1−m)}(t_i) σ^{(k−1+m)}(t_i)
             = [σ^{(k−1)}(t_i)]^2 + 2 Σ_{m=1}^{k−2} (−1)^m [σ^{(k−1−m)}(t_i) · |h_i|^{−m}] · [σ^{(k−1+m)}(t_i) · |h_i|^m].
From the latter equality it follows that
  max_{|m|≤k−2} |σ^{(k−1+m)}(t_i)| · |h_i|^m ≥ c_k,
or, equivalently,
  max_{1≤l≤2k−3} |σ^{(l)}(t_i)| · |h_i|^{l+1} ≥ c_k |h_i|^k.   (1.8.2)
By the Markov inequality for polynomials,
  ‖σ‖_{L1[t_i,t_{i+1}]} ≥ c_l |h_i|^{l+1} ‖σ^{(l)}‖_{L∞[t_i,t_{i+1}]}  ∀l,
so that, making use of (1.8.2), we obtain
  ‖σ‖_{L1[t_i,t_{i+1}]} ≥ c′_k |h_i|^k.
Property (A2). There exists a positive constant c_min(k), depending only on k, such that, for any ∆, the spline φ defined in (1.4.5) satisfies the relation
  |(φ, M_i)| ≥ c_min(k),  i = −k + 1, . . . , N − 1.
Proof of (A2). Let I_{i′} be a largest subinterval of supp M_i := [t_i, t_{i+k}]. Since
  sign σ(t) · sign M_i^{(k−1)}(t) = const,  t ∈ [t_i, t_{i+k}],
we have, using (1.6.2),
  |(φ, M_i)| := |∫_{t_i}^{t_{i+k}} σ^{(k−1)}(t) M_i(t) dt| = |∫_{t_i}^{t_{i+k}} σ(t) M_i^{(k−1)}(t) dt|
             = ∫_{t_i}^{t_{i+k}} |σ(t) M_i^{(k−1)}(t)| dt ≥ ∫_{t_{i′}}^{t_{i′+1}} |σ(t) M_i^{(k−1)}(t)| dt
             = |M_i^{(k−1)}(x_{i′})| · ‖σ‖_{L1[t_{i′},t_{i′+1}]},
and due to (1.8.1) and (1.1.7)
  |(φ, M_i)| ≥ c_k c′_k =: c_min(k).
1.9
Vectors zν . Formulation of Theorem Z
Theorem Z formulated below enables us to verify in the next section the last condition
(A3 ) of Theorem Φ.
Definition 1.9.1 Set
  z_i := (z_i^{(1)}, . . . , z_i^{(2k−3)}) ∈ R^{2k−3},  i = 0, . . . , N − 1,   (1.9.1)
with
  z_i^{(l)} := (1/l!) σ^{(l)}(t_i) · |h_i|^{l−k+1},  l = 1, . . . , 2k − 3.   (1.9.2)
In the rest of the paper we are going to prove the following theorem.
Theorem Z. There exists a constant c_k depending only on k such that, for N ≥ k, the estimates
  |z_i^{(l)}| ≤ c_k,  l ≥ k − 1,  i = 0, . . . , N − k,   (1.9.3)
hold uniformly in i and l.
This theorem almost evidently implies the estimate
  ‖φ‖_{L∞[t_i,t_{i+1}]} := ‖σ^{(k−1)}‖_{L∞[t_i,t_{i+1}]} ≤ c′_k,  i ≤ N − k,
which coincides with (A3) except for the indices i > N − k. In the next section we prove this implication and show how to cover, for N ≥ 2k, the case i > N − k of (A3).
1.10
Proof of Theorem Φ: proof of (A3)
Property (A3). There exists a constant c_max(k) depending only on k such that, for any ∆_N with N ≥ 2k, the spline φ defined in (1.4.5) satisfies the relation
  ‖φ‖_{L∞[t_i,t_{i+1}]} ≤ c_max(k)  ∀i.   (1.10.1)
Proof of (A3). 1) The case N ≥ 2k, i ≤ N − k. In this case, by (1.9.3) of Theorem Z, and by definitions (1.9.2) and (1.4.5), we have
  (1/m!) |φ^{(m)}(t_i)| · |h_i|^m = (1/m!) |σ^{(k−1+m)}(t_i)| · |h_i|^m = ((k − 1 + m)!/m!) |z_i^{(k−1+m)}| ≤ c′_k,  m = 0, . . . , k − 2.
On [t_i, t_{i+1}] the spline φ := σ^{(k−1)} is an algebraic polynomial of degree k − 1, and by Taylor expansion,
  φ(t_{i+1}) = Σ_{m=0}^{k−1} (1/m!) φ^{(m)}(t_i) |h_i|^m.
Hence,
  (1/(k−1)!) |φ^{(k−1)}(t_i)| · |h_i|^{k−1} ≤ |φ(t_{i+1})| + Σ_{m=0}^{k−2} (1/m!) |φ^{(m)}(t_i)| · |h_i|^m ≤ k · c′_k,
and finally
  ‖φ‖_{L∞[t_i,t_{i+1}]} ≤ Σ_{m=0}^{k−1} (1/m!) |φ^{(m)}(t_i)| · |h_i|^m ≤ (2k − 1) · c′_k =: c_max(k),  i ≤ N − k,  N ≥ 2k.
2) The case N ≥ 2k, i ≥ N − k. Let σ̃ be the null-spline defined by the same interpolation and boundary conditions (1.4.2)-(1.4.3) as σ, but with the normalization at the left end-point
  (1/(k − 1)!) σ̃^{(k−1)}(t_0) = 1.
Accordingly, we set
  φ̃ := σ̃^{(k−1)}.
Then, due to symmetry, by Theorem Z applied to σ̃, we obtain
  ‖φ̃‖_{L∞[t_i,t_{i+1}]} ≤ c_max(k),  i ≥ k.
On the other hand, we established in (1.7.4) that
  (1/(k − 1)!) σ^{(k−1)}(t_0) = ±1.
This implies the equality φ̃ = ±φ and, correspondingly, the estimate
  ‖φ‖_{L∞[t_i,t_{i+1}]} ≤ c_max(k),  i ≥ k.
If N ≥ 2k, then N − k ≥ k, and thus
  ‖φ‖_{L∞[t_i,t_{i+1}]} < c_max(k),  i ≥ N − k,  N ≥ 2k.
This completes the proof of Theorem Φ.
Remark. The size and structure of the proof of Theorem Z (that is, of (A3)) given in the next two chapters stand in sharp contrast to the short proofs of (A1)-(A2) given above. We conclude this chapter with a conjecture which could prove useful in finding a simpler proof of (A3).
Conjecture 1.10.1 Let φ := σ (k−1) be the spline (1.4.5). Then it takes its maximal
absolute values at the endpoints, i.e.,
  |φ(x)| ≤ |φ(a)|  (= |φ(b)| = (k − 1)!)  ∀x ∈ [a, b].
In particular, the sum in (1.7.3) is always nonnegative, and zero only if x is a knot of
high multiplicity.
Chapter 2
Proof of Theorem Z:
intermediate estimates for zν
2.1
Notation and auxiliary statements
Let U be any n × n matrix. We denote by
  U[α, β] := U( α_1, . . . , α_p ; β_1, . . . , β_q )
the submatrix of U (not necessarily square) whose (s, t)-entry is U[α_s, β_t], with α and β sequences (indices) with increasing entries. The default sequence (:) stands for the sequence of all possible entries. So, U[α, :] is the matrix made up from rows α_1, . . . , α_p of U. The sequence (\s) stands for all entries but the one numbered s. For example, U[\1, \l+1] is the matrix made up from rows 2, . . . , n and columns 1, . . . , l, l + 2, . . . , n of U.
The notation
  U(α, β) := det U[α, β] = U( α_1, . . . , α_p ; β_1, . . . , β_p )
(now with #α = #β) stands for the corresponding subdeterminant.
A matrix U is called totally positive (TP) if
U (α, β) ≥ 0 ∀ α, β.
As was already mentioned, by indices we mean sequences with increasing entries. For
convenience we will also view indices as sets when writing, e.g., α ⊂ β to express that
the components of α appear also in β.
For n ∈ N, the bold n denotes the index (1, 2, . . . , n). Further,
  I_{p,n} := {i ⊂ n : #i = p} = {(i_s)_{s=1}^p : 1 ≤ i_1 < · · · < i_p ≤ n}.
For the special case n = 2p + 1 we set
  J := I_{p,2p+1},  J_l := {i ∈ J : l ∉ i},  l = 1, . . . , 2p + 1.
For i ∈ I_{p,n}, its complement i′ and its conjugate index i* are given, respectively, by
  i′ ∈ I_{n−p,n},  i′ := n \ i;   i* ∈ I_{p,n},  i* := (n + 1 − i_p, . . . , n + 1 − i_1).
For i ∈ J_l, we define also the l-complement
  i^l ∈ J_l,  i^l := i′ \ {l}.
Finally, for two indices i, j ∈ I_{p,n}, we denote
  i ≤ j ⇔ i_s ≤ j_s ∀s,   |i| := Σ_s i_s.
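The index operations just introduced are easy to get wrong by one; the following sketch (not part of the paper; names are mine) implements i′, i*, and i^l for 1-based indices and checks them on a small case.

```python
def complement(i, n):
    """i' := n \\ i, as an increasing tuple of 1-based entries."""
    return tuple(s for s in range(1, n + 1) if s not in i)

def conjugate(i, n):
    """i* := (n + 1 - i_p, ..., n + 1 - i_1)."""
    return tuple(n + 1 - s for s in reversed(i))

def l_complement(i, l, n):
    """i^l := i' \\ {l}, defined for i in J_l (so l not in i)."""
    assert l not in i
    return tuple(s for s in complement(i, n) if s != l)
```

For p = 2, n = 2p + 1 = 5, and i = (1, 4) ∈ J_3, one gets i′ = (2, 3, 5), i* = (2, 5), and i^3 = (2, 5), and i ∪ i′ recovers (2p + 1).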
The following lemmas will be used frequently (see [Ka], pp. 1-6).
Lemma 2.1.1 (Cauchy-Binet Formula) If U, V, W ∈ R^{n×n} and U = V W, then for any i, j ∈ I_{p,n}
  U(i, j) = Σ_{α∈I_{p,n}} V(i, α) W(α, j).
This relation will be referred to as ‘the CB-formula’ for short.
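The CB-formula can be sanity-checked mechanically; below is a small sketch (not from the paper; names are mine) that verifies it for integer matrices by brute force over all p-indices.

```python
from itertools import combinations

def det(M):
    """Determinant by first-row expansion (fine for small matrices)."""
    if not M:
        return 1
    return sum((-1) ** c * M[0][c] *
               det([row[:c] + row[c + 1:] for row in M[1:]])
               for c in range(len(M)))

def minor(M, rows, cols):
    return det([[M[r][c] for c in cols] for r in rows])

def cauchy_binet_holds(V, W, p):
    """Check U(i, j) = sum_alpha V(i, alpha) W(alpha, j) for U = V W."""
    n = len(V)
    U = [[sum(V[i][k] * W[k][j] for k in range(n)) for j in range(n)]
         for i in range(n)]
    idx = list(combinations(range(n), p))
    return all(minor(U, i, j) ==
               sum(minor(V, i, a) * minor(W, a, j) for a in idx)
               for i in idx for j in idx)
```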
Lemma 2.1.2 (Inverse Determinants) If V = U^{−1}, then for any i, j ∈ I_{p,n} we have
  V(i, j) = (−1)^{|i+j|} U(j′, i′)/det U.
Lemma 2.1.3 (Laplace Expansion by Minors) For any fixed index i ∈ I_{p,n}, we have
  det U = Σ_{α∈I_{p,n}} (−1)^{|i+α|} U(i, α) U(i′, α′).
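Lemmas 2.1.2 and 2.1.3 can likewise be verified by brute force. The sketch below (not from the paper; names are mine; 0-based indices, which leaves all signs unchanged since the index sums shift by the even number 2p) checks both identities on a rational 4 × 4 matrix.

```python
from fractions import Fraction as F
from itertools import combinations

def det(M):
    """Determinant by first-row expansion (adequate for small matrices)."""
    if not M:
        return F(1)
    return sum((-1) ** c * M[0][c] *
               det([row[:c] + row[c + 1:] for row in M[1:]])
               for c in range(len(M)))

def minor(M, rows, cols):
    return det([[M[r][c] for c in cols] for r in rows])

def inverse(M):
    """Adjugate formula: (M^-1)_{ij} = (-1)^{i+j} det(M del row j, col i)/det M."""
    n, d = len(M), det(M)
    return [[(-1) ** (i + j) *
             det([[M[r][c] for c in range(n) if c != i]
                  for r in range(n) if r != j]) / d
             for j in range(n)] for i in range(n)]

def check_identities(U, p):
    """Verify Lemma 2.1.2 (inverse determinants) and 2.1.3 (Laplace) on U."""
    n, d, V = len(U), det(U), inverse(U)
    idx, full = list(combinations(range(n), p)), set(range(n))
    ok = True
    for i in idx:
        ic = tuple(sorted(full - set(i)))
        lap = sum((-1) ** (sum(i) + sum(a)) * minor(U, i, a) *
                  minor(U, ic, tuple(sorted(full - set(a)))) for a in idx)
        ok = ok and lap == d                 # Laplace expansion along rows i
        for j in idx:
            jc = tuple(sorted(full - set(j)))
            ok = ok and minor(V, i, j) == \
                (-1) ** (sum(i) + sum(j)) * minor(U, jc, ic) / d
    return ok
```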
We will also use the following estimate.
Lemma 2.1.4 Let q ∈ N, and a_s, b_s, c_s ≥ 0. Then
  min_s (b_s/c_s) ≤ (Σ_{s=1}^{q} a_s b_s)/(Σ_{s=1}^{q} a_s c_s) ≤ max_s (b_s/c_s).   (2.1.1)
Proof. Let
  ε := min_s (b_s/c_s),  ε̄ := max_s (b_s/c_s).
Then ε c_s ≤ b_s ≤ ε̄ c_s, and
  ε Σ_{s=1}^{q} a_s c_s ≤ Σ_{s=1}^{q} a_s b_s ≤ ε̄ Σ_{s=1}^{q} a_s c_s.

2.2
Reduction to a linear system of equations

2.2.1
Derivatives of null-splines at knots
Let q be a null spline on ∆ of degree n + 1, i.e.,
  q ∈ S_{n+2}(∆),  q(t_ν) = 0  ∀ν.
Set
  q_ν := (q_ν^{(1)}, . . . , q_ν^{(n)}) ∈ R^n,  q_ν^{(l)} := (1/l!) q^{(l)}(t_ν),  l = 0, . . . , n + 1.
On [t_ν, t_{ν+1}], q is an algebraic polynomial, and by Taylor expansion of q at x = t_ν we
obtain

    (1/i!) q^{(i)}(t_{ν+1}) = (1/i!) Σ_{j=i}^{n+1} (1/(j−i)!) q^{(j)}(t_ν) · |h_ν|^{j−i}
                            = Σ_{j=i}^{n+1} (j!/(i!(j−i)!)) (1/j!) q^{(j)}(t_ν) · |h_ν|^{j−i},

i.e.,

    q_{ν+1}^{(i)} · |h_ν|^i = Σ_{j=i}^{n+1} binom(j, i) q_ν^{(j)} · |h_ν|^j.

Since q_ν^{(0)} = q_{ν+1}^{(0)} = 0, we have

    q_ν^{(n+1)} · |h_ν|^{n+1} = − Σ_{j=1}^{n} q_ν^{(j)} · |h_ν|^j,

and hence

    q_{ν+1}^{(i)} · |h_ν|^i = Σ_{j=i}^{n} [binom(j, i) − binom(n+1, i)] q_ν^{(j)} · |h_ν|^j,        i = 1, . . . , n.

For the vectors q_ν we have therefore the equality

    D_0(h_ν) q_{ν+1} = −A D_0(h_ν) q_ν,        (2.2.1)

where A is the n × n matrix given by

    A = { binom(n+1, i) − binom(j, i) }_{i,j=1}^{n}        (2.2.2)

and

    D_0(h) = diag ⌈h, h^2, . . . , h^n⌋.        (2.2.3)

By Taylor expansion of q at x = t_{ν+1}, we conclude that

    D_0(−h_ν) q_ν = −A D_0(−h_ν) q_{ν+1},

so that in view of (2.2.1)

    A^{−1} = D_0 A D_0,        (2.2.4)

with

    D_0 := D_0(−1) = diag ⌈−1, 1, −1, 1, . . .⌋.        (2.2.5)
It is more convenient to employ another scaling of q_ν in (2.2.1), namely by the matrix

    D_h := D(h) := h^{−n/2−1/2} D_0(h) = diag ⌈h^{−n/2+1/2}, h^{−n/2+3/2}, . . . , h^{n/2−1/2}⌋,        (2.2.6)

which satisfies det D(h) = 1. Then we also have the equality

    D(h_ν) q_{ν+1} = −A D(h_ν) q_ν,

which may be rewritten as

    D(h_{ν+1}) q_{ν+1} = −D(h_{ν+1}/h_ν) A D(h_ν) q_ν.        (2.2.7)
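The identity (2.2.4) is easy to confirm numerically from the definitions (2.2.2) and (2.2.5) (a small sketch, with 0-based Python indices standing in for the paper's 1-based ones):

```python
# Check of (2.2.2), (2.2.4), (2.2.5): with a_ij = binom(n+1,i) - binom(j,i)
# and D0 = diag(-1, 1, -1, ...), the identity A^{-1} = D0 A D0 holds.
from math import comb

def build_A(n):
    # The n x n matrix A of (2.2.2), rows/cols i, j = 1..n.
    return [[comb(n + 1, i) - comb(j, i) for j in range(1, n + 1)]
            for i in range(1, n + 1)]

for n in (2, 3, 4, 5):
    A = build_A(n)
    # (D0 A D0)[i][j] = (-1)^{i+j} a_{ij}
    B = [[(-1) ** (i + j) * A[i][j] for j in range(n)] for i in range(n)]
    prod = [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]
    assert prod == [[1 if i == j else 0 for j in range(n)] for i in range(n)]
print("A^{-1} = D0 A D0 verified for n = 2..5")
```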
2.2.2  The matrices B, B′, C

Set

    y_ν := D(h_ν) q_ν,  ν < N;        y_N := D(h_{N−1}) q_N,

i.e., for a null spline q ∈ S_{n+2}(∆), we define the vectors

    y_ν := (y_ν^{(1)}, . . . , y_ν^{(n)}) ∈ R^n,

with the components

    y_ν^{(l)} := (1/l!) q^{(l)}(t_ν) · |h_ν|^{l−(n+1)/2},        ν = 0, . . . , N − 1;
    y_N^{(l)} := (1/l!) q^{(l)}(t_N) · |h_{N−1}|^{l−(n+1)/2}.

Set also

    ρ_ν := h_{ν+1}/h_ν.

Then from (2.2.7) it follows that the vectors y_ν are connected by the rules

    y_{ν+1} = −D(ρ_ν) A y_ν,        ν = 0, . . . , N − 2,
    y_N = −A y_{N−1},

and

    y_{ν−1} = −D_0 A D(1/ρ_{ν−1}) D_0 y_ν,        ν = 1, . . . , N − 1,
    y_{N−1} = −D_0 A D_0 y_N.

Now fix an index ν. Then we have two systems of equations

    C y_ν = (−1)^{N−ν} y_N,        B′ y_ν = (−1)^ν y_0,        (2.2.8)

with

    C := C_{N−ν} := A D(ρ_{N−1}) A D(ρ_{N−2}) · · · A D(ρ_ν) A,
    B := B_ν := A D(1/ρ_0) A D(1/ρ_1) · · · A D(1/ρ_{ν−1}),
    B′ := B_ν′ := D_0 B D_0.        (2.2.9)

2.2.3  Linear system for z_ν
Now we rewrite formula (2.2.8) for our special null-spline σ ∈ S_{2k−2}(∆) defined in (1.4.1)–
(1.4.4). For the sake of brevity, set

    p := k − 2.

Then the corresponding vectors are

    z_ν := (z_ν^{(1)}, . . . , z_ν^{(2p+1)}) ∈ R^{2p+1},        ν = 0, . . . , N,

with

    z_ν^{(l)} := (1/l!) σ^{(l)}(t_ν) · |h_ν|^{l−(p+1)},        ν = 0, . . . , N − 1;
    z_N^{(l)} := (1/l!) σ^{(l)}(t_N) · |h_{N−1}|^{l−(p+1)},        ν = N.

Moreover, by definition (1.4.2)–(1.4.4) of σ, we know that

    z_0 = (0, . . . , 0, z_0^{(p+1)}, . . . , z_0^{(2p+1)}),        with p = k − 2 leading zeros,
    z_N = (0, . . . , 0, 1, z_N^{(p+2)}, . . . , z_N^{(2p+1)}),     with p = k − 2 leading zeros.
By (2.2.8), we have two systems of equations

    B′ z_ν = (−1)^ν z_0,        C z_ν = (−1)^{N−ν} z_N,

or, in view of the prescribed values of the first components of z_0, z_N,

    B′ z_ν = (−1)^ν (0, . . . , 0, z_0^{(p+1)}, z_0^{(p+2)}, . . . , z_0^{(2p+1)})^T,
    C z_ν = (−1)^{N−ν} (0, . . . , 0, 1, z_N^{(p+2)}, . . . , z_N^{(2p+1)})^T,        ν > 0,

where the first right-hand side starts with p = k − 2 zeros, and the second with p zeros
followed by the entry 1 in position p + 1 = k − 1. According to the notation introduced
in §2.1, the upper half of these equations can be written as

    B′[p, :] × z_ν(:) = (−1)^ν (0, . . . , 0)^T ∈ R^p,
    C[p + 1, :] × z_ν(:) = (−1)^{N−ν} (0, . . . , 0, 1)^T ∈ R^{p+1}.

For ν = 0 we have C_N z_0 = (−1)^N z_N, or

    C z_0 := C × (0, . . . , 0, z_0^{(p+1)}, z_0^{(p+2)}, . . . , z_0^{(2p+1)})^T
           = (−1)^N (0, . . . , 0, 1, z_N^{(p+2)}, . . . , z_N^{(2p+1)})^T,        ν = 0,

with p = k − 2 leading zeros on either side. In terms of the unknowns
z̃_0 := (z_0^{(p+1)}, z_0^{(p+2)}, . . . , z_0^{(2p+1)}), and in our notation, the upper half of this
system is equivalent to

    C[p + 1, p′] × z̃_0 = (−1)^N (0, . . . , 0, 1)^T ∈ R^{p+1}.

In summary, we can form one system with a known right-hand side and obtain the
following result.
Theorem 2.2.1 Let

    z_ν := (z_ν^{(1)}, . . . , z_ν^{(2p+1)}),        z_ν^{(l)} := (1/l!) σ^{(l)}(t_ν) · |h_ν|^{l−(p+1)},
    z̃_0 := (z_0^{(p+1)}, z_0^{(p+2)}, . . . , z_0^{(2p+1)}).

Then the vector z_ν ∈ R^{2p+1} is a solution to the system

    M z_ν = (−1)^{N−ν} (0, . . . , 0, 1)^T ∈ R^{2p+1},        M := [ B′[p, :] ; C[p + 1, :] ],        ν > 0,        (2.2.10)

and the vector z̃_0 ∈ R^{p+1} is a solution to the system

    M_0 z̃_0 = (−1)^N (0, . . . , 0, 1)^T ∈ R^{p+1},        M_0 := C[p + 1, p′].        (2.2.11)
2.3  First estimates for z_ν

2.3.1  Total positivity of the matrices A, B, C
By definition (2.2.9),

    C := C_{N−ν} := A D_{γ_1} A D_{γ_2} · · · A D_{γ_{N−ν}} A,
    B := B_ν := A D_{δ_1} A D_{δ_2} · · · A D_{δ_ν},
    B′ := B_ν′ := D_0 B D_0,

where γ_s, δ_s are some positive numbers.

Lemma 2.3.1 The matrix A is totally positive.

Proof. See e.g. [BS]. We present another proof in §3.2.2.

Lemma 2.3.2 The matrices B and C are totally positive.

Proof. By Lemma 2.3.1, the matrix A is totally positive, and so is D(γ), as a diagonal
matrix with positive entries. By the CB-formula, the product of TP-matrices is a TP-
matrix.

Lemma 2.3.3 For any ν ∈ N, we have

    B_ν′(i, j) = (−1)^{|i+j|} B_ν(i, j).        (2.3.1)

Proof. By definition, we have D_0 := diag ⌈(−1)^l⌋; thus, by the CB-formula,

    B_ν′(i, j) = D_0(i, i) B_ν(i, j) D_0(j, j).

But since D_0(i, i) = (−1)^{|i|} and D_0(j, j) = (−1)^{|j|}, the statement follows.
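Lemma 2.3.2 can be illustrated by a brute-force minor check on a small example (a sketch, not part of the argument; it takes A as in (2.2.2), whose total positivity is proved in §3.2.2, and a random positive diagonal factor):

```python
# Brute-force check that all minors of A D A (A totally positive,
# D a positive diagonal matrix) are nonnegative, as Lemma 2.3.2 asserts.
from itertools import combinations
from math import comb
import random

def det(m):
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** c * m[0][c] * det([r[:c] + r[c + 1:] for r in m[1:]])
               for c in range(len(m)))

def all_minors_nonneg(m):
    n = len(m)
    for p in range(1, n + 1):
        for rows in combinations(range(n), p):
            for cols in combinations(range(n), p):
                if det([[m[r][c] for c in cols] for r in rows]) < 0:
                    return False
    return True

random.seed(2)
n = 4
A = [[comb(n + 1, i) - comb(j, i) for j in range(1, n + 1)] for i in range(1, n + 1)]
d = [random.randint(1, 5) for _ in range(n)]          # a positive diagonal D
ADA = [[sum(A[i][k] * d[k] * A[k][j] for k in range(n)) for j in range(n)]
       for i in range(n)]
assert all_minors_nonneg(A) and all_minors_nonneg(ADA)
print("A and A*D*A are totally positive for n =", n)
```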
2.3.2  First estimate for z_0

Theorem 2.3.4 The solution z̃_0 = (z_0^{(p+1)}, . . . , z_0^{(2p+1)})^T to the problem

    M_0 z̃_0 = (0, . . . , 0, 1)^T ∈ R^{p+1},        M_0 := C[p + 1, p′]        (2.3.2)

satisfies the relation

    |z_0^{(l)}| = C(p, p^l) / C(p + 1, p′),        l = p + 1, . . . , 2p + 1.        (2.3.3)

Proof. From (2.3.2) we infer

    z̃_0 = (z_0^{(p+1)}, . . . , z_0^{(2p+1)}) = M_0^{−1} · (0, . . . , 0, 1)^T = M_0^{−1}[:, p + 1],

i.e., z̃_0 coincides with the last column of M_0^{−1}. By Cramer's rule, we obtain

    z_0^{(l)} = z̃_0^{(l−p)} = M_0^{−1}[l − p, p + 1] = (−1)^{l+1} det M_0^{(l−p)} / det M_0,

where M_0^{(l−p)} is the algebraic adjoint to the element M_0[p + 1, l − p]. The formulas

    det M_0^{(l−p)} := M_0(\p + 1, \l − p) := M_0(p, \l − p) := C(p, p^l),
    det M_0 := C(p + 1, p′)

follow from the definitions and prove the theorem.
2.3.3  First estimate for z_ν

Theorem 2.3.5 The solution z_ν ∈ R^{2p+1} to the problem

    M z_ν = (0, . . . , 0, 1)^T ∈ R^{2p+1},        M := [ B′[p, :] ; C[p + 1, :] ] ∈ R^{(2p+1)×(2p+1)}        (2.3.4)

admits the estimate

    |z_ν^{(l)}| ≤ max_{j∈J_l} C(p, j^l) / C(p + 1, j′).        (2.3.5)

Proof. 1) First we derive an expression for z_ν. Note that

    M := [ B′[p, :] ; C[p + 1, :] ],        M[2p, :] := [ B′[p, :] ; C[p, :] ],        (2.3.6)

i.e., the first 2p rows of M form the matrix M[2p, :]. From (2.3.4) we infer that

    z_ν = M^{−1} · (0, . . . , 0, 1)^T = M^{−1}[:, 2p + 1],

i.e., the vector z_ν is equal to the last column of M^{−1}. By Cramer's rule we obtain

    z_ν^{(l)} = M^{−1}[l, 2p + 1] = (−1)^{2p+1+l} det M^{(l)} / det M,        (2.3.7)

where M^{(l)} is the algebraic adjoint to the element M[2p + 1, l], i.e.,

    det M^{(l)} := M(\2p + 1, \l) = M(2p, \l).

2) Next we estimate det M^{(l)}. Expanding the determinant M(2p, \l) in (2.3.6) by
the Laplace Expansion by Minors (Lemma 2.1.3) over B′(p, \l) and C(p, \l), we obtain

    det M^{(l)} := M(2p, \l) = Σ_{j∈J_l} (−1)^{ε_l(j)} B′(p, j) C(p, j^l),

where the ε_l(j) are some integers. From (2.3.1) it follows that

    B′(p, j) = (−1)^{ε(j)} B(p, j)

for some integer ε(j). Therefore

    |det M^{(l)}| ≤ Σ_{j∈J_l} B(p, j) C(p, j^l).        (2.3.8)

3) We also need an expression for det M. Expanding the determinant det M in (2.3.6)
by the Laplace Expansion by Minors over B′ and C, and using (2.3.1), we find

    det M = Σ_{j∈J} (−1)^{|p+j|} M(p, j) M(p′, j′)
          := Σ_{j∈J} (−1)^{|p+j|} B′(p, j) C(p + 1, j′)
          = Σ_{j∈J} B(p, j) C(p + 1, j′),

i.e.,

    det M = Σ_{j∈J} B(p, j) C(p + 1, j′).        (2.3.9)

4) Now we are able to bound z_ν. From (2.3.7)–(2.3.9), it follows that

    |z_ν^{(l)}| = |det M^{(l)}| / |det M|
               ≤ [Σ_{j∈J_l} B(p, j) C(p, j^l)] / [Σ_{j∈J} B(p, j) C(p + 1, j′)]
               ≤ [Σ_{j∈J_l} B(p, j) C(p, j^l)] / [Σ_{j∈J_l} B(p, j) C(p + 1, j′)].

Applying Lemma 2.1.4 to the latter ratio we obtain

    |z_ν^{(l)}| ≤ max_{j∈J_l} C(p, j^l) / C(p + 1, j′).

2.4  Properties of the matrices C
The orders of the minors of C on the right-hand sides of (2.3.3) and (2.3.5) differ by one.
In this section we establish relations between minors of C which allow us to equalize
these orders.

Definition 2.4.1 Define F ∈ R^{n×n} as the anti-diagonal matrix whose only non-zero
elements are

    F[i, n + 1 − i] = binom(n+1, i)^{−1}.
Recall that by definition (2.2.5)

    D_0 := diag ⌈−1, +1, . . .⌋.

Lemma 2.4.2 There holds the equality

    C^{−1} = (D_0 F)^{−1} C* (D_0 F).        (2.4.1)
Proof. Consider two null-splines s_1, s_2 of degree n + 1 on ∆,

    s_1, s_2 ∈ S_{n+2}(∆),        s_1(t_ν) = s_2(t_ν) = 0        ∀t_ν ∈ ∆,

and the vectors x_ν, y_ν ∈ R^n of their normalized successive derivatives

    x_ν^{(l)} := (1/l!) s_1^{(l)}(t_ν) · |h_ν|^{l−(n+1)/2},        y_ν^{(l)} := (1/l!) s_2^{(l)}(t_ν) · |h_ν|^{l−(n+1)/2}.        (2.4.2)

We proved in Lemma 1.7.3 the equality

    G(s_1, s_2; x) := Σ_{l=0}^{n+1} (−1)^l s_1^{(l)}(x) s_2^{(n+1−l)}(x) = const(s_1, s_2),        x ∈ [a, b].        (2.4.3)

It follows, in particular, that

    G(s_1, s_2; t_ν) = G(s_1, s_2; t_N).        (2.4.4)

Notice that due to the null values of s_1, s_2 on ∆ we can omit in the sum (2.4.3) the terms
corresponding to l = 0 and l = n + 1, i.e., we have

    G(s_1, s_2; t_ν) = Σ_{l=1}^{n} (−1)^l s_1^{(l)}(t_ν) s_2^{(n+1−l)}(t_ν).

Using the equalities (2.4.2) we may rewrite the latter expression in terms of the vectors
x, y as

    (1/(n+1)!) G(s_1, s_2; t_ν) = Σ_{l=1}^{n} (−1)^l binom(n+1, l)^{−1} x_ν^{(l)} y_ν^{(n+1−l)}.        (2.4.5)

With the help of the matrices D_0 and F one obtains

    (−1)^l binom(n+1, l)^{−1} = (D_0 F)_{l,n+1−l}.

Hence,

    (−1)^l binom(n+1, l)^{−1} y_ν^{(n+1−l)} = (D_0 F y_ν)^{(l)},

so that (2.4.5) becomes

    (1/(n+1)!) G(s_1, s_2; t_ν) = (x_ν, D_0 F y_ν).

Now, from (2.4.4) we conclude that

    (x_ν, D_0 F y_ν) = (x_N, D_0 F y_N).        (2.4.6)

Recall that we defined the matrix C in (2.2.8)–(2.2.9) through the following relations:

    (−1)^{N−ν} x_N = C x_ν,        (−1)^{N−ν} y_N = C y_ν.

Thus, from (2.4.6) it follows that

    (x_ν, D_0 F y_ν) = (C x_ν, D_0 F C y_ν) = (x_ν, C* D_0 F C y_ν).

Since we have not made any assumptions on x_ν, y_ν, the latter equality holds for any
x_ν, y_ν ∈ R^n. Hence

    D_0 F = C* D_0 F C,

and therefore

    C^{−1} = (D_0 F)^{−1} C* (D_0 F).
Lemma 2.4.3 For any i, j ∈ I_{p,n}, we have the equality

    C(i′, j′) = f[i, j] · C(i*, j*),        (2.4.7)

where

    f[i, j] := F(i, i*) / F(j, j*) = Π_{s=1}^{p} binom(n+1, j_s) / Π_{s=1}^{p} binom(n+1, i_s).

Proof. From

    C^{−1} = (D_0 F)^{−1} C* (D_0 F)        (2.4.8)

it follows that det C = det C* = det C^{−1}, and since C is a TP-matrix, we have

    det C = 1.

Therefore, by the Inverse Determinants identity (Lemma 2.1.2), we obtain

    C(i′, j′) = (−1)^{|i+j|} C^{−1}(j, i).        (2.4.9)

To estimate the minor C^{−1}(j, i) we apply the CB-formula to the right-hand side of (2.4.8).
Since the matrix D_0 (resp. F) is diagonal (resp. anti-diagonal), it follows that

    D_0(α, β) ≠ 0  iff  α = β;        F(α, β) ≠ 0  iff  α = β*.

Thus, the CB-formula gives the equality

    C^{−1}(j, i) = F^{−1}(j, j*) D_0^{−1}(j*, j*) C*(j*, i*) D_0(i*, i*) F(i*, i).

Due to the relations

    D_0(α*, α*) = (−1)^{|α*|} = (−1)^{(n+1)p−|α|},
    F^{−1}(α, α*) = [F(α, α*)]^{−1} = [F(α*, α)]^{−1},
    C*(α, β) = C(β, α),

the latter formula for C^{−1}(j, i) is reduced to

    C^{−1}(j, i) = (−1)^{−|i|−|j|} (F(i, i*) / F(j, j*)) C(i*, j*).

Combining this expression with (2.4.9) gives (2.4.7).
Lemma 2.4.4 For any p, n ∈ N we have

    C(n − p, p′) = C(p, p*)        (2.4.10)

and there exist constants c_n, c_n′ such that

    c_n C(p, j*) ≤ C(n − p, j′) ≤ c_n′ C(p, j*)        ∀j ∈ I_{p,n}.        (2.4.11)

Proof. By definition,

    p := (1, . . . , p) = (n − p + 1, . . . , n)*,
    n − p := (1, . . . , n − p) = (n − p + 1, . . . , n)′.

Thus by (2.4.7) we obtain

    C(n − p, j′) = f[p, j] C(p, j*).        (2.4.12)

Equality (2.4.10) follows now if we take j = p, since f[p, p] = 1. The inequalities (2.4.11)
follow with

    c_n := min {f[p, j] : 1 ≤ p ≤ n, j ∈ I_{p,n}},        c_n′ := max {f[p, j] : 1 ≤ p ≤ n, j ∈ I_{p,n}}.

With n = 2p + 1, Lemma 2.4.4 takes the following form.

Lemma 2.4.5 For any p, with n = 2p + 1, we have

    C(p + 1, p′) = C(p, p*),        (2.4.13)

and there exist constants c_p, c_p′ such that

    c_p C(p, j*) ≤ C(p + 1, j′) ≤ c_p′ C(p, j*)        ∀j ∈ J.        (2.4.14)

2.5  Second estimates for z_ν

Theorem 2.5.1 The components of the vector z_ν satisfy the relations

    |z_0^{(l)}| = C(p, p^l) / C(p, p*),        l = p + 1, . . . , 2p + 1,        (2.5.1)

    |z_ν^{(l)}| ≤ c_p max_{j∈J_l} C(p, j^l) / C(p, j*),        l = 1, . . . , 2p + 1.        (2.5.2)

Remark. Since for l = p + 1 we have p^l = p*, it follows that

    |z_0^{(p+1)}| = C(p, p^{p+1}) / C(p, p*) = 1,

in accordance with (1.7.4).

Proof. By Theorem 2.3.4 we have

    |z_0^{(l)}| = C(p, p^l) / C(p + 1, p′),        l = p + 1, . . . , 2p + 1,

and by (2.4.13)

    C(p + 1, p′) = C(p, p*),

which implies the first equality (2.5.1).
Similarly, by Theorem 2.3.5 we have

    |z_ν^{(l)}| ≤ max_{j∈J_l} C(p, j^l) / C(p + 1, j′),        l = 1, . . . , 2p + 1,

and by (2.4.14)

    C(p + 1, j′) ≥ c_p C(p, j*),

which leads to the second inequality.
Chapter 3

Proof of Theorem Z: final estimates for z_ν

3.1  Preliminary remarks
To estimate the ratio

    C(p, i) / C(p, j)

for specific i, j ∈ J, in particular for those given in (2.5.2), we may split the whole
product

    C := Π_{r=1}^{N−ν} [A D_{γ_r}] · A

into two arbitrary parts

    C = K R_q,        R_q := Π_{r=1}^{q} [A D_{γ_r}] · A,        (3.1.1)

and use the CB-formula, keeping in mind the total positivity of the matrices involved.
This gives

    C(p, i) / C(p, j) ≤ max_{α∈J} R_q(α, i) / R_q(α, j),        (3.1.2)

so that it is sufficient to estimate R_q(α, i)/R_q(α, j) for some q. It is clear that, the
smaller the number q of factors of R_q in (3.1.1), the simpler the work to be done. It
would be ideal if we could take

    q = 0,        R_0 = A.

Unfortunately, A, though totally positive, is not strictly totally positive, i.e., A(α, β) = 0
for quite a lot of indices α, β ∈ J. But fortunately, A is an oscillation matrix, and we
prove in §3.2 that

    A(α, β) > 0        iff        α_s ≤ β_{s+1}.

As we show in §3.3, this implies

    R_{p−1}(β, i) > 0        ∀β, i ∈ J.

Thus, it suffices to estimate the ratio

    Q(β, i) / Q(β, j),        Q := R_{p−1} := Π_{r=1}^{p−1} [A D_{γ_r}] · A.        (3.1.3)

This will be done in §3.6–§3.8.
3.2  The matrices S and A

3.2.1  The matrix S

Definition 3.2.1 Set

    S := S_{n+2} := { binom(j, i) }_{i,j=0}^{n+1} := { binom(j−1, i−1) }_{i,j=1}^{n+2}.        (3.2.1)
Example 3.2.2

    S_2 = [ 1 1 ]        S_3 = [ 1 1 1 ]        S_4 = [ 1 1 1 1 ]
          [ 0 1 ],             [ 0 1 2 ]              [ 0 1 2 3 ]
                               [ 0 0 1 ],             [ 0 0 1 3 ]
                                                      [ 0 0 0 1 ].

Lemma 3.2.3 The matrix S in (3.2.1) is a TP-matrix, i.e.,

    S(α, β) ≥ 0        ∀α, β ∈ I_{p,n}.        (3.2.2)

Moreover, we have

    S(α, β) > 0        iff        α ≤ β.        (3.2.3)
Proof. The first part (3.2.2) of the lemma, that is, the total positivity of S, was already
proved by Schoenberg [Sch]. We present an alternative proof by induction which gives
(3.2.3) as well.
1) Let S_n be a TP-matrix (as it is for n = 2). Since

    binom(j−1, i−1) = Σ_{j′=2}^{j} binom(j′−2, i−2),

it follows that

    S_{n+1} := { binom(j−1, i−1) }_{i,j=1}^{n+1} = S_{n+1}′ · I_{n+1},        (3.2.4)

where

    S_{n+1}′ := the (n+1) × (n+1) matrix with first row (1, 0, . . . , 0), first column
                (1, 0, . . . , 0)^T, and lower-right block { binom(j′−2, i−2) }_{i,j′=2}^{n+1} = S_n;
    I_{n+1} := the (n+1) × (n+1) upper triangular matrix with all entries equal to 1
               on and above the diagonal.        (3.2.5)

The matrix I_n is totally positive (all its minors are either 0 or 1); hence, by the CB-
formula and the induction hypothesis, the total positivity of S_{n+1} follows.
2) Let us prove (3.2.3).
A) If

    α_s > β_s  for some  s ∈ {1, . . . , p},

then the entries of the matrix T := S[α, β], which is a (p × p)-submatrix of the upper
triangular matrix S, satisfy

    T[λ, µ] = S[α_λ, β_µ] = 0,        λ ≥ s ≥ µ.

Hence the rows {T[λ, :]}_{λ=s}^{p} are linearly dependent, i.e.,

    det T := S(α, β) = 0.

B) Suppose that for any γ, δ ∈ I_{p,n} we have the equivalence

    S_n(γ, δ) > 0        iff        γ ≤ δ.

Now let

    α, β ∈ I_{p,n+1},        α_s ≤ β_s        ∀s = 1, . . . , p.        (3.2.6)

We assume also that p ≤ n, since for p = n + 1 we have det S_{n+1} = 1 by definition.
From (3.2.4)–(3.2.5), by the CB-formula, we conclude that

    S_{n+1}(α, β) = Σ_{δ} S_{n+1}′(α, δ) I_{n+1}(δ, β).        (3.2.7)

We distinguish two cases.
1) If α_1 > 1, then, by (3.2.6), we also have β_1 > 1. Hence

    S_{n+1}′(α_1, . . . , α_p; β_1, . . . , β_p) = S_n(α_1 − 1, . . . , α_p − 1; β_1 − 1, . . . , β_p − 1).

Taking from the sum (3.2.7) only the one term with δ = β, we obtain

    S_{n+1}(α, β) ≥ S_{n+1}′(α, β) I_{n+1}(β, β) = S_n(α_1 − 1, . . . , α_p − 1; β_1 − 1, . . . , β_p − 1) > 0,

where the last inequality holds by the induction hypothesis.
2) If α_1 = 1, then

    S_{n+1}′(1, α_2, . . . , α_p; β_1, β_2, . . . , β_p)
        = { 0,                                                        if β_1 > 1;
          { S_n(α_2 − 1, . . . , α_p − 1; β_2 − 1, . . . , β_p − 1),  if β_1 = 1.

In this case, taking from the sum (3.2.7) the term with

    δ_1 = 1,        δ_s = β_s,  s ≥ 2,

we obtain

    S_{n+1}(1, α_2, . . . , α_p; β_1, . . . , β_p)
        ≥ S_{n+1}′(1, α_2, . . . , α_p; 1, β_2, . . . , β_p) I_{n+1}(1, β_2, . . . , β_p; β_1, . . . , β_p)
        = S_n(α_2 − 1, . . . , α_p − 1; β_2 − 1, . . . , β_p − 1) > 0.
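Both claims of Lemma 3.2.3 can be confirmed by exhaustive computation for a small size (a sketch with 0-based indices; the componentwise criterion is unchanged by the index shift):

```python
# Brute-force check of Lemma 3.2.3 for a small size: every minor of
# S = {binom(j-1, i-1)} is nonnegative, and S(alpha, beta) > 0 exactly
# when alpha <= beta componentwise.
from itertools import combinations
from math import comb

def det(m):
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** c * m[0][c] * det([r[:c] + r[c + 1:] for r in m[1:]])
               for c in range(len(m)))

m = 5
S = [[comb(j, i) for j in range(m)] for i in range(m)]  # S_5 as in (3.2.1)

for p in range(1, m + 1):
    for rows in combinations(range(m), p):
        for cols in combinations(range(m), p):
            d = det([[S[r][c] for c in cols] for r in rows])
            assert d >= 0                                              # (3.2.2)
            assert (d > 0) == all(r <= c for r, c in zip(rows, cols))  # (3.2.3)
print("Lemma 3.2.3 verified for the matrix S_5")
```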
3.2.2  The matrix A

The matrix A was defined in (2.2.2). We recall this definition.

Definition 3.2.4 Set

    A := A_n := (a_{ij})_{i,j=1}^{n},        a_{ij} := binom(n+1, i) − binom(j, i).        (3.2.8)

Example 3.2.5

    A_2 = [ 2 1 ]        A_3 = [ 3 2 1 ]        A_4 = [  4  3  2  1 ]
          [ 3 2 ],             [ 6 5 3 ]              [ 10  9  7  4 ]
                               [ 4 4 3 ],             [ 10 10  9  6 ]
                                                      [  5  5  5  4 ].

Lemma 3.2.6 The matrix A in (3.2.8) is a TP-matrix, i.e.,

    A(α, β) ≥ 0        ∀α, β ∈ I_{p,n}.        (3.2.9)

Moreover,

    A(α, β) > 0        iff        α_s ≤ β_{s+1}        ∀s = 1, . . . , p − 1.        (3.2.10)
Proof. The following considerations are due to [BS]. For the matrix S defined in (3.2.1),
consider the matrix S⁻ obtained from S by subtracting the last column of S from all
other columns. Then the first row of S⁻ is (0, 0, . . . , 0, 1), its last column coincides with
that of S, and its central block is

    { binom(j, i) − binom(n+1, i) }_{i,j=1}^{n} =: −A_n.

This implies that for α, β ∈ I_{p,n}

    S(0, α_1, . . . , α_p; β_1, . . . , β_p, n + 1) = S⁻(0, α_1, . . . , α_p; β_1, . . . , β_p, n + 1)
        = (−1)^{(p+1)+1} det (−A[α, β]) = (−1)^{(p+1)+1} (−1)^p A(α, β) = A(α, β),

i.e.,

    A(α_1, . . . , α_p; β_1, . . . , β_p) = S(0, α_1, . . . , α_p; β_1, β_2, . . . , β_p, n + 1).

By (3.2.2), S is totally positive, and by (3.2.3) one has

    S(0, α_1, . . . , α_p; β_1, . . . , β_p, n + 1) > 0        iff
        0 ≤ β_1,  α_s ≤ β_{s+1}  ∀s = 1, . . . , p − 1,  α_p ≤ n + 1.

This is equivalent to (3.2.10), since the condition α, β ∈ I_{p,n} implies that β_1 ≥ 1 and
α_p ≤ n.
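Lemma 3.2.6, including the positivity criterion (3.2.10), can be checked exhaustively for a small n (a sketch, 0-based indices; the last assertion confirms the remark of §3.1 that A is not strictly totally positive):

```python
# Brute-force check of Lemma 3.2.6 for n = 5: all minors of A are
# nonnegative, and A(alpha, beta) > 0 iff alpha_s <= beta_{s+1}.
from itertools import combinations
from math import comb

def det(m):
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** c * m[0][c] * det([r[:c] + r[c + 1:] for r in m[1:]])
               for c in range(len(m)))

n = 5
A = [[comb(n + 1, i) - comb(j, i) for j in range(1, n + 1)] for i in range(1, n + 1)]
zero_seen = False
for p in range(1, n + 1):
    for rows in combinations(range(n), p):
        for cols in combinations(range(n), p):
            d = det([[A[r][c] for c in cols] for r in rows])
            assert d >= 0                                         # (3.2.9)
            crit = all(rows[s] <= cols[s + 1] for s in range(p - 1))
            assert (d > 0) == crit                                # (3.2.10)
            zero_seen = zero_seen or d == 0
assert zero_seen  # A is totally positive, but not strictly so
print("Lemma 3.2.6 verified for n =", n)
```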
3.3  The matrices Q

Definition 3.3.1 Set

    Q_γ := A D_{γ_1} A D_{γ_2} · · · A D_{γ_{p−1}} A = Π_{r=1}^{p−1} [A D_{γ_r}] · A,        (3.3.1)

where

    A = A_{2p+1},        D_{γ_r} := D(γ_r) := diag ⌈|γ_r|^{−p}, . . . , |γ_r|^{p}⌋,        A, D_γ ∈ R^{(2p+1)×(2p+1)}.        (3.3.2)
In this section we establish a relation between indices β, i, j ∈ J of the form

    E_{[β,i]} ⊂ E_{[β,j]},

which implies the estimate

    Q_γ(β, i) ≤ c_p Q_γ(β, j)        ∀γ = (γ_1, . . . , γ_{p−1}) ∈ R^{p−1}.

Here c_p is a constant that is independent of γ, i.e., independent of the knot-sequence
(we recall that in (3.3.1) γ_r stands for a local mesh ratio ρ_ν with some ν).
Let

    α^{(r)} ∈ J,        r = 0, . . . , p,

be a sequence of indices with

    α^{(0)} := β,        α^{(p)} := i.

From (3.3.1) and the CB-formula, we infer

    Q_γ(α^{(0)}, α^{(p)}) = Σ_{α^{(1)},...,α^{(p−1)}∈J} [ Π_{r=1}^{p−1} A(α^{(r−1)}, α^{(r)}) D_{γ_r}(α^{(r)}, α^{(r)}) ] × A(α^{(p−1)}, α^{(p)}).        (3.3.3)
Since by definition (3.3.2) we have

    D_{γ_r}(α^{(r)}, α^{(r)}) = γ_r^{Σ_{s=1}^{p} [α_s^{(r)} − (p+1)]} = γ_r^{−p(p+1)} · γ_r^{|α^{(r)}|},

we may rewrite (3.3.3) as

    Q_γ(α^{(0)}, α^{(p)}) · Π_{r=1}^{p−1} γ_r^{p(p+1)}
        = Σ_{α^{(1)},...,α^{(p−1)}∈J} [ Π_{r=1}^{p−1} A(α^{(r−1)}, α^{(r)}) γ_r^{|α^{(r)}|} ] A(α^{(p−1)}, α^{(p)})
        = Σ_{α^{(1)},...,α^{(p−1)}∈J} Π_{r=1}^{p} A(α^{(r−1)}, α^{(r)}) Π_{r=1}^{p−1} γ_r^{|α^{(r)}|}.        (3.3.4)
By Lemma 3.2.6 the condition

    A(α^{(r−1)}, α^{(r)}) > 0

is equivalent to the inequalities

    α_s^{(r−1)} ≤ α_{s+1}^{(r)},        s = 1, . . . , p − 1.        (3.3.5)

This means that in (3.3.4) we may restrict the sum to the non-vanishing minors of A,
i.e., to the sequences of indices that satisfy (3.3.5) for all r = 1, . . . , p simultaneously.
Set

    c_γ := Π_{r=1}^{p−1} |γ_r|^{p(p+1)}.
This is the factor on the left-hand side of (3.3.4) that is independent of β and i. Then
from (3.3.4) we obtain

    c_p′ Σ_{(α^{(r)})∈J_{[β,i]}} Π_{r=1}^{p−1} |γ_r|^{|α^{(r)}|} ≤ c_γ Q_γ(β, i) ≤ c_p″ Σ_{(α^{(r)})∈J_{[β,i]}} Π_{r=1}^{p−1} |γ_r|^{|α^{(r)}|},        (3.3.6)

where, for fixed β =: α^{(0)} and i =: α^{(p)}, the sum is taken over the set J_{[β,i]} of
sequences (α^{(r)})_{r=1}^{p−1} of indices α^{(r)} ∈ J which satisfy the condition (3.3.5)
simultaneously.
Precisely, we formulate the following

Definition 3.3.2 For given β, i ∈ J, we set

    α^{(0)} := β,        α^{(p)} := i.

Further, we write

    α := (α^{(r)})_{r=1}^{p−1} ∈ J_{[β,i]},

and we say that the sequence α is admissible for the pair [β, i] if

    α^{(r)} ∈ J,  r = 1, . . . , p − 1;        α_{s−1}^{(r−1)} ≤ α_s^{(r)},  r = 1, . . . , p,  s = 2, . . . , p.        (3.3.7)

Definition 3.3.3 For given β, i ∈ J, we write

    ε := (ε_1, . . . , ε_{p−1}) ∈ E_{[β,i]},

and we say that the path ε is admissible for [β, i] if there exists a sequence of indices

    α = (α^{(1)}, . . . , α^{(p−1)}) ∈ J_{[β,i]}

such that

    ε_r = |α^{(r)}|,        r = 1, . . . , p − 1.

With this definition, (3.3.6) becomes

    c_p′ Σ_{ε∈E_{[β,i]}} Π_{r=1}^{p−1} |γ_r|^{ε_r} ≤ c_γ Q_γ(β, i) ≤ c_p″ Σ_{ε∈E_{[β,i]}} Π_{r=1}^{p−1} |γ_r|^{ε_r},        (3.3.8)

where the sum is taken over all different paths ε ∈ E_{[β,i]}.
Set

    Q_{[β,i]}(γ) := Σ_{ε∈E_{[β,i]}} Π_{r=1}^{p−1} |γ_r|^{ε_r}.        (3.3.9)
The next lemma follows immediately.

Lemma 3.3.4 There exists a constant c_p such that if

    E_{[β,i]} ⊂ E_{[β,j]},        β, i, j ∈ J,

then for any γ = (γ_1, . . . , γ_{p−1}) we have

    Q_{[β,i]}(γ) ≤ Q_{[β,j]}(γ),

and consequently

    Q_γ(β, i) ≤ c_p Q_γ(β, j).        (3.3.10)
3.4  A further strategy

1) The function

    Q_{[β,i]}(γ) := Σ_{ε∈E_{[β,i]}} Π_{r=1}^{p−1} |γ_r|^{ε_r}

defined in (3.3.9) is a multivariate polynomial in γ. All the coefficients of this polynomial
are equal to 1. We want to find out whether, for special i, j ∈ J, the inequality

    Q_{[β,i]}(γ) ≤ c_p Q_{[β,j]}(γ)        (3.4.1)

holds for all γ ∈ R_+^{p−1} (all γ's are positive). The condition (3.3.10) in Lemma 3.3.4
provides, of course, this inequality, but we need to find a way to check its validity.
2) A trivial necessary condition for the inequality (3.4.1) to be true is that

    (A)  the minimal degree of Q_{[β,i]}(γ) ≥ the minimal degree of Q_{[β,j]}(γ),
    (B)  the maximal degree of Q_{[β,i]}(γ) ≤ the maximal degree of Q_{[β,j]}(γ).

This gives rise to the minimal and the maximal paths which we define in §3.5. These
paths are nothing but the corresponding degrees of the monomials in Q_{[β,i]}.
As we show in §3.5, the set of admissible paths ε ∈ E_{[β,i]} (i.e., the set of monomials
of the polynomial Q_{[β,i]}(γ)) has the properties:

    a)  the minimal path (degree) ε^{[β]} depends only on β,
    b)  the maximal path (degree) ε^{[i]} depends only on i.

Hence, among the conditions (A)–(B) only (B) will remain under consideration.
3) For two arbitrary multivariate polynomials, the condition (B) is not sufficient to
provide (3.4.1). For example, for

    P_1(x, y) := 1 + x^2 y,        P_2(x, y) := 1 + x^3 y^2,

P_1 cannot be bounded by (const · P_2) for all positive values x, y. Therefore, we will
prove in §3.6 that for our particular polynomials the condition (B) for the maximal
degrees, or equivalently the condition

    (B′)  the maximal path ε^{[i]} ≤ the maximal path ε^{[j]}

for the maximal paths, implies that

    {the set of all monomials of Q_{[β,i]}} ⊂ {the set of all monomials of Q_{[β,j]}}.

In the path terminology it looks like

    ε^{[i]} ≤ ε^{[j]}        ⇒        E_{[β,i]} ⊂ E_{[β,j]}.

Then, by (3.3.10), the inequality (3.4.1) trivially follows.
4) To prove the last implication, we establish in §3.6 a criterion for the inclusion

    γ^ε := γ_1^{ε_1} · · · γ_r^{ε_r} ∈ Q_{[β,i]}(γ),        or equivalently        ε ∈ E_{[β,i]}.

With Q_{[β,ω]} being the polynomial of the highest maximal degree ω (with the highest
maximal path ε^{[ω]}), the criterion is

    γ^ε ∈ Q_{[β,ω]}(γ),  ε ≤ ε^{[i]}        ⇔        γ^ε ∈ Q_{[β,i]}(γ).

In words, a monomial γ^ε belongs to the polynomial Q_{[β,i]}(γ) if and only if

    i)   it belongs to the highest polynomial Q_{[β,ω]}(γ),
    ii)  its degree ε does not exceed the maximal degree ε^{[i]} of the polynomial Q_{[β,i]}(γ).

In the path terminology this can be rephrased as

    ε ∈ E_{[β,ω]},  ε ≤ ε^{[i]}        ⇔        ε ∈ E_{[β,i]}.

Only sufficiency needs to be proved, i.e., the implication “⇒”.
5) The latter will be proved by the iterative use of the following “elementary” step:
for any i′ which differs from i only in one component i_m, the same implication holds:

    ε ∈ E_{[β,i′]},  ε ≤ ε^{[i]}        ⇒        ε ∈ E_{[β,i]}.

All of §3.6 is devoted to the proof of this latter statement.

    a)  We have a path ε′ ∈ E_{[β,i′]} (a monomial γ^{ε′} ∈ Q_{[β,i′]}(γ)) with ε′ ≤ ε^{[i]}.
    b)  It is defined by a sequence (α′^{(r)}) ∈ J_{[β,i′]} with |α′^{(r)}| = ε_r′.
    c)  Since i′ ≥ i, this sequence may not be admissible for [β, i].
    d)  But we can modify it to a sequence (α″^{(r)}), such that (α″^{(r)}) ∈ J_{[β,i]} and |α″^{(r)}| = ε_r′.

These modifications are treated in Lemmas 3.6.1–3.6.3. The statements of these
lemmas are then summarized in Lemmas 3.6.4–3.6.5.
3.5  Minimal and maximal paths

In this section we define the minimal and the maximal admissible sequences α̲^{(r)}, ᾱ^{(r)} ∈
J_{[β,j]}, and respectively the minimal and the maximal paths ε^{[β]}, ε^{[j]} ∈ E_{[β,j]}.
We start with examples of what the admissible sequences (α^{(r)}) ∈ J_{[β,i]} look like.
According to definition (3.3.7) we have two strings of inequalities

    1 ≤ α_{s−1}^{(r)} < α_s^{(r)} ≤ 2p + 1,        r = 1, . . . , p − 1,  s = 2, . . . , p,
    α_{s−1}^{(r−1)} ≤ α_s^{(r)},        r = 1, . . . , p,  s = 2, . . . , p.

In order to analyse these strings, we will frequently express them in the following matrix
form.
Example 3.5.1 For p = 2, a sequence (α^{(1)}) ∈ J_{[β,i]} is arranged as

    i_1
    α_1^{(1)} ≤ i_2
    β_1 ≤ α_2^{(1)}
    β_2

For p = 3, a sequence (α^{(1)}, α^{(2)}) ∈ J_{[β,i]} is arranged as

    i_1
    α_1^{(2)} ≤ i_2
    α_1^{(1)} ≤ α_2^{(2)} ≤ i_3
    β_1 ≤ α_2^{(1)} ≤ α_3^{(2)}
    β_2 ≤ α_3^{(1)}
    β_3

For arbitrary p, a sequence (α^{(1)}, . . . , α^{(p−1)}) ∈ J_{[β,i]} is arranged as

    i_1
    α_1^{(p−1)} ≤ i_2
    α_1^{(p−2)} ≤ α_2^{(p−1)} ≤ i_3
    .............................................
    α_1^{(1)} ≤ α_2^{(2)} ≤ · · · ≤ α_{p−2}^{(p−2)} ≤ α_{p−1}^{(p−1)} ≤ i_p
    β_1 ≤ α_2^{(1)} ≤ α_3^{(2)} ≤ · · · ≤ α_{p−1}^{(p−2)} ≤ α_p^{(p−1)}
    .............................................
    β_{p−2} ≤ α_{p−1}^{(1)} ≤ α_p^{(2)}
    β_{p−1} ≤ α_p^{(1)}
    β_p

In such a representation, each column is an index from J, i.e., the following “vertical”
inequalities are also valid:

    1 ≤ α_1^{(r)} < · · · < α_p^{(r)} ≤ 2p + 1.        (3.5.1)

In particular, it follows that

    s ≤ α_s^{(r)} ≤ p + 1 + s,        s = 1, . . . , p.        (3.5.2)
Lemma 3.5.2 For any β, i ∈ J the set J_{[β,i]} is non-empty.

Proof. The following sequence (α^{(r)}) is always admissible:

    i_1
    1 < i_2
    1 < 2 < i_3
    ...............................................
    1 < 2 < · · · < p − 2 < p − 1 < i_p
    β_1 < p + 3 < p + 4 < · · · < 2p < 2p + 1
    ...............................................
    β_{p−2} < 2p < 2p + 1
    β_{p−1} < 2p + 1
    β_p
Lemma 3.5.3 For any β, i ∈ J, and any (α^{(r)}) ∈ J_{[β,i]}, we have

    α^{(r)} ≤ ᾱ^{(r)},        (3.5.3)

where

    ᾱ_s^{(r)} = { min (i_{p−r+s}, p + 1 + s),  s ≤ r;
                { p + 1 + s,                   s > r;        r = 1, . . . , p − 1.        (3.5.4)

Proof. In view of (3.5.2), the following table presents the admissible sequence (ᾱ^{(r)})
whose entries take the maximal possible values:

    i_1
    min (i_2, p + 2) ≤ i_2
    min (i_3, p + 2) ≤ min (i_3, p + 3) ≤ i_3
    ........................................................
    min (i_p, p + 2) ≤ min (i_p, p + 3) ≤ · · · ≤ min (i_p, 2p − 1) ≤ min (i_p, 2p) ≤ i_p
    β_1 < p + 3 < p + 4 < · · · < 2p < 2p + 1
    ........................................................
    β_{p−2} < 2p < 2p + 1
    β_{p−1} < 2p + 1
    β_p
Lemma 3.5.4 For any β, i ∈ J, and any (α^{(r)}) ∈ J_{[β,i]}, we have

    α̲^{(r)} ≤ α^{(r)},        (3.5.5)

where

    α̲_s^{(r)} = { s,                  s ≤ r;
                { max (β_{s−r}, s),  s > r;        r = 1, . . . , p − 1.        (3.5.6)

Proof. In view of (3.5.2), the following table presents the admissible sequence (α̲^{(r)})
whose entries take the minimal possible values:

    i_1
    1 < i_2
    1 < 2 < i_3
    ....................................................
    1 < 2 < · · · < p − 2 < p − 1 < i_p
    β_1 ≤ max (β_1, 2) ≤ max (β_1, 3) ≤ · · · ≤ max (β_1, p − 1) ≤ max (β_1, p)
    .............................................................
    β_{p−2} ≤ max (β_{p−2}, p − 1) ≤ max (β_{p−2}, p)
    β_{p−1} ≤ max (β_{p−1}, p)
    β_p
Definition 3.5.5 For β, i ∈ J define the maximal path ε^{[i]} and the minimal path ε^{[β]}
as follows:

    ε^{[i]} ∈ E_{[β,i]},        ε_r^{[i]} := |ᾱ^{(r)}| = Σ_{s=1}^{r} min (i_{p−r+s}, p + 1 + s) + Σ_{s=r+1}^{p} (p + 1 + s);        (3.5.7)

    ε^{[β]} ∈ E_{[β,i]},        ε_r^{[β]} := |α̲^{(r)}| = Σ_{s=1}^{r} s + Σ_{s=r+1}^{p} max (β_{s−r}, s).        (3.5.8)

Lemma 3.5.6 For any β, i ∈ J, we have

    ε^{[β]} ≤ ε ≤ ε^{[i]}        ∀ε ∈ E_{[β,i]}.        (3.5.9)

Proof. Follows directly from Lemmas 3.5.3–3.5.4 and Definition 3.5.5.
3.6  Characterization of E_{[β,i]}

Here we will prove the equality

    E_{[β,i]} = {ε ∈ E_{[β,ω]} : ε ≤ ε^{[i]}}        ∀β, i ∈ J,

where ω := (p + 2, . . . , 2p + 1) is the index from J with maximal possible entries. The
latter will be proved by the iterative use of the following “elementary” step: for any i′
that differs from i only in one component i_m, the same implication holds:

    ε ∈ E_{[β,i′]},  ε ≤ ε^{[i]}        ⇒        ε ∈ E_{[β,i]}.

In this section exclusively, for i ∈ J we denote by i′, i″ ∈ J certain modifications of i;
this has nothing to do with the (unfortunately identical) notation for the complementary
index.
Lemma 3.6.1 For any given m ∈ {1, . . . , p}, let i, i′ ∈ J be such that

    i_s′ = i_s,  s ≠ m;        i_m′ = i_m + 1.

If for a given β ∈ J we have

    ε′ ∈ E_{[β,i′]},        ε′ ≤ ε^{[i]},

then for the same β there exists a path ε, and a number l ∈ {1, . . . , p}, such that

    ε ∈ E_{[β,i]},        ε_r = { ε_r′,      r = 1, . . . , l − 1;
                                { ε_r′ − 1,  r = l, . . . , p − 1.        (3.6.1)
Proof. Let

    ε′ ∈ E_{[β,i′]},        ε′ ≤ ε^{[i]}.

By definition, there exists a sequence α′ ∈ J_{[β,i′]} which satisfies the inequalities

    i_1
    α_1′^{(p−1)} ≤ i_2
    ......................................................
    α_1′^{(p−m+2)} ≤ · · · ≤ α_{m−2}′^{(p−1)} ≤ i_{m−1}
    α_1′^{(p−m+1)} ≤ α_2′^{(p−m+2)} ≤ · · · ≤ α_{m−1}′^{(p−1)} ≤ i_m + 1
    .......................................................
    α_1′^{(1)} ≤ · · · ≤ α_{p−m+1}′^{(p−m+1)} ≤ α_{p−m+2}′^{(p−m+2)} ≤ · · · ≤ α_{p−1}′^{(p−1)} ≤ i_p
    β_1 ≤ α_2′^{(1)} ≤ · · · ≤ α_{p−m+2}′^{(p−m+1)} ≤ α_{p−m+3}′^{(p−m+2)} ≤ · · · ≤ α_p′^{(p−1)}
    ..............................................
    β_{p−1} ≤ α_p′^{(1)}
    β_p        (3.6.2)

and moreover

    |α′^{(r)}| = ε_r′ ≤ ε_r^{[i]},        r = 1, . . . , p − 1.

To produce a required sequence α ∈ J_{[β,i]}, we change the values of the components of
α′ ∈ J_{[β,i′]} only in the m-th row:

    α_1′^{(p−m+1)} ≤ · · · ≤ α_{m−1}′^{(p−1)} ≤ i_m′ := i_m + 1.

For the α′'s in this row we have two possible relations.
1) The first one is the inequality

    α_{m−1}′^{(p−1)} < i_m + 1.

Then

    α_1′^{(p−m+1)} ≤ · · · ≤ α_{m−1}′^{(p−1)} ≤ i_m.

Therefore α′ ∈ J_{[β,i]}, hence

    ε′ ∈ E_{[β,i]},

and (3.6.1) is satisfied with l = p, i.e., we do not have to do anything.
2) The second possibility is that for some t ∈ {1, . . . , m − 1} we have the following
relations:

    α_1′^{(p−m+1)} ≤ · · · ≤ α_{t−1}′^{(p−m+t−1)} < α_t′^{(p−m+t)} = · · · = α_{m−1}′^{(p−1)} = i_m + 1.        (3.6.3)

In this case we set

    α_s^{(p−m+s)} := α_s′^{(p−m+s)} − 1 = i_m,        s = t, . . . , m − 1;
    α_s^{(r)} := α_s′^{(r)},        otherwise;        (3.6.4)

thus changing by −1 only the last m − t entries of the m-th row.
2a) By such a definition, the second part of (3.6.1) holds evidently with l = p − m + t.
2b) To show that ε ∈ E_{[β,i]}, we need to prove that

    α ∈ J_{[β,i]}.

Since the changes are restricted to the m-th row, we need to care only about the inequal-
ities where the changed values are involved, i.e., about the inequalities

    · · · ≤ α_{t−1}^{(p−m+t)} ≤ · · · ≤ α_{m−2}^{(p−1)} ≤ i_{m−1}
            ∧                          ∧
    α_{t−1}^{(p−m+t−1)} ≤ α_t^{(p−m+t)} ≤ · · · ≤ α_{m−1}^{(p−1)} ≤ i_m        (3.6.5)

(the symbols ∧ mark the “vertical” inequalities between consecutive rows).
2c) From (3.6.3) and (3.6.4) it follows that in the m-th row we have

    α_{t−1}^{(p−m+t−1)} ≤ α_t^{(p−m+t)} = · · · = α_{m−1}^{(p−1)} = i_m,

i.e., the “horizontal” inequalities in (3.6.5) are valid.
2d) In the columns (α^{(p−m+s)})_{s=t}^{m−1} we have

    α_{s−1}^{(p−m+s)} := α_{s−1}′^{(p−m+s)} ≤ i_{m−1} < i_m =: α_s^{(p−m+s)},

i.e., the “vertical” inequalities in (3.6.5) are also true.
Lemma 3.6.2 For some l ∈ {1, . . . , p − 1}, let ε be a path such that

    ε ∈ E_{[β,i]},        ε_r := { ε_r′ ≤ ε_r^{[i]},      r = 1, . . . , l − 1;
                                 { ε_r′ − 1 < ε_r^{[i]},  r = l, . . . , p − 1.        (3.6.6)

Then there exist an l″ > l and a path

    ε″ ∈ E_{[β,i]}        (3.6.7)

such that

    ε_r″ = { ε_r′ ≤ ε_r^{[i]},      r = 1, . . . , l″ − 1;
           { ε_r′ − 1 < ε_r^{[i]},  r = l″, . . . , p − 1.

Proof. By definition, there exists a sequence α = {α^{(r)}} such that

    α ∈ J_{[β,i]},        |α^{(r)}| = ε_r := { ε_r′ ≤ ε_r^{[i]},      r = 1, . . . , l − 1;
                                             { ε_r′ − 1 < ε_r^{[i]},  r = l, . . . , p − 1.        (3.6.8)
We will now change by +1 a non-zero number q+1 of successive elements of α ∈ J_{[β,i]} in a certain row, starting from an element α^{(l)}_{s∗} in the l-th column.
A) By such a change, the equality (3.6.8) holds automatically.
B) The task is to find a starting element such that the new sequence α′′ is still in J_{[β,i]}. Since the changes are restricted to a certain row, we need to care only about the inequalities in which the changed values are involved, i.e., about the inequalities

    α′′^{(l)}_{s∗}  ≤  α′′^{(l+1)}_{s∗+1}  ≤  ⋯  ≤  α′′^{(l+q)}_{s∗+q}  ≤  α^{(l+q+1)}_{s∗+q+1}
        ∧                  ∧                            ∧                                (3.6.9)
    α′′^{(l)}_{s∗+1}  ≤  α′′^{(l+1)}_{s∗+2}  ≤  ⋯  ≤  α′′^{(l+q)}_{s∗+q+1}
Consider the index α^{(l)}. Since
    α^{(l)}_s ≤ α^{[i](l)}_s,   s = 1, …, p,
and, by assumption (3.6.6),
    Σ_{s=1}^{p} α^{(l)}_s =: |α^{(l)}| < ǫ^{[i]}_l := |α^{[i](l)}| := Σ_{s=1}^{p} α^{[i](l)}_s,
there exists a number s′ such that
    α^{(l)}_{s′} < α^{[i](l)}_{s′}.
Set
    s∗ := max { s ∈ {1, …, p} : α^{(l)}_s < α^{[i](l)}_s }.     (3.6.10)
1) If s∗ = p, then we set
    α′′^{(l)}_p = α^{(l)}_p + 1,
and the lemma is proved with l′′ = l + 1.
2) Let s∗ < p. Then, by definition of s∗,
    α^{(l)}_{s∗} < α^{[i](l)}_{s∗} < α^{[i](l)}_{s∗+1} = α^{(l)}_{s∗+1},
i.e.,
    α^{(l)}_{s∗} + 1 < α^{(l)}_{s∗+1}.     (3.6.11)
Set
    l′′ = max { l+t ∈ {l, …, p−1} : α^{(l)}_{s∗} = α^{(l+t)}_{s∗+t} } + 1,
and let
    l′′ =: l + q + 1,   q ∈ {0, …, p−1−l}.
Then we have the following three possibilities for the position of l′′ in the table.
a) The case (l+q) < (p−1), s∗+q < p:

    ( ⋯ ≤ α^{(l)}_{s∗} = α^{(l+1)}_{s∗+1} = ⋯ = α^{(l+q)}_{s∗+q} < α^{(l+q+1)}_{s∗+q+1} ≤ ⋯ )
    ( ⋯ ≤ α^{(l)}_{s∗+1} ≤ α^{(l+1)}_{s∗+2} ≤ ⋯ ≤ α^{(l+q)}_{s∗+q+1} ≤ ⋯ )

b) The case (l+q) < (p−1), s∗+q = p:

    ( ⋯ ≤ α^{(l)}_{s∗} = α^{(l+1)}_{s∗+1} = ⋯ = α^{(l+q−1)}_{p−1} = α^{(l+q)}_p )
    ( ⋯ ≤ α^{(l)}_{s∗+1} ≤ α^{(l+1)}_{s∗+2} ≤ ⋯ ≤ α^{(l+q−1)}_p )

c) The case (l+q) = (p−1) (then s∗+q = m−1 < p):

    ( ⋯ ≤ α^{(l)}_{s∗} = α^{(l+1)}_{s∗+1} = ⋯ = α^{(p−1)}_{m−1} ≤ i_m )
    ( ⋯ ≤ α^{(l)}_{s∗+1} ≤ α^{(l+1)}_{s∗+2} ≤ ⋯ ≤ α^{(p−1)}_m )

Set
    α′′^{(l+t)}_{s∗+t} = α^{(l+t)}_{s∗+t} + 1,   t = 0, …, q;     α′′^{(r)}_s = α^{(r)}_s   otherwise;     (3.6.12)
thus increasing by +1 the elements in the upper row of the above subtables.
2.1) Let us verify the “vertical” inequalities in (3.6.9). Since, by (3.6.11),
    α^{(l)}_{s∗} + 1 < α^{(l)}_{s∗+1},
and since, for the upper and lower rows of the above subtables, the relations
    α^{(l+t)}_{s∗+t} + 1 = α^{(l)}_{s∗} + 1,   α^{(l)}_{s∗+1} ≤ α^{(l+t)}_{s∗+t+1},   t = 0, …, q,
are valid, we have
    α^{(l+t)}_{s∗+t} + 1 = α^{(l)}_{s∗} + 1 < α^{(l)}_{s∗+1} ≤ α^{(l+t)}_{s∗+t+1},
i.e.,
    α^{(l+t)}_{s∗+t} + 1 < α^{(l+t)}_{s∗+t+1},   t = 0, …, q.
According to the definition (3.6.12), this gives
    α′′^{(l+t)}_{s∗+t} := α^{(l+t)}_{s∗+t} + 1 < α^{(l+t)}_{s∗+t+1} =: α′′^{(l+t)}_{s∗+t+1},   t = 0, …, q,
i.e.,
    α′′^{(l+t)}_{s∗+t} < α′′^{(l+t)}_{s∗+t+1},   t = 0, …, q.
This proves the “vertical” inequalities in (3.6.9).
2.2) Let us prove the “horizontal” inequalities in (3.6.9). It is clear that, due to the equalities
    α^{(l)}_{s∗} = α^{(l+1)}_{s∗+1} = ⋯ = α^{(l+q)}_{s∗+q},
the definition (3.6.12) implies
    α′′^{(l)}_{s∗} = α′′^{(l+1)}_{s∗+1} = ⋯ = α′′^{(l+q)}_{s∗+q}.
Also, in case (a) we have
    α′′^{(l+q)}_{s∗+q} := α^{(l+q)}_{s∗+q} + 1 ≤ α^{(l+q+1)}_{s∗+q+1} =: α′′^{(l+q+1)}_{s∗+q+1},
and that completes the “horizontal” part of (3.6.9) for this case.
Further, since by definition (3.6.10) we have
    α^{(l)}_{s∗} + 1 ≤ α^{[i](l)}_{s∗},
it follows that
    α^{(l+t)}_{s∗+t} + 1 = α^{(l)}_{s∗} + 1 ≤ α^{[i](l)}_{s∗} ≤ α^{[i](l+t)}_{s∗+t}.
This implies
    α′′^{(l+t)}_{s∗+t} := α^{(l+t)}_{s∗+t} + 1 ≤ α^{[i](l+t)}_{s∗+t},
i.e., the values of the modified α′′ lie in the admissible intervals. In particular, in case (b)
    α′′^{(l+q)}_p ≤ α^{[i](l+q)}_p = 2p + 1,
and in case (c)
    α′′^{(p−1)}_{m−1} ≤ α^{[i](p−1)}_{m−1} = min(p+m, i_m) ≤ i_m.
This finishes the proof of the “horizontal” part of (3.6.9) and of the lemma.
Lemma 3.6.3 For some l ∈ {1, …, p−1}, let ǫ be a path such that

    ǫ ∈ E_{[β,i]},   ǫ_r := { ǫ′_r ≤ ǫ^{[i]}_r,       r = 1, …, l−1;
                              ǫ′_r − 1 < ǫ^{[i]}_r,   r = l, …, p−1.

Then
    ǫ′ ∈ E_{[β,i]}.
Proof. An iterative use of Lemma 3.6.2.
We summarize Lemmas 3.6.1-3.6.3 in the following two statements.
Lemma 3.6.4 For any given m ∈ {1, …, p}, let i, i′ ∈ J be such that
    i′_s = i_s,  s ≠ m;     i′_m = i_m + 1.
If ǫ′ is a path such that
    ǫ′ ∈ E_{[β,i′]},   ǫ′ ≤ ǫ^{[i]},
then
    ǫ′ ∈ E_{[β,i]}.     (3.6.13)
Proof. By Lemma 3.6.1, for such a path ǫ′ there exist a path ǫ and a number l ∈ {1, …, p−1} such that

    ǫ ∈ E_{[β,i]},   ǫ_r = { ǫ′_r,       r = 1, …, l−1;
                             ǫ′_r − 1,   r = l, …, p−1.

Then, by Lemma 3.6.3, we have the inclusion (3.6.13).
Lemma 3.6.5 For any given m ∈ {1, …, p}, let i, i′ ∈ J be such that
    i′_s = i_s,  s ≠ m;     i′_m = i_m + 1.
Then
    E_{[β,i]} = {ǫ ∈ E_{[β,i′]} : ǫ ≤ ǫ^{[i]}}.
Proof. For i, i′ so defined, the inclusion
    {ǫ ∈ E_{[β,i′]} : ǫ ≤ ǫ^{[i]}} ⊂ E_{[β,i]}
is just a reformulation of Lemma 3.6.4. On the other hand, since i ≤ i′, it is clear that
    E_{[β,i]} ⊂ E_{[β,i′]},
and it remains to recall that, by (3.5.9), for ǫ ∈ E_{[β,i]} we have ǫ ≤ ǫ^{[i]}.
Set
    ω := (p+2, …, 2p+1),   ω ∈ J.
Then ω is the index in J with the maximal possible entries, i.e.,
    i ≤ ω   ∀i ∈ J.
Proposition 3.6.6 For any β, i ∈ J, we have
    E_{[β,i]} = {ǫ ∈ E_{[β,ω]} : ǫ ≤ ǫ^{[i]}}.
Proof. Since i ≤ ω, i.e.,
    i_s ≤ ω_s,   s = 1, …, p,
there exist a number N, a sequence of indices (i^{(ν)})_{ν=0}^N, and a sequence of numbers (m_ν)_{ν=1}^N such that
    i^{(0)} = i,   i^{(N)} = ω,
and
    i^{(ν)}_s = i^{(ν−1)}_s,       s ≠ m_ν;
    i^{(ν)}_s = i^{(ν−1)}_s + 1,   s = m_ν.
Since
    i ≤ i^{(1)} ≤ ⋯ ≤ i^{(N−1)} ≤ ω,
we have
    ǫ^{[i]} ≤ ǫ^{[i^{(1)}]} ≤ ⋯ ≤ ǫ^{[i^{(N−1)}]} ≤ ǫ^{[ω]},
and, by iterative use of Lemma 3.6.5, we obtain
    E_{[β,i]} = {ǫ ∈ E_{[β,i^{(1)}]} : ǫ ≤ ǫ^{[i]}}
              = {ǫ ∈ E_{[β,i^{(2)}]} : ǫ ≤ ǫ^{[i^{(1)}]}, ǫ ≤ ǫ^{[i]}} = {ǫ ∈ E_{[β,i^{(2)}]} : ǫ ≤ ǫ^{[i]}}
              = ⋯
              = {ǫ ∈ E_{[β,i^{(N−1)}]} : ǫ ≤ ǫ^{[i]}}
              = {ǫ ∈ E_{[β,ω]} : ǫ ≤ ǫ^{[i]}}.
Proposition 3.6.7 If
    ǫ^{[i]} ≤ ǫ^{[j]},   i, j ∈ J,
then
    E_{[β,i]} ⊂ E_{[β,j]}   ∀β ∈ J.
Proof. By Proposition 3.6.6, we have
    E_{[β,i]} = {ǫ ∈ E_{[β,ω]} : ǫ ≤ ǫ^{[i]}},   E_{[β,j]} = {ǫ ∈ E_{[β,ω]} : ǫ ≤ ǫ^{[j]}},
and it is clear that
    ǫ^{[i]} ≤ ǫ^{[j]}   ⇒   {ǫ ∈ E_{[β,ω]} : ǫ ≤ ǫ^{[i]}} ⊂ {ǫ ∈ E_{[β,ω]} : ǫ ≤ ǫ^{[j]}}.
3.7 Relation between the minors of Q and C
Definition 3.7.1 For i, j ∈ J, we write
    i ≼ j   ⇔   ǫ^{[i]} ≤ ǫ^{[j]},     (3.7.1)
or, equivalently,
    i ≼ j   ⇔   Σ_{s=1}^{p−t} min(i_{s+t}, p+1+s) ≤ Σ_{s=1}^{p−t} min(j_{s+t}, p+1+s),   t = 1, …, p−1.     (3.7.2)
Let us show the equivalence. By Definition 3.5.7,
    ǫ^{[i]} ≤ ǫ^{[j]}   ⇔   Σ_{s=1}^{r} min(i_{p−r+s}, p+1+s) ≤ Σ_{s=1}^{r} min(j_{p−r+s}, p+1+s),   r = 1, …, p−1.     (3.7.3)
To see that the inequalities (3.7.2) and (3.7.3) are equivalent, one should set r = p − t.
Proposition 3.7.2 For any p ∈ N, there exists a constant c_p such that if
    i, j ∈ J,   i ≼ j,
then
    Q(β, i) ≤ c_p Q(β, j)   ∀β ∈ J.     (3.7.4)
Proof. By Definition 3.7.1, by Proposition 3.6.7, and by Lemma 3.3.4, we have the implications
    i ≼ j   ⇒   ǫ^{[i]} ≤ ǫ^{[j]}   ⇒   E_{[β,i]} ⊂ E_{[β,j]}   ⇒   Q(β, i) ≤ c_p Q(β, j)   ∀β ∈ J.
Proposition 3.7.3 For any p ∈ N, there exists a constant c_p such that if
    i, j ∈ J,   i ≼ j,     (3.7.5)
then for any ν ≤ N − p + 1 we have
    C_{N−ν}(p, i) ≤ c_p C_{N−ν}(p, j).
Proof. If ν ≤ N − p + 1, then N − 1 ≥ ν + p − 2, and we find that
    C_{N−ν} := Π_{s=ν}^{N−1} [A D(ρ_s)] · A = K · Π_{s=ν}^{ν+p−2} [A D(ρ_s)] · A = K · Π_{s=1}^{p−1} [A D(ρ_{ν+s−1})] · A =: K · Q,
with some totally positive matrix K. By the CB-formula, making use of (3.7.4), we obtain
    C_{N−ν}(p, i) = Σ_{β∈J} K(p, β) Q(β, i) ≤ c_p Σ_{β∈J} K(p, β) Q(β, j) = c_p C_{N−ν}(p, j).
3.8 Index relations
3.8.1 The statement
Recall the definitions from §2.1:
    2p+1 := (1, …, 2p+1),   J := {j ⊂ 2p+1 : #j = p},   J_l := {j ∈ J : l ∉ j},   l = 1, …, 2p+1.
For i ∈ J_l we defined its l-complement i^l and its conjugate index i^* as
    i^l ∈ J_l,   i^l = 2p+1 \ {l} \ i;     i^* ∈ J,   i^* = (2p+2−i_p, …, 2p+2−i_1).
In this section we will prove the following.
Proposition 3.8.1 Let i ∈ J_l. Then
    i^{l_2} ≼ i^* ≼ i^{l_1},   l_1 ≤ p+1 ≤ l_2,
or, equivalently,
    Σ_{s=1}^{p−t} min(i^l_{s+t}, p+1+s) ≤ Σ_{s=1}^{p−t} min(i^*_{s+t}, p+1+s),   t = 1, …, p−1,   l ≥ p+1;     (3.8.1)
    Σ_{s=1}^{p−t} min(i^l_{s+t}, p+1+s) ≥ Σ_{s=1}^{p−t} min(i^*_{s+t}, p+1+s),   t = 1, …, p−1,   l ≤ p+1.     (3.8.2)
We will prove this statement in another, equivalent formulation. It is clear that we may compare the sums of the shifted values
    min(ĵ_{s+t}, s),   ĵ_s := j_s − (p+1).
We define, therefore, the sets of shifted indices
    π_p := (−p, …, p),   J_p := {j ⊂ π_p : #j = p},   J^l_p := {j ∈ J_p : l ∉ j},   l = −p, …, p.
For j ∈ J^l_p, its l-complement and its conjugate index are defined respectively as
    j^l ∈ J^l_p,   j^l := π_p \ {l} \ j;     j^* ∈ J_p,   j^* := −j.     (3.8.3)
For j ∈ J_p we set also
    |j| := Σ_{s=1}^{p} j_s.     (3.8.4)
Thus, Proposition 3.8.1 follows from
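For small p, the shifted index machinery (3.8.3)–(3.8.4) is easy to encode and experiment with; the following sketch (function names are ours, purely for illustration) realizes J_p, the l-complement, and the conjugate index:

```python
from itertools import combinations

def J(p):
    """The set J_p: all p-element subsets of pi_p = (-p, ..., p), as increasing tuples."""
    return [tuple(c) for c in combinations(range(-p, p + 1), p)]

def complement(j, l, p):
    """The l-complement j^l = pi_p \\ {l} \\ j (defined for l not in j), cf. (3.8.3)."""
    return tuple(sorted(set(range(-p, p + 1)) - {l} - set(j)))

def conjugate(j):
    """The conjugate index j* = -j, kept increasing."""
    return tuple(sorted(-x for x in j))

def weight(j):
    """|j| = j_1 + ... + j_p, cf. (3.8.4)."""
    return sum(j)

# example with p = 2: i = (-2, 1) gives i^0 = (-1, 2) = i*
i = (-2, 1)
assert complement(i, 0, 2) == (-1, 2)
assert conjugate(i) == (-1, 2)
assert weight(complement(i, 0, 2)) == weight(conjugate(i))
```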
Proposition 3.8.2 Let i ∈ J^l_p. Then
    i^{l_2} ≼ i^* ≼ i^{l_1},   l_1 ≤ 0 ≤ l_2,
or, equivalently,
    Σ_{s=1}^{p−t} min(i^l_{s+t}, s) ≤ Σ_{s=1}^{p−t} min(i^*_{s+t}, s),   t = 0, …, p−1,   l ≥ 0;     (3.8.5)
    Σ_{s=1}^{p−t} min(i^l_{s+t}, s) ≥ Σ_{s=1}^{p−t} min(i^*_{s+t}, s),   t = 0, …, p−1,   l ≤ 0.     (3.8.6)
Remark 3.8.3 We have also added the inequalities with t = 0.
Now we start with the proof of Proposition 3.8.2.
3.8.2 Proof: The case l = 0
Definition 3.8.4 Let any p ∈ N and any j ∈ J_p be given. For t = 0, …, p−1 define the indices
    j^{[t]} ∈ J_{p−t},    j^{[t]}_s := min(j_{s+t}, s),              s = 1, …, p−t;
    j^{[−t]} ∈ J_{p−t},   j^{[−t]}_s := max(j_s, −(p−t)+(s−1)),      s = 1, …, p−t.     (3.8.7)
Since the components of j ∈ J_p satisfy
    −p+(s−1) ≤ j_s ≤ s,     (3.8.8)
we have
    j^{[−0]} = j^{[0]} = j.
For s = 1, …, p−t, due to (3.8.8), we also have
    −(p−t) ≤ min(j_{s+t}, s) ≤ p−t,   −(p−t) ≤ max(j_s, −(p−t)+(s−1)) ≤ p−t,
i.e., the inclusions j^{[t]}, j^{[−t]} ∈ J_{p−t} in (3.8.7) really take place.
The following tables show what the indices j^{[t]} and j^{[−t]} look like:

    j^{[0]}   = ( j_1,          j_2,          …,  j_{p−1},            j_p )
    j^{[1]}   = ( min(j_2, 1),  min(j_3, 2),  …,  min(j_{p−1}, p−2),  min(j_p, p−1) )
    j^{[2]}   = ( min(j_3, 1),  min(j_4, 2),  …,  min(j_p, p−2) )
    ……………………………………………………………………………………
    j^{[p−1]} = ( min(j_p, 1) )

and

    j^{[0]}      = ( j_1,             j_2,             …,  j_{p−1},           j_p )
    j^{[−1]}     = ( max(j_1, −p+1),  max(j_2, −p+2),  …,  max(j_{p−1}, −1) )
    j^{[−2]}     = ( max(j_1, −p+2),  …,  max(j_{p−2}, −1) )
    ……………………………………………………………………………………
    j^{[−(p−1)]} = ( max(j_1, −1) )

In the notation (3.8.7), (3.8.4), we have the equality
    Σ_{s=1}^{p−t} min(j_{s+t}, s) =: |j^{[t]}|,
so that (for l = 0) the statement (3.8.5) to be proved is
    |(i^0)^{[t]}| = |(i^*)^{[t]}|,   t = 0, …, p−1,   ∀i ∈ J^0_p.     (3.8.9)
Lemma 3.8.5 For any j ∈ J_p,
    j^{[t+1]} = (j^{[t]})^{[1]},   j^{[−t−1]} = (j^{[−t]})^{[−1]},   t = 0, …, p−2.     (3.8.10)
Proof. Clear from the tables.
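Lemma 3.8.5 can also be confirmed by brute force for small p. In the sketch below (our own encoding), `up` and `down` realize the one-step operations j ↦ j^{[1]} and j ↦ j^{[−1]} of Definition 3.8.4 at the current level, and iterating them is checked against the closed formulas (3.8.7) for p = 4:

```python
from itertools import combinations

def J(p):
    """All p-element subsets of pi_p = (-p, ..., p), as increasing tuples."""
    return [tuple(c) for c in combinations(range(-p, p + 1), p)]

def up(j):
    """j -> j^[1]: entry s is min(j_{s+1}, s), s = 1, ..., len(j)-1 (1-based)."""
    return tuple(min(j[s], s) for s in range(1, len(j)))

def down(j):
    """j -> j^[-1]: entry s is max(j_s, -(n-1)+(s-1)), with n = len(j)."""
    n = len(j)
    return tuple(max(j[s], -(n - 1) + s) for s in range(n - 1))

# check (3.8.10): iterating the one-step maps reproduces the closed formulas
p = 4
for j in J(p):
    for t in range(1, p):
        direct_up = tuple(min(j[s + t - 1], s) for s in range(1, p - t + 1))
        direct_dn = tuple(max(j[s], -(p - t) + s) for s in range(p - t))
        ju, jd = j, j
        for _ in range(t):
            ju, jd = up(ju), down(jd)
        assert ju == direct_up and jd == direct_dn
print("ok")
```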
Lemma 3.8.6 For any given p and any i ∈ J^0_p, we have
    (a) i^{[−1]} ∈ J^0_{p−1},   (b) (i^{[−1]})^0 = (i^0)^{[1]},   (c) (i^{[−1]})^* = (i^*)^{[1]}.     (3.8.11)
Proof. We prove first the equalities (3.8.11.a)–(3.8.11.b). By definition, for i ∈ J^0_p we have
    #i = #i^0 = p,   i ∪ i^0 = π_p \ {0},   i ∩ i^0 = ∅.
Let
    i_p =: q,   i^0_1 + p =: r.
Then we have two cases:
    (1) i_p = −1;   (2) i_p > 0.
Case 1: i_p = −1. In this case i^0_1 = 1, and the only possible entries of i and i^0 are the following:
    i = (i_1, …, i_p) = (−p, …, −1),   i^0 = (i^0_1, …, i^0_p) = (1, …, p).
In this case we have
    i^{[−1]} = (−p+1, …, −1),   (i^0)^{[1]} = (1, …, p−1),
and the equalities (3.8.11.a)–(3.8.11.b) are evident.
Case 2: i_p > 0. In this case i^0_1 < 0, and the entries of i and i^0 are located on π_p as follows: i_1, …, i_r occupy the points −p, …, −p+r−1; then i^0_1 = −p+r; further entries of i and i^0 alternate through 0 up to i_p = q; finally, i^0_{q+1}, …, i^0_p occupy q+1, …, p.
In this case

    i^{[−1]}_s := max(i_s, −p+s) = { −p+s,   s = 1, …, r,
                                     i_s,    s = r+1, …, p−1;

    (i^0)^{[1]}_s := min(i^0_{s+1}, s) = { i^0_{s+1},   s = 1, …, q−1,
                                           s,           s = q, …, p−1.

Briefly, this can be written as
    i^{[−1]} = i ∪ {i^0_1} \ {−p} \ {i_p},   (i^0)^{[1]} = i^0 ∪ {i_p} \ {p} \ {i^0_1}.
It follows that
    i^{[−1]} ∩ (i^0)^{[1]} = ∅,   i^{[−1]} ∪ (i^0)^{[1]} = π_{p−1} \ {0},
which is equivalent to (3.8.11.a)–(3.8.11.b).
The equality (3.8.11.c) is straightforward:
    (i^{[−1]})^*_s := −i^{[−1]}_{p−s} := −max(i_{p−s}, −(p−1)+(p−s−1)) = −max(i_{p−s}, −s)
                    = min(−i_{p−s}, s) = min(−i_{p+1−(s+1)}, s) =: min(i^*_{s+1}, s) =: (i^*)^{[1]}_s.
Lemma 3.8.7 For any p ∈ N, any i ∈ J^0_p, and any t = 0, …, p−1, we have
    (a) i^{[−t]} ∈ J^0_{p−t},   (b) (i^{[−t]})^0 = (i^0)^{[t]},   (c) (i^{[−t]})^* = (i^*)^{[t]}.
Proof. Follows from Lemmas 3.8.5–3.8.6.
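The three assertions of Lemma 3.8.7 can likewise be verified exhaustively for small p; a sketch (our own encoding of the operations of Definition 3.8.4 and of the 0-complement):

```python
from itertools import combinations

def up(j):
    """j -> j^[1] of Definition 3.8.4 at the current level."""
    return tuple(min(j[s], s) for s in range(1, len(j)))

def down(j):
    """j -> j^[-1] of Definition 3.8.4 at the current level."""
    n = len(j)
    return tuple(max(j[s], -(n - 1) + s) for s in range(n - 1))

def comp0(j):
    """0-complement at level q = len(j): pi_q \\ {0} \\ j."""
    q = len(j)
    return tuple(sorted(set(range(-q, q + 1)) - {0} - set(j)))

def conj(j):
    """Conjugate index j* = -j, kept increasing."""
    return tuple(sorted(-x for x in j))

p = 4
for i in combinations(range(-p, p + 1), p):
    if 0 in i:
        continue                     # restrict to i in J^0_p
    lhs, c, s = i, comp0(i), conj(i)
    for t in range(p):
        assert 0 not in lhs          # (a)  i^[-t] in J^0_{p-t}
        assert comp0(lhs) == c       # (b)  (i^[-t])^0 = (i^0)^[t]
        assert conj(lhs) == s        # (c)  (i^[-t])^* = (i^*)^[t]
        if t < p - 1:
            lhs, c, s = down(lhs), up(c), up(s)
print("ok")
```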
Lemma 3.8.8 For any p ∈ N and any j ∈ J^0_p,
    |j^0| = |j^*|.
Proof. Since j ∪ j^0 = π_p \ {0} and j^* = −j, we have
    |j| + |j^0| = |π_p| = 0,   |j| + |j^*| = 0,
i.e., |j^0| = |j^*|.
Now we are ready to prove the case l = 0 of Proposition 3.8.2.
Lemma 3.8.9 For any i ∈ J^0_p,
    i^0 ≍ i^*,     (3.8.12)
or, equivalently,
    Σ_{s=1}^{p−t} min(i^0_{s+t}, s) = Σ_{s=1}^{p−t} min(i^*_{s+t}, s),   t = 0, …, p−1.     (3.8.13)
Proof. By Lemma 3.8.7, for any i ∈ J^0_p and any t = 0, …, p−1, the index j := i^{[−t]} satisfies the relations
    j ∈ J^0_{p−t},   (i^0)^{[t]} = j^0,   (i^*)^{[t]} = j^*.
By Lemma 3.8.8, we have
    |j^*| = |j^0|   ∀j ∈ J^0_{p−t}.
Thus,
    |(i^0)^{[t]}| = |(i^*)^{[t]}|,   t = 0, …, p−1,
and that is equivalent to (3.8.13).
This finishes the proof of Proposition 3.8.2 for l = 0.
3.8.3 Proof: The case l ≠ 0
It is clear that the following implications are valid:
    (a) i ≤ j  ⇒  i ≼ j;     (b) i = j  ⇒  i ≍ j.
Case 1: i ∈ J^l_p ∩ J^0_p. This is the case if 0 ∉ i. Since for i ∈ J^l_p, by definition (3.8.3), we have
    i^l := π_p \ i \ {l},
it is easy to see that
    i^{l_2} ≤ i^0 ≤ i^{l_1}   if   l_1 < 0 < l_2,
and, respectively,
    i^{l_2} ≼ i^0 ≼ i^{l_1}   if   l_1 < 0 < l_2.
Since i ∈ J^0_p, we have by Lemma 3.8.9
    i^0 ≍ i^*,
therefore
    i^{l_2} ≼ i^* ≼ i^{l_1}   if   l_1 < 0 < l_2,   i ∈ J^0_p ∩ J^{l_ν}_p.
Case 2: i ∈ J^l_p, i ∉ J^0_p. This is the case if 0 ∈ i. Then we have the inclusions
    i^l ∈ J^l_p,   i^l ∈ J^0_p.
Set
    j := i \ {0} ∪ {l}.
Then
    (1)  j ∈ J^0_p,   j^0 = i^l;     (2)  j < i  if  l < 0,   i < j  if  l > 0.     (3.8.14)
From the first part of these relations, by Lemma 3.8.9, it follows that
    i^l ≍ j^0 ≍ j^*.
From the second part one obtains
    i^* < j^*  if  l < 0;     j^* < i^*  if  l > 0,
whence
    i^* ≼ j^*  if  l < 0;     j^* ≼ i^*  if  l > 0.
Thus,
    i^{l_2} ≼ i^* ≼ i^{l_1}   if   l_1 < 0 < l_2,   i ∈ J^{l_ν}_p,   i ∉ J^0_p.
Proposition 3.8.2, hence Proposition 3.8.1, are proved.
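Proposition 3.8.2 also lends itself to an exhaustive check for small p; the sketch below (our encoding of the sums in (3.8.5)–(3.8.6)) verifies both families of inequalities for p = 3 and all admissible l:

```python
from itertools import combinations

def Sigma(j, t):
    """Sigma_t(j) = sum_{s=1}^{p-t} min(j_{s+t}, s); cf. (3.8.5)."""
    p = len(j)
    return sum(min(j[s + t - 1], s) for s in range(1, p - t + 1))

def preceq(i, j):
    """i 'preceq' j: the sums compare for every t = 0, ..., p-1."""
    return all(Sigma(i, t) <= Sigma(j, t) for t in range(len(i)))

p = 3
pi = set(range(-p, p + 1))
ok = True
for l in range(-p, p + 1):
    for i in combinations(sorted(pi - {l}), p):      # i in J^l_p
        il = tuple(sorted(pi - {l} - set(i)))        # l-complement i^l
        istar = tuple(sorted(-x for x in i))         # conjugate i^*
        if l >= 0:
            ok = ok and preceq(il, istar)            # (3.8.5)
        if l <= 0:
            ok = ok and preceq(istar, il)            # (3.8.6)
assert ok
print("ok")
```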
3.9 Completion of the proof of Theorem Z
Theorem Z [§1.9]. There exists a constant c_p depending only on p such that the inequalities
    (1/l!) |σ^{(l)}(t_ν)| =: |z^{(l)}_ν| ≤ c_p,   l = p+1, …, 2p+1,   ν = 0, …, N−p+1,
hold uniformly in ν and l.
Proof. By Theorem 2.5.1, we have
    |z^{(l)}_ν| ≤ max_{j∈J_l} C_{N−ν}(p, j^l) / C_{N−ν}(p, j^*),   l = 1, …, 2p+1.
By Proposition 3.8.1,
    j^l ≼ j^*   if   l ≥ p+1,   j ∈ J_l,
and by Proposition 3.7.3 this implies
    C_{N−ν}(p, j^l) ≤ c_p C_{N−ν}(p, j^*)   if   ν ≤ N−p+1.
3.10 Last but not least
In [B2 ] C. de Boor wrote:
“I offer the modest sum of m − 1972 ten dollar bills to the first person who
communicates to me a proof or a counterexample (but not both) of his or her
making of the following conjecture (known to be true when k = 2 or k = 3):
Conjecture. For a given n and t, let (λ_i φ_j) be the n × n matrix whose entries are given by λ_i φ_j = k ∫ N_{ik} N_{jk} / (t_{i+k} − t_i). Then
    sup_{n,t} ‖(λ_i φ_j)^{−1}‖_∞ < ∞.
Here m is the year (A.D.) of such communication.”
Added in proof. The cheque has been received. With m = 1999, and, to a nice surprise,
doubled, the modest sum turned out to be not that modest. Regarding the origin of the factor 2,
C. de Boor replied: “... well, about 5-6 years ago, I stated at some occasion that, given inflation
and all that, I was doubling that rate. In fact, Jia was kind enough to remind me of that.”
Chapter 4
Comments
4.1 A survey of earlier and related results.
4.1.1 Earlier results
Earlier, the mesh-independent bound (0.2.1) was proved for k = 2, 3, 4 (the case k = 1 is trivial). For k > 4, all previously known results proved the boundedness of ‖P_S‖_∞ only under certain restrictions on the mesh ∆. This included, in particular, meshes with multiple knots, which correspond to the spline spaces
    S_{k,m}(∆) := P_k(∆) ∩ C^{m−1}[a, b],   S_k(∆) := S_{k,k−1}(∆).
We summarize these results in two theorems. The number in the square brackets indicates
the year of the result.
Theorem A. Let K be one of the mesh classes given below. Then
    sup_{∆⊂K} sup_m ‖P_{S_{k,m}(∆)}‖_∞ < c_k(K)   ∀k ∈ N.

    (K1)  quasi-uniform:       h_max/h_min ≤ M, or like h_{i±1}/h_i < 1 + ǫ_k :
                               Domsta [72]; Douglas, Dupont, Wahlbin [75_1]; de Boor [76_3]; Demko [77]
    (K1′) quasi-geometric:     de Boor [76_3]
    (K2)  strictly geometric:  h_{i+1}/h_i = ρ, ρ > 0 :
                               Feng, Kozak [81]; Höllig [81]; Mityagin [83]; Jia [87]

Theorem B. If k, m are as given below, then
    sup_∆ ‖P_{S_{k,m}(∆)}‖_∞ < c_k.

    m = k−1,   k = 2 :        Ciesielski [63]
    m = k−1,   k = 3, 4 :     de Boor [68], [79]
    m = 0,     k ≥ 1 :        trivial
    m = 1,     k ≥ 2 :        de Boor [76_3]
    m = 2, 3 :                Zmatrakov, Subbotin [83]
    k > (m+1)^2 :             Shadrin [98]
4.1.2 L_2-projector onto finite element spaces.
The arguments used by Douglas, Dupont, Wahlbin [DDW_1], de Boor [B_3] and Demko [De] for proving the boundedness of ‖P_S‖_∞ for quasi-uniform meshes revealed that such boundedness has nothing to do with the particular spline nature. The essential structural requirements on a subspace S needed for these proofs can be summarized as follows:
    (B0)  S = span{φ_i},
    (B1)  supp φ_i < ∞,   #{φ_j : φ_j φ_i ≢ 0} ≤ k,
    (B2)  the local condition number κ(Φ) of Φ := {φ_i} is bounded, i.e., κ(Φ) ≤ d for some d,
    (B3)  the partition of the domain is quasi-uniform.
A general result (for quasi-uniform partitions) including also the multivariate case was
proved by Douglas, Dupont & Wahlbin in [DDW2 ], and in fact in an earlier paper by
Decloux [Dc].
In this connection, a natural question is whether the mesh-independent bound for P_S could be extended to (and perhaps more simply derived for) general finite element spaces. The answer is “no”.
More precisely, denote by S_{k,d} the set of all finite element spaces S that satisfy (B0)–(B2). Then, for k = 2 and any d > 36, we have
    sup_{S∈S_{k,d}} ‖P_S‖_p = ∞,   |1/p − 1/2| > 3/√d.
This result shows that the mesh-independent L∞ -boundedness of the L2 spline projector
is based on some peculiarities of the spline nature.
On the other hand, one can show that, for any k ∈ N and d ∈ R, d > k,
    sup_{S∈S_{k,d}} ‖P_S‖_p < c(k, d),   |1/p − 1/2| < 1/(2kd^2 ln d),
i.e., the L_p-boundedness of the spline projector P_S for p in some neighbourhood of p = 2 (proved earlier in [S_2]) is not something extraordinary.
See [S5 ] for details.
4.1.3 A general spline interpolation problem
C. de Boor’s problem is a particular case of a general problem concerned with spline interpolation.
For p ∈ [1, ∞] and f from the Sobolev space W^l_p[a, b], let s := s_{2k,∆}(f) be a spline of odd degree 2k−1 which interpolates f on ∆, i.e.,
    s ∈ S_{2k}(∆),   s|_∆ = f|_∆.
To obtain uniqueness, one should add some boundary conditions, e.g.,
    s^{(l)}(x)|_{x=a,b} = f^{(l)}(x)|_{x=a,b},   l = 1, …, k−1.
A general problem is to estimate the L_q-norm of such a spline-interpolation operator, i.e., to find
    L(k, l, m, p, q, K) := sup_{∆⊂K} sup_{‖f^{(l)}‖_p ≤ 1} ‖f^{(m)} − s^{(m)}_{2k,∆}(f)‖_q,
where K is a class of meshes; see [B_7], [Hö], [S_1], [Ma].
A particular problem is to determine whether the value
    L^*(k, l, p) := sup_∆ sup_{‖f^{(l)}‖_p ≤ 1} ‖s^{(l)}_{2k,∆}(f)‖_p     (4.1.1)
is bounded (independently of the mesh). A necessary condition was found to be
    L^*(k, l, p) < ∞   ⇒   W^l_p ∈ {W^{k−1}_∞, W^k_p, W^{k+1}_1}.     (4.1.2)
It was conjectured that this is also a sufficient condition. For l = k this particular problem is known to be equivalent to de Boor’s conjecture, since
    s^{(k)}_{2k,∆}(f) = P_{S_k(∆)}[f^{(k)}].     (4.1.3)
Now, by our Theorem I, due to (4.1.3), a particular converse of (4.1.2) follows:
    W^l_p = W^k_p   ⇒   L^*(k, l, p) < ∞.
The question whether such a converse is also true for the two other cases in (4.1.2),
    W^l_p ∈ {W^{k−1}_∞, W^{k+1}_1}   ⇒(?)   L^*(k, l, p) < ∞,
remains open.
4.1.4 A problem for the multivariate D^k-splines
The univariate splines can be defined through a variational approach, so the question arises whether it is the variational nature of splines that determines the mesh-independent boundedness of the spline orthoprojector. The answer is “no”, too.
For another class of variational splines, the so-called multivariate D^k-splines on a domain of R^n, the analogue of de Boor’s conjecture is false; see [S_4], [Ma]. In particular, in terms of the previous subsection, we have
    L^*(k, l, p) < ∞   ⇔   l = k, p = 2,   if   n > 4.
4.2 On de Boor’s Lemma 1.2.4
4.2.1 Gram-matrix and de Boor’s Lemma 1.2.4
A simple intermediate estimate,
    ‖P_S‖_∞ ≤ ‖G^{−1}‖_∞,
stated in Lemma 1.2.1, is a kind of folklore and has been used in most (but not all) papers on the subject cited in Theorems A–B above. C. de Boor [B_2] proved that the converse (not so simple) inequality
    ‖G^{−1}‖_∞ ≤ c_k ‖P_S‖_∞
is also valid, i.e., to quote [B_6], “in bounding ‖P_S‖ in the uniform norm, we are bounding ‖G^{−1}‖_∞, whether we want to or not”.
For k = 2, G is strictly diagonally dominant, and the direct estimate by Ciesielski [Ci] was
    ‖G^{−1}‖_∞ ≤ 3.     (4.2.1)
For k > 2, G fails to be diagonally dominant, so a different argument has to be used.
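For k = 2 the diagonal dominance behind (4.2.1) is easy to illustrate numerically. The sketch below is our own construction of the k = 2 Gramian G = ((M_i, N_j)) for an arbitrary interior-interval sequence with doubled end knots (an illustrative check, not part of the proof); the bound ‖G^{−1}‖_∞ ≤ 3 then reflects the row-sum gap 2/3 − 1/3 = 1/3:

```python
import random

def gram_k2(h):
    """Gramian G = ((M_i, N_j)) of the linear B-splines on intervals h[0..n-1],
    with doubled end knots; M_i = 2 N_i / (length of supp N_i)."""
    n = len(h)                                  # n intervals, n+1 hat functions
    G = [[0.0] * (n + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        left = h[i - 1] if i > 0 else 0.0
        right = h[i] if i < n else 0.0
        supp = left + right
        G[i][i] = 2.0 / 3.0                     # (M_i, N_i) = 2/3 for every mesh
        if i > 0:
            G[i][i - 1] = left / (3.0 * supp)
        if i < n:
            G[i][i + 1] = right / (3.0 * supp)
    return G

def inv_inf_norm(G):
    """max-row-sum norm of G^{-1} via Gauss-Jordan with partial pivoting."""
    n = len(G)
    A = [row[:] + [1.0 if i == j else 0.0 for j in range(n)]
         for i, row in enumerate(G)]
    for c in range(n):
        piv = max(range(c, n), key=lambda r: abs(A[r][c]))
        A[c], A[piv] = A[piv], A[c]
        d = A[c][c]
        A[c] = [x / d for x in A[c]]
        for r in range(n):
            if r != c:
                f = A[r][c]
                A[r] = [x - f * y for x, y in zip(A[r], A[c])]
    return max(sum(abs(x) for x in row[n:]) for row in A)

random.seed(1)
for _ in range(100):
    h = [random.uniform(1e-3, 1.0) for _ in range(8)]
    assert inv_inf_norm(gram_k2(h)) <= 3.0 + 1e-9
print("ok")
```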
For k = 3, 4, de Boor [B_1], [B_6] proved the boundedness of G^{−1} making use of his Lemma 1.2.4. Namely, he found that the following “comparatively simple” choice of the vector (a_i) works:
    k = 3:   (−1)^i a_i := 1 + (t_{i+2} − t_{i+1})^2 / [(t_{i+2} − t_i)(t_{i+3} − t_{i+1})],   supp M_i = [t_i, t_{i+3}];
    k = 4:   (−1)^i a_i := 3 + 4 (t_{i+3} − t_{i+1})^2 / [(t_{i+3} − t_i)(t_{i+4} − t_{i+1})],   supp M_i = [t_i, t_{i+4}].     (4.2.2)
This choice clearly provides the fulfillment of
    (a_3)   ‖a‖_∞ < c_max,
but makes the verification of (a_1)–(a_2) “comparatively” problematic. (The proof for k = 4, announced in 1979, has never been published.)
In this sense our proof is of an opposite nature. We offer a construction which gives a simple proof of (A_1)–(A_2), but encounters problems with (A_3) instead.
4.2.2 On the choice of the null-spline σ
The main difficulty in using Lemma 1.2.4 for estimating ‖G^{−1}‖_∞ is the problem of finding a vector a = (a_i) satisfying the condition (a_1) of this lemma, or, respectively, the problem of finding a spline φ = Σ a_i N_i satisfying the condition (A_1) of Lemma 1.3.1.
1) Since the Gram-matrix G is an oscillation matrix, a candidate for the vector a could be the eigenvector corresponding to the minimal eigenvalue. (By a theorem of Gantmacher–Krein, such an eigenvector is sign-alternating.)
2) Consider
    δ^{(k)} = {t_{−k+1} = ⋯ = t_0 = 0 < 1 = t_1 = ⋯ = t_k},
the mesh with the so-called Bernstein knots. In this case the B-spline basis reduces to the polynomials \binom{k−1}{i} x^i (1−x)^{k−1−i}.
For the Bernstein Gramian G_δ an explicit expression for the “minimal” eigenvector is available, namely
    a = (a_i)_{i=1}^{k},   a_i = (−1)^i \binom{k−1}{i−1}.
Also, it is known that the corresponding polynomial ψ(x) := Σ a_i N_i(x) is the Legendre polynomial
    ψ = c Ψ^{(k−1)},   Ψ(x) := [x(1−x)]^{k−1},
i.e., the (k−1)-st derivative of the null-spline Ψ of degree 2k−2.
Our null-spline σ may be viewed as a generalization of Ψ.
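The stated eigenvector can be checked in exact rational arithmetic. The sketch below is our own encoding of G_δ (via the formula for g_{ij} given in §4.5; indices run from 0 here, so the vector is the paper’s eigenvector up to sign); for k ≤ 5 the eigenvalue comes out as 1/\binom{2k−1}{k−1}, an observation of ours, and minimality of this eigenvalue is not verified here:

```python
from fractions import Fraction
from math import comb

def bernstein_gram(k):
    """Exact Gramian G_delta = ((M_i, N_j)), 0 <= i, j <= k-1, Bernstein knots."""
    return [[Fraction(k, 2 * k - 1) * comb(k - 1, i) * comb(k - 1, j)
             / comb(2 * k - 2, i + j) for j in range(k)] for i in range(k)]

for k in range(2, 6):
    G = bernstein_gram(k)
    # alternating binomial vector, the paper's (a_i) up to sign
    a = [Fraction((-1) ** i * comb(k - 1, i)) for i in range(k)]
    lam = Fraction(1, comb(2 * k - 1, k - 1))   # observed eigenvalue
    for i in range(k):
        assert sum(G[i][j] * a[j] for j in range(k)) == lam * a[i]
print("ok")
```

For k = 3 this gives the eigenpair λ = 1/10, a = (1, −2, 1), which reappears in Lemma 4.2.1 below.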
3) However, it turned out that the coefficients of the spline φ := σ^{(k−1)} have nothing to do (and could not have anything to do, see below) with the “minimal” eigenvector. Nevertheless, this choice provides the fulfillment of (A_1) in a simple and natural way.
4) Remark in retrospect. The “minimal” eigenvector (a_i) of G cannot be used in de Boor’s lemma. Recall that, in order to use this lemma, one should have the relations
    b = Ga,   max_{i,j} |a_i/b_j| < c_k.
For the “minimal” eigenvector (a_i) of G they would therefore require
    |a_max/a_min| < c′_k.
This is, however, not true, as the following lemma shows.
Lemma 4.2.1 Let (a_i) be the eigenvector of G_∆ corresponding to the minimal eigenvalue. Then, for k > 2,
    sup_∆ |a_max/a_min| = ∞.
Proof. Let ∆ = (t_i)_{i=0}^N and h_i = t_{i+1} − t_i. Then, e.g., for k = 3,

    G^* := lim_{h_{N−1}→0} ⋯ lim_{h_1→0} G_∆ = (1/10) ·
        [ 6 4            ]
        [   6 4          ]
        [     ⋱ ⋱        ]
        [        6 3 1   ]
        [        3 4 3   ]
        [        1 3 6   ],

the limit minimal eigenvalue is λ^*_min = 1/10, and the corresponding limit eigenvector is
    a^* = ((−x)^{N−1}, (−x)^{N−2}, …, x^2, −x, 1, −2, 1),   where 6x − 4 = x, i.e., x = 4/5.
Thus,
    sup_{#∆=N} |a_max/a_min| ≥ 2 · (5/4)^{N−1}.
4.3 Simplifications in particular cases
The most elaborate part of the proof of Theorem I, viz. Chapter 3, is concerned with the estimate
    max_{α∈J} R_q(α, i)/R_q(α, j) < c_p,   R_q := Π_{r=1}^{q} [A D(γ_r)] · A,
with q = p−1. The analysis would be simpler if we could take
    q = 0,   R_0 = A,     (4.3.1)
but we were forced to take q = p−1, since A in general has vanishing minors.
We indicate here the cases when the considerations of Chapter 3 starting with §3.3–§3.5 can be omitted.
In Cases 1 and 2 below, the choice (4.3.1) works. Case 3 uses q = p−1, but the only ingredient taken from Chapter 3 is the non-emptiness of the set J_{[β,i]} proved in §3.5.
1. Knots with multiplicity k − m, m ≤ (k+1)/2. Consider
    S_{k,m}(∆) := P_k(∆) ∩ C^{m−1}[a, b],
the spline space with the B-spline basis defined on the knot-sequence ∆ with knot multiplicity k − m. The following particular case of Theorem I does not rely on the analysis made in §3.3–§3.6.
Proposition 4.3.1 If m ≤ (k+1)/2, then
    sup_∆ ‖P_{S_{k,m}(∆)}‖_∞ < c_k.
The last step of the proof. For this space, the null-spline σ is a spline with (k−m)-multiple zeros on ∆. The matrix A, which connects the vectors z_ν of the non-zero derivatives of σ at t_ν by the rule z_{ν+1} = A z_ν, has the lower order
    A ∈ R^{(2m−1)×(2m−1)}.
It could be obtained from the matrix S by k − m successive transformations similar to those in §3.2.2. This gives the following criterion:
    A(α_1, …, α_q; β_1, …, β_q) > 0   iff   α_s ≤ β_{s+k−m},   s = 1, …, q−(k−m).     (4.3.2)
Here α, β are indices from I_{q,2m−1}; in particular, we have
    s ≤ α_s ≤ (2m−1) − (q−s).     (4.3.3)
If k − m ≥ q, then the condition on α, β in (4.3.2) is void. Now let
    (i)  k − m ≤ q − 1,     (ii)  m ≤ (k+1)/2.
Then
    α_s ≤ (2m−1) − (q−s) ≤ k − q + s ≤ s + m − 1 ≤ s + k − m ≤ β_{s+k−m},
where the first and last steps use (4.3.3), the second and fourth use (ii), and the third uses (i); i.e., the condition on α, β in (4.3.2) is fulfilled. Thus,
    A(α, β) > 0   ∀α, β,   if   m ≤ (k+1)/2,
and accordingly,
    |z^{(l)}_ν| ≤ max_{i∈J} C(p, i^l)/C(p, i^*) ≤ max_{α,β,γ,δ} A(α, β)/A(γ, δ) ≤ c_p.
2. The estimate of z^{(l)}_0. For ν = 0, the estimate |z^{(l)}_0| < c_p of Theorem Z (see §3.9) can also be proved without the analysis of §3.3–§3.8, making use of properties of the matrix A only.
Lemma 4.3.2 There exists a constant c_p depending only on p such that the inequalities
    (1/l!) |σ^{(l)}(t_ν)| =: |z^{(l)}_ν| ≤ c_p,   l = p+1, …, 2p+1,   ν = 0,
hold uniformly in l.
Proof. From 2.5.1, making use of the CB-formula, we obtain
    |z^{(l)}_0| = C(p, p^l)/C(p, p^*) ≤ max_{α∈J} A(α, p^l)/A(α, p^*),   l = p+1, …, 2p+1.     (4.3.4)
The criterion (see Lemma 3.2.9)
    A(α, i) > 0   iff   α_s ≤ i_{s+1}   ∀s,
easily gives the implication
    i ≤ j   ⇒   A(α, i) ≤ c_p A(α, j)   ∀α ∈ J.     (4.3.5)
It is not hard to see that, for two different l-complements of i ∈ J, we have
    i^{l_2} ≤ i^{l_1},   if   l_1 < l_2;
in particular,
    p^l ≤ p^{p+1} = p^*,   if   l ≥ p+1.     (4.3.6)
Altogether, (4.3.4)–(4.3.6) prove
    |z^{(l)}_0| ≤ c_p,   l = p+1, …, 2p+1.
3. The estimate in terms of a local mesh ratio. The next particular case of Theorem I needs no more than the non-emptiness of the set J_{[β,i]} proved in §3.5.
Proposition 4.3.3 Let L(M) be the class of meshes with bounded local mesh ratio, i.e.,
    L(M) := {∆ : max_{|ν−μ|=1} h_ν/h_μ ≤ M}.     (4.3.7)
Then
    sup_{∆∈L(M)} ‖P_{S_k(∆)}‖_∞ < c_k(M).
The last step of the proof. In §3.3 we proved the inequalities (3.3.6)
    c_p Σ_{α∈J_{[β,i]}} Π_{r=1}^{p−1} |γ_r|^{|α^{(r)}|} ≤ c_γ Q_γ(β, i) ≤ c′_p Σ_{α∈J_{[β,i]}} Π_{r=1}^{p−1} |γ_r|^{|α^{(r)}|}.
We recall that γ_r stands for the local mesh ratio ρ_ν with some ν, i.e.,
    γ_r := ρ_ν := h_ν/h_{ν+1},
that c_γ is a constant independent of β and i, and that the set J_{[β,i]} is always non-empty (see §3.5). On account of (4.3.7), this yields the estimate
    c_1(M, p) ≤ c_γ Q_γ(β, i) ≤ c_2(M, p)   ∀β, i ∈ J,
i.e.,
    max_{α∈J} Q(α, i)/Q(α, j) < c_p(M)   ∀i, j ∈ J.
4.4 Additional facts
Here we present some additional facts which we have not used at all in our proof of Theorem I but which could be useful in finding a simpler proof.
4.4.1 Orthogonality of φ ∈ S_k(∆) to S_{k−1}(∆)
For the Bernstein knots, φ, being the Legendre polynomial of degree k−1, is orthogonal to the polynomials of smaller degree. The following lemma generalizes this property to any ∆.
Lemma 4.4.1 The spline φ of degree k−1 on ∆ defined via (1.4.1)–(1.4.5) is orthogonal to all splines of degree k−2 on ∆, i.e.,
    (φ, s) = 0   ∀s ∈ S_{k−1}(∆).
Up to a constant factor, φ is the unique spline from S_k(∆) which possesses this property.
Proof. It can be shown (e.g., by integration by parts) that if a function f ∈ W^{k−1}_1[a, b] satisfies the conditions
    f(t_ν) = 0,   ν = 0, …, N;     f^{(l)}(t_0) = f^{(l)}(t_N) = 0,   l = 1, …, k−2,     (4.4.1)
then
    (f^{(k−1)}, s) = 0   ∀s ∈ S_{k−1}(∆).
Since σ satisfies (4.4.1) (these are the same as (1.4.2)–(1.4.3)), and since φ := σ^{(k−1)}, the statement follows.
4.4.2 Null-splines with Birkhoff boundary conditions at t_0
Let i ∈ J be any index, and let σ̂ ∈ S_{2k−1}(∆) be the null-spline that satisfies the following conditions:
    σ̂(t_ν) = 0,   ν = 0, …, N;
    σ̂^{(i_s)}(t_0) = σ̂^{(s)}(t_N) = 0,   s = 1, …, k−2;     (4.4.2)
    (1/(k−1)!) σ̂^{(k−1)}(t_N) = 1.
In comparison with the null-spline σ defined in (1.4.2)–(1.4.4), we have changed, at the left endpoint t_0, the Hermite boundary conditions (1.4.3) into Birkhoff boundary conditions. The spline σ̂ also exists and is unique.
Lemma 4.4.2 We have the equalities
    (1/l!) |σ̂^{(l)}(t_0)| · |h_0|^{l−k+1} =: |ẑ^{(l)}_0| = C(p, i^l)/C(p+1, i′),   l ∉ i.     (4.4.3)
Proof. Let p := k − 2, and let
    i := (i_1, …, i_p)
be the index whose components are the orders of the derivatives involved in (4.4.2). Then we can find ẑ_0 as a solution of a system of linear equations similar to (2.2.11), and, as in the proof of Theorem 2.3.5, one obtains
    |ẑ^{(l)}_0| = C(p, i^l)/C(p+1, i′).
Lemma 4.4.2 is of some interest for the following reasons. In Theorem 2.3.5 we established that
    |z^{(l)}_ν| ≤ max_{i∈J_l} C(p, i^l)/C(p+1, i′).
Therefore, by (4.4.3), we have the estimate
    |z^{(l)}_ν| ≤ max_{σ̂} |ẑ^{(l)}_0|,
where the maximum is taken over all null-splines σ̂ with the various Birkhoff boundary conditions in (4.4.2). Maybe it is possible to obtain an easier proof of the inequality
    |ẑ^{(l)}_0| < c_p,   l ≥ p+1,
for the left endpoint, as was the case for |z^{(l)}_0| in Lemma 4.3.2.
4.4.3 Further properties of the matrices C
For x = (x^{(l)}) ∈ R^n, S^−(x) and S^+(x) denote the minimal, respectively maximal, number of sign changes in the sequence x.
Lemma 4.4.3 For any ν, the matrix C := C_{N−ν} is similar to its inverse.
Proof. By 2.4.1, we have C^{−1} = (D_0 F)^{−1} C^* (D_0 F).
The fact that C is an oscillation matrix permits the following conclusion.
Lemma 4.4.4 For any ν, the spectrum of C_{N−ν} ∈ R^{(2p+1)×(2p+1)} consists of 2p+1 distinct positive numbers
    0 < λ_1 < ⋯ < λ_{2p+1};
moreover, by Lemma 4.4.3,
    λ_s = 1/λ_{2p+2−s},   λ_{p+1} = 1.
If {u_{ν,s}} is a corresponding sequence of eigenvectors of C_{N−ν}, then
    S^−(u_{ν,s}) = S^+(u_{ν,s}) = s − 1,   s = 1, …, 2p+1.
The fact that, for any ν, the solution z_ν of the equations
    C_{N−ν} z_ν = z_N
remains bounded at least in the second half of its components indicates that, in the expansion
    z_ν = Σ_{s=1}^{2p+1} a_s u_{ν,s},
the eigenvector u_{ν,p+1} corresponding to the eigenvalue 1 dominates in a sense. Here is one more piece of evidence for this “dominance”.
Lemma 4.4.5 For any ν, we have
    S^−(z_ν) = S^+(z_ν) = p   [= S(u_{ν,p+1})].
Proof. By the Budan–Fourier theorem for splines [BS], with p := k − 2 we obtain
    Z_σ(a, b) ≤ Z_{σ^{(2p+2)}}(a, b) + S^−[σ(a+), …, σ^{(2p+2)}(a+)] − S^+[σ(b−), …, σ^{(2p+2)}(b−)],     (4.4.4)
where Z_f(a, b) stands for the number of zeros of f on the interval (a, b), counting multiplicities. Also, by Lemma 1.6.1,
    Z_σ(t_ν, t_μ) = Z_{σ^{(2p+2)}}(t_ν, t_μ)   ∀ν, μ,
and the boundary conditions (1.4.2)–(1.4.3) say that
    S^−[σ(t_0+), …, σ^{(2p+2)}(t_0+)] ≤ p+1 ≤ S^+[σ(t_N−), …, σ^{(2p+2)}(t_N−)].
Taking now (4.4.4) with
    1) a = t_0, b = t_N;   2) a = t_0, b = t_ν;   3) a = t_ν, b = t_N
successively, we obtain
    S^+[σ(t_ν−0), …, σ^{(2p+2)}(t_ν−0)] = S^−[σ(t_ν+0), …, σ^{(2p+2)}(t_ν+0)] = p+1   ∀ν.
Since
    σ^{(l)}(t_ν−0) = σ^{(l)}(t_ν+0),   l = 1, …, 2p+1,
and since
    σ(t_ν−0) = σ(t_ν+0) = 0,   sign σ^{(2p+2)}(t_ν−0) = −sign σ^{(2p+2)}(t_ν+0),
we conclude that
    S[σ′(t_ν), …, σ^{(2p+1)}(t_ν)] = p   ∀ν.
This, in view of the relations
    z^{(l)}_ν = const · σ^{(l)}(t_ν),   l = 1, …, 2p+1,
proves the statement.
4.5
On the constant ck
There are two constants in de Boor’s problem:
1) the norm of the orthoprojector
$$c_{k,\Delta}[P] := \|P_{S_k(\Delta)}\|_\infty, \qquad c_k[P] := \sup_\Delta c_{k,\Delta}[P];$$
2) the norm of the inverse of the B-spline Gramian
$$c_{k,\Delta}[G] := \|G_\Delta^{-1}\|_\infty, \qquad c_k[G] := \sup_\Delta c_{k,\Delta}[G].$$
Our method, based on properties of the spline $\phi := \phi_\Delta := \sum_j a_j(\phi_\Delta)N_j$, also provides
3) the constant
$$c_{k,\Delta}[\phi] := \max_{i,j} \frac{|a_j(\phi_\Delta)|}{|(M_i,\phi_\Delta)|}, \qquad c_k[\phi] := \sup_\Delta c_{k,\Delta}[\phi].$$
These constants are related by the inequalities
$$c_k[P] \le c_k[G] \le c_k[\phi], \qquad (4.5.1)$$
and we proved in Theorem I that
$$c_k[\phi] \le c_k.$$
It is of course possible to estimate all the constants involved in the proof, and hence the final constant $c_k$, but we find it more useful to give a comparative analysis of the constants in (4.5.1).
1. Lower bounds for $c_k[G]$ and $c_k[\phi]$. Consider
$$\delta^{(k)} := \{t_{-k+1} = \dots = t_0 = 0 < 1 = t_1 = \dots = t_k\},$$
the mesh δ with the Bernstein knots. In this case the corresponding B-splines are simply the polynomials
$$N_i(x) = \binom{k-1}{i}x^i(1-x)^{k-1-i}, \qquad M_i(x) = kN_i(x),$$
and the Gram matrix $G_\delta$ is given by
$$G_\delta := \{(M_i,N_j)\} = (g_{ij})_{i,j=0}^{k-1}, \qquad g_{ij} = \frac{k}{2k-1}\cdot\frac{\binom{k-1}{i}\binom{k-1}{j}}{\binom{2k-2}{i+j}}.$$
The first values for the constants are as follows:

  k             |  2    3       4      5        6         7          8          9
  c_{k,δ}[G]    |  3   13   41 2/3   171   583 4/5   2364 1/5   8373 6/7   33737 2/7
  c_{k,δ}[φ]    |  3   20     105    756     4620      34320     225225     1701700

(4.5.2)
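The tabulated values of $c_{k,\delta}[G] = \|G_\delta^{-1}\|_\infty$ can be checked directly from the closed form of $g_{ij}$ using exact rational arithmetic. A minimal sketch (the function names are ours, not the paper’s):

```python
# Sketch: build G_delta for the Bernstein knots from the closed form
# g_ij = k/(2k-1) * C(k-1,i)C(k-1,j)/C(2k-2,i+j), then evaluate
# c_{k,delta}[G] = ||G_delta^{-1}||_inf exactly over the rationals.
from fractions import Fraction
from math import comb

def bernstein_gramian(k):
    c = Fraction(k, 2 * k - 1)
    return [[c * comb(k - 1, i) * comb(k - 1, j) / comb(2 * k - 2, i + j)
             for j in range(k)] for i in range(k)]

def inverse(A):
    """Gauss-Jordan inverse with exact Fraction arithmetic."""
    n = len(A)
    M = [list(row) + [Fraction(int(i == j)) for j in range(n)]
         for i, row in enumerate(A)]
    for col in range(n):
        piv = next(r for r in range(col, n) if M[r][col] != 0)
        M[col], M[piv] = M[piv], M[col]
        p = M[col][col]
        M[col] = [x / p for x in M[col]]
        for r in range(n):
            if r != col and M[r][col] != 0:
                f = M[r][col]
                M[r] = [x - f * y for x, y in zip(M[r], M[col])]
    return [row[n:] for row in M]

def inf_norm(A):
    # max-row-sum norm, i.e. the operator norm on l_infinity
    return max(sum(abs(x) for x in row) for row in A)
```

For k = 2 this yields $\|G_\delta^{-1}\|_\infty = 3$ and for k = 3 it yields 13, matching the first two table entries.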
They satisfy the relations
$$\tfrac{1}{2}\binom{2k}{k} < c_{k,\delta}[G] < \binom{2k}{k}, \qquad c_{k,\delta}[G] \sim k^{-1/2}4^k,$$
$$c_{k,\delta}[\phi] = \binom{2k-1}{k-1}\cdot\binom{k-1}{[(k-1)/2]}, \qquad c_{k,\delta}[\phi] \sim k^{-1}8^k.$$
To find $c_{k,\delta}[\phi]$, we have used the formula
$$c_{k,\delta}[\phi] = \lambda_{\min}^{-1}\cdot\max_{i,j}|a_i/a_j|,$$
where $\lambda_{\min}$ is the minimal eigenvalue of $G_\delta$, and $(a_i) := \big((-1)^i\binom{k-1}{i-1}\big)$ is the corresponding eigenvector.
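As a sanity check (ours, not the paper’s), exact arithmetic confirms that $(a_i)$ is indeed an eigenvector of $G_\delta$; since $\max_{i,j}|a_i/a_j| = \binom{k-1}{[(k-1)/2]}$, the closed form above forces $\lambda_{\min} = 1/\binom{2k-1}{k-1}$, which the computation verifies:

```python
# Sketch: verify that a_i = (-1)^i C(k-1, i-1), i = 1..k, is an eigenvector
# of the Bernstein-knot Gramian with eigenvalue 1/C(2k-1, k-1), and that
# lambda_min^{-1} * max|a_i/a_j| reproduces C(2k-1,k-1)*C(k-1,[(k-1)/2]).
from fractions import Fraction
from math import comb

def bernstein_gramian(k):
    c = Fraction(k, 2 * k - 1)
    return [[c * comb(k - 1, i) * comb(k - 1, j) / comb(2 * k - 2, i + j)
             for j in range(k)] for i in range(k)]

def c_k_phi(k):
    G = bernstein_gramian(k)
    a = [(-1) ** i * comb(k - 1, i - 1) for i in range(1, k + 1)]
    lam = Fraction(1, comb(2 * k - 1, k - 1))   # candidate lambda_min
    Ga = [sum(G[i][j] * a[j] for j in range(k)) for i in range(k)]
    assert Ga == [lam * x for x in a], "a is not an eigenvector"
    return (1 / lam) * max(abs(Fraction(ai, aj)) for ai in a for aj in a)
```

For k = 2, …, 5 this returns 3, 20, 105, 756, matching the row for $c_{k,\delta}[\phi]$ in (4.5.2).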
The first values and the two-sided estimates for $c_{k,\delta}[G]$ were obtained with the help of the MAPLE package. It is possible to find an explicit expression for this constant, too.
2. Lower bound for $c_k[P]$. For the Bernstein knots, $P_S$ is simply the orthoprojector onto the space $\mathcal{P}_k$ of polynomials, and in this case
$$c_{2,\delta}[P] = 1\tfrac{2}{3}, \qquad c_{k,\delta}[P] \sim \sqrt{k}.$$
For $k = 2$, K. Oskolkov [Os] improved the lower bound $1\tfrac{2}{3}$ and showed that
$$c_2[P] \ge 3. \qquad (4.5.3)$$
His method extends easily to arbitrary $k$.
Lemma 4.5.1 For any $k$,
$$c_k[P] \ge 2k-1. \qquad (4.5.4)$$
Proof. For $f \in L_\infty$, let its orthoprojection $P_S(f)$ onto $S_k(\Delta_N)$ have the expansion
$$P_S(f,x) = \sum_{j=1}^{N'} a_j(f,\Delta_N)\,N_j(x).$$
Then the value of $P_S(f,x)$ at the left endpoint $x = t_1$ of $\Delta_N$ is equal to the first coefficient of this expansion, i.e.,
$$P_S(f,t_1) = a_1(f,\Delta_N).$$
Therefore,
$$\|P_S(f)\|_\infty \ge |a_1(f,\Delta_N)|,$$
and it follows that
$$\|P_{S_k(\Delta_N)}\| \ge K(\Delta_N), \qquad K(\Delta_N) := \sup_{\|f\|_\infty \le 1} |a_1(f,\Delta_N)|.$$
Now let
$$\Delta_N = (t_i)_1^N, \qquad \Delta_{N+1} = \{t_0\} \cup \Delta_N, \qquad h := t_1 - t_0.$$
Then, for the corresponding Gramians $G_N$ and $G_{N+1}$, we have the following relation:
$$\lim_{h\to 0} G_{N+1} = \begin{pmatrix} b_1 & b_2 & 0 & \cdots & 0 \\ 0 & & & & \\ \vdots & & G_N & & \\ 0 & & & & \end{pmatrix}.$$
In the same way as in [Os], one can prove the inequality
$$\lim_{h\to 0} K(\Delta_{N+1}) \ge 1/b_1 + (b_2/b_1)K(\Delta_N).$$
This implies the estimate
$$K_{N+1} \ge 1/b_1 + (b_2/b_1)K_N, \qquad K_N := \sup_{\#\Delta_N = N} K(\Delta_N),$$
and as a consequence
$$\lim_{N\to\infty} K_N \ge \frac{1}{b_1}\sum_{s=0}^{\infty}(b_2/b_1)^s = \frac{1/b_1}{1-b_2/b_1} = \frac{1}{b_1-b_2}.$$
For any $k$, the corresponding values $b_1, b_2$ are easily computed as
$$b_1 = k\int_0^1 x^{k-1}x^{k-1}\,dx = \frac{k}{2k-1}, \qquad b_2 = 1-b_1 = \frac{k-1}{2k-1},$$
so that
$$\lim_{N\to\infty} K_N \ge 2k-1.$$
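The fixed point of the recursion $K \mapsto 1/b_1 + (b_2/b_1)K$ can also be checked numerically; a small sketch (ours) using the exact endpoint values $b_1 = k/(2k-1)$, $b_2 = (k-1)/(2k-1)$:

```python
# Sketch: iterate K <- 1/b1 + (b2/b1) K from K = 0 and confirm that it
# approaches the closed-form fixed point 1/(b1 - b2) = 2k - 1 from below.
from fractions import Fraction

def limit_lower_bound(k, iterations=200):
    b1 = Fraction(k, 2 * k - 1)   # b1 = k * integral_0^1 x^{2k-2} dx
    b2 = 1 - b1                    # b2 = (k-1)/(2k-1)
    assert 1 / (b1 - b2) == 2 * k - 1   # closed-form fixed point
    K = Fraction(0)
    for _ in range(iterations):
        K = 1 / b1 + (b2 / b1) * K      # K_{N+1} = 1/b1 + (b2/b1) K_N
    return K
```

Since the contraction factor $b_2/b_1 = (k-1)/k$ is less than 1, the iterates increase monotonically to $2k-1$.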
3. Upper bounds. For $k = 2$, the exact values of all the constants are known:
$$k = 2, \qquad c_2[P] = c_2[G] = c_2[\phi] = 3.$$
Two further estimates of de Boor are available:
$$k = 3,\ \ c_3[G] \le 30; \qquad k = 4,\ \ c_4[G] \le 81\tfrac{2}{3}.$$
4. Expectations. Symbolic computations with MAPLE for $k, N \le 5$ give evidence that
$$c_k[G] = c_{k,\delta}[G], \qquad c_k[\phi] = c_{k,\delta}[\phi].$$
These relations are also supported by theoretical estimates for the classes
$$\Delta_\rho := \{\Delta : h_\nu/h_{\nu+1} = \rho\ \ \forall \nu \in \mathbb{N}\}$$
of strictly geometric meshes. They are [Hö]
$$2k-1 = \lim_{\rho\to\infty} c_{k,\Delta_\rho}[G] < c_{k,\Delta_\rho}[G] \le \lim_{\rho\to 1} c_{k,\Delta_\rho}[G] \sim (\pi/2)^{2k}.$$
In view of these inequalities and (4.5.4), it is natural to make the following

Conjecture. For any $k \in \mathbb{N}$,
$$\sup_\Delta \|P_{S_k(\Delta)}\|_\infty = \inf_\Delta \|G^{-1}_{S_k(\Delta)}\|_\infty = 2k-1.$$
Bibliography
[B1 ]
C. de Boor, On the convergence of odd-degree spline interpolation, J. Approx.
Theory 1 (1968), 452–463.
[B2 ]
C. de Boor, The quasi-interpolant as a tool in elementary polynomial spline
theory, in “Approximation Theory” (G. G. Lorentz, Ed.), pp. 269–276, Acad.
Press, New York, 1973.
[B3 ]
C. de Boor, A bound on the L∞ -norm of L2 -approximation by splines in terms
of a global mesh ratio, Math. Comp. 30 (1976), 765–771.
[B4 ]
C. de Boor, Splines as linear combinations of B-Splines. A survey, in “Approximation Theory II” (G. G. Lorentz et al, Eds.), pp.1–47, Academic Press,
New York, 1976.
[B5 ]
C. de Boor, On local linear functionals which vanish at all B-splines but one,
in “Theory of Approximation with Applications” (A. G. Law and B. N. Sahney,
Eds.), pp.120–145, Academic Press, New York, 1976.
[B6 ]
C. de Boor, On a max-norm bound for the least-squares spline approximant,
in “Approximation and Function Spaces” (Z. Ciesielski, Ed.), pp. 163–175, Proceedings of the International Conference (Gdansk, August 27–31, 1979), PWN,
Warszawa, 1981.
[B7 ]
C. de Boor, On bounding spline interpolation, J. Approx. Theory 14 (1975),
191–203.
[BS]
C. de Boor, I. J. Schoenberg, Cardinal interpolation and spline functions
VIII. The Budan–Fourier theorem for splines and applications, in “Spline Functions” (K. Böhmer et al, Eds.), Lect. Notes in Math. 501 (1976), pp. 1–79.
[Ci]
Z. Ciesielski, Properties of the orthonormal Franklin system, Studia Math.
23 (1963), 141–157.
[De]
S. Demko, Inverses of band matrices and local convergence of spline projections, SIAM J. Numer. Anal. 14 (1977), 616–619.
[Do]
J. Domsta, A theorem on B-splines, Studia Math. 41 (1972), 291–314.
[DDW1 ] J. Douglas, T. Dupont & L. Wahlbin, Optimal L∞ -error estimates for
Galerkin approximations to solutions of two-point boundary value problems,
Math. Comp. 29 (1975), 475–483.
[DDW2 ] J. Douglas, T. Dupont & L. Wahlbin, The stability in Lq of the L2 projection into finite element function spaces, Numer. Math. 23 (1975), 193–197.
[FK]
Y. Y. Feng & J. Kozak, On the generalized Euler-Frobenius polynomial, J.
Approx. Theory 32 (1981), 327–338.
[Hö]
K. Höllig, L∞ -boundedness of L2 -projections on splines for a geometric mesh,
J. Approx. Theory 33 (1981), 318–333.
[Ji]
R. Jia, L∞ -boundedness of L2 -projections on splines for a multiple geometric
mesh, Math. Comp. 48 (1987), 675–690.
[Ka]
S. Karlin, Total Positivity, Stanford Univ. Press (Stanford), 1968.
[Ma]
O. V. Matveev, Spline interpolation of functions in several variables and bases
in Sobolev spaces, Trudy MI RAN 198 (1992), 125–152 (Russian).
[Mi]
B. Mityagin, Quadratic pencils and least-squares piecewise-polynomial approximation, Math. Comp. 40 (1983), 283–300.
[Os]
K. I. Oskolkov, The upper bound of the norms of orthogonal projections onto
subspaces of polygonals, in “Approximation Theory” (Z.Ciesielski Ed.), Banach
Center Publications, Vol. 4, PWN – Polish Scientific Publishers (Warsaw), 1979,
177–183.
[Sch]
I. J. Schoenberg, Zur Abzählung der reellen Wurzeln algebraischer Gleichungen, Math. Zeit. 38 (1934), 546–564.
[Schu]
L. L. Schumaker, Spline Functions: Basic Theory, Wiley (New York), 1981.
[S1 ]
A. Yu. Shadrin, On the approximation of functions by interpolating splines
defined on nonuniform nets, Matem. Sbornik 181 (1990), 1236–1255 = Math.
USSR. Sb. 71 (1992), 81–99.
[S2 ]
A. Yu. Shadrin, On Lp -boundedness of the L2 -projector onto splines, J. Approx. Theory 77 (1994), 331–348.
[S3 ]
A. Yu. Shadrin, On a problem of de Boor for multivariate Dm -splines, Trudy
MI RAN 219 (1997), 420–452 = Proc. Steklov Institute Math. 219 (1997), 413–446.
[S4 ]
A. Yu. Shadrin, On L∞ -boundedness of the L2 -projector onto splines with
multiple knots, IGPM preprint #157 (1998), Technische Hochschule Aachen.
[S5 ]
A. Yu. Shadrin, On Lp -boundedness of the L2 -projector onto finite element
spaces, a manuscript.
[ZS]
N. Zmatrakov & Yu. N. Subbotin, Multiple interpolating splines of degree
2k + 1 with deficiency k, Trudy MIAN 164 (1983), 75–90 (in Russian).