CHAPTER 4
Knot insertion
We have already seen that the control polygon of a spline provides a rough sketch of the
spline itself. In this chapter we will study another aspect of the relationship between a
spline and its control polygon. Namely, we shall see that as the distance between the knots
of a spline is reduced, the control polygon approaches the spline it represents.
To reduce the knot spacing we perform knot insertion. Knot insertion amounts to what
the name suggests, namely insertion of knots into an existing knot vector. This results
in a new spline space with more B-splines and therefore more flexibility than the original
spline space. This can be useful in many situations, for example in interactive design of
spline curves. It turns out that the new spline space contains the original spline space
as a subspace, so any spline in the original space can also be represented in terms of the
B-splines in the refined space. As mentioned above, an important property of this new
representation is that the control polygon will have moved closer to the spline itself. In
fact, by inserting sufficiently many knots, we can make the distance between the spline
and its control polygon as small as we wish. This has obvious advantages for practical
computations since we can represent a function consisting of infinitely many points by
a polygon with a finite number of vertices. By combining this with other properties of
splines like the convex hull property, we obtain a very powerful toolbox for algorithmic
manipulation of spline functions.
We start, in Section 4.1, by showing that the control polygon of a spline converges to
the spline as the knot spacing goes to zero. To prove this we make use of one property
of splines that is proved later, in Chapter 8. The obvious way to reduce the knot spacing
is via knot insertion, and in Section 4.2 we develop algorithms for expressing the B-spline
coefficients relative to the new refined knot vector in terms of the B-spline coefficients
relative to the original knot vector. In Section 4.3 we give a characterisation of the B-spline
coefficients as functions of the knots. This characterisation is often useful for developing
the theory of splines, and in Section 4.4 this characterisation is used to obtain formulas
for inserting one new knot into a spline function.
4.1 Convergence of the control polygon for spline functions
Recall that for a spline function $f(x) = \sum_i c_i B_{i,d,t}(x)$, the control polygon is the piecewise linear interpolant to the points $(t_i^*, c_i)$, where $t_i^* = (t_{i+1} + \cdots + t_{i+d})/d$ is the $i$th knot
average. Lemma 4.1 below shows that this is indeed a ‘good’ definition of the control
polygon since ci is close to f (t∗i ), at least when the spacing in the knot vector is small.
The proof of the lemma makes use of the fact that the size of a B-spline coefficient ci can
be bounded in terms of the size of the spline on the interval [ti+1 , ti+d+1 ]. More specifically,
we define the size of f on the interval [a, b] in terms of the max-norm
$$\|f\|_{[a,b]} = \max_{x \in [a,b]} |f(x)|,$$
where we take the limit from the right at a and the limit from the left at b. We will prove
in Lemma 8.16 that there exists a constant Kd that depends on d, but not on t, such that
the inequality
$$|c_i| \le K_d \|f\|_{[t_{i+1}, t_{i+d}]} \qquad (4.1)$$
holds.
Lemma 4.1. Let f be a spline in Sd,t with coefficients (ci ). Then
$$|c_i - f(t_i^*)| \le K (t_{i+d} - t_{i+1})^2 \|D^2 f\|_{[t_{i+1}, t_{i+d}]}, \qquad (4.2)$$
where $t_i^* = (t_{i+1} + \cdots + t_{i+d})/d$, the operator $D^2$ denotes (one-sided) differentiation (from the right), and the constant $K$ only depends on $d$.
Proof. Let i be fixed. If ti+d = ti+1 then we know from property 5 in Lemma 2.6 that
Bi,d (t∗i ) = 1 so ci = f (t∗i ) and there is nothing to prove. Assume for the rest of the proof
that the interval J = (ti+1 , ti+d ) is nonempty. Since J contains d − 2 knots it follows from
the continuity property of B-splines that f has at least two continuous derivatives on J.
Let x0 be a number in the interval J and consider the spline
g(x) = f (x) − f (x0 ) − (x − x0 )Df (x0 )
which is the error in a first order Taylor expansion of $f$ at $x_0$. Since an affine polynomial has B-spline coefficients given by its values at the knot averages (linear precision), the $i$th B-spline coefficient of $g$ in $S_{d,t}$ is given by
$$b_i = c_i - f(x_0) - (t_i^* - x_0) Df(x_0).$$
Choosing x0 = t∗i we have bi = ci − f (t∗i ) and according to the inequality (4.1) and the
error term in first order Taylor expansion we find
$$|c_i - f(t_i^*)| = |b_i| \le K_d \|g\|_J \le \frac{K_d (t_{i+d} - t_{i+1})^2}{2} \|D^2 f\|_J.$$
The inequality (4.2) therefore holds with K = Kd /2 and the proof is complete.
Lemma 4.1 shows that the corners of the control polygon converge to the spline as the
knot spacing goes to zero, but what does this really mean? So far we have considered the
knots of a spline to be given, fixed numbers, but it is in fact possible to represent a spline on
many different knot vectors. Suppose for example that the given spline f is a polynomial
of degree $d$ on the interval $[a, b] = [t_{d+1}, t_{m+1}]$, and that the knot vector $t = (t_i)_{i=1}^{m+d+1}$ is $d+1$-regular. From Section 3.1.2 we know that f lies in Sd,t regardless of how the interior
knots in [a, b] are chosen. We can therefore think of the B-spline coefficients as functions
of the knots, and the difference ci − f (t∗i ) is then also a function of the knots. Lemma 4.1
tells us that this difference is bounded by $(t_{i+d} - t_{i+1})^2$ and therefore tends to zero as $t_{i+1}$ tends to $t_{i+d}$. It is important to realise that this argument is in general only valid if $f$ is kept fixed. Otherwise it may happen that $\|D^2 f\|_{[t_{i+1}, t_{i+d}]}$ increases when $(t_{i+d} - t_{i+1})^2$ decreases, with the result that the product remains fixed.
If f is a true piecewise polynomial with jumps in some derivatives at the knots in
(a, b), we can introduce some auxiliary knots in (a, b) where the jumps in the derivatives
are zero. These knots we can then move around in such a way that the control polygon
approaches f .
Since the corners of the control polygon converge to the spline, it is not surprising that
the control polygon as a whole also converges to the spline.
Theorem 4.2. Let $f = \sum_{i=1}^{m} c_i B_{i,d}$ be a spline in $S_{d,t}$, and let $\Gamma_{d,t}(f)$ be its control polygon. Then
$$\big\|\Gamma_{d,t}(f) - f\big\|_{[t_1^*, t_m^*]} \le K h^2 \|D^2 f\|_{[t_1, t_{m+d+1}]}, \qquad (4.3)$$
where $h = \max_i \{t_{i+1} - t_i\}$ and the constant $K$ only depends on $d$.
Proof. As usual, we assume that t is d + 1-regular (if not we extend it with d + 1-tuple
knots at either end and add zero coefficients). Suppose that x is in [t∗1 , t∗m ] and let j be
such that t∗j ≤ x < t∗j+1 . Observe that since the interval J ∗ = (t∗j , t∗j+1 ) is nonempty we
have tj+1 < tj+d+1 and J ∗ contains at most d − 1 knots. From the continuity property of
B-splines we conclude that f has a continuous derivative and the second derivative of f is
at least piecewise continuous on J ∗ . Let
$$g(x) = \frac{(t_{j+1}^* - x) f(t_j^*) + (x - t_j^*) f(t_{j+1}^*)}{t_{j+1}^* - t_j^*}$$
be the linear interpolant to f on this interval. We will show that both Γ = Γd,t (f ) and f
are close to g on J ∗ and then deduce that Γ is close to f because of the triangle inequality
$$|\Gamma(x) - f(x)| \le |\Gamma(x) - g(x)| + |g(x) - f(x)|. \qquad (4.4)$$
Let us first consider the difference Γ − g. Note that
$$\Gamma(x) - g(x) = \frac{(t_{j+1}^* - x)\big(c_j - f(t_j^*)\big) + (x - t_j^*)\big(c_{j+1} - f(t_{j+1}^*)\big)}{t_{j+1}^* - t_j^*}$$
for any $x$ in $J^*$. We therefore have
$$|\Gamma(x) - g(x)| \le \max\big\{ |c_j - f(t_j^*)|, \; |c_{j+1} - f(t_{j+1}^*)| \big\}$$
for $x \in J^*$. From Lemma 4.1 we then conclude that
$$|\Gamma(x) - g(x)| \le K_1 h^2 \|D^2 f\|_J, \qquad x \in J^*, \qquad (4.5)$$
where $J = [t_1, t_{m+d+1}]$ and $K_1$ depends only on $d$.
The second difference $f(x) - g(x)$ in (4.4) is the error in linear interpolation to $f$ at the endpoints of $J^*$. For this process we have the standard error estimate
$$|f(x) - g(x)| \le \frac{1}{8}(t_{j+1}^* - t_j^*)^2 \|D^2 f\|_{J^*} \le \frac{1}{8} h^2 \|D^2 f\|_J, \qquad x \in J^*. \qquad (4.6)$$
If we now combine (4.5) and (4.6) as indicated in (4.4), we obtain the Theorem with
constant K = K1 + 1/8.
Because of the factor h2 in Theorem 4.2 we say (somewhat loosely) that the control
polygon converges quadratically to the spline.
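
To get a feeling for how close the control polygon actually is, one can compare the two numerically. The following is a small Python sketch (assuming NumPy and SciPy are available; the quadratic spline and its knot vector are arbitrary illustration choices, not taken from the text) that evaluates a spline, forms its control polygon from the knot averages, and reports the maximum deviation.

```python
# Sketch: distance between a quadratic spline and its control polygon (assumes NumPy/SciPy).
import numpy as np
from scipy.interpolate import BSpline

d = 2                                                             # polynomial degree
t = np.array([-1, -1, -1, -0.5, 0, 0.5, 1, 1, 1], dtype=float)    # d+1-regular knot vector
c = np.array([1, -0.5, -1, 1, 0.5, -1], dtype=float)              # B-spline coefficients
f = BSpline(t, c, d)

# knot averages t*_i = (t_{i+1} + ... + t_{i+d}) / d, one per coefficient
tstar = np.array([t[i + 1:i + d + 1].mean() for i in range(len(c))])

# the control polygon is the piecewise linear interpolant to the points (t*_i, c_i)
x = np.linspace(tstar[0], tstar[-1], 500)
gamma = np.interp(x, tstar, c)

print("max |Gamma(x) - f(x)| =", np.abs(gamma - f(x)).max())
```

Repeating the experiment on refined knot vectors (using the knot insertion algorithms of Section 4.2 so that the spline itself is kept fixed) should show the deviation decreasing roughly like h².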
4.2 Knot insertion
In Section 4.1 we showed that the control polygon converges to the spline it represents
when the knot spacing tends to zero. In this section we shall develop two algorithms
for reducing the knot spacing by inserting new (artificial) knots into a spline. The two
algorithms for knot insertion are closely related to Algorithms 2.20 and 2.21; in fact these
two algorithms are special cases of the algorithms we develop here.
Knot insertion is exactly what the name suggests: extension of a given knot vector by
adding new knots. Let us first define precisely what we mean by knot insertion, or knot
refinement as it is also called.
Definition 4.3. A knot vector t is said to be a refinement of a knot vector τ if any real
number occurs at least as many times in t as in τ .
A simple example of a knot vector and a refinement is given by
τ = (0, 0, 0, 3, 4, 5, 5, 6, 6, 6)
and
t = (0, 0, 0, 2, 2, 3, 3, 4, 5, 5, 5, 6, 6, 6).
Here two knots have been inserted at 2, one at 3 and one at 5.
Note that if t is a refinement of τ then τ is a subsequence of t, and this we will write
τ ⊆ t. The term knot insertion is used because in most situations the knot vector τ is
given and t is obtained by ‘inserting’ knots into τ .
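
Definition 4.3 is easy to check mechanically. The following minimal Python sketch (an illustration, not part of the text; it assumes the knots are represented exactly, e.g. as integers or fractions, so equality testing is safe) verifies that the refined knot vector above does contain the coarse one.

```python
# Sketch: does t contain every knot of tau at least as many times? (assumes exact knot values)
from collections import Counter

def is_refinement(tau, t):
    count_t, count_tau = Counter(t), Counter(tau)
    return all(count_t[z] >= m for z, m in count_tau.items())

tau = (0, 0, 0, 3, 4, 5, 5, 6, 6, 6)
t = (0, 0, 0, 2, 2, 3, 3, 4, 5, 5, 5, 6, 6, 6)
print(is_refinement(tau, t), is_refinement(t, tau))   # True False
```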
With some polynomial degree d given, we can associate the spline spaces Sd,τ and Sd,t
with the two knot vectors τ and t. When τ is a subsequence of t, the two spline spaces
are also related.
Lemma 4.4. Let d be a positive integer and let τ be a knot vector with at least d + 2
knots. If t is a knot vector which contains τ as a subsequence then Sd,τ ⊆ Sd,t .
Proof. Suppose first that both τ and t are d + 1-regular knot vectors with common knots
at the ends. By the Curry-Schoenberg theorem (Theorem 3.25) we know that Sd,t contains
all splines with smoothness prescribed by the knot vector t. Since all knots occur at least
as many times in t as in τ , we see that at any knot, a spline in Sd,τ is at least as smooth
as required for a spline in Sd,t . We therefore conclude that Sd,τ ⊆ Sd,t .
A proof in the general case, where τ and t are not d + 1-regular with common knots
at the ends, is outlined in exercise 5.
Suppose that $f$ is a spline in $S_{d,\tau}$ with B-spline coefficients $c = (c_j)$ so that $f = \sum_j c_j B_{j,d,\tau}$. If $\tau$ is a subsequence of $t$, we know from Lemma 4.4 that $S_{d,\tau}$ is a subspace of $S_{d,t}$, so $f$ must also lie in $S_{d,t}$. Hence there exist real numbers $b = (b_i)$ with the property that $f = \sum_i b_i B_{i,d,t}$, i.e., the vector $b$ contains the B-spline coefficients of $f$ in $S_{d,t}$. Knot insertion is therefore nothing but a change of basis from the B-spline basis in $S_{d,\tau}$ to the B-spline basis in $S_{d,t}$.
Since $S_{d,\tau} \subseteq S_{d,t}$, all the B-splines in $S_{d,\tau}$ are also in $S_{d,t}$ so that
$$B_{j,d,\tau} = \sum_{i=1}^{m} \alpha_{j,d}(i) B_{i,d,t}, \qquad j = 1, 2, \ldots, n, \qquad (4.7)$$
for certain numbers $\alpha_{j,d}(i)$. In matrix form this can be written
$$B_\tau^T = B_t^T A, \qquad (4.8)$$
where $B_\tau^T = (B_{1,d,\tau}, \ldots, B_{n,d,\tau})$ and $B_t^T = (B_{1,d,t}, \ldots, B_{m,d,t})$ are row vectors, and the $m \times n$-matrix $A = \big(\alpha_{j,d}(i)\big)$ is the basis transformation matrix. Using this notation we can write $f$ in the form
$$f = B_\tau^T c = B_t^T b,$$
where $b$ and $c$ are related by
$$b = Ac, \qquad \text{or} \qquad b_i = \sum_{j=1}^{n} a_{i,j} c_j \quad \text{for } i = 1, 2, \ldots, m. \qquad (4.9)$$
The basis transformation A is called the knot insertion matrix of degree d from τ to t
and we will use the notation αj,d (i) = αj,d,τ ,t (i) for its entries. The discrete function αj,d
has many properties similar to those of Bj,d , and it is therefore called a discrete B-spline
on t with knots τ .
To illustrate these ideas, let us consider a couple of simple examples of knot insertion
for splines.
Example 4.5. Let us determine the transformation matrix $A$ for splines with $d = 0$, when the coarse knot vector is given by $\tau = (0, 1, 2)$, and the refined knot vector is $t = (0, 1/2, 1, 3/2, 2) = (t_i)_{i=1}^{5}$. In this case
$$S_{d,\tau} = \operatorname{span}\{B_{1,0,\tau}, B_{2,0,\tau}\} \quad \text{and} \quad S_{d,t} = \operatorname{span}\{B_{1,0,t}, B_{2,0,t}, B_{3,0,t}, B_{4,0,t}\}.$$
We clearly have
$$B_{1,0,\tau} = B_{1,0,t} + B_{2,0,t}, \qquad B_{2,0,\tau} = B_{3,0,t} + B_{4,0,t}.$$
This means that the knot insertion matrix in this case is given by
$$A = \begin{pmatrix} 1 & 0 \\ 1 & 0 \\ 0 & 1 \\ 0 & 1 \end{pmatrix}.$$
Example 4.6. Let us also consider an example with linear splines. Let $d = 1$, and let $\tau$ and $t$ be as in the preceding example. In this case $\dim S_{d,\tau} = 1$ and we find that
$$B(x \mid 0, 1, 2) = \tfrac{1}{2} B(x \mid 0, 1/2, 1) + B(x \mid 1/2, 1, 3/2) + \tfrac{1}{2} B(x \mid 1, 3/2, 2).$$
The situation is shown in Figure 4.1. The linear B-spline on $\tau$ is a weighted sum of the three B-splines (dashed) on $t$. The knot insertion matrix $A$ is therefore the $3 \times 1$-matrix, or column vector, given by
$$A = \begin{pmatrix} 1/2 \\ 1 \\ 1/2 \end{pmatrix}.$$
4.2.1 Formulas and algorithms for knot insertion
To develop algorithms for performing knot insertion we need to study the matrix A in
some more detail. Suppose as before that we have two knot vectors τ and t with τ ⊆ t
Figure 4.1. Refining a linear B-spline.
and a spline function $f = \sum_j c_j B_{j,d,\tau}$ in $S_{d,\tau}$. Since the $(i, j)$-entry of $A$ is $\alpha_{j,d}(i)$, the B-spline coefficients of $f$ relative to $S_{d,t}$ are given by
$$b_i = \sum_{j=1}^{n} \alpha_{j,d}(i) c_j \qquad (4.10)$$
for $i = 1, \ldots, m$, see (4.9). Similarly, equation (4.8) can now be written
$$B_{j,d,\tau} = \sum_{i=1}^{m} \alpha_{j,d}(i) B_{i,d,t} \qquad (4.11)$$
for $j = 1, \ldots, n$.
In the following we make the assumption that $\tau = (\tau_j)_{j=1}^{n+d+1}$ and $t = (t_i)_{i=1}^{m+d+1}$ are both $d+1$-regular knot vectors with common knots at the ends so that $\tau_1 = t_1$ and $\tau_{n+1} = t_{m+1}$. Exercise 6 shows that this causes no loss of generality. The following theorem gives an explicit formula for the knot insertion matrix $A$.
Recall from Theorem 2.18 that the B-spline matrix $R_k(x) = R_{k,\tau}^{\mu}(x)$ is given by
$$R_{k,\tau}^{\mu}(x) = \begin{pmatrix}
\dfrac{\tau_{\mu+1} - x}{\tau_{\mu+1} - \tau_{\mu+1-k}} & \dfrac{x - \tau_{\mu+1-k}}{\tau_{\mu+1} - \tau_{\mu+1-k}} & 0 & \cdots & 0 \\[6pt]
0 & \dfrac{\tau_{\mu+2} - x}{\tau_{\mu+2} - \tau_{\mu+2-k}} & \dfrac{x - \tau_{\mu+2-k}}{\tau_{\mu+2} - \tau_{\mu+2-k}} & \cdots & 0 \\[4pt]
\vdots & & \ddots & \ddots & \vdots \\[4pt]
0 & 0 & \cdots & \dfrac{\tau_{\mu+k} - x}{\tau_{\mu+k} - \tau_{\mu}} & \dfrac{x - \tau_{\mu}}{\tau_{\mu+k} - \tau_{\mu}}
\end{pmatrix}.$$
Theorem 4.7. Let the polynomial degree $d$ be given, and let $\tau = (\tau_j)_{j=1}^{n+d+1}$ and $t = (t_i)_{i=1}^{m+d+1}$ be two $d+1$-regular knot vectors with common knots at the ends and $\tau \subseteq t$. In row $i$ of the knot insertion matrix $A$ the entries are given by $\alpha_{j,d}(i) = 0$ for $j < \mu - d$
and $j > \mu$, where $\mu$ is determined by $\tau_\mu \le t_i < \tau_{\mu+1}$, and
$$\alpha_d(i)^T = \big(\alpha_{\mu-d,d}(i), \ldots, \alpha_{\mu,d}(i)\big) = \begin{cases} 1, & \text{if } d = 0, \\ R_{1,\tau}^{\mu}(t_{i+1}) R_{2,\tau}^{\mu}(t_{i+2}) \cdots R_{d,\tau}^{\mu}(t_{i+d}), & \text{if } d > 0. \end{cases} \qquad (4.12)$$
If $f = \sum_j c_j B_{j,d,\tau}$ is a spline in $S_{d,\tau}$, with B-spline coefficients $b$ in $S_{d,t}$, then $b_i$ is given by
$$b_i = \sum_{j=\mu-d}^{\mu} \alpha_{j,d}(i) c_j = R_{1,\tau}^{\mu}(t_{i+1}) \cdots R_{d,\tau}^{\mu}(t_{i+d}) c_d, \qquad (4.13)$$
where $c_d = (c_{\mu-d}, \ldots, c_\mu)$.
Proof. We already know that the two B-spline bases are related through the relation
(4.11); we will obtain the result by analysing this equation for different values of j.
Let the integer ν be such that tν ≤ ti < tν+1 . This means that ν − d ≤ i ≤ ν
(note that ν = i only if ti < ti+1 ). Since τ is a subsequence of t we have [tν , tν+1 ) ⊆
[τµ , τµ+1 ). Restrict x to the interval [tν , tν+1 ) so that B`,d,t (x) = 0 for ` < ν − d and ` > ν.
Equation (4.11) can then be written
$$B_{j,d,\tau}(x) = \sum_{\ell=\nu-d}^{\nu} \alpha_{j,d}(\ell) B_{\ell,d,t}(x), \qquad \text{for } j = 1, \ldots, n. \qquad (4.14)$$
Since $x \in [\tau_\mu, \tau_{\mu+1})$ we have $B_{j,d,\tau}(x) = 0$ for $j < \mu - d$ or $j > \mu$. For these values of $j$ the left-hand side of (4.14) is therefore zero, and since the B-splines $\{B_{\ell,d,t}\}_{\ell=\nu-d}^{\nu}$ are linearly independent we must have $\alpha_{j,d}(\ell) = 0$ for $\nu - d \le \ell \le \nu$, and in particular $\alpha_{j,d}(i) = 0$, for $j < \mu - d$ and $j > \mu$.
To establish (4.12) we consider the remaining values of $j$, namely $j = \mu - d, \ldots, \mu$. On the interval $[t_\nu, t_{\nu+1})$, the nonzero part of the two B-spline bases can be represented by the two vectors $B_{d,\tau} = (B_{k,d,\tau})_{k=\mu-d}^{\mu}$ and $B_{d,t} = (B_{\ell,d,t})_{\ell=\nu-d}^{\nu}$. We also have the vectors of dual polynomials $\sigma_d(y) = \big(\sigma_{k,d}(y)\big)_{k=\mu-d}^{\mu}$ and $\rho_d(y) = \big(\rho_{\ell,d}(y)\big)_{\ell=\nu-d}^{\nu}$ given by
$$\sigma_{k,d}(y) = (y - \tau_{k+1}) \cdots (y - \tau_{k+d}), \qquad \rho_{\ell,d}(y) = (y - t_{\ell+1}) \cdots (y - t_{\ell+d}).$$
From Corollary 3.2 we have that the two sets of dual polynomials are related by
$$\rho_{\ell,d}(y) = R_1(t_{\ell+1}) \cdots R_d(t_{\ell+d}) \sigma_d(y) \qquad (4.15)$$
(in this proof we omit the second subscript and the superscript to the $R$ matrices). Combining this with two versions of Marsden's identity (3.10) then yields
$$B_{d,\tau}(x)^T \sigma_d(y) = (y - x)^d = \sum_{\ell=\nu-d}^{\nu} \rho_{\ell,d}(y) B_{\ell,d,t}(x) = \sum_{\ell=\nu-d}^{\nu} B_{\ell,d,t}(x) R_1(t_{\ell+1}) \cdots R_d(t_{\ell+d}) \sigma_d(y).$$
The linear independence of the $d+1$ dual polynomials $\sigma_d(y)$ allows us to conclude that
$$B_{d,\tau}(x)^T = \sum_{\ell=\nu-d}^{\nu} B_{\ell,d,t}(x) R_1(t_{\ell+1}) \cdots R_d(t_{\ell+d}) = B_{d,t}(x)^T \begin{pmatrix} R_1(t_{\nu-d+1}) \cdots R_d(t_{\nu}) \\ R_1(t_{\nu-d+2}) \cdots R_d(t_{\nu+1}) \\ \vdots \\ R_1(t_{\nu+1}) \cdots R_d(t_{\nu+d}) \end{pmatrix}$$
for any $x$ in the interval $[t_\nu, t_{\nu+1})$. Comparing this equation with (4.14) and making use of the linear independence of B-splines shows that the matrix on the right is the submatrix of $A$ given by $\big(\alpha_{j,d}(i)\big)_{i=\nu-d, j=\mu-d}^{\nu,\mu}$. Since $\nu - d \le i \le \nu$, equation (4.12) follows. Equation (4.13) now follows from (4.10).
Note that if no new knots are inserted ($\tau = t$) then the two sets of B-spline coefficients $c$ and $b$ are obviously the same. Equation (4.13) then shows that
$$c_i = R_{1,\tau}^{\mu}(\tau_{i+1}) \cdots R_{d,\tau}^{\mu}(\tau_{i+d}) c_d. \qquad (4.16)$$
This simple observation will be useful later.
An example will illustrate the use of Theorem 4.7.
Example 4.8. We consider quadratic splines ($d = 2$) on the knot vector $\tau = (-1, -1, -1, 0, 1, 1, 1)$, and insert two new knots, at $-1/2$ and $1/2$, so $t = (-1, -1, -1, -1/2, 0, 1/2, 1, 1, 1)$. We note that $\tau_3 \le t_i < \tau_4$ for $1 \le i \le 4$ so the first three entries of the first four rows of the $6 \times 4$ knot insertion matrix $A$ are given by
$$\alpha_2(i) = R_{1,\tau}^{3}(t_{i+1}) R_{2,\tau}^{3}(t_{i+2})$$
for $i = 1, \ldots, 4$. Since
$$R_{1,\tau}^{3}(x) = \begin{pmatrix} -x & 1 + x \end{pmatrix}, \qquad R_{2,\tau}^{3}(x) = \begin{pmatrix} -x & 1 + x & 0 \\ 0 & (1 - x)/2 & (1 + x)/2 \end{pmatrix},$$
we have from (4.12)
$$\alpha_2(i) = \frac{1}{2}\big( 2 t_{i+1} t_{i+2}, \; 1 - t_{i+1} - t_{i+2} - 3 t_{i+1} t_{i+2}, \; (1 + t_{i+1})(1 + t_{i+2}) \big).$$
Inserting the correct values for $t_{i+1}$ and $t_{i+2}$ and adding one zero at the end of each row, we find that the first four rows of $A$ are given by
$$\begin{pmatrix} 1 & 0 & 0 & 0 \\ 1/2 & 1/2 & 0 & 0 \\ 0 & 3/4 & 1/4 & 0 \\ 0 & 1/4 & 3/4 & 0 \end{pmatrix}.$$
To determine the remaining two rows of $A$ we have to move to the interval $[\tau_4, \tau_5) = [0, 1)$. Here we have
$$R_{1,\tau}^{4}(x) = \begin{pmatrix} 1 - x & x \end{pmatrix}, \qquad R_{2,\tau}^{4}(x) = \begin{pmatrix} (1 - x)/2 & (1 + x)/2 & 0 \\ 0 & 1 - x & x \end{pmatrix},$$
so
$$\alpha_2(i) = R_{1,\tau}^{4}(t_{i+1}) R_{2,\tau}^{4}(t_{i+2}) = \frac{1}{2}\big( (1 - t_{i+1})(1 - t_{i+2}), \; 1 + t_{i+1} + t_{i+2} - 3 t_{i+1} t_{i+2}, \; 2 t_{i+1} t_{i+2} \big).$$
Evaluating this for $i = 5, 6$ and inserting one zero as the first entry, we obtain the last two rows as
$$\begin{pmatrix} 0 & 0 & 1/2 & 1/2 \\ 0 & 0 & 0 & 1 \end{pmatrix}.$$
Figure 4.2. A quadratic spline together with its control polygon relative to a coarse and a finer knot vector (a),
and the same spline as in (a) with its control polygon relative to an even more refined knot vector (b).
To see visually the effect of knot insertion, let $f = B_{1,2,\tau} - 2B_{2,2,\tau} + 2B_{3,2,\tau} - B_{4,2,\tau}$ be a spline in $S_{d,\tau}$ with B-spline coefficients $c = (1, -2, 2, -1)^T$. Its coefficients $b = (b_i)_{i=1}^{6}$ are then given by
$$b = Ac = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 1/2 & 1/2 & 0 & 0 \\ 0 & 3/4 & 1/4 & 0 \\ 0 & 1/4 & 3/4 & 0 \\ 0 & 0 & 1/2 & 1/2 \\ 0 & 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} 1 \\ -2 \\ 2 \\ -1 \end{pmatrix} = \begin{pmatrix} 1 \\ -1/2 \\ -1 \\ 1 \\ 1/2 \\ -1 \end{pmatrix}.$$
Figure 4.2 (a) shows a plot of f together with its control polygons relative to τ and t. We note that
the control polygon relative to t is much closer to f and that both control polygons give a rough estimate
of f .
The knot insertion process can be continued. If we insert one new knot halfway between each old knot
in t, we obtain the new knot vector
t1 = (−1, −1, −1, −3/4, −1/2, −1/4, 0, 1/4, 1/2, 3/4, 1, 1, 1).
A plot of f and its control polygon relative to this knot vector is shown in Figure 4.2 (b).
Example 4.9. Let us again consider quadratic splines on a uniform knot vector with multiple knots at the ends,
$$\tau = (\tau_j)_{j=1}^{n+3} = (3, 3, 3, 4, 5, 6, \ldots, n, n+1, n+1, n+1),$$
and form $t$ by inserting one knot half way between each pair of old knots,
$$t = (t_i)_{i=1}^{2n+1} = (3, 3, 3, 7/2, 4, 9/2, 5, \ldots, n, (2n+1)/2, n+1, n+1, n+1).$$
Since $\dim S_{d,\tau} = n$ and $\dim S_{d,t} = 2n - 2$, the knot insertion matrix $A$ is now a $(2n-2) \times n$ matrix. As in Example 4.8 we find that the first three columns of the first four rows of $A$ are
$$\begin{pmatrix} 1 & 0 & 0 \\ 1/2 & 1/2 & 0 \\ 0 & 3/4 & 1/4 \\ 0 & 1/4 & 3/4 \end{pmatrix}.$$
To determine rows $2\mu - 3$ and $2\mu - 2$ with $4 \le \mu \le n - 1$, we need the matrices $R_{1,\tau}^{\mu}$ and $R_{2,\tau}^{\mu}$, which are given by
$$R_{1,\tau}^{\mu}(x) = \begin{pmatrix} \mu + 1 - x & x - \mu \end{pmatrix}, \qquad R_{2,\tau}^{\mu}(x) = \begin{pmatrix} (\mu + 1 - x)/2 & (x + 1 - \mu)/2 & 0 \\ 0 & (\mu + 2 - x)/2 & (x - \mu)/2 \end{pmatrix}.$$
Observe that $\tau_i = i$ for $i = 3, \ldots, n+1$ and $t_i = (i+3)/2$ for $i = 3, \ldots, 2n-1$. Entries $\mu-2$, $\mu-1$ and $\mu$ of row $2\mu - 3$ are therefore given by
$$R_{1,\tau}^{\mu}(t_{2\mu-2}) R_{2,\tau}^{\mu}(t_{2\mu-1}) = R_{1,\tau}^{\mu}(\mu + 1/2)\, R_{2,\tau}^{\mu}(\mu + 1) = \begin{pmatrix} 1/2 & 1/2 \end{pmatrix} \begin{pmatrix} 0 & 1 & 0 \\ 0 & 1/2 & 1/2 \end{pmatrix} = \begin{pmatrix} 0 & 3/4 & 1/4 \end{pmatrix}.$$
Similarly, entries $\mu-2$, $\mu-1$ and $\mu$ of row $2\mu - 2$ are given by
$$R_{1,\tau}^{\mu}(t_{2\mu-1}) R_{2,\tau}^{\mu}(t_{2\mu}) = R_{1,\tau}^{\mu}(\mu + 1)\, R_{2,\tau}^{\mu}(\mu + 3/2) = \begin{pmatrix} 0 & 1 \end{pmatrix} \begin{pmatrix} -1/4 & 5/4 & 0 \\ 0 & 1/4 & 3/4 \end{pmatrix} = \begin{pmatrix} 0 & 1/4 & 3/4 \end{pmatrix}.$$
Finally, we find as in Example 4.8 that the last three entries of the last two rows are
$$\begin{pmatrix} 0 & 1/2 & 1/2 \end{pmatrix} \quad \text{and} \quad \begin{pmatrix} 0 & 0 & 1 \end{pmatrix}.$$
The complete knot insertion matrix is therefore
$$A = \begin{pmatrix}
1 & 0 & 0 & 0 & \cdots & 0 & 0 & 0 \\
1/2 & 1/2 & 0 & 0 & \cdots & 0 & 0 & 0 \\
0 & 3/4 & 1/4 & 0 & \cdots & 0 & 0 & 0 \\
0 & 1/4 & 3/4 & 0 & \cdots & 0 & 0 & 0 \\
0 & 0 & 3/4 & 1/4 & \cdots & 0 & 0 & 0 \\
0 & 0 & 1/4 & 3/4 & \cdots & 0 & 0 & 0 \\
\vdots & & & & \ddots & & & \vdots \\
0 & 0 & 0 & 0 & \cdots & 3/4 & 1/4 & 0 \\
0 & 0 & 0 & 0 & \cdots & 1/4 & 3/4 & 0 \\
0 & 0 & 0 & 0 & \cdots & 0 & 1/2 & 1/2 \\
0 & 0 & 0 & 0 & \cdots & 0 & 0 & 1
\end{pmatrix}.$$
The formula for $\alpha_d(i)$ shows very clearly the close relationship between B-splines and discrete B-splines, and it will come as no surprise that $\alpha_{j,d}(i)$ satisfies a recurrence relation similar to that of B-splines, see Definition 2.1. The recurrence for $\alpha_{j,d}(i)$ is obtained by setting $x = t_{i+d}$ in the recurrence (2.1) for $B_{j,d}(x)$,
$$\alpha_{j,d}(i) = \frac{t_{i+d} - \tau_j}{\tau_{j+d} - \tau_j} \alpha_{j,d-1}(i) + \frac{\tau_{j+1+d} - t_{i+d}}{\tau_{j+1+d} - \tau_{j+1}} \alpha_{j+1,d-1}(i), \qquad (4.17)$$
starting with $\alpha_{j,0}(i) = B_{j,0}(t_i)$.
The two evaluation algorithms for splines, Algorithms 3.17 and 3.18, can be adapted
to knot insertion quite easily. For historical reasons these algorithms are usually referred
to as the Oslo algorithms.
Algorithm 4.10 (Oslo-Algorithm 1). Let the polynomial degree $d$, and the two $d+1$-regular knot vectors $\tau = (\tau_j)_{j=1}^{n+d+1}$ and $t = (t_i)_{i=1}^{m+d+1}$ with common knots at the ends be given. To compute the $m \times n$ knot insertion matrix $A = \big(\alpha_{j,d}(i)\big)_{i,j=1}^{m,n}$ from $\tau$ to $t$ perform the following steps:
1. For $i = 1, \ldots, m$.
1.1 Determine $\mu$ such that $\tau_\mu \le t_i < \tau_{\mu+1}$.
1.2 Compute entries $\mu - d, \ldots, \mu$ of row $i$ by evaluating
$$\alpha_d(i)^T = \big(\alpha_{\mu-d,d}(i), \ldots, \alpha_{\mu,d}(i)\big) = \begin{cases} 1, & \text{if } d = 0, \\ R_1(t_{i+1}) \cdots R_d(t_{i+d}), & \text{if } d > 0. \end{cases}$$
All other entries in row $i$ are zero.
An algorithm for converting a spline from a B-spline representation in $S_{d,\tau}$ to $S_{d,t}$ is as follows.

Algorithm 4.11 (Oslo-Algorithm 2). Let the polynomial degree $d$, and the two $d+1$-regular knot vectors $\tau = (\tau_j)_{j=1}^{n+d+1}$ and $t = (t_i)_{i=1}^{m+d+1}$ with common knots at the ends be given together with the spline $f$ in $S_{d,\tau}$ with B-spline coefficients $c = (c_j)_{j=1}^{n}$. To compute the B-spline coefficients $b = (b_i)_{i=1}^{m}$ of $f$ in $S_{d,t}$ perform the following steps:
1. For $i = 1, \ldots, m$.
1.1 Determine $\mu$ such that $\tau_\mu \le t_i < \tau_{\mu+1}$.
1.2 Set $c_d = (c_j)_{j=\mu-d}^{\mu}$ and compute $b_i$ by evaluating
$$b_i = \begin{cases} c_\mu, & \text{if } d = 0, \\ R_1(t_{i+1}) \cdots R_d(t_{i+d}) c_d, & \text{if } d > 0. \end{cases}$$
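
Both algorithms translate almost directly into code. The sketch below (illustrative Python, not part of the original algorithm statements; it uses 0-based indexing, so the book's index $i$ corresponds to position $i-1$, and it relies on the regularity assumptions of Theorem 4.7) computes the nonzero part of each row as in step 1.2 by accumulating the products $R_1(t_{i+1}) \cdots R_d(t_{i+d})$ one matrix at a time, and then assembles either the matrix $A$ (Algorithm 4.10) or the coefficient vector $b$ (Algorithm 4.11).

```python
# Sketch of the Oslo algorithms (Algorithms 4.10 and 4.11); 0-based indexing throughout,
# so the book's index i corresponds to position i - 1 in the lists below.
from bisect import bisect_right

def oslo_row(d, tau, row_knots, mu):
    """Nonzero part (alpha_{mu-d,d}(i), ..., alpha_{mu,d}(i)) of row i, given the d knots
    row_knots = (t_{i+1}, ..., t_{i+d}) and the 0-based mu with tau[mu] <= t_i < tau[mu+1]."""
    a = [1.0]
    for k in range(1, d + 1):
        x = row_knots[k - 1]
        b = [0.0] * (k + 1)
        for j in range(k + 1):
            if j < k:        # contribution from the diagonal entry of R_k
                b[j] += a[j] * (tau[mu + j + 1] - x) / (tau[mu + j + 1] - tau[mu + j + 1 - k])
            if j > 0:        # contribution from the subdiagonal entry of R_k
                b[j] += a[j - 1] * (x - tau[mu + j - k]) / (tau[mu + j] - tau[mu + j - k])
        a = b
    return a

def knot_insertion_matrix(d, tau, t):
    """Oslo Algorithm 1: the m x n matrix A with entries alpha_{j,d}(i)."""
    n, m = len(tau) - d - 1, len(t) - d - 1
    A = [[0.0] * n for _ in range(m)]
    for i in range(m):
        mu = min(bisect_right(tau, t[i]) - 1, n - 1)      # tau[mu] <= t[i] < tau[mu+1]
        row = oslo_row(d, tau, t[i + 1:i + d + 1], mu)
        for r, j in enumerate(range(mu - d, mu + 1)):
            A[i][j] = row[r]
    return A

def oslo_coefficients(d, tau, t, c):
    """Oslo Algorithm 2: the coefficients b of f on the refined knot vector t."""
    n = len(tau) - d - 1
    b = []
    for i in range(len(t) - d - 1):
        mu = min(bisect_right(tau, t[i]) - 1, n - 1)
        row = oslo_row(d, tau, t[i + 1:i + d + 1], mu)
        b.append(sum(row[r] * c[mu - d + r] for r in range(d + 1)))
    return b

# Example 4.8: quadratic spline, knots -1/2 and 1/2 inserted.
tau = [-1, -1, -1, 0, 1, 1, 1]
t = [-1, -1, -1, -0.5, 0, 0.5, 1, 1, 1]
print(oslo_coefficients(2, tau, t, [1, -2, 2, -1]))   # [1.0, -0.5, -1.0, 1.0, 0.5, -1.0]
```

The final line reproduces the coefficients $b = (1, -1/2, -1, 1, 1/2, -1)$ found in Example 4.8.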
4.3 B-spline coefficients as functions of the knots
Knot insertion allows us to represent the same spline function on different knot vectors.
In fact, any spline function can be given any real numbers as knots, as long as we also
include the original knots. It therefore makes sense to consider the B-spline coefficients as
functions of the knots, and we shall see that this point of view allows us to characterise
the B-spline coefficients completely by three simple properties.
Initially, we assume that the spline $f = \sum_{j=1}^{n} c_j B_{j,d,\tau}$ is a polynomial represented on a $d+1$-extended knot vector $\tau$. On the knot interval $[\tau_\mu, \tau_{\mu+1})$ we know that $f$ can be written as
$$f(x) = R_1(x) \cdots R_d(x) c_d, \qquad (4.18)$$
where $c_d = (c_{\mu-d}, \ldots, c_\mu)^T$, see Section 2.3. Since $f$ is assumed to be a polynomial this representation is valid for all real numbers $x$, although when $x$ is outside $[\tau_\mu, \tau_{\mu+1})$ it is no longer a true B-spline representation.
Consider the function
$$F(x_1, \ldots, x_d) = R_1(x_1) \cdots R_d(x_d) c_d. \qquad (4.19)$$
We recognise the right-hand side of this expression from equation (4.13) in Theorem 4.7:
If we have a knot vector that includes the knots (x0 , x1 , . . . , xd , xd+1 ), then F (x1 , . . . , xd )
gives the B-spline coefficient that multiplies the B-spline B(x | x0 , . . . , xd+1 ) in the representation of the polynomial f on the knot vector x. When f is a polynomial, it turns
out that the function F is completely independent of the knot vector τ that underlies the
definition of the R-matrices in (4.19). The function F is referred to as the blossom of f ,
and the whole theory of splines can be built from properties of this function.
4.3.1 The blossom
In this subsection we develop some of the properties of the blossom. We will do this
in an abstract fashion, by starting with a formal definition of the blossom. In the next
subsection we will then show that the function F in (4.19) satisfies this definition.
Definition 4.12. A function of the form f(x) = ax, where a is a real number, is called a linear function. A function of the form f(x) = ax + b with a and b real constants is called an affine function. A function of d variables f(x1, . . . , xd) is said to be affine if it is affine viewed as a function of each xi for i = 1, . . . , d, with the other variables fixed. A symmetric affine function is an affine function that is not altered when the order of the variables is changed.
It is common to say that a polynomial p(x) = a+bx of degree one is a linear polynomial,
even when a is nonzero. According to Definition 4.12 such a polynomial is an affine
polynomial, and this (algebraic) terminology will be used in the present section. Outside
this section however, we will use the term linear polynomial.
For a linear function of one variable we have
$$f(\alpha x + \beta y) = \alpha f(x) + \beta f(y), \qquad x, y \in \mathbb{R}, \qquad (4.20)$$
for all real numbers $\alpha$ and $\beta$, while for an affine function $f$ with $b \ne 0$ equation (4.20) only holds if $\alpha + \beta = 1$. This is in fact a complete characterisation of affine functions: If (4.20) holds with $\alpha + \beta = 1$, then $f$ is affine, see exercise 9.
A general affine function of 2 variables is given by
$$f(x_1, x_2) = a x_2 + b = (a_2 x_1 + b_2) x_2 + a_1 x_1 + b_1 = c_0 + c_1 x_1 + c_2 x_2 + c_{1,2} x_1 x_2. \qquad (4.21)$$
Similarly, an affine function of three variables is a function of the form
$$f(x_1, x_2, x_3) = c_0 + c_1 x_1 + c_2 x_2 + c_3 x_3 + c_{1,2} x_1 x_2 + c_{1,3} x_1 x_3 + c_{2,3} x_2 x_3 + c_{1,2,3} x_1 x_2 x_3.$$
In general, an affine function of $d$ variables can be written as a linear combination of $2^d$ terms. This follows by induction as in (4.21) where we passed from one argument to two.
A symmetric and affine function satisfies the equation
f (x1 , x2 , . . . , xd ) = f (xπ1 , xπ2 , . . . , xπd ),
for any permutation (π1 , π2 , . . . , πd ) of the numbers 1, 2, . . . , d. We leave it as an exercise
to show that symmetric, affine functions of two and three variables can be written in the
form
f (x1 , x2 ) = a0 + a1 (x1 + x2 ) + a2 x1 x2 ,
f (x1 , x2 , x3 ) = a0 + a1 (x1 + x2 + x3 ) + a2 (x1 x2 + x1 x3 + x2 x3 ) + a3 x1 x2 x3 .
We are now ready to give the definition of the blossom of a polynomial.
Definition 4.13. Let p be a polynomial of degree at most d. The blossom B[p](x1 , . . . , xd )
of p is a function of d variables with the properties:
1. Symmetry. The blossom is a symmetric function of its arguments,
B[p](x1 , . . . , xd ) = B[p](xπ1 , . . . , xπd )
for any permutation π1 , . . . , πd of the integers 1, . . . , d.
2. Affine. The blossom is affine in each of its variables,
B[p](. . . , αx + βy, . . .) = αB[p](. . . , x, . . .) + βB[p](. . . , y, . . .)
whenever α + β = 1.
3. Diagonal property. The blossom agrees with p on the diagonal,
B[p](x, . . . , x) = p(x)
for all real numbers x.
The blossom of a polynomial exists and is unique.
Theorem 4.14. Each polynomial p of degree d has a unique blossom B[p](x1 , . . . , xd ).
The blossom acts linearly on p, i.e., if p1 and p2 are two polynomials and c1 and c2 are
two real constants then
B[c1 p1 + c2 p2 ](x1 , . . . , xd ) = c1 B[p1 ](x1 , . . . , xd ) + c2 B[p2 ](x1 , . . . , xd ).
(4.22)
Proof. The proof of uniqueness follows along the lines sketched at the beginning of this
section for small $d$. Start with a general affine function $F$ of $d$ variables,
$$F(x_1, \ldots, x_d) = c_0 + \sum_{j=1}^{d} \; \sum_{1 \le i_1 < \cdots < i_j \le d} c_{i_1, \ldots, i_j} x_{i_1} \cdots x_{i_j}.$$
Symmetry forces all the coefficients multiplying terms of the same degree to be identical.
To see this we note first that
F (1, 0, . . . , 0) = c0 + c1 = F (0, . . . , 1, . . . , 0) = c0 + ci
for all i with 1 ≤ i ≤ d. Hence we have c1 = · · · = cd . To prove that the terms of degree
j all have the same coefficients we use induction and set j of the variables to 1 and the
rest to 0. By the induction hypothesis we know that all the terms of degree less than j
are symmetric; denote the contribution from these terms by pj−1 . Symmetry then gives
pj−1 + c1,2,...,j = pj−1 + c1,2,...,j−1,j+1 = · · · = pj−1 + cd−j+1,...,d .
From this we conclude that all the coefficients multiplying terms of degree j must be equal.
We can therefore write $F$ as
$$F(x_1, \ldots, x_d) = a_0 + \sum_{j=1}^{d} a_j \sum_{1 \le i_1 < \cdots < i_j \le d} x_{i_1} \cdots x_{i_j}, \qquad (4.23)$$
for suitable constants $(a_j)_{j=0}^{d}$. From the diagonal property $F(x, \ldots, x) = p(x)$ the coefficients $(a_j)_{j=0}^{d}$ are all uniquely determined (since $1, x, \ldots, x^d$ is a basis for $\pi_d$).
The linearity of the blossom with regard to $p$ follows from its uniqueness: the right-hand side of (4.22) is affine in each of the $x_i$, it is symmetric, and it reduces to $c_1 p_1(x) + c_2 p_2(x)$ on the diagonal $x_1 = \cdots = x_d = x$.
Recall that the elementary symmetric polynomials
$$s_j(x_1, \ldots, x_d) = \sum_{1 \le i_1 < \cdots < i_j \le d} x_{i_1} x_{i_2} \cdots x_{i_j} \Big/ \binom{d}{j}$$
that appear in (4.23) (apart from the binomial coefficient) agree with the B-spline coefficients of the polynomial powers,
$$\sigma_{k,d}^{j} = s_j(\tau_{k+1}, \ldots, \tau_{k+d}),$$
see Corollary 3.5. In fact, the elementary symmetric polynomials are the blossoms of the powers,
$$B[x^j](x_1, \ldots, x_d) = s_j(x_1, \ldots, x_d) \qquad \text{for } j = 0, \ldots, d.$$
They can also be defined by the relation
$$(x - x_1) \cdots (x - x_d) = \sum_{k=0}^{d} (-1)^{d-k} \binom{d}{k} s_{d-k}(x_1, \ldots, x_d)\, x^k.$$
Note that the blossom depends on the degree of the polynomial in a nontrivial way. If
we consider the polynomial p(x) = x to be of degree one, then B[p](x1 ) = x1 . But we can
also think of p as a polynomial of degree three (the cubic and quadratic terms are zero);
then we obviously have B[p](x1 , x2 , x3 ) = (x1 + x2 + x3 )/3.
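
For experimentation it can be handy to evaluate blossoms numerically. The following Python sketch (an illustration only; the function name is made up) uses the characterisation $B[x^j] = s_j$ together with the linearity of Theorem 4.14 to evaluate the blossom of a polynomial given by its monomial coefficients.

```python
# Sketch: blossom of p(x) = sum_j a_j x^j, of degree at most d, via elementary symmetric polynomials.
from itertools import combinations
from math import comb, prod

def blossom(a, x):
    """Evaluate B[p](x_1, ..., x_d) for monomial coefficients a = (a_0, ..., a_d) and d = len(x)."""
    d = len(x)
    value = 0.0
    for j, aj in enumerate(a):
        ej = sum(prod(sub) for sub in combinations(x, j))   # sum over all j-subsets of x
        value += aj * ej / comb(d, j)                        # ej / C(d, j) is s_j(x_1, ..., x_d)
    return value

# p(x) = 2x + x^2 - 4x^3 viewed as a cubic; on the diagonal the blossom reproduces p itself.
a = (0, 2, 1, -4)
print(blossom(a, (0.5, 0.5, 0.5)), 2*0.5 + 0.5**2 - 4*0.5**3)   # both print 0.75
```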
4.3.2 B-spline coefficients as blossoms
Earlier in this chapter we have come across a function that is both affine and symmetric. Suppose we have a knot vector $\tau$ for B-splines of degree $d$. On the interval $[\tau_\mu, \tau_{\mu+1})$ the only nonzero B-splines are $B_d = (B_{\mu-d,d}, \ldots, B_{\mu,d})^T$, which can be expressed in terms of matrices as
$$B_d(x)^T = R_1(x) \cdots R_d(x).$$
If we consider the polynomial piece $f = B_d^T c_d$ with coefficients $c_d = (c_{\mu-d}, \ldots, c_\mu)^T$ we can define a function $F$ of $d$ variables by
$$F(x_1, \ldots, x_d) = R_1(x_1) \cdots R_d(x_d) c_d. \qquad (4.24)$$
From equation (4.13) we recognise $F(x_1, \ldots, x_d)$ as the coefficient multiplying a B-spline with knots $x_0, x_1, \ldots, x_{d+1}$ in the representation of the polynomial $f$.
Equation (3.7) in Lemma 3.3 shows that F is a symmetric function. It is also affine in
each of its variables. To verify this, we note that because of the symmetry it is sufficient
to check that it is affine with respect to the first variable. Recall from Theorem 2.18 that
R1 = R1,τ is given by
$$R_1(x) = \Big( \frac{\tau_{\mu+1} - x}{\tau_{\mu+1} - \tau_\mu}, \; \frac{x - \tau_\mu}{\tau_{\mu+1} - \tau_\mu} \Big),$$
which is obviously an affine function of x.
The function F is also related to the polynomial f in that F (x, . . . , x) = f (x). We
have proved the following lemma.
Lemma 4.15. Let $f = \sum_{j=\mu-d}^{\mu} c_j B_{j,d}$ be a polynomial represented in terms of the B-splines in $S_{d,\tau}$ on the interval $[\tau_\mu, \tau_{\mu+1})$, with coefficients $c_d = (c_{\mu-d}, \ldots, c_\mu)^T$. Then the function
$$F(x_1, \ldots, x_d) = R_1(x_1) \cdots R_d(x_d) c_d$$
is symmetric and affine, and agrees with $f$ on the diagonal,
$$F(x, \ldots, x) = f(x).$$
Lemma 4.15 and Theorem 4.14 show that the blossom of $f$ is given by
$$B[f](x_1, \ldots, x_d) = R_1(x_1) \cdots R_d(x_d) c_d.$$
Blossoming can be used to give explicit formulas for the B-spline coefficients of a spline.

Theorem 4.16. Let $f = \sum_{j=1}^{n} c_j B_{j,d,\tau}$ be a spline on a $d+1$-regular knot vector $\tau = (\tau_j)_{j=1}^{n+d+1}$. Its B-spline coefficients are then given by
$$c_j = B[f_k](\tau_{j+1}, \ldots, \tau_{j+d}), \qquad \text{for } k = j, j+1, \ldots, j+d, \qquad (4.25)$$
provided $\tau_k < \tau_{k+1}$. Here $f_k = f|_{(\tau_k, \tau_{k+1})}$ is the restriction of $f$ to the interval $(\tau_k, \tau_{k+1})$.
Proof. Let us first restrict x to the interval [τµ , τµ+1 ) and only consider one polynomial
piece fµ of f . From Lemma 4.15 we know that B[fµ ](x1 , . . . , xd ) = R1 (x1 ) · · · Rd (xd )cd ,
where cd = (cj )µj=µ−d are the B-spline coefficients of f active on the interval [τµ , τµ+1 ).
From (4.16) we then obtain
cj = B[fµ ](τj+1 , . . . , τj+d )
(4.26)
which is (4.25) in this special situation.
To prove (4.25) in general, fix $j$ and choose the integer $k$ in the range $j \le k \le j + d$. We then have
$$f_k(x) = \sum_{i=k-d}^{k} c_i B_{i,d}(x). \qquad (4.27)$$
By the choice of $k$ we see that the sum in (4.27) includes the term $c_j B_{j,d}$. Equation (4.25) therefore follows by applying (4.26) to $f_k$.
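
Combined with the blossoms of the powers from Section 4.3.1, Theorem 4.16 gives a direct recipe for the B-spline coefficients of a polynomial on any knot vector: evaluate its blossom at consecutive knot $d$-tuples. The sketch below reuses the blossom() helper from the previous code fragment (again an illustration under the same assumptions, not an algorithm stated in the text).

```python
# Sketch: B-spline coefficients of a polynomial via c_j = B[p](tau_{j+1}, ..., tau_{j+d}).
def polynomial_bspline_coefficients(a, tau):
    d = len(a) - 1                      # the spline degree; pad a with zeros for a higher degree
    n = len(tau) - d - 1                # number of B-splines on tau
    return [blossom(a, tau[j + 1:j + d + 1]) for j in range(n)]

# p(x) = x^2 on the 3-regular knot vector (0, 0, 0, 1, 2, 2, 2): here B[x^2](y, z) = y*z,
# so the coefficients are tau_{j+1} * tau_{j+2}, i.e. [0, 0, 2, 4].
print(polynomial_bspline_coefficients((0, 0, 1), (0, 0, 0, 1, 2, 2, 2)))
```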
The affine property allows us to perform one important operation with the blossom;
we can change the arguments.
Lemma 4.17. The blossom of $p$ satisfies the relation
$$B[p](\ldots, x, \ldots) = \frac{b - x}{b - a} B[p](\ldots, a, \ldots) + \frac{x - a}{b - a} B[p](\ldots, b, \ldots) \qquad (4.28)$$
for all real numbers $a$, $b$ and $x$ with $a \ne b$.
Proof. Observe that $x$ can be written as an affine combination of $a$ and $b$,
$$x = \frac{b - x}{b - a} a + \frac{x - a}{b - a} b.$$
Equation (4.28) then follows from the affine property of the blossom.
The next result will be useful later.
Lemma 4.18. Let $B_x\big[p(x, y)\big]$ denote the blossom of $p$ with respect to the variable $x$. Then
$$B_x\big[(y - x)^k\big](x_1, \ldots, x_d) = \frac{k!}{d!} D^{d-k}\big((y - x_1) \cdots (y - x_d)\big), \qquad (4.29)$$
for $k = 0, 1, \ldots, d$, and
$$B_x\big[(y_1 - x) \cdots (y_\ell - x)\big](x_1, \ldots, x_d) = \frac{(d - \ell)!}{d!} \sum_{1 \le i_1, \ldots, i_\ell \le d} (y_1 - x_{i_1}) \cdots (y_\ell - x_{i_\ell}), \qquad (4.30)$$
where the sum is over all distinct choices $i_1, \ldots, i_\ell$ of $\ell$ integers from the $d$ integers $1, \ldots, d$.

Proof. For $k = d$ equation (4.29) follows since the right-hand side is symmetric and affine in each of the variables $x_i$ and it agrees with $(y - x)^d$ on the diagonal $x_1 = \cdots = x_d = x$. The general result is then obtained by differentiating both sides $d - k$ times with respect to $y$.
Equation (4.30) follows since the right-hand side is affine, symmetric and reduces to $(y_1 - x) \cdots (y_\ell - x)$ when $x = x_1 = \cdots = x_d$, i.e., it must be the blossom of $(y_1 - x) \cdots (y_\ell - x)$.
4.4 Inserting one knot at a time
With blossoming we have a simple but powerful tool for determining the B-spline coefficients of splines. Here we will apply blossoming to develop an alternative knot insertion
strategy. Instead of inserting all new knots simultaneously we can insert them sequentially.
We insert one knot at a time and update the B-spline coefficients between each insertion.
This leads to simple, explicit formulas.
Lemma 4.19 (Böhm's method). Let $\tau = (\tau_j)_{j=1}^{n+d+1}$ be a given knot vector and let $t = (t_i)_{i=1}^{n+d+2}$ be the knot vector obtained by inserting a knot $z$ in $\tau$ in the interval $[\tau_\mu, \tau_{\mu+1})$. If
$$f = \sum_{j=1}^{n} c_j B_{j,d,\tau} = \sum_{i=1}^{n+1} b_i B_{i,d,t},$$
then $(b_i)_{i=1}^{n+1}$ can be expressed in terms of $(c_j)_{j=1}^{n}$ through the formulas
$$b_i = \begin{cases} c_i, & \text{if } 1 \le i \le \mu - d; \\[4pt] \dfrac{z - \tau_i}{\tau_{i+d} - \tau_i} c_i + \dfrac{\tau_{i+d} - z}{\tau_{i+d} - \tau_i} c_{i-1}, & \text{if } \mu - d + 1 \le i \le \mu; \\[4pt] c_{i-1}, & \text{if } \mu + 1 \le i \le n + 1. \end{cases} \qquad (4.31)$$
Proof. Observe that for $j \le \mu$ we have $\tau_j = t_j$. For $i \le \mu - d$ and with $k$ an integer such that $i \le k \le i + d$ it therefore follows from (4.25) that
$$b_i = B[f_k](t_{i+1}, \ldots, t_{i+d}) = B[f_k](\tau_{i+1}, \ldots, \tau_{i+d}) = c_i.$$
Similarly, we have $t_i = \tau_{i-1}$ for $i \ge \mu + 1$ so
$$b_i = B[f_k](t_{i+1}, \ldots, t_{i+d}) = B[f_k](\tau_i, \ldots, \tau_{i+d-1}) = c_{i-1}$$
for such values of $i$.
When $i$ satisfies $\mu - d + 1 \le i \le \mu$ we note that $z$ will appear in the sequence $(t_{i+1}, \ldots, t_{i+d})$. From (4.25) we therefore obtain
$$b_i = B[f_\mu](t_{i+1}, \ldots, z, \ldots, t_{i+d}) = B[f_\mu](\tau_{i+1}, \ldots, z, \ldots, \tau_{i+d-1})$$
since we now may choose $k = \mu$. Applying Lemma 4.17 with $x = z$, $a = \tau_i$ and $b = \tau_{i+d}$ yields
$$b_i = \frac{\tau_{i+d} - z}{\tau_{i+d} - \tau_i} B[f_\mu](\tau_{i+1}, \ldots, \tau_i, \ldots, \tau_{i+d-1}) + \frac{z - \tau_i}{\tau_{i+d} - \tau_i} B[f_\mu](\tau_{i+1}, \ldots, \tau_{i+d}, \ldots, \tau_{i+d-1}).$$
Exploiting the symmetry of the blossom and again applying (4.25) leads to the middle formula in (4.31).
It is sometimes required to insert the same knot several times; this can of course be accomplished by applying the formulas in (4.31) several times. Since blossoms have the property $B[f](z, \ldots, z) = f(z)$, we see that inserting a knot $d$ times in a spline of degree $d$ gives as a by-product the function value of $f$ at $z$. This can be conveniently illustrated by listing old and new coefficients in a triangular scheme. Consider the following triangle ($d = 3$),
$$\begin{array}{cccccccc}
\cdots & c_{\mu-4}^0 & c_{\mu-3}^0 & c_{\mu-2}^0 & c_{\mu-1}^0 & c_{\mu}^0 & c_{\mu+1}^0 & \cdots \\
 & & & c_{\mu-2}^1 & c_{\mu-1}^1 & c_{\mu}^1 & & \\
 & & & & c_{\mu-1}^2 & c_{\mu}^2 & & \\
 & & & & & c_{\mu}^3 & &
\end{array}$$
In the first row we have the coefficients of $f$ on the original knot vector $\tau$. After inserting $z$ in $(\tau_\mu, \tau_{\mu+1})$ once, the coefficients relative to the knot vector $\tau^1 = \tau \cup \{z\}$ are
$$(\ldots, c_{\mu-4}^0, c_{\mu-3}^0, c_{\mu-2}^1, c_{\mu-1}^1, c_{\mu}^1, c_{\mu}^0, c_{\mu+1}^0, \ldots),$$
i.e., we move down one row in the triangle. Suppose that $z$ is inserted once more. The new B-spline coefficients on $\tau^2 = \tau^1 \cup \{z\}$ are now found by moving down to the second row, across this row, and up the right hand side,
$$(\ldots, c_{\mu-4}^0, c_{\mu-3}^0, c_{\mu-2}^1, c_{\mu-1}^2, c_{\mu}^2, c_{\mu}^1, c_{\mu}^0, c_{\mu+1}^0, \ldots).$$
Similarly, if $z$ is inserted 3 times, we move around the whole triangle. We can also insert $z$ a full $d + 1 = 4$ times. We then simply repeat $c_{\mu}^3$ two times in the last row.
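
The formulas (4.31) are straightforward to program. The Python sketch below (illustrative, 0-based indexing, not taken from the text) inserts a single knot and updates the coefficients; repeating the insertion d times makes the function value f(z) appear among the coefficients, exactly as in the triangular scheme above.

```python
# Sketch of Boehm's method (4.31): insert the knot z once and update the coefficients.
from bisect import bisect_right

def insert_knot(d, tau, c, z):
    """Return the refined knot vector and coefficient vector after inserting z once."""
    mu = bisect_right(tau, z) - 1                 # 0-based: tau[mu] <= z < tau[mu+1]
    t = tau[:mu + 1] + [z] + tau[mu + 1:]
    b = []
    for i in range(len(c) + 1):                   # 0-based i; the book's index is i + 1
        if i <= mu - d:
            b.append(c[i])
        elif i <= mu:
            lam = (z - tau[i]) / (tau[i + d] - tau[i])
            b.append((1 - lam) * c[i - 1] + lam * c[i])
        else:
            b.append(c[i - 1])
    return t, b

# Inserting z a total of d times makes f(z) appear among the coefficients
# (the quadratic spline of Example 4.8, with z = -1/2).
tau, c, d, z = [-1, -1, -1, 0, 1, 1, 1], [1, -2, 2, -1], 2, -0.5
for _ in range(d):
    tau, c = insert_knot(d, tau, c, z)
print(c)   # the third entry, -0.75, is the function value f(-0.5)
```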
Lemma 4.19 shows that Oslo Algorithm 2 (Algorithm 4.11) is not always efficient.
To compute a new coefficient in the case where only one new knot is inserted requires
at most one convex combination according to Lemma 4.19 while Algorithm 4.11 requires
the computation of a full triangle (two nested loops). More efficient versions of the Oslo
algorithms can be developed, but this will not be considered here.
The simplicity of the formulas (4.31) indicates that the knot insertion matrix A must
have a simple structure when only one knot is inserted. Setting $c = (c_i)_{i=1}^{n}$ and $b = (b_i)_{i=1}^{n+1}$
and remembering that $b = Ac$, we see that $A$ is given by the $(n+1) \times n$ matrix
$$A = \begin{pmatrix}
1 & 0 & & & & \\
 & \ddots & \ddots & & & \\
 & & 1 & 0 & & \\
 & & 1 - \lambda_{\mu-d+1} & \lambda_{\mu-d+1} & & \\
 & & & \ddots & \ddots & \\
 & & & & 1 - \lambda_{\mu} & \lambda_{\mu} \\
 & & & & 0 & 1 \\
 & & & & & \ddots & \ddots \\
 & & & & & & 0 & 1
\end{pmatrix}, \qquad (4.32)$$
where $\lambda_i = (z - \tau_i)/(\tau_{i+d} - \tau_i)$ for $\mu - d + 1 \le i \le \mu$. All the entries off the two diagonals
are zero and such matrices are said to be bi-diagonal. Since z lies in the interval [τµ , τµ+1 )
all the entries in A are nonnegative. This property generalises to arbitrary knot insertion
matrices.
Lemma 4.20. Let $\tau = (\tau_j)_{j=1}^{n+d+1}$ and $t = (t_i)_{i=1}^{m+d+1}$ be two knot vectors for splines of degree $d$ with $\tau \subseteq t$. All the entries of the knot insertion matrix $A$ from $S_{d,\tau}$ to $S_{d,t}$ are nonnegative and $A$ can be factored as
$$A = A_{m-n} A_{m-n-1} \cdots A_1, \qquad (4.33)$$
where $A_i$ is a bi-diagonal $(n + i) \times (n + i - 1)$-matrix with nonnegative entries.

Proof. Let us denote the $m - n$ knots that are in $t$ but not in $\tau$ by $(z_i)_{i=1}^{m-n}$. Set $t^0 = \tau$ and $t^i = t^{i-1} \cup \{z_i\}$ for $i = 1, \ldots, m - n$. Denote by $A_i$ the knot insertion matrix from $t^{i-1}$ to $t^i$. By applying Böhm's method $m - n$ times we obtain (4.33). Since all the entries in each of the matrices $A_i$ are nonnegative the same must be true of $A$.
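
As a quick numerical check of the factorisation (4.33), composing single insertions with the insert_knot sketch from Section 4.4 should reproduce the matrix-vector result of Example 4.8 (again an illustration, not a statement from the text):

```python
# Sketch: inserting -1/2 and then 1/2 one at a time, as in Lemma 4.20.
tau, c = [-1, -1, -1, 0, 1, 1, 1], [1, -2, 2, -1]
for z in (-0.5, 0.5):
    tau, c = insert_knot(2, tau, c, z)
print(c)   # approximately [1, -0.5, -1, 1, 0.5, -1], the coefficients b of Example 4.8
```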
4.5 Bounding the number of sign changes in a spline
In this section we will make use of Böhm’s method for knot insertion to prove that the
number of sign changes in a spline function is bounded by the number of sign changes in
its B-spline coefficient vector. This provides a generalisation of an interesting property of
polynomials known as Descartes’ rule of signs. Bearing the name of Descartes, this result
is of course classical, but it is seldom mentioned in elementary mathematics textbooks.
Before stating Descartes’ rule of signs let us record what we mean by sign changes in a
definition.
Definition 4.21. Let $c = (c_i)_{i=1}^{n}$ be a vector of real numbers. The number of sign changes in $c$ (zeros are ignored) is denoted $S^-(c)$. The number of sign changes in a function $f$ in an interval $(a, b)$ is denoted $S^-_{(a,b)}(f) = S^-(f)$, provided this number is finite. It is given by the largest possible integer $r$ such that an increasing sequence of $r + 1$ real numbers $x_1 < \cdots < x_{r+1}$ in $(a, b)$ can be found with the property that $S^-\big(f(x_1), \ldots, f(x_{r+1})\big) = r$.
Example 4.22. Let us consider some simple examples of counting sign changes. It is easily checked that
$$S^-(1, -2) = 1, \qquad S^-(1, -1, 2) = 2, \qquad S^-(1, 0, -1, 3) = 2,$$
$$S^-(1, 0, 2) = 0, \qquad S^-(2, 0, 0, 0, -1) = 1, \qquad S^-(2, 0, 0, 0, 1) = 0.$$
Figure 4.3. Illustrations of Descartes' rule of signs: the number of zeros in (0, ∞) is no greater than the number of strong sign changes in the coefficients. The four panels show (a) p(x) = 1 − x, (b) p(x) = 1 − 3x + x^2, (c) p(x) = 2 − 3x + x^2, (d) p(x) = 1 − 4x + 4x^2 − x^3.
As stated in the definition, we simply count sign changes by counting the number of jumps from positive
to negative values and from negative to positive, ignoring all components that are zero.
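
A direct implementation of the count S^- might look like the following Python sketch (an illustration; zeros are simply skipped, as in the definition).

```python
# Sketch: count strong sign changes, ignoring zero entries.
def sign_changes(c):
    signs = [1 if x > 0 else -1 for x in c if x != 0]
    return sum(1 for a, b in zip(signs, signs[1:]) if a != b)

print(sign_changes([1, 0, -1, 3]), sign_changes([2, 0, 0, 0, 1]))   # 2 0
```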
Descartes' rule of signs bounds the number of zeros in a polynomial by the number of sign changes in its coefficients. Recall that $z$ is a zero of $f$ of multiplicity $r \ge 1$ if $f(z) = Df(z) = \cdots = D^{r-1} f(z) = 0$ but $D^r f(z) \ne 0$.

Theorem 4.23 (Descartes' rule of signs). Let $p = \sum_{i=0}^{d} c_i x^i$ be a polynomial of degree $d$ with coefficients $c = (c_0, \ldots, c_d)^T$, and let $Z(p)$ denote the total number of zeros of $p$ in the interval $(0, \infty)$, counted with multiplicities. Then
$$Z(p) \le S^-(c),$$
i.e., the number of zeros of $p$ is bounded by the number of sign changes in its coefficients.
Figures 4.3 (a)–(d) show some polynomials and their zeros in (0, ∞).
Our aim is to generalise this result to spline functions, written in terms of B-splines.
This is not so simple because it is difficult to count zeros for splines. In contrast to
polynomials, a spline may for instance be zero on an interval without being identically
zero. In this section we will therefore only consider zeros that are also sign changes. In
the next section we will then generalise and allow multiple zeros.
To bound the number of sign changes of a spline we will investigate how knot insertion
influences the number of sign changes in the B-spline coefficients. Let Sd,τ and Sd,t be two
spline spaces of degree $d$, with $S_{d,\tau} \subseteq S_{d,t}$. Recall from Section 4.4 that to get from the knot vector $\tau$ to the refined knot vector $t$, we can insert one knot at a time. If there are $\ell$ more knots in $t$ than in $\tau$, this leads to a factorisation of the knot insertion matrix $A$ as
$$A = A_\ell A_{\ell-1} \cdots A_1, \qquad (4.34)$$
where $A_k$ is an $(n + k) \times (n + k - 1)$ matrix for $k = 1, \ldots, \ell$, if $\dim S_{d,\tau} = n$. Each of the matrices $A_k$ corresponds to insertion of only one knot, and all the nonzero entries of the bi-diagonal matrix $A_k$ are found in positions $(i, i)$ and $(i + 1, i)$ for $i = 1, \ldots, n + k - 1$, and these entries are all nonnegative (in general many of them will be zero).
We start by showing that the number of sign changes in the B-spline coefficients is
reduced when the knot vector is refined.
Lemma 4.24. Let $S_{d,\tau}$ and $S_{d,t}$ be two spline spaces such that $t$ is a refinement of $\tau$. Let $f = \sum_{j=1}^{n} c_j B_{j,d,\tau} = \sum_{i=1}^{m} b_i B_{i,d,t}$ be a spline in $S_{d,\tau}$ with B-spline coefficients $c$ in $S_{d,\tau}$ and $b$ in $S_{d,t}$. Then $b$ has no more sign changes than $c$, i.e.,
$$S^-(Ac) = S^-(b) \le S^-(c), \qquad (4.35)$$
where $A$ is the knot insertion matrix from $\tau$ to $t$.
Proof. Since we can insert the knots one at a time, it clearly suffices to show that (4.35) holds in the case where there is only one more knot in $t$ than in $\tau$. In this case we know from Lemma 4.19 that $A$ is bidiagonal so
$$b_i = \alpha_{i-1}(i) c_{i-1} + \alpha_i(i) c_i, \qquad \text{for } i = 1, \ldots, n + 1,$$
where $\big(\alpha_j(i)\big)_{i,j=1}^{n+1,n}$ are the entries of $A$ (for convenience of notation we have introduced two extra entries that are zero, $\alpha_0(1) = \alpha_{n+1}(n+1) = 0$). Since $\alpha_{i-1}(i)$ and $\alpha_i(i)$ are both nonnegative, the sign of $b_i$ must be the same as either $c_{i-1}$ or $c_i$ (or be zero). Since the number of sign changes in a vector is not altered by inserting zeros or a number with the same sign as one of its neighbours we have
$$S^-(c) = S^-(b_1, c_1, b_2, c_2, \ldots, b_{n-1}, c_{n-1}, b_n, c_n, b_{n+1}) \ge S^-(b).$$
The last inequality follows since the number of sign changes in a vector is never increased when entries are removed.
From Lemma 4.24 we can quite easily bound the number of sign changes in a spline in
terms of the number of sign changes in its B-spline coefficients.
Theorem 4.25. Let $f = \sum_{j=1}^{n} c_j B_{j,d}$ be a spline in $S_{d,\tau}$. Then
$$S^-(f) \le S^-(c) \le n - 1. \qquad (4.36)$$
Proof. Suppose that $S^-(f) = \ell$, and let $(x_i)_{i=1}^{\ell+1}$ be $\ell + 1$ points chosen so that $S^-(f) = S^-\big(f(x_1), \ldots, f(x_{\ell+1})\big)$. We form a new knot vector $t$ that includes $\tau$ as a subsequence, but in addition each of the $x_i$ occurs exactly $d + 1$ times in $t$. From our study of knot insertion we know that $f$ may be written $f = \sum_j b_j B_{j,d,t}$ for suitable coefficients $(b_j)$, and
from Lemma 2.6 we know that each of the function values $f(x_i)$ will appear as a B-spline coefficient in $b$. We therefore have
$$S^-(f) \le S^-(b) \le S^-(c),$$
the last inequality following from Lemma 4.24. The last inequality in (4.36) follows since an $n$-vector can only have $n - 1$ sign changes.

Figure 4.4. A quadratic spline (a) and a cubic spline (b) with their control polygons.
The validity of Theorem 4.25 can be checked with the two plots in Figure 4.4 as well
as all other figures which include both a spline function and its control polygon.
Exercises for Chapter 4
4.1 In this exercise we are going to study a change of polynomial basis from the Bernstein basis to the Monomial basis. Recall that the Bernstein basis of degree $d$ is defined by
$$B_k^d(x) = \binom{d}{k} x^k (1 - x)^{d-k}, \qquad \text{for } k = 0, 1, \ldots, d. \qquad (4.37)$$
A polynomial $p$ of degree $d$ is said to be written in Monomial form if $p(x) = \sum_{k=0}^{d} b_k x^k$ and in Bernstein form if $p(x) = \sum_{k=0}^{d} c_k B_k^d(x)$. In this exercise the binomial formula
$$(a + b)^d = \sum_{k=0}^{d} \binom{d}{k} a^k b^{d-k} \qquad (4.38)$$
will be useful.
a) By applying (4.38), show that
$$B_j^d(x) = \sum_{i=j}^{d} (-1)^{i-j} \binom{d}{j} \binom{d - j}{i - j} x^i, \qquad \text{for } j = 0, 1, \ldots, d.$$
Also show that
$$\binom{d}{j} \binom{d - j}{i - j} = \binom{d}{i} \binom{i}{j} \qquad \text{for } i = j, \ldots, d \text{ and } j = 0, \ldots, d.$$
b) The two basis vectors $\boldsymbol{B}_d = \big(B_0^d(x), \ldots, B_d^d(x)\big)^T$ and $\boldsymbol{P}_d = (1, x, \ldots, x^d)^T$ are related by $\boldsymbol{B}_d^T = \boldsymbol{P}_d^T A_d$ where $A_d$ is a $(d+1) \times (d+1)$-matrix. Show that the entries of $A_d = (a_{i,j})_{i,j=0}^{d}$ are given by
$$a_{i,j} = \begin{cases} 0, & \text{if } i < j, \\ (-1)^{i-j} \binom{d}{i} \binom{i}{j}, & \text{otherwise.} \end{cases}$$
c) Show that the entries of Ad satisfy the recurrence relation
$$a_{i,j} = \beta_i \big(a_{i-1,j-1} - a_{i-1,j}\big), \qquad \text{where } \beta_i = (d - i + 1)/i.$$
Give a detailed algorithm for computing Ad based on this formula.
d) Explain how we can find the coefficients of a polynomial relative to the Monomial basis if Ad is known and the coefficients relative to the Bernstein basis are
known.
4.2 In this exercise we are going to study the opposite conversion of that in Exercise 1, namely from the Monomial basis to the Bernstein basis.
a) With the aid of (4.38), show that for all $x$ and $t$ in $\mathbb{R}$ we have
$$\big(tx + (1 - x)\big)^d = \sum_{k=0}^{d} B_k^d(x)\, t^k. \qquad (4.39)$$
The function $G(t) = \big(tx + (1 - x)\big)^d$ is called a generating function for the Bernstein polynomials.
b) Show that $\sum_{k=0}^{d} B_k^d(x) = 1$ for all $x$ by choosing a suitable value for $t$ in (4.39).
c) Find two different expressions for $G^{(j)}(1)/j!$ and show that this leads to the formulas
$$\binom{d}{j} x^j = \sum_{i=j}^{d} \binom{i}{j} B_i^d(x), \qquad \text{for } j = 0, \ldots, d. \qquad (4.40)$$
d) Show that the entries of the matrix $B_d = (b_{i,j})_{i,j=0}^{d}$ such that $\boldsymbol{P}_d^T = \boldsymbol{B}_d^T B_d$ are given by
$$b_{i,j} = \begin{cases} 0, & \text{if } i < j, \\ \binom{i}{j} \Big/ \binom{d}{j}, & \text{otherwise.} \end{cases}$$
4.3 Let P denote the cubic Bernstein basis on the interval [0, 1] and let Q denote the
cubic Bernstein basis on the interval [2, 3]. Determine the matrix A3 such that
P (x)T = Q(x)T A3 for all real numbers x.
4.4 Let $A$ denote the knot insertion matrix for the linear ($d = 1$) B-splines on $\tau = (\tau_j)_{j=1}^{n+2}$ to the linear B-splines in $t = (t_i)_{i=1}^{m+2}$. We assume that $\tau$ and $t$ are 2-extended with $\tau_1 = t_1$ and $\tau_{n+2} = t_{m+2}$ and $\tau \subseteq t$.
a) Determine $A$ when $\tau = (0, 0, 1/2, 1, 1)$ and $t = (0, 0, 1/4, 1/2, 3/4, 1, 1)$.
b) Devise a detailed algorithm that computes A for general τ and t and requires
O(m) operations.
c) Show that the matrix $A^T A$ is tridiagonal.
4.5 Prove Lemma 4.4 in the general case where τ and t are not d + 1-regular. Hint:
Augment both τ and t by inserting d + 1 identical knots at the beginning and end.
4.6 Prove Theorem 4.7 in the general case where the knot vectors are not d + 1-regular
with common knots at the ends. Hint: Use the standard trick of augmenting τ and
t with d + 1 identical knots at both ends to obtain new knot vectors τ̂ and t̂. The
knot insertion matrix from τ to t can then be identified as a sub-matrix of the knot
insertion matrix from τ̂ to t̂.
4.7 Show that if $\tau$ and $t$ are $d+1$-regular knot vectors with $\tau \subseteq t$ whose knots agree at the ends then $\sum_j \alpha_{j,d}(i) = 1$.
4.8 Implement Algorithm 4.11 and test it on two examples. Verify graphically that the
control polygon converges to the spline as more and more knots are inserted.
4.9 Let f be a function that satisfies the identity
f (αx + βy) = αf (x) + βf (y)
(4.41)
for all real numbers x and y and all real numbers α and β such that α + β = 1.
Show that then f must be an affine function. Hint: Use the alternative form of
equation (4.41) found in Lemma 4.17.
4.10 Find the cubic blossom B[p](x1 , x2 , x3 ) when p is given by:
a) p(x) = x3 .
b) p(x) = 1.
c) p(x) = 2x + x2 − 4x3 .
d) p(x) = 0.
e) p(x) = (x − a)2 where a is some real number.