# An unconditionally convergent method for computing zeros of splines and polynomials∗

Knut Mørken and Martin Reimers

January 6, 2007
## Abstract

We present a simple and efficient method for computing zeros of spline functions. The method exploits the close relationship between a spline and its control polygon and is based on repeated knot insertion. Like Newton's method it is quadratically convergent, but the new method overcomes the principal problem with Newton's method in that it always converges and no starting value needs to be supplied by the user.
## 1 Introduction
B-splines are a classical format for representing univariate functions and parametric curves in many applications, and the toolbox for manipulating such functions is rich, both from a theoretical and a practical point of view. A commonly occurring operation is to find the zeros of a spline, and a number of methods for accomplishing this have been devised. One possibility is to use a classical method like Newton's method or the secant method, both of which leave us with the question of how to choose the initial guess(es).
For this we can exploit a very nice feature of spline functions, namely that
every spline comes with a simple, piecewise linear approximation, the control
polygon. It is easy to show that a spline whose control polygon is everywhere
of one sign cannot have any zeros. Likewise, a good starting point for an
iterative procedure is a point in the neighbourhood of a zero of the control
polygon. More refined methods exploit another important feature of splines,
namely that the control polygon converges to the spline as the spacing of
the knots (the joins between adjacent polynomial pieces) goes to zero. One
can then start by inserting knots to obtain a control polygon where the zeros
are clearly isolated and then apply a suitable iterative method to determine
∗ 2000 Mathematics Subject Classification: 41A15, 65D07, 65H05
the zeros accurately. Although hybrid methods of this type can be tuned to
perform well, there are important unanswered problems. Where should the
knots be inserted in the initial phase? How many knots should be inserted
in the initial phase? How should the starting value for the iterations be
chosen? Will the method always converge?
In this paper we propose a simple method that provides answers to all
the above questions. The method is very simple: Iteratively insert zeros of
the control polygon as new knots of the spline. It turns out that all accumulation points of this procedure will be zeros of the spline function and we
prove below that the method is unconditionally convergent. In addition it
is essentially as efficient as Newton’s method, and asymptotically it behaves
like this method. A similar strategy for Bézier curves was used in ; however, no convergence analysis was given. Another related method is “Bézier clipping”, see . The “Interval Newton” method proposed in  is, like our method, unconditionally quadratically convergent and avoids the problem
of choosing an initial guess. It is based on iteratively dividing the domain
into segments that may contain a zero, using estimates for the derivatives of
the spline. For an overview of a number of other methods for polynomials,
see .
## 2 Background spline material
The method itself is very simple, but the analysis of convergence and convergence rate requires knowledge of a number of spline topics which we
summarise here.
Let $f = \sum_{i=1}^{n} c_i B_{i,d,t}$ be a spline in the $n$-dimensional spline space $\mathbb{S}_{d,t}$ spanned by the $n$ B-splines $\{B_{i,d,t}\}_{i=1}^{n}$. Here $d$ denotes the polynomial degree, and the nondecreasing sequence $t = (t_1, \dots, t_{n+d+1})$ denotes the knots (joins between neighbouring polynomial pieces) of the spline. We assume that $t_i < t_{i+d+1}$ for $i = 1, \dots, n$, which ensures that the B-splines are linearly independent. We also make the common assumption that the first and last $d+1$ knots are identical. This causes no loss of generality, as any spline can be converted to this form by inserting the appropriate number of knots at the two ends; see the discussion of knot insertion below. Without this assumption the ends of the spline would have to be treated specially in parts of our algorithm.
The control polygon $\Gamma = \Gamma_t(f)$ of $f$ relative to $\mathbb{S}_{d,t}$ is defined as the piecewise linear function with vertices at $(\bar t_i, c_i)_{i=1}^{n}$, where $\bar t_i = (t_{i+1} + \cdots + t_{i+d})/d$ is called the $i$th knot average. This piecewise linear function is known to approximate $f$ itself and has many useful properties that can be exploited in analysis and development of algorithms for splines. One such property
which is particularly useful when it comes to finding zeros is a spline version of Descartes' rule of signs for polynomials: a spline has at most as many zeros as its control polygon (this requires the spline to be connected, see below). More formally,

$$Z(f) \le S^-(\Gamma) = S^-(c) \le n - 1, \tag{1}$$

where $Z(f)$ is the number of zeros, counting multiplicities, and $S^-(\Gamma)$ and $S^-(c)$ are the number of strict sign changes (ignoring zeros) in $\Gamma$ and $c$ respectively, see . We say that $z$ is a zero of $f$ of multiplicity $m$ if $f^{(j)}(z) = 0$ for $j = 0, \dots, m-1$ and $f^{(m)}(z) \ne 0$. A simple zero is a zero of multiplicity $m = 1$. The inequality (1) holds for splines that are connected, meaning that for each $x \in [t_1, t_{n+d+1})$ there is at least one $i$ such that $c_i B_{i,d,t}(x) \ne 0$. This requirement ensures that $f$ is nowhere identically zero.
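The two quantities in (1) are easy to compute. The following is a minimal Python/NumPy sketch of the knot averages and the strict sign-change count (the function names are our own, and array indices are zero-based):

```python
import numpy as np

def knot_averages(t, d):
    """Knot averages: the i-th average is the mean of the d knots
    t_{i+1}, ..., t_{i+d} (paper indices); t has length n + d + 1."""
    t = np.asarray(t, dtype=float)
    n = len(t) - d - 1
    return np.array([t[i + 1 : i + d + 1].mean() for i in range(n)])

def strict_sign_changes(c):
    """S^-(c): number of strict sign changes in c, ignoring zeros."""
    signs = [x > 0 for x in c if x != 0]
    return sum(1 for a, b in zip(signs, signs[1:]) if a != b)
```

For the cubic spline used in Figure 1 below, with $t = (0,0,0,0,1,1,1,1)$ and $c = (-1,-1,1/2,0)$, the knot averages are $(0, 1/3, 2/3, 1)$ and $S^-(c) = 1$.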
The derivative of $f$ is a spline in $\mathbb{S}_{d-1,t}$ which can be written in the form $f' = \sum_{j=1}^{n+1} \Delta c_j B_{j,d-1,t}$, where

$$\Delta c_j = \frac{c_j - c_{j-1}}{\bar t_j - \bar t_{j-1}} = d\, \frac{c_j - c_{j-1}}{t_{j+d} - t_j}, \tag{2}$$

if we use the conventions that $c_0 = c_{n+1} = 0$ and $\Delta c_j = 0$ whenever $\bar t_j - \bar t_{j-1} = 0$. This is justified since $\bar t_j - \bar t_{j-1} = 0$ implies $t_j = t_{j+d}$ and hence $B_{j,d-1,t} \equiv 0$. Note that the $j$th coefficient of $f'$ equals the slope of segment number $j-1$ of the control polygon of $f$.
The second derivative of a spline is obtained by differentiating $f'$, which results in a spline $f''$ of degree $d-2$ with coefficients

$$\Delta^2 c_j = (d-1)\, \frac{\Delta c_j - \Delta c_{j-1}}{t_{j+d-1} - t_j}. \tag{3}$$
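In code, formula (2) is a single pass over the coefficients. The following Python/NumPy sketch follows the stated conventions ($c_0 = c_{n+1} = 0$, and $\Delta c_j = 0$ when the knot interval collapses; the function name is ours). Formula (3) can be implemented analogously, with $d-1$ in place of $d$ and the $\Delta c_j$ in place of the $c_j$:

```python
import numpy as np

def derivative_coeffs(c, t, d):
    """Coefficients (Delta c_j), j = 1, ..., n+1, of f' via formula (2),
    with c_0 = c_{n+1} = 0 and Delta c_j = 0 when t_{j+d} = t_j."""
    c, t = np.asarray(c, float), np.asarray(t, float)
    n = len(c)
    cpad = np.concatenate(([0.0], c, [0.0]))   # c_0, c_1, ..., c_n, c_{n+1}
    dc = np.zeros(n + 1)
    for j in range(1, n + 2):                  # paper index j (1-based)
        denom = t[j + d - 1] - t[j - 1]        # t_{j+d} - t_j in 0-based storage
        if denom != 0.0:
            dc[j - 1] = d * (cpad[j] - cpad[j - 1]) / denom
    return dc
```

For the cubic with $t = (0,0,0,0,1,1,1,1)$ and $c = (-1,-1,1/2,0)$ this gives $(\Delta c_j) = (0, 0, 4.5, -1.5, 0)$.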
We will need the following well-known stability property for B-splines. For all splines $f = \sum_i c_i B_{i,d,t}$ in $\mathbb{S}_{d,t}$ the two inequalities

$$K_d^{-1} \|c\|_\infty \le \|f\|_\infty \le \|c\|_\infty \tag{4}$$

hold for some constant $K_d$ that depends only on the degree $d$, see e.g. . The norm of $f$ is here taken over the interval $[t_1, t_{n+d+1}]$.
Suppose that we insert a new knot $x$ in $t$ and form the new knot vector $t^1 = t \cup \{x\}$. Since $\mathbb{S}_{d,t} \subseteq \mathbb{S}_{d,t^1}$, we know that $f = \sum_{i=1}^{n} c_i B_{i,d,t} = \sum_{i=1}^{n+1} c^1_i B_{i,d,t^1}$. More specifically, if $x \in [t_p, t_{p+1})$ we have $c^1_i = c_i$ for $i = 1, \dots, p-d$;

$$c^1_i = (1 - \mu_i)\, c_{i-1} + \mu_i\, c_i \quad\text{with}\quad \mu_i = \frac{x - t_i}{t_{i+d} - t_i} \tag{5}$$

for $i = p-d+1, \dots, p$; and $c^1_i = c_{i-1}$ for $i = p+1, \dots, n+1$, see . (If the new knot does not lie in the interval $[t_{d+1}, t_{n+1})$, the indices must be restricted accordingly. This will not happen when the first and last knots occur $d+1$ times as we have assumed here.) It is not hard to verify that the same relation holds for the knot averages,

$$\bar t^1_i = (1 - \mu_i)\, \bar t_{i-1} + \mu_i\, \bar t_i \tag{6}$$

for $i = p-d+1, \dots, p$. This means that the corners of $\Gamma^1$, the refined control polygon, lie on $\Gamma$. This property is useful when studying the effect of knot insertion on the number of zeros of $\Gamma$.
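Formula (5) translates directly into a short routine. The sketch below (our own helper, zero-based arrays, assuming $t_{d+1} \le x < t_{n+1}$ as in the text) returns the refined knot vector and coefficients:

```python
import numpy as np

def insert_knot(t, c, d, x):
    """Insert one knot x into t, formula (5).  Returns (t1, c1) representing
    the same degree-d spline on the refined knot vector t1 = t U {x}.
    Assumes t_{d+1} <= x < t_{n+1} (boundary knots of multiplicity d+1)."""
    t, c = np.asarray(t, float), np.asarray(c, float)
    n = len(c)
    p = int(np.searchsorted(t, x, side="right"))  # 1-based p with t_p <= x < t_{p+1}
    c1 = np.empty(n + 1)
    c1[: p - d] = c[: p - d]                      # c^1_i = c_i      for i <= p-d
    for i in range(p - d + 1, p + 1):             # i = p-d+1, ..., p
        mu = (x - t[i - 1]) / (t[i + d - 1] - t[i - 1])   # mu_i from (5)
        c1[i - 1] = (1.0 - mu) * c[i - 2] + mu * c[i - 1]
    c1[p:] = c[p - 1 :]                           # c^1_i = c_{i-1}  for i >= p+1
    return np.sort(np.append(t, x)), c1
```

Inserting $x = 1/2$ into the cubic with $t = (0,0,0,0,1,1,1,1)$ and $c = (-1,-1,1/2,0)$ gives $c^1 = (-1,-1,-1/4,1/4,0)$, and one can check that the new vertices $(\bar t^1_i, c^1_i)$ lie on the old control polygon, in accordance with (6).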
We count the number of zeros of $\Gamma$ as the number of strict sign changes in the coefficient sequence $(c_i)_{i=1}^{n}$. The position of a zero of the control polygon is obvious when the two ends of a line segment have opposite signs. However, the control polygon can also be identically zero on an interval, in which case we associate the zero with the left end point of the zero interval. More formally, if $c_{k-1} c_{k+\ell} < 0$ and $c_{k+i} = 0$ for $i = 0, \dots, \ell-1$, we say that $k$ is the index of the zero $z$, which is given by

$$z = \min\bigl\{\, x \in [\bar t_{k-1}, \bar t_{k+\ell}] \bigm| \Gamma(x) = 0 \,\bigr\}.$$

Note that in this case $c_{k-1} \ne 0$ and $c_{k-1} c_k \le 0$.
We let $S^-_{i,j}(\Gamma)$ denote the number of zeros of $\Gamma$ in the half-open interval $(\bar t_i, \bar t_j]$. It is clear that $S^-_{1,n}(\Gamma) = S^-(\Gamma)$ and that $S^-_{i,k}(\Gamma) = S^-_{i,j}(\Gamma) + S^-_{j,k}(\Gamma)$ for $i, j, k$ such that $1 \le i < j < k \le n$. We note that if $\Gamma^1$ is the control polygon of $f$ after inserting one knot, then for any $k = 2, \dots, n$

$$S^-(\Gamma^1) \le S^-(\Gamma) - S^-_{k-1,k}(\Gamma) + S^-_{k-1,k+1}(\Gamma^1). \tag{7}$$
To prove this we first observe that the two inequalities

$$S^-_{1,k-1}(\Gamma^1) \le S^-_{1,k-1}(\Gamma), \qquad S^-_{k+1,n+1}(\Gamma^1) \le S^-_{k,n}(\Gamma)$$

are true since the corners of $\Gamma^1$ lie on $\Gamma$, see (5). The inequality (7) follows from the identity

$$S^-(\Gamma^1) = S^-_{1,k-1}(\Gamma^1) + S^-_{k-1,k+1}(\Gamma^1) + S^-_{k+1,n+1}(\Gamma^1)$$

and the two inequalities above.
The rest of this section is devoted to blossoming which is needed in later
sections to prove convergence and establish the convergence rate of our zero
finding algorithm. The $i$th B-spline coefficient of a spline is a continuous function of the $d$ interior knots $t_{i+1}, \dots, t_{i+d}$ of the $i$th B-spline,

$$c_i = F(t_{i+1}, \dots, t_{i+d}), \tag{8}$$

and the function $F$ is completely characterised by three properties:
- it is affine (a polynomial of degree at most one) in each of its arguments,
- it is a symmetric function of its arguments,
- it satisfies the diagonal property $F(x, \dots, x) = f(x)$.

The function $F$ is referred to as the blossom of the spline and is often written as $\mathcal{B}[f](y_1, \dots, y_d)$.
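For a single polynomial piece the blossom can be written down explicitly: the degree-$d$ polar form of the monomial $x^k$ is $e_k(y_1, \dots, y_d)\big/\binom{d}{k}$, where $e_k$ is the $k$th elementary symmetric polynomial. The sketch below (our own helper, not from the paper) makes the three characterising properties easy to check numerically:

```python
import math
from itertools import combinations

def blossom(coeffs, d, ys):
    """Blossom of f(x) = sum_k coeffs[k] * x**k, regarded as a polynomial
    of degree d, evaluated at the d-tuple ys = (y_1, ..., y_d)."""
    assert len(ys) == d and len(coeffs) <= d + 1
    total = 0.0
    for k, a in enumerate(coeffs):
        # elementary symmetric polynomial e_k(ys)
        e_k = sum(math.prod(sel) for sel in combinations(ys, k))
        total += a * e_k / math.comb(d, k)
    return total
```

For $f(x) = x^2 - 1$ with $d = 2$ this gives $F(y_1, y_2) = y_1 y_2 - 1$, which is symmetric, affine in each argument, and satisfies $F(x, x) = f(x)$.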
The affinity of the blossom means that $F(x, \dots) = ax + b$ where $a$ and $b$ depend on all arguments but $x$. From this it follows that

$$F(x, \dots) = \frac{z - x}{z - y}\, F(y, \dots) + \frac{x - y}{z - y}\, F(z, \dots). \tag{9}$$
Strictly speaking, blossoms are only defined for polynomials, so when we refer to the blossom of a spline we really mean the blossom of one of its polynomial pieces. However, the relation $c_i = \mathcal{B}[f_j](t_{i+1}, \dots, t_{i+d})$ remains true as long as $f_j$ is the restriction of $f$ to one of the (nonempty) intervals $[t_j, t_{j+1})$ for $j = i, i+1, \dots, i+d$.
We will use the notation $F' = \mathcal{B}[f']$ and $F'' = \mathcal{B}[f'']$ for blossoms of the derivatives of a spline, i.e.,

$$\Delta c_i = F'(t_{i+1}, \dots, t_{i+d-1}) \tag{10}$$

and

$$\Delta^2 c_i = F''(t_{i+1}, \dots, t_{i+d-2}). \tag{11}$$
By differentiating (9) with respect to $x$ we obtain

$$DF(x, \dots) = \frac{F(z, \dots) - F(y, \dots)}{z - y}. \tag{12}$$

More generally, we denote the derivative of $F(x_1, \dots, x_d)$ with respect to $x_i$ by $D_i F$. The derivative of the blossom is related to the derivative of $f$ by

$$d\, D_i F(y_1, \dots, y_d) = F'(y_1, \dots, y_{i-1}, y_{i+1}, \dots, y_d). \tag{13}$$
This relation follows since both sides are symmetric, affine in each of the arguments except $y_i$, and agree on the diagonal. The latter property follows from the relation (shown here in the quadratic case)

$$\begin{aligned}
\frac{f(x+h) - f(x)}{h} &= \frac{F(x+h,\, x+h) - F(x, x)}{h} \\
&= \frac{F(x+h,\, x+h) - F(x,\, x+h)}{h} + \frac{F(x,\, x+h) - F(x, x)}{h} \\
&= D_1 F(x,\, x+h) + D_2 F(x, x),
\end{aligned}$$

which tends to $f'(x) = 2\, DF(x, x)$ when $h \to 0$.
Higher order derivatives will be indicated by additional subscripts as in
Di,j F (y1 , . . . , yd ), but note that the derivative is zero if differentiation with
respect to the same variable is performed more than once.
We will need a nonstandard, multivariate version of Taylor's formula. This is obtained from the fundamental theorem of calculus, which states that

$$g(x) = g(a) + \int_0^1 \frac{d}{dt}\, g\bigl(a + t(x-a)\bigr)\, dt = g(a) + (x-a)\, g'(a) \tag{14}$$

when $g$ is an affine function. Suppose now that $F$ is a function that is affine in each of its $d$ arguments. Repeated application of (14) then leads to

$$\begin{aligned}
F(y_1, \dots, y_d) = F(a_1, \dots, a_d) &+ \sum_{i=1}^{d} (y_i - a_i)\, D_i F(a_1, \dots, a_d) \\
&+ \sum_{i=2}^{d} \sum_{j=1}^{i-1} (y_i - a_i)(y_j - a_j)\, D_{i,j} F(a_1, \dots, a_i, y_{i+1}, \dots, y_d).
\end{aligned} \tag{15}$$
Finally, we will need bounds for the second derivatives of the blossom in terms of the spline. Results in this direction are well-known folklore, but a proof of a specific result may be difficult to locate. We have therefore included a short proof here.

**Lemma 1.** There is a constant $K_{d-2}$ such that for all $y_1, \dots, y_d$ and $1 \le i, j \le d$,

$$|D_{i,j} F(y_1, \dots, y_d)| \le \frac{K_{d-2}}{d(d-1)}\, \|D^2 f\|_\infty. \tag{16}$$

*Proof.* Two applications of (13) yield

$$D_{i,j} F(y_1, \dots, y_d) = \frac{1}{d(d-1)}\, F''\bigl((y_1, \dots, y_d) \setminus \{y_i, y_j\}\bigr).$$

By (4) we have $\bigl|F''\bigl((y_1, \dots, y_d) \setminus \{y_i, y_j\}\bigr)\bigr| \le K_{d-2} \|D^2 f\|_\infty$, and (16) follows.
## 3 Root finding algorithm
The basic idea of the root finding algorithm is to exploit the close relationship
between the control polygon and the spline, and we do this by using the zeros
of the control polygon as an initial guess for the zeros of the spline. In the
next step we refine the control polygon by inserting these zeros as knots.
We can then find the zeros of the new control polygon, insert these zeros
as knots and so on. The method can be formulated in a particularly simple
way if we focus on determining the left-most zero. There is no loss in this
since once the left-most zero has been found, we can split the spline at this
point by inserting a knot of multiplicity d into t and then proceed with the
other zeros.
Note that the case where $f$ has a zero at the first knot $t_1$ can easily be detected a priori; the spline is then disconnected at $t_1$ (see Section 2 for the definition of connectedness). In fact, disconnected splines are degenerate, and this degeneracy is easy to detect. We therefore assume that the spline under consideration is connected.
We give a more refined version of the method in Algorithm 2 which
focuses on determining an arbitrary zero of f .
**Algorithm 1.** Let $f$ be a connected spline in $\mathbb{S}_{d,t}$ and set $t^0 = t$. Repeat the following steps for $j = 0, 1, \dots$, until the sequence $(x^j)$ is found to converge or no zero of the control polygon can be found.

1. Determine the first zero $x^{j+1}$ of the control polygon $\Gamma^j$ of $f$ relative to the space $\mathbb{S}_{d,t^j}$.
2. Form the knot vector $t^{j+1} = t^j \cup \{x^{j+1}\}$.
We will show below that if f has zeros, this procedure will converge to
the first zero of f , otherwise it will terminate after a finite number of steps.
A typical example of how the algorithm behaves is shown in Figure 1.
In the following, we only discuss the first iteration through Algorithm 1 and therefore omit the superscripts. In case $d = 1$ the control polygon and the spline are identical, so the zero is found in the first iteration. We will therefore assume $d > 1$ in the rest of the paper.

**Figure 1.** Our algorithm applied to a cubic spline with knot vector $t = (0, 0, 0, 0, 1, 1, 1, 1)$ and B-spline coefficients $c = (-1, -1, 1/2, 0)$.

The first zero of the control polygon is the zero of the linear segment connecting the two points $(\bar t_{k-1}, c_{k-1})$ and $(\bar t_k, c_k)$, where $k$ is the smallest zero index, i.e., the smallest integer such that $c_{k-1} c_k \le 0$ and $c_{k-1} \ne 0$; see Section 2 for the
definition of zero index. The zero is characterised by the equation

$$(1 - \lambda)\, c_{k-1} + \lambda\, c_k = 0,$$

which has the solution

$$\lambda = \frac{-c_{k-1}}{c_k - c_{k-1}}.$$
The control polygon therefore has a zero at

$$x^1 = (1-\lambda)\, \bar t_{k-1} + \lambda\, \bar t_k = \bar t_{k-1} - \frac{c_{k-1}(t_{k+d} - t_k)}{d\,(c_k - c_{k-1})} = \bar t_k - \frac{c_k(t_{k+d} - t_k)}{d\,(c_k - c_{k-1})}. \tag{17}$$
Using the notation (2) we can write this in the simple form

$$x^1 = \bar t_k - \frac{c_k}{\Delta c_k} = \bar t_{k-1} - \frac{c_{k-1}}{\Delta c_k}, \tag{18}$$

from which it is apparent that the method described by Algorithm 1 resembles a discrete version of Newton's method.
When $x^1$ is inserted in $t$, we can express $f$ on the resulting knot vector via a new coefficient vector $c^1$ as in (5). The new control points lie on the old control polygon, and hence this process is variation diminishing in the sense that the number of zeros of the control polygon is non-increasing. In fact, the knot insertion step in Algorithm 1 either results in a refined control polygon that has at least one zero in the interval $(\bar t^1_{k-1}, \bar t^1_{k+1}]$, or the number of zeros in the refined control polygon has been reduced by at least 2 compared with the original control polygon.
**Lemma 2.** If $k$ is the index of a zero of $\Gamma$ and $S^-_{k-1,k+1}(\Gamma^1) = 0$, then $S^-(\Gamma^1) \le S^-(\Gamma) - 2$.

*Proof.* From (7) we see that the number of sign changes in $\Gamma^1$ is at least one less than in $\Gamma$, and since the number of sign changes has to decrease in even numbers the result follows.
This means that if $\Gamma^1$ has no zero in $(\bar t^1_{k-1}, \bar t^1_{k+1}]$, the zero in $(\bar t_{k-1}, \bar t_k]$ was a false warning; there is no corresponding zero in $f$. In fact, we have accomplished more than this, since we have also removed a second false zero from the control polygon. If we still wish to find the first zero of $f$ we can restart the algorithm from the leftmost zero of the refined control polygon.
However, it is useful to be able to detect that zeros in the control polygon have disappeared so we reformulate our algorithm with this ingredient.
In addition, we need this slightly more elaborate algorithm to carry out a
detailed convergence analysis.
**Algorithm 2.** Let $f$ be a connected spline in $\mathbb{S}_{d,t}$, and set $t^0 = t$ and $c^0 = c$. Let $k_0 = k$ be a zero index for $\Gamma$. Repeat the following steps for $j = 0, 1, \dots$, or until the process is halted in step 3:

1. Compute $x^{j+1} = \bar t^j_{k_j} - c^j_{k_j} / \Delta c^j_{k_j}$.
2. Form the knot vector $t^{j+1} = t^j \cup \{x^{j+1}\}$ and compute the B-spline coefficients $c^{j+1}$ of $f$ in $\mathbb{S}_{d,t^{j+1}}$.
3. Choose $k_{j+1}$ to be the smallest of the two integers $k_j$ and $k_j + 1$, but such that $k_{j+1}$ is the index of a zero of $\Gamma^{j+1}$. Stop if no such number can be found or if $f$ is disconnected at $x^{j+1}$.
Before turning to the analysis of convergence, we establish a few basic facts about the algorithm. We shall call an infinite sequence $(x^j)$ generated by this algorithm a zero sequence. We also introduce the notation $\hat t^j = (t^j_{k_j}, \dots, t^j_{k_j+d})$ to denote what we naturally term the active knots at level $j$. In addition we denote by $a^j = \sum_{i=1}^{d-1} t^j_{k_j+i} / (d-1)$ the average of the interior active knots.
**Lemma 3.** The zero $x^{j+1}$ computed in Algorithm 2 satisfies the relations $x^{j+1} \in (\bar t^j_{k_j-1}, \bar t^j_{k_j}] \subseteq (t^j_{k_j}, t^j_{k_j+d}]$, and if $x^{j+1} = t^j_{k_j+d}$ then $f$ is disconnected at $x^{j+1}$ with $f(x^{j+1}) = 0$.

*Proof.* Since $c^j_{k-1} \ne 0$ we must have $x^{j+1} \in (\bar t^j_{k-1}, \bar t^j_k]$. Since we also have $(\bar t^j_{k-1}, \bar t^j_k] \subseteq (t^j_k, t^j_{k+d}]$, the first assertion follows.

For the second assertion, we observe that we always have $x^{j+1} \le \bar t^j_k \le t^j_{k+d}$. This means that if $x^{j+1} = t^j_{k+d}$ we must have $x^{j+1} = \bar t^j_k$ and $c^j_k = 0$. But then $x^{j+1} = t^j_{k+1} = \cdots = t^j_{k+d}$, so $f(x^{j+1}) = c^j_k = 0$.
Our next result shows how the active knots at one level are derived from the active knots on the previous level.

**Lemma 4.** If $k_{j+1} = k_j$ then $\hat t^{j+1} = \hat t^j \cup \{x^{j+1}\} \setminus \{t^j_{k_j+d}\}$. Otherwise, if $k_{j+1} = k_j + 1$, then $\hat t^{j+1} = \hat t^j \cup \{x^{j+1}\} \setminus \{t^j_{k_j}\}$.

*Proof.* We know that $x^{j+1} \in (t^j_{k_j}, t^j_{k_j+d}]$. Therefore, if $k_{j+1} = k_j$ the latest zero $x^{j+1}$ becomes a new active knot while $t^j_{k_j+d}$ is lost. The other case is similar.
## 4 Convergence
We now have the necessary tools to prove that a zero sequence $(x^j)$ converges; afterwards we will prove that the limit is a zero of $f$. We first show convergence of the first and last active knots.

**Lemma 5.** Let $(x^j)$ be a zero sequence. The corresponding sequence of initial active knots $(t^j_{k_j})_j$ is an increasing sequence that converges to some real number $t_-$ from below, and the sequence of last active knots $(t^j_{k_j+d})_j$ is a decreasing sequence that converges to some real number $t_+$ from above, with $t_- \le t_+$.
*Proof.* From Lemma 3 we have $x^{j+1} \in (t^j_{k_j}, t^j_{k_j+d}]$, and due to Lemma 4 we have $t^{j+1}_{k_{j+1}} \ge t^j_{k_j}$ and $t^{j+1}_{k_{j+1}+d} \le t^j_{k_j+d}$ for each $j$. Since $t^j_{k_j} \le t^j_{k_j+d}$ the result follows.

This lemma implies that $x^j \in (t^\ell_{k_\ell}, t^\ell_{k_\ell+d}]$ for all $j$ and $\ell$ such that $j > \ell$. Also, the set of intervals $\bigl\{[t^j_{k_j}, t^j_{k_j+d}]\bigr\}_{j=0}^{\infty}$ in which we insert the new knots is nested, and these intervals tend to a limit,

$$[t_{k_0}, t_{k_0+d}] \supseteq [t^1_{k_1}, t^1_{k_1+d}] \supseteq [t^2_{k_2}, t^2_{k_2+d}] \supseteq \cdots \supseteq [t_-, t_+].$$
**Proposition 6.** A zero sequence converges to either $t_-$ or $t_+$.

The proof of convergence goes via several lemmas; however, in one situation the result is quite obvious: from Lemma 5 we deduce that if $t_- = t_+$, the active knots, and hence the zero sequence, must all converge to this number, so there is nothing to prove. We therefore focus on the case $t_- < t_+$.
**Lemma 7.** None of the knot vectors $(t^j)_{j=0}^{\infty}$ have knots in $(t_-, t_+)$.

*Proof.* Suppose that there is at least one knot in $(t_-, t_+)$; by the definition of $t_-$ and $t_+$ this must be an active knot for all $j$. Then, for all $j$ sufficiently large, the knot $t^j_{k_j}$ will be so close to $t_-$ and $t^j_{k_j+d}$ so close to $t_+$ that the two averages $\bar t^j_{k_j-1}$ and $\bar t^j_{k_j}$ will both lie in $(t_-, t_+)$. Since $x^{j+1} \in (\bar t^j_{k_j-1}, \bar t^j_{k_j}]$, this means that $x^{j+1} \in (t_-, t_+)$. As a consequence, there are infinitely many knots in $(t_-, t_+)$. But this is impossible since for any given $j$ only the knots $(t^j_{k_j+i})_{i=1}^{d-1}$ can possibly lie in this interval.
**Lemma 8.** Suppose $t_- < t_+$. Then there is an integer $\ell$ such that for all $j \ge \ell$ either $t^j_{k_j}, \dots, t^j_{k_j+d-1} \le t_-$ and $t^j_{k_j+d} = t_+$, or $t^j_{k_j+1}, \dots, t^j_{k_j+d} \ge t_+$ and $t^j_{k_j} = t_-$.

*Proof.* Let $K$ denote the constant $K = (t_+ - t_-)/(d-1) > 0$. From Lemma 5 we see that there is an $\ell$ such that $t^j_{k_j} > t_- - K$ and $t^j_{k_j+d} < t_+ + K$ for $j \ge \ell$. If the lemma were not true, it is easy to check that $\bar t^j_{k_j-1}$ and $\bar t^j_{k_j}$ would have to lie in $(t_-, t_+)$, and hence $x^{j+1}$ would lie in $(t_-, t_+)$, which contradicts the previous lemma.
**Lemma 9.** Suppose that $t_- < t_+$. Then the zero sequence $(x^j)$ and the sequences of interior active knots $(t^j_{k_j+1}), \dots, (t^j_{k_j+d-1})$ all converge, and one of the following is true: either all the sequences converge to $t_-$ and $x^j \le t_-$ for $j$ larger than some $\ell$, or all the sequences converge to $t_+$ and $x^j \ge t_+$ for all $j$ larger than some $\ell$.

*Proof.* We consider the two situations described in Lemma 8 in turn. Suppose that $t^j_{k_j}, \dots, t^j_{k_j+d-1} \le t_-$ for $j \ge \ell$. This means that $\bar t^j_{k_j} < t_+$ and, since $x^{j+1}$ cannot lie in $(t_-, t_+)$, we must have $x^{j+1} \le t_-$ for $j \ge \ell$. Since no new knots can appear to the right of $t_+$ we must have $t^j_{k_j+d} = t_+$ for $j \ge \ell$. Moreover, since $t^j_{k_j} < x^{j+1} \le t_-$, we conclude that $(x^j)$ and all the sequences of interior active knots converge to $t_-$. The proof for the case $t^j_{k_j+1}, \dots, t^j_{k_j+d} \ge t_+$ is similar.
Lemma 9 completes the proof of Proposition 6. It remains to show that
the limit of a zero sequence is a zero of f .
**Lemma 10.** Any accumulation point of a zero sequence is a zero of $f$.

*Proof.* Let $z$ be an accumulation point for a zero sequence $(x^j)$, and let $\epsilon$ be any positive real number. Then there must be positive integers $\ell$ and $k$ such that $t^\ell_{k+1}, \dots, t^\ell_{k+d}$ and $x^{\ell+1}$ all lie in the interval $(z - \epsilon/2,\, z + \epsilon/2)$. Let $t = \bar t^\ell_k$ and let $\Gamma = \Gamma_{t^\ell}(f)$ be the control polygon of $f$ in $\mathbb{S}_{d,t^\ell}$. We know that the derivative $f' = \sum_i \Delta c_i B_{i,d-1,t^\ell}$ is a spline in $\mathbb{S}_{d-1,t^\ell}$, and from (4) it follows that $\|(\Delta c_i)\|_\infty \le K_{d-1} \|f'\|_\infty$ for some constant $K_{d-1}$ depending only on $d$. From this we note that for any real numbers $x$ and $y$ we have the inequalities

$$|\Gamma(x) - \Gamma(y)| \le \Bigl|\int_x^y \Gamma'(u)\, du\Bigr| \le \|(\Delta c_i)\|_\infty\, |y - x| \le K_{d-1} \|f'\|_\infty\, |y - x|.$$

In particular, since $\Gamma(x^{\ell+1}) = 0$ it follows that

$$|\Gamma(t)| = |\Gamma(t) - \Gamma(x^{\ell+1})| \le K_{d-1} \|f'\|_\infty\, \epsilon.$$

In addition it follows from Theorem 4.2 in  that

$$|f(t) - \Gamma(t)| \le C\, (t^\ell_{k+d} - t^\ell_{k+1})^2 \le C \epsilon^2,$$

where $C$ is another constant depending on $f$ and $d$, but not on $t^\ell$. Combining these estimates we obtain

$$|f(z)| \le |f(z) - f(t)| + |f(t) - \Gamma(t)| + |\Gamma(t)| \le \|f'\|_\infty\, \epsilon + C \epsilon^2 + K_{d-1} \|f'\|_\infty\, \epsilon.$$

Since this is valid for any positive value of $\epsilon$ we must have $f(z) = 0$.
Lemmas 5, 9 and 10 lead to our main result.
**Theorem 11.** A zero sequence converges to a zero of $f$.

Recall that the zero finding algorithm does not need a starting value, and there are no conditions in Theorem 11. On the other hand, all control polygons of a spline with a zero must have at least one zero. For such splines the algorithm is therefore unconditionally convergent (for splines without zeros the algorithm will detect that the spline is of one sign in a finite number of steps).
## 5 Some further properties of the algorithm
Before turning to an analysis of the convergence rate of the zero finding algorithm, we need to study the limiting behaviour of the algorithm more closely. Convergence implies that the coefficients of the spline converge to function values. A consequence of this is that the algorithm behaves like Newton's method in the limit.
**Proposition 12.** Let $(x^j)$ be a zero sequence converging to a zero $z$ of $f$. Then at least one of $c_{k_j-1}$ and $c_{k_j}$ converges to $f(z) = 0$ when $j$ tends to infinity. The divided differences also converge, in that $\Delta c^j_{k_j} \to f'(z)$, $\Delta^2 c^j_{k_j} \to f''(z)$ and $\Delta^2 c^j_{k_j+1} \to f''(z)$.

*Proof.* We have seen that all the interior active knots $t^j_{k_j+1}, \dots, t^j_{k_j+d-1}$, and at least one of $t^j_{k_j}$ and $t^j_{k_j+d}$, tend to $z$ when $j$ tends to infinity. The first statement therefore follows from the diagonal property and the continuity of the blossom. By also applying Lemma 9 we obtain convergence of some of the divided differences. In particular we have $\Delta c^j_{k_j} = F'(t^j_{k_j+1}, \dots, t^j_{k_j+d-1}) \to f'(z)$, and similarly for $\Delta^2 c^j_{k_j}$ and $\Delta^2 c^j_{k_j+1}$.
Our next lemma gives some more insight into the method.

**Lemma 13.** Let $f$ be a spline in $\mathbb{S}_{d,t}$ and let $G : \mathbb{R}^d \to \mathbb{R}$ denote the function

$$G(y_1, \dots, y_d) = \bar y - \frac{F(y_1, \dots, y_d)}{F'(y_1, \dots, y_{d-1})}, \tag{19}$$

where $\bar y = (y_1 + \cdots + y_d)/d$ and $F$ and $F'$ are the blossoms of $f$ and $f'$ respectively. Let $z$ be a simple zero of $f$ and let $z^d$ denote the $d$-vector $(z, \dots, z)$. The function $G$ has the following properties:

1. $G$ is continuous at all points where $F'$ is nonzero.
2. $G(y, \dots, y) = y$ if and only if $f(y) = 0$.
3. The gradient satisfies $\nabla G(z^d) = 0$.
4. $G(y_1, \dots, y_d)$ is independent of $y_d$.
*Proof.* Recall from equation (8) that the B-spline coefficient $c_i$ of $f$ in $\mathbb{S}_{d,t}$ agrees with the blossom $F$ of $f$, that is, $c_i = F(t_{i+1}, \dots, t_{i+d})$. Recall also that $(c_i - c_{i-1})/(t_{i+d} - t_i) = D_d F(t_{i+1}, \dots, t_{i+d}) = F'(t_{i+1}, \dots, t_{i+d-1})/d$, see (12) and (13). This means that the computation of the new estimate for the zero in (18) can be written as

$$x^1 = \bar t_k - \frac{c_k}{\Delta c_k} = \bar t_k - \frac{F(t_{k+1}, \dots, t_{k+d})}{d\, D_d F(t_{k+1}, \dots, t_{k+d})} = \bar t_k - \frac{F(t_{k+1}, \dots, t_{k+d})}{F'(t_{k+1}, \dots, t_{k+d-1})};$$

in other words, $x^1 = G(t_{k+1}, \dots, t_{k+d})$. The continuity of $G$ follows from the continuity of $F$. The second property of $G$ is immediate. To prove the third property, we omit the arguments to $F$ and $G$, and as before we use the notation $D_i F$ to denote the derivative of $F(y_1, \dots, y_d)$ with respect to $y_i$. The basic iteration can then be written $G = \bar y - F/(d\, D_d F)$, while the derivative with respect to $y_i$ is

$$D_i G = \frac{1}{d}\left(1 - \frac{D_i F\, D_d F - F\, D_{i,d} F}{(D_d F)^2}\right).$$

Evaluating this at $(y_1, \dots, y_d) = z^d$ and observing that $D_i F(z^d) = F'(z^{d-1})/d = D_d F(z^d)$ while $F(z^d) = 0$, we see that $D_i G(z^d) = 0$.

To prove the last claim we observe that $D_{d,d} F(y_1, \dots, y_d) = 0$ since $F$ is an affine function of $y_d$. From this it follows that $D_d G(y_1, \dots, y_d) = 0$.
Since $G(y_1, \dots, y_d)$ is independent of $y_d$, it is, strictly speaking, not necessary to list it as a variable. On the other hand, since $y_d$ is required to compute the value of $G(y_1, \dots, y_d)$, it is convenient to include it in the list.
Lemma 13 shows that a zero of f is a fixed point of G, and it is in
fact possible to show that G is a kind of contractive mapping. However, we
will not make use of this here. Our main use of Lemma 13 is the following
immediate consequence of Property 2 which is needed later.
**Corollary 14.** If $t^\ell_{k_\ell+1} = \cdots = t^\ell_{k_\ell+d-1} = z$, then $x^j = z$ for all $j > \ell$.
We say that a sequence (yj ) is asymptotically increasing (decreasing) if
there is an ` such that yj ≤ yj+1 (yj ≥ yj+1 ) for all j ≥ `. A sequence
that is asymptotically increasing or decreasing is said to be asymptotically
monotone. If the inequalities are strict for all j ≥ ` we say that the sequence
is asymptotically strictly increasing, decreasing or monotone.
**Lemma 15.** If a zero sequence is asymptotically decreasing, there is some integer $\ell$ such that $t^j_{k_j} = t_-$ for all $j > \ell$. Similarly, if a zero sequence is asymptotically increasing, there is some $\ell$ such that $t^j_{k_j+d} = t_+$ for all $j > \ell$.

*Proof.* Suppose that $(x^j)$ is decreasing and that $t^j_{k_j} < t_-$ for all $j$. For each $j$ there must then be some $\nu$ such that $x^\nu \in (t^j_{k_j}, t_-)$. But this is impossible when $(x^j)$ is decreasing and converges to either $t_-$ or $t_+$. The other case is similar.
**Lemma 16.** Let $(x^j)$ be an asymptotically increasing or decreasing zero sequence converging to a simple zero $z$. Then $t_- < t_+$.

*Proof.* Suppose $(x^j)$ is asymptotically decreasing, i.e., that $x^j \ge x^{j+1}$ for all $j$ greater than some $\ell$, and that $t_- = t_+ = z$. Then, according to Lemma 15, we have $t^j_{k_j} = t_-$ for large $j$. This implies that the active knots $t^j_{k_j}, \dots, t^j_{k_j+d}$ all tend to $z$ and satisfy $z = t^j_{k_j} \le \cdots \le t^j_{k_j+d}$ for large $j$. Consider now the Taylor expansion (15) in the special case where $a_i = z$ for all $i$ and $y_i = t^j_{k_j+i}$ for $i = 1, \dots, d$. Then $F(y_1, \dots, y_d) = c^j_{k_j}$ and $f(z) = 0$, so

$$c^j_{k_j} = f'(z)\,(\bar t^j_{k_j} - z) + \sum_{i=2}^{d} \sum_{m=1}^{i-1} (y_i - z)(y_m - z)\, D_{i,m} F(z, \dots, z, y_{i+1}, \dots, y_d). \tag{20}$$

The second derivatives of $F$ can be bounded independently of the knots in terms of the second derivative of $f$, see Lemma 1. This means that for sufficiently large values of $j$, the first term on the right in (20) will dominate. The same argument can be applied to $c^j_{k_j-1}$, and hence both $c^j_{k_j-1}$ and $c^j_{k_j}$ will have the same nonzero sign as $f'(z)$ for sufficiently large $j$. But this contradicts the general assumption that $c^j_{k_j-1} c^j_{k_j} \le 0$.

The case that $(x^j)$ is asymptotically increasing is similar.
Before we continue, we need a small technical lemma.

**Lemma 17.** Let $t^1$ be a knot vector obtained by inserting a knot $z$ into $t$, and let $f = \sum_{j=1}^{n} c_j B_{j,d,t} = \sum_{j=1}^{n+1} c^1_j B_{j,d,t^1}$ be a spline in $\mathbb{S}_{d,t}$ such that $c_{k-1} c_k < 0$ for some $k$. If $c^1_k = 0$ then

$$z = \frac{t_{k+1} + \cdots + t_{k+d-1}}{d-1}. \tag{21}$$
*Proof.* If $c^1_k$ is going to be zero after knot insertion, it must obviously be a strict convex combination of the two coefficients $c_{k-1}$ and $c_k$, which have opposite signs. From (5) we know that the formula for computing $c^1_k$ is

$$c^1_k = \frac{t_{k+d} - z}{t_{k+d} - t_k}\, c_{k-1} + \frac{z - t_k}{t_{k+d} - t_k}\, c_k. \tag{22}$$

But we also have the relation $(1-\lambda)(\bar t_{k-1}, c_{k-1}) + \lambda(\bar t_k, c_k) = (z, 0)$, which means that $\lambda = (z - \bar t_{k-1})/(\bar t_k - \bar t_{k-1})$ and

$$0 = \frac{\bar t_k - z}{\bar t_k - \bar t_{k-1}}\, c_{k-1} + \frac{z - \bar t_{k-1}}{\bar t_k - \bar t_{k-1}}\, c_k. \tag{23}$$

If $c^1_k = 0$ the weights used in the convex combinations (22) and (23) must be the same,

$$\frac{z - t_k}{t_{k+d} - t_k} = \frac{z - \bar t_{k-1}}{\bar t_k - \bar t_{k-1}}.$$

Solving this equation for $z$ gives the required result.
In the normal situation where $f'(z)\, f''(z) \ne 0$, a zero sequence behaves very nicely.

**Lemma 18.** Let $(x^j)$ be a zero sequence converging to a zero $z$ and suppose that $f'$ and $f''$ are both continuous and nonzero at $z$. Then either there is an $\ell$ such that $x^j = z$ for all $j \ge \ell$, or $(x^j)$ is asymptotically strictly monotone.
*Proof.* We will show the result in the case where both $f'(z)$ and $f''(z)$ are positive; the other cases are similar. To simplify the notation we set $k = k_j$ in this proof. We know from the above results that the sequences $(t^j_{k+1})_j, \dots, (t^j_{k+d-1})_j$ all converge to $z$. Because blossoms are continuous functions of the knots, there must be an $\ell$ such that for all $j \ge \ell$ we have

$$\Delta c^j_k = F'(t^j_{k+1}, \dots, t^j_{k+d-1}) > 0, \qquad \Delta^2 c^j_k = F''(t^j_{k+1}, \dots, t^j_{k+d-2}) > 0, \qquad \Delta^2 c^j_{k+1} = F''(t^j_{k+2}, \dots, t^j_{k+d-1}) > 0.$$

This implies that the control polygons $\Gamma^j$ are convex on $[\bar t^j_{k-2}, \bar t^j_{k+1}]$ and increasing on $[\bar t^j_{k-1}, \bar t^j_{k+1}]$ for $j \ge \ell$. Recall that $c^{j+1}_k$ is a convex combination of $c^j_{k-1} < 0$ and $c^j_k \ge 0$. There are two cases to consider. If $c^{j+1}_k \ge 0$ we have $c^{j+1}_{k-1} < 0$. In other words, $\Gamma^{j+1}$ must be increasing and pass through zero on the interval $I = [\bar t^{j+1}_{k-1}, \bar t^{j+1}_k]$, which is a subset of $[\bar t^j_{k-2}, \bar t^j_k]$, where $\Gamma^j$ is convex. But then $\Gamma^{j+1}(x) \ge \Gamma^j(x)$ on $I$, so $x^{j+2} \le x^{j+1}$. If on the other hand $c^{j+1}_k < 0$, then $\Gamma^{j+1}$ is increasing and passes through zero on $[\bar t^{j+1}_k, \bar t^{j+1}_{k+1}]$, which is a subset of $[\bar t^j_{k-1}, \bar t^j_{k+1}]$, where $\Gamma^j$ is also convex. We can therefore conclude that $x^{j+2} \le x^{j+1}$ in this case as well.
It remains to show that either xj+2 < xj+1 for all j ≥ ` or xp = z from
some p onwards. If for some j we have xj+2 = xj+1 , then this point must
be common to Γj+1 and Γj . It turns out that there are three different ways
in which this may happen.
(i) The coefficient c^j_{k-1} is the last one of the initial coefficients of Γ^j that is not affected by insertion of x_{j+1}. Then we must have c^j_{k-1} = c^{j+1}_{k-1} and x_{j+1} ≤ t^{j+1}_k, which means that x_{j+1} ∈ [t^j_{k+d-1}, t^j_{k+d}). In addition we must have c^{j+1}_k ≥ 0, for otherwise x_{j+2} < x_{j+1}. But then k_{j+1} = k and therefore t^{j+1}_{k_{j+1}+d} = x_{j+1} = x_{j+2}. From Lemma 3 we can now conclude that x_{j+p} = z for all p ≥ 1.
(ii) The coefficient c^{j+1}_{k+1} = c^j_k is the first one of the later coefficients that is not affected by insertion of x_{j+1}, and x_{j+1} > t^{j+1}_k. This is similar to the first case.
(iii) The final possibility is that x_{j+1} lies in a part of Γ^{j+1} where all vertices are strict convex combinations of vertices of Γ^j, and the convexity and monotonicity assumptions of the first part of the proof are valid. This means that the zero x_{j+1} must be a vertex of Γ^{j+1}, since the interiors of the line segments of Γ^{j+1} in question lie strictly above Γ^j. From Lemma 17 we see that this is only possible if x_{j+1} = a_j = (t^j_{k+1} + \cdots + t^j_{k+d-1})/(d - 1). Without loss of generality we may assume that all the interior active knots are old x_j's, and since we know that (x_j) is asymptotically decreasing, we must have x_{j+1} ≤ t^j_{k+1} ≤ t^j_{k+d-1} for sufficiently large j. Then x_{j+1} = a_j implies that x_{j+1} = t^j_{k+1} = ... = t^j_{k+d-1}, and so x_{j+1} is a fixed point of G by properties 2 and 4 in Lemma 13. Therefore x_i = z for all i ≥ j.
Thus, if for sufficiently large j we have xj+1 = xj+2 , then we will also have
xj+p = xj+1 for all p > 1. This completes the proof.
Lemmas 16 and 18 are summed up in the following theorem.
Theorem 19. Let (x_j) be a zero sequence converging to a zero z of f, and suppose that f' and f'' are both nonzero at z. Then there is an ℓ such that either x_j = z for all j ≥ ℓ, or one of the following two statements is true:

1. if f'(z)f''(z) > 0 then t_− < t_+ = z and x_j > x_{j+1} for all j ≥ ℓ;

2. if f'(z)f''(z) < 0 then z = t_− < t_+ and x_j < x_{j+1} for all j ≥ ℓ.
Proof. We first note that if f' or f'' is not continuous at z, then there must be a knot of multiplicity at least d − 1 at z; it is then easy to see that x_j = z for all j ≥ 1. If f' and f'' are both continuous at z, we can apply Lemma 18 and Lemma 16, which proves the theorem except that we do not know the location of z. But it is impossible to have z = t_− in the first case, as this would require x_j ∈ (t_−, t_+) for large j; hence z = t_+. The second case follows similarly.
When f'(z) and f''(z) are nonzero, which accounts for the most common types of zeros, Theorem 19 gives a fairly accurate description of the behavior of the zero sequence. If f'(z) is nonzero, but f''(z) changes sign at z, we have observed a zero sequence to alternate on both sides of z, just like Newton's method usually does in this situation.

The main use of Theorem 19 is in the next section where we consider the convergence rate of our method; the theorem will help us to establish a quadratic convergence rate.

6 Convergence rate
The next task is to analyse the convergence rate of the zero finding method. Our aim is to prove that it converges quadratically, just like Newton's method. As we shall see, this is true when f' and f'' are nonzero at the zero. The development follows the same idea as is usually employed to prove quadratic convergence of Newton's method, but we work with the blossom instead of the spline itself.

We start by making use of (15) to express the error x_1 − z in terms of the knots and B-spline coefficients.
Lemma 20. Let f be a spline in S_{d,t} with blossom F that has a zero at z, and let x_1 be given as in (17). Then

    x_1 - z = \frac{1}{d\,D_d F(t_{k+1}, \ldots, t_{k+d})} \Bigl[ \sum_{i=1}^{d-1} (t_{k+i} - z)^2 D_{i,d} F(t_{k+1}, \ldots, t_{k+d})
              + \sum_{i=2}^{d-1} \sum_{j=1}^{i-1} (t_{k+i} - z)(t_{k+j} - z) D_{i,j} F(t_{k+1}, \ldots, t_{k+i}, z, \ldots, z) \Bigr].
Proof. In this proof we use the shorthand notation F_k = F(t_{k+1}, ..., t_{k+d}). We start by subtracting the exact zero z from both sides of (18),

    x_1 - z = \frac{1}{d\,D_d F_k} \Bigl( \sum_{i=1}^{d} (t_{k+i} - z) D_d F_k - F_k \Bigr).    (24)
We add and subtract the linear term of the Taylor expansion (15), with y_i = z and a_i = t_{k+i} for i = 1, ..., d, inside the brackets; this part of the right-hand side then becomes

    - \sum_{i=1}^{d} (z - t_{k+i}) D_i F_k - F_k + \sum_{i=1}^{d} (z - t_{k+i}) D_i F_k - \sum_{i=1}^{d} (z - t_{k+i}) D_d F_k.
The last two sums in this expression can be simplified so that the total becomes

    - \sum_{i=1}^{d} (z - t_{k+i}) D_i F_k - F_k + \sum_{i=1}^{d-1} (z - t_{k+i})(t_{k+d} - t_{k+i}) D_{i,d} F_k.    (25)
We now make use of the Taylor expansion (15),

    f(x) = F(x, \ldots, x) = F_k + \sum_{i=1}^{d} (x - t_{k+i}) D_i F_k + \sum_{i=2}^{d} \sum_{j=1}^{i-1} (x - t_{k+i})(x - t_{k+j}) D_{i,j} F(t_{k+1}, \ldots, t_{k+i}, x, \ldots, x),
and set x equal to the zero z so that the left-hand side of this equation is
zero. We can then replace the first two terms in (25) with the quadratic error
term in this Taylor expansion. The total expression in (25) then becomes
    \sum_{i=2}^{d} \sum_{j=1}^{i-1} (z - t_{k+i})(z - t_{k+j}) D_{i,j} F(t_{k+1}, \ldots, t_{k+i}, z, \ldots, z) + \sum_{i=1}^{d-1} (z - t_{k+i})(t_{k+d} - t_{k+i}) D_{i,d} F_k.
The terms in the double sum corresponding to i = d can be combined with
the sum in the second line, and this yields
    \sum_{i=2}^{d-1} \sum_{j=1}^{i-1} (z - t_{k+i})(z - t_{k+j}) D_{i,j} F(t_{k+1}, \ldots, t_{k+i}, z, \ldots, z) + \sum_{i=1}^{d-1} (z - t_{k+i})^2 D_{i,d} F_k.
Replacing the terms inside the bracket in (24) with this expression gives the
required result.
We are now ready to show quadratic convergence to a simple zero.
Theorem 21. Let (x_j) be a zero sequence converging to a zero z and suppose that f'(z) ≠ 0. Then there is a constant C depending on f and d, but not on t, such that for sufficiently large j we have

    |x_{j+1} - z| \le C \max_{i=1,\ldots,d-1} |t^j_{k_j+i} - z|^2.
Proof. From Lemma 9 we have that the sequences of interior active knots (t^j_{k_j+1})_j, ..., (t^j_{k_j+d-1})_j all converge to z. Therefore, there is an ℓ such that for j ≥ ℓ, the denominator in Lemma 20 satisfies d D_d F(t^j_{k_j+1}, ..., t^j_{k_j+d}) > M for some M > 0 independent of t. Let j be an integer greater than ℓ and let δ_j = max_{i=1,...,d-1} |t^j_{k_j+i} - z|. Using (16) we can estimate the error from Lemma 20, with x_1 = x_{j+1}, as

    |x_{j+1} - z| \le \frac{\delta_j^2\, d(d-1) \max_{p,q} |D_{p,q} F|}{2M} \le C \delta_j^2,

where C = K d^2 \|D^2 f\|_\infty / (2M).
This result can be strengthened in the case where a zero sequence (x_j) converges monotonically to z.

Theorem 22. Let (x_j) be a zero sequence converging to a zero z, suppose that f'(z) and f''(z) are both nonzero, and set e_n = |x_n − z|. Then there is a constant C depending on f and d, but not on t, such that for sufficiently large j we have

    e_{j+1} \le C e_{j-d+2}^2.

Proof. From Theorem 19 we know that (x_j) is asymptotically monotone. There is therefore an ℓ such that for j ≥ ℓ we have max_{i=1,...,d-1} |t_{k_j+i} - z| = e_{j-d+2}. The result then follows from Theorem 21.
This result implies that if we sample the error in Algorithm 2 after every
d − 1 knot insertions, the resulting sequence converges quadratically to zero.
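The convergence order can be estimated numerically from an error sequence via the standard estimate p ≈ log(e_{j+1}/e_j)/log(e_j/e_{j−1}). A minimal sketch; the helper name and the synthetic, exactly quadratic error sequence are ours, not data from the examples below:

```python
import math

def estimated_orders(errors):
    """Estimate the convergence order p from consecutive errors using
    p ~ log(e[j+1]/e[j]) / log(e[j]/e[j-1])."""
    return [math.log(errors[j + 1] / errors[j]) / math.log(errors[j] / errors[j - 1])
            for j in range(1, len(errors) - 1)]

# Synthetic sequence with e_{j+1} = 0.5 * e_j**2, i.e. exact quadratic decay.
errs = [1e-1]
for _ in range(4):
    errs.append(0.5 * errs[-1] ** 2)

orders = estimated_orders(errs)  # every estimate is close to 2
```

Applying the same estimate to the error columns of Table 1, with our method sampled every d − 1 insertions, should exhibit the same behaviour for the three simple roots, and an order close to 1 for the double root.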
7 Stability
In this section we briefly discuss the stability properties of Algorithm 2. It is well known that large rounding errors may occur when a small value is computed from relatively large values. Computing zeros of functions falls into this category, as we need to compute values of the function near the zero, while the function is usually described by reasonably large parameters. For example, spline functions are usually given by reasonably large values of the knots and B-spline coefficients, but near a zero these numbers combine such that the result is small. It is therefore particularly important to keep an eye on rounding errors when computing zeros.
Our method consists of two parts where rounding errors may potentially cause problems, namely the computation of x_{j+1} in the first step of Algorithm 2 and the computation of the new B-spline coefficients in step 2. Let us consider each of these steps in turn.
The new estimate for the zero is given by the formula

    x_{j+1} = \bar t^{\,j}_{k_j} - \frac{c^j_{k_j} (t^j_{k_j+d} - t^j_{k_j})}{d\,(c^j_{k_j} - c^j_{k_j-1})},

which is in fact a convex combination of the two knot averages \bar t^{\,j}_{k_j-1} and \bar t^{\,j}_{k_j}, see (17). Recall that c^j_{k_j-1} and c^j_{k_j} have opposite signs, while t^j_{k_j} and t^j_{k_j+d} are usually well separated, so the second term on the right can usually be computed without much cancellation. This estimate x_{j+1} is then inserted as a new knot, and new coefficients are computed via (5) as a series of convex combinations. Convex combinations are generally well suited to floating-point computations, except when combining two numbers of opposite signs to obtain a number near zero. This can potentially happen when computing the new coefficient

    c^{j+1}_k = (1 - \mu_k) c^j_{k-1} + \mu_k c^j_k,

since we know that c^j_{k-1} and c^j_k have opposite signs. However, it turns out that in most cases the magnitude of one of the two coefficients tends to zero with j, whereas the other one remains bounded away from zero.
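The convex combination claim can be verified directly (our sketch; iteration superscripts dropped, k = k_j, and \bar t_k = (t_{k+1} + \cdots + t_{k+d})/d denoting the knot averages): since \bar t_k - \bar t_{k-1} = (t_{k+d} - t_k)/d, the update can be rewritten as

```latex
x_{j+1} = \bar t_k - \frac{c_k}{c_k - c_{k-1}}\bigl(\bar t_k - \bar t_{k-1}\bigr)
        = \frac{c_k}{c_k - c_{k-1}}\,\bar t_{k-1} + \frac{-c_{k-1}}{c_k - c_{k-1}}\,\bar t_k,
```

and both weights are nonnegative and sum to one precisely because c_{k-1} and c_k have opposite signs.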
Proposition 23. Let z be the first zero of f and suppose that f'(z) ≠ 0 and f''(z) ≠ 0. Then one of c^j_{k_j} and c^j_{k_j-1} will tend to zero, while the other will tend to f'(z)(t_+ − t_−)/d ≠ 0.

Proof. When the first two derivatives are nonzero, we know from Theorem 19 that t_− < t_+, and t^j_{k_j+1}, ..., t^j_{k_j+d-1} will all tend to either t_− or t_+. For this proof we assume that they tend to t_−; the other case is similar. Then lim_{j→∞} c^j_{k_j-1} = lim_{j→∞} F(t^j_{k_j}, ..., t^j_{k_j+d-1}) = f(z) = 0, while

    \lim_{j\to\infty} \frac{c^j_{k_j} - c^j_{k_j-1}}{t^j_{k_j+d} - t^j_{k_j}} = \lim_{j\to\infty} F'(t^j_{k_j+1}, \ldots, t^j_{k_j+d-1})/d = f'(z)/d.

Since t^j_{k_j+d} − t^j_{k_j} → t_+ − t_− and c^j_{k_j-1} → 0, we must have that c^j_{k_j} → f'(z)(t_+ − t_−)/d, which is nonzero.
This result ensures that the most critical convex combination usually
behaves nicely, so in most cases there should not be problems with numerical
stability. This corresponds well with our practical experience. However, as
with Newton’s method and many others, we must expect the numerical
performance to deteriorate when f 0 (z) becomes small.
8 Implementation and numerical examples
Our algorithm is very simple to implement and does not require any elaborate spline software. To illustrate this fact we provide pseudo code for an
algorithm to compute the smallest zero of a spline, returned in the variable
x. The knots t and the coefficients c are stored in vectors (indexed from
1). For efficiency the algorithm overwrites the old coefficients with the new
ones during knot insertion.
Pseudo code for Algorithm 1
// Connected spline of degree d
// with knots t and coefficients c given
if (c(1)==0) return t(1);
k=2;
for (it = 1; it<=max_iterations; it++) {
// Compute the index of the smallest zero
// of the control polygon
n = size(c);
while (k<=n AND c(k-1)*c(k)>0) k++;
if (k>n) return NO_ZERO;
// Find zero of control polygon and check convergence
x = knotAverage(t,d,k)
- c(k) * (t(k+d)-t(k))/(c(k)-c(k-1))/d;
xlist.append(x);
if ( converged(t,d,xlist) ) return x;
// Refine spline by Boehm's algorithm
mu = k;
while (x>=t(mu+1)) mu++;
c.append(c(n)); //Length of c increased by one
for (i=n; i>=mu+1; i--) c(i) = c(i-1);
for (i=mu ; i>=mu-d+1; i--) {
alpha = (x-t(i))/(t(i+d)-t(i));
c(i) = (1-alpha)*c(i-1) + alpha*c(i);
}
t.insert(mu+1,x);
}
// Max_iterations too small for convergence
This code will return an approximation to the leftmost root of the spline
unless the total number of allowed iterations max_iterations is too low
(or the tolerance is too small, see below). Note that it is assumed that the
spline is connected. In particular, this means that the first coefficient must
be nonzero.
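For readers who prefer a runnable version, here is a compact Python transcription of the pseudo code (a sketch, not the authors' implementation: 0-indexed arrays, and a simplified stopping test on consecutive estimates in place of the converged criterion discussed next; the name smallest_zero is ours):

```python
def smallest_zero(d, t, c, tol=1e-14, max_iter=100):
    """Smallest zero of the spline of degree d with knots t and B-spline
    coefficients c, or None if the control polygon never changes sign."""
    t, c = list(t), list(c)
    if c[0] == 0.0:
        return t[0]
    x_prev = None
    for _ in range(max_iter):
        # Index of the first sign change of the control coefficients.
        k = next((i for i in range(1, len(c)) if c[i - 1] * c[i] <= 0), None)
        if k is None:
            return None
        # Zero of the control polygon: vertex abscissae are the knot
        # averages, and consecutive averages differ by (t[k+d]-t[k])/d.
        tbar = sum(t[k + 1:k + d + 1]) / d
        x = tbar - c[k] * (t[k + d] - t[k]) / ((c[k] - c[k - 1]) * d)
        if x_prev is not None and abs(x - x_prev) <= tol * max(1.0, abs(x)):
            return x
        x_prev = x
        # Boehm's algorithm: insert x as a knot with t[mu] <= x < t[mu+1].
        mu = k
        while t[mu + 1] <= x:
            mu += 1
        new_c = []
        for i in range(len(c) + 1):
            if i <= mu - d:
                new_c.append(c[i])
            elif i <= mu:
                a = (x - t[i]) / (t[i + d] - t[i])
                new_c.append((1.0 - a) * c[i - 1] + a * c[i])
            else:
                new_c.append(c[i - 1])
        c = new_c
        t.insert(mu + 1, x)
    return x_prev

# Cubic Bezier function f(x) = 1 - 2(1-x)**3 has its zero at 1 - 2**(-1/3).
root = smallest_zero(3, [0, 0, 0, 0, 1, 1, 1, 1], [-1, 1, 1, 1])
```

On this example the computed estimates increase monotonically towards the exact zero, as Theorem 19 predicts for f'(z)f''(z) < 0.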
The function converged returns true when the last inserted knot x equals t_{k+d} (in which case the spline has become disconnected at x, see Lemma 3), or when the sequence of computed zeros of the control polygons is deemed to converge in a traditional sense. Our specific criterion for convergence is (after at least d knots have been inserted)

    \frac{\max_{i,j} |x_i - x_j|}{\max(|t_k|, |t_{k+d}|)} < \epsilon,

where the maximum is taken over the d last inserted knots and ε > 0 is a small user-defined constant. This expression measures the relative difference of the last d knots, and ε = 10^{-15} is a good choice when the computations are performed in double precision arithmetic.
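This stopping test is straightforward to implement; a sketch (the function and argument names are ours, assuming xlist holds the computed estimates and t_k, t_kd are the current knots t_k and t_{k+d}):

```python
def converged(xlist, t_k, t_kd, d, eps=1e-15):
    """Relative-difference test on the last d inserted knots:
    max_{i,j} |x_i - x_j| / max(|t_k|, |t_{k+d}|) < eps."""
    if len(xlist) < d:
        return False                 # need at least d inserted knots
    last = xlist[-d:]
    scale = max(abs(t_k), abs(t_kd))
    if scale == 0.0:
        scale = 1.0                  # guard against a zero scale (our addition)
    return (max(last) - min(last)) < eps * scale
```

Note that max_{i,j} |x_i − x_j| over the last d knots is simply the difference between their maximum and minimum.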
In principle, our method should always converge, so there should be no need for a bound on the number of iterations. However, such a bound is always a good safety net, as long as the maximum number of iterations is chosen sufficiently large.
There is of course a similar algorithm for computing the largest zero. If one needs to compute all zeros of a spline, this can be done sequentially by first computing the smallest zero, splitting the spline at that point, computing the second smallest zero, and so on. Alternatively, the computations can be done in parallel by inserting all the zeros of the control polygon in each iteration. We leave the details to the interested reader.
A spline with d + 1 knots at both ends and without interior knots is
usually referred to as a polynomial in Bernstein form or a Bézier polynomial.
In this way, the algorithm can obviously be used for computing real zeros of
polynomials.
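A polynomial given in monomial form must first be converted to Bernstein coefficients before the algorithm applies. A minimal sketch using the classical identity b_j = Σ_{i≤j} a_i C(j,i)/C(n,i) on [0, 1] (the function name is ours):

```python
from math import comb

def monomial_to_bernstein(a):
    """Bernstein (Bezier) coefficients on [0, 1] of p(x) = sum_i a[i] * x**i,
    computed via b_j = sum_i a[i] * C(j, i) / C(n, i) with n = len(a) - 1."""
    n = len(a) - 1
    return [sum(a[i] * comb(j, i) / comb(n, i) for i in range(j + 1))
            for j in range(n + 1)]

# p(x) = -1 + 6x - 6x^2 + 2x^3 = 1 - 2(1-x)^3 has Bernstein
# coefficients [-1, 1, 1, 1] on [0, 1].
coeffs = monomial_to_bernstein([-1, 6, -6, 2])
```

Pairing these coefficients with (d + 1)-fold knots at each end, e.g. the knot vector [0, 0, 0, 0, 1, 1, 1, 1] for a cubic, gives a spline representation on which the zero finder can operate.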
Before considering some examples and comparing our method with other
methods, we need to have a rough idea of the complexity of the method. To
determine the correct segment of the control polygon requires a search the first time; thereafter, choosing the right segment only involves one comparison. Computing the new estimate for the zero is also very quick as it only
involves one statement. What takes time is computing the new B-spline
coefficients after the new knot has been inserted. This usually requires d
convex combinations. As we saw above, we often have quadratic convergence if we sample the error every d − 1 iterations, and the work involved in d − 1 knot insertions is d(d − 1) convex combinations.
We have estimated the errors e_j = |x_j − z| of our algorithm for the four examples shown in Figure 2. The first three have simple roots, while the last has a double root. In Table 1 we have compared the errors produced by our method with those produced by Newton's method and the natural generalisation of the method in  (see below) to B-splines. We have compared every (d − 1)th step in our method with every iteration of the other methods. Quadratic convergence is confirmed for all the methods for the first three examples, whereas all methods only have linear convergence for the double zero. (As for Newton's method, we have observed higher than second order convergence in cases with f'(z) ≠ 0 and f''(z) = 0.)
We used the smallest zero of the control polygon as starting value for
Newton’s method. Note that it is not hard to find examples where this
starting point will make Newton’s method diverge. When using Newton’s
method to compute zeros of spline functions we have to evaluate f (x) and
f 0 (x) in every step. With careful coding, this requires essentially the same
number of operations as inserting d knots at a single point when f is a spline,
or roughly d(d − 1)/2 convex combinations. On the other hand, the method
in  is based on inserting d knots at the zeros of the control polygon, in effect
splitting the curve into two pieces by Bézier subdivision. The complexity
of this is the same as for Newton's method. Although no convergence rate was given in that paper, it is reasonable to expect a quadratic convergence rate (as is indicated by the numbers in Table 1). In fact, it should be quite straightforward to prove this with the technique developed in this paper.
Bézier clipping , which is a method often used for ray-tracing, is very
similar to Rockwood’s method as the curve is split by inserting d knots
where the convex hull of the control polygon of the curve intersects the
x-axis.
9 Conclusion
We have proposed and analysed a simple zero finding algorithm that exploits the close relationship between a spline and its control polygon. The main advantage over a method like Newton's is that it is unconditionally convergent, at no extra cost and with the same convergence order. Though the theoretical rate of convergence is comparable with other methods, it appears that our method usually converges more quickly in practice.
Figure 2. Our test examples in reading order: cubic Bézier function, quintic spline, degree 25 spline, cubic polynomial with a double root.

Quadratic convergence as guaranteed by Theorems 21 and 22 only holds for simple zeros; for multiple zeros we cannot expect more than first order convergence. However, it should be possible to adjust our method to
yield second order convergence even for higher order zeros, just like for
Newton's method, see . If it is known that a zero z has multiplicity m, another possibility is to apply Algorithm 2 to f^{(m−1)}; this yields a quadratically convergent algorithm for computing a zero of multiplicity m. As with the adjustments to Newton's method, this requires that the multiplicity is known. One possibility is to devise a method that detects the multiplicity and then applies Algorithm 2 to the appropriate derivative. The ultimate goal would of course be a method that is quadratically convergent irrespective of the multiplicity. This is a topic for future research.
We believe that methods similar to the one presented here can be applied to other systems of functions that have a control polygon, that can be subdivided, and that exhibit a variation diminishing property. Examples of such systems are subdivision curves, Chebyshev splines and L-splines, see e.g. .
Another topic for future research is to extend our algorithm to a method
for computing zero sets of tensor product spline surfaces. This is by no
means obvious as the idea of inserting a zero of the control polygon as a
knot does not carry over to the tensor product setting.

Example       Method    E0       E1       E2        E3        E4        E5
Cubic Bézier, Our       1.41e-1  1.31e-2  1.46e-4   1.70e-8   2.30e-16  4.22e-32
simple root   Rockwood  1.41e-1  2.05e-2  8.71e-4   1.80e-6   7.72e-12  1.42e-22
              Newton    1.41e-1  2.05e-2  8.71e-4   1.80e-6   7.72e-12  1.42e-22
Quintic,      Our       1.34e-1  3.06e-3  1.24e-6   1.53e-13  2.21e-27  4.54e-55
simple root   Rockwood  1.34e-1  5.47e-3  2.72e-5   6.87e-10  4.39e-19  1.80e-37
              Newton    1.34e-1  5.06e-3  2.33e-5   5.04e-10  2.36e-19  5.21e-38
Degree 25     Our       3.79e-3  2.64e-5  7.72e-11  2.60e-22  1.48e-45  5.10e-71
spline,       Rockwood  3.79e-3  3.50e-5  5.49e-9   1.35e-16  8.22e-32  8.22e-32
simple root   Newton    3.79e-3  7.64e-5  2.62e-8   3.08e-15  4.25e-29  8.13e-57
Cubic         Our       5.19e-1  2.68e-1  1.37e-1   6.95e-2   3.52e-2   1.78e-2
polynomial,   Rockwood  5.19e-1  3.34e-1  2.23e-1   1.49e-1   9.90e-2   6.60e-2
double root   Newton    5.19e-1  3.46e-1  2.31e-1   1.54e-1   1.03e-1   6.84e-2

Table 1. The absolute errors |x_j − z| for our method (inserting d − 1 knots), Rockwood's method and Newton's method for the four examples. The computations have been performed in extended arithmetic in order to include more iterations.
References

[1] W. Boehm. Inserting new knots into B-spline curves. Computer Aided Design, 12:199–201, 1980.

[2] G. Dahlquist and Å. Björck. Numerical Methods. Wiley-Interscience, 1980.

[3] T. A. Grandine. Computing zeroes of spline functions. Computer Aided Geometric Design, 6:129–136, 1989.

[4] T. Lyche and K. Mørken. Spline Methods, draft. Department of Informatics, University of Oslo, http://heim.ifi.uio.no/~knutm/komp04.pdf, 2004.

[5] T. Nishita, T. W. Sederberg, and M. Kakimoto. Ray tracing trimmed rational surface patches. In Proceedings of the 17th annual conference on Computer graphics and interactive techniques, pages 337–345. ACM Press, 1990.

[6] A. Rockwood, K. Heaton, and T. Davis. Real-time rendering of trimmed surfaces. Computer Graphics, 23(3), 1989.

[7] L. L. Schumaker. Spline functions: Basic theory. Prentice Hall, 1974.

[8] M. R. Spencer. Polynomial real root finding in Bernstein form. PhD thesis, Brigham Young University, 1994.
Knut Mørken, Department of Informatics and Center of Mathematics for
Applications, P.O. Box 1053, Blindern, 0316 Oslo, Norway
Email: [email protected]
Martin Reimers, Center of Mathematics for Applications, P.O. Box 1053,
Blindern, 0316 Oslo, Norway
Email: [email protected]
```