Topics in Curve Intersection and Barycentric Interpolation

Christian Schulz

Dissertation presented for the degree of Philosophiæ Doctor

Centre of Mathematics for Applications & Department of Informatics
University of Oslo

2009
Acknowledgements
This thesis has been assembled with the help of many people. First of all I would like to
thank my main supervisor Knut Mørken. Without his knowledge, guidance and encouragement I would have never made it thus far. I am also grateful to my second supervisor
Michael Floater, who, with his sharp eye and thoroughness, has been a valued teacher in
the art and craft of scientific research. My third supervisor Martin Reimers has been the
source of much appreciated discussions and helpful comments. Tom Lyche gave me plenty
of freedom in my tutoring duties and always willingly answered my questions. I would also
like to thank my diploma thesis supervisor Günther Greiner for his support in my quest for
a Ph.D. position abroad.
My research was funded by the Department of Informatics and I was hosted by the
Centre of Mathematics for Applications (CMA) which provided an international, interdisciplinary and stimulating environment excellently administrated by Helge Galdal. I shared my
office first with Martin Groth and later with Eivind Lyche Melvær who was always open for
a discussion, whatever the topic. My fellow Ph.D. students Johan Seland and Erik Christopher Dyken never said no to discussing my questions about implementation details, tips,
tricks and hacks. I am also grateful to Paul Kettler for proof-reading my English. Atgeirr
Flø Rasmussen welcomed me like an old friend when I first arrived in Oslo. I am thankful to
the Ph.D. students Agnieszka, Andrea, Asma, Franz, Georg, Linda, Marie, Simen, Solveig
and Yeliz, and the Post-docs Claire, Jiri and Sidhartha for creating a social environment at
the CMA.
Life consists not only of study and research. Thanks to my flatmates in Kringsjå for
creating a home. I spent many happy hours in the courses, parties and events organized by
Oslo’s student dancing group, OSI Dans, making friends and practicing my Norwegian.
Sabine Büttner has been a good friend for several years, just as Rainer Ostasch, who
both always have an open ear for my small and big problems.
Last, but not least, I would like to thank my brother and my parents, who have always
supported and encouraged me in my decisions, for which I am very grateful.
Christian Schulz, Oslo, June 2009
Contents

I  Introduction  1

1  Motivation  3

2  Background for curve intersection  5
   2.1  Functions and parametric curves  5
        2.1.1  Polynomials  7
        2.1.2  Piecewise polynomials  10
        2.1.3  Rational functions and curves  13
   2.2  Numerical methods  14
        2.2.1  Stability  14
        2.2.2  Convergence  15
   2.3  Root finding  16
   2.4  Curve intersection in the plane  19

3  Interpolation  21
   3.1  Univariate  21
   3.2  Multivariate  24
   3.3  From barycentric coordinates to transfinite interpolation  27
        3.3.1  Barycentric coordinates  27
        3.3.2  Transfinite Lagrange interpolation  30
        3.3.3  Transfinite Hermite interpolation  32

4  Overview of papers  35

II  Curve intersection in the plane  37

5  Bézier clipping is quadratically convergent  39
   Christian Schulz; Computer Aided Geometric Design 2009
   5.1  Introduction  39
   5.2  Bézier Clipping  40
   5.3  Background Material  42
   5.4  Convergence Rate  42
   5.5  Arbitrary Fat Line Rules  47
   5.6  Fat Line Rules Providing Quadratic Convergence  50
   5.7  Numerical Examples  53

6  Computing intersections of planar spline curves using knot insertion  57
   K. Mørken, M. Reimers, Chr. Schulz; Computer Aided Geometric Design 2009
   6.1  Introduction  57
   6.2  Basic Notation  59
   6.3  The Intersection Algorithm  60
   6.4  Analysis  62
        6.4.1  Background Material  63
        6.4.2  Basic Results  65
        6.4.3  Accumulation Points  68
        6.4.4  Convergence  68
        6.4.5  Convergence Rate  72
   6.5  Examples  75
   6.6  Conclusion and Future Work  81

III  Transfinite & barycentric interpolation  83

7  PRM: Hermite interpolation on arbitrary domains  85
   Michael S. Floater and Christian Schulz; Computer Graphics Forum 2008
   7.1  Introduction  85
   7.2  A new look at mean value interpolation  86
        7.2.1  Convex domains  87
        7.2.2  Non-convex domains  88
   7.3  Hermite interpolation  89
   7.4  Boundary integrals  94
   7.5  Cubic precision  94
   7.6  Numerical Examples  96

8  PRM: Existence, uniqueness and polynomial precision  101
   Christian Schulz
   8.1  Introduction  101
   8.2  Univariate polynomial  102
   8.3  Existence and uniqueness  105
   8.4  Polynomial precision  109
   8.5  Boundary integrals  110
   8.6  Numerical examples  111
   8.7  Future Work  116

9  Rational Hermite interpolation without poles  117
   Michael S. Floater and Christian Schulz
   9.1  Introduction  117
        9.1.1  Lagrange interpolation  117
        9.1.2  Hermite interpolation  119
        9.1.3  Contribution  121
   9.2  A simple Berrut Hermite interpolant  121
   9.3  The Hermite Floater-Hormann interpolant  123
        9.3.1  The barycentric form  124
        9.3.2  Properties  126
        9.3.3  Some explicit weights  128
   9.4  Univariate pointwise radial minimization  129
        9.4.1  A barycentric-like form  130
        9.4.2  Unique and free of poles  131
        9.4.3  Interpolation property  132
        9.4.4  Precision  133
   9.5  Numerical examples  133

Bibliography  141
Part I
Introduction
1 Motivation
The use of digital geometric models such as curves, surfaces or volumes has become increasingly common in many scientific areas as well as in industrial applications. This has naturally led to a greater variety of model types, combined with a need for more powerful and robust tools to create and manipulate such models. Meshes, especially two- and three-dimensional ones like triangle, quad or tetrahedral meshes, are essential for solving partial differential equations over non-trivial domains. Subdivision surfaces are popular in animation (Geri's Game from 1997 was Pixar's first film using them) and in the graphics of modern computer games. In the world of computer aided (geometric) design (CAD/CAGD) of, for example, cars, ships or other mechanical parts, geometric models are described by mathematical formulas, as this provides an exact representation everywhere and in addition can be used to make the model fulfil a prescribed continuity and/or other desired properties. The present thesis belongs to the field of CAGD and deals with such mathematically defined geometric models. Among these formulations, parametric representations provide a relatively easy way of modelling and visualizing and are thus commonly used in the CAD/CAGD community.
The richness and modelling power of mathematical equations can, on the other hand, pose difficulties for the designer of a model. It would be quite unintuitive and difficult to design the complete body of a car, with smooth transitions between its different parts, if one had to come up with the corresponding equations by hand. Thus tools for the creation and manipulation of such geometric models are necessary. In other words, methods and algorithms are needed that generate and modify the mathematical formulas which in turn define the model, so that it suits the wishes of the designer. One example of such a tool is letting the user specify so-called control points, which then define the model in a predictable and intuitive way. Another possibility is to start with some predefined shapes and create the wanted model by applying a series of modifications. As a third example, one can have a cloud of points which either shall lie on the desired surface or approximately describe it; by either interpolation or approximation one can then construct a mathematical equation for the geometric model. In the task of reverse engineering, point clouds with a
large number of points are generated by scanning a real-world object. Most such tools for generation and modification will themselves consist of several smaller problems and tasks which need to be solved for the tool to work. Hence much research and development in CAGD focuses on solving those 'simple' problems in an efficient, stable and robust way.
One common and important task is to compute the intersections of different geometric
objects or models. Finding the intersections of two parametric curves in the plane is a
fundamental example of this problem, which is important in itself but also as a part of more
elaborate algorithms for other, more advanced tasks. Different approaches to this problem
have been developed over time. For polynomial curves, a very important class of parametric
curves, an algorithm called Bézier clipping was developed in 1990, which provides the
desired properties of stability and robustness. The observed speed of this method is due to a property called quadratic convergence, which can be seen in practice but had not been proved. In one part of this thesis we analyse Bézier clipping and prove this observed rate theoretically. Another important class of parametric curves is that of piecewise
polynomial curves, called spline curves. We will provide a new algorithm for computing
intersections between two such spline curves when represented using so called B-spline
basis functions, which are very widely used in CAGD. The algorithm tries to make specific
use of properties related to this representation.
Another fundamental problem is the task of interpolation, where one wants to find a
curve, surface or other object which passes through a set of given data points. Sometimes
derivatives at those points are also specified in order to describe how the object should pass through them, yielding Hermite interpolation. The number of points can be finite, but possibly huge, or infinite. In the latter case one wants to interpolate some given geometric objects like curves or surfaces or, more generally, some boundary, or to interpolate values and possibly derivatives given on these objects. The resulting interpolation problem
is also called transfinite interpolation. As part of this thesis we will propose a new method
for transfinite Hermite interpolation that provides a natural extension of the recently developed method of transfinite mean value interpolation. We will also discuss univariate rational
barycentric Hermite interpolation, which is a special, stable and robust way of specifying a
rational function interpolating given data points and derivatives.
This thesis is organized in the following way. Part I provides the necessary background
material and an introduction to the topics of the thesis. Although this first part aims to be self-contained, a certain basic knowledge of calculus and linear algebra is assumed. The
papers which form the actual contribution of this thesis are presented in Parts II and III.
Chapter 2 discusses Bézier and B-spline curves, numerical methods and the connection between root finding and computing curve intersections. Interpolation in its different
forms is the topic of Chapter 3, and a short overview of the papers in this thesis is given in
Chapter 4.
2 Background for curve intersection
The intersection algorithms discussed in Part II work on parametric curves and lead to convergent iterative methods. The latter means that they repeatedly compute an approximation
whose distance to the real intersection will become arbitrarily small. This chapter introduces the types of functions and curves used in Parts II and III, as well as different ways
of representing them. In addition we give a short introduction to numerical and iterative
methods and some of their properties. The task of finding zeros of a function is closely
related to the problem of locating the intersections of two curves. After introducing some
methods for finding zeros of a function, the chapter ends by briefly relating these to the
curve intersection problem.
2.1 Functions and parametric curves
In Part II we will use real functions and parametric curves. Let us therefore briefly recapitulate the difference between these two objects, as well as the motivation for using parametric curves from a modelling point of view.
A real function f is defined as the mapping of a domain D into the real numbers, f : D → R. We are only interested in real domains, so D ⊆ R. The corresponding set of two-dimensional points (x, f(x)) ∈ R^2 is the graph of f, as illustrated in the left part of Figure 2.1 for the function f(x) = √(1 − x^2). By definition the graph has exactly one y-value for every x-value, so a different construction is needed in order to represent a full circle.
A planar parametric curve f : D → R^2 is defined as the ordered pair of two functions x : D → R and y : D → R. Here D is called the parameter domain and the image of the curve is the set {f(t) = (x(t), y(t)) ∈ R^2 | t ∈ D}, where t is the parameter. Observe that the image is not considered to be the curve. The reason for this becomes clear by realizing that two different curves can have the same image, as illustrated in Figure 2.1 on the right. The concept of a parametric curve is applicable for curves in any dimension, f : D → R^m,
Figure 2.1: Left: Graph of the function f(x) = √(1 − x^2). Right: The blue line is the image of the parametric curve f(t) = (cos(2tπ), sin(2tπ)), the red line the image of g(t) = (cos(2t^2π), sin(2t^2π)), where t ∈ [0, 1] for both. Clearly the images are identical. The filled blue and outlined red circles illustrate the corresponding points for t = 0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6.
where f_i : D → R is the i-th function defining the i-th component of the point f(t).

For a function f the tangent T at a point x_0 is defined as the line having the same value and slope as f at this point, T(x) = f(x_0) + (x − x_0)f′(x_0). In most cases this can directly be transformed into the parametric setting, where T(s) = f(t_0) + (s − t_0)f′(t_0), with s ∈ R, is the tangent to f at t = t_0. However, if f′(t_0) = 0, then the tangent at t_0 is not defined. Such points are called singular points of f. An often desired property when dealing with curves is the absence of such singular points. We say that a curve f : D → R^m is regular if f′(t) ≠ 0 for all t ∈ D. The derivatives of a curve can also be used to classify curves into differentiability classes. A curve is called C^k-continuous if f has a well defined and continuous i-th derivative for i = 0, . . . , k. A piecewise C^k-continuous curve f is continuous, and the parameter domain consists of a finite number of sub-intervals such that f is C^k-continuous over each of them. The curve (t, |t|) with t ∈ R, for example, is only C^0 but piecewise C^∞. More advanced information about curves can be found in basic calculus books, e.g. [1], and in books on differential geometry of curves [19, 62].
Several classes of functions exist, polynomials being probably the best known. Correspondingly, parametric curves can be classified according to the type of functions used for
their components. In addition, the way in which each component function is represented
can provide some additional features or knowledge mathematically as well as in a design
context. In the remaining part of this section we discuss the different function types used
in this thesis, introduce some different ways of representing them, consider their properties,
and how these carry over to parametric curves.
2.1.1 Polynomials
Polynomials are a well known class of functions and popular for use in a computer as they
can be evaluated by the basic operations of addition, subtraction and multiplication. Moreover, the Weierstrass approximation theorem states that over a closed interval any continuous function can be approximated arbitrarily closely by a polynomial of sufficiently high
degree [42]. Polynomials can be represented as the weighted sum of basis polynomials,
p : R → R,

    p(x) = ∑_{i=0}^{n} c_i b_i^n(x),
with n being the degree of the polynomial and c_i, i = 0, . . . , n, its coefficients. One set of basis polynomials is given by the monomial basis

    b_i^n(x) = x^i,    p(x) = c_n x^n + c_{n−1} x^{n−1} + . . . + c_1 x + c_0.
This representation can be evaluated by straightforward evaluation of the basis functions and weighted summation, or more efficiently by the Horner scheme [76]. However, in a computer such an evaluation consists of several steps: first the values x, x^2, . . . , x^n are computed, then they are multiplied with the coefficients and finally added together. Each of these steps yields an intermediate result. For the monomial basis such intermediate results can differ greatly in magnitude, which in turn can lead to numerical instability in certain situations (see Section 2.2). Moreover, the coefficients contain little geometric information about the polynomial (it is difficult to tell from the coefficients how the graph of the polynomial 'looks').
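To make the evaluation order concrete, the Horner scheme rewrites p(x) = c_0 + x(c_1 + x(c_2 + · · ·)). A minimal sketch (the function name and coefficient ordering are our own choices, not the thesis's):

```python
def horner(coeffs, x):
    """Evaluate p(x) = coeffs[0] + coeffs[1]*x + ... + coeffs[n]*x**n.

    Horner's scheme uses n multiplications and n additions and never
    forms the large intermediate powers x, x**2, ..., x**n explicitly.
    """
    result = 0.0
    for c in reversed(coeffs):
        result = result * x + c
    return result
```

Each loop step produces one intermediate result of moderate size, in contrast to computing the powers x^i separately and summing afterwards.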
A different basis is given by the Lagrange basis polynomials. Let x_0 < x_1 < . . . < x_n ∈ R be arbitrary, but fixed. Then

    b_i^n(x) = L_i(x) = ∏_{j=0, j≠i}^{n} (x − x_j)/(x_i − x_j).    (2.1)
This basis has the advantage that p(x_i) = c_i, but it is often associated with numerical instability and high evaluation costs. However, when written differently, via the so-called barycentric formula (of the first or second form), numerical stability can be improved significantly and evaluation costs reduced. This is discussed in more detail in Chapter 9.
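As an illustration of the idea, the following sketch evaluates the Lagrange interpolant via the second barycentric form (illustrative code under our own naming, not from the thesis):

```python
def barycentric_weights(nodes):
    # w_i = 1 / prod_{j != i} (x_i - x_j), precomputed once per node set
    w = []
    for i, xi in enumerate(nodes):
        prod = 1.0
        for j, xj in enumerate(nodes):
            if j != i:
                prod *= xi - xj
        w.append(1.0 / prod)
    return w


def barycentric_eval(nodes, values, weights, x):
    # second barycentric form:
    # p(x) = sum_i (w_i f_i / (x - x_i)) / sum_i (w_i / (x - x_i))
    num = den = 0.0
    for xi, fi, wi in zip(nodes, values, weights):
        if x == xi:  # x hits a node: the interpolant returns the data value
            return fi
        s = wi / (x - xi)
        num += s * fi
        den += s
    return num / den
```

After the O(n^2) weight precomputation, every evaluation costs only O(n) operations, and no products of large magnitude are formed.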
Problems with numerical instability often occur when intermediate results have a high order of magnitude while the initial data and the final result have not. Using convex combinations avoids this problem. An affine combination of the values x_1, . . . , x_k is given by

    λ = a_1 x_1 + a_2 x_2 + . . . + a_k x_k,    ∑_{i=1}^{k} a_i = 1,

and a convex combination is an affine combination where all a_i ≥ 0, i = 1, . . . , k.
Let α < β ∈ R be arbitrary, but fixed. The Bernstein basis polynomials
Figure 2.2: Left: Bernstein basis polynomials for degree 3, α = 0 and β = 1. Right: Illustration of the de Casteljau algorithm for degree 3, α = 0 and β = 1.
    b_i^n(x) = B^n_{i,[α,β]}(x) = (1/(β − α)^n) (n choose i) (β − x)^{n−i} (x − α)^i
enable us to evaluate p at any point x in the closed interval [α, β] by using only convex
combinations. An evaluation at points outside this interval will contain affine combinations.
Hence this basis can be considered to be numerically stable. The corresponding algorithm is
called the de Casteljau algorithm and is one of the most famous and fundamental methods
in CAGD. It was developed by de Casteljau around 1959–1963 during his work for Citroën, which kept it secret for a long time [24]. Set c_i^0(x) = c_i, i = 0, . . . , n, and let

    c_i^r(x) = ((β − x)/(β − α)) c_i^{r−1}(x) + ((x − α)/(β − α)) c_{i+1}^{r−1}(x),    r = 1, . . . , n,  i = 0, . . . , n − r.

Then p(x) = c_0^n(x). The de Casteljau algorithm describes a pyramid of convex combinations, as illustrated in Figure 2.2, in which α = 0 and β = 1, which is a typical choice in practice.
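The pyramid of convex combinations translates directly into code. A minimal sketch for scalar coefficients (names and interface are ours):

```python
def de_casteljau(coeffs, x, alpha=0.0, beta=1.0):
    """Evaluate the polynomial with Bernstein coefficients `coeffs` on
    [alpha, beta] at x. For x in [alpha, beta] every step is a convex
    combination, which is why the scheme is numerically stable."""
    c = list(coeffs)
    u = (x - alpha) / (beta - alpha)
    n = len(c) - 1
    for r in range(1, n + 1):        # levels of the pyramid
        for i in range(n - r + 1):
            c[i] = (1.0 - u) * c[i] + u * c[i + 1]
    return c[0]                      # c_0^n(x) = p(x)
```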
The Bernstein basis polynomials have certain useful properties. They form a partition of unity, ∑_{i=0}^{n} B^n_{i,[α,β]}(x) = 1, and for any x in the open interval (α, β) they are positive, B^n_{i,[α,β]}(x) > 0 for all i. Moreover we have

    x = ∑_{i=0}^{n} ((n − i)α + iβ)/n · B^n_{i,[α,β]}(x).
The points (((n − i)α + iβ)/n, c_i) are called the control points of the polynomial p, and the linear interpolant of these points gives its control polygon. Figure 2.3 shows an example.
The control polygon has several desirable properties related to the polynomial. First it
has tangential end point interpolation. This means that the end points of the control polygon
are interpolated and that the lines through the first and last two control points are the tangents
Figure 2.3: Left: A polynomial in Bézier representation. Right: A Bézier curve with the de
Casteljau algorithm illustrated.
at the end points. This can be easily observed from the derivative of the polynomial

    p′(x) = (n/(β − α)) ∑_{i=0}^{n−1} (c_{i+1} − c_i) B^{n−1}_{i,[α,β]}(x).
Moreover the polynomial has the convex hull property. Its graph is contained in the convex
hull of the control points, which is the set of all points that can be expressed as a convex
combination of the control points. Finally the polynomial has the variation diminishing
property, which says that the number of intersections of any line with the polynomial is less than or equal to the number of intersections of the line with the control polygon.
A polynomial parametric curve p : D → R^m is a curve where all component functions p_k : D → R, k = 1, . . . , m, are polynomials. Due to their favourable properties, we will only consider component functions represented using Bernstein basis polynomials B^n_{i,[α,β]}(x) on an interval [α, β]. A curve in this representation is called a Bézier curve after Pierre Bézier, who worked at Renault and, like de Casteljau, was one of the first to use polynomial curves for industrial design [24]. As all component functions use the same basis functions, we can simply use the following vector notation for Bézier curves,

    p(t) = ∑_{i=0}^{n} c_i B^n_{i,[α,β]}(t).
The vectors c_i are the control points of the Bézier curve and the control polygon of the curve is again the linear interpolant of the control points. The control polygon of a Bézier curve has the same properties as the control polygon of a polynomial, namely the convex hull and variation diminishing property and tangential end point interpolation. The latter has to be handled with a bit of care here, as the actual derivative of p at the end points is a scaling of the vectors defined by the four outermost points. Applying the de Casteljau algorithm in the curve setting yields repeated linear interpolation, as illustrated in Figure 2.3. Moreover,
for any t* ∈ [α, β], the de Casteljau algorithm provides a Bézier representation of the parts of the curve for t ∈ [α, t*] and for t ∈ [t*, β]. This is called subdividing the curve; the control points of the left part are ĉ_r = c_0^r(t*) and those of the right part are c̃_r = c_r^{n−r}(t*) for r = 0, . . . , n.
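Subdivision simply records the points c_0^r(t*) and c_{n−r}^r(t*) while the de Casteljau recurrence runs. A sketch for scalar coefficients (illustrative code, not from the thesis; for a curve the same combinations are applied to each coordinate):

```python
def subdivide(coeffs, t, alpha=0.0, beta=1.0):
    """Split a degree-n Bezier segment on [alpha, beta] at t.

    Returns (left, right): the Bernstein coefficients of the same
    polynomial restricted to [alpha, t] and to [t, beta]."""
    c = list(coeffs)
    u = (t - alpha) / (beta - alpha)
    n = len(c) - 1
    left = [c[0]]                    # collects c_0^r(t)
    right = [c[n]]                   # collects c_{n-r}^r(t)
    for r in range(1, n + 1):
        for i in range(n - r + 1):
            c[i] = (1.0 - u) * c[i] + u * c[i + 1]
        left.append(c[0])
        right.append(c[n - r])
    right.reverse()                  # so that right[r] = c_r^{n-r}(t)
    return left, right
```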
Polynomials, the coefficients of a polynomial in Bézier representation, and the control
points of a Bézier curve can be described by a multivariate function, called the Blossom.
Blossoming was discovered by Ramshaw [59], but according to [24] it was already known to de Casteljau [18]. A multivariate function f(x_1, . . . , x_n) ∈ R is called affine if for each x_i, i = 1, . . . , n, it can be expressed as

    f(x_1, . . . , x_n) = a(x_1, . . . , x_{i−1}, x_{i+1}, . . . , x_n) x_i + b(x_1, . . . , x_{i−1}, x_{i+1}, . . . , x_n),

with a, b : R^{n−1} → R. The blossom B[p](x_1, . . . , x_n) of a polynomial is such an affine function and it is uniquely defined by the properties
• Symmetry. B[p](x_1, . . . , x_n) = B[p](x_{π_1}, . . . , x_{π_n}) for any permutation π_1, . . . , π_n of the integers 1, . . . , n.

• Multi-affine. B[p](. . . , (1 − λ)x + λy, . . .) = (1 − λ)B[p](. . . , x, . . .) + λB[p](. . . , y, . . .).

• Diagonal property. B[p](x, . . . , x) = p(x).
For example the blossom of the polynomial p(x) = x^3 + x is B[p](x_1, x_2, x_3) = x_1 x_2 x_3 + (1/3)(x_1 + x_2 + x_3). The control points of a Bézier curve p can be expressed as

    c_i = B[p](α, . . . , α, β, . . . , β),

with α repeated n − i times and β repeated i times.
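A well-known consequence of these properties is that the blossom can be computed with the de Casteljau recurrence by using a different argument at each level. A sketch under our own naming (illustrative, not from the thesis):

```python
def blossom(coeffs, xs, alpha=0.0, beta=1.0):
    """B[p](x_1, ..., x_n) for p given by its Bernstein coefficients on
    [alpha, beta]: run de Casteljau, but use argument xs[r-1] at level r."""
    c = list(coeffs)
    n = len(c) - 1
    assert len(xs) == n, "the blossom of a degree-n polynomial takes n arguments"
    for r in range(1, n + 1):
        u = (xs[r - 1] - alpha) / (beta - alpha)
        for i in range(n - r + 1):
            c[i] = (1.0 - u) * c[i] + u * c[i + 1]
    return c[0]
```

With all arguments equal, the diagonal property reduces this to ordinary de Casteljau evaluation. For the text's example p(x) = x^3 + x on [0, 1], the control-point formula gives the Bernstein coefficients (0, 1/3, 2/3, 2).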
Bézier curves are a classical element in CAGD. For a more detailed introduction and
more information about them, we refer the reader to [23].
2.1.2 Piecewise polynomials
Despite their approximation power, polynomials have limitations when modelling more
complicated curves. The problem is the high degree necessary, which can lead to numerical
stability problems and also to several unwanted ’wiggles’ in the curve. Moreover, changing
one control point of a Bézier curve will change the whole curve, although the difference will
be small far away from the changed control point. Using piecewise polynomial functions
and curves provides a relatively easy solution to these problems.
Piecewise polynomials, also called splines, can be specified by simply giving the polynomial piece for each sub-interval. However, the problem is then how to ensure a certain desired level of continuity between those pieces. A simple yet flexible way of automatically ensuring the wanted levels of continuity is the use of B-splines, sometimes also called B-spline basis functions. Let B_{i,d,t}(x) be the i-th B-spline of degree d ≥ 0 with respect to the knot vector t = (t_1, t_2, . . . , t_{n+d+1}) and let

    f(x) = ∑_{i=1}^{n} c_i B_{i,d,t}(x),    x ∈ [t_{d+1}, t_{n+1})
be the piecewise polynomial function with n coefficients c_i, also called a B-spline function. The knot vector indicates the polynomial sub-intervals, but it also prescribes the continuity between the polynomial pieces. If t_i < t_{i+1} then f is a polynomial on [t_i, t_{i+1}), and if t_{i−1} < t_i = t_{i+1} = . . . = t_{i+k−1} < t_{i+k}, we say that the knot t_i has multiplicity k. This means that f is a polynomial on [t_{i−1}, t_i) and on [t_{i+k−1}, t_{i+k}), and those two pieces connect with C^{d−k}-continuity.
The B-splines can be defined in a recursive manner,

    B_{i,d,t}(x) = ((x − t_i)/(t_{i+d} − t_i)) B_{i,d−1,t}(x) + ((t_{i+1+d} − x)/(t_{i+1+d} − t_{i+1})) B_{i+1,d−1,t}(x),

with

    B_{i,0,t}(x) = 1 if t_i ≤ x < t_{i+1}, and 0 otherwise.
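The recurrence translates directly into code once fractions with a zero denominator are treated as zero (the standard convention for repeated knots). A minimal sketch with 0-based knot indexing, which is our choice rather than the text's 1-based convention:

```python
def bspline_basis(i, d, t, x):
    """B_{i,d,t}(x) via the recurrence; a term whose denominator is zero
    is dropped. Knots are 0-indexed: t[0] corresponds to t_1 in the text."""
    if d == 0:
        return 1.0 if t[i] <= x < t[i + 1] else 0.0
    value = 0.0
    if t[i + d] > t[i]:
        value += (x - t[i]) / (t[i + d] - t[i]) * bspline_basis(i, d - 1, t, x)
    if t[i + 1 + d] > t[i + 1]:
        value += ((t[i + 1 + d] - x) / (t[i + 1 + d] - t[i + 1])
                  * bspline_basis(i + 1, d - 1, t, x))
    return value
```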
It can be shown that if n = d + 1 and t_1 = . . . = t_{d+1} < t_{d+2} = . . . = t_{2d+2}, then the B-splines reduce to the Bernstein polynomials and f is a polynomial in Bézier form. Hence the B-spline theory includes the polynomial Bézier case. B-splines have the following properties:

• Locality. B_{i,d,t} depends only on t_i, . . . , t_{i+d+1}, and if x ∉ [t_i, t_{i+d+1}) then B_{i,d,t}(x) = 0.

• Positivity. If x ∈ (t_i, t_{i+d+1}) then B_{i,d,t}(x) > 0.

• Piecewise polynomial.

• Smoothness. If z occurs k times among the knots t_i, . . . , t_{i+d+1} then B_{i,d,t} is C^{d−k}-continuous at z.
For any x ∈ [t_{d+1}, t_{n+1}) the B-splines form a partition of unity, and with the knot averages or Greville abscissae t*_i = (t_{i+1} + . . . + t_{i+d})/d we have x = ∑_{i=1}^{n} t*_i B_{i,d,t}(x).

For a B-spline function these properties translate to

• Locality. If x ∈ [t_µ, t_{µ+1}) then f(x) = ∑_{i=µ−d}^{µ} c_i B_{i,d,t}(x).

• Special point. If t_{i+1} = . . . = t_{i+d} < t_{i+d+1} for 1 ≤ i ≤ n then f(t_{i+1}) = c_i.

• Smoothness. If z occurs k times in t then f is C^{d−k}-continuous at z.
The control points of a B-spline function are the points (t*_i, c_i), i = 1, . . . , n. A B-spline curve f : [t_{d+1}, t_{n+1}) → R^m is given by

    f(t) = ∑_{i=1}^{n} c_i B_{i,d,t}(t)

with the control points c_i ∈ R^m. As in the Bézier case, the linear interpolant of the control points forms the control polygon. B-spline functions and curves have the convex hull and
Figure 2.4: Left: A cubic spline function with its control polygon. The knot vector is
illustrated using black circles. Right: A cubic spline curve with its control polygon and the
de Boor algorithm illustrated.
variation diminishing property. Actually, from the locality of a B-spline function it follows that f(t) with t ∈ [t_µ, t_{µ+1}) lies in the convex hull of c_{µ−d}, . . . , c_µ. The tangential end point interpolation property only holds if the corresponding end knot has multiplicity d + 1.
For a B-spline function of degree d over the knot vector t, the distance between the coefficient c_i and the function value at t*_i can be bounded by

    |c_i − f(t*_i)| ≤ K (t_{i+d} − t_{i+1})^2 |D^2 f|_{[t_{i+1}, t_{i+d}]},    1 ≤ i ≤ n,

where the operator D denotes (one-sided) differentiation (from the right) and the constant K only depends on d. This bound can be extended to the distance between the function and its control polygon Γf, yielding

    |Γf − f|_{[t*_1, t*_n]} ≤ Q h^2 |D^2 f|_{[t_1, t_{n+d+1}]},    (2.2)

where h = max_i (t_{i+1} − t_i) and the constant Q only depends on d (see [52]). A similar quadratic bound applies to the distance between a spline curve and its control polygon.
Blossoms are defined for polynomials. However, as shown by Ramshaw in [59], if two polynomials of degree d connect in a C^k fashion at t_0, then their blossoms agree as long as at least d − k arguments are equal to t_0. This means that we can also in the spline case express the control points of a B-spline curve in terms of blossoms, c_i = B_i[f](t_{i+1}, . . . , t_{i+d}), as long as the knot vector contains no knot with multiplicity greater than d + 1. Here B_i[f] is the blossom of one of the polynomial pieces where B_{i,d,t} is non-zero [59].
Blossoms provide one of many ways to develop de Boor’s algorithm, an efficient and
stable method for evaluating B-spline functions that only uses convex combinations, similarly to the de Casteljau algorithm for polynomials. Suppose x ∈ [tµ , tµ+1 ), set c_i^0(x) = c_i for i = µ − d, . . . , µ, and let for r = 1, . . . , d and i = µ − d + r, . . . , µ

c_i^r(x) = \frac{t_{i+d+1-r} - x}{t_{i+d+1-r} - t_i}\, c_{i-1}^{r-1}(x) + \frac{x - t_i}{t_{i+d+1-r} - t_i}\, c_i^{r-1}(x).
Then f(x) = c_µ^d(x). This algorithm has the same pyramid-like structure as de Casteljau's and also leads to repeated linear interpolation for B-spline curves.
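As a concrete illustration, de Boor's algorithm can be sketched in a few lines of Python. This is our own minimal sketch, not part of the original text; it uses 0-based indices and assumes an index µ with t_µ ≤ x < t_{µ+1} exists.

```python
def de_boor(d, t, c, x):
    """Evaluate a degree-d spline with knot vector t and coefficients c at x,
    using only convex combinations (de Boor's algorithm, 0-based indices)."""
    # Find mu with t[mu] <= x < t[mu+1]; raises StopIteration if x is outside.
    mu = next(j for j in range(len(t) - 1) if t[j] <= x < t[j + 1])
    # Copy the d+1 relevant coefficients c_{mu-d}, ..., c_mu.
    cr = [c[mu - d + i] for i in range(d + 1)]
    for r in range(1, d + 1):
        # Update from the right so cr[i-1] still holds the previous level.
        for i in range(d, r - 1, -1):
            j = mu - d + i                      # global coefficient index
            denom = t[j + d + 1 - r] - t[j]
            alpha = (x - t[j]) / denom
            cr[i] = (1 - alpha) * cr[i - 1] + alpha * cr[i]
    return cr[d]
```

Because every step is a convex combination of previous values, the computation is numerically stable; applying the same routine componentwise evaluates a B-spline curve.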
One final topic on B-spline functions we would like to introduce here is knot insertion. Knot insertion means that given a spline f over a knot vector t and a new knot
γ ∈ [tµ , tµ+1 ), we want to compute the coefficients of the same spline over the new knot
vector τ = t1 , . . . , tµ , γ, tµ+1 , . . . , tn+d+1 . Due to Böhm [10] we know that if
f = \sum_{i=1}^{n} c_i B_{i,d,t} = \sum_{i=1}^{n+1} b_i B_{i,d,τ},
then bi = ci for i = 1, . . . , µ − d and bi = ci−1 for i = µ + 1, . . . , n + 1 and
b_i = \frac{t_{i+d} - γ}{t_{i+d} - t_i}\, c_{i-1} + \frac{γ - t_i}{t_{i+d} - t_i}\, c_i

for i = µ − d + 1, . . . , µ.
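Böhm's rule also translates directly into code. The following is our own illustrative sketch (0-based indices, hypothetical helper name), returning the refined knot vector and coefficients:

```python
def insert_knot(d, t, c, gamma):
    """Insert the knot gamma into knot vector t of a degree-d spline with
    coefficients c, returning the new knots and coefficients (Boehm's rule)."""
    # Find mu with t[mu] <= gamma < t[mu+1] (0-based indices).
    mu = next(j for j in range(len(t) - 1) if t[j] <= gamma < t[j + 1])
    tau = t[:mu + 1] + [gamma] + t[mu + 1:]
    b = []
    for i in range(len(c) + 1):
        if i <= mu - d:
            b.append(c[i])                 # coefficients before the new knot
        elif i > mu:
            b.append(c[i - 1])             # coefficients after the new knot
        else:
            # Blend the two neighbouring coefficients (convex combination).
            alpha = (gamma - t[i]) / (t[i + d] - t[i])
            b.append((1 - alpha) * c[i - 1] + alpha * c[i])
    return tau, b
```

The spline itself is unchanged; only its representation is refined, which is the basis of the control-polygon-based methods discussed later.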
Spline theory is a rich field, and for more information we refer the interested reader to [52, 71, 17, 23].
2.1.3 Rational functions and curves
Polynomials, and with them piecewise polynomials, have the limitation that they cannot exactly represent conic sections or circular arcs, although they can approximate them arbitrarily closely. Rational polynomial curves, on the other hand, are able to do just that. A rational function is the quotient of two polynomials,

r(x) = p(x)/q(x)   with q ≠ 0.
Obviously, for x where q(x) = 0 the rational is undefined. Such points are called poles. A rational function in Bézier form is the quotient of a weighted polynomial in Bézier form and the weighted sum of the Bernstein basis polynomials,

r(x) = \frac{\sum_{i=0}^{n} w_i c_i B^n_{i,[α,β]}(x)}{\sum_{i=0}^{n} w_i B^n_{i,[α,β]}(x)}.
A standard assumption here is that all weights wi > 0, which results in the denominator
being strictly positive and hence r free of poles for all x ∈ [α, β]. For rational Bézier curves
the same weights are used for all component functions, yielding
r(t) = \frac{\sum_{i=0}^{n} w_i c_i B^n_{i,[α,β]}(t)}{\sum_{i=0}^{n} w_i B^n_{i,[α,β]}(t)}.
For more information about rational Bézier curves and an introduction to rational B-spline
curves, see [23].
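A direct evaluation sketch in Python (our own illustration with hypothetical names; it forms the weighted Bernstein sums explicitly rather than using a de Casteljau-style scheme). The example exploits the well-known fact that a rational quadratic with control points (1,0), (1,1), (0,1) and weights (1, √2/2, 1) traces a quarter of the unit circle exactly:

```python
from math import comb

def rational_bezier(ctrl, weights, t, a=0.0, b=1.0):
    """Evaluate sum_i w_i c_i B_i^n(t) / sum_i w_i B_i^n(t) with the
    Bernstein basis over [a, b]; ctrl is a list of points in R^m."""
    n = len(ctrl) - 1
    u = (t - a) / (b - a)                      # map parameter to [0, 1]
    num = [0.0] * len(ctrl[0])
    den = 0.0
    for i, (c, w) in enumerate(zip(ctrl, weights)):
        basis = comb(n, i) * (1 - u) ** (n - i) * u ** i
        den += w * basis
        num = [s + w * basis * cj for s, cj in zip(num, c)]
    return [s / den for s in num]

# A quarter of the unit circle, which no polynomial curve can represent:
ctrl = [(1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
w = [1.0, 0.5 ** 0.5, 1.0]
point = rational_bezier(ctrl, w, 0.5)
```

With all weights positive, the denominator stays strictly positive on [a, b], so the evaluation is free of poles, as noted above.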
2.2 Numerical methods

2.2.1 Stability
Computers have become an invaluable tool in mathematics and engineering as they are able to perform a large number of computations in a relatively short time. However, using computers for solving mathematical and engineering problems brings its own difficulties. The computer is a discrete machine, meaning that every task has to be split up into a finite number of simple operations. Moreover, the computer can only work on a finite subset of the real numbers. Both limitations can cause problems for numerical methods (methods that compute a result by calculating with numbers).
Integration with the computer provides an example of the first limitation, called discretization. Let us assume we have a function f which can be evaluated in the interval [0, 2] and we want to compute its integral If = \int_0^2 f(x)\,dx. Without any specific knowledge about what type of function we have, it is impossible to provide a method that will always compute the exact value for any function f. Hence we need to approximate the solution by numerical integration. Probably the easiest way to do this is to use the mid-point rule If ≈ 2 f(1). The error introduced by this discretization of the problem is called truncation
error. The name becomes more obvious when looking at an example using differentiation.
Assuming f is C^∞, it is well known that f′(x₀) ≈ (f(x₁) − f(x₀))/(x₁ − x₀), which comes from truncating the infinite Taylor series of
f(x) = \sum_{i=0}^{∞} \frac{f^{(i)}(x_0)}{i!} (x − x_0)^i

after the first derivative term.
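Both truncation errors are easy to observe numerically. The following is our own illustrative example, not from the text: the mid-point rule applied to f(x) = x² on [0, 2], and the truncated Taylor series (forward difference) applied to the exponential function, whose derivative at 0 is exactly 1.

```python
import math

def midpoint_rule(f, a, b):
    # One-panel mid-point rule: If is approximated by (b - a) * f((a + b)/2).
    return (b - a) * f((a + b) / 2)

def forward_difference(f, x0, h):
    # Truncated Taylor series: f'(x0) is approximated by (f(x0+h) - f(x0))/h.
    return (f(x0 + h) - f(x0)) / h

# f(x) = x^2 on [0, 2]: the exact integral is 8/3, the mid-point rule gives 2.
trunc_int = abs(midpoint_rule(lambda x: x * x, 0.0, 2.0) - 8.0 / 3.0)
# (exp)'(0) = 1: the forward-difference truncation error shrinks like h/2.
err_coarse = abs(forward_difference(math.exp, 0.0, 0.1) - 1.0)
err_fine = abs(forward_difference(math.exp, 0.0, 0.01) - 1.0)
```

Halving-type experiments like this are a standard way to check the order of a truncation error: shrinking h by a factor of 10 shrinks the forward-difference error by roughly the same factor, consistent with a first-order error term.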
The other major source of errors comes from the fact, that there are infinitely many
real numbers, whereas the computer can only represent a finite number of them. Hence,
whenever the initial data or the result of a computation can not be represented exactly by
the computer, the closest number that can be expressed is chosen instead, introducing a
rounding error. That this can have important consequences for the result becomes clear
when studying the following example. In a computer, real numbers are represented using floating point numbers: every number is written as a fixed number of digits after the decimal point (the mantissa), multiplied by the basis raised to an exponent with a fixed number of digits. For example, if we allow 3 digits after the decimal point and 2 digits for the exponent of the basis 10, the exact number 12.345 would become .123 · 10² in floating point representation. In floating point arithmetic the computation

(.100 · 10⁵ + .123 · 10²) − .100 · 10⁵,   where .100 · 10⁵ + .123 · 10² = .100123 · 10⁵ ≈ .100 · 10⁵,

yields zero and not the actual value of .123 · 10². Although this example is a bit extreme, the
loss of accuracy in floating-point representation due to subtraction is an important source of
numerical problems. Another one is that in an iterative or recursive process, a small initial
rounding error can be amplified considerably, leading to a completely wrong result. For
more details and an example see [76].
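The same effect is easy to reproduce in IEEE-754 double precision instead of the 3-digit decimal model above (our own example): adding 1 to 10¹⁶ already falls below the resolution of the mantissa.

```python
a = 1.0e16
b = 1.0
# Mathematically (a + b) - a equals b, but a + b already rounds back to a in
# double precision (1.0 is below the last mantissa digit of 1e16), so the
# subtraction yields 0.0 instead of 1.0.
lost = (a + b) - a
# Summing in a different order keeps the small term:
kept = (a - a) + b
```

This also shows why reordering operations can change a floating point result, even though the expressions are mathematically identical.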
Being aware of these problems, one important goal for numerical methods is to be stable, meaning that a small change in the initial data leads to a small change in the result (due
to the use of floating point numbers such small changes in the initial data will inevitably
occur). To check for a stable algorithm one could compare the numeric solution to the exact
one for some cases where the exact solution is known. This kind of test is called forward
error analysis. However, not only a method can be unstable, a problem in itself can have
this property. Such an unstable problem is also called ill-conditioned. An example, taken
from [76], is the following linear system,
x + 2y = 3
.499x + 1.001y = 1.5
with the solution x = y = 1.0. If, however, the second equation is replaced by .5x +
1.001y = 1.5, the solution jumps to x = 3, y = 0. If a given problem is ill-conditioned,
a forward error analysis can suggest an unstable method, even though the method actually
is stable. A backward error analysis takes the possibility of an unstable problem into account. It treats the numerical solution as the exact solution of a different, slightly perturbed
problem. The method is stable when the perturbation is sufficiently small [76]. For more
information on stability, truncation and rounding errors see [76, 83, 34, 42].
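The two linear systems above can be checked directly. The sketch below (our own illustration with a hypothetical helper) solves both by Cramer's rule and shows how the tiny change of one coefficient moves the solution from (1, 1) to (3, 0):

```python
def solve_2x2(a11, a12, a21, a22, b1, b2):
    """Solve the system a11*x + a12*y = b1, a21*x + a22*y = b2
    by Cramer's rule (fine for a 2x2 illustration)."""
    det = a11 * a22 - a12 * a21
    return ((b1 * a22 - b2 * a12) / det, (a11 * b2 - a21 * b1) / det)

# The ill-conditioned example from [76]:
sol1 = solve_2x2(1.0, 2.0, 0.499, 1.001, 3.0, 1.5)   # close to (1.0, 1.0)
sol2 = solve_2x2(1.0, 2.0, 0.500, 1.001, 3.0, 1.5)   # close to (3.0, 0.0)
```

The determinant of the system is tiny (0.003 and 0.001 respectively), which is exactly what makes the problem ill-conditioned: small perturbations of the data are amplified in the solution.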
2.2.2 Convergence
Many numerical methods use an iterative or recursive process to compute their result, generating a sequence {x_i}_{i=0}^{∞} which can be analyzed. For example we know that cos(x) = \sum_{i=0}^{∞} (−x²)^i/(2i)!. Hence we could compute cos(x) by defining the sequence x₀ = 1 and x_i = x_{i−1} + (−x²)^i/(2i)! for i > 0. In this example we know that the resulting sequence converges to cos(x). In a general method, however, one needs to examine whether the generated sequence converges or not. We say that a sequence converges to its limit x* in the norm ‖·‖ if for any ε > 0 we can find an index N such that

‖x_i − x*‖ < ε   for all i > N

[12]. We write lim_{i→∞} x_i = x*. Moreover a sequence is a Cauchy sequence [12] or has the Cauchy property [15] if for all ε > 0 there exists an index N such that for all i, j > N we have ‖x_i − x_j‖ < ε.
Although convergence is an important property, it is often the speed of the convergence
that determines the practicality of a method [34]. To measure this speed we use the order of
convergence or convergence rate. We say the sequence {x_i}_{i=0}^{∞} converges with order q ≥ 1 to x* if

‖x_{i+1} − x*‖ ≤ K ‖x_i − x*‖^q   (2.3)
for some constant K > 0; if q = 1 we additionally need 0 < K < 1. This is also called Q-order of convergence [58]. The condition (2.3) can be relaxed slightly by using a positive sequence ε_i where

‖x_i − x*‖ ≤ ε_i   and   \lim_{i→∞} \frac{ε_{i+1}}{ε_i^q} = K,
see [34]. Another way of defining convergence rate is given by the R-order of convergence
[58]. We say the sequence {x_i}_{i=0}^{∞} converges with R-order q to x* if there exist constants 0 < K < ∞ and h ∈ (0, 1) such that

‖x_i − x*‖ ≤ K h^{q^i}.
Equation (2.2) is an example of such an R-order of convergence, showing that the control polygon of a spline function converges quadratically (with order 2) to the function. R-order of convergence is also sometimes specified using the ’big O’ notation [81, 46, 14]. We say e(x) = O(γ(x)) as x goes to x* with some function γ(x) when there exist constants K and δ such that |e(x)| ≤ K|γ(x)| for all x with |x − x*| < δ.
In Chapters 5 and 6 we will use variants of (2.3) to show that the described methods have a
quadratic convergence rate.
An actual computation in a computer cannot go on forever. Thus an iterative process like the computation of cos(x) above needs to be stopped at some point. As one cannot calculate the actual distance from the limit, an often chosen stopping rule or stopping criterion is to stop when
kxi+1 − xi k ≤ δ
for some given tolerance δ. For more information on order of convergence see [34, 58] and
on stopping criteria see [34, 76].
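Combining the cos(x) series from above with this stopping criterion gives a complete, if minimal, iterative method (our own sketch, not from the text):

```python
from math import cos

def cos_series(x, delta=1e-12):
    """Partial sums of cos(x) = sum_i (-x^2)^i / (2i)!, stopped when two
    successive iterates differ by at most delta."""
    xi, term, i = 1.0, 1.0, 0
    while True:
        i += 1
        # Ratio of consecutive series terms: multiply by -x^2 / ((2i-1)(2i)).
        term *= -x * x / ((2 * i - 1) * (2 * i))
        xi_next = xi + term
        if abs(xi_next - xi) <= delta:             # stopping criterion
            return xi_next
        xi = xi_next
```

Note that the criterion bounds the distance between successive iterates, not the distance to the limit; for this alternating series the two happen to be of the same size, but in general the tolerance δ has to be chosen with the convergence rate of the method in mind.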
2.3 Root finding
The problem of finding the intersections of two parametric curves is related to the problem
of finding the roots or zeros of a real function f : R → R. The latter means that we want
to find all x ∈ R where f (x) = 0. This is a classical problem and several methods for
general real functions with certain continuity properties exist. Probably the best known one
is Newton’s method, which is based on linear approximation and creates a sequence {x_i}_{i=0}^{∞} where the start value x₀ needs to be given. The value x_{i+1} of the next iteration is chosen to be the root of the tangent at x_i, yielding

x_{i+1} = x_i − f(x_i)/f′(x_i).
It can be shown that if x₀ is sufficiently close to the zero ζ, then the sequence converges to it. Moreover, if the zero is simple (f′(ζ) ≠ 0), then it does so with a quadratic convergence
rate [42]. As the first property indicates, Newton’s method has some practical difficulties. If f′(x_i) = 0 for some x_i, the method is undefined. Also, if the initial value x₀ is not good enough, the method might diverge. This leads to the task of finding a good starting value, which is tricky in itself. For more on finding roots of general real functions, see your
favourite numerical analysis book, e.g. [42, 34].
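A minimal Python version of Newton's method with the stopping rule from Section 2.2.2 (our own sketch; it raises an error in the undefined case f′(x_i) = 0 and gives up after a fixed number of iterations):

```python
def newton(f, fprime, x0, delta=1e-12, max_iter=50):
    """Newton's method x_{i+1} = x_i - f(x_i)/f'(x_i), stopped when
    successive iterates differ by at most delta."""
    x = x0
    for _ in range(max_iter):
        fp = fprime(x)
        if fp == 0.0:
            raise ZeroDivisionError("f'(x) = 0: Newton step undefined")
        x_next = x - f(x) / fp
        if abs(x_next - x) <= delta:
            return x_next
        x = x_next
    return x

# sqrt(2) as the positive simple root of f(x) = x^2 - 2, starting at x0 = 1.
root = newton(lambda x: x * x - 2.0, lambda x: 2.0 * x, 1.0)
```

With the good starting value x₀ = 1 the quadratic convergence is visible in practice: the number of correct digits roughly doubles in each step.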
Root finding algorithms for piecewise polynomials in B-spline form can take advantage
of the properties of this representation. The easiest way is to use the convex hull property
and apply simple repeated subdivision of the curve in the middle. The parts where the whole
convex hull lies above or below the x-axis can be discarded. Once the interval over which
the current part of the curve is defined is small enough, one stops and assumes a root at
the center of the interval. This method will yield all zeros of a spline and it is guaranteed
to converge. The convergence rate however will only be linear, as the maximal possible
error of the found root is halved in every subdivision step. It is important to not confuse
this linear convergence rate of the root finding algorithm with the quadratic convergence
of the control polygon to the spline function. Using a more clever subdivision algorithm
with two specifically chosen subdivisions in each step can lead to a higher convergence
rate; examples for polynomials in Bézier form are Bézier clipping (see Chapter 5), with a quadratic, and quadratic clipping (see [4]), with a cubic convergence rate to simple roots.
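The simple subdivision scheme can be sketched for a polynomial in Bézier form (our own illustration with hypothetical names): `decasteljau_split` halves the parameter interval, and intervals whose coefficients are strictly one-signed are discarded, since by the convex hull property the curve cannot cross zero there.

```python
def decasteljau_split(coeffs):
    """Split a polynomial in Bezier form at the parameter midpoint,
    returning the Bezier coefficients of the two halves."""
    left, right = [coeffs[0]], [coeffs[-1]]
    level = list(coeffs)
    while len(level) > 1:
        level = [0.5 * (u + v) for u, v in zip(level, level[1:])]
        left.append(level[0])
        right.append(level[-1])
    return left, right[::-1]

def bezier_roots(coeffs, a, b, tol=1e-10):
    """All roots in [a, b]: discard parts whose coefficient hull lies
    strictly above or below zero, subdivide the rest until tol is reached."""
    if min(coeffs) > 0 or max(coeffs) < 0:
        return []                       # convex hull excludes zero
    if b - a < tol:
        return [0.5 * (a + b)]          # assume a root at the center
    left, right = decasteljau_split(coeffs)
    m = 0.5 * (a + b)
    return bezier_roots(left, a, m, tol) + bezier_roots(right, m, b, tol)

# (x - 1/3)(x - 2/3) on [0, 1] has Bezier coefficients [2/9, -5/18, 2/9].
roots = bezier_roots([2 / 9, -5 / 18, 2 / 9], 0.0, 1.0)
```

The maximal possible error halves in every step, which is exactly the linear convergence rate discussed above.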
A different approach for computing a zero of a spline curve is provided in [55]. The idea
is to use the control polygon of the spline as an approximation, meaning that the first zero
of the control polygon is taken as an approximation to the first zero of the function. This
value is then inserted into the knot vector and the new control polygon computed, yielding
an iterative method. In [55] it is shown that this actually yields a Newton-like method with a quadratic convergence rate that does not need any starting value. The method takes advantage of the variation diminishing property of a B-spline function and is able to convert the quadratic convergence of the control polygon to the function into a quadratically convergent root finding algorithm. Figure 2.5 shows an example of how false zeros disappear and the
zero of the control polygon converges to the zero of the function.
Root finding can also be applied to a system of multivariate functions. Let f : R^n → R^n; then we want to find all x ∈ R^n where f(x) = 0. Newton’s method in this setting is given by

x_{i+1} = x_i − J^{−1}(x_i) f(x_i)   or   J(x_i)(x_i − x_{i+1}) = f(x_i),

where J is the n × n Jacobi matrix J_{ij} = ∂f_i/∂x_j [42], and the method has the same practical problems as in the univariate case.
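For n = 2 the Newton step can be written out explicitly (our own sketch, using a closed-form 2×2 inverse instead of a general linear solver). The example finds the intersection of the unit circle with the line y = x:

```python
def newton2d(f, jac, x0, delta=1e-12, max_iter=50):
    """Newton's method for f: R^2 -> R^2, taking the step s = J^{-1} f."""
    x, y = x0
    for _ in range(max_iter):
        f1, f2 = f(x, y)
        (a, b), (c, d) = jac(x, y)
        det = a * d - b * c
        sx = (d * f1 - b * f2) / det      # explicit 2x2 inverse applied to f
        sy = (a * f2 - c * f1) / det
        x, y = x - sx, y - sy
        if abs(sx) <= delta and abs(sy) <= delta:
            return x, y
    return x, y

# x^2 + y^2 = 1 intersected with y = x, written as a root finding problem:
f = lambda x, y: (x * x + y * y - 1.0, y - x)
jac = lambda x, y: ((2.0 * x, 2.0 * y), (-1.0, 1.0))
sol = newton2d(f, jac, (1.0, 0.0))        # converges to (sqrt(2)/2, sqrt(2)/2)
```

As in the univariate case, a singular Jacobian makes the step undefined and a poor starting value can make the iteration diverge.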
Until now we have discussed methods that find an approximation to the root of a function by an iterative process. However, if the function has a local extremum with a value close
to zero or several roots close to each other, these approaches can report false zeros or find
only one of several. Similarly, a zero where the function is only touching the x-axis, but not
crossing it, might be missed due to numerical errors. Figure 2.6 shows some examples of
difficult zeros. A different type of problem, not further discussed in this thesis, is therefore
the question of whether and when one can be sure that there is no root or exactly k roots in
a certain interval.
Figure 2.5: Root finding by using the zero of the control polygon as an approximation
to the zero of the function, as described in [55]. One can clearly observe the false zero disappearing and the root of the control polygon converging to that of the function. The
black circles illustrate the current knot vector.
Figure 2.6: Examples of roots where the presented iterative algorithms may give false results.
2.4 Curve intersection in the plane
As mentioned before, finding the intersections of two planar parametric curves f and g is related to root finding; in fact, it can be solved by finding the roots of the bivariate function d(s, t) = f(s) − g(t). However, as it is a special case, algorithms tailored to the planar intersection problem can prove more efficient. Moreover, we would like to avoid the practical problems that, for example, the two-dimensional Newton’s method possesses.
As root finding for piecewise polynomials in B-spline form can be based on the properties of the representation, so can intersection finding methods. Hence similar ideas as
for root finding are used to solve the intersection problem. Examples are repeated subdivision in the middle or Bézier clipping. Again, one needs to take care of distinguishing the
quadratic convergence of the control polygons to the curves and the convergence rate of the
intersection finding algorithm. For an overview of some of the currently used methods for planar curve intersection see Section 6.1. Note that iterative algorithms for curve intersections might also give false intersections or miss some, as in the root finding case. For an example that tries to solve this problem, see [84].
3 Interpolation
This chapter introduces the interpolation problem in its different settings, starting with the
univariate case. Multivariate interpolation is discussed for a finite as well as an infinite or
transfinite number of points, where in the latter case the data is specified along the boundary of some domain. In recent years, new methods based on barycentric coordinates have
been developed for this kind of transfinite interpolation. We introduce and discuss some of
these barycentric coordinates and their corresponding transfinite counterparts with a special
emphasis on the mean value coordinates and interpolant.
Interpolation is not only interesting as a means of computing an interpolating function.
It is a very useful tool in the development of other methods, the prime examples being numerical integration and numerical differentiation. The main idea there is to determine some good points on the domain, generate a polynomial that interpolates the given function
at the chosen points and then integrate or differentiate this polynomial exactly. This yields
a formula that combines some well chosen function values in a clever way to compute an
approximation of the integral or derivative. For classical examples in the univariate case see
your favourite numerical analysis book, e.g. [42].
Instead of interpolating each data point, an approximating function tries to capture the
general shape of the data, often by minimizing some error. A popular example here would
be least squares approximation. The advantage is that the resulting function can be less
complicated, e.g. one can use a polynomial with relatively low degree to approximate large
amounts of data. Whether interpolation or approximation is the better choice depends on the given task at hand.
3.1 Univariate
Interpolation with real functions is a fundamental task in numerical analysis. In its simplest setting, univariate Lagrange interpolation, we are given a set of distinct real points x₀ < x₁ < . . . < xₙ ∈ R and values f₀, . . . , fₙ ∈ R, and we want to find a function g : R → R that interpolates the given data, i.e.

g(x_i) = f_i,   i = 0, . . . , n.   (3.1)
It is well known that in the given setting there exists a uniquely defined polynomial of degree
at most n which fulfills the interpolation conditions (3.1). The easiest way to specify the
interpolating polynomial is perhaps by using the Lagrange basis polynomials (2.1) defined
over the data points xi ,
g(x) = \sum_{i=0}^{n} f_i L_i(x).
As mentioned before, the numerical problems associated with this representation can be
reduced when writing it in its barycentric form, see Chapter 9. Another common way to
express the interpolating polynomial is the Newton form
g(x) = f [x0 ] + f [x0 , x1 ](x − x0 ) + . . . + f [x0 , . . . , xn ](x − x0 ) · · · (x − xn−1 ), (3.2)
using divided differences which are defined recursively by
f[x_i] = f_i,   f[x_0, . . . , x_i] = \frac{f[x_1, . . . , x_i] − f[x_0, . . . , x_{i−1}]}{x_i − x_0}.
One can also compute the Bézier form with α = x0 and β = xn by inserting it into (3.1)
and solving the resulting non-singular linear system for the coefficients.
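The divided-difference recursion and the evaluation of the resulting Newton form can be sketched in a few lines (our own illustration with hypothetical names; `newton_eval` uses Horner-like nested multiplication):

```python
def divided_differences(xs, fs):
    """Return the Newton-form coefficients f[x0], f[x0,x1], ..., f[x0,...,xn],
    computed in place by the divided difference recursion."""
    coef = list(fs)
    n = len(xs)
    for k in range(1, n):
        # After pass k, coef[i] holds f[x_{i-k}, ..., x_i] for i >= k.
        for i in range(n - 1, k - 1, -1):
            coef[i] = (coef[i] - coef[i - 1]) / (xs[i] - xs[i - k])
    return coef

def newton_eval(xs, coef, x):
    """Evaluate the Newton form (3.2) by nested multiplication."""
    result = coef[-1]
    for i in range(len(coef) - 2, -1, -1):
        result = result * (x - xs[i]) + coef[i]
    return result
```

A practical advantage of the Newton form is that adding a new data point only appends one coefficient instead of recomputing the whole representation.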
If the given data comes from a C n+1 -continuous function f , we can bound the error of
the interpolating polynomial g. Let α and β be such that α ≤ x0 and β ≥ xn . Then for any
x ∈ [α, β] we have
|g(x) − f(x)| ≤ \frac{\max_{t∈[α,β]} |f^{(n+1)}(t)|}{(n+1)!} \prod_{j=0}^{n} |x − x_j|.   (3.3)
The relation between the error and the interval [α, β] can be made clearer by using the ’big O’ notation introduced in Section 2.2.2. With λ = β − α this means that g(x) − f(x) = O(λ^{n+1})
as λ goes to zero. We say that g has approximation order n + 1 [81].
When interpolating data from a given function f, one can reduce the length λ of the interval over which one interpolates to reduce the error. But instead of letting λ go to zero, one could
also increase the number of data points to interpolate and by this increase the degree of the
polynomial. The easiest way here would be to use equidistant points, meaning xi = α + ih
and h = (β − α)/n. The resulting interpolating polynomial however might diverge from
f instead of converging to it as n → ∞ and h → 0, as illustrated in Figure 3.1 with the
popular example of Runge’s function f (x) = 1/(1 + x2 ). Instead it is better to use points
which are more dense at the boundaries of the interval. A good choice is to interpolate at
the Chebyshev points
x_i = \frac{1}{2}\left((β − α)\cos\left(\frac{2i+1}{n+1}\,\frac{π}{2}\right) + α + β\right),   i = 0, . . . , n,
3.1. UNIVARIATE
23
2
2
1.5
1.5
1
1
0.5
0.5
0
0
−0.5
−5
0
5
−0.5
−5
0
5
Figure 3.1: Polynomial interpolation of Runge’s function f (x) = 1/(1 + x2 ) with n = 10
on the left and n = 20 on the right. The red interpolant uses equidistant points and the green
one uses Chebyshev points.
which come from scaling the zeros of the Chebyshev polynomial of the first kind T_{n+1}(x) = 2^{−n} cos[(n+1) cos^{−1}(x)] defined over [−1, 1] [42]. In fact, g will converge to f using these points as n → ∞ for all f which are absolutely continuous on [α, β] [34, 81].
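The divergence for equidistant points and the improvement obtained with Chebyshev points can be reproduced in a short experiment (our own sketch, reusing the Newton form of the interpolating polynomial; the error is sampled on a grid over [−5, 5]):

```python
from math import cos, pi

def newton_interp(xs, fs):
    """Interpolating polynomial through (xs, fs) via divided differences."""
    coef = list(fs)
    n = len(xs)
    for k in range(1, n):
        for i in range(n - 1, k - 1, -1):
            coef[i] = (coef[i] - coef[i - 1]) / (xs[i] - xs[i - k])
    def g(x):
        r = coef[-1]
        for i in range(n - 2, -1, -1):
            r = r * (x - xs[i]) + coef[i]
        return r
    return g

runge = lambda x: 1.0 / (1.0 + x * x)
a, b, n = -5.0, 5.0, 10
equi = [a + i * (b - a) / n for i in range(n + 1)]
cheb = [0.5 * ((b - a) * cos((2 * i + 1) / (n + 1) * pi / 2) + a + b)
        for i in range(n + 1)]
g_equi = newton_interp(equi, [runge(x) for x in equi])
g_cheb = newton_interp(cheb, [runge(x) for x in cheb])
# Sample the error on a fine grid; the equidistant interpolant oscillates
# strongly near the interval ends, the Chebyshev one does not.
err_equi = max(abs(g_equi(x / 10.0) - runge(x / 10.0)) for x in range(-50, 51))
err_cheb = max(abs(g_cheb(x / 10.0) - runge(x / 10.0)) for x in range(-50, 51))
```

For n = 10 the sampled maximum error drops from well above 1 for equidistant points to roughly 0.1 for Chebyshev points, matching the behaviour shown in Figure 3.1.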
The freedom to choose the interpolation points is not always given, and sometimes
equidistant data points are what is needed. In such cases a different interpolation method
than polynomial interpolation may be preferable. This is also the case if one wants to avoid
high degrees in the interpolant. One way to solve these problems is to use piecewise polynomial interpolants. Using the B-spline representation with a knot vector \{t_i\}_{i=1}^{n+d+2} which satisfies t_{i+d+1} > t_i for all i, the interpolation conditions (3.1) yield a linear system, which is non-singular if and only if the diagonal elements B_{i,d,t}(x_{i−1}), i = 1, . . . , n + 1, are
positive [52]. One very common method is cubic spline interpolation with the knot vector
t1 = t2 = t3 = t4 = x0 , ti = xi−4 , i = 5, . . . , n + 3 and tn+4 = tn+5 = tn+6 = tn+7 =
xn . In this setting we have two degrees of freedom which need to be dealt with. There are
several ways to do that, common ones are specifying the end point derivatives as additional
interpolation conditions or removing the knots t5 = x1 and tn+3 = xn−1 from the knot
vector. Both methods lead to non-singular linear systems and hence have a unique solution [52]. The first approach yields a cubic spline function which minimizes \int_{x_0}^{x_n} (g''(x))^2\,dx
for all functions that are C 2 and interpolate all the data [52, 17]. If the data comes from a
C 4 -continuous function f , then this cubic spline has the error O([maxi (xi+1 − xi )]4 ) [17].
For equidistant points this gives O(h⁴) with h = (xₙ − x₀)/n; we say that cubic spline interpolation with end point derivatives has approximation order four.
Another possibility for interpolating the given data is to use rational functions. Here one needs to be careful with the location of poles, or try to avoid them completely. This is
discussed in more detail in Chapter 9.
Interpolation conditions do not need to be limited to function values; one can just as well specify derivative data to be interpolated, yielding Hermite interpolation. There are two
ways of specifying the Hermite conditions. One is to have a set of distinct points y_i, i = 0, . . . , n, with multiplicities κ_i ≥ 1. Then the interpolant g must satisfy

g^{(j)}(y_i) = f^{(j)}(y_i),   i = 0, . . . , n,  j = 0, . . . , κ_i − 1,   (3.4a)

where f^{(j)}(y_i) is the given data to interpolate. The other is to have N = \sum_{i=0}^{n} κ_i points x_0, . . . , x_N which are allowed to coalesce. The convention is then that if

x_{i−1} < x_i = . . . = x_{i+κ_i−1} < x_{i+κ_i},   (3.4b)

then g(x) interpolates the data f^{(j)}(x_i) for j = 0, . . . , κ_i − 1.
Similar to the Lagrange case, there exists a unique interpolating polynomial

g(x) = \sum_{i=0}^{n} \sum_{j=0}^{κ_i − 1} f^{(j)}(y_i) H_{i,j}(x)
with appropriate basis polynomials Hi,j , which again can be written in a stable barycentric
form, see Chapter 9. Using (3.4b), the unique interpolating polynomial can also be given in
Newton form (3.2) by applying the property

f[λ, λ, . . . , λ] = \frac{1}{k!} f^{(k)}(λ),   with λ repeated (k + 1) times,
of divided differences to avoid divisions by zero [34, 42]. This in addition also means that if
the data comes from a C N +1 -continuous function f , then the error bound (3.3) still remains
valid (with N instead of n).
There are also ways to compute an interpolating piecewise polynomial or rational function that satisfies the Hermite conditions (3.4). For more information and examples on the
latter, see Chapter 9.
3.2 Multivariate
Univariate interpolation is a relatively simple and well-studied classical problem. Its extension to the multivariate case yields a more difficult task. In the Lagrange setting, we are given points in R^m with corresponding values and we want to find a function g : R^m → R
that interpolates the data. Let us first consider the special case where the data points lie on
a rectilinear grid. Here each point is defined by the intersection of m perpendicular hyperplanes, where each hyperplane is also perpendicular to a standard basis vector. Figure 3.2
shows an example of a two-dimensional rectilinear grid. For the k-th standard basis vector there are n_k + 1 hyperplanes perpendicular to it, with their positions defined by the values x_{k,0}, . . . , x_{k,n_k}, where n_k ≥ 0 and x_{k,i} ∈ R, i = 0, . . . , n_k. This yields \prod_{k=1}^{m} (n_k + 1) data or grid points x_{i_1,...,i_m} = (x_{1,i_1}, . . . , x_{m,i_m}) where i_k = 0, . . . , n_k and k = 1, . . . , m.
Figure 3.2: A two-dimensional rectilinear grid.
For such a grid tensor-product functions can be used to extend the univariate theory. Let
x ∈ Rm with x = (x1 , . . . , xm ). One can show that there exists a unique polynomial which
for each coordinate xk has a degree of at most nk . It can be represented using the Lagrange
polynomials Lk,ik (2.1) for each coordinate xk and the resulting polynomial is then
g(x) = \sum_{i_1=0}^{n_1} \cdots \sum_{i_m=0}^{n_m} f_{i_1,...,i_m} L_{1,i_1}(x_1) \cdots L_{m,i_m}(x_m).
It is also possible to obtain a Newton form of the multivariate interpolating polynomial by
using divided differences for several variables as well as a bound on the error if the data
comes from a smooth enough function, see [42]. A piecewise polynomial interpolant for
gridded data can be computed using a tensor product B-spline function, see [52].
The Lagrange interpolation problem in several variables becomes much more complicated if we allow the distinct data points xi ∈ Rm , i = 0, . . . , n to be arbitrarily distributed.
This case is also called scattered data interpolation and has been studied for quite some
time. There is polynomial and piecewise polynomial interpolation, methods using radial
basis functions (functions whose value depends only on the distance to some center point)
and methods trying to minimize some properties, to name just a few categories. Some of
these methods are global, meaning that the value of the interpolant at a point x is determined
by all data points, and others are local, where the value depends only on a subset of data
points in some neighbourhood of x. A good introductory overview of different methods for m = 2 is given in [70], some of which can easily be extended to higher dimensions. In [31], different methods for scattered data interpolation in two dimensions are compared with respect to certain properties. A survey article covering scattered data interpolation in higher dimensions (m > 2) is [2].
Polynomial interpolation in this setting is quite complex. Take for example two points
in a plane with different values. There is no constant bivariate polynomial of degree zero
interpolating both data points, whereas there are infinitely many of degree one, as the line
described by the data points can be contained in infinitely many planes. Another example is
given by three points on a line. Although now the degrees of freedom of a bivariate linear
polynomial match the number of data points, there still is no unique interpolating polynomial. There has been a lot of research into multivariate polynomial interpolation, trying for example to answer the question of when there exists a unique interpolating polynomial, or how to choose the data points such that it does. For an overview see [33, 32, 65].
For the non-polynomial methods, we mention two examples. The first is interpolation
with thin-plate splines [21, 74]. Thin-plate splines are a multivariate extension of cubic
spline interpolation. The latter minimizes the energy defined by the squared second derivative (see Section 3.1), and thin-plate splines minimize

\int_{R^m} \sum_{i=1}^{m} \sum_{j=1}^{m} \left( \frac{\partial^2 g}{\partial x_i \partial x_j} \right)^2 dx.
Our other example was introduced in [73] by Shepard for m = 2 but can be generalized to
arbitrary dimensions (see e.g. [37]). It is called Shepard’s method and uses inverse-distance
weighting. The interpolant is given by

g(x) = \begin{cases} \displaystyle \sum_{i=0}^{n} w_i(x)^µ f_i \Big/ \sum_{i=0}^{n} w_i(x)^µ, & x ≠ x_j \text{ for all } j, \\ f_j, & x = x_j \text{ for some } j, \end{cases}   (3.5)

with w_i(x) = 1/‖x − x_i‖ for some µ > 0 and some norm ‖·‖. Although it interpolates by
definition, it can be reformulated into the form

g(x) = \sum_{i=0}^{n} f_i \Bigg( \prod_{\substack{j=0 \\ j≠i}}^{n} ‖x − x_j‖^µ \Bigg) \Bigg/ \Bigg( \sum_{k=0}^{n} \prod_{\substack{j=0 \\ j≠k}}^{n} ‖x − x_j‖^µ \Bigg)
which is more numerically stable and clearly shows the interpolation property and that the
function value is a convex combination of the given data values. Shepard’s method can be
changed into a local method by using weights w_i(x) in (3.5) which are equal to 1/‖x − x_i‖ close to x_i, zero far away, and which have a smooth transition in between [73, 70]. There are also
other extensions or modifications of it, e.g. [61].
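Shepard's method in the form (3.5) is only a few lines of code. This is our own sketch (Euclidean norm, hypothetical names); note how the coincidence case x = x_j is handled separately, exactly as in (3.5):

```python
def shepard(points, values, x, mu=2.0):
    """Shepard's inverse-distance interpolant (3.5) in R^m, Euclidean norm."""
    num, den = 0.0, 0.0
    for p, f in zip(points, values):
        dist2 = sum((xj - pj) ** 2 for xj, pj in zip(x, p))
        if dist2 == 0.0:
            return f                    # x coincides with a data point
        w = dist2 ** (-mu / 2.0)        # 1 / ||x - p||^mu
        num += w * f
        den += w
    return num / den
```

Since all weights are positive, the returned value is always a convex combination of the data values, as the reformulated expression above makes explicit.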
In multivariate Hermite interpolation we have in addition to the function values also
given partial derivatives or directional derivatives up to some order at some or all data
points. Hermite multivariate polynomial interpolation is thus also a difficult topic with open questions remaining; for a survey see [51] and for an example [64]. For examples of non-polynomial multivariate Hermite interpolation see [3, 30].
In multivariate interpolation the given data does not need to be a finite set of data points.
Instead one can also specify data along some lines or functions. More generally, it is possible to have an infinite set or transfinite number of data points, thus this setting is called
transfinite interpolation [24]. For a network of four boundary curves in R3 , Coons proposed a formula to fit a patch between them [24, 16]. The idea was picked up, extended and
analysed by several other authors like Gordon and Gregory [24, 63, 35]. For an overview of
such transfinite surface interpolation in R3 , see [63]. The progress in surface interpolation
also provided new ideas for the development of new transfinite interpolation schemes for
multivariate functions [35]. An early approach where the domain of interest is a hyperrectangular Cartesian product region is given in [35]. In the bivariate setting, [36] provides a
method for interpolating the values of a function given on the boundary of a convex domain.
Recent transfinite interpolation schemes are derived from transfinite versions of barycentric
coordinates [6]; for more information on this topic see Section 3.3.
Transfinite interpolation methods can often be associated with one of the following three
related approaches [63]: Optimality conditions, partial differential equations and boundary
sampling. In the first, one tries to find a function that minimizes some functional or energy. Methods of the second type are based on solving a partial differential equation with
the interpolation conditions as boundary conditions, which yields an interpolating function.
Finally there are methods which compute the value of the interpolant at a point by using
a boundary integral or by sampling the boundary at discrete points. Here the interpolation
data is given at the boundary of some domain. Some of the recent transfinite interpolation
approaches like mean-value interpolation (see below) belong to this type of methods.
3.3 From barycentric coordinates to transfinite interpolation

3.3.1 Barycentric coordinates
The position of a point is often specified using Cartesian coordinates, describing it by an
origin and a weighted sum of vectors. In modern computer graphics and computational
geometry, however, the need to express a point by a weighted average of a given set of points arises in several situations. Let x_i ∈ R^m, i = 1, . . . , n, be the given points and let w_i ∈ R with Σ_{i=1}^n w_i ≠ 0 be given weights. Then the weighted average of the x_i uniquely determines a point x ∈ R^m,

x(w_1, . . . , w_n) = (Σ_{i=1}^n w_i x_i) / (Σ_{i=1}^n w_i).
The wi are the homogeneous barycentric coordinates of x. As scaled coordinates yield
the same point x, we can define the normalized barycentric coordinates or just barycentric
coordinates
λ_i = w_i / Σ_{j=1}^n w_j,    i = 1, . . . , n.
Obviously, if the barycentric coordinates are given, they trivially and uniquely determine
the corresponding point. However, the more difficult task is to determine a set of barycentric
coordinates λi (x) for a given point x ∈ Rm . We call the λi (x) barycentric coordinates of
x if they form a partition of unity
Σ_{i=1}^n λ_i(x) = 1,    (3.6a)
Figure 3.3: Illustration of the barycentric coordinate λ_1(x) with respect to the triangle x_1, x_2, x_3 for different x.
enable us to express x as an affine combination
Σ_{i=1}^n λ_i(x) x_i = x,    (3.6b)
and have the Lagrange property
λ_i(x_j) = δ_{ij},    (3.6c)
where δ_{ij} is the Kronecker delta [41, 49]. Moreover, it is desirable that the coordinates be defined on as large a domain as possible, be non-negative and depend smoothly on x [49].
Barycentric coordinates were first introduced by Möbius in [54].
For n = 2 and x1 , x2 ∈ R with x1 < x2 , each point x ∈ R has the unique barycentric
coordinates
λ_1(x) = (x_2 − x)/(x_2 − x_1),    λ_2(x) = (x − x_1)/(x_2 − x_1).
These coordinates are defined for any x ∈ R, are linear and hence infinitely smooth with
respect to x and positive for all x ∈ (x1 , x2 ). In R2 we can use a similar idea if n = 3
and the points x_1, x_2 and x_3 are the vertices of a non-degenerate triangle. Then the unique
barycentric coordinates for any x ∈ R2 are
λ_1(x) = area(x, x_2, x_3)/area(x_1, x_2, x_3),    λ_2(x) = area(x_1, x, x_3)/area(x_1, x_2, x_3),    λ_3(x) = area(x_1, x_2, x)/area(x_1, x_2, x_3),
where area(x, x2 , x3 ) is the signed area of the triangle defined by x, x2 and x3 . Signed here
means that the area is positive if the vertices are ordered counter-clockwise and negative
otherwise. Again these coordinates are smooth with respect to x and positive for any point
x inside the triangle defined by x1 , x2 , x3 . Figure 3.3 shows some examples for these
coordinates. We have seen that in specific cases, the barycentric coordinates are unique. In
fact, they are unique as long as the points xi are the vertices of a non-degenerate simplex
[41]. A simplex or n-simplex is the n dimensional analogue of a triangle. It is the convex
hull of n + 1 points x1 , . . . , xn+1 ∈ Rn for which the vectors x2 − x1 , . . . , xn+1 − x1 are
linearly independent [81, 82].
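The segment and triangle formulas above can be computed directly from signed areas. The sketch below is an illustration only; `signed_area` and `tri_barycentric` are hypothetical helper names, not code from the thesis.

```python
def signed_area(a, b, c):
    # Signed area of the triangle (a, b, c): positive if the vertices
    # are ordered counter-clockwise, negative otherwise.
    return 0.5 * ((b[0] - a[0]) * (c[1] - a[1]) - (c[0] - a[0]) * (b[1] - a[1]))

def tri_barycentric(x, x1, x2, x3):
    # Barycentric coordinates of x with respect to the non-degenerate
    # triangle x1, x2, x3, as ratios of signed areas.
    area = signed_area(x1, x2, x3)
    return (signed_area(x, x2, x3) / area,
            signed_area(x1, x, x3) / area,
            signed_area(x1, x2, x) / area)
```

By construction the three coordinates sum to one and reproduce x as an affine combination of the vertices, i.e. they satisfy conditions (3.6).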
Figure 3.4: Star shaped polygon with angles α_i and the boundary Γ of a disc around x illustrated.
We will focus our remaining discussion on points in R2 . For n > 3 we can consider the
points xi , i = 1, . . . , n as the vertices of a polygon. It is easy to observe that in this setting
there are no unique coordinates. An easy solution for any point x inside the polygon is to
take a triangle formed by three vertices that contains x and use the barycentric coordinates
with respect to this triangle. The resulting coordinates fulfil conditions (3.6), but depend
on the choice of triangle and do not in general depend smoothly on x [25]. Thus better
coordinates are needed. For convex polygons, Wachspress in [77] found the coordinates
λ_i(x) = w_i(x) / Σ_{j=1}^n w_j(x),    w_i(x) = area(x_{i−1}, x_i, x_{i+1}) / (area(x_{i−1}, x_i, x) area(x, x_i, x_{i+1})),    (3.7)
which are infinitely smooth (C ∞ ) and affine invariant [27]. Note that all indices here and
later are considered cyclic, so xn+1 = x1 . Unfortunately, for x outside the polygon, the
coordinates (3.7) might be undefined due to a division by zero. Wachspress coordinates were
analyzed and extended by several authors, e.g. [53, 78, 79] and also different barycentric
coordinates for convex polygons were discovered.
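A direct transcription of (3.7) might look as follows. This is a minimal sketch under the stated assumptions (convex polygon, vertices in counter-clockwise order, x strictly inside, so no denominator vanishes); `wachspress` is a hypothetical name.

```python
def wachspress(x, verts):
    # Wachspress coordinates (3.7) of x with respect to a convex polygon
    # with vertices `verts` in counter-clockwise order; indices are cyclic.
    n = len(verts)
    def area(a, b, c):
        # signed area of the triangle (a, b, c)
        return 0.5 * ((b[0]-a[0])*(c[1]-a[1]) - (c[0]-a[0])*(b[1]-a[1]))
    w = []
    for i in range(n):
        prev, cur, nxt = verts[i-1], verts[i], verts[(i+1) % n]
        w.append(area(prev, cur, nxt) / (area(prev, cur, x) * area(x, cur, nxt)))
    s = sum(w)
    return [wi / s for wi in w]
```

As barycentric coordinates, the returned values sum to one and reproduce x as an affine combination of the vertices.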
In [25] Floater introduced the mean value coordinates for star shaped polygons, motivated by attempting to approximate harmonic functions by piecewise linear ones. One well
known property for a harmonic function f : Ω ⊆ R2 → R is that it satisfies the mean value
theorem

f(x) = 1/(2πr) ∫_Γ f(y) dy,
where Γ is the boundary of a disc with center x and radius r > 0 small enough, such that
the disc is completely contained in Ω. We now let x be an arbitrary, but fixed point inside
the star shaped polygon and let F (y) be a piecewise linear function that is linear in each
triangle x, xi , xi+1 , i = 1, . . . , n. We set F (xi ) = f (xi ), i = 1, . . . , n and want to choose
the value of F at y = x such that F fulfills the mean value theorem at this point,
F(x) = 1/(2πr) ∫_Γ F(y) dy = 1/(2πr) Σ_{i=1}^n ∫_{Γ_i} F(y) dy,
where Γi is the part of Γ in the triangle x, xi , xi+1 and r > 0 small enough such that the
whole disc lies inside the polygon as illustrated in Figure 3.4. F is linear in each triangle
x, xi , xi+1 , hence the integral over each Γi can be calculated exactly, leading to
F(x) = Σ_{i=1}^n λ_i(x) f(x_i)    (3.8)

being independent of r, with

λ_i(x) = w_i(x) / Σ_{j=1}^n w_j(x),    w_i(x) = (tan(α_{i−1}/2) + tan(α_i/2)) / ‖x_i − x‖_2.    (3.9)
It can be shown that the λi in (3.9) are barycentric coordinates, positive for any x inside the
polygon and depend smoothly on x. Mean value coordinates are well defined for any x in
R2 that does not lie on the polygon and also for arbitrary planar polygons [40]. However,
if the polygon is not star shaped, the coordinates might be negative also for x inside of it.
Three dimensional extensions of the coordinates are introduced in [28, 44].
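The weights (3.9) can be sketched as follows; `mean_value_coords` is a hypothetical helper, and the angles α_i are computed as signed angles at x, assuming the vertices are given in counter-clockwise order and x lies inside the polygon (so all angles are positive).

```python
import math

def mean_value_coords(x, verts):
    # Mean value coordinates (3.9): w_i = (tan(a_{i-1}/2) + tan(a_i/2)) / |x_i - x|,
    # where a_i is the signed angle at x in the triangle x, x_i, x_{i+1}.
    n = len(verts)
    def angle(p, q):
        # signed angle at x between the rays x -> p and x -> q
        ax, ay = p[0] - x[0], p[1] - x[1]
        bx, by = q[0] - x[0], q[1] - x[1]
        return math.atan2(ax * by - ay * bx, ax * bx + ay * by)
    a = [angle(verts[i], verts[(i + 1) % n]) for i in range(n)]
    w = [(math.tan(a[i - 1] / 2) + math.tan(a[i] / 2)) /
         math.hypot(verts[i][0] - x[0], verts[i][1] - x[1]) for i in range(n)]
    s = sum(w)
    return [wi / s for wi in w]
```

Like all barycentric coordinates, they form a partition of unity and reproduce the point x.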
Wachspress and mean value coordinates both depend on the position of three polygon
vertices. In [27] barycentric coordinates over convex polygons are analyzed in general,
leading to the observation that any barycentric coordinate extends continuously to identical values on the polygon. The paper also introduces a three-point coordinate family of
barycentric coordinates where λi depends on xi−1 , xi , xi+1 and of which Wachspress and
mean value coordinates are members. In addition a family of five-point coordinates is discussed, where λi depends on xi−2 , . . . , xi+2 .
For an arbitrary planar polygon, mean value coordinates are defined but not necessarily
positive inside the polygon, as mentioned above. In [27] it is noted that harmonic coordinates, the solutions of the Laplace equation ∆λ_i(x) = 0, will yield smooth and positive
barycentric coordinates. These coordinates are realized in [43] and the properties proved for
arbitrary polygons. This paper also extends the coordinates to three dimensions. However,
to compute the coordinates one needs to solve the above equations. In [41] barycentric coordinates for arbitrary polygons are introduced which are positive for all x inside the polygon
and can be evaluated efficiently.
The standard use of the barycentric coordinates discussed so far is Lagrange interpolation of function values given at the vertices as in (3.8). However, some applications are also
interested in specifying and interpolating derivative data at the vertices. In [49] a way of
modifying the barycentric coordinates discussed above to so-called higher order barycentric
coordinates is suggested, which allow for first order Hermite interpolation.
Barycentric coordinates have become a quite popular topic of research in the last 15
years. For more references see the papers [25, 27, 40, 41] as well as the reference list at
[5] maintained by Hormann, which contains also references for the extensions to transfinite
interpolation.
3.3.2 Transfinite Lagrange interpolation
Let us consider transfinite interpolation over a given planar, open domain Ω ⊂ R2 with the
boundary ∂Ω. We want to interpolate the function values of f : ∂Ω → R with a smooth
Figure 3.5: Notation for transfinite mean value interpolation on a convex and a non-convex domain.
function over Ω. One approach is to sample the boundary with a set of points and approximate it using a polygon. Then one can use some barycentric coordinates to interpolate the
data at the points. An explicit formula for transfinite interpolation can then be obtained by taking the limit as the polygon converges to the boundary and the number of
points goes to infinity. Another approach is to use the same ideas as for the polygon for the
continuous boundary and in that way derive the formulas.
A continuous version of mean value coordinates was first introduced in [44]; it was analyzed and its interpolation property proved in [22, 13]. Let Ω be a convex, open domain and x ∈ Ω. Moreover let v be a unit vector, S be the unit circle and let p(x, v) be the intersection between the boundary ∂Ω and the ray emitting from x in the direction v, as illustrated in Figure 3.5. Then with v ∈ S and 0 ≤ r ≤ ρ(x, v) = ‖p(x, v) − x‖_2 we introduce the radial linear function

F(x + rv) = ((ρ(x, v) − r)/ρ(x, v)) F(x) + (r/ρ(x, v)) f(p(x, v)).
We choose the value F(x) as for mean value coordinates, such that F fulfills the mean value theorem at x,

F(x) = 1/(2πr) ∫_Γ F(y) dy,
and set the value of the transfinite interpolant g to the value of F at x, yielding

g(x) = F(x) = (∫_S f(p(x, v))/ρ(x, v) dv) / φ(x),    φ(x) = ∫_S 1/ρ(x, v) dv.    (3.10)
In [13] it is shown that (3.10) also holds for a convex domain Ω ⊂ R^m in any dimension, with S being the corresponding unit sphere. For non-convex domains we order all intersections between the boundary and the ray from x in direction v, yielding p_j(x, v), j = 1, . . . , µ(x, v), with ρ_j(x, v) = ‖p_j(x, v) − x‖_2 and ρ_1(x, v) < ρ_2(x, v) < . . . < ρ_{µ(x,v)}(x, v). Here the number of
intersections µ(x, v) is assumed to be finite for all x ∈ Ω and all v ∈ S. The mean value
interpolant then becomes
g(x) = (∫_S Σ_{j=1}^{µ(x,v)} (−1)^{j−1} f(p_j(x, v))/ρ_j(x, v) dv) / φ(x),    φ(x) = ∫_S Σ_{j=1}^{µ(x,v)} (−1)^{j−1}/ρ_j(x, v) dv    (3.11)
and does interpolate the data as shown in [13]. This paper also provides a boundary integral
formula for (3.11) in the case a parametric representation of the boundary is available.
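Formula (3.10) lends itself to a simple numerical approximation by sampling directions on the unit circle. The sketch below is an illustration for the unit disc, where ρ(x, v) has a closed form; it is not the boundary integral method of [13], and `mv_interp_disc` is a hypothetical name.

```python
import numpy as np

def mv_interp_disc(x, f, n_dirs=2000):
    # Approximate the transfinite mean value interpolant (3.10) on the open
    # unit disc by sampling directions v on the unit circle.  For the unit
    # disc, rho(x, v) = ||p(x, v) - x|| solves ||x + rho v|| = 1 in closed form.
    theta = np.linspace(0.0, 2.0 * np.pi, n_dirs, endpoint=False)
    v = np.stack([np.cos(theta), np.sin(theta)], axis=1)
    xv = v @ x
    rho = -xv + np.sqrt(xv**2 + 1.0 - x @ x)
    p = x + rho[:, None] * v              # boundary points p(x, v)
    phi = np.sum(1.0 / rho)               # discretised phi(x)
    return np.sum(f(p) / rho) / phi
```

With equally spaced directions the sums over v cancel exactly, so the discretisation inherits the linear precision of mean value interpolation up to rounding error.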
In [66] a transfinite interpolation scheme is developed by taking the transfinite version
of Shepard’s method (3.5), but integrating over a so-called polar dual of the boundary of
the domain. The creation of this dual contains an arbitrary non-negative scalar function.
Schaefer et al. show that for a specific choice of this function the interpolation scheme is
identical to mean value interpolation and for a different one it is equal to transfinite Wachspress interpolation. Moreover they show that the scheme is equivalent to the family of five-point coordinates presented in [27] for convex polygons, which implies that the scheme is
the transfinite extension of this family of barycentric coordinates for convex domains.
The paper [6] also discusses constructions of transfinite interpolation based on barycentric coordinates. In addition it compares these methods to PDE-based schemes and pays
attention to the early one from Gordon and Wixom [36] as well as other transfinite interpolation approaches.
3.3.3 Transfinite Hermite interpolation
As for scattered data, transfinite Lagrange interpolation of a given function is not sufficient for some applications; they need the derivative data to be interpolated too. In the papers
[22, 13] mean value interpolation was extended to a Hermite interpolation scheme for first
order derivative data. The idea is to let the interpolant g̃ : R^m → R be the sum

g̃(x) = g(x) + ψ(x) ĝ(x),    ψ(x) = 1/φ(x),
where g(x) is the mean value interpolant and φ(x) is the same as in (3.10). From the observation that ψ(x) converges to zero as x goes to the boundary and ψ(x) ≠ 0 for any x ∈ Ω, it follows that the function ĝ(x) can be chosen such that the Hermite interpolation conditions are fulfilled. This is achieved by letting ĝ(x) also be a mean value Lagrange interpolant, but to the data

ĝ(y) = (∂f/∂n(y) − ∂g/∂n(y)) / (∂ψ/∂n(y)),    y ∈ ∂Ω,
where ∂ψ/∂n denotes the directional derivative of ψ along n and

∂ψ/∂n(y) = 1/V_{m−1},    ∂g/∂n(y) = (1/V_{m−1}) ∫_D w(y, t) (f(s(t)) − f(y)) dt.
Here s : D ⊂ Rm−1 → ∂Ω is a parametrization of the boundary, Vm−1 is the volume of the
unit sphere in Rm−1 ,
w(x, t) = ((s(t) − x)·s⊥(t)) / ‖s(t) − x‖_2^{m+1},
and s⊥(t) = det(D_1 s(t), . . . , D_{m−1} s(t)) is normal to the tangent space at s(t), with

    det(w_1, . . . , w_{m−1}) =  | e_1  w_1^1  ⋯  w_{m−1}^1 |
                                 | e_2  w_1^2  ⋯  w_{m−1}^2 |
                                 |  ⋮    ⋮          ⋮       |
                                 | e_m  w_1^m  ⋯  w_{m−1}^m | ,

where e_i = (0, . . . , 0, 1, 0, . . . , 0)^T is the i-th unit vector, with i − 1 zeros before and m − i zeros after the 1, and D_i s(t) is the partial derivative of s with respect to t_i. The vector determinant in
R2 is the rotation through an angle of −π/2, det(a) = (a2 , −a1 ), and in R3 it is the cross
product det(a, b) = a × b [13]. Here the convention that s⊥ points outwards of the domain
is used.
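The formal determinant above can be evaluated by cofactor expansion along the column of unit vectors e_i; the sketch below recovers det(a) = (a_2, −a_1) in R^2 and the cross product in R^3. `generalized_cross` is a hypothetical name, not notation from the thesis.

```python
import numpy as np

def generalized_cross(*vectors):
    # Given m-1 vectors in R^m, return the vector det(w_1, ..., w_{m-1})
    # normal to their span, by cofactor expansion along the first column
    # (the unit vectors e_i) of the formal determinant.
    W = np.column_stack(vectors)           # shape (m, m-1)
    m = W.shape[0]
    result = np.empty(m)
    for i in range(m):
        minor = np.delete(W, i, axis=0)    # delete row i
        result[i] = (-1) ** i * np.linalg.det(minor)
    return result
```

For m = 2 this is the rotation through −π/2, and for m = 3 it agrees with `np.cross`.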
4
Overview of papers
Bézier clipping is quadratically convergent
Bézier clipping finds the intersections of two Bézier curves by repeatedly applying two subdivision steps alternately to each curve. These subdivision steps are chosen such that one can
guarantee that the curves do not intersect in the parameter intervals which are clipped away.
To locate the parameter values where the cut is applied, Bézier clipping uses the convex hull
property of Bézier curves in a clever way. This ensures that the quadratic convergence of the
control polygons to the curves is translated into a quadratically convergent intersection-finding algorithm. This quadratic convergence is proved for polynomial curves in Chapter 5. In
the original paper [69] polynomial and rational Bézier curves were discussed. However,
the analyzed rational method contained a flaw which could lead to undetected intersections.
Hence Chapter 5 contains only the polynomial case.
This paper was actually written after the spline intersection paper in Chapter 6. During the work on that paper we became aware that, although the quadratic convergence of Bézier clipping seemed to be commonly accepted, there existed no formal proof of it.
Computing intersections of planar spline curves
The recent paper [55] uses a zero of the control polygon of a spline function as an approximation to a root of the function, yielding a root finding algorithm with favourable properties; see also Section 2.3. In this paper we use and analyse a similar idea for the problem
of computing an intersection of two spline curves. Here an intersection of the two control
polygons is used as an approximation of an intersection of the two curves. In contrast to
the root finding case, two curves can have an intersection even if the control polygons do
not, which necessitates a preprocessing step ensuring that the curves will intersect close to a transversal intersection (an intersection where the curves have linearly independent first derivatives). The approach leads to a quadratically convergent, Newton-like method without
the need for a starting value.
Pointwise radial minimization
Mean value interpolation can be derived by using a radial linear function and applying the
mean value theorem, see Section 3.3. The paper in Chapter 7 shows that mean value interpolation can also be interpreted as the pointwise minimization of a radial energy function.
This approach easily generalizes to Hermite boundary data of any order. Here the energy
term is determined by integrating over derivatives of polynomials with odd degree. Chapter 7 analyzes this pointwise radial minimization with respect to existence, uniqueness and
cubic precision for first order derivative data. The corresponding analysis for derivative
data of general order is contained in Chapter 8. Unfortunately we have not been able to
formally prove the interpolation properties of the resulting function. However, numerical
experiments strongly suggest interpolation for domains in R2 and R3 .
Rational Hermite interpolation without poles
The lack of a proof for interpolation of the pointwise radial minimization method motivates
a study in its simplest setting, the univariate case with first order boundary conditions. As
the method assumes the data to be given at the boundary of a domain, Chapter 8 covers only
interpolation with data given at an even number of points. We extend this to any number
of points and derive an explicit form for the resulting rational interpolant. This form can
be considered barycentric-like. Univariate barycentric interpolation both for polynomials
and rationals has been studied over the last 25 years, examining the favourable properties
of this form. In Chapter 9 we also derive the barycentric formula for the rational Hermite
interpolant of a method introduced in [26]. We prove the interpolation property as well as
the lack of poles for both methods and perform some numerical experiments to examine
their behaviour.
Part II
Curve intersection in the plane
5
Bézier clipping is quadratically convergent
Christian Schulz
Computer Aided Geometric Design, Volume 26, Issue 1, January 2009, Pages 61-74
Abstract
In 1990 Sederberg et al. introduced Bézier clipping as a new method to determine
the intersections of two Bézier curves in the plane. The method utilizes the convex hull
property of Bézier curves. In experiments a quadratic convergence rate was observed at
transversal intersections, the equivalent of simple roots of functions, but no formal proof
for this has been provided. In this paper we formally prove the quadratic convergence
rate for polynomial Bézier curves. Bézier clipping bounds one of the curves by a region
along a line. We also discuss the usefulness of arbitrary lines for creating these so-called ’fat lines’, leading to two general classes of fat lines which both give quadratic
convergence.
Keywords: Bézier clipping, Bézier curves, curve intersection, quadratic convergence
5.1 Introduction
The problem of computing the intersections of two parametric curves is a widespread and
quite fundamental problem in the field of Geometric Modeling. Different methods have
been developed to solve the task, ranging from Newton-like methods and subdivisional
techniques to (approximate) implicitization (for an overview see [20]). One widely used, fast
and robust method is Bézier clipping. This method was introduced by Sederberg et al. in
[72] and consists of clipping away regions of the curves that are guaranteed to not intersect.
In [72] a quadratic convergence rate for Bézier clipping is mentioned, but not proved. This
paper closes this gap in the theory for polynomial Bézier curves.
Figure 5.1: The two stages of Bézier clipping. a) Computation of the fat line and d(t). b) Clipping computed from d(t).
In Section 5.2 we recapitulate Bézier clipping followed by a short section of background
material about splines and Bézier curves. In Section 5.4 we then show the quadratic convergence rate of Bézier clipping whereas the usage of arbitrary ’fat lines’ is discussed in
Section 5.5. In Section 5.6 two general classes of ’fat lines’ are introduced, which both provide quadratic convergence. The paper finishes with some numerical examples in Section
5.7.
5.2 Bézier Clipping
In order to compute an intersection between two polynomial Bézier curves f and g, Bézier
clipping first computes a region defined by two parallel lines for one of the curves, e.g. f , that
completely encloses this curve. We call this region a ’fat line’. Then the distance function
of the other curve g to this fat line is used to compute regions where g is guaranteed to not
intersect the fat line and hence can not intersect f . The process is then repeated with a fat
line around g and clipping f and so on. This provides a sequence of parameter intervals in
which the two curves may have an intersection. The lengths of these intervals decrease to
zero and hence the intersections of the two curves are located.
Let f(s) = Σ_{i=0}^m b_i B^m_{i,[α,β]}(s) be the first and g(t) = Σ_{j=0}^n c_j B^n_{j,[γ,δ]}(t) be the second polynomial Bézier curve with b_i, c_j ∈ R^2 for all i, j and

B^m_{i,[α,β]}(s) = (1/(β − α)^m) C(m, i) (β − s)^{m−i} (s − α)^i    (C(m, i) the binomial coefficient),
a general Bernstein basis polynomial of degree m. We will now show how to compute the
fat line around f . Denote by L the line defined by b0 and bm , the first and last control point
of f , as shown in Figure 5.1a. The line L can be defined by its implicit equation
L = {x | distL (x) = 0},
distL (x) = n·(x − b0 ),
where n is its unit normal vector and distL (x) the signed distance from a point x to L. Let
d̃_min := min_{0≤i≤m} dist_L(b_i) ≤ 0    and    d̃_max := max_{0≤i≤m} dist_L(b_i) ≥ 0.
  m = 2:  µ_f = 1/2
  m = 3, dist_L(b_1) dist_L(b_2) > 0:  µ_f = 3/4
  m = 3, dist_L(b_1) dist_L(b_2) ≤ 0:  µ_f = 4/9
  else:  µ_f = 1

Table 5.1: Values for µ_f.
As dist_L(f(s)) = Σ_{i=0}^m dist_L(b_i) B^m_{i,[α,β]}(s), the curve f(s) is completely contained in the fat line given by {x | d̃_min ≤ dist_L(x) ≤ d̃_max} along L, as illustrated in Figure 5.1a. Actually, in [72] it is shown that the better bounds d_min := µ_f d̃_min and d_max := µ_f d̃_max with the values 0 < µ_f ≤ 1 from Table 5.1 are sufficient.
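The fat line construction can be sketched as follows; `fat_line` is a hypothetical helper, not code from the paper, and it assumes b_0 ≠ b_m so that the line L is well defined.

```python
import math

def fat_line(b):
    # Fat line around a Bézier curve with control points b[0..m]:
    # the line L through b[0] and b[m], signed distances of all control
    # points to L, and the bounds scaled by mu_f (Table 5.1).
    m = len(b) - 1
    dx, dy = b[m][0] - b[0][0], b[m][1] - b[0][1]
    length = math.hypot(dx, dy)
    n = (-dy / length, dx / length)        # unit normal of L
    d = [n[0] * (p[0] - b[0][0]) + n[1] * (p[1] - b[0][1]) for p in b]
    if m == 2:
        mu = 0.5
    elif m == 3:
        mu = 0.75 if d[1] * d[2] > 0 else 4.0 / 9.0
    else:
        mu = 1.0
    return n, mu * min(d), mu * max(d)
```

The returned bounds (d_min, d_max) define the strip {x | d_min ≤ dist_L(x) ≤ d_max} that contains the whole curve.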
In order to clip g against the fat line we first compute the distance function d(t) = dist_L(g(t)) = Σ_{i=0}^n d_i B^n_{i,[γ,δ]}(t) with d_i = dist_L(c_i). Observe that t = Σ_{i=0}^n t*_i B^n_{i,[γ,δ]}(t) with t*_i = ((n − i)γ + iδ)/n. As the value of a polynomial Bézier curve is a convex combination of its control points, we know that the curve (t, d(t))^T lies in the convex hull of the control points (t*_i, d_i)^T with i = 0, . . . , n. Hence computing the intersections of this convex hull with the lines d = d_min and d = d_max as shown in Figure 5.1b provides us with the intervals [γ, t̲) and (t̄, δ] where we are guaranteed to have d(t) < d_min or d(t) > d_max and hence the curve g does not intersect f. Splitting g(t) at t̲ and t̄ provides the new interval [t̲, t̄] where g might intersect f. If the whole convex hull lies above d_max or below d_min, the curves can not have an intersection and the computation is stopped.
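The clipping step, intersecting the convex hull of the points (t*_i, d_i)^T with the strip d_min ≤ d ≤ d_max, can be sketched as follows. `clip_interval` is a hypothetical helper; it builds the hull with Andrew's monotone chain and returns None when the hull misses the strip.

```python
def clip_interval(ts, ds, dmin, dmax):
    # t-range of the intersection of the convex hull of (t_i*, d_i)
    # with the strip dmin <= d <= dmax, or None if they do not meet.
    pts = sorted(zip(ts, ds))
    def half(points):                      # one chain of the monotone hull
        chain = []
        for p in points:
            while len(chain) >= 2:
                (x1, y1), (x2, y2) = chain[-2], chain[-1]
                if (x2 - x1) * (p[1] - y1) - (y2 - y1) * (p[0] - x1) <= 0:
                    chain.pop()
                else:
                    break
            chain.append(p)
        return chain
    hull = half(pts)[:-1] + half(pts[::-1])[:-1]
    cand = [t for t, d in hull if dmin <= d <= dmax]   # vertices in the strip
    m = len(hull)
    for i in range(m):
        (t1, d1), (t2, d2) = hull[i], hull[(i + 1) % m]
        for level in (dmin, dmax):
            if (d1 - level) * (d2 - level) < 0:        # edge crosses the line
                cand.append(t1 + (level - d1) * (t2 - t1) / (d2 - d1))
    return (min(cand), max(cand)) if cand else None
```

The returned pair plays the role of the new interval for g; everything outside it is guaranteed intersection-free.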
In the next step f will be clipped against the fat line for the interval [t̲, t̄] of the split g, and so on. In some cases the ’new’ interval [t̲, t̄] will be identical to the old or, generally speaking, will not be ’small enough’, e.g. when both end points of g lie in the fat line around f. In this situation the curve g is not clipped and instead we clip f against the fat line around g. However, sometimes neither clipping g against f nor f against g provides a ’good enough’ clip. In this case, the curve with the larger parameter interval is subdivided at the centre. Then Bézier clipping is recursively used to find the intersections between the unsubdivided curve and both parts of the subdivided curve. This is called a subdivision step and allows for the computation of an arbitrary number of intersections. The heuristic suggested in [72] deems a clip ’not good enough’ if t̄ − t̲ is larger than 80% of the old interval. When Bézier clipping is applied to a pair of curves, we suggest starting with clipping the curve with the larger parameter interval against the fat line around the curve with the shorter parameter interval.
In each step either the interval over which f or the interval over which g is considered decreases. Due to the use of subdivision steps, this decrease is at least linear, and therefore the lengths of both intervals converge to zero. By construction it is guaranteed that in the limit at least one intersection is contained in both intervals and therefore located. As every subdivision step creates essentially two independent Bézier clipping tasks, the whole process is able to locate all intersections between two curves. Note that here, as in the rest of the paper, we assume that the two curves only intersect in single points and not over whole parameter intervals.
It is also possible to use other lines than L to determine the fat line. We discuss this in Sections 5.5 and 5.6.
5.3 Background Material
The following facts are well known and are just restated here for convenience as they will be
needed to establish the quadratic convergence rate. For an introduction to convergence and
convergence rate we direct the interested reader to [34]. A Bézier function is also a spline,
hence we know from classical spline literature [23, 52, 71] that the distance of a coefficient b_i to the function f(s) = Σ_{i=0}^m b_i B^m_{i,[α,β]}(s) can be bounded quadratically by

|b_i − f(s*_i)| ≤ h_f^2 Υ_m P_f,    P_f := max_{s∈[α,β]} |f''(s)|,

where h_f := β − α and Υ_m > 0 is a constant depending only on m. Extending this bound to parametric curves in R^2 gives

‖b_i − f(s*_i)‖_2 ≤ h_f^2 √2 Υ_m P_f.
The tangent T_{σ,f} of the function f at the point s = σ is defined by

T_{σ,f}(s) := f(σ) + (s − σ) f'(σ)

for all s ∈ [α, β]. From standard Taylor expansion of the function f around σ we get |T_{σ,f}(s) − f(s)| ≤ h_f^2 P_f/2. With R_f := P_f(Υ_m + 1/2), this provides the following bound on the distance between the tangent and the coefficients,

|b_i − T_{σ,f}(s*_i)| ≤ h_f^2 P_f(Υ_m + 1/2) = h_f^2 R_f.    (5.1)

The latter bound can of course be extended to parametric curves with f(s) ∈ R^2, yielding

‖b_i − T_{σ,f}(s*_i)‖_2 ≤ h_f^2 √2 R_f.    (5.2)
In Section 5.4 we will need an extension of the concept of simple roots to the intersection
of two Bézier curves. An intersection f(σ) = g(τ) is called a transversal intersection if

det(f'(σ), g'(τ)) ≠ 0,

which is equivalent to the derivatives being neither zero nor parallel at the intersection.
5.4 Convergence Rate
We will first consider the functional case and establish quadratic convergence of Bézier
clipping as a root finding algorithm (at simple zeros). Here the function
g(t) = Σ_{i=0}^n c_i B^n_{i,[γ,δ]}(t)
is clipped against the x-axis, and t̲ and t̄ are the two intersections of the x-axis with the convex hull of (t*_i, c_i)^T, i = 0, . . . , n, as shown in Figure 5.2. The proof is similar to the one used in [4] to show cubic convergence for quadratic clipping.

Figure 5.2: Illustration of the proof concept for quadratic convergence of Bézier clipping as root finding.
Proposition 5.1 Suppose g(t) has a simple root at τ, so g'(τ) ≠ 0, and let [t̲, t̄] be the new interval for g after the clipping step. Then

t̄ − t̲ ≤ 2 h_g^2 R_g / |g'(τ)|,

where h_g = δ − γ and R_g is as defined in Section 5.3.
Proof. We create a stripe around the tangent T_{τ,g} at g(τ) bounded by the two lines

T^−(t) := T_{τ,g}(t) − h_g^2 R_g    and    T^+(t) := T_{τ,g}(t) + h_g^2 R_g,

as illustrated in Figure 5.2. From (5.1) we get T^−(t*_i) ≤ c_i ≤ T^+(t*_i) and hence the control points (t*_i, c_i)^T, i = 0, . . . , n, lie in the region enclosed by T^−(t) and T^+(t). Solving T^−(t^−) = 0 and T^+(t^+) = 0 gives

t^− = τ + h_g^2 R_g/g'(τ),    t^+ = τ − h_g^2 R_g/g'(τ).

Clearly the sign of g'(τ) determines whether t^+ or t^− is largest. In addition we know that both t̲ and t̄ must lie between those values. Thus for g'(τ) > 0 we get t^+ ≤ t̲ ≤ t̄ ≤ t^− and for g'(τ) < 0 we have t^− ≤ t̲ ≤ t̄ ≤ t^+. Hence

t̄ − t̲ ≤ |t^− − t^+| = 2 h_g^2 R_g/|g'(τ)|.    □
The rest of this section discusses the convergence rate of Bézier clipping for curves.
We consider two curves f (s), s ∈ [α̃, β̃] and g(t), t ∈ [γ̃, δ̃] which have an intersection
f (σ) = g(τ ). In the process of Bézier clipping the curves f and g are subdivided and hence
defined over [α, β] ⊆ [α̃, β̃] and [γ, δ] ⊆ [γ̃, δ̃]. Let bi , i = 0, . . . , m be the control points
of the curve f defined over [α, β] and ci , i = 0, . . . , n the control points of g defined over
[γ, δ]. In addition let
hf := β − α
hg := δ − γ
and
be the lengths of the current intervals. Note that we are only interested in intervals where
σ ∈ [α, β], τ ∈ [γ, δ] and observe that hf and hg decrease and that
0 ≤ Pf = max kf 00 (s)k2 ≤ max kf 00 (s)k2 =: P̃f
s∈[α,β]
s∈[α̃,β̃]
and hence
0 ≤ Rf ≤ P̃f (Υm + 1/2) =: R̃f .
In the rest of the paper we will often state that something is true for small enough hf or
that there exists an Hf > 0 such that for all hf < Hf something is true. This means that the
statement is true if the interval over which the corresponding curve is currently considered
contains the intersection and its length is small enough or smaller than some bound. As the
lengths of the intervals go to zero as the intervals converge to intersections, this will always
be the case after some time (assuming there is an intersection).
To extend the result of Proposition 5.1 to Bézier clipping of curves we use a more general
lemma, where we consider fat lines computed by an arbitrary rule. To make the concept of
an arbitrary rule precise, we introduce the following definition.
Definition 5.2 Given the Bézier curve f over the interval [α, β] with the control points b_i, i = 0, . . . , m, a fat line rule for the curve f is a rule that determines a line Y, called the orientation line, which converges to a line Z, called the limit line, as h_f goes to zero. The corresponding fat line is then defined by

{x | d_min ≤ dist_Y(x) ≤ d_max}

where d_min = λ_f min_{i=0,...,m} dist_Y(b_i), d_max = λ_f max_{i=0,...,m} dist_Y(b_i) and 0 < λ_f ≤ 1 is a constant defined by the rule such that f is completely contained in the fat line.
As an example one can think of the fat line rule described in Section 5.2 where the
orientation line L is determined by the end control points, λf = µf and, as we will show
later, the limit line is the tangent Tσ,f .
The distance function d of g to the orientation line changes in the process of Bézier clipping. However, the value of P_d, and with it R_d, can be bounded. Let n be the unit normal vector of the orientation line. Then from |d''(t)| = |n·g''(t)| ≤ ‖g''(t)‖_2 it follows that P_d ≤ P_g ≤ P̃_g and

R_d ≤ P̃_g(Υ_n + 1/2) = R̃_g.    (5.3)
We now have the ingredients for the following general lemma. Note that for us an angle of zero degrees between two lines means that the lines are either identical or parallel.
Figure 5.3: Illustration of the proof concept for quadratic convergence of Bézier clipping for curves. a) Angles between T_{τ,g}, Y and Z. b) Bounding [t̲, t̄].
Lemma 5.3 Suppose the two Bézier curves f and g have an intersection f(σ) = g(τ). Suppose also that the curve g is clipped against a fat line determined by a fat line rule for f with orientation line Y and limit line Z. Moreover suppose ‖g'(τ)‖_2 ≠ 0 and let ϑ ∈ (0°, 90°] be the angle between the tangent T_{τ,g} and Z. Then for small enough h_f the new interval [t̲, t̄] for g can be bounded by

t̄ − t̲ < (d_max − d_min)/(‖g'(τ)‖_2 A_ϑ) + h_g^2 · 2R̃_g/(‖g'(τ)‖_2 A_ϑ),

where A_ϑ = sin(ϑ/2) > 0.
Proof. Let n be the unit normal vector of Y. The distance function of g to the line Y is defined by d(t) = n·(g(t) − b) = Σ_{i=0}^n d_i B^n_{i,[γ,δ]}(t) with d_i = dist_Y(c_i) and b a point on Y. We will first show that in the given setting d'(τ) ≠ 0 for small enough h_f. Let φ ∈ [0°, 90°] be the angle between T_{τ,g} and Y as shown in Figure 5.3a. As Y converges to Z, the angle φ converges to ϑ. Hence there exists an H_f > 0 such that for all h_f < H_f we have φ > ϑ/2 > 0 and thus 0 < A_ϑ := sin(ϑ/2) < sin(φ) ≤ 1. Observe that the angle ρ between g'(τ) and n is either ρ = 90° + φ or ρ = 90° − φ. Using this we can bound the derivative of the distance function at the intersection,

|d'(τ)| = ‖g'(τ)‖_2 |cos(90° ± φ)| = ‖g'(τ)‖_2 sin(φ) > ‖g'(τ)‖_2 A_ϑ > 0.

As d'(τ) ≠ 0 we can now use a similar approach as in the proof of Proposition 5.1. The distance function d(t) can again be bounded by two lines

T^−(t) := T_{τ,d}(t) − h_g^2 R_d    and    T^+(t) := T_{τ,d}(t) + h_g^2 R_d,

as shown in Figure 5.3b. Observe that h_d = h_g and that d(τ) does not need to be zero; in fact, in general it will not be. Nevertheless, we still have T^−(t*_i) ≤ d_i ≤ T^+(t*_i) and hence
46
CHAPTER 5. BÉZIER CLIPPING IS QUADRATICALLY CONVERGENT
the convex hull of the points $(t_i^*, d_i)^T$, $i = 0, \dots, n$, lies in the region defined by $T^-(t)$ and $T^+(t)$. Solving $T^\pm(t^\pm_{\min}) = d_{\min}$ and $T^\pm(t^\pm_{\max}) = d_{\max}$ gives
$$t^\pm_{\min} = \tau + \frac{d_{\min} - d(\tau) \mp h_g^2 R_d}{d'(\tau)}, \qquad t^\pm_{\max} = \tau + \frac{d_{\max} - d(\tau) \mp h_g^2 R_d}{d'(\tau)}.$$
The sign of $d'(\tau)$ determines whether $t^\pm_{\min}$ is larger than $t^\pm_{\max}$ or not, as well as whether $t^-_{\max}$ or $t^+_{\max}$ has the higher value. Thus for $d'(\tau) > 0$ we get $t^+_{\min} < t^-_{\min} \le t^-_{\max}$ and $t^+_{\min} \le t^+_{\max} < t^-_{\max}$, and hence $\overline{t} - \underline{t} \le t^-_{\max} - t^+_{\min}$. Similarly, for $d'(\tau) < 0$ we get $t^-_{\max} < t^+_{\max} \le t^+_{\min}$ and $t^-_{\max} \le t^-_{\min} < t^+_{\min}$, and therefore $\overline{t} - \underline{t} \le t^+_{\min} - t^-_{\max}$. Thus
$$\overline{t} - \underline{t} \le t^-_{\max} - t^+_{\min} < \frac{d_{\max} - d_{\min}}{\|g'(\tau)\|_2 A_\vartheta} + h_g^2 \frac{2R_d}{\|g'(\tau)\|_2 A_\vartheta}.$$
By (5.3) we have $R_d \le \tilde{R}_g$. $\Box$
With the help of Lemma 5.3 we can now establish the quadratic convergence of Bézier clipping as described in Section 5.2.
Theorem 5.4 Suppose the two Bézier curves $f$ and $g$ have a transversal intersection $f(\sigma) = g(\tau)$. Suppose also that the curve $g$ is clipped against the fat line determined by the fat line rule described in Section 5.2. Then there exists a constant $0 < A_\vartheta < 1$ depending on the angle between $T_{\sigma,f}$ and $T_{\tau,g}$, such that for small enough $h_f$ the new interval $[\underline{t}, \overline{t}]$ for $g$ can be bounded by
$$\overline{t} - \underline{t} < h_f^2 \frac{4\sqrt{2}\tilde{R}_f}{\|g'(\tau)\|_2 A_\vartheta} + h_g^2 \frac{2\tilde{R}_g}{\|g'(\tau)\|_2 A_\vartheta}.$$
Proof. We first show that $T_{\sigma,f}$ is the limit line of the orientation line $L$ determined by the endpoints $b_0$ and $b_m$. Observe that both $b_0$ and $b_m$ are points on the curve $f$. Hence, according to the mean value theorem, there is a $\xi \in [\alpha, \beta]$ such that the line $L$ is parallel to the tangent $T_{\xi,f}$. As $\alpha$ and $\beta$ converge to $\sigma$ when $h_f$ tends to zero, $\xi$ must converge to $\sigma$ as well and $b_0$ must tend to $f(\sigma)$. Hence the line $L$ converges to $T_{\sigma,f}$ as $h_f$ tends to zero.

As $f(\sigma) = g(\tau)$ is a transversal intersection, the angle between $T_{\sigma,f}$ and $T_{\tau,g}$ is greater than zero and $\|g'(\tau)\|_2 \neq 0$, and hence we can apply Lemma 5.3. To finish the proof we need to bound the width of the fat line, $d_{\max} - d_{\min}$. For this we define a parametrization of $L$,
$$L(s) = \frac{\beta - s}{\beta - \alpha}\, b_0 + \frac{s - \alpha}{\beta - \alpha}\, b_m.$$
This enables us to bound the distance between $L$ and $T_{\sigma,f}$ using (5.2) by
$$\|T_{\sigma,f}(s_i^*) - L(s_i^*)\|_2 \le \frac{m-i}{m}\|T_{\sigma,f}(s_0^*) - b_0\|_2 + \frac{i}{m}\|T_{\sigma,f}(s_m^*) - b_m\|_2 \le h_f^2 \sqrt{2} R_f.$$
From this we obtain
$$|\operatorname{dist}_L(b_i)| \le \|b_i - T_{\sigma,f}(s_i^*)\|_2 + \|T_{\sigma,f}(s_i^*) - L(s_i^*)\|_2 \le h_f^2\, 2\sqrt{2}\tilde{R}_f$$
for $i = 0, \dots, m$, which yields for the width of the fat line
$$d_{\max} - d_{\min} \le |d_{\max}| + |d_{\min}| \le 2\max_{0\le i\le m}|\operatorname{dist}_L(b_i)| \le h_f^2\, 4\sqrt{2}\tilde{R}_f.$$
Using this in Lemma 5.3 provides the desired bound. $\Box$
Theorem 5.4 shows that once $h_f$ and $h_g$ have become small enough, every clipping step will result in a quadratic decrease of $h_f$ or $h_g$. It also tells us that for small enough $h_f$ and $h_g$ there cannot be a subdivision step: in the case $h_f \le h_g$, when clipping $g$ against $f$, the length of the interval $[\underline{t}, \overline{t}]$ is bounded from above by $h_g^2$ times a constant. For small enough $h_g$ this will result in a real clipping step. Similarly, in the case $h_g \le h_f$, when clipping $f$ against $g$, the length of the new interval is bounded by $h_f^2$ times a constant, yielding a real clipping step for small enough $h_f$. But a subdivision step occurs only when neither clipping $g$ against $f$ nor clipping $f$ against $g$ is successful. Hence for small enough $h_f$ and $h_g$ there will be no more subdivision steps, which leads to the conclusion that Bézier clipping is quadratically convergent.
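The quadratic decrease of the fat line width that drives this result is easy to observe numerically. The following sketch is my own illustration, not code from the thesis: it repeatedly splits a made-up planar cubic with de Casteljau's algorithm and measures the width of the fat line based on the line through the first and last control point. Halving the interval shrinks the width by a factor of roughly four, as the bound $d_{\max} - d_{\min} \le h_f^2\,4\sqrt{2}\tilde{R}_f$ predicts.

```python
import math

def de_casteljau_split(pts, t=0.5):
    """Split a Bezier curve at parameter t; returns the control points
    of the two halves (de Casteljau's algorithm)."""
    left, right = [pts[0]], [pts[-1]]
    work = list(pts)
    while len(work) > 1:
        work = [tuple((1 - t) * a + t * b for a, b in zip(p, q))
                for p, q in zip(work, work[1:])]
        left.append(work[0])
        right.append(work[-1])
    return left, right[::-1]

def fat_line_width(pts):
    """Width d_max - d_min of the signed distances from the control points
    to the orientation line L through the first and last control point."""
    (x0, y0), (x1, y1) = pts[0], pts[-1]
    dx, dy = x1 - x0, y1 - y0
    norm = math.hypot(dx, dy)
    nx, ny = -dy / norm, dx / norm          # unit normal of L
    d = [nx * (x - x0) + ny * (y - y0) for x, y in pts]
    return max(d) - min(d)

# a made-up planar cubic; each split halves the parameter interval (h_f -> h_f/2)
pts = [(0.0, 0.0), (1.0, 2.0), (2.0, -2.0), (3.0, 0.0)]
widths = []
for _ in range(5):
    widths.append(fat_line_width(pts))
    pts, _ = de_casteljau_split(pts)
# quadratic decay: halving h_f shrinks the width by roughly 2^2 = 4
ratios = [a / b for a, b in zip(widths, widths[1:])]
```

The ratios hovering around four are exactly the quadratic rate of the width bound; the later ratios deviate slightly because the constant $\tilde{R}_f$ changes with the subcurve.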
5.5 Arbitrary Fat Line Rules
Lemma 5.3 provides us with some insight into the properties of Bézier clipping. In order
to get quadratic convergence we need dmax − dmin , the width of the fat line, to go to zero
quadratically. Although it might be possible to obtain an upper bound without using this
property, it seems quite natural that if the width of the fat line does not go to zero quadratically, then neither will the algorithm be quadratically convergent. Proposition 5.5 below
strengthens this argument: it shows how the convergence rate of Bézier clipping depends on the convergence rate of the width of the fat line. The only case where Proposition 5.5 does not apply is if one can guarantee that, after some time, the standard case where $d_0$ and $d_n$ lie outside the fat line appears at most a constant number of times between the other cases. Providing such a guarantee for a particular fat line rule is likely to be difficult, if not impossible.
In Proposition 5.6 it is shown that the convergence rate of the width of the fat line in
turn can only be quadratic if the limit line is parallel to the tangent at the intersection of the
curve. Hence for quadratic convergence of Bézier clipping the orientation line of a fat line
rule must converge to a line parallel to the tangent line with quadratically decreasing width.
The fat line rule described in Section 5.2 fulfills these properties and is easy to compute.
In contrast, using a fat line rule with the line perpendicular to $L$ as orientation line, as also suggested in [72], will not lead to a quadratic convergence rate, although it might provide a better clip in the beginning.
Proposition 5.5 Suppose the two Bézier curves $f$ and $g$ have an intersection $f(\sigma) = g(\tau)$. Suppose also that the curve $g$ is clipped against a fat line determined by a fat line rule for $f$ with orientation line $Y$ and limit line $Z$. Moreover, suppose $\|g'(\tau)\|_2 \neq 0$ and let $\vartheta \in (0^\circ, 90^\circ]$ be the angle between the tangent $T_{\tau,g}$ and $Z$. Then there exists an $H_f > 0$
[Figure 5.4: Illustration of the proof concept of Proposition 5.5: the control points $d_0, \dots, d_3$ of the distance function, the value $d(\tau)$, the lines $H_0$ and $H_n$, and the parameters $\underline{\zeta}$ and $\overline{\zeta}$ bounding the new interval.]
and an $H_g > 0$ such that for all $h_f < H_f$ and $h_g < H_g$ where $d_0 < d_{\min}$ and $d_n > d_{\max}$, or $d_0 > d_{\max}$ and $d_n < d_{\min}$, the new interval can be bounded by
$$\overline{t} - \underline{t} > \frac{d_{\max} - d_{\min}}{\|g'(\tau)\|_2 (1 + A_\vartheta)},$$
where $A_\vartheta = \sin(\vartheta/2) > 0$ is the same constant as in Lemma 5.3.
Proof. We will use two lines $H_0$ and $H_n$ inside the convex hull of the control points of the distance function, as shown in Figure 5.4, to compute a lower bound for the new interval. As in the proof of Lemma 5.3 we can deduce that in the given situation $0 < \|g'(\tau)\|_2 A_\vartheta < |d'(\tau)| \le \|g'(\tau)\|_2$ for the same constant $A_\vartheta = \sin(\vartheta/2)$ and small enough $h_f$.
Observe that $d_0 = d(t_0^*)$ and $d_n = d(t_n^*)$. Let $H_0$ be the line defined by $d_0$ and $d(\tau)$, and $H_n$ the line defined by $d(\tau)$ and $d_n$, as illustrated in Figure 5.4,
$$H_0(t) = d(\tau) + (t - \tau)\underbrace{\frac{d(\tau) - d(t_0^*)}{\tau - t_0^*}}_{=:E_1}, \qquad H_n(t) = d(\tau) + (t - \tau)\underbrace{\frac{d(t_n^*) - d(\tau)}{t_n^* - \tau}}_{=:E_2}.$$
From numerical analysis (see for example [34, 42]) we know that $E_1$ is an approximation to the derivative $d'(\tau)$ with error $(\tau - \xi)d''(\eta)$ where $\xi, \eta \in [t_0^*, \tau]$, thus
$$|E_1 - d'(\tau)| \le h_g P_d \le h_g \tilde{R}_g \quad\text{and}\quad |E_2 - d'(\tau)| \le h_g P_d \le h_g \tilde{R}_g.$$
Let $0 < \epsilon < \|g'(\tau)\|_2 A_\vartheta$. Then there exists an $H_g > 0$ such that for all $h_g < H_g$ we have
$$|E_1 - d'(\tau)| < \epsilon, \qquad |E_2 - d'(\tau)| < \epsilon.$$
From this it follows that
$$0 < d'(\tau) - \epsilon < E_1 < d'(\tau) + \epsilon < \|g'(\tau)\|_2(1 + A_\vartheta) \qquad \text{if } d'(\tau) > 0,$$
$$-\|g'(\tau)\|_2(1 + A_\vartheta) < d'(\tau) - \epsilon < E_1 < d'(\tau) + \epsilon < 0 \qquad \text{if } d'(\tau) < 0.$$
Hence we have $0 < |E_1| < K_3 := \|g'(\tau)\|_2(1 + A_\vartheta)$ and similarly $0 < |E_2| < K_3$. For the case $d'(\tau) > 0$ we have $E_1, E_2 > 0$ and therefore $d_0 \le d(\tau) \le d_n$. Unfortunately we are not able to create a lower bound in this situation if $d_{\min} \le d_0$ or $d_{\max} \ge d_n$. Hence these cases are excluded in the proposition. Suppose therefore that $d_0 < d_{\min}$ and $d_n > d_{\max}$.
Solving $H_0(\underline{\zeta}) = d_{\min}$ and $H_n(\overline{\zeta}) = d_{\max}$ gives
$$\underline{\zeta} = \tau - \bigl(d(\tau) - d_{\min}\bigr)\frac{1}{E_1} \quad\text{and}\quad \overline{\zeta} = \tau + \bigl(d_{\max} - d(\tau)\bigr)\frac{1}{E_2}.$$
As $H_0(t)$ for $t \in [t_0^*, \tau]$ and $H_n(t)$ for $t \in [\tau, t_n^*]$ lie in the convex hull of $(t_i^*, d_i)^T$, $i = 0, \dots, n$, we have $t_0^* \le \underline{t} \le \underline{\zeta} \le \tau \le \overline{\zeta} \le \overline{t} \le t_n^*$ and
$$\overline{t} - \underline{t} \ge \overline{\zeta} - \underline{\zeta} = \bigl(d_{\max} - d(\tau)\bigr)\frac{1}{E_2} + \bigl(d(\tau) - d_{\min}\bigr)\frac{1}{E_1} > \frac{1}{K_3}\bigl(d_{\max} - d_{\min}\bigr).$$
The case $d'(\tau) < 0$ is similar, with $E_1, E_2 < 0$, and we assume $d_0 > d_{\max}$ and $d_n < d_{\min}$. This leads to
$$\underline{\zeta} = \tau - \bigl(d_{\max} - d(\tau)\bigr)\frac{1}{|E_1|} \quad\text{and}\quad \overline{\zeta} = \tau + \bigl(d(\tau) - d_{\min}\bigr)\frac{1}{|E_2|}$$
and thus
$$\overline{t} - \underline{t} \ge \overline{\zeta} - \underline{\zeta} = \bigl(d(\tau) - d_{\min}\bigr)\frac{1}{|E_2|} + \bigl(d_{\max} - d(\tau)\bigr)\frac{1}{|E_1|} > \frac{1}{K_3}\bigl(d_{\max} - d_{\min}\bigr). \qquad\Box$$
Proposition 5.6 Suppose the fat line for $f$ is computed according to a rule with orientation line $Y$ and limit line $Z$. Suppose also that $Z$ is not parallel to $T_{\sigma,f}$. Then there exists an $H_f > 0$ such that for all $h_f < H_f$ and for curves $f$ of degree $m \ge 2$
$$d_{\max} - d_{\min} > h_f K_4,$$
where $K_4 > 0$ is a constant that depends on the factor $\lambda_f$ of the fat line rule, $f'(\sigma)$, $m$ and the angle between $Z$ and $T_{\sigma,f}$.
Proof. Let $\hat{Y}(s)$ be the line parallel to $Y$ going through the intersection point $f(\sigma)$ and let $d_{\hat{Y}}$ be the distance from $\hat{Y}$ to $Y$. Then $\operatorname{dist}_Y(p) = \operatorname{dist}_{\hat{Y}}(p) + d_{\hat{Y}}$ and hence with
$$d_{\min,\hat{Y}} = \min_{0\le i\le m} \operatorname{dist}_{\hat{Y}}(b_i) \quad\text{and}\quad d_{\max,\hat{Y}} = \max_{0\le i\le m} \operatorname{dist}_{\hat{Y}}(b_i)$$
we have
$$d_{\max} - d_{\min} = \lambda_f \bigl(d_{\max,\hat{Y}} - d_{\min,\hat{Y}}\bigr).$$
Therefore it is enough to show that $d_{\max,\hat{Y}} - d_{\min,\hat{Y}} > h_f K_5$ where $K_5$ is a constant and $K_4 := \lambda_f K_5$.
We will start by bounding the distance of some control points to the line $\hat{Y}$. Let $\rho \in [0^\circ, 90^\circ]$ be the angle between $Y$ and the tangent $T_{\sigma,f}$ and $\psi \in (0^\circ, 90^\circ]$ be the angle between $Z$ and $T_{\sigma,f}$. As $Y$ converges to $Z$ there must be an $H_f > 0$ such that $\rho > \psi/2 > 0$ for all $h_f < H_f$. Then for $i = 0, \dots, m$ we have
$$\operatorname{dist}_{\hat{Y}}(b_i) = \|\hat{Y}(\tilde{s}) - b_i\|_2 \ge \|\hat{Y}(\tilde{s}) - T_{\sigma,f}(s_i^*)\|_2 - \|T_{\sigma,f}(s_i^*) - b_i\|_2.$$
Let the integer $j$ be such that $\sigma \in [s_j^*, s_{j+1}^*]$. By a simple trigonometric argument we get for $i = 0, \dots, j-1, j+2, \dots, m$ that
$$\min_s \|T_{\sigma,f}(s_i^*) - \hat{Y}(s)\| = |s_i^* - \sigma|\,\|f'(\sigma)\|_2 \sin(\rho) > \frac{h_f}{m}\|f'(\sigma)\|_2 \sin(\psi/2).$$
Let $0 < K_5 < \frac{1}{m}\|f'(\sigma)\|_2 \sin(\psi/2)$ and let $H_f > 0$ be so small that for all $h_f < H_f$
$$0 < K_5 < \frac{1}{m}\|f'(\sigma)\|_2 \sin(\psi/2) - h_f\sqrt{2}\tilde{R}_f.$$
For all $h_f < H_f$ and $i = 0, \dots, j-1, j+2, \dots, m$ the inequalities in combination with (5.2) then yield
$$\|b_i - T_{\sigma,f}(s_i^*)\|_2 \le h_f^2\sqrt{2}\tilde{R}_f < \frac{h_f}{m}\|f'(\sigma)\|_2 \sin(\psi/2) < \|T_{\sigma,f}(s_i^*) - \hat{Y}(\tilde{s})\|_2$$
and
$$\operatorname{dist}_{\hat{Y}}(b_i) \ge \|\hat{Y}(\tilde{s}) - T_{\sigma,f}(s_i^*)\|_2 - \|T_{\sigma,f}(s_i^*) - b_i\|_2 \ge h_f\left(\frac{1}{m}\|f'(\sigma)\|_2 \sin(\psi/2) - h_f\sqrt{2}\tilde{R}_f\right) > h_f K_5.$$
This means that the maximum absolute distance can be bounded linearly by $h_f$,
$$\max_{0\le i\le m} |\operatorname{dist}_{\hat{Y}}(b_i)| \ge \max_{i=0,\dots,j-1,\,j+2,\dots,m} |\operatorname{dist}_{\hat{Y}}(b_i)| > h_f K_5.$$
The line $\hat{Y}$ passes through the intersection point. Hence it intersects the convex hull of the control points $b_i$, $i = 0, \dots, m$, and we have $d_{\min,\hat{Y}} \le 0 \le d_{\max,\hat{Y}}$. Therefore $d_{\min,\hat{Y}}$ is the minimum of all negative values and $d_{\max,\hat{Y}}$ the maximum of all positive ones, yielding
$$d_{\max,\hat{Y}} - d_{\min,\hat{Y}} \ge \max\bigl(-d_{\min,\hat{Y}},\, d_{\max,\hat{Y}}\bigr) = \max_{0\le i\le m}|\operatorname{dist}_{\hat{Y}}(b_i)| > h_f K_5. \qquad\Box$$
5.6 Fat Line Rules Providing Quadratic Convergence
In this section we define two classes of fat line rules, both of which give quadratic convergence. The fat line rule of Section 5.2 belongs to one of these classes.
A somewhat trivial observation is that if a fat line rule provides orientation lines $Y$ which are parallel to the orientation lines $\tilde{Y}$ of another rule, the resulting fat lines will be identical (up to a constant scaling if the constants $\lambda_f$ are different). Hence if a certain fat line rule $A$ provides quadratic convergence of Bézier clipping, any fat line rule using orientation lines parallel to those of $A$ will also give a quadratic convergence rate.
It follows from Proposition 5.6 that for any fat line rule where the width of the fat line
converges to zero quadratically, the limit line must be identical or parallel to the tangent
of the curve at the intersection. According to Lemma 5.3 the resulting Bézier clipping
algorithm is therefore quadratically convergent at transversal intersections. Hence for any
given fat line rule it is enough to show that the width of the fat line converges to zero
quadratically in order to have quadratic convergence of Bézier clipping.
In the following two definitions we introduce two different classes of fat line rules.
Definition 5.7 Suppose a fat line is to be computed for a Bézier curve f defined over [α, β].
A fat line rule of the tangential class is a rule where the orientation line is a tangent Tξ,f
for some ξ ∈ [α, β].
Definition 5.8 Suppose a fat line is to be computed for a Bézier curve $f$ defined over $[\alpha, \beta]$ with the control points $b_i$, $i = 0, \dots, m$. A fat line rule of the control-point class is a rule where the orientation line is defined by two control points $b_k$ and $b_\ell$ with $0 \le k < \ell \le m$.
Observe that the fat line rule described in Section 5.2 belongs to the control-point class of fat line rules with $k = 0$ and $\ell = m$. In addition it also has a connection to the tangential class, as there always exists a tangent $T_{\xi,f}$ with $\xi \in [\alpha, \beta]$ to which $L$ is parallel. The following two theorems show that both the tangential and the control-point class give quadratic convergence of Bézier clipping.
Theorem 5.9 Suppose the two Bézier curves $f$ and $g$ have a transversal intersection $f(\sigma) = g(\tau)$. Suppose also that the curve $g$ is clipped against the fat line of $f$ determined by a fat line rule of the tangential class with $T_{\xi,f}$ as orientation line and $\xi \in [\alpha, \beta]$. Then there exists a constant $0 < A_\vartheta < 1$ depending on the angle between $T_{\sigma,f}$ and $T_{\tau,g}$, such that for small enough $h_f$ the new interval $[\underline{t}, \overline{t}]$ for $g$ can be bounded by
$$\overline{t} - \underline{t} < h_f^2 \frac{2\sqrt{2}\tilde{R}_f}{\|g'(\tau)\|_2 A_\vartheta} + h_g^2 \frac{2\tilde{R}_g}{\|g'(\tau)\|_2 A_\vartheta}.$$
Proof. Let $b_i$, $i = 0, \dots, m$, be the control points of $f$ over $[\alpha, \beta]$. From (5.2) we know that
$$\|b_i - T_{\xi,f}(s_i^*)\|_2 \le h_f^2\sqrt{2}R_f \le h_f^2\sqrt{2}\tilde{R}_f$$
for all $i$. Hence
$$|d_{\min}| \le h_f^2\sqrt{2}\tilde{R}_f \quad\text{and}\quad |d_{\max}| \le h_f^2\sqrt{2}\tilde{R}_f,$$
and
$$d_{\max} - d_{\min} = |d_{\max} - d_{\min}| \le 2h_f^2\sqrt{2}\tilde{R}_f. \tag{5.4}$$
This means that $T_{\xi,f}$ must converge to $T_{\sigma,f}$ as $h_f$ goes to zero, which is also trivial to observe. Using (5.4) in Lemma 5.3 provides the desired bound. $\Box$
Theorem 5.10 Suppose the two Bézier curves $f$ and $g$ have a transversal intersection $f(\sigma) = g(\tau)$. Let $b_i$, $i = 0, \dots, m$, be the control points of the curve $f$ defined over $[\alpha, \beta]$. Suppose also that the curve $g$ is clipped against the fat line of $f$ determined by a fat line rule of the control-point class with the orientation line defined by $b_k$ and $b_\ell$ with $0 \le k < \ell \le m$. Then there exists a constant $0 < A_\vartheta < 1$ depending on the angle between $T_{\sigma,f}$ and $T_{\tau,g}$, such that for small enough $h_f$ the new interval $[\underline{t}, \overline{t}]$ for $g$ can be bounded by
$$\overline{t} - \underline{t} < h_f^2 \frac{4\sqrt{2}(m+1)\tilde{R}_f}{\|g'(\tau)\|_2 A_\vartheta} + h_g^2 \frac{2\tilde{R}_g}{\|g'(\tau)\|_2 A_\vartheta}.$$
Proof. Let $Y$ be the line defined by the two control points $b_k$ and $b_\ell$. We define the following parametrization for $Y$,
$$Y(s) = b_k + \frac{s - s_k^*}{s_\ell^* - s_k^*}(b_\ell - b_k).$$
Moreover let $q_i := b_i - T_{\sigma,f}(s_i^*)$, $i = 0, \dots, m$. Then
$$Y(s) - T_{\sigma,f}(s) = q_k + \frac{s - s_k^*}{s_\ell^* - s_k^*}(q_\ell - q_k).$$
Observe that $s_{i+1}^* - s_i^* = h_f/m$ for $0 \le i < m$, which together with (5.2) yields
$$\|Y(s_i^*) - T_{\sigma,f}(s_i^*)\| \le \|q_k\| + m\|q_\ell - q_k\| \le h_f^2\sqrt{2}(2m+1)\tilde{R}_f.$$
From this we obtain
$$|\operatorname{dist}_Y(b_i)| \le \|b_i - T_{\sigma,f}(s_i^*)\|_2 + \|T_{\sigma,f}(s_i^*) - Y(s_i^*)\|_2 \le h_f^2\, 2\sqrt{2}(m+1)\tilde{R}_f$$
for $i = 0, \dots, m$, which yields for the width of the fat line
$$d_{\max} - d_{\min} = |d_{\max} - d_{\min}| \le h_f^2\, 4\sqrt{2}(m+1)\tilde{R}_f. \tag{5.5}$$
This means that $Y$ must converge to $T_{\sigma,f}$ as $h_f$ goes to zero. Hence using (5.5) in Lemma 5.3 provides the desired bound. $\Box$
These classes can also be used to improve the stability of Bézier clipping. If the orientation line is determined by two control points and these control points happen to be identical, then the computation of the fat line, and hence Bézier clipping, becomes difficult. One solution is to use the tangential class and switch to an orientation line parallel to a tangent and passing through one control point. This is a stable operation, as the orientation can be computed by evaluating the derivative of the original Bézier curve at the chosen parameter value. Of special interest are the orientation lines $T_{\alpha,f}$ and $T_{\beta,f}$, as these lines can be defined by the first two or last two control points, or by the first or last control point together with $f'(\alpha)$ or $f'(\beta)$, respectively.
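The remark about $T_{\alpha,f}$ rests on the standard fact that the endpoint derivative of a Bézier curve over $[\alpha, \beta]$ is $f'(\alpha) = m(b_1 - b_0)/(\beta - \alpha)$, so the orientation of $T_{\alpha,f}$ is simply the direction $b_1 - b_0$. A small sketch (example data made up, not from the thesis) checking this against a forward difference:

```python
def bezier_point(pts, alpha, beta, s):
    """Evaluate a Bezier curve over [alpha, beta] by de Casteljau's algorithm."""
    t = (s - alpha) / (beta - alpha)
    work = list(pts)
    while len(work) > 1:
        work = [tuple((1 - t) * a + t * b for a, b in zip(p, q))
                for p, q in zip(work, work[1:])]
    return work[0]

def endpoint_tangent(pts, alpha, beta):
    """f'(alpha) = m (b_1 - b_0) / (beta - alpha) for a degree-m curve."""
    m = len(pts) - 1
    return tuple(m * (b - a) / (beta - alpha) for a, b in zip(pts[0], pts[1]))

pts = [(0.0, 0.0), (1.0, 2.0), (3.0, 1.0), (4.0, 0.0)]
alpha, beta = 0.0, 2.0
d = endpoint_tangent(pts, alpha, beta)
# forward difference as an independent check of the endpoint derivative
eps = 1e-6
p0 = bezier_point(pts, alpha, beta, alpha)
p1 = bezier_point(pts, alpha, beta, alpha + eps)
fd = tuple((b - a) / eps for a, b in zip(p0, p1))
```

This is why switching to $T_{\alpha,f}$ is numerically stable: the orientation comes from a single subtraction of control points rather than from a possibly degenerate pair.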
[Figure 5.5: Examples. a) Example 1, m = 3, n = 3. b) Example 2, m = 7, n = 9.]
5.7 Numerical Examples
This section gives numerical examples which illustrate the quadratic convergence rate of the introduced fat line rule classes. Figure 5.5 shows the curves for Examples 1 and 2. The lengths $h_f$ and $h_g$ of the corresponding intervals during Bézier clipping are listed in Tables 5.2 and 5.3. We use four different fat line rules. The first is the fat line rule described in Section 5.2 and introduced by Sederberg et al. in [72], using the orientation line $L$. The second fat line rule uses the line $Y_{1,2}$ determined by the second and third control point as orientation line, and therefore belongs to the control-point class. As an example from the tangential class we use a rule with the tangent $Y_{0.5} = T_{0.5(\alpha+\beta),f}$ as orientation line. Finally, for comparison, we use the line $L_\perp$ perpendicular to $L$ as orientation line, as also suggested by Sederberg et al. in [72].
In Bézier clipping every subdivision step leads to two new Bézier clipping tasks. In Tables 5.2 and 5.3 italic numbers indicate that the two considered curves come from a subdivision step. Note that the lengths of the intervals in both tables are only listed until an intersection is found. In Example 2 this is, for all four fat line rules, the left-most intersection. Both tables clearly show the quadratic convergence rate for $L$, $Y_{1,2}$ and $Y_{0.5}$. The orientation line $L_\perp$, in contrast, provides only linear convergence.
Acknowledgment
I would like to thank Knut Mørken and Martin Reimers for fruitful discussions and the
referees for their comments.
         L                   Y_{1,2}               Y_{0.5}                L_⊥
   h_f      h_g         h_f      h_g         h_f      h_g         h_f      h_g
 1.0e-00  1.0e-00     1.0e-00  1.0e-00     1.0e-00  1.0e-00     1.0e-00  1.0e-00
 5.0e-01  1.0e-00     5.0e-01  1.0e-00     5.0e-01  1.0e-00     7.3e-01  1.0e-00
 5.0e-01  5.4e-01     5.0e-01  5.0e-01     5.0e-01  7.4e-01     7.3e-01  5.0e-01
 5.0e-01  4.3e-01     5.0e-01  2.6e-01     5.0e-01  3.7e-01     5.8e-01  5.0e-01
 2.5e-01  4.3e-01     4.2e-02  2.6e-01     2.5e-01  3.7e-01     2.9e-01  5.0e-01
 2.5e-01  4.3e-01     4.2e-02  2.0e-02     2.5e-01  6.9e-02     2.9e-01  3.0e-01
 2.5e-01  4.2e-02     3.9e-04  2.0e-02     5.1e-03  6.9e-02     2.9e-01  1.5e-01
 2.9e-03  4.2e-02     3.9e-04  1.2e-04     5.1e-03  8.8e-04     2.9e-01  1.5e-01
 2.9e-03  4.7e-05     3.7e-08  1.2e-04     4.1e-06  8.8e-04     1.1e-01  1.5e-01
 1.6e-07  4.7e-05     3.7e-08  4.3e-09     4.1e-06  1.3e-07     1.1e-01  7.6e-02
 1.6e-07  5.7e-11     3.3e-16  4.3e-09     2.5e-12  1.3e-07     5.5e-02  7.6e-02
 5.6e-16  5.7e-11     3.3e-16  2.8e-17     2.5e-12  2.7e-15     5.5e-02  3.8e-02
 5.6e-16  2.8e-17                          0.0e-17  2.7e-15     2.7e-02  3.8e-02
                                                                2.7e-02  2.2e-02
                                                                  ...      ...

Table 5.2: Parameter interval lengths of curves of Example 1 for different fat line rules.
         L                   Y_{1,2}               Y_{0.5}                L_⊥
   h_f      h_g         h_f      h_g         h_f      h_g         h_f      h_g
 1.0e-00  1.0e-00     1.0e-00  1.0e-00     1.0e-00  1.0e-00     1.0e-00  1.0e-00
 5.0e-01  1.0e-00     5.0e-01  1.0e-00     5.0e-01  1.0e-00     5.0e-01  1.0e-00
 5.0e-01  5.0e-01     5.0e-01  6.0e-01     5.0e-01  5.0e-01     5.0e-01  5.4e-01
 5.0e-01  3.4e-01     3.3e-01  6.0e-01     3.6e-01  5.0e-01     3.8e-01  5.4e-01
 2.5e-01  3.4e-01     3.3e-01  4.0e-01     3.6e-01  2.5e-01     3.8e-01  2.7e-01
 2.5e-01  3.4e-01     2.1e-01  4.0e-01     2.0e-01  2.5e-01     2.9e-01  2.7e-01
 2.5e-01  1.8e-01     2.1e-01  2.4e-01     2.0e-01  7.5e-02     1.4e-01  2.7e-01
 7.0e-02  1.8e-01     1.6e-01  2.4e-01     2.1e-02  7.5e-02     1.4e-01  1.5e-01
 7.0e-02  1.4e-02     1.6e-01  5.9e-02     2.1e-02  2.8e-03     1.4e-01  2.7e-01
 1.5e-03  1.4e-02     9.2e-03  5.9e-02     9.5e-05  2.8e-03     1.4e-01  1.3e-01
 1.5e-03  2.7e-05     9.2e-03  3.9e-05     9.5e-05  1.4e-06     1.4e-01  1.3e-01
 2.6e-07  2.7e-05     4.7e-06  3.9e-05     1.0e-09  1.4e-06     1.1e-01  1.3e-01
 2.6e-07  8.8e-11     4.7e-06  7.5e-10     1.0e-09  3.7e-13     1.1e-01  6.7e-02
 7.1e-15  8.8e-11     1.4e-12  7.5e-10     1.1e-16  3.7e-13     1.1e-01  6.7e-02
 7.1e-15  2.8e-17     1.4e-12  5.6e-17     1.1e-16  5.6e-17     8.9e-02  6.7e-02
                      0.0e-17  5.6e-17                          4.4e-02  6.7e-02
                                                                4.4e-02  3.0e-02
                                                                  ...      ...

Table 5.3: Parameter interval lengths of curves of Example 2 for different fat line rules.
6 Computing intersections of planar spline curves using knot insertion
Knut Mørken, Martin Reimers and Christian Schulz
Computer Aided Geometric Design, Volume 26, Issue 3, March 2009, Pages 351-366
Abstract
We present a new method for computing intersections of two parametric B-spline curves. We use an intersection of the control polygons as an approximation to an intersection of the curves, in combination with knot insertion. The resulting algorithm is asymptotically Newton-like, but without the need for a starting value. Like Newton's method, it converges quadratically at transversal intersections, the analogue of simple roots. It is a generalization of an algorithm developed by two of the authors for computing zeros of spline functions.
Keywords: Curve intersection, spline, B-Spline, knot insertion, control polygon
6.1 Introduction
The problem of finding the intersections between two parametric curves is fundamental in geometric modeling. The curves are commonly represented in terms of B-splines, which means that a rich theoretical and practical toolbox is available, and a good algorithm should exploit the B-spline properties. Perhaps the most important of these properties is that the coefficients of a curve represented in terms of B-splines converge to the curve when the width of the polynomial pieces goes to zero. A particularly simple way to view this property in action is to successively divide the parameter intervals of all polynomial pieces into two parts of equal width; we will refer to this as uniform subdivision. By performing uniform subdivision sufficiently many times, the coefficients will come arbitrarily close to the spline curve. A consequence of this is that the
piecewise linear interpolant to neighbouring coefficients (usually referred to as the control
polygon) will converge to the curve.
B-splines are nonnegative and sum to one, which means that linear combinations of B-splines have the convex hull property, i.e., any point on a spline curve lies within the convex hull of the B-spline coefficients. There is also a local convex hull property: if we restrict our attention to one polynomial piece of a spline curve of degree $d$, then only $d + 1$ B-splines are nonzero, so the associated curve segment lies within the convex hull of the corresponding coefficients. This can be exploited when computing intersections: if the local convex hulls of the coefficients of two polynomial segments do not overlap, then the corresponding curve segments cannot intersect.
If the two curves are f and g (typically curves in the plane), parameterized by s and t,
the problem is to solve the vector equation
$$f(s) = g(t). \tag{6.1}$$
The most obvious approach is to apply one of the classical methods for computing roots of
equations, e.g. Newton’s method. The disadvantage is that these methods will only converge
if one or more starting values close to the intersection are supplied. Finding those starting
values can itself be a difficult task.
An alternative algorithm is to perform uniform subdivision until the control polygons
come sufficiently close to the spline curves and then use the intersections between the control polygons as approximations to the intersections between the curves. This strategy can
be improved by excluding the curve segments where the local convex hull property implies
that no intersections can occur. In essence, this is the intersection algorithm suggested in
[47]. The disadvantage of this approach is that the convergence to the actual intersections
is slow since the subdivision step is based on a simple bisection of the parameter intervals.
This therefore suggests an alternative method which uses subdivision to home in on the intersections and then switches to the quadratically convergent Newton’s method to compute
more accurate intersections, see [75]. However, this still leaves the problem of determining
exactly when it is safe to switch to Newton’s method.
Bézier clipping [72] is a variation on the subdivision approach that, in a clever way, clips away the regions of the curves where no intersections can take place, leading to a quadratically convergent method, see [69].
The paper [57] describes a general family of methods for solving systems of polynomial equations in several variables, of which equation (6.1) is a special case. The methods are reminiscent of the classical subdivision methods for polynomials in Bernstein form, but include a preconditioning step.
An interesting paper is [84], which is concerned with developing an algorithm that can guarantee that all intersections (including tangential ones) between two polynomial curves in Bernstein form are found. The algorithm makes use of so-called separation bounds, which say something about how close two curves described in terms of floating point numbers with a given number of bits can be without having an intersection. The basic algorithm is essentially a subdivision algorithm like the one in [47], but the tests for intersections are much more sophisticated. The emphasis is on guaranteeing that all intersections and their type are found; speed and convergence rate are not discussed much.
Yet another family of methods for solving intersection problems is discussed in [20]
where one of the curves is rewritten (approximately) in implicit form.
The algorithm that we describe and analyze in this paper is a generalization of the
method presented in [55] for finding roots of spline functions. The new algorithm supplements and improves on the algorithms mentioned above in different ways: It works directly
with splines expressed in terms of B-splines; it focuses on finding intersections rather than
removing parts where no intersections can occur; it is based on subdivision, but is adaptive
in that the points at which to subdivide are determined by a very simple strategy that leads
to quadratic convergence at transversal intersections; and it requires no starting values even
though it converges as quickly as Newton’s method. The algorithm may be used on its own,
but it should also fit into some of the hybrid strategies mentioned above.
It should be noted that the new algorithm is dependent upon the control polygons of
the two curves having an intersection. In the case of non-intersecting control polygons (see
Figure 6.5b) a subdivision preprocessing step will necessarily lead to intersecting control
polygons near transversal intersections.
After an introduction of basic notation in Section 6.2, we describe the new algorithm in Section 6.3. The properties of the algorithm are analyzed in Section 6.4, followed by some numerical examples in Section 6.5. A discussion of open questions is given in Section 6.6.
6.2 Basic Notation
A spline curve in $\mathbb{R}^m$ is a vector function of the form $f(t) = \sum_{i=1}^{n} c_i B_{i,d,t}(t)$, where each control point $c_i$ lies in $\mathbb{R}^m$, and $\{B_{i,d,t}\}_{i=1}^{n}$ is the collection of B-splines of degree $d$ associated with the knot vector $t = (t_i)_{i=1}^{n+d+1}$. We denote the set of all such curves by $S^m_{d,t}$. Throughout the paper we always assume that $t_i < t_{i+d}$ for $i = 2, \dots, n$ (all knots except the first and the last having at most multiplicity $d$, such that the splines are continuous and the B-splines are linearly independent) and that the first and last $d + 1$ knots are identical ($t_1 = t_{d+1}$, $t_{n+1} = t_{n+d+1}$). The latter condition causes no loss of generality, as every spline curve can be converted to this form. We are only interested in the part of the spline curve defined on the interval $[t_{d+1}, t_{n+1}]$; the rest is identically zero. In this paper we limit ourselves to spline curves in the plane, hence $m = 2$.
The control polygon of the spline is the piecewise linear interpolant to consecutive control points. For our purposes we need a one-to-one mapping from the parameter interval $[t_{d+1}, t_{n+1}]$ to the control polygon. We therefore associate the knot average or Greville abscissa $\bar{t}_i = (t_{i+1} + \cdots + t_{i+d})/d$ with the control point $c_i$ and define the control polygon $\Gamma_{f,t}$ of the spline curve $f$ as
$$\Gamma_{f,t}(t) = \frac{\bar{t}_{i+1} - t}{\bar{t}_{i+1} - \bar{t}_i}\, c_i + \frac{t - \bar{t}_i}{\bar{t}_{i+1} - \bar{t}_i}\, c_{i+1}$$
for $t$ in the interval $[\bar{t}_i, \bar{t}_{i+1})$. Since $\bar{t}_1 = t_{d+1}$ and $\bar{t}_n = t_{n+1}$, this defines the control polygon on all of $[t_{d+1}, t_{n+1})$. We know from classical spline theory [52, 71] that the control polygon converges quadratically to the spline as the maximal distance between two neighbouring knots tends to zero.
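The Greville abscissae and the control polygon mapping are straightforward to compute directly from the definitions. A small sketch follows; the knot vector and control points are made-up example data, and 0-based indexing is used, so the paper's $c_1, \dots, c_n$ become `ctrl[0], ..., ctrl[n-1]`.

```python
def greville(knots, d):
    """Knot averages (Greville abscissae): with 0-based knots, the i-th entry
    is the paper's (t_{i+1} + ... + t_{i+d}) / d for control point c_{i+1}."""
    n = len(knots) - d - 1
    return [sum(knots[i + 1:i + d + 1]) / d for i in range(n)]

def control_polygon(knots, ctrl, d, t):
    """Evaluate the piecewise linear control polygon Gamma_{f,t} at t."""
    g = greville(knots, d)
    for i in range(len(g) - 1):
        if g[i] <= t <= g[i + 1]:
            w = (t - g[i]) / (g[i + 1] - g[i])
            return tuple((1 - w) * a + w * b
                         for a, b in zip(ctrl[i], ctrl[i + 1]))
    raise ValueError("t outside [t_{d+1}, t_{n+1}]")

# a clamped cubic (d = 3) with (d+1)-fold end knots and n = 5 planar control points
knots = [0.0, 0.0, 0.0, 0.0, 1.0, 2.0, 2.0, 2.0, 2.0]
ctrl = [(0.0, 0.0), (1.0, 2.0), (2.0, 2.0), (3.0, 0.0), (4.0, 1.0)]
gr = greville(knots, 3)
# first and last Greville abscissae coincide with t_{d+1} and t_{n+1}
```

Note how the clamped end knots make the first and last Greville abscissae coincide with the interval endpoints, which is exactly what gives the one-to-one mapping claimed above.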
Another classical result which will be useful is the observation that inserting a knot into the knot vector does not change the spline itself, only its control polygon. Let $\tau$ be the knot to be inserted with $\tau \in [t_\mu, t_{\mu+1})$ and let $(b_i)_{i=1}^{n+1}$ be the new control points. Then we have $b_i = c_i$ for $i = 1, \dots, \mu - d$,
$$b_i = \frac{\tau - t_i}{t_{i+d} - t_i}\, c_i + \frac{t_{i+d} - \tau}{t_{i+d} - t_i}\, c_{i-1} \qquad\text{for } i = \mu - d + 1, \dots, \mu,$$
and $b_i = c_{i-1}$ for $i = \mu + 1, \dots, n + 1$, see [10].
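The insertion formula translates directly into code. The sketch below is my own illustration, not the authors' implementation: it uses 0-based indexing (the paper's $c_i$ is `ctrl[i-1]`), made-up example data, and a naive Cox–de Boor evaluator, included purely to check that the curve itself is unchanged by the insertion.

```python
def bspline_basis(knots, d, i, t):
    """Naive Cox-de Boor recursion for the B-spline B_{i,d} (0-based)."""
    if d == 0:
        return 1.0 if knots[i] <= t < knots[i + 1] else 0.0
    v = 0.0
    if knots[i + d] > knots[i]:
        v += ((t - knots[i]) / (knots[i + d] - knots[i])
              * bspline_basis(knots, d - 1, i, t))
    if knots[i + d + 1] > knots[i + 1]:
        v += ((knots[i + d + 1] - t) / (knots[i + d + 1] - knots[i + 1])
              * bspline_basis(knots, d - 1, i + 1, t))
    return v

def spline_point(knots, ctrl, d, t):
    """Evaluate f(t) = sum_i c_i B_{i,d}(t) componentwise."""
    return tuple(sum(bspline_basis(knots, d, i, t) * c[k]
                     for i, c in enumerate(ctrl))
                 for k in range(len(ctrl[0])))

def insert_knot(knots, ctrl, d, tau):
    """Insert tau into the knot vector without changing the curve."""
    # mu (0-based) with knots[mu] <= tau < knots[mu + 1]
    mu = max(j for j in range(len(knots) - 1)
             if knots[j] <= tau < knots[j + 1])
    new_ctrl = []
    for j in range(len(ctrl) + 1):
        if j <= mu - d:
            new_ctrl.append(ctrl[j])                 # b_i = c_i
        elif j <= mu:                                # the blended range
            a = (tau - knots[j]) / (knots[j + d] - knots[j])
            new_ctrl.append(tuple(a * p + (1 - a) * q
                                  for p, q in zip(ctrl[j], ctrl[j - 1])))
        else:
            new_ctrl.append(ctrl[j - 1])             # b_i = c_{i-1}
    return sorted(knots + [tau]), new_ctrl

# a clamped cubic (made-up data); insert tau = 0.5
knots = [0.0, 0.0, 0.0, 0.0, 1.0, 2.0, 2.0, 2.0, 2.0]
ctrl = [(0.0, 0.0), (1.0, 2.0), (2.0, 2.0), (3.0, 0.0), (4.0, 1.0)]
new_knots, new_ctrl = insert_knot(knots, ctrl, 3, 0.5)
```

Evaluating the spline before and after the insertion at any parameter value returns the same point, while the control polygon has gained a vertex and moved closer to the curve.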
As we mentioned above, splines represented in terms of B-splines have a local convex hull property. More precisely, for any $t \in [t_j, t_{j+1})$ the point $f(t)$ is contained in the convex hull of the local control points $c_{j-d}, \dots, c_j$ (the smallest convex set that contains all of these points). Obviously, if the unions of all local convex hulls of two spline curves are disjoint, so are the curves.
6.3 The Intersection Algorithm
In [55] the close relationship between a spline and its control polygon is exploited to find the
first zero of a spline function by taking the first zero of the control polygon as an approximation to the first zero of the spline. This zero is then inserted into the knot vector of the spline
and the first zero of the new control polygon is used as an improved approximation. By
repeating this, an iterative algorithm is obtained which turns out to converge quadratically
to simple zeros. The method can be generalized in a natural way to the computation of an
intersection of two parametric spline curves. We use the first intersection of the two control
polygons as an approximation to an intersection of the two curves. First here means first
according to some order, e.g. a lexicographical order on the parameter values. From this
intersection we get a pair of parameter values, one for each curve, which are then inserted as
new knots into the knot vectors of the corresponding curves. This process can be repeated
with the new control polygon as illustrated in Figure 6.1, the result being a simple iterative
method.
The algorithm outlined above, together with a detailed analysis of convergence, is the
main contribution of this paper. In itself, this is not a foolproof algorithm for computing intersections since two curves may intersect even when the control polygons do not intersect,
see Figure 6.5b. In this situation we need to do some preprocessing before our algorithm
will work. For this we exploit the local convex hull property. More specifically, if the
curves intersect, there must be at least one pair of overlapping local convex hulls. We focus
on the first such pair and insert the midpoints of the corresponding parameter intervals as
new knots in the curves. By possibly repeating this procedure, we know from Theorem
6.11 that we will eventually end up with intersecting control polygons, at least in the case
of a transversal intersection. The resulting Algorithm 1 is shown in Figure 6.2. Obvious
variations on the preprocessing step may home in on other intersections than the first one,
or even all intersections.
Any iterative algorithm needs a stopping criterion; we will discuss this together with
some examples in Section 6.5.
[Figure 6.1 (Steps 0–3): By repeatedly inserting the intersection of the control polygons as new knots in the splines, we obtain a sequence of control polygons whose intersection converges to the intersection of the curves. In this example both curves have degree three.]
[Figure 6.2: Algorithm 1, the intersection finding algorithm, as a flow chart: if the control polygons intersect, insert their first intersection as new knots and repeat until an intersection of the curves is found; otherwise, if two local convex hulls intersect, insert the first midpoint pair and start over; if no two local convex hulls intersect, the curves do not intersect and the algorithm stops.]
CHAPTER 6. COMPUTING INTERSECTIONS OF PLANAR SPLINE CURVES
For transversal intersections the intersecting control polygons determine the limiting behaviour of Algorithm 1. If the algorithm is not stopped, it will generate an infinite sequence
of parameter pairs which we will analyze in Section 6.4. There we show that all accumulation points of the sequence must correspond to intersections between the two spline curves.
In addition, if we limit ourselves to curves with only transversal intersections, and if we
always insert the first intersection between the two control polygons, we will show that the
sequence converges with a quadratic convergence rate.
The computation of the parameter pair (σ, τ) corresponding to the intersection of the two control polygons is quite simple. Let f(s) = Σ_{l=1}^{n_f} b_l B_{l,d,s}(s) be the first and g(t) = Σ_{l=1}^{n_g} c_l B_{l,e,t}(t) the second curve. Let the control polygon segments b_{i−1}, b_i and c_{j−1}, c_j contain the chosen intersection between the control polygons, given by the equation

    ((s̄_i − σ)/(s̄_i − s̄_{i−1})) b_{i−1} + ((σ − s̄_{i−1})/(s̄_i − s̄_{i−1})) b_i
        = ((t̄_j − τ)/(t̄_j − t̄_{j−1})) c_{j−1} + ((τ − t̄_{j−1})/(t̄_j − t̄_{j−1})) c_j.
The left-hand side of this equation can be written as

    b_i + (s̄_i − σ)(b_{i−1} − b_i)/(s̄_i − s̄_{i−1}) = b_i + (σ − s̄_i) ∆b_i,

where ∆b_i = (b_i − b_{i−1})/(s̄_i − s̄_{i−1}). We reformulate the right-hand side in a similar way and end up with the equation

    b_i − c_j = −((σ − s̄_i) ∆b_i − (τ − t̄_j) ∆c_j).    (6.2)
If the 2 × 2 matrix M = (∆b_i  −∆c_j) is nonsingular, the solution of (6.2) is

    (σ, τ)ᵀ = (s̄_i, t̄_j)ᵀ − M⁻¹ (b_i − c_j).    (6.3)
Note that the non-singularity of the matrix M is equivalent to the two segments of the two
control polygons not being parallel and of nonzero length.
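Concretely, (6.3) is a 2 × 2 linear solve. The sketch below (our own helper, using Cramer's rule rather than an explicit matrix inverse) computes (σ, τ) from one segment of each control polygon; it returns None exactly when M is singular, i.e. when the segments are parallel or degenerate.

```python
def polygon_segment_intersection(b, s_bar, c, t_bar):
    """Solve (6.3) for one segment pair.

    b, c        : the two segment endpoints (b_{i-1}, b_i) and (c_{j-1}, c_j)
    s_bar, t_bar: the corresponding knot averages (Greville abscissae)
    Returns (sigma, tau), or None if M is singular.
    """
    db = [(b[1][k] - b[0][k]) / (s_bar[1] - s_bar[0]) for k in range(2)]
    dc = [(c[1][k] - c[0][k]) / (t_bar[1] - t_bar[0]) for k in range(2)]
    # columns of M are db and -dc; right-hand side is c_j - b_i
    det = db[0] * (-dc[1]) - (-dc[0]) * db[1]
    if det == 0:
        return None
    r = [c[1][k] - b[1][k] for k in range(2)]
    x = (r[0] * (-dc[1]) - (-dc[0]) * r[1]) / det  # sigma - s_bar_i
    y = (db[0] * r[1] - r[0] * db[1]) / det        # tau - t_bar_j
    return s_bar[1] + x, t_bar[1] + y
```

For example, the segments from (0, 0) to (2, 2) (abscissae 0 and 2) and from (0, 2) to (2, 0) (abscissae 0 and 2) cross at (1, 1), and the helper returns σ = τ = 1.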
6.4 Analysis
If the two control polygons intersect in every iteration, Algorithm 1 will yield an infinite sequence of parameter pairs (if not stopped). We will now analyze the properties of this sequence. Let f(s) = Σ_{i=1}^{n_f} b_i B_{i,d,s}(s) be the first and g(t) = Σ_{i=1}^{n_g} c_i B_{i,e,t}(t) the second curve. Let s^k and t^k be the knot vectors and (σ^k, τ^k) the parameter values of the chosen intersection of the control polygons in iteration k. Note that if σ^k already has multiplicity d in the knot vector s^k, it will not be inserted into s^k. Inserting this value would not change the control polygon, it would just duplicate an existing control point. We treat τ^k similarly.
Before starting the analysis we introduce the needed background material in Subsection 6.4.1, followed by some basic results in 6.4.2. The analysis itself starts by showing that the algorithm actually finds an intersection in Subsection 6.4.3. The convergence analysis can be found in 6.4.4 and a discussion of the convergence rate in 6.4.5.
6.4.1 Background Material
In the literature there are a number of bounds on splines in the functional case. Most of
these carry over to spline curves quite easily and we will state the ones we need below. The
proofs are straightforward and are left to the reader.
The results in this subsection can be applied to spline curves in any dimension, hence we consider curves in R^m for a general m.
We start by defining two norms for vector valued functions. Recall that the uniform norm of a continuous function defined on an interval [a, b] is defined by

    ‖f‖_∞ = ‖f‖_{∞,[a,b]} = max_{x∈[a,b]} |f(x)|.

Analogously we define the 2,∞-norm of a vector valued function f : [a, b] → R^m as

    ‖f‖_{2,∞} := max_{s∈[a,b]} ‖f(s)‖_2.

If f(s) = Σ_{i=1}^{n} c_i B_{i,d,s}(s) is a spline curve in R^m then we denote the 2,∞-norm of the control points by

    ‖c‖_{2,∞} := max_{i=1,...,n} ‖c_i‖_2.
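The discrete norm ‖c‖_{2,∞} is simply the largest Euclidean length among the control points; a minimal planar sketch (the helper name is ours):

```python
from math import hypot

def sup_2_norm(ctrl):
    """Return max_i ||c_i||_2 over a list of 2D control points."""
    return max(hypot(x, y) for x, y in ctrl)
```

For instance, sup_2_norm([(3, 4), (0, 1)]) gives 5.0, the length of the longest control point.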
These norms enable us to derive a generalization of a standard stability result for spline
functions.
Lemma 6.1 Every spline curve f in S^m_{d,s} can be bounded by

    K_d⁻¹ ‖c‖_{2,∞} ≤ ‖f‖_{2,∞} ≤ ‖c‖_{2,∞},

with the constant K_d depending only on the degree d of f.
The next lemma gives an estimate of the distance between a spline and a control point.
The operator D denotes differentiation from the right.
Lemma 6.2 Let f(s) = Σ_{i=1}^{n} c_i B_{i,d,s}(s) be a spline in S^m_{d,s}. Then

    ‖c_i − f(s̄_i)‖_2 ≤ Q_{d,m} (s_{i+d} − s_{i+1})² ‖D²f‖_{2,∞,[s_{i+1},s_{i+d}]},

where the constant Q_{d,m} depends only on d and m.
Scalar versions with proofs of the Lemmas 6.1 and 6.2 can be found in Theorem 9.17
and Lemma 4.1 in [52].
The control points of the spline curve can be described by a multivariate function of the knots,

    c_i = F(s_{i+1}, . . . , s_{i+d}).    (6.4)

The function F is affine and symmetric in each of its arguments, satisfies the diagonal property f(s) = F(s, . . . , s) and is often referred to as the blossom of f. Blossoming is
discussed in [59] and standard spline literature [23, 52] so we only treat the parts we will
need later on (see also [55]).1
For the blossom of the derivative f′ we will use the notation

    ∆c_i = F′(s_{i+1}, . . . , s_{i+d−1}),

where ∆c_i denotes a control point of f′. Moreover, observe that the derivative of the blossom and the blossom of the derivative are related by

    d D_i F(x_1, . . . , x_d) = F′(x_1, . . . , x_{i−1}, x_{i+1}, . . . , x_d),    (6.5)

where D_i F denotes the derivative with respect to x_i. In other words, we have ∆c_i = d D_d F(s_{i+1}, . . . , s_{i+d}).
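To make the blossom concrete: for the cubic f(s) = s³ the blossom is F(x₁, x₂, x₃) = x₁x₂x₃, which is symmetric, affine in each argument, and satisfies F(s, s, s) = s³. The sketch below, with our own function names, checks the diagonal property, the control-point formula (6.4) on the Bézier knot vector, and the derivative relation (6.5) numerically:

```python
def F(x1, x2, x3):
    """Blossom of f(s) = s^3."""
    return x1 * x2 * x3

def Fp(x1, x2):
    """Blossom of the derivative f'(s) = 3s^2."""
    return 3 * x1 * x2

# diagonal property: f(s) = F(s, s, s)
assert F(2.0, 2.0, 2.0) == 8.0

# (6.4): on the Bezier knot vector [0,0,0,0,1,1,1,1] the control points
# c_i = F(s_{i+1}, s_{i+2}, s_{i+3}) are the Bezier coefficients of s^3
knots = [0, 0, 0, 0, 1, 1, 1, 1]
ctrl = [F(*knots[i + 1: i + 4]) for i in range(4)]
assert ctrl == [0, 0, 0, 1]

# (6.5): d * D_3 F(x1, x2, x3) = F'(x1, x2), checked by a central difference
x1, x2, x3, h = 0.3, 0.7, 0.5, 1e-6
d3F = (F(x1, x2, x3 + h) - F(x1, x2, x3 - h)) / (2 * h)
assert abs(3 * d3F - Fp(x1, x2)) < 1e-8
```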
We will also need a multivariate Taylor expansion of the blossom F. For an affine function φ(x) we have the simple Taylor expansion

    φ(x) = φ(z) + (x − z) φ′(z).    (6.6)

As F is an affine function in each of its arguments, repeated application of (6.6) leads to

    F(x_1, . . . , x_d) = F(z_1, . . . , z_d) + Σ_{l=1}^{d} (x_l − z_l) D_l F(z_1, . . . , z_d)
        + Σ_{l=2}^{d} Σ_{r=1}^{l−1} (x_l − z_l)(x_r − z_r) D_{l,r} F(z_1, . . . , z_l, x_{l+1}, . . . , x_d),

where D_{l,r} F denotes the second derivative with respect to x_l and x_r.
Finally we will need a bound for the second derivative of the blossom.

Lemma 6.3 For all x_1, . . . , x_d ∈ [a, b] and integers r and l with 1 ≤ r < l ≤ d the inequality

    ‖D_{l,r} F(x_1, . . . , x_d)‖_2 ≤ K_{F″} / (d(d − 1))

holds, where the constant K_{F″} depends on a, b and f but not on a specific knot vector s.
Proof. Two applications of (6.5) yield

    D_{l,r} F(x_1, . . . , x_d) = (1/(d(d − 1))) F″((x_1, . . . , x_d) \ (x_l, x_r)).

The blossom F″_i of the ith polynomial piece of the second derivative is continuous and hence its norm ‖F″_i((x_1, . . . , x_d) \ (x_l, x_r))‖_2 is also continuous. By the extreme value theorem the norm therefore achieves its maximum and minimum over the interval [a, b]. Thus ‖F″_i‖_2 can be bounded by some constant, ‖F″_i(x_1, . . . , x_d)‖_2 ≤ K_{F″}, for all i and x_j ∈ [a, b] with j = 1, . . . , d. □
1 Note that the blossom is only defined for polynomials. However, the blossoms of two polynomials that are
linked with a certain continuity will agree when they are evaluated at the knots used for the construction of the
B-splines, as in (6.4).
6.4.2 Basic Results
After stating the background material we will now cover some basic results which can be applied more generally. Some of the results are well known and just restated for convenience, whereas others extend existing ones to provide more appropriate tools for the analysis. Again the results can be applied to spline curves in any space dimension.

First, we will need some simple properties of the control polygon. Consider the derivative Df(s) = Σ_{i=2}^{n_f} ∆b_i B_{i,d−1,s}(s) of f, whose B-spline coefficients are given by ∆b_i = (b_i − b_{i−1})/(s̄_i − s̄_{i−1}).
Lemma 6.4 The norm of the derivative of the control polygon Γ_{f,s} can be bounded by

    ‖DΓ_{f,s}‖_{2,∞} = ‖∆b‖_{2,∞} ≤ K_{d−1} ‖Df‖_{2,∞},

where K_{d−1} is a constant depending only on d and ∆b is the vector of control points of Df.

Proof. For s ∈ [s̄_i, s̄_{i+1}) we have DΓ_{f,s}(s) = (b_{i+1} − b_i)/(s̄_{i+1} − s̄_i) = ∆b_{i+1}. But this means that ‖DΓ_{f,s}(s)‖_2 = ‖∆b_{i+1}‖_2 and ‖DΓ_{f,s}‖_{2,∞} = ‖∆b‖_{2,∞}. The inequality then follows from Lemma 6.1. □
The next lemma shows that the control polygon is Lipschitz continuous.

Lemma 6.5 The distance between two points on the control polygon can be bounded by

    ‖Γ_{f,s}(x) − Γ_{f,s}(y)‖_2 ≤ K_{d−1} ‖Df‖_{2,∞} |y − x|.

Proof. As the spline f is continuous, so is its control polygon, which leads to

    ‖Γ_{f,s}(x) − Γ_{f,s}(y)‖_2 ≤ ∫_x^y ‖DΓ_{f,s}(s)‖_2 ds ≤ K_{d−1} ‖Df‖_{2,∞} |y − x|. □
We will need a generalization of Lemma 6.2 to a situation where the parameter interval is arbitrary. We first consider the function case.

Lemma 6.6 Let f(s) = Σ_{i=1}^{n_f} b_i B_{i,d,s}(s) be a spline function, let [α, β] be a parameter interval that contains at least d knots from s and set δ = β − α. Then the distance between the control polygon and the spline in the interval [α, β] can be bounded by

    ‖f − Γ_{f,s}‖_{∞,[α,β]} ≤ R_2 δ² ‖D²f‖_∞,    (6.7)

where the constant R_2 only depends on d.
Proof. Let the d knots s_{ζ+1}, . . . , s_{ζ+d} be in [α, β]. Initially we let s be any value in [s_1, s_{n_f}]. In [60] the general bound

    |f(s) − Γ_{f,s}(s)| ≤ |φ(s) − Γ_{φ,s}(s)| ‖∆²b‖_∞,    (6.8)

with φ(s) = s²/2 was established, where ∆²b_i are the coefficients of the second derivative D²f(s) and ‖∆²b‖_∞ = max_i |∆²b_i|. Hence it follows from Lemma 6.1 that

    ‖∆²b‖_∞ ≤ K_{d−2} ‖D²f‖_∞.    (6.9)

To complete the proof we only need to bound the distance between the spline φ(s) = Σ_{i=1}^{n_f} q_i B_{i,d,s}(s) = s²/2 and its control polygon, where

    q_i = (1/(d(d − 1))) Σ_{j=i+1}^{i+d} Σ_{k=j+1}^{i+d} s_j s_k = s̄_i²/2 − ρ_i²/(2d) = φ(s̄_i) − ρ_i²/(2d)    (6.10)

and ρ_i² = Σ_{j=1}^{d} (s_{i+j} − s̄_i)²/(d − 1). Observe that φ″(s) = 1 and ∆²q_i = 1 for all i, which means that φ(s) and Γ_{φ,s}(s) are convex. Moreover, we see from the right-most expression in (6.10) that q_i ≤ φ(s̄_i). Combining this with the fact that φ(s) (locally) is contained in the convex hull of the control points leads to Γ_{φ,s}(s) ≤ φ(s).
Let us now consider the line h(s) through q_ζ and q_{ζ+1},

    h(s) = q_ζ + (s − s̄_ζ) (q_{ζ+1} − q_ζ)/(s̄_{ζ+1} − s̄_ζ) = q_ζ + (s − s̄_ζ) ∆q_{ζ+1},

where ∆q_{ζ+1} = Σ_{k=ζ+2}^{ζ+d} s_k/(d − 1) is a coefficient of the derivative φ′(s) = s. We clearly have h(s̄_ζ) = q_ζ and h(s̄_{ζ+1}) = q_{ζ+1}. Since φ(s) is a convex spline function, it cannot interpolate two consecutive control points. In other words, either h(s̄_ζ) = q_ζ < φ(s̄_ζ) or h(s̄_{ζ+1}) = q_{ζ+1} < φ(s̄_{ζ+1}) or both inequalities are true. In addition it follows from the convexity of the control polygon that either h ≤ Γ_{φ,s} or h ≥ Γ_{φ,s}. Hence both φ and Γ_{φ,s} lie in the area above h(s), from which we conclude that

    h(s) ≤ Γ_{φ,s}(s) ≤ φ(s)  and  |φ(s) − Γ_{φ,s}(s)| ≤ φ(s) − h(s).    (6.11)
The quadratic function φ(s) = s²/2 can be described completely by its second order Taylor expansion around s̄_ζ, given by φ(s) = φ(s̄_ζ) + (s − s̄_ζ) s̄_ζ + (s − s̄_ζ)²/2. Together with the fact that both s̄_ζ and ∆q_{ζ+1} lie in the interval [α, β], this yields

    φ(s) − h(s) = |φ(s) − h(s)| ≤ |φ(s̄_ζ) − q_ζ| + |(s − s̄_ζ)(s̄_ζ − ∆q_{ζ+1})| + (s − s̄_ζ)²/2
                ≤ R_3 δ² + |s − s̄_ζ| δ + (s − s̄_ζ)²/2,

where the constant R_3 depends only on d and comes from the scalar version of Lemma 6.2. By restricting s to the interval [α, β] we end up with |φ(s) − h(s)| ≤ R̃_2 δ². Combining this with (6.8), (6.9) and (6.11), we are led to the required inequality. □
Lemma 6.6 has a natural generalization to parametric curves.
Corollary 6.7 Let f ∈ S^m_{d,s} be a spline curve, let [α, β] be a parameter interval for f that contains at least d knots from s and set δ = β − α. On the interval [α, β] the distance between the control polygon and the spline is bounded by

    ‖f − Γ_{f,s}‖_{2,∞,[α,β]} ≤ R_4 δ² ‖D²f‖_{2,∞},    (6.12)

where the constant R_4 depends only on d and m.
Our main tool in the proof of convergence is an estimate of the distance between the control polygon and a tangent to a spline curve. This estimate is a simple consequence of Corollary 6.7. A tangent is well defined at a regular point of f, a point where the derivative is continuous and nonzero, i.e., where D⁻f(s) = D⁺f(s) ≠ 0. Here D⁻ denotes one-sided differentiation from the left and D⁺ = D one-sided differentiation from the right. Let f(ρ) be a regular point. Then the tangent line T_ρ is given by

    T_ρ(s) := f(ρ) + (s − ρ) Df(ρ).
Lemma 6.8 Let f(ρ) be a regular point on f ∈ S^m_{d,s}, let also δ be a positive number such that f is C² in [ρ − δ, ρ) ∪ (ρ, ρ + δ], and suppose that the interval [ρ − δ, ρ + δ] contains at least d knots. Then for u ∈ [−δ, δ]

    ‖Γ_{f,s}(ρ + u) − T_ρ(ρ + u)‖_2 ≤ δ² R_{5,f},

where R_{5,f} is a constant that depends on f but not on s.
Proof. From a Taylor expansion of f_l around ρ we obtain

    |f_l(ρ + u) − f_l(ρ) − u Df_l(ρ)| ≤ (u²/2) ‖D²f_l‖_∞ ≤ (u²/2) ‖D²f‖_{2,∞}

for any u ∈ [−δ, δ]. This is easily extended to the inequality

    ‖f(ρ + u) − T_ρ(ρ + u)‖_2 ≤ (u²/2) √m ‖D²f‖_{2,∞},

which can be combined with Corollary 6.7 to give the required result

    ‖Γ_{f,s}(ρ + u) − T_ρ(ρ + u)‖_2 ≤ ‖Γ_{f,s}(ρ + u) − f(ρ + u)‖_2 + ‖f(ρ + u) − T_ρ(ρ + u)‖_2
        ≤ R_4 4δ² ‖D²f‖_{2,∞} + (u²/2) √m ‖D²f‖_{2,∞} ≤ δ² R_{5,f},

with R_{5,f} = (4R_4 + √m/2) ‖D²f‖_{2,∞}. □
6.4.3 Accumulation Points
We are now ready to start the analysis of the infinite sequence of parameter values {σ^k, τ^k} generated by Algorithm 1, i.e., the parameter pairs of successive intersections of control polygons. By the Bolzano-Weierstrass theorem this sequence must necessarily have at least one accumulation point since it is bounded. With the aid of Lemma 6.5 we can show that an accumulation point is an intersection point between the two curves.

Theorem 6.9 If the pair (σ, τ) is an accumulation point of the infinite sequence {σ^k, τ^k}, then f(σ) = g(τ).
Proof. Let ε be any positive real number. As (σ, τ) is an accumulation point, there must be positive integers k, i and j such that s^k_{i+1}, . . . , s^k_{i+d} and σ^k lie in the open interval (σ − ε/2, σ + ε/2) and t^k_{j+1}, . . . , t^k_{j+e} and τ^k lie in the open interval (τ − ε/2, τ + ε/2). Since (σ^k, τ^k) is defined by the relation Γ_{f,s^k}(σ^k) = Γ_{g,t^k}(τ^k), we can bound the distance between f(σ) and g(τ) by

    ‖f(σ) − g(τ)‖_2 ≤ ‖Γ_{f,s^k}(σ^k) − f(σ)‖_2 + ‖Γ_{g,t^k}(τ^k) − g(τ)‖_2.

The distance between f(σ) and Γ_{f,s^k}(σ^k) can be bounded using Lemma 6.2, Lemma 6.5 and the mean value theorem for f:

    ‖Γ_{f,s^k}(σ^k) − f(σ)‖_2 ≤ ‖Γ_{f,s^k}(σ^k) − Γ_{f,s^k}(s̄^k_i)‖_2 + ‖Γ_{f,s^k}(s̄^k_i) − f(s̄^k_i)‖_2 + ‖f(s̄^k_i) − f(σ)‖_2
        ≤ (K_{d−1} + 1) ‖Df‖_{2,∞} ε + Q_{d,m} ‖D²f‖_{2,∞} ε²,

where the constants Q_{d,m} and K_{d−1} depend only on d and m, but not on s^k. In the same way we obtain the inequality

    ‖Γ_{g,t^k}(τ^k) − g(τ)‖_2 ≤ (K_{e−1} + 1) ‖Dg‖_{2,∞} ε + Q_{e,m} ‖D²g‖_{2,∞} ε².

Combining these two bounds leads to ‖f(σ) − g(τ)‖_2 ≤ R_6 ε + R_7 ε², where R_6 and R_7 are constants. Since this bound holds for any positive value of ε we must have f(σ) = g(τ). □
6.4.4 Convergence

At transversal intersections one or both of the curves may have discontinuous tangents at the intersection, which may complicate the situation. We will consider this later; for now we only consider transversal intersections at regular points.
Definition 6.10 An intersection between two parametric curves f and g at the regular points f(σ) = g(τ) is said to be a transversal intersection if

    det(Df(σ)  Dg(τ)) ≠ 0.
[Figure 6.3 shows the tangents T_{f,σ} and T_{g,τ}, the control polygons Γ_f and Γ_g, the tube sets U_f and U_g, the circles O_f(ε₁) and O_g(−ε₁), the angle θ and the distances dist_{f⁺,g} and dist_{g⁻,f}.]
Figure 6.3: The situation around a transversal intersection.
The variation diminishing property of splines guarantees that the control polygon of a spline function always has a zero near a zero of the function. We need an analogous result for intersections which guarantees that the control polygons of two spline curves intersect near an intersection of the two curves. Lemma 6.8 will help us establish this kind of result.
Theorem 6.11 Suppose that the two spline curves f ∈ S²_{d,s} and g ∈ S²_{e,t} have a transversal intersection at (σ, τ). Then the two control polygons will intersect at least once in the intervals (σ − ε₁, σ + ε₁) and (τ − ε₁, τ + ε₁), provided the positive number ε₁ satisfies the two conditions:

1. The curve f has at least d knots in the interval [σ − ε₁, σ + ε₁] and g has at least e knots in [τ − ε₁, τ + ε₁].

2. The number ε₁ is smaller than a number ε₂ that only depends on f and g.
The two conditions provide lower and upper bounds on ε₁. The first condition, which requires that we have enough knots in the intervals [σ − ε₁, σ + ε₁] and [τ − ε₁, τ + ε₁], gives a lower bound λ on ε₁, while the second condition gives the upper bound ε₂. It may well happen that λ > ε₂, in which case the control polygons may not intersect even though the two curves intersect. On the other hand, the lower bound depends on the knots while the upper bound is independent of the knots. This means that by inserting knots sufficiently near the intersection, we can always decrease λ so that it becomes smaller than ε₂.
Proof. The idea behind the proof is illustrated in Figure 6.3. Since f and g are piecewise polynomials, there exists a δ > 0 such that f is C² in [σ − δ, σ) ∪ (σ, σ + δ] and g is C² in [τ − δ, τ) ∪ (τ, τ + δ]. Initially, we assume that we can satisfy both conditions by letting ε₁ satisfy 0 < ε₁ < δ, and we let i and j be such that s_{i+1}, . . . , s_{i+d} ∈ [σ − ε₁, σ + ε₁] and t_{j+1}, . . . , t_{j+e} ∈ [τ − ε₁, τ + ε₁]. According to Lemma 6.8 this means that for all u ∈ [−ε₁, ε₁] we have

    ‖Γ_f(σ + u) − T_{f,σ}(σ + u)‖_2 ≤ ε₁² R_{5,f} ≤ ε₁² R_8,    (6.13)
    ‖Γ_g(τ + u) − T_{g,τ}(τ + u)‖_2 ≤ ε₁² R_{5,g} ≤ ε₁² R_8,    (6.14)

where R_8 := max(R_{5,f}, R_{5,g}). For each u ∈ [−ε₁, ε₁] we let O_f(u) denote the circle with center at T_{f,σ}(σ + u) and radius ε₁² R_8, and we let U_f denote the union of these circles as u varies in the interval [−ε₁, ε₁],

    U_f = ∪_{u∈[−ε₁,ε₁]} O_f(u).
The set U_g is generated by similar circles centered at T_{g,τ}(τ + u).

To reach the desired conclusion, we assume that ε₁ is so small that the two circles O_f(−ε₁) and O_f(ε₁) do not intersect U_g and the two circles O_g(−ε₁) and O_g(ε₁) do not intersect U_f. We will later deduce conditions that ensure that such an ε₁ exists. We know from (6.13) and (6.14) that Γ_f(σ + u) ∈ U_f and that Γ_g(τ + u) ∈ U_g for all u ∈ [−ε₁, ε₁], and that U_f and U_g intersect since they share the circle centered at the intersection point. Because we have a transversal intersection, the two circles O_f(±ε₁) lie on opposite sides of U_g and therefore the points Γ_f(σ ± ε₁) do not lie in U_g, but on opposite sides of it. Similarly the two points Γ_g(τ ± ε₁) do not lie in the set U_f, but on opposite sides of it. Hence the control polygons must intersect at least once in the intersection of U_f and U_g.

We are now left with the task of determining an upper bound ε₂ on ε₁ which ensures that the two circles O_f(±ε₁) do not intersect U_g and the two circles O_g(±ε₁) do not intersect U_f. Let 0 < θ ≤ 90° be the positive angle between T_{f,σ} and T_{g,τ}. Let us consider dist_{f⁺,g}, the distance of T_{f,σ}(σ + ε₁) to T_{g,τ}, and dist_{g⁺,f}, the distance of T_{g,τ}(τ + ε₁) to T_{f,σ}. We then have

    sin θ = dist_{f⁺,g} / (ε₁ ‖Df(σ)‖_2)  and  sin θ = dist_{g⁺,f} / (ε₁ ‖Dg(τ)‖_2),

which leads to

    dist_{f⁺,g} = ε₁ ‖Df(σ)‖_2 sin θ,   dist_{g⁺,f} = ε₁ ‖Dg(τ)‖_2 sin θ.

If we define the constant R_9 := min(‖Df(σ)‖_2, ‖Dg(τ)‖_2) sin θ, we have dist_{f⁺,g} ≥ ε₁ R_9 and dist_{g⁺,f} ≥ ε₁ R_9. Similarly we get the same bound for the other distances dist_{f⁻,g} and dist_{g⁻,f}. We therefore need to make sure that ε₁ is so small that the radius of the circles is less than half the minimum distance (ε₁² R_8 < ε₁ R_9 / 2). This gives the bound 0 < ε₁ < min(R_9/(2R_8), δ), where R_8, R_9 and δ all depend on f and g, but not on the knot vectors s or t. We can therefore set ε₂ := min(R_9/(2R_8), δ), which completes the proof. □
Corollary 6.12 Suppose that the two spline curves f ∈ S²_{d,s} and g ∈ S²_{e,t} have a transversal intersection at (σ, τ), and suppose that this is also an accumulation point of the sequence {σ^k, τ^k}. Given a sufficiently small ε₁ > 0 there exists an N > 0 such that the control polygons intersect for all k > N somewhere in the intervals [σ − ε₁, σ + ε₁] and [τ − ε₁, τ + ε₁].
[Figure 6.4 shows the four tangents T⁺_{f,σ}, T⁻_{f,σ}, T⁺_{g,τ} and T⁻_{g,τ} and the angles θ⁻_f, θ⁺_g and θ⁻_g.]
Figure 6.4: A generalized transversal intersection.
Proof. Since (σ, τ) is an accumulation point there exists an N > 0 such that for all k > N the curve f has at least d knots in [σ − ε₁, σ + ε₁] and g at least e knots in [τ − ε₁, τ + ε₁]. According to Theorem 6.11 there is an ε₂ > 0 with the property that the control polygons will intersect in the given intervals for all k > N provided 0 < ε₁ < ε₂. □
Up to now the derivative at a regular point was assumed to be continuous. This was merely for convenience, and in the case of splines it is straightforward to generalize the concept of regular points to the situation where the derivative is discontinuous. We call a point f(s) a generalized regular point if ‖D⁻f(s)‖_2 ≠ 0 and ‖D⁺f(s)‖_2 ≠ 0. At a generalized regular point of f we then denote the left tangent by T⁻_ρ and the right tangent by T⁺_ρ,

    T⁻_ρ(s) := f(ρ) + (s − ρ) D⁻f(ρ),
    T⁺_ρ(s) := f(ρ) + (s − ρ) D⁺f(ρ).
A generalized transversal intersection is a transversal intersection at a generalized regular point. Formally it can be defined as follows. Let the two spline curves f ∈ S²_{d,s} and g ∈ S²_{e,t} intersect at (σ, τ) and let f(σ) and g(τ) be generalized regular points. Let also θ⁻_f be the counter clockwise angle between T⁺_{f,σ} and T⁻_{f,σ}, starting at T⁺_{f,σ}, and let θ^±_g be the counter clockwise angles between T⁺_{f,σ} and T^±_{g,τ}, again starting at T⁺_{f,σ}, see Figure 6.4. Now let θ̄_g := max(θ⁺_g, θ⁻_g) and θ_g := min(θ⁺_g, θ⁻_g). The intersection between the two spline curves f and g at (σ, τ) is called a generalized transversal intersection if

    0° < θ_g < θ⁻_f < θ̄_g < 360°,

i.e. when turning counter clockwise around the intersection starting at T⁺_{f,σ} one first encounters one of the tangents to g(τ), then T⁻_{f,σ} and finally the other tangent to g(τ).
Lemma 6.8 can be applied as it is to the left and right tangents of each curve. The proof of Theorem 6.11 will essentially still be valid. The only difference is in the estimate of the upper bound on ε₁. Here the distances of the points T^±_{f,σ}(σ ± ε₁) and T^±_{g,τ}(τ ± ε₁) to both tangents of the other curve have to be considered. This yields the analogous bounds dist_{f^±,g^±} ≥ ε₁ R_{10} and dist_{g^±,f^±} ≥ ε₁ R_{10}, where

    R_{10} := min(‖D^±f(σ)‖_2, ‖D^±g(τ)‖_2) sin θ,

where θ is the smallest angle between the four tangent lines. With this we can restate Corollary 6.12 for generalized transversal intersections.
Corollary 6.13 Suppose that the two spline curves f ∈ S²_{d,s} and g ∈ S²_{e,t} have a generalized transversal intersection at (σ, τ), and suppose that this is also an accumulation point of the sequence {σ^k, τ^k}. Given a sufficiently small ε > 0 there exists an N > 0 such that the control polygons intersect for all k > N somewhere in the intervals [σ − ε, σ + ε] and [τ − ε, τ + ε].
Corollary 6.13 enables us to show that if the curves only have generalized transversal intersections, the sequence {σ^k, τ^k} will have only one accumulation point and thus will converge.

Lemma 6.14 Let the curves f ∈ S²_{d,s} and g ∈ S²_{e,t} have only generalized transversal intersections. Then the infinite sequence {σ^k, τ^k} converges.
Proof. Suppose that there is more than one accumulation point. Let (σ_a, τ_a) and (σ_b, τ_b) be two neighbouring accumulation points and let ε < min(|σ_b − σ_a|, |τ_b − τ_a|)/2 be small enough for the conclusion of Corollary 6.13 to hold. Then there exists an N > 0 such that the control polygons intersect in the interval around (σ_a, τ_a) and in the interval around (σ_b, τ_b) for all k > N. The lexicographical order now ensures that one intersection is always smaller than the other, and therefore for all k > N an intersection of the control polygons corresponding to the same curve intersection is inserted. But then the sequence cannot have more than one accumulation point, i.e. the sequence converges. □
6.4.5 Convergence Rate

We now know that our intersection algorithm converges; it remains to examine the convergence rate. Let (σ, τ) be an accumulation point of the sequence {σ^k, τ^k} and let the control polygon segments b^k_{i_k−1}, b^k_{i_k} and c^k_{j_k−1}, c^k_{j_k} contain the chosen intersection between the control polygons at iteration k. In this setting Equation (6.3) becomes

    (σ^k, τ^k)ᵀ = (s̄^k_{i_k}, t̄^k_{j_k})ᵀ − M_k⁻¹ (F_{i_k} − G_{j_k}),   M_k = (d D_d F_{i_k}  −e D_e G_{j_k}),    (6.15)

where F and G are the blossoms of f and g and F_{i_k} = F(s^k_{i_k+1}, . . . , s^k_{i_k+d}), D_d F_{i_k} = D_d F(s^k_{i_k+1}, . . . , s^k_{i_k+d}), with G_{j_k} and D_e G_{j_k} defined similarly. It is apparent that Algorithm 1 resembles a discrete version of Newton's method, so we should expect a quadratic convergence rate. We will confirm that the algorithm converges quadratically at transversal intersections. This is done with the help of two lemmas, the first of which reformulates (6.15) and the second shows convergence of the inner knots s^k_{i_k+1}, . . . , s^k_{i_k+d−1} and t^k_{j_k+1}, . . . , t^k_{j_k+e−1} to the respective parameter values at the intersection.
Lemma 6.15 The error vector (σ^k, τ^k)ᵀ − (σ, τ)ᵀ satisfies the equation

    (σ^k − σ, τ^k − τ)ᵀ = M_k⁻¹ v_k,

where

    v_k = Σ_{l=2}^{d−1} Σ_{r=1}^{l−1} (σ − s^k_{i_k+l})(σ − s^k_{i_k+r}) D_{l,r} F(s^k_{i_k+1}, . . . , s^k_{i_k+l}, σ, . . . , σ)
        − Σ_{l=2}^{e−1} Σ_{r=1}^{l−1} (τ − t^k_{j_k+l})(τ − t^k_{j_k+r}) D_{l,r} G(t^k_{j_k+1}, . . . , t^k_{j_k+l}, τ, . . . , τ)
        + Σ_{l=1}^{d−1} (σ − s^k_{i_k+l})² D_{l,d} F_{i_k} − Σ_{l=1}^{e−1} (τ − t^k_{j_k+l})² D_{l,e} G_{j_k}.
Proof. We start by subtracting (σ, τ)ᵀ from both sides of (6.15),

    (σ^k − σ, τ^k − τ)ᵀ = M_k⁻¹ [ M_k ((1/d) Σ_{l=1}^{d} (s^k_{i_k+l} − σ), (1/e) Σ_{l=1}^{e} (t^k_{j_k+l} − τ))ᵀ − (F_{i_k} − G_{j_k}) ],    (6.16)

where the expression in square brackets is the vector v_k. We then add and subtract Σ_{l=1}^{d} (σ − s^k_{i_k+l}) D_l F_{i_k} − Σ_{l=1}^{e} (τ − t^k_{j_k+l}) D_l G_{j_k} from the vector v_k in (6.16). Since

    Σ_{l=1}^{d} (σ − s^k_{i_k+l}) D_l F_{i_k} − Σ_{l=1}^{d} (σ − s^k_{i_k+l}) D_d F_{i_k} = Σ_{l=1}^{d−1} (σ − s^k_{i_k+l})(s^k_{i_k+d} − s^k_{i_k+l}) D_{l,d} F_{i_k},

the vector v_k may be written as

    v_k = −F_{i_k} − Σ_{l=1}^{d} (σ − s^k_{i_k+l}) D_l F_{i_k} + Σ_{l=1}^{d−1} (σ − s^k_{i_k+l})(s^k_{i_k+d} − s^k_{i_k+l}) D_{l,d} F_{i_k}
        + G_{j_k} + Σ_{l=1}^{e} (τ − t^k_{j_k+l}) D_l G_{j_k} − Σ_{l=1}^{e−1} (τ − t^k_{j_k+l})(t^k_{j_k+e} − t^k_{j_k+l}) D_{l,e} G_{j_k}.    (6.17)

We next make use of the fact that f(σ) − g(τ) = 0 and perform Taylor expansions of the blossoms F and G,

    0 = f(σ) − g(τ) = F(σ, . . . , σ) − G(τ, . . . , τ)
      = F_{i_k} − G_{j_k} + Σ_{l=1}^{d} (σ − s^k_{i_k+l}) D_l F_{i_k} − Σ_{l=1}^{e} (τ − t^k_{j_k+l}) D_l G_{j_k}
        + Σ_{l=2}^{d} Σ_{r=1}^{l−1} (σ − s^k_{i_k+l})(σ − s^k_{i_k+r}) D_{l,r} F(s^k_{i_k+1}, . . . , s^k_{i_k+l}, σ, . . . , σ)
        − Σ_{l=2}^{e} Σ_{r=1}^{l−1} (τ − t^k_{j_k+l})(τ − t^k_{j_k+r}) D_{l,r} G(t^k_{j_k+1}, . . . , t^k_{j_k+l}, τ, . . . , τ).

Using this we can replace the first, second, fourth and fifth term in (6.17) by the quadratic error terms of the Taylor expansion. Combining the terms with l = d (respectively l = e) in the resulting double sums with the third and sixth terms of (6.17), using (σ − s^k_{i_k+d}) + (s^k_{i_k+d} − s^k_{i_k+r}) = σ − s^k_{i_k+r}, then yields the required form of v_k. □
The second lemma shows that the inner knots converge.

Lemma 6.16 For any ε > 0 there exists an N > 0 such that for all k > N

    |s^k_{i_k+l} − σ| < ε,  l = 1, . . . , d − 1,
    |t^k_{j_k+r} − τ| < ε,  r = 1, . . . , e − 1.
Proof. Let λ = ε/(2d). Since the sequence {σ^k, τ^k} converges to (σ, τ), there exists an N_f > 0 such that for all k > N_f there is an integer ξ such that s^k_{ξ+1}, . . . , s^k_{ξ+d} and σ^k lie in the interval J = [σ − λ, σ + λ]. The number s̄^k_ξ must clearly also lie in this interval. There are two possibilities: either ξ ≤ i_k − 1 or ξ ≥ i_k. Let us consider the first situation. Then we have s̄^k_ξ ≤ s̄^k_{i_k−1} ≤ σ^k, hence s̄^k_{i_k−1} ∈ J and s^k_{ξ+1} ≤ s^k_{i_k} ≤ s̄^k_{i_k−1}. But then s^k_{i_k} ∈ J, which provides an upper bound for s^k_{i_k+d−1}. From σ − λ ≤ s^k_{i_k} ≤ . . . ≤ s^k_{i_k+d−2} and s̄^k_{i_k−1} ≤ σ + λ we get s^k_{i_k+d−1} ≤ σ + (2d − 1)λ < σ + ε, and therefore we have s^k_{i_k}, . . . , s^k_{i_k+d−1} ∈ [σ − λ, σ + ε).

Consider next the second case ξ ≥ i_k. As in the first case this leads to σ^k ≤ s̄^k_{i_k} ≤ s̄^k_ξ and s̄^k_{i_k} ≤ s^k_{i_k+d} ≤ s^k_{ξ+d}. But then we have s̄^k_{i_k}, s^k_{i_k+d} ∈ J and s^k_{i_k+1} ≥ σ − (2d − 1)λ > σ − ε, so we must have s^k_{i_k+1}, . . . , s^k_{i_k+d} ∈ (σ − ε, σ + λ]. If we combine both cases we obtain s^k_{i_k+1}, . . . , s^k_{i_k+d−1} ∈ (σ − ε, σ + ε) for all k > N_f.

The proof that t^k_{j_k+1}, . . . , t^k_{j_k+e−1} ∈ (τ − ε, τ + ε) for all k > N_g is similar. Defining N = max(N_f, N_g) finishes the proof. □
We now have the tools required to establish the quadratic convergence rate of the intersection algorithm.
Theorem 6.17 Suppose that the two curves f and g have a transversal intersection at (σ, τ). Then there is a constant R_{11} that depends on f and g but not on s^k and t^k so that for sufficiently large k

    ‖(σ^k − σ, τ^k − τ)ᵀ‖_2 ≤ R_{11} ‖( max_{l=1,...,d−1} |σ − s^k_{i_k+l}|, max_{l=1,...,e−1} |τ − t^k_{j_k+l}| )ᵀ‖_2².    (6.18)

Proof. At a transversal intersection f′(σ) and g′(τ) are linearly independent. Since the two
sets of inner knots converge to σ and τ respectively, there must be an N such that the matrix M_k defined in (6.15) is nonsingular for all k > N and 0 < ‖M_k⁻¹‖_2 ≤ R_{12} for some positive constant R_{12} independent of s^k and t^k. Let k be an integer greater than N and set

    ς_k = max_{l=1,...,d−1} |σ − s^k_{i_k+l}|,   δ_k = max_{l=1,...,e−1} |τ − t^k_{j_k+l}|.    (6.19)

We know from Lemma 6.15 that ‖(σ^k − σ, τ^k − τ)ᵀ‖_2 = ‖M_k⁻¹ v_k‖_2 ≤ R_{12} ‖v_k‖_2. Using (6.19) and Lemma 6.3 we obtain

    ‖v_k‖_2 ≤ ς_k² K_{F″} + δ_k² K_{G″} ≤ R_{13} (ς_k² + δ_k²) = R_{13} ‖(ς_k, δ_k)ᵀ‖_2²

with R_{13} = max(K_{F″}, K_{G″}). Defining R_{11} = R_{12} R_{13} yields (6.18). □
According to Lemma 6.16 the inner knots converge to the intersection parameters (σ, τ). Theorem 6.17 therefore shows that {σ^k, τ^k} converges quadratically to the intersection. Our practical experience is that the sequence converges quadratically when every max(d − 1, e − 1) iterations are counted as one step.
6.5 Examples
In the last section we showed convergence and a quadratic convergence rate for Algorithm
1. We will now consider some examples that illustrate the behaviour of the algorithm and
compare it to Bézier clipping [72] and the uniform subdivision approach. We have chosen
to compare with Bézier clipping as this method also has quadratic convergence as recently
proved in [69], and we compare with the subdivision strategy as it is widely known.
Let us first give a quick review of Bézier clipping. This method only works for Bézier
curves, hence the spline curves need to be converted to composite Bézier representation
first. In short, Bézier clipping cuts away regions where one Bézier curve is guaranteed not to intersect the other. To do this it alternately clips one curve against a 'fat line' of the other.
Let L denote the line determined by the end control points b0 , bd of f and let distL (x) be the
signed distance of x to L. Then the fat line along L is defined by {x | dmin ≤ distL (x) ≤
dmax } where dmin = min0≤i≤d (distL (bi )) and dmax = max0≤i≤d (distL (bi )). To clip
g against the fat line along L, the distance function distL (g) is computed. Observe that
this distance function can naturally be represented as a Bézier function. Finally the convex
hull property of Bézier functions is used to cut away regions where the distance function
is guaranteed to be smaller than dmin or greater than dmax . If the resulting interval is not
small enough according to some heuristic, the curve with the larger parameter interval is
subdivided and Bézier clipping is continued with both halves. For more information see
[69, 72].
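One clipping step can be sketched as follows. This is our own simplified stand-in: instead of intersecting the exact convex hull of the distance function's control points (i/n, d_i) with the fat-line band, it tests the segments through all pairs of control points, which produces the same conservative parameter interval.

```python
def clip_interval(dist_ctrl, dmin, dmax):
    """One Bezier-clipping step (a simplified sketch, not the paper's code).

    dist_ctrl: Bezier coefficients d_0..d_n of the signed distance of one
               curve to the other curve's fat line; the distance function's
               control points are (i/n, d_i).
    Returns the sub-interval of [0, 1] that may contain an intersection,
    or None if the whole curve lies outside the fat line.
    """
    n = len(dist_ctrl) - 1
    pts = [(i / n, d) for i, d in enumerate(dist_ctrl)]
    # control points already inside the band contribute their abscissae
    ts = [t for t, d in pts if dmin <= d <= dmax]
    # crossings of segments through pairs of control points with the band
    for i in range(len(pts)):
        for j in range(i + 1, len(pts)):
            (t0, d0), (t1, d1) = pts[i], pts[j]
            for bound in (dmin, dmax):
                if (d0 - bound) * (d1 - bound) < 0:  # segment crosses bound
                    ts.append(t0 + (bound - d0) / (d1 - d0) * (t1 - t0))
    return (min(ts), max(ts)) if ts else None
```

For a quadratic whose distance coefficients are [-1, 1, -1], clipped against the zero-width fat line d = 0, only t in [0.25, 0.75] can contain an intersection.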
For the subdivision algorithm we simply maintain a stack of curve pairs which may
intersect, similarly to the generic algorithm described in [84]. As long as the stack is not
empty and no intersection is found, the first pair of curves is removed from the stack and is
processed. If the convex hulls of the two curves overlap and the parameter intervals of the
curves are small enough, an intersection is assumed at the center of both intervals. If one
parameter interval is not small enough, the curve with the larger interval is subdivided in
the middle and both combinations are put back on the stack.
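A minimal version of this subdivision loop can be sketched as follows, assuming Bézier segments and using axis-aligned bounding boxes in place of convex hulls (a common, slightly looser inclusion test); the names and conventions are ours:

```python
def split(ctrl):
    """de Casteljau subdivision of a Bezier curve at t = 1/2."""
    left, right, layer = [ctrl[0]], [ctrl[-1]], list(ctrl)
    while len(layer) > 1:
        layer = [tuple((p + q) / 2 for p, q in zip(a, b))
                 for a, b in zip(layer, layer[1:])]
        left.append(layer[0])
        right.insert(0, layer[-1])
    return left, right

def boxes_overlap(c1, c2):
    """Axis-aligned bounding boxes of two control point sets overlap?"""
    for k in range(2):
        lo1, hi1 = min(p[k] for p in c1), max(p[k] for p in c1)
        lo2, hi2 = min(p[k] for p in c2), max(p[k] for p in c2)
        if hi1 < lo2 or hi2 < lo1:
            return False
    return True

def subdivision_intersect(f, g, tol=1e-3):
    """Return one approximate intersection parameter pair, or None."""
    stack = [(f, 0.0, 1.0, g, 0.0, 1.0)]
    while stack:
        cf, a, b, cg, c, d = stack.pop(0)
        if not boxes_overlap(cf, cg):
            continue
        if b - a < tol and d - c < tol:
            return (a + b) / 2, (c + d) / 2  # assume an intersection here
        if b - a >= d - c:  # subdivide the curve with the larger interval
            l, r = split(cf)
            stack += [(l, a, (a + b) / 2, cg, c, d),
                      (r, (a + b) / 2, b, cg, c, d)]
        else:
            l, r = split(cg)
            stack += [(cf, a, b, l, c, (c + d) / 2),
                      (cf, a, b, r, (c + d) / 2, d)]
    return None
```

For two quadratic Bézier curves tracing the diagonals (t, t) and (t, 1 − t), the loop homes in on the crossing near parameters (0.5, 0.5).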
In Algorithm 1 we use the density of the knots to determine convergence. Using the notation of Lemma 6.16, it is shown in [56] that for σ^k ∈ [s̄^k_{i−1}, s̄^k_i) the distance between a spline and its control polygon can be bounded by

    ‖f(σ^k) − Γ_{f,s}(σ^k)‖_2 = O(h²),

with

    h := max(σ^k, s^k_{i+d−1}) − min(σ^k, s^k_{i+1})

for small enough h. From Lemma 6.16 we know that the knots s^k_{i+1}, . . . , s^k_{i+d−1} as well as σ^k converge to an accumulation point, hence h goes to zero. If now h < ε for both curves, where ε is a given tolerance, an intersection at the current parameter pair is assumed.
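The criterion amounts to measuring the spread of σ^k together with the surrounding knots. A small sketch with our own helper name and example knot vector (not the thesis code):

```python
def converged(knots, i, d, sigma, eps):
    """Density criterion h < eps of Algorithm 1: h is the spread of the
    current parameter sigma together with the knots s_{i+1}, ..., s_{i+d-1}
    around it, where i indexes the knot interval containing sigma.
    (Illustrative helper with hypothetical indexing conventions.)"""
    cluster = list(knots[i + 1:i + d]) + [sigma]
    h = max(cluster) - min(cluster)
    return h < eps

# cubic curve (d = 3): the knots around sigma = 0.5 are already dense
knots = [0, 0, 0, 0.49, 0.5, 0.51, 1, 1, 1]
print(converged(knots, 3, 3, 0.5, 1e-1))   # True:  h = 0.01 < 0.1
print(converged(knots, 3, 3, 0.5, 1e-3))   # False: h = 0.01 > 0.001
```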
CHAPTER 6. COMPUTING INTERSECTIONS OF PLANAR SPLINE CURVES
            ----------- Example 1 -----------    ----------- Example 2 -----------
Iteration   E_ICP     E_BC      E_SUB            E_ICP     E_BC      E_SUB
0           1.2e-02   3.1e-01   3.4e-01          2.4e-01   2.5e-01   2.6e-01
1           3.7e-05   2.4e-01   2.0e-01          2.6e-02   8.5e-02   1.9e-01
2           3.4e-10   3.5e-01   8.7e-02          9.5e-04   1.2e-03   1.2e-01
3           6.2e-17   7.5e-02   1.7e-02          3.7e-07   1.5e-04   7.8e-02
4                     7.6e-04   1.1e-02          1.7e-13   1.9e-08   2.6e-02
5                     4.0e-08   3.5e-03          4.7e-16   5.0e-16   1.7e-02
6                     1.4e-15   2.7e-03                              6.3e-03
7                               1.1e-03                              1.0e-03
...
34                              3.3e-13                              2.6e-14
35                              5.9e-14                              4.9e-15
36                              5.4e-14                              8.0e-16
37                              1.6e-14
38                              6.8e-15
39                              3.2e-15

Table 6.1: Error ‖(σ^k, τ^k)^T − (σ, τ)^T‖_2 in iteration k for Algorithm 1 (E_ICP), Bézier clipping (E_BC) and recursive subdivision (E_SUB) for Examples 1 and 2.
The effect of inserting the intersection of the control polygons into the spline curves is demonstrated in Figure 6.1. It is clearly visible how the intersections of the new control polygons approach the intersection of the spline curves. To compare the actual errors ‖(σ^k, τ^k)^T − (σ, τ)^T‖_2 in each iteration between the different methods, we first have to define what we consider to be an iteration. Bézier clipping and Algorithm 1 are both quadratically convergent when a suitable number of low-level operations are combined into one. We therefore define one iteration to be this number of low-level operations. Hence for Algorithm 1 one iteration consists of max(d − 1, e − 1) knot insertions in each curve, whether the knot pairs come from intersecting control polygons or overlapping local convex hulls, as in the limit we will only have intersecting control polygons. Similarly, in [69] it is shown that in the limit Bézier clipping will lead to pure clipping steps applied alternately to both curves. Hence, we consider two steps as one iteration, where one step is either a clipping or a subdivision step. The uniform subdivision method only exhibits linear convergence, so it is not possible to obtain a quadratically convergent method by combining a fixed number of low-level operations into one iteration. In the limit Bézier clipping will split each curve two times in each iteration, leading to four splitting operations in one iteration. As one splitting is similar to one subdivision step in the subdivision algorithm, we consider four such steps to be one iteration for this third method. Note that the last iteration in each method may consist of fewer steps, as the intersection can be located before another complete iteration is finished.
We will now compare the behaviour of the three methods for the Examples 1 to 6 shown
6.5. EXAMPLES

Figure 6.5: Examples. a) Example 1, d = 3, e = 3. b) Example 2, d = 3, e = 3. c) Example 3, d = 4, e = 5. d) Example 4, d = 8, e = 5. e) Example 5, d = 5, e = 7. f) Example 6, d = 3, e = 3.
Table 6.2: Error ‖(σ^k, τ^k)^T − (σ, τ)^T‖_2 in iteration k for Algorithm 1 (E_ICP), Bézier clipping (E_BC) and recursive subdivision (E_SUB) for Examples 3 and 4.
Table 6.3: Error ‖(σ^k, τ^k)^T − (σ, τ)^T‖_2 in iteration k for Algorithm 1 (E_ICP), Bézier clipping (E_BC) and recursive subdivision (E_SUB) for Examples 5 and 6.
Example                   1      2      3        4        5        6
Algorithm 1, comp.        14     21     42       76       38       18
Algorithm 1, ins.         14     21     42       76       38       18
Bézier clipping, comp.    23(0)  22(0)  56(14)   43(7)    32(6)    41(4)
Bézier clipping, ins.     69(0)  66(0)  240(50)  271(37)  188(26)  119(8)
Subdivision, comp.        94     97     114      104      99       194
Subdivision, ins.         282    291    513      672      591      582

Table 6.4: Comparison of the number of knots computed (comp.) and inserted (ins.) for the different methods. For Bézier clipping the conversion costs are included and in addition shown in parentheses.
in Figure 6.5. The examples with an odd number have intersecting control polygons, the ones with an even number do not. The first two examples show two simple polynomial curves with one intersection. The errors ‖(σ^k, τ^k)^T − (σ, τ)^T‖_2 after each iteration for those two examples are listed in Table 6.1. In Example 1, Algorithm 1 can make direct use of the intersecting control polygons and therefore reaches the intersection more quickly than Bézier clipping, which first has to make sure that no intersections are missed. At this point it should be noted that Algorithm 1 does not exclude any parameter interval as non-intersecting when using control polygon intersections. If the aim is to find all intersections, a final local convex hull check is necessary in order to make sure that no intersections have been missed. This is in contrast to Bézier clipping, which is based on excluding parameter intervals where no intersections can occur. This means that Bézier clipping automatically looks for all intersections, which may, at least partly, explain why it needs more iterations than Algorithm 1. Example 2 illustrates that the local convex hull approach leads relatively quickly to intersecting control polygons, and from this point onwards a quadratic convergence rate is observable. It is also clearly visible that both Algorithm 1 and Bézier clipping, as quadratic methods, are superior to the subdivision strategy, which only gives linear convergence.
Examples 3 and 4 are designed to be more difficult for Algorithm 1, and the errors are
shown in Table 6.2. In Example 3 the control polygons have a lot of false intersections and in
Example 4 many local convex hulls need to be checked before a situation with intersecting
control polygons arises. Nevertheless, Algorithm 1 needs considerably fewer iterations to
find the intersection than Bézier clipping.
In the last two examples we consider situations with several intersections. All three methods find the same intersection first, hence the errors listed in Table 6.3 are comparable. For Example 5 the performance is similar to Example 1. In the last example, Algorithm 1 again locates the intersection with fewer iterations than Bézier clipping.
The performance of a method is not only determined by the number of iterations, but also by the amount of work done in each iteration. In order to make a valid efficiency comparison, well-tuned and specialized implementations for each method are necessary. Unfortunately, we currently do not have such implementations. We compare the number of knots inserted into the curves instead. Here it is important to note that for Bézier clipping and the subdivision method each knot will be inserted several times in order to split the curve. Hence we have to differentiate between the number of knots (or parameter values) computed and the number of knots inserted. For Algorithm 1 those will be identical. For Bézier clipping we also need to include the knots needed for converting the spline representation into composite Bézier form. The resulting comparison is shown in Table 6.4, with conversion costs for Bézier clipping included and also shown in parentheses. Clearly, the number of knots inserted into the curves is minimal for Algorithm 1. For the number of computed knots the minimum varies between Bézier clipping and Algorithm 1 across the different examples. The subdivision method clearly uses the most computed and the most inserted knots of the three methods.
6.6 Conclusion and Future Work
In this paper we have presented a new method for finding intersections of two parametric spline curves. From the analysis and examples it is clear that the resulting algorithm is a Newton-like method that achieves a quadratic convergence rate for transversal intersections without the need for a starting point. For non-transversal intersections, however, the convergence is linear. An area for future research is to develop a more efficient algorithm with higher order convergence for non-transversal intersections.

When considering overlapping local convex hulls in Algorithm 1, the midpoints of the corresponding intervals are inserted. This is done for convenience, as it is simple and guaranteed to lead to non-intersecting local convex hulls for non-intersecting curves. However, it is probably not the best possible choice of knots to insert. With more research one might be able to determine a better knot pair in this situation, one that leads to non-intersecting local convex hulls more quickly.
Part III
Transfinite & barycentric interpolation
7 Pointwise radial minimization: Hermite interpolation on arbitrary domains
Michael S. Floater and Christian Schulz
Computer graphics forum 2008, Volume 27, Number 5, Pages 1505-1512, Special issue for
Symposium on Geometric Processing
Abstract
In this paper we propose a new kind of Hermite interpolation on arbitrary domains, matching derivative data of arbitrary order on the boundary. The basic idea stems from an interpretation of mean value interpolation as the pointwise minimization of a radial energy function involving first derivatives of linear polynomials. We generalize this and minimize over derivatives of polynomials of arbitrary odd degree. We analyze the cubic case, which assumes first derivative boundary data, and show that the minimization has a unique, infinitely smooth solution with cubic precision. We have not been able to prove that the solution satisfies the Hermite interpolation conditions, but numerical examples strongly indicate that it does for a wide variety of planar domains and that it behaves nicely.
Keywords: Transfinite interpolation, Hermite interpolation, mean value interpolation
7.1 Introduction
Mean value interpolation has emerged as a simple and robust way to smoothly interpolate a
function defined on the boundary of an arbitrary domain in Rn , and provides an alternative
to solving a PDE such as the Laplace equation. This kind of interpolation started from
a generalization of barycentric coordinates in the plane [25], and was later extended to
general polygons [40] and to triangular meshes [28, 44]. These constructions were further
generalized to continuous boundaries in [44, 66]. More work on mean value coordinates
and related topics can be found in [6, 26, 27, 45, 48, 50, 78].
Mean value interpolation only matches the values of a function on the boundary, not its derivatives. However, there are many applications in geometric modeling and scientific computing in which it is also desirable to match certain derivatives at the boundary, and again it is worth looking for methods that avoid solving a PDE such as the biharmonic equation. An early approach was that of Gordon and Wixom [36], which is based on averaging the values of interpolatory univariate polynomials across the domain. This method is conceptually simple but only applies to convex domains and requires computing intersections between lines and the domain boundary.
A more recent approach to higher order interpolation by Langer and Seidel [49] is an
extension of barycentric coordinates to incorporate derivative data given at a set of vertices.
Generalized barycentric coordinates such as Wachspress, Sibson, and mean value coordinates are composed with a modifying function and then used to interpolate first order Taylor
expansions at the vertices. This method is easy to compute and was applied successfully
to give greater control when deforming polygonal meshes. However, even in the univariate
case this method lacks cubic precision.
Another recent idea, developed in [22, 13], was to build on properties of the mean value
weight function, i.e., the reciprocal of the denominator in the rational expression for the
interpolant, in order to construct C 1 Hermite interpolants on arbitrary domains. This method
has the advantage that it reduces to cubic polynomial interpolation in the univariate case,
and in R2 and R3 it can be evaluated approximately through explicit formulas for polygons
and triangular meshes. However, the method does not have cubic precision in Rn for n > 1.
In this paper we propose a new kind of Hermite interpolation on arbitrary domains in
Rn . The method has cubic precision for C 1 boundary data for all n, and in R2 the method
appears to perform very well in practice. We start by viewing mean value interpolation as
the pointwise minimization of a radial energy function involving first derivatives of linear
polynomials. We then generalize this to match boundary derivatives up to order k by minimizing a radial energy function involving (k + 1)-st derivatives of polynomials of degree
2k + 1. The case k = 0 is simply mean value interpolation. We analyze the cubic case,
k = 1, and show that the minimization has a unique solution which has cubic precision. We
cannot prove that the solution satisfies the Hermite interpolation conditions but numerical
examples show that the solution interpolates the derivative boundary data in R2 for a wide
variety of domain shapes.
7.2 A new look at mean value interpolation
We start by showing that mean value interpolation can be expressed as the solution to a
radial minimization problem.
Figure 7.1: Definition of p(x, v): the intersection of the ray {x + rv : r ≥ 0} from x ∈ Ω in direction v with the boundary ∂Ω.
7.2.1 Convex domains
Consider the case that Ω ⊂ R^n is a bounded, open, convex domain and that f : ∂Ω → R is a continuous function. For any point x = (x_1, x_2, . . . , x_n) in Ω and any unit vector v in the unit sphere S ⊂ R^n, let p(x, v) be the unique point of intersection between the semi-infinite line {x + rv : r ≥ 0} and the boundary ∂Ω; see Figure 7.1. Let ρ(x, v) be the Euclidean distance ρ(x, v) = ‖p(x, v) − x‖. As developed in the papers [25, 44, 22, 13], the mean value interpolant g : Ω → R is given by the formula

    g(x) = ( ∫_S f(p(x, v))/ρ(x, v) dv ) / φ(x),    φ(x) = ∫_S 1/ρ(x, v) dv,    x ∈ Ω.
It was shown in [22, 13] that under mild conditions on the shape of the boundary ∂Ω, the
function g interpolates f when f is continuous. Figure 7.2 shows two examples of the mean
value interpolant g on a circular domain. In previous papers, g was derived from the mean
value property of harmonic functions. In this paper we take a different viewpoint in order
to generalize to Hermite interpolation. We claim that at a fixed point x ∈ Ω, the value g(x)
is the unique minimizer a = g(x) of the local 'energy' function

    E(a) = ∫_S ∫_0^{ρ(x,v)} ( ∂/∂r F(x + rv) )² dr dv,

where F : Ω → R is the radially linear function

    F(x + rv) = (ρ(x, v) − r)/ρ(x, v) · a + r/ρ(x, v) · f(p(x, v)),    v ∈ S,  0 ≤ r ≤ ρ(x, v).

To see this observe that

    ∂/∂r F(x + rv) = ( f(p(x, v)) − a ) / ρ(x, v),

and therefore,

    E(a) = ∫_S ( f(p(x, v)) − a )² / ρ(x, v) dv.
Figure 7.2: Mean value interpolants.
Thus, setting the derivative of E with respect to a to zero gives

    ∫_S ( f(p(x, v)) − a ) / ρ(x, v) dv = 0,

and solving this for a gives the solution a = g(x).
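As a numerical illustration of the formula, the following sketch evaluates the mean value interpolant on the unit disk (standing in for a general convex Ω) with a midpoint rule over the directions v; by the linear precision of mean value interpolation, linear boundary data is reproduced exactly. All names are ours.

```python
import numpy as np

def mean_value_interp(f_bnd, x, m=2000):
    """Evaluate g(x) = (int_S f(p)/rho dv) / (int_S 1/rho dv) on the
    unit disk, discretizing the unit circle of directions v."""
    ang = 2 * np.pi * (np.arange(m) + 0.5) / m
    v = np.stack([np.cos(ang), np.sin(ang)], axis=1)
    b = v @ x
    rho = -b + np.sqrt(b * b + 1.0 - x @ x)   # |x + rho v| = 1, rho > 0
    p = x + rho[:, None] * v                  # boundary points p(x, v)
    w = 1.0 / rho                             # mean value weights
    return np.sum(w * f_bnd(p)) / np.sum(w)

f = lambda p: 1.0 + 2.0 * p[:, 0] - p[:, 1]   # linear boundary data
x = np.array([0.3, -0.2])
print(mean_value_interp(f, x))                # ≈ 1.8 = f evaluated at x
```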
7.2.2 Non-convex domains
An interpretation in terms of functional minimization can also be made for non-convex domains. Let Ω ⊂ R^n be an open, not necessarily convex domain. Recall that the intersection between a line and a surface is said to be transversal if the line does not lie in the tangent space of the surface at the point of intersection. We will say that a unit vector v is transversal with respect to x in Ω if all the intersections between {x + rv : r ≥ 0} and ∂Ω are transversal. For example, in the R² case of Figure 7.3 all unit vectors are transversal at x except for v_1 and v_2. If v is transversal, let µ(x, v) be the number of intersections of {x + rv : r ≥ 0} with ∂Ω, which will be an odd number, assumed finite, and let p_j(x, v), j = 1, 2, . . . , µ(x, v), be the points of intersection, ordered so that their distances ρ_j(x, v) = ‖p_j(x, v) − x‖ are increasing,

    ρ_1(x, v) < ρ_2(x, v) < · · · < ρ_{µ(x,v)}(x, v).

For example, for v in Figure 7.3b, there are three such intersections and so µ(x, v) = 3. We make the assumption that the set {v ∈ S : v is non-transversal} has measure zero, so that non-transversal v can be ignored when integrating over S. The mean value interpolant
Figure 7.3: (a) Two non-transversal vectors v_1, v_2. (b) A transversal vector v with three intersections p_1(x, v), p_2(x, v), p_3(x, v).
[22, 13] is then

    g(x) = ( ∫_S Σ_{j=1}^{µ(x,v)} (−1)^{j−1} f(p_j(x, v))/ρ_j(x, v) dv ) / φ(x),
    φ(x) = ∫_S Σ_{j=1}^{µ(x,v)} (−1)^{j−1} / ρ_j(x, v) dv.
We claim that g(x) is now the unique minimizer a = g(x) of

    E(a) = ∫_S Σ_{j=1}^{µ(x,v)} (−1)^{j−1} ∫_0^{ρ_j(x,v)} ( q′_{v,j}(r) )² dr dv,

where, for v ∈ S, 0 ≤ r ≤ ρ_j(x, v), and j = 1, . . . , µ(x, v), q_{v,j} is the linear polynomial

    q_{v,j}(r) = (ρ_j(x, v) − r)/ρ_j(x, v) · a + r/ρ_j(x, v) · f(p_j(x, v)).
Indeed, similar to the convex case, we find

    E(a) = ∫_S Σ_{j=1}^{µ(x,v)} (−1)^{j−1} ( f(p_j(x, v)) − a )² / ρ_j(x, v) dv,

and setting E′(a) = 0 gives a = g(x).
7.3 Hermite interpolation
Having now recast mean value interpolation as the minimization of a radial energy function,
we are ready to explore a possible generalization in which the interpolant also matches
derivative boundary data. Instead of minimizing over first derivatives of linear polynomials,
we now minimize over (k + 1)-st derivatives of polynomials of degree 2k + 1, for any
k = 0, 1, 2, . . .. Specifically, we want to interpolate in Ω a C^k continuous real function f, given its values and partial derivatives up to order k at the boundary ∂Ω. Fix x ∈ Ω and let τ be some polynomial in π_k(R^n), the linear space of n-variate polynomials of degree ≤ k. We think of τ as its own Taylor series at x because we are only interested in the derivatives of τ up to order k at x. Then, for each transversal v ∈ S and for j = 1, . . . , µ(x, v), let q_{v,j} be the univariate polynomial of degree ≤ 2k + 1 such that

    q^{(i)}_{v,j}(0) = D_v^i τ(x),                   i = 0, 1, . . . , k,
    q^{(i)}_{v,j}(ρ_j(x, v)) = D_v^i f(p_j(x, v)),   i = 0, 1, . . . , k,
where D_v f denotes the directional derivative of f in the direction v. Then, defining

    E_{v,j}(τ) = ∫_0^{ρ_j(x,v)} ( q^{(k+1)}_{v,j}(r) )² dr,

we propose to choose τ in π_k(R^n) to minimize

    E(τ) = ∫_S Σ_{j=1}^{µ(x,v)} (−1)^{j−1} E_{v,j}(τ) dv,

and then set

    g(x) = τ(x).    (7.1)
This procedure defines, in a pointwise fashion, a function g : Ω → R. In general, the
minimization will yield a different polynomial τ at each x and so g will not itself be a
polynomial.
Several questions arise, the main ones being
1. does the minimization have a unique solution τ , so that g is well-defined, and
2. do the derivatives of g up to order k match those of f on ∂Ω?
Considering first the univariate case, n = 1, it is well known that among all piecewise C^{k+1} functions q on a real interval (a, b) that match derivatives up to order k of a given function f at a and b, the unique minimizer of the energy

    ∫_a^b ( q^{(k+1)}(x) )² dx

is the polynomial Hermite interpolant to f of degree at most 2k + 1. Thus when n = 1 and Ω = (a, b), we see that g, defined by (7.1), will simply be the Hermite polynomial interpolant to f of degree ≤ 2k + 1. Thus the answer to both questions is 'yes' when n = 1.
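This univariate minimization property is easy to check symbolically for k = 1: perturbing the cubic Hermite interpolant by any function vanishing to first order at both endpoints can only increase the energy. A SymPy sketch (the data function sin 2x is an arbitrary choice of ours):

```python
import sympy as sp

x = sp.symbols('x')
f = sp.sin(2 * x)                      # sample data function on [0, 1]

# cubic Hermite basis on [0, 1] and the interpolant q matching f, f'
h00, h10 = 2*x**3 - 3*x**2 + 1, x**3 - 2*x**2 + x
h01, h11 = -2*x**3 + 3*x**2, x**3 - x**2
q = (f.subs(x, 0)*h00 + sp.diff(f, x).subs(x, 0)*h10
     + f.subs(x, 1)*h01 + sp.diff(f, x).subs(x, 1)*h11)

energy = lambda g: sp.integrate(sp.diff(g, x, 2)**2, (x, 0, 1))

# perturbation with e(0) = e'(0) = e(1) = e'(1) = 0
e = x**2 * (1 - x)**2
print(float(energy(q)) < float(energy(q + e)))   # True: q is the minimizer
```

The cross term in the expanded energy vanishes by integration by parts, so any such perturbation adds the strictly positive quantity ∫(e″)².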
We are not able to answer them for general n, however. Instead, in the rest of the paper, we
focus on the case k = 1 and we show that there is a unique minimizer for all n. We cannot
yet show that g really interpolates the derivatives of f of order 0 and 1, but we can show that
if f is a cubic then g = f , in which case it trivially interpolates, and numerical examples in
R2 strongly suggest that it interpolates any f when n = 2.
In the case k = 1, we can express any polynomial τ ∈ π_1(R^n) in the form

    τ(y) = a + (y − x) · b,

for some a ∈ R and b = (b_1, . . . , b_n)^T ∈ R^n, and then q_{v,j} is the cubic polynomial such that

    q_{v,j}(0) = a,           q_{v,j}(ρ_j(x, v)) = f(p_j(x, v)),
    q′_{v,j}(0) = v · b,      q′_{v,j}(ρ_j(x, v)) = D_v f(p_j(x, v)).
Then, E_{v,j} and E can be viewed as functions of a and b, and the task is to find a ∈ R and b ∈ R^n to minimize

    E(a, b) = ∫_S Σ_{j=1}^{µ(x,v)} (−1)^{j−1} E_{v,j}(a, b) dv,    (7.2)

where

    E_{v,j}(a, b) = ∫_0^{ρ_j(x,v)} ( q″_{v,j}(r) )² dr,    (7.3)
and set g(x) = a. It turns out that E has a unique minimizing pair (a, b). In order to show this and to compute a we begin with a lemma.
Lemma 7.1 Let q be the cubic polynomial such that

    q(0) = f_0,    q′(0) = m_0,    q(h) = f_1,    q′(h) = m_1,

for some h > 0, and let

    E = ∫_0^h ( q″(x) )² dx.    (7.4)

Then, with ∆f_0 = f_1 − f_0,

    E = 12 (∆f_0)²/h³ − 12 (∆f_0)(m_0 + m_1)/h² + 4 (m_0² + m_0 m_1 + m_1²)/h.
Proof. Using the Bernstein polynomials

    B_i^k(u) = (k choose i) u^i (1 − u)^{k−i},    0 ≤ i ≤ k,  k ≥ 0,

we can express q in the Bernstein form

    q(x) = Σ_{i=0}^{3} c_i B_i^3(x/h)
92
CHAPTER 7. PRM: HERMITE INTERPOLATION ON ARBITRARY DOMAINS
where

    c_0 = f_0,    c_1 = f_0 + M_0,    c_2 = f_1 − M_1,    c_3 = f_1,

and M_i = h m_i / 3. Then

    q″(x) = (6/h²) Σ_{i=0}^{1} ∆²c_i B_i^1(x/h),

where

    ∆²c_i = c_{i+2} − 2c_{i+1} + c_i,

and squaring and integrating over [0, h] yields

    E = (12/h³) [ (∆²c_0)² + ∆²c_0 ∆²c_1 + (∆²c_1)² ].

Then, since

    ∆²c_0 = ∆f_0 − 2M_0 − M_1,    ∆²c_1 = −∆f_0 + M_0 + 2M_1,

a simple calculation gives

    E = (12/h³) [ (∆f_0)² − 3(∆f_0)(M_0 + M_1) + 3(M_0² + M_0 M_1 + M_1²) ],

which gives the result. □
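The closed form of Lemma 7.1 can also be verified symbolically; the following SymPy sketch constructs the cubic from the four Hermite conditions and compares energies:

```python
import sympy as sp

r = sp.symbols('r')
h = sp.symbols('h', positive=True)
f0, f1, m0, m1 = sp.symbols('f0 f1 m0 m1')

# generic cubic with the Hermite end conditions of Lemma 7.1
c0, c1, c2, c3 = sp.symbols('c0 c1 c2 c3')
q = c0 + c1*r + c2*r**2 + c3*r**3
sol = sp.solve([q.subs(r, 0) - f0,
                sp.diff(q, r).subs(r, 0) - m0,
                q.subs(r, h) - f1,
                sp.diff(q, r).subs(r, h) - m1],
               [c0, c1, c2, c3])
q = q.subs(sol)

E = sp.integrate(sp.diff(q, r, 2)**2, (r, 0, h))
df = f1 - f0
claimed = (12*df**2/h**3 - 12*df*(m0 + m1)/h**2
           + 4*(m0**2 + m0*m1 + m1**2)/h)
print(sp.simplify(E - claimed))   # 0: the energies agree
```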
We can now apply Lemma 7.1 to give a formula for E_{v,j}(a, b) in (7.3), using matrix notation. Setting f_0 = a, m_0 = v · b, f_1 = f(p_j(x, v)), m_1 = D_v f(p_j(x, v)), and h = ρ_j(x, v) gives

    E_{v,j}(a) = a^T M_{v,j} a + N^T_{v,j} a + P_{v,j},    (7.5)
where, with the shorthand p := p_j(x, v) and ρ := ρ_j(x, v),

    a = ( a ),    M_{v,j} = (2/ρ³) (  6       3ρ v^T    ),    N_{v,j} = (4/ρ³) (  −6 f(p) + 3ρ D_v f(p)       ),
        ( b )                      (  3ρ v    2ρ² v v^T )                      (  (−3ρ f(p) + ρ² D_v f(p)) v )

and

    P_{v,j} = (4/ρ³) ( 3 (f(p))² − 3ρ f(p) D_v f(p) + ρ² (D_v f(p))² ).
Substituting (7.5) into (7.2) now gives

    E(a) = a^T M a + N^T a + P,    (7.6)

where M is the matrix

    M = ∫_S Σ_{j=1}^{µ(x,v)} (−1)^{j−1} M_{v,j} dv,    (7.7)
and similarly for N and P, where integrating a matrix means integrating each element. As the set of non-transversal v has measure zero for each x ∈ Ω, the integration can be achieved by splitting the integral into a sum of integrals over the regions of S where the number of intersections µ(x, v) is constant.

The matrix M is clearly symmetric, and if it is also positive definite then it is non-singular and the unique minimum of E is the unique solution a to the equation

    M a = −(1/2) N,    (7.8)

and a can be computed directly from this using Cramer's rule. To show that M is positive definite we use the following lemma.
Lemma 7.2 If f_1 = m_1 = 0 in Lemma 7.1 then E in (7.4) is non-increasing in h.

Proof. From Lemma 7.1 we know that

    E = 12 f_0²/h³ + 12 f_0 m_0/h² + 4 m_0²/h,

and computing the derivative with respect to h yields

    dE/dh = −( 36 f_0²/h⁴ + 24 f_0 m_0/h³ + 4 m_0²/h² ) = −( 6 f_0/h² + 2 m_0/h )² ≤ 0. □
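The perfect-square form of the derivative can be confirmed with a few lines of SymPy:

```python
import sympy as sp

h = sp.symbols('h', positive=True)
f0, m0 = sp.symbols('f0 m0', real=True)

# E from Lemma 7.1 with f1 = m1 = 0
E = 12*f0**2/h**3 + 12*f0*m0/h**2 + 4*m0**2/h
dE = sp.diff(E, h)

# dE/dh is minus a perfect square, hence E is non-increasing in h
print(sp.simplify(dE + (6*f0/h**2 + 2*m0/h)**2))   # 0
```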
Theorem 7.3 The matrix M is positive definite.

Proof. We first show that a^T M a ≥ 0. To do this, let f = 0. Then since N = 0 and P = 0, we have

    a^T M a = E(a).

Since

    q_{v,µ}(ρ_µ(x, v)) = q′_{v,µ}(ρ_µ(x, v)) = 0,    (7.9)

Lemma 7.2 shows that E_{v,j}(a) in (7.3) is non-increasing in j. Therefore E_{v,j}(a, b) − E_{v,j+1}(a, b) ≥ 0 in (7.2) and so

    E(a) ≥ ∫_S E_{v,µ(x,v)}(a, b) dv ≥ 0.

Thus a^T M a ≥ 0 for all a ∈ R^{n+1}.

Further, suppose that a^T M a = 0. Then, letting f = 0 again, we must have

    ∫_S E_{v,µ(x,v)}(a, b) dv = 0,

and hence for each v ∈ S, with µ = µ(x, v), we must have

    E_{v,µ}(a, b) = ∫_0^{ρ_µ(x,v)} ( q″_{v,µ}(r) )² dr = 0.

Therefore, for each v ∈ S, q_{v,µ} is linear and by (7.9), q_{v,µ} = 0. Hence a = 0. □
7.4 Boundary integrals
If a parametric representation of ∂Ω is available, the integrals in M and N in (7.6) can be converted to integrals over the parameters of ∂Ω, in the spirit of [44]. To this end, suppose that s : D → ∂Ω is a parameterization of the curve or surface ∂Ω with parameter domain D ⊂ R^{n−1}. We will assume that s is a regular parameterization, by which we mean that s is piecewise C¹ and that at every point of differentiability t = (t_1, . . . , t_{n−1}) ∈ D, the first order partial derivatives D_i s := ∂s/∂t_i, i = 1, 2, . . . , n − 1, are linearly independent. Thus, following the notation of [44] and [13], their cross product,

    s^⊥(t) := det(D_1 s(t), . . . , D_{n−1} s(t)),

is non-zero, and is orthogonal to the tangent space at s(t). We make the convention that s^⊥ points outwards from Ω. Then, considering for example the integrals in M in (7.7), we make the substitution p_j(x, v) = s(t) in (7.7), so that

    v = (s(t) − x)/‖s(t) − x‖,

and ρ_j(x, v) = ‖s(t) − x‖, and then (see [13]),

    dv = ((s(t) − x) · s^⊥(t))/‖s(t) − x‖^n dt.
Substituting these expressions into (7.7), and similarly for N, gives the boundary representations

    M = ∫_D M̂ w(x, t) dt,    N = ∫_D N̂ w(x, t) dt,    (7.10)

where

    w(x, t) = ((s(t) − x) · s^⊥(t))/‖s(t) − x‖^{n+3},

and, with the shorthand s := s(t),

    M̂ = 2 (  6          3(s − x)^T        ),    N̂ = 4 (  −6 f(s) + 3 D_{s−x} f(s)         ).
           (  3(s − x)   2(s − x)(s − x)^T )            (  (−3 f(s) + D_{s−x} f(s))(s − x) )

This provides a way of numerically computing the value of g at a point x by sampling the surface s and its first derivatives and applying numerical integration. It also shows that g is C^∞ smooth in Ω, due to differentiation under the integral sign.
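As a sketch of this numerical procedure in R² (n = 2), the following evaluates g on the unit disk with a uniform midpoint rule in place of adaptive integration; for a cubic f the result should agree with f(x) by the cubic precision established in Section 7.5. All names are ours, and the boundary is the unit circle, for which the outward normal satisfies s^⊥(t) = s(t).

```python
import numpy as np

def g_value(x, f, grad_f, m=4000):
    """Evaluate g(x) inside the unit circle from the boundary
    representation (7.10), using a uniform midpoint rule in t."""
    t = 2 * np.pi * (np.arange(m) + 0.5) / m
    s = np.stack([np.cos(t), np.sin(t)], axis=1)   # s(t) on the circle
    s_perp = s.copy()                              # outward normal = s(t)
    d = s - x
    w = np.einsum('ij,ij->i', d, s_perp) / np.linalg.norm(d, axis=1)**5
    fs = f(s)
    Df = np.einsum('ij,ij->i', grad_f(s), d)       # D_{s-x} f(s)
    dt = 2 * np.pi / m
    M, N = np.zeros((3, 3)), np.zeros(3)
    for k in range(m):
        dk = d[k]
        Mhat = 2 * np.block([[np.array([[6.0]]), 3 * dk[None, :]],
                             [3 * dk[:, None], 2 * np.outer(dk, dk)]])
        Nhat = 4 * np.concatenate([[-6 * fs[k] + 3 * Df[k]],
                                   (-3 * fs[k] + Df[k]) * dk])
        M += Mhat * w[k] * dt
        N += Nhat * w[k] * dt
    a = np.linalg.solve(M, -0.5 * N)               # M a = -N/2, cf. (7.8)
    return a[0]

f = lambda s: s[:, 0]**3 - 3 * s[:, 0] * s[:, 1]**2    # a cubic polynomial
grad_f = lambda s: np.stack([3 * s[:, 0]**2 - 3 * s[:, 1]**2,
                             -6 * s[:, 0] * s[:, 1]], axis=1)
x = np.array([0.4, 0.1])
print(abs(g_value(x, f, grad_f) - f(x[None, :])[0]))   # tiny: g = f for cubics
```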
7.5 Cubic precision

We now establish cubic precision (for k = 1 and all n).
Theorem 7.4 Suppose that f : Ω → R is a cubic polynomial. Then g = f in Ω.

Proof. Fix an arbitrary x ∈ Ω and let â := f(x) and b̂ := ∇f(x). We will show that E(â, b̂) ≤ E(a, b) for all a ∈ R and b ∈ R^n and hence g(x) = â = f(x).

Let e_{v,j} : [0, ρ_j(x, v)] → R be the cubic polynomial defined by e_{v,j}(r) = q_{v,j}(r) − f(x + rv). Then

    q″_{v,j}(r) = ∂²/∂r² f(x + rv) + e″_{v,j}(r)

and so

    E_{v,j}(a, b) = ∫_0^{ρ_j(x,v)} ( ∂²/∂r² f(x + rv) )² dr
                  + 2 ∫_0^{ρ_j(x,v)} e″_{v,j}(r) ∂²/∂r² f(x + rv) dr
                  + ∫_0^{ρ_j(x,v)} ( e″_{v,j}(r) )² dr.

Now

    ∫_0^{ρ_j(x,v)} ( ∂²/∂r² f(x + rv) )² dr = E_{v,j}(â, b̂),

and as e_{v,j} is a cubic polynomial and

    e_{v,j}(ρ_j(x, v)) = e′_{v,j}(ρ_j(x, v)) = 0,

it follows from Lemma 7.2 that

    Σ_{j=1}^{µ(x,v)} (−1)^{j−1} ∫_0^{ρ_j(x,v)} ( e″_{v,j}(r) )² dr ≥ 0.

Thus,

    E(a, b) ≥ E(â, b̂) + K,

where

    K = 2 ∫_S Σ_{j=1}^{µ(x,v)} (−1)^{j−1} ∫_0^{ρ_j(x,v)} e″_{v,j}(r) ∂²/∂r² f(x + rv) dr dv.

We will show that K = 0, which will complete the proof. We apply integration by parts to the inner integral, giving

    ∫_0^{ρ_j(x,v)} e″_{v,j}(r) ∂²/∂r² f(x + rv) dr = e_{v,j}(0) D_v³ f(x) − e′_{v,j}(0) D_v² f(x)
                                                   = (a − â) D_v³ f(x) − (v · (b − b̂)) D_v² f(x).
Figure 7.4: Reproduction of a quadratic polynomial (maximal absolute error due to numerical integration: 3.4 · 10^{−8}).
Then, since this expression is independent of j, and recalling that µ(x, v) is odd, we deduce that

    K = 2 [ (a − â) ∫_S D_v³ f(x) dv − (b − b̂) · ∫_S D_v² f(x) v dv ].

But both integrals in this expression are zero since

    D²_{−v} f(x) = D_v² f(x)    and    D³_{−v} f(x) = −D_v³ f(x). □
7.6 Numerical Examples
We conclude the paper with some numerical examples of the minimizing C¹ Hermite interpolant g in R². All of these examples were computed using Cramer's rule to find the value of a in (7.8). The elements of M and N were found using the boundary formula (7.10) and adaptive numerical integration with a relative tolerance.

Figure 7.4 shows g on the unit disk in the case that f(x, y) = r² sin 2θ, with polar coordinates x = r cos θ and y = r sin θ. As predicted by Theorem 7.4, g is numerically equal to f in this case, since f is the quadratic polynomial f(x, y) = 2xy. In this example, the Hermite interpolant g is only slightly different from the mean value interpolant to the Lagrange data from f shown in Figure 7.2.

Figure 7.5 shows g on a non-convex domain with a hole, with f given by f(x, y) = sin(y) cos(2x). The interpolant g behaves nicely even though there is a cusp at the leftmost point of the boundary. The error between g and f near the boundary of the hole is shown in
Figure 7.5: Interpolant (left) and part of the error function (right) for a non-convex domain
with hole and cusp.
Figure 7.6: Errors g(x) − f(x) along the x-axis (y = 0) for different f, with the unit circle as domain.
Figure 7.7: Comparison between interpolants (top) and errors (bottom) of the interpolant of [22] (left) and the minimizing interpolant g (right).
the right part of Figure 7.5. The error function is cut open to demonstrate numerically that
both the error and its gradient are zero at the boundary. Figure 7.6 shows further numerical
evidence of the Hermite interpolation property of g. The errors in g along the x-axis are
plotted for three different functions f on the unit circle.
Figure 7.7 compares the minimizing Hermite interpolant g with the Hermite interpolant of [22] on the unit circle with f(x, y) = r² sin 4θ. The top left shows the interpolant of [22] and the bottom left its error function. The right part of the figure shows g on the top and its error function at the bottom. Note that both error functions are tangentially zero at the boundary.
In Figure 7.8 we use the interpolant for hole filling, with a circular hole, and with
f (x, y) = cos r. In the top f is shown together with the boundary of the hole that is
cut out. Then the Hermite interpolant of [22] is shown, and at the bottom the minimizing
interpolant g.
Figure 7.9 shows another example of the nice behaviour of the minimizing interpolant
g for hole filling with the same function f but with a more complicated boundary.
Acknowledgement
We would like to thank Christopher Dyken for giving us access to his code for the mean
value and Hermite interpolants described in [22].
Figure 7.8: Hole filling with circular boundary. From top to bottom: original function,
interpolant of [22], minimizing interpolant.
Figure 7.9: Hole filling with a piecewise smooth boundary.
8 Pointwise radial minimization: Existence, uniqueness and polynomial precision
Christian Schulz
Abstract
The recent paper [29] introduced the method of pointwise radial minimization to
solve the problem of Hermite interpolation on an arbitrary domain for derivative data of
any order. However, in that paper only the case of first order boundary data was analyzed
with respect to existence, uniqueness and cubic precision inside the given domain. In
this paper we will show that those properties hold inside and outside the domain for
any Hermite boundary data, with cubic precision generalizing to polynomial precision.
Moreover, we provide general formulas for the resulting function. Unfortunately, the
task of proving the interpolation property still remains open, but numerical experiments
strongly indicate this property for a variety of planar and three-dimensional domains.
Keywords: Transfinite interpolation, Hermite interpolation, Pointwise radial minimization
8.1 Introduction
Pointwise radial minimization is an approach for solving the problem of Hermite interpolation on an arbitrary domain. It was introduced in [29] for derivative data of any order. The
properties of existence, uniqueness and polynomial precision, however, were only shown
for first order derivative data. We will therefore use this paper to provide the proofs and formulas for the general case. As this paper is a direct extension of the work done in [29], we
will assume that the reader is familiar with the ideas and concepts presented there. For more
information about the history of transfinite interpolation and related work we also refer the
reader to the introduction in [29].
Before we start, let us shortly recapitulate the notation. Let Ω ⊂ Rn be an open, not
necessarily convex domain with the boundary ∂Ω and let Ωc = Rn \(Ω ∪ ∂Ω). For any
point x ∈ Rn \∂Ω and any unit vector v in the unit sphere S ⊂ Rn , let µ(x, v) ≥ 0 be the
number of intersections of the semi-infinite line {x + rv : r ≥ 0} with the boundary ∂Ω,
assumed to be finite. Let pj (x, v), j = 1, 2, . . . , µ(x, v) be the corresponding intersection
points, ordered so that their distances ρ_j(x,v) = ‖p_j(x,v) − x‖ are increasing,

$$ \rho_1(x,v) < \rho_2(x,v) < \cdots < \rho_{\mu(x,v)}(x,v). $$
We will call v transversal with respect to x if the line {x + rv : r ≥ 0} does not lie in
the tangent space of ∂Ω at any of their intersections pj (x, v). Then for transversal v, the
number µ(x, v) will be odd for x ∈ Ω and even for x ∈ Ωc .
Our pointwise minimizing function g : Rn \∂Ω → R shall interpolate a C k continuous
real function f , given its values and derivative data up to order k at the boundary ∂Ω. With
derivative data we mean that we are able to compute any directional derivative Dvi f of f at
the boundary for i = 0, . . . , k. We fix x ∈ Rn \∂Ω and let τ be some polynomial in πk (Rn ),
the linear space of n-variate polynomials of degree ≤ k. Then, for each transversal v ∈ S
and for j = 1, . . . , µ(x, v), let qv,j be the univariate polynomial of degree ≤ 2k + 1 such
that
$$ q_{v,j}^{(i)}(0) = D_v^i \tau(x), \qquad q_{v,j}^{(i)}(\rho_j(x,v)) = D_v^i f(p_j(x,v)), \qquad i = 0,1,\dots,k. $$
Moreover, we define the energies

$$ E_{v,j}(\tau) = \int_0^{\rho_j(x,v)} \bigl( q_{v,j}^{(k+1)}(r) \bigr)^2 dr, \qquad E(\tau) = \int_S \sum_{j=1}^{\mu(x,v)} (-1)^{j-1} E_{v,j}(\tau)\, dv, $$
and choose the τ in πk (Rn ) which minimizes E(τ ) and set g(x) = τ (x). We use the
convention that an empty sum is zero and make the assumption that the set {v ∈ S :
v is non-transversal} has measure zero, so that non-transversal v can be ignored when integrating over S. By letting S̃x ⊆ S be the subset of S where µ(x, v) > 0 and v transversal,
we get

$$ E(\tau) = \int_{\tilde S_x} \sum_{j=1}^{\mu(x,v)} (-1)^{j-1} E_{v,j}(\tau)\, dv. \tag{8.1} $$

8.2 Univariate polynomial
Due to its construction, the minimizing function g owes a lot of its properties to the univariate polynomials qv,j of degree ≤ 2k + 1, more precisely to a certain property of their energies Ev,j (τ ). In this section we will therefore study a univariate polynomial q : [0, h] → R
of degree ≤ 2k + 1 that fulfills the interpolation conditions
$$ q^{(l)}(0) = m_{0,l}, \qquad q^{(l)}(h) = m_{1,l}, \qquad l = 0,\dots,k $$
for some h > 0, where the m_{0,l} and m_{1,l} are given data. We can express q in Bernstein–Bézier form,

$$ q(x) = \sum_{i=0}^{2k+1} c_i B_i^{2k+1}\Bigl(\frac{x}{h}\Bigr), \qquad B_i^{2k+1}(u) = \binom{2k+1}{i} u^i (1-u)^{2k+1-i}, $$

which also provides us with the derivatives

$$ q^{(l)}(x) = \frac{(2k+1)!}{(2k+1-l)!}\,\frac{1}{h^l} \sum_{i=0}^{2k+1-l} \Delta^l c_i \, B_i^{2k+1-l}\Bigl(\frac{x}{h}\Bigr), \qquad 0 \le l \le 2k+1, \tag{8.2} $$

where

$$ \Delta^l c_i = \sum_{j=0}^{l} (-1)^{l-j} \binom{l}{j} c_{i+j}. $$
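As a concrete check of (8.2), the following sketch evaluates q and its derivatives directly from the coefficients c_i by forming the forward differences Δ^l c_i explicitly. The function name is ours, not from the paper.

```python
from math import comb

def bb_eval(c, h, x, l=0):
    """Evaluate the l-th derivative of q(x) = sum_i c_i B_i^{2k+1}(x/h)
    via formula (8.2): forward differences of the coefficients against
    a lower-degree Bernstein basis."""
    n = len(c) - 1                  # degree of q, here n = 2k + 1
    u = x / h
    d = list(c)
    for _ in range(l):              # Delta^l c_i by repeated differencing
        d = [d[i + 1] - d[i] for i in range(len(d) - 1)]
    fact = 1.0
    for j in range(l):              # (2k+1)! / ((2k+1-l)! h^l)
        fact *= (n - j) / h
    return fact * sum(d[i] * comb(n - l, i) * u**i * (1 - u)**(n - l - i)
                      for i in range(n - l + 1))
```

For the cubic case k = 1 with coefficients (0, 0, 0, 1) and h = 2, this reproduces q(x) = (x/2)³ and its derivatives.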
We will need the following, maybe not so well known, binomial identities.
Lemma 8.1 Let n ∈ N⁺ and r ∈ N₀ with 0 ≤ r < n. Then

$$ \sum_{j=r}^{n-1} (-1)^j \binom{n}{j}\binom{j}{r} = (-1)^{n-1}\binom{n}{r}, \qquad \sum_{j=1}^{n-r} (-1)^{j-1}\binom{n}{j}\binom{n-j}{r} = \binom{n}{r}. $$

Let now s ∈ N⁺ and n, r ∈ N₀ with 0 ≤ r ≤ n < s. Then

$$ \sum_{j=r}^{n} (-1)^{n-j} \binom{s}{j}\binom{j}{r} = \binom{s}{r}\binom{s-r-1}{n-r}. $$
These identities can be derived from the definition of binomial coefficients and other well
known binomial identities. For more information on that topic we refer to Chapter 5 of [38].
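Since these identities carry much of the later bookkeeping, a quick brute-force verification over small parameter ranges is reassuring. This is a sketch with our own function name:

```python
from math import comb

def lemma_8_1_holds(n, r, s):
    """Check the three binomial identities of Lemma 8.1 for 0 <= r < n < s."""
    id1 = (sum((-1)**j * comb(n, j) * comb(j, r) for j in range(r, n))
           == (-1)**(n - 1) * comb(n, r))
    id2 = (sum((-1)**(j - 1) * comb(n, j) * comb(n - j, r)
               for j in range(1, n - r + 1))
           == comb(n, r))
    id3 = (sum((-1)**(n - j) * comb(s, j) * comb(j, r)
               for j in range(r, n + 1))
           == comb(s, r) * comb(s - r - 1, n - r))
    return id1 and id2 and id3
```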
For convenience we define

$$ M_{0,l} = \frac{(2k+1-l)!}{(2k+1)!}\, h^l m_{0,l}, \qquad M_{1,l} = \frac{(2k+1-l)!}{(2k+1)!}\, h^l m_{1,l}, \qquad l = 0,\dots,k, $$

and get Δ^l c₀ = M_{0,l} and Δ^l c_{2k+1−l} = M_{1,l} for l = 0,…,k and the following simple equations for the coefficients of q.

Lemma 8.2

$$ c_l = \sum_{i=0}^{l} \binom{l}{i} M_{0,i}, \qquad c_{2k+1-l} = \sum_{i=0}^{l} (-1)^i \binom{l}{i} M_{1,i}, \qquad l = 0,\dots,k. \tag{8.3} $$
Proof. We will first show the formula for c_l. Observe that c₀ = Δ⁰c₀ = M_{0,0} and that for l > 0 from

$$ M_{0,l} = \Delta^l c_0 = \sum_{j=0}^{l} (-1)^{l-j} \binom{l}{j} c_j $$
it follows that

$$ c_l = M_{0,l} + \sum_{j=0}^{l-1} (-1)^{l-j+1} \binom{l}{j} c_j. $$

Now using induction we can prove the first equation,

$$ c_l = M_{0,l} + (-1)^{l+1} \sum_{i=0}^{l-1} M_{0,i} \sum_{j=i}^{l-1} (-1)^j \binom{l}{j}\binom{j}{i} \overset{\text{L.8.1}}{=} \sum_{i=0}^{l} \binom{l}{i} M_{0,i}. $$

The formula for c_{2k+1−l} can be proved similarly by starting at c_{2k+1} = M_{1,0} and using the second of the binomial identities in Lemma 8.1 for c_{2k+1−l}, l > 0. □
In addition to the values of Δ^l c₀ and Δ^l c_{2k+1−l} for l = 0,…,k we also need their values for l = k+1,…,2k+1. Using (8.3) in (8.2) and Lemma 8.1 yields

$$ \Delta^l c_0 = \sum_{j=0}^{k} (-1)^{l-j}\binom{l}{j} c_j + \sum_{j=k+1}^{l} (-1)^{l-j}\binom{l}{j} c_{2k+1-(2k+1-j)} = (-1)^{k+l} \sum_{r=0}^{k} \bigl( \alpha_{l,r} M_{0,r} + \beta_{l,r} M_{1,r} \bigr), $$

where α_{l,r}, β_{l,r} ∈ N are given by

$$ \alpha_{l,r} = \binom{l}{r}\binom{l-r-1}{k-r}, \qquad \beta_{l,r} = \sum_{j=k+1}^{l} (-1)^{k+r-j} \binom{l}{j}\binom{2k+1-j}{r}. $$

For Δ^l c_{2k+1−l} we get

$$ \Delta^l c_{2k+1-l} = \sum_{j=2k+1-l}^{k} (-1)^{j+1} \binom{l}{2k+1-j} c_j + \sum_{j=0}^{k} (-1)^{j} \binom{l}{j} c_{2k+1-j} = (-1)^{k} \sum_{r=0}^{k} \bigl( (-1)^r \beta_{l,r} M_{0,r} + (-1)^r \alpha_{l,r} M_{1,r} \bigr). $$

By applying some binomial identities one can show that α_{2k+1,r} = (−1)^{r+1} β_{2k+1,r}, and thus both equations are identical for l = 2k+1.
Knowing all Δ^l c₀ and Δ^l c_{2k+1−l} enables us to compute the energy

$$ E = \int_0^h \bigl( q^{(k+1)}(x) \bigr)^2 dx $$
of q. We apply integration by parts k + 1 times, using the fact that q^{(2k+2)} = 0. This yields

$$ E = \sum_{s=0}^{k} (-1)^{k-s} \Bigl[ q^{(s)}(x)\, q^{(2k+1-s)}(x) \Bigr]_0^h = \sum_{r=0}^{k}\sum_{s=0}^{k} \frac{(2k+1-r)!}{s!\, h^{2k+1-s-r}} \Bigl( \alpha_{2k+1-s,r}\bigl( m_{0,r} m_{0,s} + (-1)^{r-s} m_{1,r} m_{1,s} \bigr) + \beta_{2k+1-s,r}\bigl( m_{1,r} m_{0,s} + (-1)^{r-s} m_{0,r} m_{1,s} \bigr) \Bigr). \tag{8.4} $$
The energy E of q has the following important property.
Lemma 8.3 If m1,r = 0 for all r = 0, . . . , k then either
1. E is decreasing in h or
2. E = 0 for all h > 0.
Proof. With m_{1,r} = 0 we get

$$ E = \sum_{r=0}^{k}\sum_{s=0}^{k} \frac{(2k+1-r)!\,(2k+1-s)!}{(2k+1-s-r)\, k!\, k!} \binom{k}{r}\binom{k}{s} \frac{m_{0,r}\, m_{0,s}}{h^{2k+1-s-r}}, $$

and differentiating with respect to h yields

$$ \frac{dE}{dh} = - \left( \sum_{r=0}^{k} \frac{(2k+1-r)!}{k!} \binom{k}{r} \frac{m_{0,r}}{h^{k+1-r}} \right)^{\!2}. $$

Thus dE/dh ≤ 0, showing that E is non-increasing in h. For dE/dh to be zero for some h ≠ 0 we must have

$$ \sum_{r=0}^{k} \frac{(2k+1-r)!}{k!} \binom{k}{r}\, m_{0,r}\, h^{r} = 0. $$

As this is a polynomial in h, it can either be zero for at most k distinct values of h or be zero for all h. In the first case we must have E(h₁) > E(h₂) for any two values 0 < h₁ < h₂, so E is decreasing in h. In the latter case the whole polynomial is zero and hence its coefficients must be zero, from which it follows that m_{0,r} = 0 for all r = 0,…,k and thus q = 0 and E = 0 for all h > 0. □
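As a sanity check of the energy expression in this proof: for k = 1 (the cubic case) with zero right-hand data it reduces to E = 12 m_{0,0}²/h³ + 12 m_{0,0} m_{0,1}/h² + 4 m_{0,1}²/h. The sketch below (helper names are ours) compares this with a direct closed-form integration of (q″)² and checks the monotone decrease in h:

```python
def cubic_energy_direct(m00, m01, h):
    """Integrate (q'')^2 over [0, h] for the cubic q with q(0) = m00,
    q'(0) = m01 and q(h) = q'(h) = 0, in closed form."""
    c2 = -(3 * m00 + 2 * m01 * h) / h**2    # from q(h) = q'(h) = 0
    c3 = (2 * m00 + m01 * h) / h**3
    # q''(x) = 2 c2 + 6 c3 x, so the integral is elementary
    return 4 * c2**2 * h + 12 * c2 * c3 * h**2 + 12 * c3**2 * h**3

def cubic_energy_lemma(m00, m01, h):
    """The k = 1 instance of the energy formula in the proof of Lemma 8.3."""
    return 12 * m00**2 / h**3 + 12 * m00 * m01 / h**2 + 4 * m01**2 / h
```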
8.3 Existence and uniqueness
In this section we will derive an explicit formula for the energy E(τ ) and use Lemma 8.3
to show that the resulting pointwise radial minimizing function g exists and is unique at all
points x ∈ R^n\∂Ω. The polynomial τ can be expressed using the following multi-index notation:

$$ a_{\mathbf i} = a_{i_1,i_2,\dots,i_n}, \qquad \mathbf x^{\mathbf i} = x_1^{i_1} x_2^{i_2} \cdots x_n^{i_n}, \qquad \|\mathbf i\| = \sum_{j=1}^{n} i_j, \qquad i_j \in \mathbb N_0 \text{ for } j = 1,\dots,n. $$

On multi-indices the operations of addition, subtraction and factorial are defined as

$$ \mathbf i \pm \mathbf j = (i_1 \pm j_1, i_2 \pm j_2, \dots, i_n \pm j_n), \qquad \mathbf i! = \prod_{j=1}^{n} i_j!\,. $$
The j-th partial derivative of x^i is defined by

$$ \partial^{\mathbf j} \mathbf x^{\mathbf i} = \frac{\partial^{\|\mathbf j\|}}{\partial x_1^{j_1} \partial x_2^{j_2} \cdots \partial x_n^{j_n}}\, \mathbf x^{\mathbf i} = A_{\mathbf i,\mathbf j}\, \mathbf x^{\mathbf i-\mathbf j}, \tag{8.5} $$

with

$$ A_{\mathbf i,\mathbf j} = \begin{cases} \dfrac{\mathbf i!}{(\mathbf i-\mathbf j)!} & \text{if } i_l \ge j_l \text{ for all } l = 1,\dots,n, \\[4pt] 0 & \text{else,} \end{cases} $$
and the l-th directional derivative is given by the following lemma.

Lemma 8.4 Let f : R^n → R be smooth enough. Then

$$ D_v^l f(x) = \sum_{\|\mathbf j\|=l} \frac{\|\mathbf j\|!}{\mathbf j!}\, v^{\mathbf j}\, \partial^{\mathbf j} f(x) \qquad \text{and} \qquad D_v^l \mathbf x^{\mathbf i} = \sum_{\|\mathbf j\|=l} \frac{\|\mathbf j\|!}{\mathbf j!}\, A_{\mathbf i,\mathbf j}\, v^{\mathbf j}\, \mathbf x^{\mathbf i-\mathbf j}. $$
Proof. We show the first equality by induction on l. The second follows by simply applying (8.5). Observe that $D_v^0 f(x) = f(x) = \frac{\|\mathbf 0\|!}{\mathbf 0!} v^{\mathbf 0} \partial^{\mathbf 0} f(x)$. Let $\mathbf e_r = (0,\dots,0,1,0,\dots,0)$ be the unit index with a one in position r and zeros elsewhere. For l > 0 we get

$$ D_v^l f(x) = \sum_{\|\mathbf j\|=l-1} \frac{\|\mathbf j\|!}{\mathbf j!}\, v^{\mathbf j}\, D_v \partial^{\mathbf j} f(x) = \sum_{\|\mathbf j\|=l-1} \frac{\|\mathbf j\|!}{\mathbf j!} \sum_{r=1}^{n} v^{\mathbf j+\mathbf e_r}\, \partial^{\mathbf j+\mathbf e_r} f(x) = \sum_{\|\mathbf j\|=l} v^{\mathbf j}\, \partial^{\mathbf j} f(x) \sum_{\substack{r=1 \\ j_r>0}}^{n} \frac{\|\mathbf j-\mathbf e_r\|!}{(\mathbf j-\mathbf e_r)!}. $$

The last sum can be simplified to

$$ \sum_{\substack{r=1 \\ j_r>0}}^{n} \frac{\|\mathbf j-\mathbf e_r\|!}{(\mathbf j-\mathbf e_r)!} = \sum_{r=1}^{n} \frac{(\|\mathbf j\|-1)!\, j_r}{\mathbf j!} = \frac{\|\mathbf j\|!}{\mathbf j!}, $$

which finishes the proof. □
By expressing the polynomial τ in multi-index notation with a_i ∈ R and defining γ_{l,v,i},

$$ \tau(\mathbf y) = \sum_{\|\mathbf i\| \le k} a_{\mathbf i}\, (\mathbf y-\mathbf x)^{\mathbf i}, \qquad \gamma_{l,v,\mathbf i} = \begin{cases} l!\, v^{\mathbf i} & \text{if } \|\mathbf i\| = l, \\ 0 & \text{else,} \end{cases} $$

and as (x − x)^{i−j} = 0 if i ≠ j, we get relatively simple directional derivatives of τ at y = x,

$$ D_v^l \tau(\mathbf x) = \sum_{\|\mathbf i\| \le k} a_{\mathbf i}\, D_v^l (\mathbf y-\mathbf x)^{\mathbf i} \big|_{\mathbf y=\mathbf x} = \sum_{\|\mathbf i\|=l} a_{\mathbf i}\, \|\mathbf i\|!\, v^{\mathbf i} = \sum_{\|\mathbf i\| \le k} a_{\mathbf i}\, \gamma_{l,v,\mathbf i}, \qquad 0 \le l \le k. $$
Let η be the number of coefficients in τ and let i₁, i₂, …, i_η be some ordering of all i with ‖i‖ ≤ k. The energies E_{v,j} can be expressed in matrix notation by using equation (8.4) and setting $m_{0,r} = \sum_{\|\mathbf i\|\le k} a_{\mathbf i}\gamma_{r,v,\mathbf i}$ and $m_{1,r} = D_v^r f(p_j(x,v))$ for r = 0,…,k. We get

$$ E_{v,j}(\tau) = \mathbf a^T M_{v,j}\, \mathbf a + N_{v,j}^T \mathbf a + P_{v,j}, \tag{8.6} $$

where $\mathbf a^T = (a_{\mathbf i_1}, a_{\mathbf i_2}, \dots, a_{\mathbf i_\eta})$, M_{v,j} is an η × η matrix and N_{v,j} an η-dimensional vector. With the shorthand p := p_j(x,v) and ρ := ρ_j(x,v) the entries of M_{v,j}, N_{v,j} and the value P_{v,j} are

$$ (M_{v,j})_{\zeta,\xi} = \frac{1}{\rho^{2k+1}} \sum_{r=0}^{k}\sum_{s=0}^{k} \frac{(2k+1-r)!}{s!}\, \rho^{s+r} \alpha_{2k+1-s,r}\, \gamma_{r,v,\mathbf i_\zeta}\, \gamma_{s,v,\mathbf i_\xi} = \frac{\rho^{\|\mathbf i_\zeta\|+\|\mathbf i_\xi\|}}{\rho^{2k+1}}\, \lambda_{\|\mathbf i_\zeta\|,\|\mathbf i_\xi\|}\, v^{\mathbf i_\zeta+\mathbf i_\xi}, $$

$$ (N_{v,j})_{\zeta} = \frac{1}{\rho^{2k+1}} \sum_{r=0}^{k}\sum_{s=0}^{k} \frac{(2k+1-r)!}{s!}\, \rho^{s+r} \beta_{2k+1-s,r} \Bigl( D_v^r f(p)\, \gamma_{s,v,\mathbf i_\zeta} + (-1)^{r+s}\, D_v^s f(p)\, \gamma_{r,v,\mathbf i_\zeta} \Bigr) = \frac{\rho^{\|\mathbf i_\zeta\|}\, v^{\mathbf i_\zeta}}{\rho^{2k+1}} \sum_{r=0}^{k} \rho^r \sigma_{\|\mathbf i_\zeta\|,r}\, D_v^r f(p), $$

with λ_{r,s}, σ_{r,s} ∈ N for r, s = 0,…,k defined as

$$ \lambda_{r,s} = \frac{(2k+1-r)!\,(2k+1-s)!}{(2k+1-r-s)\,(k-r)!\,(k-s)!}, \qquad \sigma_{r,s} = (2k+1-s)!\, \beta_{2k+1-r,s} + (-1)^{r+s}\, \frac{(2k+1-r)!\, r!}{s!}\, \beta_{2k+1-s,r}, $$

and

$$ P_{v,j} = \frac{1}{\rho^{2k+1}} \sum_{r=0}^{k}\sum_{s=0}^{k} (-1)^{r+s}\, \frac{(2k+1-r)!}{s!}\, \rho^{r+s} \alpha_{2k+1-s,r}\, D_v^r f(p)\, D_v^s f(p). $$
Substituting (8.6) into (8.1) now gives

$$ E(\tau) = \mathbf a^T M \mathbf a + N^T \mathbf a + P, $$

where M is the matrix

$$ M = \int_{\tilde S_x} \sum_{j=1}^{\mu(x,v)} (-1)^{j-1} M_{v,j}\, dv, $$

and similarly for N and P, where integrating a matrix means integrating each element. The matrix M is clearly symmetric, and Theorem 8.5 shows that it is also positive definite. Hence there exists a unique minimum of E, which satisfies the linear system

$$ M \mathbf a = -\tfrac{1}{2} N, \tag{8.7} $$

and g(x) = τ(x) = a₀.
Theorem 8.5 The matrix M is positive definite for all x ∈ R^n\∂Ω.
Proof. This proof is based on the same ideas as the one in [29] for the cubic case and points inside the boundary. We will first show that aᵀMa ≥ 0. Let f = 0; then N = 0, P = 0 and

$$ \mathbf a^T M \mathbf a = E(\tau). $$

Since

$$ q_{v,j}^{(l)}(\rho_j(x,v)) = 0 \tag{8.8} $$

for l = 0,…,k, it follows from Lemma 8.3 that E_{v,j} is non-increasing and E_{v,j}(τ) − E_{v,j+1}(τ) ≥ 0 for all v ∈ S̃ₓ. If x ∈ Ω^c, then µ(x,v) is even for all v in S̃ₓ and we have

$$ E(\tau) \ge \int_{\tilde S_x} \bigl( E_{v,\mu(x,v)-1}(\tau) - E_{v,\mu(x,v)}(\tau) \bigr)\, dv \ge 0. $$

For x ∈ Ω the number µ(x,v) is odd for all v in S̃ₓ and

$$ E(\tau) \ge \int_{\tilde S_x} E_{v,\mu(x,v)}(\tau)\, dv \ge 0. $$

Thus aᵀMa ≥ 0 for all a ∈ R^η.
Further, suppose aᵀMa = 0 and let f be zero again. Then for x ∈ Ω^c we have

$$ 0 = E(\tau) \ge \int_{\tilde S_x} \bigl( E_{v,\mu(x,v)-1}(\tau) - E_{v,\mu(x,v)}(\tau) \bigr)\, dv \ge 0. $$

For x ∈ Ω we get

$$ 0 = E(\tau) \ge \int_{\tilde S_x} E_{v,\mu(x,v)}(\tau)\, dv \ge 0. $$
In the first case it follows from Lemma 8.3, and in the second it is obvious, that for all v ∈ S̃ₓ we must have

$$ E_{v,\mu}(\tau) = \int_0^{\rho_\mu(x,v)} \bigl( q_{v,\mu}^{(k+1)}(r) \bigr)^2 dr = 0. $$

Therefore q_{v,µ} must be of degree ≤ k for each v, and by (8.8) we have q_{v,µ} = 0 for all v ∈ S̃ₓ. Hence a = 0. □
8.4 Polynomial precision
In this section we will show the polynomial precision of pointwise radial minimization.
Theorem 8.6 Suppose that f : Rn → R is a polynomial of degree ≤ 2k + 1. Then g = f
in Rn \∂Ω.
Proof. Again this proof is based on the same ideas as the one in [29] for cubic precision and
points inside the boundary. Fix an arbitrary x ∈ Rn \∂Ω. Let Tf be the k-th order Taylor
expansion of f around x. We will show that E(T_f) ≤ E(τ) for all τ ∈ π_k(R^n). From this it follows that the minimizing polynomial is τ = T_f and g(x) = τ(x) = T_f(x) = f(x).
Let e_{v,j} : [0, ρ_j(x,v)] → R be the polynomial of degree ≤ 2k+1 defined by e_{v,j}(r) = q_{v,j}(r) − f(x+rv). Then

$$ q_{v,j}^{(k+1)}(r) = \frac{\partial^{k+1}}{\partial r^{k+1}} f(x+rv) + e_{v,j}^{(k+1)}(r) $$

and so

$$ E_{v,j}(\tau) = \int_0^{\rho_j(x,v)} \Bigl( \frac{\partial^{k+1}}{\partial r^{k+1}} f(x+rv) \Bigr)^2 dr + 2 \int_0^{\rho_j(x,v)} e_{v,j}^{(k+1)}(r)\, \frac{\partial^{k+1}}{\partial r^{k+1}} f(x+rv)\, dr + \int_0^{\rho_j(x,v)} \bigl( e_{v,j}^{(k+1)}(r) \bigr)^2 dr. $$

Now

$$ \int_0^{\rho_j(x,v)} \Bigl( \frac{\partial^{k+1}}{\partial r^{k+1}} f(x+rv) \Bigr)^2 dr = E_{v,j}(T_f), $$

and as e_{v,j} is a polynomial of degree ≤ 2k+1 with

$$ e_{v,j}^{(l)}(\rho_j(x,v)) = 0, \qquad l = 0,\dots,k, $$

it follows from Lemma 8.3 that

$$ \sum_{j=1}^{\mu(x,v)} (-1)^{j-1} \int_0^{\rho_j(x,v)} \bigl( e_{v,j}^{(k+1)}(r) \bigr)^2 dr \ge 0. \tag{8.9} $$
Thus,

$$ E(\tau) \ge E(T_f) + K, $$

where

$$ K = 2 \int_S \sum_{j=1}^{\mu(x,v)} (-1)^{j-1} \int_0^{\rho_j(x,v)} e_{v,j}^{(k+1)}(r)\, \frac{\partial^{k+1}}{\partial r^{k+1}} f(x+rv)\, dr\, dv. \tag{8.10} $$

We will show that K = 0, which will complete the proof. Observe that

$$ e_{v,j}^{(l)}(0) = D_v^l \tau(x) - D_v^l f(x), \qquad l = 0,\dots,k. $$

We apply integration by parts k+1 times to the inner integral in (8.10). In combination with (8.9) this yields

$$ \int_0^{\rho_j(x,v)} e_{v,j}^{(k+1)}(r)\, \frac{\partial^{k+1}}{\partial r^{k+1}} f(x+rv)\, dr = \sum_{i=0}^{k} (-1)^{i+1} \bigl( D_v^{k-i}\tau(x) - D_v^{k-i}f(x) \bigr)\, D_v^{k+1+i} f(x). $$

This integral is independent of j. Hence, if x ∈ Ω^c, we get

$$ K = 2 \int_S 0\, dv = 0. $$

For x ∈ Ω we have

$$ K = 2 \sum_{i=0}^{k} (-1)^{i+1} \int_S \bigl( D_v^{k-i}\tau(x) - D_v^{k-i}f(x) \bigr)\, D_v^{k+1+i} f(x)\, dv. \tag{8.11} $$

Note that both τ and f are infinitely smooth and that either k−i or k+1+i is odd whereas the other is even. Therefore

$$ \bigl( D_{-v}^{k-i}\tau(x) - D_{-v}^{k-i}f(x) \bigr)\, D_{-v}^{k+1+i} f(x) = - \bigl( D_v^{k-i}\tau(x) - D_v^{k-i}f(x) \bigr)\, D_v^{k+1+i} f(x), $$

and hence the integrals in (8.11) are zero, leading to K = 0. □

8.5 Boundary integrals
As observed in [44], the integrals over the unit sphere can be converted into boundary integrals if a parametric representation of ∂Ω is available. Following the notation of [13, 29], let s : D → ∂Ω be a regular, piecewise C¹ parametrization of the boundary. Thus for every point t ∈ D the vector

$$ s^{\perp}(t) = \det\bigl( D_1 s(t), \dots, D_n s(t) \bigr) $$

is non-zero and orthogonal to the tangent space at s(t). Here D_i s(t) denotes the first order partial derivative with respect to t_i. With the convention that s⊥(t) points outwards from Ω, we have (see [13, 29])

$$ p_j(x,v) = s(t), \qquad v = \frac{s(t)-x}{\|s(t)-x\|}, \qquad \rho_j(x,v) = \|s(t)-x\|, \qquad dv = \frac{(s(t)-x)\cdot s^{\perp}(t)}{\|s(t)-x\|^{n}}\, dt. $$
This yields

$$ M = \chi_x \int_D \widehat M\, w(x,t)\, dt, \qquad N = \chi_x \int_D \widehat N\, w(x,t)\, dt, \tag{8.12} $$

where

$$ w(x,t) = \frac{(s(t)-x)\cdot s^{\perp}(t)}{\|s(t)-x\|^{n+2k+1}}, \qquad \chi_x = \begin{cases} 1 & \text{if } x \in \Omega, \\ -1 & \text{if } x \in \Omega^c, \end{cases} $$

and, with the shorthand s := s(t),

$$ \widehat M_{\zeta,\xi} = \lambda_{\|\mathbf i_\zeta\|,\|\mathbf i_\xi\|}\, (s-x)^{\mathbf i_\zeta+\mathbf i_\xi}, \qquad \widehat N_{\zeta} = (s-x)^{\mathbf i_\zeta} \sum_{r=0}^{k} \sigma_{\|\mathbf i_\zeta\|,r}\, D_{s-x}^{r} f(s). $$

8.6 Numerical examples
Unfortunately, we are currently not able to formally prove that the pointwise radial minimizing function satisfies the Hermite interpolation conditions. Based on numerical experiments, however, we are convinced that this is the case, at least for domains in R² and R³. All examples are computed using adaptive numerical integration with a relative tolerance and an LDLᵀ solver for the linear system (8.7). As an LDLᵀ solver works for both symmetric positive and negative definite matrices, it allows us to ignore the value of χₓ in (8.12).
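The point of an LDLᵀ factorization here is that, unlike Cholesky, it does not require positive definiteness, so the sign χₓ of the assembled matrix never has to be determined. A minimal unpivoted sketch, adequate for definite matrices (function name is ours):

```python
import numpy as np

def ldlt_solve(A, b):
    """Solve A x = b for a symmetric positive *or* negative definite A
    via an unpivoted LDL^T factorization A = L D L^T."""
    n = len(b)
    L = np.eye(n)
    d = np.zeros(n)
    for j in range(n):
        d[j] = A[j, j] - (L[j, :j] ** 2) @ d[:j]
        for i in range(j + 1, n):
            L[i, j] = (A[i, j] - (L[i, :j] * L[j, :j]) @ d[:j]) / d[j]
    z = np.linalg.solve(L, b)       # forward substitution L z = b
    y = z / d                       # diagonal solve D y = z
    return np.linalg.solve(L.T, y)  # back substitution L^T x = y
```

Since solving M a = −N/2 and (−M) a = N/2 yields the same a, the factorization succeeds for either sign of χₓ.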
To numerically examine the interpolation behaviour, we compute the value of the minimizing function g along a line which intersects the boundary ∂Ω. The line is defined by a
point y on the boundary and the unit vector v that is orthogonal to the tangent space of ∂Ω
at y and that points inwards into Ω. The boundary data is taken from a function f which is
defined over the whole R2 and R3 respectively. This allows us to compare the values and
the derivatives of f and g along the line. The derivatives of g at a point x on the line are
computed using the numerical approximations

$$ D_v^1 g(x) \approx \frac{-g(x+2hv) + 8g(x+hv) - 8g(x-hv) + g(x-2hv)}{12h}, $$

$$ D_v^2 g(x) \approx \frac{-g(x+2hv) + 16g(x+hv) - 30g(x) + 16g(x-hv) - g(x-2hv)}{12h^2}, $$

$$ D_v^3 g(x) \approx \frac{g(x+2hv) - 2g(x+hv) + 2g(x-hv) - g(x-2hv)}{2h^3}. $$
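The three stencils can be bundled into one helper (a sketch; the names and the test function are ours, not the g of the paper):

```python
def directional_derivs(g, x, v, h=1e-2):
    """Fourth, fourth and second order accurate central difference
    approximations of the first three directional derivatives of g
    at x along v, using the five-point stencils above."""
    gp2, gp1 = g(x + 2 * h * v), g(x + h * v)
    gm1, gm2 = g(x - h * v), g(x - 2 * h * v)
    d1 = (-gp2 + 8 * gp1 - 8 * gm1 + gm2) / (12 * h)
    d2 = (-gp2 + 16 * gp1 - 30 * g(x) + 16 * gm1 - gm2) / (12 * h**2)
    d3 = (gp2 - 2 * gp1 + 2 * gm1 - gm2) / (2 * h**3)
    return d1, d2, d3
```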
Figure 8.1: Left: Half of a Taijitu (Chinese symbol for yin and yang) as domain with y and
±v. Right: Approximated trefoil knot as domain with y and −v.
Figure 8.2: Value and derivatives of the pointwise radial minimizing function with the half
of a Taijitu as domain along the line illustrated in Figure 8.1 for third order boundary data.
The top left shows the value, the top right the first, the bottom left the second and the bottom
right the third derivative. The black line is the data from the function used for boundary
conditions; the green line represents the minimizing function and the red line the difference
between those two. The x-axis shows the distance to the boundary point with a negative
x-value being outside and a positive one inside the domain.
Figure 8.3: Value and derivatives of the pointwise radial minimizing function with the unit
sphere as domain along the line illustrated in the top left for second order boundary data.
The top right shows the value, the bottom left the first and the bottom right the second
derivative. The black line is the data from the function used for boundary conditions; the
green line represents the minimizing function and the red line the difference between those
two. The x-axis shows the distance to the boundary point with a negative x-value being
outside and a positive one inside the domain.
Figure 8.4: The value (left) and the first derivative (right) of the pointwise radial minimizing
function with the approximated trefoil knot as domain along the line illustrated in Figure 8.1
for first order boundary data. The black line is the data from the function used for boundary
conditions; the green line represents the minimizing function and the red line the difference
between those two. The x-axis shows the distance to the boundary point with a negative
x-value being outside and a positive one inside the domain.
In our first example the left half of a Taijitu (the Chinese symbol for yin and yang) with a hole and a cusp, as shown in Figure 8.1, is used as domain. Figure 8.1 also illustrates the vectors v and −v. The Hermite boundary data of order three is taken from the function f₁(x) = cos(20πx₁) sin(15πx₂). The resulting g and its derivatives close to the boundary point are plotted in Figure 8.2 and clearly indicate the satisfaction of the Hermite boundary conditions. The second example uses the unit sphere in R³ as domain and takes the second order Hermite boundary data from the function f₂(x) = cos(10x₃ + cos(4.5√(2.25x₁² + x₂²))). Figure 8.3 presents the results. For an example with a non-trivial boundary in R³ we use the approximated trefoil knot shown in Figure 8.1 as domain. The Hermite boundary data of order one comes from f₃(x) = cos(2.5√(x₁² + 0.75x₂² + 4x₃²)) and the corresponding plots are shown in Figure 8.4. In both examples with Ω ⊂ R³ the numerical results indicate interpolation of the given boundary data.
Figures 8.5 and 8.6 show examples of the minimizing function g for two three-dimensional domains. This is achieved by cutting the domain along a plane and meshing the part of the plane lying inside the domain with triangles. The values of g are computed at the triangle vertices and a nearest-neighbour interpolation is performed inside each triangle.
The resulting values are then visualized using colour. In Figure 8.5 the unit sphere is used
as domain and f2 determines the boundary data. The approximated trefoil knot is taken as
domain in Figure 8.6 with the boundary conditions specified by f3 .
Figure 8.5: Illustration of the pointwise radial minimizing function with the unit sphere in
R3 as domain for first order boundary data at the top and second order at the bottom. In
both cases the left shows the original function used for the boundary data, the middle the
minimizing function and the right the difference between them. The same colour transfer
function is used for the original and the minimizing function.
Figure 8.6: Illustration of the pointwise radial minimizing function with the approximated
trefoil knot as domain for first order boundary data. The left shows the original function
used for the boundary data, the middle the minimizing function and the right the difference
between them. The same colour transfer function is used for the original and the minimizing
function.
8.7 Future Work
Pointwise radial minimization is a new method for Hermite interpolation of any order over
an arbitrary domain. It yields a function which is smooth and well defined inside and outside
the domain. Unfortunately a formal proof of the Hermite interpolation property is still
missing.
All numerical examples presented in this paper are computed using numerical integration. Although an adaptive method is used, the evaluation nevertheless involves the computation of several integrals for each point. This makes it a time-consuming task when performed with high accuracy for a large number of points. If an explicit formula were known for a polytope boundary with the data given only at the vertices, one could probably achieve a faster approximation of the minimizing function by first approximating the boundary by a polytope and then applying the explicit formula.
9 Rational Hermite interpolation without poles
Michael S. Floater and Christian Schulz
Abstract
Univariate interpolation is a fundamental problem in mathematics and geometric
modelling. In the last 25 years several researchers noticed the favorable properties of
the barycentric formula for the polynomial Lagrange interpolant. A similar barycentric
form also exists for the general Hermite setting. Both forms easily extend to rational
interpolants while maintaining their favorable properties. For the Lagrange case different interpolants have been developed by choosing certain weights in the barycentric
formula. In this paper we examine some rational interpolants resulting from certain
choices of weights in the barycentric formula for Hermite interpolation. These weights
guarantee that the interpolants do not have any poles. In addition we will show that in
the first order Hermite setting, the univariate version of the recently developed method
of pointwise radial minimization really does interpolate the data and can be written in a
barycentric-like formula.
Keywords: Rational interpolation, Barycentric interpolation, Hermite interpolation, Pointwise
radial minimization
9.1 Introduction
The problem of interpolating a given set of data points is a fundamental part of mathematics
and applications. It occurs in an abundance of situations like building approximations of
complicated functions for further computations or numerical quadrature.
9.1.1 Lagrange interpolation
The simplest interpolation problem is the Lagrange case, where given a number of distinct
points y0 < . . . < yn one wants to find a function r that interpolates given values f (yi ),
i = 0, …, n, at these points,

$$ r(y_i) = f(y_i), \qquad i = 0,\dots,n. $$
It is well known that there exists a unique polynomial of degree at most n which fulfils the interpolation conditions. One of many ways to represent this polynomial is the Newton form,

$$ r(x) = f(y_0) + f[y_0,y_1](x-y_0) + f[y_0,y_1,y_2](x-y_0)(x-y_1) + \cdots. $$

Here f[y₀,…,yₙ] denotes the divided difference, which can be computed by the recurrence formula

$$ f[y_0,\dots,y_n] = \frac{f[y_1,\dots,y_n] - f[y_0,\dots,y_{n-1}]}{y_n - y_0}, \qquad f[y_i] = f(y_i). $$

A different possibility is to use the Lagrange form with the Lagrange polynomials L_i(x),

$$ r(x) = \sum_{i=0}^{n} f(y_i)\, L_i(x), \qquad L_i(x) = \prod_{\substack{j=0 \\ j \ne i}}^{n} \frac{x-y_j}{y_i-y_j}. \tag{9.1} $$
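The divided difference recurrence and the Newton form translate directly into code (a sketch with our own function names):

```python
def newton_coeffs(y, f):
    """Divided differences f[y0], f[y0,y1], ..., f[y0,...,yn] via the
    recurrence, returned as the coefficients of the Newton form."""
    n = len(y)
    level = list(f)                 # level 0: f[y_i]
    coeffs = [level[0]]
    for m in range(1, n):
        level = [(level[i + 1] - level[i]) / (y[i + m] - y[i])
                 for i in range(n - m)]
        coeffs.append(level[0])
    return coeffs

def newton_eval(y, coeffs, x):
    """Evaluate the Newton form by nested (Horner-like) multiplication."""
    r = coeffs[-1]
    for c, yi in zip(reversed(coeffs[:-1]), reversed(y[:len(coeffs) - 1])):
        r = r * (x - yi) + c
    return r
```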
This kind of representation is often associated with numerical instability and high evaluation
costs, as a direct use of (9.1) consists of O(n2 ) operations. However, in a series of papers
[8, 9, 39, 67, 80] different authors have reported on the advantages of the Lagrange form
when written in a slightly different way,
$$ r(x) = \Omega(x) \sum_{i=0}^{n} \frac{w_i}{x-y_i}\, f(y_i) = \left. \sum_{i=0}^{n} \frac{w_i}{x-y_i}\, f(y_i) \right/ \sum_{i=0}^{n} \frac{w_i}{x-y_i}\,, \tag{9.2} $$

where

$$ w_i = \prod_{\substack{j=0 \\ j \ne i}}^{n} \frac{1}{y_i-y_j}, \qquad \Omega(x) = \prod_{i=0}^{n} (x-y_i). $$
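The second form of (9.2) is straightforward to evaluate; the node test below also illustrates why the formula interpolates for any nonzero weights (a sketch, names are ours):

```python
from math import prod

def poly_weights(y):
    """Weights w_i = prod_{j != i} 1/(y_i - y_j) of the polynomial case."""
    return [1.0 / prod(y[i] - y[j] for j in range(len(y)) if j != i)
            for i in range(len(y))]

def barycentric_eval(y, fy, w, x):
    """Second (barycentric) form of (9.2); returns the data value exactly
    when x hits a node, so no division by zero can occur."""
    num = den = 0.0
    for yi, fi, wi in zip(y, fy, w):
        if x == yi:
            return fi
        t = wi / (x - yi)
        num += t * fi
        den += t
    return num / den
```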
The first formula in (9.2) is called the first form and the other the second form of the barycentric formula for r. The second form is often simply called the barycentric formula for r and has, amongst others, the advantage that it interpolates the given function value f(y_i) for all i whenever every weight w_i ≠ 0 [8, 67]. This provides numerical stability in the sense that the
interpolation property is not destroyed by rounding errors in the computation of the wi ’s,
as long as they stay nonzero. In terms of numerical stability with respect to the evaluation
of the interpolating polynomial, Higham shows in [39] that the first form is backward stable, but concludes that the difference in accuracy for the second form is only significant for
badly chosen points yi . The evaluation costs of either form in (9.2) are reduced to O(n)
operations, as the weights wi can be precomputed like the divided differences in O(n2 ) operations. Moreover, the complexity of changing the weights due to the addition of one extra
interpolation point is comparable to that for the divided differences, as shown in [80].
The choice of the weights wi in (9.2) ensures that r is a polynomial. However, as all
nonzero weights lead to interpolation, the barycentric form lends itself to representing a
rational interpolating function. In fact, any rational function interpolating the data with a
denominator and a numerator of degree at most n can be represented in the barycentric
form [8]. Polynomial interpolation is known to be ill conditioned for an unsuitable choice
of points yi , leading to undesired high oscillations. Using a rational interpolant instead can
lead to much more pleasing results. On the other hand, a rational function can have poles
near or inside the interpolation interval. This can be a blessing if one wants to approximate
a function that has poles, but a curse if one wants to avoid them. Moreover, it is not easy
to control the location of those poles. In classical rational interpolation the objective is to
find two polynomials p and q with r(x) = p(x)/q(x), where the degrees of p and q sum up to n. The barycentric form can then be derived from p and q. Alternatively, one can use the barycentric formula directly by choosing suitable weights. In [7] Berrut suggests the weights

$$ w_i = (-1)^i, $$
which in addition to interpolation ensure that r does not have any poles in R. In [26] Floater
and Hormann introduce a rational interpolant by blending polynomials which interpolate a
small part of the data, and give the formula for computing the corresponding weights in the
barycentric form. They are able to prove that the interpolant has no poles in R and that it
has a high convergence rate which depends on the degree of the polynomials. Moreover, it
turns out that Berrut’s weights are a special case of this approach.
A different approach to interpolating the data would be to use piecewise polynomials
like splines with some continuity between the parts. This approach has a huge practical
importance and is thus quite popular. However, rational interpolants can provide a higher
convergence rate and better smoothness over the whole domain.
9.1.2 Hermite interpolation
So far the above discussion covered the Lagrange case. In the more general setting of the
Hermite interpolation problem, we want to interpolate not only the function values but also derivatives up to some order. Let, in addition to the y_i, the multiplicities κ_i ≥ 1 be given.
We want to find a function r where

$$ r^{(j)}(y_i) = f^{(j)}(y_i), \qquad i = 0,\dots,n, \quad j = 0,\dots,\kappa_i-1. \tag{9.3} $$
Similar to the Lagrange case there exists a unique polynomial interpolating the data (see e.g. [11]),

$$ r(x) = \sum_{i=0}^{n} \sum_{j=0}^{\kappa_i-1} f^{(j)}(y_i)\, \frac{1}{j!}\, \frac{\Omega(x)}{(x-y_i)^{\kappa_i-j}} \sum_{k=0}^{\kappa_i-j-1} \frac{(x-y_i)^k}{k!} \left( \frac{(x-y_i)^{\kappa_i}}{\Omega(x)} \right)^{(k)}_{x=y_i} \tag{9.4} $$

with

$$ \Omega(x) = \prod_{i=0}^{n} (x-y_i)^{\kappa_i}. $$
Here $(\dots)^{(k)}_{x=y_i}$ means the k-th derivative of the term in brackets evaluated at y_i. The interpolating polynomial can again be expressed by a barycentric formula. We reformulate (9.4) into

$$ r(x) = \Omega(x) \sum_{i=0}^{n} \sum_{k=0}^{\kappa_i-1} \frac{1}{(x-y_i)^{k+1}}\, \frac{1}{(\kappa_i-1-k)!} \left( \frac{(x-y_i)^{\kappa_i}}{\Omega(x)} \right)^{(\kappa_i-1-k)}_{x=y_i} \sum_{j=0}^{k} \frac{f^{(j)}(y_i)}{j!}\, (x-y_i)^j, $$

use the uniqueness for f = 1, which gives

$$ 1 = \Omega(x) \sum_{i=0}^{n} \sum_{k=0}^{\kappa_i-1} \frac{1}{(x-y_i)^{k+1}}\, \frac{1}{(\kappa_i-1-k)!} \left( \frac{(x-y_i)^{\kappa_i}}{\Omega(x)} \right)^{(\kappa_i-1-k)}_{x=y_i}, $$

and get

$$ r(x) = \left. \sum_{i=0}^{n} \sum_{k=0}^{\kappa_i-1} \frac{w_{i,k}}{(x-y_i)^{k+1}} \sum_{j=0}^{k} \frac{f^{(j)}(y_i)}{j!}\, (x-y_i)^j \right/ \sum_{i=0}^{n} \sum_{k=0}^{\kappa_i-1} \frac{w_{i,k}}{(x-y_i)^{k+1}} \tag{9.5} $$

with the weights

$$ w_{i,k} = \frac{1}{(\kappa_i-1-k)!} \left( \frac{(x-y_i)^{\kappa_i}}{\Omega(x)} \right)^{(\kappa_i-1-k)}_{x=y_i}. \tag{9.6} $$
Similarly to the Lagrange case, the barycentric formula (9.5) provides interpolation as long as the weights w_{i,κᵢ−1} ≠ 0 for all i = 0,…,n [68]. Thus the barycentric formula again lends itself to representing rational interpolants. A rational Hermite interpolant of the form (9.5) is completely determined by a denominator of degree at most $\bigl(\sum_{i=0}^{n} \kappa_i\bigr) - 1$. In [68] Schneider and Werner provide an efficient way to compute the corresponding weights of such a denominator given in Newton form. The same algorithm can be used to calculate the weights (9.6), as using the formula directly is relatively expensive.
ω_{i,k} = 0 for k = 0, …, κ_i − 1 and i = 0, …, n;   ω_{0,κ₀−1} = 1
for i = 0 to n − 1 do {
    for m = i + 1 to n do {
        a = 1/(y_i − y_m)
        for k = 0 to κ_m − 1 do {
            ω_{i,κ_i−1} = a ∗ ω_{i,κ_i−1}
            for j = 1 to κ_i − 1 do {
                ω_{i,κ_i−1−j} = a ∗ (ω_{i,κ_i−1−j} − ω_{i,κ_i−j})
            }
            ω_{m,κ_m−1−k} = ω_{m,κ_m−1−k} − ω_{i,0}
        }
    }
}                                                        (9.7)
Schneider and Werner also propose the choice of a certain denominator to avoid poles and
yield pleasant results.
9.1.3 Contribution
In the rational Hermite interpolation literature surprisingly little work seems to have been done using the barycentric formula directly, that is, by providing explicit formulas for the weights. The contribution of this paper splits into two parts. First we will study a Hermite extension of Berrut’s weights [7] and the Hermite case of Floater and Hormann’s approach [26]. The latter will lead to an explicit formula for the weights. As a second part we will study the first order univariate case of the recently developed transfinite interpolation method of pointwise radial minimization [29]. This study will lead to a barycentric-like formula and in addition enable us to prove the interpolation property of this new method in the chosen setting.
9.2 A simple Berrut Hermite interpolant
Berrut’s weights in the Lagrange case provide an easily computable rational interpolant. One possible extension to the Hermite setting yields

$$ w_{i,k} = 0, \quad 0 \le k < \kappa_i - 1, \qquad \text{and} \qquad w_{i,\kappa_i-1} = (-1)^{s_i-\kappa_0}, $$

where

$$ s_{-1} = 0 \qquad \text{and} \qquad s_i = \sum_{j=0}^{i} \kappa_j = s_{i-1} + \kappa_i. $$
CHAPTER 9. RATIONAL HERMITE INTERPOLATION WITHOUT POLES
The $-\kappa_0$ part in the exponent of the weights forces $w_{0,\kappa_0-1} = 1$. As all weights $w_{i,\kappa_i-1} \ne 0$ the interpolation condition is fulfilled, and as $w_{i+1,\kappa_{i+1}-1} = (-1)^{\kappa_{i+1}} w_{i,\kappa_i-1}$ the weights also fulfill the necessary condition for a rational interpolant without poles given in [68]. In the case of constant multiplicity, meaning $\kappa_i = K$ for all $i$, Proposition 9.1 shows that the resulting function $r$ really does not have any poles in $\mathbb{R}$. However, in the general case of non-constant multiplicities, the function $r$ might possess poles. We will show this by giving an example where the denominator in (9.5) is zero. Let $n = 2$ and $y_0 = -2/3$, $y_1 = -1/2$ and $y_2 = 4$ with multiplicities $\kappa_0 = 2$, $\kappa_1 = 1$ and $\kappa_2 = 1$. Choosing $x = 0$ then yields
$$\sum_{i=0}^{n} \sum_{k=0}^{\kappa_i-1} \frac{w_{i,k}}{(x-y_i)^{k+1}} \bigg|_{x=0} = \left( \frac{1}{(x+\frac{2}{3})^2} + \frac{-1}{x+\frac{1}{2}} + \frac{1}{x-4} \right) \bigg|_{x=0} = 0.
$$
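The vanishing denominator can be checked numerically; the following sketch (my own) hard-codes the three nonzero weights of this example, $w_{0,1} = 1$, $w_{1,0} = -1$ and $w_{2,0} = 1$.

```python
def denominator(x):
    # Denominator of (9.5) for y = (-2/3, -1/2, 4), kappa = (2, 1, 1)
    # with Berrut's Hermite weights w_{0,1} = 1, w_{1,0} = -1, w_{2,0} = 1.
    return 1.0 / (x + 2.0 / 3.0) ** 2 - 1.0 / (x + 0.5) + 1.0 / (x - 4.0)

print(denominator(0.0))  # vanishes up to rounding: 9/4 - 2 - 1/4 = 0
```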
Proposition 9.1 Assume that all points have the same multiplicity $\kappa_i = K > 0$. Then the barycentric rational interpolant defined by (9.5) with weights
$$w_{i,k} = 0, \quad 0 \le k < K-1, \qquad\text{and}\qquad w_{i,K-1} = (-1)^{iK} \qquad (9.8)$$
has no poles in $\mathbb{R}$.
Proof. The weights $w_{i,K-1}$ are nonzero for all $i$. Therefore, according to Corollary 2.2 in [68], the denominator of $r$ is nonzero for $x = y_i$ for all $i$. In the following we can thus assume that $x \in \mathbb{R} \setminus \{y_0, \dots, y_n\}$. We have
$$\sum_{i=0}^{n} \sum_{k=0}^{\kappa_i-1} \frac{w_{i,k}}{(x-y_i)^{k+1}} = \sum_{i=0}^{n} \frac{(-1)^{iK}}{(x-y_i)^K},$$
hence for even $K$ we obviously have a strictly positive denominator. For odd $K$ we let $\alpha = -1$ if $x < y_0$, $\alpha = n$ if $x > y_n$, and otherwise let $\alpha$ be such that $x \in (y_\alpha, y_{\alpha+1})$. This yields
$$\sum_{i=0}^{n} \frac{(-1)^{iK}}{(x-y_i)^K} = (-1)^{\alpha} \left( \sum_{i=0}^{\alpha} \frac{(-1)^i}{(x-y_{\alpha-i})^K} + \sum_{i=0}^{n-1-\alpha} \frac{(-1)^i}{(y_{\alpha+1+i}-x)^K} \right), \qquad (9.9)$$
where an empty sum is considered to be zero. For $\alpha \ge 0$ we can rearrange the first sum into pairs, leading to
$$\sum_{i=0}^{\alpha} \frac{(-1)^i}{(x-y_{\alpha-i})^K} = \sum_{\substack{i=0\\ i \text{ even}}}^{\alpha-1} \left( \frac{1}{(x-y_{\alpha-i})^K} - \frac{1}{(x-y_{\alpha-i-1})^K} \right)$$
for odd α and
$$\sum_{i=0}^{\alpha} \frac{(-1)^i}{(x-y_{\alpha-i})^K} = \frac{1}{(x-y_0)^K} + \sum_{\substack{i=0\\ i \text{ even}}}^{\alpha-2} \left( \frac{1}{(x-y_{\alpha-i})^K} - \frac{1}{(x-y_{\alpha-i-1})^K} \right)$$
for even $\alpha$. As $0 < x - y_{\alpha-i} < x - y_{\alpha-i-1}$ we have
$$\frac{1}{(x-y_{\alpha-i})^K} - \frac{1}{(x-y_{\alpha-i-1})^K} > 0.$$
This yields, together with $y_0 < x$, that
$$\sum_{i=0}^{\alpha} \frac{(-1)^i}{(x-y_{\alpha-i})^K} > 0.$$
Similarly, for $\alpha < n$ we can write the second sum in (9.9) as
$$\sum_{i=0}^{n-1-\alpha} \frac{(-1)^i}{(y_{\alpha+1+i}-x)^K} = \sum_{\substack{i=0\\ i \text{ even}}}^{n-2-\alpha} \left( \frac{1}{(y_{\alpha+1+i}-x)^K} - \frac{1}{(y_{\alpha+2+i}-x)^K} \right)$$
if n − 1 − α is odd and as
$$\sum_{i=0}^{n-1-\alpha} \frac{(-1)^i}{(y_{\alpha+1+i}-x)^K} = \frac{1}{(y_n-x)^K} + \sum_{\substack{i=0\\ i \text{ even}}}^{n-3-\alpha} \left( \frac{1}{(y_{\alpha+1+i}-x)^K} - \frac{1}{(y_{\alpha+2+i}-x)^K} \right)$$
if $n-1-\alpha$ is even. With $0 < y_{\alpha+1+i} - x < y_{\alpha+2+i} - x$ this gives
$$\sum_{i=0}^{n-1-\alpha} \frac{(-1)^i}{(y_{\alpha+1+i}-x)^K} > 0.$$
Hence the denominator of $r$ is strictly positive for even $\alpha$ and strictly negative for odd $\alpha$. □
Given constant multiplicity at the points yi , the interpolant using Berrut’s weights (9.8)
has precision K−1. This can be shown by observing that in the barycentric formula (9.5) the
sum over j represents the k-th order Taylor expansion of f and thus reproduces polynomials
of degree at most k. Hence, with f being a polynomial of degree at most K − 1, we have
$$r(x) = \frac{\displaystyle\sum_{i=0}^{n} \frac{(-1)^{iK}}{(x-y_i)^K} \sum_{j=0}^{K-1} \frac{f^{(j)}(y_i)}{j!} (x-y_i)^j}{\displaystyle\sum_{i=0}^{n} \frac{(-1)^{iK}}{(x-y_i)^K}} = f(x) \, \frac{\displaystyle\sum_{i=0}^{n} \frac{(-1)^{iK}}{(x-y_i)^K}}{\displaystyle\sum_{i=0}^{n} \frac{(-1)^{iK}}{(x-y_i)^K}} = f(x).$$

9.3 The Hermite Floater-Hormann interpolant
Our extension of Berrut’s weights to the general Hermite problem does not yield satisfactory
results, as the resulting interpolant might have poles for non-constant multiplicities. In the
Lagrange setting, Berrut’s weights are a special case of the interpolant introduced in [26]
by Floater and Hormann. Instead of extending Berrut’s weights, let us consider Floater and
Hormann’s interpolant in the general Hermite setting.
Let the points x0 , . . . , xN be given. The interpolant r is then defined as
$$r(x) = \frac{\sum_{i=0}^{N-d} \lambda_i(x) p_i(x)}{\sum_{i=0}^{N-d} \lambda_i(x)}. \qquad (9.10)$$
Here the polynomial $p_i$ of degree $d$ interpolates the data given at the points $x_i, \dots, x_{i+d}$ and
$$\lambda_i(x) = \frac{(-1)^i}{(x-x_i) \cdots (x-x_{i+d})}.$$
In the Lagrange setting the points $x_i$ are mutually distinct. However, it is quite common to represent the Hermite interpolation conditions (9.3) also by just letting the points $x_i$ coalesce. The understanding is that if a point appears $k$ times, then the interpolating function interpolates derivatives of order up to $k-1$ at this point. Thus defining
$$x_{s_{j-1}} = x_{s_{j-1}+1} = \dots = x_{s_j-1} = y_j \qquad (9.11)$$
for $j = 0, \dots, n$ yields $N = s_n - 1$ and enables us to use (9.10) also in the general Hermite setting.
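For intuition, in the Lagrange case (distinct points) the interpolant (9.10) can be evaluated directly from its definition. The following Python sketch is numerically naive and only for illustration; the names are my own, and $x$ must not coincide with a node.

```python
import numpy as np

def fh_lagrange(x, xs, fs, d):
    """Evaluate the Floater-Hormann interpolant (9.10) at x (distinct nodes)."""
    N = len(xs) - 1
    num = den = 0.0
    for i in range(N - d + 1):
        lam = (-1.0) ** i / np.prod([x - xs[i + t] for t in range(d + 1)])
        # p_i: the degree-d polynomial interpolating the data at xs[i..i+d]
        p_i = np.polyval(np.polyfit(xs[i:i + d + 1], fs[i:i + d + 1], d), x)
        num += lam * p_i
        den += lam
    return num / den
```

With $d = 2$ this reproduces quadratics, e.g. $f(x) = x^2$ is recovered up to rounding, in line with the precision of degree $d$ discussed below.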
9.3.1 The barycentric form
In [26] the barycentric formula is given for the Lagrange, but not for the Hermite case. In
order to express (9.10) in barycentric form, we need to be able to represent each polynomial
pi in the Hermite form (9.4). For this in turn it is necessary to compute the multiplicity mi,j
of any point yj for each polynomial pi , where mi,j = 0 means that yj does not contribute
to determining $p_i$. From (9.11) it follows that
$$m_{i,j} = \begin{cases} 0 & \text{if } s_j \le i, & (a)\\ s_j - i & \text{if } s_{j-1} \le i < s_j \le i+d, & (b)\\ \kappa_j & \text{if } i < s_{j-1} < s_j \le i+d, & (c)\\ d+1 & \text{if } s_{j-1} \le i \le i+d < s_j, & (d)\\ i+d+1-s_{j-1} & \text{if } i < s_{j-1} \le i+d < s_j, & (e)\\ 0 & \text{if } s_{j-1} > i+d. & (f) \end{cases} \qquad (9.12)$$
In (9.12) the different cases describe that the points xsj−1 , . . . , xsj −1 are
(a) left of xi , . . . , xi+d .
(b) partially left of xi , . . . , xi+d or contained within them for sj−1 = i.
(c) contained in xi , . . . , xi+d , but without sj−1 = i or sj − 1 = i + d.
(d) a superset of xi , . . . , xi+d or equal to them.
(e) partially right of xi , . . . , xi+d or contained within them for sj − 1 = i + d.
(f ) right of xi , . . . , xi+d .
The value of mi,j can be determined by at most four comparisons.
    if s_j ≤ i or s_{j−1} > i + d then m_{i,j} = 0
    elsif s_j > i + d then {
        if s_{j−1} ≤ i then m_{i,j} = d + 1 else m_{i,j} = i + d + 1 − s_{j−1}
    } else {
        if s_{j−1} ≤ i then m_{i,j} = s_j − i else m_{i,j} = κ_j
    }                                                          (9.13)
This proves that all possible cases are considered in (9.12). Observe also that $m_{i,j} = 0$ for the cases $(a)$ and $(f)$, whereas $0 < m_{i,j} \le \kappa_j$ for the cases $(b)$–$(e)$. Thus we can define
$$J_i = \{\, j \mid m_{i,j} > 0 \,\}, \qquad I_j = \{\, i \mid m_{i,j} > 0 \,\} = \{\, i \mid 0 \le i \le N-d \;\wedge\; s_{j-1}-d \le i \le s_j-1 \,\}.$$
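Algorithm (9.13) translates line by line into Python. The sketch below (naming mine) can be cross-checked against a brute-force count of how often $y_j$ occurs among the coalescing points $x_i, \dots, x_{i+d}$ of (9.11).

```python
def mult(i, j, s, d, kappa):
    """Multiplicity m_{i,j} of y_j among x_i, ..., x_{i+d} via (9.13)."""
    s_prev = s[j - 1] if j > 0 else 0  # s_{-1} = 0
    if s[j] <= i or s_prev > i + d:
        return 0
    if s[j] > i + d:
        return d + 1 if s_prev <= i else i + d + 1 - s_prev
    return s[j] - i if s_prev <= i else kappa[j]
```

Comparing against the brute-force count for a small example with mixed multiplicities confirms the four-comparison case analysis.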
The $m_{i,j}$ now enable us to express the polynomial $p_i$ in Hermite form. Let
$$\Omega_i(x) = \prod_{j \in J_i} (x-y_j)^{m_{i,j}}$$
and
$$v_{i,j,k} = \begin{cases} \dfrac{1}{(m_{i,j}-1-k)!} \left( \dfrac{(x-y_j)^{m_{i,j}}}{\Omega_i(x)} \right)^{(m_{i,j}-1-k)} \Bigg|_{x=y_j} & \text{if } 0 \le k < m_{i,j},\\[8pt] 0 & \text{else}, \end{cases}$$
which yields
$$p_i(x) = \Omega_i(x) \sum_{j \in J_i} \sum_{k=0}^{\kappa_j-1} \frac{v_{i,j,k}}{(x-y_j)^{k+1}} \sum_{l=0}^{k} \frac{f^{(l)}(y_j)}{l!} (x-y_j)^l.$$
Substituting this into the numerator in (9.10) gives
$$\sum_{i=0}^{N-d} \lambda_i(x) p_i(x) = \sum_{i=0}^{N-d} \sum_{j \in J_i} \sum_{k=0}^{\kappa_j-1} \frac{(-1)^i v_{i,j,k}}{(x-y_j)^{k+1}} \sum_{l=0}^{k} \frac{f^{(l)}(y_j)}{l!} (x-y_j)^l = \sum_{j=0}^{n} \sum_{k=0}^{\kappa_j-1} \frac{w_{j,k}}{(x-y_j)^{k+1}} \sum_{l=0}^{k} \frac{f^{(l)}(y_j)}{l!} (x-y_j)^l,$$
where
$$w_{j,k} = \sum_{i \in I_j} (-1)^i v_{i,j,k}.$$
From
$$1 = \Omega_i(x) \sum_{j \in J_i} \sum_{k=0}^{\kappa_j-1} \frac{v_{i,j,k}}{(x-y_j)^{k+1}}$$
it follows also that
$$\sum_{i=0}^{N-d} \lambda_i(x) = \sum_{i=0}^{N-d} \sum_{j \in J_i} \sum_{k=0}^{\kappa_j-1} \frac{(-1)^i v_{i,j,k}}{(x-y_j)^{k+1}} = \sum_{j=0}^{n} \sum_{k=0}^{\kappa_j-1} \frac{w_{j,k}}{(x-y_j)^{k+1}}.$$
Algorithm (9.7) provides an efficient method for computing the $v_{i,j,k}$ and hence the $w_{j,k}$. If the $v_{i,j,k}$ are computed in order of increasing $i$, the multiplicities $m_{i,j}$ need not be computed with algorithm (9.13): when $i$ increases by one, only the first $m_{i,j} > 0$ changes, together with either the last $m_{i,j} > 0$ or the first $m_{i,j} = 0$ after that.
9.3.2 Properties
In [26] it is shown that if $\kappa_j = 1$ for all points the interpolant has no poles in $\mathbb{R}$. This result carries over to the Hermite case under the condition that
$$d \ge \max_{0 \le j \le n} (\kappa_j) - 1, \qquad (9.14)$$
as shown in Proposition 9.2. Condition (9.14) makes sense as this is the minimal degree of a polynomial needed to interpolate all given data at a point with maximal multiplicity. In fact, this condition determines whether $r$ fulfills the Hermite interpolation conditions (9.3) or not, see Proposition 9.3.
Proposition 9.2 Under condition (9.14), the function $r$ from (9.10) has no poles in $\mathbb{R}$.
Proof. By multiplying (9.10) with $(-1)^{N-d} \Omega(x)$ the interpolant can be expressed as
$$r(x) = \frac{\sum_{i=0}^{N-d} \mu_i(x) p_i(x)}{\sum_{i=0}^{N-d} \mu_i(x)} \qquad (9.15)$$
with
$$\mu_i(x) = (-1)^{N-d} \Omega(x) \lambda_i(x) = \prod_{j=0}^{i-1} (x-x_j) \prod_{k=i+d+1}^{N} (x_k-x).$$
We will show that the interpolant has no poles by showing that the denominator in (9.15) is strictly positive for all $x \in \mathbb{R}$.

Let first $x = y_\alpha$ for some $\alpha$, $0 \le \alpha \le n$, and
$$J_\alpha := \{\, i \mid 0 \le i \le N-d \;\wedge\; s_\alpha - 1 - d \le i \le s_{\alpha-1} \,\}.$$
Due to (9.14), $J_\alpha$ is non-empty, $\mu_i(y_\alpha) = 0$ for all $i \notin J_\alpha$ and $\mu_i(y_\alpha) > 0$ for all $i \in J_\alpha$, and hence
$$\sum_{i=0}^{N-d} \mu_i(y_\alpha) > 0.$$
For the other $x \in \mathbb{R} \setminus \{y_0, \dots, y_n\}$, the proof by Floater and Hormann in [26] that the denominator in (9.15) is strictly positive is still valid under condition (9.14) and hence carries over. However, note that there is a misprint in the proof in [26]. On page 320, in the first inequality from the top, it must be
$$x_{i+d+1} - x > x_i - x, \qquad\text{not}\qquad x_{i+d+1} - x > x_{i+1} - x,$$
from which it follows that $|\mu_i(x)| > |\mu_{i+1}(x)|$ for all $i$ where $x_i > x$. □
Proposition 9.3 If condition (9.14) is fulfilled, the rational function $r$ defined in (9.10) interpolates the given Hermite data, i.e. (9.3) is true. If (9.14) is not fulfilled, then all derivatives of order $\max_{0 \le j \le n} (\kappa_j) - 1$ are unattainable.
Proof. We will prove the interpolation property by showing that under condition (9.14) the
weights $w_{j,\kappa_j-1}$ are nonzero for all $j$. From Proposition 2.1 in [68] it then follows that $r$ interpolates the Hermite data.
Let $0 \le j \le n$ be arbitrary, but fixed. If $d = \kappa_j - 1$ this yields
$$w_{j,\kappa_j-1} = (-1)^{s_{j-1}} v_{s_{j-1},j,\kappa_j-1} = (-1)^{s_{j-1}},$$
which is clearly nonzero. For $d > \kappa_j - 1$ we can, for all $i$ where $m_{i,j} = \kappa_j$, write
$$\Omega_i = \prod_{k=i}^{i+d} (x-x_k), \qquad\text{and}\qquad v_{i,j,\kappa_j-1}^{-1} = (-1)^{i+d+1-s_j} \prod_{k=i}^{s_{j-1}-1} (y_j-x_k) \prod_{l=s_j}^{i+d} (x_l-y_j),$$
where an empty product is considered to be one. The weight $w_{j,\kappa_j-1}$ then becomes
$$w_{j,\kappa_j-1} = \sum_{\substack{i\\ m_{i,j}=\kappa_j}} (-1)^i v_{i,j,\kappa_j-1} = (-1)^{d+1-s_j} \sum_{\substack{i\\ m_{i,j}=\kappa_j}} \left( \prod_{k=i}^{s_{j-1}-1} (y_j-x_k) \prod_{l=s_j}^{i+d} (x_l-y_j) \right)^{-1} \ne 0.$$
This is nonzero as in the first product $y_j - x_k > 0$ and in the second $x_l - y_j > 0$, and hence the whole sum is strictly positive. Setting $d \ge \max_{0 \le j \le n} (\kappa_j) - 1$ then gives $w_{j,\kappa_j-1} \ne 0$ for all $j = 0, \dots, n$.
Let now $d < \max_{0 \le j \le n} (\kappa_j) - 1 =: K$. Then $m_{i,j} \le K$ and hence $v_{i,j,K} = 0$ for all $i, j$. But then $w_{j,K} = 0$ for all $j$ where $\kappa_j = K+1$. From the remarks to Proposition 2.1 in [68] it follows that the given derivatives $f^{(K)}(y_j)$ are then unattainable. □
The interpolant $r$ has precision of degree $d$, as for any polynomial $f$ of degree at most $d$ the polynomials $p_i$ reproduce $f$ and hence
$$r(x) = \frac{\sum_{i=0}^{N-d} \lambda_i(x) p_i(x)}{\sum_{i=0}^{N-d} \lambda_i(x)} = f(x) \, \frac{\sum_{i=0}^{N-d} \lambda_i(x)}{\sum_{i=0}^{N-d} \lambda_i(x)} = f(x).$$
In [26] upper bounds on the approximation error of $r$ to a sufficiently smooth function $f$ are derived for $d = 0$ and $d \ge 1$. If $\max_{0 \le j \le n} (\kappa_j) > 1$ then (9.14) ensures that $d \ge 1$. The
 K    d    k    δ_{j,k}                                                                 valid for
 2    1    0    1/h, 0, 0, …, 0, 0, −1/h                                                n ≥ 1
           1    1, 1, …, 1, 1
 2    2    0    2/h, 0, 0, …, 0, 0, −2/h                                                n ≥ 1
           1    1, 2, 2, …, 2, 2, 1
 2    3    0    5/(2h), 1/(2h), 0, 0, …, 0, 0, −1/(2h), −5/(2h)                         n ≥ 3
           1    1, 3, 3, …, 3, 3, 1
 2    4    0    3/h, 3/h, 0, 0, …, 0, 0, −3/h, −3/h                                     n ≥ 3
           1    1, 5, 6, 6, …, 6, 6, 5, 1
 3    2    0    2/h², 4/h², 4/h², …, 4/h², 4/h², 2/h²                                   n ≥ 1
           1    1/h, 0, 0, …, 0, 0, −1/h
           2    1, 1, …, 1, 1
 3    3    0    4/h², 8/h², 8/h², …, 8/h², 8/h², 4/h²                                   n ≥ 1
           1    2/h, 0, 0, …, 0, 0, −2/h
           2    1, 2, 2, …, 2, 2, 1
 3    4    0    13/(2h²), 27/(2h²), 14/h², 14/h², …, 14/h², 14/h², 27/(2h²), 13/(2h²)   n ≥ 3
           1    3/h, 0, 0, …, 0, 0, −3/h
           2    1, 3, 3, …, 3, 3, 1

Table 9.1: The $\delta_{j,k}$ where $w_{j,k} = (-1)^{jK} \delta_{j,k}$ if all points $y_j$ have multiplicity $K$ and are equidistant with $y_{j+1} - y_j = h$.
derivation of the bound for this case is still valid for coalescing points xi under condition
(9.14). This leads to an approximation error of $O(h^{d+1})$ if $d \ge 1$, and $O(h)$ if $d = 0$ and
$$\max_{1 \le j \le n-2} \min\left( \frac{y_{j+1}-y_j}{y_j-y_{j-1}},\; \frac{y_{j+1}-y_j}{y_{j+2}-y_{j+1}} \right)$$
is bounded as $h \to 0$, where $h = \max_{0 \le j \le n-1} (y_{j+1}-y_j)$ [26].
9.3.3 Some explicit weights
Let the points be equidistant with $h = y_{j+1} - y_j$ and let them all have the same constant multiplicity $\kappa_i = K$. In this setting we are, for relatively low $d$ and $K$, able to compute the weights $w_{j,k}$ explicitly. We do so here for the example case $K = 2$, $d = 2$. The computation for other $d$ and $K$ is similar, and the resulting weights are shown in Table 9.1 for $K = 2, 3$ and degree $d$ up to four. For the weights for $K = 1$ see [26].
We assume $n \ge 1$. For $K = 2$ we have $s_j = 2(j+1)$, which together with $d = 2$ yields $N - d = 2n - 1$, and $m_{2j,j} = 2$ and $m_{2j+1,j} = 1$ with $j = 0, \dots, n-1$. This gives
$$\Omega_{2j}(x) = (x-y_j)^2 (x-y_{j+1}), \qquad \Omega_{2j+1}(x) = (x-y_j)(x-y_{j+1})^2$$
and
$$\begin{aligned}
\omega_{0,0} &= v_{0,0,0} - v_{1,0,0},\\
\omega_{j,0} &= v_{2(j-1),j,0} - v_{2(j-1)+1,j,0} + v_{2j,j,0} - v_{2j+1,j,0}, & j &= 1, \dots, n-1,\\
\omega_{n,0} &= v_{2(n-1),n,0} - v_{2(n-1)+1,n,0},\\
\omega_{0,1} &= v_{0,0,1},\\
\omega_{j,1} &= -v_{2(j-1)+1,j,1} + v_{2j,j,1}, & j &= 1, \dots, n-1,\\
\omega_{n,1} &= -v_{2(n-1)+1,n,1}.
\end{aligned}$$
Computing the corresponding values of $v_{i,j,k}$ provides
$$\begin{aligned}
v_{2(j-1),j,0} &= \frac{1}{(x-y_{j-1})^2} \bigg|_{x=y_j} = \frac{1}{h^2}, & j &= 1, \dots, n,\\
v_{2(j-1)+1,j,0} &= \frac{-1}{(x-y_{j-1})^2} \bigg|_{x=y_j} = -\frac{1}{h^2}, & j &= 1, \dots, n,\\
v_{2j,j,0} &= \frac{-1}{(x-y_{j+1})^2} \bigg|_{x=y_j} = -\frac{1}{h^2}, & j &= 0, \dots, n-1,\\
v_{2j+1,j,0} &= \frac{1}{(x-y_{j+1})^2} \bigg|_{x=y_j} = \frac{1}{h^2}, & j &= 0, \dots, n-1,\\
v_{2(j-1)+1,j,1} &= \frac{1}{x-y_{j-1}} \bigg|_{x=y_j} = \frac{1}{h}, & j &= 1, \dots, n,\\
v_{2j,j,1} &= \frac{1}{x-y_{j+1}} \bigg|_{x=y_j} = -\frac{1}{h}, & j &= 0, \dots, n-1,
\end{aligned}$$
resulting in the weights
$$\begin{aligned}
\omega_{0,0} &= -\frac{2}{h^2}, & \omega_{j,0} &= 0, \quad j = 1, \dots, n-1, & \omega_{n,0} &= \frac{2}{h^2},\\
\omega_{0,1} &= -\frac{1}{h}, & \omega_{j,1} &= -\frac{2}{h}, \quad j = 1, \dots, n-1, & \omega_{n,1} &= -\frac{1}{h}.
\end{aligned}$$
As multiplying the weights by a constant factor does not change the interpolant, multiplying
with −h yields the weights in Table 9.1.
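These weights can be verified numerically: plugging them, in the Table 9.1 scaling (i.e. multiplied by $-h$), into the barycentric formula (9.5) must reproduce any quadratic, since the interpolant has precision $d = 2$. A small Python sketch with naming of my own:

```python
def r_k2_d2(x, y, f, df):
    """Barycentric Hermite interpolant (9.5), K = 2, d = 2, equidistant y."""
    n = len(y) - 1
    h = y[1] - y[0]
    w0 = [0.0] * (n + 1)                    # delta_{j,0}: 2/h, 0, ..., 0, -2/h
    w0[0], w0[n] = 2.0 / h, -2.0 / h
    w1 = [1.0] + [2.0] * (n - 1) + [1.0]    # delta_{j,1}: 1, 2, ..., 2, 1
    num = den = 0.0
    for j in range(n + 1):
        t = x - y[j]  # x must not coincide with a node
        num += w0[j] / t * f(y[j]) + w1[j] / t ** 2 * (f(y[j]) + df(y[j]) * t)
        den += w0[j] / t + w1[j] / t ** 2
    return num / den
```

For $f(x) = x^2$ on $y = (0, 1, 2)$ the sketch returns $x^2$ at any off-node $x$, up to rounding.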
9.4 Univariate pointwise radial minimization
In the recent paper [29] the idea of pointwise radial minimization was introduced to compute
a Hermite interpolant on an arbitrary, bounded domain. Unfortunately, the authors were not
able to prove the interpolation property. We will now examine the simplest non-trivial case
of using this approach in the univariate Hermite setting with κi = 2 for i = 0, . . . , n. From
the analysis in [29] we know that for the case where n is odd, the resulting function is
unique, has no poles and cubic precision inside the domain. These results extend readily to
the outside of the domain, but not easily, or not at all, to even n. Therefore we first consider
these properties for general n which will lead us to the interpolation property of univariate
pointwise radial minimization for first order Hermite data.
Let $x \in \mathbb{R} \setminus \{y_0, \dots, y_n\}$ be fixed. In the given setting, the idea of pointwise radial minimization is to use for each point $y_i$ the unique cubic polynomial $q_i : [0, |x-y_i|] \to \mathbb{R}$ which interpolates $f(y_i)$ and $f'(y_i)$ at $|x-y_i|$ and the unknown value $q_i(0) = a$ and derivative $q_i'(0) = b$ at zero. Then for each $i$ the energy
$$E_i(\mathbf{a}) = \int_0^{|x-y_i|} (q_i''(r))^2 \, dr \ge 0, \qquad \mathbf{a} = (a,b)^T,$$
is defined.
is defined. Let k be such that x ∈ (yk , yk+1 ) with k = −1 if x < y0 and k = n for x > yn .
The total energy of a vector $\mathbf{a}$ is then defined by
$$E(\mathbf{a}) = \underline{E}(\mathbf{a}) + \overline{E}(\mathbf{a}), \qquad \underline{E}(\mathbf{a}) = \sum_{i=0}^{k} (-1)^{k-i} E_i, \qquad \overline{E}(\mathbf{a}) = \sum_{i=k+1}^{n} (-1)^{k+1-i} E_i,$$
where an empty sum is considered to be zero. The value of the interpolating function is
determined by finding the a∗ that minimizes the energy and setting r(x) = a∗ .
9.4.1 A barycentric-like form
From Lemma 1 in [29] we know that $E_i(\mathbf{a})$ can be expressed by
$$E_i(\mathbf{a}) = \mathbf{a}^T M_i \mathbf{a} + N_i^T \mathbf{a} + P_i$$
where, with $v_i = -1$ if $i \le k$ and $v_i = 1$ for $i > k$,
$$v_i M_i = \frac{2}{(y_i-x)^3} \begin{pmatrix} 6 & 3(y_i-x)\\ 3(y_i-x) & 2(y_i-x)^2 \end{pmatrix},$$
$$v_i N_i = \frac{4}{(y_i-x)^3} \begin{pmatrix} -6f(y_i) + 3(y_i-x)f'(y_i)\\ -3(y_i-x)f(y_i) + (y_i-x)^2 f'(y_i) \end{pmatrix},$$
$$v_i P_i = \frac{4}{(y_i-x)^3} \left( 3f(y_i)^2 - 3(y_i-x)f(y_i)f'(y_i) + (y_i-x)^2 (f'(y_i))^2 \right).$$
The resulting energy can be expressed similarly by
$$E(\mathbf{a}) = \mathbf{a}^T M \mathbf{a} + N^T \mathbf{a} + P$$
with
$$M = (-1)^{k+1} \sum_{i=0}^{n} (-1)^i v_i M_i \qquad (9.16)$$
and likewise for $N$ and $P$. Minimizing the energy, or solving $M\mathbf{a} = -N/2$, gives the univariate radial minimizing function $r$,
$$r(x) = r_1(x) + r_2(x),$$
$$r_1(x) = \frac{\displaystyle\sum_{i=0}^{n} \frac{(-1)^i}{(y_i-x)^2} \sum_{j=0}^{n} (-1)^j \frac{3(y_j-y_i)+(y_j-x)}{(y_i-x)(y_j-x)^2} \, \bigl( f(y_i) + (x-y_i) f'(y_i) \bigr)}{\displaystyle\sum_{i=0}^{n} \frac{(-1)^i}{(y_i-x)^2} \sum_{j=0}^{n} (-1)^j \frac{3(y_j-y_i)+(y_j-x)}{(y_i-x)(y_j-x)^2}},$$
$$r_2(x) = \frac{\displaystyle\sum_{i=0}^{n} \frac{2(-1)^i}{(y_i-x)^2} \sum_{j=0}^{n} (-1)^j \frac{y_j-y_i}{(y_j-x)^2} \, f'(y_i)}{\displaystyle\sum_{i=0}^{n} \frac{(-1)^i}{(y_i-x)^2} \sum_{j=0}^{n} (-1)^j \frac{3(y_j-y_i)+(y_j-x)}{(y_i-x)(y_j-x)^2}}. \qquad (9.17)$$
This form can be considered a barycentric-like form, but due to the double sum the evaluation costs are of order $O(n^2)$.
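The form (9.17) can be evaluated directly with the $O(n^2)$ double sum; the following Python sketch (naming mine) does so and, as a sanity check, reproduces linear data.

```python
def uprm(x, y, f, df):
    """Univariate pointwise radial minimization r = r1 + r2 via (9.17)."""
    n = len(y) - 1
    den = num1 = num2 = 0.0
    for i in range(n + 1):
        # inner sums over j of (9.17); x must not coincide with a node
        a_i = sum((-1) ** j * (3.0 * (y[j] - y[i]) + (y[j] - x))
                  / ((y[i] - x) * (y[j] - x) ** 2) for j in range(n + 1))
        b_i = sum((-1) ** j * (y[j] - y[i]) / (y[j] - x) ** 2
                  for j in range(n + 1))
        c_i = (-1) ** i / (y[i] - x) ** 2
        num1 += c_i * a_i * (f(y[i]) + (x - y[i]) * df(y[i]))
        num2 += 2.0 * c_i * b_i * df(y[i])
        den += c_i * a_i
    return (num1 + num2) / den
```

For a linear $f$ the sketch returns $f(x)$ up to rounding, in line with the linear precision established later in this section.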
9.4.2 Unique and free of poles
For the function $r$ to be uniquely and well defined for all $x \in \mathbb{R} \setminus \{y_0, \dots, y_n\}$, the matrix $M$ in (9.16) must be non-singular. Proposition 9.4 shows that $M$ is positive definite, which in addition means that the denominator of $r$ in (9.17) is strictly positive for all $x \in \mathbb{R} \setminus \{y_0, \dots, y_n\}$. Together with the interpolation property shown later, this means that $r$ has no poles in $\mathbb{R}$.
Proposition 9.4 The matrix M is positive definite for all x ∈ R\{y0 , . . . , yn } and for all
n ≥ 0.
Proof. We will first show that $\mathbf{a}^T M \mathbf{a} \ge 0$ for all $\mathbf{a} \in \mathbb{R}^2$. For this, let $f = 0$. Then $N = 0 = P$ and
$$\mathbf{a}^T M \mathbf{a} = E(\mathbf{a}).$$
From Lemma 2 in [29] we know that the $E_i$ are non-increasing with increasing distance of $y_i$ from $x$; in other words, $E_i \le E_{i+1}$ for all $i < k$ and $E_{i+1} \le E_i$ for all $i > k$. Hence the two alternating partial sums $\underline{E}$ (over $i \le k$) and $\overline{E}$ (over $i > k$) satisfy $\underline{E}(\mathbf{a}) \ge 0$ and $\overline{E}(\mathbf{a}) \ge 0$, and thus
$$\mathbf{a}^T M \mathbf{a} = E(\mathbf{a}) = \underline{E}(\mathbf{a}) + \overline{E}(\mathbf{a}) \ge 0.$$
Now suppose that $\mathbf{a}^T M \mathbf{a} = 0$ and again let $f = 0$. If $k \ge 0$ and even, then $\underline{E}(\mathbf{a}) \ge E_0(\mathbf{a}) \ge 0$, and if $k \ge 0$ and odd then $\underline{E}(\mathbf{a}) \ge E_1(\mathbf{a}) - E_0(\mathbf{a}) \ge 0$. Similarly, if $n-k > 0$ and odd then $\overline{E}(\mathbf{a}) \ge E_n(\mathbf{a}) \ge 0$, and for even $n-k > 0$ we have $\overline{E}(\mathbf{a}) \ge E_{n-1}(\mathbf{a}) - E_n(\mathbf{a}) \ge 0$. As $n \ge 0$, at least one of the conditions $k \ge 0$ and $n-k > 0$ is true. From the proof of Lemma 2 in [29] it also follows that if $E_i(\mathbf{a}) = E_{i+1}(\mathbf{a})$ then $E_i(\mathbf{a}) = E_{i+1}(\mathbf{a}) = 0$ for all $i \ne k$. This means that either
$$E_0(\mathbf{a}) = \int_0^{|x-y_0|} (q_0''(r))^2 \, dr = 0 \qquad\text{or}\qquad E_n(\mathbf{a}) = \int_0^{|x-y_n|} (q_n''(r))^2 \, dr = 0$$
or both. Hence $q_0$ and/or $q_n$ are linear, and due to $f(y_0) = f'(y_0) = f(y_n) = f'(y_n) = 0$ we get $q_0 = 0$ and/or $q_n = 0$. But then $\mathbf{a} = 0$. □
9.4.3 Interpolation property
To show that the function $r$ really does interpolate the given data, we reformulate $r$ by multiplying the numerator and denominator with the polynomial $\prod_{l=0}^{n} (y_l-x)^4$, yielding
$$r_1(x) = \frac{\sum_{i=0}^{n} U_i(x) \bigl( f(y_i) + (x-y_i) f'(y_i) \bigr)}{\sum_{i=0}^{n} U_i(x)}, \qquad r_2(x) = \frac{\sum_{i=0}^{n} V_i(x) f'(y_i)}{\sum_{i=0}^{n} U_i(x)}, \qquad (9.18)$$
where
$$U_i(x) = \sum_{j=0}^{n} \mu_{i,j}(x), \qquad V_i(x) = \sum_{j=0}^{n} \nu_{i,j}(x),$$
and
$$\mu_{i,j}(x) = \begin{cases} (-1)^{i+j} \bigl( 3(y_j-y_i) + (y_j-x) \bigr) (y_i-x)(y_j-x)^2 \displaystyle\prod_{\substack{l=0\\ l \ne i,j}}^{n} (y_l-x)^4 & \text{if } i \ne j,\\[10pt] \displaystyle\prod_{\substack{l=0\\ l \ne i}}^{n} (y_l-x)^4 & \text{if } i = j, \end{cases}$$
$$\nu_{i,j}(x) = (-1)^{i+j} \, 2(y_j-y_i)(y_i-x)^2 (y_j-x)^2 \prod_{\substack{l=0\\ l \ne i,j}}^{n} (y_l-x)^4.$$
Let now $x = y_\alpha$ for some $\alpha$ with $0 \le \alpha \le n$. Then for all $i, j = 0, \dots, n$
$$\mu_{i,j}(y_\alpha) = \begin{cases} \displaystyle\prod_{\substack{l=0\\ l \ne \alpha}}^{n} (y_l-y_\alpha)^4 \ne 0 & \text{if } i = j = \alpha,\\ 0 & \text{else}, \end{cases} \qquad \nu_{i,j}(y_\alpha) = 0, \qquad \nu_{i,j}'(y_\alpha) = 0,$$
as well as $\mu_{i,j}'(y_\alpha) = 0$ for $i \ne \alpha$, leading to
$$U_\alpha(y_\alpha) \ne 0, \qquad U_{i \ne \alpha}(y_\alpha) = 0, \qquad U_{i \ne \alpha}'(y_\alpha) = 0, \qquad V_i(y_\alpha) = 0, \qquad V_i'(y_\alpha) = 0.$$
Simply using this in (9.18) and in the corresponding derivatives yields
$$r_1(y_\alpha) = f(y_\alpha), \qquad r_1'(y_\alpha) = f'(y_\alpha), \qquad r_2(y_\alpha) = 0, \qquad r_2'(y_\alpha) = 0.$$
This means that r1 already interpolates the given data and as r = r1 + r2 the function r
does too.
Figure 9.1: Univariate pointwise radial minimization for $f(x) = x^2$ with, on the left, $y_0 = -2$, $y_1 = -1$, $y_2 = 1$, $y_3 = 2$, and, on the right, $y_0 = -2$, $y_1 = -1$, $y_2 = 0$, $y_3 = 1$, $y_4 = 2$. The black line represents $f$, the red $r$, the green $r_1$ and the blue line $r_2$. Clearly the cubic precision is lost for even $n$.
9.4.4 Precision
The function $r_1$ defined in (9.17) has linear precision. This can be easily observed by writing $r_1$ as in (9.18). Letting $f(x) = \alpha + \beta x$ then yields
$$r_1(x) = \frac{\sum_{i=0}^{n} U_i(x) \bigl( \alpha + \beta y_i + (x-y_i)\beta \bigr)}{\sum_{i=0}^{n} U_i(x)} = \frac{(\alpha + \beta x) \sum_{i=0}^{n} U_i(x)}{\sum_{i=0}^{n} U_i(x)} = \alpha + \beta x.$$
Observe also that $\nu_{i,i}(x) = 0$ and that $\nu_{i,j}(x) + \nu_{j,i}(x) = 0$ for all $x$ and $i, j = 0, \dots, n$, and thus
$$r_2(x) = \frac{\sum_{i=0}^{n} \sum_{j=0}^{n} \nu_{i,j}(x) \, \beta}{\sum_{i=0}^{n} \sum_{j=0}^{n} \mu_{i,j}(x)} = \beta \, \frac{\sum_{i=0}^{n} \Bigl( \nu_{i,i}(x) + \sum_{j=i+1}^{n} \bigl( \nu_{i,j}(x) + \nu_{j,i}(x) \bigr) \Bigr)}{\sum_{i=0}^{n} \sum_{j=0}^{n} \mu_{i,j}(x)} = 0.$$
Hence the function r also has linear precision. In [29] a cubic precision for r is shown in
the case of n being odd. However, as can be seen in Figure 9.1, this cubic precision does
not extend to even n.
9.5 Numerical examples
We finish the paper with several numerical examples to illustrate the behaviour of the discussed interpolants for equidistant data points with $h = y_{i+1} - y_i$. All experiments are done in MATLAB using double precision floating point arithmetic. When evaluating an interpolant at an $x$ which in floating point representation is identical to some $y_i$, the value $f(y_i)$ is used. For all other $x$ we use the corresponding barycentric or barycentric-like formula. This method was introduced by Berrut and Trefethen in [9] and seems to be stable in practice.
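This node-handling strategy amounts to a guard before calling the barycentric formula; a minimal Python sketch (names mine):

```python
def safe_eval(x, nodes, values, bary_eval):
    """Return the data value if x coincides with a node in floating point,
    otherwise fall back to the barycentric evaluation bary_eval(x)."""
    for yi, fi in zip(nodes, values):
        if x == yi:
            return fi
    return bary_eval(x)
```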
Figures 9.2 to 9.6 show the interpolants for different f and n = 5, 10, 20, 40. The visible
convergence in the figures is confirmed by the quantitative experiments shown in the different tables. For each function in the Tables 9.2 to 9.4 the first column shows the maximal
absolute error and the second the corresponding estimated approximation order. Each error
is computed numerically by evaluating the interpolant at 10,001 equidistant points between
y0 and yn , comparing these values to the corresponding values of the interpolated function
and finally taking the maximal absolute value of all differences. Similarly in Table 9.5 for
each method the first column contains the maximal absolute error and the second the approximation order. For all functions the points yi , i = 0, . . . , n lie in the interval [−5, 5],
except for the sine function, where yi ∈ [0, 8].
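The order columns in the tables are obtained as follows: if the error behaves like $Ch^p$ and $n$ (and hence $1/h$) is doubled between two rows, then $p \approx \log_2(e_{\text{coarse}}/e_{\text{fine}})$. A one-line helper with naming of my own:

```python
import math

def estimated_order(err_coarse, err_fine):
    """Estimated approximation order p from two errors with h halved:
    e(h) ~ C h^p  =>  p ~ log2(e(h) / e(h/2))."""
    return math.log2(err_coarse / err_fine)

print(estimated_order(4.0e-2, 1.0e-2))  # ~ 2.0
```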
Table 9.2 examines the interpolant using Berrut's weights (9.8). In columns 2 and 3 the interpolant is applied to Runge's function $1/(1+x^2)$, which is known to diverge for Lagrange polynomial interpolation on equidistant data points. Despite the lack of a theoretical analysis of the approximation order, we would expect an order of $O(h^K)$ due to the precision of $K-1$. The experimental results indicate that this is true for odd $K$, but that for even $K$ we only get an approximation order of $O(h^{K-1})$. In the last column the interpolant is applied to $|x|^{5/2}$. The results suggest that the approximation order also depends on the smoothness of the function to be interpolated.
For the Hermite Floater-Hormann interpolant the precision of degree $d$ and the approximation order are known. The experiments listed in Table 9.3 confirm these results. In the last column the interpolant is applied to $|x|^{3/2}$, which yields a lower approximation order than $O(h^{d+1})$. This suggests that the approximation order really does depend on the smoothness condition mentioned in Theorem 2 in [26].
Although we have no theoretical analysis of the approximation order for the univariate pointwise radial minimization method, we would expect a quadratic one due to the linear precision of the method. This quadratic order can in fact be observed in the experiments shown in Table 9.4. Even if only odd $n$ are used, for which we have cubic precision, the approximation order still remains quadratic. The higher errors of the odd $n$ in column 3 compared to column 2 probably stem from missing the mid-point with the data points $y_i$. Similarly to Berrut's and the Hermite Floater-Hormann method, using a less smooth function seems to lead to a decrease in the approximation order.
In Table 9.5 we compare the different methods applied to Runge's function with multiplicities $\kappa_i = 2$, as all three methods are able to interpolate this kind of data. In addition we also list the errors and approximation orders of the piecewise cubic interpolant to the given data. When using a degree of 3 for the Hermite Floater-Hormann interpolant, its approximation order matches the one for the piecewise cubic polynomial. However, its maximal absolute error seems to be several orders of magnitude lower.
Figure 9.2: Using Berrut's weights to interpolate $x^2$ with $n = 5, 10, 20, 40$ and $K = 1$ for the green, $K = 2$ for the red and $K = 3$ for the blue line. The black line represents $x^2$.
    n     x², K=2         Runge, K=3      Runge, K=4      sin(x), K=5     |x|^{5/2}, K=3
   10   1.2e-00           4.1e-02         1.2e-02         2.8e-05         9.4e-02
   20   5.9e-01  1.1      1.8e-03  4.5    1.3e-05  9.9    9.4e-07  4.9    1.7e-02  2.5
   40   2.9e-01  1.0      1.6e-06 10.2    1.9e-06  2.8    3.0e-08  5.0    2.9e-03  2.5
   80   1.4e-01  1.0      2.5e-07  2.6    2.4e-07  2.9    9.5e-10  5.0    5.2e-04  2.5
  160   7.0e-02  1.0      3.2e-08  3.0    3.1e-08  3.0    3.0e-11  5.0    9.2e-05  2.5
  320   3.5e-02  1.0      4.1e-09  3.0    3.9e-09  3.0    9.3e-13  5.0    1.6e-05  2.5
  640   1.7e-02  1.0      5.1e-10  3.0    4.9e-10  3.0    3.1e-14  4.9    2.8e-06  2.5
 1280   8.7e-03  1.0      6.4e-11  3.0    6.1e-11  3.0    4.2e-15  2.9    4.9e-07  2.5

Table 9.2: Errors and estimated approximation orders for Berrut's weights.
Figure 9.3: Interpolating Runge's function $1/(1+x^2)$ with the Hermite Floater-Hormann interpolant for $n = 5, 10, 20, 40$. The blue line uses $\kappa_i = 1$, $d = 2$, the green $\kappa_i = 2$, $d = 1$, the red $\kappa_i = 3$, $d = 3$ and the black line uses $\kappa_{4i} = 3$, $\kappa_{4i+1} = 1$, $\kappa_{4i+2} = 2$, $\kappa_{4i+3} = 1$, $d = 3$.
    n     Runge           Runge            Runge           sin(x)          |x|^{3/2}
          κ_i=2, d=2      κ_i∈[1,3], d=3   κ_i=2, d=4      κ_i=3, d=2      κ_i=2, d=3
   10   6.3e-03           3.0e-03          6.1e-03         2.0e-03         3.9e-02
   20   3.8e-05  7.4      1.6e-03  1.0     1.3e-05  8.8    2.6e-04  3.0    1.4e-02  1.5
   40   3.3e-06  3.5      4.6e-06  8.4     4.6e-09 11.5    3.2e-05  3.0    4.9e-03  1.5
   80   4.1e-07  3.0      4.7e-07  3.3     1.4e-10  5.1    4.0e-06  3.0    1.7e-03  1.5
  160   5.2e-08  3.0      2.7e-10 10.7     4.2e-12  5.0    5.0e-07  3.0    6.1e-04  1.5
  320   6.4e-09  3.0      3.9e-11  2.8     1.3e-13  5.0    6.3e-08  3.0    2.2e-04  1.5
  640   8.1e-10  3.0      2.5e-12  4.0     8.0e-15  4.0    7.9e-09  3.0    7.7e-05  1.5
 1280   1.0e-10  3.0      1.5e-13  4.0     5.6e-15  0.5    9.9e-10  3.0    2.6e-05  1.5

Table 9.3: Errors and estimated approximation orders for Hermite Floater-Hormann's weights. In column 2 the multiplicities are $\kappa_{4i} = 3$, $\kappa_{4i+1} = 1$, $\kappa_{4i+2} = 2$ and $\kappa_{4i+3} = 1$.
Figure 9.4: Comparing the rational interpolants for the sine function with $\kappa_i = 2$ for all $i$ and $n = 5, 10, 20, 40$. The green line uses Berrut's weights, the black Hermite Floater-Hormann's weights with $d = 3$ and the red line is the univariate pointwise radial minimizing function.
    n     x²              Runge           Runge (n = n+1)   sin(x)          |x|^{3/2}
   10   2.7e-02           2.5e-02         1.8e-02           6.6e-03         5.2e-02
   20   6.7e-03  2.0      5.0e-04  5.6    1.3e-03  4.1      1.8e-03  1.9    1.8e-02  1.5
   40   1.7e-03  2.0      1.6e-04  1.7    3.4e-04  2.0      4.5e-04  2.0    6.1e-03  1.6
   80   4.2e-04  2.0      4.1e-05  1.9    8.9e-05  2.0      1.1e-04  2.0    2.1e-03  1.5
  160   1.1e-04  2.0      1.0e-05  2.0    2.3e-05  2.0      2.8e-05  2.0    7.4e-04  1.5
  320   2.7e-05  2.0      2.6e-06  2.0    5.7e-06  2.0      7.1e-06  2.0    2.6e-04  1.5
  640   6.7e-06  2.0      6.6e-07  2.0    1.4e-06  2.0      1.8e-06  2.0    9.1e-05  1.5
 1280   1.6e-06  2.1      1.6e-07  2.0    3.6e-07  2.0      4.5e-07  2.0    3.2e-05  1.5

Table 9.4: Errors and estimated approximation orders for univariate pointwise radial minimization.
Figure 9.5: Comparing the rational interpolants for Runge's function $1/(1+x^2)$ with $\kappa_i = 2$ for all $i$ and $n = 5, 10, 20, 40$. The green line uses Berrut's weights, the black Hermite Floater-Hormann's weights with $d = 3$ and the red line is the univariate pointwise radial minimizing function.
    n     Berrut          HFH             HFH             uprm            piecewise cubic
          K=2             κ_i=2, d=1      κ_i=2, d=3
   10   3.6e-02           3.3e-03         6.1e-03         2.5e-02         1.3e-02
   20   1.8e-02  1.0      8.1e-04  2.0    1.1e-05  9.1    5.0e-04  5.6    1.3e-03  3.4
   40   9.5e-03  0.9      2.2e-04  1.9    1.4e-07  6.3    1.6e-04  1.7    1.9e-04  2.7
   80   4.8e-03  1.0      5.6e-05  2.0    8.8e-09  4.0    4.1e-05  1.9    1.4e-05  3.7
  160   2.4e-03  1.0      1.4e-05  2.0    5.4e-10  4.0    1.0e-05  2.0    9.4e-07  3.9
  320   1.2e-03  1.0      3.5e-06  2.0    3.4e-11  4.0    2.6e-06  2.0    5.9e-08  4.0
  640   6.1e-04  1.0      8.8e-07  2.0    2.1e-12  4.0    6.6e-07  2.0    3.7e-09  4.0
 1280   3.0e-04  1.0      2.2e-07  2.0    1.3e-13  4.0    1.6e-07  2.0    2.3e-10  4.0

Table 9.5: Comparison of the errors and estimated approximation orders of the different methods applied to Runge's function.
Figure 9.6: Comparing the rational interpolants for the function $|x|^{3/2}$ with $\kappa_i = 2$ for all $i$ and $n = 5, 10, 20, 40$. The green line uses Berrut's weights, the black Hermite Floater-Hormann's weights with $d = 3$ and the red line is the univariate pointwise radial minimizing function.
Bibliography
[1] Robert A. Adams. Calculus, a complete course. Pearson Education Canada, Toronto,
Ontario, 6th edition, 2006.
[2] Peter Alfeld. Scattered Data Interpolation in Three or More Variables. In Tom Lyche
and Larry L. Schumaker, editors, Mathematical methods in computer aided geometric
design, pages 1–33, San Diego, CA, USA, 1989. Academic Press Professional, Inc.
[3] Giampietro Allasia and Cesare Bracco. Hermite and Birkhoff Interpolation to Multivariate Scattered Data. Quaderno N. 25, URL = http://hdl.handle.net/2318/435, 2008.
[4] Michael Barton and Bert Jüttler. Computing roots of polynomials by quadratic clipping. Computer Aided Geometric Design, 24(3):125–141, April 2007.
[5] Barycentric Coordinates and Transfinite Interpolation. URL = http://www2.in.tu-clausthal.de/~hormann/barycentric.
[6] Alexander Belyaev. On transfinite barycentric coordinates. In Konrad Polthier and
Alla Sheffer, editors, SGP ’06: Proceedings of the fourth Eurographics symposium on
Geometry processing, pages 89–99. Eurographics Association, 2006.
[7] Jean-Paul Berrut. Rational functions for guaranteed and experimentally well-conditioned global interpolation. Computers & Mathematics with Applications, 15(1):1–16, 1988.
[8] Jean-Paul Berrut, Richard Baltensperger, and Hans D. Mittelmann. Recent developments in barycentric rational interpolation. In Marcel G. de Bruin, Detlef H. Mache, and József Szabados, editors, Trends and Applications in Constructive Approximation, volume 151 of International Series of Numerical Mathematics, pages 27–51, Basel, Switzerland, 2005. Birkhäuser Verlag.
[9] Jean-Paul Berrut and Lloyd N. Trefethen. Barycentric Lagrange interpolation. SIAM
Review, 46(3):501–517, 2004.
[10] Wolfgang Böhm. Inserting new knots into B-spline curves. Computer-Aided Design,
12(4):199–201, July 1980.
[11] B. D. Bojanov, H. A. Hakopian, and A. A. Sahakian. Spline Functions and Multivariate Interpolations, volume 248 of Mathematics and Its Applications. Kluwer
Academic Publishers, Dordrecht, The Netherlands, 1993.
[12] I.N. Bronstein, K.A. Semendjajew, G. Musiol, and H. Mühlig. Taschenbuch der Mathematik. Verlag Harri Deutsch, Thun und Frankfurt am Main, 5th edition, 2001.
[13] Solveig Bruvoll and Michael S. Floater. Transfinite mean value interpolation in general
dimension. preprint.
[14] Richard L. Burden and J. Douglas Faires. Numerical analysis. Thomson Brooks/Cole,
8th edition, 2005.
[15] E. W. Cheney. Introduction to Approximation Theory. AMS Chelsea Publishing,
Providence, Rhode Island, 2nd edition, 2000.
[16] S. A. Coons. Surfaces for Computer-Aided Design of Space Forms. Project MAC,
Technical Report MAC-TR-41, Massachusetts Institute of Technology, 1967.
[17] Carl de Boor. A practical guide to splines, volume 27 of Applied mathematical sciences. Springer-Verlag, New York, 1978.
[18] Paul de Faget de Casteljau. Formes à Pôles. Hermés, Paris, 1985.
[19] Manfredo P. do Carmo. Differential Geometry of Curves and Surfaces. Prentice-Hall, Inc., Englewood Cliffs, New Jersey, 1976.
[20] Tor Dokken. Aspects of Intersection Algorithms and Approximations. PhD thesis, University of Oslo, 1997, revised 2000.
[21] J. Duchon. Interpolation des fonctions de deux variables suivant le principe de la
flexion des plaques minces. RAIRO Analyse Numérique, 10:5–12, 1976.
[22] Christopher Dyken and Michael S. Floater. Transfinite mean value interpolation. Computer Aided Geometric Design, 26(1):117–134, 2009.
[23] Gerald Farin. Curves and surfaces for CAGD: a practical guide. Morgan Kaufmann
Publishers Inc., San Francisco, CA, USA, 5th edition, 2002.
[24] Gerald Farin. History of Curves and Surfaces in CAGD. In G. Farin, J. Hoschek, and
M.-S. Kim, editors, Handbook of CAGD, pages 1–22. Elsevier, 2002.
[25] Michael S. Floater. Mean value coordinates. Computer Aided Geometric Design,
20(1):19–27, 2003.
[26] Michael S. Floater and Kai Hormann. Barycentric rational interpolation with no poles
and high rates of approximation. Numerische Mathematik, 107(2):315–331, August
2007.
[27] Michael S. Floater, Kai Hormann, and Géza Kós. A general construction of barycentric
coordinates over convex polygons. Advances in Computational Mathematics, 24(1–
4):311–331, January 2006.
[28] Michael S. Floater, Géza Kós, and Martin Reimers. Mean value coordinates in 3D. Computer Aided Geometric Design, 22(7):623–631, October 2005.
[29] Michael S. Floater and Christian Schulz. Pointwise radial minimization: Hermite interpolation on arbitrary domains. Computer Graphics Forum, 27(5):1505–1512, 2008.
Special issue for Symposium on Geometric Processing.
[30] Thomas A. Foley. Full Hermite Interpolation to Multivariate Scattered Data. In C. K.
Chui, L. L. Schumaker, and J. D. Ward, editors, Approximation Theory IV, pages 465–
470, New York, 1983. Academic Press, Inc.
[31] Richard Franke. Scattered Data Interpolation: Tests of Some Methods. Mathematics
of Computation, 38(157):181–200, 1982.
[32] Mariano Gasca and Thomas Sauer. On the history of multivariate polynomial interpolation. Journal of Computational and Applied Mathematics, 122(1-2):23–35, 2000.
[33] Mariano Gasca and Thomas Sauer. Polynomial interpolation in several variables. Advances in Computational Mathematics, 12(4):377–410, March 2000.
[34] Walter Gautschi. Numerical Analysis: An Introduction. Birkhäuser Boston, Cambridge, MA 02139, USA, 1997.
[35] William J. Gordon. Blending-Function Methods of Bivariate and Multivariate Interpolation and Approximation. SIAM Journal on Numerical Analysis, 8(1):158–177,
1971.
[36] William J. Gordon and James A. Wixom. Pseudo-harmonic interpolation on convex
domains. SIAM Journal on Numerical Analysis, 11(5):909–933, October 1974.
[37] William J. Gordon and James A. Wixom. Shepard's Method of "Metric Interpolation" to Bivariate and Multivariate Interpolation. Mathematics of Computation, 32(141):253–264, 1978.
[38] Ronald L. Graham, Donald E. Knuth, and Oren Patashnik. Concrete Mathematics: A
Foundation for Computer Science. Addison-Wesley Publishing Company, Inc., 2nd
edition, 1994.
[39] Nicholas J. Higham. The numerical stability of barycentric Lagrange interpolation.
IMA Journal of Numerical Analysis, 24(4):547–556, 2004.
[40] Kai Hormann and Michael S. Floater. Mean value coordinates for arbitrary planar
polygons. ACM Transactions on Graphics, 25(4):1424–1441, 2006.
[41] Kai Hormann and N. Sukumar. Maximum Entropy Coordinates for Arbitrary Polytopes. Computer Graphics Forum, 27(5):1513–1520, July 2008. Special issue for
Symposium on Geometric Processing.
[42] Eugene Isaacson and Herbert Bishop Keller. Analysis of Numerical Methods. Dover
Publications, Inc., Mineola, New York 11501, 1994.
[43] Pushkar Joshi, Mark Meyer, Tony DeRose, Brian Green, and Tom Sanocki. Harmonic
coordinates for character articulation. ACM Transactions on Graphics, 26(3), July
2007. Proceedings of the 2007 SIGGRAPH conference.
[44] Tao Ju, Scott Schaefer, and Joe Warren. Mean value coordinates for closed triangular
meshes. ACM Transactions on Graphics, 24(3):561–566, July 2005.
[45] Tao Ju, Scott Schaefer, Joe Warren, and Mathieu Desbrun. A geometric construction of coordinates for convex polyhedra using polar duals. In Mathieu Desbrun and Helmut Pottmann, editors, SGP ’05: Proceedings of the third Eurographics symposium on Geometry processing, pages 181–186. Eurographics Association, 2005.
[46] Donald E. Knuth. Big Omicron and big Omega and big Theta. SIGACT News, 8(2):18–
24, 1976.
[47] Jeffrey M. Lane and Richard F. Riesenfeld. A Theoretical Development for the Computer Generation and Display of Piecewise Polynomial Surfaces. IEEE Transactions
on Pattern Analysis and Machine Intelligence, 2(1):35–46, 1980.
[48] Torsten Langer, Alexander Belyaev, and Hans-Peter Seidel. Spherical barycentric coordinates. In Konrad Polthier and Alla Sheffer, editors, SGP ’06: Proceedings of the
fourth Eurographics symposium on Geometry processing, pages 81–88. Eurographics
Association, 2006.
[49] Torsten Langer and Hans-Peter Seidel. Higher order barycentric coordinates. Computer Graphics Forum, 27(2):459–466, 2008. Special issue for EUROGRAPHICS.
[50] Yaron Lipman, Johannes Kopf, Daniel Cohen-Or, and David Levin. GPU-assisted positive mean value coordinates for mesh deformation. To appear in SGP ’07: Proceedings of the fifth Eurographics symposium on Geometry processing, 2007.
[51] R. A. Lorentz. Multivariate Hermite interpolation by algebraic polynomials: A survey.
Journal of Computational and Applied Mathematics, 122(1-2):167–201, 2000.
[52] Tom Lyche and Knut Mørken. Spline Methods, draft. Department of Informatics, University of Oslo, URL = http://heim.ifi.uio.no/knutm/komp04.pdf, 2004.
[53] Mark Meyer, Haeyoung Lee, Alan Barr, and Mathieu Desbrun. Generalized barycentric coordinates on irregular polygons. Journal of Graphics Tools, 7(1):13–22, 2002.
[54] August Ferdinand Möbius. Der barycentrische Calcul. Johann Ambrosius Barth, Leipzig, 1827.
[55] Knut Mørken and Martin Reimers. An unconditionally convergent method for computing zeros of splines and polynomials. Mathematics of Computation, 76(258):845–865,
2007.
[56] Knut Mørken and Martin Reimers. Second order multiple root finding. In progress,
2008.
[57] Bernard Mourrain and Jean-Pascal Pavone. Subdivision methods for solving polynomial equations. Research report, INRIA Sophia Antipolis, 2005.
[58] F. A. Potra. On Q-Order and R-Order of Convergence. Journal of Optimization Theory
and Applications, 63(3):425–431, December 1989.
[59] Lyle Ramshaw. Blossoming: A connect-the-dots approach to splines. Research Report 19, Compaq Systems Research Center, Palo Alto, CA, June 1987.
[60] Ulrich Reif. Best bounds on the approximation of polynomials and splines by their
control structure. Computer Aided Geometric Design, 17(6):579–589, 2000.
[61] Robert J. Renka. Multivariate interpolation of large sets of scattered data. ACM Transactions on Mathematical Software, 14(2):139–148, 1988.
[62] John W. Rutter. Geometry of curves. Chapman & Hall mathematics series, Boca
Raton, Florida, 2000.
[63] Malcolm Sabin. Transfinite Surface Interpolation. In Proceedings of the 6th IMA
Conference on the Mathematics of Surfaces, pages 517–534, New York, NY, USA,
1996. Clarendon Press.
[64] Thomas Sauer and Yuan Xu. On multivariate Hermite interpolation. Advances in
Computational Mathematics, 4:207–259, 1995.
[65] Thomas Sauer and Yuan Xu. On multivariate Lagrange interpolation. Mathematics of
Computation, 64(211):1147–1170, 1995.
[66] Scott Schaefer, Tao Ju, and Joe Warren. A unified, integral construction for coordinates
over closed curves. Computer Aided Geometric Design, 24(8-9):481–493, 2007.
[67] Claus Schneider and Wilhelm Werner. Some new aspects of rational interpolation.
Mathematics of Computation, 47(175):285–299, 1986.
[68] Claus Schneider and Wilhelm Werner. Hermite interpolation: the barycentric approach. Computing, 46(1):35–51, 1991.
[69] Christian Schulz. Bézier clipping is quadratically convergent. Computer Aided Geometric Design, 26(1):61–74, January 2009.
[70] Larry L. Schumaker. Fitting surfaces to scattered data. In G. G. Lorentz, C. K. Chui,
and Larry L. Schumaker, editors, Approximation Theory II, pages 203–268, New York,
1976. Academic Press, Inc.
[71] Larry L. Schumaker. Spline Functions: Basic Theory. John Wiley & Sons, Inc., New
York, 1981.
[72] T. W. Sederberg and T. Nishita. Curve intersection using Bézier clipping. Computer-Aided Design, 22(9):538–549, 1990.
[73] Donald Shepard. A two-dimensional interpolation function for irregularly-spaced data.
In Proceedings of the 1968 23rd ACM national conference, pages 517–524, New York,
NY, USA, 1968. ACM.
[74] Robin Sibson and G. Stone. Computation of Thin-Plate Splines. SIAM Journal on
Scientific and Statistical Computing, 12(6):1304–1313, 1991.
[75] SISL: The SINTEF Spline Library. URL = http://www.sintef.no/content/page1 12043.aspx.
[76] James S. Vandergraft. Introduction to numerical computations. Academic press, Inc.,
New York, 1978.
[77] Eugene L. Wachspress. A rational finite element basis, volume 114 of Mathematics in
Science & Engineering. Academic Press, New York, 1975.
[78] Joe Warren, Scott Schaefer, Anil H. Hirani, and Mathieu Desbrun. Barycentric coordinates for convex sets. Advances in Computational Mathematics, 27(3):319–338,
October 2007.
[79] Joe D. Warren. Barycentric coordinates for convex polytopes. Advances in Computational Mathematics, 6(1):97–108, 1996.
[80] Wilhelm Werner. Polynomial Interpolation: Lagrange versus Newton. Mathematics
of Computation, 43(167):205–217, 1984.
[81] Wikipedia. URL = http://www.wikipedia.org.
[82] Wolfram MathWorld. URL = http://mathworld.wolfram.com.
[83] Alastair Wood. Introduction to Numerical Analysis. International Mathematics Series.
Addison Wesley Longman Ltd., Singapore, 1999.
[84] Chee K. Yap. Complete Subdivision Algorithms, I: Intersection of Bezier Curves.
In SCG ’06: Proceedings of the twenty-second annual symposium on Computational
geometry, pages 217–226, New York, NY, USA, 2006. ACM.