Stable reconstructions in Hilbert spaces and the resolution of the Gibbs phenomenon

Ben Adcock
Department of Mathematics
Simon Fraser University
Burnaby, BC V5A 1S6
Canada

Anders C. Hansen
DAMTP, Centre for Mathematical Sciences
University of Cambridge
Wilberforce Rd, Cambridge CB3 0WA
United Kingdom

July 7, 2011
Abstract
We introduce a simple and efficient method to reconstruct an element of a Hilbert space in terms of
an arbitrary finite collection of linearly independent reconstruction vectors, given a finite number of its
samples with respect to any Riesz basis. As we establish, provided the dimension of the reconstruction
space is chosen suitably in relation to the number of samples, this procedure can be implemented in
a completely numerically stable manner. Moreover, the accuracy of the resulting approximation is
determined solely by the choice of reconstruction basis, meaning that reconstruction vectors can be
readily tailored to the particular problem at hand.
An important example of this approach is the accurate recovery of a piecewise analytic function
from its first few Fourier coefficients. Whilst the standard Fourier projection suffers from the Gibbs
phenomenon, by reconstructing in a piecewise polynomial basis we obtain an approximation with root-exponential accuracy in terms of the number of Fourier samples and exponential accuracy in terms of
the degree of the reconstruction. Numerical examples illustrate the advantage of this approach over
other existing methods.
1 Introduction
Suppose that H is a separable Hilbert space with inner product $\langle\cdot,\cdot\rangle$ and corresponding norm $\|\cdot\|$. In this paper, we consider the following problem: given the first m samples $\{\langle f,\psi_j\rangle\}_{j=1}^{m}$ of an element $f \in H$ with respect to some Riesz basis $\{\psi_j\}_{j=1}^{\infty}$ of H (the sampling basis), reconstruct f to high accuracy. Not only does such a problem lie at the heart of modern sampling theory [28, 29, 63], it also occurs in a myriad of applications, including image processing (in particular, Magnetic Resonance Imaging) and the numerical solution of hyperbolic partial differential equations (PDEs).
In practice, straightforward reconstruction of f may be achieved via orthogonal projection with respect to the sampling basis. Indeed, for an arbitrary f ∈ H, this is the best possible strategy. However,
in many important circumstances, this approximation converges only slowly in m, when measured in the
norm on H, or not at all, if a stronger norm – the uniform norm, for example – is considered.
A prominent instance of this problem is the recovery of a function $f : [-1,1] \to \mathbb{R}$ from its first m Fourier coefficients. In this instance, $H = L^2(-1,1)$ is the space of all square-integrable functions.
Provided f is analytic and periodic, it is well-known that its Fourier series (the orthogonal projection with
respect to the Fourier basis) converges exponentially fast [20, chpt. 5]. However, whenever f has a jump
discontinuity – in particular, if f is nonperiodic, or equivalently, has a jump discontinuity at x = 1 – its
Fourier series suffers from the well-known Gibbs phenomenon [56, Part I]. Whilst convergence occurs in
the $L^2$ norm, uniform convergence is lacking, and the approximation is polluted by characteristic $\mathcal{O}(1)$ oscillations near the discontinuity. Moreover, the rate of convergence is also slow: only $\mathcal{O}(m^{-\frac{1}{2}})$ when measured in the $L^2$ norm, and $\mathcal{O}(m^{-1})$ pointwise away from the discontinuity. Needless to say, the Gibbs phenomenon is a significant blight on many practical applications of Fourier series [50]. It is a testament to its importance that the design of effective techniques for its removal remains an active area
of inquiry [41, 61].
Returning to the general form of the problem, let us now suppose that some additional information
is known about the element f . Specifically, suppose that we know that f can be well-represented in
a particular basis. For example, in the Fourier setting, we may know that f is piecewise analytic with
jump discontinuities at given locations in [−1, 1]. In this circumstance, it seems plausible that a better
approximation to f can be obtained by expanding in a different basis – a piecewise polynomial basis,
for example. To this end, we introduce the so-called reconstruction space (of dimension n), spanned by n linearly independent reconstruction vectors, and seek to approximate f by an element $f_{n,m}$ of this space.
As we will show in due course, provided reconstruction is carried out in a certain manner, a suitable
approximation fn,m can always be found. Essential to this approach is that m (the number of samples)
is chosen sufficiently large in comparison to n (equivalently, n is chosen sufficiently small in comparison to m). However, provided this is the case, the approximation fn,m inherits the principal features of
the reconstruction space. In particular, $f_{n,m}$ is quasi-optimal, in the sense that the error $\|f - f_{n,m}\|$ can be bounded by a constant multiple of $\|f - Q_n f\|$, where $Q_n f$ is the orthogonal projection onto the reconstruction space; in other words, the best approximation to f from this space. Moreover, from a practical
standpoint, this method can be implemented by solving a linear least squares problem. Whenever the
reconstruction vectors are suitably chosen (e.g. if they form a Riesz basis), the corresponding linear system is well-conditioned and the least squares problem can be solved in $\mathcal{O}(mn)$ operations by standard
iterative techniques.
Consider once more the example of Fourier series, and let f : [−1, 1] → R be an analytic, nonperiodic
function. As mentioned, the Fourier series of f lacks uniform convergence. However, since f is analytic,
it makes sense to seek to reconstruct f in a system of polynomials. It is well-known that the nth degree
polynomial expansion of an analytic function converges exponentially fast in n [12, chpt. 2]. With this
in mind, a key theorem we prove in this paper is as follows:
Theorem 1.1. Let the first m Fourier samples of a function $f \in L^2(-1,1)$ be given. Then

(i) With $n = \mathcal{O}(\sqrt{m})$ it is possible to compute a polynomial approximation $f_{n,m}$ of degree n that satisfies
$$\|f - Q_n f\| \le \|f - f_{n,m}\| \le c\|f - Q_n f\|,$$
where c is independent of f and m, and $Q_n f$ is the best polynomial approximation to f of degree n. Furthermore, c can be made arbitrarily close to 1 by a suitably small choice of the constant $c'$ in the scaling $n = c'\sqrt{m}$.

(ii) The approximation $f_{n,m}$ is completely independent of the choice of polynomial basis used to represent it. The particular basis can be chosen by the user, and such a choice only affects numerical stability and computational cost.

(iii) When implemented with Legendre polynomials, the numerical method is completely stable and $f_{n,m}$ can be computed in only $\mathcal{O}(m^{\frac{3}{2}})$ operations. Conversely, if Chebyshev polynomials are employed, for example, the condition number of the method is $\mathcal{O}(\sqrt{m})$ and the corresponding computational cost is $\mathcal{O}(m^{\frac{7}{4}})$.
An immediate consequence of this theorem is that $f_{n,m}$ converges exponentially fast in the polynomial degree n whenever f is analytic, and root-exponentially fast in terms of m, the number of Fourier samples. In addition, it transpires that both the method and this theorem can be generalised to the recovery of a piecewise analytic function of one variable (using piecewise polynomial bases), and to the case of multivariate functions defined in tensor-product regions. In particular, both the scaling $n = \mathcal{O}(\sqrt{m})$ and (root) exponential convergence are maintained in these more general settings.
The method developed in this paper was previously introduced by the authors in [4] within the context of sampling theory. Whilst this problem, in an abstract form, has been extensively studied in the last
couple of decades (in particular, by Eldar et al [28, 29], see also [63]), to the best of our knowledge this
method does not appear in any existing literature. For a more detailed discussion of the relation of this
approach to existing schemes we refer the reader to [4]. Conversely, in this paper, after presenting the
general version of the method in abstract terms, we focus primarily on its application to the Fourier coefficient reconstruction problem. On this topic, a similar approach, but only dealing with reconstructions
in Legendre polynomials from Fourier samples of analytic functions, was discussed in [47]. This can be
viewed as a special case of our general framework. Furthermore, by examining this example as part of
the general framework, we are able to extend and improve the work of [47] in the following ways: (i) we
derive a procedure allowing for reconstructions in any polynomial basis, not just Legendre polynomials,
(ii) we extend this approach to reconstructions of piecewise smooth functions using (arbitrary) piecewise
polynomial bases, (iii) we generalise this work to smooth functions of arbitrary numbers of variables, and (iv) we obtain improved estimates for both the error and the necessary scaling $n = \mathcal{O}(\sqrt{m})$ required for implementation.
Aside from these improvements, a great benefit of the general framework presented in this paper
is that it is immediately applicable to a whole host of other reconstruction problems. To illustrate this
generality, in the final part of this paper we consider its use in the accurate reconstruction of a piecewise
analytic function from its orthogonal polynomial expansion coefficients. Such a problem is typical of
that occurring in the application of polynomial spectral methods to hyperbolic PDEs [38, 46], where the
shock formation inhibits fast convergence of the polynomial approximation. As we highlight, this issue
can be overcome in a completely stable fashion by reconstructing in a piecewise polynomial basis.
There are numerous algorithms for the removal of the Gibbs phenomenon from Fourier series or
expansions in orthogonal polynomials. One of the most well-known and widely used is spectral reprojection [35, 41, 42]. As we discuss further in Section 3, the method developed in this paper has a number
of key advantages over this technique. Numerical results also indicate its superior performance.
The outline of the remainder of this paper is as follows. In Section 2 we introduce the reconstruction
procedure and establish both stability and error estimates. Section 3 is devoted to (piecewise) polynomial
reconstructions from Fourier samples. In Section 4 we consider reconstructions in tensor-product spaces,
and in Section 5 we discuss other recovery problems. Finally, in Section 6 we present open problems and
challenges.
2 General theory of reconstruction
In this section, we describe the reconstruction procedure in its full generality. To this end, suppose that $\{\psi_j\}_{j=1}^{\infty}$ is a Riesz basis (the sampling basis) for a separable Hilbert space H over the field $\mathbb{C}$. Let $\langle\cdot,\cdot\rangle$ be the inner product on H, with associated norm $\|\cdot\|$. Recall that, by definition, $\mathrm{span}\{\psi_1,\psi_2,\ldots\}$ is dense in H and
$$c_1\sum_{j=1}^{\infty}|\alpha_j|^2 \le \Big\|\sum_{j=1}^{\infty}\alpha_j\psi_j\Big\|^2 \le c_2\sum_{j=1}^{\infty}|\alpha_j|^2, \quad \forall\alpha = \{\alpha_1,\alpha_2,\ldots\} \in \ell^2(\mathbb{N}), \qquad (2.1)$$
for positive constants $c_1, c_2$. Equivalently, $\psi_j = B(\Psi_j)$, where $\{\Psi_j\}_{j=1}^{\infty}$ is an orthonormal basis for H and $B : H \to H$ is a bounded, bijective operator. Using this definition, it is easy to deduce that $\{\psi_j\}_{j=1}^{\infty}$ also satisfies the frame property
$$d_1\|f\|^2 \le \sum_{j=1}^{\infty}|\langle f,\psi_j\rangle|^2 \le d_2\|f\|^2, \quad \forall f \in H, \qquad (2.2)$$
for $d_1, d_2 > 0$, where the smallest possible value for $d_2$ is $\|B\|^2_{H\to H}$ and the largest possible value for $d_1$ is $\|B^{-1}\|^{-2}_{H\to H}$ [21].
Suppose now that the first m coefficients of an element $f \in H$ with respect to the sampling basis are given:
$$\hat{f}_j = \langle f,\psi_j\rangle, \quad j = 1,\ldots,m. \qquad (2.3)$$
Set $S_m = \mathrm{span}\{\psi_1,\ldots,\psi_m\}$ and let $P_m : H \to S_m$ be the mapping
$$f \mapsto P_m f = \sum_{j=1}^{m}\langle f,\psi_j\rangle\psi_j. \qquad (2.4)$$
We now seek to reconstruct f in a different basis. To this end, suppose that $\{\phi_1,\ldots,\phi_n\}$ are linearly independent reconstruction vectors and define $T_n = \mathrm{span}\{\phi_1,\ldots,\phi_n\}$. Let $Q_n : H \to T_n$ be the orthogonal projection onto $T_n$. Direct computation of $Q_n f$, the best approximation to f from $T_n$, is not possible, since the coefficients $\langle f,\phi_j\rangle$ are unknown. Instead, we seek to use the values (2.3) to compute an approximation $f_{n,m} \in T_n$ that is quasi-optimal, i.e. $\|f - Q_n f\| \le \|f - f_{n,m}\| \le C\|f - Q_n f\|$ for some constant $C > 0$ independent of f and n. To do this, we introduce the sesquilinear form $a_m : H \times H \to \mathbb{C}$, given by
$$a_m(g,h) = \langle P_m g,h\rangle, \quad \forall g,h \in H. \qquad (2.5)$$
Note that, since
$$\langle P_m g,h\rangle = \sum_{j=1}^{m}\langle g,\psi_j\rangle\overline{\langle h,\psi_j\rangle} = \overline{\langle P_m h,g\rangle}, \quad \forall g,h \in H,$$
$a_m$ is a Hermitian form on $H \times H$ (here $\bar{z}$ denotes the complex conjugate of $z \in \mathbb{C}$). With this to hand, we now define $f_{n,m}$ as the solution to
$$a_m(f_{n,m},\phi) = a_m(f,\phi), \quad \forall\phi \in T_n, \quad f_{n,m} \in T_n. \qquad (2.6)$$
Upon setting $\phi = \phi_j$, $j = 1,\ldots,n$, this becomes an $n \times n$ linear system of equations for the coefficients $\alpha_1,\ldots,\alpha_n$ of $f_{n,m} = \sum_{j=1}^{n}\alpha_j\phi_j$. We shall defer a discussion of the computation of this approximation to Section 2.3: first we consider the analysis of $f_{n,m}$.

Let us at this stage observe that (2.6) is equivalent to the following linear least squares problem:
$$\min_{\phi \in T_n}\sum_{j=1}^{m}|\langle f,\psi_j\rangle - \langle\phi,\psi_j\rangle|^2. \qquad (2.7)$$
As a result, the coefficients $\alpha_1,\ldots,\alpha_n$ are the least squares solution of a system of m linear equations. Although (2.7) appears very familiar, we are yet to find this particular formulation, or any pertinent analysis, in the literature relating to sampling and reconstruction.
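To fix ideas, the following is a minimal sketch (in Python with NumPy; the names `U` and `fhat` are our own illustrative choices, anticipating the matrix notation of Section 2.3) of how (2.7) reduces to a standard linear least squares problem once the inner products have been assembled:

```python
# Minimal sketch of (2.7): U is the m x n matrix with entries
# U[j, k] = <phi_k, psi_j>, and fhat[j] = <f, psi_j> are the given samples.
# The coefficients alpha of f_{n,m} = sum_k alpha_k phi_k are then the
# least squares solution of U alpha ~ fhat.
import numpy as np

def reconstruction_coefficients(U, fhat):
    alpha, *_ = np.linalg.lstsq(U, fhat, rcond=None)
    return alpha
```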
2.1 Analysis of $f_{n,m}$
Before proving the main theorem regarding (2.6), let us first give an intuitive explanation as to why this approach works. As mentioned, the key to this technique is that the parameter m is sufficiently large in comparison to n. To this end, let n be fixed and suppose that $m \to \infty$. Due to (2.1), the mappings $P_m$ converge strongly to a bounded, linear operator $\mathcal{P}$, the frame operator [21], given by
$$\mathcal{P}f = \sum_{j=1}^{\infty}\langle f,\psi_j\rangle\psi_j, \quad \forall f \in H. \qquad (2.8)$$
Hence, for large m, the equations (2.6) defining $f_{n,m}$ resemble the equations
$$a(\tilde{f}_n,\phi) = a(f,\phi), \quad \forall\phi \in T_n, \quad \tilde{f}_n \in T_n, \qquad (2.9)$$
where $a : H \times H \to \mathbb{C}$ is the Hermitian form $a(f,g) = \langle\mathcal{P}f,g\rangle$. Thus, it is reasonable to expect that $f_{n,m} \to \tilde{f}_n$ as $m \to \infty$, provided such a function $\tilde{f}_n$ exists. However,

Theorem 2.1. For all $n \in \mathbb{N}$, the function $\tilde{f}_n$ exists and is unique. Moreover,
$$\|f - \tilde{f}_n\| \le \frac{d_2}{d_1}\|f - Q_n f\|, \qquad (2.10)$$
where $d_1$ and $d_2$ are as in (2.2).
This theorem can be established with a straightforward application of the Lax–Milgram theorem and its counterpart, Céa's lemma [36]. Indeed, due to (2.2) and (2.8),
$$d_1\|g\|^2 \le a(g,g) = \sum_{j=1}^{\infty}|\langle g,\psi_j\rangle|^2 \le d_2\|g\|^2, \quad \forall g \in H. \qquad (2.11)$$
Hence the form $a(\cdot,\cdot)$ defines an equivalent inner product on H. Nonetheless, we shall present a self-contained proof, since similar techniques will be used subsequently.
Proof. Let $U : T_n \to \mathbb{C}^n$ be the linear mapping $g \mapsto \{\langle\mathcal{P}g,\phi_j\rangle\}_{j=1}^{n}$. To prove existence and uniqueness of $\tilde{f}_n$ it suffices to show that U is invertible, upon which it follows that $\tilde{f}_n = U^{-1}\{\langle\mathcal{P}f,\phi_j\rangle\}_{j=1}^{n}$.

Suppose that $Ug = 0$. Then, by definition, $\langle\mathcal{P}g,\phi_j\rangle = 0$ for $j = 1,\ldots,n$. Using linearity, we deduce that $\langle\mathcal{P}g,g\rangle = 0$. Now, it follows from (2.11) that $0 = \langle\mathcal{P}g,g\rangle \ge d_1\|g\|^2$, giving $g = 0$. Hence, U is invertible and $\tilde{f}_n$ exists and is unique.

Now consider the error estimate (2.10). Using (2.2) once more, we obtain
$$\|f - \tilde{f}_n\|^2 \le \frac{1}{d_1}\sum_{j=1}^{\infty}\big|\langle f - \tilde{f}_n,\psi_j\rangle\big|^2 = \frac{1}{d_1}\langle\mathcal{P}(f - \tilde{f}_n), f - \tilde{f}_n\rangle.$$
By definition of $\tilde{f}_n$, $\langle\mathcal{P}(f - \tilde{f}_n),\phi\rangle = 0$, $\forall\phi \in T_n$. In particular, setting $\phi = \tilde{f}_n - Q_n f$ yields
$$\|f - \tilde{f}_n\|^2 \le \frac{1}{d_1}\langle\mathcal{P}(f - \tilde{f}_n), f - Q_n f\rangle = \frac{1}{d_1}a(f - \tilde{f}_n, f - Q_n f).$$
Since $a(\cdot,\cdot)$ gives an equivalent inner product on H, an application of the Cauchy–Schwarz inequality yields
$$\|f - \tilde{f}_n\|^2 \le \frac{1}{d_1}\big[a(f - \tilde{f}_n, f - \tilde{f}_n)\,a(f - Q_n f, f - Q_n f)\big]^{\frac{1}{2}} \le \frac{d_2}{d_1}\|f - \tilde{f}_n\|\|f - Q_n f\|,$$
as required.
This theorem establishes existence and quasi-optimality of $\tilde{f}_n \approx f_{n,m}$, thereby giving an intuitive argument for the success of this method. We now wish to fully confirm this observation. To this end, let
$$C_{n,m} = \inf_{\substack{\phi \in T_n \\ \|\phi\| = 1}} a_m(\phi,\phi). \qquad (2.12)$$
Note that $C_{n,m}$ has the equivalent forms
$$C_{n,m} = \inf_{\substack{\phi \in T_n \\ \|\phi\| = 1}}\langle P_m\phi,\phi\rangle = \inf_{\substack{\phi \in T_n \\ \|\phi\| = 1}}\sum_{j=1}^{m}|\langle\phi,\psi_j\rangle|^2.$$
The quantity $C_{n,m}$ plays a fundamental role in this paper. Its key properties are described in the following lemma:

Lemma 2.2. For all $n, m \in \mathbb{N}$, $0 \le C_{n,m} \le d_2$. Moreover, for each n, $C_{n,m} \to C_n^* \ge d_1$ as $m \to \infty$, where
$$C_n^* = \inf_{\substack{\phi \in T_n \\ \|\phi\| = 1}} a(\phi,\phi),$$
and $d_1$ is defined in (2.2).
Proof. Consider first the quantity
$$\epsilon_{n,m} = \sup_{\substack{\phi \in T_n \\ \|\phi\| = 1}}\langle\mathcal{P}\phi - P_m\phi,\phi\rangle = \sup_{\substack{\phi \in T_n \\ \|\phi\| = 1}}\sum_{j>m}|\langle\phi,\psi_j\rangle|^2. \qquad (2.13)$$
Due to (2.2), the infinite sum is finite for any fixed $\phi$, and tends to zero as $m \to \infty$. Now let $\{\Phi_j\}_{j=1}^{n}$ be an orthonormal basis for $T_n$ and set $\phi = \sum_{j=1}^{n}\alpha_j\Phi_j$. Two applications of the Cauchy–Schwarz inequality give
$$\sum_{j>m}|\langle\phi,\psi_j\rangle|^2 \le \|\phi\|^2\sum_{k=1}^{n}\sum_{j>m}|\langle\Phi_k,\psi_j\rangle|^2.$$
Hence $\epsilon_{n,m} \le \sum_{k=1}^{n}\sum_{j>m}|\langle\Phi_k,\psi_j\rangle|^2$, and we deduce that $\epsilon_{n,m}$ is both finite and $\epsilon_{n,m} \to 0$ as $m \to \infty$. Noticing that $|C_{n,m} - C_n^*| \le \epsilon_{n,m}$, $\forall n,m \in \mathbb{N}$, gives the first part of the proof. For the second, we merely use (2.2).
Aside from $C_{n,m}$, we also define the quantity
$$D_{n,m} = \sup_{\substack{f \in T_n^{\perp} \\ \|f\| = 1}}\sup_{\substack{g \in T_n \\ \|g\| = 1}}|a_m(f,g)|. \qquad (2.14)$$
For this, we have the following lemma:

Lemma 2.3. For all $m, n \in \mathbb{N}$, $0 \le D_{n,m} \le d_2$. Moreover, suppose that $\mathcal{P}$ is such that $\mathcal{P}(T_n) \subseteq T_n$ (for example, when $\mathcal{P} = I$ is the identity). Then $D_{n,m}^2 \le c_2\epsilon_{n,m}$, where $\epsilon_{n,m}$ is as in (2.13). In particular, for fixed n, $D_{n,m} \to 0$ as $m \to \infty$.
Proof. Let $f, g \in H$. By definition,
$$a_m(f,g) = \langle P_m f,g\rangle = \sum_{j=1}^{m}\langle f,\psi_j\rangle\overline{\langle g,\psi_j\rangle} \le \Big(\sum_{j=1}^{m}|\langle f,\psi_j\rangle|^2\Big)^{\frac{1}{2}}\Big(\sum_{j=1}^{m}|\langle g,\psi_j\rangle|^2\Big)^{\frac{1}{2}}. \qquad (2.15)$$
Hence (2.2) gives the first result. Now suppose that $f \in T_n^{\perp}$ and $\mathcal{P}(T_n) \subseteq T_n$. Since $P_m$ is self-adjoint, we have $\langle P_m f,g\rangle = \langle f,P_m g\rangle = \langle f,P_m g - \mathcal{P}g\rangle$. Here the second equality is due to the fact that $f \perp T_n$ and $\mathcal{P}g \in T_n$ for $g \in T_n$. By the Cauchy–Schwarz inequality, we obtain
$$D_{n,m} \le \sup_{\substack{g \in T_n \\ \|g\| = 1}}\|\mathcal{P}g - P_m g\|.$$
For $g \in T_n$, we note from (2.1) that
$$\|\mathcal{P}g - P_m g\|^2 \le c_2\sum_{j>m}|\langle g,\psi_j\rangle|^2 = c_2\langle\mathcal{P}g - P_m g,g\rangle \le c_2\epsilon_{n,m}\|g\|^2,$$
where the final inequality follows from the definition of $\epsilon_{n,m}$.
We are now able to state the main theorem of this section:

Theorem 2.4. For every $n \in \mathbb{N}$ there exists an $m_0$ such that the approximation $f_{n,m}$, defined by (2.6), exists and is unique for all $m \ge m_0$, and satisfies the stability estimate
$$\|f_{n,m}\| \le \frac{d_2}{C_{n,m}}\|f\|.$$
Furthermore,
$$\|f - Q_n f\| \le \|f - f_{n,m}\| \le K_{n,m}\|f - Q_n f\|, \quad K_{n,m} = \sqrt{1 + C_{n,m}^{-2}D_{n,m}^2}. \qquad (2.16)$$
Specifically, the parameter $m_0$ is the least value of m such that $C_{n,m} > 0$.

To prove this theorem, we first recall that a Hermitian form $a : H \times H \to \mathbb{C}$ is said to be continuous if, for some constant $\gamma > 0$, $|a(f,g)| \le \gamma\|f\|\|g\|$ for all $f, g \in H$. Moreover, a is coercive, provided $a(f,f) \ge \omega\|f\|^2$, $\forall f \in H$, for $\omega > 0$ constant [36]. We now require the following lemma:

Lemma 2.5. Suppose that $a_m : H \times H \to \mathbb{C}$ is the sesquilinear form $a_m(f,g) = \langle P_m f,g\rangle$. Then $a_m$ is continuous with constant $\gamma \le d_2$. Moreover, for every $n \in \mathbb{N}$ there exists an $m_0$ such that the restriction of $a_m$ to $T_n \times T_n$ is coercive for all $m \ge m_0$. Specifically, if $C_{n,m}$ is given by (2.12), then $m_0$ is the least value of m such that $C_{n,m} > 0$, and, for all $m \ge m_0$, $a_m(f,f) \ge C_{n,m}\|f\|^2$, $\forall f \in T_n$. Finally, for all $f \in H$ and $g \in T_n$, we have $a_m(f - Q_n f,g) \le D_{n,m}\|f - Q_n f\|\|g\|$.

Proof. Continuity follows immediately from (2.15). For the second and final results, we merely use the definitions (2.12) and (2.14) of $C_{n,m}$ and $D_{n,m}$ respectively.
Proof of Theorem 2.4. To establish existence and uniqueness, it suffices to prove that the linear operator $U : T_n \to \mathbb{C}^n$, $g \mapsto \{\langle P_m g,\phi_j\rangle\}_{j=1}^{n}$ is invertible. Suppose that $g \in T_n$ with $Ug = 0$. By definition, we have $\langle P_m g,\phi_j\rangle = 0$ for $j = 1,\ldots,n$. Using linearity, it follows that $\langle P_m g,g\rangle = 0$. Lemma 2.5 now gives $0 \le C_{n,m}\|g\|^2 \le 0$. Hence $g = 0$, and therefore U is invertible.

Stability of $f_{n,m}$ is easily established from the continuity and coercivity conditions. Setting $\phi = f_{n,m}$ in (2.6) gives
$$C_{n,m}\|f_{n,m}\|^2 \le a_m(f_{n,m},f_{n,m}) = a_m(f,f_{n,m}) \le d_2\|f\|\|f_{n,m}\|,$$
as required. Now consider the error estimate (2.16). Define $e_{n,m} = f_{n,m} - Q_n f \in T_n$. Then, by definition of $f_{n,m}$, we have $a_m(e_{n,m},\phi) = a_m(f - Q_n f,\phi)$, $\forall\phi \in T_n$. In particular, setting $\phi = e_{n,m}$ and using Lemma 2.5, we obtain
$$\|e_{n,m}\| \le C_{n,m}^{-1}D_{n,m}\|f - Q_n f\|. \qquad (2.17)$$
Since $Q_n f$ is the orthogonal projection onto $T_n$, we have $\|f - f_{n,m}\|^2 = \|e_{n,m}\|^2 + \|f - Q_n f\|^2$, which gives the full result.
Let us sum up the key message of this theorem. It is possible to recover any element $f \in H$ quasi-optimally from its samples in an arbitrary space $T_n$, provided the number of samples m is sufficiently large. Moreover, the condition that guarantees this recovery is known explicitly in terms of the quantity $C_{n,m}$.

In most practical situations, one has that $T_1 \subseteq T_2 \subseteq \ldots$, $\overline{\bigcup_{n=1}^{\infty}T_n} = H$ (for example, when $T_n = \mathbb{P}_n$). In this case, Theorem 2.4 states that both $f_{n,m}$ and $Q_n f$ will converge to f at precisely the same rate as $n \to \infty$, provided m is chosen sufficiently large for each n.

At this moment, we also mention one other important observation. The constants $C_{n,m}$, $D_{n,m}$ and $C_n^*$, as well as the approximation $f_{n,m}$, are determined only by the space $T_n$, not by the choice of reconstruction vectors $\phi_1,\ldots,\phi_n$ themselves. As we shall discuss later, the choice of such vectors only affects the stability of the scheme.
Remark 2.1 Given that $a_m$ was shown to be continuous and coercive before proving Theorem 2.4, it may be tempting to seek to apply the Lax–Milgram theorem and Céa's lemma to obtain the result. However, the Hermitian form $a_m$, when considered as a mapping $H \times H \to \mathbb{C}$, will not, in general, be coercive. This is readily seen from the definition of $C_{n,m}$. The finite-dimensional operator $P_m|_{T_n}$ converges uniformly to $\mathcal{P}|_{T_n}$, whereas its infinite-dimensional counterpart $P_m : H \to S_m$ typically does not (for example, when $P_m$ is the Fourier projection operator and $H = L^2(-1,1)$). Hence, $a_m$ only becomes coercive when restricted to $T_n \times T_n$, and these standard results do not automatically apply.
Although Theorem 2.4 establishes an estimate for the error $f - f_{n,m}$ measured in the natural norm on H, it is also useful to derive a result valid for any other norm defined on a suitable subspace of H (for example, this may be the uniform norm on $[-1,1]$ in the case of Fourier series). To this end, let $|||\cdot|||$ be such a norm and define $G = \{g \in H : |||g||| < \infty\}$. We have

Corollary 2.6. Suppose that $f \in G$, $T_n \subseteq G$ and that $f_{n,m}$ is defined by (2.6). Then, for all $m \ge m_0$,
$$|||f - f_{n,m}||| \le |||f - Q_n f||| + \frac{k_n D_{n,m}}{C_{n,m}}\|f - Q_n f\|, \qquad (2.18)$$
where $k_n = \sup_{\phi \in T_n, \|\phi\| = 1}|||\phi|||$ and $C_{n,m}$, $D_{n,m}$ are given by (2.12) and (2.14) respectively.
Proof. Let $e_{n,m} = f_{n,m} - Q_n f$ once more. Since $e_{n,m} \in T_n$, it follows from the definition of $k_n$ and the inequality (2.17) that
$$|||e_{n,m}||| \le k_n\|e_{n,m}\| \le \frac{k_n D_{n,m}}{C_{n,m}}\|f - Q_n f\|.$$
The full result is obtained from the triangle inequality $|||f - f_{n,m}||| \le |||e_{n,m}||| + |||f - Q_n f|||$.
This corollary verifies convergence of $f_{n,m}$ to f in $|||\cdot|||$, whenever $Q_n f \to f$ in this norm and $k_n\|f - Q_n f\| \to 0$ as $n \to \infty$. Note however that, although $f_{n,m}$ will converge at the same rate as $Q_n f$ in this norm, this rate will in general be slower than that of the best approximation in this norm, i.e. $\phi = \arg\min_{\phi \in T_n}|||f - \phi|||$. Having said this, the effect of this discrepancy is typically minimal, especially when $Q_n f$ converges rapidly. See [20, chpt. 5] for a discussion of such differences in polynomial approximations.

Remark 2.2 In practice, it is useful to have an upper bound for the constant $k_n$. A simple exercise gives $k_n \le \sum_{j=1}^{n}|||\Phi_j|||$, where $\{\Phi_j\}_{j=1}^{n}$ is any orthonormal basis for $T_n$.
Returning to the main conclusion of Theorem 2.4 – namely, that guaranteed recovery can be obtained by allowing m to range independently of n – a natural question to ask is what happens if m is set equal to n. In abstract sampling theory this is known as the consistent reconstruction framework [29, 63]. This question was discussed in detail in [4], where it was demonstrated that such an approach often leads to severe ill-conditioning as $n = m \to \infty$. Additionally, stringent restrictions are placed on the types of vectors f that can be reconstructed – see also Section 3.6. Conversely, by allowing m to vary independently of n, we obtain a reconstruction $f_{n,m}$ that is guaranteed to converge for any vector $f \in H$. Moreover, as we discuss in Section 2.3, provided the reconstruction vectors are suitably chosen, the computation of $f_{n,m}$ is completely stable.
2.2 Oblique asymptotic optimality
Recall the intuitive argument of the previous section: namely, $f_{n,m} \approx \tilde{f}_n$ for all large m, where $\tilde{f}_n$ is defined by (2.9). We now wish to confirm this observation. Specifically, we shall show that, for fixed $n \in \mathbb{N}$, $f_{n,m} \to \tilde{f}_n$ as $m \to \infty$, at a rate independent of the particular vector f.

Recall that the form $a(\cdot,\cdot)$ yields an equivalent inner product on H. Since $\tilde{f}_n$ is defined by the equations $a(\tilde{f}_n,\phi) = a(f,\phi)$, $\forall\phi \in T_n$, the mapping $f \mapsto \tilde{f}_n$ is the orthogonal projection onto $T_n$ with respect to this inner product. Letting $\|g\|_a = \sqrt{a(g,g)}$ be the corresponding norm on H, we now define the constants
$$\tilde{C}_{n,m} = \inf_{\substack{\phi \in T_n \\ \|\phi\|_a = 1}}\langle P_m\phi,\phi\rangle, \quad \tilde{D}_{n,m} = \sup_{\substack{f \in T_n^{\perp} \\ \|f\|_a = 1}}\sup_{\substack{g \in T_n \\ \|g\|_a = 1}}|\langle P_m f,g\rangle|. \qquad (2.19)$$
In this instance, $T_n^{\perp}$ is defined with respect to the a-inner product, i.e. $T_n^{\perp} = \{f \in H : a(f,\phi) = 0, \forall\phi \in T_n\}$. Conversely, when considered with respect to the canonical inner product, this subspace is precisely $\mathcal{P}(T_n)^{\perp} = \{f \in H : \langle f,\phi\rangle = 0, \forall\phi \in \mathcal{P}(T_n)\}$.

Note the similarity between $\tilde{C}_{n,m}$ and $\tilde{D}_{n,m}$ and the quantities $C_{n,m}$ and $D_{n,m}$ defined in (2.12) and (2.14) respectively. Roughly speaking, the latter measure the deviation of $f_{n,m}$ from $Q_n f$, whereas, as we will subsequently show, the former determine the deviation of $f_{n,m}$ from $\tilde{f}_n$.
With these definitions to hand, identical arguments to those given in the proofs of Lemmas 2.2 and 2.3 now yield:

Lemma 2.7. For all $m, n \in \mathbb{N}$, $\tilde{C}_{n,m} \ge \frac{1}{d_2}C_{n,m}$, where $C_{n,m}$ is as in (2.12). Moreover, for fixed n, $\tilde{C}_{n,m} \to 1$ as $m \to \infty$.

Lemma 2.8. For all $m, n \in \mathbb{N}$, $\tilde{D}_{n,m} \le d_2$ and $\tilde{D}_{n,m}^2 \le 1 - \tilde{C}_{n,m}$. In particular, for fixed n, $\tilde{D}_{n,m} \to 0$ as $m \to \infty$.
Using these lemmas, we deduce

Corollary 2.9. If $f_{n,m}$ and $\tilde{f}_n$ are given by (2.6) and (2.9) respectively, then
$$\|f_{n,m} - \tilde{f}_n\|_a \le \frac{\tilde{D}_{n,m}}{\tilde{C}_{n,m}}\|f - \tilde{f}_n\|_a,$$
and we have the error estimate
$$\|f - \tilde{f}_n\|_a \le \|f - f_{n,m}\|_a \le \tilde{K}_{n,m}\|f - \tilde{f}_n\|_a, \quad \tilde{K}_{n,m} = \sqrt{1 + \tilde{C}_{n,m}^{-2}\tilde{D}_{n,m}^2}.$$
In particular, for any $f \in H$, $f_{n,m} \to \tilde{f}_n$ as $m \to \infty$.
Proof. Since $(f_{n,m} - \tilde{f}_n) \in T_n$, we have
$$\tilde{C}_{n,m}\|f_{n,m} - \tilde{f}_n\|_a^2 \le \langle P_m(f_{n,m} - \tilde{f}_n), f_{n,m} - \tilde{f}_n\rangle.$$
Moreover, because $\langle P_m f_{n,m},\phi\rangle = \langle P_m f,\phi\rangle$, $\forall\phi \in T_n$, we deduce that
$$\tilde{C}_{n,m}\|f_{n,m} - \tilde{f}_n\|_a^2 \le \langle P_m(f - \tilde{f}_n), f_{n,m} - \tilde{f}_n\rangle \le \tilde{D}_{n,m}\|f - \tilde{f}_n\|_a\|f_{n,m} - \tilde{f}_n\|_a,$$
where the second inequality follows from the definition (2.19) of $\tilde{D}_{n,m}$ and the fact that $(f - \tilde{f}_n) \in T_n^{\perp}$, the orthogonal complement of $T_n$ with respect to the a-inner product.
Note that the mapping $W_n : f \mapsto \tilde{f}_n$ is an oblique projection with respect to the inner product $\langle\cdot,\cdot\rangle$ on H. In particular, $W_n$ has range $T_n$ and kernel $\mathcal{P}(T_n)^{\perp}$, and we have the decomposition $H = T_n \oplus \mathcal{P}(T_n)^{\perp}$. For this reason, we say that $f_{n,m}$ possesses oblique asymptotic optimality.

Whenever the sampling basis $\{\psi_j\}_{j=1}^{\infty}$ is orthonormal, we in fact witness so-called asymptotic optimality. In this setting, since $\mathcal{P} = I$, the form $a(\cdot,\cdot)$ is precisely $\langle\cdot,\cdot\rangle$, and therefore $\tilde{f}_n = Q_n f$ is the orthogonal projection. Hence, we can recover an approximation to f that is arbitrarily close to the error-minimising approximation, which, as mentioned, cannot be computed directly from the given samples. Moreover, the rate of convergence of $f_{n,m}$ to $Q_n f$ is completely independent of the particular vector f. What is more, in this case the quantities $\tilde{C}_{n,m}$ and $\tilde{D}_{n,m}$ coincide with $C_{n,m}$ and $D_{n,m}$ respectively. It can also be shown that $D_{n,m}^2 = 1 - C_{n,m}$. Thus, this rate of convergence depends only on $C_{n,m}$, and specifically, on the speed at which $C_{n,m} \to 1$.

Note that asymptotic optimality also occurs for general Riesz bases whenever $\mathcal{P}(T_n) \subseteq T_n$. The case of orthonormal sampling vectors presents the most obvious example of a basis satisfying this condition.
Remark 2.3 Whenever the vectors $\{\psi_j\}_{j=1}^{\infty}$ are not orthonormal, a natural question to ask is whether we can modify the method for computing $f_{n,m}$ to recover asymptotic optimality. This can be easily done, at least in theory, by replacing the operator $P_m$ by some operator $P_m'$ converging strongly to the identity on H (naturally, $P_m'g$ must also be a function of $\hat{g}_1,\ldots,\hat{g}_m$).

One approach to do this is to let $P_m'$ be the orthogonal projection $H \to S_m$. The downside of this approach is that it incurs additional computational cost to compute $f_{n,m}$, as we explain at the end of the next section.

Another potential means to recover asymptotic optimality is to define $P_m'g = \sum_{j=1}^{m}\langle g,\psi_j\rangle\psi_j^*$, where $\{\psi_j^*\}$ is the set of dual vectors to the sampling vectors $\{\psi_j\}$. In this case, $P_m' \to I$ strongly, and asymptotic optimality follows. In practice, however, one may not have access to the dual vectors, thus this approach cannot necessarily be easily implemented.
2.3 Computation of $f_{n,m}$
Recall that the computation of the approximation $f_{n,m}$ involves solving the system of equations (2.6). These can be interpreted as the normal equations of the least squares problem (2.7). Suppose now that $f_{n,m} = \sum_{j=1}^{n}\alpha_j\phi_j$, $\alpha = (\alpha_1,\ldots,\alpha_n) \in \mathbb{C}^n$ and $\hat{f} = (\hat{f}_1,\ldots,\hat{f}_m)$. If U is the $m \times n$ matrix with $(j,k)$th entry $\langle\phi_k,\psi_j\rangle$, then (2.6) is given exactly by $A\alpha = U^{\dagger}\hat{f}$, where $A = U^{\dagger}U$ and $U^{\dagger}$ is the adjoint of U. Equivalently, the vector $\alpha$ is the least squares solution of the problem $U\alpha \approx \hat{f}$.

This system can be solved iteratively by applying conjugate gradient iterations to the normal equations, for example. The number of required iterations is dependent on the condition number $\kappa(A)$ of the matrix A. Specifically, the number of iterations required to obtain numerical convergence (i.e. to within a prescribed tolerance) is proportional to $\sqrt{\kappa(A)}$ [37]. In particular, if $\kappa(A)$ is $\mathcal{O}(1)$ for all n and $m \ge m_0$, then the number of iterations is also $\mathcal{O}(1)$ for all n. Hence, the cost of computing $f_{n,m}$ is determined solely by the number of operations required to perform matrix-vector multiplications involving U. In other words, only $\mathcal{O}(mn)$ operations.
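As an illustration, here is a hedged sketch of the iterative scheme just described: conjugate gradients applied to the normal equations, accessing U only through matrix-vector products. The use of SciPy's `LinearOperator` and `cg` is our implementation choice for this sketch, not part of the method itself.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

def solve_normal_equations(U, fhat):
    """Solve A alpha = U^dagger fhat, A = U^dagger U, by conjugate gradients.
    Each iteration costs two matrix-vector products with U, i.e. O(mn) work."""
    m, n = U.shape
    A = LinearOperator((n, n), matvec=lambda x: U.conj().T @ (U @ x),
                       dtype=U.dtype)
    alpha, info = cg(A, U.conj().T @ fhat, atol=1e-14)
    return alpha
```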
Naturally, aside from this consideration, the condition number of A is also important since it determines the susceptibility of the numerical computation to both round-off error and noise. Specifically, an error of magnitude $\epsilon$ in the inputs (i.e. the samples $\hat{f}_j$, $j = 1,\ldots,m$) will yield an error of magnitude roughly $\kappa(A)\epsilon$ in the output $f_{n,m}$.
For these reasons it is of utmost importance to study the condition number of A. For this, we first introduce the Hermitian matrix $\tilde{A} \in \mathbb{C}^{n\times n}$ with $(j,k)$th entry $\langle\phi_j,\phi_k\rangle$. Note that $\tilde{A}$ is the Gram matrix of the vectors $\{\phi_1,\ldots,\phi_n\}$. In particular, $\kappa(\tilde{A})$ is a measure of the suitability of the particular vectors in which to compute $Q_n f$. With $\tilde{A}$ to hand, we also introduce the related matrix $\tilde{A}_a \in \mathbb{C}^{n\times n}$ with $(j,k)$th entry $a(\phi_j,\phi_k) = \langle\mathcal{P}\phi_j,\phi_k\rangle$, i.e. the Gram matrix with respect to the inner product $a(\cdot,\cdot)$.

The following lemma comes as no surprise:

Lemma 2.10. The matrices $\tilde{A}$ and $\tilde{A}_a$ are spectrally equivalent. In particular, for all $n \in \mathbb{N}$,
$$\frac{d_1}{d_2}\kappa(\tilde{A}) \le \kappa(\tilde{A}_a) \le \frac{d_2}{d_1}\kappa(\tilde{A}).$$
Proof. For any Hermitian matrix B, the condition number is the ratio of the largest and smallest eigenvalues in absolute value. Moreover, if B is positive definite, then
$$\inf_{\substack{\alpha \in \mathbb{C}^n \\ \alpha \ne 0}}\bigg\{\frac{\alpha^{\dagger}B\alpha}{\alpha^{\dagger}\alpha}\bigg\} = \lambda_{\min}(B), \quad \sup_{\substack{\alpha \in \mathbb{C}^n \\ \alpha \ne 0}}\bigg\{\frac{\alpha^{\dagger}B\alpha}{\alpha^{\dagger}\alpha}\bigg\} = \lambda_{\max}(B). \qquad (2.20)$$
If $\phi = \sum_{j=1}^{n}\alpha_j\phi_j$, then $\alpha^{\dagger}\tilde{A}\alpha = \|\phi\|^2$ and $\alpha^{\dagger}\tilde{A}_a\alpha = a(\phi,\phi)$. Hence, spectral equivalence now follows immediately from (2.11).
Concerning the condition number of the matrix A, we now have the following:

Lemma 2.11. Suppose that $m \ge m_0$, where $m_0$ is as in Theorem 2.4, and $C_{n,m}$ and $\tilde{C}_{n,m}$ are given by (2.12) and (2.19) respectively. Then
$$\tilde{C}_{n,m}\kappa(\tilde{A}_a) \le \kappa(A) \le \frac{1}{\tilde{C}_{n,m}}\kappa(\tilde{A}_a), \qquad \frac{C_{n,m}}{d_2}\kappa(\tilde{A}) \le \kappa(A) \le \frac{d_2}{C_{n,m}}\kappa(\tilde{A}).$$
Moreover, for fixed n, $A \to \tilde{A}_a$ as $m \to \infty$, and, if $\mathcal{P} = I$, $A \to \tilde{A} = \tilde{A}_a$.

Proof. The matrix A is Hermitian and, provided $m \ge m_0$, positive definite. Hence, its eigenvalues are given by (2.20). For $\phi = \sum_{j=1}^{n}\alpha_j\phi_j$, we have $\alpha^{\dagger}A\alpha = \langle P_m\phi,\phi\rangle$. By definition of $C_{n,m}$, for example, we find that $\lambda_{\max}(A) \ge C_{n,m}\lambda_{\max}(\tilde{A})$ and $\lambda_{\min}(A) \ge C_{n,m}\lambda_{\min}(\tilde{A})$. Moreover, by (2.2) we have $\lambda_{\max}(A) \le d_2\lambda_{\max}(\tilde{A})$ and $\lambda_{\min}(A) \le d_2\lambda_{\min}(\tilde{A})$. The first result now follows immediately from (2.20). For the second, we merely note that each entry of A converges to the corresponding entry of $\tilde{A}_a$ as $m \to \infty$.
Note the important conclusion of this lemma: computing $f_{n,m}$ from (2.6) is no more ill-conditioned than the computation of the orthogonal projection $Q_n f$ or the oblique projection $W_n f$ in terms of the vectors $\{\phi_1,\ldots,\phi_n\}$. In practice, it is often true that these vectors correspond to the first n vectors in a basis $\{\phi_j\}_{j=1}^{\infty}$ of H with additional structure. Whenever this is the case, as the following trivial corollary indicates, we can expect good conditioning:

Corollary 2.12. Suppose that $\{\phi_j\}_{j=1}^{\infty}$ is a Riesz basis for H with respect to $\langle\cdot,\cdot\rangle$ with constants $c_1'$ and $c_2'$. Then
$$\kappa(A) \le \frac{c_2'd_2}{c_1'C_{n,m}}.$$
Proof. This follows immediately from (2.1) and Lemma 2.11.
Put together, the main conclusion of Theorem 2.4, Lemma 2.11 and Corollary 2.12 is the following: for a given reconstruction space $T_n$, the individual vectors $\phi_1,\ldots,\phi_n$ can be chosen arbitrarily, without altering either the approximation $f_{n,m}$ or its analysis. The choice of vectors only becomes important when considering the condition number of the linear system to be solved. Moreover, the quality of a system of vectors for the reconstruction problem is completely intrinsic, in that it is determined only by the corresponding Gram matrix. In particular, it is independent of the sampling vectors.

Corollary 2.12 confirms that the approximation $f_{n,m}$ can be readily computed in a stable manner for many choices of reconstruction basis. However, to fully implement this method, as we discuss further in the next section, it is useful to have a numerical way of computing $C_{n,m}$. The following lemma provides such a means:
Lemma 2.13. The quantity $C_{n,m}$ is given by $C_{n,m} = \lambda_{\min}(\tilde{A}^{-1}A)$. Moreover, if $\tilde{A}$ and A commute, then $C_{n,m} = 1 - \|I - \tilde{A}^{-1}A\|$. In particular, if $\{\phi_j\}_{j=1}^{n}$ is an orthonormal basis, then $C_{n,m} = \lambda_{\min}(A) = 1 - \|I - A\|$.

Proof. By definition, $C_{n,m} = \inf_{\phi \in T_n, \|\phi\| = 1}\langle P_m\phi,\phi\rangle$. Letting $\phi = \sum_{j=1}^{n}\alpha_j\phi_j$, we find that
$$C_{n,m} = \inf_{\substack{\alpha \in \mathbb{C}^n \\ \alpha \ne 0}}\bigg\{\frac{\sum_{j,k=1}^{n}\alpha_j\bar{\alpha}_k\langle P_m\phi_j,\phi_k\rangle}{\sum_{j,k=1}^{n}\alpha_j\bar{\alpha}_k\langle\phi_j,\phi_k\rangle}\bigg\} = \inf_{\substack{\alpha \in \mathbb{C}^n \\ \alpha \ne 0}}\frac{\alpha^{\dagger}A\alpha}{\alpha^{\dagger}\tilde{A}\alpha}.$$
We now claim that, for arbitrary Hermitian positive definite matrices B and C with B nonsingular, the following holds:
$$\inf_{\substack{\alpha \in \mathbb{C}^n \\ \alpha \ne 0}}\frac{\alpha^{\dagger}C\alpha}{\alpha^{\dagger}B\alpha} = \lambda_{\min}(B^{-1}C), \quad \sup_{\substack{\alpha \in \mathbb{C}^n \\ \alpha \ne 0}}\frac{\alpha^{\dagger}C\alpha}{\alpha^{\dagger}B\alpha} = \lambda_{\max}(B^{-1}C).$$
To do so, write $B = D^{\dagger}D$, with D nonsingular. Then, after rearranging, we obtain
$$\inf_{\substack{\alpha \in \mathbb{C}^n \\ \alpha \ne 0}}\frac{\alpha^{\dagger}C\alpha}{\alpha^{\dagger}B\alpha} = \inf_{\substack{\beta \in \mathbb{C}^n \\ \beta \ne 0}}\frac{\beta^{\dagger}D^{-\dagger}CD^{-1}\beta}{\beta^{\dagger}\beta} = \lambda_{\min}(D^{-\dagger}CD^{-1}),$$
for example. However, a trivial calculation confirms that the eigenvalues of $D^{-\dagger}CD^{-1}$ are identical to those of $B^{-1}C$, thus establishing the claim. Since $\tilde{A}$ is nonsingular, this confirms that $C_{n,m} = \lambda_{\min}(\tilde{A}^{-1}A)$. For the second result, we merely notice that $\lambda_{\min}(B) = 1 - \lambda_{\max}(I - B) = 1 - \|I - B\|$ whenever B is Hermitian.
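By way of illustration, the following sketch evaluates $C_{n,m}$ directly from Lemma 2.13, as the smallest eigenvalue of the generalized Hermitian eigenproblem $Ax = \lambda\tilde{A}x$; the use of `scipy.linalg.eigh` is simply one convenient route, and for an orthonormal reconstruction basis one may take $\tilde{A} = I$.

```python
import numpy as np
from scipy.linalg import eigh

def C_nm(U, A_tilde=None):
    """C_{n,m} = lambda_min(A_tilde^{-1} A), with A = U^dagger U and A_tilde
    the Gram matrix of the reconstruction vectors (Lemma 2.13)."""
    A = U.conj().T @ U
    if A_tilde is None:                  # orthonormal basis: A_tilde = I
        return np.linalg.eigvalsh(A)[0]
    return eigh(A, A_tilde, eigvals_only=True)[0]
```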
In Section 2.2 we briefly discussed a modified approach where the operator $P_m$, usually given by (2.4), was replaced by the orthogonal projection operator. The advantage of this approach is that it guarantees asymptotic optimality. However, the downside is additional computational expense. Indeed, the corresponding matrix is of the form $A = U^{\dagger}V^{-1}U$, where $V \in \mathbb{C}^{m\times m}$ has $(j,k)$th entry $\langle\psi_j,\psi_k\rangle$. Hence, if conjugate gradient iterations are used, at each stage we are required to compute matrix-vector products involving the $m \times m$ matrix $V^{-1}$ (assuming that $V^{-1}$ has been precomputed). In general, this requires $\mathcal{O}(m^2)$ operations. Thus, we incur a cost of $\mathcal{O}(m^2)$, as opposed to $\mathcal{O}(mn)$ for the original algorithm. Hence, in practice it may be better to settle for only quasi- and oblique asymptotic optimality, whilst retaining a lower computational cost.
Remark 2.4 One assumption made in this section when considering computational cost is that the matrix U has already been formed. In general, computing the entries $\langle\phi_k,\psi_j\rangle$, $j = 1,\ldots,m$, $k = 1,\ldots,n$, of U may be a difficult task. In Section 3 we show that this can always be done in only $\mathcal{O}(mn)$ operations when reconstructing in a (Gegenbauer) polynomial basis from Fourier samples. However, this need not be the case in general, and in Section 5.2 we discuss an instance for which we currently only have an $\mathcal{O}(m^2)$ algorithm for computing U. Potential remedies for improving this figure are discussed in Section 6.
Remark 2.5 As detailed in [4], the ideas for the framework of this paper originate with the question of how to discretise certain infinite-dimensional operators. In particular, the matrix $U \in \mathbb{C}^{m\times n}$ is an uneven section of the operator $\mathcal{U} : \ell^2(\mathbb{N}) \to \ell^2(\mathbb{N})$ corresponding to the infinite matrix $\{\langle\phi_k,\psi_j\rangle\}_{j,k=1}^{\infty}$. Uneven section techniques – as opposed to finite sections, which are not guaranteed to succeed – have recently gained prominence in the discretisation of non-self-adjoint problems [43, 45]. In particular, they were employed in [44] to solve the long-standing computational spectral problem. The key idea is that, by allowing m to range independently of n, one can guarantee that the structure of $\mathcal{U}$ is preserved by its uneven section U. The beneficial features of the framework introduced herein, namely, numerical stability and accuracy, are direct consequences of this property.
2.4 Conditions for guaranteed, quasi-optimal recovery
Let us return to the standard form of the method once more. To implement this method, it is necessary to have conditions that guarantee nonsingularity, stability and quasi-optimal recovery. In other words, for given sampling and reconstruction bases, we wish to study the quantity
$$\Theta(n;\theta) = \min\{m \in \mathbb{N} : C_{n,m} \ge \theta\}, \quad \theta \in (0,d_2), \qquad (2.21)$$
where $C_{n,m}$ is given by (2.12) and $d_2$ stems from (2.2). Note that $C_{n,m} \le d_2$ by (2.2), thereby explaining the stated range of $\theta$. Also, by Lemma 2.2, we have that $\lim_{m\to\infty}C_{n,m} \ge d_1 > 0$, thus $\Theta$ is well-defined.

By definition, $\Theta(n;\theta)$ is the least m such that $\|f - f_{n,m}\| \le c(\theta)\|f - Q_n f\|$, where
$$c(\theta) = \sqrt{1 + d_2^2\theta^{-2}}, \quad \text{or} \quad c(\theta) = \sqrt{1 + (1-\theta)\theta^{-2}} \qquad (2.22)$$
whenever the sampling basis is orthonormal. In other words, it is the least m required for quasi-optimal recovery with constant $c(\theta)$. Thus, provided $m \ge \Theta(n;\theta)$, the approximation $f_{n,m}$ converges at the same rate as $Q_n f$ as $n \to \infty$. In addition, $m \ge \Theta(n;\theta)$ guarantees that $\|f_{n,m}\| \le d_2\theta^{-1}\|f\|$ and $\kappa(A) \le d_2\theta^{-1}\kappa(\tilde{A})$, thus making the linear system for $f_{n,m}$ solvable in a number of operations proportional to $\sqrt{d_2\theta^{-1}\kappa(\tilde{A})}$.

Note that $\Theta(n;\theta)$ is determined only by the sampling vectors $\{\psi_j\}_{j=1}^{m}$ and the reconstruction space $T_n$. Whilst $\Theta(n;\theta)$ can be numerically computed for any such pair via the expression given in Lemma 2.13, analytical bounds must be determined on a case-by-case basis. In the next section, where we consider the recovery of functions from their Fourier samples using (piecewise) polynomial bases, we are able to derive explicit forms for such bounds.
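In the same spirit, $\Theta(n;\theta)$ can be computed by a direct search over m, as in the sketch below. The routine `C_nm` from the previous sketch and the user-supplied `build_U(n, m)`, returning the $m \times n$ matrix of inner products, are assumptions of this illustration; since $C_{n,m}$ is nondecreasing in m, a bisection search would serve equally well.

```python
def Theta(n, theta, build_U, m_max=10**5):
    """Least m such that C_{n,m} >= theta, by linear search (2.21)."""
    for m in range(n, m_max + 1):
        if C_nm(build_U(n, m)) >= theta:
            return m
    raise ValueError("no m <= m_max achieves C_{n,m} >= theta")
```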
2.5 Summary
Let us sum up. Given the first m samples of any element f ∈ H with respect to any Riesz basis, it
is possible to reconstruct f in an arbitrary finite-dimensional space Tn , provided the parameter m is
sufficiently large in comparison to n = dim Tn . The resulting reconstruction fn,m is quasi-optimal, and
can be computed in a completely numerically stable manner. Furthermore, the required scaling of m with
n can be determined numerically by finding either the minimal eigenvalue or, in certain cases, the norm
of an n × n matrix.
Remark 2.6 As mentioned, the framework developed in this section was first introduced by the authors
in [4]. Whilst a result similar to Theorem 2.4 was proved, there are a number of important improvements
offered by the theory presented in this paper:
1. In [4] it was assumed that the reconstruction vectors φ1 , . . . , φn were the first n in an infinite
sequence of vectors that formed a Riesz basis for H. Conversely, Theorem 2.4 depends only on the
subspace Tn , and thus the individual reconstruction vectors can be chosen arbitrarily.
2. The constants Kn,m and Cn,m are known exactly in terms of the sampling and reconstruction
bases, and can be computed numerically.
3. Simple, explicit bounds for the condition number of the matrix A are known in terms of the constant
Cn,m and the Gram matrix Ã.
4. The behaviour of fn,m as m → ∞ (for n fixed) can be fully explained in terms of oblique asymptotic optimality.
3 Polynomial reconstructions from Fourier samples
One of the most important examples of this procedure is the reconstruction of an analytic, but nonperiodic
function f to high accuracy from its Fourier coefficients. Direct expansion in Fourier series converges
only slowly in the L2 norm, and suffers from the Gibbs phenomenon near the domain boundary. Hence,
given the first m Fourier coefficients of f , we now seek to reconstruct f to high accuracy in another basis
using the procedure developed in Section 2.
Let $H = L^2(-1,1)$, $f : (-1,1) \to \mathbb{R}$ and
$$\psi_j(x) = \frac{1}{\sqrt{2}}e^{ij\pi x}, \quad j \in \mathbb{Z},$$
be the standard Fourier basis. For $m \ge 2$, we assume that the coefficients
$$\hat{f}_j = \int_{-1}^{1}f(x)\overline{\psi_j(x)}\,dx, \quad j = -\big\lceil\tfrac{m}{2}\big\rceil + 1,\ldots,\big\lceil\tfrac{m}{2}\big\rceil - 1,$$
are known (note that, whenever m is even, this means that the first $m - 1$ Fourier coefficients of f are given. We will allow this minor discrepancy since it simplifies the ensuing analysis). As a consequence of Theorem 2.4, we are free to choose the reconstruction space. The orthogonal projection of an analytic function onto the space $\mathbb{P}_{n-1}$ of polynomials of degree less than n is known to converge exponentially fast at rate $\rho^{-n}$, where $\rho > 1$ is determined from the largest Bernstein ellipse within which f is analytic [58]. Hence, we let $T_n = \mathbb{P}_{n-1}$. Note that an orthonormal basis for $T_n$ is given by the functions
$$\phi_j(x) = \sqrt{j + \tfrac{1}{2}}\,P_j(x), \quad j \in \mathbb{N}, \qquad (3.1)$$
where $P_j$ is the jth Legendre polynomial. Moreover, if $Q_n$ is the orthogonal projection onto $T_n$, then it is well-known that
$$\|f - Q_n f\| \le c_f\sqrt{n}\,\rho^{-n}, \qquad (3.2)$$
where $c_f$ depends only on the maximal value of f on the Bernstein ellipse indexed by $\rho$. Naturally, we could also assume finite regularity of f throughout, with suitable adjustments made to the various error estimates. However, for simplicity we shall not do this.

With this to hand, provided $m \ge \Theta(n;\theta)$, where $\Theta(n;\theta)$ is defined in (2.21), the approximation $f_{n,m}$ obtained from the reconstruction procedure satisfies $\|f - f_{n,m}\| \le c(\theta)\|f - Q_n f\|$ (see Theorem 2.4). In particular, $\|f - f_{n,m}\| \le c(\theta)c_f\sqrt{n}\,\rho^{-n}$. Hence, we obtain exponential convergence of $f_{n,m}$. The key question remaining is how large m must be in comparison to n to ensure such behaviour. Resolving this question involves estimating the quantity $\Theta(n;\theta)$, a task we next pursue.
3.1 Estimates for $\Theta(n;\theta)$
Although $\Theta(n;\theta)$ is independent of the particular basis of $\mathbb{P}_{n-1}$ used, for both numerical and analytical estimates we need to select an appropriate basis. A natural choice is the orthonormal basis (3.1) of scaled Legendre polynomials. Fortunately, in this case, the inner products $\langle\phi_k,\psi_j\rangle$ (i.e. the entries of the matrix U) are known in closed form:
$$\langle\phi_k,\psi_j\rangle = (-i)^k\sqrt{\frac{k + \frac{1}{2}}{j}}\,J_{k+\frac{1}{2}}(j\pi), \quad j \in \mathbb{Z}, \ k \in \mathbb{N}, \qquad (3.3)$$
where $J_{k+\frac{1}{2}}$ is the Bessel function of the first kind. This follows directly from the integral representation
$$j_m(z) = \frac{(-i)^m}{2}\int_{-1}^{1}e^{izx}P_m(x)\,dx, \quad \forall z \in \mathbb{C}, \qquad (3.4)$$
(see [1, 10.1.14]), where $j_m$ is the spherical Bessel function of the first kind, given by
$$j_m(z) = \sqrt{\frac{\pi}{2z}}\,J_{m+\frac{1}{2}}(z).$$
With this to hand, we may compute $C_{n,m}$, and, in turn, $\Theta(n;\theta)$, via the expression given in Lemma 2.13.
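As a concrete illustration, the sketch below assembles U from (3.3)–(3.4) using SciPy's spherical Bessel function. It uses the equivalent form $\langle\phi_k,\psi_j\rangle = \sqrt{2k+1}\,(-i)^k j_k(j\pi)$ together with the parity relation $j_k(-z) = (-1)^k j_k(z)$, so that $j = 0$ and negative frequencies are handled uniformly; the function name is our own.

```python
import numpy as np
from scipy.special import spherical_jn

def fourier_legendre_U(n, m):
    """U[row, k] = <phi_k, psi_j>, j = -ceil(m/2)+1, ..., ceil(m/2)-1, for
    the orthonormal Legendre basis (3.1)."""
    M = (m + 1) // 2                                    # ceil(m/2)
    js = np.arange(-M + 1, M)
    U = np.empty((js.size, n), dtype=complex)
    for k in range(n):
        col = spherical_jn(k, np.abs(js) * np.pi)       # j_k(|j| pi)
        col = col * np.where(js < 0, (-1.0) ** k, 1.0)  # j_k(-z) = (-1)^k j_k(z)
        U[:, k] = np.sqrt(2.0 * k + 1.0) * (-1j) ** k * col
    return U
```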
In Figure 1 we display the functions $\Theta(n;\frac{1}{2})$ and $\Theta(n;\frac{1}{4})$ against n. Immediately, quadratic growth of $\Theta(n;\theta)$ with n is apparent. We next verify this observation. In doing so, we derive an upper bound for $\Theta(n;\theta)$ in terms of n and $\theta$. This gives an explicit, analytic condition for quasi-optimal recovery. Whilst such a bound is global, in that it holds for all n, we notice from Figure 1 that $\Theta(n;\theta)$, when scaled by $n^{-2}$, quickly converges to an asymptotic limit. In practice it is wasteful to use a larger value of m than necessary (or, conversely, for fixed m, an overly pessimistic value of n). Hence, in the second part of this section, we will also derive an asymptotic bound for $\Theta(n;\theta)$.

We commence as follows:
Figure 1: The functions $\Theta(n;\theta)$ (left) and $n^{-2}\Theta(n;\theta)$ (right) for $\theta = \frac{1}{2}$ (squares) and $\theta = \frac{1}{4}$ (circles).
Lemma 3.1. Suppose that $\{\psi_j\}_{j\in\mathbb{Z}}$ is the Fourier basis, $T_n = \mathbb{P}_{n-1}$ and $m \ge \max\{2,\frac{2}{\pi}n\}$. Then $C_{n,m}$ satisfies
$$C_{n,m} \ge 1 - \frac{4(\pi-2)n^2}{\pi^2(2\lceil\frac{m}{2}\rceil - 1)}.$$
Proof. From the definition of $C_{n,m}$ and the fact that $\{\psi_j\}$ is an orthonormal basis we have
$$1 - C_{n,m} = 1 - \inf_{\substack{\phi \in T_n \\ \|\phi\| = 1}}\langle P_m\phi,\phi\rangle = \sup_{\substack{\phi \in T_n \\ \|\phi\| = 1}}\langle\phi - P_m\phi,\phi\rangle = \sup_{\substack{\phi \in T_n \\ \|\phi\| = 1}}\|\phi - P_m\phi\|^2,$$
where $P_m$ is the Fourier projection operator. It now follows that $1 - C_{n,m} \le \sum_{k=0}^{n-1}\|\phi_k - P_m\phi_k\|^2$, where $\phi_k$ is given by (3.1). By Parseval's theorem and the expression (3.3), we find that
$$\|\phi_k - P_m\phi_k\|^2 = \sum_{|j|\ge\lceil\frac{m}{2}\rceil}\frac{k + \frac{1}{2}}{j}\big|J_{k+\frac{1}{2}}(j\pi)\big|^2.$$
Now, using a known result for Bessel functions [47], it can be shown that
$$\frac{k + \frac{1}{2}}{j}\big|J_{k+\frac{1}{2}}(j\pi)\big|^2 \le \frac{2k+1}{j\pi\sqrt{j^2\pi^2 - (k + \frac{1}{2})^2}},$$
provided $j\pi > k + \frac{1}{2}$. Hence, for $m > \frac{2}{\pi}n$,
$$\|\phi_k - P_m\phi_k\|^2 \le \frac{2(2k+1)}{\pi^2}\sum_{j\ge\lceil\frac{m}{2}\rceil}\frac{1}{j\sqrt{j^2 - \frac{(k+\frac{1}{2})^2}{\pi^2}}}. \qquad (3.5)$$
Now, it was shown in [47] that $\sum_{j\ge m}\frac{1}{j\sqrt{j^2 - c^2}} \le \frac{1}{c}\arcsin\frac{c}{m - \frac{1}{2}}$, whenever $m \ge c + \frac{1}{2}$. This gives
$$\|\phi_k - P_m\phi_k\|^2 \le \frac{4}{\pi}\arcsin\bigg(\frac{2k+1}{(2\lceil\frac{m}{2}\rceil - 1)\pi}\bigg),$$
and
$$1 - C_{n,m} \le \frac{4}{\pi}\sum_{k=0}^{n-1}\arcsin\bigg(\frac{2k+1}{(2\lceil\frac{m}{2}\rceil - 1)\pi}\bigg).$$
We estimate this sum by the integral of $\arcsin t$, thus giving
$$1 - C_{n,m} \le 2\big(2\lceil\tfrac{m}{2}\rceil - 1\big)\int_0^{\frac{2n}{(2\lceil m/2\rceil - 1)\pi}}\arcsin t\,dt.$$
Now it can be shown that $F(x) = \int_0^x\arcsin t\,dt \le (\frac{\pi}{2} - 1)x^2$. Upon substituting $x = \frac{2n}{(2\lceil m/2\rceil - 1)\pi}$, this completes the proof.
Theorem 3.2. Suppose that $T_n$ and $S_m$ are as in Lemma 3.1. Then, for $n \ge 2$, $\Theta(n;\theta)$ satisfies
$$\Theta(n;\theta) \le 2\bigg\lceil\frac{1}{2} + \frac{2(\pi-2)}{\pi^2(1-\theta)}n^2\bigg\rceil.$$
Proof. Suppose that $m \ge \max\{2,\frac{2}{\pi}n\}$. Then, by Lemma 3.1, $C_{n,m} \ge \theta$ provided
$$1 - \frac{4(\pi-2)n^2}{\pi^2(2\lceil\frac{m}{2}\rceil - 1)} \ge \theta.$$
Rearranging, we find that
$$2\Big\lceil\frac{m}{2}\Big\rceil \ge 1 + \frac{4(\pi-2)n^2}{\pi^2(1-\theta)} \quad \Rightarrow \quad m \ge 2\bigg\lceil\frac{1}{2} + \frac{2(\pi-2)}{\pi^2(1-\theta)}n^2\bigg\rceil,$$
and the theorem is proved, provided the right-hand side exceeds $\max\{2,\frac{2}{\pi}n\}$. Since $n \ge 2$, the right-hand side is certainly greater than 2. Moreover,
$$1 + \frac{4(\pi-2)n^2}{\pi^2(1-\theta)} \ge \frac{8(\pi-2)n}{\pi^2} > \frac{2n}{\pi},$$
as required.
Using a similar approach, we are also able to obtain an asymptotic bound for $\Theta(n;\theta)$, valid as $n \to \infty$, that is sharper than if we were to use Theorem 3.2 directly:

Theorem 3.3. Suppose that $\{\psi_j\}_{j\in\mathbb{Z}}$ and $T_n$ are as in Lemma 3.1. Then the function $\Theta(n;\theta)$ satisfies
$$n^{-2}\Theta(n;\theta) \le \frac{4}{\pi^2(1-\theta)} + \mathcal{O}(n^{-2}), \quad n \to \infty.$$
Proof. Suppose that $m = cn^2$ and recall (3.5). Since $k < n$ and $j \ge \lceil\frac{m}{2}\rceil \ge \frac{1}{2}cn^2$, we deduce that
$$\|\phi_k - P_m\phi_k\|^2 \le \frac{2(2k+1)}{\pi^2}\sum_{j\ge\lceil\frac{m}{2}\rceil}\frac{1}{j^2} + \mathcal{O}(n^{-4}) = \frac{4(2k+1)}{c\pi^2n^2} + \mathcal{O}(n^{-4}).$$
Hence
$$1 - C_{n,cn^2} \le \frac{4}{c\pi^2n^2}\sum_{k=0}^{n-1}(2k+1) + \mathcal{O}(n^{-2}) = \frac{4}{c\pi^2} + \mathcal{O}(n^{-2}).$$
Rearranging now gives the result.
In Figure 2 we compare the function $n^{-2}\Theta(n;\theta)$ for $\theta = \frac{1}{2},\frac{1}{4}$ and the global and asymptotic bounds of Theorems 3.2 and 3.3. Both bounds are reasonably sharp in comparison to the computed values. In particular, as $n \to \infty$, $n^{-2}\Theta(n;\frac{1}{2})$ quickly approaches the limiting value $c \approx 0.38$, whereas the global and asymptotic upper bounds are 0.93 and 0.81 respectively.
At this moment, we reiterate an important point. Whilst Legendre polynomials were used in the proof of Lemma 3.1, the constant $C_{n,m}$ is independent of the particular reconstruction basis, and is only determined by the space $T_n$. Hence, Theorems 3.2 and 3.3 provide a priori estimates regardless of the particular implementation of the reconstruction procedure. In the next section, we discuss the choice of polynomial basis and its effect on the numerical method. Note that Theorems 3.2 and 3.3 establish parts (i) and (ii) of Theorem 1.1. Part (iii) will be addressed in the next section.
Remark 3.1 In some applications, medical imaging, for example, oversampling is common. Formally speaking, this is the situation where we wish to recover a function f with support in $[-1,1]$ from its Fourier samples taken over an extended interval $K \supseteq [-1,1]$ (e.g. $K = [-\frac{1}{\epsilon},\frac{1}{\epsilon}]$ for some $0 < \epsilon \le 1$). In this case, proceeding in a similar manner to before, we let $H = L^2(K)$ and
$$\psi_j(x) = \sqrt{\tfrac{c}{2}}\,e^{icj\pi x}, \quad x \in K,$$
Figure 2: The function $n^{-2}\Theta(n;\theta)$ (squares), the global bound (circles) and the asymptotic bound (crosses), for $n = 2,\ldots,80$ and $\theta = \frac{1}{2}$ (left), $\theta = \frac{1}{4}$ (right).
where $c^{-1} = \frac{1}{2}|K|$ and $T_n = \{\phi : \phi|_{[-1,1]} \in \mathbb{P}_{n-1}, \ \mathrm{supp}(\phi) \subseteq [-1,1]\}$. Using similar arguments to those of Lemma 3.1, one can also derive estimates for $C_{n,m}$ and $\Theta(n;\theta)$ in this case. In fact,
$$C_{n,m} \ge 1 - \frac{4(\pi-2)n^2}{c\pi^2(m-1)}, \qquad (3.6)$$
and
$$\Theta(n;\theta) \le 2\bigg\lceil\frac{1}{2} + \frac{2(\pi-2)}{c\pi^2(1-\theta)}n^2\bigg\rceil, \ \forall n \in \mathbb{N}, \qquad n^{-2}\Theta(n;\theta) \le \frac{4}{c\pi^2(1-\theta)} + \mathcal{O}(n^{-2}), \ n \to \infty.$$
In particular, we retain the scaling $m = \mathcal{O}(n^2)$, regardless of the size of the interval K.
3.2 Choice of polynomial basis
The results proved in this section are independent of the polynomial basis used for implementation. In selecting such a basis, there are two questions which must be resolved. First, how stable is the resultant method, and second, how can the entries of the matrix U (as defined in Section 2.3) be computed? A straightforward choice is the orthonormal basis of Legendre polynomials (3.1). In this case, $\tilde{A} = I$, where $\tilde{A}$ is the Gram matrix for $\{\phi_0,\ldots,\phi_{n-1}\}$, making the method well-conditioned (Lemma 2.11). Moreover, the entries of U are known explicitly via (3.3).

Having said this, there is also interest in reconstructing in other polynomial bases. In many circumstances it may be advantageous to have an approximation $f_{n,m}$ that is easily manipulable. In this sense, an approximant composed of Legendre polynomials is not as convenient as one consisting of Chebyshev polynomials (of the first or second kind); the latter being easy to manipulate with the Fast Fourier Transform (FFT). To this end, the purpose of this section is to detail the implementation of this method in terms of general Gegenbauer polynomials.

Gegenbauer polynomials arise as orthogonal polynomials with respect to the inner product
$$\langle f,g\rangle_{\lambda} = \int_{-1}^{1}f(x)g(x)(1 - x^2)^{\lambda - \frac{1}{2}}\,dx, \quad \lambda > -\frac{1}{2}.$$
For given $\lambda$, we denote the jth such polynomial by $C_j^{\lambda} \in \mathbb{P}_j$. Important special cases are the Legendre polynomials ($\lambda = \frac{1}{2}$), and Chebyshev polynomials of the first ($\lambda = 0$) and second ($\lambda = 1$) kinds. By convention [10] (see also [42]), each polynomial $C_j^{\lambda}$ is normalised so that
$$C_j^{\lambda}(1) = \frac{\Gamma(j + 2\lambda)}{j!\,\Gamma(2\lambda)}, \qquad (3.7)$$
where $\Gamma$ is the Gamma function, in which case it is known that (see [10, p.174])
$$\|C_j^{\lambda}\|_{\lambda}^2 = \frac{\sqrt{\pi}\,\Gamma(j + 2\lambda)\Gamma(\lambda + \frac{1}{2})}{j!\,\Gamma(2\lambda)\Gamma(\lambda)(j + \lambda)}, \qquad (3.8)$$
where $\|f\|_{\lambda} = \sqrt{\langle f,f\rangle_{\lambda}}$. With this to hand, we now define
$$\phi_j = \frac{1}{\|C_j^{\lambda}\|_{\lambda}}C_j^{\lambda}, \quad j = 0,1,2,\ldots, \qquad (3.9)$$
and seek to reconstruct f in this basis.
Our first task is to compute the entries of the matrix U. For this, we need to compute integrals of the form
$$I_k(z) = \int_{-1}^{1}C_k^{\lambda}(x)e^{izx}\,dx, \quad k = 0,1,2,\ldots,$$
where $z \in \mathbb{R}$. Fortunately, such integrals obey a simple recurrence relation:

Lemma 3.4. For $z \ne 0$, the integrals $I_k(z)$ satisfy
$$I_0(z) = 2C_0^{\lambda}(1)\frac{\sin z}{z}, \quad I_1(z) = 2iC_1^{\lambda}(1)\frac{\sin z - z\cos z}{z^2},$$
$$I_{k+1}(z) = \frac{2i(k+\lambda)}{z}I_k(z) + I_{k-1}(z) - i\,\frac{e^{iz} + (-1)^k e^{-iz}}{z}\big[C_{k+1}^{\lambda}(1) - C_{k-1}^{\lambda}(1)\big], \quad k = 1,2,\ldots.$$
When $z = 0$, we have
$$I_0(0) = 2C_0^{\lambda}(1), \quad I_k(0) = \frac{1 + (-1)^k}{2(k+\lambda)}\big[C_{k+1}^{\lambda}(1) - C_{k-1}^{\lambda}(1)\big], \quad k = 1,2,\ldots.$$

Proof. Recall the identity (see [10, p.176])
$$C_j^{\lambda}(x) = \frac{1}{2(j+\lambda)}\frac{d}{dx}\big[C_{j+1}^{\lambda}(x) - C_{j-1}^{\lambda}(x)\big], \quad j = 1,2,\ldots.$$
Substituting this into the expression for $I_k(z)$ and integrating by parts gives
$$I_k(z) = \frac{1}{2(k+\lambda)}\Big[\big(C_{k+1}^{\lambda}(x) - C_{k-1}^{\lambda}(x)\big)e^{izx}\Big]_{x=-1}^{1} - \frac{iz}{2(k+\lambda)}\big[I_{k+1}(z) - I_{k-1}(z)\big].$$
Rearranging now yields the general recurrence for $k \ge 1$. For $k = 0,1$, we merely note that $C_0^{\lambda}(x) = C_0^{\lambda}(1)$, $C_1^{\lambda}(x) = C_1^{\lambda}(1)x$ and that
$$\int_{-1}^{1}e^{izx}\,dx = 2\,\frac{\sin z}{z}, \quad \int_{-1}^{1}xe^{izx}\,dx = 2i\,\frac{\sin z - z\cos z}{z^2}.$$
The result for $z = 0$ is derived in a similar manner.
Using this recurrence formula, the matrix U can be formed in $\mathcal{O}(mn)$ operations.
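A direct transcription of this recursion might look as follows. This is a sketch only: it assumes $\lambda > 0$ and $z \neq 0$, computes the endpoint values $C_k^{\lambda}(1)$ from the ratio $C_k^{\lambda}(1)/C_{k-1}^{\lambda}(1) = (k - 1 + 2\lambda)/k$ implied by (3.7), and inherits the instability for $k > |z|$ discussed in Remark 3.2 below.

```python
import numpy as np

def gegenbauer_integrals(n, z, lam):
    """I_k(z) = int_{-1}^{1} C_k^lam(x) e^{izx} dx, k = 0, ..., n-1, via the
    forward recurrence of Lemma 3.4."""
    C1 = np.empty(n + 1)                 # C1[k] = C_k^lam(1)
    C1[0] = 1.0
    for k in range(1, n + 1):
        C1[k] = C1[k - 1] * (k - 1 + 2 * lam) / k
    I = np.empty(n, dtype=complex)
    I[0] = 2 * C1[0] * np.sin(z) / z
    if n > 1:
        I[1] = 2j * C1[1] * (np.sin(z) - z * np.cos(z)) / z**2
    for k in range(1, n - 1):
        boundary = (np.exp(1j * z) + (-1) ** k * np.exp(-1j * z)) / z
        I[k + 1] = (2j * (k + lam) / z) * I[k] + I[k - 1] \
                   - 1j * boundary * (C1[k + 1] - C1[k - 1])
    return I
```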
Remark 3.2 In the case of Chebyshev polynomials, this iteration is well known. However, as discussed
in detail in [22], this iteration is only stable for parameter values k ≤ |z|. Fortunately, this iteration can
be replaced by a two-phase algorithm in order to determine those integrals Ik (z) with k > |z|. This
hybrid algorithm has been shown to be stable, whilst at the same time maintaining the overall cost [22].
It is likely that a variant of this algorithm could also be used for arbitrary Gegenbauer polynomials.
Such considerations aside, we now turn our attention to the condition number of $\tilde{A}$:

Theorem 3.5. Let $\tilde{A}$ be the Gram matrix for the vectors $\{\phi_0,\ldots,\phi_{n-1}\}$, where $\phi_j$ is given by (3.9). Then $\kappa(\tilde{A}) = \mathcal{O}(n^{|2\lambda - 1|})$ as $n \to \infty$. In particular, whenever $\phi_0,\ldots,\phi_{n-1}$ arise from Chebyshev polynomials (of the first or second kinds), then $\kappa(\tilde{A}) = \mathcal{O}(n)$.

To prove this theorem, we first require the following two lemmas. For convenience, we will write $L^2_{\lambda}(-1,1)$, $\lambda > -\frac{1}{2}$, for the space of square-integrable functions with respect to the Gegenbauer weight function $(1 - x^2)^{\lambda - \frac{1}{2}}$.
Lemma 3.6. Suppose that $-\frac{1}{2} < \lambda < \frac{1}{2}$. Then, for all $g \in L^{\infty}(-1,1)$, we have $\|g\| \le \|g\|_{\lambda}$ and
$$\|g\|_{\lambda} \le c_{\lambda}\|g\|^{\lambda + \frac{1}{2}}\|g\|_{\infty}^{\frac{1}{2} - \lambda}, \qquad (3.10)$$
for some $c_{\lambda} > 0$ independent of g. Conversely, if $\lambda \ge \frac{1}{2}$ then, for all $g \in L^{\infty}(-1,1)$, $\|g\|_{\lambda} \le \|g\|$ and
$$\|g\| \le c_{\lambda}\|g\|_{\lambda}^{\frac{1}{\lambda + \frac{1}{2}}}\|g\|_{\infty}^{\frac{\lambda - \frac{1}{2}}{\lambda + \frac{1}{2}}}. \qquad (3.11)$$

Proof. Suppose first that $-\frac{1}{2} < \lambda < \frac{1}{2}$. Trivially, $\|g\| \le \|g\|_{\lambda}$. Now consider the other inequality. For any $0 < \epsilon < 1$, we have
$$\|g\|_{\lambda}^2 = \int_{-1}^{1}|g(x)|^2(1 - x^2)^{\lambda - \frac{1}{2}}\,dx = \int_{|x|\le 1-\epsilon}|g(x)|^2(1 - x^2)^{\lambda - \frac{1}{2}}\,dx + \int_{1-\epsilon<|x|\le 1}|g(x)|^2(1 - x^2)^{\lambda - \frac{1}{2}}\,dx$$
$$\le \big(1 - (1 - \epsilon)^2\big)^{\lambda - \frac{1}{2}}\|g\|^2 + 2\|g\|_{\infty}^2\int_{1-\epsilon}^{1}(1 - x^2)^{\lambda - \frac{1}{2}}\,dx,$$
where $\|\cdot\|_{\infty}$ is the uniform norm on $[-1,1]$. Note that $(1 - (1 - y)^2)^{\lambda - \frac{1}{2}} < y^{\lambda - \frac{1}{2}}$, $\forall y \in (0,1)$. It follows that
$$\|g\|_{\lambda}^2 \le \epsilon^{\lambda - \frac{1}{2}}\|g\|^2 + \frac{2}{\lambda + \frac{1}{2}}\epsilon^{\lambda + \frac{1}{2}}\|g\|_{\infty}^2, \quad 0 < \epsilon < 1.$$
Let $c > 2$ be arbitrary. Then $\|g\|^2 < c\|g\|_{\infty}^2$, so we may let $\epsilon = \frac{\|g\|^2}{c\|g\|_{\infty}^2} < 1$. Substituting this into the previous expression immediately gives (3.10).

Now suppose that $\lambda > \frac{1}{2}$. Once more, trivial arguments give that $\|g\|_{\lambda} \le \|g\|$. For the other inequality, we proceed in a similar manner. We have $\|g\|^2 \le \epsilon^{\frac{1}{2} - \lambda}\|g\|_{\lambda}^2 + 2\epsilon\|g\|_{\infty}^2$. For $c > 2$ we now set $\epsilon = \big(\frac{\|g\|_{\lambda}}{c\|g\|_{\infty}}\big)^{\frac{2}{\lambda + \frac{1}{2}}}$, which gives (3.11).
Lemma 3.7. Let $\lambda \ge \frac{1}{2}$. Then, for all $\phi \in \mathbb{P}_{n-1}$, $\|\phi\|_{\infty} \le k_{n,\lambda}\|\phi\|_{\lambda}$, where $k_{n,\lambda}$ depends only on n and $\lambda$ and satisfies $k_{n,\lambda} = \mathcal{O}(n^{\lambda + \frac{1}{2}})$ as $n \to \infty$.

Proof. Let $\{\phi_0,\ldots,\phi_{n-1}\}$ be given by (3.9), and write an arbitrary $\phi \in \mathbb{P}_{n-1}$ as $\phi = \sum_{j=0}^{n-1}a_j\phi_j$, where $\sum_{j=0}^{n-1}|a_j|^2 = \|\phi\|_{\lambda}^2$. By the Cauchy–Schwarz inequality,
$$\|\phi\|_{\infty}^2 \le \|\phi\|_{\lambda}^2\sum_{j=0}^{n-1}\|\phi_j\|_{\infty}^2 = k_{n,\lambda}^2\|\phi\|_{\lambda}^2.$$
We now wish to estimate $\|\phi_j\|_{\infty}$. Recall that $\|C_j^{\lambda}\|_{\infty} = C_j^{\lambda}(1)$ [10, p.206]. Hence, by (3.7) and (3.8), we have
$$\|\phi_j\|_{\infty}^2 = \frac{\Gamma(j + 2\lambda)\Gamma(\lambda)(j + \lambda)}{j!\,\sqrt{\pi}\,\Gamma(2\lambda)\Gamma(\lambda + \frac{1}{2})}.$$
Consider the ratio $\frac{\Gamma(j + 2\lambda)}{j!}$. By Stirling's formula,
$$\frac{\Gamma(j + 2\lambda)}{j!} = \mathcal{O}(j^{2\lambda - 1}), \quad j \to \infty.$$
Hence $\|\phi_j\|_{\infty}^2 = \mathcal{O}(j^{2\lambda})$, which gives $k_{n,\lambda}^2 = \mathcal{O}(n^{2\lambda + 1})$, as required.
Proof of Theorem 3.5. Since $\tilde{A}$ is Hermitian and positive definite, its condition number is the ratio of its maximum and minimum eigenvalues. By a simple argument, we find that
$$\lambda_{\max}(\tilde{A}) = \sup_{\substack{\phi \in \mathbb{P}_{n-1} \\ \phi \ne 0}}\frac{\|\phi\|^2}{\|\phi\|_{\lambda}^2}, \quad \lambda_{\min}(\tilde{A}) = \inf_{\substack{\phi \in \mathbb{P}_{n-1} \\ \phi \ne 0}}\frac{\|\phi\|^2}{\|\phi\|_{\lambda}^2}.$$
Figure 3: Error in approximating $f(x) = e^{-x}\cos 4x$ by $f_{n,m}(x)$ for $n = 1,\ldots,40$. Left: log errors $\log_{10}\|f - f_{n,m}\|_{\infty}$ (squares) and $\log_{10}\|f - f_{n,m}\|$ (circles) against n. Right: log error against $m = 0.2n^2$.
Consider the case $\lambda > \frac{1}{2}$. By Lemma 3.6, we have $\lambda_{\min}(\tilde{A}) \ge 1$ and
$$\lambda_{\max}(\tilde{A}) \le c_{\lambda}^2\sup_{\substack{\phi \in \mathbb{P}_{n-1} \\ \phi \ne 0}}\bigg(\frac{\|\phi\|_{\infty}}{\|\phi\|_{\lambda}}\bigg)^{2\frac{\lambda - \frac{1}{2}}{\lambda + \frac{1}{2}}}.$$
Using Lemma 3.7, we deduce that $\lambda_{\max}(\tilde{A}) = \mathcal{O}(n^{2\lambda - 1})$, as required. For the case $-\frac{1}{2} < \lambda < \frac{1}{2}$, we proceed in a similar manner.
This theorem confirms that the method can be implemented using Chebyshev polynomials whilst
incurring only a mild growth in the condition number. Numerical results confirm the sharpness of the
O (n) estimate for κ(Ã) in this case. It follows that, if conjugate gradients are used to compute the
3
approximation, the total computational cost of forming fn,m is O(mn 2 ), as opposed to O (mn) in the
Legendre polynomial case. In the next section we present several examples of this implementation.
Remark 3.3 Whilst Theorem 3.5 provides an asymptotic estimate for $\kappa(\tilde{A})$ (and hence $\kappa(A)$), it may also be useful to derive global bounds. With effort, one could obtain versions of Lemmas 3.6 and 3.7 involving explicit bounds. For the sake of brevity, we shall not do this. However, whenever Chebyshev polynomials are used (arguably the most important case), it is possible to show that
$$\kappa(\tilde{A}) \le 2\sqrt{2}\,n, \qquad \kappa(\tilde{A}) \le 3^{\frac{1}{3}} \pi^{-\frac{2}{3}} \big[n(n+\tfrac{1}{2})(n+1)\big]^{\frac{1}{3}},$$
in the first and second kind cases respectively.
Observe that this theorem, in combination with Theorems 2.4 and 3.2 and the arguments of this section, establishes one of the main results of this paper: namely, Theorem 1.1.
3.3 Numerical examples
We now present several numerical examples of this method. All examples employ the value $m = 0.2n^2$, and the first series of examples considers the implementation using Legendre polynomials. In Figure 3 we consider the function $f(x) = e^{-x}\cos 4x$. Since $f$ is analytic in this case, we witness exponential convergence in terms of $n$ and root-exponential convergence in terms of $m$. Note the effectiveness of the method: using fewer than 100 Fourier coefficients, we obtain an approximation with 13 digits of accuracy.
As indicated by Theorem 2.4, the approximation fn,m is quasi-optimal. To highlight this feature of
the method, Figure 4 displays both the error in approximating f by fn,m and the best approximation
Qn f . Note the very close correspondence of the two graphs.
The function in Figures 3 and 4 is, in fact, entire. Hence, the approximation $f_{n,m}$ converges supergeometrically in $n$ (as seen in Figure 3). For a meromorphic function, with complex singularities lying outside $[-1,1]$, the convergence rate is truly exponential at some rate $\rho$. This is demonstrated in Figure 5, the approximated function being $f(x) = \frac{1}{1+x^2}$. Note that, despite the poles at $x = \pm i$, the approximation $f_{n,m}$ still obtains 13 digits of accuracy using only 250 Fourier coefficients.
Figure 4: Error in approximating $f(x) = e^{-x}\cos 4x$ by $f_{n,m}(x)$ (squares) and $Q_n f(x)$ (circles) for $n = 1,\ldots,20$. Left: log uniform error. Right: log $L^2$ error.
Figure 5: Error in approximating $f(x) = \frac{1}{1+x^2}$ by $f_{n,m}(x)$ for $n = 1,\ldots,80$. Left: log error $\log_{10}\|f - f_{n,m}\|_\infty$ (squares) and $\log_{10}\|f - f_{n,m}\|$ (circles) against $n$. Right: log error against $m = 0.2n^2$.
Next we consider reconstructions in other polynomial bases. In Table 1 we give the error in approximating the function $f(x) = e^{-x}\cos 4x$ with Chebyshev polynomials of the first and second kinds. Note that the resulting uniform error is virtually identical to that of the Legendre polynomial implementation. Since all three implementations compute exactly the same approximation $f_{n,m}$, up to numerical error, this comes as no surprise. Moreover, as evidenced by Table 2, the price paid is only a mild growth in the condition number $\kappa(A)$.
3.4 Connections to earlier work
Rather than choosing $m$ such that $C_{n,m} \ge \theta$, it may appear advantageous to find the minimum $m$ such that $C_{n,m} > 0$: in other words, the smallest $m$ such that $f_{n,m}$ is guaranteed to exist. Letting $\theta = 0$ in Theorems 3.2 and 3.3, we immediately obtain a sufficient condition of the form $m \ge cn^2$, for some $c > 0$. However, this result is far too pessimistic: it is known that reconstruction is always possible provided $m \ge n$ [47]. For this reason, it may appear favourable to reconstruct using $m = n$. This results in a technique known as the inverse polynomial reconstruction method [52, 53]. Unfortunately, this approach is extremely unstable. The linear system has geometrically large condition number, making the procedure extremely sensitive to both noise and round-off error. Moreover, a continuous analogue of the Runge phenomenon occurs. In general, the approximation $f_{n,m}$ only converges to $f$ if the geometric decay of $\|f - Q_n f\|$ is faster than the geometric growth of $\|A^{-1}\|$, meaning that only functions analytic in sufficiently large complex regions can be approximated by this procedure (as discussed in detail in [4], this behaviour can be understood in terms of the operator-theoretic properties of finite sections of certain non-Hermitian infinite matrices). On the other hand, by allowing $m$ to range independently of $n$, we overcome all these difficulties, and obtain a stable method whose convergence is completely determined by the convergence of $Q_n f$ to $f$.
n      5        10       15       20        25        30        35        40
(a)    1.45e0   1.85e-3  3.03e-7  2.53e-12  1.06e-14  8.42e-14  4.06e-14  5.31e-14
(b)    1.45e0   1.85e-3  3.03e-7  2.53e-12  3.51e-14  1.16e-13  4.57e-14  7.70e-14
(c)    1.45e0   1.85e-3  3.03e-7  2.49e-12  6.76e-14  7.33e-14  6.40e-14  5.15e-14

Table 1: Comparison of the error $\|f - f_{n,m}\|_\infty$ with $m = 0.2n^2$, where $f_{n,m}$ is formed from (a) Legendre polynomials and Chebyshev polynomials of the (b) first and (c) second kinds.
n      5      10     15     20     25     30      35      40
(a)    3.57   5.55   4.21   5.20   4.40   5.06    4.50    6.77
(b)    13.74  49.99  52.63  91.89  92.89  133.02  133.49  191.19
(c)    3.90   5.67   7.25   9.33   11.91  13.96   16.56   18.92

Table 2: Comparison of the condition number $\kappa(A)$ with $m = 0.2n^2$, where $A$ is formed from (a) Legendre polynomials and Chebyshev polynomials of the (b) first and (c) second kinds.
The specific instance of Legendre polynomial reconstructions from Fourier samples using $m > n$ has also been considered in [47]. Therein, the estimate $m = O(n^2)$ was derived, along with bounds for the error. Naturally, this problem is just one specific example of our general framework. However, within this context, our work improves and extends the results of [47] in the following ways:
1. Reconstruction is completely independent of the particular polynomial basis used. In particular, the estimates for $\Theta(n;\theta)$ and $\|f - f_{n,m}\|$ are determined only by the space $T_n$ and the vectors $\{\psi_j\}_{j\in\mathbb{Z}}$. This allows for analysis of reconstructions in arbitrary polynomial bases, not just the Legendre polynomials used in [47].
2. The estimates for $\Theta(n;\theta)$ in Theorems 3.2 and 3.3 improve those given in [47]. In particular, it was shown in [47, Theorem 4.2] that
$$C_{n,\alpha n^2} \ge 1 - \frac{8}{\pi}\arcsin\frac{1}{\pi\alpha}, \qquad \forall n \in \mathbb{N}, \quad \alpha \ge 1, \qquad (3.12)$$
(our constant $C_{n,m}$ corresponds to the quantity $\sigma_{n,m}^2$ in [47]). Conversely, Lemma 3.1 leads to the improved bounds
$$C_{n,\alpha n^2} \ge 1 - \frac{4(\pi-2)}{\pi^2(\alpha - n^{-2})}, \qquad \forall n \ge \max\left\{\sqrt{\frac{2}{\pi\alpha}},\,\frac{2}{\alpha}\right\}, \quad \alpha > 0, \qquad (3.13)$$
and
$$C_{n,\alpha n^2} \ge 1 - \frac{4}{\pi^2\alpha} + O\big(n^{-2}\big), \qquad n \to \infty, \quad \alpha > 0. \qquad (3.14)$$
Not only are these bounds sharper, they also hold for a greater range of $\alpha$, thus permitting reconstruction with $m = \alpha n^2$ for any $\alpha > 0$, as opposed to just $\alpha \ge 1$. This leads to savings in computational cost, and, in cases where $m$ is fixed, allows larger values of $n$ to be used, thereby increasing accuracy. To illustrate this improvement, note that (3.12) gives the estimate $C_{n,m} \ge 0.175$ when $m = n^2$. Conversely, our estimate (3.13) yields the improved bound $0.383$ for $n \ge 2$, and (3.14) gives the asymptotic bound $0.595$. To compare, direct computation of $C_{n,n^2}$ indicates that $C_{n,n^2} \ge 0.68$ for all $n$, and $C_{n,n^2} \to 0.8$ as $n \to \infty$.
3. Piecewise analytic functions and functions of arbitrary numbers of variables can be recovered in an analogous fashion, with similar analysis (see Sections 3.5 and 4 respectively).
3.5 Reconstruction of piecewise analytic functions
Naturally, whenever the approximated function is not analytic, the convergence rate of the polynomial approximant $f_{n,m}$ to $f$ is not exponential. For example, consider the function
$$f(x) = \begin{cases} (2e^{2\pi(x+1)} - 1 - e^\pi)(e^\pi - 1)^{-1} & x \in [-1, -\frac{1}{2}), \\ -\sin\big(\frac{2\pi x}{3} + \frac{\pi}{3}\big) & x \in [-\frac{1}{2}, 1]. \end{cases} \qquad (3.15)$$
This function was put forth in [62] to test algorithms for overcoming the Gibbs phenomenon. Aside from the discontinuity, its sharp peak makes it a challenging function to reconstruct accurately. Since this function is discontinuous, we expect only low-order, algebraic convergence of $f_{n,m}$ in the $L^2$ norm, and no uniform convergence, an observation confirmed in Figure 6.

Figure 6: Error in approximating the function (3.15) by $f_{n,m}(x)$ for $n = 1,\ldots,80$. Left: the function $f(x)$. Right: log error $\log_{10}\|f - f_{n,m}\|_\infty$ (squares) and $\log_{10}\|f - f_{n,m}\|$ (circles) against $m = 0.2n^2$.
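For reference, the test function (3.15) is straightforward to transcribe; the following sketch (ours) evaluates it with NumPy.

import numpy as np

def f_315(x):
    """The piecewise analytic test function (3.15)."""
    x = np.asarray(x, dtype=float)
    left = (2*np.exp(2*np.pi*(x + 1)) - 1 - np.exp(np.pi)) / (np.exp(np.pi) - 1)
    right = -np.sin(2*np.pi*x/3 + np.pi/3)
    return np.where(x < -0.5, left, right)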
However, by reconstructing this function in a polynomial basis, we are not exploiting the known information about $f$: namely, the jump discontinuity at $x = -\frac{1}{2}$. The general procedure set out in Section 2 allows us to use such information in designing a reconstruction basis. Naturally, since $f$ is analytic in the subintervals $[-1,-\frac{1}{2}]$ and $[-\frac{1}{2},1]$, a better choice is to reconstruct $f$ in a piecewise polynomial basis. The aim of this section is to describe this procedure.
Seeking generality, suppose that $f : [-1,1] \to \mathbb{R}$ is piecewise analytic with jump discontinuities at $-1 < x_1 < \ldots < x_l < 1$. Let $x_0 = -1$ and $x_{l+1} = 1$. We assume that $f$ has been sampled via $\hat{f}_j = \langle f, \psi_j\rangle$, $j = 1,\ldots,m$, where $\langle\cdot,\cdot\rangle$ is the Euclidean inner product on $L^2(-1,1)$. In examples, these will be the Fourier samples of $f$, but the construction described below holds for arbitrary sampling bases consisting of functions defined on $[-1,1]$.

Throughout we shall assume that the discontinuity locations $x_1,\ldots,x_l$ are known exactly. That is, we focus on reconstruction. Naturally, a fully automated algorithm must also incorporate a scheme for singularity detection. There are numerous methods for this problem; we refer the reader to [34, 61] for further details.
Given the additional information about the location of the singularities of $f$, we now design a reconstruction basis to mirror this feature. We shall construct such a basis via local co-ordinate mappings. To this end, let $I_r = [x_r, x_{r+1}]$, $c_r = \frac{1}{2}(x_{r+1} - x_r)$ and define $\Lambda_r(x) = \frac{x - x_r}{c_r} - 1$, so that $\Lambda_r(I_r) = [-1,1]$.

Suppose now that $\widehat{T}_n$ is a space of functions defined on $[-1,1]$ (e.g. the polynomial space $P_{n-1}$). By convention, we assume that each $\phi \in \widehat{T}_n$ is extended by zero to the whole real line, i.e. $\phi(x) = 0$ for $x \in \mathbb{R}\setminus[-1,1]$. Let $T_{n,r}$ be the space of functions defined on $I_r$, given by $T_{n,r} = \{\phi \circ \Lambda_r : \phi \in \widehat{T}_n\}$. We now define the new reconstruction space in the obvious manner:
$$T_n = \{\phi : \phi|_{I_r} \in T_{n_r,r},\ r = 0,\ldots,l\}, \qquad n = \sum_{r=0}^{l} n_r,$$
and seek an approximation $f_{n,m} \in T_n$ to $f$ defined by (2.6). Suppose now that $\{\phi_1,\ldots,\phi_n\}$ is a collection of linearly independent reconstruction functions with $\widehat{T}_n = \mathrm{span}\{\phi_1,\ldots,\phi_n\}$. We construct a basis for $T_n$ by scaling. To this end, we let $\phi_{r,j} = \frac{1}{\sqrt{c_r}}\,\phi_j \circ \Lambda_r$, and notice that $T_n = \mathrm{span}\{\phi_{r,j} : j = 1,\ldots,n_r,\ r = 0,\ldots,l\}$. Note that, if the $\{\phi_j\}$ are orthonormal, then so are the $\{\phi_{r,j}\}$. With this basis in hand, the approximation $f_{n,m}$ is now given by
$$f_{n,m} = \sum_{r=0}^{l} \sum_{j=1}^{n_r} \alpha_{r,j}\,\phi_{r,j},$$
where the coefficients $\alpha_{r,j}$ are determined by the aforementioned equations. As before, this is equivalent to the least squares problem $U\alpha \approx \hat{f}$ with block matrix $U = [U_0,\ldots,U_l]$, where $U_r$ is the $m \times n_r$ matrix with $(j,k)$th entry
$$\langle \phi_{r,k}, \psi_j\rangle = \frac{1}{\sqrt{c_r}} \int_{x_r}^{x_{r+1}} \phi_k(\Lambda_r(x))\,\overline{\psi_j(x)}\,dx.$$
Here $\hat{f} = (\hat{f}_1,\ldots,\hat{f}_m)^\top$, $\alpha = [\alpha_0,\ldots,\alpha_l]$ and $\alpha_r = (\alpha_{r,1},\ldots,\alpha_{r,n_r})^\top$.
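As an illustration of this least squares formulation, here is a small self-contained sketch (ours; the helper names are hypothetical, and for simplicity the entries are computed by brute-force quadrature rather than the closed-form expressions derived later) for the Fourier sampling basis $\psi_j(x) = \frac{1}{\sqrt{2}}e^{ij\pi x}$ and scaled Legendre reconstruction functions.

import numpy as np
from scipy.integrate import quad
from scipy.special import eval_legendre

def assemble_U(breakpoints, degrees, modes):
    """U[j, :] = <phi_{r,k}, psi_j>, computed subinterval by subinterval."""
    cols = []
    for r, n_r in enumerate(degrees):
        x0, x1 = breakpoints[r], breakpoints[r + 1]
        c = 0.5*(x1 - x0)
        for k in range(n_r):
            # orthonormal scaled Legendre element on [x0, x1]
            phi = lambda x: np.sqrt((k + 0.5)/c)*eval_legendre(k, (x - x0)/c - 1)
            col = [(quad(lambda x: phi(x)*np.cos(j*np.pi*x), x0, x1)[0]
                    - 1j*quad(lambda x: phi(x)*np.sin(j*np.pi*x), x0, x1)[0])
                   / np.sqrt(2) for j in modes]
            cols.append(col)
    return np.array(cols).T            # m x n block matrix [U_0, ..., U_l]

# Example: alpha = np.linalg.lstsq(U, fhat, rcond=None)[0] yields the
# least squares coefficients alpha_{r,j}.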
Naturally, estimation of the quantity Θ(n; θ) is vital. The following lemma aids in this task:
Lemma 3.8. The constant $C_{n,m}$ satisfies
$$C_{n,m} \ge 1 - \sum_{r=0}^{l} (1 - C_{n_r,m,r}),$$
where $C_{n_r,m,r} = \inf_{\phi \in T_{n_r,r},\,\|\phi\|=1} \langle P_m\phi, \phi\rangle$.
Proof. For $\phi \in T_n$, denote $\phi|_{I_r}$ by $\phi^{[r]}$. Assume that $\phi^{[r]}$ is extended to $[-1,1]$ by zero, so that $\phi = \sum_{r=0}^{l} \phi^{[r]}$. Since $\phi^{[r]} \perp \phi^{[s]}$ for $r \neq s$, it follows that
$$1 - C_{n,m} = \sup\left\{ \frac{\sum_{r=0}^{l} \langle \phi^{[r]} - P_m\phi^{[r]}, \phi^{[r]}\rangle}{\sum_{r=0}^{l} \|\phi^{[r]}\|^2} : \phi^{[r]} \in T_{n_r,r},\ r = 0,\ldots,l,\ \sum_{r=0}^{l}\|\phi^{[r]}\|^2 \neq 0 \right\}.$$
Note that, for $a_r \ge 0$ and $b_r > 0$, $r = 0,\ldots,l$, the inequality
$$\sum_{r=0}^{l} a_r \le \sum_{r=0}^{l} \frac{a_r}{b_r}\,\sum_{r=0}^{l} b_r$$
holds. Setting $a_r = \langle \phi^{[r]} - P_m\phi^{[r]}, \phi^{[r]}\rangle$ and $b_r = \|\phi^{[r]}\|^2$ and using this inequality gives
$$1 - C_{n,m} \le \sup\left\{ \sum_{r=0}^{l} \frac{\langle \phi^{[r]} - P_m\phi^{[r]}, \phi^{[r]}\rangle}{\|\phi^{[r]}\|^2} : \phi^{[r]} \in T_{n_r,r},\ r = 0,\ldots,l,\ \|\phi^{[r]}\| \neq 0 \right\} \le \sum_{r=0}^{l} \sup\left\{ \frac{\langle \phi - P_m\phi, \phi\rangle}{\|\phi\|^2} : \phi \in T_{n_r,r},\ \|\phi\| \neq 0 \right\},$$
and this is precisely $\sum_{r=0}^{l} (1 - C_{n_r,m,r})$.
Let us now focus on piecewise polynomial reconstructions from Fourier samples, in which case
$$T_n = \{\phi : \phi|_{I_r} \in P_{n_r-1},\ r = 0,\ldots,l\} \qquad (3.16)$$
is the space of piecewise polynomials of total degree $n$. Regarding the rate of convergence of the resulting approximation $f_{n,m}$, it is a simple exercise to confirm that
$$\|f - f_{n,m}\| \le c(\theta)\,c_f \sum_{r=0}^{l} \sqrt{n_r}\,\rho_r^{-n_r},$$
where $c(\theta)$ is defined in (2.22), $c_f$ is a constant depending on $f$ only and $\rho_r$ is determined by the largest Bernstein ellipse (appropriately scaled) within which the function $f|_{I_r}$ is analytic. Hence, we expect exponential convergence of $f_{n,m}$ to $f$. The main question remaining is that of estimating the function $\Theta(n;\theta)$ for this reconstruction procedure. For this, we have the following result, which extends Theorems 3.2 and 3.3 to this more general case:
Theorem 3.9. Let $\{\psi_j\}_{j\in\mathbb{Z}}$ be the Fourier basis and $T_n$ be given by (3.16). Then the function $\Theta(n;\theta)$ satisfies
$$\Theta(n;\theta) \le 2\left\lceil \frac{1}{2} + \frac{2(\pi-2)}{\pi^2(1-\theta)} \sum_{r=0}^{l} \frac{n_r^2}{c_r} \right\rceil, \qquad \forall n = \sum_{r=0}^{l} n_r, \quad n_0,\ldots,n_l \in \mathbb{N},$$
and
$$\Theta(n;\theta) \le \frac{4}{\pi^2(1-\theta)} \sum_{r=0}^{l} \frac{n_r^2}{c_r} + O(1), \qquad n_0,\ldots,n_l \to \infty.$$
m       10     20    40    80    160   320   640   1280
Cn,m    0.05   0.34  0.33  0.44  0.44  0.47  0.49  0.50
κ(A)    19.88  2.92  3.06  2.27  2.27  2.11  2.03  1.98

Table 3: The quantities $C_{n,m}$ and $\kappa(A)$ against $m$, where $n$ is as in Figure 7.
Proof. In view of Lemma 3.8, it suffices to consider $C_{n_r,m,r}$. To this end, let $J = [\alpha,\beta] \subseteq [-1,1]$, let $T_{n,J}$ be the space of functions $\phi$ with $\mathrm{supp}(\phi) \subseteq J$ and $\phi|_J \in P_{n-1}$, and define
$$C^J_{n,m} = \inf_{\phi \in T_{n,J},\,\|\phi\|=1} \langle P_m\phi, \phi\rangle.$$
Let $\Lambda(x) = \frac{x-\alpha}{c} - 1$, where $c = \frac{1}{2}(\beta - \alpha)$, and write $\phi = \Phi \circ \Lambda$, where $\mathrm{supp}(\Phi) \subseteq [-1,1]$. Consider the quantity $\langle \phi, \psi_j\rangle$. By definition of $\psi_j$, we have
$$\langle \phi, \psi_j\rangle = \frac{1}{\sqrt{2}} \int_{-1}^{1} \phi(x)\,e^{-ij\pi x}\,dx = \frac{c}{\sqrt{2}}\,e^{-ij\pi(\alpha+c)} \int_{\Lambda(-1)}^{\Lambda(1)} \Phi(y)\,e^{-ij\pi c y}\,dy.$$
Let $K = [\Lambda(-1), \Lambda(1)] = \Lambda([-1,1]) \supseteq [-1,1]$ and let $P_{m,K}$ be the Fourier projection operator based on the interval $K$. It now follows that
$$C^J_{n,m} = \inf\left\{ \langle P_{m,K}\Phi, \Phi\rangle : \mathrm{supp}(\Phi) \subseteq [-1,1],\ \Phi|_{[-1,1]} \in P_{n-1},\ \|\Phi\| = 1 \right\}.$$
This is now precisely the setup of Remark 3.1. Using (3.6), we therefore deduce that
$$1 - C^J_{n,m} \le \frac{4(\pi-2)n^2}{c\pi^2(2\lceil \frac{m}{2}\rceil - 1)}.$$
Letting $J = I_r$, $c = c_r$ and using Lemma 3.8, we now obtain
$$C_{n,m} \ge 1 - \frac{4(\pi-2)}{\pi^2(2\lceil \frac{m}{2}\rceil - 1)} \sum_{r=0}^{l} \frac{n_r^2}{c_r}, \qquad (3.17)$$
from which the result follows immediately.
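In practice, $C_{n,m}$ itself is simple to compute: when both the sampling and reconstruction bases are orthonormal (as for Fourier samples and orthonormal piecewise Legendre functions), one has $\langle P_m\phi,\phi\rangle = \|Ua\|^2$ for $\phi$ with coefficient vector $a$, so $C_{n,m} = \sigma_{\min}(U)^2$ (cf. Lemma 2.13). A one-line sketch of ours:

import numpy as np

def C_nm(U):
    """C_{n,m} = sigma_min(U)^2 for orthonormal sampling/reconstruction bases."""
    return np.linalg.svd(U, compute_uv=False)[-1]**2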
To implement this scheme, it is necessary to compute the values (3.5). By changing variables, it is easily seen that
$$\langle \phi_{r,k}, \psi_j\rangle = \sqrt{\frac{c_r}{2}}\,e^{-ij\pi d_r} \int_{-1}^{1} \phi_k(y)\,e^{-ij\pi c_r y}\,dy,$$
where $d_r = \frac{1}{2}(x_{r+1} + x_r)$. Since (3.4) holds for all $z \in \mathbb{C}$, it follows that
$$\langle \phi_{r,k}, \psi_j\rangle = e^{-ij\pi d_r} (-i)^k \sqrt{\frac{k+\frac{1}{2}}{j}}\,J_{k+\frac{1}{2}}(j\pi c_r), \qquad (3.18)$$
whenever the functions $\phi_{r,k}$ arise from scaled Legendre polynomials. Naturally, if the functions $\phi_{r,k}$ arise from arbitrary scaled Gegenbauer polynomials, computation of the values (3.5) can be carried out recursively via the algorithm described in Section 3.2.
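Formula (3.18) is easy to sanity-check numerically; the snippet below (ours, with arbitrarily chosen subinterval and indices) compares it against direct quadrature for one positive mode $j$, using scipy.special.jv for $J_{k+1/2}$.

import numpy as np
from scipy.integrate import quad
from scipy.special import jv, eval_legendre

xr, xr1, j, k = -0.5, 1.0, 3, 4           # one subinterval, one mode j > 0
cr, dr = 0.5*(xr1 - xr), 0.5*(xr1 + xr)

rhs = np.exp(-1j*j*np.pi*dr)*(-1j)**k*np.sqrt((k + 0.5)/j)*jv(k + 0.5, j*np.pi*cr)

phi = lambda x: np.sqrt((k + 0.5)/cr)*eval_legendre(k, (x - xr)/cr - 1)
lhs = (quad(lambda x: phi(x)*np.cos(j*np.pi*x), xr, xr1)[0]
       - 1j*quad(lambda x: phi(x)*np.sin(j*np.pi*x), xr, xr1)[0])/np.sqrt(2)
print(abs(lhs - rhs))                     # ~ 1e-15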
In Figure 7 we apply this method to the function (3.15) using the orthonormal basis of scaled Legendre
polynomials. The improvement over Figure 6 is dramatic: using only m ≈ 250 (with n0 = n1 = 16) we
obtain 13 digits of accuracy. Note that, as expected, root exponential convergence occurs. Moreover, as
predicted by (3.17), and illustrated in Table 3, the condition number of the matrix A remains small.
Figure 7: Error in approximating the function (3.15) by $f_{n,m}(x)$. Left: log error $\log_{10}\|f - f_{n,m}\|_\infty$ (squares) and $\log_{10}\|f - f_{n,m}\|$ (circles) against $m = 1,\ldots,500$, with $n_0 = n_1$ chosen so that $m = \frac{1}{5}\big(\frac{n_0^2}{c_0} + \frac{n_1^2}{c_1}\big)$. Right: the error $\log_{10}|f(x) - f_{n,m}(x)|$ against $x \in [-1,1]$ for $m = 20, 40, 80, 160$.

3.6 Comparison to existing methods
Numerous methods exist for recovering functions to high accuracy from their Fourier data. Applications
are myriad, and range from medical imaging [8, 9] to postprocessing of numerical solutions of hyperbolic
PDEs [38, 51]. Prominent examples which deliver high global accuracy (in contrast to standard filtering techniques, which only yield high accuracy away from the singularities of a function [41]) include
spectral reprojection [40, 41, 42], techniques based on implicit matching of jump conditions [27], Padé
methods [24], methods based on sequence extrapolation [18] and Fourier extension methods [13, 48] (for
a more complete list, see [15] and references therein).
Whilst many of these methods deliver exponential convergence in terms of m (the number of Fourier
coefficients), they all suffer from ill-conditioning. This comes as no surprise: the problem of reconstructing a function from its Fourier coefficients can be viewed as a continuous analogue of the recovery of
a function from m equidistant pointwise values. As proved in [57], any method for this problem that
converges exponentially fast in m will suffer from exponentially poor conditioning. We conjecture that a
similar result also holds in the continuous case.
Aside from increased susceptibility to round-off error and noise, ill-conditioning often makes so-called inverse methods (e.g. extrapolation and Fourier extension methods) costly to implement. Conversely, the method proposed in this paper does not suffer from any ill-conditioning. The negative consequence of [57] is circumvented precisely because we witness only root-exponential convergence in $m$.
However, an advantage of this approach is that it delivers exponential convergence in n, the degree of the
final approximant fn,m . In many applications it may be necessary to manipulate fn,m , its relatively low
degree making such operations reasonably cheap. Thus, this method has the advantage of compression,
a feature not shared by the majority of the other methods mentioned previously.
A well-established and widely used alternative to this method is spectral reprojection, developed
by Gottlieb et al [35, 41, 42]. Much like this approach, it computes a polynomial approximation. Yet it
stands out as being direct, meaning that no linear system or least squares problem is required to be solved.
Whilst the original method, based on Gegenbauer polynomials [41, 42] has been shown to suffer from a
generalised Runge phenomenon [14], thereby severely affecting its applicability, an improved approach
based on Freud polynomials was recently developed in [35]. Numerically at least, this method appears
to overcome a number of the issues associated with the original Gegenbauer procedure.
! "
Comparatively speaking, spectral reprojection delivers exponential convergence in O m2 opera3
tions. On the other hand, our method obtains root exponential convergence at a cost of O(m 2 ) operations. However, despite being theoretically more efficient, the various constants involved in spectral
reprojection tend to be rather large. Indeed, in Table 4 we compare the error in approximating the function (3.15) using both procedures (the data for spectral reprojection is taken from [35, Table 1]. Note that
the parameter N used therein is such that 2N = m is the total number of Fourier coefficients). As is
evident, the method proposed in this paper obtains an error of order 10−14 using less than 256 Fourier
coefficients, whereas spectral reprojection does not reach this value until more than 1024 coefficients are
used.
m      64        128       256       512       1024      2048      4096
(a)    8.90e-01  1.37e-01  1.84e-04  1.01e-07  9.33e-13  5.27e-13  5.23e-14
(b)    2.40e-04  8.36e-09  2.40e-14  1.38e-14  1.74e-14  2.26e-14  2.59e-14

Table 4: Comparison of the (a) spectral reprojection and (b) generalised reconstruction methods applied to (3.15). Here $m$ is the total number of Fourier samples.

The most likely reason for this improvement is that the method of this paper is quasi-optimal, thereby delivering a near-optimal polynomial approximation, whereas spectral reprojection does not possess this feature. In fact, although spectral reprojection formally converges exponentially fast in $m$, the corresponding rate may be substantially slower than that of the best polynomial approximation.
Furthermore, for the Gegenbauer technique at least, there is the significant issue that various parameters
have to be chosen in an essentially function-dependent manner to ensure convergence [35], and thereby
avoid a Runge phenomenon. Numerical stability is also potentially a significant issue in both cases.
Aside from improved numerical performance, let us mention several other benefits. First, as discussed, the final approximation consists of only $O(\sqrt{m})$ terms, as opposed to $O(m)$. Second, the basis
for the polynomial reconstruction space Tn can be chosen arbitrarily (in particular, independently of
m) without affecting the convergence. The only downside is a mild increase in condition number if
nonorthogonal polynomials are employed. In contrast, for the Freud/Gegenbauer spectral reprojection
methods, only very specific types of polynomials can be used (which may not be simple to construct
or manipulate [35]), and, whenever the number of samples m is varied, all polynomials employed for
reconstruction must be changed.
One advantage of spectral reprojection is that it is local: the approximation in each subdomain of
smoothness is computed separately and independently of any other subdomain. Conversely, with our
approach, the computations are inherently coupled. Nevertheless, it may be possible to devise a local
version of our approach, a question we intend to explore in future investigations.
4 Reconstructions in tensor-product spaces
Thus far, we have focused on the reconstruction of univariate functions from their Fourier samples. A
simple extension of this approach, via tensor products, is to functions defined in cubes. The aim of this
section is to detail this generalisation.
To formalise this idea, let us return to the general perspective of Section 2. Suppose that the Hilbert space $H$ can be expressed as a tensor product $H = H_1 \otimes \cdots \otimes H_d$ of Hilbert spaces $H_i$, $i = 1,\ldots,d$, each having inner product $\langle\cdot,\cdot\rangle_i$. Note that, for $f = f_1 \otimes \cdots \otimes f_d \in H$ and $g = g_1 \otimes \cdots \otimes g_d \in H$, we have
$$\langle f, g\rangle = \prod_{i=1}^{d} \langle f_i, g_i\rangle_i.$$
Now suppose that the sampling basis consists of tensor-product functions. To this end, let
$$\psi_j = \psi_{1,j_1} \otimes \cdots \otimes \psi_{d,j_d}, \qquad j = (j_1,\ldots,j_d) \in \mathbb{N}^d,$$
and, for $m = (m_1,\ldots,m_d) \in \mathbb{N}^d$, set
$$S_m = \mathrm{span}\{\psi_j : j = (j_1,\ldots,j_d),\ 1 \le j_i \le m_i,\ i = 1,\ldots,d\}.$$
We assume throughout that the collection $\{\psi_{i,j}\}_{j=1}^{\infty}$ is a Riesz basis for $H_i$ for $i = 1,\ldots,d$. In particular, $\{\psi_j\}$ is a Riesz basis for $H$. With this to hand, we define the operator $P_m : H \to S_m$ by
$$P_m f = \sum_{j_1=1}^{m_1} \cdots \sum_{j_d=1}^{m_d} \langle f, \psi_j\rangle\,\psi_j.$$
Note that $P_m(f_1 \otimes \cdots \otimes f_d) = (P_{1,m_1} f_1) \otimes \cdots \otimes (P_{d,m_d} f_d)$, where $P_{i,m_i} : H_i \to S_{i,m_i} = \mathrm{span}\{\psi_{i,1},\ldots,\psi_{i,m_i}\}$ is defined in the obvious manner. In a similar fashion, we introduce the reconstruction vectors $\phi_j = \phi_{1,j_1} \otimes \cdots \otimes \phi_{d,j_d}$, which form a basis for the reconstruction space
$$T_n = \mathrm{span}\{\phi_j : j = (j_1,\ldots,j_d),\ 1 \le j_i \le n_i,\ i = 1,\ldots,d\}, \qquad n = (n_1,\ldots,n_d) \in \mathbb{N}^d.$$
Note that $T_n = T_{1,n_1} \otimes \cdots \otimes T_{d,n_d}$, where $T_{i,n_i} = \mathrm{span}\{\phi_{i,1},\ldots,\phi_{i,n_i}\}$. As before, we construct the approximation $f_{n,m} \in T_n$ via (2.6).

To cast this problem in a form suitable for computations, let $U^{[i]} \in \mathbb{C}^{m_i \times n_i}$ be the matrix with $(j,k)$th entry $\langle \psi_{i,j}, \phi_{i,k}\rangle_i$. Let $U \in \mathbb{C}^{\bar{m}\times\bar{n}}$ be the matrix of the $d$-variate reconstruction method, where $\bar{m} = m_1 \cdots m_d$ and $\bar{n} = n_1 \cdots n_d$. It is easily shown that
$$U = \bigotimes_{i=1}^{d} U^{[i]}, \qquad A = \bigotimes_{i=1}^{d} A^{[i]},$$
where $A = U^\dagger U$ and $A^{[i]} = (U^{[i]})^\dagger U^{[i]}$, and, in this case, $B_1 \otimes B_2$ denotes the Kronecker product of the matrices $B_1$ and $B_2$. By a trivial argument, we conclude that the number of operations required to compute $f_{n,m}$ is of order $(n_1 m_1)\cdots(n_d m_d)\sqrt{\kappa(A)}$.

Recall that the spectrum of the Kronecker product matrix $B_1 \otimes B_2$ consists of the pairs $\lambda\mu$, where $\lambda$ is an eigenvalue of $B_1$ and $\mu$ is an eigenvalue of $B_2$. From this, we deduce that
$$\kappa(A) = \prod_{i=1}^{d} \kappa(A^{[i]}).$$
Hence $\kappa(A)$ is completely determined by the matrices $A^{[i]}$, with the $i$th such matrix corresponding to the univariate reconstruction problem with sampling basis $\{\psi_{i,j}\}_{j=1}^{m_i}$ and reconstruction basis $\{\phi_{i,j}\}_{j=1}^{n_i}$. Unsurprisingly, a similar observation also holds for the quantity $C_{n,m}$:
Lemma 4.1. Let
$$C_{n,m} = \inf_{\phi \in T_n,\,\|\phi\|=1} \langle P_m\phi, \phi\rangle, \qquad C_{i,n_i,m_i} = \inf_{\phi \in T_{i,n_i},\,\|\phi\|_i=1} \langle P_{i,m_i}\phi, \phi\rangle_i, \quad i = 1,\ldots,d. \qquad (4.1)$$
Then $C_{n,m} = \prod_{i=1}^{d} C_{i,n_i,m_i}$.

Proof. By Lemma 2.13, $C_{n,m} = \lambda_{\min}(\tilde{A}^{-1}A)$ and $C_{i,n_i,m_i} = \lambda_{\min}((\tilde{A}^{[i]})^{-1}A^{[i]})$, $i = 1,\ldots,d$, where $\tilde{A}$ and $\tilde{A}^{[i]}$ are defined in the obvious manner. Since $\tilde{A} = \tilde{A}^{[1]} \otimes \cdots \otimes \tilde{A}^{[d]}$, the matrix $\tilde{A}^{-1}$ is the Kronecker product of the matrices $(\tilde{A}^{[i]})^{-1}$. The result now follows immediately.
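The Kronecker identities above are readily verified numerically; a quick check of ours, with arbitrary Hermitian positive definite factors:

import numpy as np

rng = np.random.default_rng(0)
def hpd(n):                                   # random Hermitian positive definite
    M = rng.standard_normal((n, n)) + 1j*rng.standard_normal((n, n))
    return M @ M.conj().T + n*np.eye(n)

A1, A2 = hpd(4), hpd(5)
print(np.linalg.cond(np.kron(A1, A2)))        # equals the product below
print(np.linalg.cond(A1)*np.linalg.cond(A2))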
4.1 Reconstruction of piecewise smooth functions
Having presented the general case, we now turn our attention to the reconstruction of a piecewise smooth function $f : [-1,1]^d \to \mathbb{R}$. We shall make the significant assumption (see Remark 4.1) that $f$ is smooth in hyper-rectangular subregions of $[-1,1]^d$. To this end, for $i = 1,\ldots,d$ let $l_i \in \mathbb{N}$ and suppose that
$$-1 = x_{0,i} < x_{1,i} < \ldots < x_{l_i,i} < x_{l_i+1,i} = 1.$$
Define $I_{r,i} = [x_{r,i}, x_{r+1,i}]$, $r = 0,\ldots,l_i$, and for $r = (r_1,\ldots,r_d)$ write $I_r = I_{r_1,1} \times \cdots \times I_{r_d,d}$, so that the collection
$$\{I_r : r = (r_1,\ldots,r_d),\ r_i = 0,\ldots,l_i,\ i = 1,\ldots,d\}$$
consists of disjoint sets whose union is $[-1,1]^d$. We assume that $f$ is smooth within each subdomain $I_r$. In addition, for $r_i = 0,\ldots,l_i$ and $i = 1,\ldots,d$, let $c_{r_i,i} = \frac{1}{2}(x_{r_i+1,i} - x_{r_i,i})$ and set $\Lambda_{r_i,i}(x) = \frac{x - x_{r_i,i}}{c_{r_i,i}} - 1$, $x \in I_{r_i,i}$. Note that $\Lambda_{r_i,i}(I_{r_i,i}) = [-1,1]$.

We now design a reconstruction space. To this end, for $n \in \mathbb{N}$ let $\widehat{T}_n$, $\dim \widehat{T}_n = n$, be a space of functions $\phi : \mathbb{R} \to \mathbb{C}$ with $\mathrm{supp}(\phi) \subseteq [-1,1]$. Define
$$T_{n,r,i} = \{\phi \circ \Lambda_{r,i} : \phi \in \widehat{T}_n\}, \qquad n \in \mathbb{N}.$$
Now suppose that $n$ is the vector $(n_1,\ldots,n_d)$, where
$$n_i = \sum_{r=0}^{l_i} n_{r,i}, \qquad i = 1,\ldots,d,$$
for some $n_{r,i} \in \mathbb{N}$. We define the reconstruction space $T_n$ by
$$T_n = \bigotimes_{i=1}^{d} \bigoplus_{r=0}^{l_i} T_{n_{r,i},r,i}. \qquad (4.2)$$
We require a basis for this space. Let $\{\phi_1,\ldots,\phi_n\}$, $n \in \mathbb{N}$, be a basis for $\widehat{T}_n$, and set
$$\phi_{r,j,i} = \frac{1}{\sqrt{c_{r,i}}}\,\phi_j \circ \Lambda_{r,i}.$$
A basis for $T_n$ is now given by
$$\{\phi_{r_1,j_1,1} \otimes \cdots \otimes \phi_{r_d,j_d,d} : j_i = 1,\ldots,n_{r_i,i},\ r_i = 0,\ldots,l_i,\ i = 1,\ldots,d\}.$$
This framework gives a general means by which to construct reconstruction bases in the tensor-product case for functions which are piecewise smooth with discontinuities parallel to the co-ordinate axes. Suppose now that we consider the recovery of such a function from its Fourier samples. Using the above framework, we construct a basis consisting of piecewise polynomials of several variables. The main question remaining is that of estimating the function $C_{n,m}$. However, in view of Lemma 4.1 and the results derived in Section 3.1, a simple argument gives the following:
Theorem 4.2. Let $\{\psi_j\}_{j\in\mathbb{Z}^d}$ be the multivariate Fourier basis on $[-1,1]^d$ and suppose that $T_n$ is defined by (4.2) for the choice $\widehat{T}_n = \{\phi : \phi|_{[-1,1]} \in P_{n-1}\}$. Suppose further that $n = (n_1,\ldots,n_d)$, where $n_i = \sum_{r=0}^{l_i} n_{r,i}$, $i = 1,\ldots,d$, and let
$$\Theta(n;\theta) = \min\{m = (m_1,\ldots,m_d) : C_{n,m} \ge \theta\}, \qquad \theta \in (0,1),$$
where $C_{n,m}$ is as in (4.1). If $\theta_1,\ldots,\theta_d \in (0,1)$ satisfy $\theta = \theta_1\cdots\theta_d$, then we may write
$$\Theta(n;\theta) = (\Theta_1(n_1;\theta_1),\ldots,\Theta_d(n_d;\theta_d)),$$
where
$$\Theta_i(n_i;\theta_i) \le 2\left\lceil \frac{1}{2} + \frac{2(\pi-2)}{\pi^2(1-\theta_i)} \sum_{r=0}^{l_i} \frac{n_{r,i}^2}{c_{r,i}} \right\rceil, \qquad i = 1,\ldots,d,$$
and
$$\Theta_i(n_i;\theta_i) \le \frac{4}{\pi^2(1-\theta_i)} \sum_{r=0}^{l_i} \frac{n_{r,i}^2}{c_{r,i}} + O(1), \qquad n_{0,i},\ldots,n_{l_i,i} \to \infty, \quad i = 1,\ldots,d.$$
The main consequence of this theorem is the following: regardless of the dimension, the variables $m_1,\ldots,m_d$ must scale quadratically with $n_1,\ldots,n_d$ to ensure quasi-optimal recovery in a multivariate piecewise polynomial basis from Fourier samples. Consider now the simplest example of this approach: namely, where $f$ is smooth in $[-1,1]^d$, so that $T_n$ consists of multivariate polynomials. In Figure 8 we plot the error in approximating the functions $f(x,y) = e^{x^2 y}$ and $f(x,y) = \sin 3xy$, using parameters $m_1 = m_2 = 0.5n_1^2$ and $n_2 = n_1$. As in the univariate case, we observe the accuracy of this technique. For example, using only $m_1 = m_2 \approx 200$ and $n_1 = n_2 \approx 20$ we obtain an error of order $10^{-14}$.
Remark 4.1 This approach (and many others based on tensor-product formulations) has the significant
shortcoming that it requires the function to be singular in regions parallel to the co-ordinate axes. Naturally, this is a rather restrictive condition. For a function with singularities lying on a curve (in two
dimensions, for example), one potential alternative is to apply the one-dimensional method along horizontal and vertical slices, and recover the two-dimensional function from the resulting one-dimensional
reconstructions. However, the generality of the reconstruction framework presented in this paper allows one to potentially consider other multivariate reconstruction bases, better suited for functions not
possessing such a simple singularity geometry. This is a topic for future investigation.
Figure 8: The errors $\log_{10}\|f - f_{n,m}\|$ (squares) and $\log_{10}\|f - f_{n,m}\|_\infty$ (circles) for $n_1 = n_2 = 1,\ldots,25$, where $f(x,y) = e^{x^2 y}$ (left) and $f(x,y) = \sin 3xy$ (right).

5 Other sampling problems
Overcoming the Gibbs phenomenon in Fourier series is an obvious application of the general framework
developed in Section 2. However, there is no reason to restrict to this case, and this framework can be
readily applied to design effective methods for a variety of other problems. In this section we describe
several related problems, and the application of this framework therein.
5.1 Modified Fourier sampling
Modified Fourier series were proposed in [49] as a minor adjustment of Fourier series. In the domain $[-1,1]$, rather than expanding a function $f$ in the classical Fourier basis
$$\{\cos j\pi x : j \in \mathbb{N}\} \cup \{\sin j\pi x : j \in \mathbb{N}_+\},$$
we construct the modified Fourier expansion using the basis
$$\{\cos j\pi x : j \in \mathbb{N}\} \cup \{\sin(j - \tfrac{1}{2})\pi x : j \in \mathbb{N}_+\}$$
instead. Though this basis arises from only a minor adjustment of the Fourier basis, the result is an improved approximation: the modified Fourier series of a smooth, nonperiodic function converges uniformly at a rate of $O(m^{-1})$, whilst the Fourier series suffers from the Gibbs phenomenon. Although the convergence rate remains slow, the improvement over Fourier series, whilst retaining many of their principal benefits, has given rise to a number of applications of such expansions. For a more detailed survey, we refer the reader to [2, 7].
We shall consider modified Fourier expansions in a somewhat different context. Given the similarity between the two bases, it is reasonable to assume that any sampling procedure (e.g. an MRI scanner) can be adapted to compute the modified Fourier coefficients of a given function (or image/signal), as opposed to the standard Fourier samples. Indeed, if
$$\mathcal{F}f(t) = \int_{-1}^{1} f(x)\,e^{-i\pi t x}\,dx$$
is the Fourier transform of $f$, then the modified Fourier coefficients are precisely
$$\hat{f}^C_j = \int_{-1}^{1} f(x)\cos j\pi x\,dx = \frac{1}{2}\big[\mathcal{F}f(j) + \mathcal{F}f(-j)\big], \qquad \hat{f}^S_j = \int_{-1}^{1} f(x)\sin(j - \tfrac{1}{2})\pi x\,dx = \frac{i}{2}\big[\mathcal{F}f(j - \tfrac{1}{2}) - \mathcal{F}f(\tfrac{1}{2} - j)\big],$$
and hence can be computed from samples of the Fourier transform. This raises the question: given that the general framework can handle sampling in either basis, is there an advantage gained from sampling in the modified Fourier basis, as opposed to the Fourier basis? As we shall show, provided the function is analytic and nonperiodic, this is indeed the case. Specifically, when we reconstruct in a polynomial basis, we require fewer samples to obtain stable, quasi-optimal recovery to within a prescribed tolerance.
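These identities can be confirmed directly by quadrature; the following check is ours, with an arbitrarily chosen smooth $f$ and index $j$:

import numpy as np
from scipy.integrate import quad

f = lambda x: np.exp(-x)*np.cos(8*x)          # arbitrary smooth test function

def Ff(t):                                    # Fourier transform on [-1, 1]
    re = quad(lambda x: f(x)*np.cos(np.pi*t*x), -1, 1)[0]
    im = quad(lambda x: f(x)*np.sin(np.pi*t*x), -1, 1)[0]
    return re - 1j*im

j = 4
fC = quad(lambda x: f(x)*np.cos(j*np.pi*x), -1, 1)[0]
fS = quad(lambda x: f(x)*np.sin((j - 0.5)*np.pi*x), -1, 1)[0]
print(abs(fC - 0.5*(Ff(j) + Ff(-j))))                  # ~ 1e-16
print(abs(fS - 0.5j*(Ff(j - 0.5) - Ff(0.5 - j))))      # ~ 1e-16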
29
0.7
0.6
0.5
0.4
0.3
0.2
0.1
0.7
0.6
0.5
0.4
0.3
0.2
0.1
20
40
60
80
20
40
60
80
Figure 9: The function n−2 Θ(n; θ) (squares), the global bound (circles) and the asymptotic bound (crosses) for
n = 1, . . . , 80 and θ =
!2
!4
!6
!8
!10
!12
50
1
2
(left), θ =
100
1
4
(right).
150
200
!2
50
100
150
200
!4
!6
!8
!10
!12
Figure 10: The errors log10 !f − fn,m !∞ (left) and log10 !f − fn,m ! (right) against m = 5, 10, 15, . . . , 200,
where fn,m is computed from modified Fourier (squares) or Fourier (circles) samples.
analytic and nonperiodic, this is indeed the case. Specifically, when we reconstruct in a polynomial basis,
we require fewer samples to obtain stable, quasi-optimal recovery to within a prescribed tolerance.
Suppose that we carry out the reconstruction procedure as in Section 3 but using modified Fourier samples instead of Fourier samples. For this, we set
$$P_m f(x) = \frac{1}{2}\hat{f}^C_0 + \sum_{j=1}^{\lfloor m/2\rfloor} \left[\hat{f}^C_j \cos j\pi x + \hat{f}^S_j \sin(j - \tfrac{1}{2})\pi x\right].$$
Naturally, we consider the function $\Theta(n;\theta)$ once more. In Figure 9 we plot the function $\Theta(n;\theta)$ for the modified Fourier basis. Upon comparison with Figure 2, we conclude the following: modified Fourier sampling, as opposed to standard Fourier sampling, leads to a noticeable improvement. Specifically, the quantity $n^{-2}\Theta(n;\frac{1}{2})$ is approximately $0.15$ for large $n$ in the modified Fourier case, as opposed to $0.4$ in the Fourier case.
This result means that, if the number of samples m is fixed, we are able to take a much larger value of
n in the modified Fourier case, whilst retaining quasi-optimal recovery (with constant c(θ)). To illustrate
this improvement, in Figure 10 we compare the errors in approximating the function f (x) = e−x cos 8x
from either its Fourier or modified Fourier data. In both cases the number of samples m was fixed, and
n was chosen so that the parameter Cn,m ≥ 12 . As is evident, the method based on modified Fourier
samples greatly outperforms the other. For example, using only m = 120 samples, we obtain an error of
order 10−14 for the former, in comparison to only 10−4 for the latter.
As in the Fourier case, to implement the modified Fourier-based approach it is necessary to have
estimates for the function Θ(n; θ). These are particularly simple to derive:
Lemma 5.1. Let $\{\psi_j\}_{j\in\mathbb{N}}$ be the modified Fourier basis and $T_n = P_{n-1}$. Then the function $\Theta(n;\theta)$ satisfies
$$\Theta(n;\theta) \le \left\lceil \frac{2}{\pi\sqrt{1-\theta}}\,k_n n^2 \right\rceil, \qquad \text{where } k_n = n^{-2} \sup_{\phi \in P_{n-1},\,\|\phi\|=1} \|\phi'\|.$$
Proof. In [3] it was shown that $\|\phi - P_m\phi\| \le \frac{2}{m\pi}\|\phi'\|$ for all sufficiently smooth functions $\phi$. The result now follows immediately from the definition of $C_{n,m}$.
As a result of this lemma, analytical bounds for $\Theta(n;\theta)$ depend solely on the constant $k_n$ of the Markov inequality $\|\phi'\| \le k_n n^2 \|\phi\|$, $\forall \phi \in P_{n-1}$. Markov inequalities and their constants are well understood. The question of determining $k_n$ was first studied rigorously by Schmidt [59], in which the estimates
$$k_n \le \frac{1}{\sqrt{2}}, \quad \forall n, \qquad k_n \to \frac{1}{\pi}, \quad n \to \infty, \qquad (5.1)$$
were derived. In [60] the following improved asymptotic estimate was also obtained:
$$k_n n^2 = \frac{(n+\frac{1}{2})^2}{\pi}\left[1 - \frac{\pi^2-3}{12(n+\frac{1}{2})^2} + \frac{R_n}{(n+\frac{1}{2})^4}\right]^{-1}, \qquad n \ge 5, \qquad (5.2)$$
where $-6 < R_n < 13$. We refer the reader to [11] for a more thorough discussion of both these results and more recent work on this topic. Returning to $\Theta(n;\theta)$, we now substitute the result of Lemma 5.1 into (5.1) and (5.2) to obtain the global and asymptotic bounds. In Figure 9 we compare these bounds to their numerically computed values. The relative sharpness of such estimates is once more observed.
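The constant $k_n$ can also be computed directly: in the orthonormal Legendre basis, $L^2$ norms become Euclidean norms of coefficient vectors, so $k_n$ is $n^{-2}$ times the largest singular value of the differentiation matrix. A sketch of ours:

import numpy as np
from numpy.polynomial.legendre import legder

def markov_constant(n):
    """k_n = n^{-2} sup{||phi'|| : phi in P_{n-1}, ||phi|| = 1}."""
    norm = np.sqrt(np.arange(n) + 0.5)        # phi_j = norm[j] * P_j
    D = np.zeros((n, n))                      # differentiation matrix
    for k in range(1, n):
        c = np.zeros(n); c[k] = norm[k]       # phi_k in Legendre coefficients
        dc = legder(c)                        # derivative, Legendre coefficients
        D[:len(dc), k] = dc / norm[:len(dc)]  # back to the orthonormal basis
    return np.linalg.svd(D, compute_uv=False)[0] / n**2

print([round(markov_constant(n), 4) for n in (2, 5, 20, 80)])  # tends to 1/pi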
5.2 Polynomial sampling
The primary concern of this paper has been reconstruction from Fourier (or Fourier-like) samples. However, in several circumstances, most notably the spectral approximation of PDEs with discontinuous solutions [32, 38], the problem arises where a piecewise analytic function has been sampled in an orthogonal polynomial basis. As previously noted, this approximation will converge slowly (and suffer from a Gibbs-type phenomenon near discontinuities), hence it is necessary to compute a new approximation with faster convergence. Whilst a version of spectral reprojection using piecewise Gegenbauer polynomials has been developed for this task [39, 40], the advantages of the method proposed in this paper (see Section 3.6) make it a compelling alternative to this existing approach. Hence, the purpose of this section is to give a brief overview of this application.

It is beyond the scope of this paper to develop this example of the reconstruction procedure in its full generality. Instead, we consider only the recovery of a piecewise analytic function $f : [-1,1] \to \mathbb{R}$ from its first $m$ Legendre polynomial coefficients $\hat{f}_j = \langle f, \psi_j\rangle$, $j = 0,\ldots,m-1$, where $\psi_j = (j+\frac{1}{2})^{\frac{1}{2}} P_j(x)$ is the $j$th normalised Legendre polynomial. Proceeding as in Section 3.5, we assume that $f$ has jump discontinuities at $-1 < x_1 < \ldots < x_l < 1$, and seek an approximation of the form
$$f_{n,m}(x) = \sum_{r=0}^{l} \sum_{j=0}^{n_r-1} \alpha_{r,j}\,\phi_{r,j}(x), \qquad n = \sum_{r=0}^{l} n_r,$$
where $\phi_{r,j}(x) = \frac{1}{\sqrt{c_r}}\phi_j(\Lambda_r(x))$, $\Lambda_r(x) = \frac{x-x_r}{c_r} - 1$, $c_r = \frac{1}{2}(x_{r+1} - x_r)$ and $\{\phi_0,\ldots,\phi_{n-1}\}$ is a system of polynomials on $[-1,1]$. Since $f$ is piecewise analytic, we expect exponential convergence of $f_{n,m}$ to $f$, provided $m$ is sufficiently large in comparison to $n$.
Aside from determining how large $m$ must be in comparison to $n$ for recovery, the main question remaining is that of implementation, i.e. how to compute the entries of the matrix $U$. This requires evaluation of the integrals
$$\int_{x_r}^{x_{r+1}} \psi_j(x)\,\phi_{r,k}(x)\,dx, \qquad j = 0,\ldots,m-1, \quad k = 0,\ldots,n-1.$$
Whenever the reconstruction functions $\phi_{r,k}$ arise from Gegenbauer polynomials, these calculations can be done iteratively. For the sake of brevity, we will not describe this computation in full generality. Instead, we consider only the situation where the functions $\phi_{r,k}$ are (appropriately scaled) Legendre polynomials, in which case we are required to compute the integrals
$$\int_{x_r}^{x_{r+1}} P_j(x)\,P_k(\Lambda_r(x))\,dx, \qquad j = 0,\ldots,m-1, \quad k = 0,\ldots,n-1, \quad r = 0,\ldots,l.$$
We have
Lemma 5.2. Let
$$u_{j,k} = \int_a^b P_j(x)\,P_k(cx+d)\,dx, \qquad j,k = 0,1,2,\ldots, \qquad (5.3)$$
where $ca + d = -1$ and $cb + d = 1$. Then
$$u_{0,0} = b - a, \qquad u_{0,k} = \frac{2}{c}\,\delta_{0,k}, \qquad u_{j,0} = \frac{1}{j}\big[P_{j+1}(x) - xP_j(x)\big]_{x=a}^{b}, \quad j = 1,2,\ldots,$$
$$u_{1,k} = \frac{2}{3c^2}\,\delta_{1,k} - \frac{2d}{c^2}\,\delta_{0,k}, \qquad k = 0,1,2,\ldots,$$
and, for $j \ge 2$ and $k \ge 1$,
$$u_{j,k} = \frac{(2j-1)(k+1)}{cj(2k+1)}\,u_{j-1,k+1} + \frac{(2j-1)k}{cj(2k+1)}\,u_{j-1,k-1} - \frac{d(2j-1)}{cj}\,u_{j-1,k} - \frac{j-1}{j}\,u_{j-2,k}. \qquad (5.4)$$
Proof. Recall the recurrence relation
$$jP_j(x) = (2j-1)\,xP_{j-1}(x) - (j-1)P_{j-2}(x), \qquad j = 2,3,\ldots, \qquad (5.5)$$
for Legendre polynomials [1, chpt. 22]. Substituting this into (5.3) gives
$$u_{j,k} = \frac{2j-1}{j} \int_a^b x P_{j-1}(x)\,P_k(cx+d)\,dx - \frac{j-1}{j}\,u_{j-2,k}.$$
Letting $x \mapsto cx + d$ in (5.5) and rearranging, we find that
$$x P_k(cx+d) = \frac{k+1}{c(2k+1)}\,P_{k+1}(cx+d) - \frac{d}{c}\,P_k(cx+d) + \frac{k}{c(2k+1)}\,P_{k-1}(cx+d).$$
The recurrence (5.4) now follows upon substituting this into the previous expression.
Next consider $u_{j,0}$. Since $P_0 \equiv 1$, we have $u_{0,0} = b - a$ and $u_{j,0} = \int_a^b P_j(x)\,dx$ for $j \ge 1$. Recall that the $j$th Legendre polynomial satisfies the Legendre differential equation [1, chpt. 22]
$$\big[(1-x^2)P'_j(x)\big]' = -j(j+1)P_j(x).$$
Substituting for $P_j$ in $\int_a^b P_j(x)\,dx$ and integrating gives
$$u_{j,0} = \frac{1}{j(j+1)}\big[(x^2-1)P'_j(x)\big]_{x=a}^{b}.$$
The result now follows directly from the expression
$$(1-x^2)P'_j(x) = (j+1)\big(xP_j(x) - P_{j+1}(x)\big), \qquad j = 0,1,2,\ldots.$$
To complete the proof, we consider $u_{0,k}$ and $u_{1,k}$. By the assumptions on $a$, $b$, $c$, $d$, we find that
$$u_{0,k} = \frac{1}{c}\int_{-1}^{1} P_k(x)\,dx.$$
Orthogonality now gives $u_{0,k} = \frac{2}{c}\,\delta_{0,k}$, as required. For $u_{1,k}$ we have
$$u_{1,k} = \frac{1}{c}\int_{-1}^{1} \frac{1}{c}(y-d)\,P_k(y)\,dy = \frac{1}{c^2}\left[\frac{2}{3}\,\delta_{1,k} - 2d\,\delta_{0,k}\right],$$
as required.
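For completeness, here is a transcription (ours) of these recurrences; since row $j$ consumes column $k+1$ of row $j-1$, the sketch computes extra columns and discards them at the end.

import numpy as np
from numpy.polynomial.legendre import legval

def legendre_products(a, b, J, K):
    """u[j, k] = int_a^b P_j(x) P_k(c x + d) dx via Lemma 5.2."""
    c = 2.0/(b - a); d = -1.0 - c*a            # so that ca + d = -1, cb + d = 1
    W = K + J                                  # working width for the recursion
    u = np.zeros((J, W + 1))
    u[0, 0] = 2/c                              # u_{0,k} = (2/c) delta_{0,k}
    if J > 1:
        u[1, 0] = -2*d/c**2                    # u_{1,k} from the lemma
        u[1, 1] = 2/(3*c**2)
    for j in range(2, J):
        # k = 0 via the closed form u_{j,0} = [P_{j+1}(x) - x P_j(x)]_a^b / j
        ej = np.zeros(j + 2); ej[j] = 1.0
        ej1 = np.zeros(j + 2); ej1[j + 1] = 1.0
        F = lambda x: (legval(x, ej1) - x*legval(x, ej))/j
        u[j, 0] = F(b) - F(a)
        for k in range(1, W - j + 1):          # recurrence (5.4)
            u[j, k] = ((2*j - 1)*(k + 1)/(c*j*(2*k + 1))*u[j - 1, k + 1]
                       + (2*j - 1)*k/(c*j*(2*k + 1))*u[j - 1, k - 1]
                       - d*(2*j - 1)/(c*j)*u[j - 1, k]
                       - (j - 1)/j*u[j - 2, k])
    return u[:, :K]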
Figure 11: Left: the error $\log_{10}|f(x) - f_{n,m}(x)|$ for $-1 \le x \le 1$ and $m = 20, 40, 80, 160$. Right: log errors $\log_{10}\|f - f_{n,m}\|_\infty$ (squares) and $\log_{10}\|f - f_{n,m}\|$ (circles) against $m$.
n        8      16    24    32    40    48    56    64    72    80
Cn,m     0.98   0.87  0.85  0.85  0.85  0.84  0.84  0.84  0.84  0.84
κ(U*U)   19.97  4.17  3.57  3.57  3.50  3.43  3.43  3.41  3.38  3.38

Table 5: The values $C_{n,m}$ and $\kappa(U^*U)$ against $n$, where $m = \frac{1}{8}n^2$.
In Figure 11 we consider the approximation of the function
$$f(x) = \begin{cases} \sin(\cos x) & -\frac{1}{2} \le x < \frac{1}{2}, \\ 0 & \text{otherwise}, \end{cases} \qquad (5.6)$$
by the aforementioned method, using parameter values $m = \frac{1}{8}n^2$, $n_0 = n_2 = \frac{1}{4}n$ and $n_1 = \frac{1}{2}n$. As shown, we obtain 13 digits of accuracy using only $m \approx 120$ Legendre coefficients of (5.6). Note that, although we have not proved it, the scaling $m = O(n^2)$ appears to be sufficient for recovery. Numerical results supporting this hypothesis are given in Table 5.
The function (5.6) was introduced in [39] to test spectral reprojection when applied to this type of problem. As shown in Figure 11, we obtain a uniform error of roughly $10^{-8}$ using only $m = 40$ coefficients, and when $m = 120$, the corresponding value is $10^{-14}$. In comparison, the spectral reprojection method of [39] gives errors of roughly $10^{-3}$ and $10^{-7}$ for these values of $m$ (see [39, Fig. 3]), the latter being $10^{7}$ times larger.
Whilst this method appears to be a promising alternative, it should be mentioned that the recursive scheme introduced to compute the entries of $U$ requires $O(m^2)$ operations. Since only $O(mn)$ operations are required to compute the approximation $f_{n,m}$ once $U$ has been computed, this is clearly less than ideal. Having said that, the spectral reprojection method requires $O(m^2)$ operations to compute each approximant, whereas with this scheme the higher cost is incurred only in a preprocessing step.
6 Conclusions and future work
We have presented a reconstruction procedure to recover any element of a Hilbert space using any collection of linearly independent vectors, given a finite number of its samples with respect to an arbitrary
Riesz basis. This approach is both stable and quasi-optimal, provided the number of samples m is sufficiently large in comparison to the number of reconstruction vectors n. Moreover, this condition can be
estimated numerically or, in certain circumstances, analytically.
A prominent example of this approach is the reconstruction of a piecewise analytic function from its
Fourier samples. Using a piecewise polynomial basis, this results in an approximation that converges
root-exponentially fast in terms of m, or exponentially fast in n.
The framework introduced in this paper is one of the first steps in the development of stable and accurate reconstruction techniques in Hilbert spaces, and their applications to a variety of different problems.
We now detail a number of areas of current and future work:
1. Piecewise polynomial reconstructions from polynomial samples. In the penultimate section of this paper we detailed the recovery of a piecewise analytic function in a piecewise polynomial basis, given its Legendre polynomial expansion coefficients. Herein, an important open problem is verifying that the scaling $m = O(n^2)$ is sufficient for reconstruction. Other challenges involve devising an iterative scheme for computing the entries of $U$ valid for reconstructions in arbitrary Gegenbauer polynomials, and which involves only $O(mn)$ operations, as opposed to $O(m^2)$. Naturally, future work will also investigate the extension of this approach to reconstructions from arbitrary Gegenbauer polynomial expansion coefficients, as opposed to just Legendre polynomial expansion coefficients.
2. Gegenbauer polynomial reconstructions from Fourier samples. As shown, the reconstruction procedure can be implemented with arbitrary Gegenbauer polynomials. However, unless Legendre polynomials are used, the reconstruction is not completely stable. This problem arises because Gegenbauer polynomials do not form a Riesz basis for the space $L^2(-1,1)$ unless $\lambda = \frac{1}{2}$. However, Gegenbauer polynomials do form an orthogonal basis for the weighted space $L^2_\omega(-1,1)$, where $\omega(x) = (1-x^2)^{\lambda-\frac{1}{2}}$. Hence, it is natural to ask whether the reconstruction procedure can be adjusted to incorporate this additional structure, thereby yielding a stable method. It turns out that this can be done, with the first step being the derivation of an extended abstract framework along similar lines to Section 2. We are currently compiling results in this case, and will report the details in a future paper.
3. Fast methods. For practical, large-scale implementations of this framework, the computational cost
figure of O (mn) may be too large. For this reason, the use of fast methods to compute the reconstruction
fn,m is another topic of current investigation. Potential means for doing this include using so-called
nonuniform FFTs [25, 26] to efficiently perform matrix-vector multiplications.
4. Applications. Aside from the obvious applications in image and signal processing, there are many other potential uses of the procedure. First, it may be applicable to the spectral discretisation of PDEs. Spectral methods are extremely efficient for solving problems with smooth solutions. However, for problems that develop discontinuities, e.g. hyperbolic conservation laws, a postprocessor is required to recover high accuracy [38]. Spectral reprojection is frequently used in such problems (see [32, 38] and references therein). Given the potential advantages of the method developed in this paper (see Section 3.6), it is of significant interest to apply this approach to these problems. Aside from high accuracy, a pertinent issue in the use of spectral approximations for nonsmooth problems is the question of stability [38]. Since the reconstruction procedure developed herein is numerically stable, we expect there to also be benefits in this regard. Outside of PDEs, the Gegenbauer reconstruction technique has also been extended to other types of series, including radial basis functions [54], Fourier–Bessel series [55] and spherical harmonics [30]. Future work will also consider generalisations along these lines.
5. Spline and wavelet-based reconstructions. Reconstructions in spline and wavelet bases are vitally
important in numerous applications, including image and signal processing [63]. In [4], the authors gave
a first insight into the application of such bases to the Fourier sampling problem. However, the theory
is far from complete. In particular, good estimates for the corresponding quantity Θ(n; θ) are currently
lacking.
6. Recovery from other sampling data. The discrete analogue of the Fourier coefficient recovery problem involves the reconstruction of a function from $m$ equispaced samples in $[-1,1]$. This problem has received more attention of late [16, 57] than the continuous case considered in this paper. In particular, the use of oversampling, similar to that carried out in this paper, has been considered in [17]. Future work will look to extend these ideas to related problems involving recovery from nonuniform grids (see [31] for a discussion of such problems); in particular, with application to spectral collocation schemes based on Chebyshev or Legendre polynomials, the recovery of piecewise analytic functions given their values at Gauss or Gauss–Lobatto nodes.
Returning to the continuous problem, there is also significant interest in reconstructions from nonharmonic Fourier samples. In this case, the sampling vectors are typically (but not always) assumed
to constitute a frame. The extension of the framework of this paper to this problem is currently being
investigated. For related discussions, as well as other methods for this problem, see [33, 64, 65].
7. A geometric interpretation of reconstruction. The abstract reconstruction framework developed in this
paper can be viewed as a generalisation of the technique of consistent reconstructions (see Section 2.1
and [4]). As shown in [28, 29], consistent reconstructions can be interpreted geometrically in terms of
oblique projections onto particular subspaces. It transpires that the framework developed in this paper
also possesses such an interpretation. In turn, this viewpoint allows one to develop important notions of
optimality for this approach. A detailed discussion of this topic can be found in [6].
8. Infinite-dimensional compressed sensing. An important question in modern sampling is that of sparsity. The recently developed field of compressed sensing allows one to successfully reconstruct sparse signals with dramatic subsampling [19, 23]. However, although it has had a major impact, compressed sensing is currently a finite-dimensional technique. Since real-world signals and images are typically infinite-dimensional (or analog), the need for a more comprehensive framework is apparent. It transpires that, by combining the techniques of this paper with those of compressed sensing, one can develop both a theory and a method for subsampling in infinite-dimensional sparse recovery problems. This presents a significant extension of compressed sensing to a large class of infinite-dimensional (i.e. analog) signal models. These recent developments are documented in [5].
References
[1] M. Abramowitz and I. A. Stegun. Handbook of Mathematical Functions. Dover, 1974.
[2] B. Adcock. Modified Fourier expansions: theory, construction and applications. PhD thesis, University of
Cambridge, 2010.
[3] B. Adcock. Multivariate modified Fourier series and application to boundary value problems. Numer. Math.,
115(4):511–552, 2010.
[4] B. Adcock and A. C. Hansen. A generalized sampling theorem for stable reconstructions in arbitrary bases.
Technical report NA2010/07, DAMTP, University of Cambridge, 2010.
[5] B. Adcock and A. C. Hansen. Generalized sampling and infinite-dimensional compressed sensing. Technical
report NA2011/02, DAMTP, University of Cambridge, 2011.
[6] B. Adcock and A. C. Hansen. Sharp bounds, optimality and a geometric interpretation for generalised sampling
in Hilbert spaces. Technical report NA2011/10, DAMTP, University of Cambridge, 2011.
[7] B. Adcock and D. Huybrechs. Multivariate modified Fourier expansions. In E. Rønquist et al, editor, Proceedings of the International Conference on Spectral and High Order Methods, 2011.
[8] R. Archibald, K. Chen, A. Gelb, and R. Renault. Improving tissue segmentation of human brain MRI through
preprocessing by the Gegenbauer reconstruction method. NeuroImage, 20(1):489–502, 2003.
[9] R. Archibald and A. Gelb. A method to reduce the Gibbs ringing artifact in MRI scans while keeping tissue
boundary integrity. IEEE Transactions on Medical Imaging, 21(4):305–319, 2002.
[10] H. Bateman. Higher Transcendental Functions. Vol. 2, McGraw–Hill, New York, 1953.
[11] A. Böttcher and P. Dörfler. Weighted Markov-type inequalities, norms of Volterra operators, and zeros of
Bessel functions. Math. Nachr., 283(1):40–57, 2010.
[12] J. P. Boyd. Chebyshev and Fourier Spectral Methods. Springer–Verlag, 1989.
[13] J. P. Boyd. A comparison of numerical algorithms for Fourier Extension of the first, second, and third kinds. J.
Comput. Phys., 178:118–160, 2002.
[14] J. P. Boyd. Trouble with Gegenbauer reconstruction for defeating Gibbs phenomenon: Runge phenomenon in
the diagonal limit of Gegenbauer polynomial approximations. J. Comput. Phys., 204(1):253–264, 2005.
[15] J. P. Boyd. Acceleration of algebraically-converging Fourier series when the coefficients have series in powers
of 1/n. J. Comput. Phys., 228:1404–1411, 2009.
[16] J. P. Boyd and J. R. Ong. Exponentially-convergent strategies for defeating the Runge phenomenon for the
approximation of non-periodic functions. I. Single-interval schemes. Commun. Comput. Phys., 5(2–4):484–
497, 2009.
[17] J.P. Boyd and F. Xu. Divergence (Runge phenomenon) for least-squares polynomial approximation on an
equispaced grid and mock-Chebyshev subset interpolation. Appl. Math. Comput., 210(1):158–168, 2009.
[18] C. Brezinski. Extrapolation algorithms for filtering series of functions, and treating the Gibbs phenomenon.
Numer. Algorithms, 36:309–329, 2004.
[19] E. J. Candès. An introduction to compressive sensing. IEEE Signal Process. Mag., 25(2):21–30, 2008.
[20] C. Canuto, M. Y. Hussaini, A. Quarteroni, and T. A. Zang. Spectral methods: Fundamentals in Single Domains.
Springer, 2006.
[21] O. Christensen. An Introduction to Frames and Riesz Bases. Birkhauser, 2003.
[22] V. Dominguez, I. G. Graham, and V. P. Smyshlyaev. Stability and error estimates for Filon-Clenshaw-Curtis
rules for highly-oscillatory integrals. IMA J. Num. Anal. (to appear), 2011.
[23] D. L. Donoho. Compressed sensing. IEEE Trans. Inform. Theory, 52(4):1289–1306, 2006.
[24] T. A. Driscoll and B. Fornberg. A Padé-based algorithm for overcoming the Gibbs phenomenon. Numer.
Algorithms, 26:77–92, 2001.
[25] A. Dutt and V. Rokhlin. Fast Fourier Transforms for nonequispaced data. SIAM J. Sci. Comput., 14(6):1368–
1393, 1993.
[26] A. Dutt and V. Rokhlin. Fast Fourier Transforms for nonequispaced data, II. Appl. Comput. Harmon. Anal.,
2:85–100, 1995.
[27] K. S. Eckhoff. On a high order numerical method for functions with singularities. Math. Comp., 67(223):1063–
1087, 1998.
[28] Y.C. Eldar. Sampling without input constraints: Consistent reconstruction in arbitrary spaces. Sampling,
Wavelets and Tomography, 2003.
[29] Y.C. Eldar and T. Werther. General framework for consistent sampling in Hilbert spaces. Int. J. Wavelets
Multiresolut. Inf. Process., 3(3):347, 2005.
[30] A. Gelb. The resolution of the Gibbs phenomenon for spherical harmonics. Math. Comp., 66(218):699–717,
1997.
[31] A. Gelb. Reconstruction of piecewise smooth functions from non-uniform grid point data. J. Sci. Comput.,
30(3):409–440, 2007.
[32] A. Gelb and S. Gottlieb. The resolution of the Gibbs phenomenon for Fourier spectral methods. In A. Jerri,
editor, Advances in The Gibbs Phenomenon. Sampling Publishing, Potsdam, New York, 2007.
[33] A. Gelb and T. Hines. Detection of edges from nonuniform Fourier data. J. Fourier Anal. Appl. (to appear),
2011.
[34] A. Gelb and E. Tadmor. Detection of edges in spectral data. Appl. Comput. Harmon. Anal., 7(1):101, 1999.
[35] A. Gelb and J. Tanner. Robust reprojection methods for the resolution of the Gibbs phenomenon. Appl. Comput.
Harmon. Anal., 20:3–25, 2006.
[36] D. Gilbarg and N.S. Trudinger. Elliptic Partial Differential Equations of Second Order. Springer Verlag, 2001.
[37] G. H. Golub and C. F. Van Loan. Matrix Computations. John Hopkins University Press, Baltimore, 2nd edition,
1989.
[38] D. Gottlieb and J. S. Hesthaven. Spectral methods for hyperbolic problems. J. Comput. Appl. Math., 128(1–2):83–131, 2001.
[39] D. Gottlieb and C-W. Shu. On the Gibbs phenomenon IV: Recovering exponential accuracy in a subinterval
from a Gegenbauer partial sum of a piecewise analytic function. Math. Comp., 64(211):1081–1095, 1995.
[40] D. Gottlieb and C-W. Shu. On the Gibbs phenomenon III: Recovering exponential accuracy in a sub- interval
from a spectral partial sum of a piecewise analytic function. SIAM J. Num. Anal., 33(1):280–290, 1996.
[41] D. Gottlieb and C-W. Shu. On the Gibbs’ phenomenon and its resolution. SIAM Rev., 39(4):644–668, 1997.
[42] D. Gottlieb, C-W. Shu, A. Solomonoff, and H. Vandeven. On the Gibbs phenomenon I: Recovering exponential
accuracy from the Fourier partial sum of a nonperiodic analytic function. J. Comput. Appl. Math., 43(1–2):91–
98, 1992.
[43] K. Gröchenig, Z. Rzeszotnik, and T. Strohmer. Quantitative estimates for the finite section method and Banach
algebras of matrices. Integral Equations and Operator Theory, 67(2):183–202, 2011.
[44] A.C. Hansen. On the solvability complexity index, the n-pseudospectrum and approximations of spectra of
operators. J. Amer. Math. Soc., 24(1):81–124, 2011.
[45] Eric Heinemeyer, Marko Lindner, and Roland Potthast. Convergence and numerics of a multisection method
for scattering by three-dimensional rough surfaces. SIAM J. Numer. Anal., 46(4):1780–1798, 2008.
[46] J. S. Hesthaven, S. Gottlieb, and D. Gottlieb. Spectral Methods for Time-Dependent Problems. Cambridge
University Press, 2007.
[47] T. Hrycak and K. Gröchenig. Pseudospectral Fourier reconstruction with the modified inverse polynomial
reconstruction method. J. Comput. Phys., 229(3):933–946, 2010.
[48] D. Huybrechs. On the Fourier extension of non-periodic functions. SIAM J. Numer. Anal., 47(6):4326–4355,
2010.
[49] A. Iserles and S. P. Nørsett. From high oscillation to rapid approximation I: Modified Fourier expansions. IMA
J. Num. Anal., 28:862–887, 2008.
[50] A.J. Jerri, editor. The Gibbs Phenomenon in Fourier Analysis, Splines, and Wavelet Approximations. Kluwer Academic, Dordrecht, The Netherlands, 1998.
[51] A.J. Jerri, editor. Advances in the Gibbs Phenomenon. Sampling Publishing, Potsdam, New York, 2007.
[52] J.-H. Jung and B. D. Shizgal. Towards the resolution of the Gibbs phenomena. J. Comput. Appl. Math.,
161(1):41–65, 2003.
[53] J.-H. Jung and B. D. Shizgal. Generalization of the inverse polynomial reconstruction method in the resolution
of the Gibbs phenomenon. J. Comput. Appl. Math., 172(1):131–151, 2004.
[54] J.H. Jung, S. Gottlieb, S.O. Kim, C.L. Bresten, and D. Higgs. Recovery of high order accuracy in radial basis
function approximations of discontinuous problems. J Sci Comput, 45:359–381, 2010.
[55] J.R. Kamm, T.O. Williams, J.S. Brock, and S. Li. Application of Gegenbauer polynomial expansions to mitigate Gibbs phenomenon in Fourier–Bessel series solutions of a dynamic sphere problem. Int. J. Numer. Meth. Biomed. Engng., 26:1276–1292, 2010.
[56] T. W. Körner. Fourier Analysis. Cambridge University Press, 1988.
[57] R. Platte, L. N. Trefethen, and A. Kuijlaars. Impossibility of fast stable approximation of analytic functions
from equispaced samples. SIAM Rev. (to appear), 2010.
[58] T.J. Rivlin and V. Kalashnikov. Chebyshev Polynomials: from Approximation Theory to Algebra and Number
Theory. Wiley New York, 1990.
[59] E. Schmidt. Die asymptotische Bestimmung des Maximums des Integrals über das Quadrat der Ableitung eines
normierten Polynoms, dessen Grad ins Unendliche wächst. Sitzungsber. Preuss. Akad. Wiss., page 287, 1932.
[60] E. Schmidt. Über die nebst ihren Ableitungen orthogonalen Polynomensysteme und das zugehörige Extremum.
Math. Ann., 119:165–204, 1944.
[61] E. Tadmor. Filters, mollifiers and the computation of the Gibbs' phenomenon. Acta Numerica, 16:305–378, 2007.
[62] E. Tadmor and J. Tanner. Adaptive mollifiers for high resolution recovery of piecewise smooth data from its
spectral information. Foundations of Computational Mathematics, 2(2):155–189, 2002.
[63] M. Unser. Sampling–50 years after Shannon. Proc. IEEE, 88(4):569–587, 2000.
[64] A. Viswanathan, A. Gelb, D. Cochran, and R. Renaut. On reconstructions from non-uniform spectral data. J.
Sci. Comput., 45(1–3):487–513, 2010.
[65] L. Wang and Y. Wang. Reconstruction from irregular Fourier samples and Gaussian spectral mollifiers.
preprint, 2011.