Homework and Computer Solutions for Math*2130 (W17).

MARCUS R. GARVIE
Department of Mathematics & Statistics, University of Guelph
December 21, 2016
STOP!
Before looking at the answers the best study strategy is
to:
1. First read your lecture notes for the relevant section.
2. Then attempt the questions without first looking at
the answers.
3. Finally, if you are stuck, look at the general approach
in the relevant answer and try again.
Remember, some struggle is often necessary for effective
learning.
Chapter 1
Basic Tools
1.4 Taylor’s Theorem
1. What is the third-order Taylor polynomial for f(x) = \sqrt{x+1}, about x_0 = 0?

Solution:

P_3(x) = f(0) + x f'(0) + \frac{x^2}{2!} f''(0) + \frac{x^3}{3!} f'''(0)    (1.1)

f(0) = 1
f'(x) = \frac{1}{2}(x+1)^{-1/2}, so f'(0) = \frac{1}{2}
f''(x) = -\frac{1}{4}(x+1)^{-3/2}, so f''(0) = -\frac{1}{4}
f'''(x) = \frac{3}{8}(x+1)^{-5/2}, so f'''(0) = \frac{3}{8}

Substituting in (1.1) yields

P_3(x) = 1 + \frac{1}{2}x - \frac{1}{8}x^2 + \frac{1}{16}x^3.

2. Given that

R(x) = \frac{|x|^6}{6!} e^{\xi}

for x \in [-1, 1], where \xi is between x and 0, find an upper bound for |R|, valid for all x \in [-1, 1], that is independent of x and \xi.

Solution: Note first that x \in [-1, 1] \iff |x| \le 1, so

\frac{|x|^6 e^{\xi}}{6!} \le \frac{1 \cdot e^1}{6!} = \frac{e}{6!} \approx 0.00378.

We also used the fact that e^{\xi} is a monotonically increasing function of \xi.
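Both results are easy to check numerically. A quick sketch in Python (the course materials use MATLAB; the function name `p3` is my own):

```python
import math

def p3(x):
    # Third-order Taylor polynomial of sqrt(1 + x) about x0 = 0 (Question 1)
    return 1 + x / 2 - x**2 / 8 + x**3 / 16

# The polynomial agrees with sqrt(1 + x) to within the remainder term
err = abs(math.sqrt(1.1) - p3(0.1))
print(err)  # a few parts in 10^6 at x = 0.1

# Question 2: the uniform remainder bound e/6! on [-1, 1]
bound = math.e / math.factorial(6)
print(bound)  # about 0.00378
```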
3. Given that

R(x) = \frac{|x|^4}{4!} (1+\xi)^{-1}

for x \in [-\frac{1}{2}, \frac{1}{2}], where \xi is between x and 0, find an upper bound for |R|, valid for all x \in [-\frac{1}{2}, \frac{1}{2}], that is independent of x and \xi.

Solution: We use the following facts:

(i) x \in [-\frac{1}{2}, \frac{1}{2}] \iff |x| \le \frac{1}{2};

(ii) for 0 < z < \frac{1}{2}, y = z^4 is increasing; thus for z_1 < z_2 we have z_1^4 < z_2^4, e.g., z^4 < \left(\frac{1}{2}\right)^4;

(iii) for \xi \in [-\frac{1}{2}, \frac{1}{2}], \frac{1}{|1+\xi|} is maximized by the \xi that minimizes |1+\xi|, i.e., when \xi = -\frac{1}{2}.

Thus

|R(x)| = \frac{|x|^4}{4!\,|1+\xi|} \le \frac{(1/2)^4}{4!\,|1+\xi|} \le \frac{(1/2)^4}{4!\,|1+(-\frac{1}{2})|} = \frac{(1/2)^3}{4!} = \frac{1}{192}.
4. What is the fourth-order Taylor polynomial for \frac{1}{x+1} about x_0 = 0?

Solution:

P_4(x) = f(0) + x f'(0) + \frac{x^2}{2!} f''(0) + \frac{x^3}{3!} f'''(0) + \frac{x^4}{4!} f^{(iv)}(0)    (1.2)

f(0) = 1
f'(x) = -(x+1)^{-2}, so f'(0) = -1
f''(x) = 2(x+1)^{-3}, so f''(0) = 2
f'''(x) = -6(x+1)^{-4}, so f'''(0) = -6
f^{(iv)}(x) = 24(x+1)^{-5}, so f^{(iv)}(0) = 24

Substituting in (1.2) yields

P_4(x) = 1 - x + x^2 - x^3 + x^4.
5. Find the Taylor polynomial of third-order for \sin(x) using x_0 = \frac{\pi}{2}.

Solution:

P_3(x) = f\left(\tfrac{\pi}{2}\right) + \left(x - \tfrac{\pi}{2}\right) f'\left(\tfrac{\pi}{2}\right) + \frac{(x - \pi/2)^2}{2!} f''\left(\tfrac{\pi}{2}\right) + \frac{(x - \pi/2)^3}{3!} f'''\left(\tfrac{\pi}{2}\right)    (1.3)

f(\pi/2) = 1
f'(x) = \cos(x), so f'(\pi/2) = 0
f''(x) = -\sin(x), so f''(\pi/2) = -1
f'''(x) = -\cos(x), so f'''(\pi/2) = 0

Substituting into (1.3) yields

P_3(x) = 1 - \frac{(x - \pi/2)^2}{2}.
6. For the function below construct the third-order Taylor polynomial approximation, using x_0 = 0, and then estimate the error by computing an upper bound on the remainder, over the given interval: f(x) = \ln(1+x), x \in [-\frac{1}{2}, \frac{1}{2}].

Solution:

f(x) = f(0) + x f'(0) + \frac{x^2}{2!} f''(0) + \frac{x^3}{3!} f'''(0) + \frac{x^4}{4!} f^{(iv)}(\xi)    (1.4)

for \xi between 0 and x.

f(x) = \ln(1+x), so f(0) = \ln(1) = 0
f'(x) = \frac{1}{1+x}, so f'(0) = 1
f''(x) = -(x+1)^{-2}, so f''(0) = -1
f'''(x) = 2(x+1)^{-3}, so f'''(0) = 2
f^{(iv)}(x) = -\frac{6}{(1+x)^4}, so f^{(iv)}(\xi) = -\frac{6}{(1+\xi)^4}

Substituting into (1.4) yields

\ln(1+x) = x - \frac{x^2}{2} + \frac{x^3}{3} - \frac{x^4}{4(1+\xi)^4}.

So P_3(x) = x - \frac{x^2}{2} + \frac{x^3}{3}. For the error bound, first note that x \in [-\frac{1}{2}, \frac{1}{2}] \iff |x| \le \frac{1}{2}. Thus

|R_3(x)| = \frac{|x|^4}{4(1+\xi)^4} \le \frac{(1/2)^4}{4(1+\xi)^4} \le \frac{(1/2)^4}{4(1+(-\frac{1}{2}))^4} = \frac{(1/2)^4}{4(1/2)^4} = \frac{1}{4}.
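The bound 1/4 is quite pessimistic here; a short Python check (names of my own choosing) compares it with the largest actual error on the interval:

```python
import math

def p3(x):
    # Third-order Taylor polynomial of ln(1 + x) about x0 = 0
    return x - x**2 / 2 + x**3 / 3

# Largest actual error over a grid on [-1/2, 1/2]
xs = [-0.5 + k / 1000 for k in range(1001)]
max_err = max(abs(math.log(1 + x) - p3(x)) for x in xs)
print(max_err)  # about 0.026, comfortably below the bound 1/4
```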
7. Find the Taylor series expansion of f(x) = \sqrt[3]{x} of degree 2 about x_0 = 8. Using the remainder term, estimate the maximum absolute error on the interval [7, 9].

Solution: Calculate

f(x) = x^{1/3} \implies f(8) = 8^{1/3} = 2,
f'(x) = \frac{1}{3} x^{-2/3} \implies f'(8) = \frac{1}{12},
f''(x) = -\frac{2}{9} x^{-5/3} \implies f''(8) = -\frac{1}{144},
f'''(x) = \frac{10}{27} x^{-8/3}.

Thus the 2nd degree polynomial is

T_2(x) = f(8) + (x-8) f'(8) + \frac{(x-8)^2}{2!} f''(8)
       = 2 + \frac{x-8}{12} - \frac{(x-8)^2}{2!} \cdot \frac{1}{144}
       = 2 + \frac{x-8}{12} - \frac{(x-8)^2}{288}.

Using the Lagrange form of the remainder,

R_2(x) = \frac{(x-8)^3}{3!} f'''(\xi) = \frac{(x-8)^3}{3!} \cdot \frac{10}{27} \xi^{-8/3} = \frac{5(x-8)^3}{81 \xi^{8/3}}, \quad \xi between x and 8.

I.e., we maximize

|R_2(x)| = \frac{5|x-8|^3}{81 |\xi|^{8/3}}, \quad \xi between x and 8.

Now note that as x is between 7 and 9, so is \xi. Thus, as \frac{1}{\xi^{8/3}} is decreasing on [7, 9], it follows that \frac{1}{|\xi|^{8/3}} is maximized on [7, 9] by \frac{1}{7^{8/3}}. Also, powers of x (or powers of (x-c), for some c) are maximized at the end-points; just think of the general shape of x^2 or x^3. So |x-8|^3 is maximized on [7, 9] at x = 7 or x = 9. Here we get the same answer for both values. Thus

|R_2(x)| \le \frac{5|9-8|^3}{81 \cdot 7^{8/3}} = \frac{5|7-8|^3}{81 \cdot 7^{8/3}} = \frac{5}{81 \cdot 7^{8/3}} \approx 3.4426 \times 10^{-4}    (4 d.p.).
8. Construct a Taylor polynomial approximation that is accurate to within 10^{-3}, over the indicated interval, for the following function, using x_0 = 0: f(x) = e^{-x}, x \in [0, 1].

Solution: As in an example from the lecture notes, we need n such that |R_n(x)| \le 10^{-3} for all x \in [0, 1]:

|R_n(x)| = \left| \frac{x^{n+1}}{(n+1)!} e^{-\xi} \right| = \frac{|x|^{n+1}}{(n+1)!} e^{-\xi} \le \frac{1 \cdot e^0}{(n+1)!} = \frac{1}{(n+1)!}.

Plugging in values of n to make \frac{1}{(n+1)!} \le 10^{-3}, we find that n = 6 is the minimum value to do this (we get about 1.984 \times 10^{-4}), thus we must find P_6(x). Since f^{(n)}(x) = (-1)^n e^{-x} for all n, we have f^{(n)}(0) = (-1)^n for all n. Thus

P_6(x) = 1 - x + \frac{x^2}{2} - \frac{x^3}{3!} + \frac{x^4}{4!} - \frac{x^5}{5!} + \frac{x^6}{6!}.
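The search for the minimal n, and the resulting accuracy of P_6, can be sketched in Python (the course itself uses MATLAB; `min_n` and `p6` are names of my own choosing):

```python
import math

# Smallest n with 1/(n+1)! <= 1e-3 (uniform remainder bound on [0, 1])
n = 0
while 1 / math.factorial(n + 1) > 1e-3:
    n += 1
print(n)  # 6, since 1/7! is about 1.984e-4

def p6(x):
    # Degree-6 Taylor polynomial of e^(-x) about 0
    return sum((-x)**k / math.factorial(k) for k in range(7))

# Worst-case actual error on [0, 1] occurs at x = 1
print(abs(math.exp(-1) - p6(1)))  # well below 1e-3
```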
9. For each function below, use the Mean Value Theorem to find a value M such that

|f(x_1) - f(x_2)| \le M |x_1 - x_2|

valid for all x_1, x_2 in the stated interval: (a) f(x) = \ln(1+x) on the interval [-1, 1]; (b) f(x) = \sin(x) on the interval [0, \pi].

Solution: From the notes,

|f(x_2) - f(x_1)| \le |f'(\xi)| \cdot |x_2 - x_1|.

For (a): [x_1, x_2] \subseteq [-1, 1] and \xi \in [-1, 1].

f(x) = \ln(1+x) \implies f'(x) = \frac{1}{1+x} \implies |f'(\xi)| = \frac{1}{|1+\xi|} \le \frac{1}{1-1} = \infty.

So M does not exist.

For (b): [x_1, x_2] \subseteq [0, \pi] and \xi \in [0, \pi].

f(x) = \sin(x) \implies f'(x) = \cos(x) \implies |f'(\xi)| = |\cos(\xi)| \le 1 = M.
10. A function is called monotone on an interval if its derivative is strictly positive or strictly negative on the interval. Suppose f is continuous and monotone on the interval [a, b], and f(a)f(b) < 0; prove that there is exactly one value \alpha \in [a, b] such that f(\alpha) = 0.

Solution: If f(a)f(b) < 0 then either f(a) < 0 and f(b) > 0, or f(a) > 0 and f(b) < 0. Thus, as f \in C([a, b]), by the Intermediate Value Theorem there exists \alpha \in [a, b] such that f(\alpha) = 0. Moreover, since f is monotone it is one-to-one on [a, b], so this root is unique.
11. (MATLAB) Use the program GRAPH_PRES to estimate the (true) maximum absolute error in using the Taylor polynomial of Question 7 to approximate \sqrt[3]{x} on [7, 9].

Solution: Just plot the exact error |T_2(x) - \sqrt[3]{x}| over the interval [7, 9]. Visually estimate the maximum absolute error and compare this value with what you estimated was the maximum theoretical error using R_2(x). [Figure: plot of the exact error on [7, 9].] From the graph we see that the true maximum error on [7, 9] is about 2.6 \times 10^{-4}. The theoretical maximum error is estimated to be \le 3.4 \times 10^{-4}, which is not bad.
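Without the course's GRAPH_PRES program, the same comparison can be made with a grid search in Python (a sketch; `t2` is my own name for the polynomial from Question 7):

```python
def t2(x):
    # Degree-2 Taylor polynomial of x^(1/3) about x0 = 8 (Question 7)
    return 2 + (x - 8) / 12 - (x - 8)**2 / 288

# True maximum error on a fine grid over [7, 9]
xs = [7 + 2 * k / 2000 for k in range(2001)]
true_max = max(abs(t2(x) - x**(1 / 3)) for x in xs)
theoretical = 5 / (81 * 7**(8 / 3))
print(true_max)     # about 2.6e-4
print(theoretical)  # about 3.44e-4
```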
12. (MATLAB) Use the program GRAPH_MANY_PRES to graphically investigate how well the Taylor polynomials T_n(x) (for n = 1, 2, ...) approximate the functions f(x) near x = x_0 (see Questions 1, 4, 5, 6, 7, 8).

Solution: On a given interval [a, b] containing x_0, plot T_n(x), n = 1, 2, ..., together with y = f(x), and observe how the fit between the Taylor polynomials and f near x_0 gets better as n increases. [Figure: an example for Question 4.]
1.5 Error and Asymptotic Error
1. Use Taylor’s Theorem to show that

\sqrt{1+x} = 1 + \frac{1}{2}x + O(x^2),

for x sufficiently small.

Solution: x_0 = 0 and

f(x) = f(0) + x f'(0) + \frac{x^2}{2} f''(0) + \ldots = f(0) + x f'(0) + O(x^2).    (1.5)

Calculating coefficients:

f(x) = (1+x)^{1/2} \implies f(0) = 1
f'(x) = \frac{1}{2}(1+x)^{-1/2} \implies f'(0) = \frac{1}{2}

Substituting into (1.5) yields

\sqrt{1+x} = 1 + \frac{1}{2}x + O(x^2).
2. Use Taylor’s Theorem to show that

e^x = 1 + x + O(x^2),

for x sufficiently small.

Solution: x_0 = 0 and f(x) = f'(x) = e^x \implies f(0) = f'(0) = 1. Thus, as in #1, e^x = 1 + x + O(x^2).
3. Use Taylor’s Theorem to show that

\frac{1 - \cos(x)}{x} = \frac{1}{2}x + O(x^3),

for x sufficiently small.

Solution: x_0 = 0 and

\cos(x) = 1 - \frac{1}{2!}x^2 + \frac{1}{4!}x^4 - \ldots
\implies 1 - \cos(x) = \frac{1}{2!}x^2 - \frac{1}{4!}x^4 + \ldots
\implies \frac{1 - \cos(x)}{x} = \frac{1}{2!}x - \frac{1}{4!}x^3 + \ldots = \frac{1}{2}x + O(x^3).
4. Show that \sin(x) = x + O(x^3).

Solution: x_0 = 0 and \sin(x) = x - \frac{1}{3!}x^3 + \ldots = x + O(x^3).
5. Recall the summation formula

1 + r + r^2 + r^3 + \ldots + r^n = \sum_{k=0}^{n} r^k = \frac{1 - r^{n+1}}{1-r}, \quad r \ne 1.

Use this to prove that for |r| < 1,

\sum_{k=0}^{n} r^k = \frac{1}{1-r} + O(r^{n+1}).

Hint: What is the definition of O notation?

Solution:

\sum_{k=0}^{n} r^k = \frac{1 - r^{n+1}}{1-r} = \frac{1}{1-r} - \frac{r^{n+1}}{1-r}.

We need to show that \frac{r^{n+1}}{1-r} = O(r^{n+1}) for n large. Using the definition of O notation and the fact that |r| < 1,

\lim_{n \to \infty} \left| \frac{r^{n+1}/(1-r)}{r^{n+1}} \right| = \lim_{n \to \infty} \frac{1}{|1-r|} = \frac{1}{|1-r|} < \infty.
6. Use the above result to show that 9 terms are all that is needed to compute

S = \sum_{k=0}^{\infty} e^{-k}

to within 10^{-4} absolute accuracy.

Solution:

\sum_{k=0}^{n} r^k = \frac{1 - r^{n+1}}{1-r} = \frac{1}{1-r} - \frac{r^{n+1}}{1-r} = S - \frac{r^{n+1}}{1-r}, \quad for |r| < 1.

We need to show that

\left| S - \sum_{k=0}^{n} r^k \right| = \left| \frac{r^{n+1}}{1-r} \right| < 10^{-4}

for r = \frac{1}{e} and n = 9. Indeed,

\frac{r^{n+1}}{1-r} = \frac{e^{-10}}{1 - e^{-1}} = 7.2 \times 10^{-5} < 10^{-4}.
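The tail bound can be checked directly in Python (a sketch; variable names are my own):

```python
import math

r = 1 / math.e
tail = r**10 / (1 - r)   # error of the n = 9 partial sum
print(tail)              # about 7.2e-5 < 1e-4

S = 1 / (1 - r)                      # exact sum of the infinite series
partial = sum(r**k for k in range(10))
print(abs(S - partial))              # the same quantity, computed directly
```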
7. Recall the summation formula

S = \sum_{k=1}^{n} k = \frac{n(n+1)}{2}.

Use this to show that

\sum_{k=0}^{n} k = \frac{1}{2}n^2 + O(n).

Solution:

\sum_{k=0}^{n} k = \frac{n(n+1)}{2} = \frac{n^2}{2} + \frac{n}{2} = \frac{1}{2}n^2 + O(n).
8. Use the definition of O to show that if y = y_h + O(h^p), then hy = hy_h + O(h^{p+1}).

Solution: Given y = y_h + O(h^p), i.e.

\lim_{h \to 0} \left| \frac{y - y_h}{h^p} \right| = C < \infty.

But then

\lim_{h \to 0} \left| \frac{hy - hy_h}{h^{p+1}} \right| = \lim_{h \to 0} \left| \frac{y - y_h}{h^p} \right| = C < \infty,

i.e. hy = hy_h + O(h^{p+1}).
9. Show that if a_n = O(n^p) and b_n = O(n^q), then a_n b_n = O(n^{p+q}).

Solution: Given that a_n = O(n^p) and b_n = O(n^q), i.e.

\lim_{n \to \infty} \left| \frac{a_n}{n^p} \right| = C_1 < \infty \quad and \quad \lim_{n \to \infty} \left| \frac{b_n}{n^q} \right| = C_2 < \infty,

then

\lim_{n \to \infty} \left| \frac{a_n b_n}{n^{p+q}} \right| = \lim_{n \to \infty} \left| \frac{a_n}{n^p} \right| \cdot \lim_{n \to \infty} \left| \frac{b_n}{n^q} \right| = C_1 C_2 < \infty,

i.e. a_n b_n = O(n^{p+q}).
10. (MATLAB) Recall that the infinite (classic) geometric series is

S = 1 + r + r^2 + r^3 + \cdots = \frac{1}{1-r}, \quad |r| < 1.

In fact we saw in Question 5 that

S = 1 + r + r^2 + r^3 + \ldots + r^n + O(r^{n+1}),

i.e., the error in using only n+1 terms in the approximation of S is O(r^{n+1}). The MATLAB program GEOMETRIC_PRES sums this finite series for a given r until the absolute error \left| S - \sum_{k=0}^{n} r^k \right| < tol, where ‘tol’ is a tolerance supplied by the user. Use this program with tol = 1 \times 10^{-6} and (a) r = 0.001 and (b) r = 0.999. Explain the difference in results for these two cases.

Solution: Running the MATLAB code yields that the number of terms needed to achieve the required accuracy is 3 for case (a) and 20713 for case (b). The series with r = 0.999 clearly converges much more slowly than the series with r = 0.001, as the error term for the former is O(0.999^{n+1}) while for the latter it is O(0.001^{n+1}), and an order 0.001^{n+1} term goes to zero much faster than an order 0.999^{n+1} term as n \to \infty.
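The term counts can be predicted without summing at all, since the tail after m terms is exactly r^m/(1-r). A Python sketch (GEOMETRIC_PRES itself sums term by term in MATLAB; `terms_needed` is my own name):

```python
import math

def terms_needed(r, tol=1e-6):
    # Smallest m with tail r^m/(1 - r) < tol, i.e. the number of terms
    # sum_{k=0}^{m-1} r^k needed, from the closed-form tail.
    return math.ceil(math.log(tol * (1 - r)) / math.log(r))

print(terms_needed(0.001))  # 3
print(terms_needed(0.999))  # 20713
```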
1.6 Computer Arithmetic
1. Perform the indicated computations in each of three ways: (i) exactly; (ii) using 3-digit decimal arithmetic, with chopping; (iii) using 3-digit decimal arithmetic, with rounding. For both approximations, compute the absolute error and the relative error.

(a) \frac{1}{6} + \frac{1}{10}    (b) \frac{1}{7} + \frac{1}{10} + \frac{1}{9}

Solution:

(a) (i) \frac{1}{6} + \frac{1}{10} = 0.2666...

(ii) fl(fl(1/6) + fl(1/10)) = fl(fl(0.1666...) + fl(0.1)) = fl(0.166 + 0.100) = fl(0.266) = 0.266    (chopping)

absolute error = |0.2666... - 0.266| = 0.000666...
relative error = \frac{0.000666...}{0.2666...} = 2.5 \times 10^{-3}    (2 significant figures)

(iii) fl(fl(1/6) + fl(1/10)) = fl(fl(0.1666...) + fl(0.1)) = fl(0.167 + 0.100) = fl(0.267) = 0.267    (rounding)

absolute error = |0.2666... - 0.267| = 3.33... \times 10^{-4}
relative error = \frac{0.000333...}{0.2666...} = 1.25 \times 10^{-3}    (2 significant figures)

(b) (i) \frac{1}{7} + \frac{1}{10} + \frac{1}{9} = 0.3539682...

(ii) fl(fl(1/7) + fl(1/10)) = fl(0.142 + 0.100) = fl(0.242) = 0.242, so

fl(0.242 + fl(1/9)) = fl(0.242 + 0.111) = fl(0.353) = 0.353    (chopping)

absolute error = |0.3539682... - 0.353| = 9.682 \times 10^{-4}
relative error = \frac{9.682... \times 10^{-4}}{0.3539682...} = 2.735 \times 10^{-3}    (4 significant figures)

(iii) Rounding is obtained via the same method, but as fl(1/7) = 0.143, this leads to the answer 0.354. Thus,

absolute error = |0.3539682... - 0.354| = 3.175 \times 10^{-5}
relative error = \frac{3.175... \times 10^{-5}}{0.3539682...} = 8.969 \times 10^{-5}    (4 significant figures)
2. For f(x) = \frac{e^x - 1}{x}, how many terms in the Taylor expansion are needed to get single precision accuracy (7 decimal digits) for all x \in [0, \frac{1}{2}]? How many terms are needed for double precision accuracy (14 decimal digits) over this same range?

Solution: Recall

e^x = 1 + x + \frac{x^2}{2!} + \ldots + \frac{x^n}{n!} + \frac{x^{n+1}}{(n+1)!} e^{\xi},

for \xi between 0 and x, and x \in [0, \frac{1}{2}]. Rearranging:

\frac{e^x - 1}{x} = 1 + \frac{x}{2!} + \frac{x^2}{3!} + \ldots + \frac{x^{n-1}}{n!} + \underbrace{\frac{x^n}{(n+1)!} e^{\xi}}_{R_n(x)}.

For single precision (i.e., rounding to t = 7 decimal places) find n such that (see your lecture notes for the formula we use)

|R_n(x)| < 5 \times 10^{-(t+1)} = 5 \times 10^{-8},

for all x \in [0, \frac{1}{2}]. Thus

\left| \frac{x^n}{(n+1)!} e^{\xi} \right| = \frac{|x|^n e^{\xi}}{(n+1)!} \le \frac{(1/2)^n e^{1/2}}{(n+1)!} \le 1.78 \times 10^{-8} \quad if n = 8

(and > 5 \times 10^{-8} if n = 7). So we need 9 terms, or n = 8.

For double precision we need n such that (see your lecture notes for the formula we use)

|R_n(x)| < 5 \times 10^{-(t+1)} = 5 \times 10^{-15},

where t = 14. The same procedure as before leads to

|R_n(x)| \le 2.308 \times 10^{-15} \quad if n = 13.

So we need 14 terms, or n = 13.
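The two cutoffs can be found by the same search in Python (a sketch; `min_n` is my own name):

```python
import math

def min_n(target):
    # Smallest n with (1/2)^n * e^(1/2) / (n+1)! below the target
    n = 1
    while (0.5**n) * math.exp(0.5) / math.factorial(n + 1) >= target:
        n += 1
    return n

print(min_n(5e-8))   # 8  (single precision)
print(min_n(5e-15))  # 13 (double precision)
```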
3. Using 3-digit decimal arithmetic, find three values a, b, and c such that

(a + b) + c \ne a + (b + c).

Solution: Let a = 1 \times 10^{-20}, b = 1 and c = -1. Then

(a + b) + c = (1 \times 10^{-20} + 1) - 1,

which with 3-digit decimal arithmetic yields 0. However,

a + (b + c) = 1 \times 10^{-20} + (1 - 1),

which with 3-digit decimal arithmetic yields 1 \times 10^{-20}.
4. Assume we are using 3-digit decimal arithmetic. For \epsilon = 0.0001 and a_1 = 5, compute

a_2 = a_0 + \frac{a_1}{\epsilon}

for a_0 equal to each of 1, 2, and 3. Comment.

Solution:

a_0 = 1:
fl(1/0.0001) = 1 \times 10^4
fl(5 \times 10^4) = 50000
fl(1 + 5 \times 10^4) = fl(50001) = 50000.

a_0 = 2: ... fl(2 + 5 \times 10^4) = fl(50002) = 50000

a_0 = 3: ... fl(3 + 5 \times 10^4) = fl(50003) = 50000

a_2 is the same in all three cases.
5. What is the machine epsilon for a computer that uses binary arithmetic, 24 bits for the fraction, and rounds? What if it chops?

Solution: \beta = 2 and t = 24. From the lecture notes,

\delta = \frac{1}{2} \beta^{1-t} = \frac{1}{2} \times 2^{1-24} = 0.596 \times 10^{-7}    (rounding)
\delta = \beta^{1-t} = 2^{1-24} = 1.19 \times 10^{-7}    (chopping)
6. What is the machine epsilon for a computer that uses binary arithmetic,
24 bits for the fraction, and rounds? What if it chops?
7. (MATLAB) Perform the following calculation in MATLAB:
>> 1000.00000000000000005-1000
What does MATLAB give as the answer? What is the exact answer?
Explain the discrepancy.
Solution: MATLAB gives 0 as the answer; however, the exact answer is 5 \times 10^{-17}. The discrepancy is due to the fact that MATLAB works in double precision (rounding numbers to about 16 significant figures), and so MATLAB stores the number 1000.00000000000000005 as 1000. This is a classic problem of loss of significance due to subtracting two nearly equal numbers. To be precise:

fl(fl(1000.00000000000000005) - fl(1000)) = fl(1000 - 1000) = fl(0) = 0.
8. (MATLAB) Perform the following calculation in MATLAB:
>> format long
>> 0.12345678-0.12345677
What does MATLAB give as the answer? What is the exact answer?
Explain the discrepancy.
Solution: MATLAB gives 1.000000000861423e-08 as the answer (i.e., 1.000000000861423 \times 10^{-8}). The exact answer is of course 1 \times 10^{-8}. What is puzzling about this is that both numbers in the calculation involve only 8 significant figures, which one might argue should be stored exactly in MATLAB (as it uses double precision). However, MATLAB works internally in binary arithmetic, and even the number 0.1 has an infinite binary expansion (which is 0.000110011...). Thus the loss of significance is again due to finite precision (in the binary system).
9. (MATLAB) Open the MATLAB program EPS_CALC_PRES by typing:
>> open eps_calc_pres
Explain what the program does. If necessary type
>> help while
Also run the program and compare the answer you get with the value
of machine epsilon in MATLAB, obtained by simply entering the commands
>> format long
>> eps
Solution: We approximate the machine epsilon \epsilon_M using the definition that \epsilon_M is the smallest positive number x such that fl(1 + x) > 1. We start with the number 1 and keep halving it while the condition 1 + x > 1 still holds. The final value is multiplied by 2, as the last test 1 + x > 1 failed, indicating that we cut x in half once too often. When we run the program we get the output 2.220446049250313e-16, which is the same answer we get if we type ‘eps’ in MATLAB.
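The same halving loop is two lines in Python (a sketch of what EPS_CALC_PRES does, not the program itself):

```python
# Halve x while 1 + x still exceeds 1 in floating-point arithmetic
x = 1.0
while 1.0 + x > 1.0:
    x /= 2
eps = 2 * x   # we halved once too often, so double back up
print(eps)    # 2.220446049250313e-16, matching MATLAB's eps
```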
10. (MATLAB) Recall the well-known expression for the number e:

e = \lim_{x \to \infty} \left( 1 + \frac{1}{x} \right)^x = 2.718281828459045....

The program EXP_CALC_PRES uses this result with integer x values to approximate e. Run this program and explain the pattern in the errors. What is the optimal x value?

Solution: Clearly from the tabulated results the absolute errors decrease to a minimum value for n (i.e., x) around 10^8. After that the errors increase to a maximum for n \ge 10^{16}. This is because for n \ge 10^{16} MATLAB computes 1 + 1/n as exactly 1 (the term 1/n falls below machine precision relative to 1), yielding the approximation (1 + 1/n)^n = 1. Thus the absolute error becomes |e - 1| = 1.718281828459..., which is what we see.
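The trade-off between truncation error (large for small n) and rounding error (large for huge n) shows up in a few lines of Python (a sketch; `err` is my own name):

```python
import math

def err(n):
    # Absolute error of the limit approximation (1 + 1/n)^n to e
    return abs(math.e - (1 + 1 / n)**n)

# Truncation error dominates for small n, rounding error for large n
print(err(10**4), err(10**8), err(10**12))
print((1 + 1 / 10**16)**10**16)  # exactly 1.0: 1 + 1e-16 rounds to 1
```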
Chapter 2
A Survey of Simple Methods and Tools
2.1 Horner’s Rule
1. Write the following polynomial in nested form: x^3 + 3x + 2.

Solution: 2 + 3x + x^3 = 2 + x(3 + x^2).
2. Write the following polynomial in nested form, but this time take advantage of the fact that it involves only even powers of x to minimize the computations: 1 - \frac{1}{2}x^2 + \frac{1}{24}x^4.

Solution:

1 - \frac{1}{2}x^2 + \frac{1}{24}x^4 = 1 + x^2 \left( -\frac{1}{2} + \frac{1}{24}x^2 \right).
3. Write the following polynomial in nested form: 1 - x^2 + \frac{1}{2}x^4 - \frac{1}{6}x^6.

Solution:

1 - x^2 + \frac{1}{2}x^4 - \frac{1}{6}x^6 = 1 + x^2 \left( (-1) + \frac{1}{2}x^2 - \frac{1}{6}x^4 \right)
 = 1 + x^2 \left( (-1) + x^2 \left( \frac{1}{2} - \frac{1}{6}x^2 \right) \right).

Note: it takes longer to work out 1 + x \cdot x \left( -1 + x \cdot x \left( \frac{1}{2} - \frac{1}{6} x \cdot x \right) \right).
4. Consider the polynomial

p(x) = 1 + (x-1) + \frac{1}{6}(x-1)(x-2) + \frac{1}{7}(x-1)(x-2)(x-4).

This can be written in “nested-like” form by factoring out each binomial term as far as it will go, thus:

p(x) = 1 + (x-1) \left( 1 + (x-2) \left( \frac{1}{6} + \frac{1}{7}(x-4) \right) \right).

Write the following polynomial in this kind of nested form:

p(x) = -1 + \frac{6}{7}\left(x - \frac{1}{2}\right) - \frac{5}{21}\left(x - \frac{1}{2}\right)(x-4) + \frac{1}{7}\left(x - \frac{1}{2}\right)(x-4)(x-2).

Solution:

p(x) = -1 + \frac{6}{7}\left(x - \frac{1}{2}\right) - \frac{5}{21}\left(x - \frac{1}{2}\right)(x-4) + \frac{1}{7}\left(x - \frac{1}{2}\right)(x-4)(x-2)
 = -1 + \left(x - \frac{1}{2}\right) \left[ \frac{6}{7} - \frac{5}{21}(x-4) + \frac{1}{7}(x-4)(x-2) \right]
 = -1 + \left(x - \frac{1}{2}\right) \left[ \frac{6}{7} + (x-4) \left( -\frac{5}{21} + \frac{1}{7}(x-2) \right) \right].
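That the nested-like form really is the same polynomial can be checked at a few sample points; a Python sketch (function names are my own):

```python
def p_newton(x):
    # Newton form of p(x) from Question 4, written term by term
    return (-1 + (6 / 7) * (x - 0.5)
            - (5 / 21) * (x - 0.5) * (x - 4)
            + (1 / 7) * (x - 0.5) * (x - 4) * (x - 2))

def p_nested(x):
    # The nested-like form derived in the solution
    return -1 + (x - 0.5) * (6 / 7 + (x - 4) * (-5 / 21 + (x - 2) / 7))

for x in (-2.0, 0.0, 0.5, 1.0, 3.7):
    assert abs(p_newton(x) - p_nested(x)) < 1e-12
print("nested form agrees")
```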
2.2 Difference Approximations
1. Compute, by hand, approximations to f'(1) for each of the following functions, with h = 1/16, using the Forward and Centered Difference Approximations: (a) f(x) = \arctan(x), (b) f(x) = e^{-x}.

Solution:

For (a) f(x) = \arctan(x):

Forward Difference Approximation:

\frac{f(x+h) - f(x)}{h} = \frac{\arctan(1 + \frac{1}{16}) - \arctan(1)}{1/16} = \frac{0.8157 - \frac{\pi}{4}}{1/16} = 0.485 \approx f'(1)    (3 decimal places)

Centered Difference Approximation:

\frac{f(x+h) - f(x-h)}{2h} = \frac{\arctan(1 + \frac{1}{16}) - \arctan(1 - \frac{1}{16})}{2/16} = 0.500    (3 decimal places) ... which is better.

Observe that f'(x) = \frac{1}{x^2 + 1}, so f'(1) = 0.5.

For (b) f(x) = e^{-x}:

Forward Difference Approximation:

\frac{f(x+h) - f(x)}{h} = \frac{e^{-(1 + \frac{1}{16})} - e^{-1}}{1/16} = -0.357    (3 decimal places)

Centered Difference Approximation:

\frac{f(x+h) - f(x-h)}{2h} = \frac{e^{-(1 + \frac{1}{16})} - e^{-(1 - \frac{1}{16})}}{2/16} = -0.368    (3 decimal places) ... which is better.

Observe that f'(x) = -e^{-x}, so f'(1) = -e^{-1} \approx -0.368 to 3 decimal places.
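The hand computations above are easy to reproduce; a Python sketch (the course uses MATLAB; `fda` and `cda` are my own names):

```python
import math

h = 1 / 16

def fda(f, x):
    # Forward difference approximation to f'(x)
    return (f(x + h) - f(x)) / h

def cda(f, x):
    # Centered difference approximation to f'(x)
    return (f(x + h) - f(x - h)) / (2 * h)

print(fda(math.atan, 1), cda(math.atan, 1))  # ~0.4847 and ~0.5003
f = lambda x: math.exp(-x)
print(fda(f, 1), cda(f, 1))                  # ~-0.3566 and ~-0.3681
```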
2. Use the Forward Difference Approximation to approximate f'(1), the Centered Difference Approximation to approximate f'(1.1), and the Backward Difference Approximation to approximate f'(1.2), using the data in the table below:

x      f(x)
1.00   1.0000000000
1.10   0.9513507699
1.20   0.9181687424

Solution: Using h = 0.1:

Forward: \frac{f(1 + 0.1) - f(1)}{0.1} = \frac{0.9513... - 1.00}{0.1} = -0.487 \approx f'(1)    (3 s.f.)

Centered: \frac{f(1.1 + 0.1) - f(1.1 - 0.1)}{2(0.1)} = \frac{0.91816... - 1.00}{0.2} = -0.409 \approx f'(1.1)    (3 s.f.)

Backward: \frac{f(1.2) - f(1.2 - 0.1)}{0.1} = \frac{0.91816... - 0.95135...}{0.1} = -0.332 \approx f'(1.2)    (3 s.f.)
3. Use the Centered Difference Approximation to the first derivative to prove that this approximation will be exact for any quadratic polynomial.

Solution: Let p(x) = a + bx + cx^2, so p'(x) = b + 2cx. Using the centered difference approximation,

\frac{p(x+h) - p(x-h)}{2h} = \frac{[a + b(x+h) + c(x+h)^2] - [a + b(x-h) + c(x-h)^2]}{2h}
 = \frac{2bh + c(x+h)^2 - c(x-h)^2}{2h}
 = b + \frac{c}{2h}\left[ (x+h)^2 - (x-h)^2 \right]
 = b + \frac{c}{2h}\left[ (x^2 + 2xh + h^2) - (x^2 - 2xh + h^2) \right]
 = b + \frac{4cxh}{2h}
 = b + 2cx = p'(x).
4. Find coefficients A, B, and C so that f'(x) = A f(x) + B f(x+h) + C f(x+2h) + O(h^2) (hard!). Hint: Use Taylor’s theorem.

Solution:

f(x+h) = f(x) + h f'(x) + \frac{h^2}{2!} f''(x) + \frac{h^3}{3!} f'''(c_1),    (2.1)

for c_1 between x and x+h;

f(x+2h) = f(x) + 2h f'(x) + \frac{(2h)^2}{2!} f''(x) + \frac{(2h)^3}{3!} f'''(c_2),    (2.2)

for c_2 between x and x+2h. Now computing 4 \times (2.1) - (2.2) gives

4f(x+h) - f(x+2h) = 3f(x) + 2h f'(x) + \frac{h^3}{6}\left[ 4 f'''(c_1) - 8 f'''(c_2) \right]
 = 3f(x) + 2h f'(x) + \frac{2h^3}{3}\left[ f'''(c_1) - 2 f'''(c_2) \right].

So dividing by 2h yields

\frac{2}{h} f(x+h) - \frac{1}{2h} f(x+2h) = \frac{3}{2h} f(x) + f'(x) + O(h^2),

or

f'(x) = -\frac{3}{2h} f(x) + \frac{2}{h} f(x+h) - \frac{1}{2h} f(x+2h) + O(h^2).

Thus A = -\frac{3}{2h}, B = \frac{2}{h}, and C = -\frac{1}{2h}.
5. Use Taylor’s Theorem to show that the approximation

f'(x) \approx \frac{8f(x+h) - 8f(x-h) - f(x+2h) + f(x-2h)}{12h}

is O(h^4) (hard!). Again, use Taylor’s theorem.

Solution: Expand to powers of h^4 (with remainder term O(h^5)), as we then divide by 12h:

f(x+h) = f(x) + h f'(x) + \frac{h^2}{2!} f''(x) + \frac{h^3}{3!} f'''(x) + \frac{h^4}{4!} f^{(iv)}(x) + \frac{h^5}{5!} f^{(v)}(c_1)

f(x-h) = f(x) - h f'(x) + \frac{h^2}{2!} f''(x) - \frac{h^3}{3!} f'''(x) + \frac{h^4}{4!} f^{(iv)}(x) - \frac{h^5}{5!} f^{(v)}(c_2)

f(x+2h) = f(x) + 2h f'(x) + \frac{(2h)^2}{2!} f''(x) + \frac{(2h)^3}{3!} f'''(x) + \frac{(2h)^4}{4!} f^{(iv)}(x) + \frac{(2h)^5}{5!} f^{(v)}(c_3)
 = f(x) + 2h f'(x) + 2h^2 f''(x) + \frac{4h^3}{3} f'''(x) + \frac{2h^4}{3} f^{(iv)}(x) + \frac{4h^5}{15} f^{(v)}(c_3)

f(x-2h) = f(x) - 2h f'(x) + 2h^2 f''(x) - \frac{4h^3}{3} f'''(x) + \frac{2h^4}{3} f^{(iv)}(x) - \frac{4h^5}{15} f^{(v)}(c_4)

Thus,

8f(x+h) - 8f(x-h) - f(x+2h) + f(x-2h) = 16h f'(x) - 4h f'(x) + \hat{c} h^5 = 12h f'(x) + \hat{c} h^5,

where \hat{c} depends on f^{(v)}(c_i) for i = 1, 2, 3, 4. So dividing by 12h yields

\frac{8f(x+h) - 8f(x-h) - f(x+2h) + f(x-2h)}{12h} = f'(x) + c h^4,

where c = \frac{\hat{c}}{12}. Rearranging,

f'(x) = \frac{8f(x+h) - 8f(x-h) - f(x+2h) + f(x-2h)}{12h} + O(h^4).
6. Use the derivative approximation from the previous problem with h = 0.1 to evaluate f'(1.2), using the data in the following table:

x      f(x)
1.00   1.0000000000
1.10   0.9513507699
1.20   0.9181687424
1.30   0.8974706963
1.40   0.8872638175

Solution:

f'(1.2) \approx \frac{8f(1.2 + 0.1) - 8f(1.2 - 0.1) - f(1.2 + 0.2) + f(1.2 - 0.2)}{12(0.1)}
 = \frac{8f(1.3) - 8f(1.1) - f(1.4) + f(1.0)}{1.2}
 = \frac{8(0.89747...) - 8(0.95135...) - (0.88726...) + 1.0}{1.2}
 = -0.265    (3 decimal places)
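The same tabulated computation in Python (a sketch; the dictionary layout is my own choice):

```python
# Table data from Question 6 (x = 1.0, 1.1, 1.2, 1.3, 1.4)
f = {1.0: 1.0000000000, 1.1: 0.9513507699, 1.2: 0.9181687424,
     1.3: 0.8974706963, 1.4: 0.8872638175}
h = 0.1

# O(h^4) five-point-style approximation from Question 5
deriv = (8 * f[1.3] - 8 * f[1.1] - f[1.4] + f[1.0]) / (12 * h)
print(round(deriv, 3))  # -0.265
```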
7. Use Taylor expansions for f(x \pm h) to derive an O(h^2) accurate approximation to f''(x) using f(x) and f(x \pm h). Provide all the details of the error estimate. Hint: go out as far as the fourth derivative term, and then add the two expansions.

Solution: See class notes.
8. (MATLAB) Use the programs FDA_PRES, BDA_PRES and CDA_PRES to investigate how the errors in difference approximations change with a decrease in the step size h. Explain what you see. (Note: the above programs compute approximations to the derivative of a function y = f(x) at some point x_0 using the Forward, Backward and Centered Difference Approximations respectively.)

Solution: Just run the programs and see. They all show pretty much the same thing, so we just show the output for a single example. We looked at the example of f(x) = \ln(x) (entered as log(x) in MATLAB) with x_0 = 1. The Forward Difference Approximation was used to approximate the derivative of this function. N was chosen as 16:

h                     approx        |error|
1/10                  0.95310180    0.0468982019567507
1/100                 0.99503309    0.0049669146831908
1/1000                0.99950033    0.0004996669165768
1/10000               0.99995000    0.0000499966670269
1/100000              0.99999500    0.0000049999601159
1/1000000             0.99999950    0.0000005000819332
1/10000000            0.99999995    0.0000000494161295
1/100000000           0.99999999    0.0000000110774709
1/1000000000          1.00000008    0.0000000822403710
1/10000000000         1.00000008    0.0000000826903710
1/100000000000        1.00000008    0.0000000827353710
1/1000000000000       1.00008890    0.0000889005818410
1/10000000000000      0.99920072    0.0007992778374091
1/100000000000000     0.99920072    0.0007992778373641
1/1000000000000000    1.11022302    0.1102230246251559
1/10000000000000000   0.00000000    1.0000000000000000

As expected, as we decrease h the error of the approximation initially decreases, until we obtain the least error around h = 10^{-8}, after which the error begins to rise again due to rounding error.
2.3 Euler’s Method
1. Use Euler’s method with h = 0.25 to compute approximate solution values for the initial value problem

y' = \sin(t + y), \quad y(0) = 1.

You should eventually get y_4 = 1.851566895 (be sure that your calculator is set to radians).

Solution:

y_{n+1} = y_n + h \sin(t_n + y_n), \quad t_n = nh = 0.25n.

So

y_{n+1} = y_n + 0.25 \sin(0.25n + y_n), \quad y_0 = y(0) = 1.

n = 0: y_1 = y_0 + 0.25 \sin(0 + y_0) = 1 + 0.25 \sin(1) = 1.2103677462... (\approx y(0.25))
n = 1: y_2 = y_1 + 0.25 \sin(0.25 + y_1) = 1.45884498566... (\approx y(0.5))
n = 2: y_3 = y_2 + 0.25 \sin(0.5 + y_2) = 1.6902572796... (\approx y(0.75))
n = 3: y_4 = y_3 + 0.25 \sin(0.75 + y_3) = 1.85156689533... (\approx y(1))
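The four Euler steps collapse to a short loop; a Python sketch (the course's EULER_PRES is MATLAB):

```python
import math

h = 0.25
t, y = 0.0, 1.0
for _ in range(4):
    y = y + h * math.sin(t + y)   # Euler step for y' = sin(t + y)
    t += h
print(y)  # 1.851566895..., matching the hand computation
```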
2. Use Euler’s method with h = 0.25 to compute approximate solution values for

y' = e^{t-y}, \quad y(0) = -1.

What approximate value do you get for y(1) = 0.7353256638?

Solution:

y_{n+1} = y_n + h f(t_n, y_n) = y_n + 0.25 e^{t_n - y_n} = y_n + 0.25 e^{0.25n - y_n},

with y_0 = y(0) = -1.

n = 0: y_1 = y_0 + 0.25 e^{0 - y_0} = -1 + 0.25 e^1 = -0.320429542885... (\approx y(0.25))
n = 1: y_2 = y_1 + 0.25 e^{0.25 - y_1} = 0.121827147394... (\approx y(0.5))
n = 2: y_3 = y_2 + 0.25 e^{0.5 - y_2} = 0.486730952236... (\approx y(0.75))
n = 3: y_4 = y_3 + 0.25 e^{0.75 - y_3} = 0.812025139875... (\approx y(1))
3. Repeat the above with h = 0.125. What value do you now get for y_8 \approx y(1)?

Solution:

y_{n+1} = y_n + h f(t_n, y_n) = y_n + 0.125 e^{t_n - y_n} = y_n + 0.125 e^{0.125n - y_n},

with y_0 = y(0) = -1.

n = 0: y_1 = y_0 + 0.125 e^{0 - y_0} = -0.660214771443... (\approx y(0.125))
n = 1: y_2 = y_1 + 0.125 e^{0.125 - y_1} = -0.38610503923... (\approx y(0.25))
n = 2: y_3 = y_2 + 0.125 e^{0.25 - y_2} = -0.149966473291... (\approx y(0.375))
n = 3: y_4 = y_3 + 0.125 e^{0.375 - y_3} = 0.061333798435... (\approx y(0.5))
n = 4: y_5 = y_4 + 0.125 e^{0.5 - y_4} = 0.255163498508... (\approx y(0.625))
n = 5: y_6 = y_5 + 0.125 e^{0.625 - y_5} = 0.436100739954... (\approx y(0.75))
n = 6: y_7 = y_6 + 0.125 e^{0.75 - y_6} = 0.607194720154... (\approx y(0.875))
n = 7: y_8 = y_7 + 0.125 e^{0.875 - y_7} = 0.770581294899... (\approx y(1))
4. (MATLAB) Use the program EULER_PRES to compute approximations to each of the following initial value problems on [0, 1], using M = 2, 4, 8, 16 steps. (Note: in MATLAB, log means natural log.)

(i) f(t, y) = -y + \sin(t), y(0) = 1, where y(t) = (3/2)\exp(-t) + (\sin(t) - \cos(t))/2,
(ii) f(t, y) = t - y, y(0) = 2, where y(t) = 3\exp(-t) + t - 1,
(iii) f(t, y) = \exp(t - y), y(0) = -1, where y(t) = \log(\exp(t) - 1 + \exp(-1)),
(iv) f(t, y) = -y\log(y), y(0) = 3, where y(t) = \exp(\log(3)\exp(-t)).

Where is the maximum error on [0, 1] for each value of h = 1/M? How does the approximation change as h is reduced?

Solution: Just run the program and see. In general we expect the maximum error to be close to t = 1 and the errors to decrease as h \to 0. However, as we saw in our lecture notes, Euler’s Method converges quite slowly. An example of the program output (with the graph) is given for example (iii) below, with h = 1/4:

t       exact        approx       |error|
0.000   -1.000000    -1.000000    0.000000
0.250   -0.427857    -0.320430    0.107427
0.500    0.016464     0.121827    0.105363
0.750    0.395334     0.486731    0.091397
1.000    0.735326     0.812025    0.076699
2.4 Linear Interpolation
1. Use linear interpolation to find an approximation to erf(0.34), where f(x) = erf(x) is the error function, using the data in the following table:

x     f(x)
0.3   0.32862675945913
0.4   0.42839235504667

Also, give an upper bound on the error in the approximation.

Solution: Choose x_0 = 0.3 and x_1 = 0.4, so that [a, b] = [0.3, 0.4].

P_1(x) = \frac{x - x_1}{x_0 - x_1} f(x_0) + \frac{x - x_0}{x_1 - x_0} f(x_1).

With x = 0.34,

P_1(0.34) = 0.6(0.3286...) + 0.4(0.4283...) = 0.3685    (4 significant figures).

For the error in the approximation we need the 2nd derivative of f(x) = erf(x), which is given by

f''(x) = -\frac{4x}{\sqrt{\pi}} e^{-x^2}.

Without resorting to a graphical approach this is difficult to bound. A crude upper bound is

|f''(x)| \le \frac{4(0.4)}{\sqrt{\pi}} e^{-(0.3)^2} = 0.8250087277 = M,

where we have bounded using the fact that y_1 = x is increasing on [0.3, 0.4] and y_2 = e^{-x^2} is decreasing on [0.3, 0.4] (maximize the numerator, minimize the denominator). Thus from the error bound formula, with M = \max |f''(x)| for all x \in [x_0, x_1],

|f(x) - P_1(x)| \le \frac{M}{8}(x_1 - x_0)^2 = \frac{0.8250087277}{8}(0.4 - 0.3)^2 = 0.001031    (4 significant figures).

Check: using erf(x) = \frac{2}{\sqrt{\pi}} \int_0^x e^{-t^2}\,dt on my TI-89 graphing calculator I get

|f(0.34) - P_1(0.34)| = \left| \frac{2}{\sqrt{\pi}} \int_0^{0.34} e^{-t^2}\,dt - 0.3685329977 \right| = |0.369364529345... - 0.3685329...| = 0.000831...,

consistent with our result above.
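Python's standard library has `math.erf`, so the whole computation can be verified in a few lines (a sketch; variable names are my own):

```python
import math

x0, x1 = 0.3, 0.4
f0, f1 = 0.32862675945913, 0.42839235504667

def p1(x):
    # Linear interpolant through (x0, f0) and (x1, f1)
    return (x - x1) / (x0 - x1) * f0 + (x - x0) / (x1 - x0) * f1

approx = p1(0.34)
M = 4 * 0.4 / math.sqrt(math.pi) * math.exp(-0.3**2)  # crude bound on |f''|
bound = M / 8 * (x1 - x0)**2
print(approx)                        # 0.3685329977...
print(bound)                         # about 0.001031
print(abs(math.erf(0.34) - approx))  # true error ~8.3e-4, below the bound
```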
2. The gamma function, denoted by \Gamma(x), occurs in a number of applications, most notably probability theory and the solution of certain differential equations. It is basically the generalization of the factorial function to non-integer values, in that \Gamma(n+1) = n!. The table below gives values of \Gamma(x) for x between 1 and 2. Use linear interpolation to approximate \Gamma(1.005) = 0.99713853525101....

x      \Gamma(x)
1.00   1.0000000000
1.10   0.9513507699
1.20   0.9181687424
1.30   0.8974706963
1.40   0.8872638175
1.50   0.8862269255
1.60   0.8935153493
1.70   0.9086387329
1.80   0.9313837710
1.90   0.9617658319
2.00   1.0000000000

Solution: Choose x_0 = 1.00 and x_1 = 1.10. Set f(x) = \Gamma(x).

P_1(x) = \frac{x - x_1}{x_0 - x_1} f(x_0) + \frac{x - x_0}{x_1 - x_0} f(x_1).

With x = 1.005,

P_1(1.005) = 0.95(1.00) + 0.05(0.9513507699) = 0.9976    (4 significant figures).
3. Construct a linear interpolating polynomial to the function f(x) = x^{-1} using x_0 = 1/2 and x_1 = 1 as nodes. What is the upper bound on the error over the interval [1/2, 1], according to the error estimate?

Solution:

P_1(x) = \frac{x - x_1}{x_0 - x_1} f(x_0) + \frac{x - x_0}{x_1 - x_0} f(x_1)
 = \frac{x - 1}{\frac{1}{2} - 1} \cdot 2 + \frac{x - \frac{1}{2}}{1 - \frac{1}{2}} \cdot 1
 = 3 - 2x.

For the error bound we need f''(x):

f'(x) = -x^{-2} \quad and \quad f''(x) = 2x^{-3} = \frac{2}{x^3}.

We need an upper bound for f''(x) on [\frac{1}{2}, 1]. Noting that x^3 is increasing on [\frac{1}{2}, 1] \implies \frac{2}{x^3} is decreasing on [\frac{1}{2}, 1] and positive throughout, thus

|f''(x)| = 2|x^{-3}| \le 2\left(\frac{1}{2}\right)^{-3} = 2(8) = 16 = M.

So using the error bound formula,

|f(x) - P_1(x)| \le \frac{M}{8}(x_1 - x_0)^2 = \frac{16}{8}\left(1 - \frac{1}{2}\right)^2 = 2 \cdot \frac{1}{4} = \frac{1}{2}.
4. Repeat the above for f(x) = x^{1/3}, using the interval [1/8, 1].

Solution:

P_1(x) = \frac{x - x_1}{x_0 - x_1} f(x_0) + \frac{x - x_0}{x_1 - x_0} f(x_1)
 = \frac{x - 1}{\frac{1}{8} - 1} \cdot \left(\frac{1}{8}\right)^{1/3} + \frac{x - \frac{1}{8}}{1 - \frac{1}{8}} \cdot 1
 = \frac{4}{7}x + \frac{3}{7}, \quad or \quad \frac{1}{7}(4x + 3).

Now

f'(x) = \frac{1}{3} x^{-2/3}
f''(x) = -\frac{2}{9} x^{-5/3} = -\frac{2}{9 x^{5/3}}.

Where is |f''(x)| a maximum on [\frac{1}{8}, 1]? Now (positive) powers of x and their reciprocals are maximized or minimized at one of the end-points of the interval. Thus the absolute values of these functions are maximized at one of the end-points. To find which end-point leads to a maximum we check each and see:

f''\left(\frac{1}{8}\right) = -7.111... \quad and \quad f''(1) = -0.222...

\implies |f''(x)| \le 7.111... = M. Thus from the error bound formula,

|f(x) - P_1(x)| \le \frac{M}{8}(x_1 - x_0)^2 = \frac{7.111...}{8}\left(1 - \frac{1}{8}\right)^2 = 0.6806    (4 significant figures)

Note: alternatively, to find the maximum value of |f''| we could have argued as follows. Because we took absolute values we can ignore the sign of f''. As x^{5/3} is increasing, |f''| is decreasing, and so |f''| is maximized at the left end-point. [Figure: graph of |f''| on [1/8, 1].]
5. (MATLAB) Apply the program INTEROL_PRES to Question 3.

Solution: Running the program with a = 1/2, b = 1 with 2 nodes yields: [Figure: program output showing the linear interpolant of Question 3.]
2.5 Trapezoid Rule
1
1. Apply the trapezoid rule with h = , to approximate the integral
8
Z 1
1
√
dx = 0.92703733865....
I=
1 + x4
0
How small does h have to be to get an error less that 10−3 ? 10−6 ?
Solution:

    f(x) = 1/√(1 + x^4),   h = 1/8,   [a, b] = [0, 1],   n = (b - a)/h = (1 - 0)/(1/8) = 8.

    i    x_i    f(x_i)
    0    0      1
    1    1/8    0.99987...
    2    1/4    0.99805...
    3    3/8    0.99025...
    4    1/2    0.97014...
    5    5/8    0.93145...
    6    3/4    0.87157...
    7    7/8    0.79400...
    8    1      0.70710...

    T8(f) = (1/2)h(f(x_0) + 2f(x_1) + ... + 2f(x_7) + f(x_8))
          = (1/2)(1/8)(1 + 2(0.99987...) + 2(0.99805...) + ... + 2(0.79400...) + 0.70710...)
          = 0.926115180158...
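The value of T8(f) can be reproduced with a few lines of Python (a sketch of the composite trapezoid rule; the function name is my own):

```python
import math

def trapezoid(f, a, b, n):
    """Composite trapezoid rule with n equal subintervals of [a, b]."""
    h = (b - a) / n
    interior = sum(f(a + k * h) for k in range(1, n))
    return 0.5 * h * (f(a) + 2.0 * interior + f(b))

f = lambda x: 1.0 / math.sqrt(1.0 + x ** 4)
T8 = trapezoid(f, 0.0, 1.0, 8)    # h = 1/8
I = 0.92703733865                 # reference value of the integral
err = abs(I - T8)                 # actual error with h = 1/8, about 9.2e-4
```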
In order to estimate the error we need the 2nd derivative of f(x) = (1 + x^4)^{-1/2}:

    f'(x) = -(1/2)(1 + x^4)^{-3/2} · 4x^3 = -2x^3(1 + x^4)^{-3/2}.

Using the product rule we can show that

    f''(x) = 6x^2(x^4 - 1)/(x^4 + 1)^{5/2}.
This is a complicated function to get an upper bound on. We would use a combination of calculus and curve sketching techniques to deduce the following diagram:

Alternatively, you could get an upper bound by maximizing and minimizing the numerator and denominator in f'' respectively. (Note: I wouldn't give you one as involved as this in our tests, but the principle is still important.) Now recall from lecture notes that to get an error ≤ 10^{-3} we require

    |I(f) - T_n(f)| ≤ ((b - a)/12) h^2 M = (h^2/12)(1.39275) ≤ 10^{-3},     (2.3)

where M = max |f''(x)| for all x ∈ [a, b] (as indicated in the diagram). Rearranging, we require

    h^2 ≤ 0.008616047...   or   h ≤ 0.0928227...

But h must divide b - a = 1, so as n = (b - a)/h, we have

    h = (b - a)/n = 1/n ≤ 0.0928227...  ⟹  n ≥ 10.77,

so choose n = 11 ⟹ h = 1/11 ≈ 0.0909. As in (2.3) we seek h such that

    (h^2/12)(1.39275) ≤ 10^{-6}  ⟹  h^2 ≤ 8.616047 × 10^{-6}   or   h ≤ 0.0029353104...
As before

    h = 1/n ≤ 0.0029353104...  ⟹  n ≥ 340.679...,

so with n = 341, ⟹ h = 1/341 = 0.00293255132...

2. Use the trapezoid rule with h = π/4, to approximate the integral

    I = ∫_0^{π/2} sin(x) dx = 1.

How small does h have to be to get an error less than 10^{-3}? 10^{-6}?
Solution:

    f(x) = sin(x),   h = π/4,   [a, b] = [0, π/2],   n = (b - a)/h = (π/2 - 0)/(π/4) = 2.

    i    x_i    f(x_i)
    0    0      0
    1    π/4    1/√2
    2    π/2    1

    T2(f) = (1/2)h(f(x_0) + 2f(x_1) + f(x_2))
          = (1/2)(π/4)(0 + 2/√2 + 1)
          = ((√2 + 1)/8)π ≈ 0.948059...

(Note: exact answer is 1.)

In order to estimate the error we need the 2nd derivative of f(x) = sin(x):

    f'(x) = cos(x)   and   f''(x) = -sin(x).

Thus |f''(x)| = |sin(x)| ≤ 1 = M on [0, π/2]. From the error bound formula we need

    |I(f) - T_n(f)| ≤ ((b - a)/12) h^2 M = ((π/2)/12) h^2 · 1 = (π/24) h^2 ≤ 10^{-3}     (2.4)
    ⟹ h^2 ≤ 0.007639437...   so   h ≤ 0.08740387...

But h must divide b - a = π/2, so noting that h = (b - a)/n = (π/2)/n ≤ 0.08740387... ⟹ n ≥ 17.97.

So choose n = 18 ⟹ h = (π/2)/18 = 0.087266...

From (2.4) we also seek h such that

    (π/24) h^2 ≤ 10^{-6}  ⟹  h^2 ≤ 7.639... × 10^{-6}  ⟹  h ≤ 0.002763953...

As before

    h = (π/2)/n ≤ 0.002763953...  ⟹  n ≥ 568.315...

Choosing n = 569 ⟹ h = (π/2)/569 = 0.0027606...
3. Apply the trapezoid rule with h = 1/8, to approximate the integral

    I = ∫_0^1 x(1 - x^2) dx = 1/4.

How small does h have to be to get an error less than 10^{-3}? 10^{-6}?
Solution:

    f(x) = x(1 - x^2),   h = 1/8,   [a, b] = [0, 1],   n = (b - a)/h = (1 - 0)/(1/8) = 8.

    i    x_i    f(x_i)
    0    0      0
    1    1/8    63/512
    2    1/4    15/64
    3    3/8    165/512
    4    1/2    3/8
    5    5/8    195/512
    6    3/4    21/64
    7    7/8    105/512
    8    1      0

    T8(f) = (1/2)h(f(x_0) + 2f(x_1) + ... + 2f(x_7) + f(x_8))
          = (1/2)(1/8)(0 + 2(63/512) + 2(15/64) + ... + 2(105/512) + 0)
          = 63/256
          = 0.2461   (4 significant figures)

Note: exact answer is 1/4. In order to estimate the error we need the 2nd derivative of f(x) = x(1 - x^2):

    f'(x) = 1 - 3x^2  ⟹  f''(x) = -6x.

So |f''(x)| = 6|x| ≤ 6 = M on [0, 1]. From the error bound formula we need h such that

    |I(f) - T_n(f)| ≤ ((b - a)/12) h^2 M = (h^2/12)(6) ≤ 10^{-3}.

Rearranging, we require

    h^2 ≤ 0.002   or   h ≤ 0.0447213595...     (2.5)

But h must divide b - a = 1, so as

    h = (b - a)/n = 1/n ≤ 0.0447213595...  ⟹  n ≥ 22.36...,

so choose n = 23 ⟹ h = 1/23 = 0.04347826. For an error less than 10^{-6} we need h such that

    |I(f) - T_n(f)| ≤ ((b - a)/12) h^2 M = (h^2/12)(6) ≤ 10^{-6}.

Rearranging, we require

    h^2 ≤ 2 × 10^{-6}   or   h ≤ 0.00141421356...

As before

    h = 1/n ≤ 0.00141421356...  ⟹  n ≥ 707.106...,

so with n = 708, ⟹ h = 1/708 = 0.00141242937...
4. Apply the trapezoid rule with h = 1/8, to approximate the integral

    I = ∫_0^1 ln(1 + x) dx = 2 ln(2) - 1.

How small does h have to be to get an error less than 10^{-3}? 10^{-6}?

Solution:

    f(x) = ln(1 + x),   h = 1/8,   [a, b] = [0, 1],   n = (b - a)/h = (1 - 0)/(1/8) = 8.

    i    x_i    f(x_i)
    0    0      0
    1    1/8    ln(9/8)
    2    1/4    ln(5/4)
    3    3/8    ln(11/8)
    4    1/2    ln(3/2)
    5    5/8    ln(13/8)
    6    3/4    ln(7/4)
    7    7/8    ln(15/8)
    8    1      ln(2)

    T8(f) = (1/2)h(f(x_0) + 2f(x_1) + ... + 2f(x_7) + f(x_8))
          = (1/2)(1/8)(0 + 2 ln(9/8) + 2 ln(5/4) + ... + 2 ln(15/8) + ln(2))
          = 0.3856   (4 significant figures)

(Note: exact answer is 0.3863 to 4 significant figures.) In order to estimate the error we need the 2nd derivative of f(x) = ln(1 + x):

    f'(x) = 1/(1 + x)  ⟹  f''(x) = -1/(1 + x)^2.

So |f''(x)| = 1/(1 + x)^2 ≤ 1/(1 + 0)^2 = 1 = M on [0, 1]. From the error bound formula we need h such that

    |I(f) - T_n(f)| ≤ ((b - a)/12) h^2 M = (h^2/12)(1) ≤ 10^{-3}.     (2.6)

Rearranging, we require

    h^2 ≤ 0.012   or   h ≤ 0.10954451150...

But h must divide b - a = 1, so as

    h = (b - a)/n = 1/n ≤ 0.10954451150...  ⟹  n ≥ 9.12...,

so choose n = 10 ⟹ h = 1/10 = 0.1. For an error less than 10^{-6} we need h such that

    |I(f) - T_n(f)| ≤ ((b - a)/12) h^2 M = (h^2/12)(1) ≤ 10^{-6}.

Rearranging, we require

    h^2 ≤ 1.2 × 10^{-5}   or   h ≤ 0.003464101615...

As before

    h = 1/n ≤ 0.003464101615...  ⟹  n ≥ 288.67...,

so with n = 289, ⟹ h = 1/289 = 0.0034602076...
5. (MATLAB) Apply the program TRAPEZOID_PRES to numerically verify any of the above results. For example, what is the actual error incurred for Question 3 when using h = 1/23?

Solution: Running the program yields that the approximate area is 0.249483. The exact answer is 1/4, thus the error is about |1/4 - 0.249483| = 5.17 × 10^{-4}, which is consistent with the theoretical result (|error| ≤ 10^{-3}). The approximation is illustrated below:
2.6 Solution of Tridiagonal Linear Systems

1. Use the tridiagonal algorithm to solve the following system of equations:

    [ 4  2  0  0 ] [x1]   [  π/9  ]
    [ 1  4  1  0 ] [x2] = [ √3/2  ]
    [ 0  1  4  1 ] [x3]   [ √3/2  ]
    [ 0  0  2  4 ] [x4]   [ -π/9  ]

Solution:

    [ 4  2  0  0 |  π/9  ]
    [ 1  4  1  0 |  √3/2 ]
    [ 0  1  4  1 |  √3/2 ]
    [ 0  0  2  4 | -π/9  ]

r2 - (1/4)r1 → r2:

    [ 4   2   0  0 |  π/9         ]
    [ 0  7/2  1  0 |  √3/2 - π/36 ]
    [ 0   1   4  1 |  √3/2        ]
    [ 0   0   2  4 | -π/9         ]

r3 - (2/7)r2 → r3:

    [ 4   2    0   0 |  π/9            ]
    [ 0  7/2   1   0 |  √3/2 - π/36    ]
    [ 0   0  26/7  1 |  5√3/14 + π/126 ]
    [ 0   0    2   4 | -π/9            ]

r4 - (7/13)r3 → r4:

    [ 4   2    0    0   |  π/9             ]
    [ 0  7/2   1    0   |  √3/2 - π/36     ]
    [ 0   0  26/7   1   |  5√3/14 + π/126  ]
    [ 0   0    0  45/13 | -(5√3 + 3π)/26   ]

Evaluating the right-hand sides:

    √3/2 - π/36 = 0.778759,   5√3/14 + π/126 = 0.643523,   -(5√3 + 3π)/26 = -0.695578

(each to 6 significant figures). Keeping 6 significant figures for the right-hand side values yields the following associated linear system (see lecture notes):

    4x1 + 2x2                   = π/9
          (7/2)x2 + x3          = 0.778759
                 (26/7)x3 + x4  = 0.643523
                      (45/13)x4 = -0.695578

Back substitution yields:

    x1 = 0.008495,   x2 = 0.1575,   x3 = 0.2274,   x4 = -0.2009

(each to 4 significant figures).
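The elimination and back substitution just performed are exactly the tridiagonal (Thomas) algorithm. A Python sketch (the function name and argument layout are my own) applied to this system:

```python
import math

def thomas(sub, diag, sup, rhs):
    """Tridiagonal (Thomas) algorithm: forward elimination then back
    substitution. sub[0] and sup[-1] are unused; all lists have length n."""
    n = len(diag)
    d, b = diag[:], rhs[:]
    for i in range(1, n):
        m = sub[i] / d[i - 1]      # elimination multipliers: 1/4, 2/7, 7/13
        d[i] -= m * sup[i - 1]
        b[i] -= m * b[i - 1]
    x = [0.0] * n
    x[-1] = b[-1] / d[-1]
    for i in range(n - 2, -1, -1):
        x[i] = (b[i] - sup[i] * x[i + 1]) / d[i]
    return x

pi, s3 = math.pi, math.sqrt(3.0)
x = thomas(sub=[0.0, 1.0, 1.0, 2.0],
           diag=[4.0, 4.0, 4.0, 4.0],
           sup=[2.0, 1.0, 1.0, 0.0],
           rhs=[pi / 9.0, s3 / 2.0, s3 / 2.0, -pi / 9.0])
```

The computed x agrees with the hand calculation to the 4 significant figures quoted.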
2. Use the tridiagonal algorithm to solve the following system of equations:

    [  1   1/2   0    0  ] [x1]   [   2   ]
    [ 1/2  1/3  1/4   0  ] [x2] = [   2   ]
    [  0   1/4  1/5  1/6 ] [x3]   [ 53/30 ]
    [  0    0   1/6  1/7 ] [x4]   [ 15/14 ]
Solution:

    [  1   1/2   0    0  |   2   ]
    [ 1/2  1/3  1/4   0  |   2   ]
    [  0   1/4  1/5  1/6 | 53/30 ]
    [  0    0   1/6  1/7 | 15/14 ]

r2 - (1/2)r1 → r2:

    [ 1  1/2    0    0  |   2   ]
    [ 0  1/12  1/4   0  |   1   ]
    [ 0  1/4   1/5  1/6 | 53/30 ]
    [ 0   0    1/6  1/7 | 15/14 ]

r3 - 3r2 → r3:

    [ 1  1/2    0      0  |   2    ]
    [ 0  1/12  1/4     0  |   1    ]
    [ 0   0   -11/20  1/6 | -37/30 ]
    [ 0   0    1/6    1/7 | 15/14  ]

r4 + (10/33)r3 → r4:

    [ 1  1/2    0       0     |    2     ]
    [ 0  1/12  1/4      0     |    1     ]
    [ 0   0   -11/20   1/6    |  -37/30  ]
    [ 0   0     0    134/693  | 967/1386 ]

The associated linear system is:

    x1 + (1/2)x2                      = 2
         (1/12)x2 + (1/4)x3           = 1
               -(11/20)x3 + (1/6)x4   = -37/30
                         (134/693)x4  = 967/1386

Back substitution yields:

    x1 = 1.004,   x2 = 1.993,   x3 = 3.336,   x4 = 3.608

(each to 4 significant figures).
3. Is the following system diagonally dominant? Use the tridiagonal algorithm to find the solution.

    [ 1/2  10/21   0     0   ] [x1]   [  61/42  ]
    [ 1/4   1/3   1/13   0   ] [x2] = [ 179/156 ]
    [  0    1/5   1/4   1/21 ] [x3]   [ 563/420 ]
    [  0     0    1/5   1/6  ] [x4]   [  13/10  ]

Solution: Checking for diagonal dominance:

    0.5    = 1/2 > 10/21 = 0.4761...  ✓
    0.333... = 1/3 > 1/4 + 1/13 = 17/52 = 0.3269...  ✓
    0.25   = 1/4 > 1/5 + 1/21 = 26/105 = 0.247...  ✓
    0.166... = 1/6 < 1/5 = 0.2  ✗

Hence the system is not diagonally dominant. Nevertheless, Gaussian elimination may still work:

    [ 1/2  10/21   0     0   |  61/42  ]
    [ 1/4   1/3   1/13   0   | 179/156 ]
    [  0    1/5   1/4   1/21 | 563/420 ]
    [  0     0    1/5   1/6  |  13/10  ]

r2 - (1/2)r1 → r2:

    [ 1/2  10/21   0     0   |  61/42  ]
    [  0   2/21   1/13   0   | 115/273 ]
    [  0   1/5    1/4   1/21 | 563/420 ]
    [  0    0     1/5   1/6  |  13/10  ]

r3 - (21/10)r2 → r3:

    [ 1/2  10/21    0      0   |   61/42   ]
    [  0   2/21    1/13    0   |  115/273  ]
    [  0    0     23/260  1/21 | 2489/5460 ]
    [  0    0      1/5    1/6  |   13/10   ]

r4 - (52/23)r3 → r4:

    [ 1/2  10/21    0       0    |   61/42   ]
    [  0   2/21    1/13     0    |  115/273  ]
    [  0    0     23/260   1/21  | 2489/5460 ]
    [  0    0       0     19/322 | 1301/4830 ]

The associated linear system is:

    (1/2)x1 + (10/21)x2                       = 61/42
              (2/21)x2 + (1/13)x3             = 115/273
                       (23/260)x3 + (1/21)x4  = 2489/5460
                                   (19/322)x4 = 1301/4830

Back substitution yields:

    x1 = 0.7661,   x2 = 2.246,   x3 = 2.696,   x4 = 4.565

(each to 4 significant figures).
4. (MATLAB) Use MATLAB to verify any of the solutions you have found by using the 'backslash' command. That is, with the system Ax = b, to find x just type >> A\b. See any tutorial to see how to enter vectors and matrices in MATLAB.

Solution: The MATLAB commands for Question 2 are given below:

>> A=[1,1/2,0,0;1/2,1/3,1/4,0;0,1/4,1/5,1/6;0,0,1/6,1/7]
A =
      1      1/2      0        0
    1/2      1/3    1/4        0
      0      1/4    1/5      1/6
      0        0    1/6      1/7
>> b=[2;2;53/30;15/14]
b =
      2
      2
    53/30
    15/14
>> A\b
ans =
    1.0037
    1.9925
    3.3358
    3.6082
5. (MATLAB) We can use some simple commands in MATLAB to implement the individual row operations in the tridiagonal algorithm. The commands applied to a hypothetical augmented matrix B are as follows (ri refers to the ith row of B and k is a nonzero scalar):

To do k·ri → ri (multiply row i by k) type
>> B(i,:) = k*B(i,:)
To do ri - k·rj → ri (subtract k times row j from row i) type
>> B(i,:) = B(i,:) - k*B(j,:)
To do ri ↔ rj (swap rows i and j) type
>> B([i,j],:) = B([j,i],:)

Solution: The MATLAB commands for the first step of Question 2 are given below (I am assuming you have already constructed the augmented matrix called B; see the previous solution):

>> format rat % Comment: so calculations done with fractions
>> B(2,:)=B(2,:)-(1/2)*B(1,:)
B =
      1      1/2      0        0        2
      0     1/12    1/4        0        1
      0      1/4    1/5      1/6    53/30
      0        0    1/6      1/7    15/14

2.7 Boundary Value Problems

1. Solve, by hand, the following two-point BVP:

    -u'' + u = f(x),  x ∈ [0, 1],   u(0) = u(1) = 0,

where f(x) = x, using h = 1/4. Write out the linear system explicitly prior to a solution. You should get the following 3 × 3 system:

    [ 2.0625    -1       0    ] [U1]   [ 0.015625 ]
    [   -1    2.0625    -1    ] [U2] = [ 0.03125  ]
    [    0      -1    2.0625  ] [U3]   [ 0.046875 ]
Solution:

    -u'' + u = x,  0 ≤ x ≤ 1,   u(0) = u(1) = 0.

With u_k ≈ u(x_k) and x_k = kh, we replace the above BVP with

    -(u_{k-1} - 2u_k + u_{k+1})/h^2 + u_k = x_k,   u_0 = u_n = 0.     (2.7)

Multiplying the DE through by h^2 and simplifying yields:

    -u_{k-1} + (2 + h^2)u_k - u_{k+1} = h^2 x_k,

which with h = 1/4, so that n = 1/h = 4, yields

    -u_{k-1} + (33/16)u_k - u_{k+1} = (1/16)(k/4) = k/64,   u_0 = u_4 = 0,     (2.8)

for k = 1, 2, 3.

(2.8) leads to 3 linear equations:

k = 1:
    -u_0 + (33/16)u_1 - u_2 = 1/64  ⟹  (33/16)u_1 - u_2 = 1/64     (2.9)

k = 2:
    -u_1 + (33/16)u_2 - u_3 = 2/64 = 1/32     (2.10)

k = 3:
    -u_2 + (33/16)u_3 = 3/64     (2.11)

Writing (2.9), (2.10) and (2.11) as a single matrix equation:

    [ 33/16   -1     0   ] [u1]   [ 1/64 ]
    [  -1    33/16  -1   ] [u2] = [ 1/32 ]
    [   0    -1    33/16 ] [u3]   [ 3/64 ]

(Note: 33/16 = 2.0625.) Gaussian elimination and back substitution yield

    [u1]   [ 266/7625 ]   [ 0.0349 ]
    [u2] = [ 65/1154  ] = [ 0.0563 ]
    [u3]   [ 68/1359  ]   [ 0.0500 ]

to 4 decimal places.
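The Gaussian elimination and back substitution step can be checked with a short Python sketch of the 3 × 3 solve (the variable names are my own):

```python
# Solve the 3x3 tridiagonal system for the BVP -u'' + u = x with h = 1/4:
# (33/16)u1 - u2 = 1/64,  -u1 + (33/16)u2 - u3 = 1/32,  -u2 + (33/16)u3 = 3/64.
d = 33.0 / 16.0
A = [[d, -1.0, 0.0], [-1.0, d, -1.0], [0.0, -1.0, d]]
b = [1.0 / 64, 1.0 / 32, 3.0 / 64]

# forward elimination (rows 2 and 3)
m1 = A[1][0] / A[0][0]
A[1][1] -= m1 * A[0][1]; b[1] -= m1 * b[0]
m2 = A[2][1] / A[1][1]
A[2][2] -= m2 * A[1][2]; b[2] -= m2 * b[1]

# back substitution (off-diagonal entries are -1)
u3 = b[2] / A[2][2]
u2 = (b[1] + u3) / A[1][1]
u1 = (b[0] + u2) / A[0][0]
```

The computed values match the quoted solution (0.0349, 0.0563, 0.0500) to 4 decimal places.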
2. Repeat the above problem, but this time using h = 1/5. What does the new system become?

Solution:

    -u_{k-1} + (51/25)u_k - u_{k+1} = (1/25)(k/5) = k/125,   u_0 = u_5 = 0,     (2.12)

for k = 1, 2, 3, 4.

k = 1:
    -u_0 + (51/25)u_1 - u_2 = 1/125  ⟹  (51/25)u_1 - u_2 = 1/125     (2.13)

k = 2:
    -u_1 + (51/25)u_2 - u_3 = (1/25)(2/5) = 2/125     (2.14)

k = 3:
    -u_2 + (51/25)u_3 - u_4 = (1/25)(3/5) = 3/125     (2.15)

k = 4:
    -u_3 + (51/25)u_4 - u_5 = (1/25)(4/5)  ⟹  -u_3 + (51/25)u_4 = 4/125     (2.16)

Writing (2.13)-(2.16) as a single matrix equation:

    [ 51/25   -1      0     0   ] [u1]   [ 1/125 ]
    [  -1    51/25   -1     0   ] [u2] = [ 2/125 ]
    [   0    -1     51/25  -1   ] [u3]   [ 3/125 ]
    [   0     0     -1    51/25 ] [u4]   [ 4/125 ]
3. Approximate the solution of the two-point BVP

    -u'' + u' + u = 1,  x ∈ [0, 1],   u(0) = u(1) = 0.

Assuming h < 2, determine if the resulting coefficient matrix is diagonally dominant. Hint: For the approximation of the 2nd derivative see your lecture notes.¹

Solution:

    -u'' + u' + u = 1,  x ∈ [0, 1],   u(0) = u(1) = 0.

With u_k ≈ u(x_k) and x_k = hk, we replace the above BVP with

    -(u_{k-1} - 2u_k + u_{k+1})/h^2 + (u_{k+1} - u_{k-1})/(2h) + u_k = 1,   u_0 = u_n = 0.

Multiplying the DE by h^2 yields

    -u_{k-1} + 2u_k - u_{k+1} + (h/2)(u_{k+1} - u_{k-1}) + h^2 u_k = h^2,

or

    -(h/2 + 1)u_{k-1} + (2 + h^2)u_k + (h/2 - 1)u_{k+1} = h^2,   u_0 = u_n = 0,   k = 1, 2, ..., n - 1.

¹ Note: the problem of determining diagonal dominance is harder than I would give you in an exam, but shows you the sort of calculations one might have to do in practice. However, the initial part is fair game!
As k runs from 1 to n-1, we get the following tridiagonal linear system in matrix form, after using in the first and last equation that u_0 = 0 and u_n = 0, respectively:

    [ 2+h^2     h/2-1                                ] [ u_1     ]   [ h^2 ]
    [ -(h/2+1)  2+h^2    h/2-1                       ] [ u_2     ]   [ h^2 ]
    [           ...      ...       ...               ] [ ...     ] = [ ... ]
    [                -(h/2+1)   2+h^2     h/2-1      ] [ u_{n-2} ]   [ h^2 ]
    [                           -(h/2+1)   2+h^2     ] [ u_{n-1} ]   [ h^2 ]

Is the coefficient matrix diagonally dominant?

First equation: Is |2 + h^2| > |h/2 - 1|? Recall the triangle inequality, namely

    |a + b| ≤ |a| + |b|,

so

    |h/2 - 1| = |h/2 + (-1)| ≤ h/2 + 1 < 2 < 2 + h^2 = |2 + h^2|.  ✓

(We used the fact that h < 2 by assumption.)

Middle equations: Is |2 + h^2| ≥ |-(h/2 + 1)| + |h/2 - 1|? First observe that h/2 - 1 < 0 as h < 2, so |h/2 - 1| = -(h/2 - 1) = 1 - h/2, after recalling that

    |x| = x if x ≥ 0,   and   |x| = -x if x < 0.

So

    |-(h/2 + 1)| + |h/2 - 1| = (h/2 + 1) + (1 - h/2) = 2 < 2 + h^2 = |2 + h^2|.  ✓

Last equation: Is |2 + h^2| > |-(h/2 + 1)|?

    |-(h/2 + 1)| = h/2 + 1 < 2 < 2 + h^2 = |2 + h^2|.  ✓

The coefficient matrix is diagonally dominant for all h < 2.
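A quick numerical spot check of the three inequalities, for one sample value of h < 2 (a sketch; the variable names are my own):

```python
# Spot-check diagonal dominance of the BVP coefficient matrix rows for h < 2.
h = 0.25
diag = 2.0 + h ** 2               # |2 + h^2|
off_low = abs(-(h / 2.0 + 1.0))   # |-(h/2 + 1)|, sub-diagonal entry
off_up = abs(h / 2.0 - 1.0)       # |h/2 - 1|,  super-diagonal entry

first_ok = diag > off_up              # first row (no sub-diagonal entry)
middle_ok = diag > off_low + off_up   # middle rows: the sum is exactly 2
last_ok = diag > off_low              # last row (no super-diagonal entry)
```

As the hand argument shows, the off-diagonal row sum in the middle rows is exactly 2 for every h < 2, so the margin of dominance is h^2.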
4. (MATLAB) Use the program BVP_SIMPLE_PRES to numerically solve the BVP

    -u'' + u = f(x),  x ∈ [0, 1],   u(0) = u(1) = 0,

where f and the exact solutions u are given by

(a) f(x) = (π^2 + 1) sin(πx),  u(x) = sin(πx);
(b) f(x) = 4 exp(-x) - 4x exp(-x),  u(x) = x(1 - x) exp(-x);
(c) f(x) = π(π sin(πx) + 2 cos(πx)) exp(-x),  u(x) = exp(-x) sin(πx);
(d) f(x) = 3 - 1/x - (x^2 - x - 2) ln(x),  u(x) = x(1 - x) ln(x).

Using h = 1/2^n where n = 2, 3, 4, ..., numerically investigate the accuracy of the approximations. Recall that with n equal subintervals we have the approximations u_k ≈ u(x_k), k = 1, 2, ..., n - 1, where the nodes are x_k = hk, h = 1/n.

As a measure of the accuracy choose the error to be

    E(h) := max_k |u(x_k) - u_k|,   k = 1, 2, ..., n - 1.

Solution: We used the MATLAB program to solve problem (a). The output and graph are displayed below for n = 4, i.e. h = 1/4:
    x_k        Approx      Exact       |Approx - Exact|
    0.000000   0.000000    0.000000    0.000000
    0.250000   0.740989    0.707107    0.033882
    0.500000   1.047917    1.000000    0.047917
    0.750000   0.740989    0.707107    0.033882
    1.000000   0.000000    0.000000    0.000000

Clearly the maximum error is (to 6 d.p.) 0.047917. We ran the program again with h = 1/8, h = 1/16 and h = 1/32 and summarized our findings in the table below. We have also shown the ratio of consecutive errors.

    h      E(h)       E(h)/E(h/2)
    1/4    0.047917   4.0789
    1/8    0.011745   4.0195
    1/16   0.002922   4.0027
    1/32   0.000730   ---

From lecture notes we know that with the assumption that E(h) = Ch^p (and so E(h)/E(h/2) = 2^p) the data suggests that 2^p ≈ 4, i.e. p = 2. That is, we have shown numerically that E(h) = O(h^2) (as expected).
Chapter 3

Root-Finding

3.1 Bisection Method

1. Do three iterations of the bisection method, applied to f(x) = x^3 - 2 using a = 0 and b = 2.

Solution:

    f(0) = -2,   f(2) = 8 - 2 = 6,

so by the IMVT a root of f exists in [0, 2]. Set x1 = (0 + 2)/2 = 1.

    f(1) = 1^3 - 2 = -1,   f(2) = 8 - 2 = 6,

so by the IMVT a root of f exists in [1, 2]. Set x2 = (1 + 2)/2 = 1.5.

    f(1.5) = (1.5)^3 - 2 = 1.375,   f(1) = -1,

so by the IMVT a root of f exists in [1, 1.5]. Set x3 = (1 + 1.5)/2 = 1.25.
2. (a) How many iterations of the bisection method are needed to find the root of f(x) = x - e^{-x^2} on [0, 1] to an accuracy of 0.1? (b) Apply the method to find the root to the specified accuracy. (c) If the actual root is 0.6529186... verify that your root does indeed have the required accuracy.

Solution:

(a) Need k such that

    |α - x_k| ≤ (b - a)/2^k = (1 - 0)/2^k < 0.1
    ⟹ 1 < (0.1)2^k
    ⟹ ln(1) < ln(0.1) + k ln(2)
    ⟹ k > -ln(0.1)/ln(2) = 3.321...

Thus we need 4 iterations (bisections) to achieve the desired accuracy.

(b)

    f(0) = 0 - e^0 = -1,   f(1) = 0.632...,

so by the IMVT a root of f exists in [0, 1]. Set x1 = (0 + 1)/2 = 0.5.

    f(0.5) = -0.278...,   f(1) = 0.632...,

so by the IMVT a root of f exists in [0.5, 1]. Set x2 = (0.5 + 1)/2 = 0.75.

    f(0.75) = 0.180...,   f(0.5) = -0.278...,

so by the IMVT a root of f exists in [0.5, 0.75]. Set x3 = (0.75 + 0.5)/2 = 0.625.

    f(0.625) = -0.0516...,   f(0.75) = 0.180...,

so by the IMVT a root of f exists in [0.625, 0.75]. Set x4 = (0.625 + 0.75)/2 = 0.6875.

(c) The actual root is 0.6529186... so the absolute error is:

    |0.6529186... - 0.6875| = 0.0345... < 0.1.  ✓
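The four bisection steps above can be sketched in Python (the function name is my own):

```python
import math

def bisect(f, a, b, k):
    """Do k bisection steps on [a, b]; return the list of midpoints x1..xk."""
    xs = []
    for _ in range(k):
        x = 0.5 * (a + b)
        xs.append(x)
        if f(a) * f(x) < 0:    # sign change on [a, x]: keep the left half
            b = x
        else:                  # otherwise keep the right half
            a = x
    return xs

f = lambda x: x - math.exp(-x ** 2)
xs = bisect(f, 0.0, 1.0, 4)    # midpoints 0.5, 0.75, 0.625, 0.6875
```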
3. (MATLAB) Use the program BISECT_PRES to apply the bisection method to the following test problems:

(a) f(x) = 2 - exp(x) on [0, 1],
(b) f(x) = x^3 - 2 on [0, 2],
(c) f(x) = 5 - 1/x on [0.1, 0.25],
(d) f(x) = x^2 - sin(x) on [0.5, 1].

Can you check the exact roots of the problems with the answers given by the program? Use an appropriate error bound to check a priori the minimum number of steps needed for the roots to be accurate within 10^{-6}.

Solution: We show output for problem (a). The iterations and graph of how x_n varies with increasing n are shown below:

x1 = 0.5000000000000000
x2 = 0.7500000000000000
x3 = 0.6250000000000000
etc.
x19 = 0.6931476593017578
x20 = 0.6931467056274414
After 20 iterations the root is approximately 0.6931467056274414 with an |error| < 1.000000e-06

The root is clearly worked out to be ln(2). With a tolerance of 10^{-6} we require (see your lecture notes)

    (b - a)/2^n ≤ 10^{-6}.

With [a, b] = [0, 1] we solve for n yielding n ≥ ln(10^6)/ln(2) = 19.9316... Thus the minimum number of steps is 20. Similar results are obtained for the other examples. The roots are easily found (exactly on paper) for problems (b) and (c), to be ∛2 and 1/5 respectively. The root for problem (d) cannot be found exactly on paper.
3.2 Newton's Method

1. (a) Write down Newton's method applied to the function f(x) = x^3 - 2. Simplify the computation as much as possible. (b) What has been accomplished if we find the root of this function?

Solution:

(a)

    f(x) = x^3 - 2  ⟹  f'(x) = 3x^2.

Thus

    x_{n+1} = x_n - f(x_n)/f'(x_n)
            = x_n - (x_n^3 - 2)/(3x_n^2)
            = 2(x_n^3 + 1)/(3x_n^2),

where x_0 is given.

(b) If we find a root of f we have found the cube root of 2 (do you see why?).
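The simplified iteration can be run directly; a Python sketch (the function name is my own):

```python
def cube_root_2(x0, steps):
    """Newton iteration x_{n+1} = 2(x_n^3 + 1)/(3 x_n^2) for f(x) = x^3 - 2."""
    x = x0
    for _ in range(steps):
        x = 2.0 * (x ** 3 + 1.0) / (3.0 * x ** 2)
    return x

root = cube_root_2(1.0, 6)    # quadratic convergence: 6 steps is plenty
```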
2. (a) Write down Newton's method applied to the function f(x) = a - x^{-1}. Simplify the resulting computation as much as possible. (b) What has been accomplished if we find the root of this function?

Solution:

(a)

    f(x) = a - x^{-1} = a - 1/x  ⟹  f'(x) = x^{-2} = 1/x^2.

Thus

    x_{n+1} = x_n - f(x_n)/f'(x_n)
            = x_n - (a - 1/x_n)/(1/x_n^2)
            = x_n - a x_n^2 + x_n
            = x_n(2 - a x_n),

where x_0 is given.

(b) A root of f corresponds to a = 1/x, or x = 1/a, a ≠ 0.
3. (a) Do three iterations of Newton's method for f(x) = 3 - e^x, using x_0 = 1. Repeat, with x_0 = 2, 4, 8, 16, but using as many iterations as needed in order to see a pattern. (b) Comment on your results.

Solution:

(a)

    f(x) = 3 - e^x  ⟹  f'(x) = -e^x.

Thus

    x_{n+1} = x_n - f(x_n)/f'(x_n)
            = x_n - (3 - e^{x_n})/(-e^{x_n})
            = ((x_n - 1)e^{x_n} + 3)e^{-x_n}.

x_0 = 1:
    x_1 = ((1 - 1)e^1 + 3)e^{-1} = 3e^{-1} = 1.103...
    x_2 = 1.098...
    x_3 = 1.0986122...

x_0 = 2:
    x_1 = ((2 - 1)e^2 + 3)e^{-2} = 1.406...
    x_2 = 1.141...
    x_3 = 1.0995133...

x_0 = 4:
    x_1 = ((4 - 1)e^4 + 3)e^{-4} = 3.054...
    x_2 = 2.196...
    x_3 = 1.529...

We need to go further to see the pattern:

    x_4 = 1.179...,  x_5 = 1.101...,  x_6 = 1.098617...

x_0 = 8:
    x_1 = ((8 - 1)e^8 + 3)e^{-8} = 7.00...
    x_2 = 6.00...
    x_3 = 5.011...

We need to go further to see the pattern:

    x_4 = 4.031...,  x_5 = 3.084...,  x_6 = 2.22...,  x_7 = 1.546...,
    x_8 = 1.185...,  x_9 = 1.102...,  x_10 = 1.09861...

x_0 = 16:
    x_1 = ((16 - 1)e^{16} + 3)e^{-16} = 15.00...
    x_2 = 14.00...
    ...
    x_18 = 1.09861...

(b) The actual root is ln(3) = 1.09861228... (convince yourself of this!). For this particular problem, all chosen starting values converge to the root, but the further the initial guess is from the root the more iterations are needed to achieve a given accuracy. However, in general we must be sufficiently close to the root for the iterative procedure to converge.
4. Figure 3.1 shows the geometry of a planetary orbit around the sun. The position of the sun is given by S, the position of the planet is given by P. Let x denote the angle defined by P_0OA, measured in radians. The dotted line is a circle concentric to the ellipse and having a radius equal to the major axis of the ellipse. Let T be the total orbital period of the planet, and let t be the time required for the planet to go from A to P. Then Kepler's equation from orbital mechanics, relating x and t, is

    x - ε sin(x) = 2πt/T.

Here ε is the eccentricity of the elliptical orbit (the extent to which it deviates from a circle). For an orbit of eccentricity ε = 0.01 (roughly equivalent to that of the Earth), what is the value of x corresponding to t = T/4? What is the value of x corresponding to t = T/8? Use Newton's method to solve the required equation.

Figure 3.1: Orbital geometry

Solution: ε = 0.01 and t = T/4 leads to

    x - 0.01 sin(x) = (2π(T/4))/T = π/2.

Set f(x) = x - 0.01 sin(x) - π/2, then

    f(0) = -π/2 < 0,   f(π) = π - 0 - π/2 = π/2 > 0,

so by the IMVT a root α ∈ [0, π]. Set x_0 = π/2. Newton's method is:

    x_{n+1} = x_n - f(x_n)/f'(x_n)
            = x_n - (x_n - 0.01 sin(x_n) - π/2)/(1 - 0.01 cos(x_n))
            = (x_n cos(x_n) - sin(x_n) - 50π)/(cos(x_n) - 100).

Iterating with x_0 = π/2:

    n = 0:  x_1 = 1.5807963267...
    n = 1:  x_2 = 1.5807958268...
    n = 2:  x_3 = 1.5807958268...

The method for t = T/8 is done in the same way.
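Both cases can be run with a small Python sketch of the Newton iteration for Kepler's equation (the function name and argument layout are my own):

```python
import math

def kepler_newton(eps, t_over_T, x0, steps):
    """Newton's method for Kepler's equation x - eps*sin(x) = 2*pi*t/T."""
    target = 2.0 * math.pi * t_over_T
    x = x0
    for _ in range(steps):
        fx = x - eps * math.sin(x) - target
        fpx = 1.0 - eps * math.cos(x)
        x -= fx / fpx
    return x

x_quarter = kepler_newton(0.01, 1.0 / 4.0, math.pi / 2.0, 3)   # t = T/4
x_eighth = kepler_newton(0.01, 1.0 / 8.0, math.pi / 4.0, 5)    # t = T/8
```

For t = T/4 the iteration converges in two steps to 1.5807958268..., as in the hand calculation.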
5. (MATLAB) Use the program NEWTON_RAPH_PRES to apply Newton's method to the following test problems:

(a) f(x) = x^2 - 4 on [1, 6.5], x0 = 6, tol = 1e-06;
(b) f(x) = (x - 2)^2 on [1.5, 3.2], x0 = 3, tol = 1e-03;
(c) f(x) = 6(x - 2)^5 on [2, 3], x0 = 3, tol = 1e-03;
(d) f(x) = (4/3) exp((2 - x/2)(1 + ln(x)/x)) on [0, 20], x0 = 2.5,
    f'(x) = (4/3) exp((2 - x/2)(1 + ln(x)/x)) × ((2 - x/2)(1 - ln(x))/x^2 - (1 + ln(x)/2)/2);
(e) f(x) = x^3 - 2x + 2 on [-0.5, 1.5], x0 = 0;
(f) f(x) = 1 - x^2 on [-1, 1], x0 = 0.

The number 'tol' refers to the tolerance that the program uses in its stopping criterion for Newton's method (default is 10^{-6}). In each case: (i) explain the behaviour of Newton's method that you see, and (ii) for those cases where we have convergence, explain how the rate of convergence is related to the multiplicity of the roots.

Solution:

(i): We have convergence for problems (a)-(c). In case (d) we have divergence because the iterates move further and further away from the root (eventually becoming unbounded). In case (e) we have divergence because the iterates simply oscillate between 0 and 1. In problem (f) the initial guess lands on the graph where the slope is horizontal (thus the tangent line never intersects with the x-axis). In Newton's method we attempt to divide by zero, which is impossible.

(ii): In case (a) we have the simple roots ±2. Thus from a Theorem in our lecture notes we know that provided the initial guess is chosen sufficiently close to the root the method is a quadratically convergent method (i.e., the number of correct decimal places we obtain in the iterates is approximately doubled every step). We see this in the data output from the Matlab program (see below). In case (b) we have a double root (+2, twice) and so we expect a rate of convergence less than quadratic (which is verified if you look at the digits in the iterates output by the Matlab program). In case (c) we have a root (+2) of multiplicity 5, thus we expect an even slower rate of convergence. We show the Matlab output for problem (a) below:

Iteration x0 is 6.0000000000000000
Iteration x1 is 3.3333333333333335
Iteration x2 is 2.2666666666666666
Iteration x3 is 2.0156862745098039
Iteration x4 is 2.0000610360875868
Iteration x5 is 2.0000000009313226
Iteration x6 is 2.0000000000000000
Iterations stop because |f(x6)|+|x6-x5|<0.000001
3.3 How to Stop Newton's Method

1. Approximate the root of f(x) = e^x + x using Newton's Method. Stop the method using an appropriate Stopping Criterion with a tolerance of 10^{-4} (see your lecture notes). (Hint: to start, show using the IMVT that there is a root in [-1, 0].)

Solution: Following the hint,

    f(0) = e^0 + 0 = 1,   f(-1) = e^{-1} - 1 = -0.632...,

thus by the IMVT there is a root in the interval [-1, 0]. Hence it is sensible to choose our initial guess as x_0 = -0.5.¹ Newton's Method is

    x_{n+1} = x_n - f(x_n)/f'(x_n) = x_n - (e^{x_n} + x_n)/(e^{x_n} + 1) = e^{x_n}(x_n - 1)/(e^{x_n} + 1),     (3.1)

after simplifying. We start iterating with (3.1) as follows

    x_1 = e^{x_0}(x_0 - 1)/(e^{x_0} + 1) = -0.566311...,
    x_2 = e^{x_1}(x_1 - 1)/(e^{x_1} + 1) = -0.567143165034862...,
    x_3 = e^{x_2}(x_2 - 1)/(e^{x_2} + 1) = -0.567143290409781...,

etc., to fill in the table of values:

    n    x_n                       |x_n - x_{n-1}|      |f(x_n)|
    0    -0.5                      ---                  0.106...
    1    -0.566311...              0.066...             0.0013...
    2    -0.567143165034862...     8.321... × 10^{-4}   1.96... × 10^{-7}
    3    -0.567143290409781...     1.253... × 10^{-7}   4.551... × 10^{-15}

Clearly after three iterations we have

    |f(x_3)| + |x_3 - x_2| < 10^{-4}

as required, thus we stop iterating.
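The iteration and stopping criterion can be sketched in Python (the function name is my own):

```python
import math

def newton_with_stop(x0, tol):
    """Newton for f(x) = e^x + x, stopping once |f(x_n)| + |x_n - x_{n-1}| < tol."""
    x = x0
    n = 0
    while True:
        x_new = math.exp(x) * (x - 1.0) / (math.exp(x) + 1.0)  # simplified step
        n += 1
        if abs(math.exp(x_new) + x_new) + abs(x_new - x) < tol:
            return x_new, n
        x = x_new

root, steps = newton_with_stop(-0.5, 1e-4)   # stops after 3 iterations
```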
2. (MATLAB) Use the program NEWTON_RAPH_PRES to check your written answer in Problem 1.

Solution: Just run the Matlab program with the data used in the solution of Problem 1 and a tolerance of 10^{-4}.

¹ Of course this is just one step of the Bisection Method.
3.4 Secant Method

1. Do three steps of the secant method for f(x) = x^3 - 2, using x_0 = 0, x_1 = 1.

Solution: We approximate the cube root of 2 since f(x) = 0 ⟹ x = ∛2. The secant method is:

    x_{n+1} = x_n - f(x_n)(x_n - x_{n-1})/(f(x_n) - f(x_{n-1}))
            = x_n - (x_n^3 - 2)(x_n - x_{n-1})/((x_n^3 - 2) - (x_{n-1}^3 - 2))
            = x_n - (x_n^3 - 2)(x_n - x_{n-1})/(x_n^3 - x_{n-1}^3)     (3.2)

Using (3.2) as written to apply the Secant Method is not a good idea. If x_n is very close to x_{n-1} then both the numerator and denominator in the fractional part will lead to loss of significance (see the section on Computer Arithmetic). So we will rearrange this expression to avoid the problem. Now

    (b^3 - a^3)/(b - a) = b^2 + ab + a^2,

so with b = x_n and a = x_{n-1} we have

    (x_n^3 - x_{n-1}^3)/(x_n - x_{n-1}) = x_n^2 + x_n x_{n-1} + x_{n-1}^2

so

    (x_n - x_{n-1})/(x_n^3 - x_{n-1}^3) = 1/(x_n^2 + x_n x_{n-1} + x_{n-1}^2).

Substituting this into (3.2) yields:

    x_{n+1} = x_n - (x_n^3 - 2)/(x_n^2 + x_n x_{n-1} + x_{n-1}^2)     (3.3)

Ideally, we would do some further rearranging to get a single expression that avoids the (x_n^3 - 2) term. This is because once we get very close to the root x_n^3 will be very nearly equal to 2, again leading to loss of significance. However, unless we are needing a VERY accurate answer this shouldn't make much of a difference. Doing 3 steps of (3.3) (with x_0 = 0 and x_1 = 1) yields

n = 1:
    x_2 = x_1 - (x_1^3 - 2)/(x_1^2 + x_0 x_1 + x_0^2)
        = 1 - (1^3 - 2)/(1^2 + (0)(1) + 0^2)
        = 2

n = 2:
    x_3 = x_2 - (x_2^3 - 2)/(x_2^2 + x_1 x_2 + x_1^2)
        = 2 - (2^3 - 2)/(2^2 + (1)(2) + 1^2)
        = 8/7   or   1.14286   (6 significant figures)

n = 3:
    x_4 = x_3 - (x_3^3 - 2)/(x_3^2 + x_2 x_3 + x_2^2)
        = 8/7 - ((8/7)^3 - 2)/((8/7)^2 + (2)(8/7) + 2^2)
        = 75/62   or   1.20968   (6 significant figures)

Note: ∛2 ≈ 1.259921...
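The rearranged iteration (3.3) can be checked step by step in Python (the function name is my own):

```python
def secant_step(x_prev, x_curr):
    """One step of the rearranged secant iteration (3.3) for f(x) = x^3 - 2."""
    denom = x_curr ** 2 + x_curr * x_prev + x_prev ** 2
    return x_curr - (x_curr ** 3 - 2.0) / denom

x0, x1 = 0.0, 1.0
x2 = secant_step(x0, x1)   # = 2
x3 = secant_step(x1, x2)   # = 8/7
x4 = secant_step(x2, x3)   # = 75/62
```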
2. Repeat the above using x_0 = 1, x_1 = 0. Comment on how the algorithm is performing compared to the last example.

Solution: Using the same method we get

    n = 1:  x_2 = 2
    n = 2:  x_3 = 1/2
    n = 3:  x_4 = 6/7   or   0.857143   (6 significant figures)

The method appears to be converging slower with x_0 = 1 and x_1 = 0 than with x_0 = 0 and x_1 = 1.
3. (MATLAB) Use the program SECANT_PRES to apply the Secant method to the following test problems:

(a) f(x) = x^2 - 4 on [2, 6], x0 = 6, x1 = 5.5, tol = 1e-03;
(b) f(x) = (x - 2)^2 on [2, 3], x0 = 3, x1 = 2.7, tol = 1e-06;
(c) f(x) = 6(x - 2)^5 on [2, 3], x0 = 3, x1 = 2.95, tol = 1e-03;
(d) f(x) = x^2 - 4 on [-3, 3], x0 = -1, x1 = 1;
(e) f(x) = (4/3) exp((2 - x/2)(1 + ln(x)/x)) on [2, 20], x0 = 2.5, x1 = 3.5.

The number 'tol' refers to the tolerance that the program uses in its stopping criterion for the Secant method (default is 10^{-6}). In each case: (i) explain the behaviour of the Secant method that you see, and (ii) for those cases where we have convergence, explain how the rate of convergence is related to the multiplicity of the roots.

Solution:

(i): We have convergence for problems (a)-(c). In case (d) we have divergence because the slope of the first secant line is horizontal, and thus never intersects with the x-axis (we divide by zero in the formula). In case (e) we have divergence because the iterates move further and further away from the root (becoming unbounded as x tends to infinity).

(ii): Although we don't cover this in the lecture notes, the convergence theorem for the Secant Method is almost the same as for Newton's Method. The difference being that the optimal rate of convergence for the Secant Method is a little less than 2, but bigger than 1 ('superlinear'). It's interesting that this optimal rate of convergence is equal to the Golden Mean (namely, (1 + √5)/2 ≈ 1.62). When you run the program you should notice that the rate of convergence for problem (a) is faster than for either problem (b) or (c) (as problem (a) has simple roots, while problems (b) and (c) have roots of multiplicity 2 and 5 respectively). We show (some of) the Matlab output for problem (e) below (program interrupted):

Iteration x1 is 2.5000000000000000
Iteration x2 is 3.5000000000000000
Iteration x2 is 4.5156679582822132
Iteration x3 is 5.5514402736656114
Iteration x4 is 6.6339592833128247
Iteration x5 is 7.7514171147633677
Iteration x6 is 8.9020232758648721
Iteration x7 is 10.0794196823085791
Iteration x8 is 11.2797803442435374
Iteration x9 is 12.4992630213283906
Iteration x10 is 13.7349726174479319
Iteration x11 is 14.9844470660306381
Iteration x12 is 16.2457144330682333
Operation terminated by user during secant_pres (line 99)
3.5 Fixed Point Iteration

1. Do three steps of the following fixed point iteration problem:

    x_{n+1} = ln(1 + x_n),   x_0 = 1/2.

Solution:

n = 0:  x_1 = ln(1 + 1/2) = 0.405465108108...
n = 1:  x_2 = ln(1 + 0.405465108108...) = 0.340368285804...
n = 2:  x_3 = ln(1 + 0.340368285804...) = 0.292944416354...

2. Let y_0 and h = 1/8 be fixed numbers. Do three steps of the following fixed point iteration

    y_{n+1} = y_0 + (1/2)h(-y_0 ln(y_0) - y_n ln(y_n)),   y_0 = 1/2.

Solution: With y_0 = 1/2 and h = 1/8 this becomes

    y_{n+1} = 1/2 + (1/16)(-(1/2) ln(1/2) - y_n ln(y_n)).

Iterating
n = 0:
    y_1 = 1/2 + (1/16)(-(1/2) ln(1/2) - y_0 ln(y_0))
        = 1/2 + (1/16)(-(1/2) ln(1/2) - (1/2) ln(1/2))
        = 1/2 - (1/16) ln(1/2)
        = 0.543321698785...

n = 1:
    y_2 = 1/2 + (1/16)(-(1/2) ln(1/2) - y_1 ln(y_1))
        = 0.542376812253...

n = 2:
    y_3 = 1/2 + (1/16)(-(1/2) ln(1/2) - y_2 ln(y_2))
        = 0.54239978931...
3. Consider the fixed point iteration x_{n+1} = 1 + e^{-x_n}. Show that this iteration converges for any x_0 ∈ [1, 2]. How many iterations does the theory predict it will take to achieve 10^{-5} accuracy?

Solution: Observe that with g(x) = 1 + e^{-x}

    g(1) = 1.367... ∈ [1, 2],   g(2) = 1.135... ∈ [1, 2],

and as g(x) is monotonically decreasing on [1, 2] we know that 1 ≤ g(x) ≤ 2 for all x ∈ [1, 2], i.e. g([1, 2]) ⊂ [1, 2]. To show the second condition of the fixed point theorem (see your lecture notes) consider

    |g'(x)| = |-e^{-x}| ≤ e^{-1} = 0.367... < 1,

for all x ∈ (1, 2), as e^{-x} is decreasing. Thus by the fixed point theorem we see that the iteration converges for all x_0 ∈ [1, 2]. We apply the error bound formula

    |x_n - α| ≤ (L^n/(1 - L))|x_1 - x_0|,

where L = 1/e = 0.367... and

    x_0 = 1.5,
    x_1 = 1 + e^{-x_0} = 1 + e^{-1.5} = 1.22313016015...,

so we seek n such that

    ((1/e)^n/(1 - 1/e))|1.2231... - 1.5| < 10^{-5}
    or   e^{-n} < 2.2830965... × 10^{-5}
    so   -n < ln(2.2830965... × 10^{-5})
    or   n > 10.687...,

so take n = 11.
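The prediction can be tested with a short Python sketch (my own; the reference value of the fixed point is obtained simply by iterating much longer):

```python
import math

# Fixed point iteration x_{n+1} = g(x_n) with g(x) = 1 + e^{-x}, x0 = 1.5.
g = lambda x: 1.0 + math.exp(-x)

x = 1.5
for _ in range(11):        # the theory above says 11 iterations suffice
    x = g(x)

alpha = 1.5                # high-accuracy reference: iterate to convergence
for _ in range(200):
    alpha = g(alpha)

err = abs(x - alpha)       # actual error after 11 steps
```

In practice the iteration does much better than the worst-case bound, since the contraction factor near the fixed point is e^{-α} ≈ 0.28 rather than the bound L = e^{-1} used on the whole interval.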
4. For each function listed below, find an interval [a, b] such that g([a, b]) ⊂
[a, b]. Draw a graph of y = g(x) and y = x over this interval, and confirm that a fixed point exists there. Estimate (by eye) the value of
the fixed point, and use this as a starting value for a fixed point
iteration. Does the iteration converge? Explain:
(a) g(x) = (1/2)(x + 2/x),
(b) g(x) = 1 + e^{−x}.
Solution:
(a) Not straightforward. To clarify things we need to do some curve
sketching. Notice first that we can work the fixed points out exactly,
as
g(x) = (1/2)(x + 2/x) = x  =⇒  1/x = x/2,  or x^2 = 2,  or x = ±√2.
Focus on the fixed point at x = √2. Notice g(√2) = √2 tells us that
(√2, √2) is on the graph of g. Now
g'(x) = 1/2 − 1/x^2 = 0  =⇒  x = ±√2
are critical points. Maxima, minima, or inflection points? Look at the
second derivative.
g''(x) = 2/x^3  =⇒  g''(√2) = 2/2^{3/2} > 0,
so (√2, √2) is a local minimum. Notice also that
lim_{x→+∞} g(x) = +∞  and  lim_{x→−∞} g(x) = −∞.
This information leads to the following graph:
Since g(1) = 1.5 = g(2), we have
√2 ≤ g(x) ≤ 1.5  for all x ∈ [1, 2].
Thus, condition (i) in the key fixed point theorem is satisfied with
[a, b] = [1, 2]. To satisfy the second condition we need to show that
|g'(x)| < 1 for all x ∈ (1, 2). Now g'(x) = 1/2 − 1/x^2, so
|g'(1)| = 0.5  and  |g'(2)| = 0.25,
and as g has a local minimum at x = √2 ≈ 1.414, the slope of g, i.e.
g'(x), must be less than 1 in magnitude throughout (1, 2). ✓
(b) g(x) = 1 + e^{−x}. Now
g(x) = x  =⇒  1 + e^{−x} = x,  or  e^{−x} = x − 1,
illustrated below:
So visually we see that the fixed point is between 1 and 2. Now
g(1) = 1 + e^{−1} ≈ 1.367...
g(2) = 1 + e^{−2} ≈ 1.135...
and as g(x) is decreasing,
1 ≤ g(x) ≤ 2  for all x ∈ [1, 2],
i.e. g([1, 2]) ⊂ [1, 2], and so condition (i) in the fixed point theorem
is satisfied. ✓ To satisfy condition (ii) in this theorem, consider g'(x) =
−e^{−x}, so |g'(x)| = e^{−x}, which again is a decreasing function. And as
|g'(1)| = e^{−1} ≈ 0.367...  and  |g'(2)| = e^{−2} ≈ 0.135...,
we have |g'(x)| < 1 for all x ∈ (1, 2). ✓
5. Let h(x) = 1 − x2 /4. Show that this function has a root at x = α = 2.
Using x0 = 1/2, apply fixed point iteration with xn+1 = h(xn ) to approximate the fixed point of h. Comment on your results.
Solution: Consider the iterative process
x_{n+1} = 1 − x_n^2/4,   x_0 = 1/2.
n = 0:  x_1 = 1 − x_0^2/4 = 0.9375
n = 1:  x_2 = 1 − x_1^2/4 = 0.780273...
n = 2:  x_3 = 1 − x_2^2/4 = 0.847793...
n = 3:  x_4 = 1 − x_3^2/4 = 0.820311...
n = 4:  x_5 = 1 − x_4^2/4 = 0.83177...
...
n = 29: x_30 = 1 − x_29^2/4 = 0.8284271...
Can we check our answer? The fixed points of h(x), i.e., the solutions of
h(x) = 1 − x^2/4 = x, correspond to the roots of
x^2 + 4x − 4 = 0,  so  x = (−4 ± √32)/2   (usual quadratic formula).
Just keeping the positive solution yields x = −2 + 2√2 = 0.8284271... ✓
Note that the iteration converges to this fixed point of h, not to the root
α = 2 of h: a root of h is not the same thing as a fixed point of h.
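The limit can be confirmed against the exact fixed point. A sketch in Python (illustrative only):

```python
import math

# Iterate x_{n+1} = 1 - x_n^2/4 from x_0 = 1/2 and compare with the exact
# positive solution of x^2 + 4x - 4 = 0, namely x = -2 + 2*sqrt(2).
x = 0.5
for _ in range(30):
    x = 1.0 - x**2 / 4.0
exact = -2.0 + 2.0 * math.sqrt(2.0)
print(x, exact)  # both approximately 0.8284271...
```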
6. (MATLAB) Use the program FIXED_POINT_ITER_PRES to apply fixed
point iteration to the following iteration functions:
(a) g(x) = sin(x) on [0, 2.5], x_0 = 2, tol = 1e-03;
(b) g(x) = 1 + 2/x on [0, 3.5], x_0 = 1, tol = 1e-03;
(c) g(x) = x^2/4 + x/2 on [2, 20], x_0 = 2.3.
In each case (i) work out by hand (if possible) the fixed points, (ii)
explain what you see, and state whether the iterative process converges
or diverges.
Solution:
(i): (a) solving sin(x) = x yields α = 0 (just recall the graph of sin(x) on
[0, π]); (b) solving 1 + 2/x = x yields (after application of the quadratic
formula) that α = 2 (α = −1 is not in the specified interval); (c) solving
x^2/4 + x/2 = x yields α = 2 (α = 0 is not in the specified interval).
(ii): (a) We see a ‘stair’ shaped diagram. Convergence is quite slow. At
best fixed point iteration is linearly convergent. (b) We see a ‘cobweb’
shaped diagram. The iterates are convergent. (c) We see a ‘stair’
shaped diagram. The iterates are diverging from the root. We give
some MATLAB output for problem (a).
Iteration x0  is 2.000000
Iteration x1  is 0.9092974268256817
Iteration x2  is 0.7890723435728884
Iteration x3  is 0.7097000402345258
Iteration x4  is 0.6516062636498291
etc ...
Iteration x15 is 0.4008516296070669
Iteration x16 is 0.3902026038089464
Iteration x17 is 0.3803757974172909
Iterations stop because |g(x16) - x16| < 0.010000
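The stopping rule in this output is easy to mimic. A minimal sketch in Python (the course program is MATLAB; this is illustrative only):

```python
import math

def fixed_point_stop(g, x0, tol):
    """Iterate x_{n+1} = g(x_n) until successive iterates differ by less than tol."""
    xs = [x0]
    while True:
        xs.append(g(xs[-1]))
        if abs(xs[-1] - xs[-2]) < tol:
            return xs

# Case (a): g(x) = sin(x), x_0 = 2, with the 0.01 threshold seen in the output
xs = fixed_point_stop(math.sin, 2.0, 0.01)
print(len(xs) - 1, xs[-1])  # stops at x17 = 0.38037...
```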
Chapter 4
Interpolation and
Approximation
4.1
Lagrange Interpolation
1. Find the polynomial of degree 2 that interpolates y = x^3 at the nodes
x_0 = 0, x_1 = 1 and x_2 = 2. Plot y = x^3 and the interpolating polynomial over the interval [0, 2].
Solution: To find P_2(x) first find the Lagrange basis functions:
L_0(x) = (x − x_1)(x − x_2)/((x_0 − x_1)(x_0 − x_2)) = (x − 1)(x − 2)/((0 − 1)(0 − 2)) = (1/2)(x − 1)(x − 2),
L_1(x) = (x − x_0)(x − x_2)/((x_1 − x_0)(x_1 − x_2)) = (x − 0)(x − 2)/((1 − 0)(1 − 2)) = −x(x − 2),
L_2(x) = (x − x_0)(x − x_1)/((x_2 − x_0)(x_2 − x_1)) = (x − 0)(x − 1)/((2 − 0)(2 − 1)) = (1/2)x(x − 1).
Thus,
P_2(x) = f(x_0)L_0(x) + f(x_1)L_1(x) + f(x_2)L_2(x)
       = −x(x − 2) + (8)(1/2)x(x − 1)
       = x(3x − 2).
The sketch is difficult to draw as both graphs match well over the
specified interval:
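The construction generalizes directly to code. A small illustrative sketch in Python (not the course's LAGRANGE_PRES program):

```python
def lagrange_eval(xs, ys, x):
    """Evaluate the Lagrange interpolant through the points (xs[i], ys[i]) at x."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        basis = 1.0
        for j, xj in enumerate(xs):
            if j != i:
                basis *= (x - xj) / (xi - xj)
        total += yi * basis
    return total

# Interpolate y = x^3 at the nodes 0, 1, 2 and compare with P_2(x) = x(3x - 2)
nodes = [0.0, 1.0, 2.0]
values = [t**3 for t in nodes]
for x in (0.5, 1.5):
    print(lagrange_eval(nodes, values, x), x * (3.0*x - 2.0))
```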
2. Construct the quadratic polynomial that interpolates y = √x at the
nodes x_0 = 1/4, x_1 = 9/16, and x_2 = 1.
Solution: To find P_2(x) first find the Lagrange basis functions:
L_0(x) = (x − x_1)(x − x_2)/((x_0 − x_1)(x_0 − x_2)) = (x − 9/16)(x − 1)/((1/4 − 9/16)(1/4 − 1)) = (64/15)(x − 9/16)(x − 1),
L_1(x) = (x − x_0)(x − x_2)/((x_1 − x_0)(x_1 − x_2)) = (x − 1/4)(x − 1)/((9/16 − 1/4)(9/16 − 1)) = −(256/35)(x − 1/4)(x − 1),
L_2(x) = (x − x_0)(x − x_1)/((x_2 − x_0)(x_2 − x_1)) = (x − 1/4)(x − 9/16)/((1 − 1/4)(1 − 9/16)) = (64/21)(x − 1/4)(x − 9/16).
Thus,
P_2(x) = f(x_0)L_0(x) + f(x_1)L_1(x) + f(x_2)L_2(x)
       = (1/2)(64/15)(x − 9/16)(x − 1) + (3/4)(−256/35)(x − 1/4)(x − 1)
         + (1)(64/21)(x − 1/4)(x − 9/16)
       = (after much tedious simplification)
       = −(32/105)x^2 + (22/21)x + 9/35.
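The simplified coefficients can be verified with exact rational arithmetic. A quick sketch in Python:

```python
from fractions import Fraction as F

# P_2(x) = -(32/105)x^2 + (22/21)x + 9/35 should reproduce sqrt(x)
# exactly at the three nodes, where sqrt(x) happens to be rational.
def P2(x):
    return -F(32, 105) * x**2 + F(22, 21) * x + F(9, 35)

print(P2(F(1, 4)), P2(F(9, 16)), P2(F(1)))  # 1/2, 3/4, 1
```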
3. Find the polynomial of degree 3 that interpolates y = x^3 at the nodes
x_0 = 0, x_1 = 1, x_2 = 2, and x_3 = 3. (Simplify your interpolating
polynomial as much as possible.) Hint: This is easy if you think about
the implications of the uniqueness of the interpolating polynomial.
Solution: From our lecture notes we know that for distinct nodes, the
interpolating polynomial is unique. Thus the cubic polynomial that
interpolates the cubic x^3 at x_0 = 0, x_1 = 1, x_2 = 2 and x_3 = 3 must be
x^3! We will do the calculation nonetheless.
L_0(x) = (x − x_1)(x − x_2)(x − x_3)/((x_0 − x_1)(x_0 − x_2)(x_0 − x_3))
       = (x − 1)(x − 2)(x − 3)/((0 − 1)(0 − 2)(0 − 3)) = −(1/6)(x − 1)(x − 2)(x − 3),
L_1(x) = (x − x_0)(x − x_2)(x − x_3)/((x_1 − x_0)(x_1 − x_2)(x_1 − x_3))
       = (x − 0)(x − 2)(x − 3)/((1 − 0)(1 − 2)(1 − 3)) = (1/2)x(x − 2)(x − 3),
L_2(x) = (x − x_0)(x − x_1)(x − x_3)/((x_2 − x_0)(x_2 − x_1)(x_2 − x_3))
       = (x − 0)(x − 1)(x − 3)/((2 − 0)(2 − 1)(2 − 3)) = −(1/2)x(x − 1)(x − 3),
L_3(x) = (x − x_0)(x − x_1)(x − x_2)/((x_3 − x_0)(x_3 − x_1)(x_3 − x_2))
       = (x − 0)(x − 1)(x − 2)/((3 − 0)(3 − 1)(3 − 2)) = (1/6)x(x − 1)(x − 2).
Thus, after setting f(x) = x^3,
P_3(x) = f(x_0)L_0(x) + f(x_1)L_1(x) + f(x_2)L_2(x) + f(x_3)L_3(x)
       = 0 + (1/2)x(x − 2)(x − 3) + (8)(−(1/2)x(x − 1)(x − 3)) + (27)(1/6)x(x − 1)(x − 2)
       = x^3   (after much simplification).
4. What is the error in quadratic interpolation to f(x) = √x, using
equally spaced nodes on the interval [1/4, 1]?
Solution: From lecture notes we have
|f(x) − P_2(x)| ≤ (h^3/(9√3)) M,
where M = max_{x∈(x_0,x_2)} |f'''(x)|. Here f(x) = √x and
h = (1 − 1/4)/2 = (3/4)(1/2) = 3/8,
so
x_0 = 1/4,   x_1 = 1/4 + 3/8 = 5/8,   x_2 = 5/8 + 3/8 = 1.
Also
f(x) = x^{1/2}  =⇒  f'(x) = (1/2)x^{−1/2}
                =⇒  f''(x) = −(1/4)x^{−3/2}
                =⇒  f'''(x) = (3/8)x^{−5/2},
which is a decreasing function on [1/4, 1]. Thus
|f'''(x)| ≤ (3/8)(1/4)^{−5/2} = (3/8)(32) = 12 = M,
for all x ∈ [1/4, 1]. Thus
|f(x) − P_2(x)| ≤ ((3/8)^3/(9√3))(12) = 3√3/128 = 0.0406   (3 significant figures).
5. Repeat the above for f(x) = x^{−1} on [1/2, 1].
Solution: Proceed as in the problem above.
f(x) = x^{−1} and h = (1 − 1/2)/2 = 1/4, so
x_0 = 1/2,   x_1 = 3/4,   x_2 = 1.
Also
f(x) = x^{−1}  =⇒  f'(x) = −x^{−2}
               =⇒  f''(x) = 2x^{−3}
               =⇒  f'''(x) = −6x^{−4}.
So
|f'''(x)| = 6/|x|^4 ≤ 6(2^4) = 96 = M,   for all x ∈ [1/2, 1].
Thus using the error estimate given in the last problem we have
|f(x) − P_2(x)| ≤ ((1/4)^3/(9√3))(96) = √3/18 = 0.0962   (3 significant figures).
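Both error bounds can be checked by brute force: build each quadratic interpolant at the equally spaced nodes and measure the worst error on a fine sample grid. A sketch in Python (the grid size is an arbitrary choice):

```python
import math

def lagrange_eval(xs, ys, x):
    """Naive evaluation of the Lagrange interpolant at x."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        basis = 1.0
        for j, xj in enumerate(xs):
            if j != i:
                basis *= (x - xj) / (xi - xj)
        total += yi * basis
    return total

def max_error(f, a, b, n_nodes=3, n_grid=2001):
    """Worst absolute interpolation error over a fine sample grid on [a, b]."""
    xs = [a + i * (b - a) / (n_nodes - 1) for i in range(n_nodes)]
    ys = [f(x) for x in xs]
    grid = [a + i * (b - a) / (n_grid - 1) for i in range(n_grid)]
    return max(abs(f(x) - lagrange_eval(xs, ys, x)) for x in grid)

print(max_error(math.sqrt, 0.25, 1.0))       # stays below the bound 0.0406
print(max_error(lambda x: 1.0/x, 0.5, 1.0))  # stays below the bound 0.0962
```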
6. (MATLAB) Use the program LAGRANGE_PRES to construct and plot
the Lagrange interpolant P_n(x) for the following functions f(x) with n
nodes on the specified intervals [a, b]:
(a) f(x) = exp(x) on [−2, 2] with n = 2, 3, 4;
(b) f(x) = x^3 on [0, 2] with n = 2, 3, 4;
(c) f(x) = tan(x) on [−1.5, 1.5] with n = 4, 5, 6, 7, 8;
(d) f(x) = 1/(1 + 25x^2) on [−1, 1] with n = 4, 8, 16, 32.
We can measure the fit between f(x) and the interpolating polynomial
P_n(x) using the maximum norm
‖f − P_n‖_∞ := max_{x∈[a,b]} |f(x) − P_n(x)|,
which is just the maximum absolute error between f and P_n for all x ∈
[a, b]. We say that the sequence of Lagrange interpolation polynomials
P_n converges to f if ‖f − P_n‖_∞ → 0 as n → ∞. For each problem
above decide in which cases the Lagrange interpolation polynomials
are convergent to the given functions. If the Lagrange polynomials are
divergent explain why based on the plots.
Solution:
Running the MATLAB program reveals that we have convergence for
cases (a)-(c), as measured by the maximum norm (just assess whether
the maximum absolute error on [a, b] tends to zero as n → ∞). However, in case (d) we have divergence. As we increase the degree of
the interpolating polynomial the maximum error increases (near the
end points of the domain). This is a famous example of non-convergence
called 'Runge's phenomenon'. So not every function can
be adequately approximated using Lagrange interpolation. The 'fix'
for this problem is to either use piecewise Lagrange interpolation (e.g.,
piecewise linear interpolation), or to relax the assumption that the
nodes are equally spaced (in some clever way!). We show the plot from
the MATLAB program for Problem (c) with n = 5 below:
The program also outputs the Lagrange polynomial in traditional algebraic form, namely
P(x) = 4.83486x^3 − 1.47748x.
Note: the highest power shown here is x^3 because the coefficient of x^4
was either zero or almost zero.
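The divergence in case (d) can be reproduced without the course program. A minimal sketch in Python (node and grid counts are arbitrary choices):

```python
def lagrange_eval(xs, ys, x):
    """Naive evaluation of the Lagrange interpolant at x."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        basis = 1.0
        for j, xj in enumerate(xs):
            if j != i:
                basis *= (x - xj) / (xi - xj)
        total += yi * basis
    return total

def runge_max_error(n_nodes, n_grid=1001):
    """Max error interpolating 1/(1 + 25x^2) at equally spaced nodes on [-1, 1]."""
    f = lambda x: 1.0 / (1.0 + 25.0 * x * x)
    xs = [-1.0 + 2.0 * i / (n_nodes - 1) for i in range(n_nodes)]
    ys = [f(x) for x in xs]
    grid = [-1.0 + 2.0 * i / (n_grid - 1) for i in range(n_grid)]
    return max(abs(f(x) - lagrange_eval(xs, ys, x)) for x in grid)

for n in (4, 8, 16, 32):
    print(n, runge_max_error(n))  # for larger n the error blows up near the endpoints
```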
4.2
Hermite Interpolation
1. Construct the cubic Hermite interpolant to f(x) = √(1 + x) using the
nodes a = 0 and b = 1. Plot the interpolant and the function together.
Where is the approximation error the greatest?
Solution: f(x) = (1 + x)^{1/2}  =⇒  f'(x) = (1/2)(1 + x)^{−1/2}. Then
L_1(x) = (x − x_2)/(x_1 − x_2) = (x − 1)/(0 − 1) = 1 − x,
L_2(x) = (x − x_1)/(x_2 − x_1) = (x − 0)/(1 − 0) = x,
with L_1'(x) = −1 and L_2'(x) = 1. Now
h_1(x) = [1 − 2(x − x_1)L_1'(x_1)]L_1^2(x)
       = [1 − 2(x − 0)(−1)](1 − x)^2
       = (1 + 2x)(1 − x)^2                                    (4.1)
h_2(x) = [1 − 2(x − x_2)L_2'(x_2)]L_2^2(x)
       = [1 − 2(x − 1)(1)]x^2
       = (3 − 2x)x^2                                          (4.2)
h̃_1(x) = (x − x_1)L_1^2(x)
       = (x − 0)(1 − x)^2
       = x(1 − x)^2                                           (4.3)
h̃_2(x) = (x − x_2)L_2^2(x)
       = (x − 1)x^2                                           (4.4)
Using (4.1)-(4.4) we have
H(x) = f(x_1)h_1(x) + f'(x_1)h̃_1(x) + f(x_2)h_2(x) + f'(x_2)h̃_2(x)
     = 1^{1/2}(1 + 2x)(1 − x)^2 + (1/2)x(1 − x)^2 + 2^{1/2}(3 − 2x)x^2 + (1/2)2^{−1/2}(x − 1)x^2
     = (after simplifying)
     = 0.0251x^3 − 0.111x^2 + 0.5x + 1   (coefficients to ≤ 3 significant figures)
Based on the graph (see below) we see that the error is greatest at the
end of the interval.
Figure 4.1: Graph of f(x) and the interpolant
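The two-node construction above carries over directly to code. A sketch in Python (an illustrative generic helper, not the course's HERMITE_PRES program):

```python
import math

def hermite_cubic(f, df, a, b):
    """Cubic Hermite interpolant matching f and f' at the two nodes a and b."""
    def H(x):
        L1 = (x - b) / (a - b)              # linear Lagrange basis for node a
        L2 = (x - a) / (b - a)              # linear Lagrange basis for node b
        dL1, dL2 = 1.0 / (a - b), 1.0 / (b - a)
        h1 = (1.0 - 2.0 * (x - a) * dL1) * L1**2
        h2 = (1.0 - 2.0 * (x - b) * dL2) * L2**2
        ht1 = (x - a) * L1**2
        ht2 = (x - b) * L2**2
        return f(a)*h1 + df(a)*ht1 + f(b)*h2 + df(b)*ht2
    return H

f = lambda x: math.sqrt(1.0 + x)
df = lambda x: 0.5 / math.sqrt(1.0 + x)
H = hermite_cubic(f, df, 0.0, 1.0)
print(H(0.0), H(1.0))  # 1.0 and sqrt(2): H reproduces f at the nodes
```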
2. Construct the cubic Hermite interpolant to f (x) = sin(x) using the
nodes a = 0 and b = π. Plot the error between the interpolant and the
function. Where is the error greatest?
Solution: f(x) = sin(x)  =⇒  f'(x) = cos(x). Then
L_1(x) = (x − x_2)/(x_1 − x_2) = (x − π)/(0 − π) = (π − x)/π,
L_2(x) = (x − x_1)/(x_2 − x_1) = (x − 0)/(π − 0) = x/π,
with L_1'(x) = −1/π and L_2'(x) = 1/π. Now
h_1(x) = [1 − 2(x − x_1)L_1'(x_1)]L_1^2(x)
       = [1 − 2(x − 0)(−1/π)] (π − x)^2/π^2
       = (1 + (2/π)x)(π − x)^2/π^2                            (4.5)
h_2(x) = [1 − 2(x − x_2)L_2'(x_2)]L_2^2(x)
       = [1 − 2(x − π)(1/π)] x^2/π^2
       = (3 − (2/π)x)x^2/π^2                                  (4.6)
h̃_1(x) = (x − x_1)L_1^2(x)
       = (x − 0)(π − x)^2/π^2
       = x(π − x)^2/π^2                                       (4.7)
h̃_2(x) = (x − x_2)L_2^2(x)
       = (x − π)x^2/π^2                                       (4.8)
Using (4.5)-(4.8) we have
H(x) = f(x_1)h_1(x) + f'(x_1)h̃_1(x) + f(x_2)h_2(x) + f'(x_2)h̃_2(x)
     = 0 + (1)x(π − x)^2/π^2 + 0 + (−1)(x − π)x^2/π^2
     = (after simplifying)
     = 0x^3 − (1/π)x^2 + x + 0.
The maximum error occurs near the centre of the interval.
Figure 4.2: Error between f (x) and the interpolant
3. Construct the cubic Hermite interpolant to f(x) = x^{1/3} on the interval
[1/8, 1]. What is the maximum error as predicted by theory?
Solution: f(x) = x^{1/3}  =⇒  f'(x) = (1/3)x^{−2/3}. Then
L_1(x) = (x − x_2)/(x_1 − x_2) = (x − 1)/(1/8 − 1) = (8/7)(1 − x),
L_2(x) = (x − x_1)/(x_2 − x_1) = (x − 1/8)/(1 − 1/8) = (8/7)(x − 1/8),
with L_1'(x) = −8/7 and L_2'(x) = 8/7. Now
h_1(x) = [1 − 2(x − x_1)L_1'(x_1)]L_1^2(x)
       = [1 − 2(x − 1/8)(−8/7)] (64/49)(1 − x)^2
       = ((16/7)x + 5/7)(64/49)(1 − x)^2                      (4.9)
h_2(x) = [1 − 2(x − x_2)L_2'(x_2)]L_2^2(x)
       = [1 − 2(x − 1)(8/7)] (64/49)(x − 1/8)^2
       = (23/7 − (16/7)x)(64/49)(x − 1/8)^2                   (4.10)
h̃_1(x) = (x − x_1)L_1^2(x)
       = (x − 1/8)(64/49)(1 − x)^2                            (4.11)
h̃_2(x) = (x − x_2)L_2^2(x)
       = (x − 1)(64/49)(x − 1/8)^2                            (4.12)
Using (4.9)-(4.12), and noting f(1/8) = 1/2, f'(1/8) = 4/3, f(1) = 1
and f'(1) = 1/3, we have
H(x) = f(x_1)h_1(x) + f'(x_1)h̃_1(x) + f(x_2)h_2(x) + f'(x_2)h̃_2(x)
     = (1/2)((16/7)x + 5/7)(64/49)(1 − x)^2 + (4/3)(x − 1/8)(64/49)(1 − x)^2
       + (1)(23/7 − (16/7)x)(64/49)(x − 1/8)^2 + (1/3)(x − 1)(64/49)(x − 1/8)^2
     = (after simplifying)
     = 0.684x^3 − 1.726x^2 + 1.733x + 0.309   (coefficients to ≤ 3 significant figures)
From the lecture notes, the interpolation error is
f(x) − H(x) = (x − x_1)^2 (x − x_2)^2 f^{(4)}(ξ_x)/4!,
so
|f(x) − H(x)| ≤ (x − 1/8)^2 (x − 1)^2 · M/24,                 (4.13)
where M is an upper bound for |f^{(4)}(x)| on [1/8, 1]. Now
f(x) = x^{1/3}  =⇒  f'(x) = (1/3)x^{−2/3}
                =⇒  f''(x) = −(2/9)x^{−5/3}
                =⇒  f'''(x) = (10/27)x^{−8/3}
                =⇒  f^{(4)}(x) = −(80/81)x^{−11/3},
so |f^{(4)}(x)| = (80/81)|x|^{−11/3}, which is a decreasing function on [1/8, 1], thus
|f^{(4)}(x)| ≤ (80/81)(1/8)^{−11/3} = (80/81)(2048) = 163840/81 ≈ 2022.7 = M.
We also need to maximize h(x) = (x − 1/8)^2 (x − 1)^2 on [1/8, 1]. The graph of h(x) is:
We use Calculus to find the local maximum point. After some simplifying we obtain:
h'(x) = (1/32)(x − 1)(8x − 1)(16x − 9).
Thus local extrema occur at x = 1, x = 1/8 and x = 9/16. Thus h(x) ≤
h(9/16) = 2401/65536. So from (4.13)
|x^{1/3} − H(x)| ≤ (2401/65536)(163840/81)(1/24) ≈ 3.09   (predicted).
Note: if you plot the actual error between H(x) and f(x), the actual
maximum error is closer to 0.036. So the theoretical (predicted) maximum error (≤ 3.09) is far larger than what is observed, but the observed error is still consistent with the bound.
4. (MATLAB) Use the program HERMITE_PRES to construct and plot the
Hermite interpolant H(x) for the following functions f(x) with n nodes
on the specified intervals [a, b]:
(a) f(x) = exp(x) on [−1, 1] with n = 2, 3;
(b) f(x) = 1/x on [1/2, 4] with n = 2, 3, 4;
(c) f(x) = x^8 + 1 on [−1, 1] with n = 2, 3, 4, 5;
(d) f(x) = 1/(1 + 25x^2) on [−1, 1] with n = 2, 3, 4, 5, 6.
We can measure the fit between f(x) and the interpolating polynomial
H(x) using the maximum norm
‖f − H‖_∞ := max_{x∈[a,b]} |f(x) − H(x)|,
which is just the maximum absolute error between f and H for all
x ∈ [a, b]. We say that the sequence of Hermite interpolation polynomials H converges to f if ‖f − H‖_∞ → 0 as n → ∞. For each
problem above decide in which cases the Hermite interpolation polynomials are convergent to the given functions. If the Hermite polynomials
are divergent explain why based on the plots. How do the graphs of
the Hermite interpolating polynomials differ from the graphs of the
Lagrange interpolating polynomials?
Solution:
Basically, the situation is the same as for the Lagrange case. For cases
(a)-(c) we have convergence as n → ∞. Divergence in problem (d)
is due to the same 'Runge's phenomenon' as in the Lagrange case. The
main difference here is that it is clear from the plots that not only do
the values of f(x) and H(x) match at the nodes, but so do the slopes.
We show the plot for problem (c) with n = 3 below:
The program also outputs the Hermite polynomial in traditional algebraic form, namely
H(x) = 3x^4 − 2x^2 + 1.
It is clear that in this case we need more than 3 nodes to get a good
approximation. Recall that the degree of the Hermite interpolant is
2n − 1, thus if n = 5 (so that the degree of H(x) would be 9)
the approximation would be exact (i.e., f(x) = H(x)). This is because
we are fitting polynomials to polynomials, and if you use a higher degree
than necessary the coefficients of the higher (unnecessary) powers of x
will be zero.