When Close Enough Is Close Enough

Edward R. Scheinerman
1. PRELUDE: AN INTERCHANGE BETWEEN AN ALGEBRA TEACHER AND STUDENT.

A student was asked to demonstrate the following equation:

    √2 + √(5 − 2√6) = √3.

Teacher: Please show me your solution to this problem.
Student: Sure. At first I was confused, but then I had a great idea. I just used my calculator. Both sides of the equation evaluate to 1.732050808. So they're the same!
Teacher: Hmm. That's a good idea, but it doesn't really answer the question. You see, your calculator can display only 10 digits. It's possible that these two numbers agree to 10 decimal places, but aren't equal.
Student: Get out! I don't believe that two expressions that agree to that many digits could be different.
Teacher: Actually, they could. Let me give you an example. Do you have your calculator with you?
Student: Yes.
Teacher: Excellent. Please calculate
    √75025 + √121393 + √196418 + √317811.
Student: (after a brief pause) I get 1629.259889.
Teacher: Good. Now please calculate
    √514229 + √832040.
Student: (another brief pause) I get 1629.259889. Hey! That's the same as before. They're equal.
Teacher: Well, actually, they're not equal.
Student: How do you know?
Teacher: Let me show you on my computer. I have a program that will let us calculate these expressions with much greater accuracy than your calculator. First I type
    sqrt(75025) + sqrt(121393) + sqrt(196418) + sqrt(317811)
and the computer responds: 1629.259888633142299848838800.
Student: Wow! That's 28 digits.
Teacher: It is nice. Now I type
    sqrt(514229) + sqrt(832040)
and the computer gives 1629.259888630189238404283301. Look closely. Notice that the ninth digits after the decimal points are different. So you see, even though your calculator gives the same (approximate) value for the two expressions, that does not mean they are exactly equal. Do you understand?
Student: You bet! You have a much better calculator than I have. May I try?
Teacher: Yes, but...
Student: Let's see. I type sqrt(2) + sqrt(5 - 2*sqrt(6)) and hit return. I get 1.732050807568877293527446341.
Teacher: Yes, but...
Student: And then I type sqrt(3) and—LOOK!—it gives me exactly the same answer: 1.732050807568877293527446341. You see, they are equal.
Will we be allowed to use your computer on the test?
Teacher: Sigh!
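For readers who want to replay the teacher's computer session, here is a short sketch (mine, not part of the original dialogue) using Python's arbitrary-precision mpmath library; the digits printed match those quoted above.

    # Reproducing the teacher's 28-digit computations with mpmath.
    from mpmath import mp, sqrt

    mp.dps = 28   # 28 significant digits, as on the teacher's machine

    # These two sums agree only through the eighth decimal place...
    print(sqrt(75025) + sqrt(121393) + sqrt(196418) + sqrt(317811))
    print(sqrt(514229) + sqrt(832040))

    # ...while the student's two expressions agree to every digit shown.
    print(sqrt(2) + sqrt(5 - 2*sqrt(6)))
    print(sqrt(3))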
2. WHO IS RIGHT? This is not an article bemoaning the state of mathematics education. And I am sure that readers of this MONTHLY will have no trouble verifying that √2 + √(5 − 2√6) and √3 are equal. Rather, my purpose is to defend the student.
The student's intuition is that if two algebraic expressions such as √2 + √(5 − 2√6) and √3 agree to 28 digits, then surely they are equal.
We transform the student's intuition by asking: When does |α − β| < ε imply α = β? For example, if α and β are integers, then if we know that |α − β| is (say) less than 0.9, then we may conclude that α = β.
Here is another example. Suppose α and β are both roots of some monic (leading coefficient equal to 1) quadratic polynomial with integer coefficients: x² + bx + c. Then

    |α − β| = |√(b² − 4c)|.

Since b² − 4c is an integer, we have either α = β, or else |α − β| ≥ 1. So if we know that |α − β| < 0.9, we may conclude that α = β.
The two sides of the equation √2 + √(5 − 2√6) = √3 are not integers. It is not a priori obvious that they are roots of a common, monic quadratic polynomial. However, one can show that √2 + √(5 − 2√6) − √3 is the root of some monic, integer polynomial; such roots are known as algebraic integers. Thus, one approach to making the student's intuition rigorous is to develop the theory of algebraic integers; see [16]. Instead, we present an equivalent theory based on eigenvalues of matrices; the proofs are sufficiently straightforward that they can be presented in an undergraduate linear algebra class. This will enable us to answer the question: When is close enough, close enough?
3. EIGENVALUES OF INTEGER MATRICES. Let n and b be positive integers. Let M(n, b) denote the set of all n × n matrices whose entries are integers bounded in absolute value by b. That is, if A ∈ M(n, b), then A is n × n, aᵢⱼ ∈ ℤ, and |aᵢⱼ| ≤ b. Let Λ(n, b) denote the set of all eigenvalues of matrices in M(n, b). Note that Λ(n, b) is a finite set of complex numbers, and therefore we can bound the absolute values of its nonzero members away from zero (see Theorem 4).
Proposition 1. Let A ∈ M(n, b). Then det(xI − A) is a monic, integer polynomial. (By integer polynomial we mean a polynomial whose coefficients are integers.)

Proof: Use induction and the cofactor expansion formula. ∎
Proposition 2. Let α be a complex number. Then α is an algebraic integer if and only if α ∈ Λ(n, b) for some positive integers n, b.

Proof: If α is an algebraic integer, then it is the root of some monic, integer polynomial p(x). Then α is an eigenvalue of the companion matrix of p, and so α ∈ Λ(n, b) for some n, b. The converse follows from Proposition 1. ∎
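As a concrete illustration of the first half of the proof, here is a sketch (mine, assuming numpy is available; the helper name is not from the article): the companion matrix of x² − 10x + 1 lies in M(2, 10), and its eigenvalues are its roots 5 ± 2√6.

    # Companion matrix of a monic integer polynomial; its eigenvalues are
    # exactly the roots of the polynomial, illustrating Proposition 2.
    import numpy as np

    def companion(c):
        """Companion matrix of x^n + c[0] x^(n-1) + ... + c[n-1]."""
        n = len(c)
        C = np.zeros((n, n))
        C[1:, :-1] = np.eye(n - 1)       # ones on the subdiagonal
        C[:, -1] = -np.array(c[::-1])    # last column: negated coefficients
        return C

    C = companion([-10, 1])              # x^2 - 10x + 1; entries bounded by 10
    print(np.linalg.eigvals(C))          # approximately 9.899 and 0.101, i.e. 5 +/- 2*sqrt(6)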
Proposition 3. Suppose α ∈ Λ(n, b). Then |α| ≤ nb.

Proof: Suppose α is an eigenvalue of A ∈ M(n, b) with corresponding eigenvector v. Without loss of generality, we may assume that v is a unit vector, i.e., v·v = 1. Since v·Av = v·αv = α, we have

    |α| = |v·Av| = |Σᵢ Σⱼ vᵢ aᵢⱼ vⱼ| ≤ b Σᵢ Σⱼ |vᵢ||vⱼ| = b (Σᵢ |vᵢ|)(Σⱼ |vⱼ|) = b‖v‖₁².

Since ‖v‖₁ ≤ √n ‖v‖₂ (see [8, Problem 3, p. 278]) we have |α| ≤ nb. ∎

The bound is best possible: Let A be the n × n matrix all of whose entries are b. Let 1 be the vector of all 1s. Then A1 = nb1.
Theorem 4. Suppose A ∈ M(n, b) and α is a nonzero eigenvalue of A. Then |α| ≥ (nb)^(1−n).

Proof: Suppose the nonzero eigenvalues of A are α, λ₂, ..., λₖ. The last nonzero coefficient of the characteristic polynomial of A is ±αλ₂⋯λₖ. By Proposition 1, this is an integer, hence |αλ₂⋯λₖ| ≥ 1 and so

    |α| ≥ 1/(|λ₂|⋯|λₖ|) ≥ (nb)^(−(k−1)) ≥ (nb)^(1−n). ∎
Theorem 4 is known as a root separation theorem; see [2], [3], or [14]. It enables us to answer the question: When is close enough, close enough? Suppose we are given an algebraic expression, such as α = √2 + √(5 − 2√6) − √3. We need to find integers n and b such that α ∈ Λ(n, b). Then we calculate α to sufficient precision to show that |α| < (nb)^(1−n). From this, it follows that α = 0.
The issue, then, is: Given an algebraic expression α, how do we find positive integers n and b such that α ∈ Λ(n, b)? The next result gives us tools to find n and b.
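The whole procedure fits in a few lines of code. Here is a sketch (the function name and packaging are mine, not the article's) using Python's mpmath library; the worked examples in Section 5 supply the inputs n and b.

    # Zero test based on Theorem 4: if alpha lies in Lambda(n, b) and
    # |alpha| < (n*b)^(1-n), then alpha must be exactly 0.
    import mpmath

    def certify_zero(alpha_expr, n, b):
        """alpha_expr: zero-argument function evaluating alpha at the
        current mpmath precision.  Returns True if Theorem 4 forces alpha = 0."""
        bound = mpmath.mpf(n * b) ** (1 - n)
        # carry a few digits more than the bound requires
        mpmath.mp.dps = int(mpmath.ceil(-mpmath.log10(bound))) + 10
        return abs(alpha_expr()) < bound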
Theorem 5. Let n, n₁, n₂, b, b₁, b₂ be positive integers.
1. If α is a root of a monic, integer polynomial of degree n whose coefficients have absolute value no larger than b, then α ∈ Λ(n, b).
2. If n₁ ≤ n₂ and b₁ ≤ b₂, then Λ(n₁, b₁) ⊆ Λ(n₂, b₂).
3. If α ∈ Λ(n, b), then −α and ᾱ ∈ Λ(n, b).
4. If k is an integer, then k ∈ Λ(1, |k|).
5. If α ∈ Λ(n₁, b₁) and β ∈ Λ(n₂, b₂), then αβ ∈ Λ(n₁n₂, b₁b₂).
6. If α ∈ Λ(n₁, b₁) and β ∈ Λ(n₂, b₂), then α + β ∈ Λ(n₁n₂, b₁ + b₂).
7. If k is a positive integer and βᵏ ∈ Λ(n, b), then β ∈ Λ(nk, b).
We illustrate the use of Theorem 5 by solving the following type of problem: Given an algebraic expression α, determine integers n and b such that α ∈ Λ(n, b).
Consider the expression

    α = √7 × ∛(5 − √2).

The atoms of this expression are the integers 2, 5, and 7. By part 4, we have 2 ∈ Λ(1, 2), 5 ∈ Λ(1, 5), and 7 ∈ Λ(1, 7).
Next we classify √2 and √7; by part 7, we have

    2 ∈ Λ(1, 2)  ⟹  √2 ∈ Λ(2, 2)
    7 ∈ Λ(1, 7)  ⟹  √7 ∈ Λ(2, 7).

The classification of 5 − √2 follows from parts 3 and 6:

    √2 ∈ Λ(2, 2)  ⟹  −√2 ∈ Λ(2, 2),  so  5 + (−√2) ∈ Λ(1·2, 5 + 2) = Λ(2, 7).

To classify the cube root of 5 − √2 we use part 7:

    5 − √2 ∈ Λ(2, 7)  ⟹  ∛(5 − √2) ∈ Λ(2·3, 7) = Λ(6, 7).

Finally, part 5 gives us the final classification:

    √7 ∈ Λ(2, 7) and ∛(5 − √2) ∈ Λ(6, 7)  ⟹  α = √7 × ∛(5 − √2) ∈ Λ(2·6, 7·7) = Λ(12, 49).

These calculations are illustrated in Figure 1.

Figure 1. Using Theorem 5 to classify an algebraic expression.
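The bookkeeping in Figure 1 is mechanical, and it is easy to automate. The sketch below (my own packaging, not code from the article) carries an (n, b) pair through the rules of Theorem 5 and recovers the classification Λ(12, 49).

    # Tracking (n, b) pairs under the rules of Theorem 5.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Classified:
        n: int   # matrix size
        b: int   # entry bound

    def integer(k):  return Classified(1, abs(k))              # part 4
    def neg(x):      return Classified(x.n, x.b)               # part 3
    def mul(x, y):   return Classified(x.n * y.n, x.b * y.b)   # part 5
    def add(x, y):   return Classified(x.n * y.n, x.b + y.b)   # part 6
    def root(x, k):  return Classified(x.n * k, x.b)           # part 7 (k-th root)

    # alpha = sqrt(7) * cbrt(5 - sqrt(2)), as in the example above
    sqrt7 = root(integer(7), 2)                                   # Lambda(2, 7)
    five_minus_sqrt2 = add(integer(5), neg(root(integer(2), 2)))  # Lambda(2, 7)
    alpha = mul(sqrt7, root(five_minus_sqrt2, 3))
    print(alpha)   # Classified(n=12, b=49)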
The proof of Theorem 5 is given in the next section. However, we encourage the
reader to be impatient and to proceed directly to Section 5. Return to the proof on
your second reading.
4. PROOF OF THEOREM 5.

Proof: For part 1, see the proof of Proposition 2.
For part 2, suppose α is an eigenvalue of A ∈ M(n₁, b₁). Let B be the n₂ × n₂ matrix

    B = [ A  0 ]
        [ 0  0 ]  ∈ M(n₂, b₂).

Since α is also an eigenvalue of B, it follows that α ∈ Λ(n₂, b₂).
For part 3, suppose α is an eigenvalue of A ∈ M(n, b). Then ᾱ is also an eigenvalue of A and −α is an eigenvalue of −A. Hence −α, ᾱ ∈ Λ(n, b).
For part 4, note that k is the eigenvalue of the 1 × 1 matrix [k].
For parts 5 and 6 we use the Kronecker (tensor) product of matrices [6]. Suppose A is a p × q matrix and B is an r × s matrix. The Kronecker product A ⊗ B is the pr × qs matrix built from p × q blocks, the (i, j) block being the r × s matrix aᵢⱼB. One checks that if v is a q-vector and w is an s-vector, then

    (A ⊗ B)(v ⊗ w) = (Av) ⊗ (Bw).
Now suppose α is an eigenvalue of A ∈ M(n₁, b₁) corresponding to an eigenvector v. Likewise, suppose β is an eigenvalue of B ∈ M(n₂, b₂) corresponding to an eigenvector w. Then A ⊗ B ∈ M(n₁n₂, b₁b₂) and

    (A ⊗ B)(v ⊗ w) = (Av) ⊗ (Bw) = (αv) ⊗ (βw) = (αβ)(v ⊗ w),

and so αβ is an eigenvalue of A ⊗ B and hence is in Λ(n₁n₂, b₁b₂).
Next, let C = A ⊗ I + I ⊗ B, where the two identity matrices have sizes n₂ and n₁ respectively, so C ∈ M(n₁n₂, b₁ + b₂). Observe that

    C(v ⊗ w) = (A ⊗ I)(v ⊗ w) + (I ⊗ B)(v ⊗ w)
             = (Av) ⊗ w + v ⊗ (Bw)
             = α(v ⊗ w) + β(v ⊗ w) = (α + β)(v ⊗ w).

Hence α + β is an eigenvalue of C and therefore is in Λ(n₁n₂, b₁ + b₂).
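(A quick numerical sanity check of these two constructions, not from the article: with A and B chosen so that √2 and √3 are eigenvalues, the Kronecker product picks up √6 and the Kronecker sum picks up √2 + √3.)

    # Spot-checking the product and sum constructions with numpy.
    import numpy as np

    A = np.array([[0, 1], [2, 0]])    # eigenvalues +/- sqrt(2);  A in M(2, 2)
    B = np.array([[0, 1], [3, 0]])    # eigenvalues +/- sqrt(3);  B in M(2, 3)

    prod = np.kron(A, B)                                    # in M(4, 6)
    summ = np.kron(A, np.eye(2)) + np.kron(np.eye(2), B)    # in M(4, 5)

    print(np.sort(np.linalg.eigvals(prod).real))   # includes +/- sqrt(6) ~ 2.449
    print(np.sort(np.linalg.eigvals(summ).real))   # includes sqrt(2) + sqrt(3) ~ 3.146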
Finally, for part 7, let k be a positive integer and let α ∈ Λ(n, b). Suppose β is a kth root of α, i.e., βᵏ = α. Let A ∈ M(n, b) be a matrix for which α is an eigenvalue and let v be a corresponding eigenvector.
Let B be the nk × nk matrix with block structure

    B = [ 0  I  0  ⋯  0 ]
        [ 0  0  I  ⋯  0 ]
        [ ⋮           ⋮ ]
        [ 0  0  0  ⋯  I ]
        [ A  0  0  ⋯  0 ]

where each block is n × n. Note that B ∈ M(nk, b).
Define the vector w to be

    w = (v, βv, β²v, ..., βᵏ⁻¹v),

written in k blocks of length n. We calculate

    Bw = (βv, β²v, ..., βᵏ⁻¹v, Av) = (βv, β²v, ..., βᵏ⁻¹v, αv) = βw,

and therefore β is an eigenvalue of B. Thus β ∈ Λ(nk, b). ∎
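(For a tiny numerical check of this last construction, again not from the article: with n = 1, A = [2], and k = 3, the matrix B below is 3 × 3 and the real cube root of 2 appears among its eigenvalues.)

    # The block matrix of part 7 for A = [2] and k = 3.
    import numpy as np

    B = np.array([[0, 1, 0],
                  [0, 0, 1],
                  [2, 0, 0]])
    print(np.linalg.eigvals(B))   # one eigenvalue is 2**(1/3) ~ 1.26; the others are complex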
5. PROOF OF AN IDENTITY. Theorems 4 and 5 give us a technique to prove identities. We begin by re-expressing the identity in the form α = 0. We use Theorem 5 to find n and b for which α ∈ Λ(n, b). By Theorem 4, if α ≠ 0, then |α| ≥ (nb)^(1−n). We then calculate α to sufficiently many digits to show |α| < (nb)^(1−n), and we conclude α = 0.
We illustrate this technique by proving the student's identity:

    √2 + √(5 − 2√6) = √3.

The first step is to apply Theorem 5 to find integers n and b so that α = √2 + √(5 − 2√6) − √3 ∈ Λ(n, b).
Since 2 ∈ Λ(1, 2) (part 4) we have √2 ∈ Λ(2, 2) (part 7). Likewise, √3 ∈ Λ(2, 3), and so −√3 ∈ Λ(2, 3) (part 3).
Since √6 ∈ Λ(2, 6) and −2 ∈ Λ(1, 2), we have −2√6 ∈ Λ(2, 12) (part 5). Since 5 ∈ Λ(1, 5), we have 5 − 2√6 ∈ Λ(2, 17) (part 6). Hence √(5 − 2√6) ∈ Λ(4, 17).
Finally, α = √2 + √(5 − 2√6) − √3 is in Λ(2·4·2, 2 + 17 + 3) = Λ(16, 22) (part 6).
Now we apply Theorem 4. If α ≠ 0 we would have

    |α| ≥ (16·22)⁻¹⁵ ≈ 6.3 × 10⁻³⁹.

However, calculating α to (say) 45 digits enables us to conclude that |α| < 10⁻⁴⁰, and so α = 0.
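Carrying out this last check takes only a few lines of arbitrary-precision arithmetic; a sketch (mine) with mpmath:

    from mpmath import mp, mpf, sqrt

    mp.dps = 45
    alpha = sqrt(2) + sqrt(5 - 2*sqrt(6)) - sqrt(3)
    bound = mpf(16 * 22) ** (-15)        # (nb)^(1-n) with n = 16, b = 22
    print(bound)                         # about 6.3e-39
    print(abs(alpha) < bound)            # True, hence alpha = 0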
To ease the computational burden, we want to find the smallest n and b for which α ∈ Λ(n, b). For example, consider the term 2√6. We showed that 2√6 ∈ Λ(2, 12). Can we do better? Rewriting 2√6 as √24 doesn't help, because 24 ∈ Λ(1, 24) and then part 7 gives 2√6 = √24 ∈ Λ(2, 24). However, it is easy to see that √24 is an eigenvalue of

    [ 0  4 ]
    [ 6  0 ].

Thus, 2√6 ∈ Λ(2, 6).
In general, we have the following.

Proposition 6. If a and b are integers, then √(ab) ∈ Λ(2, max{|a|, |b|}). ∎

Using 2√6 ∈ Λ(2, 6) improves our earlier analysis to α ∈ Λ(16, 16), and so the separation bound is (16·16)⁻¹⁵ ≈ 7.5 × 10⁻³⁷, a modest gain of two digits.
We can improve the bound a little more by recognizing 5 − 2√6 as a root of the quadratic equation x² − 10x + 1 = 0, giving 5 − 2√6 ∈ Λ(2, 10), leading to α ∈ Λ(16, 15) and a separation bound of roughly 2 × 10⁻³⁶.
However, the recognition of 5 − 2√6 as a root of x² − 10x + 1 is far from automatic, and such recognition may be difficult for more complicated expressions. Fortunately, this can be reduced to a calculation. Given a numeric approximation of an expression, such as 5 − 2√6, symbolic mathematics software can find a low-degree polynomial with a root (nearly) equal to the given expression.
For example, in Mathematica, we can calculate as follows:

    Needs["NumberTheory`Recognize`"]
    a = 5 - 2 Sqrt[6];
    Recognize[N[a], 2, x]

    1 - 10 x + x²

The first line loads the Recognize package. The second line defines a to be 5 − 2√6. The third line asks Mathematica to find a degree-2 polynomial in x of which (a numerical approximation of) a is a root.
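Readers without Mathematica can get the same kind of recognition from other tools; for instance, the sketch below (my substitution, not the tool used in the article) uses mpmath's findpoly, an integer-relation search for a polynomial having the given number as an approximate root.

    from mpmath import mp, sqrt, findpoly

    mp.dps = 30
    a = 5 - 2*sqrt(6)
    print(findpoly(a, 2))   # a small integer polynomial such as [1, -10, 1],
                            # i.e. x^2 - 10x + 1 (possibly scaled by -1)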
The recognition of α as a root of some polynomial is a computation. Can we also reduce the proof that α is the root of some polynomial to a computation?
For example, if we let α = ∛(5√13 − 18), Mathematica asserts that α is a root of x² + 3x − 1 = 0. We might try to verify this by expanding α² + 3α − 1 and checking to see if we get zero. However, that leads to the messy expression

    (5√13 − 18)^(2/3) + 3(5√13 − 18)^(1/3) − 1,

and I don't want to be bothered wrestling with that. I want to crunch those numbers, get something near zero, and be convinced.
One way to verify that α is a root of p(x) = x² + 3x − 1 is to apply the earlier method, i.e., find n and b so that p(α) ∈ Λ(n, b). However, the techniques we have presented lead to p(α) ∈ Λ(9288, 1893). Thus, to verify p(α) = 0 we need to show that

    |p(α)| < (9288 · 1893)⁻⁹²⁸⁷,

a number smaller than 10⁻⁶⁷⁰⁰⁰.
And while computers can calculate to that level of accuracy, a looser bound is clearly desirable.
Using Theorem 5 and Proposition 6, we can place α = ∛(5√13 − 18) ∈ Λ(6, 43). Therefore, there is an A ∈ M(6, 43) with eigenvalue α. Note that p(α) is an eigenvalue of B = p(A). We can bound the absolute value of the entries of B as follows. Let J be the 6 × 6 matrix of all 1s. Then

    |B| = |p(A)| ≤ |A|² + 3|A| + I ≤ (43J)² + 3(43J) + I ≤ 11224J.

(For a matrix X whose i, j-entry is xᵢⱼ, we write |X| for the matrix whose i, j-entry is |xᵢⱼ|.)
Therefore p(α) ∈ Λ(6, 11224). To prove that p(α) = 0 it is enough to show that

    |p(α)| < (6 · 11224)⁻⁵ ≈ 7.2 × 10⁻²⁵.
The computer easily verifies this, and therefore α is a root of x² + 3x − 1. Since α ≈ 0.3028 and since the two roots of x² + 3x − 1 = 0 are ½(−3 ± √13), we have proved that

    ∛(5√13 − 18) = (−3 + √13)/2.
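Numerically, the verification looks like this (a sketch using mpmath; the bound is the one derived above):

    from mpmath import mp, mpf, sqrt, cbrt

    mp.dps = 40
    a = cbrt(5*sqrt(13) - 18)
    p_of_a = a**2 + 3*a - 1
    print(abs(p_of_a) < mpf(6 * 11224) ** (-5))   # True, so p(alpha) = 0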
We return to the earlier student/teacher example: We want to show that α = √2 + √(5 − 2√6) is a root of p(x) = x² − 3. By Theorem 5 and Proposition 6, we have α ∈ Λ(8, 13), and so there is a matrix A ∈ M(8, 13) of which α is an eigenvalue. Hence, p(α) = α² − 3 is an eigenvalue of B = p(A). We have

    |B| = |p(A)| = |A² − 3I| ≤ (13J)² + 3I = 1352J + 3I,

where J is the 8 × 8 matrix of all ones. Thus B ∈ M(8, 1355). So, by Theorem 4, if p(α) ≠ 0 we would have

    |p(α)| ≥ (8 · 1355)⁻⁷ ≈ 5.7 × 10⁻²⁹.

However, calculating p(α) to (more than) 30 decimal places gives zero to that precision, and so we may conclude that p(α) = 0, and so √2 + √(5 − 2√6) = √3.
In general, suppose α ∈ Λ(n, b) and p is a monic, integer polynomial of degree d whose coefficients are bounded in absolute value by c. Then we can easily calculate a positive ε_{n,b,d,c} so that

    |p(α)| < ε_{n,b,d,c}  ⟹  p(α) = 0.
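One crude but explicit choice of ε_{n,b,d,c} (this packaging is mine, following the entrywise bounds used above): for A ∈ M(n, b) the entries of Aᵏ are at most nᵏ⁻¹bᵏ, so the entries of p(A) are bounded as computed below, and Theorem 4 then supplies ε.

    from mpmath import mpf

    def epsilon(n, b, d, c):
        # entrywise bound on p(A) for A in M(n, b):
        # |A^d| plus c times (|A^(d-1)| + ... + |A| + I)
        entry_bound = n**(d - 1) * b**d
        entry_bound += c * (1 + sum(n**(k - 1) * b**k for k in range(1, d)))
        return mpf(n * entry_bound) ** (1 - n)   # separation bound of Theorem 4

    # Example: the student's identity, alpha in Lambda(8, 13), p(x) = x^2 - 3
    print(epsilon(8, 13, 2, 3))   # about 4.7e-29, slightly smaller than the
                                  # 5.7e-29 obtained above (the text used the
                                  # actual coefficients of p, not just the bound c)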
6. A MORE COMPLICATED EXAMPLE. We have presented a simple method for proving algebraic identities formed from integers using the operations +, −, ×, and √. One applies the rules in Theorem 5 to find n and b with α ∈ Λ(n, b). Then we calculate α to sufficient precision to verify that |α| < (nb)^(1−n). Then Theorem 4 implies that α = 0.
Now for a harder example. We solve Problem #10756 from the October 1999 issue of this MONTHLY. The problem asks us to prove:

    cos(π/7) = 1/6 + (√7/6) [ cos((1/3) arccos(1/(2√7))) + √3 sin((1/3) arccos(1/(2√7))) ].

From

    cos(nθ) + i sin(nθ) = (cos θ + i sin θ)ⁿ = Σₖ (n choose k) (cos θ)ⁿ⁻ᵏ (i sin θ)ᵏ

and the identity cos²θ + sin²θ = 1 we get the following identities:

    cos 3θ = 4 cos³θ − 3 cos θ

and

    cos 7θ = 64 cos⁷θ − 112 cos⁵θ + 56 cos³θ − 7 cos θ.

Substituting θ = π/7 and α = 2 cos(π/7) into the second identity gives

    −1 = (1/2)α⁷ − (7/2)α⁵ + 7α³ − (7/2)α,

or equivalently

    0 = α⁷ − 7α⁵ + 14α³ − 7α + 2 = (α³ − α² − 2α + 1)²(α + 2).

Clearly α ≠ −2, so α = 2 cos(π/7) ∈ Λ(3, 2).
Next we tackle the term

    √7 cos((1/3) arccos(1/(2√7))).

In cos 3θ = 4 cos³θ − 3 cos θ we take

    θ = (1/3) arccos(1/(2√7))     (1)

and let β₀ = cos θ. Then we have

    1/(2√7) = 4β₀³ − 3β₀.

Taking β = 2√7 β₀, this becomes β³ − 21β − 7 = 0, so

    β = 2√7 cos((1/3) arccos(1/(2√7))) ∈ Λ(3, 21).
Now we deal with the sine term. From sin 3θ = −4 sin³θ + 3 sin θ with θ defined by (1) we have sin 3θ = √(27/28). Let γ₀ = sin θ and we have

    4γ₀³ − 3γ₀ + √(27/28) = 0.

Let γ = 2√7 · √3 γ₀ to rewrite this as γ³ − 63γ + 189 = 0. Therefore, γ = 2√7 · √3 sin θ ∈ Λ(3, 189).
Recapping, we have

    α = 2 cos(π/7) ∈ Λ(3, 2),   β = 2√7 cos θ ∈ Λ(3, 21),   and   γ = 2√7 · √3 sin θ ∈ Λ(3, 189),

where θ is defined by (1).
We can reexpress the MONTHLY problem as

    ζ = 6α − 2 − β − γ = 0.

Since 6α ∈ Λ(3, 12), we have

    ζ ∈ Λ(3·3·3, 12 + 2 + 21 + 189) = Λ(27, 224).

Therefore, if ζ ≠ 0 we would have |ζ| ≥ (27·224)⁻²⁶ ≈ 4.77 × 10⁻⁹⁹, but calculating ζ with more than 100 digits of accuracy shows that |ζ| < 10⁻¹⁰⁰, and so ζ = 0.
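The 100-digit computation is painless with an arbitrary-precision package; a sketch (mine) in mpmath:

    from mpmath import mp, mpf, sqrt, sin, cos, acos, pi

    mp.dps = 110
    theta = acos(1/(2*sqrt(7))) / 3
    alpha = 2*cos(pi/7)
    beta  = 2*sqrt(7)*cos(theta)
    gamma = 2*sqrt(7)*sqrt(3)*sin(theta)
    zeta  = 6*alpha - 2 - beta - gamma

    print(abs(zeta) < mpf(27 * 224) ** (-26))   # True, so zeta = 0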
7. COMPUTER SCIENCE APPLICATIONS. This “close enough” technique for
proving algebraic identities is a fun application of algebraic integers via linear
algebra. Surprisingly, this method finds direct application in computer science.
Computational geometers develop their algorithms using a model of computa-
tion known as “the real RAM”. In this model, an arbitrary real number consumes
a single unit of storage, and the primitive operations are exact real arithmetic and
exact tests of equality and order.
Initially, when these algorithms were deployed on actual computers, the idealized real RAM was replaced by floating point arithmetic. The results were disappointing. Round-off errors resulted in program failure.
Consequently, computer scientists sought a way to handle real numbers on real computers. The basic geometric objects handled by computational geometers are points, lines/line segments, and circles/arcs. Hence, the arena of today's computational geometers is the same as that of the ancient Greeks, and the real numbers they encounter are all constructible: formed from integers using the basic operations of +, −, ×, ÷, and √.
Computational geometry "kernels" are now being developed that hold real numbers as a data structure of nested radicals; see [4], [15], and [17]. Equality testing is done by numerically approximating the expressions to sufficiently many digits to guarantee the results.
The close-enough method for verifying identities can be computationally inefficient. The verification of a modestly complicated expression might require an enormous number of digits. An expression of the form √a₁ + √a₂ + ⋯ + √aₜ is, by Theorem 5, in Λ(2ᵗ, a₁ + a₂ + ⋯ + aₜ), and to check whether it is equal to zero, we must show that it is less than (2ᵗ(a₁ + a₂ + ⋯ + aₜ))^(1−2ᵗ), i.e., we must compute exponentially many (as a function of t) digits.
Computational geometers are aware of this issue, and some have developed an
approach to geometric algorithm design that studies the asymptotic running time
of algorithms that are restricted to using primitives involving algebraic quantities
whose polynomial is of bounded degree; see [12].
However, automatic verification of nested radical identities need not be so
computationally intensive. For an alternate, algebraic approach, see the work of
Landau in [9], [10], and [11].
8. FINAL REMARKS. We have presented a technique for proving identities involving algebraic integers. We classify the quantities involved as members of the classes Λ(n, b) and then calculate numerically to sufficiently high precision to conclude equality. The technique can be extended to more general algebraic numbers by judiciously clearing denominators.
The technique raises a final question: Does a computer-assisted, calculational proof of a radical identity truly constitute a rigorous proof? The issue we need to
consider is: Can we trust the computer and the programs it runs? In light of the
early Pentium floating point flaws, it seems prudent to question the accuracy of
calculations performed by computers. The CPU-resident floating point operations
are beyond the inspection of nearly all computer users. However, there are public
domain, arbitrary precision software programs (such as pari [1] and the Gnu MP
package [7]) whose source code is available for inspection. In principle, one could
check these programs in the same manner that one checks proofs in mathematics.
Has this been done?
ACKNOWLEDGMENTS. Many thanks to Susan Landau and Michael Goodrich for their help with the
computer science background material. Thanks also to Lara Diamond and David Marchette for their
comments on early drafts of this article.
REFERENCES
1. C. Batut, K. Belabas, D. Bernardi, H. Cohen, and M. Olivier, PARI-GP home page: http://hasse.mathematik.tu-muenchen.de/ntsw/pari/Welcome.
2. B. Buchberger, G. E. Collins, and R. Loos, editors, Computer Algebra: Symbolic and Algebraic Computation, second edition, Springer-Verlag, Vienna, 1983.
3. C. Burnikel, R. Fleischer, K. Mehlhorn, and S. Schirra, A strong and easily computable separation bound for arithmetic expressions involving square roots, in Proceedings of the Eighth Annual ACM-SIAM Symposium on Discrete Algorithms, Society for Industrial and Applied Mathematics, Philadelphia, 1997, pp. 702-709.
4. C. Burnikel, R. Fleischer, K. Mehlhorn, and S. Schirra, Efficient exact geometric computation made easy, in Proceedings of the Fifteenth Annual Symposium on Computational Geometry, ACM Press, 1999, pp. 341-350.
5. Henri Cohen, A Course in Computational Algebraic Number Theory, 3rd corrected edition, Number 138 in Graduate Texts in Mathematics, Springer-Verlag, Berlin, 1996.
6. Shaun Fallat, Algebraic integers and tensor products of matrices, Crux Math. 22 (1996) 341-343.
7. GMP home page: http://www.gnu.org/software/gmp/gmp.html.
8. Roger A. Horn and Charles R. Johnson, Matrix Analysis, Cambridge University Press, New York, 1985.
9. Susan Landau, A note on "Zippel denesting," J. Symbolic Comput. 13 (1992) 41-45.
10. Susan Landau, Simplification of nested radicals, SIAM J. Comput. 21 (1992) 85-110.
11. Susan Landau, How to tangle with a nested radical, Math. Intelligencer 16 (1994) no. 2, 49-55.
12. Giuseppe Liotta, Franco P. Preparata, and Roberto Tamassia, Robust proximity queries: An illustration of degree-driven algorithm design, in Proceedings of the Thirteenth Annual Symposium on Computational Geometry, ACM Press, 1997, pp. 156-165.
13. Laszlo Lovasz, An Algorithmic Theory of Numbers, Society for Industrial and Applied Mathematics, Philadelphia, 1986.
14. Maurice Mignotte, Mathematics for Computer Algebra, Springer-Verlag, New York, 1992.
15. Michael Seel, LEDA home page: http://www.mpi-sb.mpg.de/LEDA/.
16. Ian Stewart and David Tall, Algebraic Number Theory, Chapman and Hall, New York, 1979.
17. Chee Yap, Exact geometric computation page: http://cs.nyu.edu/exact/.
18. Chee Yap, Fundamental Problems in Algorithmic Algebra, Oxford University Press, Oxford, 2000.
EDWARD R. SCHEINERMAN is a professor in the Mathematical Sciences Department at the Johns Hopkins University. He was an undergraduate mathematics major at Brown, and received his Ph.D. in mathematics from Princeton. His research interests are in discrete mathematics. During academic year 1999-2000, he is an "internal visitor" to the Department of Mechanical Engineering at Hopkins.

Department of Mathematical Sciences, The Johns Hopkins University, Baltimore, MD 21218
ers@jhu.edu