Beebe@utah-science.arpa (03/20/87)
In mathematical software, there are perfectly valid grounds
for wanting to make comparisons between floating-point
numbers, and the suggestion in earlier postings that a fuzz
factor be introduced is not a general solution. The
size of the fuzz factor is arbitrary, and worse,
machine-dependent, and it is best left out of most code.
There is altogether too much FORTRAN code lying around with
statements like IF ((X - Y) .LE. 1.0E-6) GOTO 23; let us
avoid those errors in C. Such code is unnecessary; the
entire LINPACK linear equation solver library is completely
devoid of magic constants representing fuzz factors.
The problem that we are worried about here is
implementations of C on those machines for which multiple
representations of floating-point numbers exist, in
particular, register and memory size differences in IEEE
floating-point (widely-used implementations are available
from Intel, Motorola, National Semiconductor, and ELXSI) and
Honeywell architectures.
This can lead to peculiarities like output from the code
fragment
    x = 1.0/3.0;
    y = 1.0/3.0;
    if (x != y)
        printf("Flawed C implementation\n");
because x was computed, then stored in memory, while y was
left in a register with greater precision.
Of course, subterfuges are possible. Introduce an
externally-defined function store(), and then write
    x = 1.0/3.0;
    y = 1.0/3.0;
    store(&x);
    store(&y);
    if (x != y)
        printf("ERRONEOUS C implementation\n");
Now the compiler must recognize that x and y have possibly
been changed, and must reload them from memory; if it does
not, it is clearly erroneous.
I believe that it is the DUTY of a language standard,
whether it be C or FORTRAN, to require that comparisons be
implemented by the compiler in such a way as to GUARANTEE
that the result of comparison of two objects be identical to
that obtained if the objects were compared in memory. If
this requires a register store and reload, so be it.
store() subterfuges like the above example should not be
necessary. ``Efficiency'' is NOT an issue here; CORRECTNESS
is. Also, this concern applies only to comparisons, not to the
generation of intermediate results; I am quite happy to have
the compiler compute the result of (a * b * c) in wide
registers and only store the final result in narrower memory.
One of the strong reasons for the development of the IEEE
floating-point specification was the existence of many
flawed floating-point architectures, on which at least one of
the following statements is (unexpectedly) true:
(x != y) && ((x - y) == 0.0)
(x * y) != (y * x)
(1.0 * x) != x
(x + x) != (2.0 * x)
(x > 0.0) && (1.0 * x == 0.0)
(x > 0.0) && (x + x == 0.0)
(x != 0.0) && ((y / x) produces zero-divide interrupt)
Note that because of the finite range and precision of
floating-point numbers, we are NOT asking for equality of
(a + b + c) with (a + c + b) (f.p. arithmetic does not obey
the distributive law of addition), or for equality of
x*(y/x) with y.
For background, I suggest the writings of the distinguished
numerical analysts W.J. Cody (Argonne) (particularly his book
with W. Waite, "Software Manual for the Elementary
Functions", Prentice-Hall (1980)) and W. Kahan (Berkeley).
For implementations of mathematical software important for
C, see also Cody's "An Alternative Library under 4.2BSD Unix
on a VAX 11/780" (Argonne Laboratory report ANL-86-10), and
Kahan's and colleagues' implementation of the 4.3BSD math
library for VAX and IEEE floating-point architectures. On
4.3BSD, start with "man 3m math". For IEEE floating-point
background, see articles in Computer Magazine, Vol. 13, No.
1 (1980), and ACM SIGPLAN Notices Vol. 22, No. 2, p. 9 (Feb
1987).
-------
bs@linus.UUCP (03/20/87)
In article <5054@brl-adm.ARPA> Beebe@utah-science.arpa
(Nelson H.F. Beebe) writes:
>In mathematical software, there are perfectly valid grounds
etc.
>(a + b + c) with (a + c + b) (f.p. arithmetic does not obey
>the distributive law of addition), or for equality of

A little nitpicking here: there is no distributive law of
addition. What the author means here is the associative law:

    (a + (b + c)) != ((a + b) + c)

The commutative law IS obeyed: a + b = b + a. In the example
above the author has essentially said:

    ((a + b) + c) != (a + (b + c))

because (a + (b + c)) does equal (a + (c + b)).

Bob Silverman