[comp.lang.fortran] What is zero, REALly??

djo7613@milton.u.washington.edu (Dick O'Connor) (06/11/91)

I'm trying to port an old Fortran program written to run on an IBM main-
frame to something a bit smaller.  Part of the "truth test" of the new code
(which is Pascal, but that's neither here nor there) is that I have a
printout of an old run of input and output data that I have to be able
to regenerate.
Fine except for one set of readings, which end up being a little bit off
in a way that's not DIRECTLY related to rounding.  I believe it's indirectly
related to rounding via The Law Of Small Numbers (a handy catch-all excuse
we use around here!), and I'd like some feedback from people with experience in
how Fortran handles very small numbers.

The routine in question takes the arithmetic mean of N positive real numbers
in the range, say 40.0 to 90.0, and uses subtraction to find the number that
is the furthest away from that mean.  It computes a "difference" from the mean
for each number and determines the greatest difference using the DABS
(double-precision absolute value) function.  The problem shows up in a
specific case where N equals 2, so both real numbers should have the same
difference from the mean.
The original code acts differently depending on which of the N numbers is
the furthest distant from the mean, and does it in the case N=2 with a
  IF ( DABS(A(1)) - DABS(A(2)) ) 800,900,900
where A is a double precision real array.  In almost every case, the middle
branch is taken, but there's one case in particular where the first branch
is taken, meaning there is a measurable difference between two numbers which
should be identical.

I believe it's a case of reaching the limits to which a binary machine can
express a decimal-based number, with the result that two "identical" numbers
are in fact only "very close", and their subtraction yields a number detectably
different from zero.  Anyone have any experience to corroborate or refute this?
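
For concreteness, here's a stripped-down version of the N=2 situation
(the readings 41.3 and 89.7 are invented, and whether the first branch
gets taken depends entirely on how the mean happens to round):

      PROGRAM MEANCK
C     Invented readings; A(1) and A(2) hold each reading's difference
C     from the mean, as in the original routine.
      DOUBLE PRECISION X(2), A(2), XMEAN
      X(1) = 41.3D0
      X(2) = 89.7D0
      XMEAN = (X(1) + X(2)) / 2.0D0
      A(1) = X(1) - XMEAN
      A(2) = X(2) - XMEAN
C     Mathematically the two magnitudes are equal; after the mean is
C     rounded they can differ in the last place.
      IF ( DABS(A(1)) - DABS(A(2)) ) 800, 900, 900
  800 WRITE (*,*) 'first branch:  DABS(A(1)) .LT. DABS(A(2))'
      GO TO 999
  900 WRITE (*,*) 'middle/last branch taken'
  999 STOP
      END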
Thanks!!

"Moby" Dick O'Connor                         djo7613@u.washington.edu 
Washington Department of Fisheries           *I brake for salmonids* 

-- 
"Moby" Dick O'Connor                         djo7613@u.washington.edu 
Washington Department of Fisheries           *I brake for salmonids* 

khb@chiba.Eng.Sun.COM (Keith Bierman fpgroup) (06/12/91)

Aside from the obvious sources of numerical differences (subtraction,
division, etc.) base conversion can often be a quite significant
source of such difficulties.
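
For instance (41.1 is just an example value, nothing from Dick's data),
the very same decimal string already converts to two different binary
numbers at single and double precision:

      PROGRAM BASECV
C     41.1 has no exact binary representation, so the single- and
C     double-precision conversions of the same decimal constant are
C     two different numbers.
      REAL S
      DOUBLE PRECISION D
      S = 41.1
      D = 41.1D0
      WRITE (*,*) 'single, promoted to double:', DBLE(S)
      WRITE (*,*) 'double precision constant: ', D
      END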
--
----------------------------------------------------------------
Keith H. Bierman    keith.bierman@Sun.COM| khb@chiba.Eng.Sun.COM
SMI 2550 Garcia 12-33			 | (415 336 2648)   
    Mountain View, CA 94043

dlindsle@afit.af.mil (David T. Lindsley) (06/12/91)

You didn't say what kind of machine you were trying to get this to
run on, so...

Arithmetic IFs can be tricky on one's-complement machines.  (Zero can
be represented two ways, as +0 or -0.)  I was porting a fairly large
(50+K LOC) package of Fortran IV to a multiprocessing computer.  I
won't name names, but FYI it had its own proprietary
operating system, on top of which we were running Un*x.  The hardware
supposedly had zero-checking built in (so it would treat +0 and -0
identically), but we tested and verified that in certain cases, a
"negative" zero would take the first rather than the second branch.

It could have been a hardware glitch, or something in their operating
system, or a foulup in the interface between that O/S and Un*x, or
maybe something in the compiler -- maybe all of the above.  I'm inclined
to think the multiple layering fooled the compiler into generating
faulty code.

<pedagogy ON>
However, depending on what your numbers are, it _could_ be roundoff
or truncation error.  For a pair of numbers (x,y) with mean z,
abs(x-z) == abs(y-z) identically.  But the computer represents numbers
as a finite sequence of bits, so what is stored is only a binary
approximation to each number (call it B(a) for any real number a).
Since x, y, and z are not necessarily equal to B(x), B(y), and B(z),
abs(B(x)-B(z)) is not necessarily equal to abs(B(y)-B(z)).

Now if I remember my assembler correctly (it's been a while), 0.1
(one-tenth) has no exact binary representation -- in binary it is an
infinitely repeating fraction which will, of course, be rounded or
truncated (see the sketch just after this aside).
<pedagogy OFF>
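
Sketch only (the exact digits you get back depend on your hardware and
run-time library): printing B(0.1) with more digits than the conversion
routines normally show makes the truncation visible.

      PROGRAM TENTH
C     The stored value is the nearest binary fraction to 1/10, not 1/10
C     itself, so stray digits appear beyond the 0.1.
      DOUBLE PRECISION T
      T = 0.1D0
      WRITE (*,'(F24.20)') T
      END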

I'm sorry if I patronize by addressing such an elementary point.
But from your article, it seems as though the problem may actually
be this simple.

How about this (assuming 64-bit floating-point storage, 80x87/68881/
G_FLOATING compatible):

	double precision ztol
	common /toler/ ztol
	data ztol /1.d-300/
...
c	treat anything smaller in magnitude than ztol as an exact zero
	if (abs(x).le.ztol) x=0.
	if (x) ...

An ugly kludge, admittedly, but it works.  It also has the great advantage
of requiring only minimal editing of the code.
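
Applied to the original comparison, the same edit might look like this
(DIFF is a name I've introduced here, and it would need its own double
precision declaration; pick whatever ZTOL actually suits your data):

	diff = dabs(a(1)) - dabs(a(2))
c	force an exact zero when the two magnitudes agree to within ztol
	if (dabs(diff).le.ztol) diff = 0.d0
	if (diff) 800,900,900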

If your floating-point is IEEE-compatible, you could also check for
underflow, etc.

Hope this helps.


-- 
Dave Lindsley	#24601#					   OPINIONS. MINE. 
dlindsle@blackbird.afit.af.mil		(The words don't come no smaller.)
			?? lamroN eb yhW ??

hirchert@ncsa.uiuc.edu (Kurt Hirchert) (06/12/91)

In article <1991Jun11.143915.11722@milton.u.washington.edu> djo7613@milton.u.washington.edu (Dick O'Connor) writes:
>I'm trying to port an old Fortran program written to run on an IBM main-
>frame to something a bit smaller.  Part of the "truth test" of the new code
>(which is Pascal, but that's neither here nor there) is that I have a
>printout of an old run of input and output data that I have to be able
>to regenerate.
>Fine except for one set of readings, which end up being a little bit off
>in a way that's not DIRECTLY related to rounding.  I believe it's indirectly
>related to rounding via The Law Of Small Numbers (a handy catch-all excuse
>we use around here!), and I'd like some feedback from people with experience in
>how Fortran handles very small numbers.
>
>The routine in question takes the arithmetic mean of N positive real numbers
>in the range, say 40.0 to 90.0, and uses subtraction to find the number that
>is the furthest away from that mean.  It computes a "difference" from the mean
>for each number and determines the greatest difference using the DABS
>(double-precision absolute value) function.  The problem shows up in a
>specific case where N equals 2, so both real numbers should have the same
>difference from the mean.
>The original code acts differently depending on which of the N numbers is
>the furthest distant from the mean, and does it in the case N=2 with a
>  IF ( DABS(A(1)) - DABS(A(2)) ) 800,900,900
>where A is a double precision real array.  In almost every case, the middle
>branch is taken, but there's one case in particular where the first branch
>is taken, meaning there is a measurable difference between two numbers which
>should be identical.
>
>I believe it's a case of reaching the limits to which a binary machine can
>express a decimal-based number, with the result that two "identical" numbers
>are in fact only "very close", and their subtraction yields a number detectably
>different from zero.  Anyone have any experience to corroborate or refute this?

The problem is not specific to Fortran, nor is it specific to binary machines.
The root of the problem is that machine precision is limited.  To illustrate,
imagine that Dick's problem were being run on a decimal machine with three
digits of accuracy.  Suppose the numbers were 41.0 and 41.5.  Summing them,
we get 82.5.  Dividing by 2 to get the average, we get 41.25, but the machine
can only represent 3 digits, so it must choose either 41.2 or 41.3.  That
roundoff means that when we compute the differences, they will be different
(.2 and .3) rather than identical (.25 and .25).  If we suppose the bottom
digit of the representation to be essentially random, then this kind of
roundoff should occur half the time.  The fact that you are not seeing this
suggests a different distribution.  My guess would be that the bottom digit
is turning out even more than half the time, but that is not the only
possible explanation of what you are seeing.
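
If Dick wants to see how often this happens in binary, a quick and dirty
count along these lines (the pairs are invented, and the tally will vary
with hardware and compiler) will do it:

      PROGRAM HALFUP
C     Count how often the two distances to the computed mean come out
C     unequal for 1000 invented pairs (X, X+0.3).
      DOUBLE PRECISION X, Y, Z
      INTEGER I, NDIFF
      NDIFF = 0
      DO 10 I = 1, 1000
         X = 40.0D0 + DBLE(I) * 0.017D0
         Y = X + 0.3D0
         Z = (X + Y) / 2.0D0
         IF (DABS(X-Z) .NE. DABS(Y-Z)) NDIFF = NDIFF + 1
   10 CONTINUE
      WRITE (*,*) NDIFF, ' of 1000 pairs gave unequal differences'
      END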
-- 
Kurt W. Hirchert     hirchert@ncsa.uiuc.edu
National Center for Supercomputing Applications