[comp.lang.fortran] Explain this output to me

vaughan@ut-emx.UUCP (Curt Vaughan) (05/26/90)

>As I replied to the original poster, the problem is that 0.1 is not
>exactly representable.  Although the second version is less likely
>to result in a non-zero answer (because there is less round-off),
>it will still yield non-zero results on some systems. It just depends
>on machine mantissa size and rounding mode.  This is a recurring
>problem for all newcomers to floating point in all languages.

It may also be dependent upon the machine architecture.  Although
most machines now use two's complement arithmetic, the Cyber 170,
which uses the most significant bit as a sign bit (60-bit word)
produces the correct result with the original code.  The Cray X-MP,
VAX 780, and Convex, all of which use two's complement arithmetic,
yield varying, but incorrect values.

                      Curt Vaughan
                      Internet: vaughan@emx.utexas.edu
                      Decnet:   utadnx::utaivc::ccfa001
 

dik@cwi.nl (Dik T. Winter) (05/26/90)

In article <30499@ut-emx.UUCP> vaughan@ut-emx.UUCP (Curt Vaughan) writes:
(About 0.1 etc.)
 > It may also be dependent upon the machine architecture.  Although
 > most machines now use two's complement arithmetic, the Cyber 170,
 > which uses the most significant bit as a sign bit (60-bit word)
 > produces the correct result with the original code.  The Cray X-MP,
 > VAX 780, and Convex, all of which use two's complement arithmetic,
 > yield varying, but incorrect values.
 > 
This is not correct.  There are only a few machines that use two's
complement for floating-point.  I know of only the CDC-205, ETA-10,
Modcomp (does it still exist?), and Gould (really 16's complement).
All the others use sign/magnitude, except the Cyber 170 which uses
one's complement.  There is not really a difference between one's
complement and sign/magnitude (except representation of course).
The rounding mode remains the same.  The reason you get different
results on those machines is:
1.  Probably the code on the Cyber 170 was compiled using rounding
    instructions.
2.  The Cray (with the same mantissa size as the Cyber) does truncate.
3.  The Vax has a different mantissa size (53 vs. 48 bits).
4.  Convex has two floating point formats, both approximately the
    same format as the Vax, but rounding is slightly different.
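The effect of mantissa size and rounding mode is easy to simulate, even
if not on 1990 hardware.  A sketch in modern Python (an illustration
only, not a model of any of the machines above): `chop` mimics a
truncating machine like the Cray, `rnd` a machine compiled with
rounding instructions, with an assumed 24-bit significand.

```python
import math

# Model a machine with a `bits`-bit significand.  Not any real 1990
# hardware -- just an illustration of truncation vs. rounding mode.

def chop(x, bits):
    # Truncate the significand toward zero (a truncating machine).
    if x == 0.0:
        return 0.0
    m, e = math.frexp(x)              # x = m * 2**e, 0.5 <= |m| < 1
    return math.trunc(m * 2**bits) * 2.0**(e - bits)

def rnd(x, bits):
    # Round the significand to nearest (rounding instructions).
    if x == 0.0:
        return 0.0
    m, e = math.frexp(x)
    return round(m * 2**bits) * 2.0**(e - bits)

def sweep(op, bits):
    # The original loop: -0.5 stepped five times by approximately 0.1.
    x = op(-0.5, bits)
    dx = op(0.1, bits)
    for _ in range(5):
        x = op(x + dx, bits)
    return x

print(sweep(chop, 24))    # truncating: residual about -3.0e-08
print(sweep(rnd, 24))     # rounding: residual about -1.5e-08, half as big
```

Neither machine gets exactly zero, but the rounding machine's residual
is half the truncating machine's -- which is the whole point of items
1 and 2 above.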
--
dik t. winter, cwi, amsterdam, nederland
dik@cwi.nl

urjlew@uncecs.edu (Rostyk Lewyckyj) (05/28/90)

In the original problem the program starts at -.5 and increments
by approximately .1 to approximately .5. The output of the steps
is displayed to 7 places.

Now in all current computers numbers are stored in binary, and -.5
is representable exactly. So when -.5 is converted for printing,
it is nicely represented by -.5000000.  But .1 is not representable
exactly in binary, so the machine stores the nearest value to .1
that it can. When this representation of .1 (dx) is added to -.5 (x),
there is some error in the resulting new x. But when the resulting
x (approximately -.4) is converted for printing, it is still within
half a unit in the last displayed place of -.4, and so to seven
places it prints out as -.4000000. What I am saying is that the
error is in there in the stored value of x, but it is not evident
in the displayed output.

This same situation occurs for the next several steps, i.e. -.3,
-.2, and -.1. But in the next step the leading digit of the result
becomes 0, and we have a massive loss of significance. Internally
the result x is normalized, and when it is converted for display,
we suddenly see all of the accumulated error: -.2775558E-16. When
the next dx is added, the leading figure of the result is again
not 0, the error is again far smaller than half a unit in the last
displayed place, and so it disappears from view off the right edge
of the displayed result.
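You can watch the error hide and then suddenly appear on any modern
IEEE machine.  A Python sketch (not the original Fortran; the left
column mimics the 7-place D15.7 output, the right column shows every
stored digit):

```python
x = -0.5
for step in range(6):
    # left: rounded to 7 places, as in the D15.7 output;
    # right: the full stored value.
    print(f"{x:15.7E}    stored: {x!r}")
    x += 0.1
```

Up through -.1 the stored error is far below half a unit in the
seventh displayed place, so the left column looks exact; on the sixth
line normalization exposes the whole accumulated error at once.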
  
I hope that this description was illuminating and clarifying.
-----------------------------------------------
  Reply-To:  Rostyslaw Jarema Lewyckyj
             urjlew@ecsvax.UUCP ,  urjlew@unc.bitnet
       or    urjlew@uncvm1.acs.unc.edu    (ARPA,SURA,NSF etc. internet)
       tel.  (919)-962-6501

klshafer@iuvax.cs.indiana.edu (Ken Shafer) (05/29/90)

In article <1990May27.193758.25099@uncecs.edu> urjlew@uncecs.edu (Rostyk Lewyckyj) writes:
>
>
>In the original problem the program starts at -.5 and increments
>by approximately .1 to approximately .5. The output of the steps
>is displayed to 7 places.
...........remainder of explanation(s) deleted.......

This problem has been well explained so I won't add to that discussion
directly.

But I will contribute another example regarding the pitfalls of approximations
with floating point:

About 11 years ago I was stymied for a day or two by a bug at a client
site, using CDC FORTRAN 66 (this was in pre-f77 days). The application
used a combination of numerics and text. I was using REAL arrays for
storing some character strings; knowing that CDC had a 60-bit word and
6-bit characters, these were A10 (10-character) structures.  I was
doing an IF test for equality of strings (looking for command keywords
and the like). I would print the two variables out as A10 before the
comparison and see that indeed they were different, but when I compared
them, the equality branch of the IF was the branch taken!

Eventually I figured it out. The particular selection of characters in
the strings to be compared resulted in wildly different exponents,
so the compiler generated code to 'normalize' the exponents, and the
least significant bits (the fractional part) were subsequently shoved
off into oblivion, yielding two floating-point values that were
"equal."  The solution was to change all the declarations for the
'character' data from REAL to INTEGER; the comparisons then degenerated
into straight bit comparisons, and the 'proper' logic was generated.

All this may seem moot in light of the character data type available
in Fortran 77, but be aware that some f77 compilers still have some
restrictions on character data types.  For example, Hewlett Packard
RTE-6/RTE-A f77 will not permit character data types in COMMON
blocks that are resident in extended-memory-address ($ema) partitions,
necessitating storing character data in some other type.  The
object lesson here is that, if for ANY reason you cannot use
character types to store character data, USE TYPE INTEGER rather
than TYPE REAL.
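The CDC normalization trick can't be reproduced bit-for-bit on today's
IEEE hardware, but the moral survives: two DIFFERENT bit patterns can
still compare "equal" when interpreted as REALs, while an integer
compare of the same words is exact.  A small modern illustration in
Python (reinterpreting raw 64-bit words; the +0.0/-0.0 pair stands in
for my colliding character strings):

```python
import struct

def bits_to_float(bits):
    # Reinterpret a raw 64-bit word as an IEEE double (no value conversion).
    return struct.unpack('<d', struct.pack('<Q', bits))[0]

word_a = 0x0000000000000000          # the bit pattern of +0.0
word_b = 0x8000000000000000          # the bit pattern of -0.0: a DIFFERENT word

print(word_a == word_b)                                 # False: integers compare exactly
print(bits_to_float(word_a) == bits_to_float(word_b))   # True: as floats they are "equal"
```

Same lesson as on the CDC: if the words hold character data, compare
them as integers.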

Regards,
          ken.............klshafer@iuvax.cs.indiana.edu

userDHAL@mts.ucs.UAlberta.CA (unknown) (05/31/90)

In article <1554@ns-mx.uiowa.edu>, jlhaferman@l_eld09.icaen.uiowa.edu (Jeffrey Lawrence Haferman) writes:
>Why does this program give me -0.2775558E-16 when I expect 0.0?
>Two different versions of Fortran have produced the same result.
>I'm not a Fortran programmer, so be easy on me.  Please explain
>what I need to do to get 0.0 where I expect it.
>
>==================================================================
>C
>      IMPLICIT REAL*8 (A-H,O-Z)
>C
>      XOUT = -5.D-1
>      DX = 1.D-1
>C
>      DO 40 IOUT = 1,50
>        IF (XOUT .GT. 5.D-1) GOTO 90
>        WRITE(6,20)XOUT
>40      XOUT = XOUT + DX
>90    STOP
>20    FORMAT(D15.7)
>      END
>C
>==================================================================
>Output:
>
> -0.5000000E+00
> -0.4000000E+00
> -0.3000000E+00
> -0.2000000E+00
> -0.1000000E+00
> -0.2775558E-16   <----- ????
 
Welcome to the fun of round-off error, Jeff. The catch here is to
understand that the computer is working in base 2, while your value of
DX is a nice round number only in base 10. What you see in your output
is that you have a value of zero IF you round it to only 15 decimal places
(which is appropriate for double precision). If you printed out all
values to 16 or more decimals, you would see errors in them as well.
Making this up as we go, you might expect to see:
-0.50000000000000000
-0.40000000000000005
-0.30000000000000010
-0.20000000000000015
-0.10000000000000020
-0.00000000000000025        (which is -0.25E-15).
 
To further clarify this (or perhaps muddy the waters), think about
how these numbers are stored in base 2: 0.5 is easy because it is 2**-1
(sticking with FORTRAN notation). 0.25 is 2**-2, 0.125 is 2**-3, etc.
What does this make 0.1? It CAN'T be represented exactly in base 2!
Therefore there HAS to be rounding error in your calculations, and you
won't get zero! Ever! Even if you do it in quintuple precision. Even on
a CYBER! This is why accounting programs should not use normal floating-
point calculations with cents as decimal fractions of dollars. People
lose money! Use binary-coded decimal or integer cents instead.
If you do a lot of serious numerical work, rounding error is CRITICAL.
There are ways to program to avoid it, and ways to program to guarantee
it will cause problems.
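Jeff's loop reproduces exactly on any machine with IEEE 754 doubles
(53-bit significand, default round-to-nearest), and the integer-cents
fix is one line.  A Python sketch, not the original Fortran:

```python
# The original loop on IEEE 754 doubles: five additions of 0.1 to -0.5.
x = -0.5
for _ in range(5):
    x += 0.1
print(x)          # -2.7755575615628914e-17, i.e. -0.2775558E-16 to 7 places

# The same bookkeeping in integer cents: exact, no money lost.
cents = -50       # -0.50 dollars
for _ in range(5):
    cents += 10   # +0.10 dollars
print(cents)      # 0
```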
 
>  0.1000000E+00
>  0.2000000E+00
>  0.3000000E+00
>  0.4000000E+00
>  0.5000000E+00
>
>
>Jeff Haferman                            internet: jlhaferman@icaen.uiowa.edu
>Department of Mechanical Engineering
>University of Iowa
>Iowa City IA  52240

dneff@garth.UUCP (David Neff) (06/05/90)

In article <1549@charon.cwi.nl> dik@cwi.nl (Dik T. Winter) writes:
>                      There are only a few machines that use two's
>complement for floating-point.  I know only: CDC-205, ETA-10,
>Modcomp (does it still exist?) and Gould (really 16's complement).
>All the others use sign/magnitude, except the Cyber 170 which uses
>one's complement.

Another famous 2's complement floating-point machine is the PDP-10.
-- 
UUCP: pyramid!garth!dneff
USPS: Intergraph APD, 2400 Geng Road, Palo Alto, Ca, 94303
AT&T: (415) 852-2334