[comp.lang.c] YA floating-point mixup

mouse@mcgill-vision.UUCP (der Mouse) (08/23/88)

In article <3863@thorin.cs.unc.edu>, barkley@unc.cs.unc.edu (Matthew Barkley) writes:
> In trying out the problem program from the "floating point puzzle"
> posting, I came across some strange behavior in trying to make the
> program more concise.  Here's what I ran and what I got:

> 	float x,y;
> 	x = 1.0/10.0;
> 	y = 1677721.0/16777216.0; 
> 	printf("x: %x",x); 	printf("%20.17f\n",x);
> 	printf("y: %x",y); 	printf("%20.17f\n",y);
> 	printf("\n");
> 	printf("x: %x %20.17f\n",x,x);
> 	printf("y: %x %20.17f\n",y,y);

> /* x: cccd3ecc 0.10000000149011612 */
> /* y: ccc83ecc 0.09999996423721313 */

> /* x: cccd3ecc  0.00000000000000000 */
> /* y: ccc83ecc  0.00000000000000000 */

> Is it unreasonable to expect the 2 sets of output to be the same?
> (BTW, the results are more bizarre if x and y are of type double.)

In a word, yes.

Remember that x and y get promoted to double when being passed to
printf().  Remember too that doubles and integers (an integer is what
printf expects for a %x format, though a double is indeed what it
expects for %f) are not the same size.
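
To see the sizes involved, here is a quick sketch; it assumes the usual
4-byte int and float and 8-byte double (true on the VAX and on most
machines today):

    #include <stdio.h>

    int main(void)
    {
        /* A float handed to printf's variable argument list goes
         * through as a double, so the 8-byte argument and the int
         * that %x reads do not line up.
         */
        float x = 1.0/10.0;

        printf("sizeof(int)=%u  sizeof x=%u  sizeof(double)=%u\n",
               (unsigned)sizeof(int), (unsigned)sizeof x,
               (unsigned)sizeof(double));
        return 0;
    }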

In short, the argument list to printf is thoroughly confused.  You said
this was on a VAX, so here's what things will (assuming an ordinary
compiler) look like on entry to the various printf calls.  I'm showing
just the x versions; the y versions are identical except that the
numbers on the stack are slightly different.

> 	printf("x: %x",x);

On the stack, 8 bytes (x, promoted to double).  Printf will print the
first four in hex and ignore the other four.  The first four are the
sign, exponent, and the high bits of the mantissa.
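
You can peek at those first four bytes without relying on the mismatch
itself.  A small sketch (assuming 4-byte unsigned int and 8-byte
double; the hex it prints depends on the machine's floating format and
byte order, so only a VAX will show cccd3ecc):

    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        float x = 1.0/10.0;
        double d = x;            /* the promotion printf's caller does */
        unsigned int first_word; /* assumes 4-byte unsigned int        */

        /* Copy out the first four bytes of the promoted double: this
         * is what a mismatched %x would pick up.  On the VAX they hold
         * the sign, exponent, and high mantissa bits; on a
         * little-endian IEEE machine they hold the low end of the
         * mantissa instead.
         */
        memcpy(&first_word, &d, sizeof first_word);
        printf("first 4 bytes of promoted x: %08x\n", first_word);
        printf("x printed with %%f:          %20.17f\n", x);
        return 0;
    }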

> 	printf("%20.17f\n",x);

This is perfectly correct.

> 	printf("x: %x %20.17f\n",x,x);

On the stack, 16 bytes (two copies of x, each promoted to double).
Printf will print the first four in hex (the %x).  Then it will take
the next eight, that is, the last half of one copy of x and the first
half of the other, and treat those bits as if they were a normal
double.  It so happens that these bits, interpreted that way, represent
zero.  So printf naturally enough prints zero.
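
If what you want is the float's own bits next to its value in one
call, a sketch along these lines works (assuming float and unsigned
int are both four bytes):

    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        float x = 1.0/10.0;
        float y = 1677721.0/16777216.0;
        unsigned int xbits, ybits;

        /* Give %x a real unsigned int holding the float's bit pattern,
         * and let %f have its own argument, promoted to double as
         * usual.
         */
        memcpy(&xbits, &x, sizeof xbits);
        memcpy(&ybits, &y, sizeof ybits);
        printf("x: %08x %20.17f\n", xbits, x);
        printf("y: %08x %20.17f\n", ybits, y);
        return 0;
    }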

> /* x: cccd3ecc 0.10000000149011612 */
> /* y: ccc83ecc 0.09999996423721313 */

You can see the difference here.  Here is what those bit-patterns look
like (the 1 in parentheses is the hidden bit):

	Sign	Exponent	Mantissa
x:	 +	01111101	(1)10011001100110011001101
y:	 +	01111101	(1)10011001100110011001000

The rest of the bits of the doubles that are actually on the stack will
be zero bits, from promotion to double.
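
For what it's worth, here is a sketch that splits a float's 32 bits
into those three fields.  It assumes the sign, 8-bit exponent, and 23
stored fraction bits are packed from the high bit down in one 32-bit
word, as in IEEE single precision; the VAX stores the same fields with
a different word layout and exponent bias, so the exponent field
printed on a non-VAX won't match the numbers in the table above.

    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        float vals[2] = { 1.0/10.0, 1677721.0/16777216.0 };
        const char *names[2] = { "x", "y" };
        int i;

        for (i = 0; i < 2; i++) {
            unsigned int bits;   /* assumes 4-byte unsigned int and float */

            memcpy(&bits, &vals[i], sizeof bits);
            printf("%s: sign=%u exponent=%02x fraction=%06x\n",
                   names[i],
                   bits >> 31,           /* sign bit                */
                   (bits >> 23) & 0xffu, /* 8-bit exponent field    */
                   bits & 0x7fffffu);    /* 23 stored fraction bits */
        }
        return 0;
    }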

					der Mouse

			old: mcgill-vision!mouse
			new: mouse@larry.mcrcim.mcgill.edu