[comp.misc] IEEE floating point format

bph@buengc.BU.EDU (Blair P. Houghton) (08/12/89)

In article <152@servio.UUCP> penneyj@servio.UUCP (D. Jason Penney) writes:
>In article <3591@buengc.BU.EDU> bph@buengc.bu.edu (Blair P. Houghton) writes:
>>Next question:  do C compilers (math libraries, I expect I should mean)
>>on IEEE-FP-implementing machines generally limit doubles to normalized
>>numbers, or do they blithely allow precision to waft away in the name
>>of a slight increase in the number-range?
>
>This is an interesting question.  The early drafts of IEEE P754 had a
>"warning mode": when it was set, an operation with normal operands
>that produced a subnormal result
>("subnormal" is now the preferred term for "denormalized", by the way)
>signalled an exception.

Ulp!  You mean it does it absolutely silently now?  No provision at all
for a hardware (or software) portabl-ized belief that an implementation
will always perk up when the bits start to disappear??  I'm less impressed.

>It was eventually removed because 1) Checking for this condition
>was expensive, and 2) it did not seem to be very useful.

Checking for this condition requires but an n-input OR of the
bits of the exponent.  I can't imagine they consider it to be
expensive at all in relation to the expense of reimplementing
hardware to handle the conversion from non-subnormalizing to
subnormalizing numbers.

Sometimes I wonder at standards committees' ability to rationalize
the tweaks in the face of the earthquake that is their existence...

>I won't give a full discussion of the benefit of gradual underflow, but
>note that with truncating underflow, it is possible to have two floating 
>point values X and Y such that X != Y and yet (X - Y) == 0.0, 
>thus vitiating such precautions as,
>
>if (X == Y)
>  perror("zero divide");
>else
>  something = 1.0 / (X - Y);
>
>[Example thanks to Professor Kahan...]

People tell me Donald Knuth likes it, too, for this reason.
I find it a bit retentive, myself.  The "two numbers" in question
fall into the range of having their LSB's dangling off the edge
of the range of exponents, which at this point is in the neighborhood
of -1000.

Further, there are many more numbers where ( X == Y ), and one is
foolish to ever expect that one can do a division by ( X - Y ) and
_not_ first have to check for a zero divisor.  Therefore, by
implementing subnormalization, you remove the erroneous determination
that ( X == Y ) for a small portion of the numbers, but you do not
vitiate the expense of coding the precaution to check-before-you-divide.

				--Blair
				  "But, I'm not a God of computing,
				   as is Knuth (and, I presume, Kahan,
				   though I don't know the name), so
				   feel free to discard my opinions
				   without regard."

tim@cayman.amd.com (Tim Olson) (08/15/89)

In article <3707@buengc.BU.EDU> bph@buengc.bu.edu (Blair P. Houghton) writes:
| In article <152@servio.UUCP> penneyj@servio.UUCP (D. Jason Penney) writes:
| >In article <3591@buengc.BU.EDU> bph@buengc.bu.edu (Blair P. Houghton) writes:
| >>Next question:  do C compilers (math libraries, I expect I should mean)
| >>on IEEE-FP-implementing machines generally limit doubles to normalized
| >>numbers, or do they blithely allow precision to waft away in the name
| >>of a slight increase in the number-range?
| >
| >This is an interesting question.  The early drafts of IEEE P754 had a
| >"warning mode": when it was set, an operation with normal operands
| >that produced a subnormal result
| >("subnormal" is now the preferred term for "denormalized", by the way)
| >signalled an exception.
| 
| Ulp!  You mean it does it absolutely silently now?  No provision at all
| for a hardware (or software) portabl-ized belief that an implementation
| will always perk up when the bits start to disappear??  I'm less impressed.

No, there are two exceptions that can be signalled: underflow and
inexact.  Underflow occurs when "tininess" is detected (or both tininess
and loss of accuracy, if traps are disabled).  Inexact occurs whenever
the rounded result of an operation cannot be represented exactly.

	-- Tim Olson
	Advanced Micro Devices
	(tim@amd.com)

bph@buengc.BU.EDU (Blair P. Houghton) (08/16/89)

In article <26756@amdcad.AMD.COM> tim@amd.com (Tim Olson) writes:
>In article <3707@buengc.BU.EDU> bph@buengc.bu.edu (Blair P. Houghton) writes:
>| 
>| Ulp!  You mean it does it absolutely silently now?  No provision at all
>| for a hardware (or software) portabl-ized belief that an implementation
>| will always perk up when the bits start to disappear??  I'm less impressed.
>
>No, there are two exceptions that can be signalled: underflow and
>inexact.  Underflow occurs when "tininess" is detected (or both tininess
>and loss of accuracy, if traps are disabled).

Is that underflow defined as when a number enters the subnormal
region, or when it is so small that even subnormal representation
isn't possible?

>Inexact occurs whenever
>the rounded result of an operation cannot be represented exactly.

You mean whenever the rounded result is not equal to the true result,
as a result of the decrease in digits that defines the rounding
operation.  I.e., whenever rounding actually does its job.

				--Blair
				  "Just want to disambiguate any
				   discrepancies in our disparate
				   discourse..."

tim@cayman.amd.com (Tim Olson) (08/16/89)

In article <3782@buengc.BU.EDU> bph@buengc.bu.edu (Blair P. Houghton) writes:
| In article <26756@amdcad.AMD.COM> tim@amd.com (Tim Olson) writes:
| >No, there are two exceptions that can be signalled: underflow and
| >inexact.  Underflow occurs when "tininess" is detected (or both tininess
| >and loss of accuracy, if traps are disabled).
| 
| Is that underflow defined as when a number enters the subnormal
| region, or when it is so small that even subnormal representation
| isn't possible?

"Tininess" is detected when a nonzero result computed as though the
exponent range were unbounded would lie strictly between +/-2^(Emin),
where Emin is the minimum exponent representable in the format being
used.

"Loss of accuracy" is defined as either:

	- a denormalization loss: when the delivered result differs from
	  what would have been computed were the exponent range unbounded

	- an inexact result: when the delivered result differs from what
	  would have been computed were both exponent range and
	  precision unbounded

If underflow traps are enabled, then an underflow exception would be
signalled whenever tininess occurs.  This would include all results
between +/-2^(Emin), which, if they had been returned, would be zero,
denormalized, or +/-2^(Emin).

If underflow traps are disabled, then an underflow exception would be
signalled (via the underflow flag) when *both* tininess and loss of
accuracy occur.  This means that a denormalized result may be returned
without setting the underflow flag if it is exactly represented.

| >Inexact occurs whenever
| >the rounded result of an operation cannot be represented exactly.
| 
| You mean whenever the rounded result is not equal to the true result,
| as a result of the decrease in digits that defines the rounding
| operation.  I.e., whenever rounding actually does its job.

Yes.


	-- Tim Olson
	Advanced Micro Devices
	(tim@amd.com)