[comp.arch] Floating Point is Here to Stay

amull@Morgan.COM (Andrew P. Mullhaupt) (09/10/90)

Several people have posted the intentionally provocative claim that
floating point arithmetic is a clooge (and I have spelled it this way 
for more than twenty years so don't correct me :-)). Well, floating
point arithmetic does not play the same role it did in the middle
'60s, especially since the IEEE standards 754 and 854. Nobody who
is aware of the depth of experience and contemplation which went into
the formation of these standards could consider the result hasty
or misconceived. But let us, strictly for argument's sake, consider
that floating point is some sort of unfortunate lesser evil which 
appeals to the mass market for inexplicable reasons. 


At the top of the list of complaints is the idea that floating point
instructions are not as flexible as fixed point instructions. This
argument is usually based on the false idea that a few shifts here
and there are enough to turn fixed point arithmetic into floating
point. Well, that is not the case, since modern floating point
implementations are endowed with sophisticated exception handling
capabilities which a programmer would be hard pressed to duplicate in
fixed precision. Just ask anybody who has written an IEEE conforming
software emulation which properly handles denormals. Don't care about
denormals? Then you get stuff like a-b=0 and a<>b at the same time.
But then if you have denormals, you have to consider what happens
when you take reciprocals of such tiny numbers (they can overflow).
Another problem is what happens when you write a transcendental
function in fixed point. It's pretty difficult to compute, say, x to
the y power without extra bits, and one of the reasons that Wirth
left the power function out of Pascal in the original design was that
he was not convinced that one could successfully be written which
would behave the same on different machines at that time. Now it is
a different story, thanks mainly to the IEEE standards.
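
To make the denormal point concrete, here is a minimal sketch in C
(assuming IEEE 754 doubles and the DBL_MIN constant from <float.h>;
the flush-to-zero behaviour mentioned in the comments describes how a
machine _without_ denormals would act, not the machine running the
program):

    #include <stdio.h>
    #include <float.h>

    int main(void)
    {
        /* Two distinct doubles whose difference is smaller than the
         * smallest normalized double (DBL_MIN).  With IEEE gradual
         * underflow the difference is a denormal, so a - b is nonzero
         * exactly when a and b differ.  A flush-to-zero machine would
         * round the difference to 0 even though a <> b. */
        double a = 3.0 * DBL_MIN;
        double b = 2.5 * DBL_MIN;

        printf("a == b       : %s\n", (a == b) ? "yes" : "no");
        printf("a - b == 0.0 : %s\n", ((a - b) == 0.0) ? "yes" : "no");
        printf("a - b        : %g\n", a - b);
        return 0;
    }

On a conforming implementation the second line prints "no", which is
exactly what lets you reason that a - b = 0 implies a = b.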


So let us suppose that we have no floating point hardware. Then why
would it be necessary (as it has been on nearly every computer ever
built) to write a floating point library? And this library,
which encapsulates perhaps two hundred different functions, is sure
to be one of the most well travelled libraries on the machine.
In fact, floating point arithmetic is one of the most successful
cases of "object oriented programming" ever - you get a data structure
and a bunch of operations for it all in one bundle. So successful that
it has been "compiled to silicon". If anyone out there _really_ has a
compelling argument for a generally useful alternative to floating
point arithmetic, there will be a lot of people interested. But be
warned that you had better be able to crank out those Mega-?-ops
because there are a lot of people who actually know what their
applications require (Imagine that!) and want their machines to
actually _do_ that.
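
To give a feel for what that bundle involves, here is a deliberately
stripped-down sketch in C of a single operation from such a library
(a hypothetical toy format of my own, not the IEEE 754 bit layout;
rounding, signs, zeros, infinities, NaNs and denormals are all
omitted):

    #include <stdint.h>
    #include <stdio.h>

    /* A toy "software float": sign, exponent, and a 24-bit mantissa
       kept normalized with bit 23 set (an explicit leading 1). */
    typedef struct {
        int      sign;   /* 0 = positive, 1 = negative */
        int      exp;    /* unbiased exponent          */
        uint32_t man;    /* mantissa in Q1.23          */
    } sfloat;

    /* Add two positive soft-floats: align exponents, add mantissas,
       renormalize.  Everything left out here is exactly what makes a
       real library run to a couple of hundred routines. */
    static sfloat sf_add(sfloat a, sfloat b)
    {
        sfloat r;
        if (a.exp < b.exp) { sfloat t = a; a = b; b = t; }
        int shift = a.exp - b.exp;
        uint32_t bman = (shift < 32) ? (b.man >> shift) : 0;

        r.sign = a.sign;
        r.exp  = a.exp;
        r.man  = a.man + bman;
        if (r.man & (1u << 24)) {   /* mantissa overflowed bit 23 */
            r.man >>= 1;
            r.exp += 1;
        }
        return r;
    }

    int main(void)
    {
        sfloat three = { 0, 1, 0xC00000 };  /* 1.5 * 2^1 = 3.0 */
        sfloat one   = { 0, 0, 0x800000 };  /* 1.0 * 2^0 = 1.0 */
        sfloat sum   = sf_add(three, one);
        printf("mantissa 0x%06X * 2^%d\n", (unsigned)sum.man, sum.exp);
        return 0;
    }

The answer comes out as mantissa 0x800000 times 2^2, i.e. 4.0, and
even this little routine already has to worry about alignment and
renormalization; the rounding and the special cases are where the
real work is.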


Most of the people who abandon floating point arithmetic do so in favor
of _low_ precision arithmetic in specific applications where their
data are "low precision" for an inherent reason. But even in audio
signal processing (where I got my first RISC machine - a reverb unit),
the "F" instructions are raising their ugly heads. It seems that there
comes a point in VLSI integration where an FPU is not a big deal to
put on the chip (and a lot of CPUs are crossing that Rubicon about now)
and when you get to that point, floating point becomes the simplest
way to get a lot of things done. Does anyone know when we'll be getting
floating point audio CD's? I'd bet we're getting close.
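
For what it's worth, here is a small C sketch of the difference those
"F" instructions make in that world (Q15 is a common 16-bit fixed
point convention for audio samples; the helper below is my own
illustration, not code from any particular DSP):

    #include <stdint.h>
    #include <stdio.h>

    /* Q15: a sample in [-1, 1) stored as a 16-bit integer scaled by
       2^15.  Multiplying two Q15 values needs a 32-bit product, a
       rescaling shift, and an explicit saturation check. */
    static int16_t q15_mul(int16_t a, int16_t b)
    {
        int32_t p = (int32_t)a * (int32_t)b;  /* 32-bit product      */
        p >>= 15;                             /* rescale back to Q15 */
        if (p >  32767) p =  32767;           /* saturate            */
        if (p < -32768) p = -32768;
        return (int16_t)p;
    }

    int main(void)
    {
        int16_t gain   = 16384;   /* 0.5  in Q15 */
        int16_t sample = 24576;   /* 0.75 in Q15 */

        int16_t fx = q15_mul(gain, sample);  /* programmer tracks scale */
        float   fl = 0.5f * 0.75f;           /* hardware tracks scale   */

        printf("fixed: %d/32768 = %f\n", fx, fx / 32768.0);
        printf("float: %f\n", fl);
        return 0;
    }

The fixed point version is perfectly serviceable - that is why the
reverb units use it - but every multiply drags its scaling and
overflow bookkeeping along with it, and that is the bookkeeping an
FPU does for you.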


I think it's time to call this bluff. Is there _any_ example of a
machine that supported both a full set of integer operations _and_
floating point instructions, where getting the same accuracy as the
floating point hardware turned out to be faster via integer
emulation? Even on a machine like the 80386, which had pretty weak
floating point coprocessors, the floating point instructions were an
order of magnitude faster than emulation.


Later,
Andrew "Megaflops" Mullhaupt