[comp.lang.fortran] Want Fast Double Precision?

aglew@ccvaxa.UUCP (01/22/88)

/* Written 10:14 am  Jan 21, 1988 by aglew@ccvaxa.UUCP in ccvaxa:comp.arch */
>	I really don't think the real world really needs anything more
>expansive than a 32 bit processor to get most jobs done.

Probably, but... I wonder how much interest might be out there for
a true double-precision floating point engine - one that did 64 bit
floating point, or IEEE 80 bit extended floating point, or even 
128 bit floating point, as its native floating point mode, as fast
as single precision on nearly any other machine in its price range?

I'm sure that most people wouldn't need this, but some might - and I'd 
like to get a feel for the size of such a niche, if it exists. Post
if you want to discuss, or mail me (I'll summarize to the net if I get
a large enough sample).


Andy "Krazy" Glew. Gould CSD-Urbana.    1101 E. University, Urbana, IL 61801   
    aglew@gould.com     	- preferred, if you have nameserver
    aglew@gswd-vms.gould.com    - if you don't
    aglew@gswd-vms.arpa 	- if you use DoD hosttable
    aglew%mycroft@gswd-vms.arpa - domains are supposed to make things easier?
   
My opinions are my own, and are not the opinions of my employer, or any
other organisation. I indicate my company only so that the reader may
account for any possible bias I may have towards our products.
/* End of text from ccvaxa:comp.arch */

jss@beta.UUCP (Jeffrey Saltzman) (01/30/88)

in article <32300003@ccvaxa>, aglew@ccvaxa.UUCP says:
> /* Written 10:14 am  Jan 21, 1988 by aglew@ccvaxa.UUCP in ccvaxa:comp.arch */
>>	I really don't think the real world really needs anything more
>>expansive than a 32 bit processor to get most jobs done.
> 
> Probably, but... I wonder how much interest might be out there for
> a true double-precision floating point engine - one that did 64 bit
> floating point, or IEEE 80 bit extended floating point, or even 
> 128 bit floating point, as its native floating point mode, as fast
> as single precision on nearly any other machine in its price range?
> 
The Cray series of machines are 64-bit machines.  That is, single
precision on these machines is 64 bits.  The reason for going this
route was that rounding errors in large programs were beginning to
cause large perturbations in the results.  One example of why this
happens is linear systems derived from finite difference methods
for partial differential equations.  Although the algorithms are
stable, the condition numbers of the matrices increase with the
size of the problem being solved, and the amount of rounding error
is related to the condition number of the matrix.  See Forsythe and
Moler, "Computer Solution of Linear Algebraic Systems" (Prentice-Hall).

> I'm sure that most people wouldn't need this, but some might - and I'd 
> like to get a feel for the size of such a niche, if it exists. 
> 
With the increased availability of fast CPUs, more and more people may
require more bits for their calculations, since they will want to
tackle larger problems.

---------------------
Jeff Saltzman
Los Alamos National Laboratory
jss@lanl.gov

disclaimer:remialcsid

msf@amelia.nas.nasa.gov (Michael S. Fischbein) (01/30/88)

>> Probably, but... I wonder how much interest might be out there for
>> a true double-precision floating point engine - one that did 64 bit

The HP Precision Architecture hardware floating point is faster at
double precision than single precision.

I believe that is because the floating point unit is 64 bits wide and
must add a truncation instruction for single precision.  Anyone out
there who knows the internals?
		mike
-- 
Michael Fischbein                 msf@ames-nas.arpa
                                  ...!seismo!decuac!csmunix!icase!msf
These are my opinions and not necessarily official views of any
organization.

cdb@hpclcdb.HP.COM (Carl Burch) (01/31/88)

> msf@amelia.nas.nasa.gov (Michael S. Fischbein) writes :
> 
> The HP Precision Architecture hardware floating point is faster at
> double precision than single precision.
> 
> I believe that is because it is 64-bits wide and must add a truncation
> instruction for single precision.  Anyone who knows the internals out there?

I just ran the Linpack and Whetstone benchmarks on our HP9000 Series 840,
and the double/single ratios of the times were 1.44 and 1.52,
respectively - in both cases double took longer than single.  (NOTE :
Usual disclaimer about these not being official benchmark results, etc. -
even just as a ratio.  Actually, these are just ballpark ratios :
running benchmarks is almost as hard as writing them - which is
impossible.)

These ratios will vary considerably with the implementation of the
architecture, but I haven't heard of a hardware implementation that runs
double faster than single.  Of course, it's always possible for some
programs to run faster in double than in single (faster-converging
algorithms, overflow traps in single, etc.).

The difference you may be thinking of is that the above ratios are not as
great as some other machines'.  That may well change as HP releases new
implementations of the Precision Architecture family.

							Carl Burch
							HP Fortran Team

msf@amelia.nas.nasa.gov (Michael S. Fischbein) (02/02/88)

In article <6690011@hpclcdb.HP.COM> cdb@hpclcdb.HP.COM (Carl Burch) writes:
>> msf@amelia.nas.nasa.gov (Michael S. Fischbein) writes :
>> The HP Precision Architecture hardware floating point is faster at
>> double precision than single precision.
>
>I just ran the Linpack and Whetstone benchmarks on our HP9000 Series 840,
>and the ratios of the times were 1.44 and 1.52, respectively - both double 
>longer than single.  (NOTE : Usual disclaimer about not being official

OOPS.  I based my remark on the results we obtained with linpack and
our own f77 benchmark suite running on the Model 850, and generalized to
the rest of the HPPA clan.  Sorry about the generalization, but I'm
still curious about the 850.
		mike

-- 
Michael Fischbein                 msf@ames-nas.arpa
                                  ...!seismo!decuac!csmunix!icase!msf
These are my opinions and not necessarily official views of any
organization.