jlg@lanl.gov (Jim Giles) (01/25/89)
From article <50500101@uxe.cso.uiuc.edu>, by mcdonald@uxe.cso.uiuc.edu:
> >>Bug or feature?  I suppose you would have to say "feature", but really
> >>it is an example of the fact that you should not make assumptions
> >>about the properties of floating point arithmetic.
>
> That latter clause is a true statement, no doubt about it.  But,
> nevertheless, there is an IEEE standard for floating point formats
> and operations.  It is extremely specific about what a given result
> must be, and reasonably close to the "principle of least astonishment".
> [...]
> I think it would behoove Cray (and that other big computer manufacturer
> with the leading up to three zeros in the mantissa) to convert to
> it (at least for in-range results).

The IEEE standard was invented for _small_ computers.  Implementing it
on a vector architecture would cause multiply to run 20-30% longer.
Divide is even worse - a staged divider meeting the IEEE standard would
require twice the time (or more), but that isn't the real problem: such
a divide unit would occupy as much hardware as the entire rest of the
CPU!  So, the question I have as a Cray user is: is fixing this minor
divide problem worth slowing all my other programs by large amounts,
increasing the cost of the machine by 20%, and forgoing other
improvements that Cray _could_ have made instead?

Having said all this, I should point out that I am not violently opposed
to improving the arithmetic done by Cray and other big machines.  The
time wasted (both human and machine) correcting for inaccurate
arithmetic is substantial.  But the issue is not as simple as just
saying 'it behooves them to fix their machines'.  There are trade-offs
to consider.  Cray arithmetic behaves as it does because of _real_
limitations in the way that hardware _can_ be designed.  You may decide
that Cray made the wrong compromise, but commercial success says
otherwise.

Cross-post further discussion to comp.arch.