[comp.compilers] How to read/print fp numbers accurately

david@glance.ch (David Mosberger) (08/22/90)

Recently, two papers appeared in the SIGPLAN '90 PLDI proceedings (Steele
& White on printing, and Clinger on reading, floating-point numbers
accurately) concerning the optimal conversion of floating point numbers
between decimal scientific notation and binary floating point
representation. ``Optimal'' is meant in the sense of ``best approximation
to the true binary/decimal value''. The algorithms presented are quite
elaborate, however: they require multi-precision integers,
extended-precision floating-point operations, or both.
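
To make the problem concrete (a tiny example of my own, not taken from
the papers): most decimal fractions have no finite binary expansion, so a
correct converter must pick the nearest representable value.

	#include <stdio.h>

	int main(void)
	{
	    /* 0.1 has no finite binary expansion; the nearest IEEE
	     * single is 13421773 * 2^-27, which is exactly
	     * 0.100000001490116119384765625. */
	    float f = 0.1f;
	    printf("%.20f\n", (double)f);
	    return 0;
	}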

I would like to know what the best is that one can get using single
precision floating point only. That is, to convert to or from single
precision floating point numbers, the algorithm should use only single
precision floating-point operations (integers of the ``usual'' size may
be used as well, of course). Is there an optimality criterion for such an
algorithm?
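
For reference, here is the sort of naive single-precision-only reader I
have in mind (a sketch only; naive_atof is just an illustrative name,
and it ignores signs and exponent notation). Every multiply and divide
rounds to 24 bits, so errors of a few units in the last place can
accumulate:

	#include <stdio.h>
	#include <ctype.h>

	/* Naive atof: accumulate the digits in a float, then scale by
	 * a power of ten, using single-precision arithmetic only. */
	float naive_atof(const char *s)
	{
	    float val = 0.0f;
	    int exp = 0, seen_dot = 0;

	    for (; *s; s++) {
	        if (*s == '.') { seen_dot = 1; continue; }
	        if (!isdigit((unsigned char)*s)) break;
	        val = val * 10.0f + (float)(*s - '0');  /* rounds each step */
	        if (seen_dot) exp--;
	    }
	    while (exp < 0) { val /= 10.0f; exp++; }    /* rounds again */
	    return val;
	}

	int main(void)
	{
	    printf("%.9g\n", (double)naive_atof("3.14159265"));
	    return 0;
	}

The question is what the tightest worst-case error bound is for any
scheme restricted to such operations.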

David Mosberger						Glance Ltd.
Software Engineer					Gewerbestrasse 4
david@glance.ch						8162 Steinmaur
UUCP: {...}!{uunet,mcsun}!elava!david			Switzerland
X.400: S=david;O=glance;P=switch;A=arCom;C=ch
BITNET: david@glance.ch or david at glance.ch
-- 
Send compilers articles to compilers@esegue.segue.boston.ma.us
{ima | spdcc | world}!esegue.  Meta-mail to compilers-request@esegue.