CPI027@IBM.SOUTHAMPTON.AC.UK (Paul Dadswell) (08/01/89)
Has anybody got some routines for converting numbers, both integer and floating point, to and from character strings? I need them for some REXX function packages, and do not feel like re-inventing the wheel. Thanks, Paul.
SEB1525@ccfvx3.draper.COM ("Steve Bacher ", Batchman) (08/02/89)
There are probably dozens of such routines, but first you have to ask yourself a number of questions:

* When converting floating point to strings, how many decimal places do you want to display? Some fractions are not exactly representable in both decimal and binary floating point. At what point do you want to switch to scientific notation?

* When converting a string to floating point, how do you want to round off values that are not exactly representable in binary floating point? How do you want to deal with overflow or underflow?

* REXX supports arbitrary-precision values for both integer and floating-point numbers. Without knowing what the REXX internal representation for such numbers is, how can you do the conversion? Of course, for your purposes you probably need only establish an internal representation useful to you, so that you can convert to it for your own purposes and then convert back to string format for REXX to use. But how do you select such a representation?

We've had to develop routines to do all of these things for our MVS-based LISP system, including a hot assembler-coded "bignum" package. (Well, all except for arbitrary-precision floats, and Macsyma has Lisp-coded routines to implement those.)

In the general case, what you ask for is non-trivial, though there are numerous "quickie" ways to do integer-to-character-string conversion (key: use EDMK to locate the first significant digit of the result in case you need to stick a minus sign on the beginning).

Hope this doesn't discourage you.

- Steve Bacher - Draper Lab

P.S. If anyone out there wants to let us in on REXX's internal representation for large numbers, I'd be interested in hearing about it.