[comp.sys.ibm.pc.misc] Multiple-Precision Binary to ASCII Decimal Machine Language Routine

gwang@ncsa.uiuc.edu (George Wang) (02/18/91)

Does anyone know of an 8088 Machine Language routine that will
convert a multiple-byte array containing a VERY large (64-bit, 8-byte)
number to ASCII? I know how to write a routine for 16-bit signed
or unsigned binary to ASCII decimal, but I can't figure out
how it would be done for numbers larger than 16 bits (which then
must be stored in a multiple-byte memory array specified by, say,
OPERAND1 DB 8 DUP (0)
which would define an 8-byte (64-bit) memory array).

If anyone knows of a routine or the logic behind it, I would
appreciate hearing from you...
Thanks
George

-- 
  George Wang - Networking Development                     T     T          
  National Center for Supercomputing Applications          |  T  |          
  INTERNET: gwang@ncsa.uiuc.edu                             \_|_/           
  UUCP: gargoyle!igloo!gwang  PH: (217) 244-4020              -             

kirchner@informatik.uni-kl.de (Reinhard Kirchner) (02/19/91)

From article <1991Feb18.155207.24378@ux1.cso.uiuc.edu>, by gwang@ncsa.uiuc.edu (George Wang):
> Does anyone know of an 8088 Machine Language routine that will
> convert a multiple-byte array containing a VERY large (64-bit, 8-byte)
> number to ASCII? I know how to write a routine for 16-bit signed

There is a very simple algorithm to convert binary numbers of arbitrary length
to decimal WITHOUT division. I post it here since it may be of general
interest.

You need two arrays in storage, one for the decimal (BCD) number and one
for the binary.

    |---------------------------|-------------|
          bcd                       binary

Then you simply shift this whole thing to the left, one bit at a time, so the
bits from the binary part cross the border into the BCD part. And now the
conversion trick:

if the rightmost digit in the BCD part, the one the binary bits shift into,
is >= 5, then add 3 to this digit. On the next shift this will carry into
the next higher digit.

So you shift and perhaps add 64 times in the above case.

To convert not to BCD but to ASCII it should be possible to add a different
constant instead of 3, or to make the bits skip the unneeded upper 4 bits of
every byte during the shift with a few additional operations.
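
Here is a rough sketch in C (not 8088 code, and the function name and
one-digit-per-byte layout are just my choices) of this shift-and-add-3
method, sometimes called "double dabble". To be safe it applies the add-3
adjustment to every BCD digit that is >= 5 before each shift, not only to
the rightmost one:

#include <stdio.h>
#include <stdint.h>

/* Sketch: convert a 64-bit unsigned value to a decimal ASCII string
 * with the shift-and-add-3 method -- no division or multiplication.
 * digit[0] is the most significant decimal digit; 2^64-1 needs 20 digits. */
static void u64_to_decimal(uint64_t bin, char out[21])
{
    unsigned char digit[20] = {0};
    unsigned carry, v;
    int b, i, start;

    for (b = 0; b < 64; b++) {                    /* one pass per binary bit */
        /* Adjustment: every digit >= 5 gets 3 added before the shift,
         * so that doubling it produces the proper decimal carry. */
        for (i = 0; i < 20; i++)
            if (digit[i] >= 5)
                digit[i] += 3;

        /* Shift the combined bcd|binary register left by one bit;
         * the bit falling out of the binary part enters the lowest digit. */
        carry = (unsigned)(bin >> 63);
        bin <<= 1;
        for (i = 19; i >= 0; i--) {
            v = ((unsigned)digit[i] << 1) | carry;
            digit[i] = (unsigned char)(v & 0x0F);
            carry = v >> 4;
        }
    }

    /* Turn the digits into ASCII (add '0' = 0x30), skipping leading zeros. */
    for (start = 0; start < 19 && digit[start] == 0; start++)
        ;
    for (i = 0; start < 20; start++, i++)
        out[i] = (char)('0' + digit[start]);
    out[i] = '\0';
}

int main(void)
{
    char buf[21];

    u64_to_decimal(1280, buf);                    /* prints 1280 */
    printf("%s\n", buf);
    u64_to_decimal(0xFFFFFFFFFFFFFFFFULL, buf);   /* prints 18446744073709551615 */
    printf("%s\n", buf);
    return 0;
}

For a signed number you would take the absolute value first and put out the
'-' sign yourself.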

I once did this bin -> BCD conversion on a Z80 and it was very effective.

Reinhard Kirchner
Univ. Kaiserslautern, Germany
kirchner@uklirb.informatik.uni-kl.de

brandis@inf.ethz.ch (Marc Brandis) (02/21/91)

In article <7565@uklirb.informatik.uni-kl.de> kirchner@informatik.uni-kl.de (Reinhard Kirchner) writes:
>
>There is a very simple algorithm to convert binary numbers of arbitrary length
>to decimal WITHOUT division. I post it here since it may be of general
>interest.
>
>You need two arrays in storage, one for the decimal (BCD) number and one
>for the binary.
>
>    |---------------------------|-------------|
>          bcd                       binary
>
>Then you simply shift this whole thing to the left, one bit at a time, so the
>bits from the binary part cross the border into the BCD part. And now the
>conversion trick:
>
>if the rightmost digit in the BCD part, the one the binary bits shift into,
>is >= 5, then add 3 to this digit. On the next shift this will carry into
>the next higher digit.
>
>So you shift and perhaps add 64 times in the above case.
>

This would be a really great algorithm, but I do not see how it can work.
In fact, the first example I tried did not work, so I present it here
as a counterexample.

Let us assume we just want to convert a 16-bit value N to its BCD 
representation. I choose N = 1280.

The binary representation of this is

	0000 0101 0000 0000	(spaces for readability)

and the BCD representation is

	0001 0010 1000 0000 .

After eight steps in the algorithm, we get the following picture.

	BCD part	  	      binary part
	0000 0000 0000 0101 | 0000 0000 0000 0000

Note that up to now, the case where 3 had to be added has never occurred. Now,
the rightmost digit of the BCD part is 5, so we add 3, yielding:

        BCD part                      binary part
        0000 0000 0000 1000 | 0000 0000 0000 0000

Now, we continue shifting. Note that the case where the rightmost digit of
the BCD part is >= 5 will not occur any more; that digit stays zero from here
on. After eight more steps, the algorithm stops with

        BCD part                      binary part
        0000 1000 0000 0000 | 0000 0000 0000 0000

which is the BCD number 800.
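
A short C sketch of exactly this procedure (the variable names are mine; only
the rightmost BCD digit is ever adjusted, as described) reproduces the result:

#include <stdio.h>

int main(void)
{
    unsigned long bcd = 0;        /* BCD result, one digit per 4-bit nibble */
    unsigned int  bin = 1280;     /* 16-bit value to convert */
    int i;

    for (i = 0; i < 16; i++) {
        if ((bcd & 0xF) >= 5)     /* adjust ONLY the rightmost digit */
            bcd += 3;
        /* shift the combined bcd|binary register left by one bit */
        bcd = (bcd << 1) | ((bin >> 15) & 1);
        bin = (bin << 1) & 0xFFFF;
    }

    printf("%lX\n", bcd);         /* prints 800, not the expected 1280 */
    return 0;
}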

I do not see how this algorithm can be made to work, and I really doubt that it
can be done without a division at all. Does anybody know for sure?


Marc-Michael Brandis
Computer Systems Laboratory, ETH-Zentrum (Swiss Federal Institute of Technology)
CH-8092 Zurich, Switzerland
email: brandis@inf.ethz.ch

brnstnd@kramden.acf.nyu.edu (Dan Bernstein) (03/02/91)

In article <25716@neptune.inf.ethz.ch> brandis@inf.ethz.ch (Marc Brandis) writes:
> Let us assume we just want to convert a 16-bit value N to its BCD 
> representation. I choose N = 1280.
  [ ... ]
> I do not see how this algorithm can be made work, and I really doubt that it
> can be done without a division at all. Does anybody know for sure?

Why don't you look up the answer in Knuth rather than asking the net?

---Dan