[comp.lang.c] x/10.0 vs x*0.1

chris@mimsy.UUCP (Chris Torek) (10/13/88)

In article <1700@dataio.Data-IO.COM> Walter Bright suggests that one
>>>Try very hard to replace divides with other operations, as in:
>>>		x / 10
>>>	with:
>>>		x * .1

>In article <10332@s.ms.uky.edu>, aash@ms.uky.edu (Aashi Deacon) notes:
>>According to theory, '.1' cannot be represented exactly as a floating
>>point number because in base2 it is irrational.  Wouldn't then the
>>first be better in this case?

Yes (subject to the usual constraints, i.e., that you know what you are
doing: if your input data has little precision, you can afford minor
degradations in computations).

In article <711@wsccs.UUCP> dharvey@wsccs.UUCP (David Harvey) writes:
>For that matter, it is also very difficult to represent 10.0 (I am
>assuming you are working with floating point) in any floating point
>representation.

Not so.  `.1' is a repeating fraction in binary; but `10.0' in base 2
is merely
`1.01 E 11' (the exponent here is base 2 as well, 11_2 = 3_10):
1*8 + 0*4 + 1*2.  (In conventional f.p., one uses .101 E 100 rather
than 1.01 E 11, but it amounts to the same thing.)  Think about it a
while, and you will see that any integer that needs no more than M bits
to be represented in binary can be represented exactly in binary
floating point whenever that f.p. representation has at least M bits of
mantissa.  (Then, since the first bit after the binary point is always
a 1, you can drop it from the representation, and you need only at
least M-1 bits.  This is called a `normalised' number.)

>Also, if the operation is done in emulation mode (no
>floating point in MPU or if math coprocessor it is not in machine) the
>advantage will be nonexistent.

Again, not so: f.p. multiplication of normalised (ugly word, that)
numbers is actually the simplest f.p. operation, as you need to
re-normalise only once, and that is only 1 bit and in a known direction
(down)% and can be done during the integer multiply phase.  The rest is
just integer multiplication and addition.
-----
% The number of bits in the result is the sum of the number of bits in
  the multiplier and in the multiplicand.  Since the first bit of both
  multiplier and multiplicand is always a 1, the first two bits of this
  result are 11, 10, or 01.  If 01, normalisation consists of shifting
  the result left 1 bit and decrementing the resulting exponent.
-----
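The footnote can be sketched as a toy model in C.  This is my sketch,
using a made-up format (16-bit mantissa held as a fixed-point fraction
in [0.5, 1.0), so the top bit is always set) rather than any real
machine's representation:

```c
#include <stdint.h>

/* Toy normalised f.p. value: mant/2^16 * 2^exp, with mant in
   [0x8000, 0xFFFF] so the fraction is in [0.5, 1.0). */
typedef struct { uint16_t mant; int exp; } toyfp;

toyfp toymul(toyfp a, toyfp b)
{
    uint32_t p = (uint32_t)a.mant * b.mant;  /* product in [2^30, 2^32) */
    toyfp r;
    r.exp = a.exp + b.exp;
    if ((p & 0x80000000u) == 0) {  /* top two bits were 01 */
        p <<= 1;                   /* one shift, known direction */
        r.exp -= 1;                /* ... and decrement the exponent */
    }
    r.mant = (uint16_t)(p >> 16);  /* keep the high 16 bits */
    return r;
}
```

For example, 0.5 * 0.5 hits the renormalise branch (product bits start
01), while 0.75 * 0.75 does not (bits start 10).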

>Even with the coprocessor (math ops) a MUL takes approximately the same
>amount of clock cycles a DIV does.

Only in poorly-implemented coprocessors.  (Your phrase `*the* coprocessor'
makes me wonder of which coprocessor you are thinking.)

>You would be much better served by making variables that are used
>constantly registers (if you have float registers) than some of this stuff.

That depends on your inner loop.

>Also, making Fortran indexing go backwards and C's go forwards ... for
>multiply dimensioned arrays does wonders to reduce the page faulting
>that normally occurs with multitasking/multiuser machines.

True.
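In C the rule is to let the rightmost subscript vary fastest, since C
arrays are row-major (Fortran is column-major, hence the opposite loop
order there).  A minimal sketch of the cache- and page-friendly order
(my illustration):

```c
#include <stddef.h>

#define N 512

/* Sum a C matrix in row-major order: the inner loop walks
   consecutive addresses, so each page is used fully before the
   next one is touched. */
double total(double a[N][N])
{
    double sum = 0.0;
    for (size_t i = 0; i < N; i++)      /* row */
        for (size_t j = 0; j < N; j++)  /* column: adjacent in memory */
            sum += a[i][j];
    return sum;
}
```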
-- 
In-Real-Life: Chris Torek, Univ of MD Comp Sci Dept (+1 301 454 7163)
Domain:	chris@mimsy.umd.edu	Path:	uunet!mimsy!chris

charette@edsews.EDS.COM (Mark A. Charette) (10/14/88)

In article <13969@mimsy.UUCP>, chris@mimsy.UUCP (Chris Torek) writes:
> In article <1700@dataio.Data-IO.COM> Walter Bright suggests that one
> >>>Try very hard to replace divides with other operations, as in:
> >>>		x / 10
> >>>	with:
> >>>		x * .1
> 
> Yes (subject to the usual constraints, i.e., that you know what you are
> doing: if your input data has little precision, you can afford minor
> degradations in computations).

Some numerical analysts I associate with might argue that the propagation
of any error in floating point tends to be magnified as more operations
occur. I have had occasion to see the (small) precision in some data be
completely outweighed by using float instead of double (actually REAL
instead of DOUBLE PRECISION, but this is comp.lang.c ;') and really
whacking at the data (conversions back and forth between double and float).
 
> Yes (subject to the usual constraints, i.e., that you know what you are
> doing: 

Aye, there's the rub.


-- 
Mark Charette                   "On a clean disk you can seek forever" - 
Electronic Data Systems                  Thomas B. Steel Jr.
750 Tower Drive           Voice: (313)265-7006        FAX: (313)265-5770
Troy, MI 48007-7019       charette@edsews.eds.com     uunet!edsews!charette 

steve@oakhill.UUCP (steve) (10/15/88)

In article <2888@edsews.EDS.COM>, charette@edsews.EDS.COM (Mark A. Charette) writes:
> In article <13969@mimsy.UUCP>, chris@mimsy.UUCP (Chris Torek) writes:
> > 
> > Yes (subject to the usual constraints, i.e., that you know what you are
> > doing: if your input data has little precision, you can afford minor
> > degredations in computations).
> 
> Some numerical analysts I associate with might argue that the propagation
> of any error in floating point tends to be magnified as more operations
> occur. I have had occasion to see the (small) precision in some data be
> completely outweighed by using float instead of double (actually REAL
> instead of DOUBLE PRECISION, but this is comp.lang.c ;') and really
> whacking at the data (conversions back and forth between double and float).
>  
I have seen an actual case of this 'minor' error blowing up out of
proportion.  The case was reading the digits after the decimal point
for a Fortran compiler.  The code looked something like this
(pseudo-code):

double end = 0.0;     /* accumulated fraction */
double back = 0.1;    /* place value: 0.1, 0.01, 0.001, ... */
int thischar = /* first character after the decimal point */;
while (isdigit(thischar))
  {
  end = end + (thischar - '0') * back;
  back = back * 0.1;  /* the error in 0.1 compounds every time through */
  thischar = /* next character */;
  }

On numbers with long decimal parts, the error added up quickly enough to
cause errors by the end of the read.  This error started a cascade of
rewrites that totally changed the algorithm used - but that's another story.
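One way to avoid the compounding error (a sketch of the general fix,
not necessarily the rewrite Steven's group settled on) is to gather the
digits as an exact integer and divide once at the end, so only a single
rounding occurs:

```c
#include <ctype.h>

/* Read the fraction digits following a decimal point; s points just
   past the '.'.  The running values stay integer-valued, hence exact
   in a double (up to 2^53); the one division rounds exactly once. */
double read_fraction(const char *s)
{
    double digits = 0.0;   /* the digit string as an integer */
    double scale = 1.0;    /* the matching power of ten */
    while (isdigit((unsigned char)*s)) {
        digits = digits * 10.0 + (*s - '0');
        scale *= 10.0;
        s++;
    }
    return digits / scale;
}
```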

                   enough from this mooncalf - Steven
----------------------------------------------------------------------------
These opinions aren't necessarily Motorola's or Remora's - but I'd like to
think we share some common views.
----------------------------------------------------------------------------
Steven R Weintraub                        cs.utexas.edu!oakhill!devsys!steve
Motorola Inc.  Austin, Texas 
(512) 440-3023 (office) (512) 453-6953 (home)
----------------------------------------------------------------------------