[comp.sys.mac.programmer] Integers are wrong, too

ts@cup.portal.com (Tim W Smith) (09/26/90)

Always watch out when using floating point.  But also watch
out with integers.  Some properties of integer arithmetic
that one might assume hold on computers actually don't on
nearly every popular microprocessor.

Consider integer division, for example.  A reasonable definition
of A/B (take B > 0 throughout) is that it is the unique integer Q
for which there exists an integer r, 0 <= r < B, with

	A = Q B + r

For example, 15/4 = 3, because 3 * 4 + 3 = 15.

A reasonable definition of A % B is that it is the value of r
from the definition of A/B above.

For example, 15 % 4 = 3.

OK, how about (-15)/4?  By the definition above, this should be -4,
and (-15)%4 should be 1, because (-4) * 4 + 1 = -15.

Basically, this definition says that you round towards -infinity when
something does not come out exact.

These definitions have the nice property that

	A = (A/B) * B + A%B

However, rounding toward -infinity does cause you to give up the
property that

	-(A/B) = (-A)/B

Try this on most processors, and you will find that they think
that (-15)/4 should be -3, rather than -4.  This is because they
round toward 0.
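
Here is a quick way to check what your compiler and CPU actually
do (in C, the rounding direction for negative operands is left up
to the implementation, but most compilers just hand the work to
the hardware divide instruction):

	#include <stdio.h>

	/* Prints what this compiler/machine thinks (-15)/4 and
	 * (-15)%4 are.  A machine that rounds toward zero prints
	 * -3 and -3.
	 */
	int main(void)
	{
		printf("(-15)/4 = %d\n", -15 / 4);
		printf("(-15)%%4 = %d\n", -15 % 4);
		return 0;
	}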

Rounding toward 0 makes

	-(A/B) = (-A)/B

but it gives up

	A = (A/B) * B + A%B

(assuming that we don't allow A%B to be negative)
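
If you want the round-toward-minus-infinity behavior on such a
machine, you can rebuild it on top of the truncating / and %.
Something like this (a sketch only; floor_div and floor_mod are
names made up here):

	/* Round-toward-minus-infinity division built on a / and %
	 * that round toward zero.  For b > 0 this gives
	 * a == floor_div(a, b) * b + floor_mod(a, b), with
	 * 0 <= floor_mod(a, b) < b.
	 */
	long floor_div(long a, long b)
	{
		long q = a / b;

		if (a % b != 0 && (a < 0) != (b < 0))
			q--;	/* truncation rounded up; push the quotient down */
		return q;
	}

	long floor_mod(long a, long b)
	{
		return a - floor_div(a, b) * b;
	}

For example, floor_div(-15, 4) is -4 and floor_mod(-15, 4) is 1.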

Given that we have to round, I don't think that -(A/B) = (-A)/B is
an important property to preserve (i.e., who cares that two wrong
answers are related in such a simple way?).

The relationship between / and % seems to me to be more fundamental,
and thus is the one that should be preserved.

In practice, I've seen many more situations where I wanted to count
on A = (A/B) * B + A%B than where I needed -(A/B) = (-A)/B.
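
One typical case where the floor behavior is what you want is
mapping a signed coordinate into fixed-size tiles.  With
round-toward-zero division, x = -15 through x = 15 all land in
tile 0, so tile 0 ends up almost twice as wide as every other
tile.  Using the floor_div sketched above:

	/* Illustrative only: which 16-pixel-wide tile does a signed
	 * x coordinate fall in?  With floor_div, x = -1 lands in
	 * tile -1, and every tile is exactly 16 pixels wide.
	 */
	long tile_of(long x)
	{
		return floor_div(x, 16);
	}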

In summary, not only can you not trust floating point, you can't even
trust integers!

						Tim Smith