ok@quintus.UUCP (Richard A. O'Keefe) (04/20/88)
There appears to be a contradiction between pages 197 and 194 of "Common Lisp the Language".  Page 197 says very clearly that if the sequence of arguments satisfies a certain condition:

    =    all the same
    /=   all different
    <    monotonically increasing
    ...

then the predicate is true, and otherwise is false.  It later says:

    With two arguments, these functions perform the usual
    arithmetic comparison tests.

I took this to mean that any two non-complex numbers were to be ordered by these predicates as if they were injected into the real number line and compared there according to the rules of real arithmetic.  For example, one could compare a rational number N/D with a floating-point number F by comparing N with D*F, which can be computed exactly.

However, today someone pointed out to me that page 194 says

    When rational and floating-point numbers are _compared_ or
    combined by a numerical function, the rule of floating-point
    contagion is followed: when a rational meets a floating-point
    number, the rational is first converted to a floating-point
    number of the same format.

Now, I haven't worked out the details, but it seems to me that if a rational number is converted to the closest floating-point number of the right format, this can cause (< Rat Flt), (> Rat Flt), or (/= Rat Flt) to fail when it should have succeeded according to "the usual arithmetic comparison tests", and can cause (<= Rat Flt), (>= Rat Flt), or (= Rat Flt) to succeed when it should have failed.  For example,

    (= 1000000000000000000000000000000/2000000000000000000000000000001 0.5)

might succeed.  If the rational number is converted to floating-point by converting N and D separately and doing a floating-point division, any result is possible.

So, (1) what _do_ Common Lisp systems do?  (2) what _should_ they do?

There is a similar question with respect to comparison of floating-point numbers of different precision.
Page 193 says that

    When an operation combines a short floating-point number with
    a long one, the result will be a long floating-point number.

but I can't find anything to say what conversion is done when a long and a short floating-point number are compared.
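[The two readings above can be put side by side in Python, used here purely for illustration: Python's fractions module happens to implement the exact "real number line" comparison between a rational and a float, while an explicit float() conversion models the page-194 contagion rule.  Nothing here is a claim about what any actual Common Lisp system does.]

```python
from fractions import Fraction

# The rational from the example above: a hair below 1/2.
N, D = 10**30, 2 * 10**30 + 1
rat = Fraction(N, D)

# Contagion reading (page 194): convert the rational to the nearest
# float first.  The difference from 1/2 is far below double precision,
# so the conversion lands exactly on 0.5 and the comparison says "equal".
print(float(rat) == 0.5)        # True

# Exact reading (page 197): in effect, compare N with D*F, which is
# computable exactly; Fraction/float comparison in Python does this.
print(rat == 0.5)               # False
print(rat < 0.5)                # True
```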
barmar@think.COM (Barry Margolin) (04/21/88)
The reason for floating-point contagion is that floating-point numbers are not considered exact, while rationals are.  It is therefore not proper to automatically convert floats to rationals, because you would be fooling yourself if you believed that the float actually represented that exact rational.  A computation is only as precise as its least exact constituent.  This is why the rule is called "contagion": floating-point numbers infect rationals with their inexactness, and it spreads like a disease through the entire calculation.

Let's consider your example,

    (> (/ big-integer (1+ (* big-integer 2))) 0.5)

Where did that 0.5 come from in the first place?  It would be silly for it to have been a literal in the program, because the programmer could have said what he meant by using 1/2 instead, without injecting imprecision into the program.  Floating-point literals should only be used when a computation is being done all in floating point, or when the value is irrational (e.g. pi, e).  Therefore, it must have come from some floating-point computation.  For instance, it could have been the result of

    (/ big-float (1+ (* big-float 2)))

Consider the case where big-float is (float big-integer).  By the "usual arithmetic" rules one would expect the above comparison to be false, because the two quantities should be =.  In order for this to happen, the rational must be converted to a float.

The sentence that said "the usual arithmetic tests are performed" does not appear to be defining the behavior, but simply describing the way the comparison functions are used.  In general, you cannot expect floating-point numbers to always obey the usual arithmetic rules.  For example, (> (/ 9.001 10) .9) is not guaranteed to return T, so why would you expect any more of (> 9001/10000 .9)?

Barry Margolin
Thinking Machines Corp.

barmar@think.com
uunet!think!barmar
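[Barmar's scenario can be replayed in Python, again purely as an illustration: float stands in for a Lisp float and Fraction for a Lisp ratio; this is not a transcript of any Common Lisp implementation.]

```python
from fractions import Fraction

big_integer = 10**30
rat = Fraction(big_integer, 1 + big_integer * 2)

# The float analogue of the same computation,
# i.e. (/ big-float (1+ (* big-float 2))) with big-float = (float big-integer).
big_float = float(big_integer)
flt = big_float / (1.0 + big_float * 2.0)   # the +1 is lost; rounds to exactly 0.5

# Contagion-style comparison: convert the rational first.  The two
# results come out equal, as Barmar argues they should.
print(float(rat) == flt)    # True

# The exact comparison would call them unequal, even though both came
# from "the same" computation.
print(rat == flt)           # False
```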
ok@quintus.UUCP (Richard A. O'Keefe) (04/22/88)
In article <20062@think.UUCP>, barmar@think.COM (Barry Margolin) writes:
> The reason for floating-point contagion is that floating point numbers
> are not considered exact, while rationals are.  It is therefore not
> proper to automatically convert floats to rationals, because you would
> be fooling yourself if you believe that the float actually represented
> that exact rational.

This is fair enough when one is talking about a computation delivering numeric results, such as (+ float rational).  But a comparison doesn't do that.  As I attempted to sketch in my first message on this topic, it is possible to do an exact rational/float comparison without fully converting the float to a rational.

If we argue that a float X represents the interval {X-somewhat, X+somewhat}--I want to be vague about the boundaries--then converting a rational to the nearest float and doing a float comparison gives you a spurious "=" only if the rational is in that interval, so this result makes some sense.

But my interest is not so much in rational/float comparison as such, but in having comparison functions which satisfy the axioms of a total order.  It bothers me that there can be numbers X, Y, Z such that (= X Y) and (= Y Z) but (< X Z), so that a sorting routine could be "provably correct" and nevertheless deliver wrong answers.
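[The failure of transitivity can be made concrete.  In the Python sketch below, contagion_eq is a hypothetical model of = under the page-194 contagion rule: any rational argument is converted to the nearest float before comparing.  Two distinct rationals straddling 1/2 both round to the same float, so each is "=" to 0.5, yet they are not equal to each other.]

```python
from fractions import Fraction

def contagion_eq(a, b):
    """Model of = under floating-point contagion: any rational
    argument is first converted to the nearest float."""
    return float(a) == float(b)

D = 2 * 10**30 + 1
X = Fraction(10**30, D)      # a hair below 1/2
Y = 0.5
Z = Fraction(10**30 + 1, D)  # a hair above 1/2

# Both rationals round to exactly 0.5, so contagion says X = Y and Y = Z...
print(contagion_eq(X, Y), contagion_eq(Y, Z))   # True True

# ...yet the exact rational comparison says X < Z: "=" is not transitive,
# and a comparison-based sort can be handed an inconsistent order.
print(X < Z)                                    # True
```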