das@Apple.COM (David Shayer) (09/23/90)
I was running this simple program.
main ()
{
    float x;

    for (x=0.0; x!=10.0; x+=0.2)
        printf ("x=%f \n", x);
}
As you can see, it ought to stop when x==10.0. However,
it actually runs in an infinite loop. This is because
x never equals exactly 10.0. The +0.2 always makes x
equal to 9.999999 or 10.000001 or something. Changing
x from a float to a double makes x stay closer to the
correct value, but it still isn't exactly correct, and
as the loop runs longer, x gets off more and more.
So my question is, why can't the Mac do simple math?
What's going on here? Is SANE insane?
I have a Mac IIci, so I have a 68882 FPU. This program
fails in both Think and MPW C. I didn't compile with any
special FPU options on. A friend ran it on a
PC clone, and it failed in the same way there.
Please enlighten me.
David
ar4@sage.cc.purdue.edu (Piper Keairnes) (09/23/90)
In <45060@apple.Apple.COM> das@Apple.COM (David Shayer) writes:
>I was running this simple program.

>main ()
>{
>  float x;
>
>  for (x=0.0;x!=10.0;x+=0.2)
>    printf ("x=%f \n",x);
>}

That isn't that simple of a program when dealing with floating point
numbers. There is no such thing as an EXACT floating point number.
Floating point numbers are close approximations to real numbers. In
Mathematics, real numbers are infinitely accurate, eh? Well, machines
are finite. Each 32 or 64 bits of a computer can store a number only
so accurately.

You start off at a number near 0 and add a number near 0.2 and expect
the computer to end when the number is near 10. If that is your wish,
then just set your conditional to 'x <= 10.0'.

It is not that the Mac is poor at Math. In fact, in a recent homework
for one of my Numerical Analysis classes, we had to approximate a
function with a polynomial. My Mac SE, without math co-processor, got
the exact same answers as did a 10 by '386 processor Sequent Symmetry
with math co-processors. So don't blame Apple... blame either your
code, or the people who designed finite machines ;-)

_____
Piper Keairnes - Computer Science  ** Purdue University Computing Center **
INTERNET: ar4@sage.cc.purdue.edu   ** Unisys Corporation Co-op Student   **
BITNET: xar4@purccvm.bitnet        ** Macintosh Programmer/Specialist    **
minich@d.cs.okstate.edu (Robert Minich) (09/23/90)
by das@Apple.COM (David Shayer):
| I was running this simple program.
|
| main ()
| {
|   float x;
|
|   for (x=0.0;x!=10.0;x+=0.2)
|     printf ("x=%f \n",x);
| }
|
| As you can see, it ought to stop when x==10.0. However,
| it actually runs in an infinite loop. [...]

Unless your compiler uses arbitrary precision math (not very darn
likely), you'll probably never be happy. Binary floating point is an
exercise in being close. The "correct" way to do what you want is
either to use integer math and fake the decimal places (ie use cents
for money calculations) or to use an epsilon instead of equality.

#define epsilon 0.00001

for (x=0.0; x-10<epsilon; x+=0.2)

Just be careful not to ask for more accuracy than your floating point
types can yield. You can probably find all sorts of points of view in
comp.arch where discussion has considered higher precision floating
point.
-- 
 |_ /|   | Robert Minich             |
 |\'o.O' | Oklahoma State University | A fanatic is one who sticks to
 |=(___)=| minich@d.cs.okstate.edu   | his guns -- whether they are
 |   U   | - Ackphtth                | loaded or not.
rcfische@polyslo.CalPoly.EDU (Ray Fischer) (09/23/90)
ar4@sage.cc.purdue.edu (Piper Keairnes) writes ...
>In <45060@apple.Apple.COM> das@Apple.COM (David Shayer) writes:
>
>>  for (x=0.0;x!=10.0;x+=0.2)
>>    printf ("x=%f \n",x);
>
>That isn't that simple of a program when dealing with floating point
>numbers. There is no such thing as an EXACT floating point number.
>Floating point numbers are close approximations to real numbers.

Actually, the answers I've read so far do not give the correct answer
to this question. And so, why doesn't the loop end? Well ...

In fact, precision has nothing to do with the problem. The reason
that the loop won't terminate is that 0.2 cannot be represented
exactly in base 2, just as 1/3 cannot be represented exactly in base
10. You end up with a repeating decimal (or binary) fraction. In
decimal, 1/3 turns into a repeating 0.33333, and in binary, 0.2 turns
into a repeating 0.00110011001100. Just as adding 0.33333 thirty
times will never equal 10, adding 0.2 fifty times will never equal 10
in binary, no matter what precision you use.

Although many floating point numbers ARE exact (0.125 for example),
some cannot be, which is why testing for equality using floating
point numbers is always dicey at best.

The correct loop would start ...

  for (x = 0.0; x < 10.0; x += 0.2)

Ray Fischer
rcfische@polyslo.calpoly.edu
norman@d.cs.okstate.edu (Norman Graham) (09/23/90)
From article <45060@apple.Apple.COM>, by das@Apple.COM (David Shayer):
> I was running this simple program.
>
> main ()
> {
>   float x;
>
>   for (x=0.0;x!=10.0;x+=0.2)
>     printf ("x=%f \n",x);
> }
>
> As you can see, it ought to stop when x==10.0. However,
> it actually runs in an infinite loop. This is because
> x never equals exactly 10.0. The +0.2 always makes x
> equal to 9.999999 or 10.000001 or something. [...]

Robert Minich sketched a solution to your problem. Now I'll describe
the cause of your problem. [I'm assuming that you've asked a serious
question since there weren't any smilies in your post.]

Most modern computers use a base 2 number system (binary) for
arithmetic rather than the base 10 system that people normally use.
[Although I once heard about a Russian computer that used a base 3
number system. The only problem with it was it had to be built with
flip-flap-flops rather than flip-flops.]

Now, any decimal integer can be represented exactly by a finite
binary integer--but this is not the case for decimal fractions. A
rational decimal fraction, r, can be exactly expressed by a finite
binary number only if r = p/q, where p and q are integers and q is an
integer power of 2 (i.e. q = 2^n for some integer n). Your constant
0.2 clearly fails this test; thus 0.2 can be represented exactly only
by an infinite string of binary digits. Since no physical computer
can store and manipulate infinite strings of binary digits, you must
live with a finite approximation of 0.2. This is the cause of your
rounding error.

I hope that's clear.
--Norm
-- 
Norman Graham <norman@a.cs.okstate.edu>  {cbosgd,rutgers}!okstate!norman
The opinions expressed herein do not necessarily reflect the views of
the state of Oklahoma, Oklahoma State University, OSU's Department of
Computer Science, or of the writer himself.
wilkins@jarthur.Claremont.EDU (Mark Wilkins) (09/24/90)
  For any message relating to the countability of infinitely-extended
fractional rational numbers, the number of copies of the message
needed to get the point across has the cardinality of Aleph-null.

  See the last message on the subject for a practical example.

:-)

-- M. W.
-- 
******* "Freedom is a road seldom traveled by the multitude!" **********
*-----------------------------------------------------------------------*
* Mark R. Wilkins  wilkins@jarthur.claremont.edu  {uunet}!jarthur!wilkins *
****** MARK.WILKINS on AppleLink ****** MWilkins on America Online ******
das@Apple.COM (David Shayer) (09/24/90)
I guess I should have been clearer in my last posting, and I wouldn't
have gotten flamed in my email. I was asking about this code:

>> main ()
>> {
>>   float x;
>>
>>   for (x=0.0;x!=10.0;x+=0.2)
>>     printf ("x=%f \n",x);
>> }

I am not using this code in any program. I would not use a != test in
a real program; I would use a <. I understand that 0.2 does not
convert perfectly to binary, and thus precision is lost.

I had thought that the reason people wrote complex floating point
software like SANE was to fix situations like this, by having bits
indicating that a numerical pattern repeated indefinitely.

Suppose you calculated two real numbers, and you wanted to see if the
results were the same. It seems that you could never use a simple
equality (==) test, as that would often fail even when the numbers
were the same when calculated with infinite precision. I take it from
people's responses that this problem has not been solved, even with
SANE. Is there a better way than simply seeing if the numbers are
sufficiently close?

David
c60b-4ah@e260-2b.berkeley.edu (Phantom) (09/24/90)
(As a result of some distraction, I have accidentally mismanipulated
the cardinality of my last posting. I hope that that has been
corrected and the old article has been engulfed by oblivion. The
following is the normalized version. I sincerely apologize for the
anarchy it has caused in the set of all sets.)

With all respect, I was somewhat surprised to learn that some person
from Apple should not know this. I shall refer anyone who is
intrigued by this to the SANE manual published by Addison-Wesley
(someone in Apple wrote it!), but I will discuss it briefly here.

No matter what base is chosen to represent rational numbers, there
are countably infinite fractional rational numbers that cannot be
represented in a finite number of digits. Suppose the base used is N.
Suppose further that the fractional part of the number you want to
represent (call it X) can be written in the form Q/R, where both Q
and R are integers whose greatest common divisor is 1. Let s1, s2,
... sl be all the prime factors of R, and t1, t2, ... tm be all the
prime factors of N. Then the necessary and sufficient condition for X
to be representable in a finite number of digits in base N is that
the set { s1, s2, ... sl } is a subset of the set { t1, t2, ... tm }.

In your example, the number 0.2 = 1/5 cannot be exactly represented
in a finite sequence of 0's and 1's because 5 is not a prime factor
of 2. In fact, 0.2 in binary notation is

0.0011 0011 0011 0011 0011 0011 0011 0011 0011 0011 0011 0011 ...
cak3g@astsun9.astro.Virginia.EDU (Colin Klipsch) (09/24/90)
In article <45060@apple.Apple.COM> das@Apple.COM (David Shayer) writes:
>I was running this simple program.
>main ()
>{
>  float x;
>  for (x=0.0;x!=10.0;x+=0.2)
>    printf ("x=%f \n",x);
>}
>As you can see, it ought to stop when x==10.0. However,
>it actually runs in an infinite loop. This is because
>x never equals exactly 10.0.
>I have a Mac IIci, so I have a 68882 FPU. This program
>fails in both Think and MPW C. I didn't compile with any
>special FPU options on. A friend ran it on a
>PC clone, and it failed in the same way there.
>Please enlighten me.

Never, ever, EVER use a floating point variable as a loop index! Not
in any language, on any computer, under any circumstances. Do not use
them in a box, do not use them with a fox. (This, of course, is my
vehement opinion.)

Always use integers, for the very reason you've discovered: floating
point numbers suffer from round-off error for most fractions, and
that certainly includes decimal fractions like 0.2. Represented in
binary, 0.2 is infinitely repeating. The computer can't carry around
an infinite sequence of bits, so its loop step is not REALLY 0.2, but
the closest binary approximation. (The "double" approximation will be
closer than the "float", but will still be unequal.) This is not a
defect of your Mac, or the PC. It's a result of the fact that
computers work in binary, not decimal.

You can usually get away with using floating point numbers in loops
IF the variable AND the step size are exactly integers, but it's
still a questionable policy.

To work around this problem, you could use:

  for (x = 0.0; x <= 10.0; x += 0.2) {}

But the best solution:

  int i;

  for (i = 0; i <= 50; i++) {
    x = i/5.0;
    ...
  }

This more faithfully generates the results you (presumably) wanted:
to have x go from 0 to 10 in 50 evenly distributed intervals. More
importantly, your loop is guaranteed to terminate, and it will
terminate after the correct number of steps.

Hope this helps.
--------------------------------------------------------------------------
  "May the forces of evil become confused on the way to your house."
                                                      -- George Carlin
Bemusedly,                   | Disclaimers:
Colin Klipsch                | Not guaranteed to fulfill any purpose,
Property of UVa Ast. Dept.   | express or implied. Contents may have
Charlottesville, Virginia    | settled during shipping. Not rated. May
cak3g@virginia.edu           | cause drowsiness. Use before 29-Feb.
____________________________/ \___________________________________________
peter@hari.Viewlogic.COM (Peter Colby) (09/24/90)
>main ()
>{
>  float x;
>
>  for (x=0.0;x!=10.0;x+=0.2)
>    printf ("x=%f \n",x);
>}
>
>As you can see, it ought to stop when x==10.0. However,
>it actually runs in an infinite loop. [...]
>
>So my question is, why can't the Mac do simple math?

One of the cardinal rules in floating point math is NEVER USE EXACT
EQUALITY TESTS!!!

Without going into a long involved explanation of how floating point
works, suffice it to say that floating point is inexact. There are
several issues, one of which is that the mantissa (the actual number
itself without reference to the position of the decimal point) is
expressed internally as reciprocal powers of 2 (ie: 1/2 + 1/4 + 1/8 +
...). If your fraction can't be exactly expressed as some combination
of these fractions WITHIN THE PRECISION OF THE SYSTEM (that is, the
number of bits used to hold the mantissa), then you get an
approximation of the number you expect. In your case, 1/5 (0.2) is
the culprit.

In your particular case, your best bet is to change the for statement
to be:

  for (x=0.0; x<10.0; x+=0.2)

Peter C.
-- 
(O)(O)(O)(O)(O)(O)(O)(O)(O)   (O)(O)(O)(O)(O)(O)(O)(O)(O)
(O) !the doctor is out! (O)   (0)  peter@viewlogic.com (0)
(O)(O)(O)(O)(O)(O)(O)(O)(O)   (O)(O)(O)(O)(O)(O)(O)(O)(O)
lins@Apple.COM (Chuck Lins) (09/25/90)
In article <45060@apple.Apple.COM> das@Apple.COM (David Shayer) writes:
>I was running this simple program.
>
>main ()
>{
>  float x;
>
>  for (x=0.0;x!=10.0;x+=0.2)
>    printf ("x=%f \n",x);
>}
>
>So my question is, why can't the Mac do simple math?
>What's going on here? Is SANE insane?
>
>Please enlighten me.

Ok. I'll try :-)

Real arithmetic on a computer tends to be inexact. Testing for
*exactly* some specific real number is a bad thing. It won't matter
what hardware you use. The folks who write numerical software know
this well. The proper test is to test for almost-equality with a
very small epsilon.

>David

-- 
Chuck Lins              | "Is this the kind of work you'd like to do?"
Apple Computer, Inc.    |                               -- Front 242
20525 Mariani Avenue    | Internet: lins@apple.com
Mail Stop 37-BD         | AppleLink: LINS@applelink.apple.com
Cupertino, CA 95014     | "Self-proclaimed Object Oberon Evangelist"
The intersection of Apple's ideas and my ideas yields the empty set.
dhoyt@vw.acs.umn.edu (09/25/90)
>As you can see, it ought to stop when x==10.0. However,
>it actually runs in an infinite loop. [...]
>So my question is, why can't the Mac do simple math?

0.2 is a repeating fraction in base 2. Just like 1/3 can't be
expressed in base ten notation, 1/5, 1/10 and other numbers can't be
represented in base 2. Some machines have some very special checks to
handle 1/10th nicely, but you should never count on it. Real
programmers use >=, even for integer loop counters. Paranoia, it's
not just a state of mind, it's a job.

david paul hoyt | dhoyt@vx.acs.umn.edu | dhoyt@umnacvx.bitnet
russotto@eng.umd.edu (Matthew T. Russotto) (09/25/90)
In article <45060@apple.Apple.COM> das@Apple.COM (David Shayer) writes:
>I was running this simple program.
>
>main ()
>{
>  float x;
>
>  for (x=0.0;x!=10.0;x+=0.2)
>    printf ("x=%f \n",x);
>}
>
>So my question is, why can't the Mac do simple math?
>What's going on here? Is SANE insane?

0.2 is a repeating fraction in binary. There is no way to express it
exactly except with BCD (which SANE doesn't support). Thus, no matter
what the precision, you can't win.
-- 
Matthew T. Russotto   russotto@eng.umd.edu   russotto@wam.umd.edu
.sig under construction, like the rest of this campus.
leban@par3.cs.umass.edu (Bruce Leban) (09/25/90)
Several people have suggested replacing:

  for (x=0.0;x!=10.0;x+=0.2)

with variations of:

  for (x = 0.0; x < 10.0+epsilon; x += 0.2)

but that is not the best suggestion, since the floating point error
compounds on each addition. Presumably, what you want is the nearest
approximation to the sequence 0.0, 0.2, 0.4, etc. In that case, do:

  int y;
  float x;

  for (y = 0; x = y/10.0, y <= 100; y += 2)

An excellent reference is /The Elements of Programming Style/, which
points out that "0.1 * 10.0 is hardly ever 1.0".

--- Bruce   Leban@cs.umass.edu @amherst.mass.usa.earth
das@Apple.COM (David Shayer) (09/25/90)
Let's try this one more time, then I promise to go away and stop
bothering all you nice people.

I know what happens when you convert 0.2 to binary. I know literally
how precision is lost in the base conversion. I know what a mantissa
and an exponent are. That's not my question.

I know that the sample program I posted is bad programming style, but
it illustrates the question well. So stop flaming me about it. (I
don't usually write code like that, I promise.)

huh-HUM (throat clearing noises.)

The calculator DA can do this math correctly. If you add 0.2 fifty
times, you get 10.0. Exactly. Not 9.999999 or 10.0000001. I tried
changing my float variable to an extended, as someone suggested. No
dice. Does the calculator DA have its own special math package? (If
so, the dCad calculator does too.) No one seems to think SANE has
calls which take care of this. So why does it work in the calculator?

David
peter@hari.Viewlogic.COM (Peter Colby) (09/25/90)
In article <45108@apple.Apple.COM>, das@Apple.COM (David Shayer) writes:
|> The calculator DA can do this math correctly. If you add 0.2 fifty
|> times, you get 10.0. Exactly. Not 9.999999 or 10.0000001. I tried
|> changing my float variable to an extended, as someone suggested. No dice.
|> Does the calculator DA have its own special math package? (If so, the
|> dCad calculator does too.) No one seems to think SANE has calls which
|> take care of this. So why does it work in the calculator?

I would have to assume that the calculator DA actually uses fixed
point rather than floating point arithmetic. Fixed point is exact
because you can represent any number as an integer! Of course, you
have to limit the size of both the exponent and the mantissa or you
end up back in the infinite precision trap again.
-- 
(O)(O)(O)(O)(O)(O)(O)(O)(O)   (O)(O)(O)(O)(O)(O)(O)(O)(O)
(O) !the doctor is out! (O)   (0)  peter@viewlogic.com (0)
(O)(O)(O)(O)(O)(O)(O)(O)(O)   (O)(O)(O)(O)(O)(O)(O)(O)(O)
djvelleman@amherst.bitnet (09/25/90)
In article <45108@apple.Apple.COM>, das@Apple.COM (David Shayer) writes:
> The calculator DA can do this math correctly. If you add 0.2 fifty
> times, you get 10.0. Exactly. Not 9.999999 or 10.0000001

The calculator may SHOW the answer as 10 exactly, but that doesn't
mean that's exactly what it got. My guess is that it's rounding off
its answer when it displays it.

Here's an interesting calculator experiment which supports this: Ask
the calculator DA for 9.8000000001 + 0.2. (That's 8 zeros between the
8 and the 1.) It shows the answer as 10 exactly, although of course
the correct answer is 10.0000000001. Now enter - 0.2 and you get back
9.8000000001. So the calculator still knows about that 1 way out
there in the 10th place after the decimal point; it just wasn't
displaying it.

If you enter 10 - 0.2 into the calculator it gives you back 9.8, so
there seems to be a difference between the 10 that it got as an
answer to the first calculation and a 10 that you enter directly. Now
try asking for 10.0000000001 - 0.2. When you hit the "-", the
10.0000000001 gets rounded off to 10, but then it gives the answer as
9.8000000001, so again the 1 wasn't forgotten when the display was
rounded off.

Here's another experiment that shows how many digits the calculator
is keeping track of, but not displaying. Ask for 1.000000001 *
1.000000001 (8 zeros). You get the answer 1.000000002, although the
exact answer is 1.000000002000000001. The calculator DA actually
still knows about that 1 way out there in the 18th place after the
decimal, as you can confirm now by entering - 1.000000002 =. You get,
not 0, but 1.084202E-18, very close to the right answer. If you just
ask for 1.000000002 - 1.000000002, you get 0.

This suggests another experiment that I'm too lazy to try. Add 0.2 to
itself 50 times, getting an answer which is DISPLAYED as 10. Now
subtract 10 and see if you get exactly 0.

Dan Velleman
Math Dept.
Amherst College
bayes@hpislx.HP.COM (Scott Bayes) (09/25/90)
Another way of telling if a fraction is representable in a binary
computer (actually just another way of stating what Norman said):

First reduce the fraction (pull out common factors). If the
denominator (that's the one underneath the divide line) has a factor
other than 2^n (n is any integer) in it, it can't be exactly
represented. E.g. your .2 = 1/5 = 5^(-1), and 5 is not 2^n, so it
can't be represented.

Scott Bayes
Hewlett-Packard Company
dhoyt@vw.acs.umn.edu (09/25/90)
In article <1990Sep25.091831@hari.Viewlogic.COM>, peter@hari.Viewlogic.COM (Peter Colby) writes...
>Fixed point is exact because you can represent any number as an integer!

e^(pi*i) + 1 = 0. You could say that most numbers are not
representable as integers.

david
lindahl@violet.berkeley.edu (Ken Lindahl 642-0866) (09/26/90)
In article <45108@apple.Apple.COM> das@Apple.COM (David Shayer) writes:
>huh-HUM (throat clearing noises.)
>The calculator DA can do this math correctly. If you add 0.2 fifty
>times, you get 10.0. Exactly. Not 9.999999 or 10.0000001. [...]
>So why does it work in the calculator?
>
>David

huh-HUM (throat clearing noises.)

I don't think this description of the calculator DA is correct. If I
add 0.2 five times, I get "1" not "1.0". I'm not going to do it 50
times, but I'll bet if you do you'll get "10" not "10.0"! This should
be your first hint: the calculator DA is rounding the result that it
displays. Even if the result was 9.9999999999 or 10.000000000001,
you'd never see it because the DA would round to 10.

Try this in the calculator DA: 1.00000000001 * 1. The answer is "1".
Actually, if you watch closely, you'll see that the rounding occurs
when you hit the "*" key! Note: the number of zeroes to the right of
the decimal point must be 10 for you to see the behavior I'm talking
about. This is determined by the width of the calculator display.

Ken Lindahl   lindahl@violet.berkeley.edu
Advanced Technology Planning, Information Systems and Technology
University of California at Berkeley
phils@chaos.cs.brandeis.edu (Phil Shapiro) (09/26/90)
In article <1990Sep25.174006.11979@agate.berkeley.edu>
lindahl@violet.berkeley.edu (Ken Lindahl 642-0866) writes:

   In article <45108@apple.Apple.COM> das@Apple.COM (David Shayer) writes:
   >The calculator DA can do this math correctly. If you add 0.2 fifty
   >times, you get 10.0. Exactly. Not 9.999999 or 10.0000001. [...]

   [ description of how Calculator truncates floats ]

I don't know if this has any bearing, but I just tried out adding .2
to itself 10 times, got '2', then subtracted '2'. The result?
2.168404E-19. Go figure :-)

	-phil
-- 
   Phil Shapiro
   phils@chaos.cs.brandeis.edu
c60b-4ah@e260-1b.berkeley.edu (Phantom) (09/27/90)
(Disclaimer: The following is my own guess.)

The programming style of the Calculator DA is rather modest where
floating point is concerned. It does not display the result in the
maximum possible precision after converting it to decimal notation;
rather, it rounds the result off to a certain decimal place and
removes all trailing zeros.
francis@arthur.uchicago.edu (Francis Stracke) (09/27/90)
In article <4485@sage.cc.purdue.edu> ar4@sage.cc.purdue.edu (Piper Keairnes) writes:
>In <45060@apple.Apple.COM> das@Apple.COM (David Shayer) writes:
>
>>  for (x=0.0;x!=10.0;x+=0.2)
>>    printf ("x=%f \n",x);
>
>That isn't that simple of a program when dealing with floating point
>numbers. There is no such thing as an EXACT floating point number.
>Floating point numbers are close approximations to real numbers.

The problem is that, in binary, 0.2 is 0.001100110011... (repeating
0011 forever, in case you can't guess--in base 10 we'd call it a
repeating decimal.)

Think of what happens if you try to add 1/3 in base 10 with small
precision: 0.3333, 0.6666, 0.9999, 1.3332, 1.6665, 1.9998. And so on.

If you ever want to do fractions on a computer, do them as fractions,
not as reals. (SANE has the Fixed type, doesn't it?)
francis@arthur.uchicago.edu (Francis Stracke) (09/27/90)
In article <2257@ux.acs.umn.edu> dhoyt@vw.acs.umn.edu writes:
>you should never count on it. Real programmers use >=, even for
>integer loop counters. Paranoia, it's not just a state of mind,
>it's a job.

There are two types of paranoia: total and insufficient.
francis@arthur.uchicago.edu (Francis Stracke) (09/27/90)
In article <45108@apple.Apple.COM> das@Apple.COM (David Shayer) writes:
>huh-HUM (throat clearing noises.)
>The calculator DA can do this math correctly. If you add 0.2 fifty
>times, you get 10.0. Exactly. Not 9.999999 or 10.0000001. [...]
>Does the calculator DA have its own special math package? (If so, the
>dCad calculator does too.) No one seems to think SANE has calls which
>take care of this. So why does it work in the calculator?

What are you asking us for? Find the source code & take a look!
What's the point of working for Apple if they won't let you at stuff
like that?

(BTW: it's not just a simple BCD package. Try this: 2/3-.66666666667
(I think that's right: however many 6s it shows you from 2/3). You
get -3.33333E-12. So it's a lot more accurate than it looks. Either
they used SANE in some weird way, or they put a LOT of digits into
their BCD package, many more than needed.)
bayes@hpislx.HP.COM (Scott Bayes) (09/27/90)
> huh-HUM (throat clearing noises.)
> The calculator DA can do this math correctly. If you add 0.2 fifty
> times, you get 10.0. Exactly. Not 9.999999 or 10.0000001. I tried
> changing my float variable to an extended, as someone suggested. No dice.
> Does the calculator DA have its own special math package? (If so, the
> dCad calculator does too.) No one seems to think SANE has calls which
> take care of this. So why does it work in the calculator?
>
> David

So does the calculator DA work in base 10, maybe? Note that
QuickBASIC for the Mac has a decimal base version, which should also
give exact answers for your case. It's just that decimal base
calculations are often considered enough more expensive than binary
in a binary computer that they are only used in specialized code.
Nice for business, where the world is naturally decimal; not so nice
in science, where it only seems to be decimal based, due to the way
we write things down.

Scott "binary" Bayes

Note: the decimal base trick is often done with 2 BCD digits per
byte, except in "infinite-precision" math, where it might be done
with rational numbers, or strings, or goodness knows what... I
believe HP's old 9845 was a decimal base machine.

S
roy@phri.nyu.edu (Roy Smith) (09/28/90)
rcfische@polyslo.CalPoly.EDU (Ray Fischer) writes:
> Although many floating point numbers ARE exact (0.125 for example), some
> cannot be, which is why testing for equality using floating point numbers
> is always dicey at best.

It just occurred to me that, although you *should* be able to
represent 0.125 as an exact binary fraction (namely 0b001, where b
represents the binary point), there is no guarantee at all that if
you write the constant 0.125 in a program, the compiler will convert
that into the exact binary floating point constant 0.00100000...

The process of converting the ascii string "0.125" into a binary
floating constant probably involves evaluating something like:

  1.0/10.0 + 2.0/100.0 + 5.0/1000.0

or maybe:

  ((5.0/10.0 + 2.0)/10.0 + 1.0)/10.0

both of which involve inexact intermediate results. Somewhat to my
surprise, when I compiled and ran the following on a Sun-3/50 with
both -fsoft and -f68881, and on a 3/160 with -ffpa:

  main ()
  {
      double x, y;

      x = 0.125;
      y = (x * 1000.0) - 125.0;
      if (y == 0.0)
          printf ("Yowza!\n");
      else
          printf ("Cowabunga!\n");
  }

all three printed "Yowza!", but I'm sure you could find machines
where that wasn't true.
-- 
Roy Smith, Public Health Research Institute
455 First Avenue, New York, NY 10016
roy@alanine.phri.nyu.edu -OR- {att,cmcl2,rutgers,hombre}!phri!roy
"Arcane?  Did you say arcane?  It wouldn't be Unix if it wasn't arcane!"
deadman@garnet.berkeley.edu (Ben Haller) (09/28/90)
This topic is getting really old, but one more thing: I've seen
several people so far say that the "correct" version of the loop

  for (x = 0.0 ; x != 10.0 ; x += 0.2)

is

  for (x = 0.0 ; x < 10.0 ; x += 0.2)

Now I don't claim to know much about floating point stuff, but it
seems to me that the internal representation of 0.2 could be either
slightly greater or slightly less than 0.2 (the reason why it isn't
*exactly* 0.2 has been made quite clear, I think). If the value used
as 0.2 is slightly less than 0.2, this loop will execute 51 times. If
the value used is slightly greater, this loop will execute 50 times.
This is obviously wrong. What you need to do is either use an epsilon
value, like:

  for (x = 0.0 ; fabs(x - 10.0) > 0.1 ; x += 0.2)

or you need to use a constant that is not a multiple of 0.2 to test
against:

  for (x = 0.0 ; x < 9.9 ; x += 0.2)

(or, as has already been mentioned, there are various ways of doing
the equivalent loop with integers, although they are all slower than
a pure FP loop like the last one above)

Or am I wrong?

-Ben Haller (deadman@garnet.berkeley.edu)
isr@rodan.acs.syr.edu (Michael S. Schechter - ISR group account) (10/04/90)
In article <8614@jarthur.Claremont.EDU> wilkins@jarthur.Claremont.EDU
(Mark Wilkins) writes:
> For any message relating to the countability of infinitely-extended
>fractional rational numbers the number of copies of the message needed to
>get the point across has the cardinality of Aleph-null.
> See the last message on the subject for a practical example.
>:-)
>-- M. W.

Not necessarily.  After explaining this point to the "unwashed hordes"
perhaps 3, 4, or even 5 hundred times previously, I've gotten pretty
good at it.  Here goes:

Computers, since they store 0's and 1's (you knew that much, right?),
can't keep track of numbers the way we do.  We say 0-9 hundreds, then
0-9 tens, then 0-9 ones, etc.  The poor stupid computers could only say
0 or 1 hundred, then 0 or 1 ten, then 0 or 1 one.  That would leave
lotsa missing numbers, and since my paycheck doesn't have any 0's or
1's in it, I really want there to be a way for a computer to keep track
of all these numbers!  So what computers do is say "I have 0 or 1 ones,
then 0 or 1 twos, then 0 or 1 fours, then 0 or 1 eights," etc.  It just
happens that by doubling the size of each place value, you get to "say"
every number.

So, for example, we say 56: that's five tens and six ones.  The
computer says (for 56): 111000.  That's one 32, one 16, one 8, no 4's,
no 2's, no ones.  See?  Isn't it easy?

Now the way the computer does fractions is just the same.  We use 0.1,
0.01, 0.001, etc. for tenths, hundredths, thousandths.  The computer
uses 0.1, 0.01, 0.001 for halves, quarters, eighths!  But don't forget,
3/8 isn't 0.003.  No!  It's 0.011 (which is 1 quarter plus 1 eighth =
3/8).  It's like the way you can't really write 1/3 nicely as a decimal
number: it's 0.3333333333 for as long as you wanna go.
Some numbers we can write nicely, like 0.1 in decimal, the computer has
to write very messily as 0.000110011001+  That's no halves, no
quarters, no eighths, one sixteenth, one thirty-second, no
sixty-fourths, no 128ths, one 256th, one 512th, etc.  Luckily,
computers only figure numbers this way internally, because they only
have 0's and 1's inside 'em.  On screens and printers they have 2's,
4's, 0's, everything from 0-9.  But now you see why there are problems:
it's like adding 1/3 to 1/3 to 1/3 with a calculator.  You get either
0.99999 or 1.00001, depending on your calculator.
--
Mike Schechter, Computer Engineer, Institute Sensory Research, Syracuse Univ.
InterNet: Mike_Schechter@isr.syr.edu  isr@rodan.syr.edu  Bitnet: SENSORY@SUNRISE
vd09+@andrew.cmu.edu (Vincent M. Del Vecchio) (10/05/90)
> Excerpts from netnews.comp.sys.mac.programmer: 3-Oct-90 Re: Why can't
> the Mac add?  Michael g. account@rodan (2555)
>
> In article <8614@jarthur.Claremont.EDU> wilkins@jarthur.Claremont.EDU
> (Mark Wilkins) writes:
> > For any message relating to the countability of infinitely-extended
> > fractional rational numbers the number of copies of the message needed
> > to get the point across has the cardinality of Aleph-null.
> > See the last message on the subject for a practical example.
> > :-)
> > -- M. W.
>
> Not neccesarily.  After explaining this point to the "unwashed hordes"
> perhaps 3, 4, or even 5 hundred times previously, I've gotten pretty
> good at it.  Here goes:
> [Goes on to explain why not all terminating decimals are terminating
> binary "decimals".]

Congratulations.  You have just furthered Mark's point by increasing the
number of explanations by one, making it one closer to infinite....

+-------------------------------------------------------------------+
| Vincent Del Vecchio         \  #include <stddisclaimer.h>          |
| Box 4834                     \ #include <stdquote.h>               |
| 5125 Margaret Morrison St.    \ BITNET: vd09+%andrew@cmuccvma.bitnet|
| Pittsburgh, PA 15213           \ UUCP: harvard!andrew.cmu.edu!vd09 |
| (412) 268-4441                  \ Internet: vd09+@andrew.cmu.edu   |
+-------------------------------------------------------------------+
vd09+@andrew.cmu.edu (Vincent M. Del Vecchio) (10/05/90)
One of these times I'll get the formatting right...

> Excerpts from netnews.comp.sys.mac.programmer: 3-Oct-90 Re: Why can't
> the Mac add?  Michael g. account@rodan (2555)
>
> In article <8614@jarthur.Claremont.EDU> wilkins@jarthur.Claremont.EDU
> (Mark Wilkins) writes:
> > For any message relating to the countability of infinitely-extended
> > fractional rational numbers the number of copies of the message needed
> > to get the point across has the cardinality of Aleph-null.
> > See the last message on the subject for a practical example.
> > :-)
> > -- M. W.
>
> Not neccesarily.  After explaining this point to the "unwashed hordes"
> perhaps 3, 4, or even 5 hundred times previously, I've gotten pretty
> good at it.  Here goes:
> [Goes on to explain why not all terminating decimals are terminating
> binary "decimals".]

Congratulations.  You have just furthered Mark's point by increasing the
number of explanations by one, making it one closer to infinite....

+-------------------------------------------------------------------+
| Vincent Del Vecchio         \  #include <stddisclaimer.h>          |
| Box 4834                     \ #include <stdquote.h>               |
| 5125 Margaret Morrison St.    \ BITNET: vd09+%andrew@cmuccvma.bitnet|
| Pittsburgh, PA 15213           \ UUCP: harvard!andrew.cmu.edu!vd09 |
| (412) 268-4441                  \ Internet: vd09+@andrew.cmu.edu   |
+-------------------------------------------------------------------+
mdtaylor@Apple.COM (Mark David Taylor) (10/06/90)
I don't know much about SANE, but I would guess that what the original
poster is actually looking for is the fixed-point arithmetic type,
described on pages I-79 and I-467 of Inside Macintosh.  I don't think
SANE has any special feature to detect endlessly repeating binary
representations of numbers.

- Mark