daver@hcx1.SSD.HARRIS.COM (06/30/87)
IEEE floating point arithmetic users: Of the 4 rounding modes -- nearest, zero, positive infinity, and negative infinity -- which one(s) are you most likely to use? Does the accuracy of the numbers influence your decision? Does the performance of computing in one of these modes help sway your opinion? Not being an IEEE user, I'm trying to determine the relative merits of one mode over another. Thanks for your input. Dave Ray -- uucp: hcx1!daver
johnl@ima.UUCP (John R. Levine) (07/02/87)
In article <93900007@hcx1> daver@hcx1.SSD.HARRIS.COM writes: >Of the 4 rounding modes -- nearest, zero, positive infinity, and negative >infinity -- which one(s) are you most likely to use? ... I always use round to nearest. Round to zero ("chop") is mostly useful for numerical shenanigans or, I suppose, compatibility with previous machines like the IBM 360/370 that round that way. In general, round to nearest gives you the effect of storing an extra bit of precision compared to round to zero. On some micros, your FP unit is so much more powerful than your regular CPU that you use it for integer calculations, and chop mode could be useful there. Round up and round down seem only useful for confidence testing -- you run your program once in round up mode and once in round down mode and the two answers you get give you an idea of how solid the answers are. The closer together they are, the more confident you feel. (Real numerical analysts are welcome to explain why this is all wrong.) -- John R. Levine, Javelin Software Corp., Cambridge MA +1 617 494 1400 { ihnp4 | decvax | cbosgd | harvard | yale }!ima!johnl, Levine@YALE.something U.S. out of New Mexico!
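[Editor's note: the confidence-testing idea above can be sketched in modern terms with Python's decimal module, which provides the same four rounding modes (decimal arithmetic follows IEEE 854, the radix-independent sibling of 754). The 6-digit precision is an arbitrary choice to make the rounding visible; this is an illustration, not Levine's actual practice.]

```python
from decimal import Decimal, localcontext, ROUND_CEILING, ROUND_FLOOR

def third_times_three(mode):
    # Evaluate (1/3)*3 at 6 significant digits under the given rounding mode.
    with localcontext() as ctx:
        ctx.prec = 6
        ctx.rounding = mode
        return (Decimal(1) / Decimal(3)) * 3

lo = third_times_three(ROUND_FLOOR)    # round down:  0.999999
hi = third_times_three(ROUND_CEILING)  # round up:    1.00001
# The two runs bracket the exact answer, 1; the closer together they
# are, the more confidence the result inspires.
assert lo < 1 < hi
```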
markb@mitisft.Convergent.COM (Mark Beyer) (07/02/87)
In article <93900007@hcx1>, daver@hcx1.SSD.HARRIS.COM writes: > Of the 4 rounding modes -- nearest, zero, positive infinity, and negative > infinity -- which one(s) are you most likely to use? The compilers and runtimes for the 80387 and Weitek 1167 that I've seen use 'nearest' rounding for floating point. Integers are rounded towards zero.
dgh%dgh@Sun.COM (David Hough) (07/02/87)
An implementation of IEEE 754 or 854 floating point must provide all four specified rounding modes with round-to-nearest as default. As to why all are specified, round-to-nearest is most likely to provide the best results on most problems. The directed roundings toward zero, negative infinity, and positive infinity are useful for special purposes. Round toward zero is used a lot for converting floating-point numbers to integers, and for Fortran functions like AINT. Likewise ceil(3m) and floor(3m) are most easily implemented using the directed rounding modes. However the most important reason for requiring directed rounding modes is to facilitate efficient implementation of interval arithmetic. Interval arithmetic is a systematic approach to bounding all the errors in a computation. It has its limitations but has been usefully applied in a great many situations. A group at Karlsruhe led by Nickel has been studying applications for many years, although handicapped by hardware that lacked directable rounding. And interval arithmetic has even received a half-baked commercialization in the form of the ACRITH hardware for some IBM mainframes. Another interval arithmetic pioneer, Ramon Moore, has organized a conference on the subject for September 8-11 at Ohio State. David Hough ARPA: dhough@sun.com UUCP: {ucbvax,decvax,allegra,decwrl,cbosgd,ihnp4,seismo}!sun!dhough
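[Editor's note: the float-to-integer uses Hough mentions can be sketched with the rounding modes of Python's decimal module as a stand-in: round toward zero gives Fortran's AINT, and the two directed modes give floor() and ceil(). The helper name is illustrative, not from any standard library.]

```python
from decimal import Decimal, ROUND_CEILING, ROUND_DOWN, ROUND_FLOOR

def to_int(x, mode):
    # Quantize to an integer under the given rounding mode.
    return int(Decimal(x).quantize(Decimal(1), rounding=mode))

assert to_int('-2.7', ROUND_DOWN) == -2      # chop toward zero: AINT(-2.7)
assert to_int('-2.7', ROUND_FLOOR) == -3     # toward -inf: floor(-2.7)
assert to_int('-2.7', ROUND_CEILING) == -2   # toward +inf: ceil(-2.7)
assert to_int('2.7', ROUND_DOWN) == 2        # AINT(2.7)
```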
fpst@hubcap.UUCP (Dennis Stevenson) (07/02/87)
In article <93900007@hcx1>, daver@hcx1.SSD.HARRIS.COM writes: > > IEEE floating point arithmetic users: > > Of the 4 rounding modes -- nearest, zero, positive infinity, and negative > infinity -- which one(s) are you most likely to use? Does the accuracy > of the numbers influence your decision? Does the performance of computing > in one of these modes help sway your opinion? ... You should check with the numerical people on na-request at stanford.edu or sci.math on usenet. They're the folks who use it. Steve
lyang%scherzo@Sun.COM (Larry Yang) (07/03/87)
In article <93900007@hcx1> daver@hcx1.SSD.HARRIS.COM writes: >IEEE floating point arithmetic users: > >Of the 4 rounding modes -- nearest, zero, positive infinity, and negative >infinity -- which one(s) are you most likely to use? I am not a real user of floating point arithmetic, but I understand a little bit about the theory behind the use of the rounding modes. Round to nearest is pretty straightforward; it's the most obvious rounding mode. The rounding to +inf and -inf are especially useful in what is known as 'interval arithmetic'. Since floating point arithmetic has the potential of inaccuracies due to rounding, what a programmer may do is to perform a computation using 'round to +inf', then repeat the computation using 'round to -inf'. Then they are certain that the 'true' result lies somewhere between these two bounds; they know the 'interval' in which the solution lies. I am not sure about the use of round to zero; I will need to check my notes. --Larry Yang [lyang@sun.com,{backbone}!sun!lyang] | Sun Microsystems, Inc., Mountain View, CA | Hobbes: "Why do we play war and not peace?" Calvin: "Too few role models."
roy@phri.UUCP (Roy Smith) (07/04/87)
In article <22630@sun.uucp> lyang@sun.UUCP (Larry Yang) writes: > what a programmer may do is to perform a computation using 'round to > +inf', then repeat the computation using 'round to -inf'. Then they are > certain that the 'true' result lies somewhere between these two bounds; > they know the 'interval' in which the solution lies. Maybe I'm just exposing my ignorance of the subject, but it seems to me that for division, i.e. if you do (a+b)/(c+d), this logic doesn't hold. If a+b = 1.1 and c+d = 1.9, for example, rounding both intermediate results towards -inf gives 1.0/1.0 = 1.0; similarly, rounding both towards +inf gives 2.0/2.0 = 1.0. The "real" answer is 0.578..., which is *not* within the interval (1.0,1.0). -- Roy Smith, {allegra,cmcl2,philabs}!phri!roy System Administrator, Public Health Research Institute 455 First Avenue, New York, NY 10016
rentsch@unc.cs.unc.edu (Tim Rentsch) (07/04/87)
In article <22630@sun.uucp> lyang@sun.UUCP (Larry Yang) writes: > The rounding to +inf and -inf are especially useful in what is known > as 'interval arithmetic'. Since floating point arithmetic has the > potential of inaccuracies due to rounding, what a programmer may do > is to perform a computation using 'round to +inf', then repeat the > computation using 'round to -inf'. Then they are certain that the > 'true' result lies somewhere between these two bounds; they know the > 'interval' in which the solution lies. Assuming by "perform a computation" you mean run an entire program (rather than just one arithmetic operation), this is not right. It is perfectly possible (proof left to the reader) for the exact answer to a calculation to be outside the range of 'round to -inf' and 'round to +inf', if the calculation has more than one operation. True interval arithmetic requires two operands (lower and upper bound) to be carried everywhere through the program, generating new lower and upper bounds for each operation. For addition, sum the lower bounds (round to -inf) to get the result lower bound, and sum the upper bounds (round to +inf) to get the result upper bound. For subtraction, on the other hand, subtract the second operand's lower bound from the first's upper bound (round to +inf) to get the result upper bound, and subtract the second's upper bound from the first's lower bound (round to -inf) to get the result lower bound. And so forth. The 'round to -inf' and 'round to +inf' modes could be used to implement interval arithmetic, as sketched above, but only by carrying twice as much data and doing twice as many operations. Of course, you could use 'round to -inf' and 'round to +inf' as confidence checks.
If the program, run the two ways, produces significantly different results (or even if the two similar results differ significantly from the 'round to nearest' run), then it is certain that significance has been lost, and you had better do some numerical analysis (or use double precision) before trusting the answers. By a standard (perhaps suspect) reasoning process, if the two -- or three -- answers substantially agree, our confidence in the results is increased. This increased confidence (or clear indication of an erroneous answer) comes with only a 2-3 fold increase in CPU time, and with no change to the program. cheers, Tim
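[Editor's note: Rentsch's addition and subtraction rules can be sketched as follows, again using Python's decimal module for its rounding modes. Each interval is a (lower, upper) pair; lower bounds round toward -inf, upper bounds toward +inf. The 6-digit precision is an arbitrary choice that keeps the outward rounding visible.]

```python
from decimal import Decimal, localcontext, ROUND_CEILING, ROUND_FLOOR

def _rounded(mode, op):
    # Evaluate op() at 6 significant digits under the given mode.
    with localcontext() as ctx:
        ctx.prec = 6
        ctx.rounding = mode
        return op()

def iadd(a, b):
    # [a0,a1] + [b0,b1] = [a0+b0 rounded down, a1+b1 rounded up]
    return (_rounded(ROUND_FLOOR, lambda: a[0] + b[0]),
            _rounded(ROUND_CEILING, lambda: a[1] + b[1]))

def isub(a, b):
    # [a0,a1] - [b0,b1] = [a0-b1 rounded down, a1-b0 rounded up]
    return (_rounded(ROUND_FLOOR, lambda: a[0] - b[1]),
            _rounded(ROUND_CEILING, lambda: a[1] - b[0]))

third = (Decimal('0.333333'), Decimal('0.333334'))       # encloses 1/3
two_thirds = (Decimal('0.666666'), Decimal('0.666667'))  # encloses 2/3
lo, hi = iadd(third, two_thirds)
assert lo <= 1 <= hi   # the enclosure of the exact sum, 1, survives
```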
ark@alice.UUCP (07/04/87)
In article <2774@phri.UUCP>, roy@phri.UUCP writes: > Maybe I'm just exposing my ignorance of the subject, but it seems > to me that for division, i.e. if you do (a+b)/(c+d), this logic doesn't > hold. If a+b = 1.1 and c+d = 1.9, for example, rounding both intermediate > results towards -inf gives 1.0/1.0 = 1.0; similarly, rounding both towards > +inf gives 2.0/2.0 = 1.0. The "real" answer is 0.578..., which is *not* > within the interval (1.0,1.0). First of all, the rounding we're talking about is not rounding to the nearest integer, but rounding to an adjacent representable floating-point number. However, let's pretend we're rounding to integers. Then for division we have to round in the directions that give first the smallest and then the largest result:

1.0/2.0 = 0.5
2.0/1.0 = 2.0

so we find that the result is in the interval (0.5,2.0)
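[Editor's note: a minimal sketch of this point -- for interval division, take the smallest and largest of the four endpoint quotients, rounding the lower result down and the upper result up. The decimal module stands in for IEEE hardware modes, and the divisor interval is assumed strictly positive.]

```python
from decimal import Decimal, localcontext, ROUND_CEILING, ROUND_FLOOR

def idiv(a, b):
    a0, a1 = a
    b0, b1 = b
    assert b0 > 0, "divisor interval containing zero: quotient unbounded"
    with localcontext() as ctx:
        ctx.prec = 6
        ctx.rounding = ROUND_FLOOR      # lower bound rounds toward -inf
        lo = min(a0 / b0, a0 / b1, a1 / b0, a1 / b1)
    with localcontext() as ctx:
        ctx.prec = 6
        ctx.rounding = ROUND_CEILING    # upper bound rounds toward +inf
        hi = max(a0 / b0, a0 / b1, a1 / b0, a1 / b1)
    return lo, hi

# The worked example above: [1,2] / [1,2] = [0.5, 2.0]
lo, hi = idiv((Decimal(1), Decimal(2)), (Decimal(1), Decimal(2)))
assert (lo, hi) == (Decimal('0.5'), Decimal(2))
assert lo < Decimal('1.1') / Decimal('1.9') < hi   # 0.578... is inside
```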
henry@utzoo.UUCP (Henry Spencer) (07/06/87)
> ... for division, i.e. if you do (a+b)/(c+d), this logic doesn't > hold. If a+b = 1.1 and c+d = 1.9, for example, rounding both intermediate > results towards -inf gives 1.0/1.0 = 1.0; similarly, rounding both towards > +inf gives 2.0/2.0 = 1.0. The "real" answer is 0.578..., which is *not* > within the interval (1.0,1.0). Doing interval arithmetic correctly is rather more complicated than just running the calculation twice with different rounding modes. You have to consider the worst cases for each operation. For example, the worst cases for division round numerator and denominator in different ways. It gets worse than that when the function isn't monotonic. Taking a gross case, sin(0) = sin(pi) = 0, but sin(0...pi) is 0...1 because sin(pi/2) is 1. Getting all this right, especially when roundoff error in "hidden" intermediate results is involved, really requires a professional. The extra rounding modes are tools, not ready-made solutions. -- Mars must wait -- we have un- Henry Spencer @ U of Toronto Zoology finished business on the Moon. {allegra,ihnp4,decvax,pyramid}!utzoo!henry
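[Editor's note: Spencer's sin example can be sketched like so -- endpoint evaluation alone misses interior extrema, so an interval sine must also check every odd multiple of pi/2 inside the interval. This uses plain float sin with no outward rounding; it illustrates the non-monotonicity problem only and is not production interval code.]

```python
import math

def interval_sin(lo, hi):
    # Start with the endpoint values...
    candidates = [math.sin(lo), math.sin(hi)]
    # ...then add +/-1 for each extremum k*(pi/2), odd k, inside [lo, hi].
    k = math.ceil(lo / (math.pi / 2))
    while k * (math.pi / 2) <= hi:
        if k % 2:                          # odd k: an extremum of sin
            candidates.append(1.0 if k % 4 == 1 else -1.0)
        k += 1
    return min(candidates), max(candidates)

# sin(0) = sin(pi) = 0, yet sin over [0, pi] ranges over [0, 1]
# because of the maximum at pi/2:
lo, hi = interval_sin(0.0, math.pi)
assert lo == 0.0 and hi == 1.0
```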
cik@l.cc.purdue.edu (Herman Rubin) (07/07/87)
This is another example of the need for flexibility. Mostly I want to round to nearest; _however_, I often need the other rounding modes. In fact, I may want the rounding mode to depend on the signs of the arguments (this also applies to some integer operations). Lengthening the instruction format to allow this flexibility will cost encoding space, but should not slow down the computer appreciably. -- Herman Rubin, Dept. of Statistics, Purdue Univ., West Lafayette, IN 47907 Phone: (317)494-6054 hrubin@l.cc.purdue.edu or pur-ee!stat-l!cik or hrubin@purccvm.bitnet