Schauble@mit-multics.arpa (Paul Schauble) (11/19/85)
Does anyone know of a C compiler for the IBM PC that does NOT force all floating point arithmetic to be done in double precision? 8087 support is necessary. Thanks, Paul
bright@dataioDataio.UUCP (Walter Bright) (11/20/85)
In article <3369@brl-tgr.ARPA> Schauble@mit-multics.arpa (Paul Schauble) writes:
>Does anyone know of a C compiler for the IBM PC that does NOT force all
>floating point arithmetic to be done in double precision? 8087 support
>is necessary.
The 8087 does all arithmetic internally in 80-bit floating format. Wanting a compiler that does 32-bit floating arithmetic, and then requiring 8087 support, does not accomplish much. Perhaps if you gave more information about your problem, I could make a specific recommendation. For example, is program size or speed the problem? Do you use the trig functions? Do you require the program to run on non-8087 machines? There are a lot of C compilers for the PC, all with slightly different solutions to these problems.
farren@well.UUCP (Mike Farren) (11/20/85)
In article <3369@brl-tgr.ARPA>, Schauble@mit-multics.arpa (Paul Schauble) writes:
> Does anyone know of a C compiler for the IBM PC that does NOT force all
> floating point arithmetic to be done in double precision? 8087 support
> is necessary.
>
> Thanks,
>   Paul
Yes, but the 8087 forces conversion to double precision anyway; there is no way to avoid it. Why would you want to?
-- 
Mike Farren
uucp: {dual, hplabs}!well!farren
Fido: Sci-Fido, Fidonode 125/84, (415)655-0667
USnail: 390 Alcatraz Ave., Oakland, CA 94618
garry@lasspvax.UUCP (Garry Wiegand) (11/21/85)
Pet peeve: In an inauspicious moment, K&R specified "All floating-point arithmetic in C is done in double precision" (pg 41). Two effects:
1) floats, as opposed to doubles, are costly and useless (except when memory space is critical), and
2) I have to advise people with CPU-intense problems not to use C.
Comments? Does everybody agree?

garry wiegand
garry%geology@cu-arpa.cs.cornell.edu

(I know what my compiler does, but are char->int and short->int required in the same way? K&R seems fuzzy to me...)
levy@ttrdc.UUCP (Daniel R. Levy) (11/22/85)
In article <286@well.UUCP>, farren@well.UUCP (Mike Farren) writes:
>In article <3369@brl-tgr.ARPA>, Schauble@mit-multics.arpa (Paul Schauble) writes:
>> Does anyone know of a C compiler for the IBM PC that does NOT force all
>> floating point arithmetic to be done in double precision? 8087 support
>> is necessary.
> Yes, but the 8087 forces conversion to double-precision anyway, there
>is no way to avoid it. Why would you want to?
One would wish to do this for the same reason that floats and doubles are two distinct types in Fortran (real and double precision)--sometimes the added machine-crunch effort to handle the doubles is unnecessary for the amount of precision desired out of the computation. If the machine has separate floating point instructions meant to handle the smaller floats, this makes sense, but if (as in the case of the 8087) it only handles double-size items, I agree it is pointless (alas) to make the distinction except artificially. On machines that do provide distinct instructions for floats and doubles, a multiply/divide of two doubles takes ~4 times the crunch of a multiply of two floats, as best as I can remember from CS101. (Refutations tolerated, flames > /dev/null).
-- 
Disclaimer: The views contained herein are my own and are not at all those of my employer or the administrator of any computer upon which I may hack.
dan levy, an engihacker @ at&t computer systems division, skokie, illinois
Path: ..!ihnp4!ttrdc!levy
ark@alice.UucP (Andrew Koenig) (11/23/85)
> In an inauspicious moment, K&R specified "All floating-point arithmetic
> in C is done in double precision" (pg 41).
> Two effects:
> 1) floats, as opposed to doubles, are costly and useless (except
> when memory space is critical), and
> 2) I have to advise people with CPU-intense problems not to use C.
> Comments? Does everybody agree?
> garry wiegand
No -- on most machines, single-precision does not offer enough significance for serious number-crunching, so people tend to use double-precision anyway if they care about the results.
zben@umd5.UUCP (11/25/85)
In article <4614@alice.UUCP> ark@alice.UucP (Andrew Koenig) writes:
These comments by garry wiegand: [z]
>> In an inauspicious moment, K&R specified "All floating-point arithmetic
>> in C is done in double precision" (pg 41).
>> 1) floats, as opposed to doubles, are costly and useless (except
>> when memory space is critical), and
>> 2) I have to advise people with CPU-intense problems not to use C.
>> Comments? Does everybody agree?
Andrew replies: [z]
>No -- on most machines, single-precision does not offer enough significance
>for serious number-crunching, so people tend to use double-precision
>anyway if they care about the results.
Now, this is a most disingenuous argument. If this were really true, one would map 'float' to the double-precision operations and totally ignore the single-precision ones.

Let's face it - this, as well as several other "features" of the Unix system (such as the treatment of parameters passed to functions), is left over from the PDP-11 days, and puts the lie to Unix's claim of machine independence...

Well, perhaps that's too strong. A better statement is that like every other claim made by Unix, the claim of machine independence is only about 90% true. Don't get sore! 90% is pretty damn good given the level of snake oil in this field...
-- 
Ben Cranston
...{seismo!umcp-cs,ihnp4!rlgvax}!cvl!umd5!zben
zben@umd2.ARPA
gwyn@brl-tgr.ARPA (Doug Gwyn <gwyn>) (11/25/85)
> >No -- on most machines, single-precision does not offer enough significance
> >for serious number-crunching, so people tend to use double-precision
> >anyway if they care about the results.
>
> Now, this is a most disingenuous argument. If this were really true, one
> would map 'float' to the double-precision operations and totally ignore the
> single-precision ones.
>
> Let's face it - this, as well as several other "features" of the Unix system
> (such as the treatment of parameters passed to functions) is left over from
> the PDP-11 days, and puts the lie to Unix's claim of machine independence...
Both viewpoints have some merit. By the way, the discussion is about C, not about UNIX!! Certainly, some of C's harder-to-justify features, such as parameter widening, are the result of implementing the original version on a PDP-11. But these rules can be and are expressed in a machine-independent manner. Just because a language does not offer complete access to every quirk of every architecture does not mean that the language is machine-dependent!

If C were to be redesigned, it would probably be wise to give EVERY basic data type equal citizenship. But it's too late now. Actually, X3J11 allows floats to be operated on without widening to doubles (except for parameter passing). Due to the unique characteristics of floating-point operations, this change is unlikely to break much existing correct code (unlike changes to integer data types).

I agree that in most cases where the loss of speed might really matter, double precision is usually needed anyway to get meaningful results. Some people, though, judge code more on how fast it runs than on whether it performs a useful function correctly.
roy@phri.UUCP (Roy Smith) (11/26/85)
> [...] in most cases where the loss of speed might really matter, double
> precision is usually needed anyway to get meaningful results. Some
> people, though, judge code more on how fast it runs than on whether it
> performs a useful function correctly.
Bullcookies! A lot of people (like me) work with primary data which is only accurate to 2 or 3 significant digits. It takes a hell of a lot of roundoff error in the 7th decimal place to make any difference in the accuracy of the final result. Why should I pay (in CPU time) for digits 8-15 when I don't need them? Why do you think they make machines with both single and double precision hardware to begin with?
-- 
Roy Smith <allegra!phri!roy>
System Administrator, Public Health Research Institute
455 First Avenue, New York, NY 10016
jsdy@hadron.UUCP (Joseph S. D. Yao) (11/27/85)
In article <706@lasspvax.UUCP> garry%geology@cu-arpa.cornell.edu.arpa writes:
>Pet peeve:
>In an inauspicious moment, K&R specified "All floating-point arithmetic
>in C is done in double precision" (pg 41).
This requirement is dropped in ANSI C, as mentioned in the middle of an earlier (long!) article. Floats may remain floats even in the midst of arithmetic ops and function passing. This may cause problems in loosely written programs. "Say what you mean ... is the whole of the law."
-- 
Joe Yao
hadron!jsdy@seismo.{CSS.GOV,ARPA,UUCP}
gwyn@BRL.ARPA (VLD/VMB) (11/27/85)
You argue that because your data is only accurate to 2 or 3 significant figures, 6 to 7 place accuracy in the computation is sufficient. This is often simply not true; a numerical analyst should make the determination. I have seen enough computations that produced total garbage to make me believe that the naive user should get double precision by default. I am in favor of supporting low-precision floating point in C, as permitted by X3J11, but let's not make it the default.
levy@ttrdc.UUCP (Daniel R. Levy) (11/28/85)
In article <42@brl-tgr.ARPA>, gwyn@brl-tgr.ARPA (Doug Gwyn <gwyn>) writes:
>> >No -- on most machines, single-precision does not offer enough significance
>> >for serious number-crunching, so people tend to use double-precision
>> >anyway if they care about the results.
>> Now, this is a most disingenuous argument. If this were really true, one
>> would map 'float' to the double-precision operations and totally ignore the
>> single-precision ones.
>> Let's face it - this, as well as several other "features" of the Unix system
>> (such as the treatment of parameters passed to functions) is left over from
>> the PDP-11 days, and puts the lie to Unix's claim of machine independence...
>
>I agree that in most cases where the loss of speed might really
>matter, double precision is usually needed anyway to get
>meaningful results. Some people, though, judge code more on how
>fast it runs than on whether it performs a useful function
>correctly.
Sounds a bit pejorative. If for example I were to attempt to speed up crunching in, say, a Fortran program by using real [float] instead of double precision [double], I might well want to include both versions of a critical routine in the code, and every so many steps run the double as well as the single, or otherwise check the sanity of the results. I am not a numerical analysis guru, but I surmise there might be testable conditions whereby it could be determined (without an overly large computational penalty) whether a computation could be entrusted to single precision or whether double precision should be used, prior to actually performing the computation. Any crunch gurus out there care to comment?
-- 
Disclaimer: The views contained herein are my own and are not at all those of my employer or the administrator of any computer upon which I may hack.
dan levy, an engihacker @ at&t computer systems division, skokie, illinois
Path: ..!ihnp4!ttrdc!levy
zoro@fluke.UUCP (Mark Hinds) (11/28/85)
In article <706@lasspvax.UUCP> garry%geology@cu-arpa.cornell.edu.arpa writes:
>Pet peeve:
>
>In an inauspicious moment, K&R specified "All floating-point arithmetic
>in C is done in double precision" (pg 41).
>
>Comments? Does everybody agree?
>
>garry wiegand
>
>garry%geology@cu-arpa.cs.cornell.edu
I can only guess that K&R did not intend C for numerical applications. This simplifies expression evaluation code generation, and greatly simplifies/reduces the math library (look at all the different flavors of math functions in FORTRAN). I agree that this is now a problem; it may not have been when C was first used. Hopefully ANSI C will resolve this.
-- 
Mark Hinds
John Fluke Mfg. Co., Inc.
(206) 356-6264
{decvax,ihnp4}!uw-beaver!--\
{sun,allegra}!--->  fluke!zoro
{ucbvax,hplabs}!lbl-csam!--/
henry@utzoo.UUCP (Henry Spencer) (11/29/85)
> ... Why do you think they make machines > with both single and double precision hardware to begin with? Because Fortran has two precisions of floating point. :-) :-<<< -- Henry Spencer @ U of Toronto Zoology {allegra,ihnp4,linus,decvax}!utzoo!henry
brooks@lll-crg.ARpA (Eugene D. Brooks III) (11/29/85)
>analyst should make the determination. I have seen enough
>computations that produced total garbage to make me believe
>that the naive user should get double precision by default.
>
>I am in favor of supporting low-precision floating point in
>C, as permitted by X3J11, but let's not make it the default.
This argument is hard to swallow. You are suggesting protection for the user by not giving him what he has asked for in his code. Any intelligent scientific programmer tries it both ways on his own and determines whether single precision is adequate (without the help of a numerical analyst; an analyst is consulted when one wants to understand the root of the problem in an attempt to rearrange the computation so that single precision is sufficient, should it fail). There are those who do not take adequate care in their work, but I see no need to save them from themselves by default. If the user says "float" then he wants float!

I have really gotten tired of carrying along my own compiler, which does single-precision arithmetic on floats, in order to use C in numerical computation. I have also gotten tired of trying to defend the use of C as an efficient numerical language when people constantly complain about this problem for floating-point computation. C should not promote floats to doubles in expressions or arguments, for the same reasons that it does not promote ints to longs. It's not efficient. This wart was injected into the language as a result of the nature of the FP11 hardware on the PDP-11. The promotion of chars and shorts to ints for arguments and expressions has its origin in the same hardware; the PDP-11 sign-extended chars when they were loaded into registers.

The promotion of char and short to int is not a severe issue (I once spent some time trying to get a 68000 to do multiplies of shorts using the available instructions instead of promoting to 32-bit ints and then using subroutines), as most hardware has alignment restrictions on the stack, at least for the sake of efficiency. This is not true for float and double. The loss in performance caused by the spurious conversions to double is a serious issue. Ask any scientist who routinely runs 20-cpu-hour jobs on a Vax whether he would rather have them run in 10 hours. He will be glad to do a run both ways to look for precision problems before moving on to that 100-hour run.
gwyn@brl-tgr.ARPA (Doug Gwyn <gwyn>) (11/29/85)
> >I am in favor of supporting low-precision floating point in
> >C, as permitted by X3J11, but let's not make it the default.
>
> This argument is hard to swallow. You are suggesting protection
> for the user by not giving him what he has asked for in his
> code.
That was NOT what I said. "I am in favor of supporting low-precision floating-point in C, as permitted by X3J11." This means that the compiler is free to evaluate (float) op (float) in single-precision if the implementer decides to do so. The implementer may choose instead to continue to coerce (float)s to (double)s in expressions. Presumably most C implementations, especially those on number crunchers, would support single-precision operations. For compatibility reasons, if none other, it is essential to maintain (double)-ness for floating-point constants etc. One can obtain single-precision by something like (float)1.0 if so inclined.
carl@bdaemon.UUCP (carl) (11/29/85)
.......
> >analyst should make the determination. I have seen enough
> >computations that produced total garbage to make me believe
> >that the naive user should get double precision by default.
>
> This argument is hard to swallow. You are suggesting protection
> for the user by not giving him what he has asked for in his
> code. Any intelligent scientific programmer tries it both ways
> on his own and determines whether single precision is adequate
> (without the help of a numerical analyst; an analyst is consulted
> when one wants to understand the root of the problem in an attempt
> to rearrange the computation so that single precision is sufficient
> should it fail). .... more stuff, largely irrelevant .....
I hope never to have the misfortune to have to use one of your programs. The notion that "trying it both ways" guarantees a program that will never blow up is patently absurd and in fact accounts for many of the horror stories that give the use of computers a bad name. Consulting a numerical analyst *before* developing a program serves the same purpose as consulting a statistician well versed in experimental design before starting data collection -- the chances of wasting the client's money and time are greatly reduced.
ken@turtlevax.UUCP (Ken Turkowski) (12/02/85)
In article <608@ttrdc.UUCP>, levy@ttrdc.UUCP (Daniel R. Levy) writes: > ... I surmise there might be testable conditions whereby it > could be determined (without an overly large computational penalty) whether > a computation could be entrusted to single precision or whether double > precision should be used, prior to actually performing the computation. You are indeed correct. The field of numerical analysis is a discipline that is concerned with determining the error in a calculation given the arithmetic precision of a machine. I have done a fair amount of coding in both floating-point and fixed-point arithmetic which carries around as much precision as needed to guarantee correctness of the results. Granted, error analysis can sometimes be painful. One easy way to determine how much precision to carry around is to first do the computations in very high precision, and then decrease the precision until the errors are intolerable. -- Ken Turkowski @ CIMLINC (formerly CADLINC), Menlo Park, CA UUCP: {amd,decwrl,hplabs,seismo,spar}!turtlevax!ken ARPA: turtlevax!ken@DECWRL.DEC.COM
emjej@uokvax.UUCP (12/02/85)
Sigh...people, including those who feel that C is perfect and thus have the urge to defend its every jot and tittle to the death, at times have such a touching faith in double precision! :-) I am in no wise a numerical analysis guru, but I have studied under one, and if there is one thing I have learned, it is that if your algorithm is flawed or your problem ill-conditioned, no amount of precision will save you. The fellow commenting on the fuzziness of the input data has an extremely valid point: an ill-conditioned problem may be such that wildly different solutions (and hence physical behavior of a system being modeled) are well within the possible "true" values of the inputs to your number cruncher.

Also, I recall that in matrix hacking, it turns out that certain operations are crucial--notably accumulating dot products. If you do *these* in high precision, the rest of the stuff can get away with less. That can make a large difference in storage space and runtime (if you have hardware that was designed like C so that everything is done in max precision anyway, the runtime difference is less, I'd agree, but people beat on some hefty matrices nowadays--thrash, thrash).

James Jones
brooks@lll-crg.ARpA (Eugene D. Brooks III) (12/03/85)
>I hope never to have the misfortune to have to use one of your programs.
Let's be nice now; I am not picking on you, nor am I trying to sell you one of my programs. I write them for my own use and they work fine for me.
>The notion that "trying it both" ways guarantees a program that will never
>blow up is patently absurd and in fact accounts for many of the horror
I didn't say that trying it both ways guarantees a program will never blow up. It demonstrates that the loss of precision by using single instead of double is not important for the particular run in question. I said no more and no less than that. I don't think that this statement can be argued with. The statement that you argue with is of course absurd, and I didn't make it. Trying a code in double precision and finding it produces the same results as single does not prove stability or robustness of a code, if that is what you think I said.

I will however argue that an unstable code can be useful. If you have an algorithm that produces a simulation of a physical system in time, it can be useful and even be the "best" algorithm for your application if, at a time step size which produces good enough accuracy (stability), i.e. you get your answers before the code blows up, the algorithm is more efficient than an absolutely stable one. The fact that there is an exponentially growing error is not a problem if it does not get big enough for you to see it before the run completes. I know it's living on the ragged edge, but if it means you get the job done in one day instead of a week you go for it. It is very much like doing data collection in a down-hole shot at Nevada: the bomb of course destroys all of the measuring equipment, and the trick is to get your answers before that happens, and of course to know when it has happened. With a down-hole shot there is never any doubt that things have blown up on you. A numerical algorithm is usually more subtle.

Doing things this way of course gets the noses of numerical analysts out of joint. They take offense at the idea of using an algorithm that is numerically unstable and therefore is "garbage". All I can say is that there are a lot of useful things to be found in another man's garbage, and if useful they are usually a bargain.

I think that the issues which started this chain of postings have gotten sidetracked. This started up as a discussion on whether or not computation on floats should be done in single precision. If the programmer wanted double precision he would have declared the variables in question to be double. The value of doing computation on floats in double is very dubious, especially if the IEEE standard for 32-bit floating point is being used. Having the default precision for the constant 1.0 be double is also very dubious if it tends to cause promotion of floats in a sum with it to double.

	float a,b;

	a = b + 1.0;	/* Gets done in double because 1.0 is a double.
			   Gag me with a spoon. */
ark@alice.UucP (Andrew Koenig) (12/03/85)
> float a,b;
>
> a = b + 1.0;	/* Gets done in double because 1.0 is a double.
>		   Gag me with a spoon. */
Nah, gets done in single because the compiler realizes that 1.0 has the same representation in single and double, and therefore that the result of the addition will be the same.
jimc@ucla-cs.UUCP (12/03/85)
In article <706@lasspvax.UUCP> garry%geology@cu-arpa.cornell.edu.arpa writes:
>In an inauspicious moment, K&R specified "All floating-point arithmetic
>in C is done in double precision" (pg 41).
> 1) floats, as opposed to doubles, are costly and useless (except
> when memory space is critical), and
> 2) I have to advise people with CPU-intense problems not to use C.
>Comments? Does everybody agree?
>(I know what my compiler does, but are char->int and short->int
>required in the same way? K&R seems fuzzy to me...)
On machines with the 8087 (IBM PC,...) where the hardware has to convert both float and double to an internal format, float variables are very slightly cheaper than doubles, as 4 fewer bytes have to be loaded. But on mainframes or, particularly, when floating point is emulated, the extra cost of double precision is outrageous: more than 4X, since an IEEE float has 23+1 bits of precision while a double has 52+1. Certainly the float variables should be widened to double only when necessary, just as is done in integer-type expressions which may have 16-bit int and 32-bit long.

As for short-int-long conversions, I don't know of any machines where loading 16 bits into a 32-bit register is any slower than loading 32 bits. As I understand it, the reason for converting everything to int is that K+R says you can convert any pointer to an int and back again without wrecking it. Therefore an int has to match the addressing size. The motivation, I'm sure, was so undeclared functions could be declared "extern int" even though they return pointers. BOO!

I personally would like to see a data type for 8-bit (signed?) integers. How about short = 8 bits, int = 16 bits, long = 32 bits? (Yes, I know, everyone's pet code expects 16-bit shorts...) I very much dislike using char type for arithmetic, because type char is for CHARACTERS. I have enough trouble explaining K+R's example "++ndigits[c-'0'];" without the possibility that "char c" is a variable of integer type!
ken@turtlevax.UUCP (Ken Turkowski) (12/04/85)
>> Any intelligent scientific programmer trys it both ways >> on his own and determines whether single precision is adequate >> (without the help of a numerical analyst, an analyst is consulted >> when one wants to understand the root of the problem in an attempt >> to rearrange the computation so that single precision is sufficient >> should it fail). .... more stuff, largely irrelevant ..... And suppose that the numerical analyst says that double precision isn't enough precision regardless of the ordering of the computation; what do you do then? Check out: Linnainmaa, Seppo "Software for Doubled-Precision Floating-Point Computations", ACM Transactions on Mathematical Software, Vol. 7, No. 3, Sept 1981, pp. 272-283 -- Ken Turkowski @ CIMLINC, Menlo Park, CA UUCP: {amd,decwrl,hplabs,seismo,spar}!turtlevax!ken ARPA: turtlevax!ken@DECWRL.DEC.COM
levy@ttrdc.UUCP (Daniel R. Levy) (12/05/85)
In article <4647@alice.UUCP>, ark@alice.UucP (Andrew Koenig) writes:
>> float a,b;
>>
>> a = b + 1.0;	/* Gets done in double because 1.0 is a double.
>>		   Gag me with a spoon. */
>
>Nah, gets done in single because the compiler realizes that 1.0 has
>the same representation in single and double, and therefore that
>the result of the addition will be the same.
On a 3B20S running Sys5 Release 2:

$ cat fabc.c
main()
{
	float a,b;
	a = b + 1.0;
}
$ cc -c -O fabc.c
$ dis fabc.o
	**** DISASSEMBLER ****

disassembly for fabc.o
section .text
main()
	 0: 7a02                save   &0x0,&0x2
	 2: c870 a04e 0000      movsd  0x4(%fp),%r0
	 8: c908 0000 018e 0000 faddd2 $0x18,%r0	<== DOUBLE
	10: c96e 0000 a000      movds  %r0,0x0(%fp)
	16: 7b00                ret    &0x0
$ cat fabf.f
	real a,b
	a = b + 1.0
	end
$ f77 -c -O fabf.f
fabf.f:
   MAIN:
$ dis fabf.o
	**** DISASSEMBLER ****

disassembly for fabf.o
section .text
	 0: 7a00                save   &0x0,&0x0
	 2: 114b                addw2  &0x4,%sp
	 4: 800b                br     +0xb <1c>
	 6: ca08 0000 0208 0000 028e 0000
	                        fadds3 $0x20,$0x28,%r0	<== SINGLE
	12: 5108 0000 0240      movw   %r0,$0x24
	18: a100 7b00           ret    &0x0
	1c: 900c                br     -0xc <6>
	1e: dede                nop
-- 
Disclaimer: The views contained herein are my own and are not at all those of my employer or the administrator of any computer upon which I may hack.
dan levy, an engihacker @ at&t computer systems division, skokie, illinois
Path: ..!ihnp4!ttrdc!levy
gwyn@brl-tgr.ARPA (Doug Gwyn <gwyn>) (12/05/85)
> K+R says you can convert any pointer to an int and back again without > wrecking it. That is NOT what it says.
ken@turtlevax.UUCP (Ken Turkowski) (12/05/85)
In article <7849@ucla-cs.ARPA> jimc@ucla-cs.UUCP (Jim Carter) writes:
> As for short-int-long conversions, I don't know of any machines where
>loading 16 bits into a 32-bit register is any slower than loading 32 bits.
However, if an int is a long, and your C compiler does all operations on longs, then it takes at least one extra instruction to either sign- or zero-extend a short or char to a long. So to add two shorts, you have:

	move16	_a,r0
	ext32	r0
	move16	_b,r1
	ext32	r1
	add32	r1,r0
	move16	r0,_a

Rather than

	move16	_a,r0
	add16	_b,r0
	move16	r0,_a

Or

	add16	_b,_a

Can you say "order of magnitude in speed difference"?
-- 
Ken Turkowski @ CIMLINC, Menlo Park, CA
UUCP: {amd,decwrl,hplabs,seismo,spar}!turtlevax!ken
ARPA: turtlevax!ken@DECWRL.DEC.COM
rlr@stcvax.UUCP (Roger Rose) (12/06/85)
>> float a,b;
>>
>> a = b + 1.0;	/* Gets done in double because 1.0 is a double.
>>		   Gag me with a spoon. */
>
> Nah, gets done in single because the compiler realizes that 1.0 has
> the same representation in single and double, and therefore that
> the result of the addition will be the same.
Sorry, it gets done in double. ALL floats are converted to double prior to any operation. (Refer to K&R p. 41 on implicit type conversions.)
-- 
Roger Rose
UUCP: {hao ihnp4 decvax}!stcvax!rlr
USnail: Storage Technology Corp. - MD 3T / Louisville, Co. 80028
phone: (303) 673-6873
chris@umcp-cs.UUCP (Chris Torek) (12/06/85)
In article <618@ttrdc.UUCP> levy@ttrdc.UUCP (Daniel R. Levy) writes: >In article <4647@alice.UUCP>, ark@alice.UucP (Andrew Koenig) writes: [`>>>' must be from 1087@lll-crg.ARpA] >>> float a, b; >>> >>> a = b + 1.0; /* Gets done in double ... */ >>Nah, gets done in single ... >On a 3B20S running Sys5 Release 2: [it is done in double] You just need a better compiler :-). On a 4.3BSD (Beta) Vax with the `-f' flag to the C compiler, it is done in single precision; and the compiler generates a `.float' (single precision) constant. With a tip of the hat (if I had a hat) to Donn Seeley, -- In-Real-Life: Chris Torek, Univ of MD Comp Sci Dept (+1 301 454 4251) UUCP: seismo!umcp-cs!chris CSNet: chris@umcp-cs ARPA: chris@mimsy.umd.edu
bs@linus.UUCP (Robert D. Silverman) (12/06/85)
> >> Any intelligent scientific programmer tries it both ways
> >> on his own and determines whether single precision is adequate
> >> (without the help of a numerical analyst; an analyst is consulted
> >> when one wants to understand the root of the problem in an attempt
> >> to rearrange the computation so that single precision is sufficient
> >> should it fail). .... more stuff, largely irrelevant .....
>
> And suppose that the numerical analyst says that double precision
> isn't enough precision regardless of the ordering of the computation;
> what do you do then?
>
> Check out:
>
> Linnainmaa, Seppo, "Software for Doubled-Precision Floating-Point
> Computations", ACM Transactions on Mathematical Software, Vol. 7,
> No. 3, Sept 1981, pp. 272-283
>
> Ken Turkowski @ CIMLINC, Menlo Park, CA
Double, triple, more???? I routinely do computations involving precision to hundreds of digits! e.g. computational number theory, cryptography, etc. For such applications floating point is virtually useless on most machines (and I've worked on a LOT of different ones). To do efficient multi-precision arithmetic for numbers in this size range you can't do much better than n^2 algorithms for multiplication and division. Floating-point numbers would waste too many bits of each word of a multi-precise number on the exponent. Until you get really large, i.e. thousands of digits, the Schonhage-Strassen FFT-based multiply algorithm has too high an overhead. Also, for numbers in that range it is faster to do division by finding an accurate inverse via Newton's method and then doing a multiply. Unfortunately quite a few machines today will not multiply 2 full words together giving a double-length product, nor will they divide such a product by a full word yielding a full-word quotient and remainder. The 68010 is such a processor.

For multi-precision calculations you must therefore restrict your radix to half the size of a full word, slowing your application down by a factor of 4. Give me a 128-bit machine with double-length registers!!!!!

Bob Silverman
ark@alice.UucP (Andrew Koenig) (12/06/85)
>> float a,b;
>>
>> a = b + 1.0;	/* Gets done in double because 1.0 is a double.
>>		   Gag me with a spoon. */
>
>Nah, gets done in single because the compiler realizes that 1.0 has
>the same representation in single and double, and therefore that
>the result of the addition will be the same.
... to which Dan Levy gives detailed evidence that his machine does it in double. Apparently some people don't recognize irony; let me try to make the same point more clearly.

The point is that most floating-point constants are very simple: 1.0, 0.0, sometimes 2.0. It doesn't take much to recognize such constants and do the operations in single. This is true even if your compiler isn't up to it.
ark@alice.UucP (Andrew Koenig) (12/06/85)
>> K+R says you can convert any pointer to an int and back again without
>> wrecking it.
>
> That is NOT what it says.

No, it's not quite, but it's extremely close. Page 210, sec. 14.4:

	A pointer may be converted to any of the integral types large
	enough to hold it. Whether an int or a long is required is
	machine dependent. The mapping function is also machine
	dependent, but is intended to be unsurprising to those who
	know the addressing structure of the machine. Details for some
	particular machines are given below.

	An object of integral type may be explicitly converted to a
	pointer. The mapping always carries an integer converted from
	a pointer back to the same pointer, but is otherwise machine
	dependent.

Thus, you can convert any pointer to a long and back again without
wrecking it, and you can use an int instead of a long on some machines.
gwyn@brl-tgr.ARPA (Doug Gwyn <gwyn>) (12/06/85)
> >> 	float a,b;
> >>
> >> 	a = b + 1.0;	/* Gets done in double because 1.0 is a double.
> >> 			   Gag me with a spoon. */
> >
> > Nah, gets done in single because the compiler realizes that 1.0 has
> > the same representation in single and double, and therefore that
> > the result of the addition will be the same.
>
> Sorry, it get's done in double. ALL floats are converted to double prior
> to any operation. (Refer to K&R p. 41 on implicit type conversions.)

I really wish people would NOT POST if they don't know what they're
talking about. Andy Koenig, as usual, gave a correct answer and some
turkey, as usual, contradicts him. Sheesh.
zoro@fluke.UUCP (Mark Hinds) (12/07/85)
In article <42@brl-tgr.ARPA> gwyn@brl-tgr.ARPA (Doug Gwyn <gwyn>) writes:
> I agree that in most cases where the loss of speed might really
> matter, double precision is usually needed anyway to get
> meaningful results. Some people, though, judge code more on how
> fast it runs than on whether it performs a useful function
> correctly.

I previously worked for a physics lab that did simulations of shock
waves traveling through materials. The programs which blindly used
double precision for everything invariably ran significantly slower
(half speed or worse) than those which used only the doubles that were
really needed. The programs which made judicious use of doubles were
just as "accurate" as the all-double programs. Accuracy in this case is
measured by how well the simulation could predict experimental results.

The usual result of inappropriate singles is not inaccuracy, but wildly
erroneous results and/or programs which hang due to non-convergence.

Mark Hinds
-- 
____________________________________________________________
Mark Hinds			{decvax,ihnp4}!uw-beaver!--\
John Fluke Mfg. Co., Inc.	  {sun,allegra}!---> fluke!zoro
(206) 356-6264			{ucbvax,hplabs}!lbl-csam!--/
zoro@fluke.UUCP (Mark Hinds) (12/07/85)
In article <4614@alice.UUCP> ark@alice.UucP (Andrew Koenig) writes:
> No -- on most machines, single-precision does not offer enough significance
> for serious number-crunching, so people tend to use double-precision
> anyway if they care about the results.

NO NO!!!! This is NOT true. Singles may be used in many places with
little or no loss in accuracy and a great reduction in computing time.
What are you calling serious number crunching, and what are most
machines???

Mark Hinds
-- 
____________________________________________________________
Mark Hinds			{decvax,ihnp4}!uw-beaver!--\
John Fluke Mfg. Co., Inc.	  {sun,allegra}!---> fluke!zoro
(206) 356-6264			{ucbvax,hplabs}!lbl-csam!--/
jon@cit-vax.arpa (Jonathan P. Leech) (12/07/85)
>>> 	float a,b;
>>>
>>> 	a = b + 1.0;	/* Gets done in double because 1.0 is a double.
>>> 			   Gag me with a spoon. */
>>
>> Nah, gets done in single because the compiler realizes that 1.0 has
>> the same representation in single and double, and therefore that
>> the result of the addition will be the same.
>
> The point is that most floating-point constants are very simple: 1.0,
> 0.0, sometimes 2.0. It doesn't take much to recognize such constants
> and do the operations in single.

There are potential pitfalls here also. For example, last summer I was
involved in a project to ray-trace arbitrarily deformed parametric
patches. The computation of the deformations involved the constant
PI = 3.1415... For some strange reason, the numerical technique we were
using took 10 times as long to converge as it should have. After several
days of head-bashing, it turned out that the FORTRAN compiler we were
using interpreted PI as a single-precision constant, and that loss of
accuracy was sufficient. (Perhaps this is standard FORTRAN behavior -
I don't know, as I am mainly a C hacker.)

It makes a great deal more sense to me to have constants
double-precision by default. If you're really sure you can get away
with single precision, fine, but don't assume it as default behavior.

    -- Jon Leech (jon@cit-vax.arpa)
    __@/
jimc@ucla-cs.UUCP (12/10/85)
In article <4647@alice.UUCP> ark@alice.UucP (Andrew Koenig) writes:
>> 	float a,b;
>> 	a = b + 1.0;	/* Gets done in double because 1.0 is a double.
>> 			   Gag me with a spoon. */
>
> Nah, gets done in single because the compiler realizes that 1.0 has
> the same representation in single and double, and therefore that
> the result of the addition will be the same.

Don't forget the IEEE float standard (e.g. the 8087 chip), where a float
has an 8-bit exponent and a double has 11 bits. The compiler has to
recognize that 1.0D0 .eq. 1.0E0 (pardon my Fortran) even though the bit
patterns differ. This is certainly feasible, but takes more smarts.
Gag me with a spoon.

James F. Carter            (213) 206-1306
UCLA-SEASnet; 2567 Boelter Hall; 405 Hilgard Ave.; Los Angeles, CA 90024
UUCP:...!{ihnp4,ucbvax,{hao!cepu}}!ucla-cs!jimc  ARPA:jimc@locus.UCLA.EDU
jimc@ucla-cs.UUCP (12/10/85)
In article <322@brl-tgr.ARPA> gwyn@brl-tgr.ARPA (Doug Gwyn <gwyn>) writes:
>> K+R says you can convert any pointer to an int and back again without
>> wrecking it.
>
> That is NOT what it says.

Mea culpa. K+R A.14.4 (p. 210) para. 2 says "Whether an int or a long is
required is machine dependent." I was remembering the next paragraph,
which says that an "integer" converted from a pointer can be converted
back to the same pointer.

James F. Carter            (213) 206-1306
UCLA-SEASnet; 2567 Boelter Hall; 405 Hilgard Ave.; Los Angeles, CA 90024
UUCP:...!{ihnp4,ucbvax,{hao!cepu}}!ucla-cs!jimc  ARPA:jimc@locus.UCLA.EDU
jimc@ucla-cs.UUCP (12/10/85)
In article <4614@alice.UUCP> ark@alice.UucP (Andrew Koenig) writes:
> No -- on most machines, single-precision does not offer enough significance
> for serious number-crunching, so people tend to use double-precision
> anyway if they care about the results.

Once I worked on a polynomial root package. I found that finding real
roots from double-precision real coefficients performed just about the
same as finding complex roots from single-precision complex
coefficients. That is, the speed of the (two different) algorithms as a
function of polynomial order was about the same, the errors in the roots
were similar, and the maximum order before errors made the roots useless
was about the same. Obviously :-) it's because the number of bits per
coefficient was about the same in both cases.

Here's a "valid" use of single precision. In another project, I cleaned
up a brute-force polynomial fitter by using suitably selected orthogonal
polynomials. I can't quite remember the numbers, but I think the error
was acceptable up to order only 4 with x**j for a basis, whereas I could
do 8th order using an orthogonal basis. This was in single precision,
and 8th order was far beyond what the measurement accuracy could
justify.

Numerical performance is very sensitive to the method, and sometimes you
really need the speed and/or compactness of single precision, so it
behooves you to be serious in choosing the method.

James F. Carter            (213) 206-1306
UCLA-SEASnet; 2567 Boelter Hall; 405 Hilgard Ave.; Los Angeles, CA 90024
UUCP:...!{ihnp4,ucbvax,{hao!cepu}}!ucla-cs!jimc  ARPA:jimc@locus.UCLA.EDU
jsdy@hadron.UUCP (Joseph S. D. Yao) (12/18/85)
In article <984@turtlevax.UUCP> ken@turtlevax.UUCP (Ken Turkowski) writes:
> However, if an int is a long, and your C compiler does all operations
> on longs, then it takes at least one extra instruction to either sign-
> or zero-extend a short or char to a long. So to add two shorts, you have:
> 	move16	_a,r0
> 	ext32	r0
> 	move16	_b,r1
> 	ext32	r1
> 	add32	r1,r0
> 	move16	r0,_a
> Rather than
> 	move16	_a,r0
> 	add16	_b,r0
> 	move16	r0,_a
> Or
> 	add16	_b,_a
> Can you say "order of magnitude in speed difference"?

How about:
	cvtwl	_a,r0
	cvtwl	_b,r1
	addl2	r1,r0
	cvtlw	r0,_a
Or, better,
	addw2	_b,_a
This is legitimate VAX code. Can you say "efficient orthogonal
architecture"? Five times fast? ;-)

[No, Cottrell hasn't converted me. It just happens to be true. Programs
should still be written readably and, consequently, portably.]
-- 
	Joe Yao		hadron!jsdy@seismo.{CSS.GOV,ARPA,UUCP}