tribble_acn%uta.csnet@csnet-relay.arpa (David Tribble) (01/08/86)
For the last few weeks there has been ongoing discussion of the merits
and drawbacks (demerits?) of the precision the compiler sees fit to use
for statements like

	float a, b;
	a = a + 1.0;	/* 1 */
	a = a + b;	/* 2 */

One argument that should be mentioned is that some compiler writers
design their C compilers around the criteria:

	1. Keep runtime code small.
	2. Stay standard (K&R).

For some machines, especially those that do not have floating-point
instructions built in (e.g., the 8086 and 68000), it makes sense to
convert everything to double (with a runtime routine called something
like $ftod), do the add operation (with a routine called $dadd), then
convert the result back to a float (with a routine called $dtof).
What's the advantage, you ask?  Well:

	1. It keeps runtime code small, because only one floating-point
	   add routine ($dadd) is required; a single-precision $fadd is
	   not necessary.
	2. It agrees with K&R's definition of 'the usual arithmetic
	   conversions' for arithmetic operations.

True, this is less efficient than calling a single-precision add
($fadd) and sidestepping the conversions to and from double precision.
But if your code uses ANY float-double conversions, your executable
will already require the $ftod and $dtof calls, so you are incurring
that overhead anyway; why incur more overhead with $fadd?

Of course, if you are compiling code for a machine with built-in
floating-point instructions (or if your operating system supplies
floating-point routines), or if you are not concerned with executable
code size, then it would be advantageous to make the compiler smart
enough to choose the precision as it pleases.

My suggestion is to provide ANSI C with the standard pragmas

	#pragma float	/* force single precision */
	#pragma double	/* force double precision */

that the programmer could insert above an arithmetic statement to
specify the precision to use.  These pragmas would be ignored by
compilers that choose not to give the programmer a choice.

David R. Tribble, Univ. Texas @ Arlington
gwyn@brl-tgr.ARPA (Doug Gwyn <gwyn>) (01/08/86)
> My suggestion is to provide ANSI C with the standard pragmas
>	#pragma float	/* force single precision */
>	#pragma double	/* force double precision */
> that the programmer could insert above the arithmetic statement
> to specify the precision to use.  These pragmas would be ignored
> on compilers that choose not to give the programmer a choice.

I think the use of #pragma for this is a good idea (I would use
"use-single-precision" or "no-single-precision" pragmas), but the
content of a #pragma should not be in the standard; it is intended as
a portable way to invoke processing that is beyond the scope of the
standard.  Pragmas are implementation-specific.
ken@turtlevax.UUCP (Ken Turkowski) (01/11/86)
In article <1333@brl-tgr.ARPA> tribble_acn%uta.csnet@csnet-relay.arpa
(David Tribble) writes:
>For the last few weeks there has been on-going discussion of
>the merits and drawbacks (demerits?) of the precision the compiler
>sees fit to use for statements like-
>	float a, b;
>	a = a + 1.0;	/* 1 */
>	a = a + b;	/* 2 */
>One argument that should be mentioned is that some compiler writers
>choose the criteria-

>	1. It keeps runtime code small, because only one floating
>	   point routine ($dadd) is required; a single-precision
>	   $fadd is not necessary.

Routine?  What's wrong with using the single-word machine
instructions?  There's no reason not to use floating-point hardware,
since it is now so cheap and widely available.

>	2. It agrees with K&R's definition of 'the usual arithmetic
>	   conversions' for doing arithmetic operations.

Just because it agrees with K&R doesn't make it right.  Nearly every
other programming language states that if you want computations to be
done at higher precision than any of the operands, you cast one of
them to the higher precision.

This should be done for chars and shorts as well as floats and longs.
(You could cast a long to double for more precision, but not to
float.)

What's that?  You say it breaks existing code?  The easy solution to
that is:

	#define char	long
	#define short	long
	#define int	long
	#define float	double
--
Ken Turkowski @ CIMLINC, Menlo Park, CA
UUCP: {amd,decwrl,hplabs,seismo,spar}!turtlevax!ken
ARPA: turtlevax!ken@DECWRL.DEC.COM
franka@mmintl.UUCP (Frank Adams) (01/14/86)
In article <1020@turtlevax.UUCP> ken@turtlevax.UUCP (Ken Turkowski)
writes:
>Just because it agrees with K&R doesn't make it right.  Nearly every
>other programming language states that if you want computations to be
>done at higher precision than any of the operands, then you cast any
>one of them to the higher precision.
>
>This should be done for chars and shorts as well as floats and longs.
>(could cast a long to double for more precision, but not to float)
>
>What's that?  You say it breaks existing code?  The easy solution to
>that is:
>
>#define char long
>#define short long
>#define int long
>#define float double

This will break a lot more existing code than eliminating the
automatic extension of floats, chars, and shorts will.

Frank Adams				ihpn4!philabs!pwa-b!mmintl!franka
Multimate International		52 Oakland Ave North	E. Hartford, CT 06108