levy@ttrdc.UUCP (Daniel R. Levy) (12/19/85)
In article <707@brl-tgr.ARPA>, gwyn@BRL.ARPA (VLD/VMB) writes:
>Add Cottrell to the turkey list if you haven't long ago.
>It doesn't matter whether HIS particular compiler has the ability
>to avoid unnecessary conversions in the code it generates or
>not.  The C implementor may at his discretion do so, so long
>as the effect of the code is as prescribed by the language
>spec.  In the case of char<->int, many compilers do indeed
>take care to avoid unnecessary conversions.  They could do so
>for float<->double, if the implementor would take the trouble
>(perhaps the Tartan Labs compiler does; I don't know).
>When we're discussing general properties of the C language,
>appeal to a particular implementation proves nothing.

Sighhhhhh....  I am beginning to think this is more a battle over English
semantics than anything else.  The statement (in the single-float example)
that 'it is done in single' could be more clearly rephrased as 'the result is
the same as if the machine implemented it in single precision.'  As I myself
showed, and as Mr. Cottrell noted independently, it often is NOT implemented
in single precision.  Depending on the meaning assigned to the word 'done' in
the phrase 'the floating point operation is done in single', the phrase is
either true or false.  If 'done' means what the hardware actually does, then
one counterexample is enough to disprove the statement.  But if 'done' refers
to the net effect on the variables being manipulated, without regard to the
intermediate steps (like the promotion to double and the subsequent demotion
back to single), then the statement is trivially true.  Those holding the
former view are implementation oriented, while those holding the latter take
a more abstract view of computation.  Both views are valid.  Maybe Koenig
could have better said 'it might, but need not, be IMPLEMENTED in double' and
thus avoided the semantics of 'done.'
It would also be fair to note that a double operation on a single-precision
variable does not make sense in literal terms.

I can't help but note that it makes a lot of sense to take the
'implementation' point of view when one is worried about the time required by
a computation and/or the space the code occupies (mostly the former).  If
a = a + 1.0 is implemented in double for float a on a given machine/compiler,
and the program's execution is dominated by operations of this kind, and
execution time is of serious interest, then I jolly well will say 'gag me
with a spoon' to this behavior of my compiler.

Lest there be flames, let me add this--I cannot say 'gag me with a spoon' to
C, however, for doing this--IMPORTANT DIFFERENCE!!  C does not forbid this
operation being implemented in single, which makes sense anyhow: doing it in
double and then throwing away the excess precision to fit the result into the
destination float variable gains nothing in accuracy and loses much
efficiency over an implementation in single.  But it is unfortunate that a
lot of C compilers take the easy way out and do it all in double even when
single is readily available on the machine.

'Nough blithering for now.
--
Disclaimer: The views contained herein are my own and are not at all those of
my employer or the administrator of any computer upon which I may hack.
	dan levy | yvel nad
	an engihacker @ at&t computer systems division, skokie, illinois
	Path: ..!ihnp4!ttrdc!levy
dik@zuring.UUCP (Dik T. Winter) (12/21/85)
In article <661@ttrdc.UUCP> levy@ttrdc.UUCP (Daniel R. Levy) writes
(in a diatribe about English semantics of the word 'done'):
>I cannot say 'gag me with a spoon' to C, however, for doing this--IMPORTANT
>DIFFERENCE!!  C does not forbid this operation being implemented in single
>which makes sense anyhow--doing it in double then throwing away the excess
>precision in order to fit the result into the destination float variable gains
>nothing in accuracy and loses much efficiency over an implementation in
>single.

Gag me with a spoon!  On one of the machines I regularly work with (to be
precise, a CDC Cyber 170 series) it makes quite a difference whether
a = a + 1.0 is done in single or double precision.  If a is only slightly
more than -1.0, single- and double-precision operations will give different
results.  (Double will give correct results; single will not.)  So, do I want
this to be done in double?  NO.  It takes about 6 (7, 8?) times as long in
double!  (Aaaah, what do you care about 625 nanoseconds?)
--
dik t. winter, cwi, amsterdam, nederland
UUCP: {seismo,decvax,philabs,okstate,garfield}!mcvax!dik