[fa.info-vax] VMS C: floating point precision

info-vax@ucbvax.ARPA (05/02/85)

From: Jerry Wolf <WOLF@BBNG>

I understand that it's usual for C compilers in general (and VMS C
in particular) to convert all operands for floating point expressions
to double precision, even if there are no double-precision
operands on either side of the assignment; that is, any floating
point computation is carried out in double precision, and the result
converted back to single precision if necessary.
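For instance, in a fragment like this (just a sketch, with made-up names):

    float a, b, c;

    c = a * b;      /* a and b are widened to double, multiplied in
                       double, and the product is narrowed back to
                       float for the store into c */

my understanding is that the compiler must act as if I had written

    c = (float)((double)a * (double)b);

even though nothing in sight is declared double.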

Is there any way to prevent this?  If I'm happy with single
precision (32-bit) floating point operands and operations,
why can't I just have it compile "ordinary" floating point
operations without all that extra work (and inefficiency)?
Will version 2.0 of C "fix" this?
Cheers,
  Jerry

info-vax@ucbvax.ARPA (05/02/85)

From: LEICHTER <Leichter@YALE.ARPA>

    I understand that it's usual for C compilers in general (and VMS C
    in particular) to convert all operands for floating point expressions
    to double precision, even if there are no double-precision
    operands on either side of the assignment; that is, any floating
    point computation is carried out in double precision, and the result
    converted back to single precision if necessary.
    
    Is there any way to prevent this?  If I'm happy with single
    precision (32-bit) floating point operands and operations,
    why can't I just have it compile "ordinary" floating point
    operations without all that extra work (and inefficiency)?
    Will version 2.0 of C "fix" this?
    Cheers,
      Jerry

Check Kernighan and Ritchie, the only real definition of the language that's
existed up to this point:  Floating point operations are DEFINED to work this
way.  A C compiler that did NOT do this might be convenient for some
applications, but it would be an incorrect implementation of the language.
(See Section 6.6, "Arithmetic Conversions", in Appendix A:  "First, any
operands of type char or short are converted to int, and any of type float are
converted to double".)
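In practical terms, even a loop that never mentions double at all - say this
made-up summation -

    float v[100];
    float sum;
    int i;

    sum = 0.0;
    for (i = 0; i < 100; i++)
        sum += v[i];    /* sum and v[i] converted to double, added in
                           double, and the result converted back to
                           float for the assignment */

has to be compiled with the widening and narrowing conversions wrapped around
every add; that's exactly the overhead you're asking about, and a compiler
that omits it isn't compiling C as the book defines it.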

The proposed ANSI C standard - at least when I last saw a draft a while back -
said the same thing, although it made explicit something some compilers may
have assumed before, without real support from K & R:  that only the value
computed matters, and that the compiler is free to use single precision
arithmetic if it is certain the same result would be obtained.  Note that
it's usually impossible to be really sure of this, even when single precision
numbers are combined to form an eventual single precision result; most
compilers will probably read the ANSI standard fairly liberally, and just
assume that "all single precision operands means doing it in single precision
is good enough".
							-- Jerry
-------