[comp.lang.c] Order of evaluation, machine floating point

rbbb@rice.edu (David Chase) (03/06/87)

> to C compilers, perhaps they should give examples of mechanisms which
> allow optimization to be prevented in other languages.  I would hope
> that such mechanisms are portable.

In fortran, parentheses must be honored.  With machine floating point
numbers, addition is commutative, but NOT associative.  C (as presented to
me by its compilers on Pyramids, Suns, and Vaxes) is NOT terribly strong
on floating point arithmetic, and if the standard permits arbitrary
rearrangement of floating point arithmetic using algebraic non-identities,
then this will continue to be the case.

To thwart flames, let me elaborate "not terribly strong"--first, for
unknown reasons all floating arithmetic is supposed to occur in double
precision.  I realize that K&R said it, but I don't know why.  This slows
down FP performance when you really don't need all the precision.  Second,
several compilers have exhibited bugs compiling "a += b" when a and b are
floats.  For some reason, these compilers decide to perform THAT addition
in single precision.  Someone had to work hard to add that bug to the
compiler; if "a += b" is the same as "a = a + b", and all floating
additions are done in double, then why should "single-precision-floating-
addition" even be in the code generator's vocabulary?

Again, let me stress that in some cases you DO NOT WANT double precision
arithmetic.  In some cases it is not worth the extra time and you really
don't care that it gives you a better answer.

(To digress...) Given all this, I found it rather amusing to hear that
someone had written a high quality square-root routine in C.  Voices from
my past say:  "On which machine?  Using which compiler? How fast does it
run?"  Writing that stuff in C makes unwary people think that the code is
actually portable; it is not.  Someday, everyone will speak IEEE and
things will be a little bit safer, but they don't yet.  Sometime long
after everyone speaks IEEE I might begin to trust the compilers (for
instance, do you know for sure that the compiler correctly translates your
floating point constants to their best internal representation?  How about
the I/O library?  Given the record of C compilers so far, I'll bet there's
at least one case where it doesn't yield the best answer.)

David

guy%gorodish@Sun.COM (Guy Harris) (03/07/87)

> C (as presented to me by its compilers on Pyramids, Suns, and Vaxes) is
> NOT terribly strong on floating point arithmetic,

To be fair, the primary intent of C was not to support floating-point
computation; that's probably why many C compilers don't devote a lot
of effort to doing a good job with it.

> first, for unknown reasons all floating arithmetic is supposed to occur
> in double precision.  I realize that K&R said it, but I don't know why.

I don't know either.  I think one reason why the PDP-11 C compiler
did so was that it took some extra work to use the instructions to
switch between single-precision and double-precision mode on the
PDP-11's floating point coprocessor (and to make sure the mode was
properly maintained across procedure calls, etc.) and they didn't use
floating point enough to feel it was worth it.  Some other arguments
have been given (which I don't remember the details of offhand), but
given that a number of people knowledgeable about floating-point
computation have complained about this aspect of C, I don't know that
those arguments were necessarily valid.

Even given the above, it's not clear to me why K&R *mandated* that
floating-point computations be done in double-precision; it may have
wanted to *permit* that to be done, but I don't see why it should
have *required* it.  The ANSI C committee has fixed this; it is no
longer required to perform all floating-point computations in
double-precision, although it is permitted.  If function prototypes
are used, float-to-double conversions are not performed either unless
the formal argument is declared as "double".  I see no discussion
that indicates the behavior of a function declared to return "float",
but I assume that an implementation is permitted not to treat that as
"really" returning "double".

> Second, several compilers have exhibited bugs compiling "a += b" when a
> and b are floats.  For some reason, these compilers decide to perform
> THAT addition in single precision.  Someone had to work hard to add that
> bug to the compiler; if "a += b" is the same as "a = a + b",

Inside PCC (upon which I believe at least some of the compilers in
question are based), "a += b" *isn't* the same as "a = a + b"; it
knows that there is an "add to" operator distinct from the "add"
operator.

m5d@bobkat.UUCP (03/07/87)

In article <4775@brl-adm.ARPA> rbbb@rice.edu (David Chase) writes:
>
>In fortran, parentheses must be honored.  ...

Is this really true?  If it is, I didn't know it.




-- 
Mike McNally, mercifully employed at Digital Lynx ---
    Where Plano Road the Mighty Flood of Forest Lane doth meet,
    And Garland fair, whose perfumed air flows soft about my feet...
uucp: {texsun,killer,infotel}!pollux!bobkat!m5d (214) 238-7474

firth@sei.cmu.edu (Robert Firth) (03/09/87)

In article <693@bobkat.UUCP> m5d@bobkat.UUCP (Mike McNally (dlsh)) writes:
>In article <4775@brl-adm.ARPA> rbbb@rice.edu (David Chase) writes:
>>
>>In fortran, parentheses must be honored.  ...
>
>Is this really true?  I didn't know it if it is.

Yes it is.  See ANSI X3.9-1978 Sect 6.6.3 "Integrity of Parentheses"

henry@utzoo.UUCP (Henry Spencer) (03/17/87)

> ... for unknown reasons all floating arithmetic is supposed to occur in
> double precision.  I realize that K&R said it, but I don't know why...

Dennis Ritchie answered this on the net a long time ago:

> From decvax!harpo!npoiv!alice!research!dmr Wed Sep  8 23:22:25 1982
> Subject: float/double in C
> Newsgroups: net.unix-wizards
> 
> Several people have correctly quoted the manual as calling for evaluation
> of expressions on (single-precision) floats in double precision.  The
> rule was made for 3 reasons.
> 
> 1) To make certain that floating-point function arguments and return values
>    were always double (thus avoiding multiple library routines and constant
>    need for casts.)
> 
> 2) Because the PDP-11 makes it horribly difficult to mix modes otherwise
>    (yes, I admit it).
> 
> 3) Because it is numerically more desirable to evaluate single-precision
>    expressions in double precision, then truncate the result.
> 
> These are in order of importance...

Religious use of X3J11 function prototypes will pretty much solve Dennis's
#1 problem, #2 is increasingly a dead issue, and #3 is usually true for the
naive but makes the pros very unhappy.
-- 
"We must choose: the stars or	Henry Spencer @ U of Toronto Zoology
the dust.  Which shall it be?"	{allegra,ihnp4,decvax,pyramid}!utzoo!henry

howard@cpocd2.UUCP (03/23/87)

In article <7779@utzoo.UUCP> henry@utzoo.UUCP (Henry Spencer) writes:
>> ... for unknown reasons all floating arithmetic is supposed to occur in
>> double precision.  I realize that K&R said it, but I don't know why...
>
>Dennis Ritchie answered this on the net a long time ago:
>
>> 3) Because it is numerically more desirable to evaluate single-precision
>>    expressions in double precision, then truncate the result.
>
>#3 is usually true for the naive but makes the pros very unhappy.

Unless you happen to be working on an IBM 360/370 family machine.  The
"single precision" arithmetic is so bad that it has often been called
"half precision".  The C insistance on double precision makes a lot of
sense in such an environment.
-- 

	Howard A. Landman
	...!intelca!mipos3!cpocd2!howard

mwm@eris.UUCP (03/24/87)

In article <521@cpocd2.UUCP> howard@cpocd2.UUCP (Howard A. Landman) writes:
>>	#3 may be true for the naive, but makes the pros unhappy.
>
>Unless you happen to be working on an IBM 360/370 family machine.  The
>"single precision" arithmetic is so bad that it has often been called
>"half precision".  The C insistance on double precision makes a lot of
>sense in such an environment.

Even there, the comment about #3 is still true. I know how bad the
360/370 single precision floating point is. I also know how
_expensive_ the double precision stuff is - it takes about four times
as long to do things.

You can have the best of both worlds by having important internal
routines do things in double precision, but not having to pass double
precision values to them. Of course, C's forcing things to double
breaks that.

K&R's forcing things to double precision, as well as not having a
reasonable way (I know, use temporary variables; that's not
reasonable!) to force order of evaluation made it almost completely
unsuitable for doing number crunching. ANSI has fixed both of those
(or made it possible for compiler writers to fix them).

Now, all I need is an interface to all those FORTRAN libraries out
there....

	<mike
--
Here's a song about absolutely nothing.			Mike Meyer        
It's not about me, not about anyone else,		ucbvax!mwm        
Not about love, not about being young.			mwm@berkeley.edu  
Not about anything else, either.			mwm@ucbjade.BITNET

edw@ius2.cs.cmu.edu.UUCP (03/24/87)

     It seems to me that if doing floating point computations in double
precision is SO expensive for your applications, then maybe the
problem isn't with your software (the C compiler) but with your hardware,
or lack of it.  If your application really uses a lot of floating
point calculation, maybe you should get either an FPA board or a
68881 chip to speed up your floating point calculations.


				
-- 
					Eddie Wyatt

Those who know what's best for us-
Must rise and save us from ourselves
Quick to judge ... Quick to anger ... Slow to understand...
Ignorance and prejudice and fear [all] Walk hand in hand.
					- RUSH 

kent@xanth.UUCP (03/25/87)

In article <2910@jade.BERKELEY.EDU> mwm@eris.BERKELEY.EDU (Mike (My watch has windows) Meyer) writes:
>K&R's forcing things to double precision, as well as not having a
>reasonable way (I know, use temporary variables; that's not
>reasonable!) to force order of evaluation made it almost completely
>unsuitable for doing number crunching. ANSI has fixed both of those
>(or made it possible for compiler writers to fix them).

Mike,
	Not a flame at you, but C was an ugly language to begin with,
	because so much of it doesn't parse the way it reads, due to
	_bizarre_ precedence results.  Adding the unary plus, which,
	to anyone taught mathematics, is a no-op, and making that the
	way to control the order of evaluation, just makes an ugly
	language uglier.  Code that needs to be _that_ fast should
	be done in assembly.  ANSI should have mandated respecting
	parentheses for evaluation order, so that code executes the
	way it reads.  This would break _no_ code, and would let
	programmers' intuition be of some use in writing code in C.
	I just can't be very sympathetic, in a time of ever increasing
	computer speeds and memory, to optimizing the code at the
	expense of the coder.  The latter is the big expense item
	these days, and has been for over ten years now.
Kent.

nmm@cl-jenny.UUCP (03/27/87)

In article <2910@jade.BERKELEY.EDU>, mwm@eris.BERKELEY.EDU (Mike (My watch has windows) Meyer) writes:
> In article <521@cpocd2.UUCP> howard@cpocd2.UUCP (Howard A. Landman) writes:
> >>	#3 may be true for the naive, but makes the pros unhappy.
> >
> >Unless you happen to be working on an IBM 360/370 family machine.  The
> >"single precision" arithmetic is so bad that it has often been called
> >"half precision".  The C insistance on double precision makes a lot of
> >sense in such an environment.
> 
> Even there, the comment about #3 is still true. I know how bad the
> 360/370 single precision floating point is. I also know how
> _expensive_ the double precision stuff is - it takes about four times
> as long to do things.

This has not been true for at least 12 years on the 'mainstream' range
of IBM 360/370 machines.  I have no experience of the 4300 or 9370 series,
or the very small 360/370 machines, but I very much doubt that it is
true even there.

Most of the bigger machines have 64-bit (i.e. 56-bit fraction) hardware, and use
microcode to strip this down for 32-bit arithmetic.  In fact, on the
370/165 32-bit floating point addition was actually 1 cycle SLOWER than
the 64-bit equivalent.

Howard A. Landman is right.  The only real disadvantage of using 64-bit
arithmetic throughout is the extra store needed and, if you are pushing
IBM's store limits, you are going to need the extra precision to get any
meaningful results.

Nick Maclaren
University of Cambridge Computer Laboratory

atbowler@watmath.UUCP (03/31/87)

In article <521@cpocd2.UUCP> howard@cpocd2.UUCP (Howard A. Landman) writes:
>Unless you happen to be working on an IBM 360/370 family machine.  The
>"single precision" arithmetic is so bad that it has often been called
>"half precision".  The C insistance on double precision makes a lot of
>sense in such an environment.
>
So, for that machine, let the compiler use double precision intermediates,
or write your code using double precision.  This would not violate the
standard.  The fact that /370's have a questionable floating point
format is well known, but it does not seem a good reason to continue
requiring that other machines with better floating point representations
produce sub-optimal code.

henry@utzoo.UUCP (Henry Spencer) (04/04/87)

> 	... ANSI should have mandated respecting
> 	parentheses for evaluation order, so that code executes the
> 	way it reads.  This would break _no_ code, and would let
> 	programmers' intuition be of some use in writing code in C.

It would not break any code; it just would make an awful lot of code, most
of it completely insensitive to evaluation order, noticeably slower.  As
for programmer's intuition, I assure you that it takes very little effort
to get used to thinking of evaluation order as unknown.

>	I just can't be very sympathetic, in a time of ever increasing
>	computer speeds and memory, to optimizing the code at the
>	expense of the coder.  The latter is the big expense item
>	these days, and has been for over ten years now.

In a time of ever-increasing demands on computer systems by ever more
impatient users, I just can't be very sympathetic to optimizing the coder's
time at the expense of the users' time.  The latter group supplies the
motive for all this software, remember.

If you want a coder-based argument, consider that the coder is trying to
work on a machine (or via a disk server) that will run faster with code
that is optimized better.
-- 
"We must choose: the stars or	Henry Spencer @ U of Toronto Zoology
the dust.  Which shall it be?"	{allegra,ihnp4,decvax,pyramid}!utzoo!henry