[comp.lang.c] fabs

jjr@ut-ngp.UUCP (Jeff Rodriguez) (01/07/87)

What's the difference between 
     double x, y;
     . . .
     y = fabs(x)
and
     #define abs(X) ((X) < 0 ? -(X) : (X))
     double x, y;
     . . . 
     y = abs(x);

I.e., why isn't fabs() implemented as a macro?

			Jeff Rodriguez
			jjr@ngp.utexas.edu

rgenter@j.bbn.com (Rick Genter) (01/07/87)

On machines which do not support floating-point instructions,
libraries which implement the DEC G-format floating-point representation
(I think that's right) can implement fabs() much more efficiently as a
subroutine than as a macro.  If you implement it as a macro you'll end
up generating something like:

	push	address of x
	push	address of constant 0
	call	fp-compare
	jumpge	L1
	push	address of x
	push	address of temp
	call	fp-negate-copy
	jump	L2

L1:
	push	address of x
	push	address of temp
	call	fp-copy

L2:

whereas the routine fabs() could be:

fabs:
	bitclr	x,#BIT_HIGH
	return

Obviously optimizations can be made depending on the intelligence of
your compiler, whether registers (or register-pairs) can hold floating
point values, etc.  Still, it would be hard to beat a single bit-clear
(or AND) operation.

					- Rick
--------
Rick Genter 				BBN Laboratories Inc.
(617) 497-3848				10 Moulton St.  6/512
rgenter@bbn.COM  (Internet new)		Cambridge, MA   02238
rgenter@bbnj.ARPA (Internet old)	seismo!bbn.com!rgenter (UUCP)

firth@sei.cmu.edu (Robert Firth) (01/07/87)

In article <2197@brl-adm.ARPA> rgenter@j.bbn.com (Rick Genter) writes:
>On machines which do not support floating-point instructions,
>libraries which implement the DEC G-format floating-point representation
>(I think that's right) can implement fabs() much more efficiently as a
>subroutine than as a macro.
...
>whereas the routine fabs() could be:
>
>fabs:
>	bitclr	x,#BIT_HIGH
>	return
>
>Obviously optimizations can be made depending on the intelligence of
>your compiler, whether registers (or register-pairs) can hold floating
>point values, etc.  Still, it would be hard to beat a single bit-clear
>(or AND) operation.

If you really are implementing the VAX G data type, you must first
check whether X is a reserved operand.  That's another couple of
instructions.

By the way, you should clear not the high bit of X but the high bit
of the low-order longword.

gwyn@brl-smoke.ARPA (Doug Gwyn ) (01/08/87)

In article <4477@ut-ngp.UUCP> jjr@ngp.UUCP (Jeff Rodriguez) writes:
>I.e., why isn't fabs() implemented as a macro?

Because if evaluation of its parameter produces side effects,
the macro won't work right.

chris@mimsy.UUCP (Chris Torek) (01/08/87)

In article <4477@ut-ngp.UUCP> jjr@ut-ngp.UUCP (Jeff Rodriguez) writes:
>I.e., why isn't fabs() implemented as a macro?

Consider, e.g.,

	y = fabs(*fp++);

or

	y = fabs(a / b - c + d + e - f * g);

(where the latter may take much longer to compute twice than once
plus a subroutine call).
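
With the abs()-style macro from the original question, the first of these
expands to

	y = ((*fp++) < 0 ? -(*fp++) : (*fp++));

so fp is incremented twice, and the value tested against zero is not the
value that gets returned.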
-- 
In-Real-Life: Chris Torek, Univ of MD Comp Sci Dept (+1 301 454 7690)
UUCP:	seismo!mimsy!chris	ARPA/CSNet:	chris@mimsy.umd.edu

pinkas@mipos3.UUCP (Israel Pinkas) (01/08/87)

In article <4477@ut-ngp.UUCP> jjr@ngp.UUCP (Jeff Rodriguez) writes:
>What's the difference between 
>     double x, y;
>     y = fabs(x)
>and
>     #define abs(X) ((X) < 0 ? -(X) : (X))
>     double x, y;
>     y = abs(x);
>I.e., why isn't fabs() implemented as a macro?

K & R point out (in their discussion of min and max macros) that creating a
macro like this would cause X to be evaluated twice.  While this might be OK
in most cases, consider if you had a function called read_float, which read
a float in from stdin.  Then the call

y = fabs(read_float())

would call read_float() twice if it was implemented as a macro, but only
once if it was implemented as a function.  Remembering that read_float()
reads a new float every time it is called, the macro implementation would
use the sign of the first number to determine whether to change the sign of
the second number.
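
That is, with the abs()-style macro from the original posting, the call
would expand to

	y = ((read_float()) < 0 ? -(read_float()) : (read_float()));

so the value compared against zero and the value actually returned come
from two different reads.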

-Israel
-- 
----------------------------------------------------------------------
UUCP:	{amdcad,decwrl,hplabs,oliveb,pur-ee,qantel}!intelca!mipos3!pinkas
ARPA:	pinkas%mipos3.intel.com@relay.cs.net
CSNET:	pinkas%mipos3.intel.com

ark@alice.UUCP (01/09/87)

Jeff Rodriguez asks why one should bother with fabs(x)
instead of making it a macro.  One answer is that if
fabs were a macro, then

	fabs(x*x-y*y)

would evaluate x*x-y*y twice on most C implementations.

karl@haddock.UUCP (Karl Heuer) (01/09/87)

In article <4477@ut-ngp.UUCP> jjr@ngp.UUCP (Jeff Rodriguez) writes:
>Why isn't fabs() implemented as a macro [ (X) < 0 ? -(X) : (X) ]?

I think it's primarily because of things like "y = fabs(sin(x))", which would
be inefficient, and "y = fabs(*px++)", which would be wrong.

Generally, the standard library functions are not implemented as macros unless
they evaluate each argument exactly once.  (There are exceptions, though.)

Karl W. Z. Heuer (ima!haddock!karl or karl@haddock.isc.com), The Walking Lint

ark@alice.UUCP (01/11/87)

If you were doing it in C++, you could say:

	inline double fabs (double x)
		{ return x < 0? -x: x; }

and it would do the right thing, efficiently.  Moreover, you could say:

	overload abs;

	inline double abs (double x)
		{ return x < 0? -x: x; }
	inline long abs (long x)
		{ return x < 0? -x: x; }
	inline int abs (int x)
		{ return x < 0? -x: x; }
	inline float abs (float x)
		{ return x < 0? -x: x; }

and not have to remember the type of your arguments.  Generalizing
the second example above to an unbounded family of types is left
as an exercise for the reader.

decot@hpisod2.UUCP (01/29/87)

> In article <4477@ut-ngp.UUCP> jjr@ngp.UUCP (Jeff Rodriguez) writes:
> >Why isn't fabs() implemented as a macro [ (X) < 0 ? -(X) : (X) ]?
> 
> I think it's primarily because of things like "y = fabs(sin(x))", which would
> be inefficient, and "y = fabs(*px++)", which would be wrong.
> 
> Generally, the standard library functions are not implemented as macros unless
> they evaluate each argument exactly once.  (There are exceptions, though.)
> 
> Karl W. Z. Heuer (ima!haddock!karl or karl@haddock.isc.com), The Walking Lint

You could implement fabs() as a macro as follows:

    #define fabs(X)     ((_fabs = (X)), (_fabs < 0? -_fabs : _fabs))

if _fabs were declared as a float in the math library.

Of course, then you would have to include <math.h> to use fabs(), but you
probably already do anyway.

Dave

steele@unc.UUCP (01/31/87)

In article <2550005@hpisod2.HP> decot@hpisod2.HP (Dave Decot) writes:
>
>You could implement fabs() as a macro as follows:
>
>    #define fabs(X)     ((_fabs = (X)), (_fabs < 0? -_fabs : _fabs))
>
>if _fabs were declared as a float in the math library.

Beware
	fabs(a,fabs(b,c)),
as well as
	fabs(a,func(b,c))
where func is a macro which invokes fabs.
-- 

Oliver Steele----------------------------------steele@unc
"When a tree dies,	  ...!{decvax,ihnp4}!mcnc!unc!steele
plant another in its place."	steele%unc@csnet-relay.csnet

tps@sdchem.UUCP (02/02/87)

In article <2550005@hpisod2.HP> decot@hpisod2.HP (Dave Decot) writes:
>In article <4477@ut-ngp.UUCP> jjr@ngp.UUCP (Jeff Rodriguez) writes:
>>Why isn't fabs() implemented as a macro [ (X) < 0 ? -(X) : (X) ]?
>
>You could implement fabs() as a macro as follows:
>
>    #define fabs(X)     ((_fabs = (X)), (_fabs < 0? -_fabs : _fabs))
>
>if _fabs were declared as a float in the math library.
>Dave

This might not work for

	fabs( x ) + fabs( y )

because _fabs gets assigned twice in one expression and the two
comma expressions which result might get interleaved.  There was
a major discussion in this group recently on whether the sub-expressions
a,b,c,d in

	(a,b) + (c,d)

could be evaluated in the order

	a c b d

Since the current de facto standard (K&R) is ambiguous on this point, I
think your method is not safe.
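
To make the danger concrete,

	fabs( x ) + fabs( y )

expands to

	((_fabs = (x)), (_fabs < 0? -_fabs : _fabs)) +
	((_fabs = (y)), (_fabs < 0? -_fabs : _fabs))

and if a compiler chose to do both assignments before either conditional,
both halves would end up using the value of y.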

The real solution for this is to have an optimizer which can place
function calls inline.  (There are compilers which can do this.)
This even runs faster than your method
because it eliminates the store to _fabs.

|| Tom Stockfisch, UCSD Chemistry	tps%chem@sdcsvax.UCSD

braner@batcomputer.UUCP (02/03/87)

[]

Somebody already said this a LONG time ago:  Any macro definition of
fabs() requires floating-point arithmetic (e.g. comparison with 0).
That is A LOT slower than a dedicated fabs() function, written in
assembly language of course, that simply clears the sign bit.

In this context I'd like to complain again about the fact that, for
historical reasons, the floating-point libraries provided with C compilers
are generally far inferior to those provided with, say, FORTRAN compilers.
Not very long ago most C compilers came without FP support at all, presumably
assuming that C was used for system programming only.  Even now most
microcomputer implementations (on the 68000 at least) seem to have many bugs
and are very slow.  Compare, say, Megamax C with Absoft FORTRAN (both on
the 68000).

Since many people are now turning to C for scientific programming, I hope
that compiler vendors are taking a second look at their ad-hoc second-thought
FP slap-ons.

THE COMPLETE FP LIBRARY MUST BE WRITTEN IN HAND-OPTIMIZED ASSEMBLER LANGUAGE!!!

- Moshe Braner

am@cl.cam.ac.uk (Alan Mycroft) (02/03/87)

In article <2550005@hpisod2.HP> decot@hpisod2.HP (Dave Decot) writes:
>> In article <4477@ut-ngp.UUCP> jjr@ngp.UUCP (Jeff Rodriguez) writes:
>> >Why isn't fabs() implemented as a macro [ (X) < 0 ? -(X) : (X) ]?
>> 
>You could implement fabs() as a macro as follows:
>
>    #define fabs(X)     ((_fabs = (X)), (_fabs < 0? -_fabs : _fabs))
>
>if _fabs were declared as a float in the math library.
>
Of course you could (modulo volatile), but what this example (and many
others like it) REALLY SHOWS is that C suffers from the lack of the ability
to make declarations within expressions.  The pre-processor merely
amplifies this deficiency --  as we would all like to be able to write
macros which evaluate each argument once (wouldn't we?).

Many languages (including BCPL from which C derives) have this ability.
E.g. (ML)     <exp> ::= let <decl> in <exp> | ...
     (BCPL)   <exp> ::= valof <cmd> | ...
              <cmd> ::= <block> | resultis <exp> | ...

The absence of this ability (in suitably C-like syntax) to create explicit
temporaries in C expressions explains why getc() and putc() are allowed
to evaluate their parameters more than once.
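
For instance, if C had a value-yielding block in the BCPL style, one could
imagine writing (purely hypothetical syntax, not accepted by any existing
compiler):

	/* hypothetical -- not real C */
	#define fabs(X)  valof { double _t = (X); resultis _t < 0 ? -_t : _t; }

which would evaluate X exactly once, with no visible temporary.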

Question: do not the ANSI committee find this uncomfortable too?
Is the absence of suitable concrete syntax the justification?

BTW, I have been involved in writing a C compiler in which this facility
was abstract syntax (with no concrete correspondent) and which proved very
useful for (almost-)source level sub-expression elimination and de-sugaring of
bitfields etc.

jtr485@umich.UUCP (02/05/87)

In article <628@sdchema.sdchem.UUCP>, tps@sdchem.UUCP (Tom Stockfisch) writes:
> >You could implement fabs() as a macro as follows:
> >    #define fabs(X)     ((_fabs = (X)), (_fabs < 0? -_fabs : _fabs))
> >if _fabs were declared as a float in the math library.
> This might not work for
> 	fabs( x ) + fabs( y )
> because _fabs gets assigned twice in one expression and the two
> comma expressions which result might get interleaved.  There was
> a major discussion in this group recently on whether the sub-expressions
> a,b,c,d in
> 	(a,b) + (c,d)
> could be evaluated in the order
> 	a c b d
> Since the current de facto standard (K&R) is ambiguous on this point, I
> think your method is not safe.
> || Tom Stockfisch, UCSD Chemistry	tps%chem@sdcsvax.UCSD

This fixes that objection:

#define fabs(X)     (((_fabs = (X)) < 0? -_fabs : _fabs))

Again it requires the 'hidden' definition of _fabs but it does not have
order of evaluation problems.

--j.a.tainter

wcs@ho95e.UUCP (02/06/87)

In article <2181@batcomputer.tn.cornell.edu> braner@batcomputer.UUCP (braner) writes:
>Somebody already said this a LONG time ago:  Any macro definition of
>fabs() requires floating-point arithmetic (e.g. comparison with 0).
>That is A LOT slower than a dedicated fabs() function, written in
>assembly language of course, that simply clears the sign bit.

	#define	fabs(x) ( (x) & 0x7FFF )

is probably a *lot* faster than your assembly language function, since
it doesn't need a function call.  You'll have to #ifdef it to get the right bit
pattern for your machine, and worry about details for double and double-extended
versions, but for no-more-bits-than-a-long floating point  it's ok.

>[various complaints about most C compilers' FP code quality]
Yeah, a lot of it's not exciting.  The standard complaint is that everything is
really done in double precision, when you can often get by with single.  I
suspect the 80*8* world is heading back in the PDP-11 direction though, since
the 8087 gives you fairly wide arithmetic.

>THE COMPLETE FP LIBRARY MUST BE WRITTEN IN HAND-OPTIMIZED ASSEMBLER LANGUAGE!!!
Only if your compilers are inadequate for your machines, though some
hand-tuning helps.
-- 
# Bill Stewart, AT&T Bell Labs 2G-202, Holmdel NJ 1-201-949-0705 ihnp4!ho95c!wcs

karl@haddock.UUCP (02/09/87)

In article <1315@ho95e.ATT.COM> wcs@ho95e.UUCP (Bill Stewart) writes:
>	#define	fabs(x) ( (x) & 0x7FFF )
>is probably a *lot* faster than [braner's non-floating-point] assembly
>language function, since it doesn't need a function call.

Dyadic "&" only applies to integral types.  If you hand it a floating type,
you should either get an error, or an implied cast to an integral type.

A float-to-int cast is inappropriate since (altogether now) "unions take-as,
but casts convert".  float-pointer-to-char-pointer (as in *(char *)&x) avoids
that problem, but fails if x is not an lvalue or is a register (and since the
resulting expression is not an lvalue, you have the same problem trying to put
the result back into a float).

The "true solution" is either "#define fabs(x) __builtin_fabs(x)" (where the
compiler knows how to do the bit clear for the operator __builtin_fabs), or
"inline float fabs(float x) { union { int i; float f; } u; u.f=x; u.i &= BIT;
return u.f; }" (where the compiler knows about "inline" and, hopefully, also
optimizes out the redundant moves to/from u).
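
Spelled out, that second alternative might look like this (a sketch only;
it assumes the compiler understands "inline", that int and float are both
32 bits, and that the sign is the top bit, as in IEEE single precision):

	inline float fabs(float x)
	{
		union { int i; float f; } u;	/* type-pun, don't convert */

		u.f = x;
		u.i &= 0x7FFFFFFF;		/* clear the sign bit */
		return u.f;
	}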

Karl W. Z. Heuer (ima!haddock!karl or karl@haddock.isc.com), The Walking Lint
(For simplicity, I've ignored the distinction between float and double, and
the machine-dependence which is inherent in this topic.)

guy@gorodish.UUCP (02/09/87)

>	#define	fabs(x) ( (x) & 0x7FFF )
>
>is probably a *lot* faster than your assembly language function,

On my machine, at least, it's incredibly fast, taking no time at all.
This is because using "fabs" on a floating-point value causes the
compiler to reject the code, so you don't waste cycles actually
running the program.

Sorry, but the only way you'd get something like this to work would be to
cast a pointer to the appropriate piece of the floating-point number
to a pointer to some integral type, and dereference that pointer to get
at the appropriate bits and to stuff the result back there.
According to the October 1, 1986 ANSI C draft:

	3.3.10 Bitwise AND Operator

	...

	Constraints

	   Each of the operands shall have integral type.
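
Done with a pointer cast instead, and assuming a machine where the sign
bit happens to be the top bit of the first longword of the representation
(and that x is an lvalue and not in a register), the operation might look
like this (a sketch only, and thoroughly machine-dependent):

	*(long *)&x &= 0x7FFFFFFF;	/* clear the sign bit in place */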

>Yeah, a lot of it's not exciting.  The standard complaint is that everything
>is really done in double precision, when you can often get by with single.

Fixed in ANSI C, which does not oblige you to perform floating-point
computations in double precision.

firth@sei.cmu.edu.UUCP (02/09/87)

In article <2181@batcomputer.tn.cornell.edu> braner@batcomputer.UUCP (braner) writes:
>THE COMPLETE FP LIBRARY MUST BE WRITTEN IN HAND-OPTIMIZED ASSEMBLER LANGUAGE!!!
>
>- Moshe Braner

Let me second this cry!  Most C mathlibs are indeed very bad.
I recall running a mathematical benchmark through a supposedly
optimising compiler and finding negligible difference in runtime
relative to the portable-C compiler generated code.  Turned out
the benchmark was spending 95% of its time computing sqrt.  We
recoded sqrt in Assembler and reduced its runtime by a factor
of 20.  As a bonus, it even returned the correct results.  I
then looked at sin, cos, log &c and... and you really don't
want to know what they were like.

As a very small nitpick with Moshe, your hand-optimised fabs
should of course check for infinity and NaNs rather than just
clearing the "sign bit".  On the PDP-11, for instance, the BIC
trick turns "reserved operand" into "zero", and some programmer
who cared about detecting uninitialised variables loses badly.

mouse@mcgill-vision.UUCP (02/10/87)

In article <756@unc.unc.UUCP>, steele@unc.UUCP (Oliver Steele) writes:
> In article <2550005@hpisod2.HP> decot@hpisod2.HP (Dave Decot) writes:
>>    #define fabs(X)     ((_fabs = (X)), (_fabs < 0? -_fabs : _fabs))
> Beware
> 	fabs(a,fabs(b,c)),
> as well as
> 	fabs(a,func(b,c))
> where func is a macro which invokes fabs.

fabs(a,fabs(b,c)) ->
((_fabs=(a,((_fabs=(b,c)),(_fabs<0?-_fabs:_fabs)))),(_fabs<0?-_fabs:_fabs))

which looks reliable to me (outer assignment doesn't happen until the
inner expression is done).  Problem arises with fabs(a)<fabs(b) and
similar expressions.

However, if fabs is a macro as above, fabs(a,b) will do something very
different from what it would if fabs were a function.  If fabs is a
function, "a,b" is treated as two arguments; the macro treats a,b as a
comma expression instead of two arguments!  Most compilers will
probably ignore extra arguments to functions, meaning that fabs(a,b)
returns fabs(a) if it's a function and fabs(b) if it's a macro...hmmm,
maybe this could be useful (ok, ok, quit shuddering, I *know* it's
nonportable!).

					der Mouse

USA: {ihnp4,decvax,akgua,utzoo,etc}!utcsri!mcgill-vision!mouse
     think!mosart!mcgill-vision!mouse
Europe: mcvax!decvax!utcsri!mcgill-vision!mouse
ARPAnet: think!mosart!mcgill-vision!mouse@harvard.harvard.edu

henry@utzoo.UUCP (Henry Spencer) (02/10/87)

> The absence of this ability (in suitably C-like syntax) to create explicit
> temporaries in C expressions explains why getc() and putc() are allowed
> to evaluate their parameters more than once.
> 
> Question: do not the ANSI committee find this uncomfortable too?
> Is the absence of suitable concrete syntax the justification?

I conjecture that most members of the ANSI committee don't find this
particularly uncomfortable, since old implementations which evaluate the
arguments more than once will be around for a long time and hence everyone
has to be cautious anyway.

The reason for not doing anything about it probably has nothing to do with
the lack of syntax -- anyone can invent syntax -- but with the lack of clear
need and the absence of substantial real experience with the idea.  Although
this rule sometimes seems more honored in the breach than in the observance,
X3J11 really is supposed to be standardizing existing ideas rather than
inventing new ones.
-- 
Legalize			Henry Spencer @ U of Toronto Zoology
freedom!			{allegra,ihnp4,decvax,pyramid}!utzoo!henry

braner@batcomputer.UUCP (02/11/87)

[]

eeexcuse me, but:

#define fabs(x) ((x) & 0x7FFFFF...)

will NOT work, and for several reasons:

The two operands ((x) and 0x7F...) are not of the same type, so one
of them has to be converted.  Then it won't work!!  Actually, it is
illegal to do '&' on a real, if I remember right.  And also, doubles
are usually longer than longs (until we get 64-bit longs :-).

You MIGHT get away with ((*(long *)&x) & 0x7FFFFFFF), but only in cases
where &x is legal, and only on machines where the sign bit of x is where
you expect it...

- Moshe Braner

THE COMPLETE FP LIB MUST BE WRITTEN IN HAND-OPTIMIZED AL!!!
ESPECIALLY SO IF YOU HAVE FP HARDWARE!!!

mouse@mcgill-vision.UUCP (02/11/87)

In article <1315@ho95e.ATT.COM>, wcs@ho95e.ATT.COM (#Bill.Stewart) writes:
> In article <2181@batcomputer.tn.cornell.edu> braner@batcomputer.UUCP (braner) writes:
>> Somebody already said this a LONG time ago:  Any macro definition of
>> fabs() requires floating-point arithmetic (e.g. comparison with 0).

> 	#define	fabs(x) ( (x) & 0x7FFF )

> is probably a *lot* faster than your assembly language function,
> since it doesn't need a function call.  [...] You'll have to #ifdef
> it to get the right bit pattern for your machine, and worry about
> details for double and double-extended versions, but for
> no-more-bits-than-a-long floating point it's ok.

I would be very surprised to find the SVR2 (ho95e *is* SVR2 on a 3B20,
isn't it?) C compiler that badly broken.  Or maybe you just posted
without bothering to test?  Anyway....

	% cat -n x.c
	     1	#define foo(x) ((x) & 0xffff7fff)
	     2	
	     3	main()
	     4	{
	     5	 float f;
	     6	
	     7	 f = -12.34;
	     8	 f = foo(f);
	     9	 printf("f = %g\n",f);
	    10	}
	% cc x.c
	"x.c", line 8: operands of & have incompatible types
	%

>> THE COMPLETE FP LIBRARY MUST BE WRITTEN IN HAND-OPTIMIZED ASSEMBLER
>> LANGUAGE!!!
> Only if your compilers are inadequate for your machines, though some
> hand-tuning helps.

I would really like a C compiler that produces anything that could
*touch* the "Kahan's magic square root" code of the 4.3 VAX libm.  Or
the generation of exceptions in the correct manner, which usually
requires code the compiler simply can't generate.  You wind up writing
it in assembly and putting it in a C file full of asm()s, which is an
exercise in stupidity.  I tend to agree with braner.

Or is your definition of an "adequate" compiler one which can recognize
"oh, this code is trying to do a square root, so let's plug in the good
square root code"?  I think you will find there aren't any such.

					der Mouse

USA: {ihnp4,decvax,akgua,utzoo,etc}!utcsri!mcgill-vision!mouse
     think!mosart!mcgill-vision!mouse
Europe: mcvax!decvax!utcsri!mcgill-vision!mouse
ARPAnet: think!mosart!mcgill-vision!mouse@harvard.harvard.edu

gwyn@brl-smoke.UUCP (02/13/87)

In article <645@mcgill-vision.UUCP> mouse@mcgill-vision.UUCP (der Mouse) writes:
>I would really like a C compiler that produces anything that could
>*touch* the "Kahan's magic square root" code of the 4.3 VAX libm.

The UNIX System V Release 2.0 sqrt routine, written in C, runs about
half as fast as the 4.3BSD assembly code, and is about twice as accurate.

Just for information..

meissner@dg_rtp.UUCP (02/17/87)

In article <68@umich.UUCP> jtr485@umich.UUCP (Johnathan Tainter) writes:
> In article <628@sdchema.sdchem.UUCP>, tps@sdchem.UUCP (Tom Stockfisch) writes:
> > >You could implement fabs() as a macro as follows:
> > >    #define fabs(X)     ((_fabs = (X)), (_fabs < 0? -_fabs : _fabs))
> > >		/* ... */
> > This might not work for
> > 	fabs( x ) + fabs( y )
> > 		/* ... */
> 
> #define fabs(X)     (((_fabs = (X)) < 0? -_fabs : _fabs))
> 
> Again it requires the 'hidden' definition of _fabs but it does not have
> order of evaluation problems.
> 
> --j.a.tainter

It may solve that problem, but:

	fabs( fabs( y ) - 10.0 )

would still get the wrong answer.  In my opinion, the "best" way to do this
is to make fabs (and abs, min, max, etc.) builtin to the compiler.  In order
to allow people to still write their own "fabs" function, you might want
to have some special keyword (I use $builtin which is not a legitimate standard
C identifier) to key in the compiler to use the builtin version of the library
function (which would compile down to the "right" instructions for your
machine, and no call overhead).  I also applaud C++'s inline keyword, which
should obviate the need for macro functions, which have the traditional
problems described above.
-- 
	Michael Meissner, Data General
	...mcnc!rti-sel!dg_rtp!meissner

jtr485@umich.UUCP (02/19/87)

In article <1070@dg_rtp.UUCP>, meissner@dg_rtp.UUCP writes:
> > #define fabs(X)     (((_fabs = (X)) < 0? -_fabs : _fabs))
> It may solve that problem, but:
> 	fabs( fabs( y ) - 10.0 )
> would still get the wrong answer.
> 	Michael Meissner

No.  This will still get the right answer.  The only thing to ever worry about
would be a side effect explicitly manipulating _fabs.  Such as:
    fabs( _fabs++ )
which expands to:
    (((_fabs = (_fabs++)) < 0? -_fabs : _fabs))
And now there is no way of knowing when the ++ gets done relative to the
expressions after the '?'.

But only direct manipulation of the 'hidden' variable can do this.

--j.a.tainter

drw@cullvax.UUCP (02/19/87)

meissner@dg_rtp.UUCP (Michael Meissner) writes:
> In article <68@umich.UUCP> jtr485@umich.UUCP (Johnathan Tainter) writes:
> > #define fabs(X)     (((_fabs = (X)) < 0? -_fabs : _fabs))
> 
> It may solve that problem, but:
> 
> 	fabs( fabs( y ) - 10.0 )
> 
> would still get the wrong answer.

Eh?  Let's look at

	(_fabs = X) < 0 ? -_fabs : _fabs

1.  X has to be computed first.

2.  Its value is assigned to _fabs (because the assignment must be
performed before the value of the assignment is used).

3.  There is a sequence point after the test-expression of a ? :, so
all side-effects of X must be completed.

4.  We get to choose -_fabs or _fabs.

The only problem could arise if X affects _fabs via some side-effect.
But this is not possible, even with nested fabs() calls, because the
only code which changes _fabs is "_fabs = X", which is required to
store into _fabs before having its value used.
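
Writing out Meissner's example with the macro above makes the point:

	fabs( fabs( y ) - 10.0 )

expands to

	(((_fabs = ((((_fabs = (y)) < 0? -_fabs : _fabs)) - 10.0))
		< 0? -_fabs : _fabs))

The inner assignment and test are finished (their value is needed for the
subtraction) before the outer assignment stores into _fabs, so each use of
_fabs sees the value it should.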

Dale
-- 
Dale Worley		Cullinet Software
UUCP: ...!seismo!harvard!mit-eddie!cullvax!drw
ARPA: cullvax!drw@eddie.mit.edu