[comp.lang.fortran] C vs. Fortran

jlg@beta.lanl.gov (Jim Giles) (06/22/88)

In article <3136@phoenix.Princeton.EDU>, mjschmel@phoenix.Princeton.EDU (Michael J. Schmelzer) writes:
> [...]
> Given that C produces assembly code very similar to the source
> code it came from, why should it be any slower than the object
> code generated by a FORTRAN compiler?

Who says that efficient object code needs to resemble the source very
much?  In fact, on pipelined machines, good object code always appears
quite confused compared to the source - all those code movement
optimizations.  Actually, even without pipelining, the object code
should be quite different - constant folding, common expression
elimination, strength reduction, and register allocation can all
rearrange the code quite a bit.  It may be true that many C compilers
produce object code that resembles source, but this is probably a
deficiency of those compilers.

> Furthermore, wouldn't optimizers eliminate any difference between
> their final products?

Depends on what type of optimization you're talking about.  The multi-
dimensioned array problems (which have seen so much recent discussion)
are hard for C to optimize because the language simply doesn't have
the dependency information available to it (not even in source).
Also, many C compilers throw out much of the dependency information at
an early stage in the compile process - no later optimization can get
it back.  Of course, the compiler could optimize anyway (by assuming
that no dependencies exist), but this would break code that really DID
contain dependencies.

Of course, C may have the reputation of being slow simply because the
compilers available to most people don't optimize very well.  Certainly
all the C compilers I've seen could stand some improvement (the famous
pcc - pessimizing C compiler, for example).

J. Giles
Los Alamos

eugene@pioneer.arpa (Eugene N. Miya) (06/23/88)

In article <3136@phoenix.Princeton.EDU> mjschmel@phoenix.Princeton.EDU (Michael J. Schmelzer) writes:
>Given that C produces assembly code very similar to the source
>code it came from, why should it be any slower than the object
>code generated by a FORTRAN compiler?
>
>Furthermore, wouldn't optimizers eliminate any difference between
>their final products?

My two yen:

Your fallacy is based on the separation of parts in compilation and an
assumption that everything is performed properly: a common
misconception.  For instance, compilers appear as very monolithic
things to most people (like me); only recently have companies started
writing compilers in high-level languages, started thinking about
common code generators for different languages, etc.  Sure, the binary
machine instructions all have to work in the machine instruction
environment, but many machines have had different optimizers for their
different language compilers.  This is changing as common intermediate
codes develop.  The point is that things are changing for the better.
Consider that supercomputers sometimes make terrible byte pushers
[overgeneralized].

In another case, we also tend to take a lot of features like
optimization for granted.  Most of these systems are poorly tested.
This is not completely the fault of manufacturers.  We ran un-vectorized
libraries for a month before we found the mistake; this was not a
production machine, just a site configuration error.  In another case,
I reserved a big machine for stand-alone use on micro-tasking research.
I was expecting to see 4 CPUs crunching a problem.  Only 1 was working.
The tasking package was distributed set up to run on only 1 CPU.
Only a stand-alone test would have detected this problem.  Time to
recompile.

Another gross generalization from

--eugene miya, NASA Ames Research Center, eugene@aurora.arc.nasa.gov
  resident cynic at the Rock of Ages Home for Retired Hackers:
  "Mailers?! HA!", "If my mail does not reach you, please accept my apology."
  {uunet,hplabs,ncar,decwrl,allegra,tektronix}!ames!aurora!eugene
  "Send mail, avoid follow-ups.  If enough, I'll summarize."

rwwetmore@watmath.waterloo.edu (Ross Wetmore) (06/28/88)

In article <3136@phoenix.Princeton.EDU> mjschmel@phoenix.Princeton.EDU (Michael J. Schmelzer) writes:
>Given that C produces assembly code very similar to the source
>code it came from, why should it be any slower than the object
>code generated by a FORTRAN compiler?
>
>What is missing from the equation here? I am currently writing
>programs in C and FORTRAN and thus have some interest in the matter.
>Mike Schmelzer mjschmel@phoenix!princeton.edu

  Part of the problem is that the presence or absence of certain
constructs in the high level language predisposes the programmer
towards a particular algorithm or coding style. Thus an algorithm
written in simple Fortran might be quite radically different from
one written in simple C.

  In C exponentiation is not an operator, but rather a function.
I have seen C programs where a for-loop has been used to avoid
doing a pow() function call or because simple integer exponentiation 
doesn't really exist. A Fortran programmer doesn't really think
about it and just uses '**', which usually just gets translated into
an appropriate function call by the compiler. The point is that the
C programmer may produce a radically different program because of
the different thought process required. The reverse happens with a
C concept, like pointers, that is missing from Fortran.
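
As a minimal sketch of the two styles (the code and function names
here are invented for illustration):

	#include <math.h>

	/* The loop one often sees in C, to avoid the call: */
	double eighth_by_loop(double x)
	{
	    double y = x;
	    int i;
	    for (i = 1; i < 8; i++)     /* seven multiplies, no call */
	        y *= x;
	    return y;
	}

	/* The direct rendering of the Fortran X**8: */
	double eighth_by_pow(double x)
	{
	    return pow(x, 8.0);         /* one call into libm */
	}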

  What is needed is a way to preprocess code or add compiler
extensions so that the expression of simple concepts can be done 
with simple high-level code and the implementation is left to the 
compiler. Better still, the extensions could be added dynamically
by the programmer for his particular application, rather than
rewriting the code to use whichever compiler has the maximum overlap
with his favorite feature set.

  For example a scientific application package might be a trivial
program in a language where data types 'vector' and 'matrix'
existed with the standard operators '+-*/' doing the appropriate thing.
Moreover, if a new super-hardware device comes along to replace the
functions that presumably get called to do the actual work, then
the user need do little more than recompile with a new compiler
switch or function library. This is far preferable to, and less
time-consuming than, making global changes to an explicit programmer
implementation of the same functionality.
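
In C as it stands, such a package must be spelled out with function
calls; a minimal sketch of the idea (type and function names invented):

	/* A 3-vector and its addition, as C forces us to write it. */
	typedef struct { double x, y, z; } vec3;

	vec3 vec_add(vec3 a, vec3 b)
	{
	    vec3 r;
	    r.x = a.x + b.x;
	    r.y = a.y + b.y;
	    r.z = a.z + b.z;
	    return r;
	}

	/* Today:       c = vec_add(a, b);
	   Wished for:  c = a + b;
	   A new vector unit would then mean relinking vec_add against
	   new hardware, not editing every call site. */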

  Rather than converting Fortran to C, or Cobol, or PL/1, it might be
more worthwhile to look at some of the more flexible languages such
as C++ or ADA or their future incarnations. Hopefully, the relearning
cycle and upgrade path for these will be somewhat less steep after
the initial conversion, as the basic compiler is less liable to change,
but rather different packages of 'includes' will come along to provide
extensions and new features.

Ross W. Wetmore                 | rwwetmore@water.NetNorth
University of Waterloo          | rwwetmore@math.Uwaterloo.ca
Waterloo, Ontario N2L 3G1       | {uunet, ubc-vision, utcsri}
(519) 885-1211 ext 3491         |   !watmath!rwwetmore

jss@hector.UUCP (Jerry Schwarz) (06/29/88)

For those interested in the differences between C and FORTRAN with
regard to optimization I recommend

	Compiling C for Vectorization, Parallelization and Inline Expansion
	R. Allen and S. Johnson
	in Proceedings SIGPLAN '88  (SIGPLAN Notices July 88)
	pp. 241-249

The conference was last week.  The Notices should be arriving soon.

Jerry Schwarz
Bell Labs, Murray Hill

ok@quintus.uucp (Richard A. O'Keefe) (06/29/88)

In article <19633@watmath.waterloo.edu> rwwetmore@watmath.waterloo.edu (Ross Wetmore) writes:
>  In C exponentiation is not an operator, but rather a function.
>I have seen C programs where a for-loop has been used to avoid
>doing a pow() function call or because simple integer exponentiation 
>doesn't really exist. A Fortran programmer doesn't really think
>and just uses '**' which usually just gets translated into an
>appropriate function call by the compiler.

Not to knock either C or Fortran, but this may be a case where C is better
to use than Fortran.  Just because ** _looks_ like a simple operation (not
that different from *) doesn't mean that it isn't a call to a costly function.
I have seen published Fortran code where polynomials were evaluated with an
O(N * log(N)) algorithm (N being the degree of the polynomial) instead of an
O(N) algorithm, presumably because the programmer thought ** must be cheap.
If you have a program calculating X**N where N is an integer but is not in
the range -2..+3, take a very careful look at your program to see how it can
be improved.
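
For non-constant integer N, one standard improvement is repeated
squaring, which needs only about log2(N) multiplies; a minimal sketch
(the function name is invented):

	/* x**n by repeated squaring: about log2(n) multiplies
	   instead of n-1 (assumes n >= 0). */
	double ipow(double x, unsigned int n)
	{
	    double r = 1.0;
	    while (n > 0) {
	        if (n & 1)      /* low bit set: fold this factor in */
	            r *= x;
	        x *= x;         /* square for the next bit of n */
	        n >>= 1;
	    }
	    return r;
	}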

smryan@garth.UUCP (Steven Ryan) (06/29/88)

>  Rather than converting Fortran to C, or Cobol, or PL/1, it might be
>more worthwhile to look at some of the more flexible languages such
>as C++ or ADA or their future incarnations. Hopefully, the relearning
>cycle and upgrade path for these will be somewhat less steep after
>the initial conversion, as the basic compiler is less liable to change,
>but rather different packages of 'includes' will come along to provide
>extensions and new features.

That was the idea behind Algol68, from twenty years back. I think Aad
was a bit ahead of his time.

smryan@garth.UUCP (Steven Ryan) (06/29/88)

This should be the ten thousandth article--sorry, couldn't resist it.

jlg@beta.lanl.gov (Jim Giles) (06/29/88)

In article <143@quintus.UUCP>, ok@quintus.uucp (Richard A. O'Keefe) writes:
> [...]
> Not to knock either C or Fortran, but this may be a case where C is better
> to use than Fortran.  Just because ** _looks_ like a simple operation (not
> that different from *) doesn't mean that it isn't a call to a costly function.
> I have seen published Fortran code where polynomials were evaluated with an
> O(N * log(N)) algorithm (N being the degree of the polynomial) instead of an
> O(N) algorithm, presumably because the programmer thought ** must be cheap.
> [...]

The exponentiation operator isn't bad just because some people use it
badly.  Your example could have been coded just as badly with C (by using
the same algorithm and substituting function calls for the exponentiation
operator - it wouldn't be an intrinsic function if it wasn't cheap, would
it?).  If simple-looking expressions should really be restricted to
doing only simple things, then C should not have macros (which can
hide a lot of complexity).

Also, Fortran doesn't _require_ a programmer to use **.  You could
code without it.  But, the exponentiation operator is an example of a
feature which allows the compiler to make optimizations without the
user having to sacrifice readability.  X**3 is more readable than
X*X*X (particularly if X is an expression).  X**3.5 is more readable
than EXP(3.5*LOG(X)).  The compiler automatically selects which
implementation to use, in general helping speed the code.  If you
always write pow(X,Y), the compiler has no choices to make.
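
For instance, given X**3 and X**3.5 a compiler is free to generate
roughly the following (a sketch of one plausible choice, with invented
function names, not any particular compiler's output):

	#include <math.h>

	double cube(double x)
	{
	    return x * x * x;           /* X**3: inline multiplies */
	}

	double three_and_a_half(double x)
	{
	    return exp(3.5 * log(x));   /* X**3.5: exp/log expansion */
	}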

The purpose of all higher level features of a language is to make it
easier for a _good_ programmer to do his job.  A feature which does
this should be considered for inclusion no matter how badly it _might_
be misused.  If the potential for misuse doesn't even result in
incorrect code (as in this case), then there is no reason not to
include the feature.

J. Giles
Los Alamos

ok@quintus.uucp (Richard A. O'Keefe) (06/30/88)

In article <20480@beta.lanl.gov> jlg@beta.lanl.gov (Jim Giles) writes:
>In article <143@quintus.UUCP>, ok@quintus.uucp (Richard A. O'Keefe) writes:
>> Not to knock either C or Fortran, but this may be a case where C is better
>> to use than Fortran.

>The exponentiation operator isn't bad just because some people use it badly.

I didn't say that it was bad.  What I do say is that it is _MISLEADING_.
In C, for example, most of the operations for which there is special
syntax correspond to machine operations of similar cost.

What with prototypes and all, an ANSI C compiler is fully entitled to treat
	#include <math.h>
	double x;
	...
	x = pow(x, 3);
	x = pow(x, 3.5);
just the same as Fortran 77 would treat
	DOUBLE PRECISION X
	...
	X = X**3
	X = X**3.5
It is already the case on the UNIX systems where I've tried it that
	pow(x, (double)3)
uses the iterative method used for X**3.  (I personally don't think that
is a good thing, but that's another story.)

>X**3 is more readable than X*X*X (particularly if X is an expression).
pow(x,3) isn't _that_ hard to read, either.
>X**3.5 is more readable than EXP(3.5*LOG(X)).
True, but again, pow(x,3.5) isn't terribly hard to read, and in any
case the two expressions X**3.5 and EXP(3.5*LOG(X)) don't mean exactly
the same thing (may have different over/underflow conditions, yield
different results, &c).

>If you always do pow(X,Y), the compiler has no choices to make.
Misleading and soon to be wrong.  Misleading, because the run-time library
_does_ have some choices to make, and current libraries make them.  Soon
to be wrong, because ANSI C compilers are allowed to detect the standard
library functions and do special things with them.

>If the potential for misuse doesn't even result in incorrect code
>(as in this case), then there is no reason not to include the feature.

Well, ANSI C _does_ include the feature.  My point was simply that the
notation "pow(x, y)" _looks_ as though it might be costly, while the
notation "x ** y" _looks_ as though it is cheap and simple, and that
the first perception is the correct one, so that ANSI C's notation may
be better.  In fact it is _not_ the case in this example that the
misuse did not result in incorrect code.  Consider the two fragments:

C	Assume that N.ge.2		/* Assume that N >= 2 */
C	Fragment 1			/* Fragment 1 */
	T = A(1)			t = a[0];
	DO 10 I = 2,N			for (i = 1; i < n; i++) {
	   T = T + A(I) * X**(I-1)	    t += a[i]*pow(x, (double)i);
10	CONTINUE			}
C	Fragment 2			/* Fragment 2 */
	T = A(N)			t = a[n-1];
	DO 20 I = N-1,1,-1		for (i = n-2; i >= 0; i--) {
	    T = T*X + A(I)		    t = t*x + a[i];
20	CONTINUE			}

The two fragments are *not* equivalent.  It is easy to come up with an
example where fragment 2 correctly yields 1.0 and fragment 1 overflows.
{I am not asserting that fragment 2 is always the right thing to use!}
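
One concrete instance (my own numbers, assuming IEEE double
arithmetic): take the polynomial 1e-305 * x**2 at x = 1e155.

	#include <stdio.h>
	#include <math.h>

	int main(void)
	{
	    double x = 1e155;
	    /* Fragment 1 style: pow(x, 2.0) = 1e310 overflows. */
	    double f1 = 0.0 + 0.0 * pow(x, 1.0) + 1e-305 * pow(x, 2.0);
	    /* Fragment 2 style: Horner keeps intermediates in range:
	       (1e-305 * 1e155) * 1e155 = 1e5, the correct value. */
	    double f2 = (1e-305 * x + 0.0) * x + 0.0;
	    printf("%g %g\n", f1, f2);  /* infinity vs. 1e5 */
	    return 0;
	}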

A good numeric programmer will use X**N __very__ seldom for other than
constant arguments.

What C _does_ lack that a numeric programmer might miss is a single-
precision version of pow().  Given
	float x, y, u, v;
	x = pow(u,v)/y;
it is not in general safe to evaluate pow(u,v) in single precision.
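A concrete case (my own numbers, invented for illustration): the
intermediate can overflow single precision even when the final result
fits comfortably in a float:

	#include <math.h>

	float quotient(void)
	{
	    float u = 1e30f, v = 2.0f, y = 1e35f;
	    /* pow(u,v) = 1e60 is far beyond float range (~3.4e38),
	       but the quotient 1e60/1e35 = 1e25 is a perfectly good
	       float; the intermediate must be carried in double. */
	    return (float)(pow((double)u, (double)v) / (double)y);
	}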
How much faster is float**float than double**double?
Sun-3/50 (-fsoft)	1.2
Sun-3/160 (-f68881)	2.5
Sequent (80387)		1.4
If this is typical (and I have no reason to expect that it is), the
lack of single precision elementary functions in C comes into the
"nuisance" category rather than the "major problem" one.  Since UNIX
systems often come with Fortran these days, and Fortran has single
precision elementary functions, it seems a pity not to let ANSI C
share them.

jlg@beta.lanl.gov (Jim Giles) (06/30/88)

In article <147@quintus.UUCP>, ok@quintus.uucp (Richard A. O'Keefe) writes:
> [...]
> I didn't say that it was bad.  What I do say is that it is _MISLEADING_.
> In C, for example, most of the operations for which there is special
> syntax correspond to machine operations of similar cost.

It's not misleading.  It means what it says - exponentiation.  It looks
as expensive as an exponentiation operator - no more or less.  If the
syntactical appearance is what you mean by 'special syntax', why does
pow() look any more expensive than abs()?  Divide doesn't look all that
much more expensive than multiply either, but it is.
> 
> What with prototypes and all, an ANSI C compiler is fully entitled to treat
>    [example ...]
> just the same as Fortran 77 would treat
>    [other example ...]

Possibly true.  Also irrelevant.  You are making basically a syntactical
point.  So the fact that future versions of C will fix something
(that IT shouldn't have broken to begin with) is not relevant.

> [...]
> >X**3 is more readable than X*X*X (particularly if X is an expression).
> pow(x,3) isn't _that_ hard to read, either.
> >X**3.5 is more readable than EXP(3.5*LOG(X)).
> True, but again, pow(x,3.5) isn't terribly hard to read, and in any

LISP isn't terribly hard to read either, but it's not what I want to
code numerical expressions in.  The syntax that mathematics uses is
really well suited to the task.  The programming language syntax for
the same purposes should look as similar as possible to the original
math.  There is no reason that I can see to adopt any other rule of
choice in language design.

> case the two expressions X**3.5 and EXP(3.5*LOG(X)) don't mean exactly
> the same thing (may have different over/underflow conditions, yield
> different results, &c).

If X is in range, then LOG(X) is in range.  If X**3.5 is in range, then
certainly 3.5*LOG(X) is.  The expression given overflows (or underflows)
ONLY if the original numbers are out of range or if the final answer
is out of range.
> 
> >If you always do pow(X,Y), the compiler has no choices to make.
> Misleading and soon to be wrong.  Misleading, because the run-time library
> _does_ have some choices to make, and current libraries make them.  Soon

The choice I'm talking about is whether to cause a function call (the
most expensive of all the 'simple' operations).  Doesn't matter what the
subroutine library does, you've already made the expensive call.

> to be wrong, because ANSI C compilers are allowed to detect the standard
> library functions and do special things with them.

Fine, C is fixing something that shouldn't have been broken to begin with.
Now, if it would just add the operator ...

> [...]
> C	Assume that N.ge.2		/* Assume that N >= 2 */
> C	Fragment 1			/* Fragment 1 */
> 	T = A(1)			t = a[0];
> 	DO 10 I = 2,N			for (i = 1; i < n; i++) {
> 	   T = T + A(I) * X**(I-1)	    t += a[i]*pow(x, (double)i);
> 10	CONTINUE			}
> C	Fragment 2			/* Fragment 2 */
> 	T = A(N)			t = a[n-1];
> 	DO 20 I = N-1,1,-1		for (i = n-2; i >= 0; i--) {
> 	    T = T*X + A(I)		    t = t*x + a[i];
> 20	CONTINUE			}
> 
> The two fragments are *not* equivalent.  It is easy to come up with an
> example where fragment 2 correctly yields 1.0 and fragment 1 overflows.
> {I am not asserting that fragment 2 is always the right thing to use!}

An interesting example.  But it makes my point, not yours.  The first
fragment (in either language) looks expensive.  The C fragment also
looks harder to write and maintain - but no more expensive than the
Fortran fragment (this, of course, IS misleading, the C fragment is
probably slower).  Fragment 2 looks clearly preferable in either
language.  The existence of the exponentiation operator doesn't alter
my perceptions about these code fragments AT ALL.
> 
> A good numeric programmer will use X**N __very__ seldom for other than
> constant arguments.

Finally!  Something we agree upon.  But what does this have to do with
the value of placing the operator into the syntax?  Just because it's
seldom used for large or non-constant arguments, doesn't mean it needs
to be arcane or cryptic when it IS used.
> 
> What C _does_ lack that a numeric programmer might miss is a single-
> precision version of pow().  Given
> [...]
> How much faster is float**float than double**double?
> Sun-3/50 (-fsoft)	1.2
> Sun-3/160 (-f68881)	2.5
> Sequent (80387)		1.4
> If this is typical (and I have no reason to expect that it is), the
> [...]

For just multiplies, X*Y, the ratio on the Crays is about 60 (actually
that's an underestimate, since the single precision multiply can be
pipelined).  The ratio for exponentiation must be enormous!  Of course,
this would never be acceptable.  So the Cray C compiler uses SINGLE
instead of DOUBLE for all arithmetic.  I don't know if it even HAS
a way of doing real double.  (Cray single has ~14 decimal digits of
precision.)

My point is that if you're suggesting that languages should be designed
to protect a small number of naive users from making simple efficiency
errors - you're wrong.  Programming languages should be designed to
allow _good_ programmers to write and maintain their codes as productively
as possible.  After all, this is why C, with its terse (and sometimes
cryptic) syntax is so popular.  If you really want to protect naive
users, you should lobby for removal of expressions with side-effects
from C.

J. Giles
Los Alamos

dph@lanl.gov (David Huelsbeck) (07/01/88)

From article <20506@beta.lanl.gov>, by jlg@beta.lanl.gov (Jim Giles):
	[...]
> 
> LISP isn't terribly hard to read either, but it's not what I want to
> code numerical expressions in.  The syntax that mathematics uses is
                                                  ^^^^^^^^^^^
> really well suited to the task.  The programming language syntax for
> the same purposes should look as similar as possible to the original
> math.  There is no reason that I can see to adopt any other rule of
> choice in language design.
	[...]

Hmmm.

  Jim, I tend to think lisp looks more like mathematical syntax than
Fortran does.  A small subset of Fortran looks a lot like arithmetic,
but mathematics?

I don't seem to remember seeing any DO loops in my mathematics texts.
Well, maybe in my numerical methods book, but it also contained an
arithmetic-if-statement in an example of "good code".  Anybody who
can defend the AIS should have no problem with x+=(p=foo(b)?++i:i--)

I would have put a smile at the end but it might have looked like code. ;-D

If you want something that really looks like mathematics, with the
for-all and there-exists operators and REAL subscripting and
superscripting, try MODCAP. (SIGPLAN Notices, years ago)


> 
> 
> J. Giles
> Los Alamos

	David Huelsbeck
	dph@lanl.gov
	...!cmcl2!lanl!dph


nevin1@ihlpf.ATT.COM (00704a-Liber) (07/01/88)

In article <20506@beta.lanl.gov> jlg@beta.lanl.gov (Jim Giles) writes:

|LISP isn't terribly hard to read either, but it's not what I want to
|code numerical expressions in.  The syntax that mathematics uses is
|really well suited to the task.  The programming language syntax for
|the same purposes should look as similar as possible to the original
|math.  There is no reason that I can see to adopt any other rule of
|choice in language design.

Since when does 'x = x + 1' resemble anything besides an obviously false
statement in mathematics??  Also, since many of us use C for tasks other
than number crunching, does this mean that we should have NO desire for a
programming language to resemble mathematics?  Your reasoning is a little
faulty here.

|The choice I'm talking about is whether to cause a function call (the
|most expensive of all the 'simple' operations).  Doesn't matter what the
|subroutine library does, you've already made the expensive call.

A function call may not necessarily be made (can you say 'inlining'?).

|Fine, C is fixing something that shouldn't have been broken to begin with.

It was never broken (a little inefficient, but not broken).

|Finally!  Something we agree upon.  But what does this have to do with
|the value of placing the operator into the syntax?  Just because it's
|seldom used for large or non-constant arguments, doesn't mean it needs
|to be arcane or cryptic when it IS used.

If all the arguments are constant, what do you need a run-time operator
for?
-- 
 _ __			NEVIN J. LIBER	..!ihnp4!ihlpf!nevin1	(312) 510-6194
' )  )				You are in a little twisting maze of
 /  / _ , __o  ____		 email paths, all different.
/  (_</_\/ <__/ / <_	These are solely MY opinions, not AT&T's, blah blah blah

SMITHJ@ohstpy.mps.ohio-state.edu (08/04/89)

X-NEWS: ohstpy comp.lang.c: 2092

This may have been discussed here millions of times before, but here goes.

What are the reasons for using FORTRAN over C?

I have heard claims that the code is easier to optimize and that current
benchmarks show that this is the case on all mainframes.  My experience is
that this is pure bu****it.  Having done my own testing, I find that C runs
faster on PCs.

Is it likely that C runs slower--on say the VAX--because it is only in
version 2.4 whereas the FORTRAN compiler is at version 90+ (i.e. fewer
man hours have been put into developing mainframe C compilers)?

No remarks about the ease of use of complex numbers in FORTRAN are acceptable.
-- 
/* Jeffery G. Smith, BS-RHIT (AKA Doc. Insomnia, WMHD-FM)       *
 *    The Ohio State University, Graduate Physics Program       *
 *        3193 Smith Lab, Columbus, OH 43210  (614) 292-5321    *
 *    smithj@ohstpy.mps.ohio-state.edu                          */

khb%chiba@Sun.COM (chiba) (08/04/89)

In article <3289@ohstpy.mps.ohio-state.edu> SMITHJ@ohstpy.mps.ohio-state.edu writes:
>
>What are the reasons for using FORTRAN over C?
>
>I have heard claims that the code is easier to optimize and that current
>benchmarks show that this is the case on all mainframes.  My experience is
>that this is pure bu****it. 

No, it is quite true.

> Having done my own testing, I find that C runs faster on PCs.

Ah. A large sample space, involving lots of different computer architectures.

Consider how optimizers work. Three very basic (and not the only)
problems exist:

1)	aliasing. It is not legal to move two arbitrary lines
	of c code...since nearly all lines of c code contain
	unconstrained pointer references. This inhibits 99%
	of all code motion...which is key to high performance
	platforms from the cdc6600 on. (A sketch of the problem
	appears after this list.)

	This is an open problem, and is considered very hard.
	The fact that no one has solved it despite a decade of
	effort should be noted.

2)	for vs do

	do loops are primitive, but map into all known hardware.
	for loops are more elegant, but since one is allowed
	to store into the index variable, many useful optimizations
	are harder (not impossible, as gcc proves).

	a simple program which does a simple loop and computes
	anything runs faster under sun f77 than under sun cc.
	gcc is more clever, but only for simple loops.

	With work, simple special cases can be detected, but it
	is work that is unnecessary for f77... and there is a finite
	amount of time folks are willing to wait for a compile ...
	so if one has to do extensive analysis to get BACK to the point
	at which f77 STARTS, other optimizations will have to be skipped.

3)	many things which are OPERATORS in f77 are LIBRARY routines
	in C. See note below.
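
To make point 1 concrete, a minimal sketch (the example is invented):

	/* C: the compiler cannot keep *sum in a register across the
	   loop, because sum might point into a[] -- the standard
	   requires the store on every trip. */
	void accum(double *a, int n, double *sum)
	{
	    int i;
	    for (i = 0; i < n; i++)
	        *sum += a[i];
	}

	/* Fortran: the standard forbids the caller from aliasing
	   SUM with A, so SUM can live in a register all loop long:
	      SUBROUTINE ACCUM(A, N, SUM)
	      INTEGER N, I
	      DOUBLE PRECISION A(N), SUM
	      DO 10 I = 1, N
	   10 SUM = SUM + A(I)
	      END                                                  */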


c optimizers cannot do as good a job as f77 optimizers without doing
much more extensive analysis...and if one has the computational
resources to do a better analysis, the f77 optimizer could use them to
do even better.

The counter-examples are all on small computers, where the f77
compilers are not really optimizing (marketing hype to the contrary).


... excerpt from a memo from dgh about a particular complaint about c
vs f77

"cos" in Fortran is an OPERATOR.  The compiler knows what it means.
It may generate a 68881 fcos instruction immediately.  To get
something different you have to instruct the compiler that in this
program "cos" does not mean cosine.

"cos" in C is a FUNCTION.  The compiler does NOT know what it means.
The compiler must generate a function call to an external function
which is found in a library by the linker.  There is no way to
instruct the compiler that "cos" means cosine.  SVID and ANSI C
aggravate this problem by requiring a lot of ill-considered errno
setting in the elementary transcendental functions so a simple fcos
instruction isn't adequate.

This is a fundamental distinction between C and Fortran.  On Suns, in
order to get close to comparable performance between double precision
C and double precision Fortran, you need to compile with the same
optimization (might as well use -O4 routinely when you want to
optimize, it usually doesn't hurt) and use inline expansion templates
which are a kluge around the problem.  They also solve the SVID and
ANSI C problem the way Cray does, by ignoring them.

To get comparable performance in single precision, you also have to
compile the C program with -fsingle.

The sunspots poster aggravated the problem by using a dumb benchmark
that can be readily optimized away to nothing if you know what "cos"
means, as a Fortran compiler does.

....

As machines get more interesting (long pipes, multiple functional
units, etc.) the performance difference between C and f77 becomes more
pronounced. Since C compilers cannot restructure the code as
extensively, the size of basic blocks tends to be small (as recent
posters to comp.arch have been noting ... w/o noticing that the
frequent branch instructions could be avoided) and the hardware is
poorly exploited.

Keith H. Bierman      |*My thoughts are my own. Only my work belongs to Sun*
It's Not My Fault     |	Marketing Technical Specialist    ! kbierman@sun.com
I Voted for Bill &    |   Languages and Performance Tools. 
Opus  (* strange as it may seem, I do more engineering now     *)

bobal@microsoft.UUCP (Bob Allison) (08/08/89)

In article <3289@ohstpy.mps.ohio-state.edu> SMITHJ@ohstpy.mps.ohio-state.edu writes:
>
>This may have been discussed here millions of times before, but here it goes.
>

   No, merely thousands of times.

>What are the reasons for using FORTRAN over C?
>
>I have heard claims that the code is easier to optimize and that current
>benchmarks show that this is the case on all mainframes.  My experience is
>that this is pure bu****it.  Have done my own testing, I find that C runs
>faster on PC's.
>

    Let's avoid all the religious arguments and get down to brass tacks:
    if you feel that C works better for you, on the architecture that
    you need to use, with the applications you are interested in running,
    then use C.  There are others who are interfacing to libraries already
    written in FORTRAN, or whose applications or architectures favor
    a FORTRAN implementation and they use FORTRAN.  Okay?  Now go away.

>Is it likely that C runs slower--on say the VAX--because it is only in
>version 2.4 whereas the FORTRAN compiler is at version 90+ (i.e. fewer
>man hours have been put into developing mainframe C compilers)?
>

    No.  Yes.  Maybe.  It depends on whether they use a common optimizer,
    how much man-power is actually put into each project, and whether
    one of the languages is truly easier to optimize than the other.
   
>No remarks about the ease of use of complex numbers in FORTRAN are acceptable.

   Well, I guess that defines your application for you.  If you want a
   general comparison of FORTRAN and C, you've got to accept this as an
   issue.  Otherwise, it seems you've already made up your mind.  Take it
   on faith that everyone else has already made up their minds as well 
   and let's try not to go around proselytizing on this issue for once.

>/* Jeffery G. Smith, BS-RHIT (AKA Doc. Insomnia, WMHD-FM)       *

(This is not directed specifically at you, but rather at the hundred
other people who are going to respond, multiple times, on this issue.)

jf@threel.co.uk (John Fisher) (12/13/90)

ttw@lanl.gov (Tony Warnock) remarks, in an aside:

>     [...] (A biologist, I do not remember who, posted 1000 x 1000 x
>     1000 x 1000 x 50 which led to a lot of irrelevant discussion about
>     the lack of such memory sizes.) [...]

The reason I posted in such terms is that he claimed that this was a
*typical* array size.  It obviously isn't.

The discussion seems to me to get bogged down in two rather laughable
attitudes:

-- The macho-macho Fortran "you guys don't understand the real world"
   stance exemplified by the 50 trillion element array man.
-- The "preserve the purity of the truth" C people, like the person who,
   when asked for an introduction to C for engineers, simply replied that
   engineering problems wouldn't provide a complete understanding of
   C's facilities.

I take it for granted that both these languages have their merits, or they
wouldn't exist.  Professional programmers should consider it a matter of
pride to be fluent in a variety of languages, and use them as appropriate. 
People who are not principally programmers should use whatever language they
feel comfortable with, and if this stops them from doing what they want and
they can't find the time to extend their computer expertise, they should
call in a professional. 

--John Fisher

3003jalp@ucsbuxa.ucsb.edu (Applied Magnetics) (12/14/90)

In article <276765ee@ThreeL.co.uk> jf@threel.co.uk (John Fisher) writes:

>ttw@lanl.gov (Tony Warnock) remarks, in an aside:

>[...]
>The discussion seems to me to get bogged down in two rather laughable
>attitudes:
>-- The macho-macho Fortran "you guys don't understand the real world"
>   stance exemplified by the 50 trillion element array man.
>-- The "preserve the purity of the truth" C people, like the person who,
>   when asked for an introduction to C for engineers, simply replied that
>   engineering problems wouldn't provide a complete understanding of
>   C's facilities.
>[...]

I (re-) submit the following examples of possible use for C:
  -Finite elements with random-ish triangular or tetrahedral meshes.
  -Sparse matrices.
Regarding the latter, see e.g. "Sparse Matrix Computations", edited by
J.R. Bunch and D.J. Rose, Academic Press 1976.  (Probably dated, but
relevant to this thread).  The book is full of graph-theoretical
techniques for sparse Gaussian elimination.  _That_ would be easier to
do in C than in Fortran.
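
A hint of why: the natural representation is pointer-based, which
Fortran 77 cannot express directly.  A minimal sketch (structure and
names invented):

	/* One nonzero of a sparse matrix row, as a linked list node. */
	struct entry {
	    int col;                    /* column of this nonzero */
	    double val;                 /* its value */
	    struct entry *next;         /* next nonzero in the row */
	};

	struct sparse {
	    int nrows;
	    struct entry **rows;        /* one list head per row */
	};

	/* y = A*x, touching only the stored nonzeros. */
	void spmv(struct sparse *A, double *x, double *y)
	{
	    int i;
	    struct entry *e;
	    for (i = 0; i < A->nrows; i++) {
	        y[i] = 0.0;
	        for (e = A->rows[i]; e != NULL; e = e->next)
	            y[i] += e->val * x[e->col];
	    }
	}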

I vaguely remember some work by Rob Pike (then at the Caltech Physics
Department) about percolation in a 2-D grid, ca. 1978.  It involved
sparse graphs and was done in C because it was easier that way.

IMHO, the decision to switch languages is tied to the decision to
switch algorithms.  More examples, anyone?  hello?

  --P. Asselin, R&D, Applied Magnetics Corp.  I speak for me.