[comp.lang.ada] Ada vs C implementation efficiency

afes0isi@ZACH.FIT.EDU (Sam Harbaugh-AFES PROJECT) (06/16/91)

Paul Kohlmiller writes:
Jim,
  Could you specify one or more machines that have an available ADA compiler
that generates code that is as efficient as a C compiler for the same machine?
thnx
Paul Kohlmiller
--
Last I checked, about 5 years ago, the DEC Ada compiler outperformed the
DEC C compiler on the Whetstone benchmark.
sam harbaugh saharbaugh@ROO.FIT.EDU        
-----------

g_harrison@vger.nsu.edu (George C. Harrison, Norfolk State University) (06/17/91)

In article <9106151737.AA16750@zach.fit.edu>, afes0isi@ZACH.FIT.EDU (Sam Harbaugh-AFES PROJECT) writes:
> Paul Kohlmiller writes:
>> Jim,
>>   Could you specify one or more machines that have an available ADA compiler
>> that generates code that is as efficient as a C compiler for the same machine?
>> thnx
>> Paul Kohlmiller
> --
> Last I checked, about 5 years ago, the DEC Ada compiler outperformed the
> DEC C compiler on the Whetstone benchmark.
> sam harbaugh saharbaugh@ROO.FIT.EDU        
> -----------

Supposedly, the SUN Telesoft Ada compiler translates Ada -> optimizer -> C ->
a.out and is faster on most benchmarks than the same written in the C
it's translated to.  (Sorry for ending with a preposition.)

George C. Harrison, Professor of Computer Science
Norfolk State University, 2401 Corprew Avenue, Norfolk VA 23504
Internet:  g_harrison@vger.nsu.edu    Phone:  804-683-8654

eachus@largo.mitre.org (Robert I. Eachus) (06/18/91)

     There are "benchmarks" which will (almost) always run faster in
Ada than in C, and vice versa.  In general, however, for good compilers
and good benchmarks a program will run fastest in the language it was
originally written in.  Thus the Dhrystone benchmark, originally written
in Ada, normally runs faster in Ada than in C, and people complain that
it overuses string operations when translated to C.

     If you take an application and write it (from scratch) in several
languages, with each version written by a team experienced in that
language, it usually turns out that you measure the difference in
cultures, not the difference in compilers.

     I once had the opportunity to do this with a package heavy on
matrix operations in FORTRAN, Pascal, and Ada.  The FORTRAN version
was fastest, the Pascal version had the lowest error bounds on
non-stiff matrices, and the Ada version was the only one that could be
trusted with near singular data.  Which one is "best"?

     We did a "second iteration," putting checking code into the
FORTRAN and Pascal versions, and using the Pascal code in all three
versions (after conditioning), and now the program performance
differences were in the noise.  (The Ada I/O was slower, the FORTRAN
floating point with overflow checking was much more cumbersome, and so
on, so there were input cases where each version was "fastest," but now
all of them were correct...)

     My personal approach, based on this experiment and others, is to
go for correctness first, and if you need better performance, look at
the algorithms, THEN at the code.  I have gotten to the point of
looking at generated code once or twice, but in every case I have
found a way to coerce the compiler to generate what I wanted.

--

					Robert I. Eachus

with STANDARD_DISCLAIMER;
use  STANDARD_DISCLAIMER;
function MESSAGE (TEXT: in CLEVER_IDEAS) return BETTER_IDEAS is...

munck@STARS.RESTON.UNISYS.COM (Bob Munck) (06/19/91)

In INFO-ADA Digest V91 #165, eachus@mitre-bedford.arpa  (Robert I. Eachus)
said:

> ...  In general, for good compilers
>and good benchmarks a program will run fastest in the language it was
>originally written in.

You CAN'T mean that exactly as written.  Surely the history of a benchmark
has little to do with its results.  Maybe you mean something like "line-at-
a-time, uncritical translation of a program from one language to another
almost always results in poor performance of the translated program compared
to the original"?

>     My personal approach, based on this experiment and others, is to
>go for correctness first, and if you need better performance, look at
>the algorithms, THEN at the code.  I have gotten to the point of
>looking at generated code once or twice, but in every case I have
>found a way to coerce the compiler to generate what I wanted.

An important point, and something that a lot of programmers learn late in
their careers or not at all.  Note too that this approach to performance
tends to preserve maintainability.  My way of stating this approach:

                RULES FOR PROGRAM OPTIMIZATION:
                
                1. Don't do it.
                
                2. (Experts only) Don't do it yet.
                
I do have my doubts about "coerce the compiler to generate what I wanted."
What happens when a new compiler release doesn't coerce the same way?  Besides,
playing with object code makes hair grow on your palms.

Bob Munck

rlk@telesoft.com (Bob Kitzberger @sation) (06/20/91)

Kohlmiller> Could you specify one or more machines that have an available
Kohlmiller> ADA compiler that generates code that is as efficient as 
Kohlmiller> a C compiler for the same machine?

Harrison>   Supposedly, the SUN Telesoft Ada compiler translates
Harrison>   Ada -> optimizer -> C -> a.out and is faster on
Harrison>   most benchmarks than the same written in the C it's
Harrison>   translated to.  (Sorry for ending with a preposition.)

Just to set the record straight, our (TeleSoft's) compilers do not
generate C code anywhere in the compilation process.  Roughly,
we follow these steps (taken almost directly from my TeleSoft T-shirt ;-)

		Source code 
		     |
		     v
		+-----------+
		| Front End |
		+-----------+
		     |
		     v
	 "High form" intermediate code
		     |
		     v
	       +-------------+
	       | Middle Pass |
	       +-------------+
		     |
		     v			 +-----------+
	 "Low form" DAGs & trees ------->| Optimizer |
		     |	  ^		 +-----------+
		     |    |                 |
		     |    +-----------------+
		     v
	     +-----------------+
	     | Code generation |
	     +-----------------+
		     |
		     v
		Object code

The optimizer transforms unoptimized 'low form' into optimized low form.
No C code anywhere in the transformation.  Don't want to ruin the soup.

As far as performance goes, yes, we did benchmark our Sun3/Sun3 code
against various C compilers.  It probably violates netiquette for me
to elaborate, though, so I'm outta here...

Disclaimer: I'm a tasking nimnod, not an optimizer guru.
	    (but I can do a decent ASCII box diagram, no?)

	.Bob.
-- 
Bob Kitzberger               Internet : rlk@telesoft.com
TeleSoft                     uucp     : ...!ucsd.ucsd.edu!telesoft!rlk
5959 Cornerstone Court West, San Diego, CA  92121-9891  (619) 457-2700 x163
------------------------------------------------------------------------------
"and when after three of four hours' amusement, I wou'd return to these 
speculations, they appear so cold and strain'd, and ridiculous, that I cannot
find in my heart to enter into them any further." 
				-- David Hume, "Treatise of Human Nature"

eachus@largo.mitre.org (Robert I. Eachus) (06/21/91)

In article <1336.677341010@osprey> munck@STARS.RESTON.UNISYS.COM (Bob Munck) writes:

   In INFO-ADA Digest V91 #165, eachus@mitre-bedford.arpa  (Robert I. Eachus)
   said:

   > ...  In general, for good compilers
   >and good benchmarks a program will run fastest in the language it was
   >originally written in.

   You CAN'T mean that exactly as written.  Surely the history of a benchmark
   has little to do with its results.  Maybe you mean something like "line-at-
   a-time, uncritical translation of a program from one language to another
   almost always results in poor performance of the translated program compared
   to the original"?

    Yep! I meant exactly what I said.  A "good" benchmark will be
balanced so that it tests all features "equally," but equality is
measured in the language of the benchmark.  Let's use Dhrystone, Ada,
and C instead of hypothetical names.  Dhrystone exercises many string
operations, some of which, such as taking a string's length, are
extremely quick in Ada.  When these tests are translated to C, where
string length is normally done with a library call that scans for a
terminating null, that operation is overrepresented in the benchmark.
On the flip side, things that are hard in Ada but easy in C are
slightly underrepresented.  None of this should surprise anyone who
does benchmarking; the same thing happens when you use an instruction
mix appropriate for architecture family A on machines from family B.
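
    To make the string-length point concrete, here is a small Ada
sketch.  It is illustrative only, not taken from Dhrystone: 'Length is
an attribute of the object and costs essentially nothing to evaluate,
while a literal translation to C has to call strlen, which scans for
the terminating null; Scan_Length below models that scan.

with Text_IO; use Text_IO;

procedure Length_Demo is

   S : constant String := "Dhrystone";

   --  What a line-at-a-time translation to C effectively does:
   --  scan a NUL-terminated buffer one character at a time (strlen).
   function Scan_Length (Buffer : String) return Natural is
      Count : Natural := 0;
   begin
      for I in Buffer'Range loop
         exit when Buffer (I) = ASCII.NUL;
         Count := Count + 1;
      end loop;
      return Count;
   end Scan_Length;

begin
   --  Constant time: the bounds are part of the object itself.
   Put_Line (Natural'Image (S'Length));

   --  Linear time: the strlen-style scan over an explicit terminator.
   Put_Line (Natural'Image (Scan_Length (S & ASCII.NUL)));
end Length_Demo;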

    I do have my doubts about "coerce the compiler to generate what I
    wanted."  What happens when a new compiler release doesn't coerce
    the same way?  Besides, playing with object code makes hair grow on
    your palms.

    But if I coerce the compiler to get speed now and the compiler
changes, all that happens is that the program may slow down.  In
practice, compiler writers don't take optimizations out when making
new releases; they may just catch cases they missed last time.  So if
C := Float(A) * B; is significantly slower than C := B * Float(A);
I change the code and report the "bug" so that next time I won't have
to worry.
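
    Spelled out as a complete unit, purely for illustration (the names
are made up, and which form is faster depends entirely on the
particular compiler release):

procedure Coerce_Demo is
   A : Integer := 7;
   B : Float   := 2.5;
   C : Float;
begin
   --  The form reported as slower on the hypothetical compiler:
   C := Float (A) * B;

   --  An equivalent form that happened to generate better code there.
   --  Same result, so the change is safe to keep even if a later
   --  release optimizes both forms identically.
   C := B * Float (A);
end Coerce_Demo;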

--

					Robert I. Eachus

with STANDARD_DISCLAIMER;
use  STANDARD_DISCLAIMER;
function MESSAGE (TEXT: in CLEVER_IDEAS) return BETTER_IDEAS is...