[comp.lang.ada] LRM 4.1.3 paragraphs 17-19

mfeldman@seas.gwu.edu (Michael Feldman) (02/11/91)

In article <1991Feb09.023913.524@mojsys.com> joevl@mojsys.com (Joe Vlietstra) writes:
>Loop statements implicitly declare a variable: the loop index.
>So the following works (but may get the programmer fired):
>	LOOP1:
>	for I in 1..10 loop
>	   LOOP2:
>	   for I in 1..10 loop
>	      X (LOOP1.I,LOOP2.I) := 100.0;
>	   end loop LOOP2;
>	end loop LOOP1;
>But LOOP1.I cannot be referenced outside of the loop.

Perhaps this will start a new thread. I'd fire the programmer for two reasons
(I presume Joe had both reasons in mind):

1. using the same name for the two loop indices is gratuitously obscure
   (thus violates a common-sense style principle); it doesn't even save a
   variable in this case.

2. IMHO one shouldn't use a loop like this in Ada to begin with, to
   clear an array to a single value. An aggregate like
       X := (others => (others => 100.0));
   expresses the intention of the programmer more clearly not only to the
   human reader but also to the compiler, which can - perhaps - generate
   better code accordingly.
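A minimal compilable sketch of the two forms under discussion; the type name and bounds are invented for illustration:

```ada
--  Sketch contrasting the nested-loop clear with the aggregate form.
--  The type Matrix and its bounds are invented for illustration.
procedure Clear_Demo is
   type Matrix is array (1 .. 10, 1 .. 10) of Float;
   X : Matrix;
begin
   --  The "old way": element-by-element assignment.
   for I in Matrix'Range (1) loop
      for J in Matrix'Range (2) loop
         X (I, J) := 100.0;
      end loop;
   end loop;

   --  The aggregate form: one statement, one intention.
   X := (others => (others => 100.0));
end Clear_Demo;
```

The aggregate tells the compiler the whole array gets one value, so it is free to emit a block fill rather than a scalar loop.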

What I figure could start a new thread is this: in your experience, what are
the sequential Ada constructs that (may) lead to _better_ optimized code
than the "old way" would? IMHO aggregates, universal assignment (i.e.
assignment operator for structured types) and universal equality are three
such things. Which others? What is the actual experience with code generators?
Do they exploit this possible optimization (yet)?

If I am correct, we in the Ada community should be using these arguments to
rebut the arguments of the anti-Ada crowd that Ada code is _necessarily_
worse than its equivalent in (pick your other language). I believe that
Ada code is not only no worse, it has the potential to be considerably 
better once the compilers and the programmers catch on.

Am I correct?

Mike

g_harrison@vger.nsu.edu ((George C. Harrison) Norfolk State University) (02/12/91)

In article <2704@sparko.gwu.edu>, mfeldman@seas.gwu.edu (Michael Feldman) writes:
> What I figure could start a new thread is this: in your experience, what are
> the sequential Ada constructs that (may) lead to _better_ optimized code
> than the "old way" would?

What do you mean by "_better_ optimized code"?  If you mean which constructs
_might_ lead the compiler to produce better-optimized code than "brute force"
methods like the one in the original posting, I'd agree with your
assessment below.  "Optimized" to programmers generally means what the compiler
(and often the linker) does to make a faster, smaller executable.  These
constructs, I believe, leave their implementations more open to
such optimizations.

> IMHO aggregates, universal assignment (i.e.
> assignment operator for structured types) and universal equality are three
> such things. 


The other kind of optimization is that which saves the programmer time and
makes him/her more effective.  This includes generics, separate compilation,
exceptions, overloading, and other supports for abstraction.  The problem is
that this kind of programmer optimization does not necessarily lead to
executable optimization.  

As an example...  Suppose I have a generic package "MATRIX_OPS" that performs
operations on matrices with elements from an instantiated field of elements. 
By using (and "withing") this package I can solve the system of linear
equations "Coefficient_Matrix * Unknowns_Vector = Constant_Vector" by simply
doing    

       Unknowns_Vector := Coefficient_Matrix ** (-1) * Constant_Vector;

That CERTAINLY can save a programmer a lot of time, and with some guarantees of
correctness, it will save his/her job.  However, the implementation of the
exceptions involved, the operators * and **, the hidden implementation of the
underlying field, etc. may have been done by someone else with only abstraction
in mind.  
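A hypothetical sketch of what the spec of such a MATRIX_OPS generic might look like; the names and profile are mine, not from any actual library:

```ada
--  Hypothetical sketch of the MATRIX_OPS generic described above;
--  all names and parameter profiles are invented for illustration.
generic
   type Element is private;
   Zero, One : Element;
   with function "+" (L, R : Element) return Element;
   with function "-" (L, R : Element) return Element;
   with function "*" (L, R : Element) return Element;
   with function "/" (L, R : Element) return Element;
package Matrix_Ops is
   type Matrix is array (Positive range <>, Positive range <>) of Element;
   type Vector is array (Positive range <>) of Element;

   Singular : exception;

   function "*"  (M : Matrix; V : Vector)  return Vector;
   function "**" (M : Matrix; Power : Integer) return Matrix;
   --  M ** (-1) yields the inverse, raising Singular if none exists.
end Matrix_Ops;
```

Instantiating it with Float (or a rational type, or a Galois field) gives the one-line solve shown above; how fast "**" runs is entirely up to whoever wrote the body.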

Ada, unlike most other languages, seems to be (or is getting to be) very
programmer-dependent.  That is, as the compilers and program libraries start
maxing out on optimization, then programmers can really take great pride in
using abstractions effectively.  

I hope I haven't changed Mike's change-in-thread topic, but I thought it
important to make a note on the two kinds of optimization.

-- George C. Harrison -------------- || -- My opinions and observations --
---|| Professor of Computer Science  || -- Only. -------------------------
---|| Norfolk State University, ---- || ----------- Pray for Peace -------
---|| 2401 Corprew Avenue, Norfolk, Virginia 23504 -----------------------
---|| INTERNET:  g_harrison@vger.nsu.edu ---------------------------------

yow@riddler.Berkeley.EDU (Billy Yow 283-4009) (02/13/91)

... [Stuff Deleted]
  
>assert that Ada executables are _necessarily_ slow. They are actually pleased 
>when I point out the "little" ways in which Ada can be faster.
>They say "hmmmm... I never focused on that..."

What are the "little" ways? 

                           Bill Yow
                           yow@sweetpea.jsc.nasa.gov
                           
My opinions are my own!

mfeldman@seas.gwu.edu (Michael Feldman) (02/13/91)

In article <1991Feb12.154418@riddler.Berkeley.EDU> yow@riddler.Berkeley.EDU (Billy Yow 283-4009) writes:
>  
>>assert that Ada executables are _necessarily_ slow. They are actually pleased 
>>when I point out the "little" ways in which Ada can be faster.
>>They say "hmmmm... I never focused on that..."
>
>What are the "little" ways? 
>
Well, I think that universal assignment and equality testing are one way.
(That's what started this thread...)

Another is the parameter passing scheme, in which reference semantics can
be used to pass arrays to IN parameters with no danger that the actual
will be changed (because IN parameters can't be written to).
This means less copying (though a colleague of mine pointed out that the
extra indirection could actually make the program _slower_).
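A small sketch of the point about IN parameters (names invented): the compiler may pass the array by reference, with no copy, yet the body still cannot write to it.

```ada
--  Sketch: a large array passed as an IN parameter.  The compiler is
--  free to pass it by reference -- no copy -- but the body cannot
--  modify it, so the actual is safe either way.
procedure Sum_Demo is
   type Big_Array is array (1 .. 100_000) of Float;

   function Sum (A : in Big_Array) return Float is
      Total : Float := 0.0;
   begin
      for I in A'Range loop
         Total := Total + A (I);
         --  A (I) := 0.0;  -- illegal: cannot assign to an IN parameter
      end loop;
      return Total;
   end Sum;

   Data : Big_Array := (others => 1.0);
   S    : Float;
begin
   S := Sum (Data);   --  the actual Data cannot be changed by the call
end Sum_Demo;
```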

I think pragma INLINE could be considered one of these, too, in that
it can reduce the number of procedure calls (at the cost of space,
of course) without giving up the abstraction.
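A sketch of that use of pragma INLINE on a small abstraction-preserving operation; the package and names are invented:

```ada
--  Sketch: pragma INLINE keeps the abstraction (Counter is private)
--  while letting the compiler expand calls in place.  Names invented.
package Counters is
   type Counter is private;
   procedure Increment (C : in out Counter);
   pragma Inline (Increment);   --  advise the compiler to expand calls
private
   type Counter is new Natural;
end Counters;

package body Counters is
   procedure Increment (C : in out Counter) is
   begin
      C := C + 1;   --  expanded at each call site; no call overhead
   end Increment;
end Counters;
```

Callers still see only the private type; the cost of the abstraction barrier can be zero at run time.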

I had a few more, but can't think of 'em offhand. Getting old...

Mike

jncs@uno.edu (02/14/91)

In article <2715@sparko.gwu.edu>, mfeldman@seas.gwu.edu (Michael Feldman) writes:
>In article <1991Feb12.154418@riddler.Berkeley.EDU> yow@riddler.Berkeley.EDU (Billy Yow 283-4009) writes:
>>  
>
>Another is the parameter passing scheme, in which reference semantics can
>be used to pass arrays to IN parameters with no danger that the actual
>will be changed (because IN parameters can't be written to).
>Less copying (though a colleague of mine pointed out that the extra
>indirection could actually make the program _slower_.)
>
The LRM leaves it to the implementor to decide how parameter passing is
actually IMPLEMENTED in the case of structured types and IN parameters. I
may use IN for arrays hoping for copy semantics, but the LRM tells me not
to trust it!
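A sketch of the kind of program the LRM is warning about (names invented): its result depends on whether the array is copied, which is exactly why such a program is erroneous.

```ada
--  Sketch: a program whose effect depends on copy vs. reference
--  semantics for the IN parameter A.  Per the LRM this dependence
--  makes the program erroneous.  All names invented.
procedure Aliasing_Demo is
   type Vec is array (1 .. 3) of Integer;
   G : Vec := (1, 2, 3);

   procedure Peek (A : in Vec; Before, After : out Integer) is
   begin
      Before := A (1);
      G (1)  := 99;         --  write through the enclosing-scope alias
      After  := A (1);      --  copy semantics: 1; reference: 99
   end Peek;

   B, C : Integer;
begin
   Peek (G, B, C);          --  B and C may or may not differ
end Aliasing_Demo;
```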

J. Nino

jls@yoda.Rational.COM (Jim Showalter) (02/15/91)

I wouldn't fire the programmer, but I WOULD send him/her to reeducation camp.