[net.lang.f77] 4.2 f77 compiler comments

woods@hao.UUCP (Greg Woods) (05/29/84)

  I implemented the EISPACK eigenvalue package under 4.2 here, and there are
definitely bugs in the optimizer, and we too have installed all of Donn
Seeley's bug fixes. When the optimized code *does* run properly, we have 
observed some of our programs running twice as fast when using the optimized
EISPACK. If we could only find *all* the bugs in the optimizer, we would
have a *real* FORTRAN for UNIX! :-) BTW, as far as I can tell, without the
optimizer our compiler is almost fully reliable, now that we fixed a bug that 
kept integer exponentiation from working. One remaining problem is in
equivalences, which I have mentioned before. If you have a non-normalized 
floating point number on the right-hand side of an assignment, occasionally 
the result will be a zero word even though the RHS was non-zero. Example:

       equivalence (f,i), (g,j)
       i = 100
       g = f
       write(6,*) j
       end

will print 0. On the other hand, if you change "100" to "200", then it will
print "200" as you would expect. The cutoff point is 128.

	   Greg "FORTRAN hacker" Woods
-- 
{ucbvax!hplabs | allegra!nbires | decvax!stcvax | harpo!seismo | ihnp4!stcvax}
       		        !hao!woods
   "Will we leave this place an empty stone?"

trh@ukc.UUCP (T.R.Hopkins) (05/31/84)

<<
<< After the storm ....
<< The f77 that came with our 4.2 distribution tape was very buggy.
<< I've not kept up with the bug fixes for f77, but does anyone know if
<< there is a reliable, working and relatively safe version of the
<< f77 compiler?   We are currently running the 4.1 version - it works
<< and we are happy - but we will switch to the 4.2 version if it produces
<< better code and IF IT IS RELIABLE.
<< 

I've been implementing the Numerical Algorithms Group (NAG) Library
of numerical subroutines using the 4.2 f77 compiler. This consists
of approximately 1000 routines and a very large set of test programs.
To start with, I could get the whole library to work with the
optimizer switched off, but only about 50% of the library to work
with the optimizer switched on.
Since we have implemented all the bug fixes which have come over the
net (thanks due here to Donn Seeley @ sdchema) the number of failures
is down to about 5-10%. I haven't been able (yet) to isolate the
remaining problems.
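One way to isolate the remaining failures (my suggestion, not a procedure either poster describes) is to bisect: recompile half the routines with the optimizer on and the rest with it off, rerun the failing test, and narrow down which optimized object is the culprit. The predicate below is hypothetical -- in practice it would shell out to something like `f77 -O` on the chosen subset, relink, and run the test program -- but the search logic itself is this simple, assuming a single culprit per test:

```python
def find_culprit(routines, fails):
    """Binary search for the one routine whose optimized object
    makes a test program fail.

    `routines` is an ordered list of source-file names;
    `fails(subset)` is a caller-supplied predicate that rebuilds
    the program with only `subset` compiled optimized (e.g. via
    f77 -O) and reports whether the test then fails.  Assumes
    exactly one culprit routine is present in `routines`.
    """
    lo, hi = 0, len(routines)
    while hi - lo > 1:
        mid = (lo + hi) // 2
        # If optimizing only the lower half breaks the test,
        # the culprit is in that half; otherwise it is in the other.
        if fails(routines[lo:mid]):
            hi = mid
        else:
            lo = mid
    return routines[lo]
```

With ~1000 routines this needs only about ten rebuild-and-test cycles per failure, instead of one per routine.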

Conclusion: the optimizer is still flaky. I don't have a good feel
for whether, when it does work, it produces better (faster?) code.
Anyone like to comment?


Tim Hopkins,
Computing Laboratory,
University of Kent,
Canterbury CT2 7NF
Kent
U.K.

{ trh@ukc.UUCP }