[comp.edu] Teaching Assembler

debray@arizona.UUCP (06/03/87)

My impression, from many of the responses I've seen on this issue, is that
a person with a CS degree isn't expected to do anything other than
programming.  So why bother with four-year degree programs that require
students to wade through irrelevant-to-the-"real"-world courses like automata
theory and complexity theory, anyway?  Maybe they need to know nothing more
than Cobol, Fortran, and C, which any reasonable two-year college could give?

I'm astounded at the number of CS (graduate!) students who're so hidebound
by their BASIC heritage that they can't write a recursive program to save
their lives, and who're so conditioned to think in terms of destructive
assignment that any mention of declarative or applicative programming only
evokes slack-jawed stares.  I find this frustrating at best, terrifying at
worst.
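
For concreteness, a minimal sketch (in Common Lisp, though any dialect
would do) of the two styles I mean: the same list reversal written
applicatively, with no assignment anywhere, and in the
destructive-assignment style that seems to be all some students can manage.

    ;; Applicative: the result is built by recursion; nothing is assigned.
    (defun rev-applicative (lst)
      (if (null lst)
          '()
          (append (rev-applicative (cdr lst)) (list (car lst)))))

    ;; Imperative: an explicit loop repeatedly assigning to RESULT.
    (defun rev-imperative (lst)
      (let ((result '()))
        (dolist (x lst result)
          (setq result (cons x result)))))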

gregg@a.cs.okstate.edu writes:
> More than anything else, it [thinking at the assembly language level]
> has caused me to be extremely conscious of the execution time of
> all the code that I write.  I know that each line of code is not just
> magic, but rather one or more machine instructions that take TIME to
> execute.

I think a more productive approach would be to (a) spend more time
designing the algorithms and data structures (a bubble sort, even if
microcoded, can only do so well!); (b) code the program in a high level
language to reduce the development time; and (c) use the time you've
managed to save in this way to profile your program and recode hot
spots in assembler if necessary.
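
To make (a) concrete, a sketch in Common Lisp: a bubble sort performs
O(n^2) comparisons no matter how cleverly its inner loop is coded, so
replacing the algorithm buys far more than recoding it in assembler could.

    ;; Bubble sort: in-place, but O(n^2) comparisons regardless of
    ;; how the inner loop is microcoded.
    (defun bubble-sort (v)
      (let ((n (length v)))
        (dotimes (pass (1- n) v)
          (dotimes (i (- n pass 1))
            (when (> (aref v i) (aref v (1+ i)))
              (rotatef (aref v i) (aref v (1+ i))))))))

    ;; The algorithmic fix: (sort v #'<) -- typically O(n log n),
    ;; and no assembler in sight.
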
-- 
Saumya Debray		CS Department, University of Arizona, Tucson

     internet:   debray@arizona.edu
     uucp:       {allegra, cmcl2, ihnp4} !arizona!debray

shebs@utah-cs.UUCP (06/04/87)

In article <1748@megaron.arizona.edu> debray@arizona.edu (Saumya Debray) writes:

>I'm astounded at the number of CS (graduate!) students who're so hidebound
>by their BASIC heritage that they can't write a recursive program to save
>their lives, and who're so conditioned to think in terms of destructive
>assignment that any mention of declarative or applicative programming only
>evokes slack-jawed stares.  I find this frustrating at best, terrifying at
>worst.

I agree, but with the qualification that the folks working in the various
styles of declarative programming have spent so much time pontificating on
the advantages of that style that multi-order-of-magnitude slowdowns are
usually ignored (your work is the exception, not the rule, Saumya!) and "large
programs" may be all of two pages.  Utah teaches functional programming
with Lisp, Scheme, and Miranda in several undergrad courses, but nevertheless
all we seem to produce are C hackers.  Hardly surprising, since the students
write Fibonacci and merge sorts in functional languages and wait minutes
for results, then go to the C hacking classes and write graphical editors,
spreadsheets, and so forth, all of which are very speedy.  I think the
students don't have any trouble drawing conclusions from the side-by-side
comparison of elegant/useless with fast/useful...
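
The canonical offender, in case anyone has escaped it, is the doubly
recursive Fibonacci (sketched here in Lisp):

    ;; Elegant, and exponential in N: (fib 30) makes roughly 2.7
    ;; million recursive calls.  On a slow interpreter that's the
    ;; "wait minutes for results" experience described above.
    (defun fib (n)
      (if (< n 2)
          n
          (+ (fib (- n 1)) (fib (- n 2)))))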

Of course, this isn't inherent in declarative languages, it's more that
the surrounding culture offers no rewards for implementors doing mundane
things like putting in the ability to call curses or X, and compiling
to efficient native code.  Too bad...

>I think a more productive approach would be to (a) spend more time
>designing the algorithms and data structures (a bubble sort, even if
>microcoded, can only do so well!); (b) code the program in a high level
>language to reduce the development time; and (c) use the time you've
>managed to save in this way to profile your program and recode hot
>spots in assembler if necessary.

Amen, but I don't recall many high-level language (Lisp, Prolog, Lucid,
etc.) implementations that even *allow* the calling of assembly language
routines!  (Too "impure" I suppose)

>Saumya Debray		CS Department, University of Arizona, Tucson

							stan shebs
							shebs@cs.utah.edu

debray@arizona.UUCP (06/05/87)

In article <4625@utah-cs.UUCP>, shebs@utah-cs.UUCP (Stanley Shebs) writes:
> [referring to slow implementations of declarative languages] I think the
> students don't have any trouble drawing conclusions from the side-by-side
> comparison of elegant/useless with fast/useful...
> 
> Of course, this isn't inherent in declarative languages, it's more that
> the surrounding culture offers no rewards for implementors doing mundane
> things like putting in the ability to call curses or X, and compiling
> to efficient native code.  Too bad...

Part of the problem is that the declarative language implementations a
lot of people see are old and hardly representative of the state of the
art.  E.g., most people's experience of Prolog is limited to C-Prolog or
UNSW Prolog, both of which are interpreted and hence can't really compete in
terms of speed.  Recently announced Prolog systems include one from Belgium
that's claimed to do 200,000 LIPS on a Sun-3/200, and a system by Mike
Newton of Caltech that gets over 900,000 LIPS on an IBM 3090 ("LIPS" --
"Logical Inferences per Second" -- is a measure commonly used to benchmark
Prolog systems.  C-Prolog does about 1500 LIPS on a Sun-3).  Such systems
look like they'd be competitive with current implementations of procedural
languages, but of course they typically cost big bucks, so students don't
see them, and the old, outdated biases live on.
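
(For scale, and assuming LIPS on the same benchmark scale linearly:
200,000 versus C-Prolog's 1500 is a factor of better than 130 on
broadly comparable hardware.)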

(I don't know much about the speeds of the latest Lisp systems: got any
numbers, Stan?)

The other complaint I've heard about languages like Lisp and Prolog is that
they use gobs of memory, and spend a lot of time collecting garbage.  But
even that isn't entirely valid any more, since recent work by Paul Hudak
(for applicative languages) and Maurice Bruynooghe (for logic languages)
shows that a smart compiler can replace much of the data structure
copying with destructive assignment, drastically reducing the amount of
garbage generated.
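
The flavor of the transformation, as a Common Lisp sketch (the analyses
themselves are for applicative and logic languages, but the idea carries
over): when the compiler can prove that no other reference to the old
structure survives, a copying update can be compiled into an in-place one.

    ;; What the programmer writes: a pure update that copies the spine.
    (defun set-nth-pure (lst n val)
      (if (zerop n)
          (cons val (cdr lst))
          (cons (car lst) (set-nth-pure (cdr lst) (1- n) val))))

    ;; What the compiler may emit when LST is provably dead afterwards:
    ;; a destructive update -- no new cons cells, hence no garbage.
    (defun set-nth-destructive (lst n val)
      (setf (nth n lst) val)
      lst)
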
-- 
Saumya Debray		CS Department, University of Arizona, Tucson

     internet:   debray@arizona.edu
     uucp:       {allegra, cmcl2, ihnp4} !arizona!debray

shebs@utah-cs.UUCP (06/07/87)

In article <1753@megaron.arizona.edu> debray@arizona.edu (Saumya Debray) writes:

>Recently announced Prolog systems include one from Belgium
>that's claimed to do 200,000 LIPS on a Sun-3/200, and a system by Mike
>Newton of Caltech that gets over 900,000 LIPS on an IBM 3090 ("LIPS" --
>"Logical Inferences per Second" -- is a measure commonly used to benchmark
>Prolog systems.  C-Prolog does about 1500 LIPS on a Sun-3).  Such systems
>look like they'd be competitive with current implementations of procedural
>languages, but of course they typically cost big bucks, so students don't
>see them, and the old, outdated biases live on.

Amen!  Perhaps language outfits should consider gifts to universities of
the sort that hardware types have profited from for years...

On a tangent here: I wish the use of LIPS would die out.  Nobody seems to
have produced a formula relating LIPS to program performance, that is,
how fast an FFT goes or how long it takes to compute all the possible
first moves in chess, given only the rules of the game.

>(I don't know much about the speeds of the latest Lisp systems: got any
>numbers, Stan?)

Lisp systems don't use FCPS (Function Calls Per Second - add 'u's for
entertainment), but Dick Gabriel's Lisp benchmark book does have a list
of various languages running the trivial but highly recursive TAK program.
The list goes from micros to Crays with speeds ranging over three orders
of magnitude.  It includes Franz with small number arithmetic beating C
on a VAX 780 by about 20%, PSL on a 750 being almost twice as fast as C
(Franz being less impressive), and PSL beating C on 68000s as well.  None
of the languages did better than machine code, though some came close
(TAK is about one page in machine language, so easy to optimize).
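
For anyone who hasn't seen it, TAK is just this (modulo variable names
and dialect); nearly all of its time goes into function-call overhead,
which is exactly what it measures:

    (defun tak (x y z)
      (if (not (< y x))
          z
          (tak (tak (1- x) y z)
               (tak (1- y) z x)
               (tak (1- z) x y))))

    ;; The standard benchmark call is (tak 18 12 6): tens of thousands
    ;; of recursive calls and almost no actual computation.
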
The Orbit folks at Yale have also presented some impressive results
for Scheme compilation in their 1986 Compiler Conference paper, although
I don't quite believe that good closure analysis is the reason.
Finally, there are some rumors to the effect that Lucid's compiler
is within a few percent of good floating-point Fortran code, but I
haven't seen any details.  The status seems to be that people know
what to do to get the performance, but proprietary code is the only
way to get the money/time to do it.  Also, Joe Student's program is
still going to be slow, because you have to add declarations, set obscure
flags, redefine functions, and do other strange things to get that
good performance (a detail often glossed over by implementors).
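
A sketch of the sort of incantations I mean (generic Common Lisp; the
details vary by implementation):

    ;; What Joe Student writes:
    (defun sum-squares (v)
      (let ((s 0))
        (dotimes (i (length v) s)
          (setq s (+ s (* (aref v i) (aref v i)))))))

    ;; What the benchmark paper actually timed: enough declarations
    ;; that the compiler can open-code fixnum arithmetic.
    (defun sum-squares-fast (v)
      (declare (type (simple-array fixnum (*)) v)
               (optimize (speed 3) (safety 0)))
      (let ((s 0))
        (declare (fixnum s))
        (dotimes (i (length v) s)
          (declare (fixnum i))
          (setq s (the fixnum
                       (+ s (the fixnum
                                 (* (aref v i) (aref v i)))))))))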

>The other complaint I've heard about languages like Lisp and Prolog is that
>they use gobs of memory, and spend a lot of time collecting garbage.  But
>even that isn't entirely valid any more, since recent work by Paul Hudak
>(for applicative languages) and Maurice Bruynooghe (for logic languages)
>shows that a smart compiler can replace much of the data structure
>copying with destructive assignment, drastically reducing the amount of
>garbage generated.

This is where Lisp is revealed to be lower-level than one might like.
Much generation of garbage can be avoided by using a C-like style, but
of course that misses the whole point of using Lisp in the first place...
I believe the new techniques are somewhat limited in their applicability
and could be defeated.  To repeat a point touched on earlier, it's not
enough to show good speed on carefully selected problems; the students
should not have to take performance hits just because they didn't use
some strange declaration.  For instance, Hewlett-Packard's Common Lisp
has a "downward-funarg" declaration that makes all closures stack-allocated.
Fortunately, I've never had to explain that declaration to sophomores!
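
For anyone who hasn't met the term: a downward funarg is a closure
passed only down the call chain, never returned or stored, so its
environment could safely live on the stack.  A sketch:

    ;; The LAMBDA here is called only during the extent of FIND-BIG,
    ;; so its closure could be stack-allocated rather than
    ;; heap-allocated and garbage-collected later.
    (defun find-big (threshold lst)
      (find-if #'(lambda (x) (> x threshold)) lst))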

>Saumya Debray		CS Department, University of Arizona, Tucson

							stan shebs
							shebs@cs.utah.edu