[comp.lang.misc] A note for those not consumed by efficiency

robison@m.cs.uiuc.edu (03/23/90)

Last year I had the following experience with a group-theory computation:

	Monday:      Start algorithm A running.
	Tuesday:     Discover algorithm B, code it, and start it running.
	Saturday:    Algorithm B finishes.
	Next Monday: Algorithm A finishes.

[The days of the week may actually have been different.]
In this case optimization was clearly worth my time to
obtain the answer more quickly.  I let algorithm A finish
as a check on algorithm B.  Of course, the improvement from
A to B was more than bit twiddling.

Arch D. Robison
University of Illinois at Urbana-Champaign

UUCP: {pur-ee,convex}!uiucdcs!robison
Internet: robison@CS.UIUC.EDU

jensting@skinfaxe.diku.dk (Jens Tingleff) (03/26/90)

robison@m.cs.uiuc.edu writes:
>Last year I had the following experience with a group-theory computation:
>	Monday:      Start algorithm A running.
>	Tuesday:     Discover algorithm B, code it, and start it running.
[..]
>			Of course, the improvement from
>A to B was more than bit twiddling.

Ahh, so the difference was in the algorithm(?).

Let's agree on what we're dealing with here: EITHER the use of
different algorithms for the same job, OR ``clever'' writing
to make the same algorithm go 10 % faster (without any measurement data
to support the speed improvement claim ... (1/2 ;^) ).
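To make that split concrete, here is a made-up illustration in C (the
function names and the unroll factor are mine, not from any real
program): a plain linear search versus a ``cleverly'' unrolled one.

	#include <stddef.h>

	/* Plain linear search: the obvious way to write it. */
	int lsearch_int(const int *a, size_t n, int key)
	{
	    size_t i;
	    for (i = 0; i < n; i++)
	        if (a[i] == key)
	            return (int) i;
	    return -1;
	}

	/* "Clever" version: the same algorithm, unrolled by four.
	   Perhaps 10 % faster, certainly harder to read. */
	int lsearch_int_unrolled(const int *a, size_t n, int key)
	{
	    size_t i = 0;
	    while (i + 4 <= n) {
	        if (a[i]     == key) return (int) i;
	        if (a[i + 1] == key) return (int) (i + 1);
	        if (a[i + 2] == key) return (int) (i + 2);
	        if (a[i + 3] == key) return (int) (i + 3);
	        i += 4;
	    }
	    for (; i < n; i++)        /* leftover elements */
	        if (a[i] == key)
	            return (int) i;
	    return -1;
	}

Replacing either of these with a binary search on a sorted table would
fall in the other category: a different algorithm, and the only kind of
change likely to be worth much more than 10 %.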

What I objected to in a previous posting to comp.lang.c
was ``clever'' coding to make an algorithm fast(er). I find,
every now and then, in other people's programs (as well as my own),
kludges that are supposed to make the program go a little bit faster.
One of the few certain effects of these kludges is that the code
becomes more difficult to read, port, understand and use.
All of this stems from bad design (and should be avoided), but I still
feel that the point against efficiency as a concern in the early
stages HAS to be made.
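A tiny, made-up example of the kind of kludge I mean (hypothetical C,
not lifted from anybody's program): swapping two integers.

	/* Plain swap: obvious at a glance. */
	void swap_plain(int *x, int *y)
	{
	    int tmp = *x;
	    *x = *y;
	    *y = tmp;
	}

	/* "Clever" XOR swap: saves a temporary variable, but silently
	   produces zero if x and y point to the same object, and is
	   rarely any faster with a real compiler. */
	void swap_clever(int *x, int *y)
	{
	    *x ^= *y;
	    *y ^= *x;
	    *x ^= *y;
	}

The second one buys nothing measurable and costs the next reader a
double take.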

One of my favourite horror stories concerns a FORTRAN program for
numerical analysis, written by one of the top scientists.
This program had no SUBROUTINEs; instead it jumped back and forth,
``returning'' under control of global flags. YEEECCCHH!
It may have been faster than a SUBROUTINE-ridden program, but
the point is that the person who tried to use it had to give up
on account of the inherent illegibility. That made the program
a lot slower ... .
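I can't reproduce the original FORTRAN, but the control flow looked
roughly like this; the sketch below is mine, in C, with invented names.

	#include <stdio.h>

	int return_site;        /* global flag: where to "return" to   */
	double x, result;       /* global "arguments" and "results"    */

	void flag_style(void)
	{
	    x = 2.0;
	    return_site = 1;
	    goto square;        /* "call" */
	back1:
	    printf("%g\n", result);

	    x = 3.0;
	    return_site = 2;
	    goto square;        /* "call" again */
	back2:
	    printf("%g\n", result);
	    return;

	square:                 /* the "subroutine" */
	    result = x * x;
	    if (return_site == 1) goto back1;
	    else                  goto back2;
	}

	/* The readable alternative: an ordinary function. */
	double square_fn(double v) { return v * v; }

	int main(void)
	{
	    flag_style();
	    printf("%g %g\n", square_fn(2.0), square_fn(3.0));
	    return 0;
	}

Multiply that by a few hundred ``call'' sites and you have the program
the poor user gave up on.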

Of course, if you need the speed (e.g. realtime applications
on fixed hardware) you should use clever coding etc. Better to use
a clever routine in (say) C than to spend a lot of time writing
it in assembler (which is bad from a portability point of view).
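As an example of the kind of clever-but-portable routine I have in mind
(my own illustration, nothing to do with any particular realtime
system): counting the set bits of a word with the n & (n - 1) trick.
In C it runs on anything; the same routine in assembler would be
married to one machine.

	/* Count set bits: n & (n - 1) clears the lowest set bit, so the
	   loop runs once per one-bit rather than once per bit position. */
	unsigned popcount(unsigned long n)
	{
	    unsigned count = 0;
	    while (n != 0) {
	        n &= n - 1;
	        count++;
	    }
	    return count;
	}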

In quite a lot of programs (e.g. number crunching) the time is
spent in only a few functions. In the rest of the program, no particular
effort should be made to improve speed. So, for instance, I think it's
quite all right to use bubblesort (GASP, BOOOOOH) if I only sort a few
elements once, consuming maybe 0.1 % of the total runtime ... .
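For the record, the bubblesort I am (half-)defending looks like this;
a sketch in C.

	#include <stddef.h>

	/* Bubblesort: O(n^2) in general, but perfectly adequate for a
	   handful of elements sorted once in a non-critical corner. */
	void bubblesort(int *a, size_t n)
	{
	    size_t i, j;
	    for (i = 0; i + 1 < n; i++)
	        for (j = 0; j + 1 < n - i; j++)
	            if (a[j] > a[j + 1]) {
	                int tmp = a[j];
	                a[j] = a[j + 1];
	                a[j + 1] = tmp;
	            }
	}

A dozen lines, obviously correct, and not worth replacing unless a
profiler says otherwise.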

In an ideal world, there would be no problem. All programmers would,
in the analysis preceding the actual coding, find the parts needing
speed, and use cleverness ONLY in those parts.

In the real world, other factors are important, such as the time
available for writing the program, reliability, etc.

	Jens

jensting@diku.dk is
Jens Tingleff MSc EE, Research Assistant at DIKU
	Institute of Computer Science, Copenhagen University
Snail mail: DIKU Universitetsparken 1 DK2100 KBH O