[comp.lang.misc] Poor Algorithms

smith@COS.COM (Steve Smith) (02/25/88)

In article <170500013@uiucdcsb> robison@uiucdcsb.cs.uiuc.edu writes:

>Some friends and I conjecture that a lot of supercomputer time 
>is spent running poor algorithms.  Anyone have any empirical
>evidence or anecdotes?

>Arch D. Robison


Well, a DEC 10 ain't a supercomputer, but the point is still valid.

A friend of mine ran into this one while he was working in a university
computer center.  Seems they had an Oceanography professor who did a lot
of flow modeling.  Like most such programs, his used a big xyz array to
handle flow elements.  As it turned out, one xy plane fitted nicely into
one virtual memory page.  Unfortunately, for no good reason, he had
chosen to scan it first in the x direction and second in the z
direction.  Result - page faults up the wazoo!  When the people at the
computer center pointed this out and suggested changing the scanning
order, he got downright abusive.  After all, those geeks in the computer
center know nothing about Oceanography!

For some reason (:-), his priority tended to get bumped down.
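
(For the record, the fix is just loop ordering.  Here is a sketch in
C -- not his actual code, and the dimensions are invented -- assuming
each xy plane is contiguous in memory and stands in for one page:)

    #define NX 64
    #define NY 64
    #define NZ 64

    /* flow[z][y][x]: x varies fastest in memory, so one xy plane
       (fixed z) is one contiguous chunk -- the stand-in for a page. */
    static double flow[NZ][NY][NX];

    /* His order: x first, then z, then y.  Every bump of z jumps to
       a different plane, i.e. a different page, so each page gets
       faulted in NY separate times. */
    double scan_bad(void)
    {
        int x, y, z;
        double s = 0.0;
        for (y = 0; y < NY; y++)
            for (z = 0; z < NZ; z++)
                for (x = 0; x < NX; x++)
                    s += flow[z][y][x];
        return s;
    }

    /* Swap the y and z loops and each page is touched once,
       start to finish. */
    double scan_good(void)
    {
        int x, y, z;
        double s = 0.0;
        for (z = 0; z < NZ; z++)
            for (y = 0; y < NY; y++)
                for (x = 0; x < NX; x++)
                    s += flow[z][y][x];
        return s;
    }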
-- 
                -- Steve
(smith@cos.com)    ({uunet sundc decuac hqda-ai hadron}!cos!smith)
"Truth is stranger than fiction because fiction has to make sense."

fred@brl-smoke.ARPA (Fred Bunn) (02/25/88)

My major production program took about 10 hours to run on a mid-size
computer. We recently did some timing runs to see
where it spent its time; after several days of judicious changes
to about 150 lines of code, we managed to cut the run time to 1.8 hours.

Since we run the program reasonably frequently and it is used at a half
dozen other sites, we will be saving a good deal of resources. More importantly
for my friends and neighbors, I'm not hogging cycles.
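
(For anyone tempted to try the same: you don't need fancy tools to
find out where a program spends its time.  A crude harness like this
C sketch -- the routine here is a made-up stand-in for your own
suspect code -- already points a finger:)

    #include <stdio.h>
    #include <time.h>

    /* Stand-in for whatever routine you suspect; replace with a
       call into your own code. */
    static void suspect_routine(void)
    {
        volatile double x = 0.0;
        long i;
        for (i = 0; i < 100000L; i++)
            x += i * 0.5;
    }

    int main(void)
    {
        clock_t start, stop;
        int i;
        int reps = 100;              /* repeat so the clock can see it */

        start = clock();
        for (i = 0; i < reps; i++)
            suspect_routine();
        stop = clock();

        printf("%.4f seconds per call\n",
               (double)(stop - start) / CLOCKS_PER_SEC / reps);
        return 0;
    }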

Fred Bunn
Ballistic Research Lab

nevin1@ihlpf.ATT.COM (00704a-Liber) (02/26/88)

[Note:  this article has been posted to two groups.  When you post
followups, please pick the group which your followup is more relevant to.]


In article <1010@cos.COM> smith@cos.UUCP (Steve Smith) writes:

>Seems they had an Oceanography professor who did a lot
>of flow modeling.  Like most such programs, his used a big xyz array to
>handle flow elements.  As it turned out, one xy plane fitted nicely into
>one virtual memory page.  Unfortunately, for no good reason, he had
>chosen to scan it first in the x direction and second in the z
>direction.  Result - page faults up the wazoo!

CS people do this kind of stuff, too.  In college, CS students are taught
to write their code as OS/architecture-independent as possible; i.e., they
shouldn't write programs that are specific to the machine they are running
on.  Most colleges don't offer an OS course until senior year.

So what happens when Joe Programmer needs to do a 'real' application?  He
writes it with the same old philosophy in mind:  Make it OS independent,
make it architecture independent, make it language-implementation
independent, make it machine independent, etc.

But real applications *are* for specific machines.  I'm not saying that
machine *dependent* code should be written, though (code should be as
portable as is reasonable).  I am saying that code should be written to
take advantage of the primary OS or architecture that it is running on.

How many novice Pascal programmers use many global vars because it saves
memory?  On a paging system, though, you may need more pages in real memory
because of the way the chaining of variables is done in Pascal.

Or look at C.  A lot of programmers use a char or a short for a local
variable where an int would do (although there are some algorithms where
this may not be desirable).  But the runtime overhead that is incurred
(such as that due to implicit variable coercion) is probably not worth it.
Another point: on a paging system, it may be better to have a static lookup
table than to generate data every time it is needed.
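
(To make the char-versus-int point concrete, a sketch.  Whether the
narrow counter actually costs anything depends entirely on your
compiler and machine, so treat this as an illustration, not a
benchmark:)

    #include <stddef.h>

    /* A char loop counter "saves" a few bytes, but on many machines
       the compiler must keep truncating it back to 8 bits to honor
       the declared width -- and the loop never terminates if n > 255,
       since the counter silently wraps. */
    long sum_char_counter(const int *a, size_t n)
    {
        long s = 0;
        unsigned char i;
        for (i = 0; i < n; i++)
            s += a[i];
        return s;
    }

    /* An int-sized counter sits in a register at the machine's
       natural word width; no widening or narrowing per iteration. */
    long sum_int_counter(const int *a, size_t n)
    {
        long s = 0;
        size_t i;
        for (i = 0; i < n; i++)
            s += a[i];
        return s;
    }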

When trying to write efficient algorithms for a specific purpose, you must
take into account the OS, architecture, language-implementation, machine,
etc.
-- 
 _ __			NEVIN J. LIBER	..!ihnp4!ihlpf!nevin1	(312) 510-6194
' )  )				"The secret compartment of my ring I fill
 /  / _ , __o  ____		 with an Underdog super-energy pill."
/  (_</_\/ <__/ / <_	These are solely MY opinions, not AT&T's, blah blah blah

sommar@enea.se (Erland Sommarskog) (03/02/88)

Nevin J Liber (nevin1@ihlpf.UUCP) writes:
>[Note:  this article has been posted to two groups.  When you post
>followups, please pick the group which your followup is more relevant to.]

And I deleted comp.edu and added comp.software-eng, which seems to be
a more appropriate place for further discussion.

>How many novice Pascal programmers use many global vars because it saves
>memory?  On a paging system, though, you may need more pages in real memory
>because of the way the chaining of variables is done in Pascal.

This reminds me of when I did my degree project. My task was to do a
capacity analysis of the file system of a 68000-based machine. Since
this file system was quite buggy I had to check the sources to see
what happened. The language used was EriPascal, a Pascal extension
with the Modula-2 module concept and real-time primitives.
  Nevin would have loved them. They used a lot of global variables.
A typical procedure could have some local variables, but mostly they
used the global ones, even for typical local purposes. To
make it even more fun, all variables had one-letter prefixes like
cnumber, cix etc. And on top of it all, the modules were *huge*. 6000
lines or so.
  Now, was this a fast and speedy file system? Laugh. Reading a
record from a 100-record index file with variable-length fields
could take 3 seconds. With a Winchester. No, no time-sharing. At
disk-manager level, reading a specified block in a file was randomly
distributed between 100 and 300 ms.
  Talk about poor algorithms. You should have seen the code for
finding a particular block. In practice they counted on their
fingers, doing a complicated MOD for every decrement. Not to speak
of the procedure that allocated disk blocks. 27 gotos in 150
lines of code. Yes, there was a bug in it.
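
(For contrast: with fixed-size blocks, finding the block that holds
a given position is one division, not a loop.  A sketch in C, with
an invented block size:)

    #define BLOCK_SIZE 512L   /* invented; whatever the disk uses */

    /* Which block holds byte 'offset' of the file, and where in
       that block?  One division and one remainder, instead of
       counting down and MOD-ing at every step. */
    void locate_block(long offset, long *block, long *within)
    {
        *block  = offset / BLOCK_SIZE;
        *within = offset % BLOCK_SIZE;
    }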

So the moral: you may save some execution time by making your variables
global, but since you are obscuring your code so fatally,
you will lose track of what you are doing and introduce
bugs and inefficiencies instead.

Oh, it should be added that the file system I talked about has since
been improved. I have no idea how it works today.
-- 
Erland Sommarskog       
ENEA Data, Stockholm        
sommar@enea.UUCP           "Souvent pour s'amuser les hommes d'equipages
                            and it's like talking to a stranger" -- H&C.

net@tub.UUCP (Oliver Laumann) (03/07/88)

All the anecdotes about poor algorithms remind me of an amusing tale
I have read in the (highly recommendable, by the way) paper "Hints for
Computer System Design" by Butler W. Lampson of Xerox Palo Alto.

In a paragraph titled "Get it right" he mentions that neither
abstraction nor simplicity can be a substitute for getting an algorithm
or a computer system right; in fact, abstraction, if overdone or used
unwisely, can lead to severe difficulties.  He illustrates this by the
following anecdote:

A major commercial word processing system used a "FindNamedField"
procedure which, for a given "name", searched the document being edited
for a "field" of the form "{name: contents}" embedded in the text.
The "name" was passed to the procedure as an argument.  The remarkable
point is that this procedure ran in time O(n^2), where n is the length
of the document!

This result was achieved by first writing a procedure "FindIthField"
which finds the i-th field in the document.  This, of course, takes
time O(n).  The procedure "FindNamedField(name)" was then implemented
in the following "natural" way [quoted directly from the article]:

   FOR i = 0 to NumberOfFields DO
      FindIthField; IF its name is "name" THEN EXIT
   ENDLOOP
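
(To see why that is O(n^2): if FindIthField rescans from the top of
the document on every call, the loop above visits roughly
1 + 2 + ... + n, i.e. about n^2/2, characters.  Here is a sketch in C
of the same shape, treating the document as a flat string -- the
helper names are mine, not the product's:)

    #include <string.h>

    /* O(n) per call: always rescan from the top to find the i-th
       field, which is assumed to start at a '{'. */
    static const char *find_ith_field(const char *doc, int i)
    {
        const char *p = doc;
        while ((p = strchr(p, '{')) != NULL) {
            if (i-- == 0)
                return p;
            p++;
        }
        return NULL;
    }

    /* The "natural" implementation: up to n calls, each O(n),
       giving O(n^2) overall. */
    const char *find_named_field(const char *doc, const char *name)
    {
        size_t len = strlen(name);
        const char *p;
        int i;

        for (i = 0; (p = find_ith_field(doc, i)) != NULL; i++)
            if (strncmp(p + 1, name, len) == 0 && p[1 + len] == ':')
                return p;
        return NULL;
    }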


--
Regards,
    Oliver Laumann, Technical University of Berlin, Germany.
    ...!pyramid!tub!net   or   net@TUB.BITNET

kleonard@prc.unisys.com (Ken Leonard) (03/08/88)

In article <404@tub.UUCP> net@tub.UUCP (Oliver Laumann) writes:
* ...
* I have read in the (highly recommendable, by the way) paper "Hints for
* Computer System Design" by Butler W. Lampson of Xerox Palo Alto.
* ...
* A major commercial word processing system used a "FindNamedField"
* ...
* point is that this procedure ran in time O(n^2), where n is the length
* of the document!
* 
* This result was achieved by first writing a procedure "FindIthField"
* which finds the i-th field in the document.  This, of course, takes
* time O(n).  The procedure "FindNamedField(name)" was then implemented
* in the following "natural" way [quoted directly from the article]:
*    FOR i = 0 to NumberOfFields DO
*       FindIthField; IF its name is "name" THEN EXIT
*    ENDLOOP

In the best tradition of K&R.

Regardz,                                        Engineering--The Art of Science
Ken Leonard
---                                                                         ---
This represents neither my employer's opinion nor my own--It's just something I
overheard in a low-class bar down by the docks.

barmar@think.COM (Barry Margolin) (03/09/88)

In article <404@tub.UUCP> net@tub.UUCP (Oliver Laumann) writes:
>All the anecdotes about poor algorithms remind me of an amusing tale
>I have read in the (highly recommendable, by the way) paper "Hints for
>Computer System Design" by Butler W. Lampson of Xerox Palo Alto.
>
>  The remarkable
>point is that this procedure ran in time O(n^2), where n is the length
>of the document!
>
>This result was achieved by first writing a procedure "FindIthField"
>which finds the i-th field in the document.  This, of course, takes
>time O(n).  The procedure "FindNamedField(name)" was then implemented
>in the following "natural" way [quoted directly from the article]:
>
>   FOR i = 0 to NumberOfFields DO
>      FindIthField; IF its name is "name" THEN EXIT
>   ENDLOOP

Whether they got the algorithm right or not depends on which algorithm
you are talking about.  There is a very similar routine in the
Symbolics Genera user interface, used when you want to retrieve a line
that matches a given string from your input history; the input history
is a linked list, and a very similar loop is used, so it is also N^2.
Such a loop is used because of information hiding; the loop is in the
INTERACTIVE-STREAM flavor, while the linked list is hidden in the
HISTORY flavor, and the interface only provides a routine to get the
Nth history element.

So, is this organization flawed?  I don't think so.  However, the
implementation of the ELEMENTn operation could be improved (in fact,
the reason I even know about this is that someone here patched the
routine years ago in the manner I'm about to describe) to make such a
common operation less expensive.  The flaw in Oliver's argument above
is that FindIthField need not always be O(I).  In particular, if it is
common to request field n+<positive> soon after requesting field n, it
would pay to remember where field n ended, and start from there
instead of from the beginning.  This is a time/space trade-off that
really pays back, because an O(N^2) time algorithm is reduced to O(N)
at the cost of O(1) storage.
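
(A sketch of that fix in C, continuing the flat-string version after
Oliver's post: the two instance variables become two statics here,
and a real implementation would clear them whenever the document --
or the history -- changes:)

    #include <string.h>

    static int         last_i = -1;    /* the two "instance variables" */
    static const char *last_p = NULL;

    /* Like find_ith_field, but resume from the previously found
       field when the caller asks for that field or a later one.
       O(1) extra storage; a full sequential sweep over all n fields
       drops from O(n^2) to O(n). */
    const char *find_ith_field_cached(const char *doc, int i)
    {
        int j = 0;
        const char *p = doc;

        if (last_p != NULL && last_i >= 0 && last_i <= i) {
            j = last_i;                /* pick up where we left off */
            p = last_p;
        }
        while ((p = strchr(p, '{')) != NULL) {
            if (j == i) {
                last_i = i;            /* remember for next time */
                last_p = p;
                return p;
            }
            j++;
            p++;
        }
        return NULL;
    }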

So, this is really, in my opinion, an example of the way that
abstraction and information hiding make it easier to improve
performance.  In order to fix this performance bug, all that is needed
is two instance variables (assuming an object-oriented implementation)
and a small change to one method.

In the Symbolics case, an alternate fix would be to use an array
rather than a linked list.  However, since in this case it is more
common to add elements to one end of the history (every time the user
types a line) than to search far back in the history, and in the
upcoming release they will allow selective deletion from the middle,
it should be obvious why the linked list representation was chosen.
But I think they made the right choice in not exposing the
implementation choice in the interface; this allows them to try many
different implementations (others I can think of are a linked list
of medium-sized arrays or an array of medium-sized arrays -- the
latter is O(1) for both adding a new element and finding the Nth
element).

I concede that there are cases where the redesign must affect many
levels of the implementation.  This often means that the original
design was severely flawed.  I think a definition I recently saw in
the Object-Oriented Design discussion was that such a design can be
considered good if most specification changes result in modifications
to only one or two classes.  Applying this to the above example, the
addition of the specification "find the result in linear time" results
in a simple change to one class.

Barry Margolin
Thinking Machines Corp.

barmar@think.com
uunet!think!barmar

matt@oddjob.UChicago.EDU (My Name Here) (03/09/88)

Oliver Laumann quotes Butler W. Lampson:

) In a paragraph titled "Get it right" he mentions that neither
) abstraction nor simplicity can be a substitute for getting an algorithm
) or a computer system right; . . .

And getting an algorithm right is no substitute for understanding the
problem.  A friend of mine had worked for a certain electronic
instrument company doing numerical analysis for some engineers.  One
engineer gave him a 200x200 matrix to invert.  (This was in the early
70s when that was a big matrix.)  My friend did the job and noticed
that the inverse looked a little familiar.  He asked the engineer,
"Could this matrix represent a rotation of some sort?"  Answer: "Why,
yes, you could consider it as one."  (The inverse was the transpose.)
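
(The punch line in symbols: a rotation matrix R is orthogonal, so
R R^T = I and therefore R^{-1} = R^T -- no inversion needed, 200x200
or otherwise.  A toy check in C with a 2x2 rotation:)

    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        double t = 0.7;                      /* arbitrary angle */
        double r[2][2] = { { cos(t), -sin(t) },
                           { sin(t),  cos(t) } };
        int i, j, k;

        /* Multiply R by its transpose; this prints the identity. */
        for (i = 0; i < 2; i++) {
            for (j = 0; j < 2; j++) {
                double s = 0.0;
                for (k = 0; k < 2; k++)
                    s += r[i][k] * r[j][k];  /* (R * R^T)[i][j] */
                printf("%6.3f ", s);
            }
            printf("\n");
        }
        return 0;
    }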

				Matt Crawford

shebs%defun.utah.edu.uucp@utah-cs.UUCP (Stanley T. Shebs) (03/09/88)

In article <14476@oddjob.UChicago.EDU> matt@oddjob.UChicago.EDU (My Name Here) writes:
>Oliver Laumann quotes Butler W. Lampson:

>[...] A friend of mine had worked for a certain electronic
>instrument company doing numerical analysis for some engineers.  One
>engineer gave him a 200x200 matrix to invert.  (This was in the early
>70s when that was a big matrix.)  My friend did the job and noticed
>that the inverse looked a little familiar.  He asked the engineer,
>"Could this matrix represent a rotation of some sort?"  Answer: "Why,
>yes, you could consider it as one."  (The inverse was the transpose.)

I wonder if this is an example of a "software urban legend".  Forman Acton
mentions a very similar situation in his 1970 book on numerical analysis,
although there it is purported to be a first-person experience, and it
was claimed that 15 minutes of analysis were necessary to determine that
the matrix was orthogonal (thus representing a rotation).

Incidentally, this same book has some amusing flaming passages about
people who use too many cycles, and about the evils of recursion...

							stan shebs
							shebs@cs.utah.edu