[net.works] [Al Filipski <al@mot.uucp>: Re: Standard, What standard???]

@RUTGERS.ARPA:GILLMANN@USC-ISIB.ARPA (03/30/85)

From: Richard Gillmann <GILLMANN@USC-ISIB.ARPA>

I thought this message might be of more interest to you than Info-IBMPC.

Dick Gillmann
                ---------------

Return-Path: <mmdf@BRL-TGR.ARPA>
Received: FROM BRL-TGR.ARPA BY USC-ISIB.ARPA WITH TCP ; 30 Mar 85 08:17:01 PST
Received: from usenet by BRL-TGR.ARPA id a023286; 30 Mar 85 9:05 EST
From: Al Filipski <al@mot.uucp>
Newsgroups: net.micro,net.micro.pc
Subject: Re: Standard, What standard??? (size vs. speed)
Message-ID: <130@mot.UUCP>
Date: 29 Mar 85 05:10:25 GMT
Xref: seismo net.micro:10382 net.micro.pc:3844
To:       info-ibmpc@usc-isib.ARPA



daisy!david writes:

>Small address spaces are not good, in and of themselves.  But they force you
>to find smaller algorithms which often run faster as well.  I don't know why
>smaller algorithms >tend< to run faster; I'm not a philosopher.

Funny, I've always thought that size and speed were INVERSELY related.
Take sort algorithms, f'rinstance.  One of the smallest sorts you can
write is one of the worst-- the bubble.  Quicksorts, heapsorts, etc.
are bigger and faster.  Fastest of all is the address-calculation
sort, which works in linear time but uses so much memory that it is
almost never used (basically, just use the key as the address).
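A minimal sketch of that key-as-address idea in C might look like the
following.  The fixed key range and the assumption that the keys are
distinct are my own choices, just to keep the illustration short; the
point is only that the table costs MAXKEY+1 slots no matter how few
keys you actually have.

/* "Key as address" sketch: distinct integer keys in a known range
 * 0..MAXKEY go straight into slot[key], then one sweep reads them
 * back in order.  O(n + MAXKEY) time, O(MAXKEY) space.
 */
#include <stdio.h>

#define MAXKEY 9999

static int slot[MAXKEY + 1];            /* 0 = empty, 1 = key present */

void addr_sort(int *a, int n)
{
    int i, k, out = 0;

    for (i = 0; i <= MAXKEY; i++)
        slot[i] = 0;
    for (i = 0; i < n; i++)
        slot[a[i]] = 1;                 /* the key IS the address */
    for (k = 0; k <= MAXKEY; k++)
        if (slot[k])
            a[out++] = k;               /* sweep back in sorted order */
}

int main(void)
{
    int a[] = { 42, 7, 1985, 3, 512 };
    int i, n = sizeof a / sizeof a[0];

    addr_sort(a, n);
    for (i = 0; i < n; i++)
        printf("%d ", a[i]);
    putchar('\n');
    return 0;
}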
Multiplication is like that too: the ordinary n**2 algorithm is trivial,
the Schönhage-Strassen algorithm (n log n log log n, or whatever) is
immensely complicated. Or, if you say asymptotic efficiency is not the
point, use a table lookup to multiply as fast as you like.  All it
takes is space.
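
To make the table-lookup point concrete, here is a small sketch (again
in C, with sizes of my own choosing): precompute every 8-bit by 8-bit
product once, and every "multiply" afterwards is a single array index.
You trade roughly 128K bytes of table for one memory reference.

/* Multiply by table lookup: a 256x256 table of precomputed products,
 * so multiplying two 8-bit values needs no arithmetic at all.
 */
#include <stdio.h>

static unsigned short prodtab[256][256];

void build_table(void)
{
    int i, j;

    for (i = 0; i < 256; i++)
        for (j = 0; j < 256; j++)
            prodtab[i][j] = (unsigned short)(i * j);
}

#define MUL8(a, b)  (prodtab[(a) & 0xff][(b) & 0xff])

int main(void)
{
    build_table();
    printf("13 * 37 = %u\n", (unsigned)MUL8(13, 37));   /* prints 481 */
    return 0;
}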

--------------------------------
Alan Filipski, UNIX group, Motorola Microsystems, Tempe, AZ U.S.A
{allegra|ihnp4}!sftig!mot!al
{seismo|ihnp4}!ut-sally!oakhill!mot!al
--------------------------------

When throwing rocks at seabirds, leave no tern unstoned.
When painting baboons, leave no stern untoned.
-------