[net.works] I started with an example

LEVITT%MIT-OZ@MIT-MC.ARPA (01/11/84)

   From: decvax!linus!utzoo!henry @ Ucb-Vax
      ... the last I heard was a
   general conclusion that it is ***NOT*** easy to anticipate the
   program's demands well enough to get any large advantage out of it.
   (Please do not cite the awesome calculated performance of wonderful
   strategy X as a rebuttal; real results on real loads only, please.)

My first message had an example: the Apple Lisa achieved higher
graphics performance with less CPU and less memory than an LM-2 via
non-standard storage management, with a cost during application
switching.  (With only 128K of RAM, LM-2 context switches can be quite
slow too, so the real "cost" may be very small.  It depends on what
applications you use and how often you switch.)  You could say, "oh,
application switching is just demand paging with a large, variable
swap size", but I think you'd be missing the point.  Unfortunately,
Xerox's non-anticipative LOOMS experiment was not obviously a success.
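
To make the distinction concrete, here is a toy sketch in Python
(every name and structure in it is invented for illustration, not
taken from the Lisa or the LM-2).  Demand paging fetches a page only
when a fault proves it is needed; the application-switch policy knows
the whole working set at the moment of the switch, so it can schedule
one large transfer up front.

    # Toy illustration only: demand paging vs. swapping a whole
    # application's working set at switch time.

    resident = set()             # pages currently in RAM

    def disk_read(page):
        pass                     # stand-in for a real disk transfer

    def demand_page(page):
        """Fetch a page only when a fault shows it is needed."""
        if page not in resident:
            disk_read(page)      # one synchronous fetch per fault
            resident.add(page)

    def switch_application(app_pages):
        """Anticipatory: the switch names the whole demand set at once."""
        for page in set(app_pages) - resident:
            disk_read(page)      # large, variable-size, but schedulable
        resident.clear()
        resident.update(app_pages)   # old application's pages evicted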

Sure, there's no wonderful algorithm that always works that's as
trivial as demand paging - just a lot of heuristics based on what
people do with computers, and what certain algorithms do.  It takes
some thought, and it would be still more effective with run-time
support.  Who's tried a run-time environment that uses records of an
algorithm's time complexity?  "I run in 20N msec and I've just been
called with an argument of N=20.  So I have time to do 2
gc/anticipation disk ops before I have to fetch the data the next
routine will need."  Etc.
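
Concretely, here is a minimal sketch of such a hook in Python (the
routine name, the interfaces, and the 200-msec disk-op cost are all my
assumptions, the last chosen only so the 20N-msec, N=20 arithmetic
above comes out to 2 ops):

    # Hypothetical sketch, not any real LM-2 or Lisa runtime: keep a
    # declared time-complexity record per routine and convert the slack
    # it implies into a budget of anticipatory disk operations.

    DISK_OP_MSEC = 200   # assumed cost of one gc/anticipation disk op

    class Routine:
        def __init__(self, name, msec_per_n):
            self.name = name
            self.msec_per_n = msec_per_n   # "I run in msec_per_n * N msec"

        def expected_msec(self, n):
            return self.msec_per_n * n

    def anticipation_budget(routine, n):
        """How many background disk ops fit while routine runs on size n?"""
        return routine.expected_msec(n) // DISK_OP_MSEC

    # The example above: 20N msec, called with N=20 -> 400 msec of slack.
    print(anticipation_budget(Routine("redisplay", msec_per_n=20), n=20))  # 2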