[net.ai] Workstations vs Timeshare

shebs@utah-cs.UUCP (Stanley Shebs) (11/07/85)

I really hate to go on about this, but if it affects AI expenditures, I
suppose it's worthwhile to explicate the two positions:

>From: forbus@uiucdcsp.CS.UIUC.EDU
>
>Ahem.  How many people who buy vax 780's give them to a single user?  Worse
>still, many people who buy Suns seem to run them with multiple users, and
>often swapping over a network to a shared disk.  The latter is a crazy idea
>for AI programs; while the ethernet may or may not be a bottleneck, sharing
>a disk between several processors certainly is!

Perhaps I'm spoiled; the utah-orion (11/750) load average generally only
goes over 1 when I'm running several compiles in the background, so it's
effectively a single-user machine.  I used to use an Apollo for Lisp, and
gave up on it when I got tired of watching pages fly up and down the little
coax cables.  Each person doing AI work *must* have a certain minimum of
resources - it doesn't really matter whether it's in the form of a shared
big machine or a workstation.

>And of course, none of the
>Gabriel benchmarks really test paging performance, which is where
>non-trivial AI programs spend most of their time.

I beg to differ; both the "boyer" and "browse" benchmarks are in
the million-cons range, and some others are too, I think (don't have
the details handy, sorry).  The 3600 spent substantial portions of its
time paging.  One of the reasons that the HP9836 (68000) machines
perform so well on those same benchmarks is that they don't have
virtual memory at all!  Instead, they have about 8 megs of REAL
memory, and really streak along.

I grant that much of the work in qualitative physics is on an even
larger scale than that tested by any of the Gabriel benchmarks, but
most AI research isn't that computation-intensive (for instance,
studying knowledge representation involves more language-building effort
than it does raw computation).  If you *really* want to do massive
computation, get a Cray (as Boyer & Moore are doing for their theorem
prover).

>Remember, however, the original question referred to lisp machines
>versus standard time-sharing environments.  If a stand-alone 20 with PSL
>performs in the neighborhood of a Symbolics, then how will it do with
>20-60 users?  Answer: very badly!

Depends on the load average.  The load average on most machines at
Utah is usually below 1, and therefore the real time results correspond
pretty well to the benchmark results.  

>I think it is safe to say that there
>is NO computer on the market which runs Common Lisp (other lisps are simply
>not contenders at this stage of the game) which will provide for several
>users at once the same performance they can get if they are sitting at
>stand-alone workstations (be they Symbolics, Sun, TI, or Xerox).

Pretty safe all right! For "Common Lisp" substitute "an arbitrary program"
and you still get the same obvious truism.  Again, it depends on the load
average.  The original argument for timesharing hasn't gone away!  If out
of 10 people logged on, only 1 is actually executing a program, then a
timeshared system looks just like a single-user machine.  A timeshared
system gets into trouble under two situations: 1) if everybody is madly
hacking, and not stopping to think, and 2) when the programming environment
does lots of things for the programmer.  Situation 1 seems the normal mode
of operation on workstations - I leave it to you to decide if it's desirable
(personally, I prefer to sit and think about my programs).  Situation 2 will
be a powerful argument for workstations, if it ever comes about... (please
don't flame about any existing PEs, I've used many of them and they have
so little semantic knowledge of my programs that it's ludicrous)
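The timesharing argument above reduces to simple arithmetic: each actively running job gets roughly an equal slice of the CPU, regardless of how many users are merely logged in. A minimal sketch (an idealized model, not a real scheduler; the function name and figures are illustrative):

```python
# Toy model of the timesharing argument: each actively running job
# sees roughly 1/N of the machine, where N counts jobs actually
# executing, not users logged in.
def effective_share(logged_in, actively_running):
    """Idealized fraction of the CPU each running job sees."""
    assert actively_running <= logged_in
    return 1.0 if actively_running == 0 else 1.0 / actively_running

print(effective_share(10, 1))  # one hacker among 10 logins: whole machine
print(effective_share(10, 5))  # five jobs madly hacking: a fifth each
```

With one active user out of ten, the timeshared machine behaves like a single-user machine, exactly as the post argues; it degrades only as the count of simultaneously *running* jobs grows.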

							stan shebs

jbn@wdl1.UUCP (11/08/85)

> If you *really* want to do massive computation, get a Cray (as Boyer & 
> Moore are doing for their theorem
> prover).

      The Boyer-Moore theorem prover does not require a Cray.  As the person
who ported it from the Symbolics to the VAX (Franz) and thence to the Sun,
I can report that performance on a diskless 2MB Sun II is quite satisfactory;
the proofs scroll by faster than you can read them.  As a benchmark, I have
run the entire PROVEALL library (263 theorems, through SUBST-OK, for Boyer-
Moore fans) on a Sun II in 8 hours 57 minutes, using Franz Lisp 38.89.
Considering that in this time the prover is regenerating much
of number theory from some very basic axioms, this is not a bad showing; 
it's at least an order of magnitude or two above human performance.
I have been toying with the idea of a port to the PC/AT, so that I
can prove theorems at home.
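The PROVEALL figures quoted above work out to a quite respectable per-theorem rate; a back-of-envelope check (using only the numbers given in the post: 263 theorems in 8 hours 57 minutes):

```python
# Average proof time for the PROVEALL run described above
# (263 theorems in 8h57m on a 2MB diskless Sun II, Franz Lisp 38.89).
theorems = 263
total_minutes = 8 * 60 + 57        # 537 minutes wall-clock
avg = total_minutes / theorems     # average minutes per theorem
print(f"{avg:.2f} minutes per theorem on average")
```

That is about two minutes per theorem, consistent with the claim that the proofs scroll by faster than one can read them.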
      Incidentally, the stock version of the prover (available from 
BOYER@UTEXAS-20) contains the Franz compatibility fixes, so it can be run
on most reasonable Franz systems.

					John Nagle

forbus@uiucdcsp.CS.UIUC.EDU (11/09/85)

1.  Time sharing systems are in trouble if TWO people are actually hacking.
Assuming that only one person in a community is debugging or experimenting
with an AI program at a time is assuming that that community isn't really
doing much research.  Implying that using a computer
more than that is just "hacking without thinking" (to paraphrase) does a
grave injustice to the experimental side of AI.  Far too often programs
have been run on only one example, if that, and part of the reason has
been lack of cycles.  Technology is fixing this situation, but not fast
enough for my taste.  The more time I and my students spend shoehorning
our programs onto processors that are too small (or too overcrowded),
the less time we are thinking about AI.  Consequently, I try to get my
students the best sources of cycles that money can buy (modulo the fact
that no funding agency will buy us several CRAY's to use as single-person
workstations, and we couldn't physically house them as well.  But then
again, with two supercomputer centers on campus....).  We STILL have to
shoehorn, but it takes a much smaller fraction of our time than people
who are struggling along on Apollos or Suns.

2.  8MB of memory sure will run faster than 4MB!  (If only someone would
second-source boards for Symbolics machines I'd upgrade all of ours
accordingly.)  However, how many people can afford 100-200MB of real memory
right now?  I've seen programs eat up that much quite often (not just
mine!).  Look, if you want to make a program that knows a lot (or does some
deep analysis of something), then you have to put that knowledge somewhere.
8MB total address space may be fine for small applications programs, but not
for serious research.  Until memory gets VERY cheap, paging will be with us.



3.  I think it's all pretty clear: say a vax 780 is 500K.  A reasonable
3640 configuration is around 80K list (and if you are a university
you can do much, much better).  For 500K you can get 6 3640's at list
price (and at university discount a few extra on top of that), each of
which will outperform a stand-alone 780 running Common Lisp, not to
mention a 780 struggling along with 6 CL users....right now, the technology
and marketplace make workstations the only sensible choice for serious AI
research.  Tomorrow might be different, but that's how it seems to be
today.
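The price comparison above is straightforward to check (all figures are the list prices assumed in the post, not verified quotes: $500K for a VAX 780, $80K for a 3640):

```python
# Back-of-envelope version of the cost argument: how many
# list-price Symbolics 3640s fit in one VAX 780 budget?
vax_780_price = 500_000    # assumed list price from the post
price_3640 = 80_000        # assumed list price from the post
workstations = vax_780_price // price_3640
print(workstations)        # whole 3640s per 780 budget
```

Six workstations per 780 at list price, before any university discount, which is the whole of the economic case being made.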

tombre@crin.UUCP (Karl Tombre) (11/09/85)

In article <3528@utah-cs.UUCP> shebs@utah-cs.UUCP (Stanley Shebs) writes:
>
>>From: forbus@uiucdcsp.CS.UIUC.EDU
>>
>>Ahem.  How many people who buy vax 780's give them to a single user?  Worse
>
>Perhaps I'm spoiled; the utah-orion (11/750) load average generally only
>goes over 1 when I'm running several compiles in the background, so it's
>effectively a single-user machine.  I used to ....

Yes, oh yes, you are spoiled! Our 750 has a load average of up to 15
sometimes, when 10 to 15 people are working and some nroffs and Lisps run
in the background.

-- 
--- Karl Tombre @ CRIN (Centre de Recherche en Informatique de Nancy)
UUCP:    ...!vmucnam!crin!tombre  or    ...!inria!crin!tombre
COSAC:   crin/tombre
POST:    Karl Tombre, CRIN, B.P. 239, 54506 VANDOEUVRE CEDEX, France