[net.ai] Lisp Machines

JW-Peterson@UTAH-20@sri-unix.UUCP (08/07/83)

From:  JW-Peterson@UTAH-20 (John W. Peterson)

         [Reprinted from the Info-Graphics discussion list.]

Folks who don't have >$60K to spend on a Lisp Machine may want to
consider Utah's Portable Standard Lisp (PSL) running on the Apollo 
workstation.  Apollo PSL has been distributed for several months now.
PSL is a full Lisp implementation, complete with a 68000 Lisp
compiler.  The standard distribution also comes with a wide range of
utilities.

PSL has been in use at Utah for almost a year now and is supporting
applications in computer algebra (the Reduce system from Rand), VLSI
design, and computer-aided geometric design.

In addition, the Apollo implementation of PSL comes with a large and
easily extensible system interface package.  This provides easy,
interactive access to the resident Apollo window package, graphics
library, process communication system and other operating system
services.

If you have any questions about the system, feel free to contact me
via
        JW-PETERSON@UTAH-20 (arpa) or
        ...!harpo!utah-cs!jwp (uucp)

jw

dyer%wisc-ai@sri-unix.UUCP (02/02/84)

From:  dyer@wisc-ai (Chuck Dyer)

Does anyone have any reliable benchmarks comparing Lisp
machines, including Symbolics, Dandelion, Dolphin, Dorado,
LMI, VAX 780, etc.?

Other features for comparison are also of interest.  In particular,
what capabilities are available for integrating a color display
(at least 8 bits/pixel)?

darrelj@sdcrdcf.UUCP (Darrel VanBuer) (02/05/84)

There really are no such things as reasonable benchmarks for systems as
different as the various Lisp machines and VAXen.  Each machine has different
strengths and weaknesses.  Here is a rough ranking of machines:
VAX 780 running Fortran/C standalone
Dorado (5 to 10X Dolphin)
LMI Lambda, Symbolics 3600, KL-10 Maclisp (2 to 3X Dolphin)
Dolphin, Dandelion, 780 VAX Interlisp, KL-10 Interlisp

Relative speeds are very rough, and dependent on application.

Notes:  The Dandelion and Dolphin have 16-bit ALUs; as a result most arithmetic
is pretty slow (and things like transcendental functions are even worse
because there's no way to do floating arithmetic without boxing each
intermediate result).  There is quite a wide range of I/O bandwidth among
these machines -- up to 530 Mbits/sec on a Dorado, 130M on a Dolphin.

Strong points of various systems:
Xerox: a family of machines fully compatible at the core-image level,
spanning a wide range of price and performance (as low as $26k for a minimum
Dandelion, up to $150k for a heavily expanded Dorado).  Further, with the
exception of some of the networking and all of the graphics, it is very highly
compatible with both Interlisp-10 and Interlisp-VAX (it's reasonable to have
a single set of sources with just a bit of conditional compilation).
Because the dialect is a relatively old one, they have a large and
well-debugged manual as well.

LMI and Symbolics (really fairly similar, as both are licensed from
the MIT Lisp machine work, and the principals are rival factions of the MIT
group that developed it): these have fairly large microcode stores, and as
a result more things are fast (e.g. many of the graphics primitives are
microcoded), so these are probably the machines for moby amounts of image
processing and graphics.  There are also tools for compiling directly to
microcode for extra speed.  These machines also contain a secondary bus such
as a Unibus or Multibus, so there is considerable flexibility in attaching
exotic hardware.

Weak points:  Xerox machines have a proprietary bus, so there are very few
options (the philosophy is to hook it to something else on the Ethernet).  The
MIT machines speak a new dialect of Lisp that is only partially compatible with
MACLISP (though this did allow adding many nice features), and their cost is
too high to give everyone a machine.

The news item to which this is a response also asked about color displays.
Dolphin:  480x640x4 bits.  The 4 bits go thru a color map to 24 bits.
Dorado:  480x640x(4 or 8 or 24 bits).  The 4 or 8 bits go thru a color map to 
	 24 bits.  Lisp software does not currently support the 24 bit mode.
3600:  they have one or two color options (the LM-2 had 512x512x?); around
	 1Kx1Kx(8 or 16 or 24 bits) with a color map to 30 bits.
Dandelion:  probably too little I/O bandwidth
Lambda:  current brochure makes passing mention of optional standard and
	 high-res color displays.

Disclaimer:  I probably have some bias toward Xerox, as SDC has several of
their machines (in part because we already had an application in Interlisp).

-- 
Darrel J. Van Buer, PhD
System Development Corp.
2500 Colorado Ave
Santa Monica, CA 90406
(213)820-4111 x5449
...{allegra,burdvax,cbosgd,hplabs,ihnp4,sdccsu3,trw-unix}!sdcrdcf!darrelj
VANBUER@USC-ECL.ARPA

goodhart@noscvax.UUCP (Curtis L. Goodhart) (05/15/85)

Anybody have any pointers to some good references about Lisp machine
architecture, i.e., why isn't a conventional computer suitable for running
Lisp?

     Thanks,

	  Curt Goodhart  (goodhart@nosc    ; on the arpanet)

barmar@mit-eddie.UUCP (Barry Margolin) (05/16/85)

If you want information on why special hardware is extremely useful (but
not necessary) for running Lisp, see the proceedings of the various
Conferences on Lisp and Functional Programming, which were held in the
summers of 1980, 1982, and 1984.  There were papers presented at each of
these conferences on Lisp Machine architectures.  These proceedings can
be obtained from ACM Publications Service, for around $20 each.

There also have been ACM conferences on architectures for programming
languages, but I don't recall the exact name of the conferences.  I
haven't read any proceedings, but I do recall some lisp machine
designers being involved in such a conference.
-- 
    Barry Margolin
    ARPA: barmar@MIT-Multics
    UUCP: ..!genrad!mit-eddie!barmar

henry@utzoo.UUCP (Henry Spencer) (05/17/85)

In the recent AI issue of Byte, there was an article by some folks at a
company (Fairchild?) that is looking at building a super-fast AI machine.
Their analysis of supporting AI languages (notably Lisp) really well on
conventional machines ultimately boiled down to "efficient simulation of
tagged memory is the major stumbling block".  Their conclusion was that
conventional machines will be good for Lisp etc. in direct proportion to
how quickly they can (a) pick out some bits from a word and branch to one
of several places depending on the value of those bits, and (b) do an
indirect fetch which ignores some of the bits in the address register.

For example, the original 68000 is good for (b), since it ignores the top
8 bits of an address value, but its bit-extraction facilities are poor
which hurts (a).  The 68020 has better bit extraction but hits problems
on (b), since it tries to use a full 32-bit address.  And so forth.
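
In Lisp terms, the two operations amount to something like the following
sketch (the field widths and tag values are invented purely for illustration,
not taken from any real machine):

(defun tag-dispatch (word)
  ;; (a): pull a 3-bit tag field out of a 32-bit word and branch on it
  (case (ldb (byte 3 29) word)
    (0 'fixnum)
    (1 'cons)
    (2 'symbol)
    (otherwise 'other)))

(defun untagged-address (word)
  ;; (b): an indirect fetch should ignore the tag bits, i.e. use only
  ;; the low 29 bits of the word as the actual address
  (ldb (byte 29 0) word))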

I am not up on the intricacies of Lisp machines, but this article made sense.
-- 
				Henry Spencer @ U of Toronto Zoology
				{allegra,ihnp4,linus,decvax}!utzoo!henry

shebs@utah-cs.UUCP (Stanley Shebs) (05/20/85)

In article <5604@utzoo.UUCP> henry@utzoo.UUCP (Henry Spencer) writes:
>In the recent AI issue of Byte, there was an article by some folks at a
>company (Fairchild?) that is looking at building a super-fast AI machine.
>Their analysis of supporting AI languages (notably Lisp) really well on
>conventional machines ultimately boiled down to "efficient simulation of
>tagged memory is the major stumbling block".  Their conclusion was that
>conventional machines will be good for Lisp etc. in direct proportion to
>how quickly they can (a) pick out some bits from a word and branch to one
>of several places depending on the value of those bits, and (b) do an
>indirect fetch which ignores some of the bits in the address register.

This is true only if one has a relatively stupid compiler that does
no type inference of any sort (sadly, this is the case for most compilers
today).  The nice thing about a really high-level language is that
a compiler can look at something like (car quux) and turn that into
a single instruction that does offset addressing without any tag
checking at all!  Of course, the compiler has to be smart, so that
it doesn't try that trick with (car 12)...  Compiled PSL code wins
big because it sacrifices some reliability and debuggability of code
for speed.  Also, to get the best performance, one must use several
kinds of declarations and flags.  The net effect is to compile away
a large percentage of function calls and type checks.
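
For a concrete (if contrived) example of the kind of inference meant here
(the function name is invented):

(defun pair-demo (x y)
  ;; P is visibly the result of CONS, so the compiler can compile
  ;; (car p) and (cdr p) as plain offset loads with no tag check;
  ;; something like (car 12) can instead be flagged at compile time.
  (let ((p (cons x y)))
    (list (car p) (cdr p))))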

The moral?  Fixing the compiler is a better approach than sinking
millions of dollars into hardware, then trying to sell it as an
"advanced AI engine" or "powerful Lisp machine"...

						stan shebs

barmar@mit-eddie.UUCP (Barry Margolin) (05/21/85)

In article <3345@utah-cs.UUCP> shebs@utah-cs.UUCP (Stanley shebs) writes:
>...  Compiled PSL code wins
>big because it sacrifices some reliability and debuggability of code
>for speed.  Also, to get the best performance, one must use several
>kinds of declarations and flags.  The net effect is to compile away
>a large percentage of function calls and type checks.

But Lisp Machines provide all the type checking WITHOUT sacrificing
efficiency.  For instance, the hardware that implements the "+" function
on a Symbolics 3600 does all the following things IN PARALLEL:

1) Direct the top two items on the stack to the fixed-point addition
circuit.

2) Direct the top two items on the stack to the floating-point addition
unit.

3) Check the type bits of the top two items on the stack.  If they are
both fixna, the result of (1) will be used; if they are both flonums,
the result of (2) will be used; in any other case an exception will be
generated and the addition will be performed by the appropriate
microcode routines.
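
Written out sequentially as ordinary Lisp (only a sketch; the whole point of
the hardware is that it does not do this one test at a time), the dispatch in
(3) amounts to something like:

(defun plus-dispatch (x y)
  (cond ((and (typep x 'fixnum) (typep y 'fixnum))
         :use-fixed-point-result)     ; take the result of (1)
        ((and (floatp x) (floatp y))
         :use-floating-point-result)  ; take the result of (2)
        (t
         :trap-to-microcode)))        ; the exception case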

The point is that on a Lisp Machine you don't sacrifice any type
checking in order to get optimal performance.  If you are adding two
fixna then it takes just as long as a fixed point addition instruction
on any other machine.  In order to get all the checking that the Lisp
Machine implements on conventional hardware you would have to slow down
the code for "+" by a factor of two or three; for this reason, most Lisp
compilers for conventional machines don't generate type checking code,
and users must use the interpreter to get this checking.  Most Lisp
Machine programmers I know do all their debugging on compiled code,
which would be unthinkable on most other systems.

Another place where special hardware can be a big win is in garbage
collection.  I won't go into details here; see the paper titled
something like "Garbage Collection in Large Address Spaces", by David
Moon of Symbolics, in the Proceedings of the 1984 ACM Symposium on Lisp
and Functional Programming.
-- 
    Barry Margolin
    ARPA: barmar@MIT-Multics
    UUCP: ..!genrad!mit-eddie!barmar

rggoebel@water.UUCP (Randy Goebel LPAIG) (05/22/85)

> ...  Compiled PSL code wins
> big because it sacrifices some reliability and debuggability of code
> for speed...
> 
> 						stan shebs

Admittedly taken out of context, but I'm amazed that anyone would sacrifice
reliability for speed?

Randy Goebel
Waterloo

shebs@utah-cs.UUCP (Stanley Shebs) (05/22/85)

In article <4314@mit-eddie.UUCP> barmar@mit-eddie.UUCP (Barry Margolin) writes:

>The point is that on a Lisp Machine you don't sacrifice any type
>checking in order to get optimal performance.  If you are adding two
>fixna then it takes just as long as a fixed point addition instruction
>on any other machine.  In order to get all the checking that the Lisp
>Machine implements on conventional hardware you would have to slow down
>the code for "+" by a factor of two or three; for this reason, most Lisp
>compilers for conventional machines don't generate type checking code,
>and users must use the interpreter to get this checking.  Most Lisp
>Machine programmers I know do all their debugging on compiled code,
>which would be unthinkable on most other systems.

I don't know why debugging compiled code is such a wonderful thing;
object code (even on a LM) is not particularly readable.  With the
interpreter you can see exactly what is being executed.  While runtime
type checking does increase robustness, it's usually an incredible
waste of resources; 99.99999% of type tests will return a result that
is knowable in advance (the remaining .00001% are bug detections).
There are better ways to ensure robustness; after all, we don't
usually put checksums on every byte of the tape.

In general, I tend to object to doing complex operations (like typechecking)
in hardware - it's just too inflexible.  Does anybody really believe
that the primitive types in Zetalisp are worth wiring into the machine
(or even the microcode)?

>Another place where special hardware can be a big win is in garbage
>collection.

I agree, but a GC coprocessor is really all you need.  Actually, it
would be better just to have a vanilla multiprocessor, and run GC
tasks concurrently with computation tasks, but that's still in research!

							stan shebs

barmar@mit-eddie.UUCP (Barry Margolin) (05/23/85)

In article <3346@utah-cs.UUCP> shebs@utah-cs.UUCP (Stanley shebs) writes:
>I don't know why debugging compiled code is such a wonderful thing;
>object code (even on a LM) is not particularly readable.  With the
>interpreter you can see exactly what is being executed.

The Lisp Machine compiler puts enough information in compiled code so
that it is easy to relate to its source code.  For instance, variable
names are still available when debugging compiled code.  When a function
stops with an error there is not much more that you can do with it if it
is being interpreted than if it is being executed from compiled code.

>  While runtime
>type checking does increase robustness, it's usually an incredible
>waste of resources; 99.99999% of type tests will return a result that
>is knowable in advance (the remaining .00001% are bug detections).

Not in a language that provides generic operations but doesn't require
type declarations (see below); in this case, the type-checking is
necessary in order to dispatch.  By the way, about half the bug reports
that I see from MIT Lisp Machine users are generated because
the software made an error that would not be caught by most compiled
Lisp implementations (array bounds and argument types); similar bugs in
Multics Emacs (written in Maclisp) generally cause random errors (like
faults during GC) to start occurring.  Personally, I prefer it when
software stops as soon as the bug occurs, rather than waiting until
twenty minutes later.  Of course, the best thing is for the code not to
have any bugs, but that is not an option as long as people are doing the
programming.

>There are better ways to ensure robustness; after all, we don't
>usually put checksums on every byte of the tape.

How about parity bits?  Or ECC bits in every word of memory?  Is there
that much of a leap from ECC that checks that the memory word is correct
to tag bits that are used to check that the triple (operation,arg1,arg2)
is correct?  I don't think so.

>In general, I tend to object to doing complex operations (like typechecking)
>in hardware - it's just too inflexible.

The alternatives are either (1) doing type checking in software or (2)
adding type declarations to programs.  For those of you who think I
should add (3) do code analysis that determines the parameter types,
please explain how a compiler is to perform such an analysis when the
entire compilation unit contains a single function definition such as:

(defun sample-function (arg1 arg2)
       (sample-function-2 (+ arg1 arg2)))

Without the user adding type declarations, as in

(defun sample-function (arg1 arg2)
       (declare (fixnum arg1 arg2))
       (sample-function-2 (+ arg1 arg2)))

(or the implicit declaration facility, such as Maclisp's +/+$/plus
distinction) there is no way for the compiler to know that it can
compile the (+ arg1 arg2) into a simple ADD instruction.
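
For reference, the Maclisp-style implicit facility mentioned above puts the
declaration into the operator itself (a sketch; the exact operator sets varied
from dialect to dialect):

;; In Maclisp the operator itself is the declaration:
;;   +     fixnum-only addition   (compiles to a machine ADD)
;;   +$    flonum-only addition   (compiles to a floating add)
;;   plus  generic addition       (must dispatch on its arguments)
;; so the example, written as
(defun sample-function (arg1 arg2)
  (sample-function-2 (+ arg1 arg2)))
;; already tells a Maclisp compiler that ARG1 and ARG2 are fixnums.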

>>Another place where special hardware can be a big win is in garbage
>>collection.
>
>I agree, but a GC coprocessor is really all you need.  Actually, it
>would be better just to have a vanilla multiprocessor, and run GC
>tasks concurrently with computation tasks, but that's still in research!

I suggest you read David Moon's "Garbage Collection in a Large Address
Space Lisp Implementation", in the Proceedings of the 1984 ACM Symposium
on Lisp and Functional Programming.  Without special assist it is really
hard to prevent the GC from seriously impacting your paging performance,
as most GC's need to look at nearly all of virtual memory.  The above
paper describes the mechanism used in the Symbolics 3600 to implement a
very good garbage collector that doesn't need to page in lots of memory.

					barmar
-- 
    Barry Margolin
    ARPA: barmar@MIT-Multics
    UUCP: ..!genrad!mit-eddie!barmar

mark@apple.UUCP (Mark Lentczner) (05/23/85)

-=-
I would claim that if 99.999999% of your runtime checks are actually 
knowable at compile time then you are not taking advantage of the
polymorphic properties of the system.  In the code I've seen and
written for polymorphic systems I'd say that less than 50% of the
checks are knowable at compile time if even that many.

-- 
--Mark Lentczner
  Apple Computer

  UUCP:  {nsc, dual, voder, ios}!apple!mark
  CSNET: mark@Apple.CSNET

darrelj@sdcrdcf.UUCP (Darrel VanBuer) (05/24/85)

The main reason I debug running compiled code (on Xerox lisp machines) is
the substantial difference in speed between native code and interpretation.
Because the Xerox debugging tools are unable to do much inside compiled
code, I switch back to source for the current problem function only.
[The Xerox break package only works at the point a function is about to be
entered.  As a result, inline errors may result in unwinding back to
"before" the error occurred and then breaking.]

-- 
Darrel J. Van Buer, PhD
System Development Corp.
2500 Colorado Ave
Santa Monica, CA 90406
(213)820-4111 x5449
...{allegra,burdvax,cbosgd,hplabs,ihnp4,orstcs,sdcsvax,ucla-cs,akgua}
                                                            !sdcrdcf!darrelj
VANBUER@USC-ECL.ARPA

shebs@utah-cs.UUCP (Stanley Shebs) (05/24/85)

In article <4328@mit-eddie.UUCP> barmar@mit-eddie.UUCP (Barry Margolin) writes:

>The Lisp Machine compiler puts enough information in compiled code so
>that it is easy to relate to its source code.  For instance, variable
>names are still available when debugging compiled code.  When a function
>stops with an error there is not much more that you can do with it if it
>is being interpreted than if it is being executed from compiled code.

In the "primitive" PSL runtime environment, it's possible to edit the
expression whose evaluation caused an error, as long as it's interpreted.
This is invaluable when fixing nuisance bugs (like mismatched numbers
of args).  And of course variable names are always available.

>>In general, I tend to object to doing complex operations (like typechecking)
>>in hardware - it's just too inflexible.
>
>The alternatives are either (1) doing type checking in software or (2)
>adding type declarations to programs.  For those of you who think I
>should add (3) do code analysis that determines the parameter types,
>please explain how a compiler is to perform such an analysis when the
>entire compilation unit contains a single function definition such as:
>
>(defun sample-function (arg1 arg2)
>       (sample-function-2 (+ arg1 arg2)))
>
>Without the user adding type declarations, as in
>
>(defun sample-function (arg1 arg2)
>       (declare (fixnum arg1 arg2))
>       (sample-function-2 (+ arg1 arg2)))
>
>(or the implicit declaration facility, such as Maclisp's +/+$/plus
>distinction) there is no way for the compiler to know that it can
>compile the (+ arg1 arg2) into a simple ADD instruction.

I exaggerated about the 99.9999%.  It's probably more like 99% in
average code, the remaining 1% being operations that are being
used in a truly generic way.  LM code probably has a higher percentage
of generic ops, but examination of the source code suggests to me
that many of the generic flavor-mixing operations are gratuitous.

On the other hand, ML does full type inference all the
time, and I don't know of any inherent reason that Lisp can't do
that also.  The above example is unrealistic - presumably sample-function
and sample-function-2 have a context, and type inference starts
from that context.  All that can be done with the above example
is to infer that sample-function and sample-function-2 return the
same type that + does; just a number.
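
A hedged illustration of what context can buy (SAMPLE-FUNCTION-2 is still
assumed to be defined elsewhere): if a caller is visible to the compiler as
well, the argument types can flow into the callee without any declarations.

(defun sample-function (arg1 arg2)
  (sample-function-2 (+ arg1 arg2)))

(defun caller ()
  (sample-function 3 4))   ; literal fixnums, so on this path the + above
                           ; can be open-coded as a fixnum add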

>>I agree, but a GC coprocessor is really all you need.  Actually, it
>>would be better just to have a vanilla multiprocessor, and run GC
>>tasks concurrently with computation tasks, but that's still in research!
>
>I suggest you read David Moon's "Garbage Collection in a Large Address
>Space Lisp Implementation", in the Proceedings of the 1984 ACM Symposium
>on Lisp and Functional Programming.  Without special assist it is really
>hard to prevent the GC from seriously impacting your paging performance,
>as most GC's need to look at nearly all of virtual memory.  The above
>paper describes the mechanism used in the Symbolics 3600 to implement a
>very good garbage collector that doesn't need to page in lots of memory.

I heard the paper, and I read the proceedings, and the paper is a case
study rather than a general treatise on the topic.  There's not much to
convince me that the mechanism is general enough to be useful anywhere
else.  The idea of several kinds of spaces has been around for a while,
but most of the other details are 3600-specific.  It's also not clear to
me what the performance results were supposed to prove.

							stan shebs

zrm@prism.UUCP (05/28/85)

Not only does runtime type-tagging mean that errors are detected
earlier than in systems that can't or don't do runtime type-checking,
it also means that an add instruction does not need to know what it
will be adding together at compile time.  Doing type checking in
macro-code is one reason why object-oriented systems run not-so-fast.
If the hardware does it for you, you spend less time fetching
flag words that tell you what you are trying to operate on.  It also
makes "hidden" pointers possible, and that lets you pack a list into a
vector.
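
A hedged model of that last point, usually called CDR-coding (the codes and
layout below are made up for illustration; on the real machines the per-cell
code lives in the tag bits of each word rather than in visible data):

;; :NEXT means "my cdr is the very next cell"; :NIL means "end of list".
(defun read-packed-list (cells index)
  (let ((cell (aref cells index)))
    (cons (car cell)
          (if (eq (cdr cell) :nil)
              '()
              (read-packed-list cells (1+ index))))))

;; (read-packed-list #((10 . :next) (20 . :next) (30 . :nil)) 0)
;;   => (10 20 30)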

In short, hardware support lets languages like Zetalisp deliver on the
promise of high-level languages: you are free to use the convenient
constructs of object-oriented programming without incurring horrible
performance penalties.

The huge amount of code that exists for the Zetalisp environment means
that, in practical terms, Lisp machines do make programmers
significantly more productive than do the Lisp environments and other,
more conventional environments found on conventional minis.