[net.lang] Lisp Machine Type and GC Techniques

rpk@mit-eddie.UUCP (Robert Krajewski) (05/24/85)

    From: shebs@utah-cs.UUCP (Stanley Shebs)
    Message-ID: <3346@utah-cs.UUCP>

    While runtime
    type checking does increase robustness, it's usually an incredible
    waste of resources; 99.99999% of type tests will return a result that
    is knowable in advance (the remaining .00001% are bug detections).

On the three MIT-derived Lisp Machines (3600, Lambda, Explorer), type
checking is NOT a waste of resources.  The concept of a type in (Common)
Lisp is strong, but the freedom to operate on objects without worrying about
their types at compile time is one of its bigger features.  It means that you
can change the representation of the things you are concerned with.

    There are better ways to ensure robustness.

Even if hardware type dispatch were much slower, it would still be preferable:
anything you do at compile time instead would rob Lisp of its ``dynamic''
qualities.

    In general, I tend to object to doing complex operations (like
    typechecking) in hardware - it's just too inflexible.  Does anybody
    really believe that the primitive types in Zetalisp are worth wiring
    into the machine (or even the microcode)?

The type-checking feature in the hardware amounts to a microcode dispatch
on the contents of a byte field; on the Lambda, CADR, and Explorer, this is
a five bit field in the pointer.  You can use the dispatch table for other
things if you want to.  The tagged pointer architecture uses the hardware,
but the hardware itself is not heavily optimised to run Lisp.  (This may
not be so true on the 3600, since I am less acquainted with the internals
of that machine.)  

    >Another place where special hardware can be a big win is in garbage
    >collection.

    I agree, but a GC coprocessor is really all you need.  Actually, it
    would be better just to have a vanilla multiprocessor, and run GC
    tasks concurrently with computation tasks, but that's still in research!

You have obviously not thought about the many issues in garbage collection,
such as volatility levels (relative lifetime of objects) and what happens
to performance in a virtual memory system when you do garbage collection.
The idea of using a coprocessor does not buy you much either -- you have
got to make sure that storage conventions are never violated; the locking
overhead (between the two processors) would quickly become too much.  And
the multiprocessing aspects of this are no longer research -- just ask hackers
at LMI and Symbolics.  (I work for the former.)
-- 
``Bob'' (Robert P. Krajewski)
ARPA:		RpK@MC        MIT Local:	RpK@OZ
UUCP:		genradbo!miteddie!rpk