[net.micro] Why Lisp loses on personal machines

FISCHER@Rutgers@sri-unix (12/12/82)

From:  Ron <FISCHER@Rutgers>

[What follows is a massive flame attempting to explain why Lisp is
hard to implement, with reasonable performance, on a 16 bit micro.  It
is late and I can't vouch for readability.  I will gladly clarify or
correct myself, or others will do it for me.]

The biggest problem implementing Lisp on current microcomputer
architectures is that addresses are limited to 16 bits.  Even with the
8086's separate data and program areas it is a hyper pain to do
serious Lisp things, because you just don't have enough space.

Lisp is one of the few languages that does reasonable memory
management and lets you use it to good advantage.  Part of the reason
is that everything in Lisp is held in structures drawn from a central
pool of "managed memory."  I mean EVERYTHING, space for your code, the
editor that you write the code with, pieces of the system that support
your code, etc.  You can therefore see why you'd want/need to have
a lot of memory for Lisp.
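To make the "central pool" idea concrete, here is a minimal sketch in C of a cons-cell pool; the names (init_pool, cons), the pool size, and the free-list scheme are all illustrative, not how any particular Lisp does it:

```c
#include <stddef.h>
#include <stdio.h>

/* A toy "managed memory" pool: every cons cell is drawn from here.
   POOL_SIZE is an arbitrary illustrative number. */
#define POOL_SIZE 4096

struct cons_cell { struct cons_cell *car, *cdr; };

static struct cons_cell pool[POOL_SIZE];
static struct cons_cell *free_list;

void init_pool(void)
{
    /* Chain every free cell to the next one through its cdr field. */
    for (size_t i = 0; i + 1 < POOL_SIZE; i++)
        pool[i].cdr = &pool[i + 1];
    pool[POOL_SIZE - 1].cdr = NULL;
    free_list = &pool[0];
}

struct cons_cell *cons(struct cons_cell *car, struct cons_cell *cdr)
{
    struct cons_cell *c = free_list;
    if (c == NULL) {
        /* A real Lisp would garbage collect here instead of giving up. */
        fprintf(stderr, "pool exhausted\n");
        return NULL;
    }
    free_list = c->cdr;
    c->car = car;
    c->cdr = cdr;
    return c;
}
```

The point is only that allocation never goes to the operating system: the code, the editor, everything competes for cells in this one pool, which is why the pool's total size matters so much.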

Lisp structures also don't look much like what most microcomputer CPUs
expect to see in memory (data areas and instructions).  This is what
makes it hard to "do Lisp" in general: Lisp does not expect a
conventional machine underneath it, and that unconventional
expectation has to somehow be simulated.  This is because Lisp
dynamically types its data structures, which means you have to keep
that type info somewhere *in* the data object itself.  It also means
that every operation you perform on the data has to be able to react
to any kind of data (if only to tell you you've provided data of the
wrong type).

The basic unit of data in Lisp is usually some sort of "Cell" with
some type bits and an address.  The type bits say what the object is
(what the address in the cell points to), a number, a string,
whatever.  The address points to a place where the real information
actually sits.
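A hypothetical sketch in C of such a cell; the tag values, type names, and field layout here are made up for illustration:

```c
#include <stdint.h>

/* Made-up type codes; a real Lisp picks its own encoding. */
enum tag { TAG_FIXNUM, TAG_CONS, TAG_STRING };

/* One Lisp "cell": some type bits plus the address where the
   real data actually sits. */
struct cell {
    unsigned  tag;   /* what kind of thing the address points to */
    uintptr_t addr;  /* where it points */
};

/* Every primitive must look at the tag before touching the data,
   if only to report a wrong-type argument. */
const char *type_name(struct cell c)
{
    switch (c.tag) {
    case TAG_FIXNUM: return "number";
    case TAG_CONS:   return "pair";
    case TAG_STRING: return "string";
    default:         return "unknown";
    }
}
```

On a conventional CPU this tag check is extra instructions on every operation; a tagged architecture does it in hardware.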

A "tagged architecture" as referred to in a previous message, is a CPU
that can respond to the "tags" or type information in a Lisp data
object in a very direct manner.  It "understands" about Lisp typing of
data.

The usual way of implementing a Lisp is to take a regular machine
address word and allocate some of its bits to the type and the rest
to the address.  This is a waste on 16 bit address machines.  If you
allocate say a minimal 3 bits for type codes, that leaves you 13 bits
to express an address with.  That means you can have a maximum of 8k
Lisp objects.  Since everything is held in a Lisp object this isn't
enough for anything but the smallest Lisp systems.  Probably adequate
for learning what Lisp looks like, but not much more.
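The bit arithmetic can be sketched in C; the macro names are illustrative, but the numbers follow directly from the scheme above: 3 tag bits in a 16 bit word leave 13 address bits, i.e. 2^13 = 8192 distinct objects:

```c
#include <stdint.h>

/* Pack a tagged pointer into one 16 bit word: 3 tag bits up top,
   13 address bits below. */
#define TAG_BITS   3
#define ADDR_BITS  (16 - TAG_BITS)       /* 13 */
#define MAX_OBJS   (1u << ADDR_BITS)     /* 8192 objects, total */
#define ADDR_MASK  (MAX_OBJS - 1)

static uint16_t pack(unsigned tag, unsigned addr)
{
    return (uint16_t)((tag << ADDR_BITS) | (addr & ADDR_MASK));
}

static unsigned tag_of(uint16_t w)  { return w >> ADDR_BITS; }
static unsigned addr_of(uint16_t w) { return w & ADDR_MASK; }
```

Every extra tag bit halves the number of objects you can name, which is exactly why this scheme only becomes comfortable on a 32 bit word.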

There are ways of getting around this, but they tend to require too
much effort to create or make the language too slow to use.  A really
bizarre example would be keeping all of the Lisp objects on disk (and
for a micro I don't mean virtual memory on disk) and fetching the
data that way.  This is a technique currently being used to extend
the performance of microcomputer systems short on address space: add
a RAMdisk, which basically makes up for not having enough directly
addressable memory.  Lisp systems done this way would still be very
slow compared to ones done on a 32 bit architecture.

In short, the Motorola 68000 and National 16032 are the first
microcomputers that will have "real" lisps on them because they have
32 bit address architectures.  Martin Griss and the people at UTAH-20
have a version of their Portable Standard Lisp system running on a
68000 based
machine now.  It is a "real lisp."

(ron)
PS- What can I say... "Real Lisps don't eat disk"?
-------