[net.ai] Compiled versus interpreted LISP

rggoebel@water.UUCP (Randy Goebel) (10/18/84)

Does anyone have any data describing the execution speedup of
compiled versus interpreted LISP?  E.g., interpreted versus compiled
MACLISP, INTERLISP, or Franz Lisp.  Is there any evidence to suggest
that compiling increases the efficiency of constructs other than PROG?
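
For concreteness, the sort of contrast in question: a PROG loop versus
a plain recursive equivalent.  (A sketch in Maclisp-family LISP; the
function names here are made up.)

  (de sum-prog (n)                ; PROG loop: explicit tag, GO, RETURN
      (prog (acc)
            (setq acc 0)
       loop (cond ((lessp n 1) (return acc)))
            (setq acc (plus acc n))
            (setq n (sub1 n))
            (go loop)))

  (de sum-rec (n)                 ; the same function without PROG
      (cond ((lessp n 1) 0)
            (t (plus n (sum-rec (sub1 n))))))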

Randy Goebel
Logic Programming and Artificial Intelligence Group
Computer Science Department
University of Waterloo
Waterloo, Ontario, CANADA N2L 3G1
UUCP:	{decvax,ihnp4,allegra}!watmath!water!rggoebel
CSNET:	rggoebel%water@waterloo.csnet
ARPA:	rggoebel%water%waterloo.csnet@csnet-relay.arpa

holtz@clan.UUCP (Neal Holtz) (10/22/84)

***************************************

Does compiling speed up LISP?  Immensely (usually).  In most cases
you should expect a speedup of 5 or 10 to one.  Here are some tests I
did yesterday on an Apollo DN660, with a compiler that is not quite as
good as it could be (compiled code should run about twice as fast as
it does).
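
A minimal harness of the sort one might use for such timings, assuming
a Franz-style (ptime) that returns a list of processor time and gc time
(units are implementation dependent; time-form is a made-up helper):

  (de time-form (form)            ; crude one-shot timer
      ((lambda (start)
         (eval form)
         (difference (car (ptime)) (car start)))
       (ptime)))

  (time-form '(fib 15))           ; e.g., test 2 below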

1. a couple of old standbys:

  (de fact (n)
      (cond ((lessp n 1) 1)
            (t (times n (fact (sub1 n))) )
            ))

  (fact 200)

      Interpreted:  10.9s
      Compiled:      9.7s    Speedup: 1.1

      Most of the time is spent in bignum arithmetic, so compiling has
      very little effect on this one.  Also, don't use this as a comparison
      with other machines; I think the bignum package needs work.
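
      The fixnum-only arithmetic of this LISP family makes the contrast
      visible.  A sketch, assuming itimes and ilessp exist alongside the
      izerop/ionep/iplus used in test 2 below:

        (de ifact (n)                 ; fixnum-only variant; overflows
            (cond ((ilessp n 1) 1)    ; quickly, so keep n small
                  (t (itimes n (ifact (sub1 n))))))

      Compiled, this should show a speedup much closer to that of test 2,
      since no time is spent in bignum arithmetic.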

2.

  (de fib (n)
    (cond ((izerop n) 0)
          ((ionep n)  1)
          (t (iplus (fib (sub1 n))  (fib (sub1 (sub1 n))) ))
          ))

  (fib 15)

      Interpreted:   8.3 sec.
      Compiled:      0.097 sec.     Speedup:  86 to 1

      This includes an overhead of about 0.05 sec. for read-print.
      Subtract that and you get about a 175 to 1 speedup
      (8.3 / 0.047 is roughly 175).


3. A little bit more meaningful: a simple "prolog" unifier, essentially
   as given in "LOGLISP: an alternative to Prolog" by Robinson & Sibert,
   Syracuse U.  (It is about a page of code; I can send it if you are
   interested.)

   500 executions of:

       (unify  '($foo x (y z))  '(y x ($foo $bar)) )

             [atoms beginning in "$" are variables]

       Interpreted:  311 sec.  (including twelve 700K garbage collections)
       Compiled:     0.737 sec.   Speedup:  422 to 1

   In this one, there was a *LOT* of macro expansion going on, and the
   expansions were not destructive.  One could assume a substantial
   speedup in the interpreter if expanded macros displaced the original
   code.
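
   For flavor (this is not the Robinson & Sibert code, just a minimal
   sketch in the same spirit; the three-argument interface, the (t . t)
   dummy binding, and a Maclisp-style getchar are all assumptions):

     (de varp (x)                  ; atoms beginning in "$" are variables
         (and (atom x) (eq (getchar x 1) '$)))

     (de unify (x y env)     ; env: assq list of bindings, nil = failure
         (cond ((varp x) (bind x y env))
               ((varp y) (bind y x env))
               ((atom x) (and (eq x y) env))
               ((atom y) nil)
               (t ((lambda (e) (and e (unify (cdr x) (cdr y) e)))
                   (unify (car x) (car y) env)))))

     (de bind (v x env)
         ((lambda (b)
            (cond (b (unify (cdr b) x env))   ; already bound: unify values
                  (t (cons (cons v x) env)))) ; else extend the bindings
          (assq v env)))

     ; the dummy pair keeps an empty substitution distinct from failure;
     ; no occur check, as is traditional
     (unify '($foo x (y z)) '(y x ($foo $bar)) '((t . t)))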

Neal Holtz
Dept. of Civil Engineering
Carleton University
Ottawa             ...!utcsrgv!nrcaero!clan!holtz

mwm@ea.UUCP (10/25/84)

I have in front of me the UOLISP manual. I quote from it:

	The UOLISP philosophy places greater emphasis on compiled
	code since these functions take 1/2 to 1/3 the space of
	interpreted ones as well as running 20 to 75 times faster.

I just got my copy of UOLISP, and haven't had time to verify this. I plan
on doing so in the near future.

Among the references I also find:

	Marti, J., 'An Optimizing Compiler for LISP for the Z80',
	SIGSMALL 1982.

The Z80 is a special case for optimization, as its addressing modes don't
help with stack fetching except under special conditions.  See the October
1984 Dr. Dobb's Journal for an optimization scheme that can help a
Z80/8080 but is totally worthless for a reasonable processor (6809, 68000,
VAX, DEC-10, etc.).

	<mike

brennan@iuvax.UUCP (10/26/84)

For the fibonacci example, and others, the speedup depends greatly
on whether the compiler in question handles tail recursion properly.

This also makes a big difference in C code, and can really throw off
benchmarks.
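
The doubly recursive fib above is not tail recursive (the outermost
call in the recursive arm is to iplus), so the optimization applies
to something like this accumulator version, a sketch in the same
dialect, which a tail-call-compiling compiler can turn into a loop:

  (de fib-iter (n a b)            ; a = fib(k), b = fib(k+1)
      (cond ((izerop n) a)
            (t (fib-iter (sub1 n) b (iplus a b)))))

  (de fib2 (n) (fib-iter n 0 1))  ; (fib2 15) => 610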

JD Brennan
...!ihnp4!inuxc!iuvax!brennan	(USENET)
Brennan@Indiana			(CSNET)
Brennan.Indiana@CSnet-Relay	(ARPA)