[comp.arch] hardware for late-binding languages

pardo@june.cs.washington.edu (David Keppel) (05/16/88)

billo@cmx.npac.syr.edu (Bill O'Farrell) writes:
[late-binding languages]
>In fact, using such languages has the beneficial effect of encouraging
>the design of hardware better suited to higher-level language
>implementation -- example: lisp machines.

I like architectures to support late-binding languages.  A relevant
note, however:  I saw a video by somebody from Symbolics (the video is
a year or more old, I think).  During the Q&A section, one of the
questions was something along the line of "how much faster does a
program run on a machine w/ hardware support?"  His answer was
essentially that if you had a fixed program (e.g., you weren't trying
to debug it), that compiler technology would give you a program that
ran just as fast on an architecture w/o support for late-binding
languages.  The real wins of the extra hardware, he said, were during
development, etc.

Any comments on this?

	;-D on  ( Hardware for Softwears )  Pardo

barmar@think.COM (Barry Margolin) (05/17/88)

In article <4926@june.cs.washington.edu> pardo@uw-june.UUCP (David Keppel) writes:
[regarding Symbolics Lisp Machines:]
>essentially that if you had a fixed program (e.g., you weren't trying
>to debug it), that compiler technology would give you a program that
>ran just as fast on an architecture w/o support for late-binding
>languages.  The real wins of the extra hardware, he said, were during
>development, etc.

If the only issue is early vs. late binding, this is probably true.
Once a program is in production use it is not likely to change often,
so late binding isn't much of a feature.

However, specialized Lisp hardware typically has special support for
other things besides late binding, e.g. type dispatching,
object-oriented programming, garbage collection, etc.
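
To make the garbage-collection point concrete, here is a rough C
sketch of a software card-marking write barrier, the sort of per-store
bookkeeping that barrier hardware can hide.  The heap layout, card
size, and names are assumptions invented for this example, not any
particular collector's:

    /* Illustrative sketch only: a software card-marking write barrier.
     * The heap layout, card size, and names are invented for this
     * example; they do not describe any real collector. */
    #include <stdio.h>

    #define HEAP_SLOTS (1 << 17)                  /* 128K pointer-sized slots */
    #define CARD_SHIFT 9                          /* 512-byte cards           */
    #define NCARDS     ((HEAP_SLOTS * sizeof(void *)) >> CARD_SHIFT)

    static void *heap[HEAP_SLOTS];                /* the toy heap             */
    static unsigned char card_table[NCARDS];      /* one dirty byte per card  */

    /* Every pointer store into the heap also dirties its card, so the
     * collector can find cross-generation references without scanning
     * the whole heap.  On a stock CPU this is extra inline code on every
     * store; dedicated barrier hardware folds it into the store itself. */
    static void write_ref(void **slot, void *value)
    {
        size_t off = (size_t)((char *)slot - (char *)heap);
        card_table[off >> CARD_SHIFT] = 1;        /* mark the card dirty */
        *slot = value;
    }

    int main(void)
    {
        write_ref(&heap[0], &heap[64]);           /* store a pointer into slot 0 */
        printf("card 0 dirty: %d\n", card_table[0]);
        return 0;
    }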

Barry Margolin
Thinking Machines Corp.

barmar@think.com
uunet!think!barmar

mdr@reed.UUCP (Mike Rutenberg) (05/17/88)

The tendency among implementations of Smalltalk (a very late-bound
language) is to run on standard CPUs like 68020s or 80386s.

You might propose that a custom instruction set or hardware support
would make the implementation faster.  Specific hardware support may
help a given implementation, but you then have to build the next
generation of that machine if the performance win is going to continue
with you.

If you do your own custom hardware to support a language, you have to
do it all, both the software and the hardware.  You can't spend as much
time building fast software, and you don't get the automatic win
that comes when somebody *else* spends the millions to do something
like an MC88000.

It looks to me as though you get the fastest language machines by
concentrating on building fast software that runs on the fastest
standard CPUs.

Mike

billo@cmx.npac.syr.edu (Bill O) (05/18/88)

In article <4926@june.cs.washington.edu> pardo@uw-june.UUCP (David Keppel) writes:

>I like architectures to support late-binding languages.  A relevant
>note, however:  I saw a video by somebody from Symbolics (the video is
>a year or more old, I think).  During the Q&A section, one of the
>questions was something along the line of "how much faster does a
>program run on a machine w/ hardware support?"  His answer was
>essentially that if you had a fixed program (e.g., you weren't trying
>to debug it), that compiler technology would give you a program that
>ran just as fast on an architecture w/o support for late-binding
>languages.  The real wins of the extra hardware, he said, were during
>development, etc.
>
>Any comments on this?

He's probably right, if by "fixed program" you mean one that has
all types fully declared.  In that case, run-time type checking gets
you essentially nothing.  On the other hand, if you had to do
run-time type checking on a machine with no support for it, then
your compiled code would be so weighed down with checks that it
would probably be much slower.
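
To show what that burden looks like in practice, here is a small C
sketch, with a tag layout I've invented just for illustration (this
isn't compiler output from any real Lisp system), of the same loop
with and without declarations:

    #include <stdio.h>

    typedef unsigned long word;

    #define TAG_MASK   0x3UL                  /* low two bits hold the tag */
    #define TAG_FIXNUM 0x0UL
    #define MK_FIX(n)  (((word)(n)) << 2)
    #define FIX_VAL(w) ((long)((w) >> 2))

    /* With full declarations the compiler can emit a bare
     * shift-and-add loop. */
    long sum_declared(const word *v, int n)
    {
        long s = 0;
        int  i;
        for (i = 0; i < n; i++)
            s += FIX_VAL(v[i]);               /* just shift and add */
        return s;
    }

    /* Without declarations, and without tagged hardware, every element
     * pays a mask, a compare, and a branch before the add. */
    long sum_checked(const word *v, int n)
    {
        long s = 0;
        int  i;
        for (i = 0; i < n; i++) {
            if ((v[i] & TAG_MASK) != TAG_FIXNUM) {
                fprintf(stderr, "wrong type at element %d\n", i);
                return -1;                    /* a real system would trap or dispatch */
            }
            s += FIX_VAL(v[i]);
        }
        return s;
    }

    int main(void)
    {
        word v[4] = { MK_FIX(1), MK_FIX(2), MK_FIX(3), MK_FIX(4) };
        printf("%ld %ld\n", sum_declared(v, 4), sum_checked(v, 4));
        return 0;
    }

On a tagged architecture the check happens in parallel with the add;
here it is a mask, a compare, and a branch added to the inner loop.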

Of course, with RISC designers doubling the speed of their chips
every year or so, it could be that just putting in the extra code
and running on a RISC will be the most cost-effective way of
getting run-time typing.

Bill O'Farrell, Northeast Parallel Architectures Center at Syracuse University
(billo@cmx.npac.syr.edu)