[comp.arch] Survey of architectures

eugene@pioneer.arpa (Eugene N. Miya) (04/21/88)

This form raises an interesting question.

I work with a young physicist who asked me what would be interesting machines
(computers) to learn about.  Let me explain a little bit about this fellow.
(He asks that I do this.)  He has only programmed on three machines in his
entire life (his words): Cray X-MPs [he usually blows $60,000 on a single
run], the VAX line (VMS), and his little PC Jr., which he got from work,
cheap.  He's amazed we have not standardized computers better (he would
really like an X-MP which runs VMS; he wants one language, one OS, one
instruction set, and pure speed) and says this (standards) is why computer
science will not be a science.  <Flame [follow up] all you want about this,
but use mail for the next paragraph.  He will read these, but he won't
change.>

He's curious about what machines have been very influential in the design
of computers (what machines have been most important).  He's willing to
read ten references which I have agreed to try and find for him. He is
not willing to take my suggested list of important machines through history
(the IBM-360/370, DEC-10/20, Univac 1100, Xerox Alto, Apple II, IBM PC,
etc. -- I've done a Heisenflop now that I've given these to you: some user
has to recommend these as significant machines IF they are) and wants me
to post this request because he wants the opinions of "experts."  So what
ten machines should I have him read about?  If I get more than ten, I will
summarize and take "votes," so I prefer to get mail on this one rather
than deal with follow-ups.  Any computer will do; it just should be
significant or influential -- sort of a "top ten of architectural designs."
Let me know if you have run on them, or only read about them.

I guess the question is largely one of the generality of computer
experience.  Do machines completely bias the way we think about computers?
I suspect they do.  It's funny that we don't (can't) take a great deal
from history, only measured doses; otherwise we seem to stagnate.

From the Rock of Ages Home for Retired Hackers:

--eugene miya, NASA Ames Research Center, eugene@ames-aurora.ARPA
  "You trust the `reply' command with all those different mailers out there?"
  "Send mail, avoid follow-ups.  If enough, I'll summarize."
  {uunet,hplabs,hao,ihnp4,decwrl,allegra,tektronix}!ames!aurora!eugene

bzs@bu-cs.BU.EDU (Barry Shein) (04/21/88)

Although the ultimate machine/os/language/etc which can be used as a
universal standard is an attractive idea, it remains a dream, quite
possibly a false one: it rests on the wrong assumptions.

Computers exist within a techno-economic framework. Why doesn't
someone build the ultimate frob panel that will work as well for an
oscilloscope as a microwave oven or synchrotron? Is this a reasonable
question?

So far the purpose of computers is to simulate other realities. As the
technological understanding and economics of the desirability of
having simulations of these realities changes, computer architectures
change. New opportunities arise and they are largely economic. For
example, address spaces imply real wires and fighting combinatorial
connectivity problems which cost real money, so a group of people have
trouble agreeing on the maximally needed address space. One person
says that enough bits to address every electron in the universe should
be enough, another points out the computational convenience of a
segmented, sparse address space. Another notes that you will need at
least one more bit to store the spin of each electron, another bunch
to store its location or energy level. Another says that is not
worthwhile...
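
(A back-of-the-envelope check of that example, assuming the usual figure
of roughly 10^80 electrons in the visible universe:

    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        double electrons = 1e80;                       /* assumed count */
        /* bits needed to give every electron its own address */
        printf("%.0f bits\n", ceil(log2(electrons)));  /* prints 266 */
        return 0;
    }

One more bit for spin then makes 267, before anyone asks for position or
energy level.)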

Put another way, no one has successfully described a set of operations
both sufficient and minimal with which to describe reality. It's like
asking why we come up with new words in human languages: it's because
new ideas need to be expressed.

	-Barry Shein, Boston University

esf00@amdahl.uts.amdahl.com (Elliott S. Frank) (04/21/88)

In article <7657@ames.arpa> eugene@pioneer.UUCP (Eugene N. Miya) writes:
>This form raises an interesting question.
>
>He's curious about what machines have been very influential in the design
>of computers

The following list would be my take on what designs other designers have
reacted to over the last twenty years or so.  I've probably slighted
the VLIW and parallel crowd, but I haven't [yet] seen a lot of
impact on other architecture that's come out of their research.  Machines,
yes.  Architectural principles, no.

* IBM 360/370 -- Illustrates 'power of architecture', conformance to
  a scalable design

* Elliot Electric [ICL] 2901/Burroughs 5000/5500 -- commercial stack
  machines, higher-level-language programming

* Burroughs 1700/1800 -- variable-microprogram machine

* IBM 704/709/7090/7094/GE 300/400/600/Honeywell 6000/Series 60/66 -- the
  prototypical CISC [36-bit] word machine

* ??? (later Honeywell) DDP-316, 516 -- the father of all 16-bit minis

* VAX 11/35 --> VAX 8800 -- the 'ultimate' CISC

* Berkeley RISC

* IBM STRETCH/IBM 360/91/CDC 7600/Cyber 200/205/ETA 10 -- complex hardware
  (scoreboarding, pipelining, etc.) as the way to push technology; a toy
  scoreboard sketch follows this list

* Intel 432 --

* Intel 4004/8008/8080/Zilog Z-80/8086/80286/80386 -- silicon CISC
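
As a toy illustration of the scoreboarding mentioned above (a deliberately
simplified sketch in C, not the real CDC 6600 algorithm, which also tracks
functional units and WAR/WAW hazards): give each register a busy bit, and
let an instruction issue only when its sources and destination are free.

    #include <stdbool.h>

    #define NREGS 8
    static bool busy[NREGS];   /* one "result pending" bit per register */

    /* Issue test: stall while any needed register awaits a result. */
    bool can_issue(int dst, int src1, int src2)
    {
        return !busy[dst] && !busy[src1] && !busy[src2];
    }

    void issue(int dst)     { busy[dst] = true;  }  /* result now pending */
    void writeback(int dst) { busy[dst] = false; }  /* result has arrived */
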
-- 
Elliott Frank      ...!{hplabs,ames,sun}!amdahl!esf00     (408) 746-6384
               or ....!{bnrmtv,drivax,hoptoad}!amdahl!esf00

[the above opinions are strictly mine, if anyone's.]
[the above signature may or may not be repeated, depending upon some
inscrutable property of the mailer-of-the-week.]

crowl@cs.rochester.edu (Lawrence Crowl) (04/22/88)

In article <7657@ames.arpa> eugene@pioneer.UUCP (Eugene N. Miya) writes:
>He's amazed we have not standardized computers better (he would really like
>an X-MP which runs VMS; he wants one language, one OS, one instruction set,
>and pure speed) and says this (standards) is why computer science will not
>be a science.

Argh!  Languages, operating systems and instruction sets are usually
commercial products and at most experiments, not theories.  Asking for one
system implies that computer science "has the answer".  If we had the answer,
we would not be a science.  What we need is a better standardized physical
theory.  Physicists keep changing their minds.  Let's stick with Newtonian
physics.  After all, if you have more than one theory it is not a science.
 
>He's curious about what machines have been very influential in the design
>of computers (what machines have been most important).

Is this what he perceives as computer science?  The most influential boxes?
If he is going to attack computer science, then he should seek to understand
the science, not the boxes.  He seems to be trying to understand physics by
asking which bridges most influenced the design of later bridges.

-- 
  Lawrence Crowl		716-275-9499	University of Rochester
		      crowl@cs.rochester.edu	Computer Science Department
...!{allegra,decvax,rutgers}!rochester!crowl	Rochester, New York,  14627

amos@taux01.UUCP (Amos Shapir) (04/22/88)

An important omission: the PDP-11, the most successful mini and the grand-dad
of Unibus architecture (memory-mapped devices).
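
To make "memory-mapped devices" concrete: on a Unibus machine a device's
control and data registers occupy ordinary bus addresses, so plain loads and
stores do the I/O.  A minimal C sketch, with the addresses and ready bit
modeled loosely on the PDP-11 console output registers (treat the exact
values as illustrative):

    #include <stdint.h>

    #define XCSR  ((volatile uint16_t *)0177564) /* transmit status reg */
    #define XBUF  ((volatile uint16_t *)0177566) /* transmit data reg   */
    #define READY 0200                           /* "transmitter ready" */

    void console_putc(char c)
    {
        while ((*XCSR & READY) == 0)   /* spin until the device is idle */
            ;
        *XBUF = (uint16_t)c;           /* an ordinary store IS the I/O  */
    }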

-- 
	Amos Shapir			(My other cpu is a NS32532)
National Semiconductor (Israel)
6 Maskit st. P.O.B. 3007, Herzlia 46104, Israel  Tel. +972 52 522261
amos%taux01@nsc.com  34 48 E / 32 10 N

uday@mips.COM (Robert Redford) (04/22/88)

In article <7657@ames.arpa>, eugene@pioneer.arpa (Eugene N. Miya) writes:
> This form raises an interesting question.
> 
> I work with a young physicist who asked me what would be interesting machines
> (computers) to learn about.  
> He's curious about what machines have been very influential in the design
> of computers (what machines have been most important).  He's willing to
> read ten references which I have agreed to try and find for him. 

   Maybe "The Connection Machine" by W. Daniel Hillis is a good
   reference for someone with a physics mindset.  There is a chapter in the
   book titled "Why Computer Science is No Good or New Computer Architectures
   and their Relationship to Physics".


                                                  ..Uday

chuck@amdahl.uts.amdahl.com (Charles Simmons) (04/22/88)

In article <21883@bu-cs.BU.EDU> bzs@bu-cs.BU.EDU (Barry Shein) writes:
>
>Although the ultimate machine/os/language/etc which can be used as a
>universal standard is an attractive idea, it remains a dream, quite
>possibly a false one: it rests on the wrong assumptions.
>
>Computers exist within a techno-economic framework. Why doesn't
>someone build the ultimate frob panel that will work as well for an
>oscilloscope as a microwave oven or synchrotron? Is this a reasonable
>question?
>
>So far the purpose of computers is to simulate other realities. As the

I think Barry has some good points here, and I particularly
like the sentence "The purpose of computers is to simulate
other realities".  I think I have small disagreements with some
of his comments, however.  

First, I would question whether the original poster really
cares about having a single machine implementation and a single
instruction set.  My claim is that most people are not directly
exposed to these aspects of a computer system and that they
couldn't care less.

Next I would ask, is it really the case that the purpose of
an operating system and a language is to simulate realities?
My response would be "no".  The purpose of an operating system
is to manage and allocate the resources of a computer among
various requesting processes.  The purpose of a language is
to allow a person to specify the other reality that she wants
the computer to simulate.  I see no inherent reason why standard
versions of an operating system and language cannot and should
not exist, except for the fact that no one has yet designed
the perfect OS and language.

>technological understanding and economics of the desirability of
>having simulations of these realities changes, computer architectures
>change. New opportunities arise and they are largely economic. For
>example, address spaces imply real wires and fighting combinatorial
>connectivity problems which cost real money, so a group of people have
>trouble agreeing on the maximally needed address space. One person
>says that enough bits to address every electron in the universe should
>be enough, another points out the computational convenience of a
>segmented, sparse address space. Another notes that you will need at
>least one more bit to store the spin of each electron, another bunch
>to store its location or energy level. Another says that is not
>worthwhile...
>
>Put another way, no one has successfully described a set of operations
>both sufficient and minimal with which to describe reality. It's like
>asking why we come up with new words in human languages: it's because
>new ideas need to be expressed.

We don't need to describe a sufficient set of minimal operations
to describe all possible realities in order to produce a standard
OS and language.  These tools should be sufficiently powerful
that each individual could use them to define their own realities.

Of course, this leaves open the question of things
like having a standard set of device drivers, or a standard
set of subroutine libraries.  It would be nice to have standard
*sub*sets of these things.

>	-Barry Shein, Boston University

-- Chuck Simmons

gillies@uiucdcsp.cs.uiuc.edu (04/22/88)

I hope it's o.k. for me to amend your list:

PDP-11:  The addressing modes of this mini even influenced a major
	 language (C).  Now that's influence!!!  Notable ripoffs are
	 the 68000, and the Vax-11 (the 32032 is a Vax-11 ripoff).
	 (A sketch of the addressing-mode connection follows this list.)

Multics: The segmented architecture was imitated (unfortunately) by
	 the Intel 8086-8088 and esp. the 80286 (remember all those
	 articles in Electronics on "rings of protection" in the '286?)

CDC 6X00: Where did the idea for a scoreboard come from?  From
	 Seymour's machine!

CRAY-1:	 Introduced vector processing, and the idea of having huge
	 numbers of registers in a machine (later copied by RISCs).
	 Pioneered the concept of a horribly complicated machine
	 to program.  Put compiler writers/researchers back into business.

Intel 432: The ultimate CISC == Horrible failure.
VAX-11:	 2nd-place for CISC == Success.  9-11 cycles/instruction
	 These machines had a major influence on RISC designers
	 (e.g. disgust).  Also, inept Berkeley computer designers
	 (small designs only!) and small dies in hands-on VLSI courses
	 had a big impact on RISC computers (just kidding).
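
To make the PDP-11/C connection above concrete (a hedged sketch, nobody's
production code): C's *p++ idiom maps directly onto the PDP-11's
autoincrement addressing mode, so the string copy below compiles to roughly
one MOVB (R1)+,(R0)+ per character.

    /* Copy src to dst; each *d++ = *s++ is one autoincrement move. */
    char *copy_string(char *dst, const char *src)
    {
        char *d = dst;
        while ((*d++ = *src++) != '\0')
            ;
        return dst;
    }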

mbutts@mntgfx.mentor.com (Mike Butts) (04/22/88)

From article <29220@amdahl.uts.amdahl.com>, by esf00@amdahl.uts.amdahl.com (Elliott S. Frank):
> In article <7657@ames.arpa> eugene@pioneer.UUCP (Eugene N. Miya) writes:
>>This form raises an interesting question.
>>
>>He's curious about what machines have been very influential in the design
>>of computers
> 
> The following list would be my take on what designs other designers have
> reacted to over the last twenty years or so.  I've probably slighted
> the VLIW and parallel crowd, but I haven't [yet] seen a lot of
> impact on other architecture that's come out of their research.  Machines,
> yes.  Architectural principles, no.
> 

Allow me to add:

* DG Nova -- arguably an early (circa 1970) and quite popular RISC, which is a 
    direct descendant of...

* PDP-8 -- extremely RISC.  Just as the minimality of today's RISCs allows
    chip-level implementation, so did the PDP-8's minimality allow rack-level
    implementation in its day (25 years ago).

* PDP-1 -- the first minicomputer.  Although it filled several racks, it was
    mini for 1961, and its instruction set was decidedly a minicomputer's.

* FPS-164 -- original (1981) VLIW mini-super (it's **not** an array processor)


-- 
Mike Butts, Research Engineer         KC7IT           503-626-1302
Mentor Graphics Corp., 8500 SW Creekside Place, Beaverton OR 97005
...!{sequent,tessi,apollo}!mntgfx!mbutts OR  mbutts@pdx.MENTOR.COM
These are my opinions, & not necessarily those of Mentor Graphics.

root@mfci.UUCP (SuperUser) (04/24/88)

In article <76700015@uiucdcsp> gillies@uiucdcsp.cs.uiuc.edu writes:
>
>I hope it's o.k. for me to amend your list:
>
>PDP-11:  The addressing modes of this mini even influenced a major
>	 language (C).  Now that's influence!!!  Notable ripoffs are
>	 the 68000, and the Vax-11 (the 32032 is a Vax-11 ripoff).

I object to your using the term "ripoff".  We're all using boolean
algebra here; are we all ripping off George Boole?  I think it's 
sufficient to say that the PDP-11 series was very influential in
the minicomputer design space (and that's probably a big
understatement).

>
>CRAY-1:	 Introduced vector processing, and the idea of having huge
>	 numbers of registers in a machine (later copied by RISCs).

No, later copied by the RISC-I, but not by the IBM-801 nor Stanford's
MIPS.  The debate about the connection between large numbers of
registers and RISCs has been beaten to death here, and has long ago
passed beyond the point of useful rational discussion.  But we may as
well be historically accurate where possible.
>
>Intel 432: The ultimate CISC == Horrible failure.

(as I climb back on one of my soapboxes...)
You must mean a failure in the commercial marketplace.  So what?  Are
you attributing that failure to its CISC nature?  If so, I contend
you are wrong, and if not, then what's your point?

Bob Colwell            mfci!colwell@uunet.uucp
Multiflow Computer
175 N. Main St.
Branford, CT 06405     203-488-6090

peter@sugar.UUCP (Peter da Silva) (04/24/88)

In article ... esf00@amdahl.uts.amdahl.com (Elliott S. Frank) writes:
> The following list would be my take on what designs other designers have
> reacted to over the last twenty years or so.

> * ??? (later Honeywell) DDP-316, 516 -- the father of all 16-bit minis
    ^^^--- GE.

> * VAX 11/35 --> VAX 8800 -- the 'ultimate' CISC
    ^^^--- I hope you mean *PDP* 11/35.

> * Intel 432 --
          ^^^--- You're kidding, right? The ultimate in vaporhardware
		 is a significant machine?

> * Intel 4004/8008/8080/Zilog Z-80/8086/80286/80386 -- silicon CISC

What about Motorola 68000?

And there are a couple of *distinct* breaks in your progression. There's no
real comparison between the 4004 and 8008.  I've got the 4004 manual here,
and it looks more like a calculator chip than a general purpose computer.
Wonder why :->?

And the 8086 is nearly as big a break from the 8080.
-- 
-- Peter da Silva      `-_-'      ...!hoptoad!academ!uhnix1!sugar!peter
-- "Have you hugged your U wolf today?" ...!bellcore!tness1!sugar!peter
-- Disclaimer: These aren't mere opinions, these are *values*.

celerity@bucasb.bu.edu (Roger B.A. Klorese) (04/25/88)

In article <1882@sugar.UUCP> peter@sugar.UUCP (Peter da Silva) writes:
>In article ... esf00@amdahl.uts.amdahl.com (Elliott S. Frank) writes:
>> * ??? (later Honeywell) DDP-316, 516 -- the father of all 16-bit minis
>    ^^^--- GE.

This is incorrect.  The 316, 516, and 716 were introduced by 3C, Computer
Controls Corporation, which later became the Computer Controls Division
(CCD) of Honeywell.  This division was never owned by GE.

(This, by the way, is the machine that the original Prime 200 was introduced
as an object-compatible knockoff of.)

gillies@uiucdcsp.cs.uiuc.edu (04/25/88)

>> Written 11:13 am  Apr 23, 1988 by root@mfci.UUCP (Bob Colwell)
>>PDP-11:  The addressing modes of this mini even influenced a major
>>	 language (C).  Now that's influence!!!  Notable ripoffs are
>>	 the 68000, and the Vax-11 (the 32032 is a Vax-11 ripoff).
>
>I object to your using the term "ripoff".  We're all using boolean
>algebra here; are we all ripping off George Boole?  I think it's 
>sufficient to say that the PDP-11 series was very influential in
>the minicomputer design space (and that's probably a big
>understatement).

O.k., how about "clone"?  Or what about "shameless imitation"?
I was just depicting bloodlines.  Please let's not get into semantics wars.

>>CRAY-1:	 Introduced vector processing, and the idea of having huge
>>	 numbers of registers in a machine (later copied by RISCs).
>
>No, later copied by the RISC-I, but not by the IBM-801 nor Stanford's
>MIPS.  The debate about the connection between large numbers of
>registers and RISCs has been beaten to death here, and has long ago
>passed beyond the point of useful rational discussion.  But we may as
>well be historically accurate where possible.

O.k., not all RISCs have large register files, but I don't think
either of us said this....

>>Intel 432: The ultimate CISC == Horrible failure.
>
>(as I climb back on one of my soapboxes...)
>You must mean a failure in the commercial marketplace.  So what?  Are
>you attributing that failure to its CISC nature?  If so, I contend
>you are wrong, and if not, then what's your point?
>
>Bob Colwell            mfci!colwell@uunet.uucp

It is a fact of history that the chip was a commercial failure.  If,
as you assert, its architecture was not a failure, then by all means
name one architecture that was influenced (in a POSITIVE way) by the
Intel 432.

Don Gillies {ihnp4!uiucdcs!gillies} U of Illinois
            {gillies@p.cs.uiuc.edu}

eugene@pioneer.arpa (Eugene N. Miya) (04/25/88)

Keep the content high, guys.........
I hope each of you reads all postings BEFORE posting follow-ups (jerking).
;-)
The asynchronous nature of the net is bad enough.

>What about Motorola 68000?

I should point out that I asked for COMPUTERS, not just processors.
Brian Reid (when he was reading arch) did an excellent summary (years ago)
about what makes a computer more than a processor (balance).  There was also
a recent posting about the "third Amdahl's law" (1 MIPS/1 MB/1 MB/sec).
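
(For concreteness, that rule of thumb says a balanced machine has about
1 MB of memory and 1 MB/sec of I/O per MIPS.  A trivial check in C, with
made-up numbers:

    #include <stdio.h>

    int main(void)
    {
        double mips = 10.0, mem_mb = 8.0, io_mb_s = 4.0;  /* hypothetical */
        printf("memory: %.2f MB per MIPS (want ~1)\n", mem_mb / mips);
        printf("I/O:    %.2f MB/s per MIPS (want ~1)\n", io_mb_s / mips);
        return 0;
    }

This machine would be memory- and I/O-starved by that standard.)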

I'm not counting JUST processors (though I will accept them for one more
day only); I'll set those aside.  I call this a CPU fetish.

Another gross generalization from

--eugene miya, NASA Ames Research Center, eugene@ames-aurora.ARPA
				soon to be aurora.arc.nasa.gov
at the Rock of Ages Home for Retired Hackers:
  "Mailers?! HA!"
  {uunet,hplabs,hao,ihnp4,decwrl,allegra,tektronix}!ames!aurora!eugene
  "Send mail, avoid follow-ups.  If enough, I'll summarize."

henry@utzoo.uucp (Henry Spencer) (04/26/88)

While considering the contributions of the PDP-8 and PDP-11, don't forget
something that was a prominent feature of the 11 but really started on the
late-model 8s:  a standard I/O bus running across the whole family, so that
new models could use the same peripherals as old ones.
-- 
"Noalias must go.  This is           |  Henry Spencer @ U of Toronto Zoology
non-negotiable."  --DMR              | {ihnp4,decvax,uunet!mnetor}!utzoo!henry

root@mfci.UUCP (SuperUser) (04/26/88)

In article <1988Apr22.095544.86@mntgfx.mentor.com> mbutts@mntgfx.mentor.com (Mike Butts) writes:
>From article <29220@amdahl.uts.amdahl.com>, by esf00@amdahl.uts.amdahl.com (Elliott S. Frank):
>> In article <7657@ames.arpa> eugene@pioneer.UUCP (Eugene N. Miya) writes:
>>>This form raises an interesting question.
>> reacted to over the last twenty years or so.  I've probably slighted
>> the VLIW and parallel crowd, but I haven't [yet] seen a lot of
>> impact on other architecture that's come out of their research.  Machines,
>> yes.  Architectural principles, no.
>> 

Just wait.  When the RISC micro designers finally achieve their goal
of one op/instr, the logical next step is multiple ops/instr.  If you
want to control those ops, you need instruction bits...we call that a
VLIW.
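
A sketch of that endpoint in C (unit mix and field widths invented for
illustration): once each op is simple and fixed-format, several can be
packed into one wide word, with dedicated bits steering every functional
unit each cycle.

    #include <stdint.h>

    struct op {                /* one RISC-style operation */
        uint8_t opcode, dst, src1, src2;
    };

    struct vliw_word {         /* one very long instruction word */
        struct op alu0, alu1;  /* two integer units   */
        struct op fpu;         /* floating-point unit */
        struct op mem;         /* load/store unit     */
    };                         /* all four ops issue in the same cycle */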

>Allow me to add:
>
>* DG Nova -- arguably an early (circa 1970) and quite popular RISC, which is a 
>    direct descendant of...
>
>* PDP-8 -- extremely RISC.  Just as the minimality of today's RISCs allows
>    chip-level implementation, so did the PDP-8's minimality allow rack-level
>    implementation in its day (25 years ago).

I thought nobody still defined their RISCs by the number of
instructions.  Given that Multiflow's TRACE can have up to 2**1024
instructions, I guess you wouldn't consider our machine to be a RISC.
But I can make a strong case for that, I think.

The minimality of today's RISCs doesn't "allow chip-level
implementation"; there are people in the world designing VAXes with
the same technology, and 68xxx's, and 80?86's.  RISC proponents claim
two important things:  faster time-to-market, and higher performance
when it gets there.  Not feasibility per se.

>* FPS-164 -- original (1981) VLIW mini-super (it's **not** an array processor)

OK, it's an *attached* processor.  Either way, it's not a VLIW and it's
not a mini-super.  For all its interesting features, such as
fine-grained parallelism at the hardware microcode level, neither the
FPS-164 nor the AP120B supported virtual memory.  And they couldn't
do their own I/O.  And they weren't designed to be the targets of
highly optimizing compilers.  Call them forerunners of the parallel
uniprocessors of today; looking back on them, one can pick out design
directions they could have taken that might have made it easier for
today's programming environment.  But the name VLIW was coined by the
people who invented these machines, and it implies too many things that
the FPS machines don't have.

>Mike Butts, Research Engineer         KC7IT           503-626-1302

Bob Colwell            mfci!colwell@uunet.uucp
Multiflow Computer
175 N. Main St.
Branford, CT 06405     203-488-6090

root@mfci.UUCP (SuperUser) (04/26/88)

In article <76700022@uiucdcsp> gillies@uiucdcsp.cs.uiuc.edu writes:
>>>Intel 432: The ultimate CISC == Horrible failure.
>>
>>(as I climb back on one of my soapboxes...)
>>You must mean a failure in the commercial marketplace.  So what?  Are
>>you attributing that failure to its CISC nature?  If so, I contend
>>you are wrong, and if not, then what's your point?
>>
>>Bob Colwell            mfci!colwell@uunet.uucp
>
>It is a fact of history that the chip was a commercial failure.  If,
>as you assert, its architecture was not a failure, then by all means
>name one architecture that was influenced (in a POSITIVE way) by the
>Intel 432.
>
>Don Gillies {ihnp4!uiucdcs!gillies} U of Illinois

Hmmm.  There's no easy answer to that one.  Unfortunately, the 432
was such a *miserable* commercial failure that nobody (well, almost
nobody) tried to look behind the large elapsed execution time numbers
to see why they were so large.  Perhaps because the RISC bandwagon
was just getting rolling then, the 432 became a CISC scapegoat, and
everybody assumed that it was slow because a) it had 7 levels of
indirection in its addressing, b) it had loads of complex
instructions, c) it was object-oriented, d) all of the above.  I
tried to show in my thesis that none of these was the real problem.

It isn't fair to require that the 432 have had an influence on other
machines for several reasons.  One is that nobody paid any attention
to it (and I grant you that you can cry "chicken-and-egg" here, but
before you do, please read the short version of my 432 work to appear
in June's ACM TOCS).  And the other is that maybe it addressed issues
that were just too far in the future, and still are.  The basic goal
of the machine was to trade off basic performance for other things
that were felt to be more important at the time, like programmer
productivity (and please hold the flames about how that can't be
achieved if the machine is dog-slow; that isn't the point).  Perhaps
the day will come when we return to that viewpoint.  But it's not
here now, so it's unreasonable to require any obvious influences.

Bob Colwell            mfci!colwell@uunet.uucp
Multiflow Computer
175 N. Main St.
Branford, CT 06405     203-488-6090

rick@svedberg.bcm.tmc.edu (Richard H. Miller) (04/29/88)

In article <1988Apr26.024011.6516@utzoo.uucp>, henry@utzoo.uucp (Henry Spencer) writes:
> While considering the contributions of the PDP-8 and PDP-11, don't forget
> something that was a prominent feature of the 11 but really started on the
> late-model 8s:  a standard I/O bus running across the whole family, so that
> new models could use the same peripherals as old ones.

This was also a feature of the Decsystem-10 processors.  All models used a
standard I/O bus, so peripherals could be moved from one model to another.
This may have been a feature of the old PDP-6 as well, but I am not sure.

As a side note, the I/O bus specs changed between the KA-10 and the KI-10. With
the KA-10, the processor had to terminate one end of the bus. With the KI-10,
the processor could occupy any position on the bus.

Richard H. Miller                 Email: rick@svedberg.bcm.tmc.edu
Head, System Support              Voice: (713)799-4511
Baylor College of Medicine        US Mail: One Baylor Plaza, 302H
                                           Houston, Texas 77030

markv@uoregon.uoregon.edu (Mark VandeWettering) (05/07/88)

In article <76700022@uiucdcsp> gillies@uiucdcsp.cs.uiuc.edu writes:
>>>Intel 432: The ultimate CISC == Horrible failure.

>>(as I climb back on one of my soapboxes...)
>>You must mean a failure in the commercial marketplace.  So what?  Are
>>you attributing that failure to its CISC nature?  If so, I contend
>>you are wrong, and if not, then what's your point?

	The 432 was a commercial flop, but it is one chip from Intel
	that I at least thought was innovative.  Maybe it was a little
	ahead of its time.

>>Bob Colwell            mfci!colwell@uunet.uucp

>It is a fact of history that the chip was a commercial failure.  If,
>as you assert, its architecture was not a failure, then by all means
>name one architecture that was influenced (in a POSITIVE way) by the
>Intel 432.

	One architecture?  I suppose it has to be a commercial success as
	well?

	Give me a break.  A few of the things I found interesting were

	1. Capability based
	2. Merging of OS primitives into machine
	3. Hardware garbage collection
	4. Attempt to bridge "semantic gap" between Ada and machines

	While the 432 wasn't successful in addressing these problems
	(too slow, mostly), I think that future chips may be based upon
	similar ideas.  Whether they will be "successful" we shall see.
	RISC ideas are pretty convincing, but I am not tossing CISC
	chips out the window yet.
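
	To make "capability based" above concrete (struct layout and names
	invented for illustration; the 432's real access descriptors were
	richer): a reference is not a raw address but a descriptor naming
	an object plus the rights its holder has to it, and the hardware
	checks every access against it.

	    #include <stdbool.h>
	    #include <stdint.h>

	    enum rights { R_READ = 1, R_WRITE = 2 };

	    struct capability {
	        uint32_t object_id; /* names an object, not an address  */
	        uint32_t length;    /* object size, for bounds checking */
	        uint8_t  rights;    /* rights granted by this reference */
	    };

	    bool check_access(const struct capability *c,
	                      uint32_t offset, int want)
	    {
	        return (c->rights & want) == want && offset < c->length;
	    }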


>Don Gillies {ihnp4!uiucdcs!gillies} U of Illinois
>            {gillies@p.cs.uiuc.edu}

mark vandewettering