[comp.arch] In defense of the VAX

andrew@frip.gwd.tek.com (Andrew Klossner) (02/19/89)

Paul Rodman at Multiflow (rodman@mfci.UUCP) writes:

	"Personally, I think the vax has about the worst possible
	architecture one could come up with ... Byte aligned
	instructions make the hardware more difficult and buy you
	nothing. So many instructions that serve no purpose, etc, etc.
	Blech, what a mess. And so many minds worked *hard* to create
	it! Ha!"

The disrespect expressed here is disturbing.  The VAX design was an
excellent job for the project's objectives.  The measure of this is the
outstanding customer acceptance that the VAX architecture has achieved
over the last decade.

To stand here, ten years later, and take pot shots at the VAX is rather
like criticizing Henry Ford for leaving radios out of the Model T.

  -=- Andrew Klossner   (uunet!tektronix!orca!frip!andrew)      [UUCP]
                        (andrew%frip.wv.tek.com@relay.cs.net)   [ARPA]

aglew@mcdurb.Urbana.Gould.COM (02/20/89)

Paul Rodman at Multiflow (rodman@mfci.UUCP) writes:

	"Personally, I think the vax has about the worst possible
	architecture one could come up with ... Byte aligned
	instructions make the hardware more difficult and buy you
	nothing. So many instructions that serve no purpose, etc, etc.
	Blech, what a mess. And so many minds worked *hard* to create
	it! Ha!"

I sure would like to be able to design such a lousy architecture
and make so much money selling it.

Apart from the fact that the VAX was designed under entirely different
constraints than we have today, maybe the fact that it was so
horribly successful says that there are factors to success other
than instruction set. Like, software strategy, support, good
scheduling of new hardware, etc. Isn't that a humbling thought
for all of us aspiring computer architects?

rodman@mfci.UUCP (Paul Rodman) (02/21/89)

In article <11037@tekecs.TEK.COM> andrew@frip.gwd.tek.com (Andrew Klossner) writes:
>Paul Rodman at Multiflow (rodman@mfci.UUCP) writes:
>
>	"Personally, I think the vax has about the worst possible
>	architecture one could come up with ... 
>
>The disrespect expressed here is disturbing.

Oh, foo. 

> The VAX design was an
>excellent job for the project's objectives.  

That's nice. Maybe the project's objectives totally ignored the future?

I'm not going to sit here and claim that at the time *I* knew
what to do, but I'm just amused that such a large group of
minds could completely ignore the fact that SRAM and DRAM improvements
were going to make instruction set bit effectiveness much less important
than the vax seems to assume it is. It *is* a second-order effect. I'm also
surprised that they didn't realize that implementing a pipelined version
of the vax architecture would be made more difficult by such
a large set of byte-aligned, variable-length instructions. 

I'll wager that a factory-cost / performance analysis of various
machines would show that the high-end vaxes are worse than just about
any other machines in that absolute performance range. Some of this
might be due to the architecture, you know!

>The measure of this is the
>outstanding customer acceptance that the VAX architecture has achieved
>over the last decade.

Wrongo. All it shows is that the architecture is just one *small* facet in
the success of a new computer from DEC. How much more money would
DEC have made had the 8600 cost 30% less in factory cost? Or had
it come out 2 years earlier? The architecture would have affected both.
How might the first micro-vax have performed had the architecture been
different?

>
>To stand here, ten years later, and take pot shots at the VAX is rather
>like criticizing Henry Ford for leaving radios out of the model T.
>

Look, do you really think there weren't any parts in the Model T that 
could've been done better?  I'm supposed to be so awed by
the model T's success that I ignore the inevitable mistakes and bad
compromises in the design? It couldn't have been improved? Of course
it could, and I'm sure you agree here.

My function EVER SINCE I've been an engineer has been to take "pot shots"
at the vax. Every machine I've worked on has competed directly against
a high-end vax. 

You must be bugged because you feel I am stating:  "the
vax arch. design crew were stupid people".  This is not true!

All I'm trying to say is that often in engineering we struggle to use
"scientific" ways of making our tradeoffs. Often, in hindsight,
all the analysis is wonderfully detailed, and misses some much more important
point(s) completely. And very, very intelligent people can totally miss 
the boat.

One of the good things about the free enterprise system is that we can
recognize the errors-of-our-ways and change them quicker than other
systems can. That is why we *have* the Suns and Mips and Multiflows 
etc.

I'm *not* personally trashing on anyone. But 
permit me the right to think that I can do it better. There is no
*personal* "disrespect" intended. 

An interesting thought experiment would be to take the same group of
people, get them in a room and say "If you were transported back in
time and could know what you know now, what kind of machine would
you build?".

If they don't build something very vax-like, then was it worth
all that thought/analysis? Or did they really need fewer people and
better intuition(s) about hardware/compiler directions for the
future?


    Paul Rodman
    rodman@mfci.uucp

rodman@mfci.UUCP (Paul Rodman) (02/21/89)

In article <28200279@mcdurb> aglew@mcdurb.Urbana.Gould.COM writes:
>
>Paul Rodman at Multiflow (rodman@mfci.UUCP) writes:
>
>	"Personally, I think the vax has about the worst possible
>	architecture one could come up with ... Byte aligned
>
>I sure would like to be able to design such a lousy architecture
>and make so much money selling it.
>

As you point out below, there is much more to the success of a computer
than architecture. As an engineer, however, I can't help noticing when
things are not as they should be. I don't *care* how successful the
product *is* ...I *still* know when it isn't *right*.

>Apart from the fact that the VAX was designed under entirely different
>constraints than we have today,

Exactly my point. Some of the constraints were based on
current technology, some were self-inflicted in order to do better for
a "family" of machines for the future. Some assumptions were just plain
wrong.

> maybe the fact that it was so
>horribly successful says that there are factors to success other
>than instruction set. 

I'd be the first to agree there.

>Like, software strategy, support, good
>scheduling of new hardware, etc. Isn't that a humbling thought
>for all of us aspiring computer architects?

Humbling no, but good food for thought.



    Paul Rodman
    rodman@mfci.uucp

reiter@babbage.harvard.edu (Ehud Reiter) (02/21/89)

I still remember how happy I was when I was first introduced to the VAX, a
decade or so ago, after too many years on PDP's, 370's, and Interdata machines.
So, I can't resist the chance to make a few comments.

The VAX represented the best architecture that could be done in the mid-70's.
The computing situation was a bit different then, and in particular

1) Lots of code was written in assembler.  This meant that machines had to
have an assembly language that was easy for humans to use, which meant
lots of addressing modes and data types, as much orthogonality as possible,
and high-level instructions where possible.  In short, if people were going
to write lots of programs in assembler, then it was important to give them
as powerful and high-level an assembler language as possible.

2) Memory was expensive.  Remember the days when 15 people time-shared on
a VAX with 1 MB of memory?  Given this, dense coding of instructions made
a lot of sense, particularly since dense coding probably helped performance
of the 11/780 more than it hurt (it didn't hurt the pipeline, since the 780
wasn't pipelined, and it probably made the I-cache more effective and paging
less common).


In the late 1980's, the situation is a lot different.  Assembly programming
is much rarer, thanks to ever-better languages and compilers, and memory is
so cheap that even PC's have several MB.  One could even argue that the state
of the art in computer architecture has improved, with more reliance on data
and less on intuitive feelings.  The newer architectures, like MIPS, are
probably more appropriate for the 90's than the VAX - but that's progress.
It would be somewhat disappointing if we hadn't progressed beyond the VAX
in the last 10-15 years.

And, note that people criticize the VAX architecture on performance, not
functionality.  The most important thing about an architecture is that it
should make it easy for people to write and run programs, and I don't think
anyone has complaints about the VAX on this score.  This contrasts with the
IBM 370-class machines, which suffer greatly from lack of address space,
which *is* very much an annoyance to the programmer and even the end-user.

					Ehud Reiter
					reiter@harvard	(ARPA,BITNET,UUCP)
					reiter@harvard.harvard.EDU  (new ARPA)

mike@arizona.edu (Mike Coffin) (02/21/89)

From article <653@m3.mfci.UUCP>, by rodman@mfci.UUCP (Paul Rodman):
> 
> If they don't build something very vax-like, then was it worth
> all that thought/analysis? Or did they really need less people and
> better intuition(s) about hardware/compiler directions for the
> future?

Predicting the future is not something anyone does very well.  I
suspect that in 20 years there will be a new generation of engineers
scoffing at those ancient idiots who managed to convince themselves
that RISC was a brilliant idea, and they will be able to see
perfectly, in hindsight, exactly the direction that should have been
taken.
-- 
Mike Coffin				mike@arizona.edu
Univ. of Ariz. Dept. of Comp. Sci.	{allegra,cmcl2}!arizona!mike
Tucson, AZ  85721			(602)621-2858

seanf@sco.COM (Sean Fagan) (02/22/89)

In article <1226@husc6.harvard.edu> reiter@harvard.UUCP (Ehud Reiter) writes:
>The most important thing about an architecture is that it
>should make it easy for people to write and run programs, and I don't think
>anyone has complaints about the VAX on this score.  This contrasts with the
>IBM 370-class machines, which suffer greatly from lack of address space,
>which *is* very much an annoyance to the programmer and even the end-user.

Your first statement there is the argument CISC advocates use.
RISC advocates say that performance matters more, since most people will end
up using a compiler more than assembly code (true, nowadays, but only
because compiler technology has gotten *real* good).

For the most part, the end-user doesn't care *what* the machine's
architecture looks like; they just want performance.  So, in that sense,
the RISC people are correct.

However, not providing a multiply instruction is *real* stupid (IMHO),
because "real world" applications tend to do multiplies.  Providing a POLY
instruction is also *real* stupid (again, IMHO), since most "real world"
applications don't do that (unless you're on a Cray, which, of course,
doesn't provide a POLY instruction 8-)).  Not providing an instruction can
cause the compiler writers to throw their hands up in the air (watch me,
sometime, when I take a look at the 370 instruction set), while providing
excess instructions can (and usually does) slow the machine down.
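As a concrete sketch of the tradeoff (plain Python, nothing here is actual VAX or compiler output): soft_mul below is the shift-and-add loop a compiler has to emit when the hardware lacks a multiply instruction, and poly is the Horner's-rule evaluation that the VAX POLY instruction performed in a single opcode.

```python
# Toy illustration of the multiply/POLY argument above.

def soft_mul(a, b):
    """Shift-and-add multiplication: the classic software fallback
    a compiler emits when there is no hardware multiply."""
    neg = (a < 0) != (b < 0)
    a, b = abs(a), abs(b)
    result = 0
    while b:
        if b & 1:          # low bit of multiplier set: add shifted multiplicand
            result += a
        a <<= 1            # next bit position
        b >>= 1
    return -result if neg else result

def poly(x, coeffs):
    """Horner's rule, highest-order coefficient first -- the
    evaluation scheme the VAX POLY instruction hard-wired."""
    acc = 0
    for c in coeffs:
        acc = acc * x + c
    return acc
```

The point of the debate: soft_mul costs one add per multiplier bit on every call, while POLY spends a whole opcode on something most programs never do.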

What's needed, of course, is a combination of the two (something like what
the CDC Cybers had, or possibly what an Elxsi has):  RISCC, Reduced
Instruction Set Complexity Computer (I know, I know, I'm not the first one
to say this).  This way, compiler people will get the instruction set they
need (case study:  the Elxsi), and applications people will get the speed
they want.

The world needs a 64-bit Cyber-like Personal Computer (with, of course, a
$1k price tag) 8-).

-- 
Sean Eric Fagan  | "What the caterpillar calls the end of the world,
seanf@sco.UUCP   |  the master calls a butterfly."  -- Richard Bach
(408) 458-1422   | Any opinions expressed are my own, not my employers'.

eacelari@phoenix.Princeton.EDU (Edward A Celarier) (02/22/89)

In article <28200279@mcdurb> aglew@mcdurb.Urbana.Gould.COM writes:
>
>Paul Rodman at Multiflow (rodman@mfci.UUCP) writes:
>
>	"Personally, I think the vax has about the worst possible
>	architecture one could come up with ... Byte aligned
>	instructions make the hardware more difficult and buy you
>	nothing. So many instructions that serve no purpose, etc, etc.
>	Blech, what a mess. And so many minds worked *hard* to create
>	it! Ha!"
>
>I sure would like to be able to design such a lousy architecture
>and make so much money selling it.

Though I admit my experience in such matters is somewhat limited, it
does suggest that those responsible for the purchase of computers, and
thus the financial success of the corresponding manufacturers, are not,
typically, interested in these sorts of details.

IBM's corporate success is implausible to me, given the difficulty of
using their machines, the inadequate nature of their software, their
primitive OSs, etc....

And I really am hard put to explain the success of UNIX.

Certainly, these details are much nearer the visible surface of
computing.  The considerations addressed in the foregoing postings are
important, to be sure, but not to the financial success of computer
manufacturers.


lm@snafu.Sun.COM (Larry McVoy) (02/22/89)

In article <11037@tekecs.TEK.COM> andrew@frip.gwd.tek.com (Andrew Klossner) writes:
>Paul Rodman at Multiflow (rodman@mfci.UUCP) writes:
>
>	[Bad mouths the Vax]
>
>The disrespect expressed here is disturbing.  

Ahem.  I thought we had a lesson on this topic from Eric ???, the guy who 
put Dennis Ritchie up on a pedestal.

The Vax was an OK machine for its time.  So what?  It sucks now - even DEC
has admitted it by switching to MIPS.  Listen to what Paul is saying: who here
can deny that the instruction decode of the Vax was stupid?  If I remember
correctly, the way they are speeding up the decode is by having N decoding
units, where the Nth unit decodes assuming an N-byte-long instruction.  I don't 
know for sure that this is correct, but a look at their instruction set
makes it pretty plausible.
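A toy model of why that decode is hard (the opcode/length table below is invented, not real VAX encodings): with byte-aligned, variable-length instructions, you cannot know where instruction N+1 starts until you have decoded enough of instruction N to learn its length, so finding instruction boundaries is inherently serial.

```python
# Toy model: decoding a variable-length byte stream.
# Hypothetical table mapping opcode byte -> total instruction length.
LENGTH = {0x01: 1, 0x02: 2, 0x03: 3, 0x04: 5}

def instruction_starts(code):
    """Walk the byte stream sequentially; each instruction's start
    address depends on the previous instruction's decoded length."""
    starts, pc = [], 0
    while pc < len(code):
        starts.append(pc)
        pc += LENGTH[code[pc]]   # must decode the length before advancing
    return starts
```

With a fixed 32-bit format the start addresses are just 0, 4, 8, ... and every decoder can begin in parallel; here each step waits on the last, which is the serialization the parallel-decoder trick tries to break.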

Larry McVoy, Lachman Associates.			...!sun!lm or lm@sun.com

gillies@p.cs.uiuc.edu (02/22/89)

Re:  Was the VAX designed to be as bad as possible????

1.  I seem to remember from my OS class that the vax virtual memory
system was quite an innovation.  I don't know about VM in today's
machines -- do they do it any *BETTER*?

2.  The VAX taught us to build RISC's.  It was very successful at that.

3.  The VAX taught us to hate heavyweight procedure calls.

4.  The VAX taught 32000 designers how to architect their machine.

5.  The VAX was made to sell in 1978.  Ok, Flame ON!  Have you GOT ANY
FRIGGING IDEA HOW EXPENSIVE MEMORY WAS IN 1978????  PEOPLE NEEDED TO
USE IT IN 1978, OK???  CAN YOU PARSE THAT????  Flame off.  Other
companies in 1978 (Xerox), full of smart people, were architecting
machines under the same assumptions as the VAX, and came up with
worse/more restricted results (DLions).

5.  The VAX was AMAZING when it came out.  It created the "supermini"
class of machine, as the first such computer from a major manufacturer
(sorry, Gould / Perkin-Elmer).  How quickly we forget all this...



Don Gillies, Dept. of Computer Science, University of Illinois
1304 W. Springfield, Urbana, Ill 61801      
ARPA: gillies@cs.uiuc.edu   UUCP: {uunet,harvard}!uiucdcs!gillies

cdshaw@alberta.UUCP (Chris Shaw) (02/22/89)

In article <653@m3.mfci.UUCP> rodman@mfci.UUCP (Paul Rodman) writes:
>>Paul Rodman at Multiflow (rodman@mfci.UUCP) writes:
>>
>>	"Personally, I think the vax has about the worst possible
>>	architecture one could come up with ... 
>
>Look, you don't think there weren't any parts in the model T that 
>could've been done better?  

This is trivially true. Times change. Ivan Sutherland once made a salient
point:
	"Perfection is the enemy of the dissertation"

In other words, the task is to come up with something good in a reasonable 
amount of time, not something perfect. Realize also that the context of 
VAX design was 1975, not 1989. Memories being sold then were 4116's 
(or smaller, I don't know, I was 12 at the time). Consider:
16K DRAMs versus 4Meg (or 8 or 16). You can't design an architecture based
on the next decade's memories unless you're prepared to wait for them.
It doesn't matter whether you can see the trend or not, what matters is whether
you can sell machines today given today's raw materials.

>An interesting thought experiment would be to take the same group of people, 
>get them in a room and say "If you were transported back in time and could
>know what you know now, what kind of machine would you build?".

Interesting, but not very intellectually satisfying. Of course they'd design
something different. That's not the point. The interesting question is
whether the people setting the business constraints circa 1975 would be
willing to trade 1978-1983's sales for 1984-1989's sales, given that 1989
knowledge will result in a machine that might not sell if implemented in
1975 technology.

I guess the point that Klossner was trying to make was as follows:
It's obvious that by today's standards, the VAX sucks. So by saying 
that the VAX sucks, Paul Rodman must clearly mean something else, and the
only conclusion one can draw is that Paul Rodman thinks the VAX designers
were stupid. 

Well, Paul Rodman doesn't think that the VAX designers were stupid, he
just says the VAX is stupid. However, it's a grave mistake to judge 
the "inner quality" of yesterday's engineering by today's standards, 
especially in a field that moves as fast as this one. All you can do
is learn from its mistakes given the updated context. 

The biggest challenge with this field compared to other engineering is that
requirements change, and a large part of the engineering process is looking
into the crystal ball to see what's next. Compared to the Model T example,
you don't suddenly find that the average Model T owner wants to haul 150 people 
per trip. It's easy to understand what the fundamental limitation is for cars.
For computers, the fundamental limitation moves very quickly. Today's
excellent engineering is tomorrow's foolish mistake. Because of the high 
speed of improvement in electronics, I think it's inevitable that a ten
year old machine will look hideously out of date. Ten year old computer 
fashions look as silly as ten year old clothing fashions, and for a similar 
reason: today's constraint of technology or taste is not the same as tomorrow's.
One doesn't fine tune a design too much because the time is better spent
building the next iteration.

>    Paul Rodman
>    rodman@mfci.uucp

-- 
Chris Shaw    cdshaw@alberta.UUCP (or via watmath or ubc-vision)
University of Alberta
CatchPhrase: Bogus as HELL !

eriks@cadnetix.COM (Eriks A. Ziemelis) (02/23/89)

In article <6563@phoenix.Princeton.EDU> eacelari@phoenix.Princeton.EDU (Edward A Celarier) writes:
>
>IBM's corporate success is implausible to me, given the difficulty of
>using their machines, the inadequate nature of their software, their
>primitive OSs, etc....
>
>And I really am hard put to explain why the success of UNIX.
>

A great deal of the success, past and current, of a particular computer
and OS comes from the fact that once you have exposed someone to it at the
right time, you can get converts for life. That is why manufacturers dump
free or cheap equipment onto universities: in the future, many of these
students will be the ones who will have the say-so in what equipment to
buy for their company. Why do you think NeXT is first being introduced to
college campuses? Sure, NeXT will get S/W out of it, but they are also
looking down the road at future sales.

After colleges, come the companies. Once a manufacturer starts to get
the big names on their list (Fortune 100) other potential buyers start to
look and think "If X is buying it..."

The same can be said of an OS. Imagine being freed from the evil IBM mainframe
environment and set loose upon a UNIX-based system: after that, who
wants to go back to anything that is not UNIX-like?


Eriks A. Ziemelis


Internet:  eriks@cadnetix.com
UUCP:  ...!{uunet,boulder}!cadnetix!eriks
U.S. Snail: Cadnetix Corp.
	    5775 Flatiron Pkwy
	    Boulder, CO 80301
Baby Bell: (303) 444-8075 X336

jlg@lanl.gov (Jim Giles) (02/23/89)

From article <2324@scolex.sco.COM>, by seanf@sco.COM (Sean Fagan):
> [...]                                     Not providing an instruction can
> cause the compiler writers to throw their hands up in the air (watch me,
> sometime, when I take a look at the 370 instruction set), while providing
> excess instructions can (and usually do) slow the machine down.
>
I disagree here.  A larger instruction set actually makes compiler writing
HARDER - not easier.  The back-end of a compiler on a RISC machine has
only one function - register allocation.  A 'pure' RISC machine executes
all instructions in one cycle, so no pipelining optimizations are possible.
In addition, instruction selection on a RISC machine is trivial - each
high level construct maps onto a unique machine instruction sequence - a
code skeleton.

Suppose you have a RISC-like machine in which the instructions are not all
one cycle, but different instructions use different functional units.  Now
the compiler back-end must do pipelining optimizations as well as register
allocation.  These two types of activity feed back into each other.  Changing
instruction order for pipelining may make the register allocation non-optimal.
Changing register allocation can make pipelining non-optimal.  But, at least
the instruction selection is still easy.

On a CISC, instruction selection is HARD!!  Not only that, it feeds back into
the other two problems in a complex way.  Changing instructions almost always
alters both the pipelining and the register allocation needs.  Changing these
others may make a different instruction selection desirable.  In short, by
introducing a larger instruction set, you have increased the code generation
complexity in a significant way.
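The selection problem can be made concrete with a toy sketch (the instruction names, addressing forms, and cycle counts below are all invented for illustration): a RISC-style generator has exactly one code skeleton per IR operation, while a CISC-style generator must choose among overlapping candidates, and the "right" choice depends on costs that interact with register and pipeline decisions.

```python
# Toy sketch of instruction selection on a hypothetical ISA.

# RISC: one fixed skeleton per (op, operand-kind) pair -- no choice to make.
RISC_SKELETONS = {
    ("add", "reg", "reg"): ["ADD r1, r2, r3"],
    ("add", "reg", "mem"): ["LOAD r2, [addr]", "ADD r1, r2, r3"],
}

# CISC: several encodings compute the same thing; the pair is
# (instruction sequence, cost in cycles).
CISC_CANDIDATES = {
    ("add", "reg", "mem"): [
        (["ADDL r1, [addr]"], 4),                 # memory-operand form
        (["MOVL r2, [addr]", "ADDL r1, r2"], 5),  # load-then-register form
    ],
}

def select_risc(op):
    return RISC_SKELETONS[op]          # trivial lookup: the code skeleton

def select_cisc(op):
    # Pick the cheapest candidate.  A real selector must also weigh
    # register pressure and pipeline effects -- the feedback loops
    # described above are exactly what this toy version ignores.
    return min(CISC_CANDIDATES[op], key=lambda cand: cand[1])[0]
```

The two-entry candidate list already shows the coupling: the memory-operand form saves a register but ties up the memory port, so the "cheapest" row can change once register allocation and scheduling are taken into account.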

To be sure, writing a 'simple' compiler presents none of these problems.
A simple compiler doesn't bother to try to generate optimized code or
to select the optimal instruction sequence or to use registers efficiently.
In this case (and ONLY in this case) the addition of more instructions
may be useful to the compiler writer (who may feel it's a waste of time
creating code skeletons).

J. Giles
Los Alamos 

rogerk@mips.COM (Roger B.A. Klorese) (02/23/89)

In article <2324@scolex.sco.COM> seanf@scolex.UUCP (Sean Fagan) writes:
>However, not providing a multiply instruction is *real* stupid (IMHO),
>because "real world" applications tend to do multiplies.

Not providing a way for multiplications to be done efficiently is stupid.
Not providing a multiply instruction is not necessarily stupid.  The MIPS
R-Series architecture provides one, but that doesn't necessarily mean that
the chips themselves need to (although they do).  The ISA is implemented in
the assembler, not necessarily in the silicon.
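A minimal sketch of that assembler-level ISA (register names are MIPS-like; the expansion table is illustrative, not the full MIPS pseudo-op list): the programmer-visible "mul" is a pseudo-instruction that the assembler expands into real hardware instructions.

```python
# Toy assembler pass: expand pseudo-instructions into hardware ones.
PSEUDO = {
    # mul rd, rs, rt  ->  mult rs, rt ; mflo rd
    "mul": lambda rd, rs, rt: [f"mult {rs}, {rt}", f"mflo {rd}"],
    # move rd, rs     ->  addu rd, rs, $zero
    "move": lambda rd, rs: [f"addu {rd}, {rs}, $zero"],
}

def assemble(line):
    """Return the hardware instruction(s) for one assembly line."""
    op, *args = line.replace(",", " ").split()
    if op in PSEUDO:
        return PSEUDO[op](*args)   # expand to real instructions
    return [line]                  # already a hardware instruction
```

So the assembly-language programmer (or compiler) sees a multiply either way; whether the silicon implements it as one opcode or the assembler expands it is an implementation choice.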
-- 
Roger B.A. Klorese                                  MIPS Computer Systems, Inc.
{ames,decwrl,pyramid}!mips!rogerk      928 E. Arques Ave.  Sunnyvale, CA  94086
rogerk@servitude.mips.COM (rogerk%mips.COM@ames.arc.nasa.gov)   +1 408 991-7802
"I majored in nursing, but I had to drop it.  I ran out of milk." - Judy Tenuta

gillies@p.cs.uiuc.edu (02/23/89)

In article <28200279@mcdurb> aglew@mcdurb.Urbana.Gould.COM writes:
>
>I sure would like to be able to design such a lousy architecture [VAX]
>and make so much money selling it.

This is one thing that really frustrated me about engineering computer
architecture.  It's why I avoided entering the field --

A. You can design an extremely good machine and ruin it with software

B. You can design a messy, aesthetically ugly machine and have it be 
   a big success, so long as the performance is not horrible...

In other words, the success is out of your hands....  It depends on
your software people.  The job is only half-done when the architecture
is implemented -- you can still screw it up with a bad compiler and/or
bad OS.  And then you need to worry about applications!

Therefore, software is the key.

Companies like DEC & IBM have the resources to make their machines
successful through software.  The IBM PC was successful only because
of the software it ran (in particular, Visicalc).  For a long time,
people were buying PC's because they were the only thing that would
run Visicalc.


Don Gillies, Dept. of Computer Science, University of Illinois
1304 W. Springfield, Urbana, Ill 61801      
ARPA: gillies@cs.uiuc.edu   UUCP: {uunet,harvard}!uiucdcs!gillies

morrison@cg-atla.UUCP (Steve Morrison) (02/23/89)

In article <2324@scolex.sco.COM>, seanf@sco.COM (Sean Fagan) writes:
> In article <1226@husc6.harvard.edu> reiter@harvard.UUCP (Ehud Reiter) writes:
> >The most important thing about an architecture is that it
> >should make it easy for people to write and run programs, and I don't think
> >anyone has complaints about the VAX on this score.  This contrasts with the
> >IBM 370-class machines, which suffer greatly from lack of address space,
> >which *is* very much an annoyance to the programmer and even the end-user.
> 

	As a softie that started on PDP-8's & PDP-11's, I
greatly resented the VAX for being an overcomplicated machine
that made writing software harder.

	I found the machine so offensive that I moved over
to micros.  Why not?  The top-selling mini in my area had
become something I did not enjoy programming; my experiences
with RSX-11M convinced me that VMS was going to be cumbersome
to use (as was proved later when I had to use the beast for
source control); and micros had advanced to the point that
systems requiring medium-to-large software efforts could be
implemented on them.

	I still miss the beautiful orthogonality of the base
PDP-11 instruction set and think that the software development
cost of not having such a programming model cannot be overestimated.
The task of writing a compiler becomes an unpleasant
nightmare when "special" registers have to be used to optimize
string moves, etc.

	As for UNIX, it started out as an innocuous development
environment that I enjoyed because it did not get in the way, cluttering
up my applications code with control blocks and their associated
garbage.  Alas, poor UNIX, I knew you well...  It has blossomed
into the System V boat anchor and other incomprehensible layered
variants that include "features" that are definitely not in keeping
with the original design philosophy, as I understand it.

	Both the hardware & software of the 80's represented a
big step backward to me.  The work coming out of Berkeley gives
me hope for the 90's hardware.  Unfortunately, software still
appears to be on a downward spiral.

henry@utzoo.uucp (Henry Spencer) (02/24/89)

In article <2066@pembina.UUCP> cdshaw@pembina.UUCP (Chris Shaw) writes:
>... it's a grave mistake to judge 
>the "inner quality" of yesterday's engineering by today's standards, 
>especially in a field that moves as fast as this one...

Some of us, actually, had a low opinion of the VAX not long after it
was announced.  My first reaction on seeing the details was "great" but
on reflection I changed it to "this is much too complicated".
-- 
The Earth is our mother;       |     Henry Spencer at U of Toronto Zoology
our nine months are up.        | uunet!attcan!utzoo!henry henry@zoo.toronto.edu

paul@taniwha.UUCP (Paul Campbell) (02/24/89)

In article <76700073@p.cs.uiuc.edu> gillies@p.cs.uiuc.edu writes:
>5.  The VAX was made to sell in 1978.  Ok, Flame ON!  Have you GOT ANY
>FRIGGING IDEA HOW EXPENSIVE MEMORY WAS IN 1978????  PEOPLE NEEDED TO
>USE IT IN 1978, OK???  CAN YOU PARSE THAT????  Flame off.  Other
>companies in 1978 (Xerox), full of smart people, were architecting
>machines under the same assumptions as the VAX, and came up with
>worse/more restricted results (DLions).

Hear! hear! About that time we bought 1Mb for our Burroughs 6700. It cost
$500,000 (I think). It was core; Burroughs were selling the new
semiconductor stuff for more!! It wasn't that much earlier that large chunks
of semiconductor memory were becoming cost effective for us mere mortals.
When the Vax came out it had 16k chips. Each of those BIG (2ft wide?) boards
held 256Kx32; the 8 slots in the controller held 2Mb! (Of course we all
had to replace the controllers with 64k controllers a year or two later :-).
These days I can buy a Mac SIMM (1Mbx8) which has the same capacity but is
about 1/2 inch x 5 inches. It's easy to forget how much things have changed
so quickly! All along people have been saying something like "I don't
see how I could really use N bytes of memory; N/4 seems enough for me ....".

>5.  The VAX was AMAZING when it came out.  It created the "supermini"
>class of machine, as the first such computer from a major manufacturer
>(sorry, Gould / Perkin-Elmer).  How quickly we forget all this...

6(?7). The VAX moved Unix out to a much larger audience (thanks BSD :-)

	Paul
-- 
Paul Campbell			..!{unisoft|mtxinu}!taniwha!paul (415)420-8179
Taniwha Systems Design, Oakland CA

    "Read my lips .... no GNU taxes" - as if they could tax free software

rodman@mfci.UUCP (Paul Rodman) (02/24/89)

In article <2066@pembina.UUCP> cdshaw@pembina.UUCP (Chris Shaw) writes:
>

>year old machine will look hideously out of date. Ten year old computer 
>fashions look as silly as ten year old clothing fashions, and for a similar 
>reason: today's constraint of technology or taste is not the same as tomorrow's.

You guys keep missing my point, perhaps I am not being clear.

I *agree* that it very hard to predict the future in computer engineering.
This is *my* point. 

But I believe that one of the goals of the Vax project *was* not
only to build a 32-bit virtual memory machine, but to create a *computer
family* for the future at the same time. DEC had taken a lot of heat and
suffered internally for creating so many different incompatible machines. 
The Vax was *not* just trying to "build a good machine for year 197x". 
I believe they tried to anticipate the future to the largest extent 
possible.

My point is that, given the above (which you may not believe; any Vax-ers out
there care to pipe up on this?), isn't it amazing that so many errors were 
made?

>One doesn't fine tune a design too much because the time is better spent
>building the next iteration.
>

And then you run smack into your bad choices. For
example, the 8600, which was the successor to the 780, took an *incredibly*
long time to finish, used 110+ state-of-the-art ECL gate arrays (plus
super low-tech *card edge* connectors at the same time?!) and innumerable
boards, and was terribly slow for all that.  

You might claim the 8600's problems were all due to crappy engineers, but
I don't think this would be very fair. The simple fact is, the vax
architecture was *not* optimized with an eye toward building a pipelined
version. Considering that this was THE NEXT THING THEY DID, don't you
think this is a bit non-optimal? This *must* have contributed to the difficulty
of building the 8600. Many, many other things contributed to the birth
pains of the 8600, I know, but I'll bet the architecture was one of them.
Any former 8600 workers out there want to fill us in?

The micro-vax 1 had a great deal of pain associated with it, as I recall, and
I wonder how much a simpler set of instruction alignments would have helped?
Any ex-microvax I designers want to contribute?

Many other machines with much less bit-efficient instruction sets 
existed at the same time as the vax, and text image size was never a problem
compared to raw speed. 

You yourself pointed out that the lifetimes of computers are so short that
time to market is worth an equivalent in performance. Which do you think I 
can design-in easier: memory or gates? 

Besides, it was well known at the time that
memory was getting a factor of 4 denser with each iteration, with no end
in sight for quite a while. 

The vax designers, individually, were intelligent people. But as a group,
they missed the boat in several areas.

At any rate, I hope we can have more interesting discussions of this 
nature. I'm tired of byte-sex and other "boring" topics. :-)


    Paul K. Rodman 
    rodman@mfci.uucp

P.S. 
    I was surprised I didn't get a rise out of anyone when I referred to the
Denelcor HEP as a "pile of junk" a while ago. Isn't anybody going to stand
up for the thing? After all, it's the only supercomputer slower than a 
vax 11/780 in linpack; it *needs* a champion. :-)

seanf@sco.COM (Sean Fagan) (02/24/89)

In article <9620@lanl.gov> jlg@lanl.gov (Jim Giles) writes:
>From article <2324@scolex.sco.COM>, by seanf@sco.COM (Sean Fagan):
>> [...]                                     Not providing an instruction can
>> cause the compiler writers to throw their hands up in the air (watch me,
>> sometime, when I take a look at the 370 instruction set), while providing
>> excess instructions can (and usually do) slow the machine down.
>>
>I disagree here.  A larger instruction set actually makes compiler writing
>HARDER - not easier.

I was *not* advocating a *large* instruction set, only a useful one.  As I
continuously state, the instruction set of the CDC Cyber 170 machines is
*very* close to what I want (not completely, but darn close), and it is, by
most definitions, a RISC machine (although, again, I prefer RISCC to RISC).

Now, to advocate a large instruction set:  for the compiler, do what (I
think) gcc does.  If you're not optimizing, just spit out the instructions
in the closest form to your internal representation (gcc has a bunch of
built-in "insn"'s that it uses).  If you are optimizing, try to come up with
the more complex instructions that will do what you want.  While I will
admit that gcc is not the world's simplest compiler, it's not terribly
complex, and fairly easy to port to both RISC and CISC.
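The two-mode strategy Sean describes can be sketched as a toy lowering pass.  This is a
hypothetical illustration, not gcc's actual insn machinery: the IR tuples and the fused
"addmem" memory-to-memory instruction are invented names.  Unoptimized mode emits one
instruction per IR op; optimizing mode pattern-matches a load/add/store on the same
location into one complex instruction.

```python
# Toy sketch of two-mode code generation (hypothetical IR; not gcc's insns).

def emit(ir, optimize=False):
    """Lower a list of IR tuples to 'instructions' (strings here)."""
    out = []
    i = 0
    while i < len(ir):
        # Optimizing mode: fuse load/add/store on the same memory
        # location (through the same scratch register) into one
        # CISC-style memory-to-memory instruction.
        if (optimize and i + 2 < len(ir)
                and ir[i][0] == "load" and ir[i+1][0] == "add"
                and ir[i+2][0] == "store"
                and ir[i][2] == ir[i+2][1]                  # same location
                and ir[i][1] == ir[i+1][1] == ir[i+2][2]):  # same scratch reg
            out.append("addmem %s, %s" % (ir[i+2][1], ir[i+1][3]))
            i += 3
        else:
            # Non-optimizing mode: one instruction per IR op, straight out.
            out.append(" ".join(ir[i]))
            i += 1
    return out

ir = [("load", "r1", "x"), ("add", "r1", "r1", "r2"), ("store", "x", "r1")]
print(emit(ir))                 # three simple instructions
print(emit(ir, optimize=True))  # one fused instruction
```

On a RISC target you simply never define the fused pattern and the simple sequence
goes out as-is; that is what makes the same front end portable to both styles.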

>[comments about pipelines deleted]

Uhm, even on most "pure" RISC machines, the compiler has to worry about the
pipeline (things like delayed branches et al).  The concern is smaller, to
be sure, but it is still there.  Another approach is to do what Seymour does
on Cybers and Crays:  use scoreboarding.  That way, even if your register
selection is wrong, you won't screw things up, you will merely run a bit
slower (on *that* model of the CPU.  If a later model speeds things up, then
you may not even have that slowdown).
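The scoreboarding point can be put in cycle terms with a minimal model (latencies
invented for illustration, not any real Cyber or Cray's): the hardware marks each
destination register busy until its result lands, and an instruction that touches a
busy register stalls rather than misbehaving, so poorly scheduled code is merely slower.

```python
# Minimal scoreboard cycle model: a register with a result in flight is
# "busy" until its producing instruction's latency expires; an
# instruction reading or writing a busy register stalls at issue.

def run(program):
    """program: list of (dest, sources, latency) tuples, issued in order,
    one per cycle, stalling while any operand is still in flight."""
    ready_at = {}                      # register -> cycle its value lands
    cycle = 0
    for dest, srcs, lat in program:
        # Scoreboard check: wait for every source (RAW) and the dest (WAW).
        wait = max(ready_at.get(r, 0) for r in list(srcs) + [dest])
        cycle = max(cycle, wait) + 1   # stall if needed, then issue
        ready_at[dest] = cycle + lat   # result lands lat cycles after issue
    # Done when the last in-flight result lands.
    return max([cycle] + list(ready_at.values()))

# Back-to-back dependent ops stall; independent ones overlap.
dependent   = [("r1", ["r2"], 4), ("r3", ["r1"], 4)]   # RAW hazard
independent = [("r1", ["r2"], 4), ("r3", ["r4"], 4)]   # no hazard
print(run(dependent), run(independent))
```

Note the key property claimed in the text: both programs are *correct*; the badly
scheduled one just takes more cycles, and a later model with shorter latencies shrinks
the gap with no recompile.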

Note:  this is getting very close to not being comp.arch, but
comp.compilers.  It still has some architectural subjects, so I'm not
cross-referencing it (yet).

-- 
Sean Eric Fagan  | "What the caterpillar calls the end of the world,
seanf@sco.UUCP   |  the master calls a butterfly."  -- Richard Bach
(408) 458-1422   | Any opinions expressed are my own, not my employers'.

colwell@mfci.UUCP (Robert Colwell) (02/24/89)

I have an old clipping from EETimes, ca. 1982 or so, covering an
interview with Ken Olsen, Imperial Overlord of DEC (or equivalent).
He made what I considered to be one of the most telling points I'd
ever seen in a pseudo-technical forum.  Talking about the VAX, 
which at that point was pretty young, he said (paraphrasing, since
I don't have this in front of me), "Talk all you want to about 
architectures, but in the end the only thing that REALLY matters
is how many disks you can put on the thing.  That's all the customers
care about."

If you take too many courses in school, you may start to think that
all the interesting technical intricacies actually mean something.
I used to keep that quote over my desk at school as a reminder of
how the world really works.

Bob Colwell               ..!uunet!mfci!colwell
Multiflow Computer     or colwell@multiflow.com
175 N. Main St.
Branford, CT 06405     203-488-6090

rodman@mfci.UUCP (Paul Rodman) (02/24/89)

In article <76700073@p.cs.uiuc.edu> gillies@p.cs.uiuc.edu writes:
>
>
>1.  I seem to remember from my OS class that the vax virtual memory
>system was quite an innovation.  I don't know about VM in today's
>machines -- do they do it any *BETTER*?
>

Well, you remember wrong. Prime Computer was selling p400 systems that
put "Multics-in-a-matchbox" LONG before DEC got the vax out. The vax
was a *response*, NOT an innovation, in vm.

>2.  The VAX taught us to build RISC's.  It was very successful at that.
>3.  The VAX taught us to hate heavyweight procedure calls.
>4.  The VAX taught 32000 designers how to architect their machine.

Oh, good. Let's admit that we can't do good engineering except by
correcting obvious screw-ups.

>5.  The VAX was made to sell in 1978.  Ok, Flame ON!  Have you GOT ANY
>FRIGGING IDEA HOW EXPENSIVE MEMORY WAS IN 1978????  PEOPLE NEEDED TO
>USE IT IN 1978, OK???  CAN YOU PARSE THAT????  Flame off.  Other

Yes, as a matter of fact I do. I built a machine with 2102's in it,
remember them? 1kx1 for $6. We had moved by 1978 and 16kx1 drams were
being used. It is my opinion that a small increase in the text size
of vax programs would have been a good tradeoff vs byte aligned instrs.
At the time, I worked at such a company, PRIME, and whilst its instruction set
was icky for assembly language, it was not nearly as difficult to pipeline. The
loss in bit efficiency wasn't the end of the world. 

When building a computer intended to have such a long life, you can't
be nano-hacking the architecture with things you'll regret later.

>companies in 1978 (Xerox), full of smart people, were architecting
>machines under the same assumptions as the VAX, and came up with
>worse/more restricted results (DLions).

Oh, ok. You've got another example of a large group of cooks spoiling
the broth... thank you for this support.
>
>5.  The VAX was AMAZING when it came out.  It created the "supermini"
>class of machine, as the first such computer from a major manufacturer
>(sorry, Gould / Perkin-Elmer).  How quickly we forget all this...
>
Hmm, well, *blush* I remember when DEC presented the vax and also being
impressed. Of course, I was a little damp behind the ears then and
hadn't designed any complex machines......

Flame on folks..................



    Paul K. Rodman 
    rodman@mfci.uucp

rodman@mfci.UUCP (Paul Rodman) (02/24/89)

In article <324@taniwha.UUCP> paul@taniwha.UUCP (Paul Campbell) writes:
>
>Hear! hear! about that time we bought 1Mb for our Burroughs 6700. It cost
>$500,000 (I think we . It was core, Burroughs were selling the new
>semiconductor stuff for more!! It wasn't that much earlier that large chunks

Sheesh! As I recall you could have bought an entire minicomputer system
with 2Mb of memory from Prime around then.

Burroughs was obviously price gouging on memories. Very common in those
days, very hard to get away with today.
>


    Paul K. Rodman 
    rodman@mfci.uucp

craig@Alliant.COM (Craig Maiman) (02/24/89)

I was on the VAX 8600 design team and I can tell you several reasons
why the machine was late.  I joined in the second half of that project,
so I wasn't there when the first delays occurred, but was told about the
initial delays.  At first the machine was register based but it was decided
to go with latches (A wise move) => delay #1.  Some time later after design
was well on the way they realized that instruction decode and address
translation was not going to fit in one cycle, so add a pipe stage!
=> delay #2.  Bad management with subsequent changes of the guard =>
delay #3.  I joined during delay #3 and worked on the IBOX team.  I discovered
that the two designers of the IBOX didn't talk much so their logic didn't
talk much either (timing wise).  My job -> fix it => delay #4.  This
was bad "timing" because the chips were ready to go to fab and the timing
analysis was just starting along with CPU gate-level simulation.  This
led to many revs of chips => delay #5.  As you might imagine, debug was
around the clock and took longer than it should have.

I would definitely agree that the VAX architecture was ill-suited for
a pipelined approach.  But it was well suited for its time, and
hindsight is always 20/20.

						Craig Maiman

jps@wucs1.wustl.edu (James Sterbenz) (02/25/89)

In article <76700073@p.cs.uiuc.edu> gillies@p.cs.uiuc.edu writes:


>1.  I seem to remember from my OS class that the vax virtual memory
>system was quite an innovation.  I don't know about VM in today's
>machines -- do they do it any *BETTER*?

Yes, and so did older machines.  I would consider the virtual memory
systems of ATLAS, the B5000, the 360/67 and Multics (GE 645) to have
been the significant innovators of the idea, not the VAX.

-- 
James Sterbenz  Computer and Communications Research Center
                Washington University in St. Louis 314-726-4203
INTERNET:       jps@wucs1.wustl.edu
UUCP:           wucs1!jps@uunet.uu.net

suitti@haddock.ima.isc.com (Stephen Uitti) (02/25/89)

In article <660@m3.mfci.UUCP> rodman@mfci.UUCP (Paul Rodman) writes:
>But I believe that one of the goals of the Vax project *was* not
>only to build a 32 bit virtual memory machine, but to create a *computer
>family* for the future at the same time.

This may have been a contributor to the kitchen sink approach to
the VAX design.  When I played with the PDP-11, I'd hoped that
the VAX 11 series was really just a Virtual Address Extension
PDP-11.  Just get rid of the 'mark' instruction, provide just a
'jsr pc,foo' style subroutine calling sequence, and replace the
post autodecrement deferred addressing mode with something useful.
Make the registers 32 bits and have 16 of them.  Think of
instructions in terms of 32 bit words with 32 bit operands.  It'd
be great.  Then write TOPS 20 for it - the best OS available
anywhere at the time (and still one of the best - POSIX
notwithstanding).  Make it demand paged with 1 KB pages.  My
hopes were dashed.  When I started playing with it, I had hopes
that I could use the machine the way I wanted by ignoring the
other stuff.  Sigh.  Life doesn't work that way.

>DEC had taken a lot of heat and suffered internally for creating
>so many different incompatible machines. 

At the time, DEC had the '8, the '10, and the '11 (for popular
machines).  These all worked pretty well.  I never understood
what they meant by "all those different incompatible machines".
They served such different purposes that it was OK.  Just let
them talk to each other - but that's just software.  But the '10
didn't have enough address space to be a high end machine
anymore.  DEC needed a new machine.  I'd hoped that they'd come
up with a '10 style machine with 64 bit words and 32 bit
halfwords, so that the PC would be 32 bits.  That would hold them
through the rest of the century - by which time it would have
incredible software.  Go for the one-word per instruction that
the original '10 had so you could implement it in discrete
transistors like the KA-10.  With a little thought the machine
should scream.

Of course, the "just let them talk to each other" did sort of get
fixed - with DECnet.  The '10 had TCP/IP, and now it is at least
sort of available for VMS.  The only machine without this is the
'8, which seems to be pretty much dead.  Why use an '8 if an '11
can be had on a chip?  There are reasons to use an '11 even
though a VAX can be had on a chip.

>The Vax was *not* just trying to "build a good machine for year 197x". 
>I believe they tried to anticipate the future to the largest extent 
>possible.

The VAX really is pretty nice to use.  The main reason that I
wouldn't have done it is "How do you make one of these things
work?".  An architecture with that much stuff in it will be
impossible to verify and hard to make work well enough to be
useful.  DEC has pretty much solved these very hard problems.
The 8700 series shows one way to do it.  Make the core machine
simple.  Do the hard stuff in ucode.  No one uses it anyway.

>My point is that, given the above (which you may not believe; any Vax-ers out
>there care to pipe up on this?), isn't it amazing that so many errors were 
>made?

One error was the calls instruction.  It handles register
save/restore.  It handles argument popping.  It did the frame.
There's a return instruction that knows which instruction type
did the call and does "the right thing".  It slices, it dices.
Was it a year ago?  DMR posted notes he'd sent around AT&T on his
first pre-release impressions of the VAX.  At the time, he liked
it (the calls idea).  It has taken some time for the industry
gurus to decide that this wasn't such a hot idea.  The industry
did this to DEC.
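For scale, here is a rough count of the stack traffic behind `calls` versus a bare
jsr-style call where the callee saves registers explicitly.  This is a deliberate
simplification sketched from memory of the VAX frame (argument count, saved registers
per the entry mask, then PC, FP, AP, a mask/PSW longword, and a condition-handler
slot); actual cycle costs varied widely across VAX models.

```python
# Rough count of 32-bit stack writes for VAX "calls" vs. a bare call.
# Frame layout is a simplification for illustration; real calls also
# aligns the stack and updates AP/FP, which this model ignores.

def calls_writes(n_saved):
    """Longwords calls pushes: numarg, the saved registers named in the
    entry mask, plus PC, FP, AP, mask/PSW, and a condition-handler slot."""
    return 1 + n_saved + 5

def jsr_writes(n_saved):
    """Bare call: return PC, plus only what the callee explicitly saves."""
    return 1 + n_saved

# Even a leaf routine that saves nothing pays the full frame overhead.
for n in (0, 5, 11):
    print(n, calls_writes(n), jsr_writes(n))
```

The fixed overhead per call is what made `calls` look elegant on paper and expensive
in practice: a tiny leaf procedure pays it all for no benefit.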

>>One doesn't fine tune a design too much because the time is better spent
>>building the next iteration.

DEC spent a lot of time & money on the VAX design.

>And then you run smack into your bad choices. For
>example, the 8600, which was the successor to the 780, took an *incredibly*
>long time to finish, used 110+ state-of-the-art ECL gate arrays (plus
>super low tech *card edge* connectors at the same time?!) and innumerable
>boards, and was terribly slow for all that.  

Didn't the 8700 beat the 8600 to market?  Further, the 8700 was
not supposed to be faster - as it was supposed to be a "medium
speed" machine.  It is possible that the 8650 was what the 8600
was supposed to originally do.  Still, the 8600 was the 780
hardware compatible machine.  It was more the 780 with new
technology.  The 8700 uses a new bus, etc.  (They said that the
new bus would be non-proprietary - then, damn their eyes, changed
their mind).  The CPU is of a completely different design.  It is
easier to make, so it shouldn't have been such a surprise that it
was faster and made it out the door so quickly.  The 8600 people
had to do some *really hard work*.  Me, I'm lazy.  I'd never have
done it.  Laziness turns out to be one of those forces of nature
right next to gravity.  Don't fight it.

>The micro-vax 1 had a great deal of pain associated with it, as I recall, and
>I wonder how much a simpler set of instruction alignments would have helped?
>Any ex-microvax I designers want to contribute?

Rumor has it that the VAX 730 was the most bug-free version of
the VAX.  No pipelining.  No problems with instructions getting
in each other's way.  Its biggest problem was that software was
written on 780s, and the '730 didn't have enough speed to support
the overhead that was OK on the '780.  I write DOS applications
on a 4.77 MHz 8088 for this reason.  You need at least '750
performance to be a good machine.  Maybe today's faster silicon
could make the '730 design into the critical mass speed VAX for
cheap.  On the other hand, the uVAX II has this market.  Also,
people don't seem to want a machine that works.  They seem to
want a machine that is fast.  Sigh.

>Many other machines with much less bit-efficient instruction sets 
>existed at the same time as the vax, and text image size was never a problem
>compared to raw speed. 

Yes.  I agree.  I was always amazed how bit efficient the '10
was.  The '10 didn't look like it ought to be.  You could only do
one main memory reference per instruction.  Shouldn't that mean
you often have to do two instructions?  Apparently that isn't a
problem.  The '11 is really bit efficient.  VAX code is twice the
size of '11 code.  The PC is twice the size on the VAX.  Data
space is nearly always the problem.  Of course, a large program
load image takes more disk space and takes longer to load into
memory - even with demand paging (so you don't have to load the
whole thing in).  Shared text (instruction space) helps a lot.

>You yourself pointed out that the lifetimes of computers are so short that
>time to market is worth an equivalent in performance. Which do you think I 
>can design-in easier: memory or gates? 

The design still has to be long-term.  As portable as UNIX is
(there are nearly enough people who know how to port it), the
industry is still at the point where the software installed base
is important.  People still buy (and will continue to buy) binary
versions of software.  They don't want to buy a machine and have
to repurchase software.  Of course, the software industry could
provide inexpensive "upgrades" for customers for their supported
set of hardware.  A great idea for a startup company would be aid
in porting and supporting software on the plethora of available
hardware - the one company has copies of the millions of
different available machines so that not everyone has to have all
that capital equipment.  People buying their first machine(s) can
buy whatever they want, but many people will still buy it from
IBM or DEC if they think that the machine will be supported in 10
years.  DEC still supports PDP-11s big time.  Even all the OSs on
that machine.  Even all the silly revisions and versions of that
machine.  I'd be real surprised if you asked DEC for support on
writing WCS (writable control store) on your PDP 11/60 and DEC
said "no".

>Besides, it was well known at the time that
>memory was getting a factor of 4 denser with each iteration, with no end
>in sight for quite a while. 

The end was always in sight.  Someone would say, "we can't do the
next factor of 4 because of 'x'."  Real soon, of course, someone
(else) would say - "I've got this solution to 'x' - and it
doesn't even break physics too badly."  The same has been true in
the speed arena.

So which side am I on, anyway?  - yes.

>P.S. 
>    I was surprised I didn't get a rise out of anyone when I referred to the
>Denelcor HEP as a "pile of junk" a while ago. Isn't anybody going to stand
>up for the thing? After all, it's the only supercomputer slower than a 
>vax 11/780 in linpack; it *needs* a champion. :-)

My favorite supercomputer in (I think it was) 1983 was a VAX 780
with a third party vector processor.  With a little software, one
could get nearly 40 MFLOPs out of it.  The Cyber 205 only yielded
170 MFLOPS (same problem & code).  For about $400K, it was *very*
cost effective for a number of applications.  The Cyber was much
more than 5 times more expensive.  Nowadays, the '780 part is
much cheaper, bringing the cost to under $100K.  The vector part
is faster and the software is lots better.

	Stephen.  suitti@haddock.ima.isc.com, ...harvard!haddock!suitti

henry@utzoo.uucp (Henry Spencer) (02/25/89)

In article <660@m3.mfci.UUCP> rodman@mfci.UUCP (Paul Rodman) writes:
>>One doesn't fine tune a design too much because the time is better spent
>>building the next iteration.
>
>And then you run smack into your bad choices. For example, the 8600...

If you want another example, consider the VAX 730.  Compare it to the
PDP-11/44, which uses the same size of box.  The 730 uses semicustom
ICs everywhere; the 44 is standard logic.  The 730 fills almost all of
the box, to the point where one or two funny multi-purpose interface
boards got invented so that one could build a complete 730 system
without an expander box -- there literally are only one or two slots
left in the thing! -- whereas the 44 leaves half of the box empty for
peripheral controllers.  (The old utzoo, operational until about 9 months
ago, was a very large 44 configuration with no expander box.)  The 730
cost rather more than a 44.  Finally, the 730 is notorious for running
like a turtle, whereas the 44 is significantly faster than a *750* on
raw integer CPU speed.

This seems like one hell of a price to pay for more address space...
-- 
The Earth is our mother;       |     Henry Spencer at U of Toronto Zoology
our nine months are up.        | uunet!attcan!utzoo!henry henry@zoo.toronto.edu

mark@corona.megatek.uucp (Rocket J. Squirrel) (02/25/89)

I think this discussion of how stupid the design of the VAX is (or is not)
is interesting in its own right, but I think it is missing a little bit on
the reality side.

The following is the history of the VAX as *I* understand it, people who
were actually working at DEC back east in this time frame can feel free
to correct it.

DEC at that time (probably still) was composed of several engineering groups
that were responsible for different product lines. These groups HARDLY EVER
talked to each other. 

When the PDP-11 was obviously running out of address space, there began a
project in the PDP-11 group to fix that problem. This is why the VAX-11 
was called the "Virtual Address Extension 11". This group probably had
nobody that knew how to build a large computer. DEC had another group that
did "large" computers... the Large Computer Group (they built the DEC-10).
It is virtually certain that nobody thought about pipelined instruction 
sets, because DEC just didn't do that sort of stuff. 

One very BASIC goal of the project was to make sure that most of the 
customer's PDP-11 code would run on whatever they built, since they hoped 
to replace those PDP-11s with whatever they built. When the VAX-11/780
first shipped, most of the software was taken directly from RSX-11 and
ran under emulation.

When the 780 became a big success, it eventually happened that the small
computer people began trying to build large computers. There were several
failed attempts - partly because, as has been pointed out here at great
length, the architecture sucks for building fast machines. Eventually,
DEC gave up on having two "large" computer lines, they flushed the DEC-10 *
and put the "Large Computer Group" in charge of building a fast VAX. 
Meantime, the small computer group built the 750 and 730. The 750 is
a decent machine - but small.

The point is that the VAX is not the way it is because people were trying
to build a machine for a particular technology. It is the way it is because 
the people who were designing it wanted to be able to run PDP-11 code on it 
pretty well in emulation mode, and the compiler people wanted something that 
the PDP-11 fortran (and COBOL!) compilers had a prayer of generating code 
for (with the inevitable "just a few improvements").

-mark

* The last project of the DEC-10 group was a thing called "jupiter" which was
to be a pipelined DEC-10. This would have been quite an effort in its own
right. The DEC-10 had fixed-length instructions (which people in this group
seem to find desirable) and they were very orthogonal. Unfortunately, the
architecture encouraged the use of skips and instructions that modified a
register and immediately branched based on the result.


 
--
ucsd.edu!megatek!mark					mark thompson
	"Tiger mutters the fighter pilot's prayer: `Oh shit.'"
--

roy@phri.UUCP (Roy Smith) (02/25/89)

	One of my first impressions of the Vax was "how come an 11/55 with
fpa is faster than an 11/780 with fpa?"  I never did any benchmarks to see
if that was true, but that's what the instruction timings in the handbooks
seemed to indicate.  One of my second impressions was "maybe if I read this
all again, I'll be able to figure out just what register indirect indexed
offset mode really is."
-- 
Roy Smith, System Administrator
Public Health Research Institute
{allegra,philabs,cmcl2,rutgers}!phri!roy -or- phri!roy@uunet.uu.net
"The connector is the network"

henry@utzoo.uucp (Henry Spencer) (02/26/89)

In article <501@megatek.UUCP> mark@corona.UUCP (Rocket J. Squirrel) writes:
>One very BASIC goal of the project was to make sure that most of the 
>customer's PDP-11 code would run on whatever they built, since they hoped 
>to replace those PDP-11s with whatever they built.

This is a curious statement, given that the pdp11 emulation on the 780
omits floating point -- something that was very important to a lot of
customer software, especially for those high-end customers who would
be interested in an expensive new machine.  It omits some other things
as well, but those were less significant to the bulk of DEC customers.

>When the VAX-11/780
>first shipped, most of the software was taken directly from RSX-11 and
>ran under emulation.

Aha, now we get down to the nitty-gritty.  The explanation that best fits
the hardware is that pdp11 emulation mode was for running *DEC system
software* from the 11, with customer code very much a secondary issue.
-- 
The Earth is our mother;       |     Henry Spencer at U of Toronto Zoology
our nine months are up.        | uunet!attcan!utzoo!henry henry@zoo.toronto.edu

w-colinp@microsoft.UUCP (Colin Plumb) (02/27/89)

mark@corona.UUCP (Rocket J. Squirrel) wrote:
> * The last project of the DEC-10 group was a thing called "jupiter" which was
> to be a pipelined DEC-10. This would have been quite an effort in its own
> right. The DEC-10 had fixed-length instructions (which people in this group
> seem to find desirable) and they were very orthogonal. Unfortunately, the
> architecture encouraged the use of skips and instructions that modified a
> register and immediately branched based on the result.

H'm... while I have my concerns over compare-and-branch (it still seems to
serve MIPS very well), I think skips are a natural for deep pipelines.
It's frequently the case that you only need one or two instructions to
execute one side of an if statement, and a pipeline bubble is more expensive
than wasting those one or two cycles.  PDP-10 and Acorn ARM every-instruction-
conditional strategies are even better, as they don't waste cycles decoding
skip instructions, but they require valuable instruction bits and disagree
with the conditions-in-general-registers school.

But skip instructions are one thing I wish the 29000 had.
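The trade-off above can be put in cycle terms with an invented-numbers sketch (the
classic PDP-10 skip covers a single instruction; it is generalized here to a short
body, and the 3-cycle bubble is illustrative, not any particular pipeline's penalty):

```python
# Cycle-count sketch of skip vs. short forward branch.  The bubble
# figure is invented; real penalties depend on pipeline depth and
# branch prediction.

def branch_cost(body_len, taken, bubble=3):
    """Conditional branch around a body of body_len instructions.
    A taken branch redirects fetch and pays the pipeline bubble."""
    return 1 + (bubble if taken else body_len)

def skip_cost(body_len):
    """Skip: the skipped instructions flow down the pipe and are
    squashed, so their issue slots are spent either way, but fetch is
    never redirected and there is no bubble."""
    return 1 + body_len

for body in (1, 2, 4):
    print(body, branch_cost(body, taken=True), skip_cost(body))
```

With these numbers the skip wins whenever the body is one or two instructions, which
is exactly the common if-statement case Colin describes; for longer bodies the taken
branch's fixed bubble becomes the cheaper option.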
-- 
	-Colin (uunet!microsoft!w-colinp)

"Don't listen to me.  I never do."

tdonahue@bbn.com (Tim Donahue) (02/27/89)

I've been enjoying this argument!  IMHO, the VAX made a pretty good high-end 11,
since the 11/60 was a loser and the 11/44 wasn't built yet.  I'd guess
the idea to turn the VAX into a family of computers, including
high-performance ones, was a marketeer's decision, not an architect's.

In article <1989Feb26.023058.13906@utzoo.uucp>, henry@utzoo (Henry Spencer) writes:
>In article <501@megatek.UUCP> mark@corona.UUCP (Rocket J. Squirrel) writes:
>...
>Aha, now we get down to the nitty-gritty.  The explanation that best fits
>the hardware is that pdp11 emulation mode was for running *DEC system
>software* from the 11, with customer code very much a secondary issue.
>-- 
>The Earth is our mother;       |     Henry Spencer at U of Toronto Zoology

Henry, didn't you ever hear the saying "To the systems programmer, users
and applications serve only to provide a test load?" #)

Cheers, 
Tim

rodman@mfci.UUCP (Paul Rodman) (02/27/89)

In article <1989Feb24.203711.16796@utzoo.uucp> henry@utzoo.uucp (Henry Spencer) writes:
>
>If you want another example, consider the VAX 730.  Compare it to the
>PDP-11/44, which uses the same size of box.  The 730 uses semicustom
>
>This seems like one hell of a price to pay for more address space...
>-- 

Yup, <sigh>.

When I finally got a chance to
disassemble an 8600 and look at it, there happened to be one of the
10k ECL based PDP10s in the same room. I looked down at the board in
my hands, crammed with LSI, and over at the poor '10.... and wondered what's
been going on for the last 10 years, besides the LSI, 32 bit 
word standardization and VM, none of which are due to the Vax...... ;-)


    Paul K. Rodman 
    rodman@mfci.uucp

It's a rumble!

henry@utzoo.uucp (Henry Spencer) (02/28/89)

In article <752@microsoft.UUCP> w-colinp@microsoft.uucp (Colin Plumb) writes:
>... skip instructions are one thing I wish the 29000 had.

It does, actually.  There's no reason why a (future) implementation couldn't
pay special attention to branches that go only one or two instructions ahead.
-- 
The Earth is our mother;       |     Henry Spencer at U of Toronto Zoology
our nine months are up.        | uunet!attcan!utzoo!henry henry@zoo.toronto.edu