[comp.arch] Evans and Sutherland quits the superbusiness

stein@dhw68k.cts.com (Rick 'Transputer' Stein) (11/18/89)

The New York Times reported today that E&S is bowing out of the
superconfuser business.  The article, by John Markoff, states that
their system, believed to be a VLIW flavor, is having both hardware
and software production problems, and that the performance is not
quite as competitive as they had once thought.  I guess those
Livermore Loops are worth something!
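Since the Livermore Loops keep coming up as the yardstick here: they are a set of small Fortran kernels long used to rate supercomputers.  Kernel 1 (the "hydro fragment") looks roughly like this; an illustrative C transcription, not the official benchmark code:

```c
#include <assert.h>
#include <math.h>

/* Sketch of Livermore Loops Kernel 1, the "hydro fragment".
 * The real benchmark is Fortran; this C version is for illustration only. */
void kernel1(int n, double *x, const double *y, const double *z,
             double q, double r, double t)
{
    for (int k = 0; k < n; k++)
        x[k] = q + y[k] * (r * z[k + 10] + t * z[k + 11]);
}
```

Loops of this shape vectorize trivially, which is why they flatter traditional vector supercomputers and make an honest test for anything claiming to replace them.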
-- 
Richard M. Stein (aka, Rick 'Transputer' Stein)
Sole proprietor of Rick's Software Toxic Waste Dump and Kitty Litter Co.
"You build 'em, we bury 'em." uucp: ...{spsd, zardoz, felix}!dhw68k!stein 

brooks@vette.llnl.gov (Eugene Brooks) (11/20/89)

In article <27611@dhw68k.cts.com> stein@dhw68k.cts.com (Rick 'Transputer' Stein) writes:
>The New York Times reported today that E&S is bowing out of the
>superconfuser business.
E&S did not see the Killer Micros coming.  Any vendor who
invested time and money in a "custom" CPU implementation
over the past several years is getting eaten alive by the sudden
onslaught of the Killer Micros.  You can't sell an expensive
slow computer in a competitive market.

It's going to be a terrible year for vendors of custom architectures;
new vendors will be completely flushed and old vendors will barely
survive on the hysteresis of their existing customer base.

Even the CEO of Cray Research mentioned the RISC microprocessors
in the recent Supercomputing '89 conference in his keynote address.
He referred to the performance increases of microprocessors as
"astounding."
brooks@maddog.llnl.gov, brooks@maddog.uucp

thomson@cs.utah.edu (Rich Thomson) (11/20/89)

In article <38966@lll-winken.LLNL.GOV> Eugene Brooks writes:
]In article <27611@dhw68k.cts.com> Rick 'Transputer' Stein writes:
]> The New York Times reported today that E&S is bowing out of the
]> superconfuser business.
] E&S did not see the Killer Micros coming.  Any vendor who
] invested time and money in a "custom" CPU implementation
] over the past several years is getting eaten alive by the sudden
] onslaught of the Killer Micros.  You can't sell an expensive
] slow computer in a competitive market.

Whether or not E&S saw the "killer micros" coming has nothing to do with
why they shut down their supercomputer project.  Hardware problems coupled
with the expense of such a project (especially for a company the size of
E&S) were the major driving factors.

If you take the time to look at the ES-1 (the name of the product)
architecture you will see that it is basically "killer micros" all
connected together through shared memory running Mach.  It is NOT a vector
optimized machine.

The fact that the "killer micros" keep getting more powerful says something
for custom design.  What do you think they make the microprocessors out of?
Surely not gate arrays.

] It's going to be a terrible year for vendors of custom architectures;
] new vendors will be completely flushed and old vendors will barely
] survive on the hysteresis of their existing customer base.

That depends on what your custom architecture is; building a custom
architecture doesn't necessarily mean that somebody with 5 micros under
their arm is going to blow your pants off.

						-- Rich
Rich Thomson	thomson@cs.utah.edu  {bellcore,hplabs,uunet}!utah-cs!thomson
"Tyranny, like hell, is not easily conquered; yet we have this consolation with
us, that the harder the conflict, the more glorious the triumph. What we obtain
too cheap, we esteem too lightly." Thomas Paine, _The Crisis_, Dec. 23rd, 1776

brooks@maddog.llnl.gov (Eugene Brooks) (11/20/89)

If your basic processor is not as fast as a current killer micro,
someone with ONE Killer Micro will blow your pants off.  This was
the case in an extreme for the ES-1 and is why Evans and Sutherland
saw no hope at all for the future and closed shop.  One can quote
the official words printed in the press, but the bottom line is that
you can't make money selling an expensive slow computer.  Had there
been any money at the end of the tunnel, they would have hung in
there and solved their hardware and software problems.  The Killer
Micros were moving in on the territory and would have completely
dominated it before those problems could be fixed.

It took on the order of 10 processors (processing
elements) or more on the ES-1 to match traditional supercomputer
performance.  As noted in an earlier post, the ES-1 was a nice
micro architecture, but without any of the Killer part in either
performance or cost.  Judging from the Livermore Loops figures for
the latest Killer Micro from hell, the MIPS R6000, it matches
traditional supercomputer performance with ONE processor, and not
just for "scalar only" codes.  The other micro vendors will soon follow
with even more terrible critters; it's a competitive world after all.

Killer Micros are certainly custom architectures which cost a lot
of money to develop.  The difference for them is that their development
costs are amortized over large markets and the parts end up sold at
"cookie cutter" costs.  Compare this to the costs of developing the
Cray-3, mentioned by Rollwagen at SC'89 to be more than a hundred
million dollars.  This development cost has to be amortized over
the sales base, and a market which might be only a few tens
of machines (after SSI, Cray Computers, Cray Research, Tera,
Convex, and the three Japanese companies divide up the total market
which the Killer Micros leave for them) puts quite a lower bound
on the machine price before you even get around to charging for the
hardware itself.  This is in sharp contrast to the situation for
the Cray-1, the sale of the first copy of which more than paid the
entire development cost for the machine.  Gone are the days of
high profit margins for supercomputers.
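The amortization arithmetic above is worth making explicit.  A minimal sketch in C: the ~$100M figure is Rollwagen's Cray-3 number from SC'89, but the unit counts (40 supercomputers, a million micros) are round illustrative assumptions, not reported sales:

```c
#include <assert.h>

/* Development cost amortized over units sold.  Illustrative only:
 * the ~$100M figure is Rollwagen's Cray-3 number; the unit counts
 * used below are round guesses, not reported sales figures. */
double dev_cost_per_unit(double dev_cost, double units_sold)
{
    return dev_cost / units_sold;
}

/* A market of a few tens of supercomputers:
 *   dev_cost_per_unit(100e6, 40)  -- $2.5M per machine, before any hardware.
 * A mass microprocessor market:
 *   dev_cost_per_unit(100e6, 1e6) -- $100 per part, "cookie cutter" cost. */
```

Same development bill either way; only the denominator differs, which is the whole Killer Micro argument in one division.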

Perhaps I am wrong and the R6000 powered box should be sold for
more than a million and MIPS is just dumping it on the market to
destroy the supercomputer vendors.  I don't see any "anti-dumping"
legislation in the works, however.  One thing is clear: if
traditional supercomputers don't find another order of magnitude
in single CPU performance real soon, at fixed cost, they will
not survive The Attack of the Killer Micros.    For scalar codes
supercomputers need even more leverage, two orders of magnitude.
I don't think it is going to happen.


brooks@maddog.llnl.gov, brooks@maddog.uucp

mash@mips.COM (John Mashey) (11/20/89)

In article <38980@lll-winken.LLNL.GOV> brooks@maddog.llnl.gov (Eugene Brooks) writes:
....
>Perhaps I am wrong and the R6000 powered box should be sold for
>more than a million and MIPS is just dumping it on the market to
>destroy the supercomputer vendors.....

Just so there's no confusion:
	it shouldn't be, and nobody's dumping...
-- 
-john mashey	DISCLAIMER: <generic disclaimer, I speak for me only, etc>
UUCP: 	{ames,decwrl,prls,pyramid}!mips!mash  OR  mash@mips.com
DDD:  	408-991-0253 or 408-720-1700, x253
USPS: 	MIPS Computer Systems, 930 E. Arques, Sunnyvale, CA 94086

rodman@mfci.UUCP (Paul Rodman) (11/20/89)

In article <38980@lll-winken.LLNL.GOV> brooks@maddog.llnl.gov (Eugene
Brooks) writes: ....[ deleted soapboxing about Killer Micros]....

Ok, ok, so I'm wasting my time, but I just can't let Mr. Brooks
continue his deluge of propaganda without throwing in my opinions....

Let's ignore the fact that the Killer Micros usually don't have a
decent memory system or I/O system. [Not to mention that their
compilers and O/S are often not very robust...:-)]

There IS a grain of truth in what Mr. Brooks says, in that I do believe
that there has been, and will continue to be, a compression in both
factory cost and performance from the lowest- to the highest-performance
computers.

Today's dense CMOS and ECL ASICs appear from PCs to workstations to
supercomputers, as do pretty dense packaging techniques.  So it IS
getting harder to figure out how to apply current technology to build
a supercomputer, i.e. how do I use more hardware to build a faster
uniprocessor machine?

<Commercial ON.>

One technology exists that CAN use more hardware for more performance:
the VLIW machine. Furthermore, such a machine uses replicated
functional units, lots of SRAM, and minimal control, all of which
contribute to ease of the design cycle. The ease of use of the VLIW
system [compile-and-go] is important for the Non-Brookses of the
world, believe me.

I have seen the state of typical software efforts in the good ol' USA
and I feel that it will be a long time before efforts such as that of
Mr Brooks are the norm [100's, 1000's of micros used for
time-to-solution].  God knows enough folks are working on, and have
worked on, the problem!  As such multiprocessing techniques improve,
they WILL be used commercially, but mostly with more modest numbers of
VLIW "supercomputers".

I DO agree with Mr. Brooks that single cpus built with 100's of
different ASIC designs, dozens of boards, and mediocre performance, are
dead meat in the not-so-long run [e.g. Cyber 2000]. But Mr. Brooks
errs in thinking that single-chip cpus are the be-all, end-all of cpu
design. 

<Commercial Off.>

Speaking as an engineer who does NOT work on Killer Micros, I just
don't seem to feel as depressed about the future of high-$$$ cpu
design as he thinks I should.... :-) Mr Brooks' attitude reflects his
rigid thinking, and his underestimation of change in areas other than
the one he has focused on.  His refrain reminds me of all the wags 10
years ago that claimed "The Attack of the Killer CMOS" would remove
LSI ECL from use in computer design.  Instead almost all high-end cpus
sold today are ECL.


Paul K. Rodman / KA1ZA /   rodman@multiflow.com
Multiflow Computer, Inc.   Tel. 203 488 6090 x 236
Branford, Ct. 06405
    

rcd@ico.isc.com (Dick Dunn) (11/23/89)

In article <1128@m3.mfci.UUCP>, rodman@mfci.UUCP (Paul Rodman) writes:
>...Let's ignore the fact that the Killer Micros usually don't have a
> decent memory system or I/O system...

No, let's ignore it because it's not a fact but an incredibly biased
(and not particularly useful) opinion.

(Actually, what I've seen suggests that the memory systems are usually
pretty well balanced, and that the I/O systems are out of balance to about
the same extent they are on most machines...I/O has been in catch-up mode
for years.)

>...[Not to mention that their
> compilers and O/S are often not very robust...:-)]

Yeah, let's not mention that, since it isn't true.
-- 
Dick Dunn     rcd@ico.isc.com    uucp: {ncar,nbires}!ico!rcd     (303)449-2870
   ...`Just say no' to mindless dogma.

mcdonald@aries.uiuc.edu (Doug McDonald) (11/23/89)

In article <1989Nov22.175128.24910@ico.isc.com> rcd@ico.isc.com (Dick Dunn) writes:
>In article <1128@m3.mfci.UUCP>, rodman@mfci.UUCP (Paul Rodman) writes:
>>...Let's ignore the fact that the Killer Micros usually don't have a
>> decent memory system or I/O system...
>
>
>(Actually, what I've seen suggests that the memory systems are usually
>pretty well balanced, and that the I/O systems are out of balance to about
>the same extent they are on most machines...I/O has been in catch-up mode
>for years.)
>
I contend that there is no such thing as an "out of balance IO system".
Certainly there is for memory vs. cpu. But there is an extremely
wide range of needed ratios for io vs cpu power. Certain business
uses need vast IO compared to CPU, some scientific uses need
99.999% cpu and .001% IO. 

There is an obvious market need for a powerful CPU with far less IO
power than a Cray or an IBM mainframe.  One person's "balance" is
another's overkill - both ways, of course.

Doug McDonald

seanf@sco.COM (Sean Fagan) (11/24/89)

In article <1989Nov22.175128.24910@ico.isc.com> rcd@ico.isc.com (Dick Dunn) writes:
>In article <1128@m3.mfci.UUCP>, rodman@mfci.UUCP (Paul Rodman) writes:
>>...Let's ignore the fact that the Killer Micros usually don't have a
>> decent memory system or I/O system...
>
>No, let's ignore it because it's not a fact but an incredibly biased
>(and not particularly useful) opinion.

What?!  Uhm, I hate to tell you this, but the reason minis provide much
better throughput than micros is their I/O subsystem.  A '286 is
faster than a VAX (785, let's say), but go multi-user and there is no
comparison:  the VAX will get *much* better throughput (meaning:  it may
take three times as long to do your sieve of Eratosthenes, but swapping
processes in and out, and doing any disk I/O, is going to go more than three
times faster).  This is because the '286 (a killer-micro, compared to a VAX
8-)) doesn't have the memory or I/O subsystems that the VAX does.  Now,
compare just about *any* system on the market today with, say, a CDC Cyber
running NOS (or, deity forgive me, even NOS/VE).  (Bet you were all waiting
for me to mention those, weren't you? 8-).)  As has been pointed out before,
lots of people don't *need* 250 MFLOPS / MIPS on their desktop; they just
need to shuffle data back and forth (that's why there's a TPS [Transactions
Per Second] measurement; any commentary on that, John?  Michael?).  Without
a decent I/O subsystem, you won't be able to do this.  And the memory in
most "killer micros" is deficient because I can't do *real* DMA (it tends
to steal cycles from the CPU).  (N.B.:  some K.M.'s *do* have *real* DMA.
I'm waiting for them to come out with *real* I/O subsystems [using, say, a
68000 as a PP].  Then they will scream, even compared to a Cyber.)
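For concreteness, the sieve of Eratosthenes mentioned above is the classic pure-CPU micro benchmark: all loops and memory, essentially no I/O, which is exactly why a fast micro wins on it while losing the multi-user throughput contest.  A minimal C version:

```c
#include <assert.h>
#include <string.h>

/* Count the primes up to n with the sieve of Eratosthenes.
 * A classic CPU-bound kernel: no I/O at all once it starts running. */
int count_primes(int n)
{
    char composite[n + 1];          /* C99 variable-length array */
    memset(composite, 0, n + 1);
    int count = 0;
    for (int p = 2; p <= n; p++) {
        if (composite[p])
            continue;
        count++;                    /* p survived all smaller primes */
        for (int m = 2 * p; m <= n; m += p)
            composite[m] = 1;       /* strike out multiples of p */
    }
    return count;
}
```

Run this single-user and the '286 looks fine; it is the swapping and disk traffic around jobs like this where the VAX's I/O subsystem earns its keep.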

>>...[Not to mention that their
>> compilers and O/S are often not very robust...:-)]
>Yeah, let's not mention that, since it isn't true.

Some of the compilers available today are pretty amazing, especially
compared to what was available just a decade ago.  The OS's running on most
K.M.'s, however, tend to be Unix variants (or, deity help us all, DOS).
This is not a terribly robust OS, nor a terribly quick one (asynchronous I/O
would be really nice; there are some other things that could be useful).

So, yeah, it is true.

-- 
Sean Eric Fagan  | "Time has little to do with infinity and jelly donuts."
seanf@sco.COM    |    -- Thomas Magnum (Tom Selleck), _Magnum, P.I._
(408) 458-1422   | Any opinions expressed are my own, not my employers'.

brooks@maddog.llnl.gov (Eugene Brooks) (11/24/89)

In article <3893@scolex.sco.COM> seanf@sco.COM (Sean Fagan) writes:
>for me to mention those, weren't you? 8-).)  As has been pointed out before,
>lots of people don't *need* 250 MFLOPS / MIPS on their desktop; they just
>need to shuffle data back and forth (that's why there's a TPS [Transactions
>Per Second] measurement; any commentary on that, John?  Michael?).  Without
No one needs a computer of any performance level on his desk.  What one
needs is a modern windowing terminal on a desk, connected to the
computer in the computer room with a connection of suitable bandwidth to
handle the drawing on the screen.  The Killer Micros and striped disk farms
belong in the computer room where fan noise and heat do not bother anyone.
A Killer Micro on one's desk is just a waste of a Killer Micro, along with
its uselessly small main memory size.  The utilization of such a machine is so
low it is hard to measure reliably.

>most "killer micros" is deficient because I can't do *real* DMA (it tends
>to steal cycles from the CPU).  (N.B.:  some K.M.'s *do* have *real* DMA.
>I'm waiting for them to come out with *real* I/O subsystems [using, say, a
>68000 as a PP].  Then they will scream, even compared to a Cyber.)
A 68000 is probably not fast enough to handle IO for a good killer
micro.  A real computer will have a handful of Killer Micros hooked
up to a coherent cache system, with possibly VME DMA IO on the main bus
or some adapter attached to it.  A supercomputer will have a scalable
coherent cache system and some number of these caches hooked to striped
disk farms to supply the serious IO needs of such a machine.  Don't
confuse the basic CPU technology of Killer Micros with the really poor
main memory and IO systems which people sell as single user workstations.

>Some of the compilers available today are pretty amazing, especially
>compared to what was available just a decade ago.  The OS's running on most
>K.M.'s, however, tend to be unix varients (or, deity help us all, DOS).
>This is not a terribly robust OS, nor a terribly quick one (asynchronous I/O
>would be really nice; there are some other things that could be useful).
Killer Micros will soon dominate the world of computing; UNIX already does.
DOS users are not computing, but saying just what they are doing is not
appropriate for public consumption.
brooks@maddog.llnl.gov, brooks@maddog.uucp

iyengar@grad1.cis.upenn.edu (Anand Iyengar) (11/24/89)

In article <39361@lll-winken.LLNL.GOV> brooks@maddog.llnl.gov (Eugene Brooks) writes:
>No one needs a computer of any performance level on his desk.  What one
>needs is a modern windowing terminal on a desk, connected to the
>computer in the computer room with a connection of suitable bandwidth to
>handle the drawing on the screen.
	Interesting statement.  

	I'm running off of an X term now, and while it's not bad, it's not
trouble free.  If the serving host, the network, or the terminal itself
are down, I can't use it to get work done.  If I have a high-performance
micro with a reasonable drive, I only crash when the local machine/site
has problems (let's not argue NFS; that's somewhere in between, depending
on how much you mount and keep locally.  Diskless clients have many of the
same problems as windowing terminals).

	Also, performance of the X term decreases with the load on the central
host and the network.  It's not blazingly fast in itself, either.  Doing
anything really "graphical" on it bogs it down or crashes it.  Forget
animation or really neatsy stuff.  Maybe one could put a faster CPU and
internal bus in it to get the graphics to go fast.  But then why not just go
the extra 5 yards, drop some more RAM and a drive on it, and make it a
real-live computer?

>The Killer Micros and striped disk farm belong in the computer room where fan
>noise and heat does not bother anyone.
	Why?  I hate the noise as much as anyone, but why is it bad to
have a high-performance computer on your desk?  Drop a net link to it, and
people can log in to it from about anywhere.  

>A Killer Micro on ones desk is just a waste of a Killer Micro, along with
>a uselessly small main memory size.  The utilization of such a machine is so
>low it is hard to measure reliably.
	It is and it isn't.  Price/performance doesn't scale linearly.  It's
not clear that a big mainframe is lots (your mileage will vary) better than
a number of micros.  There are still some things that I can't do on a
mainframe that I can do on a micro, such as crash it.  Because we have a
number of small boxes around people can just connect to a different
one for a while, and it's not a problem.  

>In article <3893@scolex.sco.COM> seanf@sco.COM (Sean Fagan) writes:
>>K.M.'s, however, tend to be unix varients (or, deity help us all, DOS).
>>This is not a terribly robust OS, nor a terribly quick one (asynchronous I/O
>>would be really nice; there are some other things that could be useful).
	Agreed, but people are band-aiding as they go, and it's wide-spread
enough that it will probably be here a while.  

>Killer Micros will soon dominate the world of computing, UNIX already does.
>DOS users are not computing, but saying just what they are doing is not
>appropriate for public consumption.
	In every DOS user is a potential UNIX user.  You might not like DOS,
but that doesn't make it evil.  Funny; IBM used to think the same thing...

							Anand.  
--
"I've got more important things to waste my time on."
{arpa | bit}net: iyengar@eniac.seas.upenn.edu
uucp: !$ | uunet
--- Lbh guvax znlor vg'yy ybbx orggre ebg-guvegrrarg? ---

seanf@sco.COM (Sean Fagan) (11/25/89)

In article <39361@lll-winken.LLNL.GOV> brooks@maddog.llnl.gov (Eugene Brooks) writes:
>In article <3893@scolex.sco.COM> seanf@sco.COM (Sean Fagan) writes:
>>most "killer micros" is deficient because I can't do *real* DMA (it tends
>>to steal cycles from the CPU).  (N.B.:  some K.M.'s *do* have *real* DMA.
>>I'm waiting for them to come out with *real* I/O subsystems [using, say, a
>>68000 as a PP].  Then they will scream, even compared to a Cyber.)

>A 68000 is probably not fast enought to handle IO for a good killer
>micro.  

A PP-type processor does not need to be fast, really.  If it's fast enough,
you turn your system into a dual-processor system, with heterogeneous
processor types (it can be done, and has been.  Mach can, I think, be made
to work rather well with it).  If you have a, say, 16-MHz 68k serve as the I/O
processor for a KM (say, a 67-MHz R6k), and you do the system correctly, then
the 68k still has a bit of idle time (say, 1-5%, not counting time spent
waiting for i/o to complete).  More, and you should probably retune /
redesign your system; less, and you should have a slightly faster processor.

PP's for a Cyber are *slow*.  But they get the job done real well.

>Killer Micros will soon dominate the world of computing, UNIX already does.

I don't think so.  IBM still dominates the world of computing, along with
FORTRAN and COBOL.  Personal computers are catching up, though.  Give it
another 5 or 6 years (i.e., more people use an IBM mainframe than use a PC
[except, possibly, as a terminal to the mainframe]).

>DOS users are not computing, but saying just what they are doing is not
>appropriate for public consumption.

They're using computers, aren't they?  Guess what they're doing, then:
they're computing.  A very small percentage of computer users need pure
number-crunching power (or else everyone would go out and buy a Cray or i860
8-)); a larger number of users would like to see more MIPS (for drawing
speed) and more *throughput*.  Again, as I've said before, a CDC Cyber
170/760 is slower, MIPS-wise, than quite a few of the newer RISC systems out
there.  However, it *feels* faster because of the throughput difference,
even with 100 users on it (speaking from experience).  When you have a
system that can compile a 10 000-line FORTRAN program in less than 40
seconds, *without* going through a cache, then I'll be happy with a KM.
Until then, however, the mainframes are going to win, and continue to be
bought.

-- 
Sean Eric Fagan  | "Time has little to do with infinity and jelly donuts."
seanf@sco.COM    |    -- Thomas Magnum (Tom Selleck), _Magnum, P.I._
(408) 458-1422   | Any opinions expressed are my own, not my employers'.

bzs@world.std.com (Barry Shein) (11/26/89)

>I don't think so.  IBM still dominates the world of computing, along with
>FORTRAN and COBOL.  Personal computers are catching up, though.  Give it
>another 5 or 6 years (i.e., more people use an IBM mainframe than use a PC
>[except, possibly, as a terminal to the mainframe]).

Not sure I believe you about PC's; estimates are that there are 15 to 30
million PC's out there, and their use as terminals onto mainframes is
usually bemoaned as "still waiting to happen".  As far as Fortran and
Cobol go, again, how do you know this?

Something I've used to measure this latter claim is to take the Sunday
Jobs section of a major newspaper (I've used the Sunday Boston Globe)
and make a simple tick count of jobs being offered in various areas.

Last I did it (86-88) Unix/C was catching up rapidly on traditional
areas (IBM Mainframe, Cobol, Fortran, BAL.) I know, that's only
because Unix/C is a growth area and that's what this is really
measuring. My guess is that's like saying Maseratis usually win races
only because they're fast.

I suppose you could nitpick the measure but it would be far more
productive to suggest a better measure (the nice thing about this one
is that anyone can do it in their living room in a few minutes.)

IBM's mainframe predominance in the computing world is shrinking
rapidly, that's why their revenues are in trouble. There was a time
when they accounted for as much as 80% of *all* computer sales in the
world, just a few years ago. I don't think they account for 50%
anymore (partly due to growth in the industry around them.) And the
current sales are heavily weighted towards a relatively few customers
(fortune 100, US Govt), not that their money isn't green, but it's not
as widespread an environment as it was say 10 years ago, particularly
in relative terms. 50% is nothing to sneeze at, but if one is trying
to do predictions the trends are pretty clear.

Either IBM comes up with something brilliant to buoy their mainframe
market (something I wouldn't discount as a possibility) or expect a
rapid decline over the next 5 or so years as people realize they can
"down-size" effectively ("down-sizing" is a term used in the MIS/DP
market for replacing mainframe facilities with smaller machines, if
you read that press you'd be shocked how many blue-serge suits are
standing up and making testimonials about how they're decommissioning,
or planning to in the near future, their mainframes and moving to a
PC/LAN network with maybe a mini, typically AS/400 or Vax, at the
hub.) There are areas where down-sizing doesn't cut it, but there are
a *lot* of areas where it does, and that's skimming the cream out of
the market.

I suppose what people are talking about here is "down-sizing" in the
super-computer market.

It makes a lot of sense, most scientists I've worked with (I used to
be in charge of most of the computers at BU and earlier, for a short
while, ran the Harvard Chemistry computing facility, before that I was
at the Harvard School of Public Health for several years) seem to
prefer having political control over smaller facilities rather than go
begging to centralized administrators.  As under-$100K systems approach
100 MIPS and 30 MFLOPS or so, I don't see where the motivation to hassle
with a system that's shared by hundreds of people will come from,
except for those perhaps several dozen groups in the country that
absolutely must be on a super-computer, even they'll do more and more
prototyping and development on department-sized or personal
facilities. Again, dwindling numbers. It's already happening.
-- 
        -Barry Shein

Software Tool & Die, Purveyors to the Trade         | bzs@world.std.com
1330 Beacon St, Brookline, MA 02146, (617) 739-0202 | {xylogics,uunet}world!bzs

mash@mips.COM (John Mashey) (11/26/89)

In article <3898@scolex.sco.COM> seanf@sco.COM (Sean Fagan) writes:
>In article <39361@lll-winken.LLNL.GOV> brooks@maddog.llnl.gov (Eugene Brooks) writes:
>>In article <3893@scolex.sco.COM> seanf@sco.COM (Sean Fagan) writes:
>>>most "killer micros" is defficient because I can't do *real* DMA (it tends
>>>to steal cycles from the CPU).  (N.B.:  some K.M.'s *do* have *real* DMA.
>>>I'm waiting for them to come out with *real* I/O subsystems [using, say, a
>>>68000 as a PP].  Then they will scream, even compared to a Cyber.)

Just to correct a potential mis-impression:

a) Since 1983 (or earlier, in a few cases, I think), anybody seriously
building multi-user systems / servers from microprocessors has tended to
build at least the high end of a product range with micros [68K, 186s,
Z8000s, V-??, etc] as I/O processors.  Some workstations [Sony News,
for example] have 2 68Ks, one as an I/O processor.

b) Although one may choose to use a "Killer Micro" in a workstation/PC/cheap
server architecture, where there may be only one path to a memory bus with
SIMMs, or similar design:
	1) It usually has DMA.
	2) It usually has a cache, and so I/O has some impact, but it is hardly
	what people used to call cycle-stealing (where every I/O stopped the
	CPU almost cold).

c) Any "Killer Micro" aimed at larger server/multi-user designs (as opposed to
least-cost designs):
	has DMA
	usually has CPUs in at least some of the I/O boards, where appropriate
	sometimes has multiple paths to memory, i.e., a VME I/O bus and a
		private memory bus

d) Many of the current high-performance I/O boards have 68020s, already,
as in some of Interphase's products.

sbf10@uts.amdahl.com (Samuel Fuller) (11/28/89)

In article <1989Nov25.200320.21142@world.std.com> bzs@world.std.com (Barry Shein) writes:
>Either IBM comes up with something brilliant to buoy their mainframe
>market (something I wouldn't discount as a possibility) or expect a
>rapid decline over the next 5 or so years as people realize they can
>"down-size" effectively ("down-sizing" is a term used in the MIS/DP
>market for replacing mainframe facilities with smaller machines, if
>you read that press you'd be shocked how many blue-serge suits are
>standing up and making testimonials about how they're decommissioning,
>or planning to in the near future, their mainframes and moving to a
>PC/LAN network with maybe a mini, typically AS/400 or Vax, at the
>hub.) There are areas where down-sizing doesn't cut it, but there are
>a *lot* of areas where it does, and that's skimming the cream out of
>the market.
>

The opposite is also happening. Many large corporations are consolidating
their data processing into larger centers.  Shutting down several
regional processing centers in favor of one large national center.

Partitioning large mainframes has also become very attractive to some
corporations.  One big machine is logically partitioned into
several smaller machines.  Software and maintenance costs are lower
for one large machine than for a half dozen small machines.

-- 
---------------------------------------------------------------------------
Sam Fuller / Amdahl System Performance Architecture

I speak for myself, from the brown hills of San Jose.

UUCP: {ames,decwrl,uunet}!amdahl!sbf10 | USPS: 1250 E. Arques Ave (M/S 139)
INTERNET: sbf10@amdahl.com             |       P.O. Box 3470
PHONE: (408) 746-8927                  |       Sunnyvale, CA 94088-3470
---------------------------------------------------------------------------