[comp.arch] R6000 PCs?

gillies@m.cs.uiuc.edu (01/08/90)

In 1980, the Z-80 and 6502 were state-of-the-art CPUs in PC's of the
time.  I think these machines could probably manage .1 MIPS.  Today,
the state-of-the-art CPUs are the '386, the '486, and the 68030.  All
these devices are 2-4 MIPS, about 20-40 times faster.

I conclude that everyone will have an R6000 (or better) in the
(affordable) PC of the year 2000.

This seems like a long time.
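
A back-of-the-envelope sketch of that extrapolation, in C.  The 1980 and
1990 MIPS figures are the rough estimates above; the assumption that the
same 20-40x-per-decade factor holds through 2000 is only an illustration,
not a prediction about any particular chip.

#include <stdio.h>

int main(void)
{
    /* Rough MIPS estimates quoted above -- not measurements. */
    double mips_1980    = 0.1;   /* Z-80 / 6502 class  */
    double mips_1990_lo = 2.0;   /* '386 / 68030 class */
    double mips_1990_hi = 4.0;   /* '486 class         */

    /* Growth over the 1980-1990 decade: roughly 20x to 40x. */
    double factor_lo = mips_1990_lo / mips_1980;
    double factor_hi = mips_1990_hi / mips_1980;

    /* If (big assumption) the same per-decade factor holds again,
     * the year-2000 PC lands in the tens-to-hundreds of MIPS. */
    printf("implied year-2000 PC: %.0f to %.0f MIPS\n",
           mips_1990_lo * factor_lo, mips_1990_hi * factor_hi);
    return 0;
}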

nelson@m.cs.uiuc.edu (01/08/90)

/* Written 11:48 am  Jan  7, 1990 by gillies@m.cs.uiuc.edu in m.cs.uiuc.edu:comp.arch */
/* ---------- "R6000 PCs?" ---------- */

I conclude that everyone will have an R6000 (or better) in the
(affordable) PC of the year 2000.

/* End of text from m.cs.uiuc.edu:comp.arch */

I truly believe that the demand for faster and faster PCs will slow down
  quite a bit in the next few years.

Personally, for nearly all PC applications, I think that a 386 system is
  more than enough and most of the demand comes from people who don't 
  need the power at all, but convince themselves (or their superiors)
  that they need it for either ego (I need the best blah blah blah) or
  departmental-monies-have-not-been-fully-spent-that-quarter reasons.

johnl@esegue.segue.boston.ma.us (John R. Levine) (01/08/90)

In article <3300093@m.cs.uiuc.edu> nelson@m.cs.uiuc.edu writes:
>I truly believe that the demand for faster and faster PCs will slow down
>  quite a bit in the next few years.

One thing that has remained constant over 40 years of computer history is
predictions that some level of performance is fast enough and that there
won't be demand for anything faster.  So far, all such predictions appear
to have been wrong.  I have faith that we software types can piss away
whatever performance the hardware guys can give us.  It's true, a 386 is
overkill for nearly any 1-2-3 spreadsheet, but if you compare 1-2-3 3.0 to
1-2-3 2.x, you'll find that 3.0 is more functional and much slower, and I
expect that 4.0 or the WIMP version or whatever will be slower still.

More seriously, it's pretty clear that the PC industry is moving toward
graphics, networking, and multi-media, each of which individually is a big
cycle sink, and which together can absorb all of the CPU power likely to be
available in the foreseeable future.
-- 
John R. Levine, Segue Software, POB 349, Cambridge MA 02238, +1 617 864 9650
johnl@esegue.segue.boston.ma.us, {ima|lotus|spdcc}!esegue!johnl
"Now, we are all jelly doughnuts."

filbo@gorn.santa-cruz.ca.us (Bela Lubkin) (01/08/90)

In article <3300092@m.cs.uiuc.edu> gillies@m.cs.uiuc.edu writes:
>In 1980, the Z-80 and 6502 were state-of-the-art CPUs in PC's of the
>time.  I think these machines could probably manage .1 MIPS.  Today,
>the state-of-the-art CPUs are the '386, the '486, and the 68030.  All
>these devices are 2-4 MIPS, about 20-40 times faster.

The 6502 takes 2 cycles for some operations and averages 3-5; at 1MHz
(the CPU speed in the 1978 Apple II, at least), it gets .2-.3 native
MIPS.  The Z80 takes more cycles but was already running 2MHz by 1980,
for about the same native MIPS.  If you want to talk VAX MIPS... well...
you might be able to get .05 VAX MIPS out of a 1MHz 6502.  Probably more
like .02.
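
The "native MIPS" arithmetic here is just clock rate divided by average
cycles per instruction; a minimal sketch in C (the Z80's ~8-cycle average
below is an assumed ballpark chosen to land near the same figure, not a
measured number):

#include <stdio.h>

/* Native MIPS: clock rate (MHz) over average cycles per instruction.
 * This is raw instruction rate, with no correction for how much work
 * each instruction does (i.e., not VAX MIPS). */
static double native_mips(double clock_mhz, double avg_cpi)
{
    return clock_mhz / avg_cpi;
}

int main(void)
{
    /* 6502 at 1 MHz, averaging 3-5 cycles per instruction. */
    printf("6502 @ 1 MHz: %.2f to %.2f native MIPS\n",
           native_mips(1.0, 5.0), native_mips(1.0, 3.0));

    /* Z80 at 2 MHz; the 8-cycle average is an assumption. */
    printf("Z80  @ 2 MHz: about %.2f native MIPS\n",
           native_mips(2.0, 8.0));
    return 0;
}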

>I conclude that everyone will have an R6000 (or better) in the
>(affordable) PC of the year 2000.

Better.  (Send me mail then if I'm wrong... ;-)

>This seems like a long time.

Yes, several generations of computers.  Apparently computer years, like
dog years, are 7 times as fast as human years.  ;-}

Bela Lubkin    * *    //  filbo@gorn.santa-cruz.ca.us  CI$: 73047,1112 (slow)
     @       * *     //  belal@sco.com  ..ucbvax!ucscc!{gorn!filbo,sco!belal}
R Pentomino    *   \X/  Filbo @ Pyrzqxgl +408-476-4633 and XBBS +408-476-4945

dce@smsc.sony.com (David Elliott) (01/08/90)

In article <1990Jan8.033050.3360@esegue.segue.boston.ma.us> johnl@esegue.segue.boston.ma.us (John R. Levine) writes:
>In article <3300093@m.cs.uiuc.edu> nelson@m.cs.uiuc.edu writes:
>>I truly believe that the demand for faster and faster PCs will slow down
>>  quite a bit in the next few years.

>More seriously, it's pretty clear that the PC industry is moving toward
>graphics, networking, and multi-media, each of which individually is a big
>cycle sink, and together which can absorb all of the CPU power likely to be
>available in the foreseeable future.

I can think of two other, or at least more specific, considerations:

	1. Decentralization of personnel and resources.  Some people
	   are predicting an increase in the number of computer users
	   working out of their homes, as it saves time, money, and
	   stress.  This will require that PCs hold redundant data
	   and be able to communicate over networks without users
	   seeing a big degradation.

	2. Improvement of user interfaces.  Today's "average" PC user
	   learned about computers in college as an afterthought, so
	   these people are just learning to replace paper with bits.
	   As more people are introduced to the power of computers,
	   they will make more demands on the interfaces: voice
	   recognition, better handling of user preferences, built-in
	   instruction (i.e., pop-up or voice help for everything),
	   etc.

I actually feel that the demand for faster PCs will increase as people
begin to understand what computers are really capable of doing, and
as they learn to ask instead of taking what we, the industry, give them.

-- 
David Elliott
dce@smsc.sony.com | ...!{uunet,mips}!sonyusa!dce
(408)944-4073
"But Pee Wee... I don't wanna be the baby!"

mash@mips.COM (John Mashey) (01/08/90)

In article <3300092@m.cs.uiuc.edu> gillies@m.cs.uiuc.edu writes:
>
>In 1980, the Z-80 and 6502 were state-of-the-art CPUs in PC's of the
>time.  I think these machines could probably manage .1 MIPS.  Today,
>the state-of-the-art CPUs are the '386, the '486, and the 68030.  All
>these devices are 2-4 MIPS, about 20-40 times faster.
>
>I conclude that everyone will have an R6000 (or better) in the
>(affordable) PC of the year 2000.
>
>This seems like a long time.

Well, lots of people will have affordable workstations of that power,
well before the year 2000.  However, just to make things clear, R6000's
use ECL, and you don't stick that in a desktop; desktops will continue
to want CMOS chips.
-- 
-john mashey	DISCLAIMER: <generic disclaimer, I speak for me only, etc>
UUCP: 	{ames,decwrl,prls,pyramid}!mips!mash  OR  mash@mips.com
DDD:  	408-991-0253 or 408-720-1700, x253
USPS: 	MIPS Computer Systems, 930 E. Arques, Sunnyvale, CA 94086

slackey@bbn.com (Stan Lackey) (01/08/90)

In article <3300093@m.cs.uiuc.edu> nelson@m.cs.uiuc.edu writes:
>/* Written 11:48 am  Jan  7, 1990 by gillies@m.cs.uiuc.edu in m.cs.uiuc.edu:comp.arch */
>/* ---------- "R6000 PCs?" ---------- */
>I conclude that everyone will have an R6000 (or better) in the
>(affordable) PC of the year 2000.
>/* End of text from m.cs.uiuc.edu:comp.arch */

>I truly believe that the demand for faster and faster PCs will slow down
>  quite a bit in the next few years.
>Personally, for nearly all PC applications, I think that a 386 system is
>  more than enough and most of the demand comes from people who don't 
>  need the power at all, but convince themselves (or their superiors)
>  that they need it for either ego (I need the best blah blah blah) or
>  departmental-monies-have-not-been-fully-spent-that-quarter reasons.

I don't think so.  Greater CPU power supports greater software
functionality, which increases productivity.  Suppose a cheap Killer
Micro machine on a desk, with advanced software, can make an engineer
more productive, say by running simulations super fast.  If I, as an
employer (which I'm not but if I were), had a choice of doubling my
workforce or doubling productivity at say $25,000 per seat per year,
I'm pretty sure I'd go for the Killer Micros.  It would probably make the
employees happier, encourage suppliers to make more productivity
tools, etc.
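
As a rough sketch of that comparison in C: the $25,000-per-seat figure is
the one given above, but the fully loaded cost of hiring an additional
engineer is an assumed round number, purely for illustration.

#include <stdio.h>

int main(void)
{
    double seat_cost = 25000.0;    /* machine cost per seat per year (above) */
    double engineer  = 100000.0;   /* assumed fully loaded yearly cost of
                                      hiring one more engineer              */

    /* Either option roughly doubles the output of an existing seat,
     * so compare the incremental cost of each route. */
    printf("extra engineer:  $%8.0f / yr\n", engineer);
    printf("killer micro:    $%8.0f / yr\n", seat_cost);
    printf("saving per seat: $%8.0f / yr\n", engineer - seat_cost);
    return 0;
}
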
-Stan

swarren@eugene.uucp (Steve Warren) (01/10/90)

In article <3300093@m.cs.uiuc.edu> nelson@m.cs.uiuc.edu writes:
>
>I truly believe that the demand for faster and faster PCs will slow down
>  quite a bit in the next few years.
>
>Personally, for nearly all PC applications, I think that a 386 system is
>  more than enough and most of the demand comes from people who don't 
                             [...]
Well, without flaming, I'd just like to say that this assumes that
the group of applications that people will want to run on PCs is static.

As more power becomes available the applications that use that power
are going to be written.  Just because we don't know what they will be
doesn't mean they won't exist.  I just think back to when 8K of RAM was
a system and paper tape was a user interface.  A CRT, 64K of RAM, and an 8"
floppy made up the elite system, and a hard drive was beyond hoping for.

I remember when the first hard drive came out for a PC, and Byte magazine
ran an article cautioning users that most of them wouldn't need that
kind of power and that it could be difficult to manage such vast
amounts of information.  And at the time it was true: there weren't many
applications that could take advantage of a hard drive, and the
knowledge about them wasn't readily available.

All this is just to say that we shouldn't presume to know what the
limits are to people's desire for more speed.  I personally think
that CPU speed is sort of like money in the old cliche: "how much is
enough? - just a little bit more..."

What if the graphical user interface of the future is a ray-traced 3-D
world that the user moves around in interactively?  We don't know if
such a thing would be good, because we don't have the horsepower to do it
yet.  This is just one example of an area where available CPU speed
(personally available to individuals) is drastically below the level
needed for good interactive response.  I offer this only as a hypothetical
example; I don't know exactly what the speed would come in handy for, but I
feel that if it becomes available, people will find ways to use it.  And then
they will wonder how people ever got by without the new capabilities.
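
To put a rough number on how far out of reach that is: the display size,
frame rate, and per-ray cost below are all assumed round figures, chosen
only to show the order of magnitude.

#include <stdio.h>

int main(void)
{
    double pixels      = 1024.0 * 768.0;  /* assumed display resolution   */
    double frames      = 30.0;            /* assumed frames per second    */
    double ops_per_ray = 1000.0;          /* assumed work per primary ray */

    /* Compare with the few million instructions per second of a '386. */
    printf("interactive ray tracing needs roughly %.1e operations/sec\n",
           pixels * frames * ops_per_ray);
    return 0;
}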

--Steve
-------------------------------------------------------------------------
	  {uunet,sun}!convex!swarren; swarren@convex.COM

dmocsny@uceng.UC.EDU (daniel mocsny) (01/10/90)

In article <4470@convex.UUCP> swarren@convex.COM (Steve Warren) writes:
>In article <3300093@m.cs.uiuc.edu> nelson@m.cs.uiuc.edu writes:
>>
>>Personally, for nearly all PC applications, I think that a 386 system is
>>  more than enough and most of the demand comes from people who don't 
>                             [...]
>As more power becomes available the applications that use that power
>are going to be written.  Just because we don't know what they will be
>doesn't mean they won't exist.

We will have enough computer power only when all human aspirations
have been satisfied, and all human problems have been solved. For
every problem I can imagine, the solution requires (possibly among other
things) at least the ability to process information. We can't yet
apply computer power profitably to all human problems, but that is not
due to any theoretical limit on what computers can do. It is the fault
of our understanding, our software, and our transducers. Oh yes, and
for many problems we have constraints of cost and time, so as
computers improve, the range of potentially solvable problems
increases.

The original poster does raise an important point---getting the most
out of high-performance hardware on a widespread basis is very
difficult. Few users have the type of scalable problems of relatively
low Kolmogorov complexity that map readily onto faster hardware. An
overworked businessman can't double his profits by buying a faster
machine and merely halving a grid size! The ill-defined problem of
"how to build better products and/or deliver superior service" could
invariably benefit in principle from massive increases in gross
information processing power. However, very few business-people are
able to conceptualize and utter the extensive incantations necessary
to empower the computer to make headway into the general problem of
doing business. Much less the average problem-beset individual trying
to get through life in general...

As I mentioned in a recent article in comp.society, building new
technology is only half the job. The other half is engineering society
to take advantage of the technology. For example, horses were once the
technology of choice for transporting people and loads across
irregular terrain. Inventors were unable to build mechanical analogs
of the horse. Instead they turned to wheeled vehicles. But as these
were unsuited to the existing world, they decided to re-engineer the
world to facilitate wheeled transport (roads, rails, etc.). Then came
the necessary task of educating/coercing almost everyone to abandon
horses and adopt new technology.

Social engineering for computers mostly revolves around identifying
existing information flows in society, and then engineering the
necessary infrastructure, standards, and educated/coerced consumers to
allow computers to take over. For example, most of our information
still moves on paper and inside physically transported human brains.
Computer/telecom technology will eventually replace these archaic
mass-based flows of information, at least those aimed at turning a
profit, creating now-unimaginable MIPS-sinks.

Consider also the market potential of artificial realities. Most
economic activity in an affluent society beyond the level of meeting
basic needs seems aimed at providing desirable sensory experiences.
Once you have a full belly and some sort of roof over your head,
almost everything else is aimed at satisfying your idea of pleasant
sights, sounds, tastes, textures, and presenting a favorable image to
your peers. In short, it's all entertainment. A computer with display
bandwidth well-matched to human sensory capacity could provide sensory
experiences that would blow away anything available (not to mention
affordable) today in the "Real World". The sky is basically the limit
on how many MIPS---MFLOPS---Mbit/s you could chew up this way.

As long as we live with a colossal disparity between what we want
and what we've got, there will be room for more computer power.
However, gearing up to use that power will be an ongoing problem,
one that will bring wrenching changes to society (as every
major existing technology has).

Dan Mocsny
dmocsny@uceng.uc.edu

gillies@p.cs.uiuc.edu (01/11/90)

A few people sent me email saying, "Don't be silly.  R6000's are ECL
and you can't put ECL on a desk."

Who says we won't go to KMart once a year for Prestone PC-coolant to
add to our PC's radiator?  I can't see why it's so troublesome to cool
1 card of ECL (CPU + cache).

Don Gillies, Dept. of Computer Science, University of Illinois
1304 W. Springfield, Urbana, Ill 61801      
ARPA: gillies@cs.uiuc.edu   UUCP: {uunet,harvard}!uiucdcs!gillies

jml@tw-rnd.SanDiego.NCR.COM (Michael Lodman) (01/12/90)

In article <76700106@p.cs.uiuc.edu> gillies@p.cs.uiuc.edu writes:
>Who says we won't go to KMart once a year for Prestone PC-coolant to
>add to our PC's radiator?  I can't see why it's so troublesome to cool
>1 card of ECL (CPU + cache).

And how are you going to cool the coolant? The problem is that you will
need really heavy-duty air conditioning in the room, or some other way
to radiate the heat outside it.  ECL equipment can heat up a room FAST!

By the way, add to your costs a much more expensive power supply for
the PC.
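
To put a rough number on the heat problem, here is the arithmetic in C;
the card dissipation is an assumed figure for a single ECL CPU-plus-cache
board, not an R6000 specification.

#include <stdio.h>

int main(void)
{
    double card_watts = 150.0;                /* assumed ECL card dissipation */
    double btu_per_hr = card_watts * 3.412;   /* 1 watt = 3.412 BTU/hr        */

    printf("%.0f W of ECL is about %.0f BTU/hr of extra cooling load,\n",
           card_watts, btu_per_hr);
    printf("before counting the bigger power supply needed to feed it.\n");
    return 0;
}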

-- 
+-----------------------------------------------------------+
| Michael Lodman               Mike.Lodman@SanDiego.NCR.COM |
| NCR Corporation  -  Distributed Systems Lab  -  San Diego |
| 9900 Old Grove Rd.  San Diego, CA.  92131  (619) 693-5353 |
+-----------------------------------------------------------+

roy@phri.nyu.edu (Roy Smith) (01/13/90)

In <220@tw-rnd.SanDiego.NCR.COM> jml@tw-rnd.SanDiego.NCR.COM (Michael Lodman) writes:
> And how are you going to cool the coolant? The problem is that you will
> need really heavy duty air conditioning into the room, or to radiate the
> heat outside the room some other way. ECL equipment can heat up a room FAST!

	Since most (handwave) users of supercomputers access them by network
connections, it doesn't much matter where they are located physically.  Which
is more cost efficient, to build a cooling system for a big ECL monster
located in, say, St. Louis, or to run a T3 link to the Yukon, where getting
rid of heat is not usually a problem?  Just idle curiosity.

--
Roy Smith, Public Health Research Institute
455 First Avenue, New York, NY 10016
roy@alanine.phri.nyu.edu -OR- {att,philabs,cmcl2,rutgers,hombre}!phri!roy
"My karma ran over my dogma"