[comp.arch] RISC & CISC

dmocsny@minerva.che.uc.edu (Daniel Mocsny) (05/05/91)

>In article <8283@uceng.UC.EDU> dmocsny@minerva.che.uc.edu (Daniel Mocsny) writes:
>>Do comp.arch pundits see RISC chips widening their gap over the
>>CISC chips? 

In article <3047@spim.mips.COM> mash@mips.com (John Mashey) writes:
[not an answer to my question...darn! I must have blabbed too much.
:-| I'm still curious, BTW...Are CISC chips going to stay twice as
slow as RISC chips, or are they going to get worse? Intel, comp.arch's
favorite Great Satan :-), has claimed it will deliver a 2000 MIPS
chip in the year 2000 AD (how many design wins do they have on it 
now?), and it will still be binary-compatible with the 4004 or ENIAC or
whatever :-) So will RISC chips be clipping along at "only" 5000 MIPS 
then, or will they have opened up a couple orders of magnitude lead?
And will they be able to run anything coded more than 6 months 
earlier?]

>All of this sounds like a plausible argument ... 6 years ago.
>So, explain why  almost every major computer company (and now including
>	most of the larger PC companies) either is already shipping
>	a RISC-based product.....
>	Are they all fools?

No. Lots of them are making money, right? However, so are the PC
companies with their DOS boxes. I suspect the PC vendors will go right
on making money with their embarrassingly inferior hardware, because
people with a high-school education can get work done with it.

>Note that "the i386 is enough" is looking through the rear-view mirror;
>you can do some terrific things if you can get 50-100 mips cheap;
>(mostly to make computers a lot easier to use).

Interestingly, the hardest computers to use are also the ones with the
most MIPS (er, mips?). (If you don't believe me, try this experiment:
hire 3 secretaries, and give each of them a shipping carton with a 
computer in it. The first gets the slowest Mac made (whatever that is),
the second gets a 486 PC and a shrink-wrapped copy of Windows 3.0, and
the third gets the fastest existing RISC workstation. See who gets your
memo typed by lunch, without any help.) My hunch is this will always 
be true, because computers don't really get tamed until they are 
obsolete. That's no reason to shy away from speed...because the only 
way to get fast, obsolete, friendly computers is to build new, faster, 
unfriendly computers, which can eventually decay into something usable.

Let me hereby state that I don't believe any finite computing power
can ever be "enough". (At a minimum, as long as I still need to use
paper, and indeed as long as I ever have to wait for information, I 
can't possibly say I have "enough" computer power.) However, if vendors make computers 
sufficiently difficult to use (and the most advanced computers *have*
to be), some finite computing power may be the maximum a given user 
can effectively manage. 

Today, the ratio of (necessary brain cells):(effectively usable MIPS)
is quite high. This has nothing to do with RISC vs. CISC, however,
except to the extent that pervasive standards reduce the number of
brain cells required to use a product.

CPU-use figures probably won't rise much until hardware stops speeding
up, or somebody makes a hell of a breakthrough in software technology,
or we invent a drug that makes people more intelligent, or we stop
reading news and spend more time coding, or computers learn to
program themselves, or...

>Aggressive software developers out there understand this and are working
>in that direction, because it's going to happen.

I think it would happen a lot sooner if the software developers
didn't *have* to be so aggressive. Software developers can only absorb
a shelf-load of manuals so fast, which limits the level of 
target-platform fragmentation they can cope with. 

(The strange part of all this is that the application programmer may 
never even see the hardware. If vendors would get serious about 
delivering compatible languages, libraries, and OS calls, then the chip 
with the highest bang per buck could be the King of the Hill. However, 
most vendors don't want to be quite that "open". If I'm using a box from
vendor A, then a box from vendor B is "open" if I can program it from 
vendor A's box, without having to own it, touch it, see it, think about 
it, or read its manuals, ever. (Note: this doesn't have to rule out a
re-compile, but I don't want to have to buy and learn another box to do 
that.) The only boxes that are that open all run the same CPU family.)

Look at an outfit like Hunter Systems, which has developed technology
that lets it make "true binary ports" of DOS software for all the
major RISC platforms. However, Hunter's method of marketing this
technology seems too indirect, and that's slowing things down.

Let's imagine the major RISC vendors had a method that let *users* run DOS 
binaries (and all the extended/protected DOS hacks and kludges) out of 
the wrapper at full RISC speed (not with the factor-of-whatever slowdown 
of SoftPC), without any fuss or bother. Then the RISC vendors could 
be shipping 20 million units per year...if they could bring themselves
to sell RAM and disks and so on at market prices (of course, they
could just sell the customer a case with a logo on it, and send the
customer to Andataco to have it filled :-). And in a few years, 
all the Intel CISC chips would seem like merely a bad dream.



--
Dan Mocsny				
Internet: dmocsny@minerva.che.uc.edu