[comp.arch] RISC a short answer??

rwhite@nusdhub.UUCP (Robert C. White Jr.) (04/28/88)

Hi all,

	Can someone give me [short answer style] a description
of what "RISC" means.  I keep seeing all sorts of stuff about
this "processor type" in the media, but other than a translation
of "RISC == Reduced Instruction Set Chip" [or something equaly
vague] there hasen't been one decient definition of what the
term actually means.

	This may be a little too simple a question, but all I really
do is "networks, and system administration, sort of..."

What _IS_ a reduced instruction, and who would want fewer instructions
than they are paying for?	;-)
			       -----
				 2


<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
<<  All the STREAM is but a page,<<|>>	Robert C. White Jr.		   <<
<<  and we are merely layers,	 <<|>>	nusdhub!rwhite  nusdhub!usenet	   <<
<<  port owners and port payers, <<|>>>>>>>>"The Avitar of Chaos"<<<<<<<<<<<<
<<  each an others audit fence,	 <<|>>	Network tech,  Gamer, Anti-christ, <<
<<  approaching the sum reel.	 <<|>>	Voter, and General bad influence.  <<
<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
##  Disclaimer:  You thought I was serious???......  Really????		   ##
##  Interogative:  So... what _is_ your point?			    ;-)	   ##
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

csg@pyramid.pyramid.com (Carl S. Gutekunst) (04/29/88)

In article <1036@nusdhub.UUCP> rwhite@nusdhub.UUCP (Robert C. White Jr.) writes:
>Can someone give me [short answer style] a description of what "RISC" means.

Can we add this to the list of frequently asked questions?

My favorite answer is this:

	RISC (Reduced Instruction Set Computer) is a design philosophy that
	trades off decreased complexity in hardware for increased complexity
	in software. The instruction set is specifically chosen to provide a
	set of primitives that are most usable by compilers. Then these few
	instructions are made to run as fast as possible. The compilers are
	given the responsibility of building the primitives into the higher-
	level constructs used by the language.

By implication, the compilers become as much a part of the design as the CPU
itself.

This still leaves a lot of room for different implementations. For example,
the function call mechanisms on many CISC machines are quite complex, with
automatic register saves, stack manipulation, frame generation, and so on.
Pyramid went with a sliding register window: parameters go in registers; make
the call, then one procedure's temporary variables become the next's call
arguments. (This construct has come to be part-and-parcel with RISC in the
popular press, perhaps because it is so obviously *different*.) MIPS went even
simpler, with just a call and return, and leaves the compiler and loader to
determine the optimal parameter passage mechanism. But -- the 68000 also
implements a simple call and return, though no one would call the 68000 a RISC
design. So it becomes difficult to generalize. 
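Carl's sliding-window description can be put into a toy model (a sketch only; the window size, overlap, and register layout below are invented for illustration, not Pyramid's actual scheme):

```python
# A toy model of overlapping register windows: each call slides the
# window so the caller's "out" registers become the callee's "in"
# registers, passing arguments with no memory traffic.

class RegisterWindows:
    def __init__(self, n_regs=64, window=16, overlap=8):
        self.regs = [0] * n_regs   # the physical register file
        self.base = 0              # start of the current window
        self.window = window
        self.overlap = overlap

    def out_reg(self, i):
        # Caller writes call arguments here (top of its window).
        return self.base + self.window - self.overlap + i

    def in_reg(self, i):
        # Callee reads its arguments here (bottom of its window).
        return self.base + i

    def call(self):
        # Slide the window up; the overlap region is shared.
        self.base += self.window - self.overlap

    def ret(self):
        self.base -= self.window - self.overlap

rw = RegisterWindows()
rw.regs[rw.out_reg(0)] = 42   # caller puts an argument in out0
rw.call()                     # make the call: window slides
print(rw.regs[rw.in_reg(0)])  # callee sees the same value as in0
```

The point of the overlap is that no stores or loads are needed for parameter passing; the cost, as real designs found, is spilling windows to memory when calls nest deeper than the register file.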

<csg>

ron@mucmot.UUCP (Ron Voss) (05/02/88)

From article <21149@pyramid.pyramid.com>, by csg@pyramid.pyramid.com (Carl S. Gutekunst):
> In article <1036@nusdhub.UUCP> rwhite@nusdhub.UUCP (Robert C. White Jr.) writes:
>>Can someone give me [short answer style] a description of what "RISC" means.
> 
> Can we add this to the list of frequently asked questions?
> 
> My favorite answer is this:
> 
> 	RISC (Reduced Instruction Set Computer) is a design philosophy that
> 	trades off decreased complexity in hardware for increased complexity
> 	in software. The instruction set is specifically chosen to provide a
> 	set of primitives that are most usable by compilers. Then these few
> 	instructions are made to run as fast as possible. The compilers are
> 	given the responsibility of building the primitives into the higher-
> 	level constructs used by the language.
> 
> By implication, the compilers become as much a part of the design as the CPU
> itself.
> 
> This still leaves a lot of room for different implementations. For example,
> [good discussion deleted for brevity]
> design. So it becomes difficult to generalize. 
> 
> <csg>
It's not that I disagree with Carl, it's just that I think he's left out
what for me is the most central issue:  Dedicating more of the finite
chip real estate to circuits whose chief function is to provide increased
processor speed.  The most significant consequence is that there is no
room left to implement more complex instructions.  The theory (proven?)
is that there is a net increase in performance, since processors spend
most of their time at relatively simple operations, like load, store,
add.  More complex (now missing) instructions, like "toggle bit n of byte m"
may end up running slower in emulation, but the increased speed of simple
instructions more than makes up for it. <rnv>
<my opinions are my own>

root@mfci.UUCP (SuperUser) (05/03/88)

In article <307@mucmot.UUCP> ron@mucmot.UUCP (Ron Voss) writes:
=From article <21149@pyramid.pyramid.com=, by csg@pyramid.pyramid.com (Carl S. Gutekunst):
== In article <1036@nusdhub.UUCP= rwhite@nusdhub.UUCP (Robert C. White Jr.) writes:
===Can someone give me [short answer style] a description of what "RISC" means.
== 
== My favorite answer is this:
== 
== 	RISC (Reduced Instruction Set Computer) is a design philosophy that
== 	trades off decreased complexity in hardware for increased complexity
== 	in software. The instruction set is specifically chosen to provide a
== 	set of primitives that are most usable by compilers. Then these few
== 	instructions are made to run as fast as possible. The compilers are
== 	given the responsibility of building the primitives into the higher-
== 	level constructs used by the language.
== 
== By implication, the compilers become as much a part of the design as the CPU
== itself.
== 
== <csg=
=It's not that I disagree with Carl, it's just that I think he's left out
=what for me is the most central issue:  Dedicating more of the finite
=chip real estate to circuits whose chief function is to provide increased
=processor speed.  The most significant consequence is that there is no
=room left to implement more complex instructions.  The theory (proven?)
=is that there is a net increase in performance, since processors spend
=most of their time at relatively simple operations, like load, store,
=add.  More complex (now missing) instructions, like "toggle bit n of byte m"
=may end up running slower in emulation, but the increased speed of simple
=instructions more than makes up for it. <rnv=

But let's not get too provincial.  What lessons there are to learn
from RISC apply to more than just one-chip processors, and it isn't
obvious how to extend your "finite chip real estate" metric to
board-level machine design.  And watch out for something else.  A
processor running a general UNIX/university job mix is going to look
a lot different than a scientific computer, which spends a lot more
of its time doing fairly complex operations (floating point ops).
The key is to 1) keep what you absolutely need; 2) include things you
can prove will help performance; 3) move whatever you can to
compile-time.  And no matter what you design, by all means call it a
RISC (mega-smiley here).

Bob Colwell            mfci!colwell@uunet.uucp
Multiflow Computer
175 N. Main St.
Branford, CT 06405     203-488-6090

henry@utzoo.uucp (Henry Spencer) (05/04/88)

> 	Can someone give me [short answer style] a description
> of what "RISC" means...

Many people confuse things that are often characteristics of current RISC
designs with the fundamental underlying idea, which is:

	Keep the instruction set simple.

Most instructions actually executed are simple even if the instruction set
is complex.  Leaving out the complexity makes it easier to make the simple
stuff fast.  Since the simple stuff accounts for most of the execution
anyway, the result is faster machines.

This is *not* the same as "throw the complexity into the software", since
a simple instruction set frequently makes compilers etc. *simpler*, other
things being equal.
-- 
NASA is to spaceflight as            |  Henry Spencer @ U of Toronto Zoology
the Post Office is to mail.          | {ihnp4,decvax,uunet!mnetor}!utzoo!henry

root@mfci.UUCP (SuperUser) (05/04/88)

In article <1988May3.224604.2252@utzoo.uucp> henry@utzoo.uucp (Henry Spencer) writes:
>> 	Can someone give me [short answer style] a description
>> of what "RISC" means...
>
>Many people confuse things that are often characteristics of current RISC
>designs with the fundamental underlying idea, which is:
>
>	Keep the instruction set simple.
>
>Most instructions actually executed are simple even if the instruction set
>is complex.  Leaving out the complexity makes it easier to make the simple
>stuff fast.  Since the simple stuff accounts for most of the execution
>anyway, the result is faster machines.
>
>This is *not* the same as "throw the complexity into the software", since
>a simple instruction set frequently makes compilers etc. *simpler*, other
>things being equal.
>-- 
>NASA is to spaceflight as            |  Henry Spencer @ U of Toronto Zoology
>the Post Office is to mail.          | {ihnp4,decvax,uunet!mnetor}!utzoo!henry

I consider this a good explanation of what RISC was, circa 1981-1983.
But if that's ALL you consider RISC to be, then I think you're
missing some very important things.

A simple instruction set makes a simple compiler *possible*, but so
what?  If you implement a simple compiler and I implement one that
utilizes my highly-pipelined RISC machine better, my object code will
run considerably faster than yours.  (And if I didn't pipeline it,
I've already lost the battle.)  Only if we're not interested in
performance does your case hold, and while there are large areas of
the computer design space for which that premise is valid, like
high-reliability systems, it's performance that drives RISC designs.
And compilers that must manage at compile-time what hardware
interlocks used to do at run-time are simple no longer.  It's just a
price you pay for performance.

Your point about making simple stuff fast is legit.  The way I look
at it is this:  if your basic technology allows a register-to-register
operation in X ns, and adding some fancy instruction would require
you to lengthen that period to X+e, then you had better be really
sure that your new instruction is worth it, because you'll be
penalizing something like 70% of the executed instructions for the
ability to occasionally invoke the slow one.  In a lecture he gave at
CMU in 1984, John Cocke was willing to extend this principle all the
way down to the one extra gate you need to allow a microcoded set of
complex operators to co-reside in a CPU with the hardwired set that
implemented loads, adds, and so on.  Personally, I wouldn't go that
far, because I think it's often the case that something else, like
on/off-chip delays, rears its ugly head before that extra gate
forces a longer critical path, but that doesn't excuse
designers from considering this problem.
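Colwell's X-versus-X+e argument reduces to a back-of-the-envelope calculation. Here is a sketch; every number in it (cycle time, stretch, frequency, emulation cost) is an illustrative assumption, not a figure from the thread:

```python
# Is a fancy instruction worth stretching the cycle from X to X+e ns?
# All numbers below are made-up illustrations.

def run_time_ns(n_ops, frac_fancy, cycle_ns, fancy_cost):
    """Total time for a mix of operations, where each 'fancy' operation
    costs `fancy_cost` instructions (1 if implemented in hardware)."""
    n_fancy = n_ops * frac_fancy
    n_simple = n_ops - n_fancy
    return (n_simple + n_fancy * fancy_cost) * cycle_ns

X, e = 10.0, 1.0            # base cycle; stretch the fancy op would add
N, frac = 1_000_000, 0.02   # suppose 2% of operations are "fancy"

emulate  = run_time_ns(N, frac, X, fancy_cost=4)      # 4-instruction sequence
hardware = run_time_ns(N, frac, X + e, fancy_cost=1)  # one slower-cycle op

# At this frequency, penalizing every instruction by e loses:
print(emulate < hardware)
```

The break-even point shifts with the workload, which is exactly why a scientific mix and a UNIX mix can justify different instruction sets.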

Bob Colwell            mfci!colwell@uunet.uucp
Multiflow Computer
175 N. Main St.
Branford, CT 06405     203-488-6090

cik@l.cc.purdue.edu (Herman Rubin) (05/05/88)

The philosophy of RISC seems to be, to quote one of the above articles, that
one should not worry about the "slow" instructions to speed up 70% of the 
instructions.  If the 30% slow instructions now run at 1/3 the speed of the
fast ones, this would give an overall time of 0.7(1) + 0.3(3) = 1.6/instruction.
Now if the time of the slow instructions was doubled, and the fast instructions
ran infinitely fast, the overall time would be 0.3(6) = 1.8/instruction.
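As a quick check of the weighted-average arithmetic (a minimal sketch; times are in units of one fast instruction):

```python
def avg_time(frac_slow, slow_time, fast_time=1.0):
    """Average time per instruction for a two-class instruction mix."""
    return (1.0 - frac_slow) * fast_time + frac_slow * slow_time

# 30% slow instructions at 1/3 the speed (3x the time) of the fast ones:
mixed = avg_time(0.30, 3.0)        # 0.7*1 + 0.3*3 = 1.6
# Slow time doubled, fast instructions infinitely fast:
limit = avg_time(0.30, 6.0, 0.0)   # 0.3*6 = 1.8
print(round(mixed, 2), round(limit, 2))
```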

It seems that, instead, there is great merit in a VCISC design, in which
useful instructions are included to decrease the number of instructions needed
in a program.  This is especially important if the current instructions require
branches, which are difficult to speed up in many cases.  In many cases, the
instruction complexity can be handled in a tag field, which does not have to
be decoded in the initial assignment to a computing unit.  An example of this
is how to handle the quotient and remainder for a division, depending on the
signs of the arguments.  This may even be important if both arguments are
positive; one may want the remainder non-positive, or of minimum magnitude.
Here software is expensive compared to hardware; if the hardware default is
not what is wanted, it may be necessary to use several instructions, including
conditional branches, to achieve the result.  If 10% of the instructions are
of such a type, I doubt that increasing the instruction time of the "fast"
instructions by 30% will compensate.  And why should there not be different
classes of instructions?
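Rubin's quotient/remainder example can be made concrete. Hardware divide instructions typically truncate toward zero, so if you want a remainder that is always nonnegative you pay exactly the compare-and-branch fixup he describes. A sketch (Python's own // floors, so the truncating hardware behavior is emulated explicitly here):

```python
def trunc_divmod(a, b):
    """Quotient and remainder as a typical truncating hardware divide
    returns them: quotient rounded toward zero, remainder taking the
    sign of the dividend."""
    q = abs(a) // abs(b)
    if (a < 0) != (b < 0):
        q = -q
    return q, a - q * b

def nonneg_divmod(a, b):
    """Fixup so the remainder always falls in [0, |b|): one compare,
    a conditional branch, and two adjustments -- the extra
    instructions Rubin is pointing at."""
    q, r = trunc_divmod(a, b)
    if r < 0:                       # the conditional branch
        q -= 1 if b > 0 else -1
        r += abs(b)
    return q, r

print(trunc_divmod(-7, 2))    # (-3, -1): the hardware default
print(nonneg_divmod(-7, 2))   # (-4, 1): remainder forced nonnegative
```

If the hardware offered the wanted rounding mode directly (selected, say, by a tag field), the branch disappears.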
-- 
Herman Rubin, Dept. of Statistics, Purdue Univ., West Lafayette IN47907
Phone: (317)494-6054
hrubin@l.cc.purdue.edu (ARPA or UUCP) or hrubin@purccvm.bitnet

ron@mucmot.UUCP (Ron Voss) (05/05/88)

From article <381@m3.mfci.UUCP>, by root@mfci.UUCP (SuperUser):
> In article <307@mucmot.UUCP> ron@mucmot.UUCP (Ron Voss) writes:
> =From article <21149@pyramid.pyramid.com=, by csg@pyramid.pyramid.com
> =(Carl S. Gutekunst):
> == In article <1036@nusdhub.UUCP= rwhite@nusdhub.UUCP (Robert C. White Jr.)
> == writes:
> ===Can someone give me [short answer style] a description of what "RISC"
> ===means.
> == 	[see earlier article for Carl's response]
> =[see earlier article for my response]
> 
> But let's not get too provincial.  What lessons there are to learn
> from RISC apply to more than just one-chip processors, and it isn't
> obvious how to extend your "finite chip real estate" metric to
> board-level machine design.  And watch out for something else.  A
> processor running a general UNIX/university job mix is going to look
> a lot different than a scientific computer, which spends a lot more
> of its time doing fairly complex operations (floating point ops).
> The key is to 1) keep what you absolutely need; 2) include things you
> can prove will help performance; 3) move whatever you can to
> compile-time.  And no matter what you design, by all means call it a
> RISC (mega-smiley here).
> 
> Bob Colwell            mfci!colwell@uunet.uucp
> Multiflow Computer
> 175 N. Main St.
> Branford, CT 06405     203-488-6090

Well said.  I offer an even shorter answer to Robert's (not Bob's) original
question:
    1.  Instructions all the same length.
    2.  Instructions execute in one cycle.
    3.  Instructions are not microcoded.
                           (adapted from Motorola's 88000 press release)
As to Bob's points, Motorola has tried to solve the board-level machine
design problem by providing a compatible chip set, which includes a
physical cache with bus snooping.  The design is general-purpose (a la UNIX)
as opposed to scientific, but floating add and multiply have been included.
As to the above-mentioned keys:
    1)  Integer, bit, and floating +* have been kept.
    2)  11-way pipelined instruction execution, cache as above,
        Harvard parallel architecture.
    3)  Compilers continue to get smarter.
Making a profit means designing for the best price/performance ratio,
which indirectly translates to number of units sold, which indirectly
translates to satisfied customers (is my order reversed?).
The marketplace will determine if Motorola has done a good job.
Caveat my affiliation.

------------------------------------------------------------
Ron Voss                Motorola Microsystems Europe, Munich
mcvax!unido!mucmot!ron                         CIS 73647,752
my opinions are just that, and not necessarily my employer's
------------------------------------------------------------


daryl@hpcllcm.HP.COM (Daryl Odnert) (05/05/88)

Somebody gave the following definition at the ASPLOS II conference
last October:

      RISC: any processor designed since 1982.    :-)


Daryl Odnert
Hewlett-Packard Computer Language Lab
hplabs!hpcllcm!daryl

henry@utzoo.uucp (Henry Spencer) (05/06/88)

> I consider this a good explanation of what RISC was, circa 1981-1983.
> But if that's ALL you consider RISC to be, then I think you're
> missing some very important things.

The trouble here is that we're increasingly in a situation where "RISC" is
considered a synonym for "good".  The term is rapidly losing any more
specific meaning because of persistent misuse.  I was attempting to strike
a blow for linguistic purity.  (Now and then I feel like supporting a lost
cause...)

Yes, it is true that optimizing compilers are a crucial part of many "RISC"
projects today.  It is also true that RISC architectures tend to be good
for optimization, since they give the compiler more control (and distract
it less with complex side issues).  This is a useful side effect of RISC
designs, which makes them more popular.  It is not a fundamental part of
the RISC concept.

> ...it's performance that drives RISC designs.

You mean "RISC" designs.

> And compilers that must manage at compile-time what hardware
> interlocks used to do at run-time are simple no longer...

Quite true.  But this is not what RISC (Reduced Instruction Set Computer,
remember) is all about.  Managing interlocks at compile time is a different
concept, albeit one that fits in well with RISC.  Much of the confusion
about the meaning of the term arises from just such goes-nicely-with-RISC
ideas.

Reduced Instruction Set means fewer and less complex instructions.  That
is all it means.  Many new designs that incorporate this concept also use
other somewhat-related concepts like software pipeline interlocks (delayed
branches being a simple form of this), overlapping register windows, heavy
reliance on optimizing compilers to maximize hardware performance, etc.
But proper usage (as opposed to what the marketing people do) does not
apply the name of one of these concepts indiscriminately to all the others.
-- 
NASA is to spaceflight as            |  Henry Spencer @ U of Toronto Zoology
the Post Office is to mail.          | {ihnp4,decvax,uunet!mnetor}!utzoo!henry

lamaster@ames.arpa (Hugh LaMaster) (05/06/88)

In article <770@l.cc.purdue.edu> cik@l.cc.purdue.edu (Herman Rubin) writes:
>
>The philosophy of RISC seems to be, to quote one of the above articles, that
>one should not worry about the "slow" instructions to speed up 70% of the 

"RISC" has become a generic term for any machine that is optimized for having
the fastest possible processor given all the other constraints.  

It is difficult to remember back to those early years, say ten years ago,
when the VAX processor family was in its adolescence, but a major selling
point of VAXes was that the code was very compact.  That was very important
then because:
1) Processor real estate was expensive
2) Memory was expensive
3) Memory was slow

The CDC 7600 was a ~20MIPS (780=1), 3MFLOPS machine with a two level memory
of 64K (60 bit words) and 512K (likewise): you could call it ~4 MBytes for
the purposes of comparison.  That was a LOT of memory for its day.
1 MIPS processors of the early 70's had about 1 MB of memory. 

Since then, design constraints have changed and code compactness is not a
major selling point.  Possibly, in the future, it may again become so,
increasing the advantages of CISC type instruction sets (e.g. VAX).

As for the specific architectural features of some of the early "RISC"
processors - I still think that register windows are a poor design 
choice in most cases.  It is surprising to me that they are so popular.
I wish that the chip real estate devoted to register windows would
be devoted to a smaller number (32, say) of all-purpose 64-bit registers
with better floating point support.

pardo@june.cs.washington.edu (David Keppel) (05/06/88)

In article <770@l.cc.purdue.edu> cik@l.cc.purdue.edu (Herman Rubin) writes:
>
>The philosophy of RISC seems to be, to quote one of the above articles, that
>one should not worry about the "slow" instructions to speed up 70% of the 
>instructions.  If the 30% slow instructions now run at 1/3 the speed of the
>fast ones, this would give an overall time of 0.7(1) + 0.3(3) = 1.6/instruction.
>Now if the time of the slow instructions was doubled, and the fast instructions
>ran infinitely fast, the overall time would be 0.3(6) = 1.8/instruction.

Let me try to rephrase that a little:
    + If an instruction is executed very often, it ought to be fast.
    + If an instruction is hardly ever executed, it can be slow.

The parallel with virtual memory is apt.  If memory accesses are
almost always fast, then it doesn't matter much if they are really
slow once in a great while.
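That virtual-memory parallel has a standard quantitative form: average access time = hit time + miss rate x miss penalty. A sketch with made-up figures (none of the numbers come from the thread):

```python
def avg_access_ns(hit_ns, miss_rate, miss_penalty_ns):
    """Average memory access time: a fast common case plus a rare,
    very slow one."""
    return hit_ns + miss_rate * miss_penalty_ns

# Illustrative: 100 ns on a hit, one fault per million accesses,
# each fault costing 20 ms (2e7 ns):
print(avg_access_ns(100.0, 1e-6, 2e7))   # about 120 ns on average
```

The rare slow case barely moves the average, which is the same frequency-weighting argument applied to instructions.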

Therefore the RISC, or more appropriately, "streamlined" technique
is to weight all instructions by frequency of use.  Drop the ones
that aren't used often and concentrate on making the rest of them
fast.

One other decision that is often made is to expose underlying
grungy implementation details so that they can be exploited
by a compiler to make things run faster.

	;-D on  ( Of course I *am* lying to you )  Pardo

lgy@pupthy2.PRINCETON.EDU (Larry Yaffe) (05/06/88)

In article <8333@ames.arpa> lamaster@ames.arc.nasa.gov.UUCP (Hugh LaMaster) writes:
>As for the specific architectural features of some of the early "RISC"
>processors - I still think that register windows are a poor design 
>choice in most cases.  It is surprising to me that they are so popular.
>I wish that the chip real estate devoted to register windows would
>be devoted to a smaller number (32, say) of all-purpose 64-bit registers
>with better floating point support.

	Hear, hear!

------------------------------------------------------------------------
Laurence G. Yaffe			lgy@pupthy.princeton.edu
Department of Physics			lgy@pucc.bitnet
Princeton University			...!princeton!pupthy!lgy
PO Box 708, Princeton NJ 08544		609-452-4371 or -4400

daveb@geac.UUCP (David Collier-Brown) (05/06/88)

In article <1988May5.171444.849@utzoo.uucp> henry@utzoo.uucp (Henry Spencer) writes:
>The trouble here is that we're increasingly in a situation where "RISC" is
>considered a synonym for "good".  The term is rapidly losing any more
>specific meaning because of persistent misuse.  
...
>But proper usage (as opposed to what the marketing people do) does not
>apply the name of one of these concepts indiscriminately to all the others.

  Actually, the RISCness of something is its "significant
difference" in marketing parlance, and is a sort of handle that
allows one to infer all the other things when you hear it mentioned.

  The sales-and-advertising crowd love to overload terms (in the Ada
sense), especially if the term has anything to do with the
significant difference.  This drives real humans, technoids, and
marketdroids crazy.  Not to mention us pedants.
-- 
 David Collier-Brown.                 {mnetor yunexus utgpu}!geac!daveb
 Geac Computers International Inc.,   |  Computer Science loses its
 350 Steelcase Road,Markham, Ontario, |  memory (if not its mind) 
 CANADA, L3R 1B3 (416) 475-0525 x3279 |  every 6 months.

jlg@a.UUCP (Jim Giles) (05/07/88)

In article <310@mucmot.UUCP>, ron@mucmot.UUCP (Ron Voss) writes:

> Making a profit means designing for the best price/performance ratio,
> which indirectly translates to number of units sold, which indirectly
> translates to satisfied customers (is my order reversed?).
> The marketplace will determine if Motorola has done a good job.
> Caveat my affiliation.
> 
But, wait a minute!!  DEC makes a profit.  Are you saying that VAXen
have a good price/performance ratio?  

J. Giles
Los Alamos

bcase@Apple.COM (Brian Case) (05/07/88)

In article <1988May5.171444.849@utzoo.uucp> henry@utzoo.uucp (Henry Spencer) writes:
>Yes, it is true that optimizing compilers are a crucial part of many "RISC"
>projects today.  It is also true that RISC architectures tend to be good
>for optimization, since they give the compiler more control (and distract
>it less with complex side issues).  This is a useful side effect of RISC
>designs, which makes them more popular.  It is not a fundamental part of
>the RISC concept.

I disagree completely.  I think the whole point of RISC is to properly
match hardware with the capabilities of software (particularly the software
that generates the code that determines the state transitions, i.e. the
compiler).  Before RISC was called RISC, John Cocke was inventing it.  His
motivation was the needs of optimizing compilers.

>Reduced Instruction Set means fewer and less complex instructions.  That
>is all it means.

Wow, I didn't think I'd ever see this comment.  I believe that you'd have
a hard time finding anybody who has designed or is designing a RISC machine
(and I realize it is hard to define that group for my purposes here) who
would define RISC the way you did.

I plan on posting a summary of RISC-related literature ("The Story of RISC"
:-), starting with the earliest papers by Patterson and Ditzel (incl. "The
Case for the Reduced Instruction Set Computer."  Has anyone who is arguing
these issues actually read all the relevant literature???), but let me now
give my personal, back-of-the-business-card summary of RISC:

1)  Uniform Pipeline (all, or nearly all, instructions flow through the
pipeline in the same way)
2)  Good match to optimizing compiler technology (the architecture makes it
easy for the compiler in several ways:  code generation, code motion,
register allocation, etc., and *calculating the cost of code sequences* in
both time and space)
3)  Allows technology to be exploited (cycle time dominated by some
"irreducible" component of computation like an ALU or Cache lookup; yes, I
know you can pipeline ALUs and cache lookups, but do you really want to?)

Notice that there will be implementations of CISC machines that have these
characteristics to pretty high degrees!  The junky instructions/addressing
modes will be done "off line" while the things that:  fit in a uniform pipe,
are needed by compilers, and allow a fast cycle time will be done in
hardware.  There is a big cost, but it can and will be done.

RISC is a way of doing things, not a thing.

baum@apple.UUCP (Allen J. Baum) (05/07/88)

--------
[]
>In article <9341@apple.Apple.Com> bcase@apple.UUCP (Brian Case) writes:
>
>I disagree completely.  I think the whole point of RISC is to properly
>match hardware with the capabilities of software .....
>I plan on posting a summary of RISC-related literature , but let me now
>give my personal, back-of- the-business-card summary of RISC:
>.....
>2)  Good match to optimizing compiler technology (the architecture makes it
>easy for the compiler in several ways:  code generation, code motion,
>register allocation, etc., and *calculating the cost of code sequences* in
>both time and space)

Note that 20 years ago, compiler technology was not at a state where current
RISC techniques could be used.  Five years ago, it wasn't at a state where
VLIW hardware could be used.  Hmm, can you determine a historical trend from
these facts?

--
{decwrl,hplabs,ihnp4}!nsc!apple!baum		(408)973-3385

livesey@sun.uucp (Jon Livesey) (05/07/88)

In article <310@mucmot.UUCP>, ron@mucmot.UUCP (Ron Voss) writes:
> From article <381@m3.mfci.UUCP>, by root@mfci.UUCP (SuperUser):
> > In article <307@mucmot.UUCP> ron@mucmot.UUCP (Ron Voss) writes:
> > =From article <21149@pyramid.pyramid.com=, by csg@pyramid.pyramid.com
> > =(Carl S. Gutekunst):
> > == In article <1036@nusdhub.UUCP= rwhite@nusdhub.UUCP (Robert C. White Jr.)
> > == writes:
> > ===Can someone give me [short answer style] a description of what "RISC"
> > ===means.
> 
> ...........  I offer an even shorter answer to Robert's (not Bob's) original
> question:
>     1.  Instructions all the same length.
>     2.  Instructions execute in one cycle.
>     3.  Instructions are not microcoded.
>                            (adapted from Motorola's 88000 press release)

> ------------------------------------------------------------
> Ron Voss                Motorola Microsystems Europe, Munich
> mcvax!unido!mucmot!ron                         CIS 73647,752
> my opinions are just that, and not necessarily my employer's
> ------------------------------------------------------------

	Gee, a Motorola person offers us a definition of RISC processors
in general, by quoting from a Motorola press release.   O tempora,
o mores.  :-)

	Maybe you would tell me what's wrong with Tabak's [1] 1987 definition
which I quoted a week ago?

      1. Few instructions (< 100 is best)
      2. Few addressing modes (1 or 2)  
      3. Few instruction formats.      
      4. Single cycle execution.      
      5. Memory access by load/store instruction only.      
      6. Large register set.        
      7. Hardwired control unit.
      8. HLL support reflected in architecture 

   Or how about Stallings' definition in [2]

	"The key elements shared by all of these designs are these:

	- A limited and simple instruction set.
	- A large number o general purpose registers.
	- An emphasis on optimizing the instruction pipeline.

   Note, in particular, that Stallings' definition is as short as yours, but
manages to characterize more of the system architecture.

   Maybe the question I am asking is this:  if all this work has been done
before, by independent people, do we really need each manufacturer to torque
the definition around to suit their own product?   Fine for a Press Release,
but not for general purposes.

jon.
[I don't speak for Sun]

[1] Tabak D. "RISC Architecture", Research Studies Press, 1987.
[2] Stallings W. "Reduced Instruction Set Computers." IEEE Computer Society 1988.

lgy@pupthy2.PRINCETON.EDU (Larry Yaffe) (05/08/88)

In article <9341@apple.Apple.Com> bcase@apple.UUCP (Brian Case) writes:
>In article <1988May5.171444.849@utzoo.uucp> henry@utzoo.uucp (Henry Spencer) writes:
>>Reduced Instruction Set means fewer and less complex instructions.  That
>>is all it means.
>
>Wow, I didn't think I'd ever see this comment.  [...]
>let me now give my personal, back-of-the-business-card summary of RISC:
>1)  Uniform Pipeline (all, or nearly all, instructions flow through the
>pipeline in the same way)
>2)  Good match to optimizing compiler technology (the architecture makes it
>easy for the compiler in several ways:  code generation, code motion,
>register allocation, etc., and *calculating the cost of code sequences* in
>both time and space)
>3)  Allows technology to be exploited (cycle time dominated by some
>"irreducible" component of computation like an ALU or Cache lookup; yes, I
>know you can pipeline ALUs and cache lookups, but do you really want to?)

    These all seem like reasons for "Why RISC is a good thing",
not at all part of the basic definition of what RISC stands for.
Seems to me that Henry summed up what RISC
means pretty well - fewer instructions.

>RISC is a way of doing things, not a thing.

    This sounds like usurping the name of something specific
(Reduced Instruction Set) for a much more general
philosophy (into which it naturally fits).
I'd call that a misleading choice of terminology.
But then I don't have any simple acronym which
sums up a philosophy of "keep things simple,
devote resources & effort where they produce the
most benefit".  (Except for "common sense" :-))

------------------------------------------------------------------------
Laurence G. Yaffe			lgy@pupthy.princeton.edu
Department of Physics			lgy@pucc.bitnet
Princeton University			...!princeton!pupthy!lgy
PO Box 708, Princeton NJ 08544		609-452-4371 or -4400

henry@utzoo.uucp (Henry Spencer) (05/08/88)

> >... optimizing compilers are a crucial part of many "RISC"
> >projects today...  [RISCs are good for optimization, but this] is not a
> > fundamental part of the RISC concept.
> 
> I disagree completely.  I think the whole point of RISC is to properly
> match hardware with the capabilities of software...  [etc etc etc]

If you are using "RISC" in its increasingly-frequent sense as a marketing
buzzword, sure.  I will agree with you if you change "RISC" to "many RISC-
based projects".  Any realistic project has to use a number of concepts to
meet its goals, and RISC is usually only one of them.  (After all, CISC
designers generally don't consider complicating the instruction set to be
their only design guideline!)

> RISC is a way of doing things, not a thing.

Please look up "reduced" and "instruction set" in a dictionary (a technical
dictionary for the latter, of course).
-- 
NASA is to spaceflight as            |  Henry Spencer @ U of Toronto Zoology
the Post Office is to mail.          | {ihnp4,decvax,uunet!mnetor}!utzoo!henry

root@mfci.UUCP (SuperUser) (05/08/88)

In article <2810@phoenix.Princeton.EDU= lgy@pupthy2.PRINCETON.EDU (Larry Yaffe) writes:
=In article <9341@apple.Apple.Com= bcase@apple.UUCP (Brian Case) writes:
==In article <1988May5.171444.849@utzoo.uucp= henry@utzoo.uucp (Henry Spencer) writes:
===Reduced Instruction Set means fewer and less complex instructions.  That
===is all it means.
==
==Wow, I didn't think I'd ever see this comment.  [...]
==let me now give my personal, back-of-the-business-card summary of RISC:
==1)  Uniform Pipeline (all, or nearly all, instructions flow through the
==pipeline in the same way)
==2)  Good match to optimizing compiler technology (the architecture makes it
==easy for the compiler in several ways:  code generation, code motion,
==register allocation, etc., and *calculating the cost of code sequences* in
==both time and space)
==3)  Allows technology to be exploited (cycle time dominated by some
=="irreducible" component of computation like an ALU or Cache lookup; yes, I
==know you can pipeline ALUs and cache lookups, but do you really want to?)
=
=    These all seem like reasons for "Why RISC is a good thing",
=not at all part of the basic definition of what RISC stands for.
=Seems to me that Henry summed up what RISC
=means pretty well - fewer instructions.
=

Nope, Henry missed the boat.  You both are basically citing the
definition that Patterson originally proffered, taking us to task for
trying to change that.  But if you want to be historically accurate,
then start with Radin's ASPLOS-I paper on the IBM-801, which does not
talk about the number of instructions but about their efficacy and
usefulness.

And if you still insist that RISC stand for 1981-1983's "few instrs"
phase, then we may as well dump the acronym and get another, because
yours is of no interest any more.

==RISC is a way of doing things, not a thing.
=
=    This sounds like usurping the name of something specific
=(Reduced Instruction Set) for a much more general
=philosophy (into which it naturally fits).
=I'd call that a misleading choice of terminology.
=But then I don't have any simple acronym which
=sums up a philosophy of "keep things simple,
=devote resources & effort where they produce the
=most benefit".  (Except for "common sense" :-))
                             ^^^^^^^^^^^^^^
See, we agree on something!  As per my comment above, I think you
have been fooled by the definition that was in vogue for a short
while.  If you want historical accuracy, read Radin and forget your
"few instrs" idea.  If you want useful ideas, consider broadening
your perspectives on RISC.  Thinking that RISC only constitutes
a recipe for designing an instruction set is (and was) a useless
viewpoint. 

So maybe we're all agreeing that "R.I.S.C." is a bad acronym (it also
stands for Rhode Island Seafood Council, by the way), and we need
some other one to avoid misunderstandings.  But I just haven't quite
been able to accept HP's "Streamlined" or "Precision" nomenclature...

=------------------------------------------------------------------------
=Laurence G. Yaffe			lgy@pupthy.princeton.edu

Bob Colwell            mfci!colwell@uunet.uucp
Multiflow Computer
175 N. Main St.
Branford, CT 06405     203-488-6090

mangler@cit-vax.Caltech.Edu (Don Speck) (05/09/88)

Like it or not, we're stuck with the acronym.  RISC used to mean
	Reduced Instruction Set Complexity.
What we really want it to mean is:
	Removal of Inherently Sequential Constructs.

Don Speck   speck@vlsi.caltech.edu  {amdahl,ames!elroy}!cit-vax!speck

root@mfci.UUCP (SuperUser) (05/09/88)

In article <52338@sun.uucp= livesey@sun.uucp (Jon Livesey) writes:
=In article <310@mucmot.UUCP=, ron@mucmot.UUCP (Ron Voss) writes:
== From article <381@m3.mfci.UUCP=, by root@mfci.UUCP (SuperUser):
== = In article <307@mucmot.UUCP= ron@mucmot.UUCP (Ron Voss) writes:
== = =From article <21149@pyramid.pyramid.com=, by csg@pyramid.pyramid.com
== = =(Carl S. Gutekunst):
== = == In article <1036@nusdhub.UUCP= rwhite@nusdhub.UUCP (Robert C. White Jr.)
== = == writes:
== = ===Can someone give me [short answer style] a description of what "RISC"
== = ===means.
=	Maybe you would tell me what's wrong with Tabak's [1] 1987 definition
=which I quoted a week ago?
=
=      1. Few instructions (< 100 is best)
=      2. Few addressing modes (1 or 2)  
=      3. Few instruction formats.      
=      4. Single cycle execution.      
=      5. Memory access by load/store instruction only.      
=      6. Large register set.        
=      7. Hardwired control unit.
=      8. HLL support reflected in architecture 
=
=   Or how about Stallings' definition in [2]
=
=	"The key elements shared by all of these designs are these:
=
=	- A limited and simple instruction set.
=	- A large number of general purpose registers.
=	- An emphasis on optimizing the instruction pipeline.
=
=   Note in particular, that Stallings' definition is as short as yours, but
=manages to characterize more of the system architecture.
=
=jon.
=[1] Tabak D. "RISC Architecture", Research Studies Press, 1987.
=[2] Stallings W. "Reduced Instruction Set Computers." IEEE Computer Society 1988.

In Tabak's list, number 6 is wrong, and number 8 is meaningless.  The
number of registers in a machine has nothing to do with its RISC or
CISC-ness.  I think the reason everybody seems to get this confused
is because the most famous RISC (RISC-I) had lots of registers and
the Berkeley people did not go out of their way to point out that
that style of register organization did not go hand-in-hand with
RISC.  Perhaps they thought that point was obvious; after all,
Stanford's MIPS and the 801 did not have lots of registers.  And if
Tabak is merely offering the observation that many machines designed
since 1981 have more registers than the VAX, then fine, but that
doesn't qualify the number of registers as being a RISC litmus test.

Number 8 is meaningless because "HLL support" was the holy grail that
led all the CISC designs down the microcoded path, and if I can't use
the list's items to help distinguish RISC from CISC, it is useless.

Stallings' definition is too simple.  The register issue is junked as
above.  The "limited and simple" item is fine.  But the third can
just as easily describe the extensive instruction stream
lookahead/cycle-stealing techniques that characterize some CISC
micros.

If you remove the two items I contend are wrong in Tabak's list, it
collapses to the list we published two years earlier than his (in
IEEE Computer Sept. 1985, in 32-bit Microprocessors, H.J. Mitchell,
Collins, 1986, and reprinted in Stallings' RISC tutorial.)

Bob Colwell            mfci!colwell@uunet.uucp
Multiflow Computer
175 N. Main St.
Branford, CT 06405     203-488-6090

alan@pdn.UUCP (Alan Lovejoy) (05/09/88)

In article <2810@phoenix.Princeton.EDU> lgy@pupthy2.PRINCETON.EDU (Larry Yaffe) writes:
/But then I don't have any simple acronym which
/sums up a philosophy of "keep things simple,
/devote resources & effort where they produce the
/most benefit".  (Except for "common sense" :-))

How about "engineering"? Doesn't "engineering" mean "designing widgets 
that accomplish their function(s) (to spec) with the least cost per unit 
of work performed"?

-- 
Alan Lovejoy; alan@pdn; 813-530-8241; Paradyne Corporation: Largo, Florida.
Disclaimer: Do not confuse my views with the official views of Paradyne
            Corporation (regardless of how confusing those views may be).
Motto: Never put off to run-time what you can do at compile-time!  

alan@pdn.UUCP (Alan Lovejoy) (05/10/88)

In article <603@a.UUCP> jlg@a.UUCP (Jim Giles) writes:
>But, wait a minute!!  DEC makes a profit.  Are you saying that VAXen
>have a good price/performance ratio?  

Well, if you consider factors such as software libraries,
communications, service, support, financial stability and other
business issues to be important to "performance", then it just
may be that VAXen have a good price/performance ratio.  

But I don't concede that they do (or don't)!  After all, DEC doesn't
pay my salary :-) and I don't endorse products for free unless I have
a significant investment in them :-) * 10!


-- 
Alan Lovejoy; alan@pdn; 813-530-8241; Paradyne Corporation: Largo, Florida.
Disclaimer: Do not confuse my views with the official views of Paradyne
            Corporation (regardless of how confusing those views may be).
Motto: Never put off to run-time what you can do at compile-time!  

daveb@geac.UUCP (David Collier-Brown) (05/10/88)

In article <388@m3.mfci.UUCP> colwell@m3.UUCP (Robert Colwell) writes:
 | You both are basically citing the
 |definition that Patterson originally proffered, taking us to task for
 |trying to change that.  But if you want to be historically accurate,
 |then start with Radin's ASPLOS-I paper on the IBM-801, which does not
 |talk about the number of instructions but about their efficacy and
 |usefulness.

  Definitions, like standards, come out of popular usage, codified
after the usage becomes **in fact** popular.
  The inventor of a term gets a slightly better chance to define it
because she's first....

--dave (it's a simple matter of who's the boss:
	the word or me) c-b
-- 
 David Collier-Brown.                 {mnetor yunexus utgpu}!geac!daveb
 Geac Computers International Inc.,   |  Computer Science loses its
 350 Steelcase Road,Markham, Ontario, |  memory (if not its mind) 
 CANADA, L3R 1B3 (416) 475-0525 x3279 |  every 6 months.

ron@mucmot.UUCP (Ron Voss) (05/10/88)

In article <52338@sun.uucp>, livesey@sun.uucp (Jon Livesey) writes:
> In article <310@mucmot.UUCP>, ron@mucmot.UUCP (Ron Voss) writes:
> > From article <381@m3.mfci.UUCP>, by root@mfci.UUCP (SuperUser):
> > > In article <307@mucmot.UUCP> ron@mucmot.UUCP (Ron Voss) writes:
> > > =From article <21149@pyramid.pyramid.com=, by csg@pyramid.pyramid.com
> > > =(Carl S. Gutekunst):
> > > == In article <1036@nusdhub.UUCP= rwhite@nusdhub.UUCP
> > > ==(Robert C. White Jr.) writes:
> > > ===Can someone give me [short answer style] a description of what "RISC"
> > > ===means.
> > 
> 	Gee, a Motorola person offers us a definition of RISC processors
> in general, by quoting from a Motorola press release.   O tempora,
> o mores.  :-)
> 
> 	Maybe you would tell me what's wrong with Tabak's [1] 1987 definition
> which I quoted a week ago?
> 
>       [items 1 to 8]
> 
>    Or how about Stallings' definition in [2]
> 
> 	[header and three items]
> 
>    Note in particular, that Stallings' definition is as short as yours, but
> manages to characterize more of the system architecture.
> 
>    Maybe the question I am asking is this:  if all this work has been done
> before, by independent people, do we really need each manufacturer to torque
> the definition around to suit their own product?   Fine for a Press Release,
> but not for general purposes.
> 
> jon.
> [I don't speak for Sun]
> 
> [1] Tabak D. "RISC Architecture", Research Studies Press, 1987.
> [2] Stallings W. "Reduced Instruction Set Computers." IEEE Computer
> Society 1988.

Bob Colwell answered these items, which inadvertently but correctly includes
valid criticism of my own simplistic offering.

I take issue with Jon's rather petty outrage that an employee would repeat
what his employer said.  Don't forget our context here.  A participant
asked for a short answer.  I ran across one (required reading, you see)
which was the shortest one I had seen.  "Fine for a Press Release"?
I clearly labelled it as such, and my affiliation is given.  From the variety
of responses, there seem to be a number of definitions.  Wouldn't it be
interesting for someone to gather all vendors' definitions and publish them
here?  We might then get closer to the real (in silicon) truth.  Then again,
we might just get marketing's press releases.  And I do like tenpura
and smores.
-- 
------------------------------------------------------------
Ron Voss                Motorola Microsystems Europe, Munich
mcvax!unido!mucmot!ron                         CIS 73647,752
my opinions are just that, and not necessarily my employer's
------------------------------------------------------------

mash@mips.COM (John Mashey) (05/11/88)

In article <6467@cit-vax.Caltech.Edu> mangler@cit-vax.Caltech.Edu (Don Speck) writes:
>Like it or not, we're stuck with the acronym.  RISC used to mean
>	Reduced Instruction Set Complexity.
>What we really want it to mean is:
>	Removal of Inherently Sequential Constructs.

From this discussion, it must be that the half-life of folks on comp.arch is
about a year, since I'd swear we went thru all of this a while back.
But if not, here are more RISCs from talks we've been using for years:

	Reduced Instruction Set Computer
		maybe 10%; reducing is a side-effect of ruthlessly wanting
		something to go faster for a given cost, NOT a goal
	Reusable Information Storage Computer
		per Marty Hopkins: caches + enough regs for optimizer (>16);
		a better definition
	Revolutionary Innovation in the Science of Computing
		no way

and the one we like the best, at least for VLSI part of computing:

	Response to Inherent Shifts in Computer technology
		HW: a) DRAMS get bigger, less penalty for large code
		    b) SRAMS get faster/cheaper, hence caches cost-effective,
			even in microprocessor domain
		    c) Cost-effective VLSI packages cross the threshold where
			you can support the CPU-cache bandwidth RISCs like,
			to get the speed they can give
		SW: d) Assembler->high-level languages, even on fairly
			small (PC-class) machines
		    e) UNIX and other OS's now written in high-level languages
-- 
-john mashey	DISCLAIMER: <generic disclaimer, I speak for me only, etc>
UUCP: 	{ames,decwrl,prls,pyramid}!mips!mash  OR  mash@mips.com
DDD:  	408-991-0253 or 408-720-1700, x253
USPS: 	MIPS Computer Systems, 930 E. Arques, Sunnyvale, CA 94086

rcd@ico.ISC.COM (Dick Dunn) (05/20/88)

In article <770@l.cc.purdue.edu>, cik@l.cc.purdue.edu (Herman Rubin) writes:
> The philosophy of RISC seems to be, to quote one of the above articles, that
> one should not worry about the "slow" instructions to speed up 70% of the 
> instructions...

This is Rubin's creation.  I don't know where the 70% comes from.  Rubin
goes on to construct a hypothetical situation where, if you use the slow
instructions often enough, and they're slowed down a lot, you can actually
make things worse.

Fine.  So what?  It is certainly true that it would be possible to
misapply the RISC approach and make a machine which is slower than an
otherwise-comparable CISC in comparable technology.  Why would you do that?

Look, the whole RISC game relies heavily on studying what REALLY happens in
REAL programs--people finally figured out that at least some CISCs spent
enormous amounts on hardware that didn't do much for program execution
speed; the silicon was better spent elsewhere.  You can't construct a valid
counter-argument to empirical results with "what-if" arguments...you're
just setting up a straw man and knocking it down.  When Rubin can show us
some REAL results that illustrate a real (or at least realistic) RISC
falling down compared to a comparable-cost real CISC on something that
approximates a real problem, then we should sit up and listen.

>...It seems that, instead, there is great merit in a VCISC design, in which
> useful instructions are included to decrease the number of instructions needed
> in a program...

I see no evidence of any merit whatsoever for a VCISC design.  It's a
Herman Rubin wish, but he doesn't run my programs for me.  His arguments in
favor of CISC can be intellectually appealing--it's not that the ideas are
inherently wrong.  It's just that the ideas, if they are to work, require
empirical results to come out differently than they really do.
-- 
Dick Dunn      UUCP: {ncar,cbosgd,nbires}!ico!rcd       (303)449-2870
   ...Never attribute to malice what can be adequately explained by stupidity.