[comp.arch] To RISC or to CISC .....

kadkade@decsim.dec.com (05/11/87)

In article <1516@drivax.UUCP> tyler@drivax.UUCP (William Tyler) writes:
>>Code size is important because the size of the code determines how much
>>memory traffic is generated by instruction fetches (exclusive of improvements
>>due to caching).  All other things being equal, fewer bytes of code lead
 
Comment by lm@cottage.wisc.edu (uwvax!mcvoy):
>Excuse me, but I think you need to look at cache performance a little bit
>more.  Last *I* heard, given a reasonable icache, you could count on about a
>90-99% hit ratio.  Think about that for a moment...  If you never executed
>the same instruction twice then the hit ratio would be 0%, right?  So that
>suggests that if one is getting a 90% hit ratio, instructions are being reused
>a lot, right?  So if you throw in a cache that gives you 90% hits then you've
>just decreased memory traffic (instruction only, but a similar argument
>applies for data) to 10% of what it was with no cache....
 
I suppose that for long loops, or similar code, a more tightly packed
instruction set (CISC) would help, since the code in the cache would be
replaced less frequently. In the RISC case you would instead need a larger
cache, which would make such systems costlier - though by how much I don't
know. In any case I think the preceding argument is still valid if cache is
expensive.
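The trade-off above can be put in back-of-envelope terms. The following is a
sketch only: the 90% hit ratio comes from the thread, but the bytes-per-
instruction figures are assumptions for illustration, not measurements of any
real machine.

```python
# Instruction-fetch traffic to memory, per instruction executed, as a
# function of icache hit ratio and average instruction size.

def fetch_traffic(hit_ratio, bytes_per_insn):
    """Bytes of instruction traffic reaching memory per instruction."""
    return (1.0 - hit_ratio) * bytes_per_insn

# Assumed densities: a CISC averaging 3 bytes/insn vs. a RISC at a
# fixed 4 bytes/insn.  With a 90% hit ratio the gap is small in
# absolute terms...
print(fetch_traffic(0.90, 3.0))  # ~0.3 bytes/insn
print(fetch_traffic(0.90, 4.0))  # ~0.4 bytes/insn

# ...but with no cache at all (hit ratio 0) the denser encoding saves
# a full byte of traffic on every single instruction:
print(fetch_traffic(0.0, 3.0))   # 3.0
print(fetch_traffic(0.0, 4.0))   # 4.0
```

(A RISC typically also executes more instructions for the same task, which
would widen the gap further; this simple model ignores that.)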

The argument goes on about pipelining helping RISC machines and indeed being
a part of RISC. Didn't pipelining exist long before the acronym RISC was
coined? Pipelining is used in CISC machines, too. Is there any reason to
believe that pipelining contributes more to efficiency in RISC machines than
in CISC machines?

Shouldn't the arguments on the instruction set and micro-architecture get
down to figuring out which instructions on CISC machines cause problems
(I-stalls, too much silicon, etc.) and what they should be replaced with,
rather than a RISC vs. CISC slugfest? Can there be such a compromise? Seems
to me that the main benefit of the RISC architectures for CISC architects
has been forcing them to justify every instruction they throw into the box.

Thanx,
Sudhir.

The above are my personal opinions - at this moment anyway - and should 
not be construed as DEC's positions on this or any other subjects.

pec@necntc.NEC.COM (Paul Cohen) (05/18/87)

In Article <1204lm@cottage.WISC.EDU> writes:

>Excuse me, but I think you need to look at cache performance a little bit
>more.  Last *I* heard, given a reasonable icache, you could count on about a
>90-99% hit ratio.  

>So if you throw in a cache that gives you 90% hits then you've
>just decreased memory traffic 


In the context of the current discussion concerning microprocessor 
architecture, a memory access should be defined as any access to the 
memory SYSTEM and that includes any access to an off-chip cache.  I may
be showing my ignorance, but I am not aware of any on-chip icaches that 
get anything approaching 90% hit ratios.  In addition, in order to be 
fair, you should not count as a hit any cache access that would have 
missed had there been no pre-fetching into the cache (since the pre-
fetching generates memory traffic).  Such pre-fetching is often a 
significant contributor to these kinds of hit ratios.
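The prefetching caveat is easy to demonstrate with a toy model. In the sketch
below, everything (the trace, the cache size, the one-line-ahead sequential
prefetch policy) is invented for illustration; the point is only the
accounting: adding prefetch lifts the reported hit ratio dramatically while
the number of line fetches from memory actually goes up slightly.

```python
# Toy direct-mapped icache, with and without sequential prefetch of the
# next line.  Addresses are line addresses; LINES is the number of
# cache lines.  simulate() returns (hit_ratio, memory_line_fetches).

LINES = 16

def simulate(trace, prefetch):
    cache = {}                      # line index -> tag
    hits = fetches = 0
    for addr in trace:
        index, tag = addr % LINES, addr // LINES
        if cache.get(index) == tag:
            hits += 1               # counted as a hit, even if the line
        else:                       # only got here via a prefetch
            cache[index] = tag
            fetches += 1
        if prefetch:                # fetch the next line if absent
            nindex, ntag = (addr + 1) % LINES, (addr + 1) // LINES
            if cache.get(nindex) != ntag:
                cache[nindex] = ntag
                fetches += 1
    return hits / len(trace), fetches

# A mostly sequential "instruction stream" followed by a short loop:
trace = list(range(40)) + list(range(8)) * 5

print(simulate(trace, prefetch=False))  # (0.4, 48)
print(simulate(trace, prefetch=True))   # (0.975, 50): a far better "hit
                                        # ratio", yet MORE memory traffic
```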


> But look at what CISC is costing DEC

Again I may be showing my ignorance, but I thought DEC's balance sheet
was in pretty good shape these days.

lm@cottage.WISC.EDU (Larry McVoy) (05/20/87)

In article <4709@necntc.NEC.COM> pec@necntc.UUCP (Paul Cohen) writes:
>In Article <1204lm@cottage.WISC.EDU> writes:
>
>>So if you throw in a cache that gives you 90% hits then you've
>>just decreased memory traffic 

>In the context of the current discussion concerning microprocessor 
>architecture, a memory access should be defined as any access to the 
>memory SYSTEM and that includes any access to an off-chip cache.  I may
>be showing my ignorance, but I am not aware of any on-chip icaches that 
>get anything approaching 90% hit ratios.  In addition, in order to be 

Um, I have to admit I was thinking of the branch prediction paper when I
wrote that bit about 90% hits, but I went back and checked.  I'm still OK;
take a look at Computing Surveys, Vol. 14, No. 3, September 1982: the paper
by A.J. Smith called "Cache Memories", in particular pages 508-509.  I'll
grant you that he is talking about mainframe technology, but the numbers and
cache sizes are by no means out of reach of present-day technology.  And all
of the figures start with miss ratios of 10%.

>> But look at what CISC is costing DEC

>Again I may be showing my ignorance, but I thought DEC's balance sheet
>was in pretty good shape these days.

Oh, yes, it's in great shape.  But tell me, would you like to buy a uVax
or a Sun3?  You say you want a uVax 'cause it says DEC and runs your code,
and besides, a Sun3 is only 2-4 times faster than a uVaxII.  Well, OK, how
about a machine that runs 10 times faster (532, AMD 29K, etc., etc.)?  DEC
is going to be left in the dust if they don't trim the fat off their CPUs.
If you really believe that I'm wrong, go buy DEC stock :-)


Larry McVoy 	        lm@cottage.wisc.edu  or  uwvax!mcvoy

bcase@apple.UUCP (05/20/87)

In Article <1204lm@cottage.WISC.EDU> writes:
>So if you throw in a cache that gives you 90% hits then you've
>just decreased memory traffic 

I should have responded when this was originally posted.  A cache often
does, but does not necessarily, lower memory traffic.  In particular,
when the block size (the minimum transfer quantum, I mean) is large,
as it is for relatively large caches with few tags, even low miss rates
can require lots of bandwidth. A totally bogus, but illustrative,
example is a 4K cache with one tag: even if the miss rate is only 1%,
you can see that the reload bus bandwidth requirement is almost 100%.
Techniques like one valid bit per word (or few words) can mitigate
the effect.

    bcase

mash@mips.UUCP (John Mashey) (05/21/87)

In article <3604@spool.WISC.EDU> lm@cottage.WISC.EDU (Larry McVoy) writes:
>Oh, yes, it's in great shape.  But tell me, would you like to buy a uVax
>or a Sun3?  You say you want a uVax 'cause it says DEC and runs your code,
>and besides, a Sun3 is only 2-4 times a uVaxII....

While I'd certainly concur heartily in general, be careful to note that
the 2-4 ratio is OK for integer work.  For floating-point, the ratios are
all over the map, and sometimes the uVAXen (esp. under VMS) even outperform
the Suns, esp. when using 68881s. Most micros are much better at integer
work than at floating-point, at least relative to the VAXen [or maybe that
says that the VAXen have either over-powered FP, or under-powered integer!]
-- 
-john mashey	DISCLAIMER: <generic disclaimer, I speak for me only, etc>
UUCP: 	{decvax,ucbvax,ihnp4}!decwrl!mips!mash, DDD:  	408-720-1700, x253
USPS: 	MIPS Computer Systems, 930 E. Arques, Sunnyvale, CA 94086

ram@nucsrl.UUCP (Renu Raman) (05/22/87)

[Our news I/O is so disordered that I wonder at times whether my response
 arrives before the posting it replies to :-)].

John Mashey, rightly pointed out:

>I wouldn't call the TRON architecture RISCy [note that this isn't
>saying good or bad, 

  As Mashey was out of the country (I am becoming clairvoyant)
  and my thesis was due today, this
  reply comes late.  A correction on fixed-format instructions
  (for the Mitsubishi CPU) is in order.

>.............
>just that it tends to not be much like what most
>people think are RISC machines]:

>``What's a RISC?''
>ANS: any machine announced since 1983.

    why 1983?  I thought 1981.


----------------------------------------------------------------------------
	Renu Raman				UUCP:...ihnp4!nucsrl!ram
	1410 Chicago Ave., #505			ARPA:ram@eecs.nwu.edu
	Evanston  IL  60201			AT&T:(312)-869-4276               

P.S.:  It's good that somebody like John polices this newsgroup.