[comp.sys.apple2] RISC systems

ericmcg@pnet91.cts.com (Eric Mcgillicuddy) (05/29/91)

RISC computers simplify a number of architectural features and add
parallelism to the execution stream, which improves processor efficiency
immensely. It is possible to run an instruction in zero clock cycles, for
instance. Register variables become useful in most cases, given that there
are almost as many registers in RISC machines as there are zero page
locations in the 6502. For the same performance, you can produce a RISC
chip much more cheaply than a "regular" chip, and for the same price you
can produce a faster RISC chip. It is also easier to redesign a RISC chip
if you don't get it right the first time.
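
For instance, in a little C routine like the sketch below (my own
illustration, not tied to any particular compiler), a RISC with 32
general registers can keep every working variable in a register for the
whole loop, much the way a 6502 programmer keeps the hot variables in
zero page; the "register" keywords are only hints:

    /* Sketch only: a byte checksum whose working variables all fit in
     * registers on a typical RISC, much like zero page on the 6502. */
    unsigned checksum(const unsigned char *buf, int len)
    {
        register unsigned sum = 0;
        register int i;

        for (i = 0; i < len; i++)
            sum += buf[i];
        return sum;
    }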

CISC chips are using more parallelism in newer designs; the '030, '040 and
'486 in PCs, for example. Real-world comparisons are difficult because
compilers for RISC machines are equivalent to CISC versions in many cases;
you end up comparing compilers, not systems. However, I would expect
hand-crafted assembler to be better on a RISC than on a CISC machine,
given equivalent programmer competency.

UUCP: bkj386!pnet91!ericmcg
INET: ericmcg@pnet91.cts.com

edwatkeys@pro-sol.cts.com (Ed Watkeys) (05/30/91)

In-Reply-To: message from ericmcg@pnet91.cts.com

I would say you're right about RISC assembly being "faster" than CISC, but on
RISC machines, we must remember what gets us that speed: expanded code
size.  This requires larger caches, more main memory and more disk space
because of the nature of the code.  By virtue of its simplicity, RISC code
also tends to make programming a chore.  On a 6502, for instance, which I
would consider a halfbreed, writing assembly code is not fun compared to C
(not that there are any decent C compilers out there for the II...).
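
To see where the extra bytes come from, consider a one-liner like the C
sketch below (my own illustration; the instruction counts are only
ballpark figures, not taken from any particular manual):

    /* On a CISC with memory operands, the increment below can often be
     * a single read-modify-write instruction.  On a load/store RISC it
     * is typically several fixed-size instructions (address arithmetic,
     * load, add, store), which is where the extra code bytes come from. */
    void bump(int *counts, int i)
    {
        counts[i]++;
    }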

I don't want to start the assembly vs. high level language thing again, but
to put it bluntly, writing in assembly these days is plain stupid if you
ever plan to keep the product alive for any length of time, and as you
stated before, CISC compilers are an even match for RISC compilers in most
cases, which takes away any non-assembly advantage that RISCs had going for
them.

What this all means, in my opinion, is that RISCs and CISCs aren't a
religious decision; they are simply design decisions.  Let me qualify that:
CISC vs. RISC should be no more religious than Motorola vs. Intel.

Ed Watkeys III

Internet: edwatkeys@pro-sol.cts.com  ProLine:  edwatkeys@pro-sol
UUCP:     crash!pro-sol!edwatkeys    ARPA:     crash!pro-sol!edwatkeys@nosc.mil
BitNET:   edwatkeys%pro-sol.cts.com@nosc.mil

MQUINN@UTCVM.BITNET (05/30/91)

On Thu, 30 May 91 05:16:08 GMT Ed Watkeys said:
>
>I don't want to start the assembly vs. high level language thing again, but
>to put it bluntly, writing in assembly these days is plain stupid if you
>ever plan to keep the product alive for any length of time, and as you

Well, Gee!  I can't imagine how a statement like that could spark a debate,
can you?  Of course, speed is NEVER an issue when choosing a language, is it?
(not wanting to start another 'thing' again).

>Ed Watkeys III
>Internet: edwatkeys@pro-sol.cts.com  ProLine:  edwatkeys@pro-sol
>UUCP:     crash!pro-sol!edwatkeys    ARPA:     crash!pro-sol!edwatkeys@nosc.mil
>BitNET:   edwatkeys%pro-sol.cts.com@nosc.mil

----------------------------------------
  BITNET--  mquinn@utcvm    <------------send files here
  pro-line-- mquinn@pro-gsplus.cts.com
Assembly-  a computer LANGUAGE.
Assembler- NOT the name of a language.  It's equivalent to a compiler.

rhyde@hubbell.ucr.edu (randy hyde) (06/01/91)

BTW, the 486 manual from Intel lists one or two instructions as having
zero cycle instruction times (obviously, the instr opcode must be in
the cache for this to occur).

gwyn@smoke.brl.mil (Doug Gwyn) (06/01/91)

In article <1991May30.051608.22524@crash.cts.com> edwatkeys@pro-sol.cts.com (Ed Watkeys) writes:
>What this all means, in my opinion, is that RISCs and CISCs aren't a
>religious decision; they are simply design decisions.

That's correct, but the rest of your arguments were basically religious ones.

A proper comparison of RISC to CISC would need reasonable implementations
of each, done with equal (high) competence using comparable levels of
technology throughout the whole implementation (including system software).
So far as I am aware, such a "laboratory experiment" has not actually been
conducted, although some software simulations have come close.  It was such
simulations that led RISC proponents to believe they were on a winning
track.

I must say that simple, regular architectures strike me as preferable on
general principles.
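
About the best most of us can do outside such a laboratory is to time the
same unmodified source on each system, along the lines of the rough C
harness below (the workload() routine is only a stand-in); that, of
course, measures CPU, compiler, and library as a unit, which is precisely
the difficulty:

    #include <stdio.h>
    #include <time.h>

    /* Stand-in benchmark kernel; substitute whatever workload matters. */
    static long workload(void)
    {
        long sum = 0;
        long i;

        for (i = 0; i < 1000000L; i++)
            sum += i % 7;
        return sum;
    }

    int main(void)
    {
        clock_t start = clock();
        long result = workload();
        clock_t stop = clock();

        printf("result %ld, CPU time %.2f seconds\n", result,
               (double)(stop - start) / CLOCKS_PER_SEC);
        return 0;
    }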

ericmcg@pnet91.cts.com (Eric Mcgillicuddy) (06/01/91)

>I would say you're right about RISC assembly being "faster" than CISC, but on
>RISC machines, we must remember what gets us that speed: expanded code
>size.  This requires larger caches, more main memory and more disk space
>because of the nature of the code.  By virtue of its simplicity, RISC code
>
>Ed Watkeys III
>
>Internet: edwatkeys@pro-sol.cts.com  ProLine:  edwatkeys@pro-sol

The latest CISC machines are really RISC machines with certain subroutines
included on chip (the microcode, and nanocode in the 680x0). There is no
fundamental difference between this and a library function doing the same
thing, except possibly where one gets the parameters. Instructions tend to
have parameters follow the opcode, subroutines tend to have them on the
stack, and of course register passing is also used for both (the MVP
instruction discussed earlier). The ProDOS MLI, of course, has parameters
following the call. So there is really not much difference between a RISC
architecture and a CISC these days; it is just a matter of what libraries
the manufacturer has decided to include and where. It really depends on
what you need: I am willing to spend the extra 20% in code size for the
flexibility of tuning the libraries to a specific task, but not if I were
designing an embedded controller where memory is the main constraint.
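
To make that concrete, the C routine below is a sketch of my own (it has
nothing to do with any real microcode) of what a CISC "block move"
instruction amounts to. Put it on the chip and it is an instruction; ship
it in a library and it is a subroutine; the only real difference is where
the addresses and the length come from:

    /* A CISC block-move instruction is, in effect, this loop burned into
     * microcode.  As an instruction it takes its parameters from
     * registers or from the bytes following the opcode; as a library
     * routine it takes them from the stack or from registers.  The work
     * done is the same either way. */
    void block_move(unsigned char *dst, const unsigned char *src,
                    unsigned len)
    {
        while (len-- != 0)
            *dst++ = *src++;
    }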

Fit the components to the system; that is the only rational design criterion.

UUCP: bkj386!pnet91!ericmcg
INET: ericmcg@pnet91.cts.com

ericmcg@pnet91.cts.com (Eric Mcgillicuddy) (06/02/91)

>BTW, the 486 manual from Intel lists one or two instructions as having
>zero cycle instruction times (obviously, the instr opcode must be in
>the cache for this to occur).
>From: rhyde@hubbell.ucr.edu (randy hyde)

That is because the 486 uses some RISC architectural features, as do the '030
and '040 (I don't recall whether the '020 was pipelined). A pipeline is
different from a cache, but the idea is the same even if the implementation
is not.
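
A toy calculation (mine, and grossly simplified) shows why a full pipeline
makes instructions look nearly free: each one still takes a full trip
through the pipe, but one completes every cycle once the pipe is full, so
the apparent cost per instruction approaches one cycle, and an instruction
folded into another's slot can look like zero:

    #include <stdio.h>

    #define STAGES 3   /* assume a simple fetch/decode/execute pipe */

    int main(void)
    {
        int n;

        for (n = 1; n <= 32; n *= 2) {
            /* the first instruction fills the pipe, the rest overlap */
            int cycles = STAGES + (n - 1);
            printf("%2d instructions take %2d cycles (%.2f cycles each)\n",
                   n, cycles, (double)cycles / n);
        }
        return 0;
    }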

By and large, RISC processors have taken over the microprocessor market;
there are very few "CISCs" still being designed. At the heart of every new
processor is a RISC processor wrapped in microcode to maintain compatibility
with the installed base.

UUCP: bkj386!pnet91!ericmcg
INET: ericmcg@pnet91.cts.com

edwatkeys@pro-sol.cts.com (Ed Watkeys) (06/03/91)

In-Reply-To: message from ericmcg@pnet91.cts.com

If I recall, I think I said something like RISC and CISC are
essentially "brand names" these days...  While I have a big 128K, 8MB has
become the minimum for "real" applications.  When it comes down to it, I
think that programmers' laziness is a far greater problem than compiler or
CPU inefficiency.  For instance, I just finished a port of an MS-DOS
program to ProDOS: the basis for the port was written in QuickBASIC and
took 150K, and my ML version for any 64K IIe or later came to SEVEN disk
blocks, which is about $C00 bytes (it's $AF4 bytes, actually...)  And
besides the space, it's actually FASTER!  This shows two things in my
mind: for most people, a 128K IIe would be fine, and programmers are lazy,
especially when they have "good" compilers...

Ed Watkeys III

Internet: edwatkeys@pro-sol.cts.com  ProLine:  edwatkeys@pro-sol
UUCP:     crash!pro-sol!edwatkeys    ARPA:     crash!pro-sol!edwatkeys@nosc.mil
BitNET:   edwatkeys%pro-sol.cts.com@nosc.mil

ericmcg@pnet91.cts.com (Eric Mcgillicuddy) (06/05/91)

>If I recall, I think I said something like RISC and CISC are
>essentially "brand names" these days...  While I have a big 128K, 8MB has
>become the minimum for "real" applications.  When it comes down to it, I
>think that programmers' laziness is a far greater problem than compiler or
>CPU inefficiency.  For instance, I just finished a port of an MS-DOS
>program to ProDOS: the basis for the port was written in QuickBASIC and
>took 150K, and my ML version for any 64K IIe or later came to SEVEN disk
>blocks, which is about $C00 bytes (it's $AF4 bytes, actually...)  And
>besides the space, it's actually FASTER!  This shows two things in my
>mind: for most people, a 128K IIe would be fine, and programmers are lazy,
>especially when they have "good" compilers...
>
>Ed Watkeys III

Two years ago now I finished a port of a C64 program to the Apple II. Not only
was it smaller than the original BASIC program and much faster, it also had
fewer bugs and could run from a network (although files had to be saved to
S6,D1  :(  ). A friend did the IBM port. It fit in 256K, but just barely, and
he had to use a couple of overlays for printing and saving. It was written in
Turbo C and it was just as fast as my Apple II version; of course, mine was
running on a 1MHz IIe and his was running on a 4.77MHz XT with 10 times the
memory. The only thing his did better was the graphics, since he took
advantage of VGA.

I think you will find that shipping a complete product is more important
than shipping a late product. This is the big advantage of compilers.

UUCP: bkj386!pnet91!ericmcg
INET: ericmcg@pnet91.cts.com