[comp.sys.mac] New Mac Rumours

d_volaric@vaxa.uwa.oz (02/26/89)

In article <70755@ti-csl.csc.ti.com>, holland@m2.csc.ti.com (Fred Hollander) writes:
> In article billkatt@caen.engin.umich.edu (billkatt) writes:
>>language, and RISC chips are optimized for C.
> 
> This is very interesting.  I don't have the strongest background in
> hardware architecture, but, could you please explain how a processor
> could be optimized for a specific high level language?
> 
> Fred Hollander

I don't have a background either, but how about the Novix Forth
processor?  The debatable point might be how high-level Forth really is.
Also, I have been told that programming a Nat Semi processor (the
"32000", I think) was like "writing in Pascal".

Saying that RISC chips are optimised for any high level language sounds
contradictory to me. The idea is to provide very low level instructions
that the raw hardware finds easy to execute. Although compiler writers may
find it easier to target RISC chips, a processor that supported
high(er) level instructions would have to be described as CISC (Complex
Instruction Set Computer).
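
As a rough sketch of that idea in C (the function names here are made up
for illustration): a CISC with fancy addressing modes might handle the
body of the first function in one or two complex instructions, while a
RISC makes the compiler spell out each simple step, as in the second.

/* the whole update at once; roughly the job one complex CISC
   instruction with a scaled-index addressing mode could do */
void scale_element(int *a, int i, int k)
{
    a[i] = a[i] * k;
}

/* the same work as the simple steps a RISC exposes directly,
   each one easy for the raw hardware to execute */
void scale_element_risc_view(int *a, int i, int k)
{
    int *addr = a + i;   /* compute the effective address */
    int v = *addr;       /* load                          */

    v = v * k;           /* operate, register to register */
    *addr = v;           /* store                         */
}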

Some EE processor guru may like to correct me on some points, though :-).

Darko Volaric,
Dvorak Computer.

daveh@cbmvax.UUCP (Dave Haynie) (03/01/89)

in article <568092@vaxa.uwa.oz>, d_volaric@vaxa.uwa.oz says:

> Saying that RISC chips are optimised for any high level language sounds
> contradictory to me. 

It really isn't.

> The idea is to provide very low level instructions that the raw hardware 
> finds easy to execute. 

That's only part of the story.  Certainly, if I only have a limited number
of instructions, lots of orthogonality, and few addressing modes, it makes
the hardware simpler.  This has its own immediate benefits.  First of all,
it makes the chip easier to design.  Since the instruction set is simple, I
can hard wire it instead of microcoding, so each instruction can go much
faster since there's no micro execution phase.  And since the design is
simple, the chip comes out small.  So now I can do something cool with the
space I have left over.  You have your choice here.  Motorola decided to
put floating point units, a deep pipeline, and register scoreboarding on their
88k RISC chip.  AMD decided they wanted beaucoup de registers (192 of 'em)
and a branch target cache (though it still doesn't work) on their 29K.  MIPS
put cache control logic, MMU, and a cool pipeline on their system chips.  Sun
decided to keep it simple, which let them build their first SPARC in a gate
array instead of full custom.  This approach is currently being used by TI
to put a decent 32 bit microprocessor in GaAs, long before their process
technology is ready for something on the order of a modern 68020 style CISC
processor.

Another idea behind RISC is that, even with 30 years or so of compiler
technology behind us, compilers just plain can't take advantage of
complex instructions well enough to make use of all the powerful
instruction sets on most CISC machines.  Maybe some future compiler can
use an assembly hacker's expert system to do this, but it just ain't
here today.  So when you assume that everyone is going to write in high
level languages, you get to throw out the instructions that only
assembly hackers have a use for.
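
One classic illustration: the VAX had a single POLY instruction that
evaluated an entire polynomial, yet compilers almost never generated it;
they emitted a simple multiply-and-add loop, something like this rough
C sketch.

/* Horner's rule in plain C: the simple-instruction version of the
   job the VAX POLY opcode did all by itself */
double poly_eval(const double *coeff, int degree, double x)
{
    int i;
    double result = coeff[degree];

    for (i = degree - 1; i >= 0; i--)
        result = result * x + coeff[i];
    return result;
}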

And you can take it even further.  There are some things that compilers
can do very well that programmers can't, and you can take advantage of
that.  Most of the modern CPU architectures do this.  For example, many
of them find clever ways to hide bus cycles.  So you execute a load
instruction, followed by something else.  That load is still in progress
while the following instruction executes, as long as that next
instruction doesn't use the register that's being loaded.  Or your
branch instruction starts executing, and the execution unit executes the
instruction following it whether the branch is taken or not.  Or you
interleave integer and floating point operations, knowing that your
CPU's internal parallelism can execute both at the same time.  The kinds
of architectures being designed today let a simple analysis of
instruction interactions wring extra performance out of a processor, and
that kind of analysis is much easier for a machine to work out than for
a human.
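
As a rough C-level sketch of that kind of scheduling (the function is
made up for illustration, assuming a machine where a loaded value isn't
ready for the instruction right after the load):

/* the load and the add are independent, so the compiler can issue
   the load first and do the add while the load is still in flight,
   instead of stalling */
int load_then_add(const int *p, int x, int y)
{
    int a = *p;       /* load issued; result not needed yet   */
    int b = x + y;    /* independent work fills the dead time */

    return a + b;     /* loaded value has arrived by now      */
}

A human writing assembly has to do that bookkeeping by hand for every
load and branch; a compiler's scheduler does it mechanically.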

> Darko Volaric,
> Dvorak Computer.
-- 
Dave Haynie  "The 32 Bit Guy"     Commodore-Amiga  "The Crew That Never Rests"
   {uunet|pyramid|rutgers}!cbmvax!daveh      PLINK: D-DAVE H     BIX: hazy
              Amiga -- It's not just a job, it's an obsession