[net.arch] RISC/CISC/microcode: personal experience

johnl@ima.UUCP (06/25/85)

I spent a little while doing battle with a Burroughs B1720, a machine with
per-process switchable microcode, not to mention word lengths variable from
1 to 24 bits and your choice of big-endian or little-endian addressing.  It
was a frustrating experience.  I got the distinct impression that the machine
was designed by 3 geniuses and 800 idiots.  The geniuses thought up the
flexible microcode stuff and the essentials of the operating system which
included such snazzy stuff as named pipes.  The idiots wrapped it in a thick
layer of punch cards (even though it had a terminal or two) and incomplete and
impenetrable manuals.  I get the impression that if you were writing Cobol
programs it wasn't such a bad machine.  For Fortran and (in desperation) Basic,
it was just awful.  There seem to have been two problems.  One was that there
was only about 8K of microcode store, and if your microcode was bigger than
that, the excess ran from main memory at a severe speed penalty.  The Fortran
microcode
had all sorts of swell operators and a nice stack architecture, but it didn't
fit so it was slow.  The other problem was that the microcode (for which we
never got manuals although they always said they'd send them) seemed to be
designed for commercial programs that moved data around a lot and didn't do a
whole lot of computation relative to I/O, e.g. Cobol programs.  If you wanted
to do floating point arithmetic, you could dump in your program and come back
tomorrow.

Well, anyway, it persuaded me that with the sort of technologies we have 
today, heavily microcoded architectures are losers.  Gordon Bell once drew me 
a little sketch plotting memory prices relative to other stuff.  If memory is 
expensive, and particularly if ROM is a lot faster than RAM, microcode wins.  
Neither of those things is true now, nor seems likely to become true in the
foreseeable future for systems of any appreciable size.  This suggests
that RISC machines with lots of memory are the way to go, particularly since 
a good compiler can optimize the RISC object code for each particular
program, while CISC microcode has to be optimized for what the
designer hopes is a typical mix.  
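
To make that concrete, here is a tiny C sketch (mine, purely an illustration,
not anything out of a real compiler) of the kind of specialization I mean.  A
compiler that knows the multiplier at compile time can emit two shifts and an
add instead of a general multiply, while a microcoded multiply instruction
has to run its general-case loop no matter what the operands are:

    #include <stdio.h>

    /* n * 10 the way an optimizing RISC compiler might emit it:
       10 = 8 + 2, so n*10 = (n << 3) + (n << 1).  Two shifts and
       an add, specialized to this one program.                   */
    static long times_ten(long n)
    {
        return (n << 3) + (n << 1);
    }

    int main(void)
    {
        printf("%ld\n", times_ten(123));    /* prints 1230 */
        return 0;
    }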

But what I think people will really find is that addressing and segmentation 
architectures are much more important than instruction architectures.  The IBM
370 instruction set is not great, but since it gives you a fairly large flat
address space, you can get work done.  And as has been noted often, IBM has spent
20 years learning about optimizing compilers to squeeze the most out of their
instruction set, which may be silly in retrospect, but has let them keep
reimplementing the same architecture for 20 years with, one must admit, fair
commercial success.  On the other hand, the PDP-10 has a wonderful instruction
architecture, but its limited address space and word addressing have condemned
it to a lingering but inevitable death.  The extended architecture that was
grafted on later was a nice try, but too different and too late.
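
And in case "flat" sounds abstract: on a 64K-segment machine (the 8086 is
the obvious example, and the sort of thing the PS below is aimed at), a
physical address is assembled as segment*16 + offset, and the offset wraps
at 64K, so indexing any single object past 64K means reloading segment
registers.  A little C sketch of the arithmetic, again just by way of
illustration:

    #include <stdio.h>

    /* 8086-style addressing: a 20-bit physical address built from a
       16-bit segment and a 16-bit offset as segment*16 + offset.    */
    static unsigned long physical(unsigned seg, unsigned off)
    {
        return ((unsigned long)seg << 4) + (off & 0xFFFFu);
    }

    int main(void)
    {
        /* The same physical byte has many segment:offset aliases... */
        printf("%05lX\n", physical(0x1234, 0x0010));   /* 12350 */
        printf("%05lX\n", physical(0x1235, 0x0000));   /* 12350 */

        /* ...and the offset wraps at 64K, so you can't just keep
           incrementing it to walk a large array.                  */
        printf("%05lX\n", physical(0x1234, 0xFFFF));   /* 2233F */
        return 0;
    }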

John Levine, Javelin Software, Cambridge MA 617-494-1400
{ decvax!cca | think | ihnp4 | cbosgd }!ima!johnl, Levine@YALE.ARPA

PS:  If you think that this means that I wish a speedy death to processors
that impose such ridiculous things as 64K byte segments on their addresses,
you're right.