[net.arch] RISC question

aoki@trwspp.UUCP (07/19/84)

<munch...munc...mun...mu...m...>

	I have recently been hearing a lot (as compared to nothing
previous) about RISC computers.  Can anyone enlighten me on the
subject?  I have many questions.

	Here is the most important one currently.

	Where can I find reading material on the subject?

	Please reply to me directly.  If others are interested
I'll post the results.

	As they say, thanks in advance.

			::::: Dean K. Aoki :::::

			UUCP:  { ucbvax | decvax } !trwrb!trwspp!aoki
			ARPA:  !trwrb!trwspp!aoki@BERKELEY
			USPS:  T R W  Defense Systems Group
			       One Space Park  MS:  119-2142F
			       Redondo Beach, CA  90278

john@ektools.UUCP (02/19/86)

In article <411@ccivax.UUCP> rb@ccivax.UUCP (What's in a name ?) writes:
>A RISC chip also makes
>bus sharing with very high resolution displays or very high speed DMA
>peripherals and co-processors more practical as well.

This doesn't seem right.  Does 'practical' in this sentence mean less
bus contention?

Since a RISC machine doesn't have the fancy microcoded instructions of
a CISC machine, it takes more instructions to do the same job.  Even
though a RISC instruction typically requires fewer bits than a CISC
instruction, a program for a RISC machine is generally said to be
larger than the equivalent program for a CISC machine.  With today's
low memory prices, this is not a terrible thing.

I was always taught that 80%-95% of the bus usage of a processor was
for instruction fetches.  Therefore if a RISC machine takes more bytes
of instructions to run a program than a CISC machine would, the RISC
processor will eat up MORE bus cycles, leaving fewer for displays, DMA,
and co-processors.

Now, I'm not a professional computer designer, so there's a good chance
I missed something in the original argument.  Elucidation is welcome.
Flames to /dev/null.

-- 
-------------------------------------------------------------------------
John Hall
Supervisor, Software Tools Laboratory
Product Software Engineering

USPS:   EASTMAN KODAK COMPANY, 901 Elmgrove Rd., Rochester, NY 14650
VOICE:  716 726-9345
UUCP:   {allegra, seismo}!rochester!kodak!ektools!john
ARPA:   kodak!ektools!john@rochester.ARPA

dana@gatech.CSNET (Dana Eckart) (02/24/86)

I have a question concerning the choice of word size in RISC machines.
Namely, many seem to choose 32 bit architectures.  I have done some
reading on the subject and given it a fair bit of thought -- however,
I can not think of a "good" reason why 24 bit architectures are not
used instead.  Consider the Berkeley RISC I:

	     7         1          8          8          8   (bits)
	-------------------------------------------------------
	| opcode | addr mode | operand1 | operand2 | operand3 |
	-------------------------------------------------------

[I must admit that since the information that I was able to get from some 
of the published articles was not very complete, the above may not be
completely accurate.]  The use of 8 bits for register operands is needed to
address the many registers inside of the "windows" (I guess).   The
alternative approach taken by IBM and Stanford (to use a set of "general"
purpose registers) seemed to require fewer registers (16 for IBM and 32
for Stanford -- if memory serves correctly -- although I did not find any
hints about the size of the instructions; perhaps they are able to get
2 instructions per word?).  Furthermore, it seems that with the more advanced
compiler technology required to get the most out of these "general"
purpose register sets, these machines were comparable in performance
to the one built by the Berkeley group.

Thus it seems to me (naively so perhaps) that one could design a RISC to
have 64 "general" purpose registers (in fact one might want to consider
whether or not such a small set could also be effectively windowed I
suppose), a 24 bit word (and data bus), with 32 instructions (many of 
the existing RISCs are very close to this) and two addressing modes (PC 
relative and indexed).

Is there something wrong with this idea?  Is my basic understanding of RISCs
incorrect or misguided?  Have we surrendered to 32 bits without a "good"
reason?  I am very curious and would appreciate any and all notes (mail or
net).  [I apologize to those involved if I have unwittingly misrepresented
their work or ideas.]  Thanks in advance.....

--dana  (dana@gatech)

-- 
--Dana Eckart 

Georgia Institute of Technology, Atlanta Georgia, 30332
...!{akgua,allegra,amd,hplabs,ihnp4,seismo,ut-ngp}!gatech!gitpyr!dana

kds@intelca.UUCP (Ken Shoemaker) (02/25/86)

> >A RISC chip also makes
> >bus sharing with very high resolution displays or very high speed DMA
> >peripherals and co-processors more practical as well.
> 
> This doesn't seem right.  Does 'practical' in this sentence mean less
> bus contention?
> 
> Since a RISC machine doesn't have the fancy microcoded instructions of
> a CISC machine, it takes more instructions to do the same job.  Even

I think what could be meant is that with a smaller fraction of the chip
required to be the "processor," there is more room left over to implement
such goodies as an on-chip cache, which, if large enough, will significantly
reduce the external memory traffic of the processor.
-- 
If you don't like the answer, then ask another question!  Everything is the
answer to something...

Ken Shoemaker, Microprocessor Design, Intel Corp., Santa Clara, Ca.
{pur-ee,hplabs,amd,scgvaxd,dual,qantel}!intelca!kds
	
---the above views are personal.

bs@faron.UUCP (Robert D. Silverman) (02/25/86)

> I have a question concerning the choice of word size in RISC machines.
> Namely, many seem to choose 32 bit architectures.  I have done some
> reading on the subject and given it a fair bit of thought -- however,
> I can not think of a "good" reason why 24 bit architectures are not
> used instead.  Consider the RISCI Berkeley:
> 
> 	     7         1          8          8          8   (bits)
> 	-------------------------------------------------------
> 	| opcode | addr mode | operand1 | operand2 | operand3 |
> 	-------------------------------------------------------
> 
> [I must admit that since the information that I was able to get from some 
> of the published articles was not very complete, the above may not be
> completely accurate.]  The use of 8 bits for register operands is needed to
> address the many registers inside of the "windows" (I guess).   The
> alternative approach taken by IBM and Stanford (to use a set of "general"
> purpose registers) seemed to require fewer registers (16 for IBM and 32
> for Stanford -- if memory serves correctly -- although I did not find any
 
etc. etc.

Some of us feel that 32 bits is not enough, especially those of us who like
to do a lot of integer arithmetic. 2^24 is not enough numerical precision for
many applications. Personally I'd settle for a 1 MIP machine with 256 bit
words and double length registers.  :-) :-)

Bob Silverman

bcase@amdcad.UUCP (Brian case) (02/25/86)

In article <2809@gatech.CSNET>, dana@gatech.CSNET (Dana Eckart) writes:
> I can not think of a "good" reason why 24 bit architectures are not
> used instead.
>                        Have we surrendered to 32 bits without a "good"
> reason?

The reason that 32-bit architectures are so popular is addressing
range:  a 32-bit address space is nice and big (for now at least)
but a 24-bit address space is too small.  Since the CPU has to
manipulate address quantities as well as loop counters and other
things, its ALU and data paths must be 32-bits wide.  Thus, even
though the 80186 can address 1 gigabyte (is that the right number?)
of virtual memory, C compilers generate bad code for large model
programs (because the CPU cannot manipulate a full address at once).
For languages like C that allow pointers to be manipulated like
data, the ALU must be able to handle a full address; the size of
the ALU determines the size of the linear address space.
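The arithmetic behind this is easy to check; a throwaway sketch (the function name is mine, not from any of the posts):

```python
# Address-range arithmetic behind the 24-bit vs 32-bit argument.
# A machine whose ALU and data paths are n bits wide can manipulate a
# flat pointer of at most n bits, so the linear address space is 2**n bytes.

def address_space(alu_bits):
    """Bytes addressable with a flat pointer of alu_bits bits."""
    return 2 ** alu_bits

print(address_space(24))   # 16777216 bytes: 16 MB, too small
print(address_space(32))   # 4294967296 bytes: 4 GB, "nice and big (for now)"
```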

bcase@amdcad.UUCP (Brian case) (02/26/86)

In article <9921@amdcad.UUCP>, bcase@amdcad.UUCP (Brian case) writes:
> things, its ALU and data paths must be 32-bits wide.  Thus, even
> though the 80186 can address 1 gigabyte (is that the right number?)

Oops, I meant the 80286, not 80186.

nather@utastro.UUCP (Ed Nather) (02/26/86)

In article <487@faron.UUCP>, bs@faron.UUCP (Robert D. Silverman) writes:
> Some of us feel that 32 bits is not enough, especially those of us who like
> to do a lot of integer arithmetic. 2^24 is not enough numerical precision for
> many applications. Personally I'd settle for a 1 MIP machine with 256 bit
> words and double length registers.  :-) :-)
> 
> Bob Silverman

Too small.  2^256 is only 10^77, or 10^(+/-39) (about).  You might still
need floating point operations.  If, however, you are a little more generous
with word size, say 2048 bits, then a signed integer can represent a number
of 10^(+/-340) in size, and you don't need any floating point operations in
either hardware or software.  Just use integers for everything.

Too slow.  If all operations are integer functions, the computing machine
should be at least 10 mips with modern hardware technology, and simple to
boot.  (er ... simple as well.)

Also, with 2048-bit words, you don't need an instruction cache, since each
word can hold a whole slew of instructions -- each instruction is its own
cache.  Of course, a 2048-bit-wide bus may pose a design problem, but what
the hell -- we don't worry about engineering details on the net; we just set
policy.

-- 
Ed Nather
Astronomy Dept, U of Texas @ Austin
{allegra,ihnp4}!{noao,ut-sally}!utastro!nather
nather@astro.UTEXAS.EDU

aglew@ccvaxa.UUCP (02/27/86)

In my dabblings in RISCy architectures, I too considered non-power-of-two
instruction sizes - 12, 18, 24, and 40 bits. But I eventually surrendered
to the inevitability of powers-of-two -- being able to form memory addresses
by concatenation and simple field extraction is just so much more convenient.
After all, instructions have to be generated as data by some program.

Of course, you can always deal with funny numbers of byte sized packages,
letting the instruction fetch unit give them to you three bytes at a time,
but it is nice if every branch target is at an immediately addressable unit, so
that you don't have to perform any shifts before you can handle a branch.
Branches cause enough trouble as it is.
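The addressing convenience aglew describes can be sketched concretely, assuming 4-byte vs. 3-byte instructions (the function names are illustrative only):

```python
# Why power-of-two instruction sizes are convenient: the byte address of
# instruction k is formed by concatenation (a shift, i.e. wiring the low
# address bits to zero), while a 3-byte (24-bit) instruction needs a
# genuine multiply before a branch target can be turned into an address.

def addr_32bit(k):     # 4-byte instructions: address by concatenation
    return k << 2      # same value as k * 4, but no multiplier needed

def addr_24bit(k):     # 3-byte instructions: address by multiplication
    return k * 3

print(addr_32bit(10))  # 40
print(addr_24bit(10))  # 30
```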

No, 32 bits is a reasonably sized instruction - after all, we're not really
worried about saving memory at the cost of speed. However, the next step up,
to 64 bits, seems a bit much, so we'll probably see a host of 32-64 bit
instruction words in the next generation. Just like in the generation before
this. But hey! there are already machines with 512 bit instructions.
Can anyone tell the net what it is really like programming on ELI(?) ?

But you know, if you decide not to be binary, powers of two don't need
to bother you so much anymore. And you've got room for an undefined value
in a decimal nibble.

knudsen@ihwpt.UUCP (mike knudsen) (02/28/86)

> In article <411@ccivax.UUCP> rb@ccivax.UUCP (What's in a name ?) writes:
> >A RISC chip also makes
> >bus sharing with very high resolution displays or very high speed DMA
> >peripherals and co-processors more practical as well.
> 
> This doesn't seem right.  Does 'practical' in this sentence mean less
> bus contention?
> 
> Since a RISC machine doesn't have the fancy microcoded instructions of
> a CISC machine, it takes more instructions to do the same job.  Even
> though a RISC instruction typically requires fewer bits than a CISC
> instruction, a program for a RISC machine is generally said to be
> larger than the equivalent program for a CISC machine.  With today's
> low memory prices, this is not a terrible thing.
> 
> I was always taught that 80%-95% of the bus usage of a processor was
> for instruction fetches.  Therefore if a RISC machine takes more bytes
> of instructions to run a program than a CISC machine would, the RISC
> processor will eat up MORE bus cycles, leaving fewer for displays, DMA,
> and co-processors.
> 
> John Hall
> Supervisor, Software Tools Laboratory
> Product Software Engineering
> 

I agree with you.  Modern CISC processors are microcoded
(nanocoded?) and fetch one CISC instruction from system RAM,
then proceed to fetch many nano-instrs from internal ROM
to perform it.  Meanwhile, the bus is free.
RISC machines essentially run "nano code" out of YOUR main
RAM over YOUR bus.  So yes, you seem right to me.
	mike k
Or are we both missing something?

aglew@ccvaxa.UUCP (03/02/86)

>/* Written  2:46 pm  Feb 28, 1986 by knudsen@ihwpt.UUCP in ccvaxa:net.arch */
>I agree with you.  Modern CISC processors are microcoded
>(nanocoded?) and fetch one CISC instruction from system RAM,
>then proceed to fetch many nano-instrs from internal ROM
>to perform it.  Meanwhile, the bus is free.
>RISC machines essentially run "nano code" out of YOUR main
>RAM over YOUR bus.  So yes, you seem right to me.
>	mike k
>Or are we both missing something?

The problem is, most of the microcoded instructions that keep the CISC off the
bus can be executed in one cycle, so all those extra micro-cycles are useless,
and only go to support uncommon multicycle instructions.

The biggest use of microcode is in implementing complicated addressing modes
(I believe that RISCs should rather be called RAMMs - Reduced Addressing 
Mode Machines). Indexed address calculations can be implemented in the cycle
that sets up to get data from memory; indirect addressing requires an extra
memory access, so there goes the advantage of keeping the CISC off the bus.

What about other CISC instructions? Queue insertion, translate tables, etc
- all require lots of memory accesses, so the bus contention argument doesn't
hold.

The only microcoded instructions that truly reduce bus contention are compute
intensive operations like multiply and divide, and possibly multi-bit shifts
if you are foolish enough not to want a barrel shifter. Unfortunately,
simple operations like add and subtract are much more common.

Can anyone think of any more complicated instructions that would have to be
done in microcode that would not require constantly going back to memory for
data?

Well, where CISC operations can't reduce bus contention overmuch, RISCs can
take two steps to reduce it: (1) by providing a lot of registers, most scalar
(and in some architectures, vector (the Cray is really just a big RISC))
data can be accessed without going to memory; and, (2) you can spend time
and money on providing good instruction caches.

rentsch@unc.UUCP (Tim Rentsch) (03/04/86)

In article <738@ihwpt.UUCP> knudsen@ihwpt.UUCP (mike knudsen) writes:
>> Since a RISC machine doesn't have the fancy microcoded instructions of
>> a CISC machine, it takes more instructions to do the same job.  Even
>> though a RISC instruction typically requires fewer bits than a CISC
>> instruction, a program for a RISC machine is generally said to be
>> larger than the equivalent program for a CISC machine.  With today's
>> low memory prices, this is not a terrible thing.
>> 
>> I was always taught that 80%-95% of the bus usage of a processor was
>> for instruction fetches.  Therefore if a RISC machine takes more bytes
>> of instructions to run a program than a CISC machine would, the RISC
>> processor will eat up MORE bus cycles, leaving fewer for displays, DMA,
>> and co-processors.
>
>I agree with you.  Modern CISC processors are microcoded
>(nanocoded?) and fetch one CISC instruction from system RAM,
>then proceed to fetch many nano-instrs from internal ROM
>to perform it.  Meanwhile, the bus is free.
>RISC machines essentially run "nano code" out of YOUR main
>RAM over YOUR bus.  So yes, you seem right to me.
>	mike k
>Or are we both missing something?


The proper term is microcode.  Nanocode actually exists in some
machines (although what the term means varies).  The Nanodata QM-1
(no, I am not making this up) is the best-known example.

Also, you are missing something.  In the first place, the numbers
are wrong: instruction fetches use only about half of the bandwidth
used by a program, not 80-95%.  (As I recall, this percentage was
given as 55% for the recent Clipper chip, a RISC chip.)

For another thing, you assume that the amount of bandwidth so
consumed is significant.  I will grant that RISC programs are
bigger.  Let's be generous and say they are twice as big (usual
numbers are less than 2).  Assuming 50% of pre-RISC program
bandwidth came from instruction fetches, this is an increase of
program bandwidth requirements of 50% (((2*50% + 1*50%) / 100%) - 100%). 
But programs do *not* use up all of the memory bandwidth available.
I don't have any actual numbers handy, but suppose they use 1/3 of
the total bandwidth available (pre-RISC).  After RISCing, they use
1/2.  In other words, "extra" memory bandwidth has decreased by only
25%.  I for one would trade the memory cycles for a faster
processor, since it is then easy to reduce the need for the
"extra-processor" memory cycles -- just buy more RAM.

Finally, instruction bandwidth does not necessarily translate into
bus bandwidth, because of memory caches.  If the cache hit rate is
very high, almost no bus bandwidth will be consumed by the longer
instruction fetches.  Granted, a larger cache will be required to
achieve the same hit rate for the now-larger instructions, but
instructions generally have higher cache hit rates than data so
that a modest increase in cache size should suffice.  (In any case,
caches scale up nicely and easily, so if you need more cache, just
add it.)  The Clipper, somewhat RISCish, has two caches, one for
instructions and one for data, so that instruction bandwidth does
not interfere with data bandwidth.

rb@ccivax.UUCP (rex ballard) (03/06/86)

In article <738@ihwpt.UUCP> knudsen@ihwpt.UUCP (mike knudsen) writes:
>> In article <411@ccivax.UUCP> rb@ccivax.UUCP (What's in a name ?) writes:
>> >A RISC chip also makes
>> >bus sharing ... more practical as well.
>>  the RISC
>> processor will eat up MORE bus cycles, leaving fewer for displays, DMA
>> , and co-processors.
>RISC machines essentially run "nano code" out of YOUR main
>RAM over YOUR bus.  So yes, you seem right to me.
>	mike k
>Or are we both missing something?

Yup!  If you ran ALL instructions directly from main memory, this would
be correct, but RISC chips usually have either internal or tightly
coupled CACHE which reduces the number of fetches from main RAM.  In
fact, some RISC machines have several LAYERS of CACHE, such as 2K
internal, 2 meg external, then the bus, and even a cache to the disk.
The CACHE can be updated via fifo, burst, or interleaved DMA.  The
result: given any demand "paging" at all, in a "Threaded Interpreter",
the RISC becomes a SELF OPTIMISING CISC machine to the outer bus!!  Cute, eh?

john@ektools.UUCP (03/07/86)

In article <449@ccivax.UUCP> rb@ccivax.UUCP (What's in a name ?) writes:
>Yup!  If you ran ALL instructions directly from main memory, this would
>be correct, but RISC chips usually have either internal or tightly
>coupled CACHE which reduces the number of fetches from main RAM.  In
>fact, some RISC machines have several LAYERS of CACHE, such as 2K
>internal, 2 meg external, then the bus, and even a cache to the disk.

Is there some reason that RISC machines require cache to operate? 

Are the benefits of cache greater for RISC architectures?

I guess I see cache as an architectural feature separate from the
RISC-iness of a machine's instructions set.

Indeed, many modern computers incorporate both RISC instruction sets and
cache memories.  Is this cause and effect, or are these merely two
good ideas that are being used in the same machine?

Why wouldn't cache benefit a CISC machine?

For single-chip designs, given a particular die-size and line width it
is easier to find room for on-chip cacheing with a RISC design.  (Of
course, the extra space could also be used for a huge register file,
another "RISC characteristic" that seems to be equally applicable to
CISC designs.)

Are we muddying the waters by lumping a bunch of good ideas:
   - Reduced instruction sets
   - Large register files
   - Multi-layer cache
all into the category of RISC?


-- 
-------------------------------------------------------------------------
John Hall
Supervisor, Software Tools Laboratory
Product Software Engineering

USPS:   EASTMAN KODAK COMPANY, 901 Elmgrove Rd., Rochester, NY 14650
VOICE:  716 726-9345
UUCP:   {allegra, seismo}!rochester!kodak!ektools!john
ARPA:   kodak!ektools!john@rochester.ARPA

mash@mips.UUCP (John Mashey) (03/10/86)

John Hall writes:
> Is there some reason that RISC machines require cache to operate? 

No. The IBM ROMP (in the PC/RT) is tuned for cacheless operation.  IBM's
nice book about the RT explains the tradeoffs made in going from the
cache-oriented 801 to the ROMP.
> 
> Are the benefits of cache greater for RISC architectures?
Sometimes.  In particular, a carefully tuned RISC design can fetch from
an I-cache every single machine cycle, and if you do that, things like
pre-fetch or translated instruction queues can go away.
> 
> I guess I see cache as an architectural feature separate from the
> RISC-iness of a machine's instructions set.
Yes.
> 
> Indeed, many modern computers incorporate both RISC instruction sets and
> cache memories.  Is this cause and effect, or are these merely two
> good ideas that are being used in the same machine?
Almost any high-performance system (except, perhaps, vector systems)
ends up having to use caches to get reasonable cost-performance.
> 
> Why wouldn't cache benefit a CISC machine?
> 
> For single-chip designs, given a particular die-size and line width it
> is easier to find room for on-chip cacheing with a RISC design.  (Of
> course, the extra space could also be used for a huge register file,
> another "RISC characteristic" that seems to be equally applicable to
> CISC designs.). 
> 
> Are we muddying the waters by lumping a bunch of good ideas:
>    - Reduced instruction sets
>    - Large register files
>    - Multi-layer cache
> all into the category of RISC?

1) It is not clear that on-chip caching is a good idea with the current
level of technology.  To be more specific, with 2 micron CMOS, you can get
enough cache (like 256 bytes) to be an SBA ("Small Benchmark Accelerator").

2) The waters are certainly muddied, since lots of people have different ideas
of what RISC is supposed to be.
-- 
-john mashey
UUCP: 	{decvax,ucbvax,ihnp4}!decwrl!mips!mash
DDD:  	408-720-1700
USPS: 	MIPS Computer Systems, 930 E. Arques, Sunnyvale, CA 94086

rshepherd@euroies.UUCP (Roger Shepherd INMOS) (03/10/86)

The problem with 24-bit machines (and 40, 48 ...) is that
they do not have a power of two bytes in a 
word. This makes byte addressing tricky. This problem can be overcome by
very careful design and the separation of address arithmetic and integer
arithmetic (for instance). In the transputer we separate a pointer into
two halves, a word address and a byte selector (enough bits, ie 1 for
16-bit machine, 2 for 24-bit machine, 2 for 32-bit machine ...). The effect
of addressing arithmetic is then to generate a new pointer using
only BytesPerWord values in the byte selector part.

For example the pointers to successive bytes in a 24-bit machine would 
look like (if interpreted as integers)

	0	-- assuming we start out addressing at zero
	1
	2
	4
	5
	6
	8

Of course, with this scheme 16-bit and 32-bit machines have pointers
which appear entirely normal.
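The scheme can be modeled in a few lines (the field widths and helper name below are my guesses from the description, not INMOS's actual implementation):

```python
# A pointer is a word address concatenated with a 2-bit byte selector.
# "Next byte" wraps the selector at BytesPerWord, carrying into the word
# address, so on a 24-bit (3-byte-word) machine the integer values of
# successive pointers skip every fourth value: 0, 1, 2, 4, 5, 6, 8, ...

BYTES_PER_WORD = 3   # 24-bit machine
SELECTOR_BITS = 2    # enough bits for 3 byte positions

def next_byte(ptr):
    word = ptr >> SELECTOR_BITS
    sel = ptr & ((1 << SELECTOR_BITS) - 1)
    sel += 1
    if sel == BYTES_PER_WORD:        # carry into the word address
        word, sel = word + 1, 0
    return (word << SELECTOR_BITS) | sel

ptrs = [0]
for _ in range(6):
    ptrs.append(next_byte(ptrs[-1]))
print(ptrs)   # [0, 1, 2, 4, 5, 6, 8] -- matching the list in the post
```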
-- 
Roger Shepherd, INMOS Ltd, Whitefriars, Lewins Mead, Bristol, BS1 2NP, UK
Tel: +44 272 290861
UUCP: ...!mcvax!euroies!rshepherd

jer@peora.UUCP (J. Eric Roskos) (03/10/86)

> The result, given any demand "paging" at all, in a "Threaded Interpreter",
> the RISC becomes a SELF OPTIMISING CISC machine to the outer bus!!  Cute
> eh?

Yep... sounds a lot like an Adaptive Instruction Set Computer, doesn't it?
("What's that," you say?  You mean, you don't read the research done by
the folks on the Other Side?)
-- 
      Ofc:  jer@peora.UUCP  Home: jer@jerpc.CCUR.UUCP  CCUR DNS: peo
   S Mail:  MS 795; CONCURRENT Computer Corp. SDC; (A Perkin-Elmer Company)
	    2486 Sand Lake Road, Orlando, FL 32809-7642
----------------------
Obviously if I supplied LOTD(7) you'd know what was going to happen next,
wouldn't you?

dick@ucsfcca.UUCP (Dick Karpinski) (03/12/86)

In article <449@ccivax.UUCP> rb@ccivax.UUCP (What's in a name ?) writes:
>fact, some RISC machines have several LAYERS of CACHE, such as 2K
>internal, 2 meg external, then the bus, and even a cache to the disk.
>...
>the RISC becomes a SELF OPTIMISING CISC machine to the outer bus!! Cute eh?

RISC vs CISC -- where does it lead?

Premises: What I have learned, often contradictory.
  - CISC machines can be effective.
  - RISC machines make sense, should get to be faster.
  - Both camps are right...and both are wrong.
  - They cannot agree because they aren't arguing about 
    the right issues.
  - Sometimes we can resolve confusing issues by 
    increasing the resolution of our examination to see 
    finer detail.  But also, the real answer often 
    seems to pop out sideways, with a new conception 
    of what the proper issue is.
  - Late binding is expensive and effective, the issue
    is what to leave open to bind at run time.  The
    answer is more and more, but not that (yet).  Each
    argument against a feature merely values it lower
    than a competing feature.  If it gets cheaper in a
    later version of the base design, it may become a
    favored feature; they keep putting more on one chip.

Where I get to.
  - Chip layout strategy will resemble a go game rather
    than a chess game; this is effective in situations
    where the area is relevant, not any special feature.
  - The major blocks of current uPs will become the small
    features of future chips.  Whether you call them 
    register files or data caches or stacks, some one or
    more forms of fast word memory will be available to
    compilers and programmers.  And ALUs and Barrel 
    Shifters and FPUs and MMUs etc.
  - Active revisable wiring, ie switching circuits, will
    permit reconfiguration on sub-millisecond cycles to
    optimize operation of virtual larger components in
    compile/assemble phases and even at run time.  
  - Active optimizing algorithms (like caches and buffers
    and on-the-fly garbage collection) will be extended
    to choosing between multiple code generations for a
    given piece of source code and adjusting the I/O
    bandwidth vs processing power of the machine.
  - Multiple processors and awesome amounts of memory
    will be available in the single-user workstation.
  - Multiple strategies will be employed in every camp;
    we will look back on these days as the simplistic
    past.  Complexity will take on whole new meanings.

Perhaps I am just talking through my hat.

Dick
-- 

Dick Karpinski    Manager of Unix Services, UCSF Computer Center
UUCP: ...!ucbvax!ucsfcgl!cca.ucsf!dick   (415) 476-4529 (12-7)
BITNET: dick@ucsfcca   Compuserve: 70215,1277  Telemail: RKarpinski
USPS: U-76 UCSF, San Francisco, CA 94143

kludge@gitpyr.UUCP (03/15/86)

In article <250@euroies.UUCP> rshepherd@euroies.UUCP (Roger Shepherd INMOS) writes:
>The problem with 24-bit machines (and 40, 48 ...) is that
>they do not have a power of two bytes in a 
>word. This makes byte addressing tricky. This problem can be overcome by

   There is no problem with byte addressing.  All you need is a 12-bit byte.
With the extra word size, you could implement a bunch of extra useful chars
(as in the 16-bit Extended EBCDIC), and it really wouldn't cost that much
more for your I/O stuff.  In fact, you could use 8-bit I/O and just forget
about the other 4-bits for dumb terminals, but use the extra ones for more
intelligent boxes.  The opportunities are endless.
   Please, don't tell me about how bad the 12-bit character set on the CDC
Cyber machines is.  The characters aren't even 12 bits long anyway; they are
usually 6 bits (except lowercase and special characters, which are 12).
Don't even ask.
-- 
-------
Disclaimer: Everything I say is probably a trademark of someone.  But
            don't worry, I probably don't know what I'm talking about.

Scott Dorsey
Kaptain_kludge
ICS Programming Lab (Where old terminals go to die), Rich 110,
Georgia Institute of Technology, Atlanta, Georgia 30332
...!{akgua,allegra,amd,hplabs,ihnp4,seismo,ut-ngp}!gatech!gitpyr!kludge

USnail:  Box 36681, Atlanta GA. 30332