[comp.arch] 80960 IO

chris@mimsy.UUCP (Chris Torek) (04/15/88)

>In article <11026@mimsy.UUCP> I said
>>IO space access is a bit muddy to me ...

In article <3368@omepd> mcg@omepd (Steven McGeady) answers:
>The 80960 has no special I/O - It is entirely memory mapped.  I/O registers
>(or whatever) can occur anywhere in the address space.

I was unclear in my unclarity.  I remember something about burst
mode memory access; if there is any sort of data cacheing in the
80960 architecture itself, one would need a way to inhibit multiword
reads from the bus during IO space references.

Of course, if data cacheing is to be done externally, this problem
vanishes (or rather, retreats into the board level).
-- 
In-Real-Life: Chris Torek, Univ of MD Comp Sci Dept (+1 301 454 7163)
Domain:	chris@mimsy.umd.edu	Path:	uunet!mimsy!chris

mcg@omepd (Steven McGeady) (04/18/88)

In article <11067@mimsy.UUCP> chris@mimsy.UUCP (Chris Torek) writes:
>
>I was unclear in my unclarity.  I remember something about burst
>mode memory access; if there is any sort of data cacheing in the
>80960 architecture itself, one would need a way to inhibit multiword
>reads from the bus during IO space references.
>
>Of course, if data cacheing is to be done externally, this problem
>vanishes (or rather, retreats into the board level).

I should have tried to understand more thoroughly.  The existing
implementations do no data caching per se (other than the stack frame register
cache).  Therefore, the burst mode bus is not a problem.

It should be pointed out that the pipelined bus I/O, coupled with burst mode,
the register cache, and other aspects of the architecture, makes the 80960
less sensitive to slow memory than other processors (especially other RISC
processors).  The 80960 suffers a 2-10% slowdown per memory wait state (7% is
typical).  This is much lower than the penalty on many competing processors,
and it makes it practical to build 80960 systems without expensive,
board-space-consuming caches.  That is another reason why the 80960 is
targeted at the controller market rather than the system market.
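
A quick back-of-envelope in C, purely illustrative: it assumes the simple
linear model implied above (each wait state costs about 7% of zero-wait-state
throughput), nothing more.

	/* Back-of-envelope: assume each memory wait state costs about 7%
	 * of zero-wait-state performance, per the figure quoted above.
	 * A linear model, for illustration only.
	 */
	#include <stdio.h>

	int main(void)
	{
	    const double per_wait_state = 0.07;  /* typical slowdown per wait state */

	    for (int ws = 0; ws <= 4; ws++)
	        printf("%d wait state(s): ~%2.0f%% of full speed\n",
	               ws, 100.0 * (1.0 - per_wait_state * ws));
	    return 0;
	}

At two wait states that works out to roughly 14%, still modest enough that
many designs can do without a cache.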


S. McGeady

larryh@tekgvs.TEK.COM (Larry Hutchinson) (04/19/88)

In article <3385@omepd> mcg@iwarpo3.UUCP (Steve McGeady) writes:
>
>I should have tried to understand more thoroughly.  The existing
>implementations do no data caching per se (other than the stack frame register
>cache).  Therefore, the burst mode bus is not a problem.

Caching is not the only problem with I/O devices.  It is (was?) common
practice for status registers to be cleared upon being read.  Thus burst
mode is a no-no with such registers.
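
A small C sketch of the hazard (the device, its addresses, and the register
names are made up for illustration; no particular part is implied):

	/* Hypothetical memory-mapped device: STATUS is cleared by the act
	 * of reading it, DATA sits in the next word.  Any access wider
	 * than the one word requested -- a burst fill, a cache-line read,
	 * a careless 64-bit load -- reads STATUS as a side effect and
	 * throws its pending bits away.
	 */
	#include <stdint.h>

	#define DEV_BASE    0xFFFF0000UL                         /* assumed */
	#define DEV_STATUS  (*(volatile uint32_t *)(DEV_BASE + 0x0))
	#define DEV_DATA    (*(volatile uint32_t *)(DEV_BASE + 0x4))

	uint32_t read_data(void)
	{
	    /* Safe only if this is a single 32-bit bus cycle and the bus
	     * interface never widens it into a multi-word burst. */
	    return DEV_DATA;
	}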

>memory waitstate (7% typical).  This is much lower than many competing
>processors, and makes it practical to build 80960 systems without
>expensive and board-space-consuming caches.  Another reason why the 80960
>is targeted at a controller market, as opposed to a system market.
>

Right!  And I suppose you have some swamp land to sell us too! :-)


Larry Hutchinson, Tektronix, Inc. PO Box 500, MS 50-383, Beaverton, OR 97077
UUCP:   [uunet|ucbvax|decvax|ihnp4|hplabs]!tektronix!tekgvs!larryh
ARPA:   larryh%tekgvs.TEK.COM@RELAY.CS.NET
CSNet:  larryh@tekgvs.TEK.COM

mac3n@babbage.acc.virginia.edu (Alex Colvin) (04/20/88)

> Caching is not the only problem with I/O devices.  It is (was?) common
> practice for status registers to be cleared upon being read.  Thus burst
> mode is a no-no with such registers.

All too common a practice!  Stop it!  If I'd 'a wanted it cleared I'd 'a done
a read-and-clear!  Would you do this to a processor register?

This just means that I've got to keep a shadow copy somewhere.  Why not
keep status in the status register?
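
For what it's worth, the usual workaround looks something like this in C
(register address and the function names are invented for the example;
interrupt masking around the update is left out for brevity):

	/* Keep a software shadow of a read-to-clear status register: read
	 * the hardware exactly once per poll, accumulate the bits, and let
	 * the rest of the driver consult and acknowledge the shadow.
	 */
	#include <stdint.h>

	#define DEV_STATUS  (*(volatile uint32_t *)0xFFFF0000UL)  /* assumed address */

	static uint32_t status_shadow;

	void poll_status(void)
	{
	    status_shadow |= DEV_STATUS;    /* the hardware copy clears itself */
	}

	int test_and_ack(uint32_t bit)
	{
	    if (status_shadow & bit) {
	        status_shadow &= ~bit;      /* clear only the bit we handled */
	        return 1;
	    }
	    return 0;
	}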

					mac the nai"f

newsa@psu-cs.UUCP (News Administrator) (04/26/88)

In article <3364@tekgvs.TEK.COM> Larry Hutchinson explains that burst bus
accesses can screw up memory-mapped I/O.  There is a basic confusion here.

Instruction fetches and prefetches are not what is at issue here, but for
completeness' sake:  The 80960 architecture allows burst fetches and
prefetches of instructions.  Note that the burst prefetch is required to be
implemented so that spurious page faults are not reported.  This is easy.

Data accesses, which are the issue here, may be 1 byte, 2 bytes, 3 bytes,
4 bytes, 8 bytes, 12 bytes, or 16 bytes in size.  The size of the access is
determined by the instruction.  If the access is not aligned on a natural
boundary, a "split" access occurs.  All of this is independent of caching.

Architecturally speaking, one advantage of defining the wider instructions
is that it allows an implementation to easily exploit wider internal datapaths.

Some of the load instructions (ignoring the subword cases for now) are:
	ld	load 32 bits
	ldl	load 64 bits
	ldt	load 96 bits
	ldq	load 128 bits
There are matching store instructions.

Although it may not be obvious, significant experience coding with these
multi-word loads and stores has convinced me that they are useful instructions
to have.  Many data structures are small enough to be easily manipulated with
these instructions, without resorting to multiple instructions or to string
move instructions.
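
As a concrete (and entirely hypothetical) example in C, a 16-byte structure
like the one below is exactly the size a quad-word load and store can move;
whether a given compiler actually emits ldq/stq for the assignment is of
course up to the compiler:

	#include <stdint.h>

	/* 16 bytes, naturally aligned: the size ldq/stq move in one go. */
	struct packet_hdr {
	    uint32_t src;
	    uint32_t dst;
	    uint32_t len;
	    uint32_t flags;
	};

	void copy_hdr(struct packet_hdr *to, const struct packet_hdr *from)
	{
	    *to = *from;    /* a candidate for one ldq followed by one stq */
	}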

mouse@mcgill-vision.UUCP (der Mouse) (05/10/88)

In article <253@babbage.acc.virginia.edu>, mac3n@babbage.acc.virginia.edu (Alex Colvin) writes:
>> Caching is not the only problem with I/O devices.  It is (was?)
>> common practice for status registers to be cleared upon being read.
> All too common a practice!  Stop it!  If I'd 'a wanted it cleared I'd
> 'a done a read-and-clear!

How many machines *have* a read-and-clear instruction?  (No, the
read-and-clear must be atomic, you're not allowed to use two
instructions to do it.)
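
Just to pin down what "atomic" means here: the semantics are those of an
indivisible exchange-with-zero, which in C notation might be sketched as
below.  This only defines the operation; it is not a claim that any
particular machine spells it this way.

	/* "Read-and-clear" as one indivisible operation: fetch the old
	 * value and store zero with no window in between where an
	 * interrupt or another processor could set a bit and lose it.
	 * Notation only; not a statement about any particular machine.
	 */
	#include <stdatomic.h>
	#include <stdint.h>

	uint32_t read_and_clear(_Atomic uint32_t *reg)
	{
	    return atomic_exchange(reg, 0);  /* old value out, zero in, atomically */
	}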

> Would you do this to a processor register?

Ever hear of the PDP-8?

					der Mouse

			uucp: mouse@mcgill-vision.uucp
			arpa: mouse@larry.mcrcim.mcgill.edu