[comp.sys.acorn] History of interrupt latency and Mode 21

RWilson@acorn.co.uk (01/08/91)

OK, several people have asked for the Acorn view on this, in spite of the
correct answer being posted a number of times.

Yes, it is all down to interrupt latency, though the driving force behind the
interrupt latency was Econet rather than the floppy disc, since interrupts are
the only way to run Econet. Though (as we shall see) the floppy disc ends up
being the guy who turns the screen off.

Anyway, back in 1979 when Econet was designed to work on an Acorn Atom (1MHz
6502), the NMI latency of the 6502 was an enormous simplifier for the
design, being only 7 cycles (plus 7 cycles for the longest instruction to
complete and 6 cycles for the return, with most absolute data instructions
taking 4 cycles). So we happily ran 250Kbit/s through the network,
interrupting every two bytes: loads of time. [Note to 6809 fans: the
interrupt might start during an SWI3 instruction (20 cycles), NMI entry
takes 21, the return takes 15, and most of the instructions one needs take 5
or 6 cycles (direct page is a scarce resource). Acorn did in fact build a
6809 CPU card for its "System" range of computers, and Econet and
double-speed floppy discs would not work!] [Mind you, the NS32016 is MUCH
MUCH worse!]

The same scheme was used for floppy discs: 125Kbit/s, one interrupt per byte.
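
(For the curious, here's the arithmetic as a little C program. The cycle
counts are the ones quoted above; the bit-rate-to-cycles conversion is my
own back-of-the-envelope, so treat it as a sketch rather than gospel:)

    #include <stdio.h>

    int main(void) {
        const double cpu_hz = 1e6;              /* 1MHz 6502 in the Atom */

        /* Econet: 250Kbit/s = 31250 bytes/s, one NMI per two bytes. */
        double econet_nmi_per_s = (250000.0 / 8.0) / 2.0;   /* 15625/s */
        double econet_budget = cpu_hz / econet_nmi_per_s;   /* 64 cycles */

        /* 6502 worst case: 7 (NMI latency) + 7 (longest instruction
           completing) + 6 (return) = 20 cycles of overhead, leaving
           about eleven 4-cycle absolute data instructions per NMI. */
        double work_6502 = econet_budget - (7 + 7 + 6);     /* ~44 */

        /* 6809 worst case: 20 (NMI lands during SWI3) + 21 (NMI entry)
           + 15 (return) = 56 cycles of overhead, leaving barely one
           5-6 cycle instruction -- hence "would not work". */
        double work_6809 = econet_budget - (20 + 21 + 15);  /* ~8 */

        /* Floppy: 125Kbit/s = 15625 bytes/s, one NMI per byte gives
           the same 64-cycle budget. */
        double floppy_budget = cpu_hz / (125000.0 / 8.0);

        printf("Econet: %.0f cycles/NMI, 6502 work %.0f, 6809 work %.0f\n",
               econet_budget, work_6502, work_6809);
        printf("Floppy: %.0f cycles/NMI\n", floppy_budget);
        return 0;
    }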

The architecture was copied directly for the BBC machine (no time to change
it, no money to spend on DMA controllers...). There the CPU got twice the
bandwidth of the 6502 in the Atom thanks to the better memory system, which
paved the way for faster Econets and the ADFS double-density floppy disc
using the 1770 controllers. So floppies became the worst case (since a
500Kbit/s Econet is physically too small to be truly interesting).
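
(Same sum for the BBC machine. The 2MHz effective rate follows from "twice
the bandwidth" above; the 250Kbit/s double-density figure is the standard
1770 MFM rate, which is my assumption rather than something stated here:)

    #include <stdio.h>

    int main(void) {
        const double cpu_hz = 2e6;  /* "twice the bandwidth" of the Atom */

        /* Real Econets stay at 250Kbit/s, two bytes per NMI... */
        double econet_budget = cpu_hz / ((250000.0 / 8.0) / 2.0);  /* 128 */

        /* ...while double density (ASSUMED standard MFM rate of
           250Kbit/s, one NMI per byte) halves the old byte time. */
        double floppy_budget = cpu_hz / (250000.0 / 8.0);          /* 64 */

        printf("Econet: %.0f cycles/NMI, floppy: %.0f cycles/NMI\n",
               econet_budget, floppy_budget);
        /* So the floppy, not Econet, is now the tight case. */
        return 0;
    }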

So when the ARM chip set was designed (1983, but who's counting?) we (I...)
copied this requirement again. A doddle for ARM's overall design. Note that
the MEMC system, instead of allocating fixed amounts of memory bandwidth
(2MByte/s, 2MByte/s) to the CPU and video system, allocates it flexibly out
of a 25.6MByte/s "pool". This lets the CPU get a much larger percentage.

Trundle, trundle, fabricate, fabricate.

Write Arthur OS.

Oh dear: if you chuck enormous bandwidths down the screen ("mode 21"), you
haven't always got enough left to run the disc. OK: remove such modes from
the OS.

Release the Archimedes 310 and Arthur (June 1987).

People write demo programs for mode 21.....

Pressure for the next OS to include it! How can the OS cope with the NMI
latency needed to run the floppy disc? Well, ADFS can observe itself running
out of throughput: the controller says "late DMA" to it. In which case it
can turn off the screen refresh and retry the operation. Which (with a bit
of hysteresis in turning the screen back on) is what it does. Note that
byte-level versus sector-level interrupts make no difference: the critical
operation (getting the NMI) still has the same worst-case characteristic.
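
(The logic, roughly, as a C sketch. All the names here are hypothetical,
not the real ADFS interface, and the "hardware" is simulated so the thing
actually runs; HYSTERESIS is an invented constant standing in for whatever
ADFS really uses:)

    #include <stdbool.h>
    #include <stdio.h>

    static bool screen_on = true;
    static int  clean_ops = 0;
    #define HYSTERESIS 8     /* clean ops before the screen comes back */

    /* Simulated hardware: pretend a transfer starves whenever the
       screen is on (a mode-21-style situation). */
    static bool late_dma;
    static bool transfer_sector(void) {
        late_dma = screen_on;        /* starved unless the screen is off */
        return !late_dma;
    }

    static void video_refresh(bool on) {
        printf("screen %s\n", on ? "ON" : "OFF");
    }

    static bool adfs_transfer(void) {
        for (;;) {
            if (transfer_sector()) {
                /* Hysteresis: only restore the screen after several
                   consecutive clean operations, so it doesn't flap. */
                if (!screen_on && ++clean_ops >= HYSTERESIS) {
                    video_refresh(true);
                    screen_on = true;
                    clean_ops = 0;
                }
                return true;
            }
            clean_ops = 0;
            if (late_dma && screen_on) {
                /* Out of throughput: steal the video bandwidth, retry. */
                video_refresh(false);
                screen_on = false;
            } else {
                return false;   /* a genuine error, not starvation */
            }
        }
    }

    int main(void) {
        for (int i = 0; i < 12; i++)
            printf("sector %2d: %s\n", i,
                   adfs_transfer() ? "ok" : "failed");
        return 0;
    }

The hysteresis matters: without it, a marginal mode would have the screen
flapping on and off on every other sector.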

Start ARM3.

Release RISC OS.

ARM3 worst-case processing is actually worse than an ARM2's (though it
happens less frequently). Heave a sigh of relief that the OS will behave
sensibly: whenever the machine is running out of real throughput (as opposed
to running from the cache) it will get some more memory bandwidth and try
again. Also, with everyone designing their own screen modes, the ability to
starve the CPU is not restricted to modes 21 and 28...

The non-existence of Econets beyond 250Kbit/s stops this being a problem at
all: even with 24MByte/s going down the screen and the sound going as well,
an ARM2 can cope with Econet. Anyway, the guy at the other end will
retransmit if all else fails!

Everyone happy now?

--Roger Wilson ("he was THAT MAN")