[net.arch] time for a RISCy bus

henry@utzoo.UUCP (Henry Spencer) (01/17/85)

(Sorry about the title, I couldn't resist.)

Of late, we have seen announcements of the VME bus, the Multibus II,
the IEEE whatever-it's-called-this-week bus, the TI Nu-bus, and maybe
one or two that I've missed.  While all these bus schemes do have
interesting characteristics, there is one disturbing problem that they
all share:

Every last one of them is appallingly complex.

(Well, maybe I'm slandering the Nu-bus a little bit, but not the others.)

It has reached the stage where a recent review article on one of them
(the VME bus, I think) said, essentially, "nobody's implemented the
XYZ sub-bus yet, because company Q hasn't yet shipped the new VLSI IC
needed to run it".  Ye Gods!  Note that this wasn't even a whole bus,
just one specialized piece of it.

It is high time somebody did for bus structures what the RISC has done
for machine architecture:  provided a simple, high-performance alternative
to the convoluted, baroque "mainstream" approach.  Preferably before
we need a forklift to carry a bus spec -- a day that is fast approaching.
Anybody got any bright ideas?
-- 
				Henry Spencer @ U of Toronto Zoology
				{allegra,ihnp4,linus,decvax}!utzoo!henry

rpw3@redwood.UUCP (Rob Warnock) (01/19/85)

+---------------
| While all these bus schemes do have
| interesting characteristics, there is one disturbing problem that they
| all share:
| Every last one of them is appallingly complex.
| (Well, maybe I'm slandering the Nu-bus a little bit, but not the others.)
+---------------

I agree. The major complexity with the Nu-bus is that you need a FIFO to
stack up "interrupt events" (or did they change that in the TI version?
I only have the old MIT stuff). All in all, though, it IS very clean.
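
(If "FIFO to stack up interrupt events" sounds mysterious, here is a
minimal C sketch of the sort of queue I mean -- the depth and the names
are mine, not from either the MIT or the TI documents:)

    #define EVQ_SIZE 16                 /* a power of two, for cheap wraparound */

    struct event_fifo {
        unsigned short slot[EVQ_SIZE];  /* one queued interrupt event each */
        unsigned head, tail;            /* head == tail means empty */
    };

    static int evq_put(struct event_fifo *q, unsigned short ev)
    {
        if (q->head - q->tail == EVQ_SIZE)
            return -1;                  /* full: the event would be lost */
        q->slot[q->head++ % EVQ_SIZE] = ev;
        return 0;
    }

    static int evq_get(struct event_fifo *q, unsigned short *ev)
    {
        if (q->head == q->tail)
            return -1;                  /* empty */
        *ev = q->slot[q->tail++ % EVQ_SIZE];
        return 0;
    }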

+---------------
| It is high time somebody did for bus structures what the RISC has done
| for machine architecture:  provided a simple, high-performance alternative
| to the convoluted, baroque "mainstream" approach...
| Anybody got any bright ideas?
| Henry Spencer @ U of Toronto Zoology
| {allegra,ihnp4,linus,decvax}!utzoo!henry
+---------------

Yeah... Ethernet! Now wait a sec before you start flaming... It's one wire, 
the text of the full spec is only 100 pages, you can put 99% of the important
stuff on two pages, and many many vendors have proven they can interface to it.
It supports both standard protocols and proprietary protocols, simultaneously.
Simple devices can use simple protocols, complex systems can use complex stuff.
You can use it in bus and star configurations, over wire, fiber, and
microwave links, and all of that can appear to be one net if you like
(with repeaters and/or bridges). The bandwidth is about that of an early
PDP-11 UNIBUS or Q-bus.
It only takes two chips to interface to it (to the transceiver cable level),
and though they are expensive today, remember that "all silicon eventually
costs $5 [Gordon Moore -- Intel]". (Besides, look at your bus-driver/receiver
costs with those big, wide, monster busses!)

I was going to say SCSI, but in the name of "improvements", it has gotten
more and more complex (with sub-devices and disconnect/reconnect, etc.),
so that a full-functioned SCSI controller can be more complex than an
Ethernet controller. SCSI DOES supply somewhat higher bandwidth (12 MHz),
and by using the (proposed) asynchronous-acknowledge feature you can get
up to 4Mbytes/sec in some configurations. It also has SMALL limits (8) on
the number of controllers+devices on the bus, as well as distance problems.

The increasing necessity (for both speed and manufacturing economics) of
packaging whole sub-systems on a board, rather than just pieces, and the
decreasing costs of "CPUs" per se (a Z80 costs less than an SIO!), make
the "skinny backplane" worth considering. If several subsystems are
packaged in one box (read: power supply and RFI shield), the interconnect
can be even cheaper. And when was the last time you were able to take
a terminal controller or a disk controller out of your system, fix it,
test it, and put it back without crashing the system???!?!? "Skinny"
backplanes make that possible (though not guaranteed, as anyone with
certain UNIBUS Ethernet controllers knows).

Yes, the bandwidth limit is a problem, but the latency across the "Etherbus"
for a 1K disk block is a small fraction of the latency of reading that block
from disk, and small even compared to the kernel CPU time to process the data
("read()" call, buffer cache search, etc.). [For those who haven't done the
arithmetic, the Ethernet-induced latency to send a small request packet and
get back 1024 bytes of data is just under one millisecond under light load,
about 3-5 times that if the net is 70% loaded AVERAGE -- you never see average
loads that high in real configurations.]
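
[If you want to check that arithmetic yourself, here's the back-of-the-
envelope as a little C program.  The frame overheads are the standard
Ethernet ones; I'm ignoring turnaround time in the responding node,
which should be small for a dumb, transparent controller.]

    #include <stdio.h>

    #define BITRATE  10.0e6    /* bits/sec, standard Ethernet */
    #define OVERHEAD 26        /* preamble 8 + header 14 + CRC 4, in bytes */
    #define GAP      9.6e-6    /* interframe gap, in seconds */

    static double frame_time(int databytes)
    {
        int bytes = databytes + OVERHEAD;
        if (bytes < 72)
            bytes = 72;        /* minimum frame, counting the preamble */
        return bytes * 8 / BITRATE + GAP;
    }

    int main(void)
    {
        double t = frame_time(32)       /* small request packet */
                 + frame_time(1024);    /* the 1K block coming back */
        printf("unloaded round trip: %.0f microseconds\n", t * 1e6);
        return 0;              /* prints about 917 -- just under 1 ms */
    }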

Yes, many currently available board-level controllers have latency and
throughput problems. I suspect that's because such controllers were
designed to be "everything to everybody", rather than a simple, transparent
path to the "bus". It's NOT inherent in bit-serial transmission.

O.k., I have presented my bright idea: "skinny" backplanes, with the current
default being present-day Ethernet version 2.0 (IEEE 802.3), especially when
used with simple <req>/<ack> protocols (like XNS Packet Exchange).
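
To show how little protocol that implies, here is the rough shape of
such an exchange in C.  These layouts are purely illustrative -- NOT
the actual XNS Packet Exchange formats:

    struct pex_req {
        unsigned long  id;          /* echoed in the reply; matches ack to req */
        unsigned short client;      /* which service on the board is addressed */
        unsigned char  data[16];    /* e.g. "read block 1234 of drive 0" */
    };

    struct pex_ack {
        unsigned long  id;          /* the id of the request this answers */
        unsigned char  data[1024];  /* the block itself */
    };

    /* Client side, in outline: send a pex_req, wait for a pex_ack whose
     * id matches, retransmit the request on timeout.  That's the whole
     * protocol. */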

Comments?

(Guidelines: Let's try to keep it technical, on "skinny" vs. "fat" backplanes,
not Ethernet vs. Pronet vs. Hyperchannel, for example.)


Rob Warnock
Systems Architecture Consultant

UUCP:	{ihnp4,ucbvax!dual}!fortune!redwood!rpw3
DDD:	(415)572-2607
USPS:	510 Trinidad Lane, Foster City, CA  94404

wunder@wdl1.UUCP (01/24/85)

One of the sub-buses in Multibus II is a serial "skinny bus".  Since
most (all?) Multibus II traffic is in packets, a CSMA/CD packet
bus works just fine.  I think that the serial bus is called iSSB
for Serial System Bus.  The "fat bus" part is called iPSB.

There was a session on VLSI network interfaces at Spring CompCon 82,
and iNTEL made it very clear that their chip would be able to work
on a variety of CSMA/CD nets (including Ethernet) at different data
rates.  They described a two-wire serial system bus as one possible
application.
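
For reference, the retransmission rule that makes any such CSMA/CD bus
go is tiny.  A sketch in C of the usual truncated binary exponential
backoff, in outline -- SLOT_TIME is in bit times, so it scales with
whatever data rate the chip is run at:

    #include <stdlib.h>

    #define SLOT_TIME 512      /* bit times */
    #define MAX_TRIES 16

    /* Delay, in bit times, before retry number 'attempt' (1, 2, 3, ...),
     * or -1 to tell the transmitter to give up (excessive collisions). */
    long backoff(int attempt)
    {
        int k;
        if (attempt >= MAX_TRIES)
            return -1;
        k = attempt < 10 ? attempt : 10;    /* "truncated" at 2^10 slots */
        return (rand() % (1L << k)) * SLOT_TIME;
    }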

VMEbus also has a serial sub-bus defined, but I doubt that anyone
will ever bother to implement it.

w underwood

PS:  Personally, I am getting a little tired of this iBFD naming convention.

roy@phri.UUCP (Roy Smith) (01/26/85)

> w underwood
> PS:  Personally, I am getting a little tired of this iBFD naming convention.
                                                        ^^^
                                                        |||
Hey, shouldn't you ROT13 that? *-)
-- 
allegra!vax135!timeinc\
   cmcl2!rocky2!cubsvax>!phri!roy  (Roy Smith)
         ihnp4!timeinc/

The opinions expressed herein are mine, and do not necessarily
reflect the views of The Public Health Research Institute.

rcd@opus.UUCP (Dick Dunn) (01/29/85)

> Of late, we have seen announcements of the VME bus, the Multibus II,
> the IEEE whatever-it's-called-this-week bus, the TI Nu-bus, and maybe
> one or two that I've missed.  While all these bus schemes do have
> interesting characteristics, there is one disturbing problem that they
> all share:
> 
> Every last one of them is appallingly complex.

I'll agree that they seem complex, but nowhere near as complex as the
definition of a processor like the VAX.  I was
able to muddle through the VME bus spec in a few hours and get a good
enough understanding (for a software type) that I knew where everything was
and could find answers to questions with little trouble.  I suspect that if
I started to gain the same level of understanding of the VAX, it would take
several days starting from scratch.  My point is that, although a simple
bus would be a good idea, it's just not as big or urgent a problem.

> It is high time somebody did for bus structures what the RISC has done
> for machine architecture:  provided a simple, high-performance alternative
> to the convoluted, baroque "mainstream" approach.  Preferably before
> we need a forklift to carry a bus spec -- a day that is fast approaching.
> Anybody got any bright ideas?

One characteristic to keep in mind is that RISC architecture tosses out a
certain amount of "how things were done", at some cost in compatibility.
We may have to accept the same sort of thing with
a reduced bus.  For example, study the VME spec and ask yourself, "how much
of this junk could we toss out if we could assume one size for the
instruction and data paths?"  VME handles 8/16/32 bit data transfers and
16/24/32 bit addresses.  I'm not knocking the idea.  Who knows--it might be
more cost-effective overall to build all your cards with a 32-bit interface
and eliminate the signals and logic that it takes now for smart cards to
deal with dumb ones.
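
To make that concrete, here is roughly the size-decode a VME slave has
to do today, written out as C.  (I'm reconstructing the signal encoding
from memory, so treat it strictly as a sketch of how many cases there
are, not as gospel.)

    enum xfer { BYTE_EVEN, BYTE_ODD, WORD16, LONG32, BAD };

    enum xfer vme_size(int ds0, int ds1, int lword)   /* 1 = line asserted */
    {
        if (lword && ds0 && ds1)  return LONG32;      /* 32-bit transfer */
        if (!lword && ds0 && ds1) return WORD16;      /* 16-bit transfer */
        if (!lword && ds1)        return BYTE_EVEN;   /* byte on D08-D15 */
        if (!lword && ds0)        return BYTE_ODD;    /* byte on D00-D07 */
        return BAD;
    }

    /* On an all-32-bit backplane this function -- and the byte-lane
     * steering buffers behind it -- would simply vanish: every cycle
     * is LONG32. */
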
-- 
Dick Dunn	{hao,ucbvax,allegra}!nbires!rcd		(303)444-5710 x3086
   ...Never offend with style when you can offend with substance.