[comp.dcom.lans] Token Ring

david@ms.uky.edu (David Herron -- One of the vertebrae) (12/31/88)

Yeah, I just got done reading the current BYTE this morning and
was especially intrigued with the token ring article ...

but I had a couple of questions..  mainly about the author's assumptions.


the style of daisy chaining -- The IBM design (he didn't make it clear
if he was talking about just the IBM design or token rings in general
at this point) has some sort of central ring off which you have two
cables going out to the node.  The central ring has, at the point where
the cable going out to the node (I'm going to call it the "drop cable")
connects, some circuitry which detects whether the node is up
and, if it isn't, simply passes packets (frames) along.

Why have all this extra cabling?  Yeah it's a good idea, if you're going
to have a fancy net like token ring, to have some smarts to allow packets
to go through when nodes go down or are disconnected.  BUT .. I think you
could do it without having to use two cables to reach the node ... one
should be able to do the job.

To guard against nodes being turned off you put some stuff on the board
which is somehow always powered.  This is the thing that detects when
the node is down and starts passing packets along.  Or maybe it's not
on the board itself but it's in a little box that sits on the floor and
the little box even detects when the cable going up to the node is
physically disconnected ...  Or .. the broadband cabling we have on
campus proves that you can have multiple transmitters on a cable all
transmitting at the same time.  Have the central box transmit on one
frequency and have the node transmit on another.

Lastly .. it was suggested in the article that we use the normal wiring
conduit and closets as we do now for phones and ether cable.  I don't
think that would work unless these cables are very very very thin.  At
least not compared to ethernet where we pull just one cable all around
the floor to serve all our workstations.

only one talker at a time -- I see that this helps to avoid collisions
but it seems a big waste of the available bandwidth.  Ether also allows
only one talker at a time so neither has an advantage over the other
in this regard.  There should be some way of having more than one packet
circling the ring at a time.  Or did I misread the article?

I'm thinking of something like: a node has a packet waiting to go out,
it sees a packet go by, then looks for some special event following the
packet to determine whether or not it can transmit its packet.  Perhaps
the transmitter always transmits a token after transmitting a packet.
Then the next station down the line will pick up the token, transmit
its packet (if it has one), then transmit a token.  'cept we have something
like a circular buffer and we have to make sure of not overrunning it.
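
Sketched out, the scheme I have in mind looks something like this
(purely illustrative Python -- the names and the one-frame-per-visit
rule are my own assumptions, not from any standard):

```python
from collections import deque

# Toy model of the idea above: each station transmits at most one
# queued frame per token visit, then re-issues the token downstream.
def circulate_token(stations):
    """stations: list of deques of waiting frames, in ring order.
    Returns the frames put on the ring in one token circulation."""
    sent = []
    for i, queue in enumerate(stations):
        if queue:                       # this station holds the token
            sent.append((i, queue.popleft()))
        # token passes on to the next station regardless
    return sent
```

Each station gets at most one frame per circulation, so nobody can hog
the ring -- but a station with traffic must also wait for the token to
come around.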

size limits -- He said a few things about token rings not being as size
limited as ethernets.  First off if you were to have a ring of any appreciable
number of stations and each station were the maximum distance apart
the delays for the one-packet-at-a-time to go around the ring would be
very hideous, at least in computer terms.  This implies that you'd
want to split a token ring into two far earlier than you normally
do on ethernets.  (We have 40-50 hosts on our ether doing NFS all
the time (few of them are diskless suns fortunately, so we don't see
much swapping over ether) and aren't thinking of splitting our ether
and probably won't think about it until we double in size again).

Ethernets are very cheaply split what with LANbridge type boxes being
about $1000-$2000 a crack.

broadcast packets -- how do you do one?  Off hand I'd think you could
do it much as now, put a special TO-ADDR (all ones?) and your own
FROM-ADDR ... all the nodes around the ring see the packet and actually
receive it but don't set that special bit saying it was received.
Eventually the original transmitter will receive it again and set
the special bit and pass it on.
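
The address match itself could be as simple as this sketch (the helper
name is my own invention; the all-ones broadcast destination is
standard IEEE 48-bit addressing):

```python
BROADCAST = b"\xff" * 6   # all-ones 48-bit destination address

def should_copy(dest, my_addr):
    # A station copies a frame addressed to it or to everybody.
    return dest == my_addr or dest == BROADCAST
```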

extra overhead beyond normal ethernets .. both the extra cabling and
also the two watchdog boxes. -- As I think it over I just see all
this extra stuff tossed in that makes it more expensive than ether.
And you're not gaining anything in speed.  OOooh  AAaah, they've raised
the speed up to 16Mbps ... that's still the same ball-park as ether.
Yet there were articles in Network World all concerned with whether
people will be able to find something to do with all that extra
bandwidth!

Gimme a break!
-- 
<-- David Herron; an MMDF guy                              <david@ms.uky.edu>
<-- ska: David le casse\*'      {rutgers,uunet}!ukma!david, david@UKMA.BITNET
<-- Now I know how Zonker felt when he graduated ...
<--          Stop!  Wait!  I didn't mean to!

glass@tehran.berkeley.edu (Brett Glass) (01/01/89)

In <10777@s.ms.uky.edu>, David Herron writes:

> Yeah, I just got done reading the current BYTE this morning and
> was especially intrigued with the token ring article ...

Thank you.

> but I had a couple of questions..  mainly about the author's assumptions.

> the style of daisy chaining -- The IBM design (he didn't make it clear
> if he was talking about just the IBM design or token rings in general
> at this point)...

All IEEE 802.5 Token Rings have the same topology.

> has some sort of central ring off which you have two
> cables going out to the node.  The central ring has, at the point where
> the cable going out to the node (I'm going to call it the "drop cable")
> connects, some circuitry which detects whether the node is up
> and, if it isn't, simply passes packets (frames) along.

The "circuitry" consists of a simple relay and a few passive
components.

> Why have all this extra cabling?  Yeah it's a good idea, if you're going
> to have a fancy net like token ring, to have some smarts to allow packets
> to go through when nodes go down or are disconnected.  BUT .. I think you
> could do it without having to use two cables to reach the node ... one
> should be able to do the job.

There is only one cable. It contains two pairs (four conductors):
A transmit pair and a receive pair. Since the transmitted and
received signals go to different nodes, it would be messy and
expensive to use the same pair for both.

> To guard against nodes being turned off you put some stuff on the board
> which is somehow always powered.

The Token Ring was designed for the absolute maximum degree of
reliability. A circuit which relies on a power source that
must always be present (a lithium battery, for instance) will
eventually fail. This is why IBM's MAUs are entirely passive.

> Or .. the broadband cabling we have on campus proves that you
> can have multiple transmitters on a cable all transmitting at the
> same time.  Have the central box transmit on one frequency and
> have the node transmit on another.

Using broadband transmission, rather than baseband, would make
the system MUCH more expensive than adding a pair of wires to
each cable. FCC certification would be harder and reliability
could suffer. IBM's decision seems to be the right one.

> Lastly .. it was suggested in the article that we use the normal wiring
> conduit and closets as we do now for phones and ether cable.  I don't
> think that would work unless these cables are very very very thin.

The standard Token Ring cable isn't much thicker than good 50-ohm
coax.

> At least not compared to ethernet where we pull just one cable
> all around the floor to serve all our workstations.

Pulling one cable all around the floor is fine for a "techie"
environment, but won't cut it in the business world. And it's a
nightmare if you need to add stations in unanticipated places.
IBM takes a more mature view. It envisions the LAN as a permanent
part of the building, like the telephone system, and recommends
that cables and wiring centers be put in when the building is
constructed. A number of companies have followed this advice, and
it has worked.

> only one talker at a time -- I see that this helps to avoid collisions
> but it seems a big waste of the available bandwidth.  Ether also allows
> only one talker at a time so neither has an advantage over the other
> in this regard.  There should be some way of having more than one packet
> circling the ring at a time.  Or did I misread the article?

You did. The article mentioned Early Token Release, implemented
as part of the new 16-Mbps standard.

> size limits -- He said a few things about token rings not being as size
> limited as ethernets.  First off if you were to have a ring of any appreciable
> number of stations and each station were the maximum distance apart
> the delays for the one-packet-at-a-time to go around the ring would be
> very hideous, at least in computer terms.

Early Token Release reduces this latency. Also, remember that
there is only 1.5 bits of delay in each adapter except the Active
Monitor. The time for a token to circulate is comparable to the
time an Ethernet node must wait to make sure the line is clear
before transmitting.
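
To put rough numbers on that latency, here's a back-of-the-envelope
sketch (the ring length, propagation speed, and function name are my
own illustrative assumptions):

```python
def token_circulation_us(n_stations, ring_meters,
                         bit_rate=4e6, station_delay_bits=1.5,
                         prop_meters_per_us=200.0):
    # Each adapter repeats the signal with about 1.5 bit times of
    # delay; add plain cable propagation (~200 m/us in copper).
    station_us = n_stations * station_delay_bits / bit_rate * 1e6
    prop_us = ring_meters / prop_meters_per_us
    return station_us + prop_us
```

Fifty stations on a 1000-meter ring come out to roughly 24 microseconds
per circulation at 4 Mbps -- the same ballpark as Ethernet's
51.2-microsecond slot time.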

> broadcast packets -- how do you do one?  Off hand I'd think you could
> do it much as now, put a special TO-ADDR (all ones?) and your own
> FROM-ADDR ...

This is how it's done. It's standard IEEE addressing.

> all the nodes around the ring see the packet and actually
> receive it but don't set that special bit saying it was received.
> Eventually the original transmitter will receive it again and set
> the special bit and pass it on.

In the Token Ring, every recipient attempts to set the FCI bit.

> extra overhead beyond normal ethernets .. both the extra cabling and
> also the two watchdog boxes.

A "star" wiring configuration is no more inefficient than phone
wiring. And it is especially worthwhile in the case of the Token
Ring because it allows the ring to be fault-tolerant. The
"watchdog boxes" (the MAUs) are worth the money not only for
reliability purposes but because they allow nodes to be inserted
and removed easily. The net can be reconfigured without bringing
it down (Try to do that with Ethernet!). You can't afford to shut
down a business to add or remove a node on a network.

> As I think it over I just see all this extra stuff tossed in
> that makes it more expensive than ether. And you're not gaining
> anything in speed.

A 4-Mbps Token Ring will outperform a 10-Mbps Ethernet under
heavy loads because of the absence of collisions. And the
priority scheme guarantees that important messages arrive
quickly. Read the section titled "Bravery Under Fire" in the
article.

> OOooh  AAaah, they've raised the speed up to 16Mbps ... that's
> still the same ball-park as ether.

That's a 60% improvement, not counting the additional gains from
the lack of collisions. Perhaps this is why Sun Microsystems was
one of the original test sites for the 16-Mbps ring.

-- Brett Glass

(Note: I don't usually read Usenet news, so please direct responses to
 my electronic mailbox.)

karn@ka9q.bellcore.com (Phil Karn) (01/01/89)

>A 4-Mbps Token Ring will outperform a 10-Mbps Ethernet under
>heavy loads because of the absence of collisions.

This is news to me. Dave Boggs, one of the inventors of Ethernet, recently
published a series of tests of Ethernet performance under heavy load.  The
results were presented at ACM SIGCOMM last summer at Stanford, and in
abbreviated form during the Interop '88 conference at Santa Clara.

His results show that a properly constructed Ethernet is quite capable of
carrying a sustained load at a level considerably greater than 40% of its
capacity.  Therefore it is impossible for a 4 Mbps Token Ring to outperform
a 10 Mbps Ethernet, regardless of the Token Ring's efficiency.

He also observed that with the progress being made in "porting" Ethernet to
media other than coaxial cable, "it is now possible to run Ethernet on
wiring plants originally installed for IBM Token Ring". Needless to say,
this got quite a reaction out of the audience. So did the comment, "Ethernet
works in practice, but not in theory".

Phil

jqj@oregon.uoregon.edu (J Q Johnson) (01/02/89)

In article <18672@agate.BERKELEY.EDU>, glass@tehran.berkeley.edu (Brett 
Glass) argues for the superiority of token rings over Ethernets because of
> ... the built-in
> acknowledgement provided by the trailer at the end of a returning frame.  On
> an Ethernet, one must send a packet to acknowledge receipt of a message --
> with all the delays inherent in setting up a buffer, waiting for the cable
> to clear, etc. This overhead can cut the net throughput of an Ethernet by
> more than 75% under any protocol requiring reliable data transport.

To belabor the obvious, this "feature" is only useful at the local link
level.  It does not guarantee that a packet won't be lost (or corrupted)
in the network interface or in a gateway.  Designers of internetworking
protocols (e.g. TCP/IP, XNS, etc.) argue strongly
for end-to-end ACKs and error detection, which largely vitiate the
benefit of a TR ack.

glass@tehran.berkeley.edu (Brett Glass) (01/03/89)

In article <13096@bellcore.bellcore.com> karn@ka9q.bellcore.com (Phil Karn) writes:

>His results show that a properly constructed Ethernet is quite capable of
>carrying a sustained load at a level considerably greater than 40% of its
>capacity.  Therefore it is impossible for a 4 Mbps Token Ring to outperform
>a 10 Mbps Ethernet, regardless of the Token Ring's efficiency.

Boggs' argument may ignore certain key practical considerations.  While
doing the research for my article, I spoke to many consultants who'd
designed large networks for campuses and businesses. All recommended that
the Token Ring be used as the backbone (at least) -- even before the 16 Mbps
ring was announced -- and claimed that the throughput was similar. With all
due respect to Dave Boggs (who seems to be quite a brilliant fellow), I'd
credit these people with less partiality than one of the inventors of
Ethernet. In this case, it's probable that Boggs was looking only at the
number of bits travelling down the cable, rather than overall system
throughput.  There's a big difference, as I'll explain below.

One of the reasons a 4 Mbps Token Ring can outperform a 10 Mbps Ethernet
(other than the lack of collisions I mentioned earlier) is the built-in
acknowledgement provided by the trailer at the end of a returning frame.  On
an Ethernet, one must send a packet to acknowledge receipt of a message --
with all the delays inherent in setting up a buffer, waiting for the cable
to clear, etc. This overhead can cut the net throughput of an Ethernet by
more than 75% under any protocol requiring reliable data transport.

I have direct experience with this phenomenon. The Token Ring isn't the only
network that implements built-in acknowledgements; ARCnet does, too.  When I
first set up a 2.5 Mbps ARCnet in my apartment for a consulting project, I
was surprised to find that it ran faster than my 10 Mbps Cheapernet under
the same network software! I thought this was just a fluke until I confirmed
my observations with a few gurus who designed low-level network software for
a living.  The writers of QNX, a UNIX-like operating system based on message
passing, tell me that a 2.5 Mbps ARCnet outperforms a 10 Mbps Ethernet under
their OS for the same reason. Two implementors of IBM's NetBIOS protocol
(CBIS and Performance Technology) reported similar results. (I don't have a
Token Ring right now, but I expect that it would perform even better than
ARCnet due to the higher bit rate, larger frame sizes, lower latency, and
even more efficient acknowledgement scheme.)

Moral: Like CPU clock speeds, network bit rates tell only part of the story
about real-world performance. It's not just what you've got, it's how you use
it.

>He also observed that with the progress being made in "porting" Ethernet to
>media other than coaxial cable, "it is now possible to run Ethernet on
>wiring plants originally installed for IBM Token Ring". 

That's true; most networking magazines carry ads for baluns which take
Ethernet coax at one end and Token Ring cabling at the other. But the
motivation for using such kluges is usually logistics.  Too many hardware
vendors support only one network standard, and customers are forced to adapt
or rewire. At least one thing can be said for the Token Ring wiring system
in such situations: You can run Ethernet over Token Ring cabling, but not
the other way 'round!

>"Ethernet works in practice, but not in theory".

By this, does Boggs mean that the IEEE and others are standardizing
equipment that can't be proven to work? Gee, maybe I should give up EE
and change my major to voodoo.... ;-)

-- Brett Glass

=======================================================
Copyright (C) L. Brett Glass 1989. All rights reserved.
=======================================================

karn@ka9q.bellcore.com (Phil Karn) (01/03/89)

>Boggs' argument may ignore certain key practical considerations....

I suggest that you read his paper. It contains actual test measurements made
on a real live Ethernet network, not theoretical predictions.

>On an Ethernet, one must send a packet to acknowledge receipt of a message --
>with all the delays inherent in setting up a buffer, waiting for the cable
>to clear, etc. This overhead can cut the net throughput of an Ethernet by
>more than 75% under any protocol requiring reliable data transport....

Nonsense. Surely you must understand the principles of internetworking well
enough to know that an end-to-end acknowledgement and retransmission
mechanism is essential in any heterogeneous internetwork, even if one
component subnet happens to be a "reliable" ring.  Because of its collision
detection mechanism, Ethernet drops packets without warning so rarely that
there is simply no need for a "link level" ack -- end-to-end retransmission
is sufficient.  (The only time Ethernet *does* silently drop packets is when
the destination host is down or somebody pulls the plug on an intervening
repeater -- and then neither end-to-end nor link-level retransmission will
help.) I refer you to the classic paper by Saltzer et al: "End-to-End
Arguments in System Design".

I suspect you are referring to Logical Link Control Type 2 (the
connection-oriented version closely resembling LAPB). However, I don't
personally know anyone using it. In fact, I don't know anyone using LLC Type
1 either; the original Blue Book (DEC-Intel-Xerox) Ethernet spec works just
fine for us. In my opinion, the world would be a better and less confusing
place if the IEEE 802.3 committee never existed.

Anyone with experience in writing drivers can tell you that performance
depends much more strongly on the hardware design of the controller than
anything else. There are good Ethernet controllers, and there are bad ones.
Van Jacobson of LBL recently presented the results of experiments with two
controller chips (AMD LANCE and Intel 82586) and found dramatic differences.
He was able to run the useful throughput of the LANCE chip very close to 10
megabits/sec, but the Intel chip did no better than about 5 megabits/sec. It
didn't run into collision problems on the Ethernet; the chip simply didn't
ask for data from the host fast enough.  By the way, these were true
end-to-end throughput figures, using TCP/IP.  Note that both are greater
than 4 megabits/sec.

>>"Ethernet works in practice, but not in theory".
>
>By this, does Boggs mean that the IEEE and others are standardizing
>equipment that can't be proven to work?

I was taught that when many people can confirm that something happens in
practice that doesn't match the predictions of a theory, then it is usually
safe to assume that there's something wrong with the theory.

The token ring does have its advantages over CSMA/CD when distances are
large or data rates are very high. But your cause is not helped by spreading
misinformation about the competition. Ethernet and token rings have
complementary strengths and weaknesses, and they each have their place.  At
the moment, Ethernet is clearly superior to the token ring in doing what it
is designed to do -- providing simple, reliable and relatively inexpensive
computer networking over a small local area. Token rings, because of their
more complex design, are inherently more expensive and less reliable, but
they can cover much larger physical areas and run at much faster physical
speeds.  The Internet philosophy allows you to apply each technology where
it makes sense, while still producing a single virtual network.

Phil

narten@cs.purdue.EDU (Thomas Narten) (01/03/89)

In article  <13096@bellcore.bellcore.com> karn@ka9q.bellcore.com (Phil Karn) writes:
>>A 4-Mbps Token Ring will outperform a 10-Mbps Ethernet under
>>heavy loads because of the absence of collisions.
>
>This is news to me. Dave Boggs, one of the inventors of Ethernet, recently
>published a series of tests of Ethernet performance under heavy load.  The
>results were presented at ACM SIGCOMM last summer at Stanford, and in
>abbreviated form during the Interop '88 conference at Santa Clara.
>

Indeed, the paper gives *experimental* results (read: real numbers
measured by real hardware) showing that even in a worst-case
scenario (25 transmitters competing against one another by
continuously attempting to transmit tiny 64-byte packets) 85% of the
bandwidth was utilized for data (only 15% lost to collisions, etc.).
With large packets (1500 bytes) utilization surpasses 95%.

However, just because he gets good performance doesn't mean that your
whiz-bang Ethernet card will.  One of the paper's observations is that
much of the commercially available Ethernet hardware is brain-damaged in
design in such areas as memory management -- a problem just as
prevalent in ring networks.
-- 
Thomas Narten
narten@cs.purdue.edu or {ucbvax,decvax}!purdue!narten

smb@ulysses.homer.nj.att.com (Steven M. Bellovin) (01/03/89)

In article <18659@agate.BERKELEY.EDU>, glass@tehran.berkeley.edu (Brett Glass) writes:
> The net can be reconfigured without bringing
> it down (Try to do that with Ethernet!). You can't afford to shut
> down a business to add or remove a node on a network.

That depends on your topology and wiring techniques; if done right,
it's easy.  For office environments, I'm fond of star-wired 802.3,
using either thinwire coax or the newer twisted-pair schemes; for
machine-room environments, one can either use that or standard coax
with vampire-tap transceivers.  Neither requires a net to be shut down
to add or delete stations.

I'm *not* going to get into the 802.3 vs. token ring religious wars
(I hope), but the advantages and disadvantages are not quite that simple.

kre@cs.mu.oz.au (Robert Elz) (01/03/89)

In article <13137@bellcore.bellcore.com>, karn@ka9q.bellcore.com (Phil Karn) writes:
> (The only time Ethernet *does* silently drop packets is when
> the destination host is down or somebody pulls the plug on an intervening
> repeater ...)

This is not quite true; ethernets drop packets when the receiving
controller isn't fast enough to handle the incoming packet rate.

This, and the independent ability to determine whether the addressed host
is up and running, are probably the only real uses of the frame status
byte.

> I suspect you are referring to Logical Link Control Type 2

No he's not, that does protocol level acks, and the received bit in the
transmitted packet doesn't help at all.

As you, and others, have pointed out, all real protocols do acks in
packets (and that includes all the ISO stuff) .. apart from anything
else these acks serve as flow control, which (at least at any level
beyond the controller), the frame copied bit doesn't.

kre

ron@ron.rutgers.edu (Ron Natalie) (01/03/89)

The problem with using anecdotal evidence such as your experience with
thin-ethernet interfaces and token ring cards is that your experimental
evidence is heavily swayed by the poor throughput of the interface
between the card and the machine it is installed in, rather than by the
Ethernet itself.

In addition, a well designed Ethernet system will sustain high loads
with a low number of collisions.  When we were using Proteon token ring
implementations we saw more packets that went around the ring without
being picked up by the destination (because it wasn't ready) than we
ever saw Ethernet collisions.

-Ron

jim@belltec.UUCP (Mr. Jim's Own Logon) (01/03/89)

In article <13137@bellcore.bellcore.com>, karn@ka9q.bellcore.com (Phil Karn) writes:
> Anyone with experience in writing drivers can tell you that performance
> depends much more strongly on the hardware design of the controller than
> anything else. There are good Ethernet controllers, and there are bad ones.
> Van Jacobson of LBL recently presented the results of experiments with two
> controller chips (AMD LANCE and Intel 82586) and found dramatic differences.
> He was able to run the useful throughput of the LANCE chip very close to 10
> megabits/sec, but the Intel chip did no better than about 5 megabits/sec. It
> didn't run into collision problems on the Ethernet; the chip simply didn't
> ask for data from the host fast enough.  By the way, these were true
> end-to-end throughput figures, using TCP/IP.  Note that both are greater
> than 4 megabits/sec.
> 

    While I haven't seen anything that Mr. Jacobson has presented, he is
clearly wrong or you are misinterpreting his results. Both the LANCE and
the 82586 will request data from the host at speeds sufficient to sustain
the 10 Mbps rate of the Ethernet transmission. Underflows do not happen
unless there is a system design problem. At Bell Technologies, we have AT
compatible cards using a LANCE and one using an 82586. Measuring end-to-end
throughput, the 82586 card clearly outperforms the LANCE card.  But this
is because the 82586 card has a 16-bit interface to the dual-port memory. I'm
sure something similar is present in Mr. Jacobson's test.

    At the chip level, all the controllers do is pass information at the
Ethernet data rate and interrupt the system when they are done. The only
room for performance differences is in the time required to set up the
chip for packet transmission and reception, AND the SYSTEM DESIGN: dual-port
memory vs. DMA, interrupt vs. status polling. And I would venture a
guess that the particular driver that was used would have something to do
with the throughput.


							-Jim Wall
							 Bell Technologies Inc.

P.S.  Everything I have heard about the chip level bugs of the LANCE make it
      something to run far away from, especially if you are the code pig that
      has to do the driver.

ron@ron.rutgers.edu (Ron Natalie) (01/03/89)

> However, just because he gets good performance doesn't mean that your
> whiz-bang Ethernet card will. 

As are token ring cards.  Ever measure the performance of the IBM Token Ring
Adapter/A?  We're still nowhere near 1 Mbps on these interfaces, let alone
4, 10, or 16.

-Ron

dennis@gpu.utcs.toronto.edu (Dennis Ferguson) (01/04/89)

In article <18672@agate.BERKELEY.EDU> glass@tehran.berkeley.edu (Brett Glass) writes:
> One of the reasons a 4 Mbps Token Ring can outperform a 10 Mbps Ethernet
> (other than the lack of collisions I mentioned earlier) is the built-in
> acknowledgement provided by the trailer at the end of a returning frame.  On
> an Ethernet, one must send a packet to acknowledge receipt of a message --
> with all the delays inherent in setting up a buffer, waiting for the cable
> to clear, etc. This overhead can cut the net throughput of an Ethernet by
> more than 75% under any protocol requiring reliable data transport.

Unless I'm misreading something, I don't think this is true.  The bits
in the trailer don't constitute any sort of useful acknowledgement, other
than indicating there may have been some station out there willing to accept
the packet.  It would be silly for any protocol to rely on this as a reception
acknowledgement.  The protocol wouldn't work across bridges, where the bits
would only indicate that the bridge accepted the frame while telling you
nothing about the final destination.  I also note that those bits, on an
802.5 ring, are in a part of the frame which is not included in the CRC
check, so I wouldn't necessarily want to rely on them for anything important.

I have a feeling what is being confused here is the on board support for
IEEE 802.2 Type 2 LLC circuits the IBM TR adapters have.  Be assured that
this does send packets back to acknowledge receipt of frames (this is
fairly obvious if you think about it, since this works across bridges.  It
will also work equally well (poorly?) over 802.3-style ethernets).  If an
802.2 Type 2 LLC is a performance winner for some PC networking software
it is only because the protocol is implemented on board on IBM adapters, where
it can be intimate with the TR controller chip, with a resulting decrease
in bus traffic to the board (since the network software doesn't have to
receive the acknowledgements itself).  For real computers, with good quality
software and a properly designed interface to the hardware, the on-board
firmware would make a whole lot less difference.

Dennis Ferguson
University of Toronto

jerry@olivey.olivetti.com (Jerry Aguirre) (01/04/89)

In article <18659@agate.BERKELEY.EDU> glass@tehran.berkeley.edu (Brett Glass) writes:
>A 4-Mbps Token Ring will outperform a 10-Mbps Ethernet under
>heavy loads because of the absence of collisions. And the

Absolutely true!  Except that it doesn't specify which ethernet we are
talking about.  I have heard from several sources that the theoretical
study that proved the above was not based on the latest 802.3 ethernet
standard.  No, it wasn't based on ethernet II either.  No, not ethernet
I either.

You have to remember that the study was done back when Xerox was
developing ethernet so it was based on an experimental version.  I
believe the key difference was in carrier sense, detection of
collisions, and random backoff.

Original versions just transmitted whenever they wanted to and depended
on luck to avoid collisions.  If there was a collision they didn't know
about it and had to wait for a timeout before trying to retransmit.  It is
easy to predict how such an ethernet would perform under heavy load.
Talk to the average token ring guru and they will tell you this is how
ethernet performs today.  (While at the same time stating that ethernet
I, II, and 802.3 are incompatible and can't coexist on the same cable.)

Modern ethernets check to see if anyone is using the cable before trying
to transmit.  If two stations happen to start transmitting so close
together that one doesn't see the other, then a collision is
detected and both abort without wasting time sending the rest of the
packet.  So collisions don't happen as often and when they do they are
very short.  Finally, the colliding stations use a random backoff to
reduce the chances of subsequent collisions.
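
That backoff step can be sketched in a few lines (illustrative Python;
the function name is mine, and the real 802.3 rules additionally give
up after 16 attempts):

```python
import random

def backoff_slots(collisions, max_exp=10):
    # Truncated binary exponential backoff: after the nth collision,
    # wait a random number of 51.2-us slot times drawn uniformly
    # from 0 .. 2**min(n, 10) - 1.
    return random.randint(0, 2 ** min(collisions, max_exp) - 1)
```

Capping the exponent at 2^10 slots bounds the maximum wait while still
thinning out retransmissions as the load (and collision count) climbs.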

Lastly, the performance figures for station-to-station transfers and the
various problems with different ethernet boards and chips affect how much
bandwidth a single station can use.  They have little to do with the
aggregate bandwidth that multiple stations can achieve.

			Jerry Aguirre

smb@ulysses.homer.nj.att.com (Steven M. Bellovin) (01/04/89)

In article <327@belltec.UUCP>, jim@belltec.UUCP (Mr. Jim's Own Logon) writes:
>     While I haven't seen anything that Mr. Jacobson has presented, he is
> clearly wrong or you are misinterpreting his results.

Van's statements were pretty unambiguous, as were his measurements.  Slower
Sun CPUs could transmit faster using Sun's LANCE-based Ethernet interface
than their faster ones could with an Intel chip.

> Both the LANCE and
> the 82586 will request data from the host at speeds sufficient to sustain
> the 10 Mbps rate of the Ethernet transmission.  Underflows do not happen 
> unless there is a system design problem.

I'm afraid you misunderstand the claim; no one is saying that the 82586
gets underflow errors.  Rather, the claim is that total protocol throughput
is far lower with the Intel chip.  Jacobson listed his guess as to what
was going on, based on his knowledge of the protocol behavior and watching
the Intel chip with a bus analyzer.  Since he asked that he not be quoted
out of context, I'm not going to be too specific, but the trouble seemed
to occur when several packets were awaiting transmission.  I'm going to
ask him if I can repost his entire article to this newsgroup.

		--Steve Bellovin

karn@jupiter.bellcore.com (Phil R. Karn) (01/04/89)

>    While I haven't seen anything that Mr. Jacobson has presented, he is
>clearly wrong or you are misinterpreting his results. Both the LANCE and
>the 82586 will request data from the host at speeds sufficient to sustain
>the 10 Mbps rate of the Ethernet transmission.

I was referring to the relative ability of each chip to keep the cable
occupied with useful data over the long term. OBVIOUSLY both chips have
to be able to operate at an instantaneous rate of 10 Mb/s.

Van's results were of the form "it took X seconds to transfer a
block of Y bytes over Ethernet using TCP/IP."

I would rather let his results speak for themselves.

Phil

karn@ka9q.bellcore.com (Phil Karn) (01/04/89)

>> (The only time Ethernet *does* silently drop packets is when
>> the destination host is down or somebody pulls the plug on an intervening
>> repeater ...)
>
>This is not quite true, ethernets drop packets when the receiving
>controller isn't fast enough to handle the incoming packet rate.

True. But in my experience, this has never been a significant problem except
when the upper layer protocol code is seriously broken, or when the
receiving Ethernet controller design is terminally brain-damaged (e.g., the
notorious 3Com 3C501 and the even more infamous DEC DEQNA). With a sane
controller design having a reasonable amount of buffer memory, the
end-to-end flow control in the upper layer protocol is more than sufficient
to prevent appreciable packet loss.

Phil

mac3n@babbage.acc.virginia.edu (Alex Colvin) (01/05/89)

In article <18659@agate.BERKELEY.EDU>, glass@tehran.berkeley.edu (Brett Glass) writes:
> In <10777@s.ms.uky.edu>, David Herron writes:

> All IEEE 802.5 Token Rings have the same topology.

There are other token rings.  I use a Proteon proNET-10, which has similar
wiring, except for the dual-contrarotating topology.

> A 4-Mbps Token Ring will outperform a 10-Mbps Ethernet under
> heavy loads because of the absence of collisions. And the

Now you're getting into the benchmarking fog.  It depends STRONGLY on the
kind of load.  There's a lot of overhead traffic in 802.5.

> priority scheme guarantees that important messages arrive
> quickly. Read the section titled "Bravery Under Fire" in the
> article.

sort of...  If the ring is lightly loaded, then priority isn't significant.
If the ring is heavily loaded, then the priority information arriving in
the token is likely to be out of date.

> 
> > OOooh  AAaah, they've raised the speed up to 16Mbps ... that's
> > still the same ball-park as ether.
> 
> That's a 60% improvement, not counting the additional gains from
> the lack of collisions. Perhaps this is why Sun Microsystems was
> one of the original test sites for the 16-Mbps ring.

A 16Mb/s ring will almost always carry more traffic than a 10Mb/s Aether.

The performance visible to any station, however, depends mostly on the
hardware and software.  The network designers worry about access delays in
the range of microseconds.  Then the adaptor design loses a few msec in
data transfer with the host.  Last comes the software and operating system,
which blows off a few hundred msec.
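
Alex's budget can be made concrete with some round numbers (all of them
hypothetical, chosen only to match the orders of magnitude he gives):

```python
# Rough end-to-end latency budget, in seconds.  The values are
# hypothetical but track the orders of magnitude in the text.
access_delay  = 20e-6    # media access: tens of microseconds
adapter_xfer  = 3e-3     # adaptor <-> host transfer: a few msec
host_software = 200e-3   # OS and protocol software: a few hundred msec

total = access_delay + adapter_xfer + host_software
print(round(100 * host_software / total, 1), "% spent in software")
```

With numbers in that range, the host software accounts for well over 95%
of the delay a station actually sees, which is the point: the network's
microsecond-scale access delay is lost in the noise.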

mac3n@babbage.acc.virginia.edu (Alex Colvin) (01/05/89)

> I suspect you are referring to Logical Link Control Type 2 (the
> connection-oriented version closely resembling LAPB). However, I don't
> personally know anyone using it. In fact, I don't know anyone using LLC Type
> 1 either; the original Blue Book (DEC-Intel-Xerox) Ethernet spec works just
> fine for us. In my opinion, the world would be a better and less confusing
> place if the IEEE 802.3 committee never existed.

Well hello!  I use LLC type 1 (unack'd DG).
But not 2 or 3 - my token ring isn't noisy enough either.

It's true that the ACK should be done at a higher level than the link.
This can win by ACK'ing a number of packets in one.  BUT, many implementations
can't get that to happen, so you wind up with an ACK packet anyway, only it's
bigger and later.
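
The win, when it does happen, is easy to picture: one cumulative ACK
covers a whole burst.  A toy counter (names and numbers hypothetical):

```python
def acks_sent(packets_in_burst, cumulative=True):
    """With cumulative ACKs, one ACK covers the whole burst;
    with per-packet ACKs, every packet costs one in reply."""
    if packets_in_burst <= 0:
        return 0
    return 1 if cumulative else packets_in_burst

print(acks_sent(8))                   # cumulative: 1 ACK for 8 packets
print(acks_sent(8, cumulative=False)) # per-packet: 8 ACKs
```

When the implementation can't batch, you get the per-packet count anyway,
only each ACK now carries the higher layer's headers on top.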

Anyway, the LLC stuff is 802.2; it's generic to IEEE 802, not specific to CSMA/CD.

> Anyone with experience in writing drivers can tell you that performance
> depends much more strongly on the hardware design of the controller than
> anything else.

Actually, the design of the driver, particularly the client interface, is
probably even more important.

> He was able to run the useful throughput of the LANCE chip very close to 10
> megabits/sec, but the Intel chip did no better than about 5 megabits/sec. It

I'm impressed! What kind of memory interface?  I can't get more than about
3Mb/s across a PC bus or Multibus.

> >>"Ethernet works in practice, but not in theory".
> I was taught that when many people can confirm that something happens in
> practice that doesn't match the predictions of a theory, then it is usually
> safe to assume that there's something wrong with the theory.

In this case, the theory that says ethernet can't run at more than
1/e*10Mb/s is based on an unrealistic (but tractable) model - an infinite
number of unsynchronized processes.  In real life, stations that communicate
are probably loosely synchronized.
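
For the curious, that 1/e figure falls out of the classic slotted-ALOHA
style analysis: with an infinite population offering G attempts per slot,
throughput is S = G*e**-G, which peaks at G = 1.  A quick check
(illustrative only):

```python
import math

def throughput(G):
    """Slotted-channel throughput under the infinite-population
    Poisson model: S = G * e**-G successes per slot."""
    return G * math.exp(-G)

peak = throughput(1.0)              # maximum is at G = 1
print(round(peak, 3))               # 0.368, i.e. 1/e
print(round(peak * 10, 2), "Mb/s")  # about 3.68 Mb/s of a 10 Mb/s cable
```

The model's stations never listen before transmitting, which is exactly
what modern CSMA/CD does do - hence the gap between this theory and the
measured practice.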

> the moment, Ethernet is clearly superior to the token ring in doing what it
> is designed to do -- providing simple, reliable and relatively inexpensive

Mostly it's a mature technology.

> computer networking over a small local area. Token rings, because of their
> more complex design, are inherently more expensive and less reliable, but

That's the IBM approach.  Not all token rings are as hairy.  I like Proteon's
proprietary rings, at 10 and 80 Mb/s.

narten@cs.purdue.EDU (Thomas Narten) (01/05/89)

In article  <484@babbage.acc.virginia.edu> mac3n@babbage.acc.virginia.edu (Alex Colvin) writes:
>> computer networking over a small local area. Token rings, because of their
>> more complex design, are inherently more expensive and less reliable, but
>
>That's that IBM approach.  Not all token rings are as hairy.  I like Proteon's
>proprietary rings, at 10 and 80 Mb/s.

Proteon (like most Ethernet vendors) botched their ring hardware in
its packet buffering capability.  The ProNET-10 that we have (both
UNIBUS and Q-Bus versions) can't handle back-to-back packets.  The
device apparently cannot buffer more than one packet on board at a
time.  Folks that I know that have used the 80 Mbit ring complain of
the same problem -- they wanted to use it for bulk transfer between
two stations, and it doesn't work well for that.
-- 
Thomas Narten
narten@cs.purdue.edu or {ucbvax,decvax}!purdue!narten

srg@quick.COM (Spencer Garrett) (01/06/89)

In article <327@belltec.UUCP>, jim@belltec.UUCP (Mr. Jim's Own Logon) writes:
-> At Bell Technologies, we have AT
-> compatible cards using a LANCE and one using a 82586. Measuring end to end
-> throughput, the 82586 card clearly outperforms the 8390 (LANCE).  But this 
-> is because the 82586 card has a 16 bit interface to the dual port memory. I'm
-> sure something similar is present in the Mr. Jacobson's test. 
-> ... 
-> P.S.  Everything I have heard about the chip level bugs of the LANCE make it
->       something to run far away from, especially if you are the code pig that
->       has to do the driver.

Ahem.  The 8390 is a National part, and is indeed buggy.  The LANCE
is an AMD part (Am7990), has a 16-bit DMA interface, and is about
as bug-free as they come.

mac3n@babbage.acc.virginia.edu (Alex Colvin) (01/06/89)

> Proteon (like most Ethernet vendors) botched their ring hardware in
> its packet buffering capability.  The ProNET-10 that we have (both
> UNIBUS and Q-Bus versions) can't handle back-to-back packets.  The

You're right there.  Your driver has to get the packet off the board as fast
as it possibly can.  By the way, I believe the VME bus version has several
receive buffers.

Personally, if I get a REFUSED (not copied), I usually do an immediate
transmit (after DMA).  That usually finds the receiver ready.
802.5's copied bit would be slightly better.

I heard that the new IBM 16 Mb/s TRN card for the PC has 64K on-board buffer
instead of the old 4K.  No matter what you've got, for sustained bulk transfer
you have to match the transfer rates across the system busses.

Another thought:  I'd like to get a receive interrupt when the packet starts
arriving, not when it's completed.  That would let me get an early start.
By the time I get into the interrupt handler the header would be ready.
More overlap.

			-- happy trailers...

dough@iscuva.ISCS.COM (Doug Hockin) (01/08/89)

In article <295@quick.COM> srg@quick.COM (Spencer Garrett) writes:

>Ahem.  The 8390 is a National part, and is indeed buggy.  The LANCE
>is an AMD part (Am7990), has a 16-bit DMA interface, and is about
>as bug-free as they come.

The newest revision of the National 8390 (Rev. C) is close to bug
free.  There are still problems with loopback, but normal operation
works fine.

-- 
Doug Hockin                UUCP:  dough@iscuva.iscs.com
ISC Systems Corporation           (uunet!iscuva!dough)
East 22425 Appleway        Phone: (509) 927-5477
Liberty Lake, WA  99019

mogul@decwrl.dec.com (Jeffrey Mogul) (01/10/89)

First, I'd like to thank the people who, in defense of Ethernet, have invoked
our paper, and to point out that anyone who actually wants to read it
can send a message to "wrl-techreports@decwrl.dec.com" or
"decwrl!wrl-techreports"; send a message with the word "help" on
the "Subject:" line.  WRL Research Report 88/4 is a slightly expanded
version of our paper in SIGCOMM 88.  ("Us" = Dave Boggs, Chris Kent,
and myself.)  This paper is NOT a comparison of Ethernet to Token Ring,
by the way.

In article <18672@agate.BERKELEY.EDU> glass@tehran.berkeley.edu (Brett Glass) writes:
>>"Ethernet works in practice, but not in theory".
>
>By this, does Boggs mean that the IEEE and others are standardizing
>equipment that can't be proven to work? Gee, maybe I should give up EE
>and change my major to voodoo.... ;-)

This is an example of the danger of putting catchy phrases into a
paper.  What we actually wrote was:
	Ethernet works in practice, but allegedly not in theory:
	some people have sufficiently misunderstood the existing
	studies of Ethernet performance so as to create a
	surprisingly resilient mythology.

Ethernet can be proven to work, both in practice and in theory,
but the theoretical analysis is hard, often involves unrealistic
simplifying assumptions, and isn't always easy to apply or understand.

-Jeff