[net.lan] Ethernet query

ward@hao.UUCP (Mike Ward) (12/02/84)

We are about to embark on a journey into the deepest jungles
of  Ethernet, and we have a few questions.  Hopefully the explorers
who have been there before us will provide us with the benefit
of  their knowledge.


1. We have been told that it is "Better" to install a single
length of cable, rather than join several smaller lengths with
barrel connectors.  The benefits of using many smaller lengths
are obvious.  The dangers of doing so are not so obvious.  Why
is  it better to use a single cable, and what kinds of problems
will  we encounter if we use several smaller lengths? 


2. Has anybody tried to install "zero impedance bump" connectors?
 Is this something that might help?  If not, why not? 


 3. We will be joining together machines running Unix with  machines
 running VMS (and possibly machines running VM/CMS).  We  hope
 to have systems using Decnet co-existing on the same cable 
 as systems using TCP/IP.  Are these things feasible?  Are there
 traps lying in wait for us? 


 4. Does the bit error rate increase as the cable length  approaches
 the specification maximum?  Is there some problem  other than
 collision time that constrains the length? 

 5. Do repeaters work?  Are they available?  Do the board makers
 supply them?  Are they expensive?

-- 
"The number of arguments is unimportant unless some of them are correct."

Michael Ward, NCAR/SCD
UUCP: {hplabs,nbires,brl-bmd,seismo,menlo70,stcvax}!hao!ward
ARPA: hplabs!hao!sa!ward@Berkeley
BELL: 303-497-1252
USPS: POB 3000, Boulder, CO  80307

rpw3@redwood.UUCP (Rob Warnock) (12/04/84)

+---------------
| 1. We have been told that it is "Better" to install a single
| length of cable, rather than join several smaller lengths with
| barrel connectors.  The benefits of using many smaller lengths
| are obvious.  The dangers of doing so are not so obvious.  Why
| is  it better to use a single cable, and what kinds of problems
| will  we encounter if we use several smaller lengths? 
+---------------

The problem is impedance discontinuity, which causes reflections,
which (if there are enough of them and they are big enough) causes
distortion of the signal, which if bad enough causes data to be lost.
Note that this is NOT a statistical phenomenon, but deterministic.
(Of course, it does change with time and temperature). If a packet
can't get from host-A to host-B on the first try, there's no guarantee
that 1000 tries will be any better.

See the Ethernet Standard (2.0), para. 7.6.1 ff. "Cable Sectioning":

	"The boundary between two cable sections... [is a] signal reflection
	point due to the impedance discontinuity caused by the batch-to-batch
	impedance tolerance [permitted variation] of the cable. Since the
	worst-case variation from 50 ohms is 2 ohms (see 7.3.1.1.1), a
	possible worst-case reflection of 4% may result from the joining of
	two cable sections.  The configuration...  must be made with care."
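For the curious, that 4% figure falls straight out of the standard transmission-line reflection formula applied to the worst-case tolerance. A quick sketch of the arithmetic (my own illustration, not anything in the spec):

```python
# Worst-case reflection at a butt joint between two cable sections.
# The spec allows each section's impedance to be off by up to 2 ohms
# from the nominal 50 (para. 7.3.1.1.1), so the worst joint mates a
# 48-ohm piece to a 52-ohm piece.
def reflection_coefficient(z_from, z_to):
    """Voltage reflection coefficient at a junction between impedances."""
    return (z_to - z_from) / (z_to + z_from)

gamma = reflection_coefficient(48.0, 52.0)
print(f"worst-case reflection: {abs(gamma):.0%}")  # prints: worst-case reflection: 4%
```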

They then give some recommendations, paraphrased as:

1. If possible, use just one piece of cable, no breaks.

2. Else, use the same lot (from one manufacturer, natch!) for all sections
   (avoiding the batch-to-batch variations). [Myself, I'd want to buy one long
   piece, and cut it up so that when installed the pieces are in the same order
   and direction as in the original piece. There are no restrictions on the
   amount of cutting in either case, though.]

3. Otherwise, use lengths that don't reinforce reflections, i.e., use
   odd integral multiples of a half-wavelength at 5 MHz, which means 23.4,
   70.2, and 117 meters. [NOW you know why those lengths are the ones
   you can buy pre-connectored!] Using these lengths, any mix and match
   up to 500 meters is o.k.

4. Finally, do anything you want, if the worst-case reflection at any point
   on the cable is less than 7% when driven by a "standard" transceiver.

Note that this final "condition" means that you actually can use almost
any old junk you have lying around, including RG-8/U cable, if the total
configuration is small enough or "clean" enough (low ambient noise).
(But 75 ohm cable doesn't hack it, sorry.) Also note that the "7%" figure
is for the cable only. When you add transceivers it gets worse, but that's
included in the transceiver placement rules (per 7.6.2).
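As a footnote to recommendation 3, the 23.4/70.2/117-meter figures come straight from the half-wavelength arithmetic. A sketch, assuming a propagation velocity of 0.78c for the coax (my assumption; it's the value those spec figures imply):

```python
# Section lengths that avoid reinforcing reflections: odd multiples of a
# half wavelength at 5 MHz, the fundamental frequency of 10 Mb/s
# Manchester-encoded data.
C = 299_792_458.0        # speed of light in vacuum, m/s
VELOCITY_FACTOR = 0.78   # assumed propagation velocity of the cable
F = 5.0e6                # 5 MHz

half_wave = VELOCITY_FACTOR * C / (2.0 * F)   # one half wavelength, meters
for n in (1, 3, 5):
    print(f"{n} x half-wavelength = {n * half_wave:.1f} m")
```

These come out at about 23.4, 70.2, and 117 meters, matching the pre-connectored lengths.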

+---------------
| 2. Has anybody tried to install "zero impedance bump" connectors?
|  Is this something that might help?  If not, why not? 
+---------------

From above, the problem is NOT the connectors, but the cable. See also
section 7.3.1.2 "Coaxial Cable Connectors", where they say to use normal
type "N" 50-ohm constant-impedance connectors. "Since... [it's] well below
UHF range...,  military versions... are not required (but are acceptable)."

+---------------
|  3. We will be joining together machines running Unix with  machines
|  running VMS (and possibly machines running VM/CMS).  We  hope
|  to have systems using Decnet co-existing on the same cable 
|  as systems using TCP/IP.  Are these things feasible?  Are there
|  traps lying in wait for us? 
+---------------

Co-existence on the same cable is not a problem. Co-existence of your
software for both types in any given machine may be, depending on the
flexibility of the driver interface (when faced with several protocols).
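Coexistence works because every Ethernet frame carries a 16-bit type field right after the address pair, and the driver hands each frame to whichever protocol stack owns that type. A minimal sketch of the demultiplexing (the two type codes are the real assignments; the dispatch function itself is just an illustration):

```python
import struct

# Real Ethernet type-field assignments; the dispatch below is illustrative.
ETHERTYPE_IP     = 0x0800   # DoD Internet Protocol (TCP/IP)
ETHERTYPE_DECNET = 0x6003   # DECnet Phase IV routing

def dispatch(frame: bytes) -> str:
    """Say which protocol family a raw Ethernet frame belongs to."""
    # Ethernet header: 6-byte destination, 6-byte source, 2-byte type.
    (ethertype,) = struct.unpack_from("!H", frame, 12)
    return {ETHERTYPE_IP: "TCP/IP",
            ETHERTYPE_DECNET: "DECnet"}.get(ethertype, "unknown")

# A dummy frame: zeroed addresses, the DECnet type code, some payload.
frame = bytes(12) + struct.pack("!H", ETHERTYPE_DECNET) + b"payload"
print(dispatch(frame))   # prints: DECnet
```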

+---------------
|  4. Does the bit error rate increase as the cable length  approaches
|  the specification maximum?  Is there some problem  other than
|  collision time that constrains the length? 
+---------------

The random ("Gaussian") bit-error rate does go up with larger configurations,
due to decreased signal-to-noise, but even in the largest case (500 meters and
100 transceivers) the signal-to-noise is good enough that the physical packet
error rate (due to thermal noise) should be essentially zero (say, less than one
packet in a million).  The primary causes of packets lost to "noise" will be
impulse noise from local high-energy transient events (such as an elevator or
floor polishers starting up), or controller boards with poor phase-locked
loops (that occasionally just can't lock), or software that can't keep
up with the data rate, or other non-electrical causes.

The total "diameter" of one Ethernet (including all repeatered network
sections) is constrained by the requirement that collisions be detected
reliably, such that ALL stations agree that it was a collision. (A collision
is not an "error", but a normal part of the CSMA multiple-access protocol. In
Ethernet 1.0/2.0, they are detected by analog comparators, not by CRC checks.)
The maximum "diameter" is therefore set by the minimum packet size, since both
(all) parties must still be transmitting when the signal gets to the farthest
receiver (which may or may not be one of the transmitters).
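The arithmetic behind that constraint: a station must still be transmitting when a collision from the far end propagates back to it, so the minimum frame (512 bits, or 51.2 microseconds at 10 Mb/s) bounds the worst-case round trip. A sketch, with the signal velocity on coax assumed at 0.77c (my figure, not the spec's):

```python
BIT_TIME = 100e-9              # seconds per bit at 10 Mb/s
MIN_FRAME_BITS = 512           # 64-byte minimum frame
SLOT_TIME = MIN_FRAME_BITS * BIT_TIME   # 51.2 microseconds

V = 0.77 * 299_792_458.0       # assumed signal velocity on the coax, m/s

# Cable-only bound: half the slot time to get there, half to get back.
max_one_way = (SLOT_TIME / 2.0) * V
print(f"slot time: {SLOT_TIME * 1e6:.1f} us")
print(f"cable-alone diameter bound: {max_one_way / 1000.0:.1f} km")
```

The cable-alone number comes out near 5.9 km; the actual 2500-meter limit is much smaller because transceiver, repeater, and encoder delays eat most of the round-trip budget.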

+---------------
|  5. Do repeaters work?  Are they available?  Do the board makers
|  supply them?  Are they expensive?
+---------------

In order: Yes. Yes (sort of). Try DEC and Interlan, at least. A few $K.
(In high enough volume, they could be made to sell for sub-$1000, but
the demand is probably not that high.)

There was a minor bug in the Ethernet 1.0 spec, having to do with the
undecidability of the existence of a collision by a repeater under certain
worst-case configurations (which would not be likely to happen). It was
fixed in the 2.0 spec by tightening the specification of transceiver drive
current a little bit. But don't worry. Even if you are buying 1.0-style
transceivers still (because your controller boards need them), most of the
transceiver manufacturers who produce 2.0-spec stuff now give you 2.0-spec
drive current on your 1.0 transceivers anyway. Check with your vendor.

Another improvement in 2.0 was the change to the way preamble is handled.
In 1.0, the repeater could "eat" a certain number of preamble bits. After
going through too many repeaters, you could lose the entire preamble (which is
one reason for the limit of two repeaters). In Ethernet 2.0 (and IEEE 802.3)
the repeater must (re)generate the full 64-bit preamble, and must have a delay
from input to output of 6 bits or less (600ns). This means that if you are
willing to give up some "diameter" for flexibility, you can have more than
two repeater "hops" from one end of the Ethernet to the other, if you still
obey the maximum round trip delay spec. Each repeater (after the first two,
which are already included) would decrease the allowed "diameter" from the
usual maximum of 2500 meters by about 20 meters for each 100 ns (1 bit) delay
in the "startup time" from one cable to the next. Roughly, this comprises the
transceiver receiver (6 bits), repeater carrier-detect (2 bits), repeater delay
(6 bits), repeater encoder (1 bit), and transceiver transmitter (3 bits), for
a total of 18 bits or 1800 ns or 360 meters per additional repeater. Still, this
can be a major "win" in configuration flexibility in certain situations.
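Tallying up that per-hop budget (the figures are copied from the breakdown above; the code is just the arithmetic):

```python
# Delay budget for one additional repeater hop, in bit times (100 ns each).
delay_bits = {
    "transceiver receiver":    6,
    "repeater carrier-detect": 2,
    "repeater delay":          6,
    "repeater encoder":        1,
    "transceiver transmitter": 3,
}
BIT_NS = 100      # nanoseconds per bit at 10 Mb/s
M_PER_BIT = 20    # ~20 meters of allowed "diameter" lost per bit of delay

total = sum(delay_bits.values())
print(f"{total} bits = {total * BIT_NS} ns = {total * M_PER_BIT} m per extra repeater")
# prints: 18 bits = 1800 ns = 360 m per extra repeater
```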

Rob Warnock

UUCP:	{ihnp4,ucbvax!amd}!fortune!redwood!rpw3
DDD:	(415)572-2607
Envoy:	rob.warnock/kingfisher
USPS:	510 Trinidad Ln, Foster City, CA  94404

sunny@sun.uucp (Sunny Kirsten) (12/04/84)

> We are about to embark on a journey into the deepest jungles
> of  Ethernet, and we have a few questions.  Hopefully the explorers
> who have been there before us will provide us with the benefit
> of  their knowledge.
> 
> 
> 1. We have been told that it is "Better" to install a single
> length of cable, rather than join several smaller lengths with
> barrel connectors.  The benefits of using many smaller lengths
> are obvious.  The dangers of doing so are not so obvious.  Why
> is  it better to use a single cable, and what kinds of problems
> will  we encounter if we use several smaller lengths? 
> 
> 
> 2. Has anybody tried to install "zero impedance bump" connectors?
>  Is this something that might help?  If not, why not? 
> 
Every tap, connector, kink, etc. in a cable tends to introduce an
impedance discontinuity, which causes signal reflection, standing waves,
and some attenuation.  Although these things in general are to be avoided,
there are other considerations.  For example, if you choose to go the 3Com
route (connectorized transceivers), you'll find that their transceivers
exceed the spec by a wide enough margin to compensate for the connectors.
> 
>  4. Does the bit error rate increase as the cable length  approaches
>  the specification maximum?  Is there some problem  other than
>  collision time that constrains the length? 
>
Yes.  Detection of collisions.  Stations at opposite ends of the cable
tend to not notice that they're participating in a collision, so go ahead
and transmit anyway...producing garbled packets which get thrown out by the
CRC logic.  The net effect is not an increased BIT error rate, but an
increased PACKET error rate.  Bit error rate is seldom a problem.
> 
>  5. Do repeaters work?  Are they available?  Do the board makers
>  supply them?  Are they expensive?
Yes. Yes. No? Relative to what?
Xerox makes a good one.  That and a pair of 3Com transceivers, and you're
all set.
-- 
mail ucbvax\!sun\!sunny decvax\!sun\!sunny ihnp4\!sun\!sunny<<EOF

EOF