[comp.dcom.lans] 802.3 AUI/MAU questions

yarak@apple.com (Dennis Yarak) (08/01/90)

After a careful reading of most of the standard, it appears that it would 
not be a violation for a powered-down host with a separately powered MAU, 
to interfere with the operation of the MAU.  Does anybody know, 
practically speaking, whether a problem exists with reliability or 
operation, for either the host or the MAU, when an MAU remains powered 
(and therefore driving AUI signal circuits) while its AUI host is 
completely shut down? Or perhaps someone can point to where the standard 
covers this situation?

Also, in analyzing worst case physical implementations over 185 meters of 
cable for 10BASE2, I just can't reproduce the threshold requirements for 
collisions, or figure out where the 10 Ohm maximum DCR of the loop came 
from.  Furthermore, a maximum of 30 nodes seems awfully conservative, as my 
simulations don't show anything near the breaking point.  If anyone has 
been on the committee from the early days, can they shed some light here?  
Are the thresholds, DCR, and node count ad hoc, with compatibility for 
existing vendors having ruled the day rather than the actual worst case 
values expected from conformant transceivers and media?

Finally, do runt packets ever not get rejected at the physical layer?  
This is an issue when implementing shifted collision thresholds for long 
reach applications--Transmit mode CD allows a non-participating MAU to not 
recognize the collision, relying on the runt packet so generated to be 
discarded at the PHY.  I'm wondering if anyone has encountered problems 
identifiable to Transmit-mode thresholds on extra-long segments.

Thanks for any and all input.

Dennis Yarak
Now at Apple.

pat@hprnd.HP.COM (Pat Thaler) (08/04/90)

> 
> After a careful reading of most of the standard, it appears that it would 
> not be a violation for a powered-down host with a separately powered MAU, 
> to interfere with the operation of the MAU.  Does anybody know, 
> practically speaking, whether a problem exists with reliability or 
> operation, for either the host or the MAU, when an MAU remains powered 
> (and therefore driving AUI signal circuits) while its AUI host is 
> completely shut down? Or perhaps someone can point to where the standard 
> covers this situation?

I don't think that there are any statements in the standard which cover
this specifically.  However, other requirements do result in the AUI
receivers having an input squelch.  The more recently developed MAU
definitions such as 10BASE-T explicitly include a requirement for input
squelch.  (For 10BASE-T the requirement is to squelch any signal less
than 160 mV.)  The normal state of lines from a powered down DTE would
be the same as that from a non-transmitting DTE -- 0 V, so I am not
sure what you are concerned about.
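
For illustration, here is a minimal sketch of what an input squelch buys
you, using the 160 mV 10BASE-T figure mentioned above (the numbers and
names are illustrative, not spec text):

# Sketch of an AUI/MAU receiver input squelch.  A differential input below
# the squelch level is treated as silence, so the 0 V presented by a
# powered-down DTE is simply ignored.

SQUELCH_MV = 160.0   # the 10BASE-T squelch figure quoted above

def receiver_sees_signal(differential_mv):
    """True only if the differential input exceeds the squelch level."""
    return abs(differential_mv) > SQUELCH_MV

print(receiver_sees_signal(0.0))     # powered-down DTE, 0 V line  -> False
print(receiver_sees_signal(50.0))    # low-level noise             -> False
print(receiver_sees_signal(700.0))   # assumed normal drive level  -> True
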
> 
> Also, in analyzing worst case physical implementations over 185 meters of 
> cable for 10BASE2, I just can't reproduce the threshold requirements for 
> collisions, or figure out where the 10 Ohm maximum DCR of the loop came 
> from.  Furthermore, a maximum of 30 nodes seems awfully conservative, as my 
> simulations don't show anything near the breaking point.  If anyone has 
> been on the committee from the early days, can they shed some light here?  
> Are the thresholds, DCR, and node count ad hoc, with compatibility for 
> existing vendors having ruled the day rather than the actual worst case 
> values expected from conformant transceivers and media?

Without seeing your calculation, I can't tell what factors you might have
left out.  The numbers that went into the 10BASE2 standard were the
actual ones from the worst-case calculation.  Perhaps you are leaving
out the effect of sending-end overshoot or impulse response of the 
collision detect filter.  Perhaps you are not calculating for the
worst case situation.  The thresholds, cable length, etc. were the result
of a worst case calculation.  The node count limitation was more
based on the potential for reflections from the nodes and from cable
impedance mismatches at each node adding and causing bit errors than
on the effect of the nodes on the collision threshold.  Nodes do
have a small effect on the collision threshold.
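
To give a feel for the bookkeeping involved, here is a crude DC-only model
of coax collision detect.  Every number in it is an illustrative assumption
(nominal 50 Ohm terminators, an assumed 37-45 mA average transmit current
range, the 10 Ohm maximum loop resistance), and the AC effects mentioned
above (overshoot, filter response) are exactly what it leaves out:

# Crude DC model of coax collision detect -- illustrative assumptions only.
# Check real values against the 10BASE2 clauses before relying on them.

R_TERM = 50.0                  # terminator at each end of the segment, ohms
R_LOOP = 10.0                  # maximum DC loop resistance of the segment, ohms
I_MIN, I_MAX = 0.037, 0.045    # assumed min/max average transmit current, A

def v_at_monitored_end(i_tx, r_to_monitor):
    """DC voltage magnitude across the monitored end's terminator due to one
    transmitter whose tap is r_to_monitor ohms of loop resistance away."""
    r_toward_far_end = (R_LOOP - r_to_monitor) + R_TERM
    r_toward_monitor = r_to_monitor + R_TERM
    i_toward_monitor = i_tx * r_toward_far_end / (r_toward_far_end + r_toward_monitor)
    return i_toward_monitor * R_TERM

one_strong_nearby = v_at_monitored_end(I_MAX, 0.0)         # must NOT trip CD
two_weak_far      = 2 * v_at_monitored_end(I_MIN, R_LOOP)  # must trip CD

print("one strong transmitter at the monitor : %4.0f mV" % (one_strong_nearby * 1000))
print("two weak transmitters across full loop: %4.0f mV" % (two_weak_far * 1000))
# The collision threshold has to fit between these two numbers over every
# allowed topology; overshoot and filter response narrow the window further.
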
> 
> Finally, do runt packets ever not get rejected at the physical layer?  
> This is an issue when implementing shifted collision thresholds for long 
> reach applications--Transmit mode CD allows a non-participating MAU to not 
> recognize the collision, relying on the runt packet so generated to be 
> discarded at the PHY.  I'm wondering if anyone has encountered problems 
> identifiable to Transmit-mode thresholds on extra-long segments.

Packets shorter than minimum packet size get rejected at the physical
layer.  That is not the reason for receive mode collision detect.
Receive mode collision detect in coax transceivers (10BASE5
and 10BASE2) is necessary in order to maintain accurate carrier sense 
for the Carrier Sense Multiple Access with Collision Detection (CSMA/CD) 
media access control.

During a collision, AC signals from the colliding nodes may cancel.
If the receiving nodes do not have receive mode collision detect, they
then fail to detect carrier during the cancellation.  Without accurate
carrier sense, the deferral algorithm does not work properly.  Lost
packets, CRC errors, and other such effects can occur.  I have seen 
signal cancellation occur on real networks.  It is not just a theoretical
idea.  It has a negative impact on network efficiency, though I doubt that
it would render a network inoperative.  The effects are worse on
a repeater, which is why 10BASE2 and 10BASE5 MAUs for repeaters
are required to implement receive mode collision detect.
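
A toy illustration of the cancellation effect (idealized square waves
standing in for Manchester-coded data, no cable model, purely to show why
DI activity alone is not a safe carrier sense):

# Two colliding 10 Mb/s signals whose transitions drift out of phase can
# sum to almost nothing for whole bit times.  Idealized waveforms only.

BIT_NS = 100.0   # one bit time at 10 Mb/s

def square_wave(t_ns, phase_ns):
    """Idealized +/-1 square wave, one cycle per bit time (roughly what
    Manchester-coded all-ones data looks like)."""
    return 1.0 if (t_ns + phase_ns) % BIT_NS < BIT_NS / 2 else -1.0

for offset_ns in (0.0, 25.0, 50.0):
    summed = [square_wave(t, 0.0) + square_wave(t, offset_ns)
              for t in range(0, 1000)]
    peak = max(abs(s) for s in summed)
    print("phase offset %4.1f ns -> peak summed amplitude %.1f" % (offset_ns, peak))

# At a 50 ns offset the idealized sum is zero everywhere: a receiver relying
# only on DI transitions would see "idle" in the middle of a collision.
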
> 
> Thanks for any and all input.
> 
> Dennis Yarak
> Now at Apple.
> ----------
Pat Thaler
Opinions expressed are my own and not necessarily those
of IEEE 802.3

rpw3@rigden.wpd.sgi.com (Rob Warnock) (08/04/90)

In article <9505@goofy.Apple.COM> yarak@apple.com (Dennis Yarak) writes:
+---------------
|                                         ...  Does anybody know, 
| practically speaking, whether a problem exists with reliability or 
| operation, for either the host or the MAU, when an MAU remains powered 
| (and therefore driving AUI signal circuits) while its AUI host is 
| completely shut down? Or perhaps someone can point to where the standard 
| covers this situation?
+---------------

The Ethernet Specification (don't know about the 802.3 spec) says that the
common-mode voltage on the transceiver cable pairs is set by the controller
(the DTE end of the AUI), not the transceiver (MAU), and that the common-mode
voltage shall be between 0 and 5 volts (ref'd to the "ground" pin in the
transceiver cable).  The way most transceivers handle this (and the required
D.C. isolation) is to use transformers on the transceiver end of the cable.
Thus, no problem.

Also, a completely powered-down station's output drivers will be effectively
shorted together (Vcc and Gnd both zero), and so it is unlikely [but possible,
one might suppose] for the signal on the transmit pair to exceed the required
threshold (a couple hundred millivolts differential) to turn on transmit in
the transceiver.

However... *while* powering down, it is quite likely that the station may
generate garbage into the transceiver, and thus into the net. But this
garbage is unlikely [one hopes!] to have a valid CRC...


-Rob

-----
Rob Warnock, MS-9U/510		rpw3@sgi.com		rpw3@pei.com
Silicon Graphics, Inc.		(415)335-1673		Protocol Engines, Inc.
2011 N. Shoreline Blvd.
Mountain View, CA  94039-7311

yarak@apple.com (Dennis Yarak) (08/08/90)

Thanks for the response, Pat.  Perhaps I could bother for a little more 
clarification?

You state:

*The normal state of lines from a powered down DTE would
*be the same as that from a non-transmitting DTE -- 0 V, so I am not
*sure what you are concerned about.

I see a potential problem with the DTE's circuits being banged on by the 
still-powered MAU when the DTE's silicon has no VCC.  Usually chips (an SIA 
or an all-in-one controller like the SONIC) specify a max. voltage at any 
pin of VCC + 0.5 Volts, so when VCC=0, the CD and RX lines from the MAU 
could cause this to be violated.  Technically this represents an AUI fault, 
so according to the standard the MAU need only work upon removal of the 
fault--so if, for example, the MAU objected to having its CD and RX lines 
protected (say via diodes to VCC and ground on the host, causing short 
circuits), and mucked up the backbone in so doing, it wouldn't be a 
violation of the standard.  I recognize this is hypothetical and takes a 
rather extreme view of what might happen, but it does seem to be 
overlooked in the standard.
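
To put the concern in numbers (all figures below are assumed for
illustration, not taken from any particular SIA or SONIC data sheet):

# Sketch of the absolute-maximum-rating concern: a "VCC + 0.5 V at any pin"
# clause evaluated with the DTE powered down while the MAU keeps driving.
# The 0.7 V peak is an assumed AUI receive-pair swing, not a spec value.

ABS_MAX_MARGIN_V = 0.5   # the "VCC + 0.5 V" style data-sheet clause
AUI_PEAK_V       = 0.7   # assumed peak voltage the powered MAU drives in

def pin_within_rating(vcc_v, v_pin_v):
    return v_pin_v <= vcc_v + ABS_MAX_MARGIN_V

print(pin_within_rating(5.0, AUI_PEAK_V))   # powered DTE           -> True
print(pin_within_rating(0.0, AUI_PEAK_V))   # DTE off, MAU still on -> False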

*The numbers that went into the 10BASE2 standard were the
*actual ones from the worst-case calculation.  Perhaps you are leaving
*out the effect of sending-end overshoot or impulse response of the 
*collision detect filter.  Perhaps you are not calculating for the
*worst case situation.

I had left out signal overshoot but am considering how best to model it... 
 At any rate, are you saying that the topology constraints were developed 
considering worst-case for ALL parameters?  That would be useful 
information.

*The node count limitation was more
*based on the potential for reflections from the nodes and from cable
*impedance mismatches at each node adding and causing bit errors than
*on the effect of the nodes on the collision threshold.

Yes, this was the specific thing I was modeling.  It really seems that 30 
is conservative.
Does jitter enter anywhere into this consideration?

*If the receiving nodes do not have receive mode collision detect, they
*then fail to detect carrier during the cancellation.  Without accurate
*carrier sense, the deferral algorithm does not work properly.

Gee, I hadn't heard this before.  The manufacturers don't say anything 
about this.  I was under the impression that carrier sense was set 
independently from collision thresholds, so moving one doesn't necessarily 
move the other in the implementations I've seen.  If bad things happen, 
shouldn't the standard just have insisted on RX mode collision detect for 
all implementations?

Anyway, I really do appreciate your responses here on 802.3 questions.  
It's great (and rare) to have the real experts on the net who take the time 
to straighten us out.

Regards,
  

Dennis Yarak
Now at Apple.

pat@hprnd.HP.COM (Pat Thaler) (08/11/90)

> 
> 
> *The normal state of lines from a powered down DTE would
> *be the same as that from a non-transmitting DTE -- 0 V, so I am not
> *sure what you are concerned about.
> 
> I see a potential problem with the DTE's circuits being banged on by the 
> still-powered MAU when the DTE's silicon has no VCC.  Usually chips (an 
> SIA or an all-in-one controller like the SONIC) specify a max. voltage at 
> any pin of VCC + 0.5 Volts, so when VCC=0, the CD and RX lines from the 
> MAU could cause this to be violated.  Technically this represents an AUI 
> fault, so according to the standard the MAU need only work upon removal 
> of the fault--so if, for example, the MAU objected to having its CD and 
> RX lines protected (say via diodes to VCC and ground on the host, causing 
> short circuits), and mucked up the backbone in so doing, it wouldn't be a 
> violation of the standard.  I recognize this is hypothetical and takes a 
> rather extreme view of what might happen, but it does seem to be 
> overlooked in the standard.

I checked data sheets of two serial chips.  Neither specified such a
requirement.  (One did specify such a requirement for its TTL inputs,
but not for its differential inputs, the CD and RX lines.  The other
had a VCC related requirement for its common mode voltage, but not
its differential voltage.  The inputs are normally transformer isolated
so the common mode voltage is established locally, not by the 
transmitter at the other end of the AUI.)  I don't recall data sheets 
for other serial chips I have worked with requiring protection from 
differential inputs when powered down.

I think it is fairly unlikely that a MAU would disturb the media in
the case you describe.  However, if you believe that a requirement
that the MAU not disturb the media during AUI faults should be added,
you could submit a revision request to the Maintenance Task Force of
IEEE 802.3.  Such a request should include the exact text you propose
adding or changing, the reason for the change, and the impact of the
change on existing implementations.  The Maintenance Task Force does
not write changes to the standard.  They evaluate whether the changes
are ready to ballot or need further work by the proposer.  When a
number of changes are ready for ballot, they manage the ballot process.
> 
> *The numbers that went into the 10BASE2 standard were the
> *actual ones from the worst-case calculation.  Perhaps you are leaving
> *out the effect of sending-end overshoot or impulse response of the 
> *collision detect filter.  Perhaps you are not calculating for the
> *worst case situation.
> 
> I had left out signal overshoot but am considering how best to model it... 
>  At any rate, are you saying that the topology constraints were developed 
> considering worst-case for ALL parameters?  That would be useful 
> information.

Yes, that is what I am saying.  
> 
> *The node count limitation was more
> *based on the potential for reflections from the nodes and from cable
> *impedance mismatches at each node adding and causing bit errors than
> *on the effect of the nodes on the collision threshold.
> 
> Yes, this was the specific thing I was modeling.  It really seems that 30 
> is conservative.
> Does jitter enter anywhere into this consideration?

Reflections can result in jitter.  30 was not conservative.  Remember that
each of those 30 taps can have a mismatch in the cable impedance plus 
the effect of the tap impedance.  30 was worst case; it assumed the
worst node spacing and impedance mismatches.  The probability of actually
getting this situation is very low.
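
To put a rough number on that, here is a worst-case-style tally of tap
reflections; the per-tap impedance figure is a made-up illustrative
tolerance, not a 10BASE2 value:

# Each tee/tap and local cable-impedance mismatch reflects a small fraction
# of the signal; worst case, the reflections arrive in phase and add.
# Illustrative numbers only.

Z0      = 50.0   # nominal coax impedance, ohms
Z_WORST = 52.0   # assumed worst-case impedance seen at a tap, ohms
N_TAPS  = 30

gamma = abs(Z_WORST - Z0) / (Z_WORST + Z0)   # reflection coefficient per tap
worst_case_sum = N_TAPS * gamma              # all reflections adding in phase

print("per-tap reflection coefficient      : %.3f" % gamma)
print("30 taps, reflections adding in phase: %.2f of the incident amplitude"
      % worst_case_sum)
# About 2% per tap is harmless on its own; thirty of them adding coherently
# is a substantial fraction of the signal, which is where the bit-error and
# jitter concern comes from.
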
> 
> *If the receiving nodes do not have receive mode collision detect, they
> *then fail to detect carrier during the cancellation.  Without accurate
> *carrier sense, the deferral algorithm does not work properly.
> 
> Gee, I hadn't heard this before.  The manufacturers don't say anything 
> about this.  I was under the impression that carrier sense was set 
> independently from collision thresholds, so moving one doesn't necessarily 
> move the other in the implementations I've seen.  If bad things happen, 
> shouldn't the standard just have insisted on RX mode collision detect for 
> all implementations?

The carrier sense in the PLS is, simplistically speaking, the logical
OR of two conditions: signal quality error (SQE, aka collision detect)
on the CI pair or input on the DI pair.  If there are no transitions
on the DI pair for approximately 1.5 bit times, the PLS may sense
the DI pair's state as input_idle (if the DI pair is HI for 2 bit
times, that is the start of idle signal).  This signal cancellation
on DI happens only during collisions.  Receive mode collision detect
ensures that SQE will be present to keep carrier sense on if dropouts
occur on DI.
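
Rendered as a sketch (this follows the "simplistically speaking"
description above, not the exact state machine in the standard):

# PLS carrier sense as the OR of DI activity and SQE on CI.  The 1.5 bit
# time idle window is the figure from the paragraph above.

BIT_TIME_NS    = 100.0
IDLE_WINDOW_NS = 1.5 * BIT_TIME_NS   # no DI transitions this long -> idle

def di_active(ns_since_last_di_transition):
    return ns_since_last_di_transition < IDLE_WINDOW_NS

def carrier_sense(ns_since_last_di_transition, sqe_on_ci):
    # Receive mode collision detect keeps sqe_on_ci asserted during a
    # collision, so carrier sense holds even if DI momentarily cancels.
    return di_active(ns_since_last_di_transition) or sqe_on_ci

print(carrier_sense(400.0, False))   # DI dropout, no RX mode CD   -> False
print(carrier_sense(400.0, True))    # same dropout, SQE asserted  -> True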

As long as repeater MAUs have receive mode collision detect, the effect
of this on the network is a small loss of efficiency.  It is not a
serious problem.

I don't think that they were aware of the problem when the initial
802.3 standard was drafted.  When we wrote the repeater standard, 
we were aware of it and were aware that it had more effect on
repeaters than DTEs, so we made RX mode collision detect required
for repeater MAUs but didn't require it on 10BASE2 and 10BASE5
MAUs for DTEs.

Pat Thaler