[net.lan] Ethernet 1 vs. Ethernet 2 vs. 802.3 Specifications

lauck@bergil.DEC (11/09/84)

>It is well known that the 802.3 Standards Committee is well influenced by
>manufacturers who are entering the LAN market well after the emergence of
>Ethernet 1 as a standard, and they went out of their way to make Ethernet 2
>(802.3) incompatible with Ethernet 1, to negate the market-place lead-time
>that the existing Ethernet 1 manufacturers had gained.

As one of the reviewers of Ethernet 1 and one of the developers of both 
Ethernet 2 and 802.3, I would like to correct a mistaken assumption.  The
changes between Ethernet 1 and 2 (most of which were adopted by the
802.3 committee) were instituted to correct problems with Ethernet 1 and
to improve system reliability and maintainability and, where possible, to
permit lower-cost VLSI implementations.  An example of such a difference is
the Collision Detect Heartbeat signal (a.k.a. SQE Test).  This signal makes
it possible for a controller to detect that the transceiver collision
detect hardware (including, e.g., the transceiver cable) has failed.  This
protects the network from a runaway station which would otherwise not back off.
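
In driver terms, the check amounts to something like this (a sketch
only; the SQE Test itself is real, but neither spec defines a
programming interface, so the status bits and names below are
hypothetical):

        /* Hypothetical controller status bits, for illustration only. */
        #define TX_DONE   0x01          /* transmission completed       */
        #define SQE_SEEN  0x02          /* heartbeat pulse was observed */

        /* The transceiver should pulse collision detect briefly after
         * every transmission (the heartbeat).  If it doesn't, assume
         * the collision detect path is broken and take the station
         * off line rather than let it transmit without backing off.   */
        int check_transceiver(unsigned status)
        {
                if ((status & TX_DONE) && !(status & SQE_SEEN))
                        return -1;      /* collision detect has failed */
                return 0;
        }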

802.3 and Ethernet 2 are quite similar.  Most of the differences are
in terminology, which is needed so that 802.3 can conform to the overall
802 terminology.  The most significant difference is in the frame format,
a software (driver) issue in most implementations.  There are several
hardware specification differences.  Most of these are clarifications,
tightenings of certain specifications to improve system margins, or
relaxations of certain specifications to reduce product costs.  No doubt
there will be
further changes along similar lines in future versions.
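
To illustrate the driver issue (a sketch, not code from either spec):
the Ethernet type values assigned so far are all larger than the
maximum 802.3 data length, so a driver can in principle serve both
frame formats off the same 16-bit field.  The input routines named
below are hypothetical.

        extern void ether_input(unsigned type, unsigned char *data);
        extern void llc_input(unsigned len, unsigned char *data);

        #define ETH_HDR_LEN   14    /* dest(6) + source(6) + type/length(2) */
        #define MAX_8023_LEN  1500  /* maximum 802.3 data field length      */

        void frame_input(unsigned char *frame)
        {
                unsigned type_or_len = (frame[12] << 8) | frame[13];

                if (type_or_len > MAX_8023_LEN)
                        /* Ethernet: the field is a protocol type */
                        ether_input(type_or_len, frame + ETH_HDR_LEN);
                else
                        /* 802.3: the field is a data length, and the
                         * 802.2 LLC header follows it */
                        llc_input(type_or_len, frame + ETH_HDR_LEN);
        }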

As a member of the 802.3 committee when it was adopting the standard, I can 
assure you that compatibility with existing equipment was a key concern.  
There were many representatives from companies with products on the market.  
However, this was not the only concern.  The committee was also concerned 
with longer term technical issues, similar to those which motivated the
change from Ethernet 1 to Ethernet 2.  

sunny@sun.uucp (Sunny Kirsten) (11/11/84)

> The changes between Ethernet 1 and 2 (most of which were adopted by the
> 802.3 committee) were instituted to correct problems with Ethernet 1 and
> to improve the system reliability and maintainability...
> 
> 802.3 and Ethernet 2 are quite similar...  The most significant difference
> is in the frame format, a software (driver) issue in most implementations.
>
> As a member of the 802.3 committee when it was adopting the standard, I can 
> assure you that compatibility with existing equipment was a key concern.  

OK, so how do I add new Ethernet 2 spec systems to an existing Ethernet 1
network in a compatible fashion?  How does my (new) driver know, when it
receives a packet off the net, whether it's an Ethernet 1 or 2 packet?  How
do I interpret the packet type field, which changes from 2 bytes to 6, if I
remember correctly?  Has the definition of the 6-byte packet type field been
constrained to upward compatibility with the older Ethernet 1 2-byte type
field?  Or do I have to convert my entire network from Ethernet 1 to
Ethernet 2... i.e., can they coexist on the same cable?  If not, why not?
What does an Ethernet 2 compatible driver do to distinguish an old 2-byte
packet type field (followed by 4 bytes of packet data) from a new Ethernet 2
6-byte type field?
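
To make the question concrete: the header I know how to parse is the
following (my own declaration, not from any spec); what I can't write
down is the Ethernet 2 equivalent.

        struct ether1_header {
                unsigned char  dst[6];  /* destination address */
                unsigned char  src[6];  /* source address      */
                unsigned short type;    /* 2-byte packet type  */
        };                              /* packet data follows */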

I wasn't on the committee, but I know someone who was, who was the source of
the idea that there were purposeful incompatibilities foisted by other reps.  I
do not judge that data, just forward it.  It could be all wrong.  It's nice
to hear the opposing point of view...which is more encouraging for the future
of the Ethernet standard(s).  It's nice to have two datapoints versus one.

Has anyone got an Ethernet running which supports both standards simultaneously?
Or are they truly mutually incompatible?
-- 
mail ucbvax!sun!sunny decvax!sun!sunny ihnp4!sun!sunny

mark@cbosgd.UUCP (Mark Horton) (11/14/84)

What is the advantage, if any, of the pair of 8-bit type fields in
the IEEE 802.3 spec over the single large type field in
Ethernet?  It's hard for me to imagine a different type for the sender
than for the receiver.  In fact, I don't see why the sender needs to
record his own type in the packet at all.  In effect, with two reserved
bits, it reduces the number of higher-level protocols that can be
supported to 64 - not a lot.  Is there a central registry for these 64
values so
people will always choose the same values for the same protocol and
different ones for different protocols?
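
For reference, my understanding of where the pair of fields sits is
roughly this (a sketch; the field names are mine):

        /* 802.2 LLC header, which follows the 802.3 length field.
         * DSAP and SSAP are the two 8-bit "type" fields; with two
         * of the eight bits in each reserved, 64 assignable values
         * remain per field. */
        struct llc_header {
                unsigned char dsap;     /* destination service access point */
                unsigned char ssap;     /* source service access point      */
                unsigned char control;  /* LLC control field                */
        };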