[comp.protocols.misc] "network-byte-order"

cyrus@hi.unm.edu (Tait Cyrus) (01/22/88)

I am working with some ethernet hardware and am interested in knowing 
WHY most of the "ethernet chip" manufacturers have a mode where you
can have the "ethernet chip" read/write words (16 bits) from/to
memory in either vax/320xx/8086 order or 680xx order.

I know that the 680xx is already in 'network-byte-order' so doing
anything with the ethernet is REAL easy.  My contention is that
having a "byte reverser" mode for 8086/vax/320xx machines does
you no good.  If I were ONLY transmitting words (16 bits), then
this mode would be GREAT, but since TCP/IP has ints/words/chars/etc
this mode is useless.  For example, if I were using a vax and I had a
character string, then I would have to reverse every pair of characters
so that when the "ethernet chip" read this string (reversing them as
it went) they would get put on the network in the correct order.  If this
mode is useless, why provide it?

Maybe there are some "other" protocols out there that this mode would
work with.  If so, what are they?  How do other manufacturers handle
"network-byte-order"?  Why doesn't everyone build an I/O engine that
uses a 680xx machine to handle the ethernet?

I don't wish to start a controversy over which architecture is best.
I do wish, though, to gain a better understanding of why there seems
to be a discrepancy between ethernet hardware and ethernet
software (protocols).

Thanks in advance for any comments.

-- 
    @__________@    W. Tait Cyrus   (505) 277-0806
   /|         /|    University of New Mexico
  / |        / |    Dept of Electrical & Computer Engineering 
 @__|_______@  |       Parallel Processing Research Group (PPRG)
 |  |       |  |       UNM/LANL Hypercube Project
 |  |  hc   |  |    Albuquerque, New Mexico 87131
 |  @.......|..@    
 | /        | /     e-mail:      
 @/_________@/        cyrus@hc.dspo.gov