[comp.arch] >>>>> endian etc...

ssimmons@convex.com (Steve Simmons) (05/13/91)

I'm coming a little late into this conversation; however, I don't see
the benefit of having a bi-endian system.  That is, does the
user really benefit from being able to program in either
big endian or little endian on the same machine?  From my perspective, it has
always been a religious war, and the issue crops up only in distributed
processing with heterogeneous architectures.  That is, when two
machines of different byte order are conversing, one machine will
have to pay the cost of switching the byte order.  The OSI network
people solve this problem in the presentation layer.  This is probably
the best place to solve it, because byte ordering is only
one of many data-representation issues that must be reconciled.  That is,
the two machines must also agree on floating point format (IEEE vs. proprietary),
character set (ASCII, EBCDIC, Kanji, etc.), and so on.

>>A real solution would be to add byte lane swapping hardware to the chip.
>>This hardware would actually swap the bytes on every load and store
>>depending on the byte order bit. (it can stay in the status register, it
>>doesn't matter.) That way, memory is laid out so that bytes are always in
>>the same place and word operations swap them around as necessary. If this
>>were the case, buffers would not need to be byte swapped at all.

>>Surely, it is a real solution to have a bi-endian OS. By doing so, external
>>bus (and OS and IO) can have some fixed endian which may or may not be
>>different from user processes.

Sorry, I disagree with both of these approaches.  Forcing byte swapping
into the load-store operation takes us back to the CISC days.  With most
machines prohibiting loads that cross integral boundaries,
it seems worse to force an extra cycle on the load so that the byte
swapping can take place.

Allowing the OS to be bi-endian would require two sets of entry points
into the kernel.  One set of entry points would be responsible
for switching the byte ordering to the OS kernel's type.  That is not
as costly, since OS calls are already fairly expensive, but is it
still worth doing?  Certainly, you may have the binary IO problem
for a device that has a different byte ordering; however, let
the driver do all of the swapping.

Probably I am still missing the point, so feel free to enlighten
me.  Personally, I argue that it is only a networking problem (or a binary
IO problem).  The problem should not be solved in either the hardware
architecture or the OS.

Thank you.

						Steve Simmons