[comp.protocols.misc] Higher speed LANs ?

mel1@houxa.UUCP (M.HAAS) (06/20/87)

Is anyone working on higher speed LANs?  We have lots of conversation
here on ISO, TCP/IP, and ISO vs. TCP/IP, but they all seem to be
content with the same old (what, 12 years now?) speed.  Ethernet at
10Meg, twisted pair stuff at 1 or 2Meg, fiber backbones at 80 and
100Meg.  But, disk to disk rates are still down at 20K to 100K bytes
per second.

What are the issues that keep us down at such low rates?  The disks
themselves are now in the 500K to 2Meg bytes per second range. 
Backplanes and memory are up in that range.  Certainly, interface
logic can go that fast (it does to disks, why can't it to LANs?).

In theory one can stuff bits down an Ethernet at roughly 1,250K bytes
per second (10 Mbit/s).  Why is it such a struggle to get 200K bytes
per second onto it?

It seems to me that diskless workstations, bitmapped terminals with
windows, cartography, medical and satellite image processing, weather
map transmission, video/music/speech processing, and lots of other
applications need high speed transfer of individual files, rather
than the large numbers of users sending small items of data that the
current LANs handle so nicely.

Are there fundamental problems with TCP/IP that limit its application
to higher speed use?  Are the new ISO protocols better for high speed
file transfer?  Or, is the problem in the operating systems or
interface to the computer bus?  Can the high speed backbone fiber
networks be made to handle individual computer to computer or
computer to terminal traffic?  Or, is there some fundamental issue
that keeps them in the LAN-to-LAN bridging application?

I know that there are proprietary protocols and links that go much
faster, but is it necessary to throw out the standards and ability
to internet and use diverse equipment and applications to get the
higher speed?

   Mel Haas  ,  odyssey!mel

montnaro@sprite.steinmetz (Skip Montanaro) (06/24/87)

In article <541@houxa.UUCP> mel1@houxa.UUCP (M.HAAS) writes:
>Is anyone working on higher speed LANs?
>What are the issues that keep us down at such low rates?
>Why is it such a struggle to get 200K bytes per second onto it?
>Are there fundamental problems with TCP/IP that limit its application
>to higher speed use?  Are the new ISO protocols better for high speed
>file transfer?  Or, is the problem in the operating systems or
>interface to the computer bus?  Can the high speed backbone fiber
>networks be made to handle individual computer to computer or
>computer to terminal traffic?  Or, is there some fundamental issue
>that keeps them in the LAN-to-LAN bridging application?

You (and others interested in such topics) may want to read Greg
Chesson's paper in the recent Summer 1987 USENIX Proceedings, entitled
"Protocol Engine Design". He discusses the reasons for the current
bottlenecks, at least as he sees them. He sees the "software and
system overheads" as a "limiting factor in the 10Mb/s case". Thus
moving to higher speed cabling (e.g., 100 Mb/s fiber optic systems)
will not provide any substantial increase in throughput, since the
software can't punch out the higher level information any faster. The
primary gains in throughput will have to come from placing
more and more of the networking protocols in hardware. The protocol
engine he's been working on is just such a beast. 

         Skip|  ARPA:      montanaro@ge-crd.arpa
    Montanaro|  UUCP:      montanaro@desdemona.steinmetz.ge.com
(518)387-7312|  GE DECnet: advax::"montanaro@desdemona.steinmetz.ge.com"

lamaster@pioneer.arpa (Hugh LaMaster) (06/24/87)

In article <541@houxa.UUCP> mel1@houxa.UUCP (M.HAAS) writes:

>Is anyone working on higher speed LANs?

Yes.

>..  We have lots of conversation
>here on ISO, TCP/IP, and ISO vs. TCP/IP, but they all seem to be
>content with the same old (what, 12 years now?) speed.  Ethernet at
>10Meg, twisted pair stuff at 1 or 2Meg, fiber backbones at 80 and
>100Meg.  But, disk to disk rates are still down at 20K to 100K bytes
>per second.

I believe that there is a fundamental problem in using a general purpose
reliable network protocol like TCP/IP or most of the ISO/OSI protocols for
VERY high speed data transfers.  But not all of the cited problem is
TCP/IP, and not that many people need rates as high as they think they do.
In fact, if many people knew what rates they were actually getting, they
might well be surprised.

Anyway, TCP/IP could easily support packet sizes of 32KB, which would make a
very big difference in speed (my guess is that you could do 1-2MB/sec easily
just by increasing the packet size to 32K).  Hyperchannel will support packets
that big (Hyperchannel has other problems, though).  But what packet size will
Proteon 80Mbit/s rings support, or the new FDDI standard?  I'm not sure, but
the 1500-byte Ethernet limit is too small.  Even if you can get the data
rates up there, you burn up too many CPU cycles doing it.  Your TCP/IP needs
to be smart enough to negotiate larger packet sizes.  The default limit still
has to be 576 bytes or whatever for dealing with PCs, the internet, etc.
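
To make that concrete, here is a rough sketch (mine, with the made-up
function name make_bulk_socket and the 32KB figure taken from the paragraph
above) of the one knob a 4.xBSD application has today: asking for bigger
socket buffers before a bulk transfer.  It does not change the packet size
TCP puts on the wire, which is up to the TCP implementation and the
interface MTU, but larger buffers let TCP keep more data in flight.

#include <stdio.h>
#include <sys/types.h>
#include <sys/socket.h>

/* Create a TCP socket and ask the kernel for larger send/receive buffers. */
int make_bulk_socket(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    int bufsize = 32 * 1024;             /* illustrative 32KB request */

    if (fd < 0) {
        perror("socket");
        return -1;
    }
    if (setsockopt(fd, SOL_SOCKET, SO_SNDBUF,
                   (char *)&bufsize, sizeof(bufsize)) < 0)
        perror("setsockopt SO_SNDBUF");  /* the kernel may refuse or clamp it */
    if (setsockopt(fd, SOL_SOCKET, SO_RCVBUF,
                   (char *)&bufsize, sizeof(bufsize)) < 0)
        perror("setsockopt SO_RCVBUF");
    return fd;
}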

But I would like to add to the question and ask - How much bandwidth do you
really need, and for which purposes?  You also have to distinguish between
individual rates and aggregate throughput [for example, two 1 MIPS machines on
an Ethernet may do .25 Mbit/s between them, when the Ethernet itself is
running at 3 Mbit/s aggregate].
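
For anyone curious what rates they are actually getting, a timing loop along
these lines will tell you (a sketch; measure_rate is a made-up name, and fd
is assumed to be an already-connected stream socket or any other writable
descriptor).  The answer is usually well below the nominal speed of the
cable.

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/time.h>

/* Push `total` bytes down `fd` and report the effective transfer rate. */
double measure_rate(int fd, long total)
{
    char buf[8192];
    long sent = 0;
    struct timeval t0, t1;
    double secs;
    int n;

    memset(buf, 0, sizeof(buf));
    gettimeofday(&t0, (struct timezone *)0);
    while (sent < total) {
        n = write(fd, buf, sizeof(buf));
        if (n <= 0) {
            perror("write");
            break;
        }
        sent += n;
    }
    gettimeofday(&t1, (struct timezone *)0);
    secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_usec - t0.tv_usec) / 1e6;
    if (secs <= 0.0)
        secs = 1e-6;                     /* guard against a zero interval */
    printf("%ld bytes in %.2f sec = %.1f Kbytes/sec\n",
           sent, secs, sent / secs / 1024.0);
    return sent / secs;
}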

Also, one of the reasons that packets must be kept small on lower speed
networks is so that response time can be kept reasonable.  One of the
advantages of using higher speed (e.g. optical fiber) LANS will be that packet
sizes can be increased without increasing the access time for an individual
user.
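
A quick back-of-the-envelope calculation (mine, with illustrative packet
sizes and link speeds) shows the tradeoff: the time one maximum-size packet
ties up the medium is the delay every other station sees.

#include <stdio.h>

/* Print how long one packet of each size occupies a link of each speed. */
int main(void)
{
    double rates[] = { 10e6, 80e6, 100e6 };  /* bits/sec: Ethernet, 80Mb ring, FDDI */
    long   sizes[] = { 1500, 4096, 32768 };  /* bytes per packet */
    int i, j;

    for (i = 0; i < 3; i++)
        for (j = 0; j < 3; j++)
            printf("%6ld-byte packet at %3.0f Mbit/s holds the medium for %6.2f ms\n",
                   sizes[j], rates[i] / 1e6, sizes[j] * 8 / rates[i] * 1000.0);
    return 0;
}

A 32KB packet monopolizes a 10 Mbit/s Ethernet for about 26 ms, but only
about 2.6 ms at 100 Mbit/s, which is why the faster networks can afford
larger packets.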

>
>Are there fundamental problems with TCP/IP that limit its application
>to higher speed use?  Are the new ISO protocols better for high speed
>file transfer?  Or, is the problem in the operating systems or
>interface to the computer bus?  Can the high speed backbone fiber
>networks be made to handle individual computer to computer or
>computer to terminal traffic?  Or, is there some fundamental issue
>that keeps them in the LAN-to-LAN bridging application?
>

In general, ISO will not be better.  The usual limitation is the operating
system and CPU, which can't switch between walking (running your job) and
chewing gum (handling network interrupts) too many times per second, and may
not be able to process all the network level stuff fast enough to get
acknowledgements back promptly, etc.  An alternative approach is to put all the
interrupts in a special processor.  Excelan makes boards that do just that.
Many people, including people at Ames, are looking at high speed fiber as a
way to increase network speed.

>I know that there are proprietary protocols and links that go much
>faster, but is it necessary to throw out the standards and ability
>to internet and use diverse equipment and applications to get the
>higher speed?

Some of those proprietary protocols and links don't run as fast as people
think.  If you want channel speeds, you are going to have to look at the whole
data transfer process very carefully: how many interrupts you generate,
whether the processes involved are kernel or user processes, what the
protocol, packet sizes, and acknowledgement and window strategy are, and so
on.  One way to increase the speed between two specific machines would be to
reimplement ftp with a special high speed channel backdoor that avoids TCP/IP
altogether, while running normally to other nodes.  You may have to do some
extra checksumming to guarantee data integrity, but you don't have to worry
about data out of order, routing, etc.  Acknowledgement is much faster, and
your data transfers can be in much larger blocks.
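
For example, the extra checksumming could be as simple as the same 16-bit
ones-complement sum that TCP and IP already use, run by the application over
each large block (a sketch; block_checksum is just an illustrative name, and
a CRC would catch more errors if you can afford it):

#include <stddef.h>

/* 16-bit ones-complement checksum over one block of data. */
unsigned short block_checksum(const unsigned char *data, size_t len)
{
    unsigned long sum = 0;

    while (len > 1) {                    /* add up 16-bit words */
        sum += ((unsigned long)data[0] << 8) | data[1];
        data += 2;
        len -= 2;
    }
    if (len == 1)                        /* pad a trailing odd byte */
        sum += (unsigned long)data[0] << 8;
    while (sum >> 16)                    /* fold the carries back in */
        sum = (sum & 0xffff) + (sum >> 16);
    return (unsigned short)~sum;
}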

If you need the reliability and generality of a complete network protocol, you
are going to have to pay the price for it in slower speed and larger
consumption of CPU cycles.


  Hugh LaMaster, m/s 233-9,  UUCP {seismo,topaz,lll-crg,ucbvax}!
  NASA Ames Research Center                ames!pioneer!lamaster
  Moffett Field, CA 94035    ARPA lamaster@ames-pioneer.arpa
  Phone:  (415)694-6117      ARPA lamaster@pioneer.arc.nasa.gov

("Any opinions expressed herein are solely the responsibility of the
author and do not represent the opinions of NASA or the U.S. Government")

henry@utzoo.UUCP (Henry Spencer) (06/25/87)

> Is anyone working on higher speed LANs? ...

The wave of the (near) future is probably FDDI, an impending ANSI (?)
standard 100-Mbit/s fiber network.  There will presumably be interest
in pushing its speed up further as well.

> What are the issues that keep us down at such low [net] rates?  The disks
> themselves are now in the 500K to 2Meg bytes per second range. 
> Backplanes and memory are up in that range.  Certainly, interface
> logic can go that fast (it does to disks, why can't it to LANs?).

The interface hardware, on the whole, does/can go that fast.  However...
Can you say the nasty word "protocols"?  The problem is that neither the
disks nor the other things you mention are contending with the problems
of sending data over long distances on multiple-host unreliable gatewayed
networks.  If you work hard at it and have the right hardware, you can get
a good fraction of the Ethernet bandwidth by using very specialized software
that ignores most of these problems, but the solutions don't generalize
easily.  Many of the issues also either aren't well understood or haven't
been well understood until fairly recently.  To slightly paraphrase an
observation that has been made in other connections:  "networks don't really
pose any fundamentally new problems, they just break all the old kludgey
special-purpose solutions".

It is also worth mentioning that most current protocol implementations,
like most current software in general, really haven't had much attention
paid to performance in their design or implementation.

> Are there fundamental problems with TCP/IP that limit its application
> to higher speed use?

I'm told that the answer is more or less "yes".  Remember that TCP/IP dates
from the Neolithic age of network protocols (which wasn't very long ago).

> Are the new ISO protocols better for high speed file transfer?

Fat chance.

> ...  Can the high speed backbone fiber
> networks be made to handle individual computer to computer or
> computer to terminal traffic?  Or, is there some fundamental issue
> that keeps them in the LAN-to-LAN bridging application?

No reason why they can't work for more local traffic, but they are relatively
costly and there has been no standard, which has discouraged such use.  The
standardization of FDDI should help.

> I know that there are proprietary protocols and links that go much
> faster, but is it necessary to throw out the standards and ability
> to internet and use diverse equipment and applications to get the
> higher speed?

The combination of FDDI hardware and lightweight transport protocols
supported by special hardware -- see Greg Chesson's paper in the latest
Usenix for a very interesting example -- has a reasonable chance of giving
us the speed without sacrificing interconnection and internetworking.  It
*will* take a little while before everybody is set up to do this, though.
-- 
"There is only one spacefaring        Henry Spencer @ U of Toronto Zoology
nation on Earth today, comrade."   {allegra,ihnp4,decvax,pyramid}!utzoo!henry

schoff@nic.nyser.net (Martin Lee Schoffstall) (06/26/87)

I understand that AMD has been working on an initial chip set for
the FDDI "spec", and that there have been some problems.  Would
anyone care to comment?

Marty Schoffstall
schoff@nic.nyser.net

ron@topaz.rutgers.edu (Ron Natalie) (06/26/87)

I think you are wrong to indicate that the sole reason for the slow
effective throughputs is protocols.  The two major problems in UNIX
today (for either networking or disk access) are IMPLEMENTATION rather
than protocol design problems.  First, there is too much memory-to-memory
copying in UNIX.  Studies show that much of the disk throughput is lost
through this, and the implementation of IP on UNIX is even worse: data is
copied from memory to memory three times.  None of this is the fault of
the protocol design.
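
To see where copies like that come from, consider the classic user-level
relay loop (a sketch of the general pattern, with the made-up name relay;
the exact copy count depends on the implementation): every byte goes
kernel-to-user on the read and user-to-kernel again on the write, before
the driver moves it once more to the device.

#include <stdio.h>
#include <unistd.h>

/* Copy everything from one descriptor to another through a user buffer. */
long relay(int from_fd, int to_fd)
{
    char buf[8192];             /* user-space staging buffer */
    long total = 0;
    int n;

    while ((n = read(from_fd, buf, sizeof(buf))) > 0) {  /* copy: kernel to user */
        if (write(to_fd, buf, n) != n) {                  /* copy: user to kernel */
            perror("write");
            break;
        }
        total += n;
    }
    return total;               /* the driver still moves the data once more */
}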

Also, bringing up TCP/IP for disk-to-disk throughput is not really relevant.
TCP is designed for streams, not disk sectors.  Notice that the datagram
protocols in the DOD suite (e.g. UDP) are used for most of the remote
disk/file system approaches.
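
The datagram style fits because a block request and its reply are each
self-contained, something like this sketch (the request format, the name
request_block, and the already-bound UDP socket in fd are all made up for
illustration, not taken from any real remote-disk protocol):

#include <stdio.h>
#include <string.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>

/* Send one request datagram and read back one reply carrying the block. */
int request_block(int fd, struct sockaddr_in *server, long blockno,
                  char *block, int blocksize)
{
    char req[64];
    struct sockaddr_in from;
    socklen_t fromlen = sizeof(from);

    sprintf(req, "READ %ld", blockno);          /* hypothetical request format */
    if (sendto(fd, req, strlen(req), 0,
               (struct sockaddr *)server, sizeof(*server)) < 0) {
        perror("sendto");
        return -1;
    }
    /* one reply datagram carries the whole block; retries are up to the caller */
    return recvfrom(fd, block, blocksize, 0,
                    (struct sockaddr *)&from, &fromlen);
}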

As I already pointed out, many computers' I/O systems are where the bottleneck
is currently, especially on micros.  They can't achieve anything near a
megabyte/sec of throughput on their interfaces.  This, more than anything
else, is the current bottleneck.  New protocols are inevitable, especially
for higher performance applications, but I don't think we've run out of
performance in the IP protocol suite yet.

-Ron

henry@utzoo.UUCP (Henry Spencer) (06/28/87)

> I think you are wrong to indicate that the sole reason for the slow
> effective throughputs is protocols...

Well, I should have elaborated that a little.  The reason for low
throughputs (compared to the hardware) is the presence of protocols
between hardware and user.  The biggest problem with protocols on most
current systems is that they are implemented inefficiently.  One can
definitely do better.  Eventually the protocol designs do impose limits
(or rather, obstacles that are increasingly hard to overcome), although
few existing implementations suffer from this yet.
-- 
Mars must wait -- we have un-         Henry Spencer @ U of Toronto Zoology
finished business on the Moon.     {allegra,ihnp4,decvax,pyramid}!utzoo!henry