[comp.dcom.lans] Higher speed LANs ?

mel1@houxa.UUCP (M.HAAS) (06/20/87)

Is anyone working on higher speed LANs?  We have lots of conversation
here on ISO, TCP/IP, and ISO vs. TCP/IP, but they all seem to be
content with the same old (what, 12 years now?) speed.  Ethernet at
10Meg, twisted pair stuff at 1 or 2Meg, fiber backbones at 80 and
100Meg.  But, disk to disk rates are still down at 20K to 100K bytes
per second.

What are the issues that keep us down at such low rates?  The disks
themselves are now in the 500K to 2Meg bytes per second range. 
Backplanes and memory are up in that range.  Certainly, interface
logic can go that fast (it does to disks, why can't it to LANs?).

In theory one can stuff bits down an Ethernet at 1,250K bytes per
second.  Why is it such a struggle to get 200K bytes per second
onto it?

It seems to me that diskless workstations, bitmapped terminals with
windows, cartography, medical and satellite image processing, weather
map transmission, video/music/speech processing, and lots of other
applications need high speed transfer of individual files, rather
than the large numbers of users sending small items of data that the
current LANs handle so nicely.

Are there fundamental problems with TCP/IP that limit its application
to higher speed use?  Are the new ISO protocols better for high speed
file transfer?  Or, is the problem in the operating systems or
interface to the computer bus?  Can the high speed backbone fiber
networks be made to handle individual computer to computer or
computer to terminal traffic?  Or, is there some fundamental issue
that keeps them in the LAN to LAN bridging application?

I know that there are proprietary protocols and links that go much
faster, but is it necessary to throw out the standards and ability
to internet and use diverse equipment and applications to get the
higher speed?

   Mel Haas  ,  odyssey!mel

darrelj@sdcrdcf.UUCP (Darrel VanBuer) (06/24/87)

In article <541@houxa.UUCP> mel1@houxa.UUCP (M.HAAS) writes:
>Is anyone working on higher speed LANs?  We have lots of conversation
>What are the issues that keep us down at such low rates?  The disks
>themselves are now in the 500K to 2Meg bytes per second range. 
>Backplanes and memory are up in that range.  Certainly, interface
>logic can go that fast (it does to disks, why can't it to LANs?).
>
>In theory one can stuff bits down an Ethernet at 1,250K bytes per
>second.  Why is it such a struggle to get 200K bytes per second
>onto it?
>
>   Mel Haas  ,  odyssey!mel

There are two (almost) fundamental problems:
Few computers can do interesting amounts of data processing faster than that
(e.g. 10 Mbit/sec = 30K lines of code to compile per second).  Even raw
images are only a few million bits, so you can send them faster than you can
look at them (it's not fast enough for moving pictures, a la TV, but LANs
aren't trying to be cable TV).
The second problem is that few operating systems can schedule more than a
few hundred I/O activities per second.  In our experience, a VAX 780 running
4.2 saturates near 100 to 150 packets per second in PUP or TCP/IP, almost
independent of packet size.  Data throughput thus can range from a low of a
few thousand bits per second with one byte of user data per packet (and
under 100K bits per second on the Ether), to maybe 2 million bits per second
between processes writing and reading the SAME buffer (i.e. absolutely NO
data processing).  The VAX CPU is the bottleneck (at least with the existing
networking software).  It does NOT seem to be a misimplementation, since
Xerox equipment running different protocols and different hardware (with
microcode support) and totally different system software has similar
performance.
Every measurement of networks in real usage environments (not the stress
tests you write papers and make claims about) shows Ethernets with a few
dozen workstation- to VAX-size machines running at a few hundred thousand
bits per second in prime time, with occasional transients up to a million
bits per second.  Our network has collisions on less than 1% of packets
sent.
When we get to the point where we have a Cray-1 on every desk and all take
up data intensive jobs like CAD/CAM and image processing, faster networks
will be necessary; right now they just drive up the cost to increase the
network idle time from 90% to 95 or 99%.

-- 
Darrel J. Van Buer, PhD; unisys; 2525 Colorado Ave; Santa Monica, CA 90406
(213)829-7511 x5449        N6PFO/AA        darrel@CAM.UNISYS.COM   or
...{allegra,burdvax,cbosgd,hplabs,ihnp4,sdcsvax}!sdcrdcf!darrelj

montnaro@sprite.steinmetz (Skip Montanaro) (06/24/87)

In article <541@houxa.UUCP> mel1@houxa.UUCP (M.HAAS) writes:
>Is anyone working on higher speed LANs?
>What are the issues that keep us down at such low rates?
>Why is it such a struggle to get 200K bytes per second onto it?
>Are there fundamental problems with TCP/IP that limit its application
>to higher speed use?  Are the new ISO protocols better for high speed
>file transfer?  Or, is the problem in the operating systems or
>interface to the computer bus?  Can the high speed backbone fiber
>networks be made to handle individual computer to computer or
>computer to terminal traffic?  Or, is there some fundamental issue
>that keeps them in the LAN to LAN bridging application?

You (and others interested in such topics) may want to read Greg
Chesson's paper in the recent Summer 1987 USENIX Proceedings, entitled
"Protocol Engine Design". He discusses the reasons for the current
bottlenecks, at least as he sees them. He sees the "software and
system overheads" as a "limiting factor in the 10Mb/s case". Thus
moving to higher speed cabling (e.g., 100 Mb/s fiber optic systems)
will not provide any substantial increase in throughput, since the
software can't punch out the higher level information any faster. The
primary method for increasing throughput has got to be placing
more and more of the networking protocols in hardware. The protocol
engine he's been working on is just such a beast. 

         Skip|  ARPA:      montanaro@ge-crd.arpa
    Montanaro|  UUCP:      montanaro@desdemona.steinmetz.ge.com
(518)387-7312|  GE DECnet: advax::"montanaro@desdemona.steinmetz.ge.com"

henry@utzoo.UUCP (Henry Spencer) (06/25/87)

> Is anyone working on higher speed LANs? ...

The wave of the (near) future is probably FDDI, an impending ANSI (?)
standard 100-Mbit/s fiber network.  There will presumably be interest
in pushing its speed up further as well.

> What are the issues that keep us down at such low [net] rates?  The disks
> themselves are now in the 500K to 2Meg bytes per second range. 
> Backplanes and memory are up in that range.  Certainly, interface
> logic can go that fast (it does to disks, why can't it to LANs?).

The interface hardware, on the whole, does/can go that fast.  However...
Can you say the nasty word "protocols"?  The problem is that neither the
disks nor the other things you mention are contending with the problems
of sending data over long distances on multiple-host unreliable gatewayed
networks.  If you work hard at it and have the right hardware, you can get
a good fraction of the Ethernet bandwidth by using very specialized software
that ignores most of these problems, but the solutions don't generalize
easily.  Many of the issues also either aren't well understood or haven't
been well understood until fairly recently.  To slightly paraphrase an
observation that has been made in other connections:  "networks don't really
pose any fundamentally new problems, they just break all the old kludgey
special-purpose solutions".

It is also worth mentioning that most current protocol implementations,
like most current software in general, really haven't had much attention
paid to performance in their design or implementation.

> Are there fundamental problems with TCP/IP that limit its application
> to higher speed use?

I'm told that the answer is more or less "yes".  Remember that TCP/IP dates
from the Neolithic age of network protocols (which wasn't very long ago).

> Are the new ISO protocols better for high speed file transfer?

Fat chance.

> ...  Can the high speed backbone fiber
> networks be made to handle individual computer to computer or
> computer to terminal traffic?  Or, is there some fundamental issue
> that keeps them in the LAN to LAN bridging application?

No reason why they can't work for more local traffic, but they are relatively
costly and there has been no standard, which has discouraged such use.  The
standardization of FDDI should help.

> I know that there are proprietary protocols and links that go much
> faster, but is it necessary to throw out the standards and ability
> to internet and use diverse equipment and applications to get the
> higher speed?

The combination of FDDI hardware and lightweight transport protocols
supported by special hardware -- see Greg Chesson's paper in the latest
Usenix for a very interesting example -- has a reasonable chance of giving
us the speed without sacrificing interconnection and internetworking.  It
*will* take a little while before everybody is set up to do this, though.
-- 
"There is only one spacefaring        Henry Spencer @ U of Toronto Zoology
nation on Earth today, comrade."   {allegra,ihnp4,decvax,pyramid}!utzoo!henry

schoff@nic.nyser.net (Martin Lee Schoffstall) (06/26/87)

I understand that AMD has been doing an initial chip set for
the FDDI "spec".  I understand that there have been some
problems, would anyone care to comment?

Marty Schoffstall
schoff@nic.nyser.net

ron@topaz.rutgers.edu (Ron Natalie) (06/26/87)

I think you are wrong to indicate that the sole reason for the slow
effective throughputs is protocols.  The two major problems in UNIX
today (for either networking or disk access) are IMPLEMENTATION rather
than protocol design problems.  First, there is too much memory to
memory copying in UNIX.  Studies show that much of the disk throughput
is lost through this, and the implementation of IP on UNIX is even worse.
Data is copied from memory to memory three times in UNIX.  None of this
is the fault of the protocol design.

Also, bringing up TCP/IP for disk to disk throughput is not really relevant.
TCP is designed for streams, not disk sectors.  Notice that the datagram
protocols in the DOD suite (e.g. UDP) are used for most of the remote
disk/file system approaches.

As I already pointed out, many computers' I/O systems are where the
bottleneck currently is, especially on micros.  They can't achieve anything
near a megabyte/sec throughput on their interfaces.  This, more than
anything else, is the current bottleneck.  New protocols are inevitable,
especially for higher performance applications, but I don't think we've
run out of performance in the IP protocol suite yet.

-Ron

henry@utzoo.UUCP (Henry Spencer) (06/28/87)

> I think you are wrong to indicate that the sole reason for the slow
> effective throughputs is protocols...

Well, I should have elaborated that a little.  The reason for low
throughputs (compared to the hardware) is the presence of protocols
between hardware and user.  The biggest problem with protocols on most
current systems is that they are implemented inefficiently.  One can
definitely do better.  Eventually the protocol designs do impose limits
(or rather, obstacles that are increasingly hard to overcome), although
few existing implementations suffer from this yet.
-- 
Mars must wait -- we have un-         Henry Spencer @ U of Toronto Zoology
finished business on the Moon.     {allegra,ihnp4,decvax,pyramid}!utzoo!henry