[comp.protocols.tcp-ip] When is an ethernet full?

ssw@cica.cica.indiana.edu (Steve Wallace) (05/29/90)

When is an Ethernet full?  We have a campus backbone composed of a
Chipcom 10 Mbit/s Ethernet over broadband and a UB 5 Mbit/s Ethernet
over broadband (buffered repeaters).  The UB and Chipcom networks are
bridged to form one logical network.  According to our Network General
Sniffer, we constantly maintain about 10-15 percent utilization
(300-1000 packets per second).  How much more traffic can this network
support before performance falls off measurably?  Any ideas?

Thanks,

Steven Wallace
Indiana University
wallaces@ucs.indiana.edu

craig@bbn.com (Craig Partridge) (05/29/90)

> When is an Ethernet full?  ....
> According to our Network General Sniffer,
> we constantly maintain about 10-15 percent utilization (300-1000
> packets per second).  How much more traffic can this network support
> before performance falls off measurably?  Any ideas?

The best place I know of to start answering this question is Boggs,
Mogul and Kent's article in Proc. of SIGCOMM '88 pp. 222-233.  The
gist of that article is that you can drive the Ethernet all the way
to its rated capacity assuming you are careful in the way you lay out
your network, and all your systems have good Ethernet hardware.

In practice of course, many of the systems won't have good Ethernet
hardware (for example, Jacobson's talk at SIGCOMM '88 indicated he'd
found an Ethernet chipset that could only go about 6 Mbits/sec).  So
you need to find some people out there with some good practical experience
about when some of their systems start breaking down, to figure out when
your network will die due to poor hardware/software.

Craig

rsmith@vms.macc.wisc.edu (Rusty Smith, MACC) (05/30/90)

In article <1141@cica.cica.indiana.edu>, ssw@cica.cica.indiana.edu (Steve Wallace) writes...

> 
>When is an Ethernet full?  We have a campus backbone composed of a
>Chipcom 10 Mbit/s Ethernet over broadband and a UB 5 Mbit/s Ethernet
>over broadband (buffered repeaters).  The UB and Chipcom networks are
>bridged to form one logical network.  According to our Network General
>Sniffer, we constantly maintain about 10-15 percent utilization
>(300-1000 packets per second).  How much more traffic can this network
>support before performance falls off measurably?  Any ideas?
>

We have a similar setup here.  There are about 50 Chipcoms connected to
our broadband backbone.  All but 3 are coupled to DEC LANBridges to
keep local traffic local.  We have seen similar one-minute averages,
with peaks of 3-4 times as much.  As far as we can tell, everyone is
satisfied with the performance at these numbers.  The performance
problems we have had were not caused by traffic volume.

Rusty Smith			Internet:  rsmith@vms.macc.wisc.edu
MACC Data Communications	Bitnet:    rsmith@wiscmacc
(608)  263-6307			Univ. of Wisconsin @ Madison

hedrick@athos.rutgers.edu (Charles Hedrick) (05/30/90)

>When is an Ethernet full?  We have a campus backbone composed of a
>Chipcom 10 Mbit/s Ethernet over broadband and a UB 5 Mbit/s Ethernet
>over broadband (buffered repeaters).  The UB and Chipcom networks are
>bridged to form one logical network.  According to our Network General
>Sniffer, we constantly maintain about 10-15 percent utilization
>(300-1000 packets per second).  How much more traffic can this network
>support before performance falls off measurably?  Any ideas?

I'd like to see you get data with a bit more time resolution.  It's a
bit unusual for networks to run at 10-15% all the time, day and night.
More typically, there's a long-term variation over the course of a
day, with more traffic during the day than night, and short-term
variation as people boot machines, transfer big files, or do other
things that cause a short-term demand for bandwidth.  If you're
running at 10% 24 hours a day, this suggests either a very odd mix of
users and applications, or that most of your bandwidth is going to
broadcast packets produced by rwhod or things of that nature.  I have
heard of networks with a constant broadcast load of that sort.  In
that case, replacing some or all of your bridges with routers might be
more useful than trying to increase the bandwidth.  In general I'd
expect a peak-to-average ratio of about 10 to 1.  That is, if you are
averaging 10% usage, you are probably using 100% during brief periods.
So you're about at capacity.  If your 10% is made up mostly of a
continuous background of broadcast packets, this might not be the
case.  But if you've really got that much broadcast traffic, you've
got other problems: your hosts are all spending significant CPU time
dealing with it.  If your 10% represents the maxima of your peaks,
rather than a true average, then you're probably in good shape and
still have some room to grow.
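
A quick way to tell which case you're in is to pull per-second samples
off the monitor and compare the average against the peak.  A minimal
sketch in Python (the one-value-per-second log format here is a
hypothetical stand-in for whatever your sniffer actually exports):

# Compare average utilization to peak over per-second samples.
with open("utilization.log") as f:   # hypothetical export: one percent value per line
    samples = [float(line) for line in f if line.strip()]

avg = sum(samples) / len(samples)
peak = max(samples)
print(f"average {avg:.1f}%  peak {peak:.1f}%  peak/average {peak / avg:.1f}")
# A ratio near 1 suggests a constant floor (e.g. broadcasts);
# a ratio near 10 suggests bursty traffic already brushing capacity.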

ssw@cica.cica.indiana.edu (Steve Wallace) (05/30/90)

A little more info.

     We have about 45 IP subnets, all behind cisco routers.  We
route AppleTalk Phase I and DECnet, and bridge IPX.  Between
9am and 5pm we see a pretty steady 10-15 percent load.
Sometimes this drops to 2 percent, but only for very brief
periods.

Steven Wallace
wallaces@ucs.indiana.edu

jim@syteke.be (Jim Sanchez) (05/30/90)

One thing you want to be SURE to remember is that the Ethernet-over-
broadband stuff has a significant distance limitation.  If your campus
cable system is as large as I suspect, then the 10broad36 channel is
probably working more as CSMA than CSMA/CD, and the effective channel
capacity is ~2 Mbit/s, not 10 Mbit/s.  That is why we use 802.4 for
backbone applications; it also uses much less bandwidth.  The UB stuff
is also just CSMA (if my memory serves me).  In both cases, the
effective channel capacity is approximately 35% of the data rate.  If
you calculate the maximum packet rate on an 802.3 channel, it is about
13,000 packets per second; scale accordingly, and I don't think you are
overloaded, based on your numbers.  However, this is a tricky thing to
determine.
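
For reference, that ceiling is easy to work out from the spec.  A
back-of-the-envelope sketch in Python (frame overheads from 802.3:
8-byte preamble, 64-byte minimum frame, 9.6-microsecond interframe
gap; the textbook result for minimum-size frames is ~14,880
packets/sec, so ~13,000 is the right ballpark for slightly larger
frames):

# Upper bound on 10 Mbit/s 802.3 packet rate for minimum-size frames.
PREAMBLE_BITS = 8 * 8      # preamble + start-frame delimiter
MIN_FRAME_BITS = 64 * 8    # minimum 802.3 frame
IFG_BITS = 96              # 9.6 us interframe gap at 10 Mbit/s

bits_per_packet = PREAMBLE_BITS + MIN_FRAME_BITS + IFG_BITS  # 672 bits
max_pps = 10_000_000 / bits_per_packet
print(f"wire ceiling: {max_pps:,.0f} packets/sec")           # ~14,880

# Scaled by the ~35% effective-capacity estimate for plain CSMA:
print(f"plain-CSMA estimate: {0.35 * max_pps:,.0f} packets/sec")  # ~5,200

Against either figure, 300-1000 packets per second is nowhere near
the limit.
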
-- 
Jim Sanchez          | jim@syteke.be (PREFERRED)
                     | OR {sun,hplabs}!sytek!syteke!jim
Hughes LAN Systems   | OR uunet!mcsun!ub4b!syteke!jim 
Brussels  

mogul@jove.pa.dec.com (Jeffrey Mogul) (05/31/90)

       When is an Ethernet full?  ....
       According to our Network General Sniffer,
       we constantly maintain about 10-15 percent utilization (300-1000
       packets per second).  How much more traffic can this network support
       before performance falls off measurably?  Any ideas?
    
    The best place I know of to start answering this question is Boggs,
    Mogul and Kent's article in Proc. of SIGCOMM '88 pp. 222-233.  The
    gist of that article is that you can drive the Ethernet all the way
    to its rated capacity assuming you are careful in the way you lay out
    your network, and all your systems have good Ethernet hardware.

Thanks for the plug, Craig ... but I think you have misconstrued
our results, at least in trying to apply them to the question at hand.

True, "you can drive the Ethernet all the way to its rated capacity"
(well, at least 95% of the way) if what you are trying to do is to
make full use of the bandwidth.  This is NOT the same thing as saying
that you will have a useful network if the average load is 95%.  In
fact, as I found out last night (while running some TCP benchmarks
on our lab's main Ether), if you use 90%+ of the network between one
set of hosts, other hosts are going to suffer badly.

The reason is queueing delay.  Think of an arbitrary host with a
stream of packets it wants to send.  If the load on the network is
100%, then its output queue will grow forever and the effective delay
will become infinite.  Actually, I think you can show that the
asymptote for infinite delay happens at a load below 100%, for any
finite inter-arrival time for new packets.
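
To see how sharply delay grows, an M/M/1 queue is a rough but
instructive stand-in.  A minimal sketch in Python (the 10 Mbit/s rate
and 1500-byte average frame are my assumptions; Ethernet is not really
M/M/1, but the shape of the curve is the point):

# M/M/1: mean time in system is 1/(mu - lambda), which blows up as the
# arrival rate lambda approaches the service rate mu.
LINK_BPS = 10_000_000               # assumed 10 Mbit/s wire
FRAME_BITS = 1500 * 8               # assumed average frame size
mu = LINK_BPS / FRAME_BITS          # service rate, frames/sec (~833)

for util in (0.10, 0.50, 0.90, 0.95, 0.99):
    lam = util * mu                 # arrival rate
    delay = 1.0 / (mu - lam)        # mean transmission + queueing delay, s
    print(f"load {util:4.0%}: mean delay {delay * 1000:8.2f} ms")

Under these assumptions, mean delay at 99% load is roughly a hundred
times worse than at 10%, even though the wire is still "working."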

What then is the "right" level at which to declare an Ethernet "full"?
That depends.  If you are running a real-time application that can
never accept a delay > 1.2 milliseconds, then you may not be able
to use an Ethernet at all.  If you are only using the net to carry
non-interactive traffic (like electronic mail) then you might get
away with an average load above 90%.  In the usual "NFS+xterm+other
stuff" kind of environment that we run, I've seen 5-second load
averages above 50% without hearing users complain, although I would
probably complain myself if the load stayed this high all the time.
If your average load (calculated over one-second intervals, as is
the usual practice) is only 15%, then you are probably not going
to notice any problems.

The point of our paper is not that you should run your net at 50%
(or 70% or 90%) utilization; we even said ``Don't try this at home.''
The point is that an Ethernet is no worse when carrying high loads
than other 10Mbit/sec multi-access LANs.
    
    In practice of course, many of the systems won't have good Ethernet
    hardware (for example, Jacobson's talk at SIGCOMM '88 indicated he'd
    found an Ethernet chipset that could only go about 6 Mbits/sec).  So
    you need to find some people out there with some good practical experience
    about when some of their systems start breaking down, to figure out when
    your network will die due to poor hardware/software.

In general, even the hosts with the full-speed ethernet interfaces
won't be using them at full speed (because most protocols are
flow-controlled at some level, and the ultimate data sources and
sinks seldom run at 10 Mbits/sec).  If you are worried about worst-case
scenarios, such as somebody like David Boggs or myself running network
benchmarks on your net, then you might want to pay attention to the
capabilities of your host interfaces.  But in most cases, your network
load comes from a composite of many slower sources, and what matters
is how many hosts you have and what fraction of them are going to
be active at once.
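
To put toy numbers on that, a sketch in Python (every figure here,
host count, active fraction, and per-host rate, is a made-up
assumption for illustration, not a measurement):

# Aggregate offered load from many slow sources, few active at once.
n_hosts = 200          # assumed hosts on the segment
p_active = 0.05        # assumed fraction transmitting at any instant
host_bps = 500_000     # assumed per-host source rate, 0.5 Mbit/s

offered = n_hosts * p_active * host_bps
print(f"expected offered load: {offered / 10_000_000:.0%} of a 10 Mbit/s Ethernet")

Doubling the host count or the active fraction, not upgrading
individual interfaces, is what moves this number.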

-Jeff

wsmith@cs.umn.edu (Warren Smith [Randy]) (05/31/90)

In article <56724@bbn.BBN.COM> craig@ws6.nnsc.nsf.net.BBN.COM (Craig Partridge) writes:
>> When is an Ethernet full?  ....
>
>The best place I know of to start answering this question is Boggs,
>Mogul and Kent's article in Proc. of SIGCOMM '88 pp. 222-233.  The
>gist of that article is that you can drive the Ethernet all the way
>to its rated capacity assuming you are careful in the way you lay out
>your network, and all your systems have good Ethernet hardware.
>
....
>
>Craig

One thing to remember - while Boggs, Mogul and Kent's article shows that
the Ethernet will run right up to saturation (~95%, depending on packet
size and number of stations), it does not fully address the matter of
delay.  Delay increases as your Ethernet becomes more heavily loaded.
BMK's delay measurements do not include queueing delays, and thus
underestimate the real delays that will be seen by many hosts (and
users) on your network.

I have seen real Ethernets running at more than 40% load (averaged over
one hour!), with bursts up in the 80-95% range.  Most of those nets
aren't around any more - they've been split to improve performance.
These networks were (and are) growing, so they would have had to be
split at some point anyway.  When you should split depends on what your
network's needs are, and on its growth rate.

-Randy
-- 
Randy Smith
wsmith@umn-cs.cs.umn.edu
...!rutgers!umn-cs!wsmith