[comp.protocols.tcp-ip] "... and statistics"

brescia@PARK-STREET.BBN.COM (Mike Brescia) (04/14/88)

I think the Internet community would be better served if you could compare
these gateways in some way.  I want to point out that the LSI11 gateways on
the arpanet/milnet border will drop packets and count them for reasons other
than congestion, such as '1822 host dead' or 'net unreachable'.  Also the
"average" throughput is a measure of packets actually offered over the course
of the day or week reporting period, so that 10 packets per second really
means 864,000 packets in one day, not that the machine is somehow limited to
10 packets per second.  While I can cite higher numbers over the 15 minute
periods the statistics are sampled, both for arpanet-milnet gateways, and more
so for ethernet-ethernet gateways, that still is a measure of handling offered
load rather than limitation.  There is also no indication whether the packets
are long and saturate the communication lines, or short and saturate the
gateway processor.
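
As a minimal sketch of the averaging arithmetic described above (the figures
are the daily average and drop rate quoted in this thread, not new
measurements; Python is used only for concreteness):

    # Daily average rate vs. total packets offered over the reporting period.
    SECONDS_PER_DAY = 24 * 60 * 60      # 86400 seconds
    avg_pps = 10.0                      # reported daily average, packets/sec
    packets_per_day = avg_pps * SECONDS_PER_DAY
    print(packets_per_day)              # 864000.0 packets offered in one day

    # The drop rate is a percentage of that same offered load, so a 5.78%
    # drop rate at roughly 10 packets/sec is on the order of 50,000
    # packets dropped per day.
    print(0.0578 * packets_per_day)     # 49939.2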

Specifically on the msg from phil@brl...

     Some recently obtained per node averages for gateways:

     The seven BBN ARPA/MILNET Core gateways:
        10.04 packets/sec       5.78 % drop rate

(These gateways connect 2 wide area packet switch nets which have 56 kb and
9.6 kb lines.)

(As a comparison, here are 2 lsi11 gateways' statistics from yesterday)
(               tot sent    avg/sec(day)    peak/sec(15min) drop(day) )
(MILBBN          1.5e6       19.61           34.30             2.01%  )
(CORPE(lan-lan)  1.4e6       18.16           50.01             0      )

     The NSFNET Backbone "Fuzzball" gateways:
        15.55 packets/sec       0.18 % drop rate

(These 5(?) gateways connect to each other with 56 kb lines.)

     The Bld 394 BRL gateway:
        ~20 packets/sec         ~0.8 % drop rate


I would also look for more pleasing statistics from the arpa/mil gateways now
that the processors have all been upgraded from dec lsi 11/23 to 11/73.

Mike Brescia
BBNCC Gateway Development Group

Mills@UDEL.EDU (04/15/88)

Folks,

Further to Mike Brescia's comments, the mailbridges are connected to
virtual-circuit networks that may have some pretty stiff ideas on
flow control, while the NSFNET backfuzz are connected only to each other
via DDCMP serial lines and to Ethernets at each site. While the mailbridges
can get beat up rather badly if some j-random host or gateway keels over, the
backfuzz can get blown up by a co-Ether Cray. My point is that the (seven)
NSFNET critters face a quite different environment than the mailbridges
and each may have predominantly different drop mechanisms. Nevertheless,
I continue to think that engineered selective-preemption, source-quench
and priority queueing disciplines could help improve mailbridge service
in significant ways, and (you saw this coming) consideration of these
issues should be incorporated into the successors of both the LSI-11
mailbridges and the NSFNET backfuzz.
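
A minimal sketch of the selective-preemption and priority-queueing part of
that suggestion, assuming a single outbound queue with two priority classes
(source quench, which would additionally signal the sending host, is left
out).  The class split, buffer limit, and policy details are hypothetical
illustrations, not anything implemented in the mailbridges or fuzzballs:

    from collections import deque

    class PriorityDropQueue:
        """Toy outbound queue with two classes.  High-priority traffic is
        transmitted first, and when the buffer is full an old low-priority
        packet is preempted (dropped) rather than refusing a high-priority
        arrival."""

        def __init__(self, limit=64):       # hypothetical buffer limit
            self.limit = limit
            self.high = deque()             # e.g. routing updates, control traffic
            self.low = deque()              # ordinary datagrams

        def enqueue(self, pkt, high_priority=False):
            if len(self.high) + len(self.low) >= self.limit:
                if not high_priority or not self.low:
                    return False            # drop the arriving packet
                self.low.popleft()          # preempt oldest low-priority packet
            (self.high if high_priority else self.low).append(pkt)
            return True

        def dequeue(self):
            # Priority queueing: always drain the high-priority class first.
            if self.high:
                return self.high.popleft()
            return self.low.popleft() if self.low else None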

Dave

mike@BRL.ARPA (Mike Muuss) (04/15/88)

The BRL Gateway mentioned in Phil's message is a DEC PDP-11/70 running
the BRL-GATEWAY software under the LOS operating system.  It has
3 InterLan Ethernet interfaces, one ProNet-10 ring interface, one
ProNet-80 ring interface, and two ACC LH/DH-11 1822 interfaces, one
running to MILNET IMP 29 via a 480,000 bps ECU link, and the other
directly to BRLNET IMP #1.  This is BRL-GATEWAY #1; gateway #2 is
similar, with a Hyperchannel substituted for the ProNet-80 and only
one Ethernet.  The remaining 6-7 gateways on our campus are much
simpler (typically a ProNet-10, an LH/DH, and an Ethernet), and
are built on smaller processors (11/24, 11/34, 11/44).

The rates mentioned were average rates, intended merely to give folks
some impression of the levels of inter-building traffic on our campus.
We have measured 200 packets/sec as the maximum switching rate of our
gateways, when link-limiting is not a factor (i.e., using Ethernet or
ProNet on both sides of the gateway when testing).  This is a round-trip
measure, i.e., each packet traverses an interface in the gateway 4 times
(we use FLOODPING for this statistic). Many would prefer to claim this
as a peak rate of 400 packets/sec (2 interface traversals per "packet",
counting the ping responses as a second packet) -- we would say "400
monodes/sec" in this case.

This is not an attempt to put down the work of others, merely to report
on behavior of older gateways at BRL.  Clearly, the new commercial gateways
have performance several times higher than this, and clearly, it is not
a sensible idea to consider the purchase of PDP-11/70 systems for use
as gateways.  However, it makes a nice retirement job for our old friends,
the 11/70s.

Also, note that our campus is "traffic rich", with two Cray computers
(a Cray X-MP/48 and a Cray-2) that talk TCP/IP, and with 6 Alliant FX/8
super-minis, along with over 100 other machines, many of which exchange
high resolution 24-bit-deep color graphics images over the network on a
regular basis.

	Best,
	 -Mike

reschly@BRL.ARPA ("Robert J. Reschly Jr.") (04/16/88)

Mike writes:
>  ... Many would prefer to claim this as a peak rate of 400 packets/sec
> (2 interface traversals per "packet", counting the ping responses as a
> second packet) -- we would say "400 monodes/sec" in this case.

   Actually that should be "monograms" not "monodes".  The term
"monogram" is derived from the simile between "diodes" and "datagrams",
and their one-legged cousins.  (Ask Ron Natalie about "monodes" if
you're interested).

				Later,
				    Bob