[comp.protocols.tcp-ip] Some Internet gateway performance data

Mills@UDEL.EDU (03/24/88)

Folks,

You might be interested in the following summary of data on NSFNET Backbone
Fuzzball performance, as well as a comparison with the ARPANET/MILNET
gateways, which operate in a similar state of massive traffic insult. These
data update those reported in the SIGCOMM 87 paper and include the period
since the source-quench policy described previously (and in a paper just
submitted) was implemented.

In July 1987 after the Fuzzball selective-preemption policy was installed, but
before the source-quench policy was implemented, the throughput per trunk
ranged from 0.3 to 4.0 packets per second, with a mean of 2.0. At that time
the total trunk throughput was 31 pps with a drop rate due to all causes of 0.09
percent. In March 1988, several months after quench was installed, the trunk
throughput ranges from 2.6 to 9.2 packets per second, with a mean of 4.7. At
this time the total trunk throughput is 71 pps with a drop rate of 0.48 percent and
with 0.27 percent of all trunk packets resulting in a quench.

The following table shows the performance of the NSFNET Backbone Fuzzballs for
periods ending on 21 March. These numbers include Ethernets as well as all
56-kbps trunks.

	Node   Mpkt    UpHr      PPS     Drop   Quench
	----------------------------------------------
	 1     3.49     100     9.75     0.17     0.04
	 2    18.48     260    19.72     0.38     0.39
	 3     6.33     102    17.33     0.16     0.17
	 4    15.08     262    15.99     0.31     0.45
	 5    19.36     266    20.20     0.85     0.18
	 6     2.97      48    17.35     0.17     0.14
	 7     7.02     262     7.44     0.71     0.04
	----------------------------------------------
	Total 72.74    1299    15.55     0.34     0.18

The "Mpkt" column shows the aggregate throughput in megapackets for all output
queues, including serial lines and Ethernet. The "UpHr" column shows the
aggregation interval in hours. The "PPS" column, down through the "Total" row,
shows the resulting throughput: the "Mpkt" column divided by the "UpHr" column,
adjusted to packets per second. The "Drop" and "Quench" columns
show the percentage of packets dropped and quenched respectively. The value
shown in the "Total" row for these columns is the average of the column
itself. The existing NSFNET Backbone clearly meets the performance objective
of less than one percent drop rate.
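
For concreteness, the arithmetic behind the "PPS" column can be checked with a
few lines of Python. The figures are copied from the table above; the snippet
is only an illustration, not part of the Fuzzball measurement code, and small
discrepancies against the printed totals are rounding in the published numbers.

	# Sanity check of the "PPS" column: megapackets converted to packets,
	# divided by uptime converted to seconds.
	rows = {
	    # node: (Mpkt, UpHr), copied from the table above
	    1: (3.49, 100), 2: (18.48, 260), 3: (6.33, 102), 4: (15.08, 262),
	    5: (19.36, 266), 6: (2.97, 48), 7: (7.02, 262),
	}
	for node, (mpkt, uphr) in rows.items():
	    print(f"node {node}: {mpkt * 1e6 / (uphr * 3600.0):.2f} pps")
	total_mpkt = sum(m for m, _ in rows.values())
	total_uphr = sum(h for _, h in rows.values())
	print(f"total: {total_mpkt * 1e6 / (total_uphr * 3600.0):.2f} pps")  # about 15.5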

For comparison the following table shows the performance of the ARPANET/MILNET
gateways for the week ending 21 March. So far as can be determined, each
gateway is connected to two 56-kbps data paths.

		ID     Mpkt    UpHr      PPS     Drop
		-------------------------------------
		 1     4.83     144     9.32     7.26
		 2     6.15     144    11.86     8.18
		 3     7.06     146    13.48     7.40
		 4     7.03     139    14.08    12.87
		 5     3.14     145     6.00     0.83
		 6     3.75     109     9.54     3.23
		 7     5.07     146     9.66     2.85
		 8     2.76     129     5.95     3.65
		-------------------------------------
		Total 39.79    1101    10.04     5.78

As evident from these figures, the NSFNET Backbone Fuzzballs carry over fifty
percent more throughput per node than the ARPANET/MILNET gateways, with a drop
rate more than ninety percent lower. Note that this
comparison may not be fair in two ways: first, the ARPANET/MILNET gateways are
connected to networks, not trunks, which can have large dispersive delays;
second, the NSFNET Backbone Fuzzballs are connected to Ethernets, which
provide no insulation against unruly traffic generators.
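
The percentages quoted above follow directly from the two "Total" rows; as a
quick check with the figures as printed:

	# Comparison of the two "Total" rows, as printed above.
	nsfnet_pps, nsfnet_drop = 15.55, 0.34      # NSFNET Backbone
	arpanet_pps, arpanet_drop = 10.04, 5.78    # ARPANET/MILNET
	gain = (nsfnet_pps / arpanet_pps - 1) * 100          # about 55 percent greater
	reduction = (1 - nsfnet_drop / arpanet_drop) * 100   # about 94 percent less
	print(f"throughput per node: {gain:.0f}% greater")
	print(f"drop rate: {reduction:.0f}% less")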

From measurements made last July and reported in the SIGCOMM paper last year,
the selective-preemption policy made a whale of a difference. The case for the
source-quench policy installed recently is less clear, although there is
recent evidence that it is in fact effective for those hosts that respond to
quench messages. However, even though the crafted policies and Fuzzball
implementations may be suboptimal and change next Monday, the data above
should be convincing beyond doubt that fairness policies and queue disciplines
similar to these will be necessary for future generations of connectionless
packet switches and gateways.
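
To make "queue disciplines similar to these" a little more concrete, here is a
toy Python sketch of a selective-preemption style discipline: when the output
queue fills, a packet is preempted from the source holding the most buffer
space, rather than dropping the new arrival. This is only an illustration of
the general idea; it is not the Fuzzball implementation, nor necessarily the
exact policy described in the SIGCOMM 87 paper, and the queue limit and data
structures are assumptions.

	from collections import deque

	class FairOutputQueue:
	    # Toy output queue with a selective-preemption flavor. Not the
	    # Fuzzball code; a generic illustration only.
	    def __init__(self, limit=50):
	        self.limit = limit       # assumed per-interface packet budget
	        self.queue = deque()     # FIFO of (source, packet)

	    def enqueue(self, source, packet):
	        if len(self.queue) >= self.limit:
	            # Count buffered packets per source and preempt from the
	            # source currently holding the most buffer space.
	            counts = {}
	            for src, _ in self.queue:
	                counts[src] = counts.get(src, 0) + 1
	            heaviest = max(counts, key=counts.get)
	            # Drop that source's most recently queued packet.
	            for i in range(len(self.queue) - 1, -1, -1):
	                if self.queue[i][0] == heaviest:
	                    del self.queue[i]
	                    break
	        self.queue.append((source, packet))

	    def dequeue(self):
	        return self.queue.popleft() if self.queue else None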

Dave