[comp.protocols.tcp-ip] PSN 7 End-to-End question.

robert@spam.istc.sri.COM (Robert Allen) (01/29/88)

    Recently I've been thinking about the changes in host connections to 
    the ARPAnet, and some questions have come up I can't answer for myself.
    If anyone can answer these publicly or privately I'd appreciate it.  If
    the questions are too basic for this group, any pointers to RFCs which are
    pertinent would be welcome.  Thanks,
	Robert Allen, robert@spam.istc.sri.com
	415-859-2143 (work phone, days)

    ----------------------------------------------------------

    The questions are primarily two: first, why was X.25 chosen as the
    replacement for 1822 for all new IMP connections? second, why was
    an End-to-End protocol deemed necessary in the network?

    Regarding the first question: What exactly are the options for X.25
    connections to the ARPAnet?  Is HDLC *or* LAPB acceptable, or only
    HDLC?  Is LAPB deemed necessary for a reliable connection or is HDLC
    sufficient?  Simply stated, are two reliable layers under IP needed
    or is 1 sufficient?


    Regarding the second question: First, I assume that the ETE protocol
    is IMP-to-IMP.  In the older releases of IMP software I assumed that it
    operated on a reliable link-to-link basis (except in cases of extreme
    problems), in other words, my concept of the IMP model comes from chapter
    5 of the Tanenbaum "Computer Networks" book.  Under the ETE model as I
    understand it, there is an 'ETE ACK' for the packets sent.  Questions
    I have include

	"is the ACK on a per-IP-packet basis or on a per-IMP-packet basis?",

	"if ETE is used on an per-IP-packet basis can fragmentation occur?",

	"is the ETE protocol used on all packets, or just those that request/
	    require it?",

    and of course

	"why was the ETE protocol deemed necessary in a network that already
	    had IMP-to-IMP reliability (this leads me to believe that ETE is
	    only used for some packets, namely those that require extreme
	    reliability)?"

    Picture:
        host    IMP					IMP	host
	----	----					----	----
	|  |====|  |<---------->NETWORK<--------------->|  |====|  |
	----    ----					----	----
	   |X.25|~~~~~~~~~~~~~~~~~ETE protocol~~~~~~~~~~~~~|X.25|

pogran@CCQ.BBN.COM (Ken Pogran) (01/29/88)

Robert,

One of your questions has an easy answer; the answer to the other
is a lot more complex.

X.25 was chosen to replace 1822 for new IMP connections because
it is an international standard that has broad support among host
vendors.  It's DoD policy to adopt commercial standards whenever
possible.  This says nothing one way or the other, of course,
about the relative technical merits of X.25 and 1822 as access
protocols to a wide-area packet network.

Why was an End-to-End protocol deemed necessary in the
network?  Well, first:  the network has ALWAYS had an end-to-end
protocol.  It's not something that's been made a big deal of.
The new end-to-end protocol introduced in PSN 7 is the first
change in this protocol since the relatively early days of the
deployed ARPANET.

The purpose of the end-to-end (EE) protocol is, primarily, to
manage the INTERNAL resources of the network in response to the
demand for services from the network's hosts.  It's called an
end-to-end protocol because it operates between the source PSN
(the PSN to which the host originating a given message is
attached) and the destination PSN (the PSN to which the host
which is the destination of the message is attached).  The EE
functionality of the PSNs of the network is in addition to the
"store-and-forward" functionality that occurs from one PSN to the
next.

The "IMP-to-IMP" protocol accomplishes reliable transmission from
one PSN to the next; the EE protocol manages resource utilization
for a flow of data "across" the network from source to
destination PSN.  It also provides the mechanism by which a PSN
is able to inform a source host about what happened to his
message -- whether it was delivered to the destination (host gets
a RFNM, in 1822 parlance), or not (host gets a Destination Dead,
or Transmission Incomplete, in the event something in the network
failed while the message was in transit).

EE ACKs are on a per-host-message basis, which in an IP world
translates into per-IP-packet.  (Under the "new EE" of PSN 7, EE
ACKs can be aggregated when being sent across the net for
efficiency, but are sorted out at the source PSN for proper
presentation -- via individual RFNMs in 1822, for example -- to
the source host.)

In the ARPANET, messages from hosts can be up to (approx) 8K bits
long and are fragmented by the PSN into packets of (approx) 1K
bits.
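
To make the arithmetic concrete, here is a small sketch of the fragmentation
bookkeeping.  The 8K-bit and 1K-bit figures are the approximations quoted
above, rounded to powers of two; nothing below is an exact PSN constant.

    # Rough sketch of message fragmentation at the source PSN; the limits are
    # the approximate figures quoted above, not real PSN parameters.
    MAX_MESSAGE_BITS = 8 * 1024    # approx. largest host message
    PACKET_BITS      = 1 * 1024    # approx. internal packet size

    def fragments_needed(message_bits):
        """How many internal packets one host message becomes."""
        assert message_bits <= MAX_MESSAGE_BITS
        return -(-message_bits // PACKET_BITS)      # ceiling division

    # One EE ACK -- surfaced to the host as one RFNM -- per host message:
    for msg_bits in (500, 4000, 8000):
        print(msg_bits, "bits ->", fragments_needed(msg_bits),
              "packets, 1 EE ACK / RFNM")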

The EE protocol is employed for all host traffic in the network;
after all, it's used to manage the resources of the network
itself.  Which answers the question of why the EE protocol was
deemed necessary in a network that already has PSN-to-PSN
reliability.

Hope this helped.

Regards,
 Ken Pogran

mckenzie@LABS-N.BBN.COM (Alex McKenzie) (01/29/88)

Robert,

One addition to Ken Pogran's message: several of the designers of the ARPANET
packet switches made by BBN presented a description of the design issues we
were concerned about, and our approaches to solving them, at the 1975 "AFIPS
National Computer Conference".  Our paper is on pages 161-175 of the
Proceedings.  There is quite a detailed discussion of what we felt were the
appropriate design choices for the Node-Node and Source-Destination
transmission procedures, and the trade-offs between them.  Although I am one of
the authors of the paper, and therefore probably have an exaggerated sense of
its importance, I recommend it to you if you want to understand why the ARPANET
design choices were made as they were.

Regards,
Alex McKenzie, BBN
 

cheriton@PESCADERO.STANFORD.EDU ("David Cheriton") (01/30/88)

Presumably, the EE and Imp-to-imp protocols also consume the INTERNAL
resources of the network while they are doing this managing.  Is there
any evidence to assure us that these protocols are a net performance
win over a simple, lean and mean best-efforts datagram service, which is
all that IP/TCP wants and can use?
  What is the best reference to understand how these protocols manage the
network resources, particularly in dealing with network congestion?
Thanks,
David Cheriton

narten@PURDUE.EDU (Thomas Narten) (02/01/88)

> Presumably, the EE and Imp-to-imp protocols also consume the INTERNAL
> resources of the network while they are doing this managing.  Is there
> any evidence to assure us that these protocols are a net performance
> win over a simple, lean and mean best-efforts datagram service, which is
> all that IP/TCP wants and can use?

Total "best effort" systems work well only as long as the switches and
communications lines run below maximum capacity. Once maximum capacity
is reached or exceeded, problems arise. Many of them are solvable, but
they must be addressed, and the resulting system may no longer be "lean
and mean".

1) Best effort systems rely totally on hosts for congestion
management. That is, transport protocols are responsible for
congestion control and congestion avoidance.

In practice, existing protocols don't play by those rules. For
instance, only recently (Van Jacobson's work) have TCP implementations
started reacting (in a positive way) to congestion.  UDP based
protocols implement no congestion control at all. I cringe at the
thought of running NFS across the Internet.
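
For concreteness, the kind of host-side reaction Van Jacobson's work
introduced looks roughly like the sketch below: grow the window gently while
ACKs arrive, cut it sharply on loss.  The constants are illustrative and not
taken from any particular implementation.

    # Illustrative additive-increase / multiplicative-decrease window logic,
    # in the spirit of Van Jacobson's TCP work; all constants are made up.
    class AimdWindow:
        def __init__(self, mss=512, max_window=8 * 512):
            self.mss = mss
            self.max_window = max_window
            self.cwnd = mss                  # start small: one segment

        def on_ack(self):
            # things are going well: open the window by about one segment
            self.cwnd = min(self.cwnd + self.mss, self.max_window)

        def on_loss(self):
            # treat a lost packet as a congestion signal and back off hard
            self.cwnd = max(self.mss, self.cwnd // 2)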

2) Without TOS priority, all datagrams are considered equal. Routing
protocols suffer just as much from congestion as other protocols, but
the consequences are much more severe.  Nowhere in the Internet (that I
know of) are the datagrams that are used for the exchange of routing
information given precedence over others.

3) As John Nagle describes, congestion collapse is inevitable in
situations where transport protocols send more packets into the
network in response to congestion. Furthermore, I know of no
point-to-point networks that are designed to run steady-state at or
above maximum capacity. They all assume that the network will be
lightly loaded. Some networks (e.g. the ARPANET) take steps that
guarantee this.

4) In order for best effort systems to work well, transport protocols
must practice congestion avoidance. Congestion control deals with
congestion once it exists, congestion avoidance is aimed at keeping
congestion from ever reaching the point where the congestion control
mechanism kicks in. Congestion avoidance aims to run the network at
maximum throughput *AND* minimum delay.

Consider TCP: its window tends to open as far as it can (8 packets at
1/2K each). The network is forced to buffer the entire window of
packets.  If the two endpoints are separated by a slow speed link,
most of the packets will be buffered there. Congestion and delay
increase.  Congestion could be reduced without reducing throughput by
decreasing the size of the window. In reality, TCP won't do anything
unless a packet is dropped, or a source quench is received. A new
mechanism is needed to distinguish high delays due to
congestion from those due to the transmission media (e.g. satellite
vs. terrestrial links).

Raj Jain's work deserves close study in this regard.
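
A back-of-the-envelope example of the slow-link argument above.  The 8-packet,
512-byte window is the one mentioned; the 9.6 kb/s bottleneck and 100 ms base
round trip are numbers I made up for illustration.

    # What an oversized window costs when a slow link is the limit.
    window_bytes   = 8 * 512        # the window described above
    bottleneck_bps = 9600           # assumed slow link
    base_rtt       = 0.100          # assumed unloaded round trip, seconds

    bdp_bytes   = bottleneck_bps / 8.0 * base_rtt    # what the path can hold
    queued      = max(0, window_bytes - bdp_bytes)   # the rest sits in a queue
    added_delay = queued * 8.0 / bottleneck_bps

    print("bandwidth-delay product: %.0f bytes" % bdp_bytes)
    print("queued at the slow link: %.0f bytes" % queued)
    print("extra queueing delay:    %.2f s" % added_delay)

Throughput is pinned by the 9.6 kb/s link either way; all the oversized window
buys is about three seconds of extra queueing delay.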

5) In the present Internet, congestion avoidance is a dream. That
pushes congestion control/avoidance into the gateways and physical
networks. In many systems, congestion control simply resorts to
dropping the packet that just arrived that can't be put anywhere. Some
of the issues:

a. Fairness: He who transmits the most packets gets the most
resources. This discourages well-tuned protocols, and encourages
antisocial behavior.

b. "Fair queuing" schemes limit the resources that a particular class
of packets can allocate. For instance, Dave Mills' selective
preemption scheme limits buffer space according to source IP
addresses.

There are fairness issues here too. All connections and protocols from
the same source are lumped into the same class. Does the "right" thing
happen when a TCP connection competes with a NETBLT connection?

c. Queuing strategies increase rather than decrease the per-packet
overhead. Furthermore, the information used to group datagrams into
classes must be readily available. In the worst case, you have to be
able to parse higher layer packet headers.

d. These queuing strategies rely entirely on local rather than global
information. It may be that 90% rather than 20% of the packets should
be discarded; a link two hops away might be even more congested. 
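
To make (b) concrete, here is a toy sketch of per-source buffer limiting.
This is only my illustration of the general idea, not Dave Mills' actual
selective-preemption algorithm, and the limit of 4 buffers is invented.

    # Toy per-source buffer limit; an illustration of the idea in (b) above.
    from collections import defaultdict

    MAX_BUFFERED_PER_SOURCE = 4          # invented limit

    buffered = defaultdict(int)          # source IP address -> packets queued

    def accept(packet):
        """Queue the packet unless its source already holds its share."""
        if buffered[packet["src"]] >= MAX_BUFFERED_PER_SOURCE:
            return False                 # drop (or preempt) instead of queueing
        buffered[packet["src"]] += 1
        return True

    def on_forwarded(packet):
        buffered[packet["src"]] -= 1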

6) Because of (1) above, physical network designers should give
considerable thought to congestion control/avoidance.

The ARPANET practices congestion avoidance. That is one of the biggest
reasons that one of the oldest networks, based on "old" and "obsolete"
technology, still works extremely well in today's environment.  People
should be much more careful in distinguishing between the ARPANET and
the Internet.  For instance, "the ARPANET is congested", usually
really means that the gateways connected to it are congested, or the
Internet routing mechanism has broken down.

My understanding of ARPANET internals is as follows: Before packets
can be sent, an end-to-end VC is opened to the destination IMP. I
call this type of VC "weak", because it provides VC services, but is
actually implemented as a sliding window protocol (roughly) similar to
TCP.  "Strong" VCs refer to those in which buffers and routes are
preallocated, and packet switches contain state information about the
circuits that pass through them.  Inside the ARPANET, IP packets are
fragmented into small 200+ byte datagrams that are sent through the
network using best effort delivery. The destination IMP reassembles
them and sends back an acknowledgement that advances the window.

The number of packets in the network at any given time for any given
src/dest IMP pair is limited. This essentially limits the total number
of packets in the network at any one time, resulting in one form of
congestion avoidance. Presumably the window size (8 IP packets) has
been chosen based on extensive engineering considerations.
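
Roughly, that bookkeeping amounts to the sketch below.  Only the window of 8
comes from the text; the rest is my invention, not IMP code.

    # Per-(source IMP, destination IMP) limit on outstanding messages.
    WINDOW = 8                         # messages in flight per IMP pair

    outstanding = {}                   # (src_imp, dst_imp) -> awaiting EE ACK

    def may_send(src_imp, dst_imp):
        return outstanding.get((src_imp, dst_imp), 0) < WINDOW

    def on_send(src_imp, dst_imp):
        outstanding[(src_imp, dst_imp)] = outstanding.get((src_imp, dst_imp), 0) + 1

    def on_ee_ack(src_imp, dst_imp):
        outstanding[(src_imp, dst_imp)] -= 1    # and a RFNM goes to the host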

This scheme also raises the same fairness issues described above. For
instance, should (or shouldn't) the NSFnet/ARPANET gateways be able to
get more resources than site X?

Of course, total best effort systems have advantages over other
schemes. One is their relative simplicity, and the loose coupling
among gateways and packet switches. Another is the ability of one user
to grab a large percentage of all available network resources.
Although considered a disadvantage if the user is a broken TCP
implementation, it is necessary if a user is to expect good
performance running a well tuned bulk transfer protocol (e.g. NETBLT).

>   What is the best reference to understand how these protocols manage the
> network resources, particularly in dealing with network congestion?
> Thanks,
> David Cheriton

I too am interested in further references, especially those relating
to best effort systems.

Thomas Narten

Mills@UDEL.EDU (02/02/88)

Thomas,

Please see: "The NSFNET Backbone Network," Proc. 1987 SIGCOMM Symposium and
also recent Internet Monthly reports from U Delaware.

Dave

karn@thumper.bellcore.com (Phil R. Karn) (02/02/88)

> 1) Best effort systems rely totally on hosts for congestion
> management. That is, transport protocols are responsible for
> congestion control and congestion avoidance.

The problem with the existing flow control mechanisms in the ARPANET is
that they add considerable overhead even when the network is otherwise
lightly loaded. As I understand it, the ARPANET links all run at 56kbps;
so in theory they could carry 7 kilobytes/sec. However I've *never* seen
a file transfer throughput of more than 2.5-3.0 kilobytes/sec, even over
a short East Coast path in the middle of the night. My timings are
consistent enough that I can only attribute the difference to the
ARPANET's internal packetizing and flow control overhead. (If there are
other factors at work I'd like to know about them).
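
For reference, the arithmetic behind those figures:

    # 56 kb/s line rate versus the transfer rates actually observed above.
    line_bps  = 56000
    ideal_Bps = line_bps / 8.0           # 7000 bytes/sec with no overhead at all
    for observed in (2500, 3000):        # bytes/sec seen in practice
        print("%d B/s is %.0f%% of the 7 kB/s line rate"
              % (observed, 100.0 * observed / ideal_Bps))
    # -> roughly 36-43%; the gap is what's being attributed to overhead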

Yes, transport protocol behavior is important, but there's no reason why
the "best effort" network can't have defense mechanisms that activate
only when the network is congested.  For example, the network might
normally run in a pure datagram mode, with no network bandwidth wasted
on edge-to-edge acknowledgements. However when the network becomes
congested, the internal equivalent of "source quench" packets are sent
to the entry node, telling it to stop injecting so much traffic into the
network. The entry node might translate that into an access protocol
message telling the host (or gateway) to slow down, but more importantly
the entry node could simply delay or discard additional traffic before
it enters the network. Discarding packets is certainly one way to get
TCP's attention, but the delaying tactic would be more efficient.
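
A sketch of what such an entry-node defense might look like.  This is purely
hypothetical and does not describe any actual PSN mechanism; the hold-off and
pacing values are invented.

    # Hypothetical entry node: forward freely in the normal case, but pace
    # traffic briefly after an internal quench arrives.
    import time

    QUENCH_HOLDOFF = 2.0      # seconds to stay cautious after a quench (invented)
    PACING_DELAY   = 0.05     # delay added per datagram while cautious (invented)

    last_quench = 0.0

    def on_internal_quench():
        global last_quench
        last_quench = time.time()

    def forward(datagram, send):
        if time.time() - last_quench < QUENCH_HOLDOFF:
            time.sleep(PACING_DELAY)     # delaying is gentler than dropping
        send(datagram)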

There have been theoretical assertions that even infinite buffering is
insufficient to prevent datagram network congestion. However, this
assumes no interaction between the network and the transport protocol,
and this is certainly not true with TCP.  TCP cannot transfer more than
one window's worth of data per round trip time, so to slow it down you
either reduce the window size or increase the round trip time. If you
can't get it to reduce its window size voluntarily (e.g., with a source
quench) then you can certainly increase its round trip time (i.e., with
additional buffering). Given a finite number of TCP connections, enough
buffer space will eventually reduce the offered datagram load to the
capacity of the network, although everyone would be better served if the
TCPs could instead cut their window sizes. NFS and ND are not completely
uncontrolled; rather, they are basically stop-and-wait protocols. They
would behave well too if only they did retransmission backoff like TCP.

A lot of effort has gone into tuning TCP round trip time estimates.
Question: has anyone looked into techniques for tuning TCP window sizes?
Intuitively, I expect that increasing the window size for a TCP-based
file transfer would increase throughput until the slowest link in the
path saturates. Beyond this point, throughput would remain constant but
round trip delay would go up, unnecessarily increasing delay for other
users of the same path. It would be nice to find an algorithm that found
and operated at this optimal operating point automatically. Any ideas?
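
One possible shape for such an algorithm, sketched below: keep growing the
window while measured throughput still improves, and shrink it once only the
round-trip time grows.  This is just the idea, not a tested scheme; the 5%
thresholds are arbitrary.

    # Probe for the knee: more window only while it buys more throughput.
    def adjust_window(window, throughput, prev_throughput, rtt, prev_rtt, mss=512):
        if throughput > prev_throughput * 1.05:
            return window + mss              # still below saturation: keep probing
        if rtt > prev_rtt * 1.05:
            return max(mss, window - mss)    # no more throughput, only more delay
        return window                        # near the operating point: hold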

Phil

CERF@A.ISI.EDU (02/03/88)

Dave,

good questions! There was an attempt at a truly datagram network in
the early 1970's. It was developed by Louis Pouzin and Hubert
Zimmermann and, I think, Gerard LeLann. They worked at IRIA which
is now known as INRIA (National Research Institute on automation and
information processing) in France. The network was called CYCLADES.

I am sorry I don't have references at hand (I am on travel using a small
laptop for comm), but my recollection is that they experienced packet
loss rates up to 48% when the network became congested. The ARPANET E-E
protocols are intended to push back towards the source soon enough to 
avoid sustained loss rates of that magnitude, as I understand them.

Lean and mean often applies best when the resources are plentiful, but
can become a liability if the network is permitted to enter a congested
state.

Vint

robert@SPAM.ISTC.SRI.COM (Robert Allen) (02/03/88)

>> > 1) Best effort systems rely totally on hosts for congestion
>> > management. That is, transport protocols are responsible for
>> > congestion control and congestion avoidance.
>> 
>> Yes, transport protocol behavior is important, but there's no reason why
>> the "best effort" network can't have defense mechanisms that activate
>> only when the network is congested.  For example, the network might
>> normally run in a pure datagram mode, with no network bandwidth wasted
>> on edge-to-edge acknowledgements. However when the network becomes
>> congested, the internal equivalent of "source quench" packets are sent
>> to the entry node, telling it to stop injecting so much traffic into the
>> network. The entry node might translate that into an access protocol
>> message telling the host (or gateway) to slow down, but more importantly
>> the entry node could simply delay or discard additional traffic before
>> it enters the network. Discarding packets is certainly one way to get
>> TCP's attention, but the delaying tactic would be more efficient.

    This is what was on my mind when I asked if the ETE was used for all
    traffic.  If everything is fine and dandy then there shouldn't be a
    need for ETE.  It should only activate when needed, similar to the way
    traffic-directing cops are only sent out when the stoplights break down
    (hmmm, maybe that's sort of a poor metaphor, but...).  As originally
    stated, the Arpanet was designed as an open network, where the hosts
    or gateways were assumed not to overuse the network.  Now, however, that
    can no longer be assumed.  Perhaps indeed some new protocols need
    to be developed that put a chokehold on gateways, instead of trying to
    choke the traffic at an IMP (via ETE, and assuming that the network
    consists of homogeneous hardware and software) or at a host (which is
    obviously a problem currently, since host behavior is very heterogeneous).
    Current routing protocols seem to be very weak at load sharing and
    congestion control.  Most metrics are hops, rather than quality metrics.
    cisco gateways have the administrative distance metric, but for the most
    part this field is in its infancy.

>> Question: has anyone looked into techniques for tuning TCP window sizes?
>> Intuitively, I expect that increasing the window size for a TCP-based
>> file transfer would increase throughput until the slowest link in the
>> path saturates. Beyond this point, throughput would remain constant but
>> round trip delay would go up, unnecessarily increasing delay for other
>> users of the same path. It would be nice to find an algorithm that found
>> and operated at this optimal operating point automatically. Any ideas?

    I wonder about this.  If the situation were ideal it might be the case
    that throughput would remain the same when saturation occurred; however,
    in reality wouldn't the increasing load (over and above the sat. level)
    on the IMP be likely to cause congestion problems, perhaps enough
    to back up the series of links to the transmitting host, or more likely,
    to introduce enough delay that TCP declares the host/network
    unreachable?

    My understanding of the internal structure of the Arpanet is not what
    it should be, so excuse me this question.  Are all Arpanet (internal)
    links pretty much equal in speed (assumedly their MTU size is constant
    since it is all the same hardware)?  If that is the case perhaps your
    model works, but consider this.  With networking becoming the widespread
    thing that it is, is it fair to design protocols intended for
    a relatively homogeneous PSN(etwork)?  The Arpanet seems to me to be
    rather unique in its construction and administration.  Networks of the
    future may not rely on a homogeneous core, but may rather be a series
    of smaller networks which provide routes of varying expense or quality
    of service to each other, on a peer rather than superior basis.  In
    fact, isn't it possible that the current problems encountered in the
    Arpanet with loops and backdoor routes messing up the network could be
    considered to be weaknesses of the existing protocols?  This by the way
    really is a question, and not an assertion.

    Comments welcome.

    Robert Allen, robert@spam.istc.sri.com
    415-859-2143 (work phone, days)

Mills@UDEL.EDU (02/03/88)

Phil,

At least on paths in the mid-Atlantic region I regularly get 12K-16K
bps between ARPANET hosts and at least 9.6 Kbps fetching gobs from
SRI-NIC, because that's what my home access line speed is and it
runs flat-out. I have seen rates to 23 Kbps on some occasions. Be
advised PSN 96, through which all my packets fly, is one of the
busiest ARPANET spots. Oh, I bet you're connected via X.25. We use
real 1822 here.

Dave

medin@AMES-TITAN.ARPA (Milo S. Medin, NASA ARC Code ED) (02/03/88)

Phil, I used to see throughput rates 3-4 years ago as high as 38-39 Kb/s 
moving files with FTP from a DEC-10 system running DEC System-10 (CCC) at
Lawrence Livermore National Lab to a 4.2 BSD VAX 11/750 (P mach.) at Los Alamos
National Lab via the MILNET when I was working at LLNL.  This was after the
ARPANET/MILNET split, and as I recall, there were a couple PSN's in the
path from LLNL to LANL.  This was for a large file (100K or so) in the evening.
Of course, the FTP's could have been lying to me (I didn't use a stopwatch),
but it 'felt' pretty snappy...

True, I don't see that kind of rate these days, but then there is a
lot more traffic on MILNET these days as well.  So I don't think
it's the internal PSN network that is the bottleneck.  True, both
systems were directly attached to MILNET PSN's via 1822 interfaces, and
we were running an older version of PSN software before, but I doubt
things have changed for the worse *that* much since then...  Let's make sure
the bottleneck isn't an extra hop through an EGP neighbor or some X.25
interface before throwing stones at the poor PSN's...

Ah yes, back in the good ol' days, before all the users discovered these 
networks...  I remember it well.

						Milo

slevy@UC.MSC.UMN.EDU (Stuart Levy) (02/03/88)

We have 1822 here, too, but I've never seen rates over 3.5 KBytes/sec
(i.e. 28 Kbits or half the nominal line speed) on any TCP transfer.
Even talking to other nodes on the same IMP as we are.  Even talking to
our own net-10 address, for that matter.
We have an HDH connection to a remote IMP.  Until recently we were connected
via an ECU (to the same IMP, of course); that did about the same.

LYNCH@A.ISI.EDU (Dan Lynch) (02/04/88)

Phil and Dave,  Just to add to the history of how much data we used to
be able to shove through the old Arpanet:  In the late 70s I used to
FTP (in TCP mode) the huge LISP executable file that was a few megabytes
in size.  I did it from SRI to/from BBN.  Regularly got 40-45 kilobits
out of the theoretical limit of 56.  That was on presumably lightly loaded
lines because I did it late at night.  In the daytime I would get
anywhere from 10-30 kilobits per second.  (As Dave pointed out, it was all
1822 interfaces.)

Lenny Kleinrock told me a magic number in 1966 -- it was 37%.  Anytime
you try to get more than 37% of the capacity out of any shared services
you will start to get unhappy.  When you go over 70% you will be miserable.
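
One simple way to see why thresholds like these matter (my gloss, a
single-server queueing view, not necessarily the model Lenny had in mind):
mean delay in an M/M/1 queue grows as 1/(1 - utilization).

    # Delay blow-up versus utilization for a single shared server (M/M/1).
    for rho in (0.37, 0.70, 0.90):
        print("at %.0f%% utilization, delay is %.1fx the unloaded value"
              % (100 * rho, 1.0 / (1.0 - rho)))
    # -> about 1.6x at 37%, 3.3x at 70%, 10x at 90%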

Dan
-------

braden@VENERA.ISI.EDU (02/04/88)

Once upon a time, about 1982, I measured 70kbits/sec from my host
looped through the local IMP and back, using max-sized UDP packets.

The IMP was number 1 on the ARPANET (UCLA); the host was an IBM 3033
mainframe; the IMP interface was the ACC IF/370 using an 1822 VDH connection.

We commonly saw 45-48 kbits/sec doing FTP's between two different hosts
via the UCLA IMP; the source was generally a VAX running LOCUS
in Computer Science, and the sink was the aforementioned 3033 (Computer
Science sure loved the cheap, fast printing available on the mainframe!).

Bob Braden

wbe@bbn.com (Winston B Edmond) (02/04/88)

Robert Allen @ SPAM.ISTC.SRI.COM writes:
>
>> Yes, transport protocol behavior is important, but there's no reason why
>> the "best effort" network can't have defense mechanisms that activate
>> only when the network is congested.  For example, the network might
>> normally run in a pure datagram mode, with no network bandwidth wasted
>> on edge-to-edge acknowledgements. However when the network becomes
>> congested, the internal equivalent of "source quench" packets are sent
>> to the entry node, telling it to stop injecting so much traffic into the
>> network.
>
>    If everything is fine and dandy then there shouldn't be a
>    need for ETE.  It should only activate when needed, ...

   One must be careful about such things.  Maybe for a LAN it makes sense to
say "when *the network* becomes congested", but for a widely distributed
network, such as the ARPANET, with many independent communication paths, the
question should really be rephrased as: Can delay caused by congestion
controls be avoided when all the paths and nodes between the message's source
and destination, including whatever multiple and alternate paths might be
used, are relatively uncongested?  That would require enough timely
information about the path to decide, and the cost of that decision would
have to be weighed against the performance improvement to be gained.

   Once upon a time, the ARPANET had no preallocation of space in the
destination IMP, and still does not use it for short-enough messages.
Suffice it to say that its absence was enough of a problem to be worth
putting it in.

   If performance in the uncongested case is currently a problem, perhaps the
source IMP could estimate the likelihood of congestion along its selected
path and then choose between sending multi-packet messages with or without
preallocation, with the destination accepting either.

		       ------------------------------

   I would strongly recommend against building a system that activated and
deactivated a congestion control system, if by that one means sometimes
ceasing to use network processing and communication resources to exchange
congestion information.  Instead, the congestion control system should always
be running, with an option to note that a particular network path is
uncongested at the moment.

   Congestion avoidance requires that information about the state of the
network be propagated throughout the network.  Perfect knowledge would
require that predictions at the source node be accurate for the transit time
of the packet.  Decreasing the amount or timeliness of the information
increases the likelihood of congestion and packet loss.

   If a disableable congestion control system were used, the network nodes
would have to be careful to kick in the congestion controls BEFORE congestion
actually occurs, or else there would be a catastrophe point in the network's
ability to handle traffic.  That point would occur as IMPs switch from
sending just user traffic to sending user traffic PLUS congestion control
traffic.
 -WBE

Disclaimer: these are personal remarks and should not be taken as official
statements of BBN Laboratories Incorporated.

karn@THUMPER.BELLCORE.COM (Phil R. Karn) (02/04/88)

We're on a 56Kb/s HDH link to the Columbia IMP, and Stuart's figures are
consistent with mine. I spent some time analyzing the HDLC link level on
our Sun gateway to make sure it's not the bottleneck. As far as I can
tell, it isn't. We're running in message mode (does anybody even bother
with packet mode?), with zero acknowledgement delay and a window size of
7.

Yes, Dave, the tests were done between Columbia and UDel, but I wouldn't
expect such consistent numbers if heavy loading at Delaware was the cause.

While we're on the subject of network throughput, I came across an
interesting statistic the other day. The French national packet network
carries 1,200 billion characters/month, "more than three times the
traffic of Tymnet, Telenet and all other American networks combined...
In fact, France accounts for more data traffic than all the rest of the
world's nations combined..."  ["Tout le Monde! C'est Telematique
Francaise!", US Black Engineer, Winter 1987, p.5].

While this certainly *sounds* impressive, 1.2 terabytes/month is only
3.7 megabits/sec, roughly 40% of a single Ethernet.  We routinely see
cables around here running at such levels for sustained periods (NFS/ND
traffic between Sun-3's, naturally).  Add up all the LANs in the world
and even France's awesome network capacity withers into insignificance. 
Now all we need is a long-haul packet network with a capacity matched to
that of tomorrow's LANs so we can have national NFS server banks...

Phil

Mills@UDEL.EDU (02/04/88)

Phil,

Tout l'faire!

Dave

karels@OKEEFFE.BERKELEY.EDU (Mike Karels) (02/05/88)

Out of curiosity, I just timed an ftp transfer through our 1822 IMP
connection (10AM PST, load average 2.5 on a VAX 11/785) and got 9.6Kbytes/s.
The same transfer through software loopback ran up to 68Kbytes/s (with more
variance due to process scheduling).

Note that even when timing TCP transfers on an idle ARPANET over short
hops, the round-trip time is higher than when using most local-area networks.
Depending on the ack strategy, the TCP transfer rate is limited to
something on the order of one window per round-trip time.  Thus, failure
to reach 56 kb/s through the ARPANET using TCP is not necessarily due
only to internal overhead.

		Mike

slevy@UC.MSC.UMN.EDU (Stuart Levy) (02/05/88)

I did try running two TCP connections to our own interface as well as one.
If round trip time is the limiting factor, then two TCPs should give 
better total throughput.  But (in the one case I tried, sending 50K bytes
to our own net-10 address at 3 AM local time) the total rate actually
decreased -- ~ 31 kilobits/s on one connection, ~ 12 kb/s for each of two.

craig@NNSC.NSF.NET (Craig Partridge) (02/05/88)

Stuart,

    Two TCP connections run in parallel can certainly affect
each other.  Van Jacobson can show you interesting graphs which show
TCP connections fighting with each other for bandwidth.

Craig

rcoltun@GATEWAY.MITRE.ORG (Robert Coltun) (02/06/88)

   We have a version of ping that transmits packets based on the ARPANET
1822 interface considerations: number of outstanding rfnms for the
destination and the total in the output queue.  Ping (in its original
form) is excellent for looking at IP delays but it is difficult to get
to actual throughput using it. With the interface scanner, which is based
on netstat -h, we can send packets out as fast as the net can take them;
the transmitter can be throttled by a rfnm high-water mark and a high-water
mark of total packets in the output queue. 
	 Our definition of throughput is:
	   total # bits sent/(time of last received - time of first received)
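
   In other words, the throttled sender and that throughput definition amount
to roughly the sketch below.  The rfnm_outstanding() and queue_length() hooks
are placeholders standing in for the 1822 interface counters the real tool
reads via netstat -h.

      # Sketch of an RFNM-throttled blast and the throughput definition above.
      def blast(packets, send, rfnm_outstanding, queue_length,
                rfnm_hiwater=2, queue_hiwater=8):
          for pkt in packets:
              while (rfnm_outstanding() >= rfnm_hiwater or
                     queue_length() >= queue_hiwater):
                  pass                    # hold off until the 1822 side drains
              send(pkt)

      def throughput(bits_sent, first_recv_time, last_recv_time):
          # total # bits sent / (time of last received - time of first received)
          return bits_sent / (last_recv_time - first_recv_time)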

   Our previous delay tests showed delay and variance of delays increasing
as the number of PSN hops increases. We tried the new tests for 3, 4, and
5 (minimum) PSN hops, sending bursts of 30 512-byte packets out.

   The results of these tests are preliminary and much work is still
needed to draw any real conclusions. The results of the 3 hop case ranged
from 35Kb to 16Kb, averaging around 23Kb. A rfnm high-water mark of <= 2
seemed to give us the best throughput values as well as the lowest rtt
delays for this case. The 4 hop case worked well with a 2 rfnm high-water
mark (best throughput was around 28Kb); the 5 PSN hop case worked well with a
5 rfnm high-water mark (best throughput was around 22Kb).

    I too have seen low FTP transfer rates. If these tests are
an indication of subnet throughput, where has all the throughput gone? 

--- Rob Coltun
    The MITRE Corporation
    rcoltun@gateway.mitre.org

CERF@A.ISI.EDU (02/06/88)

Dan,

Bob Kahn and I got about 42 kb/s on stress tests of the ARPANET
in its early stages. In later years, there was more code in the
IMP and it slowed down some. But then came end/end protocols like
TCP and, depending on the delay, I seem to recall we managed only
about 35 kb/s through a single IMP, less if many hops were involved.

As to Lenny Kleinrock's 37%, that is 1/e and is the kind of result
you get with multi-access links - and even then, that number did not
guarantee stability (the packet satellite protocols demonstrated
that when running ALOHA style).
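
For reference, the 1/e is the standard slotted ALOHA ceiling: throughput
S = G*exp(-G) peaks at G = 1, giving about 37%.  A quick check:

    # Slotted ALOHA throughput S = G * exp(-G); the maximum is 1/e at G = 1.
    import math
    best = max(((g / 100.0) * math.exp(-g / 100.0), g / 100.0)
               for g in range(1, 301))
    print("peak throughput %.3f at offered load G = %.2f" % best)  # ~0.368 at 1.00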

Vint