[comp.protocols.tcp-ip] When is a link saturated?

HANK@VM.BIU.AC.IL (Hank Nussbacher) (01/14/91)

I have recently started to monitor my links with SNMP on an hourly
basis and have seen that for my 64kb lines the typical line utilization
is around 25%-30%.  I compute maximum link throughput for a 64kb line
as 28.1Mb/hour.  But I know that achieving 28.1Mb/hour is close to
impossible due to protocol overhead.  In addition, the lines I am
analyzing are routing IP, DECnet and AppleTalk at the same time.

I previously believed that 65% was a rational upper limit to use for
such a line.  That translates into 18Mb/hour (for a 64kb line), and I
figured that I might see a spike here and there above 18Mb/hour
but not anything that could be sustained.  This past week, I had 2 links
that maintained sustained rates of 23Mb/hour and 25Mb/hour for over 5
hours.  That is 88% of link capacity.  This turned out to be almost all
VMNET (NJE/IP) traffic due to a reload of 2 tapes that were restored
to the NJE spool system for distribution throughout our network.  This
taught me that VMNET can drive a link to 88% of its total capacity, even
while other protocols are running in parallel as well as other
applications (Telnet and FTP).

This led me to check further and I found that our 9.6kb IP link to the
USA (which is hopelessly overloaded and scheduled to be upgraded to
64kb on January 14th) has been running at 70% of capacity (maximum
capacity is 101.2Mbytes per day) on average for the past 4 months.
This link is a strict IP link with just FTP, Telnet and IRC traffic.
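As a side note, Hank's capacity figures are self-consistent if "kb" is read as 1024 bits and "Mb" as mebibytes; a quick sketch of the arithmetic (mine, not from the post):

```python
def capacity_mib(bits_per_sec, seconds):
    """Raw link capacity over an interval, in MiB (ignoring protocol overhead)."""
    return bits_per_sec * seconds / 8 / (1024 * 1024)

# 64 kb line over one hour: the 28.1 Mb/hour figure
per_hour = capacity_mib(64 * 1024, 3600)        # ~28.125

# 9.6 kb line over one day: the 101.2 Mbytes/day figure
per_day = capacity_mib(9.6 * 1024, 24 * 3600)   # ~101.25

# The 25 Mb/hour sustained rate works out to roughly the quoted 88%
utilization = 25 / per_hour
```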

I now have all the numbers but I am missing one crucial number.  At
what percentage of capacity should a link be upgraded?  Is it 25%?
40%? 65%?  I'd like to hear what "rules of thumb" others use in order
to determine when a link is saturated or near saturation and needs to
be upgraded.

Thanks,
Hank Nussbacher
Israel Network Information Center

prindevi@INRIA.INRIA.FR (Philippe-Andre Prindeville) (01/14/91)

I don't think you can simplify the question to this extent.  It
depends on how bursty the traffic patterns are, and what sort
of best/worst/mean case service you want to provide.  The phone
companies, for instance, use two numbers, 80% and 95%, for the
service expected (or provided): they try to engineer their networks
so that 80% and 95% of the calls placed, packets routed, or whatever
exhibit a certain level of quality.

For datagram traffic, if you look at Delay vs. Throughput on
a graph, you will see that for a linear increase in Throughput,
you start to see much greater increases in Delay.  This might
not be acceptable for interactive uses.
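The knee in that curve is the classic queueing-theory result; as an illustration (the M/M/1 model is my choice here, not necessarily the one in the paper), normalized delay grows as 1/(1 - utilization):

```python
def mm1_delay(rho):
    """Mean time in an M/M/1 queueing system, normalized to the service
    time: 1 / (1 - rho).  Nearly flat at low load, then exploding as
    utilization rho approaches 1 -- which is why interactive traffic
    suffers long before a link is nominally "full"."""
    if not 0 <= rho < 1:
        raise ValueError("utilization must be in [0, 1)")
    return 1.0 / (1.0 - rho)

# Going from 40% to 80% load triples the delay;
# going from 80% to 95% multiplies it by four again.
```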

Look at the famous "DEC-bit" paper by Raj Jain, DEC-TR-508
(if memory serves).  It is available upon request from DEC.
Don't send me mail asking how to get it.

-Philip

Juha.Heinanen@FUNET.FI (Juha Heinanen) (01/15/91)

i have heard that the next version of cisco software (8.3 or 9.0, i
don't know what they'll decide to call it) will support separating
interactive traffic into higher priority output queues, which should
allow higher average loads than are currently feasible.

-- juha

nipper@ira.uka.DE ("Arnold Nipper - XLINK - Phone: +49 721 608 4331") (01/15/91)

In your letter dated Tue, 15 Jan 91 08:33:21 +0200, you wrote:
>
>i have heard that the next version of cisco software (8.3 or 9.0 i
>don't know what they decide to call it) will support separating
>interactive traffic in higher priority output queues, which should
>allow higher average loads than is currently feasible.
>

For those who don't read Comp.dcom.sys.cisco:
--------------------------------------------------------------------------------
From: kozel@milano.cisco.com (Edward R. Kozel)
Newsgroups: comp.dcom.sys.cisco
Subject: Release: Priority Queuing
Date: 24 Sep 90 03:19:44 GMT

PRIORITY QUEUING FEATURE LETS CISCO ROUTERS IMPROVE 
SERVICE ON LOW-BANDWIDTH SERIAL LINES

MENLO PARK, Calif., Sept.18, 1990 -- Cisco Systems has added to its
internetwork router/bridges a software feature that lets users assign
priorities to classes of data sent over a network, thereby maximizing service
on low-bandwidth congested serial interfaces.

Cisco's new "priority output queuing" feature is a mechanism for prioritizing
datagrams, typically classified by protocol (e.g., TCP, DECnet, AppleTalk,
bridging) or sub-protocol (Telnet, FTP, LAT, electronic mail) type.  It is
designed as a flexible way to let the user specify the data types most
important to his application (e.g., TCP/IP over DECnet, terminal traffic over
file transfer) and ensure that those types are transmitted first over an
interface.

Cisco, whose router products also support concurrent bridging, is the first
vendor to offer priority queuing for both routed and bridged protocols.

According to Doug Tsui, manager of product marketing, priority output queuing
addresses the problem of heavily loaded, low-bandwidth serial interfaces,
generally 56 Kbps or slower.

"The serial lines linking wide-area multi-purpose networks often get bogged
down with large numbers of file transfers occurring simultaneously with
interactive terminal traffic such as Telnet or LAT sessions," Tsui said.  "Not
only is response time for terminal traffic traditionally poor, but in the case
of LAT, for example, there is a maximum timeout of 255 milliseconds; if an
acknowledgement isn't received by then, the session terminates.  Priority
output queuing solves this problem by letting the user give LAT priority over
all other traffic."

Priority queuing addresses operational as well as technical issues, Tsui said.
"Suppose a company has set up a multiprotocol WAN.  Department A, the
company's R&D lab using TCP/IP, has bought and installed all the routing
equipment, and wants to ensure that its critical design data have top priority
on available bandwidth.  Priority queuing in effect establishes grades of
service, 'favoring' TCP/IP packets so they get switched before DECnet packets
from another department."

Tsui noted that priority queuing is especially useful in international
networks, where bandwidth is often most expensive.

Priority output queuing works by classifying datagrams according to various
criteria and queuing them on one of four output queues.  When the router is
ready to transmit a datagram, it scans the priority queues in order, from
highest to lowest, to find the highest-priority datagram.  When that datagram
has been transmitted, the queues are scanned again.
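The dequeue discipline described above is strict priority over four queues; a minimal sketch of the idea (structure and names are mine, not cisco code):

```python
from collections import deque

class PriorityOutputQueues:
    """Strict-priority output queuing: classify each datagram onto one of
    four queues, and always transmit from the highest non-empty queue."""

    def __init__(self, levels=4, depth=20):
        self.queues = [deque() for _ in range(levels)]  # 0 = highest
        self.depth = depth

    def enqueue(self, packet, level):
        if len(self.queues[level]) >= self.depth:
            return False                 # queue full: tail-drop
        self.queues[level].append(packet)
        return True

    def dequeue(self):
        # Rescan from the top after every transmission, per the release note.
        for q in self.queues:
            if q:
                return q.popleft()
        return None                      # nothing to send

poq = PriorityOutputQueues()
poq.enqueue("ftp-data", 3)
poq.enqueue("telnet", 0)
first = poq.dequeue()   # the telnet packet, although it arrived second
```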

Priority output queuing will be available as a standard feature (no extra
cost) with cisco routers shipped beginning in November.  Existing units can be
upgraded under cisco's software maintenance program.

--------------------------------------------------------------------------------

Arnold

Juha.Heinanen@FUNET.FI (Juha Heinanen) (01/15/91)

well, to my best understanding the "priority output queue" feature as
implemented in cisco 8.2 DOESN'T support giving high priority to
telnet, rlogin, etc. interactive tcp/ip services no matter what the
announcement on sep 18, 1990 said.  this is because the extended
access lists (on which the feature is based) simply are not powerful
enough to do so.  i'd be pleased if someone can prove that i'm wrong by
giving me the proper configuration commands.

-- juha

Christian.Huitema@MIRSA.INRIA.FR (Christian Huitema) (01/15/91)

Cisco's response is one of the possible solutions to the "link saturation"
problem. In fact, to state a complete definition, a link is saturated when it
cannot accept more traffic without degrading the overall quality of service
beyond acceptable limits. This relates directly to the problem of congestion
control in datagram networks.

Amongst the strategies for pushing the limits, one indeed finds the separation
of traffic into different "priority" classes. This can be done by using a
priority parameter in the datagram headers, e.g. the IP "class of service"
parameter. The effect is that priority traffic is treated as "foreground",
while non-priority traffic is queued in the background and only obtains what
is left after the priority traffic is served. In fact, that strategy must be
implemented carefully, as one has to guarantee that at least some non-priority
traffic is transmitted.

The scheme used by cisco probably does not rely on the IP "class of service",
as most TCP and UDP implementations use the same "standard" class of service
for all connections. One can indeed use the "protocol identifier" (TCP, UDP,
ICMP..) and assign different priorities to different protocols, but I am a bit
puzzled by cisco's assertion that they give "telnet" traffic precedence over
"ftp": after the first synchronisation, neither the IP headers nor the TCP
header bear any indication of the application layer protocol. The only way to
make this correlation is to "remember the first exchange"; but if parallel
routes can be used, there is not even a guarantee that the synchronisation
packets and the data packets will follow the same route! One should also note
that if IP fragmentation is used, the TCP header is only present in the first
fragment...

A more conventional scheme relies on the separation of the traffic in classes
based on the packet size: interactive traffic use shorter packets than file
transfer; the shorter packets get routed first.
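That size heuristic is a one-line classifier; the threshold below is purely illustrative:

```python
SHORT_PACKET = 128  # bytes; the cut-off is an assumption for illustration

def size_class(length):
    """Crude interactive/bulk split by packet length: echoed keystrokes
    are tens of bytes, while file-transfer segments fill the MTU."""
    return "interactive" if length <= SHORT_PACKET else "bulk"
```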

A promising scheme, which I have heard several times, is to manage one queue
per source host. The idea here is to give an even share of the network to every
station; it is also an incentive to implement decent end-to-end flow controls
(e.g. slow start), as a rogue TCP which uses very long queues will observe very
long transit delays and thus very poor performance, while a correct TCP will
work normally. I don't know whether cisco or others plan to implement that.
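The per-source idea amounts to round-robin service over one queue per host (an early form of fair queuing); a toy sketch, names mine:

```python
from collections import deque

class PerSourceQueues:
    """One queue per source host, served round-robin: each station gets
    an even share, and a flooding host only lengthens its own queue
    (and so its own delay), leaving well-behaved hosts unaffected."""

    def __init__(self):
        self.queues = {}          # source host -> deque of packets

    def enqueue(self, src, packet):
        self.queues.setdefault(src, deque()).append(packet)

    def drain(self):
        """Yield packets to transmit, one per host per round."""
        while any(self.queues.values()):
            for src in list(self.queues):
                if self.queues[src]:
                    yield self.queues[src].popleft()

fq = PerSourceQueues()
for i in range(3):
    fq.enqueue("rogue", f"rogue-{i}")   # a host with a huge backlog
fq.enqueue("polite", "polite-0")        # a well-behaved host
order = list(fq.drain())
# polite-0 goes out in the first round, not behind the rogue's backlog
```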

Christian Huitema 

Juha.Heinanen@FUNET.FI (Juha Heinanen) (01/15/91)

Christian,

   The scheme used by cisco does probably not rely on the IP "class of
   service",  as most TCP and UDP implementations use the same
   "standard" class of service    for all connections. One can indeed
   use the "protocol identifier" (TCP, UDP, ICMP..) and place
   different priorities to different protocols, but I am a bit  
   puzzled by cisco's assertion that they give "telnet" traffic
   precedence over "ftp": after the first synchronisation, neither the
   IP headers nor the TCP header bear any indication of the
   application layer protocol. 

   The only way to make this correlation is to "remember the first
   exchange"; but if parallel routes can be used, there is not even a
   garantee that the synchronisation packets and the data packets will
   follow the same route! 

This can't be true.  If I start a telnet session to another host,
isn't the destination port number equal to 23 in all TCP headers that
are sent from my host and the source port number equal to 23 in all
reply TCP headers?  

The current problem in Cisco's implementation is that the priority
queue mechanism can use IP extended access lists, but in an
extended access list one can only specify the destination TCP port
number, which in the case of a reply TCP packet is whatever number my
host has selected.  So even in the current implementation, my packets
may be routed at high priority, but the replies come back slowly!

   One should also note that if IP segmentation is used, the TCP
   header is only present in the first segment... 

This should not be a major problem if we can assume that most of the
interactive packets are small as you say yourself below.

   A more conventional scheme relies on the separation of the traffic
   in classes based on the packet size: interactive traffic use
   shorter packets than file transfer; the shorter packets get routed first.

   A promising scheme, which I heard several time, is to manage one
   queue per source host. The idea here is to give an even share of
   the network to every station; it is also an incentive to implement
   decent end to end flow controls (e.g. slow start) as the rogue TCP
   which use very long queues will observe very long transit delays
   and thus very poor performance, while the correct TCP will work
   normally. I dont know whether cisco or others plan to implement
   that.

This won't help interactive users in a multiuser environment where
other users' file transfers can crowd out the interactive user.

-- Juha

prindevi@INRIA.INRIA.FR (Philippe-Andre Prindeville) (01/16/91)

	well, to my best understanding the "priority output queue" feature as
	implemented in cisco 8.2 DOESN'T support giving high priority to
	telnet, rlogin, etc. interactive tcp/ip services no matter what the
	announcement on sep 18, 1990 said.  this is because the extended
	access lists (on which the feature is based) simply are not powerfull
	enough to do so.  i'm pleased if someone can prove that i'm wrong by
	giving me the proper configuration commands.

You're overlooking a point here:  the *hosts* are just as involved
in this process -- they *must* use proper type-of-service labelling
of their packets (as per Host Requirements) for this to work.  It is
not the job of the router to look at transport-level information, no
matter what nifty features ciscos have.  Thus telnet/rlogin must use
the low-delay, and ftp the high-throughput, service specifiers in
their IP headers.
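On sockets-based hosts this labelling is a per-socket option; a sketch of what a conforming telnet or ftp client would do (TOS values per RFC 1060; whether a given stack actually honors them is another matter):

```python
import socket

IPTOS_LOWDELAY = 0x10     # interactive traffic: telnet, rlogin
IPTOS_THROUGHPUT = 0x08   # bulk traffic: ftp data

def labelled_socket(tos):
    """A TCP socket whose outgoing datagrams carry the given IP
    type-of-service, so routers that honor TOS can queue accordingly."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, tos)
    return s

interactive = labelled_socket(IPTOS_LOWDELAY)    # what telnet should use
bulk = labelled_socket(IPTOS_THROUGHPUT)         # what ftp data should use
```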

The latest version of telnet, which will be released with 4.4,
includes code to enable this option.

-Philip

prindevi@INRIA.INRIA.FR (Philippe-Andre Prindeville) (01/16/91)

	Tsui noted that priority queuing is especially useful in international
	networks, where bandwidth is often most expensive.

Wow!  That is a new one on me.  Can someone explain how ordering
packets (but not discarding them) can save bandwidth?  Assuming that
the number of retransmissions isn't influenced, but merely that
interactive applications observe smaller round-trip times, the
total throughput should be the same...

I must be missing something obvious.  Can someone enlighten me
(and possibly others)?

Thanks,

-Philip

Yves.Devillers@INRIA.INRIA.FR (Yves Devillers) (01/16/91)

 In your previous mail you wrote:

   	Tsui noted that priority queuing is especially useful in international
   	networks, where bandwidth is often most expensive.

   Wow!  That is a new one on me.  Can someone explain how ordering
   packets (but not discarding) can save bandwidth?  Assuming that
   the number of retransmissions aren't influenced, but merely that
   interactive applications observe smaller round-trip times, the
   total throughput should be the same...

   I must be missing something obvious.  Can someone enlighten me
   (and possibly others)?

-->Aren't you missing the fact that when interactive users get poor response
times they complain about the network being overloaded, but don't get
satisfied (satisfaction being higher bandwidth to allow better
interaction), since those lines are so expensive that no one has money
for them?

On the other hand, giving higher priority to interactive traffic
("re-ordering") makes:
1- interactive users happy (better responsiveness)
2- the network manager happy (no extra penny spent)
3- bulk-traffic ftp fans unhappy

The total throughput is *globally* the same, not *individually* :-)

 Yves

 ----------------------------------------------------------------
 Yves Devillers
 Internet:    Yves.Devillers@inria.fr  Institut National de Recherche
 Goodie-Oldie: ...!uunet!inria!devill  en Informatique et Automatique
 Phone: +33 1 39 63 55 96              INRIA, Centre de Rocquencourt
 Fax:   +33 1 39 63 53 30              BP 105, 78153 Le Chesnay CEDEX
 Twx:   633 097 F                      France.

jbvb@FTP.COM (James B. Van Bokkelen) (01/16/91)

ISO TP and DECnet only include service ID information when creating the
connection.  TCP always includes this information, in the "port" fields.

James B. VanBokkelen		26 Princess St., Wakefield, MA  01880
FTP Software Inc.		voice: (617) 246-0900  fax: (617) 246-0901

ejm@ejmmips.NOC.Vitalink.COM (Erik J. Murrey) (01/17/91)

In article <9101150900.AA08526@jerry.inria.fr>,
Christian.Huitema@MIRSA.INRIA.FR (Christian Huitema) writes:
> .... One can indeed use the "protocol identifier" (TCP, UDP,
> ICMP..) and place different priorities to different protocols, but I am a bit
> puzzled by cisco's assertion that they give "telnet" traffic precedence over 
> "ftp": after the first synchronisation, neither the IP headers nor the TCP 
> header bear any indication of the application layer protocol. The only way to
> make this correlation is to "remember the first exchange"; but if parallel 
> routes can be used, there is not even a garantee that the synchronisation 
> packets and the data packets will follow the same route! One should also note
> that if IP segmentation is used, the TCP header is only present in the first
> segment...
> 
 
I don't understand why the "remember the first exchange" is necessary. 
Both telnet and rlogin use a reserved port number that appears in either
the source or destination TCP port fields on *every* packet that is
routed for the entire session.  The one exception is when IP gets
fragmented, which is rare in modern WANs with current TCP implementations.


... Erik
---
Erik Murrey
Vitalink Communications
ejm@NOC.Vitalink.COM   ...uunet!NOC.Vitalink.COM!ejm

BILLW@MATHOM.CISCO.COM (William "Chops" Westfield) (01/17/91)

	    Tsui noted that priority queuing is especially useful in
	    international networks, where bandwidth is often most expensive.

    Wow!  That is a new one on me.  Can someone explain how ordering
    packets (but not discarding) can save bandwidth?

Well, reordering the packets will also affect which packets get dropped
when a queue becomes full.  If you give interactive (small) packets
priority, big packets are more likely to be discarded.  Bandwidth doesn't
increase, but packets per second does.  So does the number of happy users.
This is essentially the same argument as "the bandwidth is the same but
the users are happier" that someone else made.


    Assuming that the number of retransmissions aren't influenced, but
    merely that interactive applications observe smaller round-trip times,
    the total throughput should be the same...

Unfortunately, this is a bad assumption.  TCP is perhaps the best protocol
in this regard, since round trip timers and retransmission backoff have
been in the protocol since its inception.  Still, many TCP implementations
retransmit excessively in the presence of network congestion.  Almost every
other protocol is considerably worse, especially those that need to be bridged.

BillW
-------

jbvb@FTP.COM (James B. Van Bokkelen) (01/17/91)

    You're overlooking a point here:  the *hosts* are just as involved
    in this process -- they *must* use proper type-of-service labelling
    of their packets (as per Host Req.) for this to work.

Note that Assigned Numbers actually specifies the TOS values.  In terms of
installed base, V2.05 of PC/TCP (out since 10/90) conforms to RFC 1060 on
TOS for Telnet, FTP, SMTP, Rlogin and DNS lookups.  So do programs which use
high-level Telnet, Rlogin & FTP libraries from v2.05 of our Developer's Kit.

James B. VanBokkelen		26 Princess St., Wakefield, MA  01880
FTP Software Inc.		voice: (617) 246-0900  fax: (617) 246-0901

kseshadr@quasar.intel.com (Kishore Seshadri) (01/18/91)

Christian Huitema writes:
 > 
 > A promising scheme, which I heard several time, is to manage one queue per
 > source host. The idea here is to give an even share of the network to every
 > station; it is also an incentive to implement decent end to end flow controls
 > (e.g. slow start) as the rogue TCP which use very long queues will observe very
 > long transit delays and thus very poor performance, while the correct TCP will
 > work normally. I dont know whether cisco or others plan to implement that.
 > 

This may not work well, considering that there are always hosts on a
network that legitimately send and receive more network traffic than
others.  A fileserver, for example, may appear to be a network pig compared
to a workstation... as might a mailserver.  It seems to me that we would
have to think up schemes that are somewhat more sophisticated.  The
scheme you describe sounds uncomfortably close to the circuit switching
techniques that are so dear to our phone companies ;-)

Using the priority parameter in the datagram header is probably a much
better approach. I much prefer to have the communicating hosts try and
prioritize their own communications. Applications that have a genuine
need for high priority service should be able to provide hints to
entities between source and destination, requesting higher priority.

Kishore Seshadri

Kishore Seshadri,(speaking for myself)       <kseshadr@mipos3.intel.com>
Intel Corporation                            <..!intelca!mipos3!kseshadr>
"When the only tool you own is a hammer, every problem begins to resemble
 a nail" -Abraham Maslow

srg@quick.com (Spencer Garrett) (01/20/91)

-> I don't understand why the "remember the first exchange" is necessary. 
-> Both telnet and rlogin use a reserved port number that appears in either
-> the source or destination TCP port fields on *every* packet that is
-> routed for the entire session.

Alas, no.  A server is free to answer the connection request
with a different port number, and they commonly do.  (The reason
for this eludes me.  It is permitted by the RFC's, but not
required or particularly encouraged.)

jstewart@ccs.carleton.ca (John Stewart) (01/21/91)

In article <1991Jan20.040130.18339@quick.com> srg@quick.com (Spencer Garrett) writes:
>-> I don't understand why the "remember the first exchange" is necessary. 
>-> Both telnet and rlogin use a reserved port number that appears in either
>-> the source or destination TCP port fields on *every* packet that is
>-> routed for the entire session.
>
>Alas, no.  A server is free to answer the connection request
>with a different port number, and they commonly do.  (The reason
>for this eludes me.  It is permitted by the RFC's, but not
>required or particularly encouraged.)

The main reason for doing so is to facilitate multiple sessions.  For example
if 10 people telnet to a machine, each user will get their own telnetd 
process communicating to them via a unique set of ports.  Now imagine how
difficult this would be to do if you could only have one process running 
connected to the well known telnet port.
-- 
---
Artificial Intelligence: What some programmers produce.
Artificial Stupidity:    What the rest of us produce.

barmar@think.com (Barry Margolin) (01/22/91)

In article <1991Jan20.040130.18339@quick.com> srg@quick.com (Spencer Garrett) writes:
>-> I don't understand why the "remember the first exchange" is necessary. 
>-> Both telnet and rlogin use a reserved port number that appears in either
>-> the source or destination TCP port fields on *every* packet that is
>-> routed for the entire session.
>Alas, no.  A server is free to answer the connection request
>with a different port number, and they commonly do.

Either I misunderstand completely, or the above response is just plain
wrong.  If a server were to respond with a different port number, how would
the client's system tell which server sent the response?  The original
poster was correct.

Quick TCP lesson: When a client sends a TCP datagram to a server, the
source port is generally an arbitrary port chosen for that connection, and
the destination port is the server's well-known port.  Datagrams from the
server to that client will have the same port numbers, except the roles
will be reversed (the source port will be the well-known port, the
destination port will be the client's arbitrary source port).  This rule,
plus a similar rule for the IP addresses, is what permits a datagram to be
associated with a particular connection.  A connection is identified by the
4-tuple <local-address, local-port, foreign-address, foreign-port>, so all
datagrams in a connection must include the same four values (the sense of
"local" and "foreign" changes for each direction, though).
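Barry's 4-tuple rule is exactly how a TCP demultiplexes incoming segments; a toy table keyed on the four values (illustrative, not any particular stack's code; hosts and port numbers are made up):

```python
connections = {}   # (local_addr, local_port, foreign_addr, foreign_port) -> state

def register(local_addr, local_port, foreign_addr, foreign_port, state):
    connections[(local_addr, local_port, foreign_addr, foreign_port)] = state

def demux(dst_addr, dst_port, src_addr, src_port):
    """Match an incoming segment to its connection: the datagram's
    destination side is local, its source side is foreign."""
    return connections.get((dst_addr, dst_port, src_addr, src_port))

# Two telnet sessions share the server's local port 23; the differing
# foreign address/port pair keeps every segment unambiguous.
register("server", 23, "hostA", 1047, "session-A")
register("server", 23, "hostB", 1047, "session-B")
```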

--
Barry Margolin, Thinking Machines Corp.

barmar@think.com
{uunet,harvard}!think!barmar

ejm@ejmmips.NOC.Vitalink.COM (Erik J. Murrey) (01/22/91)

In article <1991Jan21.141530.7031@ccs.carleton.ca>,
jstewart@ccs.carleton.ca (John Stewart) writes:
> In article <1991Jan20.040130.18339@quick.com> srg@quick.com (Spencer
Garrett) writes:
> >-> I don't understand why the "remember the first exchange" is necessary. 
> >-> Both telnet and rlogin use a reserved port number that appears in either
> >-> the source or destination TCP port fields on *every* packet that is
> >-> routed for the entire session.
> >
> >Alas, no.  A server is free to answer the connection request
> >with a different port number, and they commonly do.  (The reason
> >for this eludes me.  It is permitted by the RFC's, but not
> >required or particularly encouraged.)
> 
> The main reason for doing so is to facilitate multiple sessions.  For example
> if 10 people telnet to a machine, each user will get their own telnetd 
> process communicating to them via a unique set of ports.  Now imagine how
> difficult this would be to do if you could only have one process running 
> connected to the well known telnet port.
> -- 

Wait a minute.  On most BSD implementations, "inetd" spawns a separate
rlogind or telnetd process for each incoming telnet or rlogin session
requested.  The processes share the same local port number (23 or 513)
since TCP/IP allows them to do so.  (The connection is still unique
based on (source IP address, source TCP port, dest IP address, dest TCP port).)

I will quote from RFC 854 (telnet)
"  The TELNET TCP connection is established between the user's port U
   and the server's port L.  The server listens on its well known port L
   for such connections.  Since a TCP connection is full duplex and
   identified by the pair of ports, the server can engage in many
   simultaneous connections involving its port L and different user
   ports U.
"

A netstat -n on all of the machines I can access show port 23 or 513
as the host's local port for incoming or foreign port for outgoing
telnet/rlogin sessions.

This allows a router to look at the source/dest TCP port to determine
whether this is a rlogin or telnet session.

---
Erik J. Murrey
Vitalink Communications NOC
ejm@NOC.Vitalink.COM	...!uunet!NOC.Vitalink.COM!ejm

mni@techops.cray.com (Michael Nittmann) (01/22/91)

... and the multiple ports, not only the reserved well-known ports,
show how the TCP layer does the session management.  If you look at
it this way, that TCP provides not only the transport service but
also the session management, then this port arithmetic should be
somewhat more understandable.

michael

jbvb@FTP.COM (James B. Van Bokkelen) (01/22/91)

    >Alas, no.  A server is free to answer the connection request
    >with a different port number, and they commonly do.  (The reason
    >for this eludes me.  It is permitted by the RFC's, but not
    >required or particularly encouraged.)
    
    The main reason for doing so is to facilitate multiple sessions.  For
    example if 10 people telnet to a machine, each user will get their own
    telnetd process communicating to them via a unique set of ports.  Now
    imagine how difficult this would be to do if you could only have one
    process running connected to the well known telnet port.

I'm sorry, you're both badly misinformed.  All Telnet connections share
a common well-known port number (23) on the server for the life of the
connection (Rlogin servers do the same with port 513).  The uniqueness
of the individual connections is based on the remote host IP address and
remote TCP port values, and the TCP API has to allow multiple processes
on a single local port number.  The client's originating port is usually
randomly assigned a unique value by the client's TCP, so that it can
tell its various connections apart as well.

You may have gotten Telnet (RFC 854, over TCP) confused with TFTP (RFC
783? over UDP), which does have server-side port-switching as part of
the connection startup.
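For comparison, TFTP's port switch is easy to demonstrate over loopback UDP: the request goes to the well-known port, but the data comes back from a freshly chosen port (the transfer ID), and the client must keep talking to that new port. A self-contained sketch (packet contents are placeholders, not real TFTP encoding, and ephemeral ports stand in for port 69):

```python
import socket

def tftp_style_exchange():
    """Server receives on a 'well-known' port but replies from a new
    socket, as TFTP does; the client accepts whatever port the first
    reply came from for the rest of the transfer."""
    server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    server.bind(("127.0.0.1", 0))              # stand-in for port 69
    well_known = server.getsockname()[1]

    client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    client.bind(("127.0.0.1", 0))
    client.sendto(b"RRQ", ("127.0.0.1", well_known))

    _, client_addr = server.recvfrom(512)
    transfer = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    transfer.bind(("127.0.0.1", 0))            # the new transfer ID
    transfer.sendto(b"DATA-1", client_addr)

    data, (_, reply_port) = client.recvfrom(512)
    for s in (server, client, transfer):
        s.close()
    return data, reply_port != well_known

data, port_switched = tftp_style_exchange()  # port_switched is True
```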

James B. VanBokkelen		26 Princess St., Wakefield, MA  01880
FTP Software Inc.		voice: (617) 246-0900  fax: (617) 246-0901

srg@quick.com (Spencer Garrett) (01/23/91)

Someone else wrote:
-> >-> I don't understand why the "remember the first exchange" is necessary. 
-> >-> Both telnet and rlogin use a reserved port number that appears in either
-> >-> the source or destination TCP port fields on *every* packet that is
-> >-> routed for the entire session.
-> >
-> In article <1991Jan20.040130.18339@quick.com> I wrote:
-> >Alas, no.  A server is free to answer the connection request
-> >with a different port number, and they commonly do.  (The reason
-> >for this eludes me.  It is permitted by the RFC's, but not
-> >required or particularly encouraged.)
-> 
In article <1991Jan21.141530.7031@ccs.carleton.ca>,
	jstewart@ccs.carleton.ca (John Stewart) writes:
-> The main reason for doing so is to facilitate multiple sessions.  For example
-> if 10 people telnet to a machine, each user will get their own telnetd 
-> process communicating to them via a unique set of ports.  Now imagine how
-> difficult this would be to do if you could only have one process running 
-> connected to the well known telnet port.

Not so.  A "connection" is identified by both source and destination
addresses *and port numbers*, so as long as the originator of each
session grabs a unique port number all is well.  My own networking
code does not shift away from the original well-known port, and
multiple sessions work just fine.  I think the reason BSD does shift
port numbers may have something to do with the notion of "privileged
port numbers" being those less than some small fixed number (1024
as I recall).  They may have thought it would be easier to implement
some security features if neither end of a regular session used
a small port number.

srg@quick.com (Spencer Garrett) (01/23/91)

-> In article <1991Jan20.040130.18339@quick.com> srg@quick.com (Spencer Garrett) writes:
-> >-> I don't understand why the "remember the first exchange" is necessary. 
-> >-> Both telnet and rlogin use a reserved port number that appears in either
-> >-> the source or destination TCP port fields on *every* packet that is
-> >-> routed for the entire session.
-> >Alas, no.  A server is free to answer the connection request
-> >with a different port number, and they commonly do.
-> 
In article <1991Jan21.184716.18820@Think.COM>, barmar@think.com (Barry Margolin) writes:
-> Either I misunderstand completely, or the above response is just plain
-> wrong.  If a server were to respond with a different port number, how would
-> the client's system tell which server sent the response?  The original
-> poster was correct.

My mistake.  The behavior I described is a part of the TFTP protocol,
not TCP or even UDP in general.  It is cleanly implementable, though.
All you need do is accept any foreign port number (and record the
one you accept) if the socket in question is in SYN_SENT state
and the incoming packet contains a proper SYN.  That being the case,
the socket moves into the ESTABLISHED state and all is well.
My tcp code is about to get a bit shorter.  :-)

cliff@garnet.berkeley.edu (Cliff Frost) (01/24/91)

In article <1991Jan22.191059.5523@quick.com>, srg@quick.com (Spencer Garrett) writes:
|> ...
|> 
|> Not so.  A "connection" is identified by both source and destination
|> addresses *and port numbers*, so as long as the originator of each
|> session grabs a unique port number all is well.  My own networking
|> code does not shift away from the original well-known-port, and
|> multiple sessions work just fine.  I think the reason BSD does shift
|> port numbers may have something to do with the notion of "priviledged
|> port numbers" being those less than some small fixed number (1024
|> as I recall).  They may have thought it would be easier to implement
|> some security features if neither end of a regular session used
|> a small port number.

I think some details are missing and people are talking past each other.
Here is my stab at making sense of what various folks are saying (comments
welcome):

I think for telnet sessions, the telnet well-known port will appear in every
packet.  This should make it possible for a device (bridge/router) to give
priority to telnet packets without maintaining per-connection information.

(Obviously, someone can set up a telnet server that listens to a different
port, and telnets to that port will not get the priority unless the device
maintains state.  Maybe this possibility is why someone claimed that it is
impossible to identify telnet connections merely by looking at packets in
isolation?)

For FTP the control connection is similarly identifiable.

For FTP the data connection can be, and often is, negotiated away from
the well-known port.  This will make it difficult to give priority to ftp
data in the general case.

(Again, someone with control at both ends can play games with port numbers
to get around the priority scheme, but that same person can do this
whether or not the box in the middle maintains state.)

I think most TCP protocols are more like the telnet case than the ftp data
case, and the cisco scheme makes a lot of sense to me.

	Cliff Frost
	UC Berkeley (Computer Center, not related to BSD development)

jbvb@FTP.COM (James B. Van Bokkelen) (01/25/91)

    For FTP the data connection can be, (and often?) is, negotiated away from
    the well-known port.  This will make it difficult to give priority to ftp
    data in the general case.
    
There exist FTP servers which won't initiate data connections to clients
unless the client's port is 20 (the well known FTP data port).  Thus, most
Internet-tested implementations can be expected to be identifiable.

James B. VanBokkelen		26 Princess St., Wakefield, MA  01880
FTP Software Inc.		voice: (617) 246-0900  fax: (617) 246-0901

tcs@ccci.UUCP (Terry Slattery) (01/30/91)

This discussion on identifying port numbers for use with the cisco feature
of prioritizing packets for output has concentrated on the 'standard'
network utilities.  There are many applications other than telnet and ftp
that can benefit from this feature.  The developers of these applications
often need a way of getting the data through the network in a timely
manner.  When other users are running large FTP transfers, time critical
data may be delayed, reducing its value.  The cisco port priority feature
allows the net manager to specify certain traffic, by port number, as having
higher priority than other traffic.  Telnet vs ftp is generally not their
concern.  

For example, stock market data being sent from a ticker-plant to trader
workstations probably has higher priority than a background FTP of a
database dump between two systems.  Without the priority selection (or
type-of-service), the market data may be delayed enough to seriously affect
its value to the trader.

I'd also consider configuring routers to give priority to SNMP packets so
that if someone is choking the net, I can still perform net management
through the mess.  (Not as good as out-of-band control, but better than
nothing.)

	-tcs
Terry Slattery
Chesapeake Computer Consultants, Inc.		Network and Unix Consulting
2816 Southaven Drive				(301) 970-8076
Annapolis, MD  21401

ejm@ejmmips.NOC.Vitalink.COM (Erik J. Murrey) (02/01/91)

In article <9101301444.AA13881@ccci>, tcs@ccci.UUCP (Terry Slattery) writes:
> For example, stock market data being sent from a ticker-plant to trader
> workstations probably has higher priority than a background FTP of a
> database dump between two systems.  Without the priority selection (or
> type-of-service), the market data may be delayed enough to seriously affect
> its value to the trader.
> 

This is a valid point.  However, a lot of the newer applications are
forced to use not-so-well-known ports via the portmapper, etc.  There just
aren't enough ports to reserve them for specialized use.  This
presents a big problem for routers, since they shouldn't have to track
portmap requests to see which service is registered to which port.

This makes TOS via port # an unrealistic choice.

The real solution to this problem is to make sure that specialized
services that use variable port numbers set the low-delay, etc., bits
in the IP header to tell the routers to prioritize these packets.

And, yes, the BSD stack is capable of setting these bits with some
mods to the code via socket options.

---
Erik J. Murrey
Vitalink Communications NOC
ejm@NOC.Vitalink.COM	...!uunet!NOC.Vitalink.COM!ejm

barns@GATEWAY.MITRE.ORG (02/04/91)

I have been trying to interest people in using the Precedence field
where appropriate, as well as TOS.  There are some words in the draft
update of Router Requirements RFC on what the router should do about
precedence (along with what it should do about everything else).
(This document is available as an Internet Draft at the usual places.)
This, too, gets around the problem of the routers having to be told
in advance which protocols are considered important.  It also allows
for the possibility that (for example) some SNMP traffic is considered
very important and other SNMP traffic is considered expendable.

I also drafted a qualitative description of what facilities a host
should have for dealing with the precedence field.  This hasn't been
published anywhere but I'll send it to anyone who asks.

Bill Barns / MITRE-Washington / barns@gateway.mitre.org

PIRARD%vm1.ulg.ac.be@CUNYVM.CUNY.EDU (Andr'e PIRARD) (02/08/91)

Until recently, my opinion was that slower interfaces should have a larger
number of buffers in their output queues (or that a minimum reserved number
should be defined, with a scheme for picking additional ones from a common
pool using up unallocated memory).
Now I read in Cisco's doc: "For slow links, use a small output queue
hold limit."  The only reason I see is to avoid retransmissions making the
congestion problem worse with duplicate packets.  But I am not sure that,
as soon as a buffer of a congested small output queue becomes free, the next
datagram to fill it will not be a retransmission anyway.
What is true?  Are routers able to use techniques to match retransmissions
against datagrams waiting in an output queue and discard the duplicates?

Andr'e PIRARD             SEGI, Univ. de Li`ege
B26 - Sart Tilman         B-4000 Li`ege 1 (Belgium)
pirard@vm1.ulg.ac.be  or  PIRARD%BLIULG11.BITNET@CUNYVM.CUNY.EDU

jbvb@FTP.COM (James B. Van Bokkelen) (02/08/91)

    I have been trying to interest people in using the Precedence field
    where appropriate, as well as TOS.

Note that current production PC/TCP allows the setting of the Precedence
field on a per-PC basis (The API also allows it on a per-connection
basis, but none of our applications set it).  A separate configuration
item allows the user to enable Precedence checking as defined in the Mil
Std for TCP (MIL-STD-1778), so you can use it for either router traffic
identification, or to let a general override the ranks, as it was
originally designed to.  

James B. VanBokkelen		26 Princess St., Wakefield, MA  01880
FTP Software Inc.		voice: (617) 246-0900  fax: (617) 246-0901