[comp.protocols.tcp-ip] IP queuing for X.25 SVC's: congestion and performance problems.

eckert@medusa.informatik.uni-erlangen.de (Toerless Eckert) (02/21/90)

Hi.

I am experiencing some problems with running IP on top of X.25 (RFC 877).
We are currently running IP over 64 kbps X.25 trunks, with a
packet size of 128 bytes, a window size of 2 ( ;-( ) and an MTU of 576 bytes.
When the network is unloaded, throughput is around 3 kbyte/sec and the turnaround
time is between 100 msec and 200 msec, depending on the number of
X.25 switches between the two sites.

Now there are two problems:

1. Why is the maximum throughput so low? I understand that increasing the
   packet size and especially the window size on X.25 will increase it, and indeed
   it does, but tests showed that it will not go beyond 3.5 kbyte/sec.
   Has anyone observed better figures for this type of link?
   I have not been able to track this problem down.

2. If the network gets more loaded, the packet turnaround times go up
   to 30, 60 or even more seconds. These times are surely not reasonable.
   From what I have seen, this problem happens because of a combination
   of the following:
   - The throughput on the SVC used for IP goes down, possibly because
     other SVC's use part of the bandwidth.
   - The queues at the IP level and at the SVC level become very long.

   Now I really don't understand why one has to build up a long queue
   on a link that is possibly very slow. If one used a much shorter
   queue (possibly down to 1 packet), the throughput could still be the
   same, but the turnaround times (for those packets that get through)
   would be in a normal range.

   The special problem of an X.25 SVC as the provider for the IP link
   is that the throughput on such SVCs may vary over a wide range,
   from the unloaded rate down to nearly 0. In this case it could be
   advantageous to measure the SVC throughput and adjust the queue
   length according to that throughput (see the sketch after these
   questions).

   Am I wrong with my ideas? If so, what is the advantage of having
   a long queue, when it only causes the turnaround times to
   increase but has no effect on throughput?

   Any ideas about this? What is the best way to handle this type of
   link?
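
   To make the idea concrete, here is a minimal sketch in C of the kind
   of adaptive queue limit I have in mind. The names and the 2 second
   delay bound are invented for the example and do not come from any
   existing driver:

   /* Sketch: size the IP output queue for an X.25 SVC so that a full
    * queue never represents more than a bounded queueing delay.
    */
   #define MAX_QUEUE_DELAY_MS  2000    /* accept at most ~2 s of backlog */
   #define MTU_BYTES            576

   /* svc_throughput (bytes per second) is assumed to be maintained
    * elsewhere, e.g. by counting bytes acknowledged at the packet
    * level over the last few seconds.
    */
   int queue_limit_packets(long svc_throughput)
   {
       long bytes_allowed;
       int  limit;

       bytes_allowed = svc_throughput * MAX_QUEUE_DELAY_MS / 1000;
       limit = (int)(bytes_allowed / MTU_BYTES);
       if (limit < 1)              /* always allow at least one packet */
           limit = 1;
       return limit;
   }

   With 3 kbyte/sec measured on the SVC this gives a limit of about 10
   packets (roughly 2 seconds of backlog); when the SVC slows to a crawl
   the limit drops toward 1, so packets are dropped early instead of
   sitting in a 30 or 60 second queue.
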
-- 

Toerless Eckert     Internet: eckert@informatik.uni-erlangen.de
 		    X.400: /C=de/A=dbp/P=uni-erlangen/OU=informatik/S=eckert/

news@haddock.ima.isc.com (overhead) (02/23/90)

In article <2452@medusa.informatik.uni-erlangen.de> eckert@medusa.informatik.uni-erlangen.de (Toerless Eckert) writes:
>I am experiencing some problems with running IP on top of X.25 (RFC 877).
It sounds as though you are running with the D-bit (delivery
confirmation bit) set.  If so, you might consider if you really need
end-to-end confirmation.  Let the DCE do your buffering for you.
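
For what it is worth, the D-bit is the 0x40 bit of the first octet of
the X.25 packet header (the general format identifier), so it is easy
to spot on a trace or to clear in the driver.  A minimal sketch (the
pkt pointer and the function are invented for this example, not part
of any particular X.25 implementation):

    /* Sketch only: clear the D-bit on an outgoing DATA packet so that
     * the P(R) acknowledgement has local (DCE) significance only.
     * "pkt" points at the first octet of the X.25 packet header.
     */
    #define X25_GFI_D_BIT  0x40

    void clear_d_bit(unsigned char *pkt)
    {
        pkt[0] &= ~X25_GFI_D_BIT;
    }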

Just because you have a 64 kbps line to the first DCE doesn't necessarily
mean that all the links to the destination DTE are running at that
speed.  Is the Throughput Class facility meaningfully implemented by
your PDN?

A packet size of 1024 should be optimal for an MTU of
576.  A window size of two should be OK if you run without the D-bit
set; otherwise run with a window of seven (or larger if extended
packet sequence numbering is supported).
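
To put rough numbers on that (an estimate only; the 150 msec round trip
is just the midpoint of the turnaround times quoted in the original
posting):

    /* Window-limited throughput estimate for an X.25 SVC when each
     * window of packets has to be acknowledged over a full round trip
     * (e.g. with the D-bit set).  Figures from the original posting;
     * the 150 msec RTT is an assumed midpoint of the 100-200 msec quoted.
     */
    #include <stdio.h>

    int main(void)
    {
        double line_rate = 64000.0 / 8.0;   /* 64 kbps line = 8000 bytes/sec */
        double rtt       = 0.150;           /* seconds */
        int    window, packet_size;
        double limit;

        window = 2;  packet_size = 128;     /* current configuration */
        limit = window * packet_size / rtt;
        if (limit > line_rate) limit = line_rate;
        printf("window %d, packet %4d: ~%.0f bytes/sec\n",
               window, packet_size, limit); /* about 1700 bytes/sec */

        window = 7;  packet_size = 1024;    /* suggested configuration */
        limit = window * packet_size / rtt;
        if (limit > line_rate) limit = line_rate;
        printf("window %d, packet %4d: ~%.0f bytes/sec\n",
               window, packet_size, limit); /* capped at the line rate */

        return 0;
    }

The point is that with a window of 2 and 128-byte packets the circuit is
window-limited far below the 8 kbyte/sec line rate whenever
acknowledgements take anything close to a full turnaround, which fits
the observation that larger packets and windows help.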

If your IP and SVC queues are getting very long, it appears that your
implementation has a flow control problem.  You haven't said anything
about your environment, so I can't comment any more on this area.

Jim

craig@NNSC.NSF.NET (Craig Partridge) (02/23/90)

Running IP over X.25 is a notoriously tricky business.  Here are the
two pieces of wisdom I can offer from my limited experience with the
CSNET IP over X.25 driver.

One question is why the queue size ever gets so large.  Are the TCPs
not doing slow-start?  Does the queue never ever drop packets?  Something
sounds odd about the dynamics here.

The other item is a suggestion.  There are cases where there's simply
more traffic that wants to be sent than there is bandwidth over one
X.25 connection to send it.  The CSNET driver had a feature where it
would open additional connections when the queue got large (up to,
I think, 6 parallel connections).  There was some simple code to
estimate cost-benefit so connections would close down later if there
wasn't enough bandwidth to justify keeping them open.
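
Purely to illustrate the decision logic (this is not the actual CSNET
code; the names and thresholds are made up):

    /* Open another SVC to the same destination when the output queue
     * stays large, and close extra SVCs that have gone idle.  Always
     * keep at least one circuit open.
     */
    #define MAX_PARALLEL_SVCS   6
    #define QUEUE_HIGH_WATER   20   /* packets queued before opening another SVC */
    #define IDLE_CLOSE_SECS    60   /* close an extra SVC after this much idle time */

    struct svc {
        int  open;                  /* is this SVC currently established? */
        long last_activity;         /* time of last packet sent, in seconds */
    };

    /* Called periodically; "now" is the current time in seconds and
     * "queue_len" is the current depth of the IP output queue.
     */
    void adjust_svcs(struct svc svcs[], int *nopen, int queue_len, long now)
    {
        int i;

        /* queue persistently long and room for another circuit: open one */
        if (queue_len > QUEUE_HIGH_WATER && *nopen < MAX_PARALLEL_SVCS) {
            for (i = 0; i < MAX_PARALLEL_SVCS; i++) {
                if (!svcs[i].open) {
                    svcs[i].open = 1;           /* place the X.25 call here */
                    svcs[i].last_activity = now;
                    (*nopen)++;
                    break;
                }
            }
        }

        /* close extra circuits that have gone idle */
        for (i = 0; i < MAX_PARALLEL_SVCS && *nopen > 1; i++) {
            if (svcs[i].open && now - svcs[i].last_activity > IDLE_CLOSE_SECS) {
                svcs[i].open = 0;               /* clear the X.25 call here */
                (*nopen)--;
            }
        }
    }

Outgoing datagrams would then be spread over whichever circuits are
open, so each circuit's small window only limits a share of the load.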

Craig