[comp.protocols.tcp-ip] retry

guru@FLORA.WUSTL.EDU (Gurudatta Parulkar) (10/02/88)

------- Forwarded Message

To: "David Cheriton" <cheriton@pescadero.stanford.edu>
cc: tcp-ip@sri-nic.ARPA
Subject: Re: ST in Gateways 
In-reply-to: Your message of Thu, 29 Sep 88 23:01:39 -0700.
             <8809300601.AA13414@Pescadero> 
Date: Fri, 30 Sep 88 13:34:21 -0500
From: Gurudatta Parulkar <guru>

  
>I regard ST as a rather unfortunate direction unless I am missing something.
>First, it totally violates the IP architecture by putting a network-oriented
>virtual circuit protocol in place of internetwork datagrams.  This has been
>done with almost no critical discussion and review outside of BBN to my
>knowledge.  Informal comments made to me suggest that there are about 3-4
>people in the known universe that understand ST, and not much enlightenment
>to be gained from the confusing, out-of-date RFC.

>Second, ST people seem to claim that there is some magic that makes it
>"more efficient" yet I know of no evidence to this effect. For example,
>Mackenzie's claimed advantage in multi-site delivery can be equally
>well provided (I conjecture) using the IP multicast extension.  And, as he
>admits, the ST guarantees are no guarantee.  I believe that IP multicast
>and a good TOS implementation could do just as well.

Well, I don't know the ST effort and BBN policies, but I want to
dispute your claim (hypothesis, I guess) that the current IP
architecture is well suited to supporting applications such as video
conferencing, which need relatively high bandwidth and have real-time
constraints.

First of all, it is appropriate to describe my interpretation of the
current IP architecture to put my thinking in perspective.

In simple terms, the current IP architecture consists of a set of
gateways and networks which differ in their speed, access policies,
resource management, packet sizes and formats, and so on. The only
thing the internet expects from the component networks is that they
try to forward minimum-size datagrams and support an internet-level
logical addressing scheme.  They are allowed to lose, resequence, and
duplicate datagrams, and they are not required to make any guarantees
about performance. Gateways, on the other hand, are only required to
send each datagram towards its final destination. Again, gateways are
not required to do optimal routing in any sense. Of course, the TOS
option allows an application to indicate what kind of service it
needs - optimize throughput, delay, and/or reliability. However, the
TOS is only a request, which gateways can ignore, or worse, may be
unable to honor even if they understand the TOS field.  (Well, that is
my interpretation of the internet architecture, and it is the right
way to do things for the kind of objectives it was designed for, as
explained in Dave Clark's paper in the recent SIGCOMM.)
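To make the TOS option concrete, here is a small sketch (in Python, purely illustrative - the function name is mine, not from any RFC) of packing the RFC 791 Type of Service octet that an application would set to request low delay, high throughput, or high reliability:

```python
# Sketch of the RFC 791 Type of Service octet:
# bits 0-2 precedence, bit 3 low delay (D), bit 4 high
# throughput (T), bit 5 high reliability (R), bits 6-7 unused.

def make_tos(precedence=0, low_delay=False,
             high_throughput=False, high_reliability=False):
    """Pack a TOS octet; gateways may honor or ignore it."""
    assert 0 <= precedence <= 7
    tos = precedence << 5
    if low_delay:
        tos |= 0x10
    if high_throughput:
        tos |= 0x08
    if high_reliability:
        tos |= 0x04
    return tos

# e.g. routine precedence, low delay requested:
# make_tos(low_delay=True) == 0x10
```

Note that, as argued above, this octet is advice to the gateways, not a contract.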

Now, in any internet, how can we make performance guarantees?

There are two approaches that can be used at the internet level to
make performance guarantees. The first approach involves resource
management on a per-"flow" (or, in my terminology, quasi-reliable
connection) basis and requires some performance guarantees from
component networks. This approach requires that an application specify
its resource needs before starting to use them.  The application is
started or continued only if its needs can be met.  Once it starts,
mechanisms are provided to ensure that the component networks provide
the resources to the application and that the application does not use
more than its specified needs. Thus, if every application stays within
its specified resources, each application can be sure to get its share
of resources, and thus the expected performance.
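The first approach can be sketched as a simple admission-control check: admit a new flow only if every component network on its path can still cover the flow's declared needs. All names here (Network, admit, the single "demand" number) are my own illustration, not ST or any RFC:

```python
# Illustrative admission control for the per-flow approach:
# a flow declares its needs up front, and is started only if
# every network on its path can still meet them.

class Network:
    def __init__(self, capacity):
        self.capacity = capacity   # e.g. bandwidth units available
        self.reserved = 0          # units already promised to flows

    def can_accept(self, demand):
        return self.reserved + demand <= self.capacity

    def reserve(self, demand):
        self.reserved += demand

def admit(path, demand):
    """Start the application only if its needs can be met end to end."""
    if all(net.can_accept(demand) for net in path):
        for net in path:
            net.reserve(demand)
        return True
    return False

# path = [Network(10), Network(5)]
# admit(path, 4) -> True; a second admit(path, 4) -> False,
# because the second network cannot cover another 4 units.
```

In a real internet the "demand" would of course be richer than one number, and policing would be needed to keep admitted flows within their declared needs.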

The second approach is to over-engineer the internet, which means
over-engineering the component networks. If networks are sufficiently
over-engineered, applications can remain pretty much unconstrained and
still be sure that their resource needs will be met, and thus get a
guaranteed level of performance.

Clearly, the current internet does not use the first approach and
depends on the second. However, it should be obvious that over-
engineering a rapidly growing, large internet is "unrealistic". I
therefore believe it is difficult, if not impossible, to support
applications which need performance guarantees or predictable
performance in the existing internet architecture.

I argue that we can make much better performance guarantees if we use
a combination of the two approaches, with emphasis on the first.
One way to achieve this is to have an internet architecture
which consists of

- - a quasi-reliable connection-oriented (NOT THE SAME AS AN X.25
  VIRTUAL CIRCUIT) internet protocol

- - component networks which can do resource management on a
  per-connection basis, or networks which help their directly
  connected gateways to do the resource management

- - component networks which can make performance guarantees using their
  resource management schemes
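The payoff of the third component is that per-network guarantees compose: if each network on a connection's path can bound its own delay, the internet level can add those bounds into an end-to-end guarantee. A trivial sketch (numbers and names are illustrative only):

```python
# If each component network bounds its own worst-case delay,
# the internet level can compose an end-to-end bound by summation.

def end_to_end_delay_bound(per_network_bounds_ms):
    """Sum per-network worst-case delays (ms) along the path."""
    return sum(per_network_bounds_ms)

def meets_deadline(per_network_bounds_ms, deadline_ms):
    """Can this path honor the application's delay requirement?"""
    return end_to_end_delay_bound(per_network_bounds_ms) <= deadline_ms

# e.g. three networks bounding delay at 20, 50 and 30 ms give a
# 100 ms end-to-end bound, usable for a 150 ms conferencing budget.
```

No such composition is possible when the component networks make no guarantees at all, which is the datagram internet's situation today.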

I can elaborate on how to do this and why it can provide better
guarantees, but I am not sure if that is appropriate at this time.
However, the following comments are necessary:  

- - the connection-oriented approach does not mean a RELIABLE virtual
  connection with hop-by-hop flow and error control. A connection only
  implies a pre-identified path and appropriate resources allocated on
  this path for the application.

- - resources are not allocated on the basis of peak requirements
  alone, but on the basis of peak as well as average requirements.

- - thus the key advantage of the quasi-reliable connection-oriented
  internet protocol is that it allows resource allocation on a
  per-connection basis, and as a result can help us make performance
  guarantees and avoid congestion.

- - because the internet-level protocol takes into account the component
  networks' ability to make performance guarantees and their available
  resources, it can make better end-to-end performance guarantees.

- - the connection-oriented model is inherently less flexible and less
  fail-safe than the datagram model, but do you really need the
  flexibility, or do you need performance and performance guarantees?
  Note that computer networks are no longer designed only for the
  Department of Defense.
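The "peak as well as average" point above can be sketched with a token-bucket profile, which limits a connection to an average rate while tolerating short peaks up to a burst size. The class and parameter names are my own illustration:

```python
# Illustrative "peak as well as average" allocation: a token bucket
# enforces an average rate (token refill) while allowing bursts up
# to the bucket depth, so neither peak nor average alone is used.

class TokenBucket:
    def __init__(self, avg_rate, burst):
        self.avg_rate = avg_rate   # tokens added per second (average)
        self.burst = burst         # bucket depth (tolerated peak burst)
        self.tokens = burst        # start with a full bucket
        self.last = 0.0            # time of the previous check

    def conforms(self, size, now):
        """True if a packet of `size` at time `now` fits the profile."""
        elapsed = now - self.last
        self.tokens = min(self.burst,
                          self.tokens + elapsed * self.avg_rate)
        self.last = now
        if size <= self.tokens:
            self.tokens -= size
            return True
        return False

# TokenBucket(1.0, 5.0): a burst of 5 units conforms immediately,
# but further traffic must wait for tokens at the 1 unit/s average.
```

A component network could use such a profile both to allocate resources for a connection and to police it afterwards.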

I hope this long note clarifies the following simple point:

"In the current internet architecture, it is difficult to make
performance guarantees. It would be easier to achieve this in an
internet which uses a quasi-reliable connection-oriented internet
protocol and requires component networks to provide mechanisms for
resource management on a per-connection basis and to make performance
guarantees."

I hope this makes some sense.

- -guru

Dr. Guru Parulkar
Asst Professor             guru@flora.wustl.edu
Dept of Computer Science   parulkar@udel.edu 
Washington University      wucs1!guru@uunet.uu.net
St. Louis MO 63130 
(314) 889-4621

------- End of Forwarded Message