[net.lan] Standards for commercial packet radio

mark@cbosgd.UUCP (Mark Horton) (08/26/85)

In article <166@rpics.UUCP> schoff@rpics.UUCP (Martin Lee Schoffstall) writes:
>> The refusal of any of the commercial standards
>> organizations to consider datagram protocols is what prompted the DoD to
>> develop their own.
>> 
>Then they realized that they had enemies too and have "built" a
>transport protocol (TP4) which lives on top of a datagram network
>protocol.  :-)

As I read the intent of the CCITT folks working on OSI (and this is
based on an OMNICOM seminar taught by Hal Folts, so his biases may
be what you're seeing here) they think virtual circuits are wonderful
and that anyone who would do anything with a datagram must be crazy.
As evidence to support this, they point to the fact that in 1980,
X.25 had a datagram facility, but by 1984 nobody had implemented it,
so it was deleted from the 1984 spec.  (It seems to me that there was
no serious demand for datagrams from the common carriers until around
1982 or 1983 when TCP/IP became popular - CSNET would have been the
first real user.)

Most of the interest in TP4 and CLNS seems to be from the people doing
the AUTOFACT stuff - mostly American computer companies.  The Telcos
which make up CCITT have decreed that there will be no wide area
network protocols except X.25, and they are trying to move the LAN
protocols in the connection oriented direction.  (Did you know that
IEEE 802.3 has a connection oriented mode?  Does anyone care?)  They
have gone so far as to get DOD to announce that the preferred interface
to the ARPANET is no longer 1822, it's X.25.  I have no idea how you are
supposed to get through a virtual circuit oriented X.25 bottleneck to
send IP packets out over the ARPANET, this must make an interesting story.

	Mark

karn@petrus.UUCP (Phil R. Karn) (08/26/85)

> As evidence to support this, they point to the fact that in 1980,
> X.25 had a datagram facility, but by 1984 nobody had implemented it,
> so it was deleted from the 1984 spec.

Some might consider this a cheap shot, but in my opinion the X.25 datagram
facility was such a brain-damaged afterthought that it never had a fair
chance in the first place. Worse, many of the PDNs out there assume at a
very low level that they're only providing virtual circuit service (e.g.,
Telenet, TYMNET) so implementing a datagram service would essentially
require the overhead of setting up and tearing down a virtual circuit on
each and every datagram.

> I have no idea how you are
> supposed to get through a virtual circuit oriented X.25 bottleneck to
> send IP packets out over the ARPANET, this must make an interesting story.

We are one of the relative handful of sites running the CSNET IP/X.25
interface, so I can describe it and our experiences in some detail. RFC-877
specifies a standard for the transmission of IP datagrams over X.25 virtual
circuits. Virtual circuits are established whenever there are datagrams to
send, and timed out after periods of inactivity.  "X.25 bottleneck" is
highly appropriate; because of the tiny packet size limits and flow control
windows, it is often necessary to open several parallel virtual circuits to
the same destination and multiplex datagrams among them, just so the
bandwidth of our 9600 baud "local loop" can be fully utilized.  Even then,
we're lucky to get 4800 baud left over for IP after all the redundant X.25
link level acks, packet level acks and so forth.
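
For the curious, here is a rough sketch in C of the kind of circuit
management RFC-877 implies.  This is not the actual CSNET code; the
names, the 90-second idle timeout, and the threshold for opening
another parallel circuit are all made up for illustration.

#include <stdio.h>
#include <time.h>

#define MAXVC		4	/* parallel VCs to one destination */
#define IDLE_TIMEOUT	90	/* seconds of silence before clearing */

struct vc {
	int	open;		/* is this circuit established? */
	time_t	last_use;	/* when we last sent a datagram on it */
	int	queued;		/* datagrams still waiting on its window */
};

static struct vc vctab[MAXVC];

/* Pick the least-loaded open circuit; if they're all backed up,
 * open another one (this is the "parallel VC" trick). */
static int pick_vc(void)
{
	int i, best = -1;

	for (i = 0; i < MAXVC; i++)
		if (vctab[i].open &&
		    (best < 0 || vctab[i].queued < vctab[best].queued))
			best = i;
	if (best < 0 || vctab[best].queued > 2)
		for (i = 0; i < MAXVC; i++)
			if (!vctab[i].open) {
				vctab[i].open = 1;	/* X.25 CALL REQUEST goes out here */
				vctab[i].queued = 0;
				printf("open VC %d\n", i);
				return i;
			}
	return best;
}

static void send_datagram(time_t now)
{
	int i = pick_vc();

	if (i < 0)
		return;			/* every circuit busy: drop it */
	vctab[i].last_use = now;
	vctab[i].queued++;		/* decremented when the ack returns */
	printf("IP datagram out on VC %d\n", i);
}

/* Run periodically: clear circuits that have gone idle. */
static void reap_idle(time_t now)
{
	int i;

	for (i = 0; i < MAXVC; i++)
		if (vctab[i].open && now - vctab[i].last_use > IDLE_TIMEOUT) {
			vctab[i].open = 0;	/* X.25 CLEAR REQUEST goes out here */
			printf("clear idle VC %d\n", i);
		}
}

int main(void)
{
	time_t now = time(NULL);

	send_datagram(now);
	send_datagram(now);
	reap_idle(now + 200);		/* pretend 200 seconds went by */
	return 0;
}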

And the CCITT bigots claim that TCP/IP has too much "overhead".

Phil

mark@cbosgd.UUCP (Mark Horton) (08/27/85)

In article <481@petrus.UUCP> karn@petrus.UUCP (Phil R. Karn) writes:
>Some might consider this a cheap shot, but in my opinion the X.25 datagram
>facility was such a brain-damaged afterthought that it never had a fair
>chance in the first place. Worse, many of the PDNs out there assume at a
>very low level that they're only providing virtual circuit service (e.g.,
>Telenet, TYMNET) so implementing a datagram service would essentially
>require the overhead of setting up and tearing down a virtual circuit on
>each and every datagram.

I am not familiar with the 1980 X.25 datagram, but the stories I've heard
agree with you.  I also understand that the current X.25 spec has a feature
called "fast select" which essentially opens up a virtual circuit (carrying
data with the open request) and immediately closes it down, which might
be reasonably suitable for datagrams.  (At least, it might reduce the
number of round trips to one; I suspect the first ACK will come back anyway.)
Is this really a good thing?  Is anybody implementing it?

>> I have no idea how you are
>> supposed to get through a virtual circuit oriented X.25 bottleneck to
>> send IP packets out over the ARPANET, this must make an interesting story.
>
>We are one of the relative handful of sites running the CSNET IP/X.25
>interface, so I can describe it and our experiences in some detail. RFC-877
>specifies a standard for the transmission of IP datagrams over X.25 virtual
>circuits. Virtual circuits are established whenever there are datagrams to
>send, and timed out after periods of inactivity.  "X.25 bottleneck" is
>highly appropriate; because of the tiny packet size limits and flow control
>windows, it is often necessary to open several parallel virtual circuits to
>the same destination and multiplex datagrams among them, just so the
>bandwidth of our 9600 baud "local loop" can be fully utilized.  Even then,
>we're lucky to get 4800 baud left over for IP after all the redundant X.25
>link level acks, packet level acks and so forth.

That's not what I meant, Phil.  You aren't on the ARPANET, you're on CSNET,
which is one of the networks in the ARPA Internet.  As you point out, being
on CSNET is not as good as being directly on the ARPANET, because you have
to use X.25 virtual circuits underneath your IP datagrams.  Obviously this
is silly, but it does have the clear advantage that it works.

As I understand the situation, people plugging directly into the REAL LIVE
ARPANET (the one you have to know somebody in the pentagon to get onto)
are now being told that the preferred interface is no longer 1822, it's
X.25.  This is what I don't understand.

>And the CCITT bigots claim that TCP/IP has too much "overhead".

In all fairness, you're looking at TCP/IP/X.25/9600, which is a silly
configuration.  It would be much more reasonable to compare
FTP/TCP/IP/SLIP/9600 with the appropriate virtual circuit oriented 
file transfer, which I suppose might be FTAM/X.409/?/TP3/X.25/HDLC/9600
(I'm not sure of the details or what the session layer protocol is.)
My impression of the OSI stuff was that, while there is high overhead
in setting up virtual circuits (one at each layer from 2 to 7),
and while each of the 6 layers above the physical adds a header (and
a trailer in HDLC), the total number of bytes of headers is only
about 30, compared to 60 for combined TCP and IP.  Over low speed
links like RS232 lines there might be less overhead for large file transfers.
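
A quick back-of-the-envelope check: the 60- and 30-byte totals are the
figures above, and 40 bytes is what TCP and IP headers actually come
to without options.  The 1K segment size and 9600 baud line are just
for illustration.

#include <stdio.h>

int main(void)
{
	int data = 1024;		/* bytes of user data per segment */
	int hdrs[] = { 60, 40, 30 };	/* header totals being compared */
	char *name[] = { "TCP+IP (as above)", "TCP+IP (no options)",
			 "OSI (claimed)" };
	int i;

	for (i = 0; i < 3; i++) {
		double frac = (double)data / (data + hdrs[i]);
		printf("%-20s %2d header bytes: %5.1f%% of the line is data, "
		       "about %3.0f cps at 9600 baud\n",
		       name[i], hdrs[i], 100.0 * frac, 960.0 * frac);
	}
	return 0;
}

If those figures are anywhere near right, the header bytes themselves
only cost a few percent either way on a large transfer; the per-packet
acknowledgements Phil describes are where the real differences show up.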

Of course, there are other factors, like the memory copies that may
be necessary to add 2 bytes of header onto an existing packet at
each layer, and the overhead of actually having a layer look at the
packet.  These could become very important with a high speed LAN,
where the bottleneck is the CPU on each end.  Also, the complexity
of the implementations, and the wide range of options different
implementors will have to choose from, may leave most ISO "Standard"
implementations unable to talk to each other, and even unable to be
released as products until some time after everybody has Ada compilers.

	Mark

chris@umcp-cs.UUCP (Chris Torek) (08/27/85)

>... [in] the OSI stuff ...  the total number of bytes of headers is only
>about 30, compared to 60 for combined TCP and IP.

Dunno about ISO/OSI, but it's 40 bytes for a TCP/IP header.  (Ignoring
options, which no one uses anyway, except at setup time.)
-- 
In-Real-Life: Chris Torek, Univ of MD Comp Sci Dept (+1 301 454 4251)
UUCP:	seismo!umcp-cs!chris
CSNet:	chris@umcp-cs		ARPA:	chris@maryland

hhs@hou2h.UUCP (H.SHARP) (08/27/85)

I have also heard that DOD (or ARPANET) has made a commitment to
different standards groups (NBS, FTSC, etc.) that it is going to 
migrate toward ISO's layer 4 and away from TCP/IP.  The  FTSC and
NBS came out with a dual release of the ISO's layer 4.  I wonder
if this is actually going to happen.

While on the subject, I have a question.  Has anyone ever run VI on
Unix from a terminal with an X.25 packet switching network between the
terminal and host?  How did it work?  Just curious.

karn@petrus.UUCP (Phil R. Karn) (08/28/85)

> I have also heard that DOD (or ARPANET) has made a commitment to
> different standards groups (NBS, FTSC, etc.) that it is going to 
> migrate toward ISO's layer 4 and away from TCP/IP.  The  FTSC and
> NBS came out with a dual release of the ISO's layer 4.  I wonder
> if this is actually going to happen.

This was the subject of the famous "NRC Report". As I interpret the official
DoD response to the report, they said that they've already got a set of
working protocols that do everything they want. Until ISO becomes a real set
of standards implemented in real products instead of so much paper, there's
no hurry to consider any form of conversion.

> While on the subject, I have a question.  Has anyone ever run VI on
> Unix from a terminal with an X.25 packet switching network between the
> terminal and host?  How did it work?  Just curious.

We can do this on our CSNET/X.25 interface. It works, but slowly, as you'd
expect.

Phil

karn@petrus.UUCP (Phil R. Karn) (08/28/85)

> I also understand that the current X.25 spec has a feature
> called "fast select"...

Fast select was the Japanese answer to the datagram advocates. The "pure"
datagram facility that was added in 1980 and removed in 1984 was the US
proposal. In the true spirit of bureaucracies everywhere, the CCITT accepted
both.  I don't know for sure which networks implement fast select. As you
might know, the current CSNET IP/X.25 software drops datagrams if a VC isn't
already open to the destination. When I asked the implementors why they
didn't use fast select, so as to avoid dropping the initial datagram of a TCP
session, they said that not all of the networks they had to deal with
implemented it.

It seems to me that a standard that isn't universally implemented isn't a
standard at all...

> That's not what I meant, Phil.  You aren't on the ARPANET, you're on CSNET,
> which is one of the networks in the ARPA Internet.

True, I should have been more precise. You aren't "on" the ARPANET unless
you've got an IMP interface and an address on network 10. But IP-level
connectivity is what really counts, even if it's slow, so as far as our
users are concerned, we're "on" the ARPANET. CSNET/X.25 is one hell of a lot
better than uucp.

> As I understand the situation, people plugging directly into the REAL LIVE
> ARPANET (the one you have to know somebody in the pentagon to get onto)
> are now being told that the preferred interface is no longer 1822, it's
> X.25.  This is what I don't understand.

Me neither, although I've seen the interface specs, so it seems to be for
real.

> >And the CCITT bigots claim that TCP/IP has too much "overhead".
> In all fairness, you're looking at TCP/IP/X.25/9600, which is a silly
> configuration....

But it's unavoidable if you want to do internetting on a large scale, don't
have direct access to the ARPANET, and can't justify leased lines.  At
least until some PDN gets smart and offers a true commercial datagram
service...

Actually, it's interesting to go through the numbers. If you're doing an FTP
over TCP/IP/X.25 on Telenet with reasonable TCP segment sizes (e.g., 1KB)
then I think you'll find that the majority of the line overhead is due to
the so-called "low overhead" X.25 protocol because of its small packet sizes
and its fetish for acknowledgements at every layer. The actual amount
depends on how well the link layer acknowledgement delay timer is tuned.  If
it is tuned badly (and I suspect it is in our case, although I haven't the
foggiest idea how to change it on our interface board), then just hitting a
single key on a terminal connected to a PAD in character-at-a-time mode
(which is how most people operate them) will involve the transmission of
30 bytes of X.25 headers!
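
To put rough numbers on that, assuming the nominal LAPB and modulo-8
packet header sizes, and an ack delay timer tuned so badly that every
frame is acknowledged individually.  This is a sketch with textbook
field sizes, not a measurement of Telenet.

#include <stdio.h>

#define LAPB	6	/* 2 flags + address + control + 2-byte FCS */
#define PKT_HDR	3	/* X.25 packet-level header, modulo-8 numbering */

int main(void)
{
	int data_frame = LAPB + PKT_HDR;	/* I-frame carrying the 1 data byte */
	int link_ack   = LAPB;			/* LAPB RR coming back */
	int packet_ack = LAPB + PKT_HDR;	/* packet-level RR, in its own frame */
	int link_ack2  = LAPB;			/* LAPB RR acknowledging that one */

	printf("about %d bytes of X.25 overhead to carry 1 byte of data\n",
	       data_frame + link_ack + packet_ack + link_ack2);
	return 0;
}

And if the host echoes the character remotely, figure roughly the same
again in the other direction.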

True, it's also at least theoretically possible to use larger packet and
window sizes in X.25 but no one seems to know how to do that with the actual
PDN we have to deal with.

I think the best way to understand the X.25/CCITT mentality is to understand
that it's just fine for what almost everybody uses it for -- remote access
from a dumb terminal through a PAD. Relatively few people are doing the kind
of true host-to-host resource sharing that is the basis of the Internet.

Phil

hhs@hou2h.UUCP (H.SHARP) (08/28/85)

>From: karn@petrus.UUCP (Phil R. Karn)
>Posted: Tue Aug 27 19:44:57 1985

>True, it's also at least theoretically possible to use larger packet and
>window sizes in X.25 but no one seems to know how to do that with the actual
>PDN we have to deal with.

>I think best way to understand the X.25/CCITT mentality is to understand
>that it's just fine for what almost everybody uses it for -- remote access
>from a dumb terminal through a PAD. Relatively few people are doing the kind
>of true host-to-host resource sharing that is the basis of the Internet.

It is interesting to consider another viewpoint of this matter.
Recommendation X.25 and the ISO standard for X.25 for 1984 
make allowances for packet sizes up to 512 octets (or 1024 octets).
The PDN's are only implementing the smaller packet sizes (128 octets)
because the only demand is for remote access from a dumb terminal
through a PAD.  Maybe if the PDN's had more demand for true
host-to-host resource sharing, they would offer the larger packet
sizes and faster lines.  Any comments from a PDN out there?
 
If a switch can handle a certain number of packets per second, then
increasing the packet size will increase throughput (of course, other
factors come into play, such as the need for more buffer space).
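
A trivial illustration of that point, with a purely hypothetical
switch limit of 300 packets per second:

#include <stdio.h>

int main(void)
{
	int pps = 300;				/* hypothetical switch limit */
	int sizes[] = { 128, 512, 1024 };	/* octets of user data per packet */
	int i;

	for (i = 0; i < 3; i++)
		printf("%4d-octet packets: %6d bytes/sec through the switch\n",
		       sizes[i], pps * sizes[i]);
	return 0;
}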

george@mnetor.UUCP (George Hart) (08/30/85)

In article <1032@hou2h.UUCP> hhs@hou2h.UUCP (H.SHARP) writes:
>...
>While on the subject, I have a question.  Has anyone ever run VI on
>Unix from a terminal with an X.25 packet switching network between the
>terminal and host?  How did it work?  Just curious.

X.25 PADs generally only send a packet when it is full, after a
specified timeout, or when certain significant characters (usually CR
and DEL) are received.  Needless to say, editors like vi don't work
very well (if at all), since, for example, you would have to hit
carriage return after every character in command mode just to get it
forwarded promptly.

Pad parameters can be tweaked so that the packet timeout is extremely
short or the packet size is very small (say 1!) but since you are
usually charged by the packet (or kilopackets), this is very expensive.
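
For anyone who hasn't fought with one of these, the forwarding decision
is roughly the sketch below.  The packet size, idle timer, and
forwarding characters are all settable X.3 PAD parameters and vary
from network to network; the values here are only examples.

#include <stdio.h>

#define PKTSIZE		128	/* forward when the packet fills up */
#define IDLE_TICKS	2	/* forward after this many idle timer ticks */

static char buf[PKTSIZE];
static int  len, idle;

static void forward(const char *why)
{
	if (len > 0)
		printf("forward %d byte(s): %s\n", len, why);
	len = 0;
	idle = 0;
}

/* Called for every character arriving from the terminal. */
static void pad_input(int c)
{
	buf[len++] = c;
	idle = 0;
	if (c == '\r' || c == 0177)	/* CR or DEL: forwarding characters */
		forward("forwarding character");
	else if (len == PKTSIZE)
		forward("packet full");
}

/* Called on every timer tick when nothing has arrived. */
static void pad_tick(void)
{
	if (len > 0 && ++idle >= IDLE_TICKS)
		forward("idle timeout");
}

int main(void)
{
	pad_input('l');			/* a vi command-mode keystroke... */
	pad_tick();
	pad_tick();			/* ...dribbles out alone on the timeout */
	pad_input('i'); pad_input('h'); pad_input('i');
	pad_input('\r');		/* CR forwards the rest in one packet */
	return 0;
}
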
-- 


Regards,

George Hart, Computer X Canada Ltd.
UUCP: {allegra|decvax|duke|floyd|linus|ihnp4}!utzoo!mnetor!george
BELL: (416)475-8980

martin@sabre.UUCP (Martin Levy) (08/31/85)

Given this statement, and the fact that it is correct:

> They have gone so far as to get DOD to announce that the preferred interface
> to the ARPANET is no longer 1822, it's X.25.

Can someone explain why?  Was it just because more people know X.25 than
1822?  How much of X.25 is used in this link?  Where are the specs?

martin levy
bellcore.

ml@dlvax2.UUCP ( Martyn Legge ) (09/03/85)

In article <1032@hou2h.UUCP> hhs@hou2h.UUCP (H.SHARP) writes:

>While on the subject, I have a question.  Has anyone ever run VI on
>Unix from a terminal with an X.25 packet switching network between the
>terminal and host?  

I have.

>How did it work?  Just curious.

Appallingly badly!

Martyn Legge (Data Logic)

tcs@usna.UUCP (Terry Slattery <tcs@usna>) (09/04/85)

> They have gone so far as to get DOD to announce that the preferred interface
> to the ARPANET is no longer 1822, it's X.25.

We're just now going through getting a Milnet connection and were told that
X.25 was the desired connection protocol.  We opted for an 1822 HDH (1822
protocol over HDLC) connection, since ACC makes the IF11/HDH that does that
for our Unibus machine.  As I understand it, the X.25 is only a
point-to-point link to replace the 1822 point-to-point link.  Since both use
HDLC framing at the bit level, the only difference is the link level
protocol (1822 vs X.25).
I can't comment on the differences in performance of 1822 vs X.25
in this configuration.  Anyone know?
	-tcs
	Terry Slattery	  U.S. Naval Academy	301-267-4413
	ARPA: tcs@brl-bmd     UUCP: decvax!brl-bmd!usna!tcs

adh@cstvax.UUCP (Adam Hamilton) (09/05/85)

In article <296@dlvax2.UUCP> ml@dlvax2.UUCP ( Martyn Legge ) writes:
>In article <1032@hou2h.UUCP> hhs@hou2h.UUCP (H.SHARP) writes:
>
>>While on the subject, I have a question.  Has anyone ever run VI on
>>Unix from a terminal with an X.25 packet switching network between the
>>terminal and host?  
>
>I have.
>
>>How did it work?  Just curious.
>
>Appallingly badly!
>
>Martyn Legge (Data Logic)

At Edinburgh University, we access machines via X.25 all the time.
Performance varies depending on the power at the host end.
My description of running vi is "a bit lumpy", but it really is quite
acceptable (unless you judge by running SPY on a PERQ or SUN).

		Adam Hamilton

(Disclaim, exclaim, reclaim, inflame)

ewiles@netex.UUCP (Ed Wiles) (09/21/85)

>
>While on the subject, I have a question.  Has anyone ever run VI on
>Unix from a terminal with an X.25 packet switching network between the
>terminal and host?  
>
>How did it work?  Just curious.
>

Our company uses X.25 extensively between terminals and hosts; the major
problem seems to be "lumpiness", as another poster put it.  The characters
you type, and the effect that they have, are seen as one char, then
another, then a whole gob of them.  As for slowness: like any network, if
you overload it, it's going to slow down.

				E. L. Wiles
				Member Tech Staff, NetExpress Inc.