[comp.dcom.modems] Network protocols questions

SY.FDC@CU20B.COLUMBIA.EDU.UUCP (03/26/87)

In putting together a course on data communication protocols, I've run up
against some questions that I can't find answers to in any books, journals,
or magazines (like Data Communications) I've seen.  Can anybody out there help?

Physical Layer protocols -- every network book I've seen uses CCITT X.21 as its
only example of a physical layer protocol.  Is X.21 really used anywhere?  I
suspect not, because it requires special interfaces (as described in X.24),
which are probably not (yet?) widespread.  How about X.21bis, which works with
V-series interfaces (e.g. V.24, i.e. RS-232)?  Are there any other well-defined
physical layer protocols, or is access to the physical medium (connection
establishment, maintenance, and release) usually just mushed into the datalink
layer?  I suppose Hayes modem language could be considered a "de-facto"
standard physical layer protocol...(?)

Forward Error Correction.  This is talked about a lot ("when you have long
delays, then you can use FEC..."), but what networks or datalink protocols
actually use it?  What form of it do they use -- Hamming code, or what?  How
many bit errors per character can they detect and correct?  What's the
per-character overhead?  Since these codes can typically correct only one bit
error per byte, there must also be allowance for retransmission.  Do they also
use a block check in addition to per-character FEC?  etc etc.  Where can I read
about this?
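
To make the overhead arithmetic concrete: the classic Hamming(7,4) code adds
3 check bits per 4 data bits (75% overhead) and corrects any single-bit error
per 7-bit codeword.  A quick illustrative sketch in C -- not any particular
network's FEC, just the textbook code:

    /* Hamming(7,4): 4 data bits -> 7-bit codeword; corrects any
     * single-bit error.  Illustration only. */
    #include <stdio.h>

    /* Encode data bits d3..d0 as codeword p1 p2 d3 p4 d2 d1 d0. */
    unsigned encode(unsigned d)
    {
        unsigned d0 = d & 1, d1 = (d >> 1) & 1;
        unsigned d2 = (d >> 2) & 1, d3 = (d >> 3) & 1;
        unsigned p1 = d3 ^ d2 ^ d0;  /* covers positions 1,3,5,7 */
        unsigned p2 = d3 ^ d1 ^ d0;  /* covers positions 2,3,6,7 */
        unsigned p4 = d2 ^ d1 ^ d0;  /* covers positions 4,5,6,7 */
        return (p1 << 6) | (p2 << 5) | (d3 << 4) | (p4 << 3) |
               (d2 << 2) | (d1 << 1) | d0;
    }

    /* Decode: recomputed parities form a syndrome that names the
     * bad bit position (0 means no error seen). */
    unsigned decode(unsigned c)
    {
        unsigned bit[8], s1, s2, s4, syn;
        int i;
        for (i = 1; i <= 7; i++)
            bit[i] = (c >> (7 - i)) & 1;
        s1 = bit[1] ^ bit[3] ^ bit[5] ^ bit[7];
        s2 = bit[2] ^ bit[3] ^ bit[6] ^ bit[7];
        s4 = bit[4] ^ bit[5] ^ bit[6] ^ bit[7];
        syn = s1 + 2 * s2 + 4 * s4;
        if (syn)
            bit[syn] ^= 1;           /* flip the offending bit */
        return (bit[3] << 3) | (bit[5] << 2) | (bit[6] << 1) | bit[7];
    }

    int main(void)
    {
        unsigned c = encode(0xB);    /* data 1011 */
        c ^= 1 << 2;                 /* corrupt one bit in transit */
        printf("recovered %X\n", decode(c));  /* prints B */
        return 0;
    }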

Does anybody know what kind of protocols and error control techniques are used
to send back data from space (like from Uranus and Neptune), where corruption
is very likely, and retransmission is very expensive?

It seems hard to believe that Bisync can actually work under noisy conditions,
since many of the commands and responses are not error-checked.  Do sites that
depend on Bisync abound in horror stories?  Also, I don't understand how
transparent data mode is supposed to work, in light of the fact that SYNs that
are inserted into the data by the INTERFACE (e.g. idles) are supposed to have
DLEs stuffed in front of them, presumably by the SOFTWARE that already gave the
data to the interface to send.  Or are there special Bisync interfaces that
actually understand all of this...?

Is ANSI X3.66 (ADCCP) datalink protocol actually used anywhere, or is it really
just a paper model for talking about its subsets (like HDLC and SDLC)?

Was ANSI X3.28 (sort of an ASCII version of Bisync) ever used anywhere?

Outside of IBM land, and in peer (host-to-host) networks, does anyone use the
unbalanced versions of the HDLC-like datalink protocols?

Has anyone really analyzed checksums to the same degree as CRCs have been
analyzed?  The certainty with which they can catch different kinds of errors,
the probability that they will let various kinds of errors go undetected, etc.,
and their effectiveness compared with CRCs under real conditions.
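
To be concrete about what's being compared, here is a sketch in C of a plain
additive 16-bit sum next to the CCITT CRC-16 (x^16 + x^12 + x^5 + 1), coded
bit-serially for clarity rather than speed; the swapped-byte test at the end
shows one error class the additive sum is blind to:

    #include <stdio.h>

    unsigned short add_checksum(const unsigned char *p, int n)
    {
        unsigned short sum = 0;
        while (n-- > 0)
            sum += *p++;             /* blind to byte reordering */
        return sum;
    }

    unsigned short crc_ccitt(const unsigned char *p, int n)
    {
        unsigned short crc = 0xFFFF; /* common preset */
        int i;
        while (n-- > 0) {
            crc ^= (unsigned short)(*p++) << 8;
            for (i = 0; i < 8; i++)  /* one shift per bit */
                crc = (crc & 0x8000) ? (crc << 1) ^ 0x1021 : (crc << 1);
        }
        return crc;
    }

    int main(void)
    {
        unsigned char a[] = "AB", b[] = "BA";  /* swapped bytes */
        printf("sum: %04X vs %04X\n", add_checksum(a, 2), add_checksum(b, 2));
        printf("crc: %04X vs %04X\n", crc_ccitt(a, 2), crc_ccitt(b, 2));
        return 0;
    }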

For that matter, have there been any recent studies (i.e. later than the 1960s)
of error patterns on various kinds of communication lines -- the switched phone
network, leased voice-grade lines, "conditioned" lines, etc.  The distribution
of errors by burst size (1 bit, 2 bits, etc), the typical bit error rates, ...

IEEE 802.2, Logical Link Control for LANs, describes two datalink options:
Type 1 (connectionless, no acknowledgement or error control), and Type 2
(a nearly full-blown balanced HDLC-like affair).  Does anyone use Type 2 in
real life?

Enough for now... (as you can see, I haven't got past the lower two layers
yet).  Thanks in advance to anyone who's willing to provide pointers!  - Frank
-------

howard@cos.UUCP (03/27/87)

In article <12289499503.24.SY.FDC@CU20B.COLUMBIA.EDU>, SY.FDC@CU20B.COLUMBIA.EDU (Frank da Cruz) writes:
> In putting together a course on data communication protocols, I've run up
> against some questions that I can't find answers to in any books, journals,
> or magazines (like Data Communications) I've seen.  Can anybody out there help?
> Physical Layer protocols -- every network book I've seen uses CCITT X.21 as its
> only example of a physical layer protocol.  Is X.21 really used anywhere?  I
> suspect not, because it requires special interfaces (as described in X.24),
> which are probably not (yet?) widespread.  
X.21 is fairly common in Europe for providing circuit-switched
data service.  I have used its physical layer as a "simplified
digital interface" in custom building networks.
>How about X.21bis, which works with
> V-series interfaces (e.g. V.24, i.e. RS-232)?  
X.21bis _is_ RS-232C, for all practical purposes.
>Are there any other well-defined
> physical layer protocols, or is access to the physical medium (connection
> establishment, maintenance, and release) usually just mushed into the datalink
> layer? 
CCITT V.35 is a common wideband interface; originally intended for
microwave links, it is now the common U.S. 56 kbps interface.  Other
standards include RS-449 (a generally dead 37-pin interface making some
comeback for digitized video conferencing), and the RS-423/422 _electrical_
interfaces for longer distances than RS-232 allows.
ISDN physical interfaces are also coming.
> I suppose Hayes modem language could be considered a "de-facto"
> standard physical layer protocol...(?)
I would not call Hayes language a physical layer protocol, but a routing
(i.e., OSI network or Layer 3) protocol.  Essentially, it sets up a virtual
call to the modem/network, and goes away.  The physical interface in the
Hayes context is RS-232 (unless there is a directly connected modem).
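
For example, the whole "call" is a single command/response exchange over the
same old RS-232 leads (number made up):

    ATDT 5551234     <-- terminal asks the modem to dial
    CONNECT 1200     <-- modem reports that the circuit is up
    ...data...       <-- Hayes language is now out of the picture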

Physical interfaces, as well as MAC layer interfaces, are defined for
LANs in the IEEE 802/ISO 8802 series.
> 
> It seems hard to believe that Bisync can actually work under noisy conditions,
> since many of the commands and responses are not error-checked.  Do sites that
> depend on Bisync abound in horror stories?  
YES.
>Also, I don't understand how
> transparent data mode is supposed to work, in light of the fact that SYNs that
> are inserted into the data by the INTERFACE (e.g. idles) are supposed to have
> DLEs stuffed in front of them, presumably by the SOFTWARE that already gave the
> data to the interface to send.  Or are there special Bisync interfaces that
> actually understand all of this...?
Bisync can work, with appropriate software, slowly but adequately until
it dies.  In practice, SYNs are rarely inserted.
> Is ANSI X3.66 (ADCCP) datalink protocol actually used anywhere, or is it really
> just a paper model for talking about its subsets (like HDLC and SDLC)?
In some military nets, yes, but HDLC and SDLC are more common.
ADCCP is the Federal standard (FED-STD-1003).
> Was ANSI X3.28 (sort of an ASCII version of Bisync) ever used anywhere?

Yes, for intelligent async terminals.  Its flow control primitives are
still used.
> For that matter, have there been any recent studies (i.e. later than the 1960s)
> of error patterns on various kinds of communication lines -- the switched phone
> network, leased voice-grade lines, "conditioned" lines, etc.  The distribution
> of errors by burst size (1 bit, 2 bits, etc), the typical bit error rates, ...
See the CCITT G.82x series.  Also, interesting work on error burst analysis,
with some public domain software, is being done by Neal Seitz and associates
at the Institute for Telecommunications Sciences (U.S. Commerce Dept,
Boulder, CO)--(303) 497-3106.

Howard Berkowitz
(703) 883-2812 (voice)

Howard@cos ... via sundc, hadron

sirbu@GAUSS.ECE.CMU.EDU.UUCP (03/27/87)

I'll try to answer a few of your questions.

	Is X.21 really used anywhere?  

X.21 is used all over Europe for connections to X.25 networks.  I think (not
sure) X.21 bis is used in the U.S.

  Are there any other well-defined physical layer protocols, or is access to
  the physical medium (connection establishment, maintenance, and release)
  usually just mushed into the datalink layer?  

The ISDN physical layer interfaces are fairly well defined.  Look at I.430.

  Forward Error Correction.  This is talked about a lot ("when you have long
  delays, then you can use FEC..."), but what networks or datalink protocols
  actually use it?  What form of it do they use -- Hamming code, or what?  How
  many bit errors per character can they detect and correct?  What's the
  per-character overhead?  Since these codes can typically correct only one
  bit error per byte, there must also be allowance for retransmission.  Do
  they also use a block check in addition to per-character FEC?  etc etc.
  Where can I read about this?  Does anybody know what kind of protocols and
  error control techniques are used to send back data from space (like from
  Uranus and Neptune), where corruption is very likely, and retransmission is
  very expensive?  

Re:  Forward error correction -- many of the companies offering private
satellite links use it (e.g. American Satellite).  It is also used on space
probes.  For a good article that I have used in a course, see: Edelson, R. E.,
et al., "Voyager Telecommunications: The Broadcast from Jupiter," Science,
vol. 204, June 1, 1979, pp. 913-921.


  For that matter, have there been any recent studies (i.e. later than the
  1960s) of error patterns on various kinds of communication lines -- the
  switched phone network, leased voice-grade lines, "conditioned" lines, etc.
  The distribution of errors by burst size (1 bit, 2 bits, etc), the typical
  bit error rates, ...  

There was a 1983 Loop Survey which was reported recently in either IEEE TOC
or JSAC.


  IEEE 802.2, Logical Link Control for LANs, describes two
  datalink options: Type 1 (connectionless, no acknowledgement or error
  control), and Type 2 (a nearly full-blown balanced HDLC-like affair).  Does
  anyone use Type 2 in real life?  

Regarding 802.2.  I believe the IBM token ring network implementation
uses Type 2.

mangler@cit-vax.UUCP (03/30/87)

In article <12289499503.24.SY.FDC@CU20B.COLUMBIA.EDU>, SY.FDC@CU20B.COLUMBIA.EDU (Frank da Cruz) writes:
> Does anybody know what kind of protocols and error control techniques are used
> to send back data from space (like from Uranus and Neptune), where corruption
> is very likely, and retransmission is very expensive?

They use convolutional codes, often with a rate of 1/2 (i.e. half of
the bits sent are data, the rest are error correction).  These are
serial codes, with the next error correction bit depending on the last
N data bits.  As N gets large, the code can handle longer error bursts,
and a higher percentage of errors, at immense decoding cost.  With such
codes you can approach the Shannon limit, which is why they're used in
deep-space communications, where transmission power is dear.
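
To make that concrete, here is a sketch in C of the cheap half -- a rate-1/2
encoder with the constraint-length-7 generator pair (171, 133 octal) commonly
cited for NASA deep-space links.  The expensive half, Viterbi decoding, is
omitted.

    #include <stdio.h>

    #define G1 0171   /* octal: tap masks on the 7-bit register */
    #define G2 0133

    static int parity(unsigned v)    /* XOR of all the bits of v */
    {
        int p = 0;
        while (v) { p ^= v & 1; v >>= 1; }
        return p;
    }

    /* Encode nbits data bits (LSB-first within each byte of in),
     * printing the two channel bits produced per data bit. */
    void encode(const unsigned char *in, int nbits)
    {
        unsigned reg = 0;            /* 7-bit shift register */
        int i;
        for (i = 0; i < nbits; i++) {
            int bit = (in[i / 8] >> (i % 8)) & 1;
            reg = ((reg << 1) | bit) & 0x7F;
            printf("%d%d", parity(reg & G1), parity(reg & G2));
        }
        putchar('\n');
    }

    int main(void)
    {
        unsigned char msg[] = { 0x5A };  /* 8 arbitrary data bits */
        encode(msg, 8);                  /* prints 16 channel bits */
        return 0;
    }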

If you wanted to make a dialup modem with continuous full-duplex
9600-baud throughput, that's what you'd have to do.  ISDN should
eventually obviate the need for it, though.

Don Speck   speck@vlsi.caltech.edu  {seismo,rutgers,ames}!cit-vax!speck

heath@ncrcae.UUCP (03/30/87)

1) physical layer -- X.21 is used in the Nordic countries in a network
   called NordicNet.  As for other physical layer interfaces, don't 
   forget RS-422, RS-449, TTY current loop, your LAN interfaces 
   (e.g. IEEE 802.3), optic fiber, etc.

2) forward error correction -- I don't know of any commercial data 
   communications which use forward error correction.

3) bisync -- bisync does have theoretical problems in extremely noisy 
   environments (these were corrected in its successors HDLC and ADCCP), 
   but it works very well in the workaday world.
   As for stuffing DLE-SYN during bisync transparency, the DLE-SYNs are never
   inserted by the layer above bisync.  They are inserted at the data link
   control layer itself, presumably by the same logic which affixes the 
   STX and ETX.  The sending bisync layer also parses the outgoing data
   for codes which coincidentally match the DLE character.  On finding a DLE,
   it inserts an extra DLE; the receiver collapses the doubled DLE back into
   one.  This mechanism is widely used in industry-standard 2780/3780 
   bisync.  Nowadays you can buy chips which will do this for you.  (A small
   sketch of the DLE-doubling appears after this list.)

4) ADCCP -- as a former contributor to ANSI X3.66, I feel that it is a paper
   model for talking about its subsets HDLC and SDLC. Like many standards,
   it's written too loosely to be implemented as a whole, but does provide a
   mechanism for classifying its subsets.

5) ANSI X3.28 -- properly known as Basic Mode -- is implemented in many of 
   NCR's cash registers, teller machines, and terminals.

6) Unbalanced HDLC -- NCR uses a proprietary form of HDLC known as NCR/DLC
   among its POS terminals, teller machines, and other terminals.

7) 802.2 Class 2 -- Though it's not popular now, just wait. 
   The full-blown balanced HDLC-like class will become more popular 
   under standards groups working with OSI.
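
Re item 3: the promised sketch of the DLE-doubling, in C.  Control characters
are the ASCII values; BCC accumulation and real I/O are left out.

    #include <stdio.h>

    #define DLE 0x10
    #define STX 0x02
    #define ETX 0x03

    static void put(unsigned char c) /* stand-in for the line driver */
    {
        printf("%02X ", c);
    }

    /* Transparent-mode text: bracket with DLE STX ... DLE ETX and
     * double any DLE occurring in the data; the receiver drops one
     * of each doubled pair. */
    void send_transparent(const unsigned char *data, int n)
    {
        int i;
        put(DLE); put(STX);          /* enter transparent text */
        for (i = 0; i < n; i++) {
            if (data[i] == DLE)
                put(DLE);            /* the extra DLE */
            put(data[i]);
        }
        put(DLE); put(ETX);          /* leave transparent text */
        putchar('\n');
    }

    int main(void)
    {
        unsigned char msg[] = { 'H', 'I', DLE, 0x00 };
        send_transparent(msg, 4);    /* DLE in data comes out doubled */
        return 0;
    }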

earle@smeagol.UUCP (03/31/87)

>Does anybody know what kind of protocols and error control techniques are used
>to send back data from space (like from Uranus and Neptune), where corruption
>is very likely, and retransmission is very expensive?

All (foreseeable) future spacecraft will use Reed-Solomon encoders on board the
spacecraft for telemetry transmissions.

The best reference I can come up with is
	Reed-Solomon Encoders -
	Conventional vs. Berlekamp's
		Architecture

	Authors: Marvin Perlman
		 Jun-ji Lee

	JPL Publication 82-71
	December 1, 1982

You should write to either the Publications Office (or, as a default,
the Office Of Public Affairs) at JPL and enquire about obtaining the
above booklet.

	Jet Propulsion Laboratory
	Public Information Office
	M/S 180-200
	4800 Oak Grove Drive
	Pasadena, CA  91109

The beginning of the first chapter reads:

				I. BACKGROUND

	Reed-Solomon (RS) codes are a special case of the nonbinary
generalization of Bose-Chaudhuri-Hocquenghem (BCH) codes.  They are among the
Maximum Distance Separable (MDS) codes which realize the `maximum' minimum
Hamming distance possible for a linear code (Refs. 1 and 2).  The interest in
RS codes was primarily theoretical until the concept of concatenated coding
was formulated and first introduced in Ref. 3.  Concatenated coding has been
adopted by the U.S. National Aeronautics and Space Administration (NASA) for
interplanetary space missions (see Fig. 1).  The application of concatenated
coding to NASA's Jet Propulsion Laboratory (JPL) spacecraft telemetry with a
convolutional inner code and an RS outer code was first proposed and analyzed
in Ref. 4.
...

`Figure 1' shows the pathway through a telemetry encoding and downlink, and
looks approximately like this:

  info -> RS encode & symbol interleave -> convolutional encode -> modulate
       -> downlink -> demodulate -> Viterbi decode
       -> RS symbol deinterleaving buffer -> RS decode -> info

(Where `RS' == Reed-Solomon)
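
The "symbol interleave" box matters, by the way, because the Viterbi decoder
fails in bursts: writing RS symbols into an array by rows and transmitting by
columns spreads one channel burst thinly across many codewords.  A toy-sized
sketch in C (the sizes are made up; flight RS codewords are much longer):

    #include <stdio.h>

    #define N 8     /* symbols per (toy) RS codeword */
    #define I 4     /* interleaving depth */

    int main(void)
    {
        unsigned char cw[I][N];      /* one codeword per row */
        int r, c;
        for (r = 0; r < I; r++)      /* fill with recognizable data */
            for (c = 0; c < N; c++)
                cw[r][c] = (unsigned char)(r * N + c);

        /* Transmit column-by-column: adjacent channel symbols now
         * belong to different codewords, so a burst of B symbols
         * costs each codeword only about B/I of them. */
        for (c = 0; c < N; c++)
            for (r = 0; r < I; r++)
                printf("%2d ", cw[r][c]);
        putchar('\n');
        return 0;
    }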

Hope this helps.

-- 
	Greg Earle	UUCP: sdcrdcf!smeagol!earle; attmail!earle
	JPL		ARPA: elroy!smeagol!earle@csvax.caltech.edu
AT&T: +1 818 354 4034	      earle@jplpub1.jpl.nasa.gov (For the daring)
I'm not an Iranian!!  I voted for Dianne Feinstein!!

rab@well.UUCP (03/31/87)

In a previous article a System Mangler writes:
  (in answer to a question about what codes are used in space)
+ They use convolutional codes, often with a rate of 1/2 (i.e. half of
+ the bits sent are data, the rest are error correction).  These are
+ serial codes, with the next error correction bit depending on the last
+ N data bits.  As N gets large, the code can handle longer error bursts,
+ and a higher percentage of errors, at immense decoding cost.  With such
+ codes you can approach the Shannon limit, which is why they're used in
+ deep-space communications, where transmission power is dear.

+ Don Speck   speck@vlsi.caltech.edu  {seismo,rutgers,ames}!cit-vax!speck

   Could you post some references where I might find a good but not
too technical explanation of such codes?  I don't want to get too deeply
into the math; I'd just like to find out about some typical ways of
implementing them.


-- 
Robert Bickford         {hplabs, ucbvax, lll-lcc, ptsfa}!well!rab
terrorist cryptography DES drugs cipher secret decode NSA CIA NRO IRS
coke crack pot LSD russian missile atom nuclear assassinate libyan RSA

cabo@tub.UUCP (04/01/87)

() [...] putting together a course on data communication protocols [...]
() Can anybody out there help?

Although I'm not the one who teaches the data communications courses
here, I'll try to throw in my USD 0.02.

() Is X.21 really used anywhere?

Yes, it is used as the base for the (pre-ISDN) line-switched digital
networks in Europe, like Datex-L (DATa EXchange/Line-switched) in Germany.
Since Datex-L is in turn the base network for the German TELETEX network,
a friend and I had the pleasure of implementing X.21 last year.

() I suspect not, because it requires special interfaces (as
() described in X.24), which are probably not (yet?) widespread.

It does require special interfaces (called RS-422 in the US),
but in Europe you just have to satisfy the PTT requirements when
you want to connect to public networks.

() How about X.21bis, which works with V-series interfaces (e.g. V.24, i.e.
() RS-232)?

Actually, there are two versions of X.21 (based on V.11 == RS-422): a
dialling version and a simple connect/disconnect version (without any way
to give a number for the remote end), usually called the leased-line
version.  X.21bis is just a way to express the leased-line X.21 protocol on
RS-232 wires.  If you have a working RS-232 (with all the modem control
signals correctly handled) you essentially have a working X.21bis, so I
could say X.21bis is used all over the world.

The X.21 dialling protocol is only defined on RS-422 wires and is weird
enough that you essentially need hardware assistance to do it right (we
managed to do it in software on a Z80 in assembly language, shudder).
I'd recommend staying away from the X.21 dialling protocol at all costs.

() I suppose Hayes modem language could be considered a "de-facto" standard
() physical layer protocol...(?)

I think so, too; but line-switching (be it X.21 or Hayes modems) doesn't
fit well into the OSI model; it moves layer 3 aspects down to layer 1.
Now, how do we get the European PTTs to accept this de-facto standard?

() It seems hard to believe that Bisync can actually work under noisy
() conditions, since many of the commands and responses are not
() error-checked.  Do sites that depend on Bisync abound in horror stories?

If Bisync does work, it works by accident.
Any other description would be euphemistic.
RSCS (the protocol BITNET is based on) only really works because
it has a second data link layer (with sequence numbers) above BSC.

() Also, I don't understand how transparent data mode is supposed to work, in
() light of the fact that SYNs that are inserted into the data by the
() INTERFACE (e.g.  idles) are supposed to have DLEs stuffed in front of
() them, presumably by the SOFTWARE that already gave the data to the
() interface to send.  Or are there special Bisync interfaces that actually
() understand all of this...?

There are nice chips out there that do all the crufty BSC stuff (the
R68561 comes to mind; I run my UREP on this GREAT chip), but you can do
everything necessary on a dumb chip like the Z8530 SCC or the 8274.
You are right, it is difficult to design a clean interface to a BSC
line driver, but then it is difficult to do anything cleanly within
10 meters of an IBM-defined protocol.

Regards, Carsten
--
Carsten Bormann, <cabo@tub.UUCP> <cabo@db0tui6.BITNET> <cabo@tub.BITNET>
Communications and Operating Systems Research Group
Technical University of Berlin (West, of course...)

jhc@mtune.UUCP (04/05/87)

> ... a dumb chip like the Z8530 SCC or the 8274.

I must be getting old; I remember thinking that the SCC was the
greatest thing since toilet seats when I first ran across it. And I'd
already got over the shock of not having to do my own bit-stuffing in
software...
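
For anyone who missed that era, the bit-stuffing in question is roughly this
(a sketch in C; the SCC does it in silicon):

    #include <stdio.h>

    /* HDLC zero-bit insertion: after five consecutive 1-bits of
     * data, force in a 0 so the data can never mimic the 01111110
     * flag.  Bits are emitted as characters for illustration. */
    void stuff_bits(const unsigned char *data, int nbits)
    {
        int ones = 0, i;
        for (i = 0; i < nbits; i++) {
            int bit = (data[i / 8] >> (i % 8)) & 1;  /* LSB first */
            putchar('0' + bit);
            if (bit) {
                if (++ones == 5) {   /* five 1s in a row... */
                    putchar('0');    /* ...stuff a 0 */
                    ones = 0;
                }
            } else
                ones = 0;
        }
        putchar('\n');
    }

    int main(void)
    {
        unsigned char data[] = { 0xFF };  /* eight 1-bits */
        stuff_bits(data, 8);              /* prints 111110111 */
        return 0;
    }
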
-- 
Jonathan Clark
[NAC,attmail]!mtune!jhc

Albatross! Stormy petrel on a stick!