[mod.telecom] DMI/CPI

lars@ACC-SB-UNIX.ARPA (Lars Poulsen) (01/09/86)

Nobody has taken the bait. There has not been a single note about
integrated data and voice either in my mailbox or in the digest.
I guess the people who are into that do not read TELECOM.

Does anyone care ? I could probably get hold of some brief introductory
material to the subject, but I won't bother the list, if this is not
a topic of general interest. Is there a net newsletter where I'd be more
likely to find people to share this interest with ?
	Lars Poulsen @ Advanced Computer Communications
	<LARS@ACC.ARPA>

jem@MIT-BORAX.ARPA (John E. McNamara) (01/11/86)

OK, I'll take the bait. While I was working for DEC I worked with NTI
(actually with Bell Northern Research) on creating the CPI spec. I
later was given a copy of the DMI spec and had occasion to talk at
some length with the ATTIS people who wrote it.

Unfortunately, since I left my copies of this material with DEC when I
left, I might not be able to answer very specific questions, but I'd
be happy to give a general outline.

First, the issue of goals. DEC and NTI were very anxious to create a
product which could be used immediately. They felt that the
appropriate way to connect the TDM backplane of an electronic PBX and
the TDM backplane of a computer would be via a TDM transmission
facility. Hence, T1 carrier was chosen. The T1 interfaces made by NTI
and other PBX manufacturers used "robbed bit signalling", and the T1
facilities that one could obtain were often of that type (although
short point-to-point T1 need not be). AT&T was making noises about
getting away from that signalling system "before the end of the
decade", but this was Fall 1981, so that seemed a long way away and
the product goal was speed to market with an economical product.

The reason for the long-winded introduction is that I am now about to
explain the evils of robbed bit signalling. In T1, at least when used
with the most common channel banks, 24 channels produce 8 bit samples
(192 bits) and an additional bit called the "framing bit" is added to
produce a 193 bit frame. The frames occur at an 8KHz rate, i.e. one
every 125 microseconds. The "framing bit" varies in a prescribed
pattern (100011001110 or something like that). This pattern not only 
permits receiving equipment to eventually figure out where frames are,
it also permits one to count the frames and identify every sixth
frame, for example. Now, in voice transmission, you don't REALLY need
eight bits of sampling, do you? No, an occasional seven bit sample
would never be noticed. So, the T1 transmitting channel bank steals
a sample bit in each channel on every sixth frame and uses that bit
to convey on-hook/off-hook signalling information, i.e. both
supervision and dial pulses. Touch-tone(R) would of course go right
down the voice channel (but supervision is still in the robbed bits).
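The robbed-bit scheme described above can be sketched in a few lines (a minimal Python illustration; the function and variable names are mine, not part of any spec):

```python
# Sketch of T1 robbed-bit signalling: 24 channels x 8-bit samples,
# plus one framing bit, gives a 193-bit frame every 125 microseconds.
# On every sixth frame the low-order bit of each sample is replaced
# by an on-hook/off-hook signalling bit.

def build_frame(samples, framing_bit, frame_number, signalling_bits):
    """samples: 24 ints (0-255); returns a list of 193 bits."""
    assert len(samples) == 24
    bits = [framing_bit]
    rob = (frame_number % 6) == 5            # every sixth frame
    for ch, sample in enumerate(samples):
        if rob:
            sample = (sample & 0xFE) | signalling_bits[ch]  # steal the LSB
        bits.extend((sample >> (7 - i)) & 1 for i in range(8))  # MSB first
    return bits

frame = build_frame([0x55] * 24, 1, 5, [1] * 24)
print(len(frame))        # 193 bits per frame, 8000 frames per second
```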

Another property of much of the installed T1 base is that since binary
1's are pulses and 0's are no-pulse, a certain "density" of 1's is
required to maintain clocking. The best way to insure you don't get
in trouble is to make sure each 8-bit sample has at least one 1. In
voice, this can be done by making silence be all 1's and rarely if
ever using the all-0's code. The Bell System devised a scheme called
B8ZS coding which substituted a particular illegal pulse pattern
that violated the "every pulse must alternate polarity" rule in a
specific way to mean "00000000" and recognized that at the receiving
T1 terminal. However, this has/had not been widely installed.
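The density rule itself is simple to state in code (an illustrative check, not from either spec):

```python
# Sketch of the ones-density problem: with pulses for 1's and silence
# for 0's, a long run of zeros starves the receiver's clock recovery.
# The simplest guard is to ensure every 8-bit byte contains at least
# one 1 -- which is exactly what the CPI's forced bit guarantees.

def violates_density(stream_bytes):
    """True if any byte is all zeros (could starve clock recovery)."""
    return any(b == 0 for b in stream_bytes)

print(violates_density([0x80, 0x01, 0xFF]))  # False: every byte has a 1
print(violates_density([0x80, 0x00, 0xFF]))  # True: all-zeros byte present
```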

A final issue, not specific to T1, is rate adaption, i.e. how do you
send 4800 bps data on a 56,000 or 64,000 bps facility.

OK, with that background, here's what CPI and DMI do to solve these
problems:

Robbed bit signalling:

CPI always assigns the high order (or is it low order?) bit as a 1.
Since this bit is not being used, it doesn't matter if it's stolen for
signalling. Unfortunately, this also limits the speed to 56,000 bps in the
initial version of CPI. A new version is being / has been proposed by
NTI which also offers a full 8-bit format for use on non-robbed-bit 
facilities. This format is in addition to the existing formats and
offers full 64,000 bps capability.
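The speed limit falls straight out of the frame arithmetic above:

```python
# Each channel contributes one 8-bit sample per 125-microsecond frame,
# i.e. 8000 frames per second. Forcing one bit to a constant 1 leaves
# seven usable bits per frame.

FRAMES_PER_SECOND = 8000
print(7 * FRAMES_PER_SECOND)   # 56000 bps with the forced bit
print(8 * FRAMES_PER_SECOND)   # 64000 bps in the full 8-bit format
```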

The DMI only operates on non-robbed-bit facilities and uses all 8 bits
to obtain full 64,000 bps capability. All signalling is done over the
24th channel and uses a very elaborate protocol built on LAPB, etc.
When people say that DMI is ISDN-compatible, what they really mean
is that it uses 23 64Kb data channels and 1 64Kb signalling channel,
like an ISDN primary rate interface does. This does not mean DMI and
ISDN are compatible, interoperable, or anything else.
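The channel arithmetic for a DMI/primary-rate style link is worth spelling out:

```python
# 23 data channels plus 1 signalling channel at 64 kb/s each, plus the
# 8 kb/s of framing overhead (one framing bit x 8000 frames/second),
# adds up to the 1.544 Mb/s T1 line rate.

CHANNEL_RATE = 64_000
FRAMING = 8_000
print(23 * CHANNEL_RATE)             # 1472000 bps of user data
print(24 * CHANNEL_RATE + FRAMING)   # 1544000 bps T1 line rate
```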

One's Density Requirements:

Since every CPI data byte has one of the bits permanently 1, to avoid
the robbed bit problem, the density problem is also solved. In the
NTI full-8-bit variation, I guess you just have to be careful.

In DMI, their full 8-bit format also has to be careful. There are some
other formats used (see Rate Adaption, below), and one of them is
quite clever. It uses HDLC and inverts the data so that there are
never more than 5 ZERO bits in a row, except for the six in a flag.
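The trick follows from standard HDLC zero-insertion, which guarantees no more than five consecutive 1's in stuffed data (only the flag, 01111110, has six). Inverting the stuffed stream turns that into "no more than five consecutive 0's," which satisfies the density requirement. A sketch, with illustrative function names:

```python
def hdlc_stuff(bits):
    """Insert a 0 after every run of five 1s (standard HDLC stuffing)."""
    out, run = [], 0
    for b in bits:
        out.append(b)
        run = run + 1 if b == 1 else 0
        if run == 5:
            out.append(0)
            run = 0
    return out

def longest_zero_run(bits):
    run = best = 0
    for b in bits:
        run = run + 1 if b == 0 else 0
        best = max(best, run)
    return best

stuffed = hdlc_stuff([1] * 16)          # worst case: all-ones payload
inverted = [1 - b for b in stuffed]     # the DMI inversion step
print(longest_zero_run(inverted))       # 5: never more than five zeros
```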

Rate Adaption:

Since the CPI uses only seven bits, it has the following formats:
For 56,000 bps, just pour the bits in, seven bits per frame. For
48,000 bps, just pour the bits in, six bits per frame. For async,
put data in four-bit nibbles. The coding of the formats permits one
to have "stuffer" bytes which are time fills, announce the next
character as being EIA signalling information for modem pools, etc.
An additional feature is that for speeds of 9600 and below the data
nibbles are sent high-nibble, low-nibble, high-nibble, low-nibble,
etc, a total of three times. The bits are then voted on two out of
three, a form of forward error correction. This sounds complex, but
the DEC interface uses a fairly simple 2901 state machine to do it,
as I recall.
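The two-out-of-three vote is just a bitwise majority function, which is why a small state machine suffices. A minimal sketch (names are mine):

```python
def vote(n1, n2, n3):
    """Bitwise 2-of-3 majority over three copies of a 4-bit nibble."""
    return (n1 & n2) | (n1 & n3) | (n2 & n3)

# One copy corrupted in transit; the vote recovers the original nibble.
print(vote(0b1010, 0b1010, 0b0011) == 0b1010)   # True
```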

The DMI also has several formats, and my recollection is that the
synchronous transmission formats are basically the same pour-it-in
fashion as the CPI. There are two async formats, one used for "weird"
speeds and one used for more conventional speeds. The weird speed one
samples the line at high speed and treats the samples as high speed
synchronous data. The standard speed format waits for twenty characters
or eight milliseconds, whichever comes first. It then takes the
characters accumulated and packages them as an HDLC message, inverts
the data, and sends them on their way.
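The flush condition for that standard-speed format is simple enough to show directly (an illustrative sketch; the real interface does the framing and inversion described above):

```python
# DMI standard-speed async format: buffer characters until twenty have
# accumulated or 8 ms have elapsed, then frame and send the batch.

MAX_CHARS = 20
MAX_WAIT_MS = 8

def should_flush(buffered_chars, elapsed_ms):
    return buffered_chars >= MAX_CHARS or elapsed_ms >= MAX_WAIT_MS

print(should_flush(20, 1))   # True: character count reached
print(should_flush(3, 8))    # True: timer expired
print(should_flush(3, 1))    # False: keep accumulating
```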

Summary:

CPI Pros: Fairly simple to implement, VLSI not required.
          Works with existing facilities and PBX interface boards.

DMI Pros: More flexible in meeting long range requirements.

Sorry for the long-windedness. Hope it's been some help.
Regards,
John