[comp.dcom.modems] Break definition

jantypas@ucrmath.UUCP (John Antypas) (09/07/88)

I purchased a California Communications Corp. 2400 baud internal modem.
Overall, it works just fine under Unix, but I have yet to be able to
send a "break tone".  The problem appears to be that CCC didn't know how
to handle a break.  They claim the modem has a literal serial port
on it so there should be no problems but...

CCC is willing to correct the problem but they need to know how to handle
the break.  After all, there is no such thing as a break character.  What
must they do to send a break and what do they do when a "break" is received?
How do external modems do this?

Many thanks.

mhw@wittsend.LBP.HARRIS.COM (Michael H. Warfield) (09/08/88)

In article <402@ucrmath.UUCP> jantypas@ucrmath.UUCP (John Antypas) writes:
>I purchased a California Communications Corp. 2400 baud internal modem.
>Overall, it works just fine under Unix, but I have yet to be able to
>send a "break tone".  The problem appears to be that CCC didn't know how
>to handle a break.  They claim the modem has a literal serial port
>on it so there should be no problems but...
>
>CCC is willing to correct the problem but they need to know how to handle
>the break.  After all, there is no such thing as a break character.  What
>must they do to send a break and what do they do when a "break" is received?
>How do external modems do this?
>

        A break is sent on an async line by placing the line in the
"low" or "spacing" state (level, tone, what-ever) for two or more
character times.  To a receiver this would be seen as a character of
all zeros, with a framing error (no stop bit) and lasting at least 20
bit times (16 data bits plus 2 start bits plus 2 stop bits total
across the 2 character times).  Note that that is AT LEAST.  Some
UARTs support long and short breaks, where short breaks are of the
same order of magnitude as two character times but a long break may
be 1/4 second or longer!  The modem per se should have little to do
with a break (it simply holds its transmission to send a
looooonnnnnnngggggg series of zero bits) but the UART or SIO has
everything to do with it.  Some modems will, however, disconnect and
drop carrier in the event of a "long break".  Some, but not all.

Michael H. Warfield  (The Mad Wizard)	| gatech.edu!galbp!wittsend!mhw
  (404)  270-2170 / 270-2098		| mhw@wittsend.LBP.HARRIS.COM
An optimist believes we live in the best of all possible worlds.
A pessimist is sure of it!

jpd@usl-pc.usl.edu (DugalJP) (09/08/88)

In article <402@ucrmath.UUCP> jantypas@ucrmath.UUCP (John Antypas) writes:
>I purchased a California Communications Corp. 2400 baud internal modem.
>Overall, it works just fine under Unix, but I have yet to be able to
>send a "break tone".  The problem appears to be that CCC didn't know how
>to handle a break.  They claim the modem has a litteral serial port
>on it so there should be no problems but...
>
>CCC is willing to correct the problem but they need to know how to handle
>the break.  After all, there is no such thing as a break character.  What

A break can be detected by many UARTs as a NULL received with a framing
error.  Think of a break as a start bit that persists for much longer than
a character time, perhaps 250 ms.  Many UARTs can be programmed to send a
break with a special order, but if not, changing to the slowest baud and
sending a null might work.  This of course is a real kludge and has nasty
side effects if the UART can't support different send and receive bauds.
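A back-of-the-envelope check of the kludge (my own arithmetic, not from any spec): a NULL keeps the line in the space state for the start bit plus all eight data bits, i.e. 9 bit times at the sender's rate, which spans several character times at a faster receiver.

```python
# How long does a NULL sent at a slow baud look like space to a faster
# receiver?  9 bit times of space = 1 start bit + 8 zero data bits.

def space_duration_ms(send_bps, space_bits=9):
    return 1000.0 * space_bits / send_bps

def char_times_seen(send_bps, recv_bps, bits_per_char=10):
    # Receiver character times spanned by the space condition.
    return (9.0 / send_bps) / (bits_per_char / recv_bps)

# A NULL sent at 300 bps while the receiver runs at 2400 bps:
assert round(space_duration_ms(300)) == 30   # 30 ms of solid space
assert char_times_seen(300, 2400) > 2        # > 2 char times: a break
```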

Good luck getting your modem fixed ... I'm doubtful they'll fix it.

You may be able to add a break switch to your modem by modifying the
input to the RS232 line driver chip.

-- 
-- James Dugal,	N5KNX		USENET: ...!{dalsqnt,killer}!usl!jpd
Associate Director		Internet: jpd@usl.edu
Computing Center		US Mail: PO Box 42770  Lafayette, LA  70504
University of Southwestern LA.	Tel. 318-231-6417	U.S.A.

rick@pcrat.UUCP (Rick Richardson) (09/10/88)

In article <6095@galbp.LBP.HARRIS.COM> mhw@wittsend.UUCP (Michael H. Warfield) writes:
>        A break is sent on an async line by placing the line in the
>"low" or "spacing" state (level, tone, what-ever) for two or more
>character times.  To a receiver this would be seen as a character of
>all zeros, with a framing error (no stop bit) and lasting at least 20
>bit times (16 data bits plus 2 start bits plus 2 stop bits total
>across the 2 character times).  Note that that is AT LEAST.  Some
>UARTs support long and short breaks

Where did you find the citation that a break is at least 20 bit times?
The only citation I could find (in some obscure CCITT spec) defined
break as the space condition for at least 130 milliseconds.  There
was no formula based on the bps that the interface was operating at.

I'm not saying you are wrong, just that a while back I tried to
find this information and wasn't really happy with finding just
the one definition.  I'm also not saying that all devices do this.
There are definitely devices that work the way you describe.

I believe that sending break as 130 milliseconds or longer of space (0)
is a better definition, though.  No matter what the receiver
speed is set to (if greater than 110 bps), the receiver is
guaranteed to see a break.
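The guarantee in that definition is easy to check (my arithmetic, not a citation): 130 ms always overruns at least one full character frame at 110 bps and above, so the receiver must see the missing stop bit.

```python
# How many 10-bit character frames does a 130 ms space condition cover
# at a given receiver speed?

def bit_time_ms(bps):
    return 1000.0 / bps

def frames_covered(bps, break_ms=130.0, bits_per_char=10):
    return break_ms / (bits_per_char * bit_time_ms(bps))

# At 110 bps a 10-bit frame lasts ~91 ms, so 130 ms still overruns the
# stop-bit position; at 2400 bps it spans more than 30 frames.
assert frames_covered(110) > 1.0
assert frames_covered(2400) > 30
```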

I think the best bet is to try to detect short breaks if you
can, but always generate at least the full 130 milliseconds,
if not longer.  Then again, a modem, or other communications (DCE)
device, should attempt to reconstruct the signal at the far end
as near as possible to the way it came in at the near end.
Garbage in...garbage out!!!!  Short breaks in, short breaks out.
-- 
		Rick Richardson, PC Research, Inc.
		rick%pcrat.uucp@uunet.uu.net (INTERNET)
		   uunet!pcrat!rick (UUCP, Personal Mail)
..!pcrat!jetroff (JetRoff Info)		..!pcrat!dry2 (Dhrystone Submissions)

henry@utzoo.uucp (Henry Spencer) (09/11/88)

In article <6095@galbp.LBP.HARRIS.COM> mhw@wittsend.UUCP (Michael H. Warfield) writes:
>>CCC is willing to correct the problem but they need to know how to handle
>>the break.  After all, there is no such thing as a break character.  What
>>must they do to send a break and what do they do when a "break" is received?
>>How do external modems do this?
>The modem per se should have little to do with a break (he simply
>holds his transmission to send a  looooonnnnnnngggggg  series  of
>zero  bits)  but  the  UART  or SIO has everything to do with it.

At 300 baud, yes.  At 2400, not so.  Higher-speed modems are *NOT* just
transmitting the RS232 signal as tones; they are actually receiving the
characters, packaging them up in odd ways (e.g. more than one bit per line
transition), and reversing the process at the other end.  This means that
the RS232 signal has to make sense (that's why fast modems have settings
for things like number of bits per character).  It also means that
break requires special attention, since it is *not* a character!  Tell
CCC to look up the V.22bis spec, which describes the signals on the wire
for a 2400-baud modem; it will tell them how to send a break over the
wire, and with luck will also tell them exactly when to send one and what
to do when they receive one.

Don't be too optimistic.  If CCC does not know this stuff already, I
wouldn't buy a modem from them if it cost $4.95.  They are incompetent.
They are not the only ones; this problem has been seen before.
-- 
NASA is into artificial        |     Henry Spencer at U of Toronto Zoology
stupidity.  - Jerry Pournelle  | uunet!attcan!utzoo!henry henry@zoo.toronto.edu

mhyman@cup.portal.com (09/12/88)

In message <569@pcrat.UUCP> rick@pcrat.UUCP (Rick Richardson) asks:
> Where did you find the citation that a break is at least 20 bit times?
> The only citation I could find (in some obscure CCITT spec) defined
> break as the space condition for at least 130 milliseconds.  There
> was no formula based on the bps that the interface was operating at.

First some background:

  The bit (not baud) rate between the DTE side and the TELCO side of a
modem may differ.  Using V.22bis (standard 2400 bps modem) as an example:
the spec calls for a TELCO bit rate of 2400 b/s +/- 0.01%.  The DTE side
is allowed the range of 2400 b/s +1.0%, -2.5% [+2.3%, -2.5% in modems with
``extended signalling rate range (optional)'' implemented].

  To handle the possible overspeed on input a modem is allowed to delete
stop bits now and again (no more than one stop bit deleted in every 8
characters sent [4 characters with extended option]).  When the receiving
modem receives a character with a missing stop bit from the TELCO it adds
a stop bit to the DTE.  It makes time for the added stop bit by making it
and the next 7 stop bits 12.5% shorter than a standard bit.  [25% shorter
for the next 4 stop bits with the extended option].
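The stop-bit budget above checks out arithmetically (my own sketch, assuming 10-bit characters): deleting one stop bit per 8 characters recovers 1 bit in 80, a 1.25% margin, which covers the +1.0% DTE overspeed allowance.

```python
# Speed margin recovered by deleting at most one stop bit every N
# ten-bit characters.

def deletion_margin(chars_per_deletion, bits_per_char=10):
    return 1.0 / (chars_per_deletion * bits_per_char)

assert deletion_margin(8) == 0.0125   # basic option: 1.25% >= +1.0%
assert deletion_margin(4) == 0.025    # extended option: 2.5% >= +2.3%
```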

  What does this have to do with a break?  A break must be ``2M + 3'' bits
long at the receiver.  This requirement allows the receiver to tell the
difference between a break and consecutive NULL characters, the first with
a deleted stop bit.  Section 4.1.3 of V.22bis says:
  
        If the converter detects M to 2M + 3 bits all of ``start''
        polarity, where M is the number of bits per character in the
        selected format, the converter shall transmit 2M + 3 bits of
        ``start'' polarity.

M is typically 10 in async mode and 8 in sync mode.
 

--Marc
 
Marco S. Hyman          ...!sun!portal!cup.portal.com!mhyman
			mhyman@cup.portal.com
 

csg@pyramid.pyramid.com (Carl S. Gutekunst) (09/12/88)

>Higher-speed modems are *NOT* just transmitting the RS232 signal as tones;
>they are actually receiving the characters, packaging them up in odd ways
>(e.g. more than one bit per line transition), and reversing the process at
>the other end.

Um, this is rather oversimplified. Allow me to explain, and in doing so give a
partial answer to the original question. There are several levels of things
going on here, each of which imposes its own constraints.

First is the modulation scheme: how bits are encoded on the wire. All modems
at 300 bits-per-second or slower use frequency shift keying (FSK), which just
means that one tone means a mark (1) bit, and another tone means a space (0).
Note that an FSK modem is *not* transmitting bits *per se*; it is sending out
the current state of the RS-232 Transmit Data line as a tone. The line can
change state whenever it wishes, and the modem will follow.

Modems at 1200 bps and above all use some kind of phase-shift encoding, where
multiple bits of data are encoded into a single phase shift of a carrier tone.
Bell 212 and V.22 (1200 bps) use phase encoding (PE), where a pair of bits is
represented by one of four phase changes: 0, 90, 180, or 270 degrees. V.22bis
(2400 bps) modems use quadrature amplitude modulation (QAM), in which two bits
specify an x-y quadrant, and two more bits specify a shift of phase and
amplitude within the quadrant; so four bits are encoded in a single state
change. In V.29 (9600 bps), the first bit specifies amplitude, and the next
three specify phase angle (0, 45, 90, 135, etc.); again, 4 bits are encoded
into a single state change.
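The schemes above all fit one formula: data rate = symbol (baud) rate times bits per state change. A quick check (the symbol rates below are my addition, the standard figures for these modulations, not stated in the post):

```python
# data rate (bps) = symbol rate (baud) * bits encoded per state change

def data_rate(baud, bits_per_symbol):
    return baud * bits_per_symbol

assert data_rate(600, 2) == 1200    # Bell 212 / V.22: PE, 2 bits/symbol
assert data_rate(600, 4) == 2400    # V.22bis: QAM, 4 bits/symbol
assert data_rate(2400, 4) == 9600   # V.29: 4 bits/symbol at 2400 baud
```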

V.32 (9600 bps full duplex) defines two schemes: QAM similar to V.22bis, and a
five-bit variation of QAM called trellis coding. In trellis coding, four bits
are still encoded into a single state change, but a "redundant" bit is also
calculated, for 32 distinct QAM states. (Anybody know how trellis coding got
that name? I'd guess that it comes from the way that two of the data bits and
the redundant bit are permuted based on the bits in the previous five-bit
group. The bits climb all over themselves, like flowers climbing a trellis.)

In each case there exists a precise one-to-one relationship between *bits*
on the digital side and state changes on the wire. This is very different from
FSK, which is insensitive to bit boundaries. But note that the phase-encoding
techniques remain insensitive to *character* framing. All characters are just
concatenated strings of bits, and break is just a string of space bits.

These high-speed standards were all designed for synchronous communications,
in which all the bits dance in lockstep to the beat of the modem's clock. In
asynchronous (which is what we all use for dialup and UUCP), the data bits are
framed according to the transmitter's internal clock. This clock will almost
certainly not even be the same speed (much less the same phase) as the modem's
clock. This creates a major dilemma: the modem *must* run at exactly its
nominal bits-per-second speed, but it has no control over the speed of the
data being sent to it.

So the V.22bis standard provides a lengthy set of rules for how asynchronous
devices must behave. The critical elements of the V.22bis asynchronous
specification are:

- The amount by which the asynchronous bit rate may deviate from the nominal
  speed of 2400 bps. If the transmitter is running a little faster, then the
  modem must occasionally discard stop bits. If the transmitter is running
  slow, then the modem has to occasionally add stop bits.

- The precise specification of a break signal: from M to 2M+3 consecutive
  start (space) bits, where 'M' is the number of bits per character. In fact,
  if the modem receives a break shorter than 2M+3 bit times, it is required to
  extend it to the full time. Breaks longer than 2M+3 bits are passed through
  for their full duration. And a break must always be followed by 2M stop
  (mark) bits, to allow the receiver to resynchronize.
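The break rule in the second point can be sketched as a small function (the name and shape are mine, not the spec's; M=10 assumes the usual 10-bit character):

```python
# V.22bis-style break handling: runs shorter than M start-polarity bits
# are not a break; runs of M..2M+3 bits are extended to exactly 2M+3;
# longer breaks pass through at full length.

def break_bits_to_send(observed, M=10):
    if observed < M:
        return None                  # too short to count as a break
    return max(observed, 2 * M + 3)

assert break_bits_to_send(9) is None    # under M bits: ignored
assert break_bits_to_send(12) == 23     # short break extended to 2M+3
assert break_bits_to_send(50) == 50     # long break passed through
```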

So, the real reason why fast modems need to know how many bits-per-character
are being used is so they know where the stop bits are. And breaks are defined
so that the modems will know when the normal progression of character frames
is disrupted. (By the way, V.22bis only allows character sizes of 8, 9, 10,
and 11 bits. Those of us in the UNIX and PC worlds always use 10 bits: 1
start; 8 data; 1 stop.)

An added restriction arises in all *practical* implementations of V.22bis and
other high-speed modems: they use microprocessors and UART chips. UART chips,
being character oriented devices, are very fussy about character framing. :-)

Finally, modern modems are often "smart." At the least, they accept command
strings to do autodialing and set options. At the most, they have modem proto-
cols like Microcom's MNP and Telebit's PEP that bundle up characters, strip
off the start and stop bits, perform compression, and do a lot of other char-
acter-oriented shredding. Here there is actual interpretation of the data
going on, and the modem's CPU needs to know a lot more about the data than it
does to simply satisfy the encoding scheme.

None of which answers the original question. :-)

The problem is that the CCC is an internal modem. When dealing with an exter-
nal modem, you talk to a serial interface on the PC. The serial interface has
a UART chip on it (a Signetics 8251, I recall). And the UART has a control
port with a bit in it that, when set by the CPU, drives the Transmit Data line
into the space state. To send a break, the CPU sets this bit, spins for the
appropriate number of CPU cycles, and then clears the bit. Voila, a break. 
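The set-the-bit, wait, clear-the-bit sequence can be sketched in Python against a stand-in register object (the register layout and bit value here are invented for illustration; real UARTs differ, and real code pokes the control port directly):

```python
import time

SEND_BREAK = 0x40   # hypothetical "force TxD to space" control bit

class FakeUart:
    """Stand-in for a UART control port, so the sequence is visible."""
    def __init__(self):
        self.control = 0
        self.log = []
    def write_control(self, value):
        self.control = value
        self.log.append(value)

def send_break(uart, duration_s=0.001):
    uart.write_control(uart.control | SEND_BREAK)   # TxD held in space
    time.sleep(duration_s)                          # hold >= break time
    uart.write_control(uart.control & ~SEND_BREAK)  # back to mark

u = FakeUart()
send_break(u)
assert u.log == [SEND_BREAK, 0]   # set, then cleared: one break
```

(Under Unix the same effect is exposed through the driver, so applications never touch the register themselves.)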

For the CCC modem, the vendor has to supply something equivalent: a bit that
you can assert to cause a break. If they are really clever, they'll make it a
toggle, so that the modem itself will time the break rather than you having to
use the PC's CPU or timer to do it. An ugly alternative is an escape sequence:
you send a magic sequence of characters, and the modem sends a break. My
Cermetek 199A modem does this, and it's useless; UUCP uses *all* the
characters, so I have to disable the modem's escape character.

<csg>

csg@pyramid.pyramid.com (Carl S. Gutekunst) (09/12/88)

Someone's gonna flame me for it elsewise, so I suppose I should mention how a
Telebit TrailBlazer works. (I suppose this is preaching to the choir, but if
you've carried through this far I might as well finish the job.)

The TrailBlazer uses a proprietary modulation scheme called Dynamic Adaptive
Multicarrier Quadrature Amplitude Modulation (DAMQAM). Rather than the single
high-frequency carrier used by other modulation schemes, the TrailBlazer uses
511 different low-frequency carriers. Each is QAM modulated at a rate of 12
baud (that is, 12 state changes per second); so each carrier has a data rate
of 48 bits per second. Each carrier can be slowed down to 8 or 4 baud (32 or
16 bps), or dropped entirely. On real phone lines, quite a few carriers are
dropped, and a number are slowed down to 8 baud; so the theoretical speed of
24528 bits per second is reduced to a real maximum of 18031. (Why an odd num-
ber? I don't know.) Note that since DAMQAM modulation broadcasts across the
entire bandwidth of the telephone line, it is half duplex -- only one end can
be transmitting at a time. 
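The throughput figures above follow from one multiplication (the 4 bits per state change is my inference from 48 bps at 12 baud; the rest is from the text):

```python
# TrailBlazer DAMQAM aggregate rate: carriers * baud * bits per symbol.

def trailblazer_rate(carriers=511, baud=12, bits_per_symbol=4):
    return carriers * baud * bits_per_symbol

assert trailblazer_rate() == 24528               # theoretical maximum
assert trailblazer_rate(baud=8) == 16352         # all carriers at 8 baud
```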

On top of DAMQAM, Telebit uses a proprietary data packetizing scheme, the
Packetized Ensemble Protocol (PEP) that we've all come to know and love. PEP
incorporates error correction, and a family of dataflow prediction schemes
that make the modem appear to be full-duplex. This latter process is called
"Adaptive Duplex," a term to which both Telebit and Racal-Vadic claim trade-
marks (the latter for the RV 9600-VP modem).

PEP and DAMQAM are trademarks of Telebit Corp, too. MNP is a trademark of
Microcom. But you knew that, didn't you?

<csg>

aad@stpstn.UUCP (Anthony A. Datri) (09/13/88)

In article <39157@pyramid.pyramid.com> csg@pyramid.pyramid.com (Carl S. Gutekunst) writes:

>means that one tone means a mark (1) bit, and another tone means a space (0).
>Note that an FSK modem is *not* transmitting bits *per se*; it is sending out

So *that's* why modems are rated at 0-300 baud, not just 300 baud.  I could
*never* understand that before.

-- 
@disclaimer(Any concepts or opinions above are entirely mine, not those of my
	    employer, my GIGI, or my 11/34)
beak is								  beak is not
Anthony A. Datri,SysAdmin,StepstoneCorporation,stpstn!aad

mhw@wittsend.LBP.HARRIS.COM (Michael H. Warfield) (09/13/88)

In article <569@pcrat.UUCP> rick@pcrat.UUCP (Rick Richardson) writes:
>In article <6095@galbp.LBP.HARRIS.COM> mhw@wittsend.UUCP (Michael H. Warfield) writes:
>>        A break is sent on an async line by placing the line in the
>>"low" or "spacing" state (level, tone, what-ever) for two or more
>>character times.  To a receiver this would be seen as a character of
>>all zeros, with a framing error (no stop bit) and lasting at least 20
>>bit times (16 data bits plus 2 start bits plus 2 stop bits total
>>across the 2 character times).  Note that that is AT LEAST.  Some
>>UARTs support long and short breaks
>
>Where did you find the citation that a break is at least 20 bit times?
>The only citation I could find (in some obscure CCITT spec) defined
>break as the space condition for at least 130 milliseconds.  There
>was no formula based on the bps that the interface was operating at.
>
        Point taken.  The information I was quoting was based on
several USART chip specs (I know, I know, bad move) and not on any
particular specification.  At this point I have seen the references
in the chip specs, as well as other feedback from the net specifying
anything from 100 msec on up (250 msec seems a common opinion).  I
would also like to see some hard-and-fast specs on this as well.
None of my comms references mention specific timings and I have
little to go on outside of the specs I have on hand (yeah, I guess
when it comes to manufacturers' specs, there's a sucker born every
minute ;-) ).

Michael H. Warfield  (The Mad Wizard)	| gatech.edu!galbp!wittsend!mhw
  (404)  270-2170 / 270-2098		| mhw@wittsend.LBP.HARRIS.COM
An optimist believes we live in the best of all possible worlds.
A pessimist is sure of it!

brad@looking.UUCP (Brad Templeton) (09/15/88)

But if DAMQAM is really a broadband scheme with 511 carriers using up
the entire band, why do all 511 of the carriers have to be in the
same direction?  In interactive mode at least, I wish they always had
at least one carrier in each direction, so that no turnaround would be
required.

In fact, even if having a reverse direction carrier meant you couldn't
use several of the nearby bands in the forward direction, it would still
be worth it in interactive mode.  Ideally it would be nice to see 2
permanent carriers in each direction, and 507 switchable carriers.
(You may need more than 2 to achieve the information rate required with
error correction, but you get the gist. 100 bps would be adequate for
typing and ACK/NAKs of other protocols.)
-- 
Brad Templeton, Looking Glass Software Ltd.  --  Waterloo, Ontario 519/884-7473

dg@lakart.UUCP (David Goodenough) (09/16/88)

From article <402@ucrmath.UUCP>, by jantypas@ucrmath.UUCP (John Antypas):
> I purchased a California Communications Corp. 2400 baud internal modem.
> Overall, it works just fine under Unix, but I have yet to be able to
> send a "break tone".  The problem appears to be that CCC didn't know how
> to handle a break.  They claim the modem has a literal serial port
> on it so there should be no problems but...
> 
> CCC is willing to correct the problem but they need to know how to handle
> the break.  After all, there is no such thing as a break character.  What
> must they do to send a break and what do they do when a "break" is received?
> How do external modems do this?
> 
> Many thanks.

I read this with a great deal of concern. I run a 1200 BPS modem, which
is (in effect) a bit transmitter: I throw a stream of bits into it,
and they come out the far end. I know this to be the case, because on
one occasion the getty here at lakart got stuck at 300 BPS, but with
the modems swapping 1200 BPS carrier. So I just dropped my terminal speed
to 300, logged in, did a stty 1200, readjusted my local speed, and
all was well.

Reading between the lines of the above posting it would seem that 2400
BPS modems are character transmitters, which means they are going to
break a lot of things for me. For example, I call in 7e1 to talk text
with lakart, but if I'm going to do an Xmodem transfer, I have to set
my end to 8n2 to get it to go. Now if the modem is hooked on the notion
of 7e1, I'm in real trouble. How are other people getting around this,
or is it not a real problem and I don't have a clue what I'm talking
about?
-- 
	dg@lakart.UUCP - David Goodenough		+---+
							| +-+-+
	....... !harvard!cca!lakart!dg			+-+-+ |
						  	  +---+

smb@ulysses.homer.nj.att.com (Steven Bellovin[jsw]) (09/16/88)

My thanks to csg@pyramid.pyramid.com (Carl S. Gutekunst) for an
excellent article.  One point I should add:  many (most? all?) modems
use a ``long break'' -- on the order of 3 seconds -- to indicate
disconnect.  That is, modems can be strapped so that when they wish to
hang up (for example, if the host drops DTR), they will first transmit
a 3-second break to the remote modem.  Similarly, when they receive
such a break signal, they will hang up, too.  I suspect (but do not
know for certain) that this usage dates back to half-duplex modems,
which of necessity do not transmit a continuous carrier.  Rather, they
only generate carrier when they wish to transmit.  Thus, the remote end
cannot tell when the local end has hung up -- no carrier may simply
mean that the modem doesn't have anything to say.

		--Steve Bellovin

grr@cbmvax.UUCP (George Robbins) (09/17/88)

In article <10607@ulysses.homer.nj.att.com> smb@ulysses.homer.nj.att.com (Steven Bellovin[jsw]) writes:
> My thanks to csg@pyramid.pyramid.com (Carl S. Gutekunst) for an
> excellent article.  One point I should add:  many (most? all?) modems
> use a ``long break'' -- on the order of 3 seconds -- to indicate
> disconnect.

It might be more accurate to state that most "traditional" modems have
this as one of their AT&T modem compatible strapping options.  It's
normally called "long space disconnect" though...  Most modems are
configured to disconnect after n seconds without carrier, making the
option somewhat passe.
-- 
George Robbins - now working for,	uucp: {uunet|ihnp4|rutgers}!cbmvax!grr
but no way officially representing	arpa: cbmvax!grr@uunet.uu.net
Commodore, Engineering Department	fone: 215-431-9255 (only by moonlite)

grr@cbmvax.UUCP (George Robbins) (09/17/88)

In article <2035@looking.UUCP> brad@looking.UUCP (Brad Templeton) writes:
> But if DAMQAM is really a broadband scheme with 511 carriers using up
> the entire band, why do all 511 of the carriers have to be in the
> same direction?  In interactive mode at least, I wish they always had
> at least one carrier in each direction, so that no turnaround would be
> required.
> 
> In fact, even if having a reverse direction carrier meant you couldn't
> use several of the nearby bands in the forward direction, it would still
> be worth it in interactive mode.  Ideally it would be nice to see 2
> permanent carriers in each direction, and 507 switchable carriers.
> (You may need more than 2 to achieve the information rate required with
> error correction, but you get the gist. 100 bps would be adequate for
> typing and ACK/NAKs of other protocols.)

The disparity between transmitted and received signal levels means that
you need either fairly sophisticated processing or a wide guard band
to prevent interference.  Obviously it could be done, and might be a
real win.

Note that 100 bps is probably ok for typing, but some protocols, for
example uucp "g", require a backwards channel of somewhere between
300 and 600 bps to keep up with 9600 bps forward.  'Tis why the
USR Courier HST isn't the magic modem for uucp...

> -- 
> Brad Templeton, Looking Glass Software Ltd.  --  Waterloo, Ontario 519/884-7473


-- 
George Robbins - now working for,	uucp: {uunet|ihnp4|rutgers}!cbmvax!grr
but no way officially representing	arpa: cbmvax!grr@uunet.uu.net
Commodore, Engineering Department	fone: 215-431-9255 (only by moonlite)

mhyman@cup.portal.com (09/18/88)

In article <248@lakart.UUCP> dg@lakart.UUCP (David Goodenough) asks:
> break a lot of things for me. For example I call in 7e1 to talk text
> with lakart, but if I'm going to do an Xmodem transfer, I have to set
> my end to 8n2 to get it to go. Now if the modem is hooked on the notion
> of 7e1, I'm in real trouble. How are other people getting around this, ...

At 300 bps the modem encodes 1 bit per baud; when the signal at the RS-232
port changes the modem sends a different frequency.  That's why modems
are rated 0-300 bps.  At higher speeds multiple bits are encoded
per baud.  The modem needs to know the character size (but not the parity)
so it may drop a stop bit when needed.  The receiving modem needs to know
the character size so it can tell when it has to insert a stop bit.

Most modems these days assume a character size of 10 bits.  If you look
at a Smartmodem 2400 manual (page A25) you'll see that all supported
Asynchronous data formats add up to 10 bit characters.
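The 10-bit constraint is easy to verify by enumeration (the format list below is my own illustration, not copied from the Smartmodem manual):

```python
# Every supported async format must satisfy:
#   start + data + parity + stop = 10 bits.

def frame_bits(data, parity_bits, stop, start=1):
    return start + data + parity_bits + stop

formats = [
    ("7E1", frame_bits(7, 1, 1)),   # 7 data, even parity, 1 stop
    ("7O1", frame_bits(7, 1, 1)),   # 7 data, odd parity, 1 stop
    ("8N1", frame_bits(8, 0, 1)),   # 8 data, no parity, 1 stop
    ("7N2", frame_bits(7, 0, 2)),   # 7 data, no parity, 2 stop
]
assert all(total == 10 for _, total in formats)
```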

--Marc

Marco S. Hyman
...!sun!portal!cup.portal.com!mhyman
mhyman@cup.portal.com