csg@pyramid.pyramid.com (Carl S. Gutekunst) (09/12/88)
>Higher-speed modems are *NOT* just transmitting the RS232 signal as tones;
>they are actually receiving the characters, packaging them up in odd ways
>(e.g. more than one bit per line transition), and reversing the process at
>the other end.

Um, this is rather oversimplified. Allow me to explain, and in doing so give a partial answer to the original question. There are several levels of things going on here, each of which imposes its own constraints.

First is the modulation scheme: how bits are encoded on the wire. All modems at 300 bits-per-second or slower use frequency shift keying (FSK), which just means that one tone means a mark (1) bit, and another tone means a space (0). Note that an FSK modem is *not* transmitting bits *per se*; it is sending out the current state of the RS-232 Transmit Data line as a tone. The line can change state whenever it wishes, and the modem will follow.

Modems at 1200 bps and above all use some kind of phase-shift encoding, where multiple bits of data are encoded into a single phase shift of a carrier tone. Bell 212 and V.22 (1200 bps) use phase encoding (PE), where a pair of bits is represented by one of four phase changes: 0, 90, 180, or 270 degrees. V.22bis (2400 bps) modems use quadrature amplitude modulation (QAM), in which two bits specify an x-y quadrant, and two more bits specify a shift of phase and amplitude within the quadrant; so four bits are encoded in a single state change. In V.29 (9600 bps), the first bit specifies amplitude, and the next three specify phase angle (0, 45, 90, 135, etc.); again, four bits are encoded into a single state change. V.32 (9600 bps full duplex) defines two schemes: QAM similar to V.22bis, and a five-bit variation of QAM called trellis coding. In trellis coding, four bits are still encoded into a single state change, but a "redundant" bit is also calculated, for 32 distinct QAM states. (Anybody know how trellis coding got that name?
I'd guess that it comes from the way that two of the data bits and the redundant bit are permuted based on the bits in the previous five-bit group. The bits climb all over themselves, like flowers climbing a trellis.)

In each case there exists a precise one-to-one relationship between *bits* on the digital side and state changes on the wire. This is very different from FSK, which is insensitive to bit boundaries. But note that the phase-encoding techniques remain insensitive to *character* framing. All characters are just concatenated strings of bits, and break is just a string of space bits.

These high-speed standards were all designed for synchronous communications, in which all the bits dance in lockstep to the beat of the modem's clock. In asynchronous (which is what we all use for dialup and UUCP), the data bits are framed according to the transmitter's internal clock. This clock will almost certainly not even be the same speed (much less the same phase) as the modem's clock. This creates a major dilemma: the modem *must* run at exactly its nominal bits-per-second speed, but it has no control over the speed of the data being sent to it.

So the V.22bis standard provides a lengthy set of rules for how asynchronous devices must behave. The critical elements of the V.22bis asynchronous specification are:

- The amount by which the asynchronous bit rate may deviate from the nominal speed of 2400 bps. If the transmitter is running a little fast, then the modem must occasionally discard stop bits. If the transmitter is running a little slow, then the modem must occasionally add stop bits.

- The precise specification of a break signal: from M to 2M+3 consecutive start (space) bits, where 'M' is the number of bits per character. In fact, if the modem receives a break shorter than 2M+3 bit times, it is required to extend it to the full time. Breaks longer than 2M+3 bits are passed through for their full duration.
And a break must always be followed by 2M stop (mark) bits, to allow the receiver to resynchronize.

So, the real reason why fast modems need to know how many bits-per-character are being used is so they know where the stop bits are. And breaks are defined so that the modems will know when the normal progression of character frames is disrupted. (By the way, V.22bis only allows character sizes of 8, 9, 10, and 11 bits. Those of us in the UNIX and PC worlds always use 10 bits: 1 start, 8 data, 1 stop.)

An added restriction arises in all *practical* implementations of V.22bis and other high-speed modems: they use microprocessors and UART chips. UART chips, being character-oriented devices, are very fussy about character framing. :-)

Finally, modern modems are often "smart." At the least, they accept command strings to do autodialing and set options. At the most, they have modem protocols like Microcom's MNP and Telebit's PEP that bundle up characters, strip off the start and stop bits, perform compression, and do a lot of other character-oriented shredding. Here there is actual interpretation of the data going on, and the modem's CPU needs to know a lot more about the data than it does to simply satisfy the encoding scheme.

None of which answers the original question. :-)

The problem is that the CCC is an internal modem. When dealing with an external modem, you talk to a serial interface on the PC. The serial interface has a UART chip on it (a Signetics 8251, I recall). And the UART has a control port with a bit in it that, when set by the CPU, drives the Transmit Data line into the space state. To send a break, the CPU sets this bit, spins for the appropriate number of CPU cycles, and then clears the bit. Voila, a break. For the CCC modem, then, the vendor has to supply something equivalent: a bit that you can assert to cause a break.
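The set-spin-clear sequence just described can be sketched in a few lines. This is only an illustration: the register object and the bit position here are invented, not the layout of any real UART's control port.

```python
import time

# Hypothetical control-register bit for illustration only; real chips
# (8250, 8251, ...) each define their own register layouts.
SEND_BREAK = 0x40  # when set, forces Transmit Data into the space state


class FakeUart:
    """A stand-in for a memory-mapped UART control port."""

    def __init__(self):
        self.control = 0x00

    def write_control(self, value):
        self.control = value

    def read_control(self):
        return self.control


def send_break(uart, bps=1200, bit_times=23):
    """Set the break bit, hold it for the break duration, clear it.

    bit_times=23 corresponds to 2M+3 with the usual M=10 bits/character.
    """
    uart.write_control(uart.read_control() | SEND_BREAK)   # TxD -> space
    time.sleep(bit_times / bps)                            # spin/delay
    uart.write_control(uart.read_control() & ~SEND_BREAK)  # TxD -> mark


uart = FakeUart()
send_break(uart)
assert uart.read_control() & SEND_BREAK == 0  # break bit cleared again
```

A real driver would busy-wait on CPU cycles or a hardware timer rather than sleeping, but the set/hold/clear shape is the same.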
If they are really clever, they'll make it a toggle, so that the modem itself will time the break rather than your having to use the PC's CPU or timer to do it. An ugly alternative is an escape sequence: you send a magic sequence of characters, and the modem sends a break. My Cermetek 199A modem does this, and it's useless; UUCP uses *all* the characters, so I have to disable the modem's escape character.

<csg>
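The V.22bis break rules above boil down to one small computation; here is a minimal sketch (the function name is mine, not from the standard):

```python
def break_length_sent(received_bit_times, m=10):
    """Length, in bit times, of the break a V.22bis modem re-sends.

    m is the number of bits per character; a legal break is M to 2M+3
    consecutive start (space) bits.  Breaks shorter than 2M+3 bit times
    are stretched to the full 2M+3; longer breaks pass through at their
    received length.
    """
    minimum = 2 * m + 3
    return max(received_bit_times, minimum)


# With the usual 10-bit characters, 2M+3 = 23 bit times:
assert break_length_sent(12) == 23   # short break extended
assert break_length_sent(40) == 40   # long break passed through
```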
csg@pyramid.pyramid.com (Carl S. Gutekunst) (09/12/88)
Someone's gonna flame me for it elsewise, so I suppose I should mention how a Telebit TrailBlazer works. (I suppose this is preaching to the choir, but if you've carried through this far I might as well finish the job.)

The TrailBlazer uses a proprietary modulation scheme called Dynamic Adaptive Multicarrier Quadrature Amplitude Modulation (DAMQAM). Rather than the single high-frequency carrier used by other modulation schemes, the TrailBlazer uses 511 different low-frequency carriers. Each is QAM modulated at a rate of 12 baud (that is, 12 state changes per second); so each carrier has a data rate of 48 bits per second. Each carrier can be slowed down to 8 or 4 baud (32 or 16 bps), or dropped entirely. On real phone lines, quite a few carriers are dropped, and a number are slowed down to 8 baud; so the theoretical speed of 24528 bits per second is reduced to a real maximum of 18031. (Why an odd number? I don't know.) Note that since DAMQAM modulation broadcasts across the entire bandwidth of the telephone line, it is half duplex -- only one end can be transmitting at a time.

On top of DAMQAM, Telebit uses a proprietary data packetizing scheme, the Packetized Ensemble Protocol (PEP) that we've all come to know and love. PEP incorporates error correction, and a family of dataflow prediction schemes that make the modem appear to be full-duplex. This latter process is called "Adaptive Duplex," a term to which both Telebit and Racal-Vadic claim trademarks (the latter for the RV 9600-VP modem). PEP and DAMQAM are trademarks of Telebit Corp, too. MNP is a trademark of Microcom. But you knew that, didn't you?

<csg>
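The TrailBlazer arithmetic above checks out; a quick sketch (constants taken from the article, not from any Telebit spec):

```python
BITS_PER_CHANGE = 4  # QAM here encodes four bits per state change
CARRIERS = 511


def carrier_bps(baud):
    """Data rate of one DAMQAM carrier at a given symbol rate."""
    return baud * BITS_PER_CHANGE


def aggregate_bps(carrier_bauds):
    """Total throughput across a set of per-carrier symbol rates."""
    return sum(carrier_bps(b) for b in carrier_bauds)


# One carrier at the full 12 baud carries 48 bps:
assert carrier_bps(12) == 48

# All 511 carriers at 12 baud gives the theoretical 24528 bps maximum:
assert aggregate_bps([12] * CARRIERS) == 24528

# Slowed carriers run at 8 or 4 baud (32 or 16 bps), or are dropped:
assert carrier_bps(8) == 32 and carrier_bps(4) == 16
```

The real-line maximum of 18031 bps doesn't fall out of any obvious combination of these rates, which presumably is the author's point about the odd number.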