[net.bugs.v7] Serial driver problem query

fair@dual.UUCP (Erik E. Fair) (01/28/84)

We're having troubles with our serial driver & System III.

The DUAL SIO4/DMA looks somewhat like a DEC DH-11, in that it is DMA on
output, and has a 256 byte FIFO for input (shared between the four
ports on the board).

The problem manifested itself as follows: a 1200 baud port with a modem
on it would lock up randomly and refuse to function until the next time
the system was reset. This only happened on our systems running
System III UNIX. Version 7 did, and continues to, work just fine.

After much digging through the tty driver, and lots of hand waving, I
found that the serial driver was not waiting for the USARTs to get
themselves into an idle or ready state before setting the baud rate &
character width, and other things of import.
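
In rough terms, the fix amounts to a guard like this before reprogramming
the chip (the names here are invented for illustration; they are not the
real SIO4 registers):

	/*
	 * Invented names, for illustration only: wait for the USART's
	 * transmitter to go idle before touching the mode registers.
	 */
	while ((sio->status & TX_IDLE) == 0)
		;				/* busy-wait on the chip */
	sio->mode = speed | csize;		/* now safe to reprogram */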

It seems that where Version 7 always did either an output flush or a
wait-until-output-has-drained before even asking the serial driver to
perform functions like that, System III just goes ahead & does it. Thus
does the USART get poked with information when it's busy outputting a
character, and it gets upset and refuses to deal with us anymore.  At
1200 baud, the `window of vulnerability' as Reagan would put it, was
wider than at 9600, so while it happened to our modems all the time,
we never saw the problem with our hardwired ports.
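
(Figuring ten bits on the wire per character, a character takes about
8.3 milliseconds at 1200 baud against about 1 millisecond at 9600, so
the transmitter is busy - and thus vulnerable - roughly eight times
longer per character at 1200.)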

Now, after that background, the query: Has anyone out there seen things
like this before with System III or System V, and is it really supposed
to work that way? No waiting, just change the baud rate in the middle of
a transmission? It forces us to put in some ugly code to wait for the
next xmit interrupt from that port, and it's messy.
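
Metaphorically, the ugly code stashes the new parameters and lets the
transmit interrupt routine apply them once the chip has gone quiet
(invented names again):

	/* in the set-parameters path: */
	if (BUSY(unit)) {
		pending[unit] = newmode;	/* defer the change */
		return;
	}
	setmode(unit, newmode);

	/* and in the transmit interrupt routine: */
	if (pending[unit]) {
		setmode(unit, pending[unit]);	/* chip is idle now */
		pending[unit] = 0;
	}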

The other question that comes to mind is why did USG choose to eliminate
the `ttyrend' interface for canonicalization of a buffer full of input?
As you might guess, that would be perfect for our serial board. As it is,
we're forced to do something like this:

	while ((c = getc(FIFO)) != NO_MORE)		/* for each character, */
		(*linesw[tp->t_line].l_input)(tp, c);	/* one line discipline call */

when something more like this

	while ((c = getc(FIFO)) != NO_MORE)
		buf[n++] = c;				/* drain the FIFO first, */
	(*linesw[tp->t_line].l_rend)(tp, buf, n);	/* then one call for the lot */

would be desirable. These are, of course, metaphorical code fragments.

I would appreciate any & all comments.

	Erik E. Fair

	dual!fair@BERKELEY.ARPA
	{ucbvax,ihnp4,cbosgd,amd70,zehntel,fortune,unisoft,onyx,its}!dual!fair
	Dual Systems Corporation, Berkeley, California

guy@rlgvax.UUCP (Guy Harris) (01/28/84)

> It seems that where Version 7 always did either an output flush or a
> wait-until-output-has-drained before even asking the serial driver to
> perform functions like that, System III just goes ahead & does it. Thus
> does the USART get poked with information when it's busy outputting a
> character, and it gets upset and refuses to deal with us anymore.  At
> 1200 baud, the `window of vulnerability' as Reagan would put it, was
> wider than at 9600, so while it happened to our modems all the time,
> we never saw the problem with our hardwired ports.

From the System III manual page TTY(4):

	TCSETAW   Wait for the output to drain before setting the new
	          parameters.  *This form should be used when changing
	          parameters that will affect output.*

V7 had TIOCSETP and TIOCSETN; the former waited for output to finish, and
the latter didn't.  The difference is that the V7 manual mentions the one
that waits first, while the S3 manual mentions the one that doesn't wait
first, so one may pick the "right" one by default in V7 and the "wrong" one
by default in S3; the functionality is the same.  Note that boards like the
DZ11 and DH11 have two characters queued up even after they've gotten the
last one from the host, so they have to deal properly with speed changes of
this sort in the hardware.
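
Concretely, the safe way to change speed from a user program under S3 is
the "w" form (error checking omitted; this is the stock termio interface,
nothing exotic):

	#include <termio.h>

	struct termio tio;

	ioctl(fd, TCGETA, &tio);			/* fetch current modes */
	tio.c_cflag = (tio.c_cflag & ~CBAUD) | B1200;	/* pick the new speed */
	ioctl(fd, TCSETAW, &tio);			/* drain output, then set */

TIOCSETP does the corresponding drain-then-set on a V7 "struct sgttyb".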

> The other question that comes to mind is why did USG choose to eliminate
> the `ttyrend' interface for canonicalization of a buffer full of input?
> As you might guess, that would be perfect for our serial board.

USG didn't totally eliminate the "canonicalize a buffer of input" interface.
They changed it to assume (on the PDP-11 and VAX-11, anyway) that the most
common terminal interface would be a KMC-11 running a DZ-11, and that the
KMC-11 would do certain preprocessing - namely:

parity error checking
break checking (ignoring if IGNBRK, interrupting if BRKINT - try disabling your
	interrupt and quit characters but still permitting interrupts with
	the BREAK key on a vanilla non-USG system!)
stripping to 7 bits, if requested
XON/XOFF response - this should be done as soon as the XOFF comes out of the
	FIFO
INLCR processing (mapping of input \n to \r)
IGNCR and ICRNL processing (mapping of \r to \n or discarding of \r)
IUCLC processing (mapping of A-Z into a-z)
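
In driver terms, the character-at-a-time version of that preprocessing
looks roughly like this (a sketch against the S3 input modes, not the
actual KMC-11 microcode; BREAK, parity, and XON/XOFF handling omitted):

	if (tp->t_iflag & ISTRIP)
		c &= 0177;			/* strip to 7 bits */
	if (c == '\n' && (tp->t_iflag & INLCR))
		c = '\r';			/* input NL -> CR */
	else if (c == '\r') {
		if (tp->t_iflag & IGNCR)
			return;			/* throw the CR away */
		if (tp->t_iflag & ICRNL)
			c = '\n';		/* input CR -> NL */
	}
	if ((tp->t_iflag & IUCLC) && c >= 'A' && c <= 'Z')
		c += 'a' - 'A';			/* upper to lower case */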

The KMC-11 would then either pass the KMC/DZ driver a single character,
which would be passed to "ttin" as

	ttin(tp, <the character>, 1)

which means "here's a character, but the KMC-11 did all that extra stuff
already", or would have been given a "clist" block to fill in, and the
"clist" block would be passed to "ttin" as

	ttin(tp, <address of the clist block>, <number of characters
		in the clist block>)
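
So a board like the SIO4 could play the KMC-11's role itself: drain the
FIFO into a block and hand the whole thing over at once. Schematically
(metaphorical again - "getc(FIFO)", NO_MORE, and BUFSIZE are stand-ins,
and "buf" stands in for the clist block):

	n = 0;
	while ((c = getc(FIFO)) != NO_MORE && n < BUFSIZE)
		buf[n++] = c;		/* drain the input silo */
	if (n > 0)
		ttin(tp, buf, n);	/* one call for the whole batch */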

As such, this isn't really intended as an Official Architecture for terminal
drivers; implementors of UNIX on a particular machine should feel free to
change the interface to routines like "ttin" and the "t_proc" routine to
fit their hardware.

	Guy Harris
	{seismo,ihnp4,allegra}!rlgvax!guy