[net.dcom] ibm pc and interrupt driven asynch output

kelvin@arizona.UUCP (Kelvin Nilsen) (03/30/85)

I have written a terminal emulator for the IBM PC to which I am attempting
to add several enhancements.  One feature I would like is to be able to
send all asynchronous port output under the direction of interrupt handlers.
I believe I have properly implemented this capability, but have observed
problems when communicating with mainframes.
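
(For concreteness, the transmit side of such a driver looks roughly like the
sketch below.  This is just the general idea, not my exact code: outp()/inp()
stand in for whatever port I/O routines your compiler supplies, and the
register-saving/iret glue and the 8259 vector setup are omitted.)

#define COM1     0x3F8                 /* base I/O address of COM1 */
#define THR      (COM1 + 0)            /* transmit holding register */
#define IER      (COM1 + 1)            /* interrupt enable register */
#define PIC      0x20                  /* 8259 interrupt controller */

static char txbuf[256];                /* ring buffer filled by foreground code */
static unsigned char txhead, txtail;   /* indexes wrap automatically at 256 */

/* Reached (via IRQ4 / INT 0Ch) each time the UART finishes a character. */
void serial_tx_interrupt()
{
    if (txtail != txhead)
        outp(THR, txbuf[txtail++]);    /* load the next char; the UART will
                                          interrupt again when it has gone */
    else
        outp(IER, inp(IER) & ~0x02);   /* queue empty: switch the transmit
                                          interrupt off until there is more */
    outp(PIC, 0x20);                   /* end-of-interrupt to the 8259 */
}

/* Foreground code calls this after putting characters into txbuf. */
void kick_transmitter()
{
    outp(IER, inp(IER) | 0x02);        /* enable "transmit holding register
                                          empty" interrupts; most 8250s then
                                          interrupt at once if the transmitter
                                          is idle (if yours doesn't, prime it
                                          by writing the first char to THR) */
}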

The problems are:
	On a VAX running 4.2 UNIX, when I type a fast string of characters
	at 1200 baud, the command line interpreter aborts the line I am
	typing and echoes my interrupt character, ^C.  I am running the
	C-shell.  I have similar problems when running editors, etc. 

	On a DEC-10, when I type a fast string of characters to the command
	line interpreter, all or some of the characters are ignored.

	In "loop-back" testing on the PC, no matter how fast I type characters
	I am unable to drop or garble any characters.  It is quite easy to
	reproduce the problem on a mainframe at will.
		Note that "loop-back" testing is vulnerable to timing
		problems not present in the mainframe hookup, as the same
		PC will receive two interrupts almost simultaneously: one
		upon completion of the transmission, and the other on receipt
		of the character transmitted.

		I have not yet been able to test the program between two PCs.

I have hypothesized that the mainframes are not able to keep up with 1200
baud characters packed as closely together as is possible using interrupt
driven output.  Is this consistent with anyone else's knowledge/experience?
If so, what is the proper delay between characters for compatibility with
a maximum number of host computers?

Is my PC corrupting the data word before/while it's being transmitted?  It
seems possible that the PC might be garbling data even though it can
read back its own garbled output in the "loop-back" test.

I would greatly appreciate any insight you might be able to offer,
	thanks in advance.

	kelvin nilsen

root@bu-cs.UUCP (Barry Shein) (03/31/85)

>Re: Problem with remote system (4.2bsd,PDP10) dropping chars at
>1200 baud input

Yes, nuisance nuisance. Solutions:

1. If you are not doing 'raw' transfers [i.e. you don't need to send the
entire 256-character set] then you can put UNIX into TANDEM mode (stty
tandem). This will cause UNIX to respond with ^S when its input buffer is
about to overflow and ^Q when it has drained back down to a safe level.
As long as you can stop transmission quickly (there are a few characters
of grace) you'll probably be OK; a rough sketch of the PC side of this
appears below, after (3).

2. Another approach is to put about a 1/10th-of-a-second delay after
every CR, or after every 100 or so chars, whichever comes first. This
will work in general, but unfortunately the needed delay will vary with
remote system load, so you may have to play around. I have used exactly
1/10th of a sec with TIP and it works most of the time on 4.2bsd; it
sometimes has to be raised to 2 or 3 tenths for SYSV.

3. The only other resort is a 'smart' protocol at the other end
catching the characters, doing flow control, ACKing, etc., but I bet
this is exactly what you don't want (otherwise, try KERMIT).
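
Back to (1): on the PC side, honoring the ^S/^Q only takes a few lines in
the interrupt handlers.  Very rough sketch (the helper names are made up,
not anything in your program):

#define XOFF 0x13                      /* ^S */
#define XON  0x11                      /* ^Q */

static int host_said_stop;             /* nonzero while the host wants silence */

/* called from the receive interrupt for every incoming character */
void rx_char(int c)
{
    if (c == XOFF)
        host_said_stop = 1;            /* freeze output immediately */
    else if (c == XON) {
        host_said_stop = 0;
        restart_transmitter();         /* made-up name: re-arm the transmit
                                          interrupt if characters are queued */
    } else
        give_to_emulator(c);           /* made-up name: normal processing */
}

/* called from the transmit interrupt when the UART wants another character */
void tx_ready()
{
    if (host_said_stop)
        return;                        /* send nothing; the XON restarts us */
    send_next_queued_char();           /* made-up name: pull from the queue */
}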

	-Barry Shein, Boston University

nather@utastro.UUCP (Ed Nather) (03/31/85)

> >Re: Problem with remote system (4.2bsd,PDP10) dropping chars at
> >1200 baud input
> 
> Yes, nuisance nuisance. Solutions:
> 
> 1. If you are not doing 'raw' transfers [i.e. you don't need to send the
> entire 256-character set] then you can put UNIX into TANDEM mode (stty
> tandem). This will cause UNIX to respond with ^S when its input buffer is
> about to overflow and ^Q when it has drained back down to a safe level.
> As long as you can stop transmission quickly (there are a few characters
> of grace) you'll probably be OK.
> 
We tried this and it doesn't work.  Uploading to a busy VAX loses characters.

> 2. Another approach is to put about a 1/10th-of-a-second delay after
> every CR, or after every 100 or so chars, whichever comes first. This
> will work in general, but unfortunately the needed delay will vary with
> remote system load, so you may have to play around. I have used exactly
> 1/10th of a sec with TIP and it works most of the time on 4.2bsd; it
> sometimes has to be raised to 2 or 3 tenths for SYSV.
> 
We tried this and it didn't work either.  System load changes too rapidly
and unpredictably.

> 3. The only other resort is a 'smart' protocol at the other end
> catching the characters, doing flow control, ACKing, etc., but I bet
> this is exactly what you don't want (otherwise, try KERMIT).
> 
> 	-Barry Shein, Boston University
We tried this and it works fine.  The Kermit terminal emulator is superb,
the system is 100% reliable either directly connected or via modem, and we
can upload and download executable files without any problems.  Why kick a
dead whale along the beach when the solution is FREE?

-- 
Ed Nather
Astronomy Dept, U of Texas @ Austin
{allegra,ihnp4}!{noao,ut-sally}!utastro!nather

root@bu-cs.UUCP (Barry Shein) (04/01/85)

> Rhetorical question: Why *not* use Kermit????

non-Rhetorical answer:

Cuz I guessed the asker may have been doing something like simulating
very smart function keys that send arbitrary text to a program at the
other end (rather than simple file transfer).  You can't always put
Kermit or some such between you and the program you are trying to talk
to, although I suppose a clever fellow could make good use of a
pseudo-tty running something like Kermit as its front end...?

	-Barry Shein, Boston University

josh@v1.UUCP (Josh Knight) (04/01/85)

> > >Re: Problem with remote system (4.2bsd,PDP10) dropping chars at
> > >1200 baud input
> > ...
> > but I bet this is exactly what you don't want (otherwise, try KERMIT).
> ... 
> We tried this and it works fine.  The Kermit terminal emulator is superb,
> the system is 100% reliable either directly connected or via modem, and we
> can upload and download executable files without any problems.  Why kick a
> dead whale along the beach when the solution is FREE?

BUT...you can do both...there is a version of Kermit for TSO...actually
there is more than one (but of course :-) :-) :-).

The opinions expressed (or implied) are my own, not those of my employer.

		Josh Knight, IBM T.J. Watson Research
    josh at YKTVMX on BITNET, josh.yktvmx.ibm on CSnet,
    ...!philabs!v1!josh

kelvin@arizona.UUCP (Kelvin Nilsen) (04/02/85)

Thanks to all for an abundance of responses, comments, and suggestions.

To clarify, I want interrupt-driven output because I have actually written
a small time-sharing kernel that can schedule different activities
concurrently.  One thing I like to do when transferring files with
"kermit" or "xmodem" (both are already incorporated in this program) is
to read the next packet of data from the disk while the current packet is
being transmitted by the interrupt driver.
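
(Very roughly, and with made-up names rather than the actual kernel calls,
the transfer task does something like this:)

/* Sketch only: every name here is illustrative, not real code. */
void file_transfer_task()
{
    struct packet next;
    for (;;) {
        read_packet_from_disk(&next);      /* overlaps with the interrupt
                                              driver draining the previous
                                              packet from the transmit queue */
        await_transmit_queue_empty();      /* blocks this task; the scheduler
                                              runs other activities meanwhile */
        queue_packet_for_transmit(&next);  /* hand it to the interrupt driver */
    }
}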

As has been pointed out, I cannot always use a protocol like kermit when
interacting with existing programs through smart function keys or ascii
file dumps.  And slowing everything down to 18 characters a second seems
a little unreasonable. 

Although it is not likely that one types faster than 1200 baud, a peculiarity
of my design (in which the keyboard is polled 18 times a second) makes it
appear to the outgoing interrupt handler that certain bursts of characters
are typed even faster.

The most helpful clue so far was someone pointing out that UNIX translates
framing errors into the user's interrupt character (^C in my case).  So I
tried setting the stop bits to two and, lo and behold, the problem
disappeared.  Now I'll need to find out whether this is a peculiarity of my
system, an XPC by XOR(?!) which has already demonstrated a few serial port
anomalies, or a problem common to all IBM PC compatibles.
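
(For anyone who wants to try the same experiment: on the 8250 the stop-bit
count is bit 2 of the line control register, so it's a one-line change.
Again, outp()/inp() stand in for your compiler's port I/O routines.)

#define COM1  0x3F8
#define LCR   (COM1 + 3)               /* 8250 line control register */

void use_two_stop_bits()
{
    outp(LCR, inp(LCR) | 0x04);        /* bit 2 = 1: two stop bits; the word
                                          length and parity bits are untouched */
}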

Any more thoughts?
	thanks again, kelvin nilsen

jr@bbnccv.UUCP (John Robinson) (04/04/85)

Adding the extra stop bit introduces some delay, but as it is only
1/10 of a character time, it hardly seems that this is what is
helping.

What I would guess is going on is that the crystal generating the 1200
baud timing in your PC is in disagreement with that of the VAX (I'm
not saying who's wrong, necessarily).  The extra stop bit allows the
receiver in the VAX to reacquire bit-timing when the next start bit
comes in, whereas with just one stop bit, what is probably happening
is that the receiver can never resynchronize, and eventually the bit
sampler finds the bit after the stop bit (the start bit of the next
character, or the last data bit of the previous).  Recall that start
bits are 0 and stop bits (and idle lines) are 1.
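
To put rough numbers on it: the receiver samples the last bit of a character
about 9.5 bit times after the start edge, so if it re-locks on every start
bit it can tolerate a combined clock mismatch of roughly 0.5/9.5, or 5%.  If
instead it synchronizes once and then free-runs across back-to-back
characters, even a 1% mismatch drifts a full bit time after about 100 bit
times, i.e. only ten characters.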

So I would check the crystal generating timing on your PC, or that on
the VAX (do this by trying a port on a different interface, if
possible).  Meanwhile, the 2-stop-bit approach sounds like a good one,
and the penalty is only 10%.

Besides, it means you can talk to tty33's  :-)

/jr