whna@cgcha.uucp (Heinz Naef) (10/25/88)
Assume a VT-100-type terminal accessing a UNIX host in the following ways:

(1) via a TCP/IP terminal server networked with the target host
(2) via an asynchronous line interface integrated in the target host

There is a significant difference in how a BREAK (CTRL-C) condition is handled: in case (1) the terminal server (3Com/Bridge LS/1, Cisco xSM) continues to empty its buffer towards the terminal; in case (2) the output to the terminal stops immediately.

On a UNIX system, try to cat /usr/dict/words with the two attachments described above. In case (1), tens or even hundreds of pages will be displayed after hitting BREAK (or ^C), which is considered an acceptance problem.

What is the reason for this different behavior? Would there be no way to "roll back" the current buffer's worth of packets upon receiving a BREAK and just flush the buffer?

Thanks in advance for any comments.

Regards, Heinz Naef, c/o CIBA-GEIGY AG, R-1032.5.58, P.O.Box, CH-4002 Basel, Switzerland
UUCP: cgch!whna - Internet: whna%cgch.uucp@uunet.uu.net - BITNET: whna%cgch.uucp@cernvax.bitnet
hedrick@athos.rutgers.edu (Charles Hedrick) (10/27/88)
whna@cgcha.uucp (Heinz Naef) asks why ^C doesn't stop output when you are logged into a system via a terminal server. The most likely answer is that either your host or your terminal server (or both) doesn't implement telnet sync.

Output keeps coming because it has been buffered. In order to get good performance, data is aggregated into 1.5K segments, and several KBytes of such segments may be sent at a time. Meanwhile the terminal server is parcelling data out at a measly 9600 baud or whatever. If the host stops sending, the terminal server may not get anything new, but there's this 10K or so of data already in the pipeline (both on the terminal server and in the host).

There is a way to stop this. What is supposed to happen is the following: when a host wants to flush output, it sets the "urgent" bit in the TCP header. This bit will be set in the next packet sent from the host, and it takes effect on the terminal server as soon as such a packet arrives; the terminal server does not wait until it reaches that packet in the course of dumping data to your terminal. Its effect occurs "out of band." As soon as the terminal server sees a packet with this bit, it is supposed to stop output, throw away all output in its buffers, and start ignoring new data. This continues until it sees a special "sync" signal (a telnet Data Mark) in the data stream. The sync is put into the stream at the point where new data starts and output is supposed to resume.

If both ends implement this properly, you will still get a bit of overrun when you type ^C, because it does take some time for the ^C to reach the host and the response to get back. But it will no longer go on for pages.

I am reasonably sure that both the Bridge CS/1 and the cisco ASM implement this, so the problem is most likely in your host TCP/IP implementation. 4.2BSD didn't do sync at all. The initial 4.3BSD did it at only one end (I think user telnet but not server, though I may have it reversed).
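The discard-until-sync step Hedrick describes can be sketched in a few lines. This is not code from any of the implementations named above, just a minimal simulation of the terminal-server side: the IAC and DM values are the telnet constants from RFC 854, while the function and variable names are invented for illustration.

```python
# Sketch of a terminal server's reaction to telnet sync (RFC 854 Synch).
IAC = 0xFF  # telnet "Interpret As Command" escape byte
DM = 0xF2   # telnet Data Mark, the "sync" signal in the data stream

def flush_until_sync(pending, incoming):
    """Called when a TCP urgent notification arrives: drop all output
    already queued for the terminal, then discard new data up to and
    including IAC DM.  Returns the output that should actually be
    displayed once things resume."""
    pending.clear()                       # throw away the buffered backlog
    marker = bytes([IAC, DM])
    i = incoming.find(marker)
    if i == -1:
        return b""                        # sync not seen yet; keep discarding
    return incoming[i + len(marker):]     # resume with data after the sync
```

For example, if ten kilobytes of /usr/dict/words are queued and the host then sends an urgent-flagged segment ending in IAC DM followed by a fresh shell prompt, only the prompt survives:

```python
queued = bytearray(b"aardvark\r\nabacus\r\n...")   # backlog for the terminal
shown = flush_until_sync(queued, b"stale words" + bytes([IAC, DM]) + b"$ ")
# shown == b"$ " and the queued backlog is now empty
```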
I believe the latest 4.3 gets things right, but probably most vendor implementations haven't updated to that release yet.

/usr/dict/words is a worst-case test, because it consists of very short lines, so a delay of a couple of seconds can result in 10 pages going by.
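That "10 pages" figure is easy to sanity-check. The line length, screen height, and delay below are assumed values for the sake of the arithmetic, not numbers from the posts:

```python
# Rough check of "a couple of seconds => ~10 pages" at 9600 baud.
baud = 9600                  # terminal line speed from the post
chars_per_sec = baud // 10   # ~10 bits per char: start + 8 data + stop (assumption)
chars_per_line = 8 + 2       # short dictionary word plus CR LF (assumption)
screen_lines = 24            # VT-100 screen height
delay_sec = 2.5              # assumed seconds of data still in the pipeline

lines = chars_per_sec / chars_per_line * delay_sec
pages = lines / screen_lines
print(round(pages))          # about 10 screenfuls of overrun
```

So even a short window between typing ^C and the flush taking effect fills many screens when every line is only a word long.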