[net.arch] dumber terminal device drivers

dlm@piggy.UUCP (Daryl Monge) (03/24/85)

The individual who first had the idea of removing all the trash from TTY
device drivers wasn't "off the wall".  The overhead of user level code
doing much of the work may not be a valid argument.  I observe that
the following utilities and applications process character by character
anyway.
	EMACS
	VI
	ksh
	vnews
	all of our internal graphics applications.

With the growing list of user friendly applications and utilities, it seems
that fewer and fewer useful programs actually need the device drivers to do
more than read/write characters.  (Flow control may be needed there also).


Daryl Monge
AT&T Bell Labs  - Holmdel, NJ

...!ihnp4!piggy!dlm

jon@nsc.UUCP (Jon Ryshpan) (03/29/85)

In article <327@piggy.UUCP> dlm@piggy.UUCP (Daryl Monge) writes:

>The individual who first had the idea of removing all the trash from TTY
>device drivers wasn't "off the wall".  The overhead of user level code
>doing much of the work may not be a valid argument.  I observe that
>the following utilities and applications process character by character
>anyway.
>	EMACS
>	VI
>	ksh
>	vnews
>	all of our internal graphics applications.
>
>With the growing list of user friendly applications and utilities, it seems
>that fewer and fewer useful programs actually need the device drivers to do
>more than read/write characters.  (Flow control may be needed there also).

Vi runs in raw mode (give or take a brick).  Foo!  When vi is just
accepting text to be input into a file there doesn't seem to be any
need for it to wake up every time the user enters a character.  It only
needs to wake up when something interesting happens - ie. a newline,
escape or tab - or if text is being inserted into the middle of a line,
and the text to the right needs to be pushed over.

What's needed is a general way for vi to tell the terminal handler when
it needs to be awakened.  Clearly vi must wake up whenever its
input buffer is full.  Also, there are a number of characters that must
cause it to wake up.

The easiest way to do this is with a system call whose argument is a
table with an entry for each character, telling whether it completes
the buffer or not.  This can be done with a string of 256 bits, or only
8 longwords.  There is no reason why this system call should take more
time than a single activation of the program, since each involves one
trip through the kernel.

I think this scheme will handle input to vi when it is accepting text
input at the end of a line, so that nothing needs to be moved over.
(This case accounts for a large part of the input to vi, probably most
of it.)

Whenever vi is ready to accept another line of input, it sets the
buffer length to the length of the line to be accepted (the screen
width less the wrap margin), and gets text from the terminal.  Newline
and escape complete the input.  If the user enters a tab, vi wakes up,
moves the cursor (if necessary) and requests a read for the number of
characters remaining on the line.  The other control characters are
converted to "^<whatever>".

The situation is quite general, and applies to other applications as
well: line handlers, communications programs, applications that accept
large amounts of text.  I have looked at the vi situation in detail
because it's sort of complicated, and you might doubt that the scheme
would really work here.  There is some extra programming to be done to
use a facility like this, but there are a lot of cycles to be saved.

-> These opinions are my own, and have nothing to do with anything <-
-> at National Semiconductor.					   <-
-- 

				Good Luck - Jonathan Ryshpan

root@bu-cs.UUCP (Barry Shein) (03/31/85)

>Comments about using a bit-mask to indicate wake-up characters
>as a way to optimize terminal handling in programs like VI,EMACS

Absolutely valid; it's used in TOPS-20 all the time (exactly as described)
and also exists in VMS.  The TOPS-20 front end processors seem to also
take advantage of this architecture and could be used in front ends
in general.

On the other hand, before expending my efforts there: I have seen the
future and it is here.  Our XEROX1108 Lisp Machines run PUP to speak
to our 4.2 Vaxen.  One of the nice things LEAF (a file protocol) supports
is the ability to just do a 'cd' to a VAX directory, quite transparently.
They have a text editor TEDIT with a package that more or less simulates
EMACS.

The result is if I want to edit a file on the VAX I just CD
(actual cmd is CONNECT) to its directory and run the editor
which runs completely on my local workstation (except for the
occasional block transfers of the file to the local box [*not*
the whole file]). Voila' the cycles for character handling are
well spent locally w/o bothering the central machine and I am
freed more or less from either creating or suffering from a loaded
central machine. I really have little else to do with the local
cycles (there's enough left over for the background processes) so
I feel no real need to optimize further.

The upshot is: Traditional, centralized time sharing is DEAD.
Turn those super-minis, main-frames into file/compute servers
and get yourself a workstation (they're getting quite cheap.)
Optimizing a program so that 40 people EMACS'ing on a VAX11/780
or so will be non-interfering is a waste of time. Buying a
slightly faster (4X or so) super-mini is a waste of money, you
wake up a few weeks later and you have 4X the people running
those programs and you are back where you started.

Workstations provide linear growth solutions to these problems
with linear, predictable, non-catastrophic costs (unlike the kind of costs
engendered in upgrading a super-mini to the next stupid model.)

	-Barry Shein, Boston University

...and get out of the way if you can't lend a hand cuz the times
they are a' changing

phil@osiris.UUCP (Philip Kos) (04/01/85)

> . . . . Foo!  When vi is just
> accepting text to be input into a file there doesn't seem to be any
> need for it to wake up every time the user enters a character.  It only
> needs to wake up when something interesting happens - ie. a newline,
> escape or tab - or if text is being inserted into the middle of a line,
> and the text to the right needs to be pushed over.
> 
> What's needed is a general way for vi to tell the terminal handler when
> it needs to be awakened . . . .
> Also, there are a number of characters that must cause it to wake up.


I prefer having input/output character processing in the tty driver.
It keeps me from having to deal with it.  I've actually written a
simple character/line-editing driver, and did not relish the prospect
of building into it all the control offered by the UNIX tty drivers.

Ideally, the processing is there if you need it and skipped if you tell
the driver you don't.  Running in raw mode, there shouldn't be a lot
of overhead.  Of course, "shouldn't" doesn't mean "isn't".  Could
somebody who knows give us the correct story here?  (Notice that we're
discussing some of the same things that caused so much religious flaming
in the RMS "discussion".  Hope this turns out the same - nothing like
a little diatribe to go with my morning coffee. :-)

I don't know about everyone else on the net, but when I'm typing long
lines into a file with vi, I like to have characters echoed *before* I
hit \n.  Also, I think vi would be nicer if it *didn't* insist on
rewriting a line every time you insert a character before the end.  This is
not normally a problem, but on slower dialup terminals it sucks.

As for specifying a list of "activation characters", the current "new"
tty driver has three activation characters which apparently signal the
running process (intr, quit, brk) - somebody could try writing a
program that maps these into \n, ESC, and \t, with the tty driver running
in raw -cbreak mode.  It seems like this would give you what you are
looking for - buffered text input with activation caused by a small set
of control characters.  Of course, you'll be basically ignoring most of
the normal process control signals, but vi does that anyway.

Can this actually be done?  Has anybody out there tried it?  Am I
talking through my hat again?  And what about Naomi?


					Phil Kos
					The Johns Hopkins Hospital

paul@osu-eddie.UUCP (Paul Placeway) (04/01/85)

Actually what is needed is a dispatch table for the keys rather than the
current method.  The problem is that the Unix drivers want to be told which
key does a given function, rather than which function is invoked by a given
key.  The result is that you can only have one key to do the DEL function at
a time (for example).  If you supported the META (ie. 8th bit set) keys, and
had 16 different functions (4 bits/character), each terminal's map would take
up 128 bytes of memory.  This might seem excessive (2K for 16 terminals),
but it allows you to make any number of keys do any function (for vi, for
example, ESC, CR, NL, and various control characters would do a wake up).
The other advantage is that processing characters is much quicker,
because you only have to do a table lookup followed by a shift and mask
rather than 13 compares to find what a function is.

				Paul W. Placeway
				The Ohio State University
				(UUCP: ...!cbosgd!osu-eddie!paul)
				(CSNet: paul@ohio-state)
				(ARPA: paul%ohio-state.csnet@CSNET-RELAY)

afb3@hou2d.UUCP (A.BALDWIN) (04/02/85)

Response to Jon Ryshpan's "Pick your own terminator tty read"
which was a response to Daryl Monge's "one character at a time tty read"

Flames notwithstanding (I'm really a UN*X lover too),
the method you describe (re: 256 bit mask for terminators)
has been used in RSX-11M and VMS for quite a few years.  It is
called "read with special terminators".  Another useful "special"
tty "read" function is "read with escape sequences".  Both RSX
and VMS have this type of modifier to terminal I/O calls (QIO's
anyone?).  That's how EDT and other screen oriented junk is done in
DEC-land.  If you don't think this can reduce overhead, try implementing
a "dumb" terminal or "file transfer" program over an async line without
them.

Excuse me while I don my fire retardant suit.

Al Baldwin
AT&T-Bell Labs
...!ihnp4!hou2d!afb3


[These opinions are my own....Who else would want them!!!]

ross@dsd.UUCP (Evan Ross) (04/03/85)

	As I recall, Data General operating systems also allow you
	to provide a bit mask to represent terminating characters.

-- 
			Evan Ross   {ihnpv, decwrl!amd} !fortune!dsd!ross

"To oppose something is to maintain it.
 To oppose vulgarity is inevitably to be vulgar."

guy@rlgvax.UUCP (Guy Harris) (04/03/85)

This particular article is more in the lines of "net.unix"; I'm sending
the reply there as well, and redirecting further followups there.

> Also, I think vi would be nicer if it *didn't* insist on rewriting a
> line every time you insert a character before the end.  This is not
> normally a problem, but on slower dialup terminals it sucks.

Try "set slowopen"; it won't rewrite until the insert is done.  Or try
running on a terminal with character insert/delete.

> As for specifying a list of "activation characters", the current "new"
> tty driver has three activation characters which apparently signal the
>running process (intr, quit, brk) - somebody could try writing a
>program that maps these into \n, ESC, and \t, with the tty driver running
> in raw -cbreak mode.  It seems like this would give you what you are
> looking for - buffered text input with activation caused by a small set
> of control characters.  Of course, you'll be basically ignoring most of
> the normal process control signals, but vi does that anyway.
> 
>Can this actually be done?  Has anybody out there tried it?  Am I
>talking through my hat again?  And what about Naomi?

You're confusing "signal" with "signal".  "intr" and "quit" send a real
live UNIX signal to the process.  "brk" just terminates a line, like "\n"
does, in "-cbreak -raw" mode.  If the driver is running in "raw -cbreak" mode,
you do *NOT* get buffered text input.  "raw" is like "cbreak"; you get no
erase/kill processing and each character is handed to the program at the
time it's typed.

99% of the time you very definitely want to run in "cbreak" rather than "raw"
mode.  For one thing, you can get the normal process control signals (which
"vi" does *NOT* ignore; try typing your interrupt character while "vi" is
doing a long operation).  For another, XON/XOFF works, which is very
important if you're using a VT100 in smooth scroll mode - or using any
other terminal which isn't running at the nominal baud rate of the line.

Furthermore, if you mapped the "intr" and "quit" characters, the interrupt
they caused would flush all input that had been typed until that point,
rather than handing it to the program.

By the way, all three of those characters are in the original V7 terminal
driver; Berkeley didn't add any of them.  (They're also in the System III/
System V terminal driver, although with slightly different names.  The
VAX System V driver, at least, has two characters like the "brk" character -
c_cc[VEOL] and c_cc[VEOL2].  The latter isn't documented and may go away.
This still isn't enough.  Your scheme needs three.)

	Guy Harris
	sun!guy

afb3@hou2d.UUCP (A.BALDWIN) (04/05/85)

[Line Eater Food]

In "raw" mode every character causes "activation" (wake up of your
process, for UN*X types).  In general this means a single character
per I/O call.  The result is one system call per character (and the
overhead thereto).  The real killer is the system call -- not the
interpretive processing that might be done (it's done anyway; who
cares if it's in the system or in a library).


Al Baldwin
AT&T-Bell Labs
...!ihnp4!hou2d!afb3


[These opinions are my own....Who else would want them!!!]

bass@dmsd.UUCP (John Bass) (04/09/85)

As others have noted, the real killer is the tty system call times --
there are at least TWO such calls: one read and one write to echo the char.
The reason the echo is done in most applications is that they require sending
cursor position and control information which MUST NOT have echoed input
interleaved into it. Thus on most mini & micro unix systems this consumes
about 7-15ms per keystroke including user time, system calls, and context
switch.

In most user interfaces the REAL KILLER is that the user interface suffers
keystroke echo latencies of up to several seconds depending on the job's
relative priority at the time and the amount of paging/swapping that occurs.
These LONG echo times occur on most machines under light load when ABUNDANT
memory is not available -- several times what is needed under a better
user interface. For most commercial systems this makes even the fastest
systems appear doggy at times when compared to dedicated micros like the
IBM PC.

The best solution for multiuser systems is to EXPAND the keyboard input
function to include a break bitmap and an echo bit map -- plus extend the
terminal interface to include a 1D line editor with left-right margin control
and max window size - this is a minor change to the V7 tty routines and is
backward compatible with V7 line semantics.  A special ioctl is added
to read/write the editor params and another ioctl to preset the canonical buffer.

The net effect is a HOT KEYBOARD user interface that is line oriented and
takes the guts of keyboard handling in most data editor user interfaces and
makes it REALTIME at the users terminal. Since the number of system calls
and context switches are reduced the system handles heavy loads better and
still provides dedicated system type response times. IT IS VERY IMPORTANT
not to get carried away with too much functionality since this MUST be done
at input interrupt time -- you can't afford to do a wakeup/context switch.
Several V6 experiments in the late 70's died due to getting too fancy -- they
tried doing complete window management at a very severe performance cost
for non-window applications.

I have code and a document presenting this approach. It comes from work done
over the last 8 years on various unix systems. I raised this issue as
an item for the /usr/group/standards committee but have sat on the proposal waiting
for the IEEE change over to settle out.

John Bass

mac@uvacs.UUCP (Alex Colvin) (04/12/85)

Providing a bit mask for terminating characters isn't enough.  For
instance, it's usually the character AFTER an ESC that's the terminating
character.  The ESC[ ... control sequences can be arbitrarily long, and
appear to terminate on an alphabetic character.

It's a mess.

steveg@hammer.UUCP (Steve Glaser) (04/15/85)

In article <2023@uvacs.UUCP> mac@uvacs.UUCP (Alex Colvin) writes:
>Providing a bit mask for terminating characters isn't enough.  For
>instance, it's usually the character AFTER an ESC that's the terminating
>character.  The ESC[ ... control sequences can be arbitrarily long, and
>appear to terminate on an alphabetic character.
>
>It's a mess.

Alternatively you could just make the terminal driver echo only between
write requests (provided the request was under some specified size -
writes of over that size would work, but might have echoed characters
in them).  The only change that this requires is that applications be
careful when they flush the output buffer if they care about echoed
characters.  I don't think delaying the echo until the end of a write
would be noticed by most users - if the machine is really typing at
you, you aren't going to be able to read what you're getting echoed
anyway - even if the machine is only typing text.

As for terminating characters, something like the following might work.  If
there is a pause of over ___ time between characters, consider it a
terminating character.  If less than that, keep looking (it might be a
multi-character keystroke).  Granted, it has problems over networks and
fancy terminal switches where timing gets mangled, but then unix has
always had problems in that arena (e.g. output delays get lost also).

The only real long term solution is to go toward something like the
Virtual Protocol Machine (VPM) from System V.  I think there is a
version for async lines (doesn't the 5620 DMD support use this on the
3B computers?).  Granted, it's probably overkill and it's definitely
not very portable (not everybody implements VPM), but it does provide
the right kind of hooks for dealing with this sort of problem (you get
full user programmability at interrupt level because it's in a dedicated
processor).

Don't hold your breath waiting for anything like this, though.  Outside of
AT&T offerings, I don't know of any VPM implementations.  Maybe AT&T will
do something like this as a stream processing module in System V Release N
(for some appropriately large value of N).


	Steve Glaser
	Tektronix Inc.

greg@ncr-tp.UUCP (Greg Noel) (04/16/85)

In article <191@dmsd.UUCP> bass@dmsd.UUCP (John Bass) writes:
>The best solution for multiuser systems is to EXPAND the keyboard input
>function to include a break bitmap and an echo bit map -- plus extend the
>terminal interface to include a 1D line editor with left-right margin control
>and max window size - this is a minor change to V7 tty routines and is

Humpf.....  For a change, I'm agreeing with John!  I don't think that
it is really an "expansion" of the functionality, though; this is the
kind of classic simplification and unification of concepts for which
Unix is justly famous.  It is, in fact, very close to the functionality
that I proposed for a smart terminal front end at one point.  Great
minds running in similar ways?

For another variant on the concept, one of the negotiated options of
Telnet (one of the ARPAnet protocols) is RCTE, which I believe stands
for Remote Controlled Transmission and Echoing.  The idea was that a
user in, say, Hawaii could be connected to a computer on the mainland
and still have the advantage of a highly-interactive keyboard.  (If the
link is via satellite, the propagation delay is very noticeable.)  There
were some race problems with the protocol, but the overall approach
was very similar -- download a table that specified break (transmit)
characters, echoed characters, and some info about whether to echo the
break character or not.  Is there somebody in ARPAland with access to
the RFC archives that could cite the applicable RFCs?  Jon Postel, are
you out there?
-- 
-- Greg Noel, NCR Torrey Pines       Greg@ncr-tp.UUCP or Greg@nosc.ARPA

greg@ncr-tp.UUCP (Greg Noel) (04/16/85)

In article <2023@uvacs.UUCP> mac@uvacs.UUCP (Alex Colvin) writes:
>Providing a bit mask for terminating characters isn't enough.  For
>instance, it's usually the character AFTER an ESC that's the terminating
>character.  The ESC[ ... control sequences can be arbitrarily long, and
>appear to terminate on an alphabetic character.

But the mask doesn't have to remain fixed -- once you read the ESC,
gobble down the control sequence one character at a time if you have to.
The idea is to optimize the 99 and 44/100% of the typing that is pure
text -- no need to wake up the process for every character of that.....
-- 
-- Greg Noel, NCR Torrey Pines       Greg@ncr-tp.UUCP or Greg@nosc.ARPA

hmm@unido.UUCP (04/16/85)

There is a simple solution to the ESC[... problem:
just enter a new map with alphabetics as terminators when you get an
ESC [ sequence.  This should not be too much overhead compared
with the single-character processing in raw mode.

	Hans-Martin Mosner
	Universitaet Dortmund (Germany)
	    ihnp4!hpfcla!hpbbn!unido!hmm 
	{decvax,philabs}!mcvax!unido!hmm

twholm@wateng.UUCP (Terrence W. Holm) (04/18/85)

If someone attempts a device driver for `dumber terminals' on
Unix systems, I would suggest that they take a look at how
Data General has given AOS users a uniform one-line editor.

Programmers can use a routine, call it "SCREENREAD(line)", which
will retrieve one line from the users terminal. The advantage of
using this call is that the routine has a set of line editing
features built-in, this includes all of the common DELETE, 
NEXT_WORD, LAST_WORD, INSERT, GOTO_EOL, DELETE_EOL, etc. The
most powerful feature is that an old `line' may be specified
in the call, making the edits common with csh "!!" and "^p^r" 
extremely easy.

Once a `user' has learned the keystrokes for this one-line
editor, he knows how to interact with all programs which request
input of a line. This includes their visual editor, which
uses EXACTLY the same keystrokes while the user is within one
line. 

SUMMARY: Once a standard line-editing format has been specified
then all programs can use this one routine. This simplifies most
programs, and also eases the use of a system. The standardization
also helps improve performance because the device drivers can
contain special code for the known one-line editing characters.
This section of the device driver can also be moved to the
other end of the user/mainframe connection. Thus, we can get
optimization at the other end of a network, or built into
the non-dumb terminals (as has been done).

As opposed to a `general device driver', a Unix system could have
one which handles the line-editing characters of `vi'. And all
programs which decided to improve their performance could
use a new "get_edit_line()" call.

					Terrence W. Holm


AOS is an operating system supported by Data General
Unix is an operating system supported by AT&T

david@daisy.UUCP (David Schachter) (04/21/85)

One can elaborate on the solution to the ESC problem.  The solution suggested
by several people is to have the process send a new break table when an
ESC is encountered.  A generalization of this is to let a break character
automatically install a new break table and a new echo table.  (I'm sure
everyone can think of efficient ways of implementing this.)  To get even more
elaborate, one can put most of the input routine into the driver, thus reducing
the number of context switches to one per token.

In my company's operating system, we have had a mechanism for performing break,
echo, translation, key expansion, and other processing since Day One.  It
is very useful.  The default tables work fine for "dumb" applications and
cost little.  Our sophisticated applications change the tables on the fly,
on a per-token basis.

We have found it useful, at various levels of the O.S., to allow user code
to attach routines to system functions, on a per-process basis.  For example,
the routines that perform screen output frequently have application-specific
auxiliary processing routines attached.  This reduces the amount of context
switching substantially.

The two key ideas are: (1) Push routine functions into the O.S. if they don't
cost non-users a lot and (2) let application programs attach auxiliary
processors to system routines to avoid context switches (with the appropriate
attention to system security, of course!)

jerry@oliveb.UUCP (Jerry Aguirre) (05/07/85)

> If someone attempts a device driver for `dumber terminals' on
> Unix systems, I would suggest that they take a look at how
> Data General has given AOS users a uniform one-line editor.
> 					Terrence W. Holm

The AOS screenread() call is nice and does solve the problem of context
switches.  It also gives a uniform set of editing codes, something Unix
could learn from.  They also have a data-sensitive read in which the
user can specify (via a bit map) the characters that terminate a line.
So even when not doing a screenread the input and echoing of characters
can be done with minimal context switching.  It is really neat to
realize that you can back up on a line, delete some text, insert some
other text, go to the end and add some more, etc. all while your
application is swapped to disk.

AOS needs to save on context switches as it does not do them as
quickly as Unix.  I have compiled programs which did single character
at a time output on both systems and on the AOS system the context
switching really slowed the output even on an unloaded system.  On Unix
the output will slow only when the load is high.

The PROBLEM with AOS screenread is that it only works with a small
number of terminal types.  The terminal type is specified to AOS with
their equivalent of stty.  The drivers for the various terminals are
compiled into the kernel.  Last time I saw an AOS system it only
recognized 4 types of terminals.  Compare this with the number of
terminal types that the vi editor can support (hundreds).

The only answer I can see that provides this efficiency and terminal
independence is to pass some subset of the termcap to the tty driver.
The amount of information and code necessary for INTRA-line editing
is probably not too big.  Where it gets messy is handling line overflow.
The AOS screenread interface is sufficiently complex that it can return
status indicating that the line overflowed while the user was at a
specific column doing insertion.  That is a bit much to extract from
the one word status returned by read(2).  You could add another ioctl to
get that info but doing so adds another context switch.

I think that the settable delimiter table is the best solution.  It
provides most of the efficiency gain and is relatively simple to
implement.  Presumably the delimiter would not echo as the program gets
control and can itself decide whether to echo it.

				Jerry Aguirre @ Olivetti ATC
{hplabs|fortune|idi|ihnp4|tolerant|allegra|tymix}!oliveb!jerry