[comp.protocols.tcp-ip] supdup protocol

BILLW@MATHOM.CISCO.COM (William Westfield) (09/29/87)

Although the SUPDUP protocol has a lot of advantages over the TELNET
protocol for modern computer systems, it does not provide for any
local echoing capability at all, nor does it provide for local "break
characters".  What it does provide is a standard way of negotiating
terminal capabilities (now somewhat dated in scope), and
terminal-independent display capabilities.

TOPS20 has the concept of a break-character bitmap, but not an
echoing bitmap, and it required monitor modifications to make the
break-character implementation at all useful to the popular
display editor EMACS.  (I originally implemented these changes back
at SRI, to (hopefully) lessen the impact of EMACS on system performance.)
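
The idea can be sketched as follows.  This is a hypothetical
illustration, not the actual TOPS-20 monitor interface: the process
hands the monitor a bitmap with one bit per character code, and the
monitor buffers (and could echo) everything else without waking the
process.

```python
# Sketch of a break-character bitmap: one bit per ASCII code.  The
# process (an Emacs, say) sets bits for the characters that should
# wake it; the monitor handles the rest locally.  This is a
# hypothetical illustration, not the real TOPS-20 interface.

def make_bitmap(chars):
    """Pack a set of characters into an integer bitmap."""
    bitmap = 0
    for ch in chars:
        bitmap |= 1 << ord(ch)
    return bitmap

def is_break_char(bitmap, ch):
    """Would this keystroke wake the process?"""
    return bool(bitmap >> ord(ch) & 1)

# Wake Emacs only on control characters and DEL; ordinary printing
# characters are self-inserting and need no process wakeup.
emacs_breaks = make_bitmap(chr(c) for c in range(0x20)) | make_bitmap("\x7f")

typed = "hello world\x06"   # eleven printing characters, then C-f
wakeups = sum(is_break_char(emacs_breaks, ch) for ch in typed)
# One wakeup for the whole burst of typing.
```

The point of the bitmap is exactly the system-performance one above:
a burst of ordinary typing costs the monitor no process wakeups at all.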

I believe that VMS has a similar scheme.

DEC's CTERM protocol provides the means of transmitting such info over
a network connection, but DEC engineers I have talked with said that,
even within DEC, the standard is rather abused (e.g., most VMS
terminal I/O is done via system-extension-code XYZZY, and such).

Unix, one of the world's most popular operating systems, has neither
concept.  The Annex boxes implement some local editing
functionality, but this requires both a custom version of the editor
and custom software on the Annex, and is not a published standard.
(Nor does it help outside of the editor...)

A good protocol would permit some local intelligence (e.g. echoing,
bunching of characters) without the local server having to know the
specifics of the type of terminal being used.

What it amounts to is that most operating systems are STILL dealing
with the terminal as if it were a printer, and that this probably has
to change before a smarter virtual terminal protocol can be defined.

Bill Westfield
cisco Systems
-------

DCP@QUABBIN.SCRC.SYMBOLICS.COM.UUCP (09/30/87)

    Date: Mon 28 Sep 87 15:31:31-PDT
    From: William Westfield <BILLW@MATHOM.CISCO.COM>
	...
    What it amounts to is that most operating systems are STILL dealing
    with the terminal as if it were a printer, and that this probably has
    to change before a smarter virtual terminal protocol can be defined.

That's an interesting point.  I can name at least two operating systems
that natively know about display terminals and consider printing
terminals a (crippled) subset.  I believe they have known this for as
long as 15 years.  They are: ITS (developed at the MIT AI Lab) and, if
memory serves, WAITS (developed at Stanford).  There are things in ITS
that are profound even today, since, as you say, "most OSs are STILL"
wedged about terminal==printer.  SRA and JTW aren't talking through
their hats; they are familiar with, and have access to, existing systems
that have known about display terminals for many years.

The cynic in me says you won't see much real improvement in Unix or VMS
or whatever unless and until their owners bite the bullet, commit to
entering the 1980s (from the 1960s), and pour money into the development
hole.  I would actually suggest they try to be visionaries and enter the
1990s.

brennan@alliant.UUCP (10/02/87)

In article <12338315541.10.BILLW@MATHOM.CISCO.COM> BILLW@MATHOM.CISCO.COM (William Westfield) writes:
>Although the SUPDUP [...] it does not provide for any
>local echoing capability at all, nor does it provide for local "break
>characters" [...]
>TOPS20 has the concept of a break character bitmap, but not for an
>echoing bitmap, [...]
>DEC's CTERM protocol provides the means of transmitting such info over
>a network connection [...]
>Unix, one of the worlds most popular operating systems, doesn't have
>either concept.  The Annex boxes implement some local editing
>functionality, but this requires both a custom version of the editor,
>and custom software on the Annex, and is not a published standard.
>(nor does it help outside of the editor...)
>
>Bill Westfield
>cisco Systems

The local editing mentioned above is a special mode the Annex may enter
in either telnet, rlogin, or local mode.  It does indeed require a
custom version of an editor (GNU Emacs, currently).  However, in its
"native mode" (Annex to Encore's Multimax), local editing, character
batching, etc. are all performed at the Annex.  The initial developers
of the Annex, Jonathan Taylor (now of Sun Microsystems) and Rob Drelles
(now of Stratus), designed a "distributed tty driver" for the Annex and
Encore's Unix OSes; the bulk of the Unix tty drivers (both 4.2 and
System V) runs in the Annex.  In cooked mode, nary a character is
returned to the host until a "normal Unix forwarding character" is
typed, i.e. something that would cause characters to be moved from the
raw to the canonical queue.  IOCTLs for modem handling, line-speed
changing, etc. are all processed.  Of course, in raw/cbreak mode there
is little that may be done, though some character batching (under
control of a forwarding timer) may still be performed.
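
The cooked-mode forwarding decision can be sketched roughly like this
(a toy model; the particular set of forwarding characters is an
assumption loosely based on 4.2BSD canonical mode, not the Annex's
actual tables):

```python
# Sketch of cooked-mode batching in a distributed tty driver: hold
# characters at the terminal server and transmit only when one of
# them would move the raw queue to the canonical queue on the host.
# The forwarding-character set here is an illustrative assumption.
NL, CR, EOF_CHAR, INTR, QUIT_CHAR = "\n", "\r", "\x04", "\x03", "\x1c"

# Characters that complete a canonical-mode line, or that must reach
# the host immediately (signal characters).
FORWARDING_CHARS = {NL, CR, EOF_CHAR, INTR, QUIT_CHAR}

def batch_input(keystrokes):
    """Group keystrokes into the bursts actually transmitted:
    one burst per forwarding character; the rest stays local."""
    bursts, pending = [], []
    for ch in keystrokes:
        pending.append(ch)
        if ch in FORWARDING_CHARS:
            bursts.append("".join(pending))   # ship the whole line at once
            pending = []
    return bursts, "".join(pending)           # pending chars stay local

bursts, pending = batch_input("ls -l\ncat foo\n")
# Two complete lines -> two transmissions, instead of one per keystroke.
```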

Rrrrrrrich.

bzs@BU-CS.BU.EDU.UUCP (10/02/87)

From: David C. Plummer <DCP@QUABBIN.SCRC.Symbolics.COM>
>The cynic in me says you won't see much real improvement in Unix or VMS
>or whatever unless and until their owners bite the bullet, commit to
>entering the 1980s (from the 1960s), and pour money into the development
>hole.  I would actually suggest they try to be visionaries and enter the
>1990s.

Although I can't speak to VMS, one would think the current efforts in
windowing standards (e.g. X and NeWS) for Unix indicate about as strong
a commitment to advancing interactive interfaces as one sees anywhere
today.  Unix has always been perhaps unique in this area, in that all
fundamental developments such as this have been viewed in terms of the
widest possible range of machine architectures (currently spanning at
least the PC/AT to the Cray-2 in sheer size, and RISC, CISC, parallel
architectures, etc. in variety).

When one narrows one's view to purely the current architectural
technology, it's not surprising that some speed in the introduction of
products is gained.  I can only allude to the hare and the tortoise to
put this into some perspective.  Consider, for example, the status of
ITS and Unix today; their ages, in fact, are not all that different.

I believe the current widespread introduction of remote window
standards such as X and NeWS render the above anything but
hypothetical.

In fact, I think they are "SUPDUP".  It's the discussion of dumb ASCII
terminals at all (and their optimization) that casts this conversation
in ancient terms.  I believe we are simply in a transition phase
towards bitmapped, locally intelligent interfaces, much like the one we
passed through several years ago, when co-workers would tell me, "How
can you work on all that CRT stuff when everyone around here has this
large investment in keypunches and teletypes?  You live in the clouds..."

Put simply, a Macintosh or a Sun workstation (etc.) attached to a real
network (i.e. not emulating RS-232) is about the dumbest "terminal" I
want to think about anymore.

	-Barry Shein, Boston University

PADLIPSKY@A.ISI.EDU (Michael Padlipsky) (10/05/87)

I seem to be missing something here, perhaps because I've yet to get
my hands on an X-Windows spec.  Superficially, if I'm talking to a
command language interpreter--or, worse, an editor--that "expects"
character-at-a-time interaction from/with me, doing so in a windowed
environment ought to lead to more transmissions rather than fewer,
since I could be char-at-a-time in several windows rather than just the
one I'm used to.  I don't want to sound like I'm still living in the
days when it was a survival trait to know how to make 026 drum cards
(though I must confess I do miss keypunches: unpunched cards were very
handy to keep in a breast pocket for making notes on), but unless the
window-oriented things contain some mechanism for distinguishing
between what stays at the workstation and what goes to the server
(or counterpart, or peer, or whatever it's fashionable to call the
other side these days), all we've got is jazzier interfaces to the
same old problem.  Would somebody please clarify?
   puzzled cheers, map

P.S.  Similar considerations apply to the subsequent msg about X.25
and TOPS-20: it sure seemed as if case 2) (EMACS) was still doing
precisely what we're trying to avoid....  Which in a roundabout way
reminds me: can anybody speak to the rumor I recall hearing years ago
that RCTE wasn't actually a buggy protocol, but that the TIP's
implementation was at fault?  (I seem to recall picking that one up
from somebody who had had something to do with the Multics
implementation of RCTE, after I'd left Project MAC, as it was then
known.)
-------

DCP@QUABBIN.SCRC.SYMBOLICS.COM (David C. Plummer) (10/05/87)

Indeed, various people's visions of the future, which include high
degrees of real-time interaction with keyboards and pointing devices,
suggest that RCTE is trying to solve a shrinking problem.  There
still exist time-shared systems, and a lot of personal computers not
yet powerful enough to be weaned from login-style connections to those
time-shared systems, and that's why I see RCTE as solving an existing
problem.  Eventually, RCTE should become part of the "good old days" and
exist only in stories to grandchildren about what it was like "back
then."  Perhaps SUPDUP was (and still is?) ahead of its time in assuming
that interaction is important and communication is cheap.

barmar@think.COM (Barry Margolin) (10/05/87)

This discussion of SUPDUP stemmed from a discussion of RCTE.  Several
people pointed out that SUPDUP doesn't actually solve the problem that
RCTE is intended to solve.  However, RMS long ago proposed a
SUPDUP-based solution, called the Local Editing Protocol (LEP), which
goes much further than RCTE.  LEP allows a host program to tell the
terminal emulator about many simple key bindings.  These include
self-insert, relative cursor motion, motion by words, and simple
deletion commands.  A large number of the operations of a video text
editor end up being performed in the workstation, and when the user
types a command it can't perform locally, all the buffered-up
operations are transmitted to the host so that it can update its
buffer.
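
As a rough illustration of the LEP idea (the key names and operation
encodings below are invented for the example, not taken from RMS's
actual proposal), the emulator applies the registered bindings locally
and flushes the buffered operations only when a key it cannot handle
arrives:

```python
# Toy model of local-editing-protocol behavior: the host registers a
# few simple bindings with the terminal emulator; any other command
# forces a flush.  Names and encodings here are illustrative only.

LOCAL_BINDINGS = {
    "C-f": ("move", +1),      # relative cursor motion
    "C-b": ("move", -1),
    "DEL": ("delete", -1),    # simple deletion
}

def session(keys):
    """Return the list of transmissions sent to the host."""
    transmissions, buffered = [], []
    for key in keys:
        if key in LOCAL_BINDINGS:
            buffered.append(LOCAL_BINDINGS[key])      # handled locally
        elif len(key) == 1:
            buffered.append(("insert", key))          # self-insert
        else:
            # Unknown command: send the buffered ops, then the key
            # itself, so the host can replay them against its buffer.
            transmissions.append(("ops", list(buffered)))
            transmissions.append(("key", key))
            buffered = []
    if buffered:
        transmissions.append(("ops", buffered))
    return transmissions

out = session(["a", "b", "C-b", "M-x"])  # three local ops, one remote command
```

Ordinary typing and cursor motion thus cost no round trips at all;
only the occasional non-local command pays for the network.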

RMS also proposed a related protocol, called the Line Saving Protocol,
which allows the host to send lines outside the physical screen; the
workstation can also remember lines that are scrolled off, and the
host can ask it to recall them.

I believe RMS actually implemented support for LEP in ITS EMACS, but I
don't think he ever wrote a SUPDUP client that implemented it.  Most
of the SUPDUP use in the world at the time was over MIT's Chaosnet,
and network speed was never a problem within that environment.  (When
supduping from a Lisp Machine to ITS, they didn't bother turning on
insert/delete-line operations, because computing them slowed down
EMACS's redisplay, and redrawing the screen over the net was faster!)

---
Barry Margolin
Thinking Machines Corp.

barmar@think.com
seismo!think!barmar

bzs@BU-CS.BU.EDU (Barry Shein) (10/06/87)

[Wednesday is the tenth anniversary for RFC736, TELNET SUPDUP Option]

I am certainly not casting aspersions at the SUPDUP protocol; in fact,
it should be useful in any environment where the mode of interaction
models an intelligent ASCII terminal (that is, intelligent for an
ASCII terminal).  Whether SUPDUP is what I would sit down at my tabula
rasa and write today is another question; let's at least distinguish
between software lying on the shelf and new efforts and their costs.
RFCs tend to represent and encourage both.

More importantly, if one generalizes to the point that all host-to-host
interactive sessions are made to appear similar in nature (gee,
it's only keystrokes, and indices for their graphic representations,
passing back and forth), then I believe the spirit of the thing is
lost.

That is, SUPDUP is a very specific protocol, with very specific
definitions for interactions and a model of the world most closely
resembling a relatively fixed (generalized) ASCII terminal using
telnet to speak to a remote host.  It is very clever within its model,
but ten years have passed and some things have changed.

My point is that window protocols like X and NeWS almost certainly
-ARE- (plus or minus a little intention) SUPDUP for current times.
They perform nearly the same services, and much, much more.  My only
real comment was that if I were king of the universe (a good start)
I would like to see people thinking about how these window systems
might be standardized and accepted, and just leave SUPDUP more or
less alone as a standing standard.  (I have nothing against
interested parties sorting out changes that may be desired in SUPDUP,
but I do think we as a community need to get on with other things.)

It goes something like this: If we don't lead, we surely will follow.

I would agree it might be early to standardize, given the current
competition of proposed standards out there, but it's almost too late
for this community to begin talking about what it would like in a
standard.  (E.g., subset support for ASCII terminals has already been
rejected; it's not impossible to put into these windowing standards,
but I don't believe either of them even entertains the possibility.
Should they?  That's a question, and another good start.)

Nothing earth-shattering or shibboleth-violating here; mostly I'm just
trying to open a discussion.  If you find any *answers* in anything
I've said, you've misunderstood me.

	-Barry Shein, Boston University

bzs@BU-CS.BU.EDU (Barry Shein) (10/06/87)

From: Michael Padlipsky <PADLIPSKY@A.ISI.EDU>
>Speaking of misunderstandings, please be aware that I'm NOT one of
>SUPDUP's advocates.  Just trying to "call for the order of the day" by
>asking for an explanation (which I'd still appreciate getting) of how
>windowing sorts of things minimize number of transmissions.

Although at this point I would love to see the core window gnurds jump
in, perhaps I can offer some examples.

In the first place, window systems (and the hardware used to support
them) present new transmission opportunities and a need for new
solutions.  A straightforward example from X is the choice between
tracking the pixel-by-pixel motion of the mouse and requesting the
remote server simply to inform the client (with a single transmitted
event) when certain conditions occur, such as the mouse entering or
leaving a window.  The rest of the tracking is done in the server.

[For those less grounded in such things, let me point out that the
"server" is typically one large program in charge of the physical
screen, keyboard, mouse, etc., and the "clients" are the application
programs, which send requests to either the remote or local server.]
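
The savings from the mouse example can be made concrete with a toy
simulation.  The one-dimensional "window" and the event-counting
functions below are illustrative inventions, loosely modeled on the
distinction between X's pointer-motion and enter/leave window events:

```python
# Compare two ways a client can watch the mouse: receiving every
# pixel-by-pixel motion event, or asking the server to report only
# window crossings.  The geometry is a 1-D simplification.

WINDOW = range(100, 200)          # "window": x coordinates 100..199

def motion_events(path):
    """Server forwards one event per mouse position."""
    return len(path)

def crossing_events(path):
    """Server tracks the pointer itself and transmits an event only
    when the pointer enters or leaves the window."""
    events, inside = 0, path[0] in WINDOW
    for x in path[1:]:
        now_inside = x in WINDOW
        if now_inside != inside:  # an enter or leave notification
            events += 1
            inside = now_inside
    return events

path = list(range(50, 250))       # sweep the pointer across the screen
# 200 motion events vs. just 2 crossing events for the same sweep.
```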

Similarly, keystrokes can be mapped into multiple character
transmissions on the server (by request of the client) and these
would typically be sent as one network transaction.

NeWS of course offers a whole other dimension in its ability to
send program text (in PostScript) to be executed locally by the
server's language interpreter.  Such a text, I assume, could open a
window, display a form to be filled out, collect the user's entries,
and zap it all back in one transmission.

[Let me stop right here and say I don't claim that any of these
features I describe are unique or even original with the systems I
mention, I am simply trying to stick to some examples I am familiar with.]

Thus modern, networked window systems (both of these use Internet
protocols for their transmissions) present both harder problems
and more powerful solution models than previous protocols aimed
at keyboard/screen interactions.

>...If, however,
>your point is that the need for progress outweighs the need to avoid
>being charged for each character typed, so that windowing protocols
>should become the focus of the discussion irrespective of their
>properties in the cost dimension, I'm inclined to duly note it and
>repeat my question to everybody else as to whether a genuinely
>simple fix to RCTE (whether the protocol or the implementations)
>wouldn't be worthwhile, in context.

I think we can have all three: a fix to RCTE where it exists
currently (I don't have a version on the entire B.U. campus); progress
in the discussion of networked window systems; and cost reductions in
network transmissions under window systems -if these needs are expressed-!

That's the key point: I don't think such needs have ever been much
expressed; most of the window systems were developed within Ethernet
environments where things like character-at-a-time overheads were
probably not very important.

The prospect (as people on this campus are asking for) of remote
access to facilities such as supercomputers over long-haul networks
via windowed interfaces makes these issues more pressing.  Data
visualization and this kind of split interaction make a lot of sense
with a remote, high-end facility, a graphically oriented workstation
on one's desk, and a network connection between them.

I would dare to say that the transmissions we are already starting to
see generated by such interactions will make character-at-a-time
overhead seem like mere child's play. We're looking at the prospect of
a keystroke being echoed with a megabit or more of graphical data etc.

I suppose I could better allegorize my view as SUPDUP being a finger
in the dike, with others having run off to fetch some caulking to
put around the finger... it's a fine finger, and the others will no
doubt come back with fine caulking.

	-Barry Shein, Boston University

DCP@QUABBIN.SCRC.SYMBOLICS.COM (David C. Plummer) (10/06/87)

    Date: Tue 6 Oct 87 10:22:40-EDT
    From: Michael Padlipsky <PADLIPSKY@A.ISI.EDU>

    Speaking of misunderstandings, please be aware that I'm NOT one of
    SUPDUP's advocates.  Just trying to "call for the order of the day" by
    asking for an explanation (which I'd still appreciate getting) of how
    windowing sorts of things minimize number of transmissions.  If, however,
    your point is that the need for progress outweighs the need to avoid
    being charged for each character typed, so that windowing protocols
    should become the focus of the discussion irrespective of their
    properties in the cost dimension, I'm inclined to duly note it and
    repeat my question to everybody else as to whether a genuinely
    simple fix to RCTE (whether the protocol or the implementations)
    wouldn't be worthwhile, in context.

One of my points is that the need for windowing and interactiveness is
great, and that having to worry about unrelated-to-that-work things like
number of packets and random monetary costs severely detracts from
progress in windowing and interaction.

Your question still stands, and I am not qualified to answer it.  I hope
people keep windowing and RCTE separate.  If you must think of them
together, try to think of RCTE being an optimization to windowing, not a
requirement (because of $$ constraints, etc).

PAP4@AI.AI.MIT.EDU ("Philip A. Prindeville") (10/08/87)

> then."  Perhaps SUPDUP was (and still is?) ahead of its time by assuming
> that interaction is important and communication is cheap.

Well, someone suggested that SUPDUP is archaic, and that we should devote
our attention instead to developing applications based on windowing
systems such as X.

I have a hard time imagining BITBLTing across the Internet at 56 kbps
(and less).  Now if we had T1 or T3...  In any case, such bitmap
transfers would be slower than waiting for the remote host to do your
editing for you... and would gobble much more bandwidth.

-Philip

barmar@think.COM (Barry Margolin) (10/09/87)

In article <266223.871008.PAP4@AI.AI.MIT.EDU> PAP4@AI.AI.MIT.EDU ("Philip A. Prindeville") writes:
>I have a hard time imagining BITBLTing across the Internet at 56kps (and
>less).  Now if we had T1 or T3...  In any case, such bitmap transfers
>would be slower than waiting for the remote host to do your editing for
>you...  And gobble much more bandwidth.

How often would you have to transfer huge bitmaps?  About the only
time would be when you dump a screen to a file or printer.  Most of
the time the units that window systems operate on are much higher
level, such as characters, lines, polygons, and windows.

Assuming that packet transmission cost is the same regardless of
packet size, SUPDUP and windowing protocols can have about the same
network cost.  In SUPDUP, each keystroke results in a tiny packet (one
TCP octet) being sent from the user's machine to the remote machine,
and a similar packet being returned.  In X, it results in a keystroke
event packet, and an output packet being returned; X has its own
headers and such, so these packets are larger than the corresponding
SUPDUP packets, but they are still just one packet each way.  This
requires that applications use X efficiently; for example, X has the
ability to transmit an event both when a key is pressed and when it is
released, but it can be told not to bother sending the KeyUp.
Similarly, mouse tracking is usually done in the local host, not in
the remote; an event is generated when a mouse button is pressed,
when boundaries are crossed, etc., unless the application really needs
to see all mouse motion.
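
A back-of-the-envelope count makes the comparison explicit, under the
same equal-cost-per-packet assumption (the packet counts are the
simplified model from the paragraph above, not measurements):

```python
# Per-keystroke packet counts, treating every packet as equal cost.
# This is the simplified model described above, not a measurement.

def supdup_packets(keystrokes):
    # one tiny keystroke packet in, one echo/output packet back
    return 2 * keystrokes

def x_packets(keystrokes, send_keyup=False):
    # a KeyPress event in, an output packet back; plus a KeyRelease
    # event if the client hasn't told the server to suppress it
    per_key = 3 if send_keyup else 2
    return per_key * keystrokes

# For 1000 keystrokes: a careful X client pays the same packet count
# as SUPDUP; one that also receives KeyUp events pays 50% more.
```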

---
Barry Margolin
Thinking Machines Corp.

barmar@think.com
seismo!think!barmar

jqj@drizzle.uoregon.EDU (JQ Johnson) (10/10/87)

An important point about the evolution from telnet-style protocols to
X-style windowing protocols is that there is a parallel evolution
towards remote file systems (e.g., though not i.e., Sun NFS).  The
pair of trends implies that there are now many interesting alternatives
available for standardized distributed computing.  Some examples:
1/ An interface to a remote command language interpreter that is
extremely smart about local editing (e.g. Sun's cmdtool or the various
menu-based command extensions).  The menu-based extensions amount to
PFCs, but with a better user interface.  They allow the transmission of
a whole command, or part of it, in a single packet rather than
character-at-a-time (at the user's typing rate).
2/ Special-purpose RPCs for typical commands, often with arguments that
are built automatically by the software on the local workstation.  I
never run sysline-style programs remotely over a telnet stream!
3/ Transparent local editing.  At least in some cases, it makes much
more sense to download a whole file and edit it locally.  That was a
user-interface nightmare when downloading meant firing up FTP (though
that's the way the Symbolics TCP/IP implementation does it).  A remote
file system gives you much more flexibility and syntactic sugar.  Note
that if your charges are per-packet, with a maximum packet size of 128
bytes, and you plan to type 1K keystrokes during the editing of a
single file, then even if the file is 127K bytes long it is cheaper to
download it!  And of course an intelligent system design allows
downloading of only the pieces of the file actually needed.
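
The arithmetic behind that claim checks out, taking 1K = 1024 and
ignoring acknowledgments (which burden both approaches alike):

```python
# Packet-count comparison for the per-packet-charging example above.
import math

PACKET_BYTES = 128
FILE_BYTES = 127 * 1024            # a 127K-byte file
KEYSTROKES = 1024                  # "1K keystrokes" typed while editing

# Downloading the whole file, 128 bytes per packet:
download_packets = math.ceil(FILE_BYTES / PACKET_BYTES)   # 1016 packets

# Typing remotely, one packet per keystroke (not even counting the
# echo packet coming back for every one of them):
typing_packets = KEYSTROKES                               # 1024 packets

# The download wins outright, and by a wide margin once echoes count.
```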

Granted, such things don't work well if your network connection is
9600 baud.  They work reasonably at 56 kbps, though, given careful
tuning.  And they are often a big win not just in terms of packet
charges but in terms of latency -- I'd much rather wait 5 more seconds
for my (local) editor to fire up on a remote file than wait 1 sec. for
the echo of every keystroke!


kent@DECWRL.DEC.COM (10/12/87)

Some work was done a number of years ago (I can't find a reference, but
it was at Arizona) investigating how to use a micro to do editing
across a 1200-baud link.  They had fairly good results doing
pre-fetch and post-write, applying essentially a "virtual editor" model
at the back end, with a "virtual window" underneath it that did
demand paging across the link.  Editing commands across the link
operated on lines, not characters, though that wasn't necessarily the
interface presented to the user.

Anyone else remember this?  Can you give more details?  They certainly
used a protocol that was lighter weight than TCP, but the idea is an
old one.  With 9600-baud links, I would think we could achieve
acceptable results.

chris

barmar@think.COM (Barry Margolin) (10/13/87)

In article <8710121554.AA07611@armagnac.DEC.COM> "Christopher A. Kent" <kent@sonora.dec.com> writes:
>Some work was done a number of years ago (I can't find a reference, but
>it was at Arizona) to investigate how to use a micro to do editing
>across a 1200 baud link.

I don't think this is the exact paper you are talking about, but it is
similar:

Judd, J. Stephen, Corinne J. Letilley, "Memory and Communication
Tradeoffs During Screen-Editor Sessions", Univ of Saskatchewan, August
16, 1984.

Abstract:

   Screen editor sessions typically make heavy use of the
communication channel between processor and display screen.  This is
because relatively simple and quick operations like window movements
can cause the transfer of 1000 or more characters.  To get a
quantitative measure of communication requirements, we need to
determine how people use such systems.  We accumulated a
representative sample of user activity by tracing the movements of the
cursor during 1500 editing sessions.  Some information about these
sessions is presented.

   To make effective use of an interactive screen editor, the two-way
channel between the computer and the screen terminal must have a
fairly high baud rate.  By simulating the observed sessions at various
baud rates, we measured the amount of time lost during such a session
if the baud rate is low.  Then we estimated the increase in
performance afforded by keeping a buffer of text lines local to the
terminal.  Resultant graphs are suitable for comparing the performance
of terminals with various memory sizes and baud rates.

   We propose a terminal that takes an active role in the management of
text during editing sessions, and we estimate its impact on CPU demands
in the host.  This work has implications for the design of terminal
hardware and screen-editor software.

---
Barry Margolin
Thinking Machines Corp.

barmar@think.com
seismo!think!barmar

sas1@sphinx.uchicago.edu (Stuart Schmukler) (10/14/87)

In article <8710121554.AA07611@armagnac.DEC.COM> "Christopher A. Kent" <kent@sonora.dec.com> writes:
>Some work was done a number of years ago (I can't find a reference, but
>it was at Arizona) to investigate how to use a micro to do editing
>across a 1200 baud link. 

I think that I have the references you are talking about; they were written
by Christopher W. Fraser of The University of Arizona and others. They are:

C.W. Fraser,"A Generalized Text Editor", Communications of the ACM, March
1980, Volume 23, Number 3.

C.W. Fraser, "A Compact, Portable CRT-based Text editor", Software-Practice
and Experience, Vol. 9, 121-125 (1979).

Cary A. Coutant and C.W. Fraser, "A Device for Display Terminals",
Software-Practice and Experience, Vol. 10, 183-187 (1980).

And a report from their department:

TR 79-7a, C.W. Fraser, "The Display Editor S"

They concluded that the links available at the time (UUCP 1200-baud
dialups) were too slow and error-prone for effective use.  I got a copy
of the software from them on a Ratfor distribution tape.  They may
still have it available for copying from their archives.  [I hope so,
because that tape has vanished over the years.]

SaS

hedrick@TOPAZ.RUTGERS.EDU (Charles Hedrick) (10/15/87)

There was also a Ph.D. thesis on this subject by Robt. Goldberg
at Rutgers.  I believe this is the work that the original message
referred to.  I believe Rutgers makes all of its theses available
through University Microfilms.  It would also be available as
a Computer Science Dept. technical report, I assume.  It is
unlikely that any online copies are around, so I can only refer
you to the Department for more information.

dcrocker@TWG.ARPA ("Dave Crocker") (10/30/87)

This is a couple of weeks late, but Chris Kent cited a study of
splitting the front and back ends of an editor.  I, too, recall reading
that study quite a few years ago.  The basic concepts were quite
straightforward, in the terms that Chris described.

In particular, I remember the researcher (no, I haven't the foggiest idea
of who or where) was using 1200 baud and claimed an effective (subjective)
9600 baud for most activities.

In the early days of Interactive Systems, with an intelligent terminal
and tailored code added to it, they claimed highly effective
interaction over 1200-baud lines, using the INed editor.  This was
circa 1978.

Dave
------

farber@UDEL.EDU (Dave Farber) (11/01/87)

The first notion I know of to separate the front end of an editor
from the back end was from at least 1963 (the one I know of was at
BTL back then).  (Note the term AT LEAST.)

Dave