[comp.protocols.tcp-ip] RCTE

dcrocker@TWG.ARPA ("Dave Crocker") (09/26/87)

Reviving the Remote Controlled Transmission and Echoing telnet option is
an interesting idea.  Certainly SOMETHING needs to be done.  Part of the
problem with the original protocol (my very first effort and it looks like
it) was that it was not rich enough in maintaining synchronization.  

Stepping back a bit, RCTE requires that the serving host allow the application
to specify what kinds of characters are "interesting" (on the theory that
the sending host can aggregate all the characters up to the interesting
one and then send them in a batch).  RCTE was based upon John Davidson's
(then of Univ. of Hawaii) observation that Tenex (a derivative of which
now runs on Decsystem-20s) had just such a feature.

Unfortunately, such a feature is not common to other well-known operating systems.

Even more interesting is the thought that such a protocol should be
more ambitious.  RCTE does not know anything about the terminal or the
semantics of the session.  A "display-oriented" protocol could be much
more powerful.  John Day, then of Univ of Illinois, began a campaign for
such a telnet option and continued it for quite some time.  I believe
that a version is still in the protocol books.  Perhaps we should dust
it off and see why it shouldn't be aggressively implemented.

Dave
------

BEAME@MCMASTER.BITNET (09/26/87)

In the SET HOST protocol (CTERM), DEC uses one or several longwords passed
from the remote side to indicate on what characters the local side should
terminate the QIO request.  Thus echoing and deleting of characters is
performed on the local end, and the data is sent only when one of the
characters listed as a terminator is hit.  DEC also sends a length field
which will terminate the QIO and send the data when X number of characters
have been entered.

With the above method both "line at a time" and single character transfers
can be requested.
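
Roughly, the local-side check amounts to something like the following C
sketch -- a 256-bit terminator set packed into eight longwords, plus the
length limit.  The structure and names are illustrative only, not actual
CTERM or QIO definitions:

    #include <stdint.h>

    /* Illustrative only: terminator set as eight 32-bit longwords,
       plus the length field that also forces forwarding. */
    typedef struct {
        uint32_t     mask[8];     /* bit N set => character N terminates */
        unsigned int max_count;   /* forward after this many characters  */
    } term_spec;

    static int is_terminator(const term_spec *t, unsigned char c)
    {
        return (t->mask[c >> 5] >> (c & 31)) & 1;
    }

    /* Echo/edit locally, and forward the buffered characters to the
       remote side when this returns nonzero. */
    static int should_forward(const term_spec *t, unsigned char c,
                              unsigned int buffered)
    {
        return is_terminator(t, c) || buffered + 1 >= t->max_count;
    }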

- Carl

hedrick@TOPAZ.RUTGERS.EDU (Charles Hedrick) (09/26/87)

On Unix, my impression is that we have roughly three modes:
  - normally software lets the system do line editing.  In this
	mode, the terminal server could do the editing and
	then pass whole lines without echo
  - raw or cbreak mode is used by a few programs that do screen
	handling.  It is specifically enabled by a system call,
	which would give the kernel a chance to trigger
	negotiation of a different telnet mode.  This would
	be full duplex, char at a time.
  - some screen-oriented programs are used enough that it is
	worth modifying them to give the terminal server
	instructions.  Emacs is probably the best example.
	The main screen-oriented programs I use are emacs and
	vnews.  Vnews is not worth doing, I suspect, since
	normally the characters typed are single letters anyway,
	so not much improvement is to be had.  I understand
	that Encore already has support for emacs in their
	terminal servers.  I don't know what they are doing,
	but experiments were done with TOPS-20 Emacs based
	on the concept that Emacs should let the terminal
	server echo, and should specify two things: a bit
	map - when any of these characters is typed, the
	server should go back into char at a time mode, and
	let Emacs respond to that character; a count - when
	that many characters had been typed, ditto.  The idea
	was that most people spent most of their time typing
	text at the end of a line.  Emacs would set the bitmap
	to all of the command chars (in effect, all but printing
	characters), and the count to the number of characters
	left on the line (since Emacs will have to do some
	screen management when you reach the end of the line).
	Probably Encore is in a position to make additional
	suggestions, but this seems a reasonable place to start
	(a rough sketch of the bitmap-and-count idea follows).
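
To make the bitmap-and-count idea concrete, here is a rough C sketch of
what the editor side might compute.  It is purely illustrative; I don't
claim this is what Encore or the TOPS-20 Emacs experiments actually did:

    #include <ctype.h>
    #include <string.h>

    /* Hypothetical grant handed to the terminal server: it may echo
       locally until a "break" character is typed or `count' more
       characters have arrived. */
    struct echo_grant {
        unsigned char break_map[32];   /* 256-bit map of break characters */
        int           count;           /* characters it may echo freely   */
    };

    static void set_break(struct echo_grant *g, unsigned char c)
    {
        g->break_map[c >> 3] |= (unsigned char)(1 << (c & 7));
    }

    /* Break on everything but printing characters (i.e. on all command
       characters), and allow local echo only for the room left on the
       current line, since Emacs must repaint at the line boundary. */
    static void emacs_grant(struct echo_grant *g, int column, int line_width)
    {
        int c;

        memset(g, 0, sizeof *g);
        for (c = 0; c < 256; c++)
            if (!isprint(c))
                set_break(g, (unsigned char)c);
        g->count = line_width - column;
    }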

DCP@QUABBIN.SCRC.SYMBOLICS.COM.UUCP (09/28/87)

RCTE sounds a lot like the echo-negotiation protocol that the Multics
Emacs people developed circa 1980.  I'm sorry I can't give you a
reference; they may not have published anything.

There is and has been a display oriented terminal protocol for a long,
long time.  The only reason it needs "dusting off" is because only now
are commonplace terminals and computer systems really able to take
advantage of it.  Most operating systems were (originally) written with
printing terminals in mind, which is why it has been hard to graft this
into existing systems, such as TOPS20.  I'm referring, of course, to the
SUPDUP protocol, RFC 734 of 1977.  If you want graphics, you can do that
too: the SUPDUP Graphics extension, RFC 746 March 1978.  I suggest you
look at these before doing too much wheel reinvention.

PADLIPSKY@A.ISI.EDU (Michael Padlipsky) (09/28/87)

Dave--
   It happens that I had occasion to check with John Day about his
"NVDET" (Network Virtual Data Entry Terminal) stuff several years ago.
He told me to wait for the ISO Virtual Terminal Protocol instead.
So I did that.  And did that.  And am still doing that.
   Actually, some work on NVDET is/was being done in the "DoDIIS"
(DoD Intelligence Information System) arena.  A year or so ago,
I reviewed a draft RFC on the topic by a contractor; still waiting
for the next draft.  Since it was intended for release to the research
community eventually, you might see it before I do if impending
threats to my daily access to the net eventuate.
   In my view, the trouble with NVDET--and TN3270, which somebody
semi-facetiously put forward in a side msg--is that you get wrapped
around the screen-at-a-time axle instead of the char-a-a-t one,
and in the context we're addressing that isn't desirable.  (Not to
deny that there are contexts in which it's necessary to deal with
screen-a-a-t, just to observe that this doesn't have to be one of
them--unless, of course, a solution to char-a-a-t falls out of it
naturally.)
   Are you volunteering to retrieve the RCTE baton/torch, by the way?
   cheers, map
-------

PADLIPSKY@A.ISI.EDU (Michael Padlipsky) (09/29/87)

Never having been at all fond of reinventing wheels, I hastened to
FTP the SUPDUP RFC and print it out at my terminal.  When I got to
"Due to the highly interactive characteristics of both the SUPDUP
protocol and the ITS system [which was the original Server for which
the protocol was developed], all transactions are strictly character
at a time and all echoing is remote" I aborted the printing.  Am I
misunderstanding something--the context is, as far as I know, avoiding
"unnecessary" transmissions--or have you misremembered SUPDUP?
(If the latter, let me be the first to welcome you to Early Middle Age.)
As things stand, I have to deem SUPDUP a reinvention of the travois,
in context; please correct me if I'm wrong.
   cheers, map
-------

slevy@UF.MSC.UMN.EDU (Stuart Levy) (09/29/87)

I've just submitted an RFC that tries to deal with this problem --
it basically is CCITT's X.3 adapted for Telnet, with extensions.
So it does things like local editing, echoing, selective forwarding,
flow control &c, all controllable from the host side.
It -doesn't- try to do as much as RCTE.  It also doesn't try to
provide for general local screen editing, e.g. interpreting & buffering cursor
movements -- that seems like an awful lot to pile onto a poor Telnet protocol.
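
One plausible encoding -- not necessarily the one in the draft -- carries
such X.3-style parameters inside an ordinary Telnet subnegotiation.  A
small C sketch; the option number and parameter codes are placeholders,
and only the IAC SB ... IAC SE framing is standard Telnet (RFC 854/855):

    #include <stddef.h>

    #define IAC 255              /* standard Telnet framing bytes          */
    #define SB  250
    #define SE  240

    #define OPT_X3_LIKE 200      /* placeholder -- NOT an assigned option  */
    #define PAR_ECHO      2      /* X.3-style: 0 = no local echo, 1 = echo */
    #define PAR_FORWARD   3      /* X.3-style: data forwarding condition   */

    /* Build IAC SB <opt> <param,value>... IAC SE into buf and return the
       length.  (No bounds checking, and parameter values of 255 would
       need IAC doubling -- this is a sketch only.) */
    static size_t build_subneg(unsigned char *buf,
                               const unsigned char *pairs, size_t npairs)
    {
        size_t i, n = 0;

        buf[n++] = IAC; buf[n++] = SB; buf[n++] = OPT_X3_LIKE;
        for (i = 0; i < 2 * npairs; i++)
            buf[n++] = pairs[i];
        buf[n++] = IAC; buf[n++] = SE;
        return n;
    }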


I'm leaving the draft RFC for anonymous FTP from uc.msc.umn.edu,
as "staff/rfc.x3", if anyone's interested.

					Stuart Levy
					Minn. Supercomputer Center
					612 626 0211

DCP@QUABBIN.SCRC.SYMBOLICS.COM (David C. Plummer) (09/29/87)

I assumed, incorrectly it turns out, that your requests for local
echoing and display-oriented terminal service were separate.  If your
goal is to minimize network traffic, then indeed, SUPDUP is not for you.
If your goal is to minimize program (e.g., editor) wakeups, then you
can still use SUPDUP as is.  You do need to make a system call that
causes the system, not the program, to do the echoing and return control
to the program on the special characters.  ITS has such a system call.
Multics has such a system call and/or communication with its front end.

andrew@mitisft.UUCP (09/30/87)

	It seems to me there is a middle ground in here, between
char-at-a-time and line- (or screen-) at-a-time, that can be implemented
purely on the server side using the normal telnet protocol (ECHO negotiation).
We are considering implementing this for support of low speed TCP links
(eg async modems), and I'm curious if I'm going to run into some "common
knowledge problem"...

        The basic idea is to have the kernel (virtual terminal driver) inform
the telnet daemon when it *would be* doing immediate character echo, and
not do it.  The daemon turns this information into echo negotiation, which
the client (hopefully) heeds.  This results in speeded echo response in
(for example) un*x "cooked" mode, plus a reduction in packet traffic.

	Has anyone tried this?  The UCB virtual terminal driver has some
hooks in it ("packet mode", currently used for flow control negotiation
with rlogin) that could be used for this (after extension).

Andrew Knutsen
(408)435-3623

dyer@spdcc.UUCP (09/30/87)

I am aware of an effort at BBNCC a few years ago in support of the MINET
(European MILNET) network to handle this via TELNET echo negotiation when
switching between RAW, CBREAK and "cooked" terminal modes.  MINET trunk
lines at the time were 9600 baud, hence the desirability of minimizing
micro-packet traffic induced by character echoing.

I don't remember exactly how successful this was, perhaps because many of
the applications they were using ("vi", InfoMail, "tcsh") preferred to run
in character-at-a-time mode with echoing under the explicit control of the
application.  Cooked mode applications seemed to work OK, as I remember.
-- 
Steve Dyer
dyer@harvard.harvard.edu
dyer@spdcc.COM aka {ihnp4,harvard,linus,ima,bbn,m2c}!spdcc!dyer

JTW@XX.LCS.MIT.EDU (John Wroclawski) (09/30/87)

Hmm. The basic SUPDUP protocol doesn't address the remote echoing
overhead problem at all, and won't do a thing for those paying
per-packet charges, which is how this all started.

Some (long) time ago Richard Stallman designed and implemented an
extension called the SUPDUP Local Editing Protocol, which allowed a
remote display-oriented program, such as EMACS, to arrange to have
most operations performed at the user's local terminal, with
information transferred to the remote end as required. This vastly
reduces the need to send screen update information over the net, and
allows bunching of update information into a few large packets rather
than zillions of one-character ones. Network utilization is improved
in both directions.

This protocol was documented in at least one MIT AI Lab memo. I think
the final version of the AI memo that described SUPDUP included it
also.

SUPDUP LEP is to some extent a superset of the functionality of RCTE,
and could be useful for reducing the load caused by
non-display-oriented programs.

The basic SUPDUP protocol has some very good ideas about how to do
virtual terminals, but could use some updating. A new protocol could
be based on the structure of SUPDUP, maintaining the LEP and input
processing concepts, but with a newer set of capability descriptors in
the startup negotiation and output encodings based on the ANSI X3.64
terminal standards. This protocol would nicely address both the
network efficiency concerns raised here and the problems which come up
using arbitrary window-based emulators as remote terminals.
-------

turkewit@CCV.BBN.COM ("Kenneth A. Turkewitz") (09/30/87)

Steve,
	The MINET experiment was fairly successful, but was very limited
in scope.  The users on the MINET were using NO screen editors, or
anything else that required "special" characters to be noticed right
away.  As a matter of fact, all MINET applications were tailored so that
a response was not needed until a linefeed was seen.  Hence, MINET users
were all able to run in a "line at a time" (i.e. send characters only
on line feed) mode.
	(This only lasted for a short time, due to a minor bug that needed
to be fixed in the BBN O/S kernel or the TELNET server, I forget which.  By
the time it was fixed, nobody really wanted to go back to the line-at-a-time
mode, despite the savings on the network trunks.)
	Interestingly, during the planning and integration of the MINET
project, we were seriously considering using RCTE (or a modification of it,
known as "RACE" [Remote Application Controlled Echoing] in our planning
sessions).  The MINET people were not particularly interested in funding
it, however.
		--Ken

BILLW@MATHOM.CISCO.COM (William Westfield) (09/30/87)

    The basic idea is to have the kernel (virtual terminal driver) inform
    the telnet daemon when it *would be* doing immediate character echo,
    and not do it.  The daemon turns this information into echo
    negotiation, which the client (hopefully) heeds.  This results in
    speeded echo response in (for example) un*x "cooked" mode, plus a
    reduction in packet traffic.

This means that the client must send every character immediately to
the host, and then wait (for an indeterminate time) for a response
from the host that indicates whether the next characters should not
be echoed.  (This is in the worst case, of course.  If you are willing
to assume that when the client is doing local echo, it is doing local
echo of ALL characters, and that it doesn't matter if the client
accidentally echoes some characters it should not have or doesn't echo
some characters it should have, it may indeed help...)

Bill Westfield
cisco Systems
-------

CERF@A.ISI.EDU (10/01/87)

The MCI Mail system, which runs on an X.25 version of the BBN C.30 packet
net, is a line-at-a-time system which forwards on CR and does intra-line
editing at the PAD (TAC). Most users preferred that mode because
of the immediate echoing response. Remote echo mode was simply not
acceptable. Of course, many of the long haul lines were 9.6 rather
than 50 kb/s and this contributed to increased "stickiness" of the
echoing. On the whole, I felt strongly that, for that particular application,
the line-at-a-time mode was best - presuming, of course, that most
of the real text editing was done off-line with a PC and that the
interaction was mostly for preparation of addressees.

Eventually, PC packages like Lotus Express and Desktop Express for
the IBM PC and Apple Macintosh were produced which largely decoupled
the users from any direct interaction with the network. Most users
of these packages prefer not to go back to the direct mode at all,
I believe.

The point of all this is to argue that localizing much of the
interaction which would otherwise require char-by-char network
support seems preferable and in keeping with trends towards more
powerful, local workstations using background processes to handle
network activities.

Vint

backman@interlan.UUCP (Larry Backman) (10/01/87)

In article <8709291511.AA18314@ucbvax.Berkeley.EDU> PADLIPSKY@A.ISI.EDU (Michael Padlipsky) writes:
>Never having been at all fond of reinventing wheels, I hastened to
>FTP the SUPDUP RFC and print it out at my terminal.  When I got to
>"Due to the highly interactive characteristics of both the SUPDUP
>protocol and the ITS system [which was the original Server for which
>the protocol was developed], all transactions are strictly character
>at a time and all echoing is remote" I aborted the printing.  Am I


	[]

	Me too.  SUPDUP has been in the back of my mind for the past year
	as a viable TELNET alternative.  However, examination of the
	spec reveals that it too does remote host echoing.  The product
	that we provide, TELNET through a TCP gateway from a Novell LAN to the
	world, has
	4 hops to go through before a typed character reappears on the
	screen.  Each keystroke on a PC workstation goes across the Novell
	subnet to the gateway,  from the gateway to the remote host, and
	thence back from where it came.  We do all sorts of tricks in the
	PC to limit subnet traffic, buffering, etc., but no matter what
	you do, the remote echo is a killer.

	I am looking for alternatives also.  Ideas? solutions?


					Larry Backman
					Micom - Interlan

mckee@MITRE.ARPA (H. Craig McKee) (10/01/87)

There has been much discussion of SUPDUP and what "we" might do to
minimize the need for remote echoing.  I think "we" are the ARPANET
community.  Can anyone offer assurance that the ISO community, in the
development of the Virtual Terminal Protocol, is equally interested in
minimizing the need for remote echoing?

Regards - Craig

mckenzie@LABS-N.BBN.COM.UUCP (10/02/87)

The ISO VT service/protocol divides the world into synchronous and asynchronous
terminals.  Synchronous terminals don't echo ever.  Async terminals have a
RCTE-like mechanism defined (but perhaps not required).

Alex McKenzie
 

haas%gr@CS.UTAH.EDU (Walt Haas) (10/02/87)

I've been following the discussion about TELNET echoing with some interest.
The problem has long since been solved in the big (public) network world.
A good example of how the solution works is represented by my X.25
implementation for the DEC-20 (RIP, sigh...).  There were two cases worth
distinguishing:

1) Minimum packet charge.  In this case the PAD which was connected to the
   user's terminal did echoing of characters, and forwarded a packet only
   when there were enough characters to fill one, OR the user entered a
   transmission character, OR the user didn't type anything for a while.
   In this case the TOPS-20 system was set for page mode, half duplex
   operation.  The PAD grabbed ^Q/^S to use for terminal flow control.

2) Screen editing.  In this case characters were echoed by the host.
   The PAD forwarded soon after each character was keyed in.  The TOPS-20
   system was set for full duplex, and passed ^Q/^S thru transparently to
   the application (usually EMACS or some such).

I wrote a little command which switched between the two modes by sending
an X.29 packet from host to PAD and, at the same time, switching terminal
modes inside TOPS-20.  With just a little more work this sequence could
have been built into EMACS.
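
In X.3 terms the switch amounts to flipping a few PAD parameters.
Something like the following (parameters 2, 3 and 4 are standard X.3 --
echo, data forwarding characters, and idle timer -- but the particular
values shown are one plausible choice, not necessarily what was
actually sent):

    /* X.3 PAD parameters for the two modes.  Parameter numbers are
       standard X.3; the values are one plausible choice. */
    struct pad_param { unsigned char ref, value; };

    /* Mode 1: PAD echoes, forwards a packet on CR or after ~1s idle. */
    static const struct pad_param line_mode[] = {
        { 2, 1 },      /* echo: on                                */
        { 3, 2 },      /* data forwarding: on carriage return     */
        { 4, 20 },     /* idle timer: 20 * (1/20 s) = one second  */
    };

    /* Mode 2: host (EMACS) echoes, PAD forwards as characters arrive. */
    static const struct pad_param char_mode[] = {
        { 2, 0 },      /* echo: off                               */
        { 3, 0 },      /* no data-forwarding characters ...       */
        { 4, 1 },      /* ... forward after one idle tick (50 ms) */
    };

    /* These pairs would go out in an X.29 "set PAD parameters" message
       on the qualified-data channel, alongside the TOPS-20 mode change. */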

So how did it work?  Great!  I had the pleasure of sitting in New York
running EMACS on UTAH-20 over Telenet, with good response.  Then I could
quickly switch back to mode 1 (the default) for normal TOPS-20 command
processing.

One of the reasons this is hard to do with TELNET is that the TELNET
standard is worded in such a way that you don't have to implement these
functions in order to say you have a standard TELNET implementation.
The CCITT standard for PADs, in contrast, requires that you actually
implement a lot of functionality before you can say you conform.

----------------*
Cheers  -- Walt     ARPA: haas@cs.utah.edu     uucp: ...utah-cs!haas

DISCLAIMER: If my boss knew I was using his computer to spread opinions
            like this around the net, he'd probably unplug my ter`_{~~~

CERF@A.ISI.EDU (10/03/87)

The ISO community, to the extent it works in the public networking
domain, is equally interested in avoiding costly char at a time modes,
in my opinion.

Vint Cerf

alan@mn-at1.UUCP (Alan Klietz) (10/04/87)

In article <244@mitisft.Convergent.COM> andrew@mitisft.Convergent.COM (Andrew Knutsen) writes:
<
<	It seems to me there is a middle ground in here, between
<char-at-a-time and line- (or screen-) at-a-time, that can be implemented
<purely on the server side using the normal telnet protocol (ECHO negotiation).
<We are considering implementing this for support of low speed TCP links
<(eg async modems), and I'm curious if I'm going to run into some "common
<knowledge problem"...
<
<        The basic idea is to have the kernel (virtual terminal driver) inform
<the telnet daemon when it *would be* doing immediate character echo, and
<not do it.  The daemon turns this information into echo negotiation, which
<the client (hopefully) heeds.  This results in speeded echo response in
<(for example) un*x "cooked" mode, plus a reduction in packet traffic.
<
<	Has anyone tried this? 

We modified the Cray-2 UNICOS kernel to signal a modified (kludged)
version of telnetd on a change in tty state.   For example, when
the user types "vi" on the Cray-2, an ioctl(TIOCSETA) call is sent
to the pty driver.  The information in the call is stored in a dummy
kernel tty structure and the telnetd process is signaled.  The telnetd
process wakes up and interrogates the terminal state by issuing an
ioctl(TIOCGETA) on the pseudo-tty.  It picks up the info and says
"humm, this user wants raw mode".  It then re-negotiates the ECHO
option with the client to switch to single character mode.

One problem with this approach is that the change of state is
asynchronous to the I/O.
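
Roughly, the daemon-side check looks like the sketch below.  The
get-terminal-state call and the ICANON/ECHO test reflect the mechanism
described above; everything else is simplified and is not the actual
UNICOS telnetd code:

    #include <termios.h>
    #include <unistd.h>

    #define IAC         255     /* standard Telnet codes (RFC 854/857) */
    #define WILL        251
    #define WONT        252
    #define TELOPT_ECHO   1

    /* Run when the kernel signals a tty-state change: look at the pty's
       termios and renegotiate ECHO with the client to match. */
    static void check_tty_mode(int pty_fd, int net_fd, int *remote_echo)
    {
        struct termios t;
        unsigned char cmd[3] = { IAC, 0, TELOPT_ECHO };
        int want_remote;

        if (tcgetattr(pty_fd, &t) < 0)  /* the TIOCGETA ioctl, POSIX style */
            return;

        /* Raw/cbreak (no ICANON) or echo-suppressed (passwords) means the
           remote side must control echoing; cooked mode with ECHO set
           means the client may echo and edit locally. */
        want_remote = !(t.c_lflag & ICANON) || !(t.c_lflag & ECHO);

        if (want_remote != *remote_echo) {
            cmd[1] = want_remote ? WILL : WONT;
            (void)write(net_fd, cmd, sizeof cmd);   /* IAC WILL/WONT ECHO */
            *remote_echo = want_remote;
        }
    }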

--
Alan Klietz
Minnesota Supercomputer Center (*)
1200 Washington Avenue South
Minneapolis, MN  55415    UUCP:  ..rutgers!meccts!mn-at1!alan
Ph: +1 612 626 1836              ..ihnp4!dicome!mn-at1!alan (beware ihnp4)
                          ARPA:  alan@uc.msc.umn.edu  (was umn-rei-uc.arpa)

(*) An affiliate of the University of Minnesota

DCP@QUABBIN.SCRC.SYMBOLICS.COM (David C. Plummer) (10/05/87)

    Date: 3 Oct 1987 06:15-EDT
    From: CERF@A.ISI.EDU

    The ISO community, to the extent it works in the public networking
    domain, is equally interested in avoiding costly char at a time modes,
    in my opinion.

Opening the door to a more unhindered future question: Assuming
"costly" includes money, when will public networking come up with a
deterministic usage fee so that researchers can budget their
communications costs instead of fretting?  I imagine most researchers
want to spend money on research and correspond with colleagues and know
from the outset how much each will cost; having to worry about variable
communications charges that they possibly don't understand or care to
understand is probably an undesired and recurring distraction.

bzs@BU-CS.BU.EDU (Barry Shein) (10/06/87)

From: David C. Plummer <DCP@QUABBIN.SCRC.Symbolics.COM>
>Opening the door to a more unhindered future question: Assuming
>"costly" includes money, when will public networking come up with a
>deterministic usage fee so that researchers can budget their
>communications costs instead of fretting?  I imagine most researchers
>want to spend money on research and correspond with colleagues and know
>from the outset how much each will cost; having to worry about variable
>communications charges that they possibly don't understand or care to
>understand is probably an undesired and recurring distraction.

I'd like to underscore this point; it's critical. This was the biggest
initial design goal which motivated the Cypress network project: fixed
and predictable costs from month to month (that is, not based on data
flow).

It's critical in more ways than one. When the cost is based on
per-packet (or whatever) one can waste money using the network. When
it's flat rate one can only waste money by not using the network. The
distinction is important.

One might argue that this would just encourage irresponsibility, but
in reality the former just encourages irresponsibility by those who
can bury their costs at the expense of those who cannot. I assume any
common denominator of price will be a mere bagatelle to many folks
anyhow. There should be other ways to control irresponsibility besides
mere chargeback (eg. limiting bandwidth into the network.) I suppose
the question is whether one sees the network as infrastructure or a
commodity.

It also, of course, encourages network access based purely upon
political clout within an organization (ie. the managers will limit it
to themselves, the rats always guard the cheese...) I suppose whether
or not this would be a negative factor is subject to discussion.

Predictability is critical within a University context (and, I
suspect, other business situations.) I can get statements from any
number of bean-counters around here (I collected these verbally during
the initial Cypress discussions) that they would far rather commit to
(eg) $500 a month than a varying cost of $300-$700 per month which
would *probably* average out to $500. Just keeping tabs on whether
some change in behavior has jumped that to $1000/mo involves staff
time better placed at the vendor's end (at which point they could
raise their flat fees which, I assume, would reflect average usages
rather than simply my singularities, they have more to work with to
respond to the situation other than simply imposing little rules.)

The question of course arises "what about a small organization that
truly believes they would benefit from per-quantum charges and feel
they are subsidizing the heavier users?" Well, for one thing other
adjustments could be made but more importantly one has to be able to
show that the economy of scale is working in general and, as I
believe, that the per-quantum costs would end up costing the smaller
user more (if rates are here, could bulk-rates be far behind? etc.)  I
suspect a sound financial argument could be made that the small user
is benefitting in perhaps less obvious ways (eg. large users would
tend to have multiple (separately charged for) connections and provide
a stable revenue base which is what it takes to re-tool
infrastructure, small users would probably tend to come and go, it's a
two-way street.)

	-Barry Shein, Boston University

CERF@A.ISI.EDU (10/10/87)

There hasn't been much pressure from the data communication users for
flat fee arrangements. There are, of course, leased or dedicated
circuits and flat rate fees in local calling areas (voice).

I suggest that you would find more concrete answers if you went to
one or more public carriers to ask about options for flat rates.

Vint

rick@SEISMO.CSS.GOV (Rick Adams) (10/11/87)

I have been able to negotiate a flat rate per hour (no kchar charges)
with Tymnet. They require some large monthly minimums, but are
quite willing to talk about a flat hourly charge.

I believe Compuserve has a flat monthly rate.  You pay by the
number of simultaneous connections that you want permitted. It's something
like $750 per connection with a 6 connection minimum.

Flat rates are available. You have to know to ask for them and
be prepared to haggle over the eventual rate.

--rick