[comp.protocols.tcp-ip] Workstations, SUPDUPl, Windows...

cerf@ISI.EDU (10/12/87)

Please blame me and not Mr. Mankins if you object to injection of this
message into the TCP-IP mailing list. His comments suggest that we 
ought to reconsider the function of the workstation and PC in relation
to mainframes and interconnecting (inter)nets. Where is the proper dividing
line between workstation and mainframe? How can context be satisfactorily
maintained between these two over connections of varying capacity and
delay?

Vint
------
	
Begin forwarded message
Received: from BFLY-VAX.BBN.COM by A.ISI.EDU with TCP; Sun 11 Oct 87 22:02:24-EDT
Date: 11 Oct 87 21:05:17 EDT (Sun)
From: David Mankins <dm@bfly-vax.bbn.com>
To: CERF@a.isi.edu, info-futures@bu-cs.bu.edu
Subject: SUPDUP, window systems, slow-links, few packets, fast response 
Return-Path: <dm@bfly-vax.bbn.com>
Sender: dm@bfly-vax.bbn.com


[Mr. Cerf, I'll follow JR's lead and let you decide if this is worth
posting to the TCP/IP mailing list.]

I hesitate to prolong the SUPDUP discussion on the TCP/IP list, nor do I
want to light the flames of window-systems religious wars.  However, in
discussing window systems, people have seemed to assume that a network
window protocol has to work by shipping bulk packages of pixels and
mouse movements around (as X v.10 did).  As has been pointed out, this
is mostly impractical on the ARPANET, or across a serial line (the
technologies of the past).

But there is an alternative approach: that taken by Sun's
News, for example.  News ships a high-level description of a window
across the network (in the case of News, the description is a
Postscript program).  I don't know how it deals with mouse-motions and
things like menus, but I understand that those are handled by the
computer on your desk, too.  There are reported to be implementations
of News on personal computers communicating with a workstation over a
serial line (the technologies of the impoverished present).

Well, not only the technologies of the impoverished present.  At last
year's X forum at MIT, one person asked, ``Is my Cray going to have to
pause in calculating a Fourier transform because someone moved a mouse
across their desk?''.  There are some very rich people who were
concerned about the micro-management of pixels pushed onto the host by
the early X.  In all fairness, I should remark that X v.11 gives the
small, cheap computer with the display lots of information that let it
save the big, expensive computer (and the network that joins them)
from this kind of micro-management.

In a way, the News solution is like Stallman's remote-editing
protocol: put the responsiveness and user-interface in the $700
hardware on the user's desk, and the computing and file-storage with
the big machine across the country (or across the hall).  (Stallman's
remote editing protocol description is very good, by the way, I second
the recommendations it has received on the TCP-IP list.  Time still
may not have passed it by.)

Another approach to the remote-editing/slow link problem was explored
by a DEC intern at Project Athena.  He looked into combining
Stallman's remote-editing protocol with an adaptive encoding
data-compression scheme for editing across slow links.  I think the
computation required to do the data-compression cost more than the
transmission time saved.  Either that or the summer ended before he
finished his work -- I don't recall anything coming out of it.

Yet another approach to the slow-link/cheap but sophisticated hardware
problem I've seen is this thing from Apple called ``Macworkstation''
or ``Machostconnection'' or something like that.  This was a
serial-line remote procedure call protocol that permitted your
mainframe to invoke Macintosh routines to do menu and icon
hunt-and-click computing -- either allowing you to debug your Mac
programs in the rich debugging environment of your mainframe, or
conceivably allowing your mainframe program to have a Macintosh
user-interface.  I think there are some third-party products like this
now (took 'em long enough).

(dm)

          --------------------
End forwarded message
		

markl@ALLSPICE.LCS.MIT.EDU (10/14/87)

I've been working on mainframe-processing-vs-workstation-processing
issues for a couple of years now, in the form of the Pcmail
distributed mail system.  Everyone receives their mail at a central
point and reads and modifies it over the network at a workstation.
The question is, how much mail processing is done on the mainframe and
how much at the workstation?  This becomes especially interesting if
your workstations have wildly differing capabilities and some are
unable to perform sophisticated operations like searches or sorts on
their own.  A related problem is how the workstation manages to
communicate efficiently with the mainframe over a 1200 BPS network
connection.  Dave Clark and I spent a fair amount of time designing a
set of operations that minimised packet traffic over slow links, and
placed a minimal computing burden on the mainframe, while not placing
too much of a computing burden on resource-poor workstations.  

markl

Internet: markl@ptt.lcs.mit.edu

Mark L. Lambert
MIT Laboratory for Computer Science
Distributed Systems Group

----------