[comp.protocols.iso] User Interfaces in ISORM, X and OSI, RM should expand in middle ?

craig@gpu.utcs.utoronto.ca (Craig Hubley) (06/22/89)

I have a comment on X in OSI, and on the reasonable place of user interfaces
in ISO standards in general, so skip this next bit if X isn't your bag:

In article <33332@bu-cs.BU.EDU> kwe@buit13.bu.edu (Kent England) writes:
>	If X was designed without reference to the OSI model,
>including ASN.1, all the Application Layer Service Elements,
>Presentation, and Session, what is the use of trying to graft the
>model on after the fact?
>...
>
>	It just sounds awfully peculiar to me.  I thought the model
>was for designing new systems, not explaining old ones.  Never occurred
>to me that that was what the OSI model was for, universal explanation
>and interpretation.  :-)

For software whose internals have not yet been frozen in stone (by which I
mean that no massive investment has yet been made in its peculiarities - such
as burning them into hardware), reforming it to comply with OSI seems more
than reasonable, if only to ensure that what is today a monolithic and
inflexible beast (like X) might someday accommodate some layering and modes
of operation that are impractical or unthought-of today.

However, it seems to me that in general the 'upper layers' of the ISORM will
not be the upper layers forever.  Clearly the next steps, once one has
defined 'session', 'presentation', and 'application', are to go higher.
After all, what ISO calls an 'application' is what a present-day *operating
system* calls an application, not what a human thinks of as an application -
unless they are too far gone in hacking to notice that they aren't really
doing anything... :-)

File transfer, mail, distributed databases and file systems, and remote
interactive sessions will have to be taken for granted someday in order for
totally interoperable, user-configured systems to work.  That day isn't going
to be all that far off.  Protocol layers to deal with 'hot links' and similar
facilities - like HP's NewWave, some of the things defined in Apollo's (now
HP's) NCS, or indeed the higher layers of X - are going to be formalized and
standardized so that all applications can use them someday.  Thus the 'user
interface' standards should be defined not as 'layers 6 and 7' but as 'the
top layers', leaving room for expansion in the middle.

Imagine the ISORM, circa 2000, in this fanciful model:

1989:                   2000:
                        (user interfaces and proprietary applications)

                        Control (abstracted user and other control actions)
                        Manifestation (presentation of the model to the user)

(user interfaces)       Application (control and update response, processing)
(and proprietary)       Hot-Links (updates to dynamic regions and processing)
                        Interchange (one object type = one access protocol)
Application ->becomes-> Translation (between presentations, for old systems)
Presentation
Session
Transport               (remain the same, of course)
Network
Data-Link
Physical
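
To make the stack above concrete, here is the circa-2000 column written out
as a C enumeration.  This is purely illustrative: the layer names are mine,
not any standard's, and Control really sits beside Manifestation rather than
strictly above it (more on that below).

/* The proposed circa-2000 stack, lowest layer first (illustrative only). */
enum extended_layer {
    LAYER_PHYSICAL,
    LAYER_DATA_LINK,
    LAYER_NETWORK,
    LAYER_TRANSPORT,
    LAYER_SESSION,
    LAYER_TRANSLATION,    /* between presentations, for old systems    */
    LAYER_INTERCHANGE,    /* one object type = one access protocol     */
    LAYER_HOT_LINKS,      /* updates to dynamic regions and processing */
    LAYER_APPLICATION,    /* control and update response, processing   */
    LAYER_MANIFESTATION,  /* presentation of the model to the user     */
    LAYER_CONTROL         /* abstracted user and other control actions */
};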

In this example, the Translation layer ensures that all data is delivered in
a single set of formats to the Interchange layer, which can then exchange
data between locations independently of its original format.  Terminal
emulation and file-transfer protocols are implemented here, each delivering
a common data format up to the Interchange layer.

The Interchange layer adds a set of operations to that format, making it a
formal data type or object.  Now both the data and the manipulations allowed
on it are standardized.
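
Again as a sketch with invented names, 'one object type = one access
protocol' might amount to a canonical representation bound to a fixed table
of operations:

struct interchange_ops {                /* the standardized manipulations */
    int  (*read)(void *data, char *buf, long max);
    int  (*write)(void *data, const char *buf, long len);
    long (*length)(void *data);
};

struct interchange_object {
    const char *type_name;              /* e.g. "text", "raster", "sheet" */
    void *data;                         /* the canonical representation   */
    const struct interchange_ops *ops;  /* the one access protocol for it */
};

Any peer that recognizes the type name can then use the object through the
same operation table, whatever machine or format it originally came from.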

The Hot-Links layer matches up requests for updates with newly updated objects.
This includes things like dynamic regions, hypertext capabilities, etc.
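
Continuing the same invented names, the matching step could be as simple as
a table of registered interests, each called back when its object changes:

struct interchange_object;       /* from the Interchange sketch above */

#define MAX_LINKS 64

struct hot_link {
    const struct interchange_object *source;              /* watched object */
    void (*on_update)(const struct interchange_object *); /* its watcher    */
};

static struct hot_link links[MAX_LINKS];
static int n_links;

/* Called when an object has just been updated: every dynamic region or
   hypertext anchor registered against it is told about the new version. */
static void object_updated(const struct interchange_object *obj)
{
    int i;
    for (i = 0; i < n_links; i++)
        if (links[i].source == obj)
            links[i].on_update(obj);
}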

The (new) Application layer itself has to receive and respond to requests
from above and below, keeping the model it maintains up to date.  By
'application' I mean what 'Joe Business' thinks of as an application -
mapping controls from the user or shell, and updates from the architecture
below, into the model used in the actual human problem-solving.  This could
also be called the 'model' layer, but I've left the name as is to indicate
that it acts somewhat like ISO layer 7.
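
A minimal sketch of that loop, still using invented names: fold whatever
arrives from above (Control) and from below (Hot-Links) into one model, then
let Manifestation redraw it.

struct interchange_object;           /* from the Interchange sketch above */

struct model {
    long revision;                   /* bumped on every change */
    /* ... the problem-domain state Joe Business actually cares about ... */
};

enum event_kind { EV_CONTROL, EV_UPDATE };

struct event {
    enum event_kind kind;
    int action;                             /* meaningful for EV_CONTROL */
    const struct interchange_object *obj;   /* meaningful for EV_UPDATE  */
};

static void application_step(struct model *m, const struct event *ev)
{
    switch (ev->kind) {
    case EV_CONTROL:      /* a request from the user or shell, via Control */
        /* apply ev->action to the model here */
        m->revision++;
        break;
    case EV_UPDATE:       /* a fresh object, from Hot-Links below */
        /* fold ev->obj into the model here */
        m->revision++;
        break;
    }
    /* Manifestation is then told that the model changed, so it can redraw. */
}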

The upper layers, Control and Manifestation, actually exist side by side.
Control defines a stream from the user interface and batching (shell) system
into the application, and Manifestation defines a stream from the application
out to the user interface or batching system.
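
One last sketch under the same assumptions: two narrow interfaces, one
carrying abstracted actions inward and one carrying the model's appearance
outward, with neither caring whether the far end is a window system or a
shell script.

struct model;                   /* from the Application sketch above       */

struct control_stream {         /* user interface or shell --> application */
    int (*next_action)(void *source, int *action_out);
};

struct manifestation_stream {   /* application --> user interface or shell */
    int (*present)(void *sink, const struct model *m);
};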

Note that there are already things that work at what I've called the
Translation level (NCS NIDL, XNS Courier, remote procedure call mechanisms,
and even applications which accept multiple data formats from other programs)
and at the Interchange level (standards like TIFF, Amiga IFF, the Mac
Clipboard, NewWave, and ODA to some extent).  Some things have started to
emerge at the Hot-Links level (NewWave, Xanadu (if it ever shows up), Amiga
REXX, any good IPC mechanism, and even Linda, which is a really nice IPC
mechanism).  With parallelism, IPC and RPC mechanisms will have to become
indistinguishable anyway.

User-interface standards presently occupy the Control layer, defining how the
Application receives its information from the user (NewWave, MS-Windows and
PM, OSF/Motif, Open Look, the Macintosh, X), but most of them use their OS's
batching system or shell/script language rather than defining their own.
They can also define the Manifestation of the model into the user's
perceptual space, though the UI and its definition of 'control' seem to be
abstracted more and more, allowing Manifestations such as graphics standards
(RenderMan, Display PS, etc.) to be independent of the system that puts up
the window frame.

This Manifestation layer will probably get more abstract with the
introduction of systems like Ardent's DORE, for scientific visualization.
Certainly there are enough people working on similar tools, including UIMSs
for those 'messy' layers in between, which is where the real research hot
spots are right now.  As ordinary users start configuring their own
applications, these will be very necessary - how else will a user be able to
make an ISDN phone call? :-)

Industry will demand Control and Manifestation standards for the simple
reason that companies don't want to train people too many times as the UI is
moved up and new layers emerge from above.  They will also want some measure
of backwards compatibility, which is understandable given the investments
they must make in times of rapid technological change.  So far as I can see,
the kinds of things that have to be displayed and controlled by users haven't
changed much.  Otherwise old Unix applications (like mail and news :-))
wouldn't be much use.

Unix's 'no UI is a good UI' philosophy makes it flexible enough to support
many UIs and applications, with the bonus of a good batch system (the shell)
that can do anything the applications do under remote, script, or another
application's control - witness the contrast with the Mac, which provides no
batch system.  So the wisdom of having a control protocol (in Unix, the
interface from the shell) seems to be selling a lot of systems these days.

Any comments on this, or on the above 'extended' reference model?  Or on the
appropriateness of my examples?  Or a nice new suit with long sleeves?

	Craig Hubley
	Craig Hubley & Associates	-------------------------------------
	craig@gpu.utcs.utoronto.ca	"Lead, follow, or get out of the way"
	craig@gpu.utcs.toronto.edu	-------------------------------------
	mnetor!utgpu!craig@uunet.UU.net  
	{allegra,bnr-vpa,cbosgd,decvax,ihnp4,mnetor,utzoo,utcsri}!utgpu!craig

	DISCLAIMER:  My associates don't know about this.