[comp.windows.x] Summary of UIST '90

ellis@osf.ORG (01/12/91)

                    Summary of UIST '90
                    -------------------

                      Ellis S. Cohen
                  Open Software Foundation


UIST, the Annual Symposium on User Interface Software and Technology,
is the flagship conference for those interested in constructing
innovative systems that explore ideas which are at the heart of
evolving user interfaces.

UIST has always been a cozy one-track conference, which has helped to
promote a sense of community among the researchers and practitioners
in the field.  UIST '90, following in this tradition, was held on
October 3-5, at a spectacular, and otherwise nearly deserted ski lodge
in Snowbird, Utah.  It was a setting with breathtaking views and walks
(literally, given its elevation), and with a heated pool and hot tub
out under the stars.

The technical part of the conference focused on three main areas --
(1) Access, Visualization and Manipulation, (2) User Interface
Toolkits, and (3) Automatic Generation of User Interfaces.  This
summary provides a tour of the papers, talks, and panels in each of
these areas.

There has been an emphasis at UIST on exploration through the
construction of working systems.  That emphasis is carried over to
this summary, which identifies the talks by the name of the systems.
The matchup between the systems and the papers which discussed them
can be found at the end of the report.

Access, Visualization and Manipulation
--------------------------------------

As "ordinary" user interfaces become more standardized, attention is
increasingly turning towards hardware and software for accessing,
visualizing, and manipulating information within complex environments.
Both the opening and closing addresses, as well as two of the papers,
described innovative and exciting developments in this area.

Stu Card delivered the opening address, describing work done in
conjunction with Jock Mackinlay and George Robertson at Xerox PARC on
Information Workspaces.  When users peruse a large information
structure, its entire contents, with all their details, cannot be
visible at once.  At best, the user can see the details of a small
part, with enough additional contextual cues to place it within the
larger structure.

The problem is that when the user moves their attention to another
part of the structure, the instantaneous change from the old view to a
view focused on the new part can be disorienting.  Stu showed how
their system, the Information Visualizer, animates the change in view,
preventing this disorientation and giving the user a sense of embodied
navigation within the structure.

Stu also showed off a number of new presentations which are
particularly well suited to animation -- the information wall,
designed for viewing long 2D "walls" of information, and the cone and
cam trees, for viewing hierarchical structures.  The resulting
animations were both clear and visually exciting.

The paper on n-Vision also focused on visualization, this time, of
n-dimensional surfaces.  We all understand how to display and view 2D
graphs and 3D surfaces on a computer screen, but displaying
higher-dimensional surfaces is more of a challenge.  n-Vision uses a
"worlds within worlds" metaphor for viewing such surfaces.  For
example, to view a 6D surface, the user would move a "box" around
within a 3D space.  The origin of the box would fix three coordinates.
The box itself would display the 3D surface determined by the
remaining three coordinates.  The box can be scaled and rotated about
its origin.

As the user moves the origin of the box, the surface displayed by the
box changes to reflect the coordinates of the origin.  Multiple boxes
can be instantiated simultaneously, each at different coordinates, to
allow static comparisons of their surfaces.
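
The "worlds within worlds" idea reduces to slicing: fixing three of
the six coordinates at the box's origin leaves a three-variable
surface for the box to display.  A minimal sketch (all names and the
toy surface are illustrative, not from the paper):

```python
# Hypothetical sketch of n-Vision-style "worlds within worlds" slicing.

def make_slice(f, origin):
    """Fix the first three coordinates of a 6-variable function f at
    the box's origin; return the 3-variable surface the box displays."""
    a, b, c = origin
    return lambda u, v, w: f(a, b, c, u, v, w)

# A toy 6D surface standing in for real application data.
def f(x1, x2, x3, x4, x5, x6):
    return x1 * x4 + x2 * x5 + x3 * x6

# Moving the box's origin just rebinds the fixed coordinates.
box = make_slice(f, (1.0, 2.0, 3.0))
print(box(1.0, 0.0, 0.0))   # 1.0
```

Instantiating several boxes at different origins, as the paper
describes, is then just calling `make_slice` more than once.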

n-Vision provides tools for operating on boxes, such as a magnifying
glass and various objects which can be intersected with the surface
displayed in a box.  The VPL DataGlove is used to interact with the
system; various gestures are used to manipulate tools and boxes.
Stereoscopic imaging provides the user with a realistic 3D view.

The underlying structure of n-Vision is similar to a 3D version of the
X Window System(tm).  It defines a hierarchy of nested boxes; the
coordinate system of each may (unlike X) be arbitrarily transformed
relative to its parent.  Mapping, exposure, and 3D Enter/Leave
events, as well as grabbing, are analogous to X.

Tailor is a system which allows users with cerebral palsy, and more
generally those with limited strength and dexterity, to accurately
manipulate one- and two-dimensional control devices.  One application
uses a new method for synthesizing sounds from just two coordinates,
representing the vertical position of the tip of the tongue and the
horizontal position of the base of the tongue; manipulation of a
two-dimensional control device can then be used to synthesize crude,
but understandable, continuous speech.

By attaching a magnetic tracking device to some part of the user's
body, and asking the user to move freely, Tailor can determine the
range of positions and orientations available to the user.  For users
who exhibit rocking motions, a pair of trackers can be used to produce
consistent relative positions and orientations.  Points within the
range are then mapped onto a one-dimensional scale (currently by a
therapist, eventually more automatically) -- the mapping is "tailored"
to the user.  The user then learns how to control the one-dimensional
device based on their motions.

If a user has adequate dexterity, then manipulation of a
two-dimensional device can be controlled by mapping the available
range of motion onto two dimensions.  Otherwise, a user will need to
move two different parts of the body, tracked separately, and combined
in software.
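
The essence of the calibration step is mapping whatever range of
motion the user actually has onto a normalized control value.  A
hedged sketch (the real mapping is tailored by a therapist; this
min/max normalization is an assumed simplification):

```python
# Illustrative Tailor-style calibration: observe the user's free
# movement, then map raw tracker readings onto a 1D control in [0, 1].

class Calibrated1D:
    def __init__(self, samples):
        # samples: tracker positions recorded while the user moves freely
        self.lo = min(samples)
        self.hi = max(samples)

    def control(self, reading):
        """Map a raw tracker reading into the user's own range [0, 1]."""
        span = self.hi - self.lo
        if span == 0:
            return 0.0
        return min(1.0, max(0.0, (reading - self.lo) / span))

c = Calibrated1D([2.0, 5.0, 3.5])
print(c.control(3.5))   # midpoint of this user's range -> 0.5
```

A two-dimensional device would use one such mapping per tracked axis
or body part.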

Elaine Rich, Director of the Artificial Intelligence Lab at MCC, gave
the closing address, describing the ongoing work at MCC on using
Natural Language for Natural Access to Text Data Bases.  Elaine's
argument was somewhat provocative for the UIST audience; she argued
that direct manipulation user interfaces can only get you so far --
effective queries in text data bases will have to use natural
language.

The AI Lab at MCC has, for some time, been engaged in a project to
build a complete knowledge base of the concepts that we humans
ordinarily learn as we grow up.  A language understanding system
requires such a knowledge base if it is to have a "deep semantic
understanding" of text that it processes.  Elaine's group is working
towards a system that will use this knowledge base to build a semantic
network reflecting its understanding of each article it processes in
the text database, and to build a semantic network for any query.
Matching the semantic network of the query against those of the
articles will someday be the basis of very accurate text retrieval.

In the short term, they have built a system which does understand a
natural language query, and reformulates it as a keyword query for a
standard text retrieval engine, resulting in high recall, but low
precision.  They can then use the knowledge base to generate more
specific queries, to get high precision but low recall.  The user can
mix these techniques, and browse and select from the resulting
structure of concepts to obtain the desired degree of precision and
recall.  This same technology is being used for hypertext generation,
both to find appropriate nodes, and to build links.

Toolkits
--------

Toolkits are the backbone of standardized user interfaces; they
provide a standard library upon which graphically-based applications
can be built.  Two of the papers this year described important issues
in toolkit design aimed at simplifying the coding of certain portions of
user interfaces.  Three other papers dealt with toolkits designed for
special environments, and two panels addressed issues dear to the
heart of toolkit designers, all described below.

Typical object-oriented toolkits have not generally used objects for
low-level drawable components such as individual characters because of
the amount of data space taken up by each drawable component.  Glyphs,
incorporated into the latest version of Interviews, are just such
objects.  Their use yields cleaner, smaller, and more reusable code,
and is part of the general thrust of Interviews to provide support
for building the "insides" of components.

Glyphs do not themselves store most of the information needed to
render them, such as their coordinates; this information is passed to
a glyph when it needs to be drawn, typically by its parent, which
calculates, rather than stores, it.  This further allows glyphs,
particularly glyphs for characters, to be shared, significantly
reducing data space.
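
This is the flyweight idea: one shared object per character, with
position supplied by the parent at draw time.  A minimal Python
stand-in for the C++ original (class and method names are
illustrative):

```python
# Flyweight sketch in the spirit of Glyphs: shared, positionless
# character objects; the parent computes coordinates when drawing.

class CharGlyph:
    """One shared instance per character; stores no position."""
    _cache = {}

    def __new__(cls, ch):
        if ch not in cls._cache:
            obj = super().__new__(cls)
            obj.ch = ch
            cls._cache[ch] = obj
        return cls._cache[ch]

    def draw(self, canvas, x, y):
        # Coordinates are supplied by the caller, not stored.
        canvas.append((self.ch, x, y))

canvas = []
for i, ch in enumerate("abba"):
    CharGlyph(ch).draw(canvas, i * 8, 0)   # parent computes x

print(CharGlyph('a') is CharGlyph('a'))   # True: glyphs are shared
```

However long the text, each distinct character costs one object.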

The Artkit toolkit deals with another issue in toolkit design -- that
of flexible event dispatching.  Object-oriented toolkits typically use
an input event dispatching model in which the destination object for
the event is first determined, and then that object determines the
action to be taken.  This model makes it cumbersome, even for a
toolkit in which a parent can "grab" an event before its child does,
to support interaction techniques such as gravity snapping and
gesturing, in which the interaction may begin outside of the desired
destination object.

Artkit attacks this problem by using a "programmable" dispatcher.
The dispatcher maintains an ordered list of dispatch agents,
which examine each input event in turn, and decide whether to consume
it or pass it along to the next agent.  Each agent can recognize
a separate interaction technique, and at appropriate times, send
specific high-level events on to the appropriate destination.
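
The dispatcher itself is a simple chain: each agent in order gets a
chance to consume the event.  A minimal sketch with illustrative
agents and event shapes (none of these names are from the paper):

```python
# Sketch of an Artkit-style programmable dispatcher: an ordered list
# of agents; each may consume an event or pass it to the next agent.

class ClickAgent:
    def dispatch(self, event):
        if event["type"] == "click":
            return f"click at {event['pos']}"   # consumed
        return None                             # pass along

class DefaultAgent:
    def dispatch(self, event):
        return f"unhandled {event['type']}"

class Dispatcher:
    def __init__(self, agents):
        self.agents = agents

    def dispatch(self, event):
        for agent in self.agents:
            result = agent.dispatch(event)
            if result is not None:
                return result
        return None

d = Dispatcher([ClickAgent(), DefaultAgent()])
print(d.dispatch({"type": "click", "pos": (3, 4)}))   # click at (3, 4)
```

A gesture or snapping agent would sit early in the list, accumulating
low-level events and emitting a high-level event once it recognizes
its technique.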

QUICK is a toolkit designed to be used by "non-programmers" for rapid
prototyping of non-standard applications.  The system does not support
inheritance.  On the other hand, each object in the system can have
its own bitmap, produce sounds and animations (recorded by rehearsal),
determine its spatial relationship to other objects, and specify the
actions to occur when the object is clicked-on, double-clicked,
dragged, and dropped.  It appears well suited to producing cartoon-like
applications for a wide range of classroom uses.

VUIMS is a toolkit developed by Wavefront Technologies for their
visualization products.  Low-level objects are combined to form
composite objects which embody a specific look and feel.  Objects
communicate by passing messages which are interpreted by the objects
to invoke methods.

TGE, the Tree/Graph Editor, is a family of extensible base editors for
trees and for directed and undirected graphs.  Base classes,
implemented in C++, for the types node, node_picture, arc, and
arc_picture can be subclassed to provide domain specific semantics and
visuals for nodes and arcs.  By subclassing the base classes graph,
graph_picture, and graph_editor, additional user interfaces, layout
algorithms, and interactions with other tools can be implemented.
Using TGE, a variety of specialized graph editors have been rapidly
developed, including a software module interconnection editor, a
database schema editor, and a user interface generator.
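
The extension pattern is ordinary subclassing of the base types.  TGE
is implemented in C++; the following Python stand-in (with assumed
method names) shows the shape of a domain-specific extension such as
the database schema editor:

```python
# Illustrative sketch of TGE-style extension by subclassing the base
# node and node_picture types (Python stand-in for the C++ original).

class Node:
    def __init__(self, label):
        self.label = label

class NodePicture:
    def render(self, node):
        return f"[{node.label}]"

# Domain-specific subclasses, e.g. for a database schema editor.
class TableNode(Node):
    def __init__(self, label, columns):
        super().__init__(label)
        self.columns = columns

class TableNodePicture(NodePicture):
    def render(self, node):
        return f"[{node.label}: {', '.join(node.columns)}]"

pic = TableNodePicture()
print(pic.render(TableNode("users", ["id", "name"])))   # [users: id, name]
```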

To allow the accumulated wisdom of toolkit design to be disseminated,
a panel with the designers of the three most widely used standard
toolkits discussed the lessons they've learned, moderated by Jarrett
Rosenberg.  Paul Asente, co-designer of the Xt Intrinsics, Andy Palay,
co-designer of the Andrew Toolkit, and Mark Linton, the designer of
Interviews, described the successful and unsuccessful features of
their respective toolkits.  The proceedings contain excellent brief
statements from each of the panelists.  The topics addressed included
subclassing, geometry management, event management, graphics support,
choice of implementation language, printing, and the relationship to
the underlying window system.  The primary additional piece of wisdom
to prospective toolkit designers imparted at the session itself was
the unanimous recommendation, made only partly in jest, that such
activity would best be avoided.

Another panel, addressing the question of how Operating Systems might
provide better support for user interfaces, was moderated by William
Jones.  The panel consisted of Peter Williams, an architect of HP's
NewWave(tm) system, George Robertson, a co-designer of Accent, the
precursor of MACH, and of ZOG, one of the earliest hypertext systems,
Vania Joloboff, the architect of FENIX(tm), an early window management
system integrated into the UNIX(tm) kernel, and project manager of
OSF/Motif(tm), and Mike Conner, the technical manager of the Andrew
project in the early 80's.  Again, excellent brief statements from the
panelists may be found in the proceedings.  The topics addressed
included concurrency, real-time scheduling, persistent storage, data
sharing and interchange, and internationalization.

Automatic Generation of User Interfaces
---------------------------------------

Even with standardized toolkits, developing the user interface for an
application entails a significant amount of work, both in design and
coding.  It is natural to want to automate this work, and quite a few
papers dealt with systems that automated various aspects of the user
interface.

Suite constructs a user interface for an application strictly from an
annotated file of the program declarations in C.  Hierarchically
organized data is viewed using a structure editor, and the annotations
control aspects of the appearance.

Suite also uses annotations to control when user edits should be
syntactically or semantically checked, or provided to the underlying
application.  In addition, Suite provides support for persistent
objects; annotations indicate which variables are mapped to persistent
objects, and Suite controls locating, loading, and updating them.

Humanoid takes a "template-oriented" approach in determining how
to map an application data structure onto a hierarchy of widgets (i.e.
user interface objects).  A template describes how to map a subgraph
of the data structure to a subtree of the widget hierarchy.  Humanoid
recursively matches templates in its library against a data structure
to produce an interface for it.
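
The recursion is easy to picture: try templates against the data, and
let the chosen template's expansion recurse into substructure.  A toy
sketch using simple type tests in place of Humanoid's real matching
machinery (all names illustrative):

```python
# Hedged sketch of template-based mapping: recursively match templates
# against a data structure to build a widget tree.

def matches_list(data):  return isinstance(data, list)
def matches_num(data):   return isinstance(data, (int, float))
def matches_text(data):  return isinstance(data, str)

TEMPLATES = [
    # (predicate over the data, expansion into a widget subtree)
    (matches_list, lambda d, build: ("column", [build(x) for x in d])),
    (matches_num,  lambda d, build: ("slider", d)),
    (matches_text, lambda d, build: ("label", d)),
]

def build(data):
    for predicate, expand in TEMPLATES:
        if predicate(data):
            return expand(data, build)   # recurse via the builder
    return ("unknown", data)

print(build(["temperature", 21.5]))
# ('column', [('label', 'temperature'), ('slider', 21.5)])
```

Re-running `build` when the data changes shape yields the automatic
widget restructuring the paper describes.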

Humanoid's matching is based on "predicate subsumption", a feature of
the Loom knowledge representation language.  This allows it to
efficiently determine the best user interface based on the context and
values of the actual data.  In fact, when the data changes its value
or structure, Humanoid can automatically change the widget structure
in response.

The ITS system also uses recursive matching; it produces an interface
from a hierarchy of dialogue components by matching them against style
rules.  In addition to selecting widgets for dialogue components, a
style rule can determine style attributes for all descendant widgets
on a type by type basis -- for example, in deciding to map some
dialogue component to a message box, it can also specify the color of
all of the pushbutton components of the message box.  The style
attributes control all presentation aspects, including layout.
Finally, the style rules can arrange to automatically add children to
various types of descendant widgets, such as adding a Help button to
all message boxes.

ITS, like Humanoid, can choose the best style rule that matches a node
in the dialogue tree.  However, different style rules may have
distinct and orthogonal purposes and set different kinds of style
attributes.  Style rules can be arranged so that a combination of the
best orthogonal rules are all applied.
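
The combination of orthogonal rules can be sketched as follows: each
matching rule contributes the attributes it cares about, with more
specific rules taking precedence (rule contents and the precedence
scheme here are assumptions, not from the paper):

```python
# Sketch of ITS-style rule application: orthogonal style rules each
# set different attributes, and all matching rules combine.

RULES = [
    # (predicate over a dialogue component, attributes the rule sets),
    # listed most-specific first
    (lambda c: c["type"] == "message", {"widget": "message_box"}),
    (lambda c: c.get("urgent"),        {"color": "red"}),
    (lambda c: True,                   {"font": "default"}),
]

def style(component):
    attrs = {}
    for predicate, settings in RULES:
        if predicate(component):
            for key, value in settings.items():
                attrs.setdefault(key, value)   # earlier rules win
    return attrs

print(style({"type": "message", "urgent": True}))
# {'widget': 'message_box', 'color': 'red', 'font': 'default'}
```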

In both Humanoid and ITS, the widget hierarchy reflects the data or
dialogue structure hierarchy defined by the designer.  In contrast,
DON allows the designer to specify a wider range of information about
the application data and actions, including the conditions under which
an action can be performed by the user, and uses that to construct an
appropriate widget organization.

DON's Organization Manager uses the application information, as well
as a profile of organizational preferences, to first decide how to
group together various portions of data and actions in dialogue boxes
and menus.  Afterwards (like Humanoid and ITS) it selects the widgets
to which the actions and data map.  DON allows the designer to adjust
the priorities of various organizational criteria (e.g. group together
actions which operate on the same type of object) to control the
organizing process.
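
The grouping step can be pictured as partitioning actions by the
highest-priority criterion the designer has chosen.  A toy sketch
(the action set and criteria names are invented for illustration):

```python
# Illustrative sketch of DON-style prioritized grouping: the designer
# ranks criteria, and the top-ranked one partitions actions into menus
# or dialogue boxes.

actions = [
    {"name": "open", "object": "file",  "menu": "common"},
    {"name": "save", "object": "file",  "menu": "common"},
    {"name": "crop", "object": "image", "menu": "rare"},
]

# Criteria in designer-chosen priority order, highest first
# (e.g. "group actions which operate on the same type of object").
criteria = ["object", "menu"]

def group(items, criteria):
    key = criteria[0]              # apply the highest-priority criterion
    groups = {}
    for item in items:
        groups.setdefault(item[key], []).append(item["name"])
    return groups

print(group(actions, criteria))
# {'file': ['open', 'save'], 'image': ['crop']}
```

Reordering `criteria` changes the partition, which is the control knob
the paper describes.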

DON's Presentation Manager then decides how to lay out and structure
the dialogue boxes and menus based on graphical design principles.  If
the components don't fit, then DON can use a variety of strategies
(including making separate sub-dialogue boxes); again the strategies
can be prioritized by the designer.

Automatically finding the "best" user interface for an application
could take a considerable amount of time; alternatively, the "best"
interface might not be produced, because the rule base is inadequate,
or because the rules or their priorities are not perfect.  An
alternative to fully automatic design is cooperative computer-aided
design in which a designer produces a partial design, allows the
computer to partially explore a number of designs based on it, selects
one of them, refines it some more, allows the computer to explore some
more, etc.  The FLATS system describes just such a cooperative
computer-aided design system, although its domain is architectural
floor plans rather than user interfaces.

There already exist many examples of standard non-computer-based user
interface designs -- address books, dashboards, etc.  It may be
worthwhile to consider whether an interface for some application can
be adapted from one of these designs.  The MAID system is a step in
this direction.

In addition to a description of the application data structures and
actions, the designer also provides a description of a real-world
interface, along with the known correspondences between components of
the two.  MAID can then reorganize the design of the application
interface to match that of the real-world interface, and adopts the
style of the real-world interface as appropriate for the corresponding
application interface components.

In addition, MAID may find that some real-world component is not
associated with any application component, but appears to be
coherently connected to some other real-world component which is.
MAID can then suggest adding that component to the application.

Druid acts like an intelligent assistant to the designer.  As the
designer interactively lays out the widgets, Druid infers size,
spacing, and alignment constraints, and then maintains these if the
designer confirms them.  Next, the designer demonstrates sequences of
inputs and actions to Druid and Druid remembers how to execute the
demonstrated actions in response to user inputs.  Druid also uses the
demonstrations to automatically generate animated help.
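
One piece of that inference is easy to sketch: widgets placed with
(nearly) equal left edges suggest an alignment constraint to confirm.
The tolerance and grouping scheme below are assumptions, not Druid's
actual method:

```python
# Hedged sketch of Druid-style constraint inference: cluster widgets
# whose left edges fall within a small tolerance of each other.

def infer_left_alignment(widgets, tolerance=2):
    """Return groups of widget names whose left edges lie within
    `tolerance` pixels of each other."""
    groups = []
    for name, x in sorted(widgets.items(), key=lambda kv: kv[1]):
        if groups and abs(x - groups[-1][1]) <= tolerance:
            groups[-1][0].append(name)
        else:
            groups.append(([name], x))
    # Only multi-widget groups are candidate constraints.
    return [sorted(names) for names, _ in groups if len(names) > 1]

layout = {"label_a": 10, "field_a": 11, "button": 80}
print(infer_left_alignment(layout))   # [['field_a', 'label_a']]
```

Each candidate group would then be shown to the designer for
confirmation before being maintained as a constraint.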

Cartoonist can automatically generate animated help that is context-
sensitive.  For each application action, the designer provides
detailed information about its parameters, including how their values
should be obtained from the context or the user.

To provide animated help about an application action in the current
context, Cartoonist selects values for the parameters that are
appropriate in the context.  For example, if an action requires a
rectangle, and a rectangle is currently displayed, Cartoonist does the
animation using that rectangle.

As in DON, preconditions and postconditions are associated with each
action.  An action can only be animated in the current context if its
precondition is satisfied; if not, backward chaining is used to find
a sequence of actions that can be executed in the current context
to satisfy the precondition.
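
Backward chaining over pre- and postconditions is classical planning;
a minimal sketch with invented actions (the real Cartoonist models
full UI actions, and this toy version omits cycle detection):

```python
# Sketch of backward chaining: find an action sequence whose
# postconditions satisfy the goal, recursing on unmet preconditions.

ACTIONS = {
    # name: (preconditions, postconditions)
    "create_rect": (set(),             {"rect_exists"}),
    "select_rect": ({"rect_exists"},   {"rect_selected"}),
    "fill_rect":   ({"rect_selected"}, {"rect_filled"}),
}

def plan(goal, state):
    """Return a sequence of actions reaching `goal` from `state`."""
    if goal <= state:
        return []
    for name, (pre, post) in ACTIONS.items():
        if post & (goal - state):          # this action helps
            prefix = plan(pre | (goal - post), state)
            if prefix is not None:
                return prefix + [name]
    return None

print(plan({"rect_filled"}, set()))
# ['create_rect', 'select_rect', 'fill_rect']
```

The resulting sequence is what gets animated, step by step, in the
user's current context.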

Interactive techniques (e.g. clicking on an object, making a selection
from a menu, etc.) are specified for each parameter that needs to be
obtained from the user.  The designer provides a description of the
detailed user steps for each separate interactive technique.
Cartoonist then uses these to produce the animation for the desired
sequence of actions.  Since Cartoonist animations actually call the
underlying application actions, the state is saved prior to the
animation, and the user can restore the previous state after the
animation is over.

It is not always obvious which interactive technique is best for
carrying out some task.  Toto is a system which assists the designer
in choosing.  It takes a detailed description of the available input
devices, the interactive techniques which can be performed using each,
and the operations needed to perform the task and their requirements
and characteristics.  Using heuristics based on ergonomic principles,
it then selects one or more interactive techniques to perform the
task; in the future, it may be able to synthesize new interactive
techniques as well.

Many interaction techniques involve moving or resizing an assemblage
of components based on the motion of the mouse pointer.  In a number
of prototyping environments, the designer can specify the exact
response of all the components by providing a collection of algebraic
constraints.  In the general case, a closed form solution to the
constraints may not be available, so constraint-based systems often
use numerical methods -- that is, on each and every mouse motion,
relaxation techniques are used to solve the constraints and determine
the resulting geometry of each component.  However, these techniques
may operate too slowly to provide accurate real-time response.

GITS, a tool for creating new interactive techniques, limits the
constraints to a fixed set of geometric constraints general enough for
most 2D interactions.  This allows an exact non-iterative solution to
be computed as a sequence of calls to built-in solution routines.
About 100 solution routines are needed to handle all the possible
situations.
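
The contrast with relaxation is that each geometric constraint has a
direct formula.  For example, a handle constrained to a line segment
can follow the mouse by closed-form projection (an illustrative case,
not one of GITS's actual routines):

```python
# Illustrative closed-form constraint solution: a point constrained to
# a segment tracks the mouse by direct projection -- no iteration.

def project_onto_segment(p, a, b):
    """Closest point to p on segment a-b, computed in closed form."""
    ax, ay = a; bx, by = b; px, py = p
    dx, dy = bx - ax, by - ay
    t = ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)
    t = min(1.0, max(0.0, t))          # clamp to the segment
    return (ax + t * dx, ay + t * dy)

# On each mouse motion, the constrained handle is recomputed directly.
print(project_onto_segment((5, 3), (0, 0), (10, 0)))   # (5.0, 0.0)
```

Because each solution routine is a formula like this one, response
stays accurate and real-time regardless of how fast the mouse moves.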

The Papers
----------

[Artkit] Tyson R. Henry, Scott E. Hudson, Gary L. Newell, "Integrating
Gesture and Snapping in a User Interface Toolkit"

[Cartoonist] Piyawadee Sukaviriya, James D. Foley, "Coupling a UI
Framework with Automatic Generation of Context-Sensitive Animated
Help"

[DON] Won Chul Kim, James D. Foley, "DON: User Interface Presentation
Design Assistant"

[Druid] Gurminder Singh, Chun Hong Kok, Teng Ye Ngan, "Druid: A System
for Demonstrational Rapid User Interface Development"

[FLATS] Sandeep Kochhar, Mark Friedell, "User Control in Cooperative
Computer-Aided Design"

[GITS] Dan R. Olsen Jr., Kirk Allan, "Creating Interactive Techniques
by Symbolically Solving Geometric Constraints"

[Glyphs] Paul R. Calder, Mark A. Linton, "Glyphs: Flyweight Objects
for User Interfaces"

[Humanoid] Pedro Szekely, "Template-Based Mapping of Application Data
to Interactive Displays"

[ITS] Charles Wiecha, Stephen Boies, "Generating User Interfaces:
Principles and Use of ITS Style Rules"

[MAID] Brad Blumenthal, "Strategies for Automatically Incorporating
Metaphoric Attributes in Interface Designs"

[n-Vision] Steven Feiner, Clifford Beshers, "Worlds within Worlds:
Metaphors for Exploring n-Dimensional Virtual Worlds"

[QUICK] Sarah Douglas, Eckehard Doerry, David Novick, "Quick: A
User-Interface Design Kit for Non-Programmers"

[Suite] Prasun Dewan, "A Tour of the Suite User Interface Software"

[Tailor] Randy Pausch, Ronald D. Williams, "Tailor: Creating Custom
User Interfaces Based on Gesture"

[TGE] Anthony Karrer, Walt Scacchi, "Requirements for an Extensible
Object-Oriented Tree/Graph Editor"

[Toto] Teresa Bleser, John Sibert, "Toto: A Tool for Selecting
Interaction Techniques"

[VUIMS] Jon H. Pittman, Christopher J. Kitrick, "VUIMS: a Visual User
Interface Management System"