[sci.virtual-worlds] Many-dimensional input devices

cphoenix@csli.Stanford.EDU (Chris Phoenix) (09/21/90)

So far, I've heard people talking about input devices mainly in terms of 
moving the user's viewpoint around the "space".  But there are many other
things that can be done as well.  For example, a VR artist may draw a 
picture and want to be able to change several things quickly until it is 
"just right".  An architect may have hundreds or thousands of variables to
specify a building.  A statistician needs to specify how to map a data set
onto an N-dimensional graph.

All of these tasks involve many variables which can change independently.
As long as we're dealing with many dimensions anyway, it seems like VR
could be quite useful for letting people interactively work out a good fit
to a problem with many variables.

But input devices have a limited number of degrees of freedom, so they can
only specify a few dimensions at a time.  What kind of transformation could
you apply to, say, a dataglove to allow specification of coordinates in
arbitrarily many dimensions?  Perhaps you could use one degree of freedom
to specify which dimensions the others controlled.  As you bend your
finger, different parts
of the building you're designing become highlighted, and then moving your
hand changes their parameters.  However, this seems clumsy and it still won't
let you change many dimensions at a time.  Perhaps some functions could be
specified to derive some coordinate values from others--you could specify that
windows should always end 2 feet below the ceiling, and then when you changed
the ceiling height the windows would change too.  But these functions would
be impossible to guess in advance, and awkward to specify at runtime.  Perhaps
you could use "intelligent" software to allow you to change dimensions by 
reaching for their manifestations in the picture, so that you could just grab
a windowframe and move it.  But this only works for attributes that you can 
touch in some sense, and it would be hard to make it intuitive for very many
things.  (Picture a point in a graph.  When you grab it, does that mean you 
want to change the shape or the color?)
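
To make the "derived coordinates" idea a little more concrete, here is a
rough C sketch.  The ceiling/window names and the 2-foot offset are just
the example from above; everything else is invented, and a real modeler
would need a much more general constraint system than a hard-coded
function.

/* Derived coordinates via a constraint function: only ceiling_height is
 * set directly (say, from a glove motion); window_top is recomputed from
 * it, as in the "2 feet below the ceiling" example.  Names and units are
 * made up for illustration.
 */
#include <stdio.h>

struct building {
    double ceiling_height;   /* independent: driven by the input device */
    double window_top;       /* derived: always follows the ceiling     */
};

/* Re-derive all dependent parameters from the independent ones. */
static void apply_constraints(struct building *b)
{
    b->window_top = b->ceiling_height - 2.0;  /* 2 feet below the ceiling */
}

/* Called whenever the glove changes an independent parameter. */
static void set_ceiling(struct building *b, double feet)
{
    b->ceiling_height = feet;
    apply_constraints(b);
}

int main(void)
{
    struct building b = { 0.0, 0.0 };

    set_ceiling(&b, 9.0);
    printf("ceiling %.1f ft, window top %.1f ft\n",
           b.ceiling_height, b.window_top);

    set_ceiling(&b, 12.0);   /* raise the ceiling: the window follows */
    printf("ceiling %.1f ft, window top %.1f ft\n",
           b.ceiling_height, b.window_top);
    return 0;
}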

Does anyone else have any suggestions for ways to build such input devices,
either hardware or software?


-- 
War is a little naked kid running along a road and screaming because the 
napalm hurts so bad. War is young men in body bags -- theirs and ours. And 
the dying doesn't necessarily have anything to do with baseball, apple pie 
and the Grand Old Flag.  -- Mike Royko

hlab@milton.u.washington.edu (Human Int. Technology Lab) (09/26/90)

In article <7985@milton.u.washington.edu> cphoenix@csli.Stanford.EDU (Chris Phoenix) writes:
>... 
> 
> But input devices have a limited number of degrees of freedom, so they can
> only specify a few dimensions at a time.  What kind of transformation could 
> you apply to, say, a dataglove to allow specification of coordinates in 
> arbitrarily many dimensions?  Perhaps you could use one degree of freedom
> to specify which dimensions the others controlled.  As you bend your
> finger, different parts
> of the building you're designing become highlighted, and then moving your
> hand changes their parameters.  However, this seems clumsy and it still won't
> let you change many dimensions at a time.
> ... [more suggestions on input techniques deleted] ...
> 
> Does anyone else have any suggestions for ways to build such input devices,
> either hardware or software?
> 

I think we have the same problem here that the discussion on navigation ran
into a while back: trying to map a many-dimensional, possibly highly
symbolic domain onto a few degrees of freedom which are easily mapped to a
space which human perceptions can make sense of (pun intended).  The idea
behind VR is to take advantage of the many megayears of evolution which
have tuned human motor-sensory systems to their environment by simulating
the kinds of space (both physical and kinesthetic) found in that
environment.  But how do you map a problem containing hundreds or thousands
of degrees of freedom to the dozen or two that you can get from datagloves
or a datasuit?  Oh, you can add some degrees of freedom with gaze detectors
and such, but you still come up short.

So, let's cheat.  In normal conversation with other people, we don't use
just one sensory mode (if we did, we wouldn't need smileys in electronic
conversation to disambiguate what would normally be made clear by voice
tone or gesture).  Conversation with computers can be multi-modal too, e.g.
the voice, gaze, and gesture systems developed at the Architecture Machine
Group (the forerunner of the Media Lab at MIT).  Symbolic input, whether
via voice or gesture (like picking an item from a menu), can be used to
drastically reduce the number of degrees of freedom required *at any one
instant* to specify the user's requests.
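
As a rough sketch of what I mean by reducing the degrees of freedom needed
at any one instant: a symbolic selection (a spoken word, or a menu pick)
chooses which three of the model's many parameters the glove's x/y/z
motion currently drives.  The parameter table and names below are
invented, not taken from any real system.

#include <stdio.h>
#include <string.h>

#define N_PARAMS 9

static const char *param_name[N_PARAMS] = {
    "wall.x",     "wall.y",        "wall.height",
    "window.x",   "window.y",      "window.width",
    "roof.pitch", "roof.overhang", "roof.height"
};
static double param[N_PARAMS];

/* Symbolic channel: map a spoken word or menu pick to a bank of three
 * parameters.  Returns the index of the first parameter in the bank,
 * or -1 if the token isn't recognized. */
static int select_bank(const char *token)
{
    if (strcmp(token, "wall") == 0)   return 0;
    if (strcmp(token, "window") == 0) return 3;
    if (strcmp(token, "roof") == 0)   return 6;
    return -1;
}

/* Continuous channel: apply one frame of glove motion to the bank. */
static void apply_glove(int bank, double dx, double dy, double dz)
{
    if (bank < 0)
        return;
    param[bank + 0] += dx;
    param[bank + 1] += dy;
    param[bank + 2] += dz;
}

int main(void)
{
    int i;
    int bank = select_bank("window");   /* the user says "window" ...  */

    apply_glove(bank, 0.1, 0.0, 0.25);  /* ... and then moves the hand */

    if (bank >= 0)
        for (i = bank; i < bank + 3; i++)
            printf("%s = %.2f\n", param_name[i], param[i]);
    return 0;
}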

Consider gestures: not static positions, but time sequences of
position/flexure data for joints.  Gesture detection software can add a
syntactic layer on top of the normal lexical interpretation of body
language (pun also intended).  Many more degrees of freedom exist now; a given
finger position, for instance, can be a part of many gestures, just as a
given letter can be a part of many words.  Better still, humans can learn
gestures and sequences of gestures quickly and reliably, probably better
than they can individual positions (try holding *any* position accurately
for any length of time).
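
A toy illustration (the templates and numbers below are made up, and a
real recognizer would resample and time-warp the data rather than compare
fixed-length sequences): a gesture here is a short time sequence of
finger-flexure samples, and an incoming sequence is matched against stored
templates by summed squared error.

#include <stdio.h>

#define SEQ_LEN   8   /* samples per gesture   */
#define N_FINGERS 2   /* flexure channels used */
#define N_TEMPL   2   /* known gestures        */

typedef double sequence[SEQ_LEN][N_FINGERS];

static const char *gesture_name[N_TEMPL] = { "beckon", "flat-hand" };

static const sequence template_seq[N_TEMPL] = {
    /* "beckon": the index finger curls and releases repeatedly */
    { {0,0},{1,0},{0,0},{1,0},{0,0},{1,0},{0,0},{1,0} },
    /* "flat-hand": both fingers stay extended */
    { {0,0},{0,0},{0,0},{0,0},{0,0},{0,0},{0,0},{0,0} }
};

/* Summed squared error between two flexure sequences. */
static double distance(const sequence a, const sequence b)
{
    double d = 0.0;
    int t, f;

    for (t = 0; t < SEQ_LEN; t++)
        for (f = 0; f < N_FINGERS; f++) {
            double e = a[t][f] - b[t][f];
            d += e * e;
        }
    return d;
}

/* Return the index of the closest stored gesture template. */
static int classify(const sequence s)
{
    int best = 0, i;
    double best_d = distance(s, template_seq[0]);

    for (i = 1; i < N_TEMPL; i++) {
        double d = distance(s, template_seq[i]);
        if (d < best_d) { best_d = d; best = i; }
    }
    return best;
}

int main(void)
{
    /* A noisy beckoning motion captured from the glove (made up). */
    const sequence observed = {
        {0.1,0},{0.9,0},{0.2,0},{0.8,0},{0.1,0},{1.0,0},{0.0,0},{0.9,0}
    };

    printf("recognized gesture: %s\n", gesture_name[classify(observed)]);
    return 0;
}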

Consider voice: a primarily symbolic input technique which can be used
(among many other things) to select the mode of input to be used by other
input techniques ("Print the document I'm pointing at"), to select the
range of valid values of another input ("Print the second and third
documents in the stack I've just picked up") and to manipulate the current
reality to reduce the number of degrees of freedom required for further
interaction ("Bring all documents tagged "R" in green to the front where I
can see them.").

--
---------------------------------------------------------------------------
NOTE: USE THIS ADDRESS TO REPLY, REPLY-TO IN HEADER MAY BE BROKEN!
Bruce Cohen, Computer Research Lab        email: brucec@tekcrl.labs.tek.com
Tektronix Laboratories, Tektronix, Inc.                phone: (503)627-5241
M/S 50-662, P.O. Box 500, Beaverton, OR  97077