[sci.virtual-worlds] Who says what to whom

wex@dali.pws.bull.com (Buckaroo Banzai) (09/13/90)

In article <1990Sep9.182518.12605@watserv1.waterloo.edu> broehl@watserv1.waterloo.edu (Bernie Roehl) writes:
   Some protocol thoughts...

   There should be an initial interchange with the building, in which:

   1. The user identifies him/herself to the building
   2. The building either welcomes the user or rejects them (e.g. building closed)
   3. The user tells the building what attributes to send
   4. The building tells the user what user attributes to send

I'd like to talk about 3&4 particularly, as they raise annoying model
questions.  The problem is this: in order for the user to tell the building
what attributes to send, the user has to know what attributes the building
*might* send, which means he knows a hell of a lot about the building's
structure (and probably a fair amount about the *implementation* of the
building's structure, which is even worse).

This problem carries along as we add more and more interacting objects,
until you have a situation where a supposedly-simple object (like a
baseball) has to "know" a hell of a lot about the world.

We ran into this problem while trying to design a system that could model
gravity and collisions by objects interacting intelligently.  We had three
test cases we wanted to be able to solve:
        - the baseball being pitched and hit;
        - a jet airliner taxiing to a halt at a terminal gate;
        - a glass of water falling off a table to the floor.

Simpler versions of these problems have been solved by other approaches,
such as cognitive modeling and constraint-based programming.  We wanted to
see if the interacting-objects model could do as well, but we got bogged
down in the issue of how much knowledge objects need.

We ended up with a ridiculously topheavy structure where the generic
superclasses had all sorts of specialized information which was used to
optimize the enormous searches that the leaf-class objects were required to
perform.

I had an idea for improving this by going from a pure object representation
to a frames+objects representation, where the objects would handle action
rules and the frames would contain "knowledge" in the AI/KR sense.  However,
I haven't had a chance to test out this idea.

--
--Alan Wexelblat                        phone: (508)294-7485
Bull Worldwide Information Systems      internet: wex@pws.bull.com
"Politics is Comedy plus Pretense."

brucec%phoebus.phoebus.labs.tek.com@RELAY.CS. (Bruce Cohen;;50-662;LP=A;) (09/14/90)

In article <7507@milton.u.washington.edu> wex@dali.pws.bull.com (Buckaroo Banzai) writes:
> 
> I'd like to talk about 3&4 particularly, as they raise annoying model
> questions.  The problem is this: in order for the user to tell the building
> what attributes to send, the user has to know what attributes the building
> *might* send, which means he knows a hell of a lot about the building's
> structure (and probably a fair amount about the *implementation* of the
> building's structure, which is even worse).
> 
> This problem carries along as we add more and more interacting objects,
> until you have a situation where a supposedly-simple object (like a
> baseball) has to "know" a hell of a lot about the world.
> 
> We ran into this problem while trying to design a system that could model
> gravity and collisions by objects interacting intelligently.  We had three
> test cases we wanted to be able to solve:
>         - the baseball being pitched and hit;
>         - a jet airliner taxiing to a halt at a terminal gate;
>         - a glass of water falling off a table to the floor.

No question, this is a nasty problem.  I can think, off hand, of two
possible solutions (read: here are a couple of wild ideas, I have no idea
if they will solve the problem or not).

1) The object and the building negotiate the list(s) of attributes they will
   communicate.  This is the technique used in standard interfaces which
   must be extensible, or at least support optional parts of the interface
   protocol.  One requirement of this approach is that there be a bounded
   list of possible attributes, and that there be some minimal list of
   attributes that can be relied upon to be present.  Then the party
   requesting the use of an attribute can handle the respondent's lack of
   it by negotiating the use of attributes which it can use to emulate the
   missing attribute.  If the possible number of optional attributes is
   large, the number of such emulation strategies can also be large, and
   that's where this scheme can break down.  (A rough sketch of this
   negotiation follows after 2) below.)

2) Treat attributes as objects in themselves.  This is an approach I toyed
   with briefly when working on an object-oriented graphics system.
   Graphics is typically divided into rendering primitives (which may
   contain a grouping mechanism to make "compound primitives" :-)), and
   attributes; trying to implement the primitives as objects which have
   attribute values as part of their internal state can get you into
   exactly the kinds of trouble you describe.  Suppose instead that you
   build inheritance hierarchies of attribute object classes, starting from
   some small set of root classes (the base set of attributes which all
   "interacting objects" know about).  Then maybe you can structure the
   inheritance so that the specialization of a subclass is such that the
   emulation I described in suggestion 1) happens as a result of the
   polymorphic invocation of the attribute objects' methods.
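
To make 1) and 2) a little more concrete, here is a very rough sketch (the
required set, the attribute names, and the emulation table are all invented;
this is only meant to show the shape of the negotiation, not a worked-out
protocol):

# Sketch of attribute negotiation between two parties (say, user and building).
# REQUIRED is the minimal set every party must support; anything else is
# optional and may have to be emulated from attributes the other side has.

REQUIRED = {"position", "shape"}

EMULATIONS = {
    # attribute we want -> attributes we could fake it from
    "texture": {"color"},        # e.g. render texture as crosshatched color
    "color":   {"intensity"},    # monochrome fallback
}

def negotiate(wanted, offered):
    """Split the attributes we want into (sent as-is, emulated, unsupported)."""
    offered = set(offered) | REQUIRED
    send = set(wanted) & offered
    emulate, unsupported = {}, set()
    for attr in set(wanted) - offered:
        basis = EMULATIONS.get(attr)
        if basis and basis <= offered:
            emulate[attr] = basis
        else:
            unsupported.add(attr)
    return send, emulate, unsupported

# A client that wants texture from a building that only offers color:
print(negotiate({"position", "texture"}, {"position", "shape", "color"}))
# -> ({'position'}, {'texture': {'color'}}, set())

That EMULATIONS table is exactly the part that can grow out of hand when the
optional list gets large.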

> 
> Simpler versions of these problems have been solved by other approaches,
> such as cognitive modeling and constraint-based programming.  We wanted to
> see if the interacting-objects model could do as well, but we got bogged
> down in the issue of how much knowledge objects need.
> 
> We ended up with a ridiculously topheavy structure where the generic
> superclasses had all sorts of specialized information which was used to
> optimize the enormous searches that the leaf-class objects were required to
> perform.
> 
> I had an idea for improving this by going from a pure object representation
> to a frames+objects representation, where the objects would handle action
> rules and the frames would contain "knowledge" in the AI/KR sense.  However,
> I haven't had a chance to test out this idea.
> 

I don't see how to make the knowledge sets of two separate frames mesh when
they impinge on each other for the first time.  I guess I'm not sure how you
mean to relate the objects and the frames.
--
---------------------------------------------------------------------------
NOTE: USE THIS ADDRESS TO REPLY, REPLY-TO IN HEADER MAY BE BROKEN!
Bruce Cohen, Computer Research Lab        email: brucec@tekcrl.labs.tek.com
Tektronix Laboratories, Tektronix, Inc.                phone: (503)627-5241
M/S 50-662, P.O. Box 500, Beaverton, OR  97077

wex@dali.pws.bull.com (Buckaroo Banzai) (09/15/90)

In article <7523@milton.u.washington.edu> brucec%phoebus.phoebus.labs.tek.com@RELAY.CS. (Bruce Cohen;;50-662;LP=A;) writes:
   1) The object and the building negotiate the list(s) of attributes they will
      communicate.  This is the technique used in standard interfaces which

Right - we tried this.  The theory is good, but the problem is that the
implementation required a *lot* of message interchanges.  Think of the
baseball-approaching-the-bat vs. the airplane-approaching-the-gate.  At what
distance do you communicate what information?  I suppose with faster
machines we'd be able to do more of this, but I'm not happy with the
"negotiation" approach.

      One requirement of this approach is that there be a bounded
      list of possible attributes, and that there be some minimal list of
      attributes that can be relied upon to be present.  Then the party

Right - this is how we got topheavy superclasses.  You have to play a
balancing game between having all objects know a lot about other objects'
structure and having a very intelligent way of handling missing
attributes.

      If the possible number of optional attributes is
      large, the number of such emulation strategies can also be large, and
      that's where this scheme can break down.

Number of strategies?  I'd be hard-pressed to come up with *one* general
strategy.

   I don't see how to make the knowledge sets of two separate frames mesh when
   they impinge on each other for the first time.  I guess I'm not sure how you
   mean to relate the objects and the frames.

OK - the idea was not to have each object have KR frames, but to have there
be a system of knowledge about the world to which all objects could refer.
That is, if I'm a ball object and I'm about to impinge on a bat object, I
know that this object is of type T (or one of its subclasses) and I know
where to look in the knowledge net for information about objects like that.
Once I've inferred some things about this type of object, I can construct a
request of the bat.

The idea of this sort of implementation is to prevent what is essentially
"real-world" knowledge from having to be part of objects' structures and
also to avoid having objects know too much about how other objects are
implemented.
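
Very roughly, and with every name below invented for the sake of the example,
the shape of it is something like this:

# The knowledge net is a shared structure indexed by type, kept separate from
# the objects that implement display/interaction.  Types and facts invented.

KNOWLEDGE_NET = {
    "RigidBody": {"has_mass": True, "deforms": False},
    "Bat":       {"is_a": "RigidBody", "affords": ["hitting"]},
    "Ball":      {"is_a": "RigidBody", "affords": ["throwing", "hitting"]},
}

def facts_about(type_name):
    """Collect facts for a type, following is_a links up the net."""
    facts = {}
    while type_name:
        node = KNOWLEDGE_NET.get(type_name, {})
        for key, value in node.items():
            facts.setdefault(key, value)
        type_name = node.get("is_a")
    return facts

class WorldObject:
    """Implements VR behavior only; real-world knowledge lives in the net."""
    def __init__(self, type_name):
        self.type_name = type_name
    def about_to_hit(self, other):
        # Infer from the net, then construct a request of the other object.
        facts = facts_about(other.type_name)
        wanted = ["mass", "velocity"] if facts.get("has_mass") else []
        return {"request": wanted, "to": other.type_name}

ball, bat = WorldObject("Ball"), WorldObject("Bat")
print(ball.about_to_hit(bat))  # {'request': ['mass', 'velocity'], 'to': 'Bat'}

The ball never looks inside the bat; it consults the net and then asks the bat
a question the bat can answer from its own side.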

--
--Alan Wexelblat                        phone: (508)294-7485
Bull Worldwide Information Systems      internet: wex@pws.bull.com
"Politics is Comedy plus Pretense."

shebs@Apple.COM (Stan Shebs) (09/15/90)

In article <7507@milton.u.washington.edu> wex@dali.pws.bull.com (Buckaroo Banzai) writes:

>   3. The user tells the building what attributes to send
>   4. The building tells the user what user attributes to send
>
>I'd like to talk about 3&4 particularly, as they raise annoying model
>questions.  The problem is this: in order for the user to tell the building
>what attributes to send, the user has to know what attributes the building
>*might* send, which means he knows a hell of a lot about the building's
>structure (and probably a fair amount about the *implementation* of the
>building's structure, which is even worse).

In the system I've been working on, the game^H^H^H^Hvirtual world designer
decides which attributes will be communicated to which users, then it's up
to the users' proxy/client code to decide what (if anything) is to be done
with the attributes.  The trick is then to write clients that can deal with
different sorts of data reasonably.  I assume that most clients will be
customized to particular kinds of worlds, although I've been working on a
Mac client that tries to set up reasonable menus and windows using only
datatype info coming from the server - it does do the right thing sometimes!...

>This problem carries along as we add more and more interacting objects,
>until you have a situation where a supposedly-simple object (like a
>baseball) has to "know" a hell of a lot about the world.

I started my (meta)world design from a literal rendering of the physical
world, where you could track every atom if you had enough resources, then
introduced scaled-down concepts "for efficiency" :-).  The resulting object
hierarchy seems to be fairly good for modeling.  For instance, "physical
objects" or "physobs" reside in "spaces" of assorted shapes, while behavior
can be specified as "processes" or "events".  Motion, for example, is a
process acting uniformly over physobs in a space, and governed by a few
parameters that are attached to the physobs.  A baseball wouldn't need
much more than a shape, mass, elasticity, and color.
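
In toy form it looks something like this (drastically simplified; the names
and numbers here are just for illustration, not the actual system):

# Toy version of the physob/space/process split: motion is a process acting
# uniformly over all physobs in a space, driven by a few per-object parameters.

from dataclasses import dataclass, field

@dataclass
class Physob:
    shape: str
    mass: float          # kg
    elasticity: float    # 0..1
    color: str
    position: float = 0.0
    velocity: float = 0.0

@dataclass
class Space:
    gravity: float = -9.8                    # a parameter of the space itself
    physobs: list = field(default_factory=list)

def motion_process(space, dt):
    """One tick of the motion process, applied uniformly to every physob."""
    for ob in space.physobs:
        ob.velocity += space.gravity * dt
        ob.position += ob.velocity * dt

ballpark = Space()
ballpark.physobs.append(Physob("sphere", 0.145, 0.55, "white", position=20.0))
for _ in range(10):
    motion_process(ballpark, 0.1)
print(ballpark.physobs[0].position)   # the baseball, one second into free fall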

In general, physical modelling seems to disfavor the normal computer
science models of computation.  I suspect that it's more fruitful to
let the general computational model be a "non-primitive" (so to speak)
of the virtual reality, in much the same way that physical computers
have to be constructed out of complicated assemblages of materials operating
according to physical laws.  This sounds very ethereal - practical advice
is "don't let arbitrary C/Lisp/etc code into your VR!".

                                                stan shebs
                                                shebs@apple.com

broehl@watserv1.waterloo.edu (Bernie Roehl) (09/15/90)

In article <7507@milton.u.washington.edu> wex@dali.pws.bull.com (Buckaroo Banzai) writes:
>In article <1990Sep9.182518.12605@watserv1.waterloo.edu> broehl@watserv1.waterloo.edu (Bernie Roehl) writes:
>   3. The user tells the building what attributes to send
>   4. The building tells the user what user attributes to send
>
>I'd like to talk about 3&4 particularly, as they raise annoying model
>questions.  The problem is this: in order for the user to tell the building
>what attributes to send, the user has to know what attributes the building
>*might* send, which means he knows a hell of a lot about the building's
>structure...

Apparently my explanation (later in the article you quote) wasn't very clear.
(That's what I get for writing in a hurry).

My idea is that the attributes the building sends to the user *must* be
defined by the user.  If I'm on a monochrome display, I tell the building
not to send color.  I do this by not including 'color' in the list of
attributes I want the building to send.

In other words, the attributes I can handle are determined by my hardware
and software, and if the building learns about additional attributes I can't
be expected to deal with them intelligently.  So I only tell the building
what I *can* handle.  If I list an attribute  *it* doesn't know about, that's
fine; it doesn't send it and I leave the attribute at its default value.
(The first VR stations that implement, say, barometric pressure will have it
at its default value until such time as buildings begin to support it).

The same is true in the other direction (though I suspect most buildings
would accept all attributes and simply store them; the building does
very little actual processing, and mostly just relays attribute information
between occupants).
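
In protocol terms the exchange is quite simple; roughly the following (the
attribute names, defaults and message layout are just for illustration):

# The user lists only the attributes its hardware/software can handle; the
# building sends only those, and anything it doesn't know about simply stays
# at the user's default.

USER_DEFAULTS = {"color": "grey", "texture": "flat",
                 "barometric_pressure": 101.3}

def user_request(capabilities):
    """The attributes this station can actually handle."""
    return {"send_attributes": sorted(capabilities)}

def building_update(request, known):
    """The building relays only attributes it knows *and* was asked for."""
    return {a: known[a] for a in request["send_attributes"] if a in known}

def apply_update(update):
    state = dict(USER_DEFAULTS)
    state.update(update)          # unknown attributes keep their defaults
    return state

# A monochrome station never lists 'color', so color is never sent; it lists
# barometric pressure, but the building doesn't support it, so the default holds.
req = user_request({"texture", "barometric_pressure"})
upd = building_update(req, {"color": "red", "texture": "brick"})
print(apply_update(upd))
# -> {'color': 'grey', 'texture': 'brick', 'barometric_pressure': 101.3}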

>We ran into this problem while trying to design a system that could model
>gravity and collisions by objects interacting intelligently.

A very complex problem that might best be solved (at least initially) by
altering (read eliminating) many of the physical laws that apply to a given
VR.  Bear in mind that (as I pointed out in an earlier posting) a Newtonian
model is not necessarily what we want.  If I were designing reality, I
might well choose not to implement gravity (even if doing so were easy,
which it's not).  "Collisions" are often a *bad* thing.

>        - the baseball being pitched and hit;
>        - a jet airliner taxiing to a halt at a terminal gate;
>        - a glass of water falling off a table to the floor.

Good examples, all of which can be done more easily in the real, physical
world than in a virtual one.

I think we're entering into what may be one of the great ongoing debates in
the realm of VR: are we trying to model existing physical reality, or are
we trying to define new worlds with new sets of properties and physical laws?

I think the answer may be "both", but that the latter (being far easier) will
be the first to be implemented.

-- 
        Bernie Roehl, University of Waterloo Electrical Engineering Dept
        Mail: broehl@watserv1.waterloo.edu OR broehl@watserv1.UWaterloo.ca
        BangPath: {allegra,decvax,utzoo,clyde}!watmath!watserv1!broehl
        Voice:  (519) 885-1211 x 2607 [work]

brucec%phoebus.phoebus.labs.tek.com@RELAY.CS. (Bruce Cohen;;50-662;LP=A;) (09/17/90)

In article <7569@milton.u.washington.edu> wex@dali.pws.bull.com (Buckaroo Banzai) writes:
    [I wrote:]
>       If the possible number of optional attributes is
>       large, the number of such emulation strategies can also be large, and
>       that's where this scheme can break down.
> 
> Number of strategies?  I'd be hard-pressed to come up with *one* general
> strategy.

Sorry, I wasn't clear there: by strategy, I meant a per-attribute strategy
for emulating that attribute using some set of other attributes.  For
instance, in graphics texture is frequently emulated with crosshatching or
some other regular pattern.

> 
> OK - the idea was not to have each object have KR frames, but to have there
> be a system of knowledge about the world to which all objects could refer.
> That is, if I'm a ball object and I'm about to impinge on a bat object, I
> know that this object is of type T (or one of its subclasses) and I know
> where to look in the knowledge net for information about objects like that.
> Once I've inferred some things about this type of object, I can construct a
> request of the bat.
> 
> The idea of this sort of implementation is to prevent what is essentially
> "real-world" knowledge from having to be part of objects' structures and
> also to avoid having objects know too much about how other objects are
> implemented.
> 

I'm still a little hazy on this, so let me try to rephrase it and correct
me if I'm wrong.  I think you are saying that the frames contain (inter
alia) knowledge of the attributes valid to some sublattice of the object
inheritance graph, and that an object (the baseball, say) wanting to
negotiate attributes with another object (the bat) can find the
intersection of the attribute lists in the ball's frame and the bat's
frame.

If this is what you are saying, how is this different from each object
having the ability to emit its attribute list on request?  There still has
to be a computation somewhere which determines how to map the things the
ball can do to the things the bat wants to do to it; where is this done?

--
---------------------------------------------------------------------------
NOTE: USE THIS ADDRESS TO REPLY, REPLY-TO IN HEADER MAY BE BROKEN!
Bruce Cohen, Computer Research Lab        email: brucec@tekcrl.labs.tek.com
Tektronix Laboratories, Tektronix, Inc.                phone: (503)627-5241
M/S 50-662, P.O. Box 500, Beaverton, OR  97077

wex@dali.pws.bull.com (Buckaroo Banzai) (09/18/90)

In article <1990Sep14.190247.10645@watserv1.waterloo.edu> broehl@watserv1.waterloo.edu (Bernie Roehl) writes:
   My idea is that the attributes the building sends to the user *must* be
   defined by the user.  If I'm on a monochrome display, I tell the building
   not to send color.  I do this by not including 'color' in the list of
   attributes I want the building to send.

The problem is not one of implementation - of course you can construct
special cases for buildings, chairs, planes, balls, bats, etc. ad nauseam.
But wouldn't it be better to have a model of objects where you could define
some general rules (for what attributes you want to ask about) and just
specialize a few of them?

The other problem is that you're more or less violating the "object-ness" of
the model.  The more one object knows about the internal implementation of
another object, the less modular your implementation, etc.

   If I list an attribute  *it* doesn't know about, that's
   fine; it doesn't send it and I leave the attribute at its default value.

So what's the default value for something like "has a door I can walk
through"?  Do you see how much knowledge is presupposed simply in asking
that question?  This is what led me to want to separate out the real-world-
modeling aspects from the objects-that-implement-stuff aspects.

   [re gravity:] a Newtonian model is not necessarily what we want.  If I
   were designing reality, I might well choose not to implement gravity
   (even if doing so were easy, which it's not).  "Collisions" are often a
   *bad* thing. 

True, we might choose a relativistic model where gravity is a property of
the space in which the objects interact.  We did try one design in which
there was a special object known as Space, which contained a naive-physics
model and interacted with the objects in it.  Unfortunately, the machine
immediately bogged down in zillions of message passes, as everyone wanted to
talk to Space almost all the time.  I always thought this model was on the
right track, but people moved on before we could explore the possibilities
more fully.

   I think we're entering into what may be one of the great ongoing debates
   in the realm of VR: are we trying to model existing physical reality, or
   are we trying to define new worlds with new sets of properties and
   physical laws?

The problem is that the people paying the bills were airlines; they wanted a
system that more or less accurately modeled the real world.  Sure, it would
be nice if we could drop annoying things like gravity and collisions.
They're *hard*.  But that doesn't mean we can dodge them forever.  And I
don't favor building up a protocol/system which is going to collapse the
first time you throw a real problem at it.

--
--Alan Wexelblat                        phone: (508)294-7485
Bull Worldwide Information Systems      internet: wex@pws.bull.com
"Politics is Comedy plus Pretense."

wex@dali.pws.bull.com (Buckaroo Banzai) (09/18/90)

In article <7661@milton.u.washington.edu> brucec%phoebus.phoebus.labs.tek.com@RELAY.CS. (Bruce Cohen;;50-662;LP=A;) writes:
   Sorry, I wasn't clear there: by strategy, I meant a per-attribute strategy
   for emulating that attribute using some set of other attributes.  For
   instance, in graphics texture is frequently emulated with crosshatching or
   some other regular pattern.

OK - it just looks to me that if there are per-attribute strategies it's
going to get awfully messy.  Essentially you'd need some kind of forward- or
backward-chaining system to say "Here are the attributes I have, here's what
I need, here are strategies for getting to any attribute from some subset of the others,
now synthesize the rest."
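
Just to show why I think that gets messy, here is roughly the shape of the
chaining you would need (the strategy table and the attribute names are made
up):

# Backward-chaining sketch: given the attributes an object actually has and a
# table of per-attribute emulation strategies, can we synthesize what we need?

STRATEGIES = {
    # attribute : list of attribute-sets that can emulate it
    "texture":    [{"crosshatch"}, {"color", "pattern"}],
    "crosshatch": [{"color"}],
    "shininess":  [{"reflectance"}],
}

def can_synthesize(wanted, have, seen=None):
    """True if 'wanted' is available directly or via some chain of strategies."""
    seen = seen or set()
    if wanted in have:
        return True
    if wanted in seen:            # avoid looping through circular strategies
        return False
    for basis in STRATEGIES.get(wanted, []):
        if all(can_synthesize(b, have, seen | {wanted}) for b in basis):
            return True
    return False

print(can_synthesize("texture", {"color"}))    # True: color -> crosshatch -> texture
print(can_synthesize("shininess", {"color"}))  # False: no chain exists

Every optional attribute potentially needs an entry like this, which is why
I'd be hard-pressed to call any of it a *general* strategy.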

   I'm still a little hazy on this, so let me try to rephrase it and correct
   me if I'm wrong.  I think you are saying that the frames contain (inter
   alia) knowledge of the attributes valid to some sublattice of the object
   inheritance graph, and that an object (the baseball, say) wanting to
   negotiate attributes with another object (the bat) can find the
   intersection of the attribute lists in the ball's frame and the bat's
   frame.

Nope.  What I meant was that when we try to build a VR that models some
attributes of the world, we in some way "encode" real-world knowledge.  At
the same time, we have to invent "world-things" which do our VR stuff, like
display themselves, interact with users, etc.  Most systems that I know of
build the real-world knowledge into the objects.

I think that the right way to go about this is to separate the two kinds of
information.  Then we provide some way for objects to refer into the
knowledge lattice based on something about an object (e.g. the class to
which the object belongs).  The requestor then asks something about the real
world, which the requested object can try to answer.  This avoids the
problem of the requestor having to know anything about the implementation of
the requestee.

   If this is what you are saying, how is this different from each object
   having the ability to emit its attribute list on request?  There still has
   to be a computation somewhere which determines how to map the things the
   ball can do to the things the bat wants to do to it; where is this done?

Objects can, of course, be queried for their attribute lists.  And, true,
there's still computation to be done.  In fact, my scheme *increases* the
amount of computation, but *decreases* the message passing.  That is why we
were looking at something like parallel Smalltalk for implementation.  If
you can have each object computing away on its own (in parallel) and only
rarely passing messages, we felt we had a better chance at a usable
implementation.  Unfortunately, we never got a chance to try out the idea.

--
--Alan Wexelblat                        phone: (508)294-7485
Bull Worldwide Information Systems      internet: wex@pws.bull.com
"Politics is Comedy plus Pretense."

brucec%phoebus.phoebus.labs.tek.com@RELAY.CS.NET (Bruce Cohen) (09/21/90)

In article <7801@milton.u.washington.edu> wex@dali.pws.bull.com (Buckaroo Banzai) writes:
> We did try one design in which
> there was a special object known as Space, which contained a naive-physics
> model and interacted with the objects in it.  Unfortunately, the machine
> immediately bogged down in zillions of message passes, as everyone wanted to
> talk to Space almost all the time.  I always thought this model was on the
> right track, but people moved on before we could explore the possibilities
> more fully.

I like the idea of objectifying Space, but maybe the problem is having a
unique Space object.  Suppose instead that there are a number of them; in
the limit, one for each "material" object (surely we can come up with some
terminology to make talking about virtual objects easier!).  Each Space
object has some spatial locality to be concerned about, and has knowledge
about and control over the geometry in that locality (incidentally
making multiplex manifolds easy to implement).

Whether or not Space objects are shared, no object should directly
send messages to a Space object other than ones which are local to the
position of the object (there might be more than one if the object was on
the boundary between two localities).  A bat object might send a message to
its Space saying "I am being thrusted to the left (in my local coordinate
system) with a force of xx newtons; I mass yy kilos and have such & such a
cross-section in the plane transverse to the thrust" (I'm begging the model
of physical interaction here, of course that's not a sufficient
specification for a realistic baseball simulation, but it should be enough
for the example).  The Space keeps track of the position and velocity of
the bat as modified by gravity, air resistance, etc., and sends a message
back to the bat indicating any acceleration forces on it as a result of its motion.

Now if a ball is moving towards the bat, at some point its Space will hand
it or copies of its messages off to the bat's Space, just like a handoff in
a cellular phone network.  When the ball collides with the bat, the local
Space(s) will modify the positions and velocities of the bat and ball, and
send new force messages back to them.
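
A toy version of the handoff, cellular-phone style (one-dimensional, with
fake physics and message formats, but it shows the bookkeeping):

# Each Space owns an interval of the x axis; objects only ever talk to the
# Space(s) covering their current position.

class Space:
    def __init__(self, x_min, x_max, accel=-9.8):
        self.x_min, self.x_max, self.accel = x_min, x_max, accel
        self.objects = {}                 # name -> (position, velocity)

    def covers(self, x):
        return self.x_min <= x < self.x_max

    def step(self, dt, neighbours):
        handoffs = []
        for name, (x, v) in list(self.objects.items()):
            v += self.accel * dt          # the "force message" back to the object
            x += v * dt
            if self.covers(x):
                self.objects[name] = (x, v)
            else:                         # hand it off, cellular-phone style
                del self.objects[name]
                handoffs.append((name, x, v))
        for name, x, v in handoffs:
            for sp in neighbours:
                if sp.covers(x):
                    sp.objects[name] = (x, v)
                    break

left, right = Space(0.0, 50.0), Space(50.0, 100.0)
left.objects["ball"] = (45.0, 20.0)       # heading for the boundary
for _ in range(5):
    left.step(0.1, [right])
    right.step(0.1, [left])
print("left:", left.objects, "right:", right.objects)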

While in this scheme there are probably more total messages than if there
were a single Space, there is a greater potential for parallelism, since we
have, very literally, a great deal of locality of reference.  Since VR
worlds are likely to be highly distributed, I think this is a win.

> The problem is that the people paying the bills were airlines; they wanted a
> system that more or less accurately modeled the real world.  Sure, it would
> be nice if we could drop annoying things like gravity and collisions.
> They're *hard*.  But that doesn't mean we can dodge them forever.  And I
> don't favor building up a protocol/system which is going to collapse the
> first time you throw a real problem at it.

Agreed.  It's also true that if we can't model the worlds we know about, we
can have no confidence in being able to model different worlds in a
self-consistent way.  Just because we change the rules to make a new world
doesn't mean we can predict the ways in which the new rules will interact
in manifesting that world, and if we flange the rules to make the world
easier to implement, we may find out (or never know) that we've missed out
on some interesting behavior because of the broken rules (pun intended).
--
---------------------------------------------------------------------------
NOTE: USE THIS ADDRESS TO REPLY, REPLY-TO IN HEADER MAY BE BROKEN!
Bruce Cohen, Computer Research Lab        email: brucec@tekcrl.labs.tek.com
Tektronix Laboratories, Tektronix, Inc.                phone: (503)627-5241
M/S 50-662, P.O. Box 500, Beaverton, OR  97077

broehl@watserv1.waterloo.edu (Bernie Roehl) (09/22/90)

In article <7801@milton.u.washington.edu> wex@dali.pws.bull.com (Buckaroo Banzai) writes:
>   If I list an attribute  *it* doesn't know about, that's
>   fine; it doesn't send it and I leave the attribute at its default value.
>
>So what's the default value for something like "has a door I can walk
>through"?  Do you see how much knowledge is presupposed simply in asking
>that question?

I don't think of a door as an attribute; I would say it's an object that's
contained within the room.

The door object sends its appearance, location, etc; if I (for example) touch
the doorknob, the door responds by altering its orientation and appearance
to show me whatever's on the other side.  I pass through the door, and I'm
(transparently) transported to another room.

In principle, I could bring a door to my house with me wherever I go, and
leave it behind in case people want to come over and visit.

>   [re gravity:] a Newtonian model is not necessarily what we want.  If I
>   were designing reality, I might well choose not to implement gravity
>   (even if doing so were easy, which it's not).  "Collisions" are often a
>   *bad* thing. 
>
>True, we might choose a relativistic model where gravity is a property of
>the space in which the objects interact.

Or even dispense with these complexities altogether (which was the original
intent of my statement).  A world in which I can simply float around,
perhaps by swimming through the ether, is immensely easier to model.

>The problem is that the people paying the bills were airlines; they wanted a
>system that more or less accurately modeled the real world.  Sure, it would
>be nice if we could drop annoying things like gravity and collisions...

... the very things most airlines would *love* to avoid :-)

>And I don't favor building up a protocol/system which is going to collapse the
>first time you throw a real problem at it.

Depending on what you mean by a "real problem"...

-- 
        Bernie Roehl, University of Waterloo Electrical Engineering Dept
        Mail: broehl@watserv1.waterloo.edu OR broehl@watserv1.UWaterloo.ca
        BangPath: {allegra,decvax,utzoo,clyde}!watmath!watserv1!broehl
        Voice:  (519) 885-1211 x 2607 [work]

xanthian@zorch.SF-Bay.ORG (Kent Paul Dolan) (09/23/90)

wex@dali.pws.bull.com (Buckaroo Banzai) writes:
>
>broehl@watserv1.waterloo.edu (Bernie Roehl) writes:
>   My idea is that the attributes the building sends to the user *must* be
>   defined by the user.  If I'm on a monochrome display, I tell the building
>   not to send color.  I do this by not including 'color' in the list of
>   attributes I want the building to send.
>
>The problem is not one of implementation - of course you can construct
>special cases for buildings, chairs, planes, balls, bats, etc. ad nauseum.
>But wouldn't it be better to have a model of objects where you could define
>some general rules (for what attributes you want to ask about) and just
>specialize a few of them?
>
>The other problem is that you're more or less violating the "object-ness" of
>the model.  The more one object knows about the internal implementation of
>another object, the less modular your implementation, etc.
>
>   If I list an attribute  *it* doesn't know about, that's
>   fine; it doesn't send it and I leave the attribute at its default value.
>
>So what's the default value for something like "has a door I can walk
>through"?  Do you see how much knowledge is presupposed simply in asking
>that question?  This is what led me to want to separate out the real-world-
>modeling aspects from the objects-that-implement-stuff aspects.

But in the real world, nothing screams at me "has a(nother) door I can walk
through" when I enter a room. What I get, with the aid of a lot of visual
system processing is: "has a rectangular parallelepiped with a striated
texture and two shiny cylinders on one side's edge and one white shiny
circular object on the other".

It is _my_ knowledge base/subsequent processing that classifies that as
"probable door, two brass hinges, one enamel doorknob", with a possible
alternate, subject to further interactive test, classification of
"photorealistic painting".

So until I bring my (defined to the building) physical envelope within
contact distance of that doorknob, all the building has to do about the door
is send responses to my "accepts visual input of types location, geometric
form, color, texture, reflectance, transparency" profile, and depend on me
to correctly classify them. It need not send semantics (heck, the _building_
doesn't know that's a "door", merely that it is a physical, impenetrable
object that pivots out of the way on one side and has a latching mechanism
on the other; how I use it is up to me.)

Similarly, while the building needs to know about my mass and physical
extent and position, unless it contains mirrors, it is probably intensely
uninterested in the fact that I can provide visual information, and it
is probably inappropriate design to put that capability in the building,
simply to transport it to the other users or to the mirror.  At some
predefined interaction distance (subsumes at least one pixel), let the
_mirror_ tell me it accepts and provides visual stimuli.  Let the _doorknob_
tell me at contact distance that it provides tactile stimuli, and accepts
both push and torque input.
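
Roughly, what I am picturing is something like this (positions, distances,
and stimulus types are all made up for the example):

# Objects announce only the stimulus types in my declared sensory profile,
# and only once I'm within their predefined interaction distance.

MY_PROFILE = {"location", "form", "color", "texture", "reflectance",
              "transparency", "tactile"}

OBJECTS = [
    # (name, position, interaction distance, stimulus types it provides)
    ("doorknob", 3.0, 0.5, {"location", "form", "color", "tactile"}),
    ("mirror",   6.0, 2.0, {"location", "form", "reflectance", "visual-return"}),
]

def announcements(my_position):
    """What each object tells me, given where I am and what I can sense."""
    heard = {}
    for name, pos, reach, provides in OBJECTS:
        if abs(my_position - pos) <= reach:
            heard[name] = provides & MY_PROFILE   # stimuli only, no semantics
    return heard

print(announcements(1.0))   # nothing in range: I hear nothing
print(announcements(2.7))   # contact distance: the doorknob offers tactile + visual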

Kent, the man from xanth.
<xanthian@Zorch.SF-Bay.ORG> <xanthian@well.sf.ca.us>

wex@dali.pws.bull.com (Buckaroo Banzai) (09/25/90)

In article <7986@milton.u.washington.edu> brucec%phoebus.phoebus.labs.tek.com@RELAY.CS.NET (Bruce Cohen) writes:
   I like the idea of objectifying Space, but maybe the problem is having a
   unique Space object.  Suppose instead that there are a number of them; in
   the limit, one for each "material" object (surely we can come up with some
   terminology to make talking about virtual objects easier!).  Each Space
   object has some spatial locality to be concerned about, and has knowledge
   about and control over the geometry in that locality (incidentally
   making multiplex manifolds easy to implement).

I like this idea - sort of like the way air traffic control is done today.
I think it would work, but I'm not in a position to go off and implement a
trial system :-(

   Whether or not Space objects are shared, no object should directly
   send messages to a Space object other than ones which are local to the
   position of the object (there might be more than one if the object was on
   the boundary between two localities).

Right.  There could even be a stylized "handoff" procedure, just as is done
with aircraft in flight.

I can see two problems - one theoretical, and one implementational.  Theory
first: The subdivision of space carries with it the implicit assumption that
objects have no effects outside their volume.  This will break down if you
get to forces that act over a significant distance (say, the effect of solar
wind and gravity on an earth-moon flight).

The implementational problem comes the first time you have an adhesive
collision (say, ball into catcher's mitt) occurring at the boundary of two
volumes.  There will be *lots* of message-passing going on.

This is not to say I've changed my mind - I think this idea is on the right
track; I'm just pointing out things we should be careful of.

   While in this scheme there are probably more total messages than if there
   were a single Space, there is a greater potential for parallelism, since we
   have, very literally, a great deal of locality of reference.  Since VR
   worlds are likely to be highly distributed, I think this is a win.

I agree again.  The more I think about it, the more I think I want a
parallel language to do any kind of reasonable modeling.  Unfortunately, I
don't know squat about parallel programming :-( That's why I was hoping to
sneak by with a parallel implementation of an object-oriented language I did
know.

--
--Alan Wexelblat                        phone: (508)294-7485
Bull Worldwide Information Systems      internet: wex@pws.bull.com
"Politics is Comedy plus Pretense."

wex@dali.pws.bull.com (Buckaroo Banzai) (09/25/90)

In article <7989@milton.u.washington.edu> xanthian@zorch.SF-Bay.ORG (Kent Paul Dolan) writes:
   wex@dali.pws.bull.com (Buckaroo Banzai) writes:
   >So what's the default value for something like "has a door I can walk
   >through"?  Do you see how much knowledge is presupposed simply in asking
   >that question?  This is what led me to want to separate out the real-world-
   >modeling aspects from the objects-that-implement-stuff aspects.

   But in the real world, nothing screams at me "has a(nother) door I can walk
   through" when I enter a room. What I get, with the aid of a lot of visual
   system processing is: "has a rectangular parallelopiped with a striated
   texture and two shiny cylinders on one side's edge and one white shiny
   circular object on the other".

I disagree strongly with you here.  See, for example, Don Norman's work on
affordances, particularly The Psychology of Everyday Things.  I know Don
reads this group from time to time - perhaps he'd care to comment.

   It is _my_ knowledge base/subsequent processing that classifies that as
   "probable door, two brass hinges, one enamel doorknob", with a possible
   alternate, subject to further interactive test, classification of
   "photorealistic painting".

My point is not "can you figure out there's a door there" based on sensory
input.  My point is that the fact that you're *looking* for a door indicates
that you've brought along a *huge* amount of "real-world" knowledge, not
least of which involves things like "hinges" and "doorknobs."  It's easy for
humans; we have years of learning this stuff.  However, trying to program it
into objects is, I contend, ultimately futile.

That was the major thesis of my opposition to the objects-interact-by-
sending-properties-around protocol.

   Similarly, while the building needs to know about my mass and physical
   extent and position, unless it contains mirrors, it is probably intensely
   uninterested in the fact that I can provide visual information, and it
   is probably inappropriate design to put that capability in the building,

But what if the building has windows through which you are being observed by
other users or by a visual recording device?  [Note again: I'm not saying it
can't be done.  I'm saying that you're heading for a world of trouble both
in terms of trying to give objects knowledge about the world, and in terms
of the number of special cases you're going to have to create.]

--
--Alan Wexelblat                        phone: (508)294-7485
Bull Worldwide Information Systems      internet: wex@pws.bull.com
"Politics is Comedy plus Pretense."

noble@shumv1.ncsu.edu (Patrick Brewer) (09/25/90)

In article <1990Sep21.192518.6956@watserv1.waterloo.edu> broehl@watserv1.waterloo.edu (Bernie Roehl) writes:

>>   [re gravity:] a Newtonian model is not necessarily what we want.  If I
>>   were designing reality, I might well choose not to implement gravity
>>   (even if doing so were easy, which it's not).  "Collisions" are often a
>>   *bad* thing. 
>>
>>True, we might choose a relativistic model where gravity is a property of
>>the space in which the objects interact.
>
>Or even dispense with these complexities altogether (which was the original
>intent of my statement).  A world in which I can simply float around,
>perhaps by swimming through the ether, is immensely easier to model.

        Not to mention, swimming through the ether would be much more
fun from a user point of view. I have no experience in VR research, but I
find it disturbing that people keep talking about ways to duplicate this
universe.
        For two reasons I think it is important that the VR appear like
this universe only in ways that are necessary, i.e. 3-D.
1.      To make it saleable: No one would spend many thousands of their
own or their company's money on a VR duplicate of this universe. Think
about it: You walk into your office, strap on your 3-D goggles, put
on the gloves and earphones, and are magically transported into a duplicate
of the same mundane office.
2.      For it to have an advantage over "staying in this reality" the VR
must be different. Some fundamental difference must provide an advantage
that allows the user to get his work done faster. It should make MORE
information MORE EASILY understood.

        I have read postings to this group about how to display text
on a 3-d display. I don't know, and probably more importantly I don't
care! Text is a 2-D way of communication. If I want to read something
I will use a regular flat screen (read 2-D) display. Remember the quote
"A picture is worth a thousand words."? Well how many words is a 3-D
model that can be viewed from any angle (and moved into, and can work 
interactively) worth? I'm willing to bet much more than a thousand. :-)

Thanks for listening.
--
-----------------------------------------------------------------------
Patrick W. Brewer          President of CATT Program at NCSU 
noble@shumv1.ncsu.edu

hlab@milton.u.washington.edu (Human Int. Technology Lab) (09/26/90)

In article <8077@milton.u.washington.edu> wex@dali.pws.bull.com (Buckaroo Banzai) writes:
> 
> I can see two problems - one theoretical, and one implementational.  Theory
> first: The subdivision of space carries with it the implicit assumption that
> object have no effects outside their volume.  This will break down if you
> get to forces that act over a significant distance (say, the effect of solar
> wind and gravity on an earth-moon flight).

Ah, but that's only a problem in Newtonian universes where there is action
at a distance :-).  So let's make all our universes Einsteinian, and let
all forces be the result of local effects.  Global results will occur as
the result of communication between Space objects.  The cost for this is a
large volume of message traffic when the system starts up, or when
different parts of the universe first get connected, but that should subside
rapidly as the system relaxes to its steady state.

> 
> The implementational problem comes the first time you have an adhesive
> collision (say, ball into catcher's mitt) occurring at the boundary of two
> volumes.  There will be *lots* of message-passing going on.

I'm not sure that's necessarily true.  A lot of graphics and text layout
software (TeX and InterViews spring to mind) uses a similar concept, in
which the individual components don't care about their positions, but are
all connected to special "glue" components which maintain local
relationships between non-glue components.  Global relationships are
maintained by modifying the parameters of the glue (say, by adding "cost"
functions; in terms of physics, this is adding energy functions and
conservation laws).
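
As a toy example of the glue idea (the numbers and the crude relaxation
scheme are arbitrary; the point is only that knowledge of position lives in
the glue, not in the components):

# Components don't manage their own positions; a glue object maintains a
# local relationship (a preferred gap) by minimizing a cost.

def glue_cost(gap, preferred, stiffness):
    return stiffness * (gap - preferred) ** 2

def relax(positions, glues, step=0.01, iterations=2000):
    """Nudge positions downhill on the total glue cost (crude gradient step)."""
    for _ in range(iterations):
        for (a, b, preferred, stiffness) in glues:
            gap = positions[b] - positions[a]
            grad = 2 * stiffness * (gap - preferred)
            positions[a] += step * grad      # pull the pair toward the
            positions[b] -= step * grad      # preferred separation
    return positions

positions = {"mitt": 0.0, "ball": 10.0}
glues = [("mitt", "ball", 0.0, 1.0)]         # adhesive collision: gap -> 0
print(relax(positions, glues))               # both end up near the midpoint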

>  The more I think about it, the more I think I want a
> parallel language to do any kind of reasonable modeling.  Unfortunately, I
> don't know squat about parallel programming :-( That's why I was hoping to
> sneak by with a parallel implementation of an object-oriented language I did
> know.
> 

I can't see how we can do it without parallel programming.  But I think
your instinct was right: it's possible to hide the parallelism behind an
object-oriented model in which objects can run in parallel.  Only the
systems programmers need to face the synchronization problems directly.
--
---------------------------------------------------------------------------
NOTE: USE THIS ADDRESS TO REPLY, REPLY-TO IN HEADER MAY BE BROKEN!
Bruce Cohen, Computer Research Lab        email: brucec@tekcrl.labs.tek.com
Tektronix Laboratories, Tektronix, Inc.                phone: (503)627-5241
M/S 50-662, P.O. Box 500, Beaverton, OR  97077

mike@x.co.uk (Mike Moore) (09/26/90)

broehl@watserv1.waterloo.edu (Bernie Roehl) writes:
>
>
>wex@dali.pws.bull.com (Buckaroo Banzai
>) writes:
>>   If I list an attribute  *it* doesn't know about, that's
>>   fine; it doesn't send it and I leave the attribute at its default value.
>>
>>So what's the default value for something like "has a door I can walk
>>through"?  Do you see how much knowledge is presupposed simply in asking
>>that question?
>
>I don't think of a door as an attribute; I would say it's an object that's
>contained within the room.
>
>The door object sends its appearance, location, etc; if I (for example) touch
>the doorknob, the door responds by altering its orientation and appearance
>to show me whatever's on the other side.  I pass through the door, and I'm
>(transparently) transported to another room.
>
>In principle, I could bring a door to my house with me whereever I go, and
>leave it behind in case people want to come over and visit.

As an interjection:

It strikes me that this is an example of the kind of behaviour necessary
in the real world, but not in virtual reality.  What I'm saying is that
in a VR environment if we don't want people entering a room, we don't
even tell them it's there, they just see a blank wall (or one with
pictures hung on it, or whatever) and all the hacking in the world won't
change that fact.  For people we do want to allow in, there is simply an
entry point; no messing around with doors, just a 'transporter' machine/
object which moves you into the chosen room/building/area/'country',
wherever you intend going.

With regards to the object attribute protocol:

In my (humble) opinion, there isn't a need for every object to shout its
attributes at me when I enter a room.  An object is a passive item that is
acted on, and, if I perform an action recognised by the object, it performs
an action of its own.  I.e. I swing the baseball bat, the baseball bat
pushes the baseball, the baseball alters its trajectory *and* pushes
the baseball bat, the baseball bat pushes me.  The space in which each
of the objects (including me) exists defines the trajectory rules that
the ball operates on, and the baseball bat operates on me (i.e. friction
due to my feet touching the ground - ground, what ground?  I'm spinning!)
The baseball bat, the baseball, myself and the space in which we exist can
be processing on entirely separate machines; there is only a need to
communicate visual information when I enter the room, and other information is
gathered empirically (i.e. the weight of the bat is discovered when I lift
it).  Of course, having discovered an empirical attribute (and assuming it
doesn't change) there is no need to have the attribute sent again.
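
In code terms I imagine something like the following (method names invented):
the attribute comes back as the side-effect of my action, and once discovered
it is cached so it never needs to be sent again.

# "Empirical" attributes: nothing is broadcast; you find out the weight of
# the bat by lifting it, and you cache what you learn.

class Bat:
    _mass = 0.9   # kg; private to the bat, never announced

    def lift(self):
        # The reaction to my action carries the attribute implicitly.
        return {"resistance": self._mass * 9.8}

class Avatar:
    def __init__(self):
        self.known = {}                    # cached empirical attributes

    def lift(self, thing):
        key = ("mass", id(thing))
        if key not in self.known:
            reaction = thing.lift()
            self.known[key] = reaction["resistance"] / 9.8
        return self.known[key]

me, bat = Avatar(), Bat()
print(me.lift(bat))    # discovered empirically: ~0.9
print(me.known)        # and cached, so it needn't be sent again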

Something else I'd like to start a discussion on is the apparent necessity
we have of modelling the real world.  I believe that so long as the physical
laws are apparent, there is no need to extend beyond this (of course, we
don't *really* want to accurately model somebody jumping off the Golden
Gate Bridge!).  Familiar objects are already changing in the real world,
push-button phones as opposed to rotary phones, digital display watches
as opposed to analogue display.  The virtual reality would begin to alter
these 'familiar' objects in the same way that digital electronics has
already altered the real world examples I've given.  I'm currently thinking
about what might be the most spectacular changes, but the 'door' argument
above is a good enough example to begin with.

Comments, opinions?
-- 
---
Mike Moore
mike@x.co.uk or mike@ixi-limited.co.uk
Usual and obvious disclaimers... etc

wex@dali.PWS.BULL.COM (Buckaroo Banzai) (09/28/90)

In article <8204@milton.u.washington.edu> mike@x.co.uk (Mike Moore) writes:
 
   For people we do want to allow in there is simply an   entry point, no
   messing around with doors just a 'transporter' machine/object which moves
   you into the chosen room/building/area/'country', wherever you intend
   going.
 
Well... yes and no.  I'm extremely fond of the "new modes of interaction"
idea.  Driving while looking in the rear-view mirror will only get us so
far.  But on the other hand, it's extremely hard to ignore, as Meredith
Bricken put it, the fact that we're wired for up/down, forward/backward
one-step-at-a-time.  It's some of the most deeply learned behaviors and
relationships we have to the world.  That's why I agree with the assertion
that you can't train a cybernaut, you're going to have to breed one.
 
One need only see the differences in adults & children using a powerglove in
order to see the truth of this.  You and I are already too old, our brains
too ossified.  We're trained that if we want to enter a room, we look for a
more or less conventional entry point (door, window, chimney).  Those of us
with slightly bent minds can accept a teleporter to get us inside.  But the
"transporter machine" is still a door in terms of its affordances.
 
   In my (humble) opinion, there isn't a need for every object to shout it's
   attributes at me when I enter a room.  An object is a passive item that is
   acted on, and, if I perform an action recognised by the object it performs
   an action of it's own.
 
But this begs the question.  If the objects in the room don't shout at you,
how do you know they're there?  How do you know what you can do with 
them?
 
   Something else I'd like to start a discussion on is the apparent necessity
   we have of modelling the real world.  I believe that so long as the physical
   laws are apparent, there is no need to extend beyond this (of course, we
   don't *really* want to accurately model somebody jumping off the golden
   gate bridge!).
 
See above for a partial answer to this.  I'll also recommend again my two
favorite papers on this topic:
 
        Smith, Randall B.  "Experiences with the Alternate Reality Kit: An
        Example of the Tension Between Literalism and Magic," CHI+GI'87
        Conference Proceedings, April 1987.
 
and
 
        Fairchild & Gullichsen.  "From Modern Alchemy to a New Renaissance,"
        MCC Technical Report HI-400-86, December, 1986.
 
--
 
--Alan Wexelblat                        phone: (508)294-7485
Bull Worldwide Information Systems      internet: wex@pws.bull.com
"Politics is Comedy plus Pretense."

broehl@watserv1.waterloo.edu (Bernie Roehl) (10/01/90)

In article <8204@milton.u.washington.edu> mike@x.co.uk (Mike Moore) writes:
>In my (humble) opinion, there isn't a need for every object to shout it's
>attributes at me when I enter a room.  An object is a passive item that is
>acted on

Agreed, provided we allow "acted on" to encompass "looked at".  That is, if
I tell an object I'm looking at it, it tells me what it looks like (unless
it's invisible).

The reason I was suggesting such information be cached in the room server
is purely to reduce network traffic.
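
Purely as illustration, something like:

# "Acted on" includes "looked at"; the room server caches appearance replies
# just to cut down on network traffic.

class RoomServer:
    def __init__(self, objects):
        self.objects = objects        # name -> object
        self.cache = {}               # name -> cached appearance

    def look_at(self, name):
        if name not in self.cache:
            self.cache[name] = self.objects[name].appearance()  # one network hop
        return self.cache[name]       # later lookers hit the cache instead

class Chair:
    def appearance(self):
        return {"form": "chair", "color": "red"}

room = RoomServer({"chair": Chair()})
print(room.look_at("chair"))          # fetched from the object
print(room.look_at("chair"))          # served from the room's cache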

-- 
        Bernie Roehl, University of Waterloo Electrical Engineering Dept
        Mail: broehl@watserv1.waterloo.edu OR broehl@watserv1.UWaterloo.ca
        BangPath: {allegra,decvax,utzoo,clyde}!watmath!watserv1!broehl
        Voice:  (519) 885-1211 x 2607 [work]

tsmith@uunet.UU.NET (Timothy Lyle Smith) (10/02/90)

In article <8370@milton.u.washington.edu> wex@dali.PWS.BULL.COM (Buckaroo Banzai) writes:
>far.  But on the other hand, it's extremely hard to ignore, as Meredith
>Bricken put it, the fact that we're wired for up/down, forward/backward
>one-step-at-a-time.  It's some of the most deeply learned behaviors and
>relationships we have to the world.  That's why I agree with the assertion
>that you can't train a cybernaut, you're going to have to breed one.
> 

  Sorry about the deleted text, but our mailer has this new line count check
which can make things difficult.  I am going to have to disagree with both
you and Mr. Bricken.  Have you ever tried to learn how to play a drum set?
A friend of mine recently tried to teach me how to play one, and I found out
how difficult it can be to remember that 3 comes after 2.  The process
involved starting with hitting the cymbal while counting out
loud.  This then moved on to hitting the snare drum on even beats and then
hitting the bass drum on odd beats, all the while counting out loud.  I
couldn't handle all the different activities at once; this was driven home
by the many times that I got to the count of 2 and stopped because I did not
remember what came after 2.

  As I practiced it I was able to continue for multiple counts of 4.  With
enough practice I would have been able to do even better.  This only points
out that with practice it is possible for at least 2 people to learn how to
operate in a more than one-step-at-a-time world.  It is possible that we are just
more capable than others, or it may be that it is easier to exist in a one-
step-at-a-time world, so that we don't create actions which require more
steps to be done in parallel.  Those actions which do require
coordination of multiple events at the same time are actions which do not
happen in the day to day life of most people. IMHO, I think that we can
deal with as many different events as we have ways of providing controlling
devices for those events.  By controlling devices I mean those devices
which are directly or indirectly connected to us, fingers, toes, feet, and
etc.

-- 
Tim Smith  
Minard 300                      UUCP:        ...!uunet!plains!tsmith
North Dakota State University,  BITNET:      tsmith@plains.bitnet
Fargo, ND  58105                INTERNET:    tsmith@plains.NoDak.edu

brucec%phoebus.labs.tek.com@RELAY.CS.NET (Bruce Cohen;;50-662;LP=A;) (10/05/90)

In article <8511@milton.u.washington.edu> plains!tsmith@uunet.UU.NET (Timothy Lyle Smith) writes:
> [description of learning how to use a drum set deleted] ...
>
>   As I practiced it I was able to continue for multiple counts of 4.  With
> enough practice I would have been able to do even better.  This only points
> out that with practice it is possible for at least 2 people to learn how to
> operate in a more than one-step-at-a-time world.  It is possible that we are just
> more capable than others or it may be that it is easier to exist in a one-
> step-at-a-time world so that we don't create actions which require more
> steps needing to be done in parallel.  Those actions which do require
> coordination of multiple events at the same time are actions which do not
> happen in the day to day life of most people.

I don't agree: I drive a stick shift car which requires me to coordinate my
left foot and right arm to shift and my right foot to control the
accelerator, while simultaneously steering with my left hand and watching
around to make sure I don't run into something.  I can keep up a
conversation while doing this as well.  As far as I know, this is a common
ability.

> IMHO, I think that we can
> deal with as many different events as we have ways of providing controlling
> devices for those events.  By controlling devices I mean those devices
> which are directly or indirectly connected to us, fingers, toes, feet, and
> etc.
> 
> -- 

I agree with this, with the proviso that these sorts of control tasks
typically require a great deal of training and/or practice for an operator
to become good at one of them, and there's some overhead in adding one,
even one you're already trained for, to the set you can handle
simultaneously.

Personally, I would rather use mechanisms I already know, like picking
up objects, folding and spindling them, and throwing them into round
receptacles :-), than try to learn a bunch of new motor skills, each one of
which is specific to a particular step in one task which I perform as a
part of my work.  As I see it, one of the major benefits of the VR style of
user interface is that the system is designed to map itself to analogies
which are familiar to the user, so that training reduces to exploration and
existing motor skills can be used.

--
---------------------------------------------------------------------------
Speaker-to-managers, aka
Bruce Cohen, Computer Research Lab        email: brucec@tekcrl.labs.tek.com
Tektronix Laboratories, Tektronix, Inc.                phone: (503)627-5241
M/S 50-662, P.O. Box 500, Beaverton, OR  97077

wex@pws.bull.com (Buckaroo Banzai) (10/06/90)

In article <8511@milton.u.washington.edu> plains!tsmith@uunet.UU.NET (Timothy Lyle Smith) writes:
>[...] with practice it is possible for at least 2 people to learn how to
>operate in a more than one-step-at-a-time world.  It is possible that we are just
>more capable than others or it may be that it is easier to exist in a one-

You misinterpret me.  What I meant, when I quoted Ms. Bricken's "one-step-at-
a-time" phrase, was that this was our learned mode of locomotion (step as in
what we do with our feet).  The remark was made in reference to Mike Moore's
thoughts about non-linear locomotion means.  I was trying to say that I
agreed with Mike about the desirability and implementability of such means,
but that users going into such an environment would be fighting decades of
learning, and millennia of evolution.

For example, consider the plight of the highly-educated, highly-intelligent
gentlemen whom I observed playing Mattel's GLOVEBALL game at the Interactive
Experience at CHI'90:  This game (the first designed explicitly for the
powerglove) is a relatively simple break-out style game in 3D.

While children had no trouble picking up the concepts of this game, older users
exhibited the following problems:
        - certain of the bricks had question marks on them.  This indicated a
special-scoring brick.  Older users repeatedly asserted that the question
mark probably indicated (paraphrase) "this is how to get help with the system."
        - at random times, various creatures would appear on the screen.  The
game allows one to shoot by making a gun shape with the hand.  The person
from Mattel running the game was repeatedly asked (again, by the older non-
video generation) "What are those creatures?"  The answer "The meanies! of
course" was deemed not acceptable by most of the questioners.
        - The game consists of about a hundred interconnected playing areas.
One moves out of an area into another by clearing all the bricks from a wall
and then flying into/through the wall.  Many users, including myself this time,
had trouble remembering this when told (no one discovered it on their own).
Once told it was a wall, that concept stuck; the idea of walking through
walls, although a popular fiction in our culture, is still highly counter-
intuitive.

I could go on, but the point, I think, is made: we carry enormous baggage
inside our heads.  We will inevitably bring this baggage into cyberspace.
If we build our VRs too unlike users' expectations, they will be unable to
use them.  Remember that we are a strange bunch of enthusiasts, willing to
try anything.  Before we get the average mechanical engineer to use one of
these things every day, we're going to have to do a lot of adapting (to him).


-- 
--Alan Wexelblat                        phone: (508)294-7485
Bull Worldwide Information Systems      internet: wex@pws.bull.com
"Politics is Comedy plus Pretense."

lance@uunet.UU.NET (10/10/90)

In article <8511@milton.u.washington.edu> plains!tsmith@uunet.UU.NET
(Timothy Lyle Smith) writes:
> [description of learning how to use a drum set deleted] ...
>
> It is possible that we are just
> more capable than others, or it may be that it is easier to exist in a one-
> step-at-a-time world, so that we don't create actions which require more
> steps to be done in parallel.

Ahhhhh, but a VR drum interface would include a simple set of drills to
teach you how to use your sticks.  Like those "typing tutor" programs.
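
Purely as an illustration of that "typing tutor" idea -- nothing below is
from Lance's post, and the beat pattern, timing tolerance, and scoring are
all invented -- a minimal drill might look like this sketch in Python:

    # Sketch of a "typing tutor"-style drum drill: grade a sequence of
    # hit times against a target beat pattern.  The tempo, tolerance,
    # and scoring scheme are placeholders, not from any real system.

    TOLERANCE = 0.10   # seconds of slop allowed around each target beat

    def make_pattern(tempo_bpm=120, beats=8):
        """Target hit times (in seconds) for a simple steady pattern."""
        interval = 60.0 / tempo_bpm
        return [i * interval for i in range(beats)]

    def score_hits(targets, hits, tolerance=TOLERANCE):
        """Pair each target beat with the nearest hit and grade it."""
        results = []
        for t in targets:
            nearest = min(hits, key=lambda h: abs(h - t)) if hits else None
            if nearest is not None and abs(nearest - t) <= tolerance:
                results.append(("ok", t, nearest - t))
            else:
                results.append(("miss", t, None))
        return results

    if __name__ == "__main__":
        targets = make_pattern()
        # Stand-in for hit times captured from a glove or drum-pad sensor.
        hits = [0.02, 0.51, 1.07, 1.49, 2.03, 2.48, 3.20]
        for verdict, t, error in score_hits(targets, hits):
            if verdict == "ok":
                print("beat at %.2fs: hit (off by %+.0f ms)" % (t, error * 1000))
            else:
                print("beat at %.2fs: missed" % t)

A real drill would of course read the hit times from the glove or pads in
real time and tighten the tolerance as the student improves; the point is
only that the drill logic itself is small.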

I've got a childhood of classical piano and an adulthood of professional
typing under my belt, and I have no problem with the concept of running
a computer with my hands, feet, and a breath controller to boot.  I've 
been hunting for a foot-controlled mouse for years, and I've finally found
an add-on to those airplane-stick joysticks that provides two foot pedals
and plugs into an IBM joystick port.

Lance

xanthian@zorch.SF-Bay.ORG (Kent Paul Dolan) (10/17/90)

wex@pws.bull.com (Buckaroo Banzai) writes:
>
>I could go on, but the point, I think, is made: we carry enormous baggage
>inside our heads.  We will inevitably bring this baggage into cyberspace.
>If we build our VRs too unlike users' expectations, they will be unable to
>use them.

The "danger", I think, comes from a close, but inexact, approximation of
reality.  If the approximation is very good, then there will be little
dissonance with expectation.  But this is also true if we create a "reality"
in which _none_ of the expected rules hold good.  It is the middle case,
where things that look like they should work, don't, that causes the most
dissatisfaction.

>Remember that we are a strange bunch of enthusiasts, willing to
>try anything.  Before we get the average mechanical engineer to use one of
>these things every day, we're going to have to do a lot of adapting (to him).

Before I could use a car every day, I had to do a lot of adapting to it.
It is a highly useful skill with a high payoff for success, so I was
willing to invest the effort.  Don't assume that the M.E. won't invest the
effort to acclimate to a lot of counter-intuitive behavior if the reward is
sufficient.  Our systems need first to be useful; easy can come later.

Example: give me a V.R. in which I can develop an intuition for topological
theory, and I will invest incredible effort to get past the barriers implicit
in your interface, because the present tools for learning topology do _not_
grant me an intuition, and it frustrates me immensely.

Kent, the man from xanth.
<xanthian@Zorch.SF-Bay.ORG> <xanthian@well.sf.ca.us>

pepke@SCRI1.SCRI.FSU.EDU (Eric Pepke) (10/19/90)

In article <9397@milton.u.washington.edu> xanthian@zorch.SF-Bay.ORG (Kent 
Paul Dolan) writes:
> The "danger", I think, comes from a close, but inexact, approximation of
> reality.  If the approximation is very good, then there will be little
> dissonance with expectation.  But this is also true if we create a 
"reality"
> in which _none_ of the expected rules hold good.  It is the middle case,
> where things that look like they should work, don't, that causes the most
> dissatisfaction.

I think that the trick is to correctly identify those aspects of the world
which most help us get around in it, and to spend most of the effort on
doing the same kinds of things in VR.  When we design interfaces, they
should be such that their limitations can easily and quickly be learned and
accepted.

I have played with the VPL system, the Very Nervous System, and the 
Mandala system.  All are very nice in different ways.  But, in a very 
important way, my best virtual reality experience came from the video game 
Battle Zone.

I think that there are a number of things that contributed to its success. 
 For one thing, the model of interaction was simple and easily 
understandable, and one quickly got used to what one could and could not 
do.  Where there were frustrations, such as running into a pyramid and 
being unable to move, they were due to constraints of the problem, not the 
interface.  Another advantage was that the graphics were simplified to the 
point that they could be done fast enough to be within the human closed 
feedback loop tolerances of about 200 ms.  There were little features that 
improved it as well, such as bone-conduction of the sound effects through 
the forehead and the "cracking" of the CRT when you got hit.
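
As a minimal sketch of that latency-budget idea -- the ~200 ms figure is
Eric's, but the loop structure, names, and the "simplify" comment below are
invented for illustration -- a frame loop could watch its own timing like
this (Python):

    # Check each frame's elapsed time against a rough closed-loop
    # feedback budget.  The budget value and the degradation strategy
    # are placeholders, not taken from Battle Zone or any real system.

    import time

    FEEDBACK_BUDGET = 0.200   # seconds; rough human closed-loop tolerance

    def run_frames(render_frame, n_frames):
        over_budget = 0
        for _ in range(n_frames):
            start = time.perf_counter()
            render_frame()                    # sample input + draw the scene
            elapsed = time.perf_counter() - start
            if elapsed > FEEDBACK_BUDGET:
                over_budget += 1
                # A real system might drop detail here (fewer polygons,
                # coarser shading) rather than let the lag grow.
        return over_budget

    if __name__ == "__main__":
        # Stand-ins for rendering work of different costs.
        slow = run_frames(lambda: time.sleep(0.25), n_frames=4)
        fast = run_frames(lambda: time.sleep(0.01), n_frames=4)
        print("slow renderer blew the budget on %d of 4 frames" % slow)
        print("fast renderer blew the budget on %d of 4 frames" % fast)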

But I think the fact that it provided, like most video games, an interface 
which is not really intuitive but is simple and predictable, was the 
biggest factor in its success.  It is the predictability more than the
intuitiveness which lets us learn how to use the interface without 
thinking about every little detail.  If the interface is designed so that 
its limitations are inherent and are likewise predictable, then much of 
the frustration goes away.

Take the VPL system.  I am beginning to think that it is a mistake to make 
the effector figure appear as a hand.  It clearly does not behave as a 
hand does.  To fly about in the world, you point in the direction you want 
to go and lower your thumb, as a child pretending to fire a pistol.  This 
is strange.  I would much rather have a specialized interface unit, which 
might be held in the other hand, that I can orient in the direction I want 
to go and then push a rocker or slider switch to go there.  The 
limitations of one rigid object, switch clicking forward or backward, are 
things I can literally feel.  I am not likely to expect it to do something 
it cannot.
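
To make the contrast concrete, here is an invented sketch (Python) of the
two mappings -- the speeds, field names, and the thumb-gesture test are all
assumptions, not VPL's actual code:

    # Two ways to turn hand state into a flying velocity: the glove's
    # point-and-lower-the-thumb gesture versus a hand-held unit with a
    # rocker switch.  All constants and names are illustrative only.

    FLY_SPEED = 2.0   # metres per second, arbitrary

    def glove_velocity(pointing_dir, thumb_down):
        """Gesture mapping: fly along the pointing direction while the
        thumb is lowered, like a child 'firing a pistol'."""
        if thumb_down:
            return tuple(FLY_SPEED * c for c in pointing_dir)
        return (0.0, 0.0, 0.0)

    def rocker_velocity(unit_dir, rocker):
        """Hand-held unit: rocker is +1 (forward), -1 (backward), or 0.
        The unit's own orientation picks the direction of travel."""
        return tuple(rocker * FLY_SPEED * c for c in unit_dir)

    if __name__ == "__main__":
        ahead = (0.0, 0.0, -1.0)             # unit vector, "into the screen"
        print(glove_velocity(ahead, thumb_down=True))    # flying forward
        print(glove_velocity(ahead, thumb_down=False))   # hovering
        print(rocker_velocity(ahead, rocker=-1))         # backing up

The rocker version does nothing the gesture version couldn't be taught to
do; the difference Eric is after is that the switch's two positions are
limitations you can feel in your hand, so you never expect more of it.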

On the other hand, it may be that after spending a few hours in the 
system, all of those funninesses go away.  I won't know until I get my 
Power Glove interface and homebrew goggles together, if I ever manage it.

Eric Pepke                                    INTERNET: pepke@gw.scri.fsu.edu
Supercomputer Computations Research Institute MFENET:   pepke@fsu
Florida State University                      SPAN:     scri::pepke
Tallahassee, FL 32306-4052                    BITNET:   pepke@fsu

Disclaimer: My employers seldom even LISTEN to my opinions.
Meta-disclaimer: Any society that needs disclaimers has too many lawyers.