hlab@milton.u.washington.edu (Human Int. Technology Lab) (06/27/91)
Reposted from The WELL (415-332-6106) vr conference, by permission
of Johannes Nicholas Johannsen:
Topic 75: Virtual Worlds Conference at SRI
By: Johannes Nicholas Johannsen (jojo) on Wed, Jun 19, '91
Anyone go to the SRI conference? (Virtual Worlds: Real Challenges)
I went, and thought it was great. Nearly every presentation was by
someone doing real work in the field (i.e. VR itself or technologies
that help create a feeling of presence in a virtual world). There
was so much covered, and a lot I missed because of the parallel sessions.
Anyway, here's a random sampling of a few things I remember:
Mark Bolas sold me on his boom system for entering a visual world, mainly
because it's so easy to enter and leave the world (he compared it to
using a telephone handset). The software application development
toolkits were interesting; only three companies were represented:
Sense8, with a complete VR system for $20,000; VPL, with a complete VR
system for $250,000; and Autodesk, with a complete VR system, price
unknown, release date unknown. VPL's was the best, which you might have
guessed by the price. Sense8 is pretty good if you are a programmer, or
think $20k isn't all that much (as opposed to $250k).
The presentation I found most interesting was by a surgeon who is
planning on doing telepresence (stereo vision) surgery, probably fairly
soon. He said surgeons have already made the required leap of faith when
they started doing surgery with a mono-view camera inside the body and
mini-surgery tools not directly controlled by their hands.
Besides being an interesting way to operate, it's pretty good for the
patients -- in his example of a gall bladder operation, patients spent a
week in the hospital with a six-week recovery after traditional
cut-em-open surgery, as opposed to one day in the hospital and a one-week
recovery with this mini-video-through-the-hole surgery (he called it
laparoscopic surgery). Anyway, he seemed convinced that a stereo view
would make these operations go a lot more smoothly, since surgeons
currently spend a lot of time poking around to get a feeling of depth
because of the mono view.
And in case you are wondering what this has to do with VR (as I was),
the explanation is that as soon as the surgeon is viewing screens and
operating tools as seen on the screen, the surgery is already virtual;
the patient doesn't necessarily have to really exist if the video
feedback is appropriate.
Topic 75: Virtual Worlds Conference at SRI
# 2: Johannes Nicholas Johannsen (jojo) Thu, Jun 20, '91 (16:31) 81 lines
Here's some of the other stuff I saw:
- an input device for computers which senses muscle tension: it was
able to sense eye movement with a small band placed on the forehead,
and muscle tension anywhere they could strap on a sensor (which I never
did see). The device also senses brain waves, but they said this signal
was only accurate enough to act as a switch rather than being used for
more sophisticated control.
- TiNi Alloy's tactile output device, which can get small enough to
put about 40 touch-pixels on the end of your finger. They had a mouse
with about 5 of these pixels on the button, so you can feel it when you
move the pointer over certain spots on the screen, and a glove with
touch pixels on the fingertips, which I didn't try.
- the Convolvotron, which uses 300 MIPS just to place a sound "out there"
at a specific location in real time (see the sketch after this list).
VPL systems use this.
- lots of stuff related to VR for people with physical disabilities.
This is fairly relevant since, as in VR, often direct interaction
with the world is impossible, and technology must be used to bridge
the gap.
- another surgery presentation, heavy on the aspects of VR simulation
for training and for future robot-controlled surgery. There are lots of
advantages to giving up direct control in situations like this: the
surgeons aren't limited by their physical size (lots can work together)
or location (only digital communication is required to control the
robots), and altering the scale of movement on the robots can simplify
tricky situations.
- a robot arm VR for doing something underwater, in which the arm kind
of pokes around an object until it gets enough data points for a 3-D
picture; the operator then uses the resulting picture, changing the
point of view, to deal with the object -- pick it up, I guess. This
wasn't stereo, but like the surgery, it could benefit from depth
perception.
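A side note on the Convolvotron: as I understand it, the underlying
technique is convolving the source sound with a pair of head-related
impulse responses, one per ear, for the direction the sound should come
from. Here is a minimal sketch of that idea in Python; the impulse
responses below are crude placeholders (just an interaural delay and
level difference), where the real system uses measured HRTF data and
interpolates as your head moves:

    import numpy as np

    def spatialize(mono, hrir_left, hrir_right):
        # Convolve a mono signal with left/right head-related impulse
        # responses (HRIRs) to get a binaural stereo signal.
        left = np.convolve(mono, hrir_left)
        right = np.convolve(mono, hrir_right)
        return np.stack([left, right], axis=-1)

    # Placeholder HRIRs: the near ear hears the sound earlier and louder.
    # These interaural time and level differences are among the cues the
    # brain uses to place a sound "out there."
    hrir_l = np.zeros(128); hrir_l[0] = 1.0     # near (left) ear
    hrir_r = np.zeros(128); hrir_r[30] = 0.6    # far (right) ear

    fs = 44100
    t = np.arange(fs) / fs
    source = np.sin(2 * np.pi * 440 * t)        # one-second test tone
    stereo = spatialize(source, hrir_l, hrir_r) # tone appears to the left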
The Sense8 software keeps looking better, so I placed an order. Its frame
rate is decent: 7-8 frames per second for fairly simple worlds on a 25 MHz
486 with soon-to-be-obsolete DVI boards (supposedly a faster next
generation is out soon). It is entirely possible that the frame rate more
than doubles within a year.
There were a few interesting things that I learned, such as how our
vision gives a seemingly uniform high resolution even though the number
of photoreceptors in the eye decreases as you move away from the center
of the visual field. Similarly, our sense of touch is processed into a
somewhat consistent feel from several types of sensors with different
distributions in the skin. Another interesting thing was that telepresent
people work better when software simulations eliminate the time delays in
their teleoperations, even though the software model may not be entirely
accurate. It probably works because the simulation is accurate most of
the time.
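That last point is worth a sketch. The idea, as I understood it, is that
the operator steers against a local software model that responds
instantly, while the real commands catch up over the delayed link. A toy
version in Python (the class and the numbers are mine, not from any
presentation):

    import collections

    class PredictiveDisplay:
        # Show the operator a locally simulated result of each command
        # immediately, while the real command travels through a
        # transmission delay to the remote arm.
        def __init__(self, delay_steps):
            self.sim_position = 0.0      # local model of the arm
            self.remote_position = 0.0   # the actual remote arm
            self.in_transit = collections.deque([0.0] * delay_steps)

        def step(self, command):
            self.sim_position += command       # instant visual feedback
            self.in_transit.append(command)    # command still traveling
            self.remote_position += self.in_transit.popleft()  # arrives late
            return self.sim_position   # what the operator sees and steers by

    display = PredictiveDisplay(delay_steps=5)
    for cmd in [0.1, 0.1, -0.05, 0.2]:
        shown = display.step(cmd)  # sim leads; remote catches up later

The display only misleads the operator when the local model diverges from
the real arm, which fits the observation that it works because the
simulation is accurate most of the time.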
The areas I missed were system architecture, data visualization, virtual
worlds and learning, and arts and design. It was nice to be at a
conference where I wanted to be in two places at once, though
unfortunately, being a somewhat lazy person, often I wasn't even in one
place at once. Also, I left before the "group-designed world" where the
conference participants directed the construction of a virtual world. I
left during the future-issues session when the conversation turned to
race and gender in VR; bizarre agendas strike again.
One thing that almost struck me as strange was the lack of imagination
in the applications discussed. For some reason most applications consist
of physical objects such as boxes, walls, rooms, which represent (surprise)
boxes, walls, rooms (though the VPL demo did have a magic hat that turned
into a rose when grabbed). Visual properties only, and no symbols or data.
If you drop away the mapping from physical-visible to virtual-visible,
there is nothing left to see. There is rarely a physical-invisible to
virtual-visible mapping, even though the invisible properties may be
relevant to many applications.
Similarly, visualizing the invisible (as in the "data plane" topic) is
rarely mentioned when potential applications are enumerated. In some ways
this makes sense given the additional complexity of having an
information-analysis phase. On the other hand, there are cases where
by its nature the information must be structured (as in compiled
program text) or where the information analysis is not all that
complicated. In these cases immersion allows physical location to
convey information, and specific groupings of information can be shown
symbolically. It might sound weird, but no weirder than thinking.
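To make that concrete, here is a toy example of a physical-invisible to
virtual-visible mapping: taking invisible properties of program text
(size, how often each function is called) and giving them visible form.
Everything here is made up, just to show the shape of the idea:

    # Invisible properties of code, as data (all names hypothetical).
    functions = [
        {"name": "parse",  "lines": 120, "calls": 40, "module": 0},
        {"name": "render", "lines": 300, "calls": 15, "module": 1},
        {"name": "log",    "lines": 20,  "calls": 90, "module": 0},
    ]

    def to_visible(fn):
        # Map each invisible property onto a visible one: module number
        # becomes grouping in space, call count becomes depth, code size
        # becomes height, and hot spots get flagged by color.
        return {
            "position": (fn["module"] * 10.0, 0.0, fn["calls"] * 0.1),
            "height":   fn["lines"] * 0.01,
            "color":    "red" if fn["calls"] > 50 else "blue",
        }

    world = [to_visible(fn) for fn in functions]

Physical location carries the grouping, and the symbolic part (the color)
flags what matters.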