[sci.virtual-worlds] VR directions of growth IV,V

williamb@milton.u.washington.edu (William Bricken) (12/13/90)

Virtual Reality:  Directions of Growth
    Notes from the SIGGRAPH '90 Panel 
    
    Copyright (C) 1990  All Rights Reserved by William Bricken

        William Bricken
        Human Interface Technology Laboratory
        University of Washington, FU-20
        Seattle, WA  98195
        9/10/90
        william@hitl.vrnet.washington.edu



IV.  VIRTUAL WORLD PROJECTS AT HITL

Our knowledge about VR and about how people respond to the VR
experience is being extended at HITL through several active projects:

        information database, sci.virtual-worlds
        simulation laboratory
        virtual environment operating shell
        laser microscanner display techniques
        design and construction of worlds
        3D audio display
        instrument display prototypes
        multiple participant worlds
        educational experiences and environments
        virtual prostheses

The information database is a project for NASA to follow the
development of VR and to serve as a clearinghouse for references and
research in the field.  Sci.virtual-worlds is a moderated USENET
newsgroup for the discussion of VR issues.

The simulation laboratory provides a research environment for
prototyping VR hardware and for testing and evaluating effects on
human sensory, perceptual and psychomotor behavior.

The Virtual Environment Operating Shell (VEOS) is a software suite,
currently written in C, that wraps around the UNIX operating system.
VEOS provides resource and communication management to coordinate the
modules that make up a VR system (a rough sketch follows the list):

        i/o hardware, behavior-transducing input and display devices
        world construction kits, CAD packages
        dynamic simulation kits, for interaction and animation
        virtual world tools
        computational and display processors
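
To make the coordination role concrete, here is a minimal sketch in C
of a single shell "tick" that lets each registered module produce a
message and routes it to the others.  Everything in it (the Module and
Message types, register_module, shell_tick, the toy tracker and
renderer) is my own illustration of the idea, not the actual VEOS
interface.

    /* Minimal sketch of a module-coordination loop; hypothetical
     * names and types, not the real VEOS API. */
    #include <stdio.h>
    #include <string.h>

    #define MAX_MODULES 8

    typedef struct {                  /* a message routed by the shell */
        char from[16];
        char data[64];
    } Message;

    typedef struct {                  /* a registered module           */
        const char *name;
        void (*step)(Message *out);         /* produce output per tick */
        void (*receive)(const Message *in); /* consume routed messages */
    } Module;

    static Module modules[MAX_MODULES];
    static int nmodules = 0;

    static void register_module(const char *name,
                                void (*step)(Message *),
                                void (*receive)(const Message *))
    {
        modules[nmodules].name = name;
        modules[nmodules].step = step;
        modules[nmodules].receive = receive;
        nmodules++;
    }

    /* One tick: each module speaks once, and the shell routes the
     * message to every other module. */
    static void shell_tick(void)
    {
        int i, j;
        for (i = 0; i < nmodules; i++) {
            Message m;
            strcpy(m.from, modules[i].name);
            modules[i].step(&m);
            for (j = 0; j < nmodules; j++)
                if (j != i)
                    modules[j].receive(&m);
        }
    }

    /* Two toy modules: a head tracker and a renderer. */
    static void tracker_step(Message *m) { strcpy(m->data, "head 0,0,0"); }
    static void tracker_recv(const Message *m) { (void)m; }
    static void render_step(Message *m) { strcpy(m->data, "frame drawn"); }
    static void render_recv(const Message *m)
    {
        printf("renderer got [%s] from %s\n", m->data, m->from);
    }

    int main(void)
    {
        register_module("tracker",  tracker_step, tracker_recv);
        register_module("renderer", render_step,  render_recv);
        shell_tick();
        return 0;
    }

A real shell must also manage shared resources, memory, and network
communication, and run the modules asynchronously; the sketch shows
only the message routing.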

The laser microscanner is a hardware research project to design a high
performance, low cost virtual display.  Rather than creating an aerial
image using cathode-ray tubes or matrix element devices, the laser
microscanner scans a color image directly onto the retina.  We don't
think in terms of addressing pixels; we think in terms of addressing
rods and cones directly.  The head-mounted unit will integrate 3D
visual and audio display, voice recognition, and head and eye
tracking.

We build worlds for presentation, evaluation, and experimentation.
Our interest is the design of comfortable, functional worlds.

For Boeing, we are exploring 3D audio display techniques and building
prototypes for the design and display of complex instrument panels and
machines, in essence simulating the design of aircraft cockpits.

We are working on the implementation of multiple participant worlds
for a telecommunications application.  You can think of VR as a
very sophisticated replacement for the telephone.

Education and industrial training are natural applications of VR
techniques.  We are designing virtual environments conducive to
learning, we're studying the transfer of skills between virtual and
actual tasks, and we're exploring the implications of VR for
educational theory and practice.

And we have great interest in the application of VR to prostheses for
the handicapped: providing virtual bodies that extend individual
capacities, and providing alternative control devices for interaction
in virtual worlds.



V.  OTHER RESEARCH AREAS

VR has intersected other areas of research in some surprising ways:

        audio modeling
        teleoperation, telepresence
        image integration, HDTV
        interactive drama
        military simulation

3D audio hardware is commercially available; we should expect to hear
of inclusive sound systems in the stores soon.  Audio theorists are
interested in specification languages for 3D music, in audio lenses
and icons (earcons), and in modeling ambience, the analog of
ray-tracing for sound.
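
As a concrete illustration of that analogy (my own example, not a
system described here), the C sketch below uses the classic
image-source method: mirror the sound source across a wall to find
the reflected path, then derive its delay and attenuation.  The room
geometry, the 1/distance spreading law, and the reflection
coefficient are all assumptions made for the example.

    /* Delay and gain of a direct path and one wall reflection,
     * via the image-source method.  Illustrative numbers only. */
    #include <math.h>
    #include <stdio.h>

    #define SPEED_OF_SOUND 343.0       /* m/s, air at room temp */

    typedef struct { double x, y, z; } Vec3;

    static double dist(Vec3 a, Vec3 b)
    {
        double dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
        return sqrt(dx * dx + dy * dy + dz * dz);
    }

    int main(void)
    {
        Vec3 source   = { 1.0, 1.5, 2.0 };
        Vec3 listener = { 4.0, 1.5, 3.0 };
        double wall_x  = 0.0;          /* wall is the plane x = 0     */
        double reflect = 0.7;          /* wall reflection coefficient */

        /* Mirror the source across the wall: the reflection behaves
         * like a second source at the mirrored position. */
        Vec3 image = source;
        image.x = 2.0 * wall_x - source.x;

        double d_dir = dist(source, listener);
        double d_ref = dist(image, listener);

        /* 1/d spherical spreading; the reflected path also loses
         * energy at the wall.  Delay is path length over c. */
        printf("direct:    %.4f s, gain %.3f\n",
               d_dir / SPEED_OF_SOUND, 1.0 / d_dir);
        printf("reflected: %.4f s, gain %.3f\n",
               d_ref / SPEED_OF_SOUND, reflect / d_ref);
        return 0;
    }

Tracing many such paths against real room geometry and
frequency-dependent materials is the audio analog of ray-tracing a
scene.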

Telepresence, the development of remotely controlled robots, requires
the same interface techniques as VR.  The primary difference between
these disciplines is that teleoperation looks at interaction with real
(usually inaccessible) images, while VR looks at virtual images.  Both want
inclusive, interactive environments.  The possibility of inhabiting
real worlds shook me out of a self-imposed computer graphics
narrowness.  We can apply VR interaction and hardware techniques to
explore anywhere we can place a probe.  We can inhabit a remote
undersea vehicle, processing digitized images into worlds that mix the
actual with the virtual.  We can swallow a miniaturized transmitter
and explore our own stomach.  We can build artificial bees with fiber
optic visual links and micromotors for dancing and for rubbing
antennae.  We can then put our virtual bee-selves into the physical
hive and interact with real bees in their home environment.  I can
hardly wait.

The multimedia community is very interested in digital images.  It
seems only natural that we should port these flatlander tools into VR.
We could tile polygons with TV.  More importantly, automated
conversion of images to 3D objects (the image recognition problem)
would permit a seamless integration of video-real with
graphic-virtual.
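
As a minimal sketch of what tiling a polygon with TV might look like
(the frame size, grey-level format, and function names below are my
assumptions, not any particular system), each redraw of the polygon
samples the most recently digitized video frame as its texture:

    /* Treat a grabbed video frame as a texture map.  Illustrative
     * sketch; a real system would grab a new frame every field. */
    #include <stdio.h>

    #define FRAME_W 640
    #define FRAME_H 480

    /* One grey-level frame from a digitizer (stubbed here). */
    static unsigned char frame[FRAME_H][FRAME_W];

    /* Map a polygon texture coordinate (u, v) in [0,1] onto the
     * current frame, so the polygon shows live video when drawn. */
    static unsigned char sample_video(double u, double v)
    {
        int x = (int)(u * (FRAME_W - 1));
        int y = (int)(v * (FRAME_H - 1));
        return frame[y][x];
    }

    int main(void)
    {
        frame[239][319] = 200;                  /* fake a video pixel */
        printf("%d\n", sample_video(0.5, 0.5)); /* prints 200 */
        return 0;
    }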

Hypertext has raised the question of interactive fiction.  The
theatrical community is working to install plot and character into
virtual worlds, creating interactive drama.  What do a good story and
a good experience have in common?  Can we construct participatory
plots, guided experiences, autonomous characters?

Actually, VR grew up in the military.  The first substantive
application of VR was to help Air Force pilots improve their ability
to aim missiles.  The most refined and widely distributed VR
environment today is SIMNET, a large scale, simulated tank combat
system.  Recently, I saw a paper on training close combat fighters in
VR.  Sort of reminds me of the video arcade.