[sci.virtual-worlds] VR notes Part 01/02 Long Latex File

hentsche@serss0.fiu.edu (Erich Hentschel) (04/12/91)

Last term I spent some time putting together a term paper in Virtual
Reality.  It seems that some people might benefit from it.  I would
also like to get some feedback and comments on it.  It is in "latex"
format, so you will have to process it: latex filename, and then
either view it with a dvi viewer or convert to postscript and print.

There are actually two parts.  Part two is the bibliography file; you'll
need bibtex to get that to work with the notes.

Thanks for any comments and suggestions.

Erich Hentschel
hentsche@serss0
Florida Intl. Univ.
Miami Fl

--------------  Cut Here -------------
\documentstyle[12pt]{article}
\begin{document}
\input{title}
\pagenumbering{roman}
\tableofcontents
\newpage
%\input{latex_commands}
\newcommand{\bd}   {\begin{description}}
\newcommand{\ed}   {\end{description}}
\newcommand{\bi}   {\begin{itemize}}
\newcommand{\ei}   {\end{itemize}}
\newcommand{\be}   {\begin{enumerate}}
\newcommand{\ee}   {\end{enumerate}}
\newcommand{\ind}   {\hspace*{0.0in}}
\newcommand{\bl}   {{\\ \vspace*{.05in} \\}}
\newcommand{\blind} {\bl \ind}
\newcommand{\hci}  {Human Computer Interface}
\newcommand{\vr}   {Virtual Reality}
\newcommand{\ar}   {Artificial Reality}
\newcommand{\ve}   {Virtual Environment}
\newcommand{\ccl}  {Computerized Clothing}
\newcommand{\vw}    {Virtual World}
\newcommand{\env}  {environment}
%
\setcounter{page}{0}
\pagenumbering{arabic}
\section{Introduction}
%\input{intro}
\ind Computers have the potential
to influence and change our entire civilization
because computers are not simply tools that perform tasks
on our behalf; rather, ``computers are a medium''.
The nature of tools, and the learning of how to use them,
reshapes us. \cite{KAY}
\bl
\ind In order for us to communicate via a medium, we must be capable
of becoming the medium itself; in this way the message being
communicated is not corrupted by the medium.  When the user of
a computer has trouble with the interface, the medium
loses its effectiveness.
The goal, then, is to create a \hci\ that reflects the way we interact
with the real world.  It is here that {\bf Virtual Reality}
makes its debut as a powerful
new technology pointing in the direction of this goal.
In a \ve\ the participant establishes a direct connection with
their virtual senses.  The participant does not have to learn
how to extract
information from the medium. \cite{JACOBSON}
\bl
\ind There are two directions being followed in the field of \vr .
Myron W. Krueger, who coined the phrase {\bf Artificial
Reality}, feels that we will pursue an
interface that ``merges seamlessly with the rest of our
environment''. \cite{KRUEGER}  In this view, devices should not be
needed in order to interact with a computer.   The other
direction uses current technology to create various
devices called {\bf Computerized Clothing}.\cite{LANIER}
These devices are being used to
track body position, track head position, recognize voice,
give force feedback, and place the user inside simulated worlds. 
\bl
\ind 
% In this paper we will only consider the second direction.  We will
% explore the various senses being enticed in this inclusive 
% interface.
This paper is a survey of some of the issues related to \vr\ and
some of the current developments in the field.  An analysis of visual
perception will give us insight into how our senses interact with
the environment.   We also explore some of the
issues in the design of \hci s.  A description of \vr\ explains how
current systems operate and what devices have been created to
achieve it.  A view of some developments in the devices that deal
with sound, force--feedback, and touch, as well as some work related
to virtual entities, is presented.   A look at some applications should
explain the excitement that \vr\ is stirring up.
%%%%%%%%
\section{Visual Perception}
%\input{perception}
\ind Our senses can be considered as perceptual systems.
Our visual perception involves more than simple snapshot vision.
We must also consider aperture vision, ambient vision,
and ambulatory vision, since
visual awareness is panoramic and is perceived during
acts of locomotion.  This is the ``Ecological Approach to
Perception'', a new approach in the field of psychology.\cite{GIBSON}
\subsection{ The Environment}
We observe that
animals are organisms that perceive and behave.  Humans perceive
and behave.
Locomotion and behavior are continually
controlled by the activities of seeing, smelling, and hearing,
together with touching.
The \env\ consists of the surroundings of animals; this \env\
includes other organisms as well.
There exists a phenomenon called the ``mutuality of animal and
\env ''.  It is similar to the old puzzle of the tree falling in the
woods far away: does the tree make any sound as it falls?
In this view, surroundings are not considered an \env\ when there
is no animal to perceive them.
Information about the self accompanies information about the
environment.  One perceives the \env\ and coperceives oneself.

\subsection{The Ambient Optic Array}
\ind The central concept of ecological optics is the {\bf ambient
optic array} at a point of observation.  To be an {\em array} means
to have an arrangement, and to be {\em ambient at a point} means
to surround a position in the environment that could be occupied
by the observer.
% Visual Kinesthesis, the 
Optical information gathered when
movement takes place can be analyzed by the following types of
movement.
\bd 
\item [Head Turning -- ] The sweeping of the field of view over
the ambient array.
\item [Limb Movement -- ] Manipulation.  Protrusion of special shapes
into the field of view.
\item [Locomotion -- ] The flow of the ambient array.
\ed
\subsection{The Mediums}
Our planet is formed of air, water, and earth.
These are the states of matter gas, liquid, and solid respectively,
and the interface between them constitutes a surface.  They exhibit the
following characteristics:
\be 
\item Gas and liquid states
afford locomotion to animate bodies, thus they are
mediums for animal locomotion.  Solids present us with
great resistance.
\item 
Gas or liquid mediums are generally transparent, transmitting light.  Solids
are generally opaque, absorbing or reflecting light.  A homogeneous
medium thus affords vision.  A terrestrial medium is not only a region
where light is transmitted but it also reverberates.
\item 
Air and water transmit vibrations or pressure waves outward from
a mechanical event, a source of sounds, thus it affords hearing,
listening to the vibratory event (sound).
\item The medium of air or water permits rapid chemical diffusion
whereas the earth does not.  The medium thus affords smell by
allowing molecules to dissolve or diffuse
outward from a source.
\ee
\ind The medium in which animals live is at the same time the
medium for light, sound, and odor coming from sources in the \env .
It contains information about objects and animals in the
\env\ that reflect light, vibrate or are volatile.
Instead of geometrical points and lines we have points 
of observation and lines of locomotion.  As the observer
moves from point to point the information in the \env\ changes
accordingly.  Each potential point of observation in the medium
is unique in this respect.
\section{Human Computer Interface}
%\input{hci}
\ind 
``The Art of Human Computer Interface Design'', edited
by Brenda Laurel, contains a wealth of information
that sets the stage for \vr\ interface design and
beyond. \cite{LAURELBOOK}
Many issues are presented to help us understand and explore
the possibilities in the development of \hci s.
%\subsection{User Interface: A Personal View}  
\subsection{On User Interfaces}
\ind The most important concept that I found throughout my research has
staggering psychological implications.  Namely, as McLuhan put
it in ``Understanding Media'', the computer is a medium.  A
seemingly harmless statement, but the mere usage of a medium has
been shown to cause profound changes in those who use it.  Entire
civilizations can be reshaped thus.  Other examples that substantiate
this claim are the printing press and the microscope.  In relation
to \vr\ it seems that we are one step closer to making this
medium right for unforeseeable events to take place.  \cite{KAY}
\vr\ takes
the user to a place; it connects directly with the participant's
perceptual senses.  It has the potential of being the \hci\ that
can be used by anybody, in particular those who are completely
computer illiterate.  In order to communicate, the medium used must
not interfere.
Interfaces seem to get in the way. \cite{NORMAN}
If the user has to spend part of her or his concentration,
however minimal, figuring out how to use the medium, the message that
was to be transmitted will be corrupted.  The user must become
the medium so that the message is retrieved effectively.
%\subsection{Negroponte} \cite{NEGROPONTE} 
\blind
Humans have certain cognitive facilities, in particular the doing mentality,
the iconic mentality, and the symbolic mentality.  User interfaces
should cater to these cognitive facilities. \cite{NEGROPONTE}
The phrase ``doing with images makes symbols'' was
mentioned in relation to the enactive, iconic, and symbolic modes.  Also,
the mouse was related to the enactive, the icons and windows to the
iconic, and SMALLTALK to the symbolic. \cite{KAY}  In other words,
SMALLTALK was
designed with these modes in mind, catering directly to them.
%\subsection{What's the Big Deal about Cyberspace?}
\blind
Virtual \env s should enable the participants to visualize symbolic
worlds in new ways.  The capability for total sensory transport gives
cyberspace the potential to become a historical phenomenon.  The
cyberspace experience is bound to transform us because it is an
undeniable reminder of the fact that our normal state of consciousness
is itself a hyper--realistic simulation.  We live in self--erected
mental models.  We are
experts at creating virtual worlds. \cite{RHEINGOLD}
\subsection{Virtual Agents}
\ind
Another issue that was brought out extensively, and in various
disguises, was that of special agents.   These agents should be able
to perform tasks that are either too time consuming or that require
a considerable amount of strategy to perform.   This should allow us to
concentrate on the task and not on what tools are needed to complete
it.  We should expect to have direct manipulation of these
agents, who can execute complex functions, filter information, and
intercommunicate in our interests. \cite{KAY}
%\subsection{Interface Agents: Metaphors with Character} 
\blind
``An interface agent can be defined as a character, enacted by the
computer, who acts on behalf of the user in a \ve .''  Some arguments
against these agents exist.  For example, the virus--agent; but
the argument fails in that being a virus is a characteristic
of a particular agent.  The fact that computer viruses are rampant
does not mean that one will stop using one's computer.
There are also ethical questions being asked.  In general, not
everyone should have agents.  Only those people who
choose to use them should have them.  There are even those who
are against agents for anthropomorphic reasons.  Some agents
have anthropomorphic characteristics, and users might not be able
to deal with this; on the other hand, we can associate with
agents who are competent, successful, and capable of doing
work on our behalf.  They should provide us with expertise, skill,
and labor. \cite{LAUREL}


\subsection{Guides} \cite{OREN}
A hypermedia application chose the guides metaphor for navigating
its hypermedia database management system.  The application
covered a period in US history from 1800 to 1850.  In order to explore
any area of the history, you have to choose among a variety of guides
to show you around.  The following is a compilation of some of the
problems and observations obtained from users of the guides.
\bi
\item  The guides did not seem able to respond to users when
asked why they had been brought to a particular place.
\item The users wanted to know if the story was being told
from the guide's point of view.
\item Some users got upset and felt betrayed by their guides.
\item Some users did not allow the observers to be in the same room while
they were traveling with the guides; these people felt that they
were having private conversations with the guide and no one should be
listening.
\item Users want characterization in the guides.
\ei
This application is an example of how agents can be used in
our \hci\ as virtual agents with anthropomorphic behavior.
\section{Virtual Reality}
%\input{virtual}
\ind {\bf Virtuality} is a metasense that synthesizes our five physical
senses and the experience of time.  The synthesis of sensory
inputs gives us a sense of time and place.\cite{BRICKEN}
% check to make sure about this cite
\vr\ is explained as follows:
``A computer simulation of reality that can surround a person
that is created with {\bf computerized clothing}....it's an externally
perceived reality that you perceive through your sense organs and the
physical world.'' \cite{LANIER}
\bl
\ind The user is placed inside the simulated world and can
experience 3D sights and sounds, and perceive tactile and haptic sensations.  
The user can also interact with the
environment, that is interact
with other entities and manipulate objects in the simulated world.
The following sections describe how \vr\ can be accomplished, 
how some current systems have been built, and how to deal
with \vr .

\subsection{ \ccl\ }
\vr\ deals with several of our senses using \ccl .  These behavior
transducers are devices that map natural behavior to digital information.
\cite{BRICKEN}
The following sections describe some of these devices.
\subsubsection{Head Mounted Display} 
The primary function of this device is to convince the
user that she, or he, is inside the virtual world.  It
provides the user with a 3D stereoscopic view of the
virtual world.
Head mounted displays generally
consist of two color computer displays, one for
each eye. The images displayed are slightly different, to give the
illusion of depth.
Straps are used to mount the device on the user's head, like
a pair of goggles.  All external light is blocked out.
The overall effect is that of complete inclusion in the
\ve\ where the user participates in the medium.  Instead of
seeing or looking at a picture we find ourselves in
a place.  Head mounted display devices can include trackers, sensors,
voice recognition, and audio capabilities.
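The stereo pair just described can be sketched in a few lines; this is a
toy illustration (not any vendor's rendering code), and the 64 mm
interpupillary distance is an assumed average:

```python
# Toy sketch: each eye of a head mounted display is rendered from a
# camera offset by half the interpupillary distance (IPD).  The 64 mm
# IPD is an assumed average, not a value from any particular product.

IPD = 0.064  # metres (assumption)

def eye_positions(head_pos, right_axis):
    """Return (left, right) camera positions from the head position and
    a unit vector pointing toward the user's right."""
    half = IPD / 2.0
    left = tuple(h - half * r for h, r in zip(head_pos, right_axis))
    right = tuple(h + half * r for h, r in zip(head_pos, right_axis))
    return left, right

left_eye, right_eye = eye_positions((0.0, 1.7, 0.0), (1.0, 0.0, 0.0))
```

Rendering the scene once from each position, onto the left and right
screens respectively, is what produces the illusion of depth described
above.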
\bl \ind
Commercially available Head Mounted Displays include:
\bi
\item Eyephone system from {\bf VPL}. With 86000 pixels, 100 degrees
field of view, and a price of \$ 9,400.  It includes a Polhemus
tracker.
\item Cyberface II from {\bf LEEP Systems}. More than 125000 pixels,
150 degrees field of view, and price of \$ 8,100.  Does not include
tracker.  Includes integral headphones and additional controls.
\item BOOM II from {\bf Fake Space Labs}.  Monochrome 300000 pixels,
100 degrees field of view and a price of \$ 27,000.
\ei
\subsubsection{Position Sensors}
\ind Head mounted displays have position sensors
attached to them.
The system can detect the position and orientation
of the head.
Moving your head in one direction causes the system to
display the images on the screens moving in the opposite
direction.   This creates the effect of moving inside the
virtual environment.
Most position sensors use magnetic fields.  The most popular
position sensors are manufactured by {\bf Polhemus Inc.}.
\bl \ind The Data Glove (VPL) is a customized glove.
Position sensors run along the back of
each finger.
Fiber optics are used to detect angles between
different parts of the fingers.
The cheapest device commercially available today is
the Power Glove (Mattel). It uses ultrasonic detectors
and electroconductive ink.
We will look at the following devices in more detail when we
look at developments in force--feedback:
the Force--Feedback Arm (UNC), which applies
forces on the user, such
as resistance, giving haptic sensations, and
the Master Manipulator (University of Tsukuba).
\subsubsection{Ear Phones}
\ind Another target sense in a virtual environment is
the auditory system.
The ear phones produce 3D sounds.
This is
achieved by filtering digital audio to position
sounds in 3D space.
Sounds follow their assigned objects as the user passes
by.
The main sound control and processing system available
is a 320 MIPS, real--time signal processor known as
the Convolvotron, designed by Scott Foster.  We will also
look at this in more detail in the section that deals with
digitizing sound.

\subsection{\vr\ Systems}
Various \vr\ systems have been implemented. These systems offer
\vr\ exploration and interaction via \ccl .  Special hardware
and software components are presented giving us a good idea
of the availability of systems as well as the power
they should provide us with.
\bi
\item The Virtual Environment Workstation {\bf (VIEW)} is a project
at the {\bf NASA Ames Research Center}.  This system started as a
project for developing high level simulations for Air Force pilots.
They have developed various applications such as the Virtual Wind
Tunnel and SIMNET.
Their system uses a head mounted
display and has auditory, speech and gesture interaction capabilities.
The head mounted display consists of two LCD screens with 640x220 pixels
resolution with wide--angle optics, which mimic the human binocular visual
capabilities.  The device also has tracking sensors attached to it.  It
enables high performance, real--time 3-D graphics presentation at up to
30 frames per second, as required to update the display in coordination
with the user's new orientation and position. A {\bf VPL} data glove
provides absolute position and orientation of the hand; this
information is used to handle a virtual hand which can manipulate other
virtual objects. The system recognizes gestures from the hand and can
interpret these as commands.  The Convolvotron auditory system augments
or supplies information to the \ve .  The system provides 3-D
stereoscopic sound.  The VIEW system includes a speech recognition
device for voice input. \cite{FISHER}

\item The Reality Built for Two {\bf (RB2) } system. \cite{GROOT},
\cite{HAYES}, \cite{VPL}
This is the system that has been developed by {\bf VPL Research} and
is commercially available.  The system allows for two users
at once in a virtual space.  This system 
consists of the following 
components:
\bd
\item [Hardware -- ] One VPL {\em DataGlove} which converts hand motion
into computer readable form.  The position sensors on the glove form
an integrated {\bf Polhemus } tracking system.  It supports RS232 and
RS 422 serial input and output.  One {\em EyePhone} is the head mounted
display, incorporating color LCDs, a {\bf Polhemus} tracking system,
a microphone, and audio headphones.  The video image is
enhanced by a proprietary optical diffusion element.  An
{\em AudioSphere} is an audio system providing 3-D sounds.  It uses the
{\bf Convolvotron} and software that provides basic position
and motion sound rendering, including distance rolloff, Doppler shift,
and more.  A design and control Macintosh workstation and
two Silicon Graphics Power Series workstations provide the power to
build, use, play, and run \ve s.  The components are
networked together by a combination of Ethernet and a proprietary
synchronization bus.  Since one IRIS is needed for each
eye, we would need four workstations to have two users in
the same \ve , as well as \ccl\ for each one.

\item [Software -- ] (On the Macintosh workstation.)
The {\em RB2 Swivel} modeling software is
a tool for defining the
shapes, layout, linkages, and motion constraints of the 3-D
objects in a \vw .  {\em Body Electric}, a real--time simulation program
for specifying object behavior,
incorporates advanced visual programming and signal processing
features.  It's an incremental compiler generating high speed simulation
code, which allows changes to be made interactively while a simulation
is running.  (On the graphics workstations.) {\em Isaac} software
renders the \ve\ in real--time.  {\em Isaac} and {\em Body Electric}
communicate while a simulation is running.
\ed
\item  {\bf Autodesk} has a system called {\bf Cyberspace}.  It's a
386--based PC and a custom video board.  One of the great
features of this system is that it uses {\em AutoCAD} as its
basic software.  Anything you can build with {\em AutoCAD} can
be explored with Cyberspace. \cite{HAYES}
\item At {\bf University of North Carolina} the {\bf Pixel Planes} 
parallel processing system was developed.  This system can do
3-D rendering in real-time by assigning one processor to
each pixel of the display.
\ei
\section{Sound}
%\input{sound}

\subsection{``Real--Time Digital Synthesis of Virtual Acoustic
Environments''}
\cite{WENZEL}
A signal processing device that generates 3D sound is described.
This device is being used at
{\bf NASA--Ames Research Center} with their Virtual Interactive
Environment Workstation {\bf (VIEW)}.  VIEW allows the user to explore
and interact with a \vw\ using a head mounted display and data glove,
controlled by the operator's position, voice, and gestures.
\bl \ind Possible applications that could benefit from using sound in
a \ve\ are advanced teleconferencing systems, monitoring tele--robotic
activities, and scientific visualization of multidimensional data.
\bl \ind The process involves synthesizing localized sounds.  Finite
Impulse Response {\bf (FIR)} filters are placed around a would--be
user's eardrums at
intervals of 15 degrees azimuth and 18 degrees elevation for
a specific range.  A total of 144 FIR filters are positioned.
A map of location filters is constructed and downloaded to a
real--time digital signal processor.  This device is called the
{\bf Convolvotron},
designed by Scott Foster of {\bf Crystal River Engineering}.
It convolves an analog signal with filter coefficients determined
from the FIR map and head position.  The signal is placed
in the perceptual 3--space of the user.  Motion trajectories and
static locations at higher resolution than
the sampled positions are interpolated.
Perceptual accuracy of the basic
technique has been confirmed for static sources.  Preliminary
data suggest that using non--listener--specific transforms to achieve
synthesis of localized cues is at least feasible.
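The select-and-convolve step described above can be sketched as follows;
the filter bank, grid spacing, and coefficient values here are invented
for illustration and are not the Convolvotron's actual data:

```python
# Toy sketch of direction-dependent FIR filtering: pick the filter
# measured nearest the source azimuth (15-degree grid assumed) and
# convolve it with the input samples.  Coefficients are invented.

def nearest_azimuth_index(azimuth_deg, step=15):
    """Index of the measured filter nearest the given azimuth."""
    return round((azimuth_deg % 360) / step) % (360 // step)

def fir_filter(signal, coeffs):
    """Direct-form FIR convolution of a sample sequence."""
    out = []
    for n in range(len(signal)):
        acc = 0.0
        for k, c in enumerate(coeffs):
            if n - k >= 0:
                acc += c * signal[n - k]
        out.append(acc)
    return out

# Invented two-tap "filters", one per 15-degree azimuth slot.
filters = {i: [1.0 / (1 + i % 3), 0.25] for i in range(24)}

idx = nearest_azimuth_index(52)                 # nearest measured direction
y = fir_filter([1.0, 0.0, 0.0], filters[idx])   # filter an impulse
```

A real system would hold one such filter per ear for each of the 144
measured directions and interpolate between neighboring filters as the
head moves, as the paper describes.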
\section{Force -- Feedback}
%\input{force}
This section explores some of the ongoing research in the
area of force feedback. Force feedback gives users haptic or
tactile sensations.  This feature should aid users
when interacting with objects or entities in the
\vr\ by adding input to the senses of touch and feel.
\subsection{``Artificial Reality with Force--Feedback:
Development of a Desktop Virtual Space with Compact Master Manipulator''}
\cite{IWATA} This paper discusses an interface device for artificial
reality with
force feedback for the manipulation of virtual objects, developed at
the {\bf University of Tsukuba}.  First a
brief description of \ar\ is given, leading to the following statement:
``methods presenting tactile information have not been sufficiently
developed''.  A description of existing tactile input
devices is given (a glove, a 3-D mouse, and the master manipulator),
and the disadvantages of these are explained.
The author has developed a virtual object manipulation system on a
desktop computer.  The system consists of an image generator system
and a master manipulator
with 9 degrees--of--freedom and tactile input.
\bi
\item Image Generator Subsystem.  A specialized graphics computer is
used, the TITAN, which has 2 CPU boards and a peak performance of
32 MIPS and 32 MFLOPS.   The position and orientation of the hand are
described in a coordinate system fixed in the \vw . A virtual hand is
drawn that consists of 100 Gouraud shaded polygons.
The user sees only the virtual hand; to do so, the user must look
into a mirror that is placed at 45 degrees from the screen.
\item Master Manipulator for tactile input and reaction force
generator subsystem.  A {\em parallel} manipulator consists of two
triangles at different levels and 6 cylinders connecting every corner
of one triangle with two corners of the other triangle.  The lengths of
the cylinders
can be controlled.  These manipulators lack backdriveability and have
a small working volume.
The author and a team at the University of Tsukuba developed a new
{\em link} manipulator.  The manipulator uses pantograph link
mechanisms instead of linear actuators, which improves the
working volume and backdriveability.  The top platform of the
manipulator is fixed to the palm of the operator, enabling the user to
move the hand and fingers independently.  The thumb's working angle is
120 degrees; the other fingers' is 90 degrees.  The middle, ring, and
little fingers work as a unit.  The manipulator applies forces to the
fingers and palm of the operator.
\ei
The hand is divided into 16 control points, and the distance between
these points and the surfaces of virtual objects is calculated.  To grab
a virtual object you must {\em touch} it with
the thumb and a finger.  Objects can have attributes such as solidity.  A
rigid body produces a reaction force with maximum possible torque.  A
force and movement vector is computed at the palm with respect to
the palm and the virtual object.  Tactile sensations are limited, in
particular very detailed ones such as surface texture;
future enhancements
should address this.  Some applications of this system include the
manipulation of virtual prototype products and choreography for
3-D animated
characters.
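A minimal sketch of the control-point grab test described above,
assuming a spherical virtual object and an invented touch tolerance (the
actual system computes distances to arbitrary object surfaces):

```python
# Toy sketch: hand control points are tested against a sphere's surface;
# a grab requires the thumb point and at least one finger point to be
# touching.  The sphere, tolerance, and point layout are assumptions.

import math

def touching(point, center, radius, tol=0.005):
    """A control point touches the sphere if it lies within tol of the
    surface (distance to center close to the radius)."""
    return abs(math.dist(point, center) - radius) <= tol

def is_grabbing(thumb, fingers, center, radius):
    """Grab = thumb touching and at least one finger touching."""
    return touching(thumb, center, radius) and any(
        touching(f, center, radius) for f in fingers)

center, radius = (0.0, 0.0, 0.0), 0.05
grabbed = is_grabbing((0.05, 0.0, 0.0),                     # thumb on surface
                      [(-0.05, 0.0, 0.0), (0.2, 0.0, 0.0)],  # one finger on
                      center, radius)
```

The same per-point distances can also drive the reaction forces the
manipulator applies when a point penetrates a solid object.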
\subsection{``Project GROPE -- Haptic Displays for Scientific
Visualization''}
\cite{GROPE} The goal of this project was to develop
a haptic display for 6-D
interaction in the design of protein molecules.
This work was done
at the {\bf University of North Carolina at Chapel Hill}.
The project started
back in 1967 and has gone through a 2-D stage, then a 3-D stage, then a
6-D stage with a simple task, and finally
a full 6-D molecular docking system.  The word haptic is defined as
``pertaining to sensations such as touch, temperature, pressure, etc.
mediated by skin, muscle, tendon, or joint.''  This system works directly
with scientific visualization, the area of computer graphics
which aims to improve our understanding of scientific phenomena by 
enhancing scientists' perception of real or predicted models.
The haptic display improves perception and understanding of the
world models and the force fields being simulated. Chemists 
report having a new understanding of why a particular drug
docks well or poorly.   They can reproduce the true docking positions
for known drugs easily and seem to find good docks for
drugs whose true docking positions are unknown.  An improvement in
situation awareness has been reported by the users. 
The pharmaceutical industry should benefit from the use of
haptic displays, but the development in this industry will be
very slow.
\section{Virtual Camera Control}
%\input{virtualcamera}
\subsection{``Exploration and Virtual Camera Control in Virtual Three
Dimensional Environments''}
\cite{WARE} An evaluation of three distinct metaphors
for exploration and virtual camera
control in \ve s using six degrees of freedom was done by the
Computer Graphics and Animation Group at The Media
Laboratory, {\bf Massachusetts Institute of Technology}.
The metaphors are:
\be
\item Eyeball in hand,  the image that the eye sees is mapped on the
screen.  A position tracker is used to determine the hand position
and orientation.  The view is from the vantage point of the hand held
eye.
\item Scene in hand, the scene is translated and rotated in relation
to a 3--Space mouse.
\item Flying vehicle control,  the 3--Space mouse is used as 
a navigation control device.  The user can fly forward, backward,
through objects, up, down, etc.
\ee
The motion path can
be recorded and played back.  The system provides the user with an
interface for exploring virtual graphical environments.  Descriptions
of other methods are discussed.  Observations about the task:  
\bi
\item Placing the viewpoint in the
\ve\ has six degrees of freedom, three for position and
three for angular placement.
\item Exploration of the \ve\ can be done by navigating a viewpoint
in the environment.
\item Moving a viewpoint is isomorphic with moving the environment
with respect to the viewpoint.
\ei
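The last observation, that moving the viewpoint is isomorphic with
moving the environment, can be checked with a small sketch (translation
only, for simplicity):

```python
# Translating the viewpoint by t yields the same view-space coordinates
# as translating the environment by -t with the viewpoint held fixed.

def view_coords(point, viewpoint):
    """View-space coordinates of a point for a translated viewpoint."""
    return tuple(p - v for p, v in zip(point, viewpoint))

p = (1.0, 2.0, 3.0)
t = (0.5, 0.0, -1.0)

moved_eye = view_coords(p, t)  # move the eye by t
moved_world = view_coords(tuple(pi - ti for pi, ti in zip(p, t)),
                          (0.0, 0.0, 0.0))  # move the world by -t
```

This is why the ``scene in hand'' and ``flying vehicle'' metaphors can
share one underlying camera transform even though they feel different to
the user.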
The product was used on three toy environments: a cube, a maze, and
a scene of traffic signs.  Seven subjects participated in the
experiments.  Intensive interviews provided results regarding
constructs and affordances, cognitive properties, and movie--making
insight for each of the metaphors.  Some metaphors are
better suited to different tasks, such as the flying vehicle for
navigating the maze and scene in hand for the cube.  Further work
expects to extend the exploration with manipulation of objects in the
environment.  The eyeball in hand offers no such extension, while the
flying vehicle and scene in hand seem good candidates.
\section{Virtual Actors}
%\input{virtualactors}
If telepresence is to become a reality, we must understand how
to deal with what Alan Kay once called ``software robots'' as well
as with actual robots at remote locations.  These robots will
have to be able to perform tasks without our intervention, but there
will be instances when we will want to direct
them and guide them through problems that they are not ready
to deal with.
This section contains information that will be important in
the understanding of the problems that need to be 
solved in this area.
\subsection{``Control of a Virtual Actor: The Roach''}
\cite{ROACH} A \ve\ system which supports
simulations of virtual actors is discussed, by the Computer Graphics
and Animation Group
at the Media Laboratory, {\bf Massachusetts
Institute of Technology}. The actors are provided
with capabilities to perceive the virtual environment that
they inhabit; they can be directed or can react to changes in
their habitat.  Design, training, and testing
of prototype virtual agents can be done in this virtual
workplace.
%The actors behavior will be driven by their perception of
%the environment,  they will not have highly cognitive and
%motor skills.
In particular, the actor chosen simulates the behavior of
a cockroach.  The virtual actor can wander around the simulated
environment, interacting with multiple simulations.  The roach
can perceive changes in the environment; it reacts to
commands, runs away from a grabbing hand, or follows the
hand around.  The roach can display a wide variety of functional
behaviors.
There is no other system that supports virtual actors, distributed
simulations, and interactions among various simulations.
\blind
The behavior of the virtual actor is defined by the {\em sensori--motor
level}, which consists of muscle groups, motor organs, and motor organ
systems. The {\em reactive level} gives the virtual actor the
power to select and respond appropriately to environment changes.
\bi
\item The {\em sensori--motor level} consists of a gait controller and
kinematic motor programs.  The gait controller is a system that 
coordinates motor organ activity.  Each leg has an oscillator
which triggers steps.  There are time and phase relationships that
the oscillators must maintain.  Reflexes reinforce the basic oscillator
patterns and add adaptability to the gait.  The kinematic motor programs
control the muscle groups at the joints.  This produces body and limb
motions which result in locomotion and stance of the roach.  The 
behavior of the roach is observed as locomotion.
\item Implementation.  Gait oscillators and reflexes set states for the
motor programs, and the kinematic programs generate the locomotive behavior.
Parameters are set with scripting commands at the reactive level.  A
task-level program can be used to specify goals for the roach to
fulfill.  This level also contains tools to control the sensori--motor
level, such as parameter-setting sliders and a data glove used to
instill reactive behavior.
\item At the {\em reactive level},
messages are passed to the virtual actor, which in turn reacts to them.  The
perception of the environment is simulated as a constraint network in which
events are associated with motor acts.  The actual exchange of information
is implemented using {\em bolio}, a graphical simulation
platform that manages a global object database and
implements distributed message processing.
\ei
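As a concrete illustration, the oscillator scheme of the {\em
sensori--motor level} can be sketched in a few lines of Python.  This
is only an illustrative sketch under assumed parameters (one phase
oscillator per leg, arranged in an alternating tripod gait); the class
and function names are hypothetical and are not taken from the system
described above.

```python
# Sketch of a gait controller: one phase oscillator per leg.
# The names and the tripod-gait phase offsets are illustrative
# assumptions, not details of the system described in the text.

class LegOscillator:
    def __init__(self, phase):
        self.phase = phase  # position in the step cycle, 0.0 .. 1.0

    def tick(self, dt, freq):
        """Advance the oscillator; return True when a step is triggered."""
        self.phase += dt * freq
        if self.phase >= 1.0:
            self.phase -= 1.0   # wrap around: this leg takes a step
            return True
        return False

def tripod_gait(ticks=40, dt=0.1, freq=1.0):
    # Six legs in two tripods, half a cycle out of phase, as in the
    # alternating tripod gait commonly attributed to cockroaches.
    legs = [LegOscillator(0.0 if i % 2 == 0 else 0.5) for i in range(6)]
    history = []
    for _ in range(ticks):
        stepping = [i for i, leg in enumerate(legs) if leg.tick(dt, freq)]
        if stepping:
            history.append(stepping)  # the two tripods trigger alternately
    return history
```

Reflexes would perturb the oscillator phases in response to terrain,
and the kinematic motor programs would translate each triggered step
into joint motions.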
The authors feel that the design of virtual actors should be guided by
research into the behavior and neural organization of actual
animals.  The system simulates basic behavior only.
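The reactive level's association of perceived events with motor acts
can be sketched as a simple dispatch table.  The event names and motor
acts below are hypothetical stand-ins, not the actual message
vocabulary of the system discussed above.

```python
# Sketch of the reactive level: environment events arrive as messages
# and are dispatched to motor acts.  All names here are hypothetical.

REACTIONS = {
    "hand_approaches": "run_away",
    "hand_beckons":    "follow_hand",
    "obstacle_ahead":  "turn",
}

def react(event, default_act="wander"):
    """Map a perceived event to the motor act the actor performs."""
    return REACTIONS.get(event, default_act)
```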

\section{Applications}
%\input{applications}
\vr\ is a place for exploration, navigation, communication, and
creativity.  Participants will find themselves with capabilities
never before available to them.  The potential for breakthroughs
and discoveries in completely unrelated areas is explosive, and the
applications possible inside \vr\ seem never ending.  First we
will look at some current applications that are fully developed or
under research; then some
applications that have been predicted by experts in
the field of \vr\ will be presented.
\subsection{Current Applications}
\bi
\item Architectural Design.
When the University of North Carolina's new computer science building
at Chapel Hill was designed, a virtual version was created.
It was used to explore the architectural structure prior to
constructing the building.  It was determined that a wall was
not right, and a visit to the \vw\ convinced the architect to
make some adjustments to the design.\cite{MINSKY1}
Other virtual buildings have
been used for advertising.  Exploration of \vw s has limitless applications.
\item Scientific Visualization.  
\bi
\item A Virtual Wind Tunnel
is an environment and interface for
visualizing, exploring, and understanding
the output of computational fluid dynamics simulations.  Fluid flow
around a virtual body is simulated.  Tasks that are simply impossible
in a real wind tunnel can now be experimented with in
the Virtual Wind Tunnel.  This application is being implemented
at NASA Ames Research Center. \cite{USELTON}
\item Project GROPE is a haptic display for molecular forces.
This system was developed to improve the process of
interactively designing
protein molecules.  It provides the user with haptic
feed--back; that is, the user can feel the forces that
the molecules are exerting.  Haptic displays have been found
to improve the design process as well as to
enhance the user's perception of simulated worlds.\cite{GROPE}
\ei
\item Computer Animation.  
\bi
\item Some \vr\ systems provide capabilities
for virtual object manipulation and for playback of the camera
trajectories followed.  These can be used for computer
animation purposes.  \cite{IWATA}
\item Work with autonomous virtual actors involves
giving the actors the capabilities to perceive
their environment.  In the case of a virtual cockroach,
sensori--motor activity and reactive behavior give the
actor the freedom to roam around the simulated
environment, while interaction allows a user to
place obstacles, issue commands, and use a virtual hand to
which the roach responds.  Speed, direction, and gait parameters
for locomotion can be interactively altered.
Gait controlling mechanisms were used in the animation of
``Grinning Evil Death". \cite{ROACH}
\ei
\item Simulators.  \vr\ was born as a flight
simulator for improving Air Force pilots' missile-launching abilities.
Another such environment, SIMNET, is a tank combat
simulator; it is the most widely distributed environment today.
Advanced airplane cockpits are being developed at the Human Interface
Technology Laboratory at the University of Washington.
\item The Virtual Environment Operating Shell (VEOS).  An environment for
managing computer resources and communication.  In particular,
design kits (CAD), dynamic simulation kits, virtual world tools,
processors, and behavior transducer devices are part of VEOS.
\ei
\subsection{Future Applications}
These applications have been foreseen, and some of them are currently
under development.
\bi
\item Telepresence.  This topic is considered a strong suit in favor
of \vr .  It seems that \vr\ will be the way to establish an interface
between a robot in any environment and a robot operator at some
remote location.  These robots will be placed in environments
that are potentially dangerous, unreachable, or impossible to
work in directly, such as
nuclear reactor plants, underwater construction sites, space colonies,
and more.   The operator should be able to operate the robot as if
actually present at the robot's site. \cite{FISHER},
\cite{RHEINGOLD}
\item Televirtual conferencing.  \vr\ could become the communication
tool of the future that replaces the telephone.  Not only will
you be able to hear the person you are talking to, but you
will also be able to see them, and, who knows, maybe even
feel them. \cite{GROOT}
\item Education.  The potential for educational applications has
no limits.  I can think of no better way of learning geometry than
being in a geometric world and seeing, touching, and hearing the
geometrical objects tell me personally what they are all about.
Maybe students will not go to school at all; instead, instructors
might teleteach their classes.
\item Virtual Movies.  It seems as though movies
might come to be made in inclusive virtual worlds.  Viewers
will become part of the movies, and may even participate
by taking on characters' roles as well as changing the perspective
or viewpoint of the scene.
\ei
\section{Conclusion}
%\input{conclusion}
\vr\ has the raw power of making us realize
that the world we live in is a model of
reality that we have developed.  This model
has been shaped by us with the help of
those who have been able to influence 
our perception.  Out of a technologically oriented
society comes this new device, the computer, and now
a way to use it as a medium seems to be readily available.
\vr\ users perceive an environment, their body becomes an
interface, and they are in a place without having to
learn anything in order to get there.  It is very possible
that \vr\ will unleash a whole gamut of new discoveries in
our relentless pursuit of {\em The Natural Human Computer Interface},
one that closely resembles the way we interact with the real
world.
\blind
We have studied some of the existing \vr\ systems and many
of the applications that increasingly show that \vr\ has
the potential to become the precursor of the {\em ultimate
interface}.  Some of the current applications of \vr\ send the
imagination reeling into worlds that not even
science fiction has explored.  Of the predicted applications,
telepresence seems to be the most radical, because it
promises that we will be able to perceive remote locations while
remaining in one fixed place.
One application that seems plausible would allow us to
create programs inside a \vw .  This is a logical next step from
current visual programming developments.
\blind
Two special sections, {\em Visual Perception} and {\em \hci }, were
presented with the goal of exposing us to some of the issues involved
in these areas.  The desire to look at the
work of \cite{ROACH} grew out of learning about {\em perception} and
{\em agents} in both of these areas.  Work that would naturally follow
includes \cite{FLOCKS} and \cite{WORM}.  Simulating the
perception of virtual agents seems to be a most interesting and
rewarding area for research.  I picture virtual agents as capable
of displaying locomotive behavior at higher levels than the roach,
as well as being able to perform specialized tasks on my behalf.
\blind 
{\em Sound} and {\em Force Feedback} are both topics that
directly relate to some of the research that is taking place in
relation to our haptic, tactile, and auditory systems.  The work
in these areas adds new information to the environment, taking the
\vr\ experience a step closer to total perceptual \vr .
\blind
\vr\ is still in its conception; we cannot even say that it is in
its infancy.  We do not really know in which direction we will
go or how long it will take.  It is obvious that, at this time, the
bottleneck that prevents us from creating a very high
quality image for our \ve\ is real--time rendering.  All existing
\vr\ systems seem to use Gouraud shading with a specified limit on
the number of polygons that can be handled in one frame.  This limit is
required in order to render in real--time.  The result is
rather crude simulations, but visiting these crude
\ve s seems enough to stir participants to search for more.
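This trade-off can be made concrete with simple arithmetic: a renderer
that can shade $P$ polygons per second, displaying at $F$ frames per
second, can afford at most $P/F$ polygons per frame.  The throughput
figure in the sketch below is an illustrative assumption, not a
measurement of any particular system.

```python
# Back-of-envelope polygon budget for real-time rendering.
# The 300,000 polygons/s throughput is an assumed, illustrative figure.

def polygons_per_frame(shaded_polygons_per_second, frames_per_second):
    """Largest per-frame polygon count that still sustains the frame rate."""
    return shaded_polygons_per_second // frames_per_second

# A renderer shading 300,000 Gouraud polygons/s at 30 frames/s:
budget = polygons_per_frame(300_000, 30)   # 10,000 polygons per frame
```

Any scene exceeding that budget forces either a lower frame rate or a
cruder model, which is exactly the limit the existing systems impose.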
\blind
A more difficult problem involves the position and orientation of the
body of the user.  At this time we can accomplish this for
the head and one hand using head--mounted displays and
data gloves, but for the whole body we have only begun to
consider the possibilities.
\blind
Force--feedback is again a difficult problem.  The work of
\cite{GROPE} started two decades back, and a lot of work still remains
to be done in this area.  One suggestion that might work is to cover
the inside of the data glove with balloon--like pockets that
could be inflated with a substance capable of also transmitting
temperature at the same time.
Other work that should be considered deals with detailed
tactile sensation, as in \cite{MINSKY2}.
\blind
One area which seems to be unfulfilled is that of dynamic
environment manipulation.  Creating an environment while
inside a \vw\ seems to be the ultimate goal in \vr\ control.
\newpage
\bibliography{all}
\bibliographystyle{alpha}
\end{document}