williamb@milton.u.washington.edu (William Bricken) (12/15/90)
Virtual Reality: Directions of Growth
Notes from the SIGGRAPH '90 Panel
Copyright (C) 1990 All Rights Reserved by William Bricken

William Bricken
Human Interface Technology Laboratory
University of Washington, FU-20
Seattle, WA 98125
9/10/90
william@hitl.vrnet.washington.edu


VI. VIRTUAL WORLD TOOLS

To give you an idea of what work and play will be like in VR, I'll describe some of the tools we're designing at HITL:

     the wand
     the virtual body
     virtual home, virtual community
     concurrent inconsistent worlds
     autonomous entities
     concrete mathematics, experiential programming

The Wand is an evolution of the Mouse. It is a simple physical device with a wide diversity of uses, ideal characteristics for a tool. Physically, the Wand is a spatial position and orientation sensor on a handheld stick. In software, the Wand emanates a ray which can be used for pointing at virtual objects. Coupled with voice commands, the Wand can be used to identify objects, to attach to and move objects, to bring things closer or place them at a distance, to indicate a direction for flying, to identify a location to teleport to, to measure distance, as a pen for drawing, as a knife, as a switch, as a spotlight. Lots of functionality from a little hardware.

People achieve presence in VR by inhabiting a virtual body. The virtual body is a software toolkit for associating an arbitrary suite of behavior transducers (such as wands, voice command systems, headtracking, etc.) with a display of self in a virtual world. What we do physically is sensed and converted to virtual behavior. Don't think that the virtual body is necessarily in the shape of our physical body; any object in VR can be inhabited. If you are controlling a physical robot, you may prefer your virtual body to be the shape of that robot. If you are navigating a data terrain, you may prefer to have a virtual body shaped like a jeep or an airplane. The virtual body can filter and map physical behavior onto superhuman capacities. One of the first things we did to figure out how a virtual body might be used was to search the old comic books for super powers.
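As a rough sketch of the kind of mapping the virtual body performs (the class names, the transducer interface, and the reach_gain amplification below are hypothetical illustrations, not the HITL toolkit), physical sensor readings are gathered from attached transducers, filtered, and turned into the pose of whatever object you happen to inhabit:

    # Minimal, hypothetical sketch (Python): a virtual body gathers samples from
    # attached behavior transducers, filters them, and produces the pose of
    # whatever object is being inhabited. Not the HITL toolkit.

    from dataclasses import dataclass

    @dataclass
    class Sample:
        position: tuple      # (x, y, z) in meters, from a spatial sensor
        orientation: tuple   # (yaw, pitch, roll) in degrees

    class VirtualBody:
        """Associates a suite of behavior transducers with a display of self."""

        def __init__(self, shape="humanoid", reach_gain=1.0):
            self.shape = shape            # any object can be inhabited: robot, jeep, airplane
            self.reach_gain = reach_gain  # > 1.0 maps physical motion onto superhuman reach
            self.transducers = {}         # name -> zero-argument callable returning a Sample

        def attach(self, name, read):
            self.transducers[name] = read

        def pose(self):
            """Sense physical behavior and convert it to virtual behavior."""
            virtual = {}
            for name, read in self.transducers.items():
                s = read()
                amplified = tuple(c * self.reach_gain for c in s.position)
                virtual[name] = Sample(amplified, s.orientation)
            return virtual

    # Example: a head tracker and a wand driving a jeep-shaped body, with motion amplified.
    body = VirtualBody(shape="jeep", reach_gain=3.0)
    body.attach("head", lambda: Sample((0.0, 1.7, 0.0), (10.0, 0.0, 0.0)))
    body.attach("wand", lambda: Sample((0.3, 1.2, -0.4), (0.0, 45.0, 0.0)))
    print(body.pose())

The point of the sketch is only that the mapping from sensed behavior to displayed behavior is an explicit, editable layer; the filters are where superhuman capacities get installed.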
The virtual home is an environment designed for personalized comfort, for work and for play. My virtual home will have a cozy chair, a fireplace, some cats, and a cabinet full of virtual tools and toys, essentially what I now have at (physical) home. Physical reality is a great starting model for virtual reality. Take what we like and delete what we don't. Virtual homes will be customized, personalized environments.

The virtual home extends to a virtual community. People we work with are not organized by some cryptic email address that is basically a program to tell the network where to find them. They are organized in close proximity in space. In a virtual community, friends have virtual homes that are visible from our own virtual home. They are our neighbors. We visit them by pointing to their home and saying "jack me there". Less frequent acquaintances may be down the road or over the hill. The idea is to organize virtual space to accommodate human culture.

One profound capability in VR is to maintain inconsistent views for different participants, to intermix personal realities. In physical reality, mass has a way of being unarguable. We quickly default to assuming a consistent, objective reality that is communal to everyone. Consistency is an assumption, and it is widely overgeneralized.

Each person in physical reality, for example, has a viewpoint; each viewpoint is necessarily in a different physical place; each perspective provides different information about the inclusive environment. Every experience is unique. We agree to suppress our differences for massive objects, but the line is always fuzzy. We certainly tolerate differences within the domain of conversation. How we talk is an excellent example of concurrent inconsistent worlds.

In VR, communality can be negotiated rather than assumed. In VR, the color of my shirt can appear to be green to me but blue to you. So long as we do not talk about or interact with the color of the shirt, how it is rendered to each of us is irrelevant.

Carry this a bit further: I can be sitting in my virtual home next to an empty chair. You jack a duplicate of your virtual body into that chair. From my perspective, you are visiting me. Now, from your perspective, you are still sitting in your virtual home, in your customized environment. You have an empty chair, and I jack a duplicate of my virtual body into it. We are now sitting in two totally different environments while sharing a mutual conversation. For me, you are in my home; for you, I am in your home. So long as the inconsistencies in our environments are not items of contention or confusion, the differences will not interfere with communication. When they do interfere, the explicit differences become subject to negotiated resolution.

But the pluralism of VR is much deeper. It is possible to maintain inconsistencies directly, without resolution, using a mathematical technique called the imaginary Boolean value. We could choose to represent the color of my shirt as ambiguous, as context dependent: both green and blue. We can then discuss the color of the shirt as being inconsistent, as information about which we simply do not see eye to eye. I bring up these ideas from an esoteric branch of representation theory to illustrate a fundamental point: VR is not bounded by the assumptions of physical reality. We can have whatever we can formally specify.

The HITL architecture specifies that every object in VR, including space itself, have processing and memory resources. Entities are objects with the capabilities of operating systems. Every entity is a system, and every entity is a variant of the same system. This means that we can use the same editing, debugging, and interaction tools for modifying each entity. Entities run a sense-process-act loop; in artificial intelligence terms, each entity is an agent, an actor. This means that VR is inhabited by artificial life. Every entity is capable of independent action, in response to environmental changes, in response to internal memory or process changes, or in response to changes in the rules, the disposition, specifying that entity's internal processes. Each entity is an expert system using pattern-matching on its input to trigger disposition rules and metarules which generate outputs to the context. The environment itself is just another entity, one that includes other entities within it. All cyberspace is Toontown.
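A minimal sketch of that sense-process-act loop, under an assumed toy rule format (the Entity class, the predicate/action pairs, and the door example are illustrative, not the HITL architecture):

    # Hypothetical sketch: an entity as an agent running sense-process-act,
    # with pattern-matching dispositions. Names and rule format are illustrative.

    class Entity:
        """An object with its own processing and memory: an agent in the world."""

        def __init__(self, name, dispositions):
            self.name = name
            self.memory = {}
            self.dispositions = dispositions   # list of (pattern predicate, action) pairs

        def sense(self, environment):
            return environment.get(self.name, {})

        def process(self, percept):
            # Pattern-match the input against disposition rules to select actions.
            return [act for matches, act in self.dispositions if matches(percept, self.memory)]

        def act(self, actions, environment):
            for action in actions:
                action(self.memory, environment)   # outputs go back to the context

        def step(self, environment):
            self.act(self.process(self.sense(environment)), environment)

    # Example: a door entity whose disposition is to open when something comes near.
    def near(percept, memory):
        return percept.get("distance", 99.0) < 1.0

    def open_door(memory, environment):
        memory["open"] = True

    door = Entity("door", [(near, open_door)])
    world = {"door": {"distance": 0.5}}   # a toy stand-in for the surrounding environment entity
    door.step(world)
    print(door.memory)                    # {'open': True}

Because the rules are data, an entity can also respond to changes in its own dispositions, and the same loop applies to the environment itself, since the environment is just another entity containing the rest.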
We have been able to demonstrate that mathematics itself (in particular logic, integers, and sets) can be expressed concretely, using 3D arrangements of physical things, such as blocks on a table, doors open or shut, rockwalls that respond to gravity, the things of everyday life. String-based symbolic representations of mathematical concepts are typographically convenient, but tokens are not at all essential to mathematical expression. VR makes it convenient to express abstract ideas using spatial configurations of familiar objects.

One benefit of this approach is that we can build visual programs, set them on a virtual table, and watch them work. We can experience programs as other entities rather than as dumps of text. Bugs would manifest as structural anomalies, as visual irregularities.

Architectural design has a sensual, experiential semantics. It is but a quirk of typography that we have ignored the experiential semantics of computational languages. More fundamentally, experiential computing unites our spatial and our symbolic cognitive skills, permitting mathematical visualization, analytic gestalt, whole brain processing.
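As a flat, textual stand-in for what would really be a spatial scene, here is one way the reading could go; the scene encoding below is an illustrative assumption, not the HITL representation of concrete mathematics:

    # Hypothetical sketch: reading sets, integers, and logic directly off an
    # arrangement of everyday things (blocks on a table, doors open or shut).

    # A set, expressed as whatever blocks are currently sitting on the table.
    table = {"red_block", "blue_block", "green_block"}

    # An integer, expressed as the height of a stack of blocks; no numeral appears in the scene.
    tower = ["block", "block", "block", "block"]
    count = len(tower)

    # Logic, expressed as doors that stand open (True) or shut (False).
    doors = {"front": True, "cellar": False, "garden": True}

    def all_open(names):
        """Conjunction: every named door stands open."""
        return all(doors[n] for n in names)

    def some_open(names):
        """Disjunction: at least one named door stands open."""
        return any(doors[n] for n in names)

    print("red_block" in table)             # set membership: is the block on the table?
    print(count)                            # 4
    print(all_open(["front", "garden"]))    # True
    print(all_open(["front", "cellar"]))    # False
    print(some_open(["cellar", "garden"]))  # True

In VR the same readings would be made by looking at and handling the arrangement itself, which is the sense in which a program becomes something you experience rather than a dump of text.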