[sci.virtual-worlds] HITL entities

william@hitl.vrnet.washington.edu (William Bricken) (07/26/90)

	Coordination of Multiple Participants in Virtual Space:  

			Preliminary Notes

			 William Bricken

			   HITL, UW
			    4-20-90


Assume a  software architecture for virtual reality within which each object/ 
entity is organized into a sense-model-act system with its own (virtual) 
process resources.  The model within an entity can be inhabited, that is, the  
computational cycle which maps sense input to action output can be under the  
interactive control of a user/participant. 

What are the mathematical tools which coordinate actions of multiple  
participants in virtual worlds?  The problem is similar to those faced in the  
design of software architectures for intelligent distributed agents 
(Genesereth and Nilsson, Logical Foundations of Artificial Intelligence).  
Entities differ from agents in these ways:  	

	(1)   Entities can be fully connected while maintaining individual  
perspective (merged without resolving inconsistencies).  Sharing information  
between autonomous entities is a matter of convenience.

	(2)  Entities inhabited by participants do not require automation of  
knowledge.  Some representation problems in world models can be interactively  
deferred to humans.  Sharing information between participants is a matter of  
negotiation and choice.

	(3)  The environment is computational. Some difficulties with modeling
real world phenomena can be finessed in a virtual world.  The environment, 
itself an entity, can be fully responsive.  

Each entity is essentially a small symbolic system which maps input and state  
onto output and state.  The symbolic framework can be functional, object- 
oriented, frame-based, or rule-based;  mapping eventually reduces to  
operations of pattern-matching and substitution.  The internal organization of
each entity consists of: 	

	Input buffer (fed by sensors) 	
	Priorities (rules for selecting input)
	Disposition (rules triggered by selected input) 	
	Knowledge (state collected by rules) 	
	Output buffer (actions generated by rules) 

A set of priorities selects an input item to compare to the trigger clauses in
a set of disposition rules.  When a particular rule is matched, the action it
specifies is carried out.  An action may be to store the input as knowledge
or to cause an effector to change the state of the entity.  Some rules may be
contingent on stored knowledge to be triggered.  Some rules may be independent
of input; they form the internal processing disposition of the entity.
Statistical classification and inference over knowledge are potential internal
processing tools.  We are limiting the knowledge representation language of
entities to first-order predicate calculus.
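
As an illustration only (not HITL code), the following Python sketch shows
one way this organization might be realized; the rule format, the priority
function, and the example percepts are assumptions made for exposition.

	# Minimal sketch of the entity organization described above (illustrative).
	from collections import deque

	class Entity:
	    def __init__(self, priorities, disposition):
	        self.input_buffer = deque()      # fed by sensors
	        self.priorities = priorities     # rules for selecting input
	        self.disposition = disposition   # (trigger, action) rules
	        self.knowledge = {}              # state collected by rules
	        self.output_buffer = deque()     # actions generated by rules

	    def sense(self, percept):
	        self.input_buffer.append(percept)

	    def step(self):
	        if not self.input_buffer:
	            return
	        # Priorities select one item from the input buffer.
	        item = self.priorities(list(self.input_buffer))
	        self.input_buffer.remove(item)
	        # The first disposition rule whose trigger matches fires its action.
	        for trigger, action in self.disposition:
	            if trigger(item, self.knowledge):
	                action(item, self.knowledge, self.output_buffer)
	                break

	# Example rules: store percepts tagged 'remember'; act on everything else.
	rules = [
	    (lambda item, kb: item[0] == 'remember',
	     lambda item, kb, out: kb.update({item[1]: item[2]})),
	    (lambda item, kb: True,
	     lambda item, kb, out: out.append(('act', item))),
	]
	e = Entity(priorities=lambda buffer: buffer[0], disposition=rules)
	e.sense(('remember', 'desk-color', 'green'))
	e.sense(('ping',))
	e.step(); e.step()
	print(e.knowledge, list(e.output_buffer))
	# {'desk-color': 'green'} [('act', ('ping',))]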

An entity becomes a virtual body when it is inhabited by a participant.  The  
disposition within a virtual body defaults (partially or wholly) to the 
sensor/effector suite of the participant.  For example, rather than 
controlling the viewpoint of a virtual robot by internal rules, viewpoint is 
controlled by the data stream generated by the head-tracking unit on the 
participant.
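
Continuing the illustrative sketch above (it reuses the Entity class defined
there), inhabiting an entity amounts to substituting a participant-driven rule
for an internal one; the head-tracker stream and rule names below are
hypothetical.

	# Sketch: a virtual body whose viewpoint disposition is replaced by a
	# participant data stream (names and pose samples are invented).
	def internal_viewpoint_rule(item, kb, out):
	    # Automated disposition: orient toward whatever was sensed.
	    out.append(('look-at', item))

	def tracked_viewpoint_rule(head_tracker):
	    def rule(item, kb, out):
	        # Inhabited disposition: viewpoint follows the participant's head.
	        out.append(('set-viewpoint', next(head_tracker)))
	    return rule

	tracker = iter([(0.0, 0.1, 0.9), (0.0, 0.2, 0.8)])    # fake pose samples
	robot = Entity(priorities=lambda buffer: buffer[0],
	               disposition=[(lambda item, kb: True,
	                             internal_viewpoint_rule)])
	robot.sense(('obstacle',)); robot.step()       # rule-driven viewpoint
	robot.disposition = [(lambda item, kb: True,
	                      tracked_viewpoint_rule(tracker))]
	robot.sense(('tick',)); robot.step()           # participant-driven viewpoint
	print(list(robot.output_buffer))
	# [('look-at', ('obstacle',)), ('set-viewpoint', (0.0, 0.1, 0.9))]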

Each entity constructs a model of its environment by sensing and storing  
information or by directly incorporating knowledge of other entities.  In  
contrast to robotic agents, sensory channels in virtual entities can be both 
exploratory (disconnected from environmental entities) and inclusive (directly
sharing knowledge with environmental entities).  Objectively, an entity with 
a viewpoint can sense the representation of other entities which fall within 
the range of that viewpoint.  Inclusively, entities which share information 
boundaries can merge world models directly.  

Objective agents must communicate through links, simulating a physical
environment.  Inclusive entities can be informationally in contact, simulating
a virtual database.  Objective information is broadcast, enforcing a sharp
distinction between syntax, which is transmitted, and semantics, which is
processed.  Inclusive communication is like direct behavior: communication
acts change mutual internal states without transmission.  From another
perspective, information processes in virtual environments are fully situated,
requiring a new model of semantics (Brian C. Smith, The Correspondence
Continuum).

These two forms of information exchange can be represented by network and  
map models.  Networks permit broadcast information: 

	[A]<--->[B].  

Entities A and B interact through the link between them.  Each has a sensor at
their local end of the link.  Maps permit shared information:  

	[A][B].  

Here, the boundary, ][ , is a direct mutual contact. Although networks and 
maps  are isomorphic representations for some purposes, in terms of virtual  
communication, they are different.  Removing a broadcast channel, for  
example, leaves two isolated entities:  

	[A]<--->[B]    ==>    [A]<   >[B].  

Removing a shared information boundary leaves one composite entity:  

	[A][B]    ==>    [A B].   

We are developing a calculus of information  sharing which uses topological  
boundary models to regulate the semantics and storage when information is  
exchanged virtually.
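
To make the asymmetry concrete, the sketch below (an illustration, not the
boundary calculus itself) models a broadcast link as a channel object between
separately stored models, and a shared boundary as mutually held storage:
closing the channel isolates the two entities, while removing the boundary
yields a single composite model.

	# Network model: A and B keep separate models and exchange syntax on a link.
	class Link:
	    def __init__(self):
	        self.open = True
	    def send(self, sender_model, receiver_model, key):
	        if self.open:                    # transmission of syntax
	            receiver_model[key] = sender_model[key]

	model_a, model_b = {'desk': 'green'}, {}
	link = Link()
	link.send(model_a, model_b, 'desk')      # B now holds a copy
	link.open = False                        # [A]<   >[B]: two isolated entities

	# Map model: A and B meet along a shared boundary region of storage.
	private_a = {'chair': 'red'}
	private_b = {'lamp': 'tall'}
	boundary  = {'desk': 'green'}            # mutually held, not transmitted
	view_a = {**private_a, **boundary}       # A's world model
	view_b = {**private_b, **boundary}       # B's world model
	composite = {**private_a, **boundary, **private_b}   # [A B]: one entity
	print(view_a, view_b, composite)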

Virtual reality provides an empirical context for exploration of theories of  
cooperation between human groups and software configurations.  Within an  
environment which has a visual semantics (such as architectural models and  
prototypes of instrument panels), participants can interact directly with 
images rather than with textual representations.  Digital databases can be 
updated automatically.  Inhabited entities must engage in broadcast
communication, since humans cannot share minds.  Automated entities can be
informationally cooperative, even though each possesses only a partial model
of the world.  The issue is how to construct automated and inhabited entity
collectives that maximize task-oriented productivity.

Coordination between participants depends upon mutually consistent models of  
shared environments.   Network-based research on intelligent agents has been  
developed for adversarial environments.  Virtual worlds coordination theory  
extends broadcast models by including map-based cooperative models which  
permit complete communication with environmental entities while maintaining  
individual perspectives. Unlike objective reality, virtual spaces accommodate  
multiple concurrent realities, each associated with a different participant or
perspective.  I can perceive a green desk, for example, while you perceive the
same desk to be brown.  You may not even perceive my desk at all.  We can
each dwell in entirely different virtual environments, establishing communality
only for those entities we explicitly wish to share.
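
One plausible storage scheme for such participant-relative attributes,
sketched below purely as an illustration (not the HITL implementation), keeps
a consensus description of each entity plus per-participant overlays that are
resolved at perception time.

	# Sketch: multiple concurrent realities as per-participant overlays on
	# shared entities (illustrative data model).
	world = {
	    'desk': {'shape': 'rectangular', 'color': 'brown'},  # consensus entity
	}
	overlays = {
	    'alice': {'desk': {'color': 'green'}},  # Alice perceives a green desk
	    'bob':   {},                            # Bob sees the consensus desk
	    'carol': {'desk': None},                # Carol does not perceive it
	}

	def perceive(participant, entity):
	    override = overlays.get(participant, {}).get(entity, {})
	    if override is None:
	        return None                         # absent from this reality
	    return {**world.get(entity, {}), **override}

	print(perceive('alice', 'desk'))  # {'shape': 'rectangular', 'color': 'green'}
	print(perceive('bob', 'desk'))    # {'shape': 'rectangular', 'color': 'brown'}
	print(perceive('carol', 'desk'))  # None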

Inconsistencies across participants can be negotiated or managed. Negotiation  
can be sensory (sharing viewpoints), knowledge based (sharing memories), or   
rule based  (sharing dispositions).  Maintenance of contradiction in virtual  
worlds requires merging inconsistent knowledge without disabling action or  
inference.  We are developing an approach to inconsistency that uses a three- 
valued logic based on an imaginary boolean value, j.   

	(P and (not P))  =  j.  

The imaginary logical value is analogous to the imaginary numerical value i.  
It  does not interact with two-valued deduction, permits a weaker form of 
inference in the presence of inconsistency, and allows lazy resolution of 
contradictions. This approach is equivalent to a hypothetical worlds approach,
but splits at the variable level (bottom-up) rather than at the model level 
(top-down) (Nicholas Rescher, Many-valued Logic).
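
These notes do not fix the connective tables; the sketch below adopts
Kleene/Priest-style tables as an assumption, to show j propagating through
conjunction while ordinary two-valued deduction is left untouched.

	# Three-valued logic with an imaginary value j marking contradiction.
	# The tables are an assumed Kleene/Priest-style reading.
	T, F, J = 'T', 'F', 'j'

	def NOT(p):
	    return {T: F, F: T, J: J}[p]

	def AND(p, q):
	    if p == F or q == F:
	        return F             # a definite falsehood dominates
	    if p == J or q == J:
	        return J             # otherwise contradiction propagates
	    return T

	def OR(p, q):
	    return NOT(AND(NOT(p), NOT(q)))

	# A variable carrying contradictory evidence is assigned j at the variable
	# level; P and (not P) then evaluates to j instead of disabling inference.
	P = J
	print(AND(P, NOT(P)))        # j

	# Ordinary two-valued deduction is untouched when no j is present.
	print(AND(T, NOT(F)))        # T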

We are applying contradiction maintenance techniques to a broader project  
called televirtuality.  The idea is to transmit virtual worlds over fiber 
optic telecommunications networks, to replace telephone and television with 
shared virtual realities.  The conventional objective model calls for 
coordination of a single virtual space across multiple concurrent
participants.  Contradiction maintenance reduces the transmission bandwidth and
synchronization bottlenecks of objective approaches. The shift in perspective 
is analogous to moving from a knowledge database relying on accumulation of
inferential assertions to a constraint database permitting any satisficing
world configuration.

Another technique we are exploring is experiential mathematics, and in  
particular, visual programming. We are constructing formal maps from textual  
representations to virtual entities that are sensually accessible. For 
example, we have mapped textual logic onto stacked cubes (blocks world).  
In contrast to iconic and representational visual programming languages, Block
Logic uses spatial position to embody logical semantics.  Removal and
rearrangement of blocks is axiomatized to hold computational semantics
invariant.  The evaluation of a structure appears visually as blocks join and
tumble; the structure remaining has the value of the result of the
computation.  Programming bugs appear as anomalous configurations of blocks.
Logics based on physical boundaries have pleasant properties; since they are 
many-to-one maps from textual logics, they simplify as well as concretize.
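
The Block Logic mapping itself is not specified in these notes; as a
stand-in, the sketch below evaluates nested boundary forms in the Laws of Form
style that boundary logics draw upon, suggesting how a structure's value can
survive removal and rearrangement.  The encoding is an assumption made for
illustration.

	# Evaluating nested boundary forms (Laws-of-Form style), as a stand-in for
	# the block axioms.  A space is a list of boundaries; each boundary is the
	# list of boundaries it contains.
	def marked(space):
	    # A space is marked (true) if it contains a boundary whose interior is
	    # unmarked; (( )) cancels and ( )( ) condenses as consequences.
	    return any(not marked(boundary) for boundary in space)

	FALSE = []                   # the empty space (void)
	TRUE  = [[]]                 # a single empty boundary: ( )

	def NOT(a):    return [a]                        # enclose:   (a)
	def OR(a, b):  return a + b                      # juxtapose: a b
	def AND(a, b): return NOT(OR(NOT(a), NOT(b)))    # ((a)(b))

	print(marked(AND(TRUE, NOT(TRUE))))   # False: the structure reduces to void
	print(marked(OR(FALSE, NOT(FALSE))))  # True:  a mark remains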

Visual programming is a component of a wider visualization project which  
includes the capabilities to transform, cluster, abstract and, in general, 
interact mathematically with data.  The long-term goal is to construct an 
experiential  environment which provides a non-textual yet formal interface 
to mathematical computation. The entity architecture permits, for example, data collection and  
analysis during a simulation experiment.  We are designing SIMSTAT,  an  
empirical analysis environment which provides statistical analysis integrated  
into experiential simulation.  Using overlay display techniques, the scientist/
participant will be able to enter a virtual simulation running concurrently
on top of a physical experiment.  The virtual display is driven by data streams
which may originate from mathematical models, from other entities within the  
simulation, or from the physical experiment itself.  The history mechanism 
within each entity generates a stream of behavioral data which is linked to a 
statistical analysis package. Correlational analysis is achieved by linking 
two objects to a common clock which time-stamps their respective behaviors.  
The comparison of empirical and hypothesized results is displayed dynamically
and interactively as data accumulates.
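
Since SIMSTAT is still being designed, the following is only a sketch of the
common-clock idea: two behavior histories time-stamped by a shared clock are
aligned on their common timestamps and compared with Pearson's r.  The entity
names, the data, and the choice of statistic are assumptions.

	# Sketch of common-clock correlational analysis (invented entities/data).
	from math import sqrt

	def correlate(history_a, history_b):
	    # Each history maps timestamp -> measurement; align on the shared
	    # clock's common timestamps and compute Pearson's r.
	    times = sorted(set(history_a) & set(history_b))
	    xs = [history_a[t] for t in times]
	    ys = [history_b[t] for t in times]
	    n = len(times)
	    mx, my = sum(xs) / n, sum(ys) / n
	    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
	    sx = sqrt(sum((x - mx) ** 2 for x in xs))
	    sy = sqrt(sum((y - my) ** 2 for y in ys))
	    return cov / (sx * sy)

	# Two entities' behavior streams, time-stamped by the common clock:
	valve_angle = {0: 10.0, 1: 12.0, 2: 15.0, 3: 14.0}
	flow_rate   = {0: 1.1,  1: 1.3,  2: 1.7,  3: 1.6}
	print(correlate(valve_angle, flow_rate))    # close to 1.0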

Naturally there are many active areas of research in distributed inference,   
negotiation of perspective, visual semantics, and participatory simulation.   
Although we are exploring contributions to each (software entities, direct
and broadcast communication, contradiction maintenance, experiential
mathematics), we are primarily seeking to construct an empirical environment  
within which problems of coordinated action can be easily embodied and  
tested.  Virtual reality is a workbench for mathematical models of multiple  
realities.