[sci.virtual-worlds] Munich/Siemens 'In Cyberspace' symposium long

Chalmers@europarc.xerox.com (Matthew Chalmers) (05/13/91)

Hello -

Here is a description of some of the parts of the recent 'In Cyberspace'
symposium, held at the Deutsches Museum in Munich (Munchen). Siemens was
the main sponsor. The quality and interest of the talks varied a fair
bit, as did the volume and continuity of notes I took. I took few or no
notes about the less technical talks, and so this description should not
be considered fully representative of the event. Overall the message
that I got from the symposium was that we should consider the medium of
VR as a thing in itself wherein mimicry of the real world may reduce its
potential and power.

There may still be typos and artifacts from the conversion into plain
text. Also there are a few missing facts and such. Mea culpa... but
maybe the Net will fill them in.

Regards,

--Matthew



Thursday 11th April

Bob Jacobson (U. of Washington) gave an overview of what was going on at
the HIT Lab. He showed a tape from the Industrial Symposium held there a
few months ago. He dislikes the data glove style of motion, in
particular the pointing gesture for motion. Interested in metaphorical
icons to stand in for the properties of an object. 

The HIT Lab plans to work on a virtual retinal scanner, intended to make
EyePhones obsolete. They will project directly onto the eyeball using
low power lasers. (Damned low power.) They are initially aiming for 4
megapixels, but intend to work towards having 16 megapixels. This would
involve positional sensing, removing the cables and tethers of current
systems. They are looking for real-world problems that are inherently
spatial.

Another big project is the Virtual Environment Operating System (VEOS) which
is intended to end up being a publicly available operating system based
on a rather Linda-like model of concurrency. It also seems to involve
constraints. It will tend to use a number of big servers to handle the
shared virtual worlds, and so allow lots of people with small machines
to connect or dial in. Still very early days though.
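To make the 'Linda-like model of concurrency' concrete: in Linda, processes coordinate through a shared bag of tuples rather than direct messages. The sketch below is a minimal illustrative tuple space in Python, not VEOS itself; all names and operations here are my own, modelled on the classic Linda primitives (out, rd, in).

```python
# A minimal Linda-style tuple space, sketched to illustrate the kind of
# coordination model the talk ascribed to VEOS. Illustrative only; the
# names and API are not taken from VEOS.
import threading

class TupleSpace:
    """Shared bag of tuples with the classic Linda operations."""
    def __init__(self):
        self._tuples = []
        self._cond = threading.Condition()

    def out(self, tup):
        """Deposit a tuple into the space."""
        with self._cond:
            self._tuples.append(tup)
            self._cond.notify_all()

    def _match(self, pattern):
        # None in a pattern acts as a wildcard field.
        for tup in self._tuples:
            if len(tup) == len(pattern) and all(
                p is None or p == t for p, t in zip(pattern, tup)
            ):
                return tup
        return None

    def rd(self, pattern):
        """Read (without removing) a matching tuple; block until one exists."""
        with self._cond:
            while (tup := self._match(pattern)) is None:
                self._cond.wait()
            return tup

    def inp(self, pattern):
        """Remove and return a matching tuple; block until one exists."""
        with self._cond:
            while (tup := self._match(pattern)) is None:
                self._cond.wait()
            self._tuples.remove(tup)
            return tup

# One client posts an object's position; another picks it up by pattern.
space = TupleSpace()
space.out(("position", "avatar-1", (1.0, 2.0, 0.5)))
kind, who, pos = space.rd(("position", "avatar-1", None))
```

The appeal for shared virtual worlds is that many small client machines need only read and write tuples on the big servers, with no direct knowledge of each other.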

He mentioned other VR labs: DEC, Sun and HP in the US, and Fujitsu and
Matsushita in Japan. He also mentioned the forthcoming journal,
Presence, from MIT Press.

He took a question from a guy from the Ruhr U. in Bochum, where there
appears to be a major 3D audio project under way.


Jurgen Brickmann, from the Dept. of Physical Chemistry in Darmstadt,
spoke about deliberately shifting away from trying to mimic reality when
designing virtual worlds. It means systems can run faster and can be
simple enough for everyone to use. Also different people see the same
thing differently, and tailoring a world to each viewer's sensibilities
seems too difficult. He showed some slides of examples of mixing levels
of dimensionality in imagery.

He proposed that the challenge for the artist is to not stay in 3D and
to avoid the standard rules of interpretation. He suggested that there
were two novel artistic aspects: presented data and/or artifacts, and
interaction and interpretation. In his artistic work, Brickmann
concentrates on the latter, as well as the interplay of the
dimensionality of objects. His main scientific work is in the
visualisation of molecular structures.


Mark Bolas of Fake Space Labs primarily talked about his company's two
main products, the Boom viewer and the Molly camera mount. The company
is a small spinoff from Ames and Stanford work. VR is the metaphor. A
concept -- we clothe ourselves with technology. If this technology is
engrossing enough, no matter the display medium, then we effectively
move into a different space.

The telephone interface is the best audio interface in the world. One can
move into and out of its use very smoothly, one can use it while still
working in the real world, and it is virtually transparent when it is not
being used. With a headset one has to suit up. With a speakerphone one is
too distant. Compare this with the Boom. One can pull into it gradually,
one can share it (pass it over to someone else), it has high resolution
(currently better than EyePhones, but not for long), and clean and fast
tracking.

Excellent quote: If you can't eat a doughnut and drink coffee, you're
not going to use it in the office.
 
VR as its own medium: what can we learn from the fact that wireframe
pictures are often more informative than flat polygons? Even when the
degree of realism is low, the ultimate measure of the success of a VR
system is the degree of engagement it invokes.
 
Re the Boom: head movement involves a head rotation as well as a
translation, since one pivots from the base of the neck rather than
around the eye centroid or the centre of the head.
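The neck-pivot point matters for tracking: rotating the head about the base of the neck also translates the eyes, so a viewer cannot be modelled as rotating in place about the eye centroid. A small 2D side-view sketch, using an illustrative (not measured) 12 cm pivot-to-eye distance:

```python
# Why head tracking must account for the neck pivot: pitching the head
# about the neck base swings the eyes along an arc, translating them.
# The 0.12 m neck length is an illustrative figure, not a measured one.
import math

def eye_position(pivot, neck_length, pitch_rad):
    """Eye point after pitching the head about a neck-base pivot (2D side view)."""
    px, py = pivot
    # Eyes sit neck_length above the pivot when upright; pitching swings
    # them forward and down on an arc centred on the pivot.
    return (px + neck_length * math.sin(pitch_rad),
            py + neck_length * math.cos(pitch_rad))

upright = eye_position((0.0, 0.0), 0.12, 0.0)
nodded = eye_position((0.0, 0.0), 0.12, math.radians(30))
# A 30-degree nod moves the eyes roughly 6 cm forward -- a translation
# that a pure eye-centred rotation model would miss entirely.
```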




Friday 12th April

David Sturman (MIT Media Lab Computer Graphics and Animation Group) gave
a talk entitled 'What Can We Do With Our Hands'. He talked of whole-hand
interaction: the direct use of the hand with no intermediary mouse,
button or dial. The goals are: naturalness, to take advantage of
preacquired skills; adaptability, i.e. to have multipurpose smooth
transitions between functions and modes; and dexterity, the integration
of motion into higher levels of competence.

The criteria for successful applications are that there should be many
degrees of freedom, the coordination of degrees of freedom leads to
dexterity, and there should be real-time or time-critical control.
Examples: the Bechtel 8 d.o.f. crane used in the construction industry;
the NASA telerobotic servicer; the Woods Hole Oceanographic Institute's
Jason vehicle; controlling robots e.g. returning robot arms to a good
area; the control of animated characters (depends on complexity and
fluidity); conducting a musical concert e.g. hand control of synthesised
sounds required to combine with other musicians; medical simulation and
training.
 
Myron Krueger's work is relevant here. Most interesting were his ideas
about the limitations of the use of cameras e.g. occlusion. For hands
and arms, mechanical gloves and arms, or optically-based systems like the
DataGlove and DataSuit, are often needed. The VPL DataGlove gives around
5 degrees of accuracy. Kramer (sp?) of MIT is working on a strain-based
flexion system.
 
The Dextrous (sp?) HandMaster is an expensive glove device that offers
20 d.o.f. including finger separation and measuring joint curl to about
1 degree of flexion. One can add a Polhemus or Ascension Bird to its
wrist. W Industries will soon release a glove with some tactile feedback,
and the National Advanced Robotics Research Centre (USA?) is working on
a similar item, the AirGlove.

Interpretation of data was a major topic. He suggested that we require
greater precision in terminology, discussing three levels of hand or
limb description: level 1, based on the fundamental degrees of freedom;
level 2, with first- and second-order derivatives and abstractions of
dofs, e.g. joint velocities or fingertip paths; and level 3, involving
discrete features such as gestures, positions and grips. He also
discussed three types of interpretation of hand motion: direct, e.g. a
'finger walker'; mapped, e.g. a slider; and symbolic, e.g. sign language.
The latter requires the most contextual information, e.g. ASLAN uses 11
parameters. Krueger discussed appropriate uses of the technology,
splitting them into 'anthropomorphic' tasks and coordination-of-dof
tasks. We need a way to describe and compare whole-hand tasks and
interfaces: their structural, functional and cognitive aspects. This
leads on to assessing the limits of required dexterity, the coupling of
dofs, endurance and training requirements, and methods of evaluation of
whole-hand input.
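The three description levels can be made concrete with a toy example. The code below tracks a single finger-joint angle over time; the function names and the grip threshold are my own inventions for illustration, not Sturman's.

```python
# Toy illustration of the three description levels for whole-hand input:
# level 1 = raw degrees of freedom, level 2 = derivatives/abstractions,
# level 3 = discrete features. Names and thresholds are invented.

def level1(samples):
    """Level 1: the raw degrees of freedom (joint angles, in degrees)."""
    return samples

def level2(samples, dt):
    """Level 2: first-order derivatives of the dofs (joint velocities)."""
    return [(b - a) / dt for a, b in zip(samples, samples[1:])]

def level3(samples, curl_threshold=60.0):
    """Level 3: a discrete feature -- classify the posture as a grip."""
    return "fist" if sum(samples) / len(samples) > curl_threshold else "open"

angles = [10.0, 35.0, 70.0, 85.0]    # one joint curling over time
velocities = level2(angles, dt=0.1)  # roughly [250.0, 350.0, 150.0] deg/s
posture = level3(angles)             # mean curl is 50.0, below threshold
```

Direct, mapped and symbolic interpretation then correspond to consuming these levels with ever more context: a 'finger walker' reads level 1 directly, a slider maps a level-1 dof onto a parameter, and sign-language recognition needs level-3 features plus context.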

With regard to appropriate feedback mechanisms for such systems, MK
noted that tactile or kinesthetic feedback is much faster than the
visual feedback loop. For teleoperation and similar operations, visual
feedback is important but complex, whereas auditory feedback is cheap
and effective.

In order to better our knowledge of how to design devices, he asked what
actions need to be measured and how accurately? What constraints on the
hand are present, and how do they affect the scope of designs?


John Eyles of UNC gave an overview of VR activity at Chapel Hill. Nothing 
particularly new to report, although the progress of PixelPlanes is always 
interesting.


Jonathan Waldern of W Industries was the most contentious of the
speakers at the symposium. He was immediately perceived by some of the
audience as a Suit. He comes from a background in CAD systems for
creative design, and did a PhD at Loughborough with IBM sponsorship. He
did the cart with the display showing the virtual objects in the room,
and did some work on the measurement and design of the virtual
interface, gesturing, etc. He has attempted to advance beyond this work
and to build a 'VR workstation' which is robust, cheap and safe (no high
voltages, etc.). The company was formed in 1987 in Leicester (UK). The
'W' in the company name is from his name (barf).

Their products include computing hardware, simulation management
software, rendering software, and i/o devices such as a visor with a
microphone, stereo sound and LCD displays (including optics to allow
variation of binocular overlap), and a glove with some tactile feedback
via inflated pads. A backpack-like affair connects up the various
components. They will soon have an 'exoskeleton' to go over the whole
body.

They consider entertainment to be the most demanding and profitable
application area. 50% of the company is devoted to entertainment-related
products. Some networking: at Wembley recently they showed 10 machines
linked together. Main target: arcades, holiday camps, hotels, bars. A
basic 'leisure machine' is available for <20k. Most software is simple
flight simulator stuff: jet fighters, helicopters, submarines. Also
powerboat racing.

He started his session by playing a video which slickly intercut scenes of
excited 'players' surrounded by dry ice and film footage of (real world)
fighter planes in action. Music by Queen: 'I want it all.. I want it
now' was on the audio track. Images from his system were conspicuous by
their absence from the video.

The audience reaction was interesting in itself: some people were mildly
amused by the promotional character of the video and the
self-promotional character of Waldern, while others were sternly annoyed
by the war imagery. 

Several questions from the audience were very critical of the warlike
character of the games, along the lines of "Why have you chosen to
make your first products ones which are designed to teach children how
to kill people more efficiently?" Waldern's response was that commercial
success was essential for his small company to survive, and this type of
entertainment is already popular with games users. This is the way for
them to get some money in and allow them to create other, more
educational or exploratory games.

An audience member pointed out the benefit of having new educational
devices with which to illustrate new or complex information. He pointed out
that most words describing 'awareness' are pictorial in origin, which bodes
well for VR technologies. Also, the educational possibilities imply
great ethical responsibilities both on the part of educators and the
industrialists producing the systems. Nevertheless all technologies can
be used for good or ill.

Waldern said that his aim was to produce well-designed, useful and reliable 
technology which would offer a head start for further applications. VR won't
happen without applications.

A further question was whether the company had any non-technical
specialists in psychology, education or ethics. Waldern said the company
was too small to have such people.

Bob Jacobson of the U. of Washington was the last audience member to add
to the discussion. He pointed out that at his lab they have deliberately
suppressed imagery of war games and the like. They see a need to present
a more varied image of the potential of VR technology, and wish to avoid
driving away those people who object to the militaristic influence on
research and development.


Peter Schroder of Thinking Machines was previously at MIT. He worked on
physics simulation, especially the handling of gravity and basic
Newtonian mechanics, and worked on the Roach.

He warned that the cost of making things realistic is *so* large. We should
hesitate before committing ourselves to this as a goal, and perhaps emphasise
interaction as a key feature. A qualitative change occurs when the feedback
loop between the system and the user is fast enough. This is a different
issue altogether from realism of imagery. Complexity of the modelled worlds
is another issue, as is software robustness. The latter is especially true
for physically-based modelling, where there are many numerical errors to
trap the unwary. There are many research systems around but few ready for
Joe User to walk up to and use.

The media attention given to VR is unduly premature. We must avoid creating
unfulfillable expectations. Beware the backlash. Real applications keep things
down-to-earth. Although many people see the cyberpunk idea as some sort
of ideal, the real applications are the more significant and interesting.
Fidelity, or complete realism, is not such an issue. The main task is
capturing the more basic physical characteristics of the particular
application. Education is also a big interest, and adaptability or
mutability of the scene makes for great appeal.

Scientific visualisation is also promising. VR could be right for navigating 
through the huge data sets of CFD, molecular modelling, and similar fields. 
UNC's PixelPlanes 5 is near to the forefront in this regard, but it still only
offers a few Hertz. The Connection Machine is roughly similar in this
regard. We need roughly another two orders of magnitude of performance
increase before SciVi really achieves its potential for (e.g.) adaptive
steering of simulations.

Having worked in an engineering capacity, Schroder is awed by the
complexity and variety of the technical problems facing the field. An
example is the production of small graphical displays, where we are
faced by tasks such as predictive filtering and overcoming sensor
inaccuracy (i.e. retaining accurate positioning in large spaces).

Polygon rendering: quoted figures for rendering rates need to be divided by 
three to get 'working' rates. And we need perhaps 30 or 50 megapolygons/s.
More under control is the area of sound simulation, but other senses such as 
tactile feedback are poorly served as yet. We don't want to be in a big cage 
of force-feedback machines. An example is the flight simulator which is the 
size of a large room.

Schroder criticises the idealism of those people who say we are entering
a new age and removing the barrier between man and machine. Barf. The
machine is so far away from us it's not even funny. For the present
these people should stick to LSD. There are minor control problems, but
that's 'just a small matter of engineering'.

Even so, a very promising technology for the future in this regard is that of
direct neural connection. The complexity of a sense such as touch is great,
and only by getting into the nervous system itself can we convey such
complex information. Biofeedback techniques to train users on such complex
interfaces might be needed, perhaps with neural nets to support them.
The problem of software design for the interface may lead to more
'software designing software'.

Let's avoid the term 'virtual reality'. Instead we should look to how we can
bring out the worlds inside the computer. Consider the Moog synthesiser's 
history. The first attempts at using it were to simulate natural sounds, but
this was a failure. It was much more successful to use the machine's own
style. With graphics we are at a similar early stage. We should design
worlds appropriate to the machines we use. Consider also Myron Krueger's
work, which uses existing and appropriate techniques.

As an example we should look at evolutionary systems and algorithmic
variation. These offer great complexity, interesting emergent properties and a
certain autonomy of operation. Note that headsets/eyephones/etc are not
important. People get excited anyway even though the resolution of
current VR displays is often poor. Best to use a good display though
e.g. 1280 x 1024. One need only put simple physical laws into such
systems in order to generate complexity. However one has to let the
system go beyond one's own control i.e. to evolve.

A large number of very fine images were then shown. These were produced
on a Connection Machine using image 'evolution' software co-developed
by Karl Sims. Sims is due to present a paper at Siggraph 91 describing
many of the technical details. Briefly, the evolution process involved
serial application of mathematical functions to the image.
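A toy version of the 'serial application of mathematical functions' idea: a genotype is a short list of functions composed over the (x, y) plane, mutated from generation to generation. The function set and mutation rule below are my own inventions for illustration, not Sims's actual system.

```python
# Toy sketch of evolved images: a genotype is a chain of functions applied
# in series to a base value derived from pixel coordinates. The function
# set and mutation scheme are invented for illustration.
import math
import random

FUNCS = {
    "sin": lambda v: math.sin(math.pi * v),
    "abs": lambda v: abs(v),
    "sq":  lambda v: v * v,
    "neg": lambda v: -v,
}

def evaluate(genotype, x, y):
    """Apply the genotype's functions in series to a base value from (x, y)."""
    v = x * y
    for name in genotype:
        v = FUNCS[name](v)
    return v

def mutate(genotype, rng):
    """Produce a child by appending or swapping one function at random."""
    child = list(genotype)
    if rng.random() < 0.5 or not child:
        child.append(rng.choice(list(FUNCS)))
    else:
        child[rng.randrange(len(child))] = rng.choice(list(FUNCS))
    return child

rng = random.Random(42)
parent = ["sin", "abs"]
children = [mutate(parent, rng) for _ in range(4)]
# In the interactive version, a human picks the most appealing rendered
# child and it becomes the next parent -- selection by aesthetics.
```

Rendering each child over a pixel grid and letting a human choose the survivor gives exactly the 'beyond one's own control' quality the talk described: the genotypes quickly grow past anything one would design by hand.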

Basic summary: skip the goggles, consider the appropriateness to the
machine and interactivity. Don't make the Moog mistake, and remember
this field will be slow to develop.


Erich Kiefer, from a German university whose name escapes me, spoke of
the needed advance from intelligent CAD systems to intelligent VR
systems. Generally speaking, he sees a need to adjust technology to
human needs. This is a shared goal of AI and VR; moreover, VR without AI
will become boring and unhelpful.

An example might be the generation and design of the geometry of a
modelled city. One can imagine a control system with a natural language,
gesture and drawing interface, but we would also want to have artificial
actors taking on services within the model. We need both: we need an
intelligent VR system for this type of task. We need systems to perform
complex actions and to survive in complex environments. This suggests
reflective capabilities: philosophers suggest that these are necessary
for 'reasonable' behaviour. We will need metalevels -- theories of
communication, a theory of thought and representation, and an idea of
what introspection is. Then we may be able to obtain a reflective
architecture able to look at its own contemplative processes.



Myron Krueger gave an interesting talk, but since it really covered the same
ground as his fine book 'Artificial Reality' I'll skip a commentary
and just leave a few short extracts.

Krueger looks forward to a more general human interface using the whole
body. We should remove the restriction that we have to sit down, so that
we can more fully use our bodies.

A modelled 'reality' requires predictability. Although we can change the
rules of physics or reaction, we can only do so at 'suitable' times, or
when the previous set of rules has been established in the minds of the
users. Krueger reiterated the primary importance of instantaneous
feedback: Reality acts as fast as I act, otherwise it isn't real.




--