[sci.virtual-worlds] Update rate, and the C-words

cdshaw@cs.UAlberta.CA (Chris Shaw) (04/03/91)

In article <gottalovethoselongnewsIDnames> Alan Kilian writes:

>Basically everything "science" is all floatingpoint.
>So, the moral of the story is that we need gigaflops.

Agreed. Of course, maybe we don't need a lot of scientific simulation in
our virtual worlds. To date, I've seen only three groups that do simulation in
their worlds. There's lots of people making viewers of static polygonal models.

>With a gigaflop cpu as you add objects to the world the whole thing slows
>down and when you get to maybe 500 objects it's just as slow as the above 
>example was with 2048 but the point is that up to 100 objects it's many times
>faster.
Frankly, I think that everyone's talking through their hat on this subtopic.
For me, the question is whether 100 objects can be *usefully perceived*
by one person. Or is it just like so many cars on the freeway -- when you look
closely, you see the people driving, but otherwise it's a flowing mass?
Plenty of algorithms remain to be developed that divide the world into small
chunks and do the N**2 interaction calculation only for N<20 objects per chunk.
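To make the idea concrete, here is a minimal modern sketch (in Python, which obviously postdates this thread) of that divide-into-chunks approach: bin objects into a uniform grid, then run the pairwise test only within a cell and its neighbours instead of over all N**2 combinations. The cell size `CELL` and the function names are mine, purely for illustration.

```python
import math
from collections import defaultdict

CELL = 5.0  # interaction radius; objects farther apart than this never interact

def cell_of(pos):
    # Map a 2D position to its integer grid cell.
    return (int(pos[0] // CELL), int(pos[1] // CELL))

def interacting_pairs(positions):
    """Return index pairs closer than CELL, testing only objects in the
    same or adjacent cells rather than all N**2 combinations."""
    grid = defaultdict(list)
    for i, p in enumerate(positions):
        grid[cell_of(p)].append(i)
    pairs = set()
    for (cx, cy), members in grid.items():
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                for i in members:
                    for j in grid.get((cx + dx, cy + dy), ()):
                        if i < j and math.dist(positions[i], positions[j]) < CELL:
                            pairs.add((i, j))
    return pairs
```

With, say, 500 objects spread over a large world, each cell holds only a handful of objects, so the quadratic cost applies per-cell rather than globally.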

>>In a VR system each component must perform its
>> functions continuously with LITTLE OR NO LATENCY.
>
>No, this is simply not true. The latency from head motion to display generation
>on the HIT lab's VPL-based VR system is from 1 to 4 seconds. This is a long time
>in computer lives, and this is arguably the best VR system in production.

Arguable indeed! In a "raw" VPL system, using 4D-70's for drawing, Mac2 for
data collection & switching, and moderate scene complexity, I have seen 10-15
frames per second. Our system with moderate complexity on ancient machinery
gets 5-15 updates per second.

One second per update is not good. One second per update isn't interaction.
Stuart Card has said that once latency rises above 0.5 seconds,
you're not doing interaction, you're thinking and planning ahead.
Anyone who has typed on a terminal that gives 0.5 second character
turnaround knows what I'm talking about. You have to plan your
editing commands and text entry in order to get proper results. It's hard
work to edit under this type of environment, and it's just a 1D workspace!
I have used systems that momentarily give 0.5 second latency for a 2D
(schematic capture) application, and the mental effort there is even higher.
Imagine having to place the mouse cursor when latency is 0.5 seconds. After
a while, I just stopped working while this was happening (doing a save
in background was the usual cause, so I usually did desk work while saving).
Our experience with 0.5 second latency in 3D using a head-mounted display
shows the truth of this rule once more.

So anyway, there's the minimum acceptable number.

The next number to shoot for is 0.1 second (100ms) lag. It's sort of
the "10dB roll-off point" for improved user effectiveness. Once you surpass
100ms lag, the rest is gravy. The user appreciates the improvement, but
her performance does not improve markedly. Of course, the second-order
effects of user appreciation are being ignored for the moment.

>You definitely want the latency to be small but to require "LITTLE OR NO" is
>simply silly.

This debate has been settled: you need 100ms or better.
See Card, Moran & Newell, "The Psychology of Human-Computer Interaction".

>You do need to have each machine running asynchronously,
>independent of its predecessor. If the simulation can't keep up for one frame
>or if a packet gets lost you can't have the image "jerk".

This is a debate over definition of terms, I think. The question is.. how do
you decide when to skip a frame, or when to approximate, or whatever. The
requirement is clear, however. 

>> A lost packet
>> becomes a visible display defect in the continuous real-time virtual
>> world unless there is sufficient overhead to permit retransmissions.
>
>No again. It's not overhead, it's how fast can you fully compute the world.
>If you can fully compute the world in 1/1000 second then you have a lazy .016
>seconds to get the data to somewhere else. If it takes you 1/61 second
>to compute the world then you only have .0002 second to get the data out of
>here. It's all in how fast the floatingpoint simulations run.

Wellll.. It's all in how much you can cheat, really. There comes a point where
something has to fall off the wagon just to get over the hill. How to do this
is an open research question. But the point is this.. temporal fidelity is
requirement number one, and you design your system to allow for that
constraint. If not, you're designing an animation system (nothing wrong with
that), but if it ain't interactive, it ain't VR. And, if it's got >0.5 seconds
latency between user input and computer output, it ain't interaction.

><Good description of distributed-processing simulation system deleted.>

We and others have been doing this for a couple of years now.
Seems the right way to go.

>>Virtual Worlds frame rates will ultimately need to be higher than 30 fps.
>Again Right on. And more than 640 X 480 X 256 colors.

Yes, but this requires the available output hardware. (Headmount in particular)
As I said, I don't think that update rate need go beyond 10Hz for decent
interaction, and frame rates of 60-70Hz will do. Note that frame rate != update
rate. Frame rate is the screen hardware refresh frequency, and should be above
30 Hz to avoid flicker.
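One common way to reconcile a 10Hz update rate with a 60-70Hz frame rate is to interpolate between successive simulation states at refresh time, so the display doesn't show a visible stair-step. A minimal sketch (modern Python; the function names are mine, not from any system discussed here):

```python
def frames_between_updates(frame_hz, update_hz):
    # How many refresh frames each simulation state must cover.
    return frame_hz / update_hz

def interpolate(prev_state, next_state, alpha):
    """Blend two successive simulation states for display.
    alpha runs from 0.0 to 1.0 over the frames between updates."""
    return prev_state + alpha * (next_state - prev_state)
```

At 60Hz refresh and 10Hz updates, each world state covers 6 frames, with alpha stepping 0, 1/6, 2/6, ... between them. Interpolation adds up to one update interval of extra display lag, so it's a trade of smoothness against latency.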

>Oh, and some software to boot.

Good point.

> -Alan Kilian kilian@cray.com                  612.683.5499

By the way, someone privately took issue with my statement that
"Interactive computer graphics is the core of VR." Sorry about the confused
intent. By this I meant that it's the *core requirement* not the 
*core research challenge*. Virtual sound environments aside, I think that
a necessary condition for VR is interactive computer generated visuals.
Furthermore, the interaction relies on high-bandwidth user input devices to
specify view, and to manipulate objects within the virtual environment in
a natural way.

Thus, using mouse input to specify the view doesn't qualify, for example, since
the device is low bandwidth and the interaction is not "natural". Its
lack of naturalness arises from the cognitive translation process required
to map 2D motions to 3D motions.
(My rash attempt at defining VR.)
-- 
Chris Shaw     University of Alberta
cdshaw@cs.UAlberta.ca           Now with new, minty Internet flavour!
CatchPhrase: Bogus as HELL !

brucec%phoebus.labs.tek.com@RELAY.CS.NET (Bruce Cohen) (04/08/91)

In article <1991Apr3.060218.4122@milton.u.washington.edu> cdshaw@cs.UAlberta.CA 
(Chris Shaw) writes:
> 
> In article <gottalovethoselongnewsIDnames> Alan Kilian writes:
> 
>>Basically everything "science" is all floatingpoint.
>>So, the moral of the story is that we need gigaflops.
> 
> Agreed. Of course, maybe we don't need a lot of scientific simulation in
> our virtual worlds. To date, I've seen only three groups that do simulation in
> their worlds. There's lots of people making viewers of static polygonal models.

Sure, but in the long (and maybe the not-so-long) run, we need to simulate
physical objects for VRnauts to interact with.  (And before anyone jumps on
me for being a reality chauvinist, I'll point out that the object may
respond to a physics different from the one we see when we're not wearing
the goggles).  The state of the art in physical simulation of mechanical
systems (bridges, chains, snakes, and jello) involves solving a set of
simultaneous differential equations every time a part moves or the forces on
it change.  In my book that means floating point, and lots of it.
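For a sense of what that floating point work looks like per timestep, here is a minimal sketch (modern Python, mine, not from any system in this thread) of the simplest mechanical case, a mass on a spring, stepped with semi-implicit Euler integration:

```python
def simulate_spring(x0, v0, k=4.0, m=1.0, dt=0.01, steps=100):
    """Integrate m*x'' = -k*x in small floating-point steps
    (semi-implicit Euler): the kind of computation a physical
    simulation repeats every time a part moves."""
    x, v = x0, v0
    for _ in range(steps):
        a = -k / m * x   # Hooke's law acceleration
        v += a * dt      # update velocity first (semi-implicit)
        x += v * dt      # then position, using the new velocity
    return x, v
```

A bridge, chain, or block of jello is just thousands of such coupled equations solved together, which is why the floating point demand scales up so fast.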

Well, you say, why do I care about simulating physical objects?  After all,
isn't a door just a bunch of polygons?  No, I answer, not if you want to
be able to knock on it, or push it open with a stick rather than touch it
with your finger (veterans of dungeon adventuring will recognize these
strategies), and have it react like a door with mass and squeaky hinges ;-).

But leaving aside the entertainment applications, consider the example that
Moravec gives in "Mind Children": learning physics interactively by going
inside a VR where you can play with objects while modifying the objects and
the constants of nature.  Now, you could argue that that's scientific
visualization too, and you'd be right (though it better be a lot cheaper
than the systems used in research if many people are ever going to learn
that way).

So let's consider VR applied to a business use: working with a large
database of business records.  I think it would be very nice to design a
file object, whatever it looks like, so that its perceived mass would
depend on the amount of information in it.  You could heft a file and
decide very quickly whether you wanted to get involved with it just now or
go for coffee.  Ok, it's a little fanciful.  But that's exactly why VR will
be useful in the long run: to open up the metaphors we use when interacting
with information systems to a little fancy, so we can find more efficient
(in terms of the user) and more interesting (to make the user more engaged
in the work) ways of interacting.

So in my opinion, physical simulation will be a very big part of the
implementation of most commercial VR systems (after the initial "glitz"
phase, assuming it survives that), and floating point arithmetic will be
part of that.
--
------------------------------------------------------------------------
Speaker-to-managers, aka
Bruce Cohen, Computer Research Lab        email: brucec@rl.labs.tek.com
Tektronix Laboratories, Tektronix, Inc.                phone: (503)627-5241
M/S 50-662, P.O. Box 500, Beaverton, OR  97077