[sci.virtual-worlds] Hardware and VR

kilian@poplar.cray.com (Alan Kilian) (04/04/91)

  Well, it seems that the readers of this newsgroup are content to talk about
things like new sensory modalities and new paradigms of thought until I am
ready to get sick on my terminal.  I am not.  There was a good discussion
starting about hardware and VR which I thought brought a good bit of reality
to this newsgroup, but Bob J. says there are too many people complaining, so
I would like to continue this discussion via email.

  If you would like to be in on this discussion (hardware issues and VR),
send me some email and I'll be the clearinghouse for it.

  To everyone who complained about this thread: have fun dreaming; we'll get
on with getting things running.

                 -Alan "I don't take kindly to head bashing" Kilian


 -Alan Kilian kilian@cray.com                  612.683.5499
  Cray Research, Inc.           | If you were plowing a field what would you
  655 F Lone Oak Drive          | rather use? 2 strong oxen or 1024 chickens?
  Eagan  MN,     55121          | -Seymour Cray (On massively parallel machines)

-- 

[MODERATOR'S NOTE:  I believe that Alan misstates the complaints of several
sci.virtual-worlds participants regarding the Cray-Connection Machine
discussion.  As I understood them, these complaints were not about the use of
hardware to create virtual worlds -- rather, they were finding the listing
of the internal characteristics of Cray and CM computers to be tedious.

Alan does not agree.  He feels that after the first few posts, the general
tenor of the thread was against discussions of hardware.  So he has requested
that I post this.

I don't want to promote another derivative discussion of who said what.  So
those who want to follow Alan's suggestion should do so via email.  Otherwise,
discussions of hardware AS IT RELATES TO VIRTUAL WORLDS continue to be
welcome here. -- Bob J.]

jim@baroque.stanford.edu (James Helman) (04/04/91)

I actually thought the Cray/CM discussion was exposing some valuable
lines of thought.  Good things come out of debates, even if they get a
little (but not too) heated.  It's clear that people from different
backgrounds have different conceptions of VR applications.  Seeing
these different perspectives is instructive.  While I don't share
Alan's thoughts on many of these issues, I do share his interest in
getting down to brass tacks and discussing details of hardware,
bandwidths, and computation requirements.  Speculative ideas,
philosophy, metaphor, and neural implants all have a place, but so
does debating whether numerical simulation is a big part of VR and
which architectures are good for which applications.  "Internal
characteristics" aren't tedious from my perspective; they're essential
to understanding how VR will move from a research topic to real-world
applications.

-jim



-- 

tsarver@uunet.UU.NET (Tom Sarver) (04/05/91)

It seems that the hardware has limited what we think we should be
able to create in VR.  What about a software architecture which allows
plug-and-play components to be implemented on varying levels of machines
for varying levels of quality?

What I'm suggesting is an analogy to the HDTV strategy of broadcasting
in the highest quality possible, but selling HDTV receivers which can
produce varying levels of quality.  Here, the single, expensive
component (the broadcaster) is sending to another component (the
receiver) implemented on (at least) two different "platforms."  Each
receiver will "implement" some subset of the message, where the
higher quality, more expensive receiver will implement a larger
subset.

The above analogy can be expanded to a whole system of components which
are talking to each other (basically in a pipeline configuration).
This strategy can be implemented in at least two ways, as I see it:
the standards approach and the real-time-degradation approach
(I know these are probably not well named, but the names will serve
their purpose).

The standards approach (implemented by HDTV, BTW) sets up some
standards regarding the message and its contents between two components.
One "chooses" the quality level when implementing the standards for
a given platform.  This is similar to the X Window System, where the
messages are the same, but each server can have a slightly degraded
appearance.  For example, fewer colors (or monochrome, or grey-scale),
lower resolution, and slower updates are all areas in which quality can
degrade without losing the content of the message.
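The standards approach above can be sketched in a few lines of modern Python.  This is a minimal illustration, not anything from the thread: the `SceneMessage` format, the `Receiver` class, and their fields are all hypothetical names invented for the sketch.  The sender always transmits the full-quality message; each receiver implements only the subset its platform can handle.

```python
# A sketch of the "standards approach": one fixed message format, with
# each receiver implementing a subset of it at its own quality level.
# SceneMessage and Receiver are hypothetical names for illustration.
from dataclasses import dataclass

@dataclass
class SceneMessage:
    """Full-quality description sent by the single expensive sender."""
    colors: int      # palette size in the message (e.g. 2**24)
    width: int       # source resolution
    height: int
    update_hz: int   # updates per second the sender produces

class Receiver:
    """Each platform picks its quality level when implementing the standard."""
    def __init__(self, max_colors, max_width, max_height, max_hz):
        self.max_colors = max_colors
        self.max_width = max_width
        self.max_height = max_height
        self.max_hz = max_hz

    def render(self, msg):
        # Implement only the subset of the message this platform can
        # handle: fewer colors, lower resolution, slower updates.  The
        # content of the message is preserved; only its quality degrades.
        return {
            "colors": min(msg.colors, self.max_colors),
            "width": min(msg.width, self.max_width),
            "height": min(msg.height, self.max_height),
            "update_hz": min(msg.update_hz, self.max_hz),
        }

msg = SceneMessage(colors=2**24, width=1920, height=1080, update_hz=60)
cheap = Receiver(max_colors=256, max_width=640, max_height=480, max_hz=15)
fancy = Receiver(max_colors=2**24, max_width=4096, max_height=2160, max_hz=120)
```

Both receivers consume the identical message; the choice of quality was made once, when each platform's implementation was built, which is what distinguishes this from the dynamic approach described next.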

The real-time-degradation approach is similar to a real-time system
which can give an approximation (as opposed to an accurate answer)
when the deadline arrives.  In this case the quality level is dynamic,
based on the circumstances at a given time.  One can see that this
approach is much more difficult to implement because one has to have
a clear understanding of building approximations and refining them.
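The real-time-degradation idea corresponds to what is now usually called an anytime algorithm: compute a rough answer first, then refine it until the deadline arrives.  The sketch below is a toy illustration under that assumption; `refine_until_deadline` and the pi-series refiner are invented names, and the Leibniz series merely stands in for any computation that can be approximated and then improved.

```python
# A sketch of the "real-time-degradation approach" as an anytime
# computation: refine an approximation until the frame deadline, then
# return whatever is best so far.  All names here are illustrative.
import time

def refine_until_deadline(initial, refine_step, deadline_s):
    """Keep improving the approximation; return the best when time is up."""
    best = initial
    start = time.monotonic()
    while time.monotonic() - start < deadline_s:
        improved = refine_step(best)
        if improved is None:        # fully refined; nothing left to do
            break
        best = improved
    return best

def make_pi_refiner():
    """Toy refinement: approximate pi by summing Leibniz-series terms."""
    state = {"n": 0, "sum": 0.0}
    def step(_current):
        k = state["n"]
        state["sum"] += (-1.0) ** k / (2 * k + 1)
        state["n"] += 1
        return 4.0 * state["sum"]
    return step

# With a 10 ms budget we get some approximation of pi -- coarse if the
# machine is slow, accurate if it is fast, but always an answer on time.
approx = refine_until_deadline(0.0, make_pi_refiner(), deadline_s=0.01)
```

The quality level here is decided at run time by the clock, not at build time by the platform, which is exactly the distinction Sarver draws between the two approaches.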

Do all objects begin life as a collection of spheres and get refined
into something more realistic?  How can we apply this paradigm to all
the components in the architecture?

One implication of the whole plug-and-play goal is that if you allow
graceful degradation, you can lose the sense of "being there."  This is
where you make the tradeoff between content and form.  "Content" is
traversing the database, or watching the air flow, or whatever job one
is doing within the VR.  "Form" relates to how real the environment
feels.  I don't think the protagonist of _Neuromancer_ ever felt like
he was walking in the park; the synthetic world felt "real" but still
synthetic.  He could differentiate between being "jacked in" and not.

fink@acf5.NYU.EDU (Howard Fink) (04/06/91)

I don't get the HDTV analogy.  HDTV sends a rasterized image to the receiver.
What's needed is a signal that describes the image, with the receiver
determining the resolution.  The user can trade off between speed, accuracy,
color, frame rate, etc.  As the machines get faster, there's no need to 
change standards.
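Fink's point can be made concrete with a toy example (invented here, not from the thread): if the sender describes a scene in resolution-independent coordinates rather than sending pixels, each receiver can rasterize the same message at whatever resolution it can afford.

```python
# A sketch of a description-based signal: the sender transmits shapes
# in normalized [0, 1] coordinates, never pixels, and each receiver
# rasterizes them at its own resolution.  Names are illustrative.

def rasterize_circle(cx, cy, r, width, height):
    """Render one described circle into a width x height bitmap of 0/1.

    Because the description uses normalized coordinates, the same
    message renders correctly at any receiver resolution.
    """
    grid = []
    for y in range(height):
        row = []
        for x in range(width):
            # Map the pixel center back into normalized coordinates.
            nx = (x + 0.5) / width
            ny = (y + 0.5) / height
            row.append(1 if (nx - cx) ** 2 + (ny - cy) ** 2 <= r ** 2 else 0)
        grid.append(row)
    return grid

scene = [(0.5, 0.5, 0.25)]   # one circle, described rather than rasterized
cheap_image = [rasterize_circle(*c, 8, 8) for c in scene]     # low-res receiver
fancy_image = [rasterize_circle(*c, 64, 64) for c in scene]   # high-res receiver
```

The standard (the description format) never changes; as receivers get faster they simply rasterize the same messages at higher width and height, which is the property Fink is asking for.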