[sci.virtual-worlds] abstracts for 1991

ANDY%CORNELLF@UWAVM.U.WASHINGTON.EDU (Andy Rose) (09/14/90)

I am working on the design of PANDA, the parallel application network
development analogue, which has many of the features of virtual
reality.  This system will primarily be an extension of what AVS from
Stardent and apE from the Ohio Supercomputer Center do, which is allow
users to develop graphics applications (mainly data-viewing tools)
within an object-oriented graphical environment.  In this paradigm,
the user "drags" functional modules from a palette onto an editing
area, where modules can be "hooked" together if their data types are
compatible.  For instance, you may want to "read data" -> "generate
arbitrary slice" -> "render" -> "display", with an additional module
called "generate colormap".

The extensions PANDA represents enable users to share data.  This is
where the virtual reality analogy comes into play.  Since I am mainly
concerned with developing a usable product, I am somewhat constrained
by available technology.  Fortunately, the environment at the Cornell
Theory Center is rich with compute power and graphics workstations,
although one of the design goals is to open visualization to the
remote user (imagine a PS/2 across campus or in Raleigh NC displaying
flexing molecules).
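To give a feel for what that design goal means in practice, the compute
host might push rendered frames to a thin display client over a socket,
along the lines below.  The width/height wire header and the port
number are my assumptions for this sketch, not anything PANDA has
settled on.

    /* Sketch: a compute host pushing one rendered frame of raw
       8-bit pixels to a remote display client.  The 8-byte header
       and port 7000 are assumptions for illustration only. */
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>

    static int send_frame(int sock, const unsigned char *pixels,
                          uint32_t width, uint32_t height)
    {
        uint32_t header[2];
        size_t   n = (size_t)width * height;

        header[0] = htonl(width);
        header[1] = htonl(height);
        if (write(sock, header, sizeof header) != (ssize_t)sizeof header)
            return -1;
        if (write(sock, pixels, n) != (ssize_t)n)
            return -1;
        return 0;
    }

    int main(void)
    {
        int sock = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in addr;
        unsigned char frame[64 * 64];                  /* dummy frame  */

        memset(&addr, 0, sizeof addr);
        addr.sin_family = AF_INET;
        addr.sin_port   = htons(7000);                 /* assumed port */
        addr.sin_addr.s_addr = inet_addr("127.0.0.1"); /* display host */

        if (sock < 0 || connect(sock, (struct sockaddr *)&addr,
                                sizeof addr) < 0) {
            perror("connect");
            return 1;
        }
        memset(frame, 0, sizeof frame);
        return send_frame(sock, frame, 64, 64) == 0 ? 0 : 1;
    }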

I think that my experience in scientific visualization has given me
some insights which would be valuable to the vr community.  I have to
confront the realities of network speeds, cache sizes, interprocess
communication, the color support of different hardware, etc.

What I would like is some direction for producing an acceptable abstract
for the Santa Cruz Group for Study, April '91.

The goal for the short term (before Dec '90) is to produce a system
which will allow researchers to use the visualization tools now
available (over 100 tools) in a transparent, hassle-free way.  I think
the by-products of such development have more effect than I originally
foresaw.  For instance, I would like two researchers to be able to
analyse the same data set (where "analyse" can mean "change"), much as
two authors edit the same manuscript.  From this goal comes the
now-familiar requirement that one user be able to know the other is
around.  In this case a "user" is a point of view (an "eye point")
and a pointing cursor (or perhaps more than one).  So if you were
editing a data set (by picking objects and moving them), someone
sharing the data space (the "virtual reality") may "see" you do this:
not just by seeing the object move, but also by seeing your eye point
and your cursor move.  Perhaps later I can add more descriptive
features to the eye icon (maybe a bitmap of the user's face, or a
fingerprint which gets left on the object he "touched").


What I'm getting at is that this is not farfetched and is in fact
work supported by the National Science Foundation ("A Visual Pipeline
Environment for Scientific Visualization Support") and IBM (the
CNSF's closeness to IBM, which gave us two IBM 3090s, allows us access
to some new development tools for the RS6000, namely some MOTIF
graphical interface development toolkits).


So please let me know if you are interested in seeing the white paper
describing the goals and timetable of this project, and whether this
is of interest to the vr community at a level which might allow this
work to be presented in Santa Cruz.

In sincerity,
Andrew Newkirk Rose '91
Department of Visualization
Cornell National Supercomputing Facility / Theory Center
632 Engineering and Theory Center
Hoy Road
Ithaca NY  14853
607 254 8686
andy@cornellf.tn.cornell.edu