[sci.virtual-worlds] VR OS wish list

almquist@brahms.udel.edu (Squish) (04/08/91)

In article <1991Apr8.000801.8080@milton.u.washington.edu> (Bruce Cohen) writes:
>The state of the art in physical simulation of mechanical systems 
>(bridges, chains, snakes, and jello) involves solving a set of 
>simultaneous differential equations every time a part moves or the 
>forces on it change.  In my book that means floating point, and lots 
>of it.

How does this thought sound?  Why don't we create a virtual reality 
system based on an efficient computer solution that has been working 
for a while?  What could that be?  How about operating systems?  Why 
don't we create a VR OS - i.e., a kernel that, in addition to the 
usual operating system responsibilities (memory management, file 
system, etc.), would handle VR-related tasks (high-speed I/O, 3D 
sound, interactive 3D graphics, etc.)?  Nothing new about this idea; 
many people have come to the same conclusion.
Taking from previously posted articles of yester-year, someone said 
they wanted to be able to have sound coming from ttyX and the graphic 
texture-map info coming from ttyY, etc.  Using this concept, why 
can't we get physical simulation info from ttyZ?  If I/O is being 
established between the VR kernel and DataGloves, why couldn't we 
establish a connection between Mathematica and objects in our VR 
world?  We could do bridge design by establishing a link to AutoCAD 
and then do bridge analysis by establishing a link to SAP90 or STRUDL 
or Mom&Pop's FEA (Finite Element Analysis) package.  ETC.
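
To make the ttyZ idea concrete, here's a rough C sketch of what a
consumer of that stream might look like.  (The device name /dev/ttyZ
and the one-update-per-line "id x y z" record format are invented for
illustration; a real VR kernel would define its own.)

/* Rough sketch: a consumer of the hypothetical /dev/ttyZ simulation
 * stream.  Assumes one update per line, "object-id x y z", e.g.
 * "bridge3 1.0 2.5 0.0". */
#include <stdio.h>

int main()
{
    FILE *sim;
    char id[64];
    double x, y, z;

    sim = fopen("/dev/ttyZ", "r");
    if (sim == (FILE *)0) {
        perror("/dev/ttyZ");
        return 1;
    }
    /* Read position updates until the stream closes. */
    while (fscanf(sim, "%63s %lf %lf %lf", id, &x, &y, &z) == 4) {
        /* ...hand the new (x,y,z) for `id' to the renderer here... */
        printf("%s -> (%g, %g, %g)\n", id, x, y, z);
    }
    fclose(sim);
    return 0;
}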
I've seen a bunch of neat packages out there on the market.  Why not
use them?  Or should we follow the software practices of yesterday and 
re-invent the wheel (-:  SO, would this work?  Well, if we had our
simulation package running on a CRAY we'd be in business.  BUT what
about parallel and distributed processing?  If we designed our
kernel so that communication is fast and efficient, and also so that
it allows multiple connections, what would stop us from using
various concepts from the parallel and distributed processing schools
of thought?
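
On the multiple-connections point: the standard UNIX way to let one
server juggle several streams at once is a select() loop.  A rough
sketch (the three descriptors stand for already-opened DataGlove,
sound, and simulation connections; how they get opened is left out):

#include <stdio.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/time.h>

void service_connections(int fd_glove, int fd_sound, int fd_sim)
{
    fd_set readable;
    char buf[512];
    int maxfd = fd_glove;

    if (fd_sound > maxfd) maxfd = fd_sound;
    if (fd_sim > maxfd)   maxfd = fd_sim;

    for (;;) {
        /* Watch all three streams; block until one has data. */
        FD_ZERO(&readable);
        FD_SET(fd_glove, &readable);
        FD_SET(fd_sound, &readable);
        FD_SET(fd_sim, &readable);

        if (select(maxfd + 1, &readable, (fd_set *)0, (fd_set *)0,
                   (struct timeval *)0) < 0) {
            perror("select");
            return;
        }
        if (FD_ISSET(fd_glove, &readable)) {
            read(fd_glove, buf, sizeof buf);   /* update hand position */
        }
        if (FD_ISSET(fd_sound, &readable)) {
            read(fd_sound, buf, sizeof buf);   /* queue audio */
        }
        if (FD_ISSET(fd_sim, &readable)) {
            read(fd_sim, buf, sizeof buf);     /* apply simulation step */
        }
    }
}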
Also, I've heard rumors that (was it Macsyma or Mathematica?) was 
going to be put on a chip!!  There are also the new lines of RISC 
machines that are pushing the computational envelope - how about the 
i860?  What other creative/dreamed-up ideas could we use?  What kind 
of abilities should we implement or consider putting in this 
wonderful VR kernel?  Anyone got ideas?
I see a VR kernel coming into being in the future that would make 
wonderful physical simulations a common thing.  Using this idea, we 
wouldn't have to have everything running in/on the VR kernel.  Just 
establish a high-speed link with the kernel, talk the same language, 
and TA-DA, we could use UNIX machines, DOS machines (ARF!), and VMS 
machines (YUCK!).  Maybe now we might be able to get some use out of 
your BIG dinosaur, aka the IBM 3090 (as taken from Bill Joy's 
comments about this wonderful machine)!  Toss out the debate about 
the Cray versus the Connection Machine; you could use them both.  The 
only problem is that we'd need GIGA-networks SOON.  But with recent 
breakthroughs in fiber optics, who knows.
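
"Talking the same language" across UNIX, DOS, and VMS boxes means, at
a minimum, agreeing on byte order on the wire.  A toy sketch of what I
mean, using the usual network byte order calls (the header fields are
invented, not a proposal, and 32-bit longs are assumed):

/* Toy sketch of a common wire format: a fixed 12-byte header in
 * network byte order, so a UNIX, DOS, or VMS host parses it the
 * same way. */
#include <string.h>
#include <netinet/in.h>    /* htonl(), ntohl() */

struct vr_msg {
    unsigned long type;      /* e.g. 1 = position update */
    unsigned long object;    /* which object in the world */
    unsigned long length;    /* payload bytes that follow */
};

/* Pack a header into a buffer for transmission, high byte first. */
void pack_header(char *buf, struct vr_msg *m)
{
    unsigned long net;

    net = htonl(m->type);   memcpy(buf + 0, &net, 4);
    net = htonl(m->object); memcpy(buf + 4, &net, 4);
    net = htonl(m->length); memcpy(buf + 8, &net, 4);
}

/* Unpack a received header, whatever the local byte order is. */
void unpack_header(char *buf, struct vr_msg *m)
{
    unsigned long net;

    memcpy(&net, buf + 0, 4); m->type   = ntohl(net);
    memcpy(&net, buf + 4, 4); m->object = ntohl(net);
    memcpy(&net, buf + 8, 4); m->length = ntohl(net);
}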
SO, where has this posting led us?  What's on YOUR wish list?  What 
kinds of abilities do you think a kernel should have?  What kinds of 
problems could we solve?  How useful would our cheap VR systems be?  
How could we make them more useful?  Inquiring minds wanna know. (-:

- Mike Almquist (almquist@brahms.udel.edu)

erich@eecs.cs.pdx.edu (Erich Stefan Boleyn) (04/09/91)

almquist@brahms.udel.edu (Squish) writes:

>How does this thought sound?  Why don't we create a virtual reality 
>system based on an efficient computer solution that has been working 
>for a while?  What could that be?  How about operating systems?  Why 
>don't we create a VR OS - i.e., a kernel that, in addition to the 
>usual operating system responsibilities (memory management, file 
>system, etc.), would handle VR-related tasks (high-speed I/O, 3D 
>sound, interactive 3D graphics, etc.)?  Nothing new about this idea; 
>many people have come to the same conclusion.
...[deleted]...
>they wanted to be able to have sound coming from ttyX and the graphic 
>texture-map info coming from ttyY, etc.  Using this concept, why 
>can't we get physical simulation info from ttyZ?  If I/O is being 
>established between the VR kernel and DataGloves, why couldn't we 
>establish a connection between Mathematica and objects in our VR 
>world?
...[deleted]...
>breakthroughs in fiber optics, who knows.  SO, where has this posting 
>led us?  What's on YOUR wish list?  What kinds of abilities do you 
>think a kernel should have?  What kinds of problems could we solve?  
>How useful would our cheap VR systems be?  How could we make them 
>more useful?  Inquiring minds wanna know. (-:

   This sounds a bit like some of the arguments for taking object-oriented
design and carrying some of its programming models into the kernel, so that
there is a distributed flavor to the whole mess.

   Well, on *my* wish-list are some of the generalized ideas that would give
us the specific ones you asked for...  like:

   1) A data-flow-like model.  This already exists somewhat in the concept
        of UNIX pipes and sockets.  Unfortunately, it is still only
        partially exploited.  The idea of having a place to connect to (a
        device, whatever) mostly has the work done already, but it would be
        nice to have a uniform structure underneath it.  It would also be
        nice to make the connections independent of the machine they are
        on, i.e. you would have no problem executing a bunch of processes
        (in pipes, whatever) across several machines, and it would be
        syntactically pretty much identical to doing it on one machine
        (see the sketch after this list).

   2) Machine independence of implementation for code and data models.  This
        also exists already to some extent: the few a.out formats for UNIX
        (and its variants) are documented, and data space tends to be
        allocated in a uniform fashion.  This could be extended, and a
        framework provided for code migration and the like.  It would be
        nearly essential to have this kind of thing for multiple interacting
        sessions.

   3) A uniform (or at least interfaceable) method for managing data and
        code pieces, and for tracking what is being used by which user, etc.
        It seems that the UNIX (& DOS & VMS) idea of a process is breaking
        down with all of the work being done on new operating systems like
        Mach, and especially on the distributed systems.  Why not break out
        of the concept altogether?  It seems to be a big stumbling block for
        the code-migration idea, even though it provides nice methods of
        isolation and data protection.
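
   To make (1) concrete, here is a plain BSD-sockets sketch in C.
Nothing in it cares whether `host' names the local machine or one
across the net; the host name and port number are placeholders for
whatever naming scheme the kernel would use:

/* Returns a connected stream to host:port, or -1 on error.  The
 * caller then read()s and write()s on the result exactly as if it
 * were a local pipe. */
#include <string.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <netdb.h>

int connect_stage(char *host, int port)
{
    struct hostent *hp;
    struct sockaddr_in addr;
    int fd;

    hp = gethostbyname(host);      /* "localhost" or a remote name */
    if (hp == (struct hostent *)0)
        return -1;

    memset((char *)&addr, 0, sizeof addr);
    addr.sin_family = AF_INET;
    memcpy((char *)&addr.sin_addr, hp->h_addr, hp->h_length);
    addr.sin_port = htons(port);

    fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0)
        return -1;
    if (connect(fd, (struct sockaddr *)&addr, sizeof addr) < 0) {
        close(fd);
        return -1;
    }
    return fd;
}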

   Coupling the above with some resource-scheduling methods, constrained
by performance requirements, seems like an interesting idea.

   Erich

             "I haven't lost my mind; I know exactly where it is."
     / --  Erich Stefan Boleyn  -- \       --=> *Mad Genius wanna-be* <=--
    { Honorary Grad. Student (Math) }--> Internet E-mail: <erich@cs.pdx.edu>
     \  Portland State University  /        Phone #:  (503) 289-4635