[sci.virtual-worlds] Interactive soft modelling

leech@homer.cs.unc.edu (Jonathan Leech) (11/29/90)

In article <11890@milton.u.washington.edu>, mg@munnari.oz.au (Mike
Gigante) writes:
|>With a pair of gloves, eyephones and lots of software, we hope to make
|>a really neat equivalent to clay modelling in VR space.

        Some people may remember a SIGGRAPH paper a few years back (I don't
have the exact reference) which used a depth span buffer to do real-time
CSG. This only worked from one viewpoint, but was pretty neat to watch.
I recall the author referring to it as 'Cheese Whiz' due to the effect
of depositing a trail of material as the cursor moved about.
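
The flavour of the deposit operation can be sketched in a few lines of
C. This collapses the paper's span buffer to a single depth extent per
pixel (a height field seen from the eye), so take it as a toy model
rather than the real thing; all the names here are made up.

#include <math.h>

#define W 512
#define H 512

/* nearest surface depth per pixel; initialise every entry to the
 * background depth before sculpting begins                        */
static float znear[W][H];

/* Deposit a spherical blob of radius r centred at (cx, cy, cz);
 * calling this along the cursor trail leaves the "Cheese Whiz"
 * bead.  Union = keep whichever surface is nearer the eye.        */
void deposit(int cx, int cy, float cz, float r)
{
    int x, y;
    for (x = cx - (int)r; x <= cx + (int)r; x++)
        for (y = cy - (int)r; y <= cy + (int)r; y++) {
            float dx = (float)(x - cx), dy = (float)(y - cy);
            float d2 = dx * dx + dy * dy;
            if (x >= 0 && x < W && y >= 0 && y < H && d2 <= r * r) {
                float zfront = cz - (float)sqrt(r * r - d2);
                if (zfront < znear[x][y])
                    znear[x][y] = zfront;
            }
        }
}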

        With high-performance voxel systems becoming available (e.g.
people here can do several frames/second of a 128^3 voxel database raytraced
on our Pixel-Planes 5 system), it's natural to extend this concept to full
3D with the usual VR gadgets.
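
(For scale: 128^3 is 2,097,152 voxels, so a one-byte-per-voxel database
is only about 2 MB -- small enough that traversing much of it several
times a second is plausible on that class of hardware.)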

mg@munnari.oz.au (Mike Gigante) (11/29/90)

leech@homer.cs.unc.edu (Jonathan Leech) writes:


>In article <11890@milton.u.washington.edu>, mg@munnari.oz.au (Mike
                                          -->mg@godzilla.cgl.rmit.oz.au<--
>Gigante) writes:
>|>With a pair of gloves, eyephones and lots of software, we hope to make
>|>a really neat equivalent to clay modelling in VR space.

>        Some people may remember a SIGGRAPH paper a few years back (I don't
>have the exact reference) which used a depth span buffer to do real-time
>CSG. This only worked from one viewpoint, but was pretty neat to watch.
>I recall the author referring to it as 'Cheese Whiz' due to the effect
>of depositing a trail of material as the cursor moved about.

Tim Van Hook, if I recall correctly, probably at SIGGRAPH '86 in Dallas
or '87 in Anaheim. Also, I think the neatest thing was the *removal* of
material, a la machining.
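
In the single-span toy model sketched earlier in the thread, removal is
the dual of deposit, and it also shows why the real paper needed spans:
one depth per pixel can only represent a cut where the tool actually
breaks the current surface, never an internal cavity. Again a sketch
only, reusing the invented znear[][] buffer from that earlier sketch:

/* Carve a spherical tool of radius r centred at (cx, cy, cz) out of
 * the znear[][] height field.  Difference = push the surface away
 * from the eye wherever the tool penetrates it.                    */
void cut(int cx, int cy, float cz, float r)
{
    int x, y;
    for (x = cx - (int)r; x <= cx + (int)r; x++)
        for (y = cy - (int)r; y <= cy + (int)r; y++) {
            float dx = (float)(x - cx), dy = (float)(y - cy);
            float d2 = dx * dx + dy * dy;
            if (x >= 0 && x < W && y >= 0 && y < H && d2 <= r * r) {
                float zfront = cz - (float)sqrt(r * r - d2);
                float zback  = cz + (float)sqrt(r * r - d2);
                /* only carve if the tool reaches the surface; a
                 * buried tool would need a second span             */
                if (zfront <= znear[x][y] && zback > znear[x][y])
                    znear[x][y] = zback;
            }
        }
}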

Now, whether this is the most appropriate (or most natural) method for
VR modelling, my intuition says not. It should certainly be quite
simple to implement -- a simple uniform cell grid would suffice.  However,
I think that operations like those proposed by Beth Cobb in her '84
doctoral thesis (U of Utah), combined with an explicit surface
representation (e.g. B-splines, Beta-splines, etc.) would be fun. That
is where we are heading. The fun and interesting part will be
providing a *really* simple way of specifying things like the locality
of refinement, ensuring compatible orders, knot vectors, and so on,
without the poor sculptor having to know *anything* about splines and
their details (i.e. he/she just has a lump of malleable material that
behaves in a predictable manner). One of the nice things will be that
we can then manufacture (NC) these virtual sculptures -- maybe with
stereolithography once the sculptor makes self-penetrating objects like
Klein bottles and the like...
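
To make the "no splines visible" idea concrete, here is one very rough
way it might work, assuming a uniform cubic B-spline (whose knot vector
is implicit, so the bookkeeping stays hidden). The single-point "pull"
rule and all the names are mine for illustration -- a real sculpting
system would distribute the pull and refine locally:

#define NCP 16                      /* number of control points */

typedef struct { float x, y, z; } Pt;

static Pt cp[NCP];                  /* the control polygon */

/* Uniform cubic B-spline basis on segment-local u in [0,1). */
static void basis(float u, float b[4])
{
    float u2 = u * u, u3 = u2 * u;
    b[0] = (1 - 3*u + 3*u2 - u3)   / 6.0f;
    b[1] = (4 - 6*u2 + 3*u3)       / 6.0f;
    b[2] = (1 + 3*u + 3*u2 - 3*u3) / 6.0f;
    b[3] = u3                      / 6.0f;
}

/* Point on the curve at global parameter t in [0, NCP-3). */
Pt curve_point(float t)
{
    int seg = (int)t, i;
    float b[4];
    Pt p = { 0.0f, 0.0f, 0.0f };
    basis(t - (float)seg, b);
    for (i = 0; i < 4; i++) {
        p.x += b[i] * cp[seg + i].x;
        p.y += b[i] * cp[seg + i].y;
        p.z += b[i] * cp[seg + i].z;
    }
    return p;
}

/* The sculptor pulls on the material at parameter t; nudge the
 * control point whose basis function dominates there, so he/she
 * never sees knots, orders or control points at all.  A surface
 * would be the same machinery in two parameters.                */
void pull(float t, float dx, float dy, float dz)
{
    int seg = (int)t, i, best = 0;
    float b[4];
    basis(t - (float)seg, b);
    for (i = 1; i < 4; i++)
        if (b[i] > b[best]) best = i;
    cp[seg + best].x += dx;
    cp[seg + best].y += dy;
    cp[seg + best].z += dz;
}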

>        With high-performance voxel systems becoming available (e.g.
>people here can do several frames/second of a 128^3 voxel database raytraced
>on our Pixel-Planes 5 system), it's natural to extend this concept to full
>3D with the usual VR gadgets.


Yep, it is true that volumes are becoming computationally feasible, but
you are still limited to fairly low resolutions, right? Although I can
imagine mechanisms for local deformations of volumes, some of them will
require copying, others careful ordering of overwrites -- e.g. if I want
to twist and stretch the volume to form a helical column...
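
The twist is a good example of the copying: done as an inverse mapping,
every destination voxel reads from an untouched source copy, and trying
it in place would clobber voxels that later lookups still need -- which
is precisely the ordering problem. A rough sketch (nearest-neighbour
sampling, a fixed twist rate, and invented names, so an illustration
only):

#include <math.h>

#define N  128                      /* the 128^3 database again */
#define PI 3.14159265f

typedef unsigned char Voxel;

/* Twist the volume about the z axis by 'turns' full revolutions
 * over its height, writing into a separate destination volume.  */
void twist(Voxel dst[N][N][N], Voxel src[N][N][N], float turns)
{
    int x, y, z;
    float c = (N - 1) / 2.0f;       /* axis through the centre */
    for (z = 0; z < N; z++) {
        float a  = -2.0f * PI * turns * (float)z / N;  /* inverse angle */
        float ca = (float)cos(a), sa = (float)sin(a);
        for (y = 0; y < N; y++)
            for (x = 0; x < N; x++) {
                /* where did this destination voxel come from? */
                float fx = ca * ((float)x - c) - sa * ((float)y - c) + c;
                float fy = sa * ((float)x - c) + ca * ((float)y - c) + c;
                int sx = (int)(fx + 0.5f), sy = (int)(fy + 0.5f);
                dst[z][y][x] = (sx >= 0 && sx < N && sy >= 0 && sy < N)
                             ? src[z][sy][sx] : 0;
            }
    }
}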

It is certainly interesting to think about, and to discuss here...

Mike Gigante,
RMIT Australia
mg@godzilla.cgl.rmit.oz.au