[sci.virtual-worlds] Multiscale databases

leech@cs.unc.edu (Jonathan Leech) (04/12/91)

In article <1991Apr11.185440.23383@milton.u.washington.edu>, piggy@gargoyle.uchicago.edu (La Monte Yarroll) writes:
|> I've been thinking about 3D generalizations of this algorithm, which
|> seems to be what we want for VR.  The obvious extension of dividing
|> space into cubes and picking an "average" colour, seems inadequate to
|> me--it seems that you'll get stuck with a bunch of opaque blobs.  Is
|> it perhaps possible to come up with an "average" shape for any
|> spatial region--something like approximating a sphere with an icosahedron?

    I agree this is inadequate.  It's attacking the problem from the
wrong end - sort of like pre vs. post filtering.  Better that objects
be specified at multiple scales from the start.  The hard part is
automatically generating multiscale representations.  Stuff like
volume datasets or parametric objects (patches, implicit surfaces...)
is relatively straightforward unless you need to reduce the actual
number of patches/volumes/whatever, but irregular (polygonal) data
forces you to address subsampling primitives immediately.  This may be
a double win, as multiscale representations are also useful for
rendering via successive refinement.
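
    For what it's worth, here is one minimal sketch of the kind of
subsampling I mean for irregular polygonal data: vertex clustering on a
uniform grid.  Vertices falling in the same cell are merged into their
average, and triangles that degenerate after merging are dropped, so the
coarse model keeps surface geometry rather than turning into opaque
blobs.  All names and the cell-size parameter below are my own invention
for illustration, not anyone's actual implementation.

```python
def simplify(vertices, triangles, cell_size):
    """Return a coarser (vertices, triangles) pair via vertex clustering.

    vertices:  list of (x, y, z) tuples
    triangles: list of (i, j, k) index triples into vertices
    cell_size: edge length of the clustering grid cells
    """
    cluster_of = {}    # grid cell -> index of the merged vertex
    sums = []          # running coordinate sums per cluster
    counts = []        # vertex count per cluster
    remap = []         # old vertex index -> new vertex index
    for (x, y, z) in vertices:
        cell = (int(x // cell_size), int(y // cell_size), int(z // cell_size))
        if cell not in cluster_of:
            cluster_of[cell] = len(sums)
            sums.append([0.0, 0.0, 0.0])
            counts.append(0)
        i = cluster_of[cell]
        sums[i][0] += x; sums[i][1] += y; sums[i][2] += z
        counts[i] += 1
        remap.append(i)
    # Each merged vertex is the average of the vertices in its cell.
    new_vertices = [tuple(c / n for c in s) for s, n in zip(sums, counts)]
    # Keep only triangles still non-degenerate after merging.
    new_triangles = []
    for (a, b, c) in triangles:
        a2, b2, c2 = remap[a], remap[b], remap[c]
        if len({a2, b2, c2}) == 3:
            new_triangles.append((a2, b2, c2))
    return new_vertices, new_triangles
```

Running this repeatedly with growing cell sizes yields the multiscale
representation directly, which is what makes it a candidate for
successive-refinement rendering as well.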
--
    Jon Leech (leech@cs.unc.edu)    __@/
    Brice: "How many people don't know anything about it?"
    Andy: "About what?"