[comp.graphics] Blobby Polygons

charlie@celia.UUCP (Charlie Gibson) (02/15/89)

Hello.

Can anyone recommend some references for "Meta-Sphere" or "Blobby Molecule"
algorithms that create a tessellated 3-d geometry? (as opposed to scanline
algorithms for direct rendering of these primitives)

Thanx.
-- 
Charlie Gibson                                | What IS the secret powder
Rhythm & Hues, Inc.                           | that makes "Orange Julius"
INTERNET: celia!charlie@tis.llnl.gov          | a "Devilishly Good Drink?"
UUCP: ...{ames,hplabs}!lll-tis!celia!charlie  | I *MUST* KNOW!

foo@titan.rice.edu (Mark Hall) (02/16/89)

In article <439@celia.UUCP> celia!charlie@tis.llnl.gov (Charlie Gibson) writes:
>Can anyone recommend some references for "Meta-Sphere" or "Blobby Molecule"
>algorithms that create a tessellated 3-d geometry? (as opposed to scanline
>algorithms for direct rendering of these primitives)
>
>-- 
>Charlie Gibson                                | What IS the secret powder
>Rhythm & Hues, Inc.                           | that makes "Orange Julius"

   The surfaces of these blobby molecules are implicitly defined.
  That is, they are defined by a single equation

       f(x, y, z) = 0

   I assume that you have seen Blinn's '82 paper on rendering these directly.
  It was printed in ACM Transactions on Graphics, and pointed to in that
  year's SIGGRAPH proceedings. 
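
   For concreteness, here is a small C sketch of the kind of field such a
  molecule defines, using Blinn's exponential falloff; the struct and
  variable names here are my own, not code from any of the papers below.

      #include <math.h>

      /* One "atom" of a blobby molecule: a center plus two constants
         controlling the strength (b) and rate of falloff (a) of its field. */
      typedef struct { double cx, cy, cz, a, b; } Atom;

      /* Blinn-style field: each atom contributes b * exp(-a * r^2), and the
         surface is the level set where the summed field equals a threshold T.
         Here f > 0 means "inside", f < 0 "outside", f == 0 on the surface.  */
      double blobby(Atom *atoms, int n, double T, double x, double y, double z)
      {
          int i;
          double f = -T;
          for (i = 0; i < n; i++) {
              double dx = x - atoms[i].cx;
              double dy = y - atoms[i].cy;
              double dz = z - atoms[i].cz;
              f += atoms[i].b * exp(-atoms[i].a * (dx*dx + dy*dy + dz*dz));
          }
          return f;
      }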

   In the last couple of years several people have looked into using
  polygonal representations of implicit surfaces. Implicit surfaces
  are becoming a little more popular because they are nice for defining
  "blending surfaces"; my advisor wrote his thesis on the form the implicit
  blending surface needs to take. Spatially arranged density data can also
  be viewed this way, by picking a level set to be the surface of interest.
  Lots of data is in this form: CT scans, NMR, some seismic data, etc.
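
   (For concreteness: once you have picked a level set, the surface crossing
  between two adjacent grid samples that straddle the iso-value is usually
  placed by linear interpolation. A small C sketch; the names are mine, not
  taken from any of the papers below.)

      /* Given density samples d0 and d1 at two adjacent grid points which
         straddle the chosen iso-value, return the fraction t in [0,1] along
         the edge at which the level set (density == iso) is crossed, assuming
         the density varies linearly between the samples.                    */
      double level_crossing(double d0, double d1, double iso)
      {
          return (iso - d0) / (d1 - d0);
      }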

    Back to your initial question: what are some references on 
  polygonalizing these things?

   Blinn, J. (1982),
   "A Generalization of Algebraic Surface Drawing",
   ACM Transactions on Graphics, Vol. 1, No. 3, pp. 235-256.

   Wyvill, G., McPheeters, C., and Wyvill, B. (1986),
   "Data Structure for Soft Objects",
   The Visual Computer, Vol. 2, pp. 227-234.

   Lorensen, W., and Cline, H. (1987),
   "Marching Cubes: A High Resolution 3D Surface Construction Algorithm",
   Computer Graphics, Vol. 21, No. 4, pp. 163-169.

   Dürst, M. J. (1988),
   "Additional Reference to Marching Cubes",
   Computer Graphics, Vol. 22, No. 2, pp. 72-73.

   Bloomenthal, J. (1988),
   "Polygonization of Implicit Surfaces",
   Computer Aided Geometric Design, Vol. 5, pp. 341-355.
   (also Xerox report CSL-87-2, and in SIGGRAPH course notes (87 & 88?))

   Hall, M., and Warren, J. (1988),
   "Adaptive Tessellation of Implicitly Defined Surfaces",
   (submitted for publication; also a Rice technical report)
   *copies on e-mail request*


   As mentioned before, Blinn rendered these things directly.
   The Wyvill brothers have done a lot of work incorporating these objects
   into their Graphicsland environment at the University of Calgary; I see
   some of their students posting in this group, if you have questions
   for them. Lorensen and Cline showed a table-lookup algorithm that is
   quite nice. It does have (at least) one bug, a consequence of which is
   pointed out by Dürst, but I have used it in a number of applications
   with pleasing results. Bloomenthal presents an algorithmic approach with
   the added feature of being able to adaptively approximate the surface;
   that is, where the surface is flat[ter], use fewer but larger polygons
   to approximate it. His algorithm is tricky to implement correctly, by
   his own admission. Joe Warren and I found a different method for
   adaptively polygonalizing the surface that we think is easier to
   implement.
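
   To give a feel for the table-lookup idea, here is a C sketch of the
   per-cube step. The tables themselves (edge_table, tri_table) are only
   described in comments; none of this is Lorensen and Cline's actual code.

      /* Sketch of a table-lookup pass over a single cube of a sampled
         volume.  val[0..7] are the field values at the 8 cube corners and
         iso is the chosen iso-value.  The 8 inside/outside bits form an
         index into precomputed tables that say which cube edges the surface
         crosses and how to join those crossings into triangles.            */
      void do_cube(double val[8], double iso)
      {
          int i, index = 0;

          /* build an 8-bit case index: bit i is set iff corner i is inside */
          for (i = 0; i < 8; i++)
              if (val[i] >= iso)
                  index |= (1 << i);

          if (index == 0 || index == 255)
              return;               /* cube entirely inside or outside */

          /* edge_table[index] would list the crossed edges; a vertex is
             placed on each crossed edge by linear interpolation of its two
             corner values, and tri_table[index] lists how to connect the
             vertices into triangles.  The 256-entry tables are what make
             the method fast -- and also where the face-interpretation bug
             mentioned above can creep in.                                 */
      }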

   Hope this helps. 

   - mark


foo@titan.rice.edu (Mark Hall) (02/20/89)

In article <2607@kalliope.rice.edu> foo@titan.rice.edu (Mark Hall) writes:
>In article <439@celia.UUCP> celia!charlie@tis.llnl.gov (Charlie Gibson) writes:
>>Can anyone recommend some references for "Meta-Sphere" or "Blobby Molecule"
>>algorithms that create a tessellated 3-d geometry? 
>>-- 
>>Charlie Gibson                                | What IS the secret powder
>>Rhythm & Hues, Inc.                           | that makes "Orange Julius"
>


   I left a few things unclear in my posting of references on polygonalizing
 implicit surfaces, among which are the "blobby molecules". 

   If you want a copy of the paper I co-authored, send an email request
 with a physical mail address. The figures and photos in the paper do 
 not travel by email very well. At least, I don't have that capability. 

   The "tricky" part of Bloomenthal's polygonalization algorithm is in
 the adaptive part. If you don't care about adaptive polygonalization,
 his algorithmic approach is quite straightforward. (If you are rendering
 "blobby molecules", for instance, there will probably not be large flat
 portions on your surface, so adaptivity will probably not buy you much.)
 
   Note that Lorensen & Cline's method presupposes a fixed grid of values
 that the algorithm is presented with. Lots of scientific data comes in
 this form. The other algorithms show their graphics origins in that they
 can take advantage of (or require) the ability to generate density
 information at arbitrary locations. This allows a much smaller amount of
 data to suffice. If you are concerned with a single connected structure,
 it is often possible to "walk" along the surface, generating data only
 in a small region just inside and outside the surface. (See
 Bloomenthal's paper.)
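
   For example, when the field can be evaluated anywhere, an edge crossing
 can be refined to whatever accuracy you like by bisection. A C sketch
 (my own illustrative code, not taken from any of the papers):

      typedef struct { double x, y, z; } Point;

      /* Refine the point where an implicit surface f = 0 crosses the segment
         from p0 (where f < 0, "outside") to p1 (where f > 0, "inside") by
         bisection.  This needs the ability to evaluate f at arbitrary points,
         which a fixed, pre-sampled grid does not give you.                   */
      Point bisect(double (*f)(double, double, double),
                   Point p0, Point p1, int steps)
      {
          int i;
          for (i = 0; i < steps; i++) {
              Point mid;
              mid.x = 0.5 * (p0.x + p1.x);
              mid.y = 0.5 * (p0.y + p1.y);
              mid.z = 0.5 * (p0.z + p1.z);
              if (f(mid.x, mid.y, mid.z) < 0.0)
                  p0 = mid;         /* midpoint still outside: move p0 inward */
              else
                  p1 = mid;         /* midpoint inside or on the surface      */
          }
          return p1;                /* within |p1 - p0| / 2^steps of f = 0 */
      }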

   A previous article mentioned that Lorensen and Cline's method can cause
 holes to appear at saddle points of the surface. That is the problem that
 Dürst mentioned. However, I feel that the holes are a symptom of a more
 general underlying problem: the lookup-table method they use can cause
 data on a single face to be interpreted differently by the two cubes
 which share the face.

   A quick note: all these methods are point-sampling methods, and are
 therefore subject to the problems of undersampling, just as in signal
 processing. It is impossible to know that you have sampled the volume
 closely enough unless you can determine the maximum "frequency" in the
 data and sample at least twice that often. All these methods can be
 "fooled" by data with a high enough frequency component. Lorensen and
 Cline's method has the additional problem that two cubes may interpret
 differently how the surface crosses their shared face.

             in           out         in           out
                * * * * * *             * *.* * * *
                *      .  *             * .       *
                *.      . *             *.       .*
                * .      .*             *       . *
                *  .      *             *      .  *
                * * * * * *             * * * * * *
             out          in         out            in

   (The above is supposed to be two squares, each with a pair of diagonal
    lines in it.)

   Consider the 2D case: you have 4 data values at the corners of a square
 (or rectangle). Consider the case where diagonally opposite corners match
 in their "inside/outside" classification. There are 2 ways to interpret
 the data for simple functions: (1) the 2 corners that are "inside" are
 connected to each other, or (2) they are unconnected on this face. The
 two interpretations lead to different sets of polygon edges on the face.
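
   In code the ambiguity is easy to characterize, and one common (though
 not the only) way to break the tie consistently is to look at the field
 at the face center. A sketch with names of my own choosing:

      /* Corner bits for one square face: bit i is set iff corner i is inside.
         Corners are numbered 0..3 going around the face, so the two ambiguous
         patterns are 0101 (corners 0 and 2 inside) and 1010 (corners 1 and 3). */
      int face_case(int in0, int in1, int in2, int in3)
      {
          return (in0 << 0) | (in1 << 1) | (in2 << 2) | (in3 << 3);
      }

      int is_ambiguous(int face)
      {
          return face == 0x5 || face == 0xA;   /* opposite corners inside */
      }

      /* One way to break the tie: sample (or estimate) the field at the face
         center.  Both cubes sharing the face see the same center value, so
         they necessarily agree on which pair of surface crossings to connect,
         and the holes at saddle points cannot occur.                          */
      int connect_inside_corners(double center_value, double iso)
      {
          return center_value >= iso;
      }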

   In 3D, the above scenario happens on a face shared by two abutting cubes.
 If the two cubes interpret the shared face inconsistently, problems
 arise. One is the creation of holes at saddle points. All the other
 methods ensure consistency in the interpretation of shared faces and so
 do not have the same problems.

   - mark