ranjit@grad1.cis.upenn.edu (Ranjit Bhatnagar) (09/12/89)
In article <170@vsserv.scri.fsu.edu> prem@geomag.UUCP (Prem Subrahmanyam) writes:
>
> I would strongly recommend obtaining copies of both DBW_Render and
> QRT, as both have very good texture mapping routines.  DBW uses
> absolute spatial coordinates to determine texture, while QRT uses
> a relative position per each object type mapping.

The combination of 3-d spatial texture mapping (where the map for a
particular point is determined by its position in space rather than its
position on the patch or polygon) with a nice 3-d turbulence function
can give really neat results for marble, wood, and such.  Because the
texture is 3-d, objects look like they are carved out of the texture
function rather than veneered with it.  It works well with non-turbulent
texture functions too, like bricks, 3-d checkerboards, waves, and so on.

However, there's a disadvantage to this kind of texture function that
I haven't seen discussed before: as generally proposed, it's highly
unsuited to _animation._  The problem is that you generally define one
texture function throughout all of space.  If an object happens to move,
its texture changes accordingly.  It's a neat effect - try it - but it's
not what one usually wants to see.

The obvious solution to this is to define a separate 3-d texture for
each object, and, further, _cause the texture to be rotated, translated,
and scaled with the object._  DBW does not allow this, so if you want
to do animations of any real complexity with DBW, you can't use the nice
wood or marble textures.

This almost solves the problem.  However, it doesn't handle the case of
an object whose shape changes.  Consider a sphere that metamorphoses into
a cube, or a human figure which walks, bends, and so on.  There's no way
to keep the 3-d texture function consistent in such a case.

Actually, the real world has a similar defect, so to speak.  If you carve
a statue out of wood and then bend its limbs around, the grain of the
wood will be distorted.  If you want to simulate the real world in this
way and get animated objects whose textures stay consistent as they
change shape, you have to use ordinary surface-mapped (2-d) textures.
But 3-d textures are so much nicer for wood, stone, and such!

There are a couple of ways to get the best of both worlds.  [I assume
that an object's surface is defined as a constant set of patches, whether
polygonal or smooth; that although the control points may be moved
around, the topology of the patches that make up the object never
changes; and that patches are neither added to nor deleted from the
object during the animation.]

  1) Define the base shape of your object and _sample its surface_ in
     the 3-d texture.  You can then use these sample tables as ordinary
     2-d texture maps for the animation.

  2) Define the base shape of your object and, for each metamorphosed
     shape, keep pointers to the original shape.  Then, whenever a ray
     strikes a point on the surface of the metamorphosed shape, find the
     corresponding point on the original shape and look up its properties
     (i.e. color, etc.) in the 3-d texture map.  [Note: I use ray-tracing
     terminology, but the same trick should be applicable to other
     rendering techniques.]

The first technique is perhaps simpler, and does not require you to
modify your favorite renderer that supports 2-d surface texture maps:
you just write a preprocessor which generates 2-d maps from the 3-d
texture and the base shape of the object.  However, it is susceptible
to really nasty aliasing and loss of information.
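Here is a minimal sketch of technique (1) in C.  The turbulence, the
solid texture, and the parametric base surface are all made up for
illustration (they are not from DBW or QRT); the only point is that the
3-d texture gets evaluated on the base shape once, then cached as an
ordinary 2-d map:

#include <math.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

/* Crude stand-in for a 3-d turbulence function -- a real one (summed
   noise octaves) could be dropped in instead. */
static double turb(double x, double y, double z)
{
    return 0.5 + 0.25 * sin(7.0 * x + 3.0 * sin(5.0 * y))
               + 0.25 * sin(9.0 * z + 2.0 * sin(4.0 * x));
}

/* 3-d solid texture: marble-like bands along x, perturbed by turb().
   Returns a shade in [0,1]. */
static double solid_texture(double x, double y, double z)
{
    return 0.5 * (1.0 + sin(10.0 * x + 4.0 * turb(x, y, z)));
}

/* Stand-in parametric base surface (a unit sphere); in a real renderer
   this would evaluate the object's own base-shape patches at (u,v). */
static void base_surface(double u, double v,
                         double *x, double *y, double *z)
{
    double th = 2.0 * M_PI * u, ph = M_PI * v;
    *x = cos(th) * sin(ph);
    *y = sin(th) * sin(ph);
    *z = cos(ph);
}

#define RES 256
double map2d[RES][RES];           /* the baked 2-d texture map */

/* Technique (1): sample the BASE shape's surface in the 3-d texture and
   store the result as a 2-d map indexed by (u,v); the animated shapes
   then use map2d like any ordinary surface texture. */
void bake_map(void)
{
    int i, j;
    double x, y, z;

    for (i = 0; i < RES; i++)
        for (j = 0; j < RES; j++) {
            base_surface((i + 0.5) / RES, (j + 0.5) / RES, &x, &y, &z);
            map2d[i][j] = solid_texture(x, y, z);
        }
}

The aliasing mentioned above shows up when RES is too coarse for the
texture's frequency content; supersampling each (u,v) cell while baking
helps, but some loss of information is inherent in the precomputed map.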
The second technique has to be built into the renderer, but it is
amenable to all the antialiasing techniques possible in an ordinary
renderer with 3-d textures, such as DBW.  Since the notion of 'the same
point' on a particular patch is still well-defined after the control
points have moved (except in degenerate cases), the mapping shouldn't be
a problem -- though it does add an extra level of antialiasing to worry
about.  [Why?  Imagine that a patch which is very large in the original
base shape has become very small - sub-pixel size - in the current
animated shape.  Then a single pixel-sized sample in the current shape
corresponds to a large region of the original, which has to be filtered
down - using, for instance, stochastic sampling or analytic techniques.]

If anyone actually implements these ideas, I'd like to hear from you
(and get credit, heh heh, if I thought of it first).  I doubt that I
will have the opportunity to try it.

If you post a reply to this article, please include this paragraph.  If
you see this paragraph in a follow-up but didn't see the original
article, please send me mail.  My postings often seem to get very
limited and unpredictable distribution, and I'm hoping to track down the
problem.  (ranjit@eniac.seas.upenn.edu / Ranjit Bhatnagar, 4211 Pine
St., Phila PA 19104)
--
ranjit "Trespassers w"  ranjit@eniac.seas.upenn.edu  mailrus!eecae!netnews!eniac!...
"Such a brute that even his shadow breaks things."  (Lorca)
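A sketch of technique (2), again assuming the patch topology never
changes, so that a patch index plus parametric (u,v) identifies 'the
same point' on the base shape.  The Patch layout, the bilinear
evaluation, and the names are illustrative only; any patch
representation that can be evaluated at (u,v) works the same way:

typedef struct { double x, y, z; } Point;
typedef struct { Point c[2][2]; } Patch;    /* bilinear patch: 4 corners */

typedef struct {
    int    npatch;
    Patch *base;      /* patches of the undeformed base shape       */
    Patch *current;   /* the same patches, deformed for this frame  */
} Object;

/* Any 3-d solid texture, e.g. the one in the earlier sketch. */
double solid_texture(double x, double y, double z);

static double blerp(double a, double b, double c, double d,
                    double u, double v)
{
    return (1.0-u)*(1.0-v)*a + u*(1.0-v)*b + (1.0-u)*v*c + u*v*d;
}

static Point eval_patch(const Patch *p, double u, double v)
{
    Point q;
    q.x = blerp(p->c[0][0].x, p->c[1][0].x, p->c[0][1].x, p->c[1][1].x, u, v);
    q.y = blerp(p->c[0][0].y, p->c[1][0].y, p->c[0][1].y, p->c[1][1].y, u, v);
    q.z = blerp(p->c[0][0].z, p->c[1][0].z, p->c[0][1].z, p->c[1][1].z, u, v);
    return q;
}

/* Called when a ray hits patch `pi` of `obj` at (u,v) on the CURRENT
   (deformed) geometry: evaluate the same (u,v) on the BASE patch and
   look that point up in the 3-d texture. */
double texture_via_base_shape(const Object *obj, int pi, double u, double v)
{
    Point q = eval_patch(&obj->base[pi], u, v);
    return solid_texture(q.x, q.y, q.z);
}

The extra antialiasing worry lives right here: when the current patch
has shrunk to sub-pixel size, texture_via_base_shape must be called for
many (u,v) samples per pixel (or replaced by an area average over the
base patch) rather than once.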
lalonde@ug.cs.dal.ca (Paul Lalonde) (09/12/89)
In article <14266@netnews.upenn.edu> ranjit@grad1.cis.upenn.edu.UUCP (Ranjit Bhatnagar) writes:
> [Talk about 3-d texture maps deleted for brevity]
>I haven't seen discussed before: as generally proposed, it's highly
>unsuited to _animation._  The problem is that you generally define one
>texture function throughout all of space.  If an object happens to move,
>its texture changes accordingly.  It's a neat effect - try it - but it's
>not what one usually wants to see.
>
>The obvious solution to this is to define a separate 3-d texture for
>each object, and, further, _cause the texture to be rotated, translated,
>and scaled with the object._  DBW does not allow this, so if you want
>to do animations of any real complexity with DBW, you can't use the nice
>wood or marble textures.

I get around this by keeping not only the general texture stored with
the object, but also an (x,y,z) triple pointing to where the texture is
to be evaluated.  I also keep some orientation information with the
object.  The texturing routine then only has to translate the scene
coordinate of the point being textured into texture coordinates.  It
comes down to keeping the textures in object coordinates.

This allows you to carve more than one object out of the same chunk of
marble, which can be quite pleasing.  It also requires very little extra
manipulation of the texture.  For shape changes, you just keep track of
your deformation function and apply it to the point whose texture you
are evaluating.

        -Paul

(P.S. My raytracer implementing this (and other goodies) should be
available as soon as I finish up my spline surfaces... Real Soon Now)

Paul A. Lalonde       UUCP:   ...{uunet|watmath}!dalcs!dalcsug!lalonde
Phone: (902)423-4748  BITNET: 05LALOND@AC.DAL.CA
"The only true law is that which leads to freedom"
        - Richard Bach, _Jonathan Livingston Seagull_
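Roughly the bookkeeping Paul describes, as I read it, amounts to storing
a world-to-texture transform per object and pushing each hit point
through it before the lookup.  A sketch in C (the matrix convention and
the names are mine, not Paul's code):

typedef struct { double m[4][4]; } Mat4;

typedef struct {
    Mat4    world_to_texture;  /* inverse of the object's placement        */
    double (*texture)(double x, double y, double z);  /* any solid texture */
} TexturedObject;

static void xform_point(const Mat4 *m, const double in[3], double out[3])
{
    int i;
    for (i = 0; i < 3; i++)                   /* affine transform, w = 1 */
        out[i] = m->m[i][0] * in[0] + m->m[i][1] * in[1]
               + m->m[i][2] * in[2] + m->m[i][3];
}

/* Texture value at a world-space hit point: because the lookup happens
   in texture (object) coordinates, the pattern travels with the object. */
double texture_at(const TexturedObject *obj, const double world_pt[3])
{
    double p[3];
    xform_point(&obj->world_to_texture, world_pt, p);
    return obj->texture(p[0], p[1], p[2]);
}

Two objects that share the same world_to_texture matrix are carved from
the same chunk of marble, as Paul notes; for shape changes, the inverse
of the deformation function would be applied to p before the final
lookup.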
pearce%alias@csri.utoronto.ca (Andrew Pearce) (09/15/89)
In article <14266@netnews.upenn.edu> ranjit@grad1.cis.upenn.edu.UUCP (Ranjit Bhatnagar) writes:

<stuff about solid texturing problems with animation and having to
 store a coordinate frame with each object/texture pair deleted>

|This almost solves the problem.  However, it doesn't handle the case of
|an object whose shape changes.  Consider a sphere that metamorphoses into
|a cube, or a human figure which walks, bends, and so on.  There's no way
|to keep the 3-d texture function consistent in such a case.
   . . .
| 2) Define the base shape of your object and, for each metamorphosed
|    shape, keep pointers to the original shape.  Then, whenever a ray
|    strikes a point on the surface of the metamorphosed shape, find the
|    corresponding point on the original shape and look up its properties
|    (i.e. color, etc.) in the 3-d texture map.  [Note: I use ray-tracing
|    terminology, but the same trick should be applicable to other
|    rendering techniques.]
   . . .
|If anyone actually implements these ideas, I'd like to hear from
|you (and get credit, heh heh, if I thought of it first).  I doubt
|that I will have the opportunity to try it.

I believe this is very similar to what Witkin and Terzopoulos did at
Schlumberger around 1987, though I can't for the life of me find the
paper.  I did see a film at CHI+GI '87 (and at a SIGGRAPH, too) in which
they drove a cone into a solid-textured cube.  The cone caused the cube
and its texture to deform non-linearly.  They also moved the coordinate
frame of the texture without moving the object, causing the texture to
appear to flow around the cone.  Can someone mail me the reference to
the paper, if there was one?

- Andrew Pearce
- Alias Research Inc., Toronto, Ontario, Canada.
- pearce%alias@csri.utoronto.ca  |  pearce@alias.UUCP
- ...{allegra,ihnp4,watmath!utai}!utcsri!alias!pearce
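For what it's worth, the "texture appears to flow" effect falls out of
the same per-object frame idea: animate the texture's coordinate frame
while the geometry stays put.  A tiny sketch (entirely a guess at the
mechanics, with made-up names -- not a description of the
Witkin/Terzopoulos system):

typedef struct { double offset[3]; } TexFrame;

/* Any 3-d solid texture, e.g. the one in the first sketch. */
double solid_texture(double x, double y, double z);

/* Look the point up in a texture frame that has been translated by
   `offset`, independently of the object's own transform. */
double flowing_texture(const TexFrame *f, const double p[3])
{
    return solid_texture(p[0] - f->offset[0],
                         p[1] - f->offset[1],
                         p[2] - f->offset[2]);
}

/* Per frame of animation: move the texture frame, leave the object
   alone, and the pattern appears to stream through the surface. */
void advance_texture_frame(TexFrame *f, double dt, const double vel[3])
{
    int i;
    for (i = 0; i < 3; i++)
        f->offset[i] += vel[i] * dt;
}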
ritter@versatc.UUCP (Jack Ritter) (09/16/89)
In article <457@alias.UUCP>, pearce%alias@csri.utoronto.ca (Andrew Pearce) writes:
> In article <14266@netnews.upenn.edu> ranjit@grad1.cis.upenn.edu.UUCP (Ranjit Bhatnagar) writes:
>
> <stuff about solid texturing problems with animation and having to
>  store a coordinate frame with each object/texture pair deleted>
>

It seems to me that you could solve this problem by transforming the
center/orientation of the texture function along with the object that
is being instantiated.  No need to store values, no tables, etc.  The
texture function must, of course, be simple enough to be so
transformable.

Example: wood grain simulated by concentric cylindrical shells around an
axis (the core of the log).  Imagine the log's center line as a
half-line vector (plus a position, if necessary), making it
transformable.  Imagine each object type, in its object space, BOLTED to
the log by an invisible bracket.  As you translate and rotate the
object, you also sling the log around.

But be careful: some of these logs are heavy, and might break your
teapots.  I use only natural logs myself.
--
        -> B O Y C O T T   E X X O N <-
Jack Ritter, S/W Eng.  Versatec, 2710 Walsh Av, Santa Clara, CA 95051
Mail Stop 1-7.  (408)982-4332, or (408)988-2800 X 5743
UUCP: {ames,apple,sun,pyramid}!versatc!ritter
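Concretely, the bolting might look something like this (a sketch, not
Jack's code; the axis representation and the affine 3x4 transform
convention are assumptions):

#include <math.h>

typedef struct { double origin[3], dir[3]; } Axis;  /* dir assumed unit */

/* Distance from point p to the infinite line through the axis. */
static double dist_to_axis(const Axis *a, const double p[3])
{
    double v[3], t, d2 = 0.0;
    int i;

    for (i = 0; i < 3; i++) v[i] = p[i] - a->origin[i];
    t = v[0]*a->dir[0] + v[1]*a->dir[1] + v[2]*a->dir[2];
    for (i = 0; i < 3; i++) {
        double r = v[i] - t * a->dir[i];   /* component normal to axis */
        d2 += r * r;
    }
    return sqrt(d2);
}

/* Concentric cylindrical shells of grain around the axis; shade in [0,1]. */
double wood_grain(const Axis *a, const double p[3])
{
    double rings = 12.0;                   /* shells per unit of radius */
    return 0.5 * (1.0 + sin(6.28318530718 * rings * dist_to_axis(a, p)));
}

/* When the object moves, sling the log around with it: transform the
   origin as a point and dir as a vector, using the object's own
   transform.  Renormalize dir if the transform includes scaling. */
void transform_axis(const double m[3][4], Axis *a)
{
    double o[3], d[3];
    int i;

    for (i = 0; i < 3; i++) {
        o[i] = m[i][0]*a->origin[0] + m[i][1]*a->origin[1]
             + m[i][2]*a->origin[2] + m[i][3];
        d[i] = m[i][0]*a->dir[0] + m[i][1]*a->dir[1] + m[i][2]*a->dir[2];
    }
    for (i = 0; i < 3; i++) { a->origin[i] = o[i]; a->dir[i] = d[i]; }
}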