[comp.graphics] Hypertextures

jk87377@korppi.tut.fi (Kouhia Juhana Krister) (01/03/91)

Hi,

I'm implementing the hypertextures described in "Hypertexture" by Ken
Perlin and Eric Hoffert in SIGGRAPH '89.
Here is some of what I have so far.
I'll describe it only minimally, because my English isn't that good,
but I'm sure you'll understand if you read the above paper and start
implementing the algorithm.
I'm not sure that everything here is correct or understandable - I'll
post it anyway.


The density at point x is D(x):
D(x) = f(dis(x))
where dis(x) is a "distance" from the object,
and f() is a function which defines whether the object is soft, etc.
For example, f() for the solid unit sphere is:
  if (dis(x) < 1.0)  f = 1.0
               else  f = 0.0     /* or f = 1.0 - dis(x) */
A negative f outside the sphere can be used for special effects that
are not possible when the density outside is zero.

Sphere:
-------

dis(x) = sqrt(x*x+y*y+z*z)

Cube:
-----

dis(x) = max(abs(x),abs(y),abs(z))

Cylinder:
---------

dis(x) = max(abs(z),sqrt(x*x+y*y)) 

Torus:
------

r = sqrt(x*x+y*y)
p = a-r
dis(x) = sqrt(z*z+p*p)

The torus lies in the xy-plane; the center is (0,0,0) and the big radius is a.

Cone:
-----

u = 2.0*sqrt(x*x+y*y)+z
v = abs(z)
if (u < v) dis(x) = v
      else dis(x) = u

Polyhedrons:
------------

The nearest distance to the polyhedron is calculated.
As a result, the polyhedron's corners and edges become rounded as the
point x moves away from the polyhedron.
If x is inside the polyhedron, the corners and edges stay sharp unless
the polyhedron is concave.

Another method is to move each plane along its normal and find the
state where x lies on the surface of the expanded polyhedron.
This model has no rounded corners.
(This is under study now.)

Spline-patches:
---------------

Distances from the point x are calculated to the points of the patch
whose normal points toward x.
The nearest such distance is dis(x).



As stated in the Perlin/Hoffert paper, it is very slow to compute this
using ray tracing. I'm now thinking about functions which are
calculated recursively, where each recursive step increases the detail
of the texture.
This means that we can calculate a texture quickly with few recursions
at first, and when we are close enough we increase the recursions to
get more detail. (As in Mandelbrot set calculation; I have already
calculated that in 4D --- hmmmm... how about a furry Mandelbrot? :-)
I'm not sure the above works, but I'll try to reduce the calculations
somehow with this rough mode.


I'm also looking at the article "Global and Local Deformations of
Solid Primitives" by A.H. Barr for deformations of the hyperobjects.


Any ideas for implementing hypertextures?
Experiences in implementing hypertextures, anyone?
I'm interested to know what SIGGRAPH '90 has on this subject.

Oh yeah, this is not my exercise.


Juhana Kouhia
jk87377@tut.fi

ebert@sphere.cis.ohio-state.edu (David S. Ebert) (01/11/91)

In article <1991Jan2.195806.25330@funet.fi> jk87377@korppi.tut.fi (Kouhia Juhana Krister) writes:
>
>
>Hi,
>
>I'm implementing the hypertextures described in "Hypertexture" by Ken
>Perlin and Eric Hoffert in SIGGRAPH '89.
>
>
>Any ideas for implementing hypertextures?
>Experiences in implementing hypertextures, anyone?
>I'm interested to know what SIGGRAPH '90 has on this subject.
>
>Oh yeah, this is not my exercise.
>
>
>Juhana Kouhia
>jk87377@tut.fi


Re: SIGGRAPH'90 paper on Hypertextures:
-----------------------------------

Well, there was a paper in SIGGRAPH '90 on a topic related to hypertextures.
The paper is "Rendering and Animation of Gaseous Phenomena by Combining Fast
Volume and Scanline A-buffer Techniques" by David S. Ebert and Richard E. 
Parent. This paper describes a system for volume rendering volume density
functions and combining these volume rendered objects with scanline rendered
surface-defined objects. The approach in the paper has the following advantages:

	1) Normal polygonal or patch objects are rendered much faster
	   than with raytracing (scanline a-buffer used). The quality
	   of a scanline rendered image vs. raytracing can always be argued.

	2) Volume tracing stops once full coverage of the pixel is
	   achieved. This has the advantage that if a wall is in front
	   of the volume defined object, no volume tracing will occur
	   for the  pixel.

	3) The paper also has an efficient method for volume shadowing
	   using three-dimensional shadow tables.

This paper talks about volume density functions for producing images
and animations of clouds, steam, fog, etc. Volume density functions
are basically the same as hypertextures, but I feel that volume
density function is a more descriptive term. Even though the functions
in the paper are rendered using a low-albedo gaseous illumination
model, the functions can easily be rendered using an opacity and
illumination formula for solid objects. (In fact, the preliminary
version of the paper had a "blobby" sphere in one of the images).
The paper gives a lot of detail on implementation.

Re: The speed of rendering hypertextures and volume density functions:
---------------------------------------------------------------------

I found that with my initial tests, 70-90% of the calculation time was
being spent in turbulence function evaluation. We spent a few weeks
optimizing the noise and turbulence functions and achieved significant
speed-ups. Some things we did included making table dimensions for the
noise lattice be powers of 2 so that we could use bit-shifting
operations for indexing into the 3D array. We also pre-computed
offsets for the indexing into the 3D array. Initially, the assembly
code for indexing into our 3D noise array was over 21 instructions. We
were able to reduce this to less than 10. Some other useful things we
did included using tables for cos, sin, and pow. Especially for cos
and sin, a table of say 20,000 entries is probably sufficient
resolution. 

I also do not feel that the speed of rendering these functions should
be a limitation to producing animations. Animations featuring these
functions are very well suited for distributed processing. For the
animations shown with my talk, we used a network of 50 machines. Since
each frame takes over 30 minutes, the distribution software's overhead
is negligible. 


David
-----
-=-
--  David S. Ebert,       Department of Computer and Information Science  -----
---  The Ohio State University; 2036 Neil Ave. Columbus OH USA 43210-1277  ----
-- ebert@cis.ohio-state.edu or ..!{att,pyramid,killer}!cis.ohio-state.edu!ebert
-------------------------------------------------------------------------------