[comp.graphics] Ray tracing acceleration.

chet@CIS.OHIO-STATE.EDU (eric j chet) (02/28/90)

Hello out there:

     I have developed a ray tracing program, and am at the stage
of adding acceleration code to it.  I have bounding volumes and
shadow caching.  From my research, ray classification seems best; is
this so?  I want to implement the algorithms discussed in "Fast Ray
Tracing by Ray Classification" by James Arvo and David Kirk,
SIGGRAPH Proceedings 1987.  Has anybody implemented these algorithms?
What time improvements have you seen?
     Any pseudocode or source code information on expanding these
algorithms would be greatly appreciated.

     Please send me ANY information that might help.
 
     Thank You -- Eric J. Chet -- chet@cis.ohio-state.edu

grahaf@otago.ac.nz (06/14/91)

Sorry to post this but the mail system here says "undeliverable mail".
This is for Matthew (new to net) and anyone else who is interested.

>|> > |> 2). Octree flattening. This takes the adaptive division from 1) and converts it
>|> > |> into a regular space division. This allows the use of a fast voxel skipping
>|> > |> algorithm. ( John Cleary and Geoff Wyvill )
>|> > |> 
>|> > This sounds like an interesting idea.  How do you turn an octree into
>|> > uniform space division without losing the advantages the octree gave
>|> > in the first place?
>|> 
>|> If you have a voxel which is not divided to the bottom level yet, generate the
>|> number of bottom level voxels it contains and put the same sub-model in each.
>|> No calculation required, just a flood fill (sort of). This is a gross
>|> simplification but gives the flavour.
>|> 
>|> > 
>How is this different from regular uniform division?  It seems like USS is
>the same as an octree method that has been divided to the lowest level.

The result is almost exactly the same BUT this way, once you have a single
primitive in a higher-level voxel you can say that all the bottom-level voxels
also contain that primitive and do no more work. This may not sound like
much, but if you stop at two levels above the lowest level you save 64 voxel
comparisons for that primitive.
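To give the flavour in code: a minimal sketch (my own names, not the Otago system's) of flattening one coarse octree node into a regular grid. Every bottom-level voxel the node covers just receives the node's primitive list by replication, the "flood fill (sort of)" described above, with no further voxel/primitive comparisons:

```python
# Hypothetical sketch of octree flattening: a node that stopped dividing
# above the bottom level stamps its primitive list into every bottom-level
# voxel it covers.  No intersection tests, just replication.

def flatten_node(grid, origin, node_level, primitives):
    """Stamp `primitives` into all bottom-level voxels under one node.

    grid        -- dict mapping (x, y, z) bottom-level coords to primitive lists
    origin      -- (x, y, z) of the node's corner, in bottom-level voxel units
    node_level  -- levels above the bottom (0 = already a bottom-level voxel)
    """
    side = 2 ** node_level              # node spans side^3 bottom-level voxels
    ox, oy, oz = origin
    for x in range(ox, ox + side):
        for y in range(oy, oy + side):
            for z in range(oz, oz + side):
                # Share the same list object: no per-voxel work beyond
                # the assignment itself.
                grid[(x, y, z)] = primitives

grid = {}
# A node two levels above the bottom covers 4 x 4 x 4 = 64 voxels.
flatten_node(grid, (0, 0, 0), 2, ["sphere_1"])
```

The shared list is the point: stopping two levels up turns 64 voxel comparisons for the primitive into 64 plain assignments of one precomputed list.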

>I'm not sure what is meant by sub-model.  

In our system we define objects by combining primitives: planes, cones,
cylinders, spheres and so on. To keep track of the objects we use a Directed
Acyclic Graph (DAG). This DAG is basically a tree of primitives combined with
PLUS and MINUS operators. The model is the whole DAG; a sub-model is just a
sub-tree.
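As a sketch (hypothetical names, not the actual Otago code), such a DAG can be represented as nodes that hold either a primitive or a PLUS/MINUS operator over two children; a sub-model is then just any node treated as a root:

```python
# Hypothetical CSG DAG: leaves are primitives, interior nodes combine
# two children with PLUS (union) or MINUS (difference).

class Node:
    def __init__(self, op=None, left=None, right=None, primitive=None):
        self.op = op              # "PLUS", "MINUS", or None for a leaf
        self.left = left
        self.right = right
        self.primitive = primitive

def primitives_of(node):
    """Collect the primitives in a sub-model (any sub-tree of the DAG)."""
    if node.primitive is not None:
        return [node.primitive]
    return primitives_of(node.left) + primitives_of(node.right)

# A block with a hole: (box PLUS plate) MINUS cylinder.
box = Node(primitive="box")
plate = Node(primitive="plate")
hole = Node(primitive="cylinder")
model = Node(op="MINUS",
             left=Node(op="PLUS", left=box, right=plate),
             right=hole)
```

Here `model.left` is a sub-model in the sense above, and it is that whole sub-tree that gets associated with a flattened voxel.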

>Are the same objects that intersect the higher-level voxel associated with
>all the lower-level voxels?

Exactly.

>
>|> > |> 3). Edge following. This uses the fact that in any picture each
>|> > |> colour will cover more than a few pixels. This means that rays are
>|> > |> cast only on the edges of areas of colour; the inside areas can be
>|> > |> flood filled without casting extra rays. This has resulted in casting
>|> > |> less than 10% of the rays for a full ray trace in some cases. Note
>|> > |> that the resulting picture is identical to the fully traced version.
>|> > |> ( Geoff Wyvill, Alistair Ward and Tim Brown )
>|> > |> 
>|> > How do you determine where the color edges are without casting rays?
>|> 
>|> Dare I say it? "By casting rays". To begin with we cast a sparse grid of
>|> rays over the image space. If two adjacent rays have different colour
>|> values we search for the point or points of change, then use these as
>|> seeds for the edge follower. This grid represents a small overhead, and
>|> there is the chance that if the grid is too coarse it will miss small
>|> objects, but this proves to be relatively rare. Typically we use a
>|> 50 x 50 grid for a 512 x 512 picture.
>|> 
>|> >
>This sounds very much like adaptive anti-aliasing.

Yes, we do indeed antialias along these edges, but only when the colour
difference across the edge is greater than one colour step. If it is exactly
one step there is no advantage, so we don't.
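The sparse-grid seeding quoted above can be sketched roughly like this (my own names; `trace` stands in for a full ray cast, and the real system searches vertically as well as horizontally):

```python
# Hypothetical sketch of sparse-grid seeding: sample the image on a coarse
# grid, and wherever two horizontally adjacent samples differ in colour,
# binary-search between them for the point of change, which becomes a
# seed for the edge follower.

def find_seeds(trace, width, step):
    seeds = []
    for y in range(0, width, step):
        for x in range(0, width - step, step):
            a, b = trace(x, y), trace(x + step, y)
            if a != b:
                lo, hi = x, x + step
                while hi - lo > 1:           # bisect to pixel resolution
                    mid = (lo + hi) // 2
                    if trace(mid, y) == a:
                        lo = mid
                    else:
                        hi = mid
                seeds.append((hi, y))        # first pixel of the new colour
    return seeds

# A toy scene that is red left of x = 200 and blue to the right;
# step = 10 roughly matches a 50 x 50 grid on a 512 x 512 picture.
scene = lambda x, y: "red" if x < 200 else "blue"
```

Each differing pair costs only log2(step) extra rays to localise, which is the "small overhead" the grid represents.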

>How do you know when to do the flood fill?

The picture is rendered into a frame buffer. After the edges have all been
found, any un-coloured pixels are coloured from adjacent coloured pixels.
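A minimal sketch of that fill pass (assumed names; the real frame-buffer code is not shown). Once the traced edges are in the buffer, a simple sweep copies each uncoloured pixel from a neighbour that has already been coloured:

```python
# Hypothetical post-pass fill: edge pixels have been written into the
# frame buffer (None = uncoloured).  A left-to-right, top-to-bottom sweep
# copies each uncoloured pixel from the pixel to its left, or from the
# pixel above when at the left edge of a row.

def fill_frame(buf):
    """buf: list of rows of colours, with None for uncoloured pixels."""
    for y, row in enumerate(buf):
        for x, pix in enumerate(row):
            if pix is None:
                if x > 0:
                    row[x] = row[x - 1]      # region interior: copy leftward
                elif y > 0:
                    row[x] = buf[y - 1][x]   # left border: copy from above
    return buf
```

This relies on every run of uncoloured pixels being bounded by a traced edge pixel of the region's colour, which is exactly what the edge follower guarantees inside a region.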

>How is a boundary generated and saved?

The initial seed pixels are placed on a stack and an edge is traced from each
one. If an edge branches, one edge is followed and the end of the other is
placed on the stack for future processing.

>Is this method effective with textured surfaces?  Could this method be used 
>to generate shadows?

The answer to both questions is YES. Our system does textures (solid and
surface), transparency, refraction, and of course shadows. This is easy with a
ray tracer.

Unfortunately they are not available on line for various reasons. I might have
a word with Geoff Wyvill about this, but due to politics it is unlikely to
change in the near future (sigh).

Hope this illuminates things a little for you.

Graham Furniss, PhD Student, Graphics Research Lab, Computer Science Dept.,

		University of Otago, New Zealand.