[comp.graphics] Ray tracing: ray-object intersection

ritter@versatc.UUCP (Jack Ritter) (10/01/88)

A simple method for fast ray tracing has occurred to me,
and I haven't seen it in the literature, particularly
Procedural Elements for Computer Graphics.
It is a way to trivially reject rays that don't
intersect with objects. It works for primary
rays only (from the eye).  It is:

Do once for each object: 
   compute its minimum 3D bounding box. Project
   the box's 8 corners onto pixel space. Surround the
   cluster of 8 pixel points with a minimum 2D bounding box.
   (a tighter bounding volume could be used).

To test a ray against an object, check if the pixel
through which the ray goes is in the object's 2D box.
If not, reject it.

It sure beats line-sphere minimum distance calculation.
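The scheme above can be sketched in a few lines.  This is a minimal
illustration only, assuming a simple pinhole camera at the origin looking
down +z; all of the function names and camera parameters are made up for
the example, not anyone's actual implementation.

```python
def project_to_pixel(p, focal=256.0, cx=256.0, cy=256.0):
    """Perspective-project a camera-space point onto pixel coordinates.
    Assumes the point is in front of the eye (z > 0)."""
    x, y, z = p
    return (cx + focal * x / z, cy + focal * y / z)

def screen_bbox(corners):
    """Minimum 2D bounding box (in pixels) of the projected 3D corners."""
    pts = [project_to_pixel(c) for c in corners]
    xs = [px for px, _ in pts]
    ys = [py for _, py in pts]
    return (min(xs), min(ys), max(xs), max(ys))

def may_hit(pixel, bbox):
    """Trivial-reject test: the primary ray through `pixel` can only
    hit the object if the pixel lies inside the object's 2D box."""
    px, py = pixel
    x0, y0, x1, y1 = bbox
    return x0 <= px <= x1 and y0 <= py <= y1

# The 8 corners of a unit cube's 3D bounding box, centered at (0, 0, 10)
corners = [(x, y, z) for x in (-0.5, 0.5)
                     for y in (-0.5, 0.5)
                     for z in (9.5, 10.5)]
box2d = screen_bbox(corners)
```

A pixel near the screen center passes the test here; a corner pixel is
rejected without any 3D intersection work.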


Surely this has been tried, hasn't it?

-- 
       The ultimate realization: everything is everything. (I should know.)
   Jack Ritter, S/W Eng. Versatec, 2805 Bowers Av, Santa Clara, CA 95051
   Mail Stop 1-7.  (408)982-4332, or (408)988-2800 X 5743
   UUCP: {pyramid,mips,vsi1,arisia}!versatc!ritter

flaig@Apple.COM (Charles Flaig) (10/03/88)

In article <522@versatc.UUCP> ritter@versatc.UUCP (Jack Ritter) writes:
>
>A simple method for fast ray tracing has occurred to me,
>and I haven't seen it in the literature, particularly
>Procedural Elements for Computer Graphics.
>It is a way to trivially reject rays that don't
>intersect with objects. It works for primary
>rays only (from the eye).  It is:
 ^^^^^^^^^^^^^^^^^^^^^^^^

Unfortunately, that's the problem.  You also need to intersect rays from
each of the light sources, which adds a lot of storage with your approach.
And in ray tracing a "typical" scene, you are going to have some transmission
and a lot of reflection for secondary rays, which leaves the "primary rays
from the eye" as only a minor part of the overall calculation.  Trying to
do some decent diffuse lighting leads to even more secondary rays....

The nice thing about a sphere is that it works uniformly for all rays, and
is fairly simple.
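For comparison, the uniform bounding-sphere rejection being referred to is
roughly the following.  This is a generic sketch of the standard
ray/sphere distance test, not code from any particular tracer; the same
test works unchanged for primary, shadow, and reflection rays.

```python
def hits_sphere(origin, direction, center, radius):
    """True if the ray origin + t*direction (t >= 0) passes within
    `radius` of `center`.  `direction` is assumed normalized."""
    oc = tuple(c - o for o, c in zip(origin, center))
    # Project the center onto the ray; clamp so spheres behind the
    # origin are measured from the origin itself.
    t = max(0.0, sum(d * v for d, v in zip(direction, oc)))
    closest = tuple(o + t * d for o, d in zip(origin, direction))
    dist2 = sum((c - p) ** 2 for c, p in zip(center, closest))
    return dist2 <= radius * radius
```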

(If I got anything wrong, flame the net, not me. :-)

--Charles
  flaig@apple.com

ksbooth@watcgl.waterloo.edu (Kelly Booth) (10/03/88)

In article <522@versatc.UUCP> ritter@versatc.UUCP (Jack Ritter) writes:
>
 [lots of stuff deleted]
>
>Surely this has been tried, hasn't it?

Yes.  Lots of times.

saponara@batcomputer.tn.cornell.edu (John Saponara) (10/03/88)

In article <522@versatc.UUCP> ritter@versatc.UUCP (Jack Ritter) writes:
>
>A simple method for fast ray tracing has occurred to me,
>and I haven't seen it in the literature, particularly
>Procedural Elements for Computer Graphics.
>It is a way to trivially reject rays that don't
>intersect with objects. It works for primary
>rays only (from the eye).  It is:
>
>Do once for each object: 
>   compute its minimum 3D bounding box. Project
>   the box's 8 corners onto pixel space. Surround the
>   cluster of 8 pixel points with a minimum 2D bounding box.
>   (a tighter bounding volume could be used).
>
>To test a ray against an object, check if the pixel
>through which the ray goes is in the object's 2D box.
>If not, reject it.
>
>It sure beats line-sphere minimum distance calculation.
>
>Surely this has been tried, hasn't it?

It's true, this really hasn't appeared in the literature, per se.  However, it
has been done.

The idea of the item buffer has been presented by Hank Weghorst, Gary Hooper,
and Donald P. Greenberg in "Improved Computational Methods for Ray Tracing",
ACM TOG, Vol. 3, No. 1, January 1984, pages 52-69.  Here they cast polygons
onto a z-buffer, storing the ID of the closest item for each pixel.  During
ray tracing the z-buffer is then sampled for which items are probably hit
by the eye ray.  These are checked, and if one is hit you're done.  If none
are hit then a standard ray trace is performed.  Incidentally, this is the
method Wavefront uses for eye rays when they perform ray tracing.  It's
fairly useful, as Cornell's research found that there are usually more eye
rays than reflection and refraction rays combined.  There's still all those
shadow rays, which was why I created the light buffer (but that's another
story...see IEEE CG&A September 1986 if you're interested).
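The item buffer boils down to a z-buffer of object IDs.  The sketch below
is my own simplification, assuming a fixed screen resolution and faking
the rasterization step with explicit (pixel, depth, ID) samples instead of
real polygon scan conversion.

```python
W, H = 512, 512
# One entry per pixel: (depth, obj_id) of the closest item, or None.
item_buffer = [[None] * W for _ in range(H)]

def insert_item(px, py, depth, obj_id):
    """Keep only the closest item at each pixel, z-buffer style."""
    cur = item_buffer[py][px]
    if cur is None or depth < cur[0]:
        item_buffer[py][px] = (depth, obj_id)

def candidate(px, py):
    """During eye-ray tracing, look up the item probably hit first.
    If it misses (or the pixel is empty), fall back to a full trace."""
    entry = item_buffer[py][px]
    return None if entry is None else entry[1]
```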

In the paper the authors do not describe how to insert non-polygonal objects
into the buffer.  In Weghorst's (and I assume Hooper's, too) thesis he
describes the process, which is essentially casting the bounding box onto
the screen and getting its x and y extents, then shooting rays within this
extent at the object as a pre-process.  This is the idea you outlined.
However, theirs avoids per-ray extent testing by doing the work as a
per-object (instead of per-ray) preprocess: all they do is loop through each
object's extent and shoot a ray at the object for each pixel.
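That per-object loop might look like the following.  This is a sketch
under my own assumptions (a dict-based buffer, an `intersect` callback
returning a hit depth or None), not the code from Weghorst's thesis.

```python
def fill_buffer_for_object(obj_id, extent, intersect, buffer):
    """Loop over the pixels inside one object's screen extent and shoot
    a ray at that object alone, keeping the nearest hit per pixel.

    extent    -- (x0, y0, x1, y1) pixel bounds, inclusive
    intersect -- intersect(px, py) -> hit depth, or None on a miss
    buffer    -- dict mapping (px, py) -> (depth, obj_id)
    """
    x0, y0, x1, y1 = extent
    for py in range(y0, y1 + 1):
        for px in range(x0, x1 + 1):
            depth = intersect(px, py)
            if depth is not None:
                cur = buffer.get((px, py))
                if cur is None or depth < cur[0]:
                    buffer[(px, py)] = (depth, obj_id)
```

No extent is ever tested against a ray: each object touches only the
pixels inside its own extent, once, as a preprocess.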

All for now,

Eric Haines (not John Saponara)

NOTE: this account is going away soon.  My stable email address is:
    hpfcla!hpfcrs!eye!erich@hplabs.hp.com