[comp.graphics] Ray Tracing Optimization Idea

npw@eleazar.dartmouth.edu (Nicholas Wilt) (03/22/91)

I had a simple, interesting idea to make tracing primary rays more
efficient.  Cache the last object intersected; intersect with it
first and if hit, determine whether it is still the nearest object as
follows:
	1. Fire a ray from the intersection back at the eye
	2. If no object is intersected, the cached object is still nearest
	3. If any objects are intersected, find the farthest intersection
	   that's not beyond the eye.  This is the new nearest intersection
	   to the eye.

Since many objects such as polygons and bounding volumes (with Kay-Kajiya
traversal) can be rejected quickly by rays which are heading away from them,
it seems to me that this scheme could work well for certain scenes.
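
To make the idea concrete, here is a minimal C sketch.  The types and
helpers (intersect_object, farthest_hit_before, nearest_hit) are
hypothetical, standing in for whatever your tracer provides:

    typedef struct { double x, y, z; } Vec;
    typedef struct Object Object;
    typedef struct { Object *obj; Vec point; double t; } Hit;

    extern Vec sub(Vec a, Vec b);
    extern int intersect_object(Object *o, Vec org, Vec dir, Hit *h);
    extern int farthest_hit_before(Vec org, Vec dir, double tmax, Hit *h);
    extern Hit nearest_hit(Vec org, Vec dir);

    static Object *cache = NULL;  /* object hit by the previous primary ray */

    Hit trace_primary(Vec eye, Vec dir)
    {
        Hit h, back;

        if (cache != NULL && intersect_object(cache, eye, dir, &h)) {
            /* Step 1: fire a ray from the hit point back at the eye.
             * With the unnormalized direction (eye - point), the eye
             * sits at parameter t = 1; in practice, offset the origin
             * by an epsilon so the cached object isn't re-hit. */
            if (!farthest_hit_before(h.point, sub(eye, h.point), 1.0, &back)) {
                cache = h.obj;    /* Step 2: cached object still nearest */
                return h;
            }
            cache = back.obj;     /* Step 3: the farthest hit short of the
                                   * eye is the nearest one seen from it */
            return back;
        }
        h = nearest_hit(eye, dir);  /* cache miss: standard full search */
        cache = h.obj;
        return h;
    }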

Has anyone read about an approach which takes advantage of coherence in
this way?  Has anyone implemented the approach?  I welcome any comments.

--Nick
  npw@eleazar.dartmouth.edu

ercal@usenet.umr.edu (Fikret Ercal) (03/22/91)

Looks like an interesting idea, but still, the question of 

"How would you perform the second intersection test with the 
objects between the point hit and the eye fast enough such that 
the gain will be justified ?"

remains to be answered... thanks.

Fikret Ercal, Univ.of Missouri-Rolla
ercal@cs.umr.edu

jack@shograf.COM (Jack Ritter) (03/23/91)

In article <1991Mar21.183340.19319@dartvax.dartmouth.edu>, npw@eleazar.dartmouth.edu (Nicholas Wilt) writes:
> efficient.  Cache the last object intersected; intersect with it
> first and if hit, determine whether it is still the nearest object as
> follows:
> 	1. Fire a ray from the intersection back at the eye
> 	2. If no object is intersected, the cached object is still nearest

I think this may be similar to other schemes
of "depth coherence". But this idea has a
twist: tracing from the last visible object to
the eye and stopping when anything is hit
is just like tracing shadow rays back to
light sources, and therefore all the tricks
we've learned in shadow testing could
be used with this "last obj in view" idea.
(Then again, maybe I read that in some
article).
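
For instance, using the same kind of hypothetical helpers as in the
sketch in Nick's post, the back-ray test collapses to a single any-hit
query that stops at the first intersection, just like a shadow ray:

    extern int any_hit_before(Vec org, Vec dir, double tmax);

    /* Returns nonzero if nothing lies between the cached hit point and
     * the eye, i.e. the cached object is still the visible one.  Like
     * a shadow ray, the query can stop at the first hit it finds. */
    int cache_still_nearest(Hit h, Vec eye)
    {
        return !any_hit_before(h.point, sub(eye, h.point), 1.0);
    }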

-------------------------
"The traffic lights, they turn blue tomorrow"
     Jimi Hendrix

Jack Ritter
SHO Graphics
1890 N Shoreline Blvd.
Mountain View, CA 94043
email: jack@shograf.com
Tele:  (415) 903-3867

kyriazis@iear.arts.rpi.edu (George Kyriazis) (03/24/91)

In article <1991Mar21.183340.19319@dartvax.dartmouth.edu> npw@eleazar.dartmouth.edu (Nicholas Wilt) writes:
>I had a simple, interesting idea to make tracing primary rays more
>efficient.  Cache the last object intersected; intersect with it
>first and if hit, determine whether it is still the nearest object as
>follows:
>	1. Fire a ray from the intersection back at the eye
>	2. If no object is intersected, the cached object is still nearest
>	3. If any objects are intersected, find the farthest intersection
>	   that's not beyond the eye.  This is the new nearest intersection
>	   to the eye.
>

1. Since you are firing a ray from the object to the eye anyway, why not
   fire a ray from the eye to the object like you normally would?  
   You'll be searching the object database either way.  It all depends
   on whether the scene complexity is higher in front of the eye or behind it.
   When you fire a ray towards the eye, you may hit complicated objects
   behind you that you did not hit before.  If you
   are using some kind of voxel technique to walk through the scene, I
   think you'll just be complicating the intersection algorithm.

2. Why do you think that works just for primary rays?  It looks to me
   that you can perform your optimization for non-primary rays also.


-- 
----------------------------------------------------------------------------
George Kyriazis
kyriazis@iear.arts.rpi.edu kyriazis@rdrc.rpi.edu kyriazis@orion.phys.rpi.edu
----------------------------------------------------------------------------

npw@eleazar.dartmouth.edu (Nicholas Wilt) (03/24/91)

In article <9B}=P0$@rpi.edu> kyriazis@iear.arts.rpi.edu (George Kyriazis) writes:
>1. Since you are firing a ray from the object to the eye anyway, why not
>   fire a ray from the eye to the object like you are doing normally?  
>   You'll be looking through the object database anyway.  It all depends
>   if the scene complexity is higher infront of the eye or behind the eye.
>   When you fire a ray towards the eye, you may have complicated objects
>   behind you that you may hit now that you did not hit before.  If you
>   are using some kind of voxel technique to walk through the scene, I
>   think that you'll be just complicating the intersection algorithm.

True.  But how often is the scene complexity behind the eye?  This
seems like an uncommon occurrence to me.  If the scene complexity is
behind the eye, and the objects there aren't likely to be reflected
by mirrors in front of the eye, then they should be culled from the
database before you start tracing anyway.

I am using rectangular bounding volumes and Kay-Kajiya traversal; this
approach is easily adapted to find the farthest intersection not beyond
the eye.

As was pointed out in an earlier post, the scheme could be simplified
to just find out if the cached object is still the closest.  In this
case, the traversal is as simple as tracing a shadow ray.

>2. Why do you think that works just for primary rays?  It looks to me

Because it exploits the tendency for objects in the scene to occupy
adjacent pixels.

I have partially implemented the scheme as follows: intersect with the
cached object and fire a ray back at the viewer; if no object intersected,
the cached object is still the nearest, otherwise, run the standard
find-nearest-intersection code.  As implemented, it generates polygon-
intensive images _much_ faster, but quadric-intensive images much
slower.  I think looking for the "farthest intersection" (not beyond the 
eye) would keep the scheme from slowing you down in images which don't
respond well to the approach.

--Nick
  npw@eleazar.dartmouth.edu

kyriazis@iear.arts.rpi.edu (George Kyriazis) (03/24/91)

In article <1991Mar24.012243.27780@dartvax.dartmouth.edu> npw@eleazar.dartmouth.edu (Nicholas Wilt) writes:
>In article <9B}=P0$@rpi.edu> kyriazis@iear.arts.rpi.edu (George Kyriazis) writes:
>>1. Since you are firing a ray from the object to the eye anyway, why not
>>   fire a ray from the eye to the object like you are doing normally?  
>>   You'll be looking through the object database anyway.  It all depends
>>   if the scene complexity is higher infront of the eye or behind the eye.
>>   When you fire a ray towards the eye, you may have complicated objects
>>   behind you that you may hit now that you did not hit before.  If you
>>   are using some kind of voxel technique to walk through the scene, I
>>   think that you'll be just complicating the intersection algorithm.
>
>True.  But how often is the scene complexity behind the eye?  This
>seems like an uncommon occurrence to me.  If the scene complexity is
>behind the eye, and the objects there aren't likely to be reflected
>by mirrors in front of the eye, then they should be culled from the
>database before you start tracing anyway.
>
It is not uncommon to be doing a flyby of a scene.  In that case, you will
most definitely have objects behind the eye.  You are welcome to cull
those objects from the database, but you may have to re-build the
extent hierarchy, which could be inefficient in some cases.  As I said
before though, in some cases (voxel walking) you won't even have to
cull the objects, since you can stop walking through the voxels if you hit
the voxel that the eye sits on.

>>2. Why do you think that works just for primary rays?  It looks to me
>
>Because it exploits the tendency for objects in the scene to occupy
>adjacent pixels.
>
True.  But you can keep a cache for each object, recording the last
object hit by a ray that emerged from it.  In that case,
the eye can be thought of as just another object.  I think I implemented
that method a while ago with some success.
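
A sketch of what such a per-object cache might look like (the field and
helper names are guesses, not taken from any actual implementation):

    struct Object {
        /* ... geometry, material, etc. ... */
        struct Object *last_hit;  /* last object hit by a ray leaving this one */
    };

    Hit trace_from(Object *origin, Vec org, Vec dir)
    {
        Hit h;
        Object *guess = origin->last_hit;  /* the eye gets a slot too */

        /* Verify the guess: it must be hit, and nothing may lie
         * strictly in front of it (the guess itself sits at t = h.t,
         * so it is excluded from the occlusion test). */
        if (guess != NULL && intersect_object(guess, org, dir, &h) &&
            !any_hit_before(org, dir, h.t)) {
            origin->last_hit = h.obj;
            return h;
        }
        h = nearest_hit(org, dir);  /* fall back to the full search */
        origin->last_hit = h.obj;
        return h;
    }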

>find-nearest-intersection code.  As implemented, it generates polygon-
>intensive images _much_ faster, but quadric-intensive images much
>
What is your ratio of primary to secondary rays?  If you are saying that
polygon images are generated _much_ faster, that means your ratio of
primary to total rays must be pretty close to 1.  If it is, that
means you are not getting many reflections and refractions, not
to mention light rays, so why bother ray-tracing?  An image with few
reflections and no refractions gives me a ratio of about 0.3.
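
(For a rough illustration, assuming one light source: if every primary
ray spawns one light ray plus one reflection ray, and that reflection
ray spawns its own light ray, each primary ray becomes four rays in
total, for a primary/total ratio of 0.25.)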

-- 
----------------------------------------------------------------------------
George Kyriazis
kyriazis@iear.arts.rpi.edu kyriazis@rdrc.rpi.edu kyriazis@orion.phys.rpi.edu
----------------------------------------------------------------------------

jsp@cs.ed.ac.uk (John Spackman) (03/25/91)

npw@eleazar.dartmouth.edu (Nicholas Wilt) writes:
>I had a simple, interesting idea to make tracing primary rays more
>efficient.  Cache the last object intersected; intersect with it
>first and if hit, determine whether it is still the nearest object as
>follows:
>	1. Fire a ray from the intersection back at the eye
>	2. If no object is intersected, the cached object is still nearest
>	3. If any objects are intersected, find the farthest intersection
>	   that's not beyond the eye.  This is the new nearest intersection.

This scheme is very similar to `shadow caching', whereby each light
source is assigned a record of the object (if any) found to cast a full
shadow for the previous shadow ray traced to that light.  This object
is queried first for the next ray, to exploit coherence.

Your method of exploiting view-ray coherence could trace rays FROM the
eye rather than TO the eye.  If the cached object were hit once more,
the distance to this intersection allows many objects to be rejected even
if their bounding volume is struck, but at a GREATER distance, since the
volume's distance is a lower bound on the distance to any intersection
with its contents.
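
As a sketch (the bounding-box helper is hypothetical, and the types are
the same ones assumed earlier in the thread), the rejection test is
just a comparison against the volume's entry distance:

    extern int bbox_entry(Object *o, Vec org, Vec dir, double *t_enter);

    /* The distance at which the ray enters an object's bounding volume
     * is a lower bound on any hit with the contents, so a volume
     * entered beyond the current best hit can be skipped outright. */
    int can_skip(Object *o, Vec org, Vec dir, double t_best)
    {
        double t_enter;

        if (!bbox_entry(o, org, dir, &t_enter))
            return 1;             /* ray misses the volume entirely */
        return t_enter > t_best;  /* volume begins beyond the best hit */
    }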

|: JANET: jsp@uk.ac.ed.lfcs :: ARPA: jsp%lfcs.ed.ac.uk@nsfnet-relay.ac.uk  :|
|: John Spackman, Computer Science, Edinburgh University, Room 2417 JCMB,  :|
|: The King's Buildings, Mayfield Road, Edinburgh EH9 3JZ. Tel 031 650 5125:|
		

markv@pixar.com (Mark VandeWettering) (03/27/91)

Regarding the "object cache" for raytracing:

This is an interesting idea.  I believe I actually did try doing this
in my raytracer, but took it out because of the limited performance
gains. (<2% as I recall)

I implemented it slightly differently than described initially.  It makes
more sense to cast the second ray in the same direction as the first, but
with the distance cutoff, because you are looking for the intersection
CLOSEST to your eye.


In article <2448@umriscc.isc.umr.edu> ercal@cs.umr.edu (Fikret Ercal) writes:
>"How would you perform the second intersection test with the
>objects between the point hit and the eye fast enough such that
>the gain will be justified ?"

Often you can prune large amounts of the ray tree if you know that the
ray is a "short" ray.  In the case of Kajiya/Kay, you avoid some heaping
operations which would normally have to be done.  Basically, any
method which has a "top down" approach can benefit (at least marginally)
by expressing the ray as a "segment".
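
One way to express that (names hypothetical): carry the ray around as a
segment, and let every hit shrink its far end so later pruning tests
get tighter:

    /* A ray as a live segment: t_max starts at the cutoff (or
     * infinity) and shrinks each time a closer hit is found, so any
     * node or object known to begin beyond t_max can be pruned. */
    typedef struct {
        Vec    org, dir;
        double t_min, t_max;
    } Segment;

    void note_hit(Segment *s, double t)
    {
        if (t > s->t_min && t < s->t_max)
            s->t_max = t;  /* tighten the far end of the segment */
    }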

This same approach is more effective in shadow testing:  caching the
last object that shadowed a given pixel is often quite useful.  The
MTV raytracer does include shadow caching optimizations.

Mark

erich@eye.com (Eric Haines) (03/27/91)

In article <1991Mar21.183340.19319@dartvax.dartmouth.edu> npw@eleazar.dartmouth.edu (Nicholas Wilt) writes:
>I had a simple, interesting idea to make tracing primary rays more
>efficient.  Cache the last object intersected; intersect with it
>first and if hit, determine whether it is still the nearest object as
>follows:
>	1. Fire a ray from the intersection back at the eye
>	2. If no object is intersected, the cached object is still nearest
>	3. If any objects are intersected, find the farthest intersection
>	   that's not beyond the eye.  This is the new nearest intersection
>	   to the eye.

There's really no need to shoot a ray back from the intersection:  simply
shoot the ray from the eye, but first attempt to intersect the object you hit
last time.  If you hit it, then use its intersection distance as a maximum
bound on the intersection tests with the rest of the environment.  If the ray
does hit something, but that something's distance is beyond your cutoff
distance, then you know you don't hit it.  This is equivalent to your
approach:  objects _beyond_ the intersected object from the eye are equivalent
to those _behind_ your new ray (shot from the intersection back to the eye).

The idea of using the distance to the nearest intersection is an old one
(which in computer graphics terms means probably 8 years old), and I don't
think anyone claims to have invented it.  I independently came up with
first intersecting the eye ray with the object hit by the last eye ray to
get an initial distance bound, and I'm sure others thought of it
independently, too (great minds think alike, as do mediocre ones like mine).

Doing this caching trick for all rays (eye, reflection, refraction) is a win.
In fact, I found that doing caching along with Kay/Kajiya sorting would then
sometimes _lose_ me a little speed over just doing this cache intersection
then intersecting with a straight, unsorted traversal of the hierarchy, as the
caching often provided much of the Kay/Kajiya speedup.  The benefit of
testing closer objects first via Kay/Kajiya was often nullified by the
time spent sorting.  Kay/Kajiya saves time by trying to intersect
potentially closer objects first; when the nearest intersection distance
found so far is closer than the nearest potential distance of the next
object on the list to be tested (you got all that?), testing ends.
So, if we know a close hit (or any hit, even if not closest) early on, we
partly do what Kay/Kajiya does, as we can eliminate all objects beyond this
distance as in Kay/Kajiya.
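
Here's a rough sketch of that interplay (the heap API is made up, and
the types match the earlier sketches): seed the traversal with the
cached hit, and stop as soon as the nearest remaining bounding volume
begins beyond the best hit found so far:

    typedef struct Node Node;   /* hierarchy node with a bounding volume */
    typedef struct Heap Heap;   /* min-heap keyed on volume entry distance */

    extern int     heap_empty(Heap *h);
    extern Node   *heap_min(Heap *h);       /* peek the nearest node */
    extern void    heap_pop(Heap *h);
    extern void    push_children(Heap *h, Node *n, Vec org, Vec dir);
    extern int     node_is_leaf(Node *n);
    extern double  node_t_enter(Node *n);
    extern Object *node_object(Node *n);

    Hit traverse(Heap *heap, Vec org, Vec dir, Hit best)  /* best = cached hit */
    {
        while (!heap_empty(heap)) {
            Node *n = heap_min(heap);

            /* Entry distance is a lower bound on any hit inside n, so
             * once it passes the best hit, everything left can go. */
            if (node_t_enter(n) >= best.t)
                break;

            heap_pop(heap);
            if (node_is_leaf(n)) {
                Hit h;
                if (intersect_object(node_object(n), org, dir, &h) &&
                    h.t < best.t)
                    best = h;
            } else {
                push_children(heap, n, org, dir);  /* only volumes the ray hits */
            }
        }
        return best;
    }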

Of course, caching works only when there's a fair bit of image coherence - the
"field of grass" image they did is a possible example of Kay/Kajiya paying off
over the single object cache (though I've never done a comparison).

The first time I know of these ideas being in print is the "Tracing Tricks"
article in the "Intro to Ray Tracing" course notes in SIGGRAPH '89 (pretty
obscure, eh?).  The whole article is on-line as "RTfull9" (I think) via
anonymous FTP at weedeater.math.yale.edu.  It's Volume 2, Number 8 of the Ray
Tracing News.  Get a copy to see some of the other optimizations people have
thought up (but rarely turned into articles) over the years.  Some of these
will be in "Graphics Gems II", by the way.

Eric

coy@ssc-vax (Stephen B Coy) (03/27/91)

In article <1991Mar26.183524.1547@pixar.com> markv@pixar.com (Mark VandeWettering) writes:
>Regarding the "object cache" for raytracing:

>This same approach is more effective in shadow testing:  cacheing the
>last object that shadowed a given pixel is often quite useful.  The
>MTV raytracer does include shadow cacheing optimizations.

Just a thought.  Since objects that are closer to a light tend to
cast larger shadows, it would seem beneficial to shoot shadow rays
from the light to the surface rather than the "normal" way.  This
way the shadow cache should tend to have slightly more coherence.
The extra cost should be zero, but I'm afraid that the benefits
might be equally slight.  Comments?

>Mark

Stephen Coy
coy@ssc-vax.UUCP

quinn@ascutney (Jerry Quinn) (03/28/91)

In article <3754@ssc-bee.ssc-vax.UUCP> coy@ssc-vax.UUCP (Stephen B Coy) writes:

>Just a thought.  Since objects that are closer to a light tend to
>cast larger shadows it would seem beneficial to shoot shadow rays
>from the light to the surface rather than the "normal" way.  This
>way the shadow cache should tend to have slightly more coherence.
>The extra cost should be zero but I'm afraid that the benefits might
>also be as slight.  Comments?
>

If there are a lot of small objects between the object being shadowed
and the one close to the light, you could get some benefit:  the cache
would often fall off the small objects and cause retraces, whereas the
object close to the light would still be valid.  But I think the same
situation can exist in the other direction; simply swap the positions
of the object to shadow and the light source.  The correct answer
requires more knowledge of the scene than we're likely to want to
wait for.


                            0  (light)

                          ------  (object blocking light)

                            --
                              --
                                 --  (objects that get cached)
                               --

                         ---------  (object to shadow)




Jerry Quinn
quinn@sunapee.dartmouth.edu