[comp.graphics] Raytracing speedups....

gefuchs@top.cis.syr.edu (Gill E. Fuchs) (12/15/88)

Hi...

   The following paragraphs describe an idea of mine.  This idea describes
a possible algorithm for improving raytracing speed...

   Please note that this is not an algorithm yet.  I would appreciate it
if anybody would let me know whether this is truly possible and whether
somebody else has thought of it already....

   I appreciate any comments or suggestions...

                                                        -vahid koussari

koussari@rodan.acs.syr.edu         gefuchs@top.cis.syr.edu

-------------

    This process describes a possible improvement for raytracing....
This method uses 2D bounding boxes in screen space instead of 3D bounding
volumes to speed up object intersection tests.

    As with any basic raytracer, begin by reading in information about the
objects...  As you read each object's world-coordinate position, use a
4 x 4 transformation matrix to transform the object's center, plus a point
on its border (center plus radius for a sphere), to screen coordinates.
    For each object, create an array entry containing the coordinates of
two opposite corners of a rectangle that bounds that object. Note that this
could be optimized further: after reading in all the object descriptions and
creating all the bounding boxes, you can check whether one box lies inside
another, and so on. If so, count the number of bounding boxes nested within
each other, delete the inner ones, and keep only the biggest (topmost) box.
Then store in a structure the number of objects included in that box:
if there is more than one object, set it to that number; otherwise
set it to 1.
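
    A rough sketch of that preprocessing step in C follows.  It assumes the
objects are spheres already transformed into camera space (eye at the origin,
looking down +Z) and a simple pinhole projection; FOCAL, XRES, YRES and the
structure names are illustrative assumptions, not taken from any particular
raytracer.

#define MAXOBJ 256
#define FOCAL  256.0          /* assumed focal length, in pixels */
#define XRES   512
#define YRES   512

typedef struct {
    double cx, cy, cz;        /* sphere center in camera space (cz > 0) */
    double radius;
} Sphere;

typedef struct {
    int xmin, ymin;           /* two opposite corners of the screen box */
    int xmax, ymax;
    int nobj;                 /* number of objects covered by this box */
    int obj[MAXOBJ];          /* indices of those objects */
} Box;

/* project one sphere to a conservative 2D screen rectangle by
   transforming the center and "center plus radius" as described above */
Box sphere_to_box(const Sphere *s, int index)
{
    Box b;
    double sx = FOCAL * s->cx / s->cz + XRES / 2.0;
    double sy = FOCAL * s->cy / s->cz + YRES / 2.0;
    double sr = FOCAL * s->radius / s->cz;   /* projected radius */

    /* pad by a pixel: the true perspective silhouette can be a bit wider */
    b.xmin = (int)(sx - sr) - 1;  b.xmax = (int)(sx + sr) + 1;
    b.ymin = (int)(sy - sr) - 1;  b.ymax = (int)(sy + sr) + 1;
    b.nobj = 1;
    b.obj[0] = index;
    return b;
}

/* does box a completely contain box b?  (for the optional step of
   merging nested boxes and keeping only the topmost one) */
int box_contains(const Box *a, const Box *b)
{
    return a->xmin <= b->xmin && a->xmax >= b->xmax &&
           a->ymin <= b->ymin && a->ymax >= b->ymax;
}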

Now for the actual rendering...

   As you begin to render the image and create rays for all pixels on
the screen, check to see whether the current screen (x,y) coordinate
being processed is inside any of the bounding boxes.
   If it is not inside any of the boxes, then get the background color.
   If it is, intersect the ray with the objects listed in the
structure that records which objects exist inside that bounding box.
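
   The rendering loop could then be sketched roughly like this (again in C;
it leans on the Box structure from the preprocessing sketch above, and
trace_objects(), background() and put_pixel() are hypothetical routines
standing in for the rest of the tracer):

extern Box  boxes[MAXOBJ];    /* screen boxes built during preprocessing */
extern int  nbox;

extern long background(int x, int y);
extern long trace_objects(int x, int y, const int *obj, int nobj);
extern void put_pixel(int x, int y, long color);

void render(void)
{
    int x, y, i, j, ncand;
    int cand[MAXOBJ];

    for (y = 0; y < YRES; y++)
        for (x = 0; x < XRES; x++) {
            /* gather every object whose screen box covers this pixel */
            ncand = 0;
            for (i = 0; i < nbox; i++)
                if (x >= boxes[i].xmin && x <= boxes[i].xmax &&
                    y >= boxes[i].ymin && y <= boxes[i].ymax)
                    for (j = 0; j < boxes[i].nobj && ncand < MAXOBJ; j++)
                        cand[ncand++] = boxes[i].obj[j];

            if (ncand == 0)
                put_pixel(x, y, background(x, y));   /* missed every box */
            else
                put_pixel(x, y, trace_objects(x, y, cand, ncand));
        }
}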

-------------
Now for possible bugs...

   Would there be problems with secondary rays? (reflected, refracted)

-------------
Further improvements if this works....

   After doing the preprocessing, an array of bits could be set up that
corresponds to the resolution of the image...
Instead of checking to see whether the (x,y) coordinate is inside a rectangle,
you can just check whether the appropriate bit in the array is
1 or 0... if 0, then there are no objects at that point (get the background);
if 1, then process the pixel using the shader.....
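
   In C, that bit array might be set up something like this (a sketch only;
the one-bit-per-pixel packing is just one way to do it, and a covered pixel
would still need the box lookup above to find out which objects to shade):

unsigned char covered[(XRES * YRES + 7) / 8];   /* one bit per pixel */

/* during preprocessing: mark every pixel inside a bounding box */
void mark_box(const Box *b)
{
    int x, y;
    long bit;

    for (y = b->ymin; y <= b->ymax; y++)
        for (x = b->xmin; x <= b->xmax; x++)
            if (x >= 0 && x < XRES && y >= 0 && y < YRES) {
                bit = (long)y * XRES + x;
                covered[bit >> 3] |= 1 << (bit & 7);
            }
}

/* during rendering: 0 means background, 1 means run the shader */
int pixel_covered(int x, int y)
{
    long bit = (long)y * XRES + x;
    return (covered[bit >> 3] >> (bit & 7)) & 1;
}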

   I have chosen rectangles instead of irregular polygons because it
is easier to check whether a point is inside a rectangle than inside an
irregular polygon.
-----------------------------------------------------------------------

cygnus@vax1.acs.udel.EDU (Marc Cygnus (ACM V.Pres.)) (12/16/88)

In article <907@cmx.npac.syr.edu> koussari@rodan.acs.syr.edu (Vahid Koussari)
writes:
>   The following paragraphs describe an idea of mine.  This idea describes
>a possible algorithm for improving raytracing speed...
> <etc...>
>    This process describes a possible improvement for raytracing....
>This method uses bounding boxes instead of bounding volumes to speed up
>intersection tests of objects.
> <etc, remainder deleted>

Yes, it sounds like an effective speedup. It also sounds like an octree :-).

Octree (and similar) encoding of an object hierarchy to reduce computation
is fairly common. For an introductory treatment of the idea of three-space
cell decomposition, take a look at "A new algorithm for object oriented ray
tracing," by Saul Youssef (Computer Vision, Graphics, and Image Processing,
34, (1986), pp. 125-137). That paper proposes an interesting variation on
the octree approach. Or, look up octree encoding / 3D region decomposition.

					-marcus-

easterb@ucscb.UCSC.EDU (William K. Karwin) (12/16/88)

In article <907@cmx.npac.syr.edu> gefuchs@top.cis.syr.edu (Gill E. Fuchs) writes:
>koussari@rodan.acs.syr.edu         gefuchs@top.cis.syr.edu
>...transform the object center plus any point
>along the border (radius for sphere) to screen coordinates.
>    For each object, create an array which contains coordinates of
>opposite corners of a rectangle that bounds that object...
>... you check to see if the current screen (x,y) coordinate that
>is being processed is inside any of the bounding boxes.
>   If it is not in any of the boxes, then get the background color.
>   If it is, then intersect the ray with the objects listed in the
>structure that records which objects exist inside that bounding box.

The way I am reading this, you are generating 2d boxes in screen
coordinates, not 3d boxes around the objects in world coordinates.

>   Would there be problems with secondary rays? (reflected,refracted)

Yes, I think there would be some problems with secondary rays.
Your purpose with these 2d bounding boxes is to turn a lot of the
intersection computation into overhead that is done once at the
beginning of the job.  Casting a secondary ray would be equivalent
to changing the viewpoint of the scene, and so would require another
entire set of 2d bounding boxes to be created.  This would have
to happen during runtime, because you could not predict secondary rays.

I have seen the results of ray casting algorithms that only display
regions of sharp contrast changes, i.e., edges of displayed objects.
These algorithms do no reflection, refraction, or shading.  At best they
do constant-shaded coloring of surfaces.  They are most often used (to my
knowledge) for fast generation of images in a CAD application, or as
a scene previewer for a more realistic renderer.

I think your idea would be very well suited for such a display method.

William K. Karwin                 ARPA  : easterb@ucscb.ucsc.EDU
"Any nitwit can understand        UUCP  : ...!ucbvax!ucscc!ucscb!easterb
computers.  Many do." -T. Nelson  BITNET: easterb@ucscb.BITNET

nctingue@ndsuvax.UUCP (Mark Tinguely) (12/16/88)

>In article <907@cmx.npac.syr.edu> gefuchs@top.cis.syr.edu (Gill E. Fuchs) writes:
>>   Would there be problems with secondary rays? (reflected,refracted)

In article <5796@saturn.ucsc.edu> easterb@ucscb.UCSC.EDU (William K. Karwin) writes:
>Casting a secondary ray would be equivalent
>to changing the viewpoint of the scene, and so would require another
>entire set of 2d bounding boxes to be created.  This would have
>to happen during runtime, because you could not predict secondary rays.

I don't want to put words in anyone's mouth, but I think the intention
for the secondary ray is to PROJECT the secondary ray into 2D space, then
check the original bounding boxes. This is a crude test because the ray
could be moving away from the object rather than toward it along the
projected axis, but it would help to eliminate those objects not near the
ray in the other two axes.
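
As a very rough C sketch of that idea (it reuses the screen-space box idea
from the original posting; project() stands in for whatever camera transform
the tracer already uses, and the "far enough" distance is a guess):

typedef struct { double x, y, z; } Vec3;
typedef struct { int xmin, ymin, xmax, ymax; } ScreenBox;

extern void project(const Vec3 *p, double *sx, double *sy);  /* hypothetical */

/* crude 2D rejection test for a secondary ray: project the origin and a
   point far along the ray, then see whether the segment's 2D bounds
   overlap the object's screen box.  (Rays heading back toward or behind
   the eye would need special-casing; this sketch ignores that.) */
int ray_may_hit_box(const Vec3 *orig, const Vec3 *dir, const ScreenBox *b)
{
    Vec3 tip;
    double x0, y0, x1, y1, lo, hi;

    tip.x = orig->x + 1.0e4 * dir->x;      /* "far enough" is a guess */
    tip.y = orig->y + 1.0e4 * dir->y;
    tip.z = orig->z + 1.0e4 * dir->z;

    project(orig, &x0, &y0);
    project(&tip, &x1, &y1);

    lo = x0 < x1 ? x0 : x1;  hi = x0 > x1 ? x0 : x1;
    if (hi < b->xmin || lo > b->xmax) return 0;
    lo = y0 < y1 ? y0 : y1;  hi = y0 > y1 ? y0 : y1;
    if (hi < b->ymin || lo > b->ymax) return 0;

    return 1;    /* might hit; fall through to the real 3D test */
}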

Mark Tinguely
--
Mark Tinguely           North Dakota State University,  Fargo, ND  58105
UUCP:       ...!uunet!ndsuvax!nctingue
BITNET:     nctingue@ndsuvax.bitnet
ARPA,CSNET: nctingue%ndsuvax.bitnet@cunyvm.cuny.edu

ritter@versatc.UUCP (Jack Ritter) (12/17/88)

In Article 2031 of comp.graphics,
Gill E. Fuchs writes:

>   The following paragraphs describe an idea of mine.  This idea describes
> 	a possible algorithm for improving raytracing speed...

> 	But as you are reading world coordinate positions for objects using a
> 	4 X 4 transformation matrix, transform the object center plus any point
> 	along the border (radius for sphere) to screen coordinates.

> 	   As you begin to render an image and create rays for all pixels on
> 	the screen, you check to see if the current screen(x,y) coordinate that
> 	is being processed is inside any of the bounding boxes.


I came up with this idea in Aug. '88. I posted it, and got a response.
Since my formulation was concise, I will include it here again, with the response.
(Basically, Gill has added recursive boxing to the algorithm.)

**************************** My Aug posting*********************

A simple method for fast ray tracing has occurred to me,
and I haven't seen it in the literature, particularly
Procedural Elements for Computer Graphics.
It works for primary rays only.

Do once for each object: 
   compute its minimum 3D bounding box. Project
   the box's 8 corners onto pixel space. Surround the
   cluster of 8 pixel points with a minimum 2D bounding box.
   (a tighter bounding volume could be used).

To test a ray against an object, check if the pixel
through which the ray goes is in the object's 2D box.
If not, reject it.

It sure beats line-sphere minimum distance calculation.
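
In C it might look something like this (just a sketch, assuming a project()
routine that maps a world point to pixel coordinates; the Vec3 and Rect
types are for illustration only):

typedef struct { double x, y, z; } Vec3;
typedef struct { double xmin, ymin, xmax, ymax; } Rect;

extern void project(const Vec3 *p, double *sx, double *sy);  /* hypothetical */

/* build the 2D screen box from a 3D bounding box given by its
   minimum and maximum corners */
Rect screen_box(const Vec3 *lo, const Vec3 *hi)
{
    Rect r;
    double sx, sy;
    int i;

    r.xmin = r.ymin =  1.0e30;
    r.xmax = r.ymax = -1.0e30;

    for (i = 0; i < 8; i++) {            /* the 8 corners */
        Vec3 c;
        c.x = (i & 1) ? hi->x : lo->x;
        c.y = (i & 2) ? hi->y : lo->y;
        c.z = (i & 4) ? hi->z : lo->z;
        project(&c, &sx, &sy);
        if (sx < r.xmin) r.xmin = sx;
        if (sx > r.xmax) r.xmax = sx;
        if (sy < r.ymin) r.ymin = sy;
        if (sy > r.ymax) r.ymax = sy;
    }
    return r;
}

/* the primary ray through pixel (px,py) can only hit the object if
   the pixel lies inside the object's 2D box */
int pixel_in_box(double px, double py, const Rect *r)
{
    return px >= r->xmin && px <= r->xmax &&
           py >= r->ymin && py <= r->ymax;
}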

*********************** the response**********************

Hi, I just read your posting on ray-object intersection and did not
get a chance to look and see if anyone else responded to this (which
for unbiased opinions is always best), so here is my response.

I am not sure whether your 8-point 2-d bounding box test has been tried
or not.  It certainly is a novel idea worth pursuing.  However, if you
have looked at most raytracing articles, there have been a variety of
attempts to reduce the number of "first" intersection calculations.  I
believe Weghorst actually goes as far as to use a first level
scan-conversion routine to solve the first intersection problem.  This
would be the fastest method.  The main problem with ray-tracing,
though, is the object intersections encountered at the later levels
when traversing the tree.  This is where the majority of time is spent
in this type of algorithm.  Hope this was helpful.

			-Scott Whitman (slim@tut.cis.ohio-state.edu.ARPA)


-- 
       The ultimate realization: everything is everything. (I should know.)
   Jack Ritter, S/W Eng. Versatec, 2805 Bowers Av, Santa Clara, CA 95051
   Mail Stop 1-7.  (408)982-4332, or (408)988-2800 X 5743
   UUCP: {pyramid,mips,vsi1,arisia}!versatc!ritter

cac@druco.ATT.COM (Curtis A. Conkey) (12/20/88)

in article <907@cmx.npac.syr.edu>, gefuchs@top.cis.syr.edu (Gill E. Fuchs) says:
> 
> Hi...
> 
>    The following paragraphs describe an idea of mine.  This idea describes
> a possible algorithm for improving raytracing speed...
> 
>    Please note that this is not an algorithm yet.  I would appreciate
> if anybody would let me know if this would be truly possible and if somebody
> else has thought of it already....
> 
>    I appreciate any comments or suggestions...
> 
>                                                         -vahid koussari

Check out some of these references on ray tracing:

Bibliography
(Doct81) Doctor, Louis J. and Torborg, John G. 
"Display Techniques for Octree-Encoded Objects"
IEEE Computer Graphics and Applications July 1981 pp(29-38)

(Fuji85) Fujimoto, Akira and Iwata, Kansei 
"Accelerated Ray Tracing"
Computer Graphics Tokyo 85 April 1985

(Fuji83) Fujimoto, Akira and Iwata, Kansei 
"Jag-Free Images on Raster Displays"
IEEE Computer Graphics and Applications December 1983 pp(26-34)

(Glas84) Glassner, Andrew S. 
"Space Subdivision for Fast Ray Tracing"
IEEE Computer Graphics and Applications October 1984 pp(15-22)

(Hall83) Hall, Roy A. and Greenberg, Donald P. 
"A Testbed for Realistic Image Synthesis"
IEEE Computer Graphics and Applications Nov 1983 pp(10-20)
 
(Kapl85) Kaplan, Michael R. 
"Space Tracing, a Constant Time Ray Tracer"
SIGGRAPH85 Tutorial on The Uses of Spatial Coherence in Ray Tracing

(Plun85) Plunket, David J. and Bailey, Michael J. 
"The Vectorization of a Ray Tracing Algorithm for Improved Execution Speed"
IEEE Computer Graphics and Applications August 1985 pp(52-59)

(Wegh84) Weghorst, Hank and Hooper, Gary and Greenberg, Donald P.
"Improved Computational Methods for Ray Tracing"
ACM Transactions on Graphics Vol. 3, No. 1 January 1984 pp(52-69)

(Whit80) Whitted, Turner 
"An Improved Illumination Model for Shaded Display"
Communications of the ACM June 1980 Vol. 23, No. 6 pp(343-349)

(Yama84) Yamaguchi, K. and Kunii, T.L. and Fujimura K.
"Octree-Related Data Structures and Algorithms"
IEEE Computer Graphics and Applications January 1984 pp(53-59)

Hope these help

Curtis Conkey
AT&T Bell Laboratories
Denver, Colorado
ihnp4!druco!cac

david@linc.cis.upenn.edu (David Feldman) (12/22/88)

Article 4169 of comp.graphics:
From: ritter@versatc.UUCP (Jack Ritter)
Subject: Re: Raytracing speedups....

> Do once for each object: 
>    compute its minimum 3D bounding box. Project
>    the box's 8 corners onto pixel space. Surround the
>    cluster of 8 pixel points with a minimum 2D bounding box.
>   (a tighter bounding volume could be used).
> To test a ray against an object, check if the pixel
> through which the ray goes is in the object's 2D box.
> If not, reject it.
> It sure beats line-sphere minimum distance calculation.

I did this for a ray tracer that a friend of mine and I hacked up.  This
particular ray tracer only handled spheres, so the calculations were not
bad at all.  The code was short and ugly.

> would be the fastest method.  The main problem with ray-tracing,
> though, is the object intersections encountered at the later levels
> when traversing the tree.  This is where the majority of time is spent
> in this type of algorithm.  Hope this was helpful.

This does seem to be the case.  I used a Phong illumination model for
reflections (best results per MFLOP I think) and also had special cases
for mirrors and zenith lighting.  The speedup due to the bounding boxes on
the image plane (pixel space) was less than ten percent for the complicated
images we did.  In the end, my friend took out the b-box code to keep things
simpler, i.e., he did not think it was worth it.

_   /|					Dave Feldman
\'o.O'					david@dsl.cis.upenn.edu
=(___)=		Ok, cough!
   U					DSL - land of wonder and enchantment
ACK! PHHT!

ewhac@well.UUCP (Leo L. Schwab) (12/22/88)

In article <907@cmx.npac.syr.edu> gefuchs@top.cis.syr.edu (Gill E. Fuchs) writes:
>   The following paragraphs describe an idea of mine.  This idea describes
>a possible algorithm for improving raytracing speed...
> [ ... ]
>    This process describes a possible improvement for raytracing....
>This method uses bounding boxes instead of bounding volumes to speed up
>intersection tests of objects.
>
	Basically, he's proposing 2D boxes in screen space rather than 3D
voxels in world space.

	There's a commercial raytracer for the Amiga that uses precisely the
method you outlined.  It works well enough for matte objects, but falls
far short of optimal for anything that might redirect the ray.

	If you have a silver sphere, and you bounce a ray off it, your
screen space subdivision is now useless, and you must check against every
object/polygon in the scene to see what you've hit, which might be another
mirrored object...

	The other disadvantage is less obvious.  Suppose I have 10,000
polygons all lined up, and I position my camera to look straight down the
line.  Rays cast toward the center will test positive for far more polygons
than necessary.  It may be possible to optimize this by sorting the box test
results by Z, but you'll still be making more tests than necessary.
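
	For what it's worth, the Z-sorting idea might look roughly like this
in C (a sketch only; the Candidate record, its znear field and intersect()
are assumptions about how the surrounding tracer keeps its data):

#include <stdlib.h>

typedef struct {
    int    obj;        /* object/polygon index */
    double znear;      /* nearest depth of its 3D bounds along the ray */
} Candidate;

static int by_znear(const void *a, const void *b)
{
    double d = ((const Candidate *)a)->znear - ((const Candidate *)b)->znear;
    return d < 0 ? -1 : (d > 0 ? 1 : 0);
}

extern double intersect(int obj, int px, int py);   /* hit depth, or -1 */

/* intersect candidates front to back and stop once nothing farther
   away could possibly beat the closest hit found so far */
int nearest_hit(Candidate *cand, int n, int px, int py, double *depth)
{
    int i, best = -1;
    double bestz = 1.0e30, t;

    qsort(cand, n, sizeof(Candidate), by_znear);

    for (i = 0; i < n; i++) {
        if (cand[i].znear > bestz)
            break;                       /* everything left starts deeper */
        t = intersect(cand[i].obj, px, py);
        if (t >= 0.0 && t < bestz) {
            bestz = t;
            best = cand[i].obj;
        }
    }
    *depth = bestz;
    return best;                          /* -1: the ray missed them all */
}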

	Scenes on the aforementioned raytracer inevitably take longer to
render than on a competing raytracer that uses voxels (using equivalent
hardware and software configurations).  The voxel tracer usually finishes no
less than 25% faster than the boxel tracer.  Of course, your mileage may
vary.

	Disclaimer:  I'm just parroting this stuff...

_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_
Leo L. Schwab -- The Guy in The Cape	INET: well!ewhac@ucbvax.Berkeley.EDU
 \_ -_		Recumbent Bikes:	UUCP: pacbell > !{well,unicom}!ewhac
O----^o	      The Only Way To Fly.	      hplabs / (pronounced "AE-wack")
"Work FOR?  I don't work FOR anybody!  I'm just having fun."  -- The Doctor

mbc@a.cs.okstate.edu (Michael Carter) (12/30/88)

From article <7839@versatc.UUCP>, by ritter@versatc.UUCP (Jack Ritter):
> 
> A simple method for fast ray tracing has occurred to me,
> and I haven't seen it in the literature, particularly
> Procedural Elements for Computer Graphics.
> It works for primary rays only.
> 

     I tried something like this a while back in an early version of my
Hypercube Ray Tracer.  As you described, I placed a minimum sized box
around the image of the object as projected onto the viewing plane, and
tested primary rays against it as a first-cut culling scheme.  At that time,
the ray tracer was using the NAIVE object intersection method.  The best
improvement I ever got from this scheme was about 20% overall speedup for
scenes with small numbers of large objects.  It seemed that there were
just too many secondary rays being cast for the decreased number of primary
rays to have much effect on the overall time.  This is especially true in
scenes where large reflective or partially reflective objects exist -- their
shadow testing rays and reflected rays just overwhelm any savings gained in
casting the primary ray.
     This is admittedly not an extensive test, and there is room for further
experimentation, but the extra code incurred for such a modest gain was not
worth the trouble in my estimation.  Sorry.  Keep the ideas coming, though!

-- 
Michael B. Carter (mbc@a.cs.okstate.edu)
Department of Electrical and Computer Engineering, Oklahoma State University