[comp.graphics] raytracing in ||

bodarky@umbc3.UMD.EDU (Scott Bodarky) (11/17/88)

I am presently preparing to try to implement a ray tracer on an INMOS
transputer.  I've never done anything with ray tracing before, but I 
am starting to get a fairly clear picture of how it is done.  One good 
article I've read is "Exploiting concurrency: a ray tracing example"
by Jamie Packer, which is INMOS's Technical Note 7.  I do have some
questions I am hoping some of you out there might answer about
the algorithm itself.

1) Given a pixel on the screen, there are an infinite number of light
   rays that might hit it.  Does one only calculate the ray perpendicular
   to the screen, or is that insufficient?  How many rays does one
   calculate?

2) From an intersection point, one sends a test ray directly at the light
   sources to test for objects that might cast shadows.  What if you
   are on the far side of an object from a light source?  

Sorry if these are amateurish questions, but I'm new at (and intensely
interested in) this.

     -Scott Bodarky	          |	bodarky@umbc3 (UNIX)
				  |     bodarky@umbc2 (VMS)
				  |     bodarky@umbc1 (VMS)
-------------------------------------------------------------------------------
      The Image Lab		  |	The Center for Studies in 19th C. Music
      University of MD.      	  |	University of MD.
      Baltimore County		  |	College Park

marco@hpuamsa.UUCP (Marco Lesmeister) (11/17/88)

> 1) Given a pixel on the screen, there are an infinite number of light
>    rays that might hit it.  Does one only calculate the ray perpendicular
>    to the screen, or is that insufficient?  How many rays does one
>    calculate?

You calculate only the ray from the eye-point through each pixel, so
that's one ray per pixel.
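
In C the idea might look something like this (a sketch with my own
conventions -- eye at the origin, square image, view plane at z = 1):

    #include <math.h>

    typedef struct { double x, y, z; } Vec3;

    /* Direction of the primary ray through pixel (i, j). */
    Vec3 primary_ray(int i, int j, int xres, int yres)
    {
        Vec3 d;
        double len;
        d.x = (i + 0.5) / xres - 0.5;   /* pixel centre in [-0.5, 0.5] */
        d.y = 0.5 - (j + 0.5) / yres;   /* flipped so +y points up */
        d.z = 1.0;                      /* distance to the view plane */
        len = sqrt(d.x*d.x + d.y*d.y + d.z*d.z);
        d.x /= len;  d.y /= len;  d.z /= len;
        return d;
    }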

> 2) From an intersection point, one sends a test ray directly at the light
>    sources to test for objects that might cast shadows.  What if you
>    are on the far side of an object from a light source?  

I test the newly cast ray against all of the objects; if a new
intersection point lies closer to the light-source than the first-found
intersection point does, then the first-found point is not visible from
the light-source and is in shadow.
If the intersection point is on the back side of an object as seen from
the eye-point, then it is not a valid intersection point in the
first place.
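
In C the shadow test might look like this (a sketch; intersect() and
the object list are assumed to exist already in your tracer):

    #include <math.h>

    typedef struct { double x, y, z; } Vec3;

    #define EPSILON 1e-6    /* don't re-hit the surface we start on */

    extern int nobjects;                                        /* assumed */
    extern int intersect(int k, Vec3 org, Vec3 dir, double *t); /* assumed */

    /* Is point p shadowed with respect to the light at l? */
    int in_shadow(Vec3 p, Vec3 l)
    {
        Vec3 d;
        double dist, t;
        int k;
        d.x = l.x - p.x;  d.y = l.y - p.y;  d.z = l.z - p.z;
        dist = sqrt(d.x*d.x + d.y*d.y + d.z*d.z);
        d.x /= dist;  d.y /= dist;  d.z /= dist;
        for (k = 0; k < nobjects; k++)
            if (intersect(k, p, d, &t) && t > EPSILON && t < dist)
                return 1;               /* an object blocks the light */
        return 0;                       /* the point is lit */
    }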

> Sorry if these are amateurish questions, but I'm new at (and intensely
> interested in) this.

I know how you feel, because that's where I was about a year ago.

Marco Lesmeister       |\    /|           Hewlett Packard
Coudenhoveflat 80      | \  / |           Startbaan 16
1422 VK Uithoorn       |  \/  |           1187 XR Amstelveen
Holland 02975-65878    |      |____       Holland 020-5476911

steveb@cbmvax.UUCP (Steve Beats) (11/18/88)

In article <1351@umbc3.UMD.EDU> bodarky@umbc3.UMD.EDU (Scott Bodarky) writes:
>
>[stuff about transputer raytracing deleted]
>
>1) Given a pixel on the screen, there are an infinite number of light
>   rays that might hit it.  Does one only calculate the ray perpendicular
>   to the screen, or is that insufficient?  How many rays does one
>   calculate?
>
Generally, you will take a ray from the viewpoint and cast it through the
relevant pixel on your view plane (screen).  If your view is anything other
than perpendicular to an axis (and passing through the origin) you will have
to perform some transformations to make the calculation of the ray direction
a little simpler.  Foley and van Dam suggest rotating and translating the
whole scene from world co-ordinates to image (view) co-ordinates.  This works
quite well.   If you sample the scene using one pixel per ray, you will get
pretty severe aliasing at high contrast boundaries.  One trick is to sample
at twice the vertical and horizontal resolution (yielding 4 rays per pixel)
and average the resultant intensities.  This is a pretty effective method
of anti-aliasing.  Of course, there's nothing to stop you using 8 or 16 rays
per pixel, but this gets very expensive in terms of CPU time.  
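
A sketch of the 4-rays-per-pixel version (trace() is assumed to return
the intensity of one ray through image plane point (x, y), with pixels
one unit apart):

    extern double trace(double x, double y);    /* assumed */

    /* Average a regular 2x2 subgrid of samples within pixel (i, j). */
    double shade_pixel(int i, int j)
    {
        double sum = 0.0;
        int a, b;
        for (a = 0; a < 2; a++)         /* samples at 1/4 and 3/4 */
            for (b = 0; b < 2; b++)
                sum += trace(i + 0.25 + 0.5*a, j + 0.25 + 0.5*b);
        return sum / 4.0;
    }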

>2) From an intersection point, one sends a test ray directly at the light
>   sources to test for objects that might cast shadows.  What if you
>   are on the far side of an object from a light source?  
>
Providing you are not working with transparent objects, just take the dot
product of the surface normal and the vector to the light source.  Negative
and positive values determine whether the surface is facing away or towards
the light source.
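
In code (a sketch; n is the outward surface normal at the hit point p,
and l is the light position):

    typedef struct { double x, y, z; } Vec3;

    /* Positive dot product: the surface faces the light.  Otherwise
       the point is on the far side and must be in shadow. */
    int faces_light(Vec3 n, Vec3 p, Vec3 l)
    {
        return n.x*(l.x - p.x) + n.y*(l.y - p.y) + n.z*(l.z - p.z) > 0.0;
    }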

	Steve

brown@tyler.cs.unc.edu (Lurch) (11/29/88)

In article <5263@cbmvax.UUCP> steveb@cbmvax.UUCP (Steve Beats) writes:
>In article <1351@umbc3.UMD.EDU> bodarky@umbc3.UMD.EDU (Scott Bodarky) writes:
>>
>>[stuff about transputer raytracing deleted]
>>
>If you sample the scene using one pixel per ray, you will get
>pretty severe aliasing at high contrast boundaries.  One trick is to sample
>at twice the vertical and horizontal resolution (yielding 4 rays per pixel)
>and average the resultant intensities.  This is a pretty effective method
>of anti-aliasing.

From what I understand, the way to achieve 4 rays per pixel is to sample at
vertical resolution +1, horizontal resolution +1, and treat each ray as a
'corner' of each pixel, and average those values.  This is super cheap compared
to sampling at twice vertical and horizontal.
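
Something like this, I think (a sketch; trace() and the image size are
assumed):

    #define XRES 512            /* image size, just for illustration */
    #define YRES 512

    extern double trace(double x, double y);    /* assumed */

    double corner[YRES+1][XRES+1];
    double pixel[YRES][XRES];

    void render(void)
    {
        int i, j;
        for (j = 0; j <= YRES; j++)     /* one ray per pixel corner */
            for (i = 0; i <= XRES; i++)
                corner[j][i] = trace((double)i, (double)j);
        for (j = 0; j < YRES; j++)      /* average the 4 shared corners */
            for (i = 0; i < XRES; i++)
                pixel[j][i] = 0.25 * (corner[j][i]   + corner[j][i+1] +
                                      corner[j+1][i] + corner[j+1][i+1]);
    }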

Randy 'no raytracing guru' Brown

----
Back off, man!  I'm a scientist!       brown@cs.unc.edu   uunet!mcnc!unc!brown

kyriazis@rpics (George Kyriazis) (11/29/88)

In article <5548@thorin.cs.unc.edu> brown@tyler.UUCP (Lurch) writes:
>In article <5263@cbmvax.UUCP> steveb@cbmvax.UUCP (Steve Beats) writes:
>>If you sample the scene using one pixel per ray, you will get
>>pretty severe aliasing at high contrast boundaries.  One trick is to sample
>>at twice the vertical and horizontal resolution (yielding 4 rays per pixel)
>>and average the resultant intensities.  This is a pretty effective method
>>of anti-aliasing.
>
>From what I understand, the way to achieve 4 rays per pixel is to sample at
>vertical resolution +1, horizontal resolution +1, and treat each ray as a
>'corner' of each pixel, and average those values.  This is super cheap compared
>to sampling at twice vertical and horizontal.
>

There is another way to do antialiasing, used mainly with parallel algorithms,
since keeping track of which CPU calculated which pixel is a bit clumsy.
You shoot N rays somewhere in the pixel and then take the average of them.
The rays should have a Gaussian spatial distribution to give better results.
With that method no data sharing is necessary, and you get reasonably good
results.  If you sample too few times you get noise instead of aliasing
effects.  If you sample too big an area (eg. the area of say 6*6 pixels
instead of just 1) you get blur.  Sometimes it's a useful effect.
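
A sketch of that (gauss() is assumed to return a zero-mean,
unit-variance normal deviate, e.g. by the Box-Muller method):

    extern double trace(double x, double y);    /* assumed */
    extern double gauss(void);          /* assumed: N(0,1) deviate */

    /* N Gaussian-jittered samples around the centre of pixel (i, j). */
    double sample_pixel(int i, int j, int n)
    {
        double sum = 0.0;
        int k;
        for (k = 0; k < n; k++) {
            /* sigma of 0.15 keeps nearly all samples in this pixel */
            double x = i + 0.5 + 0.15 * gauss();
            double y = j + 0.5 + 0.15 * gauss();
            sum += trace(x, y);
        }
        return sum / n;
    }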


  George Kyriazis
  kyriazis@turing.cs.rpi.edu
  kyriazis@ss0.cicg.rpi.edu
------------------------------

cme@cloud9.UUCP (Carl Ellison) (11/30/88)

In article <5548@thorin.cs.unc.edu>, brown@tyler.cs.unc.edu (Lurch) writes:
> 
> From what I understand, the way to achieve 4 rays per pixel is to sample at
> vertical resolution +1, horizontal resolution +1, and treat each ray as a
> 'corner' of each pixel, and average those values.  This is super cheap compared
> to sampling at twice vertical and horizontal.
> 


There ain't no such thing as a free lunch.

The re-use of corners for all pixels which share them turns this into
filtering AFTER sampling -- and sampling only at pixel resolution.

All the aliasing which a 1-sample per pixel image would have is carefully
preserved this way -- only the final picture (both good stuff and aliasing)
is blurred.

Have a nice day,

--Carl Ellison          ...!harvard!anvil!es!cme    (normal mail address)
                        ...!ulowell!cloud9!cme      (usenet news reading)
(standard disclaimer)

awpaeth@watcgl.waterloo.edu (Alan Wm Paeth) (11/30/88)

In article <5548@thorin.cs.unc.edu> brown@tyler.UUCP (Lurch) writes:
>
>From what I understand, the way to achieve 4 rays per pixel is to sample at
>vertical resolution +1, horizontal resolution +1, and treat each ray as a
>'corner' of each pixel, and average those values.  This is super cheap compared
>to sampling at twice vertical and horizontal.

This reuses rays, but since the number of parent rays and number of output
pixels match, this has to be the same as low-pass filtering the output
produced by a raytracer which casts the same number of rays (one per pixel).

The technique used by Sweeney in 1984 (while here at Waterloo) compares the four
pixel-corner rays and if they are not in close agreement subdivides the pixel.
The recursion terminates either when the rays from the subpixel's corners are
in close agreement or when some max depth is reached. The subpixel values are
averaged to form the parent pixel intensity (though a more general convolution
could be used in gathering up the subpieces).
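
A sketch of the recursion (with constants of my own choosing; a real
implementation would cache and re-use corner rays rather than re-trace
them as this one does):

    extern double trace(double x, double y);    /* assumed */

    #define TOL      0.05   /* "close agreement" threshold (assumed) */
    #define MAXDEPTH 4      /* recursion limit (assumed) */

    /* Intensity of the square of side s with lower corner (x, y). */
    double subsample(double x, double y, double s, int depth)
    {
        double c[4], min, max, sum;
        int k;
        c[0] = trace(x,     y);
        c[1] = trace(x + s, y);
        c[2] = trace(x,     y + s);
        c[3] = trace(x + s, y + s);
        min = max = sum = c[0];
        for (k = 1; k < 4; k++) {
            if (c[k] < min) min = c[k];
            if (c[k] > max) max = c[k];
            sum += c[k];
        }
        if (max - min <= TOL || depth >= MAXDEPTH)
            return sum / 4.0;           /* corners agree: average */
        s *= 0.5;                       /* corners disagree: split in 4 */
        return 0.25 * (subsample(x,     y,     s, depth + 1) +
                       subsample(x + s, y,     s, depth + 1) +
                       subsample(x,     y + s, s, depth + 1) +
                       subsample(x + s, y + s, s, depth + 1));
    }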

This approach means that the subpixel averaging takes place adaptively in
regions of pixel complexity, as opposed to globally filtering the entire
output raster (which the poster's approach does implicitly).

The adaptive subdivision can be quite useful. For instance, a scene of flat shaded polygons
renders in virtually the same time as a "one ray per pixel" implementation,
with some slight overhead well spent in properly anti-aliasing the polygon
edges -- no time is wasted on the solid areas.

   /Alan Paeth
   Computer Graphics Laboratory
   University of Waterloo

elf@dgp.toronto.edu (Eugene Fiume) (11/30/88)

In article <7034@watcgl.waterloo.edu> awpaeth@watcgl.waterloo.edu (Alan Wm Paeth) writes:
>
>The technique used by Sweeney in 1984 (while here at Waterloo) compares the four
>pixel-corner rays and if they are not in close agreement subdivides the pixel.
>The recursion terminates either when the rays from the subpixel's corners are
>in close agreement or when some max depth is reached. The subpixel values are
>averaged to form the parent pixel intensity (though a more general convolution
>could be used in gathering up the subpieces).

This technique was, of course, mentioned in Whitted's 1980 paper.
-- 
Eugene Fiume
Dynamic Graphics Project
University of Toronto
elf@dgp.utoronto (BITNET); elf@dgp.toronto.edu (CSNET/UUCP)

markv@uoregon.uoregon.edu (Mark VandeWettering) (12/01/88)

In article <7034@watcgl.waterloo.edu> awpaeth@watcgl.waterloo.edu (Alan Wm Paeth) writes:
>In article <5548@thorin.cs.unc.edu> brown@tyler.UUCP (Lurch) writes:
>>
>>From what I understand, the way to achieve 4 rays per pixel is to sample at
>>vertical resolution +1, horizontal resolution +1, and treat each ray as a
>>'corner' of each pixel, and average those values.  This is super cheap compared 
>>to sampling at twice vertical and horizontal.

And also super-ungood.   Better than not doing it, but hardly
satisfactory.  You may as well do it as a post-process.  The problem
really arises from sampling on a regular grid.  You can have drop-outs
(small objects disappear) and other problems as well.

>This reuses rays, but since the number of parent rays and number of output
>pixels match, this has to be the same as low-pass filtering the output
>produced by a raytracer which casts the same number of rays (one per pixel).

Correct.  The fact that you have gained no more information about the
picture (you haven't cast more rays) means that you aren't really going
to improve the quality of the image over the unfiltered case.

>The technique used by Sweeney in 1984 (while here at Waterloo) compares the four
>pixel-corner rays and if they are not in close agreement subdivides the pixel.
>The recursion terminates either when the rays from the subpixel's corners are
>in close agreement or when some max depth is reached. The subpixel values are
>averaged to form the parent pixel intensity (though a more general convolution
>could be used in gathering up the subpieces).

This is common, and nice.  I have been planning on doing an adaptive
antialiaser for my raytracer, but.... ahh....to have free time again.

Mark VandeWettering

kyriazis@rpics (George Kyriazis) (12/01/88)

In article <7034@watcgl.waterloo.edu> awpaeth@watcgl.waterloo.edu (Alan Wm Paeth) writes:
>In article <5548@thorin.cs.unc.edu> brown@tyler.UUCP (Lurch) writes:
>>
>>From what I understand, the way to achieve 4 rays per pixel is to sample at
>>vertical resolution +1, horizontal resolution +1, and treat each ray as a
>>'corner' of each pixel, and average those values.  This is super cheap compared
>>to sampling at twice vertical and horizontal.
>
>This reuses rays, but since the number of parent rays and number of output
>pixels match, this has to be the same as low-pass filtering the output
>produced by a raytracer which casts the same number of rays (one per pixel).
>

By sampling the image at homogeneously spaced points, ray tracing becomes
a point-sampling technique, and inevitably you get aliasing effects.
You can transform these sample points into Gaussian distributions, sampling
somewhere inside the pixel and weighting the color of the pixel accordingly.
Since this has an element of randomness in it, the eye does not perceive
it as aliasing but as noise.  By taking more than one sample per pixel,
you actually spread out the Gaussians, merging them with those of the
neighboring pixels.  That merging gives a continuity of color.

That technique was described in a paper by Rob Cook (I don't remember
the title).

>This approach means that the subpixel averaging takes place adaptively in
>regions of pixel complexity, as opposed to globally filtering the entire
>output raster (which the poster's approach does implicitly).

Unfortunately I always have to take several samples per pixel :-(

  George Kyriazis
  kyriazis@turing.cs.rpi.edu
  kyriazis@ss0.cicg.rpi.edu
------------------------------

david@sun.uucp (David DiGiacomo) (12/01/88)

In article <7034@watcgl.waterloo.edu> awpaeth@watcgl.waterloo.edu (Alan Wm Paeth) writes:
>The technique used by Sweeney in 1984 (while here at Waterloo) compares the four
>pixel-corner rays and if they are not in close agreement subdivides the pixel.
>The recursion terminates either when the rays from the subpixel's corners are
>in close agreement or when some max depth is reached. The subpixel values are
>averaged to form the parent pixel intensity (though a more general convolution
>could be used in gathering up the subpieces).

Let me point out the obvious.  This technique is great for antialiasing
the edges of relatively large objects, but doesn't help if there are
subpixel objects (e.g. acute polygon vertices) which don't happen to cross
the pixel corners.

jevans@cpsc.ucalgary.ca (David Jevans) (12/01/88)

In article <5548@thorin.cs.unc.edu>, brown@tyler.cs.unc.edu (Lurch) writes:
> In article <5263@cbmvax.UUCP> steveb@cbmvax.UUCP (Steve Beats) writes:
> >In article <1351@umbc3.UMD.EDU> bodarky@umbc3.UMD.EDU (Scott Bodarky) writes:
> >If you sample the scene using one pixel per ray, you will get
> >pretty severe aliasing at high contrast boundaries.  One trick is to sample
> >at twice the vertical and horizontal resolution (yielding 4 rays per pixel)
> >and average the resultant intensities.  This is a pretty effective method
> >of anti-aliasing.
 
> From what I understand, the way to achieve 4 rays per pixel is to sample at
> vertical resolution +1, horizontal resolution +1, and treat each ray as a
> 'corner' of each pixel, and average those values.  This is super cheap compared
> to sampling at twice vertical and horizontal.

Blech!  Super-sampling, as suggested in the first article, works ok but is
very slow and 4 rays/pixel is not enough for high quality images.  Simply
rendering vres+1 by hres+1 doesn't gain you anything.  All you end up doing is
blurring the image.  This is VERY unpleasant and makes an image look out
of focus.

Aliasing is an artifact of regular under-sampling.  Most people adaptively
super-sample in areas where it is needed (edges, textures, small objects).
Super-sampling in a regular pattern often requires more than 16 rays per
anti-aliased pixel to get acceptable results.  A great improvement comes from
filtering your rays instead of simply averaging them.  Even better is to fire
super-sample rays according to some distribution (eg. Poisson) and then
filter them.
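
A dart-throwing sketch for generating such a distribution over one pixel
(drand48() is the usual Unix generator; rmin is the minimum spacing that
makes the pattern Poisson-disk rather than plain random):

    #include <stdlib.h>     /* drand48() on most Unix systems */

    /* Fill sx/sy with up to `want' points in [0,1)^2, each at least
       rmin from all the others.  Returns the number actually placed. */
    int poisson_samples(double *sx, double *sy, int want, double rmin)
    {
        int n = 0, tries = 0;
        while (n < want && tries++ < 1000) {    /* give up eventually */
            double x = drand48(), y = drand48();
            int k, ok = 1;
            for (k = 0; k < n; k++) {
                double dx = x - sx[k], dy = y - sy[k];
                if (dx*dx + dy*dy < rmin*rmin) { ok = 0; break; }
            }
            if (ok) { sx[n] = x; sy[n] = y; n++; }
        }
        return n;   /* fewer than `want' if rmin was too ambitious */
    }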

Check SIGGRAPH proceedings from about 84 - 87 for relevant articles and
pointers to articles.  Changing a ray tracer from simple super-sampling to
adaptive super-sampling can be done in less time than it takes to render
an image, and will save you HUGE amounts of time in the future.  Filtering
and distributing rays takes more work, but the results are good.

David Jevans, U of Calgary Computer Science, Calgary AB  T2N 1N4  Canada
uucp: ...{ubc-cs,utai,alberta}!calgary!jevans

jmd@granite.dec.com (John Danskin) (12/02/88)

In article <7034@watcgl.waterloo.edu> awpaeth@watcgl.waterloo.edu (Alan Wm Paeth) writes:
:The technique used by Sweeney in 1984 (while here at Waterloo)...

This technique was proposed by T. Whitted in his June '80 CACM paper,
"An Improved Illumination Model for Shaded Display".

This is *still* an interesting paper (perhaps *the* interesting paper)
if you want to review the basics. Whitted doesn't talk much about speeding it
up, but the basic description of how ray tracing works is extremely clear.

-- 
John Danskin				| jmd@decwrl.dec.com  or decwrl!jmd
DEC Advanced Technology Development	| (415) 853-6724 
100 Hamilton Avenue			| My comments are my own.
Palo Alto, CA  94301			| I do not speak for DEC.