[comp.graphics] Distributed Ray Tracing Problems

winfnws@dutrun.UUCP (Aadjan van der Helm) (02/16/89)

First an introduction: I'm a Computer Graphics student at the
Technical University of Delft, The Netherlands. My assignment was to
do some research on distributed ray tracing. I actually implemented
a distributed ray tracer, but during my experiments a very strange
problem came up. I implemented depth-of-field exactly the way R.L.
Cook describes it in his paper.
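
In outline, each primary ray starts at a jittered sample point on the
lens and is aimed at the point where the undisturbed eye ray meets the
plane in focus. A simplified sketch (types and names here are mine,
not taken from my actual code):

    typedef struct { double x, y, z; } Vec;
    typedef struct { Vec org, dir; } Ray;

    /* Depth-of-field primary ray: shift the origin to a sample
     * point (lens_u, lens_v) on the lens, then aim at the point
     * where the center ray meets the plane in focus.  dir must
     * be normalized; right and up span the lens plane. */
    Ray dof_ray(Vec eye, Vec dir, Vec right, Vec up,
                double focal_dist, double lens_u, double lens_v)
    {
        Ray r;
        Vec focus;

        focus.x = eye.x + dir.x * focal_dist;
        focus.y = eye.y + dir.y * focal_dist;
        focus.z = eye.z + dir.z * focal_dist;

        r.org.x = eye.x + right.x * lens_u + up.x * lens_v;
        r.org.y = eye.y + right.y * lens_u + up.y * lens_v;
        r.org.z = eye.z + right.z * lens_u + up.z * lens_v;

        r.dir.x = focus.x - r.org.x;
        r.dir.y = focus.y - r.org.y;
        r.dir.z = focus.z - r.org.z;
        return r;    /* caller normalizes r.dir */
    }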

I decided to do some experiments with the shape of the f-stop of the
simulated camera. First I simulated a square-shaped f-stop. Now I know
this isn't the real thing in an actual photo camera, but I just tried
it. I divided the square f-stop into a regular raster of N x N
sub-squares, just the way you would subdivide a pixel into subpixels,
and jittered the midpoint of each sub-square in the usual way.
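
In code, the sample generation looks roughly like this (simplified;
drand48() returns a uniform deviate in [0,1)):

    #include <stdlib.h>

    /* n x n jittered sample points on a square f-stop of
     * half-width R, one per grid cell, exactly as you would
     * jitter subpixel samples.  u and v must hold n*n doubles. */
    void square_lens_samples(int n, double R, double *u, double *v)
    {
        int i, j, k = 0;
        double cell = 2.0 * R / n;

        for (i = 0; i < n; i++)
            for (j = 0; j < n; j++) {
                u[k] = -R + (i + drand48()) * cell;
                v[k] = -R + (j + drand48()) * cell;
                k++;
            }
    }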

Then I rendered a picture, and here comes the strange thing. The
depth-of-field effect was pretty accurate, but in some places the
jaggies were very distinct: about 20 pixels in the picture showed very
clear aliasing of texture and object contours. The funny thing was
that the rest of the picture seemed all right. When I rendered the
same picture with a circle-shaped f-stop, the jaggies suddenly
disappeared!
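
For the circular f-stop the samples just have to land on a disc; one
way to get them, by rejection against the bounding square, is sketched
below (again simplified, and note it gives up the grid stratification
of the square sampler):

    #include <stdlib.h>

    /* n jittered sample points on a circular f-stop of radius R,
     * by rejection against the bounding square. */
    void disc_lens_samples(int n, double R, double *u, double *v)
    {
        int k;
        double x, y;

        for (k = 0; k < n; k++) {
            do {
                x = R * (2.0 * drand48() - 1.0);
                y = R * (2.0 * drand48() - 1.0);
            } while (x * x + y * y > R * R);
            u[k] = x;
            v[k] = y;
        }
    }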

I browsed through the code of my square f-stop, but I couldn't find
any bugs, and I also couldn't find a reasonable explanation for the
appearance of the jaggies. I figure it might have something to do with
the square not being rotationally symmetric the way the circle is, but
that's as far as I get. I would like to know if someone has run into
the same problem, and whether somebody has a good explanation for
it ...

Many thanks in advance, Marinko Laban.