[comp.graphics] Combining radiosity and ray-tracing

c184-bp@cube.Berkeley.edu (Rick Braumoeller) (06/09/91)

I am interested in combining a radiosity project that I've recently
written with a friend's ray tracer.  I'm stumped on a couple of
questions -- hopefully, one of you out there with the time can help me out.

1)  It seems to me that ray tracing is very happy to have light
sources separate from the objects in the scene, whereas radiosity
insists that they are objects in the scene.  Personally, I wouldn't
mind treating them differently from the rest of the objects in the
scene.  This would, however, throw a wrench into the radiosity part of
the program.  Any hints?

2)  Ray tracing is also very happy to use arbitrary objects, as long
as you can write functions to do intersection, normals, etc. with that
object.  But radiosity -- how about a sphere?  Do you tessellate it to
break it into patch/subpatches, and then render from each tessellated
face to give out the sphere's energy?  Since radiosity needs to have
patches, I don't see a "good" way to model something like a sphere
using that method.

3)  As far as I can tell, adaptive subdivision during the radiosity
pass is for the benefit of the viewer only -- it does not affect how
MUCH energy moves between surfaces, only where on the surface it goes.
Since I will be combining these two methods, I will, essentially, have
a view-DEpendent rendering.  Should I take advantage of the fact that
I know where the eyepoint is in the radiosity pass, and use that so
that I only subdivide things that I will wind up seeing in the ray
tracing pass(es)?

Thanks in advance for any tips, pointers, or casual observations --
email is appreciated, but not required.

- Rick Braumoeller
University of California, Berkeley

jet@karazm.math.uh.edu (J Eric Townsend) (06/10/91)

In article <C184-BP.91Jun8222718@cube.Berkeley.edu> c184-bp@cube.Berkeley.edu (Rick Braumoeller) writes:
>1)  It seems to me that ray tracing is very happy to have light
>sources separate from the objects in the scene, whereas radiosity

Depends on how you write the raytracer.  In my ongoing project,
the only difference between light-emitting and "regular" objects
is which structure they're stored in.  Everything gets tested the
same way, however...  (i.e., I can have a light-emitting cone.)
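
Roughly along these lines -- the types and names below are only an
illustration, not actual code from any particular raytracer.  Emitters
can still sit on their own list (handy for direct lighting), but they
go through exactly the same intersection routine as everything else:

typedef struct { double x, y, z; } Vec3;

typedef struct Object {
    void  *geom;                           /* shape-specific data      */
    Vec3   reflectance;                    /* diffuse color            */
    Vec3   emission;                       /* (0,0,0) for non-emitters */
    double (*intersect)(const struct Object *, Vec3 orig, Vec3 dir);
    struct Object *next;
} Object;

/* One routine walks any list of objects the same way; whether an
 * object emits light never matters to the intersection test.         */
Object *closest_hit(Object *list, Vec3 orig, Vec3 dir, double *t_hit)
{
    Object *o, *best = 0;
    double  t, t_best = 1e30;

    for (o = list; o != 0; o = o->next) {
        t = o->intersect(o, orig, dir);
        if (t > 1e-6 && t < t_best) {
            t_best = t;
            best   = o;
        }
    }
    *t_hit = t_best;
    return best;
}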


--
J. Eric Townsend - jet@uh.edu - bitnet: jet@UHOU - vox: (713) 749-2126
Skate UNIX! (curb fault: skater dumped)

   --  If you're hacking PowerGloves and Amigas, drop me a line. --

atc@cs.utexas.edu (Alvin T. Campbell III) (06/10/91)

In article <C184-BP.91Jun8222718@cube.Berkeley.edu> c184-bp@cube.Berkeley.edu (Rick Braumoeller) writes:
>I am interested in combining a radiosity project that I've recently
>written with a friend's ray tracer.  I'm stumped on a couple of
>questions -- hopefully, one of you out there with the time can help me out.
>
>1)  It seems to me that ray tracing is very happy to have light
>sources separate from the objects in the scene, whereas radiosity
>insists that they are objects in the scene.  Personally, I wouldn't
>mind treating them differently from the rest of the objects in the
>scene.  This would, however, throw a wrench into the radiosity part of
>the program.  Any hints?
>

Radiosity methods do not necessarily require the light sources to be
among the list of polygons to be rendered.  You do not state 
which algorithm you have implemented, but I would guess it
to be the full matrix hemicube method (Cohen, SIGGRAPH '85).
Full matrix methods require computing all form factors first, solving the 
simultaneous linear equations directly, and then rendering the scene.
These early methods do assume the light sources are scene polygons.
However, progressive refinement approaches, introduced by Chen at SIGGRAPH '88,
have lifted this requirement.  It is my impression that most subsequent
work has followed the progressive refinement philosophy.  The paper by
Wallace et al in SIGGRAPH '89, in particular, went to great efforts
to add to radiosity algorithms some of the effects available with other 
rendering approaches (point light sources, for example).
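
The shooting loop at the heart of progressive refinement is short
enough to sketch.  What follows is only a rough illustration in C --
the Patch fields and the form-factor hook are placeholders, not code
from any of the papers above.  In this minimal version the emitters
are still patches with nonzero initial unshot energy, but since each
step just shoots energy from one source, it is easy to substitute
something else (say, a point light handled by ray casting, as in
Wallace '89) for the first few shots:

#include <stddef.h>

typedef struct {
    double reflect;     /* diffuse reflectance (one color band)       */
    double emitted;     /* self-emitted radiosity; nonzero for lights */
    double radiosity;   /* current estimate                           */
    double unshot;      /* energy not yet distributed                 */
    double area;
} Patch;

/* ff(from, to) must return the form factor F(from->to); a real
 * program would estimate it with a hemicube or by casting rays.      */
void progressive_radiosity(Patch *p, size_t n, int steps,
                           double (*ff)(const Patch *, const Patch *))
{
    size_t i, j, s;
    int    step;
    double dB;

    for (i = 0; i < n; i++) {           /* lights start with unshot energy */
        p[i].radiosity = p[i].emitted;
        p[i].unshot    = p[i].emitted;
    }

    for (step = 0; step < steps; step++) {
        s = 0;                          /* pick the patch with the most    */
        for (i = 1; i < n; i++)         /* unshot power (radiosity * area) */
            if (p[i].unshot * p[i].area > p[s].unshot * p[s].area)
                s = i;

        for (j = 0; j < n; j++) {       /* shoot its energy everywhere     */
            if (j == s)
                continue;
            dB = p[j].reflect * p[s].unshot * ff(&p[s], &p[j])
                 * p[s].area / p[j].area;
            p[j].radiosity += dB;
            p[j].unshot    += dB;
        }
        p[s].unshot = 0.0;
    }
}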

>2)  Ray tracing is also very happy to use arbitrary objects, as long
>as you can write functions to do intersection, normals, etc. with that
>object.  But radiosity -- how about a sphere?  Do you tessellate it to
>break it into patch/subpatches, and then render from each tessellated
>face to give out the sphere's energy?  Since radiosity needs to have
>patches, I don't see a "good" way to model something like a sphere
>using that method.
>

This depends on what algorithm you are using.  If you consider distributed
ray tracing to be a radiosity algorithm, no explicit tessellation is needed.
For hemicube-based methods, you will need to tessellate each object, but
only so that the light sources can be reasonably approximated by the
hemicubes.  Computing the form factors requires no tessellation
at all -- if you can scan-convert an object, its form factor can
be calculated.  However, you might want to subdivide objects so 
that there is some intensity gradation visible in your images, rather
than having each object colored throughout with a very accurate 
average intensity.  This leads to ...
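
(As an aside, for the sphere itself a plain latitude/longitude
tessellation is usually good enough to provide patches you can refine
later.  A rough C sketch -- the Vec3 type and the emit_quad() callback
are made-up placeholders, and the quads at the poles degenerate into
triangles, which most radiosity code tolerates:

#include <math.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

typedef struct { double x, y, z; } Vec3;

static Vec3 sphere_point(double cx, double cy, double cz, double r,
                         double theta, double phi)
{
    Vec3 p;
    p.x = cx + r * sin(theta) * cos(phi);
    p.y = cy + r * sin(theta) * sin(phi);
    p.z = cz + r * cos(theta);
    return p;
}

/* Calls emit_quad() once per latitude/longitude patch.               */
void tessellate_sphere(double cx, double cy, double cz, double r,
                       int n_theta, int n_phi,
                       void (*emit_quad)(Vec3 a, Vec3 b, Vec3 c, Vec3 d))
{
    int i, j;
    for (i = 0; i < n_theta; i++) {
        double t0 = M_PI * i       / n_theta;
        double t1 = M_PI * (i + 1) / n_theta;
        for (j = 0; j < n_phi; j++) {
            double p0 = 2.0 * M_PI * j       / n_phi;
            double p1 = 2.0 * M_PI * (j + 1) / n_phi;
            emit_quad(sphere_point(cx, cy, cz, r, t0, p0),
                      sphere_point(cx, cy, cz, r, t1, p0),
                      sphere_point(cx, cy, cz, r, t1, p1),
                      sphere_point(cx, cy, cz, r, t0, p1));
        }
    }
}

The caller simply stores each quad in whatever patch/element structure
its radiosity pass uses.)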

>3)  As far as I can tell, adaptive subdivision during the radiosity
>pass is for the benefit of the viewer only -- it does not affect how
>MUCH energy moves between surfaces, only where on the surface it goes.
>Since I will be combining these two methods, I will, essentially, have
>a view-DEpendent rendering.  Should I take advantage of the fact that
>I know where the eyepoint is in the radiosity pass, and use that so
>that I only subdivide things that I will wind up seeing in the ray
>tracing pass(es)?
>

Dividing the scene into a good mesh is essential to getting an accurate 
solution.  For hemicube-based methods, as an example, all the elements
used as light sources must be small relative to the rest of the scene.
One very important factor is that in all radiosity methods, the intensity
throughout a patch must be nearly constant.  Notice that the radiosity 
equations all assume light sources of constant intensity. If there is a great 
deal of intensity variation, much accuracy is lost.  Adaptive refinement
(Cohen, CG&A '86) or (Campbell, SIGGRAPH '90) is a reasonable way to 
deal with the problem, since the only alternative is to make all patches
very small.
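
To make the test concrete: the discrete radiosity equation,
B_i = E_i + rho_i * SUM_j F_ij B_j, treats B as constant over each
patch, so the usual adaptive criterion is to split any element whose
corner intensities differ by more than some tolerance, and recurse.
A hedged sketch in C -- the Element type and the corner_b()/split4()
hooks are hypothetical, to be supplied by your own mesh code:

typedef struct Element {
    struct Element *child[4];   /* all NULL for a leaf element        */
    /* geometry, radiosity samples at the corners, ...                */
} Element;

/* corner_b(e, c) returns the radiosity at corner c (0..3) of element
 * e; split4(e) creates e's four children.  Both are placeholders.    */
void refine(Element *e,
            double (*corner_b)(const Element *, int),
            void   (*split4)(Element *),
            double  tolerance, int max_depth)
{
    double lo, hi, b;
    int c;

    if (max_depth <= 0 || e->child[0] != 0)
        return;

    lo = hi = corner_b(e, 0);
    for (c = 1; c < 4; c++) {
        b = corner_b(e, c);
        if (b < lo) lo = b;
        if (b > hi) hi = b;
    }

    /* A large spread across one element means the constant-radiosity
     * assumption is poor there: split it and test the children.      */
    if (hi - lo > tolerance) {
        split4(e);
        for (c = 0; c < 4; c++)
            refine(e->child[c], corner_b, split4, tolerance, max_depth - 1);
    }
}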

The concept of using the viewing position as a criterion for mesh generation
is not new.  Both Buckalew (SIGGRAPH '89) and Heckbert (SIGGRAPH '90) 
have incorporated this into their global illumination algorithms.

>Thanks in advance for any tips, pointers, or casual observations --

I am glad to be of service.

-- 
				A. T. Campbell, III
				CS Department, University of Texas
				atc@cs.utexas.edu