[net.graphics] Wave-front ray tracing?

aglew@ccvaxa.UUCP (03/02/86)

Humble query, but first, an apology: I'm not into graphics. I've taken a
course in computer graphics, and I do like computational geometry, but
I'm not interested enough in graphics to read the journals on a regular
basis.

Now, background: one of my friends is into graphics. He keeps going on
about ray-tracing and similar optical techniques that require humongous
amounts of computation power - the reasons why Lucasfilm buys Crays and
Abel Associates buys Goulds (modest plug here).

Query: might not wave-front type computations be more efficient? Take a
wave-front from your light source (spherical, cylindrical, or planar),
start it propagating, and compute its first intersection with an object.
Parts of the wave-front will be modified - light "densities" will be changing
all the time. You might even consider polarization effects. Of course,
the more reflections/refractions you have, the more wave-fronts you'll have
to handle, but some will fade away eventually.
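
Concretely, the loop I'm imagining looks something like this (a rough,
untested sketch in Python; the sphere-only scene and all the names are
invented just to fix ideas):

    import math
    from dataclasses import dataclass

    # A wave-front is approximated by many samples, each carrying a piece
    # of the front: a position, a unit direction, and a light "density".
    @dataclass
    class Sample:
        pos: tuple
        dir: tuple
        density: float

    def hit_sphere(pos, dir, center, radius):
        # Distance along dir to the nearest sphere intersection, or None.
        oc = tuple(p - c for p, c in zip(pos, center))
        b = 2.0 * sum(d * o for d, o in zip(dir, oc))
        c = sum(o * o for o in oc) - radius * radius
        disc = b * b - 4.0 * c
        if disc < 0.0:
            return None
        t = (-b - math.sqrt(disc)) / 2.0
        return t if t > 1e-6 else None

    def propagate(front, spheres, min_density=0.01):
        # Advance every piece of the front to its first obstacle, spawn
        # a reflected sub-front there, and drop pieces that have faded.
        while front:
            next_front = []
            for s in front:
                nearest = None
                for center, radius in spheres:
                    t = hit_sphere(s.pos, s.dir, center, radius)
                    if t is not None and (nearest is None or t < nearest[0]):
                        nearest = (t, center, radius)
                if nearest is None:
                    continue                 # this piece escapes the scene
                t, center, radius = nearest
                hit = tuple(p + t * d for p, d in zip(s.pos, s.dir))
                n = tuple((h - c) / radius for h, c in zip(hit, center))
                dn = sum(d * k for d, k in zip(s.dir, n))
                refl = tuple(d - 2.0 * dn * k for d, k in zip(s.dir, n))
                if s.density * 0.5 >= min_density:   # crude absorption
                    next_front.append(Sample(hit, refl, s.density * 0.5))
            front = next_front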

Has this been tried as an alternative to ray-tracing? Any references?
Any overwhelming reasons why it won't work? The only thing I can see is that
it is hard to work backwards from eye to light source.

rhbartels@watcgl.UUCP (Richard Bartels) (03/04/86)

In article <13300001@ccvaxa> aglew@ccvaxa.UUCP writes:
>
>Query: might not wave-front type computations be more efficient?
>.....
>Has this been tried as an alternative to ray-tracing? Any references?
>

One reference is: H. P. Moravec, "3D Graphics and the Wave Theory",
                  Computer Graphics, Vol. 15, No. 3, Aug. 1981,
                  pp. 289-296 (SIGGRAPH '81 Proceedings)

The result seems to be that you get pictures 1000 times worse for
10,000 times the complexity and effort, as witness the fact that
there don't appear to have been any follow-up articles.

As a side remark: I have a friend who works from time to time in
the optics industry designing lenses.  They use ray-tracing, mainly,
rather than wave-front calculations.  There must be a reason.
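
The per-surface step in that kind of ray-tracing is just Snell's law
applied at each interface.  For concreteness, here is a toy vector form
in Python (my own illustration; the names are invented, and total
internal reflection just returns None):

    import math

    def refract(d, n, eta):
        # Refract unit direction d at a surface with unit normal n
        # (n points back toward the incoming ray); eta = n1/n2.
        cos_i = -sum(a * b for a, b in zip(d, n))
        sin2_t = eta * eta * (1.0 - cos_i * cos_i)
        if sin2_t > 1.0:
            return None                    # total internal reflection
        cos_t = math.sqrt(1.0 - sin2_t)
        return tuple(eta * a + (eta * cos_i - cos_t) * b
                     for a, b in zip(d, n))

A designer just iterates that through each surface of the lens, which
is far cheaper than tracking a whole wave-front.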

-Richard Bartels

td@alice.UUCP (Tom Duff) (03/04/86)

There's a paper in the 1981 SIGGRAPH proceedings by Hans Moravec of
Carnegie-Mellon University about wave-tracing.  Moravec is interested
in what we're going to do with the enormous numbers of teraflops that
will be available to us in twenty years or so.  Wave-tracing involves
large numbers of 3-dimensional Fourier transforms, all of which must
be done at a resolution comparable to the wavelength of the light
you're dealing with.  Moravec produced some tiny (64x64?) images that
took many hours (around 30-40?) of KL-10 time, and that looked
like nothing at all -- you could decipher the objects in the scene
if he told you what they were, but mostly you saw diffraction fringes
-- these images were, after all, only about 32 wavelengths wide.

There's no reason why it won't work, except that producing a reasonable
image would consume all the CPU cycles produced since the dawn of history.
Tracing backwards from eye to light source is no problem -- you don't do
it that way.  Moravec includes in his scene a pinhole camera object and
displays the photons that hit the camera's screen.
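
To put rough numbers on that (my own back-of-the-envelope figures, not
Moravec's):

    # Grid needed to represent a scene at the resolution of visible
    # light, as the wave theory requires.  Numbers are illustrative.
    wavelength = 0.5e-6          # metres: green light
    scene_size = 0.1             # metres: a modest 10 cm scene
    per_axis = scene_size / wavelength       # 200,000 samples per axis
    print(f"{per_axis:.0f} samples/axis, {per_axis ** 3:.1e} cells")

A 10 cm scene needs about 2e5 samples per axis -- 8e15 grid cells --
before you transform anything even once.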

pearce@calgary.UUCP (Andrew Pearce) (03/04/86)

In article <13300001@ccvaxa>, aglew@ccvaxa.UUCP writes:
...
> Query: might not wave-front type computations be more efficient? Take a
> wave-front from your light source (spherical, cylindrical, or planar),
> start it propagating, and compute its first intersection with an object.
> Parts of the wave-front will be modified - light "densities" will be changing
> all the time.
...
> Has this been tried as an alternative to ray-tracing? Any references?
> Any overwhelming reasons why it won't work? The only thing I can see is that
> it is hard to work backwards from eye to light source.

Yes, this has been thought of and versions have been implemented.  The two
that come to mind right away are:

J. Amanatides, "Ray Tracing with Cones", Computer Graphics, Vol. 18, No. 3,
	July 1984, pp. 129-136 (SIGGRAPH '84 Proceedings)

P. Heckbert and P. Hanrahan, "Beam Tracing Polygonal Objects", Computer
	Graphics, Vol. 18, No. 3, July 1984, pp. 119-128 (SIGGRAPH '84
	Proceedings)

They use the "wave-front" idea and break the wave into sub-waves when an object
is hit.  It all works fine, and samples texture maps *very* accurately even at
extreme angles or distances, but problems crop up when the beam or cone has
to be refracted through an object.

This criticism is more valid of beam tracing than of cone tracing; cone
tracing is more an alternative to super-sampling for anti-aliasing, and a
way to avoid the granularity of regular ray-tracing for small objects at
great distances from the viewpoint.
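
To illustrate the cone version of the idea (my own sketch, not code
from either paper; the names are invented and only spheres are handled):

    import math

    def cone_sphere_overlap(origin, axis, half_angle, center, radius):
        # Approximate test: project the sphere center onto the cone
        # axis, then compare its off-axis distance with the cone's
        # radius at that depth.  Returns (depth, coverage) or None.
        to_c = tuple(c - o for c, o in zip(center, origin))
        t = sum(a * v for a, v in zip(axis, to_c))   # depth along axis
        if t + radius < 0.0:
            return None                # sphere entirely behind the apex
        closest = tuple(o + t * a for o, a in zip(origin, axis))
        off_axis = math.dist(closest, center)
        cone_radius = max(t, 0.0) * math.tan(half_angle)
        if off_axis > cone_radius + radius:
            return None                # no overlap at all
        # Fractional coverage (linear approximation) feeds anti-aliasing.
        coverage = min(1.0, (cone_radius + radius - off_axis)
                            / (2.0 * radius))
        return t, coverage

The fractional coverage is what replaces super-sampling: one cone per
pixel instead of a bundle of rays.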

Some work is also being done on using this type of idea for simulating and
testing room acoustics (using multiple sound wave fronts rather than light).
I think this is being done at the U of British Columbia.

Andrew Pearce   Dept. Computer Science
U of Calgary    2500 University Dr.
Calgary, Alberta, Canada
T2N 1N4

	  Usenet: ...{ubc-vision,ihnp4}!alberta!calgary!pearce

steve@bambi.UUCP (Steve Miller) (03/05/86)

> Wave-tracing involves
> large numbers of 3-dimensional Fourier transforms

My undergraduate thesis involved predicting the amplitude functions
of plane waves scattered from regular dielectric solids.  By solving
a set of equations describing boundary constraints (obtainable from
Maxwell's equations), it was possible to predict the scattered wave
amplitude as a function of angle around the scattering center.  One
might extend this idea to apply the same boundary constraints to objects
scattering the scattered wave, and so forth.  Whether this is sensible
for objects much larger than light wavelengths (my objects were around
1-10 microns) is an open question.  But it did work for small objects.
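
For the degenerate case of a flat boundary, those constraints reduce to
the familiar Fresnel amplitude coefficients.  A small Python version
(flat surface only - the finite solids of the thesis need the full
angular treatment, and the phase shift at total internal reflection is
ignored here):

    import math

    def fresnel(n1, n2, theta_i):
        # Reflected amplitudes (r_s, r_p) for a plane wave hitting a
        # flat dielectric boundary at angle theta_i (radians).
        sin_t = n1 * math.sin(theta_i) / n2   # Snell's law
        if abs(sin_t) > 1.0:
            return 1.0, 1.0                   # total internal reflection
        theta_t = math.asin(sin_t)
        ci, ct = math.cos(theta_i), math.cos(theta_t)
        r_s = (n1 * ci - n2 * ct) / (n1 * ci + n2 * ct)
        r_p = (n2 * ci - n1 * ct) / (n2 * ci + n1 * ct)
        return r_s, r_p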

	-Steve Miller ihnp4!bambi!steve

chapman@sfucmpt.uucp (John Chapman) (03/05/86)

There was a SIGGRAPH conference paper a few years ago about
this (sorry, can't remember the vol/issue #).  As I remember it,
it sounded like you would need an IBM GF11 (11 gigaflops)
just to get going with this approach.

asw@rlvd.UUCP (Antony Williams) (03/06/86)

In article <13300001@ccvaxa> aglew@ccvaxa.UUCP writes:
>
>Query: might not wave-front type computations be more efficient? Take a
>wave-front from your light source (spherical, cylindrical, or planar),
>start it propagating, and compute its first intersection with an object.
>Parts of the wave-front will be modified - light "densities" will be changing
>all the time. You might even consider polarization effects. Of course,
>the more reflections/refractions you have, the more wave-fronts you'll have
>to handle, but some will fade away eventually.
>
>Has this been tried as an alternative to ray-tracing? Any references?
>Any overwhelming reasons why it won't work? The only thing I can see is that
>it is hard to work backwards from eye to light source.

See "3-D graphics and the Wave Theory", Hans P Moravec, CMU
    in Computer Graphics, Vol 15, No 3, August 1981.
    Proceedings of SIGGRAPH '81.

Basically, it doesn't work too well.  I have always suspected that it has to
do with the relative size of objects and the wavelength of the light wave.
The wave front must be represented at spatial resolution about equal
to the wavelength, which either gives massive arrays or small objects.
In the latter case, diffraction effects are not the same as in the real world,
with large objects and small wavelengths.
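
The "small objects" horn of that dilemma is easy to put in numbers (my
own illustration):

    # Fix a generous array and see how much scene it can hold at
    # wavelength resolution (0.5 micron per cell).
    cells_per_axis = 1024
    wavelength = 0.5e-6                      # metres
    print(f"{cells_per_axis * wavelength * 1e3:.2f} mm per side")

Even a billion-cell array covers barely half a millimetre of scene, so
everything in it diffracts like the microscopic object it really is.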

-- 
---------------------------------------------------------------------------
Tony Williams					|Informatics Division
UK JANET:	asw@uk.ac.rl.vd			|Rutherford Appleton Lab
Usenet:		{... | mcvax}!ukc!rlvd!asw	|Chilton, Didcot
ARPAnet:	asw%rl.vd@ucl-cs.arpa		|Oxon OX11 0QX, UK

cheryl@batcomputer.UUCP (03/08/86)

In article <228@vaxb.calgary.UUCP> pearce@calgary.UUCP (Andrew Pearce) writes:
>Some work is also being done on using this type of idea for simulating and
>testing room acoustics (using multiple sound wave fronts rather than light).
>I think this is being done at the U of British Columbia.

	And also for synthetic seismic studies -- the seismology book by
	Aki & Richards goes into great & gory mathematical detail;
	YAAWE (Yet Another Application of the Wave Equation).