[comp.graphics] Village Idiot asks about Ray Tracing

ewhac@well.UUCP (Leo 'Bols Ewhac' Schwab) (12/03/88)

[ #include <witty_saying.h> ]

	I was doing a few gedanken experiments with raytracing, and came up
with a few questions.  Realize that I've never written a raytracer.

	Suppose I have an object, a light source, and a flat surface set up
as a perfect mirror.  Suppose further that I have a thing between the object
and light source preventing direct illumination of the object.  Suppose
further still that the "mirror" is set up to reflect the light from the
light source to the object.  Question:  Will the object be illuminated?
Does it depend on whose software I'm using?

	Suppose I have a flat surface, a light source, and an object in the
shape of a convex lens above the surface under the light.  Suppose further
that the object is set up to be perfectly clear, and refracts light like
glass.  Question:  Will the light beneath the lens object be intensely
focused on the surface below, just like a real lens?

	The point of the above two questions is to find out if, in general,
raytracers handle illumination from light bounced off of or refracted
through other objects.

	Finally, has anyone come up with a raytracer whose refraction model
takes into account the varying indices of refraction of different light
frequencies?  In other words, can I find a raytracer that, when looking
through a prism obliquely at a light source, will show me a rainbow?

	Something tells me that all three questions are rather hard.

_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_
Leo L. Schwab -- The Guy in The Cape	INET: well!ewhac@ucbvax.Berkeley.EDU
 \_ -_		Recumbent Bikes:	UUCP: pacbell > !{well,unicom}!ewhac
O----^o	      The Only Way To Fly.	      hplabs / (pronounced "AE-wack")
"Work FOR?  I don't work FOR anybody!  I'm just having fun."  -- The Doctor

ranjit@eniac.seas.upenn.edu (Ranjit Bhatnagar) (12/04/88)

Leo Schwab (ewhac@well.uucp) writes:
>	I was doing a few gedanken experiments with raytracing, and came up
>with a few questions.  Realize that I've never written a raytracer.

Well, I've never written one either, so ignore everything I say.
>
>[Will a mirror shine reflected light on other objects?]
>Does it depend on whose software I'm using?

It depends on the software - most ray tracers do NOT model reflected light,
because of the tremendous increase in complexity.  The standard shadow
model casts a ray from the object's surface to each light source, to
see if there's an object in the way.  To test for reflected light,
you have to cast rays in EVERY DIRECTION, just in case there's a mirror
in that direction that might be reflecting light on this part of
the object surface.  Even if you optimize it to cast rays only
at known mirrors, you still need to cast an infinite number towards
each mirror.  

A nice application of stochastic techniques is to cast a moderate
number of rays in RANDOM directions, hoping that they will hit a mirror
if there's one to hit.  If the jitter is done well, then the effect
will not be bad.
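The standard shadow-ray test described above is easy to sketch. Here is a minimal illustration in Python (a hypothetical sphere-only scene; every name is invented for the example): cast a ray from the surface point toward the light, and any hit closer than the light puts the point in shadow.

```python
import math

def ray_sphere_hit(origin, direction, center, radius):
    """Nearest positive hit distance of a ray with a sphere, or None.
    `direction` is assumed to be a unit vector, so the quadratic's
    leading coefficient is 1."""
    oc = [o - c for o, c in zip(origin, center)]
    b = 2.0 * sum(d * e for d, e in zip(direction, oc))
    c = sum(e * e for e in oc) - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0.0:
        return None
    for t in ((-b - math.sqrt(disc)) / 2.0, (-b + math.sqrt(disc)) / 2.0):
        if t > 1e-6:          # ignore hits at or behind the ray origin
            return t
    return None

def light_visible(point, light_pos, spheres):
    """Shadow ray: True if nothing in `spheres` blocks the segment
    from `point` to `light_pos`."""
    to_light = [l - p for l, p in zip(light_pos, point)]
    dist = math.sqrt(sum(x * x for x in to_light))
    direction = [x / dist for x in to_light]
    for center, radius in spheres:
        t = ray_sphere_hit(point, direction, center, radius)
        if t is not None and t < dist:
            return False
    return True
```

Note this only answers direct visibility of the light; discovering light that arrives via a mirror would require test rays in every direction, which is where the stochastic sampling above comes in.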

Radiosity models take a completely different approach to this problem.
See e.g. "A Radiosity Solution for Complex Environments" - Cohen and
Greenberg, SIGGRAPH 85, or "A Radiosity Method for Non-Diffuse Environments"
- Immel and Cohen, SIGGRAPH 86.  These guys don't discuss mirrors,
but you can sort of see how the radiosity method could handle them.
Radiosity worries about the general problem of light reflected off
of surfaces in the environment - in the real world, light doesn't
just come straight from the bulb!
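For a sense of what those radiosity papers solve: each patch's radiosity B_i is its emission plus its reflectance times the light gathered from every other patch, B_i = E_i + rho_i * sum_j F_ij * B_j. A toy sketch of the gathering iteration in Python (the two-patch form factors below are invented; computing real F_ij is the expensive part):

```python
def solve_radiosity(emission, reflectance, form_factors, n_iters=100):
    """Iteratively solve B_i = E_i + rho_i * sum_j F_ij * B_j,
    the classic radiosity system over diffuse patches."""
    n = len(emission)
    b = list(emission)
    for _ in range(n_iters):
        b = [emission[i] + reflectance[i] *
             sum(form_factors[i][j] * b[j] for j in range(n))
             for i in range(n)]
    return b

# two facing patches: patch 0 emits, patch 1 glows only by bounced light
b = solve_radiosity([1.0, 0.0], [0.5, 0.5], [[0.0, 1.0], [1.0, 0.0]])
```

Patch 1 converges to B = 2/3 purely from interreflection, which is exactly the "light doesn't just come straight from the bulb" effect.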
>
>	Suppose I have a flat surface, a light source, and an object in the
>shape of a convex lens above the surface under the light.  Suppose further
>that the object is set up to be perfectly clear, and refracts light like
>glass.  Question:  Will the light beneath the lens object be intensely
>focused on the surface below, just like a real lens?

Answer is the same as above - it could be done by casting zillions of rays, 
but it's computationally expensive.  Stochastic sampling can certainly help,
though.
>
>	Finally, has anyone come up with a raytracer whose refraction model
>takes into account the varying indices of refraction of different light
>frequencies?  In other words, can I find a raytracer that, when looking
>through a prism obliquely at a light source, will show me a rainbow?
>
This is even nastier than the first two problems, because very few
rendering systems really model light like it is in the real world.
We don't usually think about it, because the RGB approximation LOOKS
the same to us as real light, but if you wanted to get rainbows,
you would have to take into account that visible light is a continuum
of wavelengths that can be mixed arbitrarily.

In the graphics world, the sun is really a red sun, a green sun, and
a blue sun that happen to occupy the same position - if you were to
look at it through a simulated prism, you would get not a rainbow,
but a red spot, a green spot, and a blue spot.  I think work has been
done in rendering that models the entire visible spectrum instead of
the RGB approximation - you can imagine that that would be really
nasty, because every color vector, instead of being described by
three numbers, would be described by a continuum (perhaps approximated
as 500 numbers or something like that).  So, while wavelength-based
refraction could be modeled by extending current refraction techniques,
nobody does it because it would reveal the artificiality of the RGB
approximation.

Come to think of it, it's even worse than that!  You don't know the
color of a ray until it has "cashed out" completely - all its descendants
have been resolved.  But you can't resolve the ray until you know its
color, because the color will affect its trajectory!  Based on this,
I would say that frequency-based refraction is VERY difficult using
eye-based ray-tracing.  I expect one would cast a sort of "average"
ray, based on a single color, and then use relaxation techniques to
bend and twist the ray based on its color, and change its color
based on the trajectory, until it reaches a stable state.  Unfortunately,
in any reasonably complex scene, there's probably no numerical
technique that could reliably find the resultant ray even from a
very well-chosen "guess", because of all the discontinuities and
nonlinearities involved.

The other two problems are soluble, though, and stochastic (also
called "distributed") ray tracing is a really neat way to get good
approximations to that and other fun problems (it turns out to
be applicable to simulation of motion blur, out-of-focus cameras,
lens depth of field, fuzzy reflections (like in a formica
tabletop), and all kinds of amazing stuff).  A nice introduction
is "Stochastic Sampling in Computer Graphics" - Rob Cook, in
ACM Transactions on Graphics, v5#1, Jan 86.


	- Ranjit


   
"Trespassers w"   ranjit@eniac.seas.upenn.edu	mailrus!eecae!netnews!eniac!...
       -- I'm not a drug enforcement agent, but I play one for TV --
 Giant SDI lasers burn 1,000 points of light in Willie Horton - Dave Barry

cnsy@vax5.CIT.CORNELL.EDU (12/05/88)

>>[Will a mirror shine reflected light on other objects?]
>>Does it depend on whose software I'm using?
>
>Radiosity models take a completely different approach to this problem.
>See e.g. "A Radiosity Solution for Complex Environments" - Cohen and
>Greenberg, SIGGRAPH 85, or "A Radiosity Method for Non-Diffuse Environments"
>- Immel and Cohen, SIGGRAPH 86.  These guys don't discuss mirrors,

One radiosity article that does show the effect of mirrors is in the SIGGRAPH
87 Proceedings, "A Two-Pass Solution to the Rendering Equation: A Synthesis of
Ray Tracing and Radiosity Methods" by Wallace, Cohen & Greenberg.  See page
319, image 10c.  The "mirror world" technique as it applies to radiosity is
touched upon on page 315.  For detailed info, get Holly Rushmeier's thesis
(reference 21 of the article).  I believe she may have an article about this
topic somewhere soon (TOG, maybe?).

Eric Haines, 3D/Eye Inc, ...!hplabs!hpfcla!hpfcrs!eye!erich

chris@spock (Chris Ott) (12/06/88)

ewhac@well.UUCP (Leo 'Bols Ewhac' Schwab) writes:

>	I was doing a few gedanken experiments with raytracing, and came up
> with a few questions.  Realize that I've never written a raytracer.

     I'm writing a ray tracer right now and have some experience.

>	Suppose I have an object, a light source, and a flat surface set up
> as a perfect mirror.  Suppose further that I have a thing between the object
> and light source preventing direct illumination of the object.  Suppose
> further still that the "mirror" is set up to reflect the light from the
> light source to the object.  Question:  Will the object be illuminated?
> Does it depend on whose software I'm using?

     Software that does true ray tracing would definitely illuminate the
object. For example, the ray could be sent from the eye through a specific
pixel on the screen to the object, bounce off the object into the mirror,
and finally, off the mirror into a light source. Then, given the color of
the object and the light source, the pixel's color can be computed. At
least mine works this way. The way my code looks, it seems as if this
would be intrinsically part of ray tracing, i.e. I didn't have to make
a special case for mirrors.
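The recursion Chris describes — the reflected ray traced exactly like a primary ray — can be sketched as follows. This is a hypothetical minimal tracer (Python; `scene.hit`, the `material` fields, and the ray representation are all invented names, not anyone's actual code), and it only follows rays from the eye's side; whether that answers Leo's question is exactly what the follow-ups dispute.

```python
def reflect(d, n):
    """Mirror the direction d about the unit surface normal n."""
    dot = sum(a * b for a, b in zip(d, n))
    return [a - 2.0 * dot * b for a, b in zip(d, n)]

def trace(origin, direction, scene, depth=0, max_depth=5):
    """Eye-based recursive trace: direct shading plus a recursive
    call for the mirror term.  The reflected ray is traced exactly
    like a primary ray -- no special case for mirrors."""
    if depth > max_depth:
        return (0.0, 0.0, 0.0)
    hit = scene.hit(origin, direction)
    if hit is None:
        return scene.background
    color = hit.material.shade_direct(hit, scene)  # shadow rays to lights
    if hit.material.mirror > 0.0:
        r = trace(hit.point, reflect(direction, hit.normal),
                  scene, depth + 1, max_depth)
        color = tuple(c + hit.material.mirror * rc
                      for c, rc in zip(color, r))
    return color
```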

>	Suppose I have a flat surface, a light source, and an object in the
> shape of a convex lens above the surface under the light.  Suppose further
> that the object is set up to be perfectly clear, and refracts light like
> glass.  Question:  Will the light beneath the lens object be intensely
> focused on the surface below, just like a real lens?

     Again, true ray tracing would definitely produce this result.

Then ranjit@eniac.seas.upenn.edu (Ranjit Bhatnagar) writes:

# It depends on the software - most ray tracers do NOT model reflected light,
# because of the tremendous increase in complexity.  The standard shadow
# model casts a ray from the object's surface to each light source, to
# see if there's an object in the way.  To test for reflected light,
# you have to cast rays in EVERY DIRECTION, just in case there's a mirror
# in that direction that might be reflecting light on this part of
# the object surface.  Even if you optimize it to cast rays only
# at known mirrors, you still need to cast an infinite number towards
# each mirror.  
#
# A nice application of stochastic techniques is to cast a moderate
# number of rays in RANDOM directions, hoping that they will hit a mirror
# if there's one to hit.  If the jitter is done well, then the effect
# will not be bad.

     This does not sound correct to me. My understanding is that the only
rays we are interested in are the ones that the eye can see, so we just
need to cast a ray (more than one, if we want some reasonable anti-aliasing)
from the eye-point through each pixel. At least that's the way I did it
and it gives very realistic results. Any comments?

>	The point of the above two questions is to find out if, in general,
> raytracers handle illumination from light bounced off of or refracted
> through other objects.

     Yes.

>	Finally, has anyone come up with a raytracer whose refraction model
> takes into account the varying indices of refraction of different light
> frequencies?  In other words, can I find a raytracer that, when looking
> through a prism obliquely at a light source, will show me a rainbow?

     This could be tough. The red, green, and blue components of monitors
only simulate the full color spectrum. On a computer, yellow is a mixture
of red and green. In real life, yellow is yellow. You'd have to cast a
large number of rays and use a large amount of computer time to simulate
a full color spectrum. (Ranjit pointed this out in his article and went
into much greater detail).

>	Something tells me that all three questions are rather hard.

Nah. They were easy.

> Leo L. Schwab

#	- Ranjit

Chris Ott


-------------------------------------------------------------------------------
 Chris Ott
 Computational Fluid Mechanics Lab            Just say "Whoa!!" and
 University of Arizona                          vote for Randee!!

 Internet: chris@spock.ame.arizona.edu
 UUCP: {allegra,cmcl2,hao!noao}!arizona!amethyst!spock!chris
-------------------------------------------------------------------------------

markv@uoregon.uoregon.edu (Mark VandeWettering) (12/06/88)

In article <859@amethyst.ma.arizona.edu> chris@spock.ame.arizona.edu (Chris Ott) writes:
}ewhac@well.UUCP (Leo 'Bols Ewhac' Schwab) writes:
}>	I was doing a few gedanken experiments with raytracing, and came up
}> with a few questions.  Realize that I've never written a raytracer.

}     I'm writing a ray tracer right now and have some experience.

}>	Suppose I have an object, a light source, and a flat surface set up
}> as a perfect mirror.  Suppose further that I have a thing between the object
}> and light source preventing direct illumination of the object.  Suppose
}> further still that the "mirror" is set up to reflect the light from the
}> light source to the object.  Question:  Will the object be illuminated?
}> Does it depend on whose software I'm using?

The question is actually trickier than you might believe.  Even
diffusely reflecting objects reflect light, hence at every point on the
surface of an object, the illumination is a function of its own
radiosity and the radiosity of every patch visible AFTER ANY NUMBER OF
BOUNCES.  Hence, there is not a great way of knowing when to quit
tracing rays.  For instance, you hit a triangle.  You need to find the
amount of light energy reaching that triangle from EVERY direction, not
just the direction of the reflected ray.  Techniques of Monte Carlo
integration, distributed ray tracing, and radiosity all attempt to deal
with this fundamental problem.
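The "light energy from EVERY direction" point is an integral over the hemisphere, and Monte Carlo integration estimates it by averaging random directions. A minimal sketch (Python, uniform hemisphere sampling; all names are illustrative):

```python
import math
import random

def sample_hemisphere(normal):
    """Uniform random unit vector in the hemisphere around `normal`
    (rejection sampling from the unit cube)."""
    while True:
        v = [random.uniform(-1.0, 1.0) for _ in range(3)]
        n2 = sum(x * x for x in v)
        if 1e-9 < n2 <= 1.0:
            v = [x / math.sqrt(n2) for x in v]
            if sum(a * b for a, b in zip(v, normal)) > 0.0:
                return v

def estimate_irradiance(normal, incoming_radiance, n_samples=20000):
    """Monte Carlo estimate of the irradiance integral
    E = integral over the hemisphere of L(w) cos(theta) dw.
    Uniform sampling has pdf 1/(2 pi), so each sample is
    weighted by 2 pi."""
    total = 0.0
    for _ in range(n_samples):
        w = sample_hemisphere(normal)
        cos_t = sum(a * b for a, b in zip(w, normal))
        total += incoming_radiance(w) * cos_t * 2.0 * math.pi
    return total / n_samples
```

For a constant incoming radiance of 1 the exact answer is pi; the estimate converges like 1/sqrt(N), which is why the cost of doing this at every hit point is the fundamental problem named above.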

}     Software that does true ray tracing would definitely illuminate the
}object. For example, the ray could be sent from the eye through a specific
}pixel on the screen to the object, bounce off the object into the mirror,
}and finally, off the mirror into a light source. Then, given the color of
}the object and the light source, the pixel's color can be computed. At
}least mine works this way. The way my code looks, it seems as if this
}would be intrinsically part of ray tracing, i.e. I didn't have to make
}a special case for mirrors.

Solving the general problems of "light bleeding" or diffuse
interreflections is very difficult and a current research topic.
Consider the problem rephrased another way.  Given a description of the
objects, surfaces and lights in a scene, you are trying to determine
what your eye would see by observing the scene.  Unfortunately, we
can't model the path of every photon in the scene.  We use the fact that
we are only interested in the minuscule portion of light rays in the
scene that actually "hit" your eye.  To reconstruct the interplay of
light from the eye position seems to be the goal of current state of
raytracing research.

}# It depends on the software - most ray tracers do NOT model reflected light,
}# because of the tremendous increase in complexity.  The standard shadow
}# model casts a ray from the object's surface to each light source, to
}# see if there's an object in the way.  To test for reflected light,
}# you have to cast rays in EVERY DIRECTION, just in case there's a mirror
}# in that direction that might be reflecting light on this part of
}# the object surface.  Even if you optimize it to cast rays only
}# at known mirrors, you still need to cast an infinite number towards
}# each mirror.  
}#
}# A nice application of stochastic techniques is to cast a moderate
}# number of rays in RANDOM directions, hoping that they will hit a mirror
}# if there's one to hit.  If the jitter is done well, then the effect
}# will not be bad.

}     This does not sound correct to me. My understanding is that the only
}rays we are interested in are the ones that the eye can see, so we just
}need to cast a ray (more than one, if we want some reasonable anti-aliasing)
}from the eye-point through each pixel. At least that's the way I did it
}and it gives very realistic results. Any comments?
}

Unfortunately, it is correct.  Consider the problems associated with
diffuse inter-reflections that I mentioned above.  The light shone back
to the eye is not merely based on light from a small number of
directions, but from an infinite number of directions.

}>	The point of the above two questions is to find out if, in general,
}> raytracers handle illumination from light bounced off of or refracted
}> through other objects.

}     Yes.

	No.  Kajiya's "Rendering Equation" produced some of the effects
	that you want, however, and better computational methods should
	result in the kind of images that you desire.


}>	Finally, has anyone come up with a raytracer whose refraction model
}> takes into account the varying indices of refraction of different light
}> frequencies?  In other words, can I find a raytracer that, when looking
}> through a prism obliquely at a light source, will show me a rainbow?
}
}     This could be tough. The red, green, and blue components of monitors
}only simulate the full color spectrum. On a computer, yellow is a mixture
}of red and green. In real life, yellow is yellow. You'd have to cast a
}large number of rays and use a large amount of computer time to simulate
}a full color spectrum. (Ranjit pointed this out in his article and went
}into much greater detail).

Actually, this problem seems the easiest.  We merely have to trace rays
of differing frequency (perhaps randomly sampled) and use Fresnel's
equation to determine refraction characteristics.  If you are trying to
model phase effects like diffraction, you will probably have a much more
difficult time.


Mark VandeWettering

po0o+@andrew.cmu.edu (Paul Andrew Olbrich) (12/07/88)

Hi folks.

This is in reply to some of the recent ray tracing discussion.

>  I'm writing a ray tracer right now and have some experience.

I've already written a ray tracer.  And I'm a freshman at CMU!  (I love this
stuff!)

> The question is actually trickier than you might believe.  Even
> diffusely reflecting objects reflect light, hence at every point on the
> surface of an object, the illumination is a function of its own
> radiosity and the radiosity of every patch visible AFTER ANY NUMBER OF
> BOUNCES.  Hence, there is not a great way of knowing when to quit
> tracing rays.  For instance, you hit a triangle.  You need to find the
> amount of light energy reaching that triangle from EVERY direction, not
> just the direction of the reflected ray.  Techniques of MonteCarlo
> integration, distributed ray tracing, and radiosity all attempt to deal
> with this fundamental problem.

I just wanted to bring up a small point on this.  Basically what's being said
here is that every object that you're dealing with reflects light to some
degree.  This is extremely clear when you realize that if all of the objects did
not reflect light, then no light would radiate from them at all, and you
wouldn't be able to see them!

This, of course, would be a severe bummer!

---
Drew Olbrich
po0o+@andrew.cmu.edu

"You cannot depend on your eyes when your imagination is out of focus." -- Mark
Twain

"I wouldn't trust him any farther than I could spit a rat." -- Zaphod Beeblebrox

skinner@saturn.ucsc.edu (Robert Skinner) (12/07/88)

In article <859@amethyst.ma.arizona.edu> chris@spock.ame.arizona.edu (Chris Ott) writes:
>
>ewhac@well.UUCP (Leo 'Bols Ewhac' Schwab) writes:
>>	I was doing a few gedanken experiments with raytracing, and came up
>> with a few questions.  Realize that I've never written a raytracer.
>
>     I'm writing a ray tracer right now and have some experience.

I haven't written a ray tracer, but I've talked to people that have

>>  [in summary, Leo asks:]
>> does ray tracing produce secondary illumination, i.e. does the light
>> bouncing off of a mirror illuminate an object?
>
>     Software that does true ray tracing would definitely illuminate the
>object. For example, the ray could be sent from the eye through a specific
>pixel on the screen to the object, bounce off the object into the mirror,
>and finally, off the mirror into a light source. Then, given the color of
>the object and the light source, the pixel's color can be computed. At
>least mine works this way. The way my code looks, it seems as if this
>would be intrinsically part of ray tracing, i.e. I didn't have to make
>a special case for mirrors.

	Say you've got one sphere, one light source, and 40 mirrors, but
only one mirror reflects the light source onto the sphere.
How do you know which mirror contributes light without casting test
rays towards the mirrors?  And how many rays do you cast before you
give up?  The mirror may be large, and the light only comes from one
small part of it.

	This also goes for secondary illumination from non-mirrors
(all objects reflect, except a black body).  Imagine two surfaces that
face each other, one is directly illuminated but the second is not.  
The second surface should be illuminated from the first one.

	If you have an algorithm that can do this naturally without
using rays to search for the illuminant, then you should publish.  I'm
sure it would be accepted by SIGGRAPH.  Jim Kajiya is the only one I'm
aware of that has published anything about ray tracing that can handle
secondary illumination, and this is how he does it.  (He said he had
to use HUGE lights to get it to work.)

>
>>	Suppose I have a flat surface, a light source, and an object in the
>> shape of a convex lens above the surface under the light.  Suppose further
>> that the object is set up to be perfectly clear, and refracts light like
>> glass.  Question:  Will the light beneath the lens object be intensely
>> focused on the surface below, just like a real lens?
>
>     Again, true ray tracing would definitely produce this result.

You can't get this without the secondary illumination you are asking
for above.

>
>Then ranjit@eniac.seas.upenn.edu (Ranjit Bhatnagar) writes:
>
># ... most ray tracers do NOT model reflected light,
># ... you have to cast rays in EVERY DIRECTION, ..
>#
># A nice application of stochastic techniques is to cast a moderate
># number of rays in RANDOM directions, hoping that they will hit a mirror
># if there's one to hit.  If the jitter is done well, then the effect
># will not be bad.
>
>     This does not sound correct to me. My understanding is that the only
>rays we are interested in are the ones that the eye can see, so we just
>need to cast a ray (more than one, if we want some reasonable anti-aliasing)
>from the eye-point through each pixel. At least that's the way I did it
>and it gives very realistic results. Any comments?

I agree with Ranjit.  You are right as far as primary rays are
concerned, but once you hit an object, what do you know about how it
is illuminated?  You know the positions of the lights, no sweat there.
You know the positions of mirrors that may reflect light onto this
object, but you don't know what part of the mirror really does it.
Take the example above of the light being focused onto the object (a
caustic).  The light is intensified at the caustic, so you must take
into account that the light is being reflected from many places on the
mirror onto one spot on the surface.  (Many of your test rays bounced
off of the mirror and hit the light).  A short distance away from the
caustic, there is no extra illumination.  (All of the test rays bounced
off of the mirror and MISSED the light).
>
>>	The point of the above two questions is to find out if, in general,
>> raytracers handle illumination from light bounced off of or refracted
>> through other objects.
>
>     Yes.

No.

>
>>	Finally, has anyone come up with a raytracer whose refraction model
>> takes into account the varying indices of refraction of different light
>> frequencies?  In other words, can I find a raytracer that, when looking
>> through a prism obliquely at a light source, will show me a rainbow?
>
>     This could be tough. ...

This is the easy part.  Rob Cook wrote a paper on stochastic sampling of
the screen pixel positions for antialiasing, the position of finite 
width light sources for shadow penumbras, object positions in time for
motion blur, and aperture position for depth of field.  So all you
have to do is sample the frequency of light.  
You fire say 16 rays per pixel anyway to do
antialiasing, and assign each one a color (frequency).  When the ray
is refracted through an object, take into account the index of
refraction and apply Snell's law.  A student here did that
and it worked fine.  He simulated rainbows and diffraction effects
through prisms.  What he couldn't do is to shine a light through a
prism and cast a spectrum on a surface.  But the spectrum is just a
special case of a caustic, so if you do secondary illumination it will
work.
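The scheme described here — one wavelength per anti-aliasing ray, with a wavelength-dependent index fed into Snell's law — can be sketched like this. Cauchy's empirical equation n = A + B/lambda^2 is one common dispersion model; the coefficients below are merely illustrative of a dense glass, not taken from anyone's renderer.

```python
import math

def cauchy_index(wavelength_nm, a=1.5220, b=4590.0):
    """Cauchy's empirical dispersion model n = A + B / lambda^2
    (wavelength in nm; coefficients are illustrative only)."""
    return a + b / (wavelength_nm * wavelength_nm)

def refract_angle(theta_in, n1, n2):
    """Snell's law n1 sin(theta_in) = n2 sin(theta_out); returns the
    refracted angle in radians, or None on total internal reflection."""
    s = n1 * math.sin(theta_in) / n2
    if abs(s) > 1.0:
        return None
    return math.asin(s)

# one wavelength per ray: blue (420 nm) sees a higher index than
# red (700 nm), so it bends more at the same air-to-glass interface
theta = math.radians(45.0)
red = refract_angle(theta, 1.0, cauchy_index(700.0))
blue = refract_angle(theta, 1.0, cauchy_index(420.0))
```

The spread between `red` and `blue` over a few refractions is exactly what smears a white ray into a spectrum.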

	(Spencer Thomas (U. Utah, or is it U. Mich. now?) also implemented 
the same sort of thing at about the same time.)  (Oh yeah?  Well I
thought about doing it a year before that :^) ... sure.)

>
>>	Something tells me that all three questions are rather hard.
>
>Nah. They were easy.

Please show us how the first two are easy.  Sorry if I sound overly
cynical, but I don't see how it falls out naturally without sending
out test rays to search for secondary illumination.  If you really can
do this, by all means let us know.  The world anxiously awaits.

Robert Skinner
skinner@saturn.ucsc.edu
-- 
----
this is a test

jevans@cpsc.ucalgary.ca (David Jevans) (12/08/88)

In article <3324@uoregon.uoregon.edu>, markv@uoregon.uoregon.edu (Mark VandeWettering) writes:
> }>	Finally, has anyone come up with a raytracer whose refraction model
> }> takes into account the varying indices of refraction of different light
> }> frequencies?  In other words, can I find a raytracer that, when looking
> }> through a prism obliquely at a light source, will show me a rainbow?
> }
> }     This could be tough. The red, green, and blue components of monitors
> }only simulate the full color spectrum. On a computer, yellow is a mixture
> }of red and green. In real life, yellow is yellow. You'd have to cast a
> }large number of rays and use a large amount of computer time to simulate
> }a full color spectrum. (Ranjit pointed this out in his article and went
> }into much greater detail).
> 
> Actually, this problem seems the easiest.  We merely have to trace rays
> of differing frequency (perhaps randomly sampled) and use Fresnel's
> equation to determine refraction characteristics.  If you are trying to
> model phase effects like diffraction, you will probably have a much more
> difficult time.

This has already been done by a number of people.  One paper by
T. L. Kunii describes a renderer called "Gemstone Fire" or something.
It models refraction as you suggest to get realistic looking gems.
Sorry, but I can't recall where (or if) it has been published.  I have
also read several (as yet) unpublished papers which do the same thing in
pretty much the same way.

David Jevans, U of Calgary Computer Science, Calgary AB  T2N 1N4  Canada
uucp: ...{ubc-cs,utai,alberta}!calgary!jevans

coifman@yale.UUCP (Ronald Coifman) (12/16/88)

  In article <5647@saturn.ucsc.edu> skinner@saturn.ucsc.edu (Robert Skinner)
writes:

>>>	Finally, has anyone come up with a raytracer whose refraction model
>>> takes into account the varying indices of refraction of different light
>>> frequencies?  In other words, can I find a raytracer that, when looking
>>> through a prism obliquely at a light source, will show me a rainbow?
>>
>>     This could be tough. ...
>
>This is the easy part...
>You fire say 16 rays per pixel anyway to do
>antialiasing, and assign each one a color (frequency).  When the ray
>is refracted through an object, take into account the index of
>refraction and apply Snell's law.  A student here did that
>and it worked fine.  He simulated rainbows and diffraction effects
>through prisms.
>
>	(Spencer Thomas (U. Utah, or is it U. Mich. now?) also implemented 
>the same sort of thing at about the same time.

  Yep, I got a Masters degree for doing that (I was the student Rob is
referring to).  The problem in modelling dispersion is to integrate the
primary sample over the visible frequencies of light.  Using the Monte
Carlo integration techniques of Cook on the visible spectrum yields a
nice, fairly simple solution, albeit at the cost of supersampling at
~10-20 rays per pixel, where dispersive sampling is required.
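The Cook-style integration over the visible band reduces to jittered (stratified) sampling of wavelength. A hypothetical sketch under that assumption (Python; `trace_at` stands in for tracing one full ray at the given wavelength):

```python
import random

def spectral_pixel_estimate(trace_at, n_rays=16, lo=380.0, hi=700.0):
    """Cook-style stratified (jittered) sampling of the visible band:
    [lo, hi] nm is split into n_rays strata and one ray is traced at
    a jittered wavelength inside each stratum."""
    width = (hi - lo) / n_rays
    total = 0.0
    for i in range(n_rays):
        wavelength = lo + (i + random.random()) * width
        total += trace_at(wavelength)  # trace a whole ray at this wavelength
    return total / n_rays
```

Stratifying keeps the wavelength samples spread across the spectrum, so even ~16 rays per pixel avoid the banding that purely random wavelengths would give.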

  Thomas used a different approach.  He adaptively subdivided the
spectrum based on the angle of spread of the dispersed ray, given the
range of frequencies it represents.  This can be more efficient, but can
also have unlimited growth in the number of samples.  Credit Spencer
Thomas; he was first.

  As at least one person has pointed out, perhaps the most interesting aspect
of this problem is that of representing the spectrum on an RGB monitor.  That's
an open problem; I'd be really interested in hearing about any solutions that
people have come up with.  (No, the obvious CIE to RGB conversion doesn't work
worth a damn.)

  My solution(s) can be found in "A Realistic Model of Refraction for Computer
Graphics", F. Kenton Musgrave, Modelling and Simulation on Microcomputers 1988
conference proceedings, Soc. for Computer Simulation, Feb. 1988, in my UC Santa
Cruz Masters thesis of the same title, and (hopefully) in an upcoming paper
"Prisms and Rainbows: a Dispersion Model for Computer Graphics" at the Graphics
Interface conference this summer.  (I can e-mail troff sources for these papers
to interested parties, but you'll not get the neat-o pictures.)

  For a look at an image of a physical model of the rainbow, built on the 
dispersion model, see the upcoming Jan. IEEE CG&A "About the Cover" article.

					Ken Musgrave
-- 
_____________________________________________________________________
Ken Musgrave			arpanet: musgrave@yale.edu
Yale U. Math Dept.		
Box 2155 Yale Station		Primary Operating Principle:
New Haven, CT 06520				Deus ex machina