[comp.sys.sgi] Quantitative IRIS Lighting Models

Tim_Buxton@UM.CC.UMICH.EDU (07/11/90)

 
 
 
What is the best software approach for predicting the TRUE
appearance of a given material surface in light(s) of a
given color from a given angle?  Say you want to know if a machine or
 person can recognize an object in a given light
for sorting on an assembly line, or some other quantitative
application.
 
 
Faced with this sort of task, I am requesting the
 experiences of people modeling the physics of materials in
 light on IRISes. This includes the faithful modeling of:
 
  Bidirectional surface reflectance characteristics
 
  Spectral characteristics of incident and reflected light
      from several sources
 
  Shadowing
 
 
How would you rate the following alternatives on SGI for
 faithful image reproduction and computation requirements:
 
   1. User-written lighting model programs in RGB mode
 (based on a recent helpful posting by Paul Haeberli)
 
   2. Commercial rendering programs such as Personal
 Visualizer, RenderMan, etc. 
 
   3. Raytracing programs such as the BRL-CAD lgt model
 
 
How in any of these approaches do you quantify lumens input
 to the scene when light intensity is specified in 0.0 to
 1.0 or 0-255 RGB color intensities?  How can you interpret
 lumens output?  How can you model the effect of a color
 filter or video camera bandpass?
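To be concrete, the only mapping I know of is a relative one, e.g.
weighting the channels by the CCIR 601 luma coefficients.  A sketch (my
own function names; note it yields only a *relative* luminance in
0.0-1.0, which is exactly the calibration gap I am asking about --
absolute lumens would require calibrating the display itself):

```python
# Relative luminance from RGB intensities, using the CCIR 601 luma
# weights.  This says nothing about absolute lumens; it only ranks
# pixel brightness on a 0.0-1.0 scale.

def relative_luminance(r, g, b):
    """Relative luminance of normalized RGB components (each 0.0-1.0)."""
    return 0.299 * r + 0.587 * g + 0.114 * b

def from_8bit(r, g, b):
    """Same, for 0-255 integer component values."""
    return relative_luminance(r / 255.0, g / 255.0, b / 255.0)
```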
 
Is there any coverage of quantitative modeling in the SGI
 graphics courses?
 
How do the capabilities of the Power Vision (VGX) series affect
 the above?  For instance, how much do the hardware
 texturing, anti-aliasing, or blurring capabilities help in
 physical modeling?
 
There is helpful material on specular and diffuse
 reflectance, etc. in the User's Guide on Lighting, as well
 as information in the Modeling on the IRIS pamphlet, the
 Wavefront Personal Visualizer documentation, and the BRL-CAD
 manual.  All seem possible candidates.  Before going in
 three directions at once, though, I think I and many others
 would be helped by recommendations from those who have
 worked with modeling quantitatively correct images.
 
Thanks in advance for your response.
 
 
                             -Tim Buxton
                              OptiMetrics, Inc.
                              Tim_Buxton@um.cc.umich.edu

moss@brl.mil (Gary S. Moss (VLD/VMB) <moss>) (07/25/90)

In article <6494368@um.cc.umich.edu>, Tim_Buxton@UM.CC.UMICH.EDU writes:
|> What is the best software approach for predicting the TRUE
|> appearance of a given material surface in light(s) of a
|> given color from a given angle?
|>
|> How would you rate the following alternatives on SGI for
|>  faithful image reproduction and computation requirements:
|>  
|>    1. User-written lighting model programs in RGB mode
|>  (based on a recent helpful posting by Paul Haeberli)
|>  
|>    2. Commercial rendering programs such as Personal
|>  Visualizer, RenderMan, etc. 
|>  
|>    3. Raytracing programs such as the BRL-CAD lgt model

|> How in any of these approaches do you quantify lumens input
|>  to the scene when light intensity is specified in 0.0 to
|>  1.0  or 0-255 RGB color intensities?  How can you interpret
|>  lumens output?  How can you model the effect of a color
|>  filter or video camera bandpass?
As the author of the BRL-CAD lgt program, I guess I should say a few
words.  It is an empirically based lighting model, not a physical model.
Output intensities are clipped for storage in a 0-255 RGB pixel data
structure, an artifact of the design of the BRL-CAD frame buffer
model, which favors 24-bit color displays.  These intensities are a
function of the RGB components of the material colors, the diffuse and
specular coefficients, the mirror-reflective and refractive properties
of the materials, and shadowing, combined appropriately with the
intensities and positions of the various light sources employed.  Light
source intensities are intended to be input in the 0.0 to 1.0 range,
and proper values depend largely on how many sources are employed, so
as to maintain full range (minimize clipping) in the RGB pixel values.
It is up to the user to define his light source and material property
databases to achieve a visual effect; however, there is no direct means
to calibrate these parameters against real-world data other than to
tweak them to mimic the results of experimentation.  Therefore, the
model is not predictive.
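Schematically, that sort of empirical combination looks like the sketch
below (illustrative names and structure only, not lgt source code; a
single color channel, with per-light diffuse and specular terms and the
final clip into a 0-255 pixel value):

```python
# Toy empirical shading for one color channel.  Each light contributes
# a Lambertian diffuse term and a Phong-style specular term; the sum is
# scaled and clipped into a 0-255 pixel component.  Light intensities
# (li) are expected in 0.0-1.0, as described above.

def shade_channel(material, kd, ks, n_shine, lights):
    """material: channel reflectance 0.0-1.0; kd/ks: diffuse/specular
    coefficients; lights: (intensity, N.L, N.H) per source."""
    total = 0.0
    for li, n_dot_l, n_dot_h in lights:
        diffuse  = kd * max(n_dot_l, 0.0) * material
        specular = ks * max(n_dot_h, 0.0) ** n_shine
        total += li * (diffuse + specular)
    # Clip into the 0-255 frame buffer pixel range.
    return min(int(total * 255.0 + 0.5), 255)
```

Note that with several full-intensity sources the sum easily exceeds
1.0 and clips, which is why the input intensities have to be budgeted
across the number of lights.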

Besides the intensity values being dissociated from any physical model,
the behavior of the light is not realistic.  Light sources are modeled
as point sources, refraction and reflection are not wavelength dependent,
and, worst of all, the rays are traced from the observer to the light
source.  This is typical of most lighting models, which are designed for
rendering geometry, not for use in smart-sensor applications.
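To illustrate the direction of traversal (a toy sketch, not lgt code;
names and the sphere-occluder assumption are mine): shadow rays run
from the shaded surface point *toward* the point light, so what the
light actually delivers to a sensor is never computed.

```python
import math

def sphere_blocks(origin, direction, center, radius, max_t):
    """True if the sphere occludes a unit-direction ray within max_t."""
    oc = [o - c for o, c in zip(origin, center)]
    b = 2.0 * sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4.0 * c          # a == 1 for a unit direction
    if disc < 0.0:
        return False
    t = (-b - math.sqrt(disc)) / 2.0
    return 0.0 < t < max_t

def in_shadow(point, light_pos, occluders):
    """Cast a shadow ray from the surface point toward the point light."""
    d = [l - p for l, p in zip(light_pos, point)]
    dist = math.sqrt(sum(x * x for x in d))
    dirn = [x / dist for x in d]
    return any(sphere_blocks(point, dirn, c, r, dist)
               for c, r in occluders)
```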

I doubt that you will pluck any software off the shelf that will be very
useful in this regard, unless it is *not* under the category of "lighting
models".

-Gary