[comp.sys.amiga.graphics] Radiosity LONG

cs87fmc@cc.brunel.ac.uk (F M Cargill) (05/11/91)

I got a message from Don Kennedy asking me:
> Can you give me some more information on the method of creating 3D images that
> you described in your article? 
and I thought the rest of you might like to know.  Apologies if you've
already seen this, but the news system was screwy when I originally sent 
this article - hence I'm re-posting.

Ok, it's the radiosity model, which was developed in 1984.  Most of the
research on the model is being done at Cornell University; I'll give you a
(hopefully) brief resume of what the model is, what it does, and what its
advantages and disadvantages are over other models (such as ray-tracing).

First off - Advantages:
1)  The calculations are view-independent: once you have worked
out the light interactions in a scene you can render it from any viewpoint in
the time it takes to do a Gouraud-shaded image.  This is excellent for
animation - the first image will take ages, but all the subsequent ones will
be produced in no time at all.  Contrast this with ray-traced animations,
which have to recalculate the whole scene at each frame.
2)  You can switch on lights, change the colour of the wallpaper, add disco
lights, you name it!  The recalculation will be a little slower than just
rendering a new image, but a lot faster than regenerating the whole scene.
3)  One variant of the model is intelligent in that it won't calculate
reflections if there's no light shining.  This is the Progressive Radiosity
model, which is faster, simpler, smaller and easier to understand than the
original.
4)  You can use Adaptive Subdivision to split the surfaces into smaller
elements wherever there is a sharp intensity gradient (like the edge of a
shadow).  This process is automatic: you just specify the finest detail
that you want and the algorithm will split up the surfaces where it NEEDS
to.
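The adaptive subdivision idea can be sketched in one dimension (the real
algorithm works on 2D surface patches, but the recursion is the same): split
an element whenever the intensity difference across it exceeds a tolerance,
down to a minimum element size.  The names and the step-function "shadow
edge" below are my own invention for illustration.

```python
# Adaptive subdivision sketch (1-D for brevity): split an element wherever
# the intensity gradient across it is sharp, down to a minimum size.
# The step-function "intensity" stands in for a hard shadow boundary.

def subdivide(intensity, lo, hi, tol, min_size):
    """Return a list of (lo, hi) elements, refined around sharp gradients."""
    if hi - lo <= min_size:
        return [(lo, hi)]            # can't split any finer
    if abs(intensity(hi) - intensity(lo)) <= tol:
        return [(lo, hi)]            # smooth enough: keep as one element
    mid = (lo + hi) / 2
    return (subdivide(intensity, lo, mid, tol, min_size) +
            subdivide(intensity, mid, hi, tol, min_size))

shadow_edge = lambda x: 0.0 if x < 0.37 else 1.0   # hard shadow at x = 0.37
elements = subdivide(shadow_edge, 0.0, 1.0, tol=0.1, min_size=0.01)
```

The result is exactly what the article describes: large elements where the
lighting is smooth, and a cluster of tiny elements around the shadow edge.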

Disadvantages:
1)  Specular (mirror/shiny) reflections aren't handled by the basic model,
although there are techniques that will include them.  The problem is that
reflections depend on the viewpoint - as you move, the highlights move.
2)  In the simple models, if an object moves within a scene then you have to
regenerate the whole scene.  You can get round this by shooting positive and
negative energy when something moves.  I've got a paper on this from one of
the SIGGRAPHs.
3)  If you don't use Progressive Radiosity then you need memory and time on
the order of the number of patches squared.  Unless you've got a CRAY or
Connection Machine on your Video Slot then use Progressive Radiosity!

Basic model description:
All surfaces are made of chalk - they're Lambertian emitters which radiate
light evenly in all directions.
All surfaces are treated as light emitters, with the amount of light each one
radiates depending on:
a) the amount of light it emits (if it is a lamp)
b) the amount of light that falls onto it (from lamps AND other surfaces)
c) the coefficient of reflectivity of the surface.

Thus the model bounces DIFFUSE light around a scene to produce an object
database or scene description which has each surface coloured according to
the light falling onto it, shadows, penumbrae, etc.
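Points a) to c) above are usually written as the radiosity equation,
B_i = E_i + rho_i * sum_j F_ij * B_j, where F_ij is the fraction of patch i's
light that reaches patch j (the form factor).  Here's a minimal sketch of
solving it by iteration; the three-patch "scene" and its form factor matrix
are invented, not taken from any real geometry.

```python
# Minimal full-matrix radiosity solve by Gauss-Seidel iteration:
#   B_i = E_i + rho_i * sum_j F[i][j] * B[j]
# The three-patch scene (one lamp, two reflective walls) is made up.

def solve_radiosity(E, rho, F, iterations=100):
    """E: emission per patch, rho: reflectivity, F: form factor matrix."""
    n = len(E)
    B = list(E)                      # start with pure emission
    for _ in range(iterations):
        for i in range(n):
            gathered = sum(F[i][j] * B[j] for j in range(n))
            B[i] = E[i] + rho[i] * gathered
    return B

E   = [1.0, 0.0, 0.0]               # patch 0 is the lamp
rho = [0.0, 0.5, 0.5]               # the walls reflect half the light
F   = [[0.0, 0.3, 0.3],             # each row sums to < 1: some light escapes
       [0.3, 0.0, 0.3],
       [0.3, 0.3, 0.0]]

B = solve_radiosity(E, rho, F)
```

Note this computes the whole scene at once (the n-squared method from
disadvantage 3); Progressive Radiosity gets the same answer by shooting.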

Rather than explain how light falls onto a surface I'll explain how you can
shoot light from a surface into the scene - the process is the same, just
reversed.  Project the scene onto a small hemisphere centred on a surface
that you want to shoot light from.  Then project the hemisphere down onto the
hemisphere base.  Imagine the projection of just one polygon onto the
hemisphere then onto the base.  The proportion of the area of the circular
base that is covered by the projection of the polygon is equal to the
proportion of the light radiated by the surface that reaches the polygon.

In practice you project onto a 'hemi-cube' centred on the surface - thus you
get to do five easy projections onto flat planes instead of a hard projection
onto the surface of a sphere.  These projections are what you need the
hardware and the MIPS/FLOPS for.
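The hemisphere construction above is known as the Nusselt analog.  One way to
check it numerically (a Monte Carlo sketch, not the hemicube itself) is to
shoot cosine-weighted rays from the patch and count the fraction that hit the
target polygon - that fraction IS the form factor.  I've chosen a disk of
radius R at height H directly above the patch, because for that geometry the
exact answer is known: R^2 / (R^2 + H^2).

```python
import math, random

# Nusselt-analog sketch: the form factor from a tiny patch to a polygon
# equals the fraction of the unit-disk base covered by the polygon's
# projection - i.e. the hit rate of cosine-weighted rays shot from the patch.
# Target: a disk of radius R at height H above the patch (exact form factor
# for this geometry is R^2 / (R^2 + H^2)).

def form_factor_to_disk(R, H, samples=200_000, seed=1):
    rng = random.Random(seed)
    hits = 0
    for _ in range(samples):
        # Cosine-weighted direction: pick a uniform point on the unit disk
        # and lift it onto the hemisphere (the Nusselt projection, reversed).
        r = math.sqrt(rng.random())
        phi = 2 * math.pi * rng.random()
        dx, dy = r * math.cos(phi), r * math.sin(phi)
        dz = math.sqrt(max(0.0, 1.0 - dx * dx - dy * dy))
        if dz <= 0.0:
            continue
        # Intersect the ray (from the origin) with the plane z = H.
        t = H / dz
        if (t * dx) ** 2 + (t * dy) ** 2 <= R * R:
            hits += 1
    return hits / samples

estimate = form_factor_to_disk(R=1.0, H=1.0)
exact = 1.0 ** 2 / (1.0 ** 2 + 1.0 ** 2)    # 0.5
```

The hemicube replaces this sampling with five scan-converted projections,
which is why it maps so well onto polygon-rendering hardware.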

It helps if you always shoot from the 'brightest unshot' patch, so that the
maximum light will be distributed around the scene in the quickest time.
After shooting from 20 patches you can make a good image; obviously the
more times you shoot, the more accurate the final image will be, but the law
of diminishing returns applies.
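That shooting loop can be sketched in a few lines.  For simplicity all
patches here have equal area (so the reciprocity rule F_ji = F_ij * A_i / A_j
collapses to F_ji = F_ij); the scene and form factors are the same invented
three-patch toy as before, not real geometry.

```python
# Progressive radiosity sketch: repeatedly shoot from the brightest unshot
# patch.  Equal patch areas assumed, so F[j][i] == F[i][j].

def shoot_progressive(E, rho, F, steps=50):
    n = len(E)
    B      = list(E)                 # accumulated radiosity
    unshot = list(E)                 # energy received but not yet distributed
    for _ in range(steps):
        i = max(range(n), key=lambda k: unshot[k])   # brightest unshot patch
        if unshot[i] <= 1e-9:
            break                    # nothing left worth shooting
        for j in range(n):
            if j == i:
                continue
            dB = rho[j] * F[i][j] * unshot[i]  # light j receives and reflects
            B[j]      += dB
            unshot[j] += dB
        unshot[i] = 0.0              # patch i's energy is now fully shot
    return B

E   = [1.0, 0.0, 0.0]
rho = [0.0, 0.5, 0.5]
F   = [[0.0, 0.3, 0.3],
       [0.3, 0.0, 0.3],
       [0.3, 0.3, 0.0]]

B = shoot_progressive(E, rho, F)
```

You can stop the loop early at any point and still have a usable image -
that's the "good image after 20 shots" property described above.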

References:
SIGGRAPH proceedings since 1985;
the second edition of Foley, van Dam, Feiner and Hughes (used to be Foley &
van Dam);
Alan Watt's book on 3D graphics.

> 
> Thanks,
> Don Kennedy
> Vision Quest Systems
Who are Vision Quest Systems anyway?
-- 
  *  Fletch Cargill                                         *      //  *
  *  Brunel University Department of Computer Science, UK.  *  \\ //   *
  *  cs87fmc@cc.brunel.ac.uk                                *   \X/    *

kcampbel@uafhp.uark.edu (Keith Alan Campbell) (05/12/91)

Vision Quest Systems is a loose, small conglomerate of individuals who
sometimes get together and do some productive product development. Currently
there are four of us active, of whom two are almost impossible to contact
since they have full-time engineering jobs and every day is skiing day at the
lake.

Jim May is company president, and his specialty is video applications. His
goal is to conglomerate third-party hardware and software into a professional
Amiga-based, video-oriented multimedia workstation. He has been the force
behind the Vision Quest dual TBC, which is currently in limbo. He is
available almost any time at the (501) 253-5264 number.

I am the audio person, having developed the specs and format of the SunRize
AD1016. I am also speculating about the development of an Amiga-oriented
professional digital audio business, a kind of clearinghouse for Amiga-based
audio workstation hardware, software, and development information. It will
(if I go with it) be called Concept Digital Audio. I am available at
(501) 521-0420 evenings and weekends, or messages can be left mornings.

Our two engineers choose to remain nameless because of restrictive policies
at their workplace, but they are equally adept at writing Amiga C code and
56001 assembly code, designing production 4-layer PC boards, and writing and
debugging Transputer code (and designing Transputer-based video/character
recognition applications in hardware), and they have designed the hardware
and much of the low-level code for the Amiga/AD1016 interface. They generally
choose to be contacted through me, at least in the beginning.

We all sometimes brainstorm about specific applications, but only spend
blocks of time on projects that have a promise of some kind of financial
return. Currently we are finishing up the AD1016. Later we may have some time
for third-party development. JPEG may be a realistic goal, since the
engineers are working with the C-Cube chip at work now (they find it VERY
hard to work with, from an interfacing standpoint). We are also tinkering
with 56001 code to replace the math code in PD ray-tracers, to see if the
56001 can be used as a co-processor/accelerator.
   
   Don Kennedy
   Vision Quest Systems