[sci.virtual-worlds] Real-time raytrace and some related "ramblings"

fmgst@unix.cis.pitt.edu (Filip Gieszczykiewicz) (05/13/91)

        Greetings. Well, I finally got my 486/25 (33MHz was a _bit_ above
        my budget :-) First thing I did was a "benchmark". I ran an
        older version of qrt (Quick RayTracer - author:???). After studying
        the results, something dawned on me:

        Real-time raytracing is not as far away as most of you led me
        to believe.... Why? Well, I did a 320x200x~256 image of a
        sphere, 1/2 cylinder "pillar", and a room with brick walls (one of the
        files included in the .ZIP file). It took less than 2 minutes.
        
        Now, I realize that 320x200 is not "super" but that's the
        resolution of (to my knowledge) most color LCD screens (from which
        goggles are made). If I can do a frame in under 2 minutes, adding
        a few processors would do a much better job - say, a frame in half
        a second or less - and with twice as many processors that time would
        be cut in half again. Or, what _is_ the current update rate for
        most VR systems?
        Anyone have any comments? ;-) By the way, does anyone know
        of a processor board (besides the Transputer) that could be used
        in this application? I'm looking for something that is not only
        low cost but also permits more boards to be added for more
        processing power. 
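
        To show what I mean, here's a rough (and untested) C sketch of how
        the work could be split up: every pixel is traced independently, so
        each board could simply take every Nth scanline. The trace_pixel()
        routine and the frame-buffer layout below are made up for
        illustration - they're not from qrt.

/* Sketch: ray tracing is embarrassingly parallel, so handing each of
 * N boards every Nth scanline should scale almost linearly (ignoring
 * bus overhead).  trace_pixel() is a stand-in for the real tracer. */

#define WIDTH  320
#define HEIGHT 200

struct color { unsigned char r, g, b; };

/* placeholder standing in for the real per-pixel ray trace */
static struct color trace_pixel(int x, int y)
{
    struct color c;
    c.r = (unsigned char)x; c.g = (unsigned char)y; c.b = 0;
    return c;
}

/* Board number 'board' (0 .. nboards-1) renders its share of scanlines
 * into the shared frame buffer.  The rows are disjoint, so no locking. */
void render_slice(struct color fb[HEIGHT][WIDTH], int board, int nboards)
{
    int x, y;
    for (y = board; y < HEIGHT; y += nboards)
        for (x = 0; x < WIDTH; x++)
            fb[y][x] = trace_pixel(x, y);
}

/* Back-of-envelope timing: serial time divided by the number of
 * boards, assuming perfect scaling. */
double estimated_frame_time(double serial_seconds, int nboards)
{
    return serial_seconds / (double)nboards;
}

        With perfect scaling, my ~120-second frame divided by the number of
        boards is the whole story; the real question is how far the bus lets
        you get before it becomes the bottleneck.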

        Also, anyone know if such a board would make a good senior project?
        I was thinking of using a 68010/68881 pair per board with some
        local memory (say 4 megs) and a well-organized bus (I think it's
        asking too much to use the pathetic ISA bus that's in most ATs -
        yeah, also in mine :-(

        Such a system would have several main goals: 

                1) price (must be low so that "simple" people like me can
        afford it - also, since more own it, more software gets written)

                2) expandability and modularity (add more boards -> more
        power, and adding another board does not require recompiling the
        whole OS)

                3) programmability (must be programmed in an object-
        oriented language - how about C++? - programs may be developed
        on the subsystem (if that is the implementation) or downloaded
        at execution time... a rough sketch of what I mean follows the list)

                4) more to follow.... :-)
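
        Here's the kind of host-to-board interface I'm imagining for goals
        2) and 3) - just a sketch, all the names and the message layout are
        invented, nothing is tied to real hardware yet:

/* Sketch of a host <-> render-board protocol.  Everything here is
 * made up for illustration; the real layout would depend on whatever
 * bus actually gets built. */

enum board_op {
    OP_LOAD_CODE,        /* download a program to a board at run time */
    OP_RENDER_SCANLINES, /* render scanlines [first, first+count) */
    OP_READ_PIXELS       /* send finished pixels back to the host */
};

struct board_msg {
    unsigned short op;       /* one of enum board_op */
    unsigned short board_id; /* which board; a new board just gets a new id */
    unsigned short first;    /* first scanline (for OP_RENDER_SCANLINES) */
    unsigned short count;    /* number of scanlines */
    unsigned long  length;   /* bytes of payload that follow the header */
    /* payload (code image or pixel data) follows */
};

        The point of keeping it a dumb message interface is goal 2): the
        host never needs to know how many boards are plugged in, so adding
        one shouldn't mean recompiling anything.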

        I'll "tidy up" a crude proposal I sent to a friend of mine and
        post it here - in it, I describe the project I want to do as my
        senior project (or, just for fun :-)

        I'll welcome any suggestions.

        Take care.
-- 
_______________________________________________________________________________
"The Force will be with you, always." It _is_ with me and has been for 11 years
Filip Gieszczykiewicz  "... a Jedi does it with a mind trick... " ;-)
FMGST@PITTVMS  or  fmgst@unix.cis.pitt.edu "My ideas. ALL MINE!!"

lance@motcsd.csd.mot.com (lance.norskog) (05/14/91)

fmgst@unix.cis.pitt.edu (Filip Gieszczykiewicz) writes:

>        Greetings. Well, I finally got my 486/25 (33MHz was a _bit_ above
>        my budget :-) First thing I did was a "benchmark". I ran an
>        older version of qrt (Quick RayTracer - author:???). After studying
>        the results, something dawned on me:

>        Real-time raytracing is not as far away as most of you led me
>        to believe.... Why? Well, I did a 320x200x~256 image of a
>        sphere, 1/2 cylinder "pillar", and a room with brick walls (one of the
>        files included in the .ZIP file). It took less than 2 minutes.


Yes, I noticed how fast QRT is.  DKBtrace also has options for turning off
various levels of tracing.

Ray-tracing is inherently 2D, while radiosity is inherently 3d.
Ray-tracing works from the eye to a background, radiosity works from
a light source until the light peters out.  You can do a radiosity
pass once, and save all the surface pixels in a 3D sparse data structure.
(Voxels are one technique.)  Then, just move around the 3-space and
continuously walk the data structure, displaying successive images
from your pre-computed database.  You can't move the objects, and
you yourself are transparent (no visible effect on the shading)
but it's computationally much less intensive.  
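
Something like this (very rough, untested C) is what I have in mind for
the structure: a hash table keyed on integer voxel coordinates, holding
the color the radiosity pass computed for the surface in that cell.
All the names and sizes are made up.

/* Sparse voxel store: a chained hash table keyed on integer voxel
 * coordinates.  Illustrative sketch only. */
#include <stdlib.h>

struct voxel {
    int x, y, z;               /* voxel coordinates */
    unsigned char r, g, b;     /* color from the radiosity pass */
    struct voxel *next;        /* hash chain */
};

#define NBUCKETS 65536
static struct voxel *buckets[NBUCKETS];

static unsigned hash3(int x, int y, int z)
{
    return ((unsigned)x * 73856093u ^
            (unsigned)y * 19349663u ^
            (unsigned)z * 83492791u) % NBUCKETS;
}

/* Store the precomputed color for one surface voxel. */
void voxel_put(int x, int y, int z,
               unsigned char r, unsigned char g, unsigned char b)
{
    unsigned h = hash3(x, y, z);
    struct voxel *v = malloc(sizeof *v);
    v->x = x; v->y = y; v->z = z;
    v->r = r; v->g = g; v->b = b;
    v->next = buckets[h];
    buckets[h] = v;
}

/* Look up a voxel; NULL means the cell is empty (no surface there). */
struct voxel *voxel_get(int x, int y, int z)
{
    struct voxel *v;
    for (v = buckets[hash3(x, y, z)]; v != NULL; v = v->next)
        if (v->x == x && v->y == y && v->z == z)
            return v;
    return NULL;
}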

Also, radiosity gives a more realistic, softer look.  Check some
computer graphics books.

Takes a lot more RAM, though.  You have to store enough detail for
each surface that it looks OK close up.  Also, you have to move
through the database quickly using a 3D line-drawing algorithm.
It is pre-sorted, though.
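
The "move through the database quickly" part is just a 3D analogue of
line drawing: step the eye ray from voxel to voxel until you land in a
cell that has a stored surface color.  Roughly (again untested, and
using the voxel_get() from the sketch above):

#include <math.h>

/* Walk a ray (origin o, direction d, both in voxel-grid units) through
 * the grid, visiting voxels in order, and return the first stored
 * surface voxel, or NULL after max_steps.  Standard 3D-DDA traversal;
 * struct voxel and voxel_get() are from the sketch above. */
struct voxel *march_ray(double ox, double oy, double oz,
                        double dx, double dy, double dz, int max_steps)
{
    int x = (int)floor(ox), y = (int)floor(oy), z = (int)floor(oz);
    int stepx = dx > 0 ? 1 : -1;
    int stepy = dy > 0 ? 1 : -1;
    int stepz = dz > 0 ? 1 : -1;

    /* ray parameter t at the next voxel boundary on each axis */
    double tmaxx = dx != 0 ? ((x + (stepx > 0)) - ox) / dx : 1e30;
    double tmaxy = dy != 0 ? ((y + (stepy > 0)) - oy) / dy : 1e30;
    double tmaxz = dz != 0 ? ((z + (stepz > 0)) - oz) / dz : 1e30;

    /* change in t between successive boundaries on each axis */
    double tdx = dx != 0 ? stepx / dx : 1e30;
    double tdy = dy != 0 ? stepy / dy : 1e30;
    double tdz = dz != 0 ? stepz / dz : 1e30;

    int i;
    for (i = 0; i < max_steps; i++) {
        struct voxel *v = voxel_get(x, y, z);
        if (v != NULL)
            return v;          /* hit a precomputed surface cell */
        if (tmaxx < tmaxy && tmaxx < tmaxz) { x += stepx; tmaxx += tdx; }
        else if (tmaxy < tmaxz)             { y += stepy; tmaxy += tdy; }
        else                                { z += stepz; tmaxz += tdz; }
    }
    return NULL;               /* gave up / left the interesting region */
}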

I guess you could use weighted average shadings from neighboring 
voxels if you're close enough to see a flat shaded polygon.
In custom fixed-point arithmetic, it might even be fast enough.
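
The blend itself is cheap in fixed point - e.g. an 8.8 weighted average
of the colors of the eight voxels around the hit point, something like
this (untested, weights assumed to sum to 256):

/* 8.8 fixed-point weighted average: c[] are 8-bit color components
 * from the eight neighboring voxels, w[] are 8.8 weights (256 == 1.0)
 * that sum to 256, so the result still fits in 8 bits. */
unsigned char blend8(const unsigned char c[8], const unsigned short w[8])
{
    unsigned long acc = 0;
    int i;
    for (i = 0; i < 8; i++)
        acc += (unsigned long)c[i] * w[i];
    return (unsigned char)(acc >> 8);   /* drop the fractional bits */
}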

Lance Norskog
thinman@netcom.com

markv@pixar.com (Mark VandeWettering) (05/29/91)

In article <1991May14.202045.1020@milton.u.washington.edu> lance@motcsd.csd.mot.com (lance.norskog) writes:

>Ray-tracing is inherently 2D, while radiosity is inherently 3d.
>Ray-tracing works from the eye to a background, radiosity works from
>a light source until the light peters out.  

Raytracing is _not_ inherently 2D.  It is an approximation to solving
the "rendering equation" or global illumination problems.  Raytracing makes
the assumption that the only "interesting" light transport occurs in directions
that end up at your eye position (or recursively, the point of origin for a 
ray).  While one may argue (correctly) that this does not simulate all 
possible light paths, it does make a reasonable attempt at reproducing some
subset of possible lighting situations.
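
(For reference, the rendering equation here is Kajiya's 1986 formulation;
in the usual notation,

    L_o(x,\omega_o) = L_e(x,\omega_o)
        + \int_\Omega f_r(x,\omega_i,\omega_o) L_i(x,\omega_i)
          (n \cdot \omega_i) d\omega_i

i.e. the light leaving a surface point is what the point emits plus all
incoming light weighted by the surface's reflectance function.  A classical
raytracer evaluates the integrand only for a handful of directions
\omega_i: toward the light sources, plus the mirror and refraction
directions.)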

Radiosity also makes a similar assumption.  The idea behind (most) radiosity 
implementations is that there is no view dependent illumination in the scene.
In other words, all scene elements are diffuse reflectors.  This too, is 
a reasonable assumption, which yields a different subspace of possible
illumination situations.  
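
(In the standard discrete form, the radiosity B_i of patch i satisfies

    B_i = E_i + \rho_i \sum_j F_{ij} B_j

where E_i is the patch's own emission, \rho_i its diffuse reflectance, and
F_{ij} the form factor between patches i and j.  Nothing in that system
depends on the eye position, which is exactly why the solution can be
reused from any viewpoint.)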

>You can do a radiosity
>pass once, and save all the surface pixels in a 3D sparse data structure.
>(Voxels are one technique.)  Then, just move around the 3-space and
>continuously walk the data structure, displaying successive images
>from your pre-computed database.  You can't move the objects, and
>you yourself are transparent (no visible effect on the shading)
>but it's computationally much less intensive.  

And produces a different set of effects.  Interesting, but also different.

>Also, radiosity gives a more realistic, softer look.  Check some
>computer graphics books.

"Realism" is subjective.  Hybrid algorithms that combine raytracing and 
radiosity seem to be the most realistic, because they are able to capture
reflection and refraction from both diffuse and specular surfaces.

>Takes a lot more RAM, though.  You have to store enough detail for
>each surface that it looks OK close up.  Also, you have to move
>through the database quickly using a 3D line-drawing algorithm.
>It is pre-sorted, though.

Storage can be higher.  The main problem is that in order to avoid aliasing,
your patches should all be on the order of a pixel in size.  The reason that
most radiosity pictures look okay is that we expect diffuse illumination
to change only slowly over the picture.  Still, look carefully at places where
tables and chairs meet the ground, and you will not see the sharper shadows
that one would expect.  This is largely due to insufficient grid resolution.

Mark VandeWettering

lance@motcsd.csd.mot.com (lance.norskog) (05/31/91)

You're right, I misspoke.  I meant to say that ray-tracing
is inherently scan-line-first, whereas radiosity is inherently
object-first.  For VR, ray-tracing is useless because you can't
compute anything up-front and then re-use it.  You want to precompute
as much viewpoint-independent information as possible, then recycle
it from the current viewpoint.  Maintaining BSP trees of unchanging
objects has this effect, and so does radiositizing immovable,
unchanging objects.
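
By "BSP trees have this effect" I mean the usual trick: build the tree
once, and for any new viewpoint a correct back-to-front (painter's)
order falls out of recursing on which side of each splitting plane the
eye sits.  Roughly (untested, all names invented):

/* Viewpoint-independent precomputation with a BSP tree: the tree is
 * built once; back-to-front order for any eye position comes from a
 * simple recursion.  draw_polys() is a stand-in for the rasterizer. */

struct plane { double a, b, c, d; };     /* ax + by + cz + d = 0 */

struct polygon_list;                     /* opaque here */

struct bsp_node {
    struct plane split;
    struct bsp_node *front, *back;       /* children (may be NULL) */
    struct polygon_list *polys;          /* polygons lying in the plane */
};

static void draw_polys(struct polygon_list *polys)
{
    (void)polys;                         /* placeholder rasterizer */
}

static double side_of(const struct plane *p, double ex, double ey, double ez)
{
    return p->a * ex + p->b * ey + p->c * ez + p->d;
}

/* Painter's-algorithm traversal: far subtree, then the polygons on the
 * splitting plane, then the near subtree. */
void bsp_draw(struct bsp_node *n, double ex, double ey, double ez)
{
    if (n == NULL)
        return;
    if (side_of(&n->split, ex, ey, ez) >= 0) {   /* eye in front */
        bsp_draw(n->back, ex, ey, ez);
        draw_polys(n->polys);
        bsp_draw(n->front, ex, ey, ez);
    } else {                                     /* eye behind */
        bsp_draw(n->front, ex, ey, ez);
        draw_polys(n->polys);
        bsp_draw(n->back, ex, ey, ez);
    }
}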

My application (5D, a 3D real-time multi-media window system) 
is not intended for making movie special effects or falsifying 
evidence, so I don't give a damn about realism.  I want to create
a new interactive medium for information presentation and
manipulation.  

The scene can be abstract instead of physical objects, with
the 3D effect used to convey relationships between those objects.
Shading is useful for reinforcing the 3D effect,
and helping your visual system build a 3D map from a scene.
If stuff closer to you is more important, that's a form of
intellectual depth-cuing.

The problem with making all polygons less than a pixel is that
they can all be varying distances from the viewpoint.  So, I'm supposed
to make them all smaller than a pixel from (say) 6 inches away
in world space, because maybe someday I'll get that close?
That's a lot of polygons!  Reyes-style micropolygonization
is really not a feasible strategy for VR.
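
A quick back-of-the-envelope makes the point: assume a 90-degree field
of view across 320 pixels, so one pixel subtends roughly 0.3 degrees.
From 6 inches away a pixel then covers about 6 * tan(0.3 deg) = 0.03
inches of surface, and tiling a single 10 ft x 10 ft wall at that
density already takes on the order of (120 / 0.03)^2 = 16 million
patches.  (The numbers are invented, but the scale is the problem.)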

What just popped into my head is to do shaded polygons with a light
source and an ambient value, with one batch of math doing both the
viewpoint transform and the normal transform intertwingled.  
GG1 has a gem on how to do this.  I'm doing this on a 486 with VGA, 
and reports are that my FP will grossly outstrip my video bandwidth.
This may be feasible cheap shading.  
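
Concretely, the per-vertex work is just one matrix applied to the
position and the same 3x3 applied to the normal (fine as long as the
transform is a pure rotation), then ambient + diffuse * max(0, N.L).
A sketch of the shading half - not the Graphics Gems routine, just an
illustration with invented names:

/* Cheap per-vertex shading: rotate the normal with the same 3x3 used
 * for the viewpoint transform (valid for rigid transforms), then
 * intensity = ambient + diffuse * max(0, N . L).  light_dir is a unit
 * vector already in eye space. */

struct vec3 { double x, y, z; };

static struct vec3 xform3(const double m[3][3], struct vec3 v)
{
    struct vec3 r;
    r.x = m[0][0]*v.x + m[0][1]*v.y + m[0][2]*v.z;
    r.y = m[1][0]*v.x + m[1][1]*v.y + m[1][2]*v.z;
    r.z = m[2][0]*v.x + m[2][1]*v.y + m[2][2]*v.z;
    return r;
}

static double dot3(struct vec3 a, struct vec3 b)
{
    return a.x*b.x + a.y*b.y + a.z*b.z;
}

/* Returns an intensity in [0,1], ready to scale a palette entry. */
double shade_vertex(const double m[3][3], struct vec3 normal,
                    struct vec3 light_dir, double ambient, double diffuse)
{
    struct vec3 n = xform3(m, normal);
    double ndotl = dot3(n, light_dir);
    double i;
    if (ndotl < 0.0)
        ndotl = 0.0;
    i = ambient + diffuse * ndotl;
    return i > 1.0 ? 1.0 : i;
}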

Lance

Note use of the word "intertwingled": the new edition of
Computer Lib/Dream Machines is my current breakfast book.
I highly recommend it to computer dreamers.