[comp.graphics] Ray tracing

josh@hi.uucp (Josh Siegel ) (04/23/87)

I am going to throw out some more ideas.  Maybe some great and new method
could be generated right here... :-)

In article <1514@sphinx.uchicago.edu> drco@sphinx.UUCP (david lee griffith) writes:
>     As written, the program will test for each pixel and each
>sphere in the universe whether the line from the observer to the
>patch of the perspective plane corresponding to the pixel intersects that 
>sphere.  With 50 - 100 spheres in the universe, and 320x400 pixels
>(or however many there are) that is a whole bunch of comparisons. 
>If each comparison requires two floating-point multiplications and
>a subtraction, no wonder the program takes so long!  Some sort of pre-
>processing (a simplified Z-buffering comes to mind) should be included
>so that only necessary comparisons are done and !!NO!! line-sphere 
>intersection tests should be done on pixels that turn out to be 
>background. Of course your programs are going to take hours if you use
>such brute force techniques as using ray-tracing to do hidden surface
>elimination.  

	The Z-buffer will work nicely, except it doesn't help with
shadows unless you do a full Z-buffer for each light source as
well. How do we do shadows?
	Another idea would be just to draw 2D versions of all the objects
and then see what is not drawn upon to find out what is the background.
Also, since not ALL objects are reflective, we could just have it
ray trace those objects that are special.  Still,  I don't know
how texture mapping is done when you don't ray trace so I don't
know how much this will help.

	The initial intersections are the easy part.  It is the
light bouncing off of lots and lots of different objects
that slows things down.  He doesn't even have an example of that!
Think of a mirrored room with reflective spheres where the only
color is a purple sofa.  Yech!
	How about we divide the Universe into blocks that are
16 on a side.  Every object that touches one of these "blocks"
is put into a linked list for that block.  This means we can move
through the blocks and just look at the objects that touch that block.
That saves us time over checking intersections with every object.  It also
means that putting dense objects on the other side of the
room won't slow down a sparse area at all.
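
Off the top of my head, the bookkeeping in C would look something like
this (a rough sketch, nothing tested; the names and the block-coordinate
bounding box are mine):

    #include <stdlib.h>

    #define GRID 16                        /* blocks per side, as above */

    struct object;                         /* whatever the tracer renders */

    struct node {                          /* one entry in a block's list */
        struct object *obj;
        struct node   *next;
    };

    struct node *grid[GRID][GRID][GRID];   /* a linked list per block */

    /* Put an object on the list of every block its bounding box
       touches.  lo[] and hi[] are the box corners, already converted
       to block coordinates (0..GRID-1). */
    void grid_insert(struct object *obj, int lo[3], int hi[3])
    {
        int x, y, z;

        for (x = lo[0]; x <= hi[0]; x++)
            for (y = lo[1]; y <= hi[1]; y++)
                for (z = lo[2]; z <= hi[2]; z++) {
                    struct node *n = malloc(sizeof *n);

                    n->obj  = obj;
                    n->next = grid[x][y][z];
                    grid[x][y][z] = n;
                }
    }

A ray walk then only tests the lists of the blocks it actually passes
through.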

	Another question: how much of this could we do using
integers or using logs?  An add is much faster than a multiply.
Any ideas on what could be done in this area?
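
The classic trick would be fixed point: keep, say, 16 fraction bits in
an integer so adds stay plain integer adds, and only multiplies need a
shift.  A sketch (my macros; assumes a 64-bit intermediate is available):

    #include <stdint.h>

    typedef int32_t fix;                  /* 16.16 fixed point */

    #define FIX_ONE     (1 << 16)
    #define INT2FIX(i)  ((fix)((i) * FIX_ONE))
    #define FIX2INT(f)  ((int)((f) / FIX_ONE))

    /* Adds and subtracts are ordinary integer ops; only the
       multiply needs a widening shift. */
    fix fix_mul(fix a, fix b)
    {
        return (fix)(((int64_t)a * b) >> 16);
    }

Whether 16 fraction bits hold enough precision for a deep ray tree is
another question.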

	How 'bout the FFT work done at Carnegie-Mellon
by Hans P. Moravec?   ("3D Graphics and the Wave Theory",
Computer Graphics, Volume 15, Number 3, Aug. 81).  Has
anybody continued this work on doing a whole ray traced image
by using waves and FFTs?  There are some nice sample pictures
but it has been a while since I have seen any real information.

>     Other problems with the code are less obvious. Little things 
>like the fact that the color of a mirrored sphere is irrelevant
>(I want my jugglers to have golden balls to work with :-) ), and
>that if a bright red light reflects off of a shiny surface the 
>highlights turn out white. I won't even talk about the really 
>unrealistic model of specular reflection.  

I don't think it is irrelevant, just maybe not as needed.
Personally, I think that a program should give you a choice of
whether to do specular reflection on a particular object.  I bet
it would speed things up if we didn't have to follow 3 rays per
intersection just because of specular reflection.  (I am not sure
if the Amiga ray tracer does this.)

>     In short, what we have so nicely listed in the article is 
>some really slow and sloppily conceived code.  This would not bother
>me except that an improved version of it is being made available for
>commercial release.  Please tell me that not all commercial Amiga
>software is of this quality.  Such a wonderful sounding machine 

Ah... BUT it was free!

Actually, the fact that this is public domain makes it very nice.  I have
2 or 3 ray tracers (depending on your definition of a tracer) in my
directory and every one is a bit different.  One is less than 600 lines
and does reflection, refraction, specular reflection, and multiple light
sources.  You can also run each color on a different processor!  It
is a grand piece of software, yet the guy didn't do what Dave Wecker
did, which is put his neck on the line.  There is a lot out there...
it's just that people aren't willing to take their hard work
and give it away.  Let's give D. Wecker a hand!


Actually,  I am also interested in the data structures people are using.
What structures are people using that facilitate objects
connected to objects, etc.?

-- 
Josh Siegel		(siegel@hc.dspo.gov)
                        (505) 277-2497  (Home)
		I'm a mathematician, not a programmer!

ritzenth@bgsuvax.UUCP (Phil Ritzenthaler) (04/27/87)

In article <4947@hi.uucp>, josh@hi.uucp (Josh Siegel ) writes:
> I am going to throw out some more ideas.  Maybe some great and new method
> could be generated right here... :-)
                                     .
                                     .
                                     .
> 	How about we divide the Universe into blocks that are
> 16 on a side.  Every object that touches one of these "blocks"
> is put into a linked list for that block.  This means we can move
> through the blocks and just look at the objects that touch that block.
> That saves us time over checking intersections with every object.  It also
> means that putting dense objects on the other side of the
> room won't slow down a sparse area at all.

Maybe I am in the wrong ballpark, but wasn't this already proposed?  In
IEEE CG&A, April 1986, a similar (but not identical) proposal was presented
by some gentlemen (one, I recall, was named Fujimoto).  The system was
called ARTS (Accelerated Ray Tracing System).  It is now being put into
an application by Ray Tracing, Inc. (address given if needed) . . .

Flames to /dev/null please . . .
                                                   
Phil Ritzenthaler			 |USnail:University Computer Services
Computer Graphics Specialist		 |       Academic User Services
					 |       241 Math-Science Bldg.
UUCP :..!cbatt!osu-eddie!bgsuvax!ritzenth|       Bowling Green State University
CSNET:ritzenth@research1.bgsu.edu 	 |       Bowling Green, OH   43403-0125
ARPA :ritzenth%bgsu.csnet@csnet-relay    |Phone: (419) 372-2102

spencer@osu-cgrg.UUCP (Steve Spencer) (04/28/87)

In article <829@bgsuvax.UUCP>, ritzenth@bgsuvax.UUCP (Phil Ritzenthaler) writes:
> In article <4947@hi.uucp>, josh@hi.uucp (Josh Siegel ) writes:
> > I am going to throw out some more ideas.  Maybe some great and new method
> > could be generated right here... :-)
>                                      .
> 
> Maybe I am in the wrong ballpark, but wasn't this already proposed?  In
> IEEE CG&A, April 1986, a similar (but not identical) proposal was presented
> by some gentlemen (one, I recall, was named Fujimoto).  The system was
> called ARTS (Accelerated Ray Tracing System).  It is now being put into
> an application by Ray Tracing, Inc. (address given if needed) . . .
> 

Sorry, Josh.  It's been done, though re-inventing the wheel shows 
original thought.  Spatial subdivision has been proposed by more than one
person or group.  Look in IEEE CG&A, back SIGGRAPH proceedings...

Basically, it falls into these categories:

 1.  a hierarchical subdivision scheme, usually an octree-like approach.
     this is quicker than...

 2.  an equal-sized spatial subdivision (break the "world" into NxNxN
      equal-sized boxes).
     this method is easier to implement than #1, but it is slower.

 3.  the tried-and-true method of bounding boxes of bounding boxes...
     [Timothy Kay talked about a new (and interesting...) way to do
      this at last year's SIGGRAPH.]

In general, all of these methods greatly reduce the time spent by
ray-tracing programs, because you don't have to do ALL THOSE 
INTERSECTION CALCULATIONS !!!!!   Fujimoto's program ran fast for 
two reasons:  first, they used spatial subdivision.  Second, their
algorithm dropped into assembly language in places, for speed.
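
For reference, the octree in #1 usually boils down to a node like this
(a hypothetical sketch, not any particular tracer's code; the field
names are mine):

    struct objlist;                      /* per-cell object list */

    struct octnode {
        float min[3], max[3];            /* axis-aligned cell bounds */
        struct octnode *child[8];        /* all NULL for a leaf */
        struct objlist *objects;         /* objects overlapping this leaf */
    };

A node is split into its eight children whenever it holds more than some
threshold of objects, so dense regions get subdivided deeply and empty
space stays shallow.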


Good luck.


-- 
...I'm growing older but not up...       - Jimmy Buffett

Stephen Spencer, Graduate Student
The Computer Graphics Research Group
The Ohio State University
1501 Neil Avenue, Columbus OH 43210
{decvax,ucbvax}!cbosg!osu-cgrg!spencer        (uucp)

josh@hi.uucp (Josh Siegel) (04/29/87)

In article <823@osu-cgrg.UUCP> spencer@osu-cgrg.UUCP (Steve Spencer) writes:
 >In article <829@bgsuvax.UUCP>, ritzenth@bgsuvax.UUCP (Phil Ritzenthaler) writes:
 >> In article <4947@hi.uucp>, josh@hi.uucp (Josh Siegel ) writes:
 >> > I am going to throw out some more ideas.  Maybe some great and new method
 >> > could be generated right here... :-)
 >>                                      .
 >> 
 >> Maybe I am in the wrong ballpark, but wasn't this already proposed?  In
 >> IEEE CG&A, April 1986, a similar (but not identical) proposal was presented
 >> by some gentlemen (one, I recall, was named Fujimoto).  The system was
 >> called ARTS (Accelerated Ray Tracing System).  It is now being put into
 >> an application by Ray Tracing, Inc. (address given if needed) . . .
 >> 
 >
 >Sorry, Josh.  It's been done, though re-inventing the wheel shows 
 >original thought.  Spatial subdivision has been proposed by more than one
 >person or group.  Look in IEEE CG&A, back SIGGRAPH proceedings...
 >
 >Basically, it falls into these categories:
 >
 > [...]


Damn... I HATE it when I do that... :-)

I had assumed it was not a new idea (I thought of it, after all) but
had not seen any references on it (I had not gone through the SIGGRAPH
stuff yet).  Oh well...

Anyhow,
	Thanks for all the references I have gotten on all the
things I threw out.  Interesting material.  Still, let's see
the usenet ray tracer! :-)

			--Josh Siegel
-- 
Josh Siegel		(siegel@hc.dspo.gov)
                        (505) 277-2497  (Home)
		I'm a mathematician, not a programmer!

carlson@styx.UUCP (John Carlson) (04/30/87)

In article <823@osu-cgrg.UUCP> spencer@osu-cgrg.UUCP (Steve Spencer) writes:
>Basically, it falls into these categories:
> 1.  a hierarchical subdivision scheme, usually an octree-like approach.
>     this is quicker than...
> 2.  an equal-sized spatial subdivision (break the "world" into NxNxN
>      equal-sized boxes).
>     this method is easier to implement than #1, but it is slower.

This is news to me (and I am trying to catch up).  Do you have empirical
or experimental data to show this?  Is it true for all cases?  How did you
conclude that one scheme was faster than the other?  Does the hierarchical
subdivision scheme take advantage of integer arithmetic to trace rays?

John Carlson
ARPA: carlson@lll-tis-b.arpa
UUCP: lll-crg!styx!carlson

spencer@osu-cgrg.UUCP (Steve Spencer) (05/01/87)

In article <21416@styx.UUCP>, carlson@styx.UUCP (John Carlson) writes:
> In article <823@osu-cgrg.UUCP> spencer@osu-cgrg.UUCP (Steve Spencer) writes:
> >Basically, it falls into these categories:
> > 1.  a hierarchical subdivision scheme, usually an octree-like approach.
> >     this is quicker than...
> > 2.  an equal-sized spatial subdivision (break the "world" into NxNxN
> >      equal-sized boxes).
> >     this method is easier to implement than #1, but it is slower.
> 
> This is news to me (and I am trying to catch up). 
> Do you have empirical or experimental data to show this? 

Not that I have actual numbers right here with me, but I seem to remember
a slightly quicker runtime for octree-based subdivision versus the SEADS
(equal-sized subdivision) method.  Intuitively, it would make sense that
the hierarchical approach would be faster, because the subdivisions are
more proportional to the scene's complexity in a given area
(greater subdivision where there's more stuff, less subdivision where
 there's a lot of empty space).

> Is it true for all cases? 

No.  There are "scenes" (groupings of lights and objects) which, based
upon their placement, would be faster with one method than with the other.
For example, imagine defining a scene with nine spheres: eight of them
at the corners of a cube 100 units on a side, and the ninth in the center.
The SEADS approach would be a dog here, compared to the octree approach,
for most combinations of [criteria for octree subdivision] and [number of
"voxels" in which to subdivide the scene space].  On the other hand,
the octree approach may bog down with a scene which was constructed with
many objects clustered near the center of the scene space, because 
traversing the octree can be time-consuming when the octree is deep.
Here, the SEADS approach would, at least in concept, be better.

> How did you conclude that one scheme was faster than the other?  

Comparing timings of scenes.

> Does the hierarchical subdivision scheme take advantage of integer
> arithmetic to trace rays?

It can, though ours doesn't. (I didn't write that module but am fairly
certain of my answer.) Both the hierarchical subdivision and the SEADS
method could utilize integer arithmetic.


Hope this helped.  
-- 
...I'm growing older but not up...       - Jimmy Buffett

Stephen Spencer, Graduate Student
The Computer Graphics Research Group
The Ohio State University
1501 Neil Avenue, Columbus OH 43210
{decvax,ucbvax}!cbosg!osu-cgrg!spencer        (uucp)

elwell@osu-eddie.UUCP (Clayton Elwell) (05/01/87)

In article <21416@styx.UUCP> carlson@styx.UUCP (John Carlson) writes:
>In article <823@osu-cgrg.UUCP> spencer@osu-cgrg.UUCP (Steve Spencer) writes:
>>Basically, it falls into these categories:
>> 1.  a hierarchical subdivision scheme, usually an octree-like approach.
>>     this is quicker than...
>> 2.  an equal-sized spatial subdivision (break the "world" into NxNxN
>>      equal-sized boxes).
>>     this method is easier to implement than #1, but it is slower.
>
>This is news to me (and I am trying to catch up).  Do you have empirical
>or experimental data to show this?  Is it true for all cases?  How did you
>conclude that one scheme was faster than the other?  Does the hierarchical
>subdivision scheme take advantage of integer arithmetic to trace rays?

I have observed the implementation of a ray tracer that uses scheme #1 (octree
subdivision).  It was written by Andrew Glassner while he was at Case Western
Reserve University.  I believe he had an article about it in IEEE CG&A a year
or so ago (I can go look it up if necessary).

His scheme subdivided the scene into an octree such that each cell had fewer
than a certain number of objects.  Since the cell boundaries were parallel to
the coordinate planes (in object space, of course), and the program kept track
of the size of the smallest cell, a ray could be propagated from one cell to
the next in only a few calculations.  He tested it against a naive ray tracer
(one that tested each ray against every object), and it showed a significant
improvement.  As I remember, the naive tracer ran in exponential time (by
number of objects), and the octree-encoded tracer ran in sublinear time (by
number of objects).
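
The "few calculations" per step amount to something like this (my own
sketch of the idea, not Glassner's code): for each axis, compute the
parametric distance to the cell wall the ray exits through, take the
smallest, and step just past it.

    /* Distance t at which a ray (origin p, direction d) leaves the
       axis-aligned cell [min, max].  The caller advances to
       p + (t + epsilon)*d and looks up the cell containing that point. */
    float cell_exit(float p[3], float d[3], float min[3], float max[3])
    {
        float t, tmin = 1e30f;
        int i;

        for (i = 0; i < 3; i++) {
            if (d[i] > 0.0f)      t = (max[i] - p[i]) / d[i];
            else if (d[i] < 0.0f) t = (min[i] - p[i]) / d[i];
            else                  continue;    /* parallel to this slab */
            if (t < tmin)
                tmin = t;
        }
        return tmin;
    }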

This is discussed in detail in his article in IEEE CG&A mentioned above.


-=-


							Clayton Elwell
The meek are getting ready...			Elwell@Ohio-State.ARPA
					   ...!cbosgd!osu-eddie!elwell

trainor@CS.UCLA.EDU (05/02/87)

In article <823@osu-cgrg.UUCP> spencer@osu-cgrg.UUCP (Steve Spencer) writes:
>In general, all of these methods greatly reduce the time spent by
>ray-tracing programs, because you don't have to do ALL THOSE 
>INTERSECTION CALCULATIONS !!!!!

Forget about intersection calculations; it's just trying to reduce the
problem of determining which object goes with a given ray to logarithmic
time, instead of linear.  The actual intersections only amount to a
constant, although it's a whopper...

	Douglas

chapman@fornax.uucp (John Chapman) (05/02/87)

> In article <21416@styx.UUCP>, carlson@styx.UUCP (John Carlson) writes:
> > In article <823@osu-cgrg.UUCP> spencer@osu-cgrg.UUCP (Steve Spencer) writes:
> > >Basically, it falls into these categories:
> > > 1.  a hierarchical subdivision scheme, usually an octree-like approach.
> > >     this is quicker than...
> > > 2.  an equal-sized spatial subdivision (break the "world" into NxNxN
> > >      equal-sized boxes).
> > >     this method is easier to implement than #1, but it is slower.
> > 
> > This is news to me (and I am trying to catch up). 
> > Do you have empirical or experimental data to show this? 
> 
> Not that I have actual numbers right here with me, but I seem to remember
> a slightly quicker runtime for octree-based subdivision versus the SEADS
> (equal-sized subdivision) method.  Intuitively, it would make sense that

Fujimoto et al claim SEADS with 3DDDA is faster than octrees, at least
in part because of the simplicity of cell-to-cell moves in 3DDDA compared
to octrees. They claim an order of magnitude improvement in time over
octrees. The paper is in the April 86 issue of CG&A.
In computing, intuition can get you into hot water :-).
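
For the curious, the 3DDDA inner loop really is just a couple of
compares and an add per step.  A sketch in that spirit (hit_in_voxel()
and the precomputed step/tmax/tdelta values are my own shorthand, not
Fujimoto's code):

    #define GRID 16

    int hit_in_voxel(int x, int y, int z);   /* tests one voxel's list */

    /* Walk one ray through the grid.  step[] is +1 or -1 per axis,
       tmax[] holds the ray parameter t of the next voxel wall per
       axis, and tdelta[] the t needed to cross one whole voxel per
       axis; all are precomputed from the ray origin and direction. */
    void dda_walk(int x, int y, int z,
                  int step[3], double tmax[3], double tdelta[3])
    {
        for (;;) {
            if (hit_in_voxel(x, y, z))
                return;                      /* nearest hit found */
            if (tmax[0] < tmax[1] && tmax[0] < tmax[2]) {
                x += step[0];  tmax[0] += tdelta[0];
            } else if (tmax[1] < tmax[2]) {
                y += step[1];  tmax[1] += tdelta[1];
            } else {
                z += step[2];  tmax[2] += tdelta[2];
            }
            if (x < 0 || x >= GRID || y < 0 || y >= GRID ||
                z < 0 || z >= GRID)
                return;                      /* left the grid: background */
        }
    }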


-- 
{watmath,seismo,uw-beaver}!ubc-vision!fornax!sfulccr!chapman
                   or  ...!ubc-vision!sfucmpt!chapman

chapman@fornax.uucp (John Chapman) (05/02/87)

.
.
.
> the next in only a few calculations.  He tested it against a naive ray tracer
> (one that tested each ray against every object), and it showed a significant
> improvement.  As I remember, the naive tracer ran in exponential time (by
> number of objects), and the octree encoded tracer ran in sublinear time (by
> number of objects).

The naive ray tracing alg. tests a ray against every object in the scene.
The time for a single ray is O(n) when there are n objects (linear, not
exponential).  If you render a scene with an m by m raster and a single
ray/pixel the time is then O(n*m*m).
 

-- 
{watmath,seismo,uw-beaver}!ubc-vision!fornax!sfulccr!chapman
                   or  ...!ubc-vision!sfucmpt!chapman

ae@tybalt.caltech.edu (Andrew A. Essen) (05/16/88)

I figure this question is asked all too often, but here goes...
Could someone send me a list of some references on ray-tracing?  I'm new
here so don't persecute me too much for this.

			thanks,

			Andrew Essen

mke@cseg.uucp (Mike K. Ellis) (02/17/89)

I need a good intersection algorithm for a ray tracing model.  The algorithm 
should be able to handle spheres, boxes, and pyramids.  I also need it to be 
very fast.  
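
For reference, the standard sphere test is the quadratic below (a
minimal, untuned sketch; the names are mine).  Boxes and pyramids come
down to ray/plane tests plus range checks on each face.

    #include <math.h>

    #define EPS 1e-9

    /* Nearest positive hit distance of a ray (origin o, normalized
       direction d) with a sphere (center c, radius r), or -1.0 on a
       miss. */
    double ray_sphere(double o[3], double d[3], double c[3], double r)
    {
        double oc[3], b, disc, t;
        int i;

        for (i = 0; i < 3; i++)
            oc[i] = c[i] - o[i];

        /* b is the projection of (c - o) onto the ray direction. */
        b = oc[0]*d[0] + oc[1]*d[1] + oc[2]*d[2];
        disc = b*b - (oc[0]*oc[0] + oc[1]*oc[1] + oc[2]*oc[2]) + r*r;
        if (disc < 0.0)
            return -1.0;                  /* ray misses the sphere */

        disc = sqrt(disc);
        t = b - disc;                     /* try the near root first */
        if (t > EPS)
            return t;
        t = b + disc;                     /* origin may be inside */
        return (t > EPS) ? t : -1.0;
    }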

   Thanks in advance for any help or suggestions.

    - Mike Ellis
 

joeg@polygen.uucp (Joe Gaudreau) (02/23/89)

Hello, I've been "reading" a lot about ray tracing here, understand some of
it, but would like to know more.  I'm interested in getting my hands dirty
but lack references and info.  Any help out there?  I'd be VERY interested
in actual source code - even if it's a "toy" - I still could learn a lot.
Thanx!!!

Joe Gaudreau
-=- -======-  %%WeBrokeItWeBrokeTheLamp

Mahesh.Neelakanta@f7.n369.z1.FIDONET.ORG (Mahesh Neelakanta) (02/23/89)

  I have some source code (written in C [what else !!]) that you may
find interesting.  It allows the user to input the location and
reflection coeff. of glass balls in 3-space and then outputs a datafile
with x-y coordinates and a grey-scale number.  Unfortunately the program
does not completely work, and it was specifically designed to run on an
HP system with UNIX.  Still interested?
 
                                               - Mahesh

naveen@gizmo.ee.pdx.edu (Naveen Chandra) (08/22/89)

   I was looking for SIGGRAPH lecture notes on ray tracing.  A few people
   did reply to me, and one of them said he could loan me the notes.  I
   have lost his mailing address; if you are the one reading this,
   please do reply to me again.


  Thanx

  Naveen

shal@csinc.UUCP (Shal Jain x848) (08/22/89)

In article <1621@psueea.UUCP>, naveen@gizmo.ee.pdx.edu (Naveen Chandra) writes:
> 
> 
>    I was looking for SIGGRAPH lecture notes on ray tracing.  A few people
>    did reply to me, and one of them said he could loan me the notes.  I
>    have lost his mailing address; if you are the one reading this,
>    please do reply to me again.
> 
>   Naveen

I have some notes from SIGGRAPH 89 and the book INTRO TO RAY TRACING
 
email address uunet!csinc!shal

dkelly@npiatl.UUCP (Dwight Kelly) (08/23/89)

In article <106@csinc.UUCP> shal@csinc.UUCP (Shal Jain x848) writes:
>
>I have some notes from SIGGRAPH 89 and the book INTRO TO RAY TRACING
                                                 ^^^^^^^^^^^^^^^^^^^^

Who publishes the book?

--
Dwight Kelly            UUCP: gatech!npiatl!dkelly
Director R&D            AT&T: (404) 962-7220
Network Publications, Inc    2 Pamplin Drive     Lawrenceville, GA  30245
             Publisher of "The Real Estate Book" nationwide!

shal@csinc.UUCP (Shal Jain x848) (08/24/89)

In article <437@npiatl.UUCP>, dkelly@npiatl.UUCP (Dwight Kelly) writes:
> In article <106@csinc.UUCP> shal@csinc.UUCP (Shal Jain x848) writes:
> >
> >I have some notes from SIGGRAPH 89 and the book INTRO TO RAY TRACING
>                                                  ^^^^^^^^^^^^^^^^^^^^
> 
> Who publishes the book?
> 
> --
> Dwight Kelly            UUCP: gatech!npiatl!dkelly

The book AN INTRODUCTION TO RAY TRACING is edited by ANDREW GLASSNER.
PUBLISHER: ACADEMIC PRESS INC., San Diego, CA 92101
ISBN 0-12-286160-4

Mr. GLASSNER can be reached at Xerox PARC, 3333 Coyote Hill Road,
Palo Alto, CA 94304.

Shh! Be wevy wevy quiet, I'm hunting that wascally wabbit.
					- Elmer J. Fudd

fleming@balboa (Dennis Paul Fleming) (08/30/89)

In article <437@npiatl.UUCP> dkelly@npiatl.UUCP (Dwight Kelly) writes:
>In article <106@csinc.UUCP> shal@csinc.UUCP (Shal Jain x848) writes:
>>
>>I have some notes from SIGGRAPH 89 and the book INTRO TO RAY TRACING
>                                                 ^^^^^^^^^^^^^^^^^^^^
>
>Who publishes the book?
>

	An Introduction to Ray Tracing
	Ed. Andrew S. Glassner
	Academic Press
	Harcourt Brace Jovanovich, Publishers

Dave.Levinson@f28.n363.z1.FIDONET.ORG (Dave Levinson) (09/07/89)

I have heard there are shareware ray tracing programs available; if anyone
has heard of any, please let me know.  Thank you much.... Dave Levinson
--  
Don't trust that map entry.  Try "...peora!tarpit!libcmp!mamab" first.

Fidonet:  Dave Levinson via 1:363/9
Internet: Dave.Levinson@f28.n363.z1.FIDONET.ORG
Usenet:  ...!peora!tarpit!libcmp!mamab!28!Dave.Levinson

ingar@bibsyst.UUCP (ingar) (06/26/91)

I'm not sure I made myself clear the last time, so I'll try again.

In my ray tracing program I am not able to ray trace cubes, or anything
that contains a straight line.  They always end up looking like deformed
spheres, spheres with angles as a matter of fact.  If I ray trace an
object in a room, the walls always end up looking like "bowls".
I have checked my types and they are all correct, and I have checked
that all my vector lengths are correct.

Does anybody have an idea of what I am doing wrong, except the fact
that I started to write a ray tracing program??

Ingar Pedersen, PreIng.

ingar@bibsyst.no
 
  

jk87377@cc.tut.fi (Juhana Kouhia) (06/27/91)

In article <383@bibsyst.UUCP> ingar@bibsyst.UUCP (ingar) writes:
>
>In my ray tracing program I am not able to ray trace cubes, or anything
>that contains a straight line.  They always end up looking like deformed
>spheres, spheres with angles as a matter of fact.  If I ray trace an
>object in a room, the walls always end up looking like "bowls".
>I have checked my types and they are all correct, and I have checked
>that all my vector lengths are correct.

If your sphere (or any other object) looks right, then it must be your
ray/cube intersection algorithm that is wrong.  Do you use a
transformation system?  Is the cube in its local coordinate system when
you actually do the ray/cube intersection?  Maybe the error is in
transforming the intersection point back from local to world coordinates.

Do you normalize your ray when you define it from the pixel
coordinates and eye point?
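
For comparison, here is one way to build the ray that keeps lines
straight (a sketch; the camera conventions, eye at the origin looking
down +z with square pixels, are my own assumptions):

    #include <math.h>

    #define WIDTH  320
    #define HEIGHT 400

    /* Normalized direction of the primary ray through pixel (i, j).
       The view plane sits at distance `focal` from the eye and spans
       [-1, 1] horizontally; y is scaled to keep pixels square. */
    void primary_ray(int i, int j, double dir[3])
    {
        const double focal = 1.5;         /* eye-to-plane distance */
        double len;

        dir[0] = 2.0 * (i + 0.5) / WIDTH - 1.0;
        dir[1] = (1.0 - 2.0 * (j + 0.5) / HEIGHT)
                 * ((double)HEIGHT / WIDTH);
        dir[2] = focal;

        len = sqrt(dir[0]*dir[0] + dir[1]*dir[1] + dir[2]*dir[2]);
        dir[0] /= len;  dir[1] /= len;  dir[2] /= len;
    }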

Juhana Kouhia

grahaf@otago.ac.nz (06/27/91)

In article <383@bibsyst.UUCP>, ingar@bibsyst.UUCP (ingar) writes:
> I'm not sure i made myself clear the last time so I try again.
> 
> In my ray tracing program I am not able to ray trace cubes, or anything
> that contains a strait line. They always end up looking like deformed 
> spheres, a sphere with angels as a matter of fact. If i ray trace an 
> object in a room the walls alway ends up in looking like "bowls". 
> I have checked my types and they are all correct, and I have checked
> that all my vector lengths are correct.
> 
> Does anybody have an idea of what I am doing wrong, except the fact
> that I started to make a ray traceing program??
> 
Someone else had this problem and the answer posted was "You have a fish-eye
lens problem."



Eye    View         Object
       plane                     This results in bent edges
        |           /------/
        |          /     / |
        |         /     /  |
/       |        |------|  |
\	|        |      | /
        |        |______|/
        |
        |


Try this.
                                     View           Object
Eye				     plane
				        |           /------/
				        |          /     / |
				        |         /     /  |
/				        |        |------|  |
\				 	|        |      | /
				        |        |______|/
				        |
				        |


Hope this helps,

Graham.

nwatson@ENUXHA.EAS.ASU.EDU (Nathan F. Watson) (06/27/91)

In article <1991Jun27.155718.625@otago.ac.nz> grahaf@otago.ac.nz writes:
>In article <383@bibsyst.UUCP>, ingar@bibsyst.UUCP (ingar) writes:
>> I'm not sure I made myself clear the last time, so I'll try again.
>> 
>> In my ray tracing program I am not able to ray trace cubes, or anything
>> that contains a straight line.

>Someone else had this problem and the answer posted was "You have a fish-eye
>lens problem."

[ ... some diagrams suggesting that the original poster move the eye away
  from the viewing plane ... ]

The proposed solution involves diminishing the field-of-view angle by
moving the eye away from the viewing plane.  I do not believe this will
work as any projection of a straight line (or boundary between polygonal
patches) will be mapped to a straight line on the viewing plane (and
screen) no matter how close to the viewing plane the eye happens to be.
Since the original poster states that straight lines are curved, the
field-of-view angle is not the problem.

The problem may be in computing the vector from the eye through a given
pixel on the screen.  An error in this computation would almost certainly
map straight lines to curved lines.

---------------------------------------------------------------------
Nathan F. Watson                             Arizona State University
nwatson@enuxha.eas.asu.edu                Computer Science Department
"Remember:  No matter where you go, there you are." - Mr. B. Banzai

tmkk@uiuc.edu (K. Khan) (06/28/91)

In article <383@bibsyst.UUCP> ingar@bibsyst.UUCP (ingar) writes:
>I'm not sure I made myself clear the last time, so I'll try again.
>
>In my ray tracing program I am not able to ray trace cubes, or anything
>that contains a straight line.  They always end up looking like deformed
>spheres, spheres with angles as a matter of fact.  If I ray trace an
>object in a room, the walls always end up looking like "bowls".
>I have checked my types and they are all correct, and I have checked
>that all my vector lengths are correct.
>
>Does anybody have an idea of what I am doing wrong, except the fact
>that I started to write a ray tracing program??

Here's a wild guess: check the focal length parameter in your projection
formulas. Too short a focal length value will give a fish-eye lens look
to the scene, and cause the distortions you describe.

jk87377@cc.tut.fi (Juhana Kouhia) (06/28/91)

In article <1991Jun27.184538.6963@ux1.cso.uiuc.edu> tmkk@uiuc.edu (K. Khan) writes:
>
>Here's a wild guess: check the focal length parameter in your projection
>formulas. Too short a focal length value will give a fish-eye lens look
>to the scene, and cause the distortions you describe.

Somebody else mentioned this too, and somebody already told the truth:
it is not so.  Focal length (or the distance of the image plane)
doesn't distort straight lines; I did verify this mathematically.
Hopefully I'm right; it indeed takes some imagination to see the
difference from the sphere distortion in the images.
(Spheres come out as ellipsoids.)

Juhana Kouhia

jonas-y@isy.liu.se (Jonas Yngvesson) (06/28/91)

nwatson@ENUXHA.EAS.ASU.EDU (Nathan F. Watson) writes:

>In article <1991Jun27.155718.625@otago.ac.nz> grahaf@otago.ac.nz writes:
>>In article <383@bibsyst.UUCP>, ingar@bibsyst.UUCP (ingar) writes:
>>> I'm not sure I made myself clear the last time, so I'll try again.
>>> 
>>> In my ray tracing program I am not able to ray trace cubes, or anything
>>> that contains a straight line.

>The proposed solution involves diminishing the field-of-view angle by
>moving the eye away from the viewing plane.  I do not believe this will
>work as any projection of a straight line (or boundary between polygonal
>patches) will be mapped to a straight line on the viewing plane (and
>screen) no matter how close to the viewing plane the eye happens to be.
>Since the original poster states that straight lines are curved, the
>field-of-view angle is not the problem.

Yes it is, sort of.  The problem is that you assume that the distance from the
viewpoint to the viewplane is constant over the whole image.  This is of course
not the case; the distance is bigger in the corners than in the middle.  If
the viewpoint is close to the viewplane the relative difference is quite large
and straight lines *will* map to curves.  Moving the viewpoint away from the
viewplane makes the relative difference smaller.  Straight lines still map
into curves, but the curvature is usually so small that you can't see
it.

--Jonas
-- 
------------------------------------------------------------------------------
 J o n a s   Y n g v e s s o n
Dept. of Electrical Engineering	                         jonas-y@isy.liu.se
University of Linkoping, Sweden                   ...!uunet!isy.liu.se!jonas-y

kyriazis@fourdee.Eng.Sun.COM (George Kyriazis) (06/29/91)

In article <1991Jun28.095138.3617@cc.tut.fi> jk87377@cc.tut.fi (Juhana Kouhia) writes:
>
>Focal length (or the distance of the image plane) doesn't distort
>straight lines; I did verify this mathematically.
>
I would agree...  We are talking basically of a perspective transform, and
perspective transforms are known to preserve straight lines.  If that
were not true, all interactive 3-D systems as well as architects that try
to display perspective would've been severely wrong...

Now, the question is why it gives a fish-eye look.
Say you are looking at a cube which is dead smack in the middle of the
screen.  I got the impression that this cube will have rounded concave
edges and look slightly like a sphere, right?  

I will assume that the intersection routine is ok (it's pretty difficult
to mess up a plane intersection routine so it gives you curves), so the
problem is related to viewing.

That effect will mean that the scanlines are not straight but bend 
towards the center of the image.  That will apply for the vertical 
lines too, I guess.  Ok.  Take a scanline.  Y is fixed on the scanline,
and what varies is X.  If the scanline bends towards the middle (or
towards the edges if you like), that will mean that the X coordinate
affects the Y offset in some way.

Two guesses on what is wrong:

1. Something is messed up when building the ray vector from the view
   plane normal and the x and y offsets, and then normalizing; although
   errors in the length of the ray vector should not matter.

2. Maybe the error is caused by using angles instead of offsets?
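
To make guess 2 concrete: equal *angles* per pixel is exactly a
fish-eye mapping, while equal *offsets* on the view plane is a true
perspective projection.  Side by side (my sketch, my names):

    #include <math.h>

    /* WRONG for perspective: stepping equal angles per pixel.  This
       is a spherical mapping (unit length, but not planar), so
       straight edges render as curves. */
    void ray_from_angles(double ax, double ay, double dir[3])
    {
        dir[0] = sin(ax) * cos(ay);
        dir[1] = sin(ay);
        dir[2] = cos(ax) * cos(ay);
    }

    /* RIGHT: stepping equal offsets on the view plane, then
       normalizing the result. */
    void ray_from_offsets(double x, double y, double focal, double dir[3])
    {
        double len = sqrt(x*x + y*y + focal*focal);

        dir[0] = x / len;
        dir[1] = y / len;
        dir[2] = focal / len;
    }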


Just my humble opinion....


--
----------------------------------------------------------------------------
George Kyriazis			(please include a generic disclaimer)
kyriazis@eng.sun.com  kyriazis@rdrc.rpi.edu  kyriazis@iear.arts.rpi.edu
----------------------------------------------------------------------------

chrisg@cbmvax.commodore.com (Chris Green) (06/29/91)

In article <1991Jun28.095138.3617@cc.tut.fi> jk87377@cc.tut.fi (Juhana Kouhia) writes:
>
>In article <1991Jun27.184538.6963@ux1.cso.uiuc.edu> tmkk@uiuc.edu (K. Khan) writes:
>>
>>Here's a wild guess: check the focal length parameter in your projection
>>formulas. Too short a focal length value will give a fish-eye lens look
>>to the scene, and cause the distortions you describe.
>
	Here's a wilder guess: You're not basing your ray directions on (shudder)
ANGLES, are you?
-- 
*-------------------------------------------*---------------------------*
|Chris Green - Graphics Software Engineer   - chrisg@commodore.COM      f
|                  Commodore-Amiga          - uunet!cbmvax!chrisg       n
|My opinions are my own, and do not         - killyouridolssonicdeath   o
|necessarily represent those of my employer.- itstheendoftheworld       r
*-------------------------------------------*---------------------------d

nwatson@ENUXHA.EAS.ASU.EDU (Nathan F. Watson) (06/30/91)

In article <jonas-y.678121618@isy.liu.se> jonas-y@isy.liu.se (Jonas Yngvesson) writes:
>nwatson@ENUXHA.EAS.ASU.EDU (Nathan F. Watson) writes:
>
>>In article <1991Jun27.155718.625@otago.ac.nz> grahaf@otago.ac.nz writes:
>>>In article <383@bibsyst.UUCP>, ingar@bibsyst.UUCP (ingar) writes:
>>>> I'm not sure I made myself clear the last time, so I'll try again.
>>>> 
>>>> In my ray tracing program I am not able to ray trace cubes, or anything
>>>> that contains a straight line.
>
>>The proposed solution involves diminishing the field-of-view angle by
>>moving the eye away from the viewing plane.  I do not believe this will
>>work as any projection of a straight line (or boundary between polygonal
>>patches) will be mapped to a straight line on the viewing plane (and
>>screen) no matter how close to the viewing plane the eye happens to be.
>>Since the original poster states that straight lines are curved, the
>>field-of-view angle is not the problem.
>
>Yes it is, sort of.  The problem is that you assume that the distance from the
>viewpoint to the viewplane is constant over the whole image.  This is of course
>not the case; the distance is bigger in the corners than in the middle.  If
>the viewpoint is close to the viewplane the relative difference is quite large
>and straight lines *will* map to curves.  Moving the viewpoint away from the
>viewplane makes the relative difference smaller.  Straight lines still map
>into curves, but the curvature is usually so small that you can't see
>it.
>
>--Jonas

Well, as I (nwatson@enuxha) said later in my posting, the original problem may
be related to calculating the original vector from the eye passing through
a particular pixel.  When I wrote my ray tracer, I computed the vector
from the eye to the viewing plane pixel and then normalized it before
tracing the ray.  The approach worked accurately (there may be a more
efficient way to do it than I did) and preserved straight lines, no
matter what the field-of-view angle was.  I am not sure whether
your phrase "... assume that the distance from the viewpoint to the
viewplane is constant ..." refers to me or the original poster.  In any
case, I never made said assumption, and said assumption should never be
made, except for special effects.

To drive home the point that projected lines are never distorted into
curves, you may use some elementary geometry.  The set of rays that pass
through the eye point and the points of the world-space line are in
a single plane.  The intersection of that plane with the viewing plane
is a straight line.  Excuse my pedantry.
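
In symbols (my notation, nothing fancy): with eye point $E$, world line
$\ell(t) = P_0 + t\,\vec{u}$, and view plane $V$, every ray through $E$
and a point of $\ell$ lies in the plane
$$\Pi = \{\, E + s(P_0 - E) + t\,\vec{u} : s, t \in \mathbb{R} \,\},$$
so the image of $\ell$ is $\Pi \cap V$, the intersection of two distinct
planes, hence a straight line (assuming $E \notin V$ and $\Pi$ is not
parallel to $V$).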


---------------------------------------------------------------------
Nathan F. Watson                             Arizona State University
nwatson@enuxha.eas.asu.edu                Computer Science Department
"Remember:  No matter where you go, there you are." - Mr. B. Banzai