[comp.graphics] Lunar Distortions

musgrave-forest@CS.YALE.EDU (F. Ken Musgrave) (11/17/90)

  Here's another little question spawned by moon-renderings:

  A very wide-angle view of a scene (i.e., a landscape), with a sphere
(i.e., a moon) in an extreme corner of the image, sports one very distorted
sphere in the image, when rendered using the standard virtual-screen model
for ray tracing.  (See the cover of Jan. '89 IEEE CG&A for an example.)

  Seems that this is a version of the sphere-to-plane mapping problem, and
therefore admits no non-distorting solution.

  Can anyone out there prove this conjecture right or wrong, or demonstrate
some nice workaround?

							Ken
-- 
		The Fundamental Dilemma of Existentialism:

	Eschew obfuscation.			Ignore alien orders.

fournier@cs.ubc.ca (Alain Fournier) (11/19/90)

In article <27332@cs.yale.edu> musgrave-forest@CS.YALE.EDU (F. Ken Musgrave) writes:
>
>  Here's another little question spawned by moon-renderings:
>
>  A very wide-angle view of a scene (i.e., a landscape), with a sphere
>(i.e., a moon) in an extreme corner of the image, sports one very distorted
>sphere in the image, when rendered using the standard virtual-screen model
>for ray tracing.  (See the cover of Jan. '89 IEEE CG&A for an example.)
>....

This one is easy (and frustrating to behold). The projection of a sphere
on a screen using the standard viewing transformation is the intersection
of a cone (eye at the apex, with a circular cross-section defined by the
circle where the cone is tangent to the sphere) and the plane of the window.
This, as has been known for two to three thousand years, is a conic section:
an ellipse (including the circle as a special case), a parabola, or a hyperbola.
Most of the time in a CG picture it's an ellipse, and most of the time
it's close enough to a circle that the difference is not noticeable.
Ken has found cases where it is noticeable. Why do we think it's not "normal"?
I guess because in real life we don't look at the "corners" of our field of
vision, and our retina is curved to begin with (just a guess here about the reasons).
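(A quick numerical illustration of the effect -- an added sketch, not from the
original posting: with a pinhole camera and the window at unit distance, a
small sphere seen phi degrees off the view axis projects to an ellipse roughly
1/cos(phi) times longer along the radial direction than across it, so a moon
45 degrees off axis comes out about 40% too long.)

    #include <stdio.h>
    #include <math.h>

    /* Compare the radial and tangential extents of the projected image of a
     * small sphere of angular radius eps, seen phi degrees off axis.        */
    int main(void)
    {
        double eps = 0.001, phi;
        for (phi = 0.0; phi <= 60.0; phi += 15.0) {
            double p = phi * M_PI / 180.0;
            double radial = tan(p + eps) - tan(p - eps);  /* toward the corner */
            double across = 2.0 * eps / cos(p);           /* perpendicular     */
            printf("phi = %2.0f deg   elongation = %.3f   1/cos(phi) = %.3f\n",
                   phi, radial / across, 1.0 / cos(p));
        }
        return 0;
    }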

dilip@ncst.ernet.in (Dilip Khandekar) (11/19/90)

In article <27332@cs.yale.edu> musgrave-forest@CS.YALE.EDU (F. Ken Musgrave) writes:
>
>  Here's another little question spawned by moon-renderings:
>
>  A very wide-angle view of a scene (i.e., a landscape), with a sphere
>(i.e., a moon) in an extreme corner of the image, sports one very distorted
>sphere in the image, when rendered using the standard virtual-screen model
>for ray tracing.  (See the cover of Jan. '89 IEEE CG&A for an example.)
>
>  Seems that this is a version of the sphere-to-plane mapping problem, and
>therefore admits no non-distorting solution.
>
>  Can anyone out there prove this conjecture right or wrong, or demonstrate
>some nice workaround?
>

   Yes, this distortion is due to the sphere-to-plane mapping. The volume
defined by the rays joining every point on the sphere to the center of
projection is a cone. The intersection of this cone with the image plane is
an ellipse whenever the sphere lies entirely in front of the eye. So EVERY
sphere should appear as an ellipse in a ray-traced image which uses this
model; the distortion is prominent only for spheres located in an extreme
corner of the scene.

   I also encountered the same problem and would be interested in any method
or projection-model which circumvents this problem. If the pin-hole camera
model is not a good model for the human eye-brain system then is there any
other model which is more accurate?

- dilip

===============================================================================
Dilip Khandekar
Graphics and CAD Divn.,                 email:   dilip@ncst.ernet.in
National Centre for Software Tech.,           or uunet!shakti!dilip
Juhu,   Bombay,   INDIA.                  Tel: +91-022-6201606,6201574.
===============================================================================

3003jalp@ucsbuxa.ucsb.edu (Applied Magnetics) (11/20/90)

In article <27332@cs.yale.edu> musgrave-forest@CS.YALE.EDU (F. Ken Musgrave) writes:


>  A very wide-angle view of a scene (i.e., a landscape), with a sphere
>(i.e., a moon) in an extreme corner of the image, sports one very distorted
>sphere in the image, when rendered using the standard virtual-screen model
>for ray tracing.  (See the cover of Jan. '89 IEEE CG&A for an example.)
>  Seems that this is a version of the sphere-to-plane mapping problem, and
>therefore admits no non-distorting solution.
>  Can anyone out there prove this conjecture right or wrong, or demonstrate
>some nice workaround?

A stereographic projection maps circles on the sphere to circles in the
plane (counting straight lines as degenerate circles).  The inevitable
distortion is an increased magnification toward the edges of the
scene.  The distortion is extreme on a world map, but it might be OK
for a rendering if your wide angle stays below 180 degrees.  (Never
seen one myself, so I don't know.)
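(For concreteness, a rough sketch of what a stereographic ray generator might
look like -- a sketch of the idea only, not code from the posting.  It assumes
an orthonormal right/up/forward camera basis and uses the plane coordinate
r = 2*tan(theta/2), so that small circles of directions, such as a distant
moon, land as circles anywhere in the frame.)

    #include <math.h>

    typedef struct { double x, y, z; } Vec;

    /* Ray direction for pixel (px,py): the screen point is mapped back onto
     * the sphere of directions by the inverse stereographic projection.     */
    Vec stereo_ray(double px, double py, double width, double height,
                   double hfov_deg, Vec right, Vec up, Vec forward)
    {
        double edge = 2.0 * tan(hfov_deg * M_PI / 360.0); /* r at left/right edge */
        double x  = (2.0 * px / width  - 1.0) * edge;
        double y  = (1.0 - 2.0 * py / height) * edge * height / width;
        double r2 = x * x + y * y;
        double s  = 1.0 / (r2 + 4.0);
        double a  = 4.0 * x * s, b = 4.0 * y * s, c = (4.0 - r2) * s;
        Vec d;
        d.x = a * right.x + b * up.x + c * forward.x;   /* unit length already */
        d.y = a * right.y + b * up.y + c * forward.y;
        d.z = a * right.z + b * up.z + c * forward.z;
        return d;
    }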

If the distortion is too large, look for conformal mappings.  A
conformal mapping would transform _small_ circles to circles.  The most
general conformal mapping from the sphere to the plane is a
stereographic projection followed by an analytic transformation of the
complex plane.  Read the Schaum's Outline on complex variables, do all the
problems and start tinkering :-).

Pierre Asselin,  R&D, Applied Magnetics Corp.
<std. disclaimer>

prem@geomag.gly.fsu.edu (Prem Subrahmanyam) (11/21/90)

In article <1097@shakti.ncst.ernet.in> dilip@ncst.ernet.in (Dilip Khandekar) writes:
>sphere should appear as an ellipse in a ray-traced image which uses this
>model; the distortion is prominent only for spheres located in an extreme
>corner of the scene.
>
>   I also encountered the same problem and would be interested in any method
>or projection-model which circumvents this problem. If the pin-hole camera
>model is not a good model for the human eye-brain system then is there any
>other model which is more accurate?
>
>- dilip

   A solution I can see is to give each pixel, instead of a calculated
   position on a viewing plane, a calculated horizontal and vertical
   angle from the center.  Knowing the angles, one can easily construct
   a vector in the appropriate direction to represent this pixel's ray.
   I have not actually implemented this, but it is obvious that this
   would work: when a conventional method is used and the viewing
   plane is located far from the eye point, the plane is a close enough
   approximation to a sphere to remove most any distortion.  A
   conventional camera does well for the same reason -- it takes a
   spherical "bunch" of rays and maps them to the film plane.
   If I ever get my present thesis out of the way, I will 
   attempt to implement this in my version of DBW_Render.
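   (For illustration, a rough sketch of the per-pixel angle approach -- not
   Prem's code; the orthonormal right/up/forward camera basis and the
   function name are assumed for the example:)

    #include <math.h>

    typedef struct { double x, y, z; } Vec;

    /* Ray direction built from a per-pixel yaw (horizontal) and pitch
     * (vertical) angle instead of a point on a viewing plane.            */
    Vec angular_ray(double px, double py, double width, double height,
                    double hfov_deg, double vfov_deg,
                    Vec right, Vec up, Vec forward)
    {
        double yaw   = (2.0 * px / width  - 1.0) * 0.5 * hfov_deg * M_PI / 180.0;
        double pitch = (1.0 - 2.0 * py / height) * 0.5 * vfov_deg * M_PI / 180.0;
        double a = cos(pitch) * sin(yaw);   /* along "right"   */
        double b = sin(pitch);              /* along "up"      */
        double c = cos(pitch) * cos(yaw);   /* along "forward" */
        Vec d;
        d.x = a * right.x + b * up.x + c * forward.x;
        d.y = a * right.y + b * up.y + c * forward.y;
        d.z = a * right.z + b * up.z + c * forward.z;
        return d;                           /* unit length by construction */
    }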
   ---Prem Subrahmanyam

uad1077@dircon.uucp (11/21/90)

In article <1097@shakti.ncst.ernet.in> dilip@ncst.ernet.in (Dilip Khandekar) writes:
>In article <27332@cs.yale.edu> musgrave-forest@CS.YALE.EDU (F. Ken Musgrave) writes:
>>
>>  Here's another little question spawned by moon-renderings:
>>
>>  A very wide-angle view of a scene (i.e., a landscape), with a sphere
>>(i.e., a moon) in an extreme corner of the image, sports one very distorted
>>sphere in the image, when rendered using the standard virtual-screen model
>>for ray tracing.  (See the cover of Jan. '89 IEEE CG&A for an example.)
>>  Can anyone out there prove this conjecture right or wrong, or demonstrate
>>some nice workaround?
>>
>
>   I also encountered the same problem and would be interested in any method
>or projection-model which circumvents this problem. If the pin-hole camera
>model is not a good model for the human eye-brain system then is there any
>other model which is more accurate?

You might like to experiment with mimicking the distorting effect of
camera lenses.  Lenses which behave like renderers are very expensive,
and are called something like `flat-plane lenses' (surprise).  Ordinary
lenses exhibit distortion (it's called that in optics books).  Positive
distortion makes flat squares look like pillowcases; negative distortion
makes them bulge outwards like barrels.  A way to model positive distortion
is to move points in the picture closer to the centre.  A point that starts
off at (r, theta) on the image plane in polar co-ordinates would move to
(r', theta) where, in two popular approximations:

	r' = r (1 - C * r * r)
	r' = r * cos (C * r)

for appropriate (small) constants C.  Of course, I was working with
images.  In a ray-tracer, you would distort the directions of the rays
by using the inverse transformation to splay the rays from the camera
outwards.

No guarantees that it will fix the problem, but it is not a million
miles away from this discussion....
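(A small sketch of the splaying step -- not Ian's code, and the helper name is
made up.  r' = r (1 - C * r * r) has no tidy closed-form inverse, so a few
fixed-point iterations recover r from r', which is adequate for the small C
this approximation assumes.)

    #include <math.h>

    /* Given an undistorted virtual-screen point (x,y), return the splayed
     * point through which the ray should actually be fired.               */
    void splay_point(double x, double y, double C, double *sx, double *sy)
    {
        double rp = sqrt(x * x + y * y);    /* undistorted radius r'        */
        double r  = rp;                     /* solve  rp = r * (1 - C*r*r)  */
        int i;
        for (i = 0; i < 8; i++)
            r = rp / (1.0 - C * r * r);
        if (rp > 0.0) { *sx = x * r / rp;  *sy = y * r / rp; }
        else          { *sx = 0.0;         *sy = 0.0;        }
    }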

-- 
Ian D. Kemmish                    Tel. +44 767 601 361
18 Durham Close                   uad1077@dircon.UUCP
Biggleswade                       ukc!dircon!uad1077
Beds SG18 8HZ United Kingdom  uad1077%dircon@ukc.ac.uk

ph@miro.Berkeley.EDU (Paul Heckbert) (11/27/90)

musgrave-forest@CS.YALE.EDU (F. Ken Musgrave) writes:
> A very wide-angle view of a scene with a sphere in an extreme corner of
> the image yields a very distorted sphere in the image...

dilip@ncst.ernet.in (Dilip Khandekar) responds:
>   I also encountered the same problem and would be interested in any method
>or projection-model which circumvents this problem. If the pin-hole camera
>model is not a good model for the human eye-brain system then is there any
>other model which is more accurate?

Pictures generated with a standard perspective camera model only look "normal"
if the viewing angle used during rendering matches the angle at which
you view the picture.  If you use a horizontal view angle of theta during
perspective rendering, and view the pictures on a monitor of width w,
then if you view the screen at a distance of d = (w/2) * cot(theta/2), the
picture will not look distorted.  Here's a little table for a w=14" screen:

				     telephoto     about normal     wide angle

    view angle (degrees), theta	    :   10		50		90

    recommended viewing distance, d :   80"		15"		7"
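(The table comes straight from the formula; a quick check, assuming a
14-inch-wide screen:)

    #include <stdio.h>
    #include <math.h>

    int main(void)
    {
        double w = 14.0;                        /* screen width in inches */
        double theta[] = { 10.0, 50.0, 90.0 };  /* rendering view angles  */
        int i;
        for (i = 0; i < 3; i++)                 /* d = (w/2) * cot(theta/2) */
            printf("theta = %2.0f deg  ->  d = %.0f inches\n",
                   theta[i], (w / 2.0) / tan(theta[i] * M_PI / 360.0));
        return 0;
    }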

The same argument applies in photography: shoot a photograph with a
standard (50mm focal length) lens, print it at 8x10" size, say, and it
will look pretty normal when viewed at a distance of about one foot.
If you shoot a picture with a wide angle lens (24mm, say) and print it
at 8x10, you will perceive "perspective distortion" if you view it at a
distance of a foot, but it will look much less distorted if you hold it
close to your face, so that your viewing angle matches the view angle
captured by the lens.

The argument also applies in projection of movies and slides:  there is
only one point in a movie theater from which a viewer will see the same
image as that seen by the camera (i.e. same angle of view).  Theater
geometry and the lenses used for shooting and projection are usually
chosen to put that "ideal viewer position" near the middle of the
theater, I imagine.  Assuming perfect filming and projection and one
eye closed, viewers at this ideal position will not see any distortion
artifacts of the projection -- that is, they will not be able to tell the
difference between a projected film and a window into a 3-D scene.
Viewers not at the ideal viewing position, such as those in the first
row, will see the familiar artifacts of perspective "distortion" that
will easily allow them to distinguish between a projected image and a
real 3-D scene.

Another interesting observation about projections is that you
can project onto ANY shape screen you like (planar, spherical, cube corner,
curtain, human torso, ...) and there will be no artifacts of the
projection if the projection lens matches the shooting lens, the viewer
is right at the projector, and the surfaces are properly finished.

---

Related question: is there a formula relating camera lens
focal length and angle of view?  (I would guess that such a relationship
would not be theoretical, but would be based on practicalities,
and would vary from manufacturer to manufacturer)

Paul Heckbert, Computer Science Dept.
570 Evans Hall, UC Berkeley		INTERNET: ph@miro.berkeley.edu
Berkeley, CA 94720			UUCP: ucbvax!miro.berkeley.edu!ph

zap@lysator.liu.se (Zap Andersson) (11/27/90)

ph@miro.Berkeley.EDU (Paul Heckbert) writes:

>musgrave-forest@CS.YALE.EDU (F. Ken Musgrave) writes:
>> A very wide-angle view of a scene with a sphere in an extreme corner of
>> the image yields a very distorted sphere in the image...

>dilip@ncst.ernet.in (Dilip Khandekar) responds:
>>   I also encountered the same problem and would be interested in any method
>>or projection-model which circumvents this problem. If the pin-hole camera
>>model is not a good model for the human eye-brain system then is there any
>>other model which is more accurate?

>Pictures generated with a standard perspective camera model only look "normal"
>if the viewing angle used during rendering matches the angle at which

[ some extremely interesting blah blah deleted to save bandwidth...]

>curtain, human torso, ...) and there will be no artifacts of the
>projection if the projection lens matches the shooting lens, the viewer
>is right at the projector, and the surfaces are properly finished.

This rings a bell in my mind, since I once by mistake in my raytracer did
a purely 'angular' perspective model, i.e. the x axis of the display was
really the angle of the ray in the ground plane, and the 'y' coordinate was
the angle from that plane... the image looked.... different.... well, has
anybody implemented some kind of 'different' perspective model? Ideally,
you would hook up the user's face to the screen via a telescopic pole,
and pull it via pneumatic cylinders to the correct viewing distance, and
the rendering equation should of course not project onto a plane, but
onto a slightly curved ditto (i.e. a computer monitor). Now we would
hear no more whining about perspective distortion.... ;-)

No, seriously, has anybody tried twiddling with it? I did (by mistake) in my
tracer, but, well.... nah, not good... The problem is, also, that these
twiddlings are only easy to do in raytracers, since linear things (polygons
and such) may get non-linear sides when you twiddle (har har for you
Z-buffalos ;-)

Let me know if you find something stunning!

>---

>Related question: is there a formula relating camera lens
>focal length and angle of view?  (I would guess that such a relationship
>would not be theoretical, but would be based on practicalities,
>and would vary from manufacturer to manufacturer)

Well it very well does, doesn't it?  Only that the "standard" equation IS
based on the practicality of the film being 35mm and nothing else.  I doubt
that a mm is different from manufacturer to manufacturer (although I know
that inches are different from manufacturer to manufacturer ;-)

(((Yeah I'm metric - I'm from Sweden, goddamnit ;-)))

>Paul Heckbert, Computer Science Dept.
>570 Evans Hall, UC Berkeley		INTERNET: ph@miro.berkeley.edu
>Berkeley, CA 94720			UUCP: ucbvax!miro.berkeley.edu!ph





-- 
* * * * * * * * * * * * * * * * *
* My signature is smaller than  *
* yours!  - zap@lysator.liu.se  *
* * * * * * * * * * * * * * * * *

ph@miro.Berkeley.EDU (Paul Heckbert) (11/29/90)

In response to the question I asked earlier:
 | Is there a formula relating camera lens
 | focal length and angle of view?  (I would guess that such a relationship
 | would not be theoretical, but would be based on practicalities,
 | and would vary from manufacturer to manufacturer)

I've received several lucid replies:

------------------------------------------------------------
From greg@hobbes.lbl.gov Mon Nov 26 22:46:43 1990
From: greg@hobbes.lbl.gov (Gregory J. Ward)
Subject: Re:  lens angles

The relationship is fairly straightforward as I understand it.  Think
pyramid where the width and length of the base are defined by the
image dimensions and the height is given by the focal length.  The
formula for the angle is simply:

	angle = 2 * atan(film_size/2 / focal_length)

For 35mm film, whose image dimensions are 34mm by 23mm (approx.),
the view angles for a standard 50mm lens are 37.6 by 25.9 degrees.

-Greg

------------------------------------------------------------
From grant%delvalle.llnl.gov@lll-lcc.llnl.gov Tue Nov 27 03:20:22 1990
From: grant%delvalle.llnl.gov@lll-lcc.llnl.gov (Chuck Grant)

If Ken is rendering an outdoor picture at night with the moon in
it, then it is probably a very wide angle picture, and you are absolutely
correct that the answer is "stick your face closer to the screen and it will
look ok."

You state:

>there is only one point in a movie theater from which a viewer will see the
>same image as that seen by the camera (i.e. same angle of view).  Theater
>geometry and the lenses used for shooting and projection are usually
>chosen to put that "ideal viewer position" near the middle of the
>theater, I imagine.

You don't have to imagine. You are exactly right. I remember this from
filmmaking books I read in high school. A 35mm movie camera uses 50mm lenses as
the "normal" lens. A 35mm film projector uses a 100mm lens so the picture looks
right when you are seated halfway between the projector and the screen.  (I
don't remember the numbers for "Panavision" or other wide screen systems.) Use
of telephoto or wide angle lenses in the camera produces some distortion to the
viewer at the center of the theater.

This is something very important to film directors. All films are done this
way. They understand it. I have never heard this issue mentioned in the context
of computer graphics. Maybe no one knows this?

As to your question:

>is there a formula relating camera lens focal length and angle of view?

Such a formula is simple if the lens has no distortion and the size and shape
of the film are known.  Here "distortion" means distortion in the strict
optical-aberration sense.  Any optical system is subject to several
aberrations to varying degrees.  These are:

spherical aberration
coma
astigmatism
curvature of field
distortion
longitudinal chromatic aberration
lateral chromatic aberration

Distortion is non-uniform magnification across the field of view. Magnification
usually varies slightly as the angle off the optical axis varies. This gives
rise to "barrel" (negative) or "pincushion" (positive) distortion, named
for what the image of a square looks like when subjected to said distortion.

For camera lenses, you can safely assume the distortion is very small except
for wide angle lenses (which is probably the case you were interested in). 
I expect that lens manufacturers would be reluctant to release the actual
numbers describing their lenses' performance, since most people couldn't tell
the difference anyway.

For a lens focused at infinity and flat film,

	fov = 2 * atan ( film_width / ( 2 * focal_length ) )

The only difference between a distortionless lens and an ideal pinhole camera
with respect to field of view is that if the lens is focused at a finite
distance, you replace focal_length in the above formula with

	1/(1/focal_length - 1/object_distance)

which is the lens-to-image (film) distance.
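(In code, with both cases folded together -- a sketch assuming a
distortionless thin lens; the function name is made up:)

    #include <math.h>

    /* Field of view in degrees.  Pass object_distance <= 0 for focus at
     * infinity; otherwise the lens-to-film distance is adjusted using the
     * thin-lens equation, as described above.                              */
    double fov_deg(double film_width, double focal_length, double object_distance)
    {
        double image_dist = focal_length;
        if (object_distance > 0.0)
            image_dist = 1.0 / (1.0 / focal_length - 1.0 / object_distance);
        return 2.0 * atan(film_width / (2.0 * image_dist)) * 180.0 / M_PI;
    }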

Chuck

------------------------------------------------------------
From awpaeth@watcgl.waterloo.edu Tue Nov 27 18:46:56 1990
From: Alan Wm Paeth <awpaeth@watcgl.waterloo.edu>
Subject: Fields of View

    >Related question: is there a formula relating camera lens
    >focal length and angle of view?

The way this is done for a rectilinear lens (everything one ever encounters
short of optics with high distortions or a fisheye) is based on the size
of the image formed. In an "ideal" lens -- a pin-hole is a nice first
approximation -- the field is as large as you want -- the negative or
film holder or other physical limitation defines the "field stop". In a
more complex lens the diameter of some internal element might serve to
define the field stop -- a 50mm lens for a 35mm SLR would probably not
be able to produce a ~250 mm image circle if mounted on a large format
camera's lens board. If it could, it would make a tremendous wide angle!

If the linear field F is given, then you can use the dimensionless relation:

                 2 tan(A/2) = F/EFL

to solve for angular field A or effective focal length.

This says (for instance) that a conventional SLR with 43 mm film diagonal
(35 mm film is 24 mm x 36 mm; hypot(24,36)~=43) will cover a reasonable,
mildly wide-angle 53 degree (diagonally) angular field with a 43 mm lens
installed.

   /Alan

------------------------------------------------------------

So using the (theoretical! yay!) relation

	angle = 2 * atan(film_size/2 / focal_length)

with 35mm film format, which I'm taking to be 34mm wide by 23mm high (Greg
Ward's numbers; Alan Paeth's numbers differ slightly: 24x36), we get the
following correspondence for some common lens focal lengths:

	focal length (mm)	24	35	50	80	200

	horizontal angle (deg)	70.6	51.8	37.6	24.0	9.72

	vertical angle (deg)	51.2	36.4	25.9	16.4	6.58
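(A short program to regenerate the table, using the 34mm x 23mm frame the
angles above were computed from:)

    #include <stdio.h>
    #include <math.h>

    int main(void)
    {
        double focal[] = { 24.0, 35.0, 50.0, 80.0, 200.0 };  /* mm */
        int i;
        for (i = 0; i < 5; i++)
            printf("f = %3.0fmm   horizontal %5.1f deg   vertical %5.2f deg\n",
                   focal[i],
                   2.0 * atan(34.0 / (2.0 * focal[i])) * 180.0 / M_PI,
                   2.0 * atan(23.0 / (2.0 * focal[i])) * 180.0 / M_PI);
        return 0;
    }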

-Paul

Paul Heckbert, Computer Science Dept.
570 Evans Hall, UC Berkeley		INTERNET: ph@miro.berkeley.edu
Berkeley, CA 94720			UUCP: ucbvax!miro.berkeley.edu!ph