levine@well.sf.ca.us (Ron Levine) (05/01/91)
It is part of the PHIGS specification that all coordinate
systems are right-handed. What does that mean? Why does that
matter in practice?
The recent book on PHIGS from the U.K. defines "right-handed
coordinate system" with the help of a drawing of a CRT
Workstation with x and y axes in the plane of the screen and a z
axis sticking out toward the eye of the viewer. It defines
handedness by saying, "in a right handed coordinate system, the
z axis comes out of the screen as z increases. ... If the z axis
were to go into the screen the system would be called left-
handed, and it would be the mirror image of a right-hand system",
along with the italics and boldface which indicate a definition.
That is all that this book says on the subject.
What if the workstation and its screen were not in the picture?
How could I tell my right from my left, then, I wonder? And
then, if I turned the workstation around and left the coordinates
fixed, the old positive z would then be the negative z and that
would appear to change the handedness by this definition. Does
the handedness of an arbitrary fixed coordinate system depend on
the position or orientation of a phantom workstation, I wonder?
At least two of the five coordinate systems which are always
present in PHIGS are not necessarily oriented in any special way
with respect to the workstation display surface. What do I mean
by their handedness?
And who cares? When does it matter whether a coordinate system
is right or left handed? How can the PHIGS programmer avoid
introducing a left handed coordinate system? What could be the
consequences if she inadvertently does so? Is it even possible
to inadvertently do so?
Most mathematician/PHIGS programmers should be able to answer all
these questions, precisely and operationally. I think that
perhaps many computer scientist/PHIGS programmers cannot.
All 3D coordinate systems fall into one of two equivalence
classes, one arbitrarily called right handed, the other left. In
this discussion, a coordinate system is defined by an ordered set
of mutually perpendicular axes, and the ordering is important.
That is, the xyz system and the yxz system are different. And
they are of different handedness classes. But xyz and yzx have
the same handedness. The positive senses of the axes are also
important: xyz has different handedness from (-x)yz, but the same
as (-x)(-y)z.
You can tell whether two arbitrary 3D systems xyz and uvw have
the same handedness by looking at the matrix of the
transformation between them, in particular the determinant of
the 3x3 linear part. If this determinant is positive, the two
systems have the same handedness, otherwise different.
xyz -> (-x)yz is reflection in the plane x=0, and
xyz -> yxz is reflection in the plane y=x;
both are like reflection in a mirror. So we see that a coordinate
system and its mirror image are of opposite handedness.
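To make the test concrete, here is a small fragment in plain C
(nothing PHIGS-specific) that computes the determinant of the 3x3
linear part and applies it to the examples above: negating one
axis or swapping two axes gives determinant -1 and flips
handedness, while negating two axes gives +1 and preserves it.

    #include <stdio.h>

    /* Determinant of a 3x3 matrix, expanded along the first row. */
    static double det3(double m[3][3])
    {
        return m[0][0] * (m[1][1]*m[2][2] - m[1][2]*m[2][1])
             - m[0][1] * (m[1][0]*m[2][2] - m[1][2]*m[2][0])
             + m[0][2] * (m[1][0]*m[2][1] - m[1][1]*m[2][0]);
    }

    int main(void)
    {
        /* xyz -> (-x)yz: reflection in the plane x = 0. */
        double negate_x[3][3]  = {{-1,0,0},{0,1,0},{0,0,1}};
        /* xyz -> yxz: reflection in the plane y = x. */
        double swap_xy[3][3]   = {{0,1,0},{1,0,0},{0,0,1}};
        /* xyz -> (-x)(-y)z: a half-turn about z, not a reflection. */
        double negate_xy[3][3] = {{-1,0,0},{0,-1,0},{0,0,1}};

        printf("det negate_x  = %g\n", det3(negate_x));  /* -1: flips    */
        printf("det swap_xy   = %g\n", det3(swap_xy));   /* -1: flips    */
        printf("det negate_xy = %g\n", det3(negate_xy)); /* +1: preserves */
        return 0;
    }
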
Because our two hands are mirror images of each other, we are
able to use them as mnemonics for which class is which. Hold
your right hand with the thumb and index finger approximately
mutually perpendicular in the plane of the palm, and the middle
finger approximately perpendicular to the plane of the palm. Now
go look for a coordinate system. When you find it, orient your
right hand so that the thumb points along the x axis, the index
finger along the y axis. If the middle finger points along the z
axis, the coordinate system is right handed. If the middle
finger points along the (-z) axis, the coordinate system is left
handed. Now, you don't need to carry your workstation around
with you, your hands will do.
Very little in computer graphics, or even physics, depends on the
handedness of the coordinate system. The most important thing I
can think of is the formula for computing the cross product of
two vectors from their cartesian components. This is not a
coordinate invariant. Generally, we like to base computational
algorithms on formulas which are coordinate invariant--you change
coordinate system, you get the same result. Unfortunately, the
cross product is not coordinate invariant. The good news is that
it's not too coordinate-dependent, because it depends only on the
handedness of the coordinate system, and it varies only by a
multiplicative sign between systems of different handedness. But
it's mostly due to this non-invariance of the cross product that
we have to pay any attention at all to coordinate handedness.
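Here is a small numerical illustration of that sign dependence
(plain C again, nothing PHIGS-specific): take two vectors, compute
their cross product from components, then do the same after
reflecting both vectors in the plane x=0, that is, after
re-expressing them in a left handed system. The componentwise
result differs from the reflection of the true cross product by
exactly a factor of -1, the determinant of the reflection.

    #include <stdio.h>

    /* Cross product from cartesian components (the usual right-handed
       formula). */
    static void cross(const double a[3], const double b[3], double out[3])
    {
        out[0] = a[1]*b[2] - a[2]*b[1];
        out[1] = a[2]*b[0] - a[0]*b[2];
        out[2] = a[0]*b[1] - a[1]*b[0];
    }

    int main(void)
    {
        double u[3] = {1, 2, 3}, v[3] = {4, 5, 6};
        double uv[3], ruv[3];

        /* Reflection in the plane x = 0: negate the x component. */
        double ru[3] = {-u[0], u[1], u[2]}, rv[3] = {-v[0], v[1], v[2]};

        cross(u, v, uv);     /* u x v, components in the original system */
        cross(ru, rv, ruv);  /* same vectors, components taken in the
                                mirrored (opposite handed) system        */

        /* ruv agrees with -R(u x v): the componentwise formula picks up
           a sign equal to the determinant of the change of coordinates. */
        printf("u x v       = (%g, %g, %g)\n", uv[0], uv[1], uv[2]);
        printf("(Ru) x (Rv) = (%g, %g, %g)\n", ruv[0], ruv[1], ruv[2]);
        printf("-R(u x v)   = (%g, %g, %g)\n", uv[0], -uv[1], -uv[2]);
        return 0;
    }
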
What does this have to do with PHIGS? What are the answers to
the questions in the paragraph above beginning "Who cares"?
Stay tuned. More when I get back from a trip to New York,
leaving in the morning.
Ron Levine
Dorian Research, Inc.

levine@well.sf.ca.us (Ron Levine) (05/07/91)
To recap my original posting, <24534@well.sf.ca.us>, the three
main points are:
(1) If you are a person confronted with an arbitrary 3D cartesian
coordinate system, say as a physical model or drawing, then you
can tell its handedness by the right hand rule: thumb +x, index
+y, middle +z. But if you are a PHIGS program running in a
computer confronted with a coordinate system specified as a
transformation matrix linking it to a system of known handedness
(or equivalently as three basis vectors expressed in terms of
their components with respect to a system of known handedness),
then you must compute the sign of the determinant of the 3x3
linear part of that transformation, by one algorithm or another--
systems related by transformations with positive determinant have
the same handedness.
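When the new system is given as three basis vectors, the
determinant in point (1) is just their scalar triple product. A
minimal sketch in plain C (no PHIGS calls), assuming the
components are given with respect to a right handed reference
system:

    #include <stdio.h>

    /* Handedness of a basis given by the components of its three axis
       vectors with respect to a right handed reference system: the
       scalar triple product ex . (ey x ez) equals the determinant of
       the matrix whose columns are ex, ey, ez. */
    static int same_handedness(const double ex[3], const double ey[3],
                               const double ez[3])
    {
        double cx = ey[1]*ez[2] - ey[2]*ez[1];
        double cy = ey[2]*ez[0] - ey[0]*ez[2];
        double cz = ey[0]*ez[1] - ey[1]*ez[0];
        return ex[0]*cx + ex[1]*cy + ex[2]*cz > 0.0;
    }

    int main(void)
    {
        double x[3] = {1,0,0}, y[3] = {0,1,0};
        double z[3] = {0,0,1}, mz[3] = {0,0,-1};
        printf("x y  z: %s handed\n",
               same_handedness(x, y, z)  ? "right" : "left");
        printf("x y -z: %s handed\n",
               same_handedness(x, y, mz) ? "right" : "left");
        return 0;
    }
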
(2) The only place that coordinate handedness matters in computer
graphics is in operations which make use of the vector cross
product. (There remains to discuss where PHIGS uses the cross
product, and what ill effects might appear in your output display
if you inadvertently or intentionally use a left handed system
when computing one of these cross products. I will discuss these
questions in another article.)
(3) The PHIGS standard document specifies that all coordinate
systems are right handed.
This third point is the subject of this article; in particular,
the fact that this specification is not supported by the rest of
the definition of PHIGS, and is not met in practice. Under the
specified semantics of the relevant PHIGS functions, it is quite
possible for an application to cause any of three of the five
coordinate systems which are always present in PHIGS to be left
handed. I have verified that the possibility exists in my
favorite PHIGS implementation, which conforms to the specified
semantics for these functions. And this is good, because it is
sometimes advantageous for an application to introduce a left
handed system.
Device Coordinates, the most concrete of the PHIGS systems, at
the downstream end of the transformation pipeline, are the easiest
to deal with. And here is an appropriate place to begin with a
picture of a workstation display with axes drawn in it, as in the
recent book from the U.K. Moreover, the PHIGS standard insists
that DC has its origin at the lower left far corner, and the axes
as in the picture. So it is safe to say that Device Coordinates
are always right-handed. But it is rarely INTERESTING to say
that DC are right handed, because most devices in common use have
only 2D display surfaces and the DC z-coordinate has little
practical significance.
Note that, under this specification, PHIGS DC might not coincide
with the natural device coordinates for some devices, for many
devices have y increasing downward. Also note that the
definition in the cited book would not help you determine which
way to add a z-axis to such a system to make it right handed, but
the right hand rule would help you with that decision.
Normalized Projection Coordinates are related to Device
Coordinates by the workstation transformation. The application
does not set this transformation explicitly, but rather provides
two sets of data which define it: the workstation viewport and
the workstation window. The latter is a box (short for
rectangular parallelepiped) in the NPC unit cube, the former a
box in the Device Coordinate space. The only way to give the
workstation transformation a negative determinant is to specify
at least one of these boxes with at least one of its limit-pairs
in the reverse order, and the standard specifies that this shall
be an error. So NPC are always right handed because DC are.
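The arithmetic behind that claim is simple: the workstation
transformation is a per-axis scale plus a translation, so the
determinant of its linear part is the product of the three scale
ratios, and each ratio is positive as long as both limit-pairs
that define it are given in increasing order. A sketch, using an
illustrative box type of my own, not the PHIGS binding's data
structures:

    #include <stdio.h>

    /* Illustrative box type; not the PHIGS binding's data structures. */
    struct box3 { double xmin, xmax, ymin, ymax, zmin, zmax; };

    /* Determinant of the linear part of the box-to-box mapping that
       carries win onto vp: a per-axis scale followed by a translation,
       so the determinant is the product of the three scale ratios.  It
       can be negative only if some limit-pair is in reverse order. */
    static double box_map_det(const struct box3 *win, const struct box3 *vp)
    {
        double sx = (vp->xmax - vp->xmin) / (win->xmax - win->xmin);
        double sy = (vp->ymax - vp->ymin) / (win->ymax - win->ymin);
        double sz = (vp->zmax - vp->zmin) / (win->zmax - win->zmin);
        return sx * sy * sz;
    }

    int main(void)
    {
        struct box3 window   = {0, 1, 0, 1, 0, 1};
        struct box3 viewport = {0, 1279, 0, 1023, 0, 1};
        struct box3 reversed = {0, 1279, 1023, 0, 0, 1};  /* y limits swapped */
        printf("well-ordered viewport: det = %g\n",
               box_map_det(&window, &viewport));
        printf("reversed y limits:     det = %g\n",
               box_map_det(&window, &reversed));
        return 0;
    }
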
View Reference Coordinates are mapped to Normalized Projection
Coordinates by the view mapping matrix, an element of the view
representation, which the application can specify any way it
wants. In particular, the application may intentionally or
inadvertently specify a view mapping matrix with a negative
determinant (say a reflection), and so a View Reference
Coordinate system which is left handed with respect to the right
handed NPC. If you use the utility function EVALUATE VIEW
MAPPING MATRIX 3 to compute this matrix from its defining "view-
volume" boxes, then this accident won't happen, because this
function has error checks preventing it.
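If you do build a view mapping matrix by hand, a cheap guard is to
check the sign of the determinant of its 3x3 linear part before
putting it into the view representation. A sketch, assuming the
usual 4x4 homogeneous layout with the linear part in the upper
left; the helper name and row-major layout are my own, not
anything in the standard:

    /* Returns nonzero if the 4x4 homogeneous matrix m would flip
       handedness, i.e. if the determinant of its upper-left 3x3 block
       is negative. */
    static int flips_handedness(double m[4][4])
    {
        double det = m[0][0] * (m[1][1]*m[2][2] - m[1][2]*m[2][1])
                   - m[0][1] * (m[1][0]*m[2][2] - m[1][2]*m[2][0])
                   + m[0][2] * (m[1][0]*m[2][1] - m[1][1]*m[2][0]);
        return det < 0.0;
    }
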
Similar remarks apply to the view orientation transformation,
from World Coordinates to View Reference Coordinates, another
element of the view representation. You can specify the view
orientation matrix yourself and introduce a handedness change, if
you wish, or you can avoid the problem by using the provided
utility function, EVALUATE VIEW ORIENTATION MATRIX 3, to compute
the matrix from the view orientation data: view plane normal,
view up vector, and view reference point.
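For the record, the textbook construction behind such a utility (a
sketch of the usual recipe, not necessarily the exact arithmetic
of EVALUATE VIEW ORIENTATION MATRIX 3) builds the basis with two
cross products, which makes the resulting View Reference
Coordinate system right handed by construction:

    #include <math.h>

    /* Right handed viewing basis (u, v, n) from a view plane normal
       and a view up vector: n along VPN, u = VUP x n, v = n x u, all
       normalized, so that u x v = n.  A sketch of the usual recipe,
       not the PHIGS utility itself. */
    static void view_basis(const double vpn[3], const double vup[3],
                           double u[3], double v[3], double n[3])
    {
        double len;
        int i;

        len = sqrt(vpn[0]*vpn[0] + vpn[1]*vpn[1] + vpn[2]*vpn[2]);
        for (i = 0; i < 3; i++) n[i] = vpn[i] / len;

        u[0] = vup[1]*n[2] - vup[2]*n[1];          /* u = VUP x n */
        u[1] = vup[2]*n[0] - vup[0]*n[2];
        u[2] = vup[0]*n[1] - vup[1]*n[0];
        len = sqrt(u[0]*u[0] + u[1]*u[1] + u[2]*u[2]);
        for (i = 0; i < 3; i++) u[i] /= len;

        v[0] = n[1]*u[2] - n[2]*u[1];              /* v = n x u,   */
        v[1] = n[2]*u[0] - n[0]*u[2];              /* already unit */
        v[2] = n[0]*u[1] - n[1]*u[0];              /* length       */
    }
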
Modeling Coordinates, at the input end of the transformation
pipeline, are connected to World Coordinates by the modeling
transformation. The modeling transformation is in traversal
state, so may change frequently, and may well be different for
every primitive in a structure. Again, there is nothing to stop
the application from supplying any modeling transformation it
wishes, as an argument to SET LOCAL TRAN 3 or SET GLOBAL TRAN 3,
and so it may make the Modeling Coordinates left handed.
If you use the utility functions TRANSLATE 3, ROTATE X, ROTATE Y,
or ROTATE Z, to determine the transformation matrices for
translations and rotations about coordinate axes, then you will
preserve right handedness. But notice that if you supply a scale
vector with one or three negative components as an argument to
the functions SCALE 3, or BUILD TRAN 3, then you are defining a
transformation with a negative determinant, and it is not a
specified error condition for these functions. So it is possible
to introduce left handed Modeling Coordinates, even when using
only the PHIGS utility functions to compute the transformation
matrices.
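The arithmetic behind that last observation, spelled out: the
linear part of SCALE 3 is a diagonal matrix, so its determinant is
just the product of the three scale components, and an odd number
of negative components makes that product negative.

    /* The linear part of SCALE 3 with scale vector (sx, sy, sz) is the
       diagonal matrix diag(sx, sy, sz), so its determinant is just the
       product of the components.  It is negative exactly when one or
       three of them are negative. */
    static double scale_det(double sx, double sy, double sz)
    {
        return sx * sy * sz;   /* (-1, 1, 1) -> -1;  (-1, -1, 1) -> +1 */
    }
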
Summing up, you can be sure to keep all coordinate systems right
handed if you use only the supplied utility functions to compute
viewing and modeling matrices, and if you avoid using negative
scale factors in the utility functions which take a scale vector
as input. Otherwise, you may intentionally or inadvertently
make Modeling, World, or View Reference Coordinates left handed
with respect to Normalized Projection Coordinates.
Now, I think it is never desirable to use left handed World or
View Reference Coordinates. However, there are times when an
application may well wish to use left handed Modeling
Coordinates: namely, for complex objects with planes of symmetry
(not unusual). Why compute and store all the vertex coordinates
and other data for both halves of a symmetric object? In PHIGS
you can use the same data as argument to two primitive calls,
one generating the mirror image of the effect of the other, by
using the modeling transformation to insert a reflection in the
symmetry plane between the two primitive calls. Reflection
changes handedness. Of course, when you do this, you may have
to take into account the possible ill effects from getting the
wrong signs for cross products, a problem I'll discuss in another
article.
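For concreteness, here is one way to build such a reflection
(generic matrix arithmetic, not a PHIGS call; the function name
and row-major layout are mine): take the symmetry plane as
n.x = d with unit normal n, so the linear part is the Householder
matrix I - 2nn^T, which has determinant -1, and the translation
part is 2dn.

    #include <string.h>

    /* Build the 4x4 homogeneous matrix (row-major, points as column
       vectors) for reflection in the plane n . x = d, where n is a
       unit normal.  The linear part is I - 2 n n^T (determinant -1),
       so composing it into the modeling transformation flips
       handedness. */
    static void reflection_matrix(const double n[3], double d,
                                  double m[4][4])
    {
        int i, j;
        memset(m, 0, 16 * sizeof(double));
        for (i = 0; i < 3; i++) {
            for (j = 0; j < 3; j++)
                m[i][j] = (i == j ? 1.0 : 0.0) - 2.0 * n[i] * n[j];
            m[i][3] = 2.0 * d * n[i];
        }
        m[3][3] = 1.0;
    }

Concatenating such a matrix into the local modeling transformation
between the two primitive calls draws the second primitive as the
mirror image of the first, reusing the same vertex data.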
What we see here is a deficiency in the logical consistency of
the PHIGS specification. It is not the only one.
All this is perhaps in somewhat more detail than would be
appropriate for a book aiming to be a "practical introduction to
PHIGS". However, if such a book raises the issue by attempting
to define "right handed coordinate system", then it ought at least
to give a correct and useful definition, and, further, it should
give the uninitiated reader some clue as to the significance and
scope of the concept.
Ron Levine
Dorian Research, Inc.
(415)-535-1350