[comp.graphics] 3D Rotations/Instancing

gavin@krypton.SGI.COM (Gavin Bell) (01/27/89)

	Here's another way of keeping error out of your rotation matrices:
don't store rotations as a matrix; instead, store them as 'Euler
parameters' and convert to a matrix only when you have to.
	Euler parameters take up 1/4 of the room of a rotation matrix (4 floats
instead of 16), and are a lot easier to normalize.
	So, here's the code.  I know that it works; I've used it to spin a cube
at 30 frames/second for over a week, with no deformation of the cube.  The
only thing missing is the vector routines, which are boring anyway, and
are left as an exercise for the reader.  Note that this code is far from
optimized; in my case, optimization wasn't necessary, since most of the
program's time was spent either interacting with the user or drawing the
objects to be displayed.
	If you really want me to, I could dig up the reference to Euler
parameters.  And, if you are really lazy, I could even be convinced to
send you the source to the vector routines.

/*
 * Given an axis and angle, compute Euler parameters.
 * a is the axis (3 floats), phi is the angle in radians, and
 * e is where to put the resulting Euler parameters (4 floats).
 */
void
axis_to_euler(float *a, float phi, float *e)
{
	vnormal(a);	/* Normalize axis */
	vcopy(a, e);
	vscale(e, fsin(phi/2.0));
	e[3] = fcos(phi/2.0);
}
/*
 *	Given two rotations, e1 and e2, expressed as Euler parameters,
 * figure out the equivalent single rotation and stuff it into dest.
 *
 * This routine also normalizes the result every COUNT times it is
 * called, to keep error from creeping in.
 */
#define COUNT 100
void
add_eulers(float *e1, float *e2, float *dest)
{
	static int count=0;
	register int i;
	float t1[3], t2[3], t3[3];
	float tf[4];

	vcopy(e1, t1); vscale(t1, e2[3]);
	vcopy(e2, t2); vscale(t2, e1[3]);
	vcross(e2, e1, t3);	/* t3 = e2 cross e1 */
	vadd(t1, t2, tf);	/* tf = t1 + t2 (if this were C++) */
	vadd(t3, tf, tf);	/* tf += t3 */
	tf[3] = e1[3] * e2[3] - vdot(e1, e2);

	/* I use tf[] so that e1 or e2 can be the same as dest */
	for (i = 0; i < 4; i++) dest[i] = tf[i];
	if (++count > COUNT)
	{
		count = 0;
		normalize_euler(dest);
	}
}
/*
 * Euler parameters always obey:  a^2 + b^2 + c^2 + d^2 = 1.0
 * We'll normalize based on this formula, by recomputing the greatest
 * component from the other three.  Recomputing the greatest one avoids
 * trouble when the recomputed component is close to zero: the other
 * components' squares could then add up to more than 1.0 because of
 * rounding error, handing fsqrt() a negative number.
 */
void
normalize_euler(float *e)
{	/* Normalize result */
	int which, i;
	float gr;

	which = 0;
	gr = e[which];
	for (i = 1 ; i < 4 ; i++)
	{
		if (fabs(e[i]) > fabs(gr))
		{
			gr = e[i];
			which = i;
		}
	}
	e[which] = 0.0;	/* So it doesn't affect next operation */

	e[which] = fsqrt(1.0 - (e[0]*e[0] + e[1]*e[1] +
		e[2]*e[2] + e[3]*e[3]));

	/* Check to see if we need negative square root */
	if (gr < 0.0)
		e[which] = -e[which];
}

/*
 * Build a rotation matrix, given Euler parameters.
 * (Matrix here is a 4x4 array of floats.)
 */
void
build_rotmatrix(Matrix m, float *e)
{
	m[0][0] = 1 - 2.0 * (e[1] * e[1] + e[2] * e[2]);
	m[0][1] = 2.0 * (e[0] * e[1] - e[2] * e[3]);
	m[0][2] = 2.0 * (e[2] * e[0] + e[1] * e[3]);
	m[0][3] = 0.0;

	m[1][0] = 2.0 * (e[0] * e[1] + e[2] * e[3]);
	m[1][1] = 1 - 2.0 * (e[2] * e[2] + e[0] * e[0]);
	m[1][2] = 2.0 * (e[1] * e[2] - e[0] * e[3]);
	m[1][3] = 0.0;

	m[2][0] = 2.0 * (e[2] * e[0] - e[1] * e[3]);
	m[2][1] = 2.0 * (e[1] * e[2] + e[0] * e[3]);
	m[2][2] = 1 - 2.0 * (e[1] * e[1] + e[0] * e[0]);
	m[2][3] = 0.0;

	m[3][0] = 0.0;
	m[3][1] = 0.0;
	m[3][2] = 0.0;
	m[3][3] = 1.0;
}
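	For reference, a typical per-frame use of these routines looks
something like this (just a sketch; 'orientation' holds the accumulated
rotation and should start out as the identity {0, 0, 0, 1}, and the
drawing step is a placeholder -- swapping the first two arguments of
add_eulers() changes the order in which the two rotations compose):

/*
 * Accumulate a small per-frame spin into 'orientation', then build a
 * matrix from it.  Sketch only.
 */
void
spin_object(float *orientation, float *spin_axis, float spin_angle)
{
	float spin[4];
	Matrix m;

	axis_to_euler(spin_axis, spin_angle, spin);
	add_eulers(spin, orientation, orientation);
	build_rotmatrix(m, orientation);
	/* ... multiply m onto your matrix stack and draw the object ... */
}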
-------------
Insert Standard Disclaimer here
--gavin    (gavin@sgi.com)

cme@cloud9.Stratus.COM (Carl Ellison) (01/28/89)

In article <5909@leadsv.UUCP>, kallaus@leadsv.UUCP (Jerry Kallaus) writes:
> 
> Rotation matrices are orthonormal matrices which have the property
> that they are their own inverse.

Almost.

Their TRANSPOSE is their inverse.

kallaus@leadsv.UUCP (Jerry Kallaus) (02/01/89)

> In article <5909@leadsv.UUCP>, kallaus@leadsv.UUCP (Jerry Kallaus) writes:
    (Jerry demonstrates an incredible lack of math talent.)
> In article <25598@sgi.SGI.COM>, andru@rhialto.SGI.COM (Andrew Myers) writes:
    (Andrew notices this.)

Thanks to Andrew Myers for pointing out errors in my previous posting in
this thread.  They really were gross.  I abandoned that approach, and offer
the following observation.

A well-known iterative method for improving an estimate of the inverse
of a matrix is as follows (the n and n+1 refer to the iteration step).
If the matrix Bn is an estimate of the inverse of the matrix A,
then   A Bn - I = E ,  and if the norm of E is less than 1, the iteration

    Bn+1 = Bn ( 2I - A Bn )         ( 1 )

converges quadratically to the correct inverse, since
    A Bn+1 = I - E**2 .

The original problem was the orthonormalization of a rigid body rotation
matrix which contained errors due to frequent updating.
If the matrix A is a rigid body rotational transformation matrix, then
    A* A = I   and   A A* = I     ( where * denotes transpose ).
In other words, the transpose of A is the inverse of A.  If we have such
a matrix, only contaminated with errors, we could consider using
equation (1) to find the inverse of the contaminated matrix, using the
transpose of the contaminated matrix as the initial estimate for Bn.
However, this would only give us the inverse of the contaminated matrix,
and if this result were iterated through equation (1) again, we would
simply bounce back to the original A* (approximately).  Presumably, after
a single iteration of (1) the differences in the elements of
A and A-* (A inverse transpose) are small enough to make some linearity
assumptions.  The following is an intuitive supposition (and possibly
quite wrong); a math derivation evades me.

It seems to me that the approximation to the desired
(nearest to A in some sense) orthonormal matrix is halfway between A and A-*.
If so, then the two could be averaged, or, rewriting (1) as

       Bn+1 = Bn + ( Bn - Bn A Bn )

the step update delta is    Bn - Bn A Bn .
Using half of this step size and replacing Bn with  An*  gives

       An+1*  = (1/2) An* ( 3I - A An* )         ( 2 )

which involves basically two matrix multiplies.
Depending on the accuracy with which rotational matrices
are being updated, only one iteration of (2) would probably be
needed per 10 to 1000 rotation updates.
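
For what it's worth, a single iteration of (2), starting from An = A
(the contaminated matrix itself), works out to

       A1 = ( 3A - A A* A ) / 2

A rough, untested sketch in C, assuming 3x3 row-major float matrices:

/*
 * One iteration of (2):  A <- (3A - A A* A) / 2.  Untested sketch.
 */
void
reorthonormalize(float a[3][3])
{
	float aat[3][3], tmp[3][3];
	int i, j, k;

	/* aat = A A*  */
	for (i = 0; i < 3; i++)
		for (j = 0; j < 3; j++) {
			aat[i][j] = 0.0;
			for (k = 0; k < 3; k++)
				aat[i][j] += a[i][k] * a[j][k];
		}

	/* tmp = (A A*) A  */
	for (i = 0; i < 3; i++)
		for (j = 0; j < 3; j++) {
			tmp[i][j] = 0.0;
			for (k = 0; k < 3; k++)
				tmp[i][j] += aat[i][k] * a[k][j];
		}

	/* A = (3A - tmp) / 2  */
	for (i = 0; i < 3; i++)
		for (j = 0; j < 3; j++)
			a[i][j] = 0.5 * (3.0 * a[i][j] - tmp[i][j]);
}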

Once again, the above is only an intuitive derivation, and sometimes
my intuition takes a hike.  Unfortunately, I don't have time to
try it.
-- Jerry
-- 
Jerry Kallaus         {pyramid.arpa,ucbvax!sun!suncal}leadsv!kallaus
(408)742-4569
     "Funny, how just when you think life can't possibly get
      any worse, it suddenly does." - Douglas Adams

moore@svax.cs.cornell.edu (Doug Moore) (02/02/89)

Since no one else has mentioned this, I guess I will.  Why not use quaternions,
rather than rotation matrices, to represent your rotations?  Unit quaternions
correspond to 3-d rotations (two-to-one, since q and -q give the same rotation),
and quaternions don't require the redundant storage and calculation that 3x3
matrices do.

A quaternion may be thought of as an entity of the form <s,x>, where s is a
scalar and x is a 3-vector.  Multiplication of quaternions is given by
<s1,x1> * <s2,x2> = <s1 * s2 - x1 dot x2, s1*x2 + s2*x1 + x1 cross x2>.
A unit quaternion is one that satisfies s*s + x dot x = 1.  A unit quaternion
may also be thought of as a rotation of angle 2 arccos s about the axis x.  To
rotate a vector v by a rotation quaternion q to get a vector w, use the
formula <0,w> = inv(q) * <0,v> * q, where inv(q) * q = <1,0>, and, for a unit
quaternion, inv(<s,x>) = <s,-x>.  Or, if you prefer, form the equivalent
rotation matrix

1 - 2 (x2*x2 + x3*x3)     2 (x1*x2 + s*x3)          2 (x1*x3 - s*x2)
2 (x1*x2 - s*x3)          1 - 2 (x1*x1 + x3*x3)     2 (x2*x3 + s*x1)
2 (x1*x3 + s*x2)          2 (x2*x3 - s*x1)          1 - 2 (x1*x1 + x2*x2)

and use that.
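
In C, the multiplication rule boils down to something like this (a quick
sketch; the struct layout is just one possible choice):

typedef struct { float s, x[3]; } Quat;	/* <s, x> as above */

/* dest = q1 * q2; dest may be the same as q1 or q2 */
void
quat_mult(Quat *q1, Quat *q2, Quat *dest)
{
	Quat t;

	t.s = q1->s * q2->s
	    - (q1->x[0]*q2->x[0] + q1->x[1]*q2->x[1] + q1->x[2]*q2->x[2]);
	t.x[0] = q1->s * q2->x[0] + q2->s * q1->x[0]
	       + q1->x[1]*q2->x[2] - q1->x[2]*q2->x[1];
	t.x[1] = q1->s * q2->x[1] + q2->s * q1->x[1]
	       + q1->x[2]*q2->x[0] - q1->x[0]*q2->x[2];
	t.x[2] = q1->s * q2->x[2] + q2->s * q1->x[2]
	       + q1->x[0]*q2->x[1] - q1->x[1]*q2->x[0];
	*dest = t;
}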

The basic algorithm, then, to display vectors V = <v1 ; v2 ; v3 ; ... ; vn>
rotating by q every frame is

rot = <1,0>
do-forever
	R = rotation matrix associated with rot
	DISP = R * V
	display all vectors in DISP
	rot = rot * q
	norm = rot.s * rot.s + rot.x1 * rot.x1 + rot.x2 * rot.x2 + rot.x3 * rot.x3
	if (abs(norm - 1) > tolerance)
		norm = sqrt(norm)
		rot.s = rot.s/norm
		rot.x1 = rot.x1/norm
		rot.x2 = rot.x2/norm
		rot.x3 = rot.x3/norm
	endif
enddo
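
The renormalization at the bottom of the loop is cheap; in C it is just
(a sketch, using the Quat struct from the multiplication example above):

#include <math.h>	/* for fabs() and sqrt() */

#define TOLERANCE 1.0e-5	/* arbitrary; pick what you need */

/* Rescale q to unit length if it has drifted too far from 1 */
void
quat_renormalize(Quat *q)
{
	float norm;

	norm = q->s * q->s
	     + q->x[0]*q->x[0] + q->x[1]*q->x[1] + q->x[2]*q->x[2];
	if (fabs(norm - 1.0) > TOLERANCE) {
		norm = sqrt(norm);
		q->s    /= norm;
		q->x[0] /= norm;
		q->x[1] /= norm;
		q->x[2] /= norm;
	}
}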

That's the general idea, anyway.  For a less terse exposition, see 
Ken Shoemake, "Animating Rotation with Quaternion Curves", COMPUTER GRAPHICS
Vol 19 No 3, pp. 245-254.

Incidentally, similar quaternion techniques can be used for 4-d rotations.
I haven't been able to get a handle on higher dimensions, though.

Doug Moore (moore@svax.cs.cornell.edu)

skinner@saturn.ucsc.edu (Robert Skinner) (02/02/89)

In article <5941@leadsv.UUCP> kallaus@leadsv.UUCP (Jerry Kallaus) writes:
>> In article <5909@leadsv.UUCP>, kallaus@leadsv.UUCP (Jerry Kallaus) writes:
>    (Jerry demonstrates an incredible lack of math talent.)
>> In article <25598@sgi.SGI.COM>, andru@rhialto.SGI.COM (Andrew Myers) writes:
>    (Andrew notices this.)
>
>Thanks to Andrew Myers for pointing out errors in my previous posting in
>this thread.  They really were gross.  I abandoned that approach, and offer
>the following obsersvation.
>
  ... Jerry adds more math...

This is all true, but (IMHO) it seems too complicated.
Remember what orthonormal means:  each column of the matrix is a unit
vector, and each column is perpendicular to every other column.  (And
since the transpose of the matrix is its inverse, and is also
orthonormal, the same is true of the rows.)
We can use that to fix up the matrix very quickly.

Look at it this way, if we have the following vector/matrix:

[x y z] | a b c |
	| d e f |
	| g h i |,

then the X axis, [1 0 0], gets transformed into [a b c],
	and Y: [0 1 0] --> [d e f]
	and Z: [0 0 1] --> [g h i]

now since X x Y = Z, this means that
	[a b c] x [d e f] = [g h i]

So normalize [a b c] to get [a' b' c'], then
compute 
	[a' b' c'] x [d e f] 

and normalize to get [g' h' i']
Now you can calculate the correct Y component from the corrected
X and Z components.
	[g' h' i'] x [a' b' c'] = [d' e' f'] 

You don't have to normalize, because X' and Z' are already unit length
and perpendicular.
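
In C, the whole fixup is just a handful of vector operations (a sketch,
assuming a 3x3 row-major float matrix with the X, Y, Z images in the
rows, and sqrt() from the math library):

/* Re-orthonormalize a slightly-off rotation matrix (sketch only) */
void
fixup_matrix(float m[3][3])
{
	float len;
	int i;

	/* Normalize the X row */
	len = sqrt(m[0][0]*m[0][0] + m[0][1]*m[0][1] + m[0][2]*m[0][2]);
	for (i = 0; i < 3; i++) m[0][i] /= len;

	/* Z = X cross Y, then normalize */
	m[2][0] = m[0][1]*m[1][2] - m[0][2]*m[1][1];
	m[2][1] = m[0][2]*m[1][0] - m[0][0]*m[1][2];
	m[2][2] = m[0][0]*m[1][1] - m[0][1]*m[1][0];
	len = sqrt(m[2][0]*m[2][0] + m[2][1]*m[2][1] + m[2][2]*m[2][2]);
	for (i = 0; i < 3; i++) m[2][i] /= len;

	/* Y = Z cross X; already unit length and perpendicular */
	m[1][0] = m[2][1]*m[0][2] - m[2][2]*m[0][1];
	m[1][1] = m[2][2]*m[0][0] - m[2][0]*m[0][2];
	m[1][2] = m[2][0]*m[0][1] - m[2][1]*m[0][0];
}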

---
This seems simpler to me, even though you have to compute two square roots
to normalize.  But you can use Newton's method to compute your own square root.
The vectors you're normalizing are always near unit length, so you can use 1.0 
as a starting point and get pretty fast convergence.  You may need only
3 or 4 steps (with one + and one * per step, I think), and you can unroll
the loop to make it even faster.  The nice thing is that you can tailor
the routine to make it as accurate as you want.
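
For example, since the vector lengths are always close to 1.0, you can
freeze the 1/(2x) factor in the Newton step at 1/2 and iterate with no
divides at all; each pass is a couple of multiplies and an add (just a
sketch):

/*
 * Square root of a value known to be near 1.0, using Newton's method
 * with the 1/(2x) factor frozen at 1/2 (so there are no divides).
 */
float
near_one_sqrt(float a)
{
	float x = 1.0;	/* starting guess; a is near 1.0 */
	int i;

	for (i = 0; i < 4; i++)		/* 3 or 4 steps, or unroll */
		x += 0.5 * (a - x * x);
	return x;
}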

If I remember correctly, you were starting with some integer approximations
anyway, so a few times through this loop may be adequate.

hope this helps,
Robert Skinner
skinner@saturn.ucsc.edu

	

ceb@ethz.UUCP (Charles Buckley) (02/02/89)

In article <25633@sgi.SGI.COM> gavin@krypton.SGI.COM (Gavin Bell) writes:

	   Here's another way of keeping error out of your rotation matrices:
 and store rotations as 'Euler parameters'.

	   If you really want me to, I could dig up the reference to Euler

How about Kane and Likins, Spacecraft Dynamics, published in the early
80's?  Readable, informative, all that.

cs161agc@sdcc10.ucsd.EDU (John Schultz) (02/04/89)

  Since I am specifying rotations as angles (yaw, roll, pitch), wouldn't
it be fastest and simplest to rotate the coordinate system and then
solve for the new angles?  Arcsin could be tabled, and no square roots
would have to be calculated.  Can it be done, math whizzes?


  Thanks for the suggestions,

  John Schultz

gavin@krypton.SGI.COM (Gavin Bell) (02/06/89)

In article <769@ethz.UUCP>, ceb@ethz.UUCP (Charles Buckley) writes:
> How about Kane and Likins, Spacecraft Dynamics, published in the early
> 80's?  Readable, informative, all that.

Thank you -- that is the reference I was referring to.  I had been asked
to dig up the reference, and found that the xeroxed copy of chapters 1
and 2 that I have has no bibliographic information at all.  It is
indeed quite helpful.

--gavin