[comp.graphics] Random points on a sphere

jdchrist@watcgl.waterloo.edu (Dan Christensen) (05/04/90)

In article <1990Apr27.201704.16711@cs.UAlberta.CA> cdshaw@cs.UAlberta.CA (Chris Shaw) writes:
>How about using 4 random numbers in the range from [0.0, 1.0]
>to generate a quaternion [w,x,y,z]? Normalize the quaternion so that
>(w*w + x*x + y*y + z*z) = 1.0 , and use this as a rotation of a unit vector
>[ 1, 0, 0 ].

I think this presents the same problem as just choosing random x, y and
z from [0.0, 1.0] and normalizing without rejection, namely that the
resulting distribution is not uniform.  The quaternions that your method
generates will be denser near the corners of the hypercube in 4-space,
and normalizing projects that excess density onto the sphere.

Dan Christensen

cwg@m.cs.uiuc.edu (05/05/90)

Re: Random points on surface of sphere (or in its interior).

There are essentially two choices:

(1) choose a region over which it is simple to generate random points and
which contains the sphere, and then reject those points outside.
Hence, since uniform [0,1] variates are usually computationally inexpensive,
generating on a cube is good.  The additional cost, as has been pointed out,
is the 48% of the points that have to be discarded, and the cost
of computing the "discard" function.  (Also, if only the surface points
are needed, the cost of projecting interior points back to the
surface.)  Thus the cost is

(3R + 3M + 2A + C)/0.52

for each point, where R, M, A, and C are costs of random number,
multiply, Add, and comparison.  Projection costs an additional

Q + 3D

per point, where Q and D are costs of square root and divide.

(2) Choose a parameterization of the space, and then generate points
with a probability distribution appropriate to the parameterization.
This has also been proposed, using polar coordinates.  If only the
surface is needed, two parameters suffice, say latitude and
longitude.  The cost is now

2R + T

where T is the cost of computing a trigonometric function, typically
about 6(M+A).  It is probably faster.
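For instance, plugging in some assumed costs, say R = 10 and M = A = C = 1
(in arbitrary units, so T =~ 12): method (1) works out to
(30 + 3 + 2 + 1)/0.52 =~ 69 per accepted point, plus Q + 3D to project,
while method (2) costs 2*10 + 12 = 32.  The real ratio depends on the
relative costs of the random number generator and the trig library on
your machine.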

kassover@jupiter.crd.ge.com (David Kassover) (05/08/90)

The foregoing discussion is much too mathematical for my attention
span to handle in detail.  But a thought occurs to me:

If one were to select Two Methods of generating points, each of which
happened to concentrate the generated points in the region the
other left sparse, would that not be "good enough" for the
application?  (Time vs code vs point density tradeoffs)

I know this is not pure, so once again I apologize.

(I'm sorry, I'm feeling particularly smarmy today 8-)
--
===================================================
David Kassover
kassover@ra.crd.ge.com
kassover@crd.ge.com

kneller@cgl.ucsf.edu (Don Kneller) (05/08/90)

This has been a long-running discussion, and I may have missed
a posted solution, but I believe you can use:

	1) theta = random number between -180 and 180
	2) phi = arccos(random number between -1 and 1)

Step 2) ensures that values of phi near 0 and pi are selected rarely,
while values near pi/2 are relatively common.
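
Here is a minimal C sketch of this recipe (assuming drand48() as the
uniform [0, 1) source, and converting the two angles back to Cartesian
coordinates):

	#include <math.h>	/* sin, cos, acos, M_PI */
	#include <stdlib.h>	/* drand48 */

	/* one uniformly distributed point on the unit sphere */
	void sphere_point(double *x, double *y, double *z)
	{
		double theta = 2.0*M_PI*drand48() - M_PI;  /* longitude in [-pi, pi) */
		double phi   = acos(2.0*drand48() - 1.0);  /* colatitude in [0, pi] */

		*x = sin(phi)*cos(theta);
		*y = sin(phi)*sin(theta);
		*z = cos(phi);
	}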

-----
	Don Kneller
UUCP:		...ucbvax!ucsfcgl!kneller
INTERNET:	kneller@cgl.ucsf.edu

shaw@paralogics.UUCP (Guy Shaw) (05/08/90)

In article <4400066@m.cs.uiuc.edu>, cwg@m.cs.uiuc.edu writes a good
summary of many postings:

> (1) choose a region over which it is simple to generate random points and
> which contains the sphere, and then reject those points outside.

[...]

> (2) Choose a parameterization of the space, and then generate points
> with a probability distribution appropriate to the parameterization.


I have seen several postings which compute the _average_ cost of
computing random points in a cube and rejecting those that lie
outside the inscribed sphere.  Some argue that rejection methods
are cheaper because you don't need to compute trig functions; others
argue that the cost of 1 trig function is less than the average combined
cost of computing, rejecting, and projecting.  The race seems close
enough that all I am convinced of is that a _good_ implementation of one
method is faster than a not-so-good implementation of the other.

The thing that bothers me about using any method that involves
generating random numbers and rejecting those that fail some test
is that, even though an average rejection rate is known, there is
no guarantee that I won't generate a _long_ series of rejected numbers.

This is why I would go out of my way to choose a method involving
transformations, even if the cost of a transformation is more expensive
than the average cost of a rejection method.  But, if for some other
problem, the cost of transformation were so prohibitively expensive that
I must use rejection methods, I would want to put a limit on the length
of a run of rejected points and figure out something reasonable to do if
that limit is exceeded.
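
A sketch of what that might look like (the cap, the near-zero cutoff,
and the fallback are all arbitrary choices here; the fallback is a
polar-coordinate transformation, assuming drand48() as the uniform
source):

	#include <math.h>
	#include <stdlib.h>	/* drand48 */

	#define MAX_TRIES 100	/* arbitrary cap on a run of rejections */

	void sphere_point_capped(double *x, double *y, double *z)
	{
		int i;
		double theta, phi;

		for (i = 0; i < MAX_TRIES; i++) {
			double px = 2.0*drand48() - 1.0;
			double py = 2.0*drand48() - 1.0;
			double pz = 2.0*drand48() - 1.0;
			double r2 = px*px + py*py + pz*pz;

			if (r2 <= 1.0 && r2 > 1.0e-8) {	/* inside, and not too near 0 */
				double r = sqrt(r2);
				*x = px/r;  *y = py/r;  *z = pz/r;
				return;
			}
		}
		/* cap exceeded (probability ~ 0.48^100): do something
		 * reasonable, here the trig method */
		theta = 2.0*M_PI*drand48();
		phi   = acos(2.0*drand48() - 1.0);
		*x = sin(phi)*cos(theta);
		*y = sin(phi)*sin(theta);
		*z = cos(phi);
	}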

Am I just being paranoid?  Is this just a theoretical nit, or does this
count "for all practical purposes", as well?  Does anyone have some pointers
to literature on the subject?

Thanks in advance.
-- 
Guy Shaw
Paralogics
paralogics!shaw@uunet.uu.net  or  uunet!paralogics!shaw

thant@horus.esd.sgi.com (Thant Tessman) (05/09/90)

I haven't followed the whole thing, but I think I've got
a really easy way to distribute random points on a sphere.

// assuming drand48() as the uniform [0, 1) source
x = 2.0*drand48() - 1.0;	// random number [-1, +1]
y = 2.0*drand48() - 1.0;
z = 2.0*drand48() - 1.0;

r = sqrt(x*x + y*y + z*z);

if ( r > 1.0 )
{
	// throw it away
}
else
{
	// project it onto the unit sphere
	x = x/r;
	y = y/r;
	z = z/r;
}

Piece o' cake.

thant

falk@peregrine.Sun.COM (Ed Falk) (05/10/90)

In article <271@paralogics.UUCP> shaw@paralogics.UUCP (Guy Shaw) writes:
>In article <4400066@m.cs.uiuc.edu>, cwg@m.cs.uiuc.edu writes a good
>summary of many postings:

[generate random points on a sphere by generating random points in
a unit cube; reject points outside the sphere, project points inside
the sphere]

>The thing that bothers me about using any method that involves
>generating random numbers and rejecting those that fail some test
>is that, even though an average rejection rate is known, there is
>no guarantee that I won't generate a _long_ series of rejected numbers.
>
> ...
>
>Am I just being paranoid?  Is this just a theoretical nit, or does this
>count "for all practical purposes", as well?  Does anyone have some pointers
>to literature on the subject?

It's always a good idea to be paranoid when looking at algorithms.

In this case, though, I wouldn't worry.  Statistically speaking, the
fraction of rejected points will be

	volume(cube) - volume(sphere)        (4/3)pi         pi
	-----------------------------  =  1 - -------  =  1 - --  =~  48%
	        volume(cube)                     8             6


Of course, it *is* possible to get a long string of random points
outside of the sphere, but with random numbers it's possible to get
long strings of anything.  That's the risk you take with random
numbers, otherwise they wouldn't be random.
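
To put a number on it: each candidate is rejected independently with
probability p =~ 0.48, so a run of k straight rejections has probability
p^k.  Twenty rejections in a row has probability 0.48^20 =~ 4e-7, and
fifty in a row about 1e-16; the tail falls off geometrically, so long
runs are possible but spectacularly unlikely.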

Here's something else to be paranoid about, though: make sure that
this algorithm doesn't divide by zero.  In general, if I'm testing
a number to see if it's zero before dividing, a flag goes up in
my head to tell me to be careful of very small nonzero numbers.
In the points-on-the-sphere case, random points very close to the
origin might cluster when projected to the sphere because of low
precision, or they might be significantly off the surface of the
sphere because of round-off error.  For these reasons, my own
version of the algorithm is

	x = rand[-1 1] ;
	y = rand[-1 1] ;
	z = rand[-1 1] ;
	r = sqrt(x*x + y*y + z*z) ;
	if ( r > 1  ||  r < .01 )
	  reject ;
	else
	{
	  x = x/r ;
	  y = y/r ;
	  z = z/r ;
	}

Finally, as a general note, the sqrt() operation can be saved until
you accept the point, thus reducing the work.
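That is, test r*r against the squared thresholds, and defer the sqrt()
to the accept branch; a sketch of that change to the fragment above:

	r2 = x*x + y*y + z*z ;
	if ( r2 > 1  ||  r2 < .0001 )	/* same cutoffs, squared */
	  reject ;
	else
	{
	  r = sqrt(r2) ;	/* sqrt only for accepted points */
	  x = x/r ;
	  y = y/r ;
	  z = z/r ;
	}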

		-ed falk, sun microsystems
		 sun!falk, falk@sun.com
		 card-carrying ACLU member.