[comp.sys.ibm.pc.programmer] Random Number Generation in Turbo C 2.0

rmandel@grad1.cis.upenn.edu (03/15/90)

Hi folks,

I'm writing a simulator in Turbo C 2.0, and I need to generate
a pseudo-random number with Gaussian distribution. It doesn't
have to be statistically perfect, but a good approx. is
necessary. I'm using TC's 'random' function to produce my
basic flat distribution (I know random doesn't actually
generate a flat distribution, but if you discard all numbers
between 0 and about 0.4, the rest of the distribution i.e.
0.4 to 1.0 is more or less flat), but have been struggling
to find a way to transform this to a normal/Gaussian/white noise
distribution. 

Has anyone done this? Any help or code fragments (not necessarily
in C) would be greatly appreciated.

Thanks,
Robbie Mandelbaum.

gunther@jhunix.HCF.JHU.EDU (Gunther Wil Anderson) (03/15/90)

The code is presented straightforwardly in _The Science of Fractal
Images_ by Heinz-Otto Peitgen and Dietmar Saupe.  I have included it
here, but that book is quite worthwhile, not only for fractals of all
types in theory and practice, but also for general random number
algorithms.

#include <stdlib.h>	/* rand(), srand(), RAND_MAX */
#include <math.h>	/* sqrt() */

#define MAXLEVEL 200

int 		Arand, Nrand;
double 		GaussAdd, GaussFac;
double		delta[MAXLEVEL];	/* used by the book's fractal routines */

void InitGauss(seed)
int seed;
{
	Nrand = 4;		/* number of uniform deviates to sum */
	Arand = RAND_MAX;	/* rand() returns 0..RAND_MAX, not 0..INT_MAX */
	GaussAdd = sqrt(3.0*(double)Nrand);
	GaussFac = 2*GaussAdd/((double)Nrand*(double)Arand);
	srand((unsigned)seed);
}

/* approximate N(0,1) deviate: scaled sum of Nrand uniform deviates */
double Gauss()
{
	double sum;
	int i;
	
	sum = 0.0;
	for(i = 1; i <= Nrand; i++)
		sum += (double)rand();
	return(GaussFac*sum - GaussAdd);
}

And voila, Gaussian random variables.

Hope this helps,

Gunther Anderson
gunther@jhunix.hcf.jhu.edu
gunther@jhuvms.hcf.jhu.edu
gunther@crabcake.cs.jhu.edu

raymond@maypo.berkeley.edu (Raymond Chen) (03/15/90)

eisen@jhunix.UUCP (Gunther Wil Anderson) presented an algorithm
that claims to produce normally-distributed random numbers.

If you read the code carefully, however, you'll observe that
it really doesn't do that.  It merely produces a crude approximation
to a normal distribution, based on the principle that the sum
of a large number of uniform distributions looks sort of like
a normal distribution.

In particular, the function will never return values with absolute
value greater than sqrt(12): with Nrand = 4, GaussAdd = sqrt(12), and
since the sum lies in [0, Nrand*Arand], the result GaussFac*sum - GaussAdd
is confined to [-sqrt(12), sqrt(12)].  A true normal distribution has
unbounded tails.

To do it right, pull out your dusty copy of Knuth's "Seminumerical
Algorithms" or just derive it yourself:  If D(x) is a probability
distribution, then let y = C(x) be a solution to the following equation:

	integral from -infinity to y   D(t) dt  = x

Then the function C(U) yields the desired distribution, where U
is a source of uniformly-distributed random numbers from 0 to 1.

(As a check, plug in D(x) = the characteristic function of [0,1], 
and observe that C(x) is the identity function on (0,1).)
--
raymond@math.berkeley.edu         mathematician by training, hacker by choice

bb16@prism.gatech.EDU (Scott Bostater) (03/15/90)

In article <1990Mar14.195315.2985@agate.berkeley.edu> raymond@math.berkeley.edu (Raymond Chen) writes:
>eisen@jhunix.UUCP (Gunther Wil Anderson) presented an algorithm
>that claims to produce normally-distributed random numbers.
>
>If you read the code carefully, however, you'll observe that
>it really doesn't do that.  It merely produces a crude approximation
>to a normal distribution, based on the principle that the sum
>of a large number of uniform distributions looks sort of like
>a normal distribution.

And a rough approximation at that.  Try summing 10 to 15 points for something
close.
>
>In particular, the function will never return values with absolute
>value greater than sqrt(12).
>
    [ details of doing the integral to compute the normal distribution omitted ]

There is a quick and easy transformation (the Box-Muller transform) that
takes 2 independent uniform variables and provides 2 independent normal
(or Gaussian) random variables.

Essentially, given:
  a,b        { random variables uniform [0,1] }
  sigma      { standard deviation of gaussian variables }
  mean       { mean of gaussian variables }

Then

   u = 2 * pi * a
   v = Sqrt(2) * Sqrt( -ln(b))          { that's log base e }

   x = Cos( u) * v * sigma + mean
   y = Sin( u) * v * sigma + mean

x & y are now independent N(mean,sigma)-distributed variables.

References:  1) _Probability,_Random_Variables,_and_Stochastic_Processes_ by
                 Athanasios Papoulis      (lousy text, good reference)
             2) _Numerical_Recipes_in_[C|Pascal|Fortran]_  (unsure of author) 
                (very good references, wish I owned a copy :-)
             3) _EE_6051_Homework__Set_Number_5,_Problem_3_,  (myself)
-- 
Scott Bostater      GTRI/RAIL/RAD   (Ga. Tech)
"My soul finds rest in God alone; my salvation comes from Him"  -Ps 62.1
uucp:     ...!{allegra,amd,hplabs,ut-ngp}!gatech!prism!bb16
Internet: bb16@prism.gatech.edu