My previous post was about a method to simulate a Brownian motion. A friend of mine emailed me yesterday to point out that the method is useless if we do not know how to simulate a normally distributed variable in the first place.
My first remark is: use the rnorm() function if the quality of your simulation is not critical (later, I'll try to explain why R's default random generation functions are not perfect). However, it can be fun to generate a normal distribution from a simple uniform distribution. So, yes, I lied: I won't create the variable from scratch, but from a uniform distribution.
The method proposed here is really easy to implement, which is why I think it is a really good one. Besides, the result is far from trivial and rather unexpected. It is called the Box-Muller method. You can find the proof of this method here. The proof is not very complicated, but you will need some mathematical background to understand it.
Let u and v be two independent variables, each uniformly distributed on (0, 1). Then we can define:
x = sqrt(-2 log(u)) cos(2 π v)
y = sqrt(-2 log(u)) sin(2 π v)
x and y are then two independent, standard normally distributed variables. The appeal of this method is its extreme simplicity in terms of programming (we only need 9 lines if we don't test the normality of the new variables nor plot the estimate of the density).
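To see the transform in action before running the full program, here is a minimal sketch that applies the formulas to a single pair of uniforms (the seed is arbitrary and only there for reproducibility):

set.seed(1)                                   # arbitrary seed, for reproducibility only
u1 <- runif(1)                                # one draw from U(0, 1)
v1 <- runif(1)                                # a second, independent draw
x1 <- sqrt(-2 * log(u1)) * cos(2 * pi * v1)   # first normal draw
y1 <- sqrt(-2 * log(u1)) * sin(2 * pi * v1)   # second, independent normal draw
c(x1, y1)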
We thus obtain a vector of normally distributed variables. The Lilliefors test doesn't reject the null hypothesis of a normal distribution. Besides, we can plot an estimate of the density of the variables: the resulting plot does indeed look like the Gaussian density.
The program (R):
# load the library used to test the normality of the distribution
library(nortest)
size <- 100000
u <- runif(size)   # first uniform sample
v <- runif(size)   # second uniform sample
x <- rep(0, size)
y <- rep(0, size)
for (i in 1:size) {
  x[i] <- sqrt(-2 * log(u[i])) * cos(2 * pi * v[i])
  y[i] <- sqrt(-2 * log(u[i])) * sin(2 * pi * v[i])
}
# Lilliefors test for normality
lillie.test(c(x, y))
# plot the estimate of the density
plot(density(c(x, y)))
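Since R works naturally on whole vectors, the loop above is not strictly necessary. Here is a shorter, vectorized sketch of the same computation (reusing the u and v vectors from the program), followed by an extra graphical check with qqnorm() that is just my own sanity check, not part of the program above:

x <- sqrt(-2 * log(u)) * cos(2 * pi * v)   # same transform, applied to the whole vectors at once
y <- sqrt(-2 * log(u)) * sin(2 * pi * v)
qqnorm(c(x, y))   # points should fall close to a straight line if the sample is normal
qqline(c(x, y))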