[net.math] Coin Flips

don@allegra.UUCP (D. Mitchell) (05/18/84)

The question about coin flips is a good one.  There are two major
answers, depending on which of the two philosophical schools of
statistics you belong to.

Classical statistics has something called "hypothesis testing".  You
know a fair coin yields a binomial distribution of head counts, which
looks like a bell-shaped curve (symmetric when Prob(heads) = 0.5).  You
pick a confidence level, say 99 percent, and then you reject the
fairness hypothesis if the result of your coin flips is out in the
1 percent tails of the curve.
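A minimal sketch of that test in Python.  The specific numbers (65
heads in 100 flips) are illustrative, not from the article; the tail
probability is computed exactly from the binomial distribution rather
than from a normal approximation to the bell curve:

```python
from math import comb

def binomial_tail(n, k, p=0.5):
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Illustrative experiment: 65 heads in 100 flips, 99 percent confidence.
n, heads = 100, 65
alpha = 0.01

# Two-tailed test: reject fairness if the head count falls in either
# of the two symmetric tails of the fair-coin distribution.
p_value = 2 * binomial_tail(n, max(heads, n - heads))
print(p_value)
```

If the printed tail probability is below alpha, the result lies in the
1 percent rejection region and the fairness hypothesis is rejected.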

The flaw in that approach is that picking the 1 percent rejection area
is arbitrary.  What if the distribution is uniform?  (Not for a coin
flip, but for some "experiment".)  Then every outcome is equally
likely, there is no natural tail in which to put the rejection region,
and the concept of confidence breaks down.

Bayesian statistics deals with the problem more consistently (in my
opinion), but has bizarre philosophical implications.  A Bayesian
believes that human knowledge is described by probability
distributions.  That is, probability is subjective, how strongly you
believe something will happen.

When someone gives you a coin, you think it is fair, so your own
private "prior" distribution over Prob(heads) is concentrated around
0.5.  When you do an experiment, you can take the results and use them
to mathematically transform that distribution into a new one (the
"posterior" distribution).  If the coin gives 40 heads out of 40
flips, the posterior will be strongly skewed toward heads.