[net.misc] ESP & Elementary Probability

karn (02/28/83)

Hasn't anyone taken even an elementary probability course?!

When I took one as a sophomore, our professor (who incidentally was a
gambling hobbyist and a successful blackjack card counter) related one
way to use "ESP" for fun and profit:

Take 64 [football fans | horse racing fans | whatever] who are gullible
enough to believe in ESP.  (This shouldn't be too hard.)  It is
desirable that none of the 64 know each other. Take an upcoming event
(game, race, etc) in which there is a 50-50 binary outcome of some sort
on which they can bet, and offer "free" advice to each of your 64
"clients": you tell the first 32 that you "sense" outcome "A", and the
other 32 that you "sense" outcome "B".  After the event, you drop the
half for which you were wrong.  Now you take your
remaining 32 and split them up into two groups of 16, repeating the
above process.

After you have done this six times, you will have one person remaining
for whom you've been right six times in a row.  This person is your victim.
You tell him that you have "shared your power" with him, FOR FREE, six
times already out of the goodness of your heart. You ask him to be fair
to you and "share the profits" (before the event, of course), and
naturally you soak him for all he's worth.

He's quite likely to go along, since the chance of your having called
all six outcomes correctly by pure luck was, of course, only (1/2)^6 = 1/64.
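
In Python, a minimal sketch of the scheme (the function name, the
client numbering, and the fair-coin event below are all illustrative
choices, not anything from the original story):

    import random

    def run_scam(clients, rounds=6):
        """Halve the client pool each round, keeping only the half
        that was told the outcome that actually occurred."""
        for _ in range(rounds):
            half = len(clients) // 2
            told_a, told_b = clients[:half], clients[half:]
            outcome = random.choice("AB")       # the 50-50 event
            clients = told_a if outcome == "A" else told_b
        return clients

    survivors = run_scam(list(range(64)))
    assert len(survivors) == 1                  # always exactly one victim
    # For any *given* client the streak is luck with probability
    # (1/2)**6 = 1/64, but by construction someone always survives.
    print(f"victim: client {survivors[0]}")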

If you try this, don't tell anybody where you heard it.

Phil Karn

dir (02/28/83)

Most researchers do not claim "truth" based on a single experiment
unless the probability is extremely small (p < 10^-9, say).
The usual process is to look at other independent tests,
calculate the overall statistic, and then make a judgement.
Most laymen are statistically naive, and probably would
fall prey to the 1/64 trick.
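
One hedged sketch of what "calculate the overall statistic" could mean
in practice is Fisher's method for pooling p-values from independent
tests (the post names no specific procedure, and the example p-values
below are invented):

    from math import exp, log, factorial

    def fisher_combined_p(p_values):
        # Under the null, -2 * sum(ln p_i) is chi-square distributed
        # with 2k degrees of freedom; for even degrees of freedom the
        # survival function has the closed form below.
        k = len(p_values)
        x = -2.0 * sum(log(p) for p in p_values)
        return exp(-x / 2) * sum((x / 2) ** i / factorial(i)
                                 for i in range(k))

    # Three individually unconvincing experiments pool to a stronger result:
    print(fisher_combined_p([0.04, 0.10, 0.07]))   # about 0.012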

The problem is that by the same reasoning as the 1/64 trick,
virtually ANY experiment ever run could be pure chance, including
some well-established empirical effects. Even the overall
statistics of a group of experiments could be chance. 
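
A small simulation makes this concrete (my construction; the one-sided
binomial test and the trial counts are arbitrary): run many chance-only
guessing experiments and count how many clear the conventional
p < .05 bar.

    import random
    from math import comb

    def p_value(hits, n):
        # Chance of at least `hits` successes out of `n` by blind
        # guessing (one-sided binomial tail, success probability 1/2).
        return sum(comb(n, k) for k in range(hits, n + 1)) / 2**n

    random.seed(0)
    n_experiments, n_trials, significant = 1000, 100, 0
    for _ in range(n_experiments):
        hits = sum(random.random() < 0.5 for _ in range(n_trials))
        if p_value(hits, n_trials) < 0.05:
            significant += 1

    # Close to 5% of the pure-chance runs come out "significant" at
    # p < .05 (slightly fewer here, since the binomial is discrete).
    print(f"{significant} of {n_experiments} chance experiments hit p < .05")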

Where do we draw the line between what is chance and what isn't?
p < .05 is a historical standard in psychology; in medicine it is
more like .10.  Any other standards out there?

henry (03/02/83)

   "Most researchers do not claim "truth" based on a single experiment
   unless the probability is extremely small (p < 10e-9 say).
   The usual process is to look at other independent tests,
   calculate the overall statistic, and then make a judgement.
   Most laymen are statistically naive..."

Researchers who follow this procedure are also statistically naive.
It does not matter how many existing results your theory can explain;
if it cannot *predict* new results, then what you are doing is not
science.  It's easy to fiddle with a theory until it explains data you
already know about;  the acid test is new data, not considered in the
formulation of the theory.

Obviously, this causes trouble if you aren't sure what you're after.
There are ways around this, but only the more conscientious
parapsychology researchers use them.  Said conscientious researchers are also
seldom heard claiming positive results.

In case people are curious, at least one of said methods is very simple.
You divide your data into two halves along some arbitrary line, agreed
on before the experiment.  (E.g., first n/2 tests and last n/2 tests.)
You put the second half away and don't look at it.  You then examine
the first half in any way you please, looking for correlations.  Once
you have determined what phenomenon appears to be manifesting itself,
you settle on a specific data-analysis procedure and specific criteria
for success and failure.  Then, you dig out the second half of the data
and apply that exact procedure and those exact criteria.  You are not
allowed to fiddle with the analysis method once you have started to
examine the second half.  If you get positive results on the second half,
using the method and criteria derived from the first half, THEN you have
a positive result for the experiment.  If not, you have a negative result
and no further fudging is allowed.
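
As an entirely hypothetical rendering of this procedure (the split
point, the 1-in-5 Zener-style guessing model, and the binomial success
criterion below are illustrative choices, not part of Henry's
description):

    import random
    from math import comb

    def tail_p(hits, n, p=0.2):
        # P(X >= hits) for X ~ Binomial(n, 1/5): scoring this well or
        # better by blind guessing among five symbols.
        return sum(comb(n, k) * p**k * (1 - p)**(n - k)
                   for k in range(hits, n + 1))

    random.seed(1)
    trials = [random.random() < 0.2 for _ in range(400)]  # chance-only subject
    first, second = trials[:200], trials[200:]   # split agreed in advance

    # Exploratory phase: examine the first half any way you like...
    print(f"first-half hits: {sum(first)} / 200 (chance expects 40)")

    # ...then freeze ONE analysis and ONE success criterion before
    # looking at the held-out half.
    def criterion(data):
        return tail_p(sum(data), len(data)) < 0.05

    # Confirmatory phase: apply the frozen criterion, no further fiddling.
    print("positive result" if criterion(second) else "negative result")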

Show me positive results obtained with this method, or some other way
of making sure you don't outwit yourself, and I will be impressed.
Statistical analysis of parapsychology data without such safeguards
is worthless.

					Henry Spencer
					U of Toronto