[net.misc] A PK experiment

lew@ihuxr.UUCP (Lew Mammel, Jr.) (02/03/84)

This is about the article "The Persistent Paradox of Psychic Phenomena: An
Engineering Perspective" by Robert G. Jahn, which appeared in Proceedings
of the IEEE, Vol. 70, no. 2, February 1982. The article is a general survey,
but it concludes with an extensive description of a PK experiment conducted
by a Princeton undergraduate, which the author supervised. A graph depicting
the results of this experiment was shown on the NOVA ESP show. I'll confine
my remarks to a brief critique of this experiment.

The graph shown on NOVA plotted "Cumulative Deviation" vs. "Number of Trials"
for three cases, PK+, PK-, and BL (base line). The PK+ and PK- graphs showed
trials in which the "operator" tried to psychically influence the REG (Random
Event Generator) in a positive or negative direction. Each trial consisted
of 200 random binary events. The trials were divided into 5 series. The
graph represented data shown tabularly as:

type	trials	mean	std	t-score	Pt	n+/n-

BL	23000	100.045	6.980	.978	.164	10891/10782
PK+	13050	100.223	6.979	3.644	e-4	6310/6004
PK-	12100	99.709	6.968	-4.596	2e-6	5462/5956

The t-score is given by (mean - 100)/(SD/sqrt(trials)), i.e. the deviation
of the observed cumulative mean from 100 in units of the standard error. Pt
is the probability of achieving that deviation (or greater) by chance,
assuming a true mean of 100. n+/n- presumably shows the number of trials
which came out above and below 100, but I couldn't find anything about it
in the text.
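
As a sanity check (my own, not from the article), the tabulated t-scores
and chance probabilities can be recomputed from the trials, means, and SDs:

```python
import math

# (type, trials, mean, SD) as tabulated in the article
rows = [
    ("BL",  23000, 100.045, 6.980),
    ("PK+", 13050, 100.223, 6.979),
    ("PK-", 12100,  99.709, 6.968),
]

def phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

for name, n, mean, sd in rows:
    t = (mean - 100.0) / (sd / math.sqrt(n))  # deviation in standard errors
    pt = 1.0 - phi(abs(t))                    # one-tailed chance probability
    print(f"{name}: t = {t:+.3f}, Pt = {pt:.2g}")
```

This reproduces the table's t-scores and Pt values to within rounding.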

A few facts about the experiment: ALL trials were conducted by a single
person, the experimenter, as "operator".  The binary events were generated
at rates of either 100 or 1000 per second by a hardware noise generator.
They were tallied in an "alternating" mode to compensate for any systematic
bias. It was stated that the event rate didn't affect the PK performance.
That is, analyzed separately, the slow and fast rates both showed the same
PK bias as the combined data.
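
I take "alternating" tallying to mean that the sense of a hit flips on
alternate samples, so any constant bias in the noise source cancels. A
sketch of that scheme (my reading of it, not the article's circuit):

```python
import random

def alternating_tally(bits):
    """Count hits, flipping the hit sense on alternate samples so a
    constant bias in the bit source cancels out."""
    hits = 0
    for i, b in enumerate(bits):
        hits += b if i % 2 == 0 else 1 - b
    return hits

random.seed(1)
# A deliberately biased source: P(1) = 0.55 instead of 0.50.
biased = [1 if random.random() < 0.55 else 0 for _ in range(200_000)]
print(sum(biased) / len(biased))                # raw bias, near 0.55
print(alternating_tally(biased) / len(biased))  # bias cancelled, near 0.50
```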

The trials were conducted in "runs" of 50 each. These could be in automatic
mode, with each trial initiating automatically, or in manual mode. Read this
quote from the article:

	This operator attempted, on instruction or volition, to distort
	the trial counts either toward higher or lower values. The several
	options of sampling number, sampling frequency, +/- polarity,
	and manual/automatic sequencing were variously determined by
	random instruction, operator preference, or experimental practicality,
	and recorded before the beginning of each trial.

Now ... did she record her PK polarity before each trial? Read it again!
Also, the device has a reset button. In fact, the whole issue of data
collection isn't dealt with at all.

Here's a real kicker: She tried a whole new series of 2000 event trials
in which the PK+ and PK- means came out to be 1000.380 and 999.569.
Jahn calls this result "curiously ambivalent". The PK seems to be limited
to a fraction of an event per trial. This leads me into a whole series
of ruminations on what constitutes an event. The hardware event generator
sums over at least trillions of quantum events. A PK bias of 1ppb would
produce perfect scores at the output.
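
To put a rough number on that, here is my own back-of-envelope model
(nothing like it appears in the article): suppose each output bit is
decided by the sign of a sum of N micro-events. A per-event bias eps
shifts the sum by eps*N against a statistical spread of about sqrt(N)/2,
so the output bit is displaced by roughly 2*eps*sqrt(N) standard
deviations:

```python
import math

def output_bit_shift(n_micro, eps):
    """Displacement of the summed noise, in units of its own standard
    deviation, for a per-micro-event bias eps (my assumed model)."""
    return 2.0 * eps * math.sqrt(n_micro)

# A 1 ppb bias is negligible at N = 1e12 micro-events but overwhelming
# by N = 1e20 (20 sigma, i.e. a nearly deterministic output bit).
for n in (1e12, 1e16, 1e20):
    print(f"N = {n:.0e}: shift = {output_bit_shift(n, 1e-9):.3g} sigma")
```

So the "perfect scores" conclusion depends on just how many micro-events
the generator really sums per bit.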

Conversely, if you counted each trial as an event with the PK bias as
reported and >100 being a hit, the bias amounts to several percent.
Evidently an event is whatever is required to keep the PK lurking at
the fringe of the statistics. BAH!
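
The "several percent" figure comes straight from the tabulated n+/n-
counts, taking the excess of above-100 trials over below-100 trials as
the per-trial bias (my arithmetic, not the article's):

```python
# (type, n+, n-) from the table; "hit" = a trial scoring above 100
for name, n_plus, n_minus in [("PK+", 6310, 6004), ("PK-", 5462, 5956)]:
    excess = (n_plus - n_minus) / (n_plus + n_minus)
    print(f"{name}: per-trial bias {excess:+.1%}")  # PK+ ~ +2.5%, PK- ~ -4.3%
```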

		Lew Mammel, Jr. ihnp4!ihuxr!lew