[net.misc] Dean Radin's "sugar-sand" data

lew@ihuxr.UUCP (01/30/84)

During a recent exchange of mail with Dean Radin about the Helmut Schmidt
experiment (more on that later), Dean chided me for not posting a "retraction"
of my comments on his "sugar-sand" experiment. Some will recall that Dean
claims to have proven that holding a vial of sugar in one's hand decreases
one's strength, compared to holding a "control" vial of sand.

When Dean posted a brief description of his experiment, I posted a request
that he reveal his data and describe his procedures in more detail. Finally,
he sent me his data and analysis by mail. I spent a lot of time poring over
his multivariate analysis. My original thought was that he must have
misapplied the ANOVA program that he was using, although I never expressed this
publicly. I found that he hadn't done anything overtly wrong, and that
his data did show a VERY significant correlation of strength with the
substance in the subject's vial. Before I go into my big HOWEVER, I'll recap
the data:

There were 85 subjects each of whom underwent 6 strength trials, 3 with
sand and 3 with sugar. The average strength with sugar for all subjects
was 189.1 and with sand it was 197.3. The standard deviation across all
510 trials was 82.7. At first glance, the 8.2 difference in the sugar-sand
strengths seems insignificant compared to the SD of 82.7. As the ANOVA
shows though, there were enough trials so that one would expect a much
smaller difference if it were due to the random variation with an rms
of 82.7. The smaller expected value would be on the order of 82.7/sqrt(510),
or about 3.7. Anyway, the 8.2 difference is WAY beyond chance.
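
The arithmetic in that paragraph is easy to check. Here is a sketch using
only the figures quoted above (the variable names are mine, not Dean's):

```python
import math

# Figures quoted in the post above.
sd_all = 82.7                    # standard deviation across all 510 trials
n_trials = 510
observed_diff = 197.3 - 189.1    # sand mean minus sugar mean

# The post's rough scale for a chance difference:
chance_scale = sd_all / math.sqrt(n_trials)
print(round(chance_scale, 2))    # roughly 3.66
print(round(observed_diff, 1))   # 8.2
```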

HOWEVER, this isn't the whole story. One big problem is that Dean treated
the subjects as a RANDOM variable. Obviously, the subject is the single
most significant variable. Nominally, we might expect each subject to 
have the same strength in all six trials. I tried a two way analysis of
variance with subject and substance as the "treatments". From a simpler
point of view, this amounts to looking at the variation of strength
about each subject's average. This analysis reveals a large interaction
effect. This means that while the average effect of the substance was
to (apparently) weaken the subject when he or she held sugar, compared
to sand, the effect differed widely among subjects. Some were much
stronger when they held sugar, but more were weaker.

Another way of stating this: you can't model the data by putting
a random variability on top of a constant effect due to substance. The
data show a real variability of effect: an interaction effect.
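
For anyone who wants to see the decomposition spelled out, here is a
hand-rolled two-way sum-of-squares breakdown of the kind I ran. The numbers
below are made up for illustration (they are NOT Dean's data), but the
structure is the same: subjects crossed with substance, three replicates
per cell.

```python
# Two-way ANOVA sum-of-squares decomposition, written out by hand.
# data[subject][substance] is a list of replicate strength readings.
# All readings here are invented purely to illustrate the bookkeeping.
data = {
    "s1": {"sugar": [130, 125, 140], "sand": [200, 210, 205]},
    "s2": {"sugar": [250, 240, 260], "sand": [180, 175, 190]},
    "s3": {"sugar": [100, 110, 105], "sand": [105, 100, 110]},
}

subjects = list(data)
substances = ["sugar", "sand"]
r = 3  # replicates per subject-substance cell

all_vals = [x for s in subjects for t in substances for x in data[s][t]]
grand = sum(all_vals) / len(all_vals)

subj_mean = {s: sum(x for t in substances for x in data[s][t]) / (2 * r)
             for s in subjects}
subst_mean = {t: sum(x for s in subjects for x in data[s][t])
              / (len(subjects) * r) for t in substances}
cell_mean = {(s, t): sum(data[s][t]) / r
             for s in subjects for t in substances}

ss_subject = 2 * r * sum((subj_mean[s] - grand) ** 2 for s in subjects)
ss_substance = len(subjects) * r * sum((subst_mean[t] - grand) ** 2
                                       for t in substances)
ss_interaction = r * sum(
    (cell_mean[s, t] - subj_mean[s] - subst_mean[t] + grand) ** 2
    for s in subjects for t in substances)
ss_error = sum((x - cell_mean[s, t]) ** 2
               for s in subjects for t in substances for x in data[s][t])
ss_total = sum((x - grand) ** 2 for x in all_vals)

# The four components add back up to the total variation:
assert abs(ss_subject + ss_substance + ss_interaction + ss_error
           - ss_total) < 1e-6
```

In this toy dataset, s1 is weaker with sugar while s2 is stronger, so the
interaction term swamps the average substance term, which is exactly the
pattern I'm describing in Dean's data.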

Here are the top ten subjects, ranked by the magnitude of the substance
effect:

subj.		sugar				sand

1)	132	132	132		210	230	225
2)	250	145	185		295	290	260
3)	140	175	220		320	228	182
4)	290	130	120		250	225	220
5)	238	200	170		160	160	135
6)	230	115	128		115	100	108
7)	250	190	265		190	190	190
8)	110	105	80		115	115	190
9)	18	18	18		60	64	50
10)	208	110	210		255	232	160

The "unexplainable by chance" variation is not confined to sugar vs. sand
averages, but shows up as extreme excursions of single measurements.
Incidentally, subject 9 was named "dean". Was it you, Dean? Anyway, note
that 5, 6, and 7 are stronger with sugar. What are we to make of 4?
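
The sign flips are easy to verify from the table above. Here the rows are
transcribed as listed, and each subject's (sand minus sugar) mean difference
is printed; three of the ten come out negative, i.e. stronger with sugar:

```python
# Per-subject substance effect (mean sand strength minus mean sugar
# strength), computed from the ten rows in the table above.
rows = {
    1:  ([132, 132, 132], [210, 230, 225]),
    2:  ([250, 145, 185], [295, 290, 260]),
    3:  ([140, 175, 220], [320, 228, 182]),
    4:  ([290, 130, 120], [250, 225, 220]),
    5:  ([238, 200, 170], [160, 160, 135]),
    6:  ([230, 115, 128], [115, 100, 108]),
    7:  ([250, 190, 265], [190, 190, 190]),
    8:  ([110, 105,  80], [115, 115, 190]),
    9:  ([ 18,  18,  18], [ 60,  64,  50]),
    10: ([208, 110, 210], [255, 232, 160]),
}

def effect(sugar, sand):
    """Mean sand reading minus mean sugar reading for one subject."""
    return sum(sand) / 3 - sum(sugar) / 3

for subj, (sugar, sand) in rows.items():
    print(subj, round(effect(sugar, sand), 1))
```

Subjects 5, 6, and 7 come out negative; subject 4 comes out positive on
average, but only because one wild 290 reading sits next to a 130 and a 120.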

It would seem that much of the variability is dominated by instrumentation
effects. In a nutshell, it looks like a case of GIGO. Here are a few more
cases of startling variability:

	105	84	10		80	102	98
	15	0	10		25	45	25

Well, this is certainly an example of hiding behind noise, as someone
mentioned is common in ESP experiments. It would seem to be pointless
to apply high powered statistical tests to data which suffer such obvious
problems. Large arbitrary variations like this are surely systematic.

	Lew Mammel, Jr. ihnp4!ihuxr!lew