[net.philosophy] Negative deviations in psi experiments.

cooper@pbsvax.DEC (Topher Cooper HLO2-3/M08 DTN225-5819) (10/28/85)

>In any case, the whole basis of the research into such phenomena is as
>fundamentally flawed as religious belief (for many it seems to be a "modern"
>substitute for "ancient" religion, much as some people turn to eastern
>belief systems to substitute for the western ones).  Working from the
>assumption that the phenomena do exist (wishful thinking in and of itself),
>they engage in research to "prove" this, but along the way they interpret
>the data AS THOUGH THE CONCLUSION WAS TRUE, thus "proving" the conclusion.
>An example:  in ESP testing, some percentage of correct answers is considered
>statistically average (i.e., if you were to just pick randomly the likelihood
>would be that you would get, say, 20%).  A much higher percentage is taken
>as "evidence" of ESP, but then SO IS an extremely LOW percentage!  (Wow,
>that much of a deviation from probability?  It "must" be some psychic
>phenomenon involved!) [ROSEN]

The first sentence, besides being completely unproven, is completely
irrelevant.  As has been pointed out to Rich any number of times, the reason
that someone believes in a theory has nothing to do with the truth of that
theory.  I could, for example, make speculations as to the reasons for his
apparent fear of particular theories, but I won't, because those reasons are
not relevant to the validity of his arguments.

The second sentence is, in general, simply untrue.  Much current work in
the field starts with the "assumption" that psi exists (because those workers
feel that there is already sufficient evidence) and attempts to find out
something about its characteristics.  They do not, however, attempt to prove
the existence of psi based on that assumption.

This is an interesting criticism because it much more frequently applies to
the critics: i.e., they start with an explicit assumption of the impossibility
of psi phenomena and from that derive a proof of its non-existence.

The example given, even if valid, does not support the criticism made
previously.  It represents an accusation of poor statistical procedure,
rather than an example of using the conclusion as a premise.

In any case the example only serves to show the shallowness of the reading and
thought that Rich has devoted to this issue.  This same criticism has been made
over and over again by the critics.  It is answered, over and over again.
But the same criticism keeps coming up.  This is particularly frustrating since
the flaw in the criticism is so laughably elementary.  Anyone who has taken an
elementary course in statistics should be able to spot it, certainly if they
make an attempt to read any of the experiments so criticized.

Imagine someone checking to see if a coin is biased and concluding that there
is strong statistical evidence that it is.  Then a critic comes along and says
"A much higher than expected number of heads is taken as evidence of a biased
coin, but then SO IS an extremely LOW percentage!", with the implication
that this invalidates the test for bias of the coin.  Of course it does not.
What is called a "two-tailed" statistical test is used.  A one-tailed test
would ONLY check for a higher than expected number of heads, or ONLY check
for a lower than expected number of heads (= a higher than expected number of
tails).  A two-tailed test checks for both.  The cost of checking for both is
that you need a larger deviation in a particular direction to conclude bias
than you would for a one-tailed test in that direction.  All this is standard,
elementary hypothesis-testing theory from statistics.
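
To make the arithmetic concrete, here is a small sketch of both tests against
a fair-coin null hypothesis, written out in Python.  The flip counts are
invented for illustration and come from no actual experiment:

    import math

    def binom_pmf(k, n, p=0.5):
        # Probability of exactly k heads in n flips of a fair coin.
        return math.comb(n, k) * p**k * (1 - p)**(n - k)

    def one_tailed_p(heads, n):
        # Checks ONLY for an excess of heads: P(X >= heads).
        return sum(binom_pmf(k, n) for k in range(heads, n + 1))

    def two_tailed_p(heads, n):
        # Checks for a deviation at least this large in EITHER direction.
        lo = min(heads, n - heads)
        return (sum(binom_pmf(k, n) for k in range(0, lo + 1))
                + sum(binom_pmf(k, n) for k in range(n - lo, n + 1)))

    n, heads = 100, 61
    print(one_tailed_p(heads, n))  # about 0.018: significant at the .05 level
    print(two_tailed_p(heads, n))  # about 0.035: twice as large

For a symmetric null hypothesis like a fair coin, the two-tailed probability
is exactly double the one-tailed one; that factor of two is the cost mentioned
above, since a larger deviation is needed to reach the same significance level.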

In the early days of parapsychology, one-tailed tests were used exclusively.
The results were very inconsistent.  Then it was noticed that some subjects
in some conditions seemed to be getting fewer than expected hits, and that
these tended to cancel out the successes of other subjects in other conditions.
Without worrying about why this took place, the solution was to use two-tailed
tests (including intrinsically two-tailed tests such as analysis of variance)
ALL THE TIME.  This is now the policy: two-tailed tests are usually used in
parapsychology unless there is a particular reason not to.  Results are still
inconsistent (this problem not, apparently, being the only one) but much less
so.

"But," critics frequently respond, "how do we know that the parapsychologists
don't first try a one-tailed test for psi-hitting, and only when that fails
apply a two-tailed test?"  One answer is to simply look at the experimental
literature.  One frequently finds experiments with a "higher than expected
number of hits" whose results are reported as probabilities derived from
two-tailed tests, resulting in lower overall statistical significance than
if a one-tailed test had been used.  Indeed, one rarely
sees one-tailed tests used at all.

What is the source of psi-missing (which is the term used to describe the
situation when a subject guesses significantly fewer targets than can be
ascribed to chance)?  If I could give a complete answer to that I would be
one of the top parapsychologists in the world.  A number of circumstances
under which psi-missing occurs have been found, however.  I will describe two
important ones.

Before an experiment, the subjects are asked "Do you think that psi exists?"
(in practice many variants of this question have been explored).  Some answer
"Yes, definitely", some answer "I don't know" and some answer "Definitely not".
Those who give one of the first two answers are labeled "Sheep", those who
give the last answer are labeled "Goats".  When the tests are given and
evaluated, it is frequently found that the Sheep tend to psi-hit, while the
Goats tend to psi-miss.  This is called the Sheep-Goat effect.
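
As a sketch of how such a comparison might be evaluated (the hit counts below
are invented, and the one-in-five chance rate is the one from Rich's own "say,
20%" example above):

    import math

    def hit_rate_z(hits_a, n_a, hits_b, n_b):
        # Two-sample z statistic for the difference between two hit rates;
        # used two-tailed, since either group may deviate in either direction.
        p_a, p_b = hits_a / n_a, hits_b / n_b
        pooled = (hits_a + hits_b) / (n_a + n_b)
        se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
        return (p_a - p_b) / se

    # Invented data, chance rate 1/5 (20%):
    # Sheep: 1100 hits in 5000 guesses (22%); Goats: 930 in 5000 (18.6%).
    print(hit_rate_z(1100, 5000, 930, 5000))  # about 4.2 standard errors

A z of that size corresponds to a two-tailed probability well below .001, so
a gap like this in real data would be hard to ascribe to chance.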

When an experiment is performed and there are two conditions used (i.e., test
and control conditions), and the subjects are aware of the distinction between
the conditions (e.g., one condition is "lights are on" and the other is "lights
are out" in the room the subject is in), then it is frequently found that one
condition will produce psi-hitting and the other will produce psi-missing.
There is some indication that the psi-hitting generally occurs in the condition
that the particular subject prefers.  This is one of several related effects
known in parapsychology as the "Differential effect."  For those who remember
the posting which started this (I won't bother to quote it), it should be clear
how this effect might call into question the meaningfulness of Hawking's
observation.

A number of other conditions have been found which seem to relate to
psi-missing, including some correlations with mood or personality scales, but
no-one would claim to be able to reliably predict when it will occur.  Lacking
such reliable predictions, two-tailed tests are used and rightly so.

		Topher Cooper

USENET: ...{allegra,decvax,ihnp4,ucbvax}!decwrl!dec-rhea!dec-pbsvax!cooper
ARPA/CSNET: cooper%pbsvax.DEC@decwrl

Disclaimer:  This contains my own opinions, and I am solely responsible for
them.

rlr@pyuxd.UUCP (Rich Rosen) (10/30/85)

>>In any case, the whole basis of the research into such phenomena is as
>>fundamentally flawed as religious belief (for many it seems to be a "modern"
>>substitute for "ancient" religion, much as some people turn to eastern
>>belief systems to substitute for the western ones).  Working from the
>>assumption that the phenomena do exist (wishful thinking in and of itself),
>>they engage in research to "prove" this, but along the way they interpret
>>the data AS THOUGH THE CONCLUSION WAS TRUE, thus "proving" the conclusion.
>>An example:  in ESP testing, some percentage of correct answers is considered
>>statistically average (i.e., if you were to just pick randomly the likelihood
>>would be that you would get, say, 20%).  A much higher percentage is taken
>>as "evidence" of ESP, but then SO IS an extremely LOW percentage!  (Wow,
>>that much of a deviation from probability?  It "must" be some psychic
>>phenomenon involved!) [ROSEN]

> The first sentence, besides being completely unproven, is completely
> irrelevant.

Odd that a belief being "fundamentally flawed" is viewed as an irrelevancy.

> As has been pointed out to Rich any number of times, the reason
> that someone believes in a theory has nothing to do with the truth of that
> theory.

No, only the truth has any bearing on the truth of a theory (as I have
pointed out any number of times), AND in the absence of any evidence for
a chosen belief, the reasons why THAT belief is chosen over others are
subject to scrutiny; if there is no evidentiary reason for the belief,
if the root reason is nothing more than wishful thinking and presumptiveness,
it has no merit.

> I could, for example, make speculations as to the reasons for his
> apparent fear of particular theories, but I won't, because those reasons are
> not relevant to the validity of his arguments.

More correctly, the assumption that I "fear" particular theories is one that
you make (for whatever reason---what might it be?) and one that has no basis
in fact.  But you choose to work from that assumption for a reason.  Perhaps
because you assume that anyone who sees holes in theories and seeks to disprove
them must "fear" them. (Especially when YOU happen to believe them.)

> The second sentence is, in general, simply untrue.  Much current work in
> the field starts with the "assumption" that psi exists (because those workers
> feel that there is already sufficient evidence) and attempts to find out
> something about its characteristics.

As I said in that first sentence, the exact same flaw as in religious belief
is present here, and Cooper has stated it himself!

> They do not, however, attempt to prove the existence of psi based on that
> assumption.

On the contrary, evidence (what little there is) is interpreted based on that
assumption, in much the same way that "evidence" for the existence of god
is interpreted based on THAT assumption as "proof".

> This is an interesting criticism because it much more frequently applies to
> the critics: i.e., they start with an explicit assumption of the impossibility
> of psi phenomena and from that derive a proof of its non-existence.

Not the "impossibility", but rather the shoddiness of the experimental
techniques in analysis, the presumptiveness of the statistical analysis, etc.
This is a regular occurrence:  those who have a stake in a given desired
conclusion will assert that the skeptics are starting from an assumption, as
if making no assumptions is an assumption itself.

> The example given, even if valid, does not support the criticism made
> previously.  It represents an accusation of poor statistical procedure,
> rather than an example of using the conclusion as a premise.

No, it represents an analysis of the data with the premise in mind:  there
must be some way we can use this statistical data to 'prove' our point...

> In any case the example only serves to show the shallowness of the reading
> and thought that Rich has devoted to this issue.  This same criticism has
> been made over and over again by the critics.  It is answered, over and over
> again.

How so?  With what?  With assertions of "that's not true!"?

> But the same criticism keeps coming up.  This is particularly frustrating
> since the flaw in the criticism is so laughably elementary.  Anyone who has
> taken an elementary course in statistics should be able to spot it, certainly
> if they make an attempt to read any of the experiments so criticized.
> Imagine someone checking to see if a coin is biased and concluding that there
> is strong statistical evidence that it is.  Then a critic comes along and says
> "A much higher than expected number of heads is taken as evidence of a biased
> coin, but then SO IS an extremely LOW percentage!", with the implication
> that this invalidates the test for bias of the coin.  Of course it does not.
> What is called a "two-tailed" statistical test is used.  A one-tailed test
> would ONLY check for a higher than expected number of heads, or ONLY check
> for a lower than expected number of heads (= a higher than expected number of
> tails).  A two-tailed test checks for both.  The cost of checking for both is
> that you need a larger deviation in a particular direction to conclude bias
> than you would for a one-tailed test in that direction.  All this is
> standard, elementary hypothesis-testing theory from statistics.

But why would a bias in the negative direction, away from positive psychic
hits, be considered evidence FOR psi?

> In the early days of parapsychology, one-tailed tests were used exclusively.
> The results were very inconsistent.  Then it was noticed that some subjects
> in some conditions seemed to be getting fewer than expected hits, and that
> these tended to cancel out the successes of other subjects in other
> conditions.  Without worrying about why this took place, the solution was to
> use two-tailed tests (including intrinsically two-tailed tests such as
> analysis of variance) ALL THE TIME.  This is now the policy: two-tailed tests
> are usually used in parapsychology unless there is a particular reason not
> to.  Results are still inconsistent (this problem not, apparently, being the
> only one) but much less so.

And THIS is evidence? (And an example of NOT assuming the conclusion??)

> What is the source of psi-missing (which is the term used to describe the
> situation when a subject guesses significantly fewer targets than can be
> ascribed to chance)?  If I could give a complete answer to that I would be
> one of the top parapsychologists in the world.

I take it being a great bullshit artist would be a prerequisite for such a
position, because that is what you are talking about:  taking a set of data
and "getting" it to fit your desired conclusion, and bullshitting one's way
through the process.

> Before an experiment, the subjects are asked "Do you think that psi exists?"
> (in practice many variants of this question have been explored).  Some answer
> "Yes, definitely", some answer "I don't know" and some answer "Definitely not".
> Those who give one of the first two answers are labeled "Sheep", those who
> give the last answer are labeled "Goats".  When the tests are given and
> evaluated, it is frequently found that the Sheep tend to psi-hit, while the
> Goats tend to psi-miss.  This is called the Sheep-Goat effect.

Frequently enough that a major statistical deviation is in evidence?  Or
just enough to bullshit one's way around?

> There is some indication that the psi-hitting generally occurs in the condition
> that the particular subject prefers.  This is one of several related effects
> known in parapsychology as the "Differential effect."  For those who remember
> the posting which started this (I won't bother to quote it), it should be clear
> how this effect might call into question the meaningfulness of Hawking's
> observation.

Uh, yeah, right...  Excuses for shoddy experimental technique.  Many
telekinesis debunkings have involved supposed psychics who insisted on
the specific conditions that they "preferred".  What was discovered was
that those particular conditions enabled them to manipulate the objects in
very obvious physical ways (e.g., by blowing).
-- 
"Mrs. Peel, we're needed..."			Rich Rosen 	ihnp4!pyuxd!rlr	

davet@oakhill.UUCP (Dave Trissel) (11/02/85)

In article <1116@decwrl.UUCP> cooper@pbsvax.DEC (Topher Cooper HLO2-3/M08 DTN225-5819) writes:

>
>What is the source of psi-missing (which is the term used to describe the
>situation when a subject guesses significantly fewer targets than can be
>ascribed to chance)?  ......
>
>When an experiment is performed and there are two conditions used (i.e., test
>and control conditions), and the subjects are aware of the distinction between
>the conditions (e.g., one condition is "lights are on" and the other is "lights
>are out" in the room the subject is in), then it is frequently found that one
>condition will produce psi-hitting and the other will produce psi-missing.
>There is some indication that the psi-hitting generally occurs in the condition
>that the particular subject prefers.  This is one of several related effects
>called in parapsychology, the "Differential effect."

I had this effect occur with me when I was tested on a random event generator
(REG) several years ago.  The REG generated 10,000 events a second. (It
consisted of a noise diode running a decade counter which generated the
equivalent roll of a ten-sided die.)

The psi task was to start at zero and run a test in which I was to "wish"
for high values on the numbers: ten trials, one for each of the ten digits.
During the test I found myself uneasy whenever I was "wishing" for one of
the odd numbers as the target.  For some reason, I have always felt a slight
bias toward even numbers over odd.

Sure enough, when the final counter values were examined, the digits showing
up had a much higher ratio of even to odd (odds of almost 400 to 1 against
chance).  However, the experiment's predetermined analysis was not to take
such data into account (only the highest counter was determined for each of
the ten trials), and this therefore was never even mentioned in the published
results.  (In other words, no "looking for results in the data" allowed.)
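
For anyone curious where an "odds against chance" figure like that comes from,
here is a rough sketch of the calculation.  The actual tallies aren't given
above, so the counts in the example are made up:

    import math

    def odds_against_chance(evens, n, p=0.5):
        # Two-tailed binomial probability that a fair ten-sided die (five
        # even faces, five odd, so p = 0.5 for an even digit) produces an
        # even/odd split at least this lopsided, expressed as odds against.
        def pmf(k):
            return math.comb(n, k) * p**k * (1 - p)**(n - k)
        lo = min(evens, n - evens)
        pval = (sum(pmf(k) for k in range(0, lo + 1))
                + sum(pmf(k) for k in range(n - lo, n + 1)))
        return (1 - pval) / pval

    # Made-up example: 122 even digits out of 200 events.
    print(odds_against_chance(122, 200))  # roughly 400 to 1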

A couple more runs still showed I could produce the bias.  The end result was
odds of about 12 to 1 overall.

I found it amusing that the electrons tended to "choose" my favorite base ten
numbers for a total count while pouring out of the noise diode.  None of the
thousands of "stand alone" tests or those of other subjects showed the same
anomaly.

Several months later I tried again and was still causing the bias (the odds
were somewhere around 5 to 1 for that series).

  --  Dave Trissel   {ihnp4,seismo}!ut-sally!oakhill!davet

"Gee - it couldn't be true so I guess it didn't happen."