cooper@pbsvax.dec.com (07/03/86)
Bill Jefferys ({allegra,ihnp4}!{ut-sally,noao}!utastro!bill, bill@astro.UTEXAS.EDU) writes:

>Experimental data falls into several broad categories.
>
>In the large majority of cases, statistical treatment of the data is used
>to improve signal-to-noise, but the conclusions would be basically
>unchanged if they were not used.  As detectors become more sensitive,
>for example, experiments that used to be very difficult become routine,
>and sophisticated statistical processing of the data becomes less
>necessary in order to detect the signal.

There are two ways that "detectors become more sensitive".  One way is that they introduce less noise themselves.  For example, IR detectors may be cooled so that there is less internally generated IR.  This is certainly an important process, but it is not the dominant one.

A detector is a device which takes an input, extracts the useful part (the signal) and rejects the useless (for the present purpose) part (the noise).  It is performing a statistical transformation on the available data.  It is no less a statistical transformation because it is performed mechanically by an "analog" computer (in the broadest sense of the term) rather than by explicit calculation.  It is no less a statistical transformation because it's harder to determine the statistical assumptions, distributions, probabilities, etc. associated with the transformation.

The S/N ratio of the phenomenon being studied does not change; only the S/N ratio at the start of the final stages of processing (those in which people do part or all of the processing) is improved.  The experiments become easier because *more* sophisticated (or at least, more specialized) statistics are being done, and because they are being done mechanically.  If there is anything intrinsically wrong with doing statistics on a particular source of data, then hiding the statistics won't make it any better.
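[Moderator's aside: the "detectors do statistics mechanically" point can be illustrated with a small simulation.  This is my own sketch, not from either poster, and all the numbers in it are made up: averaging N independent noisy samples leaves the signal unchanged but shrinks the noise by a factor of sqrt(N), so the S/N handed to the final, human stage of processing improves even though the statistics haven't gone away -- they've just moved upstream.]

```python
# Sketch: a "detector" that averages many raw samples is doing
# statistics mechanically.  Averaging N samples leaves the signal
# alone but shrinks the noise by sqrt(N).
import numpy as np

rng = np.random.default_rng(0)

signal = 0.1          # true signal level (arbitrary units)
noise_sigma = 1.0     # per-sample noise, 10x the signal

def snr_after_averaging(n_samples: int, n_repeats: int = 2000) -> float:
    """Empirical S/N of the mean of n_samples noisy readings."""
    raw = signal + noise_sigma * rng.standard_normal((n_repeats, n_samples))
    estimates = raw.mean(axis=1)      # the "detector output"
    return estimates.mean() / estimates.std()

snr_1 = snr_after_averaging(1)
snr_100 = snr_after_averaging(100)
print(f"S/N with   1 sample : {snr_1:.2f}")
print(f"S/N with 100 samples: {snr_100:.2f}")
```

With the values above, the single-sample S/N comes out near 0.1 and the 100-sample S/N near 1.0 -- roughly the sqrt(100) = 10x improvement the argument predicts.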
>In many cases, the signal-to-noise is low, and statistics is required to
>bring the signal out of the noise.  This is where things begin to get
>dangerous.  The smaller the signal, the more sophisticated the processing,
>the more likely it is that small biases in the data (which are unavoidable)
>will be amplified and mistaken for signal.  Every good experimenter knows
>this, and so in well designed experiments, the greatest part of the
>effort goes into understanding and controlling the systematic biases.

(When Marks (Nature, vol. 320, p. 119, 13 March 1986) is saying anything concrete about parapsychology at all, he is talking about repeatability and gross experimental errors.  Nowhere can I find any support in his commentary for your interpretation.)
You seem to have a rather exaggerated concept of the size of parapsychological experiments.  It's true that the average parapsychology experiment uses more data than the average psychology experiment -- but not that much more.  A very big parapsychology experiment might involve 20,000 trials.  More commonly an experiment has 1,000 trials.  If statistical significance in experiments of that size is digging too deep into the noise level, then we're going to have to throw out many other, more conventional fields: epidemiology, modern crystallography and particle physics come to mind offhand.

Some techniques which show great promise (though I think that they are still in need of some maturing) seem to get statistically significant results with only a few dozen trials.  We'll have to see how they work out.

A good parapsychological experiment is carefully designed to remove or control the known sources of bias.  As you say, "the greatest part of the effort goes into understanding and controlling the systematic biases."  When such experiments are performed, fairly small but distinct systematic biases are frequently found -- much more frequently than chance would explain.

Specifically, there is a correlation between the results of a random experiment and the desires of an operator (more accurately, what can be reasonably inferred to be the desires of the operator).  The bias remains after changes of target -- when the operator desires another result.  Control runs (when there is no operator preference) show no bias.  The direction of bias may be positive or negative, but is consistent for a given operator under given conditions, over moderate periods of time.  A number of fairly consistent correlations have been demonstrated between various psychological measurements (e.g., beliefs and personality) and the direction of the bias.  Others have been demonstrated between other psychological measurements (e.g., moods) and the size of the effect.
(The previous description is meant to serve for either ESP or PK experiments.  Space doesn't allow discussion of the many techniques which are used to control known sources of statistical bias; check any experimental report in a refereed parapsychology journal.)

I should comment on my use of the term "desire" in the above description.  The actual "desires" of any operator, especially subconscious desires, are unknowable.  What we can do is make reasonable inferences about what the operator desires.  It is these *inferred* desires which are actually found to correlate with the seemingly independent experimental apparatus.

The weakness of the inferences is the sticky point.  Under the null hypothesis neither the inferred nor the actual desires should be correlated with the results, so the weakness of the inference doesn't interfere with the rejection of the null hypothesis.  If we assume that the actual correlation is with subconscious desires, however, a weak inference (a weak correlation between actual and inferred desires) would result in a lower measured correlation.  This would contribute to the apparent erratic nature of psi, and to the high rate of repeatability failures.  This is, of course, only speculation, since it is very difficult to test directly, though it does have a fair amount of explanatory power.

The attitude of parapsychologists is that there is an unknown source of systematic biases.  No known source of such biases, or combination of sources, fits its characteristics very well.  Maybe someday we'll identify a subtle effect arising from known processes, maybe not.  But it is anti-scientific to simply assume the effect arises from known causes, or to require full understanding *before* any effort is put into investigating it.

Bill Jefferys seems to agree that there are systematic biases.  He offers no explanation for them, but simply dismisses them as, essentially, too small to worry about.  To me, that isn't an attitude which is likely to advance scientific understanding very much.
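[Moderator's aside: the trial counts quoted above can be given rough numbers.  This is my own sketch, not from the thread, and the chance hit rate p0 = 0.2 (a Zener-card style guessing task) is an assumption: under the normal approximation to the binomial, the smallest bias distinguishable from chance at the two-sided 5% level is about 1.96 * sqrt(p0*(1-p0)/n).]

```python
# Rough check of how small a bias a fixed-length guessing experiment
# can resolve, using the normal approximation to the binomial.
import math

def min_detectable_bias(n_trials: int, p0: float = 0.2, z: float = 1.96) -> float:
    """Smallest excess hit rate distinguishable from chance at ~p < 0.05."""
    return z * math.sqrt(p0 * (1 - p0) / n_trials)

for n in (1000, 20000):
    bias = min_detectable_bias(n)
    print(f"{n:6d} trials: detectable bias ~{bias:.4f} "
          f"(hit rate {0.2 + bias:.1%} vs 20.0% chance)")
```

So a 1,000-trial experiment can resolve an excess hit rate of roughly 2.5 percentage points, and a 20,000-trial one roughly 0.55 points -- small effects, but well within what conventional fields routinely measure.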
Topher Cooper

USENET: ...{allegra,decvax,ihnp4,ucbvax}!decwrl!pbsvax.dec.com!cooper
INTERNET: cooper%pbsvax.DEC@decwrl.dec.com

Disclaimer: This contains my own opinions, and I am solely responsible for them.
bill@utastro.UUCP (William H. Jefferys) (07/08/86)
Topher Cooper remarks:

> You seem to have a rather exaggerated concept of the size of
> parapsychological experiments.  It's true that the average
> parapsychology experiment uses more data than the average
> psychology experiment -- but not that much more.  A very
> big parapsychology experiment might involve 20,000 trials.
> More commonly an experiment has 1,000 trials.

I was responding to Dave Trissell, who was talking in the following terms:

> Sorry, but I didn't state the case clearly enough.  Over equal
> intervals of time the success rate usually varies from better
> to worse.  Thus, a 10,000 trial experiment when divided into
> two 5,000 trial periods will find a higher success [...]

and

> A machine I was tested on was built by a fellow Motorolan
> and used a so-called noise diode which latched and recorded
> up to 10,000 samples a second.

10,000 samples _a second_!!!  How long does he run this thing anyway?

> Some techniques which show great promise (though I think that
> they are still in need of some maturing) seem to get statistically
> significant results with only a few dozen trials.  We'll have
> to see how they work out.

We shall see.  If such techniques show paranormal effects in anyone's hands, it might be interesting.

> There are two ways that "detectors become more sensitive".  One
> way is that they introduce less noise themselves.  For example,
> IR detectors may be cooled so that there is less internally
> generated IR.  This is certainly an important process but it
> is not the dominant one.

[Dissertation on how detectors move the signal-processing to a different place without changing the basic beginning-to-end processing]

You are right, of course, but this is beside the point.  Reducing the "noise" inherent in psi experiments is precisely what I am talking about.
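[Moderator's aside: the pattern in the Trissell quote -- one 5,000-trial half scoring higher than the other -- is exactly what chance alone predicts.  A quick simulation, my own and with arbitrary parameters, shows that the halves of a pure-chance experiment almost never tie:]

```python
# Split a pure-chance experiment into halves: one half will almost
# always score higher, purely from sampling noise.
import random

random.seed(1)

def half_scores(n_half: int = 5000, p: float = 0.5) -> tuple:
    """Hit counts for two independent chance-level halves."""
    first = sum(random.random() < p for _ in range(n_half))
    second = sum(random.random() < p for _ in range(n_half))
    return first, second

gaps = [abs(a - b) for a, b in (half_scores() for _ in range(200))]
ties = sum(g == 0 for g in gaps)
print(f"mean |gap| between halves: {sum(gaps) / len(gaps):.1f} hits")
print(f"exact ties in 200 runs   : {ties}")
```

For two Binomial(5000, 0.5) halves the gap has a standard deviation of sqrt(2 * 5000 * 0.25) = 50 hits, so a typical run shows the halves differing by a few dozen hits with no psi at all.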
No one in their right mind would set up an experiment by deliberately injecting noise that is a hundred times the size of the expected signal, just so that they can have the "fun" of pulling the signal back out of the noise.  One doesn't take a CCD frame of a faint galaxy by flooding the chip with 100x the flux expected from the galaxy, just so that one has to stack 10,000 frames to pull the signal back out of the noise.  Yet ever since Rhine, that is the basic paradigm that parapsychologists have been using.

Not to denigrate Rhine.  His idea of using statistical processing to study psi was for its time a revolutionary one.  But here it is fifty years later, and parapsychologists using these techniques still can't convince other scientists that paranormal effects are real.  Heck, they can't even convince some parapsychologists (e.g., Susan Blackmore).

For example, why study PK by rolling dice (to give a simple example)?  If a psychic can really exert enough force on the dice to affect the roll in a nonrandom way, surely he or she can exert enough psychic force on a sensitive torsion balance to move it measurably!  If the psychic can't do the latter (under suitably controlled conditions), how can parapsychologists expect other scientists to accept the results of dice-rolling experiments as evidence for paranormal effects?  If a psychic can't move a sensitive torsion balance, how can anyone take spoon-bending seriously?

Or consider experiments based on Schmidt's ideas.  Instead of trying to pull signal out of noise generated by quantum processes, why not see if the psychic can directly affect one of the very sensitive quantum detectors that have been developed?  A SQUID comes to mind as a possible basis for such an experiment.  If psychics can't affect a detector of that sensitivity in the same room with them, how can a general take seriously the proposal that psychics can somehow cause Soviet ballistic missiles to fail at a distance of 5000 miles?
After all, missile defence is useless if it can only increase the probability of a missile failing by a small amount over the chance level.

> The attitude of parapsychologists is that there is an unknown
> source of systematic biases.  No known source of such biases,
> or combination of sources fit its characteristics very well.
> Maybe someday we'll identify a subtle effect arising from
> known processes, maybe not.  But it is anti-scientific to
> simply assume the effect arises from known causes or
> to require full understanding *before* any effort be put
> into investigating it.

As long as one simply says "there are unexplained biases", that's fine.  It is when the words "...and those biases are evidence of paranormal phenomena" are added that I, and most scientists, part company with parapsychologists.

> Bill Jefferys seems to agree that there are systematic biases.
> He offers no explanation for them, but simply dismisses them
> as, essentially, too small to worry about.
> To me, that isn't an attitude which is likely to advance
> scientific understanding very much.

I think this is a distortion of my position.  In many cases in the past we know what the biases were (unconscious "cueing", or outright cheating, e.g., Soal).  To their credit, many parapsychologists have made a real effort to tighten their controls.  But how can we be sure that their efforts have been adequate?

The question is not whether or not there are biases, but whether they are "interesting" or not.  "Clever Hans" was very "interesting" science when the source of bias was first understood.  Now it is old hat.  And although you say ex cathedra that no known biases or combination of biases fit the data very well, I think that case has yet to be made.  To get my interest, you will have to come up with something that (a) is replicable, and (b) has a large enough signal that one can be certain that it is real and not an artifact.
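[Moderator's aside: the CCD analogy earlier in this post can be put in approximate numbers.  This is my own back-of-envelope sketch, and the one-count-per-frame signal is an assumption: with a background B = 100 * S per frame, the photon-noise-limited S/N after co-adding N frames is S*N / sqrt((S + B) * N), so it takes on the order of 10,000 frames to reach S/N of about 10.]

```python
# Photon-noise arithmetic for stacking CCD frames of a faint source
# drowned in a background 100x brighter.
import math

def stacked_snr(n_frames: int, s_per_frame: float = 1.0,
                bg_ratio: float = 100.0) -> float:
    """Photon-noise-limited S/N of n_frames co-added frames."""
    b_per_frame = bg_ratio * s_per_frame
    signal = s_per_frame * n_frames
    noise = math.sqrt((s_per_frame + b_per_frame) * n_frames)
    return signal / noise

print(f"S/N, single frame : {stacked_snr(1):.2f}")
print(f"S/N, 10,000 frames: {stacked_snr(10_000):.2f}")
```

A single frame sits at S/N of about 0.1; stacking 10,000 frames brings it to about 10 -- the sqrt(N) growth that makes deliberately flooding the detector such an expensive way to run an experiment.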
If someone like Susan Blackmore says that an alleged paranormal phenomenon is worth looking at, then I would have to agree.  (I have to use someone like her as a proxy, since I don't have time to waste in studying this subject closely.  I have enough trouble keeping up in my own field.)
-- 
Glend.  I can call spirits from the vasty deep.
Hot.    Why, so can I, or so can any man;
        But will they come when you do call for them?
                        -- Henry IV Pt. I, III, i, 53

Bill Jefferys  8-%  Astronomy Dept, University of Texas, Austin TX 78712  (USnail)
{allegra,ihnp4}!{ut-sally,noao}!utastro!bill  (UUCP)
bill@astro.UTEXAS.EDU  (Internet)