[net.med] Intro to Scientific Method and Experimentation

hollombe@ttidcc.UUCP (Jerry Hollombe) (03/21/85)

>From: andrea@hp-sdd.UUCP (andrea)
>Subject: Re: The cancer/laetril debate
>Message-ID: <8000021@hp-sdd.UUCP>
>
>A discussion of scientific method might be interesting here!
>Could you post something (25 words is too little, 25 screens too much)
>to start it off?

I have before me the book _Experimental and Quasi-Experimental Designs for
Research_ by D.T. Campbell and J.C. Stanley.  This book is used as the
primary text for a graduate-level course in experimental design.  To start
off the discussion I'll summarize (without permission -- ~sigh~) some of
the relevant passages from its first chapter.  Caveat: This is a
_graduate-level_ text, so the going may get a little dense in places.

First, under the heading _Evolutionary Perspective on Cumulative Wisdom and
Science_ comes the following passage:

        "...Experimentation thus is not  in  itself  viewed  as  a
        source  of  ideas necessarily contradictory to traditional
        wisdom.  It is rather a refining process superimposed upon
        the probably valuable cumulations of wise practice..."

In other words, there's nothing inherently contradictory in using
scientific methods to test the principles of traditional/holistic medicine.
The concepts are not necessarily antagonistic to each other.

Now, to the nitty-gritty.  Under the heading _Factors Jeopardizing Internal
and External Validity_:

        _Internal Validity_ is the basic minimum without which any
                            experiment is uninterpretable:  Did in
                            fact the experimental treatments  make
                            a    difference   in   this   specific
                            experimental instance?

        _External Validity_ asks the question of generalizability:
                            To    what    populations,   settings,
                            treatment variables,  and  measurement
                            variables    can    this   effect   be
                            generalized?


        Relevant to internal validity, eight different classes  of
        extraneous  variables  will be presented; these variables,
        if  not  controlled  in  the  experimental  design,  might
        produce   effects   confounded  with  the  effect  of  the
        experimental stimulus.  They represent the effects of:

        1) History -- the specific events  occurring  between  the
        first   and   second   measurement   in  addition  to  the
        experimental variable.

        2)  Maturation  --  processes   within   the   respondents
        operating as a function of the passage of time per se.

        3) Testing -- the effects of taking a test upon the scores
        of a second testing.

        4)  Instrumentation  --  changes  in  the  calibration  of
        instruments or changes in the observers.

        5)  Statistical  regression  --  where  groups  have  been
        selected on the basis of their extreme scores.

        6) Selection -- resulting from biases in the selection  of
        respondents for the comparison groups.

        7)  Experimental  mortality  --   differential   loss   of
        respondents from the comparison groups.

        8) Selection-maturation interaction, etc. -- i.e., the
        interaction of any or all of the previous seven factors to
        produce a confounding effect.

The book also presents four factors affecting the external  validity  of  an
experiment:

        1) Reactive or interaction effect of testing.

        2) The interaction effects of selection biases and the
        experimental variable.

        3) Reactive effects of experimental arrangements.

        4) Multiple-treatment interference.


Whew!  Still with me?  The above factors are all reasons why hearsay and
single-instance evidence of a treatment effect prove nothing.  None of the
above factors are controlled for in such a situation.  Therefore, there is
_no way_ to know whether or not the treatment in question is responsible
for the observed effect.  The treatment's effect is hopelessly confounded
with all the factors just listed.
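
To make factor 5, statistical regression, concrete -- this illustration is
mine, not Campbell and Stanley's, and the numbers are invented -- here is a
short Python sketch.  It selects the worst scorers on a first test and
retests them with no treatment at all; their average still improves, purely
because extreme scores contain extreme luck:

import random

random.seed(0)
N = 10000

# Each observed score = stable "true" ability + independent measurement noise.
true_ability = [random.gauss(0, 1) for _ in range(N)]
test1 = [t + random.gauss(0, 1) for t in true_ability]
test2 = [t + random.gauss(0, 1) for t in true_ability]   # no treatment given

# Select the worst 5% of scorers on the first test, as a sloppy study might.
cutoff = sorted(test1)[int(0.05 * N)]
chosen = [i for i in range(N) if test1[i] <= cutoff]

def mean(xs):
    return sum(xs) / len(xs)

print("selected group, test 1 mean: %+.2f" % mean([test1[i] for i in chosen]))
print("selected group, test 2 mean: %+.2f" % mean([test2[i] for i in chosen]))
# The second mean sits much closer to zero.  An uncontrolled before/after
# comparison would credit whatever "treatment" happened between the tests.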

There are other things to consider as well.  Experimenter bias is a serious
confounding factor.  It has been demonstrated experimentally (of course)
that, given knowledge of the expected results, even the most objective
experimenter will tend to see what they expect to see.  Hence the need for
double-blind testing of treatments, in which neither the experimenters nor
the subjects know who is getting which treatment, if any.  This also
controls for the placebo effect.  And I haven't even mentioned the
Hawthorne Effect ...
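
For the curious, here is a rough sketch of how double-blind assignment can
be arranged.  This is my own toy example, not a real trial protocol; the
function name and details are invented.  Subjects are randomized to coded
arms, and the key mapping codes to treatments stays sealed until all the
outcomes are recorded:

import random

def blind_assign(subject_ids, arms=("treatment", "placebo"), seed=None):
    # Give each subject an opaque code ("A" or "B").  The key that maps
    # codes back to real arms is held by a third party, so neither the
    # experimenters nor the subjects know who is getting what.
    rng = random.Random(seed)
    codes = ["A", "B"]
    rng.shuffle(codes)              # which code means which arm is random
    key = dict(zip(codes, arms))    # sealed until analysis time
    assignments = {sid: rng.choice(codes) for sid in subject_ids}
    return assignments, key

assignments, key = blind_assign(range(1, 11), seed=42)
print(assignments)   # staff and subjects see only the codes "A" and "B"
# The key is opened only after every observation has been made.

Real trials add balanced (block) randomization and an independent
key-holder on top of this, but the principle is the same: nobody who
records an outcome can know which treatment the subject received.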

I could continue at considerable length (by the time I got my Master's I'd
spent four years studying this stuff), but the above is probably enough to
chew on for now.  Sorry this is so long, but, as I said in a previous
posting, you can't explain it in 25 words or less.

-- 
-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-
The Polymath (aka: Jerry Hollombe)
Citicorp TTI
3100 Ocean Park Blvd.
Santa Monica, CA  90405
(213) 450-9111, ext. 2483
{philabs,randvax,trwrb,vortex}!ttidca!ttidcc!hollombe