[comp.ai] Psycho-Physical Measurement: Reply to Adam Reed

harnad@mind.UUCP (02/15/87)

Adam V. Reed (adam@mtund.UUCP) at AT&T ISL, Middletown NJ USA, wrote
in support of the following position: Psychophysicists measure
conscious experience in the same sense that physicists measure
ordinary physical properties. Our senses and central nervous systems
are analogous to the physicist's measuring equipment. If we can assume
that this "mental" equipment is similar in all of us, then reports of
psychophysical "measurements" of private, conscious experiences are just
as objective as reports of physical measurements of physical phenomena,
and objective in the same sense (observer-independence).

I will attempt to show why this is incorrect. But first let me say
that there is really no reason for a psychophysicist to get embroiled
in the mind/body problem (or its "other-minds" variant). In cog-sci
there is a real empirical question about what processes and
performances are justifiably and relevantly mind-like, because it is
mental capacity (or at least its performance manifestations) that one
is attempting to capture and model. It MATTERS in cognitive modeling
whether you've really captured intelligence, or just a clever toy
(partial) look-alike. There is no corresponding problem in
psychophysics. The input/output characteristics, detection
sensitivities, etc., of human observers have face validity as
displayed in their performance data. There is no empirical question
affecting the validity (as opposed to the interpretation) of the
data that depends on their being a measure of conscious experience
rather than merely human receiver I/O characteristics.

For simplicity I will focus on detection performance only, although the
same arguments could be applied to discrimination, magnitude judgment,
identification, etc.

If a subject reports when he detects the presence of a signal, and this
relation (signal/detection-report) displays interesting I/O regularities
(thresholds, detectabilities, criterial biases, etc.), those regularities
are indisputably objective in the same sense that the physicist's
(or engineer's) regularities are. The sticky part comes when one wants
to interpret the measurements and their regularities, not as they are
objectively -- namely input/output performance regularities of human subjects
under certain experimental conditions -- but as measurements of and
regularities in conscious experience.
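To make the objective side of these regularities concrete: thresholds, detectabilities and criterial biases are all computable from nothing but stimulus/response counts. A minimal sketch (my illustration, not part of the original exchange), assuming the standard equal-variance signal-detection model, in which d' and the response criterion come straight out of hit and false-alarm rates:

```python
# Sketch: d' and criterion from yes/no detection counts, under the
# standard equal-variance signal-detection model. Note that nothing
# here refers to sensation -- only to input/output counts.
from statistics import NormalDist

def dprime_and_criterion(hits, misses, false_alarms, correct_rejections):
    """Estimate d' and criterion c from a yes/no detection experiment."""
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    hit_rate = hits / (hits + misses)
    fa_rate = false_alarms / (false_alarms + correct_rejections)
    d_prime = z(hit_rate) - z(fa_rate)
    criterion = -0.5 * (z(hit_rate) + z(fa_rate))
    return d_prime, criterion

# Example: 80 hits / 20 misses on signal trials; 30 false alarms /
# 70 correct rejections on noise trials.
d, c = dprime_and_criterion(80, 20, 30, 70)
print(round(d, 2), round(c, 2))  # → 1.37 -0.16
```

The point of the sketch is only that these quantities are functions of the I/O record alone; two experimenters given the same record must get the same numbers, whatever they believe about the subject's experience.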

Adam has an "intuition pump" in support of the latter interpretation:
He suggests that a subject can compute his own (say) detection
thresholds if he receives detection trials plus feedback as to whether
or not a stimulus was present. His only performance would be to
report, after a long series of trials and private calculations, what
his detection threshold was. Since everyone can in principle do this
for himself, it is observer-independent, and hence objective. Yet it
involves no overt behavior other than the final threshold report;
otherwise, it is exactly like a physicist performing an experiment in
the privacy of his lab, and then reporting the results, which anyone
else can then replicate in the privacy of his own lab. So surely the
measurement is not merely of behavioral regularities, but of conscious
experience.

There are many directions from which one can attack this argument:

	(i) One could call into question the "lab" analogy, pointing out
	that, in principle, two physicists could check each other's
	measurements in the same "lab," whereas this is not possible in
	one-man psychophysics.
	(ii) One could question the objectivity of being both subject and
	experimenter.
	(iii) One could question whether the subject is performing a
	"measurement" at all, in the objective sense of measurement;
	only the psychophysicist is measuring, with the subject's receiver
	characteristics under various input conditions being the object of
	measurement. The subject is detecting and reporting.
	(iv) One could point out that one subject's report of his
	threshold is not subject-independently tested by another
	subject's report of his own threshold.
	(v) One could point out that intersubjective consensus hardly
	amounts to objectivity, since all subjects could simply share the same
	subjective bias.
	(and so on)

These objections would all trade (validly) on what we really MEAN by the
objective/subjective distinction, which is considerably more than consensus
among observers. I will focus my rebuttal, however, on Adam's argument,
taken more or less on its own terms; I will try to show that it cannot lead
to the interpretation he believes it supports.

First, what work are the "covert calculations" really doing in Adam's
thought-experiment? What (other than the time taken and the complexity
of the task) differentiates a simple, one-trial detection-response from
the complex report of a threshold after a series of trials with feedback
and internal calculations? My reply is: nothing. Objectively speaking,
the normal trial-by-trial method and the long-calculation-with-feedback
method are just two different ways of making the same measurement of a
given subject's threshold. (And the only one doing the measuring in both
cases is the psychophysicist, with the data being the subject's input and
output. Not even the subject himself can wear both hats -- objective
and subjective -- at one time.)
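The claim that the trial-by-trial method and the covert-calculation method are two routes to the same measurement can be illustrated with a small simulation (assumptions mine, not from the post): a detector with a fixed "true" 50%-point threshold generates an I/O record, and the threshold estimate is the same arithmetic over that same record, whether the subject tallies it privately and reports one number or the experimenter tallies the trial-by-trial reports.

```python
# Sketch: a simulated detector with a fixed internal threshold. The
# threshold estimate depends only on the (stimulus, response) record,
# so subject-as-calculator and experimenter-as-calculator must agree.
import random
import statistics

random.seed(0)
TRUE_THRESHOLD = 5.0  # intensity at which detection is 50/50
NOISE_SD = 1.0

def detect(intensity):
    """One trial: 'yes' iff intensity plus internal noise beats threshold."""
    return intensity + random.gauss(0, NOISE_SD) > TRUE_THRESHOLD

intensities = [i * 0.5 for i in range(0, 21)]  # 0.0 .. 10.0
record = []  # the full I/O record: (input, output) pairs
for x in intensities:
    for _ in range(200):
        record.append((x, detect(x)))

def estimate_threshold(io_pairs):
    """Interpolate the intensity at which p('yes') crosses 0.5."""
    by_level = {}
    for x, yes in io_pairs:
        by_level.setdefault(x, []).append(yes)
    points = sorted((x, statistics.mean(ys)) for x, ys in by_level.items())
    for (x0, p0), (x1, p1) in zip(points, points[1:]):
        if p0 < 0.5 <= p1:
            return x0 + (0.5 - p0) * (x1 - x0) / (p1 - p0)
    return None

print(round(estimate_threshold(record), 2))  # close to 5.0
```

Running the same function over the same record is the only "measurement" available in either procedure; who performs the arithmetic, and how covertly, changes nothing.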

So let's just talk about a simple one-trial detection, because it shares all
the critical features at issue, and is not complicated by irrelevant
ones. The question then becomes "What is the objectivity-status of
reports of single stimulus-detections from individual subjects?" rather
than "How observer-independent is the calculation of detection
thresholds after a series of trials with feedback?" The two questions
are equivalent in the relevant respects, and they share the same
weaknesses.

When a subject reports that he has detected a stimulus, and there was in
fact a stimulus presented, that's ALL there is, by way of data: Input
was the stimulus, output was a positive detection report. (When I say
"behavioral" or "performance" data, I am always referring to such
input/output regularities.) Of course, if I'm the subject, I know that
there's something it's "like" to detect a stimulus, and that the
presence of that sensation is what I'm really reporting. But that's
not part of the psychophysical data, at least not an objective part.
Because whereas someone else can certainly look at the same stimulus,
and experience the sensation for himself, he's not experiencing MY
sensation. I believe that he's experiencing the same KIND of
sensation. The belief is surely right. But there's certainly no
objective basis for it. Consider that no matter how many times the
same stimulus is presented to different subjects, and all report
detecting it, there is still no objective evidence that they're having
the same sensation -- or even that they're having any sensation at all.
It is the everyday, informal solution to this "other-minds" problem --
based on the similarity of other subjects' behavior to our
own -- that confers on us the conviction that they're experiencing
similar things with "similar equipment." But that's no objective basis
either.

Contrast this psychophysical detection experiment with a PHYSICAL
detection experiment. Suppose we're trying to detect an astronomic
effect (say, an "alpha") through a telescope. If an astronomer reports
detecting an alpha, there is the presumption -- and it can be tested,
and confirmed -- that another astronomer could, with similar equipment
and under similar conditions, detect an alpha. Not his OWN alpha, but
an objective, observer-independent alpha. This would not necessarily
be the self-same alpha -- only a token of the same type. Even
establishing that it was indeed an instance of the same generic type
could be done objectively.

But none of this carries over to the case of psychophysical detection,
where all the weight of our confidence that the sensation exists and is
of the same type is borne by our individual, subjective, intuitive solutions
to the every-day other-minds problem -- the "common"-sense-experience we all
share, if you will. I'm not, of course, claiming that this "common sense" is
likely to be wrong; just that it's unique to subjective phenomena and
does not amount to objectivity. Nor can it be used as a basis for
claiming that psychophysics "measures" conscious experience. Yes, we
all have subjective experience of the same kind. Yes, that's what
we're reporting when we are subjects in a psychophysical experiment.
But, no, that does not make psychophysical data into objective measures of
conscious experience. (In fact, "an objective measure of a subjective
phenomenon" is probably a contradiction in terms. Think about it.)

A third case is worth considering, because it's midway between the
physical and the psychophysical detection situation, and more like the
latter in the relevant respects: Unlike cognitive science, which is
concerned with active information-processing -- learning, memory,
language, etc. -- psychophysics is in many ways a calibration science:
It's concerned with determining our sensitivities for detection,
discrimination, etc. As such, it is really considering us in our
capacity as sensory devices -- measuring instruments. So the best
analogy would probably be the equivalent kind of investigation on
physical measuring devices. If what was at issue was not the
astronomer's objectivity in alpha detection but the telescope's, then
once again observer-independent conclusions could be drawn.
Comparisons between the telescope's sensitivity and that of other
alpha-detection devices could be made, etc. Here it would clearly be
the device's input/output behavior that was at issue, nothing more.

The same seems true of psychophysical detection. For although we all
know we're having sensations in a detection experiment, the only thing
that is being, or can be, objectively measured under such conditions
is our sensitivity as detection devices. Nor is more AT ISSUE in
psychophysics. In cog-sci, one can say of an input/output device that
purports to model our behavior: "But how do you know that's really
how I did it? After all, I can do much more (and I do it all consciously),
whereas all you have there is a few dumb processes and performances."
This is a real issue in cognitive modeling. (The buck stops at the TTT,
however, according to my account.) In psychophysics, on the other hand,
nobody is going to question the validity of a detection threshold because
there's no way to show that it's based on measuring consciousness rather
than mere input/output performance characteristics.

Before turning to Adam Reed's specific comments, let me reiterate that
this analysis is just as applicable, mutatis mutandis, to the more
complicated case of threshold calculation after a series of trials
with feedback. It's still a matter of input/output characteristics -- this
time with a long series of inputs, with instructions -- rather than
any "direct, objective measurement of experience." There's just no such
thing as the latter, according to the arguments I'm making.

[And I haven't even brought up the vexed issue of psycho-physical
"incommensurability," namely, that no matter how reliable our
psychophysical judgments, and how strong our conviction that they're
veridical in our own case, there is no OBJECTIVE measure on which to
equate and check the validity of the relation between physical stimulation
and sensation. Correlations between input and output are one thing -- but
between physical intensity and "experiential intensity"...?]

Adam writes:

>	I don't buy the assumption that two must *observe the same
>	instance of a phenomenon* in order to perform an *observer-independent
>	measurement of the same (generic) phenomenon*. The two physicists can
>	agree that they are studying the same generic phenomenon because they
>	know they are doing similar things to similar equipment, and getting
>	similar results. But there is nothing to prevent two psychologists from
>	doing similar (mental) things to similar (mental) equipment and getting
>	similar results, even if neither engages in any overt behavior apart
>	from reporting the results of his measurements to the other. My point is
>	that this constitutes objective (observer-independent) measurement of
>	private (no behavior observable by others) mental processes.

Apart from the objections I've already made about the "similar
equipment" argument [what, by the way, is "mental equipment"? sounds
figurative], about the experimenter as subject, about detection as
"measurement," and about the irrelevance of the behavioral covertness
to the basic input/output issue, the "generic" question seems problematic.
With the alphas, above, we didn't have to observe the same alpha, but
we did have to observe the same kind of alpha. Now the "alpha" in the
private case is MY sensations, not sensations simpliciter. So you
needn't verify, for objectivity's sake, the specific detection
sensation I had on trial N, or on any of my trials when I was subject,
if you like -- just as long as the generic sensation you do check on
is MINE not YOURS. Because otherwise, you see, there's this
observer-dependence...

>	This objection [that there's no way of checking the correctness of a
>	subject's covert calculations] applies with equal force to the
>	observation, recording and calculations of externally observable
>	behavior. So what?

What I meant here was that, after a long series of detection trials
with feedback and covert calculations, there's no way you can check
that I calculated MY threshold right except by running the trials on
yourself and checking YOUR threshold. But what has that to do with the
validity of MY threshold, or its status as a measure of my experience,
rather than just my input/output sensitivity after a series of trials
with complex instructions? 

I agree that there is a validity problem with all behavior, by the way,
but I think that favors my argument rather than yours. One way to
check the covert calculation is to have a subject do both -- overt
detecting AND covert calculations on subsequent feedback. The two
thresholds -- one calculated covertly by the subject, the other by the
experimenter -- may well agree, but all that shows is that they get
the same result when wearing their respective (objective) psychophysicist's
hats. What the agreement does not -- and cannot -- show is that the
subject was "measuring experience" when he was detecting. It can't
even show he was HAVING experience when he was detecting. But that's
the whole point about behavioral measures and objectivity. If we're
lucky, they'll swing together with conscious experience, but there's
no objective basis for counting on it, or checking it. (And, equally
important: It makes no methodological difference at all.)

>	Yes [there is no way of getting any data AT ALL without the subject's
>	overt mega-response at the end], but *this is not what is being
>	measured*. Or is the subject matter of physics the communication
>	behavior of physicists?

The subject may be silent till trial N, but the input/output
performance that is being measured is the presentation of N trials
followed by a report that stands in a certain relation to the inputs.
This is no different from the case of a simple trial, with a single
stimulus input, and the simple report "I saw it." That's not
scientific testimony, that's subjective report. The only one who can
ever see THAT kind of "it" (namely, yours) is you. (And, as I
mentioned, the subject is really switching hats here too.)

>	What is objectively different about the human case is that not only is
>	the other human doing similar (mental) things, he is doing those
>	things to similar (human mind implemented on a human brain) equipment.
>	If we obtain similar results, Occam's razor suggests that we explain
>	them similarly: if my results come from measurement of subjectively
>	experienced events, it is reasonable for me to suppose that another
>	human's similar results come from the same source. But a computer's
>	"mental" equipment is (at this point in time) sufficiently dissimilar
>	[to] a human's that the above reasoning would break down at the point
>	of "doing similar things to similar equipment with similar results",
>	even if the procedures and results somehow did turn out to be identical.

First, I of course agree that people have similar experiences and
similar brains, and that computers differ in both respects. But I
don't consider an experience, or the report of an experience, to be a
"measurement." If anything, all of me -- rather than part of me, used and
experimented on by another part -- is the measuring device when I'm
detecting a stimulus. After all, what's happening when I'm detecting
an (astronomic) alpha: a measurement of a measurement? (The point
about the computer was just meant to remind you that psychophysicists
are just doing input/output sensitivity measurements, and that the
same data could be generated by a computer-plus-transducer. But the
difference between current computers and ourselves touches on more
complex issues related to the TTT that needn't be raised here.)

The relevant factors are all there in simple one-trial detection: If I
report a detection, there's absolutely no objective test of whether
(1) I had a sensation at all, (2) I "measured" it accurately, or even
(3) whether it's measurable at all (i.e., whether experience and
physical magnitude are commensurable). My detection sensitivity in the
face of inputs, on the other hand, is indeed objectively testable. No
number of private experiments by experimenter/subjects can make a dent
in this epistemic barrier (called the mind/body problem).

>	Not true [that what we are actually measuring objectively is merely
>	behavior]. As I have shown in my original posting, d' can be measured
>	without there *being* any behavior prior to measurement. There is
>	nothing in Harnad's reply to refute this.

It can't be done without presenting stimuli and feedback. And
"behavior" refers to input/output relations. So there's a long string
of real-time input involved in the covert experiment, followed by the
report of a d' value. From that we can formulate the following
behavioral description: That after so-and-so-many trials of
such-and-such stimuli with such-and-such instructions, the subject
reports X. Even when I'm myself the subject in such an experiment,
that's how I would describe my findings, and those data are
behavioral. This is no different, as I suggested, from a single
detection trial. And the subject, of course, is switching hats during
such an experiment; there's nothing magic about his behavioral silence
during the covert calculations, any more than there is in the
astronomer's, after he's gotten his telescope reading and performs
calculations on them.

>	Why [will the testability and validity of these hypotheses always be
>	objectively independent of any experiential correlations (i.e., the
>	presence or absence of consciousness)]? And how can this be true in
>	cases when it is the conscious experience that is being measured?

These input/output sensitivity characteristics of human observers
would look the same whether or not human subjects were conscious. They
ARE conscious, and they ARE having experiences during the
measurements, but it's not their experiences we (or they) are measuring, it's
their sensitivity to stimuli. It feels, when I'm the subject, as if there's a
close coupling between the two. But who am I to say? That's just a feeling.
And feelings also seem, objectively speaking, incommensurable with
physical intensities. The astronomer's detection has no such liability
(except, of course, its subjective side -- "What it's like to detect
an alpha," or what have you). Rather than forcing us to conclude that
it's conscious experience that we're measuring in psychophysics, as
Adam suggests, I think Occam's Razor (a methodological principle,
after all) is dictating precisely the opposite.

>	I would not accept as legitimate any psychological theory
>	which appeared to contradict my conscious experience, and failed to
>	account for the apparent contradiction. As far as I can tell, Steve's
>	position means that he would not disqualify a psychological theory just
>	because it happened to be contradicted by his own conscious experience.

That depends on what you mean by "contradicted conscious experience."
I assume we're both willing to concede on hallucinations and illusions.
I also reject radical behaviorism, which says that consciousness is just
behavior. (I know that's not true.) I'd reject any theory that said I
wasn't conscious, or that there was no such thing, or that it's
"really" just something else that I know perfectly well it isn't. I'd
also reject a theory that couldn't account for everything I can
detect, discriminate, report and describe. But if a theory simply
couldn't account for the fact that I have subjective experience at
all, it wouldn't be contradicting my experience, it would just be
missing it, bypassing it. That's just what the methodological
solipsism I recommend does. It is, in a sense, epistemologically
incomplete -- it can't explain everything. Whether it's also
ontologically incomplete depends on the (objectively untestable)
question of whether the asymptotic model that passes the TTT is or is
not really conscious. If it is, then the model has "captured"
consciousness, even though the coupling cannot be demonstrated or
explicated. If it has not, it is ontologically incomplete. But, short
of BEING that model, there's no way we can ever know. (I also think
that turing-indistinguishability is an EXPLANATION of why there's
this incompleteness.)

>>[SH:] If I were one of the [psychophysical] experimenters
>>and Adam Reed were the other, how could he get "objective
>>(observer-independent) results" on my experience and I on his? Of
>>course, if we make some (question-begging) assumptions about the fact
>>that the experience of our respective alter egos (a) exists, (b) is
>>similar to our own, and (c) is veridically reflected by the "form" of the
>>overt outcome of our respective covert calculations, then we'd have some
>>agreement, but I'd hardly dare to say we had objectivity.

>	[AR:] These assumptions are not "question-begging": they are logically
>	necessary consequences of applying Occam's razor to this situation (see
>	above). And yes, I would tend to regard the resulting agreement among
>	different subjective observers as evidence for the objectivity of their
>	measurements.

I guess it'll have to be a standoff then. We disagree on what counts
as objective -- perhaps even on what objective means. Also on which
way Occam's Razor cuts.

>	For measurement to be *measurement of behavior*, the behavior must be,
>	in the temporal sequence, prior to measurement. But if the only overt
>	behavior is the communication of the results of measurement, then the
>	behavior occurs only after measurement has already taken place. So the
>	measurement in question cannot be a measurement of behavior, and must be
>	a measurement of something else. And the only plausible candidate for
>	that "something else" is conscious experience.

If you're measuring, say, detection sensitivity, you're measuring
input/output characteristics. It doesn't matter if these are
trial-to-trial I/O/I/O etc., or just III...I/O. Only the behaviorists
have made a fetish of overt performance. These days, it's safe to say
that performance CAPACITY is what we're measuring, and that includes
the capacity to do things covertly, as revealed in the final output,
and inferable therefrom. (Suppose you were checking a seismograph by
looking at its monthly cumulations only: Would the long behavioral
silence make the end-result any less overt and "behavioral"?) As I suggested
in another module, cognitive science is just behaviorism-with-a-theory,
at last. The theory includes attributing covert, unobservable processes to
the head -- but not conscious experiences to the mind. We know those are
there too, but for the (Occamian) reasons I've been discussing endlessly,
they can't figure in our theories.

>	Steve seems to be saying that the mind-body problem constitutes "a
>	fundamental limit on objective inquiry", i.e. that this problem is *in
>	principle* incapable of ever being solved. I happen to think that human
>	consciousness is a fact of reality and, like all facts of reality, will
>	prove amenable to scientific explanation. And I like to think that
>	this explanation will constitute, in some scientifically relevant sense,
>	a solution to the "mind-body problem". So I don't see this problem as a
>	"fundamental limit".

I used to have that fond hope too. Now I've seen there's a deep problem
inherent in all the existing candidates, and I've gotten an idea of what
the problem is in principle (that turing-indistinguishability IS
objectivity), so I don't see any basis for hope in the future (unless
there is a flaw in my reasoning). And, as Nagel has shown, the
inductive scenario based on our long successful history in explaining
objective phenomena simply fails to be generalizable to subjective
ones. So I don't see the rational basis for Adam Reed's optimism. On
the other hand, methodological epiphenomenalism is not all that bad --
after all, nothing OBJECTIVE is left out.

-- 

Stevan Harnad                                  (609) - 921 7771
{allegra, bellcore, seismo, rutgers, packard}  !princeton!mind!harnad
harnad%mind@princeton.csnet