[mod.ai] philosophy - consciousness

DAVIS@EMBL.BITNET (03/06/87)

Could an unconscious machine be a good psychologist?
****************************************************

        During the recent discussions on consciousness, Stevan Harnad has,
in the face of many claims about its role/origin, given us the demanding
question "well, if X is achieved *with* consciousness, why couldn't it
be accomplished *without*?" (I hope I have understood this much correctly).
I think that many of those arguing with Harnad, myself included, have not
appreciated the full implications of this question - I wish now to give
one example of "an X" designed to at least point in the direction of an
answer to it.

        I hope that Stevan would accept, as a relatively axiomatic truth,
that for complex systems (e.g. ourselves, future computer systems), interaction
and 'social development' are a *good thing*. That is to say, a system will
do better if it can interact with others (particularly of its kind), and
even more so if such interactions are capable of development towards
structures resembling 'societies'. We can justify this simply on the grounds
of efficiency, information exchange, and altruistically-based mutual survival
arrangements (helping each other out). I think that this is as true of computer
systems as of human beings, although its current implementation lacks any real
capacity for self-development.

        Given this axiom - that complex systems will do better if they
interact - we may return to the hypothesis of Armstrong, recently raised
by M. Brilliant on the AIList, that the selective advantage conferred by
being conscious is connected with the ability to form developing social
systems. Harnad's question in this context (previously raised) is "why couldn't
an unconscious TTT-indistinguishable automaton accomplish the same thing?".

        So, let's look at this proposition. In order to accomplish meaningful
social interactions in a way that opens up such relations to future development,
it is necessary to be able to predict - not, of course, with 100% accuracy,
but to an extent that permits mutual acts to occur without running through
all the verbal preliminaries every time (conceptually similar to installing
preamble macros in TeX - a facetious statement!). Our ability to do this
is described in everyday experience as 'understanding other people', and
permits us to avoid asking the boss for a raise when he is obviously in
a foul mood.

        Rephrasing Harnad's question in an even more specific (and revealing)
manner, we now have "why couldn't an unconscious TTT-indistinguishable
automaton make similarly good predictions about other conscious objects?".
We now have a useful fusion of biological, psychological and computer terms.
What sort of computer systems do we know of that are able to make predictions?
Although the exact definition is currently under debate (see the list), it
seems that we may subsume such systems under the general term "expert systems" -
used here in the most general sense of being an electronic device with access
to a knowledge base and some method of drawing conclusions given this knowledge
and a specific query or situation. I hope that Stevan will go along with
this as a possible description of his TTT-indistinguishable automaton.
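
        To make the term concrete, here is a minimal sketch - my own toy
illustration, not anything proposed by Harnad, with all facts and rule
names invented - of an "expert system" in this most general sense: a
knowledge base plus some method of drawing conclusions from it, given a
specific query.

# Toy forward-chaining 'expert system' (Python): a knowledge base of
# facts and designer-supplied rules, queried for a specific conclusion.

# Facts observed about the world (all names hypothetical).
facts = {"boss_frowning", "boss_slammed_door"}

# Rules instantiated by the designers: premises -> conclusion.
rules = [
    ({"boss_frowning", "boss_slammed_door"}, "boss_in_foul_mood"),
    ({"boss_in_foul_mood"}, "do_not_ask_for_raise"),
]

def forward_chain(facts, rules):
    """Apply rules repeatedly until no new conclusions can be drawn."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

def query(goal, facts, rules):
    """Answer a specific query against the knowledge base."""
    return goal in forward_chain(facts, rules)

print(query("do_not_ask_for_raise", facts, rules))   # -> True

Note that everything such a system can conclude is already latent in the
rules its designers wrote - a point that matters below.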

        So, could such a system 'understand' other people? I believe that
it could not, for the following reasons. As sophisticated as this 'inference
engine' may be, its methods of reasoning must still, even in some high-level
sense, be instantiated by its designers. Moreover, its knowledge base is
expandable only by observation of the world. To behave in a way that was
TTT-indistinguishable from a human in its capacity to 'understand' people,
this automaton would have to either (1) have a built-in model of human
psychology or (2) be capable of collecting information that enabled it to
form its own model over time.

        Here we have reached the kernel of the problem. Do we have, or are
we ever likely to have, our own model of human psychology that is capable
of being implemented on a computer? Obviously, this is open to debate, but
I think not. The human approach to psychology seems to me to be incapable
of developing in a context which does not take the participation and prior
knowledge of the psychologist into consideration. As sophisticated as it
gets, I feel (though you're welcome to try and change my mind) that psychology
will always be like a dictionary - you look up the meaning of one word,
and find you have to know 30 others to understand what it means. Alternatively,
suppose that our fabulous machine were to try and 'figure it out for itself'.
It will very soon run into a problem. When it asks someone why they did
something, it will receive a reply which often involves a reference to an
'inner self' - a world which, as any good psychologist will tell you, has
its own rules, its own objects and its own interactions. The machine asks,
and asks, observes and observes - will it ever be able to put together a
picture of the 'inner life' of these conscious humans?
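
        The dictionary problem is easy to make concrete. Here is a toy
illustration - again my own, with a deliberately circular and entirely
invented vocabulary of 'inner' terms - of what the machine's collected
explanations might look like, and of what happens when it tries to unpack
any one of them:

# Each answer the machine collects explains one inner-state word in
# terms of others, which themselves need looking up. (Python)

explanations = {
    "jealousy":    ["resentment", "insecurity"],
    "resentment":  ["anger", "pride"],
    "insecurity":  ["fear", "pride"],
    "anger":       ["frustration"],
    "frustration": ["desire", "fear"],
    "desire":      ["longing"],
    "longing":     ["desire"],       # circular: the regress never bottoms out
    "fear":        ["insecurity"],
    "pride":       ["desire"],
}

def unpack(term, seen=None):
    """Follow one explanation to all the terms it presupposes."""
    if seen is None:
        seen = set()
    if term in seen:
        return seen                  # met before: we are going in circles
    seen.add(term)
    for sub in explanations.get(term, []):
        unpack(sub, seen)
    return seen

# Asking why someone acted out of 'jealousy' drags in the whole vocabulary:
print(sorted(unpack("jealousy")))

Every term is defined only by pointing at the others; nothing in the
network ever grounds out in anything the machine has itself experienced.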

        And now we are at the end. It's obviously a statement of faith, but
I believe that what consciousness gives us is the ability to do just what
this machine cannot - to be a good psychologist. It makes this possible
by allowing us to *compare and contrast* our own behaviour and 'inner self'
with others' behaviour - and hence to make the leap of understanding that
gives rise to the possibility of meaningful social interaction and development.
We have our *own* picture of 'inner life' (this is not meant to be mystical!)
and hence we have no need to seek to develop a model by inference. I do
not believe (now!) that an unconscious device could do the latter, and hence
I do not think that it is possible, even in principle, to build an unconscious
TTT-indistinguishable automaton that is capable of interacting with conscious
objects.

Thank you, and good night.

Paul Davis

wetmail: embl, postfach 10.2209, 6900 heidelberg, west germany
netmail: davis@embl.bitnet
petmail: homing pigeons to .......