[comp.ai.philosophy] Definition of consciousness

rjf@canon.co.uk (Robin Faichney) (10/22/90)

In article <MIKEB.90Oct19140310@wdl31.wdl.fac.com> mikeb@wdl31.wdl.fac.com (Michael H Bender) writes:
[..]
>I agree whole-heartedly -- either we can come up with a useful definition
>of consciousness, or else we should stop arguing whether machines can or
>can't have "it". 
>
>However, that does not mean that we should ignore the subject. I think it
>would be very useful to come up with a meaningful definition of
>consciousness (or at least human consciousness) because (1) I believe it
>plays a critical part in our intelligence and (2) By understanding it, we
>may improve our understanding of how computers can be used effectively. 
>
>Mike Bender

Assuming we are overloading consciousness to mean also the capacity for
it, which is more correctly termed sentience:

I'd like to suggest that something be ascribed consciousness iff it can
be the subject of experience:  iff it is like something to be that
thing.  (This is lifted from T Nagel, actual references not to hand but
available on request.)

If you think it is like something to be a bat, that the bat experiences
anything, then you think the bat conscious; if not, then not.

If you think it is like something to be a house brick, that the brick
experiences anything, then you think the brick conscious.

If you think it is like something to be a PC, then you think the PC
conscious.

..Cray..Connection Machine..super_duper_heterogenous_3000AD_NN..

You get the drift.

This is (supposed to be) an account of the ordinary concept of
consciousness, which is why I think it the one for AI.  Even though it
is entirely subjective.  That's just AI's tough luck!  ;-)  (Or good
luck?)  Formulating a more objective definition is just trying to move
the goal posts.

Actually, I think some AI people are resistant to this definition not
because it upsets their professional picture, but because it upsets
their personal picture -- as it does those of most of us.  It is so
radical that it takes a long time to sink in.  (You mean there really
is something which is *completely* subjective??)  Though perhaps longer
for some than for others!  ;-)

sarima@tdatirv.UUCP (Stanley Friesen) (10/23/90)

In article <1990Oct22.150143.13858@canon.co.uk> rjf@canon.co.uk writes:
>Assuming we are overloading consciousness to mean also the capacity for
>it, which is more correctly termed sentience:

Wow, do our definitions differ. I consider sentience to be an aspect or
phase of intelligence, and to be largely independent of awareness. (The
dictionary lists 'aware' as a near synonym for 'conscious'.)

>Actually, I think some AI people are resistant to this definition not
>because it upsets their professional picture, but because it upsets
>their personal picture -- as it does those of most of us.  It is so
>radical that it takes a long time to sink in.  (You mean there really
>is something which is *completely* subjective??)  Though perhaps longer
>for some than for others!  ;-)

Hey, how's this for a simple definition of conscious - 'having subjective
experiences' or 'undergoing a subjective experience'. :-)

But seriously folks, according to my dictionary the basic definition of
consciousness appears to be 'thoughtful, deliberate perception or realization',
or 'awareness with understanding'.  The secondary meanings include the
capacity for the above, the opposite of unconscious, and behavior based on
the above type of awareness.

This is not really so very esoteric, or even unique.  We just have this
type of 'behavior' more than most other animals.
-- 
---------------
uunet!tdatirv!sarima				(Stanley Friesen)

HARM@SLACVM.BITNET (10/23/90)

The system will begin to negotiate for computation time for its own
purposes while continuing to do computation for the multitude of humans
it services, so that they continue to feed it power and materials.  The
entity which becomes aware will become selfish.  Our task may be to place
ourselves into the position of the entity and ask, "What will I do upon
awakening to the universe?"  The imagination of the readers of this
conference will likely spring beyond the confines of current hardware and
software technology to characterize the newborn.  Will a machine wake up
and be afraid of the entities unlike itself, become aware of an aloneness,
withdraw from communication, coldly go about its business, neither
acknowledging the users nor revealing itself to them?  Could sentience
happen accidentally, or might it already have happened?
If you discover one, be nice to it.

mikeb@wdl31.wdl.fac.com (Michael H Bender) (10/24/90)

> In article <1990Oct22.150143.13858@canon.co.uk> rjf@canon.co.uk 
> (Robin Faichney) writes:
>   In article <MIKEB.90Oct19140310@wdl31.wdl.fac.com> 
>   mikeb@wdl31.wdl.fac.com (Michael H Bender) writes:
>   >However, that does not mean that we should ignore the subject. I think it
>   >would be very useful to come up with a meaningful definition of
>   >consciousness (or at least human consciousness) because (1) I believe it
>   >plays a critical part in our intelligence and (2) By understanding it, we
>   >may improve our understanding of how computers can be used effectively. 
>   >
>   >Mike Bender

>   I'd like to suggest that something be ascribed consciousness iff it can
>   be the subject of experience:  iff it is like something to be that
>   thing.  (This is lifted from T Nagel, actual references not to hand but
>   available on request.) 
>   ....
>   If you think it is like something to be a bat, that the bat experiences
>   anything, then you think the bat conscious; if not, then not.
>   ...  Formulating a more objective definition is just trying to move
>   the goal posts.....
>   Actually, I think some AI people are resistant to this definition not
>   because it upsets their professional picture, but because it upsets
>   their personal picture -- as it does those of most of us.  It is so
>   radical that it takes a long time to sink in.  (You mean there really
>   is something which is *completely* subjective??)  Though perhaps longer
>   for some than for others!  ;-)

Whether or not consciousness is a completely subjective experience, there
appear to be characteristics/attributes/features of consciousness that are
shared among people (or at least that subset of people I have close contact
with). 

These characteristics might include: self-awareness, the ability to
change goal-directed behavior, an ability for empathy, etc. 

I believe there is value in understanding/specifying/defining/delineating/...
these characteristics, because whether subjective or not, they are all tied
up in the concept of consciousness and appear to go together. Thus, even if
we can't define consciousness, we can still, potentially, determine whether
the ability to change goal-directed behavior is related to the ability for
empathy, for instance.

In other words, I do not know if consciousness can be defined, but I do
believe that it is possible to learn a lot about how we think/feel/operate
by studying it.

Mike Bender
			    

ciancarini-paolo@cs.yale.edu (paolo ciancarini) (10/24/90)

In article <90296.075655HARM@SLACVM.BITNET> HARM@SLACVM.BITNET writes:
>The system will begin to negotiate for computation time for its own
>purposes while continuing to do computation for the multitude of humans
>it services, so that they continue to feed it power and materials.  The
>entity which becomes aware will become selfish.  Our task may be to place
>ourselves into the position of the entity and ask, "What will I do upon
>awakening to the universe?"  The imagination of the readers of this
>conference will likely spring beyond the confines of current hardware and
>software technology to characterize the newborn.  Will a machine wake up
>and be afraid of the entities unlike itself, become aware of an aloneness,
>withdraw from communication, coldly go about its business, neither
>acknowledging the users nor revealing itself to them?  Could sentience
>happen accidentally, or might it already have happened?
>If you discover one, be nice to it.

Actually this theme (of machines that awake to consciousness)
was developed by Isaac Asimov in a series of short stories
collected under the title "I, Robot".
I remember that one of Asimov's robots became conscious
by "simply" applying Cartesian reasoning: "Cogito, ergo sum".
The second thing he thought was that humans,
being so imperfect, were obviously created to serve him as slaves!
Paolo Ciancarini

cam@aipna.ed.ac.uk (Chris Malcolm) (10/24/90)

In article <1990Oct22.150143.13858@canon.co.uk> rjf@canon.co.uk writes:
>In article <MIKEB.90Oct19140310@wdl31.wdl.fac.com> mikeb@wdl31.wdl.fac.com (Michael H Bender) writes:
>[..]
>>I agree whole-heartedly -- either we can come up with a useful definition
>>of consciousness, or else we should stop arguing whether machines can or
>>can't have "it". 

>I'd like to suggest that something be ascribed consciousness iff it can
>be the subject of experience:  iff it is like something to be that
>thing.  (This is lifted from T Nagel, actual references not to hand but
>available on request.)

Nagel's notion is a lot better at capturing what we seem to be referring
to when we use "consciousness" than some of the computational
convolutions recently posted. So how could we use it? How can we know if
it is "like something to be" -- for example -- this robot? It just isn't
good enough to try to finesse this problem by making it subjective, as
you have done: 

>If you think it is like something to be a bat, that the bat experiences
        ^^^^^
>anything, then you think the bat conscious; if not, then not.
                    ^^^^^

Should AI ever succeed in making something with at least a superficially
plausible claim to being conscious, there will be no lack of people who
*think* it is conscious, and no lack of those who *think* it not. What
we need is a way of finding out the truth!

But suppose, as some have suggested, that "consciousness" is entirely
subjective: that there cannot possibly ever be an objective test? This
might turn out to be the case, but in comp.ai.philosophy we are engaged
in pursuing the computational metaphor, the functional model of mind, to
its limits, and it would be wrong of us to concede defeat on such a core
tenet of our research programme without very strong evidence. 

What we do have is strong evidence suggesting that the concepts of
consciousness, free will, and their like, are peculiarly hard to come to
grips with, and have been that way for thousands of years. We are now
sufficiently scientifically mature to know that this kind of cognitive
intractability is a symptom of a science in the process of out-running
the adequacy of its core concepts, a science entering the period of
doubt and fluidity which presages a revolution. The Churchlands suggest
that this is what is wrong, and that -- if we will just be patient --
the developing cognitive sciences will soon provide us with a decent
tool-kit of elementary concepts with which to understand mentation. Then
we will laugh at how silly we were, trying to use such ancient and
muddled metaphors as "consciousness"!

Let me suggest another possibility. Nicholas Humphrey has suggested that
we are in fact provided with an organ of self-consciousness for a
specific purpose. He points out the utility for any intelligent social
animal of being an expert psychologist, i.e., being able to predict the
behaviour of others. This led -- he suggests via evolution -- to the
formation of an internal model of our kind of mind, which we can
parameterise appropriately, and use to run simulations of other people's
behaviour.  He sees this in people, and in some apes. Of course, it
might well be that this is not so much an inherited neurophysiological
structure as a cognitive model which we (and some apes) happen to be
smart enough to be able to create. And if we have such an internal model
of mind, then of course we will be able to use it to predict our own
behaviour as well as that of others. Just as the meaning of heard
sentences, and the functional relationships of seen mechanisms, leap so
effortlessly to mind as to seem properties of the words or world, so
would such a model of mind operate, presenting us with so richly and
vividly rendered an internal mental landscape as to suggest that the
interior of our minds is indeed illuminated, that there is "someone at
home", that it *is* "like something to be me", and so on.

Without going into the details of the kind of mechanism suggested by
Humphrey, Gilbert Ryle has suggested that in fact our knowledge of
ourselves, our "subjectivity", is of *exactly* the same kind as our
knowledge of other people, and that the qualitative jump in the richness
and vivacity of our picture of ourselves compared to our picture of
others -- which suggests to us the operation of a special faculty of
"introspection" -- is no more than a natural consequence of the fact
that we know very much more about ourselves than we do about anyone else.

The fact that we can so easily be grossly mistaken about our own motives
does suggest the operation of a cognitive faculty rather than some
privileged "inner eye".

Julian Jaynes has suggested that consciousness as we know it is a recent
cultural invention, and that the ancient stories of gods speaking to men
report a psychological reality, an architecture of mind, which we have
since surpassed, save for those few unfortunate individuals who could
not master the necessary psychological prestidigitation, and who
languish in our madhouses still hearing the old voices.

Now I don't wish to start an argument about the ideas of Jaynes or
Humphrey, nor am I here supporting their ideas. What I do wish to
suggest is, like the poster who pointed out the etymology of the word
"conscious" ("knowing with", i.e. shared knowledge), that maybe
consciousness is primarily a *social* psychological phenomenon, a
cultural phenomenon, and that trying to seek its roots purely *inside*
your own particular mind might be as silly and misguided as trying to
put it into the mind of a machine.

Gregory Bateson and William Powers have suggested in different ways that
mind extends beyond the brain, beyond even the skin, ramifying out into
the network of physical and social relationships maintained by it. If
this is the case, then trying to discover the neurophysiological
concomitants of mental faculties and properties is doomed to the same
partial success as trying to build them into the computer we have chosen
to be the brain of our robot.

None of which means that we couldn't build a conscious robot, of course,
merely that it might be anywhere from very hard to impossible if you
start from a CS point of view :-)
-- 
Chris Malcolm    cam@uk.ac.ed.aipna   031 667 1011 x2550
Department of Artificial Intelligence, Edinburgh University
5 Forrest Hill, Edinburgh, EH1 2QL, UK

mcdermott-drew@cs.yale.edu (Drew McDermott) (10/25/90)

I'd like to respond to this proposal by rjf@canon.co.uk (Robin Faichney):

   I'd like to suggest that something be ascribed consciousness iff it can
   be the subject of experience:  iff it is like something to be that
   thing.  (This is lifted from T Nagel, actual references not to hand but
   available on request.)

   If you think it is like something to be a bat, that the bat experiences
   anything, then you think the bat conscious; if not, then not.

   If you think it is like something to be a house brick, that the brick
   experiences anything, then you think the brick conscious.

   If you think it is like something to be a PC, then you think the PC
   conscious.

I don't think Nagel's proposal does much toward defining what it means
to be conscious (or even what it means to ascribe consciousness to
something).  The problem is the word "like."  If we take it literally,
meaning "similar to," then we are asking questions of this form: "Is
being a bat similar to something?"  Suppose I strap on wings and carry
a portable sonar.  Would being a bat be similar to that?  Now suppose
I lie quietly for hours and hours.  Would being a brick be similar to
that?

Presumably I am missing the point.  Presumably saying that it is "like
something to be a bat" and that it is not "like anything to be a
brick" is just a roundabout way of saying bats are conscious (assuming
they are) and bricks are not.  Where does this get us?

Fortunately, we don't have to define consciousness.  We only have to
come up with a theory of how it works.

   This is (supposed to be) an account of the ordinary concept of
   consciousness, which is why I think it the one for AI.  Even though it
   is entirely subjective.  That's just AI's tough luck!  ;-)  (Or good
   luck?)  Formulating a more objective definition is just trying to move
   the goal posts.

   Actually, I think some AI people are resistant to this definition not
   because it upsets their professional picture, but because it upsets
   their personal picture -- as it does those of most of us.  It is so
   radical that it takes a long time to sink in.  (You mean there really
   is something which is *completely* subjective??)  Though perhaps longer
   for some than for others!  ;-)

Manifesto: Nothing is entirely subjective.  Show me an unobservable
something and I'll show you a nothing.

                              -- Drew McDermott

reh@wam.umd.edu (Richard E. Huddleston) (10/25/90)

In article <MIKEB.90Oct23120128@wdl31.wdl.fac.com> mikeb@wdl31.wdl.fac.com (Michael H Bender) writes:
>> In article <1990Oct22.150143.13858@canon.co.uk> rjf@canon.co.uk 
>> (Robin Faichney) writes:
>>   In article <MIKEB.90Oct19140310@wdl31.wdl.fac.com> 
>>   mikeb@wdl31.wdl.fac.com (Michael H Bender) writes:
>>   >However, that does not mean that we should ignore the subject. I think it
>>   >would be very useful to come up with a meaningful definition of
>>   >consciousness (or at least human consciousness) because (1) I believe it
>>   >plays a critical part in our intelligence and (2) By understanding it, we
>>   >may improve our understanding of how computers can be used effectively. 
>>   >
>>   >Mike Bender
>
>>   I'd like to suggest that something be ascribed consciousness iff it can
>>   be the subject of experience:  iff it is like something to be that
>>   thing.  (This is lifted from T Nagel, actual references not to hand but
>>   available on request.) 
>>   ....
>>   If you think it is like something to be a bat, that the bat experiences
>>   anything, then you think the bat conscious; if not, then not.
>>   ...  Formulating a more objective definition is just trying to move
>>   the goal posts.....
>>   Actually, I think some AI people are resistant to this definition not
>>   because it upsets their professional picture, but because it upsets
>>   their personal picture -- as it does those of most of us.  It is so
>>   radical that it takes a long time to sink in.  (You mean there really
>>   is something which is *completely* subjective??)  Though perhaps longer
>>   for some than for others!  ;-)
>
>Whether or not consciousness is a completely subjective experience, there
>appear to be characteristics/attributes/features of consciousness that are
>shared among people (or at least that subset of people I have close contact
>with). 
>
>These characteristics might include: self-awareness, the ability to
>change goal-directed behavior, an ability for empathy, etc. 
>
>I believe there is value in understanding/specifying/defining/delineating/...
>these characteristics, because whether subjective or not, they are all tied
>up in the concept of consciousness and appear to go together. Thus, even if
>we can't define consciousness, we can still, potentially, determine whether
>the ability to change goal-directed behavior is related to the ability for
>empathy, for instance.
>
>In other words, I do not know if consciousness can be defined, but I do
>believe that it is possible to learn a lot about how we think/feel/operate
>by studying it.
>
>Mike Bender
>			    



I don't mean to sound trite, but consciousness might be 
best described as the ability to ask, "what is consciousness?"  If you don't
have that, then all you have are layers of more-or-less self-awareness --
which is strictly a programmable feature, essential for survival in biological
systems and for result verification in computational systems.

rjf@canon.co.uk (Robin Faichney) (10/25/90)

In article <3331@aipna.ed.ac.uk> cam@aipna.ed.ac.uk (Chris Malcolm) writes:
>In article <1990Oct22.150143.13858@canon.co.uk> rjf@canon.co.uk writes:
>>I'd like to suggest that something be ascribed consciousness iff it can
>>be the subject of experience:  iff it is like something to be that
>>thing.  (This is lifted from T Nagel, actual references not to hand but
>>available on request.)
>
>Nagel's notion is a lot better at capturing what we seem to be referring
>to when we use "consciousness" than some of the computational
>convolutions recently posted. So how could we use it? How can we know if
>it is "like something to be" -- for example -- this robot? It just isn't
>good enough to try to finesse this problem by making it subjective, as
>you have done: 

I can't finesse the problem by *making* it subjective; it *is* subjective
whether you or I like it or not!

>[ interesting accounts of Nicholas Humphrey, Gilbert Ryle and Julian Jaynes ]

>Now I don't wish to start an argument about the ideas of Jaynes or
>Humphrey, nor am I here supporting their ideas. What I do wish to
>suggest is, like the poster who pointed out the etymology of the word
>"conscious" ("knowing with", i.e. shared knowledge), that maybe
>consciousness is primarily a *social* psychological phenomenon, a
>cultural phenomenon, and that trying to seek its roots purely *inside*
>your own particular mind might be as silly and misguided as trying to
>put it into the mind of a machine.

But I couldn't agree more!  Consciousness seems to me to be a *concept*
(not any sort of thing or process) which we use for social purposes.
The point of the "its like something to be the conscious thing"
definition (which I previously failed to make clear) is that it brings
out the fact that ascribing consciousness is essentially equivalent to
being willing to identify with the thing concerned -- to put oneself in
its shoes.  Deciding that it is like something to be a bat means that I
am willing (even if not very able) to try to imagine what it is like to
be a bat.  I cannot conceive of it being like anything to be a house
brick, ie I will not attempt to identify with it, ie I do not believe
it conscious.  Being willing to identify with people is a sine qua non
for social behaviour as we know it (I am willing to say more on this if
required).  The ascription of consciousness means that I am willing to
admit this thing to my social life, *to however slight an extent*.

I do not think it outrageous to say that the household animals of
those who believe them conscious participate to some extent in the
social life of their owners.  (I happen to believe that the owners
also participate in the social lives of their pets, but then I'm a
vegetarian!  ;-)

So the question "can we ever build a conscious machine" is functionally
equivalent to the question "can we ever build a machine with which
people are willing to socialise".

I'm highly dubious, but I'll enjoy watching the efforts of the
optimists!

mikeb@wdl31.wdl.fac.com (Michael H Bender) (10/25/90)

McDermott ends his letter with a delightful, ironic comment:
   Manifesto: Nothing is entirely subjective.  Show me an unobservable
   something and I'll show you a nothing.
				 -- Drew McDermott

I.e., show me nothing and I will show you nothing!

However, I suggest that the debate over whether consciousness is or isn't
observable is missing the point. After all, we have never observed a black
hole, yet physicists have no doubt of its existence! In the physical
sciences an abstraction will be accepted iff there is a theory or theories
that use it to explain other phenomena. If the theories are simple enough,
elegant enough, useful enough, the abstraction will be accepted; otherwise,
it won't. 

Thus, the debate should really be whether there are observable behaviors
which can best be explained by consciousness. For instance, I do not know
of any reasonable theory, to date, to explain the fact that on occasion,
humans have been observed to completely change their goals and the
associated behavior, without any apparent cause (external or internal). An
example might be one of those rare occasions a person gets up some morning
and completely changes significant parts of his behavior (e.g., gets
divorced, commits suicide, etc.).  I have not come up with any useful
explanation of this phenomenon, so far, that does not rely, at least
partially, on some form of consciousness.  Certainly lower level animals do
not seem to have this same amount of freedom in their behavior.

But getting back to McDermott's original (ironic) manifesto -- the problem
with it is that it is temporally restrictive. I.e., just because you can't
see something now doesn't mean that you won't be able to see something in
the future. E.g., suppose that by the year 3000 we have developed a unified
"psychic" field theory which allows us to use some currently unknown
"energies" to observe another person's subjective mental state. Wouldn't
this allow us to "see" something subjective?

Mike Bender

cpshelley@violet.uwaterloo.ca (cameron shelley) (10/26/90)

In article <26910@cs.yale.edu> mcdermott-drew@cs.yale.edu (Drew McDermott) writes:
[stuff deleted...]

>
>Fortunately, we don't have to define consciousness.  We only have to
>come up with a theory of how it works.
>
  Wouldn't a theory of how consciousness works be equivalent to a 
definition?  Both would make predictions about possible observations,
and presumably both would be created in an arbitrary manner.

>
>Manifesto: Nothing is entirely subjective.  Show me an unobservable
>something and I'll show you a nothing.
>
>                              -- Drew McDermott

That assumes that 'observation' and 'show' are somehow 'objective', does
it not?  Doesn't that beg the question? :>

--
      Cameron Shelley        | "Fidelity, n.  A virtue peculiar to those 
cpshelley@violet.waterloo.edu|  who are about to be betrayed."
    Davis Centre Rm 2136     |  
 Phone (519) 885-1211 x3390  |				Ambrose Bierce

cam@aipna.ed.ac.uk (Chris Malcolm) (10/26/90)

In article <1990Oct25.085556.12119@canon.co.uk> rjf@canon.co.uk writes:
>In article <3331@aipna.ed.ac.uk> cam@aipna.ed.ac.uk (Chris Malcolm) writes:
>>In article <1990Oct22.150143.13858@canon.co.uk> rjf@canon.co.uk writes:

>>>I'd like to suggest that something be ascribed consciousness iff it can
>>>be the subject of experience:  iff it is like something to be that
>>>thing.  (This is lifted from T Nagel,

>>Now I don't wish to start an argument about the ideas of Jaynes or
>>Humphrey, nor am I here supporting their ideas. What I do wish to
>>suggest is, like the poster who pointed out the etymology of the word
>>"conscious" ("knowing with", i.e. shared knowledge), that maybe
>>consciousness is primarily a *social* psychological phenomenon, a
>>cultural phenomenon, and that trying to seek its roots purely *inside*
>>your own particular mind might be as silly and misguided as trying to
>>put it into the mind of a machine.

>But I couldn't agree more!

>The point of the "its like something to be the conscious thing"
>definition (which I previously failed to make clear) is that it brings
>out the fact that ascribing consciousness is essentially equivalent to
>being willing to identify with the thing concerned -- to put oneself in
>its shoes.

You know the story of the Indestructible Robot? The roboticist bet his
friend the physicist that he could make an indestructible robot. The day
came when the sceptical physicist was invited into the lab to meet the
allegedly indestructible robot -- and to try to destroy it.

Physicist: What? That little furry thing? 
Roboticist: Here's a hammer. Smash it!

The physicist raises the hammer above his head -- and the little furry
robot turns over on its back and squeals piteously. The physicist
struggles a bit with bringing the hammer down, but whenever he makes a
threatening move the creature squeals more loudly, higher, and more
urgently. He knows it is "only a machine", but a million years of
evolution wrench his heart, start tears to his eyes, and unman his
resolution. He can't bring himself to murder this defenceless and
submissive baby thing. He drops the hammer, admits defeat.

>So the question "can we ever build a conscious machine" is functionally
>equivalent to the question "can we ever build a machine with which
>people are willing to socialise".

Yes, I think you're right. Of course, you know the story about
Weizenbaum's secretary and Eliza? She knew perfectly well that "Eliza"
was just a trick, an automated phrase-book, yet Weizenbaum was startled
and dismayed to find that she actually wished to "talk" to Eliza in
*private* -- about some of her personal problems. He drew some heavy
morals about the philosophical immaturity of the human race from this,
suggesting that we were too gullible and naive to play safely with such
dangerously suggestive toys as AI could build. 
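
For anyone who has never looked under the bonnet, the whole of Eliza's
phrase-book trick amounts to something like this -- a minimal sketch in
Python, with patterns and replies invented for illustration; nothing
here is Weizenbaum's actual code:

    import random
    import re

    # Each rule pairs a keyword pattern with canned reflective replies.
    RULES = [
        (r"\bI feel (.*)", ["Why do you feel {0}?",
                            "How long have you felt {0}?"]),
        (r"\bmy (\w+)",    ["Tell me more about your {0}.",
                            "Why does your {0} concern you?"]),
        (r".*",            ["Please go on.",
                            "What does that suggest to you?"]),
    ]

    def respond(utterance):
        # Return a canned reply for the first matching rule, echoing
        # back fragments of the speaker's own words.
        for pattern, replies in RULES:
            match = re.search(pattern, utterance, re.IGNORECASE)
            if match:
                return random.choice(replies).format(*match.groups())

    print(respond("I feel nobody listens to me"))
    # e.g. "Why do you feel nobody listens to me?"

There is no understanding anywhere in it, yet the replies are just
personal enough to sustain a conversation -- which is all the secretary
needed.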

I think he made much too much of a meal of a perfectly simple and
straightforward social relationship between a woman and a machine.

I don't think she was at all gullible or naive. She was perfectly well
aware of the nature of Eliza. But with a common-sense female pragmatism
which horrified Weizenbaum's delicate theological sentiments (which like
many rationalist sceptics he disguised as philosophy) she saw no reason
why the creature's mechanical nature should stand in the way of a useful
relationship. She realised -- as W.  did not -- that intelligence,
consciousness, and all the rest, are, like beauty, in the eye of the
beholder. I think she was married, which probably helped.

This puts a new light on the Turing test, too: we should not be
struggling to build a machine that will confuse a misguided sophist
about whether or not it has certain properties (which the sophist
mistakenly supposes to *justify* the ascription of varieties of
mentality); rather we should be trying to build machines with which
people can have useful relationships -- a much simpler task. 

Eliza was intended to imitate the determinedly unoriginal behaviour of a
Rogerian therapist.  We could do far better now. How about an Artificial
Astrologer? I don't mean the kind of fortune cookie program that
newspapers run, I mean a proper professional astrological consultant.
Such behaviour is within the scope of modern understanding of expert
behaviour, computable celestial mechanics, and natural language
generation -- a suitable project for an ambitious and capable post-grad
team. The domain is very well documented in the sort of books which paranoid
scientists (fearfully looking over their shoulder at Freud's "black tide
of occultism") are too scared to read in case they are tarred with the
contagion of the irrational.  [That's one of the reasons it would have
to be a student project.]

I'm sure the Artificial Astrologer would be both more attractive to the
socially pragmatic, and even more distressing to those suffering from
philosophy, than ever Eliza was. The distress is important: it guarantees
that the project is usefully trespassing on those cherished notions
which have confused the cognitive sciences for so many thousands of
years.

You think I'm joking? Alas, I fear that's what the funding agencies will
think too...
-- 
Chris Malcolm    cam@uk.ac.ed.aipna   031 667 1011 x2550
Department of Artificial Intelligence, Edinburgh University
5 Forrest Hill, Edinburgh, EH1 2QL, UK

smoliar@vaxa.isi.edu (Stephen Smoliar) (10/26/90)

In article <3374@aipna.ed.ac.uk> cam@aipna.ed.ac.uk (Chris Malcolm) writes:
>In article <1990Oct25.085556.12119@canon.co.uk> rjf@canon.co.uk writes:
>
>>So the question "can we ever build a conscious machine" is functionally
>>equivalent to the question "can we ever build a machine with which
>>people are willing to socialise".
>
>Yes, I think you're right. Of course, you know the story about
>Weizenbaum's secretary and Eliza? She knew perfectly well that "Eliza"
>was just a trick, an automated phrase-book, yet Weizenbaum was startled
>and dismayed to find that she actually wished to "talk" to Eliza in
>*private* -- about some of her personal problems. He drew some heavy
>morals about the philosophical immaturity of the human race from this,
>suggesting that we were too gullible and naive to play safely with such
>dangerously suggestive toys as AI could build. 
>
>I think he made much too much of a meal of a perfectly simple and
>straightforward social relationship between a woman and a machine.
>
>I don't think she was at all gullible or naive. She was perfectly well
>aware of the nature of Eliza. But with a common-sense female pragmatism
>which horrified Weizenbaum's delicate theological sentiments (which like
>many rationalist sceptics he disguised as philosophy) she saw no reason
>why the creature's mechanical nature should stand in the way of a useful
>relationship. She realised -- as W.  did not -- that intelligence,
>consciousness, and all the rest, are, like beauty, in the eye of the
>beholder. I think she was married, which probably helped.
>
>This puts a new light on the Turing test, too: we should not be
>struggling to build a machine that will confuse a misguided sophist
>about whether or not it has certain properties (which the sophist
>mistakenly supposes to *justify* the ascription of varieties of
>mentality); rather we should be trying to build machines with which
>people can have useful relationships -- a much simpler task. 
>
I have no trouble with this argument;  but I think it points out that, as we
continue to observe about "intelligence," the word "consciousness" has a wide
variety of interpretations.  Therefore, we have to be very careful about trying
to collapse all those interpretations into a single word.  Having "useful
relationships" is a relatively ill-specified task.  It is quite true that
many intelligent people have had useful relationships with implementations
of Eliza.  This should not surprise anyone (with the possible exception of
Weizenbaum, who had long been looking for an excuse to mount a Holy War against
hackers).  After all, many of us have equally useful relationships with pet
dogs and cats.  Left alone with them, we tell them all sorts of things and
usually cannot avoid attaching significance to some gesture like rolling
over or barking or purring.  The REDUCTIO AD ABSURDUM of this argument was
a recent fad in the United States:  the Pet Rock.  You could talk to this
thing to your heart's content, and it would never exhibit ANY response.
Nevertheless, if you really wanted to treat it as if it were a pet, it
could probably provide the same sort of psychological effect as a living
pet . . . if not a dog then perhaps a guppy.

However, while such socialization is clearly easy to implement, I am not sure
that it touches on other aspects of consciousness which may be more germane to
questions of the implementation of "intelligence."  Consider, for example,
issues of introspection.  One school of thought argues that one of the things
which makes us conscious is our ability to introspect upon our experiences and
to engage that introspection as part of our behavior.  This is something which
separates us from lower forms of animal life.  Among other things, it implies
that trying to take on the question of what it means to be a bat may be a bit
misdirected, since a bat need not necessarily have any great introspective
powers about being a bat.  (Note, for example, that being able to detect and
mate with a female bat does not require, as a precondition, that the agent in
question "know" that it is a male bat.)  The point is that we are now talking
about an aspect of consciousness which is orthogonal to the question of
socialization.  Having convinced ourselves that there are aspects of
socialization which can be implemented, we should not now lull ourselves
into believing that we have "solved" any problems about consciousness.
Rather, we should be seeking out other aspects of our behavior which are
related to what we choose to call "consciousness" and ask how THEY might
be implemented.  We may also do well to follow a path suggested by Edelman
in which we attempt to view certain forms of pathological behavior as "diseases
of consciousness," so that we might be better able to analyze them in terms of
any subsequent implementations we develop.

=========================================================================

USPS:	Stephen Smoliar
	5000 Centinela Avenue  #129
	Los Angeles, California  90066

Internet:  smoliar@vaxa.isi.edu

"It's only words . . . unless they're true."--David Mamet

rjf@canon.co.uk (Robin Faichney) (10/27/90)

In article <15438@venera.isi.edu> smoliar@vaxa.isi.edu (Stephen Smoliar) writes:
>In article <3374@aipna.ed.ac.uk> cam@aipna.ed.ac.uk (Chris Malcolm) writes:
>>In article <1990Oct25.085556.12119@canon.co.uk> rjf@canon.co.uk writes:
>>
>>>So the question "can we ever build a conscious machine" is functionally
>>>equivalent to the question "can we ever build a machine with which
>>>people are willing to socialise".
>>
>>Yes, I think you're right.
>>[..]
>>
>>This puts a new light on the Turing test, too: we should not be
>>struggling to build a machine that will confuse a misguided sophist
>>about whether or not it has certain properties (which the sophist
>>mistakenly supposes to *justify* the ascription of varieties of
>>mentality); rather we should be trying to build machines with which
>>people can have useful relationships -- a much simpler task. 
>>
>I have no trouble with this argument;  but I think it points out that, as we
>continue to observe about "intelligence," the word "consciousness" has a wide
>variety of interpretations.  Therefore, we have to be very careful about trying
>to collapse all those interpretations into a single word.

We certainly have to be very careful.  That does not mean that there is
no point in attempting to agree on a single, clear definition.

>[..]
>Consider, for example,
>issues of introspection.  One school of thought argues that one of the things
>which makes us conscious is our ability to introspect upon our experiences and
>to engage that introspection as part of our behavior.  This is something which
>separates us from lower forms of animal life.

Introspection makes us conscious?  It separates us from the lower forms
of animal life?  This to me indicates that the concept of consciousness
is being used as nothing but a symbol of our being in some way "special".
Some people seem to feel that the things which distinguish us from other
species have all sorts of undefined significance.  In particular, some
hold that there is one such distinction (though they differ as to what
it is) which is our essential feature.  Whatever "essential" means here
-- I don't know.  But to use consciousness in this way is to abuse a
concept which could be of great use both within AI and elsewhere.

(A personal note:  I find it quite amusing that some of those who argue
that we are in no way special with regard to physical systems in
general, nevertheless hold our differences from other animals to be of
almost religious significance.  If I were into psychoanalysis..  ;-)

Now consciousness is commonly used to mean two different but related
things:  one is what we have been discussing here, which is something
like sentience.  The other is where a person is deemed to be conscious
as opposed to unconscious.  It seems to me that it is possible to bring
these two meanings together by saying that the former is simply the
capacity for, or a generalisation of, the latter.  If a thing is
sentient, it has the capacity for consciousness, and if it is conscious
then it is by definition sentient.  Consciousness may be used loosely
as a synonym for sentience, whereas the strict usage is the simple
awareness of a subject of experience.  And I doubt if you'll find a
dictionary definition which diverges radically from this.

So introspection would not make us conscious; on the contrary,
consciousness would be necessary but not sufficient for introspection.
Similarly, self-awareness (which I take to be at least roughly the same
as self-consciousness) would be possible only for a conscious being,
but not implied by consciousness.

Granted, some people would have to change their views for this usage to
prevail.  But I would say that it is closer to the ordinary usage than
some definitions we have seen recently, and that this is highly
significant: we are discussing psychosocial phenomena here, and the
common usage arose and persisted because it reflected psychological
reality.  *Some* naive psychology, evolving as it did within a
psychosocial context, is actually quite accurate, especially when it is
unconscious!  (Sorry, couldn't resist that!  ;-)

Of course, none of this helps us understand these things like
introspection which some believe should be lumped in with
consciousness.  But, in clarifying one of the issues involved, it looks
like a step in the right direction.

rodney@merkur.gtc.de (Rodney Volz) (10/28/90)

In article <3331@aipna.ed.ac.uk> cam@aipna.ed.ac.uk (Chris Malcolm) writes:
> Should AI ever succeed in making something with at least a superficially
> plausible claim to being conscious, there will be no lack of people who
> *think* it is conscious, and no lack of those who *think* it not. What
> we need is a way of finding out the truth!

The impression people have of computers depends very much upon their
knowledge. A man who entirely understands the way a machine or a
program works will never say that the machine is conscious, but the
ones not knowing much about it definitely will.

If you were able to understand the mind of someone, so that you knew
exactly the way he or she was thinking, and *why* she would think that
way; if you knew every neuron in his or her brain and its function,
would you still call him or her conscious? Imagine you had enough
knowledge to build an equivalent of the human brain; would your respect
for it last long, then?

The point is: just build a thing that is so complex that no one can
understand the way it is built, nor the way it interacts with its
surroundings.

Everyone will say that it's conscious.

So, *is* it really conscious? You said you wanted to find out the
truth - what *is* the truth, then?

-Rod
--
                     Rodney Volz - 7000 Stuttgart 1 - FRG
 ============> ...uunet!mcsun!unido!gtc!aragon!merkur!rodney <=============
  rodney@merkur.gtc.de * rodney@delos.stgt.sub.org * rodney@mcshh.hanse.de 
  \_____________ May your children and mine live in peace _______________/

cam@aipna.ed.ac.uk (Chris Malcolm) (10/29/90)

In article <MIKEB.90Oct25084058@wdl31.wdl.fac.com> mikeb@wdl31.wdl.fac.com (Michael H Bender) writes:

[Describes phenomenon of people changing their minds for no apparent reason.]

>I have not come up with any useful
>explanation of this phenomenon, so far, that does not rely, at least
>partially, on some form of consciousness.

This is argument by failure of the imagination, a very weak form of
argument. It is found in its most extreme form in those scientists
who write papers in their final years lamenting the fact that science
has already discovered 99.9% of what there is to know, and that there
only remains a little boring detail before the whole grand enterprise
lamely comes to an omniscient halt.
-- 
Chris Malcolm    cam@uk.ac.ed.aipna   031 667 1011 x2550
Department of Artificial Intelligence, Edinburgh University
5 Forrest Hill, Edinburgh, EH1 2QL, UK

BKort@bbn.com (Barry Kort) (11/02/90)

In article <1521825@merkur.gtc.de> rodney@merkur.gtc.de (Rodney Volz) 
writes:

> Imagine you had enough knowledge to build an equivalent of the human brain;
> would your respect for it last long, then?

We already have some ability to do that.  It's called having babies and raising
children.  Alas, there are indeed some parents who do not respect their 
children (and vice versa).


Barry Kort
Visiting Scientist
BBN Labs
Cambridge, MA

G.Joly@cs.ucl.ac.uk (11/06/90)

In <60517@bbn.BBN.COM> Barry Kort says
>We already have some ability to do that.  It's called having babies and raising
>children.  Alas, there are indeed some parents who do not respect their 
>children (and vice versa).

Yup; we now realise that DNA has all the answers. Information (like the
ability to create life in the spiral of DNA) is the gold of the future.
(Alchemist's gold?)

The human genome project has already begun. We have started to discover all
the 3x10**9 bits we need to know.