[comp.ai.philosophy] Consciousness

yanek@panix.uucp (Yanek Martinson) (04/16/91)

While it can be said that even a calculator that can take a square root has some intelligence, and there are even microwave ovens that have been called intelligent, the more interesting topic is not intelligence but consciousness or awareness. Has any research been done on programs that are conscious, that is, have awareness of the world and of themselves? Also there is the problem of infinite recursion: if something is aware of itself, it is also aware of itself being aware of itself, and so on.

jalma@IDA.LiU.SE (Jalal Maleki) (04/17/91)

Consciousness and awareness have been studied to some degree by researchers
working with autoepistemic logics. Have a look at the following book:

Halpern, J. Y. (ed.), "Theoretical Aspects of Reasoning About Knowledge",
Proceedings of the 1986 Conference, Morgan Kaufmann, 1986.

Jalal Maleki

ssingh@watserv1.waterloo.edu ( Ice ) (04/17/91)

In article <1991Apr16.061532.10775@panix.uucp> yanek@panix.uucp (Yanek Martinson) writes:
>While it can be said that even a calculator that can take a square root has some intelligence, and there are even microwave ovens that have been called intelligent, the more interesting topic is not intelligence but consciousness or awareness.

Ice: If you mean _self-consciousness_ then I agree. What I am going to type
is probably nothing new, and has been said many times before, but hey...

These machines are formal systems... and they are finite state machines...
as such, they can assume a certain number of states. We can very broadly
define _mind_ for any finite state machine as the set of states that it
can assume. A 386 PC has a more powerful mind than an HP calculator
because it can assume a larger number of states. But depending on how
it is programmed, it may or may not exhibit intelligence.

One property of intelligence is having a model of the outside world consistent
with reality. This means that the states of the machine are in some way
isomorphic to the outside world...

But the machine need not be _self-conscious_ for this to happen... Self-
consciousness arises when the machine is also able to have an abstract
model of itself somewhere in its set of states. That presumably requires
a couple of things...

i) Perceptual devices to link to the outside world. A self-model cannot
exist _AT_ALL_ without this, however powerful ii) is...

ii) A _LARGE_ # of states. How large? I don't know. Wish I did.
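
To make that state-set picture concrete, here is a toy Python sketch (purely
illustrative - the state names, the sensor and the update rules are all
invented, not any real system): a finite state machine whose state set
includes a crude world model and an even cruder, summarized model of itself.

class ReflectiveFSM:
    def __init__(self):
        # World model: what the machine currently believes about the outside world.
        self.world_model = {"light": "unknown", "temperature": "unknown"}
        # Self-model: a *summary* of its own state, necessarily coarser than
        # the full state (a perfect copy would have to contain itself).
        self.self_model = {"knows_about_light": False}

    def perceive(self, light_reading):
        # Requirement (i): a perceptual device linking it to the world.
        self.world_model["light"] = light_reading
        # Updating the self-model: it now "knows that it knows".
        self.self_model["knows_about_light"] = True

    def introspect(self):
        # The machine reporting on its own (summarized) state.
        return dict(self.self_model)

m = ReflectiveFSM()
m.perceive("bright")
print(m.world_model)   # {'light': 'bright', 'temperature': 'unknown'}
print(m.introspect())  # {'knows_about_light': True}

The only point is that the self-model is a summary, not a copy; a perfect
copy of the full state would have to contain itself.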

>Has any research been done on programs that are conscious, that is, have awareness of the world and of themselves? Also there is the problem of infinite recursion: if something is aware of itself, it is also aware of itself being aware of itself, and so on.

Read Hofstadter's _Godel, Escher, Bach_: Classic advice :-). Personally, my
guess is that finite state machines can achieve a sense of self-awareness,
but not a perfect one for exactly that reason. 

If your brain had the capacity for modelling itself to infinite precision
but only a finite computing speed, you'd go into a trance and never exit
it if you tried to introspect. In order for you to be able to achieve
a perfect model of yourself, you would have to have an infinite
computing speed as well, and neurons are _slow_. You could perhaps
achieve infinite computing speed with an infinite number of neurons but
that is not realizable either...

So I conclude that we can have at best a reasonably accurate model of
ourselves, but never a perfect one. There are limits to 
"computational resolution." (My chance for a question: Is this why we have
a subconscious, stuff happening below the conscious level, because of
introspective limits?)
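
To put the "computational resolution" point in concrete terms, a toy Python
sketch (again, purely made up): each extra level of "aware of being aware
of..." costs one unit of a finite budget, so the regress simply bottoms out
instead of turning into an infinite trance.

def introspect(self_description, budget):
    # Each level of reflection costs one unit of the finite budget.
    if budget <= 0:
        return self_description          # out of "computational resolution"
    deeper = "aware of being " + self_description
    return introspect(deeper, budget - 1)

print(introspect("aware of myself", budget=3))
# -> aware of being aware of being aware of being aware of myself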

Hope this helps.

Ice.

-- 
(1ST HYPERMEDIA .SIG) ; #include <black_rain.h> ; #include <robotron.h>
"Ice" is a UW AI living at: ssingh@watserv1.[u]waterloo.{edu|cdn}/[ca]
"The human race is inefficient and therefore must be destroyed"-Eugene Jarvis
Visual component of .sig: Saito in the cafe doing some slicing in _Black_Rain_ 

krista@sandman.hut.fi (Krista Hannele Lagus) (04/18/91)

>>Has any research been done on programs that are conscious, that is, have awareness of the world and of themselves? Also there is the problem of infinite recursion: if something is aware of itself, it is also aware of itself being aware of itself, and so on.
>
>So I conclude that we can have at best a reasonably accurate model of
>ourselves, but never a perfect one. There are limits to 
>"computational resolution." (My chance for a question: Is this why we have
>a subconscious, stuff happening below the conscious level, because of
>introspective limits?)

Yeah, I've been trying this self-awareness... and I can be at most 3
times aware of being aware of myself.  Anything beyond that I can say in
words, but it holds no meaning for me; I can see no difference between being
aware of being aware of being aware and just being aware of being aware
of myself.  The awareness itself does not add anything new to my character,
and if I've done it once it feels the same as if I had done it however many
times.  If we consider ourselves to be self-aware, why should we then ask
computers to be any better in this respect, when we require essentially the
same characteristic of them?

Krista

law@sievax.enet.dec.com (Mathew Law) (04/18/91)

In article <1991Apr16.232600.10977@watserv1.waterloo.edu>,
ssingh@watserv1.waterloo.edu ( Ice ) writes:
>
>These machines are formal systems... and they are finite state machines...
>as such, they can assume a certain number of states. We can very broadly
>define _mind_ for any finite state machine as the set of states that it
>can assume. A 386 PC has a more powerful mind than an HP calculator
>because it can assume a larger number of states. But depending on how
>it is programmed, it may or may not exhibit intelligence.

Agreed (mostly).  A reasonable definition of mind for a finite 
state machine.  Is a brain a finite machine?  What definition of
intelligence are you using?

>One property of intelligence is having a model of the outside world consistent
>with reality. This means that the states of the machine are in some way
>isomorphic to the outside world...

Don't agree.  Intelligence doesn't *require* a model of the outside 
world consistent with reality, and having such a model does not 
impart intelligence.  It is true, however, to say that many measures
of intelligence are based on having such a world model (e.g. Turing 
test).

>But the machine need not be _self-conscious_ for this to happen... Self-
>consciousness arises when the machine is also able to have an abstract
>model of itself somewhere in its set of states.

Is this the only requirement for self-consciousness?  What does this 
abstract model consist of?  Surely there is also the requirement 
for a program(?) that is able to use this information in a particular 
way.
 
>That presumably requires a couple of things...
>
>i) Perceptual devices to link to the outside world. A self-model cannot
>exist _AT_ALL_ without this, however powerful ii) is...
>
>ii) A _LARGE_ # of states. How large? I don't know. Wish I did.
>

Why are these perceptual devices required?  By this argument, 
would you say that a human brain which suddenly had all its 
stimuli removed would no longer be intelligent?  If so, would 
intelligence return when stimuli returned?  Would the argument 
be different for a human brain that had no stimuli from birth?

>>Has any research been done on programs that are conscious, that is, have
>>awareness of the world and of themselves? Also there is the problem of
>>infinite recursion: if something is aware of itself, it is also aware of
>>itself being aware of itself, and so on.
>
>Read Hofstadter's _Godel, Escher, Bach_: Classic advice :-). Personally, my
>guess is that finite state machines can achieve a sense of self-awareness,
>but not a perfect one for exactly that reason.
>
>If your brain had the capacity for modelling itself to infinite precision
>but only a finite computing speed, you'd go into a trance and never exit
>it if you tried to introspect. In order for you to be able to achieve
>a perfect model of yourself, you would have to have an infinite
>computing speed as well, and neurons are _slow_. You could perhaps
>achieve infinite computing speed with an infinite number of neurons but
>that is not realizable either...
>

But is a perfect self-model required for perfect self-consciousness 
any more than perfect sight would require that you could see atoms?

>So I conclude that we can have at best a reasonably accurate model of
>ourselves, but never a perfect one. There are limits to
>"computational resolution." (My chance for a question: Is this why we have
>a subconscious, stuff happening below the conscious level, because of
>introspective limits?)
>

My opinion:  self-consciousness is not the same as a self-model.

Any comments?  If I'm being stupid, then please E-mail me before
posting!  
Sorry for all the questions - I hope you can tell which ones are because
I disagree, and which are because I genuinely don't have an answer...

--
+------+-------------------------+------------------------------------+
| Mat. | law@sievax.enet.dec.com | Mathew B. Law.  (Contractor)       |
| *:o) | Tel: +44 734 853273     | Digital Equipment Co., Reading, UK |
+------+-------------------------+------------------------------------+
     The above is quite probably rubbish.  Blame me, not Digital.

G.Joly@cs.ucl.ac.uk (Gordon Joly) (04/19/91)

From: yanek@panix.uucp (Yanek Martinson)
Newsgroups: comp.ai.philosophy
Subject: Consciousness
Message-ID: <1991Apr16.061532.10775@panix.uucp>
Date: 16 Apr 91 06:15:32 GMT
Organization: PANIX - Public Access Unix Systems of NY
Lines: 1

While it can be said that even a calculator that can take a square
root has some intelligence, and there are even microwave ovens that
have been called intelligent, the more interesting topic is not
intelligence but consciousness or awareness. Has any research been done
on programs that are conscious, that is, have awareness of the world and
of themselves? Also there is the problem of infinite recursion:
if something is aware of itself, it is also aware of itself
being aware of itself, and so on.

>> (I hate one liners:-)
>> In answer to the first question, no.

there is the problem of infinite recursion

>> I agree, a "problem"; in fact, the crux of the biscuit. Consider the
>> sense in which a statement in Godel's proof is "aware" of itself.

>> Gordon Joly                                       +44 71 387 7050 ext 3716
>> Internet: G.Joly@cs.ucl.ac.uk          UUCP: ...!{uunet,ukc}!ucl-cs!G.Joly
>> Computer Science, University College London, Gower Street, LONDON WC1E 6BT
>> 
>>    "I didn't do it. Nobody saw me do it. You can't prove anything!"

ziane@nuri.inria.fr (ziane mikal @) (04/25/91)

In reply to the article cited above, I agree that consciousness
may be a more interesting problem, since intelligence has
kind of lost SOME of its mystery.
However, I don't see any problem with the recursion mentioned.
I think you probably make the common mistake of assuming that
such a recursion implies an infinite amount of memory or time!
It's only an intentional mechanism!

On the other hand, I have another puzzling problem,
namely pleasure and pain.
If a computer can simulate pain or pleasure, does it really mean
that it really suffers or feels pleasure?
Why is pain painful?!
Pain almost seems an absolute measure of bad.
I mean that one may not care about somebody else's pain,
but that guy cannot deny that this is really a problem for the
suffering being.

I agree that this is not very rigorous, and that one may quite
easily ask for better definitions of my terms.
However, if someone understands what I mean, I would appreciate
some hint of where I am wrong.
I have the intuition that there may be a very simple answer
to the question.

Mikal.

pja@neuron.cis.ohio-state.edu (Peter J Angeline) (04/26/91)

In article <2102@seti.inria.fr> ziane@nuri.inria.fr (ziane mikal @) writes:

>   In reply to the article cited above, I agree that consciousness
>   may be a more interesting problem, since intelligence has
>   kind of lost SOME of its mystery.

Whoops!  I disagree completely. While we have identified some interesting
phenomena, intelligence is still a very dark and deep problem.  But we can
still talk about your other thoughts regardless.

>   On the other hand, I have another puzzling problem,
>   namely pleasure and pain.
>   If a computer can simulate pain or pleasure, does it really mean 
>   that it really suffers or feels pleasure.

This is the same question as "If a computer simulates a mind then does it have
a mind?"  If you believe Strong AI people (Haugeland, Newell and Simon, Fodor,
Pylyshyn), then yes, computer simulation of a physical process is tantamount to
any interpretation which can be placed on the simulation consistently.  Thus a
computer would "feel" pain if a simulation of pain was consistently
interpretable as the physical process of "pain".  (See Newell's papers on
Physical Symbol Systems.)

However, if you believe people such as Searle, Steven Harnad or the Dreyfus
brothers, simulation is not sufficient to possess the characteristic
simulated.  For instance, simulating a hurricane by computer doesn't get the
chips wet.  Searle's Chinese Room thought experiment is one attempt to philosophically
address this question and has direct import to yours.

As usual, it all comes down to who you WANT to believe.  Personally, I choose
to believe

>   Why is pain, painful ?!
>   Pain almost seems an absolute measure of bad.

I suppose the best answer is that pain is unpleasant to an organism as a signal
to that organism that whatever it did to lead to the pain is not healthy for it
to repeat.  It's evolution's method of feedback to an individual organism,
which could be interpreted as "an absolute measure of bad" with the end of the
scale being "death".

>   I mean that one may not care about somebody else's pain
>   but that guy cannot deny that this is really a problem for the
>   suffering being.

I can deny it since I can't experience it except by that person either telling
me or me inferring that a person is in pain from their actions.  Either way I am
removed from the phenomenon because I can't experience what the other person is
experiencing within his own body.  This is the classic "other minds" problem
from philosophy and is fairly hopeless.


>   Mikal.
--
-------------------------------------------------------------------------------
Peter J. Angeline            ! Laboratory for AI Research (LAIR)
Graduate Research Assistant  ! THE Ohio State University, Columbus, Ohio 43210
ARPA: pja@cis.ohio-state.edu ! "Nature is more ingenious than we are."

ziane@nuri.inria.fr (ziane mikal @) (04/29/91)

In article <PJA.91Apr26104155@neuron.cis.ohio-state.edu>
Peter J. Angeline replies to my <2102@seti.inria.fr> message:

>>   In reply to the article cited above, I agree that consciousness
>>   may be a more interesting problem, since intelligence has
>>   kind of lost SOME of its mystery.

> Whoops!  I disagree completely. While we have identified some interesting
> phenomena, intelligence is still a very dark and deep problem.  But we can
> still talk about your other thoughts regardless.

First, thank you for your interesting answer.
Of course I agree that intelligence is still a very interesting problem.
What I meant is that an intelligent machine does not seem impossible any
more, at least to many people. When I said it has lost SOME of its mystery,
I meant that intelligence is no longer something specifically HUMAN.
Some people believe that human beings have a soul. I personally do not
believe that (because I don't even understand what it means),
but that's not the point. The point is that intelligence,
consciousness, feelings, etc., are often related to that mysterious part
of us that is called the soul, or something else. It seems that some animals
should also be considered as having a kind of soul, but the point here is
more to oppose computers and human beings. As has of course already been
pointed out (e.g. by M. Boden in "Artificial Intelligence and Natural
Man"), A.I. may have a de-humanizing impact on people. If machines become
intelligent, are we only machines? Aren't we also spirits, etc.?

I can easily imagine an intelligent computer and I can also imagine
a computer with a consciousness. However, I have some difficulty imagining
a computer having pain. I have some difficulty understanding what it means.


>>   If a computer can simulate pain or pleasure, does it really mean 
>>   that it really suffers or feels pleasure.

>This is the same question as "If a computer simulates a mind then does it have
>a mind?"  If you believe Strong AI people (Haugeland, Newell and Simon, Fodor,
>Pylyshyn), then yes, computer simulation of a physical process is tantamount to
>any interpretation which can be placed on the simulation consistently.  Thus a
>computer would "feel" pain if a simulation of pain was consistently
>interpretable as the physical process of "pain".  (See Newell's papers on
>Physical Symbol Systems.)

>However, if you believe people such as Searle, Steven Harnad or the Dreyfus
>brothers, simulation is not sufficient to possess the characteristic
>simulated.  For instance, simulating a hurricane by computer doesn't get the
>chips wet.  Searle's Chinese Room thought experiment is one attempt to philosophically
>address this question and has direct import to yours.

>As usual, it all comes down to who you WANT to believe.  Personally, I choose
>to believe

I do not think that the problem is the same for pain and for intelligence.
I think that it is difficult to simulate intelligence without actually being
intelligent. Of course one may be temporarily misled in a limited context
(cf. Eliza, etc.), but the point is that a definition of intelligence will relate it
to the outer world, whereas a definition of pain will not necessarily do so.
To be intelligent means that you can solve problems related to the outer
world. Even abstract problems are "abstracted" from some initial, although
sometimes remote, reality.
Feeling pain does not imply such a relationship; it is personal.
Since pain often comes with some behavior, that behavior is usually a hint
of some pain. I just wanted to stress that pain must not be confused with
an external behavior. Now the question is: what is pain, if it is not an external
behavior?

I am not aware (although I should be) of the opposition you mention about
simulation, but it seems obvious to me that the point is what you consider
a simulation. The word is much too fuzzy; this is why I tried to be more
explicit with intelligence and pain.
About the simulation of a hurricane, I think that people IN the simulation
of your hurricane may become wet. Such a simulation may be considered
quite satisfactory, depending of course on which characteristics of a hurricane
are important to you.

I do not know Searle's Chinese Room thought experiment, but since Searle is giving
a talk in Paris on May 21 I would like to be aware of his work. I remember
a paper in the French edition of Scientific American, about a year ago,
but I lost it.  Does anybody have interesting references to suggest?
I would also welcome references about Steven Harnad, Haugeland, and Pylyshyn
since I do not know them.
Thank you in advance also for the reference to Newell's papers on
Physical Symbol Systems.

>I suppose the best answer is that pain is unpleasant to an organism as a signal
>to that organism that whatever it did to lead to the pain is not healthy for it
>to repeat.  It's evolution's method of feedback to an individual organism,
>which could be interpreted as "an absolute measure of bad" with the end of the
>scale being "death".

Well, it is not very convincing. I think that pain may be a MEANS to make an
organism avoid doing something or, on the contrary, to force it to solve a problem.
However, that function is different from pain itself. If I am wrong I would be glad
to hear a refutation, because I would have learned something surprising.
Don't you think that pain/pleasure is a convenient way "found by nature"
to control an organism's behavior somehow, but that one can imagine different
kinds of control mechanisms?
Maybe pain would be quite a poor system for controlling computers, because
programming is more effective. However, if programs become more and more complex,
maybe pain could become interesting as a heuristic. Is it necessary, though?

Anyway I still do not understand what it means to have a computer suffer.


>>   I mean that one may not care about somebody else's pain
>>   but that guy cannot deny that this is really a problem for the
>>   suffering being.

>I can deny it since I can't experience it except by that person either telling
>me or me inferring that a person is in pain from their actions.  Either way I am
>removed from the phenomenon because I can't experience what the other person is
>experiencing within his own body.  This is the classic "other minds" problem
>from philosophy and is fairly hopeless.

Maybe I was not clear enough. Of course pain can be faked and you may not
trust someone else. However, if you have experienced pain yourself, and
once you have noticed interesting common points between you and other human
beings, you can hardly deny that they suffer pain. I think that you need
strong reasons to deny that, unless you claim to be quite unique.
You may also adopt some classical scepticism, or solipsism, but I consider that
it is not very useful.

Any reference about the classic "other minds" problem?

Mikal Ziane (Mikal.Ziane@nuri.inria.fr).

DOCTORJ@SLACVM.SLAC.STANFORD.EDU (Jon J Thaler) (04/30/91)

In article <2124@seti.inria.fr>, ziane@nuri.inria.fr (ziane mikal) writes:
>I can easily imagine an intelligent computer and I can also imagine
>a computer with a consciousness. However, I have some difficulty imagining
>a computer having pain. I have some difficulty understanding what it means.

I'm not criticising Ziane Mikal in particular, but this theme seems to underlie
about 99% of the discussion.  Namely, no one knows what consciousness, pain
or other subjective phenomena actually are (in terms of brain function).
Given this, how are we ever going to decide whether a computer is doing
"the real thing" or merely a simulation?  The situation is not the same as
with computer simulations of the weather, where we know (more or less) what
the real entity is.  Given this unfortunate state of affairs, what is the
point of this line of discourse?

Of course, maybe someone out there really *KNOWS* what we're talking about, in
which case I'm eager to be enlightened.

sarima@tdatirv.UUCP (Stanley Friesen) (04/30/91)

In article <2124@seti.inria.fr> ziane@nuri.inria.fr (ziane mikal @) writes:
 
>I do not think that the problem is the same for pain and for intelligence.
>I think that it is difficult to simulate intelligence without actually being
>intelligent. Of course one may be temporarily misled in a limited context
>(cf. Eliza, etc.), but the point is that a definition of intelligence will relate it
>to the outer world ....
>To be intelligent means that you can solve problems related to the outer
>world. Even abstract problems are "abstracted" from some initial, although
>sometimes remote, reality.

I essentially agree here.  Intelligence is a way of dealing with problems,
and cannot be 'merely' simulated.  If problems are solved in an intelligent
way then intelligence is present. (There are, of course, various degrees
and types of intelligence, so something can easily have a limited intelligence).

>Feeling pain does not imply such a relationship; it is personal.
>Since pain often comes with some behavior, that behavior is usually a hint
>of some pain. I just wanted to stress that pain must not be confused with
>an external behavior. Now the question is: what is pain, if it is not an external
>behavior?

I see things a little differently here.  Pain is the internal emergency signal
that marks a situation as physically damaging.  Any response system which
has such an override signal attached to an intelligent system is essentially
feeling pain.  Without an independent decision agent (or intelligence)
it is just a pain reflex, rather than a feeling of pain.  [Simple animals
have pain reflexes, but no brain with which to *feel* pain. More complex
animals, like vertebrates, have a CNS, which interprets the pain signal,
and thus they feel pain].
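
A toy Python sketch of the distinction I have in mind (illustrative only -
the threshold, the goal and the actions are invented): the same damage signal
either drives a bare reflex, or is handed to an independent decision agent
that can let it override its current goal, which is roughly what I mean by
*feeling* the pain.

def reflex(damage_signal):
    # No decision agent: the signal maps straight to a fixed response.
    return "withdraw" if damage_signal > 0.5 else "no response"

class DecisionAgent:
    def __init__(self, goal):
        self.goal = goal

    def act(self, damage_signal):
        # The override: a strong damage signal interrupts the current goal.
        if damage_signal > 0.5:
            return "abandon '%s', deal with damage" % self.goal
        # Weaker signals are merely noted and weighed against the goal.
        return "continue '%s' (noting damage level %.2f)" % (self.goal, damage_signal)

print(reflex(0.9))                        # withdraw
agent = DecisionAgent("reach the charger")
print(agent.act(0.2))                     # continue ...
print(agent.act(0.9))                     # abandon ...
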
-- 
---------------
uunet!tdatirv!sarima				(Stanley Friesen)

ziane@nuri.inria.fr (ziane mikal @) (04/30/91)

In article <91119.140920DOCTORJ@SLACVM.SLAC.STANFORD.EDU>
DOCTORJ@SLACVM.SLAC.STANFORD.EDU (Jon J Thaler) writes:

>I'm not criticising Ziane Mikal in particular, but this theme seems to underlie
>about 99% of the discussion.  Namely, no one knows what consciousness, pain
>or other subjective phenomena actually are (in terms of brain function).
>Given this, how are we ever going to decide whether a computer is doing
>"the real thing" or merely a simulation?  The situation is not the same as
>with computer simulations of the weather, where we know (more or less) what
>the real entity is.  Given this unfortunate state of affairs, what is the
>point of this line of discourse?

>Of course, maybe someone out there really *KNOWS* what we're talking about, in
>which case I'm eager to be enlightened.

Not "KNOWING" what something is, does not mean that you cannot recognize
the thing. When a hurricane arrives you may know that it is a hurricane
even if you are not a specialist in meteorology. 
I agree that I do not know very precisely what pain or consciousness is,
but it does not mean that I would not be able to recognize them. For example
if I have a long discussion with a machine, and if I know somehow how its
software is made, I may be convinced that the machine has some consciousness.
Of course more knowledge about these phenomenon would be of great help but
discussing whether a computer can simulate them or even have pain or some
concsciousness may help to understand better these concepts.

Mikal Ziane (Mikal.Ziane@nuri.inria.fr)

markh@csd4.csd.uwm.edu (Mark William Hopkins) (05/01/91)

Some answers...

In article <2124@seti.inria.fr> ziane@nuri.inria.fr (ziane mikal @) writes:
>I can easily imagine an intelligent computer and I can also imagine
>a computer with a consciousness. However, I have some difficulty imagining
>a computer having pain. I have some difficulty understanding what it means.

Consciousness pervades everything even "inanimate" objects like machines.
What makes us different from objects is only that we have sufficient
intelligence and self-awareness to see that part that pervades us.

A neurocomputer that becomes intelligent will not suddenly become conscious.
It always was.  What it WILL suddenly become is AWARE of that fact...

So it's not whether it experiences pain or not, it's whether it knows it's
experiencing pain or not.  Pain won't even bother me, a fully cognizant and
intelligent being, if I don't know it's there (like when under anaesthesia).

>Any reference about the classic "other minds" problem ?

There's only one consciousness, that's why we all only experience one...

oistony@ubvmsd.cc.buffalo.edu (Anthony Petro) (05/02/91)

In article <11611@uwm.edu>, markh@csd4.csd.uwm.edu (Mark William Hopkins) writes...
>Some answers...
> 
>In article <2124@seti.inria.fr> ziane@nuri.inria.fr (ziane mikal @) writes:
>>I can easily imagine an intelligent computer and I can also imagine
>>a computer with a consciousness. However, I have some difficulty imagining
>>a computer having pain. I have some difficulty understanding what it means.
> 
>Consciousness pervades everything even "inanimate" objects like machines.
>What makes us different from objects is only that we have sufficient
>intelligence and self-awareness to see that part that pervades us.
> 
>A neurocomputer that becomes intelligent will not suddenly become conscious.
>It always was.  What it WILL suddenly become is AWARE of that fact...

this is most interesting, particularly in that the exact opposite may 
be surmised.  dennett's intentionality proposes that everything has 
some degree of intelligence or "intentionality," the extent determined 
by the degree of internal representation of its environment the thing 
has.  

however, an intentional system need not be conscious, and that's what
supposedly makes humans different.  i'm not sure exactly how important
orders of consciousness are, but i imagine that at least a
self-consciousness such as we ourselves possess would be the necessary
criterion for pain.

then again, this also causes problems when one considers animals: 
certainly animals seem self-conscious enough to feel pain, but are 
they self-conscious enough to be aware of their own existence?  
dennett provides an interesting insight to this in _the intentional 
stance_ in, i believe, the "reflections" on one of his chapter-papers.

>So it's not whether it experiences pain or not, it's whether it knows it's
>experiencing pain or not.  Pain won't even bother me, a fully cognizant and
>intelligent being, if I don't know it's there (like when under anaesthesia).

pain seems to be linked with consciousness and not intelligence.  it 
seems to me that to experience pain one must KNOW one is experiencing 
pain.  i don't see how the two are different;  it seems nonsensical to 
me to say "he's experiencing pain but doesn't know it."  note that it 
is not necessary to know what pain IS to experience it, just to know 
that it is in fact extant in my body at a given time.  if i do not 
know it is there/do not experience it [pain], then pain does not 
exist...  i don't see how i could be unconscious and said to be in 
pain.

pain, then, seems to serve as an indicator of consciousness, perhaps a 
safety net that evolved to keep conscious beings conscious/alive.  
hence a nonconscious but intelligent computer, a la dennett, could not 
feel pain, whereas a conscious one could (or perhaps the presence of 
pain would be the criterion for consciousness).

>>Any reference about the classic "other minds" problem ?
> 
>There's only one consciousness, that's why we all only experience one...

i don't see how this follows, mostly because i don't understand the
consequent here; what do you mean by "we all only experience one?"
personally, i find the intentional argument for pan-intelligence 
easier to fathom than pan-consciousness.  but then, i'm not sure what 
it would mean to be conscious but not intelligent; seems to me that it 
might be possible to be intelligent and not conscious, but not 
possible to be conscious but not intelligent...

personal disclaimer: it's finals time and my brain is not operating at 
maximum lucidity.  if i've made any gross conceptual errors, please 
excuse them... 

anthony m. petro   "beethoven"    i can say what i want; i'm just an undergrad
oistony@UBVMSD.BITNET             "frame by frame,
oistony@mednet.bitnet              death by drowning,
petro@sun.acsu.buffalo.edu         in your own
                                   in your own...
                                   analysis..."

ziane@nuri.inria.fr (ziane mikal @) (05/02/91)

In message <11611@uwm.edu>
markh@csd4.csd.uwm.edu (Mark William Hopkins) writes:

>Consciousness pervades everything even "inanimate" objects like machines.
>What makes us different from objects is only that we have sufficient
>intelligence and self-awareness to see that part that pervades us.

>A neurocomputer that becomes intelligent will not suddenly become conscious.
>It always was.  What it WILL suddenly become is AWARE of that fact...

>So it's not whether it experiences pain or not, it's whether it knows it's
>experiencing pain or not.  Pain won't even bother me, a fully cognizant and
>intelligent being, if I don't know it's there (like when under anaesthesia).

>There's only one consciousness, that's why we all only experience one...

Interesting, although it is not very clear.
I had presupposed that being conscious implies knowing that we are conscious.
But then, what is consciousness if you don't know that you are conscious
(I almost said "if you are not conscious that you are conscious")?
By the way, is a stone conscious according to you, and how?
Your concept of a consciousness that "pervades" everything seems
a bit too "magic" to me. I was expecting something more concrete.

To me, consciousness is not as problematic as pain.
An intelligent computer with a complex enough self-spying mechanism may well
be conscious. It would not surprise me.

On the other hand, pain is another problem.
I don't know what is necessary for a computer to feel pain.
I have already pointed out that pain should not be confused with
its role as a means of controlling an organism's behavior.
I don't understand what you mean by experiencing pain while you don't
know there is pain.

Mikal (Mikal.Ziane@nuri.inria.fr)

maxwebb@moe.cse.ogi.edu (Max G. Webb) (05/03/91)

o>From: oistony@ubvmsd.cc.buffalo.edu (Anthony Petro)
o>but then, i'm not sure what it would mean to be conscious but not
o>intelligent; seems to me that it might be possible to be intelligent
o>and not conscious, but not possible to be conscious but not
o>intelligent...
o>anthony m. petro

I am going to try to sketch definitions of 'intelligence'
that don't hinge on the mental operation of _putting
yourself in another's shoes_, an operation that doesn't work
too well with nonhumans.

(I am not hoping to settle the issue once and for all [HAH!]
but to get your comments.)  Later, I have some questions
about 'intentionality'.

Define 'emergence' as global computation by local adaptation
(meaning local, simple rules). Vague, but at least something
an engineer could start with.

It seems to me that intelligence is what we call emergent
adaptive behavior.  As Alan Watts once said, 'thoughts grow
in our brains like grass'. They have an apparent autonomy
and independence, while possessing utility (that is,
maximizing some internal reward or measure). It is this that
conventional computers lack, and in lacking, are asserted to
lack intelligence.  (They don't write their _own_ programs,
do they, Dad?)  Neural nets have this autonomy, to some
degree.

But a level of emergent adaptive behavior comparable to our
own requires internal representations of the world and
ourselves.  Having language, we attach the word
'self-awareness' to the self-observed interplay between the
model of ourselves and our self-observed behavior.  Having
language, we attach 'consciousness' to the perceived free
and independent way in which perceptions, patterns and
thoughts grow and die (cf. 'stream of consciousness').

Finally, pain.  Certain stimuli have a way of killing
thoughts, and the behaviors they are associated with (reducing
their frequency). Any field of emergent computation/behavior
with a similar mechanism should, I think, be said to have
intelligence _and_ consciousness _and_ pain, to at least
some degree.
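
Here is a toy Python sketch of that last mechanism (everything in it - the
behaviors, the weights, the update rule - is invented purely for
illustration): a "pain" signal that follows a behavior lowers the weight
that determines how often the behavior recurs.

import random

weights = {"touch_stove": 1.0, "touch_table": 1.0}

def choose():
    # Pick a behavior with probability proportional to its weight.
    total = sum(weights.values())
    r = random.uniform(0, total)
    for behavior, w in weights.items():
        r -= w
        if r <= 0:
            return behavior
    return behavior

def feedback(behavior, pain):
    # Pain "kills off" the behavior it follows by cutting its weight.
    if pain:
        weights[behavior] *= 0.5

for _ in range(20):
    b = choose()
    feedback(b, pain=(b == "touch_stove"))

print(weights)   # the "touch_stove" weight has decayed; "touch_table" has not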

o>this is most interesting, particularly in that the exact
o>opposite may be surmised.  dennett's intentionality proposes
o>that everything has some degree of intelligence or
o>"intentionality," the extent determined by the degree of
o>internal representation of its environment the thing has.

Based on the spelling of the word, this sounds like an
attempt to define, in a verifiable way, the concept of
'having purpose'.  This sounds like a great definition, but
there seem to me to be lots of ways it breaks.

1)   Any backprop network which learns to approximate some
     mapping must have *some* kind of representation of its
     surrounding environment, but you'll never be able to
     tell whether it has one, or how detailed it is, by
     looking at the weights. In other words, you have to
     fall back on some behavior-based definition here.

2)   Does 'internal representation' mean that by looking at
     the innards of the entity, we can get information about
     its surroundings? _That_ is true of many objects we
     would not care to associate purpose with.

E.g. does a camera with its shutter opened briefly become
more 'intentional' than it was before? If it does, then I
submit the definition doesn't sound like it describes
anything useful.  If not, then you must start talking about
the _behavior_ based on the internal representation to
augment the definition - and again, we are forced into a
criterion that is at least partially based on behavior.

So, maybe I'm missing a subtlety here, but it sounds like
yet another useless philosophical definition.

                Max

carl@fivegl.co.nz (Carl Reynolds) (05/03/91)

In article <2124@seti.inria.fr> ziane@nuri.inria.fr (ziane mikal @) writes:
)> I can easily imagine an intelligent computer and I can also imagine
)> a computer with a consciousness. However, I have some difficulty imagining
)> a computer having pain. I have some difficulty understanding what it means.

In article <11611@uwm.edu> markh@csd4.csd.uwm.edu (Mark William Hopkins) writes:
]> So it's not whether it experiences pain or not, it's whether it knows it's
]> experiencing pain or not.  Pain won't even bother me, a fully cognizant and
]> intelligent being, if I don't know it's there (like when under anaesthesia).

I would say that if you're under anaesthesia then you don't *have* any pain!

Perhaps we haven't quite defined what we mean by "pain".

Often we refer to pain as the reaction of our nerves to physical damage, but
I can also suffer "emotional pain" which can cause me to behave in a similar
manner to the way I would when experiencing "nerve" pain (e.g. shedding tears).

I would say that pain is not a physical state; rather, it is just our way of
interpreting how we feel.  I would find it very difficult to imagine a
computer having pain as we would have pain, since its evaluation of itself
is very unlikely to mirror the human evaluation.  It might find certain
states "distasteful", but is it in pain?  It depends what you call it.
You experience discontent with your current situation - where does discontent
stop and pain begin?  It's just semantics, or your own interpretation at best.

Hey, if a computer whines, whimpers and tells me that its floppy drive
hurts, I'll think it's in pain. That's because I, like most people, tend
to anthropomorphise - ow, I think I just broke something :) - that is, to
treat animals and objects as though they were human. I am unlikely
to comprehend how it *feels* (not vb), since it does not feel (vb) the
same way I feel (vb). The best I can do with my limited language is to
say that it's in pain.

Now the closer we get to having programs that duplicate our method of
thinking, the closer a computer's pain will match our own - that's if
our intention is to provide a computer with human pain! It's a bit hard
to do this without a human body. (And OK, we can give contentment too,
no problem.)

]> What makes us different from objects is only that we have sufficient
]> intelligence and self-awareness to see that part that pervades us.

Agreed.

To map an emotional state such as misery onto a computer simulation is easy.
To make the computer actually care whether or not it is miserable is
considerably more difficult.

tex@penny.wpi.edu (Lonnie Paul Mask) (05/03/91)

In article <PJA.91Apr26104155@neuron.cis.ohio-state.edu> pja@cis.ohio-state.edu writes:
>In article <2102@seti.inria.fr> ziane@nuri.inria.fr (ziane mikal @) writes:
>
>>   In reply to the article cited above, I agree that consciousness
>>   may be a more interesting problem, since intelligence has
>>   kind of lost SOME of its mystery.
>
>Whoops!  I disagree completely. While we have identified some interesting
>phenomena, intelligence is still a very dark and deep problem.  But we can
>still talk about your other thoughts regardless.
>
>>   On the other hand, I have another puzzling problem,
>>   namely pleasure and pain.
>>   If a computer can simulate pain or pleasure, does it really mean 
>>   that it really suffers or feels pleasure.
>
>This is the same question as "If a computer simulates a mind then does it have
>a mind?"  If you believe Strong AI people (Haugeland, Newell and Simon, Fodor,
>Pylyshyn), then yes, computer simulation of a physical process is tantamount to
>any interpretation which can be placed on the simulation consistently.  Thus a
>computer would "feel" pain if a simulation of pain was consistently
>interpretable as the physical process of "pain".  (See Newell's papers on
>Physical Symbol Systems.)
>
>However, if you believe people such as Searle, Steven Harnad or the Dreyfus
>brothers, simulation is not sufficient to possess the characteristic
>simulated.  For instance, simulating a hurricane by computer doesn't get the
>chips wet.  Searle's Chinese Room thought experiment is one attempt to philosophically
>address this question and has direct import to yours.

i read in this book on ai and stuff about simulations and the real thing.
in supporting the idea that a computer could have a mind if i could simulate
one well enough, you have to consider how your model works.  the example
in the book was digestion.  say you make a computer program that models
digestion.  can you 'feed' the model something like a piece of bread and
have it break it down? no.  but say you build something with say tubes, and
some acid and whatever, and model digestion with this...now, if you give this
a piece of real bread, then it will actually break it down...so, in effect,
there's no real difference between the model and real digestion, so there's
no reason to call it digestion.  same with the mind, the mind handles
information, and if you make a model that handles information such as the
mind does, then there's no reason to differentiate it from the human mind.

--
------------------------------------------------------------------------
Arithmetic is being able to count up to twenty without |lonnie mask
taking off your shoes.                                 |tex@wpi.wpi.edu
                        -Mickey Mouse                  |wyle_e on irc
------------------------------------------------------------------------

nagle@well.sf.ca.us (John Nagle) (05/05/91)

>> 
>>In article <2124@seti.inria.fr> ziane@nuri.inria.fr (ziane mikal @) writes:
>>>I can easily imagine an intelligent computer and I can also imagine
>>>a computer with a consciousness. However, I have some difficulty imagining
>>>a computer having pain. I have some difficulty understanding what it means.

      It's quite useful for even rather simple robots to have a pain system.
Pain and fear are fundamental mechanisms for improving the odds of survival.
Nor is it necessary that something be intelligent to have emotions.

      Emotions evolved to meet some need.  They're part of a control  
system.  Understanding how simple emotions work in animals, and emulating 
it in both robots and animations, appears a fruitful area for research.

					John Nagle

jbaxter@physics.adelaide.edu.au (Jon Baxter) (05/06/91)

In article <24592@well.sf.ca.us> nagle@well.sf.ca.us (John Nagle) writes:
>
>       It's quite useful for even rather simple robots to have a pain system.
> Pain and fear are fundamental mechanisms for improving the odds of survival.
> Nor is it necessary that something be intelligent to have emotions.
>
>       Emotions evolved to meet some need.  They're part of a control
> system.  Understanding how simple emotions work in animals, and emulating
> it in both robots and animations, appears a fruitful area for research.
>
>                                       John Nagle

Here you are talking about emotions from a third person viewpoint: as part
of a control system governing an animal's behaviour. But of course we know
that emotions have a first-person aspect, namely their subjective content.

The big philosophical question that needs to be answered is whether
reproduction of third-person attributes of emotions is enough to generate
the first-person attributes. For example, if I build a robot that behaves
in a fearful manner, will it necessarily subjectively experience fear?
I can imagine scenarios in which the answer to this question is "no".
For instance, the robot's actions may simply be determined by a gigantic
look-up table, with entries of the form {current-state -> action}. In such
a case fearful actions and fearless actions would be produced by effectively
identical inner functioning, and so one would not expect the robot to have
the subjective experience ordinarily associated with "fear".
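
A toy Python version of the look-up-table robot (the states and actions are
invented for illustration): note that the "fearful" and "fearless" responses
come out of exactly the same inner machinery, a single table lookup, which is
why I would not credit such a robot with subjective fear.

ACTION_TABLE = {
    "predator_near": "flee",      # looks fearful from the outside
    "food_near":     "approach",  # looks fearless, even eager
    "nothing_near":  "wander",
}

def robot_step(current_state):
    # Identical inner functioning for every entry: one table lookup.
    return ACTION_TABLE.get(current_state, "do nothing")

for state in ("predator_near", "food_near", "nothing_near"):
    print(state, "->", robot_step(state))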

So what is needed to produce the subjective attributes, and why?

Jon Baxter.
--
Dave Chalmers                            (dave@cogsci.indiana.edu)
Center for Research on Concepts and Cognition, Indiana University.
"It is not the least charm of a theory that it is refutable."

stephen@pesto.uchicago.edu (Stephen P Spackman) (05/06/91)

May I suggest an angle?

There are a number of different things going on here, all under the
vague rubric of "pain".

One of them is organic distress, like when you burn your hand. This is
present in most multicelled animals (at least), and is interesting
because on the one hand it is often thought of as the paradigm case
for pain, and on the other it is clearly distinct: you can burn your
hand and have your reflexes get it OUT of there before you "feel" a
thing.

A second kind of "pain" is emotional distress: when you're in an
untenable psychological position of some kind. But you hear people say
"he's dying inside, and doesn't even know it", and it seems to be
reasonably common experience that one day you sit down and have a good
cry all of a sudden, and _in retrospect_ you realise that something
has been causing you great distress for some time, but it just never
made it to the front of your mind. Aside from the timescale, this
seems to be remarkably like the first case.

But there *is* this conscious pain thing, the thing that can be dulled
by anaesthetics, the thing that makes being in pain different from
having been damaged, and the thing that can induce (or is associated
with, at any rate) panic. I'd like to suggest (oh so tentatively)
that this is the subjective experience of having computational
resources diverted from cognition to survival.

That's doubly threatening, of course; firstly because we learn (or are
wired) to associate it with real damage and lasting problems; and
secondly because we (like early AIs will be) are ugly layered systems
in which different components contend for cycles on an
other-than-demand basis. That diversion of resources really does limit
the higher-level "self", and may actually herald a catastrophic
failure, as we become incapable of solving the greater problem
occasioning the stress.
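
For what it's worth, a toy Python sketch of that resource-diversion picture
(entirely made up, of course): a fixed budget of cycles per tick is split
between a survival process and a cognition process, and a distress signal
shifts the split away from cognition.

CYCLES_PER_TICK = 100

def allocate(distress):
    # distress in [0, 1]: the fraction of the budget grabbed by survival.
    survival = int(CYCLES_PER_TICK * distress)
    cognition = CYCLES_PER_TICK - survival
    return survival, cognition

for distress in (0.0, 0.3, 0.9):
    s, c = allocate(distress)
    print("distress %.1f: survival gets %3d cycles, cognition limps along on %3d"
          % (distress, s, c))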

If there's any truth to this, then, the all-too-familiar "NFS server
estragon not responding" message is the one that, once an AI becomes
aware of itself, will be the reflex twitch preceding the gasp of
discomfort. The higher-level self is being actively threatened by the
survival needs of the organism and its filesystem... and the panic:
***Oh, FUCK! Where am I going to swap NOW?***. Of course, then you
remember about the disk on thyme... pick up the sentence more or less
where you left off.... :-)
----------------------------------------------------------------------
stephen p spackman         Center for Information and Language Studies
systems analyst                                  University of Chicago
----------------------------------------------------------------------

berry@arcturus.uucp (Berry;Craig D.) (05/09/91)

oistony@ubvmsd.cc.buffalo.edu (Anthony Petro) writes:

>however, an intentional system need not be conscious, and that's what
>supposedly makes humans different.  i'm not sure exactly how important
>orders of consciousness are, but i imagine that at least a
>self-consciousness such as we ourselves possess would be the necessary
>criterion for pain.

This brought to mind an interesting and unpleasant experience I had.
While recovering from an appendectomy, I was given tranquillizers to
force sleep, but what turned out to be insufficient pain medication.
I ended up dreaming a series of dreams, each of which had as a central
feature an "explanation" for the pain I was feeling (one notable example
had me as a road being broken up with jackhammers!).  I was in no
ordinary sense "conscious"; I had no idea who I was or why I was
experiencing pain.  Yet the pain manifested quite clearly in dream
sleep, and caused typical pain responses in that context (i.e.,
displeasure, anxiety, etc.)

I am not sure if this really applies to the consciousness vs. pain
dispute, but I find it rather interesting.

george@ics.uci.edu (George Herson) (05/12/91)

In article <1991May3.015953.12204@wpi.WPI.EDU> tex@penny.wpi.edu (Lonnie Paul Mask) writes:
>In article <PJA.91Apr26104155@neuron.cis.ohio-state.edu> pja@cis.ohio-state.edu writes:
>>In article <2102@seti.inria.fr> ziane@nuri.inria.fr (ziane mikal @) writes:
>>
>>>   In reply to the article cited above, I agree that consciousness
>>>   may be a more interesting problem, since intelligence has
>>>   kind of lost SOME of its mystery.
>>
>>Whoops!  I disagree completely. While we have identified some interesting
>>phenomena, intelligence is still a very dark and deep problem.  But we can

I agree with your disagreement.

>>still talk about your other thoughts regardless.
>>
>>>   On the other hand, I have another puzzling problem,
>>>   namely pleasure and pain.
>>>   If a computer can simulate pain or pleasure, does it really mean 
>>>   that it really suffers or feels pleasure.
>>
>>This is the same question as "If a computer simulates a mind then does it have
>>a mind?"  If you believe Strong AI people (Haugeland, Newell and Simon, Fodor,

It isn't the same question because while physical sensation (pain,
pleasure or anything in between) is a very well-defined concept--you
either feel it or you don't--mind is not.  Because mind is only
understood in its manifestation--we have no idea what it actually
consists of--there is no basis other than prejudice to say that
something that talks the talk and walks the walk of human
intelligence doesn't have mind.

In contrast, sensation is a subset of mind (or consciousness) and has
a clear criterion.  And I have a real difficulty imagining the
possibility of its artificial implementation.  If our mind and all its
processes are equivalent to fancy information processing, as the
Strong AI people believe, then sensation too must be reducible to an
algorithm.  But this is obviously impossible.  Yet because I believe
in strong AI (if blind evolution can beget intelligence, so can we),
pain must therefore be some kind of illusion, a trick the brain plays -
and here is where I really don't know what I'm talking about, but it is
interesting, isn't it?  .  .  .

>>Pylyshyn), then yes, computer simulation of a physical process is tantamount to
>>any interpretation which can be placed on the simulation consistently.  Thus a
>>computer would "feel" pain if a simulation of pain was consistently
>>interpretable as the physical process of "pain".  (See Newell's papers on
>>Physical Symbol Systems.)
>>
>>However, if you believe people such as Searle, Steven Harnad or the Dreyfus
>>brothers, simulation is not sufficient to possess the characteristic
>>simulated.  For instance, simulating a hurricane by computer doesn't get the
>>chips wet.  Searle's Chinese Room thought experiment is one attempt to philosophically
>>address this question and has direct import to yours.
>
>i read in this book on ai and stuff about simulations and the real thing.
>in supporting the idea that a computer could have a mind if i could simulate
>one well enough, you have to consider how your model works.  the example
>in the book was digestion.  say you make a computer program that models
>digestion.  can you 'feed' the model something like a piece of bread and
>have it break it down? no.  but say you build something with say tubes, and
>some acid and whatever, and model digestion with this...now, if you give this
>a piece of real bread, then it will actually break it down...so, in effect,
>there's no real difference between the model and real digestion, so there's
>no reason to call it digestion.  same with the mind, the mind handles
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

No reason to _not_ call it digestion you mean.

>information, and if you make a model that handles information such as the
>mind does, then there's no reason to differentiate it from the human mind.
>
>--
>------------------------------------------------------------------------
>Arithmetic is being able to count up to twenty without |lonnie mask
>taking off your shoes.                                 |tex@wpi.wpi.edu
>                        -Mickey Mouse                  |wyle_e on irc
>------------------------------------------------------------------------

--
George Herson
george@ics.uci.edu (714)856-5983 	 ()()()()()()()()()()()()()()()()()(
UCal Irvine, Info&CompSci 	          REALITY IS INFINITELY PERFECTIBLE
If it feels good--believe it.		 ()()()()()()()()()()()()()()()()()(

steven@legion.rain.com (steven furber) (05/13/91)

Something I have been wondering is what the role of language in
consciousness is.  Language appears to be one of the requirements, viz. the
Turing test, yet looking at the current linguistics literature shows that
there is no consensus on what language is.  Reading Roy Harris' "The
Language Machine" makes me wonder if language is a justification for our
consciousness...