[comp.ai] Intelligence / Consciousness Test for Machines

mician@usfvax2.EDU (Rudy Mician) (10/06/88)

I have a question that I know has been addressed in the past (and undoubtedly
continues to be addressed):  

When can a machine be considered a conscious entity?  

For instance, if a massive neural net were to start from a stochastic state
and learn to interact with its environment in the same way that people do
(interact, not think), how could one tell that such a machine thinks or exists
(in the same sense as Descartes's "COGITO ERGO SUM"/"DUBITO ERGO SUM"
argument)?  That is, how could one tell whether or not an "I" exists for the
machine?

Furthermore, would such a machine have to be "creative"?  And if so, how would
we measure the machine's creativity?

I suspect that the Turing Test is no longer an adequate means of judging
whether or not a machine is intelligent. 


If anyone has any ideas, comments, or insights into the above questions or any
questions that might be raised by them, please don't hesitate to reply.

Thanks for any help,

     Rudy


-- 

Rudy Mician     mician@usfvax2.usf.edu
Usenet:		...!{ihnp4, cbatt}!codas!usfvax2!mician

bwk@mitre-bedford.ARPA (Barry W. Kort) (10/06/88)

In article <1141@usfvax2.EDU> mician@usfvax2.usf.edu
(Rudy Mician) asks:

>When can a machine be considered a conscious entity?  

Consciousness is not a binary phenomenon.  There are degrees of
consciousness.  So the transition from non-conscious to conscious
is a fuzzy, gradual transition.

A normal person who is asleep is usually regarded as unconscious,
as is a person in a coma.  An alert Dalmatian may be considered
conscious.

It might be more instructive to catalog the stages that lead to
higher levels of consciousness.  I like to start with sentience,
which I define as the ability of a system to sense its environment
and to construct an internal map, model, or representation of that
environment.  Awareness may then be defined as the ability of a
sentient system to monitor an evolving state of affairs.

Self-awareness may, in turn, be defined as the capacity of a sentient
system to monitor itself.

As an aware being expands its powers of observation, it achieves
progressively higher degrees of consciousness.
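
To make the layering concrete, here is a toy sketch in C.  (Every name
and data structure below is my own invention for illustration; it is a
caricature of the definitions above, not a claim about real systems.)

    #include <stdio.h>

    /* Toy model of the hierarchy: sentience -> awareness ->
       self-awareness.  CELLS is an arbitrary "environment size". */

    #define CELLS 4

    struct agent {
        int model[CELLS];  /* internal map of the environment     */
        int last[CELLS];   /* previous map, for change-monitoring */
        int self;          /* the agent's record about itself     */
    };

    /* Sentience: sense the environment and build an internal model. */
    void sense(struct agent *a, const int *world)
    {
        int i;
        for (i = 0; i < CELLS; i++) {
            a->last[i] = a->model[i];
            a->model[i] = world[i];
        }
    }

    /* Awareness: monitor an evolving state of affairs. */
    int changes(const struct agent *a)
    {
        int i, n = 0;
        for (i = 0; i < CELLS; i++)
            if (a->model[i] != a->last[i])
                n++;
        return n;
    }

    /* Self-awareness: the same monitoring turned on the agent itself. */
    void reflect(struct agent *a)
    {
        a->self = changes(a);  /* the agent records a fact about itself */
    }

    int main(void)
    {
        int before[CELLS] = { 0, 1, 0, 1 };
        int after[CELLS]  = { 1, 1, 0, 0 };
        struct agent a = { { 0 }, { 0 }, 0 };

        sense(&a, before);
        sense(&a, after);
        reflect(&a);
        printf("the agent noticed %d changes in its world\n", a.self);
        return 0;
    }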

Julian Jaynes has suggested that the bicameral mind gives rise to
human consciousness.  By linking two semi-autonomous hemispheres
through the corpus callosum, it is possible for one hemisphere
to act as observer and coach for the other.  In other words, 
consciousness requires a feedback loop.

Group consciousness arises when independent individuals engage in
mutual mirroring and monitoring.  From Narcissus to Lewis Carroll,
the looking glass has served as the metaphor for consciousness raising.

--Barry Kort

lee@uhccux.uhcc.hawaii.edu (Greg Lee) (10/06/88)

From article <1141@usfvax2.EDU>, by mician@usfvax2.EDU (Rudy Mician):
" ...
" When can a machine be considered a conscious entity?  

Always.  It's a matter of respect and empathy on your part.  All
the machines I use are conscious.

Or never, maybe, if you take 'conscious' seriously enough to
entertain the possibility that you yourself are not conscious
except sporadically.  Whatever one may think of his overall thesis,
Julian Jaynes (_The Origin of Consciousness in the Breakdown of the
Bicameral Mind_) is very persuasive when he argues that consciousness is
not required for the use of human language or everyday human activities.

		Greg, lee@uhccux.uhcc.hawaii.edu

cdc@cseg.uucp (C. David Covington) (10/06/88)

In article <1141@usfvax2.EDU>, mician@usfvax2.EDU (Rudy Mician) writes:
> 
> I have a question that I know has been addressed in the past (and undoubtedly
> continues to be addressed):  
> 
> When can a machine be considered a conscious entity?  
> 
 . . .
>
> I suspect that the Turing Test is no longer an adequate means of judging
> whether or not a machine is intelligent. 
> 

     Regarding intelligent machines, to the naive it's totally magic, to the
wizard it's clever programming and a masterful illusion at best.  To ascribe
consciousness to a machine is a personal matter.  If I cannot tell the 
difference between a 'conscious' human and a skillful emulation of the same,
then I am perfectly justified in *modeling* the machine as human.  It's not
so much a question of what *is* as a question of what *appears* to be.

     The same machine might be rightfully deemed conscious by one but not
by another.  I must expose my world view as predominantly Christian at this
point.  My belief in a Supreme Being places my view of man above all other
animals and therefore above any emulation of man by machine.  I say this not
so much to convert the masses to my point of view but to clarify that there
are people that think this way and this allows no place for conscious
machines.  

     So to readdress the original question, the Turing test is certainly
still valid, given my understanding that it is a matter of how accurately
you can mimic human behavior.  Between the lines you are making the
assumption that man and machine are the same in essence.  To this I object
by faith.  The question cannot be properly addressed without first dealing
with world views on man.

                                                David Covington
                                                Assistant Professor
                                                Electrical Engineering
                                                University of Arkansas
                                                (501)575-6583

jackson@esosun.UUCP (Jerry Jackson) (10/07/88)

In article <40680@linus.UUCP> bwk@mitre-bedford.ARPA (Barry W. Kort) writes:

   In article <1141@usfvax2.EDU> mician@usfvax2.usf.edu
   (Rudy Mician) asks:

> > When can a machine be considered a conscious entity?

> A normal person who is asleep is usually regarded as unconscious,
> as is a person in a coma.  An alert Dalmatian may be considered
> conscious.


A person who is in a coma is unconscious because he is incapable of
experiencing the outside world.  Consciousness is a *subjective*
phenomenon.  It is truly not even possible to determine if your
neighbor is conscious.  If a person felt no pain and experienced no
colors, sounds, thoughts, emotions, or tactile sensations, he could be
considered unconscious.  Note that we would be unable to determine
this.  He could behave in exactly the same way while being completely
inert/dead inside.  Machines such as feedback-controlled battleship
guns and thermostats respond to their environments, but I would hardly
call them conscious.  It is hard to imagine what one would have to do
to make a computer conscious, but it does seem that it would involve
more than adding a few rules.
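
To underline how little "responding to the environment" requires, the
entire control law of a thermostat fits in a few lines of C (a crude
sketch of my own; the setpoint and readings are made up):

    #include <stdio.h>

    /* A thermostat "responds to its environment" with one comparison
       per reading.  Nothing here experiences anything. */

    #define SETPOINT 20.0  /* degrees C, arbitrary */

    int main(void)
    {
        double readings[] = { 18.5, 19.0, 20.5, 21.0, 19.5 };
        int i, heater;

        for (i = 0; i < 5; i++) {
            heater = readings[i] < SETPOINT;  /* the entire "mind" */
            printf("temp %.1f -> heater %s\n",
                   readings[i], heater ? "on" : "off");
        }
        return 0;
    }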


--Jerry Jackson

dmocsny@uceng.UC.EDU (daniel mocsny) (10/07/88)

In article <828@cseg.uucp>, cdc@cseg.uucp (C. David Covington) writes:
> I must expose my world view as predominantly Christian at this
> point.  My belief in a Supreme Being places my view of man above all other
> animals and therefore above any emulation of man by machine.  I say this not
> so much to convert the masses to my point of view but to clarify that there
> are people that think this way and this allows no place for conscious
> machines.  

On the contrary, some people who think this way believe that conscious
machines are essential for literal fulfillment of Biblical prophecy. I
quote from Revelation chapter 13, verses 11, 13, and 15, NASB:

``And I saw another beast coming up out of the earth; and he had two
horns like a lamb, and he spoke as a dragon.''

``And he performs great signs, so that he even makes fire come down out
of heaven to the earth in the presence of men.''

``And there was given to him to give breath to the image of the beast,
that the image of the beast might even speak and cause as many as do
not worship the image of the beast to be killed.''

(Unfortunately, the Apostle John was not kind enough to tell us whether
the image of the beast was of the logic machine or connectionist
paradigms.) I too claim to espouse a Christian world view, but I do
not know what to make of John's Revelation. Neither can I find any
clear description in the Bible of the limits of human capability.
On the contrary, in Genesis 11, verses 5-8, we read:

``And the Lord came down to see the city and the tower which the
sons of men had built.''
``And the Lord said, `Behold, they are one people, and they all
have the same language. And this is what they began to do, and now
nothing which they purpose to do will be impossible for them. Come,
let Us go down and there confuse their language, that they may not
understand one another's speech.' ''
``So the Lord scattered them abroad from there over the face of the
whole earth; and they stopped building the city.''

(Had the sons of men already invented computers by that time, no
doubt they could have confused their own languages without divine
intervention.)

One might conclude from these snippets that human capability has
no real limits, and that God takes a rather dim view of us reaching
too far. How far is ``too far,'' is, however, not clear. I see no
justification for proscribing any particular avenues of investigation,
especially as many of us insist on continuing to develop physical
and mental illnesses, fall into poverty, commit acts of violence,
and generally be unhappy.

We are still light years away from having any real idea of what is
going on between our ears. However, every year we hear about yet
another link between a new behavior pattern and some physico-chemical
or genetic factor. How much can we ultimately explain and emulate?
Will belief in the Soul become an article of faith only, with no
explanatory role to play? Will Dualism become the modern-day
horse, a luxury to keep around, stripped of its former utility?

Dan Mocsny

bbw842@leah.Albany.Edu (Barry B Werger) (10/07/88)

How do we know when a machine is conscious?
(The (anti-)abortionists ask similar questions, I am sure.)

This is probably an unanswerable question.  Perhaps when the system's
programmer is the observer of the Turing test, a better guess can be made.

To some extent, machines have been programmed to be 'conscious' for a long
time.  Error reporting is a machine, conscious of its failure to perform
properly or its inability to perform a request, telling its operator about
its limitations.

In a limited way a microprocessor is conscious of its environment.  It
monitors the state of the buses connected to it.  It reacts in certain ways
to stimulation from its surroundings (i.e., interrupt lines, clock pulses).
Does this count?
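
For what it's worth, the kind of monitoring I have in mind is no deeper
than the following loop (a made-up C sketch with invented status bits;
real processors do roughly this in hardware when they sample interrupt
lines):

    #include <stdio.h>

    /* Invented status bits: each one flags a "stimulus" on the bus. */
    #define IRQ_TIMER  0x01
    #define IRQ_SERIAL 0x02

    /* Stand-in for reading a hardware status register. */
    static unsigned poll_status(int tick)
    {
        static const unsigned bus[] =
            { 0, IRQ_TIMER, IRQ_SERIAL, IRQ_TIMER | IRQ_SERIAL };
        return bus[tick % 4];
    }

    int main(void)
    {
        int tick;

        for (tick = 0; tick < 4; tick++) {
            unsigned status = poll_status(tick);  /* "sense the bus" */
            if (status & IRQ_TIMER)
                printf("tick %d: servicing timer interrupt\n", tick);
            if (status & IRQ_SERIAL)
                printf("tick %d: servicing serial interrupt\n", tick);
        }
        return 0;
    }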

wardeng@rpics (Greg Warden) (10/07/88)

When can a machine be considered conscious?

Of course, this is really asking: what is consciousness?  This may
have to do with an ability to extract semantic meaning from symbols
(or stimuli, really) that do not have any inherent meaning already.

You can teach a machine to manipulate English all you want, but when does
it start to KNOW what the characters MEAN?  Now this is at least the
traditional attack on strong AI (see John Searle, _Minds, Brains, and
Programs_).  I am not convinced that this is a valid argument.  First, if
we are to consider ourselves conscious, we assume that we have some cosmic
understanding of our stimuli.  Do we?  I am not an expert on the theory
of language, but how do we get information out of a word?  It seems to me
that we refer the word to a referent--a sort of level of abstraction.
Maybe if the machine had data about what words referred to (and about the
referents themselves), and it knew what the relationship between the words
and referents was, it would be conscious.
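
To put my hand-waving in more concrete terms, the "data about referents"
I mean is something like the toy C table below (entirely my own
invention; I am not claiming this amounts to semantics, only that it
separates words from the things they refer to):

    #include <stdio.h>
    #include <string.h>

    /* A word and the (invented) internal token it refers to. */
    struct entry { const char *word; const char *referent; };

    static const struct entry lexicon[] = {
        { "dog", "animal#1" },
        { "cat", "animal#2" },
        { "rug", "object#1" },
    };

    /* One stored relation -- note it holds between referents,
       not between the words themselves. */
    static const char *on_top_of[2] = { "animal#2", "object#1" };

    static const char *lookup(const char *word)
    {
        size_t i;
        for (i = 0; i < sizeof lexicon / sizeof lexicon[0]; i++)
            if (strcmp(lexicon[i].word, word) == 0)
                return lexicon[i].referent;
        return "?";
    }

    int main(void)
    {
        printf("\"cat\" refers to %s\n", lookup("cat"));
        printf("%s is on top of %s\n", on_top_of[0], on_top_of[1]);
        return 0;
    }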

greg warden

pollock@usfvax2.EDU (Wayne Pollock) (10/07/88)

Consciousness?!?  In a machine?!?  Gee, I was of the opinion that some of
the humans using the computer are not conscious!  But if your theory is
correct, then let's take an example, oh, let's say... autos (cars).  On my
way over here I was almost run off the road BY WHO?  The car or the
driver?!?  Who do I sue?  The car or the driver?  Whose insurance am I
covered under?  The car's or the driver's?  You see, your question isn't as
simple as it seems.  There are a lot of legal problems when (or if) you
consider a machine conscious.  So the next time I get in trouble I'll blame
it on IBM, you see, because... THE MACHINE MADE ME DO IT!!!!!!!!!!!!!

Let it be known that the above views are from the unconscious mind of
MITCHELL POLLOCK and that my brother WAYNE POLLOCK (whom I am visiting) had
nothing to do with this (in fact, he doesn't even know about this).  Should
anyone want more comments from me, you may reach me through the GEnie
Network.  My mailing address is M.POLLOCK

rolandi@gollum.UUCP (mail) (10/08/88)

Adding to Barry Kort's......

>Consciousness is not a binary phenomenon.  There are degrees of
>consciousness.  So the transition from non-conscious to conscious
>is a fuzzy, gradual transition.

When a person is awake and responds in a predictable manner, he is said 
to be conscious.

>Awareness may then be defined as the ability of a
>sentient system to monitor an evolving state of affairs.

When a person is known to know some given thing, he is said to be aware.

>Self-awareness may, in turn, be defined as the capacity of a sentient
>system to monitor itself.

When a person can label his own behavior in ways that are consistent with
the labels of those who observe him, he is said to be self-aware.


Walter Rolandi
rolandi@ncrcae.Columbia.NCR.COM
NCR Advanced Systems Development, Columbia, SC

bbw842@leah.Albany.Edu (Barry B Werger) (10/08/88)

In article <1145@usfvax2.EDU>, pollock@usfvax2.EDU (Wayne Pollock) writes:
> then let's take an example, oh, let's say... autos (cars).  On my way
> over here I was almost run off the road BY WHO?  The car or the driver?!?
> Who do I sue?  The car or the driver?  Whose insurance am I covered
> under?  The car's or the driver's?  You see, your question isn't as
> simple as it seems.  There are a lot of legal problems when (or if) you
> consider a machine conscious.  So the next time I get in trouble I'll
> blame it on IBM, you see, because... THE MACHINE MADE ME DO IT!!!!!!!!!!!!!

But think about NEW cars for a moment (think about AUDIs!).
Look at the controversy: was unintended acceleration the driver's fault,
or the car's?  I suppose it was the manufacturer's in this case.  But if
the car's computer fails, is it the driver's fault?  With ABS,
computer-controlled four-wheel steering, computer-controlled engines, and
active suspension, if a car loses control can we really say it was the
driver's fault any more?

I think I know what you are saying, but your argument does not quite work.
Does ABS represent consciousness?  Computerized engine management?  The
computer is 'aware' of its environment, and acts accordingly.  In some
limited sense, I'd call this consciousness.

Soon I'll learn to type,
Barry Werger
  

shani@TAURUS.BITNET (10/09/88)

In article <1141@usfvax2.EDU>, mician@usfvax2.BITNET writes:
>
> When can a machine be considered a conscious entity?
>
Oh no!  Not that again! ;-)

Okay, I'll make it short and somewhat cynical this time.  The answer is: NEVER!

You see, Rudy, the only reason for you to assume that there is such a thing
as a conscious entity at all is that otherwise YOU are not a conscious
entity, and that probably sounds like nonsense to you.  (Actually, in
saying that, I already take a dangerous step forward by assuming that YOU
ARE... the only thing I can know is that I AM a conscious entity...)

I hope that helps...

O.S.

bwk@mitre-bedford.ARPA (Barry W. Kort) (10/11/88)

In article <263@balder.esosun.UUCP> jackson@esosun.UUCP (Jerry Jackson) writes:
>Consciousness is a *subjective* phenomenon. 
>It is truly not even possible to determine if your neighbor is conscious.

I think the best way to determine if someone is conscious is to carry
on a conversation with them.  (The interaction need not be verbal.
One can use visual or tactile channels, or non-verbal auditory channels.)
There are interesting anecdotes about autistic children who were coaxed
into normal modes of communication by starting with primitive stimulus-
response modes.  The Helen Keller story also dramatizes such a breakthrough.

One of the frontiers is the creation of a common language between
humans and other intelligent mammals such as chimps and dolphins.

--Barry Kort


kmont@hpindda.HP.COM (Kevin Montgomery) (10/12/88)

In article <1141@usfvax2.EDU>, mician@usfvax2.BITNET writes:
> When can a machine be considered a conscious entity?

May I suggest that this discussion be moved to talk.philosophy?

While it has many implications for AI (as do most of the more
philosophical arguments which take place in comp.ai), it has a
broader scope and should have a broader reader base.  A definition of
consciousness carries a number of implications: the rights and
responsibilities of anything deemed conscious, and whether mere
consciousness is a sufficient criterion for personhood.  (If
non-biological entities can be deemed conscious, if consciousness is
sufficient for personhood, and if constitutional rights are bestowed
upon persons "born" in a country, then these entities have all the
rights of the constitution in this country.)

The implication of this example would be that if machines (or animals,
or any non-human or non-biological entities) have rights, then one may
be arrested for murder if one should halt the "life process" of such an
entity, either by killing an animal or by removing power from a machine.

Moreover, the question of when humans are conscious (and thus are
arguably persons) has implications in the areas of abortion, euthanasia,
human rights, and other areas.

For these reasons, I suggest we drop over to talk.philosophy (VERY
low traffic over there, anyway), resolve these questions (if possible,
but doubtful), and post a response to the interested newsgroups (comp.ai,
talk.abortion, etc).

Rather than attacking all questions at once and getting quite confused 
in the process, I suggest that we start with the question of whether
consciousness is a necessary and sufficient criterion for personhood.
In other words, in order to have rights (such as the right to life),
does something have to have consciousness?  Perhaps we should start
with a definition of consciousness and personhood, and revise these
as we see fit (would someone with a reputable dictionary handy post 
one there?).

Note that there are implications for things such as anencephalic
babies (born with only the medulla; no higher brain areas exist),
commissurotomy (split-brain) patients, and even whether people we
consider to be knocked unconscious (or merely sleeping!) have personhood
(and therefore rights).

				kevin

smann@watdcsu.waterloo.edu (Shannon Mann - I.S.er) (10/12/88)

In article <1141@usfvax2.EDU> mician@usfvax2.usf.edu.UUCP (Rudy Mician) writes:

>When can a machine be considered a conscious entity?  
>
>For instance, if a massive neural net were to start from a stochastic state
>and learn to interact with its environment in the same way that people do
>(interact, not think), how could one tell that such a machine thinks or exists
>(in the same sense as Descartes's "COGITO ERGO SUM"/"DUBITO ERGO SUM"
>argument)?  That is, how could one tell whether or not an "I" exists for the
>machine?

Only the _machine_ can adequately answer the question.  If the _machine_
asks 'What/Who am I?', then by the definition of self-awareness (any
reasonable one I can think of) the machine is self-aware.  If the _machine_
can sense and react to the environment, it is (on some primitive level)
aware.  Science has already provided us with machines that are far more
_aware_ than the common amoeba.  Until the scientific community refines its
ideas of what awareness and self-awareness entail, the above question
cannot be answered with any accuracy.

Is it possible?  Certainly!  Consciousness occurs within biological
systems, so why not in mechanical systems of sufficient complexity?  If we
consider the vastness of space and time, and that an event which has
occurred once can occur again, it is reasonable to conclude that
_self-awareness_ will occur out there again and that, more than likely, it
will be in a different form than ours.  Knowing this, is it so difficult
to accept the possibility of creating the same?

>Furthermore, would such a machine have to be "creative"?  And if so, how would
>we measure the machine's creativity?

This question could/should be asked about humans.  When is a human
creative?  When we invent something, is it not the re-application of some
known idea?  Or an accidental discovery?  In my mind, creativity is the
ability to synthesize _something_ from a group of _something_different_.
My definition does not include the concept of self-direction, and so
should be modified.  Regardless, it does touch upon the basic idea that
_to_create_ means to take _what_is_ and make _something_new_.  By this
definition, _life_ is creative :-)

>I suspect that the Turing Test is no longer an adequate means of judging
>whether or not a machine is intelligent. 

Here we go upon a different tack.  Intelligence is quite different from
self-awareness.  I do not want to define intelligence, as it is a term
used and misused in so many ways that coherent dialogue about the subject
is of highly suspect worth.  My definition certainly would not clear up
any ambiguity, but would probably start a flame war of criticism.
Self-awareness is exactly that: to be aware of oneself, separate from the
environment you exist in.  Intelligence...  well, you go figure.  However,
there is a difference.

>If anyone has any ideas, comments, or insights into the above questions or any
>questions that might be raised by them, please don't hesitate to reply.

Well, you asked...  I know about much of the research that has been done
on the topic of self-learning systems.  The idea is that, if a machine can
learn like humans, then it must be like humans.  However, humans do not
learn in the simplified manner that these systems employ.  Humans use a
system where they learn how a particular system or process works, and can
then re-apply that heuristic (am I using this term correctly?) under
different circumstances.  Has the heuristic approach been attempted in
machine-learning systems?  I don't believe so, and would appreciate any
response.

>Rudy Mician     mician@usfvax2.usf.edu
>Usenet:		...!{ihnp4, cbatt}!codas!usfvax2!mician

        -=-
-=- Shannon Mann -=- smann@watdcsu.UWaterloo.ca
        -=-

P.S.  Please do not respond with any egocentric views about what it is to
be human, etc.  I see humanity as different from the rest of the animal
kingdom, but in no way superior.  Having the power to damage our planet
the way we do does not mean we are superior.  Possessing and using that
power only shows our foolishness.

dharvey@wsccs.UUCP (David Harvey) (10/13/88)

In article <874@taurus.BITNET>, shani@TAURUS.BITNET writes:
> In article <1141@usfvax2.EDU>, mician@usfvax2.BITNET writes:
> >
> > When can a machine be considered a conscious entity?
> >
> Oh no!  Not that again! ;-)
> 
> Okay, I'll make it short and somewhat cynical this time.  The answer is: NEVER!
> 
> You see, Rudy, the only reason for you to assume that there is such a thing
> as a conscious entity at all is that otherwise YOU are not a conscious
> entity, and that probably sounds like nonsense to you.  (Actually, in
> saying that, I already take a dangerous step forward by assuming that YOU
> ARE... the only thing I can know is that I AM a conscious entity...)
> 
> I hope that helps...
> 
> O.S.

If you claim that you are a conscious entity, but that I am not (your
view of the world), and that I am a conscious entity but that you are not
(my view of the world), then I can only assume that you are talking
about self awareness.  But is this what determines whether something is
a conscious entity or not?  If I am not mistaken, your view is that for
anything to be a conscious entity, it must have self awareness, and only
it can determine whether it is a conscious entity.  Please correct me if
I misinterpreted you.  But then why in the world am I writing this
article in response?  After all, I have no guarantee that you are a
conscious entity or not.  But for some reason, I have this persistent
but unverifiable belief that you are a conscious entity.  Otherwise,
why would I write?  In other words, we have a sticky problem that may
or may not have a solution.  Yes, Bertrand Russell is someone who had
a neat idea with the proposal of a third indeterminate state.  In other
words, I prefer to consider this type of question the mystery it should
be categorized as.

dharvey@wsccs

I am responsible for Nobody
and Nobody is responsible for me.
The only thing you can know for sure,
is that you can't know anything for sure.

bwk@mitre-bedford.ARPA (Barry W. Kort) (10/13/88)

                            The Turtling Test

                                Barry Kort

                  (With apologies to Douglas Hofstadter)


       Achilles: Good morning Mr. T!

       Tortoise: Good day Achilles.  What a wonderful day for
                 touring the computer museum.

       Achilles: Yes, it's quite amazing to realize how far our
                 computer technology has come since the days of Von
                 Neumann and Turing.

       Tortoise: It's interesting that you mention Alan Turing, for
                 I've been doing some biographical research on him.
                 He is a most interesting and enigmatic character.

       Achilles: Biographical research?  That's a switch.  Usually
                 people like to talk about his Turing Test, in
                 which a human judge tries to distinguish which of
                 two individuals is the human and which is the
                 computer, based on their answers to questions
                 posed by the judge over a teletype link.  To tell
                 you the truth, I'm getting a little tired of
                 hearing people talk about it so much.

       Tortoise: You have a fine memory, my friend, but I'm afraid
                 you'll be disappointed when I tell you that the
                 Turing Test does come up in my work.

       Achilles: In that case, don't tell me.

       Tortoise: Fair enough.  Perhaps you would be interested to
                 know what Alan Turing would have done next if he
                 hadn't died so tragically in his prime.

       Achilles: That's an interesting idea, but of course it's
                 impossible to say.

       Tortoise: If you mean we'll never know for sure, I would
                 certainly agree.  But I have just come up with a
                 way to answer the question anyway.

       Achilles: Really?

       Tortoise: Really.  You see, I have just constructed a model
                 of Alan Turing's brain, based on a careful
                 examination of everything he read, saw, did, or
                 wrote about during his tragic career.

       Achilles: Everything?

       Tortoise: Well, not quite everything -- just the things I
                 know about from the archives and from his notes
                 and effects.  That's why it's just a model and not
                 an exact duplicate of his brain.  It would be a
                 perfect model if I could discover everything he
                 ever saw, learned, or discovered.

       Achilles: Amazing!

       Tortoise: Since Turing had a very logical mind, I merely
                 start with his accumulated knowledge and reason
                 logically to what he would have investigated next.
                 Interestingly, this leads to a possible hypothesis
                 explaining why Turing committed suicide.

       Achilles: Fantastic!  Let's hear your theory.

       Tortoise: A logical next step after devising the Turing Test
                 would be to give the formal definition of a Turing
                 Machine to computer `A' (which, since it's a
                 computer, happens to be a Turing Machine itself)
                 and ask it to decide if another system (call it
                 machine `B') is a Turing Machine.

       Achilles: I don't get it.  What is machine `A' supposed to
                 do to decide the question?

       Tortoise: Why it merely devises a test which only a Turing
                 Machine could pass, such as a computation that a
                 lesser beast would choke on.  Then it administers
                 the Test to machine `B' to see how it handles the
                 challenge.

       Achilles: Are you sure that a Turing Machine knows how to
                 devise such a test in the first place?

       Tortoise: That's a good question.  I suppose it depends on
                 how the definition of a Turing Machine is stated.
                 Clearly, a good definition would be one which
                 states or implies a practical way to decide if an
                 arbitrary hunk of matter possesses the property of
                 being a Turing Machine.  In this case, it's safe
                 to assume that the problem was well-posed, meaning
                 that the definition was sufficiently complete.

       Achilles: So what happened next?

       Tortoise: You mean what does my model of Turing's brain
                 suggest as the next logical step?

       Achilles: Of course, Mr. T.  I quite forgot what level we
                 were operating on.

       Tortoise: Next, Machine `A' would be asked if Machine `A'
                 itself fit the definition of a Turing Machine!

       Achilles: Wow!  You mean you can ask a machine to examine
                 its own makeup?

       Tortoise: Why not?  In fact many modern computers have
                 built-in self diagnostic systems.  Why can't a
                 computer devise a diagnostic program to see what
                 kind of computer it is?  As long as it's given the
                 definition of a Turing Machine, it can administer
                 the test to itself and see if it passes.

       Achilles: Holy Holism!  Computers can become self-aware of
                 what they are?!

       Tortoise: That would seem to be the case.

       Achilles: What happens next?

       Tortoise: You tell me.

       Achilles: The Turing Machine tries the Turing Test on a
                 human.

       Tortoise: Very good.  And what is the outcome?

       Achilles: The human passes?

       Tortoise: Right!

       Achilles: So Alan Turing concludes that he's nothing more
                 than a Turing Machine, which makes him so
                 depressed he eventually commits suicide.

       Tortoise: Maybe.

       Achilles: What else could there be?

       Tortoise: Let's go back to your last conclusion.  You said,
                 "Turing concludes that he's nothing more than a
                 Turing Machine."

       Achilles: I don't follow your point.

       Tortoise: Suppose Turing wants to prove conclusively that he
                 was something more than "just a Turing Machine."

       Achilles: I see.  He had a Turing Machine in him, but he
                 wanted to know what else he was that was more than
                 just a machine.

       Tortoise: Right.  So he searched for some way to discover
                 how he differed from a machine in an important
                 way.

       Achilles: And he couldn't discover any way?

       Tortoise: Not necessarily.  He may have known of several
                 ways.  For example, he could have tried to fall in
                 love.

       Achilles: Why, falling in love is the easiest thing in the
                 world.

       Tortoise: Not if you try to do it.  Then it's impossible!

       Achilles: I see your point.

       Tortoise: In any event, there is no evidence that Turing
                 ever fell in love, even though he must have known
                 it was possible.  Maybe he didn't know that one
                 shouldn't try so hard.

       Achilles: So he committed suicide in despair?

       Tortoise: Maybe.

       Achilles: What else could there be?

       Tortoise: The last possibility that comes to mind is that
                 Turing suspected there was something he was
                 overlooking.

       Achilles: And what is that?

       Tortoise: Could a Turing Machine discover the properties of
                 a Turing Machine without being told?

       Achilles: Gee, I don't know.  But it could discover the
                 properties of another machine that it could do
                 experiments on.

       Tortoise: Would it ever think to do such experiments on
                 itself?

       Achilles: I don't know.  Does it even know what the word
                 "itself" points to?

       Tortoise: Who would have given it the idea of "self"?

       Achilles: I don't know.  It reminds me of Narcissus
                 discovering his reflection in a pool of water and
                 falling in love with himself.

       Tortoise: Well, I haven't finished my research yet, but I
                 suspect that a Turing Machine, without outside
                 assistance, could not discover the complete
                 definition of itself, nor would it think to ask
                 itself the question, "Am I a Turing Machine?" if
                 it were simply given the definition of one as a
                 mathematical abstraction.

       Achilles: In other words, if Alan Turing did ask himself the
                 question, "Am I (Alan Turing) a Turing Machine?"
                 the very act of posing the question proves he
                 isn't one!

       Tortoise: That's my conjecture.

       Achilles: So he committed suicide to prove he wasn't one,
                 because he didn't realize that he already had all
                 the evidence he needed to prove that he was
                 intellectually more complex than a mere Turing
                 Machine.

       Tortoise: Perhaps.

       Achilles: Well, I would be most interested to discover the
                 final answer when you complete your research on
                 this most interesting question.

       Tortoise: My friend, if we live long enough, we're bound to
                 find the answer.

       Achilles: Good day Mr. T!

       Tortoise: Good day Achilles.

mkent@dewey.soe.berkeley.edu (Marty Kent) (10/14/88)

In article <1141@usfvax2.EDU>, mician@usfvax2.BITNET writes:
> When can a machine be considered a conscious entity?

I think an important point here is exactly the idea that a machine (or
other entity) is generally -considered- to be conscious or not. In other
words, this judgement shouldn't be expected to reflect some deep -truth-
about the entity involved (as if it REALLY is or isn't conscious). It's
more a matter of the -usefulness- of the judgement: what does it buy you
to consider an entity conscious...

So a machine can be considered (by -you-) conscious any time
1) you yourself find it helpful to think this way, and
2) you're not aware of anything that violates this judgement.

If you really want to consider entities conscious, you can come up with a
workable definition of consciousness that'll include most anything (or, at
least,  exclude almost nothing). If you're really resistant to the idea,
you can keep pushing up the requirements until nothing and noone passes
your test. 

Chief Dan George said "The human beings [his own tribe, of course :-)]
think -everything- is alive: earth, grass, trees, stones. The white man
thinks everything is dead. If things keep trying to act alive, the white
man will rub them out."


Marty Kent  	Sixth Sense Research and Development
		415/642 0288	415/548 9129
		MKent@dewey.soe.berkeley.edu
		{uwvax, decvax, ihnp4}!ucbvax!mkent%dewey.soe.berkeley.edu
Kent's heuristic: Look for it first where you'd most like to find it.

gbn474@leah.Albany.Edu (Gregory Newby) (10/18/88)

(* sorry about typos/unreadability:  my /terminfo/regent100 file
 is rapidly approaching maximum entropy)
In article <3430002@hpindda.HP.COM>, kmont@hpindda.HP.COM (Kevin Montgomery)
writes:
> In article <1141@usfvax2.EDU>, mician@usfvax2.BITNET writes:
> > When can a machine be considered a conscious entity?
> 
> May I suggest that this discussion be moved to talk.philosophy?
> 
> While it has many implications to AI (as do most of the more
> philosophical arguments which take place in comp.ai), it has a
> broader scope and should have a broader reader base.  

I would like to see this discussion carried through on comp.ai.
It seems to me that these issues are often not considered by scientists 
working in ai, but should be.  And, it may be more useful to
take an "operational" approach in comp.ai, rathar than a philosophical
or metaphysical approach in talk.philosophy.

This topic has centered about the definition of consciousness, or
the testing of consciousness.

Turing (_Mind_, 1950) said:  "The only way to know that a machine is
thinking is to be that machine and feel oneself thinking." (paraphrase)

A better way of thinking about consciousness may be to consider
_self_-consciousness.  That is, is the entity in question capable
of considering its own self?

Traditional approaches to defining "intelligent behaviour" are
PERFORMANCE based.  The Turing test asks a machine to *simulate* a human.
  (As an aside: how could a machine, which has none of the experience
  of a human, be expected to act as one?  Unless someone were to
  somehow 'hard-code' all of a human's experience into some computer
  system--but who would call that intelligence?)
Hofstadter (_Goedel, Escher, Bach_, p. 24) gives a list of functions as
criteria for intelligent behaviour which many of today's smart expert
systems can perform, but they certainly aren't intelligent!

If a machine is to be considered as "intelligent," or "conscious,"
no test will suffice.  It will be forced to make an argument on its
own behalf.  

This argument must begin, "I am intelligent"

  (or, "I am conscious" --means the same thing, here)

The self concept has not, to my knowledge, been treated in the AI
literature.  (My thesis, "A self-concept based approach to artificial
intelligence, with a case study of the Galileo(tm) computer system,"
SUNY Albany, dealt with it, but I'm a social scientist.)

As Mead (see, for instance, _Social Psychology_) suggests, the
difference between lower animals and man is twofold:

1)  the self concept:  man may consider the self as an object, separate
from other objects and in relation to the environment.

2)  the generalized other:  man is able to consider the self as
seen by other selves.

The first one's relatively easy.  The second must be learned through
social interaction.
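
Mead's first criterion, at least, is easy to caricature in code: a
system whose model of the world contains an entry denoting the system
itself.  (A toy C sketch of my own; it makes no attempt whatsoever at
the generalized other.)

    #include <stdio.h>
    #include <string.h>

    /* A world model that lists objects -- including the modeler. */
    struct object { const char *name; double x, y; };

    int main(void)
    {
        struct object model[] = {
            { "tree",  1.0, 4.0 },
            { "stone", 3.0, 2.0 },
            { "me",    0.0, 0.0 },  /* the self as one object among others */
        };
        size_t i, n = sizeof model / sizeof model[0];

        for (i = 0; i < n; i++)
            if (strcmp(model[i].name, "me") == 0)
                printf("I am at (%.1f, %.1f), one object among %u\n",
                       model[i].x, model[i].y, (unsigned)n);
        return 0;
    }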


So (if anyone's still reading):
What kind of definition of intelligence are we talking about here?
I would bet that for any performance criterion you can give me, if I
gave you a machine that could meet it, the machine would not be considered
intelligent without also exhibiting a self-concept.

'Nuff said.

--newbs
  (
   gbnewby@rodan.acs.syr.edu
   gbn474@leah.albany.edu
  )

smann@watdcsu.waterloo.edu (Shannon Mann - I.S.er) (10/19/88)

In article <734@wsccs.UUCP> dharvey@wsccs.UUCP (David Harvey) writes:
>If you claim that you are a conscious entity, but that I am not (your
>view of the world), and that I am a conscious entity but that you are not
>(my view of the world), then I can only assume that you are talking
>about self awareness.  But is this what determines whether something is
>a conscious entity or not?  If I am not mistaken, your view is that for
>anything to be a conscious entity, it must have self awareness, and only
>it can determine whether it is a conscious entity.  Please correct me if
>I misinterpreted you.  But then why in the world am I writing this
>article in response?  After all, I have no guarantee that you are a
>conscious entity or not.  But for some reason, I have this persistent
>but unverifiable belief that you are a conscious entity.  Otherwise,
>why would I write?

Yes, but consider that the _other_ conscious entity may only be another
part of yourself (in the sense of multiple personalities.)  You may be
unaware that each separate entity is actually the same entity.

>                    In other words, we have a sticky problem that may
>or may not have a solution.  Yes, Bertrand Russell is someone who had
>a neat idea with the proposal of a third indeterminate state.  In other
>words, I prefer to consider this type of question the mystery it should
>be categorized as.
>
>dharvey@wsccs

        -=-
-=- Shannon Mann -=- smann@watdcsu.UWaterloo.ca
        -=-

'I have no brain, and I must think...' - An Omynous
'If I don't think, AM I' - Another Omynous

P.S.  I am unaware of Russell's postulate re: a third indeterminate state.
Could you give me some references?  Thanks in advance.