[mod.ai] Consciousness

rjf@seismo.CSS.GOV@ukc.UUCP (12/10/86)

In <960671.861204.KFL@MX.LCS.MIT.EDU> KFL%MX.LCS.MIT.EDU@MC.LCS.MIT.EDU
("Keith F. Lynch") writes:
>    From: mcvax!ukc!rjf@seismo.css.gov  (R.J.Faichney)
>
>    ... to ascribe consciousness to something man-made, no matter how perfect
>    its performance, will always require an effort of will...
>
>  The net is an excellent medium for Turing tests...
>  Let me play the Turing game in reverse for a moment, and ask if you
>would bet a lot of money that nobody would regard a computer as
>conscious if it were to have written this message?
>								...Keith

I would certainly bet that *most* people would not regard a computer as
conscious if it had written your message, or even mine. If someone
had lived for several years with a supposed-person who turned out to be
a robot, they would be severely shocked when they discovered that fact,
and would *not* say 'Well, you certainly had me fooled. I guess you robots
must be conscious after all.' 

I explained in an earlier posting why I believe the naive reaction to be
important. The problem is not just about what would deserve the attribution
of consciousness, but about what we feel about making that attribution.
And such feelings go much deeper than mere prejudice. I think they go as
deep as love and sex, and are equally valid and valuable. I often turn
machines on, but they don't do the same for me - they're not good enough,
because they're not folks. And never will be.

(A promise for all you hard AIers - no more on this, from me, at least.
Well - no more in mod.ai, anyway.)

Robin Faichney	  ("My employers don't know anything about this.")

UUCP:  ...mcvax!ukc!rjf             Post: RJ Faichney,
                                          Computing Laboratory,
JANET: rjf@uk.ac.ukc                      The University,
                                          Canterbury,
Phone: 0227 66822 Ext 7681                Kent.
                                          CT2 7NF

KFL@AI.AI.MIT.EDU.UUCP (12/21/86)

    From: mcvax!ukc!rjf@seismo.CSS.GOV

    If someone had lived for several years with a supposed-person who turned
    out to be a robot, they would be severely shocked when they discovered
    that fact, and would *not* say 'Well, you certainly had me fooled. I guess
    you robots must be conscious after all.'

  That is what *I* would say.  What WOULD be sufficient evidence for
consciousness?  If only self experience is sufficient, does that mean
you don't think the rest of us are conscious?
  What if YOU turned out to be a robot, much to your own surprise?
Would you then doubt your own consciousness?  Or would you then say
"well, maybe robots ARE conscious, and humans AREN'T"?

    The problem is not just about what would deserve the attribution of
    consciousness, but about what we feel about making that attribution.

  Huh?  Does reality depend on feelings?

    And such feelings go much deeper than mere prejudice. I think they go as
    deep as love and sex, and are equally valid and valuable. I often turn
    machines on, but they don't do the same for me - they're not good enough,
    because they're not folks. And never will be.

  What about aliens from another planet?  They might give ample
evidence that they are intelligent (books, starships, computers,
robots, network discussion groups, etc) but might appear quite
physically repulsive to a human being.  Would you believe them
to be conscious?  Why or why not?
								...Keith

JMC@SAIL.STANFORD.EDU.UUCP (02/22/87)

	This discussion of consciousness considers AI as a branch of
computer science rather than as a branch of biology or philosophy.
Therefore, it concerns why it is necessary to provide AI programs
with something like human consciousness in order that they should
behave intelligently in certain situations important for their
utility.  Of course, human consciousness presumably has accidental
features that there would be no reason to imitate and other features
that are perhaps necessary consequences of its having evolved that 
aren't necessary in programs designed from scratch.  However, since
we don't yet understand AI very well, we shouldn't jump to conclusions
about what features of consciousness are unnecessary in order to
have the intellectual capabilities humans have and that we want our
programs to have.

	Consciousness has many aspects; here are some.

	1. We think about our bodies as physical objects to which
the same physical laws apply as apply to other physical objects.
This permits us to predict the behavior of our bodies in certain
situations, e.g. what might break them, and also permits us to
predict the behavior of other physical objects, e.g. we expect
them to have similar inertia.  AI systems should apply physics
to their own bodies to the extent that they have them.  Whether
they will need to use the analogy may depend on what knowledge
we choose to build in and what we will expect them to learn from
experience.

	2. We can observe in a general way what we have been thinking
about and draw conclusions.  For example, I have been thinking
about what to say about consciousness in this forum, and at present
it seems to be going rather well, so I'll continue composing
my comment rather than think about some specific aspect of
consciousness.  I am, however, concerned that when I finish this
list I may have left out important aspects of consciousness that
we shall want in our programs.  This kind of general observation
of the mental situation is important for making intellectual
plans, i.e. deciding what to think about (a toy sketch of such
self-monitoring appears after this list).  Very intelligent computer
programs will also need to examine what they have been thinking
about and reason about this information in order to decide whether
their intellectual goals are achievable.  Unfortunately, AI isn't
ready for this yet, because we must solve some conceptual problems
first.	

	3. We compare ourselves intellectually with other people.
The concepts we use to think about our own minds are mainly learned
from other people.  As with information about our bodies, we infer
from what we observe about ourselves to the mental qualities of
other people, and we also learn about ourselves from what we
learn about others.  In so far as programs are made similar to
people or other programs, they may also have to learn from interaction.

	4. We have goals about our own mental functioning.  We would
like to be smarter, nicer and more content.  It seems to me that
programs should also have such meta-goals, but I don't see that
we need to make them the same as people's.  Consider that many
people have the goal of being more rational, e.g. less driven
by impulses.  When we find ourselves with circular preferences,
e.g. preferring A to B, B to C and C to A, we chide ourselves and
try to change.  A computer program might well discover that its
heuristics give rise to circular preferences and try to modify
them in service of its grand goals (a sketch of such a check
appears after this list).  However, while people start out less
than fully rational, because our heritage provides direct
connections between our disparate drives and the actions that
achieve the goals they generate, there seems to be no reason to
imitate all these features in computer programs.  Thus our
programs should be able to compare the desirability of future
scenarios more readily than people do.

	5. Besides our direct observations of our own mental
states, we have a lot of general information about them.  We
can predict whether problems will be easy or difficult for us
and whether hypothetical events will be pleasing or not.
Programs will require similar capabilities.
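
	A minimal sketch of the kind of self-observation point 2
describes: a solver keeps a trace of its recent reasoning steps and
a meta-level check asks whether progress is trending upward before
deciding what to think about next.  The class name, the scoring
scheme, and the trace window are illustrative assumptions, not
anything specified above.

    from collections import deque

    class IntrospectiveSolver:
        """Toy solver that keeps a trace of its recent reasoning steps."""
        def __init__(self, window=5):
            self.trace = deque(maxlen=window)  # most recent (step, progress)

        def record(self, step, progress):
            # Log one reasoning step with a crude self-estimate of progress.
            self.trace.append((step, progress))

        def going_well(self):
            # Meta-level observation: is recent progress trending upward?
            scores = [p for _, p in self.trace]
            return len(scores) >= 2 and scores[-1] > scores[0]

    solver = IntrospectiveSolver()
    for i, step in enumerate(["state the goal", "try an approach", "simplify"]):
        solver.record(step, progress=i / 2.0)
    # Decide what to think about next, as in the example in point 2.
    print("continue" if solver.going_well() else "switch topics")  # continue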
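
	Likewise for point 4, a minimal sketch of how a program might
notice circular preferences: treat its pairwise preferences as a
directed graph and search for a cycle.  The representation and the
function name are illustrative assumptions.

    def find_preference_cycle(prefers):
        """prefers maps each option to the set of options it beats.
        Returns one preference cycle as a list, or None."""
        WHITE, GRAY, BLACK = 0, 1, 2  # unvisited / in progress / finished
        color = {x: WHITE for x in prefers}
        path = []

        def dfs(x):
            color[x] = GRAY
            path.append(x)
            for y in prefers.get(x, ()):
                if color.get(y, WHITE) == GRAY:  # back edge: a cycle
                    return path[path.index(y):] + [y]
                if color.get(y, WHITE) == WHITE:
                    cycle = dfs(y)
                    if cycle:
                        return cycle
            path.pop()
            color[x] = BLACK
            return None

        for x in list(prefers):
            if color[x] == WHITE:
                cycle = dfs(x)
                if cycle:
                    return cycle
        return None

    # The circular preference from the text: A over B, B over C, C over A.
    print(find_preference_cycle({'A': {'B'}, 'B': {'C'}, 'C': {'A'}}))
    # -> ['A', 'B', 'C', 'A']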

	Finally, it seems to me that the discussion of consciousness
in this digest has been too much an outgrowth of the ordinary
traditional philosophical discussions of the subject.  It hasn't
sufficiently been influenced by Dennett's "design stance".  I'm
sure that more aspects of human consciousness than I have been
able to list will require analogs in robotic systems.  We should
also be alert to provide forms of self-observation and reasoning
about the program's own mental state that go beyond those evolution
has given us.

eugene@AMES-PIONEER.ARPA (Eugene Miya N.) (03/05/87)

Do not confuse consciousness with memory.  Consciousness is not
the dualistic phenomenon which your "speculation" (your word) tends
to imply.  Consider that you did not mention the subconscious
(explicitly), but you did mention a dual unconscious.

Your comments on memory can also be refined by the cognitive
literature, such as the distinction between recall, recognition,
and the two other types of memory tests I am forgetting.  You
should also make a distinction between forgetting and interference
(this is good).  My suggestion is that you visit a nearby college
or university and get some literature on cognition (of which I am
NOT a proponent).

From the Rock of Ages Home for Retired Hackers:

--eugene miya
  NASA Ames Research Center
  eugene@ames-aurora.ARPA
  "You trust the `reply' command with all those different mailers out there?"
  "Send mail, avoid follow-ups.  If enough, I'll summarize."
  {hplabs,hao,ihnp4,decwrl,allegra,tektronix,menlo70}!ames!aurora!eugene