[net.ai] awareness

Robert.Frederking%CMU-CS-CAD@sri-unix.UUCP (10/19/83)

whether human life
should have a special value, beyond its information handling abilities, for
instance for euthanasia and abortion questions. (I certainly don't want to
argue about abortion; personally, I think it should be legal, but not treated
as a trivial issue.)

        At this point, my version of several definitions is in order.  This
is because several terms have been confused, due probably to the
metaphysical nature of the problem.  What I call "awareness" is *not*
"self-reference": the ability of some information processing systems (including
people) to discuss and otherwise deal with representations of themselves.
It is also *not* what has been called here "consciousness": the property of
being able to process information in a sophisticated fashion (note that
chemical and physical reactions process information as well).  "Awareness"
is the internal experience which Michael Condict was talking about, and
which a large number of people believe is a real thing.  I have been
told that this definition is "epiphenomenal", in that awareness is not the
information processing itself, but is outside the phenomena observed.

        Also, I believe that I understand both points of view; I can argue
either side of the issue.  However, for me to argue that the experience of
"awareness" consists solely of a combination of information processing
capabilities misses the "dualist" point entirely, and would require me to
deny that I "feel" the experience I do.  Many people in science deny that
this experience has any reality separate from the external evidence of
information processing capabilities.  I suspect that one motivation for this
is that, as Paul Torek seems to be saying, this greatly simplifies one's
metaphysics.

        Without trying to prove the "dualist" point of view, let me give an
example of why this view seems, to me, more plausible than the
"physicalist" view.  It is a variation of something Joseph Weizenbaum
suggested.  People are clearly aware, at least they claim to be.  Rocks are
clearly not aware (in the standard Western view).  The problem with saying
that computers will ever be aware in the same way that people are is that
they are merely re-arranged rocks.  A rock sitting in the sun is warm, but
is not aware of its warmth, even though that information is being
communicated to, for instance, the rock it is sitting on.  A robot next to
the rock is also warm, and, due to a skillful re-arrangement of materials,
not only carries that information in its kinetic energy, but even has a
temperature "sensor", and a data structure representing its body
temperature.  But it is no more aware (in the experiential sense) of what is
going on than the rock is, since we, by merely using a different level of
abstraction in thinking about it, can see that the data structure is just a
set of states in some semiconductors inside it.  The human being sitting
next to the robot not only senses the temperature and records it somehow (in
the same sense as the robot does), but experiences it internally, and enjoys
it (I would anyway).  This experiencing is totally undetectable to physical
investigation, even when we (eventually) are able to analyze the data
structures in the brain.
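The robot in the example above can be sketched in a few lines of code (a hypothetical illustration, not part of the original discussion); the names are invented. The point it makes concrete is that the robot's entire "record" of warmth is a stored value in memory:

```python
# Hypothetical sketch of the robot in the thought experiment: it "senses"
# temperature and keeps a data structure representing its body temperature,
# yet at a lower level of abstraction that representation is just a state
# held in hardware -- nothing in it corresponds to an experience of warmth.

class Robot:
    def __init__(self):
        self.body_temperature = None  # the "data structure" from the example

    def sense(self, ambient_celsius):
        # The temperature "sensor": information is communicated and recorded.
        self.body_temperature = ambient_celsius
        return self.body_temperature

robot = Robot()
robot.sense(31.0)
print(robot.body_temperature)  # just a stored state, however it is labeled
```

Whether anything beyond such stored states is present in the human case is exactly the point in dispute.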

An interesting side-note to this is that in some cultures, rocks, trees,
etc., are believed to experience their existence.  This is, to me, an
entirely acceptable alternate theory, in which the rock and robot would both
feel the warmth (and other physical properties) they possess.

As a final point, when I consider what I am aware of at any given moment, it
seems to include a visual display, an auditory sensation, and various bits
of data from parts of my body (taste, smell, touch, pain, etc.).  There are
many things inside my brain that I am *not* aware of, including the
preprocessing of my vision, and any stored memories not recalled at the
moment.  There is a sharp boundary between those things I am aware of and
those things I am not.  Why should this be?  It isn't just that the high
level processes, whatever they are, have access to only some structures.
They *feel* different from other structures in the brain, whose information
I also have access to, but which I have no feeling of awareness in.  It
would appear that there is some set of processing elements to which my
awareness has access.  This is the old mind-body problem that has plagued
philosophers for centuries.

To deny this qualitative difference would be, for me, silly, as silly as
denying that the physical world really exists.  In any event, whatever stand
you take on this issue is based on personal preferences in metaphysics, and
not on physical proof.

flink%umcp-cs%CSNet-Relay@sri-unix.UUCP (10/23/83)

From:  Paul Torek <flink%umcp-cs@CSNet-Relay>

            [Submitted by Robert.Frederking@CMU-CS-CAD.]

[Robert:]

I think you've misunderstood my position.  I don't deny the existence of
awareness (which I called, following Michael Condict, consciousness).  It's
just that I don't see why you or anyone else doesn't accept that the physical
object known as your brain is all that is necessary for your awareness.

I also think you have illegitimately assumed that all physicalists must be
functionalists.  A functionalist is someone who believes that the mind
consists in the information-processing features of the brain, and that it
doesn't matter what "hardware" is used, as long as the "software" is the
same there is the same awareness.  On the other hand, one can be a
physicalist and still think that the hardware matters too -- that awareness
depends on the actual chemical properties of the brain, and not just the
type of "program" the brain instantiates.

You say that a robot is not aware because its information-storage system
amounts to *just* the states of certain bits of silicon.  Functionalists
will object to your statement, I think, especially the word "just" (meaning
"merely").  I think the only reason one throws the word "just" into the
statement is because one already believes that the robot is unaware.  That
begs the question completely.

Suppose you have a "soul", which is a wispy ghostlike thing inside your body
but undetectable.  And this "soul" is made of "soul-stuff", let's call it.
Suppose we've decided that this "soul" is what explains your
intelligent-appearing and seemingly aware behavior.  But then someone comes
along and says, "Nonsense, Robert is no more aware than a rock is, since we,
by using a different level of abstraction in thinking about it, can see that
his data-structure is *merely* the states of certain soul-stuff inside him."
What makes that statement any less cogent than yours concerning the robot?

So, I don't think dualism can provide any advantages in explaining why
experiences have a certain "feel" to them.  And I don't see any problems
with the idea that the "feel" of an experience is caused by, or is identical
with, or is one aspect of, (I haven't decided which yet), certain brain
processes.
                                --Paul Torek, umcp-cs!flink

Robert.Frederking%CMU-CS-CAD@sri-unix.UUCP (10/24/83)

        Sorry about not noticing the functionalist/physicalist
distinction.  Most of the people that I've discussed this with were either
functionalists or dualists.

        The physicalist position doesn't bother me nearly as much as the
functionalist one.  The question seems to be whether awareness is a function
of physical properties, or something that just happens to be associated with
human brains -- that is, whether it's a necessary property of the physical
structure of functioning brains.  For example, the idea that your "soul" is
"inside your body" is a little strange to me -- I tend to think of it as
being similar to the idea of hyperdimensional mathematics, so that a person's
"soul" might exist outside the dimensions we can sense, but communicate with
their body.  I think that physicalism is a reasonable hypothesis, but the
differences are not experimentally verifiable, and dualism seems more
reasonable to me.

        As far as the functionalist counter-argument to mine would go, the
way you phrased it implies that I think that the "soul" explains human
behavior.  Actually, I think that *all* human behavior can be modeled by
physical systems like robots.  I suspect that we'll find physical correlates
to all the information processing behavior we see.  The thing I am
describing is the internal experience.  A functionalist certainly could make
the counter-argument, but the thing that I believe to be important in this
discussion is exactly the question of whether the "soul" is intrinsically
part of the body, or whether it's made of "soul-stuff", not necessarily
"located" in the body (if "souls" have locations), but communicating with
it.  As I implied in my previous post, I am concerned with the eventual
legal and ethical implications of taking a functionalist point of view.

        So I guess I'm saying that I prefer either physicalism or dualism to
functionalism, due to the side-effects that will occur eventually, and that
to me dualism appears the most intuitively correct, although I don't think
anyone can prove any of the positions.

dinitz@uicsl.UUCP (10/28/83)

uicsl!dinitz    Oct 27 12:13:00 1983

I'm not so sure that pleasure/pain response will never be analyzable in
physical terms, Robert.  If we were ever to gain a satisfactory
understanding of that property of higher animates -- enough, say, to
model it in a robot -- we would also erode the idea that the robot
could not feel or experience other emotional states.  The problem is
that we must not base our arguments concerning robot consciousness,
experience, feeling, selfhood, et cetera on the absence of an adequate
theory to explain the same "phenomena" in humans/animals.

I have placed "phenomena" in scare quotes because of our inability to
define them satisfactorily.  What we really have are words which carry
along vague notions of the way we think our world is.  I hesitate to
call them phenomena, properties, or states until we can pinpoint them
more precisely.

In the end, though, I agree with you that the question is really one of
world-view -- a cultural perspective.  If one begins with the premise
that only higher animates can experience consciousness, then there is no
easy way to infer that plants, rocks or robots can too.  If one begins
with the premise that all earthly objects can experience consciousness,
then there is no question in the special cases of plants, rocks and
robots.  With this second premise the interesting questions are whether
intangibles like words, thoughts, physical states (e.g. temperature),
and time are earthly objects which can hence experience consciousness.

My personal feeling:  The culture I grew up in only allows higher
animates to possess consciousness.  However, I grew up at a time when
the dominant culture was experimenting with various "foreign"
world-views.  Thus there is already some partially charted space in my
head for exceptions.  Outside the context of this discussion, I would
probably say (without hesitation) that robots don't have
consciousness.  Within this context, however, I am willing to admit the
possibility, and discuss the point.  Ultimately, the question has no
relevance at this time, or in my lifetime; I do not worry too much
about what the correct answer is.  In summary, my opinion may be called
the Zen of Robot Consciousness: robot consciousness is possible, and
robot consciousness is not possible.

Rick Dinitz

andree@uokvax.UUCP (11/09/83)

uokvax!andree    Oct 30 13:21:00 1983

Robert -

If I understand correctly, your reasons for preferring dualism (or
physicalism) to functionalism are:

	1) It seems more intuitively obvious.
	2) You are worried about legal/ethical implications of functionalism.

I find that somewhat amusing, as those are EXACTLY my reasons for
preferring functionalism to either dualism or physicalism. The legal
implications of differentiating between groups by arbitrarily denying
`souls' to one are well-known; it usually leads to slavery.

	<mike