[comp.ai.philosophy] Testing for machine consciousness

rjf@canon.co.uk (Robin Faichney) (10/08/90)

In article <7@tdatirv.UUCP> sarima@tdatirv.UUCP (Stanley Friesen) writes:
>In article <1990Oct4.154655.23004@canon.co.uk> rjf@canon.co.uk (I) wrote:
>>[ saying that just because certain functions are associated with
>>  consciousness in us, does not mean that the presence of such
>>  functions is evidence for consciousness in machines ]
>
>The counter-argument is simple.  This is also true of the human brain!
>No individual neural mechanism in the brain is conscious, nor is any
>individual subsystem in the brain conscious.  Consciousness is the result
>of the sum of the activities and interactions of many components and
>mechanisms within the brain.  Thus if implementation via unconscious parts
>denies consciousness, then *we* are not conscious either, we just think we
>are.

OK, I know this thread was originally about emergence, and I neglected
until now to change the Subject line, but that is not what I was talking
about!  What I meant was, we associate consciousness particularly with
short-term memory, for instance, but it would (I guess) be relatively
easy to implement a machine with short-term memory which functioned
just like ours, though the machine was not conscious.  The same
argument applies to any other function.  So, if not by its functioning,
how else can we tell whether a machine is conscious?

>> -- if we agree that no current
>>machine is conscious, why should we believe any future machine to be so
>>-- it could perform indistinguishably from a person, while being
>>"nothing but" an unconscious object.
>
>Because we do not agree that no current machine is conscious - we all agree
>that the human machine is indeed conscious.

No we don't.  Because (a) some people would argue that maybe we're not
really conscious, we just think we are (I personally do not think this
position worth dealing with) and (b) you have just redefined
"machine".  In such discussions we have to distinguish between natural
human beings on one hand and artefacts made by them on the other, in
order to compare their qualities.  Normally, the word "machines" is
used for the artefacts.  To attempt to blur the distinction merely by
deciding to call people machines is a tactic hardly worthy of this
refutation.  It tells us absolutely nothing about either humans, or
consciousness, or machines.

BTW, I'd still be interested in hearing whether anyone has a test for
machine consciousness..

sarima@tdatirv.UUCP (Stanley Friesen) (10/09/90)

In article <1990Oct8.120927.8648@canon.co.uk> rjf@canon.co.uk writes:
>In article <7@tdatirv.UUCP> I (Stanley Friesen) wrote:
>>The counter-argument is simple.  This is also true of the human brain!
>>No individual neural mechanism in the brain is conscious, nor is any
>>individual subsystem in the brain conscious.  ...
>
>What I meant was, we associate consciousness particularly with
>short-term memory, for instance, but it would (I guess) be relatively
>easy to implement a machine with short-term memory which functioned
>just like ours, though the machine was not conscious.  The same
>argument applies to any other function.  So, if not by its functioning,
>how else can we tell whether a machine is conscious?

If it functions just like our brain it *is* conscious!

Or do you mean that it has short-term memory just like ours, but none of
our other functionality?
In that case the issue of emergence comes back in.  Consciousness probably
is an emergent property of the sum total of all of our functionality (or
at least a very large subset thereof).  Thus taking one sub-component,
like short-term memory, and expecting it to be 'conscious' is silly.
Consciousness appears to be based on a complex interaction among: internal
world models, self-monitoring, decision-making processes, spontaneous
learning, abstraction, and perhaps other things.  Short-term memory may well
be a critical component, giving rise to the sense of continuity necessary
for a sense of self, but it is scarcely a defining feature.

>>Because we do not agree that no current machine is conscious - we all agree
>>that the human machine is indeed conscious.
>
>No we don't.  Because (a) some people would argue that maybe we're not
>really conscious, we just think we are (I personally do not think this
>position worth dealing with) and (b) you have just redefined
>"machine".  In such discussions we have to distinguish between natural
>human beings on one hand and artefacts made by them on the other, in
>order to compare their qualities.

O.K., I will make the same point in another way.  I know of no mechanism
within the human brain that is not strictly physical in nature.  Thus an
exact copy of a human could, in theory, be constructed (a machine).  Since
this exact copy is indistinguishable in any way from a naturally born human,
we can, as a shortcut, say that humans are 'machine-like' in construction.
Thus, if we are conscious, and that consciousness is based in our machine-like
body functioning, then a machine may be conscious. [I base this on years of
study as a biologist].

In short, I think the distinction you are making between 'machine' and 'human'
is largely artificial; it is based on a false dualism.

>BTW, I'd still be interested in hearing whether anyone has a test for
>machine consciousness..

Now, as for testing a machine for consciousness, that is harder.  It is quite
a different question than whether a machine can be conscious.  [We were denying
that blacks were conscious (had a soul) for many years, despite the contrary
evidence, so how are we going to be objective about a *machine*!]
-- 
---------------
uunet!tdatirv!sarima				(Stanley Friesen)

cpshelley@violet.uwaterloo.ca (cameron shelley) (10/10/90)

>In article <1990Oct4.154655.23004@canon.co.uk> rjf@canon.co.uk wrote:
[article deleted...]

>BTW, I'd still be interested in hearing whether anyone has a test for
>machine consciousness..

  If consciousness is 'emergent' and therefore not reduced to a formal
framework (yet), then there is no test since there are no criteria to
fulfil...
--
      Cameron Shelley        | "Saw, n.  A trite popular saying, or proverb. 
cpshelley@violet.waterloo.edu|  So called because it makes its way into a
    Davis Centre Rm 2136     |  wooden head."
 Phone (519) 885-1211 x3390  |				Ambrose Bierce

phil@eleazar.dartmouth.edu (Phil Bogle) (10/10/90)

There has been a great deal of discussion of consciousness as an emergent
phenomenon and the potential of machines to attain it.  I'd like to
backtrack to ask a fundamental question: why should we care whether a
machine is actually conscious, sentient, "aware", and so forth?   With regard
to intelligence, consciousness as opposed to behavior which merely
_seems_ conscious makes no difference whatsoever.  Would AI research be
considered a failure if it actually did manage to construct Searle's
non-conscious Chinese Room?

  Even from the standpoint of ethics, would you feel no shame
destroying an intelligence capable of creating elegant and beautiful
ideas, merely because it wasn't "sentient" (especially if it gave every
appearance of being so)?

   I'm not saying I would freely abandon my self-awareness.  I am
suggesting that Searle and others have their priorities messed up.  The real
question should not be "Can we create intelligence without consciousness?", but
"Could we create intelligence without having some kind of emergent, higher level
structure?" It would be very distressing if a machine like Searle's CR
managed to fake its way through the Turing test using only a set of
disconnected, low-level rules-- not because the machine isn't conscious,
but because it has none of the complex, emergent structure we associate
with intelligence.  Other than that point, however, I can't see how Searle's
argument should influence AI researchers in any way.  The field, after all, is
AI, not AC.

BKort@bbn.com (Barry Kort) (10/11/90)

In article <1990Oct8.120927.8648@canon.co.uk> rjf@canon.co.uk (Robin 
Faichney) writes:

> BTW, I'd still be interested in hearing whether anyone has a test for
> machine consciousness..

I envision the following scenario...

It is near the end of the semester in Professor Moravec's class on 
advanced robotics.  All year long the grad students have labored to 
construct the silicon golem.  As the class sits back to admire its
handiwork, the good Professor polls the team on the ultimate question.  
"Who among us is in favor of giving the android consciousness?"

As the students ruminate on the question they are distracted by the 
whirring of gear motors as the robot raises its hand and says, "I am."


Barry Kort
Visiting Scientist
BBN Labs
Cambridge, MA

harmo@cc.helsinki.fi (10/11/90)

In article <21@tdatirv.UUCP>, sarima@tdatirv.UUCP (Stanley Friesen) writes:

> Since
> this exact copy is indistinguishable in any way from a naturally born human,
> we can, as a shortcut, say that humans are 'machine-like' in construction.

Well, indistinguishable maybe, but still very different in certain historical
and conventional respects.
A copy of me would not be married to my wife or be the father of my children.
I think such a creature differs from me in quite essential aspects.
There are many philosophers who would argue that the same applies to
"consciousness" (eg. Davidson, Sellars). Consciousness is something we
attribute to creatures partly because they have certain convention-based
roles and a certain type of history in human society. Consciousness of
animals (if you believe in such) or wolf-babies or representatives of other
cultures (if you think this culture is very different from mine so that I can't
attribute the proper roles) is a derived concept; they are not "really"
conscious. Machine consciousness would be similar to animal consciousness
unless machines start to grow into proper kinds of roles in society.
Note that this does not imply that eg. representatives of other cultures are
somehow inferior in information-processing capabilities.

 -Timo Harmo

me@csri.toronto.edu (Daniel R. Simon) (10/12/90)

Perhaps some recent experiences of mine will add to the current debates over 
consciousness and emergence.

I have just returned from a visit with the graphics group at a rather
prestigious academic institution which shall remain nameless.  I say "graphics
group", but I should note that they have changed their name to "Laboratory for
Artificial Appearance", and consider themselves to be grappling with much 
deeper problems than the mere rendering of amusing images (although the bulk
of their lucrative research work is in practice of precisely this variety).

Their transformation from "graphics group" into LAA began with an article 
published some years ago in an obscure graphics journal, but now considered a
classic in the field.  It proposed the following thought experiment:  suppose
that a television camera were pointed at a person sitting against a blank 
background; the resulting image (which might be either black and white or 
colour, and either still or moving, as the experimenter prefers) is displayed 
on an ordinary CRT screen in a different room.  Adjacent to this screen sits 
another screen displaying an artificially rendered image (again, either black
and white or colour, and either still or moving, to match the image on the 
first screen).  Individuals are invited into the room to examine the screens,
and to try to determine which screen is displaying the image of a real person.
The article provocatively asks whether, if even very sharp-eyed viewers are 
unable to spot the signs of computer rendering in the artificial image, it can
credibly be denied that the artificial image is, in fact, the image, or (in some
strict sense) the "appearance" of a human being.
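
(For the concretely minded, incidentally, the blurring test is at bottom
a two-alternative forced choice, and could be scored as in the little
sketch below; the parameters, and the pass-at-chance criterion, are my
own inventions rather than the article's.)

import random

def blurring_test(viewers, trials, p_detect):
    # Each trial is a two-alternative forced choice: the viewer either
    # spots a rendering artifact (probability p_detect) and answers
    # correctly, or else guesses between the two screens at random.
    correct = 0
    total = viewers * trials
    for _ in range(total):
        if random.random() < p_detect:
            correct += 1
        elif random.random() < 0.5:
            correct += 1
    return correct / total

# The rendering "passes" when accuracy stays near chance (0.5).
print("accuracy = %.2f" % blurring_test(viewers=20, trials=10, p_detect=0.05))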

The consequences for the field of this playfully-named "blurring test" have, it
seems, been profound.  Numerous extensions have been proposed to the original
test, including the possibility for aural or tactile components to the images, 
and perhaps even interaction.  Graphics researchers who still have difficulty 
producing realistic images of trees have found ample backing for their efforts
to theorize about how to produce flawless renderings of the human form.  
Philosophical debates have abounded concerning the definition of appearance,
appropriate goals for artificial appearance researchers, and most of all, what
properties artificial appearance shares with its "natural" counterpart.

For example, during my stay at the LAA, a heated discussion ensued concerning
the property of beauty, and in particular over whether artificially-rendered
humans could be considered beautiful.  Many, of course, argued that no mere
collection of pixels could ever be considered beautiful in the sense that real,
live humans could be; much was made of the significant role of context in 
beauty.  Some suggested that the sceptics were defining beauty too narrowly;
was there no beauty, they asked, in the Mona Lisa or the Venus de Milo, not to
mention Mondrian's "Broadway Boogie-Woogie"?  The middle position, I gathered, 
was that, at the very least, an image which passed the blurring test deserved 
to be considered as beautiful as if it were a real person, although it was 
widely conceded that no artificial image yet produced was worthy of the 
attribution.

Two aspects of this discussion make it, I think, relevant to the current ones 
in this newsgroup:  firstly, the sceptical "hard line" was frequently rebutted 
with references to the idea of beauty being an "emergent" property.  Hence, it 
was argued, a collection of pixels may well be "humanly" beautiful, although
it was composed of individual unbeautiful parts.  Secondly, their use of the
blurring test was, for me, highly evocative of the current debate; for example,
they frequently asked how, if one could deny the beauty of a computer-generated
image indistinguishable in "appearance" from a really beautiful woman, one 
could still say with even reasonable certainty that a beautiful woman herself
is really beautiful; could not her appearance, too, be secretly composed of 
minute dots of colour?

My only contribution to the debate was to remark once, half in jest, that I
had always believed beauty to be in the eye of the beholder.  There was an
awkward silence, a few people coughed embarrassedly, and after a few moments
the conversation continued as before.  Until I left, those were the last words
I dared speak on the subject.


"There *is* confusion worse than death"		Daniel R. Simon
			     -Tennyson		(me@theory.toronto.edu)

rjf@canon.co.uk (Robin Faichney) (10/12/90)

In article <21@tdatirv.UUCP> sarima@tdatirv.UUCP (Stanley Friesen) writes:
>[..]
>Consciousness appears to be based on a complex interaction among: internal
>world models, self-monitoring, decision-making processes, spontaneous
>learning, abstraction, and perhaps other things.
>[..]
>Thus, if we are conscious, and that consciousness is based in our machine-like
>body functioning, then a machine may be conscious. [I base this on years of
>study as a biologist].

Can anyone provide a pointer to objective evidence for the existence of
consciousness?

>In short, I think the distinction you are making between 'machine' and 'human'
>is largely artificial; it is based on a false dualism.

The distinction is in our minds..  ;-)

sarima@tdatirv.UUCP (Stanley Friesen) (10/12/90)

In article <3324.2713728b@cc.helsinki.fi> harmo@cc.helsinki.fi writes:
>In article <21@tdatirv.UUCP>, I write:
 
>> Since
>> this exact copy is indistinguishable in any way from a naturally born human,
>> we can, as a shortcut, say that humans are 'machine-like' in construction.
 
>Well, indistinguishable maybe, but still very different in certain historical
>and conventional respects.

But then so am I, every individual is different in many details.   This, to
me, does not mean they are not conscious.

>A copy of me would not be married to my wife or be father of my children.

True, but he might go out and find his own wife. And father his own children.

>There are many philosophers who would argue that the same applies to
>"consciousness" (eg. Davidson, Sellars). Consciousness is something we
>attribute to creatures partly because they have certain convention-based
>roles and a certain type of history in human society. Consciousness of
>animals (if you believe in such) or wolf-babies or representatives of other
>cultures (if you think this culture is very different from mine so that I can't
>attribute the proper roles) is a derived concept; they are not "really"
>conscious.

I think we have rather different concepts of what consciousness is.  However,
I do not see that an artificial duplicate of a human would be unable to enter
into these roles.  He would go out and get a job and do all the other things
people in our culture do.  [We might prevent him from doing so through
prejudice, but this does not imply actual lack of capacity].  

If I were to make a definition of consciousness along the lines you are
suggesting, I would use the *capacity* to enter into these roles and histories,
rather than any specific instances of them, as the defining characteristic.
There is no real evidence that any normal human is incapable of entering into
any of the various social roles of any society, at least if introduced to them
at a sufficiently early age. And even adults are often capable of adjusting
their behavior enough to fit into the social roles of a radically different
culture, if the motivation is strong enough.

However, the definition of 'conscious' I actually use is rather different. It is
that mode of thought in which self-awareness is used to guide reactions.  Or
at least that is an approximation of what I mean by the word - I find it
difficult to produce a satisfactory precise definition.

>Machine consciousness would be similar to animal consciousness
>unless machines start to grow into proper kinds of roles in society.
>Note that this does not imply that eg. representatives of other cultures are 
>somehow inferior in information-processing capabilities.

Consciousness is not information processing either, it is more of an attitude
towards reality, or towards the relationship of self to non-self.

A conscious machine would tend to develop roles for itself in society.  They
may well be different from human roles, and it may take us a while to perceive
these roles as normal in the same way we view eating lunch in a cafe.  But
such changes do take place; computer programming is a totally new role in
society.  [I actually knew one of the very first professional computer
programmers, so it is a *very* new role]
-- 
---------------
uunet!tdatirv!sarima				(Stanley Friesen)

rjf@canon.co.uk (Robin Faichney) (10/13/90)

In article <1990Oct11.161350.16127@jarvis.csri.toronto.edu> me@csri.toronto.edu (Daniel R. Simon) writes:
>[..]
>
>My only contribution to the debate was to remark once, half in jest, that I
>had always believed beauty to be in the eye of the beholder.  There was an
>awkward silence, a few people coughed embarrassedly, and after a few moments
>the conversation continued as before.  Until I left, those were the last words
>I dared speak on the subject.

I hate to be the boringly explicit one, especially after such a
delightful allegory as this, but my position is that consciousness is
in the eye of the beholder.

In article <25036@dartvax.Dartmouth.EDU> phil@eleazar.dartmouth.edu (Phil Bogle) writes:
>..With regard
>to intelligence, consciousness as opposed to behavior which merely
>_seems_ conscious makes no difference whatsoever..
>
>The field, after all, is
>AI, not AC.

Absolutely.  But, to go back a little and clarify this: consciousness
is a subjective phenomenon, so the goal would be not to build a
conscious machine, but a machine which people believed to be
conscious.  I think this must be what Turing had in mind.

BUT, this is not the Turing Test -- I personally believe that people
without axes to grind simply will not believe it conscious if they know
it is a machine, no matter what it does.

AND, even if I am wrong on that, subjectivity is not democratic: a
majority of people believing in the consciousness of a machine does not
make it "really" conscious; if you believe in it, then it is so for
you, and if not..

As for intelligence, it is not as clearly a purely subjective
phenomenon as is consciousness, but it would not at all surprise me if,
following an analysis of the concept, we came to the same conclusion.

Of course one of the main implications of all this is that AI should
concern itself as much with the psychology/sociology of the attribution
of consciousness and intelligence as with the internals of the machines..
on the other hand, if they are trying to simulate the human mind, then
they'd have to do that anyway, wouldn't they?  ;-)

Personally, I won't believe it conscious until it is proved that it
believes me conscious!  :-)

sarima@tdatirv.UUCP (Stanley Friesen) (10/14/90)

In article <1990Oct12.074325.688@canon.co.uk> rjf@canon.co.uk writes:
>
>Can anyone provide a pointer to objective evidence for the existence of
>consciousness?

Not really.  But are you seriously claiming that humans are *not* conscious??

Actually, I believe that at some basic level 'consciousness' is defined as
"what we humans have that allows us to ask 'who am I'".  Thus it exist *by*
*definition*, and it is just a matter of figuring out what that something
really is.

>>[ME]
>>In short, I think the distinction you are making between 'machine' and 'human'
>>is largely artificial; it is based on a false dualism.
>
>The distinction is in our minds..  ;-)

Exactly my point.  Since the distinction is in our minds any argument based
on it is also only in our minds, and nature is not constrained to agree with
us.  Thus the arguments against the possibility of constructing an intelligence
are flawed.
-- 
---------------
uunet!tdatirv!sarima				(Stanley Friesen)

rjf@canon.co.uk (Robin Faichney) (10/16/90)

In article <31@tdatirv.UUCP> sarima@tdatirv.UUCP (Stanley Friesen) writes:
>In article <1990Oct12.074325.688@canon.co.uk> rjf@canon.co.uk writes:
>>
>>Can anyone provide a pointer to objective evidence for the existence of
>>consciousness?
>
>Not really.  But are you seriously claiming that humans are *not* conscious??

I'd claim that consciousness cannot be handled objectively.  There can be
no objective evidence for it, or definition of it.

>Actually, I believe that at some basic level 'consciousness' is defined as..

What does this mean?  Whose definition is this?

>"what we humans have that allows us to ask 'who am I'"...

This seems obviously, to me, to be about self-consciousness, not
consciousness.

>>>[Stanley]
>>>In short, I think the distinction you are making between 'machine' and 'human'
>>>is largely artificial; it is based on a false dualism.
>>
>>The distinction is in our minds..  ;-)
>
>Exactly my point.  Since the distinction is in our minds any argument based
>on it is also only in our minds, and nature is not constrained to agree with
>us.  Thus the arguments against the possibility of constructing an intelligence
>are flawed.

The distinction is in our minds, the argument is in our minds,
consciousness is in our minds -- but you want to put it into a
machine.  You say (I think) that it is merely subjective, therefore the
objections to implementation are similarly subjective.  But I say the
implementations themselves will be equally subjective -- so you can
believe in them if you want to, but (a) you can forget "proof" of any
such implementation, and (b) in no way can this endeavour be described
as "scientific".

The point I am trying to make is that consciousness is neither an
object nor a process -- it is a concept.  When we say something is
conscious what we are really talking about is certain psychological and
sociological aspects of the relationship between it and us.  Primarily,
it means we are willing to identify with it, to put ourselves in its
shoes.

But what is most relevant here is that, if you are interested in
consciousness, you have to look very carefully and seriously at how
this concept is used "in real life".  Adopting some convenient
definition dreamed up either by yourself or by someone similarly
motivated to make it fit your picture just won't do.  Hackers don't
become philosophers or (even social) scientists without a lot of hard
work.  If they knew about it, the notion of a bunch of "computer
scientists" trying to construct a "real person" would make most people,
including those highly qualified in other disciplines, crease up.
(Of course, they laughed at Galileo..  ;-)

mikeb@wdl31.wdl.fac.com (Michael H Bender) (10/17/90)

In article <31@tdatirv.UUCP> sarima@tdatirv.UUCP (Stanley Friesen) writes:

   >>In short, I think the distinction you are making between 'machine' 
   >>and 'human' is largely artificial; it is based on a false dualism.

   >>The distinction is in our minds..  ;-)

   Exactly my point.  Since the distinction is in our minds any argument based
   on it is also only in our minds, and nature is not constrained to agree with
   us.  Thus the arguments against the possibility of constructing an
   intelligence are flawed.

Likewise, the arguments "proving" the possibility of constructing
consciousness are equally flawed! (By the way -- how can we build something
we can't even define?)

Mike Bender

mccool@dgp.toronto.edu (Michael McCool) (10/17/90)

mikeb@wdl31.wdl.fac.com (Michael H Bender) writes:
>
>By the way -- how can we build something
>we can't even define?

>Mike Bender

Has anyone in this group ever read "Destination Void", with the Artificial
Consciousness Project mentioned therein?  The book is by Frank Herbert, of
Dune etc.  The follow-up to this book was "The Jesus Incident".

Not supporting his views, just mentioning a thread.

Michael McCool@dgp.toronto.edu

cam@aipna.ed.ac.uk (Chris Malcolm) (10/19/90)

In article <MIKEB.90Oct16133635@wdl31.wdl.fac.com> mikeb@wdl31.wdl.fac.com (Michael H Bender) writes:
>(By the way -- how can we build something
>we can't even define?)

Fascinating question! I would love to know HOW it is that I do it; I have
no doubt THAT I do it quite often when implementing experimental research
prototypes.
-- 
Chris Malcolm    cam@uk.ac.ed.aipna   031 667 1011 x2550
Department of Artificial Intelligence, Edinburgh University
5 Forrest Hill, Edinburgh, EH1 2QL, UK

sarima@tdatirv.UUCP (Stanley Friesen) (10/20/90)

In article <MIKEB.90Oct16133635@wdl31.wdl.fac.com> mikeb@wdl31.wdl.fac.com (Michael H Bender) writes:
>Likewise, the arguments "proving" the possibility of constructing
>consciousness are equally flawed! (By the way -- how can we build something
>we can't even define?)

I am not sure how you get this from my position.   I suspect we may not be
talking on the same wavelength.  The distinction that I was claiming to be
entirely 'imaginary' was the distinction between machine(=artificial) and
human(=natural).   That is, any argument based on the two being intrinsically
different is suspect.  And since there is no intrinsic difference, the
existence of one implies the possibility of the other.

Besides, I tend to assume something is possible until it is proven otherwise.
Too many things claimed impossible have been done for me to place much store
in 'impossibility'.
-- 
---------------
uunet!tdatirv!sarima				(Stanley Friesen)

jhess@orion.oac.uci.edu (James Hess) (10/23/90)

In article <1990Oct11.161350.16127@jarvis.csri.toronto.edu> me@csri.toronto.edu (Daniel R. Simon) writes:
>Perhaps some recent experiences of mine will add to the current debates over 
>consciousness and emergence.
>
>For example, during my stay at the LAA, a heated discussion ensued concerning
>the property of beauty, and in particular over whether artificially-rendered
>humans could be considered beautiful.
>
>My only contribution to the debate was to remark once, half in jest, that I
>had always believed beauty to be in the eye of the beholder.  There was an
>awkward silence, a few people coughed embarrassedly, and after a few moments
>the conversation continued as before.  Until I left, those were the last words
>I dared speak on the subject.
>
Lord forbid that these academics should show the initiative to study the 
literature on the meaning of words in semantics and philosophy or the 
philosophy of aesthetics.  It would deprive them of the opportunity to attempt
to single-handedly recreate thousands of years of Western and Eastern 
philosophy.  I can only hope that they had the good sense to conduct this 
session over a pint of bitters.

oliphant@telepro.UUCP (Mike Oliphant) (10/27/90)

In article <1990Oct16.084022.7279@canon.co.uk> rjf@canon.co.uk writes:

>I'd claim that consciousness cannot be handled objectively.  There can be
>no objective evidence for it, or definition of it.

I would argue that the key issue is not to classify things as being subjective
or objective, but rather to try to understand why the subjective exists at
all and to try to figure out just what the heck it is.  To me, the
problematic aspect of consciousness is that it is so inextricably linked to
having a "point of view".  I want to know why I have such a "point of view"
and where it comes from.  Telling me that it is subjective and I cannot
objectively investigate it doesn't help any.  This is the traditional cop-out
of labelling something that you do not understand and then proclaiming the
issue to either be resolved or unresolvable.

--
Mike Oliphant		    UUCP: alberta!herald!telepro!oliphant
			Internet: oliphant@telepro.uucp
			 FidoNet: (1:140/91) - ZMH only
*
* Call TelePro, the development system for DIALOG Professional
*
*   Phone: +1 306 249 2352	2400/9600/14400 bps HST
*	   +1 306 652 2084	300/1200/2400 bps
* FidoNet: (1:140/90)
*

rjf@canon.co.uk (Robin Faichney) (10/30/90)

In article <oliphant.4676@telepro.UUCP>, oliphant@telepro.UUCP (Mike Oliphant) writes:
> In article <1990Oct16.084022.7279@canon.co.uk> rjf@canon.co.uk writes:
> 
> >I'd claim that consciousness cannot be handled objectively.  There can be
> >no objective evidence for it, or definition of it.
> 
> I would argue that the key issue is not to classify things as being subjective
> or objective, but rather to try to understand why the subjective exists at
> all and to try to figure out just what the heck it is.  To me, the
> problematic aspect of consciousness is that it is so inextricably linked to
> having a "point of view".  I want to know why I have such a "point of view"
> and where it comes from.  Telling me that it is subjective and I cannot
> objectively investigate it doesn't help any.  This is the traditional cop-out
> of labelling something that you do not understand and then proclaiming the
> issue to either be resolved or unresolvable.

It's no cop-out.  It only looks that way because it lacks an
explanation of the concepts of objectivity and subjectivity.  I can put
the relevant part of such an explanation in a few words: consciousness
is the essence of subjectivity.  They are practically the same thing.
It is the particular point of view which we possess as individual
organisms.  But if you want a *real* explanation I suggest you check
out the last 2 chapters of Thomas Nagel's book 'Mortal Questions'.  It
even touches on the existential question 'why am *I* here, now,
possessed of this particular point of view'.  And without getting
mystical.  Highly recommended.

BKort@bbn.com (Barry Kort) (10/31/90)

In article <oliphant.4676@telepro.UUCP> oliphant@telepro.UUCP (Mike 
Oliphant) writes:

> I would argue that the key issue is not to classify things as being 
> subjective or objective, but rather to try to understand why the subjective exists 
> at all and to try to figure out just what the heck it is.

Think of your brain and mind carrying out a mapping between external 
(objective) reality and internal (subjective) mental models or images.  
The map and the territory bear a strong resemblance to each other, but the 
map is not the territory.

We measure the objective through our senses and construct the subjective 
as our internal representation of the world in which we find ourselves embedded.
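
A toy illustration, nothing more than a smoothing filter under invented
parameters: the "territory" below is a hidden number, and the "map" is
an internal estimate built solely from noisy sensor readings.

import random

territory = 20.0     # the external, objective quantity being sensed
map_estimate = 0.0   # the internal, subjective model of it

for _ in range(100):
    reading = territory + random.gauss(0.0, 2.0)    # noisy measurement
    map_estimate += 0.1 * (reading - map_estimate)  # revise the model

# The map comes to resemble the territory, but it is never the territory.
print("territory %.1f, map estimate %.1f" % (territory, map_estimate))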


Barry Kort
Visiting Scientist
BBN Labs
Cambridge, MA

cpshelley@violet.uwaterloo.ca (cameron shelley) (10/31/90)

  I'd like to inject a few comments regarding testing for machine
consciousness.

  Firstly, why do we accept the belief that other humans are conscious?
(I use the word "belief" advisedly, since I think that knowledge of
another's subjectivity is problematic.)  I would argue that we use a
genetic analogy: I am human (which is now a genetic term), and I am
conscious; therefore since this other individual is human, he or she
is also conscious.  In other words, we believe ourselves to be conscious,
and we believe that the genetic connection between ourselves and other
humans is 'close' enough to preserve that property.  (I should also
point out that "human" wasn't always this way - animistic religions
ascribed "human"-type consciousness to animals such as wolves and bears
and other "totems".  But I digress. :)

  So what stands in the way of our belief in machine consciousness?  If
the above is true, it predicts that we will have problems because our
connection with the computer/program pairs we create is not genetic (or
of the same order of strength as genetic).  Our connection with any
machine/program pair is that we have brought it into existence to fulfil
a specific purpose - normally one we have formally defined, or are
capable of formally defining.  This implies that the only way we will
accept the belief that any pair is conscious is if we can formally
describe consciousness and verify that the program involved matches
the definition.  This also helps to explain why people are more
reluctant to accept a connectionist approach, since the connection
between us and the eventual behaviour of the pair is even more distant
than under traditional circumstances.
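
To make that concrete, here is a deliberately naive sketch, borrowing
Stanley Friesen's candidate components from earlier in this thread as
the stipulated criteria.  (Whether consciousness reduces to any such
checklist is, of course, exactly what is in dispute.)

CRITERIA = {"internal world model", "self-monitoring", "decision-making",
            "spontaneous learning", "abstraction"}

def passes_formal_test(capabilities):
    # Accept the machine only if every stipulated criterion is verified.
    return CRITERIA <= set(capabilities)

robot = {"internal world model", "decision-making", "abstraction"}
print(passes_formal_test(robot))   # False: this robot fails the test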

  If you do not believe that we can define consciousness, then according
to this line of reasoning you must either give up on ever accepting a
belief in machine consciousness, or come up with a new criterion.  The
obvious way of getting around the "genetic" analogy is to try and
generalize the notion of consciousness to avoid anthropomorphizing
terms (which seems difficult considering the lack of other examples
of 'conscious' to go on).  Even if we manage to avoid loaded terminology,
I wonder if we can avoid loading the concepts we imagine when we use
it?  

  Anyway, many definitions of conscious are being offered in another
thread and I have nothing new to offer at the moment.  But how about
the "new criterion", a new analogy?  (Bear in mind that I think 
analogy is the right idea here, since I am addressing belief, and not
proof - the argument for which is in the first paragraph above.)
Analogies will not give definitive statements and can be misguided,
but analogy ultimately seems to be what we are using anyway.  The analogy
should not rely on some biological factor unless you really wish to 
rule out machine or alien consciousness by fiat.  Since, personally,
I do not subscribe to any form of dualism, I would regard it as 
possible to require that the analogy make reference to the conscious
Whatever's structure as well as its behaviour.  Well, this is as far
as I've gotten it, so all I can do now is open the floor!  Any
suggestions out there?

  Btw, my last post "Public Apology" was an ironic attempt at showing
that a machine consciousness right now would have great trouble
perceiving us as we really are, given its sensory environment.   I was
being ironic about other things, but it doesn't matter.  The point
is: I *don't* have the code!  I have yet to build a sentient posting
reader-responder, so I cannot honour any requests for one!  Sorry! :>


--
      Cameron Shelley        | "Fidelity, n.  A virtue peculiar to those 
cpshelley@violet.waterloo.edu|  who are about to be betrayed."
    Davis Centre Rm 2136     |  
 Phone (519) 885-1211 x3390  |				Ambrose Bierce

rjf@canon.co.uk (Robin Faichney) (10/31/90)

In article <1990Oct31.023922.13795@watdragon.waterloo.edu> cpshelley@violet.uwaterloo.ca (cameron shelley) writes:
>
>
>  I'd like to inject a few comments regarding testing for machine
>consciousness.
>
>  Firstly, why do we accept the belief that other humans are conscious?
>(I use the word "belief" advisedly, since I think that knowledge of
>another's subjectivity is problematic.)  I would argue that we use a
>genetic analogy: I am human (which is now a genetic term), and I am
>conscious; therefore since this other individual is human, he or she
>is also conscious.  In other words, we believe ourselves to be conscious,
>and we believe that the genetic connection between ourselves and other
>humans is 'close' enough to preserve that property.

I think cameron is on the right lines here, but I don't think he's
quite got there.  For one thing, his account suggests that this is an
intellectual phenomenon, but I don't think that can be true.  For
another, he puts self-consciousness before belief in others'
consciousness.  I think that we identify with, and therefore by (my)
definition believe in the consciousness of, other people, long before
we become self-conscious (even if we don't at that stage put it in
quite those terms).

Of these two points, the lack of consideration of non-intellectual
aspects of this is probably more fundamental.  But it can be elucidated
by looking at the development of the concept of consciousness.  When
it's put that way, it is obvious that the concept as such is a relative
latecomer, whether viewed within the evolution of the species or the
development of the individual.  Its function is to provide an
intellectual handle on at least one non-intellectual phenomenon.  My
contention is that this phenomenon is identification with others
(other, closely related phenomena probably also being implicated).

This would certainly explain the difficulties which we have in defining
consciousness:  we assume that because we have a symbol, there must be
a referent.  On reflection it becomes obvious that a concept could
easily serve many purposes without actually 'standing for' any single,
particular thing.  This is the same sort of mistake that Wittgenstein
tried to explain regarding the meaning of language:  it is not the case
that each word, phrase, whatever must represent some particular thing
in the world, which is its meaning; in fact, the meaning of a piece of
language is simply the way it is used.

So how is 'consciousness' used?  In more ways than one, to be sure, but
I think that the common usage -- simple awareness -- is the primary
one.

To go back a little:  what are a baby's earliest social interactions?
I'd suggest (I have a reference for this somewhere) the exchange of
smiles, probably with the mother.  Note that mother's smile tends to
trigger baby's smile and vice versa.  This is modelling behaviour, and
though at first it is undoubtedly very low level, it is in principle
the same thing as when the little girl wants to dress up like mummy (or
the little boy ;-), and the teenager, having switched from the parental
model to the peer group model, wants to look/talk/etc just like all her
friends -- or maybe, wants to be as non-conformist as her cultural
heroes.  There again, any such social interaction as the feeling and
expression of sympathy for someone requires feeling for, ie
identification with, that person.  What I'm trying to say is that
identification is fundamental to socialisation and social interaction,
and you obviously can't identify with anything you don't believe to be
fundamentally like yourself.

So what does identification have to do with consciousness?  Well, I
don't think that it starts with our 'believing ourselves to be
conscious'.  It is deeper than that: in fact, we simply experience
things, and are 'programmed' to view other humans as essentially like
us, ie as 'experiencers'.  The social phenomenon of identifying with
others may reasonably be assumed to have arisen long before the concept
of consciousness.  The fact is that we *naturally* identify with some
of the things in our environment, and not with others; our intellectual
view of this is that some things are conscious and others are not.

That could be taken as meaning that maybe our 'programming' is wrong:
maybe (some?) other people are not conscious, or maybe some inanimate
things are.  But *that is meaningless*.  We either identify with a
thing or we don't.  Period.

The consequences for AI?  I'd suggest the field has nothing to lose by
forgetting consciousness.  People have suggested that important things
are associated with consciousness, like introspection and short-term
memory, but leaving out consciousness would in no way prevent objective
analogues of these, or any other mental phenomena, from being
investigated.  You might even look at identification with others, but
that might be a little one-sided!  ;-)

BTW, what I am suggesting here might be taken as meaning that the mind
as an individual entity is not a meaningful concept, that minds are
"merely" the nodes in a social network.  Maybe a better way of putting
it is that some of the software cannot, for reasons of function rather
than implementation, be run on a standalone machine, only on a
network.  This sort of view of the mind is actually quite common these
days in the arts and social sciences, and if AI is ever to approach the
higher level functions, the practitioners will have to start looking at
postmodernism, structuralism, et al, if only to be able to say what is
wrong with these approaches!  ;-)

If you are interested in an example of research in computing which does
take recent work in the arts and social sciences very seriously, and in
my view is successful in integrating these areas, where they naturally
overlap, look out some of the stuff on computer supported cooperative
work (CSCW) and groupware by the Awakening Technologies group.  (They
seem to be mainly P and J Johnson-Lenz, and publish themselves.) They
have submitted a paper to the forthcoming CSCW Special Edition of The
Intl Jnl of Man Machine Studies.

rjf@canon.co.uk (Robin Faichney) (11/01/90)

In article <1990Oct31.142817.1999@canon.co.uk> rjf@canon.co.uk (I)
wrote:  Something quite long which, on rereading today, I find still
omits a straightforward, explicit account of where the concept of
consciousness comes from.  So here goes:

We experience things, and are 'programmed' to identify with other
humans, ie to believe them essentially identical to ourselves, so we
believe that they experience things.  Some things we do not identify
with, ie we do not believe they experience anything.  We characterise
the difference between the things which we believe experience things
and those which we do not, by saying that the former are conscious and
the latter are not.  But even before this, before the concept of
consciousness arises, we reach the stage of realising that just as we
view other people as experiencers, so they view us, and that is the
beginning of self-consciousness, though we do not yet call it that.

The main consequence for AI is probably that, if you disregard the
inherited predisposition to attribute consciousness to (ie identify
with) other humans, such attribution is at best arbitrary and at worst
meaningless.  Unless there is something seriously wrong with my
account, there can never be a good reason to seriously attribute
consciousness to a machine.

The foregoing is a clarification of what I've said before.  However,
since considering some fascinating arguments made recently in this
group by Chris Malcolm, I would like to suggest a possible scenario:

As Eliza demonstrated, people will quite readily interact with a
machine as if there was 'a real person in there', even when they know
there is not.  I think that this phenomenon is interestingly similar to
the kind of suspension of disbelief which occurs when we are 'taken in'
by a good film, play, book, etc.  We know that the characters are not
real, but can feel, to some extent, as if they were.  I'd put quite a
lot of money on the proposition that this will be the main way people
will interact with computers in the near future and will remain so
indefinitely.  All of our communications facilities are designed for
human correspondents, so the best way to communicate with a machine has
to be as if it were a person.  Consider all the current talk about
software agents.  For the reasons I've tried to explain above, people
will never be willing to believe that there is *really* a conscious
entity in there.  But, as shown by Eliza and in the arts, in practice
that belief is not required for meaningful (in human terms)
communication to take place.  Whether the machine views the
communication as meaningful is not only irrelevant but meaningless.
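
For the record, the trick behind Eliza is nothing deeper than keyword
matching plus pronoun reflection.  A minimal sketch (the patterns below
are my own inventions, not Weizenbaum's actual DOCTOR script):

import re

# Crude first-person -> second-person swaps for echoing input back.
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are"}

# Keyword rules: a pattern and a response template.
RULES = [
    (re.compile(r"i feel (.*)", re.I),  "Why do you feel %s?"),
    (re.compile(r"i think (.*)", re.I), "What makes you think %s?"),
]

def reflect(text):
    return " ".join(REFLECTIONS.get(w, w) for w in text.lower().split())

def respond(line):
    for pattern, template in RULES:
        m = pattern.match(line.strip())
        if m:
            return template % reflect(m.group(1))
    return "Please go on."   # content-free default sustains the illusion

print(respond("I feel that my program understands me"))
# -> Why do you feel that your program understands you?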

I'd also like to humbly suggest that future generations of AI workers
will look back with amusement and bewilderment at such arguments as to
whether a machine could be conscious, much as we do at the medieval
arguments about the number of angels which could dance on the head of a
pin.

(I would *not* go so far, in this forum, as to suggest that some people
in AI have, when thinking about a conscious machine, been carried away
by the thought of playing God!  ;-)

cpshelley@violet.uwaterloo.ca (cameron shelley) (11/01/90)

In article <1990Oct31.142817.1999@canon.co.uk> rjf@canon.co.uk writes:
>In article <1990Oct31.023922.13795@watdragon.waterloo.edu> cpshelley@violet.uwaterloo.ca (cameron shelley) writes:
>>
>>
>>  I'd like to inject a few comments regarding testing for machine
>>consciousness.
>>
>>  Firstly, why do we accept the belief that other humans are conscious?
>>(I use the word "belief" advisedly, since I think that knowledge of
>>another's subjectivity is problematic.)  I would argue that we use a
>>genetic analogy: I am human (which is now a genetic term), and I am
>>conscious; therefore since this other individual is human, he or she
>>is also conscious.  In other words, we believe ourselves to be conscious,
>>and we believe that the genetic connection between ourselves and other
>>humans is 'close' enough to preserve that property.
>
>I think cameron is on the right lines here, but I don't think he's
>quite got there.  For one thing, his account suggests that this is an
>intellectual phenomenon, but I don't think that can be true.  For
>another, he puts self-consciousness before belief in others'
>consciousness.  I think that we identify with, and therefore by (my)
>definition believe in the consciousness of, other people, long before
>we become self-conscious (even if we don't at that stage put it in
>quite those terms).
>

You're quite right when you say I was addressing "this" as an 
intellectual phenomenon, but I think this is reasonable since
I was addressing 'testing for consciousness' and not its
evolution or acquisition.  I agree that consciousness is deeply
connected to social interaction, but I question the attempt at
strongly ordering identification of others as prior to identification
of self.  I'm not saying it must be the other way around, just that
the ordering is more ambiguous than you seem to suggest.

Do you see your suggestions as having an impact on testing for
consciousness in machines?

[rest deleted, sorry! :>]
--
      Cameron Shelley        | "Fidelity, n.  A virtue peculiar to those 
cpshelley@violet.waterloo.edu|  who are about to be betrayed."
    Davis Centre Rm 2136     |  
 Phone (519) 885-1211 x3390  |				Ambrose Bierce

mcdermott-drew@cs.yale.edu (Drew McDermott) (11/03/90)

Quoting Robin Faichney <rjf@canon.co.uk>:

   >We experience things, and are 'programmed' to identify with other
   >humans, ie to believe them essentially identical to ourselves, so we
   >believe that they experience things.  Some things we do not identify
   >with, ie we do not believe they experience anything.  

Surely there can be no question whether humans actually experience
things.  If you ask me to doubt whether I actually experience
anything, I will experience several things, including puzzlement!

Your paragraph is a puzzle already.  The first three words grant that
"We experience things."  But you say "we ... believe that *they*
experience things."  (italics added)  Surely we=they here?

   >... The main consequence for AI is probably that, if you disregard the
   >inherited predisposition to attribute consciousness to (ie identify
   >with) other humans, such attribution is at best arbitrary and at worst
   >meaningless.  Unless there is something seriously wrong with my
   >account, there can never be a good reason to seriously attribute
   >consciousness to a machine.

There is something seriously wrong; see above.

   >As Eliza demonstrated, people will quite readily interact with a
   >machine as if there was 'a real person in there', even when they know
   >there is not.  

I am completely confident that Eliza experiences nothing, regardless
of how comforting it can be to talk to it.

   >I think that this phenomenon is interestingly similar to
   >the kind of suspension of disbelief which occurs when we are 'taken in'
   >by a good film, play, book, etc.  We know that the characters are not
   >real, but can feel, to some extent, as if they were.  I'd put quite a
   >lot of money on the proposition that this will be the main way people
   >will interact with computers ....

Interesting observation; no doubt correct.

But I still maintain that the fuss about "testing for consciousness"
is misguided.  Here's how it will work: We will figure out (via
modeling and vivisection) what's going on in people that counts as
consciousness.  We will then duplicate that something artificially and
verify that the resulting system is also conscious.  (It will have
strong opinions about the way it works that are isomorphic to ours
about ourselves).  We will also no doubt produce so many variations on
the theme that our concept of mind will change considerably by the
time we're done.  (As Chris Malcolm has emphasized in this thread.)  No
particular definition of consciousness will emerge, and the desire for
one will evaporate.  What will emerge is a good understanding of how
to manipulate different aspects of consciousness.  So, if you want a
robot with, say, qualia but no free will, you can have it.

Let me hasten to add that this scenario is not inevitable.  It *could*
turn out that a much more radical revision of our conceptual framework
results from our investigations, so that, e.g., we end up saying that
no system ever actually experiences anything.  Or it could turn out
that we never get a satisfactory theory of consciousness, and it
remains a mystery.  But if the theory of consciousness evolves as
previous scientific theories have evolved, then, I claim, we need have
no qualms about any special methodological problems with it.

   >I'd also like to humbly suggest that future generations of AI workers
   >will look back with amusement and bewilderment at such arguments as to
   >whether a machine could be conscious, much as we do at the medieval
   >arguments about the number of angels which could dance on the head of a
   >pin.

I agree, but for somewhat different reasons.

                                             -- Drew McDermott

G.Joly@cs.ucl.ac.uk (11/07/90)

``Intelligence is in the mind of the beholder''.

I have always said that; no need for scalpels here.

Gordon Joly                                           +44 71 387 7050 ext 3716
InterNet: G.Joly@cs.ucl.ac.uk       UUCP: ...!{uunet.uu.net,ukc}!ucl-cs!G.Joly
Computer Science, University College London, Gower Street, LONDON WC1E 6BT, UK