[comp.ai] "self" consciousness

mfinegan@uceng.UC.EDU (michael k finegan) (02/02/90)

roger@yuba.wrs.com (Roger Rohrbach) writes:

>N.B.: "supra-consciousness" is not a naive concept.
#flame on
BULLS__T
#flame off (almost)
>							It is possible to
>observe one's level of awareness waxing and waning, and to note the effect
>thereof on one's ability to process information, e.g., vividness of sensory
>experience,  reliability of memory.  Adopting the assumption that conscious-
>ness has degrees, the lowest of which is sleep, only simple observation is
>required to see that we do not occupy the high end of the scale on a moment-
>by-moment basis; we may at any point find ourselves closer to or further from
>sleep.  This is not philosophy;  it's an empirical study of the human mind,
>and therefore is relevant to any theory of artificial intelligence.
>Roger Rohrbach                                  sun!wrs!roger    roger@wrs.com

I am sorry, but you are arguing from a false reference. Ouspensky (et al.) talks
specifically about raising consciousness levels until out-of-body experiences
are achieved, and believes that higher levels of consciousness include states
of immortality. If you don't think that is far-fetched, that is your right.
Does this enable you to create a computer that will pass the Turing test ?
No. That is a very different 'consciousness' from the one the 'expert source'
you quote, out of context, was referring to.

While I don't want to enter into a long-winded (hollow) discussion with you,
there are certainly different levels of activity in the human mind
(or would you prefer brain - since you dropped Searle's name).
But is dreaming a lower level of consciousness than watching 'Wheel of
Fortune' ? I don't think you can answer that question, empirically or
otherwise. Moral: consciousness is in the eye of the beholder.

					mfinegan@uceng.UC.EDU

warning: I am extremely dogmatic

kp@uts.amdahl.com (Ken Presting) (02/06/90)

mfinegan@uceng.UC.EDU (michael k finegan) writes:
> But, is dreaming a lower level of consciousness than watching 'Wheel of
> Fortune' ? I don't think you can answer that question, empirically, or
> otherwise. Moral: consciousness is in the eye of the beholder.

Easy.  When you're dreaming you have a dramatically reduced ability to
distinguish true from false assertions about yourself.  While watching TV,
it is common to forget that you're hungry, bored, and uncomfortable, but
most people still remember that they have eyes, ears, and a remote control.

What I've been calling "the self-description scale" shares the useful
attribute of objectivity with the Turing Test.  Indeed, there is nothing
in the TT or in my proposal that would upset a behaviorist, though I am
no behaviorist myself.

Let me restate the self-description scale:

For any object, there is a set of sentences which refer to that object.
For any two objects, if object A can correctly affirm or deny each
sentence which refers to itself when object B can do the same (mutatis
mutandis), and object A can correctly affirm or deny some sentences which
object B cannot, then A has greater self-descriptive capacity than B.
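The scale above can be sketched in a few lines of code. This is a minimal illustration, not part of the original proposal: it simplifies away the "mutatis mutandis" clause by modeling each object directly as the set of self-referring sentences it can correctly affirm or deny, and the particular objects and sentences are made-up examples.

```python
# Sketch of the self-description scale: A is superior to B if A can
# correctly handle every sentence B can, plus at least one more.
# Objects are modeled (a simplifying assumption) as sets of sentences
# about themselves that they can correctly affirm or deny.

def superior_self_description(answerable_a, answerable_b):
    """True iff A's answerable set strictly contains B's."""
    return answerable_b <= answerable_a and answerable_a != answerable_b

rock = set()                                   # rocks have nothing to say
chimp = {"I am hungry"}                        # chimps have only a little
person = {"I am hungry", "My hair is brown",
          "I am holding the remote control"}   # people go on for hours

print(superior_self_description(person, chimp))  # True
print(superior_self_description(chimp, rock))    # True
print(superior_self_description(rock, person))   # False
```

Note that the criterion is a strict partial order: two objects whose answerable sets merely overlap are simply incomparable on this scale.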

It's not formally a part of the criterion, but I should say that ordinary
physical description of one's body and its relation to the surroundings
is the main thing I have in mind.  I should also note that this is *not*
the only criterion relevant to judgements of intelligence.

Whether or not this scale has any relation to the various notions of
consciousness proposed by psychologists and philosophers is a separate
(but not unimportant) question.  I do think this scale agrees roughly
with everyday talk about consciousness - when you're asleep (ie
unconscious) you don't have much to say about yourself (or anything else).
Rocks have nothing to say, chimpanzees have only a little.  People will
go on for hours ...

mfinegan@uceng.UC.EDU (michael k finegan) (02/06/90)

roger@yuba.wrs.com (Roger Rohrbach) writes:

:-)    You'd have to locate the "out-of-body" stuff for me- must have missed it.
:-) There is a discussion of "immortality"- as a relative concept.  E.g., there
:-) is light reaching the earth from close to the beginning of time.  This light
:-) has been around awhile :-)
I have no plans to collect references in material I care not to reread.
If you read more of his material (or his mentor G.I. Gurdjieff's), I
think you will agree with me that they are not discussing 'relative concepts'.

:-) The physicist Geoffrey Chew hazards the opinion
:-) that light and consciousness are aspects of the same phenomenon.
i.e. electromagnetic energy ? This is new ? A physiological perspective
certainly assumes that consciousness is electromagnetic ...

:-)     I'm looking at a conference announcement right now with invited
:-) presentations on "A Quantum Theory of Consciousness" and "A Trans-Temporal
:-) Approach to Mind-Brain Interaction". Yes, it all sounds far-fetched. What
:-) of it?
I don't care if it sounds far-fetched. The title of a talk doesn't guarantee
information content.

:-) "Watching 'Wheel of Fortune'" is not a "level of consciousness".
Certainly, but it involves a certain level of consciousness.
:-) It is an activity involving sensory input, intellectual processing (not
:-) much!), and emotional stimulation,  all of which can occur with varying
:-) degrees of consciousness (my original point: consciousness as distinct
:-) from thought, feeling, etc.).
One cannot assume a priori that thought and consciousness are separate.

:-)  If you were implying that watching 'Wheel of Fortune'
:-) *evinces* a low level of consciousness,  I might agree.
:-) Roger Rohrbach                               sun!wrs!roger    roger@wrs.com
I was!

Next ...

kp@uts.amdahl.com (Ken Presting) writes:

%-) What I've been calling "the self-description scale" shares the useful
%-) attribute of objectivity with the Turing Test.  Indeed, there is nothing
%-) in the TT or in my proposal that would upset a behaviorist, though I am
%-) no behaviorist myself.
Okay - but I made no comments about your 'self-description scale'.

%-) Let me restate the self-description scale:

%-) For any object, there is a set of sentences which refer to that object.
%-) For any two objects, if object A can correctly affirm or deny each
%-) sentence which refers to itself when object B can do the same (mutatis
%-) mutandis), and object A can correctly affirm or deny some sentences which
%-) object B cannot, then A has greater self-descriptive capacity than B.
i.e. Double-talk is a sign of superior reasoning skills ? Maybe. :-)
Seriously, this sounds fine, but who judges 'correctness' ? A very intelligent
source (machine or otherwise) might be correct, with the judge incapable of
realizing it. Examples: a prodigious student and their teacher, Galileo and
the church, etc.

%-) It's not formally a part of the criterion, but I should say that ordinary
%-) physical description of one's body and its relation to the surroundings
%-) is the main thing I have in mind.  I should also note that this is *not*
%-) the only criterion relevant to judgements of intelligence.
%-)
%-) Whether or not this scale has any relation to the various notions of
%-) consciousness proposed by psychologists and philosophers is a separate
%-) (but not unimportant) question.
Why do you guys always ignore the biological notions ?

%-) I do think this scale agrees roughly with everyday talk about
%-) consciousness - when you're asleep (ie unconscious) you don't have much
%-) to say about yourself (or anything else). Rocks have nothing to say,
%-) chimpanzees have only a little.
The concept of inner dialogue has been around for a while (cf. psycho-analysis).
This might be a necessary prerequisite for a computer to pass the Turing test
(namely, create an inner model of the environment, try out its planned responses
within this model, and see what happens - then actually present the critiqued
response to the environment). This is in line with a.i. developments in knowledge
representation.

%-) People will go on for hours ...
I now see why this topic has been discussed in comp.ai for so long (a year ?).

As Roger alluded to, maybe I should wake up and turn off the tv.

						mfinegan@uceng.UC.EDU

kp@uts.amdahl.com (Ken Presting) (02/07/90)

(I took out the smileys, but it's nothing personal)

In article <3553@uceng.UC.EDU> mfinegan@uceng.UC.EDU (michael k finegan) writes:
>>kp@amdahl.uts.amdahl.com (Ken Presting) writes:
>>Let me restate the self-description scale:
>>
>>For any object, there is a set of sentences which refer to that object.
>>For any two objects, if object A can correctly affirm or deny each
>>sentence which refers to itself when object B can do the same (mutatis
>>mutandis), and object A can correctly affirm or deny some sentences which
>>object B cannot, then A has greater self-descriptive capacity than B.
>
>Seriously, this sounds fine, but who judges 'correctness' ? A very intelligent
>source (machine or otherwise) might be correct, with the judge incapable of
>realizing it. Examples: a prodigious student and their teacher, Galileo and
>the church, etc.
>
>>It's not formally a part of the criterion, but I should say that ordinary
>>physical description of one's body and its relation to the surroundings
>>is the main thing I have in mind.  I should also note that this is *not*
>>the only criterion relevant to judgements of intelligence.

The last paragraph is the answer to the question.  If the self-descriptive
statements are about mundane facts (like "My hair is brown" or "My
cabinet is blue") then there should be little controversy over the right
answers.  Most authors seem to favor asking deep subjective questions in
the Turing Test, but that seems unproductive to me.  ELIZA gave a good
impression of depth, at least to some people.

A good example of a computer describing itself accurately is the Unix(tm)
"ps" (process status) command. At least the machine knows what it's doing!
Process accounting statistics are even better; they involve memory of past
actions.  I'm using this example to show that self-description is only
part of the story for a criterion of intelligence.
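In the same spirit as the "ps" example, a program can correctly affirm mundane, checkable facts about itself. The sketch below is a toy illustration only; the particular facts chosen are arbitrary assumptions, and of course affirming them demonstrates self-description, not intelligence.

```python
# A toy analogue of "ps": a process affirming mundane facts about itself.
# Which facts to check is an illustrative choice, nothing more.
import os
import sys

self_description = {
    "my process id is a positive number": os.getpid() > 0,
    "my working directory exists": os.path.isdir(os.getcwd()),
    "I was invoked with at least a program name": len(sys.argv) >= 1,
}

for sentence, affirmed in self_description.items():
    print("%-45s %s" % (sentence, "affirmed" if affirmed else "denied"))
```

Like process accounting statistics, a fancier version could log its own past actions and answer questions about them, which gets closer to the memory aspect mentioned above.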

The question of who is a reliable judge for Turing-type tests has been
around awhile.  I've always liked the suggestion that programmers would
be better able to detect programs than psychologists.  Realistically,
I'll bet that nobody will trust anybody's judgement except their own, and
the wrangling will get nastier as programs get smarter.

>> Whether or not this scale has any relation to the various notions of
>> consciousness proposed by psychologists and philosophers is a separate
>> (but not unimportant) question.
>Why do you guys always ignore the biological notions ?

I'm not sure what you mean.  Most biologists have better things to do
than debate the nature of consciousness, like bar-coding bees, or stealing
food from _Sphex_.

>> I do think this scale agrees roughly with everyday talk about
>> consciousness - when you're asleep (ie unconscious) you don't have much
>> to say about yourself (or anything else). Rocks have nothing to say,
>> chimpanzees have only a little.
>The concept of inner dialogue has been around for a while (cf. psycho-analysis).
>This might be a necessary prerequisite for a computer to pass the Turing test
>(namely, create an inner model of the environment, try out its planned responses
>within this model, and see what happens - then actually present the critiqued
>response to the environment). This is in line with a.i. developments in knowledge
>representation.

This sounds like a useful technique to me.  It resembles gedanken-
experiments, design reviews, and everyday planning of tasks.  An inner
dialog in itself is not what I mean by self-description, though.  A system
could use a fancy inner-dialog algorithm and still be mostly oblivious.
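The inner-dialog technique under discussion - generate candidate responses, try each against an internal model of the environment, and only present the one that survives the critique - can be sketched roughly as follows. The scoring function here is a pure stand-in assumption; it is not a claim about how any actual system models its environment.

```python
# Sketch of the inner-dialog technique: critique candidate responses
# against an internal model before presenting one to the environment.

def inner_model_score(response):
    # Stand-in for simulating the environment's reaction; here we just
    # pretend (arbitrarily) that wordier answers fare better.
    return len(response.split())

def respond(candidates):
    # Try each planned response inside the internal model ...
    critiqued = [(inner_model_score(r), r) for r in candidates]
    critiqued.sort(reverse=True)
    # ... then actually present only the best-critiqued response.
    return critiqued[0][1]

print(respond(["No.", "Yes, and here is why.", "Maybe."]))
# -> Yes, and here is why.
```

As the post notes, such a loop is a planning mechanism, not self-description: the system rehearses responses about the environment without affirming or denying anything about itself.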