[net.ai] The Halting Problem

nazgul@apollo.UUCP (Kee Hinckley) (10/22/70)

> 
> Whereupon a strange, unreadable look came over Dean's face, and he called to
> someone I couldn't see, "Okay, jig's up! Cut! He figured it out." (Hands
> motioning, now) "Get, those props out of here, tear down those building
> fronts, ... "
> 
> Scared the pants off me.
> 
> Michael Condict   ...!cmcl2!csd1!condict
> New York U.

    Interesting example.  Does anyone out there recall the SF story that carried
essentially the same theme?  The hero felt that the whole world was a prop meant
to fool him, and one time when he ran back through the rain to close a window in
the house he discovered that they had forgotten to make it rain behind that window;
instead the sun was shining.
    When it comes down to it, how can you tell?  If the world's a facade, it's done
very well (and by someone with a morbid imagination).  It seems unlikely that
anyone will make a mistake.  So you might as well assume that it's real, since you
are unable to prove otherwise.

                                            -kee

pd@eisx.UUCP (P. Devanbu) (09/26/83)

There are two AI problems that I know about: the computing power
problem (combinatorial explosions, etc) and the "nature of
thought" problem (knowledge representation, reasoning process etc).
This article concerns the latter.

AI's method (call it "m") seems to be this: model some human
information-processing mechanism, say legal reasoning, and,
once it is understood clearly and a calculus exists for it, program
it. This idea can be transferred to various problem domains, and voila,
we have programs for "thinking" about various little cubbyholes of
knowledge.

The next thing to tackle is: how do we model AI's method "m" that was
used to create all these cubbyhole programs?  How did whoever thought
of predicate calculus, semantic networks, and the blocks-world
theories ad nauseam come up with them? Let's understand that ("m"),
formalize it, and program it. This process (let's call it "m'") gives
us a program that creates cubbyhole programs. Yeah, it runs on a zillion
acres of CMOS, but who cares.

Since a human can do more than just "m" or "m'", we try to make
"m''", "m'''", and so on. When does this stop? Evidently it cannot.
The problem is that the thought process that yields a model or simulation
of a thought process is necessarily distinct from the latter.  (This
is true of all scientific investigation of any kind of phenomenon,
not just thought processes.)  This distinction is one of the primary
paradigms of Western science.
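The tower of meta-methods described above can be caricatured in a few lines of Python. This is purely a hypothetical sketch by way of illustration (the function names and toy "calculus" are invented, not anything from the thread): each level is a program that builds the programs of the level below it, and naming the levels shows why the tower never closes.

```python
# Toy sketch of the m, m', m'', ... regress.  All names are
# illustrative assumptions, not an actual AI method.

def make_solver(rules):
    """Level m: turn a formalized calculus for one cubbyhole
    domain (here, a lookup table) into a program that "thinks"
    about that domain."""
    return lambda problem: rules.get(problem, "don't know")

def make_solver_maker():
    """Level m': a program that creates cubbyhole programs."""
    return make_solver

def level_name(n):
    """Name the n-th level of the regress: m, m', m'', ..."""
    return "m" + "'" * n

# Each level only models the one below it; whoever built level n
# can still do something level n doesn't capture, so there is
# always a level n+1 left to build.
blocks_world = make_solver({"on(a, b)?": "yes"})
answer = blocks_world("on(a, b)?")        # "yes"
unknown = blocks_world("on(c, d)?")       # "don't know"
```

The point of the sketch is only structural: `make_solver_maker` stands to `make_solver` exactly as `make_solver` stands to a single solver, and nothing in the construction supplies a top level.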

Put rather naively: thinking "about" the mind is also done "with" the mind.
This identity of subject and object that ensues in the scientific
(dualistic) pursuit of more intelligent machine behavior -
do you folks see it too? Since scientific thought
relies on the clear separation of a theory/model and reality, is
a mathematical/scientific/engineering discipline inadequate for said
pursuit? Is there a system of thought that is self-describing? Is
there a non-dualistic calculus?

What we are talking about here is the ability to separate oneself
from the object/concept/process under study, understand it, model
it, program it... "it" being anything, including that ability itself.
The ability to recognize that a model is a representation within
one's mind of a reality outside of one's mind. Trying to model
this ability leads one to infinite regress.
What is this ability? Let's call it consciousness.  What we seem
to be coming up with here is the INABILITY of math/science etc. to
deal with this phenomenon, to codify it, and to boldly program
a computer that has consciousness. Does this mean that the statement:

"CONSCIOUSNESS CAN, MUST, AND WILL ONLY COME TO EXISTENCE OF ITS
OWN ACCORD"

is true? "Consciousness" was used for lack of a better word. Replace
it by X, and you still have a significant statement. Consciousness already
has come to existence, and, according to the line of reasoning above,
cannot be brought into existence by the methods available.

If so, how can we "help" machines to achieve consciousness, as benevolent
if rather impotent observers?
Should we just mechanistically build larger and larger neural-network
simulators until one says "ouch" when we shut a portion of it off,
and, better, tries to deliberately modify(sic) its environment so that
that doesn't happen again? And maybe even can split infinitives?

As a parting shot, it's clear that such neural networks must have
tremendous power to come within even a fraction of our level of
abstraction ability.

Baffled, but still thinking...  References, suggestions, discussions,
pointers avidly sought.

Prem Devanbu

ATTIS Labs , South Plainfield.

mat@hou5d.UUCP (M Terribile) (09/28/83)

I may be naive, but it seems to me that any attempt to produce a system that
will exhibit consciousness-like behaviour will require emotions and the
underlying base that they need and supply.  Reasoning did not evolve
independently of emotions; human reason does not, in my opinion, exist
independently of them.

Any comments?  I don't recall seeing this topic discussed.  Has it been?  If
not, is it about time to kick it around?
						Mark Terribile
						hou5d!mat

samir@drufl.UUCP (09/28/83)

I agree with Mark. An interesting book to read regarding consciousness is
"The Origin of Consciousness in the Breakdown of the Bicameral Mind" by
Julian Jaynes. Although I may not agree fully with his thesis, it did
get me thinking and questioning the usual ideas regarding
consciousness.

An analogy regarding consciousness: "emotions are like the roots of a
plant, while consciousness is the fruit".

				Samir Shah
				AT&T Information Systems, Denver.
				drufl!samir

portegys@ihuxv.UUCP (10/02/83)

I think that the answer to the halting problem in intelligent
entities is that there must exist a mechanism for telling the entity
whether its efforts are getting it anywhere, i.e. something that
senses its internal state and says whether things are getting better,
worse, or whatever.  Normally for humans, if a "loop" were to
begin, it would soon be broken by concerns like "I'm hungry
now, let's eat".  No amount of cogitation makes that feeling
go away.

I would rather call this mechanism need than emotion, since I
think that some emotions are learned.

So then, needs serve two purposes for intelligence: (1) they supply
a direction for the learning which is a necessary part of
intelligence, and (2) they keep the intelligence from getting
bogged down in fruitless cogitation.
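That loop-breaking role of needs can be sketched directly. The following is a hypothetical toy (the `cogitate`, `hungry`, and related names are invented for illustration, not a claim about how minds work): reasoning runs in a loop, but need signals are checked at every step and can interrupt cogitation that is going nowhere.

```python
def cogitate(step, is_done, needs, max_steps=1000):
    """Run a reasoning loop, but let internal 'needs' interrupt it.
    `step` advances the mental state, `is_done` checks for success,
    and each function in `needs` is a signal that grows with time."""
    state = 0
    for tick in range(max_steps):
        if is_done(state):
            return ("done", state)
        if any(need(tick) for need in needs):
            return ("interrupted", state)   # "I'm hungry, let's eat"
        state = step(state)
    return ("gave up", state)

# Hunger builds with time, regardless of what cogitation is doing.
hungry = lambda tick: tick >= 50

# A deliberately fruitless loop: the state just cycles forever,
# so only the need can break it.
spin = cogitate(step=lambda s: (s + 1) % 10,
                is_done=lambda s: False,
                needs=[hungry])             # ("interrupted", 0)

# Productive cogitation finishes before hunger kicks in.
eat = cogitate(step=lambda s: s + 1,
               is_done=lambda s: s >= 3,
               needs=[hungry])              # ("done", 3)
```

The design point matches the post: the need is checked outside the reasoning step itself, so no amount of cogitation can make it go away.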

             Tom Portegys
             Bell Labs, IH
             ihuxv!portegys

milla@cca.UUCP (Michael J. Massimilla) (10/06/83)

     Your "infinite regress"  dilemma  is  resolved  when  we  view  the
situation  practically instead of theoretically.  Practically, a machine
that is complex ENOUGH will appear to be conscious.  It does not have to
be  able  to  manage  infinite  levels  of  thought.  In fact, the whole
question is a matter of degree.  Humans are "more  conscious"  than  the
lower  primates  because  humans  have  a  greater  capacity for complex
thought.  But even humans are limited.  Certainly it is not  beyond  the
realm of possibility for a machine to be sufficiently complex that it is
"more conscious" than man.

     The key point is that consciousness is NOT the  ability  to  manage
infinite  levels of model/reality.  Rather, it is an ILLUSION associated
with being able to perform highly complex  thought.   That  illusion  is
enhanced  by  randomness, lack of full information, and the complexities
of the external universe.   Self-awareness  is  another  aspect  of  the
illusion.

     As for the theoretical characteristic "X" which you describe -- the
absolute ability to understand infinite levels of thought -- it  has  no
practical existence or significance whatsoever.

					Michael Massimilla
					cca!milla

condict@csd1.UUCP (10/08/83)

Self-awareness is an illusion?  I've heard this curious statement before and
never understood it.  YOUR self-awareness may be an illusion that is fooling
me, and you may think that MY self-awareness is an illusion, but one thing that
you cannot deny (the very, only thing that you know for sure) is that you,
yourself, in there looking out at the world through your eyeballs, are aware
of yourself doing that.
At least you cannot deny it if it is true.  The point is, I know that I have
self-awareness -- by the very act of experiencing it.  You cannot take this
away from me by telling me that my experience is an illusion.  That is a
patently ludicrous statement, sillier even than when your mother (no offense --
okay, my mother, then) used to tell you that the pain was all in your head.
Of course it is!  That is exactly what the problem is!

Let me try to say this another way, since I have never been able to get this
across to someone who doesn't already believe it.  There are some statements
that are true by definition, for instance, the statement, "I pronounce you
man and wife".  The pronouncement happens by the very saying of it and cannot
be denied by anyone who has heard it, although the legitimacy of the marriage
can be questioned, of course.  The self-awareness thing is completely
internal, so you may sensibly question the statement "I have self-awareness"
when it comes from someone else.  What you cannot rationally say is "Gee, I
wonder if I really am aware of being in this body and looking down at my hands
with these two eyes and making my fingers wiggle at will?"  To ask this
question seriously of yourself is an indication that you need immediate
psychiatric help.  Go directly to Bellevue and commit yourself.  It is as
lunatic a question
as asking yourself "Gee, am I really feeling this pain or is it only an
illusion that I hurt so bad that I would happily throw myself in the trash
masher to extinguish it?"

For those of you who misunderstand what I mean by self-awareness, here is the
best I can do at an explanation.  There is an obvious sense in which my body is
not me.  You can cut off any piece of it that leaves the rest functioning
(alive and able to think) and the piece that is cut off will not take part in
any of my experiences, while the rest of the body will still contain (be the
center for?) my self-awareness.  You may think that this is just because
my brain is in the big piece.  No, there is something more to it than that.
With a little imagination you can picture an android being constructed
someday that has an AI brain that can be programmed with all the memories you
have now and all the same mental faculties.  Now picture yourself observing the
android and noting that it is an exact copy of you.  You can then imagine
actually BEING that
android, seeing what it sees, feeling what it feels.  What is the difference
between observing the android and being the android?  It is just this -- in
the latter case your self-awareness is centered in the android, while in the
former it is not.  That is what self-awareness, also called a soul, is.  It is
the one true meaning of the word "I", which does not refer to any particular
collection of atoms, but rather to the "you" that is occupying the body.
This is not a religious issue either, so back off, all you atheist and Christian
fanatics.  I'm just calling it a soul because it is the real "me", and I
can imagine it residing in various different bodies and machines, although
I would, of course, prefer some to others.

This, then, is the reason I would never step into one of those teleporters that
functions by ripping apart your atoms, then reconstructing an exact copy at a
distant site.  My self-awareness, while it doesn't need a biological body to
exist, needs something!  What guarantee do I have that "I", the "me" that
sees and hears the door of the transporter chamber clang shut, will actually
be able to find the new copy of my body when it is reconstructed three
million parsecs away.  Some of you are laughing at my lack of modernism here,
but I can have the last laugh if you're stupid enough to get into the
teleporter with me at the controls.  Suppose it functions like this (from
a real sci-fi story that I read): It scans your body, transmits the copying
information, then when it is certain that the copy got through it zaps the
old copy, to avoid the inconvenience of there being two of you (a real mess
at tax time!).  Now this doesn't bother you a bit since it all happens in
micro-seconds and your self-awareness, being an illusion, is not to be con-
sulted in the matter.  But suppose I put your beliefs to the test by setting
the controls so that the copy is made but the original is not destroyed.
You get out of the teleporter at both ends, with the original you thinking
that something went wrong.  I greet you with:

"Hi there!  Don't worry, you got transported okay.  Here, you can talk to
your copy on the telephone to make sure.  The reason that I didn't destroy
this copy of you is because I thought you would enjoy doing it yourself.
Not many people get to commit suicide and still be around to talk about it
at cocktail parties, eh?  Now, would you like the hara-kiri knife, the laser
death ray, or the nice little red pills?"

You, of course, would see no problem whatsoever with doing yourself in on the
spot, and would thank me for adding a little excitement to your otherwise
mundane trip.  Right?  What, you have a problem with this scenario?  Oh, it
doesn't bother you if only one copy of you exists at a time, but if there
are ever two, by some error, your spouse is stuck with both of you?  What does
the timing have to do with your belief in self-awareness?  Relativity theory
says that the order of the two events is indeterminate anyway.

People who won't admit the reality of their own self-awareness have always
bothered me.  I'm not sure I want to go out for a beer with, much less date
or marry someone who doesn't at least claim to have self-awareness (even if
they're only faking).  I get this image of me riding in a car with this
non-self-aware person, when suddenly, as we reach a curve with a huge semi
coming in the other direction, they fail to move the wheel to stay in the
right lane, not seeing any particular reason to attempt to extend their
own unimportant existence.  After all, if their awareness is just an illusion,
the implication is that they are really just a biological automaton and
it don't make no never mind what happens to it (or the one in the next seat,
for that matter, emitting the strange sounds and clutching the dashboard).

The Big Unanswered Question then (which belongs in net.philosophy, where I
will expect to see the answer) is this:

		"Why do I have self-awareness?"

By this I do not mean, why does my body emit sounds that your body interprets
to be statements that my body is making about itself.  I mean why am *I*
here, and not just my body and brain?  You can't tell me that I'm not, because
I have a better vantage point than you do, being me and not you.  I am the
only one qualified to rule on the issue, and I'll thank you to keep your
opinion to yourself.  This doesn't alter the fact that I find my existence (that
is, the existence of my awareness, not my physical support system), to be
rather arbitrary.  I feel that my body/brain combination could get along just
fine without it, and would not waste so much time reading and writing windy
news articles.

Enough of this, already, but I want to close by describing what happened when
I had this conversation with two good friends.  They were refusing to agree
to any of it, and I was starting to get a little suspicious.  Only half in
jest, I tried explaining things this way.  I said:

"Look, I know I'm in here, I can see myself seeing and hear myself
hearing, but I'm willing to admit that maybe you two aren't really self-aware.
Maybe, in fact, you're robots, everybody is robots except me.  There really
is no Cornell University, or U.S.A. for that matter.  It's all an elaborate
production by some insidious showman who constructs fake buildings and offices
wherever I go and rips them down behind me when I leave."

Whereupon a strange, unreadable look came over Dean's face, and he called to
someone I couldn't see, "Okay, jig's up! Cut! He figured it out." (Hands
motioning, now) "Get, those props out of here, tear down those building
fronts, ... "

Scared the pants off me.

Michael Condict   ...!cmcl2!csd1!condict
New York U.

shebs@utah-cs.UUCP (Stanley Shebs) (10/11/83)

I share your notion (that human ability is limited, and that machines
might actually go beyond man in "consciousness"), but not your confidence.
How do you intend to prove your ideas?  You can't just wait for a fantastic
AI program to come along - you'll end up right back in the Turing Test
muddle.  What *is* consciousness?  How can it be characterized abstractly?
Think in terms of universal psychology - given a being X, is there an
effective procedure (used in the technical sense) to determine whether
that being is conscious?  If so, what is that procedure?
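The request for an effective procedure runs straight into the thread's title. By the standard diagonal argument (the one behind the halting problem, and behind Rice's theorem for semantic properties of programs generally), any claimed total decider for such a property can be defeated by a program that consults the decider about itself and then does the opposite. A hypothetical Python rendering of that construction (the names are illustrative):

```python
def diagonal(decider):
    """Given a claimed total decider decider(f) -> bool for some
    semantic property the program can act on ("f halts", say),
    build the program that defeats it."""
    def g():
        if decider(g):
            while True:       # decider said g halts: loop forever
                pass
        return "halted"       # decider said g loops: halt at once
    return g

# If the decider answers False ("g never halts"), g promptly halts,
# so the decider was wrong.  If it answers True, g loops forever,
# and the decider was wrong again.
g = diagonal(lambda f: False)
verdict = g()                 # "halted"
```

This is only the halting-problem skeleton, not a proof about consciousness; whether "is conscious" is the kind of property a program can consult and invert is exactly the open question being asked.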

					AI is applied philosophy,
					stan the l.h.
					utah-cs!shebs

ps Re rational or universal psychology: a professor here observed that
it might end up with the status of category theory - mildly interesting
and all true, but basically worthless in practice... Any comments?

milla@cca.UUCP (Michael J. Massimilla) (10/12/83)

Of course self-awareness is real.   The  point  is  that  self-awareness
comes  about  BECAUSE  of  the  illusion  of consciousness.  If you were
capable of only very primitive thought, you would  be  less  self-aware.
The  greater  your  capacity  for complex thought, the more you perceive
that your actions are the result of an active,  thinking  entity.   Man,
because  of  his  capacity  to form a model of the world in his mind, is
able to form a model of himself.  This all makes  sense  from  a  purely
physical  viewpoint;  there  is  no  need  for  a supernatural "soul" to
complement the brain.  Animals appear to have some  self-awareness;  the
quantity  depends  on  their intelligence.  Conceivably, a very advanced
computer system could have a high degree  of  self-awareness.   As  with
consciousness,  it is lack of information -- how the brain works, random
factors, etc. -- which makes self-awareness  seem  to  be  a  very  special
quality.  In fact, it is a very simple, unremarkable characteristic.

						M. Massimilla


nather@utastro.UUCP (Ed Nather) (10/31/83)

A common characteristic of humans that is not shared by the machines
we build and the programs we write is called "boredom."  All of us get
bored running around the same loop again and again, especially if nothing
is seen to change in the process.  We get bored and quit.

         *--->    WARNING!!!   <---*

If we teach our programs to get bored, we will have solved the
infinite-looping problem, but we will lose our electronic slaves who now
work, uncomplainingly, on the same tedious jobs day in and day out.  I'm
not sure it's worth the price.
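A minimal version of such a boredom mechanism might look like the following (a hypothetical sketch; the names and the patience threshold are invented for illustration): remember how often each state has been seen, and quit once some state repeats past a patience threshold, instead of spinning in the loop forever.

```python
from collections import Counter

def run_with_boredom(step, state, patience=3, max_steps=1000):
    """Iterate `step`, but quit ("get bored") once any state has
    been revisited more than `patience` times with nothing new
    happening in between."""
    seen = Counter()
    for _ in range(max_steps):
        seen[state] += 1
        if seen[state] > patience:
            return ("bored", state)
        state = step(state)
    return ("finished", state)

# A two-state loop (0, 1, 0, 1, ...): the boredom check fires
# after a few revisits instead of running all max_steps.
loop = run_with_boredom(lambda s: 1 - s, 0)      # ("bored", 0)

# A task that keeps making progress never triggers boredom.
work = run_with_boredom(lambda s: s + 1, 0, max_steps=5)
```

Note the trade-off Nather warns about is visible here: the same check that rescues the looping run would also fire on a legitimately repetitive job, which is exactly why one might build one machine with the check and another without it.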

                                    Ed Nather
                             ihnp4!{kpno, ut-sally}!utastro!nather

mat@hou5d.UUCP (11/01/83)

	A common characteristic of humans that is not shared by the machines
	we build and the programs we write is called "boredom."  All of us get
	bored running around the same loop again and again, especially if
	nothing is seen to change in the process.  We get bored and quit.

		 *--->    WARNING!!!   <---*

	If we teach our programs to get bored, we will have solved the
	infinite-looping problem, but we will lose our electronic slaves who now
	work, uncomplainingly, on the same tedious jobs day in and day out.  I'm
	not sure it's worth the price.

Hmm.  I don't usually try to play in this league, but it seems to me that there
is a place for everything and every talent.  Build one machine that gets bored
(in a controlled way, please) to work on Fermat's Last Theorem.  Build another
that doesn't to check tolerances on camshafts or weld hulls.  This isn't like
destroying one's virginity, you know.

						Mark Terribile
						Duke Of deNet