[comp.ai] Simulating thinking is NOT like

ywlee@aisunk.cs.uiuc.edu (03/01/90)

/* Written  1:34 am  Feb 25, 1990 by news@caen.engin.umich.edu in aisunk:comp.ai */
>To this end, I would like to propose a tentative enumeration of 
>behaviors that are "more intelligent".  
>...
>            + Recognize impossible goals. 



I like most of your proposal, but I have a couple of points.
It is not possible, in general, for an intelligence to recognize
whether a goal is impossible or not; after all, that is the same as
the halting problem. I believe it is enough for an "intelligence"
to recognize goals that are difficult to achieve within some finite
amount of resources.
So I suggest modifying this to:

	    Recognize goals that are difficult to achieve.
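A minimal sketch of this resource-bounded test (all names here are my own illustration, not from the posts): instead of trying to decide impossibility, which is undecidable in general, the searcher simply abandons the goal after a fixed budget of node expansions.

```python
# Resource-bounded goal pursuit: give up after a budget rather than
# attempting to decide impossibility (illustrative sketch).
from collections import deque

def pursue(goal_test, successors, start, budget=1000):
    """Breadth-first search that gives up once `budget` expansions are spent."""
    frontier, seen, spent = deque([start]), {start}, 0
    while frontier:
        if spent >= budget:
            return "too difficult"  # not "impossible" -- just not worth more resources
        state = frontier.popleft()
        spent += 1
        if goal_test(state):
            return state
        for nxt in successors(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return "unreachable"  # search space exhausted: impossible within this space

# Reaching 7 from 0 by +1 steps is achievable; reaching -1 never is,
# so only the budget stops that search.
print(pursue(lambda s: s == 7, lambda s: [s + 1], 0))       # -> 7
print(pursue(lambda s: s == -1, lambda s: [s + 1], 0, 50))  # -> too difficult
```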



>        - Occasionally abandon formal reasoning methods to 
>	  simply explore patterns in the information at its
>	  disposal.  (Dreams? Creativity?)
>

I don't think this part is necessary. Do we get creativity
by abandoning formal reasoning methods? Well, I doubt it. 

ywlee.

sticklen@cpswh.cps.msu.edu (Jon Sticklen) (03/01/90)

on ywlee@aisunk.cs.uiuc.edu's suggestion to "recognize hard goals"...


actually it would seem that "impossible" is the right word.
ie, this is not the same as the halting problem. eg, suppose
i have a goal of living to be 10,000 years old. i should recognize
this as impossible and abandon searching for ways to achieve the goal.
(note i am not saying that such goals may never be achieved, but in
a commonsense notion, such a goal is impossible.)


        ---jon---
-------------------------------------------------------------
	Jon Sticklen
	Artificial Intelligence/Knowledge Based Systems Group
	Computer Science Department
	Michigan State University
	East Lansing, MI  48824-1027
	517-353-3711
	FAX: 517-336-1061
-------------------------------------------------------------

kp@uts.amdahl.com (Ken Presting) (03/01/90)

In article <4100006@aisunk> ywlee@aisunk.cs.uiuc.edu writes:
>
>/* Written  1:34 am  Feb 25, 1990 by news@caen.engin.umich.edu in aisunk:comp.ai */
>>To this end, I would like to propose a tentative enumeration of 
>>behaviors that are "more intelligent".  
>>...
>>        - Occasionally abandon formal reasoning methods to 
>>	  simply explore patterns in the information at its
>>	  disposal.  (Dreams? Creativity?)
>>
>
>I don't think this part is necessary. Do we get creativity
>by abandoning formal reasoning methods? Well, I doubt it. 
>
>ywlee.

I always jump to a conclusion, then argue my way back.  Works for me :-)

More seriously, I think a better word for the activity in question
is "play", rather than "dreaming" etc.  Come to think of it, I'm sure
that I would deny the intelligence of any organism that was incapable
of playing.  Children seem to gain quite a bit from this, starting with
babies' babbling.  It may be like "generate and test".
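The "generate and test" analogy can be made concrete with a toy loop (entirely my own illustration, not from the thread): generate random utterances, and keep only those the test accepts.

```python
# A toy "generate and test" loop in the spirit of babbling
# (illustrative names): random candidates are generated, and
# only those passing the test survive.
import random

def babble(test, alphabet="abcdefgh", rounds=2000, seed=0):
    rng = random.Random(seed)
    guesses = ("".join(rng.choice(alphabet) for _ in range(3))
               for _ in range(rounds))
    return [g for g in guesses if test(g)]

# Suppose the "environment" rewards any utterance containing "ba":
# only such utterances are retained.
words = babble(lambda w: "ba" in w)
```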

I wonder why "peekaboo" is such a hit with infants?  That doesn't seem
to involve any learning.  Maybe object permanence or something.

Don't mind me, I'm just playing with ideas ...

kp@uts.amdahl.com (Ken Presting) (03/02/90)

In article <6671@cps3xx.UUCP> sticklen@cpswh.cps.msu.edu (Jon Sticklen) writes:
>
>on ywlee@aisunk.cs.uiuc.edu suggestion to "recognize hard goals"...
>
>
>actually it would seem that "impossible" is the right word.
>ie, this is not the same as the halting problem. eg, suppose
>i have a goal of living to be 10,000 years old. i should recognize
>this as impossible and abandon searching for ways to achieve the goal.
>(note i am not saying that such goals may never be achieved, but in
>a commonsense notion, such a goal is impossible.)

Hans Moravec's open letter to Penrose had an interesting suggestion in
this regard.  One of _Sphex_'s qualities that leads us to deny that it
is conscious is its apparent ignorance that its behavior gets stuck in
infinite loops.  Moravec suggested that the human capacity to break out
of such loops is part of our consciousness.

Loop detection, deadlock detection, and a number of other recursively
unsolvable problems all have practical implications for ordinary data
processing systems.  I think it's coherent to require of a system that
it include some mechanism to limit the expenditure of resources on
failing strategies.  No bank would buy a transaction processor that
didn't at least use timeouts to break deadlocks.  Boredom seems to be
adequate protection for most humans :-).
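The timeout mechanism described above can be sketched in a few lines (the names are mine, not from the post): run a strategy under a wall-clock deadline, the way a transaction processor uses timeouts to break deadlocks.

```python
# A "failing strategy backout" via timeout (illustrative sketch):
# repeatedly apply `step` until `done` succeeds or the deadline passes.
import time

def run_with_timeout(step, done, timeout_s):
    """Returns ("ok", state) on success, ("timed out", state) otherwise."""
    deadline = time.monotonic() + timeout_s
    state = None
    while not done(state):
        if time.monotonic() >= deadline:
            return ("timed out", state)  # boredom, mechanized
        state = step(state)
    return ("ok", state)

# A strategy that converges quickly succeeds...
status, _ = run_with_timeout(lambda s: (s or 0) + 1, lambda s: s == 5, 1.0)
# ...while one that can never succeed is abandoned instead of looping forever.
status2, _ = run_with_timeout(lambda s: s, lambda s: False, 0.05)
```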

The general form of a "failing strategy backout" is "Is it rational for
me to continue doing what I've been doing?"  We all ask ourselves this.
When we answer well, we say we've explained or understood our actions.
When we blow it, others may say we have rationalized our actions, but
not explained them.

The importance of this activity cannot be overestimated, IMO.  I have
argued in the past that the capacity for self-description and argument
about self-description is necessary for intelligence, and I would say
that the current suggestion is consistent with mine.

mark@unix386.Convergent.COM (Mark Nudelman) (03/02/90)

In article <6671@cps3xx.UUCP>, sticklen@cpswh.cps.msu.edu (Jon Sticklen) writes:
> 
> on ywlee@aisunk.cs.uiuc.edu suggestion to "recognize hard goals"...
> 
> actually it would seem that "impossible" is the right word.
> ie, this is not the same as the halting problem. eg, suppose
> i have a goal of living to be 10,000 years old. i should recognize
> this as impossible and abandon searching for ways to achieve the goal.
> (note i am not saying that such goals may never be achieved, but in
> a commonsense notion, such a goal is impossible.)

What does this mean?  If a goal may someday be achieved, it is
not impossible.  At least that's how I use the word.  Do you
use "impossible" to refer to things that are unattainable
today but attainable tomorrow? 

Back to the relation of this to AI; I would say that the requirement
that an entity abandon certain goals which appear to be impossible
is at best an arguable requirement for intelligent behavior.  Would
you say that humans who tried to build flying machines in the 18th
century were not intelligent?  How about humans who currently try
to trisect angles with compass & straightedge?  I could equally well
argue that continuing to attempt to find solutions to apparently
insoluble problems is one sign of _intelligent_ behavior. 
(Equally well means equally badly.)

Mark Nudelman
{uunet,sun,decwrl,hplabs}!pyramid!ctnews!unix386!mark

kp@uts.amdahl.com (Ken Presting) (03/03/90)

In article <897@unix386.Convergent.COM> mark@unix386.Convergent.COM (Mark Nudelman) writes:
>In article <6671@cps3xx.UUCP>, sticklen@cpswh.cps.msu.edu (Jon Sticklen) writes:
>> on ywlee@aisunk.cs.uiuc.edu suggestion to "recognize hard goals"...
>> 
>> actually it would seem that "impossible" is the right word.
>> ie, this is not the same as the halting problem. eg, suppose
>> i have a goal of living to be 10,000 years old. i should recognize
>> this as impossible and abandon searching for ways to achieve the goal.
>
>Back to the relation of this to AI; I would say that the requirement
>that an entity abandon certain goals which appear to be impossible
>is at best an arguable requirement for intelligent behavior.  Would
>you say that humans who tried to build flying machines in the 18th
>century were not intelligent?

Philip Kitcher has an interesting article on this topic in the Feb. '90
issue of _The Journal of Philosophy_.  His thesis is that while it may
be irrational for an individual to pursue a certain line of research,
it may well be rational for the scientific community as a whole to
support the occasional Sphexish devotee of an idée fixe.  Who knows,
maybe a research program that looks hopeless is actually on the right
track.  This has occurred in the history of science, notably with Wegener
and continental drift.  (Let's hear it for tenure!)

I'd like to point out how the concept of rationality helps to clarify the
issue for Kitcher.  Two distinct concerns are brought together by
rationality:  the evidential status of a hypothesis, and the practical
(or the intellectual) value to be obtained by showing the hypothesis true.

The individual vs. "group" rationality issue is also interesting.  A
population of agents can significantly improve its long term performance
by allowing a few "fools" to ramble on incessantly, beating dead horses,
flogging old ideas, chasing rainbows, cherishing pipe dreams ...

(apparently it's an important concept in our language :-)

harrison@necssd.NEC.COM (Mark Harrison) (03/06/90)

In article <4100006@aisunk>, ywlee@aisunk.cs.uiuc.edu writes:
> 
> /* Written  1:34 am  Feb 25, 1990 by news@caen.engin.umich.edu in aisunk:comp.ai */
> 
> >        - Occasionally abandon formal reasoning methods to 
> >	  simply explore patterns in the information at its
> >	  disposal.  (Dreams? Creativity?)
> >
> 
> I don't think this part is necessary. Do we get creativity
> by abandoning formal reasoning methods? Well, I doubt it. 
> 
> ywlee.

Formal reasoning systems are very good for deducing knowledge from facts
already known (Deductive reasoning), but sometimes not so good for
inductive reasoning. Didn't Einstein develop some of his original thoughts
on relativity by thinking about what happened to someone falling down an
elevator shaft?

A very interesting book that covers some of these points is
_A_Whack_on_the_Side_of_the_Head_. (The author's name is "Och" or something
like that [quick, comp.ai-ers, what makes me forget this just when I want
to tell somebody? :-)]; it is published by Warner Books, and is readily
available at bookstores.)
-- 
------------------------------------------------------------------------
Mark Harrison				| (these opinions
harrison@necssd.NEC.COM			|  are my own, etc.)
{necntc, cs.utexas.edu}!necssd!harrison |