[comp.ai] Away with words, on with action...

sksircar@phoenix.Princeton.EDU (Subrata Sircar) (02/27/90)

zarnuk@caen.UUCP (Paul Steven Mccarthy) writes:
>I argue that we can make machines perform "more intelligently" than
>they currently do.  Furthermore, I believe that the fruits of this
>pursuit have value for the human species regardless of whether or not 
>we can emulate human thinking/behavior.  Finally, I posit that we will 
>make faster progress in this field if we shy away from emotionally 
>(philosophically) charged terms like "consciousness" and "understanding", 
>in favor of focusing on specific, identifiable behaviors that we decide 
>are "more intelligent" than the way that machines currently operate.

The practical viewpoint.  Generally speaking, I feel this way as well, but am
also interested in whether or not we can emulate human thinking.  But, in 
terms of the rest of this posting, that is a side issue.

[Note:  I am rearranging parts of this posting so as to focus my discussion.]
>An "intelligent" machine must be able to:
>
>        - Perceive its environment.  (Inputs)
>        - Actively acquire new information, through both 
>          external sources -- asking questions, consulting
>          references -- and through constructing, performing 
>          and evaluating physical or intellectual experiments.
>	  (Query, Experiment)
>        - Passively acquire new information through observation
>	  of its environment. (Observe) 
>        - Exchange information with other "intelligent" machines.
>	  (Communicate)
>	- Alter its environment.  (Outputs)
>        - Recognize the effects of its actions on its
>	  environment.  (Feedback)

This is what you might call the I/O, or sensory package.  An interesting point
is that it runs contrary to the Greek mode of reasoning in a vacuum; that is,
all that was necessary was a mind which could observe and postulate, and no
experimentation was deemed necessary.  There is also the issue of what is 
internal and what is external for a computer.  For example, assume that the
sum total of human knowledge is encodable in, say, an encyclopedia.  (This
assumption, for historical reasons, is called the metaphysical assumption.
More on this later.)  Then the computer has no need to consult outside sources;
it can experiment and reason about the results, add these to its data storage,
and continue.  Communication is not really necessary either; it only serves to
prove to the satisfaction of other intelligent entities that the computer is
intelligent.  A monk with a vow of silence, living in isolation, is no less
intelligent for being unable to communicate.

Feedback is, of course, key to the learning process, and this is one of my
main tests of intelligence.

>        - Recognize itself as a distinguishable element
>          of its environment.  (Self awareness)

I would argue that this is not necessarily the case.  There are religions which
preach that every human is part of some cosmic force, and that we are all one
(in various senses); does this mean that adherents to those religions are
not intelligent?  In fact, failing to recognize that one is part and parcel of
the environment is unintelligent; man does this by refusing to see the damage
that oil spills, strip mining, and the like do to the ecosystem.  Not attaching
a special value to oneself as opposed to everything else is rare in a human,
but not unknown.

>	- Use existing information and patterns of information  
>	  to form opinions about propositions that cannot be, or
>	  have not been, empirically resolved.  (Swag)

This is just forming hypotheses, and can be lumped in with experimentation and
queries.

>        - Recognize patterns in information to synthesize new 
>          information.  (Inductive reasoning).
>        - Recognize similarities in patterns of information 
>	  from different domains.  (Draw analogies).
>	- Recognize differences in patterns of information.
>	  (Make distinctions).

Here's the kicker.  I call this the word jumble problem.  If you give a human
a bunch of letters in random order, and ask him to form words from that set
of letters, he will NOT arrange them in every possible configuration; he will
tend to only look at arrangements which "look" right.  Simulating this type of
behavior in a computer is a very difficult problem, since computers tend not
to attach meaning to one random group of symbols over another.  Douglas
Hofstadter has actually worked on this problem in the past; I don't know
what the current state of such work is.


>        - Occasionally abandon formal reasoning methods to 
>	  simply explore patterns in the information at its
>	  disposal.  (Dreams? Creativity?)

Creativity can be modeled as the act of forming hypotheses and applying the
various reasoning methods to them.  I think what you are getting at here is
doing so to no particular purpose, i.e. "for fun".  I would argue that's not
necessary for intelligence either.  Consider a hypothetical professor who 
does research 24 hours a day, and only stops to eat.  Is he/she non-intelligent
or is there something assumed about his/her behavior?
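That model of creativity (form hypotheses, then test them against the
evidence) is essentially generate-and-test, which is easy to sketch.  The
toy domain below, guessing a linear rule from examples over a small integer
range, is my own illustration, not anything from the thread:

```python
# Generate-and-test: propose candidate rules, keep the one that
# survives every check.  The domain (rules y = a*x + b with small
# integer coefficients) is purely illustrative.
def generate_and_test(data):
    """Return the first (a, b) with a*x + b == y for all (x, y) pairs."""
    for a in range(-5, 6):
        for b in range(-5, 6):
            if all(a * x + b == y for x, y in data):
                return (a, b)      # a hypothesis that survived testing
    return None                    # no hypothesis in range fits
```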

>	- Weigh probabilities of various outcomes and their
>	  implications to choose a course of action.  (Guess)

I have a problem with labeling this guessing.  What you discuss falls under the
heading of "If you don't know, take the best choice."  If one course of action
has a better chance to succeed, but is not certain, take that course.  This
is merely reasoning with probabilities, which most people tend to do at a
subconscious level, e.g. if I run across the street, the chances are good I
won't get hit by this car.  Reasoning with probabilities is actually not very
different from reasoning about certainties.
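"Take the best choice" in this sense is just an expected-value comparison.
A minimal sketch, with invented actions, probabilities, and payoffs:

```python
# Reasoning with probabilities as an expected-value comparison.
# The actions and numbers here are made up for illustration.
def best_action(actions):
    """actions maps a name to (probability of success, payoff);
    return the name with the highest expected payoff."""
    return max(actions, key=lambda name: actions[name][0] * actions[name][1])

choices = {
    "cross now":          (0.90, 10),  # probably fine, saves time
    "wait for the light": (0.999, 8),  # near-certain, a bit slower
}
```

Certainty is just the special case where every probability is 1.0, which is
why the two kinds of reasoning feel so similar.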

>(Just your average, opinionated American S.O.B.)
>---Paul...


-- 
Subrata K. Sircar, Prophet & Charter Member of SPAMIT(tm)
sksircar@phoenix.princeton.edu       SKSIRCAR@PUCC.BITNET
"I don't want the world. I just want your half." -
They Might Be Giants (Ana Ng)

sandyz@ntpdvp1.UUCP (Sandy Zinn) (03/07/90)

 (Subrata Sircar) writes:

> >Furthermore, I believe that the fruits of this
> >pursuit have value for the human species regardless of whether or not 
> >we can emulate human thinking/behavior.

Is there a human pursuit without value?  This is a deceptively simple
question, but if you consider feedback as an important attribute, and
you consider that any pursuit provides some sort of feedback, then...
 
 (Paul Steven Mccarthy) writes:
> >An "intelligent" machine must be able to:
> >    [ LIST OF ATTRIBUTES FROM SIRCAR's POSTING including: ]
> >        - Recognize the effects of its actions on its
> >	  environment.  (Feedback)
> >
> Feedback is, of course, key to the learning process, and this is one of my
> main tests of intelligence.
 
Feedback is absolutely necessary for learning, but it is a very broad
concept, which includes all kinds of information processing beyond the
"effects of actions" mentioned.  Feedback mechanisms exist in unicellular
animals.  It is therefore not suitable as a "main test" -- unless you are
willing to confer intelligence on any organic entity, which might not
be a bad idea, considering the trouble our isolationist/supremacist
stance vis-a-vis our environment (i.e. anthropocentrism) has gotten us
into.
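The point that feedback needs no intelligence is easy to make concrete: a
thermostat-style correction loop uses the effects of its own past actions
without representing anything at all.  A minimal sketch (the gain and step
count are arbitrary):

```python
# A dumb feedback loop: sense the error left by past actions, act to
# reduce it.  Nothing here learns or represents; it only corrects.
def regulate(level, target, gain=0.5, steps=20):
    for _ in range(steps):
        error = target - level   # feedback: effect of prior actions
        level += gain * error    # corrective action
    return level
```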

> >        - Recognize itself as a distinguishable element
> >          of its environment.  (Self awareness)
> 
> I would argue that this is not necessarily the case. There are religions which
> preach that every human is part of some cosmic force, and that we are all one
> (in various senses); does this mean that adherents to those religions are
> not intelligent?

Self-awareness usually denotes some form of self-representation, i.e., 
the capacity to have knowledge ABOUT oneself, or a higher level symbol
of the relationship between the entity and its context, which is
satisfied by the concept "part of some cosmic force".

> In fact, not recognizing that oneself is part and parcel of
> the environment is unintelligent; man does this by refusing to see the damage
> oil spills, strip mining, etc does to ecology.  Not attaching a special value
> to oneself as opposed to everything else is rare in a human, but not unknown.
 
Obviously, humans have the capacity for awareness of themselves as
entities-in-an-environment; just as obviously, this capacity is only
poorly used.  Are we unintelligent?  Often.

> >	 [ The information/pattern recognition & processing "problems" ]
> 
> Here's the kicker.  I call this the word jumble problem.  If you give a human
> a bunch of letters in random order, and ask him to form words from that set
> of letters, he will NOT arrange them in every possible configuration; he will
> tend to only look at arrangements which "look" right.

This is pure Information Theory.  (A good introduction, if you're 
interested, is Jeremy Campbell's _Grammatical Man_.)  Again, pattern
recognition etc. occurs throughout the evolutionary tree.  Could we
even say that the half of a DNA molecule, in serving as a template for
the creation of the missing half, is a phenomenon of pattern recognition
& processing??  As for reasoning, when does it stop being
"mechanical-processing-according-to-rules" (which rats can do) and start
being what we think of Socrates as doing?
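The DNA case really is pattern processing in the most literal sense: each
base determines its partner, so one strand fully specifies the other
(ignoring strand direction).  A sketch of that template step:

```python
# Base-pairing as template-driven pattern processing: one strand
# determines its complement outright, position by position.
PAIRS = {"A": "T", "T": "A", "G": "C", "C": "G"}

def complement(strand):
    """Rebuild the missing half of the helix from one strand."""
    return "".join(PAIRS[base] for base in strand)
```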

> >        - Occasionally abandon formal reasoning methods to 
> >	  simply explore patterns in the information at its
> >	  disposal.  (Dreams? Creativity?)

Gee, this is very telling.  We only OCCASIONALLY abandon formal reasoning
methods?  What about the possibility that reasoning is a complex process
which automatically involves play & exploration of patterns four levels
under our awareness?  Consider that dreams and creativity may be Special
Cases of information-processing phenomena which permeate organic life.
Exploring patterns?  What are chromosomes doing when they mutate?  How
would you specify a difference, and what does it mean that you wish to
do so?

> Creativity can be modeled as the act of forming hypotheses and applying the
> various reasoning methods to them.  I think what you are getting at here is
> doing so to no particular purpose i.e. "for fun".

This simplifies an awesome process to a fairly crude recipe.  I can't
buy the use of "reasoning methods" in this context.  For one thing, it
connotes a conscious deliberation which is not necessarily a feature
of creativity.

I am amused at your equation of "fun" with "having no particular purpose".
What about the professor who does research all day but considers that
she is having fun because her work is exciting?  Your definition of fun
exposes the heart of a common American myth which developed as a reaction
to the Puritan work ethic:  nothing which you HAVE to do can be fun.

Since humans are teleological, highly purposive, by their very nature,
your definition ends up as a syllogism which says we can never have
any fun, because all of our activities have some particular purpose.
Even daydreaming or being silly has a purpose.  Of course, there are
all different kinds of purposes, and thus all different kinds of
human meanings.  If you talk about "higher purposes" or the "meaning
of Life", then sure, there are lots of activities which SEEM to have
no direct connection to those.  Again, how do you differentiate, not
only "every-day" purposes from, say, philosophical purposes, but how
do you differentiate the purposive behavior of a bird's nest-building 
from a human's birdhouse-building -- without using the word "instinct",
which doesn't really explain a whole lot (Gregory Bateson)?

> >(Just your average, opinionated American S.O.B.)

Well, an average American S.O.B. could label me as being nit-picky at
this point, without being too wrong.  After all, I came along into
this nice rambling discussion of tests for intelligence and started
splitting hairs.  But this is just my point:  what look like hairs
at one level become complex braids at another level, and become big
damn cables the size of rivers when you get close enough to actually
write computer models of this stuff.  All the AI people are swimming
as strong as they can in these rivers and few of them can get their
heads up high enough above water to even see anyone else's head,
much less get the whole picture.  We're plunging into the substrate
of life itself, not just intelligence, because the processes which
enable us to call ourselves _Homo sapiens_ are the same processes
which created the first rotifer or amoeba.

    "We have not yet found the dot so small it is uncreated,
    as it were, like a metal blank, or merely roughed in --
    and we never shall.  We go down landscape after mobile,
    sculpture after collage, down to molecular structures
    like a mob dance in Brueghel, down to atoms airy and
    balanced as a canvas by Klee, down to atomic particles,
    the heart of the matter, as spirited and wild as any
    El Greco saints."   --   Annie Dillard

So am I against AI?  Hell, no.  What a trip!  But a list of the
attributes of intelligence is simply a list of the attributes
of intelligence.  AI is a study of complexity itself.  We've got
our fingers in a pie that includes us inside it, and even if we
can reproduce that pie we still won't know what it means or even
how exactly we did it.  But like I said, way back at the beginning,
all human pursuits have value:  AI provides one helluva feedback.
@ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @
    Sandra Zinn         |   "The squirming facts
                        |      exceed the squamous mind"
    (std disclaimer)    |         -- Wallace Stevens