[comp.ai] An Unsearchable Problem of Elementary Human Behavior

berke@CS.UCLA.EDU (07/15/87)

      An Unsearchable Problem of Elementary Human Behavior

                           Peter Berke
                UCLA Computer Science Department

The Artificial Intelligence assumption that all human behavior can
eventually be mimicked by computer behavior has been stated in
various ways.  Since Newell advanced his Problem Space Hypothesis
in 1980, it has taken on a clearer, and thus more refutable, form.
Newell stated the hypothesis thus:

"The fundamental organizational unit of all  human  goal-oriented
symbolic activity is the problem space." - Newell, 1980.

In the 1980 work, Newell says his claim "hedges on whether all
cognitive activity is symbolic."  Laird, Rosenbloom, and Newell
(1985) ignore this hedge and the qualification "goal-oriented
symbolic" when they propose: "Our approach to developing a general
learning mechanism is based on the hypothesis that all complex
behavior - which includes behavior concerned with learning - occurs
as search in problem spaces."  They reference Newell (1980), but
their claim is broader than Newell's original one.

The purpose of this note is to show that, to be true, Newell's
hypothesis must be taken to mean just that goal-search in a state
space is a formalism equivalent to computing.  Then Newell's
Problem Space Hypothesis is simply a true theorem.  The reader is
invited to sketch a proof of the mutual simulatability of Turing
computation and a process of goal-search in a state space.  Such a
proof has been constructed for every other prospective universal
formalism, e.g., lambda calculus, recursive function theory, and
Post tag systems.  That such universal formalisms are equivalent in
this sense led Church (1936, footnote 3) to speculate that human
calculating activity can be given no more general a
characterization.
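
[A minimal sketch of one direction of that simulation, in Python,
with an invented two-rule transition table: machine configurations
serve as states, the transition table as the successor function,
and "halted" as the goal test.]

    from collections import deque

    def goal_search(start, successors, is_goal):
        # Generic goal-search in a state space (breadth-first).
        frontier, seen = deque([start]), {start}
        while frontier:
            state = frontier.popleft()
            if is_goal(state):
                return state
            for nxt in successors(state):
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append(nxt)
        return None

    # Toy one-tape machine (invented for illustration): a
    # configuration is (state, tape, head); DELTA maps
    # (state, read symbol) -> (next state, symbol written, move).
    DELTA = {('q0', '0'): ('q0', '1', 1),    # rewrite 0 as 1, go right
             ('q0', '1'): ('halt', '1', 0)}  # stop at the first 1

    def tm_successors(config):
        state, tape, head = config
        rule = DELTA.get((state, tape[head]))
        if rule is None:
            return []                        # no applicable rule
        nstate, write, move = rule
        ntape = tape[:head] + write + tape[head + 1:]
        return [(nstate, ntape, head + move)]

    print(goal_search(('q0', '001', 0), tm_successors,
                      lambda c: c[0] == 'halt'))
    # -> ('halt', '111', 2): running the machine IS searching its
    # configuration space for a halting configuration.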

But human behavior is not restricted to calculating activity
(though it seems that at least some human behavior is calculating).
If the Problem Space Hypothesis is taken to be a stronger
statement, that is, as a statement about human behavior rather than
about the formalism of goal-search in a state space, then I claim
that the following counter-example shows it to be false.

Understanding a name is an inherently unsearchable problem; it
cannot be represented as search in a state or problem space.  Well,
it can be so represented, but then it is not the same problem: in
searching our states for our goal we are solving a different
problem than the original one.

To understand that understanding is (or how it can be) inherently
unsearchable, it is necessary to distinguish between ambiguity and
equivocacy.  At first the distinction seems contrived, but it is
required by the assumption that there are discrete objects called
'names' that have discrete meanings (some other associated object
or objects; see Church 1986, Berke 1987).

An equivocal word/image has more than one clear meaning; an
ambiguous word/image has none.  What is usually meant by the phrase
"lexical ambiguity" is semantic equivocacy.  Equivocacy occurs even
in formal languages and systems, though in setting up a formal
system one aims to avoid equivocacy.  For example, an expression in
a computer language may be equivocal ("of equal voices"), such as:
'IF a THEN IF b THEN c ELSE d'.  The whole expression is equivocal
because its meaning depends on which 'IF' the 'ELSE' is paired
with.  In this case there are two clear meanings, one for each
choice of 'IF'.
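
[To hear the two voices, here are the two pairings written out in
Python, where indentation forces the choice; a, b and the printed
'c', 'd' are placeholders for the illustration.]

    a, b = True, False

    # Reading 1: the 'ELSE' pairs with the inner 'IF'
    # (the convention most grammars adopt by fiat).
    if a:
        if b:
            print('c')
        else:
            print('d')        # prints 'd': a holds, b does not

    # Reading 2: the 'ELSE' pairs with the outer 'IF'.
    if a:
        if b:
            print('c')
    else:
        print('d')            # prints nothing: a holds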

On the other hand, 'ELSE' taken in isolation is ambiguous ("like
both"): its meaning is not one or many alternatives; it is like all
of them.  [The reader, especially one who may claim that 'ELSE' has
no meaning in isolation, may find it valuable to pause at this
point to write down what 'ELSE' means.  Several good attempts can
be generated in very little time, especially with the aid of a
dictionary.]


Resolving equivocacy can be represented as search in a state space;
it may very well BE search in a state space.  Resolving ambiguity
cannot be represented as search in a state space.  Resolving
environmental ambiguity is the problem-formulation stage of
decision making; resolving objective ambiguity is the
object-recognition phase of perception.
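
[A sketch of the first claim, not a definitive encoding: the two
readings above, resolved by goal-search.  Note what the encoding
presupposes: the candidate meanings are enumerated before the
search begins; an ambiguous term supplies no such candidate set.]

    # States: the finitely many clear meanings, given in advance.
    # Goal test: agreement with observed behavior, here that the
    # inputs a=True, b=False were seen to produce 'd'.  All names
    # are illustrative.

    def reading1(a, b):          # 'ELSE' paired with the inner 'IF'
        if a:
            return 'c' if b else 'd'
        return None

    def reading2(a, b):          # 'ELSE' paired with the outer 'IF'
        if a:
            return 'c' if b else None
        return 'd'

    candidates = [reading1, reading2]     # the pre-given state space
    goal = lambda r: r(True, False) == 'd'
    print([r.__name__ for r in candidates if goal(r)])  # ['reading1']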

The difference between ambiguity and equivocacy is a reason why
object-recognition and problem-formulation are difficult
programming and management problems, only iteratively approximable
by computation or rational thought.  A state space is, by
definition, equivocal rather than ambiguous.  If we confuse
ambiguity with equivocacy, ambiguity resolution may seem like
goal-search in a state space, but this ignores the process of
reducing an ambiguous situation to an equivocal one, much the way
Turing (1936) consciously ignores the transition of a light switch
from OFF to ON.

A digital process can approximate an analog process, yet we
distinguish the digital process from the analog one.  Similarly, an
equivocal problem can approximate an ambiguous problem, but the
approximating problem differs from the approximated one.  Even if a
bank of mini-switches can simulate a larger light switch moving
from OFF to ON, we don't evade the problem of switch transition; we
push it "down" a level, and then ignore it.  Even if we can
simulate an ambiguity by a host of equivocacies, we don't thereby
remove ambiguity; we push it "down" a level, and then ignore it.

Ambiguity resolution cannot be accomplished by goal-search in a
state space.  At best it can be pushed down some levels.  Ambiguity
must still be resolved at the lower levels.  It doesn't just go
away; ambiguity resolution is the process of its going away.
Representation may require ambiguity resolution, so the general
problem of representing something (e.g., problem solving,
understanding a name) as goal-search in a state space cannot itself
be represented as goal-search in a state space.

This leads me to suspect what may be a stronger result:
"representing something" in a given formalism cannot be represented
in that formalism.  For example, "representing a thought in words,"
that is, expression, cannot be represented in words.  "What it is
to be a word" cannot be expressed in words.  Thus there can be no
definition of 'word', nor then of 'language'.  Understanding a
word, if it relies on some representation of "what it is to be a
word" in words, cannot be represented in words.

The meaning of a word is in this way precluded from being (or from
being adequately represented by) other words.  This agrees with our
daily observation that "the meaning of a word" in a dictionary is
incomplete.  Not all words need be impossible to define completely;
for this argument to hold, just some of them must be.  It also
agrees with Church's 1950 arguments on the contradictions inherent
in taking words to be the meaning of other words.

If understanding cannot be represented in words, it can never be
well-defined and cannot be programmed.  In programming, we can and
must ignore the low-level process of bit-recognition because it is,
and must be, implemented in hardware.  Similarly, hardware must
process ambiguities into equivocacies for subsequent "logical"
processing.

We are thus precluded from saying how understanding works, but that
does not preclude us from understanding.  Understanding a word can
be learned, as demonstrated by humans daily.  Thus learning is not
exhausted by any (word-expressed) formalism.  One example of a
formalism that does not exhaust learning behavior is computation as
defined (put into words) by Turing.  Another is goal-search in a
state space as defined (put into words) by Newell.


References:

Berke, P., "Naming and Knowledge: Implications of Church's
Arguments about Knowledge Representation," in revision for
publication, 1987.

Church, A., "An Unsolvable Problem of Elementary Number Theory"
(presented to the American Mathematical Society, April 19, 1935),
American Journal of Mathematics, 58, pp. 345-363, 1936.

Church, A., "On Carnap's Analysis of Statements of Assertion and
Belief," Analysis, 10:5, pp. 97-99, April 1950.

Church,  A.,  "Intensionality  and  the  Paradox  of   the   Name
Relation," Journal of Symbolic Logic, 1986.

Laird, J.E., P.S. Rosenbloom, and A. Newell, "Towards Chunking as a
General Learning Mechanism," Technical Report CMU-CS-85-110,
Carnegie-Mellon University, 1985.

Newell, A., "Reasoning, Problem Solving, and Decision Processes:
The Problem Space as a Fundamental Category," chapter 35 in R.
Nickerson (ed.), Attention and Performance VIII, Erlbaum, 1980.

Turing, A.M., "On Computable Numbers, with an Application to the
Entscheidungsproblem," Proceedings of the London Mathematical
Society, ser. 2, 42 (1936-7), pp. 230-265; correction, ibid., 43
(1937), pp. 544-546.

roelw@cs.vu.nl (Roel Wieringa) (07/16/87)

Sorry, my previous posting was a reaction to Berke's "An
Unsearchable Problem of Elementary Human Behavior," not to the
symbol grounding discussion.

Roel Wieringa