[comp.ai] The symbol grounding problem: Again... grounding?

allard@bnl.UUCP (rick allard) (07/13/87)

In article <931@mind.UUCP> harnad@mind.UUCP (Stevan Harnad) writes:

>Categorization performance (with all-or-none categories) is highly reliable
>(close to 100%) and MEMBERSHIP is 100%. ...

Why add this clause about "real" membership?  Isn't the bulk of the
discussion about us humble humans doing the categorizing?  If we do
start wondering about this larger realm, does it bear on categorizing?

Rick
-- 
ooooooooooooootter#spoon in bowl
!!!!!!!!!!!!&   RooM    &
!!!!!!!!!!!!R   oooo    M

roelw@cs.vu.nl (Roel Wieringa) (07/16/87)

In article 512 of comp.ai Peter Berke says that
1. Newell's hypothesis that all human goal-oriented symbolic activity
is searching through a problem-space must be taken to mean that human
goal-oriented symbolic activity is equivalent to computing, i.e. that
it is equivalent to (mutually simulatable with) a process executed by
a Turing machine;
2. but human behavior is not restricted to computing, the process of
understanding an ambiguous word (one having 0 meanings, as opposed to
an equivocal word, which has more than 1 meaning) being a case in
point. Resolving equivocality can be done by searching a problem
space (a toy sketch of such a search follows below); ambiguity cannot
be so resolved.
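
By way of illustration, here is a minimal sketch, entirely my own and
not taken from Berke or Newell, of what "searching through a problem
space" amounts to: an explicit set of states, operators generating
successor states, and a goal test. The word-sense example at the end
is an invented toy showing why an *equivocal* word yields to such a
search: its candidate senses can be listed in advance.

# Purely illustrative sketch of Newell-style problem-space search:
# states, successor-generating operators, and a goal test.
from collections import deque

def search(start, successors, is_goal):
    """Breadth-first search through an explicitly given problem space."""
    frontier = deque([start])
    seen = {start}
    while frontier:
        state = frontier.popleft()
        if is_goal(state):
            return state
        for nxt in successors(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return None  # no goal state reachable

# Resolving an *equivocal* word: its candidate senses are enumerable
# in advance, so choosing the one that fits the context is search over
# a finite, explicit space.  The words and cues here are invented.
cues = {"river bank": {"water", "fishing"},
        "financial bank": {"money", "loan"}}
context = {"money"}

sense = search(
    start=None,                                  # no sense chosen yet
    successors=lambda s: list(cues) if s is None else [],
    is_goal=lambda s: s is not None and bool(cues[s] & context),
)
print(sense)  # financial bank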

If 1 is correct (which requires a proof, as Berke says), then if 2 is
correct, we can conclude that not all human behavior is searching
through a problem space; it then further follows that classical AI
(which uses computers and algorithms to reach its goals) cannot
achieve the goal of implementing all human behavior as search through
a problem space.

There are two problems I have with this argument.

First, barring a quibble about the choice of the terms "ambiguity" and 
"equivocality", it seems to me that ambiguity as defined by Berke is really
meaninglessness. I assume he does not mean that part of the surplus 
capacity of humans over machines is that humans can resolve meaninglessness
whereas machines cannot, so Berke has not said what he wants to say.

Second, the argument applies to classical AI. If one wishes to show
that "machines cannot do everything that humans can do," one should
find an argument which applies to connection machines, Boltzmann
machines, etc. as well.

Supposing for the sake of the argument that it is important to show
that there is an essential difference between man and machine, I
offer the following as an argument which avoids these problems.

1. Let us call a machine any system which is described by a state
evolution function (if it has a continuous state space) or a state
transition function (if it has a discrete state space); a small
sketch of the discrete case is given below.
2. Let us call a description explicit if (a) it is communicable to an
arbitrary group of people who know the language in which the
description is stated; (b) it is context-independent, i.e. it mentions
all relevant aspects of the system and its environment that are
needed to apply it; (c) it describes a repeatable process, i.e.
whenever the same state occurs, then from that point on the same
input sequence will lead to the same output sequence, where "same" is
defined as "described by the explicit description as an instance of
an input (output) sequence." Laws of nature which describe how a
natural process evolves, computer programs, and radio wiring diagrams
are explicit descriptions.
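
Here is the small sketch promised above, my own and purely
illustrative, of a machine in the sense of definition 1 for the
discrete case: the whole system is given by an explicit transition
table, and the repeatability clause (c) holds trivially because the
table fixes the output sequence for any starting state and input
sequence. State names and input symbols are invented.

# Illustrative only: a discrete "machine" given entirely by an
# explicit state transition function.
def step(state, symbol):
    """Transition function: (state, input symbol) -> (next state, output)."""
    table = {
        ("idle", "go"):   ("busy", "started"),
        ("busy", "go"):   ("busy", "ignored"),
        ("busy", "stop"): ("idle", "stopped"),
        ("idle", "stop"): ("idle", "ignored"),
    }
    return table[(state, symbol)]

def run(state, inputs):
    # Repeatability: the same starting state and the same input
    # sequence always yield the same output sequence, because the
    # table fixes every transition in advance.
    outputs = []
    for symbol in inputs:
        state, out = step(state, symbol)
        outputs.append(out)
    return outputs

print(run("idle", ["go", "go", "stop"]))  # ['started', 'ignored', 'stopped']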

Now, obviously a machine is an explicitly described system.
The essential difference between man and machine I propose is that
man possesses the ability to explicate whereas machines do not. The
*ability* to explicate is defined as the ability to produce an
explicit description of a range of situations where the range itself
is not explicitly described. In principle, one can build a machine
which produces explicit descriptions of, say, objects on a conveyor
belt. But the set of kinds of objects on the belt would then have to
be explicitly described in advance, or at least it would in principle
be explicitly describable, even though the description would be large
or difficult to find. The reason for this is that a machine is an
explicitly described system, so that, among other things, the set of
possible inputs is explicitly described.
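
A toy version of the conveyor-belt point, again my own and purely
illustrative: a machine that "describes" objects can only return
descriptions for a set of kinds enumerated in advance; anything
outside that set falls outside its explicitly described input space.
The object kinds and descriptions below are invented.

# Illustrative only: the set of kinds the machine can describe is
# fixed in advance as part of its explicit description.
KNOWN_KINDS = {
    "bolt":   "metal fastener with a threaded shaft",
    "washer": "flat metal ring",
    "nut":    "metal fastener with an internal thread",
}

def describe(kind):
    # The machine cannot explicate a genuinely new kind of object; it
    # can only report that the input lies outside its described set.
    return KNOWN_KINDS.get(kind, "unknown object")

print(describe("washer"))    # flat metal ring
print(describe("seashell"))  # unknown object
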
  On the other hand, a human being in principle can produce
reasonably explicit descriptions of a class of systems which has no
sharp boundaries. I think it is this capability which Berke means
when he says that human beings can disambiguate whereas algorithmic
processes cannot. If the set of inputs to an explication process carried 
out by a human being is itself not explicitly describable, then
humans have a capability which machines don't have.

A weak point in this argument is that human beings usually have a
hard time producing totally explicit descriptions; this is why
programming is so difficult. Hence, the qualification "reasonably
explicit" above. This does not invalidate the comparison with
machines, for a machine built to produce reasonably explicit
descriptions would still be an explicitly described system, so that
the sets of inputs and outputs would be explicitly described (in
particular, the reasonableness of the explicitness of its output
would be explicitly described as well).

A second argument deriving from the concepts of machine and
explicitness focuses on the three components of the concept of
explicitness. Suppose that an explication process executed by a human
being were explicitly describable. 
1. Then it must be communicable; in particular the initial state must be 
communicable; but this seems one of the most incommunicable mental states 
there is. 
2. It must be context-independent; but especially the initial stage
of an explication process seems to be the most context-sensitive
process there is.
3. It must be repeatable; but put the same person in the same
situation (assuming that we can obliterate the memory of the previous
explication of that situation) or put identical twins in the same
situation, and we are likely to get different explicit descriptions
of that situation.

Note that these arguments do not use the concept of ambiguity as
defined by Berke and, if valid, apply to any machine, including
connection machines. Note also that they are not *proofs*. If they
were, they would be explicit descriptions of the relation between a
number of propositions, and this would contradict the claim that the
explication process has very vague beginnings.

Roel Wieringa