[comp.ai] Challenge to Connectionists

harnad@mind.UUCP (Stevan Harnad) (12/19/86)

I would like to issue a challenge to connectionists. Connectionist (C)
approaches are receiving a great deal of attention lately, and many
ambitious claims and expectations have been voiced. It is not clear,
on the existing evidence, what the null hypothesis is or ought to be,
and what would be needed to reject it. Let me propose one:

H-0: Connectionist approaches will fail to have the power to capture
the capacities of the mind because they will turn out to be subject to
higher-order versions of the same limitations that eliminated
Perceptrons from contention.
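
For readers who have not seen the Perceptrons result at first hand, the
core of that limitation is easy to exhibit. Below is a minimal sketch
(in Python, purely as a notation of convenience; the function name and
parameters are my own illustrative choices) of Rosenblatt's learning
rule converging on AND, which is linearly separable, and failing on XOR
(parity), which is not:

    def train_perceptron(samples, epochs=100, lr=0.1):
        """Rosenblatt's perceptron rule on two-input boolean samples."""
        w, b = [0.0, 0.0], 0.0
        for _ in range(epochs):
            errors = 0
            for (x1, x2), target in samples:
                out = 1 if w[0]*x1 + w[1]*x2 + b > 0 else 0
                err = target - out
                if err:
                    errors += 1
                    w[0] += lr * err * x1
                    w[1] += lr * err * x2
                    b += lr * err
            if errors == 0:
                return True     # converged: predicate is linearly separable
        return False            # no convergence within the epoch budget

    AND = [((0,0),0), ((0,1),0), ((1,0),0), ((1,1),1)]
    XOR = [((0,0),0), ((0,1),1), ((1,0),1), ((1,1),0)]

    print(train_perceptron(AND))   # True
    print(train_perceptron(XOR))   # False: XOR is not linearly separable

Whether higher-order versions of this separability limit recur in
multi-layer networks is, of course, precisely what H-0 asserts and what
remains to be proved or refuted.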

It would seem that rejecting H-0 will require meeting one or the other
of the following criteria:

	(i) Prove formally that not only is C not subject to perceptron-like
	constraints, but that it does have the power to generate
	mental capacity.
	
This first criterion is currently rather vague, since there is no well-defined
formal problem that is known to be equivalent to mental capacity (in the way
the traveling salesman problem is known to be polynomially equivalent
to many other important
computational problems). The conceptual and evidential burden,
however, is on those who are making positive claims.

	(ii) Demonstrate C's power to generate mental capacity empirically
	by generating human performance capacity or a significant portion
	of it.

The second criterion also suffers from some vagueness because there
seems to be no formal, empirical or practical basis for determining
when (if ever) a performance domain ceases to be a "toy" problem (chess
playing, circumscribed question-answering, object manipulation, and the
like) and becomes life-size -- apart from the
Total Turing Test, which some regard as too demanding. It is also
unknown whether there exists any natural (or formally partitionable)
subtotal performance "module." Again, however, the conceptual and
evidential burden would seem to be on those who are making positive
claims.

To summarize, my challenge to connectionists is that they either
provide (i) formal proof or (ii) empirical evidence for their claims
about the present or future capacity of C to model human performance
or its underlying function.

Conspicuously absent from the above is any mention of the brain. The
brain is a red herring at this stage of investigation. Experimental
neuroscientists have only the vaguest ideas about how the brain
functions. They, like all other experimental scientists, must look to
theory not only for hypotheses about function, but for guidance as to
what to look for. There is no reason to believe, for example, that the
functional level "where the action is" in the brain is anything
remotely similar to our naive and simplistic picture consisting of neurons,
action potentials, and their connections. It may, for example, be at the
subthreshold level of graded postsynaptic potentials, or at a biochemical
level, or at no level so far ascertained or even conceptualized.

At this point, taking it to be to C's credit that it is "brain-like"
amounts to the blind leading the blind. Indeed, I would recommend a
"modularization" between the efforts of those who test C as a neural
model and those who test it as a performance model. The former should
restrict themselves to accounting for the data from experimental neuroscience
and the latter should restrict themselves to accounting for performance data,
with neither claiming the other's successes as bearing on the validity
of their own efforts.  Otherwise, shortcomings in C's performance
capacity will be overlooked or rationalized on the grounds of brain
verisimilitude and shortcomings in C's brain-modeling will be overlooked
or rationalized on the grounds of its cognitive capacity.

Finally, lest it be thought that AI (symbolic modeling) gets off
scot-free in these considerations: AI is and should be subject to the
same two criteria. "Turing power" is no better a prima facie basis for
claiming to be capturing mental power in AI than "brain-likeness" is in
connectionism. Indeed, C has the slight advantage that it is at least
a class of algorithms rather than just a computational architecture.
Hence it has some hope of showing that whatever it can ultimately do
(if anything), it does by the same general means, rather than by ad hoc
ones gerrymandered to each problem at hand, as AI does.
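
To make the "same general means" point concrete: the generalized delta
rule of Rumelhart, Hinton and Williams is one learning procedure that
applies unchanged from problem to problem. Here is a minimal sketch
(again in Python as a notation of convenience; the network size,
learning rate and epoch count are my own illustrative choices, not
canonical ones) in which a small multi-layer net learns the XOR
predicate that defeats the perceptron sketched earlier:

    import math, random

    def sigmoid(x):
        return 1.0 / (1.0 + math.exp(-x))

    def train_xor(hidden=2, epochs=20000, lr=0.5, seed=1):
        """Train a 2-hidden-1 net by gradient descent on squared error."""
        samples = [((0,0),0), ((0,1),1), ((1,0),1), ((1,1),0)]
        rnd = random.Random(seed)
        # input->hidden and hidden->output weights, each row with a bias
        w1 = [[rnd.uniform(-1, 1) for _ in range(3)] for _ in range(hidden)]
        w2 = [rnd.uniform(-1, 1) for _ in range(hidden + 1)]
        for _ in range(epochs):
            for (x1, x2), target in samples:
                x = [x1, x2, 1.0]                     # inputs plus bias
                h = [sigmoid(sum(w*v for w, v in zip(row, x)))
                     for row in w1]
                ho = h + [1.0]                        # hidden plus bias
                out = sigmoid(sum(w*v for w, v in zip(w2, ho)))
                # backward pass: output delta, then hidden deltas
                d_out = (target - out) * out * (1 - out)
                d_hid = [d_out * w2[j] * h[j] * (1 - h[j])
                         for j in range(hidden)]
                for j in range(hidden + 1):
                    w2[j] += lr * d_out * ho[j]
                for j in range(hidden):
                    for k in range(3):
                        w1[j][k] += lr * d_hid[j] * x[k]
        for (x1, x2), target in samples:   # report the trained outputs
            x = [x1, x2, 1.0]
            ho = [sigmoid(sum(w*v for w, v in zip(row, x)))
                  for row in w1] + [1.0]
            print((x1, x2),
                  round(sigmoid(sum(w*v for w, v in zip(w2, ho))), 2))

    train_xor()   # with most seeds the outputs approach 0, 1, 1, 0;
                  # some initializations stall in a local minimum,
                  # itself a known limitation of gradient descent

The point is not that this toy succeeds, but that the identical rule,
with no problem-specific gerrymandering, is what a connectionist would
apply to any other predicate.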

Instead of indulging in mentalistic (and in C's case, also neuralistic)
overinterpretations of the minuscule performance capacities of current
models, both AI and C should hunker down to creating performance
models that will require no embellishment or interpretation to be
impressive as inroads on human performance and its functional basis.
-- 

Stevan Harnad                                  (609) - 921 7771
{allegra, bellcore, seismo, rutgers, packard}  !princeton!mind!harnad
harnad%mind@princeton.csnet           

andrews@ubc-cs.UUCP (Jamie Andrews) (12/23/86)

In article <425@mind.UUCP> harnad@mind.UUCP (Stevan Harnad) writes:
>... meeting one or the other of the
>following criteria will be necessary:
>	(i) Prove formally that not only is C not subject to perceptron-like
>	constraints, but that it does have the power to generate
>	mental capacity.
>	(ii) Demonstrate C's power to generate mental capacity empirically...

     Minsky and Papert's analysis of perceptrons was based on a
precisely defined and restricted type of machine.  It seems to me that the
emphasis in the discussion about connectionism should be on proving
that the connectionist approach cannot work (possibly *using*
_Perceptrons_-like arguments), rather than that _Perceptrons_-like
proofs *cannot* be applied to connectionism.

     I think both connectionists and anti-connectionists should be
involved in this proof process, however.  I wouldn't want the
discussion to turn into yet another classic AI political battle.

>To summarize, my challenge to connectionists is that they either
>provide (i) formal proof or (ii) empirical evidence for their claims
>about the present or future capacity of C to model human performance
>or its underlying function.

     If you mean by this that we should not study connectionism
until connectionists have done one of these things, then (as you
point out) we might as well write off the rest of AI too.  The
main thing should be to try to learn as much from the connectionist
model as possible, and to accept any proofs of uselessness if
someone should come up with them.  We can't expect to turn all
connectionist researchers into Minskys in order to prove theorems
about connectionism that are bound to be very complex.

--Jamie.
...!seismo!ubc-vision!ubc-cs!andrews
"Good heavens, Miss Sakamoto, you're beautiful"
  This probably does not represent the views of the UBC
  Computer Science Department, or anyone else, for that matter.