[net.ai] minority report

GLD%MIT-OZ@MIT-MC.ARPA (01/16/84)

     From: MAREK
     To repeat Minsky (and probably most of the AI folk): one can
     only learn if one already almost knows it.

By "can only learn if..." do you mean "can't >soon< learn unless...", or
do you mean "can't >ever< learn unless..."?

If you mean "can't ever learn unless...", then the statement has the Platonic
implication that a person at infancy must "already almost know" everything she
is ever to learn.  This can't be true for any reasonable sense of "almost
know".

If you mean "can't soon learn unless...", then by "almost knows X", do you
intend:

 o a narrow interpretation, by which a person almost knows X only if she
   already has knowledge which is a good approximation to understanding X--
   e.g., she can already answer simpler questions about X, or can answer
   questions about X but with some confusion and error; or
 o a broader interpretation which, in addition to the above, counts as
   "almost knowing X" a situation where a person might be completely in the
   dark about X-- say, unable to answer any questions about X-- but is on
   the verge of becoming an instant expert on X, say by discovering (or by
   being told of) some easy-to-perform mapping which reduces X to some
   other, already-well-understood domain (a concrete sketch of such a
   reduction follows this list).
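
To make the broad interpretation concrete, here is a minimal sketch
(in Python; the task and all the names in it are merely illustrative,
not drawn from the posts above): a learner who can already sort a list
becomes an "instant expert" at finding a median, because finding a
median reduces, by an easy-to-perform mapping, to the already-mastered
skill of sorting.

# Illustrative sketch of "learning by reduction": the learner's
# already-well-understood domain is sorting; the "new" skill X
# (finding a median) reduces to it by an easy mapping.

def sort_numbers(xs):
    """The already-mastered skill."""
    return sorted(xs)

def median(xs):
    """The new skill X, reduced to sorting plus an index lookup.

    Before being told of the reduction, the learner may be unable to
    answer any question about X; after it, she is an instant expert.
    """
    s = sort_numbers(xs)
    n = len(s)
    if n % 2 == 1:
        return s[n // 2]
    return (s[n // 2 - 1] + s[n // 2]) / 2

print(median([3, 1, 4, 1, 5]))     # -> 3
print(median([3, 1, 4, 1, 5, 9]))  # -> 3.5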

If you intend the narrow interpretation, then the claim is false, since people
can (sometimes) soon learn X in the manner described in the broad-
interpretation example.  But if you intend the broad interpretation, then the
statement expands to "one can't soon learn X unless one's current knowledge
state is quickly transformable to include X"-- which is just a tautology.

So, if this analysis is right, the statement is either false, or empty.

MAREK%MIT-OZ@MIT-MC.ARPA (01/17/84)

You won't take issue with
the hypothesis that an infant's category system is less developed than
that of an adult.  Yet, faced with the fact that many infants do become
adults, we have to explain how the category system manages to grow up
as well.

In order to do so, I propose that human learning is a process
in which, to assimilate a chunk of information, one must already
have a hundred-, nay, a thousand-fold store of SIMILAR chunks.  This
is by direct analogy with physical growing up--it happens very
slowly, gradually, incrementally--and yet it happens.
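
As a toy illustration of that proposal (mine alone-- the threshold and
the similarity test below are arbitrary assumptions made just to have
something runnable, not anything Marek specified), consider a learner
that assimilates a new chunk only when its store already holds enough
similar chunks.  Growth is slow at first and speeds up as the store
fills, which is the incremental picture being described.

import random

# Toy model: a chunk is a set of features; a new chunk is assimilated
# only if the store already holds THRESHOLD chunks sharing a feature
# with it.  THRESHOLD = 2 is a tiny stand-in for the "hundred-, nay,
# thousand-fold" store of similar chunks.
THRESHOLD = 2

def similar(a, b):
    """Two chunks are 'similar' if they share any feature."""
    return bool(a & b)

def try_assimilate(store, chunk):
    """Add the chunk only if enough similar chunks are already known."""
    support = sum(1 for known in store if similar(known, chunk))
    if support >= THRESHOLD:
        store.append(chunk)
        return True
    return False

random.seed(0)
store = [{f} for f in range(5)]      # seed knowledge: five bare features
for step in range(200):
    candidate = {random.randrange(20), random.randrange(20)}
    try_assimilate(store, candidate)

print(len(store))   # the store grows, but only chunk by similar chunk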

If you recall, my original statement was made against attempting
"wholesale learning" as opposed to "knowledge-rich" systems when
building subcognitive systems.  Admittedly, the complexity of a human
being is many orders of magnitude beyond what AI will attempt
for decades to come, yet by observing the physical development of a
child we can arrive at some sobering tips for how to successfully
build complex systems.  Abandoning the utopia of having complex
systems just "self-organize" and pop out of simple interactions of a
few even simpler pieces is one such tip.

                                -- Marek