[mod.ai] Technology Review article

KWH@AI.AI.MIT.EDU (Ken Haase) (02/06/86)

From: Ken Haase <KWH@MIT-AI.ARPA>


    Date: 3 Feb 86 14:25:24 GMT
    From: vax135!miles@ucbvax.berkeley.edu  (Miles Murdocca)
    Subject: Re: Technology Review article
    To: AIList@SRI-AI

    The [Technology Review] article was written by the Dreyfus brothers,
    who are famous for making bold statements that AI will never meet the
    expectations of the people who fund AI research.  They make the claim
    that people do not learn to ride a bike by being told how to do it,
    but by a trial and error method that isn't represented symbolically.
    They use this argument and a few others such as the lack of a
    representation for emotions to support their view that AI researchers
    are wasting their sponsors' money by knowingly heading down dead-ends.

I don't think the Dreyfus brothers accuse AI researchers of knowingly
heading down dead-ends.  They just claim that most of ``what people do''
cannot be captured by the ``abstracted representations'' of nearly all
current AI research.  I don't agree with this claim, but can't deny that
we (in AI) may be all wrong about our central hypothesis.  We just have
to make our hypothesis clear and explicit.  I think that most high level
intellectual processes have effective symbolic representations (and I'm
working to find out what such representations might be).  That is an
explicit hypothesis of my research.  On the other hand, I do not think
that there is anything like a symbolic representation of ``how to ride a
bike''.  What happens in such cases is that our intellect ``trains'' the
animal that is the rest of us to ride the bicycle.
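
As a rough illustration of the contrast I have in mind (a toy sketch of
my own, nothing from the article, with made-up names), a piece of
symbolic knowledge can be written down and inspected, while the
bike-riding sort of skill looks more like iteratively adjusting a
control parameter against feedback, leaving behind a tuned number
rather than a statement anyone could read:

    import random

    # "Knowledge that": an explicit, inspectable symbolic rule.
    def can_fly(bird):
        # e.g. "penguins do not fly" -- the rule explains itself
        return bird != "penguin"

    # "Knowing how": a skill acquired by trial and error.  The only
    # trace of the learning is a tuned number, not a sentence.
    def learn_balance(trials=1000):
        lean = 0.0                                # hypothetical control parameter
        for _ in range(trials):
            wobble = lean + random.gauss(0, 0.1)  # noisy attempt
            lean -= 0.05 * wobble                 # correct toward upright
        return lean

    print(can_fly("sparrow"))    # True  -- readable, explainable
    print(learn_balance())       # ~0.0  -- the "skill", opaque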

    As I recall ["Machine Learning", Michalski et al, Ch 1], there are two
    basic forms of learning: 'knowledge acquisition' and 'skill refinement'.
    The Dreyfus duo seems to be using a skill refinement problem to refute
    the work going on in knowledge acquisition.  The distinction between the
    two types of learning was recognized by AI researchers years ago, and I
    feel that the Dreyfus two lack credibility since they fail to align their
    arguments with the taxonomy of the field.

The alchemists could have made the same objection to arguments for the
periodic table; what the Dreyfus brothers are arguing for is the
need for just such a ``paradigm shift'' in cognitive science.  The fact
that this shift will disrupt the foundations of most current AI
technology (most of which is not well proven anyway) should not affect
scientific judgements at all (though, pessimistically, it certainly
will).

In any case, the dichotomy between skill refinement and knowledge
acquisition is itself suspect; outside of rote learning of facts, most
knowledge is gained by appropriating it as skills (in a broad sense of
skills, which includes responses, perceptual skills, etc.).

Ken

ailist@ucbvax.UUCP (02/11/86)

From: ucdavis!lll-crg!amdcad!amd!hplabs!fortune!redwood!rpw3@ucbvax.berkeley.edu (Rob Warnock)


+
| The [Technology Review] article was written by the Dreyfus brothers, who ...
| claim...  that people do not learn to ride a bike by being told how to do it,
| but by a trial and error method that isn't represented symbolically.
+

Hmmm... Something for these guys to look at is Seymour Papert's work in teaching
such skills as bicycle riding, juggling, etc. by *verbal* and *written* means.
That's not to say that some trial-and-error practice is not needed, but that
there is a lot more that can be done analytically than is commonly assumed.
Papert has spent a lot of time looking at how children learn certain physical
skills, and has broken those skills down into basic actions, "subroutines",
and so forth.

After reading his book "Mindstorms", I picked up three apples and, following
the directions in the book, taught myself to juggle (3 things, not 4-"n") with
only a few minutes practice. Particularly useful were his warnings of which
errors were associated with which levels of the subroutine hierarchy. (Oddly
enough, most errors in the overall performance come not from the coordination
of the three balls, but from not mastering the most basic skill, throwing-
and-catching a single ball. The most serious mistake here is looking at the
balls at any point in the trajectory *other* than at the very top.)
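
A rough way to picture that subroutine hierarchy (my own hypothetical
sketch, not Papert's notation or names): the three-ball cascade is
built from an "exchange", which is built from the single
throw-and-catch, so an error at the top level usually traces back to
the bottom one:

    # Hypothetical sketch of a Papert-style skill hierarchy for the
    # three-ball cascade; each level is a small "subroutine" built
    # from the level below it.

    def throw_and_catch(ball):
        # Bottom level: the basic skill.  The characteristic error is
        # tracking the ball anywhere but at the top of its arc.
        print(f"throw {ball}, watch only the peak, catch {ball}")

    def exchange(left, right):
        # Middle level: while one ball is in the air, throw the other
        # from the hand that is about to receive it.
        throw_and_catch(left)
        throw_and_catch(right)

    def cascade(balls, rounds=3):
        # Top level: repeat the exchange, rotating through all three balls.
        for _ in range(rounds):
            exchange(balls[0], balls[1])
            balls = balls[1:] + balls[:1]   # rotate to the next pair

    cascade(["red", "green", "blue"])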

So... there is at least SOME hint that the difference between "knowledge"
and "skills" is not as vast as we normally assume, *if* the "skills" are
analyzed properly with a view to learning.


Rob Warnock
Systems Architecture Consultant

UUCP:	{ihnp4,ucbvax!dual}!fortune!redwood!rpw3
DDD:	(415)572-2607
USPS:	627 26th Ave, San Mateo, CA  94403

albert@KIM.BERKELEY.EDU (Anthony Albert) (02/24/86)

From: albert@kim.berkeley.edu (Anthony Albert)


In article <8602110348.2860@redwood.UUCP>, ucdavis!lll-crg!amdcad!amd!
  hplabs!fortune!redwood!rpw3@ucbvax.berkeley.edu (Rob Warnock) writes:
>
>
>+
>| The [Technology Review] article was written by the Dreyfus brothers, who
>| claim...  that people do not learn to ride a bike by being told how to do
>| it, but by a trial and error method that isn't represented symbolically.
>+
>
>Hmmm... Something for these guys to look at is Seymour Papert's work in
>teaching such skills as bicycle riding, juggling, etc. by *verbal* and
>*written* means.
>That's not to say that some trial-and-error practice is not needed, but that
>there is a lot more that can be done analytically than is commonly assumed.

The Dreyfuses (?) understand that learning can occur analytically and 
consciously at first. But in the stages from beginner to expert, the actions
become less and less conscious. I imagine Mr. Warnock's juggling (mentioned
further on in the article) followed the same path; when practicing a skill,
one doesn't think about it constantly; one lets it blend into the background.
-- 
				Anthony Albert
				..!ucbvax!kim!albert
				albert@kim.berkeley.edu

green@OHIO-STATE.ARPA (Jeffrey Greenberg) (03/01/86)

> re:
> Dreyfus' distinction between learning symbolically how to do a task
> and 'doing' the task...i.e. body's knowledge.
>
I agree with the Dreyfus brothers - the difficulty many AI people have
(in my opinion) is a fundamental confusion of
"knowledge of" versus "knowledge that."