[net.ai] Technology Review article

richw@ada-uts.UUCP (01/31/86)

Has anyone read the article about AI in the February issue of
"Technology Review"?  You can't miss it -- the cover says something
like: "In 25 years, AI has still not lived up to its promises and
there's no reason to think it ever will" (not a direct quote; I don't
have the copy with me).  General comments?

-- Rich Wagner

       "Relax!  They're just programs..."

P.S.  You might notice that about 10 pages into the issue, there's
      an ad for some AI system.  I bet the advertisers were real
      pleased about the issue's contents...

miles@vax135.UUCP (Miles Murdocca) (02/03/86)

> Has anyone read the article about AI in the February issue of
> "Technology Review"?  You can't miss it -- the cover says something
> like: "In 25 years, AI has still not lived up to its promises and
> there's no reason to think it ever will" (not a direct quote; I don't
> have the copy with me).  General comments?

The article was written by the Dreyfus brothers, who are famous for
making bold statements that AI will never meet the expectations of the
people who fund AI research.  They make the claim that people do not learn
to ride a bike by being told how to do it, but by a trial-and-error
method that isn't represented symbolically.  They use this argument, and
a few others such as the lack of a representation for emotions, to support
their view that AI researchers are wasting their sponsors' money by
knowingly heading down dead ends.

As I recall ["Machine Learning", Michalski et al., Ch. 1], there are two
basic forms of learning: 'knowledge acquisition' and 'skill refinement'.
The Dreyfus duo seems to be using a skill refinement problem to refute
the work going on in knowledge acquisition.  The distinction between the
two types of learning was recognized by AI researchers years ago, and I
feel that the two Dreyfuses lack credibility since they fail to align their
arguments with the taxonomy of the field.
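
To make the distinction concrete, here is a rough sketch (illustrative
only -- the rule text, parameter names, and numbers are my own invention,
not anything from Michalski et al.):

    # 'Knowledge acquisition': learning by adding explicit, statable rules.
    rules = []
    def acquire(rule):
        rules.append(rule)          # the new knowledge is a sentence

    acquire("IF speed is low THEN steer toward the lean")

    # 'Skill refinement': trial-and-error tuning of a parameter that is
    # never represented as a statable sentence (the bike-riding case).
    def wobble(gain):
        return abs(1.0 - gain)      # stand-in for how badly we ride

    gain = 0.0                      # steering response to lean angle
    for _ in range(20):             # practice: keep changes that help
        if wobble(gain + 0.1) < wobble(gain):
            gain += 0.1

The Dreyfus bicycle example attacks the second kind of learning, not the
first.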

    Miles Murdocca, 4G-538, AT&T Bell Laboratories, Crawfords Corner Rd,
    Holmdel, NJ, 07733, (201) 949-2504, ...{ihnp4}!vax135!miles

dpb@philabs.UUCP (Paul Benjamin) (02/03/86)

> 
> Has anyone read the article about AI in the February issue of
> "Technology Review"?  You can't miss it -- the cover says something
> like: "In 25 years, AI has still not lived up to its promises and
> there's no reason to think it ever will" (not a direct quote; I don't
> have the copy with me).  General comments?
> 
> -- Rich Wagner

It's just Dreyfus and his old line about the impossibility of AI. His
methods are flawed, so his conclusions are meaningless. But it's been
his ticket to prominence.

Paul Benjamin

pcook@milano.UUCP (02/03/86)

In article <7500002@ada-uts.UUCP>, richw@ada-uts.UUCP writes:
> 
> Has anyone read the article about AI in the February issue of
> "Technology Review"?  You can't miss it -- the cover says something
> like: "In 25 years, AI has still not lived up to its promises and
> there's no reason to think it ever will" (not a direct quote; I don't
> have the copy with me).  General comments?
> 
This article is a plug for a book, and a use of a current topic to get back
at the AI community for an imagined snub.  Hubert Dreyfus was stood up by
John McCarthy of Stanford at a debate on a third-echelon public TV
station in the Bay Area, and is still mad.

First, the premise:  AI, expert systems, and knowledge/rule-based systems
have been overly optimistic in their promises and fall short of delivered
results.  Probably true, but many of the systems, once implemented, lose
their mystical qualities and look a lot like other computer applications.
It's the ones still in the building process that seem to present
extravagant claims.

As presented, however, the article is a shrill cry rather than a reasoned
response.  It leans heavily on proof by intense assertion.  As a pilot,
I find examples which range from dubious to incorrect.  As a scientist, I
object to the gee-whiz Reader's Digest tone.  As a retired Air Force officer,
I object to the position that the commander's common sense is the ideal form
of combat decision making.  And as a philosopher (albeit not an expert), I
object to the muddy intellectual approach, rife with questionable
presuppositions, faulty dilemmas, and illogical conclusions.

I agree that the topic is worthy of discussion -- our work to realize the
potential of computers must not degenerate into a fad which will fade
from the scene.  But I object to a diatribe in which advances in the field
are dismissed as trivial because current systems do not equal human
performance.



-- 
       ...Pete                  Peter G. Cook                      Lt. Colonel
pcook@mcc.arpa                  Liaison, Motorola, Inc.            USAFR(Ret)  
ut-sally!im4u!milano!pcook      MCC-Software Technology Program
512-834-3348                    9430 Research Blvd. Suite 200
                                Austin, Texas 78759

lab@rochester.UUCP (Lab Manager) (02/03/86)

In article <7500002@ada-uts.UUCP> richw@ada-uts.UUCP writes:
>
>Has anyone read the article about AI in the February issue of
>"Technology Review"?  You can't miss it -- the cover says something
>like: "In 25 years, AI has still not lived up to its promises and
>there's no reason to think it ever will" (not a direct quote; I don't
>have the copy with me).  General comments?

They basically say that things like blocks world don't scale up, and that
AI can't model intuition because 'real people' aren't thinking
machines.  An appropriate rebuttal to these two self-styled
philosophers:
"In 3000 years, Philosophy has still not lived up to its promises and
there's no reason to think it ever will."



-- 
Brad Miller	Arpa:	lab@rochester.arpa UUCP: rochester!lab 
			(also miller@rochester for non-lab stuff)
		Title:	CS Lab Manager
		Snail:	University of Rochester Computer Science Dept.
			617 Hylan Building Rochester NY 14627

lamy@utai.UUCP (Jean-Francois Lamy) (02/03/86)

In article <7500002@ada-uts.UUCP> richw@ada-uts.UUCP writes:
>like: "In 25 years, AI has still not lived up to its promises and
>there's no reason to think it ever will" (not a direct quote; I don't

Still thinking that fundamental breakthroughs in AI are achievable in such an
infinitesimal amount of time as 25 years is naive.  I probably was not even
born when such claims could have been justified by sheer enthusiasm... Not
that we cannot get interesting and perhaps even useful developments in the
next 25 years.

>P.S.  You might notice that about 10 pages into the issue, there's
>      an ad for some AI system.  I bet the advertisers were real
>      pleased about the issue's contents...

Nowadays you don't ask for a grant or try to sell a product unless the words
"AI, expert systems, knowledge engineering techniques, fifth generation and
natural language processing" are included.

Advertising is about creating hype, and it really works -- for a while,
until the next "in" thing comes around.
-- 

Jean-Francois Lamy
Department of Computer Science, University of Toronto,
Departement d'informatique et de recherche operationnelle, U. de Montreal.

CSNet:      lamy@toronto.csnet
UUCP:       {utzoo,ihnp4,decwrl,uw-beaver}!utcsri!utai!lamy
CDN:        lamy@iro.udem.cdn (lamy%iro.udem.cdn@ubc.csnet)

marek@iuvax.UUCP (02/04/86)

ha ha ha!  "taxonomy of the field" -- the latest gospel of AI?  Let me be
impudent enough to claim that one of the most misguided AI efforts to date is
taxonomizing a la Michalski et al.:  setting up categories along arbitrary
lines dictated by somebody or other's intuition.  If AI does not have
the mechanism-cum-explanation to describe a phenomenon, what right does it
have to a) taxonomize it and b) demand that its taxonomizing be recognized
as an achievement?

			-- Marek Lugowski
			   an AI graduate student (in perpetual blush for
					           AI's excesses)
			   Indiana U. CS

franka@mmintl.UUCP (Frank Adams) (02/04/86)

In article <7500002@ada-uts.UUCP> richw@ada-uts.UUCP writes:
>Has anyone read the article about AI in the February issue of
>"Technology Review"?  You can't miss it -- the cover says something
>like: "In 25 years, AI has still not lived up to its promises and
>there's no reason to think it ever will" (not a direct quote; I don't
>have the copy with me).  General comments?
>
>P.S.  You might notice that about 10 pages into the issue, there's
>      an ad for some AI system.  I bet the advertisers were real
>      pleased about the issue's contents...

This doesn't really have anything to do with AI, but the advertisers
should have been pleased.  The article will attract the attention of
the kind of people they are trying to reach.

Frank Adams                           ihpn4!philabs!pwa-b!mmintl!franka
Multimate International    52 Oakland Ave North    E. Hartford, CT 06108

shebs@utah-cs.UUCP (02/07/86)

In article <3600036@iuvax.UUCP> marek@iuvax.UUCP writes:
 ...one of the most misguided AI efforts to date is
 taxonomizing a la Michalski et al:  setting up categories along arbitrary
 lines dictated by somebody or other's intuition.  If AI does not have
 the mechanism-cum-explanation to describe a phenomenon, what right does it
 have to a) taxonomize it and b) demand that its taxonomizing be recognized
 as an achievement?

I assume you have something wonderful that we haven't heard about?

Or do you believe that, because there are unsolved problems in physics,
chemists and biologists have no right to study objects whose behavior is
ultimately described in terms of physics?

							stan shebs
							(shebs@utah-orion)

ladkin@kestrel.ARPA (02/08/86)

In article <15030@rochester.UUCP>, lab@rochester.UUCP (Lab Manager) writes:
> "In 3000 years, Philosophy has still not lived up to its promises and
> there's no reason to think it ever will."

An interesting comment.  Whenever a problem is solved in Philosophy,
it spawns a whole new field of specialists and is no longer called
Philosophy.  Witness Physics, which used to be called Natural
Philosophy; when Newton took over, it gradually became a new
subject.  Witness our own subject, which arose out of the
attempts of Frege to provide a formal foundation for mathematical
reasoning, via Russell, Church, Curry, Kleene, Turing and
von Neumann.  Much work in natural language understanding arises
from the work of Montague, and more recently speech act theory,
from Grice, Searle and Vanderveken, is being used.
The list goes on, and so do I.  Would that AI bear such glorious
fruit.  I think it might.
Peter Ladkin

bantz@uiucuxc.CSO.UIUC.EDU (02/09/86)

Dreyfus's book "What Computers Can't Do" was a pretty sorry affair, insofar
as it purported to have a positive argument about intrinsic limits of
computers.  However uncomfortable it makes the AI community feel, though,
the journalistic baiting with extensive quotations from the AI community
itself ought to have demonstrated the virtues of a bit more humility than
is often shown.  [I'm referring to his gleeful quotation of predictions that,
by 1970 or so, a computer would be world chess champion, and that fully
literate translations of natural languages would be routine...]

The responses here, so far, seem to be guilty of what Dreyfus is accused of:
failing to engage the opponent seriously, and relying on personal expressions
of distaste or ridicule.  Specifically, Dreyfus does reject the typology of
learning in AI, on the not implausible grounds that it is self-serving and
not obviously correct (or uniquely correct).

[Please! I am *not* a fan of Dreyfus, and do not endorse most of his claims.]

gilbert@aimmi.UUCP (Gilbert Cockton) (02/10/86)

In article <15030@rochester.UUCP> lab@rochester.UUCP (Lab Manager(Brad Miller)) writes:
>......... An appropriate rebuttal to these two self-styled
>philosophers:
>"In 3000 years, Philosophy has still not lived up to its promises and
>there's no reason to think it ever will."

Which is why I'm so sceptical about the grander claims for AI.
I'm unable to see how having faster computers, smarter algorithms and
fancier versions of LISP will allow us to crack problems that have
dogged philosophers and others for centuries.  I accept that IKBS
techniques do allow a person or group to encode (some/most of/all)
their expertise into an interactive program.  However, given the
hard problems of ontology and epistemology, I cannot believe that
the expertise's status as knowledge (as opposed to belief), or as a
representation of `reality', can ever be determined on-line.

The problems of philosophy remain problems of philosophy regardless
of any new modes of communication and information encoding that can
be developed. One can be ignorant of the distinction between recursion
and iteration and still find gaping holes in much current AI research.
Furthermore, one can pick up any basic philosophy text and find
many accepted arguments which provide an education as to why these
holes seem unclosable.

Has any inductive system come up with a refutation of Hume's argument
against induction? One example will do me!

You can encode some of the people some of the time, but ...

-- 
	Gilbert Cockton, Alvey MMI Unit, Scotland 
	USENET: ..(EUROPE)mcvax!ukc!cstvax!hwcs!aimmi!gilbert
	JANET: gilbert@uk.ac.hw.cs (aimmi not NRS registered yet)
	ARPA:  gilbert%cs.hw.ac.uk@cs.ucl.ac.uk ( ditto )
	DESERT ISLAND: disk in a green bottle marked GC

jon@uw-june (Jon Jacky) (02/20/86)

> (Technology Review cover says...)
> After 25 years Artificial Intelligence has failed to live up to its promise
> and there is no evidence that it ever will.

Most of the comment in this newsgroup has addressed the second clause of
this provocative statement.  I think the first clause is more important, and
it is indisputable.  The value of the Dreyfus brothers' article is to
remind readers that when AI advocates make specific predictions, they are
often over-optimistic.  Personally, I do not find all of the Dreyfuses'
speculations convincing.  So what?  AI work does not get funded
to settle philosophical arguments, but because the funders hope to derive
specific benefits.  In particular, the DARPA Strategic Computing Program,
the largest source of funds for AI work in the country,
asserts that specific technologies (rule-based expert systems, parallel
processing) will deliver specific results (unmanned vehicles that can
drive at 40 km/hr through battlefields, natural language systems with
10,000-word vocabularies) at a specific time (the early 1990s).  One
lesson of the article is that people should regard such claims
skeptically.

Jonathan Jacky,  	...!ssc-vax!uw-beaver!uw-june!jon  or jon@uw-june
University of Washington

mfidelma@bbncc5.UUCP (Miles Fidelman) (02/20/86)

About 14 years ago Hubert Dreyfus wrote a paper titled "Why Computers Can't
Play Chess".  Immediately thereafter, someone at the MIT AI Lab challenged
Dreyfus to play one of the chess programs, which trounced him royally.  The
output of this was an MIT AI Lab Memo titled "The Artificial Intelligence
of Hubert Dreyfus, or Why Dreyfus Can't Play Chess".

The document was hilarious.  If anyone still has a copy, I'd like to arrange
a xerox of it.

Miles Fidelman (mfidelman@bbncc5.arpa)

eugene@ames.UUCP (Eugene Miya) (02/28/86)

<1814@bbncc5.UUCP>

> 
> About 14 years ago Hubert Dreyfus wrote a paper titled "Why Computers Can't
> Play Chess" - immediately thereafter, someone at the MIT AI lab challenged
> Dreyfus to play one of the chess programs - which trounced him royally -
> the output of this was an MIT AI Lab Memo titled "The Artificial Intelligence
> of Hubert Dreyfus, or Why Dreyfus Can't Play Chess".
> 
> The document was hilarious. If anyone still has a copy, I'd like to arrange
> a xerox of it.
> 
> Miles Fidelman (mfidelman@bbncc5.arpa)

Excuse the fact that I reproduced all that above rather than digesting it.

I just attended a talk given by Dreyfus (for the first time).  I think
the AI community is FORTUNATE to have the loyal opposition of
Dr. Dreyfus.  In some defense, Dreyfus is somewhat kind to the AI
community (in contrast to some AI critics I know); for instance, he does
believe in the benefit of expert systems and expert assistants.
Dreyfus feels that the AI community harped on the above:
	Men play chess.
	Computers play chess.
	Dreyfus is a man.
	Computer beat Dreyfus.
	Therefore, computers can beat man playing chess.
He pointed out that he sent his brother (supposedly captain of the
Harvard chess team at one time), who beat the computer (we should write
his brother at UCB CS to verify this, I suppose).

While I do not fully agree with Dreyfus's philosophy or his
"methodology," he is a bright thinker and critic.  [One point we
do not agree on: he believes in the validity of the Turing test;
I do not (in the way it currently stands).]

--eugene miya
  NASA Ames Research Center
  {hplabs,ihnp4,dual,hao,decwrl,allegra}!ames!aurora!eugene
  eugene@ames-nas.ARPA

p.s. I would not mind seeing a copy of the paper myself. :-)