[comp.ai.digest] The Grand Challenge is Foolish

JMC@SAIL.STANFORD.EDU (John McCarthy) (09/27/88)

[In reply to message sent Mon 26 Sep 1988 23:22-EDT.]

I shall have to read the article in Science to see if the Computer
Science and Technology Board has behaved as foolishly as it seems.
Computer science is science and AI is the part of computer science
concerned with achieving goals in certain kinds of complex
environments.  However, defining the goals of AI in terms of reading a
physics book is like defining the goal of plasma physics in terms of
making SDI work.  It confuses science with engineering.

If the Computer Science and Technology Board takes science seriously,
then they have to get technical - or rather scientific.  They might
attempt to evaluate the progress in learning algorithms, higher-order
unification, or nonmonotonic reasoning.

If John Nagle thinks that "The lesson of the last five years seems to
be that throwing money at AI is not enormously productive.", he is
also confusing science with engineering.  It's like saying that the
lesson of the last five years of astronomy has been unproductive.
Progress in science is measured in longer periods than that.

lishka@uwslh.UUCP (Fish-Guts) (10/10/88)

In article <ohbWO@SAIL.Stanford.EDU> JMC@SAIL.STANFORD.EDU writes:
>[In reply to message sent Mon 26 Sep 1988 23:22-EDT.]
>If John Nagle thinks that "The lesson of the last five years seems to
>be that throwing money at AI is not enormously productive.", he is
>also confusing science with engineering.  It's like saying that the
>lesson of the last five years of astronomy has been unproductive.
>Progress in science is measured in longer periods than that.

     I don't think anyone could have said it better.  If AI is going
to progress at all, I think it will need quite a bit of time, for its
goals seem to be fairly "grand."  I think this definitely applies to
research in Neural Nets and Connectionism: many people criticize this
area, even though it has only really gotten going (again) in the past
few years.  There *have* been some really interesting discoveries due
to AI; however, they have not been as amazing and earth-shattering as
some would like.

     In my opinion, the great amount of hype in AI is what leads many
people to say stuff such as "throwing money at AI is not enormously
productive."  If many scientists and companies would stop making their
research or products out to be much more than they actually are, I
feel that others reviewing the AI field would not be so critical.
Many AI researchers and companies need to be much more "modest" in
assessing their work; they should not make promises they cannot keep.
After all, the goal of achieving true "artificial intelligence" (in
the literal sense of the phrase) is not one that will be achieved in
the next two, ten, fifty, one hundred, or maybe even one thousand years.

					.oO Chris Oo.
-- 
Christopher Lishka                 ...!{rutgers|ucbvax|...}!uwvax!uwslh!lishka
Wisconsin State Lab of Hygiene                   lishka%uwslh.uucp@cs.wisc.edu
Immunology Section  (608)262-1617                            lishka@uwslh.uucp
				     ----
"...Just because someone is shy and gets straight A's does not mean they won't
put wads of gum in your arm pits."
                         - Lynda Barry, "Ernie Pook's Commeek: Gum of Mystery"

dharvey@wsccs.UUCP (David Harvey) (10/13/88)

In a previous article, John McCarthy writes:
> [In reply to message sent Mon 26 Sep 1988 23:22-EDT.]
> 
	< part of article omitted >

> If John Nagle thinks that "The lesson of the last five years seems to
> be that throwing money at AI is not enormously productive.", he is
> also confusing science with engineering.  It's like saying that the
> lesson of the last five years of astronomy has been unproductive.
> Progress in science is measured in longer periods than that.

Put more succinctly, the payoff of Science is (or should be) increased
understanding.  The payoff of Engineering on the other hand should be
a better widget, a way to accomplish what previously couldn't be done,
or a way to save money.  Too many people in our society have adopted
the narrow perspective that all human endeavors must produce a monetary
(or material) result.  Whatever happened to the Renaissance ideal of
knowledge for knowledge's sake?  I am personally fascinated by what
we have recently learned about the other planets in our solar system.
Does that mean we must reap some sort of material gain from the
endeavor?  If we use this type of criterion as our final baseline, we
may be missing out on some very interesting discoveries.  If I read
John McCarthy correctly, we are just short-sighted enough not to know
whether they will turn into "Engineering" ideas in the future.  Kudos to
him for pointing this out.

dharvey@wsccs

The only thing you can know for sure,
is that you can't know anything for sure.