[comp.ai.digest] Grand Challenges

doug@feedme.UUCP (Doug Salot) (09/27/88)

---- Forwarded Message Follows ----
Date: 22 Sep 88 08:20:17 GMT
From: peregrine!zardoz!dhw68k!feedme!doug@jpl-elroy.arpa  (Doug Salot)
Organization: Feedme Microsystems, Orange County, CA
Subject: Grand Challenges
Message-Id: <123@feedme.UUCP>
Sender: ailist-request@ai.ai.mit.edu
To: ailist@ai.ai.mit.edu

In the 16 Sept. issue of Science, there's a blurb about the
recently released report of the National Academy of Sciences'
Computer Science and Technology Board ("The National Challenge
in Computer Science and Technology," National Academy Press,
Washington, DC, 1988).  Just when you thought you had the
blocks world figured out, something like this comes along.

Their idea is to start a U.S. Big Science (computer science,
that is) effort a la Japan.  In addition to the usual clamoring
for software ICs, fault tolerance, parallel processing, and
a million MIPS (yes, 10^12 instructions per second), here's YOUR assignment:

1) A speaker-independent, continuous speech, multilingual real-time
translation system.  Make sure you don't mess up when the
speech is ambiguous, ungrammatical, or a phrase is incomplete.
Be sure to maintain speaker characteristics (what does Chinese sound
like with a Texas accent?).  As you may know, Japan is funding
a seven-year, $120 million effort to put a neural net in a telephone
that accomplishes this feat for Japanese <-> English (it's a
picture phone too, so part of the problem is to make the lips
sync with the speech, I guess).

2) Build a machine which can read a chapter of a physics text and
then answer the questions at the end.  At least this one can be
done by some humans!

While I'm sure some interesting results would come from attempting
such projects, these sorts of things could probably be done sooner
by tossing out ethical considerations and cloning humanoids.

If we were to accept the premise that Big Science is a Good Thing,
what should our one big goal be?  I personally think an effort to
develop a true man-machine interface (i.e., neural I/O) would be
the most beneficial, both in its applications and as a driving
force for several disciplines.
-- 
Doug Salot || doug@feedme.UUCP || ...{zardoz,dhw68k,conexch}!feedme!doug
           Raisin Deters - Breakfast never tasted so good.

dmocsny@uceng.UUCP (daniel mocsny) (09/27/88)

---- Forwarded Message Follows ----
Date: 23 Sep 88 13:39:57 GMT
From: ndcheg!uceng!dmocsny@iuvax.cs.indiana.edu  (daniel mocsny)
Organization: Univ. of Cincinnati, College of Engg.
Subject: Re: Grand Challenges
Message-Id: <266@uceng.UC.EDU>
References: <123@feedme.UUCP>
Sender: ailist-request@ai.ai.mit.edu
To: ailist@ai.ai.mit.edu

In article <123@feedme.UUCP>, doug@feedme.UUCP (Doug Salot) writes:
[ goals for computer science ]

> 2) Build a machine which can read a chapter of a physics text and
> then answer the questions at the end.  At least this one can be
> done by some humans!
> 
> While I'm sure some interesting results would come from attempting
> such projects, these sorts of things could probably be done sooner
> by tossing out ethical considerations and cloning humanoids.

A machine that could digest a physics text and then answer questions
about the material would be of astronomical value. Sure, humanoids can
do this after a fashion, but they have at least three drawbacks: 

(1) Some are much better than others, and the really good ones are
rare and thus expensive,
(2) None are immortal or particularly speedy (which limits the amount of
useful knowledge you can pack into one individual),
(3) No matter how much the previous humanoids learn, the next one
still has to start from scratch.

We spend billions of dollars piling up research results. The result,
which we call ``human knowledge,'' we inscribe on paper sheets and
stack in libraries. ``Human knowledge'' is hardly monolithic. Instead
we partition it arbitrarily and assign high-priced specialists to each
piece. As a result, ``human knowledge'' is hardly available in any
sort of general, meaningful sense. To find all the previous work
relevant to a new problem is often quite an arduous task, especially
when it spans several disciplines (as it does with increasing 
frequency). I submit that our failure to provide ourselves with
transparent, simple access to human knowledge stands as one of the
leading impediments to human progress. We can't provide such access
with a system that dates back to the days of square-rigged ships.

In my own field (chemical process design) we had a problem (synthesizing
heat recovery networks in process plants) that occupied scores of
researchers from 1970 to 1985.  Lots of people tried all sorts of
approaches, and eventually (after who knows how many grants, etc.) someone
spotted some important analogies with problems from Operations Research
work of the 1950s.  We did have to develop some additional theory, but we
could have saved a decade or so with a machine that ``knew'' the literature.

Another example of an industrially significant problem in my field is
this: given a target molecule and a list of available precursors,
along with whatever data you can scrape together on possible chemical
reactions, find the best sequence of reactions to yield the target
from the precursors. Chemists call this the design of chemical syntheses,
and chemical engineers call it the reaction path synthesis problem. Since
no general method exists to accurately predict the success of a chemical
reaction, one must use experimental data. And the chemical literature
contains references to literally millions of compounds and reactions, with
more appearing every day. Researchers have constructed successful programs
to solve these types of problems, but they suffer from a big drawback: no
such program embodies enough knowledge of chemistry to be really useful.
The programs have some elaborate methods to represent reaction
data, but these knowledge bases had to be hand-coded. Due to the chaos
in the literature, no general method of compiling reaction data automatically
has worked yet. Here we have an example of the literature containing 
information of enormous potential value, but it is effectively useless.
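
To make the search flavor of this problem concrete, here is a minimal
sketch that frames reaction path synthesis as a breadth-first search over
a toy reaction table.  The molecule names, reactions, and yield estimates
below are entirely hypothetical stand-ins for the kind of data a real
system would have to compile from the literature; an actual program would
need far richer chemistry than this.

# Minimal sketch: reaction path synthesis as state-space search.
# Everything in REACTIONS is made up for illustration only.

from collections import deque

# Each entry: (frozenset of required reactants, product, estimated yield)
REACTIONS = [
    (frozenset({"A", "B"}), "C", 0.80),
    (frozenset({"C", "D"}), "E", 0.65),
    (frozenset({"A", "D"}), "F", 0.90),
    (frozenset({"F", "B"}), "E", 0.70),
]

def find_routes(precursors, target, max_steps=4):
    """Breadth-first search for reaction sequences reaching `target`
    from the available `precursors`."""
    queue = deque([(frozenset(precursors), [])])  # (compounds on hand, route)
    routes = []
    while queue:
        have, route = queue.popleft()
        if target in have:
            routes.append(route)
            continue
        if len(route) >= max_steps:
            continue
        for reactants, product, est_yield in REACTIONS:
            # Apply a reaction only if all reactants are on hand and the
            # product is new; record the step taken.
            if reactants <= have and product not in have:
                step = (sorted(reactants), product, est_yield)
                queue.append((have | {product}, route + [step]))
    return routes

if __name__ == "__main__":
    for route in find_routes({"A", "B", "D"}, "E"):
        overall = 1.0
        for reactants, product, est_yield in route:
            overall *= est_yield
        print(route, "overall yield ~", round(overall, 2))

The toy run ranks the two-step route through F (0.90 * 0.70) above the
route through C (0.80 * 0.65); a real system would, of course, score
routes on much more than estimated yield.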

If someone handed me a machine that could digest all (or at least
large subsets) of the technical literature and then answer any
question that was answerable from the literature, I could become a
wealthy man in short order. I doubt that many of us can imagine how
valuable such a device would be. I hope to live to see such a thing.

Dan Mocsny

daryl@arthur.UUCP (Daryl McLaurine) (09/27/88)

---- Forwarded Message Follows ----
Date: 24 Sep 88 17:53:11 GMT
From: ncar!tank!arthur!daryl@gatech.edu  (Daryl McLaurine)
Organization: Dept. of Mathematics, University of Chicago
Subject: Re: Grand Challenges
Message-Id: <178@tank.uchicago.edu>
References: <123@feedme.UUCP>, <266@uceng.UC.EDU>
Sender: ailist-request@ai.ai.mit.edu
To: ailist@ai.ai.mit.edu

On "Human Knowledge"...

        I am one of many people who make a living by generating solutions
to complex problems in a specific field by understanding the relationships
between my field and many 'unrelated' fields of study.  As the complexity
of today's world increases, the realm of "Human Knowledge" cannot remain
'monolithic'.  To solve many problems, _especially_ in AI, one must
acquire a feel for the dynamic 'flow' of human experience and sense the
connectives within.  Few people are adept at this, and the ones who are
either become _the_ leading edge of their field or are called upon to
consult for others, acting as that mythical construct that will
'understand' human experience on demand.

        In my field, both academic and professional, I strive to build
systems that will acquire knowledge and make, _AT BEST_, moderately
simple correlations in data that may point to solutions to a specified
task.  It is still the realm of the Human Investigator to take these
suggestions and make a complete analysis of them, drawing on his or her
own heuristic capability to arrive at a solution.  To date, the most
advanced construct I have seen only does a kind of informational
investigative 'leg work', and rarely can it correlate facts that seem
unrelated but may actually be ontologically connected.  (But I am
working on it ;-} )  It is true that a computer model of what we do
would make a research investigator more effective, but the point at
which we can program 'intuitive knowledge' beyond simple relationships
in pattern recognition is far off.  The human in this equation is still
an unknown factor to itself (can YOU tell me how you think?  If you can,
there are MANY cognitive science people [psychologists, AI researchers,
etc.] who want to talk to you...), and until we solve the grand
challenge of knowing ourselves, our creations are little more than
idiot savants (and bloody expensive ones at that!)
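
For a sense of what 'moderately simple correlations in data' might look
like in practice, here is a minimal sketch.  The data table, column
names, and threshold are hypothetical; the systems described above are
certainly more elaborate than this, but the flavor is the same: flag
candidate relationships and leave the real analysis to the human.

# Minimal sketch: flag strongly correlated variable pairs as suggestions
# for a human investigator.  All data below are invented for illustration.

from itertools import combinations
from math import sqrt

DATA = {
    "temperature": [300, 310, 320, 330, 340],
    "yield":       [0.52, 0.55, 0.61, 0.64, 0.70],
    "pressure":    [1.0, 1.1, 0.9, 1.2, 1.0],
}

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy) if sx and sy else 0.0

def suggest(data, threshold=0.9):
    """Return variable pairs whose |correlation| exceeds the threshold."""
    hits = []
    for a, b in combinations(data, 2):
        r = pearson(data[a], data[b])
        if abs(r) >= threshold:
            hits.append((a, b, round(r, 3)))
    return hits

if __name__ == "__main__":
    for a, b, r in suggest(DATA):
        print(f"possible relationship: {a} vs {b} (r = {r})")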

-kill me, not my clients (Translated from the legalese...)
   ^
<{[-]}>-----------------------------------------------------------------------
   V   Daryl McLaurine, Programmer/Analyst (Consultant)
   |   Contact: 
   |       Home:   1-312-955-2803 (Voice M-F 7pm/1am)
   |       Office: Computer Innovations 1-312-663-5930 (Voice M-F 9am/5pm)
   |         daryl@arthur (or zaphod,daisy,neuro,zem,beeblebrox) .UChicago.edu
==\*/=========================================================================

jbn@glacier.UUCP (John B. Nagle) (09/27/88)

---- Forwarded Message Follows ----
Date: 26 Sep 88 05:33:07 GMT
From: glacier!jbn@labrea.stanford.edu  (John B. Nagle)
Organization: Stanford University
Subject: Re: Grand Challenges
Message-Id: <17736@glacier.STANFORD.EDU>
References: <123@feedme.UUCP>
Sender: ailist-request@ai.ai.mit.edu
To: ailist@ai.ai.mit.edu


      The lesson of the last five years seems to be that throwing money at
AI is not enormously productive.  The promise of expert systems has not
been fulfilled (I will refrain from quoting some of the promises today);
the Japanese Fifth Generation effort has not resulted in any visible
breakthroughs (although there are some who say that its real purpose was
to divert American attention from the efforts of Hitachi and Fujitsu to
move into the mainframe computer business); and the DARPA/Army Tank
Command autonomous land vehicle effort has resulted in vehicles that are
bigger, but just barely able to stay on a well-defined road on good days.

      What real progress there is doesn't seem to be coming from the big-bucks
projects.  People like Rod Brooks, Doug Lenat, and a few others seem to be
making progress.  But they're not part of the big-science system.

      I will not comment on why this is so, but it does, indeed, seem to be
so.  There are areas in which throwing money at the problem does work,
but AI may not be one of them at this stage of our ignorance.

					John Nagle

ray@BCSAIC.BOEING.COM (Ray Allis) (10/13/88)

If AI is to make progress toward machines with common sense, we
should first rectify the preposterous inverted notion that AI is
somehow a subset of computer science, or call the research something
other than "artificial intelligence".  Computer science has nothing
whatever to say about much of what we call intelligent behavior,
particularly common sense.

Ray Allis
Boeing Computer Services-Commercial Airplane Support 
CSNET: ray@boeing.com
UUCP:  bcsaic!ray