[comp.ai] Grand Challenges

doug@feedme.UUCP (Doug Salot) (09/22/88)

In the 16 Sept. issue of Science, there's a blurb about the
recently released report of the National Academy of Sciences'
Computer Science and Technology Board ("The National Challenge
in Computer Science and Technology," National Academy Press,
Washington, DC, 1988).  Just when you thought you had the
blocks world figured out, something like this comes along.

Their idea is to start a U.S. Big Science (computer science,
that is) effort a la Japan.  In addition to the usual clamoring
for software ICs, fault tolerance, parallel processing, and
a million MIPS (ya, 10^12 ips), here's YOUR assignment:

1) A speaker-independent, continuous-speech, multilingual real-time
translation system.  Make sure you don't mess up when the
speech is ambiguous, ungrammatical, or a phrase is incomplete.
Be sure to maintain speaker characteristics (what's Chinese sound
like with a Texas accent?).  As you may know, Japan is funding
a 7-year effort at $120 million to put a neural net in a telephone
which accomplishes this feat for Japanese <-> English (it's a
picture phone too, so part of the problem is to make lips
sync with the speech, I guess).

2) Build a machine which can read a chapter of a physics text and
then answer the questions at the end.  At least this one can be
done by some humans!

While I'm sure some interesting results would come from attempting
such projects, these sorts of things could probably be done sooner
by tossing out ethical considerations and cloning humanoids.

If we were to accept the premise that Big Science is a Good Thing,
what should our one big goal be?  I personally think an effort to
develop a true man-machine interface (i.e., neural i/o) would be
the most beneficial in terms of both applications and as a driving
force for several disciplines.
-- 
Doug Salot || doug@feedme.UUCP || ...{zardoz,dhw68k,conexch}!feedme!doug
           Raisin Deters - Breakfast never tasted so good.

bds@lzaz.ATT.COM (B.SZABLAK) (09/23/88)

In article <123@feedme.UUCP>, doug@feedme.UUCP (Doug Salot) writes:
> If we were to accept the premise that Big Science is a Good Thing,
> what should our one big goal be?

Build a probe to explore the surfaces of Mars et al. without direct human
guidance. To make the project hard [;-)], it should be able to determine
what factors in the environment are of intrinsic interest.

In a similar vein, I liked the idea presented in New Scientist about a year
or so ago: an interstellar probe consisting of a light sail launched by
lasers (a la "The Mote in God's Eye"). The probe would weigh only a few
hundred grams and have the circuits of the computer built into the sail
(along with photocells, sensors, and radio). Travel time to the nearest
stars would be a few decades.
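A quick back-of-the-envelope check of that "few decades" figure, assuming a cruise speed of a tenth of light speed (an illustrative number for laser-pushed sails; the New Scientist article's actual figures aren't given here):

```python
# Rough travel-time estimate for a laser-pushed light sail.
# The cruise speed (0.1 c) is an assumed, illustrative value.
ALPHA_CENTAURI_LY = 4.37        # light-years to the nearest star system
CRUISE_FRACTION_OF_C = 0.1      # assumed average speed, as a fraction of c

# Distance in light-years divided by speed in c gives time in years.
years = ALPHA_CENTAURI_LY / CRUISE_FRACTION_OF_C
print(f"Travel time at {CRUISE_FRACTION_OF_C:.0%} of c: about {years:.0f} years")
```

At that speed the trip to Alpha Centauri takes about 44 years, which squares with "a few decades."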

smoliar@vaxa.isi.edu (Stephen Smoliar) (09/23/88)

In article <123@feedme.UUCP> doug@feedme.UUCP (Doug Salot) writes:
>In the 16 Sept. issue of Science, there's a blurb about the
>recently released report of the National Academy of Sciences'
>Computer Science and Technology Board ("The National Challenge
>in Computer Science and Technology," National Academy Press,
>Washington, DC, 1988).  Just when you thought you had the
>blocks world figured out, something like this comes along.
>
>Their idea is to start a U.S. Big Science (computer science,
>that is) effort a la Japan.  In addition to the usual clamoring
>for software ICs, fault tolerance, parallel processing, and
>a million MIPS (ya, 10^12 ips), here's YOUR assignment:
>
>2) Build a machine which can read a chapter of a physics text and
>then answer the questions at the end.  At least this one can be
>done by some humans!
>
I just heard a MacNeil-Lehrer report to the effect that student science
scores are at an all-time low.  Is this the government's proposed solution
to that problem . . . send robots to school instead of kids?  If we want
to take on a REALLY hard problem, why not put our minds to repairing an
educational system which has been all but decimated by current government
policy?

dmocsny@uceng.UC.EDU (daniel mocsny) (09/23/88)

In article <123@feedme.UUCP>, doug@feedme.UUCP (Doug Salot) writes:
[ goals for computer science ]

> 2) Build a machine which can read a chapter of a physics text and
> then answer the questions at the end.  At least this one can be
> done by some humans!
> 
> While I'm sure some interesting results would come from attempting
> such projects, these sorts of things could probably be done sooner
> by tossing out ethical considerations and cloning humanoids.

A machine that could digest a physics text and then answer questions
about the material would be of astronomical value. Sure, humanoids can
do this after a fashion, but they have at least three drawbacks: 

(1) Some are much better than others, and the really good ones are
rare and thus expensive,
(2) None are immortal or particularly speedy (which limits the amount of
useful knowledge you can pack into one individual),
(3) No matter how much the previous humanoids learn, the next one
still has to start from scratch.

We spend billions of dollars piling up research results. The result,
which we call ``human knowledge,'' we inscribe on paper sheets and
stack in libraries. ``Human knowledge'' is hardly monolithic. Instead
we partition it arbitrarily and assign high-priced specialists to each
piece. As a result, ``human knowledge'' is hardly available in any
sort of general, meaningful sense. To find all the previous work
relevant to a new problem is often quite an arduous task, especially
when it spans several disciplines (as it does with increasing 
frequency). I submit that our failure to provide ourselves with
transparent, simple access to human knowledge stands as one of the
leading impediments to human progress. We can't provide such access
with a system that dates back to the days of square-rigged ships.

In my own field (chemical process design) we had a problem (synthesizing
heat recovery networks in process plants) that occupied scores of
researchers from 1970 to 1985. Lots of people tried all sorts of approaches
and eventually (after who knows how many grants, etc.) someone spotted
some important analogies with some problems from Operations Research work
of the '50s. We did have to develop some additional theory, but we could
have saved a decade or so with a machine that ``knew'' the literature.

Another example of an industrially significant problem in my field is
this: given a target molecule and a list of available precursors,
along with whatever data you can scrape together on possible chemical
reactions, find the best sequence of reactions to yield the target
from the precursors. Chemists call this the design of chemical syntheses,
and chemical engineers call it the reaction path synthesis problem. Since
no general method exists to accurately predict the success of a chemical
reaction, one must use experimental data. And the chemical literature
contains references to literally millions of compounds and reactions, with
more appearing every day. Researchers have constructed successful programs
to solve these types of problems, but they suffer from a big drawback: no
such program embodies enough knowledge of chemistry to be really useful.
The programs have some elaborate methods to represent reaction
data, but these knowledge bases had to be hand-coded. Due to the chaos
in the literature, no general method of compiling reaction data automatically
has worked yet. Here we have an example of the literature containing
information of enormous potential value that is effectively useless.
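The search skeleton of the reaction path synthesis problem can be sketched in a few lines; as the paragraph above notes, the hard part is the knowledge base, not the search. The molecule names and reactions below are invented purely for illustration:

```python
from collections import deque

# Toy hand-coded reaction database: each entry maps a set of reactant
# names to a product name.  Names are made up; a real system would need
# millions of experimentally grounded entries from the literature.
REACTIONS = {
    frozenset({"A", "B"}): "C",
    frozenset({"C", "D"}): "E",
    frozenset({"A", "D"}): "F",
}

def reaction_path(precursors, target):
    """Breadth-first search for a shortest reaction sequence that
    produces `target` starting from the available `precursors`."""
    start = frozenset(precursors)
    queue = deque([(start, [])])   # (compounds on hand, steps so far)
    seen = {start}
    while queue:
        have, path = queue.popleft()
        if target in have:
            return path
        for reactants, product in REACTIONS.items():
            if reactants <= have and product not in have:
                nxt = have | {product}
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, path + [(sorted(reactants), product)]))
    return None  # target unreachable with the known reactions

print(reaction_path({"A", "B", "D"}, "E"))
# -> [(['A', 'B'], 'C'), (['C', 'D'], 'E')]
```

Breadth-first search guarantees a shortest sequence of reactions; the real obstacle is filling REACTIONS automatically from the chaotic literature, which is exactly the unsolved part.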

If someone handed me a machine that could digest all (or at least
large subsets) of the technical literature and then answer any
question that was answerable from the literature, I could become a
wealthy man in short order. I doubt that many of us can imagine how
valuable such a device would be. I hope to live to see such a thing.

Dan Mocsny

daryl@arthur.uchicago.edu (Daryl McLaurine) (09/25/88)

On "Human Knowledge"...

	I am one of many people who make a living by generating solutions to
complex problems or tasks in a specific field by understanding the
relationships between my field and many 'unrelated' fields of study.  As the
complexity of today's world increases, the realm of "Human Knowledge" cannot
remain 'monolithic'; to solve many problems, _especially_ in AI, one must
acquire the feel of the dynamic 'flow' of human experience and sense the
connectives within.  Few people are adept at this, and the ones who are either
become _the_ leading edge of their field, or are called upon to consult for
others by acting as that mythical construct that will 'understand' human
experience on demand.

	In my field, both academic and professional, I strive to make systems
that will acquire knowledge and make, _AT BEST_, moderately simple
correlations in data that may point to solutions to a specified task.  It is
still the realm of the Human Investigator to take these suggestions and make
a complete analysis of them by drawing on his/her(?) own heuristic capability
to arrive at a solution.  To date, the most advanced construct I have seen
only does a type of informational investigative 'leg work', and rarely can it
correlate facts that seem to be unrelated but may actually be ontological.
(But I am working on it ;-} )  It is true that the computer model of what we
do would be more effective for a research investigator, but the point at which
we can program 'intuitive knowledge' beyond simple relationships in pattern
recognition is far off.  The human in this equation is still an unknown factor
to itself (can YOU tell me how you think?  If you can, there are MANY
cognitive sci. people [psychologists, AI researchers, etc.] who want to talk
to you...), and until we can solve the grand challenge of knowing ourselves,
our creations are little more than idiot savants (and bloody expensive ones
at that!)

-kill me, not my clients (Translated from the legalese...)
   ^
<{[-]}>-----------------------------------------------------------------------
   V   Daryl McLaurine, Programmer/Analyst (Consultant)
   |   Contact: 
   |       Home:   1-312-955-2803 (Voice M-F 7pm/1am)
   |       Office: Computer Innovations 1-312-663-5930 (Voice M-F 9am/5pm)
   |         daryl@arthur (or zaphod,daisy,neuro,zem,beeblebrox) .UChicago.edu
==\*/=========================================================================

jbn@glacier.STANFORD.EDU (John B. Nagle) (09/26/88)

      The lesson of the last five years seems to be that throwing money at
AI is not enormously productive.  The promise of expert systems has not
been fulfilled (I will refrain from quoting some of the promises today);
the Japanese Fifth Generation effort has not resulted in any visible
breakthroughs (although there are some who say that its real purpose was
to divert American attention from the efforts of Hitachi and Fujitsu to
move into the mainframe computer business); and the DARPA/Army Tank Command
autonomous land vehicle effort has produced vehicles that are bigger,
but just barely able to stay on a well-defined road on good days.

      What real progress there is doesn't seem to be coming from the big-bucks
projects.  People like Rod Brooks, Doug Lenat, and a few others seem to be
making progress.  But they're not part of the big-science system.

      I will not comment on why this is so, but it does, indeed, seem to be
so.  There are areas in which throwing money at the problem does work,
but AI may not be one of them at this stage of our ignorance.

					John Nagle

leverich@randvax.UUCP (Brian Leverich) (09/27/88)

In article <17736@glacier.STANFORD.EDU> jbn@glacier.UUCP (John B. Nagle) writes:
>
>      The lesson of the last five years seems to be that throwing money at
>AI is not enormously productive.

Recent "big science" failures notwithstanding, the infusion of money into
AI may turn out to have been a more productive investment than we realize.

As a case in point, consider expert system technology.  It seems doubtful
that the technology is currently or soon will be capable of capturing
human "expertise" in more than a relative handful of freakishly
well-defined domains.

That doesn't mean the technology is useless, though.  Antiquated COBOL
programming replaced or substantially increased the productivity of
millions of clerks who used to do the arithmetic necessary to maintain
ledgers.  There still are millions of clerks out there who perform
evaluation activities that can be very well defined but are too complex to
cost-effectively program, debug, maintain, and document in COBOL.  A safe
bet is that over the next decade what shells _really_ do is allow the
business data processing community to automate a whole class of clerical
activities they haven't been able to handle in the past.  Unglamorous as
it seems, that single class of applications will really (no hype) save
industry billions of dollars.
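The kind of well-defined clerical evaluation a shell captures can be sketched as an ordered list of rules, each a condition paired with a verdict. The loan-screening rules and thresholds below are invented for illustration, not taken from any real shell:

```python
# Toy rule-based "shell": clerical screening expressed as ordered
# (condition, verdict) pairs, checked until one condition fires.
# Rules and thresholds are made-up examples.
RULES = [
    (lambda a: a["debt"] > 0.5 * a["income"], "reject: debt too high"),
    (lambda a: a["income"] >= 20000 and a["years_employed"] >= 2, "approve"),
    (lambda a: True, "refer to human reviewer"),   # catch-all default
]

def evaluate(applicant):
    """Return the verdict of the first rule whose condition holds."""
    for condition, verdict in RULES:
        if condition(applicant):
            return verdict

print(evaluate({"income": 30000, "debt": 4000, "years_employed": 5}))
# -> approve
```

The point of the shell is that a business analyst can state such rules directly, instead of burying the same logic in hand-written COBOL that must then be debugged, maintained, and documented.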

When talking about the productivity of research, rather than looking at how
well it satisfies its own goals, it may make more sense to take a hard-headed
"engineering" perspective and ask what can be built after the research that
couldn't be built before.
-- 
  "Simulate it in ROSS"
  Brian Leverich                       | U.S. Snail: 1700 Main St.
  ARPAnet:     leverich@rand-unix      |             Santa Monica, CA 90406
  UUCP/usenet: decvax!randvax!leverich | Ma Bell:    (213) 393-0411 X7769