[comp.ai] The future of AI

kirby@ut-ngp.UUCP (Bruce Kirby) (10/30/87)

I have a question for people:
   What practical effects do you think AI will have in the next ten
years?

What I am interested in is discovering what people expect to actually
come out of AI research in the near future,  and how that will affect
society,  business and government.  I am not interested in the
long-term questions of what AI will eventually accomplish.

Some supplementary questions:
   - What field of AI will produce practical applications?
   - What will be the effect of a new application? (e.g. how would an
effective translation mechanism affect the way people function?)
   - Who is likely to produce these useful applications?  How are they
to be introduced?

Any comments/responses are welcome.  I am just trying to get a feel
for what other people see as the near-term effects of AI research.

Bruce Kirby
kirby@ngp.utexas.edu
...!ut-sally!ut-ngp!kirby

goldfain@osiris.cso.uiuc.edu (11/01/87)

Re: Products in the next 10 years coming from AI.

One thing that is currently out there is a growing body of expert systems.
Many new ones are being churned out as we speak, and I think they will
continue to be produced at a gently accelerating rate over the next decade.
But many expert systems are frightfully narrow.  They tend to be simplistic
and only apply when problems are just right.  So look for additional layers,
which begin to show some real sophistication.  I expect
"multi-expert-system-management-systems" to appear and to exhibit qualities
that will begin to look like the human traits of "judgement" and "learning by
analogy", and systems that will improve with time (autonomously).
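The narrowness complained about above is easy to see in a toy forward-chaining
rule system (a hypothetical illustration, not any particular product -- the
rules and fact names are made up):

```python
# A toy forward-chaining expert system: a rule fires when all of its
# premises are in the fact base, adding its conclusion as a new fact.
# Note how narrow it is: any fact outside the rule vocabulary
# simply never matches anything.

RULES = [
    ({"wont_start", "battery_dead"}, "charge_battery"),
    ({"wont_start", "battery_ok"}, "check_starter"),
    ({"leaks_oil", "engine_old"}, "replace_gasket"),
]

def infer(facts):
    """Apply rules repeatedly until no new conclusions appear."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in RULES:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(infer({"wont_start", "battery_dead"}))
```

A "multi-expert-system-management-system" would, presumably, sit above several
such rule bases and decide which one's vocabulary a problem falls into.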

crawford@endor.harvard.edu (Alexander Crawford) (11/05/87)

The first impact from AI on software in general will be natural
language interfaces.  Various problems need to be solved, such as how
to map English commands onto a particular application's set of commands
COMPLETELY.  (As Barbara Grosz says, if it can be said, it
can be said in all ways, e.g. "Give me the production report",
"Report", "How's production doing?".)  Once this is completed for a
large portion of applications, it will become a severe disadvantage in
the marketplace NOT to offer a natural-language interface.
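Grosz's observation is exactly what makes this hard: every phrasing must land
on the same internal command.  A minimal sketch of the mapping (the patterns
and command names here are hypothetical, not any real product's grammar):

```python
import re

# Map many English surface forms onto one internal application command.
# These patterns are hypothetical illustrations of the idea.
PATTERNS = [
    (r"give me the (\w+) report", "SHOW_REPORT"),
    (r"^(\w+) report$", "SHOW_REPORT"),
    (r"how'?s (\w+) doing", "SHOW_REPORT"),
]

def parse(utterance):
    """Return (command, topic), or None if no pattern matches."""
    text = utterance.strip().lower().rstrip("?.!")
    for pattern, command in PATTERNS:
        m = re.search(pattern, text)
        if m:
            return (command, m.group(1))
    return None

print(parse("Give me the production report"))  # -> ('SHOW_REPORT', 'production')
print(parse("How's production doing?"))        # -> ('SHOW_REPORT', 'production')
print(parse("Report"))  # -> None: elliptical forms need context the patterns lack
```

The last case shows why "completely" is the hard word: a bare "Report" is
perfectly good English, but resolving it takes context no pattern list has.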

Coupled with a NLI, machine-learning will allow applications to
improve in different ways as they are used:
	-Interfaces can be customized easily, automatically, for
	 different users.
	-Complex tasks can be learned automatically by having the
	 application examine what the human operator does normally.
	-Search of problem spaces for solutions can be eliminated and
	 replaced by knowledge.  (This is called "chunking".  See
	 MACHINE LEARNING II, Michalski et al. Chapter 10:
	 "The Chunking of Goal Hierarchies: A Generalized Model of
	 Practice" by Rosenbloom and Newell.)
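The third point can be sketched crudely: the first time a goal is solved,
search runs and the result is stored as a "chunk"; later requests for the
same goal retrieve the chunk instead of searching.  (This is a drastic
simplification of Rosenbloom and Newell's model, with made-up names and a
stand-in search problem.)

```python
# Chunking, crudely: cache (goal -> solution) pairs learned by search,
# so repeated goals are answered from knowledge rather than re-searched.

chunks = {}        # learned goal -> solution pairs
search_calls = 0   # how often we had to fall back to search

def solve_by_search(goal):
    """Stand-in for an expensive problem-space search."""
    global search_calls
    search_calls += 1
    # Hypothetical search problem: find a pair of numbers summing to goal.
    for a in range(goal + 1):
        b = goal - a
        if a <= b:
            return (a, b)

def solve(goal):
    if goal not in chunks:
        chunks[goal] = solve_by_search(goal)  # search once, then chunk it
    return chunks[goal]

solve(10); solve(10); solve(10)
print(search_calls)  # search ran only once; chunks answered the rest
```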

-Alec (crawford@endor.UUCP)

rwojcik@bcsaic.UUCP (Rick Wojcik) (04/07/88)

In article <1134@its63b.ed.ac.uk> gvw@its63b.ed.ac.uk (G Wilson) writes:
>[re: my reference to natural language programs]
>Errmmm...show me *any* program which can do these things?  To date,
>AI has been successful in these areas only when used in toy domains.
>
NLI's Datatalker, translation programs marketed by Logos, ALPs, WCC, &
other companies, LUNAR, the LIFER programs, CLOUT, Q&A, ASK, INTELLECT,
etc.  There are plenty.  All have flaws.  Some are more "toys" than
others.  Some are more commercially successful than others.  (The goal
of machine translation, at present, is to increase the efficiency of
translators--not to produce polished translations.)

>...  Does anyone think AI would be as prominent
>as it is today without (a) the unrealistic expectations of Star Wars,
>and (b) America's initial nervousness about the Japanese Fifth Generation
>project?
>
I do.  The Japanese are overly optimistic.  But they have shown greater
persistence of vision than Americans in many commercial areas.  Maybe
they are attracted by the enormous potential of AI.  While it is true
that Star Wars needs AI, AI doesn't need Star Wars.  It is difficult to
think of a scientific project that wouldn't benefit by computers that
behave more intelligently.

>Manifest destiny??  A century ago, one could have justified
>continued research in phrenology by its popularity.  Judge science
>by its results, not its fashionability.
>
Right.  And in the early 1960's a lot of people believed that we
couldn't land people on the moon.  When Sputnik I was launched my 5th
grade teacher told the class that they would never orbit a man around
the earth.  I don't know if phrenology ever had a respectable following
in the scientific community.  AI does, and we ought to pursue it whether
it is popular or not.

>I think AI can be summed up by Terry Winograd's defection.  His
>SHRDLU program is still quoted in *every* AI textbook (at least all
>the ones I've seen), but he is no longer a believer in the AI
>research programme (see "Understanding Computers and Cognition",
>by Winograd and Flores). 

Weizenbaum's defection is even better known, and his Eliza program is
cited (but not quoted :-) in every AI textbook too.  Winograd took us a
quantum leap beyond Weizenbaum.  Let's hope that there will be people to take
us a quantum leap beyond Winograd.  But if our generation lacks the will
to tackle the problems, you can be sure that the problems will wait
around for some other generation.  They won't get solved by pessimists.
Henry Ford had a good way of putting it:  "If you believe you can, or if
you believe you can't, you're right."
-- 
Rick Wojcik   csnet:  rwojcik@boeing.com	   
              uucp:  {uw-june  uw-beaver!ssc-vax}!bcsaic!rwojcik 
address:  P.O. Box 24346, MS 7L-64, Seattle, WA 98124-0346
phone:    206-865-3844