[comp.ai] Ill-structured problems

jbn@glacier.STANFORD.EDU (John B. Nagle) (06/06/88)

In article <5644@venera.isi.edu> smoliar@vaxa.isi.edu.UUCP (Stephen Smoliar) writes:
>Take a look at Herb Simon's article in ARTIFICIAL INTELLIGENCE about
>"ill-structured problems" and then decide whether or not you want to
>make that bet.

      A reference to the above would be helpful.  

      Little progress has been made on ill-structured problems in AI.
This reflects a decision in the AI community made in the early 1970s to
defer work on those hard problems and go for what appeared to be an
easier path, the path of logic/language/formal representation sometimes
referred to as "mainstream AI".  In the early 1970s, both Minsky and
McCarthy were working on robots; McCarthy proposed to build a robot 
capable of assembling a Heathkit color TV kit.  This was a discrete
component TV, requiring extensive soldering and hand-wiring to build,
not just some board insertion.  The TV kit was actually
purchased, but the robot assembly project went nowhere.  Eventually,
somebody at the SAIL lab assembled the TV kit, which lived in the SAIL
lounge for many years, providing diversion for a whole generation of 
hackers. 

      Embarrassments like this tended to discourage AI workers from
attempting projects where failure was so painfully obvious.  With
more abstract problems, one can usually claim (one might uncharitably
say "fake") success by presenting one's completed system only with
carefully chosen problems that it can deal with.  But in dealing
with the physical world, one regularly runs into ill-structured
problems that can't be bypassed.  This can be hazardous to your career.
If you fail, your thesis committee will know.  Your sponsor will know.
Your peers will know.  Worst of all, you will know.

      So most AI researchers abandoned the problems of vision, navigation,
decision-making in ill-structured physical environments, and a number
of other problems which must be solved before there is any hope of dealing
effectively with the physical world.  Efforts were focused on logic,
language, abstraction, and "understanding".  Much progress was made; we
now have a whole industry devoted to the production of systems with
a superficial but useful knowledge of a wide assortment of problems.

      Still, in the last few years, the state of the art in that area
seems to have reached a plateau.  That set of ideas may have been
mined out.  Certainly the public claims made a few years ago have not been
fulfilled.  (I will refrain from naming names; that's not my point today.)
The phrase "AI winter" is heard in some quarters.