[comp.ai.digest] Philosophy, Artificial Intelligence and Complexity Theory

ray@basser.cs.su.OZ.AU.UUCP (05/25/87)

Lately I've been chatting informally to a philosopher/friend about
common interests in our work.  He was unfamiliar with the concept of the
TIME TO COMPUTE consequences of facts.  Furthermore, the ramifications of
intractability (i.e. if P != NP is, as we all suspect, true) seemed to
be new to my friend.  The absolute consequences are hard to get across
to a non-computer scientist; they always say "but computers are getting
faster all the time...".

I'm digging around for references in AI on these ideas. This isn't my area.
Can anyone suggest some?

I've dug up a couple of relevant papers:

"Some Philosophical Problems from the Standpoint of Artificial Intelligence",
McCarthy and Hayes, 1969.  I think this is the most obvious starting point,
with its description of the frame problem.  However, the authors seem to
discuss the issue from the standpoint of what a formalism must contain so
that the system is CAPABLE of computing what it needs rather than the TIME
to compute these things (please correct me if I'm wrong).  This is hardly
surprising, as it predates the seminal P=NP? work.

"The Tractability of Subsumption in Frame-Based Description Languages",
Brachman and Levesque, AAAI-84.  This paper is relevant, but too
implementation-specific for what I want.  I want something more general
and preferably philosophically oriented.


*NOTE*: I'm certainly NOT implying that AI is impossible! But the notion
of intractability is one that must be addressed.  I'm sure it has been
addressed.  I'm just chasing a few references, more for the benefit of my
colleague than myself.


Raymond Lister
Basser Department of Computer Science
University of Sydney
NSW  2006
AUSTRALIA

ACSnet:  ray@basser
ARPANET: ray%basser.oz@seismo.arpa

tanner@osu-eddie.UUCP (Mike Tanner) (05/29/87)

We have a paper to appear in this year's IJCAI called "On the
computational complexity of hypothesis assembly", by Allemang, Tanner,
Bylander, and Josephson.

Hypothesis assembly is a part of many problem-solving tasks.  E.g., in
medical diagnosis the problem is to find a collection of diseases
which are consistent, plausible, and collectively explain the
symptoms.
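
To make the task concrete, here is a toy sketch of assembly as a
covering problem (my own illustration, NOT the algorithm from the paper;
all names are invented, and the consistency/plausibility filtering is
omitted):

    def assemble(findings, explains):
        """Greedy assembly: repeatedly pick the hypothesis that covers
        the most still-unexplained findings.  explains maps each
        hypothesis to the set of findings it accounts for."""
        unexplained, chosen = set(findings), []
        while unexplained:
            best = max(explains, key=lambda h: len(explains[h] & unexplained))
            if not explains[best] & unexplained:
                return None              # some finding cannot be explained
            chosen.append(best)
            unexplained -= explains[best]
        return chosen

    findings = {"fever", "rash", "cough"}
    explains = {"measles": {"fever", "rash"},
                "flu":     {"fever", "cough"},
                "allergy": {"rash"}}
    print(assemble(findings, explains))  # -> ['measles', 'flu']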

Our paper analyzes a particular algorithm we've developed for solving
this problem.  The algorithm turns out to be polynomial under certain
assumptions.  But the problem of hypothesis assembly is shown to be
NP-complete, by a reduction from 3-SAT, when those assumptions are
violated.  In particular, if there are hypotheses which are
incompatible with each other, it becomes NP-complete.  (Another
well-known algorithm for the same problem, Reggia's generalized set
covering model, is shown to be NP-complete also, by a reduction from
vertex cover.)

The interesting part of the paper is the discussion of what this
means.  The bottom line is, people solve problems like this all the
time without apparent exponential increase in effort.  We take this to
mean human problem solvers are taking advantage of features of the
domain to properly organize their knowledge and problem solving
strategy so that these complexity issues don't arise.

In the particular case discussed in the paper the problem is the
identification of antibodies in blood prior to giving transfusions.
There exist pairs of antibodies that people simply cannot have both
of, for genetic reasons.  So we're in the NP-complete case.  But it
is possible to reason up front about the antibodies and typically rule
out one of each pair of incompatible antibodies (the hypotheses), and
only then do the assembly of a complete explanation.  This makes the
assembly step polynomial.
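
Schematically the two-phase idea looks something like this (again a toy
sketch with invented names, not our actual code):

    # Phase 1: use domain knowledge to discard one member of each
    # genetically incompatible pair, keeping the more plausible member
    # (the plausibility ordering here is a hypothetical stand-in).
    def rule_out(hypotheses, incompatible, plausibility):
        survivors = set(hypotheses)
        for a, b in incompatible:
            if a in survivors and b in survivors:
                survivors.discard(a if plausibility[a] < plausibility[b] else b)
        return survivors

    # Phase 2 can then be the assemble() sketch from my earlier example,
    # restricted to the survivors, so it never has to search over
    # combinations of mutually exclusive hypotheses.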

If you send me your land mail address I can send you a copy of the
paper.  Or you can wait for the movie.  (Dean Allemang will present it
in Milan.)

-- mike

ARPA:  tanner@ohio-state
UUCP:  ...!cbosgd!osu-eddie!tanner