[comp.ai] LOGIC AND RELATED STUFF

levesque@csli.stanford.edu (Hector Levesque) (06/19/91)

I've never posted to comp.ai before (and may regret it!), so please forgive
violations of protocol.  But these attacks on logic and truth, though maybe
familiar in AI from about 15 years ago, deserve some comment.

First, a minor complaint about Minsky's post about logic.  I agree
completely that universal generalization could end up playing a very minor
role in our cognitive life.  But I disagree completely that the utility of
logic is somehow thereby compromised.  One very popular logical theory,
sometimes called "quantification theory," is indeed concerned with
expressing in logical terms the properties of "for all" and "for some."
But even existing logical theories go well beyond this.  The theory of
generalized quantifiers, for example, examines properties of quantifiers
like "many," "most," "almost all" and the like that stand to play a much
more important role in expressing what we believe.  Then there are the
statistical/probabilistic accounts (a la Bacchus/Halpern), the nonmonotonic
accounts, etc.  To say that logic lives or dies with generalization and
instantiation is like saying it lives or dies with exclusive-or.
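
To make the contrast concrete, here is a toy sketch (my own illustration,
not drawn from the generalized-quantifier literature) of why "most" is not
universal generalization in disguise: it is evaluated by counting, not by
checking every instance.  The birds and the flies() predicate are invented,
and "more than half" is only one of several truth conditions proposed for
"most".

    def forall(domain, pred):
        """Classical universal quantifier: one counterexample refutes it."""
        return all(pred(x) for x in domain)

    def most(domain, pred):
        """'Most': strictly more than half the domain satisfies pred."""
        return sum(1 for x in domain if pred(x)) > len(domain) / 2

    birds = ["robin", "sparrow", "penguin", "wren", "finch"]
    flies = lambda b: b != "penguin"

    print(forall(birds, flies))  # False: the penguin refutes "all birds fly"
    print(most(birds, flies))    # True:  "most birds fly" survives it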

Another point: I think it is a simple mistake (of logic!) to conclude that
because we can never be *certain* about what we mean when we say something,
or what we are agreeing about, or what is true, that somehow the truth of
the matter is thereby open to negotiation or interpretation, or that we can
decide to act in a way that does not take it into account.  If I tell you
"there's a truck coming towards you from behind", I may have no way of
knowing for sure that my statement is correct, and you may have no way of
being sure either of what I'm getting at or (assuming you've figured it
out) of whether or not what I am saying is true.  But it's a mistake (and a
dangerous one) to conclude from this lack of certainty that the truck issue
is somehow thereby reduced in importance, or that what ultimately matters
is your goals and desires, or our linguistic conventions, or even that one
opinion on the issue is as good as another.  None of these follow from
admitting that we may never know for sure one way or another if there is a
truck.  A skeptic may choose to focus on what I said, question what I mean
by a "truck" (a toy truck?), or just observe the loaded context dependency
and unavoidable subjectivism in how I perceive and report things.  But if
he or she after all this doesn't get it, and does not come to appreciate
very clearly the relevant issue, all is for nought, and the world will do
the rest.  You don't have to *know* what I said, and you don't have to
*know* if what I said is true, but for your own safety and comfort, you'd
better be able to figure out what it would be like for it to be true.

This, I assume, is what logic is for, at least for AI purposes.  Focussing
on Truth in some abstract, all-or-nothing, eternal, godlike sense, is a bit
of a red herring.  What matters I think in AI is being able to explore the
consequences of things being one way and not another, even while admitting
that much of our view of the world is not going to be right (even by our
own terms), and that there is no way to achieve certainty about almost all
of it.  We need to be able to ask ourselves "according to what I now
believe, what would things be like if P?"  The fact that we first use
natural language typically to express a P, and that this language is
infinitely rich and open to endless interpretation and uneliminable context
dependency and bla-bla-bla should really not fool us into thinking that
there is no issue to settle regarding the way the world is.  To fall for
this, as far as I can see, is to undervalue the difference between being
right or wrong about the truck, for example, and to guarantee for oneself a
hermeneutically rich but very short life.
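
For what it's worth, the "what would things be like if P?" question is easy
to picture as a query against a belief base that is extended hypothetically
rather than revised.  A minimal sketch (the facts and rules are invented for
illustration, and nothing here is meant as a serious proposal):

    def consequences(facts, rules):
        """Close a fact set under propositional Horn rules (body, head)."""
        known = set(facts)
        changed = True
        while changed:
            changed = False
            for body, head in rules:
                if head not in known and body <= known:
                    known.add(head)
                    changed = True
        return known

    beliefs = {"standing_in_street"}
    rules = [({"truck_coming", "standing_in_street"}, "in_danger"),
             ({"in_danger"}, "should_move")]

    # "According to what I now believe, what would things be like if
    # truck_coming?"  Extend hypothetically; the beliefs stay untouched.
    print(consequences(beliefs | {"truck_coming"}, rules))
    # -> {'standing_in_street', 'truck_coming', 'in_danger', 'should_move'}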

The fact is, I don't think anyone takes this position too seriously except
when assuming a philosophical stance.  Show me a philosopher that doesn't
fall into realism of the most naive sort when confronted with memos that
say "I'm sorry, but your salary will be reduced by 50%."  Under the right
circumstances (having nothing to do with mathematics or formal artificial
domains!) relativism is put on hold, and the ordinary objective truth about
what might appear to be hopelessly vague imponderables suddenly becomes
very crisp, very precise, and very relevant to action.

Hector Levesque

ISSSSM@NUSVM.BITNET (Stephen Smoliar) (06/19/91)

In article <20018@csli.Stanford.EDU> levesque@csli.stanford.edu (Hector
Levesque) writes:
> I think it is a simple mistake (of logic!) to conclude that
>because we can never be *certain* about what we mean when we say something,
>or what we are agreeing about, or what is true, that somehow the truth of
>the matter is thereby open to negotiation or interpretation, or that we can
>decide to act in a way that does not take it into account.  If I tell you
>"there's a truck coming towards you from behind", I may have no way of
>knowing for sure that my statement is correct, and you may have no way of
>being sure either of what I'm getting at or (assuming you've figured it
>out) of whether or not what I am saying is true.  But it's a mistake (and a
>dangerous one) to conclude from this lack of certainty that the truck issue
>is somehow thereby reduced in importance, or that what ultimately matters
>is your goals and desires, or our linguistic conventions, or even that one
>opinion on the issue is as good as another.  None of these follow from
>admitting that we may never know for sure one way or another if there is a
>truck.  A skeptic may choose to focus on what I said, question what I mean
>by a "truck" (a toy truck?), or just observe the loaded context dependency
>and unavoidable subjectivism in how I perceive and report things.  But if
>he or she after all this doesn't get it, and does not come to appreciate
>very clearly the relevant issue, all is for nought, and the world will do
>the rest.  You don't have to *know* what I said, and you don't have to
>*know* if what I said is true, but for your own safety and comfort, you'd
>better be able to figure out what it would be like for it to be true.
>
I think the REAL mistake in this argument is the attempt to pile too much on
the shoulders of logic.  When you are standing out there in the world, the
issue is not a matter of truth, certainty, or even "what it would be like
for it to be true."  The issue is far simpler:  What do you do when someone
says "there's a truck coming towards you from behind?"  At the risk of
attaching too much importance to Skinner (who has no more claim to having
all the answers than the logicians do) the answer to this question, in its
simplest terms, is that you BEHAVE.  In a situation as urgent as this one,
anything you are likely to call reasoning will not take place until AFTER
you have behaved and you are reflecting on what just happened (perhaps while
choking on the exhaust fumes).  Thus, I think Hector's example is a good
illustration of the danger of confusing the EXPLANATORY value of logic with
any PREDICTIVE value--a point which I recently raised in comp.ai.philosophy.

>This, I assume, is what logic is for, at least for AI purposes.  Focussing
>on Truth in some abstract, all-or-nothing, eternal, godlike sense, is a bit
>of a red herring.  What matters I think in AI is being able to explore the
>consequences of things being one way and not another, even while admitting
>that much of our view of the world is not going to be right (even by our
>own terms), and that there is no way to achieve certainty about almost all
>of it.  We need to be able to ask ourselves "according to what I now
>believe, what would things be like if P?"  The fact that we first use
>natural language typically to express a P, and that this language is
>infinitely rich and open to endless interpretation and uneliminable context
>dependency and bla-bla-bla should really not fool us into thinking that
>there is no issue to settle regarding the way the world is.  To fall for
>this, as far as I can see, is to undervalue the difference between being
>right or wrong about the truck, for example, and to guarantee for oneself a
>hermeneutically rich but very short life.
>
I think it is certainly true that we do "reason" (in that same sense of the
word which I was arguing about above) about hypothetical situations.  Indeed,
our ability to do so is one of the reasons why Skinner does not have all the
answers.  Nevertheless, there is no reason to believe that any machinery which
we engage to ponder hypotheticals (which we tend to be free to do only when any
other demands of the situation are relatively low) is the SAME machinery which
exercises control over our behavior in the here-and-now.  Such uniformity would
be architecturally elegant, but elegance cannot hold a candle to more
fundamental issues of survival such as those Chris Malcolm recently posed
on comp.ai.philosophy.

===============================================================================

Stephen W. Smoliar
Institute of Systems Science
National University of Singapore
Heng Mui Keng Terrace, Kent Ridge
SINGAPORE 0511

BITNET:  ISSSSM@NUSVM

"He was of Lord Essex's opinion, 'rather to go an hundred miles to speak with
one wise man, than five miles to see a fair town.'"--Boswell on Johnson

thomas@ckgp.UUCP (Michael Thomas) (06/20/91)

In article <9106190527.AA17403@lilac.berkeley.edu>, ISSSSM@NUSVM.BITNET (Stephen Smoliar) writes:
> In article <20018@csli.Stanford.EDU> levesque@csli.stanford.edu (Hector
> Levesque) writes:
> > I think it is a simple mistake (of logic!) to conclude that
> >because we can never be *certain* about what we mean when we say something,
> >or what we are agreeing about, or what is true, that somehow the truth of
> >the matter is thereby open to negotiation or interpretation, or that we can
> >decide to act in a way that does not take it into account.  If I tell you
> >"there's a truck coming towards you from behind", I may have no way of
> >knowing for sure that my statement is correct, and you may have no way of

    Despite what we learned in school, the TRUTH, or what REALITY is, is
something that you (or an AI) must create for yourself.  You must
determine what is true and what is not.  If you said a truck is coming at
me, the brain/mind would instantly become alert and look for the sound,
sight, or other stimulus that would lead to a personal truth.

> The issue is far simpler:  What do you do when someone
> says "there's a truck coming towards you from behind?"  At the risk of
> attaching too much importance to Skinner (who has no more claim to having
> all the answers than the logicians do) the answer to this question, in its
> simplest terms, is that you BEHAVE.  In a situation as urgent as this one,

   The person would also assess the reliability of the speaker...
(you know the story of the boy who cried wolf?)  Yes, the first two
times it works, but then the effect dies out.  That was a story; in
REAL life, what would happen if the stimulus wasn't there?  (Actually,
I have never heard of anyone getting hit by a truck or car, just
parents saying that kids will if they go NEAR the street.)  Try it:
just say to someone, "LOOK OUT, THERE IS A TRUCK COMING!" and see what
they do.  First, if you're not in the middle of the street, it won't
work... if you are, I bet they will turn around first!

> >This, I assume, is what logic is for, at least for AI purposes.  Focussing
> >on Truth in some abstract, all-or-nothing, eternal, godlike sense, is a bit
> >of a red herring.  What matters I think in AI is being able to explore the
> >consequences of things being one way and not another, even while admitting

   I'm sure you can see that an AI must use clues from the
world/environment to form its own view of the TRUTH about the world
around it.  Isn't this what we do: trust our senses?
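
   Here is a rough sketch of what I mean (the weights and threshold are
just made up for illustration, not a claim about how brains work): weight
the speaker's track record, then look for corroborating stimuli.

    def believe_truck(speaker_reliability, heard_engine, saw_motion):
        """Personal 'truth' about the report: the report alone is not
        enough; corroborating stimuli push the estimate up, and an
        unreliable speaker pulls it down.  Weights are invented."""
        evidence = 0.6 * speaker_reliability
        evidence += 0.3 if heard_engine else 0.0
        evidence += 0.3 if saw_motion else 0.0
        return evidence > 0.5

    print(believe_truck(0.9, heard_engine=True, saw_motion=False))   # True
    print(believe_truck(0.2, heard_engine=False, saw_motion=False))  # False: cried wolf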
 
   "Some things have to be believed to be seen." --Ralph Hodgen

Thanks for listening...
-- 
Thank you,
Michael Thomas
(..uunet!ckgp!thomas)

byland@iris.cis.ohio-state.edu (Tom Bylander) (06/26/91)

In article <9106190527.AA17403@lilac.berkeley.edu> ISSSSM@NUSVM.BITNET (Stephen Smoliar) writes:
>When you are standing out there in the world, the
>issue is not a matter of truth, certainty, or even "what it would be like
>for it to be true."  The issue is far simpler:  What do you do when someone
>says "there's a truck coming towards you from behind?"
>[T]he answer to this question, in its
>simplest terms, is that you BEHAVE.  In a situation as urgent as this one,
>anything you are likely to call reasoning will not take place until AFTER
>you have behaved and you are reflecting on what just happened (perhaps while
>choking on the exhaust fumes).

I think there are a couple of (common) confusions here.

First, there is the confusion of equating reasoning with deliberative,
conscious behavior.  If I accidentally touch a hot surface, and then
involuntarily flinch, have I made no inferences at all?  To the
contrary, one answer is that I (or the relevant part of my nervous
system) "perceived" that I am touching a hot surface, and that I have
inferred that I should move away from it quickly.  The fact that the
transformation from hotness to flinching is not deliberative/conscious
does not imply that no reasoning has occurred, or that no "truths"
have been represented [too many negatives, I know].

Second, there is the confusion between languages of analysis and the
phenomena being analyzed.  Using logic to analyze some reasoning does
not imply that the reasoning itself explicitly uses the rules of
logic.  For example, computational learning theory uses statistics and
computational complexity to analyze inductive learning algorithms.
However, the algorithms themselves do not apply the rules of
statistics or computational complexity.  Similarly, an algorithm can
be analyzed using logic without any requirement that the algorithm
explicitly use resolution, quantifiers, etc.  (Note: to avoid logical
analysis, you will have to avoid, among other things, doing any
programming at all!)  The bottom line is that any argument of the sort
"logic is bad because we don't explicitly use it" is a non-starter.

>Thus, I think Hector's example is a good
>illustration of the danger of confusing the EXPLANATORY value of logic with
>any PREDICTIVE value--a point which I recently raised in comp.ai.philosophy.
 
It is very mysterious to me how you are going to make any predictions
without inferring them from some initial situation, i.e., without
doing logic.

I should mention that I do not believe that logic is going to solve
all the world's problems.  As many articles have noted, there are lots
of problems with logic.  However, just because logic has some problems
doesn't mean that logic is dispensable.  Whether we like it or not,
modus ponens is still something we will have to take into account.
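
To put that in the smallest possible case (a toy illustration, with
invented propositions): a prediction is one application of modus ponens to
an initial situation and a conditional belief.

    def modus_ponens(fact, conditional):
        """From P and P -> Q, conclude Q; otherwise conclude nothing."""
        antecedent, consequent = conditional
        return consequent if fact == antecedent else None

    situation = "truck_coming"
    belief = ("truck_coming", "collision_soon")  # truck_coming -> collision_soon
    print(modus_ponens(situation, belief))       # -> collision_soon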

Tom Bylander
byland@cis.ohio-state.edu