[comp.ai] Behaviour, not logic

levesque@csli.stanford.edu (Hector Levesque) (06/21/91)

I can't figure out how to post a follow-up to the newsgroup from my mailer, but
this is in response to Smoliar's comments on my recent post.

I essentially agree with him that the real issue here is not this "Logic,
Truth, and the Fabric of Reality" business at all (thankfully!), but what
would allow an agent to BEHAVE (his emphasis) in an appropriate way after
being told about the truck.  That is, what would it take for an agent
hearing the sounds of these words to decide to get out of the way?  (Or if
it's possible here to get out of the way without somehow *deciding* to do
so, how would *that* work?)  It might turn out that good ol' reasoning will
only show up as an after-the-fact rationalization ("I moved quickly, so I
guess I must've thought I was going to get squashed!"), or maybe it will
end up playing an important role in actually deciding what to do.  The fact
that I suspect it's the latter is neither here nor there.  As far as I'm
concerned, this is not a philosophical question at all, and arguments about
what's sufficient or necessary to generate this type of behaviour are best
settled by trying out and analyzing a whole lot of designs for agents with
these capabilities. So let's get on with it, already!
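To make the contrast concrete, here's a throwaway sketch (entirely my own toy,
not anybody's proposed architecture, and the names are made up): one agent maps
the warning straight onto a motor response and rationalizes afterwards; the
other forms a belief from the words, infers danger, and only then picks an
action.

    # Two toy agent designs, purely for illustration; neither is a serious
    # proposal, just a way of making the reactive/deliberative contrast vivid.

    class ReactiveAgent:
        # Design 1: the utterance triggers the action directly; any
        # "reasoning" it reports is an after-the-fact rationalization.
        REFLEXES = {"truck": "jump aside"}          # stimulus -> response

        def hear(self, utterance):
            for trigger, action in self.REFLEXES.items():
                if trigger in utterance:
                    return action, "I moved, so I guess I thought I'd be squashed"
            return "stand still", "nothing to explain"

    class DeliberativeAgent:
        # Design 2: turn the words into a belief, draw a conclusion about
        # danger, and only then decide what to do.
        DANGEROUS = {"truck", "bus", "train"}        # crude background knowledge

        def hear(self, utterance):
            beliefs = set(utterance.lower().split())   # "understand" the words
            threat = beliefs & self.DANGEROUS          # infer: am I in danger?
            action = "jump aside" if threat else "stand still"
            reason = ("I believed a " + threat.pop() + " was coming"
                      if threat else "no threat inferred")
            return action, reason

    if __name__ == "__main__":
        warning = "look out, a truck is bearing down on you"
        for agent in (ReactiveAgent(), DeliberativeAgent()):
            action, account = agent.hear(warning)
            print(type(agent).__name__ + ":", action, "(" + account + ")")

On this canned example the two behave identically, which is exactly the point:
the argument over which kind of design is actually *needed* only gets settled
by building and analyzing lots of them in less canned situations.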

Hector Levesque