[comp.ai.digest] Liar's paradox, AI .vs. human error

DANTE@EDWARDS-2060.ARPA (Mike Dante) (08/07/88)

Date: Thu, 4 Aug 88 12:03 EDT
From: Mike Dante <DANTE@EDWARDS-2060.ARPA>
Subject: Liar's paradox, AI .vs. human error
To: AIList@AI.AI.MIT.EDU
In-Reply-To: Message from "AIList Moderator Nick Papadakis <AIList-REQUEST@AI.AI.MIT.EDU>" of Wed 3 Aug 88 17:36:59-PDT

1.  Bruce Nevin suggests that the solution to the "liar's paradox" lies in
    self-reference to an incomplete utterance.  How would that analysis apply
    to the following pair of sentences?
                  The next sentence I write will be true.
                  The previous sentence is false.
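    [Editorial illustration, not part of the original post: if we model each
    sentence as being true exactly when what it asserts holds -- an assumption
    about how to formalize the pair -- a brute-force check over the four
    possible truth assignments shows that none is consistent, which is what
    makes the pair paradoxical.]

        # Sketch: enumerate truth assignments for the two sentences.
        #   S1: "The next sentence I write will be true."  (asserts S2 is true)
        #   S2: "The previous sentence is false."           (asserts S1 is false)
        from itertools import product

        consistent = []
        for s1, s2 in product([True, False], repeat=2):
            # S1 is true exactly when its claim (S2 is true) holds;
            # S2 is true exactly when its claim (S1 is false) holds.
            if s1 == s2 and s2 == (not s1):
                consistent.append((s1, s2))

        print(consistent)   # prints [] -- no assignment satisfies both sentences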

2.  Back when it appeared that the shooting down of the airliner in the Gulf
    was the result of an "AI" system error, there was a series of digest
    postings using the incident as proof of the dangers of relying on computers
    to make decisions.  Now that the latest analysis seems to show that the
    computer correctly identified the airliner but that the human operators
    erroneously interpreted the results, I look forward to an equally extensive
    series of postings pointing out that we should not leave such decisions to
    fallible humans but should rely on the AI systems.
---  Or were the previous postings based more on presuppositions and political
considerations than on an analysis of evidence?
-------