[net.ai] AIList Digest V3 #153

AIList-REQUEST@SRI-AI.ARPA (AIList Moderator Kenneth Laws) (10/22/85)

AIList Digest            Tuesday, 22 Oct 1985     Volume 3 : Issue 153

Today's Topics:
  Queries - Mike O'Donnell & Expert Systems for Law Enforcement,
  Literature - AI Book by Jackson,
  AI Tools - YAPS & LISP Workstations,
  Logic - Counterfactuals,
  Opinion - SDI Software & AI Hype

----------------------------------------------------------------------

Date: Monday, 21 Oct 85 11:18:14 PDT
From: wm%tekchips%tektronix.csnet@CSNET-RELAY.ARPA
Subject: Looking for Mike O'Donnell

I'm trying to contact Michael J. O'Donnell.
I think he is now at University of Chicago, but I'm
pretty sure he reads this list.

Wm Leler
(arpa) wm%tektronix@csnet-relay
(csnet) wm@tektronix
(usenet) decvax!tektronix!tekchips!wm

------------------------------

Date: 22 Oct 85 09:37 EDT
From: Gunther @ DCA-EMS
Subject: Expert Systems and Law Enforcement

      Need information about others working in the area of Expert
      Systems and Law Enforcement.  We are compiling a bibliography of
      anything related to this area.  Please reply with
               * Bibliographies
               * Reports
               * Names and Phone Numbers
      to
               * John Popolizio or Jerry Feinstein
               * Booz, Allen & Hamilton, Inc.
               * 301-951-2911 (2912)
      Thank You.

------------------------------

Date: Mon 21 Oct 85 20:40:50-PDT
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: AI Book by Jackson

I saw an ad for an inexpensive AI book in a Dover catalog:
"Introduction to Artificial Intelligence" by Philip C. Jackson, Jr.,
second, enlarged edition, $8.95.  Just published, it says.

------------------------------

Date: 21 Oct 85 11:27:28 EDT (Mon)
From: Liz Allen <liz@tove.umd.edu>
Subject: YAPS

YAPS may be redistributed, but only after whoever wants YAPS gets a
license from the Univ of Maryland.  Contact Hans or me for the
agreement, which should be signed and sent to us.  Once that is done,
Hans can send you a copy of his code.

                                -Liz

[Liz also sent along a copy of the YAPS distribution agreement.  For $100
(4.1 sources) or $250 (4.1 and 4.2) UMd includes their Franz flavors package,
a window system, an editor, spreadsheet, Z80 emulator and cross-compiler, etc.
Here is a description of YAPS:

       YAPS is a production system written in Franz Lisp.  It is
       similar to OPS5 but more flexible in the kinds of lisp
       expressions that may appear as facts and patterns (sublists
       are allowed and flavor objects are treated atomically), in
       the variety of tests that may appear in the left hand sides
       of rules, and in the kinds of actions that may appear in the
       right hand sides of rules.  In addition, YAPS allows multiple
       data bases, which are flavor objects and may be sent messages
       such as "fact" and "goal".

Contact Liz for more details.  -- KIL]
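
For readers unfamiliar with production systems, here is a minimal
sketch of the recognize-act cycle that OPS5-family systems such as
YAPS are built around.  It is written in Python for brevity rather
than Franz Lisp, it is not actual YAPS syntax, and the facts and the
single rule are invented for the illustration:

    # A toy forward-chaining production system in the spirit of
    # OPS5 and YAPS.  Facts are tuples held in a set; each rule
    # pairs a left-hand-side test over the fact base with a
    # right-hand-side action that changes the fact base.

    facts = {("monkey", "on-floor"), ("box", "at-center")}

    def climb_lhs(fs):
        return ("monkey", "on-floor") in fs and ("box", "at-center") in fs

    def climb_rhs(fs):
        fs.discard(("monkey", "on-floor"))
        fs.add(("monkey", "on-box"))

    rules = [("climb-box", climb_lhs, climb_rhs)]

    # Recognize-act cycle: fire the first rule whose left hand side
    # matches, then rescan; stop when no rule matches.
    fired = True
    while fired:
        fired = False
        for name, lhs, rhs in rules:
            if lhs(facts):
                rhs(facts)
                print("fired:", name)
                fired = True
                break

A real system like YAPS adds pattern variables, efficient matching,
and conflict resolution among competing rules, but the cycle above is
the core idea.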

------------------------------

Date: 22 Oct 85 10:33:36 EDT (Tue)
From: dndobrin@ATHENA.MIT.EDU
Subject: LISP Workstations

Re:  Cugini's and Tatem's <148> flames.

     I agree that LISP machines are darned hard to learn;  I also agree
that they're worth the effort.  My interests are twofold:  why is it
that intelligent, capable people like Cugini aren't willing to make the
effort?  And how can the learning be made easier, or at least more
attractive?

     There's no easy solution.  LISP machines are, in my experience,
pretty well designed (at least by comparison with the hodgepodge in
UNIX), and their documentation is, in most places, very good.  (In my
book, any documentation which tells you not to use something is already
near the front of the pack.)  Their online documentation, in particular,
has received many awards;  in one case, I was one of the judges, and
they didn't even need to bribe me.  Their documentation department does
not consist of M.I.T. hackers;  it has very experienced people in it.
Jan Walker, their manager, is one of the best in the business.
Sometimes, admittedly, they just slap the hacker's documentation onto the
system, but most of the time, it shows real care.

     Then why is it so hard to learn?  I think learning a complex system
is very much like learning to play a complex game.  No one learns chess
in a month;  very few people can even become competent at nim or rogue
in a month.  If you could get good quickly, the game would lose its
interest.  I think, therefore, that one can learn how to teach people
LISP-machineology if one studies the way people learn games.

     Mostly, they learn from other people.  In an informal study I did
at M.I.T., I discovered that people learn to play rogue by watching
other people play rogue and by asking more experienced players what
they should do whenever a difficult situation comes up.  People who
play at home or who never ask don't play as well.  Profound discovery.
The analogy, however, is exact.  Whenever you have many different things
to do and the optimal move is not at all clear (or even calculable), you
have to have some way of zeroing in on the close-to-optimal solutions.
Documentation doesn't help you zero in, because it doesn't usually
discuss that situation, and just finding what the documentation does
say requires as much work as doing the original zeroing in.
Experience does help you zero in.  So if you don't have the
experience, the easiest thing is to ask somebody who does.

     So, I would argue, the solution to Cugini's problem is to get Tatem
to hang out over there for about three months.  Maybe two.

     Well-designed systems and good documentation do make learning
easier.  With a well-designed system, when you are confronted with a
puzzling situation, you can, in effect, consult the designer, figuring
that he or she already knows the situation and has worked out some
solution.  Similarly, good documentation can often take you through some
of the most common problem places.  But even with good documentation and
design, at some point, there you are on level 23, a griffin on one side,
a dragon on the other, 88 hit points, strength of 24, a +2, +2
two-handed sword, a wand of cold, and a wand of magic missile.  What do
you do?

     The analogy does break down in a funny way.  In a game, you rarely
get in a situation where no move does anything.  But in learning a
computer system, you often do.  That's why Control-C is one of the first
things everybody learns.

------------------------------

Date: Sat, 19 Oct 85 19:54:21 edt
From: Dana S. Nau <ucdavis!lll-crg!seismo!rochester!dsn@UCB-VAX.Berkeley.EDU>
Subject: Counterfactuals


> From: Mike Dante <DANTE@EDWARDS-2060.ARPA>
> . . .
> (0) Suppose a class consists of three people, a 6 ft boy (Tom), a 5 ft girl
>     (Jane), and a 4 ft boy (John).  Do you believe the following statements?
>
>         (1) If the tallest person in the class is a boy, then if the tallest
>             is not Tom, then the tallest will be John.
>         (2) A boy is the tallest person in the class.
>         (3) If the tallest person in the class is not Tom then the tallest
>             person in the class will be John.
>
>        How many readers believe (1) and (2) imply the truth of (3)?

If we accept statement (0) as an axiom, then statement (3) is a statement of
the form "A => B" whose antecedent A is false.  Thus statement (3) is true,
regardless of the truth or falsity of B.

Since (3) is true, it is also true that (1) and (2) imply (3).  The truth or
falsity of (1) and (2) is irrelevant in this case.
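
To see the vacuous truth concretely, here is a minimal check of the
material conditional, written in Python for concreteness:

    # Material implication: "A => B" is defined as (not A) or B,
    # so A => B is automatically true whenever A is false.
    def implies(a, b):
        return (not a) or b

    # Under axiom (0) the tallest person IS Tom, so the antecedent
    # of (3), "the tallest is not Tom", is false; the consequent,
    # "the tallest will be John", happens to be false as well.
    print(implies(False, False))   # prints True: (3) holds vacuously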

Although I haven't read the earlier articles in this discussion, I suspect
your INTENT was that statement (3) would talk about a world identical to (0)
except that Tom would not be present--and in this world statement (3) would
be false.  The only way I know of to handle this using logic is to state (3)
in a completely separate logical theory from the one whose axiom is (0).

The rules of formal logic for handling implications whose antecedents
are false don't correspond to our intuitive notions of how
counterfactuals work.  Although this isn't my area, my impression is
that the problem is a pretty thorny one.

        Dana S. Nau (dsn@rochester)
        From U. of Maryland, on sabbatical at U. of Rochester

------------------------------

Date: Tuesday, 15 October 1985, 23:31-EDT
From: COWAN@MIT-XX
Subject: SDI Software and AI hype


I must respond to Prof. Minsky's perplexed comment that he does not
understand computer scientists who argue that the computing
requirements of SDI cannot be met.  I believe Minsky and those
computer scientists are talking about two different things.

Arguing for the feasibility of the software portion of SDI, Minsky says:

   ... aiming and controlling them
   should not be unusually difficult.  The arguments I've seen to the
   contrary all seem to be political flames.

Minsky is obviously talking about the technical software problem.  But
SDI opponents flame POLITICALLY, as they MUST, because they are
talking about the technical AND POLITICAL software problem.

Even though I, like Minsky, believe that computers can eventually be
made to think like humans, computers cannot RESPONSIBLY be used in place
of humans in certain situations, especially those requiring political
judgement.  I believe that it is easy to see that a lot of POLITICAL
decisions will have to be built into the SDI software.  Therefore, the
political aspects of the SDI software problem must be considered.

A bare-bones, apolitical SDI system is one that doesn't consider the
Soviet response (including a Soviet SDI).  For such a system, I'm
willing to agree that SDI software is feasible -- if not today, at least
within my lifetime.

But such a "vanilla-SDI" system would be worthless, unless there is a
US-Soviet treaty outlawing countermeasures and anti-SDI systems, AND the
SDIs of both countries (the USSR would not allow us to have a unilateral
SDI edge) COULD NOT BE USED OFFENSIVELY.  Unfortunately, it is
impossible to imagine an SDI system that could not be easily
software-upgraded to knock out the other country's SDI.  SDI satellites
are sitting ducks compared to missiles.

A true SDI system would have to be programmed to react to situations
where things go wrong, even if the problems are with the other country's
SDI.  If one country knocks out the other's SDI, then that country could
launch a first strike under a protective umbrella -- an unacceptable
situation for the country whose SDI was attacked.  Thus, each SDI system
would probably retaliate if the other SDI system attacked, even if the
attack was a mistake.  And each SDI system would probably fire on the
opposing SDI system if a missile launch were detected on the other side.
If you think about this situation for a while, you realize how serious
and unstable it is.  Even Charles Zraket, executive vice president of
the Mitre Corporation, describes multiple SDIs as

    "the worst crisis-instability situation.  It'd be like having two
    gunfighters in space armed to the teeth with quick-fire capabilities."

The country whose trillion dollar SDI system is destroyed first would be
tempted to "use 'em or lose 'em" -- to launch a first strike of its own.
(Submarine-launched cruise missiles could underfly any imaginable SDI.)
Any hostile action (even upgrading the software!) could be perceived as
an opening maneuver leading to a first strike.  The decision of whether
to retaliate, if made by a human being, would undoubtedly consider
political circumstances on the ground (even statements in Pravda!).  But
time requirements would preclude human involvement; the software would
have to determine a potentially grave response, using incomplete
information, to situations for which it was not tested.

To be "safe," each country would need an "SDSDI" to protect its SDI.
But then, all the arguments of the previous paragraphs would still
apply, at a higher defensive level.  Boeing, Rockwell, Lockheed, and
McDonnell Douglas might be content to build SDSDSDI's and SDSDSDSDI's,
but the result would be decreasing stability, not increasing deterrence.
The complexity of retaliatory policy would surpass the capabilities of
policy makers, and certainly make "SDI control" an even more difficult
problem than arms control is today.  Why not solve the easier problem?

It's fine to argue that software for a "vanilla-SDI" system is feasible,
but it is intellectually dishonest to argue feasibility if you
realize the true political nature of the software problem.

People who write the hyped proposals aren't being dishonest, though.  The
SDI organization's requests for proposals don't describe the above
scenario; they just ask people to study things like "reasoning under
uncertain conditions." That's a fine goal, but it is the responsibility
of computer scientists to make clear that such disciplines as "reasoning
under uncertainty" are not applicable to problems as political as
ballistic missile defense.

Rich Cowan  (cowan@mit-xx)


  [I would like to remind readers that SDI hype/feasibility is
  not necessarily AI hype/feasibility, and that ARMS-D@MIT-MC and
  POLI-SCI@RUTGERS seem to be the proper Arpanet fora for military/political
  discussions. I will attempt to screen out (censor, if you will) AIList
  submissions that do not focus on the AI aspects of the debate.  -- KIL]

------------------------------

Date: Sat 19 Oct 85 01:02:09-PDT
From: Gary Martins <GARY@SRI-CSL.ARPA>
Subject: Grading the Professor


In response to some simple questions about hype in "AI"
[AIList #132], Prof. Minsky illustrates [AIList #139] rather
than clarifies the subject matter.

First, with respect to his own role in "AI" hype, the
professor is evasive. Let's optimistically give him a grade
of "Incomplete", and hope he will make up the work later,
after he's had a chance to think about it some more.  The
only alternative is to conclude that he thinks it's OK to
recklessly exaggerate the usefulness of "AI".

Next, Prof. Minsky retells one of the most bizarre and
colorful "AI" myths: that "AI" has somehow been responsible
for all kinds of authentic real-world computing applications,
such as: air traffic control, CAT scanners, avionics,
industrial automation, radar signal processing, resource
allocation, etc.!!  But he rewards our patience with an
astounding Revelation that pumps new wind into these
nostalgic creations: that "AI" also deserves credit for the
successes of information theory, pattern recognition, and
control theory!!!!!  Leapin' lizards!!!

Could it really be that a field with this kind of
distinguished and fruitful past would have regressed to
today's fascination with such computationally trivial
pursuits as: the "5th Generation", "blackboards", "knowledge
engineering", OPS-5, R1/XCON, AM, BACON, EMYCIN, etc. ?

Finally, Prof. Minsky proposes to stretch to 15 years the
gestation period for "AI" products (an estimate that seems to
grow just about linearly with time).  But, 15 years FROM
WHEN?  Since modern "AI" is at least 30 years old, shouldn't
we already have experienced a 15-year bonanza of genuine,
concrete, real-world "AI" contributions?  Where is it?

Every other area of computing can point to a steady
succession of useful contributions, large and small.  From
"AI" the world seems to get back very little, other than
amateurish speculations, wild prophecies, toy programs,
unproductive "tools", and chamberpots of monotonous hype.
What's wrong?

------------------------------

End of AIList Digest
********************