[net.ai] AIList Digest V3 #36

LAWS@SRI-AI.ARPA (03/18/85)

From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI.ARPA>


AIList Digest            Monday, 18 Mar 1985       Volume 3 : Issue 36

Today's Topics:
  Applications - SIGART Special Issue on AI in Engineering,
  Programming - 4th-Generation Languages & AI Programming & KBES,
  Literature - Recent Articles & Principles of OBJ2,
  Seminar - Discourse Semantics for Temporal Expression (BBN)

----------------------------------------------------------------------

Date: Friday, 15 March 1985 22:17:23 EST
From: Duvvuru.Sriram@cmu-ri-cive.arpa
Subject: SIGART Special Issue on AI in Engineering

The special issue on AI in engineering (SIGART Newsletter) will be
available sometime in the middle of April or the beginning of May.  It
will run to 90+ SIGART pages and contains contributions from over
sixty researchers from six countries.  Copies of this special issue are
sent to SIGART members at no extra cost.  If you are not a member but
are interested in getting a copy, extra copies are available at $5.00
each from the SIGART-ACM office.

Sriram

------------------------------

Date: 14 March 1985 0809-EST
From: Dave Touretzky@CMU-CS-A
Subject: 4th generation languages

      [Forwarded from the CMU bboard by Laws@SRI-AI.  This was
      in response to a question about 4th-generation languages.]

4th generation languages are not programming languages in the usual sense.
They allow you to define database files, process transactions
against databases, and generate reports from them.  For example, you can say
"For every state, for every product we sell in that state, list the
salesmen who sell that product, their annual sales, their commissions, and
their office location, sorted by annual sales."  The actual command language
uses a slightly more complex syntax, but it's still pretty close to English.
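
To make the contrast concrete, here is a rough sketch, in Python rather
than in any actual 4th generation language, of the bookkeeping that such
a one-sentence request hides; the record fields (state, product,
salesman, annual sales, commission, office) are hypothetical, and a 4GL
collapses all of this into the single English-like request.

    from collections import defaultdict

    # Hypothetical flat sales records; a real 4GL would read these from a
    # database file that it also let you define.
    sales = [
        {"state": "CA", "product": "widgets", "salesman": "Jones",
         "annual_sales": 120000, "commission": 6000, "office": "San Jose"},
        {"state": "CA", "product": "widgets", "salesman": "Smith",
         "annual_sales": 95000, "commission": 4750, "office": "Fresno"},
        {"state": "NY", "product": "gadgets", "salesman": "Lee",
         "annual_sales": 150000, "commission": 7500, "office": "Albany"},
    ]

    # Group by state, then by product; within each group list salesmen
    # sorted by annual sales.
    report = defaultdict(lambda: defaultdict(list))
    for rec in sales:
        report[rec["state"]][rec["product"]].append(rec)

    for state in sorted(report):
        print(state)
        for product in sorted(report[state]):
            print("  " + product)
            for rec in sorted(report[state][product],
                              key=lambda r: r["annual_sales"], reverse=True):
                print("    %-8s %8d %7d  %s" % (rec["salesman"],
                      rec["annual_sales"], rec["commission"], rec["office"]))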

These systems are not just report writers.  They allow you to extract, sort,
copy, and update databases.  They use clever hashing and indexing algorithms
for fast retrieval.  Thus, one can write substantial data processing
applications in them, using their simple English-like commands, without
ever learning how to "program".  This is a far cry from conventional data
processing as taught in universities, which emphasize languages like COBOL
or PL/I.  The database access algorithms used by 4th generation languages
are more sophisticated than anything you would expect someone with a
two or four year degree in data processing to come up with.
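
The "clever hashing and indexing" amounts to trading space for lookup
time.  A minimal sketch of the idea (not any particular product's
implementation):

    # Minimal hash-index sketch: one pass builds an index on a chosen
    # field, after which retrieval by that field avoids scanning every
    # record in the file.
    def build_index(records, field):
        index = {}
        for position, record in enumerate(records):
            index.setdefault(record[field], []).append(position)
        return index

    def lookup(records, index, value):
        return [records[position] for position in index.get(value, [])]

    # e.g. index the hypothetical sales records above on "state"; then
    # lookup(sales, build_index(sales, "state"), "CA") returns only the
    # California records without touching the rest of the file.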

Martin's complaint is based on the following scenario.  A student is hired
by a local DP shop and, for his first assignment, asked to generate a
sales report.  He goes off for a month and returns with a huge piece of
PL/I code, nicely structured and commented, and mostly debugged, that
does the job.  But he could have done the same thing in two days if he'd
just used RAMIS, or FOCUS, or one of the other 4th generation languages
that run on IBM mainframes.  Martin's point is valid.  Much of routine DP
programming is becoming obsolete -- except in the universities.

Now, not all university CS programs are designed to produce data processing
technicians.  I don't think MIT or CMU teaches much COBOL these days.  But
your average rinky-dink CS department, which just graduated from punched
cards a few years ago, would be wise to teach more than just COBOL
or PL/I (or even Pascal) to its students if it wants them to be useful
in the real world of DP.

[Some other languages that James Martin cites in the March CACM are
SAS, Natural, Ideal, Nomad, Application Factory, ADF, CNS, DMS,
Mapper, and [for DB query only] QBE.  He excludes Smalltalk and
VisiCalc.  C is 3rd-generation and LISP is 5th-generation.  Martin
seems interested only in languages for commercial DP.  -- KIL]

------------------------------

Date: Wed, 13 Mar 85 21:03:19 PST
From: Richard K. Jennings <jennings@AEROSPACE.ARPA>
Subject: AI Programming

In response to the discussion of the benefits of AI versus 'common' software:

        Our application is to improve the efficiency of a huge, real-time,
reliable control system.  Based upon about a year's worth of research, my
opinion [not yet that of my employers ... but hopefully soon to be that
of my immediate bosses] is that arguing about the utility of Expert Systems
is like arguing about the utility of the "if" statement in a programming
language.  There are situations where they are useful, and there are
applications where they are not.  In a simple, overengineered application
they look great -- in tough problems they provide no panacea.

        The questions to ask are how they should be used and how we can
use them to do our job.  First I will describe what we initially thought;
then I will move to my current view.

        Like everybody else, we read the standard AI texts, visited the
AI demos, and read the AI and other journals -- all talking about
specialized expert systems that configure VAXes, identify fingerprints,
prospect for oil, and fix locomotives.  We said: let us go out and find
such an application within our organization.  We found it, and my boss
sent me out to develop a briefing to convince the world that it was
what should be done.

        Based upon, in hindsight, some poorly thought-out logic from
one of our contractors, we thought to remove our dependence upon
experts with expert systems.  Isn't that what happened in the above
cases?  We made an important and subtle mistake:  expert systems
should be targeted towards *people*, not *areas* *of* *expertise*.

        This can be understood from two perspectives: 1) an expert
system is only as good as the rules within it, and they must continue
to evolve, and 2) if the expert only uses the system part of the
time, then he never becomes proficient enough with it, or with the
knowledge within it, to use it effectively.  In our case, we could
have had one operator using 15+ different expert systems at one time,
created by different contractors with vastly different knowledge
structures.

        The difference between expert systems and common software should
now be clear: an expert system is tied to a person's job; common software
is tied to a specific technical area.  There is a lot of common ground
here, but the difference is distinct.

        Why?  Anybody with experience with expert systems understands
how quickly massive rule bases become unwieldy -- just like nested
if statements.  Keying expert systems to people keeps them small,
and permits the integrity of the distributed knowledge base to
be effectively managed.  Our expert system became a network
of distributed expert systems, mapped very closely to our current
organizational structure.  Our thinking was that individuals
within an organization drive its evolution, and we should apply
expert systems and let nature take its course.

        Our organization is a learning one, and so must be the entity
we set up.  The expert systems must contribute to the flexibility and
fluidity of the organization -- by permitting the programmer and the
user to meld into one.  People, once again, will be responsible for
what their computer does.

        No one is advocating the total elimination of all computers in
favor of expert systems.  We will have our share of big number crunchers.
Personal expert systems will provide the interface to these machines
in terms meaningful to specific experts.  Personal expert systems
will also provide interfaces to each other.

        With these insights, we looked again at our organization.  We
also considered inexpensive, commercially available products such
as the PC-AT, and the PC-AT-II-jr due out this summer with a 600MB
removable optical disk.  After some analysis of what our experts
were actually doing most of the time, our transitional approach became
clear:  1) use off-the-shelf micros to automate the routine tasks
that micros do better than experts, 2) slip expert system
technology into these PCs to permit the experts to easily offload
all their well-understood "routine" tasks, and 3) put all these
machines into a network which permits expert systems to talk to
each other under the supervision of expert people.  We can also
build "learning systems" by extending this approach.

        I welcome comments on this matter, especially those that are
critical (but not abusive).

Richard Jennings        Arpa: jennings@Aerospace
AFSCF/XRP                     AFSCF.XRP@AFSC-HQ
PO 3430
Sunnyvale, CA 94088-3430  AV: 799-6427  comm: 408 744-6427

------------------------------

Date: Friday, 15 March 1985 22:37:58 EST
From: Duvvuru.Sriram@cmu-ri-isl1.arpa
Subject: KBES - comments

This is in reference to Goodhart's comments on expert systems.

Knowledge-Based Expert Systems (KBES) are supposed to differ from
conventional programming languages in the following ways:

   1. Completeness: Knowledge in a KBES can be added incrementally,
      while it is very difficult to make any changes to a
      conventional program.

   2. Uniqueness: A conventional program is supposed to give a
      unique answer for a given set of input data.  A KBES can
      come up with a number of possible solutions, ranked in a
      certain order.

   3. Sequencing: In a KBES, the order in which the knowledge is
      stated does not matter.

The above differences can be seen clearly in a production-based expert
system, since the knowledge tends to be separate from the inference
mechanism.  Production system programming can be viewed as a different
programming methodology; it is also closely related to the decision-table
approach used in the early 70's.
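
A minimal forward-chaining production system, sketched here in Python
(no particular shell is implied and the rules are invented), shows how
these properties fall out of the architecture: rules are data that can
be appended at any time, the interpreter is separate from the rules,
and the set of conclusions does not depend on the order in which the
rules were written.

    # Minimal forward-chaining production system sketch.  Each rule is a
    # list of condition facts plus a conclusion; the interpreter fires any
    # rule whose conditions are all in working memory until no new facts
    # can be added.
    def forward_chain(rules, facts):
        facts = set(facts)
        changed = True
        while changed:
            changed = False
            for conditions, conclusion in rules:
                if conclusion not in facts and set(conditions) <= facts:
                    facts.add(conclusion)
                    changed = True
        return facts

    # Toy knowledge base (contents invented for illustration).
    rules = [
        (["beam", "span over 20m"], "check deflection"),
        (["check deflection", "steel"], "use deeper section"),
    ]
    rules.append((["beam", "steel"], "check lateral buckling"))  # added later

    print(forward_chain(rules, ["beam", "steel", "span over 20m"]))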

Only when an EXPERT'S knowledge is included in the knowledge base do we
have a KBES; otherwise we are talking about knowledge-based systems.
Even your algorithmic program can be viewed as a knowledge-based
system; it just doesn't have the specialist's knowledge.  Any decent
(engineering) KBES would combine the specialist's knowledge with the
algorithmic knowledge.

Building a KBES is quite a painful task.  Before you venture into
building one, study the pros and cons properly.  Make sure that proper
experts are available.

Regarding inexact inferences, experts will never say anything with
100% certainty.  Hence a KBES should handle uncertain information.
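
One common (though not the only) way of handling this is the
MYCIN-style certainty-factor calculus; the sketch below is illustrative
and is not claimed to be what any particular engineering KBES does.

    # MYCIN-style certainty factors range over [-1, 1].  A rule's
    # contribution is the certainty of its premise times the rule's own
    # strength; two positive pieces of evidence for the same conclusion
    # combine as cf1 + cf2*(1 - cf1), approaching but never reaching 1.0.
    def rule_cf(premise_cf, rule_strength):
        return premise_cf * rule_strength

    def combine_positive(cf1, cf2):
        return cf1 + cf2 * (1.0 - cf1)

    # Two rules each lend some support to the same conclusion.
    cf_a = rule_cf(0.8, 0.7)                  # 0.56
    cf_b = rule_cf(0.9, 0.5)                  # 0.45
    print(combine_positive(cf_a, cf_b))       # about 0.76, still short of 1.0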

[A different viewpoint is expressed in a Consultants in Information
Technology study done in late 1983 for the British Alvey Committee (as
reported in Expert Systems, vol. 1, no. 1, July 1984, pp. 19-20).
They conclude that expert systems are easier to construct than is
generally believed, and that academic concerns about search, conflict
resolution, and uncertain knowledge are largely irrelevant to the
commercial expert systems now under development.  (These systems
generally avoid applications that require tentative hypotheses or must
deal with uncertain data other than the user dialog.)  -- KIL]

------------------------------

Date: 15 Mar 1985 08:53-EST
From: leff%smu.csnet@csnet-relay.arpa
Subject: Recent Articles

Electronics Week, March 4, 1985, pages 77-78

Announcement of Timm, an expert system development kit for the PC.

Electronics Week, March 11, 1985, pages 65-66

Describes work on testability and testing of electronic circuits using
AI.  Part of a larger article on the military's new push for
reliability.



I just received an announcement of a journal called "New Generation
Computing."  A subscription costs $96.00 and can be obtained from
Springer-Verlag New York, Inc.
Attn: D. Emin
175 Fifth Avenue
New York, NY 10010

They are also willing to send out sample issues.


Table of Contents: Volume 2, No. 3

Preface:
Communication and Knowledge Engineering, I. Toda

Efficient Unification over Infinite Terms, J. Jaffar
Fair, Biased and Self-Balancing Merge Operators: Their Specification and
  Implementation in Concurrent Prolog, E. Shapiro and C. Mierowsky
A Multiport Page-Memory Architecture and a Multiport Disk-Cache System,
  Y. Tanaka
Functional Programming with Streams - Part II, T. Ida and J. Tanaka
ORBIT: A Parallel Computing Model of Prolog, H. Yasuhara and K. Nitadori
Serialization of Process Reduction in Concurrent Prolog, A. J. Kusalik



The following are NTIS publications noted in SIGPLAN (Volume 20,
Number 3, March 1985).  Prices are for paper copies.

PROLOG Programming Language, 1974-August 1984 (Citations from the
INSPEC: Information Services for the Physics and Engineering
Communities Data Base)  PB84-874775  $35.00

Grasp: Uma Proposta Para Extensao Do Lisp [Grasp: A Proposal for an
Extension of Lisp] (in Portuguese)
N84-27463/8  $4.50

Kanji PROLOG Programming System (in Japanese)
Also appears in Mitsubishi Denki Giho, Volume 58, No. 6, pages 9-13, 1984
PB84-218197  $11.50

Enhanced Prolog for Industrial Applications
In the same issue as above, pages 5-8
PB84-218197  $11.50



The Artificial Intelligence Report, Volume 2, No. 2

Discussion of AI and the personal computer, by Esther Dyson
Discussion of the Air Force's new AI consortium involving Syracuse
University, the University of Rochester, Rochester Institute of
Technology, SUNY at Buffalo, Rensselaer Polytechnic, Clarkson
University, Colgate University, and the University of Massachusetts
Cognitive Systems Inc.: a system for portfolio analysis
AI at Sperry
Review of Hewlett-Packard's Integrated Personal Computer
Agreement between Inference Corp. and Lockheed Corporation

Review of AI growth: AI is growing at 50% a year, with $53 million in
1983 and $142 million in 1984; Lisp machine sales were $70 million in
1984 and are projected at $310 million in 1989; natural language
systems did $16 million in sales, projected to reach $200 million in
1989.

AI company percentage breakdown by state: California 30 percent;
Massachusetts 24 percent; New York 12 percent; Michigan and Florida
8 percent each; New Jersey 5 percent; Texas 3 percent

Gartner Group estimates $183 million in Lisp machine sales, broken down
by company:
  Symbolics          $88 million
  Lisp Machine, Inc. $37 million
  Xerox              $35 million
  Texas Instruments  $23 million

A series of AI titles from the National Bureau of Standards is
available for $100.00 total from Business/Technology Books

Reviews of "Expert Systems: Principles and Case Studies"



The Artificial Intelligence Report, Volume 2, Number 3

AI at NASA: lists research activities at various NASA centers.  A lot
of work is being done in robotics for space applications and the space
station, as well as basic research in many other areas.

Current projects include: a Lisp machine for space applications; a
smart checklist for procedure execution which knows about human
factors; a pilot training system; a life support system controller;
visual pattern recognition for interpreting scanning electron
microscope pictures of airborne sulfuric acid particles; an aircraft
design expert system; a system to assist helicopter crews; and a
pilot's associate.

An expert system has been developed to assist in monitoring Halley's
Comet.

A list of Lisp and Prolog vendors for the PC, including reviews of some
products.

Description of a project to replace harbor and river pilots on Japanese
ships.

DEC has signed agreements with the following vendors to jointly market
products:

SRL+ and PLUME from Carnegie Group
PROLOG II from Prologia
GCLISP from Gold Hill Computers
ART from Inference Corporation

Agreement for Sumitomo Electric Industries to market Silogic's
Knowledge Workbench in Japan

AI area breakdown of papers submitted to the 1985 IJCAI:

Expert Systems 111
Natural Language 99
Knowledge Representation 77
Learning and Knowledge Acquisition 75
Perception 61
Automated Reasoning 49
Social Implications of AI 4

Geographic Breakdown

US 460
Europe 150
Asia 79
Canada 29

------------------------------

Date: Thu 14 Mar 85 16:42:26-PST
From: Emma Pease <Emma@SU-CSLI.ARPA>
Subject: Report - Principles of OBJ2

         [Excerpted from the CSLI Newsletter by Laws@SRI-AI.]

                             NEW CSLI REPORT

      A new CSLI Report by Kokichi Futatsugi, Joseph Goguen, Jean-Pierre
   Jouannaud, and Jose Meseguer, ``Principles of OBJ2'' (Report No.
   CSLI-85-22), has been published.  To obtain a copy of this report
   write to David Brown, CSLI, Ventura Hall, Stanford 94305 or send net
   mail to Brown at SU-CSLI.

------------------------------

Date: 13 Mar 1985 15:10-EST
From: AHAAS at BBNG.ARPA
Subject: Seminar - Discourse Semantics for Temporal Expression (BBN)

           [Forwarded from the MIT bboard by SASW@MIT-MC.]

   The next BBN Artificial Intelligence seminar will be at 10:30 AM on
Tuesday, March 19, in the 2nd-floor large conference room at 10 Moulton
St.  Erhard Hinrichs of Stanford and Ohio State will speak on "A
Discourse Semantics for Temporal Expressions in English."  His abstract
follows:

  Apart from the semantic properties of tenses, temporal
conjunctions, and temporal adverbials in discourse, I will
discuss the contribution of Aktionsarten to the shift of
reference time.  Adopting the classification of Aktionsarten into
states, activities, accomplishments, and achievements suggested
in Vendler (1967), only accomplishments and achievements lead to
the introduction of a new reference point, which moves the time
frame of a narrative forward, whereas states and activities are
ordered with respect to previously introduced reference points
and therefore do not move the time frame of the narrative.  These
properties of Aktionsarten turn out to be relevant for the
analysis of both tenses and temporal conjunctions.

  The semantics of temporal frame adverbials in the sense of
Bennett/Partee (1972) can best be accounted for by means of a
reference point scoreboard, which is designed to keep track of
context information.  I will demonstrate how this approach to
frame adverbials can account for phenomena such as restricted
quantification and reference time nesting.

  I will further suggest how to incorporate the interpretation of
temporal expressions in discourse into the framework of discourse
representation structures (DRS) developed by Kamp (1981).  I
propose to augment DRS theory with a modified version of the system
of event structures proposed in Kamp (1979).  However, unlike
Kamp's original structures, the event structures I am proposing
offer a formal implementation of the three-dimensional tense
logic of Reichenbach (1947).
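
A very rough procedural gloss of the reference-time claim in the first
paragraph of the abstract (this sketch is an editorial illustration,
not the speaker's formalism, and the example clauses are invented):
only accomplishments and achievements advance the reference time, while
states and activities are located at the current reference point.

    # Accomplishments and achievements introduce a new reference point
    # and move the narrative forward; states and activities are located
    # at the current reference point.
    ADVANCES = {"accomplishment", "achievement"}

    def interpret(discourse):
        ref_time = 0
        located = []
        for clause, aktionsart in discourse:
            if aktionsart in ADVANCES:
                ref_time += 1                 # new reference point
            located.append((ref_time, aktionsart, clause))
        return located

    story = [
        ("John arrived at the station", "achievement"),
        ("it was raining",              "state"),
        ("he wrote a letter",           "accomplishment"),
    ]
    for time, kind, clause in interpret(story):
        print(time, kind, clause)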

------------------------------

End of AIList Digest
********************