[net.ai] AIList Digest V3 #137

AIList-REQUEST@SRI-AI.ARPA (AIList Moderator Kenneth Laws) (10/09/85)

AIList Digest           Wednesday, 9 Oct 1985     Volume 3 : Issue 137

Today's Topics:
  Queries - AI Machines &
    Formal Semantics of Object-Oriented Languages &
    Knowledge Representation,
  AI Tools - Lisp for Macintosh & TIMM
  Opinion - Social Responsibility,
  Expert Systems - Aeronautical Application,
  Games - Hitech Chess Performance,
  Bindings - Information Science Program at NSF

----------------------------------------------------------------------

Date: Sun, 06 Oct 85 15:20:34 EDT
From: "Srinivasan Krishnamurthy" <1438@NJIT-EIES.MAILNET>
Subject: Do I really need an AI Machine?

 Dear readers,
 I work at COMSAT LABS, Maryland. We are getting into AI in a big way
and would like comments, suggestions, and answers to the following
questions, to justify a big investment in an AI machine.

  * What are the specific gains of using an AI machine (such as a
    Symbolics) over developing AI products/packages on general-purpose
    machines such as a VAX-11/750 under UNIX (4.2BSD), a 68000 under
    UNIX, etc.?

  * How will an environment without AI machines affect a major
    development effort?

  * How is the hardware architecture of the Symbolics or TI Explorer
    different from that of a VAX?

  * What are the limitations/constraints of general-purpose hardware
    when used for AI applications?

  * Survey results on AI hardware machines, if available.

  * Pointers to other relevant issues.

 Please message me at:
      Mailnet address: Vasu@NJIT-EIES.MAILNET
      Arpanet address: Vasu%NJIT-EIES.MAILNET@MIT-MULTICS.ARPA

My sincere thanks in advance.
                                     ..........Vasu.

------------------------------

Date: Mon, 7 Oct 85 11:06:40 edt
From: "Dennis R. Bahler" <drb%virginia.csnet@CSNET-RELAY.ARPA>
Subject: request: formal semantics of OOL's

        Does anyone have pointers to work done on
formal specification and/or formal semantic definition of
object-oriented languages or systems such as Smalltalk-80?

Dennis Bahler

Usenet:  ...cbosgd!uvacs!drb                    Dept. of Computer Science
CSnet: drb@virginia                             Thornton Hall
ARPA:  drb.virginia@csnet-relay                 University of Virginia
                                                Charlottesville, VA 22903

------------------------------

Date: Mon 7 Oct 85 17:27:03-PDT
From: MOHAN@USC-ECLC.ARPA
Subject: knowledge representation

Hi,

I am looking for a short list of introductory and survey type articles
on Knowledge Representation.

I am also looking for any work done on representing visual scenes so
that a system could reason about them, answer queries, etc.
Minsky, M.: "A Framework for Representing Knowledge," in Winston, P.H.
(ed.), "The Psychology of Computer Vision," is a typical paper on the type
of work I am interested in. Since I have just started reading on this
subject, I am presently interested in all related topics (except the
CAD/CAM type of reasoning). What I am looking for is an introduction to
this big field: papers which have presented the key ideas and some surveys
which can give me an idea of the type of work being done in such areas.

Thanks,

Rakesh Mohan
mohan@eclc.arpa


[The October 1983 issue of IEEE Computer was a special issue on knowledge
representation, as was the February 1980 issue of the SIGART Newsletter.
Technical Report TR-1275 of the University of Maryland Department of
Computer Science (issued May 1983) is the proceedings of an informal
workshop on "Representation and Processing of Spatial Knowledge".  There
were also several papers on image-to-database matching in the April 9-11,
1985, SPIE Arlington, VA, conference on Applications of AI (SPIE volume
548).  Vision researchers generally seem happy with networks of
frames, although other representations are in use (e.g., connectionist
coarse coding, logic clauses).  I have sent a fairly extensive list
of vision citations to Rakesh and to Vision-List@AIDS-UNIX.  -- KIL ]

------------------------------

Date: Mon, 7 Oct 85 12:02 EDT
From: Carole D Hafner <HAFNER%northeastern.csnet@CSNET-RELAY.ARPA>
Subject: Lisp for Macintosh

There is a new magazine out called MacUser.  The premier issue, which
is available at newsstands, contains a review of ExperLisp for the
Macintosh.  The review says it's good but buggy.

There is also a version of Xlisp by David Betz available through the
public domain software networks.  I got a copy from the Boston Computer
Society, and when I tried to open it on my 128K Mac, the computer crashed
(i.e., the screen went crazy and strange noises occurred).

Perhaps Xlisp only runs on the 512K Mac, or else I got a bad version.

Carole Hafner
hafner@northeastern

------------------------------

Date: 8 Oct 85 12:22 EDT
From: Dave.Touretzky@A.CS.CMU.EDU
Subject: TIMM

There's been some talk about AI hype in this digest, but not too many folks
have stood up and pointed to actual examples.  Cowan's inquiry about TIMM
affords an excellent opportunity, so here goes.

As far as I can tell, TIMM is the most colossal ripoff in the expert
systems business.  I got a demo last year at AAAI-84 from Dr. Wanda
Rappaport, who I believe is one of the developers.  Basically, TIMM
works by comparing the facts of a situation with a set of stored
templates, called training instances.  The underlying data structure
looks like a frame, i.e. it has named slots.  Each training instance
consists of a frame with some or all of the slots filled in.  After
you've given TIMM enough training examples, you tell it to "generalize",
which causes it to do some computation on the training set to extract
regularities and relationships between slot values.  Then, to have TIMM
solve a problem, you give it a frame with some of the slots filled in,
and it fills in the rest of the slots.
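Touretzky's description of TIMM's mechanism can be sketched in a few lines
of modern code.  The following is a minimal illustration only, assuming a
simple best-match scheme; TIMM's actual generalization algorithm is not
public, and every name here is hypothetical.

```python
# Illustrative sketch of training-instance matching as described above:
# frames with named slots, and a partially filled query frame completed
# from the best-matching stored instance.  This is NOT TIMM's actual
# (proprietary) algorithm -- just the representational idea.

def best_match(query, instances):
    """Return the training instance agreeing with query on the most slots."""
    def score(inst):
        return sum(1 for slot, val in query.items()
                   if slot in inst and inst[slot] == val)
    return max(instances, key=score)

def fill_slots(query, instances):
    """Complete the query frame from its best-matching training instance."""
    match = best_match(query, instances)
    filled = dict(query)
    for slot, val in match.items():
        filled.setdefault(slot, val)   # keep slots the user already filled
    return filled

# Two toy training instances (hypothetical slot names):
training = [
    {"fever": "yes", "cough": "yes", "diagnosis": "flu"},
    {"fever": "no",  "cough": "yes", "diagnosis": "cold"},
]

print(fill_slots({"fever": "yes", "cough": "yes"}, training))
# -> {'fever': 'yes', 'cough': 'yes', 'diagnosis': 'flu'}
```

Note that a matcher like this does no computation beyond comparison, which
is exactly the limitation discussed below.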

TIMM is a mere toy, in my opinion, because it doesn't provide adequate
facilities for expressing knowledge either procedurally or declaratively.
You can't write explicit IF/THEN rules, as in OPS5 or EMYCIN.  You can't
build data structures, i.e.  it's not a real frame system with inheritance
and demons and Lisp data objects that you create and pass around and do
computation on.  Nor can you write logical axioms and feed them to a general
purpose inference engine, like Prolog's resolution algorithm.  Many, perhaps
most, kinds of knowledge cannot be conveniently expressed as training
instances, but that's all you get with TIMM.

At IJCAI-85 I watched a General Research sales person trying to sell TIMM
to a couple of AI novices.  She was repeating the usual set of General
Research outlandish claims, viz.  that TIMM is good for ANY expert system
application you can think of, that it doesn't require any expertise in
knowledge engineering to create "expert systems" with TIMM, that domain
experts can sit down and create their own non-trivial systems without
assistance, and so on.  (See their ad on page 38 of the Fall '85 issue of AI
Magazine:  "Experts in virtually any field can build systems with TIMM, and
do it without assistance from computer or AI specialists.")  I find General
Research Corp.'s arrogance simply galling.  How would they use TIMM to do
VLSI circuit design, as TALIB does, to configure a Vax, like R1 does, or to
generate and sift through a large set of plausible analyses of mass
spectrogram data, as Dendral does?  All of these tasks require significant
amounts of computation, yet all TIMM can represent is training instances.
Finally I asked the sales person how TIMM would represent the following
simple piece of domain knowledge:

"A certain disease has twenty manifestations.  If a patient has at least
four of these, we should conclude that he has the disease."

This knowledge can be expressed by a single rule in OPS5, but can't be
represented at all in TIMM.  First the sales person was going to put in one
training instance for every possible case, but C(20,4) alone is 4845, and
covering "at least four" takes over a million instances, so that's
impractical.  Finally she decided she'd write a Fortran
program to ask the user if the patient had each of the 20 manifestations,
sum up the "yes" answers, and return the conclusion to TIMM.  (TIMM is
written in Fortran and has the ability to call external Fortran routines.)
The point she seemed to miss is that any nontrivial expert reasoner is
going to need data structures and computations that can't be expressed as a
small set of training instances.
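The counting argument above is easy to verify.  A sketch in Python (TIMM
itself was Fortran; this is purely illustrative, and the symptom names are
made up), contrasting instance enumeration with the single count-based
rule:

```python
# The combinatorics behind the "twenty manifestations" example.
from math import comb

# Distinct cases with exactly four of the twenty manifestations:
print(comb(20, 4))                               # 4845

# Enumerating every "at least four" case is far worse:
print(sum(comb(20, k) for k in range(4, 21)))    # 1047225

# The single-rule equivalent just counts the symptoms present:
def has_disease(symptoms_present):
    return len(symptoms_present) >= 4

print(has_disease({"m1", "m7", "m12", "m19"}))   # True
print(has_disease({"m1", "m7"}))                 # False
```

One predicate over a count replaces roughly a million stored templates,
which is the kind of computation a pure training-instance scheme cannot
express.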

TIMM costs roughly $47K for the Vax version, and roughly $9K for the IBM PC
version.  General Research boasts that TIMM is in use in several Fortune
500 companies, but I haven't heard any claims about successful, up and
running, NONTRIVIAL applications.  Not surprising.

-- Dave Touretzky

PS:  Due to the nature of the above comments, I feel compelled to include
the usual disclaimer that the above opinions are solely my own, and may not
reflect the official opinions of Carnegie-Mellon University or AAAI.

------------------------------

Date: Fri,  4 Oct 85 09:32:15 GMT
From: gcj%qmc-ori.uucp@ucl-cs.arpa
Subject: An Application of Expert Systems

An application of expert systems that came to mind would be to autopilot
systems for commercial aircraft. The recent crash at Manchester airport
might not have been so serious if the reverse thrust had not been applied.
It has been suggested that this resulted in a spray of aviation fuel over
the fuselage of the aircraft. Whether this can be avoided in future by a
change in design, or whether a real-time expert system would have been any
better than the pilot's decision, which of course was ``correct'', is a
matter for deeper examination. It seems to me that information about
possible outcomes of such an action could be made available to the pilot.

Gordon Joly
gcj%qmc-ori@ucl-cs.arpa

------------------------------

Date: Fri, 4 Oct 85 11:03:53 cdt
From: ihnp4!gargoyle!toby@UCB-VAX.Berkeley.EDU (Toby Harness)
Subject: Social Responsibility

Re: AIList Digest   V3 #132

Something I saved off usenet, about a year ago:

        It's sad that computers, which have so much potential, have so much of
        it invested in the purposes of the authorities.  I wonder if some day
        we'll be looking back at what we did in the 1980's the way many atomic
        physicists ended up remembering the 1930's.

        Jim Aspnes (asp%mit-oz@mit-mc)


Toby Harness            Ogburn/Stouffer Center, University of Chicago
                        ...ihnp4!gargoyle!toby

------------------------------

Date: 6 October 1985 2023-EDT
From: Hans Berliner@A.CS.CMU.EDU
Subject: Computer chess: hitech

           [Forwarded from the CMU bboard by Laws@SRI-AI.]

Hitech won its first tournament, and one with 4 masters in it.  It
scored 3 1/2 - 1/2 to tie for first in the Gateway Open held at the
Pittsburgh Chess Club this weekend.  On tie break, however, we were
awarded first place.  En route to this triumph, Hitech beat two
masters and tied with a third.  It also despatched a lesser player in
a brilliancy worthy of any collection of games.  One of the games
that it won from a master was an absolute beauty of positional and
tactical skill.  It just outplayed him from A to Z.  The other two
games were nothing to write home about, but it managed to score the
necessary points.  I believe this is the first time a computer has
won a tournament with more than one master in it.

We will have a show and tell early this week.

------------------------------

Date: Tue 1 Oct 85 13:50:21-CDT
From: ICS.DEKEN@R20.UTEXAS.EDU
Subject: Information Science Program at NSF - Staffing Changes

Beth Adelson has been appointed to the position of Associate Program Director,
Information Science Program, effective August 15, 1985. Dr. Adelson has been at
Yale University since 1983, as a Research Associate in the Artificial
Intelligence Laboratory.  She holds a Ph.D. from Harvard University.  Dr.
Adelson has published numerous articles in the areas of cognitive science and
artificial intelligence.  Recent works include papers on software design <<IEEE
Transactions on Software Engineering>> and the acquisition of categories for
problem solving <<Cognitive Science>>.

Joseph Deken has been appointed to the position of Program Director,
Information Science Program, effective September 3, 1985.  Dr. Deken was most
recently Associate Professor at the University of Texas at Austin, with a
joint appointment in the Department of Business and the Department of Computer
Sciences, and taught from 1976 to 1980 at Princeton University.  His Ph.D. in
mathematical statistics is from Stanford University.  Dr. Deken is the author
of several books on computing, the most recent of which is <<Silico Sapiens:
The Fundamentals and Future of Robotics>>, which will be published by Bantam
Books in January 1986.  His other writing includes <<Computer Images: State of
the Art>> (Stewart, Tabori, and Chang, 1983), <<The Electronic Cottage>>
(William Morrow, 1981), and numerous articles on statistics and statistical
computing.

Program announcements and other information about the Information Science
and Technology programs at NSF are available from:

             Division of Information Science and Technology
             National Science Foundation
             1800 G St. NW
             Washington, D.C. 20550


Correspondence may be addressed to the attention of Dr. Adelson or
Dr. Deken as appropriate.

------------------------------

End of AIList Digest
********************