[net.ai] AIList Digest V3 #85

LAWS@SRI-AI.ARPA (07/02/85)

From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>


AIList Digest            Tuesday, 2 Jul 1985        Volume 3 : Issue 85

Today's Topics:
  Query - Othello,
  Games - Hitech Chess Performance & Computer Cheating,
  Psychology & AI Techniques - Contextual Reasoning

----------------------------------------------------------------------

Date: 1 Jul 85 17:36:29 EDT
From: Kai-Fu.Lee@CMU-CS-SPEECH2
Subject: Othello (the game)

           [Forwarded from the CMU bboard by Laws@SRI-AI.]

I am leading a PGSS project (for high school students) that will implement
an Othello program in Common Lisp on the IBM PC.  Any program source,
clever tricks, and good evaluation functions that you're willing to share
will be appreciated.

/Kai-Fu

------------------------------

Date: 30 June 1985 2144-EDT
From: Hans Berliner@CMU-CS-A
Subject: Computer Chess (Hitech)

           [Forwarded from the CMU bboard by Laws@SRI-AI.]

CLOSE BUT NO BIG CIGAR is an appropriate summary of the performance
of Hitech in this weekend's Pittsburgh Summer Classic.  In a field
of 24, including 3 masters and several experts, Hitech won its
first three games against an 1833 (Class A), an 1802 (Class A), and
a 2256 (Master) before losing in the final round to another Master
(2263) who won the tournament.  This was Hitech's first win against
a Master.  Its overall record over two tournaments is
6 1/2 - 2 1/2, better than 70 percent.  As it was, it finished
2nd in the tournament.  Its provisional rating is now around
2100, middle expert.

We will hold a show and tell on Friday at a time and place to
be announced.

------------------------------

Date: 1 Jul 85 10:29:54 EDT
From: Murray.Campbell@CMU-CS-K
Subject: More on Hitech Result

           [Forwarded from the CMU bboard by Laws@SRI-AI.]

The 3-1 result Hitech achieved in the chess tournament this
past weekend corresponds to a performance rating of about 2300, well
above the master standard of 2200.  And it should be noted that the
last-round loss was to Kimball Nedved.
After 2 tournaments, Hitech's performance rating is approximately
2210.
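
[For readers who don't follow chess ratings: a common rule of thumb,
not necessarily the exact USCF calculation behind the figures above,
estimates a performance rating as the average opponent rating plus
400 * (wins - losses) / games.  Applied to the opponents listed in
the previous message:

     (1833 + 1802 + 2256 + 2263)/4  +  400*(3 - 1)/4
   = 2038.5 + 200
   = roughly 2240

The official calculation uses expected-score tables rather than this
linear approximation, so the two figures can differ by some tens of
points.]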

------------------------------

Date: Mon, 1 Jul 85 22:37:20 PDT
From: Richard K. Jennings <jennings@AEROSPACE.ARPA>
Subject: Computer Cheating.

        Computer programs are consequences of people, just as novels
are, and as ripples are consequences of rocks thrown into a smooth
pond.  Hence, it is the authors of these programs who are the cheaters!
        In fact, computer chess is simply another variation on the game,
as are speed chess, double bug-house, and probably several other versions
I have not heard of.  Consenting adults can do whatever they want as long
as they don't bill the government.  So, questions as to whether computers
are cheating are really questions about whether a programmer who writes a
program and watches it play is on the same footing as a live opponent.
Practically, I think he is; philosophically, I think not.
        If the chess program is going to take more responsibility for its
actions, I think it should have to 'learn' as you or I do (are all of you,
if expert systems, *learning* ones?).  Of course, the author of the
program is still partly responsible for how efficiently the program
learns and, therefore, for how good his creation will become in finite
time.
        So, to apply the principles of recursion, let the program have
to learn how to learn.  At this level it is easy to see that skilled
opponents will cause a 2nd-loop learner to learn faster; hence the
product is less a function of its original architecture and more a function
of its environment -- which I guess we can (by default) attribute to the
program (not its creator).  At this point, if we put it in a closet and let
it vegetate, it probably will not be very good at chess.  This is certainly
true in the limit as n (as in nth-loop learner) approaches infinity.
        It is easy to see that man is effectively an n-loop learner
which cannot comprehend an o-loop learner for o>n.  To be precise,
I should have said a *single* man.  Groups of people, similarly (and
perhaps even including women), can function at some level p>n.  Hence
it should be possible for teams of people to beat individuals (and
it generally is).  I see no reason for p or q to be bounded (where q
is the class of learning evidenced by a machine), and the problem has
been reduced to a point: man is just a transient form
of intelligence which cannot be quantified (by himself anyway), only
*measured*.
        Chess in its various forms does that measuring (well, you think,
when you win; poorly when you lose), and in its various forms it is
*fun*.  Just remember, computer chess wouldn't be around if several
smart people had not been whipped at the game through careless errors
by people so dumb that 'even a computer could beat them' or 'except
for one careless mistake...'


RKJ.

------------------------------

Date: Saturday, 29 Jun 1985 22:16-EST
From: munnari!psych.uq.oz!ross@seismo
Subject: Use of context to allow reasoning about time

David Sherman (AIList V3 #71: Suggestions needed for tax rules)
writes:

> I am trying to design a system which will apply the rules of the Income
> Tax Act (Canada) to a set of facts and transactions in the area of
> corporate reorganizations.
> ...
> The passage of time is very important: steps happen in a particular
> sequence, and the "state of the world" at the moment a step is taken is
> crucial to determining the tax effects.
> ...
> The problem I have ... is how to deal with _time_.

The following is just a suggestion. I have not actually tried it and I
am not familiar enough with the literature to even say whether it is an
old idea.  However, it seems plausible to me and might be a useful
approach.

Time is not directly perceptible.  It is perceived indirectly by noting
that the environment (physical and cognitive) changes.  There is a lot
of survival advantage in believing in causality, so the brain likes to
attribute a cause to every change; when there is nothing obvious
around to attribute causality to, we invoke the concept of time.  As
Dave Sherman pointed out, time is bound up with changes in the "state
of the world", what I just called the environment.  Let's shift into
psychology and call it the context.

Context plays a very important role in psychology.  All the human and
animal decision processes that I know of are context dependent.
Consider a classic and simple memory experiment. The subject is given a
list of nonsense words to memorise and is then given a new list of
words some of which are from the memorised list to judge as old or
new.  This process may be repeated a dozen or more times in a session.
How does the subject restrict his definition of old/new to the list he
has just seen?

It seems that the words are not remembered as individual and isolated
objects but are remembered along with associative links to the context,
where the context contains everything else that happened
simultaneously. So, when memorising words in a list the subject links
the words to each other, any extraneous noises or thoughts, even small
changes in posture and discomfort. It has been shown that recognition
and recall are greatly enhanced by reconstruction of the context in
which memorisation occurred.

Context is also evolutionarily important. It obviously enhances
survival to be able to form associative links between the centre of
attention and possibly anything else.  The nasty thing about many
environments is that you can't tell beforehand what the important
associations are going to be.

Let's look at how context might be applicable to AI. In MYCIN, data are
stored as <object,attribute,value> triples.  This is also a reasonable
way to do things in PROLOG because it allows the data to be treated in
a more uniform fashion than having each attribute (for instance) as a
separate predicate.  The objects in MYCIN are related by a context
tree, but this has nothing to do with the sense in which I am using
"context" so I will continue to call them objects. An object is a more
or less permanent association of a bundle of attributes. That is, there
is some constancy about it, which is why we can recognize it as an
object (although not necessarily a physical one).  By contrast the
context is an amorphous mass of other things which happen to be going
on at the same moment. There is little constancy to the structure of
the context.
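
To make the contrast concrete, here is a minimal sketch (my notation,
not MYCIN's, and untested like the rest of this suggestion) of the
plain triple representation in Prolog:

     % <object,attribute,value> triples as plain Prolog facts.
     % The object and value names are invented for illustration.
     fact(culture1, site,     blood).
     fact(culture1, organism, e_coli).
     fact(patient1, allergy,  penicillin).

     % All attributes of one object can be collected uniformly:
     attributes_of(Object, Pairs) :-
         findall(Attr-Value, fact(Object, Attr, Value), Pairs).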

The MYCIN triple cannot be related to its context other than through
the values of its object, attribute or value fields.  There is no
explicit way of showing that a fact was true only for a certain
interval of time or only when a particular goal was active.  I propose
that the triple be extended to represent the context explicitly, so it
becomes <context,object,attribute,value>.  The values of the context
variable would normally be unique identifiers to allow a particular
context to be referred to.  The context does not actually store any
information, but many facts may be tied to that context.  A context is
a snapshot of the facts at some stage in the computation.
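
In the same sketchy Prolog notation, with made-up context IDs (c1, c2)
and object names drawn loosely from Dave Sherman's corporate
reorganization domain, the extended tuple might look like:

     % <context,object,attribute,value>: the same style of fact,
     % tagged with a unique context identifier.
     fact(c1, opco,   owner,       smith).
     fact(c1, opco,   share_count, 100).
     fact(c2, opco,   owner,       holdco).   % true in a later context
     fact(c2, holdco, owner,       smith).

     % What was true of an object within a given context:
     in_context(Context, Object, Pairs) :-
         findall(Attr-Value, fact(Context, Object, Attr, Value), Pairs).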

Obviously a lot of thought needs to go into when to take the
snapshots, and the appropriate strategy will vary from application to
application.  The context would contain the facts being reasoned about
at the time of the snapshot (probably once they had been whipped into
consistency) but would also contain other relevant information such as
goal states and clock times.  For Dave Sherman's application there
would probably be a new context snapshot when each transaction occurred
(e.g. a transfer of property in exchange for shares).  Two additional
facts within the context would be the earliest and latest clock times
for which the context is valid.  This would allow reasoning about
changes of state and the elapsing of time, because the before and after
states are simultaneously present in the fact base along with the clock
times for which they were true.
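
A rough sketch of the time part, with the clock times coded as
integers (YYYYMMDD) purely for simplicity, and again with invented
predicate names:

     % Validity interval for each context snapshot.
     context_valid(c1, 19850101, 19850314).   % before the transaction
     context_valid(c2, 19850315, 19851231).   % after the share exchange

     % Which context was in force at a given time:
     context_at(Time, Context) :-
         context_valid(Context, From, To),
         Time >= From,
         Time =< To.

     % What was true of an object at that time:
     holds_at(Time, Object, Attr, Value) :-
         context_at(Time, Context),
         fact(Context, Object, Attr, Value).

A query such as holds_at(19850601, opco, owner, Who) would then pick
out the state of the world on that date, while the before and after
states sit side by side in the fact base.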

A couple of other uses of contexts suggest themselves.  One is the
possibility of implementing "possible worlds" and answering "what if"
questions.  If the system is capable of manipulating contexts, it could
duplicate an existing context (but with a new context ID, of course),
modify a few of the facts in the new context, and then start reasoning
in that context to see what might have happened if things had been a
little different.  Another possibility is that contexts might be useful
in "truth maintenance systems".  I have heard of these but have not had
a chance to study them.  However, their references to assumption sets
and dependency-directed backtracking sound to me like the idea of
tracking the context, attributing changes in the context to various
facts within the context, and then using that information to
intelligently manipulate the context to implement backtracking in a
computation.
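
For the "what if" use, a final sketch of context duplication, assuming
a Prolog with forall/2, assertz/1, retractall/1 and member/2, and with
the same invented fact/4 representation as above:

     :- dynamic fact/4.

     % Copy every fact from an old context to a new context ID.
     copy_context(Old, New) :-
         forall(fact(Old, Obj, Attr, Val),
                assertz(fact(New, Obj, Attr, Val))).

     % Set up a hypothetical world: duplicate a context, then
     % override selected facts (Changes = [obj-attr-val, ...]).
     what_if(Old, New, Changes) :-
         copy_context(Old, New),
         forall(member(Obj-Attr-Val, Changes),
                ( retractall(fact(New, Obj, Attr, _)),
                  assertz(fact(New, Obj, Attr, Val)) )).

A call like what_if(c2, hypo1, [opco-owner-newco]) would create a
context hypo1 identical to c2 except that opco is owned by newco, and
reasoning could then proceed within hypo1 to see the consequences.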

     UUCP:    {decvax,vax135,eagle,pesnta}!mulga!psych.uq.oz!ross
     ARPA:    ross%psych.uq.oz@seismo.arpa
     CSNET:   ross@psych.uq.oz
     ACSnet:  ross@psych.uq.oz

     Mail:    Ross Gayler                       Phone:   +61 7 224 7060
              Division of Research & Planning
              Queensland Department of Health
              GPO Box 48
              Brisbane  4001
              AUSTRALIA

------------------------------

End of AIList Digest
********************