[comp.ai.neural-nets] Chip vs Chess Master

mamisra@pollux.usc.edu (Manavendra Misra) (03/28/91)

Last night, I happened to see a Nova episode which talked about a match-up
between Garry Kasparov (the current chess world champion) and Deep Thought
(AI software developed at CMU and IBM which is currently the best
chess-playing program). Deep Thought essentially carries out a brute-force
search of the game tree in deciding its moves. There was lots of discussion
of the standard "computers taking over the world" arguments during the
program, and people with backgrounds ranging from CS to chess GMs gave
their opinions. Quite a good program, and much to the relief of most of
the people gathered for the match, Kasparov won.

I was wondering whether there is research going on into developing
"neural" algorithms for playing games like chess. I recall that there
was some work by A. L. Samuel in the '50s and '60s on a program for
playing checkers; it was adaptive in some sense and also drew on a
memory of stored games.

Also, I'd like to hear what others thought of the program.

Manav.

pluto@zaius.ucsd.edu (Mark Plutowski) (03/28/91)

mamisra@pollux.usc.edu (Manavendra Misra) writes:

>Last night, I happened to see a Nova episode which talked about a match-up
>between Garry Kasparov (the current chess world champion) and Deep Thought
>(AI software developed at CMU and IBM which is currently the best
>chess-playing program). Deep Thought essentially carries out a brute-force
>search of the game tree in deciding its moves. There was lots of discussion
>of the standard "computers taking over the world" arguments [...] much to
>the relief of most of the people gathered for the match, Kasparov won.

>I was wondering whether there is research going on into developing
>"neural" algorithms for playing games like chess.  [...]

>Also, I'd like to hear what others thought of the program.

>Manav.


It was, of course, an excellent production, as are most Nova programs.
(Donate to PBS!!!  You might be on it yourself someday! =-)

Least surprising was the hostile reaction of the audience to the
computer, and the lack of depth of the reporters' questions to
Deep Thought's developers.  The developers were quite careful to put
the right spin on their interpretation of Deep Thought's achievements.
I think we ought to expect this type of reaction to intelligent computers,
at least until the next generation of hackers grows up.

Most interesting was the introspective process Kasparov and the other
grand masters tried to describe as they fielded questions about how
they evaluate a tough move.  They did a lot of emoting here, saying
things like, "it feels right," "I gain energy from my opponent, whereas
the computer is a black hole," or that they draw upon "inspiration" to
break down the opponent's "spirit".  Sounds like a black-box process to
me, with little conscious symbolic explanation capability.

A quite famous (and successful) neural-network game player is the
Neurogammon implementation of backgammon.  (Look in the 1989 or 1990
NIPS proceedings.  I am unsure about the author's name, but Tesauro
(sp?) comes to mind.)  The implementation was most interesting to me
due to the way it was trained.  (That's right, trained, not programmed,
as Deep Thought was.)  Out of several ways of training the network to
evaluate a board position, the one that worked best operated by giving
the network two board positions, and then (in essence) telling it which
one was better.  From this relative style of training (i.e., without
reference to a concrete reference point), the network was able to
formulate an internal model which provides a score for any given board
position.  Said again: it learns to create an internal point of
reference, such that it can evaluate (score) a given board position,
even though the teacher did not give it these scores during learning.
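
To make the idea concrete, here is a minimal sketch of that comparison
style of training.  This is my own toy code, not Tesauro's actual
architecture; the input size, learning rate, and encoding are all
invented for illustration:

    # Toy sketch of "comparison training": the net scores single
    # positions, but is trained only on pairs labeled "this one is
    # better".  Sizes and constants are illustrative assumptions.
    import numpy as np

    rng = np.random.default_rng(0)
    N = 24                                # assumed board-encoding length
    w = rng.normal(scale=0.1, size=N)     # the evaluator's weights

    def score(board):
        """Scalar evaluation of one encoded board position."""
        return w @ board

    def train_on_pair(better, worse, lr=0.01):
        """Nudge w so score(better) rises above score(worse):
        gradient ascent on log P(preference) under a logistic model."""
        global w
        p = 1.0 / (1.0 + np.exp(-(score(better) - score(worse))))
        w += lr * (1.0 - p) * (better - worse)

    # A hidden "teacher" evaluation supplies only relative judgments:
    true_w = rng.normal(size=N)
    for _ in range(2000):
        a, b = rng.normal(size=N), rng.normal(size=N)
        if true_w @ a >= true_w @ b:
            train_on_pair(a, b)
        else:
            train_on_pair(b, a)
    # score() now ranks positions much like the teacher would, even
    # though no absolute score was ever given during learning.

The point is that the teacher never supplies an absolute score, yet
after training score() provides a consistent internal reference.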

I suspect that much of the task of learning to evaluate a chess board position 
could be reduced to the process of function approximation, something networks 
do well.  However, scoring a chess board position would seem to be much more 
context-sensitive than scoring a backgammon board position.   

Could a network learn to predict trends in the style of play
(aggressive, defensive, counterattacking, plodding) of an opponent
based upon a sequence of moves?  This is just time-series prediction.
hmmmmm.....  Or, could it learn to place more or less importance upon
branches of the game tree based upon how the opponent's forces are
coalescing on the board?  This could provide the computer with a crude
ability to adapt to an opponent's strategy.
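
As a toy version of the time-series idea (all the features and labels
here are invented for illustration), one could summarize each move
with a few crude statistics and train a small softmax classifier over
a sliding window of recent moves:

    # Sketch: classify an opponent's style from a window of recent
    # moves.  The per-move features (captures, checks, advances, ...)
    # and the three style labels are my own illustrative choices.
    import numpy as np

    rng = np.random.default_rng(1)
    WINDOW, N_FEAT, N_STYLES = 8, 4, 3  # e.g. aggressive/defensive/plodding
    W = rng.normal(scale=0.1, size=(N_STYLES, WINDOW * N_FEAT))

    def predict_style(moves):
        """moves: (WINDOW, N_FEAT) array of per-move features.
        Returns a probability for each style label."""
        z = W @ moves.reshape(-1)
        e = np.exp(z - z.max())
        return e / e.sum()              # softmax over style labels

    def train(moves, style, lr=0.05):
        """One cross-entropy gradient step on a labeled window."""
        global W
        p = predict_style(moves)
        target = np.eye(N_STYLES)[style]
        W += lr * np.outer(target - p, moves.reshape(-1))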

Of course, the real goal is for the computer to develop its own strategy.  
Get your network to do THAT!  

IMHO, it's eye-opening that brute force does so well given the high
branching factor of the game tree.  More elegant approaches to
game-tree search would result if future Computer Chess Olympiads
placed the computers into categories according to resources used
(flops, memory, hardware, power consumption), giving each computer a
handicap according to how much brute force it relies on.


-=-=
M.E. Plutowski,  pluto%cs@ucsd.edu 

UCSD,  Computer Science and Engineering 0114
9500 Gilman Drive
La Jolla, California 92093-0114

wjb@cogsci (03/28/91)

In article <pluto.670117666@zaius> pluto@zaius.ucsd.edu (Mark Plutowski) writes:
>A quite famous (and successful) neural-network game player is the
>Neurogammon implementation of backgammon.  (Look in the 1989 or 1990
>NIPS proceedings.  I am unsure about the author's name, but Tesauro
>(sp?) comes to mind.)

	I think that's right, "Gerry Tesauro".  I think he was at UIUC when
he did the work.

				Bill Bogstad

gordons@handel.cs.colostate.edu (vahl scott gordon) (03/28/91)

The NOVA program has been a topic of discussion on rec.games.chess.

I enjoyed it quite a bit, but would have liked to see more discussion on
various approaches to solving the chess-play problem.

There have been non-brute-force approaches, with less success.  Probably
the best has been the program "AWIT" from the University of Alberta
(T. Marsland), which achieved a high Class B / low Class A rating.  It
searched very few nodes, something on the order of 500 or fewer, as I
remember.  It was not, however, a learning program.  Even HITECH (written
by Hans Berliner, also at Carnegie Mellon) uses a not-so-brute-force
approach, with more chess knowledge.  HITECH is currently the second
highest rated chess computer program, and is at the Senior Master level.
From a knowledge point of view, Deep Thought is actually one of the
least interesting.

I am not aware of any neural programs, or even neural programs for
subsets of chess.  I think it would be a great area of research (for
SUBSETS of chess, that is).

I do think that they overplayed the surreal aspects of the game just a
bit... Kasparov is an excellent tactician, and while he doesn't analyze
as many lines as Deep Thought, he probably analyzes KEY lines DEEPER.

Lastly, I think it would be interesting to hold a tournament to see
which program plays best under some ceiling on the number of positions
analyzed... say, 1000.  That way you'd have to put some chess knowledge
in, and try to find the best lines to consider.  It would be a better
window into human intelligence than brute force.  The games would be
interesting, too, because the programs could better explain why they
selected certain moves (i.e., why they chose to examine certain
continuations).

vegdahl@ogicse.ogi.edu (Steve Vegdahl) (03/28/91)

>I am not aware of any neural programs, or even neural programs for
>subsets of chess.  I think it would be a great area of research (for
>SUBSETS of chess, that is).

Gordon Goetsch's thesis work (CMU, 1990, working under Hans Berliner) used
neural nets to analyze KPK and KRKP endgames.

		Steve Vegdahl
		Adaptive Solutions, Inc.

cs196006@cs.brown.edu (Josh Hendrix) (03/29/91)

In article <13807@ccncsu.ColoState.EDU>, gordons@handel.cs.colostate.edu (vahl scott gordon) writes:

|> I am not aware of any neural programs, or even neural programs for
|> subsets of chess.  I think it would be a great area of research (for
|> SUBSETS of chess, that is).

I am in my first neural net course, and I have been thinking of trying
to write something that would learn how to play tic-tac-toe or Qubic
(tic-tac-toe with a four-by-four grid and four layers).  The biggest
problem that I see right now is the credit assignment problem (which I
want to handle through some neural-net mechanism).  I want to divide
the problem into subproblems and use nets to solve them, and then find
some process for communication between the sub-nets.  For Qubic, I see
the subproblems as:

	1. Making a legal move, no matter where the best move is.  I want
	it to be able to generalize from what it learns about legal moves
	in one square to all squares (i.e., don't place a piece in a square
	that already has one in it, etc.)

	2. Identifying the best move to make in a given situation
		a) Always move into a blank cell where the opponent has 3
		   pieces in any row that intersects that cell.
		b) Try to make moves that block the opponent
		c) Try to get 4 in-a-row
		d) etc...
	I see this as mainly a credit assignment problem, because I don't
	want to give the net feedback for every move it makes.  I want it
	to learn by losing and winning games, which means the feedback is
	delayed, and credit must be assigned to the move that lost or won
	the game (perhaps others as well).

	3. I'm sure there are others that I haven't thought of yet...

Soooo, does anyone have any good pointers to literature in the above
fields of interest, i.e., generalization and credit assignment in
neural nets?  I have read the Sutton & Barto pole-balancing paper, and
I am thinking about ways to adapt their ideas, but I am definitely
interested in others, also.  Any references will be appreciated, and I
will gladly summarize and post to the net.
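
For what it's worth, here is the rough shape of the delayed-feedback
scheme I have in mind, in the spirit of the Sutton & Barto work.  This
is only a toy, table-based sketch (the constants are arbitrary, and the
table could just as well be a small net trained toward the same
targets):

    # Toy temporal-difference credit assignment for tic-tac-toe:
    # no per-move feedback, only a win/loss signal at the end, with
    # credit leaked backward through the positions the learner saw.
    V = {}                          # board tuple -> estimated value
    ALPHA, GAMMA = 0.2, 0.9         # illustrative constants

    def value(board):
        return V.get(board, 0.5)    # neutral prior for unseen boards

    def learn_from_game(trajectory, won):
        """trajectory: the board positions this player saw, in order.
        won: True for a win, False for a loss or draw."""
        target = 1.0 if won else 0.0
        for board in reversed(trajectory):
            V[board] = value(board) + ALPHA * (target - value(board))
            target = GAMMA * V[board]   # pass credit one move back

    # During play, pick the legal successor with the highest value()
    # (mixing in some random exploration), then call learn_from_game
    # with the game's positions once the outcome is known.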

Thanks in advance,

Josh Hendrix
cs196006@brownvm.brown.edu

jay@wiley.uucp (Jay Nelson) (03/30/91)

In article <70216@brunix.UUCP> cs196006@cs.brown.edu (Josh Hendrix) writes:
>In article <13807@ccncsu.ColoState.EDU>, gordons@handel.cs.colostate.edu (vahl scott gordon) writes:
>
>|> I am not aware of any neural programs, or even neural programs for
>|> subsets of chess.  I think it would be a great area of research (for
>|> SUBSETS of chess, that is).
>

Check out the approach used by Deep Thought for move generation.  (See
Artificial Intelligence [the journal] about 1-2 years ago.)  They
don't use a neural net, but a parallel implementation which detects
board patterns.  Their approach could easily be cast in a
connectionist architecture.  Rather than studying a board position and
generating all the moves, they analyzed every possible move that could
ever be made by any piece in any board position.  Sounds like a huge
number of possibilities, but it's really just over 4000.

Consider a pawn: it can never legally be on the first rank; on the
second rank it can move forward 2 or 1; on the 3rd through 7th ranks
it can only move forward 1.  So far we have 7 possible moves.  All
pawns have these 7 moves.  Now consider a pawn on the edge of the
board.  It can only capture diagonally in one direction; that's six
more moves.  A pawn not on the edge can capture diagonally in both
directions: 12 moves.  So edge pawns have 13 possible moves and
non-edge pawns have 19 possible moves.  (We're ignoring en passant as
a special case of a capture, and the move onto the 8th rank has
options for selecting the promoted piece; for simplification, assume
promotion is determined after the move by a separate mechanism.)  This
gives (2x13) + (6x19) = 140 possible pawn moves that will ever occur
in any game of chess played.

Now, how do we detect and generate these moves?  We want to use the
board as input and the move as output:

        4  p                 =>  Black pawn at a4
        3      p             =>  Black pawn at b3
        2  P                 =>  White pawn at a2 (our move)
        1
           a    b


     input layer       connections       output move

      P on a2             +1               a3 if positive
      any piece on a3     -1

      P on a2             +1               a4 if positive
      any piece on a3     -1
      any piece on a4     -1

      P on a2             +1               a2xb3 if >1
      black piece on b3   +1

There would be 140 output neurons for just the pawn moves.  Each one
corresponds to a specific move.  Notice that only a few squares are
used as input for each of the moves.  So by completing the analysis
for every possible piece we have just over 4000 output neurons, a
clever input layer representation (not that tricky) and integer
weights.  Evaluation is very fast, and all possible moves for the
input board position are calculated in parallel (Deep Thought actually
does this in VLSI using gate logic for each move detector).
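
Rendered as code, each detector is just a tiny threshold unit over a
handful of squares.  Here are the three detectors from the table above
as a toy sketch (the square names and board encoding are my own
illustrative choices, not Deep Thought's actual representation):

    # Toy move-detector units for the three pawn moves in the table:
    # each unit sums a few +1/-1 inputs from specific squares and
    # "fires" its move when the sum clears the threshold.  The board
    # maps square -> 'W' or 'B'; empty squares are simply absent.
    def pawn_detectors(board):
        moves = []

        # a2-a3: +1 for White pawn on a2, -1 for any piece on a3
        if (board.get('a2') == 'W') - ('a3' in board) > 0:
            moves.append('a2-a3')

        # a2-a4: additionally -1 for any piece on a4
        if (board.get('a2') == 'W') - ('a3' in board) - ('a4' in board) > 0:
            moves.append('a2-a4')

        # a2xb3: +1 for White pawn on a2, +1 for a Black piece on b3;
        # fires only when the sum exceeds 1
        if (board.get('a2') == 'W') + (board.get('b3') == 'B') > 1:
            moves.append('a2xb3')

        return moves

    # The diagrammed position: Black pawns on a4 and b3, White on a2.
    print(pawn_detectors({'a4': 'B', 'b3': 'B', 'a2': 'W'}))
    # -> ['a2-a3', 'a2xb3']   (a2-a4 is vetoed by the pawn on a4)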

Actually, this is a connectionist approach and not a neural network:
it's all prewired and uses no learning.  But a similar mechanism might
be used for other parts of the chess game (e.g., an evaluation function
for a generated board position, where the outputs are continuous values
for features like mobility, king protection, pawn structure, etc.), and
some learning could occur.  The problem, as always, is choosing an input
representation that expresses all the features necessary for learning
to find a good solution.

Jay Nelson  (TRW)  jay@wilbur.coyote.trw.com



kolen-j@retina.cis.ohio-state.edu (john kolen) (04/05/91)

For a discussion of credit assignment in neural nets with respect to
tic-tac-toe, see

J. Kolen and A. Goel.  Learning in parallel distributed processing
networks:  Computational complexity and information content.

To appear in the March 1991 issue of IEEE Transactions on Systems,
Man, and Cybernetics.


--
John Kolen (kolen-j@cis.ohio-state.edu)|computer science - n. A field of study
Laboratory for AI Research             |somewhere between numerology and
The Ohio State University	       |astrology, lacking the formalism of the
Columbus, Ohio	43210	(USA)	       |former and the popularity of the latter