[comp.sys.mac.games] GO and Software Engineering

borg@wintermute.sdsu.edu (T Borg) (01/25/91)

Title:     Smart Game Board and Go Explorer: a study in software and
           knowledge engineering. (technical)

Author:    Kierulf, Anders; Chen, Ken; Nievergelt, Jurg.

Journal:   Communications of the ACM  Feb 1990 v33 n2 p152(15)
           * Full Text COPYRIGHT Association for Computing Machinery 1990.


SOFTWARE ENGINEERING AND KNOWLEDGE ENGINEERING

Software engineering is an established discipline that has accumulated and
codified more than two decades worth of know-how.  Knowledge engineering, on
the other hand, is an emerging discipline with lots of issues but, at least
so far, little structure.  Despite its lack of maturity the practice of
knowledge engineering promises to have a noticeable impact on software
engineering doctrine.  The experimental nature of knowledge engineering goes
hand-in-hand with a style of software development best characterized as
'exploratory,' which has not been much studied in traditional software
engineering.

Exploratory Software Development: Uncertain Resources, Open-Ended Goals

The vast literature on software engineering discusses, almost exclusively,
the production and maintenance of software as an industrial process:
Predictable in its outcome, repeatable in its execution, and portable in its
independence of particular individuals.  Many conditions must be met for the
managerial and technical processes advocated in software engineering to work.
The principal requirement is that the designer knows fairly well what end
product can be achieved (so it can be specified), what resources are required
(so the product can be sized up), and how to build the product (so the
production process can be planned).  The source of all this knowledge is
experience based on previous production of similar products.  Conventional
software engineering know-how works best when producing the (n + 1)st version
of a compiler, text processor, or other well known software systems component
or applications package.

A small but important class of software development projects meets none of
the conditions above.  For good reasons, such first-of-a-kind projects are
typically conducted in a manner diametrically opposite to traditional
software engineering practice.  Typical conditions include:

* The designer has a clear vision of the direction of achievement, or "goal
at infinity," but has no way to predict how far along this route progress can
be pushed.

* For any specified level of achievement, it is impossible to predict the
resources needed, perhaps not even their order of magnitude.

* Because nothing can be promised about the outcome, no long-term commitment
of resources is made, and the only predictable budget feature is that the
project has to be run on a shoestring.

* Software development cannot be planned, as the outcome of each phase and
step is unpredictable; it proceeds, mostly by trial and error, along lines of
least resistance or greatest promise.

* Last but not least, such a project is intimately tied to the particular
individuals involved and reflects their personal characteristics.  Removing a
key individual will kill the project or transform it beyond recognition;
assigning another team to do "the same" project results in a different
venture.

The rapidly spreading use of interactive applications has kept software
products in a state of flux for more than a decade: Spreadsheets, painting
programs, hypertext are just some of the products that lacked practical
models a decade ago.  All indications point to a continuation of the trend
that essentially new types of software will continue to be developed, along
with the bread-and-butter tasks of perfecting the next version of existing
models.  Thus it behooves the software engineering community to recognize
that exploratory software development is an activity of long-term importance
worth studying.  Even a casual acquaintance with several exploratory software
projects suggests that they proceed according to rules different than those
postulated in the software engineering literature.  It is not just a matter
of developing a prototype or two before starting on the product, and it is
definitely not a matter of going through half a dozen phases of a life-cycle
that includes requirements analysis, specification, coding, and testing.

The style of exploratory software development is linked so intimately to the
individuals involved that one might be reluctant to search for common traits,
expecting only to see talented and dedicated individuals "doing their own
thing."  The opportunity to observe several such projects by different teams
has convinced us, however, that there are significant similarities.  These
include:

* A blatant bottom-up approach to systems building: The system evolves around
key routines and components that are built early, are perennially modified,
and pragmatically combined in different ways in response to feedback.  System
behavior, far from being specified a priori, is an ever-changing result of
these experimental set-ups.

* At all times, have a running system, even if it is only a mini version of
what you are striving for.  Since specification ahead of time is out,
observation of the program's behavior, and of the users' reactions, is
essential at all times.

* Keep your tools sharp!  Building an environment useful for rapid
prototyping is effort well invested, as code is never "finished," and the
turn-around time needed to implement a new version or perform an experiment
is critical.

* Strive for a design that leaves as many options open as possible--tomorrow
we may know better than today what option to exercise.

* Keep your ears to the ground!  A significant project spans years, and
during its lifetime the surrounding circumstances change considerably--a good
dose of opportunism is necessary to keep the project relevant and up-to-date.

Software developed according to this philosophy evolves through selection a
bit like natural systems do--with one important exception.  A single team
cannot afford to imitate prolific nature and spawn mutations to be developed
concurrently.  At any time, the cumulative experience of the entire project
must be distilled into one running version.  It is worth remarking, though,
that in situations where several independent teams work on a similar goal,
the analogy with natural selection, and the multitude of related species it
creates, is apt--consider, for example, the many versions of UNIX in
existence.

The Unpredictable Complexity of Knowledge Formalization

The development of an expert system, or any project that involves the
formalization of an open-ended domain of human knowledge, adds specific
difficulties associated with knowledge engineering to all the traditional
problems of software engineering.  The chief difficulty added by knowledge
engineering is that hardly anything of importance seems to be predictable,
and hindsight becomes the only reliable basis of judgment.  This is a drastic
departure from the norm of engineering development, which is based on the
assumption that we know ahead of time what we will end up with.  It forces
computer scientists to look at the process of formalization in a new light.

Computer science is the technology of formalization.  If anything is to be
processed by computer, the rules of processing are expressed in a formal
system.  This system may be a programming language defined by a collection of
hardware and software that includes a computer, an operating system, and a
compiler; it may be a programming or specification language defined
abstractly; or it may be a more traditional mathematical system built on
axioms and rules of inference.  The programmer may or may not think of
his/her activity as a task in formalization--the point here is that, once a
program has been written, it is a formal system.

One can view computer science education as having two major goals: To teach a
sufficiently rich arsenal of techniques of formalization, and, in projects
and case studies, to impart as much experience as is feasible concerning the
practical application of these techniques.  Thus we expect, for example, that
a systems programmer can formalize a communications protocol, and a systems
analyst or database administrator can formalize the structure of payroll
information.  Both of these examples rely on the fact that the domain of
knowledge to be formalized is closed, as opposed to open-ended, and of very
limited size and complexity.

The word 'knowledge engineering,' on the other hand, is typically used for
domains of knowledge much larger and more complex than the examples above.
Attempts at formalization capture only a small fraction of the relevant
domain.  This is true, for example, of standard microworlds of artificial
intelligence, such as chess.  The designer usually cannot even formalize all
of his personal knowledge of the field, let alone that possessed by the
community at large.  The literature on expert systems at times presents
interviews of human experts as the way to capture knowledge, but this is
really a technique of identifying knowledge, rather than formalizing it.

Given this state of affairs, the knowledge engineer typically starts with an
educated guess of what small part of the vast collection of know-how can be
formalized and will result in acceptable performance, then repeats the
following steps until patience or resources run out: Implement this subset,
observe the outcome (it can always be improved!), and tune the knowledge base
to achieve a hoped-for effect.  This is a basic human mode of operation, but
is not what traditional software engineering doctrine has been advocating,
and is not what most software environments support.

We illustrate these points with a study of an exploratory software
development project with a large knowledge engineering component.  Its goal,
a program to play the Oriental game of Go, is exotic and needs explaining,
including a brief survey of the history and current state of computer Go.

A Program to Play Go

As a study in exploratory software development and knowledge engineering, we
present Explorer, a program that runs on the Smart Game Board and plays the
Oriental game of Go.  It took four years to build and perfect the Smart Game
Board, and test it in two applications: An interactive Go calculator that
provides basic tactics routines, and a program that plays Othello.  By the
Spring of 1988 this powerful programming environment and test bed had matured
to the stage where a new member of the team, who focused on the analysis of
strategic features of board positions, could implement a program to play the
full game of Go during a summer of intensive work.  In the Fall of '88, in
its first test against other programs, Explorer tied for 2nd among 16
programs that competed in the 4th International Computer Go Congress in
Taiwan, the unofficial world championship organized by the Ing Foundation.
Explorer proved to be a fierce if somewhat unsteady fighter, whose play
revealed some crass shortcomings.  The team split soon thereafter, each group
trying out its own cure to avoid Explorer's predictable lapses.  As a
consequence, Explorer retired from active competition and spawned two
offspring.  Both run on the Smart Game Board and both improved on their
parent's record in their very first encounter.  In the summer of '89, Go
Intellect won the North American Computer Go Championship, and Swiss Explorer
won the Go tournament at the first Computer Olympiad in London.  We attribute
these results to a mixture of good luck and a solid dose of sound software
engineering practices.

Although this project is unique in several ways, it is typical of exploratory
software development in others.  We believe there are lessons to be learned
with respect to the interaction between software engineering and knowledge
engineering.  This case study illustrates the decisive importance of a
powerful development environment.  A large investment of effort in building
the best software engineering environment we were able to devise, out of
components that are well understood, made it feasible to experiment
extensively in the poorly understood realm of knowledge engineering.

Why Work on a Go Program?

Work on game-playing programs is always, in part, a hobby.  But there must be
other justification as well, because game playing programs, when pursued far
enough that they can compete among the best, require several man-years of
effort.  Such justification is readily found, in an academic environment, in
terms of education and research.

Education.  The idea of robots playing games of strategy has intrigued people
since the invention of clockwork, but it remained for electronic computers to
turn it into reality.  Famous computer pioneers prepared the ground: John von
Neumann created the mathematical theory of games, Alan Turing and Claude
Shannon devised techniques that allow computers to play chess [20, 22].  The
concepts and techniques developed range from game theory to knowledge
representation, and from heuristic search to program optimization.  Many of
these ideas have become an accepted part of computer science lore, and
students should be exposed to them.  The concepts are empirical more than
they are theoretical: The typical question is not whether something can be
done in principle, but how well it can be done within practical constraints.
In computer science, a meaningful exposure to empirical concepts must be
based on working programs.  We have been conducting seminars on heuristic
search and computer game playing in which the Smart Game Board serves as a
workbench and test-bed that makes it possible for students to write and test
a game-playing program of their own design in one semester.  It was a seminar
held in '86 over North Carolina's state-wide two-way teleclass network that
brought the authors together to cooperate on a Go-playing program.

Research.  A significant part of our knowledge of what computers can and
cannot do in artificial intelligence comes from experiments with computer
game playing--so much so that computer chess has been called the "drosophila
of AI," i.e. the preferred test object for experimentation.  This assessment
is based on two very useful properties.  1) Games of strategy form
microworlds of just the right complexity (simple enough to be easily
formalized, hard enough to tax computational resources) to develop and test
our growing arsenal of artificial intelligence techniques.  2) Progress can
be measured accurately in terms of improved playing performance--a chess
rating or a Go rank are much more objective measures than exist in most other
applications of knowledge engineering.

What to Work On?

We took up game-playing programs as a side-line in 1983 based on a
combination of factors: personal interest, the view that there was still
unexplored land, and the conviction that knowledge engineering must be
approached empirically in a microworld where performance can be measured
objectively, accurately, quantitatively.  There was a lot of experience and
information about chess playing programs, and computer chess appeared to be
in a relatively mature stage where success came from doing the same thing
better and faster.  Comparatively little was known about computer Go and
other games.  Go has a reputation for being the most profound game of
strategy, thus may be one of the hardest to program, and hence a fertile
field for experimentation.  We are aware of four Ph.D. dissertations written
on computer Go [5, 13, 19, 26].  In contrast to chess, computer programs that
play Go were still in their infancy, playing so poorly that their performance
was essentially off the accepted scale for ranking human players.  An
increased interest in computer Go could be expected; in particular in Japan,
where Go is a national hobby, and the government-sponsored project on Fifth
Generation Computer Systems greatly boosted research in artificial
intelligence.

How?

The combination of 1) little existing experience, and 2) an anticipation that
during the course of the project many competitor projects might emerge,
caused us to approach the subject on a broad front, and to design a project
that emphasized basics and could be redirected to focus on different goals
depending on where progress appeared most promising as the work evolved.
Rather than setting specific goals, we decided on a few directions, on "goals
at infinity," and embarked on a journey that would lead as far as we could
go.  These directions included:

1. Design and build a workbench, later called the Smart Game Board, for the
development of game-playing programs.  What do various games have in common,
what components can be shared among programs that play different games?  This
workbench quickly became a useful teaching tool--a skeleton program to be
tailored to specific games in student term projects.  Soon it became a useful
vehicle to maintain and enhance an Othello program developed independently
[9], and to implement Go-Moku in one semester.

2. The Smart Go Board [12]--a useful teaching and studying tool for the Go
player to analyze, annotate, store, and print games.  This would ensure a
small but dedicated community of users to help in testing and improving the
program.

3. Go tactics calculator, expert-guided search.  Goal at infinity: Can a
person play noticeably better using a Go tactics calculator than when he
relies on his own wits?

4. Testbed for experimenting with strategies and techniques for playing a
full game of Go.

The last part of this article is devoted to 4), so let us briefly mention the
earlier phase 3), where we attacked the seemingly easier problem of
developing a "Go calculator," with a function analogous to that of a handheld
numerical calculator.  Its purpose is not to play Go against an opponent, but
to assist a human player in those tasks that are better delegated to a
machine because they require speed and accuracy, as opposed to expert
judgment.  The human player decides on strategy and calls upon the program to
calculate tactical sequences of moves to achieve a well-defined purpose.  As
the program builds and traverses a search tree, the player directs the search
and can interactively modify the board if he so chooses.  This approach can
be described as using computers as "amplifiers of human intelligence," rather
than as autonomous intelligent machines.  And rather than building a
computerized expert system that tries to capture human knowledge in
executable form, we investigated 'expert-guided search', where human
experience and intuition guide the great processing power and fast and
accurate memory towards fruitful goals.  This emphasis on interactive user
control investigates computer-aided human decision making.  Games of strategy
such as Go have always served as simplified models for decision making.  In
the modern era of decision support systems, it is reasonable to assume that
programs that support gaming can serve as test beds for some aspects of
computer support for decision making.  A reasonably objective test of this
game calculator is easy to arrange: Does a player with the calculator play
consistently better than without it?

What can be Foreseen?

When this project was started in 1984, hardly any guidance was available: A
few precursors served as warning of difficulties to be expected, none
provided a model to be followed.  With hindsight gained through five years'
worth of experience, it is interesting to compare some of the questions and
expectations we had at the start of the project, and current answers.

Foreseen: A Smart Game Board is feasible as a useful hypertext medium for
storing an annotated collection of games, and browsing through them.  It can
support a variety of games and run on a low-cost personal computer.
Advancing technology quickly solved initial problems such as insufficient
main memory and disk, low screen and printer resolution.  The result
justifies the years spent in accumulating detailed improvements: The Smart Go
Board is being used by at least 100 Go players to manage their games.  One
professional Go master uses it as a teaching tool to exchange games and
comments with his remote students.

Might have been foreseen, but the idea evolved during the project: It is
feasible to develop a common user interface that fits entirely into one
screen, powerful enough to handle many different games.  Interaction with
different games takes place using the same windows, and, to a large extent,
the same operations.  This is possible because the majority of operations
provided by the Smart Game Board are defined on the game tree, and thus are
game-independent.  So far the user interface has proven to be adequate for
Go, Othello, Go-Moku, and chess.

Unforeseen: How hard is it to develop a useful Go tactics calculator?  We
expected to develop an interactive program capable of analyzing and solving
clear-cut tactical tasks identified by an expert amateur user.  This tactics
calculator has proven to be much harder than anticipated, and so far is
useful for only a very limited range of tactical problems: Mostly,
complicated ladders and loose ladders, i.e. forcing sequences in capturing a
block that has no more than 2 or 3 liberties at any time.  With hindsight, of
course, we can see why this problem is hard--the interface between human
strategy and machine tactics requires difficult formalization of intuitive,
fuzzy concepts.  To substantiate this point, we can do no better than quote
the operator of Deep Thought (DT), today's strongest chess computer, as it
assisted grandmaster Kevin Spraggett in an elimination match for the world
championship [7]: 'I was in the fortunate position of being able to watch and
assist the possibly first-time cooperation between a grandmaster and a
computer....  Other problems were related to "interfacing" DT with Spraggett
or his seconds.  It is indeed hard to translate a question like "can black
get his knight to c5?" or "can white get an attack?" or "what are black's
counter chances" into a "program" for DT, since it just doesn't have these
concepts.  Translating back DT's evaluations and principal variations (PV) is
easier, but this information is incomplete most of the time.  Indeed, the
alpha-beta algorithm generally gives accurate values on the PV only (other
lines are cut off), whereas a human wants to know why alternatives fail.'
Given the weak play of today's Go programs, a useful Go tactics calculator is
far off.

Unforeseeable: How strong a Go program to aim at.  The lack of experience
prevailing in the fifties and sixties triggered useless predictions about the
strength of chess programs that ranged from wildly optimistic ("a computer
will be world champion within the decade") to overly pessimistic ("computers
will never beat human experts").  Similarly, there was no rational basis for
predicting how strong, or weak, a Go program we could develop.  With
hindsight, observing that half a dozen independently developed programs that
competed at recent computer Go tournaments are comparable in strength, one
can conclude that a knowledgeable and dedicated small team can develop a 15
kyu Go player (this rating scale is explained later) with an effort of a few
man-years.

The Development of Computer Go

A brief survey of the history and current state of computer Go serves to
place our project in perspective.  Compared to other games of strategy, in
particular checkers and chess, computer Go started late and, until recently,
progressed very slowly.  Many people feel this is due to the inherent
difficulty of Go--a view well expressed in [4] as "The Game of Go--The
Ultimate Programming Challenge?"

Work in Computer Go started in the early 1960s [18] and intensified with the
Ph.D. dissertations of Zobrist [26, see also 23] and Ryder [19, see also 24].
Zobrist relied on pattern recognition, Ryder on tree search.  Both of these
approaches had already been explored in chess, where the latter, in
particular, has proven its value beyond doubt.  But although these approaches
capture part of what Go is about, neither program played as well as a
beginner plays after just a few games of experience.  Humans use pattern
recognition to a large extent, but their patterns are more complex and
abstract than those used by Zobrist.  And pure tree search runs into the
hurdle that the branching factor of Go is significantly larger than that of
other games on which this approach has been tried.  Selective search is more
important for Go than it is for chess, and setting goals to focus the search
calls for insight into the position.

The Reitman-Wilcox program [17], based on a representation of the board that
reflects the way good players perceive the board, was a step forward.  They
concentrated on strategy, adding some tactical analysis late in the project.
Their program only reached the level of a novice player.  Bruce Wilcox
analyzed the strong and weak points of his first program, then rewrote it
from scratch [25].

Since 1985 activity in computer Go has mushroomed.  A strong Go-playing
program was a goal of the Japanese "Fifth Generation" project, but was
dropped (too hard?).  Several dozen individuals and small teams are now
involved in Go programming, and these efforts are beginning to bear fruit.
The annual International Computer Go Congress, organized since '85 by the Ing
Foundation of Taiwan, has provided an arena for competition and a yardstick
for assessing progress.  The best Go programs now play at the level of 12 to
15 kyu, distinctly better than human beginners.  But they have a long way to
go as compared to chess, where commercial machines have achieved master
ratings.

SMART GAME BOARD

The Smart Game Board supports two kinds of experiments with game-playing
programs.  First, as a hypertext medium: Its user interface is designed to
help serious players study the game in ways that are impossible without a
computer.  Second, as a workbench for programming games: Its structure
encourages experimentation with different search algorithms and strategies.
Neither of these functions is bound to any one game; originally designed for
Go, the Smart Game Board was expanded to include Othello and chess.

What is It?  A Tool to Assist Game Players

The Smart Game Board is a tool to assist game players in the many activities
they now perform on a wooden board and paper: Playing, teaching, discussing,
analyzing, studying, recording and printing.  A personal computer should make
a better tool for some of these activities, but, as with any new application,
it is not a priori clear what functions a game workbench can and should
provide.  At the very least, it must provide the functions of a wooden board.
This is much harder than it seems, because the board is not just used to play
moves: Players talk while pointing at it and rearranging stones at will.  In
computer jargon, the board is a shared workspace that supports very fast and
convenient editing.

A wooden board is more agreeable than a computer screen, and players will only
switch to the computer if it offers additional benefits: Easy annotation,
ready access to a library of games and openings, help with tactical analysis,
or a strong opponent.  The Smart Game Board has passed the threshold of
usefulness: About a hundred players use it; many say it is their preferred
way of studying the game; and it has been used at tournaments to record games
and between rounds as an opening library.  Its versatility is proven by
applications we had not directly designed for: One-on-one lessons by
professionals, and taking notes during lectures on Go.

What Games have in Common

There is no point in trying to make the Smart Game Board general enough to be
used for any kind of game--games differ too much to be all brought under one
hat.  Our choice of Go, Othello, and chess reflects personal and professional
interests, but we think they represent a sufficiently diverse sample of an
important class of games: Two-player board games with perfect information.
How do these characteristics make it easier to build a game-independent user
interface?

* Two players: Limiting the game to two players, Black and White, reduces the
complexity of the interface without unduly restricting the set of games.

* Board game: The board is a convenient visual representation of the current
state of the game, and makes it easy to enter a move by clicking or dragging
the mouse.  Throwing dice or drawing cards would make it more complicated to
enter a move.

* Complete information: If each player always has access to the complete
state of the game, there is no need to hide information from one of the
players.

* Sequence of moves: Most games evolve as a fixed sequence of discrete
actions (moves) by the players.  Action games like PacMan, on the other hand,
where moves come any time and timing is critical, do not fit this model.

These common traits allow us to implement the functions presented below
independently of any particular game.  Renju (Go-Moku) was included in an
earlier version; other games that could be added without major changes
include: Kalah, Checkers, and Connect Four.  It would be harder to integrate
card games like Blackjack, Bridge, and Poker, dice games like backgammon, or
word games like Scrabble.  We discuss game-specific differences in more
detail later.

An Editor for Game Trees.  The Smart Game Board is an editor specialized for
game trees, and its user interface shows similarities with other editors.  A
game is treated as a document that can be opened, viewed, modified, and
saved.  We concentrate on those aspects that distinguish the Game Board from
other editors.

The underlying structure of a game is a tree, where a list of properties is
associated with each node.  In its simplest form, the tree degenerates into a
sequence of nodes, each with exactly one property, a move.  The generality of
a tree is needed to let the user experiment with different sequences of moves
while keeping the original game intact.  The tree is extended to a directed
acyclic graph for opening libraries (different move sequences may lead to the
same position).  The concept of a list of properties at each node is needed
to store attributes such as comments referring to the move, marks declaring
the move to be good or bad, or the time left at this point in the game.
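
As an illustration of this structure, here is a minimal sketch of a game-tree
node carrying a property list.  It is written in Python for brevity rather
than in the implementation language of the actual program, and all names are
invented for illustration; they are not the Smart Game Board's types:

    # Minimal sketch of a game-tree node with a property list.
    # Names are illustrative only, not the Smart Game Board's actual types.

    class GameNode:
        def __init__(self, parent=None):
            self.parent = parent
            self.children = []      # alternative continuations (variations)
            self.properties = {}    # e.g. {"move": "D4", "comment": "...", "time_left": 3600}

        def add_child(self):
            child = GameNode(parent=self)
            self.children.append(child)
            return child

    # With no alternative moves, the tree degenerates into a trunk:
    root = GameNode()
    node = root
    for move in ["D4", "Q16", "Q4"]:
        node = node.add_child()
        node.properties["move"] = move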

A game is replayed by traversing the tree: Advancing by one node executes the
move stored at that node.  It is useful to provide various ways of moving
about the tree: Depth-first traversal, go to next/previous branch point or
terminal node, or search for nodes with specific properties.  General tree
editing commands allow one to copy properties, nodes, and entire subtrees
from one game to another.  More powerful operations on game trees include
backing up values in min-max fashion, or sorting the branches according to
the number of terminal nodes in each subtree.  Go, chess, and Othello players
intuitively understand the concept of a game tree, as it is the simplest
notion powerful enough to capture what they are doing while analyzing a game.
If the user enters no alternative moves and thus grows no branches, the tree
degenerates into a trunk corresponding to the sequence of moves actually
played.  Playing a move automatically adds a node to the currently active
branch of the tree, adds a move property to that node, and executes the move.
Simpler interfaces could be designed for programs that offer fewer services,
such as merely playing against a computer, or looking up an opening library.
In contrast, the Smart Game Board is a general tool that helps serious players
in unexpected ways.
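
The min-max backup mentioned above can be sketched as a short recursive walk
over such a tree.  The following is an illustrative sketch only, assuming
nodes shaped like the GameNode in the earlier sketch, with terminal nodes
holding a numeric 'value' property from the viewpoint of the player to move
at the root; it is not the Smart Game Board's actual routine:

    # Illustrative sketch: back up values in min-max fashion over a game tree.
    # Assumes nodes with .children and .properties as in the GameNode sketch above.

    def backup_minmax(node, maximizing=True):
        if not node.children:                        # terminal node: use its stored value
            return node.properties.get("value", 0)
        child_values = [backup_minmax(c, not maximizing) for c in node.children]
        value = max(child_values) if maximizing else min(child_values)
        node.properties["value"] = value             # record the backed-up value
        return value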

Database for Games.  Database software dedicated to games, such as ChessBase
[16], is rapidly becoming an essential tool for serious players, as it helps
them track large collections of games, standard openings, statistics, etc.
Users can order games by openings, play through famous master games, or get a
printout of the games their next opponent recently played.  Today, games from
major tournaments are quickly made available to subscribers in
machine-readable form.  The Smart Game Board provides similar functions, but
is not yet as good a database system as we aim at.  It has been used to
organize 1300 Othello games and therefrom create a library of standard Othello
openings.  Given an opening position on the board, it retrieves all matching
games from its database, as shown in Figure 1.

Game Server.  Some users working on their own electronic game board or
game-playing program want to have access to Smart Game Board's functions from
their own program.  For example, it can be used as an opponent to test the
strength of their programs, it might extract a game from the game library, or
it could provide a collection of positions for statistical analysis.  For
that purpose, the program provides a serial interface where most functions
available to the user can be accessed with textual commands.  The concept of a
game server helps us test and tune our Go program, with different versions of
the program playing each other overnight.  More work is needed to turn this
personal game server into a service people can connect to and play games
against.
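
The article does not document the actual command set of this serial interface,
but the general idea of a textual game-server protocol can be sketched as
follows; the command names and replies below are invented for illustration,
as is the 'engine' object they call:

    # Hypothetical sketch of a textual game-server interface.  The commands
    # shown here are invented; they are not the Smart Game Board's real protocol.

    def make_server(engine):
        """Return a function mapping one command line to one reply line."""
        def handle(line):
            parts = line.strip().split()
            if not parts:
                return "? empty command"
            cmd, args = parts[0], parts[1:]
            if cmd == "play":                 # e.g. "play black D4"
                engine.play(args[0], args[1])
                return "ok"
            if cmd == "genmove":              # ask the playing algorithm for a move
                return engine.generate_move(args[0])
            if cmd == "load":                 # fetch a stored game from the library
                return engine.load_game(args[0])
            return "? unknown command"
        return handle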

Structure of the System

The structure of the Smart Game Board is designed to separate game-specific
from game-independent aspects.  It provides slots where each game can plug in
its specific routines for the rules (legal move recognition and execution),
the user interface (board display, move input, menu functions), and an
optional playing algorithm.  A search engine provides depth-first search and
iterative deepening based on game-specific routines for move generation,
position evaluation, and time control.
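
In outline, such a search engine only needs the game-specific routines handed
to it.  The sketch below, in Python with invented routine names such as
generate_moves, evaluate, and out_of_time, shows the general shape of
depth-first search with iterative deepening; it is not the actual Modula-2
interface:

    # Sketch of a game-independent search engine driven by game-specific
    # plug-in routines.  All routine names are assumptions for illustration.

    def depth_first(game, depth):
        """Negamax-style depth-first search; evaluate() is assumed to score
        the position from the viewpoint of the player to move."""
        moves = game.generate_moves()          # game-specific move generation
        if depth == 0 or not moves:
            return game.evaluate()             # game-specific position evaluation
        best = float("-inf")
        for move in moves:
            game.execute(move)
            best = max(best, -depth_first(game, depth - 1))
            game.undo(move)
        return best

    def iterative_deepening(game, clock):
        depth, value = 1, None
        while not clock.out_of_time():         # game-specific time control
            value = depth_first(game, depth)
            depth += 1
        return value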

The program is written in SemperSoft[TM] Modula-2 under MPW (Macintosh
Programmer's Workshop) and runs on the Apple[R] Macintosh.[TM]  We lack the
resources for porting it to other systems.  The program now consists of a
total of 44,000 lines of code (including definition modules and comments,
excluding blank lines).  The distribution among the components is shown in
Table 1.

The Go program compiles to 500 kBytes of object code, the Othello program to
250 kBytes.  In addition, at least 150 kBytes are needed for data structures;
if more memory is available, a larger hash table and a larger tree are
allocated.  Game-specific modules make their presence or absence known by
installing procedures in some common lower-level module.  This structure
facilitates experimentation with different search algorithms as well as
different games, and it allows us to configure different versions of the
program.  While handling a command, the program switches back and forth
between game-specific and game-independent routines several times.  For
example, a click on the board intended to enter a move triggers a call to a
mouse-tracking routine.  This routine is game-specific so as to give feedback
appropriate to the game, e.g., in the case of illegal moves.  Once the move
is recognized as legal, a new node is added to the tree independently of the
type of game.  Then another game-specific routine is called to execute the
move stored in the node.
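
The installation mechanism can be pictured as a simple registration table in
the common lower-level module.  The sketch below is schematic, with invented
names, and reuses the GameNode shape from the earlier sketch; it is not the
program's actual mechanism:

    # Schematic sketch of game-specific modules installing their procedures
    # in a common lower-level module.  Names are invented for illustration.

    INSTALLED = {}

    def install(game_name, track_mouse, execute_move):
        INSTALLED[game_name] = {"track_mouse": track_mouse,
                                "execute_move": execute_move}

    def handle_board_click(game_name, tree_node, click):
        procs = INSTALLED[game_name]
        move = procs["track_mouse"](click)     # game-specific: feedback, legality
        if move is None:
            return tree_node                   # illegal move: nothing is added
        child = tree_node.add_child()          # game-independent: grow the game tree
        child.properties["move"] = move
        procs["execute_move"](move)            # game-specific: update the board
        return child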

Implementing Go, Othello, Chess, and Nine Men's Morris

So far, Go, Othello, chess, and Nine Men's Morris (Muhle) have been
implemented.  Go came first and determined structure and user interface.
Integrating an existing Othello program in the Go framework caused only minor
structural changes, as Go and Othello are superficially similar: Both have a
board with black and white stones, stones are added to the board and never
moved, the controlled territory at the end of the game decides the winner.
Adding chess, however, required major surgery to make more operations
game-dependent, such as move input, move representation, and board editing.
Whereas in Go and Othello a move is given by a point on the board, chess has
from--to moves and several kinds of pieces.  'Minor' differences cause
delicate program changes if the Smart Game Board is to retain an elegant
structure throughout its evolutionary adaptations.  For example, moves are
counted by plies in Go, but as a pair of white--black plies in chess.  The
consequence is that a simple incrementing statement in a display routine gets
replaced by an interpreter for a game-specific descriptor for move-counting
and displaying.  Nevertheless, adding chess to the Smart Game Board was a
minor task compared with implementing even a rudimentary chess user interface
from scratch.  A program that plays Nine Men's Morris was recently added in a
few weeks of work.  It benefited greatly from the changes due to chess and
added no new requirements.

Go and Othello programs that run on the Smart Game Board have competed
successfully in several tournaments.  The original Othello algorithm included
in the Smart Game Board was Brand, which placed second at the 1982 North
American Computer Othello Championship in Evanston.  Recently, a better
Othello program, Peer Gynt, was created by writing a new evaluation function
and improving the time control.  This work was done in only two man-weeks by
using many existing building blocks from Brand and from the Smart Game Board.
An early version of Peer Gynt placed third at the 1989 North American
Computer Othello Championship in Northridge.  The chess module is not
intended to play, only to manage game collections.  Nine Men's Morris
(programmed by Ralph Gasser) plays a fair amateur game, but so far we have
found no computer opponents to challenge (none were present at the London
Olympiad in August 1989).

Most of the Smart Game Board's user interface is embodied in a control panel
common to all games (Figure 3 shows it in a window at the top left of the
screen).  Game-independent operations are defined as motions on a game tree.
Game-specific operations (e.g. setting up positions in chess, entering marks
for annotating Go games) and status information (e.g. number of captives in
Go) are provided in a menu specific for each game.

What does it take to add another game to the Smart Game Board?  A
game-specific module that implements two functions:

* Recognize, execute, and undo legal moves.

* Display the board and track move input.

Go-Moku, for example, is played on the same board as Go, and thus required no
change to the board display.  If the surface of a new game differs
significantly from any of the implemented games, it is probably easier to
change some parts of the Smart Game Board to better accommodate this and
future games, rather than forcing the new game into the current framework of
the Smart Game Board.
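
As a concrete illustration of these two functions, a Go-Moku-like module
might look as follows.  This is a sketch only, in Python rather than the
Modula-2 of the real modules, which also handle display details such as mouse
tracking:

    # Hypothetical sketch of a game-specific module for a Go-Moku-like game:
    # recognize/execute/undo legal moves, and display the board.

    EMPTY, BLACK, WHITE = ".", "X", "O"

    class GoMoku:
        def __init__(self, size=19):
            self.size = size
            self.board = [[EMPTY] * size for _ in range(size)]

        # Rules: recognize, execute, and undo legal moves.
        def is_legal(self, move):
            x, y, _ = move
            return 0 <= x < self.size and 0 <= y < self.size and self.board[y][x] == EMPTY

        def execute(self, move):
            x, y, color = move
            self.board[y][x] = color

        def undo(self, move):
            x, y, _ = move
            self.board[y][x] = EMPTY

        # User interface: display the board (move input omitted in this sketch).
        def display(self):
            for row in self.board:
                print(" ".join(row))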

EXPLORER

Characteristics of Go

Assuming the reader knows little or nothing about Go, we attempt to provide
some intuition for this game's domain of knowledge, in part by comparing Go
to chess.  Several excellent introductory books are available from [3].  Go
is a two-person game of perfect information in the sense of game theory; at
all times, both players know the entire state of the game.  The players
alternate placing a black and a white stone on some empty intersection of a
19 by 19 grid.  Once played, a stone never moves, but it may disappear from
the board when captured.  A player's objective is to secure more territory
than his opponent, counted in terms of grid points.  In the process of
surrounding territory by laying down a border of stones that must be 'alive,'
fights erupt that typically lead to some stones being captured ('killed').
Much of the difficulty of Go comes from the fact that during most of the
game, few stones are definitively alive or dead.  Stones are more or less
vital or moribund, and their status can change repeatedly during the course
of a game, as long as the surrounding scene changes.  Only when the game has
ended can all stones be classified definitively as alive or dead.  Thus 'life
or death,' the key concept of Go, exhibits a split personality.  As an
operational concept during the game, it is the most important factor in
estimating potential territory and for assessing the chances of battle, but
is rather fuzzy.  It becomes more and more precise as the game progresses,
and is a well-defined concept used for counting territory when the game has
ended.  The game ends when neither player can find a profitable move and all
points are classified as one of black, white, or no-man's-land.  This
situation typically arises after about 200 to 300 moves have been played,
with anywhere between 60 and 90 percent of the 361 grid points occupied.
Whereas in chess we count a move (by a white piece) and counter-move (black
piece) as a single move, in Go we count the placement of each single stone as
a move.  Even keeping in mind that a Go move corresponds to half a chess move
(a 'ply'), it is evident that a Go game can take a long time.

Professional games last one to two full days.  As an extreme example from the
1988 Honinbo Title match, the defending champion Takemiya spent 5 hours and 7
minutes on a single move.  The rich tradition of Go records instances of
games that took months.  Kawabata Yasunari, winner of the Nobel Prize for
Literature in 1968, considered his novel 'The Master of Go' [8] to be his
finest work.  It is the chronicle of a single game, played in 14 sessions
from August through December of 1938, a contest for supremacy between the
heretofore invincible old Master of Go, Honinbo Shusai, and his younger
challenger, Kitani Minoru.  It is a moving tale of the contest between
tradition and change, and, ultimately, between life and death.

If chess is a model of a single battle (as fought thousands of years ago), Go
is a model of war: Typically, several loosely interacting land-grabbing
campaigns and battles proceed concurrently.  Go is a great game of
synchronization: The stronger a player, the better he is able to coordinate
his campaigns and disrupt the coordination among enemy forces.  Multipurpose
moves are the most effective, such as a 'splitting attack' that wedges in
between two enemy groups, or a move that threatens to kill an enemy group and
extends one's own territory at the same time.  Typically one player has the
initiative, called 'sente,' which enables him to play strong threats that
leave the opponent little choice but to respond locally.  Thus the player
with sente can choose the field of action to suit his goals, whereas his
opponent, said to be in 'gote,' "follows him around the board."  The
sente/gote relationship alternates between the players, but among opponents
of different skill, the stronger player will manage to keep sente most of the
time.

Intuition and experience let a player estimate the numerical value of each
move he considers.  In the opening, a typical move may be worth about 20
points (of 'equivalent territory').  Move values may be highest in the middle
game, but then they decrease steadily towards the endgame, where fights may
erupt over a single point, or even 'half a point.'  This phenomenon of
decreasing value of a typical move as the game progresses causes timing to be
of the utmost importance.  If a move is estimated to be worth x points in a
local context, it must be played at just the right moment, when the value of
other moves is also about x.  If played too early, the opponent may ignore
this move and reply with a bigger one elsewhere, perhaps gaining sente.  If
played too late, when the value of other available moves has diminished, the
opponent may prevent it, even at the cost of ending in gote.  Players
analyzing a game are always debating which move, among several good ones, is
'the largest.'

A handicap system makes it possible to balance players of different skill
without changing the nature of the game much.  The weaker player starts by
placing anywhere from 2 to perhaps 13 stones on the board.  In Japanese Go,
these stones are placed in a fixed pattern on 'handicap points.'  In Chinese
Go, the weaker player places them anywhere--he starts with x free moves.  The
handicap system defines a fairly accurate rating scale, illustrated in Figure
4.  Player A is x stones better than B if the chances become equal when B
receives x handicap stones.  Playing strength of amateurs is measured on a
scale where one unit corresponds to a handicap stone.  A very weak player may
be 20 kyu, implying that he receives 10 handicap stones from a 10 kyu, who in
turn receives 9 handicap stones from a first kyu.  A first kyu 'takes Black,'
i.e. plays first, against a first dan, who receives 5 stones from a 6 dan.
That's about as high as the amateur scale goes.  Above that, there is a
separate scale for professionals, from the first (lowest, perhaps equal to an
amateur 6 dan) to the 9th (highest).  The professional scale has a finger
grating: 9 skill levels are compressed into a difference of about two
handicap stones.  As computer Go is in its infancy, one would like to build
on the mature experience with other games, but such experience does not
transfer readily.  At chess, for example, computers owe their spectacular
prowess more to the computer's speed at searching than to its knowledge of
chess lore.  But experience with the computer chess approach of full board
search does not apply directly to Go, for two reasons:

* Branching factor.  As Go is played on a 19 * 19 board, the number of legal
moves decreases from 361 at the start to several dozen at the end, creating a
tree with an average branching factor of about 200, as compared to a
branching factor of about 40 legal moves from a typical chess position.  This
larger fan-out may be compensated partly by the fact that in Go a greater
number of move sequences (permutations of each other) lead to the same position
than is the case in chess.  (This phenomenon is due to the fact that almost
all Go moves onto an empty grid point are legal, thus they can be played at
any time, whereas many chess moves cease to be legal after some other pieces
have moved).  Thus transposition tables that detect positions analyzed
earlier promise to be relatively more effective in Go than in chess (a sketch
of such a table follows this list).  Still,
we must assume that the vastly larger search space of Go greatly reduces the
depth of feasible search.

* Position evaluation.  Material and mobility, the dominant factors in chess
evaluation functions, are easily computed.  In Go, possession of territory is
the closest equivalent to material possession in chess, but its evaluation is
much more subtle: Except at the very end of the game, a player's claim to
territory can always be challenged.  Go has no clear analog to chess
mobility; perhaps 'shape' comes closest, but good and bad shape are hard to
measure.
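
Transposition tables of the kind mentioned in the first point are
conventionally built on Zobrist hashing (due to the same Zobrist whose
dissertation is cited above): each (point, color) pair is assigned a random
bitstring, and a position's hash is the exclusive-or of the bitstrings of all
occupied points, so that different move orders reaching the same position
produce the same hash.  A generic sketch, not Explorer's implementation:

    import random

    # Generic sketch of Zobrist hashing for a Go transposition table.
    SIZE = 19
    random.seed(0)
    ZOBRIST = {(x, y, color): random.getrandbits(64)
               for x in range(SIZE) for y in range(SIZE)
               for color in ("black", "white")}

    def position_hash(stones):
        """stones: iterable of (x, y, color) tuples, one per stone on the board."""
        h = 0
        for stone in stones:
            h ^= ZOBRIST[stone]
        return h

    # A table keyed by this hash detects positions analyzed earlier.
    transposition_table = {}    # hash -> previously computed result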

Whereas success in computer chess was achieved mostly by sidestepping the
issue of knowledge engineering, and using fast search to compensate for
meager chess knowledge, the analogous recipe is unlikely to lead to
comparable success in Go.  The insight that both domain-specific knowledge
and search are of critical importance makes Go a more diverse and balanced
test bed for artificial intelligence research than chess is.  Computer Go may
well become the new "drosophila of AI."

Despite the large branching factor, strong players routinely look 10 to 20
moves ahead when a fight demands it--an activity called 'reading.'  This
perplexing term suggests that a player, at that moment, is not free to let
his imagination roam, but simply has to discover what is given--whether a
plausible, perhaps forced, sequence of moves works or does not.  Go does know
the concept of narrow and deep search, as does chess, but with a difference.
'Reading' is limited to a local scene, say a 5 by 7 corner, and never extends
to the full board.  Even if reading guarantees a win in a local battle, that
does not necessarily mean much--the enemy might ignore this battle and get
compensation elsewhere.  A separate, more intuitive mental act is required to
assess the relevance of such a local search to the overall situation.  So
far, there is no alternative to the vast amount of Go knowledge accumulated
by analyzing thousands of professional games each year, compiled and
distilled in hundreds of Go books and journals.  Explorer is designed to
explore the feasibility of a knowledge-based approach to computer Go,
augmented by focused tactical lookahead in local fights.

What Aspects of Knowledge Engineering

are Relevant to Go?

It is useful to distinguish four major steps in the task of formalizing
knowledge: Acquisition, selection, representation, and use.  The nature and
difficulty of each step differs from application to application.  Acquisition
and representation are most widely discussed in the literature, but turn out
to be non-issues in our project of formalizing Go knowledge.  Selection and use,
on the other hand, are critical.

Knowledge acquisition.  How do you get at the subject matter knowledge that
you may want to capture?  In contrast to other applications where this
knowledge may be scattered and needs to be painstakingly assembled, we have
all the Go knowledge at hand necessary at this stage of our project--mostly
as the experience of Ken Chen, amateur 6 dan.  It is interesting to reflect
on the fact that most progress in computer chess is due to amateurs of medium
playing strength--strong chess players were evidently not essential to the
development of champion chess machines.  This reflects the fact that
computers can play amazingly powerful chess without much explicit chess
knowledge.  We expect that explicit representation of game-specific knowledge
will be more important for Go than for chess.

Knowledge selection.  Selecting a 'domain of discourse or competence' that
matches the program's abilities is the crucial task.  At this stage it is
hopeless to develop a Go program by trying to capture as much Go knowledge as
possible.  And if we managed to get a lot of knowledge into a system, it
would probably get "confused" and not know how to use it.  The key design
issue in this knowledge engineering task is an expert assessment of what
small part of the vast collection of Go lore can be programmed and will
result in increased playing strength.  Emphasizing fundamentals, we try to
make Explorer understand concepts such as: What stones are dead and should be
given up, what groups are unsafe and should be protected or attacked, what
blocks are important and deserve high priority.

Knowledge representation.  The question as to whether this knowledge is
explicitly represented, say as rules in some appropriately chosen notation,
or implicitly in program fragments, has been decided by pragmatic
considerations of expediency.  At this early stage of the Explorer project,
the choice is overwhelmingly in favor of procedural representation of
knowledge--there is less overhead in getting started.  Some aspects of Go
knowledge, such as the beautifully geometric nature of patterns, clearly
invite the design of a pictorial language to specify condition-action
rules--for example, as generators for 'tesuji' moves, the trademark of a
highly skilled player.  Such notations have been developed [15], but their
design and efficient implementation is a project in its own right. [21] is
another example of explicit knowledge representation, where patterns are
encoded not pictorially, but verbally in terms of standard Go concepts.  In
contrast, we have chosen not to tackle the general issue of assessing
advantages and disadvantages of various representations of knowledge, but to
focus on the specific question of what is feasible for a program under
continuous development that has to make a move in 20 seconds on the average.
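
To give a flavor of what a pictorial condition-action rule might look like,
here is an invented sketch: a small pattern of symbols matched against the
board, with one marked point where the suggested move is to be played.  The
notation and the sample pattern are illustrative only; they are not the
notation of [15] or Explorer's representation:

    # Illustrative sketch of a pictorial condition-action rule: 'B' black,
    # 'W' white, '.' empty, '?' don't care, '*' the point of the suggested move.

    SAMPLE_PATTERN = ["?W?",
                      "B*?",
                      "???"]

    def match_at(board, pattern, ox, oy):
        """board: dict (x, y) -> 'B'/'W' for occupied points; empty points absent.
        Returns the suggested move if the pattern matches at (ox, oy), else None."""
        move = None
        for dy, row in enumerate(pattern):
            for dx, want in enumerate(row):
                have = board.get((ox + dx, oy + dy), ".")
                if want == "?":
                    continue
                if want == "*":
                    if have != ".":
                        return None            # the move point must be empty
                    move = (ox + dx, oy + dy)
                elif have != want:
                    return None
        return move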

Putting knowledge to good use.  If knowledge selection is the key design
issue, knowledge use is the key implementation problem.  The major source of
difficulty is the fact that human knowledge is ubiquitously
contradictory--for every proverb there is a counter-proverb.  In Go, the goal
of maximizing territorial claims contradicts, or at least competes for
resources against, a dozen other goals, such as influence, safety, making
good shape, attacking or restraining opponent forces.  Explorer has two dozen
move generators that all scream for their respective goals.  In a later
section we show how the structure of Explorer attempts to coordinate and
filter their screaming.

Go Knowledge: Design and Windfall

Among the Go concepts designed into Explorer, influence is the most basic.
Any stone radiates influence across the board: Its influence peaks at the
location of the stone, and decays exponentially with distance.  The
superposition of the influence of all the stones on the board creates a field
of force, a terrain that determines the structure of the board at this
moment.
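
Numerically, this can be sketched as follows: each stone contributes a term
that decays exponentially with distance, black and white contributions carry
opposite signs, and the contributions of all stones are summed.  The peak
value, decay rate, and distance measure below are invented constants for
illustration, not Explorer's tuned parameters:

    # Sketch of an influence field: exponential decay with distance, superposed
    # over all stones.  Constants and the Manhattan distance are illustrative.

    SIZE, PEAK, DECAY = 19, 64.0, 0.5

    def influence(stones):
        """stones: list of (x, y, color) with color +1 for black, -1 for white.
        Returns a SIZE x SIZE grid of net influence (positive favors black)."""
        field = [[0.0] * SIZE for _ in range(SIZE)]
        for sx, sy, color in stones:
            for y in range(SIZE):
                for x in range(SIZE):
                    dist = abs(x - sx) + abs(y - sy)
                    field[y][x] += color * PEAK * (DECAY ** dist)
        return field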

Another basic concept of Go is that of a block: A set B of stones of the same
color such that any two stones in B are connected by a sequence of stones in
B, with any consecutive pair horizontally or vertically adjacent.  A block is
'strongly connected,' and its stones die and are removed in unison when they
lose their last 'liberty,' i.e., when there is no free point adjacent to any
stone in the block.  Blocks, more than single stones, are the basic building
blocks of Go positions.
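
Blocks and liberties translate directly into a small flood-fill computation.
The sketch below recomputes them from scratch for one point; Explorer
maintains this information incrementally, so this is an illustration of the
concept rather than its actual data structures:

    # Sketch: find the block containing (x, y) and its liberties by flood fill.

    def block_and_liberties(board, x, y, size=19):
        """board: dict mapping (x, y) -> 'black' or 'white' for occupied points."""
        color = board[(x, y)]
        block, liberties, stack = set(), set(), [(x, y)]
        while stack:
            p = stack.pop()
            if p in block:
                continue
            block.add(p)
            px, py = p
            for n in ((px + 1, py), (px - 1, py), (px, py + 1), (px, py - 1)):
                if not (0 <= n[0] < size and 0 <= n[1] < size):
                    continue
                if n not in board:
                    liberties.add(n)         # adjacent empty point: a liberty
                elif board[n] == color:
                    stack.append(n)          # same-colored neighbor: same block
        return block, liberties

    # A block whose liberty set becomes empty is captured and removed in unison.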

'Block' is a rigorously defined concept essential for checking the legality
of moves, but a block is too small a unit for discussing the quality of
moves.  For this we need two higher concepts, 'chain' and 'group,' that turn
out to be fuzzy.  The intent of these concepts is clear, but their definition
is not.

A chain is a set of blocks that, under normal circumstances, can be connected
into a single block at will, even against (most) measures taken by the
opponent.  A chain is a 'potential block,' and for purposes of planning can
be treated as such.  Many chains will turn into blocks as the game progresses,
but there are exceptions: A higher level of planning may decide that the
connectivity of a particular chain is not worth preserving (for example,
during a 'ko fight').

A group is a set of chains that typically fight together--an attack on any
chain in a group is an attack on all of them.  Like an army that can operate
and survive independently, it has a claim to territory, and many groups will
turn into the boundary of a single territory as the game progresses.  But
there are lots of exceptions--groups split and merge routinely, perhaps
according to the players' plans, often dictated by the unpredictable whim of
the fortunes of war.

Figures 5-8 illustrate these concepts, and the results of Explorer's
analysis, on half a board in the early stage of a game.  There are three
nontrivial blocks (see Figure 5): The two stones labeled 11, two adjacent
stones labeled 9, and three adjacent stones also labeled 9.  All other blocks
consist of a single stone.  The labels identify chains.  The two diagonally
adjacent stones labeled 12 are not currently a block, but can be turned into
one whenever black desires, as white would need two consecutive moves to
separate them.  The same argument explains chains 9 and 6.  The two chains 7
and 11 are connected for most practical purposes, but if a fight erupts later
on, white might wedge in between and separate them.  Explorer considers them
to be two chains, not one.  Figure 6 shows the combined influence wielded by
all these stones, measured on a scale from 1 (low) to 64 (high) in favor of
black or white.  As an example, notice how the gap between chains 11 and 7 in
Figure 5 is under a strong black influence of 50 units; at the moment, this
square is under black's control.  The lower right corner is under very strong
influence exerted by chain 9, and will almost certainly turn into white
territory.  The black stone 8 is dwarfed by the white chain above it, and
thus exerts little influence.  It will most likely end up as a prisoner
inside white's territory, but before that fate is final, it has some nuisance
value ('aji')--it might link up with chain 12, or, less likely, support a
black invasion of the corner.

Influence combines with chains to identify groups, shown in Figure 7 labeled
by their group safety.  Safety ranges from 0 (hopeless) to 64 (totally safe).
The two chains 11 and 7 in Figure 5 are now clearly identified as a single
group, a fighting unit.  So are the single stones labeled 10 and 13 in Figure
5.  Experience shows that these two stones, or any two stones on the third
line separated by two empty spaces, control the territory between and below
them; this know-how is preached in every beginner's book.  Explorer has no
explicit concept 'two stones on the third line separated by two empty
spaces,' and thus can have no explicit rule about them.  But Explorer seems
to know this fact anyway, as reflected by the relatively high influence on
these points, as shown in Figure 6.  The influence measure was tuned to
produce a reasonable result on such standard configurations.  Unlike a
collection of explicit rules that can never capture all situations, influence
applies to all configurations, and gives reasonable results for most.  It is
also used to define a territorial claim for each group, shown shaded in
Figure 7.  This analysis of the board is updated after each move.  Figure 8
shows the effect of a single move in the position of Figure 7.  Left: A black
invasion lowers the safety of white's 2-stone group from 47 to 13, and
enhances the safety of black's group to the right from 47 to 51.  Right: A
white 'peep' that threatens to break black chain 6 drastically reduces its
safety from 46 to 28 and 4.
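
How influence and chains combine into groups is described only through the
examples of Figures 7 and 8.  The fragment below is one deliberately crude
reading, offered as an assumption for illustration: two chains of the same
color belong to the same group if some pair of their stones lies close
together and the point between them is under that color's net influence;
groups are then the transitive closure of this relation.  The distance bound
and the influence test are not Explorer's actual criteria, and the safety
values themselves (0 to 64 in Figure 7) appear only as results in the
figures, so the sketch stops short of computing them.

  # Illustrative grouping rule (an assumption, not Explorer's algorithm).
  # color_sign is +1 for black, -1 for white; chains are sets of (x, y).

  def same_group(chain_a, chain_b, influence, color_sign, max_gap=3):
      for ax, ay in chain_a:
          for bx, by in chain_b:
              if abs(ax - bx) + abs(ay - by) <= max_gap:
                  mx, my = (ax + bx) // 2, (ay + by) // 2
                  if color_sign * influence[my][mx] > 0:
                      return True
      return False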

A last comment on knowledge selection.  Explorer is a weak player playing
other weak players who are likely to exhibit typical beginners' shortcomings
in their repertoire of techniques.  It is tempting to design tricks into it,
that is, schemes and techniques that stand a good chance of catching a novice
unaware, but are basically unsound and will be refuted by an alert stronger
player.  Inspired by Bobby Fischer's famous creed: "I don't believe in
psychology, I believe in strong moves," we tried to avoid falling into this
trap.  Although tricks might lead to short-lived success, they are sure to
lead into a dead end, as tricks, by definition, contradict general truths,
and will only aggravate a problem inherent in knowledge engineering: That
different rules contradict each other.  Any expert player will recognize the
Go concepts we aim at as belonging to the classic fundamentals of Go lore.

Structure of Explorer

Explorer's knowledge is represented internally in four different ways:

(1) Relationships among stones on the board are represented by three types of
frames: Blocks, chains, and groups.  Each has eight to thirty slots, filled
by attached procedures at the start of each move decision cycle.

(2) Knowledge concerning local urgent Go patterns is implemented as pattern
recognition procedures.

(3) Josekis, local urgent moves in a corner opening, are stored separately as
a tree on disk.

(4) Knowledge concerning moves to achieve specific goals is scattered in two
dozen move generators in implicit production rule form.

Explorer's Go knowledge is the major source for its decision making.  Note
that with the exception of tactical information, Explorer's knowledge is all
derived from the present position.  The move generation and selection process
is shown in Figure 9.
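
To make the first and fourth representations a little more concrete, the
fragment below sketches what a group frame and a single move generator might
look like.  The slot names, the urgency scale, and the generator signature
are assumptions introduced for illustration; the article states only that
each frame carries eight to thirty slots filled by attached procedures, and
that about two dozen generators propose moves in implicit production rule
form.

  # Hedged structural sketch; names and signatures are illustrative.
  from dataclasses import dataclass, field
  from typing import Callable, List, Tuple

  Point = Tuple[int, int]

  @dataclass
  class GroupFrame:
      """A few of the slots a group frame might carry; they would be
      refilled by attached procedures at each move-decision cycle."""
      chains: List[List[Point]] = field(default_factory=list)
      safety: int = 0                      # 0 (hopeless) .. 64 (safe)
      territory: List[Point] = field(default_factory=list)

  # A move generator inspects the analyzed position and proposes
  # (move, urgency) pairs in pursuit of one specific goal.
  MoveGenerator = Callable[[List[GroupFrame]], List[Tuple[Point, int]]]

  def defend_weak(groups: List[GroupFrame]) -> List[Tuple[Point, int]]:
      """Example generator: strengthen groups that are weak but not dead
      (crude: it simply proposes a point of the group's own territory)."""
      proposals = []
      for g in groups:
          if 0 < g.safety < 20 and g.territory:
              proposals.append((g.territory[0], 64 - g.safety))
      return proposals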

Experience and Current Work

Reviewing the games Explorer played in the November 1988 International
Computer Go Congress in Taiwan, and in human tournaments in New Jersey and
Oklahoma in early 1989, we observe that our knowledge-based approach to
computer Go is feasible for developing a low-rated amateur.  Knowledge is
power: Its knowledge of group safety and block values alone gave Explorer an
advantage over most other Go programs, as reflected in its strong performance
in large scale fighting during the mid game stage, in most cases making end
game play irrelevant to the win/loss results.  As is the case in chess, a
computer's style of playing is different from people's.  Explorer committed
bigger blunders than its human opponents, but often exhibited better
positional understanding than human players of equivalent strength.  Explorer
is now a non-voting member of the American Go Association rated 15 kyu.

With that official status it accomplished its purpose as a guinea pig and we
immediately raised our level of ambition.  Ken Chen believes in capturing
crucial Go knowledge.  He redesigned the position analyzer and produced Go
Intellect, a program with a much improved understanding of Go.  Anders
Kierulf produced Swiss Explorer, a relatively solid player that improved on
the components Explorer already had.  In August '89 Go Intellect won the
North American computer Go championship at Rutgers 4:0, and a week later in
London Swiss Explorer won the Computer Go Olympiad in a 10-player round-robin
tournament 8:1.

Experience with Explorer and its improved successors clearly shows that using
an additional source of knowledge to advantage is much harder than making the
additional knowledge available to the system.  Knowledge may backfire--more
knowledge does not necessarily mean better performance.  Different sources of
knowledge will suggest different urgent moves that conflict with each other.
The coordination of all knowledge sources in the system is the hardest
knowledge engineering task in the Explorer project.
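
One naive coordination scheme, shown here only to make the conflict concrete,
is to pool the proposals of all knowledge sources and let the urgency values
assigned to the same point accumulate.  Nothing in the article suggests this
is Explorer's scheme; the paragraph above argues that this coordination is
precisely the hardest part of the project.

  # Naive pooling of conflicting move proposals (illustration only).
  from collections import defaultdict
  from typing import Dict, Iterable, List, Tuple

  Point = Tuple[int, int]
  Proposal = Tuple[Point, int]           # (move, urgency)

  def coordinate(sources: Iterable[List[Proposal]]) -> Point:
      """Sum the urgency each source assigns to a point and play the
      point with the highest combined urgency."""
      combined: Dict[Point, int] = defaultdict(int)
      for proposals in sources:
          for move, value in proposals:
              combined[move] += value
      return max(combined, key=combined.get)

  # Example: a pattern matcher and a group-safety generator disagree.
  pattern_moves = [((3, 3), 40)]
  safety_moves = [((9, 15), 35), ((3, 3), 10)]
  print(coordinate([pattern_moves, safety_moves]))   # -> (3, 3)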

Some knowledge of importance to human problem solvers may be of little value
to a machine.  For example, human players memorize standard corner opening
sequences, called joseki, because they are safe and save a lot of thinking.
It is easy to implement such book knowledge, and Explorer has a large joseki
library on disk [6].  But it takes considerable strategic understanding to
take advantage of sophisticated opening sequences, and today's programs are
lacking in this respect.  During computer Go tournaments, we turned Explorer's
joseki option off, in order to increase the chances of middle game strategic
fighting, where we felt Explorer had an advantage over other programs.

Almost all Go knowledge is heuristic, and thus imprecise.  We continually
attempt to refine our programs' knowledge in order to reduce conflicts and
improve performance, and investigate additional Go concepts that can
profitably be captured and used.  As computer chess has proven repeatedly,
game playing is an experimental subject where predictions are difficult.  But
we believe that an expanded pattern library may capture part of the concepts
of tesuji and aji, and serve as move generators that focus search on goals
that would otherwise be lost beyond the horizon.  Explorer had no concept of
one of the most important aspects of Go: Coordination among several battles,
as discussed in Section 1.  Synchronization of campaigns is a major extension
beyond our programs' current structure.  As a step in this direction, we have
experimented with making use of the concepts of 'sente' and 'gote' in the
endgame, where most scenes of action are independent [11].

Tactics is the one aspect of game-playing amenable to exact calculation.
Explorer's computing power was woefully inadequate for solving tactical
problems it was aware of.  Improving tactical prowess is an open-ended route
to a stronger Go program--no inherent limitation can be seen as yet.  Strong
chess machines use special-purpose hardware to generate and evaluate 100 to
1000 times more positions per second than a conventional microprocessor
could.  We are not aware of any work on Go hardware, but that's an obvious
approach to investigate.
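
As one example of the kind of exact calculation meant here, the sketch below
answers the question "can the attacker capture this block within a few
moves?" by a depth-bounded search over liberty-filling attacks and local
defenses.  It is a simplified illustration (ko is ignored and only local
replies are tried), not Explorer's tactical module; the board encoding is the
same assumed array as in the earlier sketches.

  # Simplified capture search (illustration, not Explorer's tactics).
  EMPTY, BLACK, WHITE = 0, 1, 2

  def neighbors(p, size):
      x, y = p
      for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
          if 0 <= nx < size and 0 <= ny < size:
              yield nx, ny

  def block_libs(board, p):
      """Stones and liberties of the block containing the stone at p."""
      size, color = len(board), board[p[1]][p[0]]
      stones, libs, stack = {p}, set(), [p]
      while stack:
          q = stack.pop()
          for n in neighbors(q, size):
              v = board[n[1]][n[0]]
              if v == EMPTY:
                  libs.add(n)
              elif v == color and n not in stones:
                  stones.add(n)
                  stack.append(n)
      return stones, libs

  def play(board, p, color):
      """Copy the board, place a stone, remove captured enemy blocks."""
      b = [row[:] for row in board]
      b[p[1]][p[0]] = color
      enemy = BLACK if color == WHITE else WHITE
      for n in neighbors(p, len(b)):
          if b[n[1]][n[0]] == enemy:
              stones, libs = block_libs(b, n)
              if not libs:
                  for sx, sy in stones:
                      b[sy][sx] = EMPTY
      return b

  def can_capture(board, target, attacker, depth):
      """True if the attacker captures the block at target within
      depth attacker moves against the local defenses tried here."""
      _, libs = block_libs(board, target)
      if not libs:
          return True                      # no liberties left: captured
      if depth == 0 or len(libs) > depth:
          return False                     # heuristic cutoff
      defender = board[target[1]][target[0]]
      for lib in libs:                     # attacker fills a liberty
          b1 = play(board, lib, attacker)
          if b1[target[1]][target[0]] == EMPTY:
              return True                  # captured outright
          _, a_libs = block_libs(b1, lib)
          if not a_libs:
              continue                     # suicide: illegal, skip
          _, d_libs = block_libs(b1, target)
          replies = set(d_libs) | set(a_libs)  # extend or counter-capture
          if all(can_capture(play(b1, r, defender), target, attacker,
                             depth - 1) for r in replies):
              return True
      return False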

What motivates researchers to pursue this never-ending race for stronger
computer players?  Computer game playing is one of the few cases in the fuzzy
area of knowledge engineering where performance and progress can be measured
with remarkable accuracy, and the causes that affect performance can be
identified.  Computer chess, for example, has shown that you do not need to
understand much about chess at all in order to play at the level of an
international master: all you have to do is look at 1 million positions
per second.  This insight may not generalize beyond chess, but it is an
impressive statement about the power of computation.

CONCLUSION

Computer Go stands today where computer chess was 20 years ago.  Among dozens
of programs, each one seems to follow its own approach, and the most visible
characteristics they share are a very low amateur level of play, and a lot of
hope.  It was impossible, two decades ago, to predict with any degree of
assurance how far and how fast computer chess would progress, and it is
impossible today to predict how successful computer Go will be, and what it
takes to get there.  The only road to insight and progress is hard,
trial-and-error experimentation.  Explorer is one data point along this road.
It tackles a difficult issue head on: Can a dumb program make good use of
classical Go knowledge?

Acknowledgments.  Thanks to Bill Hargrove, Martin Muller, Monty Newborn, Jim
Stein, and Ken Thompson for helpful comments.

REFERENCES

[1] American Go Association, P.O. Box 397, Old Chelsea Station, New York, NY
10113.

[2] Computer Go.  D.W. Erbach, Ed., 71 Brixford Crescent, Winnipeg, Manitoba
R2N 1E1, Canada.

[3] Ishi Press International, 1400 North Shoreline Blvd., Bldg. A7, Mountain
View, CA 94043.

[4] Bradley, M.B. The Game of Go--The Ultimate Programming Challenge?
Creative Computing 5, 3 (Mar. 1979), 89-99.

[5] Friedenbach, K.J. Abstraction hierarchies: A model of perception and
cognition in the game of go.  Ph. D. dissertation, Univ. of California, Santa
Cruz, 1980.

[6] Ishida, Y. Dictionary of Basic Joseki, Vol. 1, 2, 3.

[7] Jansen, P. DT as Spraggett's second in Quebec.  Msg on electronic news,
Feb. 17, 1989.

[8] Kawabata, Y. The Master of Go.  Perigee Books, NY, 1981.  Originally
published in Japanese, as 'Meijin', in 1951.

[9] Kierulf, A. Brand--an Othello Program.  In M.A. Bramer, Ed., Computer
Game-Playing: Theory and Practice, 197-208, Ellis Horwood, Chichester, 1983.

[10] Kierulf, A. Computer Go Bibliography.  Part 1 in [2] (Winter 1986/87),
17-19; part 2 in [2] 1, 3 (Summer 1987), 15-19.

[11] Kierulf, A. Human-Computer Interaction in the Game of Go.  In
"Methodologies for Intelligent Systems", Z.W. Ras and M. Zemankova, Eds.,
North Holland, 1987.

[12] Kierulf, A., and Nievergelt, J. Computer Go: A smart board and its
applications.  Go World No. 42, Winter 1985/86, 62-65, Ishi Press, Tokyo.

[13] Lehner, P.E. Planning in adversity: A computational model of strategic
planning in the game of go.  Univ. of Michigan, Ph. D. dissertation (1981).

[14] Levy, D.N.L. (Ed.).  Computer Games II.  Springer Verlag, New York,
1988.

[15] Mano, Y.  An Approach to conquer Difficulties in Developing a Go Playing
Program.  J. Info. Proc. 7, 2 (1984), 81-88.

[16] Nunn, J. Life with ChessBase.  ICCA J.  (International Computer Chess
Association) 11, 2/3 (June/Sept. 1988).

[17] Reitman, W., and Wilcox, B.  The structure and performance of the
interim.2 Go program.  In Proceedings of IJCAI-6 (Tokyo, August 20-23, 1979),
711-719.

[18] Remus, H.  Simulation of a learning machine for playing Go.  In
Proceedings of IFIP Congress, North Holland, 1962.

[19] Ryder, J.L.  Heuristic analysis of large trees as generated in the game
of Go.  Ph.D. dissertation, Stanford Univ., 1971.

[20] Shannon, C.E.  Programming a computer for playing chess.  Philosophical
Mag. 41, 314 (1950), 256-275.

[21] Shirayanagi, K.  A new approach to programming Go--Knowledge
representation and its refinement.  In Proceedings of the Workshop on New
Directions in Game-Tree Search (Edmonton, Canada, May 28-31, 1989).

[22] Turing, A.M.  Digital computers applied to games.  In Faster than
Thought: A Symposium on Digital Computing Machines (B.V. Bowden, Ed.), Ch.
25, 286-310, Pitman, London, 1953.

[23] Wilcox, B. Zobrist's program.  Amer. Go J. 13, 4/6 (1978), 48-51.

[24] Wilcox, B. Ryder's program.  Amer. Go J. 14, (1979), 23-28.

[25] Wilcox, B. Reflections on building two Go programs.  SIGART News 94
(Oct. 1985) 29-43.

[26] Zobrist, A.L.  Feature extraction and representation for pattern
recognition and the game of Go.  Ph.D. dissertation, Univ. of Wisconsin
(1970).

ANDERS KIERULF is an assistant in computer science at Informatik ETH in
Zurich.  His research interests include heuristic search and computer game
playing, user interfaces, algorithms and data structures.  Author's Present
Address: Informatik, ETH, CH-8092 Zurich, Switzerland.  kierulf@inf.ethz.ch.

KEN CHEN is associate professor of computer science at the University of
North Carolina at Charlotte.  His research interests include artificial
intelligence, in particular heuristic search, computer game playing, and
knowledge-based systems.  Author's Present Address: Dept. of Computer
Science, University of North Carolina, Charlotte, NC 28223.
chen@unccvax.uncc.edu.

JURG NIEVERGELT, professor of computer science, has recently returned to ETH
Zurich after 4 years as chairman of the Department of Computer Science at the
University of North Carolina at Chapel Hill.  His research interests include
algorithms and data structures, and interactive systems.  Author's Present
Address: Informatik, ETH, CH-8092 Zurich, Switzerland.  jn@inf.ethz.ch.

Permission to copy without fee all or part of this material is granted
provided that the copies are not made or distributed for direct commercial
advantage, the ACM copyright notice and the title of the publication and its
date appear, and notice is given that copying is by permission of the
Association for Computing Machinery.  To copy otherwise, or to republish,
requires a fee and/or specific permission.