[mod.techreports] canai1 tech reports

E1AR0002@SMUVM1.BITNET (10/29/86)

.B1 "Abstracts"
.H1
.sp 0.5v
Recent \*(AI
Technical Reports
.sp 0.5v
.N1
\f2Editor's note:\fP Recent Canadian \*(AI technical reports are listed in this
department.  Abstracts will be included as space permits, with preference being
given to theses.
.5S
.sz 14
.ce
\fBUniversity of Toronto\fP
.sz 10
.sp 0.5v
Requests for any of the following publications should be addressed to:
  Joanne Mager
  Department of Computer Science
  University of Toronto
  Toronto, Ont., C\s-1ANADA  M5S 1A4\s0
.At
  Theory and parsing of
  the coordinate conjunction ``and''
.Aa
  Victoria L. Snarr
.Ad
  MSc thesis, TR CSRI-171
  September 1985.
.Ab
Although the conjunction \fIand\fP appears to have a simple function in the
English language, it has proved to be a stumbling block for both theoretical
and computational linguists.
.pp
One of the theoretical problems of conjunction is to determine what governs the
acceptability of a structure in which two elements are connected by \fIand\fP.
The corresponding computational problem is, given this knowledge, to
incorporate it into an efficient parser for English.
.pp
This thesis proposes a solution to the theoretical problem in the form of two
general constraints, one syntactic and one semantic, and then incorporates
these constraints into a ``strictly deterministic'' parser for English.
.At
  A foundational approach to
  conjecture and knowledge
.Aa
  James Patrick Delgrande
.Ad
  PhD Thesis, Technical Report CSRI-173
  September 1985
.Ab
A foundational investigation of the notion of hypothesis in knowledge
representation schemes is presented.  Three major areas are addressed.	The
first concerns formal issues in forming and maintaining a set of hypotheses,
based on a stream of facts or ground atomic formulae.  The second concerns
deductively reasoning with a set of known and hypothesized sentences, and the
relation of such a reasoning system with a hypothesis formation system.  The
last area, which represents preliminary work, deals with an informal theory of
meaning for statements that are in some sense general, and yet admit
exceptions.
.pp
For the first part, a language \fIHL\fP, and from it an algebra and logic, is
derived for forming hypotheses and maintaining the consistency of a set of
hypotheses.  The hypotheses are expressed in set-theoretic terms.  Two
soundness and completeness results for the logic are presented.
Through these formal systems, the set of potential hypotheses is precisely
specified and a procedure is derived for restoring the consistency of a set of
hypotheses after conflicting evidence is encountered.  For the second part,
reasoning with knowledge and hypothesis, an existing first-order language,
\fIKL\fP, that can represent and reason about what it knows is extended to one
that can reason with knowledge and hypothesis.
The third part, which addresses the issues of exceptions to general statements,
is treated by outlining a theory of meaning for naturally occurring classes.
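.pp
\f2Editor's sketch:\fP the thesis works with a formal set-theoretic language,
not code, but the consistency-restoring step of the first part can be
illustrated in a few lines of Python (all names invented): a hypothesis
survives only until an incoming fact asserts its complement.
.nf

# Minimal sketch: facts and hypotheses are signed atoms such as
# ("flies", True).  A hypothesis is retracted as soon as an incoming
# fact asserts the opposite sign for the same atom.

def assimilate(fact, facts, hypotheses):
    facts.add(fact)
    atom, sign = fact
    # Restore consistency: drop any hypothesis the new fact falsifies.
    return {h for h in hypotheses if h != (atom, not sign)}

facts = set()
hypotheses = {("flies", True), ("lays_eggs", True)}
hypotheses = assimilate(("flies", False), facts, hypotheses)
print(hypotheses)   # only ('lays_eggs', True) survives

.fi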
.5S
.sz 14
.ce
\fBUniversity of Waterloo\fP
.sz 10
.sp 0.5v
Requests for any of the following publications should be addressed to:
  Donna McCracken
  Department of Computer Science
  University of Waterloo
  Waterloo, Ontario, C\s-1ANADA  N2L 3G1\s0.
.At
  The logical definition of deduction systems
.Aa
  David L. Poole
.Ad
  Research Report CS-84-12
.Ab
The separation of an algorithm into logic plus control has special benefits in
the definition of problem-solving systems.  Such a separation allows proving
the correctness and completeness of the logic base, independent of the control
imposed.  The control component can then be developed to enhance the efficiency
and explanation whilst preserving the logical power and correctness of the
system.
.pp
This paper presents the definition of one such logical base for the non-clausal
first-order predicate calculus.  This logical base specifies how to transform a
predicate calculus deduction problem into a problem of searching an
\s-1AND/OR\s0 tree. Different control strategies are developed, with resulting
systems combining the efficiency of connection graph proof procedures with the
efficiency of non-chronological backtracking. Each implementation uses the
input form of the unconstrained first-order predicate calculus, with each step
being locally explainable in terms of input given by the user.	This allows the
debugging and explanation facilities of expert systems to be incorporated into
the implementations.
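.pp
\f2Editor's sketch:\fP the transformation itself is not reproduced in this
abstract, but searching the resulting \s-1AND/OR\s0 tree can be pictured in
Python (tree and values invented): an \s-1OR\s0 node needs one provable
child, an \s-1AND\s0 node needs them all.
.nf

# Minimal sketch of AND/OR tree search.  Leaves carry a truth value
# standing in for a provable or unprovable subgoal.

def solve(node):
    kind, children = node[0], node[1:]
    if kind == "leaf":
        return children[0]
    if kind == "or":
        return any(solve(c) for c in children)
    if kind == "and":
        return all(solve(c) for c in children)

tree = ("or",
        ("and", ("leaf", True), ("leaf", False)),
        ("and", ("leaf", True), ("leaf", True)))
print(solve(tree))  # True: the second AND branch closes the proof

.fi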
.At
  A logical system for default reasoning
.Aa
  David L. Poole
.Ad
  in \fIProceedings of the AAAI
  Non-monotonic Reasoning Workshop\fR,
  October 1984, New Paltz, NY, 373\(mi384.
.Ab
This paper proposes an alternate motivation, justification, syntax and a
model-theoretic semantics for default logic. This is a conservative extension
of the first-order predicate calculus, to incorporate defaults as well as
facts.  Provability becomes ``explainable by a theory''.  This theory is like
a scientific theory, and must be consistent with the facts. The defaults are
the possible hypotheses in a theory.
.pp
We outline how a computational mechanism for such a default reasoning system
can be obtained from an existing deduction system with negation.
The analogy with science is discussed, along with a proposal as to how such
reasoning can be used to implement expert diagnosis systems.
By allowing the theories to be explicit we overcome many of the problems
which motivated the development of non-normal defaults.
The problems associated with quantified defaults are also discussed.
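.pp
\f2Editor's sketch:\fP a brute-force propositional illustration of the idea,
not Poole's actual mechanism: a theory is the facts plus any subset of the
defaults that stays consistent, and a goal is explainable if some such theory
contains it (literals invented).
.nf

from itertools import combinations

# Literals are ("p", True) or ("p", False); a set is consistent when
# it contains no complementary pair.

def consistent(lits):
    return not any((a, not s) in lits for (a, s) in lits)

def explain(goal, facts, defaults):
    for r in range(len(defaults) + 1):
        for subset in combinations(defaults, r):
            theory = facts | set(subset)
            if consistent(theory) and goal in theory:
                return theory
    return None

facts = {("bird", True)}
defaults = [("flies", True)]
print(explain(("flies", True), facts, defaults))
# the facts plus the default ('flies', True)

.fi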
.At
  On the comparison of theories:
  Preferring the most specific explanation
.Aa
  David L. Poole
.Ad
  in the \fIProceedings of the Ninth International
  Joint Conference on Artificial Intelligence\fR,
  Los Angeles, August 1985
.Ab
One of the problems with systems that reason with defaults arises when two
contradictory answers can be produced, only one of which is preferable.  In
this paper, defaults are treated as possible hypotheses in a ``scientific'' theory
to explain the results. Within such a system we propose a theory comparator to
resolve conflicts by choosing the result supported by the most specific
knowledge. This overcomes many of the problems which motivated non-normal
defaults, and produces the correct results in inheritance systems.
A model-theoretic semantics for the default logic is defined, as well as a
computational mechanism in terms of normal first-order predicate calculus
deduction systems.  A comparison with other proposals shows that this has
advantages in its semantics, and in modularity of knowledge.
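.pp
\f2Editor's sketch:\fP the comparator can be pictured on the standard
inheritance example (all names invented): of two conflicting defaults, the
one whose antecedent lies lower in the is-a chain wins.
.nf

# Invented illustration of preferring the most specific explanation.

isa = {"penguin": "bird", "bird": "animal"}        # child -> parent

def depth_above(cls, ancestor):
    # Number of is-a steps from cls up to ancestor; None if unrelated.
    d = 0
    while cls != ancestor:
        if cls not in isa:
            return None
        cls, d = isa[cls], d + 1
    return d

defaults = [("bird", ("flies", True)), ("penguin", ("flies", False))]

def conclude(cls):
    applicable = [(depth_above(cls, c), concl) for c, concl in defaults
                  if depth_above(cls, c) is not None]
    return min(applicable)[1]    # smallest distance = most specific

print(conclude("penguin"))   # ('flies', False): the penguin default wins
print(conclude("bird"))      # ('flies', True)

.fi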
.At
  The need for pragmatics
  in natural language understanding
.Aa
  Robin Cohen
.Ad
  in the \fIProceedings of the CSCSI Workshop on
  Theoretical Advances in Natural Language
  Understanding\fR, May 1985, Dalhousie University,
  Halifax, N.S.
.Ab
This position paper maintains that considering the beliefs, goals, and
intentions of the conversants is absolutely critical to the construction
of some natural language understanding systems. Two particular examples
are studied in detail: \fI(i)\fP designing interfaces to interactive systems
and \fI(ii)\fP constructing a model to understand arguments. The consequence
is that the knowledge representation scheme must be more than a blueprint
for representing ``shared facts''.  In essence, pragmatic analysis is an
important constituent of natural language understanding and user models
are thus a critical component of the underlying knowledge representation.
.At
  Automated discovery
.Aa
  Paul Van Arragon
.Ad
  Research Report CS-85-14 [M.\|Math essay]
.Ab
Discovery learning is the area of \*(AI that attempts to automate the process
of scientific discovery.  We discuss the definition of discovery learning, and
review the previous work in the area, including Lenat's \s-1AM\s0 (that
researches mathematics), \s-1EURISKO\s0 (a revision of \s-1AM\s0 that studies
several domains), Langley's \s-1BACON\s0 (that discovers empirical laws), and
Aref's system (that discovers data structure concepts). We critique each
program and suggest directions for future research.
.At
  An expert system for educational diagnosis
  based on default logic
.Aa
  Marlene Jones and David Poole
.Ad
  \fIProceedings of the Fifth International
  Workshop on Expert Systems & their Applications\fR,
  May 13\(mi15, Avignon, France, 673\(mi683
.Ab
This paper shows how a formal logic can be used to build an expert system with
an explicit semantics so that the knowledge can be understood without appealing
to the working of the system. This is based on a default logic, which was
explicitly designed to handle incomplete knowledge of the form found in
diagnosis problems. The defaults correspond to possible hypotheses we are
prepared to accept in a diagnosis. The diagnosis consists of finding the best
theory that explains the symptoms. The semantics of the system and a
definition of ``best'' are provided, which allows us to reason in an efficient,
hierarchic manner to do a diagnosis.
It is shown how this approach can be employed in an expert system for
diagnosing or assessing children with learning disabilities.
.At
  Expert systems: Their potential roles
  within special education
.Aa
  Marlene Jones
.Ad
  \fIPeabody Journal of Education\fR, \fB62\fR(1), 52\(mi66
.Ab
We investigate the potential of expert systems within the field of education,
particularly special education.
.At
  Student models: The genetic graph approach
.Aa
  B.\|J. Wasson
.Ad
  Research Report CS-85-10 [M.\|Math thesis]
.Ab
A major component of an intelligent computer-aided instruction
system is the student model. We identify certain criteria which an ideal
student model must satisfy and we use these criteria to evaluate existing
modelling techniques. One particularly promising approach, due to Goldstein, is
to use a genetic graph as a framework from which the student model is obtained.
Goldstein's method is described and its potential as a base for the
student model is examined in detail.
We propose extensions to the genetic graph which are necessary to fulfill
the requirements of the ideal student model.
.pp
To show the flexibility and usefulness of the approach and its extensions,
two radically different domains, subtraction and elementary ballet, are
illustrated as genetic graphs.
We discuss the necessity and feasibility of dynamic expansion of the student
model, and argue that this is an attainable goal.
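.pp
\f2Editor's sketch:\fP a toy Python rendering of the data structure (skills
and link labels invented): nodes are skills, edges carry the genetic
relationship, and an overlay of mastered skills yields the tutor's natural
next topics.
.nf

# Genetic-graph miniature: edges (src, dst) labelled with the
# relationship by which dst is acquired from src; the student model is
# an overlay marking skills already demonstrated.

graph = {
    ("borrow_once", "borrow_across_zero"): "generalization",
    ("subtract_no_borrow", "borrow_once"): "prerequisite",
}

student_knows = {"subtract_no_borrow"}

def frontier(graph, known):
    # Skills not yet known whose source skill is already mastered.
    return sorted(dst for (src, dst), link in graph.items()
                  if src in known and dst not in known)

print(frontier(graph, student_knows))   # ['borrow_once']

.fi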
.At
  Inductive concept learning using
  the artificial intelligence approach
.Aa
  Bruce Cockburn
.Ad
  Research Report CS-85-12 [M.\|Math essay]
.Ab
This essay is a critical survey of past attempts to build programs that
learn symbolic concepts by induction from examples.
Specifically, emphasis is placed on programs that use \*(AI
techniques, as opposed to the alternative methods of numerical and
statistical analysis.
First, the distinguishing characteristics of previous programs are described
to provide criteria for the evaluation of actual systems.
Then, learning programs are presented and their properties discussed.
A critical discussion outlines areas of weakness in past research.
Finally, appropriate directions for future research are identified.
.At
  A comparative study of pattern recognition
  and artificial intelligence techniques
  for the development of intelligent systems
.Aa
  Sheila A. McIlrath
.Ad
  Research Report CS-85-11 [M.\|Math essay]
.Ab
The purpose of this paper is to evaluate and compare pattern recognition
and \*(AI techniques for developing intelligent
systems, and to indicate areas where pattern recognition techniques could be
incorporated into an \*(AI framework.
.pp
An intuitive introduction to pattern recognition principles is provided.
It contains a summary of pertinent techniques in both decision-theoretic and
syntactic pattern recognition.
Pattern recognition and \*(AI techniques are compared with respect to
methodology, formalization, ease of implementation, ease of understanding and
modification, domain applicability, and potential for future expansion.
Three specific areas of \*(AI are isolated for more in-depth study:
knowledge representation, problem-solving techniques, and learning.
Within each area, a comparison of pattern recognition and \*(AI is provided,
and suggestions are made for the application of pattern recognition techniques
to the \*(AI environment.
.ds ES \s-1ESPRIT\s0
.At
  The potential role of Canada in the
  European Communities' \*(ES project
.Aa
  Randy Goebel
.Ab
A recent meeting of Canada and the Commission of the European Communities
included a presentation on \*(ES, the recently initiated European Communities
programme designed to accelerate research and development in information
technology.  This brief paper reports some observations on the \*(ES programme
and the potential for Canada to take a cooperative role.
.At
  Concurrent Prolog
  in a multi-process environment
.Aa
  Rosanna K.\|S. Lee and Randy Goebel
.Ad
  Research Report CS-84-46 [M.\|Math thesis]
  Research Report CS-85-09
  Also in the \fIProceedings of the IEEE 1985
  International Symposium on Logic Programming\fR,
  Boston, July 1985
.Ab
\fIConcurrent Prolog\fP is Shapiro's definition of a simple yet powerful
transformation of Prolog that incorporates concurrency, communication,
synchronization and indeterminacy.  Here we report on the development of a
computation model for Concurrent Prolog which uses processes communicating via
message-passing.  A prototype implementation, \fIPort Prolog\fP, has been
programmed.
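.pp
\f2Editor's sketch:\fP this is not Port Prolog, but the process-and-stream
flavour of the model can be suggested with Python threads and a queue: the
consumer suspends on the stream much as a Concurrent Prolog goal suspends on
an unbound read-only variable.
.nf

import queue
import threading

# Two concurrent "goals" sharing a stream: the producer instantiates
# it element by element; the consumer waits for each binding.

stream = queue.Queue()

def producer():
    for x in range(3):
        stream.put(x)        # bind the next element of the stream
    stream.put(None)         # close the stream

def consumer():
    while True:
        x = stream.get()     # suspend until the element is bound
        if x is None:
            break
        print("consumed", x)

t = threading.Thread(target=producer)
t.start()
consumer()
t.join()

.fi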
.At
  Interpreting descriptions in a Prolog-based
  knowledge representation system
.Aa
  Randy Goebel
.Ad
  Research Report CS-85-08
  Also in the \fIProceedings of the Ninth
  International Joint Conference on Artificial
  Intelligence\fR, Los Angeles, August 1985
.Ab
Descriptions provide a syntactic device for abbreviating expressions of a
formal language. We discuss the motivation for descriptions in a system called
\s-1DLOG\s0.  We describe two approaches to specifying their semantics, and a
method for implementing their use.  We explain why some descriptions should be
given a higher-order interpretation, and explain how such descriptions can be
dealt with in the simpler logic of Prolog.
The essential idea is to constrain the domain of descriptions so that an
extended unification procedure can determine description equivalence
within the Prolog framework.
.At
  On eliminating loops in Prolog
.Aa
  David Poole and Randy Goebel
.Ad
  \fISIGPLAN Notices\fR, \fB20\fP(8), August 1985, 38\(mi40.
.Ab
Recent papers have explained how infinite loops in a Prolog search tree can be
avoided by use of subgoal deletion. We show here that this works only in
limited cases, and argue that these cases can be better avoided by slight
modifications of the program, rather than by increasing the complexity of all
programs with a rule that has very limited applicability.
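.pp
\f2Editor's sketch:\fP the loop check under discussion, on an invented
propositional program: a subgoal identical to one of its ancestors is
pruned, so the circular rule cannot recurse forever.
.nf

# "Rules" map a goal to the bodies that can prove it.

rules = {
    "a": [["b"], ["a"]],   # the second rule loops: a :- a.
    "b": [[]],             # a fact: b.
}

def prove(goal, ancestors=()):
    if goal in ancestors:          # subgoal repeats an ancestor: prune
        return False
    for body in rules.get(goal, []):
        if all(prove(g, ancestors + (goal,)) for g in body):
            return True
    return False

print(prove("a"))   # True, and the a :- a branch does not loop forever

.fi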
.At
  Design and Implementation of the
  Waterloo \s-1UNIX\s0\*(TM Prolog Environment
.Aa
  Mantis H.\|M. Cheng
.Ad
  Research Report CS-84-47 [M.\|Math thesis]
.Ab
This document describes the development of a new Prolog system on a
\s-1VAX\s0 11/780 running under \s-1UNIX\s0.
.At
  Madame: A Planner for ISH
.Aa
  J.\|A.\|N.\|A. Trudel
.Ad
  Research Report CS-84-48 [M.\|Math thesis]
.Ab
An ongoing project at the University of Waterloo is the design and
construction of an interactive \s-1UNIX\s0 consultant, ISH (Intelligent Shell).
The consultant has been designed to answer the type of questions normally
posed to its human counterpart.
.pp
A planner called ``Madame'' was developed to be used as ISH's planning
component.  Madame is based on D.H.D. Warren's Warplan, a goal-directed
planner that uses goal regression.
An important feature of Madame is a data structure called a ``spider.''
The spider is used to store previously generated goal states.
These states then serve as alternate start states for Madame, so
Madame has many start states at its disposal instead of only one.
Experiments show that the spider does increase the efficiency of Madame.
Madame also required an axiomatization of the \s-1UNIX\s0 domain.
.pp
This dissertation describes Madame, the spider, and the \s-1UNIX\s0
axiomatization.
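.pp
\f2Editor's sketch:\fP the spider can be caricatured as a cache of reached
states (facts and metric invented): a new problem starts from whichever
cached state is closest to its goal, rather than from the one original start
state.
.nf

# Every goal state the planner reaches is cached; later problems may
# begin from the cached state nearest their goal.

def distance(state, goal):
    # Toy metric: number of goal facts not yet satisfied.
    return len(goal - state)

spider = [frozenset({"file_exists"}),
          frozenset({"file_exists", "file_edited"})]

def best_start(goal, initial):
    candidates = [initial] + spider
    return min(candidates, key=lambda s: distance(s, goal))

goal = {"file_exists", "file_edited", "file_printed"}
print(sorted(best_start(goal, frozenset())))
# ['file_edited', 'file_exists']: the closest cached state is reused

.fi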
.At
  Quantitative deduction
  and its fixpoint theory
.Aa
  Maarten H. van Emden
.Ad
  Research Report CS-85-15
.Ab
A disadvantage of logic programming is that in expert systems one often wants
to use, instead of the usual two truth values, an entire continuum of
``uncertainties'' in between.
That is, instead of the usual ``qualitative'' deduction, a form of
``quantitative'' deduction is required.
In this paper I present an approach to generalizing the
Tarskian semantics of Horn clause rules to justify a form of
quantitative deduction. Each clause receives a numerical attenuation
factor. Herbrand interpretations, which are subsets of the Herbrand
base, are generalized to subsets which are fuzzy in the sense of Zadeh.
I show that as a result the fixpoint method in the semantics
of Horn clause rules can be developed in much the same way for the
quantitative case.
.pp
As for proof theory, the interesting phenomenon is that a
proof should be viewed as a two-person game. The value of the game
turns out to be the truth value of the atomic formula to be proved,
evaluated in the minimal fixpoint of the rule set. The analog of the
Prolog interpreter for quantitative deduction becomes a search of
the game tree (\fIi.e.,\fP proof tree) using the alpha-beta heuristic
well-known in game theory.
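.pp
\f2Editor's sketch:\fP the quantitative rule on an invented propositional
program: an atom's value is the maximum over its clauses of the attenuation
factor times the minimum value of the body, which is exactly a max/min
game-tree evaluation.
.nf

# Each clause is (attenuation factor, body atoms); facts have empty
# bodies, whose minimum is taken as 1.0.

program = {
    "flu":   [(0.6, ["fever", "cough"])],
    "fever": [(1.0, [])],              # observed fact
    "cough": [(0.8, [])],              # slightly uncertain observation
}

def value(atom):
    clauses = program.get(atom, [])
    if not clauses:
        return 0.0
    return max(f * min([value(b) for b in body] or [1.0])
               for f, body in clauses)

print(value("flu"))   # 0.6 * min(1.0, 0.8)

.fi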
.5S
.sz 14
.ce
\f3University of Alberta\fP
.sz 10
.sp 0.5v
The following report may be requested from the authors at:
  Department of Computing Science
  University of Alberta
  Edmonton, Alta \s-1T6G 2H1\s0
.At
  Is best-first search really best?
.Aa
  Alexander Reinefeld, T.A. Marsland,
  and Jonathan Schaeffer
.Ad
  Technical Report 85-16
  October 1985
.Ab
Of the many minimax algorithms, SSS* consistently searches the smallest game
trees.	Its success can be attributed to the accumulation and use of
information acquired while traversing the tree, allowing a best-first search
strategy.  The main disadvantage of SSS* is its excessive storage requirements.
This paper describes a class of algorithms, based on the popular alpha-beta
algorithm, that acquire and use information to guide tree search.  They retain
their directional nature, yet are as good as SSS*, even while searching random
trees.	Further, while some of these new algorithms also require substantial
storage, they are more flexible and can be programmed to use only the space
available, at the cost of some degradation in performance.
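.pp
\f2Editor's sketch:\fP a minimal alpha-beta search with a store of exact
values, the ingredient these algorithms add (tree and keys invented; in a
real game, positions reached along different paths would share a key, so the
store would actually be consulted).
.nf

# The game tree is a nested list; integer leaves are static values; a
# node's path stands in for a transposition key.

memo = {}

def alphabeta(node, alpha, beta, maximizing, path=()):
    if path in memo:                  # reuse stored exact values
        return memo[path]
    if isinstance(node, int):         # leaf
        return node
    best = float("-inf") if maximizing else float("inf")
    cut = False
    for i, child in enumerate(node):
        v = alphabeta(child, alpha, beta, not maximizing, path + (i,))
        best = max(best, v) if maximizing else min(best, v)
        if maximizing:
            alpha = max(alpha, best)
        else:
            beta = min(beta, best)
        if alpha >= beta:             # cutoff: prune remaining children
            cut = True
            break
    if not cut:                       # only exact values are stored
        memo[path] = best
    return best

tree = [[3, 12], [2, 8], [14, 1]]
print(alphabeta(tree, float("-inf"), float("inf"), True))   # 3

.fi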
.5S
.sz 14
.ce
\fBUniversity of Saskatchewan\fP
.sz 10
.sp 0.5v
Requests for any of the following publications should be addressed to the
authors at:
  Department of Computational Science
  University of Saskatchewan
  Saskatoon, Saskatchewan S7N 0W0
.At
  G.E.N.I.U.S. \(em An experiment in
  ignorance-based automated program advising
.Aa
  Gordon McCalla and K. M. Murtagh
.Ad
  Technical Report 85-20
.Ab
The process of dispensing useful advice on program bugs to students is one
requiring great intelligence and much knowledge.  To automate this process
would seem to require a large, sophisticated, knowledge-based system.  It is
the contention of this paper, however, that it is possible to build at least a
marginally useful automated program advisor without needing to build such a
large and complex system.  G.E.N.I.U.S. is one such system, founded on what
could be called ``ignorance-based'' principles in contrast to a knowledge-based
approach.  G.E.N.I.U.S. is a small program which uses a keyword-style natural
language interface to understand students' queries on programming difficulties,
and a simple discrimination net knowledge representation to dispense useful
advice on these difficulties.  G.E.N.I.U.S. was tested in several computer
science courses, and while it didn't exactly set the world on fire, it did
prove to be of use to many students.  With some enhancements many of its
current shortcomings could be ameliorated, making it even more useful.	There
is, of course, a limit on how far an ignorance-based approach such as
G.E.N.I.U.S.'s can be taken.  Nevertheless, this experiment shows that certain
problems can benefit in the short term by taking such an approach, at least
until knowledge-based approaches begin to map out more sophisticated solutions.
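.pp
\f2Editor's sketch:\fP the two ingredients named above fit in a dozen lines
of Python (keywords and advice texts invented): a keyword scan of the query
drives a small discrimination net whose leaves are canned advice.
.nf

# Nested dicts form the discrimination net; the None entry at each
# level is the default branch when no keyword matches.

net = {"loop": {"infinite": "Check that the loop condition can become false.",
                None:       "Print the loop variable each iteration."},
       "compile": {None: "Read the first compiler error message only."},
       None: "Please rephrase your question using keywords."}

def advise(question):
    tokens = set(question.lower().split())
    node = net
    while isinstance(node, dict):
        key = next((k for k in node if k is not None and k in tokens), None)
        node = node[key]
    return node

print(advise("my loop seems infinite"))

.fi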
.At
  The design of the
  \s-1SCENT\s0 automated advisor
.Aa
  Gordon McCalla, R. B. Bunt, J. J. Harms
.Ad
  Technical Report 85-22
.Ab
The \s-1SCENT\s0 (Student Computing ENvironmenT) project is concerned with
building an intelligent tutoring system to help student programmers debug their
programs.  The major thrust of current \s-1SCENT\s0 investigations is into the
design of the \s-1SCENT\s0 advisor, which is meant to provide debugging
assistance to novice students.  Six conceptual levels constitute the advisor.
At the lowest level is the ``raw data'', consisting of the student's (possibly
buggy) Lisp program.  This can be interpreted by a ``program behaviour'' level
which can produce traces, cross-reference charts, etc., from the student's
program.  These traces, etc., can be analyzed by ``observers'' for interesting
patterns.  At the next level are ``strategy judges'' and ``diagnosticians''
which determine which strategy the student has used in his/her program and bugs
in this strategy.  ``Task experts'' provide task-specific input into the
process of analyzing the student's solution, and a ``student knowledge
component'' provides student-specific input into this process.  Information
from the six levels interacts in a variety of ways and control is similarly
heterarchical.	This necessitates a blackboard-style scheme to coordinate
information dissemination and control flow.
.pp
This paper discusses the objectives of \s-1SCENT\s0 and focuses on organizing
the process of debugging student programs.  A complete example is given to
illustrate how entities at the six levels interact and to indicate the kinds
of information sharing which occur in the \s-1SCENT\s0 advisor.  The paper
concludes with an evaluation of the strengths and weaknesses of this approach
to automated debugging, and suggestions about directions for further
exploration.
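.pp
\f2Editor's sketch:\fP a miniature of blackboard-style control with two
knowledge sources (levels and contents invented): entries posted at one
level trigger sources that post entries at the next, with no fixed call
hierarchy.
.nf

blackboard = [("raw-data", "student program")]

def behaviour_ks(level, content):
    if level == "raw-data":
        return [("behaviour", "execution trace of " + content)]
    return []

def observer_ks(level, content):
    if level == "behaviour":
        return [("observation", "recursion pattern in " + content)]
    return []

sources = [behaviour_ks, observer_ks]

# Keep cycling until no knowledge source posts anything new.
changed = True
while changed:
    changed = False
    for entry in list(blackboard):
        for ks in sources:
            for new in ks(*entry):
                if new not in blackboard:
                    blackboard.append(new)
                    changed = True

for entry in blackboard:
    print(entry)

.fi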
.5S
.sz 14
.ce
\fBUniversity of British Columbia\fP
.sz 10
.sp 0.5v
Requests for any of the following publications should be addressed to:
  Department of Computer Science
  University of British Columbia
  Vancouver, \s-1BC\s0, C\s-1ANADA  V6T 1W5\s0
.At
  A theory of schema labelling
.Aa
  William Havens
.Ad
  Technical Report 84-16
  Revised June, 1985
.Ab
Schema labelling is a representation theory which focuses on composition and
specialization as two major aspects of machine perception.  Previous research
in computer vision and knowledge representation has identified computational
mechanisms for these tasks.  We show that the representational adequacy of
schema knowledge structures can be combined advantageously with the constraint
propagation capabilities of network consistency techniques.  In particular,
composition and specialization can be realized as mutually interdependent
cooperative processes which operate on the same underlying knowledge
representation.  In this theory, a schema is a generative representation for a
class of semantically related objects.	Composition builds a structural
description of the scene from rules defined in each schema.  The scene
description is represented as a network consistency graph which makes explicit
the objects found in the scene and their semantic relationships.  The graph is
hierarchical and describes the input scene at varying levels of detail.
Specialization applies network consistency techniques to refine the graph
towards a global scene description.
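.pp
\f2Editor's sketch:\fP the refinement step is network (arc) consistency,
shown here on an invented two-region scene: each region keeps only the
labels still compatible with some label of its neighbour.
.nf

domains = {"region1": {"sky", "sea"}, "region2": {"boat", "cloud"}}
compatible = {("sea", "boat"), ("sky", "cloud")}

def revise(x, y):
    # Drop labels of x with no compatible label left in y.
    keep = {a for a in domains[x]
            if any((a, b) in compatible or (b, a) in compatible
                   for b in domains[y])}
    changed = keep != domains[x]
    domains[x] = keep
    return changed

domains["region2"].discard("cloud")      # evidence: no cloud here
while revise("region1", "region2") or revise("region2", "region1"):
    pass
print(domains)   # region1 is refined to sea, region2 to boat

.fi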
.At
  A portable image processing system
  for computer vision
.Aa
  William Havens
.Ad
  Technical Report 85-9
.Ab
Computer vision research is flourishing, although its growth has been hindered
by the lack of good image processing systems.  Existing systems are neither
general nor portable despite various attempts at establishing standard image
representations and software.  Issues of hardware architecture and processing
efficiency have frequently dominated system design.  Often standard
representations are primarily data formats for exchanging data among
researchers working at affiliated laboratories using similar equipment.
.pp
We argue that generality, portability and extensibility are the important
criteria for developing image processing systems. The system described here,
called \s-1PIPS\s0, is based on these principles.  An abstract image datatype
is defined which is capable of representing a wide variety of imagery.	The
representation makes few assumptions about the spatial resolution, intensity
resolution, or type of information contained in the image.  A simple set of
primitive operations is defined for direct and sequential access of images.
These primitives are based on a bit stream access method that considers files
and devices to be a long contiguous stream of bits that can be randomly read
and written.  Bit streams allow the word boundaries and file system
architecture of the host computer system to be completely ignored and require
only standard byte-wide direct-access I/O support.
.pp
The standard image representation has encouraged the development of a library
of portable generic image operators.  These operators support interactive
experimentation and make it easy to combine existing functions into new more
complex operations.  Finally, graphics device interfaces are defined in order
to isolate graphics hardware from image processing algorithms.
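.pp
\f2Editor's sketch:\fP the bit-stream access method, reduced to its core in
Python (the 6-bit pixel width is arbitrary): pixels of any width are pulled
from a plain byte buffer, so host word boundaries never matter.
.nf

def read_pixels(data, width, count):
    acc = nbits = 0
    out = []
    bytestream = iter(data)
    while len(out) < count:
        while nbits < width:            # refill from the byte stream
            acc = (acc << 8) | next(bytestream)
            nbits += 8
        nbits -= width
        out.append(acc >> nbits)        # take the top width bits
        acc &= (1 << nbits) - 1
    return out

# Two 6-bit pixels (42, 7) packed into bytes 10101000 01110000:
print(read_pixels(bytes([0b10101000, 0b01110000]), 6, 2))   # [42, 7]

.fi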
.5S
.sz 14
.ce
\fBUniversity of Toronto\fP
.sz 10
.sp 0.5v
Requests for any of the following publications should be addressed to:
  Joanne Mager
  Department of Computer Science
  University of Toronto
  Toronto, Ont., C\s-1ANADA  M5S 1A4\s0
.At
  On the hierarchical construction of
  orientation and velocity selective filters
.Aa
  David J. Fleet and Allan D. Jepson
.Ad
  Research in Biological and Computational Vision
  Technical Report, RBCV-TR-85-8
  November 1985
.Ab
This paper concerns the development of tools for the early measurement of
visual primitives.  It is our view that a rich representation should be
computed in the first functional level of processing, using image-independent
processes that require no previous or concurrent interpretation.  We review
suitable design criteria for the extraction of orientation and velocity
information, and present a variety of tools useful in the construction of
simple linear mechanisms.  We use a hierarchical parallel processing scheme in
which nodes need only compute a weighted sum of inputs from a few nodes within
a small spatiotemporal neighbourhood.  The resulting scheme is easily and
completely analyzed, and is shown to provide mechanisms sensitive to narrow
ranges of both image velocity and orientation.
.At
  The effects of ambient illumination
  on the structure of shadows
  in chromatic images
.Aa
  Ron Gershon, Allan D. Jepson, and
  John K. Tsotsos
.Ad
  Technical Report RBCV-TR-86-9
  January 1986
.Ab
The structure of the relationships between shadow and lit regions in chromatic
images is discussed.  Although there have been previous attempts at providing
solutions to this problem, the assumptions they adopted were too restrictive.
In particular, we show that the ambient illumination cannot be assumed to have
the same spectral characteristics as the incident illumination, and therefore
the treatment of the relationships between shadow and lit regions becomes more
complex.  In such cases, we show that it is necessary to take into account the
effects of the spectrally biased ambient illumination and develop a scheme
which is more robust and stable than previous schemes.	We apply this scheme as
an initial step towards the early visual classification of discontinuities in
images as either material changes or shadows, and discuss further steps
required to achieve this goal.
.5S
.sz 14
.ce
\f3McGill University\fP
.sz 10
.sp 0.5v
The table below lists recent \*(AI technical reports from McGill University.
They may be requested from:
  The Librarian
  Computer Vision and Robotics Laboratory
  Department of Electrical Engineering
  McGill University
  3480 University Street
  Montreal, \s-1PQ\s0, C\s-1ANADA  H3A 2A7\s0
.sp
.1c
.hl
.sz 14
.ce
Recent McGill University \*(AI Technical Reports
.sz 10
.25
.TS
center tab(&);
lw(3.40i) lw(1.95i) l.
.sp 0.5v
\s+2Title\s0&\s+2Authors\s0&\s+2Number\s0
_
The diversity of perceptual grouping&S.W. Zucker&85-1R
.sp 0.5v
T{
Reconstructing and interpreting the 3D shape of moving objects
T}&F.P. Ferrie, M.D. Levine&85-2R
.sp 0.5v
Subjective figures and texture perception&S.W. Zucker, P. Cavanagh&85-2R
.sp 0.5v
T{
Points and endpoints: A size/spacing constraint for dot grouping
T}&S.W. Zucker, S. Davis&85-3R
.sp 0.5v
Sensitivity to corners in flow patterns&N.K. Link, S.W. Zucker&85-4R
.sp 0.5v
T{
Computer Vision and Robotics Laboratory progress report, October 1984 to
September 1985
T}&\(em&85-10R
.sp 0.5v
T{
Early orientation selection: Tangent fields and the dimensionality of their
support
T}&S.W. Zucker&85-13R
.sp 0.5v
T{
Radial projection: An efficient update rule for relaxation labelling
T}&P. Parent, S.W. Zucker&85-15R
.sp 0.5v
T{
Receptive fields and the representation of visual information
T}&S.W. Zucker, R.A. Hummel&85-16R
.sp 0.5v
T{
Line detection in digital pictures: A hypothesis prediction/verification
paradigm
T}&T{
A.R. Mansouri,
.br
A.S. Malowany, M.D. Levine
T}&85-17R
.sp 0.5v
Multiple resolution skeletons&T{
A.R. Dill, M.D. Levine,
.br
P.B. Noble
T}&85-18R
.sp 0.2v
_
.5S
.sz 14
.ce
\fBNational Research Council\fP
.sz 10
.sp 0.5v
The following report may be obtained from:
  Editorial Office, Room 301
  Division of Electrical Engineering
  National Research Council of Canada
  Ottawa, Ontario  \s-1K1A 0R8\s0
.At
  \s-1WITNESS\s0: A System For
  Object Recognition Using Range Images
.Aa
  C. Archibald and M. Rioux
.Ad
  ERB-986
  NRCC No. 25588
  In English / en anglais
.Ab
The development of new data acquisition devices demands new methods
of data interpretation.  This paper describes a new device capable of
acquiring accurate range images in real time, and details methods which
have been developed to use this range data in a robotics environment.
A new approach to object recognition for a robotics environment is presented.
Two-dimensional models enriched with three-dimensional information are
constructed automatically from a range image.  These ``view models'' are
used to recognize objects by matching them to models subsequently constructed
from similar images.  A segmentation method for extraction of planar
surfaces from range images has been developed.  View models are constructed
as augmented surface adjacency graphs.  Heuristic model matching methods
are presented.  Results indicate that object recognition in a constrained
domain and environment can be achieved at nearly a 100% success rate using
single view range images.
.lp
La mise au point de nouveaux dispositifs d'acquisition de donne\*'es
ne\*'cessite l'utilisation de nouvelles me\*'thodes d'interpre\*'tation des
donne\*'es.  La pre\*'sente communication de\*'crit un nouveau dispositif
permettant l'acquisition en temps re\*'el d'images de distance pre\*'cises
et explique les me\*'thodes qui ont e\*'te\*' mises au point pour
l'utilisation de ces donne\*'es de distance par des robots.  On pre\*'sente
une nouvelle approche pour la reconnaissance des objets par des robots.  Des
mode\*`les bidimensionnels enrichis d'e\*'le\*'ments tridimensionnels sont
construits automatiquement a\*` partir d'une image de distance.  On utilise
ces ``mode\*`les de vision'' pour reconnai\*^tre des objets en les
appareillant a\*` des mode\*`les construits ulte\*'rieurement a\*` partir
d'images similaires.  On a mis au point une me\*'thode de segmentation pour
l'extraction de surfaces planes a\*` partir d'images de distance.  Les
mode\*`les de vision sont construits sous forme de graphiques de
contigui\*:te\*' de surfaces augmente\*'s.  On pre\*'sente des me\*'thodes
heuristiques d'appariement de mode\*`les.  Les re\*'sultats montrent que la
reconnaissance des objets dans un domaine et un environnement restreints peut
e\*^tre re\*'alise\*'e avec un taux de re\*'ussite de presque 100%
lorsqu'on utilise les images de distance suivant une seule orientation.
.5S
.sz 14
.ce
\fBUniversite\*' Laval\fP
.sz 10
.sp 0.5v
These research reports may be obtained by writing to:
  Laboratoire d'Intelligence Artificielle en
    Education
  1466 Pavillon DeKoninck
  Universite\*' Laval
  Que\*'bec, C\s-1ANADA  G1K 7P4\s0
.ip DR86-01
Computer text access
.ip DR86-02
Approches pe\*'dagogiques en enseignement intelligemment assiste\*' par
ordinateur
.ip DR86-03
Intelligent computer-assisted instruction: The nature of learner control
.ip DR86-04
L'enseignement intelligemment assiste\*' par ordinateur: Son contexte en
applications pe\*'dagogiques de l'ordinateur (\s-1APO\s0)
.ip DR86-05
Intelligent computer-assisted instruction (\s-1ICAI\s0): Flexible learning
through better stu\%dent\(micom\%puter interaction
.ip DR86-06
\s-1ICAI\s0 systems: Issues in computer tutoring
.ip DR86-07
Instructible \s-1ICAI\s0
.ip DR86-08
Conception pe\*'dagogique assiste\*'e par ordinateur: Le choix des me\*'thodes
d'enseignement
.ip DR86-09
Strate\*'gies tutorielles en \s-1EIAO\s0
.ip DR86-10
Models of learning in \s-1ICAI\s0
.ip DR86-11
La repre\*'sentation des connaissances en \s-1EIAO\s0
.ip DR86-12
L'\s-1EIAO\s0 en ge\*'nie
.br
.in 0
.5S
.sz 14
.ce
\fBUniversity of Toronto\fP
.sz 10
.sp 0.5v
Requests for any of the following publications should be addressed to:
  Joanne Mager
  Department of Computer Science
  University of Toronto
  Toronto, Ont., C\s-1ANADA  M5S 1A4\s0
.sp
.ce 1
\fILanguage and reasoning\fP
.At
  The representation of ambiguity
  in opaque constructs
.Aa
  Brenda Fawcett
.Ad
  CSRI TR-178 (MSc thesis)
.Ab
A knowledge of intensions, which are used to designate concepts of objects,
is important for natural language processing systems.
Certain linguistic phrases can refer either to
the concept of an entity or to the entity itself.
To properly understand a phrase and to prevent invalid inferences from being
drawn, the system must determine the type of reference being asserted.
We identify a set of ``opaque'' constructs and suggest that a common
mechanism be developed to handle them. Our
approach is intensional in that it maintains a distinction between
extensional and intensional reference, yet it does not adhere rigidly
to the strict, logical definition of intension.
.pp
To account for the ambiguities of opaque contexts,
noun phrases are translated into \fIdescriptors\fP. It must be made
explicit to whom the descriptor is ascribed and whether its referent
is non-specific or specific. Similarly, sentential constituents should
be treated as \fIpropositions\fP and evaluated relative to conjectured
\fIstates of affairs\fP. As a testbed for these ideas we define
a Montague-style meaning representation and implement
the syntactic and semantic components of a
moderate-size NLP system in a logic programming environment.
.pp
One must also consider how to disambiguate and
interpret such a representation with respect to a knowledge base.
Much contextual and world knowledge is required.
We attempt to characterize what facilities are necessary for an
accurate semantic interpretation,
considering what is and is not available in current knowledge representation
systems.
.At
  The detection and interpretation of
  ambiguities of intension and description
.Aa
  Brenda Fawcett and Graeme Hirst
.Ad
  \fIProceedings of the 24th Annual Meeting,
  Association for Computational Linguistics\fP, 1986
.Ab
A digest of the previous item.
.At
  A theory of diagnosis
  from first principles
.Aa
  Raymond Reiter
.Ad
  TR 187/86
.Ab
Suppose we are given a description of a system, together with an observation of
the system's behaviour which conflicts with the way the system is meant to
behave.  The diagnostic problem is to determine those components of the
system which, when assumed to be functioning abnormally, will explain the
discrepancy between the observed and correct system behaviour.
.pp
We propose a general theory for this problem.  The theory requires only that
the system be described in a suitable logic.  Moreover, there are many such
suitable logics, \fIe.g.,\fP first-order, temporal, dynamic, etc.  As a result,
the theory accommodates diagnostic reasoning in a wide variety of practical
settings, including digital and analogue circuits, medicine, and database
updates.  The theory leads to an algorithm for computing all diagnoses, and
to various results concerning principles of measurement for discriminating
among competing diagnoses.  Finally, the theory reveals close connections
between diagnostic reasoning and non-monotonic reasoning.
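.pp
\f2Editor's sketch:\fP a brute-force rendering of the definition on an
invented two-gate circuit, not Reiter's algorithm: a diagnosis is a minimal
set of components which, assumed abnormal, reconciles the system description
with the observation.
.nf

from itertools import combinations

# Circuit: gate g1 (an AND gate) feeds gate g2 (an inverter).

def predict(inputs, abnormal):
    a, b = inputs
    if "g1" in abnormal or "g2" in abnormal:
        return None                  # an abnormal gate unconstrains output
    return not (a and b)

def diagnoses(inputs, observed, components=("g1", "g2")):
    found = []
    for r in range(len(components) + 1):
        for ab in combinations(components, r):
            if any(set(d) <= set(ab) for d in found):
                continue             # keep only minimal candidates
            out = predict(inputs, set(ab))
            if out is None or out == observed:
                found.append(ab)
    return found

# AND(1,1) = 1, inverted gives 0, yet we observe 1: something is wrong.
print(diagnoses((True, True), True))   # [('g1',), ('g2',)]

.fi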
.sp
.ce 2
\fIResearch in Biological and
Computational Vision\fP
.At
  Some problems with correspondence
.Aa
  Michael Jenkin and Paul A. Kolers
.Ad
  Technical Report RBCV-TR-86-10,
  April 1986
.Ab
The notion of correspondence underlies many current theories of human and
machine visual information processing.	Algorithms for both the
\fIcorrespondence process\fP and solutions to the \fIcorrespondence problem\fP
have appeared regularly in the computer vision literature.  Algorithms for
stereopsis (Barnard and Thompson 1980; Marr and Poggio 1977; Mayhew and Frisby
1980) and for tracking objects through time (Dreschler and Nagel 1981; Jain and
Sethi 1984; Moravec 1977; Ullman 1979; Webb 1981) have been presented which
assume that token matching of separated or successive views is the underlying
visual process. This paper will address the notion of token matching as a
primitive operation in vision.	We will argue that correspondence seems ill
suited to the task of accounting for how an object is positioned in time or
space, and that some other mechanism may provide a more apt account.
.At
  Aspects of saccadic eye-movements
  towards or away from photopic,
  mesopic, or scotopic stimuli
.Aa
  Hansarj Doma and Peter E. Hallett
.Ad
  Technical Report RBCV-TR-86-11,
  April 1986
.Ab
Various aspects of saccadic eye-movements are correlated with luminance
range for a small lit stimulus that steps 10 degrees horizontally in darkness.
The correlations depend on whether the subject's task is to look towards
or away from the stimulus after it steps, \fIi.e.,\fP the `\fInormal\fP task',
which foveates the retinal image, and the `\fIanti\fP task' which
peripheralizes it.  \fI(1)\fP At photopic stimulus luminances, the anti task is
executed in some ways in a less visual manner than the normal task.  Latencies
are longer, angular errors of correctly directed responses are larger,
correction saccades do not require visual feedback of angular error,
and direction mistakes are more frequent.  \fI(2)\fP The transition from rod to
cone latencies coincides with the peripheral rod-cone transition. The
corresponding transition in the anti task is protracted, latency changing
little in the mesopic range up to 0.7-1.0 log unit above the rod-cone transition.
Anti task direction errors are particularly common in this luminance range.
\fI(3)\fP At scotopic luminances, the differences between the normal and anti
tasks are diminished, largely because the normal task is more severely
affected. The implications are that a mixture of rod and cone signals hinders
the anti task, while pure rod signals are sub-optimal for both tasks.
.At
  Aspects of the design of the visual
  pathways of mouse and rat
.Aa
  Helena M. E. Hallett and Peter E. Hallett
.Ad
  Technical Report RBCV-TR-86-12
  March 1986
.Ab
Photoreceptors and neurons at various levels to cortex have been counted in
mouse and rat. The ratios of neuron numbers (rat/mouse) are similar to the
ratio retinal areas or the squared ratio of eye sizes; so to a first
approximation the two species have scaled eyes, equal photoreceptor spacings
(in \(*mm), and numerically scaled visual pathways. The design seems
straightforward: high visual acuity should be reached at low luminances with a
close approach to noise-limited performance; spatial encoding across the visual
field is plausibly just or almost adequate in the Nyquist sense at the cone and
geniculate levels. Some aliased broadband noise may be introduced at these
levels but its effects are probably slight. Hyperacuity is excluded on one
model of the cortex, though not on another. Overall, there is structural
and computational economy, or even \fIparsimony\fP, at all neural levels.
.5S
.sz 14
.ce
\fBUniversity of Waterloo\fP
.sz 10
.sp 0.5v
The following reports may be obtained by writing to:
  Pat Bennett
  Department of Computer Science
  University of Waterloo
  Waterloo, Ont  C\s-1ANADA  N2L 3G1\s0.
.At
  Using definite clauses and integrity constraints
  as the basis for a theory-formation approach
  to diagnostic reasoning
.Aa
  Randy Goebel, K. Furukawa, and David Poole
.Ad
  Research Report CS-85-50
  [to appear in \fIProceedings of the International
  Logic Programming Conference\fR, 1986]
.Ab
If one desires that an automatic theory formation program detect inconsistency
in a set of hypotheses, the Horn clause logic of Prolog is unsuitable as no
contradiction is derivable.  Full first-order logic provides a suitably
expressive alternative, but then requires a full first-order theorem prover as
the basic theory construction mechanism.  Here we present an alternative for
augmenting definite clauses with the power to express potentially inconsistent
scientific theories.  The alternative is based on a partitioning of definite
clauses into two categories: ordinary assertions, and integrity constraints.
This classification provides the basis for a simple theory formation program.
We here describe such a theory formation system based on Prolog, and show how
it provides an interesting reformulation of rule-based diagnosis systems like
\s-1MYCIN\s0.
.At
  Gracefully adding negation and disjunction
  to Prolog
.Aa
  David Poole and Randy Goebel
.Ad
  Research Report CS-85-51
  [to appear in \fIProceedings of the International
  Logic Programming Conference\fR, 1986]
.Ab
We show how one can add negation and disjunction to Prolog, with the property
that there is no overhead in run time if we do not use the negation, and we
only pay for the negation when we actually use it. The extension is based on
Loveland's \s-1MESON\s0 proof procedure, which requires that a negative
ancestor search and availability of contrapositive forms of formulae be added
to Prolog.  We identify a property of literals that can be statically
determined, in order to avoid using the full generality of the \s-1MESON\s0
proof procedure when not required.
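.pp
\f2Editor's sketch:\fP a propositional caricature of the two \s-1MESON\s0
ingredients named above (clauses invented): every clause contributes all of
its contrapositives, and a goal also succeeds when its complement appears
among its ancestors.
.nf

# Literals are strings; "-p" is the negation of "p".

def neg(l):
    return l[1:] if l.startswith("-") else "-" + l

clauses = [["p", "-q"], ["q", "p"]]     # p or not-q;  q or p

def contrapositives(clause):
    # One head <- body rule for every choice of head literal.
    return [(h, [neg(l) for l in clause if l != h]) for h in clause]

rules = [cp for c in clauses for cp in contrapositives(c)]

def prove(goal, ancestors=()):
    if neg(goal) in ancestors:          # negative ancestor rule
        return True
    if goal in ancestors:               # regularity (loop) check
        return False
    return any(all(prove(b, ancestors + (goal,)) for b in body)
               for head, body in rules if head == goal)

print(prove("p"))   # True: p follows from (p or not-q) and (q or p)

.fi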
.At
  Theorist: A logical reasoning system
  for defaults and diagnosis
.Aa
  David Poole, Randy Goebel, and Romas Aleliunas
.Ad
  Research Report CS-86-06
  [to appear in \fIKnowledge Representation\fR,
  Nick Cercone and Gordon McCalla (editors),
  Springer-Verlag]
.Ab
We provide an introduction to Theorist, a logic programming system that uses a
uniform deductive reasoning mechanism to construct explanations of observations
in terms of facts and hypotheses.
Observations, facts, and possible hypotheses are each sets of logical formulas
that represent, respectively, a set of observations on a partial domain, a set
of facts for which the domain is a model, and a set of tentative hypotheses
which may be required to provide a consistent explanation of the observations.
.pp
Theorist has been designed to reason in a fashion similar to how we reason with
and construct scientific theories.  Rather than monotonically deduce theorems
from a fixed logical theory, Theorist distinguishes facts from hypotheses and
attempts to use deduction to construct consistent theories for which the
observations are logical consequences.	A prototype, implemented in Prolog,
demonstrates how diagnosis, default reasoning, and a kind of learning can all
be based on the Theorist framework.
.At
  A logic data model for the
  machine representation of knowledge
.Aa
  Randy Goebel
.Ad
  Research Report CS-86-07
  [PhD dissertation,
  Department of Computer Science,
  University of British Columbia, November 1985]
.Ab
DLOG is a logic-based data model developed to show how logic programming can
combine contributions of database management and \*(AI.
The DLOG data model is based on a logical formulation that is a superset of
the relational data model, and uses Bowen and Kowalski's notion of an amalgamated
meta- and object-language to describe the relationship between data model
objects.  The DLOG specification includes a language syntax, a proof (or
query evaluation) procedure, a description of the language's semantics, and a
specification of the relationships between assertions, queries, and application
databases.
.pp
DLOG's basic data description language is the Horn clause subset of
first-order logic, together with embedded descriptive terms and non-Horn
integrity constraints.	The embedded terms are motivated by \*(AI
representation language ideas, specifically, the descriptive terms of the
\s-1KRL\s0 language.  A similar facility based on logical descriptions is
provided in DLOG.  The DLOG language permits the use of definite and
indefinite descriptions of individuals and sets in both queries and assertions.
.At
  The design and implementation of DLOG,
  a Prolog-based knowledge representation
  system
.Aa
  Randy Goebel
.Ad
  \fINew Generation Computing\fR, \fB3\fR(1985), 385\(mi401
.At
  Default reasoning and diagnosis
  as theory formation
.Aa
  David Poole
.Ad
  Research Report CS-86-08
.Ab
We present a simple model-theoretic semantics for both defaults and diagnosis.
The semantics is based on normal first-order model theory; instead of changing
the logic, we propose to change the way in which the logic is used.  Rather
than deriving the consequences of our knowledge (finding what logically
follows), we build falsifiable ``scientific'' theories which explain some set
of observations.  By using a predefined set of possible hypotheses which can be
defaults or possible malfunctions or diseases, this idea subsumes the intuition
behind Reiter's default logic and characterises model-based diagnosis.  A
prototype implementation, called Theorist, executes all of the examples given.
.At
  Tutorial diagnoses of subtraction errors
.Aa
  Ross Bryant
.Ad
  Research Report CS-86-09
  [M.\|Math Essay]
.Ab
The Buggy system (Brown and Burton) for diagnosing student errors in
subtraction is reviewed.  It is argued that more direct, precise and
comprehensive diagnosis could be performed by an interactive automated tutor
which is capable of monitoring detailed solutions input by the student on the
screen.  A design for such a tutor, including other tutorial functions, is
proposed.  The educational issues involved in the design are discussed, in
particular the role of a system-maintained student model.  The design issues
are extended to a consideration of more complicated arithmetic skills and some
non-mathematical domains.
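.pp
\f2Editor's sketch:\fP diagnosis in the Buggy spirit can be pictured by
running candidate procedures, the correct one and one classic bug, against
the student's worked answers (problems and answers invented).
.nf

def correct(top, bottom):
    return top - bottom

def smaller_from_larger(top, bottom):
    # Classic bug: subtract column-wise, always smaller from larger.
    t, b, out, place = top, bottom, 0, 1
    while t or b:
        out += abs(t % 10 - b % 10) * place
        t, b, place = t // 10, b // 10, place * 10
    return out

procedures = {"correct": correct, "smaller-from-larger": smaller_from_larger}
worked = [((52, 17), 45), ((64, 28), 44)]     # the student's answers

# The procedure that reproduces every answer is the diagnosis.
for name, proc in procedures.items():
    if all(proc(t, b) == ans for (t, b), ans in worked):
        print("diagnosis:", name)

.fi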
.At
  Summarizing natural language
  database responses
.Aa
  J.K. Kalita, Marlene Jones, and Gordon McCalla
.Ad
  [to appear in \fIComputational Linguistics\fR]
.Ab
In a human dialogue it is usually considered inappropriate if one conversant
monopolizes the conversation.  Similarly, it can be inappropriate for a natural
language database interface to respond with a lengthy list of data.  A
non-enumerative ``summary'' response is less verbose and often avoids
misleading the user where an extensional response might.
.pp
In this paper we investigate the problem of generating such discourse-oriented
concise responses.  We present details of the design and implementation of a
system which produces summary responses to queries of a relational database.
The system employs a set of heuristics which work in conjunction with a
knowledge base to discover underlying regularities that form the basis of
summary responses.  The system is largely domain-independent, and hence can be
ported relatively easily from one database to another.	It also has a number of
shortcomings which are discussed thoroughly and which form the basis for a
number of suggested research directions.
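.pp
\f2Editor's sketch:\fP one regularity-finding heuristic such a system might
use (data and attribute names invented): if every retrieved tuple agrees on
an attribute, report the generalization instead of the list.
.nf

rows = [{"name": "CS100", "term": "fall", "level": "undergraduate"},
        {"name": "CS240", "term": "fall", "level": "undergraduate"},
        {"name": "CS360", "term": "fall", "level": "undergraduate"}]

def summarize(rows, key_attr="name"):
    clauses = []
    for attr in rows[0]:
        if attr == key_attr:
            continue
        values = {r[attr] for r in rows}
        if len(values) == 1:            # a regularity worth reporting
            clauses.append("%s = %s" % (attr, values.pop()))
    if clauses:
        return "All %d answers have %s." % (len(rows), "; ".join(clauses))
    return ", ".join(r[key_attr] for r in rows)   # fall back to a list

print(summarize(rows))
# All 3 answers have term = fall; level = undergraduate.

.fi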
.5S
.sz 14
.ce
\fBUniversity of Toronto\fP
.sz 10
.sp 0.5v
The following reports may be obtained by writing to:
  The \*(AI group secretary
  Department of Computer Science
  University of Toronto
  Toronto, Ont	C\s-1ANADA  M5S 1A4\s0.
.At
  On using causal knowledge
  to recognize vital signals:
  A study of knowledge-based interpretation
  of arrhythmias
.Aa
  Tetsutaro Shibahara
.Ad
  PhD Thesis, June 1986
  LCM-TR86-1
.Ab
A technology of knowledge-based systems has been pursued to establish a
framework for the recognition of time-varying signals of a complex repetitive
nature, such as electrocardiograms (ECGs).
.pp
Using a stratified knowledge base, the proposed recognition system discerns
several perspectives on the phenomena of underlying entities, such as the
knowledge of a causal model of the physiological mechanism and the
observational knowledge of the temporality and morphology of signals emitted
from physiological events, where such events are projected into the observable
waveform domain.  The system ``reasons'' observational abnormalities in the
signal, by referring to the corresponding abnormalities of causal connections
and events in the entity model.  Projection links have been defined to
represent projection in our frame-type formalism, and are used to raise
hypotheses across various component KBs.
.pp
Our system also introduces causal links, and uses them extensively to represent
various causal and temporal relations between concepts in the physiological
event domain.  The control structure uses causal links to expect unseen events
from recognized events, to search for the corresponding waves in input data,
and to match and calculate the degree of integrity among causally related
events.  The meta-knowledge representation of statistical information about
events facilitates a default-reasoning mechanism and supports this expectation
process, providing context-sensitive statistical information.
.pp
Our design technology has been applied to the unsolved arrhythmia problem in
the ECG domain, and this prototype is called the CAA (Causal Arrhythmia
Analysis) system.  Our system inherits its basic control mechanisms, such as
the change/focus attention mechanism with similarity links, from the ALVEN (A
Left VENtricular Wall Motion Analysis) system. The CAA system, with a limited
number of abnormalities, has been implemented by using the PSN (Procedural
Semantic Network) knowledge-representation system.  The prototype has so far
demonstrated promising results, using independently sampled ECG data.
.At
  An overview of knowledge acquisition
  methods for expert systems
.Aa
  Sue Becker and Bart Selman
.Ad
  CSRI Technical Report CSRI-184
  June 1986
.Ab
In building an expert system, one of the most time-consuming tasks is knowledge
acquisition.  We present an overview of many of the techniques, tools and
methodologies which have been developed to facilitate this process.  Structured
human interviewing techniques and automatic interviewing programs can be used
in the early stages of expert system development, when the relevant knowledge
must be extracted from domain experts. A variety of other types of tools, such
as intelligent editors and graphical displays, can be employed later in the
development process, when the expert knowledge must be translated into a formal
knowledge representation language.  We describe, in some depth, a
representative sample of these knowledge acquisition systems and tools, to give
the reader a good overview of the field.  The best features of these systems
are discussed, with the goal of determining the desirable functions of a
complete knowledge-based system building tool.	Finally, we discuss our current
research in the area of knowledge acquisition for knowledge-based systems.
.5S
.sz 14
.ce
\fBUniversity of Waterloo\fP
.sz 10
.sp 0.5v
The following reports may be obtained by writing to:
  Pat Bennett
  Department of Computer Science
  University of Waterloo
  Waterloo, Ont  C\s-1ANADA  N2L 3G1\s0.
.At
  Incorporating user models into
  expert systems for educational diagnosis
.Aa
  Robin Cohen and Marlene Jones
.Ad
  [submitted for publication]
.Ab
In this paper we study a particular real-world domain, that of educational
diagnosis.  We argue that expert systems for educational diagnosis require user
models, and that these user models should include several components, including
the user's background knowledge of both the student and the domain as well
as the user's goals.  We propose possible representation schemes for the user
model; in particular, we focus on two schemes which have been advanced for
representing student models.  Finally, we discuss how the acquisition and
updating of the user model can be handled, and how the user model can be
employed to produce quality explanations.
.At
  The \s-1CGD\s0 project: An educational diagnostic
  system based on Theorist
.Aa
  Marlene Jones, J. McLeod, G. Robertson,
  J. Tubman, and B. Wasson
.Ad
  [submitted for publication]
.Ab
The intent of the Computer-Guided Diagnosis (\s-1CGD\s0) project is the
development of an expert system to assist the user in diagnosing learning
disabilities.  We discuss herein the motivation for the project, the
application domain including a model of educational diagnosis, as well as
previous research and development regarding expert systems within this
application domain.  We then present specifics regarding the \s-1CGD\s0 project
including the underlying educational and computational philosophies.  In
particular, we present the details of the implementation of the expert system
in Theorist, a logic programming deduction system developed at the University
of Waterloo.  We argue that a Theorist-based expert system for educational
diagnosis is particularly appealing because it captures the essence of what
expert diagnosticians do, \fIi.e.,\fP theory formation.  Combining Theorist
with our philosophy of employing expertise based on clinical, rather than
actuarial experience, promises to produce a system that closely resembles
the ideal behaviour of an expert in the performance of an educational
diagnosis.  In addition to reporting on our initial experimentation with
Theorist, which involved the development of an expert system for diagnosing
arithmetic difficulties, we briefly discuss future research and development, as
well as current related research projects.
.At
  Computational analogy
.Aa
  K. Wellsch and Marlene Jones
.Ad
  [submitted for publication]
.Ab
Evidence suggests that analogy is a key component in human reasoning and
learning situations.  ``Exploiting past experience through analogies is an
important aspect of intelligent behaviour.''  We briefly review the research
of Burstein, Gentner and Winston, and then present a particularly simple
(polynomial complexity) but powerful algorithm for detecting and applying
analogies.  The algorithm, which is based on subtree matching, employs a
hierarchical representation; we investigate the power of such a representation.
The effectiveness of the algorithm is examined using two-dimensional scenes as
the major test domain.	The results are discussed herein along with suggestions
for future research.
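.pp
\f2Editor's sketch:\fP a much-simplified Python rendering of analogy by
subtree matching (scene trees invented, and children matched in order): two
trees correspond where labels and subtrees do, and the score counts matched
nodes.
.nf

def match(t1, t2):
    # A tree is (label, [children]).  Returns size of matched subtree.
    (l1, c1), (l2, c2) = t1, t2
    if l1 != l2:
        return 0
    return 1 + sum(match(a, b) for a, b in zip(c1, c2))

scene1 = ("scene", [("tower", [("block", []), ("block", [])])])
scene2 = ("scene", [("tower", [("block", []), ("pyramid", [])])])
print(match(scene1, scene2))   # 3: scene, tower, and one block line up

.fi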
.At
  Expert systems for educational diagnosis:
  An experiment with Theorist
.Aa
  Marlene Jones and J. Tubman
.Ad
  [to appear in the \fIProceedings of the 6th
  International Conference on Expert Systems
  and Their Applications\fR, 1986]
.Ab
During the past decade many diagnostic expert systems have been developed; the
majority of these are rule-based production systems.  In this paper, we explore
an alternative approach.  First we briefly present the fundamentals of
Theorist, a logic programming system that uses a uniform deductive reasoning
mechanism to construct explanations of observations in terms of facts (or
rules) and hypotheses (which are defaults).  Secondly, we describe here a small
expert system within the domain of educational diagnosis which has been
implemented with Theorist.  Future research plans are discussed in light of the
results of this initial experimentation with Theorist.