[comp.ai.digest] Computer and Cognitive Science Abstracts

yorick@nmsu.CSNET.UUCP (05/11/87)

ABSTRACTS OF 
MEMORANDA IN COMPUTER AND COGNITIVE SCIENCE

Computing Research Laboratory
New Mexico State University 
Box 30001
Las Cruces, NM 88003.


Kamat, S.J. (1985), Value Function Approach to Multiple Sensor
Integration, MCCS-85-16.

A value function approach is being tried for integrating multiple sensors
in a robot environment with known objects.  The state of the environment is
characterized by some key parameters which affect the performance of the
sensors.  Initially, only a handful of discrete environmental states will be
used.  The value of a sensor or a group of sensors is defined as a function
of the number of possible object contenders under consideration and the
number of contenders that can be rejected after using the sensor information.
Each possible environmental state will have its effect on the function, and
the function could be redefined to indicate changes in the sampling frequency
and/or resolution for the sensors.  A theorem prover will be applied to the
sensor information available to reject any contenders.  The rules used by the
theorem prover may be different for each sensor, and the integration is
provided by the common decision domain. The values for the different sensor
groups will be stored in a database.  The order of use of the sensor groups
will be according to the values, and can be stored as the best search path.
The information in the database can be adaptively updated to provide a
training methodology for this approach.
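
As a rough sketch of the bookkeeping this implies (the class, the scoring rule,
and the adaptive update below are invented for illustration and are not taken
from the memorandum):

    # Illustrative sketch only: value of a sensor group taken as the fraction of
    # object contenders its information lets the theorem prover reject.
    class SensorGroupValues:
        def __init__(self, learning_rate=0.2):
            self.values = {}              # sensor group -> current value
            self.learning_rate = learning_rate

        def observed_value(self, contenders_before, contenders_rejected):
            if contenders_before == 0:
                return 0.0
            return contenders_rejected / contenders_before

        def update(self, group, contenders_before, contenders_rejected):
            # Adaptive update: move the stored value toward the newly observed one.
            v = self.observed_value(contenders_before, contenders_rejected)
            old = self.values.get(group, v)
            self.values[group] = old + self.learning_rate * (v - old)

        def best_search_path(self):
            # Order sensor groups by value; this ordering is the stored search path.
            return sorted(self.values, key=self.values.get, reverse=True)

    db = SensorGroupValues()
    db.update("camera+range_finder", contenders_before=10, contenders_rejected=7)
    db.update("tactile", contenders_before=10, contenders_rejected=2)
    print(db.best_search_path())          # ['camera+range_finder', 'tactile']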


Cohen, M. (1985), Design of a New Medium for Volume Holographic
Information Processing, MCCS-85-17.

An optical analog of the neural networks involved in sensory 
processing consists of a dispersive medium with gain in a narrow 
band of wavenumbers, cubic saturation, and a memory nonlinearity 
that may imprint multiplexed volume holographic gratings.  Coupled 
mode equations are derived for the time evolution of a wave 
scattered off these gratings; eigenmodes of the coupling 
matrix $kappa$ saturate preferentially, implementing stable 
reconstruction of a stored memory from partial input and 
associative reconstruction of a set of stored memories.  Multiple 
scattering in the volume reconstructs cycles of associations that 
compete for saturation.  Input of a new pattern switches all
the energy into the cycle containing a representative of that 
pattern; the system thus acts as an abstract categorizer with 
multiple basins of stability.  The advantages that an imprintable 
medium with gain biased near the critical point has over either 
the holographic or the adaptive matrix associative paradigms 
are (1) images may be input as non-coherent distributions which 
nucleate long range critical modes within the medium, and (2) the 
interaction matrix $kappa$ of critical modes is full, thus implementing 
the sort of `full connectivity' needed for associative reconstruction 
in a physical medium that is only locally connected, such as a 
nonlinear crystal.
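
The coupled mode equations themselves are not reproduced in this abstract.
Purely for orientation, equations of the kind described (narrow-band gain,
linear coupling through the imprinted gratings, cubic saturation) are often
written schematically as

    % Schematic form only, not the equations of MCCS-85-17.
    \frac{d a_i}{d t} \;=\; g\, a_i \;+\; \sum_j \kappa_{ij}\, a_j
                      \;-\; \gamma \Big( \sum_j |a_j|^2 \Big)\, a_i

where a_i is the amplitude of the i-th mode; under shared cubic saturation the
fastest-growing eigenmodes of the coupling matrix reach saturation first, which
is the preferential saturation referred to above.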


Uhr, L. (1985), Massively Parallel Multi-Computer 
Hardware = Software Structures for Learning, MCCS-85-19.

Suggestions are made concerning the building and use of appropriately
structured hardware/software multi-computers for exploring ways that
intelligent systems can evolve, learn and grow.  Several issues are addressed
such as: what computers are, the great variety of topologies that can be used
to join large numbers of computers together into massively parallel
multi-computer networks, and the great sizes that the micro-electronic VLSI
(``very large scale integration'') technologies of today and tomorrow make
feasible.  Finally, several multi-computer structures that appear
especially appropriate as the substrate for systems that evolve, learn and
grow are described, and a sketch of a system of this sort is begun.



Partridge, D. (1985), Input-Expectation Discrepancy Reduction:
A Ubiquitous Mechanism, MCCS-85-24.

The various manifestations of input-expectation discrepancy that occur in a
broad spectrum of research on intelligent behavior are examined.  The point
is made that each of the different research activities highlights different
aspects of an input-expectation discrepancy reduction mechanism and neglects others.

A comprehensive view of this mechanism has been constructed and applied in
the design of a cognitive industrial robot.  The mechanism is explained as
both a key for machine learning strategies, and a guide for the selection of
appropriate memory structures to support intelligent behavior.


Ortony, A., Clore, G. & Foss, M. A. (1985), Conditions of Mind, 
MCCS-85-27.

A set of approximately 500 words taken from the literature on emotion was
examined.  The overall goal was to develop a comprehensive taxonomy of the
affective lexicon, with special attention being devoted to the isolation of
terms that refer to emotions.  Within the taxonomy we propose, the best
examples of emotion terms appear to be those that (a) refer to \fIinternal,
mental\fR conditions as opposed to physical or external ones, (b) are clear
cases of \fIstates\fR, and (c) have \fIaffect\fR as opposed to behavior or
cognition as their predominant referential focus. Relaxing one or another of
these constraints yields poorer examples or nonexamples of emotions; however,
this gradedness is not taken as evidence that emotions necessarily defy
classical definition.

Wilks, Y. (1985), Machine Translation and Artificial Intelligence:
Issues and their Histories, MCCS-85-29.

The paper reviews the historical relations, and future prospects for
relationships, between artificial intelligence and machine translation. The
argument of the paper is that machine translation is much more tightly bound
into the history of artificial intelligence than many realize (the MT origin
of Prolog is only the most striking example of that), and that it remains,
not a peripheral, but a crucial task on the AI agenda.


Coombs, M.J. (1986), Artificial Intelligence Foundations
for a Cognitive Technology: Towards The Co-operative Control of Machines, 
MCCS-85-45. 

The value of knowledge-based expert systems for
aiding the control of physical and
mechanical processes is now firmly established.  However, with experience,
serious weaknesses have become evident whose solution requires a new
approach to system architecture.

The approach proposed in this paper is based on the direct manipulation of
models in the control domain.  This contrasts with the formal syntactic
reasoning methods more conventionally employed.  Following from work on the
simulation of qualitative human reasoning, this method has potential for
implementing truly co-operative human/computer interaction.

Coombs, M.J., Hartley, R. & Stell, J.F. (1986), Debugging
User Conceptions of Interpretation Processes, MCCS-85-46. 

The use of high level declarative languages has been advocated since they allow
problems to be expressed in terms of their domain facts, leaving details of 
execution to the language interpreter.  While this is a significant advantage,
it is frequently difficult to learn the procedural constraints imposed by
the interpreter.  Thus, declarative failures may arise from misunderstanding
the implicit procedural content of a program. This paper argues for a     
\fIconstructive\fR approach to identifying poor understanding of procedural
interpretation, and presents a prototype diagnostic system for Prolog.

Error modelling is based on the notion of a modular interpreter, misconceptions
being seen as modifications of correct procedures.  A trace language, 
based on conceptual analysis of a novice view of Prolog, is used by 
both the user to describe his conception of execution, and the system to
display the actual execution process.  A comparison between traces enables
the correct interpreter to be modified in a manner which progressively 
corresponds to the user's mental interpreter. 
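
A minimal sketch of the trace-comparison step (the trace vocabulary below is
invented; the system's trace language comes from a conceptual analysis of how
novices view Prolog execution):

    # Illustrative sketch only: locate the first point at which the user's
    # conception of execution departs from the actual execution.
    def first_divergence(user_trace, actual_trace):
        for i in range(max(len(user_trace), len(actual_trace))):
            expected = user_trace[i] if i < len(user_trace) else None
            actual = actual_trace[i] if i < len(actual_trace) else None
            if expected != actual:
                return i, expected, actual
        return None                       # the two conceptions agree

    # A user who does not expect backtracking over the first clause:
    user   = ["call parent(X, tom)", "succeed X = bob"]
    actual = ["call parent(X, tom)", "fail clause 1", "retry clause 2",
              "succeed X = bob"]
    print(first_divergence(user, actual))
    # (1, 'succeed X = bob', 'fail clause 1')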

Dorfman, S.B. & Wilks, Y. (1986), SHAGRIN:  A Natural
Language Graphics Package Interface, MCCS-85-48. 

It is a standard problem in applied AI to construct a front-end to some
formal data base that accepts user input as near to English as possible.  SHAGRIN
is a natural language interface to a computer graphics package. In
constructing SHAGRIN, we have chosen some non-standard goals:  (1) SHAGRIN
is just one of a range of front-ends that we are fitting to the same formal
back-end. (2) We have chosen not a data base in the standard sense, but a
graphics package language, a command language for controlling the production
of graphs on a screen. Parser output is used to generate graphics world
commands which then produce graphics PACKAGE commands.  A four-component
context mechanism incorporates pragmatics into the graphics system and
actively aids in maintaining the state of the graph world.
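
A toy sketch of the two translation stages (the slot names, the maintained
graph-world state, and the package syntax are all invented for the example):

    # Illustrative sketch only: parser output -> graph-world command -> PACKAGE command.
    GRAPH_WORLD_STATE = {"style": "line", "data": "sales"}   # kept between utterances

    def to_world_command(parse):
        # Fill slots the utterance left unspecified from the maintained state.
        for slot in ("style", "data"):
            if parse.get(slot) is not None:
                GRAPH_WORLD_STATE[slot] = parse[slot]
        return {"action": parse.get("action", "draw"), **GRAPH_WORLD_STATE}

    def to_package_command(world):
        return f'{world["action"].upper()} {world["style"]} USING {world["data"]}'

    # "plot it as a bar chart" -- the data set is implicit and comes from context:
    parse = {"action": "plot", "style": "bar", "data": None}
    print(to_package_command(to_world_command(parse)))       # PLOT bar USING sales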


Manthey, M.J. (1986), Hierarchy in Sequential and
Concurrent Systems or What's in a Reply, MCCS-85-51. 

The notion of hierarchy as a tool for controlling conceptual
complexity is justifiably well entrenched in computing in general,
but our collective experience is almost entirely in the realm of
sequential programs.  In this paper we focus on exactly what the
hierarchy-defining relation should be to be useful in the realm of
concurrent programming.  We find traditional functional dependency
hierarchies to be wanting in this context, and propose an alternative
based on shared resources.  Finally we discuss some historical and
philosophical parallels which seem to have gone largely unnoticed in
the computing literature.

Huang, X-M. (1986), A Bidirectional Chinese Grammar 
in A Machine Translation System, MCCS-85-52. 

The paper describes a Chinese grammar which can be run bidirectionally, i.e.,
both as a parser and as a generator of Chinese sentences.  When used as a
parser, the grammar takes single Chinese sentences as input and produces
tree structures for them as output; when used as a generator, tree
structures are the input, and Chinese sentences, the output. The main body
of the grammar, the way bidirectionality is achieved, and the performance of
the system with some example sentences are given in the paper.

Partridge, D. & Wilks, Y. (1986), Does AI have a methodology different
from Software Engineering?, MCCS-85-53. 

The paper argues that the conventional methodology of software
engineering is inappropriate to AI, but that the failure of many
in AI to see this is producing a Kuhnian paradigm ``crisis''. The
key point is that classic software engineering methodology (which
we call SPIV: Specify-Prove-Implement-Verify) requires that the
problem be circumscribable or surveyable in a way that it is not
for areas of AI like natural language processing. In addition, it
also requires that a program be open to formal proof of
correctness.  We contrast this methodology with a weaker form, SAT
(complete Specification And Testability, where the last term is
used in a strong sense: every execution of the program gives
decidably correct/incorrect results), which captures both the
essence of SPIV and the key assumptions in practical software
engineering. We argue that failure to recognize the
inapplicability of the SAT methodology to areas of AI has
prevented development of a disciplined methodology (unique to AI,
which we call RUDE: Run-Understand-Debug-Edit) that will
accommodate the peculiarities of AI and also yield robust,
reliable, comprehensible, and hence maintainable AI software.

Slator, B.M., Conley, W. & Anderson, M.P. (1986), Towards an Adaptive 
Front-end, MCCS-85-54. 

An adaptive natural language interface to a graphics package has
been implemented.  A mechanism for modelling user behavior,
operating over a script-like decision matrix that captures the
co-occurrence of commands, is used to direct the interface (which
uses a semantic parser) when ambiguous utterances are
encountered.  This is an adaptive mechanism that forms a model of
a user's tendencies by observing the user in action.  This
mechanism provides a method for operating under conditions of
uncertainty, and it adds power to the interface - but, being a
probabilistic control scheme, it also adds a corresponding
element of nondeterminism.  
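
A small sketch of such a co-occurrence mechanism (command names and the
tie-breaking policy are invented; the implemented matrix is script-like and
richer than a bare count table):

    # Illustrative sketch only: observed command co-occurrence used to resolve
    # ambiguous utterances in favour of the user's habitual next command.
    from collections import defaultdict

    class CommandModel:
        def __init__(self):
            self.counts = defaultdict(lambda: defaultdict(int))

        def observe(self, previous, current):
            self.counts[previous][current] += 1

        def disambiguate(self, previous, candidates):
            # Unseen contexts leave the choice effectively nondeterministic.
            return max(candidates, key=lambda c: self.counts[previous][c])

    model = CommandModel()
    for prev, cur in [("set_axes", "plot_bar"), ("set_axes", "plot_bar"),
                      ("set_axes", "plot_line")]:
        model.observe(prev, cur)

    print(model.disambiguate("set_axes", ["plot_bar", "plot_line"]))   # plot_bar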

A hidden operator experiment was conducted to collect utterance files
for a user-derived interface development process.  These empirical
data were used to design the interface; and a second set, collected
later, was used as test data.


Lopez, P., Johnston, V. & Partridge, D. (1986), Automatic Calibration
of the Geometric Workspace of an Intelligent Robot, MCCS-85-55. 

An intelligent robot consisting of an arm, a single camera, and a computer, 
functioning in an industrial environment, is described.  A variety of 
software algorithms that compute and maintain, at task-execution time, 
the mappings between robot arm, work environment (the robot's world), 
and camera coordinate systems, are presented.  

These mappings are derived through a sequence of arm movements 
and subsequent image ``snapshots'', from which arm motion is 
detected.  With the aid of world self-knowledge (i.e., knowledge of the 
length of the robot arm and the height of the arm to the base 
pivot), the robot then uses its ``eye'' to calculate a 
pixel-to-millimeter ratio in two known planes.  By ``looking'' 
at its arm at two different heights, it geometrically computes the 
distance of the camera from the arm, hence deriving the mapping from 
the camera to the work environment.  Similarly, the calculation of 
the intersection of two arm positions (where wrist location 
and hypothetical base location form a line) gives a base pivot 
position.  With the aid of a perspective projection, now possible
since the camera position is known, the position of the base and
its planar angle of rotation in the work environment (hence the world
to arm mapping) is determined.  Once the mappings are known, 
the robot may begin its task,
updating the approximate camera and base pivot positions with
appropriate data obtained from task-object manipulations.  These
world model parameters are likely to remain static
throughout the execution of a task, and as time passes, the 
old information receives more weight than new information when 
updating is performed.  In this manner, the robot first
calibrates the geometry of its workspace with sufficient accuracy
to allow operation using perspective projection, with performance 
``fine-tuned'' to the nuances of a particular work environment
through adaptive control algorithms.
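
The camera-distance step can be illustrated with a small calculation (this
assumes an idealized pinhole camera whose optical axis is perpendicular to the
two observation planes; the memorandum's procedure is more elaborate and also
recovers the base pivot position and rotation):

    # Illustrative sketch only.  Under a pinhole model, mm_per_pixel = Z / f,
    # so ratios measured in two planes a known distance apart give both the
    # focal length (in pixels) and the camera-to-plane distances.
    def camera_distance(mm_per_pixel_near, mm_per_pixel_far, separation_mm):
        m1, m2 = mm_per_pixel_near, mm_per_pixel_far
        focal_px = separation_mm / (m2 - m1)
        return focal_px, m1 * focal_px, m2 * focal_px

    # Arm of known length observed in two planes 100 mm apart:
    f, z_near, z_far = camera_distance(0.50, 0.55, 100.0)
    print(round(f), round(z_near), round(z_far))   # 2000 1000 1100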


Fass, D. (1986), Collative Semantics: An Approach to Coherence, 
MCCS-85-56. 

Collative Semantics (CS) is a domain-independent semantics for
natural language processing that focusses on the problem of
coherence.  Coherence is the synergism of knowledge (synergism is the
interaction of two or more discrete agencies to achieve an effect of
which none is individually capable) and plays a substantial role in
cognition.  The representation of coherence is distinguished from
the representation of knowledge and some theoretical connections are
established between them.  A type of coherence representation has
been developed in CS called the semantic vector.  Semantic vectors
represent the synergistic interaction of knowledge from diverse
sources (including the context) that comprise semantic relations.
Six types of semantic relation are discriminated and represented:
literal, metaphorical, anomalous, novel, inconsistent and redundant.
The knowledge description scheme in CS is the senseframe, which
represents lexical ambiguity.  The semantic primitives in senseframes
are word-senses which are a subset of the word-senses in natural
language.  Because these primitives are from natural language, the
semantic markerese problem is avoided and large numbers of primitives
are provided for the differentiated description of concepts required
by semantic vectors.  A natural language program called meta5 uses
CS; detailed examples of its operation are given.


McDonald, D.R. & Bourne, L.E. Jr. (1986), Conditional Rule Testing in 
the Wason Card Selection Task, MCCS-85-57. 

We used the Wason card selection task, with variations, to study
conditional reasoning.  Disagreement exists in the literature as to
whether performance on this task improves when the problem is
expressed concretely and when instructions are properly phrased.  In
order to resolve some inconsistencies in previous studies, we examined
the following variables: (1) task instructions, (2) problem format,
and (3) the thematic compatibility of solution choices with formal
logic and with pre-existing schemas.  In Experiment 1, performance
was best in an 8-card, rather than a 4-card or a hierarchical
decision-tree format.  It was found in Experiment 2 that instructions
directing subjects to make selections based on ``violation'' of the
rule, rather than assessing its truth or falsity, resulted in more
correct responses.  Response patterns were predictable in part from
formal logical considerations, but primarily from mental models, or
schemas, based on (assumed) common prior experience and knowledge.
Several explanations for the findings were considered.

Partridge, D., McDonald, J., Johnston, V. & Paap, K. (1986),
AI Programs and Cognitive Models: Models of Perceptual Processes, 
MCCS-85-60. 

We examine and compare two independently developed computer models of
human perceptual processes: the recognition of objects in a scene and
of words.  The first model was developed to support intelligent
reasoning in a cognitive industrial robot - an AI system.  The second
model was developed to account for a collection of empirical data and
known problems with earlier models - a cognitive science model.  We
use these two models, together with the results of empirical studies
of human behaviour, to generate a generalised model of human visual
processing, and to further our claim that AI modelers should be more
cognizant of empirical data.  A study of the associated human
phenomena provides an essential basis for understanding complex
models as well as valuable constraints in complex and otherwise
largely unconstrained domains.


Krueger, W. (1986)
Transverse Criticality and its Application to Image Processing,
MCCS-85-61. 

The basis for investigation into visual recognition of objects is
their representation.  One appealing approach begins by replacing the
objects themselves by their bounding surfaces.  These then are represented by
surfaces which have been smoothed according to various prescriptions.
The resulting smoothed surfaces are subjected to geometric analysis
in an attempt to find critical events which correspond to
``landmarks'' that serve to define the original object.

Many vision researchers have used this outline, often incorporating
it into a larger one that uses the critical events as constraints in
surface generation programs.  To deal with complex objects these
investigators have proposed a number of candidates for the notion of
critical event, most of which take the form of zero-crossings of some
differentially defined quantity associated with surfaces (e.g., Gaussian
curvature).  Many of these require some a posteriori geometric
conditioning (e.g. planarity) in order to be visually significant.

In this report, we introduce the notion of a transverse critical line
of a smooth function defined on a smooth surface.  Transverse
criticality attempts to capture the trough/crest behavior manifested
by quantities which are globally defined on surfaces (e.g. curvature
troughs and crests, irradiance troughs and crests).  This notion can
be used to study both topographic and photometric surface behavior
and includes, as special cases, definitions proposed by other
authors, among them the regular edges of Phillips and
Machuca [PM] and the interesting flutings of Marr [BPYA].
Applications are made to two classes of surfaces which are important
in computer vision: height surfaces and generalized cones.


Graham, N. & Harary, F. (1986)
Packing and Mispacking Subcubes into Hypercubes,
MCCS-85-65. 

A node-disjoint packing of a graph G into a larger graph H 
is a largest collection of pairwise node-disjoint copies of G contained
in H; an edge-disjoint packing is defined similarly, except that no two
copies of G may have a common edge. Two packing numbers of G into H 
are defined accordingly. It is easy to determine both of these numbers
when G is a subcube of a hypercube H. 

A mispacking of G into H is a maximal collection of disjoint
copies of G whose removal from H leaves no subgraph G, such that
the cardinality of this collection is minimum. Two mispacking numbers 
of G into H are defined analogously. Their exact determination
is quite difficult but we obtain upper bounds.


Dietrich, E. & Fields, C. (1986),
Creative Problem Solving Using Wanton Inference:
It takes at least two to tango,
MCCS-85-70. 

This paper introduces \fBwanton inference\fR, a problem solving strategy for
creative problem solving.  The central idea underlying wanton inference is
that creative solutions to problems are often generated by ignoring
boundaries between domains of knowledge and making new connections between
previously unassociated elements of one's knowledge base.  The major
consequence of using the wanton inference strategy is that the size of search
spaces is greatly increased.  Hence, the wanton inference strategy is
fundamentally at odds with the received view in AI that the essence of
intelligent problem solving is limiting the search for solutions.  Our view
is that the problem of limiting search spaces is an artificial problem in AI,
resulting from ignoring both the nature of creative problem solving and the
social aspect of problem solving.  We argue that this latter aspect of
problem solving provides the key to dealing with the large search spaces
generated by wanton inference.


Ballim, A. (1986),
The Subjective Ascription of Belief to Agents,
MCCS-85-74. 

A computational model for determining an agent's beliefs from the viewpoint
of an agent known as the system is described. The model is based on the
earlier work of Wilks and Bien (1983), which argues for a method of dynamically
constructing nested points of view from the beliefs that the system holds.
This paper extends their work by examining problems involved in ascribing
beliefs about beliefs (meta-beliefs) to agents, and by developing a representation
to handle these problems. The representation is used in ViewGen, a
computer program which generates viewpoints. 


Partridge, D. (1986), The Scope and Limitations of
First Generation Expert Systems, MCCS-85-43. 

It is clear that expert systems technology is one of AI's
greatest successes so far.  Currently we see an ever-increasing
application of expert systems, with no obvious limits to their
applicability.  Yet there are also a number of
well-recognized problems associated with this new technology.
I shall argue that these problems are not the puzzles of normal
science that will yield to advances within the current
technology; on the contrary, they are symptoms of severe inherent
limitations of this first generation technology.  By reference
to these problems I shall outline some important aspects of the
scope and limitations of current expert systems technology.
The recognition of these limitations is a prerequisite of
overcoming them, as well as of developing an awareness of the
scope of applicability of this new technology.


Gerber, M., Dearholt, D.W., Schvaneveldt, R.W., Sachania,
V. & Esposito, C. (1987), Documentation for PATHFINDER: A Program
to Generate PFNETs, MCCS-87-47. 

This documentation provides both user and programmer documentation for
PATHFINDER, a program which generates PFNETs from symmetric distance
matrices representing various aspects of human knowledge.  User
documentation includes instructions for input and output file formats,
instructions for compiling and running the program, adjustments to
incomplete or incompatible data sets, a general description of the
algorithm, and a glossary of terms.  Programmer documentation includes a
detailed description of the algorithm with an explanation of each
function and procedure, and hand-execution examples of some of the more
difficult-to-read code.  Examples of input and output files are included.
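
For orientation, a sketch of the PFNET idea in its simplest parameterization,
PFNET(r = infinity, q = n - 1), where a link survives exactly when no indirect
path has a smaller maximum link weight (the documented program handles the
general r and q parameters, input formats, and incomplete data):

    # Illustrative sketch only, not the distributed PATHFINDER code.
    def pfnet_inf(dist):
        # dist: symmetric n x n distance matrix with zero diagonal.
        # Returns a boolean matrix of retained links.
        n = len(dist)
        w = [row[:] for row in dist]          # minimal path weight, r = infinity
        for k in range(n):
            for i in range(n):
                for j in range(n):
                    w[i][j] = min(w[i][j], max(w[i][k], w[k][j]))
        return [[i != j and dist[i][j] <= w[i][j] for j in range(n)]
                for i in range(n)]

    d = [[0, 1, 3],
         [1, 0, 1],
         [3, 1, 0]]
    print(pfnet_inf(d))   # the 0-2 link (weight 3) is dropped: path 0-1-2 has max weight 1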


Ballim, A. (1986)
Generating Points of View,
MCCS-85-68. 

Modelling the beliefs of agents is normally done in a static manner.
This paper describes a more flexible dynamic approach to generating
nestings which represent what the system believes other agents
believe.  Such nestings have been described in Wilks and Bien (1983)
as has their usefulness.  The methods presented here are based upon
those described in Wilks and Bien (ibid) but have been augmented to
handle various problems.  A system based on this paper is currently
being written in Prolog.


The Topological Cubical Dimension of a Graph
Frank Harary
MCCS-86-80

A cubical graph G is a subgraph of some hypercube $Q sub n$.  The
cubical dimension cd(G) is the smallest such n.  We verify that the
complete graph $K sub p$ is homeomorphic to a cubical graph H contained
in $Q sub p-1$.  Hence every graph G has a subdivision which is a cubical
graph.  This enables us to define the topological cubical dimension
tcd(G) as the minimum n for which some subdivision of G is a subgraph of $Q sub n$.

When G is a full binary tree, the value of tcd is already known.
Computer scientists, motivated by the use of the architecture of a
hypercube for massively parallel supercomputers, defined the dilation
of an edge e of G within a subdivision H of G as the length of the image
of e in H, and the dilation of G as the maximum dilation of an edge
of G.  The two new invariants, tcd(G) and the minimum dilation of G
among all cubical subdivisions H of G, are studied.


CP: A Programming Environment for 
Conceptual Interpreters
M.J. Coombs and R.T. Hartley
MCCS-87-82

A conceptual approach to problem-solving is explored which we
claim is much less brittle than logic-based methods.  It also
promises to support effective user/system interaction when
applied to expert system design.  Our approach is ``abductive,''
gaining its power from the generation of good hypotheses rather
than from deductive inference, and seeks to emulate the robust
cooperative problem-solving of multiple experts.  Major
characteristics include: 

	(1) use of conceptual rather than
	syntactic representation of knowledge; 

	(2) an empirical approach to reasoning by model generation and
	evaluation called Model Generative Reasoning; 
	
	(3) dynamic composition of reasoning strategies from actors embedded
	in the conceptual structures; and

	(4) characterization of the reasoning cycle in terms of cooperating
	agents.


Semantics and the Computational
Paradigm in Cognitive Psychology
Eric Dietrich
MCCS-87-83

There is a prevalent notion among cognitive scientists and philosophers of
mind that computers are merely formal symbol manipulators, performing the
actions they do solely on the basis of the syntactic properties of the
symbols they manipulate.  This view of computers has allowed some
philosophers to divorce semantics from computational explanations.  Semantic
content, then, becomes something one adds to computational explanations to
get psychological explanations.  Other philosophers, such as Stephen Stich,
have taken a stronger view, advocating doing away with semantics entirely.
This paper argues that a correct account of computation requires us to
attribute content to computational processes in order to explain which
functions are being computed.  This entails that computational psychology
must countenance mental representations.  Since anti-semantic positions are
incompatible with computational psychology thus construed, they ought to be
rejected.  Lastly, I argue that in an important sense, computers are not
formal symbol manipulators.


Problem Solving in Multiple Task Environments
Eric Dietrich and Chris Fields
MCCS-87-84

We summarize a formal theory of multi-domain problem solving
that provides a precise representation of the inferential dynamics
of problem solving in multiple task environments.  We describe
a realization of the theory as an abstract virtual machine that
can be implemented on standard architectures.  We show that
the behavior of such a machine can be described in terms of
formally-specified analogs of mental models, and present a necessary
condition for the use of analogical connections between such
models in problem solving.


An Automated Particulate Counting System for Cleanliness 
Verification of Aerospace Test Hardware
\fIJeff Harris and Edward S. Plumer\fR
MCCS-87-86

An automated, computerized particle counting system
has been developed to verify the cleanliness of aerospace test
hardware. This work was performed by the Computing Research
Laboratory at New Mexico State University (CRL) under a contract
with Lockheed Engineering and Management Services Company at the
NASA Johnson Space Center, White Sands Test Facility. Aerospace
components are thoroughly cleaned and residual particulate matter
remaining on the components is rinsed onto 47 mm diameter test filters. The
particulates on these filters are an indication of the
contamination remaining on the components. These filters are
examined under a microscope, and particles are sized and counted.
Previously, the examination was performed manually; this
operation has now been automated. Rather than purchasing a
dedicated particle analysis system, a flexible system utilizing
an IBM PC-AT was developed. The computer, combined with a
digitizing board for image acquisition, controls a
video-camera-equipped microscope and an X-Y stage to allow
automated filter positioning and scanning. The system provides
for complete analysis of each filter paper, generation of
statistical data on particle size and quantity, and archival
storage of this information for further evaluation. The system is
able to identify particles down to 5 micrometers in diameter and
discriminate between particles and fibers. A typical filter scan
takes approximately 5 minutes to complete. Immediate operator
feedback as to pass-fail for a particular cleanliness standard is
also a feature. The system was designed to be operated by
personnel working inside a class 100 clean room. Should it be
required, a mechanism for more sophisticated recognition of
particles based on shape and color may be implemented.
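
As a toy illustration of the measurement step only (the binary image, the pixel
scale, and the fiber criterion below are invented; the delivered system
digitizes microscope frames and also drives the stage and camera):

    # Illustrative sketch: connected components, equivalent-circle diameter,
    # and a crude particle/fiber discrimination by bounding-box aspect ratio.
    import math

    PIXEL_UM = 2.5           # assumed micrometers per pixel
    MIN_DIAMETER_UM = 5.0    # smallest particle reported
    FIBER_ASPECT = 5.0       # assumed length/width ratio separating fibers

    def regions(image):
        seen, found = set(), []
        for y0, row in enumerate(image):
            for x0, v in enumerate(row):
                if v and (y0, x0) not in seen:
                    stack, pixels = [(y0, x0)], []
                    seen.add((y0, x0))
                    while stack:
                        y, x = stack.pop()
                        pixels.append((y, x))
                        for ny, nx in ((y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)):
                            if (0 <= ny < len(image) and 0 <= nx < len(image[ny])
                                    and image[ny][nx] and (ny, nx) not in seen):
                                seen.add((ny, nx))
                                stack.append((ny, nx))
                    found.append(pixels)
        return found

    def classify(pixels):
        area_um2 = len(pixels) * PIXEL_UM ** 2
        diameter = 2.0 * math.sqrt(area_um2 / math.pi)    # equivalent-circle diameter
        ys = [p[0] for p in pixels]
        xs = [p[1] for p in pixels]
        spans = sorted((max(ys) - min(ys) + 1, max(xs) - min(xs) + 1))
        kind = "fiber" if spans[1] / spans[0] >= FIBER_ASPECT else "particle"
        return kind, diameter

    image = [[0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0],
             [0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0],
             [0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 0]]
    for pixels in regions(image):
        kind, d = classify(pixels)
        if d >= MIN_DIAMETER_UM:
            print(kind, round(d, 1), "micrometers")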


Solving Problems by Expanding Search Graphs:
Mathematical Foundations for a Theory of Open-world Reasoning
Eric Dietrich and Chris Fields
MCCS-87-88

We summarize a mathematical theory describing a virtual machine
capable of expanding search graphs. This machine can, at least
sometimes, solve problems where it is not possible to precisely
and in detail specify the space it must search. The mechanism for
expansion is called wanton inference. The theory specifies which
wanton inferences have the greatest chance of producing solutions
to given problems.  The machine, using wanton inference,
satisfies an intuitive definition of open-world reasoning.
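
A minimal sketch of the expansion mechanism (the rule format, domains, and
labels are invented; the theory itself specifies which wanton inferences are
most likely to pay off):

    # Illustrative sketch only: a successor function that ignores domain
    # boundaries, applying rules from every domain to the current state.
    def wanton_expand(state, rules_by_domain):
        successors = set()
        for domain, rules in rules_by_domain.items():
            for condition, conclusion in rules:
                if condition in state:
                    successors.add(frozenset(state | {conclusion}))
        return successors

    rules = {
        "plumbing":    [("tank_empty", "check_valve")],
        "electricity": [("tank_empty", "check_pump_power")],  # a cross-domain link
    }
    start = frozenset({"tank_empty"})
    print(len(wanton_expand(start, rules)))   # 2 -- the search space grows across domains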


Software Engineering Constraints Imposed by
Unstructured Task Environments
Eric Dietrich and Chris Fields
MCCS-87-91

We describe a software engineering methodology for building
multi-domain (open-world) problem solvers which inhabit
unstructured task environments.  This methodology is based on a
mathematical theory of such problem solving.  When applied, the
methodology results in a specification of program behavior that
is independent of any architectural concerns.  Thus the
methodology produces a specification prior to implementation
(unlike current AI software engineering methodology).  The data
for the specification are derived from experiments run on human
experts.


Multiple Agents and the Heuristic Ascription of Belief.
Yorick Wilks and Afzal Ballim
MCCS-86-75

A method for heuristically generating nested beliefs (what some agent
believes that another agent believes ... about a topic) is described.
Such nested beliefs (points of view) are essential to many processes
such as discourse processing and reasoning about other agents' reasoning
processes. Particular interest is paid to the class of beliefs known as
\fIatypical beliefs\fR and to intensional descriptions. The heuristic 
methods described are embodied in a program called \fIViewGen\fR which
generates nested viewpoints from a set of beliefs held by the system.
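
The default-ascription step can be sketched in miniature (the belief
representation below is invented; ViewGen's handling of atypical beliefs and
intensional descriptions is considerably richer):

    # Illustrative sketch only: an agent is assumed to share the system's
    # beliefs except on topics where the agent is already known to differ.
    def ascribe(system_beliefs, agent_beliefs):
        view = dict(agent_beliefs)             # the agent's own (possibly atypical) beliefs
        for topic, belief in system_beliefs.items():
            view.setdefault(topic, belief)     # default ascription
        return view

    system = {"earth": "round", "capital_of_france": "Paris"}
    john   = {"earth": "flat"}                 # an atypical belief held by John
    print(ascribe(system, john))
    # {'earth': 'flat', 'capital_of_france': 'Paris'}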


An Algorithm for Open-world Reasoning 
using Model Generation
M.J. Coombs, E. Dietrich & R.T. Hartley
MCCS-87-87

The closed-world assumption places an unacceptable constraint on a
problem-solver by imposing an \fIa priori\fR notion of relevance on
propositions in the knowledge-base.  This accounts for much of the
brittleness of expert systems, and their inability to model natural
human reasoning in detail.

This paper presents an algorithm for an open-world problem-solver.  
Termed Model Generative Reasoning, the procedure replaces deductive inference
with the generation of alternative, intensional
domain descriptions (models) to cover problem input, which are then evaluated
against domain facts as alternative explanations.  We also give an illustration
of the workings of the algorithm using concepts from process control.
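
The generate-and-evaluate shape of the algorithm can be suggested with a toy
model representation (concept names, the coverage test, and the scoring are
invented; the algorithm itself composes intensional descriptions from
conceptual structures):

    # Illustrative sketch only: generate candidate models that cover the
    # problem input, then evaluate them against known domain facts.
    from itertools import combinations

    def generate_models(problem_input, concepts, size=2):
        for combo in combinations(concepts, size):
            covered = set().union(*(concepts[c] for c in combo))
            if problem_input <= covered:
                yield combo

    def evaluate(model, facts, concepts):
        covered = set().union(*(concepts[c] for c in model))
        return sum(1 for fact in facts if fact <= covered)

    concepts = {
        "pump_failure": {"low_flow", "pump_noise"},
        "valve_stuck":  {"low_flow", "high_pressure"},
        "sensor_drift": {"odd_reading"},
    }
    observations = {"low_flow", "high_pressure"}
    facts = [{"high_pressure"}, {"pump_noise"}]

    models = list(generate_models(observations, concepts))
    print(max(models, key=lambda m: evaluate(m, facts, concepts)))
    # ('pump_failure', 'valve_stuck')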


Pronouns in mind: quasi-indexicals and the ``language of thought''
Yorick Wilks, Afzal Ballim, & Eric Dietrich
MCCS-87-92

The paper examines the role of the natural-formal language
distinction in connection with the ``language of thought''
(LOT) issue. In particular, it distinguishes a
realist-uniform/attributist-uniform approach to LOT and seeks to link
that distinction to the issue of whether artificial
intelligence is fundamentally a science or engineering. In a
second section, we examine a particular aspect of natural
language in relation to LOT: pronouns/indexicals. The focus
there is Rapaport's claims about indexicals in belief
representations. We dispute these claims and argue that he
confuses claims about English sentences and truth
conditions, on the one hand, with claims about beliefs, on
the other. In a final section we defend the representational
capacity of the belief manipulation system of Wilks, Bien
and Ballim against Rapaport's published criticisms.