POPX@VAX.OXFORD.AC.UK (03/24/88)
From: Jocelyn Paine,
Experimental Psychology Department,
South Parks Road,
Oxford.
Janet Address: POPX @ OX.VAX
SOFTWARE WANTED
-
TO BUILD A MIND
I'm trying to teach Oxford undergraduates an information-processing
model of psychology, by giving them a computerised organism (named P1O,
after the course) which has a "mind" which they can watch and experiment
with. To do this, I've sketched how a mind might be built out of units,
each performing a simpler task than the original mind (my sketch is
loosely based on Dennett's "Toward a Cognitive Theory of
Consciousness"). Each of my units does some well-defined task: for
example, parsing, edge-detection, conversion of a semantic
representation to text.
Now I have to implement each unit, and hook them together. The units are
not black boxes, but black boxes with windows: i.e. I intend that my
students can inspect and modify some of the representations in each box.
The units will be coded in Prolog or Pop-11, and run on VAX Poplog.
Taking the parser as an example: if it is built to use a Prolog definite
clause grammar, then my students should be able to: print the grammar;
watch the parser generate parse trees, and use the editor to walk round
them; change the grammar and see how this affects the response to
sentences.
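To make the parser example concrete, here is a minimal sketch, in Python rather than the Prolog or Pop-11 the real units will use, of a recursive-descent parser that builds parse trees which students could print and walk round. The four-rule grammar and tiny lexicon are invented for illustration only:

```python
# Toy two-stage analyser, stage 1: parse a sentence into a tree.
# Students could print GRAMMAR, change it, and re-run the parser to
# see how the parse trees change.  (Illustrative names throughout.)

GRAMMAR = {
    "s":  [["np", "vp"]],
    "np": [["det", "noun"]],
    "vp": [["verb", "np"]],
}
LEXICON = {
    "det":  {"the", "a"},
    "noun": {"robot", "button"},
    "verb": {"sees", "grasps"},
}

def parse(cat, words):
    """Return (tree, remaining_words), or None if no parse."""
    if cat in LEXICON:                      # terminal category
        if words and words[0] in LEXICON[cat]:
            return (cat, words[0]), words[1:]
        return None
    for expansion in GRAMMAR.get(cat, []):  # non-terminal: try each rule
        children, rest = [], words
        for sub in expansion:
            result = parse(sub, rest)
            if result is None:
                break
            tree, rest = result
            children.append(tree)
        else:                               # every sub-category parsed
            return (cat, children), rest
    return None
```

For example, parse("s", "the robot sees the button".split()) yields a nested ("s", [...]) tree with no words left over.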
P1O will live in a simulated world which it perceives by seeing objects
as sharp-edged images on a retina. This retina is a rectangular grid of
perhaps 2000 pixels, each sensing either nothing or a spot of some
particular colour. One of the images will be that of P1O's manipulator,
which can detect whether it is touching an object. P1O can also perceive
loud noises, which direct its attention toward some non-localised region
of space. Finally, P1O can hear sentences (stored as a list of atoms in
its "auditory buffer"), and can treat them either as commands to be
obeyed, statements to be believed (if it trusts the speaker), or as
questions to be answered.
P1O's perceptual interpreter takes the images on its retina, and
converts them via edge-detection and boundary-detection into hypotheses
about the locations of types of objects. The interpreter then checks
these hypotheses for consistency with P1O's belief memory, determining
as it does so which individuals of a type it is seeing. Hypotheses
consistent
with past beliefs are then put into the belief memory, as Prolog
propositions.
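A sketch of that consistency check, in Python for brevity, with an invented proposition format: tuples, with ("not", p) standing for the negation of p. A hypothesis is entered into belief memory only if its negation is not already believed:

```python
# Sketch of assimilating perceptual hypotheses into belief memory.
# Propositions are (predicate, *args) tuples; ("not", p) negates p.
# All predicate names here are invented for illustration.

def consistent(hypothesis, beliefs):
    """A hypothesis conflicts only if its negation is believed."""
    return ("not", hypothesis) not in beliefs

def assimilate(hypotheses, beliefs):
    """Add each consistent hypothesis to the belief memory;
    return the ones that were accepted."""
    accepted = []
    for h in hypotheses:
        if consistent(h, beliefs):
            beliefs.add(h)
            accepted.append(h)
    return accepted
```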
The sentences which P1O hears are also converted into propositions, plus
a mood (question, command, or statement). This is done by generating a
parse tree, and then a propositional representation of the sentence's
meaning. Statements are checked for consistency with the belief memory
before being entered into it; questions cause the belief memory to be
searched; commands invoke P1O's planner, telling it for example to plan
a sequence of actions with which it can pick up the brown chocolate
button which it sees.
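The three moods might be dispatched as in this Python sketch; the mood names, proposition format, and planner hand-off are all invented for illustration:

```python
# Sketch of dispatching on mood after language analysis: statements
# update the belief memory (if consistent), questions query it, and
# commands are handed to the planner.  ("not", p) negates p.

def respond(mood, prop, beliefs, plan):
    if mood == "statement":
        if ("not", prop) not in beliefs:   # consistency check first
            beliefs.add(prop)
            return "ok"
        return "i don't believe that"
    if mood == "question":
        return "yes" if prop in beliefs else "unknown"
    if mood == "command":
        return plan(prop)                  # planner returns an action list
    raise ValueError(mood)
```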
These action sequences then go to P1O's motor control unit, which moves
the manipulator. This involves positional feedback - P1O moves a small
step at a time, and has to correct after each step.
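The feedback loop could look like this one-dimensional Python sketch; the real manipulator moves on a 2-D retina, and this only shows the step-and-correct shape of the control:

```python
# Positional feedback, 1-D sketch: move one small step at a time
# toward the target, re-checking position after each step.  In the
# real organism each step would be re-sensed from the retina.

def move_to(target, position, step=1):
    """Return the list of intermediate positions visited."""
    trace = []
    while position != target:
        position += step if target > position else -step
        trace.append(position)
    return trace
```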
P1O's simulated environment is responsible for tracking the manipulator,
and updating the retinal image accordingly. Students can also update the
image for themselves.
At the top level, P1O has some goals, which keep it active even in the
absence of commands from the student. The most important of these is to
search for food. The type of food sought depends on P1O's current
feeling of hunger, which depends in turn on what it has recently eaten.
The goals are processed by the top-level control module, which calls the
other modules as appropriate.
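One way the hunger-driven goal selection might work, sketched in Python with an invented decay rule (hunger for each food rises by one per time step and resets to zero when that food is eaten; a command from the student pre-empts the standing goals):

```python
# Sketch of top-level control state: P1O seeks the food it is
# currently hungriest for, unless a command pre-empts it.
# The decay rule and food names are invented for illustration.

def tick(hunger):
    """One time step: every hunger level rises."""
    for food in hunger:
        hunger[food] += 1

def eat(hunger, food):
    """Eating a food satisfies the hunger for it."""
    hunger[food] = 0

def current_goal(hunger, command=None):
    """A command pre-empts standing goals; otherwise seek food."""
    if command is not None:
        return command
    return ("seek", max(hunger, key=hunger.get))
```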
Above, I've described P1O as if I've already built it. I haven't, yet,
and I'm seeking Prolog or Pop-11 software to help. I'd also accept
software in other languages which can be translated easily. I'll enter
any software I receive into my Prolog library (see AILIST V5.279, 3rd
Dec 1987; IKBS Bulletin 87-32, 18 Dec 1987; the Winter 1987 AISB News)
for use by others.
I think so far that I need these most:
(1) LANGUAGE ANALYSIS:
(1.1) A grammar, and its parser, for some subset of English, in a
notation similar to DCGs (though it need not be _implemented_ as
DCGs). Preferably with parse trees as output, represented as Prolog
terms. The notation certainly doesn't have to be Prolog, though it may
be translatable thereto: it should be comprehensible to linguists who've
studied formal grammar.
(1.2) As above, but for the translation from parse-trees into some kind
of meaning (preferably propositions, but possibly conceptual graphs,
Schankian CD, etc) represented as Prolog terms. I'm really not sure what
the clearest notation would be for beginners.
(1.3) For teaching reasons, I'd prefer my analyser to be 2-stage; parse,
and then convert the trees to some meaning. However, in case I can't do
this: one grammar and analyser which does both stages in one go. Perhaps
a chart parser using functional unification grammars?
(1.4) A morphological analyser, for splitting words into root, suffixes,
etc.
(2) VISION
(2.1) An edge-detector. This should take a 2-D character array as input,
and return a list of edges with their orientation. I'm content to limit
it to vertical and horizontal edges. It need not deal with fuzzy data,
since the images will be drawn by students, and not taken from the real
world. This can be in any algorithmic language: speed is fairly
important, and I can call most other languages from Poplog.
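A minimal Python sketch of the requested detector, limited as suggested to vertical and horizontal edges. Here '.' marks an empty pixel and any other character a coloured one; only the top and left edges of each filled cell are reported, which is enough to show the intended interface (the representation is invented for illustration):

```python
# Sketch of the edge-detector: scan a 2-D character array and report
# positions where a filled cell borders an empty one, labelled with
# the edge's orientation.  '.' is background.

def edges(image):
    """Return a list of (row, col, orientation) edge elements."""
    found = []
    rows, cols = len(image), len(image[0])
    for r in range(rows):
        for c in range(cols):
            if image[r][c] == ".":
                continue
            # horizontal edge along the top of a filled cell
            if r == 0 or image[r - 1][c] == ".":
                found.append((r, c, "horizontal"))
            # vertical edge along the left of a filled cell
            if c == 0 or image[r][c - 1] == ".":
                found.append((r, c, "vertical"))
    return found
```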
(2.2) A boundary-detector. This should take either the character array,
or the list of edges, and return a list of closed polygons. Again, it
can be in any algorithmic language.
(3) SPEAKING
(3.1) A speech planner, which takes some meaning representation, and
converts it into a list of words. This need not use the same grammar and
other knowledge as the language analyser (though it would be nicer if it
did).
(4) WINDOWING
(4.1) Any software for allowing the Poplog editor VED to display more
than two windows on the same screen, and for making VED highlight text.
Alternatively, Pop-11 routines which control cursor-addressable
terminals directly, bypassing VED, but still being able to do immediate
input of characters.
(5) OTHER
(5.1) If I model P1O's mind as co-operating experts, perhaps a
blackboard shell would be useful. Does anyone have a Prolog one?
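For what it's worth, the control skeleton of such a shell is small. Here is a Python sketch in which each knowledge source is a function from the blackboard to the set of entries it would post, run to quiescence; all names are invented:

```python
# Minimal blackboard control loop: knowledge sources inspect the
# shared blackboard (a set of entries) and may post new entries;
# the loop runs until no source has anything new to add.

def run_blackboard(sources, blackboard):
    changed = True
    while changed:
        changed = False
        for source in sources:
            new = source(blackboard) - blackboard
            if new:
                blackboard |= new
                changed = True
    return blackboard
```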
I'd also like to hear from anyone who has other software they think
useful, or who has done this kind of thing already - surely I can't be
the first to try teaching in this way? In particular, does anyone have
ideas on how to manage the environment efficiently, and on what form the
knowledge in top-level control should take? I'll acknowledge any help in
the course documentation.
Jocelyn Paine
INFORMATION ON BULLETIN BOARDS WANTED
I already belong to AILIST: from time to time I see in it mention of
other boards, usually given in the form COMP.SOURCES or SCI.MED. Where
can I obtain a list of these boards, and how to subscribe?
Jocelyn Paine
[To reply, I think the following should work:
POPX%VAX.OXFORD.AC.UK%AC.UK%UKACRL.BITNET@CUNYVM.CUNY.EDU .
As for other bboards, different ones exist on different
networks. For a list of Arpanet bboards, write to
Zellich@SRI-NIC.ARPA. For Bitnet bboards, I think a message
containing the command HELP will get you started; just send
it to LISTSERV@FINHUTC (or @NDSUVM1). I don't know how one
gets the list of Usenet newsgroups. -- KIL]

nobody@sunybcs.UUCP (03/25/88)
In article <8803250637.AA22481@ucbvax.Berkeley.EDU> POPX@VAX.OXFORD.AC.UK writes:
> SOFTWARE WANTED
> -
> TO BUILD A MIND

You might be interested in the following document, excerpts of which
follow; the full document is available by contacting us.

William J. Rapaport
Assistant Professor
Dept. of Computer Science || internet: rapaport@cs.buffalo.edu
SUNY Buffalo              || bitnet: rapaport@sunybcs.bitnet
Buffalo, NY 14260         || uucp: {ames,boulder,decvax,rutgers}!sunybcs!rapaport
(716) 636-3193, 3180

DEVELOPMENT OF A COMPUTATIONAL COGNITIVE AGENT

Stuart C. Shapiro, Director
William J. Rapaport, Associate Director
SNePS Research Group
Department of Computer Science
SUNY at Buffalo
226 Bell Hall
Buffalo, NY 14260
shapiro@cs.buffalo.edu, rapaport@cs.buffalo.edu

OVERVIEW.

The long-term goal of the SNePS Research Group is to understand the
nature of intelligent cognitive processes by developing and
experimenting with a computational cognitive agent that will be able
to use and understand natural language, and will be able to reason and
solve problems in a wide variety of domains. ...

ACCOMPLISHMENTS.

In pursuit of our long-term goals, we have developed:

(1) The SNePS Semantic Network Processing System, a
    knowledge-representation/reasoning system that allows one to
    design, implement, and use specific knowledge representation
    constructs, and which easily supports nested beliefs,
    meta-knowledge, and meta-reasoning.

(2) SNIP, the SNePS Inference Package, which interprets rules
    represented in SNePS, performing bi-directional inference, a
    mixture of forward chaining and backward chaining which focuses
    its attention on the topic at hand. SNIP can make use of
    universal, existential, and numerical quantifiers, and a
    specially-designed set of propositional connectives that include
    both true negation and negation-by-failure.
(3) Path-Based Inference, a very general method of defining
    inheritance rules by specifying that the existence of an arc in a
    SNePS network may be inferred from the existence of a path of arcs
    specified by a sentence of a ``path language'' defined by a
    regular grammar. Path-based reasoning is fully integrated into
    SNIP.

(4) SNeBR, the SNePS Belief Revision system, based on SWM, the only
    extant, worked-out logic of assumption-based belief revision.

(5) A Generalized Augmented Transition Network interpreter/compiler
    that allows the specification and use of a combined
    parsing-generation grammar, which can be used to parse a
    natural-language sentence into a SNePS network, generate a
    natural-language sentence from a SNePS network, and perform any
    needed reasoning along the way.

(6) A theory of Fully Intensional Knowledge Representation, according
    to which we are developing knowledge representation constructs and
    grammars for the Computational Cognitive Mind. This theory also
    affects the development of successive versions of SNePS and SNIP.
    For instance, the insight we developed into the intensional nature
    of rule variables led us to design a restricted form of
    unification that cuts down on the search space generated by SNIP
    during reasoning.

(7) CASSIE, the Computational Cognitive Mind we are developing and
    experimenting with, successive versions of which represent an
    integration of all our current work.

CURRENT RESEARCH.

Current projects being carried out by various members of the SNePS
Research Group, some joint with other researchers, include:

(1) VMES, the Versatile Maintenance Expert System: ...
(2) Discussing and Using Plans: ...
(3) Intelligent Multi-Media Interfaces: ...
(4) Cognitive and Computer Systems for Understanding Narrative Text: ...
(5) The Representation of Natural Category Systems and Their Role in
    Natural-Language Processing: ...
(6) Belief Representation, Discourse Analysis, and Reference in
    Narrative: ...
(7) Understanding Pictures with Captions: ...

BIBLIOGRAPHY.

A bibliography of over 90 published articles, technical reports, and
technical notes may be obtained from Mrs. Lynda Spahr, at the address
given above, or by electronic mail to spahr@gort.cs.buffalo.edu.