Cottrell@NPRDC (02/20/86)
SEMINAR
From PDP to NDP through LFG:
The Naive Dog Physics Manifesto
Garrison W. Cottrell
Department of Dog Science
Condominium Community College of Southern California
The Naive Physics Manifesto (Hayes, 1978) was a seminal paper in
extending the theory of knowledge representation to everyday phenomena.
The goal of the present work is to extend this approach to Dog Physics,
using the connectionist (or PDP) framework to encode our everyday,
commonsense knowledge about dogs in a neural network[1]. However,
following Hayes, the goal is not a working computer program. That is in
the province of so-called performance theories of Dog Physics (see, for
example, my 1984 Modelling the Intentional Behavior of the Dog). Such
efforts are bound to fail, since they must correspond to empirical data,
which is always changing. Rather, we will first try to design a
competence theory of dog physics[2], and, as with Hayes and Chomsky, the
strategy is to continually refine that, without ever getting to the
performance theory.
The approach taken here is to develop a syntactic theory of dog
actions which is constrained by Dog Physics. Using a variant of
Bresnan's Lexical-Functional Grammar, our representation will be a
context-free action grammar, with associated s-structures (situation
structures). The s-structures are defined in terms of Situation
Dogmatics[3], and are a partial specification of the situation of the
dog during that action.
Here is a sample grammar which generates strings of action
predicates corresponding to dog days[4] (nonterminals are capitalized):
Day -> Action Day | Sleep
Action -> Sleep | Eat | Play | leavecondo Walk
Sleep -> dream Sleep | deaddog Sleep | wake
Eat -> Eat chomp | chomp
Play -> stuff(Toy, mouth) | hump(x,y) | getpetted(x,y)
Toy -> ball | sock
Walk -> poop Walk | trot Walk | sniff Walk | entercondo
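For readers who insist on running models rather than competence theories
(against the stated methodology, admittedly), the grammar above can be
sketched as a small random sampler. The depth cap and the ordering of
rules (terminating rule last) are implementation assumptions, not part of
the theory:

```python
import random

# The dog-day action grammar, transcribed from the rules above.
# Nonterminals are capitalized keys; terminals are lowercase predicates.
# For each nonterminal, the terminating rule is listed last.
GRAMMAR = {
    "Day":    [["Action", "Day"], ["Sleep"]],
    "Action": [["Sleep"], ["Eat"], ["Play"], ["leavecondo", "Walk"]],
    "Sleep":  [["dream", "Sleep"], ["deaddog", "Sleep"], ["wake"]],
    "Eat":    [["Eat", "chomp"], ["chomp"]],
    "Play":   [["stuff(Toy, mouth)"], ["hump(x,y)"], ["getpetted(x,y)"]],
    "Toy":    [["ball"], ["sock"]],
    "Walk":   [["poop", "Walk"], ["trot", "Walk"], ["sniff", "Walk"],
               ["entercondo"]],
}

def generate(symbol="Day", depth=0, max_depth=8):
    """Expand a nonterminal into a string (list) of action predicates."""
    if symbol not in GRAMMAR:
        # Terminal; expand any embedded Toy nonterminal, as in stuff(Toy, mouth).
        while "Toy" in symbol:
            symbol = symbol.replace("Toy", generate("Toy", depth + 1)[0], 1)
        return [symbol]
    rules = GRAMMAR[symbol]
    if depth >= max_depth:
        rules = [rules[-1]]  # force the terminating rule so the dog day ends
    out = []
    for s in random.choice(rules):
        out.extend(generate(s, depth + 1, max_depth))
    return out

print(" ".join(generate()))
```

Note that every generated day ends in "wake", since Day bottoms out in
Sleep, which bottoms out in wake; pooping in the condo remains
ungrammatical by construction.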
Several regularities are captured by the syntax. For example,
these rules have the desirable property that pooping in the condo is
ungrammatical. Obviously such grammatical details are not innate in the
infant dog. This brings us to the question of rule acquisition and
Universality. These context-free action rules are assumed to be learned
by a neural network with "hidden" units[5] using the bark propagation
method (see Rumelhart & McClelland, 1985; Cottrell, 1985). The beauty of
this is that Dogmatic Universality is achieved by assuming neural
networks to be innate[6].
The above rules generate some impossible sequences, however. This
is the job of the situation equation annotations. Some situations are
impossible, and this acts as a filter on the generated strings. For
example, an infinite string of stuff(Toy, mouth)'s is prohibited by the
constraint that the situated dog can only fit one ball and one sock in
her mouth at the same time. One of the goals of Naive Dog Physics is to
determine these commonsense constraints. One of our major results is
the discovery that dog force (df) is constant. Since df = mass *
acceleration, this means that smaller dogs accelerate faster, and dogs
at rest have infinite mass. This is intuitively appealing, and has been
borne out by my dogs.
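The filtering role of the situation equations can be sketched as a
predicate over generated action strings. The mouth-capacity constraint
is the one stated above; the assumptions that chomping or waking empties
the mouth, and the string parsing itself, are hypothetical
simplifications of the s-structures:

```python
def situationally_possible(day):
    """Reject action strings whose s-structures violate mouth capacity:
    at most one ball and one sock fit in the situated dog's mouth at once."""
    in_mouth = set()
    for action in day:
        if action.startswith("stuff(") and action.endswith("mouth)"):
            toy = action[len("stuff("):].split(",")[0]
            if toy in in_mouth:
                return False  # a second ball (or sock) will not fit
            in_mouth.add(toy)
        elif action in ("chomp", "wake"):
            in_mouth.clear()  # assumed: eating or waking empties the mouth
    return True

# One ball plus one sock is fine; two balls at once is not.
print(situationally_possible(["stuff(ball, mouth)", "stuff(sock, mouth)", "wake"]))
print(situationally_possible(["stuff(ball, mouth)", "stuff(ball, mouth)", "wake"]))
```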
____________________
[1]We have decided not to use FOPC, as this has been proven by Schank
(personal communication) to be inadequate, in a proof too loud to fit in
this footnote.
[2]The use of competence theories is a standard trick first introduced
by Chomsky, which avoids the intrusion of reality on the theory.
An example is Chomsky's theory of light bulb changing, which begins by
rotating the ceiling...
[3]Barwoof & Peppy (1983). Situation Dogmatics (SD) can be regarded
as a competence theory of reality. See previous footnote. Using SD is a
departure from Hayes, who exhorts us to "understand what [the
representation] means." In the Gibsonian world of Situation Dogmatics, we don't
know what the representation means. That would entail information in
our heads. Rather, following B&P, the information is out there, in the
dog. Thus, for example, the dog's bark means there are surfers walking
behind the condo.
[4]Of course, a less ambitious approach would just try to account for
dog day afternoons.
[5]It is never clear in these models where these units are hidden, or
who hid them there. The important thing is that you can't see them.
[6]Actually this assumption may be too strong when applied to the
dogs under consideration. However, this is much weaker than Pinker's as-
sumption that the entirety of Joan Bresnan's mind is innate in the
language learner. It is instructive to see how his rules would work
here. We assume hump(x,y) is innate, and x is bound by the default
s-function "Self". The first time the puppy is humped, the mismatch
causes a new Passive humping entry to be formed, with the associated
redundancy rule. Evidence for the generalization to other predicates is
seen in the puppy subsequently trying to stuff her mouth into the ball.