[net.nlang] Phoenix and Akaeakamai learn sentences

cunningh@noscvax.UUCP (11/17/83)

[I'm told recent news postings from computers in San Diego didn't get
into the rest of the net -- so I'm reposting this.]

A segment of the NOVA "Signs of the Apes, Songs of the Whales" TV show
a couple of weeks ago covered some of Lou Herman's work on sentence
understanding using the two bottlenose dolphins Phoenix and
Akaeakamai.  Friday, I sat in on a seminar wherein he discussed his
work in more detail.  I'd like to share some of my impressions.  Any
misperceptions and misinterpretations are strictly my own  -- animal
linguistics not being a field I know much about.

BACKGROUND

Many animals display various kinds of signs (gestures, postures,
chemical emissions, sounds, etc.) expressing motivational and emotional
states, territories, etc.  Some (e.g. bees) show elaborate patterns of
communication.

'Language', however, seems a uniquely human kind of thing.  'Words' in
human languages can be combined in a variety of syntactically
meaningful ways to communicate a potentially infinite number of
'sentence' expressions.  Noam Chomsky claimed at one point that the
grasp and creative use of syntax strongly distinguished human beings
from animals.

30 years ago, Keith and Catherine Hayes adopted 'Viki', a female
chimpanzee, raised her as a child, and attempted to teach her to speak
English.  After five years of intensive training, Viki was able to
utter, with difficulty, three words  -- which she used in almost
arbitrary ways.  This, despite the fact that on simple non-verbal tests
requiring conceptual distinctions, Viki seemed to score as well as
average children her own age.

20 years later, Allen and Beatrice Gardner tried a different approach
with the chimp Washoe.  Having determined that chimps in the wild are
responsive to gesture, but seem to have little voluntary control over
their vocalizations, they trained her to use a variety of signs from
American Sign Language (ASL).  Washoe seemed to respond to about 500
signs, and allegedly could make 80 reliably.  Other investigators
became very enthusiastic about training primates to produce sign
language.

The Premacks trained 'Sarah' to use plastic tokens of different sizes,
shapes and colors -- she apparently succeeded in using them in
meaningful syntactic order.  The Rumbaughs devised a computer interface
so that 'Lana' could generate "lexigrams" on a computer screen  -- she
was apparently successful in formulating sentences to request things
she wanted.  Penny Patterson claimed that her gorilla, 'Koko', could
invent new signs and new combinations of signs.

Herb Terrace, working with 'Nim Chimpsky', re-did much of the Gardners'
work, trying to put it on a more formal basis, until he ran out of
grant money from NIMH.  This forced him to do a very significant
thing:  sit back and completely analyze all his data (especially his
video tape records) and many records of the Gardners' and Rumbaughs'
work.

From those data, Terrace concluded that most, if not all, of the primate
linguistic productions resulted from extensive (often unconscious)
prompting by the trainers.  He published a now-famous paper (Science,
1979), with extensive criticisms of all the previous primate work.
Despite claims to the contrary, in the studies which seemed to show
that "Apes can produce sentences", the primates really didn't
understand what they were doing.  The accomplishments were no more
significant than those of a pigeon, taught by operant conditioning,
which could peck at keys in a particular order to obtain a particular
type of reward.

Sue Savage-Rumbaugh tried some follow-up work with Lana that
essentially showed that Terrace was right.

HERMAN et al

Lou Herman considers that 98% of the work done during the last decade
or so with large-brained animals (mostly primates, as noted above),
concentrated on getting them to 'produce' language -- paying little
attention to whether they could 'accept' (comprehend) language.

After losing two years' worth of effort when the dolphins Puka and Kea
were stolen from his lab, Herman acquired two other dolphins in 1978 --
Phoenix and Akaeakamai (Hawaiian for "lover of wisdom").  His aim was
to find out if dolphins could be trained to recognize 'sentences'
rather than just manipulate symbols to get rewards.

Phoenix knows a simple 'whistle language' (the different whistles are
produced by a computer, from a trainer's entering the 'words' on a
keyboard out of sight of the dolphin).  Using the symbols '[]' to mean
'optional', a well-formed sentence has the following syntax:

[<modifier>]<direct object><action>[[<modifier>]<indirect object>]

All sentences are commands for the dolphin to do something.  A minimum
sentence is 2 'words', maximum 5 (but see below). It's a linear
grammar, and the order of execution corresponds with the order of the
words.  For example:

	left ball put-in surface basket.

would mean: take the (or one of several) balls to your left, and put it
into the (or one of several) baskets which are floating on the surface
of your pool.  Dolphin's choice if several items qualify.
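
The linear grammar above is simple enough to sketch as a recognizer.
Here's a minimal sketch in Python (my own illustration, not Herman's
software; the word lists are partial examples, not the full 30-word
vocabulary):

```python
# Sketch of Phoenix's linear grammar:
#   [<modifier>] <direct object> <action> [[<modifier>] <indirect object>]
# Vocabulary below is illustrative and incomplete.

MODIFIERS = {"left", "right", "surface", "bottom"}
OBJECTS   = {"ball", "basket", "hoop", "gate", "frisbee", "surfboard"}
ACTIONS1  = {"touch", "toss", "over", "thru", "under"}   # direct object only
ACTIONS2  = {"fetch", "put-in"}                          # direct + indirect object

def is_well_formed(words):
    """Return True if the word list matches the grammar (2 to 5 words)."""
    i = 0
    if i < len(words) and words[i] in MODIFIERS:          # optional modifier
        i += 1
    if i >= len(words) or words[i] not in OBJECTS:        # direct object
        return False
    i += 1
    if i >= len(words) or words[i] not in (ACTIONS1 | ACTIONS2):
        return False
    two_object = words[i] in ACTIONS2                     # action
    i += 1
    if i == len(words):                                   # no indirect object
        return not two_object
    if not two_object:                                    # extra words after a
        return False                                      # one-object action
    if words[i] in MODIFIERS:                             # optional modifier
        i += 1
    return i == len(words) - 1 and words[i] in OBJECTS    # indirect object

print(is_well_formed("left ball put-in surface basket".split()))  # True
print(is_well_formed("ball touch".split()))                       # True
print(is_well_formed("touch ball".split()))                       # False
```

It accepts the 'left ball put-in surface basket' example and rejects
verb-first orders, matching the claim that word order carries the
meaning.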

There are only about 30 'words' in this language.  Because there's no
recursion possible -- no conjunctions or embedded clauses -- the number
of possible sentences is fixed at about 1200, depending upon what
combinations make sense (but see below).  There are several different
types of words:

  objects
    fixed (gate, panel, etc.)

    relocatable (speaker, water-spray, etc.)

    transportable (ball, basket, surfboard, frisbee, hoop, etc.)

  actions
    transitive, taking a direct object only (touch, toss, over, thru,
						under, etc.)
    ditransitive, taking direct & indirect objects (fetch, put-in, etc.)

  modifiers (right, left, surface, bottom)

  agents (Akaeakamai, Phoenix)

  other (yes, no, erase)

Since the 'action' (verb) is in the middle, many sentences are
'semantically reversible'.  The word 'erase' means cancel, or 'no
action'.  Left and right are relative to the location of the dolphin at
the time the command is given.
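
The ~1200 figure can be viewed as a pruned combinatorial count.  Here's
a rough sketch of the raw arithmetic; the class sizes are my own
assumptions for illustration, and the raw syntactic total comes out far
above 1200 -- the semantic restrictions (e.g. you can't 'put-in' a
fixed gate) are presumably what "depending upon what combinations make
sense" prunes away:

```python
# Raw count of strings generated by the grammar
#   [<modifier>] <DO> <action> [[<modifier>] <IO>]
# under assumed (illustrative) class sizes.
n_mod  = 4    # right, left, surface, bottom
n_obj  = 14   # assumed number of object words
n_act1 = 5    # assumed actions taking a direct object only
n_act2 = 2    # assumed actions taking direct & indirect objects

# [<mod>] <DO> <action1>: the optional modifier gives (1 + n_mod) choices
one_object = (1 + n_mod) * n_obj * n_act1

# [<mod>] <DO> <action2> [<mod>] <IO>
two_object = (1 + n_mod) * n_obj * n_act2 * (1 + n_mod) * n_obj

print(one_object, two_object, one_object + two_object)
```

The unpruned total here is over 10,000, which shows how severely the
"makes sense" filter cuts the space down to roughly 1200.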

Akaeakamai was taught a gesture language with an 'inverse' grammar
closer to Japanese word order.  The syntactic structure is:

[[<modifier>]<indirect object>][<modifier>]<direct object><action>

With this structure, there's no way that she can know what has to be
done until the whole sentence is complete.  The gesture language uses
ASL signs, but 'left' and 'right' are different, and give no
directional clues.
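
The contrast between the two word orders can be sketched as a pair of
serializers over the same parsed command.  Python again, my own
illustration; the role names and the dictionary representation are
assumptions:

```python
# One parsed command, rendered in each dolphin's word order.
# Role keys (do_mod, do, action, io_mod, io) are illustrative.

def phoenix_order(s):
    """[<mod>] <DO> <action> [[<mod>] <IO>]  -- the linear grammar."""
    out = []
    if s.get("do_mod"):
        out.append(s["do_mod"])
    out.append(s["do"])
    out.append(s["action"])                 # action arrives mid-sentence
    if s.get("io"):
        if s.get("io_mod"):
            out.append(s["io_mod"])
        out.append(s["io"])
    return " ".join(out)

def akaeakamai_order(s):
    """[[<mod>] <IO>] [<mod>] <DO> <action>  -- the inverse grammar."""
    out = []
    if s.get("io"):
        if s.get("io_mod"):
            out.append(s["io_mod"])
        out.append(s["io"])
    if s.get("do_mod"):
        out.append(s["do_mod"])
    out.append(s["do"])
    out.append(s["action"])                 # action is always the last word
    return " ".join(out)

cmd = {"do_mod": "left", "do": "ball", "action": "put-in",
       "io_mod": "surface", "io": "basket"}
print(phoenix_order(cmd))     # left ball put-in surface basket
print(akaeakamai_order(cmd))  # surface basket left ball put-in
```

In the Phoenix order the action is known as soon as the direct object
has been named; in the Akaeakamai order it is always the final word, so
nothing can be done until the whole sentence is in.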

The dolphins were taught using 2- and 3-word sentences (objects and
verb, but no modifiers), and had to grasp the 4- and 5-word sentences
by induction -- something Lou calls 'structural novelty'.

The dolphins are pretty good at figuring out what to do, even when a
command can't be carried out immediately.  For example, if told
'surfboard over'
(jump over the surfboard), and the surfboard is against the side of the
pool, typically the dolphin will first grasp the surfboard, move it to
the center of the pool, and then jump over.  If told 'ring through
bottom hoop' (put a little ring through the hoop resting on the
bottom), the dolphin will first grab the hoop, take it to the surface,
then grab the ring and push it through the (possibly falling) hoop.

Lou uses 'blind' observers who write down the observed behavior of the
dolphins, not knowing what the actual command was.  The dolphin is
graded 'correct' if it does the proper action immediately.  Overall
success rate is 85% -- the discrepancy mainly being that dolphins have
bad days as well as good ones.

The trainer is blindfolded when giving a command, so there are no eye
cues, nor any feedback possible from the trainer.  During an evaluation
session, sentences are chosen randomly.  This leads to occasional
'lexical novelty':  e.g. the first time the sentence 'water-spray
throw' was given, the experimenters didn't know ahead of time how the
dolphin would throw water from the spray flowing into the pool.  It
did, using a head movement -- just as a kid will 'throw' water flowing
from a faucet.

The dolphins show some 'lexical substitution'.  When given an unknown
word for an object, they usually try the action with some other object
[this might be an artifact of the training. Bob.].

Lou was careful to point out that, while they do use operant
conditioning, it's not the same as training a dolphin to do a
particular series of simple actions with one command.  I.e. the
dolphins do hear a series of tones meaning 'yes Akaeakamai fish', and
are rewarded with a fish when they do the right thing.  But the right thing
is the meaning of the sentence, not just a one-signal/one-action
response (the usual type of response that you see with dolphins and
other cetaceans during a Marineland-type show).

There's usually a "no" paddle in the tank.  If a dolphin gets a command
to use an object not in the tank, it presses the "no" paddle.  That's
different from their behavior when given a syntactically nonsensical
sentence:  they do nothing then.

If the "no" paddle is not there, the dolphin again does nothing.
They've tried doing this, then throwing in a random selection of items
a little while (up to 30 seconds) later.  The dolphins then perform the
command.  Performance drops off noticeably (40-60%) if the time
interval is longer than 30 seconds.  Lou says that gets into 'memory'
rather than 'linguistics', and they're also running into the problem of
the dolphins getting "tired of playing the game". [Herman's earlier
work showed, among other things, that the limit of short-term memory
retention for arbitrary whistle-tone strings (like strings of random
digits amongst humans) was around 100 seconds.]

Here are some of Lou's answers to various questions...

What happens when they make a mistake?  "Well, usually they get angry.
Sometimes they start throwing things around the pool.  Dolphins don't
like to make mistakes." Why not cross-train the animals to learn each
other's language? "Phoenix knows both the tone and gesture languages
(gesture languages are easy to teach dolphins).  Akaeakamai knows, and
expresses the names for things in Phoenix's tone language -- but we
haven't taught her the actions yet."  Will they obey anybody?  "Yes.
Different trainers have different 'dialects' in the gesture language,
and the dolphins are pretty tolerant.  I had my 4-year-old daughter try
it -- the dolphins seemed a little astonished, but they obeyed the
commands.  In fact, you don't have to use your arms to make the gesture
signs.  If you're agile enough, you can make them with your legs." What
about trying recursion?  "Last week, we tried 'conjunctions' by running
together several sentences.  The dolphins simply performed the various
commands in the order given [this is a simple example of 'left
recursion']."  What's next [my question]? "We'll be working on getting
the dolphins to produce sentences, in response to questions.  We've
already put a second, "yes", paddle in the tank, and we'll see how well
they can respond to questions about their tank 'world'."

[Lou Herman has done a great deal of basic research with dolphins, and
has published extensively.  If you're seriously interested in his
research, or cetacean research in general, look up some of his
publications, especially his 1980 book: "Cetacean Behavior".  His
address is: Dr. Louis Herman, Psychology Dept., University of Hawaii,
Honolulu HI 96822.  The work described above was conducted at the
Kewalo Basin Marine Laboratory of the University of Hawaii, and there
is no connection, electronic or otherwise, between Lou and the
machine/organization from which I happen to be posting this message.]

-- 
Bob Cunningham			 ..sdcsvax!noscvax!cunningh
21 17' 35" N  157 49' 38" W        MILNET:  cunningh@nosc-cc