[comp.ai.neural-nets] Pinker and Prince

marchman@amos.ling.ucsd.edu (Virginia Marchman) (04/26/89)

I heard that there was a conversation going on about the Pinker & Prince
article, and thought that I would pass along an abstract from a recent
Tech Report.  Requests for hard copy should be sent to
yvonne@amos.ucsd.edu.  (ask for TR #8902).  [please do not
forward to other listings.  thanks.]

-virginia marchman



	Pattern Association in a Back Propagation Network:
	   Implications for Child Language Acquisition

       Kim Plunkett                      Virginia Marchman
University of Aarhus, Denmark     University of California, San Diego


			Abstract

A 3-layer back propagation network is used to implement a pattern
association task which learns mappings that are analogous to the present
and past tense forms of English verbs, i.e., arbitrary, identity,
vowel change, and suffixation mappings.  The degree of correspondence
between connectionist models of tasks of this type (Rumelhart &
McClelland, 1986; 1987) and children's acquisition of inflectional
morphology has recently been highlighted in discussions of the
general applicability of PDP to the study of human cognition and
language (Pinker & Mehler, 1988).  In this paper, we attempt to
eliminate many of the shortcomings of the R&M work and adopt an
empirical, comparative approach to the analysis of learning (i.e.,
hit rate and error type) in these networks.  In all of our simulations,
the network is given a constant 'diet' of input stems -- that is,
discontinuities are not introduced into the learning set at any point.
Four sets of simulations are described in which input conditions (class
size and token frequency) and the presence/absence of phonological
subregularities are manipulated.  First, baseline simulations chart
the initial computational constraints of the system and reveal complex
"competition effects" when the four verb classes must be learned
simultaneously.  Next, we explore the nature of these competitions
given different type frequencies (class sizes) and token frequencies
(numbers of repetitions).  Several hypotheses about the input to
children, derived from dictionary counts and production corpora, are
tested.  Results suggest that
relative class size determines which "default" transformation is
employed by the network, as well as the frequency of overgeneralization
errors (both "pure" and "blended" overgeneralizations).  A third series
of simulations manipulates token frequency within a constant class size,
searching for the set of token frequencies which results in "adult-like
competence" and "child-like" errors across learning. A final series
investigates the addition of phonological subregularities into the
identity and vowel change classes.  Phonological cues are clearly
exploited by the system, leading to overall improved performance.
However, overgeneralizations, U-shaped learning and competition effects
continue to be observed in similar conditions.  These models establish
that input configuration plays a role in determining the types of
errors produced by the network -- including the conditions under
which "rule-like" behavior and "U-shaped" development will and will
not emerge. The results are discussed with reference to behavioral
data on children's acquisition of the past tense and the validity
of drawing conclusions about the acquisition of language from models
of this sort.
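
[A rough illustration of the kind of architecture described above: the
sketch below is a minimal 3-layer back-propagation network trained on a
toy pattern-association task with identity and suffixation mappings.
All data, layer sizes, and learning parameters are invented for
demonstration and do not reproduce the Plunkett & Marchman simulations.]

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy "stems": random binary feature vectors.  Targets illustrate two
# of the mapping types from the abstract: identity (output = input) and
# suffixation (output = input plus an extra "suffix" bit set to 1).
n_stems, n_feat = 20, 8
stems = rng.integers(0, 2, size=(n_stems, n_feat)).astype(float)
targets = np.hstack([stems, np.zeros((n_stems, 1))])  # identity portion
targets[n_stems // 2:, -1] = 1.0                      # suffix bit, half the set

n_in, n_hid, n_out = n_feat, 10, n_feat + 1
W1 = rng.normal(0, 0.5, (n_in, n_hid))    # input -> hidden weights
W2 = rng.normal(0, 0.5, (n_hid, n_out))   # hidden -> output weights
lr = 0.5

def forward(x):
    h = sigmoid(x @ W1)
    return h, sigmoid(h @ W2)

def epoch():
    """One pass over the constant 'diet' of stems; returns summed error."""
    global W1, W2
    err = 0.0
    for x, t in zip(stems, targets):
        h, y = forward(x)
        err += np.sum((t - y) ** 2)
        # Standard back-propagation for sum-squared error, sigmoid units.
        d_out = (y - t) * y * (1 - y)
        d_hid = (d_out @ W2.T) * h * (1 - h)
        W2 -= lr * np.outer(h, d_out)
        W1 -= lr * np.outer(x, d_hid)
    return err

errors = [epoch() for _ in range(500)]
print(f"summed error: {errors[0]:.2f} -> {errors[-1]:.2f}")
```

[Note that, as in the simulations described, the training set here is
held constant throughout learning; only the weights change.]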