[comp.ai.nlang-know-rep] NL-KR Digest Volume 3 No. 59

nl-kr-request@CS.ROCHESTER.EDU (NL-KR Moderator Brad Miller) (12/08/87)

NL-KR Digest             (12/08/87 01:31:56)            Volume 3 Number 59

Today's Topics:
        Wanted: References on author's intentions

        Miscellaneous

        Genesis of language (was: Why can't my cat talk, and a bunch of others)
        
----------------------------------------------------------------------

Date: Fri, 4 Dec 87 14:36 EST
From: Alan Pope <pope@UDEL.EDU>
Subject: References on author's intentions

I am interested in research involving author's intentions in text
understanding.  The only work I know of that addresses this topic is
Bertram Bruce's chapter "Plans and Social Actions" from _Theoretical
Issues in Reading Comprehension_.

I'd appreciate hearing about any other work done in this area.
Thanks,

Alan Pope                              pope@udel.edu
Computer and Information Sciences
University of Delaware
Newark, Delaware   19716

------------------------------

Date: Wed, 2 Dec 87 11:37 EST
From: WATKINS@rvax.ccit.arizona.edu
Subject: 1) Graphological forms for lexical items; 2) Phonemic transcription

>Rick Wojcik said:  "Lexical entries have phonological forms associated with 
>them.  That isn't controversial.  I claim that they should also have 
>graphological forms that exist independently of phonology--like Chinese 
>logographic signs, but capable of being translated into phonological form 
>if the need arises."

In fact, a lexical entry can exist with _only_ a graphological form 
associated with it.  I have acquired various lexical entries from reading 
without assigning any phonological form at all.  That is, when someone 
shows me the word and asks: "What's this mean?" I have an answer (which may 
or may not be dictionary-accurate), but if they ask, "How do you pronounce 
this?" I have no answer on tap and must develop one on the spot, "sounding 
out" the word for the first time, though I have encountered it, understood 
it, and possibly even used it (in writing) on several previous occasions.

Note:  This is not the same as encountering an unknown word and assigning 
it a "wrong" phonological form, as in /maIz@ld/ for "misled"; I've had that
happen too. For example, for over a year I failed to realize that the
/pEn@s/ I read about in a sex education book had any relation to a /pin@s/
("penis"); likewise I startled my mother by explaining to her that an
/Ep@tom/ was actually an /@pIt@mi/--she had been using both words (rarely)
without ever having cause to realize that the former was simply her
erroneous spelling pronunciation--acquired through reading--of the latter,
acquired through listening ("epitome").  And I was sorry to lose the word
AWry, in spite of the pleasure of discovering the connection to "wry" in
learning to pronounce it aWRY.  (That one I learned by getting it wrong on
a placement test!) 

ps:  I am delighted to learn of the convention of using /@/ for schwa.  Is 
there a convention for on-screen representation of other nonstandard 
characters?  For instance, I learned a backwards c for "aw", an ash 
(digraph) for the vowel of "cat", an eth (edh) for a voiced "th" and a
theta for an unvoiced one, a checked s for "sh", a checked j for the
consonants in "judge", and a checked z for the second consonant in "rouge".

I am aware that the phonemic alphabet I learned is probably not the same as 
one in standard use in linguistics, because mine was developed to describe 
historical factors--for instance, /y/ is reserved for an umlauted vowel 
that disappeared fairly early; an upside-down v serves for stressed schwa.
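
A small lookup table makes such on-screen conventions concrete.  The sketch
below is illustration only (Python; the one-character ASCII substitutes are
one possible scheme, not a standard, and the "checked j" affricate is left
out for simplicity); it maps each substitute onto the IPA symbol it stands
in for:

    # One possible ASCII-to-IPA mapping for the symbols mentioned above.
    # The ASCII spellings here are informal choices, not a standard.
    ASCII_TO_IPA = {
        "@": "\u0259",   # schwa
        "O": "\u0254",   # "backwards c" (open o), the vowel of "law"
        "{": "\u00e6",   # ash, the vowel of "cat"
        "D": "\u00f0",   # eth, voiced "th"
        "T": "\u03b8",   # theta, unvoiced "th"
        "S": "\u0283",   # "checked s" (esh), the "sh" of "shoe"
        "Z": "\u0292",   # "checked z" (ezh), the last consonant of "rouge"
    }

    def to_ipa(form):
        # Transliterate a slash-delimited form such as "/TIn/" character by
        # character, passing through anything not in the table.
        return "".join(ASCII_TO_IPA.get(ch, ch) for ch in form.strip("/"))

    print(to_ipa("/@pIt@mi/"))    # an IPA-ish rendering of "epitome"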

------------------------------

Date: Wed, 2 Dec 87 13:21 EST
From: Mary Holstege <HOLSTEGE@Sushi.Stanford.EDU>
Subject: Miscellaneous


   Oy!  An interesting issue -- I have a lot I want to respond to:

Naturalness of AMSLAN

   AMSLAN (and other sign languages used by deaf people) most certainly
   *are* natural languages.  They have native "speakers", and children who
   acquire them from their parents quite naturally.  Indeed, the first
   officially "defined" sign language was nothing more than the recording
   of a language that was already being used by a community for their
   everyday communication.  

   On the other hand, Koko is not being taught AMSLAN but a simplified 
   variant of signed English.  (They use signed English because they want
   to use both modalities simultaneously and it was felt that it would be
   too hard for Koko to try to follow two distinct languages at once.)

English: free word-order -> fixed word-order

   The primary motivation for this change was a purely functional one, I
   believe.  Old English had a rich case system with lots of inflectional
   morphemes on nouns and adjectives, so it was easy enough to identify the
   role of a noun phrase in a sentence by looking at the endings.  These
   inflections were in unstressed syllables at the ends of words and they
   eroded through the normal process of sound change.  Unstressed vowels start
   to sound all alike in rapid speech and eventually disappear completely,
   especially at the ends of words.  (For example, the "-ly" on adverbs used to
   be "like"; nowadays a lot of people drop the "-ly" as well.  Notice that
   hardly anyone pronounces the "a" in "nowadays" and so on and so on.)

   Once there is no audible distinction between, say, an accusative noun and
   a nominative noun, some other device is needed.  The distinction is still
   important for communicative purposes. It is natural to rely on word order,
   because even so-called free word-order languages have a strong preference
   for a particular word order, variant orders being used to indicate special
   emphasis.  Contrariwise, even so-called fixed word-order languages have
   many variant orders; the catch is that some special marking (extra words,
   special inflections) is required so that listeners can follow what's going
   on.  So in English we have the normal S-V-O order, as in "I made mistakes"
   and the inverted O-V-S order, as in "mistakes were made by me".  In more
   complex sentences we have more choices about word order.  Just as in free
   word-order languages, these variant orders are used to indicate special 
   emphasis.  (Here to suppress culpability.)

   What is interesting is that this process is not unidirectional.  Claims that
   languages with complex morphological systems are "less advanced" or "more
   complicated" are so much hooey.  You pays your money and you takes your 
   choice: complex morphology or complex syntax.  Is the passive form in 
   English so much simpler than having different endings on the nouns?
   Inflectional morphemes develop from independent words. The case endings in
   modern languages that we can trace back come from postpositions (like
   prepositions only following the noun).  As I said before, the adverbial
   marker in English developed from the independent word "like".

   In both modern English and Chinese new inflectional markings are developing.
   In Chinese (I am told by a linguist whose field is Chinese; I am no expert 
   so I may have this garbled) people are increasingly using number and (less
   often) gender markings, particularly in the North.  These markings are just
   little words that usually have a more substantive meaning used in a slightly
   different way. In English, verb particles are being increasingly absorbed 
   into verbs, and some look set to develop eventually into inflectional 
   endings.  Specifically, the verb particle "up" is well on its way to
   becoming a marker for perfective. (That is, completed action.  "I cleaned
   my room" vs "I cleaned up my room" or "John ate the turkey" vs "John ate up
   the turkey")  "up" can be used in this sense with almost every verb.  It is
   only a matter of time before the "up" gets swallowed into the verb
   completely; already "clean up" et al are pronounced as one word rather than
   as two.   Shall we regard this as a retrogression in the English language?
   Don't be absurd.

Language learning

   There seems to be a lot of confusion about what the issue here is.  The 
   question is not "can adults learn a second language" nor even "can adults 
   learn a second language so well that their proficiency is indistinguishable
   from that of a native speaker."  

   The answer to the first is that of course they can: people are clever and
   can learn to do many things that are difficult for them.  People have
   learned to play chess in their head.  People have learned to speak other
   languages fluently.  This is unsurprising.

   The answer to the second is still pretty clear but it is difficult to be
   absolutist.  I do know that many people are overly impressed by the
   linguistic abilities of some people who speak second languages "fluently."
   They may well be fluent, but that does not mean they match the proficiency
   of a native speaker.  Example: my father.  He is a native speaker of Dutch,
   which is pretty close to English to begin with.  Furthermore, the Dutch are 
   taught English in grade school.  He came to this country to get a Masters
   degree and has spoken no Dutch since (except on occasional visits to his
   mother -- she refused to speak English to her son).  That's some thirty-five
   years of total immersion.  To the casual observer he appears to be
   completely proficient and fluent in English.  If you weren't told of his
   background you probably wouldn't notice anything much, or you would put it
   down to normal conversational stumbles.  I know as a fact that my dad has
   been the basis of "successful speakers of English" stories.  But I know
   better.  He pronounces some words ("propaganda") very strangely; he is apt
   to mis-stress long words when he first encounters them; his repertoire of
   sentence structures is somewhat limited (even in written English); he very
   occasionally misapprehends unusual sentence structures.  The more you
   listen for it, the more you can hear.  He has been speaking English on a
   daily basis, fluently, for longer than I have; yet I speak it better than
   he.   So I think there are far fewer "completely proficient" speakers of
   second languages than casual observers think, but I cannot rule out the
   possibility that some exist.

   But it doesn't matter!  The question is not about second-language learning
   but about *first*-language learning.  The key question is: can a person
   who has had no linguistic exposure during the critical period (say, before
   puberty) learn to speak any language with native-speaker proficiency?  I
   have gone over evidence for this already and I think the answer is pretty
   clear-cut.  People learning their first language later in life are less
   equipped than even an adult trying to learn a second language.  Second-
   language learning is relevant only to the degree that children learn 
   second languages more quickly and easily than adults do.  Evidence of this
   is mainly anecdotal, but you may try comparing the linguistic abilities of
   Americans, who are not taught foreign languages until high school or
   college, with those of... well, ANYWHERE, but let's say England, where
   kids are taught foreign languages from third form on.
   
   Finally, a brief word about phonology.  Anyone can be trained to hear a
   phonetic distinction that some language uses distinctively.  English
   speakers can be trained to hear the difference between aspirated and
   unaspirated consonants (the "t" in "top" vs the "t" in "stop") or between
   palatalized and unpalatalized consonants (as used in Russian).  It may be
   somewhat difficult.  Learning to produce the variants reliably is 
   considerably more difficult, however. What is interesting is that young
   infants can hear the distinction without any special training.  Babies
   also start babbling using all sorts of speech sounds that their parents 
   couldn't produce without a great deal of special coaching.  After a while,
   the babies catch on to which sounds aren't part of their language and they
   stop producing those.  Later on still, they stop being able to make
   some distinctions as reliably as they once did.  That is, those distinctions
   that aren't part of their language get forgotten or eroded somehow.  Notice
   that some distinctions cannot be learned: using speech synthesis you can
   vary the onset of voicing in syllables like "pa", "p'a", and "ba".  Try as
   you might, you cannot train people to hear intermediates between these
   linguistic distinctions, even though they can readily be trained to 
   hear distinctions not used in their own language.

                              -- Mary
                                 Holstege@SUSHI.Stanford.EDU

------------------------------

Date: Sat, 5 Dec 87 04:15 EST
From: Jeffrey Goldberg <goldberg@russell.stanford.edu>
Subject: Re: 1) Language change


Watkins@rvax.ccit.arizona.edu writes:
>How language changes
>====================

>I was taught that the near-absence of written English for a
>couple of centuries after the Norman conquest was an important
>factor in the amount of grammatical simplification that took
>place at the time--the free- to fixed-order shift (though such
>simplification was under way, more slowly, even before the
>conquest, in the form of the phonological merging of
>inflections).  The underlying theory seemed to be that, if a
>language exists in written form--and the written form receives
>the respectful attention of the speakers--the effect is to
>retard change, because the older forms preserved in the writing
>receive continual reinforcement.  On the other hand, since
>writing retards but does not arrest change, the written and
>spoken language inevitably diverge.

>Is this theory still current?  If not, what has replaced it?  If
>so, it is one more factor to examine in the
>free-to-fixed-word-order process.

I doubt that many people would have taught what you claim to have
been taught, but it is possible that someone might believe stuff
like that.

You may have been taught that the lack of written English during
the Norman occupation provides some evidence of the extent to which
French replaced English in many functions.  The pervasiveness of
French may have had a strong influence on English.  Many changes
occurred during this period.

Many modern people tend to believe, as you do, that having a
writing system entails having a language standard.  This is just
not true.  Anyone who has looked at Middle and Old English
manuscripts will tell you this.   There were no standard spellings
or punctuation conventions.  It is unlikely that there was
standardization of grammatical constructions.  Standardization of
this type is really quite recent.

>I also learned that grammatical and, to a lesser degree,
>phonological change in language is inevitably towards greater
>simplicity--that the further back one traces any given language,
>the greater the grammatical complexity.  (I always used to
>wonder where the original enormously complex languages were
>supposed to have come from; but, though everyone apparently
>agreed that they in turn must have grown into their complexity
>slowly, my inquiries about the process itself always ran up
>against the objection, "That's all speculative; we do not and
>cannot have real evidence.")  Fixed word order, I understood,
>evolved as a system of marking grammatical values no longer
>inherently obvious in the forms of the individual words.

I'm not sure that I follow everything you are saying here, but the
claim that all language change is toward greater simplicity is
simply untenable.  Languages change from free word order to fixed
word order and back again to free word order.  How could one argue
that one is simpler?

>K Watkins
>WATKINS@ARIZRVAX

Jeff Goldberg         Internet: goldberg@russell.stanford.edu

------------------------------

Date: Sat, 5 Dec 87 15:31 EST
From: rolandi <ece-csc!ncrcae!gollum!rolandi@mcnc.org>
Subject: accents in adult language learners


Among linguists it is accepted as a truism that infants the world over
emit the same set of basic phonemic babblings.  If all infants start out 
with the same set of babblings, presumably a superset of the IPA, what 
happens to them as their users get older?   Why can't adults call up the
constituents of these elemental babblings when acquiring a foreign 
language and thereby do so without retaining an accent?

Because these "constituents" have become the victims of disuse.

When a child learns only one language, the child's culture essentially
ignores all phonemic utterances that are not included in its language.
Although initially included in the infant's behavioral repertoire, phonemes
unused by the verbal community do not mediate reinforcement.  They thereby
become less probable, ultimately falling entirely from the repertoire.
Whether or not this motor loss is accompanied by changes in neurology is
an experimental question, but the CAUSE of the behavioral loss is the fact
that they are unreinforced by the child's verbal community. (see Skinner, 1957)
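
For concreteness, here is a toy numerical sketch of that selection-by-
reinforcement story.  It is purely illustrative (the sound labels, update
rule, and parameters are invented for the example, not taken from Skinner
or from any data): sounds the verbal community reinforces gain strength,
while disuse erodes everything a little each round, so the unreinforced
sounds all but vanish from the repertoire.

    import random

    # Toy model: every babbled sound starts with equal strength; emitting a
    # reinforced sound strengthens it, and every sound decays slightly each
    # round.  All labels and numbers are invented for illustration.
    def babble(strengths, reinforced, rounds=2000, gain=0.05, decay=0.998):
        for _ in range(rounds):
            sounds = list(strengths)
            weights = [strengths[s] for s in sounds]
            emitted = random.choices(sounds, weights=weights)[0]
            if emitted in reinforced:
                strengths[emitted] += gain      # reinforcement
            for s in strengths:
                strengths[s] *= decay           # disuse
        return strengths

    repertoire = {s: 1.0 for s in ["p", "b", "m", "t", "d", "k", "g", "!"]}
    final = babble(repertoire, reinforced={"p", "b", "m", "t", "d", "k", "g"})
    # "!" (standing in for a click the verbal community never uses) ends up
    # with a tiny strength relative to the reinforced sounds.

Whether anything like this happens in a child is, of course, exactly what is
at issue; the sketch only shows that reinforcement plus decay is enough to
carve a universal repertoire down to a language-specific one.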

There may well be some neurological correlate to this change in linguistic
ability.  But before one assumes that some neural process is the CAUSE
(like our handy and unexplained explanation, the "crystallization process"),
I think one should first consider the possibility that the neural process
is instead the EFFECT.  Would anyone seriously maintain that unused muscles
do not atrophy?

Regarding Mark Edwards' contribution to the "why do adults have accents?"
discussion, 

>... what if we taught the adult ...
                ------

> I'm tired of the arguments, it can't be done because it hasn't been
> done in the past. If that were true than there would be a lot less
> Steven Jobs and Bill Gates in the world.

it's refreshing to see someone scrutinize the acquisition method before
jumping to the conclusion that there is some internal mechanism involved.
Unlike the hypothesized internal mechanism, methods of acquisition are
much more available to experimentation.  To say that adults do not
learn languages without accents does not mean that they CANNOT do so.  The
question at this point becomes, "How does one TEACH an adult a second
language so that the learner retains no accent?"  I'd like to pose that question
to any speech pathologists out there.....

w.rolandi
u.s.carolina 
departments of linguistics and psychology
ncr advanced development, columbia
ncrcae!gollum!rolandi

job(ok) :- disclaim(rolandi,Everything).

------------------------------

Date: Wed, 2 Dec 87 21:27 EST
From: Mike Sellers <tektronix!sequent!mntgfx!msellers@ucbvax.Berkeley.EDU>
Subject: Genesis of language (was: Why can't my cat talk, and a bunch of others)

[I've just recently gotten my posting powers back, so this may seem somewhat
late.  However, the discussion doesn't seem to have progressed too far in
terms of answering some of the basic questions involved here, so I thought
I'd go ahead and throw in some neurological data.  I've included salient 
portions of the original article that started this whole thing, along with 
my comments.  
I'd appreciate comments, as I've not seen much in the way of cognition or 
linguistics from a neurological point of view on the net.]

Mike Glantz wrote an article that ended with:
>  Does anyone have any concrete information about human brain physiology
>  which would favor the completely ``physiological'' hypothesis of
>  linguistic capability over the ``sociological/anthropological''
>  explanation, or which would shed any other light on the question? 

  I haven't seen much in the way of concrete information about these 
questions on the net, so I'm posting what I know.  I think many of the people 
interested in problems like this one would do well to become more familiar 
with recent neurological findings; while they often follow what you might 
assume or intuit to be true, the human brain is often stranger and more 
elegant than you would imagine.  
  The rest of the references here are from Mike Glantz's original article.

>  Much discussion about neural networks carries the implication that it
>  is a human brain researchers are hoping, ultimately, to simulate, and
>  that a successful simulation will exhibit human linguistic capability.
>  This is certainly an admirable and worthwhile, if ambitious, goal. But
>  current models don't seem to have any features which would distinguish
>  a human brain from, say, a cat's brain (I realize this is very early
>  days - no criticism intended). This will eventually have to be dealt
>  with. 

  More precisely: current connectionist models are much closer to the brain
of the Aplysia (sea hare, a type of sea slug), or even the planaria's ganglia,
than they are to the human brain, both in terms of absolute neural complexity
and internal symbolic structure.  Both the amount of neural structure (whether 
biological or synthetic in origin) and the synaptic and symbolic organization 
of that structure are important to an understanding of what is happening and 
how it happens.  (I am using the word 'symbolic' here to denote any software-
like components of the neural organization; this level of complexity may 
derive some of its attributes from the underlying physical structure --what 
Pylyshyn calls the 'functional architecture'-- but the specific function of 
the architecture is not derivable by examining the structure itself.)
  Little is known about how and why the human brain organizes (lateralizes)
itself on a neural or nuclear (groups of neurons) level as it does.  This
knowledge is crucial to performing any sort of artificial simulation of human 
linguistic capabilities.  What is known, however, can shed some light on many
of the questions being bandied about in this discussion.

>  One possible explanation for why humans have language and cats don't is
>  that there may be one or more physiological structures unique to the
>  human brain, other than its larger capacity, which make language
>  possible. This is the most obvious explanation that comes to mind, and
>  is perfectly reasonable, although we haven't yet identified which
>  structures these are, or what roles they might play. 

  On a large scale, it is known that two areas of the brain are specific to
linguistic ability.  These are Broca's and Wernicke's areas, which appear 
in the left frontal and temporal cortices of most adult humans.  They do
not appear in other animals.  Some humans develop with these areas in other
places (i.e. about 30% of all left-handers lateralize with these areas in 
the right hemisphere, and a few people seem to have speech control resident 
in both hemispheres), but with only a few pathological exceptions, they do 
appear in all human brains.  It is probable that the amount of cortical mass
does have to do with the ability to develop areas like Broca's and Wernicke's:
cortical real estate is expensive, so having areas with functions like 
linguistics probably depends on having enough 'other' mass to devote to 
everything else the organism needs to be doing.

>  But another possibility is that maybe the larger brain capacity is
>  sufficient, but that language is possible only after certain
>  ``internal'' or ``symbolic'' structures are built on top of the
>  physiological base. This building occurs during infancy and early
>  childhood, and the resulting structures can be considered to be part of
>  the human brain, every bit as real as the physiologically observable
>  features. 

  There are structural differences between Broca's and Wernicke's areas and
the rest of the cerebral cortex, and identifiable connections between these
two areas (called the Arcuate fasciculus, I believe) as well.  Broca's area
is adjacent to the motor cortex, and controls facial expression, phonation,
etc., while Wernicke's area controls comprehension, sentence construction,
etc.  Many tests and case studies have shown how integral these two areas 
are to our creation and comprehension of speech.  On the other hand, these
areas do not appear to be differentiated from the rest of the cortex at birth.
Some areas, such as the visual, sensory, and motor cortices, are already 
well developed and dedicated to their specific function at birth, even though
the brain is still rapidly growing at this point (some estimates put the rate 
of growth at 100,000 new neurons per minute!).  Other areas, such as most 
of the pre-frontal and temporal lobes, appear to be 'blank' at birth.  That 
is, the neural and glial structures are present, but no specific function has 
been assigned to or adopted by that area.  
  What is interesting is what happens beginning just before birth, and for 
several years afterwards: neurons die in droves.  It seems that each neuron
sends out many (thousands) of afferent and efferent fibers to other neurons
(how each knows which and how many to send out initially is still a mystery).
These fibers will eventually become dendrites (for input) and axons (for
output).  What then happens is that those fibers that are used (stimulated)
become thicker and stronger, and in the case of dendrites, put out smaller
hair-like fibers.  Those fibers that are not used die off, severing the
previously made connection.  If enough fibers from a cell are not used, the
cell itself dies.  Probably what is happening is that those fibers that are
used get preferential supplies of mitochondria and Golgi complexes, leaving
others to wither from disrepair; if too few fibers are used, the cell itself
does not maintain enough mitochondria and Golgi complexes to keep it going,
and so it dies.  The upshot of this is a sort of 'survival of the fittest'
among neurons: those that are used the most survive, while others die. 
Since connections between neurons in the brain are most of what matters (if
not all that matters), this has a profound effect on the developing organism
as a whole.  This process of neural death peaks out in humans at around 
5 years old, I believe, and generally ends by the time we are 7-10 years old.
It is unclear how much neural death and/or regeneration takes place after 
this point, but it is clearly over on any large scale by this time.  In some
areas of the brain, 10-20% of the neurons die (as in the visual
system, where the error rate for initial fibers seems to be as low as 5%,
even among billions of possible connections), while in other areas (such as
the pre-frontal cortex), up to 85% of the initial population die off.  It
is probably safe to make a connection between those areas that have high
incidence of neural death and those that are affected by the environment and
other non-biological factors.  The pre-frontal, temporal, and parietal lobes
(where we do much of our "thinking", associating, comprehending, remembering,
speaking, pattern matching, etc.) are all severely affected by the neural
death.
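
As a very rough numerical caricature of that use-it-or-lose-it process (the
cell counts, fiber counts, and probabilities below are invented for
illustration, not measurements), one can check that simply discarding
unstimulated fibers, and then any cell left with too few fibers, gives
survival figures of roughly the flavor quoted above:

    import random

    # Caricature of activity-dependent pruning: each cell starts with many
    # fibers; a fiber survives only if it happens to be used, and a cell
    # survives only if enough of its fibers do.  All numbers are invented.
    def surviving_cells(n_cells=1000, fibers_per_cell=50, p_use=0.2,
                        min_fibers=5):
        alive = 0
        for _ in range(n_cells):
            used = sum(1 for _ in range(fibers_per_cell)
                       if random.random() < p_use)
            if used >= min_fibers:
                alive += 1
        return alive

    print(surviving_cells(p_use=0.14), "of 1000 cells survive (well used)")
    print(surviving_cells(p_use=0.05), "of 1000 cells survive (sparsely used)")
    # With frequent use only a modest fraction of cells die; with sparse use
    # the large majority die off -- loosely echoing the 10-20% and ~85% loss
    # figures mentioned above.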

>  [...]
>  The principal hypothesis, here, is that, given sufficient relative
>  brain capacity, and the appropriate socialization process, any
>  individual of another species (a porpoise, for example) could acquire
>  linguistic ability. 

I don't think so.  What is involved in the development of the brain is more
than just socialization; it also has to do with feedback with sensory and 
motor targets in the body (if a mouse has no whiskers on one side, all 'those'
neurons will connect up with the whiskers on the other side) as well as with
evolutionary trends.  Clearly the environment (and thus socialization) plays
a large role in how the brain develops, as studies with rich/nominal/deprived
environments have shown, but this is not the only factor.  Even brain size
(or more accurately, central nervous system weight to body weight ratio)
is not necessarily a limiting factor.  Porpoises, for example, though they
have a CNS to body ratio similar to that of humans, have little 'blank' space in
their brains that could take up tasks like high-order association or
linguistics.  That they do not possess the organs for speech further compounds
the problem.  
  While animals in 'enriched' developmental environments will end up with 
thicker, denser cortices, better dendritic connections between neurons,
and seemingly more intelligence, they will not spontaneously start using
portions of their brains for previously unknown tasks (at least, not so far
as we know :-) ).  Thus a mouse will not develop a more complex mouse-language
or the ability to perform previously undo-able tasks after being raised in
an enriched environment.  It may perform ordinary mouse tasks better than
many of its peers, but it will not start doing really new things.  (While this
might lead you to believe that *no one* could come up with new cortical
functions, keep in mind that none of the studies done so far takes into account
evolutionary time periods.  This could make a large difference.)

>  [Aside: It is known that the human brain (and that of other mammals, as
>  well) undergoes physiological changes during the period of infancy and
>  early childhood. It is possible that the initial acquisition of
>  linguistic skills can only occur effectively during this period, during
>  which time these physiological changes are significantly ``molded'' by
>  the socialization process, where certain ``symbolic'' structures
>  actually become ``wired in''. If this were the case, then the period
>  during which basic linguistic ability can be acquired would be limited
>  to this ``crystallization'' period, which is possibly much longer in
>  humans than in other mammals. We would then have to amend the
>  hypothesis to read: given sufficient brain capacity and a sufficiently
>  long ``crystallization period'' etc. It then remains (among other
>  things) to determine the exact nature of this ``crystallization'', and
>  incorporate a sufficiently long duration of this in a computer model. 

  I have discussed briefly the period of development/molding that comes 
about in the CNS by the process of massive neural genesis, followed by 
massive neural death.  This is almost certainly responsible for much of the
high learning rate seen in human children.  Once the brain is relatively
stabilized (after age 8 or so), it may be that all subsequent learning is
accomplished with intra-neuron changes and changes in synaptic weights.
It is probable that some neural change occurs in response to learning in
adults, though nothing like what is seen in children.  This is also probably
the source of the 'crystallization period' brought up here, and accounts for
much of what has been discussed since.  
  I would amend the above hypothesis to read as follows:  Linguistic ability
(as an example of complex cognitively-learned behaviors, as opposed to things
like 3D visual perception) can only be brought about given a base containing
enough neural structure with a long period of highly dynamic change and
maturation and enough stimulation of the structure to organize it into 
functional groups (a la neuronal nuclei).  This is a view of linguistic onset
and cognition in general that relies more on the developmental aspects of 
the brain than has been fashionable since the cognitive sciences became any
sort of a reality.  I do not believe that we will ever realize natural
language processing or any other sort of complex cognitive ability in 
artificial systems until we learn more about the development of the human
brain and take this information into account in our models.

Comments would be appreciated.


Mike Sellers
...!tektronix!sequent!mntgfx!msellers
Mentor Graphics Corp., EPAD

------------------------------

Date: Fri, 4 Dec 87 14:25 EST
From: M.BRILLIANT <ihnp4!homxb!houdi!marty1@ucbvax.Berkeley.EDU>
Subject: Re: Genesis of language (was: Why can't my cat talk, and a bunch of others)

In article <1987Dec2.182753.622@mntgfx.mentor.com>,
msellers@mntgfx.mentor.com (Mike Sellers) writes:
> 
> ...  so I thought I'd go ahead and throw in some neurological data....
> I'd appreciate comments, as I've not seen much in the way of cognition or 
> linguistics from a neurological point of view on the net.] ....

It's hard to begin to summarize a 200-line article that combines functional
and physiological observations and hypotheses in such a global way, but the
bottom line seems to be:

> Linguistic ability ... can only be brought about given a base containing
> enough neural structure with a long period of highly dynamic change and
> maturation and enough stimulation of the structure to organize it into 
> function groups (ala neuronal nuclei).

I don't have any facts to add to that.  I have generally been relying
on another person's experience in second-language teaching and in the
training of second language teachers, and on my own casual reading in
Science and Scientific American.  However, the above synthesis (and the
full discussion of which it is a summary) looks so good to me that I
don't want to let it drop without a ripple.  It makes a lot of sense.

But the later statement,

> ....  I do not believe that we will ever realize natural
> language processing or any other sort of complex cognitive ability in 
> artificial systems until we learn more about the development of the human
> brain and take this information into account in our models.

has to be read cautiously.  It means we need a good understanding of
the essential processes required to process language.  As has been
pointed out by others, it doesn't mean we should imitate structures
and techniques that are just one way of executing those processes.

M. B. Brilliant					Marty
AT&T-BL HO 3D-520	(201)-949-1858
Holmdel, NJ 07733	ihnp4!houdi!marty1

------------------------------

Date: Fri, 4 Dec 87 21:52 EST
From: William Calvin <ptsfa!well!wcalvin@ames.arpa>
Subject: Re: Genesis of language (was: Why can't my cat talk, and a bunch of others)


Apropos cell death in brains, the old saw about losing 10,000 neurons every
day is now being challenged by the people that work on cerebral cortex; they
seem to think that there is little neuron loss there during most of postnatal
life.  Some subcortical areas like substantia nigra do lose 50% of cells by
age 70, while adjacent regions in midbrain may lose less than 2%.
  But there is a LOT of synapse death -- or, as I like to phrase it, withdrawal
of axon collaterals, breaking synapses.  Synaptic density in neocortex
peaks at 8 months after birth (in humans; 2 months in monkey) -- and then
drops by 30-50% during childhood.  After puberty, the data gets too noisy
to interpret.  So there is a lot of opportunity for Darwinian editing of
randomly-made synaptic connections, achieving information storage by carving
(rather like photographic development removes unexposed silver grains).
	I review a variety of Darwinian selection stories in my piece
in the 5 November 1987 NATURE 330:33-34, entitled "The brain as a Darwin
Machine."
		William H. Calvin
		University of Washington NJ-15, Seattle WA 98195
		  206/328-1192  wcalvin@well.uucp

------------------------------

Date: Sat, 5 Dec 87 10:14 EST
From: rolandi <ece-csc!ncrcae!gollum!rolandi@mcnc.org>
Subject: talking cats or something

In article <1431@houdi.UUCP> you write:
>  (in reference to Sellers' well-written summary of neurological variables
>   involved in language......)
>
>has to be read cautiously.  It means we need a good understanding of
>the essential processes required to process language.  As has been
>pointed out by others, it doesn't mean we should imitate structures
>and techniques that are just one way of executing those processes.
>
C'mon Marty!  Do you mean to imply that you know of other "structures and
techniques" which might serve as models in natural language processing?
Do you mean to imply that "those processes" are comprehensively understood by 
anyone or anything anywhere?  Sellers is right in suggesting that we will
not have automated natural language processors until we know a great deal
more about what we are trying to automate.  That knowledge will come from
studying the structure and FUNCTION of the only natural language processor
thus far recognized.

If you know of any non-human natural language processors that possess the
unrestricted conversational abilities of the average human speaker, I would
like to hear of them.  In fact, I would like to speak with them.  Could we
talk about dog training?  Art history?  How about the philosophy of science?


w.rolandi
job(ok) :- disclaim(rolandi,everything).
ncrcae!gollum!rolandi

------------------------------

End of NL-KR Digest
*******************