[net.ai] Language Evolution

rob@ptsfa.UUCP (Rob Bernardo) (10/21/84)

> From:  Rick Briggs <briggs@RIACS.ARPA>
> 
> 
>         Why do languages move away from case?  Why did Sastric Sanskrit
> die?  I think the answer is basically entropy.  The history of
> language development points to a pattern in which linguists write
> grammars and try to enforce the rules(organization), and the tendency
> of the masses is to sacrifice elaborate case structures etc. for ease
> of communication.

You may want to read Otto Jespersen, "Language: Its Nature, Development,
and Origin", chap 19, "The origin of grammatical elements".
He has some very lucid discussions of how language changes. He argues quite
well that yes, of course, language changes towards shorter and shorter forms.

However, he argues against the assumption that languages start out as having
so-called synthetic syntax (i.e. use of word roots with prefixes, infixes,
and suffixes, e.g. case endings) and evolve towards a so-called analytic syntax
(i.e. use of separator words, etc., e.g. English and Chinese).
There are numerous examples, even in English, where a full word root has evolved
into a prefix or suffix, showing the reverse trend exists as well.
For example, the suffix 'ly' comes from a noun meaning 'body, appearance, form'.
The suffix 'ful' obviously comes from the adjective 'full'. So now we have
a so-called synthetic form 'truthfully' sort-of meaning 'having the form of
being full of truth'.

Another great example is the future tense in modern
Romance languages (this is my example, not Jespersen's). Due to phonological
changes in Latin, the traditional future tense got confused with other tenses
and dropped out of use. For example, in classical Latin, "I will love" is
"amabo". This so-called synthetic form (a verb root plus suffixes ama-b-o)
was supplanted by analytic forms:
	Eo amare (lit. I am going to love)
	Habeo amare (lit. I have to love)
	Debeo amare (lit. I ought to love)

The form using 'habere' (to have) with the infinitive is the one that stuck,
and as the forms of habere got shortened and ADDED TO THE END of the infinitive,
a new set of verb endings resulted:

	Classical Common
	habeo ==> ajo
	habes ==> as
	habet ==> at
	habemus ==> emos
	habetis ==> ete, etis
	habent ==> ant

Hence we get the following future tense forms in these languages:
	Spanish		French
	amare		aimerai
	amaras		aimeras
	amara		aimera
	amaremos	aimerons
	amareis		aimerez
	amaran		aimeront


Similarly, the conditional tense of modern Romance languages arose from the
PAST tense of 'to have' (in shortened form) added onto the end of the infinitive.

>         Why do languages move away from case?  Why did Sastric Sanskrit
> die?  I think the answer is basically entropy.  The history of
> language development points to a pattern in which linguists write
> grammars and try to enforce the rules(organization), and the tendency
> of the masses is to sacrifice elaborate case structures etc. for ease
> of communication.

So please beware of jumping to obvious conclusions and applying some
quasi-Marxist political theory to language change.

>         Current Linguistics has begun to actually aid this entropy by
> paying special attention to slang and casual usage(descriptive vs.
> prescriptive).  Without some negentropy from the linguists, I fear
> that English will degenerate further.

I seriously doubt that what linguists pay attention to very much alters
the course of evolution of a language. It is not obvious that language
change, even from synthetic forms to analytic forms, is "degeneration", since
languages change to fit the communication needs of their speakers.
-- 


Rob Bernardo, Pacific Bell, San Francisco, California
{ihnp4,ucbvax,cbosgd,decwrl,amd70,fortune,zehntel}!dual!ptsfa!pbauae!rob

steiny@scc.UUCP (Don Steiny) (10/25/84)

**
	Rick Briggs seems to feel that English is degenerating.

>         Current Linguistics has begun to actually aid this entropy by
> paying special attention to slang and casual usage(descriptive vs.
> prescriptive).  Without some negentropy from the linguists, I fear
> that English will degenerate further.

	Most linguists would not say that languages
degenerate, but that they change or even evolve.   English had
case markings on its nouns 1000 years ago, but lost them during
the couple of hundred years following the Norman conquest of 1066,
up until Chaucer.

	Chinese has no case markings either.  It also has no tense
(we have two), no number (singular or plural), and no gender. 
Chinese clearly did not degenerate from Sanskrit.  I believe
there is written Chinese that is as old as any Sanskrit.  

	Case marking indicates the relationship between the nouns
and the verbs.  In some languages, for instance Latin, Sanskrit,
and Russian, this relationship is indicated by suffixes to the
nouns.  In English we indicate these same relationships
by using word order and prepositions.   The English system
of prepositions is rich.   The prepositions serve a similar
function to the endings that indicate indirect objects in languages
that have case-marked endings.  Since prepositions are words
and not endings, there can be more distinctions.   It is easily
argued that since there are more ways of indicating specific
relationships in English, English is more precise than
languages that use suffixes.

	In France, the French Academy tries to preserve the purity
of French.  They are down on loan words.  They have not managed
to hold back the changes in the French language, and they have
to revise their standard periodically.

	Hitler tried to purify German.  

Unambiguous Languages

	The whole idea of languages that are unambiguous was
thoroughly explored by the logical positivists, notably Carnap.
The positivists explored a procedure pioneered by the
early Wittgenstein (and later abandoned and belittled by
Wittgenstein).   They believed that philosophical problems
could be solved by determining the reference of propositions
and determining the truth value of those propositions (as determined
by sensory experience).   Propositions about such things
as "good" were "senseless" in this system because "good"
does not refer to anything in the world we can verify with our
senses.

	This approach is fine for something like "good", which
we can easily do without.   It runs into problems with words
like "chair."    When I use the word "chair" I may not have
any specific chair in mind.  They solved this problem with
reference to "concepts", a fuzzy solution at best.  This idea
was trounced by Wittgenstein (in the Blue and Brown Books and
Philosophical Investigations).  

	Since ambiguity can be phonological, syntactic, semantic,
or pragmatic, to name a few, the term "ambiguity" is itself very ambiguous.
Were we to select a formal language (say Sastric Sanskrit or
a language developed by Carnap or Russell), we would find (courtesy of
Kurt Godel) that that language was incomplete.

	Ambiguity is a feature, not a bug.
-- 
scc!steiny
Don Steiny - Personetics @ (408) 425-0382
109 Torrey Pine Terr.
Santa Cruz, Calif. 95060
ihnp4!pesnta  -\
fortune!idsvax -> scc!steiny
ucbvax!twg    -/

ir44@sdcc6.UUCP (Theodore Schwartz) (10/31/84)

There are some ideas around on why languages become simplified over time
as well as why they arrived at some initial point of complexity. It has
been suggested that simplification takes place where there is contact
between speakers of different languages who must communicate.
Pidginization results. In Pidgin languages, and there have been many,
such as current Melanesian pidgin English or current Indonesian, a 
language develops, sometimes to facilitate trade, sometimes where there
is domination by one group over the other, or where one population is
conquered or enslaved (early American slave English was a Pidgin
language). Often part of the vocabulary of the language of the dominant
group is combined with a simplified syntax (in the case of Melanesian
pidgin, derived from Melanesian). Many grammatical distinctions are 
dropped, sometimes even number and gender, irregular verbs are
regularized, as in "I is, you is, he is, we is, etc." Sydney Ray, early
this century, argued that Melanesian languages were already pidginized
and locally developing some new complexities, when they encountered 
European traders and colonizers. Pidgin English grew up between them,
with contributions from both sides and without much conscious planning.
Pidgin languages spread wherever Europeans went but with both some
carry-over and much local influence in each area, so that Melanesian
pidgin includes some features from the China-coast trade pidgin and
some words from Portuguese African pidgins (like "pikinini" for child,
from "pequeña," "little"). Similarly, English is a pidginized language
having lost considerable complexity in comparison to other Germanic
languages.

The next step is to argue that simplification will occur, less rapidly
and dramatically, even in response to internal borrowing and exposure
to different dialects within a language community. Especially where
speech and verbal memory were the principal media for communication and
storage, differences would develop as a language community became 
larger and more spread out as well as more internally differentiated
by political and other cleavages. Communication across such gradients
would also lead to the sort of simplification that would facilitate
learning and intercommunication. Conversely, it has been suggested
that the greatest phonological and grammatical complexity develops
in relatively small, relatively isolated language communities, in 
part because internal gradients do not develop. Ancient Greek, referred
to by my predecessors in this discussion, may have reached maximum
complexity during a period when the population was small and relatively
isolated and progressively lost this as that population spread,
internally differentiated (within the language community) and entered
into communication with many groups who learned and simplified Greek.
I don't know this-- I'm merely fitting it to the above argument. It
could be that this is all we need and perhaps a historian could test
it, that complexity either develops or is conserved in relatively
small, isolated language groups. Why is this degree of complexity
needed in the first place? (A Melanesian language that I know, for
example, not only has singular, dual, and plural, but also trial
for small sets of persons or objects, not necessarily three, and
distinguishes inclusion or exclusion of the person addressed in the
first person pronouns). Why the complexity? Who needs it? Obviously
such complexity increases the redundancy in the communication of
ideas. Each distinction also implies constraints on the selection
or identification of referential objects or of other terms. One 
suggestion for which I can't argue very far, is that for people 
depending entirely on oral-verbal communication and memory, the 
additional redundancy may be useful. Much of the grammatical
complexity and referential specificity may serve this end (e.g., no
general set of numbers except as inflected by numeral classifiers
depending on the many classes of objects counted, such as long, thin
objects, animate objects, etc.). Such specificity has proformal functions,
that is, the affix tells you something, narrows the range, of the
object referred to. It is likely also that small, isolated language
communities provide the container within which innovations can 
accumulate. There must be on-going complexifying processes as 
well as entropy or we would have nothing but pidgins around. Pidgins,
by the way, as you would expect from their minimal redundancy, require
extensive circumlocution to achieve the specificity of reference
(not only to objects but to more complex ideas) that the more complex
languages condense into words.