jack@rlgvax.UUCP (07/21/83)
Yes, you could just start with English vocabulary and impose a regularized grammar (such as Loglan's). However, the language that procedure would yield would have at least this drawback relative to Loglan: the new language would be syntactically ambiguous, as English is. That might not be a serious problem for human listeners, but it's terrible for computers. And as long as we're contemplating designing artificial languages, why not cure syntactic ambiguity altogether? Doing so would even benefit human conversants in some situations.

One source of ambiguity in English is *form masking*, which is exemplified by the phrase "form masking" itself. The listener cannot distinguish among "form masking", "form asking", and "for masking". Loglan's solution is to impose a restricted set of vowel and consonant patterns on its words, together with certain stress and pause rules, so that a stream of speech breaks into words in only one way. Given the consonant and vowel restrictions, one can rarely form a word that is exactly the same as a natural-language word. It then becomes convenient to have more than one source language on which to draw for the vocabulary base; typically, words are formed with phonemes from more than one of the eight base languages.
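To make the form-masking point concrete, here is a toy sketch. The word lists and the fixed CVCV word shape below are invented purely for illustration, and this is not Loglan's actual morphology (the real rules also involve stress and pause); it only shows why restricting word shapes removes segmentation ambiguity.

# Toy segmenter: find every way to cut a spoken stream into lexicon words.
# Illustration only; the lexicons and the CVCV pattern are made up.

def segmentations(stream, lexicon):
    """Return all ways to split `stream` into a sequence of words from `lexicon`."""
    if not stream:
        return [[]]
    results = []
    for word in lexicon:
        if stream.startswith(word):
            for rest in segmentations(stream[len(word):], lexicon):
                results.append([word] + rest)
    return results

# English-shaped words have arbitrary shapes, so the stream is ambiguous.
english = ["form", "for", "masking", "asking"]
print(segmentations("formasking", english))
# [['form', 'asking'], ['for', 'masking']]  -- two readings

# An invented lexicon in which every word has the same CVCV shape; a stream
# of such words can be cut into words in exactly one way.
regular = ["fomu", "maki", "rofa", "saki"]
print(segmentations("fomumaki", regular))
# [['fomu', 'maki']]  -- one reading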
grw@fortune.UUCP (07/22/83)
Loglan sounds like a difficult language in which to make a pun. -Glenn