[net.ai] Sastric Sanskrit

alan@allegra.UUCP (Alan S. Driscoll) (10/16/84)

I don't understand a number of points that Rick Briggs has made:

> I did not mean to imply that lack of word order is a sufficient
> condition for unambiguity, only that it is an indication.

Why is lack of word order an indication of unambiguity?  Unambiguity
means one meaning per utterance, right?  That says nothing about how
meaning is conveyed.

>        My comments about English stem from its lack of case.

Why is it better (less ambiguous, clearer) to communicate meaning by
case inflection than by word order?  Can you back this assertion up?

>        "There is an activity(vyaapaara:) , subsisting in the pot,
>        with agency residing in one substratum not different from
>        Caitra, which produces the softening which subsists in rice."

A bit verbose, isn't it?  You could starve before explaining to your
waitress, precisely and unambiguously, what you wanted...

> I also disagree with "...structural ambiguity is not particularly
> bad nor incompatible with 'logical' expression."  Certainly ambiguity
> is a major impediment to designing an intelligent natural language
> processor.

What is the connection between the quoted statement and your reply? In
my opinion, an "intelligent natural language processor" would have to
deal with ambiguity "intelligently".  Ambiguity is there in language,
whether computational linguists like it or not, and I would argue that,
rather than being gratuitous, AMBIGUITY CARRIES MEANING in many cases.

-- 

	Alan S. Driscoll
	AT&T Bell Laboratories

elman@sdamos.UUCP (Jeff Elman) (10/23/84)

Rick,

Thank you for taking the time to respond to the comments on your
original article.

I think this discussion reveals some very basic differences in the
assumptions one can make about how to approach the goal of designing
an intelligent natural language processor.  I'd like to address those
basic issues directly.  I think they're far more interesting than the
question of whether or not Sastric Sanskrit contained ambiguity.

At one point you say

    "Certainly ambiguity is a major impediment to designing
    an intelligent natural language processor.  It would be very desirable
    to work with a language that allows natural flexibility without
    ambiguity."

Whether or not ambiguity poses an obstacle to building a successful
natural language processor depends upon what your processor looks like.
Don't assume that all architectures have the same problems.

That is, I would agree wholeheartedly with you that language
understanding systems patterned after traditional machine-based parsers
find ambiguity to be a serious problem.  Such systems also have a lot
of difficulty with another, related problem: the enormous variability
in the acoustic waveforms that represent given phonemes, syllables, or
words.

I see both problems -- syntactic ambiguity and acoustic variability --
as related, because both involve cases where the mapping from surface
form to meaning is complex and where one has to take other factors
into account.

I think it is extremely important to point out that in most cases, what
one might label as "ambiguous" utterances are -- in their context -- really
not at all ambiguous.  Similarly, the acoustic variability displayed
by (say) a bilabial stop in different phonetic environments does not prevent
listeners from recognizing that they heard a bilabial.   Human listeners
do very well at integrating contextual information into the language
understanding process.  (Of course, sometimes we do misunderstand each other.
But human performance is so much better than machine based systems that
it's beside the point.)
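Elman's point that "ambiguous" utterances are usually not ambiguous in
context can be illustrated with a toy sketch (modern Python, written
long after this exchange; the word, its senses, and the associated word
lists are all invented for illustration, not taken from the posting):

```python
# Toy context-based disambiguation: pick the sense of an ambiguous
# word whose associated vocabulary overlaps most with the context.
# The lexicon below is invented purely for illustration.

SENSES = {
    "bank": {
        "financial-institution": {"money", "deposit", "loan", "teller"},
        "river-edge": {"water", "fishing", "shore", "mud"},
    },
}

def disambiguate(word, context_words):
    """Return the sense whose associated words best match the context."""
    context = set(context_words)
    scores = {
        sense: len(assoc & context)           # count shared words
        for sense, assoc in SENSES[word].items()
    }
    return max(scores, key=scores.get)

print(disambiguate("bank", ["money", "deposit", "slip"]))
# -> financial-institution  (two context words support that sense)
```

In isolation "bank" is ambiguous; given even this crude surrounding
context, one reading clearly dominates -- which is the sense in which
most "ambiguous" utterances are, in practice, not ambiguous at all.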

My conclusion about how to deal with ambiguity or variability is thus
different from yours.  You say

    "It would be very desirable to work with a language that 
    allows natural flexibility without ambiguity."


I say the alternative is to leave the language alone and work with a
language *processor* that is able to take advantage of contextual
constraints and has the kind of computational power needed to integrate
information from large numbers of sources.  Serial von Neumann machines
do not have this kind of power.  If you use them, then of course you
will be forced into processing only languages with a highly restricted
syntax and a minimum of ambiguity.  There are many occasions where this
kind of limitation is satisfactory, and that's fine.
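The idea of integrating information from many sources at once can be
sketched as a simple relaxation over candidate readings (a toy Python
illustration, loosely in the spirit of the interactive models Elman
alludes to; the readings, knowledge sources, and scores are all
invented, and a real interactive-activation network would also include
inhibition between competitors):

```python
# Toy parallel constraint integration: several independent knowledge
# sources each score two candidate readings of an ambiguous sentence,
# and activations relax toward the pooled evidence.  All numbers here
# are invented for illustration.

candidates = ["reading-A", "reading-B"]

constraint_sources = {
    "syntax":    {"reading-A": 0.6, "reading-B": 0.4},
    "semantics": {"reading-A": 0.2, "reading-B": 0.8},
    "discourse": {"reading-A": 0.1, "reading-B": 0.9},
}

def settle(cands, sources, steps=20, rate=0.5):
    """Move each candidate's activation toward its average support."""
    act = {c: 0.5 for c in cands}                  # start neutral
    for _ in range(steps):
        for c in cands:
            evidence = sum(s[c] for s in sources.values()) / len(sources)
            act[c] += rate * (evidence - act[c])   # relax toward evidence
    return max(act, key=act.get)

print(settle(candidates, constraint_sources))
# -> reading-B  (syntax slightly favors A, but semantics and
#    discourse together outweigh it)
```

No single source decides the outcome; the reading that wins is the one
best supported by all the constraints jointly, which is exactly the
kind of integration a serial, syntax-only parser forgoes.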

But I think it's more challenging to accept the ambiguity of natural
language as a given, and then to figure out how it is that people
(still the only really successful speech understanders around)
resolve that ambiguity.  My strong feeling is that this leads you to
investigate the sorts of highly interactive, parallel architectures
that are being studied here at UC San Diego, at CMU, at Brown, and at
other places.
 
Jeff Elman  
Phonetics Lab, Dept. of Linguistics, C-008
Univ. of Calif., San Diego La Jolla, CA 92093
(619) 452-2536,  (619) 452-3600

UUCP:      ...ucbvax!sdcsvax!sdamos!elman
ARPAnet:   elman@nprdc.ARPA