Robert.Frederking%CMU-CS-CAD@sri-unix.UUCP (09/27/83)
Several points in the last message in this exchange seemed worthy of comment. I think my basic sympathies lie with STLH, although he overstates his case a bit. While language is indeed a "fuzzy thing", there are different shades of correctness: some sentences are completely right, some contain one obvious *error*, which the hearer notices and corrects, and others are just a mess, leaving the hearer to guess at the right answer. This is similar in some ways to error-correcting codes, where after enough errors you can no longer be sure which interpretation is correct.

This doesn't say much about whether the underlying ideal is best expressed by a grammar. I don't think it is, for NL, but the reason has more to do with the fact that the categories people use in language seem to include semantics in a rather pervasive way, so that drawing a major distinction between grammatical knowledge (language-specific, arbitrary) and other knowledge (semantics) may not be the best approach. I could go on at length about this (in fact I'm currently working on a Tech Report discussing the idea), but I won't, unless pressed.

As for ignoring human cognition, some AI people do ignore it, but others (especially here at C-MU) take it very seriously. This seems to be a major division in the field -- between those who think the best search path is to go for whatever the machine seems best suited for, and those who want to use the human set-up as a guide. It seems to me that the best solution is to let both groups do their thing -- eventually we'll find out which path (or maybe both) was right.

I read your description of your system with interest -- I am currently working on a semantic chart parser that sounds fairly similar to your brief description, except that it is written in OPS5. Thus I was surprised at the statement that OPS5 has "no capacity for the parallelism" needed.
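For readers who haven't run into one, the flavor of a chart parser with semantic attachment can be sketched in a few lines. This is a toy in a modern notation (Python), with an invented three-word grammar and made-up semantic labels -- it is *not* my OPS5 program, just an illustration of the general idea of pairing each grammar rule with a function that builds a meaning from the daughters' meanings:

```python
# Toy bottom-up (CKY-style) chart parser with semantic attachment.
# Grammar, lexicon, and semantic labels are invented for illustration.

from collections import defaultdict

# Each binary rule maps a pair of daughter categories to a parent
# category plus a function combining the daughters' semantic values.
RULES = {
    ("NP", "VP"): ("S",  lambda np, vp: vp(np)),
    ("V",  "NP"): ("VP", lambda v, np: lambda subj: (v, subj, np)),
}

LEXICON = {
    "john": ("NP", "JOHN"),
    "cake": ("NP", "CAKE"),
    "eats": ("V",  "EAT"),
}

def parse(words):
    n = len(words)
    # chart[(i, j)] holds (category, semantics) edges spanning words[i:j]
    chart = defaultdict(list)
    for i, w in enumerate(words):
        chart[(i, i + 1)].append(LEXICON[w])
    # Build longer edges from pairs of adjacent shorter ones.
    for span in range(2, n + 1):
        for i in range(0, n - span + 1):
            j = i + span
            for k in range(i + 1, j):
                for lcat, lsem in chart[(i, k)]:
                    for rcat, rsem in chart[(k, j)]:
                        if (lcat, rcat) in RULES:
                            parent, combine = RULES[(lcat, rcat)]
                            chart[(i, j)].append((parent, combine(lsem, rsem)))
    # Return the semantics of every S edge covering the whole input.
    return [sem for cat, sem in chart[(0, n)] if cat == "S"]

print(parse("john eats cake".split()))   # -> [('EAT', 'JOHN', 'CAKE')]
```

The point of the chart is exactly the "parallelism" at issue: every edge over every span is kept, so competing analyses coexist until the evidence (here purely syntactic, in a real system semantic as well) settles things.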
OPS5 users suffer from the fact that there are some fairly non-obvious but simple ways to build powerful data structures in it, and these have not been documented. Fortunately, a production-system primer is currently being written by a group headed by Elaine Kant. Anyway, I have an as-yet-unaccepted paper describing my OPS5 parser available, if anyone is interested.

As for scientific "camps" in AI, part of the reason seems to be that AI is a very new science, and often none of the warring factions has proved its point. The same thing happens in other sciences when a new theory comes out, until it is proven or disproven. In AI, *all* the theories are unproven, and everyone gets quite excited. We could probably use a little more of the "both schools of thought are probably partially correct" way of thinking, but AI is not alone in this. We just don't have a solid base of proven theory to anchor us (yet).

In regard to the call for a theory that explains all aspects of language behavior, one could answer "any Turing-equivalent computer". The real question is: how, *specifically*, do you get it to work? Any claim like "my parser can easily be extended to do X" is more or less moot unless you've actually done it. My OPS5 parser is embedded in a Turing-equivalent production-system language, so I can guarantee that if any computer can do language learning, so can my program. The question is, how? The way linguists have often wanted to answer "how" is to define less-than-Turing-equivalent grammars that can do the job, which I suspect is futile once you want to include semantics. In any event, unimplemented extensions of current programs are probably always much harder than they appear to be.

(As an aside about sentences as fundamental structures, there is a two-pronged answer: (1) Sentences exist in all human languages.
They appear to be the basic "frame" [I can hear nerves jarring all over the place] or unit for human communication of packets of information. (2) Some folks have actually tried to define grammars for dialogue structures. I'll withhold comment.)

In short, I think that warring factions aren't so bad, as long as they all admit that no one has proven anything yet (which is definitely not always the case); that semantic chart parsing is the way to go for NL; that theories explaining all of cognitive science will be a long time in coming; and that no one should accept a claim about AI that hasn't been implemented.