[comp.ai.nlang-know-rep] NL-KR Digest Volume 5 No. 28

nl-kr-request@CS.ROCHESTER.EDU (NL-KR Moderator Brad Miller) (11/12/88)

NL-KR Digest             (11/11/88 20:22:33)            Volume 5 Number 28

Today's Topics:
        best grammatical theories
        asking "how do you know" type questions
        quirky case
        Subjectivity, Mud-slinging, Behaviorism
        Re: Syntactical *definition* of English
        was predictive knowledge...
        
Submissions: NL-KR@CS.ROCHESTER.EDU 
Requests, policy: NL-KR-REQUEST@CS.ROCHESTER.EDU
----------------------------------------------------------------------

Date: Fri, 11 Nov 88 16:11 EST
From: Avery Andrews <munnari!fac.anu.oz.au!andaling@uunet.UU.NET>
Subject: best grammatical theories

   > How can you tell if one theory of syntax is better than another?

   >Walter Rolandi
   >rolandi@ncrcae.Columbia.NCR.COM
>NCR Advanced Systems Development, Columbia, SC

Well, to begin with, you have to define what the NL syntax (which I will
henceforth call `grammar', to distinguish it from various other things that
people sometimes call `syntax') is actually doing.  I'd suggest that a grammar
is a scheme whereby a classification of the vocabulary of a language into a
finite collection of (in general overlapping) categories, plus some additional
principles, is used to indicate how the meanings of the words (or formatives,
or morphemes) are to be put together.  In a PSG-based account of English, for
example, we would say that there were lexical categories, or `parts of speech',
such as Det, Adj, N and V, and phrase-structure rules such as

   S  ->  NP  V  (NP)  (NP)
   NP ->  (Det) Adj* N
   (grossly oversimplified, of course)

We'd  also  say something to the effect that NP's refer to entities, that their
components indicate the properties of the entity referred to, and that the V in
an S indicates the relation between the entities referred to.  How this is said
depends on what approach to semantics we're taking.

So the PS rules and the semantic principles tell us how to sort out the
attributes among the entities in a sentence like

   The naive herpetologist handed the beautiful princess a worried-looking
   crocodile
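
To make this concrete, here is a minimal recognizer for the toy PSG in
Python (a sketch only; the lexicon and function names are invented for the
illustration, and the rules are the grossly oversimplified ones above):

   # Toy recognizer for:  S -> NP V (NP) (NP),  NP -> (Det) Adj* N
   LEXICON = {
       "the": "Det", "a": "Det",
       "naive": "Adj", "beautiful": "Adj", "worried-looking": "Adj",
       "herpetologist": "N", "princess": "N", "crocodile": "N",
       "handed": "V",
   }

   def np(words, i):
       """Positions reachable by NP -> (Det) Adj* N starting at i."""
       if i < len(words) and LEXICON.get(words[i]) == "Det":
           i += 1
       while i < len(words) and LEXICON.get(words[i]) == "Adj":
           i += 1
       if i < len(words) and LEXICON.get(words[i]) == "N":
           return {i + 1}
       return set()

   def s(words, i):
       """Positions reachable by S -> NP V (NP) (NP) starting at i."""
       ends = set()
       for j in np(words, i):                    # subject NP
           if j < len(words) and LEXICON.get(words[j]) == "V":
               ends.add(j + 1)                   # no object NPs
               for k in np(words, j + 1):        # one object NP
                   ends.add(k)
                   ends |= np(words, k)          # two object NPs
       return ends

   sentence = ("the naive herpetologist handed "
               "the beautiful princess a worried-looking crocodile").split()
   print(len(sentence) in s(sentence, 0))   # True: the string is generated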

Other   languages   employ  different  sorts  of  principles,  often  involving
grammatical feature-based methods such as case-marking and agreement, and other
theories  describe  the facts in rather different ways--categorial grammar, for
example, builds virtually all of the  language-particular  aspects  of  grammar
directly  into  the  system  of  categories,  with  the  rules  being universal
principles (function application, composition, and type-raising, for  example).
But  these  variations  are  always  based on classifying the vocabulary into a
finite set of classes (including those defined by features).

Generative grammars frame these principles in such a way that they interact  so
as  to  assign  each  string  of  words  a  possibly empty set of grammatically
legitimate ways of composing the meanings of its words.  Ungrammatical strings
have  none,  ambiguous  ones  more than one.  (By speaking of ways of composing
word-meanings, rather than of meanings, we allow for a degree  of  fluidity  in
the lexicon, and for the use of contextual knowledge to discover the meaning of
unknown words by reverse engineering.)  In a successful utterance, knowledge of
context + common sense is sufficient to allow the hearer to choose the intended
meaning from the grammatically + lexically legitimate ones.

It turns out that there  are  lots  of  alternative  ways  of  presenting  such
principles, which are *not* created equal.  We could replace the PSG above with
the following finite-state grammar (FSG):

   S  =  (Det) Adj* N V ((Det) Adj* N) ((Det) Adj* N)

(Technically, this isn't really an FSG, since it describes the  arrangement  of
word-classes  rather  than  individual  words.    Maybe  it  should be called a
Word-Class Grammar (WCG).  Linguistically, FSG's have all the  faults  of  WCG,
and more besides).

Although this generates the same strings as the PSG, and could be equipped with
an equivalent semantics, it can be read as making very different  claims  about
the causation of linguistic behavior.
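
Since the WCG is just one flat pattern over word classes, it can even be
written as a single regular expression (a sketch; the tag names follow the
toy lexicon above):

   import re

   # S = (Det) Adj* N V ((Det) Adj* N) ((Det) Adj* N), as one flat pattern
   WCG = re.compile(r"^(Det )?(Adj )*N V( (Det )?(Adj )*N){0,2}$")

   tags = "Det Adj N V Det Adj N Det Adj N"   # the crocodile sentence, tagged
   print(bool(WCG.match(tags)))               # True: same strings, one formula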

The PSG says that there are two distinct causal factors, S-structure and
NP-structure, which interact in a certain way:  essentially, the NP-structure
principles govern the internal form of some sequential blocks of words, whose
external arrangement is determined by the S-structure principles.  The FSG says
no such thing--there is just one big formula saying what order words appear in.
The PSG says that there is one causal factor behind the relative order of  Det,
Adj  and  N  in  the  three  positions, while the FSG says that there are three
different ones, which is definitely wrong.  (If they were different, why  would
they  have  the same effect, in position after position?  And why would similar
phenomena appear, with different details of order, in language after language?)

PSGs are better than FSGs, but they get into trouble quickly, for example, with
subject-verb  agreement,  which,  for  `classical'  (e.g.  feature-free)  PSGs,
requires that many rules be split into `singular' and `plural' versions, e.g.:

   S  ->  {NPsg Vsg (NP) (NP),
           NPpl Vpl (NP) (NP)}

   NP ->  {NPsg, NPpl}

   NPsg -> (Detsg) Adj* Nsg

   NPpl -> (Detpl) Adj* Npl

What's wrong with this is that it says that there are different causal  factors
behind the order of words in singular S and NP than in plural ones, and this is
clearly wrong:  phrases differing in properties such as number, gender or case
show essentially the same gross internal structure.

There  are  many  known cures for this ailment:  agreement transformations were
the first formal proposal widely accepted  by  linguists.    Various  kinds  of
feature-sharing  principles are today most popular, especially among those with
computational inclinations.  All of these solutions have in  common  that  they
treat the arrangement of the agreeing elements as one factor and the details of
what is shared as another.
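
As an illustration of the feature-sharing idea, here is a toy sketch in
Python (the representation is invented, not any particular theory's
formalism):

   def unify(a, b):
       """Unify two flat feature dicts; None signals a clash."""
       out = dict(a)
       for key, val in b.items():
           if key in out and out[key] != val:
               return None                  # e.g. NUM=sg against NUM=pl
           out[key] = val
       return out

   # word -> (category, features)
   LEX = {"this": ("Det", {"NUM": "sg"}), "these": ("Det", {"NUM": "pl"}),
          "dog":  ("N",   {"NUM": "sg"}), "dogs":  ("N",   {"NUM": "pl"})}

   def np(det_word, n_word):
       """One rule, NP -> Det N, with NUM shared among Det, N and NP."""
       dcat, dfeat = LEX[det_word]
       ncat, nfeat = LEX[n_word]
       if (dcat, ncat) != ("Det", "N"):
           return None
       return unify(dfeat, nfeat)           # the NP's features, or a clash

   print(np("this", "dog"))    # {'NUM': 'sg'}: one rule covers both numbers
   print(np("these", "dog"))   # None: agreement failure, no rule-splitting

The arrangement (Det before N) is stated once; what is shared (NUM) is
factored out as a separate matter, just as described above.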

Wh-movement is a third example:  here we have superposed on the basic
machinery  for assigning semantic roles (on the basis of agreement, word order,
case-marking, etc., depending on the language) some additional  machinery  that
allows  constituents  to  appear  elsewhere than their basic expected position,
subject to interesting constraints.  Many ways of accommodating the most obvious
facts  are  now  known:    transformational  movements or deletions, coindexing
schemes, `slash-feature' manipulations,  `functional  uncertainty',  functional
composition+type raising are some current contenders.
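
For instance, the `slash-feature' idea can be sketched in a few lines (an
invented toy fragment, not GPSG proper): a category S/NP is an S missing an
NP, and the gap is threaded down to the position where the NP would have
been:

   LEX = {"the": "Det", "princess": "N", "crocodile": "N",
          "admired": "V", "who": "Wh"}

   def np(words, i, gap=False):
       """NP, or (with gap=True) NP/NP: the gap, consuming nothing."""
       if gap:
           return {i}
       if i + 1 < len(words) and LEX.get(words[i]) == "Det" \
               and LEX.get(words[i + 1]) == "N":
           return {i + 2}
       return set()

   def s(words, i, gap=False):
       """S -> NP V NP; if gap, exactly one NP is realized as a gap."""
       ends = set()
       for gap_subj in ([True, False] if gap else [False]):
           for j in np(words, i, gap_subj):
               if j < len(words) and LEX.get(words[j]) == "V":
                   ends |= np(words, j + 1, gap and not gap_subj)
       return ends

   def rel(words):
       """Rel -> Wh S/NP, e.g. 'who the princess admired (gap)'."""
       return LEX.get(words[0]) == "Wh" and len(words) in s(words, 1, True)

   print(rel("who the princess admired".split()))       # True (object gap)
   print(rel("who admired the crocodile".split()))      # True (subject gap)
   print(rel("who the princess admired the crocodile".split()))  # False: no gap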

Thus one major desideratum for a grammatical theory is to allow one to
formulate principles that correspond accurately to the underlying causal
factors behind grammatical phenomena, and correctly predict their
interactions.  This corresponds pretty accurately to what Chomsky has
sometimes called `descriptive
adequacy'  (he  has  unfortunately  also  applied this term to less interesting
properties,  such  as  simply  generating  the  right  set   of   sound-meaning
correspondences,  without  regard  to whether it has been done right.)  It also
corresponds pretty closely to being a pleasant language to specify grammars in.

Current grammatical theories (GTs) use a wide range of mechanisms to say, to a
considerable extent, the same kinds of things about how grammatical phenomena
are to be carved up.  They thus have a wide range of overlap in the phenomena
they handle well.  Nonetheless, they differ significantly in how well they
approximate descriptive adequacy in various areas.  LFG seems to me
to  be  unequalled  for  basic  predicate-argument structure and the complement
system:  proponents of HPSG, GPSG, and the presently available varieties of
Categorial  Grammar  have  not  yet  told  in public any convincing story about
things like Quirky Case in Icelandic,  or  preverbal  pronouns  in  French  and
Spanish.    The  other  theories seem to do better with Wh-movement and related
phenomena:  proponents of LFG have not yet told  in  public  a  sensible  story
about parasitic gaps or across-the-board extraction.  People tend to prefer
the theory that does best with the stuff they actually are working  on  at  the
moment.

Beyond  descriptive  adequacy  are  somewhat  murkier  and  more  controversial
concerns, which tend to go under the label of `Explanatory Adequacy'.  The most
generally  accepted  of  these  is  the  desideratum that a GT help explain why
language-learning is possible.  The basis of this is the idea that the less one
has  to  specify  to  determine  the  language, the easier it is to imagine how
children can do it.  The ultimate position here is that taken  by  Chomsky  and
his  followers in GB, who believe that an interesting portion of the grammar of
all languages (`core grammar') can be specified by  setting  the  values  of  a
relatively   small  number  of  binary-valued  (or  maybe  finite  list-valued)
parameters.  I find this idea pretty bizarre, but many linguists seem to accept
it.
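
Whatever its merits, the parameter-setting picture is at least easy to
state (a drastically simplified sketch; the two parameters and the rule
format are invented for illustration):

   def core_rules(head_initial, wh_fronting):
       """Two binary parameters fix a (cartoon) fragment of core grammar."""
       vp = ["V", "NP"] if head_initial else ["NP", "V"]
       question = ["Wh", "S/NP"] if wh_fronting else ["S", "Wh-in-situ"]
       return {"VP": vp, "Q": question}

   print(core_rules(head_initial=True,  wh_fronting=True))   # English-like
   print(core_rules(head_initial=False, wh_fronting=False))  # Japanese-like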

A  more  modest claim is the difference between a transformational passive rule
and a `classic' LFG one:

   TG Passive:

    NP  -  X  -  V  -  NP  -  X    =>
    1      2     3     4      5
    4      2 be+en+3   0      5+by#1


   LFG Passive:

   SUBJ  ->  OBLby/0
   OBJ   ->  SUBJ
   0     ->  Pass.Part

The TG rule is a structure changing operation which has a complicated statement
because  it  has  to  recapitulate  most  of  the basic facts of English clause
structure (the subject comes before the verb,  the  object  after,  auxiliaries
come between the subject and the verb, and prepositional phrases come after
the verb, with the preposition before its NP).  And it doesn't *really* work
anyway.  The LFG rule is a
lexical rule operating on lexical items whose argument positions are associated
with grammatical relations:  it creates  a  new  lexical  entry,  altering  the
grammatical  relation/semantic  role assignments and the feature composition of
the item as indicated.  It is much  simpler  because  it  doesn't  recapitulate
basic facts of clause structure, but depends on other components of the grammar
that account for these facts (the rules associating grammatical relations  with
NPs,   and   connecting  the  feature-composition  of  verb  forms  with  their
distribution).  Furthermore, just about everything  in  it  corresponds  to  an
actual parameter of variation in passive-like constructions:  sometimes the old
subject is always suppressed (the 0 option for SUBJ), sometimes never;
sometimes OBJ must be present and become SUBJ, sometimes not; sometimes the
grammatical category of the verb form becomes participial, sometimes not.  It
is most unlikely that language learners would learn the TG rule, but it isn't
crazy to suppose that they could pick up the LFG one.
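
Indeed, the LFG rule is simple enough to state as an operation on lexical
entries (a sketch with an invented representation, not the actual LFG
formalism):

   def passivize(entry, suppress_agent=True):
       """SUBJ -> OBLby/0, OBJ -> SUBJ, 0 -> Pass.Part, as a lexical rule."""
       new = dict(entry)
       new["SUBJ"] = entry["OBJ"]                   # OBJ -> SUBJ
       if suppress_agent:
           new.pop("OBLby", None)                   # SUBJ -> 0
       else:
           new["OBLby"] = entry["SUBJ"]             # SUBJ -> OBLby
       del new["OBJ"]
       new["form"] = "Pass.Part"                    # 0 -> Pass.Part
       return new

   hand = {"pred": "hand", "form": "finite",
           "SUBJ": "agent", "OBJ": "theme"}         # active `hand'
   print(passivize(hand, suppress_agent=False))
   # {'pred': 'hand', 'form': 'Pass.Part', 'SUBJ': 'theme', 'OBLby': 'agent'}

Nothing in the rule mentions word order or auxiliaries; those facts are left
to the components of the grammar that state them once.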

This concept  of  explanatory  adequacy  is  actually  closely  connected  with
descriptive  adequacy,  because the greater the extent to which the notation is
free from redundancy and formal fuss,  the  easier  it  is  to  imagine  how  a
learning  procedure  might  successfully  find  its  way  through  the space of
possibilities it provides.  But there is still nothing in descriptive  adequacy
that says that the GT shouldn't provide more facilities than one actually needs
to describe languages right, which this kind of explanatory adequacy does.

One can also ask that a GT illuminate actual facts  about  acquisition,  rather
than  just  make it easier to imagine how it might be possible.  Recent work by
Hyams and Pinker attempts to do this.  But this requires an  actual  theory  of
learning as well as a GT.

Another brand of explanatory adequacy pertains to processing.  The way in which
a GT presents  grammatical  information  can  make  a  big  difference  to  the
possibilities  for  using  it in comprehension and production.  TG's are pretty
terrible in this regard; many more recent theories are much better.  (You can
get  an LFG parser to run on your PC from the Institute for Machine Translation
at the University of Stuttgart; nothing like this is even envisionable for
classical TG).  A complicating factor is the possibility of compilation:  maybe
language learning, and the halting performances  of  beginners,  are  based  on
direct  processing  of some kind of augmented PSG, while the highly overlearned
performance of fluent speakers is based at least in part on the use of  FSGs  +
strategies   generated  by  various  kinds  of  compilation  processes.    Only
psycholinguistics will be able to sort that out.  But the generalizations
(i.e. basic causal factors) detected by linguists still have to be taken as
part of the causal order, for otherwise there is no account of why they are
there.
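
The compilation step itself is mechanically straightforward for
non-recursive fragments (a sketch: the toy PSG rules above compiled into a
finite-state recognizer by way of a regular expression):

   import re

   def compile_fsg():
       """Compile NP -> (Det) Adj* N and S -> NP V (NP) (NP) to one regex."""
       np = r"(Det )?(Adj )*N"
       return re.compile("^" + np + " V( " + np + "){0,2}$")

   FAST_S = compile_fsg()
   print(bool(FAST_S.match("Det Adj N V Det N")))   # True
   print(bool(FAST_S.match("V Det N")))             # False: no subject NP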

The import of Explanatory Adequacy is that the  generalizations  discovered  by
linguists represent real causal factors.  Thus, they ought to interact sensibly
with other kinds of causal factors, such as learning procedures and
processing mechanisms.

There is at least one complicating factor.  The generalizations (i.e. causal
factors) identified by grammatical analysis are supposed to reside in the minds
of language users.  But sometimes linguists get fooled.  For example,
languages tend to undergo sequences of regular sound changes,  which  can  turn
formerly  regular and uninteresting systems of forms into things that look like
the results of applying long sequences of phonological rules to rather abstract
underlying  representations.   There is good reason to believe that there exist
things that look and act pretty  much  like  standard  generative  phonological
rules  (the  original  sound  changes,  at  a minimum), but looking at say, the
residue of the Great Vowel Shift in English, it is hard to say what is
historical  residue  and  what is genuine Modern English grammatical structure.
Chomsky + Halle's rules in SPE do roughly represent genuine causal  factors  in
linguistic behavior, but it's not so clear what their locus is:  history, or
the brains of modern speakers.
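
For reference, an ordered-rewrite-rule system of the SPE sort can be
sketched as follows (toy forms and rules, invented for the illustration;
the two steps loosely echo part of the Great Vowel Shift):

   import re

   RULES = [
       (r"i:", "ai"),   # long high vowel diphthongizes (applies first)
       (r"e:", "i:"),   # long mid vowel raises; counterfeeding order means
   ]                    # its output does not then diphthongize

   def derive(underlying):
       """Apply the ordered rules, each once, across the whole form."""
       form = underlying
       for pattern, replacement in RULES:
           form = re.sub(pattern, replacement, form)
       return form

   print(derive("mi:s"))   # 'mais'  (cf. 'mice')
   print(derive("ge:s"))   # 'gi:s'  (cf. 'geese')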

Happily, most of syntactic theory and contemporary phonological theory seems
reasonably  secure  against  this line of attack, but one has to rely on common
sense to ascertain what is the actual  locus  of  causal  factors  revealed  by
grammatical analysis.  (If a group of people started speaking a language taught
to them by aliens, its grammatical  generalizations  would  not  be  of  direct
relevance to human GT, since their ultimate locus would be the minds of the
aliens, not the human speakers.  On the other  hand,  the  changes  that  human
speakers introduced into the language might be highly revealing for human GT.)

   Avery Andrews
   The Australian National University
   andaling%fac.anu.oz@seismo.css.gov

------------------------

Date: Thu, 10 Nov 88 09:44 EST
From: w.rolandi <rolandi@gollum.UUCP>
Subject: asking "how do you know" type questions


>As far as who listens when "linguists
>speak" as much responsibility falls on the (potential) listener as does
>the speaker.  That's not blame that can be laid solely at the door of
>linguistics.

Perhaps I should endeavor to be a more imaginative listener.

>And just who is to determine which questions are of interest?  It is true
>that analysis and experimentation have played next to no role in linguistics
>*until recently*, but again, who determines what is "real science"?  Besides
>you, that is.  

"Real science" is predictive knowledge of nature.  So, assuming you are
asking a serious question, I'd say that "who" is, in fact, nature.
Another metric might be that real science supports applications of
its fundamental knowledge, in other words, engineering disciplines.

>I don't think you can remove information/messages/intentions
>from language without throwing the baby out with the bath water.  If language
>is none of these things, then what, pray tell, do you think it is?  

Verbal behavior.  And the analysis of behavior does not require invoking
explanatory fictions like information/messages/intentions.  Exactly
what is an intention?  How do you operationally define and measure it?

>>What's wrong with pragmatics and linguistics in general is that neither
>>field addresses any issues of scientific import.

>According to whom?  What exactly has given you the authority to make this
>bogus and unsubstantiated claim?

Are you saying that I haven't the authority to cite the absence of
controlled experimentation, measurement, and causal analysis?  This
IS rich.  A sort of "Linguistics: love it or leave it".

>When you know all about particle physics, what do you know?  

Many of the behavioral properties of light and energy.  How to make 
bombs and other things.  (see stuff about engineering above).

>I suggest that you find out a bit more about what is going on in
>linguistics before you sling mud again.  Chomsky is not the only linguist
>out here.

Is asking "how do you know" type questions slinging mud?  Are you
prepared to tell me that there is no place for this sort of thing in
the rigorous science of linguistics?   Make my day.

Your pal,

Walter Rolandi
rolandi@ncrcae.Columbia.NCR.COM
NCR Advanced Systems Development, Columbia, SC

------------------------------

Date: Fri, 11 Nov 88 06:44 EST
From: Clay M Bond <bondc@iuvax.cs.indiana.edu>
Subject: Re: asking "how do you know" type questions


Walter Rolandi:

> [ Lots of mud not worth repeating ]

Objections to current ideas are the stuff of science and progress
of knowledge.  However, I would say the same to you as I would to
the "anti-formalists" who have been doing the same all these years:

Please object.  But if you have no alternative to offer which
addresses your complaints by correcting what you perceive to
be deficiencies, you are contributing nothing to anything.  Or
to be more colloquial, put your money where your mouth is.

Whatever else I may think of Chomsky and his ideas, he at least
had a contribution to make.

>Make my day.

I'm so unimpressed.
-- 
<< Clay Bond -- IU Department of Leath-er, er, uh, Linguistics       >>
<< bondc@iuvax.cs.indiana.edu        AKA: Le Nouveau Marquis de Sade >>
<< {pur-ee,rutgers,pyramid,ames}!iuvax!bondc *********************** >>

------------------------------

Date: Mon, 7 Nov 88 12:27 EST
From: Avery Andrews <munnari!fac.anu.oz.au!andaling@uunet.UU.NET>
Subject: quirky case

The basic reference for Complementation in Icelandic, including
the evidence that there is Raising of Quirky-case subjects to
object position,  is:

  Thrainsson, Hoskuldur (1979) On Complementation in Icelandic.
     Garland Press.

An LFG analysis of the phenomenon is to be found in:

  Andrews, Avery (1982) On the Representation of Case in Modern
    Icelandic, in Bresnan (ed), The Mental Representation of
    Grammatical  Relations.

See also:

  Maling, Joan, and Annie Zaenen (to appear) Modern Icelandic Syntax.
    Academic Press.


Avery Andrews
andaling%fac.anu.oz@seismo.css.gov

------------------------------

Date: Wed, 9 Nov 88 20:06 EST
From: Clay M Bond <bondc@iuvax.cs.indiana.edu>
Subject: Subjectivity, Mud-slinging, Behaviorism


Walter Rolandi:

>Sure Rick. I think that the body of knowledge that is linguistics is 
>somewhat obscure and scarcely known to anyone outside of the field.  
>When linguists speak, mostly just linguists listen.

This is not without some element of truth, but only if you leave out
cognitive/computational/psycholinguists.  Linguistics, at least that
done by computational linguists, is more than scarcely known in AI, for
example, as is psycholinguistics in psychology, and cognitive linguistics
in psychology/neuroscience/compsci.  As far as who listens when "linguists
speak" as much responsibility falls on the (potential) listener as does
the speaker.  That's not blame that can be laid solely at the door of
linguistics.


>I am suggesting that
>this would not be the case if the discipline were to address questions
>which, if answered, would be of interest to the general scientific community.
>More specifically, I am suggesting that linguistics will take a giant
>step towards science when the field embraces causal analysis and
>controlled experimentation.  This is the stuff of all real science.

And just who is to determine which questions are of interest?  It is true
that analysis and experimentation have played next to no role in linguistics
*until recently*, but again, who determines what is "real science"?  Besides
you, that is.  And you can produce bullshit just as easily with "real
science" as you can with "irreal (?) science".


>Silly me, I supposed that to mean that pragmaticists were busy producing
>a body of data that defines the causal relationships between the
>people, places, and objects around us that effect [sic] the contents of what
>we say.  What I found instead was a lot of pompous intellectualizing on
>the nature of "communication".  What I have seen is little more than
>mentalistic, philosophical, even literary analysis of things like 
>"information", "messages", and "intentions".

Pompous intellectualizing!  Not a wee bit resentful here, are we?  I can
sympathize with your objection to mentalism/philosophy/literature masquerading
as science, but I don't think you can remove information/messages/intentions
from language without throwing the baby out with the bath water.  If language
is none of these things, then what, pray tell, do you think it is?  When
you use language does it contain no information/messages/intentions?


>What's wrong with pragmatics and linguistics in general is that neither
>field addresses any issues of scientific import.

According to whom?  What exactly has given you the authority to make this
bogus and unsubstantiated claim?


>Who cares about
>anti-homophones, u-umlaut and y (or rather, u-double-dot and y), and
>embedded, recursive prepositional phrases? 

A lot of people, obviously.  Your statement had no purpose but to be offensive.


>When you know all about
>these things, what do you know?  I want to know why people say the
>things that they say and why their utterances take the forms that they do.

When you know all about particle physics, what do you know?  The answer is
the same.  I also want to know why people say the things that they say.
And the things that they say are intrinsically bound up with information,
messages, and intentions, those things you thumbed your nose at earlier.


>I want a scientific answer.  Is it unfair or unkind of me to ask this
>of linguistics?

It is if you don't ask or require it of yourself.  Double standards are
never fair or kind.


>>(BTW, use the term 'linguist' to refer
>>to linguists in general.  If you are just complaining about generative
>>linguists, then say so.)
>
>I stand accused.  

You do indeed, twice.  And at least once by one who is not a generative
linguist.


>I want it [linguistics] to aspire to
>substance: give me a causal explanation of verbal behavior.  

You cannot explain behavior without explaining cognition.  Without cognition,
behaviorism *is* push-pull mechanism, which gives no more causal explanation
than does mentalism.  Behaviorism without cognitive theory has no explanatory
power.  I suggest that you find out a bit more about what is going on in
linguistics before you sling mud again.  Chomsky is not the only linguist
out here.
-- 
<< ***************************************************************** >>
<< Clay Bond -- IU Department of Leath-er, er, uh, Linguistics       >>
<< bondc@iuvax.cs.indiana.edu        AKA: Le Nouveau Marquis de Sade >>
<< {pur-ee,rutgers,pyramid,ames}!iuvax!bondc *********************** >>

------------------------------

Date: Tue, 8 Nov 88 16:43 EST
From: Rick Wojcik <rwojcik@bcsaic.UUCP>
Subject: Re: Syntactical *definition* of English


Greg Lee writes:
 me> [writing about relative clauses containing resumptive pronouns]
 me> Note that relative clauses that violate extraction constraints *must*
 me> contain pronouns that agree with the head NP.
GL> Two counter-notes.  Such relative clauses do not violate extraction
GL> constraints, strictly speaking, since nothing is extracted (obviously).

Let's not jump to conclusions either way.  The constraints were originally
described in terms of movement rules.  The question is over what it buys you
to use the gap metaphor or movement metaphor in describing these syntactic
constructions.  I don't think that there is any absolute truth here.

GL> And it is not clear that they must contain the pronouns you say they
GL> must.  After all, there are relative clauses that contain neither
GL> gaps nor resumptive pronouns -- we call them appositive.

In this regard, there is a very interesting article by Francis Pelletier
"Vacuous Relatives and the (Non-) Context-Freeness of English" in
_Linguistics and Philosophy_ (11:3, 255-260, 1988).  Pelletier discusses a
controversy over 'such that' constructions, which are very similar to relative
clauses with resumptive pronouns (e.g. 'every triangle such that two of its
sides are equal').  The argument turns, in part, around Pullum's (and other's)
feelings that expressions like 'every triangle such that two sides are equal'
are as well-formed as expressions with the resumptive pronoun.  I agree with
Pelletier that they are not and that "this is not an area where one should
look to find a proof of the non-context-freeness of English."

My knowledge about resumptive pronouns derives primarily from the study of
Breton, where such pronouns occur in relative clauses marked by 'hag' /ag/
(literally 'and').  So you say the following:
   an    istor   hag   en-deus  klevet  ar     paotr anezhan
   'the' 'story' 'and' 'he-has' 'heard' 'the' 'boy'  'IT (resumptive)'
   "the story that the boy heard"
Interestingly, the use of question words for relatives, an innovation caused
by exposure to French, disallows resumptive pronouns:
   an istor petra en-deus klevet ar paotr (*anezhan)
           'what'
(In fact, most Bretons seem to prefer a passive clause with relativization out
of subject position here.  But my active clause gets grudging acceptance. :-)
So the use of a non-pronominal relativizer 'and' is associated with the use of
resumptive pronouns.  My question concerns other languages with resumptive
pronouns.  Are there examples of languages that have true pronoun relativizers
(requiring morphological agreement between head and relativizer) and
resumptive pronouns in the clause?  I suspect that this is rare.  The
occurrence of pronominal relativizers facilitates the existence of gaps.  This
is so because pronominal relativizers help the listener to identify the gap
site, whereas non-pronominal relativizers obscure it.  Anyway, this is my gut
feeling. 

-- 
Rick Wojcik   csnet:  rwojcik@boeing.com	   
              uucp:   uw-beaver!ssc-vax!bcsaic!rwojcik 

------------------------------

Date: Fri, 11 Nov 88 00:27 EST
From: Greg Lee <lee@uhccux.uhcc.hawaii.edu>
Subject: Re: Syntactical *definition* of English


From article <8563@bcsaic.UUCP>, by rwojcik@bcsaic.UUCP (Rick Wojcik):
" ...
" In this regard, there is a very interesting article by Francis Pelletier

The only point seems to be that James Higginbotham made a bad argument.
Surprise, surprise.

""Vacuous Relatives and the (Non-) Context-Freeness of English" in
" _Linguistics_ and_Philosopy_ (11:3, 255-260, 1988).  Pelletier discusses a
" controversy over 'such that' constructions, which are very similar to relative
" clauses with resumptive pronouns (e.g. 'every triangle such that two of its
" sides are equal').  The argument turns, in part, around Pullum's (and other's)
" feelings that expressions like 'every triangle such that two sides are equal'
" are as well-formed as expressions with the resumptive pronoun.  I agree with
" Pelletier that they are not and that "this is not an area where one should
" look to find a proof of the non-context-freeness of English."

If you say they are not well formed, you are not agreeing with
Pelletier, because he doesn't say that.  No participant in the
controversy (Pullum, Higginbotham, Pelletier) says that.

But let's say they're not.  Then a GPSG analysis would have to
treat such-that clauses parallel with relative clauses, and assign
them to a special category.  Where does that get us?  Or are
we no longer discussing the appropriateness of Gazdar's theory?

" ...
" occurrence of pronominal relativizers facilitates the existence of gaps.  This
" is so because pronominal relativizers help the listener to identify the gap
" site, whereas non-pronominal relativizers obscure it.  Anyway, this is my gut
" feeling. 

Makes sense to me.

		Greg, lee@uhccux.uhcc.hawaii.edu

------------------------------

Date: Fri, 11 Nov 88 14:14 EST
From: w.rolandi <rolandi@gollum.UUCP>
Subject: was predictive knowledge...


In response to Clay's:
>Please object.  But if you have no alternative to offer which
>addresses your complaints by correcting what you perceive to
>be deficiencies, you are contributing nothing to anything.  Or
>to be more colloquial, put your money where your mouth is.

What on earth do you think I've been talking about?  From the outset,
I have been suggesting that the science of human behavior, behavior
analysis, offers specific advantages, both in its causal model and
in its experimental methodology for addressing the deficiencies of 
linguistic knowledge.  

Ironically, your emotional responses have unwittingly served to reinforce
my impression that linguists possess an impoverished understanding
of the purpose, spirit, and methods of science.  I have stated
methodological and epistemological criticisms of linguistics.  Curiously,
you do not address my specific complaints.  Forfeiting an opportunity
to cite data that would prove me wrong--on my own terms, no less--you
act as if you are stepping forward to avenge the honor of an unjustly 
maligned loved one.  

Your first response questioned my authority to make such statements, 
as if facts are untrue unless cited by authorities.  Your second response 
essentially entreats me to find something nice to say about linguistics
or to refrain from saying anything at all.

What can I say?  I am talking about methodological issues associated 
with obtaining a predictive knowledge of natural phenomena.  The problem 
is that I am talking specifics to an audience that is oblivious to the
general plan.

Who else but,


Walter Rolandi
rolandi@ncrcae.Columbia.NCR.COM
NCR Advanced Systems Development, Columbia, SC

------------------------------

End of NL-KR Digest
*******************