[comp.ai.nlang-know-rep] NL-KR Digest Volume 5 No. 41

nl-kr-request@cs.rochester.edu (NL-KR Moderator Brad Miller) (12/22/88)

NL-KR Digest             (12/21/88 18:03:15)            Volume 5 Number 41

Today's Topics:
        Last Issue of This Volume
        Similarity measurement
        phrasal verbs
        Categorization: Lakoff's mistake.
        Looking for Association for Automated Reasoning
        Re: information, message, intention, and the like
        How do they do it?
        Another research post at Edinburgh
        Active Bilingual Lexicon for MT (UNISYS seminar)
        
Submissions: NL-KR@CS.ROCHESTER.EDU 
Requests, policy: NL-KR-REQUEST@CS.ROCHESTER.EDU
----------------------------------------------------------------------

Date: Wed, 21 Dec 88 17:47 EST
From: Brad Miller <miller@CS.ROCHESTER.EDU>
Subject: Last Issue of This Volume

And with me as moderator.

 Christopher Welty  ---  Asst. Director, RPI CS Labs
 weltyc@cs.rpi.edu             ...!njin!nyser!weltyc

has agreed to take over moderator duties; expect a posting from him in early
January when the system is up and he has a new address to give you.
nl-kr@cs.rochester.edu will forward to the new address for a while, so you
don't need to worry about address changes yet.

It's been fun!
----
Brad Miller		U. Rochester Comp Sci Dept.
miller@cs.rochester.edu {...allegra!rochester!miller}

------------------------------

Date: Mon, 19 Dec 88 15:09 EST
From: James King <king@rd1632.Dayton.NCR.COM>
Subject: Similarity measurement

SIMILARITY      ...  What does it mean?
for ANALOGY          What are the measures?
for REMINDING        Are there generalities or is it domain-specific?
for EXEMPLARS   ...  Etc. Etc.

I am performing independent research in the area of Case-Based
Reasoning (CBR), and I am working on various metrics for similarity.

In general, what ideas do you (the net-world) have about:
   - What about a new situation reminds you of a prior experience?
   - OR
   - How does one situation remind you of another?
This is obviously a (too) wide-open question and requires some
clarification, but I would rather not narrow the discussion to a set of
examples yet.  That is, unless it is more productive to do so.  A little
more focus might be: how does one discriminate and weight features of a new
situation (case) in relation to a large case base of experiences
that may or may not have a bearing on the new situation?  Did that
provide more focus or fuzziness!?
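One way to make the feature-weighting question concrete is a nearest-neighbour sketch: score a new case against each stored case by summing the weights of matching features. Everything below (the feature names, the fixed weights, the flat exact-match test) is an illustrative assumption, not a proposed answer to the survey question:

```python
def similarity(new_case, stored_case, weights):
    """Weighted feature-match score in [0, 1].

    A minimal sketch: each feature that matches exactly contributes its
    weight; the score is normalized by the total weight.
    """
    total = sum(weights.values())
    matched = sum(w for f, w in weights.items()
                  if new_case.get(f) == stored_case.get(f))
    return matched / total

def most_similar(new_case, case_base, weights):
    # Retrieve the stored case with the highest weighted match.
    return max(case_base, key=lambda c: similarity(new_case, c, weights))

# Hypothetical case base and weights:
weights = {"domain": 2.0, "goal": 3.0, "constraint": 1.0}
case_base = [
    {"name": "case1", "domain": "travel", "goal": "book", "constraint": "cheap"},
    {"name": "case2", "domain": "travel", "goal": "cancel", "constraint": "fast"},
]
new = {"domain": "travel", "goal": "book", "constraint": "fast"}
# case1 matches on domain and goal (5/6); case2 on domain and constraint (3/6).
```

Even this toy version exposes the open questions in the post: where the weights come from, and whether exact matching is the right notion of a feature "reminding" you of anything.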

I send this notice out as a preliminary "attention-getter" to
provide myself with some input to help form a more formal survey.  Once 
written I hope to send it to a specific set of researchers (consisting
mostly of people in the CBR, information retrieval (IR), doc. mngt. areas)
and to anyone in netland that requests so.

My goals are to:
  - Produce some consensus on a view of, or approach(es) to,
    similarity, and the possibility of codifying it to aid in reminding
    and retrieval of prior experiences.
  - ALSO:  I wish to propose to Dr. Katz at MITRE that a workshop be held
    at IJCAI-89 which would address/discuss this area of concern.  If
    anyone is interested in adding support or input please let me
    know.  (We held a lively workshop on CBR at AAAI-88, and 
    I believe a focused workshop in a specific area that would 
    benefit CBR researchers as well as IR, AI, etc. is important.)

If anyone is interested in responding to any of this:

   - I will watch the "nets" for replies
   - Email to:  j.a.king@dayton.ncr.com
   - Call:  (513)-445-1090 before 4:30 (EST) (317)-478-5910 after 6:00
   - Mail:  NCR Corp.  1700 S. Patterson  WHQ-5E  Dayton, OH 45479

After listening, talking to people, etc. for the next couple of weeks I
will write up the survey (anyone can help form the survey) over Christmas
and send it out the first week in January.  By February-March (15th?)
I will hopefully be able to publish preliminary results (to the net, 
respondents, etc.). 

I am fairly familiar with the discussions of similarity by Kolodner,
Schank, Rissland, Porter and others in the CBR field - also from the 
IR field, Fox, Croft, Salton, etc.  I have built one CBR system
and two applications where the similarity metric was determined from the
experts. 

BUT  ... I am open to anyone's suggestions on reference works for 
understanding similarity metrics, methodologies, etc.

Thank you for your time.

Jim King 

------------------------------

Date: Mon, 19 Dec 88 16:05 EST
From: GOFORTH%LAUCOSC.LAURENTIAN.CA@CORNELLC.ccs.cornell.edu
Subject: phrasal verbs


A few comments from an interested amateur linguist (comp sci by
profession)

Phrasal verbs.  The alternative parsings of verb phrases as
V [P N] or [V P] N must be decided by semantics if the syntax is
ambiguous.  A clearer example of this than the ones cited has always
fascinated me:
John [turned on] the radio. / John turned [on the radio].

This also helps explain the other word order:
John turns the radio on.
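The two bracketings can be written down directly; a toy sketch (the structures and the word-order test are illustrative, not a parser):

```python
# Two parses of "John turned on the radio", encoded as nested brackets.
particle_parse = ("John", (("turned", "on"), "the radio"))  # [V P] N: switched it on
prep_parse = ("John", ("turned", ("on", "the radio")))      # V [P N]: rotated on top of it

def particle_shift(verb, particle, obj):
    """The alternative word order available to the verb-particle reading,
    as in "John turns the radio on"."""
    return f"{verb} {obj} {particle}"

assert particle_shift("turned", "on", "the radio") == "turned the radio on"
assert particle_parse != prep_parse  # same words, different structure
```

The shifted order is only natural on the particle reading, which is why it helps disambiguate.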

A linguist from the University of Sherbrooke (ref escapes me but I could
find it if pressed) has compiled a dictionary of these phrasal verbs.  In
it, his premise is that there are in English six verbs whose meaning
is so diluted that they are little more than auxiliaries to their
prepositional completions: have, put, get, go, do, be.  The book is
helpful to EFL learners, who aren't well served in this regard by a standard
dictionary.

Incidentally, the discussion includi
Dave Goforth
MAth and Computer Science, Laurentian U

------------------------------

Date: Tue, 20 Dec 88 19:16 EST
From: Mark William Hopkins <markh@csd4.milw.wisc.edu>
Subject: Categorization: Lakoff's mistake.


In article <719@quintus.UUCP> ok@quintus.UUCP (Richard A. O'Keefe) writes:
>To continue this rather constructive approach of suggesting good books
>to read that bear on the subject, may I recommend
>
>	Women, Fire, and Dangerous Things
>	-- what categories reveal about the mind
>	George Lakoff, 1987
>	U of Chicago Press, ISBN 0-226-46803-8
>
>I don't think the data he presents are quite as much of a challenge to
>the traditional view of what a category is as he thinks, provided you
>think of the traditional view as an attempt to characterise ``valid''
>categories rather than actual cognition, just as classical logic is
>an attempt to characterise valid arguments rather than what people
>actually do.  As an account of what people do, it is of very great
>interest for both AI camps, and

I don't think it is even a challenge to the traditional view, when the view
is taken as an attempt to characterize human cognition.

Lakoff's essential argument is that humans do not form categories whose
membership is based on necessary and sufficient conditions (the Classical view 
of Categorization).  As a basic fill-in-the-blank example, consider a
category whose members must have a majority of the three properties A, B, C.
Lakoff asserts that this kind of category defies the Classical view, because
no particular one of the three properties need be possessed by a given
member, nor need a member have them ALL, though it must have most of them.
None of the criteria is necessary and none sufficient.

Yet this kind of argument does not rule out the Classical view, because the
predicate:
		    (A and B) or (B and C) or (C and A)

*IS* a necessary and sufficient condition for membership to such a class.
Forgetting about that magical word "or" is Lakoff's mistake.  Or could it
be that the people who hold to the Classical view have also made the same
mistake of forgetting about that word?
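The equivalence between the "majority of A, B, C" category and the disjunctive predicate can be checked mechanically; a minimal sketch (the property names are just the placeholders from the example):

```python
from itertools import product

def majority(a, b, c):
    # Membership as described: at least two of the three properties hold.
    # With Python booleans, + coerces to int, so this counts True values.
    return (a + b + c) >= 2

def disjunction(a, b, c):
    # The necessary-and-sufficient predicate from the text.
    return (a and b) or (b and c) or (c and a)

# Exhaustive check over all eight truth assignments.
for a, b, c in product([False, True], repeat=3):
    assert majority(a, b, c) == bool(disjunction(a, b, c))
```

Since the two predicates agree on every assignment, the disjunction is exactly the membership condition for the majority category.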

As a more concrete example, Lakoff brings up the Motherhood Test problem.
The idea is that there are MANY criteria that determine whether a given
woman is your mother or not, none of which need be possessed by any given
mother: she could have given birth to you, she could have nurtured you,
she could be female, etc.  But it's really the same kind of class as that
mentioned above.

------------------------------

Date: Wed, 21 Dec 88 04:28 EST
From: General Sonnim Account <sonnim@sorak.kaist.ac.kr>
Subject: Looking for Association for Automated Reasoning

Can anyone give me a pointer to the Association for Automated Reasoning?
I know it exists, but can't locate it.

Kim, Young Hoon

casun@klaatu.kaist.ac.kr


------------------------------

Date: Mon, 19 Dec 88 14:34 EST
From: Rick Wojcik <rwojcik@bcsaic.UUCP>
Subject: Re: information, message, intention, and the like


In article <15820@iuvax.cs.indiana.edu> bondc@iuvax.UUCP (Clay M Bond) writes:
[Agreeing with Walter Rolandi that you have to hear where rules exist and how
they affect behavior]
>....I don't think rules exist anywhere save
>as explanatory constructs, and that cognition (which you
>apparently consign to the realm of mentalism) can indeed
>be empirically studied and analyzed as firing patterns
>among neural networks.

Well, this really has me puzzled.  You shouldn't call something an explanatory
construct without saying what it is supposed to explain.  What do you think
rules are supposed to explain?  Traditional generative theory says
well-formedness intuitions.  I say that rules govern behavior, and that
well-formedness intuitions are derivative of behavioral strategies (a.k.a.
rules).  In either case, they exist in the brain, which is the seat of the
mind.  As for whether or not the mind exists, it is impossible to think that
it doesn't.  :-)


-- 
Rick Wojcik   csnet:  rwojcik@boeing.com	   
              uucp:   uw-beaver!ssc-vax!bcsaic!rwojcik 

------------------------------

Date: Thu, 15 Dec 88 18:58 EST
From: Steve Solomon <solomon@aero.ARPA>
Subject: How do they do it?


Does anyone have any references on psychological experiments or research
on the processing of written text into _prosodic_ structure? There is a
hypothesis that during the process of reading the human parser will
first posit a prosodic (i.e. phonological) representation of the input
string, from which the syntax for the sentence is then derived. That is,
the input to the syntactic parser is "bracketed." 

The kinds of evidence I'm looking for would be studies of reading tone
languages such as Chinese (Is tone encoded in the graphemes? How do
Chinese readers know how to assign a tonal contour to a sentence on the page?).
What about Bantu languages with rich tonal systems? How do Hebrew
speakers parse unvoweled text and assign the missing vowels? (actual
segmental material) Deterministically?

Comments? Suggestions?
-- 
Steve Solomon
UCLA Dept. of Linguistics
and
The Aerospace Corp.
solomon@aerospace.aero.org

Wir fuhren in einem von einem Freund geliehenen, durch vielen Gebrauch
schon ziemlich wertlos gewordenen alten Auto, in einem gemütlichen,
unserer Ferienstimmung entsprechenden Tempo, durch das wegen seiner
romantischen Schönheit und seines guten Weines berühmte Rheintal.

[We drove, in an old car borrowed from a friend and already fairly
worthless from much use, at a leisurely pace suited to our holiday mood,
through the Rhine valley, famous for its romantic beauty and its good
wine.]

------------------------------

Date: Fri, 16 Dec 88 21:59 EST
From: David Stampe <stampe@uhccux.uhcc.hawaii.edu>
Subject: Re: How do they do it?


solomon@aero.ARPA (Steve Solomon) asks:
>Does anyone have any references on psychological experiments or research
>on written text to _prosodic_ structure processing? There is a
>hypothesis that during the process of reading the human parser will
>first posit a prosodic (i.e. phonological) representation of the input
>string, from which the syntax for the sentence is then derived. That is,
>the input to the syntactic parser is "bracketed." 
>
>The kinds of evidence I'm looking for would be studies of reading tone
>languages such a Chinese (Is tone encoded in the graphemes? How do the
>Chinese know how to assign a tonal contour to a sentence on the page?).
>What about Bantu languages with rich tonal systems? How do Hebrew
>speakers parse unvoweled text and assign the missing vowels? (actual
>segmental material) Deterministically?

I don't think you'll find anything very interesting, mainly because
the hypothesis [can you cite a reference?] as described seems too
poorly thought out to be worth studying.  A reader who sounds out
words, rather than reading them as chunks, might use a phonological
representation derived from the successive words he sounds out to
discover their respective morphological analyses and lexical identities,
but it seems introspectively obvious that there's no way to convert
directly from print to a PROSODIC representation: that could be gotten
only directly from speech, or by processing syntactic-semantic
representations of the successive phrases and clauses read.

Actually, the examples you suggest don't really concern prosody
(rhythm) per se, but recognition of tones (which are mainly lexical)
and unwritten vowels.  But the same is true here: to say ths wrttn
sntnce wth its crrct vwls nd intntn, y hv to rcgnz th wrds it cntns.
The syntax tells you the pronunciation, not vice versa.

[Steve, I'm posting this because I seem not to be able to reach you
by email.  Greg Lee and I are interested in knowing about a phonetic
alphabet attributed to you in the electronic journal foNETiks.  Could
you email or post some information?]

David (stampe@uhccux.uhcc.hawaii.edu)

----
Somehow this reminds me of George Stewart's story about what led
him to take up the study of meter: a high school English teacher who
read the line
	King Charles, and who'll do him right now?
as
	U    /        U   U      /  U   U     /	


------------------------------

Date: Mon, 19 Dec 88 14:56 EST
From: Rick Wojcik <rwojcik@bcsaic.UUCP>
Subject: Re: How do they do it?

In article <43299@aero.ARPA> solomon@aero.UUCP (Steve Solomon) writes:
>Does anyone have any references on psychological experiments or research
>on written text to _prosodic_ structure processing? There is a
>hypothesis that during the process of reading the human parser will
>first posit a prosodic (i.e. phonological) representation of the input
>string, from which the syntax for the sentence is then derived. That is,
>the input to the syntactic parser is "bracketed." 

Just out of curiosity, how does this work handle people who don't have native
prosodic structures--e.g. the deaf, foreign speakers?  Have supporters of the
hypothesis looked for differences in nonnatives that they can tie to
differences in pronunciation?  I hope that those references are posted, so
that others of us can have a look at them, too.


-- 
Rick Wojcik   csnet:  rwojcik@boeing.com	   
              uucp:   uw-beaver!ssc-vax!bcsaic!rwojcik 

------------------------------

Date: Wed, 21 Dec 88 12:24 EST
From: Alan Bundy <bundy%aiva.edinburgh.ac.uk@NSS.Cs.Ucl.AC.UK>
Subject: Another research post at Edinburgh
  
	Department of Artificial Intelligence
	     University of Edinburgh
  
	       RESEARCH FELLOW
	    (Automated Reasoning)


Applications are invited for a research fellowship, funded by the
ESPRIT Basic Research Actions programme, as part of an international
consortium studying logic programming. The post is tenable from 1st
April 1989 (or soon thereafter) for 30 months.  The fellow will
attempt to apply the technique of proof plans to the guidance of
inference in knowledge-based systems. S/he will also be required to
liaise with other members of the consortium.  Proof plans have been
developed as a technique for guiding the search for a proof in
automatic theorem proving, and tested in the domains of symbolic
equation solving and mathematical induction. The aim of the project is
to see if they are equally applicable to non-mathematical areas. The
project will be led by Professor Alan Bundy and Ms Jane Hesketh.

Candidates should possess a PhD or have equivalent research or industrial
experience.  Knowledge of artificial intelligence, mathematical logic
and/or logic programming would be an advantage.  Salary is on the AR1A
scale in the range 9,865 - 13,365 pounds p.a., according to age and
experience.

Applicants should send a curriculum vitae and the names of two
referees to:

  Prof. Alan Bundy.
  Department of Artificial Intelligence, 
  University of Edinburgh, 
  80 South Bridge, 
  Edinburgh,  
  EH1 1HN, 
  SCOTLAND.

as soon as possible.  The closing date for applications is 1st
February 1989.  Further details may be obtained from Prof. Bundy (at
the above address or email to bundy@uk.ac.edinburgh or
bundy@rutgers.edu) quoting reference 5614/em.

------------------------------

Date: Fri, 16 Dec 88 09:29 EST
From: finin@PRC.Unisys.COM
Subject: Active Bilingual Lexicon for MT (UNISYS seminar)


			      AI SEMINAR
		     UNISYS PAOLI RESEARCH CENTER
				   
				   
	 An Active Bilingual Lexicon for Machine Translation
				   
			      Igal Golan
			IBM Scientific Center
			    Haifa, Israel
				   
The work has been carried out as part of a project on machine
translation.  A prototype capable of translating sentences from
English to Hebrew was built, and the active bilingual dictionary is
part of this prototype.  We designed and implemented a special language in
which the differentiation rules that comprise an entry in the
bilingual lexicon are stated.  Each statement in the set of rules
comprising a given lexical entry defines a correspondence between
a syntactic environment (with semantic feature supplements) in the
source language and a translation into the target language. The
lexicon entries are directly executable by an interpreter written in LISP.

The lexicographer can state the lexical facts and the effects they
have on processing in terms that are relatively transparent from a
linguistic perspective. The available instructions are rather simple
and intuitive.  The language has enough expressive power to support a
variety of requirements for bilingual lexical mapping, while
restricting the scope of operations as much as possible, in order to
reduce complexity and avoid undesired consequences for other entries
or subsystems.  The active bilingual lexicon is the only system
component which is exposed to users and can serve to linguistically
control transfer effects.  A unified approach to lexicon creation and
maintenance was a design goal as was the means to gradually refine
sense specification and the ability to tailor the definitions to
specific text domains. Emphasis was placed on strict isolation of the
lexical subsystem from other parts of the translation system.
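The abstract does not give the rule language itself, but the behaviour it describes (ordered differentiation rules, each pairing an environment test against a target rendering, executed by an interpreter) can be roughly sketched. Every name, feature, and target below is a hypothetical stand-in, not the IBM system's actual notation:

```python
def make_entry(rules):
    """A lexicon entry as an ordered list of (environment-test, target)
    rules; the entry itself is executable, as in the abstract."""
    def translate(context):
        for test, target in rules:
            if test(context):
                return target
        return None  # no rule applied
    return translate

# Invented example entry for English "bank": differentiation on a
# semantic feature of a syntactic neighbour (feature names hypothetical).
bank = make_entry([
    (lambda ctx: ctx.get("neighbour_feature") == "+money", "financial bank"),
    (lambda ctx: ctx.get("neighbour_feature") == "+river", "riverbank"),
    (lambda ctx: True, "bank (default sense)"),  # catch-all rule last
])
```

Keeping each entry a self-contained ordered rule list mirrors the design goal in the abstract: a lexicographer can refine one sense without side effects on other entries.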
				   
		     2:00 pm  - December 22, 1988
			 BIC Conference Room
		     Unisys Paoli Research Center
		      Route 252 and Central Ave.
			    Paoli PA 19311
				   
   -- non-Unisys visitors who are interested in attending should --
   --   send email to finin@prc.unisys.com or call 215-648-7446  --
				   

------------------------------

End of NL-KR Digest
*******************