[comp.ai.nlang-know-rep] NL-KR Digest Volume 3 No. 47

nl-kr-request@CS.ROCHESTER.EDU (NL-KR Moderator Brad Miller) (11/11/87)

NL-KR Digest             (11/11/87 03:05:14)            Volume 3 Number 47

Today's Topics:
        Genie
        dynamic KB restructuring
        Re: NL-KR Digest Volume 3 No. 34 (definite articles)
        RE: NL-KR Digest Volume 3 No. 45
        Can you walk and chew gum at the same time?

        From CSLI Calendar, November 5, 3:6
        Seminar--Planning Coherent Multisentential Text--
        BBN AI Seminar -- Bran Boguraev
        BBN AI Seminar -- Reid Simmons
        CFP - Conference on Machine Translation
        CfP - 1st Australian Knowledge Engineering Congress (Nov. '88)
        
----------------------------------------------------------------------

Date: Thu, 5 Nov 87 12:24 EST
From: Dale Hall <necntc!linus!wdh@ames.arpa>
Subject: Genie

I recently saw reference to the case of a person called "Genie", who
was apparently horribly neglected by her parents. Perhaps it's simple
morbid curiosity, but I would like to locate a case history in the
open literature. Does one exist?

						Dale Hall

------------------------------

Date: Tue, 10 Nov 87 12:15 EST
From: William J. Rapaport <rapaport@cs.Buffalo.EDU>
Subject: dynamic KB restructuring

See:  

Jane Terry Nutter, "Assimilation:  A Strategy for Implementing Self-
Reorganizing Knowledge Bases," Proc. AAAI-87, pp. 449-453.

------------------------------

Date: Fri, 6 Nov 87 06:03 EST
From: Gilbert Cockton <mcvax!hci.hw.ac.uk!gilbert@uunet.uu.net>
Subject: Re: NL-KR Digest Volume 3 No. 34 (definite articles)


In article <8710160048.AA13373@castor.cs.rochester.edu> Claus Tondering <ct@dde.uucp> writes:

>2) Does anybody know about other peculiarities about the definite article?

Rumanian, which is a Romance language resulting from Rome's Gulag on
the Black Sea (the first Australia? :-)), has a definite article which
is a suffix as in Danish, thus 'urs' - bear, 'ursul' - the bear. I know
nothing of any rigorous diachronic linguistics here, but it looks to me
like 'ille' from Latin going on the end. 'ille' works a bit like a
definite article in Latin, hence 'il' in Italian, 'el' in Spanish, and
'le' in French.
-- 
   Gilbert Cockton, Scottish HCI Centre, Ben Line Building, Edinburgh, EH1 1TN
   JANET:  gilbert@uk.ac.hw.hci    ARPA:   gilbert%hci.hw.ac.uk@cs.ucl.ac.uk
		UUCP:	..{backbone}!mcvax!ukc!hwcs!hci!gilbert

------------------------------

Date: Fri, 6 Nov 87 09:14 EST
From: We're all bozos on this bus! <CZICHON%NGSTL1%eg.ti.com@RELAY.CS.NET>
Subject: RE: NL-KR Digest Volume 3 No. 45

Squire Douglas Bonn, let's set the record straight.  There are significant
differences between monkeys and apes.  Monkeys have NOT been taught ASL;
apes have.  Speak no evil.

					C. Czichon
					Allen, Texas

------------------------------

Date: Fri, 6 Nov 87 20:31 EST
From: David Stampe <stampe@uhccux.UUCP>
Subject: Can you walk and chew gum at the same time?


>I never handwrite -- much too slow, I forget what I was thinking.
>I need to write at a rate comparable to measured speech. On the other
>hand I, and almost all people I know, can not compose poetry when
>typing. This is a demonstration of the essential difference between
>poetry and non-poetry.

  Interesting subject.  There seem to be several modes of reading, and
also of thinking.  One mode involves actually pronouncing phrases and
sentences, in real-time, in imagination, and has an upper tempo limit
approximating that of actual pronunciation.
  Another mode, as in speed-reading, is too fast for that.  I'm not
sure it's really linguistic (as opposed to merely conceptual) stuff at
all.  Not everyone can read that fast.  (In fact, not everyone can
read silently.  Children normally seem to read aloud at first, and
only gradually to learn to read silently.  The same seems to be true
in human history.  There's a passage, in ?Augustine, about people
being amazed that ?Jerome could read without moving his lips.  Maybe
the same is true of speaking as well.  Asking very young kids to play
their verbal games of doll or cowboy-and-Indian conversation silently
seems to be equivalent to asking them to stop playing altogether.)
  Yet another mode, as in slow writing, is perhaps so slow that it
requires periodic inner re-pronunciation of the current sentence or
phrase in order to keep one's place.

  This suggests that verbal thought, like speaking, is rhythmic.  The
same, mutatis mutandis, seems true of musical thought.  One can't tell
how, or even whether, a verse scans, or a melody sounds (or how it
plays, in rehearsing an instrument mentally), without reviewing it at
roughly performance tempo.  Some musical tempos can be multiplied, of
course, but one soon hits an upper limit.  (I've seen conductors
speed-reading scores but I suspect all they experience in doing so is
the overall structure, rather like when one flips through a book to
review its overall contents.)  Linguists usually have similar
experiences when phonetically transcribing their speech: they can do
it silently, but only by repeating the phrase at pronounceable tempos.
Indeed, the same seems to be true of mentally rehearsing all sorts of
physical activity.  Some high-jumpers rehearse their approach to the
bar by bobbing their heads and moving their eyes to each spot where
they will place their feet on each step toward the bar, all in the
same rhythm that they use when they actually move.

Furthermore, there seems to be only one available rhythmic channel.
We can follow/imagine/play several voices in a fugue only when they
are part of a single rhythmic structure.  Maybe the reason we can't
follow two or more conversations at once is ultimately because they
aren't in rhythm.  Can anyone *really* listen to music and read at the
same time?  We gesture in rhythm to our speech.  Norman McQuown used
to show a slow-motion film of a family talking at dinner, and everyone
chewed and moved their forks in rhythm to the person talking.
Carrying placards protesting at a military parade one can't keep from
walking in rhythm to the military music.  Walking faster while we're
talking seems to make us talk faster.

Do readers understand text by reconstructing in their minds its
pronounced form, complete with rests, intonations, etc.?  If so, this
suggests that attempts at simulating natural language understanding
(parsing, etc.) may have been missing an important step.

------------------------------

Date: Mon, 9 Nov 87 19:09 EST
From: John Chambers <jc@minya.UUCP>
Subject: Re: Can you walk and chew gum at the same time?

In article <1074@uhccux.UUCP>, stampe@uhccux.UUCP (David Stampe) writes:
> >I never handwrite -- much too slow, I forget what I was thinking.
> >I need to write at a rate comparable to measured speech. On the other
> >hand I, and almost all people I know, can not compose poetry when
> >typing. This is a demonstration of the essential difference between
> >poetry and non-poetry.

Well, most people I know are unable to compose poetry when writing or
talking (:-).  Personally, I feel no difference that makes a difference.
In fact, typing to an editor like I'm doing now, I can stop and think,
back up, revise, and so on.  I just deleted this entire paragraph and
retyped it.  That's harder to do with a pen; impossible when speaking
in person (and hard enough when speaking to a recorder).  Perhaps you
have a bit of subconscious Luddism?  [Oh, well, it's rarely fatal.;-]

>   Interesting subject.  There seem to be several modes of reading, and
> also of thinking.  One mode involves actually pronouncing phrases and
> sentences, in real-time, in imagination, and has an upper tempo limit
> approximating that of actual pronunciation.
>   Another mode, as in speed-reading, is too fast for that.  I'm not
> sure it's really linguistic (as opposed to merely conceptual) stuff at
> all.  Not everyone can read that fast.  

Sure, it's linguistic.  What's being read is language, isn't it?  And
there are many people around who can read at 5 or 6 times normal speaking
speed with full comprehension.  If you can't, it's probably due to lack
of training.  I've always found it frustrating to listen to lectures,
when I can read an hour's worth of speech in 10 or 15 minutes, with
better comprehension most of the time.  After all, when reading, I
can stop and think, reread, or skim past stuff that I already know.
That's real hard to do when listening to speech.

> 			...  There's a passage, in ?Augustine, about people
> being amazed that ?Jerome could read without moving his lips.  Maybe
> it is also true of speaking as well.  

True, there are and always have been a lot of illiterate or marginally
literate people in the world.  That's not a comment on the capabilities 
of humans in general, but merely on the deficiencies of those illiterates.  
Most Americans are couch potatoes; that is no reflection at all on the 
rest of us.

>   This suggests that verbal thought, like speaking, is rhythmic.  The
> same, mutatis mutandis, seems true of musical thought.  One can't tell
> how, or even whether, a verse scans, or a melody sounds (or how it
> plays, in rehearsing an instrument mentally), without reviewing it at
> roughly performance tempo.  

Maybe *you* can't.  Don't generalize to the rest of us.

> Furthermore, there seems to be only one available rhythmic channel.
> We can follow/imagine/play several voices in a fugue only when they
> are part of a single rhythmic structure.  

Just yesterday about this time I was playing a Greek tune with a couple 
of friends, a fast hasapikos, of the sort where the tune is often in 
jig time, while the accompaniment is in 2/4.  I had no trouble at all 
playing the melody on the keyboard of my accordion (in 6/8) while playing 
the chords (in 2/4) on the bass.  I learned to do such things on the 
piano when I was 9, and I have done it often since.  In the Mideastern 
musical circles with which I sometimes associate, people would give you 
a funny look if you suggested that such elementary polyrhythms were difficult. 
Maybe for musical illiterates, but not for a Real Musician.

> Carrying placards protesting at a military parade one can't keep from
> walking in rhythm to the military music.  Walking faster while we're
> talking seems to make us talk faster.

If you were ever involved in a musical production in school, one of
the things you were probably hit with was that it is a big mistake
to walk in time to music, or synchronized with someone else on stage.
Such synchronisation stands out plainly to an audience, and you have
to learn not to do it.  True, matching a rhythm in the environment is
a natural trait.  That's why the directors of school plays have to tell
the kids not to do it.  Most of them learn very quickly; they have little
trouble breaking the reflex.  All you have to do is shame them by saying
that it looks "amateurish", and they learn real fast.

In general, it is a bad idea to look at the behavior of a set of people
handicapped by lack of training or interest, and generalize to all humans.

-- 
John Chambers <{adelie,ima,maynard,mit-eddie}!minya!{jc,root}> (617/484-6393)

------------------------------

Date: Wed, 4 Nov 87 20:08 EST
From: emma@russell.stanford.edu
Subject: From CSLI Calendar, November 5, 3:6

[Extracted from CSLI Calendar]

			    NEW PUBLICATIONS

   The following reports have recently been published.  They may be
   obtained by writing to Trudy Vizmanos, CSLI, Ventura Hall, Stanford,
   CA 94305-4115 or publications@csli.stanford.edu.

   97. Constituent Coordination in  HPSG  
       Derek Proudian and David Goddeau 	

   98. A Language/Action Perspective on the Design of Cooperative Work 
       Terry Winograd                           

   99. Implicature and Definite Reference 
       Jerry R. Hobbs 

   100. Thinking Machines: Can There be? Are we? 
        Terry Winograd                          

   101. Situation Semantics and Semantic Interpretation in
        Constraint-based Grammars  
        Per-Kristian Halvorsen                  

   102. Category Structures 
        Gerald Gazdar, Geoffrey K. Pullum, Robert Carpenter, Ewan Klein,  
        Thomas E. Hukari, Robert D. Levine      

   103. Cognitive Theories of Emotion 
        Ronald Alan Nash                        

   104. Toward an Architecture for Resource-bounded Agents 
        Martha E. Pollack, David J. Israel, and Michael E. Bratman  

   105. On the Relation Between Default and Autoepistemic Logic 
        Kurt Konolige                           

   106. Three Responses to Situation Theory 
        Terry Winograd                          


------------------------------

Date: Fri, 6 Nov 87 15:02 EST
From: Ana C. Dominguez <anad@vaxa.isi.edu>
Subject: Seminar--Planning Coherent Multisentential Text--

Date:  Wednesday November 11
Time:  3:30pm-4:30pm
Place: 7th Floor Small Conference Room


               PLANNING COHERENT MULTISENTENTIAL TEXT 

			    Eduard Hovy
		  USC Information Sciences Institute
			  Marina del Rey, CA


Generating multisentential text is hard. Though most text generators are 
capable of simply stringing together more than one sentence, they cannot 
determine coherent order. Very few programs have been written that attempt 
to plan out the structure of multisentential paragraphs. 

Clearly, the key here is coherence. The reason some paragraphs are coherent 
is that the information in successive sentences follows some pattern of 
inference or of knowledge with which the hearer is familiar. To signal such 
inferences, people usually use relations that link successive sentences in 
fixed ways. This point was made by Hobbs in 78. In 82, McKeown built fixed 
schemas (scripts) for constructing some paragraphs. Around the same time, 
after a wide-ranging linguistic study, Mann proposed that a relatively small 
number of intersentential relations suffices to bind together coherently most 
of the things people tend to speak about. 

The talk will describe a prototype text structurer that is based on the 
inferential ideas of Hobbs, uses Mann's relations, and is more general than 
the schema applier built by McKeown. The structurer takes the form of a 
standard hierarchical expansion planner, in which the relations act as plans 
and their constraints on relation fillers (represented in a formalism similar 
to Cohen and Levesque's work) as subgoals in the expansion. The structurer is 
conceived as part of a general text planner, but currently functions on its 
own and is being tested in two domains: database output and expert system 
explanation. 
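[The core idea -- relations acting as plans whose constraints drive the ordering of sentences -- can be caricatured in a few lines. This is only an illustrative sketch, not Hovy's actual structurer (which does full hierarchical plan expansion); all relation names, fact fields, and the greedy ordering strategy below are invented for the example.]

```python
# Toy sketch of a relation-driven text structurer.  Each "relation" is a
# plan with a constraint saying when two pieces of information may be
# linked; ordering facts so that adjacent pairs satisfy some relation's
# constraint is what yields a coherent sequence.
RELATIONS = [
    # SEQUENCE: state an event, then an event it enables.
    ("SEQUENCE", lambda a, b: b.get("enabled-by") == a["id"]),
    # ELABORATION: state a fact, then a detail about that same fact.
    ("ELABORATION", lambda a, b: b.get("about") == a["id"]),
]

def structure(facts):
    """Greedily order facts so each adjacent pair is licensed by some
    relation; facts no relation can attach start a fresh segment."""
    ordered = [facts[0]]
    remaining = list(facts[1:])
    while remaining:
        last = ordered[-1]
        for fact in remaining:
            for name, constraint in RELATIONS:
                if constraint(last, fact):
                    ordered.append(fact)
                    remaining.remove(fact)
                    break
            else:
                continue  # no relation linked this fact; try the next one
            break
        else:
            # Nothing attaches to the last fact: begin a new segment.
            ordered.append(remaining.pop(0))
    return [f["text"] for f in ordered]

facts = [
    {"id": "f1", "text": "The pump failed."},
    {"id": "f3", "text": "The failure was caused by corrosion.", "about": "f1"},
    {"id": "f2", "text": "The system shut down.", "enabled-by": "f1"},
]
print(structure(facts))
# -> ['The pump failed.', 'The failure was caused by corrosion.',
#     'The system shut down.']
```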

------------------------------

Date: Sun, 8 Nov 87 17:42 EST
From: Marc Vilain <MVILAIN@G.BBN.COM>
Subject: BBN AI Seminar -- Bran Boguraev

                    BBN Science Development Program
                       AI Seminar Series Lecture

    THE USE OF AN ON-LINE DICTIONARY FOR NATURAL LANGUAGE PROCESSING

                             Bran Boguraev
                          Computer Laboratory,
                        University of Cambridge (UK)
          (bkb%computer-lab.cambridge.ac.uk@NSS.Cs.Ucl.AC.UK)

                                BBN Labs
                           10 Moulton Street
                    2nd floor large conference room
                      10:30 am, Friday November 13


This talk is an attempt at a retrospective analysis of the collective
experience stemming from the use of the machine-readable version of the
Longman Dictionary of Contemporary English for natural language processing. It
traces the relationships between specific requirements for lexical data and
issues of making such data available for diverse research purposes. A
particular model of on-line dictionary use is presented, which promotes a
strong separation between the processes of extracting information from
machine-readable dictionaries and using that information within the pragmatic
context of computational linguistics. The talk further analyses some
characteristics of the raw lexical data in electronic sources and outlines a
methodology for making maximal use of such potentially rich, but inherently
unreliable, resources.
-------

------------------------------

Date: Tue, 10 Nov 87 16:11 EST
From: Marc Vilain <MVILAIN@G.BBN.COM>
Subject: BBN AI Seminar -- Reid Simmons

                    BBN Science Development Program
                       AI Seminar Series Lecture

           GENERATE, TEST AND DEBUG: A PARADIGM FOR SOLVING
                 INTERPRETATION AND PLANNING PROBLEMS

                              Reid Simmons
                               MIT AI Lab
                  (REID%OZ.AI.MIT.EDU@XX.LCS.MIT.EDU)

                                BBN Labs
                           10 Moulton Street
                    2nd floor large conference room
                     10:30 am, Tuesday November 17


We describe the Generate, Test and Debug (GTD) paradigm and its use in
solving interpretation and planning problems, where the task is to
find a sequence of events that could achieve a given goal state from a
given initial state.  The GTD paradigm combines associational
reasoning in the generator with causal reasoning in the debugger to
achieve a high degree of efficiency and robustness in the overall
system.  The generator constructs an initial hypothesis by finding
local domain-dependent patterns in the goal and initial states and
combining the sequences of events that explain the occurrence of the
patterns.  The tester verifies hypotheses and, if the test fails,
supplies the debugger with a causal explanation for the failure.  The
debugger uses domain-independent debugging algorithms which suggest
repairs to the hypothesis by analyzing the causal explanation and
models of the domain.

This talk describes how the GTD paradigm works and why its combination
of reasoning techniques enables it to achieve efficient and robust
performance.  In particular, we will concentrate on the actions of the
debugger which uses a "transformational" approach to modifying
hypotheses that extends the power of the "refinement" paradigm used by
traditional domain-independent planners.  We will also discuss our
models of causality and hypothesis construction and the role those
models play in determining the completeness of our debugging algorithms.

The GTD paradigm has been implemented in a program called GORDIUS.  It
has been tested in several domains, including the primary domain of
geologic interpretation, the blocks world, and the Tower of Hanoi
problem.

------------------------------

Date: Fri, 6 Nov 87 16:20 EST
From: Machine.Translation.Journal@NL.CS.CMU.EDU
Subject: CFP - Conference on Machine Translation


                CONFERENCE ON MACHINE TRANSLATION


                        CALL FOR PAPERS


  The   Second   International   Conference  on  Theoretical  and
Methodological Issues in Machine Translation of Natural Languages
will  be held June 12 - 14 at the Center for Machine Translation,
Carnegie-Mellon University, Pittsburgh, PA.

  Contributions are solicited on all topics  related  to  machine
translation, machine-aided translation, and, generally, automatic
analysis and generation of natural language texts, the  structure
of   lexicons   and   grammars,  research  tools,  methodologies,
knowledge representation and  use,  and  theory  of  translation.
Relevant submissions on other topics are also welcome.

  Extended  abstracts  (not exceeding 1,500 words) should be sent
to

    MT Conference Program Committee
    Center for Machine Translation
    Carnegie-Mellon University
    Pittsburgh PA 15213, U.S.A.
    (412) 268 6591

Submission Deadline: February 1, 1988

Notification of Acceptance: March 21, 1988

Final Version Due: April 18, 1988

All submissions will be refereed by the members of the  Program Committee:

Christian  Boitet  (University  of Grenoble)
Jaime Carbonell (Carnegie-Mellon University)
Martin Kay (Xerox  PARC)
Makoto  Nagao (Kyoto University)
Sergei  Nirenburg  (Carnegie-Mellon University)
Victor Raskin  (Purdue University)
Masaru Tomita (Carnegie-Mellon University)

All inquiries should be directed to

    Cerise Josephs
    Center for Machine Translation
    Carnegie-Mellon University
    Pittsburgh, PA 15213 U.S.A.
    (412) 268 6591
    cerise@nl.cs.cmu.edu.ARPA

------------------------------

Date: Mon, 9 Nov 87 21:05 EST
From: ERIC Y.H. TSUI <munnari!aragorn.oz.au!eric@uunet.UU.NET>
Subject: CfP - 1st Australian Knowledge Engineering Congress (Nov. '88)

1ST 
AUSTRALIAN
KNOWLEDGE
ENGINEERING
CONGRESS
NOVEMBER 15TH - 17TH 1988

                           CALL FOR PAPERS

Following the success of the 1st Australian Artificial Intelligence Congress 
in November 1986, Melbourne will be the host to its successor - 
the Australian Knowledge Engineering Congress - in November 1988.

Contributions are invited on every aspect of Knowledge Engineering and 
Knowledge-base technology: Expressions of interest in the program and 
supporting activities are now invited either on the following topics or 
on any related theme:

 	Expert Systems case studies
 	Knowledge Engineering (including Prototyping) methodologies
 	Design and use of Conceptual Schemas
 	Natural Language Interfaces
 	Evaluation of tools and expert systems
 	Role of consultants in Knowledge Engineering
 	Design of Intelligent Tutors and Conversational Advisors
 	Knowledge Source Systems
 	Inference mechanisms

A preliminary indication of interest in offering a paper, management of 
specific streams and/or tutorial presentations should be sent as soon 
as possible to:

Professor B. Garner
DEAKIN UNIVERSITY
VICTORIA 3217
AUSTRALIA

Electronic mail: brian@aragorn.oz

------------------------------

End of NL-KR Digest
*******************