[comp.ai.neural-nets] Neuron Digest V7 #1

neuron-request@HPLMS2.HPL.HP.COM ("Neuron-Digest Moderator Peter Marvit") (01/09/91)

Neuron Digest   Tuesday,  8 Jan 1991
                Volume 7 : Issue 1

Today's Topics:
                              Administrivia
                  what CogSci at Buffalo has been doing
                       ML91 Final Call for Papers


Send submissions, questions, address maintenance and requests for old issues to
"neuron-request@hplabs.hp.com" or "{any backbone,uunet}!hplabs!neuron-request"
Use "ftp" to get old issues from hplpm.hpl.hp.com (15.255.176.205).

------------------------------------------------------------

Subject: Administrivia
From:    "Neuron-Digest Moderator -- Peter Marvit" <neuron@hplabs.hpl.hp.com>
Date:    Tue, 08 Jan 91 17:46:52 -0800

Welcome to a new year of Neuron Digest.  As usual, I've arbitrarily
started a new volume number.  Long-term subscribers will remember that
Vol 1 #1 started on 1 December 1986 under the very capable editorship of
Mike Gately with some 200 recipients.  We are now up to 1000 addresses on
the master mailing list, with at least 100 redistribution points.  The
Digest is also gatewayed to the USENET group comp.ai.neural-nets.  Thus I
estimate the direct readership (via mail) at 2000 and the indirect (via USENET) at
around 40,000.  We certainly have grown -- along with this exciting
field!

For any number of reasons, the Digest has recently had more "Call for
Papers" than individual discussion.  You, the readership, have a voice in
what you see.  Your submissions are what make the Digest.  If you are
content with current contents, keep reading.  If you want to see
something different or more information on some subject, write in!

Best wishes to you all!

        -Peter 

: Peter Marvit, Neuron Digest Moderator
: Courtesy of Hewlett-Packard Labs in Palo Alto, CA 94304  (415) 857-6646 
: neuron-request@hplabs.hp.com    OR   {any backbone}!hplabs!neuron-request

------------------------------

Subject: what CogSci at Buffalo has been doing
From:    talmy@acsu.buffalo.edu (len talmy)
Date:    Wed, 05 Dec 90 13:06:06 -0500

My apologies for taking up part of your e-space, but I thought some of
you might enjoy seeing what CogSci at Buffalo has been doing this
semester.  If not, this message is readily deleted.  Otherwise, below is
a cumulative announcement of events.  And, in any case, greetings.  --Len


                                  The
                        Cognitive Science Center

at SUNY Buffalo is initiating an ongoing series of speakers and
discussions every Wednesday, 2:00-3:30.  Roughly, the first and third
Wednesday of a month will be scheduled for a speaker from one of the
Cognitive Science disciplines.  The second Wednesday will be a business
meeting to plan Center activities.  And the fourth Wednesday might have a
tutorial, a report by a research group, a general discussion on some
pre-announced cross-disciplinary topic, or the like.  The full campus
community is welcome to all of these events, with the business meeting
especially intended for those students and faculty having a specific
interest in Cognitive Science and a desire for active participation in
it.  The Wednesday 2:00-3:30 time slot will remain fixed through future
semesters, so that those wishing to attend Cognitive Science events can
have a stable point around which to try to arrange their future
schedules.

=========================================================================

                  Center for Cognitive Science
                          Upcoming Events

Wednesday, Sep. 26, 2:00-3:30, Park 280:    Topic-Based Symposium + Discussion

          Topic:  "What is Cognitive Science?"

Organizer & Discussion-Leader: Erwin Segal, Department of Psychology

=========================================================================

Wednesday, Oct. 3, 2:00-3:30, Park 280:    Colloquium Presentation

                            Deborah Walters
                     Department of Computer Science

              "Representing Variables in Cognitive Science"

      How can variable information be represented in a cognitive system
in a manner which facilitates such cognitive functions as learning and
categorization?  Some constraints from these functions are that they
enable the system to make fine distinctions between variables, and that
they enable generalizations to be made over broad categories of
variables.  The basic question is whether it is possible to have a
representation which is good for both types of use.  For example, if we
wanted to represent the temperature of lake water, we could use real
numbers to represent the temperature in degrees Celsius, and that would
give us the ability to make fine distinctions -- we could tell which was
the warmer of two very similar temperatures.  But in many cases we would
want just a general idea of the temperature, and a representation using
labels such as "frozen", "cold" and "warm" would be more appropriate.
Is it possible to have a representation which can be used efficiently
for both types of tasks?

     Multiple approaches to the study of this problem will be discussed,
including a computational analysis and the use of our current knowledge
about the representations utilized in the mammalian brain.
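
To make the lake-water example concrete, here is a minimal sketch in
Python of one such dual representation (the band boundaries are
illustrative assumptions, not from the talk): the fine-grained Celsius
value is stored, and the coarse label is derived from it on demand, so a
single stored value serves both kinds of task.

# Dual-granularity temperature representation: store the real value,
# derive the broad category when needed.  Band boundaries are assumed.
FINE_TO_COARSE = [(0.0, "frozen"), (10.0, "cold"), (25.0, "warm")]

def coarse_label(celsius):
    """Map a real-valued temperature onto a broad category."""
    for upper_bound, name in FINE_TO_COARSE:
        if celsius <= upper_bound:
            return name
    return "hot"

lake_a, lake_b = 14.2, 14.9
print(lake_b > lake_a)                              # fine distinction: True
print(coarse_label(lake_a), coarse_label(lake_b))   # coarse view: warm warm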

=========================================================================

Wednesday, Oct. 10, 2:00-3:30, Park 280:    Business Meeting

=========================================================================

Wednesday, Oct. 17, 2:00-3:30, Park 280:    Colloquium Presentation

                           Barbara Tedlock
                       Department of Anthropology

                            "Mayan Cosmology"


By combining linguistic and ethnographic analysis of field research
conducted among the 5 million Mayan-speaking peoples of Mexico, Belize,
and Guatemala with archeological and epigraphic analysis of Classic
Mayan culture (300 A.D. - 900 A.D.) and ethnohistoric research in
colonial Mayan sources, I have begun to tease out a few key Mayan
cosmological concepts.  In this talk I will limit myself to a discussion
of two areas: (1) the nature and type of their directional system, and
(2) certain aspects of their astronomy.

=========================================================================

Wednesday, Oct. 24, 2:00-3:30, Park 280:  Topic-Based Symposium + Discussion

          Topic:  "Categorization:  perspectives and approaches"

Organizer & Discussion-Leader: 

     David Zubin, Department of Linguistics

Presenters:

    David A. Zubin, Department of Linguistics 
        "Why I think categorization  is a cognitive science topic"

    David Wilkins, Department of Linguistics    --Anthropological perspective
        "Some Australian Aboriginal categories"

    Newton Garver, Department of Philosophy     --Philosophical perspective 
        "Family resemblance as a way station 
             along Wittgenstein's way to a theory of language use"

    Lynn Rose, Department of Philosophy         --Philosophical perspective
        "Plato on forms"

    LouAnn Gerken, Department of Psychology     --Psycholinguistic perspective
         "How abstract are children's linguistic categories?"

    Paul Luce, Department of Psychology         --Perceptual perspective  
        "Categorical perception in phonetic processing"

    Nicholas Leibovic, Biophysical Sciences  --Neurophysiological perspective
         "How the retina categorizes the world"

=========================================================================

Thursday, Oct. 25, 5:00 pm, Baldy 619:   Graduate Cognitive Science Club

    First meeting of the Graduate Cognitive Science Club
             (interested faculty also welcome)
     organizer: Valerie Shafer, 
                    graduate student, Linguistics & GA, Cognitive Science

=========================================================================

Wednesday, Oct. 31, 2:00-3:30, Park 280:    Colloquium Presentation

                         Peter Jusczyk
                     Department of Psychology

"From Infant Speech Perception to Word Recognition: Some Steps Along the Way"


For the past 20 years, studies of speech perception capacities in young
infants have revealed that infants possess many sophisticated abilities
for recovering information from the speech signal. Indeed, infants are
able to perceive speech sound contrasts in languages that they have never
heard before. The developmental picture that emerges from such studies is
that in acquiring the sound structure of their native language, infants
move from universal capacities to ones that are attuned to the categories
and regularities that are present in the native language.  Recent
evidence suggests that this process begins early in the first year of
life. This evidence and its implications for the way that word
recognition processes develop will be discussed in this talk.

=========================================================================

Wednesday, Nov. 7, 2:00-3:30, Clemens 204:    Colloquium Presentation
    Please note room change 

                              Kah Kyung Cho
                          Department of Philosophy

                         "Rethinking Intentionality"

        Recently published materials in Heidegger's Complete Works show
how early on and how sharply Heidegger dissented from Husserl's
conception of phenomenology.  The disagreement is poignant because
Heidegger understood his interpretation as still within the spirit of
phenomenological inquiry and as an attempt to further its cause by
radicalizing it.

        The departure from Husserl's ``orthodoxy'' was usually seen in
the context of rejecting his method of ``reduction''.  Thus many others,
including Ingarden, Merleau-Ponty, Alfred Schuetz and Sartre, have
contributed to the snowballing effect of an ``existentialized'' version
of phenomenology which gave up in principle the method of transcendental
reduction.

        The special interest of Heidegger's early attempts lies in his
first nonpolemically extending the notion of ``intentionality'' as
``being-with'' (the world) and then increasingly attacking Husserl's
inability to leave the state of self-encapsulation of the pure
consciousness behind.

        While this Heideggerian discussion of intentionality, conjoined
with the critique of the method of reduction, throws additional light
on the reason why, in spite of himself, Heidegger can be viewed as an
``existential'' philosopher, our purpose is also to recall the merit of
the method of reduction which Husserl, in the closing years of his life,
increasingly complained had been completely misunderstood.

=========================================================================

Wednesday, Nov. 14, 2:00-3:30, Clemens 204:    Business Meeting
    Please note room change

    presentation by Karen Inman & Margie Lerman, Office of Sponsored Programs
                (plus other planning issues, if time permits)

     --to help those considering writing grant proposals and
to facilitate their running such proposals through the Cognitive Science
Center.  Interested students are encouraged to attend as well.

=========================================================================

Wednesday, Nov. 28, 2:00-3:30, Park 280:  Topic-Based Symposium + Discussion
    Note return to Park location

    Topic:  "Space-- How it is Cognized/Conceptualized: 
                      perspectives and approaches"

    Organizer & Discussion-Leader:   David Mark, Department of Geography

    Participants:

David Mark, Department of Geography:  
    "Geography and `Geographic' Space"
Madeleine Mathiot, Department of Linguistics:  
      "Categories of Information in Direction-Giving"
Erwin Segal, Department of Psychology:  
     "Spatial Perception"
Stuart Shapiro, Department of Computer Science:  
    "Spacae in SNePS"
Leonard Talmy, Department of Linguistics:  
    "Aspects of How Language Structures Space"
Joseph Woelfel, Department of Communications:  
    "Spatial Models of Cognitive Processes"

=========================================================================

Wednesday, Dec. 5, 2:00-3:30, Park 280:    Tutorial

                           Charles Frake
                     Department of Anthropology

=========================================================================

For further information: Dawn Phillips, 636-2694, dcp@cs.buffalo.edu
For additional discussion of any kind: Len Talmy, 636-2177,
talmy@acsu.buffalo.edu

You are encouraged to use the Cognitive Science Center e-mail alias:
            Cogsci-all@cs.buffalo.edu
to announce events of general Cognitive Science interest.

------------------------------

Subject: ML91 Final Call for Papers
From:    Lawrence Birnbaum <birnbaum@fido.ils.nwu.edu>
Date:    Tue, 18 Dec 90 12:02:01 -0600


            THE EIGHTH INTERNATIONAL WORKSHOP ON MACHINE LEARNING

                               CALL FOR PAPERS


On behalf of the organizing committee, and the individual workshop committees,
we are pleased to announce submission details for the eight workshop tracks
that will constitute ML91, the Eighth International Workshop on Machine
Learning, to be held at Northwestern University, Evanston, Illinois, USA, June
27-29, 1991.  The eight workshops are:

        o Automated Knowledge Acquisition
        o Computational Models of Human Learning
        o Constructive Induction
        o Learning from Theory and Data
        o Learning in Intelligent Information Retrieval
        o Learning Reaction Strategies
        o Learning Relations
        o Machine Learning in Engineering Automation
        
Please note that submissions must be made to the workshops individually, at
the addresses given below, by March 1, 1991.  The Proceedings of ML91 will be
published by Morgan Kaufmann.  Questions concerning individual workshops
should be directed to members of the workshop committees.  All other questions
should be directed to the program co-chairs at ml91@ils.nwu.edu.  Details
concerning the individual workshops follow.

        Larry Birnbaum
        Gregg Collins

        Northwestern University
        The Institute for the Learning Sciences
        1890 Maple Avenue
        Evanston, IL 60201
        phone (708) 491-3500


- ----------------------------------------------------------------------------


                       AUTOMATED KNOWLEDGE ACQUISITION


Research in automated knowledge acquisition shares the primary objective of
machine learning research: building effective knowledge bases. However, while
machine learning focuses on autonomous "knowledge discovery," automated
knowledge acquisition focuses on interactive knowledge elicitation and
formulation. Consequently, research in automated knowledge acquisition
typically stresses different issues, including how to ask good questions, how
to learn from problem-solving episodes, and how to represent the knowledge
that experts can provide.  In addition to the task of classification, which is
widely studied in machine learning, automated knowledge acquisition studies a
variety of performance tasks such as diagnosis, monitoring, configuration, and
design.  In doing so, research in automated knowledge acquisition is exploring
a rich space of task-specific knowledge representations and problem solving
methods.

Recently, the automated knowledge acquisition community has proposed hybrid
systems that combine machine learning techniques with interactive tools for
developing knowledge-based systems.  Induction tools in expert system shells
are being used increasingly as knowledge acquisition front ends, to seed
knowledge engineering activities and to facilitate maintenance.  The
possibilities of synergistic human-machine learning systems are only beginning
to be explored.
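
As a minimal sketch of this front-end role (a toy illustration in Python,
not a description of any particular shell), one can induce one-attribute
candidate rules from classified cases and present them, with their
support, for an expert to accept, edit, or reject:

# Seed a knowledge base by induction over classified cases; the cases
# and attribute names here are hypothetical.
from collections import Counter, defaultdict

cases = [
    ({"temp": "high", "pressure": "low"}, "fault"),
    ({"temp": "high", "pressure": "ok"},  "fault"),
    ({"temp": "ok",   "pressure": "ok"},  "normal"),
    ({"temp": "ok",   "pressure": "low"}, "normal"),
]

def seed_rules(cases):
    """Propose 'IF attr = value THEN class' for each attribute value,
    using the majority class among the matching cases."""
    tallies = defaultdict(Counter)
    for attrs, label in cases:
        for attr, value in attrs.items():
            tallies[(attr, value)][label] += 1
    for (attr, value), counts in sorted(tallies.items()):
        label, hits = counts.most_common(1)[0]
        print("IF %s = %s THEN %s  (%d/%d cases)"
              % (attr, value, label, hits, sum(counts.values())))

seed_rules(cases)   # the expert now reviews and revises the seeded rules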

This workshop will examine topics that span autonomous and interactive
knowledge acquisition approaches, with the aim of productive cross-
fertilization of the automated knowledge acquisition and machine learning
communities.

Submissions to the automated knowledge acquisition track should address basic
problems relevant to the construction of knowledge-based systems using
automated techniques that take advantage of human input or human-generated
knowledge sources and provide computational leverage in producing operational
knowledge.

Possible topics include:

o Integrating autonomous learning and focused interaction with an
  expert.
o Learning by asking good questions and integrating an expert's
  responses into a growing knowledge base.
o Using existing knowledge to assist in further knowledge acquisition.
o Acquiring, representing, and using generic task knowledge.
o Analyzing knowledge bases for validity, consistency, completeness,
  and efficiency, then providing recommendations and support for revision.
o Automated assistance for theory / model formation and discovery.
o Novel techniques for knowledge acquisition, such as explanation,
  analogy, reduction, case-based reasoning, model-based reasoning,
  and natural language understanding.
o Principles for designing human-machine systems that integrate the
  complementary computational and cognitive abilities of programs and
  users.

Submissions on other topics relating automated knowledge acquisition and
autonomous learning are also welcome. Each submission should specify the basic
problem addressed, the application task, and the technique for addressing the
problem.

WORKSHOP COMMITTEE

Ray Bareiss (Northwestern Univ.)
Bruce Buchanan (Univ. of Pittsburgh)
Tom Gruber (Stanford Univ.)
Sandy Marcus (Boeing)
Bruce Porter (Univ. of Texas)
David Wilkins (Univ. of Illinois)

SUBMISSION DETAILS

Papers should be approximately 4000 words in length.  Authors should submit
six copies, by March 1, 1991, to:

Ray Bareiss
Northwestern University
The Institute for the Learning Sciences
1890 Maple Avenue
Evanston, IL 60201
phone (708) 491-3500

Formats and deadlines for camera-ready copy will be communicated upon
acceptance.


- ----------------------------------------------------------------------------


                    COMPUTATIONAL MODELS OF HUMAN LEARNING


Details concerning this workshop will be forthcoming as soon as possible.


- ----------------------------------------------------------------------------


                           CONSTRUCTIVE INDUCTION


Selection of an appropriate representation is critical to the success of
most learning systems.  In difficult learning problems (e.g., protein folding,
word pronunciation, relation learning), considerable human effort is often
required to identify the basic terms of the representation language.
Constructive induction offers a partial solution to this problem by
automatically introducing new terms into the representation as needed.
Automatically constructing new terms is difficult because the environment or
teacher usually provides only indirect feedback, thus raising the issue of
credit assignment.  However, as learning systems face tasks of greater
autonomy and complexity, effective methods for constructive induction are
becoming increasingly important.
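
As a minimal sketch of the idea (my illustration, not a system from the
workshop), a constructive-induction step over boolean attributes might
propose pairwise conjunctions as candidate new terms and keep whichever
one best separates the classes on the training data:

# Construct a new term as the best class-separating conjunction; the
# attributes and examples are hypothetical.
from itertools import combinations

examples = [
    ({"a": True,  "b": True,  "c": False}, True),
    ({"a": True,  "b": False, "c": True},  False),
    ({"a": False, "b": True,  "c": True},  False),
    ({"a": True,  "b": True,  "c": True},  True),
]

def separation(feature):
    """Fraction of examples classified correctly by predicting positive
    exactly when the candidate feature holds."""
    return sum(feature(attrs) == label
               for attrs, label in examples) / len(examples)

attributes = sorted(examples[0][0])
best = max(combinations(attributes, 2),
           key=lambda p: separation(lambda attrs: attrs[p[0]] and attrs[p[1]]))
print("new term:", best[0], "AND", best[1])   # 'a AND b' on this data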

The objective of this workshop is to provide a forum for the interchange
of ideas among researchers actively working on constructive induction issues.
It is intended to identify commonalities and differences among various
existing and emerging approaches such as knowledge-based term construction,
relation learning, theory revision in analytic systems, learning of hidden
units in multi-layer neural networks, rule creation in classifier systems,
inverse resolution, and qualitative-law discovery.

Submissions are encouraged in the following topic areas:

      o Empirical approaches and the use of inductive biases
      o Use of domain knowledge in the construction and evaluation of new terms
      o Construction of or from relational predicates
      o Theory revision in analytic-learning systems
      o Unsupervised learning and credit assignment in constructive induction
      o Interpreting hidden units as constructed features
      o Constructive induction in human learning
      o Techniques for handling noise and uncertainty
      o Experimental studies of constructive induction systems
      o Theoretical proofs, frameworks, and comparative analyses
      o Comparison of techniques from empirical learning, analytical learning,
        classifier systems, and neural networks

WORKSHOP COMMITTEE

Organizing Committee:                   Program Committee:

Christopher Matheus (GTE Laboratories)  Chuck Anderson (Colorado State)
George Drastal (Siemens Corp.)          Gunar Liepins (Oak Ridge National Lab)
Larry Rendell (Univ. of Illinois)       Douglas Medin (Univ. of Michigan)
                                        Paul Utgoff (Univ. of Massachusetts)

SUBMISSION DETAILS

Papers should be a maximum of 4000 words in length.  Authors should include a
cover page with authors' names, addresses, phone numbers, electronic mail
addresses, paper title, and a 300-word (maximum) abstract.  Do not indicate or
allude to authorship anywhere within the paper.  Send six copies of paper
submissions, by March 1, 1991, to:

Christopher Matheus
GTE Laboratories
40 Sylvan Road, MS-45
Waltham MA 02254
(matheus@gte.com)

Formats and deadlines for camera-ready copy will be communicated upon
acceptance.


- ----------------------------------------------------------------------------


                   LEARNING FROM THEORY AND DATA


Research in machine learning has primarily focused on either (1) inductively
generalizing a large collection of training data (empirical learning) or (2)
using a few examples to guide transformation of existing knowledge into a more
usable form (explanation-based learning).  Recently there has been growing
interest in combining these two approaches to learning in order to overcome
their individual weaknesses.  Preexisting knowledge can be used to focus
inductive learning and to reduce the amount of training data needed.
Conversely, inductive learning techniques can be used to correct imperfections
in a system's theory of the task at hand (commonly called "domain theories").
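
One simple way to picture this combination (a sketch of my own, in the
spirit of rule-weighting approaches rather than any specific system) is
to treat each rule of a domain theory as a feature and let a perceptron
fit the weights, so that data strengthen informative rules and suppress
an incorrect one.  The rules and observations below are hypothetical:

# Data-driven correction of an imperfect domain theory via rule weighting.
rules = {
    "high_temp": lambda x: x["temp"] > 80,
    "low_flow":  lambda x: x["flow"] < 2.0,
    "noisy":     lambda x: x["id"] % 2 == 0,   # a deliberately bad rule
}

data = [
    ({"temp": 90, "flow": 1.0, "id": 0}, 1),
    ({"temp": 85, "flow": 1.5, "id": 1}, 1),
    ({"temp": 60, "flow": 5.0, "id": 2}, 0),
    ({"temp": 55, "flow": 4.0, "id": 3}, 0),
]

weights = {name: 1.0 for name in rules}   # the theory seeds the hypothesis
bias = 0.0

def predict(x):
    score = bias + sum(weights[n] * rules[n](x) for n in rules)
    return 1 if score > 0 else 0

for _ in range(20):                       # perceptron updates
    for x, label in data:
        error = label - predict(x)
        if error:
            bias += 0.5 * error
            for name in rules:
                weights[name] += 0.5 * error * rules[name](x)

print(weights)   # the weight of the bad rule drops below the others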

This workshop will discuss techniques for reconciling imperfect domain
theories with collected data.  Most systems that learn from theory and data
can be viewed from the perspective of both data-driven learning (how
preexisting knowledge biases empirical learning) and theory-driven learning
(how empirical data can compensate for imperfect theories).  A primary goal of
the workshop will be to explore the relationship between these two
complementary viewpoints.  Papers are solicited on the following (and related)
topics:

o Techniques for inductively refining domain theories and knowledge bases.
o Approaches that use domain theories to initialize an incremental, 
  inductive-learning algorithm.
o Theory-driven design and analysis of scientific experiments.
o Systems that tightly couple data-driven and theory-driven learning
  as complementary techniques.
o Empirical studies, on real-world problems, of approaches
  to learning from theory and data.
o Theoretical analyses of the value of preexisting knowledge in inductive 
  learning.
o Psychological experiments that investigate the relative roles 
  of prior knowledge and direct experience.

WORKSHOP COMMITTEE

Haym Hirsh (Rutgers Univ.), hirsh@cs.rutgers.edu
Ray Mooney (Univ. of Texas), mooney@cs.utexas.edu 
Jude Shavlik (Univ. of Wisconsin), shavlik@cs.wisc.edu

SUBMISSION DETAILS

Papers should be single-spaced and printed using 12-point type.  Authors must
restrict their papers to 4000 words.  Papers accepted for general presentation
will be allocated 25 minutes during the workshop and four pages in the
proceedings published by Morgan Kaufmann.  There will also be a poster
session; due to the small number of proceedings pages allocated to each
workshop, poster papers will not appear in the Morgan Kaufmann proceedings.
Instead, they will be allotted five pages in an informal proceedings
distributed at this particular workshop only.  Please indicate your preference
for general or poster presentation.  Also include your mailing and e-mail
addresses, as well as a short list of keywords.

People wishing to discuss their research at the workshop should submit four
(4) copies of a paper, by March 1, 1991, to:

        Jude Shavlik
        Computer Sciences Department
        University of Wisconsin
        1210 W. Dayton Street
        Madison, WI  53706

Formats and deadlines for camera-ready copy will be communicated upon
acceptance.


- ----------------------------------------------------------------------------
                                   

            LEARNING IN INTELLIGENT INFORMATION RETRIEVAL


The intent of this workshop is to bring together researchers from the
Information Retrieval (IR) and Machine Learning (ML) communities to explore
areas of common interest.  Interested researchers are encouraged to submit
papers and proposals for panel discussions.

The main focus will be on issues relating learning to the intelligent
retrieval of textual data.  Such issues include, for example:

 o Descriptive features, clustering, category formation, and
   indexing vocabularies in the domain of queries and documents. 
          + Problems of very large, sparse feature sets.
          + Large, structured indexing vocabularies.
          + Clustering for supervised learning.      
          + Connectionist cluster learning.
          + Content theories of indexing, similarity, and relevance.

 o Learning from failures and explanations:
          + Dealing with high proportions of negative examples.
          + Explaining failures and successes.
          + Incremental query formulation, incremental concept
                 learning.
          + Exploiting feedback (see the sketch after this list).
          + Dealing with near-misses.

 o Learning from and about humans:
          + Intelligent apprentice systems.
          + Acquiring and using knowledge about user needs and
                goals.
          + Learning new search strategies for differing user
                needs. 
          + Learning to classify via user interaction.
      
 o Information Retrieval as a testbed for Machine Learning. 

 o Particularities of linguistically-derived features.
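
As one concrete instance of exploiting feedback, here is a minimal sketch
of the classic Rocchio query reformulation (the weights and toy
term-frequency vectors are assumptions of this illustration): the query
vector moves toward documents judged relevant and away from those judged
non-relevant.

# Rocchio relevance feedback over simple term-weight dictionaries.
def rocchio(query, relevant, nonrelevant,
            alpha=1.0, beta=0.75, gamma=0.15):
    terms = set(query) | {t for d in relevant + nonrelevant for t in d}
    new_query = {}
    for t in terms:
        w = alpha * query.get(t, 0.0)
        if relevant:
            w += beta * sum(d.get(t, 0.0) for d in relevant) / len(relevant)
        if nonrelevant:
            w -= gamma * (sum(d.get(t, 0.0) for d in nonrelevant)
                          / len(nonrelevant))
        if w > 0:                        # keep only positively weighted terms
            new_query[t] = w
    return new_query

query       = {"neural": 1.0, "networks": 1.0}
relevant    = [{"neural": 2.0, "networks": 1.0, "learning": 3.0}]
nonrelevant = [{"neural": 1.0, "anatomy": 4.0}]
print(rocchio(query, relevant, nonrelevant))
# 'learning' enters the reformulated query; 'anatomy' stays out.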

WORKSHOP COMMITTEE

Christopher Owens (Univ. of Chicago), owens@gargoyle.uchicago.edu
David D. Lewis (Univ. of Chicago), lewis@cs.umass.edu
Nicholas Belkin (Rutgers Univ.)
W. Bruce Croft (Univ. of Massachusetts)
Lawrence Hunter (National Library of Medicine)
David Waltz (Thinking Machines Corporation)

SUBMISSION DETAILS

Authors should submit 6 copies of their papers.  Preference will be given to
papers that sharply focus on a single issue at the intersection of Information
Retrieval and Machine Learning, and that support specific claims with concrete
examples and/or experimental data.  To be printed in the proceedings, papers
must not exceed 4 double-column pages (approximately 4000 words).

Researchers who wish to propose a panel discussion should submit 6 copies of a
proposal consisting of a brief (one page) description of the proposed topic,
followed by a list of the proposed participants and a brief (one to two
paragraph) summary of each participant's relevant work.

Both papers and panel proposals should be received by March 1, 1991, at the
following address:

Christopher Owens
Department of Computer Science
The University of Chicago
1100 East 58th Street
Chicago, IL 60637
Phone: (312) 702-2505

Formats and deadlines for camera-ready copy will be communicated upon
acceptance.


- ----------------------------------------------------------------------------

        
                         LEARNING REACTION STRATEGIES


The computational complexity of classical planning and the need for real-time
response in many applications have led many in AI to focus on reactive systems,
that is, systems that can quickly map situations to actions without extensive
deliberation.  Efforts to hand-code such systems have made it clear that when
agents must interact with complex environments the reactive mapping cannot be
fully specified in advance, but must be adaptable to the agent's particular
environment.

Systems that learn reaction strategies from external input in a complex domain
have become an important new focus within the machine learning community.
Techniques used to learn strategies include (but are not limited to):

        o reinforcement learning (see the sketch after this list)
        o using advice and instructions during execution
        o genetic algorithms, including classifier systems
        o compilation learning driven by interaction with the world
        o sensorimotor learning
        o learning world models suitable for conversion into reactions
        o learning appropriate perceptual strategies
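
For concreteness, here is a minimal sketch of the first technique above,
tabular Q-learning, on a toy five-state corridor (entirely an
illustration; the environment and constants are assumptions): the agent
learns a reactive mapping from states to actions from delayed reward
alone.

# Tabular Q-learning on a corridor whose rightmost state pays reward 1.
import random

N_STATES, ACTIONS = 5, (-1, +1)          # actions: step left or right
alpha, gamma, epsilon = 0.5, 0.9, 0.1
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

for episode in range(200):
    s = 0
    while s != N_STATES - 1:             # rightmost state is the goal
        if random.random() < epsilon:    # occasional exploration
            a = random.choice(ACTIONS)
        else:                            # otherwise act greedily
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s_next = min(max(s + a, 0), N_STATES - 1)
        r = 1.0 if s_next == N_STATES - 1 else 0.0
        best_next = max(Q[(s_next, act)] for act in ACTIONS)
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = s_next

# The learned reaction strategy: "step right" in every non-goal state.
print([max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)])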

WORKSHOP COMMITTEE

Leslie Kaelbling (Teleos), leslie@teleos.com
Charles Martin (Univ. of Chicago), martin@cs.uchicago.edu
Rich Sutton (GTE), rich@gte.com
Jim Firby (Univ. of Chicago), firby@cs.uchicago.edu
Reid Simmons (CMU), reid.simmons@cs.cmu.edu
Steve Whitehead (Univ. of Rochester), white@cs.rochester.edu

SUBMISSION DETAILS

Papers must be kept to four two-column pages (approximately 4000 words) for
inclusion in the proceedings.  Preference will be given to submissions with a
single, sharp focus.  Papers must be received by March 1, 1991.

Send 3 copies of the paper to:

Charles Martin 
Department of Computer Science
University of Chicago
1100 East 58th Street
Chicago, IL 60637

Formats and deadlines for camera-ready copy will be communicated upon
acceptance.


- ---------------------------------------------------------------------------


                              LEARNING RELATIONS


In the past few years, there have been a number of developments in empirical
learning systems that learn from relational data.  Many applications (e.g.
planning, design, programming languages, molecular structures, database
systems, qualitative physical systems) are naturally represented in this
format.  Relations have also been the common language of many advanced
learning styles such as analogy, learning plans and problem solving.  This
workshop is intended as a forum for those researchers doing relational
learning to address common issues such as:

Representation: Is the choice of representation a relational language, a
grammar, a plan or explanation, an uncertain or probabilistic variant, or
second-order logic?  How is the choice extended or restricted for the purposes
of expressiveness or efficiency?  How are relational structures mapped into
neural architectures?

Principles: What are the underlying principles guiding the system?  For
instance: similarity measures to find analogies between relational structures
such as plans, "minimum encoding" and other approaches to hypothesis
evaluation, the employment of additional knowledge used to constrain
hypothesis generation, mechanisms for retrieval or adaptation of prior plans or
explanations.

Theory: What theories have supported the development of the system?  For
instance, computational complexity theory, algebraic semantics, Bayesian and
decision theory, psychological learning theories, etc.

Implementation: What indexing, hashing, or programming methodologies have been
used to improve performance and why?  For instance, optimizing the performance
for commonly encountered problems (a la CYC).
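
As a minimal sketch of relational learning (an illustration of mine; real
relation learners search far larger clause spaces), one can enumerate
two-literal chain clauses over known background relations and keep the
one that exactly covers the positive tuples of a target relation.  The
relations and facts below are hypothetical:

# Generate-and-test over chain clauses  target(X,Z) :- r1(X,Y), r2(Y,Z).
relations = {
    "parent": {("ann", "bob"), ("bob", "cal"), ("ann", "deb"), ("deb", "eve")},
    "likes":  {("bob", "eve"), ("cal", "ann")},
}
people = {p for tuples in relations.values() for pair in tuples for p in pair}
target_pos = {("ann", "cal"), ("ann", "eve")}    # grandparent facts

def chain_covers(r1, r2):
    """Tuples (x, z) derivable from the clause body r1(x, Y), r2(Y, z)."""
    return {(x, z) for x in people for z in people
            if any((x, y) in relations[r1] and (y, z) in relations[r2]
                   for y in people)}

for r1 in relations:
    for r2 in relations:
        if chain_covers(r1, r2) == target_pos:
            print("target(X,Z) :- %s(X,Y), %s(Y,Z)." % (r1, r2))
# prints: target(X,Z) :- parent(X,Y), parent(Y,Z).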

The committee is soliciting papers that fall into one of three categories:
Theoretical papers are encouraged that define a new theoretical framework,
prove results concerning programs which carry out constructive or relational
learning, or compare theoretical issues in various frameworks.  Implementation
papers are encouraged that provide sufficient details to allow
reimplementation of learning algorithms, and discuss the key time/space
complexity details motivating the design.  Experimentation papers are
encouraged that compare methods or address hard learning problems, with
appropriate results and supporting statistics.

WORKSHOP COMMITTEE

Wray Buntine (RIACS and NASA Ames Research Center), wray@ptolemy.arc.nasa.gov
Stephen Muggleton (Turing Institute), steve@turing.ac.uk
Michael Pazzani (Univ. of California, Irvine), pazzani@ics.uci.edu
Ross Quinlan (Univ. of Sydney), quinlan@cs.su.oz.au

SUBMISSION DETAILS

Those wishing to present papers at the workshop should submit a paper or an
extended abstract, single-spaced on US letter or A4 paper, with a maximum
length of 4000 words.  Those wishing to attend but not present papers should
send a 1 page description of their prior work and current research interests.

Three copies should be sent to arrive by March 1, 1991 to:

Michael Pazzani
ICS Department
University of California
Irvine, CA 92717  USA

Formats and deadlines for camera-ready copy will be communicated upon
acceptance.


- ---------------------------------------------------------------------------


                  MACHINE LEARNING IN ENGINEERING AUTOMATION


Engineering domains present unique challenges to learning systems, such as
handling continuous quantities, mathematical formulas, and large problem
spaces, incorporating engineering knowledge, and supporting user-system
interaction.
This session concerns using empirical, explanation-based, case-based,
analogical, and connectionist learning techniques to solve engineering
problems such as design, planning, monitoring, control, diagnosis, and
analysis.  Papers should describe new or modified machine learning systems
that are demonstrated with real engineering problems and overcome limitations
of previous systems.

Papers should satisfy one or more of the following criteria:

o Present new learning techniques for engineering problems.
o Present a detailed case study which illustrates shortcomings preventing
  application of current machine learning technology to engineering problems.
o Present a novel application of existing machine learning techniques to an
  engineering problem, indicating promising areas for further application of
  machine learning in engineering.

Machine learning programs being used by engineers must meet complex
requirements.  Engineers are accustomed to working with statistical programs
and expect learning systems to handle noise and imprecision in a reasonable
fashion.  Engineers often prefer rules and classifications of events that are
more general than characteristic descriptions and more specific than
discriminant descriptions.  Engineers have considerable domain expertise and
want systems that enable application of this knowledge to the learning task.

This session is intended to bring together machine learning researchers
interested in real-world engineering problems and engineering researchers
interested in solving problems using machine learning technology.

We welcome submissions including but not limited to discussions of
machine learning applied to the following areas:

        o manufacturing automation
        o design automation
        o automated process planning
        o production management
        o robotic and vision applications
        o automated monitoring, diagnosis, and control
        o engineering analysis

WORKSHOP COMMITTEE

Bradley Whitehall (Univ. of Illinois)
Steve Chien (JPL)
Tom Dietterich (Oregon State Univ.)
Richard Doyle (JPL)
Brian Falkenhainer (Xerox PARC)
James Garrett (CMU)
Stephen Lu (Univ. of Illinois)

SUBMISSION DETAILS

Submission format will be similar to AAAI-91: 12 point font, single-spaced,
text and figure area 5.5" x 7.5" per page, and a maximum length of 4000 words.
The cover page should include the title of the paper, names and addresses of
all the authors, a list of keywords describing the paper, and a short (less
than 200 words) abstract.  Only hard-copy submissions will be accepted (i.e.,
no fax or email submissions).

Four (4) copies of submitted papers should be sent to:

Dr. Bradley Whitehall
Knowledge-Based Engineering Systems Research Laboratory
Department of Mechanical and Industrial Engineering
University of Illinois at Urbana-Champaign
1206 West Green Street
Urbana, IL 61801
ml-eng@kbesrl.me.uiuc.edu

Formats and deadlines for camera-ready copy will be communicated upon
acceptance.



------------------------------

End of Neuron Digest [Volume 7 Issue 1]
***************************************