king@rd1632.Dayton.NCR.COM (James King) (03/01/89)
The following is an abridged version of a "survey", or call
for information, which has been sent out via U.S. mail; I am
now posting it to the net to reach anyone who has an
interest but isn't on the mailing list.
------------------------- cut here --------------------------
A Survey on Similarity
Important Address for Return
James A. King
NCR Corporation
WHQ-5E
1700 S. Patterson
Dayton, OH 45479
EMAIL responses are welcome:
j.a.king@dayton.ncr.com
j.a.king%dayton.ncr.com@relay.cs.net
With questions phone:
(513)-445-1090 (days)
(317)-478-5910 (nights)
Any related papers, dissertations, notes, software, etc. are
welcome. All information will be collected and reviewed
with the purpose of producing a report on similarity. This
survey is part of an effort to produce Case-Based Reasoning
mechanisms, conducted under the direction of Robert Simpson
with consulting assistance from Dr. Edwina Rissland and
Dr. Janet Kolodner.
Please return this survey by March 10th if possible.
Acknowledgements to:
- Ray Bareiss, Vanderbilt University
- and a multitude of respondents to the initial request
for participation
Introduction (body of text) excluded ...
This survey is directed towards the following objectives:
- To collect researchers' definitions of, and approaches to
utilizing, similarity metrics;
- To collect example systems which have made an operational
commitment to a particular similarity metric (or suite of
similarity judgments);
- To collect empirical results and judgments as to the
effectiveness of these metrics;
- To establish a space of similarity measurements which
could be factored by domains, tasks, etc.;
- To produce an informative survey report on metrics for
similarity.
- To report during a panel session at the DARPA-sponsored
Case-Based Reasoning Workshop to be held May 31 - June 2
in Pensacola, Florida.
Thank you for your participation.
____________________________________________________________
Survey on Similarity
General Information
This survey consists of four parts:
I. General Questions
II. Survey Questions
III. Optional "Open" Questions
IV. Request for supporting information
____________________________________________________________
I. General Questions
Name:
Position:
Organization:
Address:
Phone:
EMAIL Address:
1. What is your primary research area?
2. Is similarity assessment an important aspect of your
work?
3. In which domains are you applying (and assessing)
similarity measurements?
4. What software have you designed, implemented, or
directed to be built which involves similarity
assessment? On which platforms, and with which
languages, or tools, have these systems been
constructed?
II. Survey Questions
1. In your opinion what makes one case (i.e., an object,
situation, or event) similar to another? In other
words, what about a new case reminds you of a past
experience, object, sense, etc.?
2. What forms should measurements of similarity take
(e.g., quantitative and qualitative)? How should they
be processed? How should quantitative and qualitative
methods be combined?
3. How should similarity be assessed when case
descriptions are not uniform (i.e., when similar
information is provided by non-identical features)?
How important is this problem?
4. How much inferential effort is worthwhile for
determining the equivalence of non-identical features?
How should a system determine the amount?
5. How should features be weighted with respect to
importance during similarity assessment?
6. What commonalities and differences exist between case
retrieval (i.e., recalling a potentially similar
experience) and similarity assessment? Are these
distinct processes?
7. Have you (formally or informally) compared different
methods of similarity assessment (e.g., additive vs.
multiplicative similarity functions)?
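To make question 7 concrete, here is a minimal sketch of one
common reading of "additive vs. multiplicative" similarity
functions over weighted feature matches. The feature names and
weights are hypothetical illustrations, not drawn from any
system mentioned in this survey.

```python
# Illustrative sketch only. Features and weights below are
# hypothetical examples for the sake of the comparison.

def additive_similarity(case_a, case_b, weights):
    # Sum the weights of features on which the two cases agree,
    # normalized by the total available weight.
    total = sum(weights.values())
    matched = sum(w for f, w in weights.items()
                  if case_a.get(f) == case_b.get(f))
    return matched / total

def multiplicative_similarity(case_a, case_b, weights):
    # Each mismatched feature scales the score down by a factor
    # tied to that feature's weight, so a single heavily weighted
    # mismatch can dominate the final judgment.
    score = 1.0
    for f, w in weights.items():
        if case_a.get(f) != case_b.get(f):
            score *= 1.0 - w
    return score

weights = {"domain": 0.5, "goal": 0.3, "outcome": 0.2}
a = {"domain": "legal", "goal": "settle", "outcome": "win"}
b = {"domain": "legal", "goal": "settle", "outcome": "lose"}
print(additive_similarity(a, b, weights))        # agreement on most of the weight
print(multiplicative_similarity(a, b, weights))  # one low-weight mismatch
```

The two forms behave very differently as mismatches accumulate:
the additive score degrades linearly, while the multiplicative
score collapses quickly, which is one practical axis along which
respondents might compare them.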
III. Open Questions:
1. In which domains are reminding processes and
measurements of similarity applicable? (In other
words, in which domains is case-based reasoning
applicable?)
2. What forms can verification of a reminding process
take?
3. What role should statistics (e.g., Bayesian analysis)
play in similarity assessment?
4. Are exemplars ground instances of categories defined in
terms of observable features, or can their features be
abstracted?
5. What role should neural networks play in research on
similarity assessment (e.g., does this model provide a
more compelling explanation of the phenomena being
discussed)?
IV. Other Information:
1. Please list papers you have written which discuss
similarity assessment. If possible, attach copies of
these papers or abstracts.
2. Please suggest other researchers (names and addresses)
who could provide opinions on this topic.
3. Please make general comments on the survey (e.g., which
questions should we have asked?).