[comp.ai.vision] Vision-List delayed redistribution

Vision-List-Request@ADS.COM (Vision-List moderator Phil Kahn) (04/21/88)

Vision-List Digest	Wed Apr 20 16:06:24 PDT 1988

 - Send submissions to Vision-List@ADS.COM
 - Send requests for list membership to Vision-List-Request@ADS.COM

Today's Topics:

 *** Change in VISION LIST moderator
 Character recognition
 Digitizer boards for Q-bus
 contrast and size
 Recording Visual Images


------------------------------

Date: Wed, 20 Apr 88 14:42:16 PST
From: Vision-List-Request@ads.com <Vision List moderator>
Subject: *** Change in VISION LIST moderator

To the Readership:

	Just to let you know, there has been a changing of the guard. Tod
Levitt, the moderator, protector, and champion of this Vision List for
the past several years, has decided to buy a sailboat and travel around
the world. (Actually, Tod's not REALLY doing that; he just thought a
change in moderatorship was due.) His efforts in expanding the
readership (now into the thousands) have made this List the primary
forum for conversation among Vision and Image Processing researchers
and practitioners. Tod's presence and valuable input will continue in
his submissions to this List.

	The list will continue to operate as before. That is, mail 
Vision List submissions to VISION-LIST@ADS.COM.  Administrative
mail (e.g., adding/deleting you from the list, problems in receiving 
the List) should continue to be directed to VISION-LIST-REQUEST@ADS.COM.
Please notify me of any problems you have; this changeover is likely 
to cause at least a few glitches.


	Since Tod assumed the moderatorship in 1985, I have been
encouraged by the solidity and diversity of the readership. Readers
vary from newcomers to the field to well-established researchers in
vision and/or image processing. The bulk of the submissions to the
List is made up of seminar announcements, requests for literature, and
specific system questions.  This role is important in helping us keep
abreast of the field and it provides us with a rapid way to answer
questions by asking a large group.

	Yet, this List is not being used to its full advantage.
When I founded this List in 1984, I had hoped there would be more
technical dialogue on vision and IP-related issues. In part, the
historically more limited role of this List is due to the great
diversity in the technical background of the readership and the
chronic time pressures most of us must endure.  Even within these
constraints, I believe that submissions to this List can be expanded
to substantively address other important issues. I encourage
all of you to consider how you can more effectively use this List and
its readers to solve and discuss common problems and issues.
Comments to me or submissions to the List could get this going.

	Phil Kahn


----------------------------------------------------------------------

From: oltz@tcgould.tn.cornell.edu (Michael Oltz)
Date: 23 Mar 88 19:27:08 GMT
Organization: Theory Center, Cornell U., Ithaca NY


Frequently-asked question #487 :-)
What are some good references re character recognition, particularly
re arbitrary typeset fonts?  Algorithms requiring training for each
font would be fine, but forgiving algorithms would be helpful too.
Please respond by email and I will summarize.
 
Mike Oltz   oltz@tcgould.tn.cornell.UUCP  (607)255-8312
Cornell Computer Services
215 Computing and Communications Center
Ithaca NY  14853

------------------------------

Date: Mon, 11 Apr 88 14:38:18 CDT
From: dyer@cs.wisc.edu (Chuck Dyer)
Subject: Digitizer boards for Q-bus

What vendors sell digitizer boards for the Q-bus (for Vaxstations)?

-- Chuck Dyer
dyer@cs.wisc.edu

[ All I know about is the MicroVAX II/Q-bus board(s) by Data Translation
  (617) 481-3700; I've no experience with them. Let us know what you find.
	-pk-	]

------------------------------

From: munnari!latcs1.oz.au!suter@uunet.uu.net (David Suter)
Subject: contrast and size
Keywords: classical size illusion
Date: 12 Apr 88 11:30:46 GMT
Organization: Comp Sci, La Trobe Uni, Australia

I am seeking pointers or comments on variations upon what I believe is
called the classical size illusion: a dark square on a light
background - or the reverse - seems to be larger than it really is,
and this apparent size increases with increasing contrast.
Specifically, I am interested in whether one can deduce that a 1-D
version of this (a stripe instead of a square) induces the same effect.
Furthermore, I am interested in explanations of this illusion. I am
aware of the explanations mentioned in van Erning et al. (Vision
Research Vol. 28 No. 3) and wonder if any radically different
explanations have been proposed.

I have tried using this newsgroup for commentary and pointers before -
without success. Is anybody out there? Or is it just that my
queries are not of interest to subscribers? Any comments etc.
would be welcome.

[ I believe this question is well within the bounds of this List. You 
  may also want to post this type of question to the Psychology
  bboards/digests.  		-pk-	]

d.s.

David Suter                            ISD: +61 3 479-1311
Department of Computer Science,        STD: (03) 479-1311
La Trobe University,                ACSnet: suter@latcs1.oz
Bundoora,                            CSnet: suter@latcs1.oz
Victoria, 3083,                       ARPA: suter%latcs1.oz@uunet.uu.net
Australia                             UUCP: ...!uunet!munnari!latcs1.oz!suter
                                     TELEX: AA33143
                                       FAX: 03 4785814

------------------------------

Date: Thu, 14 Apr 88 11:45:54 MDT
From: Wahner Brooks <wbrooks@yuma.arpa>
Subject: Recording Visual Images

Greetings,

	Can anyone provide me with recommendations for film/lens/filters and
exposure durations that would record an image as the "average" human
eye would see it under both photopic and scotopic conditions?  Data
are required for still and motion cameras (and, if anyone has worked
with it, video).  Leads to individuals or organizations working on this
problem would be useful.

Thank you.     Wahner Brooks  <wbrooks@yuma.arpa>
		602-328-2135

[ Contrast sensing after dark adaptation is scotopic (with reduced ability
  to perceive color); light-adapted sensing is photopic. Page 46 of
  "Digital Picture Processing," by Rosenfeld and Kak, describes these 
  phenomena. The index for the Optical Society of America or the
  Science Citation Index might help...
		-pk-	]

------------------------------

End of VISION-LIST
********************

Vision-List-Request@ADS.COM (Vision-List moderator Phil Kahn) (06/22/88)

Vision-List Digest	Tue Jun 21 11:54:52 PDT 1988

 - Send submissions to Vision-List@ADS.COM
 - Send requests for list membership to Vision-List-Request@ADS.COM

Today's Topics:

 Matrox Comments
 Vicom, Datacube comments
 Medical Image Processing?

----------------------------------------------------------------------

Date: Fri 17 Jun 88 10:59:19-PDT
From: Gerard Medioni <MEDIONI%DWORKIN.usc.edu@oberon.USC.EDU>
Subject: MATROX Comments

Regarding the digitizers for the PC, here at USC we have bought a few 
MATROX boards and are very happy with them. 
  They cost about $1200, give you 512*512*8 (actually 512*480), and come 
with 1 Meg of on-board memory, organized into 4 512*512 planes. 
  There is a decent library of low-level routines that are callable from
C and Fortran.


------------------------------

Date: Mon, 20 Jun 88 16:25:24 EDT
From: Matthew Turk <turk@media-lab.media.mit.edu>
Subject: Vicom, Datacube comments


marc@acf8.nyu.edu writes:
  >We are considering the purchase of some of the Datacube Max Video
  >VME-bus image processing boards for use in a Sun environment.  We
  >currently have a VICOM-1, and would like the new facility to support
  >VICOM-like commands, support vision studies, and support real-time
  >robotic applications.
  > ....

I used a Vicom for almost three years and have been working with a
Datacube system for the past six months, so I'll chip in my $0.02
worth.  First the Vicom....

The Vicom I used was their Versados-based version (is that VICOM-1, I
forget?), not the newer VMEbus/Sun version.  It was used as the 
vision system for an autonomous vehicle, processing video and range
images to do things like find and describe the road and obstacles.
(In fact, two Vicoms are now being used in this system.)  The machine
was equipped with digitizers and framestores, a bunch of image memory,
a 3x3 convolution box, a point processor, mouse and graphics interface,
and a fast histogram board.  Things generally happen at frame rate.
My impression is that the hardware was good, but rather inflexible.
What could be done with the machine was sorta limited unless you
wanted to hack very low-level driver stuff.  The software that comes
with the Vicom was reasonably good -- once we got the thing up and
running, we could do all kinds of image processing from their canned
routines, and write Pascal programs to do others.   Programming the
thing wasn't too hard -- again, unless you wanted to do something
different than what was provided.  The development environment is
another story -- it was atrocious!  Although I complained at the time,
I must admit that Vicom's service was pretty good.

The Datacube system that I'm currently working with is sort of the
opposite of the Vicom.  Its hardware seems to be pretty hot, and can
be configured to do just about anything you are clever enough to think
of.  However, it may take you months to figure out the thing well
enough to digitize an image, store it, convolve it, and display the
output!  It is clearly a hardware product, and the software is up to
you.  The good thing is that you never have to worry about coding
obscure low-level driver stuff -- Datacube provides thorough
interfacing software -- but you certainly have to worry about what to
do with the medium-low-level routines you have available, how to
configure data paths in software and via bus cables, how to deal with
interrupts and pipeline delays, etc.  For example, I have been spending
a great deal of time and effort trying to avoid crashing my Sun every
time I initialize the ``Roistore'' board!  With this and other
problems I have had little help from Datacube.  They seem to be much
more concerned with selling than supporting -- I hope this will
improve as more of us complain.

There's a group across the street at the AI Lab using a Datacube/Sun
to track a bouncing ball in real-time for such things as juggling and
ping-pong playing robots.  Their current simple vision system (using
five Datacube boards for two cameras) runs at frame rate.  Another
group at Harvard is using Datacube for tracking objects in real-time.
(Important to both of these projects is the Featuremax board....)  Our
group at the Media Lab is starting to get almost real-time pyramids
running.  The arithmetic depends on the board, but most of it is 8- or
16-bit.  (I think the 8x8 convolution kernel elements are 8-bit,
output is 16-bit.)  I don't know of any commercially available
software packages for the Datacube -- someone please speak up if you
do!  There is a group at Boeing who have done quite a bit with their
Datacube equipment, so it is possible to develop a VICOM-like command
set -- but at this point it's a *big* job.

I'd love to hear other opinions on this.....

	Matthew Turk
	MIT Media Lab


------------------------------

Date: 21 Jun 88 03:34:51 GMT
From: sher@sunybcs.uucp.arpa (David Sher)
Subject: Medical Image Processing?
Keywords: Medical Instrumentation
Organization: SUNY/Buffalo Computer Science


I am involved in a small project (that I hope will grow into a large project)
on medical image processing (in particular on echocardiographic data).
I am also interested in other topics on medical instrumentation.
However my expertise lies in computer perception rather than medicine.
Anyway, is there a mailing list or discussion group that is particularly
relevant to this topic?  

Of particular interest to me are the issues of:
1. Funding sources for such work.
2. Places where work on this topic is published.  
   (There is some but not a lot of such work documented in the typical 
   computer vision literature such as CVGIP and PAMI.)
3. Ways to learn more about the topic. (Would it be a good idea to take 
   a course on radiology or would it be just a waste of time?)
4. What other people out there are doing about medical imaging.
5. Successes and failures in collaborations between computer scientists
   and MD's.

-David Sher, Ph.D., Computer Science
ARPA: sher@cs.buffalo.edu	BITNET: sher@sunybcs
UUCP: {rutgers,ames,boulder,decvax}!sunybcs!sher

------------------------------

End of VISION-LIST
********************

Vision-List-Request@ADS.COM (Vision-List moderator Phil Kahn) (07/16/88)

Vision-List Digest	Fri Jul 15 11:58:02 PDT 1988

 - Send submissions to Vision-List@ADS.COM
 - Send requests for list membership to Vision-List-Request@ADS.COM

Today's Topics:

 Motion vision hardware
 Updated call for Papers, Israeli AI Conference

----------------------------------------------------------------------

Date: Thu, 14 Jul 88 08:57:55 PDT
From: John B. Nagle <jbn@glacier.stanford.edu>
Subject: Motion vision hardware

    Special-purpose hardware seems to be a necessity for real-time  
motion vision.  (Yes, one might be able to do it on a supercomputer,
a Connection Machine, or a GigaLogician, but that route is too
expensive for more than brief experiments.)  But, as yet, not much
suitable hardware is available.  On the other hand, the PRISM work
at MIT and Nishihara's later work at Schlumberger indicate that such
hardware can be built at relatively modest cost.  Is there sufficient
demand for such hardware to justify manufacturing it?  Are there
applications in sight?  

					John Nagle

------------------------------

From: Shmuel Peleg <peleg%humus.Huji.AC.IL@CUNYVM.CUNY.EDU>
Date: Thu, 14 Jul 88 22:34:57 JDT
Subject: Please Post: Call for Papers, Israeli AI Conference


                Call For Papers


Fifth Israeli Conference on Artificial Intelligence
Tel-Aviv, Ganei-Hata`arucha,
December 27-28, 1988


The Israeli Conference on Artificial Intelligence is the annual meeting
of the Israeli Association for Artificial Intelligence, which is a SIG
of the Israeli Information Processing Association.  Papers addressing
all aspects of AI, including, but not limited to, the following
topics, are solicited:

        - AI and education
        - AI languages, logic programming
        - Automated reasoning
        - Cognitive modeling
        - Expert systems
        - Image processing and pattern recognition
        - Inductive inference, learning and knowledge acquisition
        - Knowledge theory, logics of knowledge
        - Natural language processing
        - Computer vision and visual perception
        - Planning and search
        - Robotics

This year, the conference is held in cooperation with the SIG on
Computer Vision and Pattern Recognition,  and in conjunction with the
Tenth Israeli Conference on CAD and Robotics.  There will be a special
track devoted to Computer Vision and Pattern Recognition.  Joint
activities with the Conference on CAD and Robotics include the
opening session, a session on Robotics and AI, and the exhibition.

Submitted papers will be refereed by the program committee, listed
below.  Authors should submit 4 camera-ready copies of a full paper or
an extended abstract of at most 15 A4 pages.  Accepted papers will
appear without revision in the proceedings.  Submissions prepared on a
laser printer are preferred.  The first page should contain the title,
the author(s), affiliation, postal address, e-mail address, and
abstract, followed immediately by the body of the paper.  Page numbers
should appear at the bottom center of each page.  Use 1-inch margins
and a single-column format.

Submitted papers should be received at the following address by
October 1st, 1988:

        Ehud Shapiro
        5th ICAI
        The Weizmann Institute of Science
        Rehovot 76100, Israel

The conference program will be advertised at the end of October.  It
is expected that 30 minutes will be allocated for the presentation of
each paper, including question time.


Program Committee

Moshe Ben-Bassat, Tel-Aviv University (B25@taunivm.bitnet)
Martin Golumbic, IBM Haifa Scientific Center
Ehud Gudes, Ben-Gurion University (ehud@bengus.bitnet)
Tamar Flash, Weizmann Institute of Science
Yoram Moses, Weizmann Institute of Science
Uzzi Ornan, Technion
Shmuel Peleg, Hebrew University (peleg@humus.bitnet)
Gerry Sapir, ITIM
Ehud Shapiro (chair), Weizmann Institute of Science (udi@wisdom.bitnet)
Jeff Rosenschein, Hebrew University (jeff@humus.bitnet)
Shimon Ullman, Weizmann Institute of Science (shimon@wisdom.bitnet)
Hezy Yeshurun, Tel-Aviv University (hezy@taurus.bitnet)

Secretariat

Israeli Association for Information Processing
Kfar Hamacabia
Ramat-Gan 52109, Israel


------------------------------

End of VISION-LIST
********************

Vision-List-Request@ADS.COM (Vision-List moderator Phil Kahn) (08/02/88)

Vision-List Digest	Mon Aug  1 12:31:07 PDT 1988

 - Send submissions to Vision-List@ADS.COM
 - Send requests for list membership to Vision-List-Request@ADS.COM

Today's Topics:

 -- Thesis defense: Bayesian Modeling of Uncertainty in Low-Level Vision
 -- Automated supernova search people need advice
 -- Workstation Questions
 -- NETWORK COMPUTING FORUM - CALL FOR PAPERS
 -- FIRST ANNUAL MEETING OF THE INTERNATIONAL NEURAL NETWORK SOCIETY 
 -- Public Domain Sun Image Processing Software
 -- multidisciplinary conference on VISION, PROCESSING, and DISPLAY

----------------------------------------------------------------------

Date: Mon, 18 Jul 1988 09:34-EDT 
From: Richard.Szeliski@IUS2.CS.CMU.EDU
Subject: Thesis defense: Bayesian Modeling of Uncertainty in Low-Level Vision

	Bayesian Modeling of Uncertainty in Low-Level Vision
		       [ Thesis defense ]

			Richard Szeliski
		  Computer Science Department
		  Carnegie Mellon University

		   July 28, 1:00pm, WeH 5409

			    ABSTRACT

Over the last decade, many low-level vision algorithms have been devised for
extracting depth from one or more intensity images.  The output of such
algorithms usually contains no indication of the uncertainty associated with
the scene reconstruction.  In other areas of computer vision and robotics,
the need for such error modeling is becoming recognized, both because of the
uncertainty inherent in sensing, and because of the desire to integrate
information from different sensors or viewpoints.

In this thesis, we develop a new Bayesian model for the dense fields such as
depth maps or optic flow maps that are commonly used in low-level vision.
The Bayesian model consists of three components:  a prior model, a sensor
model, and a posterior model.  The prior model captures any a priori
information about the structure of the dense field.  We construct this model
by using the smoothness constraints from regularization to define a Markov
Random Field.  By applying Fourier analysis to this prior model, we show
that the class of functions being modeled is fractal.  The sensor model
describes the behaviour and noise characteristics of our measurement system.
We develop a number of sensor models for both sparse depth measurements and
dense flow or intensity measurements.  The posterior model combines the
information from the prior and sensor models using Bayes' Rule, and can be
used as the input to later stages of processing.  We show how to compute
optimal estimates from the posterior model, and also how to compute the
uncertainty (variance) in these estimates.

This thesis applies Bayesian modeling to a number of low-level vision
problems.  The main application is the on-line extraction of depth from
motion.  For this application, we use a two-dimensional generalization of
the Kalman filter to convert the current posterior model into a prior model
for the next estimate.  The resulting incremental algorithm provides a dense
on-line estimate of depth whose uncertainty and error are reduced over time.
In other applications of Bayesian modeling, we use the Bayesian
interpretation of regularization to choose the optimal smoothing parameter
for interpolation; we use a Bayesian model to determine observer motion from
sparse depth measurements without correspondence; and we use the fractal
nature of the prior model to construct multiresolution relative surface
representations.  The approach to uncertainty modeling which we develop, and
the utility of this approach in various applications, support our thesis
that Bayesian modeling is a useful and practical framework for low-level
vision.
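
For readers who want the basic mechanics, below is a minimal per-pixel
sketch in C of the kind of prior/sensor fusion and incremental
(Kalman-style) update described above.  The spatial MRF coupling between
pixels, which is central to the thesis, is omitted, and the variable
names and noise values are illustrative rather than taken from the work.

/*
 * Per-pixel depth estimate with uncertainty, under Gaussian assumptions.
 * predict() carries the old posterior forward as the new prior (variance
 * grows by a process-noise term); update() fuses the prior with a new
 * noisy measurement using Bayes' Rule, i.e. a scalar Kalman update.
 */
#include <stdio.h>

struct estimate {
    double depth;                  /* current depth estimate at one pixel */
    double var;                    /* its uncertainty (variance)          */
};

static void predict(struct estimate *e, double process_var)
{
    e->var += process_var;         /* prediction only inflates the variance */
}

static void update(struct estimate *e, double meas, double meas_var)
{
    double k = e->var / (e->var + meas_var);    /* Kalman gain */
    e->depth += k * (meas - e->depth);
    e->var    = (1.0 - k) * e->var;             /* posterior variance shrinks */
}

int main(void)
{
    struct estimate e = { 0.0, 1e6 };           /* vague initial prior */
    double meas[5] = { 4.1, 3.9, 4.2, 4.0, 4.05 };
    int t;

    for (t = 0; t < 5; t++) {
        predict(&e, 0.01);                      /* small motion/process noise */
        update(&e, meas[t], 0.25);              /* noisy depth measurement    */
        printf("t=%d  depth=%.3f  var=%.4f\n", t, e.depth, e.var);
    }
    return 0;
}

The shrinking variance printed at each step is the sense in which "the
uncertainty and error are reduced over time" in the abstract.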

------------------------------

Date: Fri, 29 Jul 88 07:46:22 PDT
From: John B. Nagle <jbn@glacier.stanford.edu>
Subject: Automated supernova search people need advice

    This is forwarded from USENET; please reply to "beard@ux1.lbl.gov", not me.

>From: beard@ux1.lbl.gov (Patrick C Beard)
Subject: Workstation Questions
Date: 28 Jul 88 23:41:21 GMT
Distribution: comp.sys.workstations,comp.edu,comp.graphics,comp.os.vms
Organization: Lawrence Berkeley Laboratory, Berkeley
Summary: Questions about available workstations, accelerating mVax


Hello everybody.

My group is conducting an automated search for supernovae and is
in the market for upgrading the computer that is the heart of the 
system.  We require a system that outperforms our current computer,
a Microvax, by at least a factor of 4, and hopefully a factor of 10.

I am submitting this message to the network community to ask three
questions:

1)  What machines are there in the workstation class that can
outperform a Microvax by a factor of 4 to 10?  (Please briefly describe
cost, speed, and manufacturer.)

2)  Alternatively, what options exist for speeding up a Microvax?
Are there accelerator boards, processor upgrades, anything you can
think of?

3)  What image processing systems are available?  Commercial or public
domain, source code included, optimized for what hardware, and how easy
are they to modify or extend for special purpose use (such as astronomical
work)?

You may answer me directly, or via the net.  I'm sure there are a lot
of people who could benefit from this information.

Thanks in advance,

+=============================================================+
|                    Patrick C. Beard                         |
|              Lawrence Berkeley Laboratory                   |
|               Automated Supernova Search                    |
+-------------------------------------------------------------+
|              PCBeard@LBL.gov (arpa only)                    |
+=============================================================+



------------------------------

Date: 29 Jul 88 12:31 PDT
From: William Daul / McAir / McDonnell-Douglas Corp  <WBD.MDC@OFFICE-8.ARPA>
Author: Beverly Pieper <BKP.MDC@office-8.arpa>
Subject: NETWORK COMPUTING FORUM - CALL FOR PAPERS

NETWORK COMPUTING FORUM

   CALL FOR PAPERS

   OCTOBER 5-8, 1988

   HOLIDAY INN WESTPORT, ST. LOUIS, MISSOURI

The next meeting of the Network Computing Forum will be held on October 5-7 in 
St. Louis, Missouri.  This will be the fourth meeting of the Forum, and will 
focus on the role of the Forum as a catalyst for change in the industry.  The 
Forum is an industry group chartered to lead the way for rapid adoption of 
multi-vendor network computing concepts and technologies.  Forum meetings allow
representatives from users and vendors to work together on common issues in an 
open, informal atmosphere.  The Forum has over 100 member organizations, and 
more than 220 representatives attended the May 1988 meeting.

Forum meetings are organized into three sessions:  a conference featuring 
invited papers and panel sessions, meetings of interest groups and working 
groups, and a policy making executive committee meeting.  Some areas of 
interest to the Forum member organizations are listed, to suggest possible 
topics for papers:

   Definition of user requirements for network computing

   Practical experiences using network computing concepts & technologies

   Partitioning and/or integration of applications across networks

   Remote procedure calls and other core services for network computing

   System and network administration for networks of heterogeneous computers

   User interfaces and user environments for network computing

   Software licensing in a network environment 

   Data representation and command scripting across heterogeneous networks

   Use of network computing with IBM mainframes (MVS and VM)

Invited Papers

   As part of each Forum meeting, papers are invited from the community at 
   large for presentation and discussion.  These papers should address the use 
   or development of network based applications and services.  Emphasis should 
   be placed on creating and using tightly coupled links between multiple, 
   heterogeneous computer systems.  Technical descriptions of research 
   projects and user experiences, as well as commercially available products, 
   are welcome.  Invitations are also extended for more informal talks on practical
   experience in administering heterogeneous computer networks.  All 
   presentations should be 35 minutes in length, with 15 minutes of discussion 
   following each presentation.

   Abstracts must be received by August 10, 1988.  Abstracts should summarize 
   the paper in two or three paragraphs and include the mailing address, 
   affiliation, and phone number of the author(s).  Notification of abstracts 
   selected will be sent on August 19, 1988 and papers must be submitted no 
   later than September 20, 1988.  Papers can be copyrighted, but must include 
   authorization for unrestricted reproduction by the Network Computing Forum. 
   Papers can be marked as working papers to allow future publication.

SEND ABSTRACTS BY AUGUST 10, 1988 TO the Program Chairman for the October 1988 
meeting:

   T.D.  Carter
   c/o Jan McPherson
   McDonnell Douglas Travel Company
   944 Anglum Drive, Suite A
   Hazelwood, MO 63042
   (314)  233-2951
   Internet Address:  TDC.MDC@OFFICE-8.ARPA


------------------------------

Date: Mon, 1 Aug 88 12:08:42 EDT
From: mike%bucasb.bu.edu@bu-it.BU.EDU (Michael Cohen)
Subject: FIRST ANNUAL MEETING OF THE INTERNATIONAL NEURAL NETWORK SOCIETY 

MEETING UPDATE:

September 6--10, 1988 
Park Plaza Hotel 
Boston, Massachusetts 

The first annual INNS meeting promises to be a historic event. Its program 
includes the largest selection of investigators ever assembled to present 
the full range of neural network research and applications. 

The meeting will bring together over 2000 scientists, engineers, students,
government administrators, industrial commercializers, and financiers. It
is rapidly selling out. Reserve now to avoid disappointment.

Call J.R. Shuman Associates, (617) 237-7931, for information about registration.
For information about hotel reservations, call the Park Plaza Hotel at
(800) 225-2008 and reference "Neural Networks." If you are calling 
from Massachusetts, call (800) 462-2022.

There will be 600 scientific presentations, including tutorials, plenary
lectures, symposia, and contributed oral and poster presentations. Over 50
exhibits are already reserved for industrial firms, publishing houses, and
government agencies. 

The full day of tutorials presented on September 6 will be given by Gail 
Carpenter, John Daugman, Stephen Grossberg, Morris Hirsch, Teuvo Kohonen,
David Rumelhart, Demetri Psaltis, and Allen Selverston. The plenary lecturers
are Stephen Grossberg, Carver Mead, Terrence Sejnowski, Nobuo Suga, and Bernard 
Widrow. Approximately 30 symposium lectures will be given, 125 contributed oral
presentations, and 400 poster presentations. 

Fourteen professional societies are cooperating with the INNS meeting. They
are:

     American Association of Artificial Intelligence 
     American Mathematical Society 
     Association for Behavior Analysis 
     Cognitive Science Society 
     IEEE Boston Section 
     IEEE Computer Society 
     IEEE Control Systems Society 
     IEEE Engineering in Medicine and Biology Society 
     IEEE Systems, Man and Cybernetics Society 
     Optical Society of America 
     Society for Industrial and Applied Mathematics 
     Society for Mathematical Biology 
     Society of Photo-Optical Instrumentation Engineers 
     Society for the Experimental Analysis of Behavior 

DO NOT MISS THE FIRST BIRTHDAY CELEBRATION OF THIS IMPORTANT NEW
RESEARCH COALITION!



------------------------------

From: Phill Everson <everson%COMPSCI.BRISTOL.AC.UK@CUNYVM.CUNY.EDU>
Subject:    Public Domain Sun Image Processing Software
Date:       Tue, 19 Jul 88 13:26:39 +0100

    Version 1.1.1 of ALV Public Domain (see file "COPYRIGHT") Image
             Processing Toolkit for the Sun Workstation
                      released Sun 17 Jul, 1988

This is to introduce a family of image processing programs written by
Phill Everson <everson@uk.ac.bristol.cs> with help from Gareth Waddell
(notably for the dynamic array library) at Bristol University in the UK
for SUN workstations, both colour and black/white.  (The imed image
editor is largely based on sunpaint, written a year or two ago by Ian
Batten at Birmingham University. Thanks very much!) It includes tools
to display images, to convolve a filter over an image, to create a
histogram of the greylevels in an image, to perform histogram
equalisation of an image, to threshold an image, to convert an image to
Postscript and ...  (read the manual page alv(1) for other features).

ALV stands for Autonomous Land Vehicle, the research project that these
were originally developed for. The toolkit was written to fulfil a
need rather than to be especially pretty, so in places there are some
rough edges. Some of the tools have been used MUCH more than
others and so can be regarded as being pretty much correct (dsp,
convolve, pixval, imagelw, subarea, subsample, winwidth, hist &
invert).  If any of the others seem to be playing up it is possible
that there is a bug in there somewhere -- some tools were added at the
request of others who promised to test them and have never been heard
of since! Please send any bug reports (and fixes please :-) ) to me.
Note that imed does *not* work with colour workstations as yet!

*************************************************************************
To get this system up and on the road:

    1. Edit the Makefile, changing the directory paths for
       BINDIR, LIBDIR, INCDIR, MANDIR & FILDIR to suit your system.
    2. You might want to alter the command to send Postscript to
           your Laserprinter in imagelw.c - at present it is "lpr -Plw -v"
       on line 58.
    3. Type 'make' and everything will be compiled and installed. This
       takes about 15 minutes.
    4. Read the manual page alv(1). It can be formatted from this
       directory by typing 'nroff -man alv.1 | more'.
*************************************************************************

This family of programs has 3 manual pages; alv(1), alv(3) & alv(5).
alv(1) has a general description of each of the programs and what each
of them should do. alv(3) is a description of the library created and
alv(5) is a description of the file format used for an image.  (I've
also included the manual page dynamem(3) for a dynamic memory
allocation library which is used by the alv library and which someone
may find useful.)

The method that we have found works best is that everyone working
on vision programs uses the same file format (see alv(5)) and
most people will use the core tools to display images etc and the
library functions for their own programs.

These are and will be used a lot here, so if anybody adds or modifies
them, please send me details and I'll collect, collate and send updates
out on the mailing list. It is likely that new tools will be added here
also as I'm now sure to be here until at least 1990.

If you want to be put on a mailing list for additions and bugfix
reports, mail "alv-users-request@uk.ac.bristol.cs". The actual mailing
list can be accessed by mailing to "alv-users@uk.ac.bristol.cs".

I hope they're of some use.

Phill Everson

SNAIL:     Phill Everson, Comp. Sci. Dept., U of Bristol, Bristol, UK
JANET:     everson@uk.ac.bristol.cs   UUCP:     ...mcvax!ukc!csisles!everson
ARPANET:   everson%cs.bristol.ac.uk@nss.cs.ucl.ac.uk

------------------------------

Date: 28 Jul 88 12:41:17 EDT
From: Bernice Rogowitz <ROGOWTZ@ibm.com>
Subject: multidisciplinary conference on VISION, PROCESSING, and DISPLAY

**********  MEETING ANNOUNCEMENT AND CALL FOR PAPERS  ********

               a multidisciplinary conference on:

      HUMAN VISION, VISUAL PROCESSING, AND DIGITAL DISPLAY

                 Bernice E. Rogowitz, chairman
                      January 19-20, 1989

                This meeting is a subset of the
SPSE/SPIE Symposium on Electronic Imaging, January 15-20, 1989
       Los Angeles Hilton Hotel, Los Angeles, California


TOPICS:

  o Models for Human and Machine Vision

  o Color Vision and Color Coding

  o Digitization, Spatial Sampling, and Anti-Aliasing

  o Vision-Based Algorithms for Image Processing

  o Psychophysics of Image Quality

  o Spatial/Color/Temporal Interactions in Perception and Coding


CONFERENCE GOAL:

The goal of this two-day conference is to explore interactions between
human visual processing and the diverse technologies for displaying,
coding, processing, and interpreting visual information.


PARTICIPANTS:

Papers are solicited from scientists working in visual psychophysics,
computer vision, computer graphics, digital display, printing,
photography, image processing, visualization, medical imaging, etc.


IMPORTANT DATES:

150-word Abstracts Due:       August 31, 1988
Camera-ready Manuscript due:  December 19, 1988


FOR MORE INFORMATION, PLEASE CONTACT:

Dr. Bernice E. Rogowitz, chair              SPIE Technical Program
IBM T. J. Watson Research Center            1022 19th Street
Box 218  Yorktown Heights, NY  10598        Bellingham, WA  98225
(914) 945-1687  Net: ROGOWTZ@IBM.COM        (206) 676-3290



------------------------------

End of VISION-LIST
********************

Vision-List-Request@ADS.COM (Vision-List moderator Phil Kahn) (09/03/88)

Vision-List Digest	Fri Sep  2 15:33:46 PDT 1988

 - Send submissions to Vision-List@ADS.COM
 - Send requests for list membership to Vision-List-Request@ADS.COM

Today's Topics:

 IEEE CVPR 1989 Call for Papers

----------------------------------------------------------------------

Date: 31 Aug 88 23:02:18 GMT
From: wnm@uvacs.CS.VIRGINIA.EDU (Worthy N. Martin)
Subject: IEEE CVPR 1989 Call for Papers
Keywords: CVPR 1989
Organization: U.Va. CS Department, Charlottesville, VA

                      CALL FOR PAPERS

              IEEE Computer Society Conference
                            on
          COMPUTER VISION AND PATTERN RECOGNITION

                    Sheraton Grand Hotel
                   San Diego, California
                      June 4-8, 1989.



                       General Chair


               Professor Rama Chellappa
               Department of EE-Systems
               University of Southern California
               Los Angeles, California  90089-0272


                     Program Co-Chairs

Professor Worthy Martin          Professor John Kender
Dept. of Computer Science        Dept. of Computer Science
Thornton Hall                    Columbia University
University of Virginia           New York, New York  10027
Charlottesville, Virginia 22901


                     Program Committee

Charles Brown         John Jarvis            Gerard Medioni
Larry Davis           Avi Kak                Theo Pavlidis
Arthur Hansen         Rangaswamy Kashyap     Alex Pentland
Robert Haralick       Joseph Kearney         Roger Tsai
Ellen Hildreth        Daryl Lawton           John Tsotsos
Anil Jain             Martin Levine          John Webb
Ramesh Jain           David Lowe



                    Submission of Papers

Four copies of complete drafts, not exceeding 25 double-spaced typed
pages, should be sent to Worthy Martin at the address given above by
November 16, 1988 (THIS IS A HARD DEADLINE).  All reviewers and authors
will be anonymous for the review process, and the cover page will be
removed before review.  The cover page must contain the title, authors'
names, primary author's address and telephone number, and index terms
containing at least one of the topics below.  The second page of the
draft should contain the title and an abstract of about 250 words.
Authors will be notified of acceptance by February 1, 1989, and final
camera-ready papers, typed on special forms, will be required by
March 8, 1989.


                  Submission of Video Tapes

As a new feature, there will be one or two sessions where the authors
can present their work using video tapes only.  For information
regarding the submission of video tapes for review purposes, please
contact John Kender at the address above.



                 Conference Topics Include:

          -- Image Processing
          -- Pattern Recognition
          -- 3-D Representation and Recognition
          -- Motion
          -- Stereo
          -- Visual Navigation
          -- Shape from _____ (Shading, Contour, ...)
          -- Vision Systems and Architectures
          -- Applications of Computer Vision
          -- AI in Computer Vision
          -- Robust Statistical Methods in Computer Vision



                           Dates

      November 16, 1988 -- Papers submitted
      February 1, 1989  -- Authors informed
      March 8, 1989     -- Camera-ready manuscripts to IEEE
      June 4-8, 1989    -- Conference

------------------------------

End of VISION-LIST
********************

Vision-List-Request@ADS.COM (Vision-List moderator Phil Kahn) (09/13/88)

Vision-List Digest	Mon Sep 12 14:44:52 PDT 1988

 - Send submissions to Vision-List@ADS.COM
 - Send requests for list membership to Vision-List-Request@ADS.COM

Today's Topics:

 Postdoc. Fellowships at National Research Council, Canada
 Del-squared G
 problems with multiple mailings, etc.
 book on basics of optical scanning
 Congress on Cybernetics and Systems 
 Where to go for image processing software?

----------------------------------------------------------------------

Date: 7 Sep 88 15:50:53 GMT
From: Stephen MacKay <samackay@watcgl.waterloo.edu>
Subject: Postdoc. Fellowships at National Research Council, Canada
Keywords: postdoctorate fellowships, intelligent robotics
Organization: National Research Council, Canada


Postdoctorate Fellowships at NRC

Applications for postdoctorate fellowships are being accepted 
from citizens of U.S.A., U.K., Japan, W. Germany, France, and 
Italy, in the area of Intelligent robotics.  These are 1 yr., 
non-renewable fellowships valued at approximately $CAN 29K plus 
travel expenses.  Interested parties can contact: 

Nestor Burtnyk or Colin Archibald
Laboratory for Intelligent Systems
National Research Council of Canada
Ottawa, Ont. Canada K1A 0R8
(613) 993-6580

email: archibald@dee.nrc.ca


------------------------------

Date: 8 Sep 88 06:59:09 GMT
From: srini@mist.cs.orst.edu (Srinivas Raghvendra)
Subject: Del-squared G
Keywords: edge detectors, zero crossings, Gaussian filter, David Marr


	In the literature I have been looking up, there are frequent references
to the Del-squared G operator ( the Laplacian of the convolution of the image
with a Gaussian filter ). I would like to try and apply this operator to a
set of images that I have and study the results. I have the following questions
for you net folks:
	(1) Can you point me to an article/book that discusses the actual 
implementation of this operator?  I have read a couple of David Marr's articles
that tell me why the operator is good and stuff of the sort. I need to know
how I can actually implement such an operator in code ( as a program ).
	(2) Related to this is the issue of detecting zero-crossings in the 
output of the above operator. Can this step be combined in some way with the
convolution step?

	I apologise if this is not the appropriate newsgroup for this request.
I am unable to think of a better group for posting this request.

	I will be grateful to those of you who will take some time off to 
respond to my request.

	Thank you all.
	Srinivas Raghvendra ( srini@mist.cs.orst.edu )

[ (1) The convolution is usually implemented using a discrete mask.
      The formulas used to compute this mask are given in many books (e.g.,
      Computer Vision by Ballard & Brown, Digital Picture Processing by
      Rosenfeld & Kak, and other newer vision books).  The Gaussian is
      separable, and the Laplacian of Gaussian can be implemented as a
      few passes with one-dimensional masks (see PAMI articles in the
      past year or two).  Zero crossings can be detected by thresholding
      and detecting transition points.
  (2) I don't know of any.  
      			phil... 	]
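
To make (1) concrete, here is a minimal sketch in C, assuming an N x N
8-bit image already in memory: it fills a discrete Del-squared-G mask
from (r^2 - 2*sigma^2) * exp(-r^2 / (2*sigma^2)) (the normalizing
constant is dropped and the DC component removed so the mask sums to
zero), convolves it over the image, and marks pixels where the filtered
output changes sign with sufficient slope.  The image size, sigma, mask
radius, and threshold are illustrative choices only, not values from
Marr's papers.

#include <stdio.h>
#include <math.h>

#define N      256            /* image is N x N, 8 bits per pixel        */
#define SIGMA  1.4            /* Gaussian sigma, in pixels               */
#define HALF   6              /* mask radius; mask is (2*HALF+1) squared */
#define THRESH 4.0            /* minimum slope for a zero crossing       */

static unsigned char img[N][N];      /* input image                 */
static double        out[N][N];      /* Del-squared-G of the image  */
static unsigned char edge[N][N];     /* zero-crossing map           */
static double        mask[2*HALF+1][2*HALF+1];

/* Fill the Del-squared-G mask and force it to sum to zero. */
static void build_mask(void)
{
    int x, y, size = 2*HALF + 1;
    double s2 = SIGMA*SIGMA, sum = 0.0;

    for (y = -HALF; y <= HALF; y++)
        for (x = -HALF; x <= HALF; x++) {
            double r2 = (double)(x*x + y*y);
            mask[y+HALF][x+HALF] = (r2 - 2.0*s2) * exp(-r2 / (2.0*s2));
            sum += mask[y+HALF][x+HALF];
        }
    for (y = 0; y < size; y++)           /* remove DC so a flat image */
        for (x = 0; x < size; x++)       /* gives exactly zero output */
            mask[y][x] -= sum / (size*size);
}

/* Convolve the image with the mask (border pixels are left at zero). */
static void convolve(void)
{
    int x, y, i, j;

    for (y = HALF; y < N-HALF; y++)
        for (x = HALF; x < N-HALF; x++) {
            double acc = 0.0;
            for (j = -HALF; j <= HALF; j++)
                for (i = -HALF; i <= HALF; i++)
                    acc += mask[j+HALF][i+HALF] * img[y+j][x+i];
            out[y][x] = acc;
        }
}

/* Mark a pixel when a horizontal or vertical neighbour pair straddles
 * zero with enough slope (the thresholding step mentioned above). */
static void zero_cross(void)
{
    int x, y;

    for (y = 1; y < N-1; y++)
        for (x = 1; x < N-1; x++)
            edge[y][x] =
                ((out[y][x-1]*out[y][x+1] < 0.0 &&
                  fabs(out[y][x-1] - out[y][x+1]) > THRESH) ||
                 (out[y-1][x]*out[y+1][x] < 0.0 &&
                  fabs(out[y-1][x] - out[y+1][x]) > THRESH)) ? 255 : 0;
}

int main(void)
{
    int x, y, count = 0;

    for (y = 0; y < N; y++)              /* synthetic test image: a   */
        for (x = 0; x < N; x++)          /* bright square on a darker */
            img[y][x] = (x > 64 && x < 192 &&     /* background       */
                         y > 64 && y < 192) ? 200 : 50;

    build_mask();
    convolve();
    zero_cross();

    for (y = 0; y < N; y++)
        for (x = 0; x < N; x++)
            count += (edge[y][x] != 0);
    printf("%d zero-crossing pixels found\n", count);
    return 0;
}

For images of any serious size you would exploit the separability noted
above (and perhaps a difference-of-Gaussians approximation) rather than
the brute-force 2-D convolution shown here.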


------------------------------

Date: Fri, 9 Sep 88 20:57:07 PDT
From: Vision-List-Request <vision@deimos.ads.com>
Subject: problems with multiple mailings, etc.

As many of you might have noticed, there have been some rather
annoying multiple mailings of the List.  In part, this occurred because
our SENDMAIL program has a problem with large distribution lists
(there are 367 sites on my master list). In addition, there have
been some problems with multiple mailings from buggy UUCP node
programs.

The SENDMAIL problems are easy to fix (simply break the list down into
chunks), but I am attempting to streamline the list by having more
sites with multiple recipients set up local redistribution systems.
The List is also distributed as the USENET newsgroup comp.ai.vision.
If you can access the List through this facility and you currently
receive the List directly through your mail facility, please let me
know so that I can delete you from the master list.  Please let me
know of other mailer anomalies or problems you are having.

I find my time with the List dominated by fielding problems in the
mailing of the List itself.  I am cleaning this up now to free my time
for moderating the List, and perhaps for throwing in a bit of wood to
stimulate some heated discussions. 

phil...

------------------------------

Date: 10 Sep 88 14:31:23 GMT
From: dscatl!mgresham@gatech.edu (Mark Gresham)
Subject: book on basics of optical scanning
Organization: Digital Systems Co. , Atlanta

Could anyone out there recommend a book on the basics of how optical
character recognition is handled?  And I do mean the *basic*,
fundamental principles of how the decision-making is done by the
software.

Please e-mail response. Thanks!

--Mark Gresham

++++++++++++++++++++++++++++++++
INTERNET: mgresham@dscatl.UUCP
UUCP: ...!gatech!dscatl!mgresham
++++++++++++++++++++++++++++++++


------------------------------

Date: Mon, 12 Sep 88 14:07:58 PDT
From: Vision-List-Request <vision>
Subject: Congress on Cybernetics and Systems 

[Reposted from Neuron digest.	phil... 	]

	Date: 9 Sep 88 20:13:56 GMT
	From: SPNHC@CUNYVM.CUNY.EDU
	Subject: Congress on Cybernetics and Systems
	Organization: The City University of New York - New York, NY
	DISCLAIMER: Author bears full responsibility for contents 
		    of this article 
	
	
	             WORLD ORGANIZATION OF SYSTEMS AND CYBERNETICS
	
	         8 T H    I N T E R N A T I O N A L    C O N G R E S S
	
	         O F    C Y B E R N E T I C S    A N D   S Y S T E M S
	
	                            to be held
	                         June 11-15, 1990
	                                at
	                          Hunter College
	                    City University of New York
	                         New York, U.S.A.
	
	     This triennial conference is supported by many international
	groups  concerned with  management, the  sciences, computers, and
	technology systems.
	
	      The 1990  Congress  is the eighth in a series, previous events
	having been held in  London (1969),  Oxford (1972), Bucharest (1975),
	Amsterdam (1978), Mexico City (1981), Paris (1984) and London (1987).
	
	      The  Congress  will  provide  a forum  for the  presentation
	and discussion  of current research. Several specialized  sections
	will focus on computer science, artificial intelligence, cognitive
	science, psychocybernetics  and sociocybernetics.  Suggestions for
	other relevant topics are welcome.
	
	      Participants who wish to organize a symposium or a section,
	are requested  to submit a proposal ( sponsor, subject, potential
	participants, very short abstracts ) as soon as possible, but not
	later  than  September 1989.  All submissions  and correspondence
	regarding this conference should be addressed to:
	
	                    Prof. Constantin V. Negoita
	                         Congress Chairman
	                   Department of Computer Science
	                           Hunter College
	                    City University of New York
	             695 Park Avenue, New York, N.Y. 10021 U.S.A.
	

------------------------------

Date: Mon, 12 Sep 88 14:28:55 PDT
From: gungner@CS.UCLA.EDU (David Gungner)
Subject: Where to go for image processing software?

I'd like to know how to order GYPSY, VICAR, and XVISION image
processing software.  

Thanks ...  David Gungner,  UCLA Machine Perception Lab


------------------------------

End of VISION-LIST
********************

Vision-List-Request@ADS.COM (Vision-List moderator Phil Kahn) (09/27/88)

Vision-List Digest	Mon Sep 26 09:55:18 PDT 1988

 - Send submissions to Vision-List@ADS.COM
 - Send requests for list membership to Vision-List-Request@ADS.COM

Today's Topics:

 Motion, correspondence, and aperture problem
 Re: temporal domain in vision
 Re: temporal domain in vision
 Image Processing User's Group - Twin Cities
 Re: How to connect SUN 3-160M and Imaging technology's series 151
 Re: Real-time data acquisition
 Info. request on computerised colorimetry/microscopy.
 Faculty position at Stanford
 Information needed on Intl. Wkshp. on Dynamic Image Analysis...
 Workshop on VLSI implementation of Neural Nets
 The 6th Scandinavian Conference on Image Analysis
 Stanford Robotics Seminars

----------------------------------------------------------------------

Date: Wed, 21 Sep 88 16:13:49 PDT
From: stiber@CS.UCLA.EDU (Michael D Stiber)
Subject: Motion, correspondence, and aperture problem

I am currently working on a thesis involving shape from motion, and
would appreciate pointers to the literature that specifically address
my topic, as detailed in the "abstract" below.  If the following rings
any bells, please send me the results of the bell-ringing.  I
appreciate any efforts that you make with regards to this, even if it
is just to send me flames.

SHAPE FROM MOTION: ELIMINATING THE CORRESPONDENCE AND APERTURE PROBLEMS

The traditional approach to the task of shape from motion has been to
first apply spatial processing techniques to individual images in a
sequence (to localize features of interest), and then to apply other
algorithms to the images to determine how features moved.  The
oft-mentioned "correspondence" and "aperture" problems arise here,
since one cannot be sure, from the information in the processed
frames, which features in one frame match which features in the
following frames.  The methods designed to process visual motion
are actually confounded by that motion.

Alternative approaches to shape from motion perform temporal (rather
than spatial) processing first.  These include work on optic flow.
When the causes of temporal variation are considered in this manner,
it becomes clear that different classes of variation are caused by
quite different types of changes in the "real world", and are best
accounted for at different levels of processing within a visual
system.  Thus, overall brightness changes (such as when a cloud moves
in front of the sun) are "eliminated from the motion equation" at the
lowest levels, with lightness constancy processing.  Changes due to
eye or camera motion, self motion, and object motion are likewise
identified at the appropriate stage of visual processing.  This
strategy for processing motion uses temporal changes as a source of
much additional information, in contrast to the "spatial first"
approaches, which throw away that information.  The "correspondence"
and "aperture" problems are eliminated by using this additional
information.

This thesis details an algorithm and connectionist architecture for
the processing of visual motion due to camera motion.  It performs
this by converting images from a sensor-based coordinate system (in
which camera motion causes temporal variation of images) to a
body-centered coordinate system (in which the percept remains
constant, independent of camera movement).  The "correspondence" and
"aperture" problems (as a result of camera motion) are eliminated by
this approach.
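
Purely as an illustration of the sensor-to-body conversion referred to
above (this is NOT the algorithm of the thesis), the sketch below maps a
pixel's viewing ray from camera coordinates into a body-centered frame,
assuming a camera that only pans and tilts relative to the body and
whose angles are known.  The focal length, pixel offsets, and angles are
made-up numbers.

#include <stdio.h>
#include <math.h>

#define FOCAL 500.0            /* focal length in pixels (illustrative) */

/* Rotate v about the x axis by angle a (radians), in place. */
static void rot_x(double v[3], double a)
{
    double y = v[1]*cos(a) - v[2]*sin(a);
    double z = v[1]*sin(a) + v[2]*cos(a);
    v[1] = y;  v[2] = z;
}

/* Rotate v about the y axis by angle a (radians), in place. */
static void rot_y(double v[3], double a)
{
    double x =  v[0]*cos(a) + v[2]*sin(a);
    double z = -v[0]*sin(a) + v[2]*cos(a);
    v[0] = x;  v[2] = z;
}

/* Turn a pixel offset from the image center into a unit body-frame ray,
 * given the camera's tilt (about x) and pan (about y) w.r.t. the body. */
static void pixel_to_body_ray(double px, double py,
                              double tilt, double pan, double ray[3])
{
    double n;

    ray[0] = px;  ray[1] = py;  ray[2] = FOCAL;     /* camera-frame ray */
    n = sqrt(ray[0]*ray[0] + ray[1]*ray[1] + ray[2]*ray[2]);
    ray[0] /= n;  ray[1] /= n;  ray[2] /= n;
    rot_x(ray, tilt);                               /* undo camera tilt */
    rot_y(ray, pan);                                /* undo camera pan  */
}

int main(void)
{
    double ray[3];

    /* One pixel offset, for a camera tilted 0.05 rad and panned 0.10 rad. */
    pixel_to_body_ray(20.0, -10.0, 0.05, 0.10, ray);
    printf("body-frame ray: %.3f %.3f %.3f\n", ray[0], ray[1], ray[2]);
    return 0;
}

Once rays are expressed in the body frame, image changes caused purely
by a known camera rotation cancel out, which is the sense in which the
percept above "remains constant, independent of camera movement."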


------------------------------

Date: 21 Sep 88 20:50:53 GMT
From: lag@cseg.uucp (L. Adrian Griffis)
Subject: Re: temporal domain in vision
Keywords: multiplex filter model
Organization: College of Engineering, University of Arkansas, Fayetteville


In article <233@uceng.UC.EDU>, dmocsny@uceng.UC.EDU (daniel mocsny) writes:
> In Science News, vol. 134, July 23, 1988, C. Vaughan reports on the
> work of B. Richmond of NIMH and L. Optican of the National Eye
> Institute on their multiplex filter model for encoding data on
> neural spike trains. The article implies that real neurons multiplex
> lots of data onto their spike trains, much more than the simple
> analog voltage in most neurocomputer models. I have not seen
> Richmond and Optican's papers and the Science News article was
> sufficiently watered down to be somewhat baffling. Has anyone
> seen the details of this work, and might it lead to a method to
> significantly increase the processing power of an artificial neural
> network?

My understanding is that neurons in the eye depart from a number of
general rules that neurons seem to follow elsewhere in the nervous system.
One such departure is that sections of a neuron can fire independently
of other sections.  This allows the eye to behave as though it has a great
many logical neurons without having to use the space that the same number
of discrete cellular metabolic systems would require.  I'm not an expert
in this field, but this suggests to me that many of the special tricks
that neurons of the eye employ may be attempts to overcome space limitations
rather than to make other processing schemes possible.  Whether or not
this affects the applicability of such tricks to artificial neural networks
is another matter.  After all, artificial neural networks have space 
limitations of their own.

  UseNet:  lag@cseg                      L. Adrian Griffis
  BITNET:  AG27107@UAFSYSB

------------------------------

Date: 22 Sep 88 01:42:30 GMT
From: jwl@ernie.Berkeley.EDU (James Wilbur Lewis)
Subject: Re: temporal domain in vision
Keywords: multiplex filter model
Organization: University of California, Berkeley

In article <724@cseg.uucp> lag@cseg.uucp (L. Adrian Griffis) writes:
>In article <233@uceng.UC.EDU>, dmocsny@uceng.UC.EDU (daniel mocsny) writes:
>> In Science News, vol. 134, July 23, 1988, C. Vaughan reports on the
>> work of B. Richmond of NIMH and L. Optican of the National Eye
>> Institute on their multiplex filter model for encoding data on
>> neural spike trains. The article implies that real neurons multiplex
>> lots of data onto their spike trains, much more than the simple
>> analog voltage in most neurocomputer models.
>
>My understanding is that neurons in the eye depart from a number of
>general rules that neurons seem to follow elsewhere in the nervous system.

I think Richmond and Optican were studying cortical neurons.  Retinal
neurons encode information mainly by graded potentials, not spike
trains....another significant difference between retinal architecture
and most of the rest of the CNS.

I was somewhat baffled by the Science News article, too.  For example,
it was noted that the information in the spike trains might be a
result of the cable properties of the axons involved, not necessarily
encoding any "real" information, but this possibility was dismissed with a 
few handwaves.

Another disturbing loose end was the lack of discussion about how this
information might be propagated across synapses.  Considering that
it generally takes input from several other neurons to trigger a 
neural firing, and that the integration time necessary would tend
to smear out any such fine-tuned temporal information,  I don't
see how it could be.

It's an interesting result, but I think they may have jumped the
gun with the conclusion they drew from it.

-- Jim Lewis
   U.C. Berkeley

------------------------------

Date: 22 Sep 88 04:51:39 GMT
From: manning@mmm.serc.3m.com (Arthur T. Manning)
Subject: Image Processing User's Group - Twin Cities
Summary: Announcement of Group's Formation
Keywords: vision, image, datacube
Organization: 3M Company - Software and Electronics Resource Center (SERC); St. Paul, MN

Datacube Inc. (a high-speed image processing hardware manufacturer) is
initiating an image processing user's group in the Twin Cities through
their local sales rep:

  Barb Baker
  Micro Resources Corporation           phone: (612) 830-1454
  4640 W. 77th St, Suite 109            ITT Telex 499-6349
  Edina, Minnesota  55435               FAX (612) 830-1380


Hopefully this group will be the basis of valuable cooperation between
various commercial vision groups (as well as others) in the Twin Cities.

The first meeting to formulate the purpose, governing body, etc of this
image processing user's group was held September 22, 1988.

It looks like this newsgroup would be the best place to post further
developments.


Arthur T. Manning                                Phone: 612-733-4401
3M Center  518-1                                 FAX:   612-736-3122
St. Paul  MN  55144-1000   U.S.A.                Email: manning@mmm.uucp

------------------------------

Date: Thu, 22 Sep 88 00:01:33 CDT
From: schultz@mmm.3m.com (John C Schultz)
Subject: Re: How to connect SUN 3-160M and Imaging technology's series 151


While we do not have the exact same hardware, we are using Bit 3
VME-VME bus repeaters (not extenders) with our SUN 3/160 and
Datacube hardware.  We had to add a small patch to the Datacube
driver to get it to work with the Bit 3 because of the strange Bit 3
handling of interrupt vectors, but now it is fine.

The Bit 3 might work for your application, as opposed to a bus
"extender", because this repeater keeps the VME backplanes logically
and electrically separate although memory mapping is possible.

Hope this helps.  We don't have IT boards to try it out with.  

------------------------------

Date: Thu, 22 Sep 88 00:10:49 CDT
From: schultz@mmm.3m.com (John C Schultz)
Subject: Re: Real-time data acquisition


>I was wondering if anybody out there knows of a system that will record
>rgb digital data in real-time plus some ancillary position information.  An
>inexpensive medium is of course preferred since it would be useful to
>be able to share this data with other folks.

Two ways to go (as the moderator mentioned) are low quality VHS style
machines which only record maybe 200 lines/image or broadcast quality
3/4 inch tape machines which cost big bucks (and still will probably
lose some data from a 512 x 485 image).

Digital storage is limited to maybe 30 seconds.  In addition to Gould
I know of one vendor (Datacube) who supplies 1.2 GB of mag disk for
real-time image recording ($25-40K depending on size I think).  If the
speed requirements are bursty, these real-time disk systems could be
backed up to an Exabyte-style cartridge holding a couple GB - or to
write-once optical media for faster recall, I suppose.
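
For rough sizing (my arithmetic, assuming standard 30 frame/s video): a
512 x 485 x 8-bit frame is about 240 KB, so full-rate digitized video
runs at roughly 7 MB per second, and a 1.2 GB real-time disk therefore
holds on the order of two and a half minutes before it has to spill to
tape or optical media.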

As to location info, how about encoding during either vertical
blanking or on the audio channel if you use video tape.

------------------------------

Date:     23-SEP-1988 17:02:02 GMT
From: ALLEN%BIOMED.ABDN.AC.UK@CUNYVM.CUNY.EDU
Subject:  Info. request on computerised colorimetry/microscopy.

I wonder if anyone can help me with some background information.

We have a potential application which involves the measurement of the
colour of a surface in an industrial inspection task.  It is possible that
this will be computerised (eg. camera + framestore).  I am trying to find
background information on the use of computers in colorimetry.  Any books,
review articles, etc. which you can recommend, or experience with actual
systems, etc. - any info. would be gratefully received.

The second part of the project involves the measurement of the thickness
of a film (< 20/1000 inch), possibly by optical inspection.  We don't
want to reinvent the wheel: anyone been here before?

Alastair Allen
Dept of Physics
University of Aberdeen
UK.
ALLEN@UK.AC.ABDN


------------------------------

Date: 23 Sep 88 19:58:04 GMT
From: rit@coyote.stanford.edu (Jean-Francois Rit)
Subject: Faculty position at Stanford
Organization: Stanford University


STANFORD UNIVERSITY

Department of Computer Science

Faculty Position in Robotics


Qualified people are invited to submit applications for a tenure-track
faculty position in Robotics.  The appointment may be made at either
the junior or senior level depending on the qualifications of the
applicants.

Applicants for a tenured position must have strong records of
achievements both in research and in teaching and have demonstrated
potential for research leadership and future accomplishments.

Applicants for a junior, tenure-track position must have a PhD in
Computer Science and have demonstrated competence in one or several
areas of Robotics research and must have demonstrated potential for
excellent teaching.

Outstanding candidates in all areas of Robotics will be considered,
with preference to those in Advanced Control or Computer Vision.

Depending on specific background and interests, there is a strong
possibility of joint appointments with the Mechanical Engineering or
Electrical Engineering Departments.

Please send applications with curriculum vitae and names of at
least four references to: Professor Jean-Claude Latombe,
Chairman of Robotics Search Committee, Computer Science Department,
Stanford University, Stanford, CA 94305.

Stanford University is an Equal Opportunity/Affirmative Action
employer and actively solicits applications from qualified women and
targeted minorities.

Jean-Francois Rit               Tel: (415) 723 3796
CS Dept Robotics Laboratory     e-mail: rit@coyote.stanford.edu      
Cedar Hall B7                  
Stanford, CA 94305-4110

------------------------------

Date: Sat, 24 Sep 88 14:52 EDT
From: From the Land of the Himalayas <SAWHNEY@cs.umass.EDU>
Subject: Information needed on Intl. Wkshp. on Dynamic Image Analysis...

Hullo Folks,

Does anyone out there have information about THE 3RD INTERNATIONAL
WORKSHOP on TIME-VARYING ANALYSIS and MOVING OBJECT RECOGNITION to 
be held in Florence, Italy, in May 1989?

I need information on deadlines, the program committee and the rest.

Harpreet

Univ. of Mass. at Amherst, COINS Dept.
sawhney@umass

------------------------------

Date: 22 Sep 88 00:43:58 GMT
From: munnari!extro.ucc.su.oz.au!marwan@uunet.UU.NET
From: marwan@extro.ucc.su.oz (Marwan Jabri)
Subject: Workshop on VLSI implementation of Neural Nets
Organization: University of Sydney Computing Service, Australia


                               Neural Networks -
                         Their Implementation in VLSI


                            A Two Day Workshop

                             5-6 November 1988

                         Sydney University Electrical
                                 Engineering


                                 Sponsored by

                      Electrical Engineering Foundation,
                            University of Sydney

Introduction
-------------

Research in artificial neural systems or, more commonly, artificial
neural networks (NNs) has regained momentum after a decline in the
late 1960s.  The renewed interest stems from problem areas in which
conventional digital computers, with processing elements switching in
nanoseconds, do not perform as well as ``biological'' neural systems
whose electrochemical devices respond in milliseconds.

These problem areas share an important attribute: the processing must
often be carried out on noisy and distorted data.  Vision, speech
recognition and combinatorial optimisation are examples of such
problems.

VLSI implementations of NN systems have begun to appear as a natural
solution to building large and fast computational systems.  AT&T Bell
Labs and California Institute of Technology (Caltech) are two of the
leading research institutions where VLSI NN systems have been
developed recently.  Successful development of VLSI NNs requires a
robust design methodology.

Objectives of the Workshop
--------------------------
The workshop is organised by the Systems Engineering and Design
Automation Laboratory (SEDAL), Sydney University Electrical
Engineering (SUEE) and is sponsored by the Electrical Engineering
Foundation.  The workshop will present to academics, researchers and
engineers state-of-the-art methodologies for the implementation of
VLSI NN systems.  It will also feature two invited international
speakers, who will cover 6 important lectures of the program:

Dr. Larry Jackel
----------------
Head, Device Structures Research Department, AT&T Bell Labs.

Dr. Larry Jackel is a world expert on the VLSI implementation of
artificial NNs. He leads a group working on the implementation of
VLSI chips with several hundred neurons for image classification,
pattern recognition and associative memories. Dr. Jackel has over 80
technical publications in professional journals and seven US patents.
He is the recipient of the 1985 IEEE Electron Device Society Paul
Rappaport Award for best paper. Dr. Jackel is the author or co-author
of several invited papers on NN design, most recently in the special
issue of IEEE Computer magazine on NNs (March 1988).

Ms. Mary Ann Maher
------------------
Member of the technical staff, Computer Science Department,
Caltech.

Ms. Mary Ann Maher is a member of the research group, headed by
Professor Carver Mead, working on the simulation of VLSI
implementations of NNs. She has been an invited speaker at several
conferences and workshops on the VLSI implementation of NNs, including
the 1988 IEEE International Symposium on Circuits and Systems in
Helsinki.

Invited Speakers
----------------
The seminar will also feature speakers from several Australian
research institutions with diverse backgrounds, who will give the
participants a broad overview of the subject.

Prof. Max Benne Wales
Prof. Rigby will present an introduction to important
MOS building blocks used in  the VLSI implementation of NNs.

Other lectures and tutorials will be presented by the following 
speakers from Sydney University Electrical Engineering:

Peter Henderson, SEDAL

Marwan Jabri, SEDAL

Dr. Peter Nickolls, Laboratory for Imaging Science and Engineering

Clive Summerfield, Speech Technology Research


Venue
-----
The course will be held in Lecture Theatre 450,
Sydney University Electrical Engineering on November 5 and 6, 1988.

Registration
------------
The workshop registration cost is $400 for a private institution.

For more information please contact:


Marwan Jabri,
SEDAL, Sydney University Electrical Engineering,
NSW 2006

or by phone, fax, or e-mail:

Tel: 02 692 2240 Fax: 02 692 2012
ACSnet marwan@extro.ucc.su.oz


------------------------------

Date: Thu, 22 Sep 88 13:43:59 +0300
From: scia@stek5.oulu.fi (SCIA confrence in OULU)


      The 6th Scandinavian Conference on Image Analysis
      =================================================

      June 19 - 22, 1989
      Oulu, Finland

      Second Call for Papers



      INVITATION TO 6TH SCIA

      The 6th Scandinavian Conference on Image  Analysis   (6SCIA)
      will  be arranged by the Pattern Recognition Society of Fin-
      land from June 19 to June 22, 1989. The conference is  spon-
      sored  by the International Association for Pattern Recogni-
      tion. The conference will be held at the University of Oulu.
      Oulu is the major industrial city in North Finland, situated
      not far from the Arctic Circle. The conference  site  is  at
      the Linnanmaa campus of the University, near downtown Oulu.

      CONFERENCE COMMITTEE

      Erkki Oja, Conference Chairman
      Matti Pietikäinen, Program Chairman
      Juha Röning, Local Organization Chairman
      Hannu Hakalahti, Exhibition Chairman

      Jan-Olof Eklundh, Sweden
      Stein Grinaker, Norway
      Teuvo Kohonen, Finland
      L. F. Pau, Denmark

      SCIENTIFIC PROGRAM

      The program will  consist  of  contributed  papers,  invited
      talks  and special panels.  The contributed papers will cov-
      er:

              * computer vision
              * image processing
              * pattern recognition
              * perception
              * parallel algorithms and architectures

      as well as application areas including

              * industry
              * medicine and biology
              * office automation
              * remote sensing

      There will be invited speakers on the following topics:

      Industrial Machine Vision
      (Dr. J. Sanz, IBM Almaden Research Center)

      Vision and Robotics
      (Prof. Y. Shirai, Osaka University)

      Knowledge-Based Vision
      (Prof. L. Davis, University of Maryland)

      Parallel Architectures
      (Prof. P. E. Danielsson, Linköping University)

      Neural Networks in Vision
      (to be announced)

      Image Processing for HDTV
      (Dr. G. Tonge, Independent Broadcasting Authority).

      Panels will be organized on the following topics:

      Visual Inspection in the  Electronics  Industry  (moderator:
      prof. L. F. Pau);
      Medical Imaging (moderator: prof. N. Saranummi);
      Neural Networks and Conventional  Architectures  (moderator:
      prof. E. Oja);
      Image Processing Workstations (moderator: Dr.  A.  Kortekan-
      gas).

      SUBMISSION OF PAPERS

      Authors are invited to submit four  copies  of  an  extended
      summary of at least 1000 words of each of their papers to:

              Professor Matti Pietikäinen
              6SCIA Program Chairman
              Dept. of Electrical Engineering
              University of Oulu
              SF-90570 OULU, Finland

              tel +358-81-352765
              fax +358-81-561278
              telex 32 375 oylin sf
              net scia@steks.oulu.fi

      The summary should contain sufficient  detail,  including  a
      clear description of the salient concepts and novel features
      of the work.  The deadline for submission  of  summaries  is
      December  1, 1988. Authors will be notified of acceptance by
      January 31st, 1989 and final camera-ready papers will be re-
      quired by March 31st, 1989.

      The length of the final paper must not exceed 8  pages.  In-
      structions  for  writing the final paper will be sent to the
      authors.

      EXHIBITION

      An exhibition is planned.  Companies  and  institutions  in-
      volved  in  image analysis and related fields are invited to
      exhibit their products at demonstration stands,  on  posters
      or video. Please indicate your interest to take part by con-
      tacting the Exhibition Committee:

              Matti Oikarinen
              P.O. Box 181
              SF-90101 OULU
              Finland

              tel. +358-81-346488
              telex 32354 vttou sf
              fax. +358-81-346211

      SOCIAL PROGRAM

      A social program will be arranged,  including  possibilities
      to  enjoy  the  location  of the conference, the sea and the
      midnight sun. There  are  excellent  possibilities  to  make
      post-conference  tours  e.g.  to Lapland or to the lake dis-
      trict of Finland.

      The social program will consist of a get-together  party  on
      Monday June 19th, a city reception on Tuesday June 20th, and
      the conference Banquet on Wednesday June 21st. These are all
      included  in the registration fee. There is an extra fee for
      accompanying persons.

      REGISTRATION INFORMATION

      The registration fee will be 1300  FIM  before  April  15th,
      1989  and 1500 FIM afterwards. The fee for participants cov-
      ers:  entrance  to  all  sessions,  panels  and  exhibition;
      proceedings; get-together party, city reception, banquet and
      coffee breaks.

      The fee is payable by
              - check made out to 6th SCIA and mailed to the
                Conference Secretariat; or by
              - bank transfer draft account or
              - all major credit cards

      Registration forms, hotel information and  practical  travel
      information  are  available from the Conference Secretariat.
      An information package will be sent to authors  of  accepted
      papers by January 31st, 1989.

      Secretariat:
              Congress Team
              P.O. Box 227
              SF-00131 HELSINKI
              Finland
              tel. +358-0-176866
              telex 122783 arcon sf
              fax +358-0-1855245

      There will be hotel rooms available for  participants,  with
      prices  ranging  from  135 FIM (90 FIM) to 430 FIM (270 FIM)
      per night for a single room (double room/person).

------------------------------

From: binford@Boa-Constrictor.Stanford.EDU.stanford.edu (Tom Binford)
Subject: ROBOTICS SEMINARS
		
	Monday, Sept. 26, 4:15 
	
	Self-Calibrated Collinearity Detector.
	Yael Neumann
	Weizmann Institute, Israel.
		
	Abstract:
		
The visual system can make highly precise spatial judgments. It has
been suggested that this high accuracy is maintained by an error
correction mechanism. According to this view, adaptation and
aftereffect phenomena can be explained as a by-product of an error
correction mechanism. This work describes an adaptive system for
collinearity and straightness detection. The system incorporates an
error correction mechanism and is, therefore, highly accurate.  The
error correction is performed by a simple self-calibration process
named proportional multi-gain adjustment. The calibration process
adjusts the gain values of the system's input units.  It compensates
for errors due to noise in the input units' receptive field locations
and response functions by ensuring that the average collinearity
offset (or curvature) detected by the system is zero (straight).
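
The abstract does not spell out the algorithm, but the core idea -- nudge the
input-unit gains until the average offset reported for presumed-straight
calibration stimuli goes to zero -- can be illustrated with a toy sketch.
Everything below (the offset model, the learning rate, the names) is a
hypothetical illustration, not the speaker's actual method.

    import random

    n_units = 8
    gains   = [1.0] * n_units
    # per-unit positional error standing in for receptive-field noise (assumed)
    bias    = [random.uniform(-0.1, 0.1) for _ in range(n_units)]

    def detected_offset(responses):
        # toy detector: gain-weighted average of the unit responses
        return sum(g * r for g, r in zip(gains, responses)) / len(responses)

    rate = 0.05
    for _ in range(2000):
        # a straight (zero-offset) stimulus seen through noisy units
        responses = [b + random.gauss(0.0, 0.02) for b in bias]
        err = detected_offset(responses)           # should be 0 for a straight line
        for i in range(n_units):
            gains[i] -= rate * err * responses[i]  # LMS-style proportional gain adjustment

    # After enough calibration trials the mean offset reported for straight
    # stimuli is driven toward zero.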
		

		
	 Wednesday, September 28, 1988

	 Greg Hager
	 Computer and Information Science
	 University of Pennsylvania
		
	 ACTIVE REDUCTION OF UNCERTAINTY IN MULTI-SENSOR SYSTEMS
		 4:15 p.m.
		
		

	Oct 3, 1988
		
	Dr. Doug Smith
	Kestrel Institute
		
	KIDS - A Knowledge-Based Software Development System
		 
	Abstract:

KIDS (Kestrel Interactive Development System) is an experimental
knowledge-based software development system that integrates a number
of sources of programming knowledge.  It is used to interactively
develop formal specifications into correct and efficient programs.
Tools for performing algorithm design, a generalized form of deductive
inference, program simplification, finite differencing optimizations,
and partial evaluation are available to the program developer.  We
describe these tools and discuss the derivation of several programs
drawn from scheduling and pattern-recognition applications.  All of
the KIDS tools are automatic except the algorithm design tactics which
require some interaction at present.  Dozens of derivations have been
performed using the KIDS environment and we believe that it is close
to the point where it can be used for some routine programming.
		
		
------------------------------

End of VISION-LIST
********************

Vision-List-Request@ADS.COM (Vision-List moderator Phil Kahn) (10/04/88)

Vision-List Digest	Mon Oct  3 14:01:56 PDT 1988

 - Send submissions to Vision-List@ADS.COM
 - Send requests for list membership to Vision-List-Request@ADS.COM

Today's Topics:

  Circle Detection Literature
  RE: TEMPORAL DOMAIN IN VISION


----------------------------------------  

Date:       Fri, 30 Sep 88 15:31:06-0000
From: "x.cao" <eecao%PYR.SWAN.AC.UK@cunyvm.cuny.edu>
Subject: Circle Detection Literature

I am looking for information on image processing  algorithms
and architectures especially suited to detection of circles
in 2D digital images. In particular I am interested in parallel
systems and real-time operation.

I would be most grateful if you could send me details of any references
you have found useful in this area.


                                 - - - - - - - - - - - - - - - - - - - -
Cao Xing,                        | UUCP  : ...!ukc!pyr.swan.ac.uk!eecao |
Image Processing Laboratory,     | JANET : eecao@uk.ac.swan.pyr         |
Electrical Engineering Dept.,    | voice : +44 792 205678 Ext. 4698     |
University of Wales,             | Fax   : +44 792 295532               |
Swansea, SA2 8PP, U.K.           | Telex : 48358                        |
                                 - - - - - - - - - - - - - - - - - - - -

[ Related to this is the detection and grouping of general
  curves in imagery. Please post references directly to this List. 
			phil...				]


----------------------------------------  

Date:     Fri, 30 Sep 88 10:22 EDT
From: Richard A. Young (YOUNG@GMR.COM)
Subject:  RE: TEMPORAL DOMAIN IN VISION
Re:   temporal domain in vision

I take issue with two replies recently made to dmocsny@uceng.UC.EDU (daniel 
mocsny) regarding the Science News article on the work of B. Richmond of NIMH 
and L. Optican of the National Eye Institute on their multiplex filter model 
for encoding data on neural spike trains: 

L. Adrian Griffis (lag@cseg.uucp):
> I'm not an expert in this field, but this suggests to me that many of the 
> special tricks that neurons of the eye employ may be attempts to overcome 
> space limitations rather than to make other processing schemes possible.  

James Wilbur Lewis ( jwl@ernie.Berkeley.EDU):
> Another disturbing loose end was the lack of discussion about how this
> information might be propagated across synapses...It's an interesting result, 
> but I think they may have jumped the gun with the conclusion they drew.

Instead, I have a more positive view of Richmond and Optican's work after 
reviewing their publications (see references at end), and talking with them 
at the recent Neural Net meetings in Boston. I am impressed with their 
approach and research. I think that the issue of temporal coding deserves 
much more careful attention by vision and neural net researchers than it has 
received over the years.  Richmond and Optican have produced the first hard 
published evidence I am aware of in the primate visual system that temporal 
codes can carry meaningful information about visual form.

Their first set of papers dealt with the inferotemporal cortex, a high level 
vision area (Richmond et al., 1987; Richmond and Optican, 1987; Optican and 
Richmond, 1987). They developed a new technique using principal component 
analysis of the neural spike density waveform that allowed them to analyze 
the information content in the temporal patterns in a quantifiable manner. 
Each waveform is expressed in terms of a few coefficients -- the weights on
the principal components. By looking at these weights or "scores", it is 
much easier to see what aspects of the stimulus might be associated with 
the temporal properties of the waveform than has been previously possible.  
They used a set of 64 orthogonal stimulus patterns (Walsh functions), that were 
each presented in a 400 msec flash to awake fixating monkeys. Each stimulus was
shown in two contrast-reversed forms, for a total of 128 stimuli. They devised 
an information theoretic measure which showed that "the amount of information 
transmitted in the temporal modulation of the response was at least twice that 
transmitted by the spike count" alone, which they say is a conservative 
estimate.  In other words, they could predict which stimuli were present to 
a much better extent when the full temporal form of the response was 
considered rather than just the total spike count recorded during a trial. 
Their laboratory has since extended these experiments to the visual cortex 
(Gawne et al., 1987) and the lateral geniculate nucleus (McClurkin et al.) 
and found similar evidence for temporal coding of form.
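
For readers unfamiliar with the technique, the scoring step itself is small:
collect the spike-density waveforms (one row per trial), find the principal
components, and project each waveform onto the first few components to get
its "scores".  The sketch below is a generic principal-component computation
in numpy, not Richmond and Optican's code; the array shapes and the fake data
are assumptions for illustration.

    import numpy as np

    def pc_scores(waveforms, n_components=3):
        # waveforms: (n_trials, n_timebins) spike-density functions
        mean = waveforms.mean(axis=0)
        centered = waveforms - mean
        # principal components = right singular vectors of the centered data
        _, _, vt = np.linalg.svd(centered, full_matrices=False)
        components = vt[:n_components]            # (n_components, n_timebins)
        scores = centered @ components.T          # weight of each component per trial
        return scores, components, mean

    # Fake data: 1280 trials (e.g. 128 stimuli x 10 repeats, made-up numbers),
    # 400 one-millisecond bins per trial.
    rng = np.random.default_rng(0)
    waveforms = rng.poisson(5.0, size=(1280, 400)).astype(float)
    scores, components, mean = pc_scores(waveforms)
    # Each waveform is now summarized by a few numbers (its scores), which can
    # then be related to the stimulus, e.g. by an information measure.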
 
The concept of temporal coding in vision has been around a long time (Troland, 
1921), but primarily in the area of color vision. Unfortunately the prevailing 
bias in biological vision has been against temporal coding in general for 
many years.  It has been difficult to obtain funding or even get articles 
published on the subject.  Richmond and Optican deserve much credit for 
pursuing their research and publishing their data in the face of such strong 
bias (as does Science News for reporting it). The conventional view is that 
all neural information is spatially coded. Such models are variants of the 
"doctrine of specific nerve energies" first postulated by Mueller in the 
nineteenth century. This "labelled-line" hypothesis assumes that the 
particular line activated carries the information. From an engineering 
viewpoint, temporal coding allows for more information to be carried along any 
single line. Such coding allows more efficient use of the limited space 
available in the brain for axons compared to cell bodies (most of the brain 
volume is white matter, not grey!). In terms of biological plausibility, it 
seems to me that the burden of proof should be on those who maintain that such 
codes would NOT be used by the brain.

Anyone who has recorded post-stimulus time histograms from neurons observes
the large variations in the temporal pattern of the responses that occur 
with different stimuli. The "accepted view" is that such variations do 
not encode stimulus characteristics but represent epi-phenomena or noise. Hence
such patterns are typically ignored by researchers. Perhaps one difficulty 
has been that there has not been a good technique to quantify the many 
waveshape patterns that have been observed. It is indeed horrendously difficult
to try to sort the patterns out by eye -- particularly without knowing what the 
significant features might be, if any. With the application of the principal 
component technique to the pattern analysis question, Richmond and Optican 
have made a significant advance, I believe -- it is now possible to quantify
such waveforms and relate their properties to the stimulus in a simple manner.

The question raised by Lewis of whether the nervous system can actually make 
use of such codes is a potentially rich area for research. Chung, Raymond, and
Lettvin (1970) have shown that branching at axonal nodes is an effective
mechanism for decoding temporal messages. Young (1977) was the first to show
that bypassing the receptors and inserting temporal codes directly into 
a human nervous system could lead to visual perceptions that were the same for
a given code across different observers.

Work on temporal coding has potentially revolutionary implications for 
both physiological and neural net research. As was noted at the INNS 
neural net meeting in Boston, temporal coding has not yet been applied or 
even studied by neural net researchers. Neural nets today  can obviously 
change their connection strengths -- but the temporal pattern of the signal on 
the connecting lines is not used to represent or transmit information.  If it 
were, temporal coding methods would seem to offer potentially rich rewards for 
increasing information processing capabilities in neural nets without having to 
increase the number of neurons or their interconnections.

References
-----------
Chung, S. H., Raymond, S. & Lettvin, J. Y. (1970) Multiple meaning in single
visual units. Brain Behav. Evol. 3, 72-101.

Gawne, T. J. , Richmond, B. J., & Optican, L. M. (1987)  Striate cortex neurons 
do not confound pattern, duration, and luminance, Abstr., Soc. for Neuroscience

McClurkin, J. W., Gawne, T. J., Richmond, B. J., Optican, L. M., & Robinson, D. L. (1988)
Lateral geniculate nucleus neurons in awake behaving primates: I. Response to 
B&W 2-D patterns, Abstract, Society for Neuroscience.
 
Optican, L. M., & Richmond, B. J. (1987) Temporal encoding of two-dimensional 
patterns by single units in primate inferior temporal cortex. III. Information 
theoretic analysis. J. Neurophysiol., 57, 147-161.
 
Richmond, B. J., Optican, L. M., Podell, M., & Spitzer, H. (1987) Temporal 
encoding of two-dimensional patterns by single units in primate inferior 
temporal cortex. I. Response characteristics. J. Neurophysiol., 57, 132-146.

Richmond, B.J., & Optican, L. M. (1987)  Temporal encoding of two-dimensional 
patterns by single units in primate inferior temporal cortex. II. 
Quantification of response waveform. J. Neurophysiol., 57, 147-161.

Richmond, B. J., Optican, L. M., & Gawne, T. J. (accepted) Neurons use multiple 
messages encoded in temporally modulated spike trains to represent pictures. 
Seeing, Contour, and Colour, ed. J. Kulikowski, Pergamon Press.

Richmond, B. J., Optican, L. M., & Gawne, T. J. (1987) Evidence of an intrinsic 
code for pictures in striate cortex neurons, Abstr., Soc. for Neuroscience.
 
Troland, L. T. (1921) The enigma of color vision. Am. J. Physiol. Op. 2, 23-48.
 
Young, R. A. (1977) Some observations on temporal coding of color vision: 
Psychophysical results. Vision Research, 17, 957-965.


------------------------------

End of VISION-LIST
********************

Vision-List-Request@ADS.COM (Vision-List moderator Phil Kahn) (10/20/88)

Vision-List Digest	Wed Oct 19 12:34:49 PDT 1988

 - Send submissions to Vision-List@ADS.COM
 - Send requests for list membership to Vision-List-Request@ADS.COM

Today's Topics:

 ever hear of FULCRUM? Help...
 Re:  Looking for info on Complex Cepstrums
 Request for sensor information

----------------------------------------------------------------------

Date:    Fri, 14 Oct 88 19:30:08 PDT
From: palmer%hbvb.span@VLSI.JPL.NASA.GOV (Gary Palmer)
Subject: ever hear of FULCRUM? Help...

Hello everyone, I need some help.

  I am looking for some (any) information on a 
system/package/who-knows-what that does automated mapping.  What I
have heard is that one called FULCRUM exists but I have been unable to 
gather any other information.  That's all I know, any other help would 
be greatly appreciated...  Any leads about other automated mapping 
(from you cartographers out there) products would also be appreciated. 

If there are any requests to do so, I will post my findings.

  Please respond to me directly as I do not subscribe to these BB's 
that I am posting to.

Many Thanks,
Gary Palmer
Science Applications International Corp.
(213) 781-8644
SPAN:     hbva::palmer
INTERNET: palmer%hbva.span@vlsi.jpl.nasa.gov

[ I am posting this because many of the problems in automated mapping are
  experienced in spatial vision.  If you think your answers are relevant
  to vision-types, please post to this List.
			phil...  	]

------------------------------

Date: Mon, 17 Oct 88 12:18:56 EDT
From: efrethei@BLACKBIRD.AFIT.AF.MIL (Erik J. Fretheim)
Subject: Re:  Looking for info on Complex Cepstrums

Cepstrums come in two varieties.  The first of these is
called the power cepstrum and is defined as the
Fourier transform of the log magnitude of a Fourier
transform:

	    FFT{ log(|FFT{f(x)}|) }

This gives a non-invertible result which shows the relative
magnitude of occurrences of frequency intervals in the base
function.  This is used in voice analysis, where it will give
a relatively constant spike for glottal pitches and, at another
point on the quefrency scale, a series of peaks which represent
the formants for any particular utterance.  In image analysis the
power cepstrum can be used to discover any regularity of structure.
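
In case a concrete form helps, here is a minimal one-dimensional sketch of
the power cepstrum exactly as defined above (numpy-based; the small epsilon
guarding the log is an implementation detail added here, not part of the
definition).

    import numpy as np

    def power_cepstrum(x, eps=1e-12):
        # FFT{ log(|FFT{x}|) }; eps keeps the log finite at spectral zeros
        spectrum = np.fft.fft(x)
        log_mag  = np.log(np.abs(spectrum) + eps)
        return np.fft.fft(log_mag)

    # A signal with periodic structure (e.g. an echo at a lag of L samples)
    # shows a peak near quefrency L in np.abs(power_cepstrum(signal)).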

The complex cepstrum differs from the power cepstrum in that it is
invertible.  The complex cepstrum is computed by calculating an FFT
using the log magnitude of an FFT as the real term and the phase of an
FFT as the imaginary term.  The complication in computing the complex
cepstrum is that the phase must be unwrapped to be continuous, rather
than being limited to values between -PI and PI or 0 and 2*PI.  The
complex cepstrum is used for canceling echoes in seismographic testing
and for similar functions in voice processing.  In image processing it
can be used to cancel multiple copies of an object.  For example, if
an image has three vertical bars, one of brightness level 80, one 50, and
one 30, a change to one term in the complex cepstrum will result in an
image with only the 80 bar and slight (2-3 gray level) disturbances
where the other bars had been.  Another tweak to the complex
cepstrum will introduce multiple copies of a bar into a scene where there
previously was only one.
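
A corresponding sketch of the complex cepstrum, with the phase unwrapped to
be continuous as described.  The textbook form uses the complex log of the
spectrum and an inverse FFT; conventions vary, so treat this as one
illustrative choice rather than the only correct one.

    import numpy as np

    def complex_cepstrum(x, eps=1e-12):
        spectrum = np.fft.fft(x)
        log_mag  = np.log(np.abs(spectrum) + eps)   # real part
        phase    = np.unwrap(np.angle(spectrum))    # continuous, not limited to (-PI, PI]
        return np.fft.ifft(log_mag + 1j * phase)

    def inverse_complex_cepstrum(c):
        # invertible: FFT back to the log spectrum, exponentiate, inverse FFT
        log_spectrum = np.fft.fft(c)
        return np.real(np.fft.ifft(np.exp(log_spectrum)))

    # Editing individual cepstral terms of an image row or column and then
    # inverting is the kind of operation described above for suppressing
    # repeated structure.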

I have been unable to find any references which do more than describe the 
complex cepstrum and report its uses.  I am interested in finding
information about the theory behind the complex cepstrum, guidelines on 
how to consistently decide just where to apply changes to the complex 
cepstrum to obtain consistent results in the image (so far a guessing 
game) and information about two-dimensional applications of the complex 
cepstrum.  I have an image processing problem where if I could get a 
consistent theory for the complex cepstrum, I feel I could make some 
progress.

------------------------------

Date: 	Tue, 18 Oct 88 15:42:47 EDT
From: Mark Noworolski <noworol@eecg.toronto.edu>
Subject: Request for sensor information
Organization: EECG, University of Toronto

A project that I'm currently on requires that I be able to locate
the area around an individual's open mouth - with his face being well
lit (similar to a dentist's light).

My preferred method would be to use some kind of DSP techniques on
the visual signal received by sensor 'X'.

My current problem is finding said sensor 'X' at a reasonable price.
(NB: 2 dimensions are essential - the vision element cannot scan left
to right or up and down.)

A 2D CCD array would be nice, but $ come into play there (or am I mistaken?)
I recall reading in Byte about an 'optical RAM' sometime around Sept. '83.
In fact I even used that part once and it only cost me about $50.
Unfortunately it was very contrast sensitive and quite difficult to get,
and so now I'm considering it as my last resort.

Does anyone know where I can get parts of comparable value for a reasonable
price? Even a general guideline where to look would be appreciated.

Hoping for some help,
mark   (noworol@godzilla.eecg    or noworol@ecf.toronto.edu)

[  The price of 2D CCD cameras has come down quite a bit, so you might
   be able to get one for around $500. If it's 2D CCD imaging chips you're
   after (without the camera), you might take a look at Fairchild Weston/
   Schlumberger.  Looking at their catalog, they have a 488x380 CCD
   chip (CCD222) and DSP chips which may solve your problem (phone
   number is 408-720-7600 USA).  There are many others.
			phil... 		]


------------------------------

End of VISION-LIST
********************

Vision-List-Request@ADS.COM (Vision-List moderator Phil Kahn) (10/22/88)

Vision-List Digest	Fri Oct 21 14:55:25 PDT 1988

 - Send submissions to Vision-List@ADS.COM
 - Send requests for list membership to Vision-List-Request@ADS.COM

Today's Topics:

 FULCRUM 
 gathering dense image sequences

----------------------------------------------------------------------

Date: Friday, 21 October 1988 10:22:36 EDT
From: Dave.McKeown@maps.cs.cmu.edu
Subject: FULCRUM 
Status: RO

FULCRUM is an interactive map display system that uses a video
disk to store digitized maps at various resolutions.  There is
a graphic man-machine interface that allows a user to query the
display for lat/lon/elevation, a gazetteer, and the ability to
generate and place icons.  It is not so much automated mapping
(i.e., cultural or terrain feature extraction from imagery, or
automated map name placement, or automated map digitization) 
as it is a way to interface the display of precompiled cartographic
information with a spatial database of geographic facts.
It is (was) developed by a company called Interactive Television
Company in Virginia, but I don't know any other particulars.
Cheers, Dave

------------------------------

Date: Fri, 21 Oct 88 14:22:07 PDT
From: pkahn@meridian.ads.com (Phil Kahn)
Subject: gathering dense image sequences
Status: RO


I thought this might be of interest to those of you trying to acquire
controlled motion image sequences for testing or developing algorithms.

I am particularly interested in acquiring dense image sequences; that
is, imagery in which objects don't move more than about a single pixel
between frames (as described in Bolles, Baker, & Marimont IJCV, V1,
1987; Kahn, PAMI 7/85 and CVPR88 for a better description).

Smooth and controlled motion can be simply obtained by using a special
dolly and track available from most Motion Picture Equipment and
Supplies rental services (check the phone book). The movie industry
uses this to obtain smooth translation of the camera while
changing camera position.  Your tripod mounts on top of the dolly
(about 2' x 3') and fixed positions may be marked along the track
position in order to precisely control the position of the camera on
the ground plane. Tracks come in 4' and 8' sections.  It only cost $25
p/day for the dolly and $8 p/day for each 8' track section. For $50
p/day, we were able to acquire very smooth and precise motion.
Because of drift in robots, it is even more precise and controllable
than using a mobile robot vehicle to acquire test imagery.
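
As a sanity check before shooting, note that for a laterally translating
camera the image shift of a point at depth Z is roughly f * T / Z, where f is
the focal length in pixels and T the camera step between frames.  The numbers
below (focal length, scene depth) are made-up assumptions, purely to show the
arithmetic for keeping motion under a pixel per frame.

    # How far can the dolly move between frames for <= 1 pixel of image motion?
    focal_length_px = 800.0    # focal length expressed in pixels (assumed)
    scene_depth_m   = 5.0      # distance to the nearest object of interest (assumed)
    max_shift_px    = 1.0      # "dense sequence" target

    max_step_m = max_shift_px * scene_depth_m / focal_length_px
    print("max camera step per frame: %.1f mm" % (1000 * max_step_m))   # ~6.3 mm

    # Marks along the track can then be spaced accordingly; closer objects or
    # longer lenses call for proportionally smaller steps.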




------------------------------

End of VISION-LIST
********************

Vision-List-Request@ADS.COM (Vision-List moderator Phil Kahn) (10/29/88)

Vision-List Digest	Fri Oct 28 17:23:21 PDT 88

 - Send submissions to Vision-List@ADS.COM
 - Send requests for list membership to Vision-List-Request@ADS.COM

Today's Topics:

 MACHINE VISION AND APPLICATIONS JOURNAL

----------------------------------------------------------------------

From: Gerhard Rossbach <rossbach@hub.ucsb.edu>
Subject: MACHINE VISION AND APPLICATIONS JOURNAL
Date: Mon, 24 Oct 88 12:29:33 PDT

There is now a journal in the field of machine vision, 
integrating theory and applications.  It is published 
four times a year and has a personal subscription rate 
of $44.00. 
The contents of Volume 1, Issue 4 are:

"Dynamic Monocular Machine Vision", by Ernst Dieter 
Dickmanns and Volker Graefe.
"Applications of Dynamic Monocular Machine Vision", 
by Ernst Dieter Dickmanns and Volker Graefe.
"The P300: A System for Automatic Patterned Wafer 
Inspection", by Byron E. Dom.

The three previous issues have contained articles on:

"Background Substration Algorithms for 
Videodensitometric Quantification of Coronary 
Stenosis", by O. Nalcioglu, et. al.
"Automatic Optical Inspection of Plated Through-holes 
for Ultrahigh Density Printed Wiring Boards", 
by Moritoshi Ando and Takefumi Inagaki.
"Projection-based High Accuracy Measurement of 
Straight Line Edges", by Dragutin Petkovic, 
Wayne Niblack, and Myron Flickner.
"An Image-Recognition System Using Algorithmically 
Dedicated Integrated Circuits", by Peter A. Ruetz and 
Robert W. Brodersen.
"Pipelined Architectures for Morphologic Image Analysis",
 by Lynn Abbott, Robert M. Haralick, and Xinhua Zhuang.
"3-D Imaging Systems and High-Speed Processing for
 Robot Control", by Robert M. Lougheed and Robert E. Sampson.
"Interactive and Automatic Image Recognition System", 
by Fumiaki Tomita.
"Multidimensional Biomedical Image Display and Analysis
 in the Biotechnology Computer Resource at the Mayo Clinic",
 by Richard A. Robb.
"A Class of Adaptive Model- and Object-Driven Nonuniform 
Sampling Methods for 3-D Inspection", by Charles B. Malloch,
 William I. Kwak and Lester A. Gerhardt.
"Model-Based Estimation Techniques for 3-D Reconstruction 
from Projections", by Yoram Bresler, Jeffrey A. Fessler, 
and Albert Macovski.
"Active, Optical Range Imageing Sensors", by Paul J. Besl.
"Ash Line Control", by Ola Dahl and Lars Nielsen.

If you would like more information on submitting a paper 
or subscribing to the Journal, please send email to: 
rossbach@hub.ucsb.edu

------------------------------

End of VISION-LIST
********************

Vision-List-Request@ADS.COM (Vision-List moderator Phil Kahn) (11/08/88)

Vision-List Digest	Mon Nov 07 11:05:22 PDT 88

 - Send submissions to Vision-List@ADS.COM
 - Send requests for list membership to Vision-List-Request@ADS.COM

Today's Topics:

 convexity bibliography update
 PIX IPS vision bulletin board 508-441-2118
 Stanford seminar

----------------------------------------------------------------------

Date: Mon, 31 Oct 88 13:14:21 PST
From: binford@Boa-Constrictor.Stanford.EDU.stanford.edu (Tom Binford)
		
		
			Monday, Nov 7, 1988
		4:15pm Cedar Hall Conference Room

                    Exploiting Temporal Coherence in
             Scene Analysis for an Autonomous Land Vehicle
		
                            Aaron F. Bobick
		
                     Artificial Intelligence Center
                           SRI International
		
		
		
		
A technique to build reliable scene descriptions by evaluating the
temporal stability of detected objects is presented.  Although current
scene analysis techniques usually analyze individual images correctly,
they occasionally make serious mistakes that could endanger an
autonomous vehicle that is depending upon them for navigational
information.  Our approach is designed to avoid these ugly mistakes and
to increase the competence of the sensing system by tracking objects
from image to image and evaluating the stability of their descriptions
over time.  Since the information available about an object can change
significantly over time, we introduce the idea of a representation
space, which is a lattice of representations progressing from crude
blob descriptions to complete semantic models, such as bush, rock, and
tree.  One of these representations is associated with an object only
after the object has been described multiple times in the representation
and the parameters of the representation are stable.  We define
stability in a statistical sense enhanced by a set of explanations
describing valid reasons for deviations from expected measurements.
These explanations may draw upon many types of knowledge, including the
physics of the sensor, the performance of the segmentation procedure,
and the reliability of the matching technique.  To illustrate the power
of these ideas we have implemented a system, which we call TraX, that
constructs and refines models of outdoor objects detected in sequences
of range data.  This work is a joint project with Bob Bolles.
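
The abstract does not spell out the stability test, but the flavor -- promote
an object to a richer representation only after its parameters have been
re-measured several times and stayed consistent, unless a deviation has a
valid explanation -- might be sketched as follows.  All names, thresholds and
the tolerance model here are invented for illustration and are not TraX code.

    def stable(history, tolerance, min_sightings=3, explained=lambda old, new: False):
        # history: successive estimates of one parameter for a tracked object
        if len(history) < min_sightings:
            return False
        baseline = history[0]
        for value in history[1:]:
            if abs(value - baseline) > tolerance and not explained(baseline, value):
                return False      # unexplained deviation: not yet stable
        return True

    # e.g. promote a blob toward "tree" only once its estimated height settles:
    heights = [4.1, 4.0, 4.2, 4.1]
    promote = stable(heights, tolerance=0.3)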
		
		
------------------------------

Date: 4 Nov 88 12:31:25 GMT
From: prlb2!ronse@uunet.UU.NET (Christian Ronse)
Subject: convexity bibliography update
Organization: Philips Research Laboratory, Brussels
Keywords: 370 entries!

In February 1987 I publicised on the net a convexity bibliography with over
250 entries. Many people asked me for a copy of it. It has since been updated
and now contains 370 entries. It will appear in IEEE Trans. PAMI in early
1989, probably in March. You can again ask me for a copy of it, provided that
you can't wait until March or you don't have PAMI in your library.

Christian Ronse		maldoror@prlb2.UUCP
{uunet|philabs|mcvax|cernvax|...}!prlb2!{maldoror|ronse}

------------------------------

Date: 6 Nov 88 03:55:33 GMT
From: manning@mmm.serc.3m.com (Arthur T. Manning)
Subject: PIX IPS vision bulletin board 508-441-2118
Keywords: software
Organization: 3M Company - Software and Electronics Resource Center (SERC); St. Paul, MN

Paragon Imaging has set up a bulletin board for vision topics and also
to promote their IPS (Image Processing Software) package.  All the stuff
I've seen so far has been in FORTRAN, but other source code may appear
soon.  It is a fairly unsophisticated system, but it may prove useful.

Paragon advertised this at Electronic Imaging in Boston last month.


Arthur T. Manning                                Phone: 612-733-4401
3M Center  518-1                                 FAX:   612-736-3122
St. Paul  MN  55144-1000   U.S.A.                Email: manning@mmm.uucp


------------------------------

End of VISION-LIST
********************

Vision-List-Request@ADS.COM (Vision-List moderator Phil Kahn) (11/08/88)

Vision-List Digest	Mon Nov 07 11:21:58 PDT 88

 - Send submissions to Vision-List@ADS.COM
 - Send requests for list membership to Vision-List-Request@ADS.COM

Today's Topics:

 CONFERENCE ANNOUNCEMENTS:
 =========================
 CVPR Call for Papers reminder
 AI Symposium in Israel: Program
 CFP - 6TH SCANDINAVIAN CONFERENCE ON IMAGE ANALYSIS
 IEEE 1989 Int Conf on Image Processing
 Symposium on Chinese Text Processing

----------------------------------------------------------------------

Date: Sun, 06 Nov 88 20:46:25 EST
From: "Worthy N. Martin" <wnm@uvacs.cs.virginia.edu>
Subject: CVPR Call for Papers reminder


The following call for papers has appeared before;
however, it is being reissued to remind interested
parties of the first deadline, namely:

---->
----> November 16, 1988 -- Papers submitted
---->

This deadline will be firmly adhered to, with the submission
date determined by postmark.
Thank you for your interest in CVPR89.
   Worthy Martin



                      CALL FOR PAPERS

              IEEE Computer Society Conference
                            on
          COMPUTER VISION AND PATTERN RECOGNITION

                    Sheraton Grand Hotel
                   San Diego, California
                      June 4-8, 1989.



                       General Chair


               Professor Rama Chellappa
               Department of EE-Systems
               University of Southern California
               Los Angeles, California  90089-0272


                     Program Co-Chairs

Professor Worthy Martin          Professor John Kender
Dept. of Computer Science        Dept. of Computer Science
Thornton Hall                    Columbia University
University of Virginia           New York, New York  10027
Charlottesville, Virginia 22901


                     Program Committee

Charles Brown         John Jarvis            Gerard Medioni
Larry Davis           Avi Kak                Theo Pavlidis
Arthur Hansen         Rangaswamy Kashyap     Alex Pentland
Robert Haralick       Joseph Kearney         Roger Tsai
Ellen Hildreth        Daryl Lawton           John Tsotsos
Anil Jain             Martin Levine          John Webb
Ramesh Jain           David Lowe



                    Submission of Papers

Four copies of complete drafts, not exceeding 25 double-spaced
typed pages, should be sent to Worthy Martin at the address given
above by November 16, 1988 (THIS IS A HARD DEADLINE).  All reviewers
and authors will be anonymous for the review process.  The cover page
will be removed for the review process.  The cover page must contain
the title, authors' names, primary author's address and telephone
number, and index terms containing at least one of the topics below.
The second page of the draft should contain the title and an abstract
of about 250 words.  Authors will be notified of acceptance by
February 1, 1989, and final camera-ready papers, typed on special
forms, will be required by March 8, 1989.

                  Submission of Video Tapes

As a new feature there will be one or two sessions where the
authors can present their work using video tapes only.  For
information regarding the submission of video tapes for
review purposes, please contact John Kender at the address
above.



                 Conference Topics Include:

          -- Image Processing
          -- Pattern Recognition
          -- 3-D Representation and Recognition
          -- Motion
          -- Stereo
          -- Visual Navigation
          -- Shape from _____ (Shading, Contour, ...)
          -- Vision Systems and Architectures
          -- Applications of Computer Vision
          -- AI in Computer Vision
          -- Robust Statistical Methods in Computer Vision



                           Dates

      November 16, 1988 -- Papers submitted
      February 1, 1989  -- Authors informed
      March 8, 1989     -- Camera-ready manuscripts to IEEE
      June 4-8, 1989    -- Conference


------------------------------

Date: Tue, 1 Nov 88 19:15:01 JST
From: Shmuel Peleg <peleg%humus.Huji.AC.IL@CUNYVM.CUNY.EDU>
Subject: AI Symposium in Israel: Program

         Fifth Israeli Symposium on Artificial Intelligence
                   Tel-Aviv, Ganei-Hata`arucha
                      December 27-28, 1988


Preliminary Program

Tuesday,  December 27.

08:00-09:00     Registration

09:00-12:00     Opening Session, Joint with ITIM/CASA
Opening addresses.
Invited Talk:   Three dimensional vision for robot applications
David Nitzan, SRI International


12:00-13:30     Lunch Break

13:30-15:15     Session 2.4     Constraints

Invited Talk:  An Overview of the Constraint Language CLP(R)
Joxan Jaffar, IBM TJW Research Center

Belief maintenance in dynamic constraint networks
Rina Dechter, UCLA, and Avi Dechter, California State University


13:30-15:15     Session 2.5     Vision

Multiresolution shape from shading
Gad Ron and Shmuel Peleg, Hebrew University

Describing geometric objects symbolically
Gerald M. Radack and Leon S. Sterling, Case Western Reserve University

A vision system for localization of textile pieces on a light table
(short talk)
H. Garten and M. Raviv,  Rafael

15:15-15:45     Coffee Break

15:45-17:45     Session 3.4     Reasoning Systems


A classification approach for reasoning systems - a case study in
graph theory
Rong Lin, Old Dominion University


Descriptively powerful terminological representation
Mira Balaban and  Hana Karpas, Ben-Gurion University


Bread, Frappe, and Cake: The Gourmet's Guide to Automated Deduction
Yishai A. Feldman and Charles Rich, MIT


15:45-17:45     Session 3.5     Vision

Invited Talk:  Cells, skeletons and snakes
Martin D. Levine, McGill University

The Radial Mean of the Power Spectrum (RMPS) and adaptive image restoration
Gavriel Feigin and Nissim Ben-Yosef, Hebrew University

Geometric and probabilistic criteria with an admissible cost structure
for 3-d object recognition by search
Hezekiel Ben-Arie, Technion

17:45-18:15     IAAI Business meeting


Wednesday, December 28.

09:00-10:30     Session 4.4     Computer Aided Instruction

The implementation of artificial intelligence in computer based training
Avshalom Aderet and Sachi Gerlitz, ELRON Electronic Industries

A logical programming approach to research and development of a
student modeling component in a computer tutor for characteristics of
functions
Baruch Schwarz and Nurit Zehavi, Weizmann Institute

Meimad --- A database integrated with instructional system for retrieval
(in Hebrew)
Avigail Oren and David Chen, Tel-Aviv University

09:00-10:30     Session 4.5     Robotics/Search

Invited Talk: Principles for Movement Planning and Control
Tamar Flash, Weizmann Institute

Strategies for efficient incremental nearest neighbor search
Alan J. Broder, The MITRE Corporation

10:30-11:00     Coffee Break

11:00-13:00     Session 5.4     Legal Applications/Language

Towards a computational model of concept acquisition and
modification using cases and precedents from contract law
Seth R. Goldman, UCLA

Expert Systems in the Legal Domain
Uri J. Schild, Bar-Ilan University

Machinery for Hebrew Word Formation
Uzzi Ornan, Technion

What's in a joke?
Michal Ephratt, Haifa University

11:00-13:00     Session 5.5     Expert Systems

Explanatory Meta-rules to provide explanations in expert systems
C. Millet, EUROSOFT, and M. Gilloux, CNET

Declarative vs. procedural representation in an expert system: A perspective
Lev Zeidenberg, IET,  and Ami Shapiro IDF

Automatic models generation for troubleshooting
Arie Ben-David, Hebrew University

A general expert system for resource allocation (in Hebrew)
Zvi Kupelik, Ehud Gudes, Amnon Mizels, and Perets Shoval, Ben-Gurion University


13:00-14:30     Lunch Break

14:30-16:00     Session 6.4     Logic Programming

Invited Talk: The CHIP constraint programming system
Mehmet Dincbas, ECRC

Time constrained logic programming
Andreas Zell, Stuttgart University

Automatic generation of control information in five steps
Kristof Verschaetse, Danny De Schreye and Maurice Bruynooghe,
Katholieke Universiteit Leuven


14:30-16:00     Session 6.5     Data Structures for Vision

Invited Talk:   An Overview of Hierarchical Spatial Data Structures
Hanan Samet, University of Maryland

Optimal Parallel Algorithms for Quadtree Problems
Simon Kasif, Johns Hopkins University


16:00-16:30     Coffee Break

16:30-18:00     Session 7.4     Reasoning and Nonmonotonic Logic

Preferential Models and Cumulative Logics
Daniel Lehmann, Hebrew University

Invited Talk: Bayesian and belief-function formalisms for evidential
reasoning: A conceptual analysis
Judea Pearl, UCLA

16:30-18:00     Session 7.5     Pattern Matching

Scaled pattern matching
Amihood Amir, University of Maryland

Term Matching on a Mesh-Connected Parallel Computer
Arthur L. Delcher and Simon Kasif, The Johns Hopkins University


18:00-18:15     Closing remarks


For registration information please contact:

5th ISAI Secretariat
IPA, Kfar Maccabiah,
Ramat Gan 52109
Israel
(972) 3-715772

Or by e-mail:
udi@wisdom.bitnet
hezy@taurus.bitnet



------------------------------

Date: Thu, 3 Nov 88 08:26:40 +0200
From: jjr@tolsun.oulu.fi (Juha Röning)
Subject: CFP - 6TH SCANDINAVIAN CONFERENCE ON IMAGE ANALYSIS


      The 6th Scandinavian Conference on Image Analysis
      =================================================

      June 19 - 22, 1989
      Oulu, Finland

      Second Call for Papers



      INVITATION TO 6TH SCIA

      The 6th Scandinavian Conference on Image  Analysis   (6SCIA)
      will  be arranged by the Pattern Recognition Society of Fin-
      land from June 19 to June 22, 1989. The conference is  spon-
      sored  by the International Association for Pattern Recogni-
      tion. The conference will be held at the University of Oulu.
      Oulu is the major industrial city in North Finland, situated
      not far from the Arctic Circle. The conference  site  is  at
      the Linnanmaa campus of the University, near downtown Oulu.

      CONFERENCE COMMITTEE

      Erkki Oja, Conference Chairman
      Matti Pietikäinen, Program Chairman
      Juha Röning, Local Organization Chairman
      Hannu Hakalahti, Exhibition Chairman

      Jan-Olof Eklundh, Sweden
      Stein Grinaker, Norway
      Teuvo Kohonen, Finland
      L. F. Pau, Denmark

      SCIENTIFIC PROGRAM

      The program will  consist  of  contributed  papers,  invited
      talks  and special panels.  The contributed papers will cov-
      er:

              * computer vision
              * image processing
              * pattern recognition
              * perception
              * parallel algorithms and architectures

      as well as application areas including

              * industry
              * medicine and biology
              * office automation
              * remote sensing

      There will be invited speakers on the following topics:

      Industrial Machine Vision
      (Dr. J. Sanz, IBM Almaden Research Center)

      Vision and Robotics
      (Prof. Y. Shirai, Osaka University)

      Knowledge-Based Vision
      (Prof. L. Davis, University of Maryland)

      Parallel Architectures
      (Prof. P. E. Danielsson, Linköping University)

      Neural Networks in Vision
      (to be announced)

      Image Processing for HDTV
      (Dr. G. Tonge, Independent Broadcasting Authority).

      Panels will be organized on the following topics:

      Visual Inspection in the  Electronics  Industry  (moderator:
      prof. L. F. Pau);
      Medical Imaging (moderator: prof. N. Saranummi);
      Neural Networks and Conventional  Architectures  (moderator:
      prof. E. Oja);
      Image Processing Workstations (moderator: Dr.  A.  Kortekan-
      gas).

      SUBMISSION OF PAPERS

      Authors are invited to submit four  copies  of  an  extended
      summary of at least 1000 words of each of their papers to:

              Professor Matti Pietikäinen
              6SCIA Program Chairman
              Dept. of Electrical Engineering
              University of Oulu
              SF-90570 OULU, Finland

              tel +358-81-352765
              fax +358-81-561278
              telex 32 375 oylin sf
              net scia@steks.oulu.fi

      The summary should contain sufficient  detail,  including  a
      clear description of the salient concepts and novel features
      of the work.  The deadline for submission  of  summaries  is
      December  1, 1988. Authors will be notified of acceptance by
      January 31st, 1989 and final camera-ready papers will be re-
      quired by March 31st, 1989.

      The length of the final paper must not exceed 8  pages.  In-
      structions  for  writing the final paper will be sent to the
      authors.

      EXHIBITION

      An exhibition is planned.  Companies  and  institutions  in-
      volved  in  image analysis and related fields are invited to
      exhibit their products at demonstration stands,  on  posters
      or video. Please indicate your interest to take part by con-
      tacting the Exhibition Committee:

              Matti Oikarinen
              P.O. Box 181
              SF-90101 OULU
              Finland

              tel. +358-81-346488
              telex 32354 vttou sf
              fax. +358-81-346211

      SOCIAL PROGRAM

      A social program will be arranged,  including  possibilities
      to  enjoy  the  location  of the conference, the sea and the
      midnight sun. There  are  excellent  possibilities  to  make
      post-conference  tours  e.g.  to Lapland or to the lake dis-
      trict of Finland.

      The social program will consist of a get-together  party  on
      Monday June 19th, a city reception on Tuesday June 20th, and
      the conference Banquet on Wednesday June 21st. These are all
      included  in the registration fee. There is an extra fee for
      accompanying persons.

      REGISTRATION INFORMATION

      The registration fee will be 1300  FIM  before  April  15th,
      1989  and 1500 FIM afterwards. The fee for participants cov-
      ers:  entrance  to  all  sessions,  panels  and  exhibition;
      proceedings; get-together party, city reception, banquet and
      coffee breaks.

      The fee is payable by
              - check made out to 6th SCIA and mailed to the
                Conference Secretariat; or by
              - bank transfer draft account or
              - all major credit cards

      Registration forms, hotel information and  practical  travel
      information  are  available from the Conference Secretariat.
      An information package will be sent to authors  of  accepted
      papers by January 31st, 1989.

      Secretariat:
              Congress Team
              P.O. Box 227
              SF-00131 HELSINKI
              Finland
              tel. +358-0-176866
              telex 122783 arcon sf
              fax +358-0-1855245

      There will be hotel rooms available for  participants,  with
      prices  ranging  from  135 FIM (90 FIM) to 430 FIM (270 FIM)
      per night for a single room (double room/person).

------------------------------

Date:     Fri, 4 Nov 88 12:26 H
From: <CHTEH%NUSEEV.BITNET@CUNYVM.CUNY.EDU>
Subject:  ICIP'89 : IEEE 1989 Int Conf on Image Processing


     IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING
                         (ICIP'89)

               5-8 September, 1989, Singapore

                  CALL FOR PAPERS (Updated)


     The  1989  IEEE  International  Conference   on   Image
Processing  (ICIP'89)  will  be  held  in  Singapore  on 5-8
September, 1989.  The conference is jointly organized by the
Computer  Chapter, IEEE Singapore Section and the Department
of Electrical Engineering, National University of Singapore.
The  conference will include regular sessions on all aspects
of the theory and  applications  of  image  processing.   In
addition,  tutorials  by  eminent  speakers  presenting  the
state-of-the-art in selected areas of image processing  will
be  offered.  An exhibition will be held in conjunction with
the conference.

     Papers describing original work in all aspects of image
processing   are   invited.   Topics  for  regular  sessions
include, but are not limited to, the following :

  Image analysis/modeling          Office image processing
  Image restoration/enhancement    Machine vision
  Video communications             AI vision techniques
  Image pattern recognition        VLSI implementation
  Remote sensing                   System architecture
  Biomedical imaging               Color image processing

     Authors  are  invited  to  submit  four  copies  of  an
extended summary of at least 1000 words to :

           Technical Program Chairman, ICIP'89
           c/o Meeting Planners Pte Ltd
           100 Beach Road, #33-01
           Shaw Towers, Singapore 0718
           Republic of Singapore
           Telex : RS40125 MEPLAN
           Fax : (65) 2962670
           E-mail : OSH@NUSEEV.BITNET

     The summary should contain sufficient detail, including
a  clear  description  of  the  salient  concepts  and novel
features of  the  work.   The  summary  should  include  the
authors'  names,  addresses,  affiliations,  and  telephone,
telex and fax numbers.  The authors should also indicate one
or  more of the above topics that best describe the contents
of the paper.

     Proposals for tutorials and special sessions  are  also
welcome  and  should  be  addressed to the Technical Program
Chairman before 16 January 1989.


AUTHORS' SCHEDULE
   Submission of summary              1 February 1989
   Notification of acceptance        31 March    1989
   Submission of final manuscripts    1 June     1989

------------------------------

Date: 2 Nov 88 17:58:33 GMT
From: uflorida!proxftl!francis@gatech.edu (Francis H. Yu)
Subject: 1989 Symposium on Chinese Text Processing
Organization: Proximity Technology, Ft. Lauderdale


                   *************************************
                   *          CALL FOR PAPERS          *
                   *************************************


                 1989 Symposium on Chinese Text Processing
                     Boca Raton, FL, March 16-17, 1989
                                Sponsored by
              The Chinese Language Computer Society of Florida


       The Symposium's objective is to serve as an international
       forum for the presentation of research conducted in the area
       of ideographics.

       Papers describing original work in all aspects of
       ideographics are invited.

       A non-exclusive sample of the included topics follows:

       *    Character Input and Display
       *    Character Encoding
       *    Design of Terminals and Keyboards
       *    Chinese Language Lexicon
       *    Chinese Language Parsing
       *    Machine Translation
       *    Optical Character Recognition
       *    Computer Ideographics in the 90's
       *    Speech Recognition
       *    Applications
               Word Processors
               Electronic Typewriters
               Desk Top Publishing
               Typesetting

       Authors should submit 3 copies of the proposed paper to
            Paper Review Committee
            1989 Symposium on Chinese Text Processing
            Florida Atlantic University
            Department of Computer Engineering
            500 NW 20 Street
            Boca Raton, FL 33486

       Symposium Chairman
            Dr George Kostopoulos
            (407) 393-3463

       Paper Submission Deadline:     December 1, 1988
       Notification of Acceptance:    January  1, 1989
       Final Manuscript Due:          February 1, 1989
 
FRANCIS H. YU  (305)-566-3511
Proximity Technology, Inc., Ft. Lauderdale, FL, USA
Domain:	francis@proxftl.uucp
Uucp:	...!{ uunet, uflorida!novavax }!proxftl!francis


------------------------------

End of VISION-LIST
********************

Vision-List-Request@ADS.COM (Vision-List moderator Phil Kahn) (11/12/88)

Vision-List Digest	Fri Nov 11 14:13:22 PDT 88

 - Send submissions to Vision-List@ADS.COM
 - Send requests for list membership to Vision-List-Request@ADS.COM

Today's Topics:

 Vision systems question/advice
 Translator

----------------------------------------------------------------------

Date: 11 Nov 88 03:36:37 GMT
From: rr21@isl1.ri.cmu.edu (Rajan Ramaswamy)
Subject: Vision systems question/advice
Keywords: vision system, steel plants, sensing
Organization: Carnegie-Mellon University, CS/RI


     APPLYING COMPUTER-VISION TO STEEL PLANT PROCESS CONTROL

We are investigating the feasibility of installing vision systems to
replace existing sensing systems in steel plants. These are electric
arc furnaces that produce a couple of hundred tons of molten steel (3000
F) every few hours, and the people concerned want to move their sensors 
as far away as they possibly can.

I would appreciate any views/advice/warnings/EXPERIENCE that anyone on
the net has to offer.  Pointers to relevant references or equipment
will also be welcome.

The following is a summary of the application and problems we foresee,

o We want to sense the position of large (10 ft), slow-moving objects 
  (~1 ft/sec) with reasonable accuracy (~1 inch). These items include the
  roof of the furnace, the tilt angle of the furnace, etc.

o Light conditions are extremely variable. There is an order of magnitude
  change (at least) when the furnace lid is opened. Lots of shadows etc.

o Lots of dust, vibration and sparks flying about are to be expected.

o The temperatures may be rather high (100+ F). This requires that cameras
  be decently far off.

o Processing should be completed in less than 5 seconds.

Thanks a lot,

Rajan

p.s.: If you need further details to answer anything PLEASE contact me.

-- 
"Until the philosophy of one race superior, and another, inferior,
 is finally and permanently discredited and abandoned, it's WAR.
                                   Haile Selassie,Emperor of Abyssinia,
		                   O.A.U. Conference  197?

------------------------------

Date: Thu, 10 Nov 88 14:36:46 GMT
From: Guanghua Zhang <mcvax!cs.hw.ac.uk!guanghua@uunet.UU.NET>
Subject: Translator

For the moment, I am working on a knowledge-based vision system
which includes a commercial Solid Modelling package. The Solid Modeller
represents curved surfaces as a set of facets which can be displayed on
the screen or printed. What I am trying to do is to write a translator to
transform the output data into the real form, e.g. a cylinder.
Could anybody give me some pointers to existing work or any
publications?

Thanks in advance.

guanghua

[  I believe the work on Acronym and Successor by Binford and his students
   at Stanford has addressed this?  
		phil...		]

------------------------------

End of VISION-LIST
********************

Vision-List-Request@ADS.COM (Vision-List moderator Phil Kahn) (11/20/88)

Vision-List Digest	Sat Nov 19 12:16:19 PDT 88

 - Send submissions to Vision-List@ADS.COM
 - Send requests for list membership to Vision-List-Request@ADS.COM

Today's Topics:

 Translator
 bibliography requests
 Assistant Professor position at MIT

----------------------------------------------------------------------

Date: Sat, 12 Nov 88 13:22:20 GMT
From: Guanghua Zhang <mcvax!cs.hw.ac.uk!guanghua@uunet.UU.NET>
Subject: Translator

Following the previous mail, here is more about our system, as requested.
The Solid Modelling package we use is the RoboSolid Modeller from Robocom
Limited, UK (or Robosystems in some places). 
Models can be built interactively on the screen using a mouse. One of the
output files is in the SXF format, which includes vertex coordinate data,
the facets formed from those vertices, and the adjacency of those facets. 
The work I intend to do starts from the SXF file. I would like to
translate the SXF format into the internal representation of models in
the model base of our vision system.

I'd like to hear about any experience with using commercial modelling
packages as the modeller of a vision system.

Thanks a lot.

guanghua

[ I requested this author post more info about this package. If you know of 
  any other solid modelling packages which you feel could be useful for 
  vision, please post a summary and pointers to the source.
		phil... 		]

------------------------------

Date: Mon, 14 Nov 88 10:20:55 +0100
From: prlb2!ronse@uunet.UU.NET (Christian Ronse)
Subject: bibliography requests

Summary: s-mail address needed if you want a copy

Note to all those who requested a copy of that bibliography from me:

1) As this bibliography will be copyrighted, I cannot make it available in
electronic form (FTP or anything else). I can only give preprints.

2) My electronic mailer does not transmit paper.

So: no postal address means request not honoured.

Christian Ronse		maldoror@prlb2.UUCP
{uunet|philabs|mcvax|cernvax|...}!prlb2!{maldoror|ronse}

		Time is Mona Lisa

------------------------------

Date: Mon, 14 Nov 88 14:48:13 est
From: Steve Pinker <steve@psyche.mit.edu>
Site: MIT Center for Cognitive Science
        vision-list@ads.com
Subject: Assistant Professor position at MIT

                                                November 8, 1988

	         	   JOB ANNOUNCEMENT

   The Department of Brain and Cognitive Sciences  (formerly  the
Department  of  Psychology)  of  the  Massachusetts  Institute of
Technology is seeking applicants for a  nontenured,  tenure-track
position in Cognitive Science, with a preferred specialization in
psycholinguistics, reasoning, or knowledge representation.    The
candidate   must  show  promise  of  developing  a  distinguished
research   program,   preferably   one   that   combines    human
experimentation  with  computational modeling or formal analysis,
and must be a skilled teacher. He or  she  will  be  expected  to
participate in the department's educational programs in cognitive
science at  the  undergraduate  and  graduate  levels,  including
supervising  students' experimental research and offering courses
in Cognitive Science or Psycholinguistics.

  Applications must include a brief  cover  letter  stating  the
candidate's  research  and  teaching  interests, a resume, and at
least three letters  of  recommendation,  which  must  arrive  by
January 1, 1989.  Address applications to:


Cognitive Science Search Committee
Attn: Steven Pinker, Chair
E10-018
Department of Brain and Cognitive Sciences
Massachusetts Institute of Technology
Cambridge, MA 02139


------------------------------

End of VISION-LIST
********************

Vision-List-Request@ADS.COM (Vision-List moderator Phil Kahn) (12/03/88)

Vision-List Digest	Fri Dec 02 16:16:37 PDT 88

 - Send submissions to Vision-List@ADS.COM
 - Send requests for list membership to Vision-List-Request@ADS.COM

Today's Topics:

 DeScreening
 dissemination of shareware for Image Processing and Computer Vision
 Robotics Seminar

----------------------------------------------------------------------

Date:         Thu, 24 Nov 88 15:47:37 IST
From: Shelly Glaser  011 972 3 5450119 <GLAS%TAUNIVM.BITNET@CUNYVM.CUNY.EDU>
Subject:      DeScreening

Please publish the following question in the Vision Newsletter:

     I  am looking  for information  on practical  solutions to  the
     "de-screening" problem: taking a half-toned image (like the one
     in printed book or magazine)  and removing the half-tone screen
     so we get a true continuous-gray-scale image (as opposed to the
     binary  pulse area  modulated  half-tone  image).  The  obvious
     solution, low-pass filtering, often kills  too much of the fine
     details  in the  image,  so  I am  looking  for something  more
     sophisticated.
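
For reference, the naive low-pass baseline mentioned above might look
like the following sketch (assuming Python with NumPy/SciPy; the function
name and the sigma value are illustrative only, not a recommended
solution to the de-screening problem):

import numpy as np
from scipy import ndimage

def naive_descreen(halftone, sigma=2.0):
    # Blur away the halftone screen with a Gaussian low-pass filter.
    # halftone: 2-D array of 0/1 values (the binary half-toned image).
    # Returns an approximate continuous-tone image; detail near the
    # screen frequency is lost, which is exactly the drawback noted above.
    return ndimage.gaussian_filter(halftone.astype(float), sigma=sigma)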

     Many thanks,
                             Sincerely Yours,
                                                       Shelly Glaser

                    Department of Electronic, Communication, Control
                                                and Computer Systems
                                              Faculty of Engineering
                                                 Tel-Aviv University
                                                    Tel-Aviv, Israel

                                                   FAX: 972 3 419513
                               Computer network: GLAS@TAUNIVM.BITNET

------------------------------

Date: 26 Nov 88 18:58:00 GMT
From: annala%neuro.usc.edu@oberon.usc.edu (A J Annala)
Subject: possible use of comp.ai.vision
Organization: University of Southern California, Los Angeles, CA


There has been some discussion in comp.graphics about using comp.ai.vision
as the home for discussions about and distribution of image processing
software.  I personally suspect that this would not be an appropriate use
of the comp.ai.vision group; however, I would appreciate email to my user
account (which I will summarize) on this issue.

Thanks, AJ Annala ( annala%neuro.usc.edu@oberon.usc.edu )

[ Discussions on IP software are most definitely appropriate for the Vision
  List and comp.ai.vision.  Yet, as with other SIG networks, it is not 
  appropriate to submit the code in this forum.  Rather, if there is 
  shareware IP and CV software which should be disseminated, then a
  new network newsgroup entitled something like COMP.BINARIES.VISION
  should be established.  This requires a site and a moderator who can
  establish and manage the new facility.  Any volunteers?

			phil...  	]

------------------------------

Date: Tue, 29 Nov 88 19:30:46 PST
From: binford@Boa-Constrictor.Stanford.EDU.stanford.edu (Tom Binford)
Subject: Robotics Seminar

			 Robotics Seminar
			Monday, Dec 7, 1988
		4:15pm Cedar Hall Conference Room


            SOLVING THE STEREO CONSTRAINT EQUATION

		Stephen Barnard
		Artificial Intelligence Center
		SRI International

The essential problem of stereo vision is to find a disparity map
between two images in epipolar correspondence. The stereo constraint
equation, in any of its several forms, specifies a function of disparity
that is a linear combination of photometric error and the first order
variation of the map.  This equation can also be interpreted as the
potential energy of a nonlinear, high dimensional dynamic system.  By
simulating either the deterministic Newtonian dynamics or the
statistical thermodynamics of this system one can find approximate
ground states (i.e., states of minimum potential energy), thereby
solving the stereo constraint equation while constructing a dense
disparity map.

The focus of this talk will be a stochastic method that uses a
microcanonical version of simulated annealing.  That is, it explicitly
represents the heat in the system with a lattice of demons, and it cools
the system by removing energy from this lattice.  Unlike the
``standard'' Metropolis version of simulated annealing, which simulates
the canonical ensemble, temperature emerges as a statistical property of
the system in this approach.  A scale-space image representation is
coupled to the thermodynamics in such a way that the higher frequency
components come into play as the temperature decreases.  This method has
recently been implemented on a Connection Machine.
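
For concreteness, a toy sketch of the microcanonical ("demon") idea
described above, applied to a 1-D disparity map (assuming Python with
NumPy; the energy terms, proposal move, and cooling schedule are
illustrative guesses, not Barnard's actual formulation):

import numpy as np

rng = np.random.default_rng(0)

def energy(d, left, right, lam=1.0):
    # Photometric error plus first-order variation of the disparity map.
    # left, right: corresponding epipolar scanlines as 1-D float arrays.
    x = np.arange(len(left))
    shifted = np.clip(x + d, 0, len(right) - 1)
    photometric = np.sum((left - right[shifted]) ** 2)
    smoothness = lam * np.sum(np.abs(np.diff(d)))
    return photometric + smoothness

def demon_anneal(left, right, d_max=8, sweeps=200, cooling=0.99):
    d = rng.integers(0, d_max + 1, size=len(left))
    demons = np.full(len(left), 10.0)          # lattice of demons holding the "heat"
    e = energy(d, left, right)
    for _ in range(sweeps):
        for i in rng.permutation(len(d)):
            old = d[i]
            d[i] = rng.integers(0, d_max + 1)  # propose a new disparity at site i
            e_new = energy(d, left, right)
            delta = e_new - e
            if delta <= demons[i]:             # demon can pay for the move
                demons[i] -= delta             # (or absorbs energy if delta < 0)
                e = e_new
            else:
                d[i] = old                     # reject; energies unchanged
        demons *= cooling                      # cool by draining the demon lattice
    return d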


------------------------------

End of VISION-LIST
********************

Vision-List-Request@ADS.COM (Vision-List moderator Phil Kahn) (12/08/88)

Vision-List Digest	Wed Dec 07 20:05:04 PDT 88

 - Send submissions to Vision-List@ADS.COM
 - Send requests for list membership to Vision-List-Request@ADS.COM

Today's Topics:

 If you have difficulties posting...

----------------------------------------------------------------------

Date: Wed, 7 Dec 88 20:04:42 PST
From: Vision-List-Request <vision@deimos.ads.com>
Subject: If you have difficulties posting...


Please let me know if you have posted and not seen the posting.  There have
been some reported problems.  

phil...


------------------------------

End of VISION-LIST
********************

Vision-List-Request@ADS.COM (Vision-List moderator Phil Kahn) (12/17/88)

Vision-List Digest	Fri Dec 16 12:44:29 PDT 88

 - Send submissions to Vision-List@ADS.COM
 - Send requests for list membership to Vision-List-Request@ADS.COM

Today's Topics:

 announcement: Image Understanding & Machine Vision, Human & Machine Vision
 repost Geometer and request for info 
 Call for Papers for ACM INTERFACES meeting

----------------------------------------------------------------------


Date: Fri, 16 Dec 88 10:58 EST
From: Sandy Pentland <sandy@MEDIA-LAB.MEDIA.MIT.EDU>
Subject: announcement


CALL FOR PAPERS:

              Optical Society Topical Meeting on
            IMAGE UNDERSTANDING AND MACHINE VISION
           June 12-14, 1989, Cape Cod, Massachusetts
                chair: Alex Pentland, M.I.T.

          FIFTH WORKSHOP ON HUMAN AND MACHINE VISION
            June 15, 1989, Cape Cod, Massachusetts
                chairs: Jacob Beck, U. Oregon, 
		        Azriel Rosenfeld, U. Maryland



SCOPE OF TOPICAL MEETING: In the last few years the availability of
inexpensive image processing equipment has caused an upsurge of interest
in real-time vision, and now groups all across the country are engaged
in such projects.   This meeting will focus on the results of these
projects.

We especially encourage research reports on new algorithms and
techniques that (1) have already been demonstrated on real imagery, and
(2) have a computational complexity that makes real-time applications
economically feasible within the next few years.  Papers concerning
implemented models of human visual processing are appropriate for this
conference.  Papers which are purely theoretical or purely
application-oriented are not appropriate for this meeting.

Technical Committee: Ted Adelson (M.I.T.), Ruzena Bajcsy (U.
Pennsylvania), Dana Ballard (U. Rochester), Peter Burt (SRI Sarnoff
Research Center), Steve Case (Cyberoptics), Ramesh Jain (U. Michigan).

WORKSHOP ON HUMAN AND MACHINE VISION: June 15, the day following the
Topical Meeting, will be devoted to the Fifth Workshop on Human and
Machine Vision.  The workshop will consist of invited papers on models
for human visual processes.  There will be a separate, nominal
registration fee for this workshop.

SUBMISSION OF PAPERS:  Each author is requested to submit a SUMMARY
PAPER of no more than FOUR pages, including all figures and equations.
Original figures or glossy photographs must be submitted, pasted into
position on the summary pages.  If accepted this summary will be
reproduced directly by photo offset and be published in the meeting
proceedings.  The first page of the summary must begin with title,
author, affiliation, and address.

Authors must also submit a 25 word abstract on a separate sheet of
paper, with the title at the top of the page in 12-point upper- and
lower-case letters, followed by the author's name, affiliation, complete
return address, and the body of the abstract.  In the case of multiple
authors each author's name and address should be listed separately after
the title.  If the paper is accepted this abstract will appear in the
advance program.

All abstracts and summaries must reach the Optical Society office by 
February 21, 1989.  Send your paper to:
   Image Understanding
   Optical Society of America
   1816 Jefferson Place NW
   Washington, DC  20036
   (202) 223-0920
Authors will be notified of acceptance by mid April.

REGISTRATION at the meeting is $220 for Optical Society members, $260
for non-members, $90 for students.  Preregistration materials are
available from the Optical Society, at the above address, before May 12,
1989.

HOUSING will be at the Sea Crest Resort, North Falmouth, Cape Cod, MA.,
(508)540-9400.  Convention rates will be $122.50/night single occupancy,
$74.50 per person double occupancy.  BOTH RATES INCLUDE ROOM, BREAKFAST,
LUNCH, AND TAX, and include use of recreational facilities.  All
reservations, plus a $75 deposit check payable to the Sea Crest, must
reach the hotel by MAY 10, 1989, and must note affiliation with the
Optical Society.  Check-in time is 3:00 pm; rates apply June 10-17,
1989.


------------------------------

Date: Fri, 16 Dec 88 12:43:51 PST
From: Vision-List-Request <vision@deimos.ads.com>
Subject: repost Geometer and request for info 


A subscriber posted a message which specified the location of Geometer
(a modeling package) on the UMASS FTP system.  Please repost.  Mailer
problems trashed the posting.

thanks,
phil...


------------------------------

Subject: Call for Papers
Date: Mon, 12 Dec 88 17:54:24 -0500
From: mitchell%community-chest.mitre.org@gateway.mitre.org


              ***** CALL FOR PAPERS AND PARTICIPATION *****
28th Annual Technical Symposium of the Washington, D.C. Chapter of the ACM
            INTERFACES:  Systems and People Working Together
             National Institute of Standards and Technology
                Gaithersburg, Maryland - August 24, 1989
     No computer is an island.  Increasingly, systems are being tied together
to improve their value to the organizations they serve.  This symposium will
explore the theoretical and practical issues in interfacing systems and in
enabling people to use them effectively.
     *** SOME TOPICS OF INTEREST FOR SUBMITTED PAPERS ***
                     * HUMAN FACTORS *
User interfaces              Meeting the needs of handicapped users
Conquering complexity        Designing systems for people
Intelligent assistants       The human dimension of information interchange
                   * SYSTEMS INTEGRATION *
Communications networks      Distributed databases
Data standardization         System fault tolerance
Communications standards (e.g. GOSIP)
                * STRATEGIC  SYSTEMS *
Decision support systems     Embedding expert systems in information systems
Strategic info systems       Computer Aided Logistics Support (CALS)
        * SYSTEM DEVELOPMENT AND OPERATION *
Quality control and testing  Designing a system of systems
System management            Conversion and implementation strategies
Software tools and CASE      Identifying requirements through prototyping
     * ENABLING TECHNOLOGIES FOR APPLICATIONS PORTABILITY *
Ada                          Database management
Open software                Open protocol technology
Operating systems (e.g., POSIX)
==>  DON'T BE LIMITED BY OUR SUGGESTIONS - MAKE YOUR OWN!
     Both experienced and first-time authors are encouraged to present their
work.  Papers will be refereed.  A length of 10 to 20 double-spaced pages is
suggested.
     Those presenting a paper are entitled to register for the symposium at
the early advance registration rate.
     To propose special sessions or noncommercial demonstrations, please send
three copies of an extended abstract to the Program Chairman at the address
below.
     Note: A paper must include the name, mailing address, and telephone
number of each author or other presenter.  Authors of accepted papers must
transfer copyright to ACM for material published in the Proceedings (excepting
papers that cannot be copyrighted under Government regulations).
     The ACM policy on prior publication was revised in 1987.  A complete
statement of the policy appears in the November 1987 issue of Communications
of the ACM.  In part it states that "republication of a paper, possibly
revised, that has been disseminated via a proceedings or newsletter is
permitted if the editor of the journal to which it has been submitted judges
that there is significant additional benefit to be gained from republication."
                            *** SCHEDULE ***
March 2, 1989  Please send five copies of your paper to the Program Chairman:
     Dr. Milton S. Hess
     American Management Systems, Inc.
     1525 Wilson Boulevard
     Arlington, VA 22209
April 13, 1989  Acceptance notification
June 22, 1989  Final camera ready papers are due
August 24, 1989  Presentation at the symposium
     If you have any questions or suggestions, please contact:
     Symposium General Chairman: Charles E. Youman, The MITRE Corporation,
(703) 883-6349 (voice), (703) 883-6308 (FAX), or youman@mitre.org (internet).
     Program Chairman: Dr. Milton Hess, American Management Systems, Inc.,
(703) 841-5942 (voice) or (703) 841-7045 (FAX).
     NIST Liaison: Ms. Elizabeth Lennon, National Institute of Standards and
Technology (formerly the National Bureau of Standards), (301) 975-2832 (voice)
or (301) 948-1784 (FAX).

------------------------------


End of VISION-LIST
********************

Vision-List-Request@ADS.COM (Vision-List moderator Phil Kahn) (12/22/88)

Vision-List Digest	Wed Dec 21 15:38:00 PDT 88

 - Send submissions to Vision-List@ADS.COM
 - Send requests for list membership to Vision-List-Request@ADS.COM

Today's Topics:

 Geometer

----------------------------------------------------------------------

Date: Mon, 19 Dec 88 17:36 EST
From: "C. Connolly" <GEOMETER@cs.umass.EDU>
Subject: Geometer


The GE R&D center and (more recently) the vision group at UMass have
been developing and using a solid modeler with some algebraic
manipulation capabilities for various vision & automated reasoning
experiments.  The system is called "Geometer", and some of its salient
features are:

  - Written in Common Lisp (runs on Symbolics, TI, Lucid, DEC)
  - Reliable at planar modeling
  - Capable of curved surface modeling
  - Contains machinery for polynomial & transcendental function
    arithmetic, algebraic numbers, and Grobner basis computation
  - Contains machinery for various projections & transformations

It is available via anonymous FTP from COINS.CS.UMASS.EDU, although some
people have reported trouble with the connection.  The files are in the
directories:

      VIS$DISK:[GEOMETER]*.LISP
      VIS$DISK:[GEOMETER.DOC]*.TEX

If you do get a copy this way, please send a message to
GEOMETER@COINS.CS.UMASS.EDU, just for verification purposes.  More
information will probably be available at the next DARPA IU workshop.



------------------------------

End of VISION-LIST
********************

Vision-List-Request@ADS.COM (Vision-List moderator Phil Kahn) (12/31/88)

Vision-List Digest	Fri Dec 30 09:55:05 PDT 88

 - Send submissions to Vision-List@ADS.COM
 - Send requests for list membership to Vision-List-Request@ADS.COM

Today's Topics:

 backissues of Vision List Digest available via anonymous FTP

----------------------------------------------------------------------

Date: Fri, 30 Dec 88 09:54:42 PST
From: Vision-List-Request <vision@deimos.ads.com>
Subject: backissues of Vision List Digest available via anonymous FTP


Backissues of the Vision List Digest are now available via anonymous FTP. 
For those of you without FTP connection, limited backissues may still be
obtained by mailing your request to Vision-List-Request@ADS.COM .

To access the Digest backissues from anonymous FTP:
	1) FTP to ADS.COM
	2) Login name is ANONYMOUS
	3) Once you're logged on, change directory (cd) to
	   pub/VISION-LIST-ARCHIVE
	4) Each file contains an archived issue, and the file name is
	   the date and time the issue was created.
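
For example, the steps above might be scripted with Python's standard
ftplib roughly as follows (a sketch only; the issue file name is a
placeholder to be replaced with a name from the directory listing):

from ftplib import FTP

issue = 'ISSUE-FILE-NAME'              # placeholder: pick a name from LIST output

ftp = FTP('ADS.COM')                   # 1) connect to the archive host
ftp.login('anonymous', 'you@site')     # 2) anonymous login (address as password)
ftp.cwd('pub/VISION-LIST-ARCHIVE')     # 3) move to the archive directory
ftp.retrlines('LIST')                  #    see which issues are available
with open(issue, 'wb') as f:           # 4) each file is one archived issue
    ftp.retrbinary('RETR ' + issue, f.write)
ftp.quit()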

I hope that the availability of these backissues will provide a better
dissemination of information and add to the usefulness of the Vision List.
Please let me know if you have any problems.

I'd like to see more discussion of vision issues on the List. Some
potential topics are: assessment of recent advances and approaches,
pointers to interesting literature or equipment, state of funding and
the economic environment for vision research, impact of technologies on
vision work, short summaries of work going on in vision labs, relationships
between biological and machine vision, current and emerging paradigms, 
deeply personal vignettes about key workers in the field (only kidding),
etc. The posting of conferences and talks, dissertation topics/defenses,
calls for papers, etc. continues to be a valuable contribution
to the List.

Also, please let me know if you've had trouble posting or receiving the
List, have received multiple copies, etc.  We continue to restructure the
mechanics of the List, and your feedback is always valued.


phil...


------------------------------

End of VISION-LIST
********************

Vision-List-Request@ADS.COM (Vision-List moderator Phil Kahn) (01/06/89)

Vision-List Digest	Thu Jan 05 18:34:07 PDT 89

 - Send submissions to Vision-List@ADS.COM
 - Send requests for list membership to Vision-List-Request@ADS.COM

Today's Topics:

 Comments on Vision List
 Re:  AICPA

----------------------------------------------------------------------

Date: Tue, 3 Jan 89 10:46:04 PST
From: oyster@tcville.hac.com (Michael Oyster)
Subject: Comments on Vision List

I have been subscribing to Vision List for more than three years, first
through UCLA and now directly through work.  Vision List has been useful
to me primarily in providing advance notice of conferences.  While I have
found it useful, I would like to make some hopefully productive comments.

Vision List needs more technical depth.  I'm not particularly interested
in reading notes by beginning graduate students about what's a good 
frame grabber or what's a good master's topic.  We need contributions
from serious researchers concerning CURRENT research topics that
cannot be found in the open literature because of their late-breaking
nature.

I know this is easier said than done because the hard part is getting
serious people interested and involved.  If I can lend a hand, I would
be glad to help.


				Best wishes,

				Mike Oyster
				Hughes Aircraft

[ I agree we need more technical depth.  How can we make it happen?
		phil...]


------------------------------

Date: Sat, 31 Dec 88 15:16:47 GMT
From: Steven Zenith <mcvax!inmos!zenith@uunet.UU.NET>
Subject: Re:  AICPA

                            occam user group
                      * artificial  intelligence *
                         special interest group

                             CALL FOR PAPERS
                 1st technical meeting of the OUG AISIG
                         ARTIFICIAL INTELLIGENCE
                                   AND
                   COMMUNICATING PROCESS ARCHITECTURE
         17th and 18th of July 1989, at Imperial College, London UK.
                        Keynote speakers will include
                            * Prof. Iann Barron *
                        "Inventor of the transputer"

The conference organising committee includes:
    Dr. Atsuhiro Goto       Institute for New Generation Computer
                            Technology (ICOT), Japan.
    Dr.med.Ulrich Jobst     Ostertal - Klinik fur Neurologie und 
                            klinische Neurophysiologie
    Dr. Heather Liddell,    Queen Mary College, London.
    Prof. Dr. Y. Paker,     Polytechnic of Central London
    Prof. Dr. L. F. Pau,    Technical University of Denmark.
    Prof. Dr. Bernd Radig,  Institut Fur Informatik, Munchen.
    Prof. Dr. Alan Robinson Syracuse University, U.S.A.
    Prof. Dr. David Warren  Bristol University, U.K.

Conference chairmen:
    Dr. Mike Reeve         Imperial College, London
    Steven Ericsson Zenith INMOS Limited, Bristol (Chairman OUG AISIG)

Topics include:
         The transputer and a.i.                 Real-time a.i.
          Applications for a.i.            Implementation languages
       Underlying kernel support          Underlying infrastructure
          Toolkits/environments                Neural networks

    Papers must be original and of high quality. Submitted papers
    should be about 20 to 30 pages in length, double spaced and single
    column, with an abstract of 200-300 words. All papers will be
    refereed and will be assessed with regard to their quality and
    relevance. 

    A volume is being planned to coincide with this conference to be
    published by John Wiley and Sons as a part of their book series on
    Communicating Process Architecture. 

    Papers must be submitted by the 1st of February 1989. Notification
    of acceptance or rejection will be given by March 1st 1989.
    Final papers (as camera ready copy) must be provided by April 1st
    1989.

Submissions to be made to either:
    Steven Ericsson Zenith                Mike Reeve
    INMOS Limited,                        Dept. of Computing,
    1000 Aztec West,                      Imperial College,
    Almondsbury,                          180 Queens Gate,
    Bristol BS12 4SQ,                     London SW7 2BZ,
    UNITED KINGDOM.                       UNITED KINGDOM.
    Tel. 0454 616616 x513                 Tel. 01 589 5111 x5033
    email: zenith@inmos.co.uk             email: mjr@doc.ic.ac.uk
    
Regional Organisers:                               
    J.T Amenyo             Ctr. Telecoms Research, Columbia University,
                           Rm 1220 S.W. Mudd, New York, NY 10027-6699.
    Jean-Jacques Codani    INRIA, Domaine de Voluceau - Rocquencourt,
                           B.P.105-78153 Le Chesnay Cedex, France.
    Pasi Koikkalainen      Lappeenranta University of Technology,
                           Information Technology Laboratory,
                           P.O. Box 20, 53851 Lappeenranta, Finland.
    Kai Ming Shea          Dept. of Computer Science,
                           University of Hong Kong, Hong Kong.
    Dr. Peter Kacsuk       Multilogic Computing, 11-1015 Budapest,
                           Csalogaiy u. 30-32. Hungary.

 *  Steven Ericsson Zenith     Snail mail: INMOS Limited, 1000 Aztec West,
 |    zenith@inmos.uucp                    Almondsbury, Bristol BS12 4SQ. UK.
      zenith@inmos.co.uk                   Voice: (UK)0454 616616x513
      ** All can know beauty as beauty only because there is ugliness ** 


------------------------------

End of VISION-LIST
********************

Vision-List-Request@ADS.COM (Vision-List moderator Phil Kahn) (01/15/89)

Vision-List Digest	Sat Jan 14 12:40:07 PDT 89

 - Send submissions to Vision-List@ADS.COM
 - Send requests for list membership to Vision-List-Request@ADS.COM

Today's Topics:

 Request for Industrial Research Topics
 How to evaluate computer vision techniques?
 Course announcement: Computational Neuroscience
 Role of Vision List (posted to comp.graphics)
 IEEE 1989 Int Conf on Image Processing (Last Call for Paper)

----------------------------------------------------------------------

Date: Mon, 09 Jan 89 08:42:03 -0500
Subject: Request for Industrial Research Topics
From: "Kenneth I. Laws" <klaws@note.nsf.gov>


Most engineers start with real problems and do research
to find appropriate tools and solutions.  Most academics
start with tools and do research to find appropriate
problems and funding agencies.  Perhaps Vision-List
could help bring the two together by generating a list
of real-world problems that need to be solved.  This would
help the master's students and the companies with the
problems.  I've been told that it would also be a big
help to the funding agencies, particularly the NSF
Small Business program.  (It seems that publishing a
specific problem description usually draws good proposals,
whereas vague RFPs may draw nothing.)

I'm not thinking about generic research topics such as
shape from shading or stereo vision -- everyone knows about
those.  I'm thinking about applications such as inspecting
solder joints or nose cone windings.  Are there specific
problems which seem solvable but for which no off-the-shelf
technology is available?  Could some startup or small business
profit by helping your production line with a particular
inspection task?  What specific capabilities should the funding
agencies be trying to develop in the next five years?

Academics usually don't wish to reveal their ideas until
they can put together at least a conference paper -- at which
time there is little motivation for publishing in Vision-List.
The field also suffers from lack of definitive solutions for
any problem, making it impossible for any researcher to
declare victory and close out a line of research.  I hope that
the engineers will be less reticent in sharing the problems
they are working on (or have insufficient time or interest
to work on).  Making a problem widely known may be the
quickest way of uncovering an easy solution or a worthwhile
research priority.

					-- Ken

------------------------------

Date: Fri, 6 Jan 89 14:26:14 pst
From: ramin@scotty.stanford.edu (Ramin Samadani)
Subject: How to evaluate computer vision techniques?

I am looking for published results in "quality of results" of computer vision
techniques.

In reading some of the computer vision literature, I get the feeling that
many techniques are proposed, but are not fully tested. Is this true or have
I missed some body of work out there?  Are there any standard or published
methods for testing new techniques?  Could someone point me to any
literature on evaluation of the "quality" of computer vision techniques? Are
there studies where the techniques have been tried on a large number of
images?

	Ramin Samadani
	202 Durand Bldg.
	Electrical Engineering
	Stanford, CA 94305

	ramin@scotty.stanford.edu

------------------------------

Date: Thu, 12 Jan 89 09:19:51 EST
From: tony@cortex.psych.nyu.edu (Tony Movshon)
Subject: Course announcement: Computational Neuroscience


        Cold Spring Harbor Laboratory Course Announcement, Summer 1989

                      COMPUTATIONAL NEUROSCIENCE: VISION

                                 Instructors:
           Ellen C. Hildreth, Massachusetts Institute of Technology
                   J. Anthony Movshon, New York University

                                 July 2 - 15

     Computational approaches to neuroscience have produced important advances
in our understanding of neural processing.  Prominent successes have come in
areas where strong inputs from neurobiological, behavioral and computational
approaches can interact.  Through a combination of lectures and hands-on
experience with a computer laboratory, this intensive course will examine
several areas, including feature extraction, motion analysis, binocular
stereopsis, color vision, higher level visual processing, visual neural net-
works, and oculomotor function. The theme is that an understanding of the com-
putational problems, the constraints on solutions to these problems, and the
range of possible solutions can help guide research in neuroscience.  Students
should have experience in neurobiological or computational approaches to
visual processing. A strong background in mathematics will be beneficial.
     Past lecturers have included: Richard Andersen, Peter Lennie, John Maun-
sell, Gerard Medioni, Michael Morgan, Ken Nakayama, Tomaso Poggio, Terrence
Sejnowski, William Thompson, Shimon Ullman, and Brian Wandell.
     The deadline for application is March 15, 1989. Applications and addi-
tional information may be obtained from:

REGISTRAR
Cold Spring Harbor Laboratory
Box 100
Cold Spring Harbor, New York 11724
Telephone: (516) 367-8343


------------------------------

Date: Sat, 14 Jan 89 12:37:03 PST
From: Vision-List-Request <vision@deimos.ads.com>
Subject: Role of Vision List (posted to comp.graphics)

[ Apparently, the role of comp.ai.vision and the Vision List has been 
  discussed on comp.graphics.  E.g., they wanted to know if image processing
  was appropriate for this List.  The following is a copy of the message I
  posted to that group.
				phil... ]


The role of comp.ai.vision has been discussed in this group, and as
moderator, I thought it would be appropriate to outline the role of
the vision newsgroup.

The Vision List is a moderated newsgroup for which messages may be
posted by mailing to Vision-List@ADS.COM.  Administrative questions
(e.g., to get added/deleted, editorial comments, etc) should be sent
to Vision-List-Request@ADS.COM.  The Vision List is distributed
through comp.ai.vision and via direct mail accounts for users which do
not have access to USENET.

The Vision List is intended to embrace discussion on a wide range of
vision topics, including physiological theory, computer vision,
machine vision and image processing algorithms, artificial
intelligence and neural network techniques applied to vision,
industrial applications, robotic eyes, implemented systems, ideas,
profound thoughts -- anything related to vision and its automation is
fair game.

Since this is a graphics newsgroup, let me carefully distinguish what
I believe is the primary difference between the graphics and vision
newsgroups. Quite simply, in graphics one goes from the computer to a
screen; in vision, one goes from the sensor to the computer. I.e., the 
difference is one of generation versus interpretation. So, for example, an
image processing algorithm which is of use only in image generation would
best appear in comp.graphics or a similar netgroup.  Conversely,
image filtering techniques can be quite useful in the initial stages of imagery
interpretation. The bottom line: If when you ask yourself "Would this be
of use in understanding imagery?" you get a "Yes!", then it should be
posted to Vision-List@ADS.COM.

Hope this has helped to clarify things a bit.

Philip Kahn
moderator, Vision List
(administrative) Vision-List-Request@ADS.COM
(submissions)    Vision-List@ADS.COM


------------------------------

Date:     Mon, 9 Jan 89 08:56 H
From: <CHTEH%NUSEEV.BITNET@CUNYVM.CUNY.EDU>
Subject:  IEEE 1989 Int Conf on Image Processing (Last Call for Paper)


     IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING
                         (ICIP'89)

               5-8 September, 1989, Singapore

                  CALL FOR PAPERS (Updated)


     The  1989  IEEE  International  Conference   on   Image
Processing  (ICIP'89)  will  be  held  in  Singapore  on 5-8
September, 1989.  The conference is jointly organized by the
Computer  Chapter, IEEE Singapore Section and the Department
of Electrical Engineering, National University of Singapore.
The  conference will include regular sessions on all aspects
of the theory and  applications  of  image  processing.   In
addition,  tutorials  by  eminent  speakers  presenting  the
state-of-the-art in selected areas of image processing  will
be  offered.  An exhibition will be held in conjunction with
the conference.

     Papers describing original work in all aspects of image
processing   are   invited.   Topics  for  regular  sessions
include, but are not limited to, the following :

  Image analysis/modeling          Office image processing
  Image restoration/enhancement    Machine vision
  Video communications             AI vision techniques
  Image pattern recognition        VLSI implementation
  Remote sensing                   System architecture
  Biomedical imaging               Color image processing

     Authors  are  invited  to  submit  four  copies  of  an
extended summary of at least 1000 words to :

           Technical Program Chairman, ICIP'89
           c/o Meeting Planners Pte Ltd
           100 Beach Road, #33-01
           Shaw Towers, Singapore 0718
           Republic of Singapore
           Telex : RS40125 MEPLAN
           Fax : (65) 2962670
           E-mail : OSH@NUSEEV.BITNET

     The summary should contain sufficient detail, including
a  clear  description  of  the  salient  concepts  and novel
features of  the  work.   The  summary  should  include  the
authors'  names,  addresses,  affiliations,  and  telephone,
telex and fax numbers.  The authors should also indicate one
or  more of the above topics that best describe the contents
of the paper.

     Proposals for tutorials and special sessions  are  also
welcome  and  should  be  addressed to the Technical Program
Chairman before 16 January 1989.


AUTHORS' SCHEDULE
   Submission of summary              1 February 1989
   Notification of acceptance        31 March    1989
   Submission of final manuscripts    1 June     1989

------------------------------

End of VISION-LIST
********************

Vision-List-Request@ADS.COM (Vision-List moderator Phil Kahn) (02/04/89)

Vision-List Digest	Fri Feb 03 09:49:06 PDT 89

 - Send submissions to Vision-List@ADS.COM
 - Send requests for list membership to Vision-List-Request@ADS.COM

Today's Topics:

 IEEE Workshop on Motion
 Call For Papers : Neural Nets & Optimization.
 NIPS CALL FOR PAPERS
 Datacube local user's group

----------------------------------------------------------------------

Date: Thu, 2 Feb 89 09:25:37 EST
From: schunck@caen.engin.umich.edu (Brian Schunck)
Subject: IEEE Workshop on Motion


IEEE WORKSHOP ON MOTION

With a selection of papers intended to reflect the state of the art in
computational theories of motion perception, provide recent results
from psychophysical experiments, and discuss attempts to incorporate
motion algorithms in practical applications, the IEEE Workshop on
Motion will bring together researchers from computer vision,
artificial intelligence, and psychophysics to discuss current work on
the representation and analysis of motion in image sequences.  The
workshop will be held in Irvine, California March 20-22, 1989.  Papers
will be presented on all aspects of the analysis of motion in human
and machine vision.  The number of presentations will be limited to
increase time for discussion in the spirit of a workshop.  The
proceedings will be published and will be available at the workshop.

Papers presented at the workshop will cover the topics of object
tracking, estimation of the parameters of object motion, motion
constraint equations, estimation of the image flow velocity field,
motion correspondence, psychophysical experiments on motion
perceptions, structure from motion, visual navigation, and object
discrimination in dynamic scenes.

The program has been completed, so no additional papers or
presentations can be scheduled.  The deadline for early registration
is February 17, 1989.  Attendees can register at the workshop site
which is the Newport Beach Marriott.  For further information contact:
Brian G. Schunck, Artificial Intelligence Laboratory, University of
Michigan, Ann Arbor, MI 48109-2122, (313) 747-1803,
schunck@caen.engin.umich.edu


------------------------------

Date: 1 Feb 89 17:09:23 GMT
From: harish@ece-csc.UUCP (Harish Hiriyannaiah)
Subject: Call For Papers : Neural Nets & Optimization.
Keywords: TENCON '89.
Organization: North Carolina State University, Raleigh, NC


                           CALL FOR PAPERS

                TENCON '89 (IEEE Region 10 Conference)

                               SESSION
                                  ON
                    OPTIMIZATION AND NEURAL NETWORKS

                       November 22 -- 24, 1989
                            Bombay, India



	Under the  auspices of the IEEE, the session organizers invite
	submission of papers for a session on "Optimization and Neural
 	Networks". This session will focus on the interrelationship of
	neural networks and optimization problems. Neural networks can 
	be seen to be related to optimization in two distinct ways:

	  +  As an adaptive neural network learns from examples,
	     the convergence of its weights solves an optimization
	     problem.

	  +  A large class of networks, even with constant weights,
	     solves optimization problems as they settle from an
	     initial to a final state.
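
	As a toy illustration of the second point, a minimal
	Hopfield-style sketch follows (assuming Python with NumPy; the
	random symmetric weights and network size are arbitrary): a
	network with fixed weights settles to a locally minimal energy.

import numpy as np

rng = np.random.default_rng(1)

def settle(W, x, sweeps=20):
    # Asynchronously update +/-1 units.  With symmetric W and zero
    # diagonal, each update never increases E(x) = -0.5 * x @ W @ x,
    # so the net descends to a local minimum of this energy.
    for _ in range(sweeps):
        for i in rng.permutation(len(x)):
            x[i] = 1 if W[i] @ x >= 0 else -1
    return x

n = 8
W = rng.standard_normal((n, n))
W = (W + W.T) / 2                      # symmetric weights define the energy
np.fill_diagonal(W, 0.0)               # no self-connections
x0 = rng.choice([-1, 1], size=n)       # random initial state
x_final = settle(W, x0.copy())         # settled state is a local energy minimum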

	The areas of interest include but are not limited to:

	  +  Combinatorial optimization

	  +  Continuous optimization

	  +  Sensor  integration ( when posed as an  optimization 
	     problem)

	  +  Mean Field Annealing

	  +  Stochastic Relaxation


	Depending on the number and quality of the responses, this
	session may be split into multiple sessions, with one part
	focusing on optimizing the weight-determination process in
	adaptive nets, and the second on using those nets to solve
	other problems.

	Prospective authors should submit two copies of an extended
	abstract (not exceeding 5 pages, double spaced) of their papers
	to either of the organizers by March 31, 1989. Authors will be
	notified of acceptance or rejection by May 15, 1989. Photo-ready
	copy of the complete paper (not exceeding 25 pages, double
	spaced) must be received by July 15, 1989 for inclusion in the
	proceedings, which will be published by the IEEE and distributed
	at the symposium.


	Session Organizers

	Dr. Wesley E. Snyder / Mr. Harish P. Hiriyannaiah
	Dept of Electrical and Computer Engineering
	North Carolina State University
	Raleigh, NC 27695-7911, USA

	Telephone: (919)-737-2336
	FAX: (919)-737-7382
	email: {wes,harish}@ecelet.ncsu.edu -- (Internet)
	mcnc!ece-csc!{wes,harish} -- (UUCP)

-- 
harish pu. hi.     harish@ece-csc.ncsu.edu
                   {decvax,possibly other backbone sites}!mcnc!ece-csc!harish

I am not, therefore I think not ?

------------------------------

Date: Thu, 2 Feb 89 13:52:25 EST
From: jose@tractatus.bellcore.com (Stephen J Hanson)
Subject: NIPS CALL FOR PAPERS


                              CALL FOR PAPERS

                            IEEE Conference on

                   Neural Information Processing Systems
                         - Natural and Synthetic -

             Monday, November 27 -- Thursday November 30, 1989
                             Denver, Colorado


     This is the third meeting of a high  quality,  relatively  small,
     inter-disciplinary     conference     which    brings    together
     neuroscientists,  engineers,   computer   scientists,   cognitive
     scientists,  physicists,  and  mathematicians  interested  in all
     aspects of neural processing and computation.   Several  days  of
     focussed  workshops  will  follow  at  a  nearby ski area.  Major
     categories and examples  of  subcategories  for  papers  are  the
     following:

     1. Neuroscience: Neurobiological models of development,  cellular
     information  processing, synaptic function, learning, and memory.
     Studies and analyses of neurobiological systems  and  development
     of neurophysiological recording tools.

     2.  Architecture   Design:   Design   and   evaluation   of   net
     architectures to perform cognitive or behavioral functions and to
     implement conventional algorithms.  Data  representation;  static
     networks  and  dynamic  networks  that  can  process  or generate
     pattern sequences.

     3. Learning Theory: Models of learning; training paradigms for
     static    and   dynamic   networks;   analysis   of   capability,
     generalization, complexity, and scaling.

     4.  Applications:  Applications  to  signal  processing,  vision,
     speech,   motor   control,  robotics,  knowledge  representation,
     cognitive modelling and adaptive systems.

     5. Implementation and Simulation: VLSI or optical implementations
     of  hardware  neural  nets.  Practical issues for simulations and
     simulation tools.


     Technical Program: Plenary, contributed, and poster sessions will
     be  held.  There  will  be no parallel sessions. The full text of
     presented papers will be published.

     Submission  Procedures:  Original  research   contributions   are
     solicited,  and  will  be  refereed  by experts in the respective
     disciplines.  Authors should submit four copies  of  a  1000-word
     (or  less)  summary  and four copies of a single-page 50-100 word
     abstract clearly stating their results by May 30, 1989.  Indicate
     preference  for  oral or poster presentation and specify which of
     the above  five  broad  categories  and,  if  appropriate,   sub-
     categories   (for   example,   Learning  Theory:  Complexity,  or
     Applications:  Speech)  best  applies  to  your  paper.  Indicate
     presentation preference and category information at the bottom of
     each abstract page and after each summary. Failure to do so  will
     delay  processing  of your submission.  Mail submissions to Kathy
     Hibbard, NIPS89 Local Committee, Engineering Center,  Campus  Box
     425, Boulder, CO, 80309-0425.


             DEADLINE FOR SUMMARIES AND ABSTRACTS IS MAY 30, 1989

------------------------------

Date: 2 Feb 89 23:02:38 GMT
From: manning@mmm.serc.mmm.com (Arthur T. Manning)
Subject: Datacube local user's group
Summary: Be there!
Organization: 3M Company - Software and Electronics Resource Center (SERC); St. Paul, MN

The Twin Cities Datacube Local User's Group is meeting 

Thursday February 16, 4:30 pm
Micro-resources 
4640 W 77th St Suite 109

Call Barb Baker for more info 612-830-1454

Speakers are welcome.  Please send info to me if you're interested.

-- 
Arthur T. Manning                                Phone: 612-733-4401
3M Center  518-1                                 FAX:   612-736-3122
St. Paul  MN  55144-1000   U.S.A.                Email: manning@mmm.uucp


------------------------------

End of VISION-LIST
********************

Vision-List-Request@ADS.COM (Vision-List moderator Phil Kahn) (02/11/89)

Vision-List Digest	Fri Feb 10 16:54:57 PDT 89

 - Send submissions to Vision-List@ADS.COM
 - Send requests for list membership to Vision-List-Request@ADS.COM

Today's Topics:

 - Machine Vision and Applications, Volume 2, Issue 1
 - Research Fellow in Computer Vision
 - RE: NSF Request for Industrial Research Topics

----------------------------------------------------------------------

Date: Thu, 09 Feb 89 11:32:58 PST
From: Gerhard Rossbach <rossbach@hub.ucsb.edu>
Subject: Machine Vision and Applications, Volume 2, Issue 1

There is now a journal in the field of machine vision, integrating theory and
applications.  "Machine Vision and Applications", the international journal
from Springer-Verlag is now in its second year.  It is published four times a
year and has a personal subscription rate of $45.00 (including postage and
handling).  The institutional rate is $105.00 (including postage and handling).

Volume 2, Issue 1 will be published at the beginning of March 1989. The contents
for this new issue are:

"Performance Assessment of Near Perfect Machines," Robert M. Haralick.

"Combined Decision Theoretic and Syntactic Approach to Image Segmentation," by
W. E. Blanz and B. J. Straub.

"Real-Time Model-Based Geometric Reasoning for Vision Guided Navigation," by
Uma Kant Sharma and Darwin Kuan.

"Report on Range Image Understanding Workshop, East Lansing, MI, March 21-23,
1988," by Ramesh Jain and Anil K. Jain.

For further information on submitting a paper or subscribing to the journal,
please send email to:
rossbach@hub.ucsb.edu or write to Springer-Verlag, 815 De La Vina Street, Santa
Barbara, CA 93101


------------------------------

Date: 10 Feb 89 09:47:00 WET
From: JOHN ILLINGWORTH <illing%ee.surrey.ac.uk@NSS.Cs.Ucl.AC.UK>
Subject: Research Fellow in Computer Vision


University of Surrey, Department of Electronic and Electrical Engineering,
United Kingdom. 

          **********************************
          RESEARCH FELLOWS : COMPUTER VISION
          **********************************

Two Research Fellows are required in the Department of Electronic and
Electrical Engineering for a project in Computer Vision. The project is
called Vision As Process, VAP, and is funded under the ESPRIT BASIC
RESEARCH ACTION, BRA, program. It is an international collaboration
involving institutes at Linkoeping and Stockholm (Sweden), Aalborg
(Denmark) and Grenoble (France). Surrey University's major
contribution will be the development of a high-level scene
interpretation module.

The project will be carried out within an active research group in
Vision, Speech and Signal Processing. The group comprises about 20
members and has extensive computing resources including SUN, VAX and
Masscomp computers as well as specialised image processing facilities.

The successful candidates will be required to develop, implement in
software and experimentally evaluate computer vision algorithms.
Applicants for these posts should have a degree in mathematics,
electronics, computer science or physics. Previous experience in
computer vision, image analysis, knowledge based methods or pattern
recognition will be an advantage.

The appointments will be initially for two years with a salary in the
range 9,865 to 14,500 UK pounds per annum depending upon age,
qualifications and experience (superannuation under USS conditions).

Applications in the form of a curriculum vitae (3 copies)
including the names and addresses of two referees should be sent to the
Personnel Office (JLG), University of Surrey, Guildford, Surrey GU2 5XH.
Further information may be obtained from: 
Dr J. Kittler (0483 509294) or 
Dr J. Illingworth (0483 571281 ext 2299) at
Department of Electronic and Electrical Engineering, Surrey University, UK.



------------------------------

Date: Tue, 07 Feb 89 11:29:23 -0500
Subject: RE: Request for Industrial Research Topics
From: "Kenneth I. Laws" <klaws@nsf.GOV>


>  From: skrzypek@CS.UCLA.EDU (Dr Josef Skrzypek)

>>>   MOST ACADEMICS START WITH TOOLS AND DO RESEARCH TO FIND APPROPRIATE
>>>   PROBLEMS AND FUNDING AGENCIES....

> This is a rather naive view of academics approach to problem solving.
> Ken, you have mixed it up.  Normally it's the other way around.  One
> thing is certain, having tools and poking around, with the hope of
> finding a problem is a prescription for very poor science and probably
> bad engineering. Is there a fundamental change in NSF philosophy?


No, NSF hasn't changed -- except for the constant turnover in
program directors and their viewpoints.  And I'll admit to
overstating the case.  I don't know the proportion of academics
who start with tools, and I have certainly seen engineers (or at
least members of technical staff) who have wanted to start at the
tool end.  Witness the recent burst of activity in neural
networks, or previous interest in orthogonal transforms, Hough
transforms, iterative relaxation, pyramid relaxation, etc.

I share your concern that this leads to bad science (and
especially to bad engineering).  In a few cases, the emphasis on
tools is wholly justified.  Someone has to study them, and to
provide expertise to those who need the tools to solve real
problems.  Mathematics and other core sciences have always
nurtured such people.

NSF is now charged with supporting engineering research and tech
transfer, as well as with traditional support of science and
of engineering education.  This broad mandate motivates us to ask
about relevance in funded research.  We would like to see
progress toward long-range goals of national importance.  Unfortunately,
few such goals have been identified.

The value of research in computer vision is fairly obvious.
After all, how many industries could expect to remain competitive
in world markets if they hired only unsighted workers?  The value
of research in mobile robotics or dexterous hands is less clear,
and I therefore expect stronger justification in such proposals.

For academics working on real-world problems, this should not
be difficult.  Very often, however, a professor whose expertise
is at the tool end depends on his graduate students to prove
theorems.  The students are thus trained mainly for academic
careers, and may even fail there if they cannot locate customers
willing to support such research.  Thus the desperate search for
applications, as well as for grants.  Unfortunately, many of
these people drop out of the science/engineering pipeline before
finding the support they need.

Engineers rarely fall into such a trap.  There are cases where
someone's expertise becomes outdated and the problems he knows
how to solve are no longer problems.  This can happen in pattern
recognition, for instance, when a system functions well enough
that there is no point in further improvements.  Still, an
engineer who has solved one problem can usually find work solving
another.  There is less of a tendency to stick with just the
tools that one has used in the past, more of a tendency to search
for tools appropriate to the application.

At NSF, we commonly deal with proposals about applying a
particular tool to a particular problem.  The need for the
research is often justified in terms of the problem, but the
scientific merit is usually judged by what it will teach us about
the tool.  We try to balance the two to best serve the nation,
but our review process and funding policies typically favor
the tool builders.

The particular problem that I was bringing up, and for which
there have been no responses, is the need for a list of research
goals for our Small Business program.  Or for any program, for
that matter.  The engineering directorate is generally able to
point to specific problems of national importance that they are
trying to solve.  The computer science directorate has more
difficulty with this.  We talk about bigger, smaller, faster,
cheaper, more robust -- but what are these computers and algorithms
really needed for?  The COSERS report was one attempt to answer
this.  Our small-business people need more-specific projects,
however.  As do the nation's graduate students.

One way to get a handle on the problems is to ask about in-house
research efforts that have failed.  Perhaps someone else could
solve these problems, avoiding whatever technical or personnel
difficulties arose at your site.  Or perhaps someone in
management has been poking around saying "We need X.  Can we
do it in-house?  Is there a supplier?  Who do we call?"  Have
you ever had a blue-sky proposal that never got off the ground
because the vision technology just wasn't there?  Have customers
called, asking for something that just wasn't in your product line?

Or, if you'd rather brainstorm, I'm open for blue-sky
suggestions.  Over the next decade, my program may be pumping
something like $50M into machine perception.  How would you
like to see that money spent?  What should our national goals
be?  Which research community should be supported?  Where should
we be one or two decades from now?  What would we do with
ultimate vision systems if we had them?  What's keeping us from
doing it now?  Any inputs?

					-- Ken Laws

------------------------------

End of VISION-LIST
********************

Vision-List-Request@ADS.COM (Vision-List moderator Phil Kahn) (02/11/89)

Vision-List Digest	Fri Feb 10 17:04:04 PDT 89

 - Send submissions to Vision-List@ADS.COM
 - Send requests for list membership to Vision-List-Request@ADS.COM

Today's Topics:

 6th International Workshop on Machine Learning
 Workshop on Models of Complex Human Learning
 NIPS latex version PLEASE FORMAT, PRINT and POST

----------------------------------------------------------------------

Date: Sat, 4 Feb 89 21:52:51 -0500
From: segre@gvax.cs.cornell.edu (Alberto M. Segre)
Subject: 6th International Workshop on Machine Learning
Distribution: comp
Organization: Cornell Univ. CS Dept, Ithaca NY








                             Call for Papers:

             Sixth International Workshop on Machine Learning

                            Cornell University
                         Ithaca, New York; U.S.A.
                          June 29 - July 1, 1989


          The Sixth International Workshop on Machine Learning will be
     held  at  Cornell  University  from June 29 through July 1, 1989.
     The workshop will be divided into  six  parallel  sessions,  each
     focusing on a different theme:

     Combining Empirical and Explanation-Based Learning  (M.  Pazzani,
       chair). Both empirical evaluation and theoretical analysis have
       been  used  to  identify  the  strengths  and   weaknesses   of
       individual  learning methods. Integrated approaches to learning
       have the potential of overcoming the limitations of  individual
       methods.  Papers  are  solicited  exploring  hybrid  techniques
       involving, for example, explanation-based learning,  case-based
       reasoning, constructive induction, or neural networks.

     Empirical Learning; Theory and Application  (C.  Sammut,  chair).
       This  session will be devoted to discussions on inductive (also
       called empirical) learning with particular emphasis on  results
       that  can  be  justified  by theory or experimental evaluation.
       Papers should characterize methodologies  (either  formally  or
       experimentally),  their  performance  and/or problems for which
       they  are  well/ill  suited.   Comparative   studies   applying
       different methodologies to the same problem are also invited.

     Learning Plan Knowledge (S.  Chien  and  G.  DeJong,  co-chairs).
       This  session  will  explore  machine  learning of plan-related
       knowledge; specifically,  learning  to  construct,  index,  and
       recognize  plans  by  using explanation-based, empirical, case-
       based, analogical, and connectionist approaches.

     Knowledge-Base  Refinement  and  Theory  Revision  (A.  Ginsberg,
       chair).  Knowledge-base  refinement  involves  the discovery of
       plausible refinements to a knowledge base in order  to  improve
       the breadth and accuracy of the associated expert system.  More
       generally, theory revision is concerned with systems that start
       out  having  some domain theory, but one that is incomplete and
       fallible.  Two basic problems  are  how  to  use  an  imperfect
       theory  to  guide one in learning more about the domain as more
       experience accumulates, and how to use the knowledge so  gained
       to revise the theory in appropriate ways.

     Incremental Learning (D. Fisher, chair, with J. Grefenstette,  J.
       Schlimmer,  R.  Sutton,  and  P.  Utgoff). Incremental learning
       requires continuous adaptation to the  environment  subject  to
       performance   constraints  of  timely  response,  environmental
       assumptions such as noise or concept drift, and knowledge  base
       limitations.    Papers   that   cross  traditionally  disparate
       paradigms   are   highly   encouraged,   notably    rule-based,
       connectionist,  and  genetic  learning;  explanation-based  and
       inductive   learning;   procedure   and    concept    learning;
       psychological  and  computational  theories  of  learning;  and
       belief revision, bounded rationality, and learning.

     Representational Issues  in  Machine  Learning  (D.  Subramanian,
       chair).   This  session will study representational practice in
       machine  learning  in  order  to  understand  the  relationship
       between  inference  (inductive  and  deductive)  and  choice of
       representation.   Present-day  learners   depend   on   careful
       vocabulary  engineering  for their success.  What is the nature
       of the contribution representation makes to learning,  and  how
       can  we  make  learners  design/redesign  hypotheses  languages
       automatically? Papers are solicited in areas including, but not
       limited  to, bias, representation change and reformulation, and
       knowledge-level analysis of learning algorithms.

                             PARTICIPATION

          Each workshop session  is  limited  to  between  30  and  50
     participants.   In order to meet this size constraint, attendance
     at the workshop is by invitation  only.  If  you  are  active  in
     machine   learning   and  you  are  interested  in  receiving  an
     invitation, we encourage you to submit a  short  vita  (including
     relevant publications) and a one-page research summary describing
     your recent work.

          Researchers interested in presenting their work  at  one  of
     the sessions should submit an extended abstract (4 pages maximum)
     or a draft paper (12 pages maximum) describing their recent  work
     in  the  area.  Final  papers  will  be  included in the workshop
     proceedings, which will be distributed to all participants.

                        SUBMISSION REQUIREMENTS

          Each submission (research  summary,  extended  abstract,  or
     draft  paper)  must  be  clearly  marked  with the author's name,
     affiliation, telephone number and Internet address. In  addition,
     you  should  clearly  indicate  for  which  workshop session your
     submission is intended.

     Deadline for submission is March 1, 1989. Submissions  should  be
     mailed directly to:

         6th International Workshop on Machine Learning
         Alberto Segre, Workshop Chair
         Department of Computer Science
         Upson Hall
         Cornell University
         Ithaca, NY 14853-7501
         USA

         Telephone: (607) 255-9196
         Internet: ml89@cs.cornell.edu


          While  hardcopy  submissions   are   preferred,   electronic
     submissions will be accepted in TROFF (me or ms macros), LaTeX or
     plain TeX. Electronic submissions must consist of a single  file.
     Be sure to include all necessary macros; it is the responsibility
     of the submitter to ensure his/her  paper  is  printable  without
     special   handling.    Foreign   contributors  may  make  special
     arrangements on an individual basis for sending their submissions
     via FAX.

          Submissions will  be  reviewed  by  the  individual  session
     chair(s).    Determinations   will   be  made  by  April 1, 1989.
     Attendance at the workshop is by invitation only; you must submit
     a  paper, abstract or research summary in order to be considered.
     While you may make submissions to more than one workshop session,
     each participant will be invited to only one session.

                            IMPORTANT DATES

     March 1, 1989
          Submission  deadline  for   research   summaries,   extended
          abstracts and draft papers.

     April 1, 1989
          Invitations issued; presenters notified of acceptance.

     April 20, 1989
          Final camera-ready copy of accepted papers due for inclusion
          in proceedings.

------------------------------

Date: Sat, 4 Feb 89 21:57:40 -0500
From: segre@gvax.cs.cornell.edu (Alberto M. Segre)
Subject: Workshop on Models of Complex Human Learning
Distribution: comp
Organization: Cornell Univ. CS Dept, Ithaca NY

                          CALL FOR PARTICIPATION

                                WORKSHOP ON
                     MODELS OF COMPLEX HUMAN LEARNING

                            Cornell University
                         Ithaca, New York  U.S.A.
                             June 27-28, 1989

                 Sponsored by ONR Cognitive Science Branch


          This two-day workshop will bring together researchers  whose
     learning   research   gives  attention  to  human  data  and  has
     implications for understanding  human  cognition.  Of  particular
     interest  is  learning  research that relates to complex problem-
     solving tasks.  There is an emphasis on symbol-level learning.

          The workshop will be limited to  30-50  attendees.  Workshop
     presentations will be one hour in length, so as to allow in-depth
     presentation and discussion of recent research. Areas of interest
     include:

         Acquisition of Programming Skills
         Apprenticeship Learning
         Case Based Reasoning
         Explanation Based Learning
         Knowledge Acquisition
         Learning of Natural Concepts and Categories
         Learning of Problem Solving Skills
         Natural Language Acquisition
         Reasoning and Learning by Analogy

          The initial list of presenters is based  on  past  proposals
     accepted  by  ONR.  This  call  for  papers  solicits  additional
     submissions.   The  current  list  of  ONR-sponsored   presenters
     includes:

         John Anderson (Carnegie Mellon)
         Tom Bever (Univ. of Rochester)
         Ken Forbus (Univ. of Illinois)
         Dedre Gentner (Univ. of Illinois)
         Chris Hammond (Univ. Chicago)
         Ryszard Michalski (George Mason Univ.)
         Stellan Ohlsson (Univ. of  Pittsburgh)
         Kurt VanLehn (Carnegie Mellon)
         David Wilkins (Univ. of Illinois)

     SUBMISSIONS

          Presenters: Send four copies of (i) a  previously  published
     paper  with  a  four  page abstract that describes recent work or
     (ii) a draft paper.  These  materials  will  be  used  to  select
     presenters;  no workshop proceedings will appear. Please indicate
     whether you would consider being involved just as a participant.

          Participants:  Send  four  copies  of  a  short  vitae  that
     includes  relevant  publications,  and  a one-page description of
     relevant experience and projects.

          Submission Format: Hardcopy submissions are  preferred,  but
     electronic  submissions  will also be accepted in TROFF (ME or MS
     macros), LaTeX or plain TeX.  Electronic submissions must consist
     of  a  single file that includes all the necessary macros and can
     be printed without special handling.

          Deadlines: All submissions should be received by the program
     chair  by Tuesday, March 28, 1989; they will be acknowledged upon
     receipt.  Notices of acceptance will be mailed by May 1, 1989.

     PROGRAM CHAIR

         David C. Wilkins
         Dept. of Computer Science
         University of Illinois
         1304 West Springfield Ave
         Urbana, IL 61801

         Telephone: (217) 333-2822
         Internet: wilkins@m.cs.uiuc.edu

------------------------------

Date: Thu, 9 Feb 89 13:16:17 EST
From: jose@tractatus.bellcore.com (Stephen J Hanson)
Subject: NIPS latex version PLEASE FORMAT, PRINT and POST


\documentstyle[11pt]{article}
%% set sizes to fill page with small margins
\setlength{\headheight}{0in}
\setlength{\headsep}{0in}
\setlength{\topmargin}{-0.25in}
\setlength{\textwidth}{6.5in}
\setlength{\textheight}{9.5in}
\setlength{\oddsidemargin}{0.0in}
\setlength{\evensidemargin}{0.0in}
\setlength{\footheight}{0.0in}
\setlength{\footskip}{0.25in}

\begin{document}
\pagestyle{empty}
\Huge
\begin{center}
{\bf CALL FOR PAPERS\\}
\Large
IEEE Conference on\\
\LARGE
{\bf Neural Information Processing Systems\\
- Natural and Synthetic -\\}
\bigskip
\Large
Monday, November 27 -- Thursday November 30, 1989\\
Denver, Colorado\\
\end{center}

\medskip
\large
\noindent
This is the third meeting of a high quality, relatively small,
inter-disciplinary conference
which brings together neuroscientists,
engineers, computer scientists, cognitive scientists, physicists,
and mathematicians interested in all aspects of neural processing
and computation.
Several days of focussed workshops will follow at a nearby ski area.
Major categories and examples of subcategories
for papers are the following:

\begin{quote}
\small
\begin{description}
\item[{\bf 1. Neuroscience:}] Neurobiological models of development,
cellular information processing, synaptic function,
learning, and memory. Studies and analyses
of neurobiological systems and development of
neurophysiological recording tools.

\item[{\bf 2. Architecture Design:}] Design and evaluation of
net architectures to perform cognitive or behavioral functions and to
implement
conventional algorithms. Data representation;
static networks and dynamic networks
that can process or generate pattern sequences.

\item[{\bf 3. Learning Theory:}] Models of learning; training paradigms
for static and dynamic networks; analysis of capability,
generalization, complexity, and scaling.

\item[{\bf 4. Applications:}] Applications to signal processing, vision,
speech, motor control, robotics, knowledge representation, cognitive
modelling and adaptive systems.

\item[{\bf 5. Implementation and Simulation:}]
VLSI or optical implementations of hardware neural nets.
Practical issues for simulations and simulation tools.

\end{description}
\end{quote}

\large
\smallskip
\noindent
{\bf Technical Program:} Plenary, contributed, and poster sessions will be
held. There will be no parallel sessions. The full text of presented papers
will be published.

\medskip
\noindent
{\bf Submission Procedures:} Original research contributions are
solicited, and will be refereed by experts in the respective disciplines.
Authors should submit four copies of a 1000-word (or less) summary and
four copies of a single-page 50-100 word abstract clearly stating their
results by May 30, 1989. Indicate preference for oral or
poster presentation and specify which of the above five broad
categories and, if appropriate,  sub-categories
(for example, {\em Learning Theory: Complexity}, or {\em Applications: Speech})
best applies to your paper. Indicate presentation preference
and category information at the bottom of each abstract page and after
each summary. Failure to do so will delay processing
of your submission.  Mail submissions to Kathie Hibbard, NIPS89 Local Committee,
Engineering Center, Campus Box 425, Boulder, CO, 80309-0425.

\medskip
\noindent
{\bf Organizing Committee}\\
\small
\noindent
{Scott Kirkpatrick, IBM Research, General Chairman; 
Richard Lippmann, MIT Lincoln Labs, Program Chairman; 
Kristina Johnson, University of Colorado, Treasurer; 
Stephen J. Hanson, Bellcore, Publicity Chairman; 
David S. Touretzky, Carnegie-Mellon, Publications Chairman; 
Kathie Hibbard, University of Colorado, Local Arrangements; 
Alex Waibel, Carnegie-Mellon, Workshop Chairman; 
Howard Wachtel, University of Colorado, Workshop Local Arrangements; 
Edward C. Posner, Caltech, IEEE Liaison; 
James Bower, Caltech, Neurosciences Liaison; 
Larry Jackel, AT\&T Bell Labs, APS Liaison}

\begin{center}
\large
{\bf DEADLINE FOR SUMMARIES \& ABSTRACTS IS MAY 30, 1989}\\
\end{center}

\begin{flushright}
Please Post
\end{flushright}

\end{document}


------------------------------

End of VISION-LIST
********************

Vision-List-Request@ADS.COM (Vision-List moderator Phil Kahn) (02/11/89)

Vision-List Digest	Fri Feb 10 17:04:04 PDT 89

 - Send submissions to Vision-List@ADS.COM
 - Send requests for list membership to Vision-List-Request@ADS.COM

Today's Topics:

 What conferences and workshops should Vision List report?
 NIPS Call for Papers
 6th International Workshop on Machine Learning
 Workshop on Models of Complex Human Learning

----------------------------------------------------------------------

Date: Fri, 10 Feb 89 17:07:12 EST
From: Vision-List moderator Phil Kahn <Vision-List-Request@ADS.COM>
Subject:  What conferences and workshops should Vision List report?

As you may have noticed, when several conference and workshop announcements
arrive, I bundle them into a single List so regular postings aren't
swamped.  Hope this helps.

Of the following three conferences and workshops, I only consider the
NIPS conference to be of interest to the Vision List.  The others I
believe are more mainstream AI, and hence are not appropriate for the 
Vision List.

Though I tend not to like editorially restricting submitted material, I
favor eliminating conference, seminar, and workshop postings which do
not bear a strong relationship to vision.  This is just to let you know
of this policy since, as the readership, this is your list.  If you do 
not agree, please post your reasons to the List.

I am trying to tighten the content to decrease clutter.  In particular,
I want to continue seeing more vision discussions and fewer peripheral
postings.

	phil...


----------------------------------------------------------------------

Date: Thu, 9 Feb 89 13:16:17 EST
From: jose@tractatus.bellcore.com (Stephen J Hanson)
Subject: NIPS CALL FOR PAPERS  
 
IEEE Conference on Neural Information Processing Systems 
- Natural and Synthetic -  
 
 
Monday, November 27 -- Thursday November 30, 1989 
Denver, Colorado 
 
 
This is the third meeting of a high quality, relatively small,
inter-disciplinary conference which brings together neuroscientists,
engineers, computer scientists, cognitive scientists, physicists, and
mathematicians interested in all aspects of neural processing and
computation.  Several days of focussed workshops will follow at a
nearby ski area.  Major categories and examples of subcategories for
papers are the following:

  
 [ 1. Neuroscience: ] Neurobiological models of development, cellular
information processing, synaptic function, learning, and memory.
Studies and analyses of neurobiological systems and development of
neurophysiological recording tools.

 [ 2. Architecture Design: ] Design and evaluation of net
architectures to perform cognitive or behavioral functions and to
implement conventional algorithms. Data representation; static
networks and dynamic networks that can process or generate pattern
sequences.

 [ 3. Learning Theory: ] Models of learning; training paradigms for
static and dynamic networks; analysis of capability, generalization,
complexity, and scaling.

 [ 4. Applications: ] Applications to signal processing, vision,
speech, motor control, robotics, knowledge representation, cognitive
modelling and adaptive systems.

 [ 5. Implementation and Simulation: ] VLSI or optical implementations
of hardware neural nets.  Practical issues for simulations and
simulation tools.

 
   Technical Program:  Plenary, contributed, and poster sessions will be
held. There will be no parallel sessions. The full text of presented papers
will be published.

 
 
   Submission Procedures: Original research contributions are
solicited, and will be refereed by experts in the respective
disciplines.  Authors should submit four copies of a 1000-word (or
less) summary and four copies of a single-page 50-100 word abstract
clearly stating their results by May 30, 1989. Indicate preference for
oral or poster presentation and specify which of the above five broad
categories and, if appropriate, sub-categories (for example, Learning
Theory: Complexity , or Applications: Speech ) best applies to your
paper. Indicate presentation preference and category information at
the bottom of each abstract page and after each summary. Failure to do
so will delay processing of your submission.  Mail submissions to
Kathie Hibbard, NIPS89 Local Committee, Engineering Center, Campus Box
425, Boulder, CO, 80309-0425.

 
 
   Organizing Committee  
 
 
Scott Kirkpatrick, IBM Research, General Chairman; 
Richard Lippmann, MIT Lincoln Labs, Program Chairman; 
Kristina Johnson, University of Colorado, Treasurer; 
Stephen J. Hanson, Bellcore, Publicity Chairman; 
David S. Touretzky, Carnegie-Mellon, Publications Chairman; 
Kathie Hibbard, University of Colorado, Local Arrangements; 
Alex Waibel, Carnegie-Mellon, Workshop Chairman; 
Howard Wachtel, University of Colorado, Workshop Local Arrangements; 
Edward C. Posner, Caltech, IEEE Liaison; 
James Bower, Caltech, Neurosciences Liaison; 
Larry Jackel, AT&T Bell Labs, APS Liaison 

  
 
   DEADLINE FOR SUMMARIES & ABSTRACTS IS MAY 30, 1989  
 

----------------------------------------------------------------------

Date: Sat, 4 Feb 89 21:52:51 -0500
From: segre@gvax.cs.cornell.edu (Alberto M. Segre)
Subject: 6th International Workshop on Machine Learning
Organization: Cornell Univ. CS Dept, Ithaca NY


                             Call for Papers:

             Sixth International Workshop on Machine Learning

                            Cornell University
                         Ithaca, New York; U.S.A.
                          June 29 - July 1, 1989


          The Sixth International Workshop on Machine Learning will be
     held  at  Cornell  University  from June 29 through July 1, 1989.
     The workshop will be divided into  six  parallel  sessions,  each
     focusing on a different theme:

     Combining Empirical and Explanation-Based Learning  (M.  Pazzani,
       chair). Both empirical evaluation and theoretical analysis have
       been  used  to  identify  the  strengths  and   weaknesses   of
       individual  learning methods. Integrated approaches to learning
       have the potential of overcoming the limitations of  individual
       methods.  Papers  are  solicited  exploring  hybrid  techniques
       involving, for example, explanation-based learning,  case-based
       reasoning, constructive induction, or neural networks.

     Empirical Learning; Theory and Application  (C.  Sammut,  chair).
       This  session will be devoted to discussions on inductive (also
       called empirical) learning with particular emphasis on  results
       that  can  be  justified  by theory or experimental evaluation.
       Papers should characterize methodologies  (either  formally  or
       experimentally),  their  performance  and/or problems for which
       they  are  well/ill  suited.   Comparative   studies   applying
       different methodologies to the same problem are also invited.

     Learning Plan Knowledge (S.  Chien  and  G.  DeJong,  co-chairs).
       This  session  will  explore  machine  learning of plan-related
       knowledge; specifically,  learning  to  construct,  index,  and
       recognize  plans  by  using explanation-based, empirical, case-
       based, analogical, and connectionist approaches.

     Knowledge-Base  Refinement  and  Theory  Revision  (A.  Ginsberg,
       chair).  Knowledge-base  refinement  involves  the discovery of
       plausible refinements to a knowledge base in order  to  improve
       the breadth and accuracy of the associated expert system.  More
       generally, theory revision is concerned with systems that start
       out  having  some domain theory, but one that is incomplete and
       fallible.  Two basic problems  are  how  to  use  an  imperfect
       theory  to  guide one in learning more about the domain as more
       experience accumulates, and how to use the knowledge so  gained
       to revise the theory in appropriate ways.

     Incremental Learning (D. Fisher, chair, with J. Grefenstette,  J.
       Schlimmer,  R.  Sutton,  and  P.  Utgoff). Incremental learning
       requires continuous adaptation to the  environment  subject  to
       performance   constraints  of  timely  response,  environmental
       assumptions such as noise or concept drift, and knowledge  base
       limitations.    Papers   that   cross  traditionally  disparate
       paradigms   are   highly   encouraged,   notably    rule-based,
       connectionist,  and  genetic  learning;  explanation-based  and
       inductive   learning;   procedure   and    concept    learning;
       psychological  and  computational  theories  of  learning;  and
       belief revision, bounded rationality, and learning.

     Representational Issues  in  Machine  Learning  (D.  Subramanian,
       chair).   This  session will study representational practice in
       machine  learning  in  order  to  understand  the  relationship
       between  inference  (inductive  and  deductive)  and  choice of
       representation.   Present-day  learners   depend   on   careful
       vocabulary  engineering  for their success.  What is the nature
       of the contribution representation makes to learning,  and  how
       can  we  make  learners  design/redesign  hypotheses  languages
       automatically? Papers are solicited in areas including, but not
       limited  to, bias, representation change and reformulation, and
       knowledge-level analysis of learning algorithms.

                             PARTICIPATION

          Each workshop session  is  limited  to  between  30  and  50
     participants.   In order to meet this size constraint, attendance
     at the workshop is by invitation  only.  If  you  are  active  in
     machine   learning   and  you  are  interested  in  receiving  an
     invitation, we encourage you to submit a  short  vita  (including
     relevant publications) and a one-page research summary describing
     your recent work.

          Researchers interested in presenting their work  at  one  of
     the sessions should submit an extended abstract (4 pages maximum)
     or a draft paper (12 pages maximum) describing their recent  work
     in  the  area.  Final  papers  will  be  included in the workshop
     proceedings, which will be distributed to all participants.

                        SUBMISSION REQUIREMENTS

          Each submission (research  summary,  extended  abstract,  or
     draft  paper)  must  be  clearly  marked  with the author's name,
     affiliation, telephone number and Internet address. In  addition,
     you  should  clearly  indicate  for  which  workshop session your
     submission is intended.

     Deadline for submission is March 1, 1989. Submissions  should  be
     mailed directly to:

         6th International Workshop on Machine Learning
         Alberto Segre, Workshop Chair
         Department of Computer Science
         Upson Hall
         Cornell University
         Ithaca, NY 14853-7501
         USA

         Telephone: (607) 255-9196
         Internet: ml89@cs.cornell.edu


          While  hardcopy  submissions   are   preferred,   electronic
     submissions will be accepted in TROFF (me or ms macros), LaTeX or
     plain TeX. Electronic submissions must consist of a single  file.
     Be sure to include all necessary macros; it is the responsibility
     of the submitter to ensure his/her  paper  is  printable  without
     special   handling.    Foreign   contributors  may  make  special
     arrangements on an individual basis for sending their submissions
     via FAX.

          Submissions will  be  reviewed  by  the  individual  session
     chair(s).    Determinations   will   be  made  by  April 1, 1989.
     Attendance at the workshop is by invitation only; you must submit
     a  paper, abstract or research summary in order to be considered.
     While you may make submissions to more than one workshop session,
     each participant will be invited to only one session.

                            IMPORTANT DATES

     March 1, 1989
          Submission  deadline  for   research   summaries,   extended
          abstracts and draft papers.

     April 1, 1989
          Invitations issued; presenters notified of acceptance.

     April 20, 1989
          Final camera-ready copy of accepted papers due for inclusion
          in proceedings.

------------------------------

Date: Sat, 4 Feb 89 21:57:40 -0500
From: segre@gvax.cs.cornell.edu (Alberto M. Segre)
Subject: Workshop on Models of Complex Human Learning
Organization: Cornell Univ. CS Dept, Ithaca NY

                          CALL FOR PARTICIPATION

                                WORKSHOP ON
                     MODELS OF COMPLEX HUMAN LEARNING

                            Cornell University
                         Ithaca, New York  U.S.A.
                             June 27-28, 1989

                 Sponsored by ONR Cognitive Science Branch


          This two-day workshop will bring together researchers  whose
     learning   research   gives  attention  to  human  data  and  has
     implications for understanding  human  cognition.  Of  particular
     interest  is  learning  research that relates to complex problem-
     solving tasks.  There is an emphasis on symbol-level learning.

          The workshop will be limited to  30-50  attendees.  Workshop
     presentations will be one hour in length, so as to allow in-depth
     presentation and discussion of recent research. Areas of interest
     include:

         Acquisition of Programming Skills
         Apprenticeship Learning
         Case Based Reasoning
         Explanation Based Learning
         Knowledge Acquisition
         Learning of Natural Concepts and Categories
         Learning of Problem Solving Skills
         Natural Language Acquisition
         Reasoning and Learning by Analogy

          The initial list of presenters is based  on  past  proposals
     accepted  by  ONR.  This  call  for  papers  solicits  additional
     submissions.   The  current  list  of  ONR-sponsored   presenters
     includes:

         John Anderson (Carnegie Mellon)
         Tom Bever (Univ. of Rochester)
         Ken Forbus (Univ. of Illinois)
         Dedre Gentner (Univ. of Illinois)
         Chris Hammond (Univ. Chicago)
         Ryszard Michalski (George Mason Univ.)
         Stellan Ohlsson (Univ. of  Pittsburgh)
         Kurt VanLehn (Carnegie Mellon)
         David Wilkins (Univ. of Illinois)

     SUBMISSIONS

          Presenters: Send four copies of (i) a  previously  published
     paper  with  a  four  page abstract that describes recent work or
     (ii) a draft paper.  These  materials  will  be  used  to  select
     presenters;  no workshop proceedings will appear. Please indicate
     whether you would consider being involved just as a participant.

          Participants:  Send  four  copies  of  a  short  vitae  that
     includes  relevant  publications,  and  a one-page description of
     relevant experience and projects.

          Submission Format: Hardcopy submissions are  preferred,  but
     electronic  submissions  will also be accepted in TROFF (ME or MS
     macros), LaTeX or plain TeX.  Electronic submissions must consist
     of  a  single file that includes all the necessary macros and can
     be printed without special handling.

          Deadlines: All submissions should be received by the program
     chair  by Tuesday, March 28, 1989; they will be acknowledged upon
     receipt.  Notices of acceptance will be mailed by May 1, 1989.

     PROGRAM CHAIR

         David C. Wilkins
         Dept. of Computer Science
         University of Illinois
         1304 West Springfield Ave
         Urbana, IL 61801

         Telephone: (217) 333-2822
         Internet: wilkins@m.cs.uiuc.edu

------------------------------

End of VISION-LIST
********************

Vision-List-Request@ADS.COM (Vision-List moderator Phil Kahn) (02/17/89)

Vision-List Digest	Thu Feb 16 12:43:35 PDT 89

 - Send submissions to Vision-List@ADS.COM
 - Send requests for list membership to Vision-List-Request@ADS.COM

Today's Topics:

 Image Display package on X-windows wanted
 SPIE Conference on Robotics and Computer Vision
 Re: Vision research information
 An inexpensive 16level grey scale sensor
 Suggestions for pattern recognition algorithms
 ITI150 & ITI151 Image Processing Mailing List 
 Call for papers: IEEE Workshop on 3D Scene Interpretation

----------------------------------------------------------------------

Date: Mon, 13 Feb 89 17:33:16 JST
From: Shmuel Peleg <peleg%humus.Huji.AC.IL@CUNYVM.CUNY.EDU>
Subject: Image Display package on X-windows wanted

Please let me know if you have an image display and manipulation
system available for X Windows. We are using Sun 3/60's with grey level and color
screens, and X11 R3.

Thanks,
Shmuel Peleg <peleg@humus.bitnet>
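
For anyone rolling their own while waiting for pointers, the sketch below
shows roughly what a minimal viewer involves under plain Xlib.  It is
illustrative only: it assumes an 8-bit PseudoColor visual whose default
colormap already approximates a grey ramp, and it omits error handling and
colormap allocation.

    /* Minimal sketch of an 8-bit greyscale viewer using plain Xlib.
       Assumes an 8-bit PseudoColor default visual with a grey-ish
       colormap; no error handling or colormap allocation. */
    #include <X11/Xlib.h>
    #include <stdlib.h>

    int main(void)
    {
        const int w = 256, h = 256;
        unsigned char *pixels;
        Display *dpy;
        Window win;
        XImage *img;
        GC gc;
        int scr, i;

        pixels = malloc((size_t)w * h);
        if (pixels == NULL)
            return 1;
        for (i = 0; i < w * h; i++)           /* test pattern: grey ramp */
            pixels[i] = (unsigned char)(i % 256);

        dpy = XOpenDisplay(NULL);
        if (dpy == NULL)
            return 1;
        scr = DefaultScreen(dpy);
        win = XCreateSimpleWindow(dpy, RootWindow(dpy, scr), 0, 0, w, h, 1,
                                  BlackPixel(dpy, scr), WhitePixel(dpy, scr));
        XSelectInput(dpy, win, ExposureMask | KeyPressMask);
        XMapWindow(dpy, win);

        /* wrap the buffer in an XImage: depth 8, ZPixmap, one byte/pixel */
        img = XCreateImage(dpy, DefaultVisual(dpy, scr), 8, ZPixmap, 0,
                           (char *)pixels, w, h, 8, w);
        gc = DefaultGC(dpy, scr);

        for (;;) {
            XEvent ev;
            XNextEvent(dpy, &ev);
            if (ev.type == Expose)            /* repaint on each Expose */
                XPutImage(dpy, win, gc, img, 0, 0, 0, 0, w, h);
            if (ev.type == KeyPress)          /* any key quits */
                break;
        }
        XCloseDisplay(dpy);
        return 0;
    }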


------------------------------

Date: 15 Feb 89 00:11:12 GMT
From: mit-amt!turk@mit-amt.MEDIA.MIT.EDU (Matthew Turk)
Subject: SPIE Conference on Robotics and Computer Vision
Keywords: Philadelphia, 11/89
Organization: MIT Media Lab, Cambridge, MA


	   ** Announcement and Call for Papers **

	Intelligent Robots and Computer Vision VIII

   Part of SPIE's Advances in Intelligent Robotics Systems
   November 5-10, 1989
   Adams Mark Hotel
   Philadelphia, Pennsylvania  USA

   Chairman: David Casasent
	     Carnegie-Mellon University
   Co-Chairman: Ernie Hall
		University of Cincinnati

   This year's conference will focus on new algorithms and techniques for
   intelligent robots and computer vision.  Papers are solicited
   specifically for the following session topics:

	   - pattern recognition and image processing
	   - image understanding and scene analysis
	   - color vision, multi-sensor processing
	   - 3-D vision: modeling and representation
	   - neural networks, artificial intelligence, model-based processors
	   - fuzzy logic in intelligent systems and computer vision
	   - biological basis for the design of sensors in computer vision

   Abstract Due Date: April 3, 1989
   Manuscript Due Date: October 9, 1989

   Information:
     SPIE Technical Program Committee/Philadelphia '89
     P.O. Box 10
     Bellingham, WA  98227-0010
     USA

   Or e-mail to:
     turk@media-lab.media.mit.edu


------------------------------

Date: Mon, 13 Feb 89 09:04:49 EST
From: steinmetz!pyramid!malek@mcnc.org (aiman a abdel-malek)
Subject: Re: Vision research information

I am doing research in exploiting human visual system characteristics for
better image generation and compression. If you are doing research on a related
topic or one of the following topics:
- Image segmentation using human visual properties and applications in image
compression.
-The use of visual models for better image generation.
-The use of spatial and temporal characteristics of the visual system to
enhance image quality and update rates.
     Contact me regarding your most recent publications in any of the above
topics. Thank you.
malek@pyramid.steinmetz.Ge.Com


------------------------------

Date: 	Tue, 14 Feb 89 23:35:37 EST
From: Mark Noworolski <noworol@eecg.toronto.edu>
Subject: An inexpensive 16level grey scale sensor
Organization: EECG, University of Toronto


About two months ago I asked about cheap image sensors. The best bet then
was the Fisher Price Kiddie Camcorder (US $99, Canada $169 or so).  This is
true, and it's one hell of a deal - it actually works, has a fully functional
B&W TV - and it's lots o'fun.

Well I went out and bought one of these, Fisher-Price didn't want to help
me, so I figured it out myself.

I dug around and found the data stream and all necessary synch pulses- all
at TTL levels.

I'm quite willing to share what I've learned- however I figure probably the
best way would be to first figure out how many people want this info (and
hence whether I should use the SASE method or actually type all that info in-
graphic road maps included).

So if this interests you in a reasonably serious way mail me with a 
subject header to that effect and I'll decide which approach to take within
the week (maybe I'll even post here if enough demand develops).

I also wrote really ugly Turbo C code and managed to interface it to my PC
bus (with 3 chips) so that I can see what it sees.

Standard Disclaimer: I have no connection to Fisher-Price except that of a
frustrated hacker.

cheers
mark
noworol@godzilla.eecg or noworol@ecf.toronto.edu

[ If demand develops, I can place it in the VisionList anonymous FTP
  directory.  I wouldn't want to clutter the list with code...
		pk...			]


------------------------------

Date: 	Tue, 14 Feb 89 23:43:01 EST
From: Mark Noworolski <noworol@eecg.toronto.edu>
Subject: Suggestions for pattern recognition algorithms
Organization: EECG, University of Toronto

Well, now that I've broken the (seeming) tradition of only conference
calls for papers on the vision-list, here's a question.

I need to use the aforementioned sensor to sense a mouth. Yes that's
right. Picture yourself at the dentist, with the dentist's light shining
in your mouth and an image sensor on top of the light. That's
almost exactly what it is.

Last time I tried doing pattern recognition I failed miserably (maybe 
because I tried to do it MY way). So this time I'm going to be smart
about it.

Are there any algorithms out there particularly well suited for this type
of process? What would be the best places to look? How about using some
kind of neural net to do this (I know very little, if anything, about how to
program these - but a friend assures me that they're ideal for pattern 
recognition)?

Any help would be appreciated.
mark


------------------------------

Date: Thu, 16 Feb 89 02:05:55 PST
From: pvo1478@oce.orst.edu (Paul V. O'Neill)
Subject: ITI150 & ITI151 Image Processing Mailing List 

A new mailing list has been created for users of Imaging Technology's  
series 150 and 151 image processing systems and ITEX151 software.
 
The goal is to share algorithms, code, tricks, pitfalls, advice, etc. in an
effort to decrease development time and increase functionality for the users
of these systems.  (Also, despite their good support, we customers may want
to gang up on ITI someday!!)
 
I envision a simple, unmoderated mail exploder until such time as misuse or
inconsideration forces the list to be moderated.
 
Subscription requests to:

INTERNET:		iti151-request@oce.orst.edu
UUCP:		...!hplabs!hp-pcd!orstcs!oce.orst.edu!iti151-request
UUCP:		...!tektronix!orstcs!oce.orst.edu!iti151-request
 
Traffic to:
                             iti151@oce.orst.edu
		...!hplabs!hp-pcd!orstcs!oce.orst.edu!iti151
		...!tektronix!orstcs!oce.orst.edu!iti151

Paul O'Neill                 pvo@oce.orst.edu
Coastal Imaging Lab
OSU--Oceanography
Corvallis, OR  97331         503-754-3251


------------------------------

Date: Tue, 14 Feb 89 09:23:17 EST
From: flynn@pixel.cps.msu.edu (Patrick J. Flynn)
Subject: Call for papers: IEEE Workshop on 3D Scene Interpretation

                                    CALL FOR PAPERS

                     IEEE Workshop on Interpretation of 3D Scenes

                             Austin Marriott at the Capitol
                                      Austin, TX

                                 November 27-29, 1989

     The interpretation of 3D scenes remains a difficult problem for many
     application areas and has attracted the attention of researchers in
     many disciplines.  The intent of this workshop is to bring together
     vision researchers to discuss current work in scene interpretation,
     representation, matching and  sensing.  A variety  of sessions will be
     devoted to different aspects of scene interpretation research. The
     number of presentations will be  limited, so there will be ample
     opportunity for discussion.  Papers are invited on all aspects of
     scene  interpretation  by human and machine, including:

     * General 3D interpretation problems
     * Applications in navigation, industry, enabling technology, etc.
     * Internal 3D representation and modeling
     * Matching sensed scene structure to internal representations
     * Sensing 3D scene structure

     Authors are encouraged to present new  representations  or
     computational  methods  with  experimental results, present new
     theoretical insights, or relate new observations  of  relationships
     between human and machine processing of 3D scenes.

     Submission of Papers:

     Submit three copies of your paper to Eric  Grimson to be received on or
     before June 15, 1989.  Papers should not exceed a total of  25  double
     spaced  pages.  Authors  will  be  notified of reviewing decisions by
     August 15 and final papers in camera-ready form will be required  by
     the IEEE Computer Society by September 30, 1989.

     General Chairman: Anil Jain, Michigan State University
                       (517) 353-5150
                       Internet: jain@cps.msu.edu

     Program Committee:
      Jake Aggarwal, University of Texas, Austin
      Dan Huttenlocher, Cornell University
      Katsushi Ikeuchi, Carnegie Mellon University
      Avi Kak, Purdue University
      David Lowe, University of British Columbia
      Linda Shapiro, University of Washington

     Program Chairpersons:
      Eric Grimson
      Artificial Intelligence Laboratory
      M. I. T.
      545 Technology Square
      Cambridge, MA 02139
                          
      George Stockman
      Computer Science Department
      Michigan State University
      East Lansing, MI 48824

     Local Arrangements: Alan Bovik,  University of Texas, Austin


------------------------------

End of VISION-LIST
********************

Vision-List-Request@ADS.COM (Vision-List moderator Phil Kahn) (02/24/89)

Vision-List Digest	Thu Feb 23 12:46:10 PDT 89

 - Send submissions to Vision-List@ADS.COM
 - Send requests for list membership to Vision-List-Request@ADS.COM

Today's Topics:

 Decision on Fisher Price Camcorder
 Paint algorithm needed
 report alert: mathematical morphology, pt. 2
 Conf. on VISION & 3-D REPRESENTATION
 SPIE Conference on Robotics and Computer Vision
 Faculty Positions Available
 X11R3 on Mac

------------------------------

Date: 	Thu, 23 Feb 89 12:41:03 EST
From: Mark Noworolski <noworol@eecg.toronto.edu>
Subject: Decision on Fisher Price Camcorder
Organization: EECG, University of Toronto

So far I've gotten about 10-13 requests for what I've
found out about the Kiddie Camcorder. So I'll type it in
(it's going to be part of my thesis) and send it to vision
list. Expected date of arrival: March 6. This week and next
week are way too busy. Iron ring capers, ceremonies, et al.

cheers

"How much more black could it be. The answer is none. None more black."

	Nigel - Lead Guitar, Spinal Tap

noworol@eecg.toronto.edu   or  noworol@ecf.toronto.edu


------------------------------

Date: 15 Feb 89 17:29:06 GMT
From: fuhrman@b.coe.wvu.wvnet.edu (Cris Fuhrman)
Subject: Paint algorithm needed

A friend of mine is working on an object recognition project for
his senior project.  He was using some terribly inefficient algorithm to
find the area, centroid, corners, etc., of an object.  This algorithm was
taking 11 seconds (a uVAX using C)!  I suggested a quick paint/fill
algorithm with some modifications as a better way to obtain these
statistics.

I'm looking for an efficient algorithm that will fill a solid object or
an outlined object (similar to how graphics editors do the "paint-can"
effect).  Can anyone give me some pseudo-code or point me to an appropriate
reference guide?

-Cris

[ How about Foley and van Dam?			pk ]
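
For what it's worth, one common approach is a stack-based flood fill that
accumulates the region statistics while it visits pixels.  The sketch below
is illustrative only (thresholded image, 4-connectivity, invented names); it
returns area and centroid, and corner detection would still need a separate
pass over the region boundary.

    /* Minimal sketch: flood fill with an explicit stack, accumulating
       area and centroid sums in a single pass.  img[][] is a
       thresholded image (nonzero = object); (sr,sc) is a seed pixel
       inside the object. */
    #include <stdlib.h>

    #define W 512
    #define H 512

    static unsigned char img[H][W];    /* input: nonzero = object pixel */
    static unsigned char seen[H][W];   /* visited marker                */

    void region_stats(int sr, int sc, long *area, double *crow, double *ccol)
    {
        static const int dr[4] = { -1, 1, 0, 0 };   /* 4-connected   */
        static const int dc[4] = {  0, 0, -1, 1 };  /* neighbourhood */
        int *stack = malloc(2L * W * H * sizeof *stack);  /* worst case */
        long top = 0, n = 0, sum_r = 0, sum_c = 0;
        int k, r, c;

        if (stack == NULL) {
            *area = 0; *crow = *ccol = 0.0;
            return;
        }
        stack[top++] = sr;
        stack[top++] = sc;
        seen[sr][sc] = 1;

        while (top > 0) {
            c = stack[--top];
            r = stack[--top];
            n++;                          /* area and first moments */
            sum_r += r;
            sum_c += c;
            for (k = 0; k < 4; k++) {
                int nr = r + dr[k], nc = c + dc[k];
                if (nr >= 0 && nr < H && nc >= 0 && nc < W &&
                    img[nr][nc] && !seen[nr][nc]) {
                    seen[nr][nc] = 1;
                    stack[top++] = nr;
                    stack[top++] = nc;
                }
            }
        }
        free(stack);
        *area = n;
        *crow = (double)sum_r / n;        /* centroid row    */
        *ccol = (double)sum_c / n;        /* centroid column */
    }

Since every region pixel is pushed and popped once, the cost is linear in the
region size, which should come in well under the 11 seconds quoted above.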

------------------------------

Date: Tue, 21 Feb 89 14:15:34 +0100
From: prlb2!ronse@uunet.UU.NET (Christian Ronse)
Subject: report alert: mathematical morphology, pt. 2

The following report is available. Simply send an e-mail message to one of the
two authors, with your postal mail address. NB: Part I was distributed last
summer.


The Algebraic Basis of Mathematical Morphology; Part II: Openings and Closings

C. Ronse (ronse@prlb2.uucp), Philips Research Laboratory Brussels
H. Heijmans (henkh@mcvax.uucp), Centre for Mathematics and Computer Science

ABSTRACT: This paper is the sequel to a previous paper (Part I) where we have
introduced and investigated an abstract algebraic framework for mathematical
morphology. The main assumption is that the object space is a complete
lattice. Of interest are all (increasing) operators which are invariant under
a given abelian group of automorphisms on the lattice. In Part I we have been
mainly concerned with the basic operations dilation and erosion. In this paper
we will concentrate on openings and closings, which are very special classes
of idempotent operators. Much attention is given to specific methods for
building openings and closings. Some examples illustrate the abstract theory.

AMS 1980 Mathematics Subject Classification: 68U10, 68T10, 06A15, 06A23. 
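
For readers less at home with the terminology, the following sketch
(illustrative only, not taken from the report) shows the binary special case:
an opening is an erosion followed by a dilation with the same structuring
element, and the result is idempotent, i.e. opening an already-opened image
leaves it unchanged.

    /* Binary opening with a 3x3 square structuring element.
       Pixels outside the image are treated as background. */
    #define N 16
    typedef unsigned char Img[N][N];

    /* Neighbourhood test at (r,c): want_all = 1 requires every cell of
       the 3x3 element to sit on a set pixel (erosion); want_all = 0
       requires at least one (dilation). */
    static int probe(Img a, int r, int c, int want_all)
    {
        int dr, dc, set = 0, cells = 0;
        for (dr = -1; dr <= 1; dr++)
            for (dc = -1; dc <= 1; dc++) {
                int rr = r + dr, cc = c + dc;
                cells++;
                if (rr >= 0 && rr < N && cc >= 0 && cc < N && a[rr][cc])
                    set++;
            }
        return want_all ? set == cells : set > 0;
    }

    static void erode(Img in, Img out)
    {
        int r, c;
        for (r = 0; r < N; r++)
            for (c = 0; c < N; c++)
                out[r][c] = (unsigned char)probe(in, r, c, 1);
    }

    static void dilate(Img in, Img out)
    {
        int r, c;
        for (r = 0; r < N; r++)
            for (c = 0; c < N; c++)
                out[r][c] = (unsigned char)probe(in, r, c, 0);
    }

    void opening(Img in, Img out)
    {
        Img tmp;
        erode(in, tmp);    /* removes pieces smaller than the element */
        dilate(tmp, out);  /* restores the shape of what survived     */
    }

Closing is the dual: a dilation followed by an erosion with the same element,
and it is idempotent as well.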

Christian Ronse		maldoror@prlb2.UUCP
{uunet|philabs|mcvax|cernvax|...}!prlb2!{maldoror|ronse}

		``Stars were born of the sky.
		  Not the stars of glass,
		  but those of chrome steel.''

------------------------------

Date: Tue, 21 Feb 89 15:40:02 CST
From: mv10801@uc.msc.umn.edu
Subject: Conf. on VISION & 3-D REPRESENTATION


                          Conference on
           VISION AND THREE-DIMENSIONAL REPRESENTATION
                         May 24-26, 1989
                     University of Minnesota
                     Minneapolis, Minnesota

The appearance of the three dimensional world  from  images  pro-
jected  on  our two dimensional retinas is immediate, effortless,
and compelling. Despite the vigor of research in vision over  the
past  two decades, questions remain about the nature of three di-
mensional representations and the use  of  those  representations
for  recognition and action. What information is gathered? How is
it integrated and structured? How  is  the  information  used  in
higher level perceptual tasks? This conference will bring togeth-
er nineteen prominent scientists to address these questions  from
neurophysiological,  psychological,  and  computational  perspec-
tives.

The conference is sponsored by the Air Force Office of Scientific
Research  and the University of Minnesota College of Liberal Arts
in cooperation  with  the  Departments  of  Psychology,  Computer
Science,  Electrical  Engineering,  Child  Development,  and  the
Center for Research in Learning, Perception, and Cognition.

Registration:
-------------
The conference fee is $30 ($15 for current  students).  This  fee
includes  program materials, refreshments, and Wednesday's recep-
tion. Conference enrollment is limited, so early registration  is
recommended.

Accommodations:
---------------
A block of rooms has been reserved  at  the  Radisson  University
Hotel.  Rates  are $68 (plus tax) for double or single occupancy.
To make reservations, contact the hotel  at  (612)  379-8888  and
refer  to the program title to obtain these special rates. Reser-
vations must be made by April 9.

For Further Information, Contact:

Program: Jo Nichols, Center for Research in  Learning  Perception
         and Cognition, (612) 625-9367
 
Registration:  Char  Greenwald,  Professional   Development   and
               Conference Services, (612) 625-1520

Organizing Chairpersons:
        Gordon  Legge, Department  of Psychology, (612) 625-0846,   
                       legge@eye.psych.umn.edu
        Lee Zimmerman, Department   of   Electrical  Engineering,
                       (612) 625-8544,
		       lzimmerm@umn-ai.umn-cs.cs.umn.edu


Registrants should include their Name, Address, Day and Evening Telephone,
Job Position, and $30 general registration or $15 current student
registration  (give Student I.D. number) or University of Minnesota
Department budget number. Please make check or money order payable to the 
University of Minnesota.
Mail to:	Registrar
		Professional Development and Conference Services
		University of Minnesota
		338 Nolte Center
		315 Pillsbury Drive S.E.
		Minneapolis, MN 55455-0139

Registration should be received by May 15.


------------------------------

Date: 15 Feb 89 00:11:12 GMT
From: mit-amt!turk@mit-amt.MEDIA.MIT.EDU (Matthew Turk)
Subject: SPIE Conference on Robotics and Computer Vision
Keywords: Philadelphia, 11/89
Organization: MIT Media Lab, Cambridge, MA

	   ** Announcement and Call for Papers **

	Intelligent Robots and Computer Vision VIII

   Part of SPIE's Advances in Intelligent Robotics Systems
   November 5-10, 1989
   Adams Mark Hotel
   Philadelphia, Pennsylvania  USA

   Chairman: David Casasent
	     Carnegie-Mellon University
   Co-Chairman: Ernie Hall
		University of Cincinnati

   This year's conference will focus on new algorithms and techniques for
   intelligent robots and computer vision.  Papers are solicited
   specifically for the following session topics:

	   - pattern recognition and image processing
	   - image understanding and scene analysis
	   - color vision, multi-sensor processing
	   - 3-D vision: modeling and representation
	   - neural networks, artificial intelligence, model-based processors
	   - fuzzy logic in intelligent systems and computer vision
	   - biological basis for the design of sensors in computer vision

   Abstract Due Date: April 3, 1989
   Manuscript Due Date: October 9, 1989

   Information:
     SPIE Technical Program Committee/Philadelphia '89
     P.O. Box 10
     Bellingham, WA  98227-0010
     USA

   Or e-mail to:
     turk@media-lab.media.mit.edu

------------------------------

Date:     Wed, 22 Feb 89 10:59 H
From: <CHTEH%NUSEEV.BITNET@CUNYVM.CUNY.EDU>
Subject:  Faculty Positions Available

National University of Singapore : Faculty positions are available in the
areas of computer communication, computer systems, neural networks, and
computer vision, in the Department of Electrical Engineering. Interested
applicants may send resumes to the Head, Department of Electrical
Engineering, National University of Singapore, Singapore 0511, Singapore.
Enquiries on current research activities in specific areas may be sent through
BITNET to : PERSDEPT@NUSVM.

------------------------------

Date: 22 Feb 89 20:20:59 GMT
From: peters@Apple.COM (Steve Peters)
Subject: Re: X11R3 on Mac?
Summary: X11R3 for A/UX is Apple product
Organization: Apple Computer Inc, Cupertino, CA

Apple will ship its X11R3 product for A/UX in March (1989). The
server will support 1-bit and 8-bit deep frame buffers, multiple
screens, backing store and save unders. The graphics code has
undergone substantial optimization. X11R3 will run on both A/UX 1.0
and A/UX 1.1, however A/UX 1.0 allows just a single monochrome screen.

Apple has contributed sources for a single screen, monochrome server
to the MIT X Consortium. These appear on the X11R3 distribution which
has been publicly available since October. comp.windows.x regularly announces
ftp (and other) sites where this distribution may be obtained.

Steve Peters
A/UX X Project Leader
Apple Computer, Inc.


------------------------------

End of VISION-LIST
********************

Vision-List-Request@ADS.COM (Vision-List moderator Phil Kahn) (03/02/89)

Vision-List Digest	Wed Mar 01 13:06:39 PDT 89

 - Send submissions to Vision-List@ADS.COM
 - Send requests for list membership to Vision-List-Request@ADS.COM

Today's Topics:

 PD Image Processing Software for Suns
 Faculty Positions in Robotics

----------------------------------------------------------------------

Date:       Mon, 27 Feb 89 11:15:48 GMT
Subject:    PD Image Processing Software for Suns
From: phill%MED-IMAGE.COMPSCI.BRISTOL.AC.UK@CUNYVM.CUNY.EDU

     This is to introduce a toolkit of image processing pro-
grams,  collectively  called  the ALV toolkit for historical
reasons, written by Phill Everson <everson@uk.ac.bris.cs> in
the  Computer  Science  Dept.  of Bristol University, United
Kingdom.

     The toolkit is designed to aid image processing work on
Sun  workstations.  It is intended to be easy to use, though
not restrictive  to  experienced  users,  user-configurable,
extensible  and flexible.  For example the toolkit will work
on both black and  white  and  colour  workstations  and  in
either case will, transparently to the user, display an
image to the best of its ability on the screen.

     The  toolkit  has recently  been  rewritten  to use the
standard Sun rasterfile format to store its images, allowing
multiple depth  images to be processed by the  same  toolkit
and easy migration of data between packages.
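
Since the toolkit stores everything in the standard Sun rasterfile format,
it may help to recall what that header looks like.  The sketch below follows
the eight-word header declared in <rasterfile.h> on SunOS; the field comments
are informal notes rather than a specification, and the small check routine
is illustrative only.

    /* The standard Sun rasterfile header (after <rasterfile.h> on SunOS).
     * The fields are stored big-endian, so reading the struct directly
     * with fread, as below, assumes a big-endian host such as a Sun. */
    #include <stdio.h>

    struct rasterfile {
        int ras_magic;      /* magic number, 0x59a66a95               */
        int ras_width;      /* image width in pixels                  */
        int ras_height;     /* image height in pixels                 */
        int ras_depth;      /* bits per pixel: 1, 8, or 24            */
        int ras_length;     /* length of the image data in bytes      */
        int ras_type;       /* encoding: standard, byte-encoded (RLE) */
        int ras_maptype;    /* colormap type                          */
        int ras_maplength;  /* length of the colormap in bytes        */
    };

    /* Minimal sanity check that a file looks like a Sun rasterfile. */
    int is_rasterfile(FILE *fp)
    {
        struct rasterfile hdr;

        if (fread(&hdr, sizeof hdr, 1, fp) != 1)
            return 0;
        return hdr.ras_magic == 0x59a66a95;
    }

A tool that wants to handle several image depths transparently would simply
branch on ras_depth after a check of this kind.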

*** All people currently on the alv-users mailing list  will
receive a copy of the new toolkit in the next couple of days.

     The toolkit is made up of  a  number  of  tools.  These
include  programs  to  display  an  image  on the screen, to
display a histogram, to perform histogram  equalisation,  to
threshold,  to  print  an  image on an Apple Laserwriter, to
invert an image and  to  convolve  an  image  with  a  user-
supplied  linear  filter.  Currently, there are 27 such pro-
grams.
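
As an illustration of the kind of operation the convolution tool performs,
the sketch below convolves an 8-bit image (stored as a flat array of
width*height bytes) with a small, odd-sized kernel, replicating border
pixels at the edges.  The function and argument names are illustrative
only; they are not the toolkit's actual interface.

    /* Direct 2-D convolution of an 8-bit image with a ksize x ksize
     * kernel (ksize odd).  Border pixels are replicated at the edges
     * and the result is clamped back into the 0..255 range. */
    static unsigned char clamp255(double v)
    {
        if (v < 0.0)   return 0;
        if (v > 255.0) return 255;
        return (unsigned char)(v + 0.5);
    }

    void convolve8(const unsigned char *in, unsigned char *out,
                   int width, int height, const double *kernel, int ksize)
    {
        int half = ksize / 2;
        int x, y, i, j;

        for (y = 0; y < height; y++)
            for (x = 0; x < width; x++) {
                double sum = 0.0;
                for (j = -half; j <= half; j++)
                    for (i = -half; i <= half; i++) {
                        int sx = x + i, sy = y + j;
                        if (sx < 0) sx = 0;
                        if (sy < 0) sy = 0;
                        if (sx >= width)  sx = width - 1;
                        if (sy >= height) sy = height - 1;
                        sum += kernel[(j + half) * ksize + (i + half)]
                             * in[sy * width + sx];
                    }
                out[y * width + x] = clamp255(sum);
            }
    }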

     The toolkit was initially written to fulfill a need  at
Bristol  University  for  a  single coherent set of tools to
support basic image processing research on a variety of pro-
jects.  We  had  found  that each user or group of users was
writing their own copy of programs  to  do  similar  things,
like displaying an image on the screen, and, more importantly, in an
environment where disk space is always at a premium, each was keeping
separate copies of these often large programs.

     Using a coherent set of tools with  a  consistent  file
format  has substantially increased cross-project communica-
tion and in addition has provided a higher starting point on
the  learning  curve  for  novice Sun-Users/Imagers. We have
found that users generally use the core tools as a basis and
are then able to concentrate their work in their own area of
interest.

     The ALV toolkit comes complete with a 40-50 page online manual
which can easily be dumped to a laserwriter to provide an impressive
reference for a public domain program.

The toolkit is currently distributed via email.
Contact <alv-users-request@uk.ac.bris.cs> to request a copy.

The following are the commands currently in the toolkit:

     array2ras - convert array to raster
     blend - blend two rasters together
     box - box a raster
     convert - convert textual raster to raster
     convolve - convolve a raster with a linear filter
     dither - convert 8 bit raster to 1 bit using dither matrix
     dsp  - display a raster on screen
     equalise - equalise a raster
     ffill - flood fill a raster
     halftone - convert an 8 bit raster to  1  bit  using  bitmap
     font
     hist - display histogram of raster
     im2ras - convert old ALV format to raster
     invert - invert the pixels in a raster
     ras2array - convert raster to array
     ras2im - convert raster to old ALV format
     ras2lw - output a raster on a Laserwriter
     rasinfo - print raster dimensions and depth
     rasrange - range a raster's greylevels
     rasregion - clip a raster to a region
     rasscale - scale a raster's size by a scaling-factor
     rasthresh - threshold raster
     rasval - print pixel values of raster
     scr2ras - interactive screendump to raster
     transform - shear or rotate a raster
     winlev - convert N bit deep raster to 8 bits deep
     winlev8  -  interactively  change  window  and  level  of  a
     displayed raster

Phill Everson
-------------------------------------------------------------------------
SNAIL:     Phill Everson, Dept Comp Sci, University of Bristol, England
JANET:     everson@uk.ac.bris.cs
UUCP:      ...mcvax!ukc!csisles!everson
ARPANET:   everson@cs.bris.ac.uk OR everson%uk.ac.bris.cs@nss.cs.ucl.ac.uk
BITNET:    everson%uk.ac.bris.cs@ukacrl.bitnet


------------------------------

Date: Mon, 27 Feb 1989 16:59-EST 
From: Takeo.Kanade@IUS3.IUS.CS.CMU.EDU
Subject: Faculty Positions in Robotics


			 Faculty Positions in Robotics

		          Carnegie Mellon University
		            Robotics Ph.D. Program


Applications are invited for tenure-track faculty positions in the Robotics
Ph.D. Program at Carnegie Mellon University.  The program is
interdisciplinary with participation from the Robotics Institute, School of
Computer Science, Carnegie Institute of Technology (the engineering
college), and Graduate School of Industrial Administration.  Appointees are
expected to play major roles in education and research in the program.  The
appointments may be made at either assistant, associate, or full professor
levels, and in general will be joint positions between the Robotics
Institute and an academic department or school, depending on the
qualifications and backgrounds of the applicants.  If so desired, a non-
tenure-track research faculty position at the Robotics Institute can also be
considered.

Applicants for tenured positions must have strong records of achievements in
research and education in robotics and have demonstrated leadership in
formulating and performing advanced research projects.  Applicants for
junior tenure-track positions must have a Ph.D. in a related discipline and
have demonstrated competence in one or several areas of Robotics research
together with potential for excellent teaching.

Outstanding candidates in all areas of Robotics are invited, including, but
not limited to, mechanism, manipulation, control, locomotion, vision,
design, planning, knowledge-based systems, simulation, graphics,
micro-electronics, parallel computing, manufacturing, and management. 

Applicants should send their applications with curriculum vitae and names of
at least four references to:  Professor Takeo Kanade, Director of the
Robotics Ph.D. Program, The Robotics Institute, Carnegie Mellon University,
Pittsburgh, PA 15213.

Carnegie Mellon is an Equal Opportunity/Affirmative  Action employer.


------------------------------

End of VISION-LIST
********************

Vision-List-Request@ADS.COM (Vision-List moderator Phil Kahn) (03/07/89)

Vision-List Digest	Mon Mar 06 12:31:33 PDT 89

 - Send submissions to Vision-List@ADS.COM
 - Send requests for list membership to Vision-List-Request@ADS.COM

Today's Topics:

 Images recording
 Re:  Yet Another Image Proc. Toolkit
 Perception data wanted
 Re: Vision research information

----------------------------------------------------------------------

Date:         Thu, 02 Mar 89 09:51:08 HNE
From: Jean Fortin <3606JFOR%LAVALVM1.BITNET@CORNELLC.CIT.CORNELL.EDU>
Subject:      Images recording

     Hi everybody,
          I'm working with a b/w video camera and I would like
     to record my images on magnetic tape. Normal VCRs are
     equipped with automatic gain control (AGC) and other processing
     features which are undesirable in my case. I would like to know
     if anybody knows of a company offering VCRs without these
     features, specially suited for scientific recording of images.

     Thank you!

               Jean Fortin <3606JFOR@LAVALVM1>
               Electrical Engineering Dept.
               Universite Laval
               Local 1114-J
               Ste-Foy, Quebec, Canada
               G1K 7P4

------------------------------

Date: Sat, 4 Mar 89 18:16:35 PST
From: Lew Hitchner <hitchner@saturn.ucsc.edu>
Subject: Re:  Yet Another Image Proc. Toolkit

The info in a recent Vision-List about Bristol's image Toolkit was
interesting and worthwhile to announce on Vision-List.  However, there
seem to be quite a few of these beasts floating around (esp. in
academia), some of which are public domain (i.e., free, no maintenance,
maybe some documentation), some of which have been pseudo-
commercialized, some of which have been commercialized, etc.  Perhaps
Vision-List would be a good conduit for compiling a list of the known
image toolkits and their availability status.  If you can find a willing
compiler (i.e., a human) who would edit the responses sent in by
Vision-List readers, it would probably be a very good service to the
Vision community (as you might guess, this implies I am not
volunteering, but perhaps a call for a volunteer in a Vision-List
announcement might succeed).

	Lew Hitchner
	UC Santa Cruz

[ I agree that this would be worthwhile.  Those with information, please
  post the information, and I will repost.  In particular, specify: the 
  language, target system and portability, scope of routines, copyrights, 
  support and maintenance, responsible person(s) to contact, extent of 
  constructs (e.g., image processing, region-based, abstractions for 
  object description, etc.), and (if appropriate) the cost of the system.
	        phil...	 	]

------------------------------

Date: 2 Mar 89 22:31:11 GMT
From: munnari!rhea.trl.oz.au!dconway@uunet.UU.NET ( Switched Networks)
Subject: Perception data wanted
Keywords: sensory input references
Organization: Telecom Research Labs,IPF,Melbourne, Australia

Is the following sentence meaningful?

	"Unimpaired humans receive XX% of all sensory input visually."

If it is:
	  a) what is the value of XX?
	  b) how is this determined?
	  c) what is the standard reference on this?

If not:
	  a) what _can_ be meaningfully said in this context?
	  b) what are the issues which complicate such estimates?
	  c) what is the standard reference on the relative
	     importance of the different senses?

Please reply by email.
I will summarize responses.

Thankyou,

  who: Damian Conway			email: dconway@rhea.trl.oz
where: Network Analysis Section		phone: (03) 541 6270
       SNRB, Telecom Research Labs	quote: "He was a dyslextheist;
       Clayton South building (CS)		he worshipped dogs."
       22 Winterton Road			
       Clayton 3168		
       AUSTRALIA

------------------------------

Date: Mon, 13 Feb 89 09:04:49 EST
From: steinmetz!pyramid!malek@mcnc.org (aiman a abdel-malek)
Subject: Re: Vision research information

I am doing research in exploiting human visual system characteristics for
better image generation and compression. If you are doing research on a
related topic or one of the following topics:
- Image segmentation using human visual properties and applications in image
  compression.
- The use of visual models for better image generation.
- The use of spatial and temporal characteristics of the visual system to
  enhance image quality and update rates.
please contact me regarding your most recent publications in any of the above
topics. Thank you.
malek@pyramid.steinmetz.Ge.Com

------------------------------

End of VISION-LIST
********************

Vision-List-Request@ADS.COM (Vision-List moderator Phil Kahn) (03/09/89)

Vision-List Digest	Wed Mar 08 11:00:09 PDT 89

 - Send submissions to Vision-List@ADS.COM
 - Send requests for list membership to Vision-List-Request@ADS.COM

Today's Topics:

 Percent visual input?
 OBVIUS (vision software)
 VIEWS: Image Processing Toolkit for Suns?

----------------------------------------------------------------------

Date:     Tue, 7 Mar 89 11:26 EDT
From: "RCSDY::YOUNG"@gmr.com
Subject:  Percent visual input?

Since I am unsure how to reach him via e-mail, I am replying directly on
VisionNet to Damian Conway's earlier question:
 
> Is the following sentence meaningful?    
> "Unimpaired humans receive XX% of all sensory input visually."  

Consider there are an estimated 10^9 neurons in the primary visual area (V1)
in cortex, another 10^9 (or possibly much more) in the secondary
visual cortical areas (V2-V4, peristriate, parastriate). Subcortical
areas such as LGN we can disregard, since there are only 10^6 neurons in each
optic tract, and only 10^8 in each eye. In addition we have motion and
eye movement processing over in the superior colliculus and its associated
pathways (perhaps about another 10^9). There are altogether at least
20 different known retinotopic maps in the cortex, not all of which
have complete number estimates. Total cortex is generally thought to contain
about 10^10 neurons, although this figure is a likely underestimate.

The usual estimate of vision-related neurons is made by considering just
the occipital area of the brain, where the visual sensory paths terminate,
which is known to contain about 70 percent of all the neurons in the
human central nervous system (M. D. Levine, Vision in Man and Machine,
1985, p. 84). So 70% would be the most widely quoted figure.

However, if you also include the association cortex that links visual
information with auditory and tactile information, the total figure would be
higher. And what about the motor pathways controlling eye movements, with
the visual re-afference needed to maintain visual stability across eye
movements? My own estimate is that roughly 80% of the neurons in the brain
are involved with vision processing -- we are indeed visual creatures!

  Dick Young
  Machine Perception Laboratory
  General Motors Research Labs

------------------------------

Date: Tue, 7 Mar 89 17:30:29 EST
From: David Heeger <heeger@paddington.media.mit.edu>
Subject: OBVIUS (vision software)


OBVIUS (Object-Based Vision and Image Understanding System) is an
extension to Common Lisp and CLOS (Common Lisp Object System) for
manipulating pictorially displayable objects.  The system provides a
flexible interactive user interface for working with images.  In
addition, by using Lisp as its primary language, the system is able to
take advantage of the interpretive lisp environment (the
``listener''), object-oriented programming, and the extensibility
provided by incremental compilation.  OBVIUS runs on Sun 3 (using
Lucid Lisp) and Symbolics machines.

The basic functionality of OBVIUS is to present certain lisp objects
to the user pictorially.  These objects are referred to as {\bf
viewables}.  Some examples of viewables are monochrome images, color
images, one bit images, complex images, image pyramids, image
sequences, filters and discrete functions.  A {\bf picture} is a
pictorial representation of a viewable.  Note that a given viewable
may be compatible with several different picture types.  For example,
a floating point image may be displayed as an eight bit grayscale
picture, as a one bit dithered picture, or as a graphical surface
plot.  OBVIUS also provides postscript hardcopy output of pictures.

In the typical mode of interaction, the user types an expression to
the lisp listener and it returns a viewable as a result.  The
top-level lisp print function then automatically displays a picture of
the viewable in a window.  Each window contains a circular stack of
pictures.  Standard stack manipulation operations are provided via
mouse clicks (e.g., cycle, pop, and push).  Commonly used operations
such as histogram and zoom are also provided via mouse clicks.

OBVIUS provides a library of image processing routines (e.g., point
operations, image statistics, convolutions, Fourier transforms).  All
of the operations are defined on all of the viewable types.  The
low-level floating point operations on the Suns are implemented in C
for speed.  OBVIUS also provides a library of functions for
synthesizing images.  In addition, it is straightforward to add new
operations and new viewable and picture types.

OBVIUS is now ready for beta-test distribution (available via
anonymous ftp from whitechapel.media.mit.edu).  Since it is currently
an in-house product, it comes without warranty or support.  For more
information contact David Heeger (heeger@media-lab.media.mit.edu) of
the MIT Media Lab Vision Science Group, at (617) 253-0611.


------------------------------

Date: Tue, 7 Mar 89 21:26:48 EST
From: achhabra@ucesp1.ece.uc.edu (Atul Chhabra)
Subject: VIEWS: Image Processing Toolkit for Suns?

At a recent conference, I saw a brochure about VIEWS, an image
processing toolkit for Suns. This is public domain software
developed at Lawrence Livermore Labs. The brochure contained
the name and phone number of the contact person at Lawrence
Livermore. 

I have misplaced the brochure. Could someone on the net email
me the name, phone number, and the email address of the
distributor of VIEWS. 

Thanks

Atul Chhabra, Dept. of Electrical & Computer Engineering, ML 030,
University of Cincinnati, Cincinnati, OH 45221-0030.

voice: (513)556-4766  INTERNET: achhabra@ucesp1.ece.uc.edu
                                OR achhabra@uceng.uc.edu


------------------------------

End of VISION-LIST
********************

Vision-List-Request@ADS.COM (Vision-List moderator Phil Kahn) (03/15/89)

Vision-List Digest	Tue Mar 14 10:40:36 PDT 89

 - Send submissions to Vision-List@ADS.COM
 - Send requests for list membership to Vision-List-Request@ADS.COM

Today's Topics:

 Intrinsic image routines?
 Research posts - parallel processing and vision
 Imaging software.
 Camcorder computer interface modification description

----------------------------------------------------------------------

Date: Mon, 13 Mar 89 19:47:07 EST
From: Sean Philip Engelson <engelson-sean@YALE.ARPA>
Subject: Intrinsic image routines?


I'm doing some work in cognitive mapping and robotics, and, naturally,
I need some vision.  I'm just getting to the thinking about hacking
stage, so I figured I'd ask if you had programs to compute any sort of
intrinsic images from input data, that I could get ahold of.  Things
like local shape-from-shading, or stereo depth maps, or motion fields,
etc, is what I'm looking for; not model or feature based stuff.  I
need source, of course, since I want to interface all this stuff
together, and thus commercial quality is not necessary, but
well-written code would be nice.

Thanks very much in advance,


------------------------------

Date: 13 Mar 89 17:30:55 GMT
From: Andrew Wallace <mcvax!cs.hw.ac.uk!andy@uunet.UU.NET>
Subject: Research posts - parallel processing and vision
Organization: Computer Science, Heriot-Watt U., Scotland


                              Heriot-Watt University

                          Department of Computer Science

              Research Associates in Parallel Processing and Vision

        Applications   are   invited   for   two   SERC-funded    Research
        Associateships  to  work  on  aspects  of  rapid  prototyping  and
        implementation of algorithms for high level  image  interpretation
        on   multiple  instruction  multiple  data  (MIMD)  architectures.
        Although  working  closely   together,   each   RA   will   assume
        responsibility   for   a   specific  programme.   The  first  will
        concentrate  primarily  on  the  software  methodology,  including
        functional  specification  of  algorithms and their transformation
        into a parallel imperative language,  OCCAM  2.   The  other  will
        undertake  the  development,  optimisation  and  implementation of
        algorithms for visual  recognition  and  location  on  a  suitable
        machine.   The persons appointed will join a lively research group
        working  on  several  aspects  of  computer  vision  and  software
        development.

        Applicants should have an honours degree in Computer Science or  a
        related    discipline,   together   with   relevant   postgraduate
        experience.  The posts are tenable for three years, commencing  as
        soon  as possible after the 1st June.  The starting salary will be
        in the range UKL 8,675 to UKL 13,365, depending on age and
        experience.

        Enquiries  should  be  directed  initially  to  the Staff Officer,
        Heriot-Watt University, Riccarton, Edinburgh EH14 4AS,  from  whom
        further  information  and  application forms may be obtained.  The
        closing date for applications is 7th April 1989.

        Informal enquiries may be directed to Dr. Andrew Wallace   at  the
        Department   of   Computer   Science,   tel.   031-225-6465   x542
        (andy@uk.ac.hw.cs)

 Andrew Wallace			JANET : andy@cs.hw.ac.uk
				ARPA  : andy@uk.ac.hw.cs
				UUCP  : ..ukc!cs.hw.ac.uk!andy


------------------------------

Date: Wed, 8 Mar 89 14:59 N
From: THIERRY PUN <PUN%CGEUGE51.BITNET@CUNYVM.CUNY.EDU>
Subject: Imaging software.

(Following vision-list digest of Monday March 6, 89).

LABO IMAGE:

Computer Science Center, University of Geneva, Switzerland


GENERAL DESCRIPTION:

Labo Image is window-based software for image processing and analysis. It
contains a comprehensive set of operators as well as general utilities. It
is designed to be open-ended; new modules can easily be added. The software
is mostly written in C, and currently runs on Sun 3/xxx, Sun 4/xxx (OS3.5 and
4.0) under SunView. It has been extensively used by students as well as
researchers from various domains: computer science (image analysis), medicine,
biology, physics. It is freely distributed.

CAPABILITIES:

Labo Image is interactive software whose interface is menu-, mouse- and
window-based. It can work on monochrome (binary) or color workstations. Its
main features are:
    - input-output: file, screen, postscript;
    - display: mono, RGB, dithering;
    - color table manipulations;
    - elementary interactive operations: region outlining, statistics and
      histogram computation, etc;
    - elementary operations: histogramming, conversions, arithmetic, images
      and noise generation;
    - interpolation, rotation/scaling/translation;
    - preprocessing: background subtraction, filters, etc;
    - convolution/correlation with masks, image; padding;
    - edge extractions: various operators, peak-following;
    - region segmentation: various methods (being implemented);
    - transforms: Fourier, Haar, etc;
    - binary mathematical morphology, plus some grey-level morphology;
    - expert-system for novice users;
    - macros definitions, save and replay;
    - etc.

IMAGE FORMATS:

Own format: descriptor file + data file (binary, byte, int, float, complex;
mono or RGB). Conversions to various other formats.
Constructs:
    - iconic (pixel-based), with each image having its own parameter list;
    - vectors (histograms, look-up tables);
    - graphs (for regions; being implemented);
    - strings (for macros).

STATUS:

Version 0 was released in January 1988, version 1 in November 1988, and
version 2 will be released before the end of March 1989:
    - hosts: Sun 3/xxx, Sun 4/xxx;
    - OS: Sun OS 3.5, 4.0;
    - window system: SunView, View2 as soon as possible; X11 in preparation;
    - language: mostly C, plus some Fortran (SPIDER modules) and some
      Common-Lisp (expert-system);
    - approx. code size: source 1MB (25'000 lines), executable 1.5MB under
      SunView/OS3.5;
    - documentation: manuals (French), leaflets (English); an English manual
      is being prepared.

DISTRIBUTION POLICY:

Most of the software has been developed by us, and source code is available.
A few modules are licensed (SPIDER) and of course cannot be distributed;
these are, however, routines that all imaging groups have, such as median
filtering or the Fourier transform.
Interested persons can be sent the software by providing us with a 1/4"
cartridge. Under special request, it can be e-mailed. A typical disclaimer
notice will also be sent. In essence:
    - the software is our property, and the copyright notice must appear;
    - results obtained with Labo Image should reference it;
    - no responsibility is assumed;
    - no money can be made out of it;
    - no redistribution without our consent;
    - bugs will usually be corrected, since we use the software intensively;
    - modifications should be communicated to us, with (normally) allowance
      for redistribution.

CONTACTS:

Thierry Pun (general design) or Alain Jacot-Descombes (general design and
principal writer of the software): Computer Science Center, Univ. of Geneva,
12 rue du Lac, CH-1207 Geneva SWITZERLAND.
Tel. +(4122) 87 65 82 (T. Pun), 87 65 84 (A. Jacot-Descombes).
E-mail: pun@cgeuge51.bitnet, pun@cui.unige.ch, or jacot@cuisun.unige.ch.


------------------------------

Date: Mon, 13 Mar 89 10:33:47 PST
From: Mark Noworolski <noworol@eecg.toronto.edu>
Subject: Camcorder computer interface modification description


[ I have omitted the compressed binary files.  If anyone has a need for
  them, please let me know, and I will mail the uuencoded, compressed, 
  tar'ed (and feathered) file to you.
			phil...		]


Here, as promised, are the details of how to interface the
Fisher-Price Kiddie Camcorder to a computer - in my case an
IBM PC. The camera gives 120 pixels horizontally, 90 vertically, and
16 grey levels.

Several notes are in order:

1. It may sound like part of a thesis. Well it is.
2. Figures are not enclosed, I figure that it's reasonably easy
to figure things out without them anyway (provided you have a unit
near you). Figure that, figure, figure. figure.
3. The interface is built with the premise that if it's possible to do
it in software, it'll be done that way. Improvements are most
DEFINITELY possible (and probably welcome). Some of the parts of it
are probably redundant, but make me much happier about the likelihood
of frying something in my PC.
The actual interface schematic is enclosed in four formats:
ORCAD schematic file v3.11
HPGL file spit out by orcad 
Postscript file (untested) after running through an hpgl to ps converter.
epson format file.
4. The program is written for Turbo C 2.0. It uses the BGI routines and
is REALLY ugly. I mean that. It's one of those programs that you write in
2 minutes 'just to see if it works' and then never clean it up.
5. The following should only be attempted by people who have a vague idea
of what they're doing. Since you're interfacing to the IBM bus directly you
should be VERY careful.
6. The executable of the display program will be provided on request.
7. A question... Why does the damned program generate a parity error when 
starting up? It goes away after that.

Well, here's the goods.

          Reverse Engineering

          The Fisher  Price Kiddie  Camcorder was found to be a very useful
          image sensor priced  reasonably  (at  the  time  of  writing $180
          Canadian). What  follows is  a description of how to use the unit
          as an image sensor giving 16 levels of  grey scale  and requiring
          only  a  minimum  of  interface  circuitry.  Please note that all
          direction references (unless specified otherwise) are  related to
          those observed when actually using the unit as a camera.

          Disassembly of the unit is fairly simple; screws are located
          underneath the rubber pegs on the right-hand side of the unit
          (see figure  1).   These must  be pried off with a pointed object
          such as a screwdriver,  revealing  the  screws  underneath. These
          four  screws  must  then  be  removed  along  with the two in the
          handle. The unit can  then be  easily separated  into two halves,
          revealing the electronics and the cassette mechanism.

          Next  the   cassette  mechanism   must  be   separated  from  the
          electronics. This can be accomplished by separating the two while
          using the  pushbutton side as a pivot point (most wires are to be
          found on that side).  In order to simplify interfacing, the two
          wires leading to the motor should be disconnected.

          The switch  labelled SW1 should next be pushed in permanently (it
          is found directly behind  the  vision  sensing  element  near the
          shield); this can be accomplished by pulling the spring out from
          within it and then  manually pushing  it into  position. The unit
          can at  this point  be used  as a  vision sensor which plugs into
          your TV. In other words, it now works as normal in
          the record mode, except that no recording actually takes place
          since the motor doesn't turn.

          Towards the back of the board  there are  two SMD's.  They are 24
          pin devices  mounted side by side. Both of them have similar part
          numbers- FP519550. To the  left of  these there  are 7 resistors,
          the top  one is labelled R155. The bottom four are the 4 bit data
          stream (see figure 2), thus giving 16 levels of grey scale (a TTL
          high level  indicates a  corresponding high light intensity). The
          bottom resistor is the most significant bit and the fourth one up
          is the  least significant;  the right side of each being the data
          line itself. These lines are shown in figure 3 together  with the
          associated control signals. The data lines and associated control
          signals are at standard TTL levels of 5 volts.

          The synchronising signals can be found on the  left FP519550 SMD.
          Each is named as follows: 

          Current Frame- pin 5
               This signal  is a  square wave of period 130 msec. It can be
               used as a synchronising signal to indicate start of frame.

          Data valid- pin 17
               This signal is  active  low  for  approximately  250nsec and
               occurs 600  nsec before  the end  of a data valid period. In
               addition it goes low for a short period at the beginning and
               end of each frame.

          Horizontal Sync- pin 23
               This  signal  is  active  low for approximately 50usec every
               0.7msec. This can be used as the horizontal sync signal.

          Numerous additional control-related signals can be found on these
          two SMD's.  However the  three described  above are sufficient to
          enable interfacing to a computer with minimal circuitry.



          Interfacing to the IBM PC bus

          Emphasis in the  interface  planning  and  design  was  placed on
          simplicity as opposed to elegance, the reasoning being that this
          was still the initial prototype development phase of the project.
          In the final design, a microcontroller such as the
          8051 might be a good choice for image acquisition processing.

          The final circuit designed with this premise in mind is  shown in
          figure  1.  Although  simple  in  function  and design, a lack of
          reasonable care can damage  the PC  bus and  some I/O  cards (the
          author himself has managed to destroy his hard disk controller in
          a puff of smoke).

          The simple precaution of removing all PC cards possible will lead
          to a  safer environment in which to debug this circuit. Note that
          the DMA3 channel  is  used  to  do  the  interfacing.  Once again
          caution should  be stressed as some PC cards use the same channel
          for their functions and  it is  important that  this circuit does
          not conflict with them.

          Circuit Description

          The '74  latch is used to generate DMA requests by using the Data
          Valid line as a clock. The DMA acknowledge line clears  the flip-
          flop  thereby  setting  it  up  for  the  next data word. The DMA
          acknowledge time is  significantly  less  than  the  6usec period
          during which data is valid.

          The Data Valid line is also used as the clock for the '374 latch,
          with the data lines, Current Frame bit, and Hsync bit used as its
          inputs. The  output enable line is controlled by both the IOR and
          the DMAK3 lines, thereby assuring  that  no  bus  conflicts occur
          when  another  I/O  device  is  accessed (unless it uses the DMA3
          channel).

          Finally the  '06 open  collector buffer  is used  to minimize the
          risk of blocking other devices from using the DMA3 channel. This,
          however, is  probably  unnecessary  since  DMA3  service attempts
          would cause bus conflicts anyway. Nevertheless it made the author
          feel  much  more  comfortable  about  the   likelihood  of  other
          components in his computer vanishing in puffs of smoke.
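
To make the data path above concrete, here is a sketch of how the host
program might unpack a DMA'd buffer of latched bytes into a 120 x 90 frame
of 4-bit grey levels.  The bit positions assumed for the pixel value, the
Hsync bit and the Current Frame bit are guesses (the description does not
give the '374 wiring), and this is not the author's Turbo C / BGI display
program.

    /* Unpack one buffer of bytes captured over DMA channel 3 into a
     * 120 x 90 frame of 4-bit grey levels.  ASSUMED byte layout (adjust
     * the masks to the real wiring of the '374 latch): bits 0-3 carry
     * the pixel value (bit 3 = MSB), bit 4 is Horizontal Sync (active
     * low), bit 5 is Current Frame (unused in this sketch). */
    #define FP_WIDTH   120
    #define FP_HEIGHT   90

    #define PIX_MASK   0x0F        /* four data bits               */
    #define HSYNC_BIT  0x10        /* assumed position, active low */
    #define FRAME_BIT  0x20        /* assumed position, unused     */

    long unpack_frame(const unsigned char *raw, long nbytes,
                      unsigned char frame[FP_HEIGHT][FP_WIDTH])
    {
        long n, count = 0;
        int  x = 0, y = 0;

        for (n = 0; n < nbytes && y < FP_HEIGHT; n++) {
            if (!(raw[n] & HSYNC_BIT)) {        /* sync is active low    */
                if (x > 0) { y++; x = 0; }      /* start a new scan line */
                continue;
            }
            if (x < FP_WIDTH) {
                frame[y][x++] = raw[n] & PIX_MASK;   /* grey level 0..15 */
                count++;
            }
        }
        return count;
    }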





------------------------------

End of VISION-LIST
********************

Vision-List-Request@ADS.COM (Vision-List moderator Phil Kahn) (03/18/89)

Vision-List Digest	Fri Mar 17 10:04:20 PDT 89

 - Send submissions to Vision-List@ADS.COM
 - Send requests for list membership to Vision-List-Request@ADS.COM

Today's Topics:

 Re: VIEW (not VIEWS): Image Processing Toolkit for Suns?
 Visual System Characteristics
 Lectureship or Assistant Lectureship
 Re:  Call for image processing software

----------------------------------------------------------------------

Date: Wed, 15 Mar 89 03:32:37 EST
From: achhabra@ucesp1.ece.uc.edu (Atul K. Chhabra)
Subject: Re: VIEW (not VIEWS): Image Processing Toolkit for Suns?


(Following vision-list digest of Wed March 8, 89).

> I have misplaced the brochure. Could someone on the net email
> me the name, phone number, and the email address of the
> distributor of VIEWS. 

Having received no responses with the information I asked for, I
searched harder on my desk and located the brochure. It only
contains the snail mail address of the contact person:

	R.M. Rodrigues, L-153
	Lawrence Livermore National Laboratory
	P.O. Box 5504
	Livermore, CA 94550

Highlights of the VIEW (not VIEWS) software (quoted from the
brochure):

  o  Available at no charge
  o  User friendly interface
       -  Window-based
       -  Menu or command driven
  o  On-line HELP and user manual
  o  Multidimensional processing operations include:
       -  Image display and enhancement
       -  Pseudocolor operations
       -  Point and neighborhood operations
       -  Digital filtering
       -  Fourier transform domain operations
       -  Simulation operations
       -  Database management
       -  Sequence and macro processing
  o  Easily transportable
  o  Written in C (sources included)
  o  Handles multiple dimensions and data types
  o  Available on
       -  VAX (VMS, Ultrix)
       -  Sun (UNIX)

Atul

Atul Chhabra, Dept. of Electrical & Computer Engineering, ML 030,
University of Cincinnati, Cincinnati, OH 45221-0030.

Phone: (513)556-4766  INTERNET: achhabra@ucesp1.ece.uc.edu [129.137.33.114]
                                OR achhabra@uceng.uc.edu [129.137.33.1]

------------------------------

From: "John K. Tsotsos" <tsotsos@ai.toronto.edu>
Subject: Visual System Characteristics
Date: 	Wed, 15 Mar 89 15:45:45 EST


I am interested in collecting visual system characteristics from as 
many different species (both vertebrates and invertebrates) as possible. 
In particular, I would like to know for each type of animal:

    - the approximate number of cortical (and/or sub-cortical) neurons 
      devoted primarily to vision

    - whether or not `visual maps' have been discovered, and if so,
      how many, what is their size (in neurons), how are they organized, 
      and any other known characteristics. Positive statements about the
      absence of maps are also important.

    - average cortical fan-in and fan-out for visual neurons in terms of
      other neurons rather than total synapses


Please cite references as well.

Both physical and electronic mail addresses are given below.

I will gladly summarize and post the results on the net
if there is enough interest.


John K. Tsotsos
Department of Computer Science
10 King's College Road
University of Toronto
Toronto, Ontario, Canada M5S 1A4
416-978-3619

tsotsos@ai.toronto.edu

------------------------------

Date: Thu, 16 Mar 89 21:13:39 GMT
From: JM123%phoenix.cambridge.ac.uk@NSS.Cs.Ucl.AC.UK
Subject: Lectureship or Assistant Lectureship
	
	
                 University of Cambridge, UK
	
            Department of Experimental Psychology
	
      LECTURESHIP or ASSISTANT LECTURESHIP in Psychology
	
An appointment of a Lecturer or Assistant Lecturer in Experimental
Psychology will be made shortly under the New Academic Appointments
Scheme, subject to funding from the University Grants Committee.  The
starting date will be October 1, 1989, or as soon afterwards as possible.
The appointment will be made in the general area of cognitive psychology,
cognitive neuroscience or developmental psychology; preference may be
given to candidates working on computational modelling of cognitive
processes or on associative or neural networks.
	
The salary for a University Assistant Lecturer is UKL 10,460 p.a. rising
by four annual increments to UKL 12,760, and for a University Lecturer,
UKL 13,365 p.a., rising by eleven annual increments to UKL 20,615.
All Assistant Lecturers are considered for upgrading to Lecturer during
their appointment.
	
Further formal particulars may be obtained from Dr. D. Franks, Secretary
to the Appointments Committee for the Faculty of Biology B, 19 Trumpington
St., Cambridge CB2 1QA, to whom applications should be sent by 17 April, 1989.
	
Informal enquiries may be directed to Professor N. J. Mackintosh (223-333551)
Department of Experimental Psychology, Downing St., Cambridge, CB2 3EB,
United Kingdom; or, if urgent, to jm123@uk.ac.cam.phx.
	

------------------------------

Date: Thu, 16 Mar 89 17:08:39 EST
From: msl@vml3.psych.nyu.edu (Michael Landy)
Subject: Re:  Call for image processing software

     The following is in response to your request for infor-
mation on image processing software.

     HIPS is a software package for  image  processing  that
runs  under  the UNIX operating system.  HIPS is modular and
flexible,  it  provides  automatic  documentation   of   its
actions,  and  is  almost  entirely  independent  of special
equipment.  It handles sequences of images (movies) in  pre-
cisely the same manner as single frames.  Programs have been
developed for simple image transformations, filtering,  con-
volution,  Fourier  and  other  transform  processing,  edge
detection  and  line  drawing  manipulation,  digital  image
compression  and  transmission  methods,  noise  generation,
image pyramids, and image statistics computation.  Over  150
such  image transformation programs have been developed.  As
a result, almost any image processing task can be  performed
quickly  and  conveniently.  Additionally, HIPS allows users
to easily integrate their own custom routines.

     HIPS features images that are  self-documenting.   Each
image  stored  in  the  system  contains  a  history  of the
transformations that have been applied to that image.   HIPS
includes  a  small  set of subroutines which primarily deals
with a standardized  image  sequence  header,  and  a  large
library  of  image  transformation tools in the form of UNIX
``filters'' written in `C'.  As a result it runs on any Unix
workstation   (users   run   it   on   equipment  from  Sun,
Vax/Microvax, Masscomp, NCR, Silicon Graphics/Iris,  Apollo,
etc.  etc.).   HIPS has proven itself a highly flexible sys-
tem, both as an interactive  research  tool,  and  for  more
production-oriented  tasks.   It  is  both  easy to use, and
quickly adapted and extended to new uses.
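
The sketch below illustrates the general ``UNIX filter with a
self-documenting header'' idea described above; it does NOT use the real
HIPS header format or library calls.  Purely for illustration, a frame is
assumed to be an ASCII line ``width height'', a one-line history, and then
raw 8-bit pixels; the filter copies the header, appends its own name to the
history, inverts the pixels and writes the result to stdout, so such
filters can be chained with ordinary pipes.

    /* Toy self-documenting image filter (NOT the HIPS format): read a
     * hypothetical "width height" line, a history line and raw pixels
     * from stdin, invert the pixels, append this program's name to the
     * history, and write everything to stdout. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    int main(int argc, char **argv)
    {
        int  w, h, i, n;
        char history[1024];
        unsigned char *pix;

        (void) argc;                            /* unused */
        if (scanf("%d %d\n", &w, &h) != 2) {
            fprintf(stderr, "%s: bad header\n", argv[0]);
            return 1;
        }
        if (!fgets(history, sizeof history, stdin))
            history[0] = '\0';
        history[strcspn(history, "\n")] = '\0';

        n = w * h;
        pix = malloc(n);
        if (!pix || fread(pix, 1, n, stdin) != (size_t)n) {
            fprintf(stderr, "%s: short read\n", argv[0]);
            return 1;
        }
        for (i = 0; i < n; i++)
            pix[i] = 255 - pix[i];              /* the actual "work" */

        /* copy the header, extending the processing history */
        printf("%d %d\n%s | %s\n", w, h, history, argv[0]);
        fwrite(pix, 1, n, stdout);
        return 0;
    }

A real HIPS filter would of course use the package's own header-handling
subroutines rather than the ad hoc parsing shown here.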

     HIPS is distributed by SharpImage  Software,  P.O.  Box
373,  Prince  Street Station, New York, NY   10012-0007.  To
obtain more information, write us  or  call  Michael  Landy,
(212)  998-7857  (landy@nyu.nyu.edu).   HIPS  consists  of a
basic system and a number of additional modules (for fancier
Sun  display, additional image tools, etc.).  The basic sys-
tem  costs  $3,000,  and  is  available  at  a  considerable
discount   to  qualified  educational,  non-profit,  and  US
government users.  The  base  price  is  for  all  computing
equipment within a particular academic department or commercial
laboratory.  The software comes complete with source,
libraries,  a  library  of convolution masks, documentation,
and manual pages.  It also includes drivers for the Grinnell
and  Adage  image  processors,  display  drivers for the Sun
Microsystems consoles under SunView, gfx,  and  straight  to
the  console.  Users have contributed drivers for the Matrox
VIP-1024, ITI IP-512,  Macintosh  II,  X  windowing  system,
Iris, and Lexidata.  It is a simple matter to interface HIPS
with other framestores, and we can put interested  users  in
touch  with users who have interfaced HIPS with the Arlunya,
and Datacube Max-Video.  Our Hipsaddon product  includes  an
interface  to  the CRS-4000.  HIPS can be easily adapted for
other image display devices because 98% of HIPS  is  machine
independent.  It  has  been  described  in  Computer Vision,
Graphics, and Image Processing (Vol.   25,  1984,  pp.  331-
347), and in Behavior Research Methods, Instrumentation, and
Computers (Vol. 16, 1984, pp. 199-216).


------------------------------

End of VISION-LIST
********************

Vision-List-Request@ADS.COM (Vision-List moderator Phil Kahn) (03/28/89)

Vision-List Digest	Mon Mar 27 16:00:35 PDT 89

 - Send submissions to Vision-List@ADS.COM
 - Send requests for list membership to Vision-List-Request@ADS.COM

Today's Topics:

 Mobile Robot Information Request
 image processing machine learning
 ARTIFICIAL INTELLIGENCE AND COMMUNICATING PROCESS ARCHITECTURE

----------------------------------------------------------------------

Date: 22 Mar 89 18:01:36 GMT
From: yamauchi@cs.rochester.edu (Brian Yamauchi)
Subject: Mobile Robot Information Request
Keywords: mobile robot vision robotics
Organization: U of Rochester, CS Dept, Rochester, NY


	Our Robotics / Computer Vision research group is looking into
the possibility of getting a mobile robot for our lab.  We are looking
for recommendations from mobile robot researchers regarding what types
of robots have worked well for them.

	In particular, we are considering the Denning robots and (less
likely) the Heath Hero robots.  We have limited access to mechanical /
electrical engineering facilities, so we are thinking in terms of a
commercially available robot as opposed to one that would have to be
built from scratch.

	We envision mounting CCD video cameras on the robot and having
it communicate through a MaxVideo image processor to a Sun 3/260 that
would be controlling the robot.  Both cables and radio/TV links are
being considered for communication, though (depending on the amount of
hardware required) the radio/TV links would be more desirable.

	My particular research interests are in the area of
behavior-based robotics, so I would be particularly interested in
hearing from people who are doing mobile robot work in this area.

	Also, does anyone have the address / phone number of Denning
Mobile Robots?  Are they on the net?

				Thanks in advance,

[ I have seen some literature on the Vectrobot by Real World Interface,
  (603)654-6334.  Brooks has used it in some of his work.  Retails for
  $3400 for the base unit (warning: this was the price on 1/88). Has
  synchronous drive.
	Any others anyone knows of which are applicable for vision
  work?
				phil...	]
  

Brian Yamauchi				University of Rochester
yamauchi@cs.rochester.edu		Computer Science Department


------------------------------


Date: 25 Mar 89 22:35:49 GMT
From: nealiphc@blake.acs.washington.edu (Phillip Neal)
Subject: image processing machine learning
Keywords: image processing machine learning classification
Organization: Univ of Washington, Seattle

 I have been trying to do image processing on fish otoliths.
Essentially, counting the rings on otoliths to determine the age of
the fish. The age is then used for management purposes.
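
One naive way to attack such ring counting (offered only as an illustration,
not as a method from the literature) is to sample the grey levels along a
radius from the otolith core outward, smooth the resulting 1-D profile, and
count local maxima.  The sketch below does only the smoothing and peak
counting; extracting the radial profile from the image, and choosing a
sensible prominence margin for noisy real data, are the hard parts and are
left open.

    /* Smooth a 1-D radial intensity profile with a 3-point box filter,
     * then count local maxima that rise at least `margin' grey levels
     * above both neighbours.  The margin needs tuning on real data. */
    void smooth3(const double *in, double *out, int len)
    {
        int i;

        out[0] = in[0];
        out[len - 1] = in[len - 1];
        for (i = 1; i < len - 1; i++)
            out[i] = (in[i - 1] + in[i] + in[i + 1]) / 3.0;
    }

    int count_rings(const double *profile, int len, double margin)
    {
        int i, rings = 0;

        for (i = 1; i < len - 1; i++)
            if (profile[i] > profile[i - 1] + margin &&
                profile[i] > profile[i + 1] + margin)
                rings++;
        return rings;
    }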

However, when I look through the literature, most of the articles seem
to be on triangles, or f-15's/Mig-23's or wrenches or other images
without a lot of noise in them. Furthermore, a lot of the articles seem
to be about theoretical or algorithmic type subjects with very little
discussion of classification results when applied to real-world data.
Does anybody else sense this? Or do I have the wrong paradigm/weltschmerz?

Is anybody else doing image/learning work on biological images that
aren't kidneys or acres of corn ?

My name is Phil Neal. I am new on the net. I am at
nealiphc@blake.washington.edu in the USA.


------------------------------

Date: Thu, 23 Mar 89 20:56:37 GMT
From: Steven Zenith <zenith%inmos.co.uk@NSS.Cs.Ucl.AC.UK>
Subject: ARTIFICIAL INTELLIGENCE AND COMMUNICATING PROCESS ARCHITECTURE

                         International conference
      ARTIFICIAL INTELLIGENCE AND COMMUNICATING PROCESS ARCHITECTURE
          17th/18th of July 1989, at Imperial College, London UK.
                              Keynote speaker
                             Prof. Iann Barron

                             Invited speakers
          Prof. Igor Aleksander   Neural Computing Architectures.
                Prof. Colin Besant   Programming of Robots.
         Prof. David Gelernter   Information Management in Linda.
            Dr. Atsuhiro Goto   The Parallel Inference Machine.
           Prof. Tosiyasu Kunii   Primitive Image Understanding.
                  Dr. Rajiv Trehan   Parallel AI Systems.
        Prof. Alan Robinson   Functional and Relational reasoning.
          Prof. Les Valiant   Bulk-synchronous Parallel Computing.

                      * Parallel Processing and AI *

	Parallel Processing and Artificial Intelligence are two key themes
which have risen to the fore of technology in the past decade. This
international conference brings together the two communities.
	Communicating Process Architecture is one of the most successful
models for exploiting the potential power of parallel processing machines.
Artificial Intelligence is perhaps the most challenging application for
such machines. This conference explores the interaction between these two
technologies.
	The carefully selected programme of invited talks and submitted papers
brings together the very best researchers currently working in the field. 

                            * Topics include *
             Robotics   Neural Networks   Image Understanding
    Speech Recognition   Implementation of Logic Programming Languages
      Information management   The Japanese Fifth Generation Project
                           Transputers and Occam


[ Detailed conference program omitted.  Please contact the conference
  organizers for more information. 
		phil...	]

                              * Proceedings *

The edited proceedings includes invited and submitted papers and is
intended for publication in a new book series on Communicating Process
Architecture published by John Wiley and Sons.

                  * The conference organising committee *
 
Organising committee, programme editors and conference chairmen: 

              Dr. Mike Reeve   Imperial College, London, UK. 
           Steven Ericsson Zenith   INMOS Limited, Bristol, UK.

The programme and organising committee: 

J.T Amenyo   Ctr. Telecoms Research, Columbia University. 
Jean-Jacques Codani   INRIA, France. 
Dr. Atsuhiro Goto   Institute for New Generation Computer Technology
(ICOT), Japan.
Dr.med.Ulrich Jobst   Ostertal - Klinik fur Neurologie und Klinische
Neurophysiologie 
Dr. Peter Kacsuk   Multilogic Computing, Budapest, Hungary.
Pasi Koikkalainen   Lappeenranta University of Technology, Finland. 
Prof. T. L. Kunii   The University of Tokyo, Japan.
Dr. Heather Liddell   Queen Mary College, London.  
Prof. Y. Paker   Polytechnic of Central London  
Prof. L. F. Pau   Technical University of Denmark.
Prof. Bernd Radig   Institut Fur Informatik, Munchen.
Prof. Alan Robinson   Syracuse University, USA.
Kai Ming Shea   University of Hong Kong. 
Prof. David Warren   Bristol University, UK.
Chung Zhang   Brighton Polytechnic. UK. 

                             * Registration *
	Registration should be received by June 16th. Late registration will
incur a 20 pound surcharge. All enquiries should be addressed to the
conference secretary:
				The Conference Secretary, 
				OUG AI Conferences, 
				INMOS Limited, 
				1000 Aztec West, 
				Almondsbury, 
				Bristol BS12 4SQ, 
				UNITED KINGDOM. 
				Tel. 0454 616616 x503 
				email: zenith@inmos.co.uk 

This conference is underwritten by INMOS Limited, to whom the organising
committee wish to extend their thanks. 


------------------------------

End of VISION-LIST
********************

Vision-List-Request@ADS.COM (Vision-List moderator Phil Kahn) (04/01/89)

Vision-List Digest	Fri Mar 31 08:49:55 PDT 89

 - Send submissions to Vision-List@ADS.COM
 - Send requests for list membership to Vision-List-Request@ADS.COM

Today's Topics:

 Vision List site network reconfiguration warning...
 texture data: synthetic and real.
 AISB89 Conference and Tutorials, Sussex University, April 18-21

----------------------------------------------------------------------

Date: Fri, 31 Mar 89 08:59:59 PST
From: Vision-List-Request <vision-list-request@ads.com>
To: vision-list@ads.com
Subject: Vision List site network reconfiguration warning...


Next week, the Vision List host site (Advanced Decision Systems) will
be shifting from the ARPANET (officially dead on April 1st) to BARNET
([San Francisco] Bay Area Research NETwork).  All addresses remain the
same, though there may be a short interruption of service.  If you have
any problems, please let me know ASAP at Vision-List-Request@ADS.COM .

Theoretically, the changeover will not disturb service.  (Sure, and 
Bears use outhouses...) 

	phil...

----------------------------------------------------------------------

Date: 29 Mar 89 15:59:00 WET
From: JOHN ILLINGWORTH <illing%ee.surrey.ac.uk@NSS.Cs.Ucl.AC.UK>
Subject: texture data: synthetic and real.

greetings all,

               I am interested in texture segmentation and would like to 
        test out some algorithms. For these purposes I would like to know
        if anyone has any source programs for generating different types
        of textured regions. Ideally I would like a paint-like program so
        that I could construct arbitrarily shaped regions whose internal
        texture parameters (local density, orientation etc) are well 
        defined. 
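
As a sketch of the sort of generator being asked for, the routine below
fills an 8-bit image with an oriented sinusoidal grating whose orientation
(in radians) and spatial frequency (cycles per pixel, i.e. density) are
parameters.  It fills the whole image rather than an arbitrary painted
region, and the function name and parameters are illustrative only.

    /* Fill an 8-bit image with a sinusoidal grating of the given
     * orientation (radians) and spatial frequency (cycles per pixel). */
    #include <math.h>

    #ifndef M_PI
    #define M_PI 3.14159265358979323846
    #endif

    void oriented_grating(unsigned char *img, int width, int height,
                          double orientation, double frequency)
    {
        double c = cos(orientation), s = sin(orientation);
        int x, y;

        for (y = 0; y < height; y++)
            for (x = 0; x < width; x++) {
                /* project (x,y) onto the grating direction */
                double t = x * c + y * s;
                double v = 0.5 + 0.5 * sin(2.0 * M_PI * frequency * t);
                img[y * width + x] = (unsigned char)(v * 255.0);
            }
    }

Replacing the sinusoid with, say, randomly placed oriented line segments
would give control over local density and orientation in the same way.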

        I know of the Brodatz album of textures but does a standard
        digitised database version of this exist or does everyone have 
        their own version taken by pointing their own camera at the
        pictures in the album??

        I would appreciate any advice on these matters, many thanks

        John Illingworth
        Dept of Electronics
        University of Surrey
        Guildford. U.K.


------------------------------

Date: 27 Mar 89 14:46:36 GMT
From: aarons@uk.ac.sussex.syma (Aaron Sloman)
Subject: AISB89 Conference and Tutorials, Sussex University, April 18-21
Keywords: Artificial Intelligence, Cognitive Science, conference, tutorials
Organization: School of Cognitive & Computing Sciences, Sussex Univ. UK

              AISB89 CONFERENCE ON ARTIFICIAL INTELLIGENCE

                         University of Sussex,
                                BRIGHTON

             Tuesday 18th of April - Friday 21st April 1989

                            INVITED SPEAKERS

J.F. Allen (University of Rochester)    H. Barrow (University of Sussex)
V. Lifschitz (Stanford University)      Y. Wilks (New Mexico State Uni.)

                                SESSIONS

Papers will be presented on:

    LOGIC PROGRAMMING                       COGNITIVE MODELLING
    CONSTRAINT REASONING                    NONSTANDARD INFERENCE
    ROBOTICS, NEURAL NETWORKS & VISION      PLANNING

                               TUTORIALS

Tutorials will be held on Tuesday 18th April 1989:

Neural Networks                 Prof. H. Barrow, University of Sussex
Prolog                          Dr. C. Mellish, University of Edinburgh
Computer Vision                 Dr. D. Hogg, University of Sussex
Knowledge Elicitation           Dr. N. Shadbolt, University of Nottingham
Object-Oriented Programming     Mr. T. Simons, University of Sheffield

                                  FEES

TUTORIALS

AISB Members             120.00 pounds sterling
Ordinary Delegates       180.00
Students (Full-time)      60.00

TECHNICAL PROGRAMME

AISB Member             115.00 pounds sterling
Ordinary Delegates      150.00
Students (Full-time)     90.00

NB Fees are not inclusive of lunches or accommodation.

For further details, Programme and Registration forms, contact:

Judith Dennison
AISB 89 Conference Office
School of Cognitive Sciences
University of Sussex
Falmer, Brighton BN1 9QN UK

Tel: (+44) (0) 273 678379
Email: JANET    judithd@uk.ac.sussex.cogs
       INTERNET judithd%uk.ac.sussex.cogs@nss.cs.ucl.ac.uk
       UUCP     ...mcvax!ukc!cogs!judithd



------------------------------

End of VISION-LIST
********************

Vision-List-Request@ADS.COM (Vision-List moderator Phil Kahn) (04/08/89)

Vision-List Digest	Fri Apr 07 10:09:27 PDT 89

 - Send submissions to Vision-List@ADS.COM
 - Send requests for list membership to Vision-List-Request@ADS.COM

Today's Topics:

 OPTICOMP computer-mail list on Optical Computing and Holography
 resampling filter references wanted

----------------------------------------------------------------------

Date:         Sun, 02 Apr 89 14:22:41 IST
From: Shelly Glaser  011 972 3 545 0060 <GLAS%TAUNIVM.BITNET@CUNYVM.CUNY.EDU>
Subject:      OPTICOMP computer-mail list on Optical Computing and Holography

    'OPTICOMP' COMPUTER-MAIL LIST ON OPTICAL COMPUTING AND HOLOGRAPHY
    *---------------------------------------------------------------*

I would like to announce the opening  of a new computer mailbox, or list,
named OPTICOMP.   This list  will be dedicated  to Optical  Computing and
Holography.  Specifically, subjects such as

(*) Holographic  displays,  including  true holographic  3-D  display  of
    computer generated images,

(*) Optical (both  analog and digital) information  processing, including
    pattern   recognition,   algorithms  for   rotation/scale/perspective
    invariant  recognition through  correlation,  realization of  digital
    processing  machines with  optics, communication-intensive  computing
    architectures etc.

(*) Optical  realizations  of  neural  networks  and  associative  memory
    systems.

Those of you who are interested in getting this newsletter are invited to
write to me at one of the E-mail addresses below.

                             Yours,
                                                            Shelly Glaser

  Snail-mail:
    Department of Electronic, Communication, Control and Computer Systems
                                                   Faculty of Engineering
                                                      Tel-Aviv University
                                                         Tel-Aviv, Israel

                                                TELEPHONE: 972 3 545-0060
                                                        FAX: 972 3 419513
                                    Computer network: GLAS@TAUNIVM.BITNET
                                                  or:  glas@vm1.tau.ac.il
                                  or: glas%taunivm.bitnet@cunyvm.cuny.edu
Acknowledge-To: <GLAS@TAUNIVM>

------------------------------

Date: Thu, 6 Apr 89 17:35:30 PDT
From: ph%miro.Berkeley.EDU@berkeley.edu (Paul Heckbert)
Subject: resampling filter references wanted


I'm doing some research on filters for ideal resampling of one discrete
image into another according to an arbitrary 2-D distortion.
I map from a source image with domain (u,v) to a destination image with
domain (x,y) according to user-supplied functions x=x(u,v), y=y(u,v).
The mapping is arbitrary in general, but affine is an important (easy) case.
I want to minimize aliasing due to resampling.  Quality filtering for highly
distorted mappings is vital (my application is texture mapping for computer
graphics).
This topic is also called "interpolation" and "multirate DSP" by some.
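
For readers less familiar with the setup, the sketch below shows the bare
structure of inverse-mapped resampling for the affine case: each destination
pixel (x,y) is mapped back into the source with u = a*x + b*y + c,
v = d*x + e*y + f, and the source is sampled with bilinear interpolation.
Plain bilinear sampling does nothing about the minification aliasing the
poster wants to remove; it only marks where a better, space-variant filter
would go.  All names are illustrative.

    /* Bilinear lookup in an 8-bit source image; points outside the
     * image are returned as black. */
    static double bilinear(const unsigned char *src, int sw, int sh,
                           double u, double v)
    {
        int iu, iv;
        double fu, fv;

        if (u < 0.0 || v < 0.0 || u >= sw - 1 || v >= sh - 1)
            return 0.0;
        iu = (int)u;  fu = u - iu;
        iv = (int)v;  fv = v - iv;
        return (1-fu)*(1-fv) * src[ iv    * sw + iu    ]
             +    fu *(1-fv) * src[ iv    * sw + iu + 1]
             + (1-fu)*   fv  * src[(iv+1) * sw + iu    ]
             +    fu *   fv  * src[(iv+1) * sw + iu + 1];
    }

    /* Resample src into dst under the affine inverse map
     * (x,y) -> (u,v) = (a*x + b*y + c, d*x + e*y + f). */
    void affine_resample(const unsigned char *src, int sw, int sh,
                         unsigned char *dst, int dw, int dh,
                         double a, double b, double c,
                         double d, double e, double f)
    {
        int x, y;

        for (y = 0; y < dh; y++)
            for (x = 0; x < dw; x++) {
                double u = a * x + b * y + c;
                double v = d * x + e * y + f;
                dst[y * dw + x] =
                    (unsigned char)(bilinear(src, sw, sh, u, v) + 0.5);
            }
    }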

Any recommendations of books/journal articles would be most appreciated!
Please email them to me, as I don't normally read this list.

thanks,

Paul Heckbert, CS grad student
508-7 Evans Hall, UC Berkeley		ARPA: ph@miro.berkeley.edu
Berkeley, CA 94720			UUCP: ucbvax!miro.berkeley.edu!ph


------------------------------

End of VISION-LIST
********************

Vision-List-Request@ADS.COM (Vision-List moderator Phil Kahn) (04/28/89)

Vision-List Digest	Thu Apr 27 18:04:48 PDT 89

 - Send submissions to Vision-List@ADS.COM
 - Send requests for list membership to Vision-List-Request@ADS.COM

Today's Topics:

 Volume rendering references
 Conference on Visualization in Biomedical Computing

----------------------------------------------------------------------

Date: Thu, 27 Apr 89 11:14 MET
From: "Victor Roos, TFDL-ECIT Wageningen" <"AGRT06::ROOS"%HWALHW50.BITNET@CUNYVM.CUNY.EDU>
Subject: Volume rendering references

After reading Karen Frenkel's article in Communications of the ACM,
volume 32, #4, April 1989, we see possibilities for volume rendering in
the visualization of CT images of soil samples. Unfortunately Karen's
article has no references. We are interested in any leads and references
to articles, algorithms, etc. that explain this technique.
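
[ For readers unfamiliar with the technique being asked about, a very
  small illustrative sketch in Python/NumPy: front-to-back
  emission/absorption compositing of a 3-D scalar volume along one axis
  (axis-aligned rays only).  The opacity mapping is an arbitrary choice
  and the sketch is not drawn from Frenkel's article. ]

import numpy as np

def render_volume(vol, opacity_scale=0.05):
    # vol: densities in [0, 1] with shape (depth, height, width).
    image = np.zeros(vol.shape[1:])
    transparency = np.ones(vol.shape[1:])
    for slab in vol:                                # march front to back
        alpha = np.clip(slab * opacity_scale, 0.0, 1.0)
        image += transparency * alpha * slab        # density doubles as "colour"
        transparency *= 1.0 - alpha
    return image

# e.g.  picture = render_volume(ct_volume_scaled_to_0_1)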

Victor Roos
min. of Agriculture and Fisheries
TFDL/ECIT
POB 256
Wageningen
the Netherlands


------------------------------

Date: 27 Apr 89 12:51:52 GMT
From: arkin@gatech.edu (Ron C. Arkin)
Subject: Conference on Visualization in Biomedical Computing
Organization: School of Information and Computer Science, Georgia Tech, Atlanta


                    First Announcement and Call for Papers

			   THE FIRST CONFERENCE ON

		    VISUALIZATION IN BIOMEDICAL COMPUTING

			 Ritz-Carlton Buckhead Hotel
		              Atlanta, Georgia 

			      May 22-25, 1990


Sponsored by/in cooperation with:
	National Science Foundation
	IEEE Computer Society
	American Association of Physicists in Medicine
	Georgia Institute of Technology
	Emory University School of Medicine
	Emory-Georgia Tech Biomedical Technology Research Center


Theme:
Visualization in scientific and engineering research is a rapidly emerging
discipline aimed at developing tools and approaches to facilitate the 
interpretation of, and interaction with, large amounts of data, thereby
allowing researchers to "see" and comprehend, in a new and deeper manner,
the systems they are studying.  This conference, the first of its kind, is 
aimed at bringing together researchers working in various aspects of 
visualization in order to present and discuss approaches, tools, and 
techniques associated with visualization science in general, and visualization
in biomedical computing in particular.

Topics:
	o Theories and approaches
		Models of visualization
		Representation of multidimensional parameters
		Psychological aspects of visualization
			Perception
			Cognition
			Human/machine interface
		Artificial intelligence in visualization
		Computer vision and image processing
		Graphics and display
		Visual communications and televisualization
		Courses and training in visualization

	o Applications, techniques, tools
		Visualization in
			Modeling and simulation of biomedical processes
			Diagnostic radiology
			Molecular biology and genetics
			Neurophysiology
			Prosthetics development
			Radiation treatment planning
			Education

	o Other related topics

Technical Committee Co-chairs:
	Dr. Edward Catmull, PIXAR Corp.
	Dr. Gabor Herman, University of Pennsylvania.

Organizing Committee:
	Dr. Norberto Ezquerra, Office of Interdisciplinary Programs,
	Georgia Institute of Technology (Chair).
	Dr. Ernest Garcia, Department of Radiology, Emory University.
	Dr. Ronald C. Arkin, School of Information and Computer Science,
	Georgia Institute of Technology.

Call for papers, panels and tutorials (Deadlines):

Paper Abstracts due: August 31, 1989 (Submit 6 copies of 800 word abstract)
Panel and tutorial proposals due: July 31, 1989
Acceptance notification to authors: November 1, 1989
Full papers due: February 1, 1990
Conference: May 22-25, 1990

Abstract Submission/further information:
Dr. Norberto Ezquerra
Office of Interdisciplinary Programs
Georgia Tech, Atlanta, GA 30332
Phone: (404)-894-3964.

Ronald C. Arkin
School of Information & Computer Science, Georgia Tech, Atlanta GA 30332
CSNet:  arkin @ GATech	ARPA:	arkin%GATech.CSNet @ CSNet-Relay.ARPA
uucp:	...!{akgua,allegra,hplabs,ihnp4,linus,seismo,ulysses}!gatech!arkin


------------------------------

End of VISION-LIST
********************

Vision-List-Request@ADS.COM (Vision-List moderator Phil Kahn) (05/05/89)

Vision-List Digest	Thu May 04 18:00:05 PDT 89

 - Send submissions to Vision-List@ADS.COM
 - Send requests for list membership to Vision-List-Request@ADS.COM

Today's Topics:

 Discontinuity detection in stereo images/References?
 Workshop on MRF models in Computer Vision
 Update on CVPR

------------------------------

Date: 1 May 89 17:42:40 GMT
From: doorn@ai.etl.army.mil (Bradley Doorn)
Subject: Discontinuity detection in stereo images/References?
Keywords: stereo,automatic correlation,discontinuities,artificial intelligence
Organization: USAETL, Fort Belvoir, Virginia


I am interested in references and people who are working on the problem of
automatically detecting terrain discontinuities from aerial stereo imagery
prior to obtaining a complete elevation model.  The question is 'Can
information obtained from correlation 'attempts' be used to enhance automatic
stereo matching and image interpretation?'  One of the baseline problems is the
distinction between textural edges and geometric edges.
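
[ To make the "correlation attempt" idea concrete, a minimal
  Python/NumPy sketch of window-based matching along one scanline that
  keeps the winning normalised-correlation score alongside each
  disparity; a weak or ambiguous score is the sort of by-product one
  might exploit near discontinuities.  The window size and disparity
  range are arbitrary placeholders. ]

import numpy as np

def scanline_matches(left, right, row, half=3, max_disp=32):
    # row must be at least `half` pixels from the top and bottom edges.
    h, w = left.shape
    disps = np.zeros(w, dtype=int)
    scores = np.zeros(w)
    for x in range(half + max_disp, w - half):
        patch = left[row - half:row + half + 1, x - half:x + half + 1].ravel()
        patch = (patch - patch.mean()) / (patch.std() + 1e-6)
        best, best_d = -np.inf, 0
        for d in range(max_disp):
            cand = right[row - half:row + half + 1,
                         x - d - half:x - d + half + 1].ravel()
            cand = (cand - cand.mean()) / (cand.std() + 1e-6)
            score = float(patch @ cand) / patch.size   # normalised correlation
            if score > best:
                best, best_d = score, d
        disps[x], scores[x] = best_d, best             # disparity *and* its score
    return disps, scores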

Please send the references by e-mail and I will compile and post my findings. 



Bradley D. Doorn			doorn@ai.etl.army.mil
RI, Engineer Topographic Laboratories
Fort Belvoir, VA 22060

----------------------------------------------------------------------

Date: 1 May 89 16:51:53 GMT
From: anand%brand.usc.edu@usc.edu (Anand Rangarajan)
Subject: Workshop on MRF models in Computer Vision
Organization: University of Southern California, Los Angeles, CA


	Workshop on Theory and Applications of 
		Markov Random Fields 
			for 
	Image Processing, Analysis and Computer Vision
	      	June 9, 1989
	Sheraton Grand Hotel on Harbor Island
		San Diego, California


This one day workshop sponsored by the Information Robotics and Intelligent 
Systems Program of NSF will explore the strengths and limitations of the 
Theory and Applications of Markov Random Fields for Image Processing, Analysis
and Computer Vision.  The preliminary program given below consists of several 
invited lectures and a panel session.   


			Workshop Co-Chairs


Professor Rama Chellappa 		 	Professor Anil K. Jain
Department of Electrical Engineering 	       	Department of Computer Science
University of Southern California	        Michigan State University 
Los Angeles, CA 90089 			        East Lansing, MI 48824
(213) 743-8559					(517) 353-5150
rama@brand.usc.edu				jain@cps.msu.edu


			Preliminary Program 


8:00 a.m. 		Registration
8:30 a.m.		Opening Remarks
8:45-10:15 a.m.
J. Besag		Introduction to MRF and Its Applications
R.L. Kashyap		Robust Image Models for Restoration and Edge Detection
J.W. Woods		Simulated Annealing for Compound GMRF Models
10:15-10:45 a.m.	Coffee Break 
10:45-12:15 p.m.
D.B. Cooper		On the Use of MRF in 3-D Surface Estimation
H. Derin		MRF-Based Segmentation and Its Limitations
S. Geman		An Application of MRF to Medical Imaging
12:15-1:30 p.m.	Lunch
1:30-3:00 p.m.

C. Koch 		Analog Networks for Early Vision
T. Poggio and		Representation of Discontinuities and Integration of
D. Weinshall		Vision Modules
3:00-3:30 p.m. 	Coffee Break 
3:30-5:00 p.m. 	Panel Session 
			Strengths and Limitations of MRF Models 
Panelists: 		J. Besag, R.M. Haralick, A.K. Jain (Moderator)
			L.N. Kanal, R.L. Kashyap. 

			Registration Information

A modest fee of $25.00 will be charged to cover incidental expenses.   Please 
make check payable to MRF '89 and send to:

		Miss Gloria Bullock 
		University of Southern California
		Signal and Image Processing Institute 
		Powell Hall 306, Mail Code 0272 
		University Park 
		Los Angeles, CA 90089

Anand Rangarajan
anand@hotspur.usc.edu
anand@brand.usc.edu
``A spirit with a vision

------------------------------

Date: 1 May 89 17:57:04 GMT
From: anand%brand.usc.edu@usc.edu (Anand Rangarajan)
Subject: Update on CVPR
Organization: University of Southern California, Los Angeles, CA

              IEEE Computer Society Conference
                            on
          COMPUTER VISION AND PATTERN RECOGNITION

                    Sheraton Grand Hotel
                   San Diego, California
                       June 4-8, 1989


                       General Chair

               Professor Rama Chellappa
               Department of EE-Systems
               University of Southern California
               Los Angeles, California  90089-0272


                     Program Co-Chairs

Professor Worthy Martin          Professor John Kender
Dept. of Computer Science        Dept. of Computer Science
Thornton Hall                    Columbia University
University of Virginia           New York, New York  10027
Charlottesville, Virginia 22903


		     Tutorials Chair

		Professor Keith Price
		Department of EE-Systems
		University of Southern California
		Los Angeles, CA 90089-0272

		Local Arrangements Chair
		
		Professor Shankar Chatterjee
		Department of Electrical and Computer Engg.
		University of California at San Diego
		La Jolla, CA 92093		   


                     Program Committee

Chris Brown           Avi Kak                Theo Pavlidis
Allen Hansen          Rangaswamy Kashyap     Alex Pentland
Robert Haralick       Joseph Kearney         Azriel Rosenfeld
Ellen Hildreth        Daryl Lawton           Roger Tsai
Anil Jain             Martin Levine          John Tsotsos
Ramesh Jain           David Lowe             John Webb
John Jarvis           Gerard Medioni


          General Conference Sessions will be held
                       June 6-8, 1989

                  Conference Registration
                  (for CVPR and Tutorials)

Conference Department
CVPR
IEEE Computer Society
1730 Massachusetts Ave
Washington, D.C. 20036-1903
(202) 371 1013
Fax Number:(202) 728 9614

                     Fees, before May 15

CVPR           - $200 (IEEE Members, includes proceedings and banquet)
	       - $250 (Nonmembers)
               - $100 (Students, includes proceedings and banquet)
Tutorials      - $100 per session (IEEE Members and Students)
	       - $125 per session (Nonmembers)
			
		     Fees, after May 15

CVPR 		- $240 (IEEE Members, includes proceedings and banquet)
		- $300 (Nonmembers)
	        - $105 (Students, including proceedings and banquet)

Tutorials 	- $125 per session (IEEE Members and students)
		- $150 per session (Nonmembers)

                     Hotel Reservations

RESERVATIONS CUT-OFF DATE
-------------------------

All reservations must be received by the hotel before May 15, 1989. 
Reservations received after this date are not guaranteed and will be accepted
on a space-available basis only.

A deposit of one night's room fee, a major credit card guarantee or a personal
or company check, is required to confirm a reservation. All rates are subject 
to a 9% occupancy tax.

Check-in time: 3:00 p.m.	Check out time: 12:00 noon

The Sheraton Grand Hotel on Harbor Island
Sheraton Reservation Center
1590 Harbor Island Drive
San Diego, CA 92101
(619)692-2265

Rooms - $102 per night (single or double)


The Advance Program with registration forms, etc. has been
mailed out of the IEEE offices.

------------------------------

End of VISION-LIST
********************

Vision-List-Request@ADS.COM (Vision-List moderator Phil Kahn) (05/16/89)

Vision-List Digest	Mon May 15 11:17:30 PDT 89

 - Send submissions to Vision-List@ADS.COM
 - Send requests for list membership to Vision-List-Request@ADS.COM

Today's Topics:

 CALL TO VOTE: sci.med.physics

----------------------------------------------------------------------

Date: 11 May 89 21:47:22 GMT
From: james@rover.bsd.uchicago.edu
Subject: CALL TO VOTE: sci.med.physics
Organization: University of Chicago - Dept. Rad. Onc. and Med. Physics


I  would like to call for a vote on the formation of a Medical Physics 
newsgroup (Sci.Med.Physics), which would deal with most applications of 
Physics in Medicine.  Based on my own experiences in medical imaging and in the 
Graduate Program in Medical Physics that I am currently in, I would like to see 
features generally related to imaging and radiation therapy, but I can easily 
see the applicability to biophysics and related fields.  Topics include (but
are not limited to):

Biomechanics

Biophysics

Basic interactions of radiation with matter
Basic interactions of radiation with living systems

Methods of Generating diagnostic information:

	NMR (MRI) imaging systems
	CT (computed tomography)
	Projection Radiography (including angiography and the new MEGAVOLTAGE
	imaging systems in Radiation therapy, as well as computed Radiography)
	Ultrasound
	Nuclear medicine imaging systems (e.g. Gamma Camera, PET, SPECT, ...)
	Thermography
	MEG
	EEG
	EKG
	Electrical Impedance Tomography
	Image Communication
	Computer aided diagnosis
	All fields of analysis applied to medical images
	...and so on


Methods used in Radiation therapy, including:

	Simulation and verification methods
	2D and 3D treatment planning
	Electrons, protons and assorted heavy ions
	Neutron therapy
	Quality assurance
	Brachytherapy
	Monoclonal Antibody imaging/treatment methods

The call for votes is cross listed in the groups sci.physics and comp.graphics

All votes (YES and NO) will be taken by james@rover.uchicago.edu


Thank you for your attention.

Sincerely,

James Balter
James@rover.Uchicago.edu
"If the hat fits, slice it!"


------------------------------

End of VISION-LIST
********************

Vision-List-Request@ADS.COM (Vision-List moderator Phil Kahn) (05/20/89)

Vision-List Digest	Fri May 19 15:59:28 PDT 89

 - Send submissions to Vision-List@ADS.COM
 - Send requests for list membership to Vision-List-Request@ADS.COM

Today's Topics:

 Registration of multiple views of a beating heart
 Fifth workshop on human and machine vision
 SEMINAR: Prediction of the Illuminant's Effect on Surface Color Appearance
 Seminar in Machine Vision at Rutgers University

------------------------------

Date: Fri, 19 May 89 12:00:18 EDT
From: agr@ares.cs.wayne.edu
Subject: Registration of multiple views of a beating heart

 REQUEST FOR REFERENCES FOR REGISTRATION OF MULTIPLE VIEWS OF A 
BEATING HEART.

We are trying to reconstruct the coronary arteries in three
dimensions for a live heart. In order to do that, several views of the
beating heart must be registered correctly.

Any recommendations of books/journal articles would be very much
appreciated.
Please email to the  following address:

agr@jupiter.cs.wayne.edu

Thanks

Arindam Guptaray
Cardiac Laser Lab
Harper Hospital 
3990 John R,
Detroit, MI 48201.

------------------------------

Date: Fri, 19 May 89 14:25 EDT
From: Sandy Pentland <sandy@media-lab.media.mit.edu>
Subject: Workshop announcement


FIFTH WORKSHOP ON HUMAN AND MACHINE VISION

Sea Crest Resort, 350 Quaker Road, North Falmouth, MA   02556-2903

On June  15,  the  day  following  the  Optical  Society  of
America's  Image  Understanding  and  Machine Vision Topical
Meeting, the Fifth Workshop on Human and Machine Vision will
be  held.  Organized by Jacob Beck of the University of Ore-
gon and Azriel Rosenfeld of the University of Maryland,  the
Workshop  will consist of invited papers on models for human
visual processes.  It will be held in the  Nauset  IV  Room,
Sea Crest Resort, North Falmouth, MA from 8:30 to 4 PM.
The Workshop registration fee is $30.

The following papers will be presented:
``Line Segregation'',  Jacob  Beck,  Azriel  Rosenfeld,  and
     Richard  Ivry,  University  of Oregon and University of
     Maryland.
``Motion  and  Texture  Analysis'',  John  Daugman,  Harvard
     University.
``The Medium is Not the Message  in  Preattentive  Vision'',
     James Enns, University of British Columbia.
``A Neural Network Architecture for Form and Motion  Percep-
     tion'', Stephen Grossberg, Boston University.
``Conservation Laws and the Evolution of  Shape'',  Benjamin
     B.  Kimia,  Allen  Tannenbaum,  and  Steven  W. Zucker,
     McGill University.
``A  Biological  Mechanism  for  Shape  from   Shading   and
     Motion'',  Alexander  Pentland, Massachusetts Institute
     of Technology.

For more information about transportation, lodging, etc., contact:
 Barbara Hope,  Center for Automation Research, 
 University of Maryland, College Park, MD   20742-3411
 Telephone:  301-454-4526


----------------------------------------------------------------------

Date: Wed, 17 May 89 15:24:42 PDT
From: binford@anaconda.stanford.edu (Tom Binford)
Subject: SEMINAR: Prediction of the Illuminant's Effect on Surface Color Appearance

	Monday, 2/22, 4:15, Cedar Hall Conference
	
	
	Prediction of the illuminant's effect on surface color appearance
	
	David H. Brainard
	Department of Psychology
	Stanford University
	
	Changes in the spectral power distribution of the ambient
	illumination change the spectral properties of the light reflected
	from a surface to the eye.  It is commonly believed that the human
	visual system adapts to reduce the change in perceived surface color
	appearance under changes of illumination.  I use a matching paradigm
	to quantify the effects of adaptation to the illuminant on color
	appearance.  My results show that this adaptation serves to reduce
	but not eliminate changes in surface color appearance.
	
	Because there are many possible surfaces and illuminant changes, it
	is not possible to measure directly the effects of adaptation for
	all of them.  I propose using a finite dimensional bi-linear system
	to model the process of adaptation.  This class of model has two
	advantages.  First, it is possible to test whether the model
	describes the data.  Second, to the extent that the model holds, it
	allows prediction of the effects of adaptation on the appearance of
	any surface for any illuminant change.  I present data that test how
	well the bi-linear model describes human performance.
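	
	[ For concreteness, a schematic Python/NumPy illustration of what a
	  finite-dimensional bilinear prediction can look like.  The
	  matrices, dimensions and fitting procedure are hypothetical, and
	  this is not Brainard's actual model. ]
	
	import numpy as np
	
	def predicted_match(t, e, A):
	    # t : cone coordinates of a surface under the reference illuminant, (3,)
	    # e : weights of the test illuminant in a low-dim linear model, (m,)
	    # A : (m, 3, 3); A[k] maps reference to matched coordinates for the
	    #     k-th illuminant basis function (to be fitted from matching data).
	    # The prediction is linear in t for fixed e, and linear in e for
	    # fixed t -- i.e. bilinear.
	    M = np.tensordot(e, A, axes=1)   # 3x3 adaptation matrix for this illuminant
	    return M @ t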
	
	

------------------------------

Date: 18 May 89 04:16:46 GMT
From: scarter@caip.rutgers.edu (Stephen M. Carter)
Subject: Seminar in Machine Vision at Rutgers University
Organization: Rutgers Univ., New Brunswick, N.J.


[ Though this appears to be a for-profit enterprise, it bears sufficient
  relevance to this List to justify posting.
		phil...  ]
 

	The Center for Computer Aids for Industrial Productivity (CAIP)
		    at RUTGERS, The State University of N.J.
			            Presents

MACHINE VISION - An intensive five-day course for engineers and scientists 
concerned with the theory and application of machine vision.

		July 10-14, 1989 - New Brunswick, New Jersey

Lectures given by Dr. H. Freeman and a staff of leading experts noted for
their work in the field of machine vision will provide a detailed presentation
of the concepts and techniques used in applying machine vision to industrial
problems. Emphasis will be placed on sensors, illumination techniques, computer
algorithms, hardware requirements, and system considerations.  The material
presented will range from basic techniques to the latest state-of-the-art
surveillance, and the military will be presented.

Registration fee: $895.  Includes all materials, texts, etc. (Discount of 
10% for 3 or more registrants from same organization.)

Course Location:  Hyatt Regency, New Brunswick, N.J.

For further information contact: 
				Sandra Epstein
				Telephone: (201) 932-4208
				FAX: (201) 932-4775
				Email:  sepstein@caip.rutgers.edu
					..!rutgers!caip!sepstein

------------------------------

End of VISION-LIST
********************

Vision-List-Request@ADS.COM (Vision-List moderator Phil Kahn) (05/27/89)

Vision-List Digest	Fri May 26 14:43:10 PDT 89

 - Send submissions to Vision-List@ADS.COM
 - Send requests for list membership to Vision-List-Request@ADS.COM

Today's Topics:

 learning to play Go and Neural Networks: info request

----------------------------------------------------------------------

Date: 23 May 89 08:32:41 GMT
From: munnari!latcs1.oz.au!mahler@uunet.UU.NET (Daniel Mahler)
Subject: learning to play Go and Neural Networks: info request
Organization: Comp Sci, La Trobe Uni, Australia

[ Not a mainline vision question, but perhaps of interest to some.
		phil...		]


I am an honours student working on implementing
a neural network that learns to play Go.
This seems appropriate as Go has a much
higher (~50x) branching factor, forcing
a pattern-oriented rather than a lookahead-
oriented approach, and most instruction
(heuristics advocated by books & players)
is of an intuitive nature, hard to
formalise into the classical AI symbolic/logical
paradigm. My current idea is to preprocess
the board position using image/signal processing techniques (e.g. transforms,
filters, convolution, multi-dimensional grammars)
to enhance the strategic structure of
the position over superficial similarities;
in other words, I will treat the board
as a 2-D signal/image. I will concentrate mainly on
the opening to early middle game phase, as these are
quiescent; later stages become more dependent on tactical
lookahead. All responses, be they bibliographic,
theoretical, practical, or philosophical, will be appreciated.
I leave it to your discretion to judge
the appropriateness of replying by news or email.
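
[ One very small, concrete instance of the image-style preprocessing
  described above, in Python/NumPy: a Go position encoded as a 2-D
  array is repeatedly convolved with a small kernel to produce a crude
  "influence" field.  The kernel, the pass count and the interpretation
  are arbitrary illustrative choices, not recommendations. ]

import numpy as np

def influence_map(board, passes=3):
    # board: 19x19 array with +1 for black stones, -1 for white, 0 empty.
    # Repeated convolution with a small kernel spreads each stone's
    # "influence" across the board, treating the position as a 2-D signal.
    kernel = np.array([[0.05, 0.10, 0.05],
                       [0.10, 0.40, 0.10],
                       [0.05, 0.10, 0.05]])
    field = board.astype(float)
    for _ in range(passes):
        padded = np.pad(field, 1)                 # zero-pad the border
        out = np.zeros_like(field)
        for dy in range(3):                       # convolution by shifted adds
            for dx in range(3):
                out += kernel[dy, dx] * padded[dy:dy + field.shape[0],
                                               dx:dx + field.shape[1]]
        field = out + board                       # stones keep reasserting themselves
    return field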



------------------------------

End of VISION-LIST
********************

Vision-List-Request@ADS.COM (Vision-List moderator Phil Kahn) (06/04/89)

Vision-List Digest	Sat Jun 03 17:05:23 PDT 89

 - Send submissions to Vision-List@ADS.COM
 - Send requests for list membership to Vision-List-Request@ADS.COM

Today's Topics:

 Request for range images
 Rule based decisions vs functional decisions
 Post Doctoral Faculty Positions in Computer Science at CMU
 Invitation to Join the SUN Medical Users Group

------------------------------

Date: 02 Jun 89 10:19:06+0200
From: Martin Peter <mpeter@bernina.zir.ethz.ch>
Subject: Request for range images

I'm working as a PhD student on a 3-D object recognition system, based
on a constraint search algorithm. A typical application of the system
might be a robot for sorting parcels in a post office.  Objects in the
scene are modeled with planar, cylindrical and general curved surface
patches. Occlusion is allowed. Input data should come from a structured
light sensor, whose output the low-level part will convert into surface
patch information. Unfortunately the structured light sensor is not yet
working, so I'm playing around with some simple synthetically
generated images.

Question: Are there some standard range images available in the
Computer Vision community, maybe some standard benchmark scenes?
Can someone mail me some range images?

Thanks for any help.

[ This is a great idea.  It is often difficult for all of us to get 
  interesting and useful data.  Anyone willing to share data with others,
  please post a description of its type, size, properties, etc.  Anyone
  who knows of a public repository for this type of data, please post
  what you know.  One would think that the U.S. Gov't, with all the 
  image based work they fund, has at least several public repositories 
  which could be used to obtain data.  Any info anyone?
		phil...		]


Martin Peter                    > EAN:    mpeter@zir.ethz.ch
Computer Vision Lab             > UUCP:   mpeter@ethz.uucp
Swiss Fed. Inst. Of Tech.       > BITNET: mpeter@czheth5a.bitnet
IKT/BIWI
ETH Zurich/Switzerland

------------------------------

Date: 3 Jun 89 01:47:52 GMT
From: nealiphc@blake.acs.washington.edu (Phillip Neal)
Subject: rule based decisions vs functional decisions
Keywords: image understanding, machine learning , image processing
Organization: Univ of Washington, Seattle


Does anybody have any performance numbers for a rule-based
decision-making image segmentation method vs. a regular functional
decision-making method?

In other words, which is better -- theoretically or empirically in
terms of classification rate:

  1. Developing rules like:

       If the edge is more than 20 grey levels and the major direction is north
       east, then the confidence factor for this rule is 20
       else
       confidence factor for this rule is 0

                  vs

  2. y = f(delta(grey),direction)
     if(y.gt..5) then this is a true edge.
 
     and the coefficients are 'learned' through some inductive process 
     like discriminant analysis or some bayesian update routine.
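
[ A minimal Python sketch of the two styles being contrasted; the
  threshold, the "north-east" test and the discriminant coefficients
  are placeholder values, not fitted or recommended ones. ]

import math

def rule_confidence(delta_grey, direction_deg):
    # Style 1: hand-coded rule with a confidence factor.
    if delta_grey > 20 and 30 <= direction_deg <= 60:    # roughly north-east
        return 20
    return 0

def learned_edge(delta_grey, direction_deg, w=(-3.0, 0.12, 0.01)):
    # Style 2: y = f(delta_grey, direction), here a logistic discriminant
    # whose coefficients w would be learned from labelled examples
    # (e.g. discriminant analysis or a Bayesian update routine).
    y = 1.0 / (1.0 + math.exp(-(w[0] + w[1] * delta_grey + w[2] * direction_deg)))
    return y > 0.5        # "true edge" decision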

That's all for now,
Phil Neal   ---- nealiphc@blake.acs.washington.edu


------------------------------

Date: Wednesday, 31 May 1989 23:12:50 EDT
From: Dave.McKeown@maps.cs.cmu.edu
Subject: Post Doctoral Faculty Positions in Computer Science at CMU


	Post Doctoral Faculty Positions in Computer Science

Digital Mapping Laboratory
School of Computer Science
Carnegie Mellon University

Applications are invited for a post-doctoral research faculty position in the 
School of Computer Science at Carnegie Mellon University.  This position is
tenable for two years with possibilities for appointment to the regular
Research or Systems Faculty track within the School.

The successful applicant will be expected to play a major role in the current
and future research efforts within the Digital Mapping Laboratory.  Our 
research is broadly focused on the automated interpretation of remotely 
sensed data including high-resolution aerial imagery and multi-spectral 
imagery such as SPOT and Landsat TM.  Current areas of investigation include
knowledge-based scene analysis, automated acquisition of spatial and 
structural constraints,  cultural feature extraction (road network, and 
building detection and delineation), automated scene registration and stereo
matching, parallel architectures for production systems, and large-scale 
spatial databases.

A strong background in one or more of these or related areas is required.
Excellent written and verbal communication skills are also expected.

Applicants should send a curriculum vitae and names of at least three 
references to:  
	David M. McKeown
	Digital Mapping Laboratory
	School of Computer Science
	Carnegie Mellon University
	Pittsburgh, PA 15213.

Carnegie Mellon is an Equal Opportunity/Affirmative Action employer.


----------------------------------------------------------------------

Date: Wed, 31 May 89 10:26:32 PDT
From: clairee%sunwhere@Sun.COM (Claire Eurglunes)
Subject: Invitation to Join the SUN Medical Users Group

Greetings:

My name is Claire Eurglunes and I have recently been hired by SUN
Microsystems in Mountain View to form the SUN Medical Users Group. I
work under Ken Macrae in the Scientific Products area.

I am currently looking for all SUN and TAAC users that may be
interested in joining this users group. We are planning a first
meeting towards the end of June.

Please contact me if you are interested...we hope to establish users
lists, a bibliography of papers, abstracts and presentations, a
software consortium, etc.

                         Claire Eurglunes
                         sunwhere.Sun.COM!clairee@sun.com
                         (415)336-5131



------------------------------

End of VISION-LIST
********************

Vision-List-Request@ADS.COM (Vision-List moderator Phil Kahn) (06/17/89)

Vision-List Digest	Fri Jun 16 09:19:51 PDT 89

 - Send submissions to Vision-List@ADS.COM
 - Send requests for list membership to Vision-List-Request@ADS.COM

Today's Topics:

 rule based decisions vs functional decisions
    Image Repository
 Range Images
 RE: Vision-List delayed redistribution
 Range imagery
 Re: Request for range images
 Neural nets for vision
 Where is Alv software ?
 Vision/Image Processing Languages
 video recording equipment

----------------------------------------------------------------------

Date: 3 Jun 89 01:47:52 GMT
From: Phillip Neal <nealiphc@BLAKE.ACS.WASHINGTON.EDU>
Subject: rule based decisions vs functional decisions
Keywords: image understanding, machine learning , image processing
Organization: Univ of Washington, Seattle

Does anybody have any performance numbers for a rule-based
decision-making image segmentation method vs. a regular functional
decision-making method?

In other words, which is better -- theoretically or empirically in terms 
of classification rate: 

  1. Developing rules like:

       If the edge is more than 20 grey levels and the major direction is north
       east, then the confidence factor for this rule is 20
       else
       confidence factor for this rule is 0

                  vs

  2. y = f(delta(grey),direction)
     if(y.gt..5) then this is a true edge.
 
     and the coefficients are 'learned' through some inductive process 
     like discriminant analysis or some bayesian update routine.

That's all for now,
Phil Neal   ---- nealiphc@blake.acs.washington.edu

------------------------------

Date:       Mon, 5 Jun 89 09:08:43 BST
From: alien%VULCAN.ESE.ESSEX.AC.UK@CUNYVM.CUNY.EDU (Adrian F. Clark)
Subject:    Image Repository


I'm interested in collecting together a set of `standard' images of
all types--TV, remote sensing, tomography, range, etc.--to test out a
package I'm developing.  I'm also looking at ways of making imagery
available over a network (i.e., lossless coding into an ASCII subset
that will pass through most gateways unscathed).  The sort of imagery
I'm looking for should be both good (standard algorithms work) and bad
(standard algorithms fail).  If anyone else is interested in such an
idea, please contact the author...especially if you've some suitable
imagery!
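
[ A minimal modern sketch of the "gateway-safe ASCII" idea in Python:
  losslessly compress the raw pixels and armour them with base64, whose
  64-character alphabet survives most mail gateways.  base64 is used
  purely as a convenient stand-in for uuencode-style armouring, and the
  GREY8 header line is an invented, hypothetical convention. ]

import base64, zlib

def image_to_ascii(pixels, width, height):
    # pixels: bytes of length width*height (8-bit greyscale, row major).
    header = "GREY8 %d %d\n" % (width, height)
    body = base64.b64encode(zlib.compress(bytes(pixels))).decode("ascii")
    return header + body

def ascii_to_image(text):
    # Inverse of image_to_ascii: returns (pixels, width, height).
    header, body = text.split("\n", 1)
    _, w, h = header.split()
    return zlib.decompress(base64.b64decode(body)), int(w), int(h)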

   Adrian F. Clark
   JANET:  alien@uk.ac.essex.ese
   ARPA:   alien%uk.ac.essex.ese@nss.cs.ucl.ac.uk
   BITNET: alien%uk.ac.essex.ese@ac.uk
   Smail:  Dept. of Electronic Systems Engineering, Essex University,
           Wivenhoe Park, Colchester, Essex C04 3SQ, U. K.
   Phone:  (+44) 206-872432 (direct)

"The great tragedy of Science--the slaying of a beautiful
hypothesis by an ugly fact."      -- T H Huxley (1825-95)

------------------------------

Subject: Range Images
Date: Mon, 05 Jun 89 10:06:58 -0400
From: "Kenneth I. Laws" <klaws@nsf.GOV>

One source of range images is the NRCC Three-Dimensional
Image Data Files.  The collection includes: simple objects
with planar, spherical, and cylindrical surfaces, sometimes
overlapping; multiple views of isolated objects; human faces;
and various complex objects.  For information about tapes
and diskettes (about $350 for each section of 40-66 images
on tape, $750 on 5.25" diskettes) contact

  M. Rioux or L. Cournoyer
  Photonics and Sensors Section
  Laboratory for Intelligent Systems
  Division of Electrical Engineering
  National Research Council of Canada
  Ottawa, Ontario, Canada  K1A 0R8

  (613) 993-7902

					-- Ken Laws
					   National Science Foundation


------------------------------

Date: Mon, 5 Jun 89 10:46 CDT
From: "H. Ogmen, OGMEN@UHVAX1.UH.EDU, OGMEN@UHVAX1.BITNET"
Subject: RE: Vision-List delayed redistribution
Re: Three-dimensional image data files.

The National Research Council of Canada (NRC) has a large number of
three-dimensional image data files. 
For information about these images (and prices) contact:

M. Rioux, L. Cournoyer
Photonics and Sensors Section
Laboratory for Intelligent Systems
Division of Electrical Engineering
National Research Council of Canada
Ottawa, Ontario, Canada
K1A 0R8
Tel. (613) 993 7902
Telex: 053-4134
Telefax: 613- 952-7998


H. Ogmen
Dept. of EE
University of Houston


------------------------------

Date: Mon 5 Jun 89 10:50:53-PDT
From: Gerard Medioni <MEDIONI%DWORKIN.usc.edu@oberon.usc.edu>
Subject: Range imagery

I am aware of at least two sets of "standard" range image databases,
one from the University of Utah (Prof. Thom Henderson), and one from the
National Research Council of Canada (M. Rioux and L. Cournoyer).  The
first is available at a nominal fee, the second for a few hundred $$.


------------------------------

Date: Tue, 6 Jun 89 09:31:20 PDT
From: Bruce Bon <bon@saavik.Jpl.Nasa.Gov>
Subject: Re: Request for range images

There is an extensive set (214 images) of range images published by the
National Research Council Canada.  The range data was taken with a
synchronized laser scanner.  A book, "The NRCC Three-dimensional Image
Data Files," CNRC 29077, contains all of these images and is available
at no charge from:

	Editorial Office, Room 301
	Division of Electrical Engineering
	National Research Council of Canada
	Ottawa, Ontario, Canada
	K1A 0R8

Machine-readable versions are available in several formats (ASCII/binary,
raw/interpolated) on 1600 bpi magtape and 5 1/4" DSDD diskettes.  For
information, contact:

	M. Rioux and L. Cournoyer
	Photonics and Sensors Section
	Laboratory for Intelligent Systems
	Division of Electrical Engineering
	National Research Council of Canada
	Ottawa, Ontario, Canada
	K1A 0R8

	Telephone: (613) 993-7902
	Telex:     053-4134
	Telefax:   (613) 952-7998

I hope this helps.

				Bruce Bon
				bon@robotics.jpl.nasa.gov (ARPAnet)

------------------------------

Date: 6 Jun 89 23:06:13 GMT
From: kroger@cs.utexas.edu (Jim Kroger)
Subject: Neural nets for vision
Keywords: neural nets, pattern recognition, computer vision
Organization: U. Texas CS Dept., Austin, Texas

Are there any existing applications of neural net technology to object
recognition problems?  I know that much theoretical work with neural
nets has involved vision, the work of David Marr being an example.
However, I am interested in creating a system which can actually
recognize a sizeable number of objects. I am not sure whether this is
something that can now be accomplished in hardware, or must be
implemented in software.  Can anybody advise me as to existing systems
or techniques, either hardware or software, which might accomplish
this task? Also, what kind of hardware is optimal for software
implementations?  Basically, I want to know if neural net technology
offers any practical, workable solution to object recognition.

Any information or references will be greatly appreciated, and a
summary will be posted.


			Jim Kroger
			kroger@cs.utexas.edu


------------------------------

Date: Wed, 7 Jun 89 21:24:00 edt
From: parzen%jimmy@bess.HARVARD.EDU (Michael Parzen)
Subject: Where is Alv software ?

Back in November of 1988, I ftp'd some software written by Phill Everson
called Alv (autonomous land vehicle). It was some good vision software.

Does anyone know where I ftp'd this software from, i.e. where it can be
located on the net ? I forgot where it was and need to get it again.

Thanks in advance.

Mike Parzen
parzen@csc.brown.edu

------------------------------

Date: 10 Jun 89 00:13:20 GMT
From: mdavcr!rdr@uunet.UU.NET (Randolph Roesler)
Subject: Vision/Image Processing Languages
Summary: Wanted - Image Processing Language References
Keywords: Image Language
Organization: /etc/organization

	I am looking for references to Image
	processing languages.  What I want is 
	a language which has builtin support
	for high and low level image/vision
	operations.

	I.e. image -> image operations
	     image -> object operations
	     object -> object operations

	Please, don't inform me of great general
	purpose languages such as lisp, prolog, ....

	Research and commercial languages OK.

	We summarize to the Net.

	Randy Roesler
	MacDonald Dettwiler
	Vancouver, BC, Canada
	604-278-3411
	uunet!van-bc!mdavcr!rdr


------------------------------

Date: 16 Jun 89 01:37:02 GMT
From: munnari!latcs1.oz.au!suter@uunet.UU.NET (David Suter)
Subject: video recording equipment
Organization: Comp Sci, La Trobe Uni, Australia

I have a query about the availability of video recording equipment
(manufacturers' names, if such equipment exists) for a medical monitoring
application.  Essentially the video signal has other information (other
than the picture) encoded during the blanking periods. The applications
require two different recording systems:
1. After a period of monitoring (say about 10 secs) the recording equipment
dumps 1 picture frame to tape or disc and also the average of some quantity
that has been encoded on the blanking periods.
2. After an event trigger, a recording is made of THE 30 SECS TO 1 MIN 
that occurred PRIOR to the event - as well as 1 MIN or so after the event.

The general idea is that in both cases the encoded data, as well as the
video data corresponding to it, can be analysed later to see what
was happening when interesting events occurred - in the latter case it
is important to see what led up to the event.  The applications
concerned are not mine - so the details relayed may be a little hazy.
However, the general characteristics required are as above. My contact
believes that there are commercial systems that do the type of thing
above (DATA VIDEO ENCODER?) but doesn't know sources of such
equipment. For 2, he wants any suggestions of ways of achieving these
ends (somehow having the previous 1 min available to store if an
interesting event occurs). The system must be capable of running on a
video tape that is changed, say, every 3 hrs. Thus suggestions of
solutions he can get engineered himself - or of companies that deal in
equipment providing this sort of functionality - are welcome.
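
[ A minimal Python sketch of the pre-event requirement in 2. above: a
  fixed-length circular buffer keeps the last minute of frames (plus
  the data decoded from the blanking intervals) so it can be dumped
  when a trigger fires.  The frame rate, buffer length and trigger are
  placeholders for whatever the real acquisition hardware provides. ]

from collections import deque

class PreEventRecorder:
    def __init__(self, frames_per_sec=25, pre_event_secs=60):
        # A bounded deque acts as the circular buffer: once full, every
        # new frame silently evicts the oldest one.
        self.buffer = deque(maxlen=frames_per_sec * pre_event_secs)

    def add_frame(self, frame, blanking_data):
        # Called for every incoming frame during normal monitoring.
        self.buffer.append((frame, blanking_data))

    def on_trigger(self):
        # Snapshot of everything leading up to the event, oldest first,
        # ready to be written out along with the post-event recording.
        return list(self.buffer)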

thanks.
d.s.

------------------------------

End of VISION-LIST
********************

Vision-List-Request@ADS.COM (Vision-List moderator Phil Kahn) (06/21/89)

Vision-List Digest	Tue Jun 20 09:52:12 PDT 89

 - Send submissions to Vision-List@ADS.COM
 - Send requests for list membership to Vision-List-Request@ADS.COM

Today's Topics:

 CONFERENCE ANNOUNCEMENTS:
    AI and communicating process architectures conference
    CFP Eurographics'90 Conference

----------------------------------------------------------------------

Date: Tue, 13 Jun 89 11:16:22 BST
From: Steven Zenith <zenith@inmos-c.ads.com>
Subject: AI and communicating process architectures conference

                         International conference
      ARTIFICIAL INTELLIGENCE AND COMMUNICATING PROCESS ARCHITECTURE
          17th/18th of July 1989, at Imperial College, London UK.
                              Keynote speaker
                             Prof. Iann Barron

                             Invited speakers
          Prof. Igor Aleksander   Neural Computing Architectures.
                Prof. Colin Besant   Programming of Robots.
         Prof. David Gelernter   Information Management in Linda.
            Dr. Atsuhiro Goto   The Parallel Inference Machine.
           Prof. Tosiyasu Kunii   Primitive Image Understanding.
                  Dr. Rajiv Trehan   Parallel AI Systems.
        Prof. Alan Robinson   Functional and Relational reasoning.
          Prof. Les Valiant   Bulk-synchronous Parallel Computing.

                      * Parallel Processing and AI *

	Parallel Processing and Artificial Intelligence are two key themes
which have risen to the fore of technology in the past decade. This
international conference brings together the two communities.
	Communicating Process Architecture is one of the most successful
models for exploiting the potential power of parallel processing machines.
Artificial Intelligence is perhaps the most challenging application for
such machines. This conference explores the interaction between these two
technologies.
	The carefully selected programme of invited talks and submitted papers
brings together the very best researchers currently working in the field. 

                            * Topics include *
             Robotics   Neural Networks   Image Understanding
    Speech Recognition   Implementation of Logic Programming Languages
      Information management   The Japanese Fifth Generation Project
                           Transputers and Occam

                            * Submitted papers *
Fault Tolerant Transputer Network for Image Processing
      -- S. Pfleger et al.
Multi-Transputer Implementation of CS-Prolog
      -- Peter Kacsuk and I Futo
Transputer Common-Lisp: A Parallel Symbolic Language on Transputers
      -- Bruno Pages
Fast Robot Kinematic Modelling via Transputer Networks
      -- A.Y.Zomaya and A.S.Morris
Transputer-based Behavioral Module for Multi-Sensory Robot Control
      -- Zhang Ying
PUSSYCAT: A Parallel Simulation System for Cellular Automata on Transputers
      -- Eddy Pauwels
Self-organising Systems and their Transputer Implementation
      -- D.A.Linkens and S.B.Hasnain
The Suitability of Transputer Networks for Various Classes of Algorithms
      -- M. Korsloot et al.
                              * Proceedings *

The edited proceedings includes invited and submitted papers and is
published in a new book series on Communicating Process Architecture 
published by John Wiley and Sons.

Organising committee, programme editors and conference chairmen: 

              Dr. Mike Reeve   Imperial College, London, UK. 
           Steven Ericsson Zenith   INMOS Limited, Bristol, UK.

                           * Conference dinner *
	The conference dinner will be held at London Zoo, with before dinner
sherry in the Aquarium. Coaches will transport delegates.

                             * Accommodation *
	Accommodation is available on the Campus of Imperial College. Campus
accommodation is available for Sunday and/or Monday night. Hotel
accommodation can be arranged separately by writing to the conference
secretary.

                              * Car parking *
	Available at a number of local NCP sites.

                                * Payment *
	Cheques or bankers drafts in pounds sterling should be made payable
to:   OUG AI Conferences

	Full name___________________________________________
	Institute/Company___________________________________
	Address_____________________________________________
	____________________________________________________
	____________________________________________________
	____________________________________________________
	Country_____________________________________________
	email :_____________________________________________

Non-residential			200 pounds sterling 	[]
Residential (1 night)		225 pounds sterling     []
Residential (2 nights)		250 pounds sterling     []
Conference dinner		42 pounds sterling	[]

		Total Payable________________________

Some student subsidy is available. 50% subsidy for UK students, 25% subsidy
for overseas students. Accommodation for students is at 15 pounds, but the
conference dinner is full fee.

Special dietary requirements: 
Vegan [] Vegetarian  [] Other (Please specify)

Date____________ 
Signed_______________________________    Dated_____________________

                             * Registration *
	Registration should be received by June 16th. Late registration will
incur a 20 pound surcharge. All enquiries should be addressed to the
conference secretary:
				The Conference Secretary, 
				OUG AI Conferences, 
				INMOS Limited, 
				1000 Aztec West, 
				Almondsbury, 
				Bristol BS12 4SQ, 
				UNITED KINGDOM. 
				Tel. 0454 616616 x503 
				email: zenith@inmos.co.uk 

                             occam user group
                       * artificial  intelligence *
                          special interest group
                  1st technical meeting of the OUG AISIG

This conference is underwritten by INMOS Limited, to whom the organising
committee wish to extend their thanks. 


------------------------------

Date: 7 Jun 89 07:28:39 GMT
From: ivan@cwi.nl (Ivan Herman)
Subject: CFP Eurographics'90 Conference
Organization: CWI, Amsterdam

        Call for Papers, Call for Tutorials and Call for State of the
                           Art Reports
                          EUROGRAPHICS '90
                        September 3-7, 1990
                       Montreux, Switzerland

                 Images: Synthesis, Analysis and Interaction

                      Call for Participation
                        (first announcement)


The EUROGRAPHICS Association will be 10 years old in September 1990.
For the last 10 years, EUROGRAPHICS has served the European and 
worldwide research community in computer graphics and its applications,
through the annual event, journal, workshop programme and other 
activities.

In the past, EUROGRAPHICS conferences have concentrated in the main on
topics traditionally associated with computer graphics and human
computer interaction. EUROGRAPHICS '90 will continue to address such 
topics.

For EUROGRAPHICS '90, a new theme of the conference will be the 
relationship between image synthesis (traditionally the domain of 
computer graphics) and image processing and computer vision.

It is now clear that there is overlap between image synthesis and image
analysis in both techniques and applications. For example, as computer
graphics is used more and more in the visualization of scientific and
engineering computations, so it looks likely that image processing
techniques will be used to help develop an understanding of the results.

Tutorials, state of the art reports and invited papers will address the
relationship between graphics and image processing, at both introductory
and advanced levels, and submitted papers are invited in this area.


                               CONFERENCE
                           September 5-7, 1990

Papers selected by the International Programme Committee will present
the most relevant and recent developments in Computer Graphics.
The Conference Proceedings will be published by North-Holland.

List of Topics

Graphics Hardware
Superworkstations
Hypersystems
Graphics and Parallelism
Distributed Graphics
Visualization Techniques
Animation and Simulation
Image Processing
Sampling Theory
Unwarping
Image Filtering
Image Representation
Computational Geometry
Graphics Algorithms and Techniques
Modelling
Standards
Exchange of Product Data
Graphics for CAD, CAM, CAE, ...
Human-Computer Interaction
Human Factors
Tool Kits for UIMS and WMs
Presentation Graphics
Graphics in the Office
Graphics in Publication and Documentation
Page Description Languages
Novel Graphics Applications
Graphics as an Experimental Tool
Graphics in Education
Integration of Graphics and Data Bases
Colour
Multi Media Graphics

INSTRUCTIONS TO AUTHORS

Authors are invited to submit unpublished original papers related to 
recent developments in computer graphics and image processing.  Full 
papers (maximum 5000 words) including illustrations, should be sent to 
the Conference Secretariat by Nov. 15, 1989.  Authors should indicate 
the topic area(s) they consider appropriate for the paper.  The first 
page should include the title, name of the author(s), affiliation, 
address, telephone, telex, and telefax numbers, and electronic mail
address, together with an abstract (maximum 200 words). Papers with 
multiple authors should clearly indicate to which author correspondence
should be addressed.

The author of the Best Paper, selected by an international jury, will 
receive the Gunter Enderle Award, which includes a cash prize. The best
three papers will be also published in an international journal.

Lectures will be given in English and all papers should be submitted 
in English.


                              TUTORIALS
                          September 3-4, 1990

The first two days of the event will be devoted to the tutorial 
programme. Tutorials will be given by leading international experts 
and will cover a wide range of topics offering an excellent opportunity
for professional development in computer graphics and related areas.
The programme includes both introductory and advanced tutorials.

Each tutorial will occupy one full day. Lecture notes will be provided
for attendees.

Preliminary List of Topics

Introduction to Image Processing
Introduction to Ray Tracing and Radiosity
Image Reconstruction
Superworkstations for Graphics
Human Visual Perception
Intelligent CAD Systems
Free-form Surfaces and CSG
Graphics and Distributed Environments
Scientific Data Visualization
Computer Vision
Traditional Animation: A Fresh Look
Computer Graphics for Software Engineering

The list of topics is still preliminary; the organisers would welcome 
any new proposal for tutorials.


                        STATE OF THE ART REPORTS
                          September 5-7, 1990

In parallel with the conference proper, a series of 1 1/2 hour reports 
on topics of wide current interest will be given by leading experts in
the fields.  These will serve to keep attendees abreast of the state
of the art in these fields and recent significant advances.

Preliminary List of Topics

Standardization in Graphics and Image Processing: Present and Future
Advanced Rendering
Object Oriented Design in Action
Digital Typography
Simulation of Natural Phenomena
Advanced Mathematics and Computer Graphics
Interactive Graphics and Video Discs
Graphics - Education
Human Prototyping

The list of topics is still preliminary; the organisers would welcome 
any new proposal for state of the art reports.


                         VIDEO AND FILM COMPETITION

There will be a competition of computer-generated videos and films, 
with prizes awarded for the best entries based on creativity and 
technical excellence.  Submissions are invited for scientific and 
technical applications, art and real-time generated sequences. Entries 
will be shown during the conference.


                              SLIDE COMPETITION

A competition will also be held for artistic images and scientific and
technical images submitted on 35mm slides. Prizes will be awarded for 
the best entries and slides will be shown during the conference.

The closing date for submission to both competitions will be June 15, 
1990. Entries should be sent to the Conference Secretariat. Rules for 
the competition will be sent to people returning the slip and 
indicating their intention to submit.


                             IMPORTANT DATES

Today:
Fill in and mail the attached reply card

July 15, 1989
Deadline for proposals for tutorials and state of the art reports

November 15, 1989:
Full paper should be received by Conference Secretariat

January 12, 1990:
International Programme Committee meeting

January 31, 1990:
Notification of acceptance or refusal of papers

March 31, 1990:
Final version of the accepted contributions should be
received by Conference Secretariat

April 1990:
Distribution of Preliminary Programme for EUROGRAPHICS '90

June 15, 1990:
Deadline for receiving Video, Film and Slide Competition entries

September 3-7, 1990:
EUROGRAPHICS '90
in Montreux

Official Conference Language is English


                            ORGANISING COMMITTEE

Conference Chairmen:                        Michel Roch (CH) 
                                            Michel Grave (F)

International programme committee (Chairs): David Duce (UK), 
                                            Carlo Vandoni (CH)

International programme committee (Members):

C. Binot (F)
W. Boehm (FRG)
P. Bono (USA)
K. Brodlie (UK)
P. Brunet (ESP)
S. Coquillart (F)
L. Dreschler-Fisher (FRG)
A. Ducrot (F)
P. Egloff (FRG)
J. Encarnacao (FRG)
B. Falcidieno (I)
A. Gagalowicz (F)
R. Gnatz (FRG)
M. Gomes (P)
P. ten Hagen (NL)
W. Hansmann (FRG)
R. Hersch (CH)
F. Hopgood (UK)
R. Hubbold (UK)
E. Jansen (NL)
M. Jern (DK)
K. Kansy (FRG)
M. Kocher (CH)
C. Pellegrini (CH)
F. Post (NL)
J. Prior (UK)
W. Purgathofer (AU)
W. Strasser (FRG)
P. Stucki (CH)

Tutorial chairmen:                          Ivan Herman (NL) 
                                            Gerald Garcia (CH)

State of the art reports chairmen:          Bertrand Ibrahim (CH) 
                                            Thierry Pun (CH)

Exhibition chairman:                        Jean-Francois L'Haire (CH)

Video, film, slide competitions:            Jean-Marie Moser (CH) 
                                            Daniel Bernard (CH)

Congress Organization: Georges Peneveyre
Paleo Arts et Spectacles
Case postale 177
CH - 1260 Nyon
Switzerland
Tel.: +41-22-62-13-33
Telex: 419 834
Telefax: +41-22-62-13-34


                           REPLY FORM


I intend to submit a full paper with the following title:


I enclose a full paper (to be submitted before November 15, 1989)

I intend to submit a video  film  slides
(to be submitted before June 15, 1990)

I would like to receive an information package for exhibitors

I would be interested in giving a tutorial or state of the art report
(include detailed abstract; submit before June 30, 1989)

I intend to participate at the congress
    1 Tutorial

    2 Tutorials

Date:       Signature:


For Further Information Please Contact:

EUROGRAPHICS '90
CONFERENCE SECRETARIAT
PALEO ARTS ET SPECTACLES
Case postale 177
CH - 1260 Nyon
Switzerland
Tel.: +41-22-62-13-33
Telex: 419 834
Telefax: +41-22-62-13-34

                         MAILING ADDRESSES:

Main address:

EUROGRAPHICS '90
CONFERENCE SECRETARIAT
PALEO ARTS ET SPECTACLES
Case postale 177
CH - 1260 Nyon
Switzerland
Tel.: +41-22-62-13-33
Telex: 419 834
Telefax: +41-22-62-13-34

In case of submission of a state of the art report proposal (AND ONLY IN 
THIS CASE) you may also address:

Thierry Pun
EG'90 State of the Art Report Co-chairman
Computing Center, University of Geneva
12, rue du Lac
CH - 1207 Geneve
Switzerland
Tel.: +41-22-787-65-82
Fax.: +41-22-735-39-05
Email: pun@cui.unige.ch, pun@cgeuge51.bitnet
Telex: CH 423 801 UNI

In case of submission of a tutorial proposal (AND ONLY IN THIS CASE)
you may also address:

Ivan Herman
EG'90 Tutorial Co-chairman
Centre for Mathematics and Computer Sciences (CWI)
Dept. of Interactive Systems
Kruislaan 413
NL - 1098 SJ Amsterdam
The Netherlands
Tel. (31)20-592-41-64
Telex: 12571 MACTR NL
Telefax: +31-20-592-4199
Email: ivan@cwi.nl (UUCP), ivan@tuintje.cwi.nl (from BITNET)

All other questions regarding the conference organisation, as well as 
submission of papers, have to be addressed to the Conference secretariat
address given above.


------------------------------

End of VISION-LIST
********************

Vision-List-Request@ADS.COM (Vision-List moderator Phil Kahn) (06/30/89)

Vision-List Digest	Thu Jun 29 13:30:10 PDT 89

 - Send submissions to Vision-List@ADS.COM
 - Send requests for list membership to Vision-List-Request@ADS.COM

Today's Topics:

 NSF announcement for PRC researchers
 Testing 3-D Vision Nets: Call for help
 Transputer based frame grabbers
 Travel fellowships for graduates students in vision research
 Post-doctoral position in human and machine vision

----------------------------------------------------------------------

From: Ken Laws <klaws@note.nsf.gov>
Subject: NSF announcement for PRC researchers

Recently, in response to events in the People's Republic of
China (PRC), President Bush offered a one-year delayed departure
to all PRC students, scholars and other visitors now in the
United States.

Many visitors from the PRC currently receive support through NSF
awards, particularly as graduate students and postdoctoral
researchers.  Effective immediately, NSF will entertain requests
for supplements if the duration of the stay of a PRC student or
other researcher supported on an existing award is altered as the
result of the President's initiative.

For the remainder of FY 1989, reserve funds will be made
available to cover these supplements.  Program reference code
9284, "PRC Scientist Supplements," should be cited.

Information regarding the opportunity for these supplements will
be provided to the university community by the Division of Grants
and Contracts [(202) 357-9496].


					-- Ken Laws
					   (202) 357-9586

------------------------------

Date: Thu, 22 Jun 89 19:31:16 gmt
From: Ziqung Li <zql%aipna.edinburgh.ac.uk@NSFnet-Relay.AC.UK>
Subject: Testing 3-D Vision Nets: Call for help


The VisionNets is for 3-D object recognition and location
from range image. There is no limit for the types of surface
in an image, i.e. it is expected to deal with free-form objects
(but not those like tree leaves). It consists of 3 levels of 
neural nets based on the Hopfield models:

	1. Low level net: from a range image to surface
	   curvature images (work finished, tested using
	   real range data, well)
	2. Intermediate level net: from the suface curvature
	   images to invariant surface descriptions in
	   attributed grapghs (work finished, tested using
	   real range data, well)
	3. High level net: from the invariant surface
	   descriptions to object classification and
	   location (programs go through, tested in the 
	   recognition phase, work; but not the location 
	   phase owing to lack of a 3-D object model)
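
[ As referred to in 1. above, a conventional (non-neural) Python/NumPy
  sketch of the range-image -> curvature-image step, useful e.g. for
  generating test inputs: mean (H) and Gaussian (K) curvature from
  central finite differences.  The sign maps of H and K give the usual
  surface-type labels.  Real range data would need smoothing first;
  none is done here. ]

import numpy as np

def curvature_images(z):
    # z: range image, interpreted as a surface z = f(x, y)
    # (rows are y, columns are x).
    zy, zx = np.gradient(z)          # first derivatives
    zyy, _ = np.gradient(zy)         # second derivatives
    zxy, zxx = np.gradient(zx)
    g = 1.0 + zx**2 + zy**2
    H = ((1 + zy**2) * zxx - 2 * zx * zy * zxy + (1 + zx**2) * zyy) / (2 * g**1.5)
    K = (zxx * zyy - zxy**2) / g**2
    return H, K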

I am looking for geometric models to test the VisionNets as well
as the 3rd level net. Any information about available existing 3-D object 
models, such as CAD models of free-form objects or any other explicit 
models, would be greatly appreciated. I am also interested in a 
working environment in which this work can be carried on.

Thanks in advance.


Ziqing Li
Graduate Student
A.I. Department
Edinburgh University
U.K.
zql@aipna.ed.ac.uk
zql%uk.ac.edinburgh.aipna@ucl.cs.nss


------------------------------

Date: 23 Jun 89 07:55:06 GMT
From: munnari!latcs1.oz.au!suter@uunet.UU.NET (David Suter)
Subject: transputer based frame grabbers
Organization: Comp Sci, La Trobe Uni, Australia


I am interested in details of frame grabber cards that are transputer based.
Any info - particularly regarding data throughput to the transputers -
would be appreciated.
d.s.


David Suter                            ISD: +61 3 479-2393
Department of Computer Science,        STD: (03) 479-2393
La Trobe University,                ACSnet: suter@latcs1.oz
Bundoora,                            CSnet: suter@latcs1.oz
Victoria, 3083,                       ARPA: suter%latcs1.oz@uunet.uu.net
Australia                             UUCP: ...!uunet!munnari!latcs1.oz!suter
                                     TELEX: AA33143
                                       FAX: 03 4785814


------------------------------

Date: 	Mon, 26 Jun 89 17:06:48 EDT
From: Color and Vision Network <CVNET%YORKVM1.BITNET@psuvax1.psu.edu>
From: suzanne@skivs.ski.org
Subject: Travel fellowships for graduate students in vision research
	
                 Westheimer Symposium Travel Fellowships
                           August 10 - 12, 1989
	
     Thanks  to additional support from an NSF grant, we are  able  to
offer five travel fellowships to graduate students in vision research.
To  apply for support send a CV with a letter describing your  present
research,  and  a  supporting  letter  from  your  graduate   advisor.
Fellowships  will cover airfare (up to $500.00) and accommodations  at
the  Clark  Kerr  Campus.    Both  those  students  who  are  presently
registered  for  the symposium and those who will be  registering  are
eligible for these travel fellowships.  Unregistered students who  are
applying for the fellowships should submit a registration form (but no
money)  with their application.  Registered students selected for  the
fellowships will receive a refund.
	
     You  can receive additional registration forms by  calling  (415)
561-1637 or (415) 561-1620.
	
     We are pleased to announce that three additional scientists have
agreed to speak at the symposium: Dr. Russell DeValois, Dr. Barrie Frost,
and Dr. Ralph Freeman.
	
     Anyone  who  will  be  submitting  a  paper  to  the   Westheimer
Festschrift  edition of Vision Research is reminded that the  deadline
for  submission is August 10, 1989.  Papers may be sent to either  Ken
Nakayama  or  to  Suzanne  McKee  at  Smith-Kettlewell  Eye   Research
Institute, 2232 Webster St., San Francisco, CA, 94115, USA.
	

------------------------------

Date: 	Wed, 28 Jun 89 08:52:09 EDT
From: Color and Vision Network <CVNET%YORKVM1.BITNET@psuvax1.psu.edu>
Subject: Post-doctoral position in human and machine vision

	
	POST-DOCTORAL POSITION
	
	Applications are invited for a 2.5 year post-doctoral
	position to work on a European Community funded ESPRIT
	project on human and machine vision.  The project is
	interdisciplinary (psychophysics, computation and
	electrophysiology) and involves collaboration between some
	fifteen major laboratories in Europe.  The successful
	applicant for this post will take part in experimental and
	theoretical studies of binocular disparity, texture and optic
	flow as sources of information about 3-D structure and
	layout.  Salary will be in the range 10,500 - 16,700 U.K.
	pounds.
	
	Please contact Dr B.J.Rogers, Department of Experimental
	Psychology, University of Oxford, South Parks Road, Oxford.
	Phone: (44) 865 271368 or email BJR@vax.oxford.ac.uk
	

------------------------------

End of VISION-LIST
********************

Vision-List-Request@ADS.COM (Vision-List moderator Phil Kahn) (07/08/89)

Vision-List Digest	Fri Jul 07 14:15:02 PDT 89

 - Send submissions to Vision-List@ADS.COM
 - Send requests for list membership to Vision-List-Request@ADS.COM

Today's Topics:

 Vision-based solutions for the game of GO
 Summary of Image Processing Languages
 Re: Vision/Image Processing Languages
 E-mail addresses for range data

----------------------------------------------------------------------

Date: Thu, 29 Jun 89 17:43:25 -0500
From: uhr@cs.wisc.edu (Leonard Uhr)
Subject: Re:  Vision-based solutions for the game of GO

Al Zobrist used a vision-correlational approach to [the game of] GO in
his Ph.D.  thesis here (see diss abstracts).  He published this in one
of the Joint Computer Conferences (Eastern or Spring, around 1968-72).

------------------------------

Date: 29 Jun 89 19:14:47 GMT
From: mdavcr!rdr@uunet.UU.NET (Randolph Roesler)
Subject: Summary of Image Processing Languages
Keywords: image vision language
Organization: MacDonald Dettwiler, Richmond, B.C., Canada

A couple of weeks ago I asked the net for pointers to image processing
languages (not libraries).  I was looking for a computer programming
language that would let me express inter-image operations.  That is,
operations such as feature identification.

It seems that such things don't really exist.  All commercial packages
that claimed "language" were really function libraries (choose your
favorite language) with a little bit of control structure thrown in (a
bonus?).  These libraries perform image to image transformations.
They give you little access to the internals of an image.

Some systems were all control structure (KBVision as an example).
These systems provided good end user environments, but they are not good
for programmers or analysts trying to develop new image processing systems.
(KBVision is great!  Get it if you need an image processing system for
"image" engineers, but it's not really for programmers.)

Further, most products were tied to specialized hardware.  I work in
the research department and have to live with stock SUN 3/60s. So even
as libraries, most of these products are useless to us.

I did receive three informed responses to my query.  One person
suggested a set of unix filters (that he wrote) as a useful image
processing language.  Another suggested IDL (image description
language?).  The third forwarded seven good references on the
subject.  I am researching them now.  Maybe I'll write that image
processing language myself.

   =====included================

I am not familiar with any image languages, per se, although in the
past I've seen some references to them in the literature.  I have
written a general image processing software package (your image->image
transformations, mostly) which consists of a large number of UNIX
`filters' which can be connected up in relatively arbitrary ways using
the UNIX `pipe' facility.  If that sounds useful, let me know, and
I'll mail you some blurbs.

Mike Landy
SharpImage Software
P.O. Box 373, Prince St. Sta.
New York, NY   10012-0007
(212) 998-7857
landy@nyu.nyu.edu

   =====included================

Have you heard of IDL on a Sun/VAX?  Pretty good, but very few
high-level functions in either space.  Call
David Stern, Research Systems Inc. (IDL), 303-399-1326.

john j. bloomer <jbloomer@crd.ge.com, bloomer@mozart.crd.ge.com>

   =====included================

Take a look at PICASSO and PAL:
1. Z. Kulpa. PICASSO, PICASSO-SHOW and PAL: a development of a high-level
software system for image processing. Pages 13-24 in (3).
2. T. Radhakrishnan, R. Barrera, et al. Design of a high level language (L)
for image processing. Pages 25-40 in (3).
3. Languages and Architectures for Image Processing. Academic Press, 1981.
Editors: M. J. B. Duff, S. Levialdi.
4. MAC. Chapter four of (3).
5. A language for parallel processing of arrays. Chapter five of (3).
6. PIXAL. Chapter six of (3).
7. A high-level language for constructing image processing commands.
Chapter seven of (3).
Also: chapters nine, ten, eleven, and twelve of (3).

Adolfo Guzman.  International Software Systems, Inc.
9420 Research Blvd., Suite 200.   Austin, TX.  78759
Tel. (512) 338 1895    Telex: 499 1223 ISSIC   Fax: (512) 338 9713
issi!guzman@cs.utexas.edu  or guzman@issi.uucp  or cs.utexas.edu!issi!guzman

 ==========================

PS - my original posting never made it to Canada (or maybe just not to BC),
     so the response may have been smaller than it might otherwise have been.

 ==========================
Randy Roesler					...!uunet!van-bc!mdavcr!rdr
MacDonald Dettwiler				Up here in Canada.
Image Processing Specialists.			604-278-3411

------------------------------

Date: Fri, 30 Jun 89 12:16:29 GMT
From: H Wang <hw%dcs.leeds.ac.uk@NSFnet-Relay.AC.UK>

[Reply to Randy's original request for vision language information. pk...]

Subject: Re: Vision/Image Processing Languages

We developed an image-to-image language at Leeds on the transputer
array, called APPLY, which originally came from CMU for the Warp
machine.  APPLY performs localised window operations, e.g. edge
detection and 2D convolution.  The APPLY compiler generates OCCAM (for
transputer arrays), C (for UNIX machines) and W2 (for Warp) at the
moment, although it aims to be machine independent.  This language has
two major advantages: (1) it eases the programming effort, and (2)
efficiency.  For instance, it does better on the Canny edge detector
than hand-crafted code reported in the literature.

If you are interested, please contact me. I could not reach you by e-mail.
My address is:

   Mr. H Wang,
   School of Computer Studies, The University of Leeds, Leeds, LS2 9JT,
   phone: (0532) 335477 (in UK)
          +44 532 335477 (international)
   e-mail: (ArpaNet)hw%uk.ac.leeds.dcs@uk.ac.ucl.cs.nss
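
For readers unfamiliar with the class of operation APPLY expresses, here
is a minimal illustrative C sketch of a localised 3x3 window operation
applied to every pixel of an 8-bit image (this is not APPLY output and
not the Leeds code; names and conventions are hypothetical):

    /* Apply a 3x3 convolution kernel to every interior pixel of an
       8-bit image; border pixels are simply copied through.  This is
       the kind of per-pixel window operation a language like APPLY
       lets you state once for the whole image. */
    void window3x3(const unsigned char *in, unsigned char *out,
                   int width, int height, float kernel[3][3])
    {
        int x, y, i, j;

        for (y = 0; y < height; y++)
            for (x = 0; x < width; x++) {
                float sum = 0.0f;
                if (x == 0 || y == 0 || x == width - 1 || y == height - 1) {
                    out[y * width + x] = in[y * width + x];
                    continue;
                }
                for (j = -1; j <= 1; j++)
                    for (i = -1; i <= 1; i++)
                        sum += kernel[j + 1][i + 1] *
                               in[(y + j) * width + (x + i)];
                if (sum < 0.0f)   sum = 0.0f;
                if (sum > 255.0f) sum = 255.0f;
                out[y * width + x] = (unsigned char) sum;
            }
    }

With a kernel of nine 1.0/9.0 entries this is a 3x3 mean filter; an edge
mask gives a simple edge response instead.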

------------------------------

Date: 3 Jul 89 11:54:00 WET
From: John Illingworth <illing%ee.surrey.ac.uk@NSFnet-Relay.AC.UK>
Subject: e-mail addresses for range data

Hi, in recent Vision Lists there has been correspondence about range
image databases.  I wish to obtain further information about these
databases by email.  However, no email addresses have been given in the
Vision List.  Does anyone know the email address for

 M Rioux or L Cournoyer
 National Research Council of Canada
 Ottawa, Ontario, Canada

or

 Tom Henderson
 University of Utah

 many thanks  John Illingworth


------------------------------

End of VISION-LIST
********************

Vision-List-Request@ADS.COM (Vision-List moderator Phil Kahn) (07/15/89)

Vision-List Digest	Fri Jul 14 14:08:05 PDT 89

 - Send submissions to Vision-List@ADS.COM
 - Send requests for list membership to Vision-List-Request@ADS.COM

Today's Topics:

 Call for Papers: INNS/IEEE Conference on Neural Networks, Jan. 1990
 Intensive summer school on statistical pattern recognition

----------------------------------------------------------------------

Date: 10 Jul 89 04:39:00 GMT
From: lehr@isl.stanford.edu (Michael Lehr)
Subject: Call for Papers: INNS/IEEE Conference on Neural Networks, Jan. 1990
Summary: Papers requested for joint neural net conference in Washington DC
Keywords: conference, neural networks
Organization: Stanford University EE Dept.


                           CALL FOR PAPERS

	  International Joint Conference on Neural Networks
			   IJCNN-90-WASH DC

			 January 15-19, 1990,
			   Washington, DC


The Winter 1990 session of the International Joint Conference on
Neural Networks (IJCNN-90-WASH DC) will be held on January 15-19, 1990
at the Omni Shoreham Hotel in Washington, DC, USA.  The International
Neural Network Society (INNS) and the Institute of Electrical and
Electronics Engineers (IEEE) invite all those interested in the field
of neural networks to submit papers for possible publication at this
meeting.  Brief papers of no more than 4 pages may be submitted for
consideration for oral or poster presentation in any of the following
sessions:

APPLICATIONS TRACK:

  * Expert System Applications
  * Robotics and Machine Vision
  * Signal Processing Applications (including speech)
  * Neural Network Implementations: VLSI and Optical
  * Applications Systems (including Neurocomputers & Network
    Definition Languages)

NEUROBIOLOGY TRACK:

  * Cognitive and Neural Sciences
  * Biological Neurons and Networks
  * Sensorimotor Transformations
  * Speech, Audition, Vestibular Functions
  * Systems Neuroscience
  * Neurobiology of Vision

THEORY TRACK:

  * Analysis of Network Dynamics
  * Brain Theory
  * Computational Vision
  * Learning: Backpropagation
  * Learning: Non-backpropagation
  * Pattern Recognition


**Papers must be postmarked by August 1, 1989 and received by August
10, 1989 to be considered for presentation.  Submissions received
after August 10, 1989 will be returned unopened.**

International authors should be particularly careful to submit their
work via Air Mail or Express Mail to ensure timely arrival.  Papers
will be reviewed by senior researchers in the field, and author
notifications of the review decisions will be mailed approximately
October 15, 1989.  A limited number of papers will be accepted for
oral and poster presentation.  All accepted papers will be published
in full in the meeting proceedings, which is expected to be available
at the conference.  Authors must submit five (5) copies of the paper,
including at least one in camera-ready format (specified below), as
well as four review copies.  Do not fold your paper for mailing.
Submit papers to:

             IJCNN-90-WASH DC
             Adaptics
             16776 Bernardo Center Drive, Suite 110 B
             San Diego, CA  92128  UNITED STATES

             (619) 451-3752


SUBMISSION FORMAT:

Papers should be written in English and submitted on 8-1/2 x 11 inch
or International A4 size paper.  The print area on the page should be
6-1/2 x 9 inches (16.5 x 23 cm on A4 paper).  All text and figures
must fit into no more than 4 pages.  The title should be centered at
the top of the first page, and it should be followed by the names of
the authors and their affiliations and mailing addresses (also
centered on the page).  Skip one line, and then begin the text of the
paper.  We request that the paper be printed by typewriter or
letter-quality printer with clear black ribbon, toner, or ink on plain
bond paper.  We cannot guarantee the reproduction quality of color
photographs, so we recommend black and white only.  The type font
should be Times Roman or similar type font, in 12 point type
(typewriter pica).  You may use as small a type as 10 point type
(typewriter elite) if necessary.  The paper should be single-spaced,
one column, and on one side of the paper only.  Fax submissions are
not acceptable.

**Be sure to specify which track and session you are submitting your
paper to and whether you prefer an Oral or Poster presentation.  Also
include the name, complete mailing address and phone number (or fax
number) of the author we should communicate with regarding your
paper.**

If you would like to receive an acknowledgment that your paper has
been received, include a self-addressed, stamped post-card or envelope
for reply, and write the title and authors of the paper on the back.
We will mark it with the received date and mail it back to you within
48 hours of receipt of the paper.  Submission of the paper to the
meeting implies copyright approval to publish it as part of the
conference proceedings.  Authors are responsible for obtaining any
clearances or permissions necessary prior to submission of the paper.

------------------------------

Date: 14 Jul 89 14:05:00 WET
From: Josef Kittler <kittler%ee.surrey.ac.uk@NSFnet-Relay.AC.UK>
Subject: Intensive summer school on statistical pattern recognition


            INTENSIVE SUMMER SCHOOL
                      ON
        STATISTICAL PATTERN RECOGNITION

            11-15 September 1989 

             University of Surrey


                  PROGRAMME

The course is divided into two parts:
           
Course A   The Fundamentals of Statistical 
           Pattern Recognition
           
Course B   Contextual Statistical Pattern 
           Recognition  

Course A will cover  the basic methodology of 
statistical pattern recognition. Course B will feature a number of 
advanced topics concerned 
with the use of contextual information in pattern recognition, with a
particular emphasis on Markov models in speech and images. 

Several example classes will be aimed at familiarizing the participants 
with the material presented. The course will include a seminar on 
application of pattern recognition methods to specific problems in which a 
step by step description of the design of practical pattern recognition 
systems will be outlined. Ample time will be devoted to discussion of 
algorithmic and practical aspects of pattern recognition techniques.


   COURSE A: THE FUNDAMENTALS OF STATISTICAL PATTERN RECOGNITION

                       11-13 September 1989 


ELEMENTS OF STATISTICAL DECISION THEORY

Model of pattern recognition system. Decision theoretic approach to pattern
classification. Bayes decision rule for minimum loss and minimum error rate.
Sequential and sequential compound decision theory. Optimum error 
acceptance tradeoff. Learning algorithms.

NONPARAMETRIC PATTERN CLASSIFICATION

The Nearest Neighbour (NN) technique: 1-NN, k-NN, (k,k')-NN pattern 
classifiers. Error acceptance tradeoff for nearest neighbour classifiers.
Error bounds. Editing techniques. 
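
For readers new to the area, the 1-NN rule above amounts to assigning an
unknown feature vector the class of its closest training sample; a minimal
illustrative C sketch (array conventions are hypothetical, not course
material):

    /* 1-NN classification: return the label of the training sample
       nearest to q under squared Euclidean distance.
       samples is n x dim, stored row by row; labels[i] is the class
       of row i. */
    int nn_classify(const double *samples, const int *labels,
                    int n, int dim, const double *q)
    {
        int i, k, best = 0;
        double best_d2 = -1.0;

        for (i = 0; i < n; i++) {
            double d2 = 0.0;
            for (k = 0; k < dim; k++) {
                double diff = samples[i * dim + k] - q[k];
                d2 += diff * diff;
            }
            if (best_d2 < 0.0 || d2 < best_d2) {
                best_d2 = d2;
                best = i;
            }
        }
        return labels[best];
    }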

DISCRIMINANT FUNCTIONS

Discriminant functions and learning algorithms. Deterministic learning. The 
least square criterion and learning scheme, relationship with the 1-NN 
classifier. Stochastic approximation. Optimization of the functional form of 
discriminant functions.

ESTIMATION THEORY

Probability density function estimation: Parzen estimator, k-NN estimator,
 orthogonal function estimator. Classification error rate estimation:
 resubstitution method, leave-one-out method, error estimation based on 
unclassified test samples.

FEATURE SELECTION

Concepts and criteria of feature selection, interclass distance measures,
nonlinear distance metric criterion, probabilistic distance and dependence 
measures and their properties, probabilistic distance measures for 
parametric distributions, entropy measures (logarithmic entropy, square 
entropy, Bayesian distance), algorithms for selecting optimal and 
suboptimal sets of features, recursive calculation of parametric 
separability measures. Nonparametric estimation of feature selection 
criterion functions.

FEATURE EXTRACTION

Probabilistic distance measures in feature extraction, Chernoff 
parametric measure, divergence, Patrick and Fisher method. 
Properties of 
the Karhunen-Loève expansion, feature extraction techniques based on the
Karhunen-Loève expansion. Nonorthogonal mapping methods, nonlinear
mapping methods, discriminant analysis.

CLUSTER ANALYSIS

Concepts of a cluster, dissemblance and resemblance measures, globally
sensitive methods, global representation of clusters by pivot points and 
kernels, locally sensitive methods (methods for seeking valleys in 
probability density functions), hierarchical methods, minimum spanning tree 
methods, clustering algorithms. 

 ***************************************************************************


    COURSE B: CONTEXTUAL STATISTICAL PATTERN RECOGNITION 

                   14-15 September 1989 

INTRODUCTION

The role of context in pattern recognition. Heuristic approaches to contextual 
pattern recognition. Labelling of objects arranged in networks (chains, 
regular and irregular lattices). Neighbourhood systems. Elements of 
compound decision theory.

MODELS

Markov chains. Causal and noncausal Markov 
random fields (MRF). Gibbs distributions. Hidden Markov chain and 
random field models for speech and images.  
Simulation of causal Markov processes. Simulation of noncausal MRF: 
 The Metropolis algorithm.

DISCRETE RELAXATION

Compatibility coefficients. Concept of consistent labelling. Waltz discrete 
relaxation algorithm. Maximum aposteriori probability  (MAP) of joint 
labelling. Viterbi algorithm for Markov chains, dynamic programming.
Iterative algorithm for local MAP optimization in MRF. Geman and Geman 
Bayesian estimation  by stochastic relaxation, simulated annealing.

RECURSIVE COMPOUND DECISION RULES

MAP of labelling individual objects. Filtering and fixed-lag smoothing in 
hidden Markov chains. Baum's algorithm. Labelling in hidden Markov meshes 
and in Pickard random fields. Unsupervised learning of underlying model
parameters.

PROBABILISTIC RELAXATION

Problem specification. Combining evidence. Support functions for specific 
neighbourhood systems. Relationship with conventional compatibility and 
support functions (arithmetic average and product rule). Global criterion 
of ambiguity and consistency.  Optimization approaches to label probability 
updating (Rosenfeld, Hummel and Zucker algorithm, projected gradient 
method).

APPLICATIONS

Speech recognition. Image segmentation. Scene labelling. Texture 
generation.

 ************************************************************************


GENERAL INFORMATION

COURSE VENUE

University of Surrey, Guildford, United Kingdom

LECTURERS

Dr Pierre DEVIJVER      Philips Research Laboratory, Avenue
                        Em Van Becelaere 2, B-1170 Brussels, Belgium

Dr Josef KITTLER        Department of Electronic and Electrical
                        Engineering, University of Surrey,
                        Guildford GU2 5XH, England



PROGRAMME SCHEDULE

COURSE A will commence on Monday, September 11 at 10.00 a.m. (registration
9.00 - 10.00 a.m.) and finish on Wednesday, September 13 at 4 p.m. 
COURSE B will commence on Thursday, September 14 at 10.00 a.m. (registration 
9.00 - 10.00 a.m.) and finish on Friday, September 15 at 4 p.m. 

ACCOMMODATION

Accommodation for the participants will be available on the campus of the
University for the nights of 10-14 September at a cost of 27.80 pounds
per night covering dinner, bed and breakfast.



REGISTRATION AND FURTHER INFORMATION

Address registration forms and any enquiries to Mrs Marion Harris, 
Department of Electronic and Electrical Engineering, University of Surrey, 
Guildford GU2 5XH, England,
telephone 0483 571281 ext 2271. The right is reserved to cancel the course
or change the programme if minimum numbers are not obtained, or to limit
participation according to capacity. All reservations are handled on a
first-come, first-served basis.

WHO SHOULD ATTEND

The course is intended for graduate students, engineers, mathematicians, 
computer scientists, applied scientists, medical physicists and social 
scientists engaged in work on pattern recognition problems of practical 
significance. In addition programmers and engineers concerned with the 
effective design of pattern recognition systems would also benefit.
Applicants for COURSE A should have some familiarity with basic engineering
mathematics and some previous exposure to probability and statistics. 
Applicants for COURSE B only should have working knowledge of basic 
statistical pattern recognition techniques.

The material covered is directly relevant to applications in 
character recognition, speech recognition, automatic medical diagnosis, 
seismic data classification, target detection and identification, 
remote sensing, computer vision for robotics, and many other  
application areas.




------------------------------

End of VISION-LIST
********************

Vision-List-Request@ADS.COM (Vision-List moderator Phil Kahn) (07/22/89)

Vision-List Digest	Fri Jul 21 09:07:53 PDT 89

 - Send submissions to Vision-List@ADS.COM
 - Send requests for list membership to Vision-List-Request@ADS.COM

Today's Topics:

 Image Processing. Mathematical morphology.

----------------------------------------------------------------------

Date: 18 Jul 89 23:40:02 GMT
From: maurinco@iris.ucdavis.edu (Eric Maurincomme)
Subject: Image Processing. Mathematical morphology.
Organization: U.C. Davis - Department of Electrical Engineering and Computer Science



Has anyone heard of any public domain software tools for mathematical 
morphology ?
I am particularly interested in grayscale morphology operations.
Also, what would be a good newsgroup to send this message to ?

Thanks in advance,


|  Eric Maurincomme                                                      
|  Dept. of Electrical Engineering and Computer Science                  
|  University of California                                              
|  Davis, CA 95616.                                                      
|  e-mail address : maurinco@iris.ucdavis.edu                            
|  Phone : (916) 752-9706                                                


------------------------------

End of VISION-LIST
********************

Vision-List-Request@ADS.COM (Vision-List moderator Phil Kahn) (08/01/89)

Vision-List Digest	Mon Jul 31 10:25:00 PDT 89

 - Send submissions to Vision-List@ADS.COM
 - Send requests for list membership to Vision-List-Request@ADS.COM

Today's Topics:

 Resolution issues
 Satellite Image Data
 IJCAI 89 Update
 color processing
 defocussing & Fourier domain.
 Need time sequences sampled in various ways

----------------------------------------------------------------------

Date: 21 Jul 89 22:13:53 GMT
From: muttiah@cs.purdue.edu (Ranjan Samuel Muttiah)
Subject: Resolution
Organization: Department of Computer Science, Purdue University


 I am looking for reference material that may have been written
on issues pertaining to the relationship between machine vision resolution,
accuracy, and execution time.
Please reply by email.

Thanks

------------------------------

Date: Wed, 26 Jul 89 14:30:09-0000
From: Farzin Deravi <eederavi%pyramid.swansea.ac.uk@NSFnet-Relay.AC.UK>
Subject: Satellite Image Data

I need some satellite image data in a form portable to an IBM PC environment
for a student who is doing a project on region classification by texture. Could
you suggest where I can easily obtain such data?  By "easily" I mean not 
having to write letters/applications and preferably through email!
Many thanks for your help and advice.

                                   - - - - - - - - - - - - - - - - - - - -
Farzin Deravi,                   | UUCP  : ...!ukc!pyr.swan.ac.uk!eederavi|
Image Processing Laboratory,     | JANET : eederavi@uk.ac.swan.pyr        |
Electrical Engineering Dept.,    | voice : +44 792 295583                 |
University of Wales,             | Fax   : +44 792 295532                 |
Swansea, SA2 8PP, U.K.           | Telex : 48149                          |
                                   - - - - - - - - - - - - - - - - - - - -


------------------------------

Date: Wed, 26 Jul 89 15:39:06 EDT
From: dewitt@caen.engin.umich.edu (Kathryn Dewitt)
Subject: IJCAI 89 Update


CONFERENCE HIGHLIGHTS
Invited Speakers:
    Koichi Furukawa, ICOT will speak Monday, August 21, at 11:10am. The title
    of his talk is "Fifth Generation Computer Project:  Toward a Coherent
    Framework for Knowledge Information Processing and Parallel Processing".

    Gerald Edelman, Rockefeller University, will speak Monday August 21, 
    at 2:00pm. The title of his talk is"Neural Darwinism and Selective 
    Recognition Automata".
        
    E.D. Dickmanns, Universitaet der Bundeswehr Muenchen, will speak Wednesday,
    August 23, at 11:10am.  The title of his talk is "Real-Time Machine Vision
    Exploiting Integral Spatio-Temporal World Models".

    Enn Tyugu, Institute of Cybernetics, USSR, will speak Thursday, August 24,
    at 9:00am.  The title of his talk is "Knowledge-Based Programming 
    Environments"

    Fernando Pereira, AT&T Bell Laboratories, will speak Thursday, August 24,
    at 11:10am.  The title of his talk is "Interpreting Natural Language".

    Geoffrey Hinton, University of Toronto, will speak Friday, August 25, 
    at 11:10 am.  The title of his talk is "Connectionist Learning 
    Procedures".

Invited Panels:
    THE CHALLENGE OF NEURAL DARWINISM - Monday, August 21, 4:15pm.
            members:  Stephen W. Smoliar (chair), Linda Smith, David Zipser,
                      John Holland, and George Reeke

    ROBOT NAVIGATION - Tuesday, August 22, 9:00am
            members:  David Miller(chair), Rod Brooks, Raja Chatila, 
		      Scott Harmon, Stan Rosenschein, Chuck Thorpe, and 
		      Chuck Weisbin.

    HIGH-IMPACT FUTURE RESEARCH DIRECTIONS FOR ARTIFICIAL INTELLIGENCE
    Tuesday, August 22, 11:10am.
            members:  Perry Thorndyke(Chair), Raj Reddy, and Toshio Yakoi

    ARTIFICIAL INTELLIGENCE and SPACE EXPLORATION - Tuesday, August 22, 2:00pm
            members: Peter Friedland(chair), David Atkinson, John Muratore,
                     and  Greg Swietek.

    (HOW) IS AI IMPACTING MANUFACTURING? - Friday, August 25, 9:00am.
            members:  Mark Fox (chair), E.J. van de Kraatz, Dennis O'Connor,
                      and  Karl Kempf.

------------------------------

Date: Wed, 26 Jul 89 13:35:04 MET DST
From: mcvax!irst.it!bellutta@uunet.UU.NET (Paolo Bellutta)
Subject: color processing

I have a couple of problems about color processing.


First: What is the best way to compress 24 bit color images into 8 bit color
       images while always using the same colormap?  I tried assigning 3 bits
       each for red and green and 2 bits for blue, but the results are not
       very good (the image in general has a very high contrast).

Second: I want to compute from 24 bit rgb images one image that contains
        luminance information (Y = 0.299 * R + 0.587 * G + 0.114 * B) and
        another image with chrominance information (C = R / (R + G)).
        In parentheses I wrote what I'm using. 
        I found that in general the Y image has high contrast and the C
        image has poor color resolution. I mean that if two sides of an object
        have the same color but one is too dark, on the C image it is seen
        as black.
        Are there better algorithms to use?

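As a concrete restatement of the two computations described above (a
minimal C sketch; the 3-3-2 packing and the Y and C formulas are exactly
the ones quoted in the question, the function names are hypothetical):

    /* Pack 8-bit R,G,B into one byte: 3 bits red, 3 bits green,
       2 bits blue, as in the scheme described above. */
    unsigned char pack332(unsigned char r, unsigned char g, unsigned char b)
    {
        return (unsigned char)((r & 0xE0) | ((g & 0xE0) >> 3) | (b >> 6));
    }

    /* Luminance and chrominance as defined in the posting:
       Y = 0.299 R + 0.587 G + 0.114 B,  C = R / (R + G), scaled to 0..255. */
    void lum_chrom(unsigned char r, unsigned char g, unsigned char b,
                   unsigned char *y, unsigned char *c)
    {
        double denom = (double)r + (double)g;

        *y = (unsigned char)(0.299 * r + 0.587 * g + 0.114 * b + 0.5);
        *c = (denom > 0.0) ? (unsigned char)(255.0 * r / denom + 0.5) : 0;
    }

Chromaticity is more commonly normalised by the full sum (r = R/(R+G+B));
whether that helps with the dark-side problem described above would need
testing on the images in question.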

     ___                      ___
    /   )                    /   )
   /---                     /---\
  /   __   _    /  _       /     )  _    /  /       /_  /_  __
 /   (_/__(_)__/__(_)     /_____/  (-'__/__/__/_/__/___/___(_/_

I.R.S.T.
loc. Pante' di Povo
38050 POVO (TN)
ITALY

vox: +39 461 810105
fax: +39 461 810851

e-mail: bellutta@irst.uucp
        bellutta@irst.it

------------------------------

Date: Fri, 28 Jul 89 10:27:58 +0100
From: prlb2!ronse@uunet.UU.NET (Christian Ronse)
Subject: defocussing & Fourier domain.

Suppose that one takes a picture of an image with an out-of-focus lens.
The transformation from the original image to the blurred picture is
linear and translation-invariant (I think!). What is known about the
Fourier transform of this blurring transformation, in particular about
its phase spectrum?

Christian Ronse

Internet:			maldoror@prlb.philips.be
UUCP:				maldoror@prlb2.uucp
ARPA:				maldoror%prlb.philips.be@uunet.uu.net
				maldoror%prlb2.uucp@uunet.uu.net
BITNET:				maldoror%prlb.philips.be@cernvax
				maldoror%prlb2.uucp@cernvax

[ This is an interesting question.  Krotkov, Pentland, and Subbarao
  have looked at some of these issues as they relate to computer vision
  (Krotkov recently published a paper in IJCV, and Pentland in PAMI). 
  I assume that you mean translation-invariant in the plane (since 
  translation in depth is what causes the blurring).  Though lens effects
  undermine this (e.g., diffraction, lens defects), the plane translation
  invariance seems reasonable to me. 
	The blur function due to defocussing is in the optics literature.
  It has been approximated by some as a gaussian (which, not coincidentally,
  is well-suited for analytic treatment in the Fourier domain).  The spread
  function differs with the wavelength of light, and this introduces 
  some complexities.  Subbarao has addressed this issue, though I don't
  know of a specific reference (he is at SUNY Buffalo).
			phil...		]
  


------------------------------

Date: Mon, 31 Jul 89 09:46:35 PDT
From: althea!pxjim@uunet.uu.net (James Conallen)
Subject: Need time sequences sampled in various ways

Hi there,
  I just recently posted a request for image sequences on comp.graphics, and
a reply from Prof. Dave Chelberg (dmc@piccolo.ecn.purdue.edu) suggested I
post on the vision list.  I am looking for image sequences time-sequentially
sampled with different sampling patterns.  The ones I am interested in are:
	  lexicographic
	  2:1 line interlaced
	  2:1 dot interlaced
	  bit reversed line interlaced
	  bit reversed dot interlaced
I prefer 256x256x256 BW images, but I'm humble.  
	
	Can you offer me any help?
	
	-jim conallen
	BITNET: pxjim@widener
	UUCP:   pxjim@althea
	AT&T:   (215)499-1050

------------------------------

End of VISION-LIST
********************

Vision-List-Request@ADS.COM (Vision-List moderator Phil Kahn) (08/04/89)

Vision-List Digest	Thu Aug 03 10:59:09 PDT 89

 - Send submissions to Vision-List@ADS.COM
 - Send requests for list membership to Vision-List-Request@ADS.COM

Today's Topics:

 Re: defocussing & Fourier domain
 Defocusing
 Subbarao's address

----------------------------------------------------------------------

Date:        Tue, 1 Aug 89 09:44:38 BST
From: alien%VULCAN.ESE.ESSEX.AC.UK@CUNYVM.CUNY.EDU (Adrian F. Clark)
Subject:     Re: defocussing & Fourier domain


Christian Ronse asks about the effect of defocus blur on images.  This
topic was looked at in detail in the paper

   "Blind Deconvolution with Spatially Invariant Image Blurs with Phase"
   by T. Michael Cannon, IEEE Trans ASSP vol ASSP-24 no. 1 pp58-63 (1976).

In a nutshell, what you do is form the cepstrum (effectively the
logarithm of the power spectrum) and look for the zero crossings: the
defocus blur adds a Bessel function (actually J1(r)/r for a circular
aperture imaging system) pattern.  The same paper also treats linear
motion blur.

There are related papers of the same vintage by Cannon and colleagues
(including ones in Proc IEEE and Applied Optics, if I remember
correctly) which are also worth checking out.

   Adrian F. Clark
   JANET:  alien@uk.ac.essex.ese
   ARPA:   alien%uk.ac.essex.ese@nsfnet-relay.ac.uk
   BITNET: alien%uk.ac.essex.ese@ac.uk
   Smail:  Dept. of Electronic Systems Engineering, University of Essex,
           Wivenhoe Park, Colchester, Essex C04 3SQ, U. K.
   Phone:  (+44) 206-872432 (direct)

------------------------------

Date: Tue,  1 Aug 89 11:22:32 PDT
From: GENNERY@jplrob.JPL.NASA.GOV
Subject: Defocusing

This is in reply to the question from Ronse.  The point spread function
caused by an out of focus lens is an image of the aperture.  For a clear,
circular aperture, this is a uniform circular disk, neglecting lens
distortion, and the Fourier transform of this is a J1(x)/x function,
where J1 is the Bessel function of the first kind.  (See, for example,
D. B. Gennery, "Determination of Optical Transfer Function by Inspection
of Frequency-Domain Plot," Journal of the Optical Society of America 63,
pp. 1571-1577 (Dec. 1973).)  The actual apertures of cameras usually are
more polygonal than circular (because of the adjustable iris).  However,
a high-degree polygon can be approximated by a circle fairly well, so
the J1(x)/x function may be reasonably accurate in many cases.  But the
Gaussian function is not a good approximation to this, as can be seen by
the fact that its phase is always 0 and its amplitude decays rapidly,
whereas J1(x)/x oscillates in sign (thus its phase jumps between
0 and 180 degrees), with the amplitude decaying slowly.  Of course, if
the blurring from defocus is less than the blurring from other causes,
then what happens at the higher spatial frequencies doesn't matter much,
so almost any function will do.  But with a large amount of defocus,
the precise nature of the function is important.
				Don Gennery
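
As a purely numerical illustration of the J1(x)/x behaviour described
above, here is a minimal C sketch that tabulates the transfer function of
a flat circular blur of radius a (assumptions: uniform pillbox, radius in
pixels; j1() and M_PI are the usual UNIX maths-library items, so compile
with -lm):

    #include <stdio.h>
    #include <math.h>

    /* Transfer function of a uniform circular (pillbox) blur of radius a,
       normalised so that H(0) = 1:  H(f) = 2 J1(2 pi a f) / (2 pi a f).
       Its sign changes are what make the phase jump between 0 and 180
       degrees, unlike a Gaussian, whose transform stays positive. */
    int main(void)
    {
        double a = 5.0;              /* blur-circle radius in pixels */
        double f;

        for (f = 0.01; f <= 0.5; f += 0.01) {
            double x = 2.0 * M_PI * a * f;
            double h = 2.0 * j1(x) / x;
            printf("f = %5.2f   H(f) = %+8.4f\n", f, h);
        }
        return 0;
    }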

------------------------------

Date: Tue, 1 Aug 89 15:20:54 EDT
From: sher@cs.Buffalo.EDU (David Sher)
Subject: Subbarao's address

I just thought that I'd correct a small error in the last posting:
last I heard Subbarao was at SUNY Stonybrook.  
-David Sher

[ I apologize for this inadvertent error.  
		phil...		]

------------------------------

Date:         Wed, 02 Aug 89 15:29:36 PDT
From: Shelly Glaser <GLASER%USCVM.BITNET@CUNYVM.CUNY.EDU>
Subject:      Re: Vision-List delayed redistribution

Have you  tried any textbook on  modern optics? Try, for  example, J.  W.
Goodman's "Introduction to Fourier Optics" (McGraw, 1968).

If the geometrical-optics approximation will do, the FT of an out-of-focus
point is the FT of a circle function; it becomes more complicated as you
add diffraction.

                                                            Shelly Glaser

------------------------------

End of VISION-LIST
********************

Vision-List-Request@ADS.COM (Vision-List moderator Phil Kahn) (08/12/89)

Vision-List Digest	Fri Aug 11 18:06:48 PDT 89

 - Send submissions to Vision-List@ADS.COM
 - Send requests for list membership to Vision-List-Request@ADS.COM

Today's Topics:

 Two Research Posts - Computer Vision
 least squares fitting.
 Friend looking for an image processing job in a stable company
 Grayscale Morphology software

----------------------------------------------------------------------

Date: Mon, 7 Aug 89 16:00:24 BST
From: Bob Fisher <rbf%edai.edinburgh.ac.uk@NSFnet-Relay.AC.UK>
Subject: Two Research Posts - Computer Vision

                     University of Edinburgh
              Department of Artificial Intelligence

              Two Research Posts - Computer Vision


Applications are invited for  two  researchers  to  work  in  the
Department of Artificial Intelligence on a European Institute of
Technology funded research project entitled  ``Surface-Based  Ob-
ject Recognition for Industrial Automation''.  Principal investi-
gators on the project are Dr. Robert Fisher and Dr. John Hallam.

The project investigates the use of laser-stripe based range data
to  identify  and locate parts as they pass down a conveyor belt.
The vision research to be undertaken includes topics in:  surface
patch  extraction  from  range  data,  surface  patch clustering,
geometric object modeling, model matching,  geometric  reasoning.
The project builds on substantial existing research.

The first researcher will be expected to take a leading  role  in
the  day-to-day  project management of this and a related project
(5  research  staff  total)  as  well  as  undertake   scientific
research.  Applicants for this post should have a PhD (or compar-
able experience) in an appropriate area, such as computer vision,
artificial intelligence, computer science or mathematics.

The second researcher will be more involved in software implemen-
tation and testing, but will be expected to undertake some origi-
nal research.  Applicants should have at least a BSc  in  an  ap-
propriate area.

Both applicants should have experience  with  the  C  programming
language.   Applicants  with  experience  in computer vision, the
UNIX operating system, the C++ language, or the  Prolog  language
would be preferred.

Both posts are funded for a period of three  years  starting  No-
vember  1,  1989.   The salaries will be in the range 10458-16665
(AR1a) for the first post and 9816-12879 (AR1b/a) for the  second
post,  with placement according to age, experience and qualifica-
tions.

Applications should include a curriculum vitae (3 copies) and the
names  and  addresses  of two referees, and should be sent to the
Personnel Department, University of Edinburgh, 63  South  Bridge,
Edinburgh,  EH1  1LS by September 6, 1989, from whom further par-
ticulars can be obtained.  In  your  application  letter,  please
quote  reference  number  1651, and indicate for which of the two
posts you are applying.


------------------------------

Date: Tue, 8 Aug 89 10:46:06 CST
From: George Travan <munnari!sirius.ua.oz.au!gtravan@uunet.UU.NET>
Subject: least squares fitting.


I am interested in obtaining some pointers to C code which will do a least
squares fit on 2D and 3D shapes consisting of a number of discrete points.

Also, are there any good reference sources on 2D or 3D shape analysis?
I'm particularly interested in mirror imaging and shape difference
quantification.

thanx  -GeO           	George Travan
		        University of Adelaide
			AUSTRALIA      ACSnet:  gtravan@sirius.ua.oz
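
On the first request, one common starting point is an orthogonal (total)
least-squares line fit through the centroid and principal axis of the
point scatter; a minimal illustrative C sketch (not the shape-analysis
code asked about, and 2-D only):

    #include <stdio.h>
    #include <math.h>

    /* Fit a line to n 2-D points by orthogonal least squares: the line
       passes through the centroid (cx,cy) in direction (dx,dy), the
       principal axis of the 2x2 scatter matrix. */
    void fit_line(const double *x, const double *y, int n,
                  double *cx, double *cy, double *dx, double *dy)
    {
        double sxx = 0.0, syy = 0.0, sxy = 0.0, theta;
        int i;

        *cx = *cy = 0.0;
        for (i = 0; i < n; i++) { *cx += x[i]; *cy += y[i]; }
        *cx /= n;
        *cy /= n;

        for (i = 0; i < n; i++) {
            double u = x[i] - *cx, v = y[i] - *cy;
            sxx += u * u;  syy += v * v;  sxy += u * v;
        }

        theta = 0.5 * atan2(2.0 * sxy, sxx - syy);   /* principal axis */
        *dx = cos(theta);
        *dy = sin(theta);
    }

    int main(void)
    {
        double x[] = { 0.0, 1.0, 2.0, 3.0, 4.0 };
        double y[] = { 0.1, 0.9, 2.1, 2.9, 4.2 };
        double cx, cy, dx, dy;

        fit_line(x, y, 5, &cx, &cy, &dx, &dy);
        printf("centroid (%g, %g), direction (%g, %g)\n", cx, cy, dx, dy);
        return 0;
    }

The same idea extends to 3-D via the eigenvectors of the 3x3 scatter
matrix, and aligning two point sets in a least-squares sense is usually
done with a similar eigenvector or SVD decomposition.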

------------------------------

Date: 10 Aug 89 17:15:48 GMT
From: hplabs!tripathy@hpscdc.hp.com (Aurobindo Tripathy)
Subject: Friend looking for an image processing job in a stable company
Organization: Hewlett-Packard, Santa Clara Div.



	For all you folks doing work in computer vision in the industry,

I have a question! ...Why is this group so quiet ? There are never any

[		     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
	Good question.

				phil...]

issues discussed here. Does everybody work for the military? ...

Let me make a start. I have a friend looking for a job in the image
processing area, with a solid background in image processing hardware
design and an excellent understanding of image processing algorithms.
He has about six years of experience in the industry. Can anyone
recommend a stable :-) imaging / image processing company?

aurobindo

------------------------------

Date: Thu, 10 Aug 89 15:30:03 pdt
From: maurinco@iris.ucdavis.edu (Eric Maurincomme)
Subject: Grayscale Morphology software


About 3 weeks ago, I posted a query about any existing public domain
morphology software. Firstly, I would like to thank all the people who
replied to my query by giving me advice or answers.

Secondly, it appears that there is no public domain software for morphology
around. I was principally interested in grayscale operations.
Most of the replies I got were about general purpose software packages, 
in which a few morphological operations are implemented.

I will try to give a brief compilation of the answers I got :

At the University of Washington, Linda Shapiro and her colleagues use
a software package called GIPSY, which has about 400 commands, including 
morphology, and that runs under UNIX. This is a general purpose package 
that runs slowly but covers a lot of ground. The morphology is just one 
command that can do dilations, erosions, openings and closings with the 
user defining his own structuring element by means of entering a mask.
Finally, it costs something like $5000. 

A few people at the University of Maryland working with Rosenfeld refered
me to a few existing software packages. There is one written by Serra's
team. It's called MORPHOLOG, or its new version which is called VISILOG;  
the latter one is on sale by a French company (NOESIS) for about $8000.
If you want more information on MORPHOLOG, you may want to contact La"y 
at the School of Mines in Paris. The software works on an hexagonal grid, 
and a description of it can be found in :
B. La"y, Descriptors of the programs of the Software Package Morpholog, 
Ecole des Mines, Paris.

They also referred me to an Image Processing Software package called IPS,
which was created by a French lab in Grenoble; it has quite a few
binary and grayscale morphological operations running on it.
It has been developed on Apollo workstations, and is on sale for about
40000 French Francs, which is about $6000.
Apparently the same software has also been ported to the PC, where it
is sold by the company Thomson-TITN under the name SAMBA.
If you want more information on IPS, you may want to contact Guy Bourrel
at bourrel@imag.imag.fr, who is involved in the development of this software.
His address is 
 Guy Bourrel
 Equipe de Reconnaissance des Formes et de Microscopie Quantitative
 Universite Joseph Fourier
 CERMO  BP 53X
 38041 Grenoble cedex
 France  tel 76-51-48-13

Another general purpose image processing software package which includes 
some of the basic morphology stuff is called HIPS, and is commercialized  
by Mike Landy at Sharpimage Software in New York.

Finally, a word of comment to tell the netters why we are looking for
grayscale morphology tools. We have implemented some binary morphology
in our Image Processing lab (now called CIPIC, the Center for Image
Processing and Integrated Computing, a campus-wide research unit).
It runs on an IP8500 image processing board mounted on a VAX. It does
all kinds of opening/closing and dilation/erosion, and can be used for
skeletonization, etc.
The next step is to implement grayscale morphology. I am working with
Professor Ralph Algazi (algazi@iris.ucdavis.edu), and we wanted to know
what the state of the art is in this area.
Thanks for listening,

Eric.


|  Eric Maurincomme
|  Dept. of Electrical Engineering and Computer Science
|  University of California
|  Davis, CA 95616.
|  e-mail address : maurinco@iris.ucdavis.edu
|  Phone : (916) 752-9706
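
For readers who know only the binary case: with a flat structuring
element, grey-scale dilation and erosion reduce to moving maximum and
minimum filters over the neighbourhood.  A minimal illustrative C sketch
(not taken from any of the packages mentioned above; square structuring
element, replicated borders):

    /* Value of the dilation (use_max = 1) or erosion (use_max = 0) of an
       8-bit image at pixel (x,y), using a flat (2r+1) x (2r+1) square
       structuring element; coordinates are clamped at the borders. */
    static unsigned char flat_morph(const unsigned char *in,
                                    int width, int height,
                                    int x, int y, int r, int use_max)
    {
        int i, j;
        unsigned char best = in[y * width + x];

        for (j = -r; j <= r; j++)
            for (i = -r; i <= r; i++) {
                int xx = x + i, yy = y + j;
                unsigned char v;
                if (xx < 0) xx = 0;
                if (xx >= width) xx = width - 1;
                if (yy < 0) yy = 0;
                if (yy >= height) yy = height - 1;
                v = in[yy * width + xx];
                if (use_max ? (v > best) : (v < best))
                    best = v;
            }
        return best;
    }

    /* Dilate (use_max = 1) or erode (use_max = 0) a whole image.
       Opening is erosion followed by dilation; closing is the reverse. */
    void grey_morph(const unsigned char *in, unsigned char *out,
                    int width, int height, int r, int use_max)
    {
        int x, y;

        for (y = 0; y < height; y++)
            for (x = 0; x < width; x++)
                out[y * width + x] =
                    flat_morph(in, width, height, x, y, r, use_max);
    }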


------------------------------

End of VISION-LIST
********************

Vision-List-Request@ADS.COM (Vision-List moderator Phil Kahn) (08/19/89)

Vision-List Digest	Fri Aug 18 10:18:42 PDT 89

 - Send submissions to Vision-List@ADS.COM
 - Send requests for list membership to Vision-List-Request@ADS.COM

Today's Topics:

 ICIP'89 (IEEE 1989 Int Conf on Image Processing) last announcement
 GYPSY
 MORPHOLOG and VISILOG update information
 some ideas on image analysis methods & vision
 Sensor Fusion
 Postdoctoral Positions-- Yale University

----------------------------------------------------------------------

Date: Mon, 14 Aug 89 14:12:02 -0500
From: teh@cs.wisc.edu (Cho-huak Teh)
Subject: ICIP'89 (IEEE 1989 Int Conf on Image Processing) last announcement

[ I significantly shortened this announcement; please contact 
  teh@cs.wisc.edu for a full application form and more information.
		phil... 		]

This is the final announcement through email of IEEE ICIP'89 to be held
in Singapore from 5 to 8 September next month.  There will be presentations
of about 250 papers from 25 countries covering 26 dynamic and high-tech
topics of image processing.  The Conference and its related activities
will be conducted in English.

(A) In-depth tutorials will be held from 5 to 6 Sept :

The technical sessions include :
- Biomedical Image Processing I & II
- Applications of Machine Vision
- Computer Graphics
- 3D Vision I & II
- Image Coding I, II & III
- Feature Extraction I & II
- Character Recognition
- Image Registration
- Image Segmentation
- Artificial Intelligence Techniques
- Systems and Architectures I & II
- Edge Detection
- Image Enhancement and Restoration
- Remote Sensing
- Object Recognition
- 2D Signal Processing
- Dynamic Vision I & II
- Pattern Recognition
- Video Communications


------------------------------

Date: Wed, 16 Aug 89 03:29:55 GMT
From: us214777@mmm.3m.com (John C. Schultz)
Subject: GYPSY
Organization: 3M - St. Paul, MN  55144-1000 US

In a recent article I noticed a reference to GYPSY. I and my lab had the
misfortune to purchase GYPSY several years ago (for quite a bit more
than $5000).  My opinion is that it is not worth $.10.

Our version ran very slowly on a VAX 11/780 under VMS.  All operations 
involved reading and writing disk files so disk performance was
critically important.  Even such simple tasks as negating an image would
chew up another 512 x 512 byte disk chunk (for example).  This filled up
disks very rapidly, needless to say.

The FORTRAN (or RATFOR, I forget which) made it difficult for me to
maintain, although I did manage to add some modules.  The difficulty was
the semi-infinite number of functions that arguments were passed through
before reaching the generally small routine that did the actual number
crunching.

Perhaps the most disappointing aspect was that while there were lots of
seemingly neat functions (mostly based on Haralick's facet models if you
like those), the documentation only told you what the functions did, not
why or what preprocessing was required to permit operations.  

For example, I never got the region-adjacency graph to work because I
could never get the proper file(s) preprocessed correctly.  

My feeling is that GYPSY is intended to be used only by Haralick's
former students and close associates.  I do not fall in either category.
We use a general purpose mathematical processing package called IDL
with the VAX.  On smaller systems, vision work uses memory-mapped devices
with another image processing package I won't recommend.

Your mileage may vary.

------------------------------

Date: Wed, 16 Aug 89 17:37:29 +0200
From: mohr@saturne.imag.fr (Roger Mohr)
Subject: MORPHOLOG and VISILOG update information


A previous posting said:

"A few people at the University of Maryland working with Rosenfeld refered
me to a few existing software packages. There is one written by Serra's
team. It's called MORPHOLOG, or its new version which is called VISILOG;
the latter one is on sale by a French company (NOESIS) for about $8000.
If you want more information on MORPHOLOG, you may want to contact La"y
at the School of Mines in Paris. The software works on an hexagonal grid,
and a description of it can be found in :
B. La"y, Descriptors of the programs of the Software Package Morpholog,
Ecole des Mines, Paris."

This has to be updated:

	First of all, Bruno La"y is no longer at Ecole des Mines but with his
own company: 	     Noesis
		      centre d'affaires de Jouy
		     5 bis rue du Petit-Robinson
		     78950 Jouy en Josas, France
			tel :(33)(1)34 65 08 95

	The product Visilog is also distributed by  
		Noesis Vision Inc
		6800 Cote de Liesse
		Suite 200
		Montreal, Que, H4T 2A7
		CANADA
			tel (514) 345 14 00

	It runs not only on a hexagonal grid but also on a standard rectangular
grid; it runs on PCs with MS-DOS and almost all Unix workstations, supports
several external devices like Matrox or Imaging, and integrates more than 200
functions, a few tens of which are devoted to mathematical morphology
(including grey level).
	Several academic research labs in France are using this software.

	I have no information about the prices, but usually you can get an
academic discount.

	Roger Mohr

------------------------------

Date: Thu, 17 Aug 89 16:00:36 +0100
From: prlb2!ronse@uunet.UU.NET (Christian Ronse)
Subject: some ideas on image analysis methods & vision

I have written down some ideas on the relevance of certain image analysis
methodologies (Fourier analysis and mathematical morphology) to vision.
They are not finalized, but a few people around have told me that the question
is interesting. I would like to have other people's thoughts on the subject.
So, if you think you have something to say about it, feel free to ask me a
copy of my working document, and if you are brave enough, send back any
comments.

To get that document, send me your complete PHYSICAL ("snail") mail address,
not the electronic one (I will not send source files, only printed text).
Don't forget your country, Belgian postmen can't guess it.


	PRLB Working Document WD54, June 1989

	Fourier analysis, mathematical morphology, and vision

Abstract:
Two opposite orientations in image analysis are given on the one hand by
linear filtering, spectrometry, and Fourier analysis, and on the other
hand by mathematical morphology, which emphasizes order relations and
set-theoretical properties. The former derives its appeal from its wide
application in the processing of sound signals, while the latter has
been successfully used in the analysis of materials or in cytology. We
make a fundamental study of issues at hand in the choice of such
methodologies in image analysis and vision. We start by outlining the
difference in purpose of vision and audition and its physical basis, the
scattering of waves. We criticize Serra's arguments on this matter. Then
we consider the general limitations of linear filtering methodologies
and the unsuitability of phase-independent spectrometry. We propose a
paradigm of concurrent processing and of sorting of information rather
than a single sequence of processing modules with a controlled loss of
information. Finally we analyse the domain of applicability of
mathematical morphology to the visual process and suggest that it is
restricted to certain types of tasks. 

Christian Ronse

Internet:			maldoror@prlb.philips.be
UUCP:				maldoror@prlb2.uucp
ARPA:				maldoror%prlb.philips.be@uunet.uu.net
				maldoror%prlb2.uucp@uunet.uu.net
BITNET:				maldoror%prlb.philips.be@cernvax
				maldoror%prlb2.uucp@cernvax

Philips Research Laboratory Brussels
Av. E. Van Becelaere, 2 b. 8
B-1170 Brussels, Belgium

------------------------------

Date: Thu, 17 Aug 89 13:51 EDT
From: Bartholomew Tschi-Quen 5C40 <tschi-quen@lewis.crd.ge.com>
Subject: Sensor Fusion


I would like to know if anyone on this list has information
concerning sensor fusion, since we are looking into this area
and are very much interested in it.  Thank you.

	-Tschi-Quen, Tech. liaison
Com. Vision group, GE

[ You might check the Proceedings of the AAAI 1987 Workshop on 
  Spatial Reasoning and Sensor Fusion, Oct. 1987, Pheasant Run Resort, 
  St. Charles, IL; also, Rosenfeld's bibliography in CVGIP is always
  useful (better than Science Citation Index). What are other sources?
			phil...	]

------------------------------

Date: Fri, 18 Aug 89 11:41 EDT
From: DUNCAN%DUNCAN@Venus.YCC.Yale.Edu
Subject: Postdoctoral Positions-- Yale University

                          YALE UNIVERSITY
            Postdoctoral Positions in Medical Image Analysis 

One to two positions are open within a research group interested in
developing computer vision- and image understanding- based approaches
to several medical image analysis problems. We are particularly
interested in using model-based optimization strategies for locating
and quantifying anatomical structure, and are in the process of
extending these ideas to handle three-dimensional and four-dimensional
data sets now becoming available from several diagnostic imaging
modalities (including Magnetic Resonance). The group has four faculty
members performing medical image processing/image analysis research, 8
Ph.D.  students and 2 full-time programmers. The positions are joint
between the Departments of Diagnostic Radiology and Electrical
Engineering. In addition, the research group has strong ties with
faculty members in the Computer Science Department. Those who apply
should have a Ph.D. in Electrical Engineering or Computer Science,
preferably with a strong programming background and some familiarity
with, and coursework in, image processing and computer vision. The
initial appointment will be for one year, renewable for a second year
contingent upon the availability of funds and by mutual agreement.
Salary will be based on background and experience, but is expected to
be in the $28K - $32K range.  Review of applications will begin
immediately and will be accepted until the positions are filled.
Applicants should send a resume and the names and addresses of three
references to: Professor James Duncan, Departments of Diagnostic
Radiology and Electrical Engineering, Yale University, 333 Cedar Street
(327 BML), New Haven, Connecticut, 06510, and/or contact him at
Duncan@Venus.YCC.Yale.edu.

------------------------------

End of VISION-LIST
********************

Vision-List-Request@ADS.COM (Vision-List moderator Phil Kahn) (08/26/89)

Vision-List Digest	Fri Aug 25 09:59:27 PDT 89

 - Send submissions to Vision-List@ADS.COM
 - Send requests for list membership to Vision-List-Request@ADS.COM

Today's Topics:

 Postdoctoral Research Position

----------------------------------------------------------------------

Date: 	Mon, 21 Aug 89 12:26:30 EDT
From: Color and Vision Network <CVNET%YORKVM1.bitnet@ugw.utcs.utoronto.ca>
Subject: Postdoctoral Research Position
	
             POSTDOCTORAL RESEARCH POSITION
	
Low-level vision/Face recognition/Neural networks/Edinburgh

Applications are invited for a FOUR YEAR research post with Dr Michael
Morgan at the University of Edinburgh and Dr Roger Watt at the University
of Stirling. The successful applicant would be based at Edinburgh. The post
is supported by a special grant from the SERC, "Recognition of faces
using principles of low-level Vision". The aim of the project is to
apply the Watt/Morgan "MIRAGE" spatial filtering algorithm to face
recognition, using MIRAGE spatial primitives as an input to a neural
network. The ideal applicant would have UNIX/C programming experience and
a background in visual psychophysics, but appropriate training could be
provided in one of these areas if necessary.
	
Starting salary is in the region of 11K (UKL), depending on age and
experience. Overseas applicants would be given help in finding suitable
living accommodation in Edinburgh.
	
Applications, with CV and names of THREE referees, should be sent to:
M.J. Morgan, 135 Mayfield Road, EDINBURGH EH93AN, Scotland. Preliminary
enquiries may be made by Email to: MJM@STIR.CS (JANET).


------------------------------

End of VISION-LIST
********************

Vision-List-Request@ADS.COM (Vision-List moderator Phil Kahn) (09/01/89)

Vision-List Digest	Thu Aug 31 21:12:58 PDT 89

 - Send submissions to Vision-List@ADS.COM
 - Send requests for list membership to Vision-List-Request@ADS.COM

Today's Topics:

 GIPSY
 Re: some ideas on image analysis methods & vision

----------------------------------------------------------------------

Date: Sat, 26 Aug 89 15:15:28 PDT
From: rameshv@george.ee.washington.edu (Ramesh Visvanathan)
Subject: GIPSY
Organization: University of Washington

A recent article referenced GIPSY and noted that it ran very slowly
under VMS.

GIPSY was primarily written to provide a flexible environment for the
researcher and it was intended to be very user-friendly.  Speed was
not of primary concern at the time of design of GIPSY.  While all the
old GIPSY code was written in RATFOR (Rational Fortran), new code
being added to GIPSY is written in C.  Further new GIPSY code provides
for dynamic memory allocation, unlike the old RATFOR version.  Hence,
new GIPSY uses less memory.

As far as usage of disk space is concerned, unless the system we are
working with has unlimited image buffers one has to use the system's
disk space to save processed images.  From the GIPSY environment the
user can delete or compress unwanted images.

I agree that the old version of GIPSY was slow because of
reading/writing from disk files.  We are currently attempting to speed
up GIPSY by using a large internal buffer to reduce the number of disk
I/O operations.  We expect the speedup to be substantial.

Regarding GIPSY's documentation and ease of use: with SUN GIPSY we now
have a set of demo files that instruct the user about GIPSY file
formats and the GIPSY commands which use them.  In addition, each GIPSY
command is exercised by a GIPSY runfile (a batch file that tests the
command), and the runfile often shows the kind of preprocessing the
command requires.  The documentation files may not explain the
necessary preprocessing, but the runfiles give the sequence of GIPSY
preprocessing commands that can be used to generate a test data set
for a particular GIPSY command.  Below are the documentation file for
the GIPSY RAG command and the run file used to test it.

RAG.DOC
 
*RAG      Region adjacency graph

 VERSION: A.01  DATE: 09-15-80     AUTHOR: LINDA SHAPIRO , T.C.PONG

 ACTION: Given a symbolic image, the RAG command outputs two random-access
         files representing the region adjacency graph of the image.  The
         point file contains, for each region, a pointer to that region's
         adjacency list and the length of the list: point(i) has two
         fields, the first being the pointer to the adjacency list for
         region i in the link file and the second the number of regions
         in that list.  The link file contains 16 region numbers
         (integers) per record; each adjacency list starts on a new record.

 SOURCE: Disk, input file name ( symbolic )
 DESTINATION: Disk, 2 Random access files: point file and link file
                 point file -- integer records
                      RECORD I: pointer and number of neighbors for
                                region i
                 link file -- integer records
                      records in the link file are pointed to by the
                      point file; there are 16 elements/record.

 FLAGS: (E) If the E flag is used then four neighbors are used.
        (F) If the F flag is used then eight neighbors are used.

 QUESTIONS: (1)  The user is asked which band of the image to process and
                for two integers representing the highest and lowest
                numbered regions to be processed.

            (2)  An option is given for four- or eight-neighbor adjacency.

 COMMAND STRING EXAMPLE:

     RAG POINT.FILE , LINK.FILE  < IMAGE.LBL

     Creates a region adjacency graph for the symbolic image named
     IMAGE.LBL.  Put the pointers and lengths of the adjacency lists
     in POINT.FILE and the elements of the lists in LINK.FILE
     (in binary).

 ALGORITHM: For each line i in the image
                 for each pixel labeled j in line i
                      for each pixel labeled j' that is horizontally
                           adjacent to the pixel labeled j on line i
                           or vertically adjacent to it on line i+1
                                add j to adjacency list j' and
                                add j' to adjacency list j
                      end
                 end
            end

 COMMENTS: Currently the point and link files are binary files and
           are initialized to have a maximum of 2000 records.
           The point file contains, for each region, a pointer to that
           region's adjacency list and the length of the list: point(i)
           has two fields, the first being the pointer to the adjacency
           list for region i in the link file and the second the number
           of regions in that list.  The link file contains 16 region
           numbers (integers) per record; each adjacency list starts on
           a new record.

"rag.run"


$ ! TESTING THE COMMAND RAG
$ ! CREATE A CHECKERBOARD AND MAKE IT A SYMBOLIC IMAGE
$ !
$      MKCHK CHK.SYM
       10 10 5 5 1 2 0
$ !
$ !
$ ! USE THE EXSIF COMMAND TO CHANGE THIS IMAGE TO SYMBOLIC IMAGE
$ !
$       EXSIF
        OPEN CHK.SYM
        PROT OFF
        MID 1 1
        MID 2 2
        MID 18 1
        DONE
$ !
$ ! TESTING THE COMMAND RAG
$ !
$       RAG CHK.PT4,CHK.LK4 < CHK.SYM
        4
$ !
$ !  TEST THE COMMAND RAG USING THIS PROPERTY FILE
$ !
$      PRTRAG TT<CHK.PT4,CHK.LK4 (A)
$ !
$ !
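
For readers who want to try the adjacency construction outside GIPSY, here
is a minimal sketch in C (an illustration of the ALGORITHM section above,
not GIPSY code; the image size, labels, and output format are made up for
the example).  It builds the 4-neighbor region adjacency relation for a
small labeled image held in memory:

/* Minimal sketch of 4-neighbor region adjacency graph construction.
 * Illustrative code only, not part of GIPSY: adjacency is kept as a
 * boolean matrix rather than as GIPSY point/link files. */
#include <stdio.h>

#define W      6
#define H      6
#define MAXREG 16

static int adj[MAXREG][MAXREG];   /* adj[i][j] != 0 iff regions i and j touch */

static void add_pair(int a, int b)
{
    if (a != b) {
        adj[a][b] = 1;
        adj[b][a] = 1;
    }
}

static void build_rag(int img[H][W])
{
    int x, y;

    for (y = 0; y < H; y++)
        for (x = 0; x < W; x++) {
            if (x < W - 1) add_pair(img[y][x], img[y][x+1]);  /* horizontal */
            if (y < H - 1) add_pair(img[y][x], img[y+1][x]);  /* vertical   */
        }
}

int main(void)
{
    /* a 2 x 2 checkerboard of four regions, labeled 1..4 */
    int img[H][W] = {
        {1,1,1,2,2,2},
        {1,1,1,2,2,2},
        {1,1,1,2,2,2},
        {3,3,3,4,4,4},
        {3,3,3,4,4,4},
        {3,3,3,4,4,4}
    };
    int i, j;

    build_rag(img);

    for (i = 1; i <= 4; i++) {
        printf("region %d adjacent to:", i);
        for (j = 1; j <= 4; j++)
            if (adj[i][j]) printf(" %d", j);
        printf("\n");
    }
    return 0;
}

Writing the adjacency lists out as point/link records (16 integers per
record, each list starting on a new record) is then a straightforward
serialization of the adj matrix.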


The GIPSY documentation is constantly being updated by the people using
the system in our lab.  Whenever someone has had a question about a
command that the documentation did not explain clearly, the
documentation has been updated.


If anyone using GIPSY has questions they can send mail to:

gipsy@george.ee.washington.edu


------------------------------

Date: Sun, 27 Aug 89 14:09:32 +0200
From: Ingemar Ragnemalm <ingemar@isy.liu.se>
Subject: Re: some ideas on image analysis methods & vision

A previous posting wrote:

>I have written down some ideas on the relevance of certain image analysis
>methodologies (Fourier analysis and mathematical morphology) to vision.
>They are not finalized, but a few people around have told me that the question
>is interesting. I would like to have other people's thoughts on the subject.
>So, if you think you have something to say about it, feel free to ask me a
>copy of my working document, and if you are brave enough, send back any
>comments.

>	PRLB Working Document WD54, June 1989
>	Fourier analysis, mathematical morphology, and vision

>Abstract:
>etc...

Yes, please, I would like to read your working document. In my opinion this
is the kind of work that is needed in computer vision presently. I'm not
sure that morphology should be done away with, as you suggest, but rather
combined with the signal processing methods. However, it is hard to comment
only on the abstract.

Btw, I'm a member of Prof. Per-Erik Danielsson's image processing lab. I'm
primarily working with morphology algorithms, and I have published a few
conference papers. I believe that I'm knowledgeable enough to comment on
your work.

Please send the copy to:

Ingemar Ragnemalm
Dept of Electrical Engineering
Link|ping University
S-58183 Link|ping
SWEDEN
                                                . .
where the "|" are "o" with two dots above, like: O

Yours,
Ingemar

Dept. of Electrical Engineering	     ...!uunet!mcvax!enea!rainier!ingemar
                  ..
University of Linkoping, Sweden	     ingemar@isy.liu.se


------------------------------

End of VISION-LIST
********************

Vision-List-Request@ADS.COM (Vision-List moderator Phil Kahn) (09/09/89)

Vision-List Digest	Fri Sep 08 09:30:27 PDT 89

 - Send submissions to Vision-List@ADS.COM
 - Send requests for list membership to Vision-List-Request@ADS.COM

Today's Topics:

 NIPS Registration

----------------------------------------------------------------------

Date: Thu, 7 Sep 89 07:36:41 -0400
From: jose@tractatus.bellcore.com (Stephen J Hanson)
Subject: NIPS Registration

    ****   NIPS89 Update   ****

We've just finished putting the program for the conference together and
have a preliminary program for the workshops.  A mailing to authors
will go out this week, with registration information.  Those who
requested this information but are not authors will hear from us starting
in another week.  If you received a postcard from us acknowledging
receipt of your paper, you are on our authors' mailing list.
If you haven't requested the registration packet, you can do so
by writing to

Kathie Hibbard
NIPS89 Local Committee
University of Colorado Eng'g Center
Campus Box 425
Boulder, CO  80309-0425

------------------------------

End of VISION-LIST
********************

Vision-List-Request@ADS.COM (Vision-List moderator Phil Kahn) (09/19/89)

Vision-List Digest	Mon Sep 18 10:39:25 PDT 89

 - Send submissions to Vision-List@ADS.COM
 - Send requests for list membership to Vision-List-Request@ADS.COM

Today's Topics:

 neural nets and optical character recognition
 cfp - 6th IEEE Conference on AI Applications

----------------------------------------------------------------------

Date: 14 Sep 89 09:40:29 GMT
From: Smagt v der PPP <mcvax!cs.vu.nl!smagt@uunet.UU.NET>
Subject: neural nets and optical character recognition
Keywords: neural nets, OCR, references
Organization: V.U. Informatica, Amsterdam, the Netherlands

I'm writing a survey article on using neural nets for optical
character classification.  I've extensively studied neural nets
for half a year and have conducted many experiments, which will
also be reported in my article.  In order to write my paper, I
am in need of some references on the subject of NN and OCR.  
Does anyone have any literature suggestions?

Please e-mail your answer to smagt@cs.vu.nl

	Patrick van der Smagt
	V.U. Amsterdam
	The Netherlands


------------------------------

Date: 13 Sep 89 01:29:12 GMT
From: finin@prc.unisys.com (Tim Finin)
Subject: cfp - 6th IEEE Conference on AI Applications
Organization: Unisys Paoli Research Center, PO 517, Paoli PA 19301

                            CALL FOR PARTICIPATION

                         The Sixth IEEE Conference on
                     Artificial Intelligence Applications

                        Fess Parker's Red Lion Resort
                          Santa Barbara, California

                               March 5-9, 1990

                  Sponsored by: The Computer Society of IEEE


The conference is devoted to the application of artificial intelligence
techniques to real-world problems.  Two kinds of papers are appropriate:
case studies of knowledge-based applications that solve significant problems
and stimulate the development of useful techniques, and papers on AI
techniques and principles that underlie knowledge-based systems and, in turn,
enable ever more ambitious real-world applications.  This conference provides a
forum for such synergy between applications and AI techniques.

Papers describing significant unpublished results are solicited along
three tracks:

 - "Engineering/Manufacturing" Track.  Contributions stemming from
   the general area of industrial and scientific applications.

 - "Business/Decision Support" Track.  Contributions stemming from
   the general area of business, law and various decision support
   applications.

   Papers in these two application tracks must:  (1) Justify the use
   of the AI technique, based on the problem definition and an
   analysis of the application's requirements; (2) Explain how AI
   technology was used to solve a significant problem; (3) Describe
   the status of the implementation; (4) Evaluate both the
   effectiveness of the implementation and the technique used.

 - "Enabling Technology" Track.  Contributions focusing on techniques
   and principles that facilitate the development of practical knowledge-
   based systems and can be scaled to handle increasing problem complexity.
   Topics include, but are not limited to:  knowledge
   acquisition, representation, reasoning, searching, learning, software
   life cycle issues, consistency maintenance, verification/validation,
   project management, the user interface, integration, problem-
   solving architectures, and general tools.

Papers should be limited to 5000 words.  The first page of the paper should
contain the following information (where applicable) in the order shown:

 - Title.
 - Authors' names and affiliation. (specify student)
 - Abstract:  A 200 word abstract that includes a clear statement on
   what the original contribution is and what new lesson is imparted
   by the paper.
 - AI topic:  Knowledge acquisition, explanation, diagnosis, etc.
 - Domain area:  Mechanical design, factory scheduling, education,
   medicine, etc.  Do NOT specify the track.
 - Language/Tool:  Underlying language and knowledge engineering tools.
 - Status:  development and deployment status as appropriate.
 - Effort:  Person-years of effort put into developing the particular
   aspect of the project being described.
 - Impact:  A 20 word description of estimated or measured (specify)
   benefit of the application developed.

Each paper accepted for publication will be allotted seven pages in the
conference proceedings.  Best papers accepted in the Enabling Technology
track will be considered for a special issue of IEEE Transactions on
Knowledge and Data Engineering (TKDE) to appear in late 1990. Best
papers accepted in the application tracks will be considered for a
special issue of IEEE EXPERT, also to appear in late 1990. In addition,
there will be a best student paper award of $1,500, sponsored by IBM
for this conference.

In addition to papers, we will be accepting the following types of
submissions:

  - Proposals for Panel discussions.   Topic  and  desired  participants.
    Indicate  the  membership of the panel and whether you are interested
    in organizing/moderating the discussion.   A  panel  proposal  should
    include a 1000-word summary of the proposed subject.

  - Proposals for Demonstrations.  Videotape and/or description of a live
    presentation (not to exceed 1000 words).  The demonstration should be
    of  a  particular  system  or  technique  that shows the reduction to
    practice of one of the conference topics.  The demonstration or video
    tape should be not longer than 15 minutes.

  - Proposals   for   Tutorial  Presentations.    Proposals  of  both  an
    introductory and advanced nature are requested.  Topics should relate
    to  the  management  and  technical  development of usable and useful
    artificial intelligence applications.  Particularly of  interest  are
    tutorials  analyzing  classes of applications in depth and techniques
    appropriate for a particular class of  applications.    However,  all
    topics  will  be  considered.    Tutorials  are  three  hours in
    duration; copies of slides are to be provided in advance to IEEE  for
    reproduction.

    Each tutorial proposal should include the following:

       * Detailed  topic  list  and extended abstract (about 3 pages)
       * Tutorial level:  introductory, intermediate, or advanced
       * Prerequisite reading for intermediate and advanced tutorials
       * Short  professional vita including presenter's experience in
         lectures and tutorials.

  - Proposals for Vendor Presentations: A separate session will be held
    where vendors will have the opportunity to give an overview of
    their AI-based software products and services.

IMPORTANT DATES
 - September 29, 1989: Six copies of Papers, and four copies of all
   the proposals are due.  Submissions not received by that date will
   be returned unopened. Electronically transmitted materials will not
   be accepted.
 - October 30, 1989: Author notifications mailed.
 - December 12, 1989: Accepted papers due to IEEE.  Accepted tutorial
   notes due to Tutorial Chair, Donald Kosy
 - March 5-6, 1990: Tutorials
 - March 7-9, 1990: Conference

Submit Papers and Other Materials to:

Se June Hong  (Room 31-206)
IBM T.J. Watson Research Center
P.O. Box 218
Yorktown Heights, NY  10598
USA

Phone: (914)-945-2265
CSNET:  HONG@IBM.COM
FAX: (914)-945-2141
TELEX: 910-240-0632


Submit Tutorial Proposals to:

    Donald Kosy
    Robotics Institute
    Carnegie Mellon University
    Pittsburgh, Pennsylvania 15213

    Phone: 412-268-8814
    ARPANET: kosy@cs.cmu.edu

                            CONFERENCE COMMITTEES

General Chair
            Mark S. Fox, Carnegie-Mellon University

Publicity Chair
            Jeff Pepper, Carnegie Group Inc

Tutorial Chair
    Donald Kosy, Carnegie Mellon University


Program Committee
     Chair  Se June Hong, IBM Research
At-large    Jan Aikins, AION Corp.
            John Gero, University of Sydney
            Robert E. Filman, IntelliCorp
            Gary Kahn, Carnegie Group
            John McDermott, DEC
Engineering/Manufacturing Track
     Chair  Chris Tong, Rutgers University (Visiting IBM)
            Sanjaya Addanki, IBM Research
            Alice Agogino, UC Berkeley
            Miro Benda, Boeing Computer Services
            Sanjay Mittal, Xerox PARC
            Duvurru Sriram, MIT
Business/Decision Support Track
     Chair  Peter Hart,  Syntelligence
            Chidanand Apte,  IBM Research
            Vasant Dhar,  New York University
            Richard Fikes,  Price-Waterhouse
            Timothy Finin,  Unisys Paoli Research Center
            Daniel O'Leary, University of Southern California
Enabling Technology Track
     Chair  Howard Shrobe,  Symbolics
            Lee Erman, CIMFLEX-Teknowledge
            Brian Gaines, University of Calgary
            Eric Mays,  IBM Research
            Kathy McKeown,  Columbia University
            Katia Sycara, Carnegie-Mellon University

Additional Information

For registration and additional conference information, contact:

    CAIA-90
    The Computer Society of the IEEE
    1730 Massachusetts Avenue, NW
    Washington, DC 20036-1903
    Phone: 202-371-0101


Tim Finin                     finin@prc.unisys.com (internet)
Unisys Paoli Research Center  ..!{psuvax1,sdcrdcf,cbmvax}!burdvax!finin (uucp)
PO Box 517                    215-648-7446 (office), 215-386-1749 (home),


------------------------------

End of VISION-LIST
********************

Vision-List-Request@ADS.COM (Vision-List moderator Phil Kahn) (09/26/89)

Vision-List Digest	Mon Sep 25 12:40:06 PDT 89

 - Send submissions to Vision-List@ADS.COM
 - Send requests for list membership to Vision-List-Request@ADS.COM

Today's Topics:

 New Service for Vision List: Relevant Journal Table of Contents
 BBS Call for Commentators: Visual Search & Complexity
 Subject: street address for IEEE CAIA-90 submissions
 7th Intern. Conf. on Machine Learning
 IEEE Jrnl of Robotics and Automation  Aug 89
 IEEE Trans on PAMI  Jul 89

----------------------------------------------------------------------

Date: Mon, 25 Sep 89 12:14:29 PDT
From: Vision-List-Request <vision@deimos.ads.com>
Subject:  New Service for Vision List: Relevant Journal Table of Contents

Thanks to Jon Webb and the Computer Science Library at CMU, the
Vision List will now be posting the tables of contents for selected
relevant journals. These tables of contents will be placed at the
end of the List in order to avoid cluttering up subscriber
discussion and comments. The goal of these indices is to simplify
the identification of current relevant literature and help us all
better manage our time. 

The journals include: IEEE Journal of Robotics and Automation (JRA),
IEEE Transactions on Pattern Analysis and Machine Intelligence
(TPAMI), International Journal of Computer Vision (IJCV), and
Perception. We may also include CVGIP and Spatial Vision. JRA often
has interesting vision articles, though it is not specifically vision
oriented: please let me know if you believe it should be omitted.

Comments are invited and encouraged.

phil...

----------------------------------------------------------------------

Date: 19 Sep 89 05:41:53 GMT
From: harnad@phoenix.princeton.edu (S. R. Harnad)
Subject: BBS Call for Commentators: Visual Search & Complexity
Keywords: computer vision, natural vision, complexity theory, brain
Organization: Princeton University, NJ

Below is the abstract of a forthcoming target article to appear in
Behavioral and Brain Sciences (BBS), an international,
interdisciplinary journal that provides Open Peer Commentary on
important and controversial current research in the biobehavioral and
cognitive sciences. Commentators must be current BBS Associates or
nominated by a current BBS Associate. To be considered as a
commentator on this article, to suggest other appropriate
commentators, or for information about how to become a BBS Associate,
please send email to:
	 harnad@princeton.edu              or write to:
BBS, 20 Nassau Street, #240, Princeton NJ 08542  [tel: 609-921-7771]


Analyzing Vision at the Complexity Level

John K. Tsotsos

Department of Computer Science,
University of Toronto and
The Canadian Institute for Advanced Research

The general problem of visual search can be shown to be
computationally intractable in a formal complexity-theoretic sense,
yet visual search is widely involved in everyday perception and
biological systems manage to perform it remarkably well. Complexity
level analysis may resolve this contradiction. Visual search can be
reshaped into tractability through approximations and by optimizing
the resources devoted to visual processing. Architectural constraints
can be derived using the minimum cost principle to rule out a large
class of potential solutions. The evidence speaks strongly against
purely bottom-up approaches to vision. This analysis of visual search
performance in terms of task-directed influences on visual information
processing and complexity satisfaction allows a large body of
neurophysiological and psychological evidence to be tied together.


Stevan Harnad  
INTERNET:  harnad@confidence.princeton.edu   harnad@princeton.edu
srh@flash.bellcore.com      harnad@elbereth.rutgers.edu    harnad@princeton.uucp
CSNET:    harnad%confidence.princeton.edu@relay.cs.net
BITNET:   harnad1@umass.bitnet      harnad@pucc.bitnet       (609)-921-7771

------------------------------

Date: 25 Sep 89 18:33:33 GMT
From: finin@prc.unisys.com (Tim Finin)
Subject: street address for IEEE CAIA-90 submissions
Organization: Unisys Paoli Research Center, PO Box 517, Paoli PA 19301
				     
      REMINDER ----- IEEE CAIA-90 ----- DEADLINE 9/29 ----- REMINDER
				     
		  6th IEEE Conference on AI Applications

For those colleagues who depend on express mailing (don't we all?),
here is the street address to use:

                      Se June Hong (Room 31-206)
                      IBM T. J. Watson Research Center
                      Route 134 (Kitchawan) and Taconic
                      (PO box 218 if regular post)
                      Yorktown Heights, NY  10598

Tim Finin                     finin@prc.unisys.com (internet)
Unisys Paoli Research Center  ..!{psuvax1,sdcrdcf,cbmvax}!burdvax!finin (uucp)
PO Box 517                    215-648-7446 (office), 215-386-1749 (home),

------------------------------

Posted-Date: Thu, 21 Sep 89 13:46:14 CDT
From: ml90@cs.utexas.edu (B. Porter and R. Mooney)
Date: Thu, 21 Sep 89 13:46:14 CDT
Subject: 7th Intern. Conf. on Machine Learning



                 SEVENTH INTERNATIONAL CONFERENCE ON MACHINE
                          LEARNING: CALL FOR PAPERS


     The Seventh International Conference on Machine Learning   will  be
     held  at  the  University  of Texas in  Austin during June  21--23,
     1990.  Its goal is   to  bring  together   researchers   from   all
     areas   of  machine  learning.   The   conference    will   include
     presentations   of refereed papers,  invited  talks,   and   poster
     sessions.   The   deadline    for  submitting papers is February 1,
     1990.

                               REVIEW CRITERIA

     In order  to ensure  high   quality papers, each   submission  will
     be  reviewed   by   two  members    of   the  program committee and
     judged  on clarity,  significance,    and  originality.   All  sub-
     missions  should contain new work, new results, or major extensions
     to prior work.   If the  paper  describes  a  running   system,  it
     should explain that system's representation of  inputs and outputs,
     its performance component, its learning  methods,  and  its evalua-
     tion.    In    addition to  reporting advances in current areas  of
     machine learning, authors  are  encouraged  to  report  results  on
     exploring novel learning tasks.


                            SUBMISSION OF PAPERS

     Each  paper must  have a cover   page  with  the  title,   author's
     names,  primary   author's  address  and  telephone number,  and an
     abstract of about 200 words. The  cover  page   should  also   give
     three  keywords  that  describe  the research. Examples of keywords
     include:

     PROBLEM AREA             GENERAL APPROACH       EVALUATION CRITERIA

     Concept learning         Genetic algorithms     Empirical evaluation
     Learning and planning    Empirical methods      Theoretical analysis
     Language learning        Explanation-based      Psychological validity
     Learning and design      Connectionist
     Machine discovery        Analogical reasoning

     Papers are  limited  to 12 double-spaced  pages (including  figures
     and  references),  formatted  with   twelve  point font.    Authors
     will be notified of  acceptance  by  Friday,  March  23,  1990  and
     camera-ready copy is due by April 23, 1990.


     Send papers (3 copies) to:         For information, please contact:

     Machine Learning Conference        Bruce Porter or Raymond Mooney
     Department of Computer Sciences    ml90@cs.utexas.edu
     Taylor Hall 2.124                  (512) 471-7316
     University of Texas at Austin
     Austin, Texas 78712-1188



------------------------------

Date: Fri, 22 Sep 89 10:29:21 EDT
Subject: IEEE Jrnl of Robotics and Automation  Aug 89
From: ES.Library@B.GP.CS.CMU.EDU

                                  REFERENCES

[1]   Ahmad, Shaheen and Luo, Shengwu.
      Coordinated Motion Control of Multiple Robotic Devices for Welding and
         Redundancy Coordination through Constrained Optimization in Cartesian
         Space.
      IEEE Journal of Robotics and Automation 5(4):409-417, August, 1989.

[2]   ElMaraghy, Hoda A. and Payandeh, S.
      Contact Prediction and Reasoning for Compliant Robot Motions.
      IEEE Journal of Robotics and Automation 5(4):533-538, August, 1989.

[3]   Hannaford, Blake.
      A Design Framework for Teleoperators with Kinesthetic Feedback.
      IEEE Journal of Robotics and Automation 5(4):426-434, August, 1989.

[4]   Jacak, Witold.
      A Discrete Kinematic Model of Robots in the Cartesian Space.
      IEEE Journal of Robotics and Automation 5(4):435-443, August, 1989.

[5]   Kumar, Vijay and Waldron, Kenneth J.
      Suboptimal Algorithms for Force Distribution in Multifingered Grippers.
      IEEE Journal of Robotics and Automation 5(4):491-498, August, 1989.

[6]   Kusiak, Andrew.
      Aggregate Scheduling of a Flexible Machining and Assembly System.
      IEEE Journal of Robotics and Automation 5(4):451-459, August, 1989.

[7]   Li, Chang-Jin.
      An Efficient Method for Linearization of Dynamic Models of Robotic
         Manipulators.
      IEEE Journal of Robotics and Automation 5(4):397-408, August, 1989.

[8]   Martin, D. P.; Baillieul, J.; and Hollerbach, J. M.
      Resolution of Kinematic Redundancy Using Optimization Techniques.
      IEEE Journal of Robotics and Automation 5(4):529-533, August, 1989.

[9]   Murray, John J. and Lovell, Gilbert H.
      Dynamic Modeling of Closed-Chain Robotic Manipulators and Implications
         for Trajectory Control.
      IEEE Journal of Robotics and Automation 5(4):522-528, August, 1989.

[10]  Pfeffer, Lawrence E.; Khatib, Oussama; and Hake, J.
      Joint Torque Sensory Feedback in the Control of a PUMA Manipulator.
      IEEE Journal of Robotics and Automation 5(4):418-425, August, 1989.

[11]  Rodriguez, Guillermo.
      Recursive Forward Dynamics for Multiple Robot Arms Moving a Common Task
         Object.
      IEEE Journal of Robotics and Automation 5(4):510-521, August, 1989.

[12]  Seraji, Homeyoun.
      Configuration Control of Redundant Manipulators: Theory and
         Implementation.
      IEEE Journal of Robotics and Automation 5(4):472-490, August, 1989.

[13]  Sorensen, Brett R.; Donath, Max; Yang, Guo-Ben; and Starr, Roland C.
      The Minnesota Scanner: A Prototype Sensor for Three-Dimensional Tracking
         of Moving Body Segments.
      IEEE Journal of Robotics and Automation 5(4):499-509, August, 1989.

[14]  Tsujimura, Takeshi and Yabuta, Tetsuro.
      Object Detection by Tactile Sensing Method Employing Force/Torque
         Information.
      IEEE Journal of Robotics and Automation 5(4):444-450, August, 1989.

[15]  Wang, Y. F. and Aggarwal, J. K.
      Integration of Active and Passive Sensing Techniques for Representing
         Three-Dimensional Objects.
      IEEE Journal of Robotics and Automation 5(4):460-471, August, 1989.


------------------------------

Date: Fri, 22 Sep 89 10:30:50 EDT
Subject: IEEE Trans on PAMI  Jul 89
From: ES.Library@B.GP.CS.CMU.EDU

                                  REFERENCES

[1]   Chen, David Shi.
      A Data-Driven Intermediate Level Feature Extraction Algorithm.
      IEEE Transactions on Pattern Analysis and Machine Intelligence
         PAMI-11(7):749-758, July, 1989.

[2]   Chen, Ming-Hua and Yan, Ping-Fan.
      A Multiscale Approach Based on Morphological Filtering.
      IEEE Transactions on Pattern Analysis and Machine Intelligence
         PAMI-11(7):694-700, July, 1989.

[3]   Gath, I. and Geva, A. B.
      Unsupervised Optimal Fuzzy Clustering.
      IEEE Transactions on Pattern Analysis and Machine Intelligence
         PAMI-11(7):773-781, July, 1989.

[4]   Mallat, Stephane G.
      A Theory for Multiresolution Signal Decomposition: The Wavelet
         Representation.
      IEEE Transactions on Pattern Analysis and Machine Intelligence
         PAMI-11(7):674-693, July, 1989.

[5]   Maragos, Petros.
      Pattern Spectrum and Multiscale Shape Representation.
      IEEE Transactions on Pattern Analysis and Machine Intelligence
         PAMI-11(7):701-716, July, 1989.

[6]   Peleg, Shmuel; Werman, Michael; and Rom, Hillel.
      A Unified Approach to the Change of Resolution: Space and Grey-Level.
      IEEE Transactions on Pattern Analysis and Machine Intelligence
         PAMI-11(7):739-742, July, 1989.

[7]   Sanz, Jorge L. C. and Huang, Thomas T.
      Image Representation by Sign Information.
      IEEE Transactions on Pattern Analysis and Machine Intelligence
         PAMI-11(7):729-738, July, 1989.

[8]   Shah, Y. C.; Chapman, R.; and Mahani, R. B.
      A New Technique to Extract Range Information.
      IEEE Transactions on Pattern Analysis and Machine Intelligence
         PAMI-11(7):768-773, July, 1989.

[9]   Strobach, Peter.
      Quadtree-Structured Linear Prediction Models for Image Sequence
         Processing.
      IEEE Transactions on Pattern Analysis and Machine Intelligence
         PAMI-11(7):742-748, July, 1989.

[10]  Unser, Michael and Eden, Murray.
      Multiresolution Feature Extraction and Selection for Texture
         Segmentation.
      IEEE Transactions on Pattern Analysis and Machine Intelligence
         PAMI-11(7):717-728, July, 1989.

[11]  Yeshurun, Yehezkel and Schwartz, Eric L.
      Cepstral Filtering on a Columnar Image Architecture: A Fast Algorithm
         for Binocular Stereo Segmentation.
      IEEE Transactions on Pattern Analysis and Machine Intelligence
         PAMI-11(7):759-767, July, 1989.

------------------------------

End of VISION-LIST
********************

Vision-List-Request@ADS.COM (Vision-List moderator Phil Kahn) (09/30/89)

Vision-List Digest	Fri Sep 29 09:57:55 PDT 89

 - Send submissions to Vision-List@ADS.COM
 - Send requests for list membership to Vision-List-Request@ADS.COM

Today's Topics:

 TOC for journals
 Journal Info: Machine Vision and Applications 

----------------------------------------  

Date: Wed, 27 Sep 89 09:52:34 EDT
From: jaenicke@XN.LL.MIT.EDU (Richard A. Jaenicke)
Subject: TOC for journals

Phil,
   I think that inclusion of table-of-contents in the
Vision-List is a great idea.  Hurray for John Web and the
CMU CS library!  I am for including any TOC's you can get
your (electronic) hands on as long as they have some remote
connection to computer vision. 

Richard Jaenicke             Machine Intelligence Group
jaenicke@xn.ll.mit.edu       MIT Lincoln Laboratory

[ In order to decrease clutter, I will eliminate article listings which
  do not have a connection to vision (e.g., path planning articles in the
  Journal of Robotics & Automation).  We are currently working to increase
  the number of listed journals relevant to vision.  We need to balance the
  helpfulness of information with clutter overload. All and any feedback 
  is useful. 
			phil...			]

------------------------------

Date: Fri, 29 Sep 89 09:42:25 PDT
From: rossbach%engrhub@hub.ucsb.edu
Subject: Machine Vision and Applications

Machine Vision and Applications, An International Journal

This journal is published four times a year and has a personal
subscription rate of $45.00 (including postage and handling).  The
institutional rate is $105.00 (including postage and handling).  If
you would like a sample copy or subscription information, please send
email to rossbach@hub.ucsb.edu.

Volume 2, Issues 2 and 3 contain the following articles:

Extraction of Graphic Primitives from Images of Paper-based Drawings  by C.
Shih and R. Kasturi.

A Fruit-Tracking System for Robotic Harvesting by R. Harrell, D. Slaughter and
P. Adsit.

Range Estimation from Intensity Gradient Analysis by Ramesh Jain and Kurt
Skifstad.

Reconstruction of Two Dimensional Patterns by Fourier Descriptors by A.
Krzyzak, S. Leung and C. Suen.

Analysis of Textural Images Using the Hough Transform by S. Srihari and V.
Govindaraju.

Knowledge-directed Inspection for Complex Multi-layered Patterns by M. Ejiri,
H. Yoda and H. Sakou.

Which Parallel Architectures are Useful/Useless for Vision Algorithms? by J. L.
C. Sanz.

Expect to see the next issue of the journal, Volume 2, Issue 4, this November!
For further information on submitting a paper or subscribing to the journal,
please send email to rossbach@hub.ucsb.edu; or write to Springer-Verlag, 815 De
La Vina Street, Santa Barbara, CA 93101; or call (805) 963-7960.  

------------------------------

End of VISION-LIST
********************

Vision-List-Request@ADS.COM (Vision-List moderator Phil Kahn) (10/07/89)

Vision-List Digest	Fri Oct 06 10:28:40 PDT 89

 - Send submissions to Vision-List@ADS.COM
 - Send requests for list membership to Vision-List-Request@ADS.COM

Today's Topics:

 Re: TOC for journals
 Research positions at U-Geneva, Switzerland.
 Workshop on Combining Multiple Strategies For Vision

----------------------------------------------------------------------

Date:       Wed, 4 Oct 89 09:19:18 BST
From: alien%VULCAN.ESE.ESSEX.AC.UK@CUNYVM.CUNY.EDU (Adrian F. Clark)
Subject:    Re: TOC for journals

I am pleased to see the inclusion of machine-readable tables of
contents for relevant journals.  This is so useful that I am archiving
them locally and forwarding them to other (non-Vision-List) readers in
the UK and Europe.  However, I would like to make two suggestions:

  o I urge you not to omit articles which do not appear to be
    vision-related.  Apart from pedantic objections, there are many
    cases where solutions to problems which have no immediate
    application to vision (or, indeed, any other discipline) provide
    useful insights into one's own problems.  Furthermore, there are
    a number of readers (such as myself) who are not working directly
    on vision, but apply image understanding techniques to other problems.

  o surely there is a case for distributing the contents lists in a format
    which allows easy insertion into some bibliographic database
    system?  I would imagine that most readers produce papers by computer.
    Unix `refer' or BibTeX spring to mind.  Since there are a
    number of widely-available tools for converting the former to the
    latter, `refer' might be better.  Or perhaps some generic mark-up
    system, which can easily be converted into any particular format,
    would be the best.
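
As an illustration only (the entry layout and key name are mine, not an
official List format), one item from the recent PAMI listing could be
carried in `refer' format as:

    %A S. G. Mallat
    %T A Theory for Multiresolution Signal Decomposition: The Wavelet Representation
    %J IEEE Transactions on Pattern Analysis and Machine Intelligence
    %V 11
    %N 7
    %P 674-693
    %D 1989

and the same entry in BibTeX as:

    @article{mallat89wavelet,
      author  = "S. G. Mallat",
      title   = "A Theory for Multiresolution Signal Decomposition:
                 The Wavelet Representation",
      journal = "IEEE Transactions on Pattern Analysis and Machine
                 Intelligence",
      volume  = 11,
      number  = 7,
      pages   = "674--693",
      year    = 1989
    }

Either form still scans reasonably well by eye and converts mechanically
to the other with the widely-available tools mentioned above.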

 Adrian F. Clark
 JANET: alien@uk.ac.essex.ese   ARPA: alien%uk.ac.essex.ese@nsfnet-relay.ac.uk
 BITNET: alien%uk.ac.essex.ese@ac.uk          PHONE: (+44) 206-872432 (direct)
 Dept ESE, University of Essex, Wivenhoe Park, Colchester, Essex, C04 3SQ, UK.

[ Ed.: Several readers have echoed the first comment, especially
	 with regard to the J. Rob. & Auto. At the other end of the spectrum,
	 I am concerned with reducing clutter that undermines the List's
	 focus on Vision issues. Feedback on the TOC, and on how they are
	 maintained over the next several months, will better define where
	 the line should be drawn.
       The second comment, regarding TOC format, has also been echoed
	 by several readers. For now, I am posting the TOC in the format
	 I receive them in. They are alternatively available in Scribe
	 format. A key advantage of the current form is that the entries
	 are clear and easy to rapidly scan and selectively pull apart.
	 Database-formatted references (e.g., refer format) are much more
	 difficult and time-consuming to scan visually.  Yet, I also
	 appreciate the utility of maintaining a DB of relevant vision
	 articles. These issues are still being considered, and will be
	 deferred in the short run until an acceptable set of journals
	 has been agreed upon and stabilized.

       CVGIP will hopefully soon be brought on-line. I am attempting to
	 include more neurobiological journals which address structure 
 	 of biological vision systems as well. I am still trying to obtain
	 electronic TOC for: Journal of the Optical Society of America: A,
	 Spatial Vision, Vision Research, and perhaps (depending on reader
	 interest), Trends in Neuroscience and Biological Cybernetics.
	 Volunteers wishing to provide these TOC on a timely and consistent
	 basis are sought.

		phil...]


------------------------------

Date: 05 Oct 89 18:48:26 GMT
From: Thierry Pun <pun@cui.unige.ch>
Subject: Research positions at U-Geneva, Switzerland.

AI and Vision Group, Computing Science Center
University of Geneva, Switzerland

We have THREE open positions, to be filled as soon as possible, for

		RESEARCH ASSISTANTS IN COMPUTER VISION
		    AND ARTIFICIAL INTELLIGENCE

The successful candidates will participate in our work in the context of the
Swiss National Research Program PNR 23 "Artificial intelligence and robotics". 
The project concerns various aspects of the development of computer vision
systems for robotic applications.

The group currently consists of approximately 12 researchers, 6 in vision and
6 in AI. The AI and Vision group is part of the Computer Science Center, which
comprises approximately 60 researchers. Research facilities are excellent.

We offer advanced research with up-to-date computing facilities and a nice 
environment. Salary starts at 4'020 SFR per month (1 SFR has been recently
oscillating between 1.6 and 1.8 US$). 

We wish to hire a research-oriented student holding a university degree in
computing science or a closely related field. The candidate must be oriented
towards computer vision and/or artificial intelligence.

If you are interested, please send your resume, interests, publications,
references to:
        Prof. Thierry Pun, Computer Vision Group
	Computing Science Center, U-Geneva
        12, rue du Lac, CH-1207 Geneva SWITZERLAND  
        Phone : +41(22) 787 65 82; fax: +41(22) 735 39 05
        E-mail: pun@cui.unige.ch, pun@cgeuge51.bitnet

------------------------------

Date: Friday, 6 Oct 1989 12:57:43 EST
From: m20163@mwvm.mitre.org (Nahum Gershon)
Subject: Workshop on Combining Multiple Strategies For Vision
Organization: The MITRE Corp., Washington, D.C.

Eighteenth Workshop on
Applied Imagery Pattern Recognition

Subject: Combining Multiple Strategies For Vision

October 11-13, 1989
Cosmos Club, Washington, DC


Contact ERIM (Kathleen) at (313) 994-1200, x 2237

        Nahum


------------------------------

End of VISION-LIST
********************

Vision-List-Request@ADS.COM (Vision-List moderator Phil Kahn) (10/14/89)

Vision-List Digest	Fri Oct 13 11:10:59 PDT 89

 - Send submissions to Vision-List@ADS.COM
 - Send requests for list membership to Vision-List-Request@ADS.COM

Today's Topics:

 Fractal scene modeling
 Error in Swiss Fr to US $ conversion.
 ICIP'89 successfully held in Singapore
 TOC

----------------------------------------------------------------------

Date: Sat, 7 Oct 89 01:12:38 GMT
From: us214777@mmm.3m.com (John C. Schultz)
Subject: Fractal scene modeling
Organization: 3M - St. Paul, MN  55144-1000 US

I am looking for some routines which would compute the fractal "dimension"
of a natural texture.  For example, given a texture I would like to 
compute the fractal dimension for small (8x8 up to 64x64 pixels) blocks of
data.  

I have seen many references to such algorithms in the literature but the 
sections on how to actually compute a fractal dimension have been a trifle
light.  

Perhaps someone could post an implementation to the net.  A C implementation
would be preferable from my viewpoint. 

John C. Schultz                   EMAIL: jcschultz@mmm.com
3M Company                        WRK: +1 (612) 733 4047
3M Center, Building 518-01-1      St. Paul, MN  55144-1000        

------------------------------

Date: Sat, 7 Oct 89 19:04 N
From: THIERRY PUN <PUN%CGEUGE51.BITNET@CUNYVM.CUNY.EDU>
Subject: Error in Swiss Fr to US $ conversion.

In a previous posting regarding research assistant positions at
the University of Geneva, Switzerland, I mistakenly inverted the
exchange rate. You should read 1 US$ has been varying between
1.6 and 1.8 SFr.
Sorry....
Thierry Pun, CUI, U-Geneva, Switzerland.
pun@cui.unige.ch

------------------------------

Date: Mon, 9 Oct 89 23:18:12 -0500
From: teh@cs.wisc.edu (Cho-huak Teh)
Subject: ICIP'89 successfully held in Singapore

The First IEEE 1989 International Conference on Image Processing
was successfully held in Singapore from 5 to 9 September.
The conference chair was K. N. Ngan of the National University of Singapore
and the keynote address was given by Takeo Kanade of CMU
on "Computer Vision and Physical Science".

There were about 200 delegates to the 19 sessions of the conference and
146 to the tutorials by T. S. Huang, B. G. Batchelor, R. J. Ahlers,
and F. M. Waltz.

Contact IEEE to obtain a copy of the conference proceedings.

The 2nd IEEE ICIP is being planned, possibly to be held in the Fall of 1991.


						Cho-Huak TEH
------------------------------

Date:         Tue, 10 Oct 89 21:52:18 IST
From: Shelly Glaser  011 972 3 545 0060 <GLAS%TAUNIVM.BITNET@CUNYVM.CUNY.EDU>
Subject:      TOC

You may also want to consider "Optics Letters" and "Applied Optics".

I am told that the "Current Contents" weekly is now also available on
diskettes.  It also carries an address list for the authors, which may be
of use to those who do not have access to a specific journal.

                                        Yours,
                                        Shelly
Acknowledge-To: <GLAS@TAUNIVM>


------------------------------

End of VISION-LIST
********************

Vision-List-Request@ADS.COM (Vision-List moderator Phil Kahn) (10/24/89)

Vision-List Digest	Mon Oct 23 13:27:15 PDT 89

 - Send submissions to Vision-List@ADS.COM
 - Send requests for list membership to Vision-List-Request@ADS.COM

Today's Topics:

 BBS Call for Commentators: Visual Field Specialization
 Proceedings of ICIP'89
 Reference wanted
 Facial Features: Computer Analysis

------------------------------

Date: Thu, 19 Oct 89 03:05:05 -0500
From: mendozag@ee.ecn.purdue.edu (Victor M Mendoza-Grado)
Subject: Reference wanted

   I am looking for the exact reference to the following paper:

   ``Parallel Processing of a Knowledge-Based Vision System,''
   by D. I. Moldovan and C. I. Wu.

   It might have appeared as a conference paper around 1986 or 1987.

   I'd appreciate any pointers. Thanks in advance

   VMG
   mendozag@ecn.purdue.edu


------------------------------

Date: Fri, 20 Oct 89 01:03:47 EDT
From: harnad@clarity.Princeton.EDU (Stevan Harnad)
Subject: Facial Features: Computer Analysis

I would like information about software and hardware for representing
and analyzing faces and facial features. Ideally, I would like
something that, like Susan Brennan's program for generating
caricatures, has been standardized across large samples of faces and is
able to pull out the facial parameters that carry the kind of
information we pick up when we look at faces.

The purpose of the project is to find detectable, quantifiable features
that will predict the degree of genetic relatedness between two people
from images of their faces.

Please send the replies to me, not the net. If anyone wants me to share
the responses with them, send me an email request.

Stevan Harnad 

INTERNET: harnad@confidence.princeton.edu srh@flash.bellcore.com
harnad@elbereth.rutgers.edu          UUCP: harnad@princeton.uucp
BITNET:   harnad@pucc.bitnet   harnad1@umass.bitnet        Phone: (609)-921-7771
Department of Psychology, Princeton University, Princeton NJ 08544


------------------------------

Date: Thu, 19 Oct 89 20:45:36 -0500
From: teh@cs.wisc.edu (Cho-huak Teh)
Subject: Proceedings of ICIP'89

To obtain a copy of the proceedings of 1989 1st IEEE ICIP,
you should contact the following instead of IEEE :

    Meeting Planners Pte Ltd
    100 Beach Road
    #33-01, Shaw Towers
    Singapore 0718
    Republic of Singapore
    Attn : ICIP'89 Proceedings

    Tel : (65)297-2822
    Tlx : RS 40125 MEPLAN
    Fax : (65)296-2670

					-- Cho Huak TEH

----------------------------------------------------------------------

Date: Mon, 16 Oct 89 00:44:19 EDT
From: harnad@clarity.Princeton.EDU (Stevan Harnad)
Subject: BBS Call for Commentators: Visual Field Specialization


Below is the abstract of a forthcoming target article to appear in
Behavioral and Brain Sciences (BBS), an international,
interdisciplinary journal that provides Open Peer Commentary on important
and controversial current research in the biobehavioral and cognitive
sciences. Commentators must be current BBS Associates or nominated by a 
current BBS Associate. To be considered as a commentator on this article,
to suggest other appropriate commentators, or for information about how
to become a BBS Associate, please send email to:
	 harnad@confidence.princeton.edu              or write to:
BBS, 20 Nassau Street, #240, Princeton NJ 08542  [tel: 609-921-7771]



Functional Specialization in the Lower and Upper Visual Fields in Man:
Its Ecological Origins and Neurophysiological Implications

by Fred H. Previc
Crew Technology Division
USAF School of Aerospace Medicine
Brooks AFB, TX 78235-5301

Abstract: Functional specialization in the lower and upper visual
fields in man is reviewed and interpreted with respect to the origins
of the primate visual system. Many of the processing differences
between the vertical hemifields are related to the distinction between
near (personal) and far (extrapersonal) space, which are biased towards
the lower and upper visual fields respectively. It is hypothesized that
a significant enhancement of these functional specializations occurred
in conjunction with the emergence of forelimb manipulative skills and
fruit-eating, in which the fruit or distal object is searched and
attended to in central vision while the reaching motion of the hand and
other related manipulations are monitored in the proximal lower visual
field. Objects in far vision are searched and recognized primarily
using linear/local perceptual mechanisms, whereas nonlinear/global
processing is required in the lower visual field in order to perceive
the optically degraded and diplopic images contained in near vision.
The functional differences between the lower and upper visual fields
are correlated with their disproportionate representations in the
dorsal vs. ventral divisions of visual association cortex,
respectively, and in the magnocellular and parvocellular pathways that
project to them. The division between near and far visual functions may
also have contributed to the transformations of the lateral geniculate
nucleus, superior colliculus, and frontal visual areas which occurred
during the evolution of primate vision.


------------------------------

End of VISION-LIST
********************

Vision-List-Request@ADS.COM (Vision-List moderator Phil Kahn) (11/09/89)

Vision-List Digest	Wed Nov 08 17:22:11 PDT 89

 - Send submissions to Vision-List@ADS.COM
 - Send requests for list membership to Vision-List-Request@ADS.COM

Today's Topics:

 Fractal description of images
 Re: Applications of distance maps
 Computer Vision in High Energy Physics
 3-D Displays
 Research Fellowship

----------------------------------------------------------------------

Date: Tue, 7 Nov 89 03:03:56 GMT
From: us214777@mmm.serc.3m.com (John C. Schultz)
Subject: Fractal description of images
Organization: 3M - St. Paul, MN  55144-1000 US

Several weeks ago I asked if anyone had an algorithm they were willing to
share on how to calculate image fractal dimensions.  Although I still don't
have an implementation, I DO have a readable reference that even provides
pseudo-code.  The reference comes from U of Missouri-Columbia and perhaps
someone at that school could prevail on the authors to provide their code?  In
any case, I will be working on a realization here for our specific hardware.

Here is the reference and thanks to the authors for a well written paper.

Keller JM, Chen S, & Crownover RM,  "Texture Description and Segmentation
through Fractal Geometry",  Computer Vision, Graphics, and Image Processing,
45, 150-166 (1989) Academic Press.
-- 
John C. Schultz                   EMAIL: jcschultz@mmm.3m.com
3M Company                        WRK: +1 (612) 733 4047
3M Center, Building 518-01-1      St. Paul, MN  55144-1000        
   The opinions expressed above are my own and DO NOT reflect 3M's
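
For readers who want something concrete to start from, here is a minimal
sketch in C of the classic box-counting estimate of fractal dimension for a
binarized block (an illustration only; this is NOT the Keller et al.
algorithm, and the block size and test pattern are invented for the example):

/* Box-counting estimate of fractal dimension for a binary N x N block:
 * count the s x s boxes containing at least one set pixel, then fit the
 * slope of log N(s) against log(1/s).  Illustrative sketch only. */
#include <stdio.h>
#include <math.h>

#define N 64                      /* block size; a power of two */

static int block[N][N];

/* number of s x s boxes containing at least one nonzero pixel */
static int box_count(int s)
{
    int bx, by, x, y, count = 0;

    for (by = 0; by < N; by += s)
        for (bx = 0; bx < N; bx += s) {
            int hit = 0;
            for (y = by; y < by + s && !hit; y++)
                for (x = bx; x < bx + s && !hit; x++)
                    if (block[y][x]) hit = 1;
            count += hit;
        }
    return count;
}

/* least-squares slope of log N(s) vs. log(1/s) over s = 1, 2, 4, ..., N/2;
 * assumes the block is not empty */
static double box_dimension(void)
{
    double sx = 0.0, sy = 0.0, sxx = 0.0, sxy = 0.0;
    int s, n = 0;

    for (s = 1; s < N; s *= 2) {
        double xi = log(1.0 / s);
        double yi = log((double) box_count(s));
        sx += xi;  sy += yi;  sxx += xi * xi;  sxy += xi * yi;
        n++;
    }
    return (n * sxy - sx * sy) / (n * sxx - sx * sx);
}

int main(void)
{
    int x, y;

    /* toy input: a thin diagonal band; a line-like set should give a
     * dimension near 1, a filled block a dimension near 2 */
    for (y = 0; y < N; y++)
        for (x = 0; x < N; x++)
            block[y][x] = (x == y || x == y + 1) ? 1 : 0;

    printf("estimated box-counting dimension: %f\n", box_dimension());
    return 0;
}

For grey-level texture the Keller et al. paper should be followed instead;
the sketch above only covers the binary case.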

------------------------------

Date: Tue, 7 Nov 89 12:22:31 +0100
From: Ingemar Ragnemalm <ingemar@isy.liu.se>
Subject: Re: Applications of distance maps

In comp.ai.vision both you and I write:

>I'm writing a thesis on distance mapping techniques, and I need more
>references to applications.

>There are a lot of applications, like:
(I still need more of them! Please?)
>(some of my junk deleted, including some simple examples)

>These examples are with the crude "City Block" metric. There are far better
>metrics [Borgefors], including the exact Euclidean metric [Danielsson].

>E-mail: ingemar@isy.liu.se


>[ Please post responses to the Vision List...
>  
>  An aside:
>  City-block distance metric is particularly easy to compute.  Initialized
>  chamfer image locations set to 0 for occupied positions; infinity otherwise
>  (i.e., a very large integer). Two passes (top-to-bottom/left-to-right and 
>  bottom-to-top/right-to-left) then compute the chamfer. First pass takes the
>  MIN of the current location and the neighbors' chamfer values incremented 
>  by the indicated values (CP is the current position):
>			+2   +1  +2
>			+1   CP
>  Second pass uses the same mask flipped on both the vertical and horizontal
>  axes.  Region labelled chamfer obtained by also propagating region labels.
>  Region growing in constant time by thresholding the distance chamfer. 
>  Medial axis transform occurs at maxima in the distance chamfer. Sorry for 
>  being long winded, but this algorithm (shown to me originally by Daryl 
>  Lawton) has proven quite useful... N-nearest neighbor algorithms (for
>  N>1) get significantly more computationally complex (anyone know of good 
>  algorithms?). 

>	phil...]

Actually, most efficient algorithms use this "scanning" technique. A far
better algorithm with masks of the same size is:

			+4  +3  +4
			+3   0

as suggested by Borgefors [CVGIP 1986]. It has been shown to be the optimal
3*3 chamfer mask with integer weights. Borgefors also suggests the "5-7-11"
mask for the 5*5 mask size.
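
As a concrete illustration (my own sketch, not code from the posting; the
image size and test pattern are made up), the two raster scans with the 3-4
mask look like this in C.  Distances come out scaled by 3, i.e. roughly
three times the Euclidean distance:

/* Two-pass chamfer distance transform with the Borgefors 3-4 mask.
 * img[][] starts as 0 on feature pixels and a large value elsewhere;
 * after the two scans each pixel holds its approximate distance * 3. */
#include <stdio.h>

#define W   8
#define H   8
#define INF 9999

static int img[H][W];

static int min2(int a, int b) { return a < b ? a : b; }

static void chamfer34(void)
{
    int x, y;

    /* forward pass: top-to-bottom, left-to-right */
    for (y = 0; y < H; y++)
        for (x = 0; x < W; x++) {
            int d = img[y][x];
            if (x > 0)           d = min2(d, img[y][x-1]   + 3);
            if (y > 0) {
                if (x > 0)       d = min2(d, img[y-1][x-1] + 4);
                                 d = min2(d, img[y-1][x]   + 3);
                if (x < W - 1)   d = min2(d, img[y-1][x+1] + 4);
            }
            img[y][x] = d;
        }

    /* backward pass: bottom-to-top, right-to-left, mask flipped */
    for (y = H - 1; y >= 0; y--)
        for (x = W - 1; x >= 0; x--) {
            int d = img[y][x];
            if (x < W - 1)       d = min2(d, img[y][x+1]   + 3);
            if (y < H - 1) {
                if (x < W - 1)   d = min2(d, img[y+1][x+1] + 4);
                                 d = min2(d, img[y+1][x]   + 3);
                if (x > 0)       d = min2(d, img[y+1][x-1] + 4);
            }
            img[y][x] = d;
        }
}

int main(void)
{
    int x, y;

    for (y = 0; y < H; y++)            /* empty image ... */
        for (x = 0; x < W; x++)
            img[y][x] = INF;
    img[4][4] = 0;                     /* ... with one feature pixel */

    chamfer34();

    for (y = 0; y < H; y++) {
        for (x = 0; x < W; x++)
            printf("%5d", img[y][x]);
        printf("\n");
    }
    return 0;
}

The 5-7-11 mask mentioned above extends the same two scans to a 5*5
neighborhood.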

Danielsson [CGIP 1980] uses a similar technique for the Euclidean distance
transform, as do I in some of my own work (published at the 6SCIA
conference, 1989). The big difference is that the Euclidean DT *must* use
more than two scans: three or four should be used (as in my paper), or the
equivalent two "double" scans (as in Danielsson's paper).

An interesting point is that a Canadian researcher, Frederic Leymarie, whom
I met at a conference, claimed that the 4-neighbor version of the Euclidean
distance transform is faster than everything but the City Block distance
transform. I'm still waiting for the actual paper, though.

So much for implementation.

BTW, Phil, you didn't say what *you* used the CB distance maps for.
Would you care to share that information?

I'll be back when I've got some more replies.

Ingemar Ragnemalm
Dept. of Electrical Engineering	     ...!uunet!mcvax!enea!rainier!ingemar
                  ..
University of Linkoping, Sweden	     ingemar@isy.liu.se

[ Very interesting mask modifications for computing CB chamfers. If I 
  remember correctly, when the distance values are needed in pixel units of
  CB distance, an "extra pass" is required to rescale the resulting chamfer
  values to pixel units (I often wanted the values in pixel units).
	What did I use the CB distance (chamfer) for?  Pretty varied uses. 
  Region growing in constant time (i.e., chamfer and threshold). Determining
  the closest surrounding regions to each image region (by propagating 
  the region ID with the distance metric). I used this to form a graph in
  which textels are vertices connected by edges to all chamfer determined 
  adjacent neighbors, and texture is described as topological properties of
  this Voronoi-related graph. (I never published this: CVPR88 had a paper or
  two that did something like this, but not quite). I have also used it to
  compute the probability dropoff of a vehicle detection against its 
  background due to radar sensor distortion, occlusion, and uncertainty
  (suggested by Doug Morgan of ADS).  
 	Note that use of the 1-nearest neighbor is very susceptible to noise,
  and its relationship to the Symmetric and Medial Axis Transforms gives it
  some other nasty properties (e.g., small changes in image topology can
  give rise to very large changes in chamfer/SAT/MAT topology).  The
  n-nearest neighbor moves away from some of these problems.  The thing is,
  I haven't seen (nor, honestly, looked very hard for) efficient n-nearest
  neighbor algorithms.

		phil...		]
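
A minimal sketch of the label-propagation idea mentioned in the note above
(my own reading of it, not the moderator's code; the image size and seed
positions are invented): each 3-4 chamfer update carries a region label
along with the distance, so every background pixel ends up tagged with
(approximately) its nearest region, and thresholding the distances
afterwards gives the constant-time region growing described above.

/* 3-4 chamfer scans that propagate region labels with the distances.
 * dist[][] starts at 0 on region pixels and a large value elsewhere;
 * lab[][] starts with the region ID on region pixels and 0 elsewhere. */
#include <stdio.h>

#define W   8
#define H   8
#define INF 9999

static int dist[H][W];
static int lab[H][W];

/* take the neighbor's distance plus the mask weight if it improves the
 * current pixel, and carry the neighbor's region label with it */
static void relax(int y, int x, int ny, int nx, int w)
{
    if (dist[ny][nx] + w < dist[y][x]) {
        dist[y][x] = dist[ny][nx] + w;
        lab[y][x]  = lab[ny][nx];
    }
}

static void chamfer_labels(void)
{
    int x, y;

    for (y = 0; y < H; y++)                     /* forward scan */
        for (x = 0; x < W; x++) {
            if (x > 0)         relax(y, x, y,   x-1, 3);
            if (y > 0) {
                if (x > 0)     relax(y, x, y-1, x-1, 4);
                               relax(y, x, y-1, x,   3);
                if (x < W-1)   relax(y, x, y-1, x+1, 4);
            }
        }

    for (y = H-1; y >= 0; y--)                  /* backward scan */
        for (x = W-1; x >= 0; x--) {
            if (x < W-1)       relax(y, x, y,   x+1, 3);
            if (y < H-1) {
                if (x < W-1)   relax(y, x, y+1, x+1, 4);
                               relax(y, x, y+1, x,   3);
                if (x > 0)     relax(y, x, y+1, x-1, 4);
            }
        }
}

int main(void)
{
    int x, y;

    for (y = 0; y < H; y++)
        for (x = 0; x < W; x++) {
            dist[y][x] = INF;
            lab[y][x]  = 0;
        }

    /* two seed regions, labeled 1 and 2 */
    dist[1][1] = 0;  lab[1][1] = 1;
    dist[6][6] = 0;  lab[6][6] = 2;

    chamfer_labels();

    for (y = 0; y < H; y++) {                   /* print the label map */
        for (x = 0; x < W; x++)
            printf(" %d", lab[y][x]);
        printf("\n");
    }
    return 0;
}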


------------------------------

Date:  Wed,  8 NOV 89 10:06 N
From: DANDREA%ITNVAX.CINECA.IT%ICINECA2.BITNET@CUNYVM.CUNY.EDU
Subject: Computer Vision in High Energy Physics

I'm looking for information regarding the application
of Computer Vision or Image Processing techniques to
experimental problems in High Energy Physics.
What I have in mind are applications to the
problems of track reconstruction and particle identification.

If someone else is interested I'll post a summary to the list.

Thanks,

Vincenzo D'Andrea           e-mail:   DANDREA@ITNCISCA.BITNET
Dipartimento di Fisica
Universita` di Trento
38050 - POVO (TN)
ITALY                       tel. +39 (461) 881565


------------------------------

Date: 8 Nov 89 10:23:00 EST
From: "V70NL::SCHLICHTING" <schlichting%v70nl.decnet@nusc.navy.mil>
Subject: 3-D Displays

Could you please tell me where I could obtain a copy of the papers listed
in the recent vision list from ACM SIGGRAPH "Tutorial notes on stereo 
graphics"?

Thank you,
Christine Schlichting (Schlichting@nusc.navy.mil)


------------------------------

Date: Wed, 8 Nov 89 11:57:23 WET DST
Subject: Research Fellowship
From: M.Petrou%ee.surrey.ac.uk@NSFnet-Relay.AC.UK

           VISION, SPEECH AND SIGNAL PROCESSING GROUP
                  
                  University of Surrey, U. K.

                     RESEARCH FELLOWSHIP

A research fellowship has become available in the above group for the fixed 
term of three years. The research fellow will work on Renormalization Group 
techniques in Image Processing, a project funded by SERC.

A good knowledge of a high level programming language is necessary. No 
previous experience in Image Processing or Computer Vision is needed, but a 
Mathematical background will be an advantage. Starting salary up to 
13,527 pounds per annum.

For more information contact Dr M Petrou (tel (483) 571281 ext 2265) or Dr J 
Kittler (tel (483) 509294). Applications in the form of a curriculum vitae 
(3 copies) including the names and addresses of two referees and a list of 
publications should be sent to the Personnel Office (JLC), University of 
Surrey, Guildford GU2 5XH, U. K., quoting reference 893 by 1 December 1989.

------------------------------

End of VISION-LIST
********************

Vision-List-Request@ADS.COM (Vision-List moderator Phil Kahn) (11/18/89)

Vision-List Digest	Fri Nov 17 15:52:33 PDT 89

 - Send submissions to Vision-List@ADS.COM
 - Send requests for list membership to Vision-List-Request@ADS.COM

Today's Topics:

 Rosenfeld's Computer Vision Bibliography online
 Upcoming Conferences
 Megaplus cameras and boundary tracing
 Pseudocode for boundary tracking algorithms
 Conference on pragmatics in AI

----------------------------------------------------------------------

Date: Mon, 13 Nov 89 13:23:26 PST
From: Vision-List-Request <vision@deimos.ads.com>
Subject: Rosenfeld's Computer Vision Bibliography online

Dr. Azriel Rosenfeld has been kind enough to provide on-line versions
of his outstanding bibliographies for Computer Vision and related
topics from 1984 to 1988 to the Vision List.  Since the formatted text
for these bibliographies occupies about 1.25MB, I do not want to clog up
the net with them. Instead, I have placed them in the Vision List FTP
directory at ADS. Once anonymous FTP'ed to this account (described
below), these bibliographies may be found in
/pub/VISION-LIST-ARCHIVE/ROSENFELD-BIBLIOGRAPHIES .  They may be
copied to your local site using the FTP 'get' command.  Due to their
large size (about 225KB per year) and the labor that mailing to the
large number of List subscribers would require, please note that I cannot
mail copies of these bibliographies to individual subscribers; FTP is
currently the only access available.  Recommendations for alternative
distribution methods are invited and should be sent directly to
vision-list-request@ads.com.

I have also included the standard message mailed to new subscribers
which describes this list and how FTP access can be made.  It is a
good idea to refloat this now and again.

I hope that Rosenfeld's comprehensive Computer Vision bibliography is a
valuable addition to your efforts in vision and image processing.

	phil...


Subject: Welcome to Vision-List!  
From: Vision-List moderator Phil Kahn <Vision-List-Request@ADS.COM>


The host site for the ARPAnet Vision List is Vision-List@ADS.COM for
list contributions, Vision-List-Request@ADS.COM for undistributed
correspondence. As the moderator, I am interested in stimulating
exchanges on research topics of significance to the vision community.
Here are a few administrative details about the list.

			    Requests
			    --------

If you have problems sending messages to the list, questions about
technical details of the list distribution, or the like, send mail to
Vision-List-Request@ADS.COM and we will respond personally to your
query.

PLEASE DO NOT send your request to be added to/deleted from the list
to Vision-List@ADS.COM.  If you do, it may be automatically
redistributed as an article and appear in thousands of mailboxes around
the world.  Always send these requests to the -Request address.


        	   Submissions & Release Format
		   ----------------------------

To submit an article to Vision-List, simply send it to:

         		Vision-List@ADS.COM

Submissions to the list are currently being delivered in a single batch
mailing once a week, on weeks when there is something to send.  Caution:
the list moderator does not always edit the incoming messages before
they are redistributed. During those weeks, everything operates
automatically, with the attendant advantages and pitfalls.  (If you fall
into one of those pits, please send me mail at Vision-List-Request and
we'll do our best to fix things up.)

The following details may help you in formatting and choosing appropriate
titles for your list submissions. Within a single release of
Vision-List, the format consists of an initial identifying header, a
list of topics covered in that release, and then each message as
received in chronological order, separated by dashed lines.  Most of the
header information is automatically removed from each submission at
release time. When the software is working correctly, only "Date:",
"From:", and "Subject:" appear.


        	   Archives
		   --------

Backissues of the Vision List Digest are available via anonymous FTP.
For those of you without FTP connection, limited backissues may be obtained
by mailing your request to Vision-List-Request@ADS.COM .

To access the Digest backissues via anonymous FTP (a sample session is shown
after the steps):
	1) FTP to ADS.COM
	2) Login name is ANONYMOUS
	3) Once you're logged on, change directory (cd) to
	   pub/VISION-LIST-ARCHIVE
	4) Backissues are in the subdirectory BACKISSUES. Each file 
           contains an archived issue, and the file name is the date and
           time the issue was created.
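
For those who have not used anonymous FTP before, a session retrieving a
backissue might look roughly like this (exact prompts vary by site, and the
backissue file name is only a placeholder; pick one from the directory
listing):

	% ftp ADS.COM
	Name: anonymous
	Password: <your e-mail address>
	ftp> cd pub/VISION-LIST-ARCHIVE/BACKISSUES
	ftp> dir
	ftp> get <backissue-file-name>
	ftp> quit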


			** ** ** ** ** **

The list is intended to embrace discussion on a wide range of vision
topics, including physiological theory, computer vision, machine
vision and image processing algorithms, artificial intelligence and
neural network techniques applied to vision, industrial applications,
robotic eyes, implemented systems, ideas, profound thoughts --
anything related to vision and its automation is fair game.  I hope
you enjoy Vision-List.


				Phil Kahn, moderator

				Vision-List-Request@ADS.COM


------------------------------

Date: 17 Nov 89  9:11 -0800
From: John Ens <johne@ee.ubc.ca>
Subject: Upcoming Conferences

Does anybody have information on the following two conferences:
  - Computer Vision and Pattern Recognition; and
  - International Conference on Computer Vision.

I am particularly interested in 
  - When and where the next conference is to be held; and
  - When and to whom to submit papers.

Thanks,
John Ens        johne@ee.ubc.ca

------------------------------

Date: Wed, 15 Nov 89 15:08:35 GMT
From: us214777@mmm.serc.3m.com (John C. Schultz)
Subject: Megaplus cameras and boundary tracing
Organization: 3M - St. Paul, MN  55144-1000 US

RE MEGAPLUS TO SUN:

Datacube, Inc (508) 535-6644 makes a whole line of VME bus based video rate
processing cards which, among other things, can acquire and display (on the SUN
display) Megaplus sized images and larger.  You would need a MAXSCAN and
ROISTORE 2MB to acquire image data.  Their connection to the Megaplus is
discussed in Datacube's Application Note MAX SC 011. The display board set is
called MAXVIEW and logically sits between the SUN cpu board video generator
and the SUN monitor.  Setup parameters allow you to display real-time video in
this window assuming you have made the SUN cpu think that the window is black.
This system would be pretty expensive ($20,000), but if you would be satisfied
with VME bus transfer rates of video data, the cost would be half that.

Another system is available from Perceptics (615) 966-9200 and runs on the
NuBus from a MAC II.  If you have a MAC II with an Ethernet connection, this
would be a cheaper approach than purchasing the Datacube hardware, but
obviously sending a lot of images over Ethernet from a MAC is going to take
a while.

You might also try Recognition Concepts Inc (RCI) (702)831-0473 and Imaging
Technology Inc (617)938-8444

RE BOUNDARY TRACING:
Consider making a lookup table which outputs only the binary edge points and
then generate the (x,y) coordinates of each boundary point.  Software can then
do fancy processing such as shape descriptions etc.  This approach would run
at video rates (30 times/sec) with appropriate hardware such as from
Datacube, Matrox, Perceptics, etc.  
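
A rough software illustration of that lookup-table idea (a sketch only; the
image size, 8-bit binary format, and 4-neighbor edge definition are
assumptions, and border pixels are ignored):

/* Flag object pixels that touch the background via a 16-entry lookup
   table indexed by the 4-neighborhood pattern, and collect their (x,y)
   coordinates for later shape processing. */

#define W 512
#define H 512

static unsigned char edge_lut[16];

static void build_lut(void)
{
    int code;
    /* An object pixel is a boundary point unless all four of its
       neighbors are also object pixels (code 0xF). */
    for (code = 0; code < 16; code++)
        edge_lut[code] = (code != 0xF);
}

/* Returns the number of boundary points found; coordinates go to xs/ys,
   which must be large enough to hold them. */
long find_boundary_points(const unsigned char img[H][W], int *xs, int *ys)
{
    long n = 0;
    int x, y, code;

    build_lut();
    for (y = 1; y < H - 1; y++)
        for (x = 1; x < W - 1; x++) {
            if (!img[y][x])
                continue;
            code =  (img[y][x+1] != 0)            /* east  */
                 | ((img[y-1][x] != 0) << 1)      /* north */
                 | ((img[y][x-1] != 0) << 2)      /* west  */
                 | ((img[y+1][x] != 0) << 3);     /* south */
            if (edge_lut[code]) {
                xs[n] = x;
                ys[n] = y;
                n++;
            }
        }
    return n;
}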
-- 
John C. Schultz                   EMAIL: jcschultz@mmm.3m.com
3M Company                        WRK: +1 (612) 733 4047
3M Center, Building 518-01-1      St. Paul, MN  55144-1000        
   The opinions expressed above are my own and DO NOT reflect 3M's

------------------------------

Date: Mon, 13 Nov 89 23:41:43 CST
From: shvaidya@uokmax.ecn.uoknor.edu (Shankar Vaidyanathan)
Subject: Re: Pseudocode for boundary tracking algorithms

Hi:

A request from Sridhar Balaji was relayed for pointers/pseudocode/code
for boundary tracking algorithms for binary images.

Since I am also in need of them, I would be pleased to receive a copy as well.

My E-mail address: shvaidya@uokmax.ecn.uoknor.edu

Address: 540 B Sooner Drive, Norman, Oklahoma 73072

Best Regards

Shankar


[ As usual, please post answer to questions of general interest to the List.
		phil...	]

------------------------------

Date: 14 Nov 89 20:10:11 GMT
From: paul@nmsu.edu (Paul McKevitt)
Subject: CONFERENCE-ON-PRAGMATICS-IN-AI
Organization: NMSU Computer Science

                            CALL FOR PAPERS 

  
                   Pragmatics in Artificial Intelligence
       5th Rocky Mountain Conference on Artificial Intelligence (RMCAI-90)
                Las Cruces, New Mexico, USA, June 28-30, 1990 

  
PRAGMATICS PROBLEM:
The problem of pragmatics in AI is one of developing theories, models,
and implementations of systems that make effective use of contextual
information to solve problems in changing environments.
 
CONFERENCE GOAL: 
This conference will provide a forum for researchers from all
subfields of AI to discuss the problem of pragmatics in AI.
The implications that each area has for the others in tackling
this problem are of particular interest.

ACKNOWLEDGEMENTS:
In cooperation with:
Association for Computing Machinery (ACM) (pending approval)
Special Interest Group in Artificial Intelligence (SIGART) (pending approval)
U S WEST Advanced Technologies and the Rocky Mountain Society
for Artificial Intelligence (RMSAI)

With grants from:
Association for Computing Machinery (ACM)
Special Interest Group in Artificial Intelligence (SIGART)
U S WEST Advanced Technologies and the Rocky Mountain Society
for Artificial Intelligence (RMSAI)

THE LAND OF ENCHANTMENT:
Las Cruces, lies in THE LAND OF ENCHANTMENT (New Mexico),
USA and is situated in the Rio Grande Corridor with the scenic
Organ Mountains overlooking the city. The city is
close to Mexico, Carlsbad Caverns, and White Sands National Monument.
There are a number of Indian Reservations and Pueblos in the Land Of
Enchantment and the cultural and scenic cities of Taos and Santa Fe
lie to the north. New Mexico has an interesting mixture of Indian, Mexican
and Spanish culture. There is quite a variation of Mexican and New
Mexican food to be found here too.

GENERAL INFORMATION:
The Rocky Mountain Conference on Artificial Intelligence is a
major regional forum in the USA for scientific exchange and presentation
of AI research.
 
The conference emphasizes discussion and informal interaction
as well as presentations.
 
The conference encourages the presentation of completed research,
ongoing research, and preliminary investigations.
 
Researchers from both within and outside the region
are invited to participate.
 
Some travel awards will be available for qualified applicants.
 
FORMAT FOR PAPERS:
Submitted papers should be double spaced and no more than 5 pages
long. E-mail versions will not be accepted.

Send 3 copies of your paper to:
 
Paul Mc Kevitt,
Program Chairperson, RMCAI-90,
Computing Research Laboratory (CRL),
Dept. 3CRL, Box 30001,
New Mexico State University,
Las Cruces, NM 88003-0001, USA. 
 
DEADLINES:
Paper submission: March 1st, 1990
Pre-registration: April 1st, 1990
Notice of acceptance: May 1st, 1990
Final papers due: June 1st, 1990 
 
LOCAL ARRANGEMENTS:
Jennifer Griffiths, Local Arrangements Chairperson, RMCAI-90.
(same postal address as above).

INQUIRIES:
Inquiries regarding conference brochure and registration form
should be addressed to the Local Arrangements Chairperson.

Inquiries regarding the conference program should be addressed
to the program Chairperson.

Local Arrangements Chairperson: E-mail: INTERNET: rmcai@nmsu.edu
                                Phone: (+ 1 505)-646-5466
                                Fax: (+ 1 505)-646-6218.

Program Chairperson: E-mail: INTERNET: paul@nmsu.edu
                     Phone: (+ 1 505)-646-5109
                     Fax: (+ 1 505)-646-6218.

 
TOPICS OF INTEREST: 
You are invited to submit a research paper addressing  Pragmatics
in AI , with any of the following orientations:
 
  Philosophy, Foundations and Methodology
  Knowledge Representation
  Neural Networks and Connectionism
  Genetic Algorithms, Emergent Computation, Nonlinear Systems
  Natural Language and Speech Understanding
  Problem Solving, Planning, Reasoning
  Machine Learning
  Vision and Robotics
  Applications 


Paul Mc Kevitt,
Computing Research Laboratory,
Dept. 3CRL, Box 30001,
New Mexico State University,
Las Cruces, NM 88003-0001, USA.
505-646-5109/5466
CSNET: paul@nmsu.edu

------------------------------

End of VISION-LIST
********************

Vision-List-Request@ADS.COM (Vision-List moderator Phil Kahn) (11/23/89)

Vision-List Digest	Wed Nov 22 14:01:52 PDT 89

 - Send submissions to Vision-List@ADS.COM
 - Send requests for list membership to Vision-List-Request@ADS.COM

Today's Topics:

 Parallel Computers for Computer Vision
 Bibliography: IEEE 1989 SMC Conference
 6th Israeli AI & Vision Conference

----------------------------------------------------------------------

Date: Tue, 21 Nov 89 12:33:14 MEZ
From: Stefan Posch <posch@informatik.uni-erlangen.de>
Subject: Parallel Computers for Computer Vision
Return-Receipt-To: "Stefan Posch" <posch@informatik.uni-erlangen.de>

Hello,

I am looking for further material for a tutorial on
"Parallel Computers for Computer Vision" I plan to give next summer.

By computer vision I subsume here image processing, image
analysis as well as certain AI tasks.

To be more precise, I am interested in information/references
about/to:

   - hardware architecture of parallel computers used/useful for
     computer vision
   - software environment including programming languages, 
     communication primitives, mechanisms for data sharing
   - applications, including speedup measurements and complexity estimation

My e-mail address is
	RFC822: posch@informatik.uni-erlangen.de
	UUCP:   {pyramid,unido}!fauern!posch
	X.400: <S=posch;OU=informatik;P=uni-erlangen;A=dbp;C=de>

Of course I'll be happy to summarize to the net.
(How can I decide whether a summary is wanted or not?)

Thank you in advance, Stefan

Stefan Posch
Universitaet Erlangen-Nuernberg
Lehrstuhl 5 (Mustererkennung)
8520 Erlangen
West-Germany

------------------------------

Date: Mon, 20 Nov 89 14:54:39 EST
From: flynn@pixel.cps.msu.edu (Patrick J. Flynn)
Subject: Bibliography: IEEE 1989 SMC Conference

Here's a list of computer vision-related papers presented at the 1989
IEEE SMC conference in Cambridge on November 15-17.  Apologies in
advance for any omissions; I tried to be liberal in my criteria for
deciding whether a paper is `computer vision-related' or not. 

S. Grossberg, Recent Results on Neural Network Architectures
for Vision and Pattern Recognition, p. 1.

F. Arman, B. Sabata, and J.K. Aggarwal, Hierarchical Segmentation
of 3-D Range Images, pp. 156-161.

P. Flynn and A. Jain, CAD-Based Computer Vision: From CAD Models
to Relational Graphs, pp. 162-167.

C. Hansen, T. Henderson, and R. Grupen, CAD-Based 3-D Object
Recognition, pp. 168-172.

R. Hoffman and H. Keshavan, Evidence-Based Object Recognition
and Pose Estimation, pp. 173-178.

G. Medioni and P. Saint-Marc, Issues in Geometric Reasoning from
Range Imagery, pp. 179-185.

M. Trivedi, M. Abidi, R. Eason and R. Gonzalez, Object Recognition
and Pose Determination in Multi-Sensor Robotic Systems, pp. 186-193.

E. Feria, Predictive Transform for Optimum Digital Signal Processing,
pp. 1228-1237.

W. Wu and A. Kundu, A Modified Reduced Update Kalman Filter for
Images Degraded by Non-Gaussian Additive Noise, pp. 352-355.

C. Wright, E. Delp, and N. Gallagher, Morphological Based Target
Enhancement Algorithms to Counter the Hostile Nuclear Environment,
pp. 358-363.

R. Jha and M. Jernigan, Edge Adaptive Filtering: How Much and
Which Direction?, pp. 364-366.

B. Javidi, M. Massa, C. Ruiz, and J. Ruiz, Experiments on
Nonlinear Optical Correlation, pp. 367-369.

H. Arsenault, Similarity and Pathology in Neural Nets,
pp. 401-404.

E. Paek, J. Wullert, A. VonLehman, J. Patel, A. Sherer,
J. Harbison, H. Yu, and R. Martin, Vander Lugt Correlator
and Neural Networks, pp. 408-414.

B. Javidi, Optical Attentive Associative Memory with
Channel Dependent Nonlinearity in the Weight Plane,
pp. 415-420.

Z. Bahri and B. Vijaya Kumar, Design of Partial Information Filters
for Optical Correlators, pp. 421-426.

F. Palmieri and S. Shah, A New Algorithm for Training Multilayer
Perceptrons, pp. 427-428.

N. Nasarbadi, W. Li, B. Epranian and C. Butkus, Use of Hopfield
Network for Stereo Correspondence, pp. 429-432.

W. Owen, W. Hare, and A. Wang, The Role of the Bipolar Cell in
Retinal Signal Processing, pp. 435-442.

J. Troy and C. Enroth-Cugell, Signals and Noise in Mammalian
Retina, pp. 443-447.

R. Emerson, Linear and Nonlinear Mechanisms of Motion Selectivity
in Single Neurons of the Cat's Visual Cortex, pp. 448-453.

A. Dobbins and S. Zucker, Mean Field Theory and MT Neurons,
pp. 454-458.

M. Tuceryan and A. Jain, Segmentation and Grouping of Object
Boundaries, pp. 570-575.

Y. Shui and S. Ahmad, 3-D Location of Circular and Spherical
Features by Monocular Model-Based Vision, pp. 576-581.

M. Chantler and C. Reid, Integration of Ultrasonic and Vision
Sensors for 3-D Underwater Scene Analysis, pp. 582-583.

H. Fujii and T. Inui, Modelling the Spatial Recognition
Process in a Computer-Generated 3-D World, pp. 584-585.

C. Chang and S. Chatterjee, Depth from Stereo Image Flow,
pp. 586-591.

R. Safae-Rad, B. Benhabib, K. Smith, and Z. Zhou, Pre-Marking Methods
for 3D Object Recognition, pp. 592-595.

C. Bandera and P. Scott, Foveal Machine Vision Systems, pp. 596-599.

J. Kottas and C. Warde, Trends in Knowledge Base Processing Using
Optical Techniques, pp. 1250-1257.

J. Horner, Variations of the Phase-only Filter, p. 644 (abstract)

D. Gregory, J. Kirsch, and W. Crowe, Optical Correlator Guidance
and Tracking Field Tests, pp. 645-650.

Y. Fainman, L. Feng, and Y. Koren, Estimation of Absolute Spatial
Position of Mobile Systems by Hybrid Opto-Electronic Processor,
pp. 651-657.

T. Lu, X. Xu, and F. Yu, Hetero Associative Neural Network
Pattern Recognition, pp. 658-663.

M. Oguztoreli, Neural Computations in Visual Processes, pp. 664-670.

M. Jernigan, R. Belshaw, and G. McLean, Image Enhancement with
Nonlinear Local Interactions, pp. 676-681.

R. Pinter, R. Olberg, and E. Warrant, Luminance Adaptation of
Preferred Object Size in Identified Dragonfly Movement
Detectors, pp. 682-686.

A. Meyer, Z. Li, E. Haupt, K. Lu, and H. Louis, Clinical
Electrophysiology of the Eye: Physiological Modeling
and Parameter Estimation, pp. 687-692.

S. Levine and J. Kreifeldt, Minimal Information for the Unique
Representation of a Pattern of Points, pp. 828-830.

N. Ansari and E. Delp, Partial Shape Recognition: A Landmark-Based
Approach, pp. 831-836.

T. Fukuda and O. Hasegawa, Creature Recognition and Identification by
Image Processing Based on Expert System, pp. 837-842.

Y. Shiu and S. Ahmad, Grouping Image Features into Loops for
Monocular Recognition, pp. 843-844.

T. Matsunaga, A. Tokumasu, and O. Iwaki, A Study of Document
Format Identification Based on Table Structure, pp. 845-846.

T. Topper and M. Jernigan, On the Informativeness of Edges,
pp. 909-914.

C. Choo and H. Freeman, Computation of Features of 2-D Polycurve-Encoded
Boundaries, pp. 1041-1047.

A. Gross and T. Boult, Straight Homogeneous Generalized Cylinders,
Analysis of Reflectance Properties and a Necessary Condition for
Class Membership, pp. 1260-1267.

W. Yim and D. Joo, Surface Contour Mapping Using Cross-Shaped
Structured Light Beam, pp. 1048-1050.

A. Perry and D. Lowe, Segmentation of Non-random Texture Regions
in Real Images, pp. 1051-1054.

S. Lee and J. Pan, Tracing and Representation of Human Line
Drawings, pp. 1055-1061.

T. Taxt, P. Flynn, and A. Jain, Segmentation of Document Images,
pp. 1062-1067.

K. Yokosawa, Human-Based Character String Image Retrieval from Textual
Images, pp. 1068-1069.

S. Lee, Classification of Similar Line-Drawing Patterns with Attributed
Graph Matching, pp. 1070-1071.

That's all for now.  Next week, I'll try to send citations from the 3-D
scene interpretation workshop in Austin, Texas.

Pat

------------------------------

Date: Mon, 20 Nov 89 15:46:36 +0200
From: hezy%TAURUS.BITNET@CUNYVM.CUNY.EDU
Comments: If you have trouble reaching this host as math.tau.ac.il
        Please use the old address: user@taurus.bitnet
Subject: 6th Israeli AI & Vision Conference

Preliminary Program for the Sixth Israel Conference on
Artificial Intelligence and Computer Vision

Monday, 25 December 1989
18:00  Reception, Kfar Hamaccabiyah

AI TRACK:

Tuesday, 26 December 1989

9:00-10:30
Keynote Speaker --- Nils Nilsson, Stanford University, USA


Session 1.1
11:00-13:00
"On Learning and Testing Evaluation Functions"
      B. Abramson, University of Southern California, USA
"Analogical Learning: A Cognitive Theory"
      B. Adelson, Tufts University, USA
"An Expert System for Computer Aided Design of Man-Machine Interface"
      D. Tamir, A. Kandel, Florida State University, USA


Session 1.3
14:15-15:15
"Why Can't I Smear Paint at the Wall"
      M. Brent, Massachusetts Institute of Technology, USA
"Resolution of Lexical Synonymy at the Word Level"
      S. Nirenburg, E. Nyberg, Carnegie Mellon University, USA

Session 1.3B
14:15-15:15
"From Local to Global Consistency in Constraint Networks:
Some New Results"
      R. Dechter, The Technion, Israel
"Principles of Knowledge Representation and Reasoning in the
FRAPPE System"
      Y. Feldman, Weizmann Institute of Science, Israel
      C. Rich, Massachusetts Institute of Technology, USA


Session 1.5
15:30-17:00
"Multi-Agent Autoepistemic Predicate Logic"
      Y. Jiang, Cambridge University, England
"A Framework for Interactions Among Agents of Disparate Capabilities"
      O. Kliger, Weizmann Institute of Science, Israel
      J. Rosenschein, Hebrew University of Jerusalem, Israel
"Function Characteristics System: An ITS for Integrating Algebraic and
Graphical Activities in Secondary School Mathematics"
      N. Zehavi, B. Schwarz, A. Evers, Weizmann Institute of Science, Israel

Session 1.5B
15:30-17:00
"Intelligent Decision Support in Quasi-Legal Domains"
      U. Schild, Bar-Ilan University, Israel
"Model Analysis in Model Based Diagnosis"
      A. Abu-Hanna, R. Benjamins, University of Amsterdam, The Netherlands
"Planning in TRAUMAID"
      R. Rymon, J. Clarke, B. Webber, University of Pennsylvania, USA


Wednesday, 27 December 1989

9:30-10:30
Keynote Speaker --- Bonnie Webber, University of Pennsylvania, USA


Session 2.1
11:00-13:00
"Corpus-Based Lexical Acquisition for Translation Systems"
      J. Klavans, E. Tzoukermann, IBM Research, USA
"CARAMEL: A Flexible Model for Interaction between the Cognitive
Processes Underlying Natural Language Understanding"
      G. Sabah, L. Nicaud, C. Sauvage, LIMSI, France
"Zihui Mishkalo Shel Shir b'Machshev"
      U. Ornan, The Technion, Israel
"A Polymorphic Type Checking System for Prolog in HiLog"
      T. Fruhwirth, SUNY, USA


Session 2.2
14:15-15:15
Invited Speaker --- Saul Amarel, Rutgers University, USA
"AI in Design"

Session 2.3
15:30-17:00
"A Model and a Prototype for a Knowledge-Base System for Graphic
Objects"
      E. Naphtalovitch, E. Gudes, J. Addess, A. Noyman,
      Ben-Gurion University of the Negev, Israel
"Rule Based Expert Systems Which Learn to Solve Problems Quicker"
      D. Berlin, M. Schneider, Florida Institute of Technology, USA
"A System for Incremental Learning Based on Algorithmic Probability"
      R. Solomonoff, Oxbridge Research, USA



 VISION TRACK
Tue, December 26
Session 1.2
11:00 - 13:00 Keynote speaker:
      P. Burt, David Sarnoff Research Center, USA
Dynamic Analysis of Image Motion
      M. Costa, R.M. Haralick, L.G. Shapiro, University of Washington, USA
Optimal affine-invariant point matching

Session 1.4
14:15-15:15
      S. Negahdaripour & A. Shokrollahi, University of Hawaii, USA
Relaxing the brightness constancy assumption in computing optical flow
      D. Keren, S. Peleg, A. Shmuel, Hebrew University, Israel
Accurate hierarchical estimation of optical flow

Session 1.6
15:30-17:00
      D. Hung & O.R. Mitchell, New Jersey Institute of Technology, USA
Scaling invariant recognition of medium resolution, partially occluded objects
      D. Keren, R. Marcus, M. Werman, Hebrew University
Segmenting and compressing waveforms by minimum length encoding
      K. B. Thornton & L.G. Shapiro, University of Washington, USA
Image matching for view class construction


                                Wed, December 27
Session 2.5
9:15- 10:15 Keynote speaker
      R. Bajcsy, University of Pennsylvania, USA
Perception via Active Exploration Examples of Disassembly

Hall 1

Session 2.6
10:45-12:45
      H. Rom & S. Peleg, Hebrew University
Motion based segmentation of 2D scenes
      S. Levy Gazit & G. Medioni, University of Southern California, USA
Multi scale contour matching in a motion sequence
      Y. Pnueli, N.Kiryati, A. Bruckstein, Technion, Israel
On Navigation in moving influence fields
      A. Meizles & R. Versano, Ben Gurion University, Israel
A multiresolution view of token textures

Session 2.7
14:15-15:15
      S. Shimomura, Fujifacom Corporation, Japan
An application of a high speed projection module to document understanding
      N. Kiryati & A. Bruckstein, Technion, Israel
Efficient shape recognition by a probabilistic generalized hough transform

Session 2.8
15:30-17:00
      A. Shmuel & M. Werman, Hebrew University, Israel
Active vision: 3D depth from an image sequence
      F.Y. Shih and O.R.Mitchell, New Jersey Institute of Technology, USA
Euclidean distance based on grey scale morphology
      D. Sherman & S. Peleg, Hebrew University, Israel
Stereo by incremental matching of contours

Hall 2

Session 3.1
10:45-12:45
      M. Lindenbaum & A.Bruckstein, Technion, Israel
Geometric probing using composite probes
      F.Y. Shih & C.S. Prathuri, New Jersey Institute of Technology, USA
Shape extraction from size histogram
      M. Irani & S. Peleg, Hebrew University, Israel
Super resolution from image sequences
      G. Ron & S. Peleg, Hebrew University, Israel
Linear shape from shading

Session 3.2
14:15-15:15
      N. Gershon, J. Cappelleti, S. Hinds, MITRE Corporation, USA
3D image processing, segmentation and visualization of PET brain images
      V.F. Leavers, Kings College, England
Use of the Radon transform as a method of extracting information about
shape in 2D


Session 3.3
15:30-17:00
      E. Shavit & J. Tsotsos, University of Toronto, Canada
A Prototype for intermediate level visual processing: A motion hierarchy as
a case study
      D. Chetverikov, Computer Institute, Hungary
A data structure for fast neighborhood search in point sets
      D. Sher, E. Chuang, R. Venkatesan, SUNY Buffalo, USA
Generating object location systems from complex object descriptions


The Sixth Israeli Symposium on Artificial Intelligence
Vision and Pattern Recognition
December 26-27, 1989

Registration information from:

IPA --- Information Processing Association of Israel,
Kfar Hamaccabiah
Ramat Gan 52109
Israel
Phone: (972)-3-715772

------------------------------

End of VISION-LIST
********************

Vision-List-Request@ADS.COM (Vision-List moderator Phil Kahn) (12/07/89)

Vision-List Digest	Wed Dec 06 10:34:59 PDT 89

 - Send submissions to Vision-List@ADS.COM
 - Send requests for list membership to Vision-List-Request@ADS.COM

Today's Topics:

 We need "real world" images.
 Program For Realistic Images Wanted
 MATLAB
 Accuracy measure
 RE:Boundary tracking algorithms..
 Computer Vision Position
 Post-Doctoral Research Fellowship
 New Journal: Journal of Visual Communication and Image Representation
 Boundary Tracking -----> (collected Information).
 Bibliography: 3D Scene Interp. Workshop (Austin, TX)

----------------------------------------------------------------------

Date: 27 Nov 89 19:48:16 GMT
From: rasure@borris.unm.edu (John Rasure)
Subject: We need "real world" images.
Organization: University of New Mexico, Albuquerque

We need images.  Specifically, we need stereo pair images, fly by image
sequences, LANDSAT images, medical images, industrial inspection images, 
astronomy images, images from lasers, images from interferometers, etc.
The best images are those that correspond to a "typical" image processing
problems of today.  They need not be pleasing to look at, just representative
of the imaging technique and sensor that is being used.

Does anybody have such images that they can share?

John Rasure
rasure@bullwinkle.unm.edu
Dr. John Rasure
Department of EECE
University of New Mexico
Albuquerque, NM 87131
505-277-1351
NET-ADDRESS:	rasure@bullwinkle.unm.edu

------------------------------

Date: Wed, 29 Nov 89 11:25:37 +0100
From: sro@steks.oulu.fi (Sakari Roininen)
Subject: Program For Realistic Images Wanted

       We are preparing a research project in the field of visual
       inspection. In our research work we want to compute and simulate
       highly realistic images.

       Key words are: Shading - Illumination

       We are looking for a software package including the following
       properties (modules):

       -    Geometric description of objects.
       -    Optical properties of the surfaces. Surfaces of interest are:
         metal, wood, textile.
       -    Physical and geometrical description of the light sources.
       -    Physical and technological properties of the cameras.

       Software should be written in C and source code should be
       available so that we can customize the software to fit our
       applications.

       GOOD IDEAS ARE ALWAYS WELCOME !!!

       Please, contact:    Researcher Timo Piironen
                                Technical Research Centre of Finland
                                Electronics Laboratory
                                P.O.Box 200
                                SF-90571 OULU
                                Finland
                                tel. +358 81 509111

                                Internet: thp@vttko1.vtt.fi

------------------------------

Date: 1 Dec 89 18:00:26 GMT
From: Adi Pranata <pranata@udel.edu>
Subject: MATLAB


Hi, 

   I'm not sure where to post this question.  Does anyone have any info
on converting raster images/pictures to MATLAB matrix format?  I am
interested in using the MATLAB software to manipulate them, since it is
no problem to display the MATLAB file format using the imagetool
software.  Any info, including what other newsgroup would be more
appropriate to post to, is welcome.  Thanks in advance.

You could reply to pranata@udel.edu

                                           Sincerely,

                                       Desiderius Adi Pranata
PS: Electromagnetic way
    146.955 MHz -600 kHz
    Oldfashioned way
    (302)- 733 - 0990
    (302)- 451 - 6992

[ This is definitely appropriate for the Vision List.  Answers to the 
  List please. 			
		phil...]

------------------------------

Date: 2 Dec 89 17:41:18 GMT
From: muttiah@cs.purdue.edu (Ranjan Samuel Muttiah)
Subject: Accuracy measure
Organization: Department of Computer Science, Purdue University

I am looking for the various ACCURACY measures that are used in the
vision field.  If you have any information on this, could you
please email or post it?

Thank you.

------------------------------

Date: Wed, 29 Nov 89 18:47:24 EST
Subject: RE:Boundary tracking algorithms..
From: Sridhar Ramachandran <Sridhar@UC.EDU>
 
 I have pseudo code for a Boundary Tracking Algorithm for Binary
 Images that uses Direction codes and Containment codes to track
 the boundary. It is pretty efficient and works fine.
 
 If interested, please e-mail requests to sramacha@uceng.uc.edu
 (OR) sridhar@uc.edu  (OR) sramacha@uc.edu.
 
 Sridhar Ramachandran.

------------------------------

Date: Tue, 5 Dec 89 14:08:30 EST
From: peleg@grumpy.sarnoff.com (Shmuel Peleg x 2284)
Subject: Computer Vision Position - David Sarnoff Research Center

The computer vision research group at David Sarnoff Research Center has an
opening for a specialist in image processing or computer vision who has an
interest in computer architecture and digital hardware. Master's level or
equivalent experience is preferred.

This is a key position in an established research team devoted to the
development of high performance, real-time vision systems. The group is active
at all levels of research and development from basic research to applications
and prototype implementation. Current programs include object recognition,
motion analysis, and advanced architecture.

Please send your vitae or enquire with Peter Burt (Group Head), David Sarnoff
Research Center, Princeton, NJ 08543-5300; E-Mail: burt@vision.sarnoff.com.

------------------------------

Date: Wed, 29 Nov 89 19:11:55 WET DST
From: "D.H. Foster" <coa15%seq1.keele.ac.uk@NSFnet-Relay.AC.UK>
Subject: Post-Doctoral Research Fellowship

			 UNIVERSITY OF KEELE

	      Department of Communication & Neuroscience

		  POST-DOCTORAL RESEARCH FELLOWSHIP

Applications are invited for a 3-year appointment as a post-doctoral
research fellow to work in the Theoretical and Applied Vision Sciences
Group.  The project will investigate early visual form processing, and
will entail a mixture of computational modelling and psychophysical
experiment.  The project is part of an established programme of
research into visual information processing, involving a team of about
ten members working in several well-equipped laboratories with a range
of computing and very high resolution graphics display systems.

Candidates should preferably be experienced in computational vision
research, but those with training in computing science, physics,
experimental psychology, and allied disciplines are also encouraged to
apply.  The appointment, beginning 1 January 1990, or soon thereafter,
will be on the Research IA scale, initially up to Point 5, depending
on age and experience.

Informal enquiries and applications with CV and the names of two
referees to Prof David H. Foster, Department of Communication &
Neuroscience, University of Keele, Keele, Staffordshire ST5 5BG,
England (Tel. 0782 621111, Ext 3247; e-mail D.H.Foster@uk.ac.keele).


------------------------------

Date: Thu, 23 Nov 89 16:48:13 EST
From: zeevi@caip.rutgers.edu (Joshua Y. Zeevi)
Subject: New Journal: Journal of Visual Communication and Image Representation

              New Journal published by Academic Press
              ---------------------------------------

Dear Colleague,

The first issue of the Journal of Visual Communication and Image Representation
is scheduled to appear in September 1990. Since the journal will cover topics 
in your area of expertise, your contribution will most likely have impact on
future advancements in this rapidly developing field.
Should you have questions regarding the suitability of a  specific paper or
topic, please get in touch with Russell Hsing or with me. The deadline for
submission of papers for the first issue is Feb. 15, and for the second issue
May 15, 1990.

For manuscript submission and/or subscription information please write or call
Academic Press, Inc., 1250 6th Ave., San Diego, CA 92101. (619) 699-6742.

Enclosed please find the Aims & Scope (including a list of preferred topics)
and a list of the members of the Editorial Board.
          
       
          JOURNAL OF VISUAL COMMUNICATION AND IMAGE REPRESENTATION
          --------------------------------------------------------

    Dr. T. Russell Hsing, co-editor, Research Manager, Loop Transmission &
    Application District, Bell Communication Research, 445 South Street,
    Morristown, NJ 07960-1910  (trh@thumper.bellcore.com  (201) 829-4950)
                               
    Professor Yehoshua Y. Zeevi*, co-editor, Barbara and Norman Seiden Chair,
    Department of Electrical Engineering, Technion - Israel Ins. of Technology,
    Haifa, 32 000, Israel  

    * Present address: CAIP Center, Rutgers University, P. O. Box 1390,
      Piscataway, NJ 08855-1390  (zeevi@caip.rutgers.edu (201) 932-5551)


                              AIMS & SCOPE

The Journal of Visual Communication and Image Representation is an archival
peer-reviewed technical journal, published quarterly.  With the growing
availability of optical fiber links, advances of large scale integration, new
telecommunication services, VLSI-based circuits and computational systems, as
well as the rapid advances in vision research and image understanding, the
field of visual communication and image representation will undoubtedly 
continue to grow.  The aim of this journal is to combine reports on the state-
of-the-art of visual communication and image representation with emphasis on  
novel ideas and theoretical work in this multidisciplinary area of pure and
applied research.  The journal consists of regular papers and research
reports describing either original research results or novel technologies. The 
field of visual communication and image representation is considered in its 
broadest sense and covers digital and analog aspects, as well as processing and
communication in biological visual systems.

Specific areas of interest include, but are not limited to, all aspects of:

* Image scanning, sampling and tessellation
* Image representation by partial information
* Local and global schemes of image representation
* Analog and digital image processing
* Fractals and mathematical morphology
* Image understanding and scene analysis
* Deterministic and stochastic image modelling
* Visual data reduction and compression
* Image coding and video communication
* Biological and medical imaging
* Early processing in biological visual systems
* Psychophysical analysis of visual perception
* Astronomical and geophysical imaging
* Visualization of nonlinear natural phenomena

                      Editorial Board

R. Ansari, Bell Communications Research, USA
I. Bar-David, Technion - Israel Institute of Technology, Israel
R. Bracewell, Stanford University, USA
R. Brammer, The Analytic Sciences Corporation, USA
J.-O. Eklundh, Royal Institute of Technology, Sweden
H. Freeman, Rutgers University, USA
D. Glaser, University of California at Berkeley, USA
B. Julesz, Caltech and Rutgers University, USA
B. Mandelbrot, IBM Thomas J. Watson Research Center, USA
P. Maragos, Harvard University, USA
H.-H. Nagel, Fraunhofer-Institut fur Informations- und Datenverarbeitung, FRG
A. Netravali, AT&T Bell Labs, USA
D. Pearson, University of Essex, England
A. Rosenfeld, University of Maryland, USA
Y. Sakai, Tokyo Institute of Technology, Japan
J. Sanz, IBM Almaden Research Center, USA
W. Schreiber, Massachusetts Institute of Technology, USA
J. Serra, Ecole Nationale Superieure des Mines de Paris, France
M. Takagi, University of Tokyo, Japan
M. Teich, Columbia University, USA
T. Tsuda, Fujitsu Laboratories Ltd., Japan 
S. Ullman, Massachusetts Institute of Technology, USA
H. Yamamoto, KDD Ltd., Japan
Y. Yasuda, University of Tokyo, Japan

------------------------------

Date: Mon, 27 Nov 89 16:48 EDT
From: SRIDHAR BALAJI <GRX0446@uoft02.utoledo.edu>
Subject: Boundary Tracking -----> (collected Information).
X-Vms-To: IN%"Vision-List@ADS.COM"
Status: RO


/** These are some refs. and pseudocode for the boundary tracking I asked about.
**  Thanks so much to all the contributors. Since so many people
** wanted this, I thought it might be useful to send it to the group.
							S. Balaji */
*******
From:	IN%"marra@airob.mmc.COM" 14-NOV-1989 15:28:27.22
CC:	
Subj:	Re: Boundary tracking.


Here is Pascal-pseudo code for our binary boundary tracker. A minor mod will 
extend it to handle multiple objects. Good luck.



  Pseudo Code for the ALV border tracing module

Program pevdcrok ( input,output );
(* include csc.si TYPE and VAR declarations *)
(* 	this causes declaration of the following data elements:
	
		dir_cue
		version
		V_IM
		O_IM
		PB
*)
(* include peve.si TYPE and VAR declarations 
	 the following are defined in peve.si:

	pevdcrok_static_block.dc
        pevdcrok_static_block.dr
*)
(* ____________________FORWARD DECLARATIONS____________________ *)
(* ----------OURS---------- *)
(* ----------THEIRS-------- *)

procedure pevdcrok(road,obst:imagenum)

TYPE
    border_type_type = (blob,bubble)

    direction_type = (north,south,east,west)

VAR
    inside_pix : d2_point;		(* Col,row location of a pixel on
					   the inside of a border *)
    outside_pix : d2_point;		(* Col,row location of a pixel on
					   the outside of a border *)
    next_pix : d2_point;		(* Col,row location of the next
					   pixel to be encountered during
					   the tracing of the border *)
    next_8_pix : d2_point;		(* Col,row location of the next 
    					   eight-neighbor pixel to be
					   encountered during the tracing of
					   the border *)
    westmost_pix : d2_point;  		(* col,row location of the westmost 
					   pixel visited so far this image *)
    eastmost_pix : d2_point;		(* col,row location of the eastmost
					   pixel visited so far this image *)
    northmost_pix : d2_point;		(* col,row location of the northmost
					   pixel visited so far this image *)
    southmost_pix : d2_point;		(* col,row location of the southmost
					   pixel visited so far this image *)
    border_type : border_type_type;	(* a processing control flag
					   indicating the type of border
					   assumed to be following *)

    direction : direction_type;		(* directions being searched *) 
    start_time,
    end_time,
    print_time : integer;		(* recorded times for time debug *)

  procedure find_border(inside_pix,outside_pix,direction)

  begin (* find_border *)
	Set a starting point for finding blob in the middle 
            of the bottom of the image
	if PB.D[pevdcrok,gra] then
	  mark the starting point
   	Search in direction looking for some blob, being sure you 
            don't go off the top of the road image
	Search in direction looking for a blob/non-blob boundary, 
	    being sure you don't go off the top of the blob image
	if PB.D[pevdcrok,gra] then
	    mark the inside_pix and the outside_pix
  end (* find_border *)


  procedure trace_border(border_type,inside_pix,outside_pix,direction)
  TYPE
    dir_type = (0..7);		(* 0 = east
    				   1 = northeast
				   2 = north
				   3 = northwest
				   4 = west
				   5 = southwest
				   6 = south
				   7 = southeast *)

  VAR
    dir : dir_type;		(* relative orientation of the inside_pix
    				   outside_pix 2-tuple *)
 
  begin (* trace_border *) 
    remember the starting inside and outside pix's for bubble detection
    set dir according to direction
      
    while we haven't found the end of this border do

      begin (* follow border *)
	next_pix.col := outside_pix.col + dc[dir];
	next_pix.row := outside_pix.row + dr[dir];

  	while road.im_ptr^[next_pix.col,next_pix.row] = 0 do
	  begin (* move the outside pixel clockwise *)
	    outside_pix = next_pix;
	    advance the dir
	    check for bubbles; if a bubble then
	      begin (* bubble has been found *)
		border_type := bubble
		exit trace_border
	      end (* bubble has been found *)
	    next_pix.col := outside_pix.col + dc[dir];
	    next_pix.row := outside_pix.row + dr[dir];
	  end (* move the outside pixel clockwise *)
  
	update the direction for moving inside_pix
  
	next_pix.col := inside_pix.col + dc[dir];
	next_pix.row := inside_pix.row + dr[dir];
	next_8_pix.col := inside_pix.col + dc[dir];
	next_8_pix.row := inside_pix.row + dr[dir];

  	while road.im_ptr^[next_pix.col,next_pix.row] = 0 or 
	     (road.im_ptr^[next_8_pix.col,next_8_pix.row] = 0
	     and mod(dir,2) = 0) do
	  begin (* move the inside pixel counter-clockwise *)
	    inside_pix := next_pix;
	    advance the dir
            if road.im_ptr^[inside_pix.col,inside_pix.row] <> 0 then
	      begin
		inside_pix := next_8_pix;
		advance the dir;
	      end;

	    check for bubbles; if a bubble then 
	      begin (* bubble has been found *)
		border_type := bubble
		exit trace_border
	      end (* bubble has been found *)

	    next_pix.col := inside_pix.col + dc[dir];
	    next_pix.row := inside_pix.row + dr[dir];
	  end (* move the inside pixel counter-clockwise *)
  
	update the direction for moving outside_pix
  
	update values of westmost_pix,eastmost_pix,northmost_pix,
	        southmost_pix
	if mod(num_border_points,crock_rec.boundary_skip) = 1 then
 	  record column and row values in V_IM.edge_record
  
      end (* follow border *)
    border_type := blob
  end (* trace_border *)


begin (* pevdcrok *)

  if PB.D[pevdcrok,time] then
    clock(start_time);

  AND the road image with the border image, leaving the 
  	result in road image

  border_type := bubble
  while border_type = bubble do
    begin
      if PB.D[pevdcrok,tty]
	writeln('PEVDCROK: Calling find_border');
      find_border(inside_pix,outside_pix,west)
      initialize IM edge_record; num_border_points := 0
      if PB.D[pevdcrok,tty]
	writeln('PEVDCROK: Calling trace_border');
      trace_border(border_type,inside_pix,outside_pix,west)
    end

  complete IM edge_record

  if PB.D[pevdcrok,time] then
    begin (* time debug *)
      clock(end_time);
      print_time := end_time - start_time;
      writeln('PEVDCROK: elapsed time = ',print_time,' msec');
    end; (* time debug *)

end (* pevdcrok *)
*******
From:	IN%"mv10801@uc.msc.umn.edu" 14-NOV-1989 16:04:34.13
CC:	
Subj:	Re: Motion tracking

See:

J.A.Marshall, Self-Organizing Neural Networks for Perception of Visual Motion,
to appear in Neural Networks, January 1990.
*******
From:	IN%"pell@isy.liu.se"  "P{r Emanuelsson" 15-NOV-1989 13:24:37.17
CC:	
Subj:	Re: Boundary tracking.


I think you want to do chain-coding. The best method I know was invented
by my professor (of course...) and is called "crack coding". It uses
a two-bit code and avoids backtracking problems and such. It's quite
easy to implement, but I don't think I have any code handy.
The algorithm is, however, given as flow charts in the second reference:

"Encoding of binary images by raster-chain-coding of cracks", Per-Erik
Danielsson, Proc. of the 6th int. conf. on Pattern Recognition, Oct. -82.

"An improved segmentation and coding algorithm for binary and
nonbinary images", Per-Erik Danielsson, IBM Journal of research and 
development, v. 26, n. 6, Nov -82.

If you are working on parallel computers, there are other more suitable
algorithms.

Please summarize your answers to the vision list.

Cheers,

     /Pell

Dept. of Electrical Engineering	                         pell@isy.liu.se
University of Linkoping, Sweden	                    ...!uunet!isy.liu.se!pell


------------------------------

Date: Thu, 30 Nov 89 13:20:28 EST
From: flynn@pixel.cps.msu.edu
Subject: Bibliography: 3D Scene Interp. Workshop (Austin, TX)

Here's a list of papers in the proceedings of the IEEE Workshop on
Interpretation of 3D Scenes held in Austin, Texas on November 27-29.

STEREO
------
R.P. Wildes, An Analysis of Stereo Disparity for the Recovery of
Three-Dimensional Scene Geometry, pp. 2-8.

S. Das and N. Ahuja, Integrating Multiresolution Image Acquisition and
Coarse-to-Fine Surface Reconstruction from Stereo, pp. 9-15.

S.D. Cochran and G. Medioni, Accurate Surface Description from Binocular
Stereo, pp. 16-23.

SHAPE FROM X
------------
R. Vaillant and O.D. Faugeras, Using Occluding Contours for Recovering
Shape Properties of Objects, pp. 26-32.

P.K. Allen and P. Michelman, Acquisition and Interpretation of 3D Sensor
Data from Touch, pp. 33-40.

P. Belluta, G. Collini, A. Verri, and V. Torre, 3D Visual Information
from Vanishing Points, pp. 41-49.

RECOGNITION
-----------
R. Kumar and A. Hanson, Robust Estimation of Camera Location and
Orientation from Noisy Data Having Outliers, pp. 52-60.

J. Ponce and D.J. Kriegman, On Recognizing and Positioning Curved 3D
Objects from Image Contours, pp. 61-67.

R. Bergevin and M.D. Levine, Generic Object Recognition: Building
Coarse 3D Descriptions from Line Drawings, pp. 68-74.

S. Lee and H.S. Hahn, Object Recognition and Localization Using
Optical Proximity Sensor System: Polyhedral Case, pp. 75-81.

MOTION
------
Y.F. Wang and A. Pandey, Interpretation of 3D Structure and Motion Using
Structured Lighting, pp. 84-90.

M. Xie and P. Rives, Towards Dynamic Vision, pp. 91-99.

ASPECT GRAPHS
-------------
D. Eggert and K. Bowyer, Computing the Orthographic Projection Aspect
Graph of Solids of Revolution, pp. 102-108.

T. Sripradisvarakul and R. Jain, Generating Aspect Graphs for Curved
Objects, pp. 109-115.

D.J. Kriegman and J. Ponce, Computing Exact Aspect Graphs of Curved Objects:
Solids of Revolution, pp. 116-121.

SURFACE RECONSTRUCTION
----------------------
C.I. Connolly and J.R. Stenstrom, 3D Scene Reconstruction from Multiple
Intensity Images, pp. 124-130.

R.L. Stevenson and E.J. Delp, Invariant Reconstruction of Visual Surfaces,
pp. 131-137.

P.G. Mulgaonkar, C.K. Cowan, and J. DeCurtins, Scene Description Using Range
Data, pp. 138-144.

C. Brown, Kinematic and 3D Motion Prediction for Gaze Control, pp. 145-151.

3D SENSING
----------
M. Rioux, F. Blais, J.-A. Beraldin, and P. Boulanger, Range Imaging Sensors
Development at NRC Laboratories, pp. 154-159.

REPRESENTATIONS
---------------

A. Gupta, L. Bogoni, and R. Bajcsy, Quantitative and Qualitative Measures
for the Evaluation of the Superquadric Models, pp. 162-169.

F.P. Ferrie, J. Lagarde, and P. Whaite, Darboux Frames, Snakes, and
Super-Quadrics: Geometry from the Bottom-Up, pp. 170-176.

H. Lu, L.G. Shapiro, and O.I. Camps, A Relational Pyramid Approach to View
Class Determination, pp. 177-183.

APPLICATIONS
------------
I.J. Mulligan, A.K. Mackworth, and P.D. Lawrence, A Model-Based Vision System
for Manipulator Position Sensing, pp. 186-193.

J.Y. Cartoux, J.T. Lapreste, and M. Richetin, Face Authentification or
Recognition by Profile Extraction from Range Images, pp. 194-199.

J.J. Rodriguez and J.K. Aggarwal, Navigation Using Image Sequence Analysis
and 3-D Terrain Matching, pp. 200-207.


------------------------------

End of VISION-LIST
********************

Vision-List-Request@ADS.COM (Vision-List moderator Phil Kahn) (12/09/89)

Vision-List Digest	Fri Dec 08 10:00:41 PDT 89

 - Send submissions to Vision-List@ADS.COM
 - Send requests for list membership to Vision-List-Request@ADS.COM

Today's Topics:

 Re:MATLAB
 Images Wanted
 Submission to vision list
 Re: MATLAB

----------------------------------------------------------------------

Date: Thu, 7 Dec 89 12:25:16 EST
Subject: Re:MATLAB
From: Sridhar Ramachandran <Sridhar@UC.EDU>

Hi,

Regarding Pranata's article on Matlab.

I am not sure about the MATLAB matrix file format, but if it is
an ASCII file, then you can write a small C program that converts
your image file (binary, usually) to an ASCII text file that MATLAB
can use.
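
As a rough illustration of that suggestion (a sketch only; the raw 8-bit,
headerless image format and the command-line interface are assumptions),
such a converter can be as small as:

/* raw2ascii: convert a raw 8-bit image (width*height bytes, no header)
   into an ASCII text file, one image row per line, which MATLAB can
   read with its "load" command.
   Usage: raw2ascii infile outfile width height */

#include <stdio.h>
#include <stdlib.h>

int main(int argc, char *argv[])
{
    FILE *in, *out;
    int w, h, x, y, pix;

    if (argc != 5) {
        fprintf(stderr, "usage: %s infile outfile width height\n", argv[0]);
        return 1;
    }
    w = atoi(argv[3]);
    h = atoi(argv[4]);
    in  = fopen(argv[1], "rb");
    out = fopen(argv[2], "w");
    if (in == NULL || out == NULL) {
        fprintf(stderr, "cannot open file\n");
        return 1;
    }
    for (y = 0; y < h; y++) {
        for (x = 0; x < w; x++) {
            if ((pix = getc(in)) == EOF) {
                fprintf(stderr, "input file too short\n");
                return 1;
            }
            fprintf(out, "%d ", pix);
        }
        fprintf(out, "\n");
    }
    fclose(in);
    fclose(out);
    return 0;
}

Loading the resulting text file in MATLAB should then give a height-by-width
matrix named after the file.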

As mentioned, there is a large body of software, going under
the name of "Video Utilities", that does the reverse, i.e., converts
any format to raster format.

Sridhar Ramachandran.
Knowledge-Based Computer Vision Lab,
University of Cincinnati.

Replies to: Sridhar@UC.EDU || sramacha@uceng.uc.edu

------------------------------

Date:  7 Dec 89 10:02 -0800
From: <tate@cs.sfu.ca>
Subject: Images Wanted


We are doing research on the combination of laser and intensity imagery and
are looking for registered images of all levels of quality.  Can anyone help 
us out?  Thanks in advance.

Kevin Tate
Dept. of Computing Sciences
Simon Fraser University
Burnaby, B.C.
V5A-1S6

e-mail:	tate@cs.sfu.ca

[ As usual, please post answers to the List.  These types of questions are of
  particular interest to the readership.
		phil...	]

------------------------------

Date: Thu, 07 Dec 89 12:28:19 PST
From: rossbach%engrhub@hub.ucsb.edu
Subject: Books from Springer-Verlag

Springer Series in Perception Engineering
A Book Series from Springer-Verlag Publishers edited by Ramesh C. Jain 

For Information on any of the books in this series, please send email to
rossbach@hub.ucsb or call 1-800-SPRINGER.

     Perception is the process of interpreting information gathered from a
variety of sensors. As a complex set of physiological functions it plays a
vital role in our life. 
     In recent years, the "biological" concept of perception has been extended
to artificial perception systems. In manufacturing, productivity and quality
are being significantly improved using machine perception. Perception systems
in medicine are improving diagnostic procedures and the ability to monitor
patients. The role of robots in hazardous environments and in space is
expanding. The success of automated machinery depends critically on systems
with the ability to both sense and understand the environment and determine
appropriate action.
     In the last decade, research on perception systems has received
significant attention but, unfortunately, the progress in applications has been
slow. A lack of communication between researchers and practitioners, and, more
fundamentally, the lack of a well defined discipline of "Perception
Engineering" has created a gap between the theoretical work on perception
systems and application oriented work.
     In order to realize the factories, homes, hospitals, and offices of the
future, this gap needs to be bridged. Books in the Springer Series in
Perception Engineering will emphasize not only the rigorous theoretical and
experimental characterization of perception but also the engineering, design,
and application aspects of perception systems.

Published in the series:
     Paul J. Besl:  Surfaces in Range Image Understanding 
          ISBN 96773-7  $58.00
     Jorge L. C. Sanz (ed.):  Advances in Machine Vision  
          ISBN 96822-9  $79.00
     R. C. Vogt:  Automatic Generation of Morphological Set Recognition
Algorithms
          ISBN  97049-5  $79.00                  
     E. P. Krotkov:  Active Computer Vision by Cooperative Focus and Stereo
          ISBN  97103-3  $49.00
     T. -J. Fan:  Describing and Recognizing 3-D Objects Using Surface
Properties
          ISBN  97179-3  $39.50
     R. C. Jain and A.K. Jain:  Analysis and Interpretation of Range Images
          ISBN  97200-5  $59.00

Coming in the series:
     R. Rao:  A Taxonomy for Texture Description and Identification
     T. Weymouth (ed.):  Exploratory Vision: The Active Eye

Related Books in production:
     P. Farelle:  Data Compression and Recursive Block Coding  ISBN 97235-8 


------------------------------

Date: Fri, 8 Dec 89 08:14:54 GMT
From: tobbe@isy.liu.se (Torbjorn Kronander)
Subject: Re: MATLAB

Adi Pranata <pranata@udel.edu> writes:

>Hi, 

>   I'm not sure where to posted this question, anyway Does any one
>have any info, on convert raster images/picture to matlab matrix
>format, since i am interested on use the matlab software to manipulate
>it . Since it will be no problem to display the matlab file format
>using the imagetool software.  Any info including what other newsgroup
>more appropriate to posted will be welcome.  Thanks in advance.

>You could reply to pranata@udel.edu

>                                           Sincerely,

>                                       Desiderius Adi Pranata
>PS: Electromagnetig way
>    146.955 Mhz -600 KHz
>    Oldfashioned way
>    (302)- 733 - 0990
>    (302)- 451 - 6992

>[ This is definitely appropriate for the Vision List.  Answers to the 
>  List please. 			
>		phil...]


I have done lots of work using images in MATLAB. It is fairly simple
to convert any structure to a matrix (.mat file).
 
It is also fairly simple to use m-files to create an image display on
a color monitor such as a color Sun.
 
It would, however, be nicer to use mex-files instead of the m-files I used
(no mex facility was available back then).

The following is example code. I do not even assume it will work
right away, but it may serve as a basis for further work (it works for me).
My image reading tool is kind of special, so begin by adding your
own reading routine.

/**********************************************************************

matc.h

Routines for input/output of MATLAB matrices.

Peter Nagy 8712xx

rewritten by Torbjorn Kronander 8712xx+eps

-changed to std unix stream routines (fopen ...)

-added return values for routines. <=0 is failure, 1 is success.


-Only ONE variable is read at a time in readmat; the next call
will return the next variable (if any).

-Similarly in writemat, subsequent calls will add new elements to the already
open file.

-New parameter "ctl" added to the call. ctl is one of INIT,GO,FINIT as defined
below:

  INIT is file opening
  GO is read/write one variable
  FINIT is close.

-transpose added to get correspondence between argument ordering in MATLAB and C.


KNOWN PROBLEMS 
-

 **********************************************************************/



/**********************************************************************

int readmat(procctltyp ctl,char* fname,int* rows,int* cols,char* varname,boolean* iflag,double* rpart,double* ipart)
  procctltyp ctl;
  int *rows, *col, *iflag;
  char *fname, *varname;
  double *rpart, *ipart;

INPUT:
  ctl,     one of INIT,GO,FINIT (0,1,2) as described above
  fname,   filename if ctl=INIT, else not used
  
OUTPUT:
  function return  negative if read failure (normally premature eof)
                   0 if open failure 
                   1 if success.
  *rows,*cols  Number of rows and columns. (first, second C-index!)
  *iflag       0 if only real part
               1 if complex
  *rpart       Pointer to real part matrix.
  *ipart       Pointer to imaginary part matrix.


SIDE EFFECTS:
Be careful if the declared number of columns does not agree with the
value returned by the routine (cols); indexing will break down.

PN  mod TK

**********************************************************************/

typedef int procctltyp;
typedef int boolean;      /* not defined in the original posting; int assumed */

#define  INIT  0
#define  GO    1
#define  FINIT 2

#define  FALSE 0
#define  TRUE  1


extern int readmat(procctltyp ctl,char* fname,int* rows,int* cols,char* varname,boolean* iflag,double* rpart,double* ipart);



/**********************************************************************

int writemat(procctltyp ctl,char* fname,int rows,int cols,char* varname,boolean iflag,double* rpart,double* ipart)
  procctltyp ctl;
  int rows, cols, iflag;
  char fname[], varname[];
  double *rpart, *ipart;

INPUT:
  ctl,         One of INIT,GO,FINIT (0,1,2) as described above
  fname,       Filename if ctl=INIT, else not used
  rows,cols    Number of rows and columns. (first, second C-index!)
  varname      variables name (matrix name)
  iflag        0 if only real part
               1 if complex
  *rpart       Pointer to real part matrix.
  *ipart       Pointer to imaginary part matrix.
  
OUTPUT:
  function return  negative if write failure (normally write access trouble)
                   0 if open failure 
                   1 if success.


SIDE EFFECTS:
--

PN  mod TK

**********************************************************************/

extern int writemat(procctltyp ctl,char* fname,int rows,int cols,char* varname,boolean iflag,double* rpart,double* ipart);
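
To make the INIT/GO/FINIT protocol concrete, here is a minimal driver sketch
(not part of the original routines; the matc.h file name, the boolean typedef
and the 512*512-element limit are assumptions of this example) that lists
every variable stored in a .mat file:

#include <stdio.h>
#include "matc.h"		/* the declarations above */

#define MAXELEM (512*512)	/* largest matrix this sketch will accept */

static double rbuf[MAXELEM], ibuf[MAXELEM];

int main(void)
{
  int rows, cols;
  boolean iflag;
  char name[256];
  char fname[64] = "example";	/* opens example.mat */

  if (readmat(INIT, fname, &rows, &cols, name, &iflag, rbuf, ibuf) != 1) {
    fprintf(stderr, "cannot open %s.mat\n", fname);
    return 1;
  }
  /* each GO call returns the next variable; <= 0 means no more (or error) */
  while (readmat(GO, fname, &rows, &cols, name, &iflag, rbuf, ibuf) == 1)
    printf("%s: %d x %d (%s)\n", name, rows, cols,
           iflag ? "complex" : "real");
  readmat(FINIT, fname, &rows, &cols, name, &iflag, rbuf, ibuf);
  return 0;
}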


/**********************************************************************

  matc.c


Routines for input/output of MATLAB matrices.

Peter Nagy 8712xx

rewritten by Torbjorn Kronander 8712xx+eps

-changed to std unix stream routines (fopen ...)

-added return values for routines. <=0 is failure, 1 is success.


-Only ONE variable is read at a time in readmat; the next call
will return the next variable (if any).

-Similarly in writemat, subsequent calls will add new elements to the already
open file.

-New parameter "ctl" added to the call. ctl is one of INIT,GO,FINIT as defined
in tkctools.h:
  INIT is file opening
  GO is read/write one variable
  FINIT is close.

-transpose added to get correspondence between argument ordering in MATLAB and C.


KNOWN PROBLEMS
- transpose() only swaps elements correctly for square matrices, so
  readmat/writemat are only safe for square (or single-row/column) data.

 **********************************************************************/


#include <stdio.h>
#include <string.h>
#include "matc.h"		/* procctltyp, INIT/GO/FINIT, prototypes (assumed filename) */

#define MACHINECODE 1000      /* 1000 for SUN */


/**********************************************************************

double *getadress(row,col,mat,rows,cols)
int row,col,rows,cols;
double *mat;

returns pointer to mat(row,col)

INPUT:
row   row of index
col   column of index  (here row is FIRST coordinate in C!!)
mat   pointer to matrix of DOUBLE!!
rows  dimensionality of first index of mat.
cols  dimensionality of second index of mat.

OUTPUT:
function return:   pointer to double mat[row][col]

Sides:
none known   (be careful not to index outside mat!)

**********************************************************************/

static double *getadress(row,col,mat,rows,cols)
int row,col,rows,cols;
double *mat;
{
register int ipnt,icols;
register double *pnt;

icols=(int) cols;
ipnt= row * icols + col;

pnt=mat + ipnt;
return( pnt);
}

/**********************************************************************

static void transpose(rows,cols,mat)
int rows,cols;
double *mat;

Transposes a matrix 

INPUT:
rows    dimensionality of first index in mat
cols    same for second index
mat     pointer to matrix of double

OUTPUT:
-

Sides:
The matrix mat is transposed in place!  (Note: the element swap below is
only correct for square matrices; for rows != cols the result is wrong and
indexing can run outside mat.)



**********************************************************************/


static void transpose(rows,cols,mat)
int rows,cols;
double *mat;
{
double tmp, *adr1, *adr2;
int ix,iy;

for (ix=0; ix < rows; ix++)
  for (iy=ix+1; iy <  cols; iy++)
    {
      adr1=getadress(ix,iy,mat,rows,cols);
      adr2=getadress(iy,ix,mat,rows,cols);
      tmp = *adr1;
      *adr1 = *adr2;
      *adr2 = tmp;
    }
}


/**********************************************************************

int readmat(ctl,fname,rows,col,varname,iflag,rpart,ipart)
  procctltyp ctl;
  int *rows, *col, *iflag;
  char *fname, *varname;
  double *rpart, *ipart;

INPUT:
  ctl,     one of INIT,GO,FINIT (0,1,2) as described above
  fname,   filename if ctl=INIT, else not used
  
OUTPUT:
  function return  negative if read failure (normally premature eof)
                   0 if open failure 
                   1 if success.
  *rows,*cols  Number of rows and columns. (first, second C-index!)
  *iflag       0 if only real part
               1 if complex
  *rpart       Pointer to real part matrix.
  *ipart       Pointer to imaginary part matrix.


SIDE EFFECTS:
Be careful if the declared number of columns does not agree with the
value returned by the routine (cols); indexing will go haywire!

PN  mod TK

**********************************************************************/


int readmat(ctl,fname,rows,cols,varname,iflag,rpart,ipart)
  procctltyp ctl;
  int *rows, *cols, *iflag;
  char *fname, *varname;
  double *rpart, *ipart;
{
  int retn, inum[5];
  char filename[100];

  static FILE *infp;

  switch(ctl)
    {
    case INIT:
      strcpy(filename,fname);		/* do not modify the caller's fname */
      infp = fopen(strcat(filename,".mat"), "r");
      if (NULL != infp) 
	return(1);
      else
	return(0);
      break;
    case GO:
      if (5 != fread(inum,sizeof(int),5,infp)) return(-1);
      if (MACHINECODE != inum[0]) return(-2); /* wrong machine type ! */
      *rows = inum[1];
      *cols = inum[2];
      *iflag= inum[3];

      /* Real part */
      retn=fread(varname,1, inum[4], infp);
      if (inum[4] != retn) return(-3);
      retn=fread(rpart,8,(*rows)*(*cols),infp);
      if ((*rows)*(*cols) != retn)  return(-4);

      if ( (*rows >1) && (*cols>1)){
	transpose(*rows,*cols,rpart);
      }
      
      /* Complex part */
      if (*iflag) {
	if (ipart == NULL) return(-5);
	retn=fread(ipart,8,(*rows)*(*cols),infp);
	if ((*rows)*(*cols) != retn)  return(-6);
	if ( (*rows >1) && (*cols>1)){
	  transpose(*rows,*cols,ipart);
	}
      }
      return(1);
      break;

    case FINIT:
      fclose(infp);
      return(1);
    }
  return(0);			/* unknown ctl value */
}


/**********************************************************************

int writemat(ctl,fname,rows,cols,varname,iflag,rpart,ipart)
  procctltyp ctl;
  int rows, cols, iflag;
  char fname[], varname[];
  double *rpart, *ipart;

INPUT:
  ctl,         One of INIT,GO,FINIT (0,1,2) as described above
  fname,       Filename if ctl=INIT, else not used
  rows,cols    Number of rows and columns. (first, second C-index!)
  varname      variables name (matrix name)
  iflag        0 if only real part
               1 if complex
  *rpart       Pointer to real part matrix.
  *ipart       Pointer to imaginary part matrix.
  
OUTPUT:
  function return  negative if write failure (normally write access trouble)
                   0 if open failure 
                   1 if success.


SIDE EFFECTS:
--

PN  mod TK

**********************************************************************/

int writemat(ctl,fname,rows,cols,varname,iflag,rpart,ipart)
  procctltyp ctl;
  int rows, cols, iflag;
  char fname[], varname[];
  double *rpart, *ipart;

{
  int retn, inum[5];
  static FILE *outfp;  
  char  filename[100];

  switch(ctl)
    {
    case INIT:
      strcpy(filename,fname);
      outfp = fopen(strcat(filename,".mat"), "w");
      if (outfp != NULL)
	return(1);
      else
	return(0);
      break;
    case GO:
      if ( (rows >1) && (cols>1)){
	transpose(rows,cols,rpart);
	if (iflag) transpose(rows,cols,ipart);
      }

      inum[0]=MACHINECODE;
      inum[1]=rows;
      inum[2]=cols;
      inum[3]=iflag;
      inum[4]=strlen(varname)+1;

      retn=fwrite(inum,4,5,outfp);
      if (5 != retn) return(-1); 
#ifdef PRINTDEB
      printf("inum %d %d %d %d %d\n",inum[0],inum[1],inum[2],inum[3],inum[4]);
      printf("retn (inum) %d \n",retn);
#endif

      retn=fwrite(varname,1, inum[4], outfp);
      if (inum[4] != retn) return(-2);
#ifdef PRINTDEB
      printf("varname %d\n",retn);
#endif
      
      /* Real part */
      retn=fwrite(rpart,8,rows*cols,outfp);
      if (rows*cols != retn)  return(-3);
#ifdef PRINTDEB
      printf("real part %d\n",retn);
#endif

      /* Complex part */
      if (iflag) {
	if (ipart == NULL) return(-4);
	retn=fwrite(ipart,8,(rows)*(cols),outfp);
	if (rows*cols != retn)  return(-5);
      }

      if ( (rows >1) && (cols>1)){
	transpose(rows,cols,rpart);  /* redo rpart and ipart to original */
	if (iflag) transpose(rows,cols,ipart);
      }

      return(1);
      break;

    case FINIT:
      fclose(outfp);
      return(1);
    }
  return(0);			/* unknown ctl value */
} /* writemat */


/* imtomat.c */
/**********************************************************************

Converts an image file (256 x 256) to a matlab .mat file.

Tobbe K 880123

**********************************************************************/

#include <stdio.h>
#include <string.h>

#include "matc.h"	/* readmat/writemat declarations above (assumed filename) */

/* im256typ comes from the author's private image library; a plain
   256 x 256 byte image is assumed here. */
typedef unsigned char im256typ[256][256];

im256typ ima;
double      fima[256][256];

main(argc,argv)
int argc;
char *argv[];

{
int ix,iy;
char fname[100];


/* THIS IS WHERE YOUR WORK GOES...
readopenim256("");
*/

printf("Matlab filename without extension (.mat assumed)>"); 
scanf("%s",fname);



/* THIS IS WHERE YOUR WORK GOES...
readim256(ima);
*/

for (ix=0; ix <= 255; ix++) 
  for (iy=0; iy <=255; iy++)
    fima[ix][iy]=ima[ix][iy];

if (1 != writemat(INIT,fname,256,256,fname,FALSE,(double *)fima,NULL))  {
  printf("Error in open of %s.mat!\n",fname);
  exit(1);
}

if (1 != writemat(GO,fname,256,256,fname,FALSE,(double *)fima,NULL))  {
  printf("Error in write of %s.mat!\n",fname);
  exit(2);
}

writemat(FINIT,fname,0,0,fname,FALSE,NULL,NULL);	/* close the .mat file */
return 0;
}




/**********************************************************************

mat2im.c

Converts a matlab matrix .mat file to a info standard .256 file

Note must be 256 x 256 image presently !! 

Tobbe 880122

**********************************************************************/

#include <stdio.h>
#include <math.h>
#include <string.h>

#include "matc.h"	/* readmat/writemat declarations above (assumed filename) */

#define MIN(a,b) ((a) < (b) ? (a) : (b))
#define MAX(a,b) ((a) > (b) ? (a) : (b))

/* im256typ comes from the author's private image library; a plain
   256 x 256 byte image is assumed here. */
typedef unsigned char im256typ[256][256];

im256typ ima;
double      fima[256][256];

main(argc,argv)
int argc;
char *argv[];

{
int ret,xrows,yrows,iflag;
register int ix,iy,tmpi;
char varname[30],fname[30];

if (argc > 1) {
  strcpy(fname,argv[1]);
}
else {
printf("Filename without extension (.256 and .mat assumed)>"); 
scanf("%s",fname);
}

xrows=256;
yrows=256;

/* THIS IS WHERE YOUR WORK GOES...
if (!writeopenim256(fname)) {
  printf("Sorry, there was some error in opening the image file\n");
  exit(1);
}
*/

if (1 != readmat(INIT,fname,&xrows,&yrows,varname,&iflag,(double *)fima,NULL)) {
  printf("Error in open of %s.mat\n",fname);
  exit(2);
}

do
  {
    ret = readmat(GO,fname,&xrows,&yrows,varname,&iflag,(double *)fima,NULL);
    if (ret != 1) {			/* real-valued matrices only */
      break;
    }

    printf("varname is: %s, iflag is: %d, xdim,ydim are (%d,%d)\n"
	   ,varname,iflag,xrows,yrows);

    for (ix=0; ix < 256; ix++)
	for (iy=0; iy < 256; iy++) {
	  tmpi=rint(fima[ix][iy]);
	  ima[ix][iy]=MAX(0,MIN(255,tmpi));
	}

/* THIS IS WHERE YOUR WORK GOES...
    writeim256(ima);
*/

  }
while (ret == 1);

readmat(FINIT,fname,&xrows,&yrows,varname,&iflag,NULL,NULL);	/* close the .mat file */
return 0;
}



Torbjorn Kronander			tobbe@isy.liu.se
Dept. of EE	Linkoping University    Sweden
ph +46 13 28 22 07


------------------------------

End of VISION-LIST
********************

Vision-List-Request@ADS.COM (Vision-List moderator Phil Kahn) (12/16/89)

Vision-List Digest	Fri Dec 15 10:55:52 PDT 89

 - Send submissions to Vision-List@ADS.COM
 - Send requests for list membership to Vision-List-Request@ADS.COM

Today's Topics:

 Re: range images
 stereo pair images wanted
 Moment Invariants
 raster to matlab

----------------------------------------------------------------------

Date: Mon, 11 Dec 89 13:54:28 BST
From: Guanghua Zhang <guanghua%cs.heriot-watt.ac.uk@NSFnet-Relay.AC.UK>
Subject: Re: range images

It was mentioned before that several sets of range images can be obtained
from M. Rioux and L. Cournoyer. They can also provide registered pairs of
intensity and range images (for one set, I think). But I don't know how
they register the two images: two perspective, two orthogonal, or one
perspective and one orthogonal with conversion parameters.


Their address is:
Photonics and Sensors Section
Laboratory for Intelligent Systems, Division of Electrical Engineering
National Research Council of Canada
Ottawa, Ontario, Canada
K1A 0R8  

------------------------------

Date: Tue, 12 Dec 89 17:44 +0100
From: KSEPYML%TUDRVA.TUDELFT.NL@CUNYVM.CUNY.EDU
Subject: stereo pair images wanted

I am in need of stereo pair images, both real world and constructed, as
input for a human stereo-vision algorithm.
Does anybody have such images available?

Alexander G. van der Voort
Koninklijke Shell Exploratie en Produktie Laboratorium
Volmerlaan 6
2288 GD Rijswijk
The Netherlands

[ As usual, please post answers to the List.
		phil...	]

------------------------------

Date: 15 Dec 89 15:45:05 GMT
From: Manickam Umasuthan <suthan%cs.heriot-watt.ac.uk@NSFnet-Relay.AC.UK>
Subject: Moment Invariants
Organization: Computer Science, Heriot-Watt U., Scotland

I am very interested to know whether anyone has done research on
the application of moment invariants to practical problems (mainly 3D
images using 3D moment invariants).

M.UMASUTHAN

------------------------------

Date: Fri, 8 Dec 89 15:03:17 PST
From: ramin@scotty.Stanford.EDU (Ramin Samadani)
Subject: raster to matlab

Regarding raster to MATLAB conversion: I've taken some licensed stuff out of
the code we use and came up with the following, which has the main parts
for converting a file containing unsigned chars (rows*cols of them) to a
MATLAB-readable format. It is currently hardwired for a VAX but could
easily be modified for Suns and Macs, etc. Also, since I took the licensed
stuff out, the rows and cols are hardwired right now, but that should be
easy to fix. The code follows, with no guarantees at all!

	Ramin Samadani


/* imagetomat.c - writes a matrix matlab can
 * 					read. Double format output,byte
 *					format input for now.
 *
 * usage: matrix-name <in.hpl >out.mat
 *
 * to load: cc -o name name.c -O
 *
 * Ramin Samadani - 6 May 88
 */
int rows = 128;		/* hardwired image size, as noted above */
int cols = 128;

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
typedef struct {
	long type; /*type*/
	long mrows; /* row dimension */
	long ncols; /* column dimension */
	long imagf; /* flag indicating imag part */
	long namlen; /* name length (including NULL) */
	} Fmatrix;
char *prog;

main(argc,argv)
        int argc;
        char *argv[];
{
/* VARIABLES */
    int i;			/* note: rows and cols are the globals above */
    unsigned char *ifr;
    double *ofr;
/*
 * Matlab declarations 
 */
    char *pname; /* pointer to matrix name */
    float *pr; /* pointer to real data */
    FILE *fp;
    Fmatrix x;
    int mn;

    prog = argv[0];

/*
 * check passed parameters
 */

    if (argc < 2) {
        fprintf(stderr,"use: %s matrix name <filein  >fileout\n",prog);
        exit(1);
    }
    if ((pname = (char *) calloc(80,sizeof(char))) == NULL) {
        fprintf(stderr,"%s: can't allocate matrix name\n",prog);
        exit(1);
    }
    strcpy(pname, argv[1]);	/* copy into the allocated buffer */
    x.type = 2000;		/* 2000 = VAX D-float host format */
    x.mrows = (long) cols;	/* rows/cols swapped: MATLAB stores column-major, */
    x.ncols = (long) rows;	/* so the image comes back transposed in MATLAB   */
    x.imagf = 0;
    x.namlen = strlen(pname) + 1;
    fprintf(stderr,"matrix %s has %ld rows, %ld cols, double precision\n",
    	pname, x.mrows,x.ncols);

    if ((ifr = (unsigned char *) calloc(rows*cols,sizeof(char))) == NULL){
        fprintf(stderr,"%s: can't allocate input frame\n",prog);
        exit(1);
    }
    if ((ofr = (double *) calloc(rows*cols,sizeof(double))) == NULL){
        fprintf(stderr,"%s: can't allocate output frame\n",prog);
        exit(1);
    }
    if (read(0,ifr,rows*cols*sizeof(char)) == -1) {
        fprintf(stderr,"%s: can't read frame\n",prog);
        exit(1);
    }

/* MAIN PROCESSING */
    mn = x.mrows*x.ncols;
    for (i = 0; i < mn; i++) {
        ofr[i] = (double) (ifr[i]&0377);
    }
    

/*
 * write the matrix
 */
    if(write(1,&x,sizeof(Fmatrix)) != sizeof(Fmatrix)) {
    	fprintf(stderr,"%s: can't write matrix header\n",prog);
    	exit(1);
    }
    if(write(1,pname,(int)x.namlen*sizeof(char)) != 
    		(int)x.namlen*sizeof(char)) {
        fprintf(stderr,"%s: can't write name of matrix\n",prog);
        exit(1);
    }
    if (write(1,ofr,mn*sizeof(double)) != mn*sizeof(double)){
        fprintf(stderr,"%s: can't write matrix data\n",prog);
        exit(1);
    }
}
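
For reference, the thousands digit of the "type" field in this old .mat
header selects the host number format, which is why this program writes
2000 (VAX D-float) while the Sun-oriented code posted earlier to the List
uses 1000 (big-endian IEEE); 0 denotes little-endian IEEE. A minimal sketch
of picking the code at compile time (the mat_type_code name is introduced
here for illustration, and the vax/sun predefined macros are assumptions
about the compilers involved):

/* Thousands digit of the Fmatrix "type" word: 0 = IEEE little-endian,
   1 = IEEE big-endian (e.g. Sun), 2 = VAX D-float.  The remaining
   digits stay 0 for a full matrix of doubles. */
static long mat_type_code(void)
{
#if defined(vax)
    return 2000L;
#elif defined(sun)
    return 1000L;
#else
    return 0L;			/* assume IEEE little-endian */
#endif
}

so that the header setup above becomes x.type = mat_type_code();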


------------------------------

End of VISION-LIST
********************

Vision-List-Request@ADS.COM (Vision-List moderator Phil Kahn) (12/23/89)

Vision-List Digest	Fri Dec 22 09:42:22 PDT 89

 - Send submissions to Vision-List@ADS.COM
 - Send requests for list membership to Vision-List-Request@ADS.COM

Today's Topics:

 multidimensional image data request
 Suggestions for a range-finder
 Study of Consciousness within Science--Intl' Conference

----------------------------------------------------------------------

Date: 18 Dec 89 19:43:04 GMT
From: jwbrandt@ucdavis.edu (Jon Brandt)
Subject: multidimensional image data request
Organization: U.C. Davis - Department of Electrical Engineering and Computer Science

I am looking for the following types of image data:

1) time sequences from stationary or moving platforms
2) 3D or higher, scalar or vector, simulation data (e.g. flow vortices)
3) 3D reconstructions (MR, CT, confocal optics, etc.)
4) other multidimensional data that I haven't thought of

Can anyone point to an ftp source?  Are there standard test sets in these
areas?  Size is no object: the bigger the better (within reason).

Thanks,
	Jon Brandt
	brandt@iris.ucdavis.edu

[ Please post all responses directly to the List.  Since the Vision List has 
  an FTP connection, it would be nice if some of this data could be
  stored here...
				phil...	]


------------------------------

Date: Wed, 20 Dec 89 09:01:57 EST
From: Dmitry Goldgof <goldgof@SOL.USF.EDU>
Subject: Suggestions for a range-finder

We are looking into buying an inexpensive (~$20K) range-finder
for robotics applications. For this amount we can probably
only get a structured-light system. Does anybody have suggestions
on the subject (we do have Paul Besl's paper)? Our requirements
are a standoff distance of ~0.5m and a depth of field of ~1-2m or better,
i.e. not a system with a very small depth of field.

Dmitry Goldgof
Department of Computer Science and Engineering
University of South Florida

------------------------------

Date: 19 Dec 89 14:04:37 GMT
From: bvi@cca.ucsf.edu (Ravi Gomatam)
Subject: STUDY OF CONSCIOUSNESS WITHIN SCIENCE--INTL' CONFERENCE
Organization: Computer Center, UCSF


         ------- FIRST INTERNATIONAL CONFERENCE --------
                         on the study of
                   CONSCIOUSNESS WITHIN SCIENCE

                         Feb. 17-18, 1990
                  Cole Hall, UCSF, San Francisco

                      CALL FOR REGISTRATION


   ORGANIZING COMMITTEE                   ADVISORY BOARD

   T.D. Singh                             Henry Stapp
   R.L. Thompson                          Karl Pribram
   Ravi Gomatam                           E.C.G. Sudarshan
   K.P. Rajan                             David Long


PURPOSE:

   In this century, developments in a variety of fields including
quantum physics, the neurosciences, and artificial intelligence have
revealed the necessity of gaining an understanding of the nature
of consciousness and its causal interplay even in the study of
matter.  The present conference will examine the
methodological tools and problems in the study of consciousness 
from the perspective of a wide range of scientific fields.  
Prominent scholars will share and discuss their research through 
invited presentations and question and answer sessions.  The 
discussions will focus on the role of consciousness as a vital 
component of the scientific investigation of the natural world. 


               INVITED PRESENTATIONS INCLUDE:

NEW CONCEPTS ON THE MIND-BRAIN PROBLEM 
JOHN ECCLES Neurosciences, Switzerland Nobel Laureate

A QUANTUM THEORY OF CONSCIOUSNESS
HENRY STAPP Theoretical Physics, Lawrence Berkeley Labs.

BRAIN STATES AND PROCESSES AS DETERMINANTS OF THE CONTENTS OF 
CONSCIOUSNESS
KARL PRIBRAM Neuropsychology, Stanford University

CONSCIOUSNESS: IMMANENT OR TRANSCENDENT?
ROBERT ROSEN Biophysics, Dalhousie University

USE OF CONSCIOUS EXPERIENCE IN UNDERSTANDING REALITY 
BRIAN JOSEPHSON TCM Group, Cambridge University Nobel Laureate

WAVE MECHANICS OF CONSCIOUSNESS	- ROBERT JAHN
Engineering Anomalies Group, Princeton University

ENGINEERING ANOMALIES RESEARCH - BRENDA DUNNE
Engineering Anomalies Group, Princeton University

SPONTANEITY OF CONSCIOUSNESS: A PROBABILISTIC THEORY OF MEANINGS 
AND SEMANTIC ARCHITECTONICS OF PERSONALITY - V. NALIMOV
Mathematical Theory of Experiments, Moscow State University

EVOLUTION AND CONSCIOUSNESS
A.G. CAIRNS-SMITH Molecular Chemistry, Glasgow University

PATTERNS IN THE UNIVERSE
E.C.G. SUDARSHAN Theoretical Physics, Univ. of Texas, Austin

WHAT IS WRONG WITH THE PHILOSOPHY OF MIND?
JOHN SEARLE Cognitive Philosophy, U.C., Berkeley

A TRANS-TEMPORAL APPROACH TO MIND-BRAIN INTERACTION
R. L. THOMPSON Mathematical Biology, Bhaktivedanta Institute, SF 

BIOLOGICAL COHERENCE--CONNECTIONS BETWEEN MICRO AND MACRO PHYSICS
H. FROHLICH Theoretical Physics, University of Liverpool 



    Participation is by registration.  Besides these invited 
talks, there will be question/answer sessions, panels and a poster 
session.  The program format will afford registered participants 
ample opportunity for interaction with the distinguished guests. 
Early registration is encouraged. 

REGISTRATION FEE:   $125.00 before January 15, 1990;
                    $75.00 Full time students (limited seats)
[Fee includes luncheon on both days]
                    $150/$85 after January 15. 

    To register, please send check/money order (in U.S. dollars 
only) drawn in favor of the Bhaktivedanta Institute to the 
conference secretariat.  Please include name, address, 
institutional affiliation and research interests (if any) of the 
registrant.

CALL FOR PAPERS:

    While all oral presentations are by invitation only, 
opportunities exist for registered participants to present papers 
in poster sessions on any topic related to the broad theme of the 
conference.  Three hard-copies of a full paper (8000 words) or 
extended abstract (1000 words) submitted before December 29, 1989 
are assured of full consideration. 

    Please direct all registration requests, paper submissions and 
inquiries to: 

        Ravi V. Gomatam, Organizing Secretary        
        THE BHAKTIVEDANTA INSTITUTE             
        84 Carl Street 
        San Francisco, CA 94117 U.S.A.           

        Tel: (415)-753-8647/8648 

        E-Mail: INTERNET: bvi@cca.ucsf.edu                  
                BITNET:   bvi@ucsfcca

CONFERENCE HOST: The Bhaktivedanta Institute - A private non-
profit research organization promoting international discussions 
on consciousness-based perspectives in relation to outstanding 
problems in various areas of modern science.  The institute has 
centers in Bombay and San Francisco and a staff of twenty 
concerned scientists from different fields. 

                            * * * * * 


------------------------------

End of VISION-LIST
********************

Vision-List-Request@ADS.COM (Vision-List moderator Phil Kahn) (12/30/89)

Vision-List Digest	Fri Dec 29 09:09:56 PDT 89

 - Send submissions to Vision-List@ADS.COM
 - Send requests for list membership to Vision-List-Request@ADS.COM

Today's Topics:

 multidimensional data
 NASA IP Package?

----------------------------------------------------------------------

Date: Wed, 27 Dec 89 10:24:45 EST
From: Stephen M. Pizer <smp@cs.unc.edu>
Subject: multidimensional data

jwbrandt@ucdavis.edu writes that he needs to know of sources of
multidimensional data. A collection of medical (magnetic resonance image) and
molecular electron density data is available on magnetic tape
from payne@unc.cs.edu or
Pamela Payne, Softlab, Dept. of Computer Science, Sitterson Hall, Univ. of NC,
Chapel Hill, NC 27599-3175. This collection was produced as a result of the
Chapel Hill Volume Visualization Workshop in May 1989 and consists of data
contributed by Siemens, Inc. and Michael Pique of Scripps.
The cost is $50. I believe a second tape is under preparation for an 
additional charge.

------------------------------

Date: Thu, 28 Dec 89 11:17:26 PST
From: Scott E. Johnston <johnston@ads.com>
Subject: NASA IP Package?

I've heard rumors of an extensive, well-documented IP package
available from NASA.  A search of vision-list backissues
didn't uncover any references.  Anybody with further info?


------------------------------

End of VISION-LIST
********************

Vision-List-Request@ADS.COM (Vision-List moderator Phil Kahn) (01/11/90)

Vision-List Digest	Wed Jan 10 10:47:34 PDT 90

 - Send submissions to Vision-List@ADS.COM
 - Send requests for list membership to Vision-List-Request@ADS.COM

Today's Topics:

 Digital Darkroom (popi)
 3D-glasses
 CVGIP Abstract
 Conference on Photogrammetry Meets Machine Vision
 RMCAI 90

----------------------------------------------------------------------

Date: Thu, 4 Jan 90 03:14:23 GMT
From: us214777@mmm.serc.3m.com (John C. Schultz)
Subject: Digital Darkroom (popi)
Organization: 3M - St. Paul, MN  55144-1000 US

I recently grabbed a public domain version of an X window based image
generation? processing? analysis? package called DigitalDarkroom (or popi).
This seems to me to be an interesting package with a lot of potential for
image processing.  I particularly like the simple syntax to access image
pixels and the transparent conversion from rectangular to polar coordinates.

Does anyone use this package for image analysis? Any analysis routines
available (e.g. morphology, blob analysis, filtering, transforms)?

If you don't have a copy you might want to pick one up from the net.

John C. Schultz                   EMAIL: jcschultz@mmm.3m.com
3M Company                        WRK: +1 (612) 733 4047
3M Center, Building 518-01-1      St. Paul, MN  55144-1000        
   The opinions expressed above are my own and DO NOT reflect 3M's

------------------------------

Date: 4 Jan 90 18:55:00 GMT
From: tjeerd@mbfys4.sci.kun.nl (Tjeerd Dijkstra)
Subject: 3D-glasses
Keywords: 3D-glasses, liquid crystal

I want to use glasses with liquid crystal shutters in an
experimental setup that opens the visual feedback loop.
Until now I was unable to obtain any information on shops,
brandnames etc. Does anyone have any pointers?
I have a SUN4/260 CXP, which has a frame rate of 66 Hz.

Tjeerd Dijkstra

E-mail: tjeerd@sci.kun.nl

------------------------------

Date: Thu, 4 Jan 90 17:24:50 -0800
From: bertolas@cs.washington.edu (William Bertolas)
Subject: CVGIP Abstract

Computer Vision, Graphics, and Image Processing
Volume 49, Number 2, February 1990

CONTENTS

Kwangyoen Wohn and Allen M. Waxman.  The Analytic Structure of 
	Image Flows:  Deformation and Segmentation, p.127

Michael T. Goodrich and Jack Scott Snoeyink.  Stabbing Parallel 
	Segments with a Convex Polygon, p.152.

J.P. Oakley and M.J. Cunningham.  A Function Space Model for
	Digital Image Sampling and Its Application in Image
	Reconstruction, p.171.

Per-Erik Danielsson and Olle Seger.  Rotation Invariance in
	Gradient and Higher Order Derivative Detectors, p. 198.

Daphna Weinshall.  Qualitative Depth from Stereo, with Applications, p. 222.

NOTE
	Yuh-Tay Liow and Theo Pavlidis.  Use of Shadows for Extracting
	Buildings in Aerial Images, p. 242.


------------------------------

Date: 	Mon, 8 Jan 90 09:30:00 EST
From: ELHAKIM@NRCCIT.NRC.CA
Subject: conference on photogrammetry meets machine vision

Second Announcement and Call for Papers
ISPRS Symposium
Close-Range Photogrammetry Meets Machine Vision
ETH Zurich, Switzerland
September 3 - 7, 1990

Organised by
        - Institute of Geodesy and Photogrammetry
        - ISPRS Commission V

Sponsored by
        SGPBF - Swiss Society for Photogrammetry, Image Analysis
                        and Remote Sensing

Cooperating Organisations
        SPIE - The International Society for Optical Engineering
        IEEE - IEEE, The Computer Society, TC on Pattern Analysis
        and Machine Intelligence
        FIG - Federation Internationale des Geometres
        ITG - Information Technology Society of the SEV
        SGBT - Swiss Association of Biomedical Engineering


INVITATION TO ZURICH

You are invited to attend this international and interdisciplinary Symposium
of the International Society for Photogrammetry and Remote Sensing
(ISPRS) and to share your knowledge in photogrammetry and the various
vision disciplines with an expert group of photogrammetrists, geodesists,
mathematicians, physicists, system engineers, electrical engineers, computer
scientists, mechanical engineers, architects, archaeologists and others, whose
concern is precise and reliable spatial measurements using imaging systems.

We hope that this Symposium, according to its title "Close-Range Photo-
grammetry Meets Machine Vision", will provide the stage where ideas and
experience can be exchanged in a stimulating environment. The conference
will take place at ETH-Hoenggerberg, a campus of the Swiss Federal
Institute  of Technology (ETH) Zurich, which is conveniently located close to
downtown Zurich in a restful and delightful natural environment.

Zurich is a captivating city of many contrasts. It is a world-famous
banking and stock-exchange centre and at the same time an idyllic
place with all the charm of a small city. It is a bastion of the arts
and sciences - and also a friendly and hospitable city.  A paradise
for shoppers, it also offers a host of entertainment and leisure
activities. Zurich is situated on a celebrated lake and river, between
gentle hills, with the snow-capped peaks of the Alps on the skyline.
Aircraft from most countries of every continent land at Zurich's
airport and a day excursion is enough to reach any part of
Switzerland.

The conference will feature tutorials, technical sessions, a scientific
exhibition and a variety of social and cultural events. We will spare no effort
in providing an interesting program for both regular participants and
accompanying persons.

May I cordially invite you to participate in this Symposium and to submit a
paper dealing with the topics of interest to ISPRS Commission V.

Armin Gruen
President of ISPRS Commission V


GENERAL OBJECTIVES

In recent years the modern vision disciplines of computer vision, machine
vision and robot vision have found widespread interest in the scientific and
engineering world. The further development of these disciplines is crucial
for advancements in various other fields of science, technology and industry.
As the scientific and engineering concepts of vision systems are increasingly
being examined in practical application environments, the need for  precise,
reliable and robust performance with respect to quantitative measurements
becomes very obvious. Quantitative measurement on the other hand has been
a familiar domain to photogrammetrists for many years. The intention of this
symposium is to combine the longstanding, application-proven expertise of
classical photogrammetric procedures with the up-to-date, forward-looking
vision hardware and algorithmic concepts in order to overcome current
limitations and to arrive at truly efficient and reliable systems which in turn
will open up new and promising fields of application. The aim is to bring
together experts from various disciplines who are concerned with the design,
development and application of modern analogue, digital and hybrid vision
systems which operate in a close-range environment.

This conference is designed for scientists, engineers and users in the fields of
photogrammetry, machine vision and robot vision; from universities,
research institutes, industry, governmental organisations and engineering
firms.

The topics to be addressed should be related, but are not restricted to,  the
terms  of  reference  of the Working Groups of ISPRS Commission V:

WG V/1: Digital and Real-Time Close-Range Photogrammetry Systems
        Chairmen:       Dr. Sabry El-Hakim, Prof.Dr. Kam Wong
        - Real-time vision systems for metric measurements
        - System hardware and software integration
        - Demonstration of systems in actual application environments

WG V/2: Close-Range Imaging Systems - Calibration and Performance
        Chairmen:       Prof. Dr. John G. Fryer,
                        Prof. Dr. Wilfried Wester-Ebbinghaus
        - Geometric and radiometric characteristics of CCD and hybrid
                imaging systems
        - Procedures and strategies for calibration and orientation
        - High precision photogrammetry (<10^-5) with large format
                photographic images and CCD matrix sensors in image space

WG V/3: Image Analysis and Image Synthesis in Close-Range
        Photogrammetry
        Chairmen:       Dr. Dieter Fritsch, Dr. Jan-Peter Muller
        - Algorithmic aspects in image analysis
        - Visualisation techniques in image synthesis
        - Hardware architecture for real-time image analysis and image
                synthesis

WG V/4: Structural and Industrial Measurements with
        Consideration of CAD/CAM Aspects
        Chairmen:       Dr. Clive S. Fraser, Prof.Dr. Heinz Ruther
        - Integration of CAD/CAM into the photogrammetric
                measurement process
        - Digital photogrammetric systems for industrial mensuration
        - Transfer of photogrammetric technology to the industrial
                design, engineering and manufacturing sector

WG V/5: Photogrammetry in Architecture and Archaeology
        Chairmen:       Mr. Ross W.A. Dallas, Dr. Rune Larsson
        - Application of new photogrammetric technology to
                architectural and archaeological surveying and recording
        - Possibilities offered by new low-cost photogrammetric systems
                and video-based systems
        - Study of appropriate applications of CAD/CAM and LIS/GIS

WG V/6: Biostereometrics and Medical Imaging
        Chairmen:       Prof.Dr. Andreas Engel, Prof.Dr. Peter Niederer
        - Human motion analysis and biological surface measurements
        - 3D medical imaging and anthropometry; 3D microscopy
        - Hardware and software for use in medical imaging

Associate Group: Robot Vision
        Chairman: Dr. Reimar Lenz
        - Recent developments
        - Applications


CALL FOR PAPERS

Deadline for abstracts: January 31, 1990
Notification of acceptance:     March 31, 1990
Deadline for complete manuscripts:      June 15, 1990

A separate Abstract Form can be obtained from the symposium organisation.
Instructions for authors and forms for papers will be mailed out in due course.

The papers of this Symposium are to be published as Proceedings in the
Archives series of the ISPRS (Volume 28, Part V), which will be made
available prior to the conference. This implies that the deadline for complete
manuscripts has to be observed strictly.


LANGUAGE

Papers may be presented in any of the three official ISPRS languages
English, French and German. The operating language of the Symposium will
be English.  Simultaneous translation will not be provided.


GENERAL INFORMATION


SYMPOSIUM SITE

ETH-Hoenggerberg, Zurich, a campus of the Swiss Federal Institute of
Technology (ETH) Zurich.

The location for the technical sessions, tutorials, exhibition and the
information and registration desk will be the HIL-Building.


FACILITIES

The lecture rooms are equipped with slide projectors (5x5 cm) and overhead
projectors. Video installations (projection and monitor display) can be
arranged on request.


TECHNICAL SESSIONS

The technical sessions will be arranged from September 4 to 7, 1990. If
necessary, two sessions will be held in parallel.


TUTORIALS

The following tutorials are offered on September 3, 1990:

(A) Full-day tutorial "Fundamentals of Real-Time Photogrammetry"

Lecturers: Dr. D. Fritsch*, Dr. R. Lenz*, Dipl.-Ing. E. Baltsavias,
Dipl.-Ing. ETH H. Beyer (*Technical Univ. Munich, FRG and ETH Zurich,
Switzerland).
Time: 9.00  to  17.30

A one-day tutorial covering algorithmic and hardware aspects of Real-Time
Photogrammetry is to be presented.

System design aspects and hardware components of Real-Time
Photogrammetric Systems are to be analysed. Emphasis will be placed on the
performance in 3-D vision and measurement tasks. The principal topics will
include:  system design, data acquisition, data transfer, processing, storage,
and display. Image acquisition will be analysed in more detail and an outline
will be given of the characteristics of CCD-sensors, cameras, video signals
and frame grabbers that influence image quality and measurement accuracy.

Algorithmic aspects of image analysis and computer vision techniques for
processing image data for 3-D applications will be presented. The main
topics include: image enhancement, edge detection and segmentation,
morphological and geometric operations, feature detection and object
recognition, image and template matching, point determination and
optimisation, surface measurement and reconstruction.

The presentation will be supported by practical demonstrations of the
hardware and algorithmic performance of digital close-range photogram-
metric systems.

This tutorial is designed for engineers and applied researchers with interest
in image analysis, machine vision, robotics and digital photogrammetry.
Basic knowledge of photogrammetry and image processing will be assumed.



(B) Half-day tutorial "Computer Vision and Dynamic Scene Analysis"

Lecturer: Prof.Th.S. Huang (Univ. of Illinois, Urbana- Champaign, USA)
Time: 13.30  to 17.30

A half-day tutorial covering computer vision with the emphasis on dynamic
scene analysis will be presented. The goal is to expose to researchers in
photogrammetry some of the important results in dynamic scene analysis.

Photogrammetry and computer vision have many common problems such as
stereo, pose determination, and camera calibration. The two fields can
certainly learn much from each other.

After an overview of computer vision, several examples of motion detection
and high-level spatial-temporal reasoning will be given. Then a detailed
discussion will be presented on the use of feature matching for pose
determination, camera calibration, and 3D motion determination. The key
issues include: extracting and matching of point and line features in images,
minimum numbers of features required for unique solution, linear vs.
nonlinear (esp. polynomial) equations, and robust algorithms.

It is hoped that this tutorial will provide a forum for the exchange of ideas
between researchers in photogrammetry and computer vision.

Registration for a tutorial should be made on the attached Registration Form.
Since participation will be restricted due to space limitations, the organisers
reserve the right of refusal.


EXHIBITION

A scientific/commercial exhibition will feature the latest developments in
systems, components, software and algorithms in close-range photogram-
metry and machine vision. Application forms for interested exhibitors can be
obtained from the Symposium Secretariat.


REGISTRATION

Registration of all participants (including accompanying persons) should be
made on the Registration Form which can be obtained from the congress
organisation. Please note that a separate form must be used for the hotel
reservation.

Correspondence concerning registration should be mailed to the Symposium
Secretariat.

Registration Fees
                                Before June 1           After June 1

Active Participants             SFr. 350.--              SFr. 400.--
Accompanying Persons            SFr. 150.--              SFr. 175.--
Tutorial (A)                    SFr. 250.--              SFr. 275.--
Tutorial (B)                    SFr. 150.--              SFr. 175.--


For active participants the registration fee includes admission to all sessions,
a copy of the Proceedings and the Welcome Party. Included in the regist-
ration fee for accompanying persons are the Opening and Closing Sessions,
the Welcome Party, and the right to participate in the program for
accompanying persons at the announced prices.

The registration fee and other charges payable to the Symposium Secretariat
should be enclosed with the Registration Form as a bank draft (drawn on a
Swiss bank, payable to the ISPRS-Symposium Secretariat) or a copy of a
bank transfer. Credit cards are not accepted.


CONFIRMATION

Confirmation of registration will be mailed to participants after receipt of the
Registration Form and payment.


INFORMATION

A reception desk will be open for registration and information on the ground
floor of the HIL-Building from September 3 to 7,  8.00-17.30.


ACCOMMODATION

The Verkehrsverein Zurich will arrange the hotel accommodation for all
participants. The Hotel Reservation Form should be mailed to the
Verkehrsverein Zurich (it can be obtained from the congress organisation).

The hotel will confirm the room reservations in the order in which the forms
and the hotel deposit payments are received. Please observe July 23, 1990
as the deadline for hotel reservation.


SOCIAL EVENTS

S1      Reception for Tutorial Participants
        Monday, September 3, 17.30      free

S2      Welcome Party for Symposium Participants
        Tuesday, September 4, 17.30     free

S3      An Evening on the Lake. Dinner Cruise on Lake Zurich
        Wednesday, September 5, 19.00   SFr. 55.--

        A 3 hour cruise on the Lake of Zurich. Whilst the boat takes you
        along the beautiful shores, traditional Swiss dishes will be served
        ("Bauernbuffet") and you can enjoy the view of vineyards, villages and
        some distinguished residential areas.

S4      Medieval Night. Dinner at Castle Lenzburg
        Thursday, September 6, 19.00    SFr. 98.--

        Busses will take you to the impressive Lenzburg Castle. Set atop
        a precipitous base of rock, with its massive walls and profusion of
        towers and battlements, Lenzburg Castle presents the classical picture
        of a medieval fortress. Before having dinner at the Knight's Lodge,
        drinks will be served in the courtyard and picturesque French
        Gardens. A Swiss folk-music group will entertain you during and after
        dinner. At midnight the busses will take you back to Zurich.


PROGRAM FOR ACCOMPANYING PERSONS

AP1     "Goldtimer Tram" ride: A nostalgia-awakening veteran tram
        dating from 1928 takes you on a pleasure trip all through the city. An
        excellent way to get a first impression (1 h).

        Afterwards a hostess will take you for a stroll through the Old Town,
        including a sight of the famous Chagall-windows (1 1/2 h).
        Tuesday, September 4, 14.00 - 17.00     SFr.  25.--

AP2     Visit to the Lindt-Sprungli chocolate factory: A bus takes you
        to Kilchberg where a hostess will welcome you and guide you through
        the chocolate factory.
        Wednesday, September 5, 9.00 - 12.30    SFr.  15.--

AP3     Half-day excursion to the Rhine waterfalls and Schaff-
        hausen: Bus  tour through Zurich's wine-growing districts to the falls
        of the Rhine, which offer the visitor the glorious spectacle of the
        largest waterfall in Europe. Transfer to the city of Schaffhausen, a
        picturesque town with well preserved medieval architecture,
        overlooking the upper reaches of the Rhine.
        Wednesday, September 5, 13.00 - 17.00   SFr.  42.--

AP4     Full-day excursion to Rigi and Luzern: Bus tour through
        beautiful countryside to Luzern, a picturesque old town right in the
        heart of Switzerland, surrounded by Lake Luzern and highrising
        mountains. Sightseeing tour by bus. Then you will have free time for
        shopping and lunch. The bus takes you to Vitznau where a
        cogwheel-railway takes you to the Rigi (5900 ft) with its spectacular
        views of the Alps, their foothills and valleys. By bus back to Zurich.
        Thursday, September 6, 8.30 - 17.00     SFr.  75.--

Visits to the Kunsthaus (Art Gallery), Landesmuseum (Museum of History)
and a full-day excursion to the Stein cheese-dairy, Appenzell can be booked
at the Verkehrsverein desk. In addition, the official tourist agency
Verkehrsverein Zurich will offer a great variety of interesting activities and
excursions to mountains, lakes and cultural places of interest, folklore,
dancing and nightclub entertainment, as well as diverse sports.  Pre- and
postconference activities are also handled by this local tourist office. The
Verkehrsverein will operate an information and reservation desk next to the
registration desk.


GENERAL CONDITIONS FOR REGISTRATION AND TOURS

Registration and booking of social events and tours should be made on the
enclosed Registration Form. All payments must be made in full. All
payments will be refunded, after deduction of a 25% administration charge,
for all cancellations received before August 15, 1990. After this date no
refunds can be made for cancellation. No charge is made for children under
4 years for the social events and excursions.


SYMPOSIUM ORGANISATION

Director:       Prof. Dr. A. Gruen
                        President of ISPRS Commission V

Secretary:      Dipl. Ing. ETH  H. Beyer
                        Secretary of ISPRS Commission V

Members of the organising committee:
Dipl.Ing. E. Baltsavias; Dipl.Ing. H.-G. Maas; Dipl.Ing. M. Meister,
Dipl.Ing. Z. Parsic; L. Steinbruckner (ETH Zurich)
Dipl.Ing. L. Cogan; Dr.T. Luhmann; M. Streit; Dr. R. Zumbrunn (Kern & Co.AG)


ADDRESS FOR CORRESPONDENCE AND INQUIRIES:

Symposium of ISPRS Commission V
Institute of Geodesy and Photogrammetry
ETH-Hoenggerberg
CH-8093  Zurich
Switzerland

Tel.:   +41-1 377 3051  Telex:  823 474 ehpz ch
Fax:    +41-1 371 55 48         email:  chezpp@igpho.uucp


HOW TO GET TO THE ETH-HOENGGERBERG

Zurich International Airport, 11 km from the city centre, is served by most
International Airlines.

The Swiss Federal Railways run a feeder service to the main railway station
in Zurich by means of its Airport Line. During airport operational hours
trains run every 20 to 30 minutes between the underground station at the
Airport and the Main Station and vice versa.

Public transport City - ETH-Hoenggerberg: Tram Nos. 11 and 15 to Buch-
eggplatz or tram Nos. 7, 9, 10 and 14 to Milchbuck and then from each, Bus
69 to ETH-Hoenggerberg.  (The printed version contains a map of the symposium
and Zurich)


ABSTRACT FORMS and REGISTRATION MATERIAL can be obtained from
the Symposium organisation.


------------------------------

Date: 10 Jan 90 05:57:44 GMT
From: news%beta@LANL.GOV (Usenet News)
Subject: RMCAI 90
Organization: NMSU Computer Science

Updated CFP:
PRAGMATICS IN AI PRAGMATICS IN AI PRAGMATICS IN AI PRAGMATICS IN AI PRAGMATICS

Cut---------------------------------------------------------------------------

SUBJECT: Please post the following in your Laboratory/Department/Journal:
  
                            CALL FOR PAPERS 

  
                   Pragmatics in Artificial Intelligence
       5th Rocky Mountain Conference on Artificial Intelligence (RMCAI-90)
                Las Cruces, New Mexico, USA, June 28-30, 1990 

  
PRAGMATICS PROBLEM:
The problem of pragmatics in AI is one of developing theories, models,
and implementations of systems that make effective use of contextual
information to solve problems in changing environments.
 
CONFERENCE GOAL: 
This conference will provide a forum for researchers from all
subfields of AI to discuss the problem of pragmatics in AI.
The implications that each area has for the others in tackling
this problem are of particular interest.

COOPERATION:
American Association for Artificial Intelligence (AAAI)
Association for Computing Machinery (ACM)
Special Interest Group in Artificial Intelligence (SIGART)
IEEE Computer Society
U S WEST Advanced Technologies and the Rocky Mountain Society
for Artificial Intelligence (RMSAI)

SPONSORSHIP:
Association for Computing Machinery (ACM)
Special Interest Group in Artificial Intelligence (SIGART)
U S WEST Advanced Technologies and the Rocky Mountain Society
for Artificial Intelligence (RMSAI)

INVITED SPEAKERS: 
The following researchers have agreed to present papers
at the conference:
 
*Martin Casdagli, Los Alamos National Laboratory, Los Alamos USA
*Arthur Cater, University College Dublin, Ireland EC
*Jerry Feldman, University of California at Berkeley, Berkeley USA
                & International Computer Science Institute, Berkeley USA
*Barbara Grosz, Harvard University, Cambridge USA
*James Martin, University of Colorado at Boulder, Boulder USA
*Derek Partridge, University of Exeter, United Kingdom EC
*Philip Stenton, Hewlett Packard, United Kingdom EC
*Robert Wilensky, University of California at Berkeley Berkeley USA

THE LAND OF ENCHANTMENT:
Las Cruces lies in THE LAND OF ENCHANTMENT (New Mexico),
USA, and is situated in the Rio Grande Corridor with the scenic
Organ Mountains overlooking the city. The city is
close to Mexico, Carlsbad Caverns, and White Sands National Monument.
There are a number of Indian Reservations and Pueblos in the Land Of
Enchantment and the cultural and scenic cities of Taos and Santa Fe
lie to the north. New Mexico has an interesting mixture of Indian, Mexican
and Spanish culture. There is quite a variation of Mexican and New
Mexican food to be found here too.

GENERAL INFORMATION:
The Rocky Mountain Conference on Artificial Intelligence is a major
regional forum in the USA for scientific exchange and presentation
of AI research.
 
The conference emphasizes discussion and informal interaction
as well as presentations.
 
The conference encourages the presentation of completed research,
ongoing research, and preliminary investigations.
 
Researchers from both within and outside the region
are invited to participate.
 
Some travel awards will be available for qualified applicants.
 
FORMAT FOR PAPERS:
Submitted papers should be double spaced and no more than 5 pages
long. E-mail versions will not be accepted. Papers will be published
in the proceedings and there is the possibility of a published book.

Send 3 copies of your paper to:
 
Paul Mc Kevitt,
Program Chairperson, RMCAI-90,
Computing Research Laboratory (CRL),
Dept. 3CRL, Box 30001,
New Mexico State University,
Las Cruces, NM 88003-0001, USA. 

 
DEADLINES:
Paper submission: April 1st, 1990
Pre-registration: April 1st, 1990
Notice of acceptance: May 1st, 1990
Final papers due: June 1st, 1990 

 
LOCAL ARRANGEMENTS:
Local Arrangements Chairperson, RMCAI-90.
(same postal address as above).

INQUIRIES:
Inquiries regarding conference brochure and registration form
should be addressed to the Local Arrangements Chairperson.
Inquiries regarding the conference program should be addressed
to the Program Chairperson.

Local Arrangements Chairperson: E-mail: INTERNET: rmcai@nmsu.edu
                                Phone: (+ 1 505)-646-5466
                                Fax: (+ 1 505)-646-6218.

Program Chairperson: E-mail: INTERNET: paul@nmsu.edu
                     Phone: (+ 1 505)-646-5109
                     Fax: (+ 1 505)-646-6218.

 
TOPICS OF INTEREST: 
You are invited to submit a research paper addressing Pragmatics
in AI, with any of the following orientations:
 
  Philosophy, Foundations and Methodology
  Knowledge Representation
  Neural Networks and Connectionism
  Genetic Algorithms, Emergent Computation, Nonlinear Systems
  Natural Language and Speech Understanding
  Problem Solving, Planning, Reasoning
  Machine Learning
  Vision and Robotics
  Applications
 
PROGRAM COMMITTEE:
 
*John Barnden, New Mexico State University 
(Connectionism, Beliefs, Metaphor processing)
*Hans Brunner, U S WEST Advanced Technologies 
(Natural language interfaces, Dialogue interfaces)
*Martin Casdagli, Los Alamos National Laboratory
(Dynamical systems, Artificial neural networks, Applications)
*Mike Coombs, New Mexico State University 
(Problem solving, Adaptive systems, Planning)
*Thomas Eskridge, Lockheed Missile and Space Co. 
(Analogy, Problem solving)
*Chris Fields, New Mexico State University 
(Neural networks, Nonlinear systems, Applications)
*Roger Hartley, New Mexico State University 
(Knowledge Representation, Planning, Problem Solving)
*Victor Johnson, New Mexico State University 
(Genetic Algorithms)
*Paul Mc Kevitt,  New Mexico State University
(Natural language interfaces, Dialogue modeling)
*Joe Pfeiffer, New Mexico State University 
(Computer Vision, Parallel architectures)
*Keith Phillips, University of Colorado at Colorado Springs 
(Computer vision, Mathematical modelling)
*Yorick Wilks, New Mexico State University 
(Natural language processing, Knowledge representation)
*Scott Wolff, U S WEST Advanced Technologies 
(Intelligent tutoring, User interface design, Cognitive modeling)


REGISTRATION: 
Pre-Registration: Professionals: $50.00; Students $30.00
(Pre-Registration cutoff date is April 1st 1990)
Registration: Professionals: $70.00; Students $50.00

(Copied proof of student status is required).

Registration form (IN BLOCK CAPITALS).
Enclose payment made out to New Mexico State University.
(ONLY checks in US dollars will be accepted).


Send to the following address (MARKED REGISTRATION):

	Local Arrangements Chairperson, RMCAI-90
	Computing Research Laboratory
	Dept. 3CRL, Box 30001, NMSU
	Las Cruces, NM 88003-0001, USA.

 
Name:_______________________________	E-mail_____________________________	Phone__________________________


Affiliation:	____________________________________________________


Fax:	 ____________________________________________________


Address:	____________________________________________________


	____________________________________________________


	____________________________________________________


	COUNTRY__________________________________________


Organizing Committee RMCAI-90:

Paul Mc Kevitt                Yorick Wilks
Research Scientist            Director
CRL                           CRL

cut------------------------------------------------------------------------
--



Paul Mc Kevitt,
Computing Research Laboratory,
Dept. 3CRL, Box 30001,
New Mexico State University,
Las Cruces, NM 88003-0001, USA.

E-mail: INTERNET: paul@nmsu.edu
Fax: (+1 505)-646-6218
Phone: (+1 505)-646-5109/5466

Nil an la an gaothaithe la na scolb!!  (Irish: the windy day is not the day for thatching!)


------------------------------

End of VISION-LIST
********************

Vision-List-Request@ADS.COM (Vision-List moderator Phil Kahn) (01/16/90)

Vision-List Digest	Mon Jan 15 09:38:15 PDT 90

 - Send submissions to Vision-List@ADS.COM
 - Send requests for list membership to Vision-List-Request@ADS.COM

Today's Topics:

 Request for Public Domain Image Processing Packages
 3D-glasses
 Connected Component Algorithm
 Posting Call-for-papers of ICCV'90
 Call for Papers Wang Conference

------------------------------

Date: Fri, 12 Jan 90 13:02:20 PST
From: Scott E. Johnston <johnston@odin.ads.com>
Subject: Request for Public Domain Image Processing Packages

I am collecting any and all public-domain image processing packages.
I plan on making them available to all via the vision-list FTP site
(disk space permitting).  Send pointers and packages themselves to
johnston@ads.com (not vision-list@ads.com).  Thanks.

----------------------------------------------------------------------

Subject: 3D-glasses
Date: Thu, 11 Jan 90 01:17:23 EST
From: Edward Vielmetti <emv@math.lsa.umich.edu>

Here's a pointer to info on the Sega 3D glasses.  --Ed


		------- Forwarded Message

From: jmunkki@kampi.hut.fi (Juri Munkki)
Subject: [comp.sys.mac.hardware...] Sega 3D glasses document fix 1.2
Date: 8 Jan 90 20:16:53 GMT
Followup-To: comp.sys.mac.hardware
Approved: emv@math.lsa.umich.edu (Edward Vielmetti)

This is patch 1.2 of the Sega 3D glasses interface document.  It
supersedes versions 0.9, 1.0 and 1.1 of the document. Version 1.2 is
available with anonymous ftp from vega.hut.fi [130.233.200.42].
pub/mac/finnish/sega3d/

Versions 0.9 and 1.0 of the document have the TxD+ and TxD- pins
reversed. This causes problems only with my demo software and is
easy to notice, because both lenses show the same image. Fix the
problem by pulling the TxD+ and TxD- pins out of the miniDIN
connector, swapping them, and pushing them back in.

Version 1.1 (which is what you have after you make the previous
change) has the tip and center of the glasses connector switched.
Again, this doesn't cause any problems unless you use the demo
software. The spiro and Macintosh demos will clearly be inside
the screen and their perspectives will look wrong. To fix the
problem, resolder the connector or change the software to swap
the meanings of left and right. If you intend to write for the
glasses, it might be a good idea to include an option to switch
left and right.

     Juri Munkki jmunkki@hut.fi  jmunkki@fingate.bitnet        I Want   Ne   |
     Helsinki University of Technology Computing Centre        My Own   XT   |

			------- End of Forwarded Message


------------------------------

Date: Thu, 11 Jan 90 14:16:52 EST
From: palumbo@cs.Buffalo.EDU (Paul Palumbo)
Subject: Connected Component Algorithm

I was wondering if anybody out there in net-land knows of an image analysis
technique to locate connected components in digital images.  In particular,
I am looking for an algorithm that can be implemented in hardware, makes
only one pass through the image in scan-line order, and reports several simple
component features such as component extent (minimum and maximum X and Y
coordinates) and the number of foreground pixels in the component.
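
For reference, a minimal software sketch of one common way to do this: a
single raster pass with a union-find equivalence table, accumulating a
bounding box and pixel count per provisional label and folding equivalent
labels together at the end (the fold touches only the label table, not the
image).  All names are illustrative; this is not taken from any of the chips
or papers mentioned in this message, and is only meant to show the
bookkeeping a one-pass design has to carry.

#include <stdio.h>
#include <stdlib.h>

#define MAXLAB 10000          /* maximum number of provisional labels */

static int parent[MAXLAB];
static int minx[MAXLAB], maxx[MAXLAB], miny[MAXLAB], maxy[MAXLAB];
static long count[MAXLAB];

static int find(int i)        /* find root of an equivalence class */
{
    while (parent[i] != i)
        i = parent[i];
    return i;
}

static void merge(int a, int b)   /* union two equivalence classes */
{
    a = find(a);
    b = find(b);
    if (a != b)
        parent[b] = a;
}

/* img: rows*cols binary image (0 = background), row-major order */
void label_components(const unsigned char *img, int rows, int cols)
{
    int *lab = (int *) calloc((size_t) rows * cols, sizeof(int));
    int nlab = 0, r, c, i;

    for (r = 0; r < rows; r++) {
        for (c = 0; c < cols; c++) {
            int w, n, l;
            if (!img[r * cols + c])
                continue;
            w = (c > 0) ? lab[r * cols + c - 1] : 0;      /* west  */
            n = (r > 0) ? lab[(r - 1) * cols + c] : 0;    /* north */
            if (w == 0 && n == 0) {           /* start a new component */
                if (nlab + 1 >= MAXLAB) { free(lab); return; }
                l = ++nlab;
                parent[l] = l;
                minx[l] = maxx[l] = c;
                miny[l] = maxy[l] = r;
                count[l] = 0;
            } else {
                l = (w && n) ? (w < n ? w : n) : (w ? w : n);
                if (w && n && w != n)
                    merge(w, n);              /* record equivalence */
            }
            lab[r * cols + c] = l;
            if (c < minx[l]) minx[l] = c;
            if (c > maxx[l]) maxx[l] = c;
            if (r < miny[l]) miny[l] = r;
            if (r > maxy[l]) maxy[l] = r;
            count[l]++;
        }
    }

    /* fold statistics of equivalent labels into their root labels */
    for (i = 1; i <= nlab; i++) {
        int root = find(i);
        if (root == i)
            continue;
        if (minx[i] < minx[root]) minx[root] = minx[i];
        if (maxx[i] > maxx[root]) maxx[root] = maxx[i];
        if (miny[i] < miny[root]) miny[root] = miny[i];
        if (maxy[i] > maxy[root]) maxy[root] = maxy[i];
        count[root] += count[i];
    }
    for (i = 1; i <= nlab; i++)
        if (find(i) == i)
            printf("component %d: x [%d,%d] y [%d,%d] pixels %ld\n",
                   i, minx[i], maxx[i], miny[i], maxy[i], count[i]);
    free(lab);
}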

The project I am on is planning to design and develop custom image analysis
hardware to do this. We have developed an algorithm locally, and I was
wondering whether somebody else has an easier method.


I know about the LSI Logic "Object Contour Tracer Chip", but this chip appears
to be too powerful (and slow) for this application.  I found some papers
by Gleason and Agin dated about 10 years ago but could not find the exact
details of their algorithm.

Does anybody else have a need for such hardware?

Any help or pointers on locating such an algorithm would be appreciated.
 
Paul Palumbo                    internet:palumbo@cs.buffalo.edu
Research Associate              bitnet:  palumbo@sunybcs.BITNET
226 Bell Hall                   csnet:   palumbo@buffalo.csnet
SUNY at Buffalo  CS Dept.
Buffalo, New York 14260
(716) 636-3407    uucp:   ..!{boulder,decvax,rutgers}!sunybcs!palumbo

------------------------------

Date: Fri, 12 Jan 90 11:11:31 JST
From: tsuji%tsuji.ce.osaka-u.JUNET@relay.cc.u-tokyo.ac.jp (Saburo Tsuji)
Subject: Posting Call-for-papers of ICCV'90


                                   Call for Papers
                 THIRD INTERNATIONAL CONFERENCE ON COMPUTER VISION
                       International House Osaka, Osaka, Japan
                                 December 4-7, 1990

          CHAIRS
          General Chair:
          Makoto Nagao, Kyoto University, Japan
          E-mail: nagao@kuee.kyoto-u.ac.jp
          Program Co-chairs:
          Avi Kak, Purdue University, USA
          E-mail:kak@ee.ecn.purdue.edu
          Jan-Olof Eklundh, Royal Institute of Technology, Sweden
          joe@bion.kth.se
          Saburo Tsuji, Osaka University, Japan
          tsuji@tsuji.ce.osaka-u.ac.jp
          Local Arrangement Chair:
          Yoshiaki Shirai, Osaka University, Japan
          shirai@ccmip.ccm.osaka-u.ac.jp

          THE CONFERENCE
          ICCV'90 is the third International Conference devoted  solely  to
          computer vision.  It is sponsored by the IEEE Computer Society.

          THE PROGRAM
          The program will consist of high quality contributed papers on
          all aspects of computer vision.  All papers will be refereed by
          the members of the Program Committee.  Accepted papers will be
          presented as long papers in a single track or as short papers
          in two parallel tracks.

          PROGRAM COMMITTEE
          The  Program  Committee  consists  of  thirty  prominent  members
          representing all major facets of computer vision.


          PAPER SUBMISSION
          Authors should submit four copies of their papers to Saburo Tsuji
          at the address shown below by April 30, 1990.  Papers must contain
          major new research contributions.  All papers will be reviewed
          using a double-blind procedure, meaning that the identities of the
          authors will not be known to the reviewers.  To make this
          possible, two title pages should be included, only one of which
          contains the names and addresses of the authors; the title page
          with the names and addresses of the authors will be removed prior
          to the review process.  Both title pages should contain the title
          of the paper and a short (less than 200 words) abstract.  Authors
          must restrict the length of their papers to 30 pages; that length
          should include everything: the title pages, text (double-spaced),
          figures, bibliography, etc.  Authors will be notified of
          acceptance by mid-July.  Final camera-ready papers, typed on
          special forms, will be due mid-August.

          Send To:  Saburo Tsuji,
          Osaka University,  Department  of  Control Engineering, Toyonaka,
          Osaka 560, Japan.
          E-mail tsuji@tsuji.ce.osaka-u.ac.jp


 


------------------------------

Date:  Fri, 12 Jan 90 02:06:41 EST
From: mike@bucasb.bu.edu (Michael Cohen)
Subject: Call for Papers Wang Conference

                              CALL FOR PAPERS

               NEURAL NETWORKS FOR AUTOMATIC TARGET RECOGNITION
                              MAY 11--13, 1990

Sponsored by the Center for Adaptive Systems,
the Graduate Program in Cognitive and Neural Systems,
and the Wang Institute of Boston University
with partial support from 
The Air Force Office of Scientific Research


This research conference at the cutting edge of neural network science and
technology will bring together leading experts in academe, government, and
industry to present their latest results on automatic target recognition
in invited lectures and contributed posters. Invited lecturers include:

JOE BROWN, Martin Marietta, "Multi-Sensor ATR using Neural Nets"

GAIL CARPENTER, Boston University, "Target Recognition by Adaptive 
Resonance: ART for ATR"

NABIL FARHAT, University of Pennsylvania, "Bifurcating Networks for 
Target Recognition"

STEPHEN GROSSBERG, Boston University, "Recent Results on Self-Organizing 
ATR Networks"

ROBERT HECHT-NIELSEN, HNC, "Spatiotemporal Attention Focusing by 
Expectation Feedback"

KEN JOHNSON, Hughes Aircraft, "The Application of Neural Networks to the 
Acquisition and Tracking of Maneuvering Tactical Targets in High Clutter 
IR Imagery"

PAUL KOLODZY, MIT Lincoln Laboratory, "A Multi-Dimensional ATR System"

MICHAEL KUPERSTEIN, Neurogen, "Adaptive Sensory-Motor Coordination
using the INFANT Controller"

YANN LECUN, AT&T Bell Labs, "Structured Back Propagation Networks for
Handwriting Recognition"

CHRISTOPHER SCOFIELD, Nestor, "Neural Network Automatic Target Recognition 
by Active and Passive Sonar Signals"

STEVEN SIMMES, Science Applications International Co., "Massively Parallel 
Approaches to Automatic Target Recognition" 

ALEX WAIBEL, Carnegie Mellon University, "Patterns, Sequences and Variability:
Advances in Connectionist Speech Recognition"

ALLEN WAXMAN, MIT Lincoln Laboratory, "Invariant Learning and
Recognition of 3D Objects from Temporal View Sequences"

FRED WEINGARD, Booz-Allen and Hamilton, "Current Status and Results of Two 
Major Government Programs in Neural Network-Based ATR"

BARBARA YOON, DARPA, "DARPA Artificial Neural Networks Technology
Program: Automatic Target Recognition"


CALL FOR PAPERS---ATR POSTER SESSION: A featured poster session on ATR
neural network research will be held on May 12, 1990. Attendees who wish to
present a poster should submit 3 copies of an extended abstract 
(1 single-spaced page), postmarked by March 1, 1990, for refereeing. Include
with the abstract the name, address, and telephone number of the corresponding
author. Mail to: ATR Poster Session, Neural Networks Conference, Wang
Institute of Boston University, 72 Tyng Road, Tyngsboro, MA 01879. Authors
will be informed of abstract acceptance by March 31, 1990.

SITE: The Wang Institute possesses excellent conference facilities on a
beautiful 220-acre campus. It is easily reached from Boston's Logan
Airport and Route 128. 

REGISTRATION FEE: Regular attendee--$90; full-time student--$70.
Registration fee includes admission to all lectures and poster session, 
abstract book, 
one reception, two continental breakfasts, one lunch, one dinner, daily 
morning and afternoon coffee service. STUDENTS FELLOWSHIPS are available.
For information, call (508) 649-9731. 

TO REGISTER: By phone, call (508) 649-9731; by mail, write for further 
information to: Neural Networks, Wang Institute of Boston University, 72 Tyng 
Road, Tyngsboro, MA 01879.


------------------------------

End of VISION-LIST
********************

Vision-List-Request@ADS.COM (Vision-List moderator Phil Kahn) (02/01/90)

Vision-List Digest	Wed Jan 31 11:37:36 PDT 90

 - Send submissions to Vision-List@ADS.COM
 - Send requests for list membership to Vision-List-Request@ADS.COM

Today's Topics:

 Re: 3D-glasses
 CFP, Israeli Conference
 British Machine Vision Conference 1990

----------------------------------------------------------------------

Date: Fri, 26 Jan 90 16:31:16 IST
From: Shelly Glaser  011 972 3 545 0060 <GLAS%TAUNIVM.BITNET@CUNYVM.CUNY.EDU>
Subject:      Re: 3D-glasses

Try also  contacting Stereographics,  at Box 2309,  San Rafael,  CA 94912
(415)459-4500,  for  information  on special  hardware/software  for  3-D
display with computers  from the PC up.  It would  be more expensive than
the Sega, but there are other differences as well.  Tektronix, too, makes
3-D  display equipment,  but  I do  not have  the  address/phone for  the
particular department that sells it.

Both systems use polarization encoding to switch one image to the right
eye and the next to the left; a high refresh rate is used to avoid
flicker.  Tektronix uses a liquid crystal shutter on the monitor and plain
polarizers on the eyeglasses, so extra observers are cheap and there is no
electrical cord connected to the glasses.  Stereographics puts the
electronic shutter on the glasses and a plain polarizer on the screen,
which comes out less expensive for just a few observers.  Both systems
have color, and are available in several resolutions.

                                        yours
                                                            Shelly Glaser

    Department of Electronic, Communication, Control and Computer Systems
                                                   Faculty of Engineering
                                                      Tel-Aviv University
                                                         Tel-Aviv, Israel

                                                TELEPHONE: 972 3 545-0060
                                                       FAX: 972 3 5413752
                                    Computer network: GLAS@TAUNIVM.BITNET
                                                   glas@taunivm.tau.ac.il
                                      glas%taunivm.bitnet@cunyvm.cuny.edu
Acknowledge-To: <GLAS@TAUNIVM>

------------------------------

Date: Fri, 26 Jan 90 10:05:12 EST
From: peleg@grumpy.sarnoff.com (Shmuel Peleg x 2284)
Subject: CFP, Israeli Conference

                           CALL FOR PAPERS
    7th Israeli Conference on Artificial Intelligence and Computer Vision
                    Tel-Aviv, December 26-27, 1990

The conference is the joint annual meeting of the Israeli Association for
Artificial Intelligence, and the Israeli Association for Computer Vision
and Pattern Recognition, which are affiliates of the Israeli Information
Processing Association. The language of the conference is English.
Papers addressing all aspects of AI and Computer Vision, including, but
not limited to, the following topics, are solicited:

  Image Processing, Computer Vision, and Pattern Recognition.
  Visual Perception, Robotics, and Applications of Robotics and Vision.
  Inductive inference, Knowledge Acquisition, AI and Education, AI Languages,
  Logic Programming,  Automated Reasoning, Cognitive Modeling,  Expert Systems,
  Natural Language Processing, Planning and Search,  Knowledge Theory, Logics
  of Knowledge.

Submitted papers will be refereed by the program committee, listed below. 
Authors should submit 4 copies of a full paper. Accepted papers will appear 
in the conference proceedings.

Papers should be received by the conference co-chairmen at the following
address by June 1st, 1990. Authors will be notified of accepted papers by 
August 1st 1990.

      VISION:                           AI:
      Prof. A. Bruckstein               Dr. Y. Feldman
      7th AICV                          7th AICV
      Faculty of Computer Science       Dept of Computer Science
      Technion                          Weizmann Institute
      32000 Haifa, Israel               76100 Rehovot, Israel
      freddy@techsel.bitnet

Program Committee:
M. Balaban, M. Ben Bassat, R. Dechter, E. Gudes, T. Flash, D. Lehmann,
M. Luria, Y. Moses, U. Ornan, J. Rosenschein, E. Shapiro
Z. Meiri, A. Meizles, S. Peleg, M. Porat, M. Sharir, S. Ullman, M. Werman
H. Wolfson, Y. Yeshurun

------------------------------

Date:           Mon, 29 Jan 90  14:42 GMT
From: Rob Series 0684 895784 <"SP4IP::SERIES%hermes.mod.uk"@relay.MOD.UK>
Subject:        British Machine Vision Conference 1990

                       ANNOUNCEMENT AND CALL FOR PAPERS

                                  BMVC 90
                  British Machine Vision Conference 1990

                           University Of Oxford
                        24th - 27th September 1990

Organised by:
    The British Machine Vision Association and Society for Pattern Recognition

    The Alvey Vision Conference became established as the premier
    annual UK national conference for Machine Vision and related
    topics.  The merger of the BPRA and the Alvey Vision Club to form the
    BMVA enables this successful series of conferences to be continued
    with a similar flavour.

    The emphasis will continue to  be on UK research  being undertaken
    through  national   or   international   collaborative   projects,
    providing a  forum  for the  presentation  and discussion  of  the
    latest results  of  investigations.  Papers  from  other  nations,
    especially those  collaborating  with  UK groups,  are  also  very
    welcome. A printed copy  of the Proceedings  will be available  to
    delegates at the conference,  and a selection  of the best  papers
    will be  published separately  in  a special  issue of  Image  and
    Vision Computing Journal.

    Contributions are sought on any novel aspect related to:
            o  Image Processing and Feature Extraction
            o  Robotic Vision and Sensor Fusion
            o  Object Recognition and Scene Analysis
            o  Practical Applications of Machine Vision
            o  Reconstruction of 3D Shape
            o  Model Based Coding
            o  Advanced Pattern Analysis
            o  Architectures for Vision Systems
            o  Computational Issues in Visual Perception

    Papers will be reviewed by the BMVA Committee.  Papers must not
    exceed 6 pages of A4, including figures, set in double columns in
    10 point type.  A poster session will again be held; authors
    considering submitting short papers describing preliminary results
    may prefer to use this route.  Note that posters (up to 4 sides)
    will be included in the proceedings.  Standard format headers to
    assist in the preparation of camera-ready copy are available from:
        bmvc90@uk.ac.ox.robots

    Separate cash prizes will be given for the two papers which are
    judged by the programme committee:
        (i) to make the best scientific contribution (sponsored by the
            BMVA committee),
    or
        (ii) to have the greatest industrial potential (sponsored by
            Computer Recognition Systems Ltd).

TIMETABLE OF DEADLINES (1990)

    7th May         Six copies of short-form paper (1500 words), or
                    draft final-format papers (see above), to be
                    submitted to Dr A. Zisserman
    4th June        Last date for early registration at preferential
                    rate
    11th June       Notification of acceptance of papers.
    16th July       Camera-ready final paper (plus four additional
                    photocopies) to be received by the Programme
                    Chairman. Final date for posters.
    24th Sept       Conference registration.

             REGISTRATION                            PROGRAMME

             Dr RW Series,                        Dr A Zisserman,
                BMVC90,                               BMVC90,
                 RSRE,                       Dept Engineering Science,
            St Andrews Rd,                          Parks Road,
               MALVERN,                               OXFORD.
            Worcs. WR14 3PS                           OX1 3PJ

          series@uk.mod.rsre                    az@uk.ac.ox.robots

------------------------------

End of VISION-LIST
********************

Vision-List-Request@ADS.COM (Vision-List moderator Phil Kahn) (02/09/90)

Vision-List Digest	Thu Feb 08 19:53:44 PDT 90

 - Send submissions to Vision-List@ADS.COM
 - Send requests for list membership to Vision-List-Request@ADS.COM

Today's Topics:

 Edge detectors
 Multisource Data Integration in Remote Sensing
 NIPS-90 WORKSHOPS Call for Proposals
 systematic biology & computing workshop

----------------------------------------------------------------------

Date: 1 Feb 90 16:30:17 GMT
From: Robert A Nicholls <ran@doc.imperial.ac.uk>
Subject: Edge detectors
Keywords: Edge Detectors, Canny
Organization: Dept. of Computing, Imperial College, London

Can anyone point me in the direction of an alternative source of J. Canny's
"Finding Edges and Lines in Images"?  It was originally an MIT AI Lab Report,
AI-TR-720, 1983.  However, I cannot find this report; I am sure I have seen
it, or a similar article, in a journal, though I can't remember which.

If anyone has implemented Canny's algorithms and wouldn't mind e-mailing them
to me, I would be eternally grateful.

I am a third year research student looking at methods of incorporating 
high-level knowledge into image segmentation. 

Thanks for any help,
                   Bob.


------------------------------

Date: Thursday, 8 February 1990 17:06:44 EST
From: Dave.McKeown@maps.cs.cmu.edu
Subject: Conference on Multisource Data Integration in Remote Sensing

                         IAPR TC 7 Workshop on
             Multisource Data Integration in Remote Sensing

               University of Maryland University College
                       Center of Adult Education
                    College Park, Maryland, U. S. A.

                            June 14-15, 1990

The International Association for Pattern Recognition's Technical 
Committee 7 on Applications in Remote Sensing is organizing a Workshop 
on Multisource Data Integration in Remote Sensing.  The workshop, which 
is being co-sponsored by the NASA Goddard Space Flight Center, 
Greenbelt, MD, will consist of invited and submitted papers in the 
following areas:

   o Remote-Sensing data sources and their characteristics

   o Integrative methods for across-sensor data registration

   o Gathering, validating and registering of ground reference data

   o Development of multi-source data sets, and

   o The utilization of multi-source data sets in Earth Science 
     applications.

Time will be scheduled for discussing tactics for facilitating the 
exchange of multi-source data sets between investigators.

The NASA Goddard Space Flight Center will publish the Workshop 
Proceedings as a NASA Conference Publication.  The Workshop is scheduled 
so as to feed into the IAPR's 10th International Conference on Pattern 
Recognition, which will be held on June 17-21, 1990 at Atlantic City, 
NJ.  If interest warrants, a bus will be chartered to transport the 
participants from College Park, MD to Atlantic City, NJ after the 
Workshop.

                               Deadlines

     February 28, 1990       Paper summary submission*
     April 1, 1990           Author notification
     June 14, 1990           Camera-ready manuscript due at Workshop

     * 4 page, double spaced paper summary plus a 300 word single spaced 
     abstract on a separate (single) page.  The abstract page should 
     include the paper title, and the name, mailing address, 
     affiliation, phone number, FAX number and electronic mail address 
     (if any) for each author.  Abstracts will be published in final 
     program.

                            Paper Submission

Submit papers to:  Dr. James C. Tilton, Mail Code 636, NASA Goddard 
Space Flight Center, Greenbelt, MD 20771, U.S.A.  Phone: (301) 286-9510.  
Fax: (301) 286-3221.  E-Mail: tilton@chrpisis.gsfc.nasa.gov  
(tilton@[128.183.112.25]).

------------------------------

Date: Tue, 6 Feb 90 18:28:54 EST
From: jose@neuron.siemens.com (Steve Hanson)
Subject: NIPS-90 WORKSHOPS Call for Proposals

                           REQUEST FOR PROPOSALS
                     NIPS-90 Post-Conference Workshops
                     November 30 and December 1, 1990


       Following the regular NIPS program, workshops on  current  topics
       on  Neural Information Processing will be held on November 30 and
       December 1, 1990, at a ski  resort  near  Denver.   Proposals  by
       qualified individuals interested in chairing one of these
       workshops are solicited.

       Past topics  have  included:   Rules  and  Connectionist  Models;
          Speech;    Vision;   Neural   Network   Dynamics;   Neurobiology;
          Computational  Complexity  Issues;  Fault  Tolerance  in   Neural
          Networks; Benchmarking and Comparing Neural Network Applications;
          Architectural Issues; Fast Training  Techniques;  VLSI;  Control;
          Optimization, Statistical Inference, Genetic Algorithms.

       The format of the workshops is informal.  Beyond reporting on past
          research, their goal is to provide a forum for scientists
          actively working in the field to freely discuss current issues of
          concern and interest.  Sessions will meet in the morning and in
          the afternoon of both days, with free time in between for ongoing
          individual exchange or outdoor activities.  Specific open or
          controversial issues are encouraged and preferred as workshop
          topics.  Individuals interested in chairing a workshop must
          propose a topic of current interest and must be willing to accept
          responsibility for their group's discussion.  Discussion leaders'
          responsibilities include arranging brief informal presentations
          by experts working on the topic, moderating or leading the
          discussion, and reporting its high points, findings, and
          conclusions to the group during evening plenary sessions and in a
          short (2 page) summary.

       Submission Procedure:  Interested parties should submit  a  short
          proposal  for  a workshop of interest by May 17, 1990.  Proposals
          should include a title  and  a  short  description  of  what  the
          workshop  is  to address and accomplish.  It should state why the
          topic is of interest or controversial, why it should be discussed
          and  what  the  targeted  group of participants is.  In addition,
          please send a brief resume of  the  prospective  workshop  chair,
          list  of publications and evidence of scholarship in the field of
          interest.

       Mail submissions to:
               Dr. Alex Waibel
               Attn: NIPS90 Workshops
               School of Computer Science
               Carnegie Mellon University
               Pittsburgh, PA 15213

          Name, mailing address, phone number, and e-mail net address 
	  (if applicable) must be on all submissions.

      Workshop Organizing Committee:

          Alex Waibel, Carnegie-Mellon, Workshop Chairman;
          Kathie Hibbard, University of Colorado, NIPS Local Arrangements;
          Howard Watchel, University of Colorado, Workshop Local Arrangements;

              PROPOSALS MUST BE RECEIVED BY MAY 17, 1990
                             Please Post

------------------------------

Date: Sat, 3 Feb 1990 10:54:52 PST
From: "Michael G. Walker" <walker@sumex-aim.stanford.edu>
Subject: systematic biology & computing workshop

WORKSHOP ANNOUNCEMENT: 
 
       Artificial Intelligence and Modern Computer Methods
            in Systematic Biology (ARTISYST Workshop)
 
The Systematic Biology Program of the National Science Foundation is
sponsoring a Workshop on Artificial Intelligence, Expert Systems, and
Modern Computer Methods in Systematic Biology, to be held September 9
to 14, 1990, at the University of California, Davis.  There will be
about 45 participants representing an even mixture of biologists and
computer scientists.
 
Expenses for participants will be paid, including hotel (paid directly
by the workshop organizers), food (per diem of US $35), and travel
(with a maximum of US $500 for travel expenses).  Attendance at the
workshop is by invitation only.
 
These are the subject areas for the workshop:

  Machine vision and feature extraction applied to systematics;
  Expert systems, expert workstations and other tools for identification;
  Phylogenetic inference and mapping characters onto tree topologies;
  Literature data extraction and geographical data;
  Scientific workstations for systematics.

  
The workshop will examine state-of-the-art computing methods and
particularly Artificial Intelligence methods and the possibilities
they offer for applications in systematics.  Methods for knowledge
representation as they apply to systematics will be a central focus of
the workshop.  This meeting will provide systematists the opportunity
to make productive contacts with computer scientists interested in
these applications.  It will consist of tutorials, lectures on
problems and approaches in each area, working groups and discussion
periods, and demonstrations of relevant software.
 
Participants will present their previous or proposed research in a
lecture, in a poster session, or in a software demonstration session.
In addition, some participants will present tutorials in their area of
expertise.
 
Preference will be given to applicants who are most likely to continue
active research and teaching in this area.  The Workshop organizers
welcome applications from all qualified biologists and computer
scientists, and strongly encourage women, minorities, and persons with
disabilities to apply.
 
If you are interested in participating, please apply by sending to the
workshop organizers the information suggested below:
 
1) your name, address, telephone number, and electronic mail address;
2) whether you apply as a computer scientist or as a biologist;
3) a short resume; 
4) a description of your previous work related to the workshop topic; 
5) a description of your planned research and how it relates to the workshop;
6) whether you, as a biologist (or as a computer scientist), have
taken or would like to take steps to establish permanent collaboration
with computer scientists (or biologists).  A total of two pages or
less is preferred.  This material will be the primary basis for
selecting workshop participants.
 
If you have software that you would like to demonstrate at the
workshop, please give a brief description, and indicate the hardware
that you need to run the program.  Several PC's and workstations will
be available at the workshop.
 
Mail your completed application to:
 
Renaud Fortuner, ARTISYST Workshop Chairman, 
California Department of Food and Agriculture
Analysis & Identification, room 340
P.O. Box 942871
Sacramento, CA 94271-0001
 
(916) 445-4521
E-mail: rfortuner@ucdavis.edu
 
APPLICATIONS RECEIVED AFTER APRIL 15, 1990 WILL NOT BE ACCEPTED
 
Notification of acceptance of proposal will be made before May 31, 1990 
 
For further information, contact Renaud Fortuner, Michael Walker,
Program Chairman, (Walker@sumex-aim.stanford.edu), or a member of the
steering committee:
 
Jim Diederich, U.C. Davis (dieder@ernie.berkeley.edu)
Jack Milton, U.C. Davis (milton@eclipse.stanford.edu)
Peter Cheeseman, NASA AMES (cheeseman@pluto.arc.nasa.gov)
Eric Horvitz, Stanford University (horvitz@sumex-aim.stanford.edu)
Julian Humphries, Cornell University (lqyy@crnlvax5.bitnet)
George Lauder, U.C Irvine (glauder@UCIvmsa.bitnet)
F. James Rohlf, SUNY (rohlf@sbbiovm.bitnet)
James Woolley, Texas A&M University (woolley@tamento.bitnet)
 
 
The following is a brief description of problems in systematics
related to machine vision. Abstracts for the four other topic areas are
available from Renaud Fortuner or Michael Walker.

 
 
  MACHINE VISION AND FEATURE EXTRACTION APPLIED TO SYSTEMATICS
 
                         F. James Rohlf
 
               Department of Ecology and Evolution
                  State University of New York
                  Stony Brook, NY 11794-5245  
 
 
 
     Most data presently used in systematics are collected through the
visual examination of specimens.  Features are usually found by the
visual comparison of specimens, and most measurements are taken
visually.  These activities can be quite time consuming, so there is
potential for saving a systematist's time if appropriate hardware
and software were available to make routine measurements
automatically.  This would permit more extensive large-scale
quantitative studies.
 
     But automation is difficult in systematics since the features to
be measured are usually not easily separated from the background,
i.e., the visual scene is often cluttered, and the structures of
interest may not have distinct colors or intensities as in many
industrial applications of image analysis. The problem is especially
difficult for certain groups of organisms.  The problem is further
complicated by biological variability.  One usually cannot depend
upon homologous structures having consistent geometrical features that
can be used to automatically identify landmarks of interest.  Other
important complications are that most structures of interest are
3-dimensional and that the "texture" of surfaces often contains
taxonomically useful information.  Both aspects are difficult to
capture with presently available hardware and software.
 
     For these reasons present applications of image analysis in
systematics have been quite modest.  In studies where data are
recorded automatically, time is spent simplifying the image.  For
example, structures of interest are physically separated from the rest
of the specimen and placed upon a contrasting plain background so the
outline can be found with little error.  Alternatively, an
investigator can identify structures of interest by pointing to them
with a mouse, watching how a program finds an outline, and them
editing the trace if necessary.  Working from this outline, additional
landmarks can be identified by the operator.  In some cases these
landmarks can be associated with geometrical features of the outline
and it will be possible for the software to help the operator to
accurately locate these points.  Due to the difficulty of solving the
general problems of the automatic analysis of complex biological
scenes, a more immediate goal should be to develop powerful tools that
a systematist can interact with to isolate structures, locate
landmarks, and compute various measurements.  In addition, it would be
desirable for the software to "learn" how to recognize the structures
so that the process will go faster as both the software and the
systematist become more experienced.
 
   Once the structures and landmarks have been found they are usually
recorded so that, if necessary, additional measurements can be made
without having to go back to the original image. These are usually in
the form of x,y-coordinates of landmarks or chain-coded outlines.  For
very large studies, methods to compress this raw descriptive
information need to be used.
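
   As an illustration of the chain-coded outlines mentioned above, here is
a minimal sketch in C (assuming an 8-connected boundary traced as a list of
x,y points; the function name and conventions are hypothetical).  Each step
to the next boundary pixel is stored as a single direction code 0-7, which
is far more compact than keeping a full coordinate pair per point.

/* direction codes: 0=E, 1=NE, 2=N, 3=NW, 4=W, 5=SW, 6=S, 7=SE,
 * taking "N" as the direction of increasing y; the codes still work
 * if y increases downward, only the compass names flip.            */
int chain_code(const int *x, const int *y, int npts, unsigned char *code)
{
    /* lookup indexed by (dy+1)*3 + (dx+1); -1 marks invalid steps */
    static const int dir[9] = { 5, 6, 7, 4, -1, 0, 3, 2, 1 };
    int i, dx, dy, d;

    for (i = 0; i + 1 < npts; i++) {
        dx = x[i + 1] - x[i];
        dy = y[i + 1] - y[i];
        if (dx < -1 || dx > 1 || dy < -1 || dy > 1)
            return -1;               /* points are not 8-adjacent */
        d = dir[(dy + 1) * 3 + (dx + 1)];
        if (d < 0)
            return -1;               /* repeated point */
        code[i] = (unsigned char) d;
    }
    return npts - 1;                 /* number of codes written */
}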
 
   The features that are measured are usually the same types of
features that would have been measured by hand -- 2-dimensional
distances between landmarks or angles between pairs of landmarks. In
some studies the features used are parameters from functions (such as
Fourier, cubic splines, Bezier curves) fitted to the shapes of
structures or of entire outlines of organisms.  More work is needed to
develop new types of features and to evaluate the implications of
their use relative to traditional methods.
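
   As an illustration of the outline-fitting features mentioned above, here
is a minimal sketch of simple Fourier descriptors (a hypothetical function,
assuming the outline points are roughly evenly spaced along a closed
boundary).  The outline is treated as a complex signal z(k) = x(k) + i y(k);
the magnitudes of its Fourier coefficients, normalized by the first
harmonic, are invariant to translation, rotation, scale, and starting point
of the trace.

#include <math.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

/* x,y: n outline points; desc receives ncoef normalized magnitudes
 * |Z(2)|/|Z(1)| ... |Z(ncoef+1)|/|Z(1)|.  Direct O(n^2) sums are used
 * here for clarity; an FFT would be used in practice.               */
void fourier_descriptors(const double *x, const double *y, int n,
                         double *desc, int ncoef)
{
    int u, k;
    double re1 = 0.0, im1 = 0.0, mag1;

    for (u = 1; u <= ncoef + 1; u++) {
        double re = 0.0, im = 0.0;
        for (k = 0; k < n; k++) {
            double a = -2.0 * M_PI * u * k / n;   /* exp(-i 2 pi u k / n) */
            re += x[k] * cos(a) - y[k] * sin(a);
            im += x[k] * sin(a) + y[k] * cos(a);
        }
        if (u == 1) { re1 = re; im1 = im; }       /* first harmonic */
        else desc[u - 2] = sqrt(re * re + im * im);
    }
    mag1 = sqrt(re1 * re1 + im1 * im1);
    for (u = 0; u < ncoef; u++)
        desc[u] /= (mag1 > 0.0 ? mag1 : 1.0);     /* scale normalization */
}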




------------------------------

End of VISION-LIST
********************

Vision-List-Request@ADS.COM (Vision-List moderator Phil Kahn) (02/21/90)

Vision-List Digest	Tue Feb 20 17:40:23 PDT 90

 - Send submissions to Vision-List@ADS.COM
 - Send requests for list membership to Vision-List-Request@ADS.COM

Today's Topics:

 Others ways to do triangulation sought
 Request for advice on equipment
 Range data archive
 Currently available packages for image processing
 digital photography
 CFP: IEEE TPAMI Special Issue on 3D Scene Interpretation
 CVGIP TOC, Vol. 50, No. 1, March 1990
 Conference on Visual Information Assimilation in Man and Machine
 VBC - 90 Preliminary Conference Announcement

----------------------------------------------------------------------

Date:    19 Feb 1990 11:52:39-GMT
From: aa538 <aa538%city.ac.uk@NSFnet-Relay.AC.UK>
Subject: Others ways to do triangulation sought

I am developing a representation for image structure which involves
triangulating a set of (mostly) irregularly spaced data points.  The
heuristic triangulation algorithm I developed is quite fast, but it regularly
makes mistakes.  I would be grateful if anyone could provide code (preferably
in C) to perform the triangulation more robustly.  Delaunay triangulation is
the only type I know, but any kind would probably be fine.
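
For what it is worth, a flip-based Delaunay triangulator can be built around
the two predicates sketched below: start from any valid triangulation and
flip the shared edge of any pair of adjacent triangles whose opposite vertex
falls inside the circumcircle of the other three, until no such edge remains.
This is only a sketch of those predicates (names are illustrative), and plain
floating-point versions like these can still misjudge nearly degenerate point
sets, so exact-arithmetic predicates are preferable when robustness really
matters.

/* > 0 if a,b,c are in counter-clockwise order, < 0 if clockwise */
double ccw(double ax, double ay, double bx, double by, double cx, double cy)
{
    return (bx - ax) * (cy - ay) - (by - ay) * (cx - ax);
}

/* > 0 if point d lies strictly inside the circumcircle of the
 * counter-clockwise triangle a,b,c (the Delaunay "illegal edge" test) */
double incircle(double ax, double ay, double bx, double by,
                double cx, double cy, double dx, double dy)
{
    double adx = ax - dx, ady = ay - dy;
    double bdx = bx - dx, bdy = by - dy;
    double cdx = cx - dx, cdy = cy - dy;
    double ad2 = adx * adx + ady * ady;
    double bd2 = bdx * bdx + bdy * bdy;
    double cd2 = cdx * cdx + cdy * cdy;

    return adx * (bdy * cd2 - bd2 * cdy)
         - ady * (bdx * cd2 - bd2 * cdx)
         + ad2 * (bdx * cdy - bdy * cdx);
}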

Paul Rosin
Machine Vision Group
Centre for Information Engineering
Dept. Electronic, Electrical, and Information Engineering
City University
Northampton Square
London, EC1V 0HB

------------------------------

Date: Sun, 18 Feb 90 23:18:12 EST
From: yehuda@acsu.buffalo.edu (yehuda newberger)
Subject: request for advice on equipment

    I need information on what kind of monitor and video card would be
appropriate for work in image analysis on MS-DOS type equipment.  I have
a 386 running MS-DOS.  Typically, I want to display 256 by 256 pixels
with 256 different simultaneous shades of gray or colors.  I would prefer
to spend less than $1000.  My address is  Edward Newberger
                                           90 Huntington Avenue
                                           Apt 104
                                           Buffalo, New York 14214


------------------------------

Date: Mon, 19 Feb 90 11:45:59 EST
From: flynn@pixel.cps.msu.edu (Patrick J. Flynn)
Subject: Range data archive

I've made 44 range images (obtained from our Technical Arts
scanner) available for anonymous ftp from:

styrofoam.cps.msu.edu    (IP address 35.8.56.144)

in the pub/images directory.

Some images contain one object, some contain several, with various
amounts of occlusion.

Direct *specific* questions about the images to me (flynn@cps.msu.edu).
General questions about range sensing are best answered by reading
the surveys by Jarvis (PAMI '83), Nitzan (PAMI '88), or Besl
(in the `Advances in Machine Vision' book by J. Sanz, pub. by Springer).


Here is the text of the README file in the images directory.

This directory contains a bunch of range images produced by the
MSU Pattern Recognition and Image Processing Lab's Technical
Arts 100X scanner (aka `White scanner').  You are free to use
these images to test your algorithms.  If the images are to appear
in a published article, please acknowledge the MSU PRIP Lab as
the source of the images (you don't have to mention my name, though).

File format: rather than deal with all the goofy standards out
there for images (and to preserve the floating-point representation),
these images are compressed ASCII text files.  Beware: they expand by
about 10x when uncompressed.  I recommend that you keep them
compressed to save disk space.  Many of you will probably convert
these files to your own `local' image format anyway.

Each image file has a three-line header giving the number of rows and
columns.  This is followed by four images.  The first is the
so-called 'flag' image, where a pixel value of 1 means the corresponding
(x,y,z) values at that pixel are valid.  If the flag value is zero, you
should ignore the (x,y,z) components for that pixel.

Following the flag image is the image of X-coordinates, the image
of Y-coordinates, and the image of Z-coordinates.  All are
floating-point images.  Our White scanner is configured so that
each stripe of range values occupies one column in the image.  We
sweep the object under the stripe with an XY table to get an image.
So the X coordinate image is a linear ramp; the X value is taken
from the absolute position of the X stage in the XY table (we don't
do anything in the Y direction at present).  The Y value depends
on the column number of the pixel, and the Z value is the measured
range (in our lab, Z is the height above a table).

You can use the 3D coordinates of each range pixel, or you can
throw away the X and Y images, and concern yourself with the Z-value
alone.  Note that the `aspect ratio' of the image doesn't
have to be 1, although I try to keep it in the neighborhood of 1.
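
For readers who want a starting point, here is a minimal sketch of a reader
for the format described above (plain C, applied after uncompressing a file).
The exact layout of the three-line header is an assumption on my part -- the
sketch simply takes the first two integers found on those lines as the row
and column counts -- so adjust it to match the actual files.

#include <stdio.h>
#include <stdlib.h>

typedef struct {
    int rows, cols;
    float *flag, *x, *y, *z;      /* each rows*cols values, row-major */
} RangeImage;

static int read_plane(FILE *fp, float *buf, long n)
{
    long i;
    for (i = 0; i < n; i++)
        if (fscanf(fp, "%f", &buf[i]) != 1)
            return -1;
    return 0;
}

RangeImage *read_range_image(const char *path)
{
    char line[256];
    int ints[2], nints = 0, i, a, b, k;
    long n;
    FILE *fp = fopen(path, "r");
    RangeImage *im;

    if (fp == NULL)
        return NULL;
    im = (RangeImage *) malloc(sizeof *im);

    /* assumed header: three lines containing the row and column counts */
    for (i = 0; i < 3; i++) {
        if (fgets(line, sizeof line, fp) == NULL) {
            fclose(fp); free(im); return NULL;
        }
        k = sscanf(line, "%d %d", &a, &b);
        if (k >= 1 && nints < 2) ints[nints++] = a;
        if (k == 2 && nints < 2) ints[nints++] = b;
    }
    if (nints < 2) { fclose(fp); free(im); return NULL; }
    im->rows = ints[0];
    im->cols = ints[1];
    n = (long) im->rows * im->cols;

    im->flag = (float *) malloc(n * sizeof(float));
    im->x    = (float *) malloc(n * sizeof(float));
    im->y    = (float *) malloc(n * sizeof(float));
    im->z    = (float *) malloc(n * sizeof(float));

    /* four planes follow the header: flag, then X, Y, and Z coordinates */
    if (read_plane(fp, im->flag, n) || read_plane(fp, im->x, n) ||
        read_plane(fp, im->y, n)    || read_plane(fp, im->z, n)) {
        fclose(fp);
        return NULL;
    }
    fclose(fp);
    return im;          /* ignore (x,y,z) wherever the flag value is zero */
}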

Availability: I will try to keep these images available on
styrofoam.cps.msu.edu (35.8.56.144) until I leave MSU this summer.
If my next job has machines with internet access and some disk space,
I'll put them there.

Remember to use binary mode when you transfer the images.


------------------------------

Date: Wed, 14 Feb 90 17:25:43 PST
From: Scott E. Johnston <johnston@zooks.ads.com>
Subject: Currently available packages for image processing

The following is a list of currently available packages of image
processing source code.  Some packages are public domain, others are
one-time licenses.  I would welcome any additions or corrections to
this list.  Thank you for all contributions to date.

Scott E. Johnston
johnston@ads.com
Advanced Decision Systems, Inc.
Mountain View, CA  94043

***********
ALV Toolkit

Contact:	alv-users-request@uk.ac.bris.cs

Description:

Public domain image processing toolkit written by Phill Everson
(everson@uk.ac.bris.cs).  Supports the following:

	- image display
	- histogram display
	- histogram equalization
	- thresholding
	- image printing
	- image inversion
	- linear convolution
	- 27 programs, mostly data manipulation


***********

BUZZ

Contact:	Technical:		Licensing:
		John Gilmore		Patricia Altman
		(404) 894-3560		(404) 894-3559

		Artificial Intelligence Branch
		Georgia Tech Research Institute
		Georgia Institute of Technology
		Atlanta, GA 30332

Description:	

BUZZ is a comprehensive image processing system developed at Georgia
Tech.  Written in VAX FORTRAN (semi-ported to SUN FORTRAN), BUZZ
includes algorithms for the following:

	- image enhancement
	- image segmentation
	- feature extraction
	- classification


***********

LABO IMAGE


Contact:	Thierry Pun 	  Alain Jacot-Descombes
		+(4122) 87 65 82  +(4122) 87 65 84
		pun@cui.unige.ch  jacot@cuisun.unige.ch

		Computer Science Center
		University of Geneva
		12 rue du Lac
		CH-1207 
		Geneva, Switzerland

Description: 

Interactive window based software for image processing and analysis.
Written in C.  Source code available.  Unavailable for use in
for-profit endeavours.  Supports the following:

	- image I/O
	- image display
	- color table manipulations
	- elementary interactive operations: 
		- region outlining
		- statistics
		- histogram computation
	- elementary operations:
		- histogramming
		- conversions
		- arithmetic
		- images and noise generation
	- interpolation:  rotation/scaling/translation
	- preprocessing:  background subtraction, filters, etc;
	- convolution/correlation with masks, image; padding
	- edge extractions
	- region segmentation
	- transforms:  Fourier, Haar, etc.
	- binary mathematical morphology, some grey-level morphology
	- expert-system for novice users
	- macro definitions, save and replay

Support for storage to disk of the following:
	- images
	- vectors (histograms, luts)
	- graphs
	- strings


***********

NASA IP Packages

VICAR
ELAS -- Earth Resources Laboratory Applications Software
LAS -- Land Analysis System

Contact:	COSMIC (NASA Facility at Georgia Tech)
		Computer Center
		112 Barrow Hall
		University of Georgia
		Athens, GA  30601
		(404) 542-3265

Description:	

VICAR, ELAS, and LAS are all image processing packages available from
COSMIC, a NASA center associated with Georgia Tech.  COSMIC makes
reusable code available for a nominal license fee (e.g., $3000 for a
10-year VICAR license).

VICAR is an image processing package written in FORTRAN with the
following capability:

	- image generation
	- point operations
	- algebraic operations
	- local operations
	- image measurement
	- annotation and display
	- geometric transformation
	- rotation and magnification
	- image combination
	- map projection
	- correlation and convolution
	- fourier transforms
	- stereometry programs

"ELAS was originally developed to process Landsat satellite data, ELAS
has been modified over the years to handle a broad range of digital
images, and is now finding widespread application in the medical
imaging field ... available for the DEC VAX, the CONCURRENT, and for
the UNIX environment."  -- from NASA Tech Briefs, Dec. 89

"... LAS provides a flexible framework for algorithm development and
the processing and analysis of image data.  Over 500,000 lines of code
enable image repair, clustering, classification, film processing,
geometric registration, radiometric correction, and manipulation of
image statistics."  -- from NASA Tech Briefs, Dec. 89


***********

OBVIUS

Contact:	for ftp --> whitechapel.media.mit.edu
		otherwise --> heeger@media-lab.media.mit.edu
			      MIT Media Lab Vision Science Group
			      (617) 253-0611

Description:

OBVIUS is an object-oriented visual programming language with some
support for imaging operations.  It is public domain CLOS/LISP
software.  It supports a flexible user interface for working with
images.  It provides a library of image processing routines:

	- point operations
	- image statistics
	- convolutions
	- fourier transforms


***********

POPI (DIGITAL DARKROOM)

Contact:	Rich Burridge
		richb@sunaus.sun.oz.AU
                   -- or --
                available for anonymous ftp from ads.com 
		(pub/VISION-LIST-BACKISSUES/SYSTEMS)
		
Description:

Popi was originally written by Gerard J. Holzmann - AT&T Bell Labs.
This version is based on the code in his Prentice Hall book, "Beyond
Photography - the digital darkroom," ISBN 0-13-074410-7, which is
copyright (c) 1988 by Bell Telephone Laboratories, Inc.


***********

VIEW (Lawrence Livermore National Laboratory)

Contact:	Fran Karmatz
		Lawrence Livermore National Laboratory
		P.O. Box 5504
		Livermore, CA  94550
		(415) 422-6578

Description:

Window-based image-processing package with on-line help and user
manual.  Multidimensional (2 and 3d) processing operations include:
	- image display and enhancement
	- pseudocolor
	- point and neighborhood operations
	- digital filtering
	- fft
	- simulation operations
	- database management
	- sequence and macro processing

Written in C and FORTRAN, source code included.  Handles multiple
dimensions and data types.  Available on Vax, Sun 3, and MacII.

------------------------------

Date: Tue, 20 Feb 90 18:38 EST
From: DONNELLY@AMY.SKIDMORE.EDU
Subject: digital photography

Please help me obtain information about the manipulation of 
photographic images digitally.

What are the best products that can be used with a MacIIcx?

Did anyone attend the recent conference on Digital Photography 
that took place in Wash DC?

Are there any new interesting products?
Are there any good books on the subject?

Thanks for your assistance.

Denis Donnelly
donnelly@amy.skidmore.edu

------------------------------

Date: Mon, 19 Feb 90 11:33:25 EST
From: flynn@pixel.cps.msu.edu (Patrick J. Flynn)
Subject: CFP: IEEE TPAMI Special Issue on 3D Scene Interpretation

                           Call for Papers 

                Special Issue of the IEEE Transactions on
                Pattern Analysis and Machine Intelligence
                                 on 

                     Interpretation of 3D Scenes

Papers are solicited for a Special Issue of the IEEE Transactions on
Pattern Analysis and Machine Intelligence which will address the
subject of Interpretation of 3D Scenes. The issue is scheduled for
publication in September, 1991. The Guest Editors for the special issue
will be Anil Jain of Michigan State University and Eric Grimson of
M.I.T.

The interpretation of 3D scenes is a difficult yet important area of
research in computer vision. Advances in sensors that directly sense in
3D and progress in passive 3D sensing methods have resulted in steady
but not spectacular progress in 3D scene interpretation. The quality of
sensed data is getting better, and faster hardware presents more
alternatives for processing it. However, the problems of object
modeling and matching still pose difficulties for general real-world
scenes. Problems in 3D sensing, modeling, and interpretation are being
investigated by a number of vision researchers in a variety of
contexts. The goal of the special issue is to gather significant
research results on sensing, modeling, and matching into one volume
which specifically addresses these issues.

Papers describing novel contributions in all aspects of 3D scene interpretation
are invited, with particular emphasis on:

 -- 3D sensing technologies, both active (laser, sonar, etc.) and
    passive (stereo, motion vision, etc.),
 -- 3D object recognition, both from 3D data and from 2D data,
 -- 3D navigation and path planning
 -- novel object representations that support 3D interpretation
 -- applications (e.g. cartography, inspection, assembly, navigation)
 -- representation and indexing of large libraries of objects
 -- CAD-based 3D vision
 -- architectures for 3D interpretation

We particularly encourage papers that address one or more of these
issues or related issues in 3D interpretation, especially in the context
of complex scenes.  While both theoretical and experimental contributions
are welcome, contributions in which new ideas are
tested or verified on real data are especially sought.

All papers will be subjected to the normal PAMI review process. Please
submit four copies of your paper to:

         			Eric Grimson
         			M.I.T. AI Laboratory
               			545 Technology Square
                 		Cambridge, MA 02139

The deadline for submission of manuscripts is October 1, 1990. For further 
information, contact Anil Jain (517-353-5150, jain@cps.msu.edu)
or Eric Grimson (617-253-5346, welg@ai.mit.edu).

------------------------------

Date: Fri, 16 Feb 90 14:59:58 -0800
From: bertolas@cs.washington.edu (William Bertolas)
Subject: CVGIP TOC, Vol. 50, No. 1, March 1990

Computer Vision, Graphics, and Image Processing
Volume 50, Number 1, March 1990

CONTENTS

M.J. Korsten and Z. Houkes.  The Estimation of Geometry and Motion of a 
	Surface from Image Sequences by Means of Linearization of a Parametric
	Model, p. 1.

Clifford A. Shaffer and Hanan Samet.  Set Operations for Unaligned Linear
	Quadtrees, p. 29.

Phillip A. Veatch and Larry S. Davis.  Efficient Algorithms for Obstacle 
	Detection Using Range Data, p. 50.

David C. Knill and Daniel Kersten.  Learning a Near-Optimal Estimator for 
	Surface Shape from Shading, p. 75.

NOTE

	Amelia Fong.  Algorithms and Architectures for a Class of Non-Linear 
		Hybrid Filters, p. 101.

	Hug-Tat Tsui, Ming-Hong Chan, Kin-Cheong Chu, and Shao-Hua Kong.  
		Orientation Estimation of 3D Surface Patches, p. 112.

BOOK REVIEW

	Michael Lachance. An Introduction to Splines for Use in Computer 
		Graphics and Geometric Modeling. By R.H. Bartels, J.C. Beatty, 
		and B.A. Barsky, p. 125.

ABSTRACTS OF PAPERS ACCEPTED FOR PUBLICATION, p. 000.


------------------------------

Date: 16 Feb 90 20:23 GMT
From: sinha@caen.engin.umich.edu (SARVAJIT S SINHA)
Subject: Conference on Visual Information Assimilation in Man and Machine
Keywords: Conference, Call for Participation, Vision, Information Assimilation
Organization: U of M Engineering, Ann Arbor, Mich.

                          CALL FOR PARTICIPATION

                CONFERENCE ON VISUAL INFORMATION ASSIMILATION
                             IN MAN AND MACHINE

                          University of Michigan,
                               Ann Arbor, MI

                              June 27-29, 1990

In the last 20 years a variety of computational, psychological and
neurobiological models of vision have been proposed. Few of these models have
presented integrated solutions; most have restricted themselves to a
single modality such as stereo, shading, motion, texture or color.

We are hosting a 3-day conference, to be held June 27-29, 1990 at the
University of Michigan, which will bring together leading researchers from
each of these academic areas to shed new light on the problem of how visual
information is assimilated in both man and machine. We have invited
researchers from both academic institutions and research centers in order
to increase the cross-pollination of ideas.
Among the questions that we anticipate will be addressed from all
perspectives are: What are the possible stages and representations
for each visual modality? How is contradictory visual information
dealt with? Is there in natural vision systems (and should there be
in computer vision) one coherent representation of the world---a
single model? If a single model will suffice, how (and where in
neurobiology) can visual information be combined into such a model?
If a single model will not suffice, are there ways of partitioning
visual information among multiple models that are more likely to be
used in man and useful in machines?

Invited Talks
Irving Biederman (University of Minnesota) 
          Human Object Recognition
Stephen M. Kosslyn (Harvard University)    
          Components of High-Level Vision
Whitman Richards (MIT) and Allen Jepson (Univ. of Toronto)
          What is Perception?
Geoffrey R. Loftus (Univ. of Washington)
          Effects of Various Types of Visual Degradation 
          on Visual Information Acquisition
Barry J. Richmond (National Inst. of Mental Health)
          How Single Neuronal Responses Represent Picture 
          Features Using Multiplexed Temporal Codes
Patrick Cavanagh (Harvard University)
          3D Representation
Daniel Green (University of Michigan)
          Control of Visual Sensitivity
Laurence Maloney (New York University)
          Visual Calibration
Misha Pavel (Stanford University)
          Integration of Motion Information
Brian Wandel (Stanford University)
          Estimation of Surface Reflectance and Ambient Illumination
Klaus Schulten (Univ. of Illinois)
          A Self-Organized Network for Feature Extraction
John K. Tsotsos (Univ. of Toronto)
          Attention and Computational Complexity of 
          Visual Information Processing
Shimon Ullman (Weizmann Inst-MIT)
          Visual Object Recognition

For an extended e-mail announcement, send a message to 
iris@caen.engin.umich.edu
For further information contact the University of Michigan Extension Service,
Department of Conferences and Institutes, 200 Hill Street, Ann Arbor, MI
48104-3297. Telephone 313-764-5305. 

Sarvajit Sinha                            sinha@caen.engin.umich.edu
157, ATL Bldg,University of Michigan                    313-764-2138


------------------------------

Date: 19 Feb 90 14:08:42 GMT
From: arkin%pravda@gatech.edu (Ronald Arkin)
Subject: VBC - 90 Preliminary Conference Announcement
Keywords: visualization, conference, biomedical
Organization: Georgia Tech AI Group

                             VBC '90
                 PRELIMINARY CONFERENCE PROGRAM

                 Georgia Institute of Technology
                               and
               Emory University School of Medicine
                            host the

                      First  Conference  on
             Visualization  in Biomedical  Computing

                         May 22-25, 1990

                     RITZ-CARLTON  BUCKHEAD
                        ATLANTA, GEORGIA


                             PURPOSE

The goal of the First Conference on Visualization in Biomedical Computing 
(VBC) is to help define and promote the emerging science of visualization 
by bringing together a multidisciplinary, international group of researchers,
scientists, engineers, and toolmakers engaged in all aspects of scientific 
visualization in general, and visualization in biomedical computing in 
particular.

                              THEME

Visualization in scientific and engineering research is a rapidly emerging
discipline aimed at developing approaches and tools to facilitate the
interpretation of, and interaction with, large amounts of data, thereby
allowing researchers to "see" and comprehend, in a new and deeper manner, the
systems they are studying. Examples of approaches to scientific visualization
include the dynamic presentation of information in three dimensions,
development of dynamic methods to interact with and manipulate
multidimensional data, and development of models of visual perception that
enhance interpretive and decision-making processes. Examples of visualization
tools include graphics hardware and software to graphically display and
animate information, as well as environments that facilitate human-machine
interaction for the interpretation of complex systems. Examples of
applications of visualization in biomedical computing include presentation of
anatomy and physiology in 3D, animated representation of the dynamics of
fluid flow, and graphical rendering of biomolecular structures and their
interactions.

                            AUDIENCE

The presentations, discussions, and interactions by and between participants
will be of interest to scientists, engineers, medical researchers,
clinicians, psychologists, and students interested in various aspects of
visualization.

               COOPERATING/CO-SPONSORING ORGANIZATIONS

Alliance for Engineering in Medicine and Biology
American Association of Physicists in Medicine
Emory-Georgia Tech Biomedical Technology Research Center
Emory University School of Medicine
Georgia Institute of Technology
IEEE CS Technical Committee on Computer Graphics
IEEE Computer Society
IEEE Engineering in Medicine and Biology Society
Institute of Electrical and Electronics Engineers (IEEE)
International Federation for Medical and Biological Engineering
International Medical Informatics Association
National Science Foundation


                       OVERVIEW OF VBC 90

The technical program of VBC 90 will consist of: 

     o   One day of tutorial courses by leading experts 
     o   A plenary session highlighting invited speakers 
     o   Two parallel tracks of contributed papers representing both 
         theoretical and application areas of visualization in biomedical 
         computing 
     o   A series of panels on issues of controversy or of current interest, 
         open for discussions among all attendees 
     o  Technical exhibits by numerous commercial vendors of visualization 
        technologies

The remainder of the VBC 90 program includes continental
breakfast each morning, refreshment breaks each day, an evening
reception, and dinner accompanied by a laser show at Stone
Mountain.  Registrants who wish to do so may also obtain
continuing medical education credit.  A tear-off registration
panel is included with this program announcement.

TUTORIALS  Tutorial courses take place Tuesday May 22 from 8 AM
through 6:30 PM.  Each course lasts one half-day (approximately
four hours) and there are a total of four courses offered from
which each registrant can choose two.  The four tutorials are:

          Morning                       Afternoon
     Tu1a Volume Rendering         Tu2a Biomedical Visualization 
     Tu1b Human Visual Performance Tu2b Stereoscopic Visualization 
                                        Techniques 


PLENARY SESSION Invited papers will be presented during the first
morning session (W1) Wednesday at 8:30 AM.  The distinguished
speakers and their respective talks are:

          Dr. HENRY FUCHS, University of North Carolina 
          Future High-Speed Systems for Biomedical Visualization

          Dr. RICHARD FELDMANN, National Institutes of Health
          Visualizing The Very Small: Molecular Graphics 
                   ___________________________
TECHNICAL PRESENTATIONS  Two parallel tracks of contributed
papers will be offered, representing diverse theoretical and
applications-related research topics in biomedical visualization. 
The presentation topics and their respective sessions are
organized as follows:

WEDNESDAY AM
     o  Volume Visualization (W2a)
     o  Biomedical Applications I: Cells, Molecules, and Small Systems (W2b)
WEDNESDAY PM
     o  Models of Visualization (W3a)
     o  Computer Vision in Visualization I: Segmentation (W3b)
THURSDAY AM
     o  Artificial Intelligence and Inexact Visualization (T1a)
     o  Biomedical Applications II: Cardiovascular system (T1b)
     o  Visual Perception (T2a)
     o  Biomedical Applications III: Flow and MRI Studies (T2b)
THURSDAY PM
     o  Human-Machine Interfaces (T3a)
     o  Systems and Approaches I: System Design (T3b)
FRIDAY AM
     o  Systems and Approaches II: Algorithms (F1a)
     o  Computer Vision II: Analysis of Imagery II (F1b)
     o  Mathematical and Computational Models (F2a)
     o  Biomedical Applications IV: Treatment Planning (F2b)
FRIDAY PM 
     o  Visualization in Medical Education and General Applications (F3a)
     o  Biomedical Applications V: Tools and Techniques (F3b)


PANELS Two concurrent panels will take place on the afternoons of
both Wednesday and Thursday.  The panels are: 

Wednesday Afternoon
     o  Surface Versus Volume Rendering (W4a)
     o  Chaos and Fractals in Electroencephalography (W4b)
Thursday Afternoon
     o  The Role of 3D Visualization in Radiology and Surgery (T4a)
     o  Visualization in the Neurosciences (T4b)


                       CONFERENCE REGISTRATION

The registration fee for members of Cooperating/Co-sponsoring Organizations
is $295 prior to March 31. The registration fee after this date is $345. For
non-members, the registration fee is $345 prior to March 31 and $395 after
this date. The special student rate is $50. (Proceedings and reception 
tickets are not included at the special student rate, but may be purchased 
separately.) The registration fee includes conference registration, 
proceedings, reception, refreshments, and other amenities involved in making
this a rewarding learning experience.

                        TUTORIAL REGISTRATION

The tutorial registration fee is $175 per tutorial for attendees registering 
prior to March 31 and $215 for attendees registering after this date. 
Attendees will receive the special discounted rate of $275 for two tutorials
before March 31. The special tutorial registration fee for students is $95 
per tutorial or $150 for two tutorials prior to March 31, and $125 per 
tutorial or $190 for two tutorials after this date. The tutorial registration
fee includes course notes and refreshments.

                            ACCOMMODATIONS

Hotel arrangements are to be handled by the individual directly with The 
Ritz-Carlton Buckhead. To reserve your room, you may call the hotel directly
toll free at (800) 241-3333 or (404) 237-2700. A limited number of rooms
have been made available at the special group rate of $110 single or $119
double (plus tax). Please mention "Visualization in Biomedical Computing."
Reservations should be made as soon as possible but not later than March 31.

                        DISCOUNT AIR TRANSPORTATION

We have made special arrangements to provide you with a 40% discount off the
normal coach fare, no penalties, on Delta Air Lines. Discounts on restricted
supersaver fares are also available. To make your reservations, call 
(800) 288-4446 toll free and refer to "Emory University's Delta File No.
A18445."

                             IMPORTANT DATES
                   Early registration:  March 15 1990
        Special hotel room rate guaranteed through:  March 15 1990



------------------------------

End of VISION-LIST
********************

Vision-List-Request@ADS.COM (Vision-List moderator Phil Kahn) (02/23/90)

Vision-List Digest	Thu Feb 22 16:46:13 PDT 90

 - Send submissions to Vision-List@ADS.COM
 - Send requests for list membership to Vision-List-Request@ADS.COM

Today's Topics:

 shape decomposition
 Refs for Visual Analysis
 Programming Series 100 frame grabber (Imaging Tech. Inc.)

----------------------------------------------------------------------

From: mtc%aifh.edinburgh.ac.uk@NSFnet-Relay.AC.UK
Date: Wed, 21 Feb 90 13:57:17 GMT
Subject: shape decomposition

Can anybody out there suggest references on shape decomposition
in 3D? The main concern is how to define a "subpart" of a 3D shape;
the applied problem is how to segment a range image into significant
subparts (subvolumes).

Thanks in advance

Manuel Trucco

Dept. of Artif. Intelligence, University of Edinburgh,
5 Forrest Hill, EH1 2QL,
Edinburgh, Scotland.
E-mail: mtc@edai.ed.ac.uk


------------------------------

Date: 21 Feb 90 16:20:42 GMT
From: M Cooper <maggie@castle.edinburgh.ac.uk>
Subject: Refs for Visual Analysis
Organization: Edinburgh University Computing Service

I'm looking for references to work on analysis of the meaning of
shapes within images, e.g. the machine equivalent of the human
knowledge that * is an asterisk, or that a picture of a pig represents
a pig.  I'm interested in work that looks at semantic image analysis,
beyond the early robotic vision systems.  The problem I'm looking at
may be seen as abstracting physical features or regularities from a
class of images, then matching the images to linguistically based
knowledge.  We could use a template-based approach or some kind of
visual-spatial grammar to describe the morphology of object-images,
and mapping rules for semantics.  I guess you'll have gathered that
this is all new to me, so any references for work on visual languages,
image analysis, graphical lexicons, etc.  will be gratefully received.
Thanks,

	Maggie

------------------------------

Date: 22 Feb 90 17:02:11 GMT
From: walt.cc.utexas.edu!ycy@cs.utexas.edu (Joseph Yip)
Subject: Programming Series 100 frame grabber (Imaging Tech. Inc.)
Organization: The University of Texas at Austin, Austin, Texas

We bought a Series 100 frame grabber board from Imaging Technology Inc.
The software that comes with the board should include the Toolbox software
which has some source programs on how to program the board. We are installing
the board on a Sun system.

Does anyone out there have the Toolbox software and would be willing to
send it to me? Or does anyone have any experience programming the Series 100
frame grabber board on a Sun or Unix system?

Thanks

Joseph Yip
Email: ycy@happy.cs.utexas.edu

------------------------------

End of VISION-LIST
********************

Vision-List-Request@ADS.COM (Vision-List moderator Phil Kahn) (03/02/90)

Vision-List Digest	Thu Mar 01 10:23:53 PDT 90

 - Send submissions to Vision-List@ADS.COM
 - Send requests for list membership to Vision-List-Request@ADS.COM

Today's Topics:

 3D IMAGE Reconstruction.
 Pixar for image processing
 range-hardware
 Workshop: Adap. Neural Nets & Statistical Patt. Recog.

----------------------------------------------------------------------

Date: Fri, 23 Feb 90 10:48:12 +0000
From: P.Refenes@Cs.Ucl.AC.UK
Subject: 3D IMAGE Reconstruction.

Can anyone out there suggest references on 3-D image
reconstruction?  I am looking for the best review paper on techniques
for 3-D image reconstruction. Failing that, I could do with the second
best, or with any paper or decent reference.  The particular problem
that we have is to reconstruct a 3-D image from slides of skin
(histopathology).

Thanks in advance.

Paul Refenes
Department of Computer Science
University College London
Gower Street, WC1 6BT.


------------------------------

Date: Fri, 23 Feb 90 09:35:07 CST
From: fsistler@gumbo.age.lsu.edu
Subject: Pixar for image processing

I am using a Pixar II with a SparcStation 4/370 for image analysis and
processing, and am trying to find a users group network where Pixar
users communicate.  Can anyone tell me if such a group exists, and how
I can join?

I would appreciate any help that anyone would offer.

Fred Sistler
Louisiana State University

------------------------------

Date: Wed, 28 Feb 90 11:37:02 est
From: "Y." (Greg)Shiu <yshiu@thor.wright.edu>
Subject: range-hardware

Does anybody know of companies that sell dense ranging devices, using
time of flight or triangulation? And what are their approximate prices?

I have heard that ranging systems are expensive (around 100K), 
so I may want to buy a plane laser projector for building my own 
triangulation-based range sensor, but I don't know which companies 
sell plane laser devices.

   Greg Shiu
   Department of Electrical Engineering
   Wright State University, Dayton, OH 45435
   phone: (513) 873-4254
   email: yshiu@cs.wright.edu

------------------------------

Date: Mon, 26 Feb 90 19:08:29 EST
From: flynn@pixel.cps.msu.edu (Patrick J. Flynn)
Subject: Workshop: Adap. Neural Nets & Statistical Patt. Recog.

                                 Workshop on
              Artificial Neural Networks & Pattern Recognition

                                Sponsored by
        The International Association for Pattern Recognition (IAPR)

                                 Sands Hotel
                          Atlantic City, New Jersey
                                June 17, 1990
 

Recent developments in artificial neural networks (ANN's) have caused a
great deal of excitement in the academic, industrial, and defense
communities.  Current ANN research owes much to several decades of work
in statistical pattern recognition (SPR); indeed, many fundamental
concepts from SPR have recently found new life as research topics when
placed into the framework of an ANN model.

The aim of this one-day workshop is to provide a forum for interaction
between the leading researchers from the SPR and ANN fields.  As
pattern recognition practitioners, we seek to address the following
issues:

**In what ways do artificial neural networks differ from the well-known
paradigms of statistical pattern recognition?  Are there concepts in
ANN for which no counterpart in SPR exists (and vice versa)?

**What benefits can come out of interaction between ANN and SPR researchers?

**What advantages, if any, do ANN techniques have over SPR methods in
dealing with real world problems such as object recognition, pattern
classification, and visual environment learning?

                              Tentative Program

 8:00 Registration
 8:30 Issues in ANN and SPR, Laveen Kanal, University of Maryland
 9:15 Links Between ANN's & SPR, Paul Werbos, National Science Foundation
10:00 Coffee Break
10:30 Generalization & Discovery in Adaptive Pattern Recognition, Y. Pao,
      Case Western Reserve University
11:15 Character Recognition, Henry Baird, AT&T Bell Labs

12:00 LUNCH

 1:30 Target Recognition, Steven Rogers, U.S. Air Force
 2:15 Connectionist Models for Speech Recognition, Renato DeMori, McGill
      University
 3:00 Coffee Break
 3:30 Panel Discussion,  Moderators: Anil Jain, Michigan State University &
      Ishwar Sethi, Wayne State University

Registration Information:
  Advance Registration (by 5/15/90): $100
  Late Registration: $120

Contact:    Ms. Cathy Davison (Workshop on ANN and PR)
            Department of Computer Science, A-714 Wells Hall
            Michigan State University, East Lansing, MI 48824
            Tel. (517)355-5218, email: davison@cps.msu.edu, FAX: (517)336-1061


------------------------------

End of VISION-LIST
********************

Vision-List-Request@ADS.COM (Vision-List moderator Phil Kahn) (03/16/90)

Vision-List Digest	Thu Mar 15 09:39:46 PDT 90

 - Send submissions to Vision-List@ADS.COM
 - Send requests for list membership to Vision-List-Request@ADS.COM

Today's Topics:

 Vector correlation
 Use of Artificial Neural Networks in Image Processing
 Additions to List of IP Source Code Packages
 Australian AI Conference
 Call for discussion:  comp.robotics

----------------------------------------------------------------------

Date: Tue, 13 Mar 90 04:00:21 GMT
From: us214777@mmm.serc.3m.com (John C. Schultz)
Subject: vector correlation
Organization: 3M - St. Paul, MN  55144-1000 US

A friend gave me a copy of an article from the Jan 1990 (p 138-9) Photonics
Spectra on "Vector Correlation", which was the first I had heard of the
concept of vector correlation.

As I understand what the author was talking about, you use a high-pass filter
such as a Sobel to determine edge magnitude and gradient (the article only
discussed 4 angles vs. Sobel's 8).  The correlation for object location can
then be done much more robustly wrt lighting variations by correlating the 4
(or 8 for Sobel?)  images for each direction vector and summing the resulting
correlation images.

The advantages of this approach would seem to be:
1. insensitivity to light level even as compared to normalized correlation
2. greater location precision since the object location is completely
   determined by its edge location(s)

The disadvantage is the computational complexity - what was one correlation
has suddenly become 4 (or 8 in the case of Sobel?).
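
For concreteness, here is a minimal NumPy/SciPy sketch of the scheme as I
read it -- my own code, not the article's, using a 4-way quantization of the
Sobel gradient direction and summing one correlation image per direction:

import numpy as np
from scipy.ndimage import sobel, correlate

def direction_channels(img, nbins=4):
    """Split edge magnitude into one channel per quantized gradient direction."""
    img = img.astype(float)
    gx = sobel(img, axis=1)
    gy = sobel(img, axis=0)
    mag = np.hypot(gx, gy)
    ang = np.arctan2(gy, gx)                      # gradient direction, -pi..pi
    bins = ((ang + np.pi) / (2 * np.pi / nbins)).astype(int) % nbins
    return [np.where(bins == k, mag, 0.0) for k in range(nbins)]

def vector_correlation(image, template, nbins=4):
    """Correlate each direction channel separately and sum the results;
    the peak of the summed map is the estimated object location."""
    img_ch = direction_channels(image, nbins)
    tpl_ch = direction_channels(template, nbins)
    score = sum(correlate(ic, tc, mode='constant')
                for ic, tc in zip(img_ch, tpl_ch))
    return np.unravel_index(np.argmax(score), score.shape), score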

My questions:

Why vector correlation?  Seems to me this is just a fancy name for object edge
correlation. 

Does anyone have any references for this technique?  Possibly under a
different name?  The author was from Applied Intelligent Systems Inc. and
neglected to include any references :-).

Anyone have any experience with this technique?  Any code they would be
willing to share?

As a final note, I think this is what Intelledex uses internally in their
"turn-key" vision system, and they do get 1/10 pixel location precision under
a lot of variable and/or poor lighting conditions.

John C. Schultz                   EMAIL: jcschultz@mmm.3m.com
3M Company,  Building 518-01-1     WRK: +1 (612) 733 4047
1865 Woodlane Drive, Dock 4
Woodbury, MN  55125

------------------------------

Date: Wednesday, 14 Mar 1990 13:37:03 EST
From: m20163@mwvm.mitre.org (Nahum Gershon)
Subject: Use of Artificial Neural Networks in Image Processing

I am looking for references on the use of Artificial Neural Networks
in image processing and also in biomedical imaging.  Does anyone have
any information?

  * * Nahum

------------------------------

Date: Tue, 13 Mar 90 12:09:16 PST
From: Scott E. Johnston <johnston@odin.ads.com>
Subject: Additions to List of IP Source Code Packages

In a recent posting to the vision-list I listed packages of image
processing source code, available in the public domain or for a
one-time license.  I inadvertently left out the HIPS software package
developed by Michael Landy.  My apologies to Professor Landy.  Here is
the information on HIPS.  In addition I received information about
a package called XVision available from the University of New
Mexico.  Once again, I welcome any additions or corrections to this list. 

Scott E. Johnston
johnston@ads.com
Advanced Decision Systems, Inc.
Mountain View, CA  94043

========================================================================

HIPS	

Contact:	SharpImage Software  
		P.O. Box 373		
		Prince St. Station
		NY, NY  10012

		Michael Landy (212) 998-7857 
		landy@nyu.nyu.edu

Description:

HIPS consists of general UNIX pipes that implement image processing
operators.  They can be chained together to implement more complex
operators.  Each image stores a history of the transformations applied to it.
HIPS is available, along with source code, for a $3000 one-time
license fee.

HIPS supports the following:
	- simple image transformations
	- filtering
	- convolution
	- Fourier and other transforms
	- edge detection and line drawing manipulation
	- image compression and transmission
	- noise generation
	- image pyramids 
	- image statistics
	- library of convolution masks
	- 150 programs in all
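
Not HIPS code (and not HIPS command names), but to illustrate the chained-
filter pattern: each operator can be a stand-alone UNIX filter that reads an
image on stdin and writes one on stdout.  A generic Python sketch, assuming
raw 8-bit 512x512 frames for simplicity:

import sys
import numpy as np

ROWS, COLS = 512, 512            # assumed fixed frame size for this sketch

def main():
    # Read one raw 8-bit frame from stdin, apply an operator, write to stdout.
    raw = sys.stdin.buffer.read(ROWS * COLS)
    img = np.frombuffer(raw, dtype=np.uint8).reshape(ROWS, COLS)
    out = 255 - img              # example operator: photometric negation
    sys.stdout.buffer.write(out.astype(np.uint8).tobytes())

if __name__ == "__main__":
    main()

Several such filters can then be composed on the command line, e.g.
"negate.py < in.raw | threshold.py > out.raw" (both names hypothetical).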

========================================================================

XVision

Contact:	John Rasure
		Dept. of EECE
		University of New Mexico
		Albuquerque NM 87131
		505-277-1351
		rasure@bullwinkle.unm.edu


XVision is a C-based system developed at the University of New Mexico.

It includes:	1) an image processing library of 150 algorithms from early
		processing to classification and shape analysis
		2) a graphical form/menu interface and a command line interface
		3) a visual language for configuring complex image processing
		pipelines
		4) an extensive 2d and 3d plotting capability
		5) an interactive image analysis capability
		6) code generators for generating the command line user 
		interface and the X windows user interface C code.

The system runs on most UNIX systems, and the group has a limited number of
licenses that it can give away at no cost.


------------------------------

Date: Wed, 14 Mar 90 13:24:11 +0800
From: les@wacsvax.cs.uwa.oz.au (Les Kitchen)
Subject: Australian AI Conference

                                CALL  FOR  PAPERS

          4th Australian  Joint  Conference  on Artificial  Intelligence
              AI'90
          21-23 November, 1990
          Hyatt Regency, Perth, Western Australia

                       Everyday AI - New Tools for Society
                 This conference  is a  major regional  forum for
                 the  presentation  of  recent  research  on  the
                 theory and practical  applications of Artificial
                 Intelligence.    It   acts  as  a   catalyst  to
                 stimulate further  research  and cooperation  in
                 this important area  within the  Australasia and
                 Indian-Pacific region. The theme  of this year's
                 conference aims  to  encourage  and  promote  AI
                 techniques  and   tools  for   solving  everyday
                 problems.

          Topics Include (but not limited to):

          * Logic and Reasoning
          * Knowledge Representation and Acquisition
          * Machine Learning
          * Artificial Neural Networks  
  ====>   * Computer Vision and Robotics
          * Natural Language and Speech Recognition
          * Expert Systems and development tools
          * Applied AI in Civil,  Electrical, Electronic, and Mechanical
              Engineering
          * Knowledge Engineering in Business Applications
          * Applications in Government and Mining

          Criteria for acceptance
          This conference welcomes high quality papers which make a
          significant contribution to the theory or practice of A.I.
          Papers in the application areas will be judged by the novelty
          of the application, its formulation, the application of new
          A.I. techniques, and the success of the application project.

          Requirement for submission
          Authors must submit four copies of  their full papers to AI'90
          Programme Committee by 11th May 1990.    Submissions after the
          deadline may be returned without being opened. Notification of
          acceptance and format of the camera  ready copy will be posted
          by the 27th  July 1990. The  camera ready final  paper will be
          due on 24th August 1990.

          Paper Format for Review
          The paper should be about 5000 words in length.  It should be
          typed with at least one-and-a-half line spacing and be clearly
          legible.  Authors should try to limit their paper to not more
          than 15 pages, not including diagrams.  Each paper must include
          a title and an abstract of about 100 words, but no other
          identifying marks.  The abstract of 100 words, with the title,
          authors' names, and correspondence address, should accompany
          the submission on a separate page.

          Publication
          All papers accepted in the conference will be published in the
          conference  proceedings.  Following  the   tradition  of  this
          conference, effort  will  also  be  made  to publish  selected
          papers from the conference in book form for wider circulation.

          Submission Check List
          When submitting your paper, please include the following: Name
          of contact, postal  address, telephone  (with country  code if
          applicable), fax number,  e-mail address,  FOUR copies  of the
          paper, an abstract, and a biographical note of the authors.

          Submit papers to:
          AI'90 Programme Committee
          c/o Department of Computer Science,
          University of Western Australia,
          Nedlands, W.A. 6009,
          AUSTRALIA


          Enquiries to:
                Dr. C.P.Tsang, AI'90 Programme Chair,
                Tel: +61-9-380-2763
                Fax: +61-9-382-1688
                email: ai90paper@wacsvax.oz.au

          This  conference  is  sponsored  by  the  Australian  Computer
          Society  through  the  National  Artificial  Intelligence  and
          Expert Systems Committee.

------------------------------

Date: 9 Mar 90 00:18:30 GMT
From: ttidca.TTI.COM!hollombe%sdcsvax@ucsd.edu (The Polymath)
Subject: Call for discussion:  comp.robotics
Organization: Citicorp/TTI, Santa Monica

[ I post this for your information only.  Please direct responses to 
  the poster of this message. 
		phil...	]


The broad distribution of this proposal should give some idea as to why I
consider the creation of this group appropriate.  The subject of robotics
can draw on information from all of these groups and more, yet none is
particularly appropriate to it.  There is no one group I could go to with
a specific robotics problem with a high probability of finding anyone who
even has the same problem, let alone a solution.  Rather than broadcast
such questions to the net, I'd like to see a dedicated group formed.

I therefore propose a new group:

     Name:     comp.robotics

     Moderation:  Unmoderated

     Charter:  The discussion and exchange of information on the practical
	       aspects of real-world robots and their applications --
               industrial, personal and experimental.

I put in the "real-world" qualification deliberately to exclude
discussions of positronic brains, R2D2 and who, if anyone, was really
inside Robbie in "Forbidden Planet."  I suggest that Asimov's laws are also
best left to a more philosophically or socially oriented group.

For those interested in leading edge research, AI, machine vision, etc. a
sci.robotics group might be more appropriate and can also be discussed at
this time.  I don't think the two groups are mutually exclusive. (i.e.:
Creation of one doesn't necessarily remove the need for the other).

Follow-ups are directed to news.groups

The Polymath (aka: Jerry Hollombe, hollombe@ttidca.tti.com)  Illegitimis non
Citicorp(+)TTI                                                 Carborundum
3100 Ocean Park Blvd.   (213) 450-9111, x2483
Santa Monica, CA  90405 {csun | philabs | psivax}!ttidca!hollombe


------------------------------

End of VISION-LIST
********************

Vision-List-Request@ADS.COM (Vision-List moderator Phil Kahn) (04/07/90)

Vision-List Digest	Fri Apr 06 09:46:31 PDT 90

 - Send submissions to Vision-List@ADS.COM
 - Send requests for list membership to Vision-List-Request@ADS.COM

Today's Topics:

 Chevreul illusion
 Stereo Sequence Wanted
 Pointers to image archives
 Image database
 WANTED: image processing/pattern recognition job
 Fulbright Lecturer/Researcher Position
 CALL FOR PARTICIPATION: AAAI-90 Workshop on Qualitative Vision
 Call for votes:  comp.robotics

----------------------------------------------------------------------

Date: Mon, 2 Apr 90 13:26:01 +0200
From: ronse@prlb.philips.be
Subject: Chevreul illusion

Can someone explain what is the Chevreul illusion seen in a luminance
staircase composed of two step edges? References appreciated. Thanks.

Christian Ronse

Internet:                       ronse@prlb.philips.be
BITNET:                         ronse%prlb.philips.be@cernvax

[ As usual, please respond to the List.		phil...	]

------------------------------

Date: Fri, 30 Mar 90 14:57:45 +0200
From: Meng-Xiang Li  <mxli@bion.kth.se>
Subject: Stereo Sequence Wanted

We are trying to create a stereo sequence of an indoor scene with a camera
mounted on a robot arm, but we have encountered some problems with the
calibration of the system. We wonder if anyone out there has such stereo 
sequences or the like. The point is that we need the necessary (calibration)
data in order to reconstruct the 3D scene. Does anyone have or know of such
data? If you do, please let us know. Any help is appreciated very much. 
(The data is going to be used in a joint ESPRIT project.)

Mengxiang Li

E-mail: mxli@bion.kth.se
Mail  : Computational Vision & Active Perception Lab. (CVAP)
        Royal Institute of Technology, S-10044 Stockholm, SWEDEN
phone : +46 8 7906207

------------------------------

Date:     Tue, 3 Apr 90 16:17:36 EDT
From: Rshankar@top.cis.syr.edu
Subject:  Pointers to image archives

I am looking for pointers to existing archives of intensity/range/
stereo/motion images. I would also need information about the
availability of these images to outside users. I can summarize the 
replies and post them to the net.

- Ravi

------------------------------

Date: Wed, 4 Apr 90 12:08:53 EDT
From: nar@cs.wayne.edu
Subject: Image database

Hi,

I am looking for a database of 2-D shapes (preferably industrial
shapes) to use in a shape recognition system that I am 
currently implementing.  I would appreciate it if anybody
out there could give me one, or give me pointers as to where
I could find them.

Thanx in advance,

Nagarajan Ramesh                !   internet- nar@cs.wayne.edu
Dept. of Computer Science,      !   uucp    - ..!umich!wsu-cs!nar
Wayne State University,
Detroit MI 48202

------------------------------

Date: 5 Apr 90 17:52:41 GMT
From: Justine Stader <jussi@aiai.edinburgh.ac.uk>
Subject: WANTED: image processing/pattern recognition job
Summary: German hunting jobs in Edinburgh
Organization: AIAI, University of Edinburgh, Scotland

A German lad without net-access asked me for help so here goes:

He is coming to Edinburgh in August 1990 and he would like to
start working in Edinburgh around then.
By August he will have a German degree in Computer Science.
His main interests and experiences (from Summer jobs etc.)
are in the field of image processing and pattern recognition.

If you know or hear of any vacancies I (and he) would be 
grateful if you could let me know.

Thanks in advance
			Jussi.

Jussi Stader, AI Applications Institute, 80 South Bridge, Edinburgh EH1 1HN
E-mail jussi@aiai.ed.ac.uk, phone 031-225 4464 extension 213

------------------------------

Date: Mon, 2 Apr 90 11:56:35 +0200
From: mkp@stek7.oulu.fi (Matti Pietik{inen)
Subject: Fulbright Lecturer/Researcher Position

The Computer Vision Group at the University of Oulu, Finland, is engaged in
research into industrial machine vision, investigating problems
associated with robot vision, visual inspection and parallel vision
systems. With a staff of about 15, it is the largest group of its kind
in Finland.

The Group is seeking a Fulbright lecturer/researcher scholar for one
semester during the academic year 1991-92 in the field of computer vision.
The scholar would participate in collaborative research and teach
an advanced post-graduate course in computer vision. The scholar
should be a U.S. citizen and have a Ph.D. with some years of experience
in computer vision.

The research collaboration can be in any of our interest areas,
including computer vision algorithms, vision for intelligent robots,
parallel algorithms and architectures for vision, or automated visual
inspection.

The students of the computer vision course will have at least an MSEE
degree with some experience in computer vision research. The course
will have 2 classroom hours per week. The students' comprehension
level in English is good. The estimated class size is 15.

The library of the Department of Electrical Engineering has all
major journals and a large number of books and conference proceedings
in the field of computer vision. The computer facilities of the
Computer Vision Group include several SUN-3 workstations, a Symbolics
3645 and IBM PC's. A dedicated image analysis system, consisting of
boards manufactured by Datacube, Inc., is used for high speed analysis
tasks. A multiprocessor system NTP 1000 based on transputers is used
for studying the problems of parallel processing. Shortly the facilities
will also include an industrial robot equipped with a CCD camera
and a range image sensor.

The Group provides office space, access to computer facilities and some
support from graduate assistants for the scholar. In addition to the
grant provided by the Fulbright program, a supplementary salary of at
least US$ 2000/month will be arranged from our research funds.
The University of Oulu also provides housing for the scholar.

The Council for International Exchange of Scholars will advertise the
position in the "Fulbright Scholar Program. Faculty Grants. 1991-92."
This booklet will be distributed to all American colleges and
universities in March/April 1990.

The application period for the grant will end on September 15, 1990.


Prof. Matti Pietikainen
Head, Computer Vision Group

Computer Vision Group				email: mkp@steks.oulu.fi
Dept. of Electrical Engineering			tel:  +358-81-352765
University of Oulu				fax:  +358-81-561278
SF-90570 Oulu, Finland

------------------------------

Date: Fri, 6 Apr 90 11:56:35 +0200
From: pkahn@ads.com (Philip Kahn)
Subject: CALL FOR PARTICIPATION: AAAI-90 Workshop on Qualitative Vision

Sunday, July 29 1990      Boston, Massachusetts

Qualitative descriptions   of  the visual  environment   are receiving
greater  interest in   the  computer  vision   community.  This recent
increase in interest  is partly due  to   the difficulties that  often
arise in  the   practical application  of  more  quantitative methods.
These quantitative  approaches tend  to be computationally  expensive,
complex and brittle.  They require constraints which limit generality.
Moreover inaccuracies in  the input  data do  not often  justify  such
precise methods.    Alternatively,   physical  constraints  imposed by
application domains  such as    mobile robotics and real-time   visual
perception  have prompted  the exploration  of  qualitative mechanisms
which require less  computation, have better response time,  focus  on
salient and relevant aspects of the environment, and use environmental
constraints more effectively.

The  one-day AAAI-90  Workshop on  Qualitative  Vision seeks  to bring
together   researchers  from different   disciplines  for   the active
discussion  of  the   technical  issues and   problems related  to the
development  of  qualitative  vision   techniques  to  support  robust
intelligent  systems.     The  Workshop will   examine  aspects of the
methodology, the  description of  qualitative  vision  techniques, the
application of qualitative techniques  to visual domains  and the role
of qualitative vision  in  the building of robust intelligent systems.
Topics to be discussed include:
 o  What  is Qualitative  Vision?   (e.g., definitions, properties,
    biological/psychophysical models or bases)
 o  Qualitative Visual Features  and their Extraction  (e.g., 2D/3D
    shape, depth, motion)
 o  High level Qualitative Vision  (e.g., qualitative 2D/3D models,
    properties, representations)
 o  Qualitative Vision and Intelligent Behavior  (e.g., navigation,
    active or directed perception, hand-eye coordination, automated
    model building)

Since the number  of participants is limited  to under 50, invitations
for participation will be  based on  the review of  extended technical
abstracts  by  several  members of  the  Qualitative  Vision  research
community.  The  extended abstract  should  address one  of the  above
topic  areas,  be   3 to  5 pages  in   length (including figures  and
references), and it should begin with the title and author name(s) and
address(es).  Extended abstracts   (6   copies)  should be  sent,   by
*April 15, 1990*, to:
                      William Lim
                      Grumman Corporation
                      Corporate Research Center
                      MS A01-26
                      Bethpage, NY 11714
Decisions on acceptance of abstracts will be made by May 15, 1990.

ORGANIZING COMMITTEE:
* William  Lim,   Grumman     Corporation,    Corporate   Research Center,
  (516) 575-5638 or    (516) 575-4909,  wlim@ai.mit.edu   (temporary)   or
  wlim@crc.grumman.com (soon)
* Andrew  Blake,  Department of  Engineering Science,  Oxford  University,
  ab@robots.ox.ac.uk
* Philip Kahn, Advanced Decision Systems, (415) 960-7457, pkahn@ads.com
* Daphna   Weinshall, Center for   Biological Information Processing, MIT,
  (617) 253-0546, daphna@ai.mit.edu

------------------------------

Date: 29 Mar 90 00:15:16 GMT
From: ttidca.TTI.COM!hollombe%sdcsvax@ucsd.edu (The Polymath)
Subject: Call for votes:  comp.robotics
Followup-To: news.groups
Organization: Citicorp/TTI, Santa Monica

This is the call for votes on the creation of COMP.ROBOTICS .

     Name:     comp.robotics

     Moderation:  Unmoderated

     Charter:  The discussion and exchange of information on all
	       aspects of real-world robots and their applications --
               industrial, personal and experimental.

To cast your vote:

DO NOT post your vote or Followup this article (followups are directed to
news.groups).  Send it to me by replying to this article or via e-mail to
the address below.  If possible, include "comp.robotics YES" or
"comp.robotics NO" in the Subject: line as appropriate (put it in the
message body too).  The polls are now open and will remain so through the
month of April.  On May 1st I will tally the responses and post the
results and vote summaries to news.groups and news.announce.newgroups.


A little electioneering, while I'm at it:

The response to the call for discussion has been 100% favorable.  However,
I'd like to see this group created cleanly and unambiguously.  Please be
sure that _all_ persons interested at your site send their "YES" votes to
me for tallying. ("NO" voters are on their own. (-: [Yes, of course I'll
tally them too].).


The Polymath (aka: Jerry Hollombe, M.A., CDP, aka: hollombe@ttidca.tti.com)
Citicorp(+)TTI                                    Illegitimis non
3100 Ocean Park Blvd.   (213) 450-9111, x2483       Carborundum
Santa Monica, CA  90405 {csun | philabs | psivax}!ttidca!hollombe

------------------------------

End of VISION-LIST
********************

Vision-List-Request@ADS.COM (Vision-List moderator Phil Kahn) (04/12/90)

Vision-List Digest	Wed Apr 11 15:02:18 PDT 90

 - Send submissions to Vision-List@ADS.COM
 - Send requests for list membership to Vision-List-Request@ADS.COM

Today's Topics:

 Info wanted about 3D reconstruction
 Looking for a frame grabber board (MacIIx)
 Post-Doctoral Research Positions
 Research Associate Position In Robot Vision
 Re:  Job wanted in Computer Vision area
 Neural Network Chips

----------------------------------------------------------------------

Date: Tue, 10 Apr 90 10:57 EST
From: V079SPF5@ubvmsc.cc.buffalo.edu
Subject: Info wanted about 3D reconstruction

Dear Colleagues,

I have just begun studying 2D and 3D image reconstruction and am particularly
interested in cone-beam algorithms.  I would be deeply grateful if you could
recommend some papers (especially good review papers) to me.

Also, it is said that there is a lot of public-domain software on the network.
People can get the software using FTP (File Transfer Protocol). Does anyone
know of such software for image reconstruction in C?

Thank you very much!  Best wishes. 

Sincerely yours,
Wang, Ge

Email Addr:	ge@sun.acsu.buffalo.edu
		V079SPF5@UBVMS.BITNET

------------------------------

Date: Wed, 11 Apr 90 11:20 N
From: David Roessli <ROESSLI%CGEUGE52.BITNET@CUNYVM.CUNY.EDU>
Subject: Looking for a frame grabber board (MacIIx)

Hello everybody,

      We are looking for a frame grabber board (color), for capturing,
processing and displaying color images on a Macintosh IIx.

The main features would be
    -  "real-time" capture from color video cameras and VCRs
        (grab speed of 1/25s or more).
    -   Multiple input connections (PAL).
    -  "genLock" output (CCIR RGB, PAL 50Hz preferred).
    -  High pixel resolution (something >= 768x512).
    -  Graphic/text overlay.
    -  Supported by TIFF24-compatible software packages (Studio/8,
        PhotoShop, ..).

    Any suggestions, proposals, comments, experiences, criticisms,
         ideas and invitations will receive a warm welcome !

David C. Roessli                    Email: roessli@sc2a.unige.ch  (preferred)
Dpt. Anthropologie et Ecologie             roessli@CGEUGE52.BITNET
University of Geneva                       david@scsun.unige.ch
12, rue Gustave-Revilliod           Phone: +41(22)436.930
CH-1227   SWITZERLAND               Fax:   +41(22)3000.351

'Any program that has been fully debugged is probably obsolete' [Murphy et al.]

------------------------------

Date: Mon, 9 Apr 90 11:31:50 CDT
From: Dan Kersten <kersten@eye.psych.UMN.EDU>
Subject: Post-Doctoral Research Positions

                    UNIVERSITY OF MINNESOTA

               POST-DOCTORAL RESEARCH POSITIONS

Two research positions are available to study the linkages between the
initial stages of human perception and later recognition. The
research uses psychophysical and computational methods to understand
these problems.

Applicants must have a Ph.D.  Background in computer modeling,
psychoacoustics, visual psychophysics, perception,  or supercomputers
is highly desirable.   Applicants  capable of forging links between
audition and vision will be given  consideration. The research will
be conducted at the Center for the Analyses of Perceptual
Representations (CAPER) at the University of Minnesota .  This Center
encompasses four vision laboratories and one hearing laboratory in
the Psychology and Computer Science departments, and includes ample
facilities for  simulation and experimental studies. Center faculty
members are: Irving Biederman, Gordon Legge, Neal Viemeister, William
Thompson, and Daniel Kersten. Salary level:  $26,000 to $32,000
depending on the candidate's qualifications and experience.
The appointment is a 100%-time, 12-month position as post-doctoral
fellow. (The appointment may be renewable, contingent on satisfactory
performance and AFOSR funding.)  Starting date is July 1, 1990 or as
soon as possible.

Candidates should submit a vita, three letters of reference,
representative reprints and preprints, and a statement of long-term
research interests to:

	Professor Irving Biederman, 
	Department of Psychology, 
	University of Minnesota, 
	75 East River Road, 
	Minneapolis, Minnesota, 55455. 

Applications must be received by June 15, 1990.

The University of Minnesota is an equal opportunity educator and
employer and specifically invites and encourages applications from
women and minorities.

------------------------------

Date: Mon, 9 Apr 90 15:18:14 EDT
From: Jean Gray <jean@csri.toronto.edu>
Subject: Research Associate Position In Robot Vision

UNIVERSITY OF TORONTO
RESEARCH ASSOCIATE IN ROBOT VISION

The Government of Canada has established a Network of Centres
of Excellence named IRIS (Institute for Robotics and Intelligent
Systems), with one of its projects ("Active Vision for Mobile
Robots") based in the Department of Computer Science at the
University of Toronto.   A research associate position is available,
funded by this project, with funding guaranteed for up to four years.

The successful applicant must hold a PhD in Computer Science or
Electrical Engineering with specialty in areas related to robot vision,
and must possess a strong research record.  Experience with stereo-vision
robot heads would be an important asset.  Ideal candidates will have
broad interests and talents across such areas as biological models
of vision and motor control, computational vision and image understanding,
attention and active perception, robot navigation, and planning.

Applications should be sent by May 21, 1990 to:-
Professor Derek G. Corneil, Chairman
Department of Computer Science
University of Toronto
Toronto, Ontario
M5S 1A4, Canada.

In accordance with Canadian Immigration regulations, priority will be
given to Canadian citizens and permanent residents of Canada.

The University of Toronto encourages both women and men to apply for
positions.

------------------------------

From: Nora Si-Ahmed <nora@ral.rpi.edu>
Date: Mon, 9 Apr 90 14:08:27 EDT
Subject: Re:  Job wanted in Computer Vision area

Hi,

I am seeking a researcher position in the areas of Computer Vision, Pattern 
Recognition, and Artificial Intelligence.
I am, for the time being, a visiting scholar doing a post-doc at RPI 
in the RAL lab. 
I will be available (and jobless) next July. 
I would like to find a job in either the USA, Canada, France (I speak French
fluently and graduated there), or the UK. 
My resume will be sent on request.

Thanks a lot
Nora

Phone num: 518-276-8042 & 276-2973 (work)
	   518-274-8735 (home)

nora@ral.rpi.edu

Nora Si-Ahmed 
RPI, CII8015
Troy NY 12180

------------------------------

Date: Mon, 9 Apr 90 14:42:29 CDT
From: shriver@usl.edu (Shriver Bruce D)
Subject: Neural Network Chips

There are several researchers who are using analog VLSI in vision
research, e.g. Carver Mead at CalTech comes to mind.  I thought 
the posting might also identify others.

I am interested in learning what experiences people have had using
neural network chips.  In an article that Colin Johnson did for PC
AI's January/February 1990 issue, he listed the information given
below about a number of NN chips (I've rearranged it in
alphabetical order by company name).  This list is undoubtedly
incomplete (no efforts at universities and industrial research
laboratories are listed, for example) and may have inaccuracies in
it.

Such a list would be more useful if it contained the name, address,
phone number, FAX number, and electronic mail address of a contact
person at each company.

Information about the hardware and software support (interface and
coprocessor boards, prototype development kits, simulators,
development software, etc.) is missing.

Additionally, pointers to researchers who are planning to or have
actually been using these or similar chips would be extremely
useful. I am interested in finding out the range of intended
applications.

Could you please send me:

  a) updates and corrections to the list
  b) company contact information
  c) hardware and software support information
  d) information about plans to use or experiences with having used 
     any of these chips (or chips that are not listed)

In a few weeks, if I get a sufficient response, I will resubmit an
enhanced listing of this information.

Thanks,
Bruce Shriver (shriver@usl.edu) 
=================================================================

Company:       Accotech
Chip Name:     AK107
Description:   an Intel 8051 digital microprocessor with its on-
               chip ROM coded for neural networks
Availability:  available now

Company:       Fujitsu Ltd.
Chip Name:     MB4442
Description:   one neuron chip capable of 70,000 connections per
               second
Availability:  available in Japan now

Company:       Hitachi Ltd.
Chip Name:     none yet
Description:   information encoded in pulse trains
Availability:  experimental

Company:       HNC Inc.
Chip Name:     HNC-100X
Description:   100 million connections per second
Availability:  Army battlefield computer

Company:       HNC
Chip Name:     HNC-200X
Description:   2.5 billion connections per second
Availability:  Defense Advanced Research Projects Agency (DARPA)
               contract

Company:       Intel Corp
Chip Name:     N64
Description:   2.5 connections per second 64-by-64-by-64 with
               10,000 synapses
Availability:  available now

Company:       Micro Devices
Chip Name:     MD1210
Description:   fuzzy logic combined with neural networks in its
               fuzzy comparator chip
Availability:  available now

Company:       Motorola Inc.
Chip Name:     none yet
Description:   "whole brain" chip models senses, reflex, instinct-
               the "old brain"
Availability:  late in 1990

Company:       NASA, Jet Propulsion Laboratory (JPL)
Chip Name:     none yet
Description:   synapse is charge on capacitors that are refreshed
               from RAM
Availability:  experimental

Company:       NEC Corp.
Chip Name:     uPD7281
Description:   a data-flow chip set that NEC sells on PC board
               with neural software
Availability:  available in Japan

Company:       Nestor Inc.
Chip Name:     NNC
Description:   150 million connections per second, 150,000
               connections
Availability:  Defense Dept. contract due in 1991

Company:       Nippon Telephone and Telegraph (NTT)
Chip Name:     none yet
Description:   massive array of 65,536 one-bit processors on 1024
               chips
Availability:  experimental

Company:       Science Applications International. Corp.
Chip Name:     none yet
Description:   information encoded in pulse trains
Availability:  Defense Advanced Research Projects Agency (DARPA)
               contract

Company:       Syntonic Systems Inc.
Chip Name:     Dendros-1
               Dendros-2
Description:   each has 22 synapses; two are required, but any number
               can be used
Availability:  available now

------------------------------

End of VISION-LIST
********************

Vision-List-Request@ADS.COM (Vision-List moderator Phil Kahn) (06/09/90)

Vision-List Digest	Fri Jun 08 16:37:23 PDT 90

 - Send submissions to Vision-List@ADS.COM
 - Send requests for list membership to Vision-List-Request@ADS.COM

Today's Topics:

 List of industrial vision companies
 Curvature for 2-D closed boundary description
 Array Technologies Color Scanner
 CVGIP table of contents: Vol. 51, No. 2, August 1990
 ECEM6 1991 Eye-Movements Conference, First Call
 Evans & Sutherland PS390 FOR SALE!!

----------------------------------------------------------------------

Date: 4 Jun 90 19:45:50 GMT
From: Brian Palmer <bpalmer@BBN.COM>
Subject: List of industrial vision companies
Organization: Bolt Beranek and Newman Inc., Cambridge MA

Well, Samir's and my attempts to generate a list of Industrial Vision
companies are going slowly.  Please continue to send info!  I will post
the summary.

The final posting will contain all related information but for now
here is the list of companies mentioned.

	Automatix
	Cognex
	Datacube
	View Engineering (still alive?)
	Adept 
	GMF
	Intelledex
	Focus Systems

Please send more names and a little info on the company.  Also, if you know
anything about the above (some descriptions were short) please pass that
along too.

Thanks,
Brian

------------------------------

Date: 6 Jun 90 23:17:11 GMT
From: zeitzew@CS.UCLA.EDU (Michael Zeitzew)
Subject: Curvature for 2-D closed boundary description
Organization: UCLA Computer Science Department

From:
O. Mitchell and T. Grogan
"Global and Partial Shape Discrimination for Computer Vision"
Optical Engineering,
     Volume 23, Number 5, September 1984. pg.484-491

A portion (section on the Fourier-Mellin Correlation) can 
be summarized as follows :

+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
If the boundary function $z(i) = x(i) + \jmath y(i)$ is twice differentiable,
then the curvature function, denoted by $\kappa(i)$, is given by :
\[ \kappa(t) = \partial / \partial t \;\; \arctan \dot{y}(t) / \dot{x}(t) \]
A discrete approximation of the curvature is :
\begin{eqnarray}
\kappa(i) = \arctan \frac{y_{i} - y_{i-1}}{x_{i} - x_{i-1}} - \\
\arctan \frac{y_{i-1} - y_{i-2}}{x_{i-1} - x_{i-2}} \;\;i=0,\ldots,N-1 \nonumber
\end{eqnarray}

The curvature function provides a contour description which is invariant
under translation and rotation.....
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
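
For concreteness, a small NumPy sketch of the discrete turning-angle
approximation quoted above (my own code, not from the paper), applied to a
closed boundary with wraparound at the endpoints:

import numpy as np

def discrete_curvature(x, y):
    """Turning angle at each point of the closed boundary (x[i], y[i])."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    # Angle of each boundary segment (i-1 -> i), wrapping around for closure.
    theta = np.arctan2(y - np.roll(y, 1), x - np.roll(x, 1))
    # kappa(i) = theta(i) - theta(i-1), wrapped into [-pi, pi).
    kappa = theta - np.roll(theta, 1)
    return (kappa + np.pi) % (2 * np.pi) - np.pi

# Example: a square traced counterclockwise turns by +pi/2 at each corner.
xs = [0, 1, 2, 2, 2, 1, 0, 0]
ys = [0, 0, 0, 1, 2, 2, 2, 1]
print(np.round(discrete_curvature(xs, ys), 3))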

Does anyone know why the definition of curvature is given as the
partial derivative with respect to "time", and not "arc length" as
one would see in a first year Calculus book ? What are the
assumptions/implications here ?

If this is a stupid/obvious/boring question, I'm sorry. If you care
to e-mail me a response, thank you in advance.

Mike Zeitzew
zeitzew@lanai.cs.ucla.edu

------------------------------

Date: Thu, 7 Jun 90 10:07:32 EDT
From: marra@jargon.whoi.edu (Marty Marra)
Subject: Array Technologies Color Scanner

Does anyone have experience working with an Array Technologies Color
Scanner?  Ideally I'd like to control their "image server" from my Sun
via GPIB. Any general info about these scanners would also be greatly
appreciated.

Thanks.

 /\/\   /\/\  Marty Marra, Woods Hole Oceanographic Institution  (WHOI DSL)
/    \ /    \ Woods Hole, MA 02543 "marra@jargon.whoi.edu" (508)457-2000x3234

------------------------------

Date: Fri, 1 Jun 90 13:08:02 -0700
From: graham@cs.washington.edu (Stephen Graham)
Subject: CVGIP table of contents: Vol. 51, No. 2, August 1990

COMPUTER VISION, GRAPHICS, AND IMAGE PROCESSING
Volume 51, Number 2, August 1990

CONTENTS

A. Huertas, W. Cole, and R. Nevatia.  Detecting Runways in Complex Airport
	Scenes, p. 107.

Maylor K. Leung and Yee-Hong Yang.  Dynamic Strip Algorithm in Curve Fitting, 
	p. 146.

Josef Bigun.  A Structure Feature for Some Image Processing Applications Based
	on Spiral Functions, p. 166.

Lawrence O'Gorman.  k x k Thinning, p. 195.

ANNOUNCEMENTS, p. 216.

ABSTRACTS OF PAPERS ACCEPTED FOR PUBLICATION, p. 217.


------------------------------

Date:         Fri, 08 Jun 90 15:36:42 +0200
From: ECEM6 <FPAAS91%BLEKUL11.BITNET@CUNYVM.CUNY.EDU>
Subject:      ECEM6 1991 Eye-Movements Conference, First Call
Organization: 6th European Conference on Eye Movements

        6th European Conference on Eye Movements
             15 - 18 september 1991
                 Leuven, Belgium

          First Announcement & Call for Papers

      AIM

      This is the sixth meeting of European scientists actively
      involved in eye movement research and is the successor to the
      first  meeting  held   in  Bern   (Switzerland)  in  1981,   an
      initiative  of the  European  Group for  Eye  Movement Research
      (for more information  on the  Group: R. Groner,  Laupenstrasse
      4, CH-3008 Bern, Switzerland). The aim of the Conference is  to
      promote  the  wider  exchange  of  information  concerning  eye
      movement research in all its  diverse fields, and to  encourage
      contact  between  basic  and applied  research.  The Conference
      will  be of interest  to psychologists, educational scientists,
      neurophysiologists,  medical doctors, bioengineers, ergonomists
      and others interested in visual science.

      VENUE

      The Conference  will be held  at the Department  of Psychology,
      University  of Leuven, Belgium, and  many presentations will be
      given in the Michotte Lecture Hall of the Department.

      CALL FOR PAPERS AND POSTERS

      If you wish to present a paper or a  poster, please send a one-
      page abstract  by January  31, 1991,  at the  very latest.  The
      selection  of posters or papers to be presented will be made by
      a  committee including  all  preceding  organizers of  E.C.E.M.
      (with notifications  to the  proposers in  April 1991).  Papers
      and   posters   on   the    following   topics   are   welcome:
      Neurophysiology   of   eye    movements,   oculomotor   system,
      measurement   techniques,  eye  movements   in  perceptual  and
      cognitive  tasks,   eye  movements   and  reading,   oculomotor
      disorders,  and applied  research.  Papers integrating  sensory
      sciences  and higher-order studies will  be favored. Papers and
      posters  should  be in  English,  and the  presentation  of the
      paper should not exceed 20 minutes.

      CONFERENCE PUBLICATIONS

      If your  paper is selected for presentation,  you will be asked
      to  resubmit  your  abstract  in  the form  specified  for  the
      publication in the  Congress Proceedings.  This volume will  be
      made  available at the  beginning of the  Conference. An edited
      volume   of  selected  papers  will   be  published  after  the
      Conference  by  North-Holland  Publishing  Company in  the  new
      series  "Studies  in  Visual  Information  Processing"  (Series
      Editors: R. Groner & G. d'Ydewalle).

      SECOND AND FINAL CALL

      The second and  final call will be  forwarded to all those  who
      submitted a paper and poster abstract. If you do  not submit an
      abstract and you  want to receive  the final call, please  give
      your address in the attached  information sheet. The final call
      will  include  more  details  on  the  programmes  as  well  as
      information on registration and accommodation.

      The organizers of  the conference will  be happy to answer  any
      questions  you  may have.  Our  address is  given  here at  the
      bottom line of the letter.

         ___   __
        /     /               6th European Conference on Eye Movements
       /---  /                         15 - 18 september 1991
      /___  /____                          Leuven, Belgium
     ___   __  __  l
    /     / l /  l l___        Laboratory of Experimental Psychology
   /---  /  l/   l l   l           Katholieke Universiteit Leuven
  /___  /        l l___l             B-3000     Leuven, Belgium

  FPAAS91@BLEKUL11.EARN     Presidents: Gery d'Ydewalle & Eric De Corte
  tel: (32)(16) 28 59 65         Organizer: Johan Van Rensbergen
  fax: (32)(16) 28 60 99

------------------------------

Date: Mon, 4 Jun 90 15:48:52 CDT
From: ssmith (Sean Smith)
Subject: Evans & Sutherland PS390 FOR SALE!!

Evans & Sutherland PS390 (includes):
                        Graphics Control processor with 2 dual-sided Floppies
                        2 Megabytes Memory
                        Display Processor
                        19" Color Raster Monitor
                        RS-232C interface
                        PS390 System Firmware, Host Software and License
                        5 Volume user set
                        Ethernet Interface
                        1 Megabyte Memory
                        Keyboard w/led display
                        Control Dials w/LED display
                        Tablet 6" x 6" (active area)

*****Please contact Sean Smith
                    (713)363-8494
                    ssmith@bcm.tmc.edu
Please pass this message on to interested users.



------------------------------

End of VISION-LIST
********************

Vision-List-Request@ADS.COM (Vision-List moderator Phil Kahn) (06/15/90)

Vision-List Digest	Thu Jun 14 11:11:27 PDT 90

 - Send submissions to Vision-List@ADS.COM
 - Send requests for list membership to Vision-List-Request@ADS.COM

Today's Topics:

 Focus of attention in human and computer vision
 Computer Controllable Lenses
 Work at BMW
 RE: List of industrial vision companies..
 CANCELLED:  ANN Workshop, Atlantic City, Jun. 17

----------------------------------------------------------------------

Date: 13 Jun 90 16:14:43+0200
From: milanese ruggero <milanese@cuisun.unige.ch>
Subject: Focus of attention in human and computer vision

Hello,

	I am at the beginning of my PhD work concerning the subject 
of focus of attention applied to visual perception. Good sources of 
information about the underlying mechanisms seem to be provided by 
psychologists and, to some extent, by neurophysiologists. Rather
than analyses of elementary phenomena, what I am most interested in
are frameworks and theories that try to explain the global process
of attention in vision. Since I am a computer scientist, I am also
interested in applying these concepts in a working machine vision
system.
	Therefore, I would be grateful for any information, reference,
discussion or pointers about work done in this field. 

	Many thanks,
                                       Ruggero Milanese


E-mail:	  milanese@cuisun.unige.ch

Address:  Centre Universitaire d'Informatique
          12, rue du Lac
          1207 - Geneve
          Switzerland

------------------------------

Date: Tue, 12 Jun 90 08:46:42 BST
From: Alan McIvor <bprcsitu!alanm@relay.EU.net>
Subject: Computer Controllable Lenses

Hi,
	We are currently looking for a lens for our vision system with
computer controllable focus, focal length, and aperture. Do any of you
know of a source of such lenses? We have found many motorized lenses 
but most have auto-apertures and no feedback of settings. 
	I recall several years ago that a company called Vicon made such
a lens but I don't have any details. Anybody know how to get hold of
them?

Thanks,

Dr Alan M. McIvor		
BP International Ltd    	ukc!bprcsitu!alanm
Research Centre Sunbury	    	alanm%bprcsitu.uucp@uk.ac.ukc
Chertsey Road			bprcsitu!alanm@relay.EU.NET
Sunbury-on-Thames       	uunet!ukc!bprcsitu!alanm
Middlesex TW16 7LN		Tel: +44 932 764252
U.K.                            Fax: +44 932 762999

[ Please post responses to the List!!
		phil...	]

------------------------------

Date: Fri, 8 Jun 90 10:24:13 +0200
From: jost@bmwmun.ads.com (Jost Bernasch)
Subject: Work at BMW

[ I received this as part of a correspondence, and I thought it would
  be of general interest (with permission to post from Jost).
				phil...	]

I am with a research group at BMW and at the Technical University of Munich.
We are developing a self-guided car with automatic lateral control.
The car currently drives on the BMW test route at about 60 mph without
needing white lane markings or similar aids.

We are now working towards a system that is more stable against changes
in brightness and shadows, and we will identify, track and classify
objects (cars, trucks, pedestrians). Furthermore, we are developing an
attentive vision module which automatically focuses attention on
important parts of the image.

Naturally, we are confronted with problems which need adaptive
control (lateral guidance depending on the car's changing status
and the environment; adaptive, intelligently controlled attentive
vision; adapting many parameters in the vision modules; etc.).

Yours
Jost Bernasch, 
BMW AG Muenchen, Dep. EW-13, P.O. BOX 40 02 40, D-8000 Muenchen 40,  Germany
Tel. ()89-3183-2822  	FAX ()89-3183-4767      jost@bmwmun.uucp.dbp.de


------------------------------

Date: Mon, 11 Jun 90 14:49:33 -0500
From: krishnan@cs.wisc.edu (Harikrishnan Krishna)
Subject: RE: List of industrial vision companies..

 It would be great if the addresses were also posted along with the 
 names of the companies.

 Thanks.

 Krishna.

------------------------------

Date: Mon, 11 Jun 90 20:20:56 EDT
From: raja@pixel.cps.msu.EDU
Subject: CANCELLED:  ANN Workshop, Atlantic City, Jun. 17

The following workshop has been CANCELLED.  Any
inconvenience caused is regretted.

                                 Workshop on
              Artificial Neural Networks & Pattern Recognition

                                Sponsored by
        The International Association for Pattern Recognition (IAPR)

                                 Sands Hotel
                          Atlantic City, New Jersey
                                June 17, 1990

------------------------------

End of VISION-LIST
********************

Vision-List-Request@ADS.COM (Vision-List moderator Phil Kahn) (07/06/90)

Vision-List Digest	Thu Jul 05 10:47:27 PDT 90

 - Send submissions to Vision-List@ADS.COM
 - Send requests for list membership to Vision-List-Request@ADS.COM

Today's Topics:
  IEEE Workshop on Directions in Automated ``CAD-Based'' Vision

----------------------------------------------------------------------

Date: Fri, 29 Jun 90 08:59:19 EDT
From: Dr Kevin Bowyer <kwb@SOL.USF.EDU>
Subject: IEEE Workshop on Directions in Automated ``CAD-Based'' Vision

    IEEE Workshop on Directions in Automated ``CAD-Based'' Vision

      June 2-3, 1991   Maui, Hawaii  (just prior to CVPR '91)

The purpose of this workshop is to foster dialogue and debate which 
will more sharply focus attention on important unsolved problems, so 
that better working solutions can be produced.  The program will 
consist of a small number of submitted papers, ``proponent/respondent'' 
discussions on selected topics, panel sessions, and presentations by
some potential ``consumers'' of machine vision on what they feel are 
important real problems in need of a solution.  Participants should be 
willing (preferably eager) to engage in open discussion, in the best 
collegial spirit, of both their own work and that of others.

A list of possible themes for submitted papers, meant to be suggestive 
rather than exclusive, is:

     Derivation of Vision-Oriented Object Models from CAD Models
      Model-Driven Extraction of Relevant Features from Images
       Strategies for Matching Image Features to Object Models
           Capabilities of Current CAD-to-Vision Systems
           ``Qualitative Vision'' and Automated Learning


Submission of Papers:  Submit three copies of your paper to the program 
chairman to be received on or before January 1, 1991.  Papers should not 
exceed a total of 25 double-spaced pages.  Authors will be notified of
reviewing decisions by March 1, and final versions will be due by April 1.

General Chairman:             
     Linda Shapiro (shapiro@cs.washington.edu)
     Dept. of Computer Science and Engineering
     University of Washington
     Seattle, Washington 98195

Program Chairman:
     Kevin Bowyer (kwb@sol.usf.edu)
     Dept. of Computer Science & Engineering
     University of South Florida
     Tampa, Florida 33620

Program Committee:
    Avi Kak, Purdue University       Joe Mundy, General Electric Corp. R.&D. 
    Yoshiaki Shirai, Osaka Univ.     George Stockman, Michigan State Univ.
    Jean Ponce, Univ. of Illinois    Katsushi Ikeuchi, Carnegie-Mellon Univ.
    Tom Henderson, Univ. of Utah     Horst Bunke, Universitat Berne



------------------------------

End of VISION-LIST
********************

Vision-List-Request@ADS.COM (Vision-List moderator Phil Kahn) (07/12/90)

Vision-List Digest	Wed Jul 11 09:51:04 PDT 90

 - Send submissions to Vision-List@ADS.COM
 - Send requests for list membership to Vision-List-Request@ADS.COM

Today's Topics:

 Help ! Rotated molecules & rotating physics student
 canonical vision book list
 Summary of Responses Received to generate Needle Diagrams
 Camera Mount
 Ideas Needed for Manned Exploration of Moon and Mars
 Character Recognition Bibliography?
 OCR Refs/Software?
 Job vacancies

----------------------------------------------------------------------

Date: 10 Jul 90 10:34:49 GMT
From: uh311ae@LRZSUN4_7.lrz.de (Henrik Klagges)
Subject: Help ! Rotated molecules & rotating physics student
Keywords: Correlation filters, rotated pattern recognition, GA
Organization: LRZ, Bavarian Academy of Sciences, W. Germany

Hi, netwise,
I need help !

The problem:

A Scanning Tunneling Microscope (STM) produces a picture of a flat surface
covered with macromolecules, forming a loose grid or just scattered
around. For simplicity, it is assumed that each molecule has only 3 free
parameters, namely two translational and one rotational.
A single molecule gives a noisy image, so it is desired to combine many
individual molecule images into a single one. Several ways of
accomplishing this task might be possible:

1) Have a clever program walking over the image and saying 'Wow ! That's a
   molecule turned foo degrees and translated bar units, let's add it to our
   data base !' (Ugh).

2) Run a fantastic correlation filter (fcf) over the image that is able
   to recognize the correlation between any (!) rotated and x,y-displaced
   structures and amplify those structures (Does this exist yet? Does one
   exist that matches all affine transformations?).

3) If that is too much, select a "good" molecule, calculate its rotated image
   for each degree, move over the whole image, try to match these 360
   turns with the image, and mark each matching place as occupied
   (calculate 'n crunch forever?). A brute-force sketch of this approach
   appears after this list.

4) Compute an FFT, Hartley, or other integral transform of the image, which
   removes the spatial parameters; then rotate and match the transformed image
   against itself, correlate, amplify, and re-transform (Who knows if that
   works!).
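
A minimal sketch of approach (3), assuming NumPy and SciPy are available;
the helper name rotated_match and the choice of rotating about the centre
of the template patch are illustrative assumptions, not from the posting:

    # Brute-force rotated template matching: rotate the "good" molecule
    # template in angle_step increments, cross-correlate each rotation with
    # the STM image, and keep the best-scoring angle and position.
    import numpy as np
    from scipy.ndimage import rotate
    from scipy.signal import fftconvolve

    def rotated_match(image, template, angle_step=5.0):
        """Return (best_score, best_angle, (row, col)) over all rotations."""
        best = (-np.inf, None, None)
        for angle in np.arange(0.0, 360.0, angle_step):
            t = rotate(template, angle, reshape=False)
            t = t - t.mean()              # zero-mean, so flat regions score ~0
            score = fftconvolve(image, t[::-1, ::-1], mode='same')  # correlation
            idx = np.unravel_index(np.argmax(score), score.shape)
            if score[idx] > best[0]:
                best = (score[idx], angle, idx)
        return best

In practice one would keep every local maximum above a threshold rather than
only the best one, cut out the corresponding patches, rotate them back by the
detected angles, and average them to suppress the noise.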

Questions:

1) Who knows about an fcf?

2) Which methods are suited for the task of matching two images that are
   rotated and/or linearly displaced against each other? (I heard about
   a genetic algorithm from Fitzgerald, Grevenstette et al.)

3) How do you walk over a surface and recognize noisy molecules?

4) ANY hint or comment is desperately welcome!


Thanks a lot

Henrik Klagges

STM group at LMU Munich
EMail: 	uh311ae@LRZSUN4_7.lrz.de
SMail:  Stettener Str. 50, 8210 Prien, FRG


------------------------------

Date: Tue, 10 Jul 90 12:39:23 +0200
From: ronse@prlb.philips.be
Subject: canonical vision book list

I received many references to books on vision, but only one on image
analysis (Serra's first volume). So here is my "canonical" list of
fundamental vision books. Please also send me references for your
preferred books on image processing and analysis.

D.H. Ballard & C.M. Brown: "Computer Vision", Prentice-Hall, 1982.

B.K.P. Horn: "Robot Vision", MIT Press, Cambridge, Mass., USA, 1986.

T. Kanade (ed.): "Three Dimensional Machine Vision", Kluwer Academic Publishers, 1987.

J.J. Koenderink: "Solid Shape", MIT Press, Cambridge, Mass., USA., 1990.

M.D. Levine: "Vision in Man and Machine", McGraw-Hill, New York, USA,
1985.

D. Marr: "Vision", W.H. Freeman & Co., San Francisco, CA, 1982.

P.H. Winston (ed.): "The Psychology of Computer Vision", McGraw-Hill,
1975.

The books by Ballard & Brown and by Horn are "winners". Several people
suggested them.

Christian Ronse

Internet:			ronse@prlb.philips.be
BITNET:				ronse%prlb.philips.be@cernvax

------------------------------

Date: Fri, 6 Jul 90 17:09:41 EDT
From: ramamoor@cmx.npac.syr.edu (Ganesh Ramamoorthy)
Subject: Summary of Responses Received to generate Needle Diagrams

>From: jonh@tele.unit.no
Subject: Needle Diagrams

If you use Matlab ver. 3.5 you can use the command "quiver"
to generate nice needle diagrams. Unfortunately, ver. 3.5 of Matlab
may still not be available on the SUN. However, it is available for
the VAX. I have taken the m-file "quiver.m" from Matlab ver. 3.5
running on a VAX and used it with old Matlab versions on a SUN.
This works fine. If you would like to try this out and have
problems getting hold of "quiver.m", let me know and I will
mail you a copy.
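
A minimal sketch of the same kind of needle diagram in Python, assuming
NumPy and matplotlib (whose quiver function plays an analogous role); the
helper name needle_diagram is illustrative, and u and v are 2-D arrays of
horizontal and vertical flow components:

    import numpy as np
    import matplotlib.pyplot as plt

    def needle_diagram(u, v, step=8):
        """Plot every step-th flow vector as a needle."""
        rows, cols = u.shape
        y, x = np.mgrid[0:rows:step, 0:cols:step]
        plt.quiver(x, y, u[::step, ::step], v[::step, ::step],
                   angles='xy', scale_units='xy')
        plt.gca().invert_yaxis()   # image convention: row 0 at the top
        plt.show()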

>From: Keith Langley <kl@robots.oxford.ac.uk>
Subject: needles

I did it the easy way by looking at FIG format for lines
and piping out a needle diagram that way.

>From: hmueller@wfsc4.tamu.edu (Hal Mueller)
Subject: Needle Diagrams

The graphics package called DISSPLA, sold by Computer Associates (formerly
sold by ISSCO), has an extra-cost option for an automatic code generator.
One of the things this code generator produces is needle plots.  DISSPLA
runs on a Sun 3, and I would presume also on a Sun 4.

Reach them at (800) 645-3003.  The package is expensive and fairly difficult
to learn, but it is extremely powerful and as close to bug-free as any
commercial product I've ever seen.

>From: oskard@vidi.cs.umass.edu
Subject: re: Programs for generating Needle Diagrams

	Hi.  I don't know if you've found any programs for displaying
flow fields yet, but we use KBVision here and it has a system called
the image examiner that displays flow fields among other things.
Anyway, their address is:

	Amerinex Artificial Intelligence Inc.
	274 N. Pleasant St.
	Amherst, MA 01002

	413-256-8941

>From: johnston@george.lbl.gov (Bill Johnston)
Subject: flow field diagrams

The NCAR (National Center for Atmospheric Research, Boulder CO)
graphics library has several routines for displaying flow
fields. It sits on top of GKS, so any GKS package that supports
a Sun should allow you to use the NCAR package.

------------------------------

Date: Sun,  8 Jul 90 12:38:29 -0400 (EDT)
From: "Ethan Z. Evans" <ee0i+@andrew.cmu.edu>
Subject: Camera Mount

Vision Wizards:
	I need to find a 2-degree-of-freedom camera mount (rotation and pitch).
The mobile platform I work on is only about a foot tall, and the top is
cluttered with various hardware etc.  Thus, to give the robot a view of
its world, we need to set the camera up on a pole.  Once it's up there it
needs to be able to turn around and either look down at what the arm is
doing, or out at where the base is going.  The point is, I don't have the
slightest idea where to look for such a device.  If anyone could give me
a starting point as to where to get such a mount, especially one easily
controlled through a PC parallel or serial port, I would be most
grateful.
	
Thanks in advance,
	Ethan Evans
	ee0i@andrew.cmu.edu
	
Disclaimer:  I'm the lab rat, how could *I* have an opinion?

[ Please post responses to the List
		phil...	]

------------------------------

Date: 9 Jul 90 07:15:53 GMT
From: guyton%randvax.UUCP@usc.edu (Jim Guyton)
Subject: Ideas Needed for Manned Exploration of Moon and Mars
Keywords: PROJECT OUTREACH
Organization: Rand Corp., Santa Monica, Ca.

        PROJECT OUTREACH

        Ideas Needed for Manned Exploration of Moon and Mars.

        NASA is seeking innovative approaches to mission concepts and
        architectures, as well as technologies that could cut costs and
        improve mission schedule and performance.

        The RAND Corporation will provide an independent assessment
        of all suggestions.

        The procedure for submitting ideas is simple. For an information
        kit call 1-800-677-7796.  Call now.  The deadline for submissions
        is August 15, 1990.

------------------------------

Date: Mon, 9 Jul 90 19:56:22 EDT
From: atul k chhabra <Atul.Chhabra@UC.EDU>
Subject: Character Recognition Bibliography?

I am looking for a character recognition bibliography. I am interested in all
aspects of character recognition, i.e., preprocessing and segmentation,
OCR, typewritten character recognition, handwritten character recognition,
neural network based recognition, statistical and syntactic recognition,
hardware implementations, and commercial character recognition systems.

If someone out there has such a bibliography, or something that fits a part
of the above description, I would appreciate receiving a copy. Even if you
know of only a few references, please email me the references.

Please email the references or bibliography to me. I will summarize on the 
vision-list. Thanks,

Atul Chhabra
Department of Electrical & Computer Engineering
University of Cincinnati, ML 030
Cincinnati, OH 45221-0030

Phone: (513)556-6297
Email: achhabra@uceng.uc.edu

------------------------------

Date: Wed, 11 Jul 90 14:57:00 +0200
From: nagler@olsen.ads.com (Robert Nagler)
Subject: OCR Refs/Software?
Organization: Olsen & Associates, Zurich, Switzerland
Keywords: OCR, Pattern Recognition

Could someone send me good reference(s) on OCR/pattern recognition?
Pointers to PD software (source) would be nice, too.  Thanks.

Rob nagler@olsen.uu.ch

------------------------------

Date: Wed, 4 Jul 90 11:29:00 WET DST
Subject: job vacancy
From: J.Illingworth@ee.surrey.ac.uk

    UNIVERSITY OF SURREY: Dept of Electronics and Electrical Engineering.

    *********************************************************************

        RESEARCH FELLOW IN COMPUTER VISION AND IMAGE PROCESSING

    *********************************************************************



  Research Fellows are required for projects in Computer Vision. 
  The projects are concerned with the following problems: 

    *  robust 2D and 3D shape representation and  analysis; 
    *  high-level scene interpretation; 
    *  automatic inspection of loaded printed circuit boards; 
    *  relaxation labelling and neural net computation in vision 
       by associative reasoning. 

  The projects will be carried out within an active Vision, Speech and 
  Signal Processing research group which comprises about 35 members. 
  The group has extensive computing resources including SUN Sparc stations, 
  VAX and Masscomp computers as well as specialised image processing 
  facilities.

  The successful candidates will be required to develop, implement in 
  software  and experimentally evaluate computer vision algorithms. 
  Applicants for these posts should have a degree in mathematics, 
  statistics, electronics, computer science, artificial intelligence or 
  physics. Previous experience in computer vision, image analysis, 
  knowledge based methods or pattern recognition will be an advantage. 
  One of the posts may be reserved for an applicant able to provide 
  hardware and software support for the Group across projects. 

  The appointments will initially be for two years with a salary in the range
  10458 - 16665 pa (under review) depending upon age, qualifications and
  experience, with superannuation under USS conditions. Applications in the
  form of a curriculum vitae (3 copies) including the names and addresses
  of two referees should be sent to the Personnel Office (JLG), University
  of Surrey, Guildford, Surrey GU2 5XH by 30 June 1990, quoting reference


  Further information may be obtained from Dr J Kittler, Department of 
  Electronic and Electrical Engineering on (0483) 509294 or from 
  Dr J Illingworth on (0483) 571281 ext. 2299. 



    PROJECT DESCRIPTION

 DETECTION OF LINEAR FEATURES IN IMAGE DATA

 The aim of the project is to investigate and develop the Hough
 transform and associated pre- and post-processing techniques, with
 application to the problem of detecting linear image features in noisy
 and cluttered backgrounds of changing polarity. The emphasis of the approach
 will be on statistical hypothesis testing and robust estimation methods.
 An important component of the research will be to develop the theory and 
 methodology for the design of post processing filters to enhance the Hough 
 transform performance. The problem of detecting higher level features such as 
 corners and parallels using the same framework will also be considered.
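
 For reference, a minimal sketch of the standard (rho, theta) Hough transform
 for straight lines, assuming NumPy; edge_points is a list of (row, col) edge
 pixels and the helper name hough_lines is illustrative:

    # Accumulate votes in (rho, theta) space; peaks in the accumulator
    # correspond to lines rho = col*cos(theta) + row*sin(theta).  The
    # postprocessing filters studied in the project would then be applied
    # to sharpen and validate these peaks.
    import numpy as np

    def hough_lines(edge_points, shape, n_theta=180):
        rows, cols = shape
        max_rho = int(np.hypot(rows, cols))
        thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
        cos_t, sin_t = np.cos(thetas), np.sin(thetas)
        acc = np.zeros((2 * max_rho + 1, n_theta), dtype=np.int32)
        for r, c in edge_points:
            rho = c * cos_t + r * sin_t                    # distance per theta
            rho_idx = np.round(rho + max_rho).astype(int)  # shift indices >= 0
            acc[rho_idx, np.arange(n_theta)] += 1
        return acc, thetas, max_rho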


 LOCATION AND INSPECTION WITH RANGE DATA

 The project is concerned with the problem of segmenting depth image data into
 parametric surfaces. Robust hypothesis testing methods, of which the Hough
 transform is just one example, will be investigated in this context.
 The research issues to be addressed include the problems of surface 
 parameterisation, efficient transform calculation and reliable transform 
 space analysis. Other approaches to range data segmentation 
 such as energy minimisation methods and knowledge based methods will be 
 investigated. 
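
 As one concrete instance of a robust fit to range data, here is a minimal
 RANSAC plane-fitting sketch (RANSAC is just one robust estimator of this
 family; the project itself emphasises Hough-style and energy-minimisation
 methods). It assumes NumPy and an (N, 3) array of (x, y, z) samples from the
 depth image; the helper name ransac_plane is illustrative:

    import numpy as np

    def ransac_plane(points, n_iter=500, tol=0.01, rng=None):
        """Return (normal, d, inlier_mask) for the best plane n.x = d found."""
        if rng is None:
            rng = np.random.default_rng()
        best_inliers = np.zeros(len(points), dtype=bool)
        best_plane = (None, None)
        for _ in range(n_iter):
            sample = points[rng.choice(len(points), 3, replace=False)]
            n = np.cross(sample[1] - sample[0], sample[2] - sample[0])
            norm = np.linalg.norm(n)
            if norm < 1e-12:                  # degenerate (collinear) sample
                continue
            n = n / norm
            d = n @ sample[0]
            inliers = np.abs(points @ n - d) < tol
            if inliers.sum() > best_inliers.sum():
                best_inliers, best_plane = inliers, (n, d)
        return best_plane[0], best_plane[1], best_inliers

 Each accepted surface would then be removed from the point set and the
 procedure repeated to segment the remaining data.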


 AUTOMATIC INSPECTION OF LOADED PRINTED CIRCUIT BOARDS 

 The aim of the project is to investigate and develop machine vision 
 techniques for advanced automated inspection of loaded printed circuit 
 boards (PCBs) and of surface-mounted ceramic substrates, with a view to
 enabling real-time fault correction in order to increase production yields,
 maintain and optimise product quality and maximise manufacturing process
 control capabilities. A particular emphasis will be on two aspects of the
 inspection task: i) the use of 3D sensor data to address the problem of 
 inspecting soldered joints, active device pins, component leads and 
 the mounting of special devices, ii) development of knowledge based 
 approaches to guide and control the above surface-based inspection problem
 and to verify component identity. The strategic objectives of the project 
 include the following: Development of 3D scene modelling and surface 
 segmentation methods specifically in the context of the loaded PCB inspection 
 domain; Development of surface inspection approaches; Representation and 
 application of geometric, attribute and relational models of objects and 
 their arrangements in the application domain of electronic system assembly.  
 The research problems to be addressed in order to meet these objectives are 
 generic in nature. It is therefore anticipated that the research results
 will have a bearing on other application areas of computer vision.

 The proposed research will advance the state of the art in automatic loaded 
 PCB inspection by:
 1 evaluating  existing 3D surface segmentation methods in the 
   context of loaded PCB inspection,
 2 developing robust surface modelling methods,
 3 providing techniques and algorithms for surface inspection, 
 4 enhancing the methods of component identification and 
   verification, and
 5 developing inspection strategies that will allow full integration 
   of automatic printed circuit assembly, inspection and rework.

 VISION BY ASSOCIATIVE REASONING

 The project is concerned with the study of relaxation labelling processes
 in the computer vision context. The aim of the research will be to develop
 and apply existing probabilistic and discrete relaxation algorithms
 to image interpretation problems at intermediate levels of processing
 where the prior world knowledge may most naturally be specified in terms of 
 explicit rules. It will be necessary to develop a suitable interface that 
 facilitates the conversion of such rules into a form that can be used 
 directly by the evidence combining scheme employed in the relaxation 
 process. The work will also involve the development of evidence combining 
 methods for multilevel relaxation, development of hierarchical models
 and corresponding hierarchical relaxation processes. The relationship
 of relaxation processes and neural net computation will be investigated.
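
 A minimal sketch of one probabilistic relaxation iteration in the style of
 Rosenfeld, Hummel and Zucker (an illustrative choice of update rule; the
 project would compare several evidence-combining schemes). It assumes NumPy;
 p holds per-object label probabilities, r pairwise compatibilities in
 [-1, 1], and w neighbour weights whose rows sum to 1:

    import numpy as np

    def relaxation_step(p, r, w):
        """One update of p: (n_objects, n_labels) label probabilities.

        r: (n_objects, n_objects, n_labels, n_labels) compatibilities
        w: (n_objects, n_objects) neighbour weights
        """
        # support q[i, l] = sum_j w[i, j] * sum_m r[i, j, l, m] * p[j, m]
        q = np.einsum('ij,ijlm,jm->il', w, r, p)
        p_new = np.clip(p * (1.0 + q), 0.0, None)   # guard against q < -1
        return p_new / p_new.sum(axis=1, keepdims=True)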

------------------------------

End of VISION-LIST
********************

Vision-List-Request@ADS.COM (Vision-List moderator Phil Kahn) (07/13/90)

Vision-List Digest	Thu Jul 12 10:52:42 PDT 90

 - Send submissions to Vision-List@ADS.COM
 - Send requests for list membership to Vision-List-Request@ADS.COM

Today's Topics:

 Technology Transfer Mailing List
 AAAI information (including workshop info)
 Seventh IEEE Conference on Artificial Intelligence Applications
 IJCAI'91 Call for Participation
 IJCAI'91 Call for Workshops
 IJCAI'91 Call for Tutorials
 IJCAI'91 Call for Awards

----------------------------------------------------------------------

Date: Tue, 10 Jul 90 09:02:36 -0400
From: weh@SEI.CMU.EDU
Subject: Technology Transfer Mailing List
Organization: Software Engineering Institute, Pittsburgh, PA
Keywords: technology transfer, communication, mailing list

The Technology Applications group of the Software Engineering Institute is
pleased to announce the creation of a new electronic mailing list:
technology-transfer-list.  This mailing list, focused on technology transfer
and related topics, is intended to foster discussion among researchers and
practitioners from government and industry who are working on technology
transfer and innovation.  

Relevant topics include:

 * organizational issues (structural and behavioral) 
 * techno-economic issues
 * business and legal issues, such as patents, licensing, copyright, and
   commercialization
 * technology transfer policy
 * technology maturation to support technology transition
 * lessons learned
 * domestic and international technology transfer
 * transition of technology from R&D to practice
 * planning for technology transition
 * models of technology transfer
 * studies regarding any of these topics

The technology-transfer-list is currently not moderated, but may be
moderated or digested in the future if the volume of submissions warrants.
The electronic mail address for submissions is:

        technology-transfer-list@sei.cmu.edu

To request to be added to or dropped from the list, please send mail to:

        technology-transfer-list-request@sei.cmu.edu

Please include the words "ADD" or "REMOVE" in your subject line.

Other administrative matters or questions should also be addressed to:

        technology-transfer-list-request@sei.cmu.edu

The SEI is pleased to provide the facilities to make this mailing list
possible.  The technology-transfer-list is the result of two SEI activities:

 * transitioning technology to improve the general practice of software
   engineering 
 * collaborating with the Computer Resource Management Technology program
   of the U.S. Air Force to transition technology into Air Force practice

The SEI is a federally funded research and development center sponsored by
the U.S. Department of Defense under contract to Carnegie Mellon University.


------------------------------

Date: Sat, 7 Jul 90 11:34:42 -0400
Subject: AAAI information (including workshop info)

WORKSHOP REGISTRATION

Workshop registration begins at 7:30 am on Sunday the 29th (the day of the
workshop) in Hall B of the Hynes Conference Center.  The workshop will get
under way at 9:00 am.  Unfortunately there is no advance registration, and
there is no putting the registration off 'til later either:  registering
is how you (and I) find out which room the workshop is in.  The registration
desk may be a zoo, so please get there early.

There is a $50 registration fee for workshops, in addition to the fees
for any other technical or tutorial sessions you choose to attend.  You
can attend a workshop without registering for the whole conference.

CONFERENCE REGISTRATION

To advance register for the AAAI conference, send a check for the amount
listed below to AAAI-90, 445 Burgess Drive, Menlo Park, CA 94025, or you
can call (415) 328-3123 and use a credit card.  

onsite registration (starting Sunday July 29th):
	regular AAAI member	$335
	regular non-member	$375
	student AAAI member	$200
	student non-member	$220

AIR TRAVEL

Reduced air fares to the workshop and/or conference are available from
Northwest Airlines and TWA.  The reduction is 45% off round-trip or circle-
trip coach-class fares over $100, within the continental US.  Tickets
must be issued at least 10 days in advance of travel.  Flight reservations
must be made through one of the following:

Custom Travel Consultants  (in Woodside, CA)
(800)367-2105 or (415)369-2105,  9am - 5pm PST

Northwest Airlines
(800)328-1111  (use ID code 17379)

TWA
(800)325-4933  (use ID code B9912829)

CAR RENTAL

Special rates are available from Hertz when booked through Custom Travel
Consultants (see above).  Use code  CV 5522.

ACCOMMODATION

Economical accommodation is available in MIT dormitory housing (twin beds with linens,
laundry facilities available, no private baths or air conditioning), 6 blocks
from the conference center, $35 single, $50 double.  Requests for dormitory
housing must be received by Rogal by July 1st, and must be accompanied by
full prepayment.  Parking is an extra $5.

Here follows a list of a few hotels you might use.  Whether or not you're
staying for the rest of the conference, you can obtain a room at "special
conference rates" by contacting Rogal America, 313 Washington Street suite
300, Newton Corner, MA 02158, phone (617)965-8000.  The deadline for obtaining
these reduced rates is June 29th.

Back Bay Hilton
1 block from conference center
Single room rates: $125
Double room rates: $140

The Midtown
2 blocks from conference center
Single room rates: $80
Double room rates: $90

Boston Marriott Copley Place
2 blocks from conference center
Single room rates: $126, $137, $144
Double room rates: $140, $152, $160

Copley Square Hotel
3 blocks from conference center
Single room rates: $86, $96
Double room rates: $100

57 Park Plaza
8 blocks from conference center
Single room rates: $90
Double room rates: $100

Howard Johnson Cambridge
13 blocks from conference center
Single room rates: $90
Double room rates: $100

------------------------------

Date: Sat, 7 Jul 90 11:34:42 -0400
From: finin@PRC.Unisys.COM
Subject: Seventh IEEE Conference on Artificial Intelligence Applications
Organization: Unisys Center for Advanced Information Technology


 The Seventh IEEE Conference on Artificial Intelligence Applications

	      Fontainebleau Hotel,  Miami Beach, Florida
			February 24 - 28, 1991

			Call For Participation
		    (submission deadline 8/31/90)
				   
	      Sponsored by The Computer Society of IEEE

The conference is devoted to the application of artificial
intelligence techniques to real-world problems.  Two kinds of papers
are appropriate: case studies of knowledge-based applications that
solve significant problems and stimulate the development of useful
techniques, and papers on AI techniques and principles that underlie
knowledge-based systems and, in turn, enable ever more ambitious
real-world applications.  This conference provides a forum for such
synergy between applications and AI techniques.

Papers describing significant unpublished results are solicited along
three tracks:

  o  "Scientific/Engineering"  Applications Track.  Contributions stemming 
     from the general area of industrial and scientific applications.
  
  o  "Business/Decision Support" Applications Track.  Contributions stemming 
     from the general area of decision support applications in business, 
     government, law, etc.
  
     Papers in these two application tracks must:  (1) Justify the use
     of the AI technique, based on the problem definition and an
     analysis of the application's requirements; (2) Explain how AI
     technology was used to solve a significant problem; (3) Describe
     the status of the implementation; (4) Evaluate both the
     effectiveness of the implementation and the technique used.

     Short papers up to 1000 words in length will also be accepted for
     presentation in these two application tracks.
  
  o "Enabling Technology" Track.  Contributions focusing on techniques
     and principles that facilitate the development of practical knowledge
     based systems that can be scaled to handle increasing problem
     complexity.  Topics include, but are not limited to: knowledge
     representation, reasoning, search, knowledge acquisition, learning,
     constraint programming, planning, validation and verification, project
     management, natural language processing, speech, intelligent
     interfaces, integration, problem-solving
     architectures, programming environments and general tools.
  
Long papers in all three tracks should be limited to 5000 words and
short papers in the two applications tracks limited to 1000 words.
Papers which are significantly longer than these limits will not be
reviewed. The first page of the paper should contain the following
information (where applicable) in the order shown:

  -  Title.
  -  Authors' names and affiliation. (specify student status)
  -  Contact information (name, postal address, phone, fax and email address)
  -  Abstract:  A 200 word abstract that includes a clear statement describing
     the paper's original contributions and what new lesson is imparted.
  -  AI topic:  one or more terms describing the relevant AI areas, e.g.,
     knowledge acquisition, explanation, diagnosis, etc.
  -  Domain area:  one or more terms describing the problem domain area,
     e.g., mechanical design, factory scheduling, education, medicine, etc.
     Do NOT specify the track.
  -  Language/Tool:  Underlying programming languages, systems and tools used.
  -  Status:  development and deployment status, as appropriate.
  -  Effort: Person-years of effort put into developing the particular
     aspect of the project being described.
  -  Impact: A twenty word description of estimated or measured (specify)
     benefit of the application developed.
  
Each paper accepted for publication will be allotted seven pages in
the conference proceedings.  The best papers accepted in the two
applications tracks will be considered for a special issue of IEEE
EXPERT to appear late in 1991.  An application has been made to
reserve a special issue of IEEE Transactions on Knowledge and Data
Engineering (TKDE) for publication of the best papers in the enabling
technologies track.  IBM will sponsor an award of $1,500 for the
best student paper at the conference. 

In addition to papers, we will be accepting the following types of
submissions:

  - Proposals for Panel discussions. Provide a brief description of the
    topic (1000 words or less).  Indicate the membership of the panel and
    whether you are interested in organizing/moderating the discussion.   

  - Proposals for Demonstrations.  Submit a short proposal (under 1000
    words) describing a videotaped and/or live demonstration.  The
    demonstration should be of a particular system or technique that
    shows the reduction to practice of one of the conference topics.
    The demonstration or videotape should be not longer than 15 minutes.

  - Proposals for Tutorial Presentations. Proposals for three hour
    tutorials of both an introductory and advanced nature are
    requested.  Topics should relate to the management
    and technical development of useful AI applications.  Tutorials
    which analyze classes of applications in depth or examine
    techniques appropriate for a particular class of applications are of
    particular interest. Copies of slides are to be provided in advance to
    IEEE for reproduction.

    Each tutorial proposal should include the following:

     * Detailed topic list and extended abstract (about 3 pages)
     * Tutorial level:  introductory, intermediate, or advanced
     * Prerequisite reading for intermediate and advanced tutorials
     * Short  professional vita including presenter's experience in
       lectures and tutorials.

  - Proposals for Vendor Presentations. A separate session will be held
    where vendors will have the opportunity to give an overview of
    their AI-based software products and services.


IMPORTANT DATES

  - August 31, 1990: Six copies of Papers, and four copies of all proposals
    are due.  Submissions not received by that date will be returned
    unopened. Electronically transmitted materials will not be accepted.
  - October 26, 1990: Author notifications mailed.
  - December 7, 1990: Accepted papers due to IEEE.  Accepted tutorial
    notes due to Tutorial Chair.
  - February 24-25, 1991: Tutorial Program of Conference
  - February 26-28, 1991: Technical Program of Conference

Submit Papers and Other Materials to:

	Tim Finin
	Unisys Center for Advanced Information Technology
	70 East Swedesford Road
	PO Box 517
	Paoli PA 19301
	internet: finin@prc.unisys.com
	phone: 215-648-2840; fax: 215-648-2288

Submit Tutorial Proposals to:

	Daniel O'Leary
	Graduate School of Business
	University of Southern California
	Los Angeles, CA 90089-1421
	phone: 213-743-4092, fax: 213-747-2815

For registration and additional conference information, contact:

	CAIA-91
	The Computer Society of the IEEE
	1730 Massachusetts Avenue, NW
	Washington, DC 20036-1903
	phone: 202-371-1013

			CONFERENCE COMMITTEES

General Chair:      Se June Hong, IBM Research
Program Chair:      Tim Finin, Unisys
Publicity Chair:    Jeff Pepper, Carnegie Group, Inc.
Tutorial Chair:     Daniel O'Leary, University of Southern California
Local Arrangements: Alex Pelin, Florida International University, and
                    Mansur Kabuka, University of Miami
Program Committee:
 AT-LARGE				SCIENTIFIC/ENGINEERING TRACK
 Tim Finin, Unisys (chair)		Chris Tong, Rutgers (chair)
 Jan Aikins, AION Corp.			Sanjaya Addanki, IBM Research
 Robert E. Filman, IntelliCorp		Bill Mark, Lockheed AI Center
 Ron Brachman, AT&T Bell Labs		Sanjay Mittal, Xerox PARC
 Wolfgang Wahlster, German Res. Center	Ramesh Patil, MIT
  for AI & U. of Saarlandes		David Searls, Unisys
 Mark Fox, CMU				Duvurru Sriram, MIT

 ENABLING TECHNOLOGY TRACK		BUSINESS/DECISION SUPPORT TRACK
 Howard Shrobe, Symbolics (chair)	Peter Hart, Syntelligence (chair)
 Lee Erman, Cimflex Teknowledge		Chidanand Apte,  IBM Research
 Eric Mays,  IBM Research		Vasant Dhar,  New York University
 Norm Sondheimer, GE Research		Steve Kimbrough, U. of Pennsylvania
 Fumio Mizoguchi, Tokyo Science Univ.   Don McKay, Unisys
 Dave Waltz, Brandeis & Thinking Machines

  +----------------------------------------------------------------------+
  | Tim Finin                                   finin@prc.unisys.com     |
  | Center for Advanced Information Technology  215-648-2840, -2288(fax) |
  | Unisys, PO Box 517, Paoli, PA 19301 USA     215-386-1749 (home)      |
  +----------------------------------------------------------------------+


------------------------------

From: Kimberlee Pietrzak-Smith <kim@cs.toronto.edu>
Subject: IJCAI'91 Call for Participation
Date: 	Fri, 6 Jul 1990 16:15:06 -0400

               CALL FOR PARTICIPATION: IJCAI-91
  TWELFTH INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE
            August 24 - 30, 1991, Sydney, Australia

The biennial IJCAI conferences are the major forums for the
international scientific exchange and presentation of AI research.
The next IJCAI conference will be held in Sydney, Australia, 24-30 August
1991. IJCAI-91 is sponsored by the International
Joint Conferences on Artificial Intelligence, Inc. (IJCAII), and
co-sponsored by the National Committee on Artificial Intelligence
and Expert Systems of the Australian Computer Society.

The conference technical program will include workshops, tutorials,
panels and invited talks, as well as tracks for paper and videotape
presentations. 


1. Paper Track: Submission Requirements and Guidelines

Topics of Interest 
 
Submissions are invited on substantial, original, and previously
unpublished research in all aspects of AI, including, but not limited
to:

* Architectures and languages for AI (e.g. hardware and
  software for building AI systems, real time and distributed AI)
* Automated reasoning (e.g. theorem proving, automatic
  programming, planning and reasoning about action, search, truth
  maintenance systems, constraint satisfaction) 
* Cognitive modelling (e.g. user models, memory models)
* Connectionist and PDP models
* Knowledge representation (e.g. logics for knowledge, belief and
  intention, nonmonotonic formalisms, complexity analysis, languages
  and systems for representing knowledge)
* Learning and knowledge acquisition
* Logic programming (e.g. semantics, deductive databases, relationships
  to AI knowledge representation)
* Natural language (e.g. syntax, semantics, discourse, speech recognition 
  and understanding, natural language front ends)
* Philosophical foundations
* Principles of AI applications (e.g. intelligent CAI, design,
  manufacturing, control)
* Qualitative reasoning and naive physics (e.g. temporal and spatial
  reasoning, reasoning under uncertainty, model-based reasoning, diagnosis)
* Robotics (e.g. kinematics, manipulators, navigation, sensors, control)
* Social, economic and legal implications
* Vision (e.g. colour, shape, stereo, motion, object recognition,
  active vision, model-based vision, vision architectures and hardware,
  biological modelling)


Timetable

1. Submissions must be received by December 10, 1990.
   Submissions received after that date will be returned unopened.
   Authors should note that ordinary mail can sometimes be considerably
   delayed and should take this into account when timing their
   submissions.
   Notification of receipt will be mailed to the first author (or
   designated author) soon after receipt.

2. Notification of acceptance or rejection: on or before March 20, 1991.
   Notification will be sent to the first author (or designated author).

3. Edited version to be received by April 19, 1991.


General

   Authors should submit five (5) copies of their papers in hard copy
form.  All paper submissions should be to one of the Program Committee
CoChairs. Electronic or FAX submissions cannot be accepted.


Appearance

   Papers should be printed on 8.5" x 11" or A4 sized paper,
double-spaced (i.e. no more than 28 lines per page), with 1.5" margins,
and with 12 point type. Letter quality print is required. (Normally, dot-matrix
printout will be unacceptable unless truly of letter quality. Exceptions will
be made for submissions from countries where high quality printers are not
widely available.)


Length

   Papers should be a minimum of 2500 words (about nine pages double
spaced) and a maximum of 5500 words (about 18 pages double spaced),
including figures, tables and diagrams. Each full page of figures
takes the space of about 500 words.


Title Page

Each copy of the paper must include a title page, separate from the
body of the paper. This should contain:

1. Title of the paper.
2. Full names, postal addresses, phone numbers and email addresses of all 
   authors.
3. An abstract of 100-200 words.
4. The area/subarea in which the paper should be reviewed.
5. A declaration that this paper is not currently under review for
   a journal or another conference, nor will it be submitted during
   IJCAI's review period. See IJCAI's policy on multiple submissions
   below.


Policy on Multiple Submissions

   IJCAI will not accept any paper which, at the time of submission, is
under review for a journal or another conference. Authors are also 
expected not to submit their papers elsewhere during
IJCAI's review period. These restrictions apply only to journals and
conferences, not to workshops and similar specialized presentations
with a limited audience. 


Review Criteria

   Papers will be subject to peer review. Selection criteria include
accuracy and originality of ideas, clarity and significance of
results and the quality of the presentation. The decision of the 
program committee will be final and cannot be appealed. Papers
selected will be scheduled for presentation and will be printed in the
proceedings. Authors of accepted papers, or their representatives, are expected
to present their papers at the conference.


Video Enhancement of Paper Presentations

   In addition to an oral presentation, the authors of accepted papers
may, if they so choose, submit a videotape which will be presented in the
video track session. These tapes will not be refereed but only reviewed for the
quality of the presentation. They are intended to provide additional support to
the written and oral presentation such as demonstrations, illustrations or
applications. For details concerning tape format, see the video track 
description below. Reviewing criteria do not apply to these tapes. Only the 
submitted papers will be peer-reviewed. Authors wishing to augment their paper
presentation with a video should submit a tape only after their paper
has been accepted. All such arrangements should be made with the video track
chair.


Distinguished Paper Awards

   The Program Committee will distinguish one or more papers of
exceptional quality for special awards. This decision will in no way depend on
whether the authors choose to enhance their paper with a video presentation.



2. Videotape Track: Submission Requirements and Guidelines

    This track is reserved for displaying interesting research on applications
to real-world problems arising in industrial, commercial, government, space and
educational arenas. It is designed to demonstrate the current levels of
usefulness of AI tools, techniques and methods.

    Authors should submit one copy of a videotape of 15 minutes maximum
duration, accompanied by a submission letter that includes:

* Title,
* Full names, postal addresses, phone numbers and email addresses of all 
  authors,
* Tape format (indicate one of NTSC, PAL or SECAM; and one of VHS or .75"
  U-matic),
* Duration of tape in minutes,
* Three copies of an abstract of one to two pages in length, containing the
  title of the video, and full names and addresses of the authors,
* Author's permission to copy tape for review purposes.

The timetable and conditions for submission, notification of acceptance or
rejection, and receipt of final version are the same as for the paper
track. See above for details. 

All tape submissions must be made to the Videotape Track Chair.
Tapes cannot be returned; authors should retain extra copies for making 
revisions. All submissions will be converted to NTSC format before review. 

     Tapes will be reviewed and selected for presentation during the
conference.  Abstracts of accepted videos will appear
in the conference proceedings. The following criteria will guide
the selection:

* Level of interest to the conference audience
* Clarity of goals, methods and results
* Presentation quality (including audio, video and pace).

     Preference will be given to applications that show a high level of
maturity. Tapes that are deemed to be advertising commercial products,
propaganda, purely expository materials, merely taped lectures or
other material not of scientific or technical value will be rejected.


3. Panels, Tutorials, Workshops

    The IJCAI-91 technical program will include panels, tutorials and
workshops, for which separate calls for proposals have been issued. For
details about organizing one of these, contact the appropriate chair in
the following list.


4. IJCAI-91 Conference Contacts

Program CoChairs

Paper submissions, reviewing, invited talks, awards and all
matters related to the technical program:

Prof. John Mylopoulos
Department of Computer Science
University of Toronto
Toronto, Ont. M5S 1A4
CANADA
Tel: (+1-416)978-5379
Fax: (+1-416)978-1455
email: ijcai@cs.toronto.edu

Prof. Ray Reiter
Department of Computer Science
University of Toronto
Toronto, Ont. M5S 1A4
CANADA
Tel: (+1-416)978-5379
Fax: (+1-416)978-1455
email: ijcai@cs.toronto.edu


Videotape Track Chair

Videotape submissions, editing and scheduling of video presentations:

Dr. Alain Rappaport
Neuron Data
444 High Street
Palo Alto, CA 94301
USA
Tel: (+1-415)321-4488
Fax: (+1-415)321-3728
email: atr@ml.ri.cmu.edu


Tutorial Chair

Enquiries about tutorial presentations:

Dr. Martha Pollack
Artificial Intelligence Center, SRI International
333 Ravenswood Ave.
Menlo Park, CA 94025
USA
Tel: (+1-415)859-2037
Fax: (+1-415)326-5512
email: pollack@ai.sri.com


Workshop Chair

Enquiries about workshop presentations and scheduling:

Dr. Joe Katz
MITRE Corporation
MS-K318
Burlington Rd.
Bedford, MA 01730
USA
Tel: (+1-617)271-8899
Fax: (+1-617)271-2423
email: katz@mbunix.mitre.org


Panel Chair

Enquiries about panels:

Dr. Peter F. Patel-Schneider
AT&T Bell Labs
600 Mountain Ave.
Murray Hill, NJ 07974
USA
Tel: (+1-201)582-3399
Fax: (+1-201)582-5192
email: pfps@research.att.com


Australian National Committee Secretariat

For enquiries about registration, accommodation and other local
arrangements:

Ms. Beverley Parrott
IJCAI-91
Parrish Conference Organizers
PO Box 787
Potts Point NSW 2011
AUSTRALIA
Tel: (+61-2)357-2600
Fax: (+61-2)357-2950


IJCAI-91 Exhibition Secretariat

For enquiries concerning the exhibition:

Ms. Julia Jeffrey
Jeffrey Enterprises
104 Falcon Street
Crows Nest NSW 2065
AUSTRALIA
Tel: (+61-2)954-0842
Fax: (+61-2)925-0735


Australian National Committee Chair

For enquiries about general Australian arrangements:

Prof. Michael A. McRobbie
Centre for Information Science Research
I Block
Australian National University
GPO Box 4
Canberra ACT 2601
AUSTRALIA
Tel: (+61 6)249-2035
Fax: (+61-6)249-0747
email: mam@arp.anu.oz.au


Conference Chair

For other general conference related matters:

Prof. Barbara J. Grosz
Aiken Computation Lab 20
Harvard University
33 Oxford Street
Cambridge MA 02138, USA
Tel: (+1-617)495-3673
Fax: (+1-617)495-9837
email: grosz@endor.harvard.edu


IJCAII and IJCAI-91 Secretary-Treasurer 

Dr. Donald E. Walker                  
Bellcore, MRE 2A379                  
445 South Street, Box 1910              
Morristown, NJ 07960-1910     
USA
Tel: (+1-201)829-4312
Fax: (+1-201)455-1931
email: walker@flash.bellcore.com

------------------------------

Date: 	Fri, 6 Jul 1990 16:16:01 -0400
From: Kimberlee Pietrzak-Smith <kim@cs.toronto.edu>
Subject: IJCAI'91 Call for Workshops

                  Call for Workshop Proposals: IJCAI-91
 
 
	The IJCAI-91 Program Committee invites proposals for the Workshop
Program of the International Joint Conference on Artificial Intelligence
(IJCAI-91), to be held in Sydney, Australia, 24-30 August 1991.

	Gathering in an informal setting, workshop participants will have the
opportunity to meet and discuss selected technical topics in an atmosphere
which fosters the active exchange of ideas among researchers and
practitioners.  Members from all segments of the AI community are invited
to submit proposals for review.
 
	To encourage interaction and a broad exchange of ideas, the workshops
will be kept small, preferably under 35 participants.  Attendance should be
limited to active participants only.  The format of workshop presentations
will be determined by the organizers proposing the workshop, but ample time
must be allotted for general discussion.  Workshops can vary in length, but
most will last a half day or a full day.  Proposals for workshops
should be between one and two pages in length, and should contain:

1. A brief description of the workshop identifying specific technical issues
that will be its focus.
2. A discussion of why the workshop is of interest at this time,
3. The names, postal addresses, phone numbers and email addresses of the 
organizing committee, consisting of three or four people knowledgeable in the 
field and not all at the same organization,
4. A proposed schedule for organizing the workshop and a preliminary
agenda.
 
	Proposals should be submitted as soon as possible, but no
later than 21 December 1990.  Proposals will be reviewed as they are received
and resources allocated as workshops are approved. Organizers will be notified
of the committee's decision no later than 15 February 1991. 

        Workshop organizers will be responsible for:

1. Producing a Call for Participation in the workshop, open to all members 
of the AI community, which will be distributed by IJCAI.
2. Reviewing requests to participate in the workshop and selecting the
participants.
3. Scheduling the workshop activities.  All organizational arrangements
must be completed by May 15, 1991.
4. Preparing a review of the workshop for publication.
 
	IJCAI will provide logistical support and a meeting place
for the workshop, and, in conjunction with the organizers, will determine the
workshop date and time. IJCAI reserves the right to cancel any workshop
if deadlines are missed.

To cover costs, it will be necessary to charge a fee of $US50 for each
participant.
 
	Please submit your proposals, and any enquiries to:
 
 
       Dr. Joseph Katz
       MITRE Corporation
       MS-K318
       Burlington Road
       Bedford, MA 01730
       USA
       Tel: (+1-617) 271-8899 
       Fax: (+1-617) 271-2423
       email:  katz@mbunix.mitre.org

------------------------------

Date: 	Fri, 6 Jul 1990 16:16:36 -0400
From: Kimberlee Pietrzak-Smith <kim@cs.toronto.edu>
Subject: IJCAI'91 Call for Tutorials

              Call for Tutorial Proposals: IJCAI-91

The IJCAI-91 Program Committee invites proposals for the
Tutorial program of the International Joint Conference on Artificial
Intelligence (IJCAI-91) to be held in Sydney, Australia, 24-30 August 1991.
Tutorials will be offered both on standard topics and on new
and more advanced topics.  A list of topics from the IJCAI-89 Tutorial
Program is given below, to suggest the breadth of topics that can be
covered by tutorials, but this list is only a guide.  Other topics,
both related to these and quite different from them, will be
considered:

* Introduction to Artificial Intelligence
* Logic Programming
* Planning and Reasoning about Time
* Evaluating Knowledge-Engineering Tools
* Truth Maintenance Systems
* Knowledge-Acquisition
* Natural Language Processing
* Artificial Intelligence and Education
* Common Lisp Object System
* Advanced Architectures for Expert Systems

* Computer Vision

* Uncertainty Management
* Model-Based Diagnosis
* Case-Based Reasoning
* Real-Time Knowledge-Based Systems
* Neural Network Architectures
* Managing Expert Systems Projects
* Knowledge Representation
* Artificial Intelligence and Design
* Reasoning about Action and Change
* Inductive Learning
* Verifying and Validating Expert Systems
* Constraint-Directed Reasoning
* Integrating AI and Database Technologies

Anyone interested in presenting a tutorial should submit a
proposal to the 1991 Tutorial Chair, Martha Pollack.  Proposals from a
pair of presenters will be strongly favored over ones from a single
individual.  A tutorial proposal should contain the following information:


1. Topic.
2. A brief description of the tutorial, suitable for inclusion in
the conference registration brochure.
3. A detailed outline of the tutorial.
4. The necessary background and the potential target audience for
the tutorial.
5. A description of why the tutorial topic is of interest to a
substantial segment of the IJCAI audience (for new topics only).
6. A brief resume of the presenter(s), which should include name,
mailing address, phone number, email address if available, background
in the tutorial area, any available examples of work in the area
(ideally, a published tutorial-level article on the subject), evidence
of teaching experience (including references that address the
proposer's presentation ability), and evidence of scholarship in
AI/Computer Science (equivalent to a published IJCAI conference paper
or tutorial syllabus).

Those submitting a proposal should keep in mind that tutorials are
intended to provide an overview of a field; they should present
reasonably well agreed upon information in a balanced way.  Tutorials
should not be used to advocate a single avenue of research, nor should
they promote a product.

Proposals must be received by Jan. 4, 1991. Decisions about topics and
speakers will be made by Feb. 22, 1991. Speakers should be prepared to submit
completed course materials by July 1, 1991.

Proposals should be sent to:

Dr. Martha Pollack
Artificial Intelligence Center
SRI International
333 Ravenswood Ave.
Menlo Park, CA   94025
USA
email: pollack@ai.sri.com
Tel: (+1-415) 859-2037
Fax: (+1-415) 326-5512  (NOTE:  Indicate clearly on the first page
that it is intended for "Martha Pollack, Artificial Intelligence Center".)

------------------------------

Date: 	Fri, 6 Jul 1990 16:17:08 -0400
From: Kimberlee Pietrzak-Smith <kim@cs.toronto.edu>
Subject: IJCAI'91 Call for Awards

       CALL FOR NOMINATIONS FOR IJCAI AWARDS: IJCAI-91

THE IJCAI AWARD FOR RESEARCH EXCELLENCE

The IJCAI Award for Research Excellence is given at an IJCAI, to a 
scientist who has carried out a program of research of consistently
high quality yielding several substantial results.  If the research
program has been carried out collaboratively, the Award may be made
jointly to the research team.  Past recipients of this Award are
John McCarthy (1985) and Allen Newell (1989). 

The Award carries with it a certificate and the sum of $US2,000 plus
travel and living expenses for the IJCAI.  The researcher(s) will
be invited to deliver an address on the nature and significance of
the results achieved and write a paper for the conference proceedings.
Primarily, however, the Award carries the honour of having one's
work selected by one's peers as an exemplar of sustained research
in the maturing science of Artificial Intelligence.

We hereby call for nominations for The IJCAI Award for Research
Excellence to be made at IJCAI-91 which is to be held in Sydney, 
Australia, 24-30 August 1991.  The accompanying note on Selection
Procedures for IJCAI Awards provides the relevant details. 


THE COMPUTERS AND THOUGHT AWARD

The Computers and Thought Lecture is given at each International
Joint Conference on Artificial Intelligence by an outstanding young
scientist in the field of Artificial Intelligence.  The Award
carries with it a certificate and the sum of $US2,000 plus travel
and subsistence expenses for the IJCAI.  The Lecture is presented
one evening during the Conference, and the public is invited to
attend.  The Lecturer is invited to publish the Lecture in the
conference proceedings.  The Lectureship was established with
royalties received from the book Computers and Thought, edited by
Feigenbaum and Feldman; it is currently supported by income from
IJCAI funds.

Past recipients of this honour have been Terry Winograd (1971),
Patrick Winston (1973), Chuck Rieger (1975), Douglas Lenat (1977),
David Marr (1979), Gerald Sussman (1981), Tom Mitchell (1983),
Hector Levesque (1985), Johan de Kleer (1987) and Henry Kautz
(1989).

Nominations are invited for The Computers and Thought Award to be
made at IJCAI-91 in Sydney.  The note on Selection Procedures for
IJCAI Awards describes the nomination procedures to be followed.


SELECTION PROCEDURES FOR IJCAI AWARDS

Nominations for The IJCAI Award for Research Excellence and The
Computers and Thought Award are invited from all members of the
Artificial Intelligence international community.  The procedures
are the same for both awards.

There should be a nominator and a seconder, at least one of whom
should not have been in the same institution as the nominee.  The
nominators should prepare a short submission of less than 2,000
words, outlining the nominee's qualifications with respect to the
criteria for the particular award.

The award selection committee is the union of the Program, Conference
and Advisory Committees of the upcoming IJCAI and the Board of
Trustees of International Joint Conferences on Artificial Intelligence, 
Inc., with nominees excluded.

Nominations should be sent to the Conference Chair for IJCAI-91 at the
address below.  They must be sent in hardcopy form; electronic
submissions cannot be accepted.  The deadline for nominations is
1 December 1990.  To avoid duplication of effort, nominators are
requested to submit the name of the person they are nominating by 
1 November 1990 so that people who propose to nominate the same
individual may be so informed and can coordinate their efforts.


Prof. Barbara J. Grosz
Conference Chair, IJCAI-91
Aiken Computation Lab 20
Harvard University
33 Oxford Street
Cambridge, MA 02138, USA
tel: (+1-617) 495-3673
fax: (+1-617) 495-9837
grosz@endor.harvard.edu

Due Date for nominations: 1 December 1990.




------------------------------

End of VISION-LIST
********************

Vision-List-Request@ADS.COM (Vision-List moderator Phil Kahn) (07/26/90)

Vision-List Digest	Wed Jul 25 10:10:21 PDT 90

 - Send submissions to Vision-List@ADS.COM
 - Send requests for list membership to Vision-List-Request@ADS.COM

Today's Topics:

 vision book list: addendum
 Camera Mount
 Anybody working in visualization in Israel?
 IJPRAI CALL FOR PAPERS
 (Summary) Re: Character Recognition Bibliography?

----------------------------------------------------------------------

Date: Thu, 19 Jul 90 15:42:52 +0200
From: ronse@prlb.philips.be
Subject: vision book list: addendum

I received a few more suggestions of important books on vision since I
sent my summary to the list. Here they are:

M.A. Fischler & O. Firschein: "Readings in Computer Vision: Issues,
Problems, Principles, and Paradigms", Morgan Kaufmann, Los Altos, CA,
1987.

A.P. Pentland: "From Pixels to Predicates", Ablex Publ. Corp., Norwood,
NJ, (1986).

Blake & Zisserman: "Visual Reconstruction"

Besl: "Range Image Understanding"

Someone also suggested a special issue from a serial journal:

"Artificial Intelligence", special isue on Computer Vision, Vol. 17
(1981).

Christian Ronse

Internet:			ronse@prlb.philips.be
BITNET:				ronse%prlb.philips.be@cernvax

------------------------------

Date: Tue, 17 Jul 90 17:34 PDT
From: H. Keith Nishihara <hkn@natasha.ads.com>
Subject: Camera Mount
Phone: (415)328-8886
Us-Mail: Teleos Research, 576 Middlefield Road, Palo Alto, CA 94301

    Date: Sun,  8 Jul 90 12:38:29 -0400 (EDT)
    From: "Ethan Z. Evans" <ee0i+@andrew.cmu.edu>

    Vision Wizards:
	    I need to find a 2 degree of freedom camera mount (rotation and pitch).
     The mobile platform I work on is only about a foot tall, and the top is
    cluttered with various hardware etc.  Thus to give the robot a view of
    its world, we need to set the camera up on a pole.  Once it's up there it
    needs to be able to turn around and either look down at what the arm is
    doing, or out at where the base is going.  The point is, I don't have the
    slightest idea where to look for such a device.  If anyone could give me
    a starting point as to where to get such a mount, especially one easily
    controlled through a PC parallel or serial port, I would be most
    grateful.
	
    Thanks in advance,
	    Ethan Evans
	    ee0i@andrew.cmu.edu

    [ Please post responses to the List
		    phil...	]


We are in the process of developing a small two axis camera mount as
part of a third generation video-rate sign-correlation stereo and motion
system that we are building this summer at Teleos Research.

The camera mount has the following approximate specs:

Pan range:     infinite (or till your camera wires break)
Pitch range:   -90 to 90 degrees
Accuracy:      11 min of arc 
Height:        21 cm
Diameter:      9.7 cm
Weight:        0.68 kg

Controller boards (for the IBM PC, VME, STD, and Multibus) are available for
operating the pan-pitch head in absolute position mode, incremental position
mode, or velocity mode.  We plan to use this device to hold a stereo camera
fixture that also provides vergence control for binocular stereo work.

We have not thought about making this device commercially available but that
is a possibility.  If interested contact Jeff Kerr or myself at Teleos
Research.

Keith

------------------------------

Subject: Anybody working in visualization in Israel?
Date: Wed, 18 Jul 90 13:01:51 -0400
From: haim@hawk.ulowell.edu

Hi,
I'm trying to establish contact with researchers in the visualization
field who work in Israel.
If you are one, or you know somebody who is, I'd appreciate your sending me
a message (please send me a personal message, even if you are also
posting it on this list).
Thanks,
Haim Levkowitz
Institute for Visualization and Perception Research
University of Lowell
Lowell, MA 01854
508-934-3654

------------------------------

Date: Sun, 22 Jul 90 15:27:59 PDT
From: skrzypek@CS.UCLA.EDU (Dr. Josef Skrzypek)
Subject: IJPRAI CALL FOR PAPERS

	    IJPRAI	 CALL FOR PAPERS	 IJPRAI

We are  organizing a  special  issue of  IJPRAI (Intl.   Journal of
Pattern Recognition  and Artificial Intelligence)  dedicated to the
subject   of neural networks in  vision   and pattern  recognition.
Papers  will be  refereed.  The  plan  calls for  the  issue  to be
published  in the fall of   1991.   I  would  like  to invite  your
participation.

   DEADLINE FOR SUBMISSION: 10th of December, 1990

   VOLUME TITLE: Neural Networks in Vision and Pattern Recognition

   VOLUME GUEST EDITORS: Prof. Josef Skrzypek and Prof. Walter Karplus
   Department of Computer Science, 3532 BH
   UCLA
   Los Angeles CA 90024-1596
   Email: skrzypek@cs.ucla.edu or karplus@cs.ucla.edu
   Tel: (213) 825 2381
   Fax: (213) UCLA CSD

		      DESCRIPTION

The capabilities    of   neural    architectures   (supervised  and
unsupervised learning,   feature  detection and   analysis  through
approximate pattern matching, categorization and self-organization,
adaptation, soft constraints,  and signal based processing) suggest
new approaches to solving problems in vision, image  processing and
pattern recognition as applied to  visual stimuli.  The purpose  of
this special issue  is to encourage further  work and discussion in
this area.

The volume will include both invited and submitted peer-reviewed
articles.  We are seeking submissions from researchers in relevant
fields, including natural and artificial vision, scientific
computing, artificial intelligence, psychology, image processing
and pattern recognition.  We encourage submission of: 1) detailed
presentations of models or supporting mechanisms, 2) formal
theoretical analyses, 3) empirical and methodological studies, and
4) critical reviews of the applicability of neural networks to
various subfields of vision, image processing and pattern
recognition.

Submitted papers may be enthusiastic or critical about the
applicability of neural networks to the processing of visual
information.  The IJPRAI journal would like to encourage
submissions both from researchers engaged in the analysis of
biological systems (such as modeling psychological/neurophysiological
data using neural networks) and from members of the engineering
community who are synthesizing neural network models.  The number of
papers that can be included in this special issue will be limited.
Therefore, some qualified papers may be encouraged for submission to
the regular issues of IJPRAI.

		       SUBMISSION PROCEDURE

Submissions should be sent to Josef Skrzypek by 12-10-1990.  The
suggested length is 20-22 double-spaced pages including figures,
references, abstract and so on.  Format details, etc. will be
supplied on request.

Authors are strongly encouraged to discuss ideas for possible
submissions with the editors.

The journal is published by World Scientific and was established
in 1986.

Thank you for your consideration.

------------------------------

From: atul k chhabra <Atul.Chhabra@UC.EDU>
Subject: (Summary) Re: Character Recognition Bibliography?
Date: Sat, 21 Jul 90 14:08:04 EDT

Here is a summary of what I received in response to my request for references
on character recognition. I had asked for references in all aspects of 
character recognition -- preprocessing and segmentation, OCR, typewritten
character recognition, handwritten character recognition, neural network based
recognition, statistical and syntactic recognition, hardware implementations,
and commercial character recognition systems. THANKS TO ALL WHO RESPONDED.

IF ANYONE OUT THERE HAS MORE REFERENCES, PLEASE EMAIL ME. I WILL SUMMARIZE NEW
RESPONSES AFTER ANOTHER TWO WEEKS. THANKS.

Atul Chhabra
Department of Electrical & Computer Engineering
University of Cincinnati, ML 030
Cincinnati, OH 45221-0030

Phone: (513)556-6297
Email: achhabra@uceng.uc.edu


***************
From: Sol <sol@iai.es>

    Sol Delgado 
    Instituto de Automatica Industrial
    La Poveda   Arganda del Rey
    28500 MADRID
    SPAIN

    sol@iai.es

[ 1]_ Off-Line cursive script word recognition
      Radmilo M. Bozinovic, Sargur N. Srihari
      IEEE transactions on pattern analysis and machine intelligence.
      Vol. 11, January 1989. 


[ 2]_ Visual recognition of script characters. Neural network architectures.
      Josef Skrzypek, Jeff Hoffman.
      MPL (Machine Perception Lab). Nov 1989.


[ 3]_ On recognition of printed characters of any font and size.
      Simon Kahan, Theo Pavlidis, Henry S. Baird.
      IEEE transactions on pattern analysis and machine intelligence
      Vol PAMI_9, No 2, March 1987.


[ 4]_ Research on machine recognition of handprinted characters.
      Shunji Mori, Kazuhiko Yamamoto, Michio Yasuda.
      IEEE transactions on pattern analysis and machine intelligence.
      Vol PAMI_6, No 4. July 1984.


[ 5]_ A pattern description and generation method of structural 
      characters
      Hiroshi Nagahashi, Mikio Nakatsuyama.
      IEEE transactions on pattern analysis and machine intelligence
      Vol PAMI_8, No 1, January 1986.


[ 6]_ An on-line procedure for recognition of handprinted alphanumeric        
      characters.
      W. W. Loy, I. D. Landau.
      IEEE transactions on pattern analysis and machine intelligence.
      Vol PAMI_4, No 4, July 1982.


[ 7]_ A string correction algorithm for cursive script recognition.
      Radmilo Bozinovic, Sargur N. Srihari.
      IEEE transactions on pattern analysis and machine intelligence.
      Vol PAMI_4, No 6, November 1982.


[ 8]_ Analysis and design of a decision tree based on entropy reduction
      and its application to large character set recognition
      Qing Ren Wang, Ching Y. Suen.
      IEEE transactions on pattern analysis and machine intelligence.
      Vol PAMI_6, No 4, July 1984.


[ 9]_ A method for selecting constrained hand-printed character shapes 
      for machine recognition
      Rajjan Shinghal, Ching Y. Suen
      IEEE transactions on pattern analysis and machine intelligence.
      Vol PAMI_4, No 1, January 1982


[10]_ Pixel classification based on gray level and local "busyness"
      Philip A. Dondes, Azriel Rosenfeld.
      IEEE transactions on pattern analysis and machine intelligence.
      Vol PAMI_4, No 1, January 1982.


[11]_ Experiments in the contextual recognition of cursive script
      Roger W. Ehrich, Kenneth J. Koehler
      IEEE transactions on computers, vol c-24, No. 2, February 1975.


[12]_ Character recognition by computer and applications.
      Ching Y. Suen.
      Handbook of pattern recognition and image processing.
      ACADEMIC PRESS, INC. August 1988.

[13]_ A robust algorithm for text string separation from mixed             
      text/graphics images      
      Lloyd Alan Fletcher, Rangachar Kasturi
      IEEE transactions on pattern analysis and machine intelligence.
      Vol 10, No 6, November 1988.


[14]_ Segmentation of document images.
      Torfinn Taxt, Patrick J. Flynn, Anil K. Jain
      IEEE transactions on pattern analysis and machine intelligence.
      Vol 11, No 12, December 1989.

[15]_ Experiments in text recognition with Binary n_Gram and Viterbi       
      algorithms.
      Jonathan J. Hull, Sargur N. Srihari
      IEEE transactions on pattern analysis and machine intelligence
      Vol PAMI-4, No 5, September 1982.


[16]_ Designing a handwriting reader.
      D. J. Burr
      IEEE transactions on pattern analysis and machine intelligence
      Vol PAMI-5, No 5, September 1983.


[17]_ Experiments on neural net recognition of spoken and written text 
      David J. Burr
      IEEE transactions on acoustics, speech and signal processing
      Vol 36, No 7, July 1988


[18]_ Experiments with a connectionist text reader
      D. J. Burr
      Bell communications research
      Morristown, N.J. 07960


[19]_ An Algorithm for finding a common structure shared by a family of     
      strings   
      Anne M. Landraud, Jean-Francois Avril, Philippe Chretienne.
      IEEE transactions on pattern analysis and machine intelligence
      Vol 11, No 8, August 1989


[20]_ Word_level recognition of cursive script
      Raouf F. H. Farag
      IEEE transactions on computers
      Vol C-28, No 2, February 1979


[21]_ Pattern Classification by neural network: an experimental system       
      for icon recognition
      Eric Gullichsen, Ernest Chang
      March 1987


[22]_ Recognition of handwritten chinese characters by modified hough        
      transform techniques.
      Fang-Hsuan Cheng, Wen-Hsing Hsu, Mei-Ying Chen
      IEEE transactions on pattern analysis and machine intelligence
      Vol 11, No 4, April 1989


[23]_ Inherent bias and noise in the Hough transform
      Christopher M. Brown
      IEEE transactions on pattern analysis and machine intelligence
      Vol PAMI-5, No 5, September 1983.


[24]_ From pixels to features
      J. C. Simon
      North-Holland
      	_ Feature selection and Language syntax in text recognition.
      	  J.J. Hull

      	_ Feature extraction for locating address blocks on mail pieces.
      	  S.N. Srihari.
      


[25]_ A model for variability effects in hand-printing, with implications 
      for the design of on line character recognition systems.
      J.R. Ward and T. Kuklinski.
      IEEE transactions on systems, man and cybernetics.
      Vol 18, No 3, May/June 1988.
     

[26]_ Selection of a neural network system for visual inspection.
      Paul J. Stomski, Jr and Adel S. Elmaghraby
      Engineering Mathematics And Computer Science
      University of Louisville, Kentucky 40292


[27]_ Self-organizing model for pattern learning and its application to 
      robot eyesight.
      Hisashi Suzuki, Suguru Arimoto.
      Proceedings of the fourth conference on A.I.
      San Diego, March 1988.
      The computer society of the IEEE.

***************

From: J. Whiteley <WHITELEY-J@OSU-20.IRCC.OHIO-STATE.EDU>

I only have five references I can offer, all are from the Proceedings of the
1989 International Joint Conference on Neural Networks held in Washington D.C.

Yamada, K.
Kami, H.
Tsukumo, J.
Temma, T.
Handwritten Numeral Recognition by Multi-layered Neural Network
with Improved Learning Algorithm
Volume II, pp. 259-266

Morasso, P.
Neural Models of Cursive Script Handwriting
Volume II, pp.539-542

Guyon, I.
Poujaud, I.
Personnaz, L.
Dreyfus, G.
Comparing Different Neural Network Architectures for Classifying
Handwritten Digits
Volume II, pp.127-132

Weideman, W.E.
A Comparison of a Nearest Neighbor Classifier and a Neural Network for
Numeric Handprint Character Recognition
Volume I, pp.117-120

Barnard, E.
Casasent, D.
Image Processing for Image Understanding with Neural Nets
Volume I, pp.111-115

Hopefully you are being deluged with references.

  Rob Whiteley
  Dept. of Chemical Engineering
  Ohio State University
  email:  whiteley-j@osu-20.ircc.ohio-state.edu

***********

From: avi@dgp.toronto.edu (Avi Naiman)

%L Baird 86
%A H. S. Baird
%T Feature Identification for Hybrid Structural/Statistical Pattern Classification
%R Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition
%D June 1986
%P 150-155

%L Casey and Jih 83
%A R. G. Casey
%A C. R. Jih
%T A Processor-Based OCR System
%J IBM Journal of Research and Development
%V 27
%N 4
%D July 1983
%P 386-399

%L Cash and Hatamian 87
%A G. L. Cash
%A M. Hatamian
%T Optical Character Recognition by the Method of Moments
%J Computer Vision, Graphics and Image Processing
%V 39
%N 3
%D September 1987
%P 291-310

%L Chanda et al. 84
%A B. Chanda
%A B. B. Chaudhuri
%A D. Dutta Majumder
%T Some Algorithms for Image Enhancement Incorporating Human Visual Response
%J Pattern Recognition
%V 17
%D 1984
%P 423-428

%L Cox et al. 74
%A C. Cox
%A B. Blesser
%A M. Eden
%T The Application of Type Font Analysis to Automatic Character Recognition
%J Proceedings of the Second International Joint Conference on Pattern Recognition
%D 1974
%P 226-232

%L Frutiger 67
%A Adrian Frutiger
%T OCR-B: A Standardized Character for Optical Recognition
%J Journal of Typographic Research
%V 1
%N 2
%D April 1967
%P 137-146

%L Goclawska 88
%A Goclawska
%T Method of Description of the Alphanumeric Printed Characters by Signatures for Automatic Text Readers
%J AMSE Review
%V 7
%N 2
%D 1988
%P 31-34

%L Gonzalez 87
%A Gonzalez
%T Designing Balance into an OCR System
%J Photonics Spectra
%V 21
%N 9
%D September 1987
%P 113-116

%L GSA 84
%A General Services Administration
%T Technology Assessment Report: Speech and Pattern Recognition; Optical Character Recognition; Digital Raster Scanning
%I National Archives and Records Service
%C Washington, District of Columbia
%D October 1984

%L Hull et al. 84
%A J. J. Hull
%A G. Krishnan
%A P. W. Palumbo
%A S. N. Srihari
%T Optical Character Recognition Techniques in Mail Sorting: A Review of Algorithms
%R 214
%I SUNY Buffalo Computer Science
%D June 1984

%L IBM 86
%A IBM
%T Character Recognition Apparatus
%J IBM Technical Disclosure Bulletin
%V 28
%N 9
%D February 1986
%P 3990-3993

%L Kahan et al. 87
%A S. Kahan
%A Theo Pavlidis
%A H. S. Baird
%T On the Recognition of Printed Characters of any Font and Size
%J IEEE Transactions on Pattern Analysis and Machine Intelligence
%V PAMI-9
%N 2
%D March 1987
%P 274-288

%L Lam and Baird 86
%A S. W. Lam
%A H. S. Baird
%T Performance Testing of Mixed-Font, Variable-Size Character Recognizers
%R AT&T Bell Laboratories Computing Science Technical Report No. 126
%C Murray Hill, New Jersey
%D November 1986

%L Lashas et al. 85
%A A. Lashas
%A R. Shurna
%A A. Verikas
%A A. Dosimas
%T Optical Character Recognition Based on Analog Preprocessing and Automatic Feature Extraction
%J Computer Vision, Graphics and Image Processing
%V 32
%N 2
%D November 1985
%P 191-207

%L Mantas 86
%A J. Mantas
%T An Overview of Character Recognition Methodologies
%J Pattern Recognition
%V 19
%N 6
%D 1986
%P 425-430

%L Murphy 74
%A Janet Murphy
%T OCR: Optical Character Recognition
%C Hatfield
%I Hertis
%D 1974

%L Nagy 82
%A G. Nagy
%T Optical Character Recognition \(em Theory and Practice
%B Handbook of Statistics
%E P. R. Krishnaiah and L. N. Kanal
%V 2
%I North-Holland
%C Amsterdam
%D 1982
%P 621-649

%L Pavlidis 86
%A Theo Pavlidis
%T A Vectorizer and Feature Extractor for Document Recognition
%J Computer Vision, Graphics and Image Processing
%V 35
%N 1
%D July 1986
%P 111-127

%L Piegorsch et al. 84
%A W. Piegorsch
%A H. Stark
%A M. Farahani
%T Application of Image Correlation for Optical Character Recognition in Printed Circuit Board Inspection
%R Proceedings of SPIE \(em The International Society for Optical Engineering: Applications of Digital Image Processing VII
%V 504
%D 1984
%P 367-378

%L Rutovitz 68
%A D. Rutovitz
%T Data Structures for Operations on Digital Images
%B Pictorial Pattern Recognition
%E G. C. Cheng et al.
%I Thompson Book Co.
%C Washington, D. C.
%D 1968
%P 105-133

%L Smith and Merali 85
%A J. W. T. Smith
%A Z. Merali
%T Optical Character Recognition: The Technology and its Application in Information Units and Libraries
%R Library and Information Research Report 33
%I The British Library
%D 1985

%L Suen 86
%A C. Y. Suen
%T Character Recognition by Computer and Applications
%B Handbook of Pattern Recognition and Image Processing
%D 1986
%P 569-586

%L Wang 85
%A P. S. P. Wang
%T A New Character Recognition Scheme with Lower Ambiguity and Higher Recognizability
%J Pattern Recognition Letters
%V 3
%D 1985
%P 431-436

%L White and Rohrer 83
%A J.M. White
%A G.D. Rohrer
%T Image Thresholding for Optical Character Recognition and Other Applications Requiring Character Image Extraction
%J IBM Journal of Research and Development
%V 27
%N 4
%D July 1983
%P 400-411

%L Winzer 75
%A Gerhard Winzer
%T Character Recognition With a Coherent Optical Multichannel Correlator
%J IEEE Transactions on Computers
%V C-24
%N 4
%D April 1975
%P 419-423

***************

From: nad@computer-lab.cambridge.ac.uk

Hi, 
  I've only got two references for you - but they have 42 and 69 references,
respectively (some of the refs will be the same, but you get at least 69
references!).

They are:

"An overview of character recognition methodologies"
J. Mantas
Pattern Recognition, Volume 19, Number 6, 1986
pages 425-430

"Methodologies in pattern recognition and image analysis - a brief survey"
J. Mantas
Pattern Recognition, Volume 20, Number 1, 1987
pages 1-6

Neil Dodgson

*************

From: YAEGER.L@AppleLink.Apple.COM

I presume you know of "The 1989 Neuro-Computing Bibliography" edited by Casimir
C. Klimasauskas, a Bradford Book, from MIT Press.  It lists 11 references for
character recognition in its index.
 
- larryy@apple.com
 
***********

From: Tetsu Fujisaki <TETSU@IBM.COM>

1. Suen, C. Y., Berthod, M., and Mori, S.,
   "Automatic Recognition of Handprinted Characters - The State
    of the Art", Proc. IEEE, 68, 4 (April 1980) 469-487

2. Tappert, C. C., Suen, C. Y., and Wakahara T,
   "The State-of-the-Art in on-line handwriting recognition",
    IEEE Proc. 9th Int'l Conf. on Pattern Recognition, Rome Italy,
    Nov. 1988. Also in IBM RC 14045.

***********

From: burrow@grad1.cis.upenn.edu (Tom Burrow)

Apparently, the state of the art in connectionism, as a lot of people
will tell you, I'm sure, is Y. Le Cun et al's work which can be found
in NIPS 90.  Other significant connectionist approaches are
Fukushima's neocognitron and Denker et al's work which I *believe* is
in NIPS 88.

I am interested in handprinted character recognition.  Typeset
character recognition is basically solved, and I believe you shouldn't
have any trouble locating texts on this (although I've only looked at
the text edited by Kovalevsky (sp?), which I believe is just entitled
"Reading Machines").  Bayesian classifiers, which you can read about in
any statistical pattern recognition text (e.g., Duda and Hart, Gonzalez,
etc.), are capable of performing recognition, since one can choose
reliable features present in machine-printed text (e.g., moments,
projections, etc.), and the segmentation problem is fairly trivial.

Perhaps the greatest problem in handprinted recognition is the
segmentation problem.  Unfortunately, most connectionist approaches
fail miserably in this respect, relying on traditional methods for
segmentation which become a bottleneck.  I am investigating connectionist
methods which perform segmentation and recognition concurrently, and I
recommend you do not treat the two problems independently.

I am by no means expert in any area which I've commented on, but I
hope this helps.  Also, again, please send me your compiled responses.
Thank you and good luck.

Tom Burrow

------------------------------

End of VISION-LIST
********************

Vision-List-Request@ADS.COM (Vision-List moderator Phil Kahn) (08/02/90)

Vision-List Digest	Wed Aug 01 12:47:50 PDT 90

 - Send submissions to Vision-List@ADS.COM
 - Send requests for list membership to Vision-List-Request@ADS.COM

Today's Topics:

 Line length using Hough Transform
 Where to find jobs in Image/ digital signal processing?
 Displaying 24bit/pix images on 8bit/pix device
 fractals and texture analysis
 postdoctoral position available
 3 Research Posts - Computer Vision

----------------------------------------------------------------------

Date: 26 July 90, 17:32:19 UPM
From: FACC005@SAUPM00.BITNET
Subject: Line length using Hough Transform

Can anyone give me any pointers to literature on finding the length
(not parameters) of a line using the Hough Transform?  The line may
be short, i.e. it may not cross the full image.

Thanks in advance.

 ATIQ (facc005@saupm00.bitnet)
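
[ One common route to a segment length, sketched below purely as an
  illustration; the edge-image layout and the function name are
  assumptions, not something from the posting.  Once a Hough peak gives
  the line parameters (rho, theta), walk along that line in image space
  and take the distance between the first and last edge pixels that
  support it.  Gaps are ignored here; a run-length test could be added
  for broken lines. ]

    #include <math.h>

    /* Estimate the length of the segment supporting a Hough peak.
       edge  : width*height bytes, nonzero = edge pixel (row-major)
       rho   : distance parameter of the peak
       theta : angle parameter of the peak, in radians
       Walks along x*cos(theta) + y*sin(theta) = rho and records the
       first and last supporting pixels.  Returns 0.0 if none found.  */
    double hough_segment_length(const unsigned char *edge,
                                int width, int height,
                                double rho, double theta)
    {
        double c = cos(theta), s = sin(theta);
        double limit = (double)(width + height);   /* generous sweep range */
        double t, tmin = 0.0, tmax = 0.0;
        int found = 0;

        for (t = -limit; t <= limit; t += 1.0) {
            /* point on the line: (rho*c, rho*s) + t * (-s, c) */
            int x = (int)floor(rho * c - t * s + 0.5);
            int y = (int)floor(rho * s + t * c + 0.5);

            if (x < 0 || x >= width || y < 0 || y >= height)
                continue;
            if (edge[y * width + x]) {
                if (!found) { tmin = t; found = 1; }
                tmax = t;
            }
        }
        return found ? tmax - tmin : 0.0;
    }
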
------------------------------

Date: Thu, 26 Jul 90 16:15 EST
From: CHELLALI@northeastern.edu
Subject: Where to find jobs in Image/ digital signal processing?

Hi,

Can anyone tell me what's a good place to look for a job in Electrical
Engineering (image/digital signal processing) with a Ph.D. degree?
Also, I would like to add that the job should not require US citizenship...

I am in Boston right now, and jobs are not coming easy.

Thanks a lot.

M Chellali

------------------------------

Date: 27 Jul 90 20:56:07 GMT
From: surajit@hathi.eng.ohio-state.edu (Surajit Chakravarti)
Subject: Displaying 24bit/pix images on 8bit/pix device
Organization: The Ohio State University Dept of Electrical Engineering

Hi folks,

We have a few images in 24 bits/pixel format (one byte each of Red, Green
and Blue per pixel). We need to convert these images into Sun raster 8-bit
color format or GIF format in order to display them under X Windows. Are
there any available programs or formulae that would let us either convert
these images to a compatible form or display them directly under X Windows?

Any information in this regard will be greatly appreciated.

Thanks in advance.

Surajit Chakravarti		     *  surajit@hathi.eng.ohio-state.edu
				     *	Office: 679 Dreese Labs
Electrical Engineering		     *  SPANN Lab: 512 Dreese Labs
The Ohio State University	     *  SPANN Phone: (614)-292-6502
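
[ One quick-and-dirty route, sketched below as an illustration only;
  the buffer layouts and names are assumptions.  Truncate each pixel to
  3 bits of red, 3 of green and 2 of blue, giving an 8-bit index into a
  fixed 256-entry colormap.  A proper quantizer (median cut, octree)
  gives much better results; the pbmplus package includes tools along
  those lines. ]

    /* Reduce 24-bit RGB to an 8-bit index image by fixed 3-3-2 truncation.
       rgb   : width*height*3 bytes, R,G,B per pixel
       index : width*height bytes, output palette indices
       cmap  : 256*3 bytes, output colormap matching the 3-3-2 indices   */
    void rgb_to_332(const unsigned char *rgb, unsigned char *index,
                    unsigned char *cmap, int width, int height)
    {
        int i, n = width * height;

        for (i = 0; i < 256; i++) {              /* build the fixed palette */
            cmap[3*i + 0] = (unsigned char)(((i >> 5) & 7) * 255 / 7);
            cmap[3*i + 1] = (unsigned char)(((i >> 2) & 7) * 255 / 7);
            cmap[3*i + 2] = (unsigned char)(( i       & 3) * 255 / 3);
        }
        for (i = 0; i < n; i++) {                /* map each pixel */
            unsigned char r = rgb[3*i], g = rgb[3*i + 1], b = rgb[3*i + 2];
            index[i] = (unsigned char)
                ((r & 0xE0) | ((g & 0xE0) >> 3) | (b >> 6));
        }
    }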

------------------------------

Date: Mon, 30 Jul 90 17:52:44 bst
From: Eduardo Bayro <eba@computing-maths.cardiff.ac.uk>
Subject: fractals and texture analysis
Organization: Univ. of Wales Coll. of Cardiff, Dept. of Electronic & Systems Engineering

Does anybody know about the use of fractals in image processing
for texture analysis?  Fractals are used in chaos theory and in the
study of nonlinearities.  Answers by e-mail are appreciated.  Many thanks.
                      Eduardo Bayro

 Eduardo Bayro, School of Electrical, Electronic and Systems Engineering,
 University of Wales College of Cardiff, Cardiff, Wales, UK.
 Internet: eba%cm.cf.ac.uk@nsfnet-relay.ac.uk        Janet:  eba@uk.ac.cf.cm
 UUCP:     eba@cf-cm.UUCP or ...!mcsun!ukc!cf-cm!eba
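
[ One concrete starting point, sketched below as an illustration only;
  the image layout and function name are assumptions.  The box-counting
  dimension of a binary (e.g. thresholded or edge) image is a simple
  texture measure: count the boxes of side s containing at least one
  set pixel for several s, and take the slope of log N(s) against
  log(1/s). ]

    #include <math.h>

    /* Estimate the box-counting (fractal) dimension of a binary image.
       img: width*height bytes, nonzero = set pixel, row-major.
       Uses box sizes 1, 2, 4, ... and a least-squares fit of
       log N(s) against log(1/s); the slope is the dimension.        */
    double box_dimension(const unsigned char *img, int width, int height)
    {
        double sx = 0, sy = 0, sxx = 0, sxy = 0;
        int npts = 0, s;

        for (s = 1; s <= (width < height ? width : height) / 2; s *= 2) {
            long boxes = 0;
            int bx, by, x, y;

            for (by = 0; by < height; by += s)
                for (bx = 0; bx < width; bx += s) {
                    int hit = 0;
                    for (y = by; y < by + s && y < height && !hit; y++)
                        for (x = bx; x < bx + s && x < width; x++)
                            if (img[y * width + x]) { hit = 1; break; }
                    boxes += hit;
                }
            if (boxes > 0) {
                double lx = log(1.0 / s), ly = log((double)boxes);
                sx += lx; sy += ly; sxx += lx * lx; sxy += lx * ly;
                npts++;
            }
        }
        if (npts < 2)
            return 0.0;
        return (npts * sxy - sx * sy) / (npts * sxx - sx * sx);
    }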

------------------------------

Date: Tue, 31 Jul 90 00:12 EDT
From: GINDI%GINDI@Venus.YCC.Yale.Edu
Subject: postdoctoral position available

			YALE UNIVERSITY
     Postdoctoral Position in Neural Networks for Vision

One position is open within a research group interested in developing
neural network-based approaches to computer vision and image understanding
problems.  We are particularly interested in using model-based optimization
strategies for locating and quantifying objects and other image structures,
and automatically learning the characteristics of new ones; we are in the
process of extending these ideas to scale-space and other continuation
methods for optimization.  The group includes three faculty members,
three Ph.D. students and a full time programmer.  Collaboration with
researchers in biomedical and neurobiological image processing is also
possible.  The position is joint between the Departments of Computer
Science and Diagnostic Radiology.  In addition, the research group has
strong ties with faculty members in the Electrical Engineering Department.
Those who apply should have a Ph.D. in a neural network-related field such
as computer science, electrical engineering, applied mathematics or physics,
preferably with a strong background and coursework in image processing and
computer vision.  A strong programming ability is also preferred.  The
initial appointment will be for one year, renewable for a second year
contingent upon the availability of funds and by mutual agreement. Salary
will be based on background and experience, but is expected to be in the
$28K - $32K range.  Review of applications will begin immediately;
applications will be accepted until the position is filled.

Applicants should send a resume and the names and addresses of three
references to:

	Professor Eric Mjolsness,
	Department of Computer Science, Yale University
	P.O. Box 2158 Yale Station
	51 Prospect Street
	New Haven, Connecticut, 06520,

and should also, if possible, contact him by electronic mail at
	mjolsness@cs.yale.edu

OR write and email to:

	Professor Gene Gindi
	Department of Electrical Engineering
	Yale University
	P.O. Box 2157 Yale Station
	New Haven, Connecticut 06520

	gindi@venus.ycc.yale.edu

------------------------------

Date: Wed, 1 Aug 90 12:06:50 BST
From: D K Naidu <dkn@aifh.edinburgh.ac.uk>
Subject: 3 Research Posts - Computer Vision

        Three Research Posts - Computer Vision

        University of Edinburgh
        Department of Artificial Intelligence

Applications are invited for three researchers  to  work  in  the
Department  of Artificial Intelligence on a European Institute of
Technology funded research project entitled "Surface-Based Object
Recognition for Industrial Automation" and a SERC funded IED pro-
ject entitled "Location and Inspection from Range Data".  Princi-
pal  investigators  on the projects are Dr. Robert Fisher and Dr.
John Hallam.

The projects investigate the use of laser-stripe based range data
to  identify  and locate parts as they pass down a conveyor belt.
The vision research to be undertaken includes topics in:  surface
patch  extraction  from  range  data,  surface  patch clustering,
geometric object modeling, model  matching,  geometric  reasoning
and parallel image processing.  The projects build on substantial
existing research.

One researcher will be expected to take a  leading  role  in  the
scientific  direction of the projects (5 research staff total) as
well as undertake scientific  research.   The  second  researcher
will be more involved in software implementation and testing, but
will be expected to undertake some original research.  The  third
researcher    will   be   mainly   investigating   parallel   re-
implementations  of  existing   serial   vision   algorithms   on
transputer-based  systems.   Applicants  for  the first two posts
should have a PhD (or comparable experience)  in  an  appropriate
area, such as computer vision, image processing, computer science
or mathematics.  Applicants for the third  post  should  have  at
least a BSc in an appropriate area.

In general, applicants should have experience with the C program-
ming  language.   Applicants  with experience in computer vision,
the UNIX operating system, the C++ language or parallel  process-
ing on transputer systems will be preferred.

The details are:

Post Salary (Pounds)  Scale         Duration                  Start Date
==========================================================================
1    11399-20372    AR1a/AR2    1 year (probable 2nd year)    Nov 1st
2    11399-14744    AR1b/AR1a   2 years                       Now
3    11399-14744    AR1b/AR1a   1 year (probable 2nd year)    Nov 1st

Placement for all posts is according to age, experience and qual-
ifications.

Applications should include a curriculum vitae (3 copies) and the
names  and  addresses  of two referees, and should be sent to the
Personnel Department, University of Edinburgh, 63  South  Bridge,
Edinburgh, EH1 1LS by August 21, 1990, from whom further particu-
lars can be obtained.  In your application letter,  please  quote
reference  number  5796,  and indicate for which of the posts you
are applying.

------------------------------

End of VISION-LIST
********************

Vision-List-Request@ADS.COM (Vision-List moderator Phil Kahn) (08/10/90)

Vision-List Digest	Thu Aug 09 14:11:54 PDT 90

 - Send submissions to Vision-List@ADS.COM
 - Send requests for list membership to Vision-List-Request@ADS.COM

Today's Topics:

 Suppliers of real time digital video display equipment ??
 Gould image processor for trade
 A bug in PBM software ( .pcx => pbm)
 Research Associate post in Neural Networks and Image Classification
 New book
 CVPR-91 CALL FOR PAPERS
 Vision Conference
 Summary of Computer Controllable Lenses (long)

----------------------------------------------------------------------

Date: Tue, 7 Aug 90 09:17:24 +0200
From: jonh@tele.unit.no
Subject: Suppliers of real time digital video display equipment ??

Hi!

	Does anybody know of  any suppliers of equipment for 
real time display of digitized video sequences? Here is some
background:

	Our signal processing group has for some time been involved
in research work in still image coding and to a lesser extent in coding
of image sequences. As we are planning to increase our activities
in the coding of image sequences, we are contemplating acquiring equipment
for real-time display of digitized sequences, both monochrome and color.
All the coding algorithms will be running on a network of SUN SPARC stations.
We are planning to work on various image formats ranging from 352x288 pels
up to HDTV resolution, and at various frame rates.
Also, we would prefer equipment that is based on RAM rather
than real-time disks.

Any advice would be greatly appreciated!

	John Haakon Husoy
	The Norwegian Institute of Technology
	Department of Electrical and Computer Engineering
	7034 Trondheim - NTH

	NORWAY
	email: jonh@tele.unit.no
	tel:   ++ 47 + 7 + 594453
	fax:   ++ 47 + 7 + 944475

------------------------------

Date: 8 Aug 90 16:40:35 GMT
From: hughes@azroth.csee.usf.edu (Ken Hughes)
Subject: Gould image processor for trade
Organization: University of South Florida, College of Engineering

The department of Computer Science and Engineering here has a Gould IP8400
image processor that they are considering decommissioning and selling.  It
was suggested to us that instead of selling this system we might consider
trading it with another organization for a mobile robot platform somewhere
along the lines of a Cybermation robot.  If you or someone you know
might be interested in such a trade, please contact me via e-mail.

Ken Hughes  (hughes@sol.csee.usf.edu) |  "If you were happy every day of
sysadm	     (root@sol.csee.usf.edu)  |   your life you wouldn't be human,
Dept of Comp Sci and Eng	      |   you'd be a game show host."
University of South Florida	      |     Winona Ryder, in "Heathers"

------------------------------

Date: Tue, 7 Aug 90 20:03:07 GMT
From: brian@ucselx.sdsu.edu (Brian Ho)
Subject: A bug in PBM software ( .pcx => pbm)
Organization: San Diego State University Computing Services

I have found a bug in the PBM software.  The bug is in the program which
converts a .PCX (paint brush image format) file to a pbm portable bitmap.

The bug (very small) appears at line 102 of the program "pcxtopbm.c", which
is under /pbmplus/pbm/.  The original code reads:

    if (b & 0xC0 == 0xC0) 
      ..
      ..

In C the == operator binds more tightly than &, so every compiler (our
Sun 3/50 included) interprets this as

    if (b & (0xC0 == 0xC0))
      ..
      ..

that is, as (b & 1), which is not what was intended.  Therefore, I have
simply put parentheses around the '&' clause

    if ((b & 0xC0) == 0xC0)
      ..
      ..

and it works great!

Since this is standard C precedence rather than a system quirk, the fix
applies on other systems too.. keep me posted...

my E-mail address is

brian@yucatec.sdsu.edu
eden@cs.sdsu.edu
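
[ A tiny self-contained check of the precedence behaviour described
  above; everything in it is illustrative and not from the original
  post. ]

    #include <stdio.h>

    int main(void)
    {
        unsigned char b = 0xC4;          /* top two bits set, low bit clear */

        printf("%d\n", b & 0xC0 == 0xC0);    /* parses as b & (0xC0 == 0xC0),
                                                i.e. b & 1: prints 0        */
        printf("%d\n", (b & 0xC0) == 0xC0);  /* the intended test: prints 1 */
        return 0;
    }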

------------------------------

Date:        6 Aug 1990 12:35:37 GMT
From: austin@minster.york.ac.uk
Subject: Research Associate post in Neural Networks and Image Classification

                     University of York
     Departments of Computer Science, Electronics and Psychology

                 Research Associate post in
          Neural Networks and Image Classification

Applications are invited for a three-year research associateship
within the departments of Computer Science, Electronics and
Psychology on a SERC image interpretation research initiative.
Applicants should preferably have programming and research
experience of image interpretation, neural networks and psychology.

The project is aimed at the development of neural models of
classification tasks and involves characterizing the processes
involved in learning and applying classification skills in clinical
screening tasks.  A major aim is to develop models based on current
advances in neural networks.

Salaries will be on the 1A scale (11,399 - 13,495).  Informal
enquiries may be made to Dr. Jim Austin (0904 432734, email:
austin@uk.ac.york.minster).  Further particulars may be obtained
from The Registrar's Department, University of York, Heslington,
York, YO1 5DD, UK, to whom three copies of a curriculum vitae
should be sent.  The closing date for applications is 24 Aug 1990.
Please quote reference number J2.

		       August 6, 1990

------------------------------

Date: Tue, 7 Aug 90 09:47:36 PDT
From: shapiro@lillith.ee.washington.edu (Linda Shapiro)
Subject: new book

One more entry in the vision books category.  A new vision book by
Robert Haralick and Linda Shapiro is being completed this summer. It
will be published by Addison-Wesley. The following is the table of
contents.

COMPUTER AND ROBOT VISION
Table of Contents

1.  Computer Vision Overview
2.  Binary Machine Vision: Thresholding and Segmentation
3.  Binary Machine Vision: Region Analysis
4.  Statistical Pattern Recognition
5.  Mathematical Morphology
6.  Neighborhood Operators
7.  Conditioning and Labeling
8.  The Facet Model
9.  Texture
10. Image Segmentation
11. Arc Extraction and Segmentation
12. Illumination Models
13. Perspective Projection Geometry
14. Analytic Photogrammetry
15. Motion and Surface Structure 
    from Time Varying Image Sequences
16. Image Matching
17. The Consistent Labeling Problem
18. Object Models and Matching
19. Knowledge-Based Vision
20. Accuracy
21. Glossary of Computer Vision Terms

------------------------------

Date: Tue, 7 Aug 1990 10:43:32 PDT
From: Gerard Medioni <medioni%iris.usc.edu@usc.edu>
Subject: CVPR-91 CALL FOR PAPERS

                         CALL FOR PAPERS

 IEEE COMPUTER SOCIETY CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION
                  Maui Marriott on Kaanapali Beach
                    Lahaina, Maui, HI 96761
                      June 3-6, 1991

 GENERAL CHAIR:
Shahriar Negahdaripour, Department of Electrical Engineering,
University of Hawaii, Honolulu, HI 96822, 
E-mail: shahriar@wiliki.eng.hawaii.edu

 PROGRAM CO-CHAIRS:
Berthold K.P. Horn, MIT Artificial Intelligence Lab, 545 Technology 
Square, Cambridge, MA  02139, E-mail: bkph@ai.mit.edu

 Gerard Medioni, Institute for Robotics and Intelligent Systems, 
232 Powell Hall, University of Southern California, Los Angeles, CA 90089,
E-mail: medioni@iris.usc.edu

 LOCAL ARRANGEMENTS CHAIR:
Tep Dobry, Department of Electrical Engineering, University of
Hawaii  Manoa, Honolulu, HI 96822, E-mail tep@wiliki.eng.hawaii.edu

 PROGRAM COMMITTEE

 N. Ahuja     A. Blake           K. Ikeuchi     J. Malik     R. Szeliski  
 N. Ayache    A. Bovik           K. Kanatani    J. Mundy     D. Terzopoulos
 D. Ballard   E. Delp            C. Koch        R. Nevatia   W. Thompson   
 H. Baker     K. Ganapathy       C. Liedtke     H. Samet     A. Yuille     
 B. Bhanu     D. Huttenlocher    J. Little      B. Schunck   S. Zucker 

THE PROGRAM
The program consists of high quality contributed papers on all aspects
of computer vision and pattern recognition. Papers will be
refereed by the members of the program committee. Accepted papers will
be presented as long papers in a single track, short papers in two
parallel tracks, and poster papers.

PAPER SUBMISSION
Four copies of complete papers should be sent to Gerard Medioni at the address
given above by November 12, 1990. The paper should include two title 
pages, only one of which contains the names and addresses of the authors, to 
permit an anonymous review process. Both title pages should contain the title 
and a short (up to 200 words) abstract. Authors  MUST restrict the 
length of the papers to 30 pages, which includes everything, meaning the two 
title pages, text (double-spaced), figures, tables, bibliography, etc. 
Authors will be notified of acceptance by February 15, 1991. Final 
camera-ready papers, typed on special forms, will be due no later than March 
15, 1991.


 FOR FURTHER INFORMATION
 Please write to:

 CVPR-91, The Computer Society of IEEE, 1730 Massachusetts Ave, N.W.,
Washington, DC 20036-1903.

------------------------------

Date: 9 Aug 90 16:17:29 GMT
From: colin@nrcaer.UUCP (Colin Archibald)
Subject: Vision Conference
Keywords: Call for Papers
Organization: NRCC-Aeroacoustics, Ottawa, Ontario

V i s i o n  I n t e r f a c e  ' 9 1

	Calgary, Alberta, Canada
	3-7 June 1991

CALL FOR PAPERS

Vision Interface '91 is the fifth Canadian Conference devoted to
computer vision, image processing, and pattern recognition.  This is
an annual conference held in various Canadian cities and is sponsored
by the Canadian Image Processing and Pattern Recognition Society.  The 1991
conference will be held in Calgary, Alberta, June 3-7 1991 in conjunction
with Graphics Interface '91. 

IMPORTANT DATES:

Four copies of a Full Paper due:	31 Oct. 1990
Tutorial Proposals due:			15 Nov. 1990
Authors Notified:			 1 Feb. 1991
Cover Submissions due:			 1 Feb. 1991
Final Paper due:			29 Mar. 1991

TOPICS:

Contributions are solicited describing unpublished research results and
applications experience in vision, including but not restricted to the
following areas:

	Image Understanding and Recognition	Modeling of Human Perception
	Speech Understanding and Recognition	Specialized Architecture
	Computer Vision				VLSI Applications
	Image Processing			Realtime Techniques
	Robotic Perception			Industrial Applications
	Pattern Analysis & Classification	Biomedical Applications
	Remote Sensing				Intelligent Autonomous Systems
	Multi-sensor Data Fusion		Active Perception


Four copies of full papers should be submitted to the Program Co-chairmen
before Oct.31 1990. Include with the paper full names, addresses, phone
numbers, fax numbers and electronic mail addresses of all the authors. One
author should be designated "contact author"; all subsequent correspondence
regarding the paper will be directed to the contact author. The other
addresses are required for follow-up conference mailings, including the
preliminary program.

FOR GENERAL INFORMATION:		SUBMIT PAPERS TO:

Wayne A. Davis				Colin Archibald and Emil Petriu
General Chairman			VI '91 Program Co-chairmen
Department of Computing Science		Laboratory for Intelligent Systems
University of Alberta			National Research Council
Edmonton, Alberta, Canada		Ottawa, Ontario, Canada
T6G 2H1					K1A 0R6
Tel:	403-492-3976			Tel:	613-993-6580

------------------------------

Date: Tue, 7 Aug 90 14:17:40 BST
From: Alan McIvor <bprcsitu!alanm@relay.EU.net>
Subject: Summary of Computer Controllable Lenses

Hi,
	I recently placed the following request:

> Subject: Computer Controllable Lenses
> 
> Hi,
> 	We are currently looking for a lens for our vision system with
> computer controllable focus, focal length, and aperture. Do any of you
> know of a source of such lenses? We have found many motorized lenses 
> but most have auto-apertures and no feedback of settings. 
> 	I recall several years ago that a company called Vicon made such
> a lens but I don't have any details. Anybody know how to get hold of
> them?
> 
> Thanks,
> 
> Dr Alan M. McIvor		
> BP International Ltd    	ukc!bprcsitu!alanm
> Research Centre Sunbury	alanm%bprcsitu.uucp@uk.ac.ukc
> Chertsey Road			bprcsitu!alanm@relay.EU.NET
> Sunbury-on-Thames       	uunet!ukc!bprcsitu!alanm
> Middlesex TW16 7LN		Tel: +44 932 764252
> U.K.                          Fax: +44 932 762999
> 


	What follows is a summary of the responses that I received.


	Many companies make motorized lenses but few make them with
feedback facilities for accurate control of the position. The feedback
is almost invariably provided by potentiometers.

	Several companies make lenses with potentiometer feedback of the
zoom and focus setting but not the aperture. This is either auto-iris or
open-loop. Examples are:


* Cosmicar (Chori America, Inc.)

  F. Maliwat
  Electronics Division
  350 fifth Ave., Suite 3323
  New York, N.Y. 10118
  (800) 445-4233
  (212) 563-3264

  Vista Vision Systems
  Levanroy House
  Deanes Close
  Steventon
  Oxfordshire OX136SR
  UK
  tel: +44 235 834466
  fax: +44 235 832540


* Ernitec Mechatronic Lenses
  [Henrik I. Christensen <hic@vision.auc.dk> and Kourosh Pahlavan
   <kourosh@ttt1.bion.kth.se> are using these. ]

  Ernitec A/S
  Fjeldhammervej 17
  DK 2610 Rodovre
  Denmark
  Tel +45 31 70 35 11
  Fax +45 31 70 11 55

*  Ernitec UK
  39/41 Rowlands Road
  Worthing
  West Sussex BN11 3JJ
  tel: 0903 30482
  fax: 0903 213333


* Molynx Ltd
  Albany Street
  Newport
  Gwent NP9 5XW
  UK
  Tel: +44 633 821000
  Fax: +44 633 850893


* Vicon Industries, Inc.
  525 Broad Hollow Rd.
  Melville, NY 11747 USA
  Phone:  800-645-9116
  [ Chuck Steward <stewardc@turing.cs.rpi.edu> claims that the rotational
    accuracy is only 1 degree ]

* Video-Tronic
  Lahnstrasse 1
  2350 Neumunster 6
  Germany
  Phone 0 43 21 8 79 0
  Fax   0 43 21 8 79 97
  Telex 2 99 516 vido d


	The only lens that I could find that had potentiometer feedback of
all three axes is:

* TecSec TLZMNDP12575
  Vista Vision Systems
  Levanroy House
  Deanes Close
  Steventon
  Oxfordshire OX136SR
  UK
  tel: +44 235 834466
  fax: +44 235 832540


	Other approaches to the construction of a computer controllable lens
that were suggested are:

* take a normal zoom lens, mount a collar around the
  focus sleeve, the aperture sleeve and the zoom barrel separately, and
  then turn each collar with a high precision stepper motor.
	Chuck Steward <stewardc@turing.cs.rpi.edu> is currently doing
  this.
  [ This has the benefit of allowing you to use 35mm camera lenses which
    have better optical performance than CCTV lenses]

* from Don Gennery <GENNERY@jplrob.jpl.nasa.gov>
  We also recently talked to Mechanical Technology Inc. of Latham, N. Y.,
  about the possibility of them making some computer-controlled lenses for us.


* from Lynn Abbott <abbott@vtcpe4.dal.ee.vt.edu>
       I assembled a camera system with 2 motorized lenses about 3 years ago
  at the Univ. of Illinois, with N. Ahuja.  At the time, several companies
  sold motorized lenses, but we could not locate any company which 
  produced a controller for these lenses which would interface with a
  host processor.

  We located a small company which specialized in dc servo controllers.  
  This company, TS Products, was willing to customize a pair of motorized
  Vicon lenses so that one of their controllers would drive the lens actuators.
  This controller accepts commands from a host over an RS-232 line or
  via the IEEE-488 bus.  They were willing to work with us in specifying
  the system, and we were happy with the results.
  They were at
		TS Products, Inc.
		12455 Branford St. 
		Bldg-22
		Arleta, CA 91331 USA
		Phone:  818-896-6676

* Use an auto-everything 35mm lens and a lens adaptor.
	I have heard that at the Harvard Robotics Lab they use Canon EOS
  lenses which include all the motors, etc, and talk to the camera body
  via a 4pin serial connection, the protocol for which they have decoded.

* from Shelly Glaser 011 972 3 545 0060 <GLAS@taunivm.earn>
       vandalizing an amateur video cam-corder


* use a Sony B-mount teleconferencing lens via a mount adaptor. These
  have a serial interface for remote control. Very good optics - broadcast
  quality, but quite heavy units.

	Canon UK Ltd.,
	Canon House, Manor Road,
	Wallington, Surrey SM6 OAJ, UK
	(081) 773-3173

	Fujinon Inc.,
	3N, 125 Springvale,
	West Chicago, IL 60185
	(312) 231-7888

* from Reg Willson <Reg.Willson@ius1.cs.cmu.edu>
  A second alternative is to have a motorized lens custom made.  Computer
  Optics is a small company that will build a motorized lens to your specs,
  but we found them to be far too expensive.  They quoted us $US 30,000 for
  the lens we specified - at which point we decided to build our own.
	Computer Optics Inc.,
	G. Kane
	120 Derry Road, 
	P.O. Box 7
	Hudson, New Hampshire 03051
	(603) 889-2116

	Vista Vision Systems
  	Levanroy House
  	Deanes Close
  	Steventon
  	Oxfordshire OX136SR
  	UK
  	tel: +44 235 834466
  	fax: +44 235 832540



	Given a controllable lens with DC servo motors and potentiometer
feedback, there is also the question of how to control it. Unfortunately
most available servo motor controllers assume feedback via resolvers or
optical encoders, so are inapplicable.  Possible solutions are:


1) Replace the motors on the lens with servo motors with optical encoders
   or stepper motors, and use an available controller

   - from Reg Willson <Reg.Willson@ius1.cs.cmu.edu>
   The current lens we have is a Cosmicar C31211 (C6Z1218M3-2) TV grade zoom
   lens (apx $US 560).  We replaced the DC servo motors and drive train with AX
   series digital micro stepping motors from Compumotor.  The stepping motors
   have a simple RS232 interface and have far more accuracy and precision than
   the DC servo motors they replaced.  Unfortunately they're rather expensive
   ($US 1700 / degree of freedom).  We also had to have a machinist build an
   assembly for the lens and motors.
	Compumotor Division, Parker Hannifin Corporation
	5500 Business Park Drive
	Rohnert Park, CA 94928
	(800) 358-9070
	(707) 584-7558

   - Galil motor controllers for DC Servo motors
	Galil Motion Control, Inc.
	1054 Elwell Court
	Palo Alto, CA 94303
	tel: (415) 964-6494
	fax: (415) 964-0426

	Naples Controls Ltd
	White Oriels
	Chaddleworth
	Berkshire RG16 0EH
	UK
	tel: 04882 488
	fax: 04882 8802

   - Digiplan motor controllers for Stepper motors

   - Themis 4-axis motor controller for servos

   - MDS-330 Servo Interface Card
     AMC Ltd
     Unit 2, Deseronto Industrial Estate
     St Mary's Road, Langley, Berks SL3 7EW
     tel: (0753) 580660 Fax: (0753)580653
 
   - PeP VMIC intelligent motion controller
     AMC Ltd
     Unit 2, Deseronto Industrial Estate
     St Mary's Road, Langley, Berks SL3 7EW
     tel: (0753) 580660 Fax: (0753)580653
  

2) Use a dedicated PID controller for the control loop and a digital-to-
   analog converter in the computer to provide the setpoint (i.e., desired
   zoom, focus, aperture) - one per axis.

   Example PID controllers are:

	RS Servo Control Module
	Stock Number 591-663
	RS Components, UK.

	PVP 142 Linear Servo Amplifier
	Naples Controls Ltd
	White Oriels
	Chaddleworth
	Berkshire RG16 0EH
	UK
	tel: 04882 488
	fax: 04882 8802
	

   Example DACs are:

	PeP VDAD
	AMC Ltd
	Unit 2, Deseronto Industrial Estate
	St Mary's Road, Langley, Berks SL3 7EW
	tel: (0753) 580660 Fax: (0753)580653

	MDS-330 Servo Interface Card
	UKP 1195
	AMC Ltd
	Unit 2, Deseronto Industrial Estate
	St Mary's Road, Langley, Berks SL3 7EW
	tel: (0753) 580660 Fax: (0753)580653

	MDS-620 Analogue Output card
	AMC Ltd
	Unit 2, Deseronto Industrial Estate
	St Mary's Road, Langley, Berks SL3 7EW
	tel: (0753) 580660 Fax: (0753)580653

	BVME240 Analogue Output Module
	BVM Limited
	Flanders Road
	Hedge End, Southampton
	SO3 3LG
	tel 0703 270770
	fax 0489 783589

	ACROMAG AVME9210/15
	Universal Engineering and Computing Systems
	5/11 Tower St
	Newtown
	Birmingham B19 3UY
	tel: 021-359 1749
	fax: 021-333 3137

	Motorola MVME605
	Thame Microsystems
	Thame Park Road, Thame
	Oxford OX9 3UQ
	Tel: 0844 261456
	Fax: 0844 261682

	Burr-Brown MVP904
	Thame Microsystems
	Thame Park Road, Thame
	Oxford OX9 3UQ
	Tel: 0844 261456
	Fax: 0844 261682


3) Use an ADC to read the feedback potentiometer and a DAC to provide the
   motor drive voltage (via a power amp); a minimal software loop sketch
   follows the board listings below:

   Example ADCs are:

	PeP VADI
	AMC Ltd
	Unit 2, Deseronto Industrial Estate
	St Mary's Road, Langley, Berks SL3 7EW
	tel: (0753) 580660 Fax: (0753)580653

	MDS-310
	AMC Ltd
	Unit 2, Deseronto Industrial Estate
	St Mary's Road, Langley, Berks SL3 7EW
	tel: (0753) 580660 Fax: (0753)580653

	BVME250 Analogue Input Module
	BVM Limited
	Flanders Road
	Hedge End, Southampton
	SO3 3LG
	tel 0703 270770
	fax 0489 783589

	ACROMAG AVME9320
	Universal Engineering and Computing Systems
	5/11 Tower St
	Newtown
	Birmingham B19 3UY
	tel: 021-359 1749
	fax: 021-333 3137

	Burr-Brown MVP901
	Thame Microsystems
	Thame Park Road, Thame
	Oxford OX9 3UQ
	Tel: 0844 261456
	Fax: 0844 261682


   Example combined systems are (with onboard CPUs):

	Burr Brown MPV940 controller + ACX945 Analog I/O module
	Thame Microsystems
	Thame Park Road, Thame
	Oxford OX9 3UQ
	Tel: 0844 261456
	Fax: 0844 261682

	Scan Beam A/S SB100
	Scan Beam A/S
	Rosendalsvej 17
	DK-9560 Hadsard
	Tel: +45 98 57 15 99
	Fax: +45 98 57 48 87
	[this has the advantage of onboard power amps. It is being used by
	 Henrik I. Christensen <hic@vision.auc.dk> who warns that the
	 company is unstable.]

	BVME347 + IP-DAC + IP-ADC
	BVM Limited
	Flanders Road
	Hedge End, Southampton
	SO3 3LG
	tel 0703 270770
	fax 0489 783589
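
[ To make option 3 concrete, a minimal software control-loop sketch;
  read_adc(), write_dac(), the gains, ranges and loop rate are assumed
  placeholders, not references to any particular board listed above. ]

    #include <unistd.h>

    /* Assumed stand-ins for whatever calls the chosen boards provide. */
    extern int  read_adc(int channel);              /* feedback pot reading */
    extern void write_dac(int channel, int value);  /* motor drive voltage  */

    /* Close one lens axis (zoom, focus or iris) around a position
       setpoint with a simple PID loop.  A real loop would also need an
       exit condition and anti-windup on the integral term.            */
    void servo_axis(int channel, int setpoint)
    {
        double kp = 0.8, ki = 0.05, kd = 0.2;       /* illustrative gains */
        double integral = 0.0, prev_error = 0.0;

        for (;;) {
            double error = (double)(setpoint - read_adc(channel));
            double drive;

            integral += error;
            drive = kp * error + ki * integral + kd * (error - prev_error);
            prev_error = error;

            if (drive >  2047.0) drive =  2047.0;   /* clamp to DAC range */
            if (drive < -2048.0) drive = -2048.0;
            write_dac(channel, (int)drive);

            usleep(1000);                           /* ~1 ms loop period */
        }
    }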
 
 Dr Alan M. McIvor 
 BP International Ltd            ukc!bprcsitu!alanm 
 Research Centre Sunbury         alanm%bprcsitu.uucp@uk.ac.ukc 
 Chertsey Road                   bprcsitu!alanm@relay.eu.net 
 Sunbury-on-Thames               uunet!ukc!bprcsitu!alanm 
 Middlesex TW16 7LN              Tel: +44 932 764252 
 U.K.                            Fax: +44 932 762999

------------------------------

End of VISION-LIST
********************

Vision-List-Request@ADS.COM (Vision-List moderator Phil Kahn) (08/14/90)

Vision-List Digest	Mon Aug 13 12:56:33 PDT 90

 - Send submissions to Vision-List@ADS.COM
 - Send requests for list membership to Vision-List-Request@ADS.COM

Today's Topics:

 Small camera lens
 KBVision
 Rotated Email Address
 Non-interlaced rs-170 monitor?
 Info Wanted: Vision Chip/Hardware Manufacturers

----------------------------------------------------------------------

Date: Fri, 10 Aug 90 9:27:10 MET DST
From: bellutta@irst.it (Paolo Bellutta)
Subject: small camera lens

Where could I find a source for small lenses for a CCD camera? The lenses
should be small enough to be glued directly onto the CCD chip. I was thinking
of the lenses mounted on disposable cameras or the more expensive
compact-size cameras. The quality of the lens itself is not a major concern.
A non-auto-iris wide-angle lens would be better, but I'm open to suggestions.

Please reply by e-mail; if I receive a lot of "me too" responses I'll
summarize to the list.

Paolo Bellutta
I.R.S.T.                vox: +39 461 814417
loc. Pante' di Povo     fax: +39 461 810851
38050 POVO (TN)         e-mail: bellutta@irst.uucp
ITALY                           bellutta%irst@uunet.uu.net

------------------------------

Date: 10 Aug 90 20:20:19 GMT
From: cc@sam.cs.cmu.edu (Chiun-Hong Chien)
Subject: KBVision
Organization: Carnegie-Mellon University, CS/RI

I am in the process of evaluating the feasibility of using KBVision for vision
research.  I would appreciate any comments on and experiences with KBVision.
I am particularly interested in knowing the following.

1) How powerful/flexible/popular is the constraint module?  It seems to me
   that while quite a few people are impressed by the Executive Interface
   (for image display) and the Image Examiner, very few people have much
   experience with the constraint module.  What are the reasons?

2) How does the knowledge module compare with other intelligent-system tools
   such as KEE?

3) Can KBVision be used in a real-time environment, or embedded in a
   robotics system?  If not, how much overhead is involved in converting
   between KBVision programs and regular C programs?

4) Others.

Thanks in advance, Chien

------------------------------

Date: 10 Aug 90 14:41:11 GMT
From: uh311ae@sun7.lrz-muenchen.de (Henrik Klagges)
Subject: Rotated Email Address
Keywords: Thanks & Address Change
Organization: LRZ, Bavarian Academy of Sciences, W. Germany

Hello, 

thanks a lot for the replies to my article concerning pattern
recognition of rotated molecules scattered over a surface ("Rotated
molecules & rotating physics student"). I think the "best" solution is
using a genetic optimizer that gets a parameter vector (angle &
displacement) as input and scans for a given pattern. If anyone would
like the C++ (cfront 2.0) code for a genetic algorithm (with bugs 8-)),
please email me.  To circumvent endless mail problems there is a new
BITNET address that works; at least a message from
koza@Sunburn.Stanford.EDU came through.
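
A minimal sketch of such an optimizer, in C rather than C++ to keep it
short.  match_score() is a synthetic stand-in for the real comparison of
the pattern against the surface image at the candidate pose (angle, dx, dy);
here it simply peaks at an arbitrary "true" pose so the sketch runs as is:

    #include <stdio.h>
    #include <stdlib.h>

    #define POP     50
    #define GENS   200
    #define TWO_PI  6.283185307

    typedef struct { double angle, dx, dy, fitness; } Individual;

    static double match_score(double angle, double dx, double dy)
    {
        double da = angle - 1.0, dxx = dx - 10.0, dyy = dy + 5.0;
        return -(da * da + dxx * dxx / 100.0 + dyy * dyy / 100.0);
    }

    static double frand(double lo, double hi)
    {
        return lo + (hi - lo) * rand() / (double)RAND_MAX;
    }

    static int by_fitness(const void *a, const void *b)     /* fittest first */
    {
        double fa = ((const Individual *)a)->fitness;
        double fb = ((const Individual *)b)->fitness;
        return (fa < fb) - (fa > fb);
    }

    int main(void)
    {
        Individual pop[POP];
        int i, g;

        for (i = 0; i < POP; i++) {                  /* random initial population */
            pop[i].angle   = frand(0.0, TWO_PI);
            pop[i].dx      = frand(-50.0, 50.0);
            pop[i].dy      = frand(-50.0, 50.0);
            pop[i].fitness = match_score(pop[i].angle, pop[i].dx, pop[i].dy);
        }

        for (g = 0; g < GENS; g++) {
            qsort(pop, POP, sizeof pop[0], by_fitness);
            for (i = POP / 2; i < POP; i++) {        /* replace the worse half */
                int a = rand() % (POP / 2);          /* parents from the better half */
                int b = rand() % (POP / 2);
                /* crossover (averaging, naive about angle wrap-around) + mutation */
                pop[i].angle = 0.5 * (pop[a].angle + pop[b].angle) + frand(-0.05, 0.05);
                pop[i].dx    = 0.5 * (pop[a].dx + pop[b].dx) + frand(-1.0, 1.0);
                pop[i].dy    = 0.5 * (pop[a].dy + pop[b].dy) + frand(-1.0, 1.0);
                pop[i].fitness = match_score(pop[i].angle, pop[i].dx, pop[i].dy);
            }
        }
        qsort(pop, POP, sizeof pop[0], by_fitness);
        printf("best pose: angle %.3f  dx %.1f  dy %.1f  (score %.4f)\n",
               pop[0].angle, pop[0].dx, pop[0].dy, pop[0].fitness);
        return 0;
    }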

Best regards,
    Henrik Klagges

Scanning Tunnel Microscope Group at University of Munich
and LRZ, Bavarian Academy of Sciences
Stettener Str. 50, 8210 Prien
Try : uh311ae@DM0LRZ01.BITNET (works !)
      uh311ae@sun7.lrz-muenchen.de (?)

------------------------------

Date: Fri, 10 Aug 90 18:42 PDT
From: H. Keith Nishihara <hkn@natasha>
Subject: non-interlaced rs-170 monitor?
Phone: (415)328-8886
Us-Mail: Teleos Research, 576 Middlefield Road, Palo Alto, CA 94301

I'm looking for a video monitor to use with cameras capable of
producing RS-170-like but non-interlaced video (Panasonic
GP-MF-702s).  We need this mode to support a pipelined Laplacian of
Gaussian convolver we are building.

Any suggestions for a good (but not too expensive) monitor?  Someone
suggested one of the newer multisync monitors made for PCs and the
like; does anyone know whether that works?

thanks -- Keith    hkn@teleos.com

------------------------------

Date: Mon, 13 Aug 90 16:45:03 BST
From: S.Z.Li@ee.surrey.ac.uk
Subject: Info Wanted: Vision Chip/Hardware Manufacturers
Organization: University of Surrey, Guildford, Surrey, UK. GU2 5XH

Hi, netters:

I am interested in contacting chip/hardware manufacturers for vision
and neural nets. Do you know who/where they are? If you do, could 
you e-mail me the information? My address is: S.Z.Li@ee.surrey.ac.uk.

Many thanks in advance.

Stan

------------------------------

End of VISION-LIST
********************

Vision-List-Request@ADS.COM (Vision-List moderator Phil Kahn) (08/19/90)

Vision-List Digest	Sat Aug 18 14:15:59 PDT 90

 - Send submissions to Vision-List@ADS.COM
 - Send requests for list membership to Vision-List-Request@ADS.COM

Today's Topics:

 Re: DataCube Users Group
 Hand-Eye Calibration for Laser Range Sensors
 Suppliers of real-time digital video equipment
 Post-Doc Position in Medical Imaging, CAS at the Stanford Robotics Lab
 SIEMENS Job Announcement
 Proceedings for the AAAI-90 Workshop on Qualitative Vision
 NEW JOURNAL -SYSTEMS ENGINEERING- SPRINGER VERLAG

----------------------------------------------------------------------

Date: 14 Aug 90 01:01:15 GMT
From: razzell@cs.ubc.ca (Dan Razzell)
Subject: Re: DataCube Users Group ???
Organization: University of British Columbia, Vancouver, B.C., Canada

There is a mailing list called

  <datacube-list@media-lab.media.mit.edu>.

To get on the list, send a message to

  <datacube-list-request@media-lab.media.mit.edu>.

      .^.^.      Dan Razzell <razzell@cs.ubc.ca>
     . o o .     Laboratory for Computational Vision
     . >v< .     University of British Columbia
  ____mm.mm____

------------------------------

Date: Tue, 14 Aug 90 15:47:19 EDT
From: Jean Lessard <sade%ireq-robot.UUCP@Larry.McRCIM.McGill.EDU>
Subject: Hand-Eye Calibration for Laser Range Sensors

    We are in the process of completing the installation of two
different laser range finders (one axis type) in our robotics lab. The
first one, having a working distance of 15 cm to 100 cm, with a field
of view of approx. 45 deg. and weighing 1.2 Kg, is intended to be
mounted on a PUMA-760 robot for telerobotics research applied to live
line maintenance and repair. The other one, more compact and with a
much smaller work distance, will be mounted on a custom designed 6 dof
robot which itself is mounted on a rail for turbine blade repair.

    I am looking for information and/or references on:

  1)	Sensor positioning and mounting on the robot. I expect
      difficulties with the sensor and wires causing limited robot
      movements, etc.

  2)	Hand-Eye calibration algorithms for this type of sensor. Are
      there any techniques developed to accurately link the sensor
      reference frame to the robot reference frame? Any good practical
      algorithms would be welcome.

Jean Lessard				      jlessard@ireq-robot.hydro.qc.ca
Institut de recherche d'Hydro-Quebec		     jlessard@ireq-robot.uucp
Varennes, QC, Canada   J3X 1S1
PHONE: +1 514 652-8136			               FAX:   +1 514 652-8435

------------------------------

Date:        Wed, 15 Aug 90 09:35:53 BST
From: Adrian F Clark <alien%sx.ac.uk@pucc.PRINCETON.EDU>
Subject:     Suppliers of real-time digital video equipment

Someone was recently asking about equipment for digitising image
sequences in real time.  Here at Essex we do a lot of work on coding
moving sequences, packet video and the like, and we have recently
looked at similar equipment.  Our choice came down to two:

1. the Abekas A60 (which we eventually chose and with which we're very
satisfied).  This is based on two parallel transfer discs and holds 30
seconds worth of digital video as luminance/chrominance (the latter
sampled at half the rate, as you'd expect).  There's also a four-disc
system which holds 60 seconds.  The A60 expects to input
CCIR601-format video, but Abekas sell the A20, which converts RGB to
CCIR601 in real time.  Just plug them together and you're away.  The
A60 outputs CCIR601, RGB or lum/chrom.  In terms of display, the
system is limited to the size of a standard TV frame (575x768), though
you can load and read smaller frames if you want.  The A60 is hosted
off Ethernet and supports rcp, rsh, etc, which makes it easy to
transfer image data to/from it.  The main disadvantage is that it's
very noisy -- keep it in a soundproofed room or invest in some ear
plugs at the same time.  In the UK, Abekas is at Portman House, 12
Portman Road, Reading, Berks, RG3 1EA.  Tel: +44 734-585421.  Fax: +44
734-597267.  They don't do educational discounts (boo, hiss).

2. DVS of Hannover (Abekas is US/UK, incidentally) sell a RAM-based
system which is more flexible (in terms of image sizes) than the A60.
However, when I looked at them, they couldn't hold anything like as
much as the A60 and were somewhat more expensive.  I don't have any
info to hand, not even an address, but I believe their systems came in
VME cages, so you stand a chance of interfacing one to a SparcStation.
Dunno about drivers, though.  If interested parties want to contact
DVS, mail me and I'll have a look for their address.

 Adrian F. Clark                                      JANET: alien@uk.ac.essex
 INTERNET: alien%uk.ac.essex@nsfnet-relay.ac.uk          FAX: (+44) 206-872900
 BITNET: alien%uk.ac.essex@ac.uk              PHONE: (+44) 206-872432 (direct)
 Dept ESE, University of Essex, Wivenhoe Park, Colchester, Essex, C04 3SQ, UK.

------------------------------

Date: 15 Aug 90 17:52:02 GMT
From: sumane@zsazsa.stanford.edu (Thilaka Sumanaweera)
Subject: Post-Doc Position in Medical Imaging, CAS at the Stanford Robotics Lab

		  Post Doctoral Research Position
		Stanford Computer Aided Surgery Group
		       (Starting Fall, 1990)

	Summary:

	  The Stanford Computer Aided Surgery group, whose original
	goal was to provide intelligent software tools for Stereotaxic
	Surgery, is now moving into new areas such as Frameless
	Stereotaxic Surgery, Geometric and Biomechanic Modelling of the
	Spine, Stereotaxic Guided Clipping of AVM Feeders, Feature Space
	Merging of MR and CT data, and Robotic Manipulator Assisted
	Stereotaxic Surgery. The systems developed by this group are now
	being used at the Stanford Hospital during brain tumor retraction.

	In this group, we are concentrating on applying techniques
	from Computer Vision, Signal Processing and Robotics
	to medicine, especially surgery.

	The new Post-Doc will have the following duties:
	  1). To carry out independent research in related areas and
	      assist graduate students and surgeons in problem-solving.
	  2). To provide professional-quality systems administration support
	      in maintaining the computer system which is used in
	      the operating room during surgery.
	  3). To help build a set of state-of-the-art surgical
	      tools which will become standard in the future.

	Requirements:
	  1). PhD in Computer Science, Electrical Engineering,
	      Mechanical Engineering or a related field.
	  2). Expertise in system building in a Unix environment
	      using C, X Windows and LISP.
	  3). Ability to start work in Fall 1990.
	  4). Some knowledge of medicine is a plus but not necessary.

	Our computer facilities include:
	  1). Silicon Graphics Personal IRIS 4D/25 machines.
	  2). SONY NEWS Networkstations.
	  3). Symbolics LISP machines.
	  4). DEC 3100 Workstations.
	  5). A fully equipped Computer Vision Lab.
	  6). Accessibility to General Electric MR and CT scanners.

	Please send your resume to:
	 	Hiring Committee
		C/O Prof. Thomas O. Binford
		Post Doctoral Research Position in CAS
		Robotics Laboratory
		Computer Science Department
		Stanford University
		Stanford, CA 94305

		Internet: binford@cs.stanford.edu
		Fax:	  (415)725-1449

------------------------------

Date: Tue, 14 Aug 90 11:24:31 PDT
From: kuepper@ICSI.Berkeley.EDU (Wolfgang Kuepper)
Subject: SIEMENS Job Announcement

		IMAGE UNDERSTANDING and ARTIFICIAL NEURAL NETWORKS

	The Corporate Research and Development Laboratories of Siemens AG, 
	one of the largest companies worldwide in the electrical and elec-
	tronics industry, have research openings in the Computer Vision 
	as well as in the Neural Network Groups. The groups do basic and 
	applied studies in the areas of image understanding (document inter-
	pretation, object recognition, 3D modeling, application of neural 
	networks) and artificial neural networks (models, implementations, 
	selected applications). The Laboratory is located in Munich, an 
	attractive city in the south of the Federal Republic of Germany.

	Connections exist with our sister laboratory, Siemens Corporate 
	Research in Princeton, as well as with various research institutes 
	and universities in Germany and in the U.S. including MIT, CMU and 
	ICSI.

	Above and beyond the Laboratory facilities, the groups have a 
	network of Sun and DEC workstations, Symbolics Lisp machines, 
	file and compute servers, and dedicated image processing hardware.

	The successful candidate should have an M.S. or Ph.D. in Computer 
	Science, Electrical Engineering, or any other AI-related or 
	Cognitive Science field. He or she should preferably be able to 
	communicate in German and English.

	Siemens is an equal opportunity employer.

	Please send your resume and a reference list to
		Peter Moeckel
		Siemens AG
		ZFE IS INF 1
		Otto-Hahn-Ring 6
		D-8000 Muenchen 83
		West Germany
	e-mail: gm%bsun4@ztivax.siemens.com
	Tel. +49-89-636-3372
	FAX  +49-89-636-2393

	Inquiries may also be directed to
		Wolfgang Kuepper (on leave from Siemens until 8/91)
		International Computer Science Institute
		1947 Center Street - Suite 600
		Berkeley, CA 94704
	e-mail: kuepper@icsi.berkeley.edu
	Tel. (415) 643-9153
	FAX  (415) 643-7684

------------------------------

Date: Mon, 13 Aug 90 18:43:05 -0700
From: pkahn@deimos (Philip Kahn)
Subject: Proceedings for the AAAI-90 Workshop on Qualitative Vision

Copies of the proceedings from the AAAI-90 Workshop on Qualitative
Vision are available for $35 (in North America) and $45US
(international), and can be obtained by writing: 

	AAAI-90 Workshop on Qualitative Vision
	Advanced Decision Systems
	Mountain View, CA  94043-1230

When requesting a copy of the Proceedings, please make your check
(payable in US $) to Advanced Decision Systems (this includes postage
and handling), specify the complete mailing address to which the
proceedings should be mailed, and (if available) include your e-mail
address in case there are any questions or problems.


                AAAI-90 WORKSHOP ON QUALITATIVE VISION
                             July 29, 1990
                              Boston, MA

Qualitative descriptions of the visual environment are receiving
greater interest in the computer vision community.  This recent
increase in interest is partly due to the difficulties that often
arise in the practical application of more quantitative methods.
These quantitative approaches tend to be computationally expensive,
complex and brittle.  They require constraints which limit generality.
Moreover, inaccuracies in the input data often do not justify such
precise methods.  Alternatively, physical constraints imposed by
application domains such as mobile robotics and real-time visual
perception have prompted the exploration of qualitative mechanisms
which require less computation, have better response time, focus on
salient and relevant aspects of the environment, and use environmental
constraints more effectively.

The one-day AAAI-90 Workshop on Qualitative Vision seeks to bring
together researchers from different disciplines for the active
discussion of the technical issues and problems related to the
development of qualitative vision techniques to support robust
intelligent systems.  The Workshop examines aspects of the
methodology, the description of qualitative vision techniques, the
application of qualitative techniques to visual domains, and the role
of qualitative vision in the building of robust intelligent systems.

------------------------------

Date: Tue, 14 Aug 90 07:41:38 bst
From: eba@computing-maths.cardiff.ac.uk
Subject: NEW JOURNAL -SYSTEMS ENGINEERING- SPRINGER VERLAG
Organization: Univ. of Wales Coll. of Cardiff, Dept. of Electronic & Systems Engineering

**** NEW JOURNAL *** NEW JOURNAL *** NEW JOURNAL *** NEW JOURNAL ****

             -----------------------------------------
             [     JOURNAL OF SYSTEMS ENGINEERING    ]
             [    SPRINGER - VERLAG - INTERNATIONAL  ]
             -----------------------------------------

AIMS AND SCOPE

	The Journal of Systems Engineering will be a 
refereed academic journal which publishes both fundamental
and applied work in the field of systems engineering. 
Its aim will be to provide an active forum for 
disseminating the results of research and advanced 
industrial practice in systems engineering, thereby
stimulating the development and consolidation of this field.
	The scope of the journal will encompass all subjects
pertinent to systems engineering: systems analysis, 
modelling, simulation, optimisation, synthesis, operation,
monitoring, identification, evaluation, diagnosis, control, 
etc. The journal will encourage the reporting of new 
theories, tools, algorithms, and techniques to support these
activities. It will also cover critical discussions of 
novel applications of systems principles and methods and 
of original implementations of different engineering systems,
including intelligent systems. 'Hard' and 'soft' systems from
all branches of engineering will be of interest to the 
journal. Papers on any systems aspect, from accuracy,
stability, noise immunity and complexity to efficiency, 
quality and reliability, will be considered.


ADDRESS

Please submit contributions to: 

	The Editor: Prof. D.T. Pham,
                    Journal of Systems Engineering,
                    University of Wales,
	       	    School of Electrical, Electronic and
                      Systems Engineering.
                    P.O. Box 904, Cardiff CF1 3YH,
                    United Kingdom
		    Tel. 0222- 874429
	            Telex 497368
		    Fax  0222- 874192 			  
 	            email  PhamDT@uk.ac.cardiff

------------------------------

End of VISION-LIST
********************

Vision-List-Request@ADS.COM (Vision-List moderator Phil Kahn) (08/23/90)

Vision-List Digest	Wed Aug 22 21:02:05 PDT 90

 - Send submissions to Vision-List@ADS.COM
 - Send requests for list membership to Vision-List-Request@ADS.COM

Today's Topics:

 Range sensors
 Call for papers: Vision Interface '91

----------------------------------------------------------------------

Date: Wed, 22 Aug 90 17:30:32 -0500
From: dmc@piccolo.ecn.purdue.edu (David M Chelberg)
Subject: Range sensors

I am considering acquiring a range sensor for our lab here at Purdue.
I am interested in any technical contacts at companies producing such
devices.  The requirements (which are somewhat flexible at this stage)
are:
	1.) At least 128x128 resolution (preferably 512x512)

	2.) 1 m working distance, with at least 0.5 m range, and
 	    1-5 mm depth resolution (preferably < 1 mm). Preferably
	    scalable, i.e. if we are working at 0.25 m, the
	    accuracy should increase proportionately.

	3.) 5 frames per second (preferably > 10 fps).  Data rate
	    should apply to recovery of a calibrated dense scan.

I would especially appreciate technical contacts at companies, as
sales contacts do not normally have the background to competently
discuss the technical merits of the products.  I would also appreciate
a quick response, as at least a preliminary choice needs to be made
soon for inclusion in a proposal.

Thanks in advance,

 -- Prof. David Chelberg (dmc@piccolo.ecn.purdue.edu)

------------------------------

Date: 22 Aug 90 18:00:18 GMT
From: colin@nrcaer.UUCP (Colin Archibald)
Subject: call for papers
Keywords: vision , image processing, robotics
Organization: NRCC-Aeroacoustics, Ottawa, Ontario

V i s i o n  I n t e r f a c e  ' 9 1

	Calgary, Alberta, Canada
	3-7 June 1991

CALL FOR PAPERS

Vision Interface '91 is the fifth Canadian Conference devoted to
computer vision, image processing, and pattern recognition.  This is
an annual conference held in various Canadian cities and is sponsored
by the Canadian Image Processing and Pattern Recognition Society.  The 1991
conference will be held in Calgary, Alberta, June 3-7 1991 in conjunction
with Graphics Interface '91. 

IMPORTANT DATES:

Four copies of a Full Paper due:	31 Oct. 1990
Tutorial Proposals due:			15 Nov. 1990
Authors Notified:			 1 Feb. 1991
Cover Submissions due:			 1 Feb. 1991
Final Paper due:			29 Mar. 1991

TOPICS:

Contributions are solicited describing unpublished research results and
applications experience in vision, including but not restricted to the
following areas:

	Image Understanding and Recognition	Modeling of Human Perception
	Speech Understanding and Recognition	Specialized Architecture
	Computer Vision				VLSI Applications
	Image Processing			Realtime Techniques
	Robotic Perception			Industrial Applications
	Pattern Analysis & Classification	Biomedical Applications
	Remote Sensing				Intelligent Autonomous Systems
	Multi-sensor Data Fusion		Active Perception


Four copies of full papers should be submitted to the Program Co-chairmen
before Oct.31 1990. Include with the paper full names, addresses, phone
numbers, fax numbers and electronic mail addresses of all the authors. One
author should be designated "contact author"; all subsequent correspondence
regarding the paper will be directed to the contact author. The other
addresses are required for follow-up conference mailings, including the
preliminary program.

FOR GENERAL INFORMATION:		SUBMIT PAPERS TO:

Wayne A. Davis				Colin Archibald and Emil Petriu
General Chairman			VI '91 Program Co-chairmen
Department of Computing Science		Laboratory for Intelligent Systems
University of Alberta			National Research Council
Edmonton, Alberta, Canada		Ottawa, Ontario, Canada
T6G 2H1					K1A 0R6
Tel:	403-492-3976			Tel:	613-993-6580

------------------------------

End of VISION-LIST
********************

Vision-List-Request@ADS.COM (Vision-List moderator Phil Kahn) (08/30/90)

Vision-List Digest	Wed Aug 29 13:05:54 PDT 90

 - Send submissions to Vision-List@ADS.COM
 - Send requests for list membership to Vision-List-Request@ADS.COM

Today's Topics:

 MRF and simulated annealing for image restoration
 Vision equipment references
 Survey of vision systems for automated inspection
 Help : Separate bitmap characters
 Position available in Human Visual Perception

----------------------------------------------------------------------

Date: Thu, 23 Aug 90 16:23:01 BST
From: Guanghua Zhang <guanghua@cs.heriot-watt.ac.uk>
Subject:  MRF and simulated annealing for image restoration

Dear readers,

Is anybody in the group familiar with using MRF models and Simulated
Annealing for image restoration?

The approach was originally proposed in the Gemans' paper, where the success
and robustness of the technique were demonstrated. A series of subsequent
works has been reported. Extensions were made from constant depth to
piecewise-smooth surfaces [Marroquin 1984]. But nearly all the papers I have
read just modify the original version for different applications.

It is difficult to understand how the energy is assigned to each of the
clique configurations. It is related to the observations and the estimates
 -- the Gaussian model -- but how?
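
A minimal sketch of the posterior energy most of these papers end up using,
assuming a Gaussian noise model with variance sigma^2 and a Potts-like
pair-clique potential of strength beta (the exact potentials and parameters
vary from paper to paper):

    U(x | y) = sum_i (y_i - x_i)^2 / (2 sigma^2)  +  beta * sum_{i~j} [x_i != x_j]

The first term is where the Gaussian observation model enters; the second
assigns an energy to each pair-clique configuration.  In C:

    #include <stdio.h>

    #define W 8
    #define H 8

    /* posterior energy contribution of putting the value `cand' at pixel
       (r,c): a Gaussian data term tying the estimate to the observation y,
       plus a Potts potential beta for every 2-pixel clique whose members
       differ.  This is the quantity a Gibbs sampler or an annealing sweep
       would compare for the candidate values of one pixel. */
    static double local_energy(const int x[H][W], const int y[H][W],
                               int r, int c, int cand,
                               double sigma, double beta)
    {
        double d = (double)(y[r][c] - cand);
        double e = d * d / (2.0 * sigma * sigma);        /* data (Gaussian) term */

        if (r > 0     && x[r - 1][c] != cand) e += beta; /* pair-clique potentials */
        if (r < H - 1 && x[r + 1][c] != cand) e += beta;
        if (c > 0     && x[r][c - 1] != cand) e += beta;
        if (c < W - 1 && x[r][c + 1] != cand) e += beta;
        return e;
    }

    int main(void)
    {
        int y[H][W] = {{0}};      /* observation */
        int x[H][W] = {{0}};      /* current estimate */

        y[3][3] = 10;             /* a single noisy observation */
        printf("energy of keeping  0 at (3,3): %.2f\n",
               local_energy(x, y, 3, 3, 0, 2.0, 1.0));
        printf("energy of putting 10 at (3,3): %.2f\n",
               local_energy(x, y, 3, 3, 10, 2.0, 1.0));
        return 0;
    }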

Although most of the authors claim that the parameters are chosen on a
trial-and-error basis, some common points in the choice of parameters
can be seen.

I would like to hear comments on the technique and experiences in using it.

guanghua zhang

Computer Science
Heriot-Watt University
Edinburgh, Scotland 

------------------------------

Date: Wed, 29 Aug 90 13:06:57 bst
From: Eduardo Bayro <eba@computing-maths.cardiff.ac.uk>
Subject: Vision equipment references
Organization: Univ. of Wales Coll. of Cardiff, Dept. of Electronic & Systems 
              Engineering

Dear friends!
Does anybody know about the Pi030 image processing system
from the company Performance Imaging (2281 Dunstan Street,
Oceanside, CA 92054, USA)? Because of its convenient price
we are thinking of buying it. It would be used by research
fellows involved with industrial real-time applications of
computer vision. Answers by e-mail are appreciated.
                         Eduardo//

 Eduardo Bayro, School of Electrical, Electronic and Systems Engineering,
 University of Wales College of Cardiff, Cardiff, Wales, UK.
 Internet: eba%cm.cf.ac.uk@nsfnet-relay.ac.uk        Janet:  eba@uk.ac.cf.cm
 UUCP:     eba@cf-cm.UUCP or ...!mcsun!ukc!cf-cm!eba

------------------------------

Date: Tue, 28 Aug 90 15:07:54 BST
From: Netherwood P J <cs_s424@ux.kingston.ac.uk>
Subject: Survey of vision systems for automated inspection

I am currently doing a survey of vision systems that perform automated
visual inspection of surface-mount solder joints. Does anybody know of
any such systems (commercial or academic)? If so, please send replies to me.

Thanks in advance

Paul Netherwood                      janet    :  P.J.Netherwood@uk.ac.kingston
Research                             internet :  P.J.Netherwood@kingston.ac.uk
                                     phone    :  (+44) 81 549 1366 ext 2923    
                                     local    :  cs_s424@ux.king  
          
School of Computer Science and Electronic Systems,
|/ingston |>olytechnic, Penrhyn Road, Kingston-upon-Thames, Surrey KT1 2EE, UK.
|\--------|--------------------------------------------------------------------
  \
------------------------------

Date: Mon, 27 Aug 90 17:26:55 GMT
From: brian@ucselx.sdsu.edu (Brian Ho)
Subject: Help : Separate bitmap characters
Organization: San Diego State University Computing Services

Hello out there,
  I have an interesting problem that you may also find interesting, and maybe
  you can give me a hand/hint.

  I am currently working on an OCR (Optical Character Recognition) project.
  I am now at the stage where I need to scan a page of a document and separate
  each character that appears in it.  The image of the document from the
  scanner is converted into (binary) bitmap format, e.g.


0001111111100000000000000111000000000000000000000000000000000000000000
0001110000000000000000000111000000000110000001101110000000000000000000
0001100000000111111100001111111000011111100001111110000000000000000000
0001100000000111101110001111000000110000110001110000000000000000000111
0001111110000111000110000111000001111111110001100000000000000000011110
0001100000000111000110000110000001111111110001100000000000000000001111
0001100000000111000110000111000001110000000001100000000000000000000011
0001100000000111000110000111000000111001110001100000000000000000000000
0001111111100111000110000011111000011111100001100000000000000000000000
0001111111100010000000000000000000000000000000000000000000000000000000

  
  I have a function that can separate each character from the document.
  My function works fine when two characters are separated by one or more
  blank columns, as in the example shown above.

  My problem is that when two characters are separated by less than one blank
  column, I cannot distinguish/separate the two characters. (P.S. the
  characters have unknown size.) E.g.


000111111110000000000001110000000000000000000000000000000000000000
000111000000000000000001110000000110000001101110000000000000000000
000110000001111111000011111110011111100001111110000000000000000000
000110000001111011100011110000110000110001110000000000000000000111
000111111001110001100001110001111111110001100000000000000000011110
000110000001110001100001100001111111110001100000000000000000001111
000110000001110001100001110001110000000001100000000000000000000011
000110000001110001100001110000111001110001100000000000000000000000
000111111111110001100000111110011111100001100000000000000000000000
000111111110100000000000000000000000000000000000000000000000000000

The characters "En" and "te" are eventually appears side by side with the other
character.
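
A minimal sketch of the blank-column scan described above, with a forced
split added for runs that grow wider than a guessed maximum character width
(the split is made at the weakest interior column); ROWS, COLS and the test
image below are made up for illustration:

    #include <stdio.h>

    #define ROWS 5
    #define COLS 12

    /* number of set pixels in column c */
    static int col_sum(const char img[ROWS][COLS + 1], int c)
    {
        int r, n = 0;
        for (r = 0; r < ROWS; r++)
            if (img[r][c] == '1')
                n++;
        return n;
    }

    static void segment(const char img[ROWS][COLS + 1], int max_width)
    {
        int c, start = -1;

        for (c = 0; c <= COLS; c++) {
            int blank = (c == COLS) || (col_sum(img, c) == 0);

            if (!blank && start < 0)
                start = c;                        /* a character run begins */

            if (!blank && start >= 0 && c - start + 1 > max_width) {
                /* run too wide: force a split at the weakest interior column */
                int k, cut = start + 1, best = col_sum(img, cut);
                for (k = start + 1; k < c; k++)
                    if (col_sum(img, k) < best) { best = col_sum(img, k); cut = k; }
                printf("character: columns %d..%d (forced split)\n", start, cut - 1);
                start = cut;
            }

            if (blank && start >= 0) {            /* blank column ends the run */
                printf("character: columns %d..%d\n", start, c - 1);
                start = -1;
            }
        }
    }

    int main(void)
    {
        /* two 3-column-wide characters touching at one pixel in column 4 */
        static const char img[ROWS][COLS + 1] = {
            "011101100000",
            "011101100000",
            "011111100000",
            "011101100000",
            "011101100000"
        };
        segment(img, 4);
        return 0;
    }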


I am wondering if anybody out there can give me some advice on how to solve
this problem.  Even if you are just facing the same type of problem, I would
like to hear about it.


Thank you in advance.

Brian

Contact me at :

brian@yucatec.sdsu.edu
brian@ucselx.sdsu.edu

------------------------------

Date: 	Tue, 28 Aug 90 14:12:23 EDT
From: Color and Vision Network <CVNET%YORKVM1.bitnet@ugw.utcs.utoronto.ca>
Subject: CVNet- Position Available.

Position available in Human Visual Perception
in the Department of Experimental Psychology,
University of Oxford.

A pre- or post-doctoral research position will be available from 1st October (or
as soon as possible thereafter) for one year in the first instance (with the
possibility of renewal).  The project is funded by the Science and Engineering
Research Council of U.K. and the European Community Esprit programme.

The project is concerned with the perception and representation of 3-D surfaces
from disparity and optic flow information.  Applicants should have a background
in human psychophysics or computational vision and have an interest in or
experience of research in 3-D or motion perception.

For further information, please contact:

Dr Brian Rogers,
Department of Experimental Psychology,
South Parks Road,
Oxford, OX1 3UD,
U.K.

Email:  BJR@vax.oxford.ac.uk

(I shall be at the E.C.V.P meeting in Paris, 5th-7th September)



------------------------------

End of VISION-LIST
********************

Vision-List-Request@ADS.COM (Vision-List moderator Phil Kahn) (09/15/90)

Vision-List Digest	Sat Sep 15 08:50:58 PDT 90

 - Send submissions to Vision-List@ADS.COM
 - Send requests for list membership to Vision-List-Request@ADS.COM

Today's Topics:

 grouping
 Locating edges in a field of view
 Industrial Vision Metrology Conference
 VISION and NN; special issue of IJPRAI
 available document
 MAC AI demos

----------------------------------------------------------------------

Date: 11 Sep 90 14:57:28+0200
From: Tilo Messer <messer@suncog1.informatik.tu-muenchen.dbp.de>
Subject: grouping

I am interested in grouping regions (not edges!) to increase the
performance of an object identification system. It is part of a planned
real-time system for interpreting scenes taken from a moving camera.

I found a few articles and papers about grouping of edges (Lowe et al.), but
these don't fit. Is anybody else interested in this topic, or does anybody
know of theoretical and practical work in this area?

I would be glad of any useful hints.

Thanks, Tilo

 | |\  /|					voice: ++ 49 89 48095 - 224
 | | \/ | FORWISS, FG Cognitive Systems		fax: ++ 49 89 48095 - 203
 | |    | Orleansstr. 34, D - 8000 Muenchen 80, Germany

------------------------------

Date: Wed, 12 Sep 90 11:05:03 EDT
From: ICR - Mutual Group  <rjspitzi@watserv1.waterloo.edu>
Subject: Locating edges in a field of view

Here is an interesting real-world problem for you comp.ai.vision'aries
out there:

   I have built a scanning unit which basically produces a picture in
memory of a 2-D object (such as a piece of paper) passing under the
scanning unit.  The image is made up only of a series of points outlining
the object itself.  The object passing under the scanner is roughly
rectangular (i.e. four edges), but the edges can be somewhat bowed to
make slightly concave or convex sides.  There should be definite corners,
however.

  The problem is this.  Given the limited information that I receive from
the image, I must locate the edges of the object and calculate each side's
length.  The result should be a *very* accurate estimate of the height and
width of the object and hence the area it covers.
 
   Oh yes, one other twist: the object can come through in any orientation.
There is no guarantee a corner will always come through first.

   Any ideas you have for algorithms, or documents you could point me
toward, would be greatly appreciated!  Like I said, an interesting problem.
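
One possible coarse approach, sketched below under the assumption that the
outline points are reasonably dense and the bowing is slight: take the point
farthest from the centroid as one corner, the point farthest from it as the
opposite corner, and the extreme points on either side of that diagonal as
the remaining two; the side lengths then follow directly.  A least-squares
line fit to the points along each side would refine the estimate.

    #include <stdio.h>
    #include <math.h>

    typedef struct { double x, y; } Pt;

    static double dist(Pt a, Pt b)
    {
        return sqrt((a.x - b.x) * (a.x - b.x) + (a.y - b.y) * (a.y - b.y));
    }

    /* signed perpendicular offset of p from the line through a and b */
    static double offset(Pt a, Pt b, Pt p)
    {
        return ((b.x - a.x) * (p.y - a.y) - (b.y - a.y) * (p.x - a.x)) / dist(a, b);
    }

    static void find_corners(const Pt *pts, int n, Pt corner[4])
    {
        Pt centroid = { 0.0, 0.0 };
        int i, i0 = 0, i1 = 0, i2 = 0, i3 = 0;

        for (i = 0; i < n; i++) { centroid.x += pts[i].x; centroid.y += pts[i].y; }
        centroid.x /= n;
        centroid.y /= n;

        for (i = 1; i < n; i++)                 /* corner 0: farthest from centroid */
            if (dist(pts[i], centroid) > dist(pts[i0], centroid)) i0 = i;
        for (i = 0; i < n; i++)                 /* corner 2: farthest from corner 0 */
            if (dist(pts[i], pts[i0]) > dist(pts[i2], pts[i0])) i2 = i;
        for (i = 0; i < n; i++) {               /* corners 1,3: extremes off the diagonal */
            double d = offset(pts[i0], pts[i2], pts[i]);
            if (d > offset(pts[i0], pts[i2], pts[i1])) i1 = i;
            if (d < offset(pts[i0], pts[i2], pts[i3])) i3 = i;
        }
        corner[0] = pts[i0]; corner[1] = pts[i1];
        corner[2] = pts[i2]; corner[3] = pts[i3];
    }

    int main(void)
    {
        /* a 30 x 20 rectangle rotated 30 degrees, sampled (sparsely, to keep
           the example short) at its corners and edge midpoints */
        Pt pts[] = {
            {  0.00,  0.00 }, { 25.98, 15.00 }, { 15.98, 32.32 }, { -10.00, 17.32 },
            { 12.99,  7.50 }, { 20.98, 23.66 }, {  2.99, 24.82 }, {  -5.00,  8.66 }
        };
        Pt c[4];
        int k;

        find_corners(pts, (int)(sizeof pts / sizeof pts[0]), c);
        for (k = 0; k < 4; k++)
            printf("side %d: length %.2f\n", k, dist(c[k], c[(k + 1) % 4]));
        return 0;
    }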

------------------------------

Date: 	Tue, 11 Sep 90 11:05:00 EDT
From: ELHAKIM@NRCCIT.NRC.CA
Subject: Industrial Vision Metrology Conference

                      ANNOUNCEMENT AND CALL FOR PAPERS

                         INTERNATIONAL CONFERENCE ON
                         INDUSTRIAL VISION METROLOGY

   Location: The Canadian Institute for Industrial Technology
             Winnipeg, Manitoba, Canada

   Date:     July 11-13, 1991

   Organized by:
        -International Society for Photogrammetry & Remote Sensing
         Commission V: Close-Range Photogrammetry and Machine Vision
         WG V/1: Digital and Real-time Close-range Photogrammetry Systems
        -National Research Council of Canada

   Proceeding published by:
         SPIE- The International Society for Optical Engineering

   Focusing on:
         Industrial applications of metric vision techniques

   Topics include:
        -Vision metrology techniques
        -Real-time systems
        -3-D object reconstruction
        -Decision algorithms
        -System calibration
        -Shop-floor metrology problems
        -Applications such as dimensional inspection

   Abstracts of 500-1000 words are to be submitted before January 1, 1991 to:

        Dr. S. El-Hakim
        National Research Council
        435 Ellice Avenue
        Winnipeg, Manitoba, Canada R3B 1Y6
        tel:(204) 983-5056 / Fax:(204) 983-3154

------------------------------

Date: Thu, 13 Sep 90 15:29:49 PDT
From: skrzypek@CS.UCLA.EDU (Dr. Josef Skrzypek)
Subject: VISION and NN; special issue of IJPRAI

Because of repeated enquiries about the special issue of IJPRAI (Intl.
J. of Pattern Recognition and AI) I am posting the announcement again.

	    IJPRAI	 CALL FOR PAPERS	 IJPRAI

We are organizing a special issue of IJPRAI (Intl. Journal of
Pattern Recognition and Artificial Intelligence) dedicated to the
subject of neural networks in vision and pattern recognition.
Papers will be refereed.  The plan calls for the issue to be
published in the fall of 1991.  I would like to invite your
participation.

   DEADLINE FOR SUBMISSION: 10th of December, 1990

   VOLUME TITLE: Neural Networks in Vision and Pattern Recognition

   VOLUME GUEST EDITORS: Prof. Josef Skrzypek and Prof. Walter Karplus
   Department of Computer Science, 3532 BH
   UCLA
   Los Angeles CA 90024-1596
   Email: skrzypek@cs.ucla.edu or karplus@cs.ucla.edu
   Tel: (213) 825 2381
   Fax: (213) UCLA CSD

		      DESCRIPTION

The capabilities of neural architectures (supervised and
unsupervised learning, feature detection and analysis through
approximate pattern matching, categorization and self-organization,
adaptation, soft constraints, and signal-based processing) suggest
new approaches to solving problems in vision, image processing and
pattern recognition as applied to visual stimuli.  The purpose of
this special issue is to encourage further work and discussion in
this area.

The volume will include both invited and submitted peer-reviewed
articles.  We are seeking submissions from researchers in relevant
fields, including natural and artificial vision, scientific
computing, artificial intelligence, psychology, image processing
and pattern recognition.  We encourage submission of: 1) detailed
presentations of models or supporting mechanisms, 2) formal
theoretical analyses, 3) empirical and methodological studies, and 4)
critical reviews of the applicability of neural networks to various
subfields of vision, image processing and pattern recognition.

Submitted papers may be enthusiastic or critical about the
applicability of neural networks to the processing of visual
information.  The IJPRAI journal would like to encourage
submissions both from researchers engaged in the analysis of
biological systems, such as modeling psychological/neurophysiological
data using neural networks, and from members of the engineering
community who are synthesizing neural network models.  The number of
papers that can be included in this special issue will be limited.
Therefore, some qualified papers may be encouraged for submission to
the regular issues of IJPRAI.
		       SUBMISSION PROCEDURE

Submissions should be sent to Josef Skrzypek by 12-10-1990.  The
suggested length is 20-22 double-spaced pages including figures,
references, abstract and so on.  Format details, etc. will be
supplied on request.

Authors are strongly encouraged to discuss ideas for possible
submissions with the editors.

The Journal is published by World Scientific and was
established in 1986.

Thank you for your consideration.

------------------------------

Date: Wed, 5 Sep 90 13:21:08 +0200
From: ronse@prlb.philips.be
Subject: available document

The following unpublished working document is available. If you want a
copy of it, please send me:

- Your complete postal (snail mail) address, preferably formatted as on
an envelope (cf. mine below); an e-mail address is useless in this
context.

- The title of the working document.

				Christian Ronse

Internet:			ronse@prlb.philips.be
BITNET:				ronse%prlb.philips.be@cernvax

				Philips Research Laboratory
				Avenue Albert Einstein, 4
				B-1348 Louvain-la-Neuve
				Belgium

				Tel: (32)(10) 470 611	(central)
				     (32)(10) 470 637	(direct line)
				Fax: (32)(10) 470 699

=========================================================================
A twofold model of edge and feature detection

C. Ronse

September 1990

ABSTRACT.

Horn's model of surface reflectance shows that edges in
three-dimensional surfaces lead to grey-level edges combining in various
ways sharp or rounded steps, lines and roofs. The perceptual analysis of
extended edges necessitates the localization not only of step and line
edges, but also of roof edges and Mach bands, and more generally of
discontinuities and sharp changes in the n-th derivative of the
grey-level. Arguments are given which indicate the inadequacy of
locating features at zero-crossings of any type of smooth operator
applied to the image, and the necessity of orientationally selective
operators. The null space of feature detection is defined; it contains
in particular all constant signals. Oriented local features are modelled
as the linear superposition of a featureless signal (in the null space),
an even-symmetric and/or an odd-symmetric feature, measured by
convolution with respectively even-symmetric and odd-symmetric
functions. Advantages of energy feature detectors are given.

KEY WORDS.
Edge types, zero-crossings and peaks, orientational selectivity, linear
processing, feature symmetry, energy feature detector.
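
A minimal 1-D illustration of the kind of energy feature detector the
abstract refers to, assuming a Gabor-like even/odd (quadrature) kernel pair;
the local energy shows maxima at both a step and a line, whereas either
kernel alone favours one feature type:

    #include <stdio.h>
    #include <math.h>

    #define N    64
    #define HALF  8                 /* kernel half-width */
    #define PI   3.14159265358979

    int main(void)
    {
        double signal[N], even[2 * HALF + 1], odd[2 * HALF + 1];
        int i, k;

        /* test signal: a step at i = 20 and a line (impulse) at i = 44 */
        for (i = 0; i < N; i++)
            signal[i] = (i >= 20 ? 1.0 : 0.0) + (i == 44 ? 1.0 : 0.0);

        /* quadrature pair: Gaussian-windowed cosine (even) and sine (odd) */
        for (k = -HALF; k <= HALF; k++) {
            double g = exp(-(double)(k * k) / (2.0 * 3.0 * 3.0));
            even[k + HALF] = g * cos(2.0 * PI * k / 16.0);
            odd [k + HALF] = g * sin(2.0 * PI * k / 16.0);
        }

        /* local energy = (even response)^2 + (odd response)^2 */
        for (i = HALF; i < N - HALF; i++) {
            double e = 0.0, o = 0.0;
            for (k = -HALF; k <= HALF; k++) {
                e += even[k + HALF] * signal[i - k];
                o += odd [k + HALF] * signal[i - k];
            }
            printf("%2d  energy = %7.4f\n", i, e * e + o * o);
        }
        return 0;
    }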

------------------------------

Date: Wed, 12 Sep 90 02:41:16 GMT
From: pegah@pleiades.cps.msu.edu (Mahmoud Pegah)
Subject: MAC AI demos
Organization: Computer Science, Michigan State University, E. Lansing

Greetings;

I am trying to find freeware demos of AI that run on the MAC.  This
will be used in a classroom setting (not in a lab) and will be
projected on a large screen from the video on the MAC.

Demos having to do with search space techniques, natural language
processing, vision, neural nets, knowledge based systems... would all
be items I would like to FTP for use here.

These demos will be used in an intro grad level survey course in AI.

Reply to me directly, and indicate whether you would like your
demo to be listed in a catalogue of AI educational demos that
I will prepare from the mail I get. I will post the composed directory
back to the net in two weeks time.

Please indicate an FTP host (with internet number) from which your
demo can be FTPed.

Thanks in advance.

-Mahmoud Pegah				pegah@pleiades.cps.msu.edu
AI/KBS Group				pegah@MSUEGR.BITNET
Comp Sci Dept				... uunet!frith!pegah
Mich State Univ

------------------------------

End of VISION-LIST
********************

Vision-List-Request@ADS.COM (Vision-List moderator Phil Kahn) (09/20/90)

Vision-List Digest	Wed Sep 19 11:55:22 PDT 90

 - Send submissions to Vision-List@ADS.COM
 - Send requests for list membership to Vision-List-Request@ADS.COM

Today's Topics:

 Address correction for requesting the Workshop on Qualitative Vision
 Shallice/Neuropsychology: BBS Multiple Book Review
 NN workshop

------------------------------

Date: Wed, 19 Sep 90 10:16:34 -0700
From: pkahn@deimos (Philip Kahn)
Subject: Address correction for requesting the Workshop on Qualitative Vision

The address for requesting the proceedings failed to include the 
street address. It should have read:

Copies of the proceedings from the AAAI-90 Workshop on Qualitative
Vision are available for $35 (in North America) and $45US
(international), and can be obtained by writing: 

	AAAI-90 Workshop on Qualitative Vision
	Advanced Decision Systems
	1500 Plymouth Street
	Mountain View, CA  94043-1230

When requesting a copy of the Proceedings, please make your check
(payable in US $) to Advanced Decision Systems (this includes postage
and handling), specify the complete mailing address to which the
proceedings should be mailed, and (if available) include your e-mail
address in case there are any questions or problems.

------------------------------

Date: Mon, 17 Sep 90 23:02:16 EDT
From: Stevan Harnad <harnad@clarity.Princeton.EDU>
Subject: Shallice/Neuropsychology: BBS Multiple Book Review

Below is the abstract of a book that will be accorded multiple book
review in Behavioral and Brain Sciences (BBS), an international,
interdisciplinary journal that provides Open Peer Commentary on
important and controversial current research in the biobehavioral and
cognitive sciences. Commentators must be current BBS Associates or
nominated by a current BBS Associate. To be considered as a commentator
on this book, to suggest other appropriate commentators, or for
information about how to become a BBS Associate, please send email to:

harnad@clarity.princeton.edu  or harnad@pucc.bitnet        or write to:
BBS, 20 Nassau Street, #240, Princeton NJ 08542  [tel: 609-921-7771]

To help us put together a balanced list of commentators, please give some
indication of the aspects of the topic on which you would bring your
areas of expertise to bear if you are selected as a commentator.

          BBS Multiple Book Review of:
         FROM NEUROPSYCHOLOGY TO MENTAL STRUCTURE

              Tim Shallice
	      MRC Applied Psychology Unit
	      Cambridge, UK

ABSTRACT: Studies of the effects of brain lesions on human behavior are
now cited more widely than ever, yet there is no agreement on which
neuropsychological findings are relevant to our understanding of normal
function. Despite the range of artefacts to which inferences from
neuropsychological studies are potentially subject -- e.g., resource
differences between tasks, premorbid individual differences and
reorganisation of function -- they are corroborated by similar findings
in studies of normal cognition (short-term memory, reading, writing,
the relation between input and output systems and visual perception).
The functional dissociations found in neuropsychological studies suggest
that not only are input systems organized modularly, but so are central systems.
This conclusion is supported by considering impairments of knowledge,
visual attention, supervisory functions, memory and consciousness.

----------------------------------------------------------------------

Date: Mon, 17 Sep 90 13:54:54 EDT
From: sankar@caip.rutgers.edu (ananth sankar)
Subject: NN workshop

The following is an announcement of a neural network workshop to be
held in East Brunswick, New Jersey. The workshop is sponsored by the
CAIP Center of Rutgers University, New Jersey. In the recent past
there has been a lot of neural network research with direct
applications to Machine Vision and Image Processing. Applications in
vision  and image processing include early vision, feature extraction,
pattern classification and data compression. It is hoped that this
workshop will be of interest to the members of vision-list.

Thank you.
Ananth Sankar

Announcement follows:

=====================================================================
Rutgers University

CAIP Center

CAIP Neural Network Workshop

15-17 October 1990

A neural network workshop will be held during 15-17 October 1990 in
East Brunswick, New Jersey under the sponsorship of the CAIP Center of
Rutgers University.  The theme of the workshop will be 

"Theory and impact of Neural Networks on future technology"

Leaders in the field from government, industry and academia will
present the state-of-the-art theory and applications of neural
networks. Attendance will be limited to about 100 participants.
  
A Partial List of Speakers and Panelists include:

		J. Alspector, Bellcore
		A. Barto, University of Massachusetts
		R. Brockett, Harvard University
		L. Cooper, Brown University
		J. Cowan, University of Chicago
		K. Fukushima, Osaka University
		D. Glasser, University of California, Berkeley
		S. Grossberg, Boston University
		R. Hecht-Nielsen, HNN, San Diego
		J. Hopfield, California Institute of Technology
		L. Jackel, AT&T Bell Labs.
		S. Kirkpatrick, IBM, T.J. Watson Research Center
		S. Kung, Princeton University
		F. Pineda, JPL, California Institute of Technology
		R. Linsker, IBM, T.J. Watson Research Center
		J. Moody, Yale University
		E. Sontag, Rutgers University
		H. Stark, Illinois Institute of Technology
		B. Widrow, Stanford University
		Y. Zeevi, CAIP Center, Rutgers University and The
                          Technion, Israel 

The workshop will begin with registration at 8:30 AM on Monday, 15
October and end at 7:00 PM on Wednesday, 17 October.  There will be
dinners on  Tuesday and Wednesday evenings followed by special-topic
discussion sessions.  The $395 registration fee ($295 for participants
from CAIP member organizations) includes the cost of the dinners.

Participants are expected to remain in attendance throughout the entire
period of the workshop.  Proceedings of the workshop will subsequently
be published in book form.   

Individuals wishing to participate in the workshop should fill out the
attached form and mail it to the address indicated.

If there are any questions, please contact

			Prof. Richard Mammone
			Department of Electrical and Computer Engineering
			Rutgers University
			P.O. Box 909
			Piscataway, NJ 08854
			Telephone: (201)932-5554
			Electronic Mail: mammone@caip.rutgers.edu
			FAX: (201)932-4775
			Telex: 6502497820 mci

------------------------------

End of VISION-LIST
********************

Vision-List-Request@ADS.COM (Vision-List moderator Phil Kahn) (09/29/90)

Vision-List Digest	Fri Sep 28 11:52:53 PDT 90

 - Send submissions to Vision-List@ADS.COM
 - Send requests for list membership to Vision-List-Request@ADS.COM

Today's Topics:

 Vision Through Stimulation of Phosphenes
 Still Image Compression JPEG
 Image scaling
 Grey-level interpolation
 Preprints available on color algorithms
 Symposium on Automated Inspection of Flat Rolled Metal Products
 Third ISAI in Mexico

----------------------------------------------------------------------

Date: Fri, 28 Sep 90 12:47:48 GMT
From: nigel@ATHENA.MIT.EDU (Sublime + Perfect One)
Subject: Vision Through Stimulation of Phosphenes
Organization: Massachusetts Institute of Technology

About two (?) weeks ago I heard or read something about research
in the area of hooking up signals from a small video camera to the
cells in the brain responsible for those groovy color patterns
you see when you rub your eyes or when you stare at bright lights.

Does anyone have any references to this or similar work
that they would not mind e-mailing to me?  If there are many
requests for this info I'll post a summary.

thanks
nigel

------------------------------

Date: 20 Sep 90 19:51:31 GMT
From: bedros@umn-ai.cs.umn.edu
Subject: Still Image Compression JPEG
Keywords: visual quantization matrix needed
Organization: University of Minnesota, Minneapolis

I am trying to implement the JPEG algorithm and compare it to other algorithms.
Can somebody please send me the visual quantization matrices for Y, U, V
used by the JPEG algorithm, and the Huffman table of the quantizer?
I would also appreciate some results on the Lena image (SNR, MSE).
Thanks
Saad Bedros
U.of Minnesota
bedros@ee.umn.edu

------------------------------

Date: Tue, 25 Sep 90 16:27:59 +0200
From: vlacroix@prlb.philips.be
Subject: image scaling

Suppose a digitized image of NxN pixels. How would you transform it into
an MxM-pixel image (M>N)?

Clearly there are many possible transformations, but some of them will
produce nice-looking images, while others won't.

Please send me your ideas or pointers to the literature; I'll summarize them
for the net.
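
One common answer, shown only as a minimal sketch: map each output pixel
back into the input grid and interpolate bilinearly between its four
neighbours (nearest-neighbour replication is cheaper but looks blocky when
M is much larger than N).  The N, M and image values below are just a toy
example:

    #include <stdio.h>

    #define N 4
    #define M 9

    int main(void)
    {
        unsigned char in[N][N] = {
            {   0,  32,  64,  96 },
            {  32,  64,  96, 128 },
            {  64,  96, 128, 160 },
            {  96, 128, 160, 192 }
        };
        unsigned char out[M][M];
        int i, j;

        for (i = 0; i < M; i++) {
            for (j = 0; j < M; j++) {
                /* position of output pixel (i,j) in input coordinates */
                double y  = (double)i * (N - 1) / (M - 1);
                double x  = (double)j * (N - 1) / (M - 1);
                int    y0 = (int)y, x0 = (int)x;
                int    y1 = (y0 < N - 1) ? y0 + 1 : y0;
                int    x1 = (x0 < N - 1) ? x0 + 1 : x0;
                double fy = y - y0, fx = x - x0;

                /* weighted average of the four surrounding input pixels */
                double v = (1 - fy) * ((1 - fx) * in[y0][x0] + fx * in[y0][x1])
                         +      fy  * ((1 - fx) * in[y1][x0] + fx * in[y1][x1]);
                out[i][j] = (unsigned char)(v + 0.5);
                printf("%4d", out[i][j]);
            }
            printf("\n");
        }
        return 0;
    }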

------------------------------

Date: Wed, 26 Sep 90 01:22:27 CDT
From: stanley@visual1.tamu.edu (Stanley Guan)
Subject: grey-level interpolation

Hi,

Does there exist a canned method for solving the gray-level interpolation
problem for a deformed image?  The deformation is equivalent to a mapping
which may be expressed as
          x' = r(x, y)
and
	  y' = s(x, y).

If r(x, y) and s(x, y) were known analytically it might be possible in
principle to find non-integer values for (x, y) given integer values
of the coordinates (x', y').  Then methods such as bilinear interpolation
or cubic convolution interpolation could be applied to calculate the gray
level at (x', y').

The deformation I am interested in has no easy-to-find inverse
mapping.  So the question is how I can solve the gray-level
assignment for my deformed image efficiently and systematically.
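
One standard way to sidestep the missing inverse, offered here only as a
rough sketch: forward-map every source pixel through (r, s) and distribute
("splat") its gray level into the four output pixels surrounding (x', y')
with bilinear weights, then divide by the accumulated weights.  r() and s()
below are placeholder warps and the image is a toy ramp:

    #include <stdio.h>
    #include <math.h>

    #define N  8        /* source image size */
    #define M 12        /* output image size */

    /* placeholder warp; the real r() and s() would implement the deformation */
    static double r(double x, double y) { return 1.3 * x + 0.1 * y; }
    static double s(double x, double y) { return 0.1 * x + 1.3 * y; }

    int main(void)
    {
        double src[N][N];
        double acc[M][M] = {{0.0}};     /* accumulated weighted gray levels */
        double wgt[M][M] = {{0.0}};     /* accumulated weights */
        int x, y, i, j;

        for (y = 0; y < N; y++)         /* a simple test ramp as the source */
            for (x = 0; x < N; x++)
                src[y][x] = 10.0 * x;

        for (y = 0; y < N; y++) {
            for (x = 0; x < N; x++) {
                double xp = r((double)x, (double)y);
                double yp = s((double)x, (double)y);
                int    x0 = (int)floor(xp), y0 = (int)floor(yp);
                double fx = xp - x0, fy = yp - y0;
                int di, dj;

                /* splat the gray level into the four surrounding output pixels */
                for (dj = 0; dj <= 1; dj++) {
                    for (di = 0; di <= 1; di++) {
                        int ox = x0 + di, oy = y0 + dj;
                        double w = (di ? fx : 1.0 - fx) * (dj ? fy : 1.0 - fy);
                        if (ox >= 0 && ox < M && oy >= 0 && oy < M) {
                            acc[oy][ox] += w * src[y][x];
                            wgt[oy][ox] += w;
                        }
                    }
                }
            }
        }

        for (j = 0; j < M; j++) {       /* normalise; untouched pixels stay 0 */
            for (i = 0; i < M; i++)
                printf("%6.1f", wgt[j][i] > 0.0 ? acc[j][i] / wgt[j][i] : 0.0);
            printf("\n");
        }
        return 0;
    }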

Any suggestions or helpful hints?  Please email me at 
	stanley@visual1.tamu.edu

Thanks in advance!

Stanley
Visualization Lab
Computer Science Dept
Texas A&M University
College Station, TX 77843-3112
Tel: (409)845-0531

------------------------------

Date: 20 Sep 90  8:33 -0700
From: mark@cs.sfu.ca
Subject: Preprints available on color algorithms

Technical report and preprint available.

Using finite dimensional models one can recover from camera RGB values
the spectra of both the illumination and the reflectances participating
in interreflections, to reasonable accuracy.
   As well, one recovers a geometrical form factor that
encapsulates shape and orientation information.

Mark S. Drew and Brian V. Funt
School of Computing Science 
Simon Fraser University
Vancouver, British Columbia, CANADA  V5A 1S6
(604) 291-4682
e-mail: mark@cs.sfu.ca

@TechReport(      FUNT89D
    ,title      = "Color constancy from mutual reflection"
    ,author     = "B.V. Funt, M.S. Drew, and J. Ho"
    ,key        = "Funt"
    ,Institution= "Simon Fraser University School of Computing Science"
    ,Number     = "CSS/LCCR TR 89-02"
    ,year       = "1989"
        )
@InProceedings(    DREW90C
    ,title      = "Calculating surface reflectance using a single-bounce
                   model of mutual reflection"
    ,author     = "M.S. Drew and B.V. Funt"
    ,key        = "Drew"
    ,organization = "IEEE"
    ,booktitle  = "Proceedings: International Conference on Computer 
                   Vision, Osaka, Dec.4-7/90"
    ,pages      = " "
    ,year       = "1990"
        )
	 	 
------------------------------

Date: Thu, 20 Sep 90 10:57:56 -0400
From: Herb Schilling <schillin@SCL.CWRU.Edu>
Subject: Symposium on Automated Inspection of Flat Rolled Metal Products

                          CALL FOR PAPERS

            12th Advanced Technology Symposium on Automated
             Inspection of Flat Rolled Metal Products


   The Advanced Technology Committee of the Iron and Steel Society will
   sponsor this symposium on November 3-6, 1991 in Orlando, Florida. The
   Symposium will focus on automated inspection - surface and internal - of 
   flat rolled metal products at all stages of production. Inspection methods
   will include those for defect detection as well as those for determination 
   of physical properties. Non-contact vision methods are of particular
   interest. The committee is soliciting abstracts on the
   following topics :

    From Instrumentation Suppliers / Researchers
    --------------------------------------------

              -   Advanced sensors/ automated systems
              -   New applications
              -   Research results
              -   Technology trends

    From Users of Inspection Systems
    --------------------------------

              -   Experiences with current inspection technologies
              -   Inspection system start-up/tuning challenges
              -   Benefits derived from the application of automated
                         inspection systems
              -   Future inspection needs


     Reply, by January 15, 1991 , to :


	Mr. E. A. Mizikar
	Senior Director, Process Development
        LTV Technology Center
        6801 Brecksville Road
        Independence, Ohio 44131
        USA

	Telephone : (216) 642-7206
        Fax No    : (216) 642-4599


	Or send email to : Herb Schilling at schillin@scl.cwru.edu

------------------------------

Date:         Fri, 28 Sep 90 09:40:06 CST
From: "Centro de Inteligencia Artificial(ITESM)" <ISAI@TECMTYVM.MTY.ITESM.MX>
Subject:      THIRD ISAI IN MEXICO

     To whom it may concern:
          Here you will find the information concerning the
      "THIRD INTERNATIONAL SYMPOSIUM ON ARTIFICIAL INTELLIGENCE".

          Please also display it on your bulletin board.
          Thank you very much in advance.
              Sincerely,
                        The Symposium Publicity Committee.
====================================================================
          THIRD INTERNATIONAL SYMPOSIUM ON
               ARTIFICIAL INTELLIGENCE:
  APPLICATIONS OF ENGINEERING DESIGN & MANUFACTURING IN
          INDUSTRIALIZED AND DEVELOPING COUNTRIES

             OCTOBER 22-26, 1990
                ITESM, MEXICO

   The Third International Symposium on Artificial Intelligence will
   be held in Monterrey, N.L. Mexico on October 22-26, 1990.
   The Symposium is sponsored by the ITESM (Instituto Tecnologico y
   de Estudios Superiores de Monterrey)  in cooperation with the
   International Joint Conferences on Artificial Intelligence Inc.,
   the American Association for Artificial Intelligence, the Sociedad
   Mexicana de Inteligencia Artificial and IBM of Mexico.

   GOALS:
   * Promote the development and use of AI technology in the
     solution of real world problems. Analyze the state-of-the-art
     of AI technology in different countries. Evaluate efforts
     made in the use of AI technology in all countries.

   FORMAT:
   ISAI consists of a tutorial and a conference.
           Tutorial.- Oct. 22-23
           Set of seminars on relevant AI topics given in two days.
   Topics covered in the Tutorial include:
   "Expert Systems in Manufacturing"
   Mark Fox, Ph.D., Carnegie Mellon University, USA
   "A.I. as a Software Development Methodology"
   Randolph Goebel, Ph.D., University of Alberta, Canada

            Conference.- Oct. 24-26
            Set of lectures given during three days. It consists of
   invited papers and selected papers from the "Call for Papers"
   invitation. Areas of application include: computer aided product
   design, computer aided product manufacturing, use of industrial
   robots, process control and ES, automatic process inspection and
   production planning.
   Confirmed guest speakers:
   Nick Cercone, Ph.D, Simon Fraser University, Canada
   "Expert Information Management with Integrated Interfaces"

   Mitsuru Ishizuka, Ph.D, University of Tokyo, Japan
   "Fast Hypothetical Reasoning System as an Advanced
   Knowledge-base Framework"

   Alan K. Mackworth, Ph.D, University of British Columbia, Canada
   "Model-based Computational Vision"

   Antonio Sanchez, Ph.D, Universidad de las Americas, Mexico

   Sarosh N. Talukdar, Ph.D, Carnegie Mellon University, USA
   "Desing System Productivity: Some bottlenecks and potential
   solutions".

   Carlos Zozaya Gorostiza, Ph.D, CONDUMEX, Mexico

   IMPORTANT:
              Computer manufacturers, commercial AI companies,
   universities and authors of selected papers with working programs may
   present products and demonstrations during the conference.
   In order to encourage an atmosphere of friendship and exchange
   among participants, some social events are being organized.
     For your convenience we have arranged a free shuttle bus
   service between the hotel zone and the ITESM during the three-day
   conference.

    FEES:
         Tutorial.-
           Professionals    $ 250 USD + Tx(15%)
           Students         $ 125 USD + Tx(15%)
        Conference.-
           Professionals    $ 180 USD + Tx(15%)
           Students         $  90 USD + Tx(15%)
           Simultaneous Translation   $ 7 USD
           Formal dinner    $ 25 USD *
           *(Includes dinner, open bar, music  (Oct 26))
    Tutorial fee includes:
        Tutorial material.
        Welcoming cocktail party (Oct.22)

    Conference fee includes:
        Proceedings.
        Welcoming cocktail party (Oct.24)
        Cocktail party. (Oct.25)

    HOTELS:
        Call one of the hotels listed below and mention that you
    are going to the 3rd ISAI. Published rates are for single or
    double rooms.
    HOTEL                   PHONE*              RATE
    Hotel Ambassador       42-20-40          $85 USD + Tx(15%)
    Gran Hotel Ancira      42-48-06          $75 USD + Tx(15%)
                           91(800) 83-060
    Hotel Monterrey        43-51-(20 to 29)  $60 USD + Tx(15%)
    Hotel Rio              44-90-40          $48 USD + Tx(15%)
    * The area code for Monterrey is (83).

    REGISTRATION PROCEDURE:
        Send personal check payable to "I.T.E.S.M." to:
              "Centro de Inteligencia Artificial,
               Attention: Leticia Rodriguez,
               Sucursal de Correos "J", C.P. 64849,
               Monterrey, N.L. Mexico"

        INFORMATION:
              CENTRO DE INTELIGENCIA ARTIFICIAL, ITESM.
              SUC. DE CORREOS "J", C.P. 64849 MONTERREY, N.L. MEXICO.
              TEL.    (83) 58-20-00 EXT.5132 or 5143.
              TELEFAX (83) 58-07-71, (83) 58-89-31,
              NET ADDRESS:
                          ISAI AT TECMTYVM.BITNET
                          ISAI AT TECMTYVM.MTY.ITESM.MX


------------------------------

End of VISION-LIST
********************

Vision-List-Request@ADS.COM (Vision-List moderator Phil Kahn) (10/03/90)

Vision-List Digest	Tue Oct 02 11:09:51 PDT 90

 - Send submissions to Vision-List@ADS.COM
 - Send requests for list membership to Vision-List-Request@ADS.COM

Today's Topics:

 Information-Based Complexity in Vision (request)
 Paper Needs .......!!! (Urgent)
 Length of a line
 Vision research in Estonia
 International Workshop on Robust Computer Vision

----------------------------------------------------------------------

Date: Mon, 1 Oct 90 14:12:38 +0200
From: Dario Ringach <dario%TECHUNIX.BITNET@CUNYVM.CUNY.EDU>
Subject: Information-Based Complexity in Vision (request)
Comments:  Domain style address is "dario@techunix.technion.ac.il"

  I would be grateful for any references on applications of
information-based complexity (in the sense of [1]) to vision and image
processing. Thanks in advance.

[1] J. Traub, et al "Information-Based Complexity", Academic Press, 1988.

BITNET: dario@techunix | ARPANET: dario%techunix.bitnet@cunyvm.cuny.edu
Domain: dario@techunix.technion.ac.il | UUCP: ...!psuvax1!techunix.bitnet!dario
Dario Ringach, Technion, Israel Institute of Technology,
Dept. of Electrical Engineering, Box 52, 32000 Haifa, Israel

------------------------------

Date: Sat, 29 Sep 90 23:20:10 GMT
From: brian@ucselx.sdsu.edu (Brian Ho)
Subject: Paper Needs .......!!! (Urgent)
Organization: San Diego State University Computing Services

Hello folks,
I am urgently looking for a paper that was just presented at the
  International Conference on Automation, Robotics, and Computer Vision,
  held in Singapore, Sept 18 - 21, 1990.  The paper I am looking for is:

	Title :  Recognition of cursive writing - a method of segmentation
		 of words into characters.
	Authors: M. Leroux and J.C. Salome.

  I would greatly appreciate it if anyone could send me a copy or tell
  me where I can get hold of the paper.

PS. 
  Special thanks to Dr. M. Atiquzzaman (Department of Electrical Engineering)
  from King Fahd University of Petroleum & Minerals (Saudi Arabia), who
  has passed that information to me.


  Thanks in advance..

  Please Send me E-mail at:
   brian@ucselx.sdsu.edu
   brian@cs.sdsu.edu

   Brian


------------------------------

Date: Tue, 2 Oct 90 05:39:09 PDT
From: facc005%saupm00.bitnet@cunyvm.cuny.edu:
Subject: Length of a line

Does anyone know of any references on finding the "length of a line"
using the Hough Transform?

Thanks,
ATIQ (facc005@saupm00.bitnet)
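
[ A sketch, not from the poster: the standard rho-theta Hough transform only
  recovers the *infinite* line rho = x*cos(theta) + y*sin(theta), so the
  length of a segment has to be found in a second pass over the pixels that
  support the winning accumulator cell.  The toy C program below illustrates
  that two-step idea; the image size, the example segment, the 1-degree and
  1-pixel quantisation, and the 1.5-pixel tolerance are all illustrative
  assumptions, not recommendations.  Compile with -lm. ]

/* Toy rho-theta Hough transform on a synthetic 64x64 edge image.  The
   accumulator peak only gives the *infinite* line rho = x*cos(t) + y*sin(t);
   the segment length is recovered afterwards by projecting the pixels that
   lie near that line onto the line direction. */
#include <stdio.h>
#include <math.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

#define W      64
#define H      64
#define NTHETA 180                      /* 1-degree steps                 */
#define RMAX   91                       /* > sqrt(W*W + H*H)              */

static unsigned char img[H][W];
static int acc[NTHETA][2 * RMAX + 1];

int main(void)
{
    double ax = 10, ay = 10, bx = 40, by = 50;   /* known test segment    */
    double s, theta, c, sn, along, tmin = 1e9, tmax = -1e9;
    int x, y, t, r, best_t = 0, best_r = 0;

    for (s = 0.0; s <= 1.0; s += 0.01)           /* rasterise the segment */
        img[(int)(ay + s * (by - ay) + 0.5)][(int)(ax + s * (bx - ax) + 0.5)] = 1;

    for (y = 0; y < H; y++)                      /* voting                */
        for (x = 0; x < W; x++)
            if (img[y][x])
                for (t = 0; t < NTHETA; t++) {
                    theta = t * M_PI / NTHETA;
                    r = (int)floor(x * cos(theta) + y * sin(theta) + 0.5);
                    acc[t][r + RMAX]++;
                }

    for (t = 0; t < NTHETA; t++)                 /* accumulator peak      */
        for (r = 0; r < 2 * RMAX + 1; r++)
            if (acc[t][r] > acc[best_t][best_r]) { best_t = t; best_r = r; }

    theta = best_t * M_PI / NTHETA;
    c = cos(theta); sn = sin(theta);
    for (y = 0; y < H; y++)                      /* second pass: extent   */
        for (x = 0; x < W; x++)                  /* of the support pixels */
            if (img[y][x] && fabs(x * c + y * sn - (best_r - RMAX)) < 1.5) {
                along = -x * sn + y * c;         /* coordinate along line */
                if (along < tmin) tmin = along;
                if (along > tmax) tmax = along;
            }

    printf("true segment length    %.1f\n", hypot(bx - ax, by - ay));
    printf("length from Hough line %.1f  (theta = %d deg, rho = %d)\n",
           tmax - tmin, best_t, best_r - RMAX);
    return 0;
}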

------------------------------

Date: Tue, 2 Oct 90 11:52:15 EDT
From: Michael Black <black-michael@CS.YALE.EDU>
Subject: Vision research in Estonia.

I am planning a visit to Tallinn, Estonia.  While there, I would
like to make contact with anyone interested in computer vision.
I'm not even sure where to look.  Any pointers to people, or even
institutions, who are interested in vision would be greatly 
appreciated.

Michael Black
black@cs.yale.edu

------------------------------

Date: Thu, 27 Sep 90 13:56:23 -0700
From: graham@cs.washington.edu (Stephen Graham)
Subject: International Workshop on Robust Computer Vision

International Workshop on Robust Computer Vision

in cooperation with

IEEE Computer Society
International Society for Photogrammetry 
and Remote Sensing

Robert M. Haralick, Co-Chairman
University of Washington, USA

Wolfgang Forstner, Co-Chairman
Institut fur Photogrammetrie, BRD

Program


University Plaza Hotel
400 NE 45th St
Seattle, Washington, 98105, USA
30 September-3 October 1990

Copies of the Proceedings are available from

     Workshop on Robust Computer Vision
     Stephen Graham
     Department of Electrical Engineering FT-10
     University of Washington
     Seattle, WA  98195
     USA


Sunday, 30 September 1990

Tutorials

9:00-10:30 AM  Quality Analysis, Wolfgang Forstner

10:30-10:45 AM   Break

10:45-12:00 Noon  Robust Methods, Wolfgang Forstner

12:00-1:30 PM  Lunch

1:30-2:45 PM  Robust Pose Estimation, Robert Haralick

2:45-3:00 PM  Break

3:00-4:15 PM  Bias Robust Estimation, Doug Martin


Monday, 1 October 1990

8:00-10:00 AM  Robust Techniques I

Robust Computational Vision, Brian G. Schunck, University 
of Michigan, USA

Developing Robust Techniques for Computer Vision, Xinhua 
Zhuang, University of Missouri, Columbia, and Robert M. 
Haralick, University of Washington, USA

Robust Vision-Programs Based on Statistical Feature 
Measurements,  Chien-Huei Chen and Prasanna G. 
Mulgaonkar, SRI International, USA

10:20 AM-12:20 PM  Robust Techniques II

A Robust Model Based Approach in Shape Recognition, Kie-
Bum Eom and Juha Park, The George Washington 
University, USA

Robust Statistical Methods for Building Classification 
Procedures, David S. Chen and Brian G. Schunck, University 
of Michigan, USA

Noise Insensitivity of an Associative Image Classification 
System, Giancarlo Parodi and Rodolfo Zunino, University 
of Genoa, Italy

2:00-4:00 PM  Line and Curve Fitting I

What's in a Set of Points?, N. Kiryati and A.M. Bruckstein, 
Technion Israel Institute of Technology, Israel

Robust Recovery of Piecewise Polynomial Image Structure, 
Peter Meer, Doron Mintz, and Azriel Rosenfeld, University 
of Maryland, USA

Fitting Curves with Discontinuities, Robert L. Stevenson and 
Edward J. Delp, Purdue University, USA

4:20-5:40 PM  Line and Curve Fitting II

Non-Linear Filtering for Chaincoded Contours, Yuxin Chen 
and Jean Pierre Cocquerez, ENSEA, and Rachid Deriche, 
INRIA Sophia Antipolis, France

Using Geometrical Rules and a Priori Knowledge for the 
Understanding of Indoor Scenes, M. Straforini, C. Coelho, 
M. Campani, and V. Torre, University of Genoa, Italy


Tuesday, 2 October 1990

8:00-9:50 AM  Pose Estimation and Surface 
Reconstruction

Analysis of Different Robust Methods for Pose Refinement, 
Rakesh Kumar and Allen R. Hanson, University of 
Massachusetts, USA

A Robust Method for Surface Reconstruction, Sarvajit S. 
Sinha and Brian G. Schunck, University of Michigan, USA

Dense Depth Maps from 2-D Cepstrum Matching of Image 
Sequences, Dah Jye Lee, Sunanda Mitra, and Thomas F. 
Krile, Texas Tech University, USA

10:10AM-12:20 PM  Smoothing and Differential Operators

Accuracy of Regularized Differential Operators for 
Discontinuity Localization in 1D and 2D Intensity Functions, 
Heiko Neumann and H. Siegfried Stiehl, Universitat 
Hamburg, and Karsten Ottenberg, Philips 
Forschungslaboratorium Hamburg, Federal Republic of 
Germany

On Robust Edge Detection, Linnan Liu, Brian G. Schunck, 
and Charles C. Meyer, University of Michigan, USA

A Statistical Analysis of Stack Filters with Application to 
Designing Robust Image Processing Operators, N. Pal, C.H 
Chu, and K. Efe, The University of Southwestern Louisiana

Image Segmentation through Robust Edge Detection, Amir 
Waks and Oleh J. Tretiak, Drexel University

2:00-4:00 PM  Robust Hough Techniques

A Probabilistic Hough Transform, N. Kiryati, Y. Eldar, and 
A.M. Bruckstein, Technion Israel Institute of Technology, 
Israel

Generalized Minimum Volume Ellipsoid Clustering with 
Application in Computer Vision, Jean-Michel Jolion, 
Universite Claude Bernard, France, and Peter Meer and 
Azriel Rosenfeld, University of Maryland, USA

Random Sampling for Primitive Extraction, Gerhard Roth, 
National Research Council of Canada

4:20-5:40 PM  Panel


Wednesday, 3 October 1990

8:00-10:00 AM  Motion Estimation & Stereo I

Robust Motion Estimation Using Stereo Vision, Juyang Weng, 
Ecole Polytechnique, Montreal, and Paul Cohen, Centre de 
Recherche Informatique de Montreal, Canada

Robust Obstacle Detection Using Optical Flow, Giulio 
Sandini and Massimo Tistarelli, University of Genoa, Italy

10:20 AM-12:20 PM  Motion Estimation & Stereo II

An Algebraic Procedure for the Computation of Optical Flow 
from First Order Derivatives of the Image Brightness, 
Alessandro Verri and Marco Campani, University of Genoa, 
Italy

Robust Dynamic Stereo for Incremental Disparity Map 
Refinement, Arun Tirumalai, Brian Schunck, and Ramesh C. 
Jain, University of Michigan, USA

An Efficient, Linear, Sequential Formulation for Motion 
Estimation from Noisy Data, Subhasis Chaudhuri and Shankar 
Chatterjee, University of California, San Diego, USA

2:00-4:00 PM   Working Group

4:20-5:40 PM  Working Group Presentation


------------------------------

End of VISION-LIST
********************

Vision-List-Request@ADS.COM (Vision-List moderator Phil Kahn) (10/06/90)

Vision-List Digest	Fri Oct 05 18:07:46 PDT 90

 - Send submissions to Vision-List@ADS.COM
 - Send requests for list membership to Vision-List-Request@ADS.COM

Today's Topics:

 Address to Kontron?
 Superquadrics from range data
 What is the State of the Art of Artificial Vision?
 Image Compression Routines for UNIX Systems
 Proceedings of the International Workshop on Robust Computer Vision
 Call for Papers: Geometric Methods in Computer Vision

----------------------------------------------------------------------

Date: Fri, 5 Oct 90 14:51:41 +0100
From: pell@isy.liu.se
Subject: Address to Kontron?

Hello. Does anyone have the (snail-mail) address of Kontron,
a manufacturer of image processing systems in Munich, Germany?
Thanks!

Dept. of Electrical Engineering	                         pell@isy.liu.se
University of Linkoping, Sweden	                    ...!uunet!isy.liu.se!pell

------------------------------

Date: Tuesday, 2 Oct 1990 23:43:24 EDT
From: Bennamoun Mohamed <MOHAMEDB@QUCDN.QueensU.CA>
Subject: Pentland's research.
Organization: Queen's University at Kingston

Hello !

Is anybody familiar with Pentland's research concerning the recovery of
superquadrics from range data?

I have trouble understanding what he means by minimal length encoding, and
how his algorithm performs segmentation!

I will appreciate any help.


Thanks in advance.
                       Mohamed.
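
[ Not Pentland's formulation - only a generic toy illustration of the
  minimum-description-length idea that "minimal length encoding" refers to:
  among competing descriptions of the data, prefer the one for which
  (bits to state the model parameters) + (bits to state the residuals given
  the model) is smallest; segmentation falls out because adding a part is
  only accepted when it pays for itself in shorter residual coding.  The 1-D
  example below uses a simplified BIC-like cost
  0.5*k*log2(n) + 0.5*n*log2(RSS/n), plus log2(n) bits for a split location;
  these choices are illustrative, not taken from his papers. ]

/* Toy minimum-description-length comparison: describe a 1-D signal with one
   constant segment or with two constant segments.  dl() returns a *relative*
   two-part code length; an additive constant that depends only on the data
   precision is dropped, so only differences between models matter. */
#include <stdio.h>
#include <math.h>

static double rss_const(const double *x, int a, int b)
{
    /* residual sum of squares about the mean of x[a..b-1] */
    double m = 0.0, r = 0.0;
    int i;
    for (i = a; i < b; i++) m += x[i];
    m /= (b - a);
    for (i = a; i < b; i++) r += (x[i] - m) * (x[i] - m);
    return r;
}

static double dl(double rss, int n, int k)
{
    /* k parameters at 0.5*log2(n) bits each + idealised cost of the residuals */
    return 0.5 * k * log2((double)n) + 0.5 * n * log2(rss / n + 1e-9);
}

int main(void)
{
    double x[20], dl1, best2 = 1e30, d2;
    int i, n = 20, best_split = -1;

    for (i = 0; i < n; i++)         /* step signal plus small deterministic "noise" */
        x[i] = (i < 10 ? 1.0 : 5.0) + 0.1 * sin(7.0 * i);

    dl1 = dl(rss_const(x, 0, n), n, 1);           /* one-segment description   */

    for (i = 2; i < n - 2; i++) {                 /* best two-segment description;
                                                     log2(n) extra bits encode
                                                     where the split is        */
        d2 = dl(rss_const(x, 0, i) + rss_const(x, i, n), n, 2) + log2((double)n);
        if (d2 < best2) { best2 = d2; best_split = i; }
    }

    printf("one segment : relative cost %7.1f\n", dl1);
    printf("two segments: relative cost %7.1f (split at sample %d)\n",
           best2, best_split);
    printf("MDL prefers the %s model\n", best2 < dl1 ? "two-segment" : "one-segment");
    return 0;
}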

------------------------------

Date: 4 Oct 90 12:21:07 GMT
From: loren@tristan.llnl.gov (Loren Petrich)
Subject: What is the State of the Art of Artificial Vision?
Organization: Lawrence Livermore National Laboratory

	I wish to ask how much has been accomplished in the field of
Artificial Vision. What sorts of things have been achieved in the
field of computerized visual perception? To put it another way, what
things is it possible to "perceive" with the computerized vision
systems that have been devised to date? What progress has been made in
artificial-vision algorithms and in artificial-vision hardware?

	I am sure that appropriate specialized hardware will be
essential for artificial-vision applications, since the amount of raw
data to be processed is enormous, and many of the fundamental
operations are relatively simple and can be done in parallel. And that
is why I asked about hardware.

	Has anyone published the kind of overview of the
artificial-vision field that I have been asking for?


Loren Petrich, the Master Blaster: loren@sunlight.llnl.gov

Since this nodename is not widely known, you may have to try:

loren%sunlight.llnl.gov@star.stanford.edu

------------------------------

Date: 4 Oct 90 19:14:58 GMT
From: boulder!boulder!domik@ncar.UCAR.EDU (Gitta Domik)
Subject: Image Compression Routines for UNIX Systems
Keywords: Image compression, ADCT, UNIX Sources
Organization: University of Colorado, Boulder

I have the problem of storing large amounts of digitized images, and
want to compress them for long-term archiving. For our purposes, ADCT or
similar methods work best. I have tested Kodak's 'Colorsqueeze'
software for the Mac, and the results are okay except for speed, but
I am looking for similar software to run on UNIX machines. For me,
the optimal solution would be public-domain UNIX sources. Can anyone
help?

I am not on Usenet, so please e-mail directly to my address:

[ Also please post to the List so others may benefit from the answers.
		phil...		]

fkappe@tugiig.uucp

Frank Kappe, Technical University Graz 
Institute for Computer Based New Media
Graz, Austria

------------------------------

Date: Thu, 4 Oct 90 14:52:27 -0700
From: graham@cs.washington.edu (Stephen Graham)
Subject: Proceedings of the International Workshop on Robust Computer Vision

The Proceedings of the International Workshop on Robust Computer Vision
are now available. The cost is US$40 per copy, including postage.
To order, please send a check or money order made out to the
International Workshop on Robust Computer Vision to the following
address:

	Workshop on Robust Computer Vision
	c/o Stephen Graham
	Dept. of Electrical Engineering FT-10
	University of Washington
	Seattle, WA  98195  USA

Further information may be obtained by calling (206) 543-8115 or
by e-mail to graham@cs.washington.edu

Stephen Graham

------------------------------

Date: Fri, 5 Oct 90 15:17:04 -0400
From: Baba Vemuri <vemuri@scuba.cis.ufl.edu>
Subject: Call for Papers

		 Announcement and Call for Papers

		 Geometric Methods in Computer Vision
	(Part of SPIE's 36th Annual International Symp. on Optical
	 and Optoelectronic Applied Science and Engineering)
			
			Dates: 25-26th July 1991
		 Location: San Diego, California USA
	San Diego Convention Center and Marriott Hotel & Marina


Conference Chair: 
Baba C. Vemuri, Dept. of CIS, University of Florida, Gainesville, Fl

Cochairs: Ruud M. Bolle, IBM T. J. Watson Research Ctr., Yorktown Heights NY
Demetri Terzopoulos, Dept. of CS, Univ. of Toronto, Canada
Richard Szeliski, CS Research labs, DEC, Boston, MA 
Gabriel Taubin, IBM T. J. Watson Research Ctr., Yorktown Heights NY


The theme of this conference is the application of geometric methods to
low-level vision tasks, specifically shape and motion estimation.
Over the past few years, there has been increased interest in the use
of differential geometry and geometric probability methods for various
low-level vision problems. Papers describing novel contributions in
all aspects of geometric and probabilistic methods in low-level vision
are solicited, with particular emphasis on:

(1) Differential Geometric Methods for Shape Representation

(2) Probability and Geometry (Geometric Probability)

(3) Energy-based Methods for Shape Estimation

(4) Geometry and Motion Estimation


Deadlines: 

Abstract Due Date: 24 December 1990

Manuscript Due Date: 24th June 1991

You may receive the author application kit by sending email requests
to vemuri@scuba.cis.ufl.edu.  Late abstract submissions may be
considered, subject to program time availability and chair's approval.

Submit To: 

SPIE Technical Program Committee/San Diego'91
P. O. Box 10, Bellingham, WA 98227-0010 USA
Telephone: 206/676-3290 (Pacific Time)

------------------------------

End of VISION-LIST
********************

Vision-List-Request@ADS.COM (Vision-List moderator Phil Kahn) (10/17/90)

Vision-List Digest	Tue Oct 16 15:15:27 PDT 90

 - Send submissions to Vision-List@ADS.COM
 - Send requests for list membership to Vision-List-Request@ADS.COM

Today's Topics:

 3-D Object Recognition
 Parallel Languages for Computer Vision
 Image processing at Siemens
 Fractals & data compression
 Performance Evaluation
 Call for Papers and Referees: Special Issue of IEEE COMPUTER

----------------------------------------------------------------------

Date: 16 Oct 90 20:10:22 GMT
From: eleyang@ubvmsd.cc.buffalo.edu (Yang He)
Subject: 3-D Object Recognition
Organization: University at Buffalo

    I have just developed a shape classification algorithm. It has
achieved 100% classification on 2-D shapes. The algorithm can
easily be extended to the 3-D case, but I don't have 3-D data for the
experiments.

    Could anybody out there tell me where I can get 3-D object data
suitable for classification? My requirements are as follows:

    1. Each 2-D slice should be a closed boundary without inner
       contour.
    2. Binary data preferred. But grey level images may also be
       used after boundary extraction. 
    3. I need 5+ classes, 20+ shapes for each class.       
    4. The shapes within a class have contour perturbation, i.e.,
       the shapes are NOT different ONLY in size and orientation.

    If anybody has the information, please send E-mail to
eleyang@ubvms.bitnet. Your help is very much appreciated.


------------------------------

Date: Fri, 12 Oct 90 15:33:10 -0400
From: Pierre Tremblay  <pierre@sol.McRCIM.McGill.EDU>
Subject: Parallel Languages for Computer Vision

I'm doing research as part of my Master's thesis on parallel
programming models and languages appropriate for "intermediate-level"
computer vision (i.e. no SIMD processing, less emphasis on graph
searching).

Does anyone have references to parallel programming languages
specifically designed for computer vision programming, or references
to general purpose parallel programming languages used in such a
capacity?

Please reply by E-mail, and I'll post back to the net with a summary.

Many thanks,

		Pierre

* Pierre P. Tremblay        Internet: pierre@larry.mcrcim.mcgill.edu *
* Research Assistant        Surface:  3480 University Street         *
* McGill Research Centre 	      Montreal, Quebec, Canada       *
* for Intelligent Machines  	      H3A 2A7                        *
*                           Phone:    (514) 398-8204 or 398-3788     *

------------------------------

Date: Fri, 12 Oct 90 17:15:01 +0100
From: br%bsun3@ztivax.siemens.com (A. v. Brandt)
Subject: Image processing at Siemens

I'd like to introduce to you the Image Processing Group of Siemens Corporate 
Research, Munich, Germany. We are about twenty researchers doing basic 
and applied studies in the areas of image understanding 
(document interpretation, object recognition, motion estimation, 3D modeling) 
and artificial neural networks (models, implementations, selected applications).
The Laboratory is located in Munich, an attractive city in the south of the 
Federal Republic of Germany (i.e., in Bavaria).

Connections exist with our sister laboratory, Siemens Corporate Research 
in Princeton, NJ, as well as with various research institutes and universities 
in Germany and in the U.S. including MIT, CMU and ICSI.

Above and beyond the Laboratory facilities, the group has a network of 
Sun and DEC workstations, Symbolics Lisp machines, file and compute servers, 
and dedicated image processing hardware.

My personal interests are in image sequence analysis, moving object recognition
for surveillance and traffic monitoring, depth from stereo and motion,
optical flow estimation, autonomous navigation etc. If someone is interested 
in more details, or if someone would like to participate in one of our 
projects (we have openings), please send a message to:

	Achim v. Brandt
	Siemens AG
	ZFE IS INF 1
	Otto-Hahn-Ring 6
	D-8000 Muenchen 83
	(West) Germany

	email: 	brandt@ztivax.siemens.com
	Tel. +49-89-636-47532
	FAX  +49-89-636-2393

------------------------------

Date: Fri, 12 Oct 90 20:33:00 +0100
From: Eduardo Bayro <eba@computing-maths.cardiff.ac.uk>
Subject: fractals & data compression
Organization: Univ. of Wales Coll. of Cardiff, Dept. of Electronic & Systems 
              Engineering

Hello friends!!  

I asked a couple of months ago for bibliography on fractals. I am very
thankful to everybody who answered me. The literature I have gathered
so far comprises a few books and nearly ten articles, so it is not yet
worth mailing out as a representative list. Now, friends, what I am
really interested in is fractals for image compression.
Please, if anybody knows suitable references or has suggestions, send
them by e-mail. Many thanks, Eduardo.

 Eduardo Bayro, School of Electrical, Electronic and Systems Engineering,
 University of Wales College of Cardiff, Cardiff, Wales, UK.
 Internet: eba%cm.cf.ac.uk@nsfnet-relay.ac.uk        Janet:  eba@uk.ac.cf.cm
 UUCP:     eba@cf-cm.UUCP or ...!mcsun!ukc!cf-cm!eba

------------------------------

Date: Thu, 11 Oct 90 17:55:57 PDT
From: tapas@george.ee.washington.edu
Subject: Performance Evaluation

We are compiling a survey of work done on performance evaluation
of low-level vision algorithms. The kinds of papers we are looking
for are:
 (i)  Any work on the general methodology of performance evaluation.
 (ii) Performance evaluation of specific types of algorithms,
      e.g., edge detection algorithms, corner detection algorithms, etc.

I have appended a list of references we have already collected.

We would appreciate any references to this kind of work. I'll summarize
the responses to the net. 

Thanks in advance.

Tapas Kanungo <tapas@george.ee.washington.edu>

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
@article{ DeF:eval,
author = "Deutsch, E. S. and J. R. Fram",
title = "A quantitative study of the
         Orientational Bias of some Edge Detector Schemes",
journal = "IEEE Transactions on Computers",
month = "March",
year = 1978}
 
@article{FrD:human,
author = "Fram, J.R. and E.S. Deutsch",
title = "On the quantitative evaluation of edge detection schemes and
         their comparisons with human performance",
journal = "IEEE Transaction on Computers",
volume = "C-24",
number = "6",
pages = "616-627"
year = 1975}

@article{AbP:eval,
author = "Abdou, I.E. and W. K. Pratt",
title = "Qualitative design and evaluation of enhancement/thresholding
          edge detector",
journal = "Proc. IEEE",
volume = "67",
number = "5",
pages = "753-763",
year = 1979}
 
@article{PeM:eval,
author = "Peli, T. and D. Malah",
title = "A study of edge detection algorithms",
journal = "Computer Graphics and Image Processing",
volume = "20",
pages ="1-21",
year = 1982}
 
@article{KiR:eval,
author = "Kitchen, L. and A. Rosenfeld",
title = "Edge Evaluation using local edge coherence",
journal = "IEEE Transactions on Systems, Man and Cybernetics",
volume = "SMC-11",
number = "9",
pages = "597-605",
year = 1981}

@article{HaL:eval,
author = "Haralick, R.M. and J. S. J. Lee",
title = "Context dependent edge detection and evaluation",
journal = "Pattern Recognition",
volume = "23",
number = "1/2",
pages = "1-19",
year = 1990}

@article{Har:performance,
author = "Haralick, R.M.",
title = "Performance assessment of near perfect machines",
journal = "Journal of machine vision algorithms",
year = 1988}
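
[ One concrete measure from the references above, in case it is useful to
  other readers: the Pratt figure of merit used by Abdou and Pratt,
  FOM = (1/max(N_ideal, N_detected)) * sum over detected edge pixels of
  1/(1 + alpha*d^2), where d is the distance from a detected pixel to the
  nearest ideal edge pixel and alpha (commonly 1/9) penalises displacement.
  The C sketch below uses brute-force nearest-pixel distances and a toy 8x8
  example; it is illustrative only, not a reference implementation. ]

/* Pratt's figure of merit: how well a detected edge map matches an ideal one. */
#include <stdio.h>

#define N 8
#define ALPHA (1.0 / 9.0)

static double fom(int ideal[N][N], int detected[N][N])
{
    int x, y, u, v, ni = 0, nd = 0, nmax;
    double sum = 0.0;

    for (y = 0; y < N; y++)
        for (x = 0; x < N; x++) { ni += ideal[y][x]; nd += detected[y][x]; }
    nmax = ni > nd ? ni : nd;
    if (nmax == 0) return 1.0;                /* both maps empty               */

    for (y = 0; y < N; y++)
        for (x = 0; x < N; x++) {
            double d2min = 1e30, d2;
            if (!detected[y][x]) continue;
            for (v = 0; v < N; v++)           /* brute-force squared distance  */
                for (u = 0; u < N; u++)       /* to the nearest ideal pixel    */
                    if (ideal[v][u]) {
                        d2 = (double)((u - x) * (u - x) + (v - y) * (v - y));
                        if (d2 < d2min) d2min = d2;
                    }
            sum += 1.0 / (1.0 + ALPHA * d2min);
        }
    return sum / nmax;
}

int main(void)
{
    int ideal[N][N] = {{0}}, shifted[N][N] = {{0}}, y;

    for (y = 0; y < N; y++) { ideal[y][3] = 1; shifted[y][4] = 1; }

    printf("FOM, perfect detection    : %.3f\n", fom(ideal, ideal));
    printf("FOM, edge off by one pixel: %.3f\n", fom(ideal, shifted));
    return 0;
}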


------------------------------

Date: Fri, 12 Oct 90 00:39:31 EDT
From: choudhar@cat.syr.edu (Alok Choudhary)
Subject: Call for Papers and Referees: Special Issue of IEEE COMPUTER

		  Call for Papers and Referees

		for  A Special Issue of IEEE COMPUTER on

      Parallel Processing for Computer Vision and Image Understanding (CVIU)

The February 1992 issue of IEEE Computer will be devoted to
Parallel Processing for Computer Vision and Image Understanding (CVIU).
Tutorial, survey, case-study of architectures, performance evaluation
and other manuscripts are sought. Sub-areas of interest include, but are
not limited to :

Architectures :  Multiprocessor architectures and special purpose architectures
		 for CVIU.
Algorithms :    Design, mapping and implementations of parallel algorithms
		 for CVIU problems.
Languages :     Design of languages for efficient implementation of CVIU
		 programs, especially for parallel processing and architecture-
	         independent implementations.
Software Development Tools : Software development tools for parallel CVIU
		 applications.
Performance Evaluation : Benchmarking, performance evaluation  of
		architectures and algorithms; performance evaluation of 
		integrated CVIU systems.
Real-time vision architectures and applications.

	          Instructions for submitting manuscripts

Manuscripts must not have been previously published or currently under
consideration for publication elsewhere. Manuscripts should be no longer than
8000 words (approximately 30 double-spaced, single-sided pages using 12-point
type), including all text, figures, and references.
Manuscripts should include a  title page containing: paper title;
full name(s), affiliation(s), postal address, e-mail address, telephone,
and fax number of all authors; a 300-word abstract; and a list of key words.

		  	     Deadlines

    - Eight (8) Copies of the Full Manuscript       March 1,  1991
    - Notification of Decisions                     August 1, 1991
    - Final Version of the Manuscript               October 1, 1991
    - Date of Special Issue                         February 1992


	         Send submissions and Questions to

Prof. Alok N. Choudhary                   Prof. Sanjay Ranka
Electrical and Computer Engineering       School of Computer and Information
Department				  Science 
121 Link Hall                             4-116, Center for Science and
					  Technology
Syracuse University                       Syracuse University
Syracuse, NY 13244 			  Syracuse, NY 13244
(315) 443-4280                            (315) 443-4457 
choudhar@cat.ece.syr.edu                  ranka@top.cis.syr.edu 


                             Referees

If you are willing to referee papers for the special issue, please send a note
with research interests to:
			 Prof. John T. Butler,
			 Associate Technical Editor, Computer
			 Department of Electrical and
			 Computer Engineering
			 Naval Postgraduate School, Code EC/Bu
			 Monterey, CA, 92943-5004
			 Office: (408) 646-3299
			      or (408) 646-3041
			 fax: (408) 646-2760
			 email: butler@cs.nps.navy.mil.

------------------------------

End of VISION-LIST
********************

Vision-List-Request@ADS.COM (Vision-List moderator Phil Kahn) (11/01/90)

Vision-List Digest	Wed Oct 31 12:22:21 PDT 90

 - Send submissions to Vision-List@ADS.COM
 - Send requests for list membership to Vision-List-Request@ADS.COM

Today's Topics:

 Call for Suggestions: Workshop on Shape Description/Representation
 Optical Flow in Realtime
 Image-restoration and image reconstruction software?
 Paper needed!
 Canny's edge detector
 CVNet- Open house during OSA Annual Meeting
 Abstract: Neural Network System for Histological Image Understanding
 CNS Program at Boston University Hiring 2 Assistant Professors
 Submission for comp-ai-vision
 research post
 CVNet- Two tenure track positions

----------------------------------------------------------------------

Date: Tue, 30 Oct 90 14:08:20 +0100
From: henkh@cwi.nl
Subject: Call for Suggestions: Workshop on Shape Description/Representation

CALL FOR SUGGESTIONS

We are intending to organise a workshop on shape description and 
representation of 2-D (binary and greylevel) images.

The emphasis will be on the underlying theory provided by 
(contemporary) mathematics and on algorithms for applications.

Keywords are: theory of shape, category theory, scale space methods, 
     differential geometry and topology, mathematical morphology, 
     graph representation, computational geometry. 


If you have any suggestions regarding the topics or persons working on
the subject, please let us know.


Kind regards,  O Ying-Lie, Lex Toet, Henk Heijmans. 

Please e-mail your suggestions to:
Henk Heijmans
CWI 
Kruislaan 413
NL-1098 SJ Amsterdam
The Netherlands

e-mail: henkh@cwi.nl

------------------------------

Date: Thu, 25 Oct 90 18:53:25 +0100
From: jost@bmwmun.uucp.dbp.de (Jost Bernasch)
Subject: Optical Flow in Realtime

Help! Is there anybody who could give me some hints or
answers to the following questions:

  1. Is there any company or research lab that can compute
     image flow on grey-level images (256x256) in real time?
     Is a chip or a board available anywhere?

  2. Does anybody have *practical* experience in computing
     qualitative depth information from optical flow?
     How sensitive is optical flow (from image sequences)
     to noise? Are there any basic problems?

  3. Is computing depth information from *normal* flow
     theoretically possible?

We at BMW are developing a laterally and longitudinally
controlled car, which should (for experiments) drive
automatically and which might in the future become an intelligent
assistant to the driver, in whatever form.

We will use (if available) these techniques to detect
obstacles that are lying or driving on the street,
by comparing the expected optical flow or the expected depth
of the plane (we assume the street is a plane) to the
computed optical flow or depth. From the differences
we will conclude which objects are NOT in the plane.

Any help is very much welcomed!

Yours
Jost Bernasch,                                                        
BMW AG Muenchen, Dep. EW-13, 		      Tel. ()89-3183-2822     
P.O. BOX 40 02 40, 			      FAX ()89-3183-4767      
D-8000 Muenchen 40,  Germany		      jost@bmwmun.uucp.dbp.de 
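
[ A sketch of the comparison described in the message above, under
  assumptions that are not the poster's: when the field of view is modest,
  the image flow induced by the (planar) road is well approximated by a
  low-order parametric model (the exact planar model is quadratic in x and
  y; the affine model below is a further simplification), so one can fit
  that model to the measured flow and treat vectors that disagree with the
  fit as obstacle candidates.  The flow values, the synthetic "obstacle"
  and the residual threshold are illustrative only.  Note also that plain
  least squares is itself pulled by outliers; in practice a robust
  estimator (M-estimator, least median of squares) would be preferable. ]

/* Flag obstacle candidates by fitting the dominant (road-plane) motion with an
   affine flow model  u = a0 + a1*x + a2*y,  v = b0 + b1*x + b2*y  and keeping
   the flow vectors that disagree with the fit. */
#include <stdio.h>
#include <math.h>

#define GRID 5
#define NPTS (GRID * GRID)

static double det3(double m[3][3])
{
    return m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
         - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
         + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]);
}

static void solve3(double A[3][3], double b[3], double p[3])
{
    /* Cramer's rule; fine for a 3x3 normal-equation system */
    double d = det3(A), Ai[3][3];
    int i, j, k;
    for (k = 0; k < 3; k++) {
        for (i = 0; i < 3; i++)
            for (j = 0; j < 3; j++)
                Ai[i][j] = (j == k) ? b[i] : A[i][j];
        p[k] = det3(Ai) / d;
    }
}

int main(void)
{
    double x[NPTS], y[NPTS], u[NPTS], v[NPTS];
    double A[3][3] = {{0}}, bu[3] = {0}, bv[3] = {0}, au[3], av[3];
    int i, j, k;

    /* synthetic measured flow: the road plane induces u = 1 + 0.05x, v = 0.03y;
       the last sample is an "obstacle" moving differently                      */
    for (i = 0; i < NPTS; i++) {
        x[i] = 10.0 * (i % GRID);
        y[i] = 10.0 * (i / GRID);
        u[i] = 1.0 + 0.05 * x[i];
        v[i] = 0.03 * y[i];
    }
    u[NPTS - 1] += 3.0;
    v[NPTS - 1] += 2.0;

    for (i = 0; i < NPTS; i++) {                  /* least-squares normal eqns */
        double phi[3] = { 1.0, x[i], y[i] };
        for (j = 0; j < 3; j++) {
            for (k = 0; k < 3; k++) A[j][k] += phi[j] * phi[k];
            bu[j] += phi[j] * u[i];
            bv[j] += phi[j] * v[i];
        }
    }
    solve3(A, bu, au);                            /* dominant motion, u part   */
    solve3(A, bv, av);                            /* dominant motion, v part   */

    for (i = 0; i < NPTS; i++) {
        double r = hypot(u[i] - (au[0] + au[1] * x[i] + au[2] * y[i]),
                         v[i] - (av[0] + av[1] * x[i] + av[2] * y[i]));
        if (r > 1.0)                              /* illustrative threshold    */
            printf("flow sample at (%.0f,%.0f): residual %.2f -> obstacle candidate\n",
                   x[i], y[i], r);
    }
    return 0;
}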

------------------------------

Date: Fri, 26 Oct 90 08:06:42 +0100
From: Reiner Lenz <reiner@isy.liu.se>
Subject: Image-restoration and image reconstruction software?

Are there any (public domain or commercial) software packages for image 
restoration and image reconstruction available?

If there is enough response I will summarize.

"Kleinphi macht auch Mist"
Reiner Lenz | Dept. EE.                 |
            | Linkoeping University     | email:        reiner@isy.liu.se
            | S-58183 Linkoeping/Sweden |

------------------------------

Date: Fri, 26 Oct 90 12:34:57 -0700
From: zeitzew@CS.UCLA.EDU (Michael Zeitzew)
Subject: Paper needed!

Hello,

I am looking for a paper from the conference "Speech Technology" 1985....

J.F. Mari and S. Roucos, "Speaker Independent Connected Digit Recognition
Using Hidden Markov Models", Proc. Conf. "Speech Technology", New York, April 
1985

I know the publisher is Media Dimensions, but they won't sell or give me
just one paper; I'd have to buy the entire proceedings ($150+).

If you have it, or know where to get it and wouldn't mind mailing it to me,
I'll be glad to pay for postage, etc. If you know a library that has it, please
let me know.

Mike Zeitzew
zeitzew@cs.ucla.edu

------------------------------

Date: Mon, 29 Oct 90 22:36:42 EST
From: namuduri@ziggy.cmd.usf.edu (Kameswara Namuduri)
Subject: Canny's edge detector

I need the program for Canny's edge detector. I would appreciate it if
someone could send it to the following address.

namuduri@ziggy.usf.edu
	
Thanks in advance			-namuduri

------------------------------

Date: 	Tue, 30 Oct 90 06:31:48 EST
From: Color and Vision Network <CVNET%YORKVM1.bitnet@ugw.utcs.utoronto.ca>
Subject: CVNet- Open house during OSA Annual Meeting

Open House with the MIT Media Lab Vision Group:

For those of you who will be attending the OSA meeting in Boston, the
Vision and Modeling group of the MIT Media Lab invites you to visit
on Wednesday, Nov. 7, from 10:00am to 1:00pm.  We will be showing
our current work on such topics as face recognition, motion analysis,
image coding, physical modeling, and 3-D sensing.

The Media Lab is in the Wiesner Building (also known as E15),
at 20 Ames St., in Cambridge.  It is near the Kendall subway stop on the
Red Line.  From the conference, take the Green Line to Park Station,
then change for the Red Line toward Harvard.  Get off at Kendall, walk
1/2 block to Legal Seafoods Restaurant, then turn left and go 3/4 block
on Ames.

Hope to see you!

Ted Adelson
Sandy Pentland

------------------------------

Date: Mon, 29 Oct 90 17:43:30 +0000
Subject: Abstract: Neural Network System for Histological Image Understanding
From: P.Refenes@cs.ucl.ac.uk

The following pre-print (SPIE-90, Boston, Nov. 5-9 1990) is available.
(write or e-mail to A. N. Refenes at UCL)

AN INTEGRATED NEURAL NETWORK SYSTEM for HISTOLOGICAL IMAGE UNDERSTANDING

A. N. REFENES, N. JAIN & M. M. ALSULAIMAN
Department of Computer Science, University College London, London, UK.

This paper describes a neural network system whose architecture was
designed so that it enables the integration of heterogeneous
sub-networks for performing specialised tasks. Two types of networks
are integrated: a) a low-level feature extraction network for
sub-symbolic computation, and b) a high-level network for decision
support.

The paper describes a non-trivial application from histopathology, and
its implementation using the Integrated Neural Network System. We show
that with careful network design, the backpropagation learning
procedure is an effective way of training neural networks for
histological image understanding. We evaluate the use of symmetric and
asymmetric squashing functions in the learning procedure and show that
symmetric functions yield faster convergence and 100% generalisation
performance.
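
[ For readers who want the two families of squashing functions mentioned
  above in concrete form: the logistic function is asymmetric (outputs in
  (0,1)) while tanh is its symmetric, zero-centred rescaling (outputs in
  (-1,1)); the usual argument for faster backpropagation convergence with
  symmetric units is that zero-centred activations stop all the weight
  updates into a unit from sharing one sign.  The snippet below is a
  generic illustration, not the authors' network. ]

/* Asymmetric (logistic) versus symmetric (tanh) squashing functions and the
   derivatives used in backpropagation; note tanh(x) = 2*logistic(2x) - 1. */
#include <stdio.h>
#include <math.h>

static double logistic(double x)       { return 1.0 / (1.0 + exp(-x)); }
static double logistic_deriv(double y) { return y * (1.0 - y); }   /* y = logistic(x) */
static double tanh_deriv(double y)     { return 1.0 - y * y; }     /* y = tanh(x)     */

int main(void)
{
    double x, a, s;
    for (x = -2.0; x <= 2.0; x += 1.0) {
        a = logistic(x);
        s = tanh(x);
        printf("x = %4.1f   logistic = %.3f (deriv %.3f)   tanh = %6.3f (deriv %.3f)\n",
               x, a, logistic_deriv(a), s, tanh_deriv(s));
    }
    return 0;
}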

------------------------------

Date: Tue, 30 Oct 90 14:43:52 -0500
From: mike@park.bu.edu
Subject: CNS Program at Boston University Hiring 2 Assistant Professors

Boston University seeks two tenure track assistant or associate
professors starting in Fall, 1991 for its M.A. and Ph.D. Program
in Cognitive and Neural Systems.  This program offers an
integrated curriculum covering the full range of psychological,
neurobiological, and computational concepts, models, and methods
in the broad field variously called neural networks,
connectionism, parallel distributed processing, and biological
information processing, in which Boston University is a leader. 
Candidates should have extensive analytic or computational
research experience in modelling a broad range of nonlinear
neural networks, especially in one or more of the areas: vision
and image processing, speech and language processing, adaptive
pattern recognition, cognitive information processing, and
adaptive sensory-motor control.  Candidates for associate
professor should have an international reputation in neural
network modelling.  Send a complete curriculum vitae and three
letters of recommendation to Search Committee, Cognitive and
Neural Systems Program, Room 240, 111 Cummington Street, Boston
University, Boston, MA 02215, preferably by November 15, 1990 but
no later than January 1, 1991.  Boston University is an Equal
Opportunity/Affirmative Action employer.  
Boston University (617-353-7857) Email: mike@bucasb.bu.edu
Smail: Michael Cohen                     111 Cummington Street, RM 242
       Center for Adaptive Systems        Boston, Mass 02215
       Boston University

------------------------------

Date: 30 Oct 90 16:38:08 GMT
From: Paul Lewis <P.H.Lewis@ecs.soton.ac.uk>
Subject: research post

                University of Southampton
       Department of Electronics and Computer Science
           Research Post in Image Understanding

Applications are invited for a  research fellow at post-doctoral 
level to work on a SERC funded project entitled "Enhanced Methods 
of Extracting Features of Engineering Significance from Remotely 
Sensed Images".

The aim of the project is to develop and apply recent work on knowledge
based feature extraction to the provision of tools for extracting 
features such as roads and river networks from satellite images. 
The work will be set in a GIS context and will  make use of transputer 
based imaging workstations.

Applicants should be at post-doctoral or a similar level, ideally having 
recent research experience in image understanding, artificial 
intelligence and software development in C, Prolog and Lisp.

The appointment will be for one year in the first instance with the 
expectation of renewal for two further years. The starting salary 
will be  13495 pounds per annum and the post is available from 
January 1st 1991.

Applications, including a curriculum vitae  and the names and 
addresses of two referees, should be sent to Mr H. F. Watson, 
Staffing Department, the University, Southampton, UK, SO95NH,
to arrive before November 23rd 1990.

Preliminary informal enquiries may be made to Dr Paul Lewis
by  telephone (0703 593715 or 0703 593649).

Dr Paul H. Lewis,
Dept. of Electronics and Computer Science,
The University of Southampton, Southampton, U.K., SO95NH.

e-mail  phl@ecs.soton.ac.uk
Tel. 0703 593715
Fax. 0703 593045 

------------------------------

Date: 	Wed, 31 Oct 90 10:19:26 EST
From: Color and Vision Network <CVNET%YORKVM1.bitnet@ugw.utcs.utoronto.ca>
Subject: CVNet- Two tenure track positions

The Department of Psychology of the Ohio State University seeks Asst.
Prof. applications for two tenure-track positions, starting in September of
1991. One position is in visual perception and one is for a scientist
interested in the relation of visual cognition to neuroscience.
Salary will be in the range of $34,080 to $40,920 and considerable
start-up funds are available. Research areas might include visual
electrophysiology, object recognition, visual attention and memory,
visual/neural modeling and/or visuo-motor coordination. Candidates
should send vita, work sample, and five reference letters to
Mari Riess Jones, Chair Visual Perception Search, 142 Townshend
Hall, Dept Psychology, The Ohio State University, Columbus, Ohio, 43210.
The deadline for applications is December 15, 1990.

------------------------------

End of VISION-LIST
********************

Vision-List-Request@ADS.COM (Vision-List moderator Phil Kahn) (11/06/90)

Vision-List Digest	Mon Nov 05 09:59:09 PDT 90

 - Send submissions to Vision-List@ADS.COM
 - Send requests for list membership to Vision-List-Request@ADS.COM

Today's Topics:

 RE: Optical Flow in Realtime
 Standard (or at least famous) pictures - where to find them
 SGI and ground-truth for shape from motion algorithm
 Fingerprint ID
 Abstract of Talk on Computer Vision and Humility

----------------------------------------------------------------------

Date: Fri, 2 Nov 90 23:33:13 GMT
From: Han.Wang@prg.oxford.ac.uk
Subject: RE: Optical Flow in Realtime

> (1) Is there any company or research lab which could
>compute on grey images (256x256) image flow in real time?

I have achieved a rate of 2-4 seconds for computing optic
flow on 128x128 images using 8 T800 transputers. This is 
only along edge contours (Canny).

>We at BMW are developing a lateral and longitudinal
>controlled car, which should (for experiments) drive
>automatically and which might be in the future an intelligent
>assistent to the driver, in which form soever.
>
>We will use (if available) this techniques to detect
>obstacles, that are lying or driving on the street,

In Oxford, we are building a system with a hybrid architecture
using a Sparc station, a Transputer array and a Datacube to
run the 3D vision system DROID (Roke Manor Research) in
real time, which can effectively detect obstacles in an
unconstrained 3D space. This is, however, not based on optic flow;
it uses corner matching instead. So far, we have succeeded in
testing many sequences, including one from a video camera carried by a
robot vehicle. This experiment will be demonstrated in
Brussels during the ESPRIT conference (9th-16th Nov. 1990).

regards
Han

------------------------------

Date: Fri, 2 Nov 90 11:40:06 EST
From: John Robinson <john@watnow.waterloo.edu>
Subject: Standard (or at least famous) pictures - where to find them

We are searching for raster versions of "famous" monochrome and colour images.
Actually, any format will do if we can also get access to a format to raster
convertor. We are particularly interested in getting hold of:

Girl and toys,
Boy and toys,
Gold hill (steep village street with countryside)
Boilerhouse (picture with lots of shadows),
Side view of man with camera on a tripod (actually there are at least two
pictures of that description around - we'd prefer the one with the overcoat),
The various portraits from the 60s of one or two people that are often used,
Any single frames taken from videoconference test sequences.

Anything else that fulfils the following would be appropriate:
Good dynamic range,
Low noise,
No restrictions on copyright,
Portraits completely devoid of sexist overtones (e.g. not Lena),

Is there an FTP site with a good selection of these?

Thanks in anticipation

John Robinson
john@watnow.UWaterloo.ca

[ The Vision List Archives are being built.  Currently, of static imagery, 
  they contain Lenna (girl with hat) and mandrill.  A collection of motion
  imagery built for the upcoming Motion Workshop (including densely sampled
  and stereomotion imagery) is also in the FTP accessible archive. 

  If you have imagery which may be of interest and may be distributed to the
  general vision community, please let me know at vision-list-request@ads.com.
				phil...		]

------------------------------

Date:         Thu, 01 Nov 90 19:38:35 IST
From: AER6101%TECHNION.BITNET@CUNYVM.CUNY.EDU
Organization: TECHNION - I.I.T., AEROSPACE ENG.
Subject: SGI and ground-truth for shape from motion algorithm

I am presently working with 3-D scene reconstruction from a sequence
of images. The method I am using is based on corner matching between a
pair of consecutive images. The output is the estimated depth at the
corner pixels. The images are rendered by a perspective projection of
3-D blocks whose vertices are supplied by me as input to the program.
However, the detected corners are not necessarily close to those
vertices. In order to obtain a measurement of the accuracy of the
algorithm I am using, the actual depth at that pixel is needed and I
tried to recover it from the z-buffer.  I thought that (my station is
a Silicon Graphics 4D-GT) the z-buffer values (between 0 and 0x7fffff)
were linearly mapped to the world z-coordinates between the closest
and farthest planes used in the perspective projection procedure
available in the Silicon Graphics graphics library.

The results however don't match the above hypothesis. I tested the
values of the z-buffer obtained when viewing a plane at a known depth
and it was clear that the relation was not linear. Can someone
enlighten me about how the z-buffer values are related to actual
depth? I know there is a clipping transformation that transforms the
perspective pyramid into a -1<x,y,z<1 cube, but perhaps I am missing
something else. If anybody has an opinion or reference that could help
me I would be very pleased to receive it in my E-mail
(aer6101@technion.bitnet). I would summarize the received answers and
post a message with the conclusions.

Thanking you in advance,        jacques-
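
[ A sketch of the usual answer, assuming an OpenGL-style perspective
  transform (the graphics library on that machine may scale or sign things
  differently, but the functional form is the same): after the perspective
  divide the stored depth is an affine function of 1/z, not of z, so the
  z-buffer is linear in *reciprocal* depth between the near and far planes.
  The two functions below map between a normalised buffer value and
  eye-space depth; 0x7fffff is the full-scale value quoted in the posting.
  This is for checking the shape of the relation, not a statement of what
  the 4D-GT hardware actually stores. ]

/* Relation between z-buffer values and eye-space depth for a standard
   perspective projection: the stored value is linear in 1/z, not in z. */
#include <stdio.h>

static double zbuf_from_depth(double z, double n, double f)
{
    /* normalised buffer value in [0,1] for eye-space depth z, near/far n,f */
    return (f / (f - n)) * (1.0 - n / z);
}

static double depth_from_zbuf(double d, double n, double f)
{
    /* inverse mapping: eye-space depth for a normalised buffer value d */
    return (f * n) / (f - d * (f - n));
}

int main(void)
{
    double n = 1.0, f = 1000.0, d, z;
    double zmax = (double)0x7fffff;     /* full-scale value quoted in the posting */

    for (d = 0.0; d <= 1.0; d += 0.25)  /* evenly spaced buffer values...         */
        printf("buffer %8.0f  ->  eye depth %8.2f\n",
               d * zmax, depth_from_zbuf(d, n, f));

    for (z = 200.0; z <= 1000.0; z += 400.0)   /* ...versus evenly spaced depths  */
        printf("eye depth %6.1f  ->  buffer %8.0f\n",
               z, zbuf_from_depth(z, n, f) * zmax);
    return 0;
}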

------------------------------

Date: Mon, 5 Nov 90 10:07:37 EST
From: perry@dewey.CSS.GOV (John Perry)
Subject: Fingerprint ID

Does anyone know of any review articles or recent
textbooks in fingerprint ID systems, research, etc.

Thanks,
John

------------------------------

Date: 2 Nov 90 01:59:57 GMT
From: sher@wolf.cs.buffalo.edu (David Sher)
Subject: Abstract of Talk on Computer Vision and Humility
Organization: State University of New York at Buffalo/Comp Sci

I thought the people who read these lists may find this talk
abstract of interest:

Computer Vision Algorithms with Humility 
David Sher
Nov 1, 1990

Often computer vision algorithms are designed in this way - a plausible
and mathematically convenient model of some visual phenomenon is
constructed which defines the relationship between an image and the
structure to be extracted.  For example: the image is modeled as broken
up into regions with constant intensities degraded by noise and the
region boundaries are defined to be places in the undegraded image with
large gradients.  Then an algorithm is derived that generates optimal
or near optimal estimates of the image structure according to this
model.  This approach assumes that constructing a correct model of our
complex world is possible.  This assumption is a kind of arrogance that
yields algorithms that are difficult to improve, since the problems
with this algorithm result from inaccuracy in the model.  How one
changes an algorithm given changes in its model often is not obvious.

I propose another paradigm which follows the advice of Rabbi Gamliel,
"Provide thyself a teacher" and Hillel, "He who does not increase his
knowledge decreases it."  We are designing computer algorithms that
translate advice and correction into perceptual strategies.  Because
these algorithms can incorporate a large variety of statements about
the world into their models they can be easily updated and initial
inaccuracies in their models can be automatically corrected.  I will
illustrate this approach by discussing 6 results in computer vision 3
of which directly translate human advice and correction into computer
vision algorithms, 3 of which indirectly use human advice.

 David Sher
ARPA: sher@cs.buffalo.edu	BITNET: sher%cs.buffalo.edu@ubvm.bitnet	UUCP:
{apple,cornell,decwrl,harvard,rutgers,talcott,ucbvax,uunet}!cs.buffalo.edu!sher

------------------------------

End of VISION-LIST
********************