[comp.ai.neural-nets] Neuron Digest V6 #16

neuron-request@HPLABS.HPL.HP.COM ("Neuron-Digest Moderator Peter Marvit") (02/24/90)

Neuron Digest	Friday, 23 Feb 1990
		Volume 6 : Issue 16

Today's Topics:
		       Attentional Neurocomputers
		      backprop training with noise
		    Re: backprop training with noise
		    Re: backprop training with noise
		    Re: backprop training with noise
			  can machines think ?
		  Companies involved in NN development
			 Network initialization
		Neural Microchip Intel's N64 Info Desired
			 Re: Neuron Digest V6 #9
			   Real Brain Theories
	      Request for training data sets for learning.
	       SunNet .... A PDP Network Simulator for Sun
       Where to get the Digit database from the US Postal Service
      Re: Where to get ADDRESS database from the US Postal Service


Send submissions, questions, address maintenance and requests for old issues to
"neuron-request@hplabs.hp.com" or "{any backbone,uunet}!hplabs!neuron-request"
Use "ftp" to get old issues from hplpm.hpl.hp.com (15.255.176.205).

------------------------------------------------------------

Subject: Attentional Neurocomputers
From:    ravula@mrsvr.UUCP (Ramesh Ravula)
Organization: GE Medical, MR Center, Milwaukee
Date:    08 Feb 90 15:06:27 +0000 

   In a recent issue of Electronic Engineering Times, there was a section on
   "Emerging Technologies" in which Robert Hecht-Nielsen had an article on
   "Attentional Neurocomputers".  Could someone cite any further references
   on the subject?

   Thanks

   Ramesh Ravula
   GE Medical Systems
   Mail W-826
   3200 N. Grandview Blvd.
   Waukesha, WI 53188

   email: {att|mailrus|uunet|phillabs}!steinmetz!gemed!ravula
			   or
          {att|uwvax|mailrus}!uwmcsd1!mrsvr!gemed!ravula

------------------------------

Subject: backprop training with noise
From:    ksr1492@cec1.wustl.edu (Kevin Scott Ruland)
Organization: Washington University, St. Louis MO
Date:    13 Feb 90 16:26:24 +0000 


  I heard that Wasserman had tried training feedforward nets by backprop
with a random (Cauchy, I think) vector added to the weights.  I saw a
single-page report from a proceedings which said that Wasserman had tried
this with some success but failed to list numerical results.  I have tried
this training on a 3-4-1 net to do the 3-d XOR problem with good
convergence results (approx. 95% of all nets trained in this way
converged, compared to <15% when trained without the added noise).  If
anyone has done some of this, or knows of some references, please drop me
a line.
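
For concreteness, here is a rough sketch of the kind of experiment I
mean, in Python-ish pseudocode.  The network size matches my experiment,
but the learning rate and noise scale are illustrative guesses, not
values from Wasserman's report, and in practice the noise would be
annealed toward zero as training proceeds.

import numpy as np

# Backprop on a 3-4-1 net for 3-bit parity ("3-d XOR"), with a
# Cauchy-distributed kick added to every weight after each update.
rng = np.random.default_rng(0)

X = np.array([[a, b, c] for a in (0, 1) for b in (0, 1)
              for c in (0, 1)], dtype=float)
y = (X.sum(axis=1) % 2).reshape(-1, 1)       # parity targets

W1 = rng.uniform(-0.5, 0.5, (3, 4)); b1 = np.zeros(4)
W2 = rng.uniform(-0.5, 0.5, (4, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr, noise = 0.5, 0.01                        # assumed hyperparameters
for epoch in range(20000):
    h = sigmoid(X @ W1 + b1)                 # forward pass
    out = sigmoid(h @ W2 + b2)
    d_out = (out - y) * out * (1 - out)      # backprop, squared error
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out;  b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;    b1 -= lr * d_h.sum(axis=0)
    # the heavy-tailed Cauchy noise occasionally makes a large jump,
    # which is what seems to help escape local minima
    W1 += noise * rng.standard_cauchy(W1.shape)
    W2 += noise * rng.standard_cauchy(W2.shape)

print("max output error:", np.abs(out - y).max())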

kevin

kevin@rodin.wustl.edu

------------------------------

Subject: Re: backprop training with noise
From:    andrew@dtg.nsc.com (Lord Snooty @ The Giant Poisoned Electric Head )
Organization: National Semiconductor, Santa Clara
Date:    14 Feb 90 08:20:55 +0000 

Phil Wasserman came to speak at our plant recently, and mentioned that
Cauchy was often superior to Boltzmann in that a large jump out of an
extended local-minimum region was more likely, and thus a global minimum
was more likely to be found.  Clearly, this is handwaving, and I have
seen no maths to vindicate it for specific weight landscapes.
No refs, I'm afraid.
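
The usual way this gets made concrete is via the cooling schedules:
Boltzmann annealing with Gaussian jumps needs a temperature schedule no
faster than T(k) = T0/log(1+k) to retain its convergence guarantee,
whereas Szu's Cauchy machine, whose heavy-tailed jumps occasionally make
very long hops, can cool as fast as T(k) = T0/(1+k).  A toy illustration
in Python (the energy function and starting point are my own choices,
not taken from any paper):

import numpy as np

def energy(x):
    # global minimum near x = 2, with a local-minimum basin near x = -2
    return 0.1 * (x - 2) ** 2 + 1.5 * np.cos(3 * x)

def anneal(jump, schedule, steps=5000, seed=0):
    rng = np.random.default_rng(seed)
    x, e = -2.0, energy(-2.0)            # start in the local basin
    for k in range(1, steps + 1):
        T = schedule(k)
        x_new = x + T * jump(rng)        # jump size shrinks with T
        e_new = energy(x_new)
        if e_new < e or rng.random() < np.exp(-(e_new - e) / T):
            x, e = x_new, e_new          # Metropolis acceptance
    return x

T0 = 2.0
xb = anneal(lambda r: r.normal(),          lambda k: T0 / np.log(1 + k))
xc = anneal(lambda r: r.standard_cauchy(), lambda k: T0 / (1 + k))
print("Boltzmann: x = %.2f   Cauchy: x = %.2f" % (xb, xc))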
...........................................................................
Andrew Palfreyman	andrew@dtg.nsc.com	Albania before April!

------------------------------

Subject: Re: backprop training with noise
From:    snorkelwacker!usc!elroy.jpl.nasa.gov!aero!aerospace.aero.org!plonski@tut.cis.ohio-state.edu (Mike Plonski)
Organization: The Aerospace Corporation
Date:    16 Feb 90 01:30:44 +0000 


Some of the work by Harold Szu on fast simulated annealing compares 
Cauchy and Boltzmann machines.  References follow in bib form.

%A Harold H. Szu
%T Fast Simulated Annealing
%J |AIP151|
%P 420-426
%K Cauchy Machine

%T Fast Simulated Annealing
%A Harold H. Szu
%A Ralph Hartley
%J |PHYLA|
%V 122
%N 3,4
%P 157-162
%D |JUN| 8, 1987
%K Cauchy Machine

%T Nonconvex Optimization by Fast Simulated Annealing
%A Harold H. Szu
%A Ralph L. Hartley
%J |IEEPro|
%V 75
%N 11
%D |NOV| 1987
%K Cauchy Machine

%T Design of Parallel Distributed Cauchy Machines
%A Y. Takefuji
%A Harold H. Szu
%J |IJCNN89|

Journal/proceedings abbreviations used above:
  |AIP151|  AIP Conference Proceedings 151
  |PHYLA|   Physics Letters A
  |IEEPro|  Proceedings of the IEEE
  |IJCNN89| International Joint Conference on Neural Networks, 1989

  -----------------------------------------------------------------------------
.   . .__.			       The opinions expressed herein are solely
|\./| !__!	 Michael Plonski       those of the author and do not represent
|   | |		"plonski@aero.org"     those of The Aerospace Corporation.
_______________________________________________________________________________

------------------------------

Subject: Re: backprop training with noise
From:    uokmax!munnari.oz.au!murtoa.cs.mu.oz.au!ditmela!latcs1!sietsma@apple.com (Jocelyn Sietsma)
Organization: Comp Sci, La Trobe Uni, Australia
Date:    16 Feb 90 02:08:30 +0000 

>ksr1492@cec1.wustl.edu (Kevin Scott Ruland) writes:
>>   I heard that Wasserman had tried training feedforward nets by backprop
>> with a random (cauchy, I think) vector added to the weights. 

I'm not sure if this is relevant to this discussion, but I have been
training feedforward networks adding noise, not to the weights, but to
the training inputs.  This makes learning slower (the training set must
be presented more times, with new noise each time) but gives networks
that use more of their units independently and that are *far* better at
recognising new noisy inputs.  I can't comment on whether it avoids local
minima in training, as I have mainly used it on networks of a size that
trained reliably with clean inputs.  (I don't think it would.)  I assume
you know that redundant units remove local minima?  E.g. if you train
XOR with, say, 4 hidden units, convergence is much more reliable than
with two.
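
In sketch form the recipe is just the following (Python-style; the noise
level is whatever suits your data, and train_step is a stand-in for one
ordinary backprop pass over the set):

import numpy as np

def train_with_input_noise(net, X, y, train_step,
                           epochs=500, noise_level=0.1, seed=0):
    """Present the clean training set many times, corrupting the
    inputs with fresh noise on every presentation."""
    rng = np.random.default_rng(seed)
    for _ in range(epochs):
        X_noisy = X + noise_level * rng.normal(size=X.shape)
        train_step(net, X_noisy, y)      # targets stay clean
    return net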

I've written this up in a couple of conference papers: 
	Neural Net Pruning - Why & How
	J. Sietsma & R.J.F.Dow
	IEEE ICNN2 San Diego 1988
and
	The Effect of Pruning a Back-Propagation Network
	J. Sietsma
	1st Australian Conference on Neural Networks
	Sydney, Jan 1990

The proceedings of ACNN'90 contain only abstracts, so you would have to
write or email me if you want a copy of the whole paper.

Jocelyn Sietsma
email: sietsma@latcs1.oz.au
address: USD, Materials Research Laboratory, PO Box 50,
	 Ascot Vale 3032, Melbourne, Australia
phone: (03) 319 3775

------------------------------

Subject: can machines think ?
From:    Wey Fun (Phd 89) <wef%edai.edinburgh.ac.uk@NSFnet-Relay.AC.UK>
Date:    Thu, 22 Feb 90 19:15:22 -0000 

[[ Editor's Note: See also a following message entitled "Real Brain
Theories". Although I find this high-level discussion interesting, I hope
Wey Fun or others who reply will be able to ground their arguments in
current technologies.  For example, how might the three proposed
definitions of consciousness apply to a specific artificial system?  How
can we apply some of these thoughts to existing and current attempts at
"artificial life" as presented at the recent conference of the same
name?  The empiricist in me wants to keep discussions near the circuits,
lest they devolve into arguments about semantics.  -PM ]]

   This is a response to the discussions on the recent article in Sci.
Am. on whether machines can think.

   What is true machine learning?  For a symbolic system this would mean
self-programming.  The notion of version space is insufficient because it
amounts only to "the formation of prejudice with pre-built knowledge for
further effective interpretation of the environment": the updating of a
small number of pre-defined parameters against a relatively large body of
pre-built knowledge or competence.  It is rather the application of
knowledge to specific domains.  Self-programming in fact stretches the
symbolic paradigm to its limit - i.e. a system whose structure can be
represented symbolically can alter itself by manipulating its own source
code.  This has proved very infeasible, because the basis of knowledge
required for auto-programming is enormous.  In neural nets, by contrast,
this can be achieved relatively easily, because the learning rules are
much simpler and more uniform, and the system can cope with the
complexity of its environment by increasing its size, taking longer to
learn, and so on - i.e. qualitative improvement of operation through
quantitative factors that can be altered by simple, linearised rules.

   There is a big problem with the applicability of learning systems: the
pre-built behaviours and knowledge are subject to self-modification
during operation, which means that the intention of the designer cannot
be fixed into their design as in non-learning, static systems.  Instead
that intention has to be instantiated through the intention of the system
itself, which determines in what way its knowledge and behaviours should
be changed.  This is where notions such as pleasure and pain come into
perspective for the design of learning systems.  The conceptualisation of
a truly learning machine of moderate complexity should be sensation-based
(not solely sensor-based).  The system is supposed to learn how to
achieve desirable sensational states by forming behavioural domains with
reference to its environmental regularities (stimuli-based biasing of
behaviours for competence forming).  I agree with Rodney Brooks' idea
that true AI will be more easily achieved in robotics.  This is because
the eventual requirement of commonsense means that an enormous amount of
physical interactive knowledge must be gained directly by the system
itself, through interacting with the world, and not manually input into
it.  Manual building of complex physical-interaction knowledge is
pointless, because the laws of nature are always out there.  The system
can always make reference to the objective world to correct its actions
so that some pre-determined goals are reached.  Note that all natural
learning systems begin to learn through physical interaction with the
environment.

   In natural life forms, one biases one's actions towards a domain which
is likely to lead to the reduction of agitation and the arousal of
pleasure.  The survivability of a lower species is determined largely by
how relevant its sensation domain is to its self-maintenance, which also
depends on the regularities and consistencies of its environment.  For
example, newly-hatched sea turtles, climbing onto the surface of the
beach, are prompted by instinct to struggle towards something that
glitters -- which used to be (unambiguously) the surface of the sea.
This worked fine for millions of years until the appearance of artificial
lights on the beach, which cause them to move in the opposite direction.
Animals in the wild are usually well-equipped with phylogenetic knowledge
-- e.g. horses are able to walk within minutes of birth.  In contrast,
humans are less well-equipped with phylogenetic knowledge - one will
probably take more than a year to be able to walk.  A person is prompted
to acquire most of his skills for effective interaction with the
environment ontogenetically, and the experience of learning provides him
with far greater power of adaptation.

   I would quote an informal definition of consciousness as 'the
awareness of awareness'.  An animal is aware - of the conditions in its
environment relevant to its survival.  A conscious man is aware
(sub-consciously) that 'he is aware' - that he has an awareness of
'judging his actions' with relevance to his avoidance of undesirable
situations and attachment to desirable situations.  This meta-awareness
gives us enormous power to restrain ourselves, so that our actions are
not dominated by instinctual drives that may result in harmful
situations, and to develop stimuli-independent drives (interests) that
are essential for objective learning.  It provides us with the ability to
achieve endurance and not be fooled by natural prompts.  Still higher
cognitive functions, such as beliefs and blind faith, which prevent us
from taking actions that have apparent short-term benefits but lead to
bad long-term consequences, are then possible.

   A powerful instrument for the construction and operation of
consciousness is language - it is with symbols and other forms of
explicit, abstracted representation that the idea of units, and the
identification of the self as separate from the environment (and thereby
the achievement of objective interactions), become achievable by humans.
Of course that does not mean that symbolic AI systems are conscious -
their symbolic manipulation is only as good as algorithmic, reflexive
processes.  Their symbols are not being used to overcome the deficiencies
of pre-built functional domains in the achievement of tasks.

   Here is a more formal definition of what it means for a system to have
consciousness:

[1] it is initially equipped (by birth or creation) with a fixed
pre-built basis of competence (phylogenetic knowledge) for effective
interaction with its environment, so as to maintain its own integrity;

[2] during the interaction with the environment it acquires further
environment-specific knowledge and competence for more robust and
effective self-maintenance;

[3] the ontogenetic knowledge thereby acquired will be accumulated to a
stage where it provides the system with the knowledge structure for
recognising itself as a concrete unit (largely) separated from its
environment, and for recognising that its phylogenetic knowledge for
self-maintenance (instinctual drives) probably needs to be overridden in
order to further improve its chance of maintaining its integrity.

  One problem with various fields in the study of intelligence is that we
try to see living things as possible machines, with invariant logical
structures at some level.  This is obvious in behavioural psychology,
where humans are usually termed "subjects".  What we lack is the converse
view: seeing machines as possible living things, with competence-building
centred on self-maintenance (Maturana's autopoietic machines).  Must the
arousal of pain and pleasure be possible only within bio-chemical
compounds that are alive?  We cannot prove that this is true, but neither
can we prove that it is false.  It is only from this different point of
view that a machine can have sensation.  The need for an efficient
concept for the design of complex robots (which could be termed
artificial life) makes that point of view essential.

   The next and most important point is: what is the use of creating a
selfish machine?  Are we able to manipulate the 'needs' and 'desires' of
complex machines so that they are of use to us?

Criticism would be welcome.

Wey Fun
wef@aipna.ed.ac.uk


------------------------------

Subject: Companies involved in NN development
From:    rmyers@ics.uci.edu (Richard E. Myers)
Organization: UC Irvine Department of ICS
Date:    10 Feb 90 19:01:52 +0000 

In a similar vein to the message posted this summer requesting
information on graduate programs in neural networks, I would like to
enquire about companies involved in neural network research and
development.  I know that most large high-technology corporations such as
AT&T and IBM are active in this area, but it is more difficult to find
out about the small and medium-sized concerns involved.

Any pointers to, or input on, NN work being done would be greatly
appreciated.  I will consolidate and repost to this board all relevant
responses that I receive.

  -- Richard

[[ Editor's Note: Of course, one could look at the exhibitors' list and
list of papers of the recent IJCNN conference as a start.  I suspect
Richard is looking for companies who are not giving papers, however. -PM ]]

------------------------------

Subject: Network initialization
From:    patil@a.cs.okstate.edu (Patil Rajendra Bha)
Organization: Oklahoma State Univ., Stillwater
Date:    02 Feb 90 22:22:31 +0000 

	Does anybody know of methods of initializing a network (other
than randomizing the weights) before the training process is started?
Also, are there any numerical/statistical techniques for finding
properties of datasets, such as symmetry (as in XOR) and others, which
could help in preprocessing the dataset before training in order to
reduce the training time?

Thanks,

Rajendra,
OSU(OK).

------------------------------

Subject: Neural Microchip Intel's N64 Info Desired
From:    occam@cnam.UUCP (occam)
Organization: C.N.A.M, Paris, France
Date:    02 Feb 90 19:44:46 +0000 

       Has anyone experience with Intel's N64 neural microchips?

       Could anyone provide a bibliography or references to work
       being done with Intel's N64?

*************************************************************************
* Rodrigo Laurens                                                       *
* C.N.A.M (Paris-France)                 e-mail:occam@cnam.UUCP         *
*************************************************************************

------------------------------

Subject: Re: Neuron Digest V6 #9
From:    hokc_ltd@uhura.cc.rochester.edu (Hok Kiu Chan)
Organization: University of Rochester
Date:    08 Feb 90 06:16:42 +0000 

It is interesting that many of the neural nets we study today are digital,
unlike the biological brain, which uses continuous analog signals.  Does
anyone know of references for frequency-modulated or analog neural nets?
I would be very interested to educate myself in this area.

Thanks.
Victor Hok-kiu Chan

[[ Editor's Note: A good start would be Carver Mead's new "Analog VLSI"
book. -PM ]]

------------------------------

Subject: Real Brain Theories
From:    thomasp@lan.informatik.tu-muenchen.dbp.de (Patrick Thomas)
Organization: Inst. fuer Informatik, TU Muenchen, W. Germany
Date:    19 Feb 90 18:26:03 +0000 

[[ Editor's Note: I would appreciate any readers who would offer a
review/critique of this book. Are his theories testable or are they
merely philosophizing?  The TOC looks interesting, and the book was cited
a great deal at the recent "Study of Consciousness in Science" conference.
For that matter, would someone who went to that conference like to give a
synopsis?  I was somewhat disappointed with the uneven quality of the
talks and odd ramblings of certain well-known names. -PM ]]


If you're interested in REAL brain theories, leave Grossberg aside for a
moment and check this out:

                   T H E   R E M E M B E R E D   P R E S E N T

                      A BIOLOGICAL THEORY OF CONSCIOUSNESS

                              by Gerald M. Edelman

   CONTENTS
   ========

1. CONSCIOUSNESS AND THE SCIENTIFIC OBSERVER
   An Initial Definition, The Scientific Observer, The Feasibility Argument,
   The Matter of Constraints, Philosophical Issues.

2. PROPOSALS AND DISCLAIMERS
   Further Definitions, The Scope of the Extended Theory, Scientific
   Assumptions, Phenomenal States, Problems of Report in Humans and Animals,
   Reference States for the Theory, The Human Referent, The Insufficiency of
   Functionalism, The Sufficiency of Selectionism.

3. NEURAL DARWINISM
   Global Brain Theory, Major Unresolved Issues in Neuroscience, Basic
   Mechanisms of Neural Darwinism, Perceptual Categorization, Neuronal Groups
   as Units of Selection, Categorization, Memory, and Learning, Heuristic
   Models of Selective Neuronal Systems: Recognition Automata.

4. REENTRANT SIGNALING
   Types of Reentry, Cortical Correlation and Integration, The Reentrant
   Cortical Integration (RCI) Model for Early Vision, Recursion and the
   Multiplicity of Reentrant Integration Mechanisms, Gestalt Properties,
   Evolution, and Reentry.

5. PERCEPTUAL EXPERIENCE AND CONSCIOUSNESS
   The Adaptive Significance and Neural Forerunners of Consciousness, A Preview
   of the Consciousness Model, Connecting Value to Category by Reentry, 
   Properties and Tests, Higher-Order Consciousness.

6. MEMORY AS RECATEGORIZATION
   Generalization and Recategorization, The Problem of Ordering.

7. TIME AND SPACE: CORTICAL APPENDAGES AND ORGANS OF SUCCESSION
   Succession and Smooth Motion: The Cerebellum, Succession and Sense: The
   Hippocampus, Succession, Planning, and Choice: The Basal Ganglia, Some
   Conclusions.

8. CONCEPTS AND PRESYNTAX
   Brain Mechanisms for Concept Formation, Presyntax.

9. A MODEL OF PRIMARY CONSCIOUSNESS
   The Model Proper, A Schematic Representation of the Model, Prefrontal
   Cortex: A Locus for C[C(W) C(I)], Possible Anatomical Bases for the Key
   Reentrant Loop, Phenomenal Aspects of Primary Consciousness, Tests of
   the Model.

10. LANGUAGE
    An Epigenetic Theory of Speech, Comparison with other Models.

11. HIGHER-ORDER CONSCIOUSNESS
    The Conceptual Self and Freedom from the Present.

12. THE CONSCIOUS AND THE UNCONSCIOUS
    Unity and Heterogeneity of Conscious Experience, Attention, A Model
    for the Conscious Control of Attention, Conscious, Nonconscious, and
    Unconscious States, Repression and the Unconscious, Levels of Description:
    The Choice of Language in Psychological Systems.

13. DISEASES OF CONSCIOUSNESS
    A General Framework, Brain Damage, Amnesia, and Aphasia, Dissociative
    Diseases: Specific Blockade of Reentrant Loops, Obsessive-Compulsive
    Disorder as a Disease of Succession and Attention, Affective Disorder:
    Value Disturbance and Altered Qualia, Schizophrenia as a Generalized
    Disease of Reentry, Some Evaluative Remarks.

14. PHYSICS, EVOLUTION, AND CONSCIOUSNESS: A SUMMARY
    Topobiology and the Morphoregulator Hypothesis, The TNGS Proper, 
    Consciousness and the Extended TNGS, Conclusions.

15. PHILOSOPHICAL ISSUES: QUALIFIED REALISM
    The Matter of World Description: Causality and Consciousness, Determinism,
    Volition and Free Will, Biologically Based Epistemology and Qualified
    Realism, The Problem of Knowledge and its Relation to Language, Science
    and Heuristics, Personhood and Human Concerns.

Patrick

------------------------------

Subject: Request for training data sets for learning.
From:    hall@ziggy.EDU (Lawrence O. Hall)
Organization: University of South Florida, Tampa, FL
Date:    08 Feb 90 18:10:29 +0000 

We are testing a supervised learning algorithm and would like to
benchmark it against some well known data sets.  Does anyone know how to
get the following data sets (preferably by ftp) and/or complete
descriptions of them?  They are: the Thyroid data set, Soybean diseases,
and Chess end games.  Pointers to other extensively tested data sets
would also be appreciated.  Thank you in advance.

 --Larry Hall
hall@sol.usf.edu
Department of Computer Science and Engineering
University of South Florida
Tampa, Fl. 33620

------------------------------

Subject: SunNet .... A PDP Network Simulator for Sun
From:    gates@ccu.umanitoba.ca
Organization: University of Manitoba, Winnipeg, Manitoba, Canada
Date:    08 Feb 90 00:52:43 +0000 

Can anyone tell me anything about "SunNet Version 5.2: A Tool for
Constructing, Running, and Looking into a PDP Network in a Sun Graphics
Window" by Miyata (1987)?  (Tech. Rep. ICS-8708, Univ. of California,
Institute for Cognitive Science.)

(It is mentioned in an article by P. Todd in Computer Music Journal,
Volume 13, No. 4 (Winter '89), pp. 27-43.)

Is it PD, freeware, "research-ware", etc., and how can I get it?

(reply here or personal)			Thanks in advance
						D.Gates
						U of M
						Dept. of Elec. Eng.
						<gates@ccu.umanitoba.ca>

------------------------------

Subject: Where to get the Digit database from the US Postal Service
From:    salas@pprg.unm.edu
Organization: U. of New Mexico, Albuquerque
Date:    24 Jan 90 16:10:32 +0000 

Hello,

     Here are all the facts about getting the US Postal Service digit
database.  It is distributed by the State University of New York at
Buffalo.  The individual in charge of distributing the database is
Jonathan Hull, but the person to contact is Steven Tylock.  He can be
reached by e-mail, phone, or mail:

Steven Tylock
State University of New York at Buffalo
Department of Computer Science 226 Bell Hall
Buffalo, New York 14260

(716) 636-3406 or 3291
(tylock@cs.buffalo.edu)

     The database is not public domain; you will need to send a $200
check, payable to

		University at Buffalo Foundation

to:

Jonathan Hull
State University of New York at Buffalo
Department of Computer Science 226 Bell Hall
Buffalo, New York 14260

Specify whether you want a 9-track or 8 millimeter exabyte tape.  The
tape will be in tar format and the datafiles will be in HIPS format.  The
tape will include a description of HIPS so that you can convert them to
the format you need.

------------------------------

Subject: Re: Where to get ADDRESS database from the US Postal Service
From:    tylock@sunybcs.cs.Buffalo.EDU (Steve Tylock)
Organization: SUNY/Buffalo Computer Science
Date:    07 Feb 90 22:10:55 +0000 

Hello,

        This is to clarify some points that have been raised on the
network about two databases of digital images that the University at
Buffalo USPS Research Group is distributing on behalf of the United
States Postal Service.

The USPS OAT has been soliciting proposals for research in Postal
automation.  Specific areas of interest are described in the Postal
Service publication 'Research Interests in Automated Address Reading'.
In order to aid the selection of proposals, the Postal Service is
requesting the Offeror to demonstrate current capabilities.  To that end,
they are making available on request a database entitled "United States
Postal Service Office of Advanced Technology Image Database for Research
Announcement Proposal Preparation (1989)".  This database contains 500
images of machine-printed, dot-matrix, handwritten, and cursive ADDRESSES,
at 300 ppi (greyscale) as well as at 212 ppi (binary).

Jonathan Hull is in charge of this distribution.  Please direct inquiries
to:

        United States Postal Service Office of Advanced Technology
        Image Database for Research Announcement Proposal Preparation (1989)
        c/o
        Jonathan Hull
        Department of Computer Science
        226 Bell Hall
        State University of New York at Buffalo
        Buffalo, New York 14260
        (716) 636-3191
        (hull@cs.buffalo.edu)


     You will need to send a $200 check, payable to

                University at Buffalo Foundation

Specify whether you want an 8mm (Exabyte), 9-track (6250/1600), or 1/4" Sun
tape.  The tape will be in tar format and the datafiles will be in HIPS
format.  The tape will include a description of HIPS so that you can
convert them to the format you need.  The tape is roughly 30mb (files are
compressed).

The other database is entitled "United States Postal Service Office of
Advanced Technology Handwritten ZIP Code Database (1987)".  It contains
about 2000 handwritten ZIP Codes scanned at 300 ppi.  This database is
NOT publicly available.  If you think you are interested in it, do not
contact me or Jon.  Please talk to:
        John Tan, Technology Resource Center, (202) 646-1500
        Arthur D. Little,
        955 L'Enfant Plaza SW,
        Suite 4200
        Washington, D.C. 20024-2119
He will be able to give you more information.

Steve

   Steven Tylock @ SUNY/Buffalo Computer Science (716-636-3406)
   internet: tylock@cs.buffalo.edu    bitnet: tylock@sunybcs.BITNET
   uucp:     ..!{ames,boulder,decvax,rutgers}!sunybcs!tylock

------------------------------

End of Neuron Digest [Volume 6 Issue 16]
****************************************