[comp.ai.neural-nets] Classification of Animal Vocalizations

duncan@convex.csd.uwm.edu (Shan D Duncan) (01/11/91)

[Try this again]


I came across a reference to some work done at Sea World/Hubbs
Research (1988?) on using a neural network (NN) to classify Killer
Whale vocalizations.  I remember reading an issue of AI that also
mentioned this work, but no real details were given.  Is any
information available?

I assume that digitized sonograms/spectrograms were used, followed
by pattern recognition on the time/frequency picture.

I would like to know whether the digitized data from an A/D board
could be used directly (the raw waveform) without a transformation,
or whether a matrix of amplitude values at specific frequencies over
time must first be produced via FFT and windowing.
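In case it clarifies what I mean by the second option, here is a rough
sketch of the windowed-FFT transformation in Python/numpy (the window
length, hop size, and test tone are arbitrary choices on my part, not
taken from the Sea World/Hubbs work):

```python
import numpy as np

def spectrogram(signal, n_fft=256, hop=128):
    """Windowed FFT: rows are time frames, columns are frequency bins.

    n_fft and hop are illustrative choices, not from any published work.
    """
    window = np.hanning(n_fft)
    frames = []
    for start in range(0, len(signal) - n_fft + 1, hop):
        frame = signal[start:start + n_fft] * window
        # Keep the magnitude of the positive-frequency half of the FFT
        frames.append(np.abs(np.fft.rfft(frame)))
    return np.array(frames)  # shape: (n_frames, n_fft // 2 + 1)

# e.g. one second of a 1 kHz tone sampled at 8 kHz
t = np.arange(8000) / 8000.0
spec = spectrogram(np.sin(2 * np.pi * 1000 * t))
```

The resulting matrix is exactly the amplitude-versus-frequency-versus-time
picture I described, and each row (or the whole matrix, flattened) could
serve as an input vector to a network.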

I would like to train a network on species-specific vocalizations
and then use it to classify.  I would ultimately like to apply the
same principle to vocalizations at the individual level, where the
amount of data makes the traditional approach difficult (i.e.
recording the vocalization, obtaining a frequency/time
representation, measuring variables, then using statistical
techniques, both univariate and multivariate, to obtain measures of
variability and similarity/dissimilarity).
I am not sure whether I am asking the proper questions, or whether
NNs are really the appropriate technique, but they seem made for a
situation where one must handle voluminous data and the resulting
signal can be rather complex, with both frequency modulation and
amplitude modulation.
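By way of illustration, here is a minimal sketch of the kind of
classifier I have in mind: a one-hidden-layer network trained by
backpropagation on made-up stand-ins for spectrogram vectors (the
data, layer sizes, and learning rate are all invented for the
example; nothing here comes from the Hubbs work):

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up stand-in for labeled spectrogram vectors: two "species"
# whose energy sits in different frequency bands.
n_bins = 32
X = np.zeros((200, n_bins))
y = np.zeros(200)
for i in range(200):
    species = i % 2
    band = slice(2, 10) if species == 0 else slice(20, 28)
    X[i, band] = 1.0 + 0.1 * rng.standard_normal(8)
    y[i] = species

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One hidden layer, trained by full-batch gradient descent (backprop).
W1 = rng.standard_normal((n_bins, 8))
W2 = 0.1 * rng.standard_normal((8, 1))
lr = 1.0
for epoch in range(1000):
    h = sigmoid(X @ W1)               # hidden activations
    p = sigmoid(h @ W2)[:, 0]         # P(species 1 | input)
    d2 = (p - y)[:, None] / len(X)    # cross-entropy gradient at output
    d1 = (d2 @ W2.T) * h * (1.0 - h)  # backpropagated to hidden layer
    W2 -= lr * h.T @ d2
    W1 -= lr * X.T @ d1

pred = (sigmoid(sigmoid(X @ W1) @ W2)[:, 0] > 0.5).astype(int)
accuracy = (pred == y).mean()
```

The appeal, as I understand it, is that the network learns its own
measures of similarity from the raw time/frequency matrix, rather
than requiring hand-measured variables for each call.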

Please assume a very basic level of understanding; I do not mind
being told things I already know. :-)

Thank you for any help, information, or programs (Unix).


-Shan Duncan
Dept. of Biological Sciences
University of Wisconsin--Milw.
Milwaukee,  WI.  53201