[comp.ai.neural-nets] Looking for Neural Net/Music Applications

pastor@bigburd.PRC.Unisys.COM (Jon Pastor) (04/26/89)

In article <8211@boulder.Colorado.EDU> fozzard@boulder.Colorado.EDU (Richard Fozzard) writes:
>
>                   Jimi Hendrix meets the Giant Screaming Buddha:
>                 Recreating the Sixties via Back Propagation in Time
>
>          ... Recent
>          advances in neural network technology have provided legal ways to
>          artificially recreate and combine models of Hendrix  and  LSD  in
>          recurrent  PDP  networks.  The basic idea is to train a recurrent
>          back propagation network via Williams & Zipser's (1988) algorithm
>          to  learn the map between musical scores of Hendrix music and the
>          hand movements as recorded in various  movies.   The  network  is
>          then given simulated doses of LSD and allowed to create new music
>          on its own.

This is a great idea.  However, I'm concerned that the resolution (presumably
at the note level) will make it difficult for a BP net to learn passages like
the one in "Castles Made of Sand," in which Jimi reverses the temporal order
of the music in a way that could only be approximated frame by frame in the
sound waveform.

Also, especially in light of the "Tastes Great!/Less Filling" BP vs. ART debate 
that raged on this newsgroup recently, I'm surprised that nobody recognized
that ART is a particularly appropriate architecture for implementing 
HendrixNet.  First of all, since ART is biologically more plausible, it seems 
a more likely architecture for experiments with simulated hallucinogens.  
The parameters available for modification in ART also seem more plausible: 
reducing the vigilance would cause disparate categories to be merged, thus 
simulating the flashes of (spurious) insight commonly reported in the 
literature on hallucinogen experiments; with the appropriate choice of 
vigilance, 6 could easily turn out to be 9.  Finally, perhaps a dose of LSD 
would help those who struggle with the ART papers to find meaning in the 
equations...
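For the curious, the vigilance effect is easy to demonstrate.  Below is a
minimal ART-1 sketch in Python -- my own toy code, not Grossberg's full
formulation -- following the usual ART-1 outline (bottom-up choice function,
vigilance test, fast learning by intersection).  The "6 vs. 9" point shows up
directly: lowering rho merges patterns that high vigilance keeps apart.

```python
# Toy ART-1 for binary input patterns.  The function names and the test
# patterns are my own inventions, used only to illustrate vigilance.

def art1(patterns, rho, beta=1.0):
    """Cluster binary patterns; rho is the vigilance in [0, 1].
    Returns the learned category templates."""
    categories = []                      # each category is a binary template
    for p in patterns:
        # Rank existing categories by the bottom-up choice function
        # T_j = |p AND w_j| / (beta + |w_j|), largest first.
        order = sorted(range(len(categories)),
                       key=lambda j: -sum(a & b for a, b in zip(p, categories[j]))
                                     / (beta + sum(categories[j])))
        placed = False
        for j in order:
            overlap = [a & b for a, b in zip(p, categories[j])]
            # Vigilance test: does the template cover enough of the input?
            if sum(overlap) / sum(p) >= rho:
                categories[j] = overlap  # fast learning: template := intersection
                placed = True
                break
        if not placed:
            categories.append(list(p))   # no resonance anywhere: recruit a new category
    return categories
```

With two patterns that share only part of their active bits, high vigilance
keeps them in separate categories, while low vigilance collapses them into one
coarser category -- the "flashes of (spurious) insight" knob.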

... segue to a commentary on ART and the BP vs. ART debate...

Actually, if you work your way carefully through the discrete-time equations,
it's pretty clear at least *how* ART-1 and ART-2 work; *why* takes a little 
more thought, but even that's not beyond the reach of mathematically disabled 
folks like me (i.e., if *I* can do it, *anyone* can do it).  Grossberg math
is certainly no nastier than some of the other math that anyone who's 
attended any of the NN conferences has seen flashed on the screen for ten or
fifteen seconds, and the architecture is novel (there are very few self-
organizing architectures around), interesting (for reasons other than its
biological plausibility), and has a solid theoretical basis and proven
convergence results.  I would like to think that NN practitioners,
researchers, and theorists are not subject to the parochialism one finds in
other branches of ... well, you name the domain or discipline -- but I guess
that's naive.  I will put in a plug for diversity, for the absolute 
vapidity of the position that *any* computational technique is universal
in any but the most abstract and theoretical sense, and the wisdom of keeping
your toolkit stocked with a wide variety of tools and applying the one that's
most appropriate to your problem.  Anyone who maintains that tool X is better
than tool Y without explicitly stating the context in which the evaluation 
took place is guilty of the same offense that cereal manufacturers commit 
when they brag about "No Cholesterol! Contains Oat Bran!" but neglect to 
mention the palm and coconut oil. 

Finally, I am somewhat surprised and offended by the appearance of disparaging
personal remarks in the BP/ART discussion.  Grossberg is no more or less
"lost in his own little world" than Kohonen, Fukushima, Rumelhart, Smolensky,
or any other NN researcher who has developed a specialized architecture; since
when do we evaluate the viability of ideas on the basis of whether we 
understand them?  And while I'm not even really sure what "the dude is
clueless about people" means, or why it was an appropriate response to the
question, it's clearly offensive and in poor taste.  This has been a classy
newsgroup, one in which heated discussion has never degenerated into a
vituperative, mudslinging free-for-all; I'd like to see it stay that way.

androula@cb.ecn.purdue.edu (Ioannis Androulakis) (05/09/89)

I apologize for this posting, since it is actually a question
addressed to Jerry Ricario, concerning one of his postings
a long time ago. It is about the work he is doing attempting
to have a NN learn how to do basic jazz "improvisation".
My question is the following: how do you define "improvisation",
and, once you do that, what do you mean by "learn how to improvise"?
I believe that improvisation is not the output of some neurons
that learned how to do something. What I do not understand is
what you expect the network to learn. If we are ever able
to construct a network that has the ability to improvise as a human does,
then we will have achieved much more than this improvisation.
Who knows, this way we may be able to "construct" a new
Chopin or a Liszt, both masters of improvisation......

Thank you, and once again I apologize, although I will be waiting
for any answer since I happen to be interested in both AI and music.

 yannis
 androula@helium.ecn.purdue.edu  

chank@cb.ecn.purdue.edu (King Chan) (05/10/89)

In article <937@cb.ecn.purdue.edu>, androula@cb.ecn.purdue.edu (Ioannis Androulakis) writes:
> long time ago. It is about the work he is doing attempting
> to have a NN learn how to do basic jazz "improvisation".
> My question is the following: how do you define "improvisation",
> and, once you do that, what do you mean by "learn how to improvise"?
> I believe that improvisation is not the output of some neurons 
> that learned how to do something. What I do not understand is 
> what you expect the network to learn. If we will ever be able
>  yannis
>  androula@helium.ecn.purdue.edu  
> 
  I am aware of AI applications to musical composition.  Specifically, 
  research at MIT produced interesting model-based composition programs
  for jazz, rock, and ragtime.  This was on exhibit at Chicago's
  Museum of Science and Industry.
	There is a possibility of learning even for improvisation.
  Music can be considered as a collection of primitives, patterns of 
  which make up a piece of music.  The learning aspect can be spoken of 
  as the ability to pass judgement on such a piece as being aesthetically
  appealing to a musician or not.  It is this judgement that allows an 
  adaptive approach to the development of music.  The judgement is the 
  part of the musician's knowledge that needs to be learned by the program 
  if it is to make any good improvisations.
  QED
                                                   KING CHAN
                                                   (chessnut)
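The generate-and-judge scheme Chan describes can be sketched in a few lines of
Python.  Everything below is a hypothetical toy: the primitive set, the random
phrase generator, and the aesthetic scoring rule are stand-ins for what a real
system would have to learn from a musician's ratings.

```python
# Toy generate-and-judge improviser: build phrases from primitives, then
# keep the candidate the judgement function scores highest.
import random

PRIMITIVES = ["C", "D", "E", "F", "G", "A", "B"]   # toy pitch primitives


def judge(phrase):
    """Hypothetical aesthetic judgement: prefer stepwise motion,
    penalizing melodic leaps wider than a third."""
    steps = [abs(PRIMITIVES.index(a) - PRIMITIVES.index(b))
             for a, b in zip(phrase, phrase[1:])]
    return -sum(max(0, s - 2) for s in steps)


def improvise(length=8, candidates=50, seed=0):
    """Generate random candidate phrases and return the judge's favorite."""
    rng = random.Random(seed)
    pool = [[rng.choice(PRIMITIVES) for _ in range(length)]
            for _ in range(candidates)]
    return max(pool, key=judge)
```

The point of the sketch is Chan's: the generator is trivial, and all of the
musical knowledge lives in `judge` -- which is exactly the part a learning
program would have to acquire.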

lwyse@cochlea.usa (Wyse) (05/11/89)

Two exciting publications are coming up this year: Computer Music
Journal (MIT Press) and INTERFACE (a journal of research in
music; sorry, publisher unknown) are both devoting special
issues to neural networks and music. INTERFACE will have a 
more "systems-theoretic" flavor.