[comp.music] MUSIC AND AI

Stephen Smoliar <ISSSSM%NUSVM.BITNET@CUNYVM.CUNY.EDU> (06/26/91)

In article <3191@lee.SEAS.UCLA.EDU> bsmith@turing.seas.ucla.edu (Brian Smith)
writes:
>In article <10936@idunno.Princeton.EDU> eliot@phoenix.Princeton.EDU (Eliot
>Handelman) writes:
>>
>>It's much more interesting to focus on the listener, because apprehension
>>is the simplest act of creativity.
>
>Listening is pretty fascinating stuff, but I think that composition is a much
>more interesting problem since one can't compose without listening.

Where have you been for the last forty years, Brian?  Following the Second
World War, there was no end of experimentation in music;  and much of what
emerged had nothing to do with listening (at least at the time of composition).
The early days of computer-synthesized sounds provide a good case in point.
Composers often labored long and hard to debug the theoretical formulation
of what they wanted yet rarely had much intuition regarding what the tape would
sound like when all the processing was done.  You may wish to take the ethical
position that one OUGHT NOT to compose without listening, but do not expect all
practicing composers to accept that position.

>  To some
>extent, everyone is a composer (i.e. humming arbitrary tunes, melodic contours
>in speech, etc.), so it does seem to be a worthwhile area of study.

I think you are homing in on an important point here, Brian;  and I would like
to try to push it a bit further.  What you are really talking about here is
BEHAVIOR, and one of my favorite hobby-horses is that there is more to behavior
than can be captured in logical calculi or neural nets.  The trouble is that we
do not do a terribly good job when it comes to DESCRIBING such behavior.  The
sorts of protocol analyses which were performed by Newell and Simon were little
more than self-fulfilling prophecies--descriptions based on a foundation of
symbol manipulation which they assumed HAD to be there.  Music, on the other
hand, does not lend itself to such symbol-based descriptions because, as Ed
Hall has been suggesting on comp.music, the actual PRACTICE of MAKING MUSIC
has precious little to do with the symbols of music notation.  Getting a
MACHINE to "make music" (i.e. to exhibit such behavior) may thus be viewed
as a major challenge to artificial intelligence, because it is an aspect of
behavior which has been ignored (and certainly not accounted for) by most of
the progress in AI to date.

>  Once the
>knowledge gained through listening is captured, how do we use it in the
>performance domain?
>
This is another example of how thinking about music should force us to expand
the current horizons of artificial intelligence.  Much of the artificial
intelligence community seems inclined to live in a world in which "learning"
is a matter of adding declarative sentences to some kind of "knowledge base."
However, the above sentence captures an element of learning which is much
truer to behavior as we know it:  How one behaves "in the performance domain"
is a reflection of what has happened during past listening experiences.  This
is not a new idea, Brian.  Quite some time ago, Minsky wrote a wonderful essay
on the role of a musical composition AS TEACHER.  Unfortunately, we have made
precious little progress in implementing any of these ideas.

Nevertheless, those ideas still deserve more attention.  After reading Volume
47 of ARTIFICIAL INTELLIGENCE, we should be at least SKEPTICAL about what
logical calculi can ultimately offer us.  At the very least, we should be
encouraged to work on problems which logic does not "fit" as comfortably
as it does in, for example, choosing the parameters for the design of an
elevator system.  What IS "knowledge gained through listening?"  We really
do not have the slightest idea.  We do not yet even have a handle on how we
know that what we are hearing NOW is the same tune we heard five minutes ago!
So far I have only been able to pursue such questions as peripheral activities,
but my current hunch is that SERIOUS attention to these matters may ultimately
lead to a significant shift in our current paradigms for artificial
intelligence.

===============================================================================

Stephen W. Smoliar
Institute of Systems Science
National University of Singapore
Heng Mui Keng Terrace, Kent Ridge
SINGAPORE 0511

BITNET:  ISSSSM@NUSVM

"He was of Lord Essex's opinion, 'rather to go an hundred miles to speak with
one wise man, than five miles to see a fair town.'"--Boswell on Johnson

eliot@phoenix.Princeton.EDU (Eliot Handelman) (06/27/91)

In article <3191@lee.SEAS.UCLA.EDU> bsmith@turing.seas.ucla.edu (Brian Smith)
writes:
;In article <10936@idunno.Princeton.EDU> eliot@phoenix.Princeton.EDU (Eliot
;Handelman) writes:
;>
;>It's much more interesting to focus on the listener, because apprehension
;>is the simplest act of creativity.
;
;Listening is pretty fascinating stuff, but I think that composition is a much
;more interesting problem since one can't compose without listening.

First of all I don't get this line of reasoning, since one can't compose
without being alive (unless you know how to contact Rosemary Brown),
and therefore composition is a much more interesting problem than the
problem of life? Second of all, I don't see what the "problem" of composition
is; composition is not problem solving. Third of all, listening, as
our friend S. Handel points out, is not the same thing as hearing,
which, apart from its ontology, suggests something machine-like,
a registration of acoustic facts by a set of more or less dependable
receptors. You will hear this sound as louder if I do that, that sort 
of thing. But listening is about the construction of reality, actively 
probing the environment, intentionalistic echolocation, bouncing attention 
off the mirror of your own disposition and carving yourself into 
multidimensional virtualities.  Listening means to me in the first place 
your self-awareness, your hearing yourself think or toss around idle 
imagery, it is the fundamental capacity and insistence of the mind to 
engage itself with itself. And last of all "composition" is dead, 
significant music is not made that way anymore.

mrsmith@rice-chex.ai.mit.edu (Mr. P. H. Smith) (06/28/91)

As EH points out, listening is somewhat self-referential.  But let's
go farther and say it's completely self-referential; at this point
listening does not require sound (as in "external" "objective"
soundwaves).  Or, more properly, music (qua listening) does not require
sound.  Sound is not a necessary condition for music.  

Eliot, I must thank you for nudging my thinking in this direction,
because you have now made it possible for me to solve a basic problem
in the theory of music (no, not "what is it?"):  What is the primary
musical element, the "thing" without which we would have no music?
What are the boundaries of music?  (I must tell y'all why I am pleased
to be able to eliminate sound as the necessary primary musical element
-- it's really quite obvious:

"SPUNKY CHEERLEADER IS DEAF-INITELY AMAZING"

Pretty cheerleader Jennifer Jenkins stays in perfect step with her
squad even though she can't hear the crowd rooting for her high school
basketball team.  
     Jennifer has been deaf since birth, and speaks only in sign
language, but that doesn't stop the 16-year-old junior from leading
cheers at Carman-Ainsworth High School near Flint, Michigan.
     'And when they do dance routines, she's right in sync with them,'
says her proud mom, Pam Jenkins.
     ...
     'If the music is loud enough to vibrate the floor, I can feel it
when I'm dancing,' the pert 5-ft.6-in. teen explains through a sign
interpreter.
     ...
     'Sometimes she'd be a tad off,' Pam says.  'But many times, she'd
end right with the music.  You just wouldn't know that she couldn't
hear.'"
	-- The Globe, May 14, 1991, p. 21.)

Now, there are many stories just like this one where deaf people enjoy
music via some kind of 'vibration.'  This vibration is arguably
not sound.  Moreover, there are musical experiences of listening that
do not even involve feeling a vibration (e.g., composing).  

Music Mediates Mind.tm clearly can encompass such kinds of musical
experience.  That's why it's so charming.  I would add that music
requires, minimally, the emergence of a self-referential (i.e.,
referring to the listening self) musical value.  This musical value
can be any of quite a variety of values - dance, religion, purity,
patriotism, abandon, peace, propriety, sound, silence, etc.

I call this point of emergence of a musical value in the mind Punction.tm.
(look it up).  An excellent word because of its relationship to
terms such as 'punctus' (which, as a 'dot' or 'prick'
metaphorically connotes the emergence of a positive musical value in
the mind, regardless of whether the actual symbol represents a vocal
sound, a rest, a glass breaking, or a felt vibration of no sonic
quality whatever).  Thus, it is apparent that musical value emerges in the
mind - how banal!  The only important thing about Punction.tm
is that it allows musical research to include the experiences of a
Jennifer Jenkins - and that it can provide an interesting foundational
locus for Music Mediates Mind.tm.  What a strange understanding of
music can now emerge: a mental feedback system most commonly triggered
by sounds, but not always.  Punction: the emergence of a musical value
in the mind. (Note that this is not tautological.  Punction is not the
musical value per se, but the point of its emergence.)


Paul Smith
mrsmith@ai.mit.edu
[- not an ai guy.]