[sci.virtual-worlds] 3-D audio

d90-erl@nada.kth.se (Erland Lewin) (04/19/91)

  Could anyone tell me about generating 3-D audio, or recommend some
 references?

  I'm particularly curious about how the brain knows if a sound is coming from
 straight ahead or from the rear.

  It would be fun to experiment with this on some cheap hardware (an Apple IIGS,
 say) - for example, a game whose interface is audio only, where you run around
 chasing something that emits a sound.

    Virtually yours,

      Erland


=============================================================
Erland Lewin        d90-erl@nada.kth.se        Happy Hacking!
=============================================================

harry@harlqn.co.uk (Harry Fearnhamm) (04/19/91)

   Date: Thu, 18 Apr 91 19:44:03 -0700
   From: d90-erl@nada.kth.se (Erland Lewin)
   Date: Thu, 18 Apr 91 17:20:12 GMT

     Could anyone tell me about generating 3-D audio, or recommend some
   references?

   I'm particularly curious about how the brain knows if a sound is coming from
   straight ahead or from the rear.

In the music industry there has been a lot of interest recently in the
Roland Sound Space.  It is an attempt to create `genuine' stereo, as
opposed to merely panning mono signals, by recreating the tiny delays
between the signals reaching each ear that occur naturally when you are
listening to something.  This interaural delay is the key to realistic
placement, although of course there are also the sounds that travel
through your head/body, which I don't believe it tries to mimic.
Naturally this is most easily done on headphones, but Roland claim to be
able to deal with speakers as well.  Pseudo-stereo through
phase-inversion/multitap-delay is still popular, along with simulated
reverb and more exotic effects like phasing and flanging (exotic WRT the
real world, I mean!), which might still be valid for creating Virtual
Unrealities, but this placement-with-delay is supposed to be more
realistic.  (I speak from a point of ignorance, although I'm going to
try to see it demoed at the Midi Music Show, 26th-28th April,
Hammersmith Novotel, London, England.)
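
For anyone wanting to play with the delay idea (on an Apple IIGS or
anything else), here is a minimal C sketch of delay-based placement.
The head radius, the 44.1 kHz sample rate and the Woodworth-style
approximation itd = (r/c) * (theta + sin theta) are my own assumptions
for the example, not anything Roland has published; it just delays the
far ear's copy of a mono signal by the interaural time difference.

/* Minimal sketch of delay-based stereo placement (interaural time
 * difference).  Head radius, sample rate and the Woodworth-style
 * approximation are assumptions made for this example only. */

#include <math.h>

#define SAMPLE_RATE     44100.0
#define HEAD_RADIUS     0.0875   /* metres, roughly half a head width */
#define SPEED_OF_SOUND  343.0    /* metres per second */

/* Place a mono signal at azimuth 'theta' radians (0 = straight ahead,
 * positive = to the right) by delaying the far ear's copy. */
void place_mono(const float *mono, float *left, float *right,
                int n, double theta)
{
    double itd = (HEAD_RADIUS / SPEED_OF_SOUND)
                 * (fabs(theta) + sin(fabs(theta)));
    int delay = (int)(itd * SAMPLE_RATE + 0.5);  /* delay in whole samples */
    int i;

    for (i = 0; i < n; i++) {
        float direct  = mono[i];
        float delayed = (i >= delay) ? mono[i - delay] : 0.0f;
        if (theta >= 0.0) {     /* source on the right: delay the left ear */
            right[i] = direct;
            left[i]  = delayed;
        } else {                /* source on the left: delay the right ear */
            left[i]  = direct;
            right[i] = delayed;
        }
    }
}

At 90 degrees this gives a delay of roughly 0.65 ms (about 29 samples at
44.1 kHz), which is the right order of magnitude for a human head.
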
Vertical placement is achieved by replicating the effect that the pinna
(outer ear) has on sound, since the *quality* of the sound is slightly
altered when it hits the ear from different directions.  This also
accounts for ahead/behind detection.  Unfortunately I've no idea exactly
how, but I'm sure there are plenty of papers on the subject.  My guess
is that a certain kind of filtering happens depending on the position:
from the front, the pinnae scoop up more sound, since we evolved to
locate prey/beasties ahead of us, and this will amplify those sounds
(but not *just* amplify them, I suspect).  From behind, the sound may be
slightly more muffled, because some of it has to pass through the pinna
itself - what the hey, I don't really know what I'm talking about!  But
you get the general idea.  Note that *each sound source* will require
its own processing for accurate placement, unless you've got some
*really* clever algorithms.
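
On the pinna/front-back side, here is an equally rough sketch: a single
one-pole low-pass, with cutoff frequencies I have simply invented, used
to make rear sources duller than front ones.  A real system would use
measured pinna responses (HRTFs) convolved per source; this only
illustrates the per-source, direction-dependent colouring I am guessing
at above.

/* Toy stand-in for the pinna's direction-dependent filtering: sources
 * behind the listener get a duller (more low-passed) copy than sources
 * in front.  The cutoff values are invented for the example and are
 * nothing like a measured pinna response (HRTF). */

#include <math.h>

#ifndef M_PI
#define M_PI 3.14159265358979
#endif

#define SAMPLE_RATE 44100.0

/* One-pole low-pass: y[i] = y[i-1] + a * (x[i] - y[i-1]) */
static void one_pole_lowpass(const float *in, float *out, int n,
                             double cutoff_hz)
{
    double a = 1.0 - exp(-2.0 * M_PI * cutoff_hz / SAMPLE_RATE);
    double y = 0.0;
    int i;

    for (i = 0; i < n; i++) {
        y += a * (in[i] - y);
        out[i] = (float)y;
    }
}

/* 'behind' selects a lower cutoff, crudely mimicking the slight
 * muffling of sounds arriving from the rear.  Each source gets its own
 * call (its own filter state) before the delay/mix stage above. */
void colour_by_direction(const float *in, float *out, int n, int behind)
{
    one_pole_lowpass(in, out, n, behind ? 3000.0 : 12000.0);
}
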

--
   Harry Fearnhamm, ,---.'\   EMAIL: loki@harlqn.co.uk
    Harlequin Ltd, (, /@ )/          ...!ukc!cam-cl!harlqn!loki
   Barrington Hall,  /( _/ ')   VOX: +44 (0)223 872522
     Barrington,     \,`---'    FAX: +44 (0)223 872519
   Cambridgeshire,       DISCLAIMER: Nothing is True.
      ENGLAND.                       Everything is Permitted.