[comp.sys.amiga] Sonic Tomfoolery

rap@ardent.UUCP (Rob Peck) (02/25/88)

We have two ears, and two output channels to fool with on the Amiga.
Each of the stereo channels has two real hardware channels feeding
into it.  If I hack the audiotools to add the stereo support
that I need for a coming hack, I can send the same sound to
a left and right channel at the same time, one at a higher volume
and the other at a lower volume, moving the sound from left to
right as the volume is modified on both channels simultaneously.
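
In Amiga terms the volume half is easy enough.  Something like this
untested fragment is what I have in mind (hardware channel volumes
run 0..64):

    /* Untested sketch: complementary volumes for a simple left/right
     * pan.  pos runs 0 (hard left) .. 64 (hard right); since Amiga
     * channel volumes are 0..64, the two always sum to the maximum.
     */
    void pan_volumes(int pos, int *leftvol, int *rightvol)
    {
        if (pos < 0)  pos = 0;
        if (pos > 64) pos = 64;
        *leftvol  = 64 - pos;   /* loud on the left when pos is small  */
        *rightvol = pos;        /* loud on the right when pos is large */
    }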

Then, not only control the volume, but also control the phasing
between the two channels a little bit so as to create a distance
perception "somewhere out there in front of me to the left or the
right".

But what kind of controls must I apply to make the sound appear
to be "behind me"?????

Just thought someone else here might have a ready answer.  When
and if I manage any part of the above, it will become part of
the audiotools and made available to whomever wants it, prolly
through some public distribution channels near you.

Rob Peck			...ihnp4!hplabs!ardent!rap

lupin3@ucscb.UCSC.EDU (-=/ Larry Hastings /=-) (02/25/88)

+-In article <319@ardent.UUCP>, rap@ardent.UUCP (Rob Peck) wrote:-
+----------
|
| We have two ears, and two output channels to fool with on the Amiga.
| Each of the stereo channels has two real hardware channels feeding
| into it.  If I hack the audiotools to add the stereo support
| that I need for a coming hack, I can send the same sound to
| a left and right channel at the same time, one at a higher volume
| and the other at a lower volume, moving the sound from left to
| right as the volume is modified on both channels simultaneously.
| 
| Then, not only control the volume, but also control the phasing
| between the two channels a little bit so as to create a distance
| perception "somewhere out there in front of me to the left or the
| right".
| 
| But what kind of controls must I apply to make the sound appear
| to be "behind me"?????
| 
| Rob Peck			...ihnp4!hplabs!ardent!rap
|
+----------

  This is a little trickier.  (This is also the reason that your phasing trick
appears to make the sound "in front of me", rather than "either in front of
or behind me".)
  The way us Human Beengs tell whether a sound is in front of us or
behind us is with those big funny-looking things that hang on the
outside of our heads, holding our glasses up.  Yep, the actual exterior
part of the ear.  You see, when something is behind you, the outer ear
runs a little interference and changes the spectral makeup of the sound
slightly; you compare it to a similar sound you've heard before and
decide "it's the same sound as THIS, it's just behind me".
  Professor Muma (my teacher for History of Electronic Music) went off on a
side note about this once; it seems he and an associate did some research on
the exact subject.  (He stuck small mikes in his ears, and they played identical
sounds at 12 different positions around his head, and did a spectral analysis of
all the results.)  I suppose I could ask him for more info if someone is REAL
interested... but methinks it would be a _major_ pain.  Who knows, it could
be real easy; but I suspect the former...

  By the way, he also quickly mentioned that humans can tell the
_height_ of a sound as well; the problem being that this is a) a learned
trait, and b) something that us civilization dwellers don't really need
to learn well.  (Aborigines, he said, could tell the height of a sound
amazingly well...)  He didn't explain how height was detected...

jea@ur-cvsvax.UUCP (Joanne Albano) (02/25/88)

In article <319@ardent.UUCP>, rap@ardent.UUCP (Rob Peck) writes:
> We have two ears, and two output channels to fool with on the Amiga.
> Then, not only control the volume, but also control the phasing
> between the two channels a little bit so as to create a distance
> perception "somewhere out there in front of me to the left or the
> right"...
> But what kind of controls must I apply to make the sound appear
> to be "behind me"?????
> Just thought someone else here might have a ready answer.  When
> Rob Peck			...ihnp4!hplabs!ardent!rap

You asked a good question, because this is a case where the
nervous system is more readily fooled.  In fact, a signal directly
behind you arrives identically at both ears!  And so does a signal
directly in front of you.  How is the ambiguity resolved?
First, there are times when it is not resolved and the brain
is confused.  Second, and more typically, one moves one's head, which
immediately provides a cue.  Third, the external auditory meatus
(called simply by some "the ears") has the effect of attenuating the
signal a bit, and this can provide a cue in the proper context.
I suggest that you provide context for your signal and inject a
simulated head movement by having the signal move slightly off the
midline, so it has a directional component and perhaps a slight
attenuation too.
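
Just to make the suggestion concrete, a sketch of what I mean (the
function and parameter names are hypothetical, and the 0.5 Hz wobble
and 5% offset are guesses to be tuned by ear):

    /* Hypothetical sketch: nudge a nominally centered sound slightly
     * off the midline, as a small head turn would, and attenuate a
     * hair.  t is time in seconds; volumes are the usual 0..64 range.
     */
    #include <math.h>

    void wobble_pan(double t, int basevol, int *leftvol, int *rightvol)
    {
        double off = 0.05 * sin(2.0 * M_PI * 0.5 * t); /* +/-5%, 0.5 Hz */
        *leftvol  = (int)(basevol * (1.0 - off) * 0.95); /* slight cut */
        *rightvol = (int)(basevol * (1.0 + off) * 0.95);
    }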

I am not an auditory psychobiologist, but I have some training
in this area.  Actually I'm a visual psychobiologist.

-- 
===================================================================
 Joanne Albano, Center for Visual Science     (716) 275-6848
 Room 256 Meliora Hall, Univ. of Rochester, Rochester NY 14627 
 UUCP: ur-cvsvax!jea@rochester.EDU ARPANET: UR-CVSVAX!JEA@ROCHESTER.ARPA

denbeste@bbn.COM (Steven Den Beste) (02/25/88)

rap@ardent.UUCP (Rob Peck) writes:
> Subject: Sonic Tomfoolery

> We have two ears, and two output channels to fool with on the Amiga.
> Each of the stereo channels has two real hardware channels feeding
> into it.  If I hack the audiotools to add the stereo support
> that I need for a coming hack, I can send the same sound to
> a left and right channel at the same time, one at a higher volume
> and the other at a lower volume, moving the sound from left to
> right as the volume is modified on both channels simultaneously.

> Then, not only control the volume, but also control the phasing

You're working too hard - the human ear has no mechanism for detection of
phase. Only for low frequencies does it matter, and there only because it
manifests as a slight time delay which neurons can measure.

> between the two channels a little bit so as to create a distance
> perception "somewhere out there in front of me to the left or the
> right".

> But what kind of controls must I apply to make the sound appear
> to be "behind me"?????

The front/back asymmetry of the human ear affects high frequencies much more
than low frequencies. If your signal goes through the equivalent of a variable
low-pass filter, it will seem to move front-to-back with front correlating to
lots of high frequencies and back to much less.  Precisely where the
hinge of the filter ramp should be I'm not certain - 8-10 kHz would be
a good place to start.
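
A crude stand-in for such a variable low-pass is a one-pole filter.
This sketch is mine, not anything out of a library; slide 'a' toward 1
to roll off the highs and push the sound "behind":

    /* One-pole low-pass: y[n] = a*y[n-1] + (1-a)*x[n].
     * a near 0 passes everything (in front); a near 1 muffles (behind).
     */
    float lowpass_step(float input, float a, float *state)
    {
        *state = a * (*state) + (1.0f - a) * input;
        return *state;
    }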

Of course, this is a much more difficult thing to do than running the
amplitude up and down.  An approach like the following might work:

Take your signal and run it through an FFT. Severely cut down the high
frequencies and then resynthesize the waveforms. We now have O (the original
waveform) and P (the processed, low-passed waveform).

You then generate a series of waveforms Q (containing various amounts of O and
P) by taking each location in turn and doing a non-equal average between the
same locations in O and P:

 Q(10%) = (9*O+P)/10
 Q(20%) = (8*O+2*P)/10

etc.
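
In C the blend is trivial (sketch, assuming 16-bit samples in memory;
k runs 1..9 for the 10%..90% mixes above):

    /* q = ((10-k)*o + k*p) / 10, done sample by sample. */
    void blend(const short *o, const short *p, short *q, int n, int k)
    {
        int i;
        for (i = 0; i < n; i++)
            q[i] = (short)(((10 - k) * (long)o[i] + k * (long)p[i]) / 10);
    }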

I'm guessing on this - it might turn out that the resynthesized waveform bore
no resemblance to the original. If so, you might have to resynthesize an
unchanged waveform just to make sure the phases are right (vital for the
averaging approach I describe). If this approach doesn't work, then you have to
do it the hard way: process the FFT output several times and resynthesize
each.
-- 
Steven C. Den Beste,   Bolt Beranek & Newman, Cambridge MA
denbeste@bbn.com(ARPA/CSNET/UUCP)    harvard!bbn.com!denbeste(UUCP)

dillon@CORY.BERKELEY.EDU (Matt Dillon) (02/26/88)

>You're working too hard - the human ear has no mechanism for detection of
>phase. Only for low frequencies does it matter, and there only because it
>manifests as a slight time delay which neurons can measure.
	
	Huh, where did this come from?  I've played around with sound
quite a bit, and if I generate two tones of slightly different frequencies,
I can hear the phase quite fine thank you.  In fact, if I generate two
tones of the same frequency that are out of phase, I can also tell the
difference.  This effect is certainly not limited to low frequencies.

				-Matt

flaig@cit-vlsi.Caltech.Edu (Charles M. Flaig) (02/26/88)

In article <319@ardent.UUCP> rap@ardent.UUCP (Rob Peck) writes:
>We have two ears, and two output channels to fool with on the Amiga.

[ lines deleted ]

>But what kind of controls must I apply to make the sound appear
>to be "behind me"?????

As I recall from a class I took last year, the shape of the ear causes
different filtering to take place on sound coming from behind you than
on sound coming from in front of you.

I believe the simplest effect was that of a low-pass filter from behind,
since your ears block high frequencies while low frequencies diffract
around them.  There were also second-order effects having to do with the
multiple reflections sound undergoes en route to your ear canal, which
had a smearing effect (sorry, I forget the details).

______________________________________________________________________________
  ___   ,               ,                                           ,;,;;;,
 /   Y /|              /|              Charles Flaig                ;/@-@\;
|      |/     __, ,__  |/              flaig@csvax.caltech.edu       | ^ |
|      /^\   /  | | |  /  /\  /\                                      \=/ 
 \____/|  \_/|_/\_/ \_/ \_\/_/_/_/     "What, you think they PAY me for this?"

jea@ur-cvsvax.UUCP (Joanne Albano) (02/26/88)

In article <8802251858.AA21577@cory.Berkeley.EDU>, dillon@CORY.BERKELEY.EDU (Matt Dillon) writes:
> >You're working too hard - the human ear has no mechanism for detection of
> >phase.
> 	
> 	Huh, where did this come from?  I've played around with sound
> quite a bit, and if I generate two tones of slightly different frequencies,
> I can hear the phase quite fine thank you.

Quite right, Matt!  The confusion here is that the "ear" has no
mechanisms whatsoever.  It is the brain that is the hearing system.
It turns out that the superior olive has some very specialized cells
for detecting phase differences between the two ears!  The Superior
Olivary Nucleus is in the brainstem at the third or fourth
processing point (synapse).

-- 
===================================================================
 Joanne Albano, Center for Visual Science     (716) 275-6848
 Room 256 Meliora Hall, Univ. of Rochester, Rochester NY 14627 
 UUCP: ur-cvsvax!jea@rochester.EDU ARPANET: UR-CVSVAX!JEA@ROCHESTER.ARPA

Michael_M_Butler@cup.portal.com (02/27/88)

The psychoacoustics of placement are tricky.  When the usual information
(phase, delay, etc.) gets scrambled enough, the auditory perception system
"gives up" and you wind up with the sound seeming to come from "nowhere"
(with headphones on, it sounds as if it's coming from "inside your head").

If you're working with a sound which is intended to move from (say)
directly in front of the listener around to the listener's left and then
continue behind, the factor to bear in mind is that the sound from dead
ahead and the sound from dead astern differ subtly.  The pinnae will
tend to cut treble and make the sound a trifle fainter.  There may be
other effects; I forget.  I think Chowning (at Stanford?) did some work
on this around '76 or '78.

Also bear in mind that the power ratio of the L and R channels has to vary
in a way consistent with physics if you want the motion from dead-ahead to
abeam-portside to sound right.  A little trig ought to do ya.
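
A sketch of the trig I mean, with the usual constant-power law (my
names; theta runs -pi/4 at hard left through 0 at dead ahead to +pi/4
at hard right, and the squared gains always sum to one):

    #include <math.h>

    /* Constant-power pan: equal gains of 0.707 at dead center. */
    void trig_pan(double theta, double *lgain, double *rgain)
    {
        *lgain = cos(M_PI / 4.0 + theta); /* fades as source swings right */
        *rgain = sin(M_PI / 4.0 + theta); /* grows as source swings right */
    }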

I'm not sure about perceived attack and decay.  I suspect that you're
not going to include simulated ambience in this rev (go ahead, prove me
wrong (:D)).

Thanks for audiotools, Mr. Peck!

Michael [My fancy trailer is at the cleaners] Butler
Xanadu Operating Company

thad@cup.portal.com (02/27/88)

Hi Rob,

See me at the FAUG meeting Tuesday and I can give you some ideas about some
of that sonic tomfoolery.  (Audio is a hobby of mine, and I've developed
some interesting circuits over the years).

What you're asking is more readily accomplished using binaural
techniques (practically necessitating the use of headphones), but there
are tricks one can do with discrete circuits; it would be interesting
to see if one can duplicate them on the Amiga.

Thad

joe@lakesys.UUCP (Joe Pantuso) (02/27/88)

In article <5570@cit-vax.Caltech.Edu> flaig@cit-vlsi.UUCP (Charles M. Flaig) writes:
>In article <319@ardent.UUCP> rap@ardent.UUCP (Rob Peck) writes:
>>We have two ears, and two output channels to fool with on the Amiga.

[ lines deleted ]

>>But what kind of controls must I apply to make the sound appear
>>to be "behind me"?????

This brings to mind another question: when the signals come off the
sound chip, are the voices already combined into the left and right
channels?  Would it be possible to split them?  I have this old
quadraphonic stereo here... could be fun.

Note: this idea of using lower frequencies is a good one.  An
additional idea: place the bass speakers behind and the tweeters in
front; in this manner the lower frequencies *will* be coming from
behind, as well as lending an all-around sense.

     Snail Mail:       Real Mail:
*-------------------*                                     *---------------*
|Joe Pantuso        |  joe@lakesys.UUCP                   |You too can be |
|1631 n. 69 St.     |  {ihnp4,uwvax}!uwmcsd1!lakesys!joe  |famous in five |
|Wauwatosa WI  53213|                                     |easy lessons   |
*-------------------* "Veteran of the Psychic Wars...."   *---------------*

ejkst@cisunx.UUCP (Eric J. Kennedy) (02/28/88)

In article <719@ur-cvsvax.UUCP>, jea@ur-cvsvax.UUCP (Joanne Albano) writes:
> In article <8802251858.AA21577@cory.Berkeley.EDU>, dillon@CORY.BERKELEY.EDU (Matt Dillon) writes:
> > >You're working too hard - the human ear has no mechanism for detection of
> > >phase.
> > 	
> > 	Huh, where did this come from?  I've played around with sound
> > quite a bit, and if I generate two tones of slightly different frequencies,
> > I can hear the phase quite fine thank you.

That's not 'detection of phase', that's detection of two tones of slightly
different frequencies.  It's not the same thing at all.  Two tones of
slightly different frequencies will create a 'beat' between them, which
will sound like the tones are quickly increasing and decreasing in
volume.  The 'phase' here is two tones of the _same_ frequency, but with 
one slightly leading or lagging the other.  Here I'd have to disagree
with Matt: we can't detect that nearly as readily.


-- 
------------
Eric Kennedy
ejkst@cisunx.UUCP

dillon@CORY.BERKELEY.EDU (Matt Dillon) (02/29/88)

:In article <719@ur-cvsvax.UUCP>, jea@ur-cvsvax.UUCP (Joanne Albano) writes:
:> In article <8802251858.AA21577@cory.Berkeley.EDU>, dillon@CORY.BERKELEY.EDU (Matt Dillon) writes:
:> > >You're working too hard - the human ear has no mechanism for detection of
:> > >phase.
:> > 	
:> > 	Huh, where did this come from?  I've played around with sound
:> > quite a bit, and if I generate two tones of slightly different frequencies,
:> > I can hear the phase quite fine thank you.
:
:That's not 'detection of phase', that's detection of two tones of slightly
:different frequencies.  It's not the same thing at all.  Two tones of
:slightly different frequencies will create a 'beat' between them, which
:will sound like the tones are quickly increasing and decreasing in
:volume.  The 'phase' here is two tones of the _same_ frequency, but with 
:one slightly leading or lagging the other.  Here I'd have to disagree
:with Matt: we can't detect that nearly as readily.

	What I meant was, let's say I generated two tones of 1 kHz and
1.00001 kHz.  The beat frequency is 0.01 Hz, and thus slow enough that
it doesn't affect the experiment.

	Now, what I am hearing at a MOMENT in time is the SAME frequency
generated twice, one slightly out of phase with the other, and the phase
changing slowly.  I can HEAR the phase changing.  I can HEAR the volume
get lower as it approaches 180 degrees, etc...

	Now, take two generators of the same frequency (say, 1 kHz).
Place them out of phase at some phase angle.  This continuous static
phase angle is equivalent to what I heard for a moment in the previous
experiment.  If you then change the phase angle to something else, it
is equivalent to what I heard at another moment in that experiment.
I can readily hear the difference between the two phases, though it
would be extremely difficult to tell which 'sound' is which phase angle.
Generally, it is quite easy to tell that there is *some* phase, because
such tones are distinctly different from a pure tone.
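
	You can check the arithmetic of the static case numerically.
This little program (mine, nothing Amiga-specific) sums two sines of
the same frequency at several phase offsets and reports the peak,
which follows 2*cos(phi/2) -- full at 0 degrees, zero at 180:

    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        double phi, t, s, peak;
        for (phi = 0.0; phi <= M_PI + 1e-9; phi += M_PI / 4.0) {
            peak = 0.0;
            for (t = 0.0; t < 0.001; t += 1e-7) {  /* one 1 kHz cycle */
                s = sin(2*M_PI*1000*t) + sin(2*M_PI*1000*t + phi);
                if (fabs(s) > peak) peak = fabs(s);
            }
            printf("phi = %4.0f deg   peak = %.3f\n",
                   phi * 180.0 / M_PI, peak);
        }
        return 0;
    }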

	If your two tones are coming from physically different places,
then the phase entering your ears may become quite complex, and depends
on the placement of your head.  For instance, your right ear may be
hearing 1 kHz at 30 degrees, and your left ear 1 kHz at 60 degrees.  If
you isolate your ears so each ear 'hears' only one of the generators,
you would hear just one phase, 1 kHz at X degrees.  But I'm getting
beyond my knowledge here.  I don't *think* the brain syncs up two tones
coming in uniquely one to each ear.  Besides, I don't think the original
poster was thinking of ear-isolation in his posting.

					-Matt
	

kent@xanth.cs.odu.edu (Kent Paul Dolan) (02/29/88)

In article <7233@cisunx.UUCP> ejkst@cisunx.UUCP (Eric J. Kennedy) writes:
>In article <719@ur-cvsvax.UUCP>, jea@ur-cvsvax.UUCP (Joanne Albano) writes:
>> In article <8802251858.AA21577@cory.Berkeley.EDU>, dillon@CORY.BERKELEY.EDU (Matt Dillon) writes:
>> > >You're working too hard - the human ear has no mechanism for detection of
>> > >phase.
>> > 	
>> > 	Huh, where did this come from?  I've played around with sound
>> > quite a bit, and if I generate two tones of slightly different frequencies,
>> > I can hear the phase quite fine thank you.
>
>That's not 'detection of phase', that's detection of two tones of slightly
>different frequencies.  It's not the same thing at all.  Two tones of
>slightly different frequencies will create a 'beat' between them, which
>will sound like the tones are quickly increasing and decreasing in
>volume.  The 'phase' here is two tones of the _same_ frequency, but with 
>one slightly leading or lagging the other.  Here I'd have to disagree
>with Matt: we can't detect that nearly as readily.

I don't think so!  I remember reading many years ago that although the
human ear can only hear pitches up to about 20,000 Hz, a stereo
system, to maintain fidelity enough to allow a listener to pick out
the second violinist playing half a tone flat in a symphony recording,
had to keep the left and right channel phase relationship correct to
the equivalent of 200,000 Hz, because the human brain, processing the
audio signals, is that sensitive to phase relationships.  I believe
the math can be done to prove this with a pocket calculator and a
back-of-the-envelope diagram of a symphony hall and a schematic human
head.

Certainly, given a choice between believing I detect directions that
closely by the amplitude difference or by the phase difference, I'll
go for phase on intuition alone.

I think I noted an argument supporting this view from one of the other
posters in a different article, where she identified herself as a
professional in the sensory sciences (with an emphasis on vision, but
she gave a reasonable-sounding bunch of medical jargon for why and
where we detect phase).

Comments?

Kent, the man from xanth.

dlleigh@mit-amt.MEDIA.MIT.EDU (Darren L. Leigh) (03/01/88)

In article <4292@xanth.cs.odu.edu> kent@xanth.UUCP (Kent Paul Dolan) writes:
>In article <7233@cisunx.UUCP> ejkst@cisunx.UUCP (Eric J. Kennedy) writes:
>>In article <719@ur-cvsvax.UUCP>, jea@ur-cvsvax.UUCP (Joanne Albano) writes:
>>> In article <8802251858.AA21577@cory.Berkeley.EDU>, dillon@CORY.BERKELEY.EDU (Matt Dillon) writes:

[discussion of phase, frequency, the ears and the mind]

>I don't think so!  I remember reading many years ago that although the
>human ear can only hear pitches up to about 20,000 Hz, a stereo
>system, to maintain fidelity enough to allow a listener to pick out
>the second violinist playing half a tone flat in a symphony recording,
>had to keep the left and right channel phase relationship correct to
>the equivalent of 200,000 Hz, because the human brain, processing the
>audio signals, is that sensitive to phase relationships.  I believe
>the math can be done to prove this with a pocket calculator and a
>back-of-the-envelope diagram of a symphony hall and a schematic human
>head.

Aaaayyy!  This gets batted around all the time in rec.audio.  How
about a little back-of-the-envelope calculation instead of reading
trashy audiophile magazines?

Sound travels at about 340 m/s.  At 200 kHz, that makes one wavelength
equal to 1.7 mm.  If phase were a problem, we would have to match the
path lengths from each speaker to the eardrum to within a small fraction
of that distance.  Even when wearing headphones, there are differences
in the ear canal which can easily amount to more than that.  Even at
reasonable frequencies, say about 10 kHz, the wavelength (3.4 cm) is
still too short to match paths against.  Also, there are diffraction
and interference problems that can mess up the phase.

I know there are some good texts on psychoacoustics.  Can somebody
recommend any?

>Certainly, given a choice between believing I detect directions that
>closely by the amplitude difference or by the phase difference, I'll
>go for phase on intuition alone.

This is the problem with a lot of those audiophile magazines.  Belief,
but no fact.

>Comments?
>
>Kent, the man from xanth.

Sorry, Kent.  I hope your presidential candidacy goes better.

Please don't post flaming replies to comp.sys.amiga.  If anyone has
anything substantial to say, please send e-mail.

fgd3@jc3b21.UUCP (Fabbian G. Dufoe) (03/01/88)

In article <8802251858.AA21577@cory.Berkeley.EDU>, dillon@CORY.BERKELEY.EDU (Matt Dillon) writes:
> 	Huh, where did this come from?  I've played around with sound
> quite a bit, and if I generate two tones of slightly different frequencies,
> I can hear the phase quite fine thank you.  In fact, if I generate two
> tones of the same frequency that are out of phase, I can also tell the
> difference.  This effect is certainly not limited to low frequencies.

If you generate two tones of slightly different frequencies you will
hear a pulsation caused by the amplitude change that results from the
sum of the two tones.  Sometimes their amplitudes will reinforce one
another and sometimes they will cancel one another.  The rate at which
they do this is called the beat frequency.  The closer in frequency the
two tones are, the lower their beat frequency.
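
The trig identity underneath this is

    sin(2*pi*f1*t) + sin(2*pi*f2*t)
        = 2 * cos(pi*(f1 - f2)*t) * sin(pi*(f1 + f2)*t)

that is, a tone at the average of the two frequencies whose envelope
swells and fades at the difference frequency.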

Phase refers to the relationship between a waveform and its time
origin.  For example, the plot of the sine function is 90 degrees out
of phase with the plot of the cosine function.  If you generate two
equal-amplitude sine-wave tones of identical frequency and adjust their
phase difference to 180 degrees you won't hear anything: the sum of
their amplitudes at each instant will be zero.  If the two tones are 90
degrees out of phase their combined amplitude will be greater than
either alone by a factor of the square root of two, and the resultant
waveform will be out of phase with each of the originals by 45 degrees.
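
Both numbers fall out of the same identity: sin(x) + sin(x + phi) =
2*cos(phi/2)*sin(x + phi/2), so the summed amplitude is 2*cos(phi/2)
(zero at 180 degrees, sqrt(2) at 90) and the result sits midway in
phase between the two originals, phi/2 (here 45 degrees) from each.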

It's true that you can hear the effect of a phase change if you generate
two tones of the same frequency and play around with the phase difference.
However, I don't think you could tell they were out of phase unless you
could compare the effect of changing their phase difference.

--Fabbian Dufoe
  350 Ling-A-Mor Terrace South
  St. Petersburg, Florida  33705
  813-823-2350

UUCP: ...gatech!codas!usfvax2!jc3b21!fgd3 

ejkst@cisunx.UUCP (Eric J. Kennedy) (03/01/88)

In article <4292@xanth.cs.odu.edu>, kent@xanth.cs.odu.edu (Kent Paul Dolan) writes:
> In article <7233@cisunx.UUCP> ejkst@cisunx.UUCP (Eric J. Kennedy) writes:
> >In article <719@ur-cvsvax.UUCP>, jea@ur-cvsvax.UUCP (Joanne Albano) writes:
> >> > 	
> >> > 	Huh, where did this come from?  I've played around with sound
> >> > quite a bit, and if I generate two tones of slightly different frequencies,
> >> > I can hear the phase quite fine thank you.
> >
> >That's not 'detection of phase', that's detection of two tones of slightly
> >different frequencies.  It's not the same thing at all.  Two tones of
> >slightly different frequencies will create a 'beat' between them, which
> >will sound like the tones are quickly increasing and decreasing in
> >volume.  The 'phase' here is two tones of the _same_ frequency, but with 
> >one slightly leading or lagging the other.  Here I'd have to disagree
> >with Matt: we can't detect that nearly as readily.
> 
> I don't think so!  I remember reading many years ago that although the
> human ear can only hear pitches up to about 20,000 Hz, a stereo
> system, to maintain fidelity enough to allow a listener to pick out
> the second violinist playing half a tone flat in a symphony recording,
> had to keep the left and right channel phase relationship correct to
> the equivalent of 200,000 Hz, because the human brain, processing the
> audio signals, is that sensitive to phase relationships.  
Show me the article and I'll believe it.  This still sounds like the
above misconception to me. This is two _different_ frequencies creating
a beat between them, not two identical frequencies out of phase.
  
>                                                               I believe
> the math can be done to prove this with a pocket calculator and a
> back-of-the-envelope diagram of a symphony hall and a schematic human
> head.

My pocket calculator says:
 
speed of sound = (wavelength) * (frequency)

1000 ft/sec = wavelength * 20000 1/sec

so wavelength = 0.05 ft = 0.6 inch.

One half of one wavelength = 0.3 inch.  This means that if you are in a
symphony hall and move your head 0.3 inches to the side, the phase of
that 20000 Hz sound will shift by 180 degrees.  You're telling me you can
hear the phase difference when you move your head from side to side?
  
Now, granted, at low frequencies, it's an entirely different ball of
wax.  At 60 Hz, the half-wavelength is 8 feet.  This is why you make
sure your woofers are wired with the same polarity.  So, the bottom line
is it depends on frequency.  

For (relatively) high frequency sounds, I still think you can't detect a
phase difference.
 
> Kent, the man from xanth.


-- 
------------
Eric Kennedy
ejkst@cisunx.UUCP

haitex@pnet01.cts.com (Wade Bickel) (03/01/88)

dillon@CORY.BERKELEY.EDU (Matt Dillon) writes:
>>You're working too hard - the human ear has no mechanism for detection of
>>phase. Only for low frequencies does it matter, and there only because it
>>manifests as a slight time delay which neurons can measure.
>	
>	Huh, where did this come from?  I've played around with sound
>quite a bit, and if I generate two tones of slightly different frequencies,
>I can hear the phase quite fine thank you.  In fact, if I generate two
>tones of the same frequency that are out of phase, I can also tell the
>difference.  This effect is certainly not limited to low frequencies.
>
>				-Matt


        Seems to me that you should be able to recognize phase only above
a certain frequency.  That is, stereo phase, which is what the original
posting was referring to.

        If I'm not mistaken, at low frequencies, say those much below 2 kHz,
the waves have a wavelength longer than the distance between your ears.
Seems to me that this would destroy phase recognition.  Of course, most
tones include harmonics, so a "low" note might still exhibit a shifting
of phase.


                                                        Wade.


UUCP: {cbosgd, hplabs!hp-sdd, sdcsvax, nosc}!crash!pnet01!haitex
ARPA: crash!pnet01!haitex@nosc.mil
INET: haitex@pnet01.CTS.COM

karl@sugar.UUCP (Karl Lehenbauer) (03/06/88)

Multiple musical notes being played simultaneously are routinely out of phase 
with each other.  When this happens, you hear a fattening of the sound.
Consider one violinist playing a note compared to several violinists.  
They do not phase align their waveforms.  (More on this later)

Consider the Amiga.  You start one waveform playing on an audio channel, 
then the same waveform playing on another.  If the sounds are played at 
exactly the same frequency and enough time has elapsed between starting them
that the hardware wavetable pointers aren't the same, you have two notes 
out of phase.  You will hear two notes.

Now consider when you have two notes that phase shift.  In this case
the frequencies of the two notes are slightly different.  Thus, as the
notes play, they go into and out of phase with each other at the rate
of the difference in their frequencies.  In other words, their waveforms
slide into and out of alignment with each other at the rate of the
difference in their frequencies.  As this happens, it significantly
changes what you hear.  When, during the phase shifting, the waveforms
have become very closely aligned, it sounds like one loud note.  As
they shift apart, the complexity of the sound generated goes up (although
the amplitude usually goes down), causing the sound to have a different
quality (timbre).  Since they're lining up and moving apart, lining up
and moving apart, this change in sound occurs repeatedly and rhythmically.
This rhythmic shifting and aligning caused by two notes playing at slightly
different frequencies causes the "beating" sound mentioned in an earlier
posting.  It is also the effect achieved by a wah pedal.  If the difference
in frequency of the notes gets a little bigger it can sound downright
discordant.  This is why musicians tune. :-)  If it gets a lot bigger,
it can produce harmony.
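
A toy illustration of the drifting alignment (my own sketch, plain C
rather than Amiga register banging): two voices step through the same
wavetable at slightly different rates, and the peak of the mix swells
and fades as they slide in and out of phase.

    #include <math.h>
    #include <stdio.h>

    #define TABLEN 256

    int main(void)
    {
        double table[TABLEN], ph1 = 0.0, ph2 = 0.0;
        double inc1 = 1.000, inc2 = 1.006;     /* slight detune */
        int i, n;

        for (i = 0; i < TABLEN; i++)
            table[i] = sin(2.0 * M_PI * i / TABLEN);

        for (n = 0; n < 40; n++) {             /* 40 blocks...       */
            double mix, peak = 0.0;
            for (i = 0; i < 1000; i++) {       /* ...of 1000 samples */
                mix = table[(int)ph1] + table[(int)ph2];
                if (fabs(mix) > peak) peak = fabs(mix);
                ph1 = fmod(ph1 + inc1, TABLEN);
                ph2 = fmod(ph2 + inc2, TABLEN);
            }
            printf("block %2d  peak = %.3f\n", n, peak);
        }
        return 0;
    }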
-- 
"Lack of skill dictates economy of style." - Joey Ramone
..!uunet!nuchat!sugar!karl, Unix BBS (713) 438-5018

shimoda@rmi.UUCP (Markus Schmidt) (03/08/88)

Hi!

There's another way to get at the effect.
I know of someone who uses this to enter relaxed and altered
states of mind.
The theory is that if the two ears hear slightly different tones,
the two halves of the brain try to meet.  Thus they synchronize
their activity, which leads to altered states of mind.

C u
Markus
(shimoda@rmi.uucp)

andy@phoenix.Princeton.EDU (Andrew M. Milburn) (03/09/88)

In article <908@rmi.UUCP> shimoda@rmi.UUCP (Markus Schmidt) writes:
>
>Hi!
>
>There's another way to get at the effect.
>I know of someone who uses this to enter relaxed and altered
>states of mind.
>The theory is that if the two ears hear slightly different tones,
>the two halves of the brain try to meet.  Thus they synchronize
>their activity, which leads to altered states of mind.

Yeah, I've played with this a little.  In the (admittedly feeble)
literature it's referred to as the Hemi-Sync effect.  It gets
used by people in sleep and dream research to shunt subjects
through various states of waking/sleeping/dreaming.

I built a grungy little Amiga program to try to produce these
effects, with no success.  The problem seems to be that
only a very few ratios between the tones are effective, and
nobody has published them.  Any leads?
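
For what it's worth, the generation side is easy even if the magic
ratios aren't known.  A sketch (my names; the 200 Hz carrier and 8 Hz
offset are arbitrary stand-ins, not the ratios in question) -- fill
one buffer per ear, since the Amiga's 8-bit channels are wired hard
left and hard right anyway:

    #include <math.h>

    void fill_buffers(signed char *left, signed char *right,
                      long n, double rate)
    {
        double f = 200.0, delta = 8.0;   /* made-up example numbers */
        long i;
        for (i = 0; i < n; i++) {
            left[i]  = (signed char)(120.0 *
                           sin(2.0 * M_PI * f * i / rate));
            right[i] = (signed char)(120.0 *
                           sin(2.0 * M_PI * (f + delta) * i / rate));
        }
    }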

keithd@cadovax.UUCP (Keith Doyle) (03/16/88)

In article <1527@sugar.UUCP> karl@sugar.UUCP (Karl Lehenbauer) writes:
>and moving apart, this change in sound occurs repeatedly and rhythmically.
>This rhythmic shifting and aligning caused by two notes playing at slightly
>different frequencies causes the "beating" sound mentioned in an earlier
>posting.  It is also the effect achieved by a wah pedal.

Not quite.  It is the effect achieved by delays, flangers, and chorus
pedals, NOT by a wah.  A wah pedal is simply a moving bandpass filter
with a fairly high Q.  While it is true that you can simulate the effect
of flangers with a bank of moving bandpass filters (as has been done in
"phaser" pedals), to do this with a wah you'd need about five of them
ganged together (or an old Roland Jet-Phaser), and even then you are
only simulating part of the effect of delays/flangers/chorus units.
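
For the curious, "moving bandpass filter with a fairly high Q" is about
a dozen lines of code.  This is the textbook state-variable filter, not
anything from a particular pedal; f is the frequency coefficient
(roughly 2*sin(pi*fc/fs)) and damp is about 1/Q, so a small damp means
a sharp, wah-like resonance.  Sweep f to sweep the wah:

    typedef struct { float low, band; } SVF;

    float wah_step(SVF *s, float in, float f, float damp)
    {
        float high;
        s->low  += f * s->band;
        high = in - s->low - damp * s->band;
        s->band += f * high;          /* band-pass output */
        return s->band;
    }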

Keith Doyle
#  {ucbvax,decvax}!trwrb!cadovax!keithd  Contel Business Systems 213-323-8170