[sci.electronics] Comb filters

doug@certes.UUCP (07/11/87)

I keep hearing references to "comb filters" in connection with both video
and audio.  What are they?

Part two of the question: some years ago, I read an interview with someone
at Atari, who said that they had a new video game that used comb filters
to remove phase information from sound output, which removed directional
cues that the human ear/brain uses, so that the sound from this arcade
game seemed to be coming from all directions at once, rather than sounding
like it came right out of the speaker. This seemed odd but interesting...
can anyone shed any light on this?  Is this something I could do with
digitized sound on, e.g. an Amiga?

Sorry to post to both groups separately, but cross-posting doesn't work here.
	Doug Merritt		ucbvax!unisoft!certes!doug
				("reply" doesn't work, so mail to the above)

scw@CS.UCLA.EDU (07/13/87)

In article <8707110358.AA14182@unisoft.UNISOFT> doug@certes.UUCP writes:
>I keep hearing references to "comb filters" in connection with both video
>and audio.  What are they?
>
>Part two [...] video game that used comb filters
>to remove phase information from sound output, which removed directional
>cues that the human ear/brain uses, [...] light on this?
>Is this something I could do with
>digitized sound on, e.g. an Amiga?

Sorry, the human auditory system discards phase information (actually it
doesn't even capture it).  The only things of interest to the middle ear are
pitch and volume; any other information is generated/recreated by the brain.
Stephen C. Woods; UCLA SEASNET; 2567 BH;LA CA 90024; (213)-825-8614
UUCP: ...!{{inhp4,ucbvax,{hao!cepu}}!ucla-cs,ibmsupt!ollie}!scw 
ARPA:scw@CS.UCLA.EDU  <-note change from locus.ucla.edu

phd@speech1.cs.cmu.edu (Paul Dietz) (07/14/87)

In article <7150@shemp.UCLA.EDU>, scw@CS.UCLA.EDU writes:
> In article <8707110358.AA14182@unisoft.UNISOFT> doug@certes.UUCP writes:
> >...to remove phase information from sound output, which removed directional
> >cues that the human ear/brain uses, [...] light on this?
> 
> Sorry, the human auditory system discards phase information (actually it
> doesn't even capture it).  The only things of interest to the middle ear are
> pitch and volume; any other information is generated/recreated by the brain.

Above a kHz or so, this is a reasonable assumption. However, at lower
frequencies the cells in the cochlea are capable of "following" the
signal to some degree.  Far be it from the brain not to use info it has gone
to the trouble of gathering!  Phase IS IMPORTANT for horizontal
localization at LOW FREQUENCIES. For more info, look at the literature
on the position variable model by Stern and Colburn. (In the
Journal of the Acoustical Society of America 64, 127-140, 1978.)

Paul H. Dietz
Carnegie Mellon University

kavaler@zion.berkeley.edu (Robert Kavaler) (07/14/87)

The human ear is sensitive to the relative phase difference between sounds
entering both ears.  This is how we (and animals) can tell where sounds
originate.  By shifting the phase difference a sound can "appear" to
come from elsewhere.  Phase-shifting techniques are used extensively in
all sorts of stereo equipment, from cheap boom boxes to very expensive
mixers.
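
Incidentally, the simplest way to get such a phase shift -- summing a
signal with a delayed copy of itself -- is precisely a feedforward comb
filter, which ties back to the original question.  Here is a minimal
sketch in C; the sample rate, delay, and gain are made-up numbers, purely
for illustration:

#include <math.h>
#include <stdio.h>

#define PI 3.14159265358979
#define FS 44100.0   /* sample rate, Hz (assumed for the example) */
#define D  44        /* delay in samples; 44/44100 =~ 1 msec */
#define G  1.0       /* gain on the delayed copy */

/*
 * Feedforward comb filter: y[n] = x[n] + G * x[n - D].
 * Its magnitude response, |1 + G exp(-j 2 pi f D / FS)|, has evenly
 * spaced notches -- the "teeth" that give the filter its name.
 * This program just tabulates that response.
 */
int main(void)
{
    double f;

    for (f = 0.0; f <= 2000.0; f += 125.0) {
        double w   = 2.0 * PI * f * (double) D / FS;
        double mag = sqrt(1.0 + G * G + 2.0 * G * cos(w));
        printf("%7.0f Hz   |H| = %.3f\n", f, mag);
    }
    return 0;
}

The notches fall at odd multiples of FS/(2*D) -- about 500 Hz, 1500 Hz,
and so on here -- and it is presumably this regularly notched response
that the Atari trick exploited.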

randys@mipon3.intel.com (Randy Steck) (07/15/87)

In article <19698@ucbvax.BERKELEY.EDU> kavaler@zion (Robert Kavaler) writes:
>
>
>The human ear is sensitive to the relative phase difference between sounds
>entering both ears.  This is how we (and animals) can tell where sounds
>originate.  By shifting the phase difference a sound can "appear" to
>come from elsewhere.  Phase-shifting techniques are used extensively in
>all sorts of stereo equipment, from cheap boom boxes to very expensive
>mixers.

I've sometimes wondered about this......

It seems to me that the phase of the signal does not have as much to do
with apparent direction as the delay and intensity of the sound reaching
the ears do.  Given the wavelength of the sound, phase-shifting would
seem to have a similar effect to changing this delay.  A quick calculation
shows that the apparent delay (worst case) from one ear to the other is
about .2 usec.  Is this sufficient to resolve directions?

Also, it is well known that the eye requires an inherent "jitter" to
resolve images and avoid saturating the optical receptors.  Do the ears
also require some "jitter" to resolve the direction from which a sound
originates?

  Randy Steck
  Intel Corp.

jbm@aurora.UUCP (Jeffrey Mulligan) (07/18/87)

in article <883@omepd>, randys@mipon3.intel.com (Randy Steck) says:
> Posted: Wed Jul 15 12:07:49 1987
> 
> In article <19698@ucbvax.BERKELEY.EDU> kavaler@zion (Robert Kavaler) writes:
>>
>>
>>The human ear is sensitive to the relative phase difference between sounds
>>entering both ears.  This is how we (and animals) can tell where sounds
                              ^
                          one way

>>originate.  By shifting the phase difference a sound can "appear" to 
>>come from elsewhere.  Phase-shifting techniques are used extensively in
>>all sorts of stereo equipment, from cheap boom boxes to very expensive
>>mixers.
> 
> I've sometimes wondered about this......
> 
> It seems to me that the phase of the signal does not have as much to do
> with apparent direction as the delay and intensity of the sound reaching
> the ears do.  Given the wavelength of the sound, phase-shifting would
> seem to have a similar effect to changing this delay.  A quick calculation
                                                             ^
                                                     too quick, I think

> shows that the apparent delay (worst case) from one ear to the other is
> about .2 usec.  Is this sufficient to resolve directions?  
> 

Let    v       represent the speed of sound.
Let    h       represent the interaural distance (width of head)

v =~ 1000 feet/second    (within a factor of 2, anyway)
h =~ 0.5 feet		 (also within a factor of 2)

The delay (d) for a sound source at some angle theta from the observer
(theta=0 == straight ahead) is  ( h sin( theta ) ) / v .
"Worst case" occurs when theta = +- 90 degrees, so
d = h / v = 0.5 MILLIseconds.

Interaural phase differences are only useful for wavelengths greater than
2h, i.e. frequencies less than 1 kHz.  For higher frequencies, delays
of transients as well as intensity differences ("acoustical shadow"
effects) are probably important.
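
For anyone who wants to check the arithmetic, here is the same
calculation as a short C program (v and h are the rough figures above;
the 15-degree step is arbitrary):

#include <math.h>
#include <stdio.h>

#define PI 3.14159265358979
#define V  1000.0   /* speed of sound, feet/second (rough) */
#define H  0.5      /* interaural distance, feet (rough) */

int main(void)
{
    int deg;

    /* d = h sin(theta) / v, as derived above */
    for (deg = 0; deg <= 90; deg += 15) {
        double d = H * sin(deg * PI / 180.0) / V;
        printf("theta = %2d deg   delay = %4.0f usec\n", deg, d * 1.0e6);
    }

    /* phase cues stay unambiguous below v / (2h) */
    printf("phase cues useful below %.0f Hz\n", V / (2.0 * H));
    return 0;
}

The worst case (theta = 90 degrees) comes out to 500 usec, i.e. the 0.5
milliseconds above -- three orders of magnitude larger than the .2 usec
figure.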


-- 

	Jeff Mulligan (jbm@ames-aurora.arpa)
	NASA/Ames Research Ctr., Mail Stop 239-3, Moffett Field CA, 94035
	(415) 694-5150