[net.audio] Unmeasurable differences?

ark@rabbit.UUCP (Andrew Koenig) (01/14/84)

Quote from Phil Rastocny:


	Now if there is a set of technical parameters that will assist me
	in relating what can be heard to what can be measured, fine.  Specs
	are supposed to steer us in the direction of what equipment we want
	by correlating them to what we hear.  AND we should be able to measure
	literally anything that we hear.  Great!  No argument.

	But if I can hear something that cannot be correlated to any of the
	specifications supplied with a piece of equipment, then
	where do we go?  We talk about what we hear (like the soundstage size
	or inner detail) and then poke around in the circuit until we realize
	exactly what causes the observation spec-wise.

	Amplifiers that have similar distortion, damping, power, bandwidth, etc.,
	should all sound about the same.  But when comparing two equivalent
	amplifiers (like an Acoustat TNT-200 to a Kenwood Basic M-2)
	on a suitably refined reference system, they still sound different.
	(Both amps are class B, about 200 W, < 0.01% THD and IM distortion, both
	slew > 100 V/us, and both are finely engineered.)  All of the specs are
	orders of magnitude beyond what the ear should be able to detect, and
	essentially identical.  And yet the two amps still sound significantly
	different.

I'll mention this again.  A number of years ago, Audio magazine did
a carefully controlled double-blind test.  Very briefly, they concluded:

	1.  If two amplifiers with reasonably low distortion
	    figures are made to match VERY accurately in
	    frequency response, and they are not driven into
	    clipping, it is not possible to tell the difference
	    between them by listening, even if the listener
	    is a "golden ear."

	2.  Two amplifiers that differ in frequency response
	    by a little tiny bit will be perceived by a careful
	    listener as different, but not in any way subjectively
	    related to frequency response.  Rather, the listener will
	    hear differences in "depth," "imaging," and so on.

	3.  We are talking about frequency response variations of
	    the order of 0.3 dB -- small enough that two samples of
	    the same make and model will be that far apart.  In fact,
	    a SINGLE amplifier will vary that much from one day to
	    the next, from changes in temperature and humidity.
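
For a sense of scale (the arithmetic is mine, not the article's): a
0.3 dB level difference corresponds to a voltage ratio of only about
1.035, i.e. roughly a 3.5% change in level.  In Python:

    level_difference_db = 0.3
    voltage_ratio = 10 ** (level_difference_db / 20.0)   # dB -> voltage ratio
    print(voltage_ratio)    # about 1.035, i.e. roughly a 3.5% change in level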

One of these days, when I have the time, I will scare up the exact
reference.  I posted it the first time I mentioned this test.  In
any event, the description of the test conditions was detailed enough
that unless the author was lying outright, there was no doubt in my
mind that the test was fair.

I have NEVER seen any documented evidence that it IS possible
to hear differences in amplifier behavior that cannot be accounted
for by objectively measurable parameters.

rentsch@unc.UUCP (Tim Rentsch) (01/14/84)

It seems worth pointing out that the testing method may have had
a more profound impact on the results than the effect the experimenters
were trying to test.  In particular, if the amplifiers were
frequency matched by some kind of frequency equalization circuit,
I (for one) would believe the effects of the F.E. circuit to
swamp the effects of distortion (of the extremely low variety) in
the amplifier.  You have to be VERY careful when doing experiments
of this kind that you are measuring what you intend to measure.

(The other result, i.e., that the frequency-response difference was not
perceived as such, is very interesting.  It is, however, exactly what I
would expect if a frequency equalization method were used.)

ark@rabbit.UUCP (Andrew Koenig) (01/15/84)

It has been suggested that equalization might introduce enough distortion
that it would mask the differences between the amplifiers.  The experimenters
considered this objection, and offered the following counter-argument:
the amount of signal processing done in the experiment was far less
than the amount done during mastering of the commercial LPs used as
a signal source.  Even direct-to-disc has to go through RIAA equalization,
and 'conventional' records are made with much more signal processing
than that.
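
To put rough numbers on that (mine, not the experimenters'): the
standard RIAA playback curve alone applies on the order of 40 dB of
frequency shaping across the audio band, dwarfing the fraction-of-a-dB
matching at issue here.  A small Python sketch of the curve, using the
standard 3180/318/75 microsecond time constants:

    import math

    # Standard RIAA playback (de-emphasis) time constants, in seconds.
    T1, T2, T3 = 3180e-6, 318e-6, 75e-6

    def riaa_playback_db(f):
        """Unnormalized magnitude of the RIAA de-emphasis curve at f Hz, in dB."""
        w = 2 * math.pi * f
        mag = math.hypot(1, w * T2) / (math.hypot(1, w * T1) * math.hypot(1, w * T3))
        return 20 * math.log10(mag)

    ref = riaa_playback_db(1000)          # normalize to 0 dB at 1 kHz, as usual
    for f in (20, 100, 1000, 10000, 20000):
        print(f, round(riaa_playback_db(f) - ref, 1))
    # roughly +19 dB at 20 Hz and -20 dB at 20 kHz: ~40 dB of shaping overall,
    # versus the ~0.3 dB matching discussed in the listening test.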

I do not think the people who claim to be able to hear the difference
between power amps insist that they can only do so if you are playing
a live 30 IPS master tape without any equalization...

gregr@tekig1.UUCP (Greg Rogers) (01/16/84)

Well-controlled double-blind testing should prevent the problem that you cite.
Very simply, the frequency equalization device is applied to only one of the
units under test at a time.  That is, one component is equalized to the other,
not both components to some arbitrary flatness.  There are two distinct cases.

If the objective of the test is to disprove the claim that one component is
superior to another, the equalization device is applied only to the presumed
inferior device, which can only make it even more inferior (except, of course,
with regard to frequency response).  If the test then shows no detectable
difference between the components, the equalization device certainly cannot be
blamed for degrading the "superior" component.  The supposedly superior
component is thereby shown to be equivalent except for frequency response
variations, which can be simply corrected as in the test.

If the purpose of the test is to prove that no differences exist between the
components (neither being claimed as superior), then the test should be run
twice, with the equalization device applied first to one component and then
to the other.  If both tests show no differences between the components, then
it is proved that no differences exist other than frequency response, which
again can be simply corrected as in the test.
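
Here is a minimal sketch of that two-pass procedure, with the names
purely hypothetical: equalize(x, y) stands for equalizing x's frequency
response to match y, and listener_can_discriminate() stands for running
and scoring an actual double-blind listening session:

    def run_two_pass_test(comp_a, comp_b, equalize, listener_can_discriminate):
        # Pass 1: equalize A's frequency response to match B, then test blind.
        heard_1 = listener_can_discriminate(equalize(comp_a, comp_b), comp_b)
        # Pass 2: equalize B's frequency response to match A, then test blind.
        heard_2 = listener_can_discriminate(comp_a, equalize(comp_b, comp_a))

        if not heard_1 and not heard_2:
            return "no audible difference beyond frequency response"
        # A positive result does not yet implicate the components: the
        # equalizer itself must first be ruled out, as discussed below.
        return "difference detected -- isolate equalizer vs. components"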

It is of course more complicated if the results of the test indicate that
differences do exist.  It is then necessary to isolate the cause as either
the equalizer or the components.  There are numerous ways of doing this, but
it would then be up to the OPPONENTS of component differences to prove the
equalizer at fault.  In other words, the use of an equalizer could incorrectly
influence the result in favor of those who claim differences between
components, but CANNOT favor those who claim no differences, when done in the
correct manner that I've outlined.  I hope this helps you understand how
objective these tests really are when designed within the rules of the
scientific method.
				Greg Rogers
 

mmt@dciem.UUCP (Martin Taylor) (01/17/84)

It's a natural presumption that any subjectively detectable variation
in sound quality can be correlated with some variation in objective
measurements, but the question often is "what should we measure?"

The ear is a funny beast. Sometimes it doesn't hear what you might
think was the most awful mess, and other times its sensitivity is
exquisite. Training matters enormously, although the training may have
to be very specific to the listening task.  For example, one classic
old experiment used a specially invented signal whose total energy
was constant during the entire pulse of some tens of msec. But the
energy distribution across the ears was a function of time. It started
at 100% in one ear and wound up at 100% in the other ear, with
a linear shift over the pulse duration.  So it was heard as a left-to-right
or right-to-left sweep.  In one of the four quarters of the sweep, the
energy split reverted for about 10 msec (if I remember properly) to
100% in one ear. The observer had to tell which quarter had the
blip in it. People were trained for days and weeks, and still could
score only chance results, until suddenly they found out how to listen,
and then over the space of a day or two went up to essentially perfect
scores.
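
A rough reconstruction of that kind of stimulus (the sample rate,
pulse length, and blip placement below are my guesses; the original
parameters are only approximately remembered above):

    import numpy as np

    fs = 48000                            # sample rate (Hz), assumed
    dur = 0.050                           # 50 ms pulse ("some tens of msec")
    t = np.arange(int(fs * dur)) / fs
    carrier = np.random.randn(t.size)     # broadband carrier; a tone would also do

    p = 1.0 - t / dur                     # fraction of energy in the left ear: 1 -> 0
    blip = (t >= 0.025) & (t < 0.035)     # a ~10 ms blip, here in the third quarter
    p = np.where(blip, 1.0, p)            # split snaps back to 100% left during it

    left = np.sqrt(p) * carrier           # sqrt so that p is an *energy* fraction
    right = np.sqrt(1.0 - p) * carrier
    stereo = np.stack([left, right], axis=1)
    # left**2 + right**2 == carrier**2 at every sample: total energy is constant
    # while the apparent position sweeps from one ear toward the other.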

Another example: Take someone off the street and get them to listen
for lateralization of sound caused by an interaural time delay.
Typically, an interaural shift of some 100+ microseconds is required
for perfect lateralization (telling which side of centre the sound is).
After prolonged training, subjects may suddenly start being able to
do the task with only around 10 usec shifts.
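
A small sketch of how such a stimulus might be generated: the right
channel is a fractionally delayed copy of the left, done here crudely
with linear interpolation (sample rate and burst length are my own
assumptions):

    import numpy as np

    def delay_channel(x, fs, delay_s):
        """Delay x by delay_s seconds (fractional-sample, crude linear interp)."""
        n = np.arange(x.size)
        return np.interp(n - delay_s * fs, n, x, left=0.0)

    fs = 48000
    burst = np.random.randn(int(0.005 * fs))      # a 5 ms noise burst

    for itd in (100e-6, 10e-6):                   # ~100 us vs ~10 us shifts
        left = burst
        right = delay_channel(burst, fs, itd)     # right ear lags, image pulls left
        stereo = np.stack([left, right], axis=1)
        # ~100 us is enough for reliable lateralization by untrained listeners;
        # ~10 us is reported above only after prolonged training.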

It is quite likely that the standard measurement specs do not take
into account some signal parameters that are used by some listeners
and not by other *equally good* listeners.  I would trust subjective
discrimination tests to tell me that there was something different,
and would then look to objective measures to find out what those
differences might be.  After finding out, I would tend to look for
those differences in any later tests on new equipment.

There is still art in psychoacoustics.
-- 

Martin Taylor
{allegra,linus,ihnp4,uw-beaver,floyd,ubc-vision}!utzoo!dciem!mmt