keithe@teklabs.UUCP (08/03/83)
Some time ago I was playing around with my "audio system" (in quotes 'cause you hi-buck boys would laugh at it); the speaker systems are three-way, and at the time I was tri-amping with active crossovers (obviously) ahead of the amplifiers. I had made the mistake of building 2nd order filters, and later realized the (alleged) problems that can occur: at the crossover frequency the adjacent drivers will be 180 degrees out of phase (+90 for one and -90 for the other) - one speaker is pushing while the other is pulling.

But before I corrected the problem (I went to third order filters) I decided to play around with it for a while. I connected a phase-reversal switch in the midrange circuit of one of the speaker systems and (with the other speaker turned down/off) I played around with the effect. The conclusion I drew was that they sounded different, but I could not determine which way sounded better. "Better" depended ENTIRELY on the source material. Male vocal or female vocal wanted different switch selections. Instrumental? Well, how do you want it to sound? Rock sounded best if I flipped the switch back and forth as fast as I could. (Not really.)

Without the REAL LIVE SOURCE to compare to - and where do I get one of those in my living room, anyway? - how can I tell? (By the way - that's a rhetorical question. Please clog neither my mail box nor the net with replies :-) )

(Sidebar follows for one paragraph): Some time ago I heard about a guy who had developed some digitization/storage techniques he was making available. He had come up with some clever ways of reducing the number of bits required to store x amount of speech. One of the tricks is that he took parts of speech and phase-shifted the various parts of the spectrum so that the time-domain representation of the signal was symmetrical around some mid-point. That way he could "record" just half of the waveform and play it by reading out the waveform forward to the midpoint and then "backward" to the beginning.
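(A rough numerical sketch of that trick - my own reconstruction, not his actual scheme: if you throw away every spectral component's phase (zero-phase spectrum) and then delay the result by half the record length, the time-domain waveform comes out mirror-symmetric about its midpoint, so only the first half needs storing. The signal name and parameters below are made up for illustration.)

```python
import numpy as np

# Reconstruction (hypothetical) of the half-waveform trick described above:
# discard the original phases by keeping only the magnitude spectrum; the
# inverse FFT of a real (zero-phase) spectrum is even-symmetric, and a
# circular shift of N/2 centers that symmetry on the record's midpoint.
N = 512
rng = np.random.default_rng(0)
x = rng.standard_normal(N)                    # stand-in for a chunk of speech

mags = np.abs(np.fft.rfft(x))                 # magnitudes only, phases dropped
y = np.roll(np.fft.irfft(mags, n=N), N // 2)  # symmetric about n = N/2

mid = N // 2
assert np.allclose(y[mid + 1:], y[mid - 1:0:-1])   # mirror symmetry holds

# "Record" just the first half (through the midpoint), then play it out
# forward and then backward to rebuild the whole chunk.
half = y[:mid + 1]
rebuilt = np.concatenate([half, half[-2:0:-1]])
assert np.allclose(rebuilt, y)
```

Forcing every component to zero phase is a cruder mangling than whatever per-segment adjustment he actually did, but it shows the mechanism: the stored data is halved, and the whole gamble is that the ear won't much mind the phase shuffling.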
It was still easy to recognize any given voice - fidelity was very acceptable (remember, this was demonstrated for voice recording). The reason it works is that the ear is (apparently) very tolerant of phase differences. What counts is time delays, not phase delays. You can't have the flute coming out before the tuba; your brain sez "Hey! Somethin' is wrong here!"

(Opinion follows - one more paragraph) I still say that the best thing you can do to spruce up a modest sound system is to get a multi-band equalizer, and make the sound come out the way you like it. It's your system, your ears - have fun and enjoy. Don't let anyone force you into the "that's what it's SUPPOSED to sound like" syndrome.

(And, the signature...) keith ericson or keithe o'teklabs (see the signatures debate in net.general if you're wondering what that's all about)