[rec.music.synth] Multi-effects units - How?

rlw@ttardis.UUCP (Ron Wilson) (01/04/90)

In article <523@lexicon.com>, fc@lexicon.com (Frank Cunningham) writes:
>In article <1989Dec31.055208.2339@smsc.sony.com> dce@smsc.sony.com (David Elliott) writes:
>
>> So, I assume that most multi-effects units consist of a general purpose
>> processor, some special-purpose processors, and a bunch of general use
>> RAM.
>> 
>> Is this correct, or do they use multiple general-purpose processors?
>
>A typical multi-effect unit may consist of a general purpose processor
>and its ROM/RAM space to handle the user interface and internal
>house-keeping, and usually one special-purpose processor (often a
>proprietary DSP) for doing the audio. It will typically have 64K of
>audio RAM, which is enough for over a second of pure delay, depending
>on the sample rate.
>
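To put numbers on that: if the 64K means 64K 16-bit words, that's
65536 samples, or roughly 1.5 seconds at a 44.1 kHz sample rate and
about 2 seconds at 32 kHz (both rates are illustrative assumptions,
not from the article). A pure delay is then little more than a
circular buffer in the audio RAM. Here is a minimal C sketch of the
idea; all names and sizes are made up for illustration:

#include <stdint.h>

#define BUF_SIZE 65536L              /* 64K samples of audio RAM */

static int16_t delay_ram[BUF_SIZE];  /* the "audio RAM" */
static long write_pos = 0;

/* Process one sample: store the input, then read back the sample
 * written delay_len samples ago (0 < delay_len < BUF_SIZE). */
int16_t delay_tick(int16_t in, long delay_len)
{
    long read_pos = write_pos - delay_len;
    if (read_pos < 0)
        read_pos += BUF_SIZE;        /* wrap around the circular buffer */

    delay_ram[write_pos] = in;
    write_pos = (write_pos + 1) % BUF_SIZE;

    return delay_ram[read_pos];
}

Note the per-sample work is trivial; the resource that matters is the
RAM itself, which is the point made below about reverb and delay.
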
>Pitch shifting involves a complex real-time interaction between the GP
>and the DSP.
>
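A toy model makes that interaction easier to picture. A time-domain
pitch shifter reads out of the same sort of delay RAM, but with a
read pointer that moves faster or slower than the write pointer, so
the two periodically collide and the read pointer has to splice back.
Real units crossfade between two read taps at the splice, and
managing those splice points is one plausible source of the GP/DSP
traffic. The C sketch below (all names illustrative, crossfading
omitted, so it clicks at each wrap) shows just the core resampling
idea:

#include <stdint.h>

#define PBUF 4096L                  /* small pitch buffer, in samples */

static int16_t pbuf[PBUF];
static long wpos = 0;
static double rpos = 0.0;           /* fractional read position */

/* One sample in, one out; ratio > 1.0 shifts pitch up, < 1.0 down. */
int16_t pitch_tick(int16_t in, double ratio)
{
    long i;
    double frac, out;

    pbuf[wpos] = in;
    wpos = (wpos + 1) % PBUF;

    /* Linear interpolation between the two samples around rpos. */
    i = (long)rpos;
    frac = rpos - (double)i;
    out = (1.0 - frac) * pbuf[i] + frac * pbuf[(i + 1) % PBUF];

    rpos += ratio;                  /* read at a different rate... */
    if (rpos >= (double)PBUF)
        rpos -= (double)PBUF;       /* ...and splice when we wrap */

    return (int16_t)out;
}
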
>Reverb and Delay are RAM intensive, but only require GP/DSP
>interaction when a parameter is changed.
>
>EQ and feedback don't require much RAM.
>
>So the complexity of your multi-effect is a resource allocation
>trade-off.
>
>> Also, how do they get the sound to come through without much perceptible
>> delay?
>
>All these effects are time-domain based. There are no frequency-domain
>processors that you'd care to pay for. There is a small perceptible
>delay through a typical pitch changer.

How does one do EQ in the time domain?  The only digital method I've
ever heard of for doing EQ required transforming to and from the
frequency domain.  (But then, I concentrated on computer EE, so my
general EE knowledge is a bit weak.)
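
For what it's worth: digital EQ is normally done entirely in the time
domain, with recursive (IIR) difference equations, the direct digital
analog of an analog filter circuit. No transform to the frequency
domain is involved. The standard building block is the second-order
"biquad" section:

    y[n] = b0*x[n] + b1*x[n-1] + b2*x[n-2] - a1*y[n-1] - a2*y[n-2]

Here is a minimal C sketch of one peaking-EQ band. The coefficient
formulas are one common parameterization, and all names are
illustrative, not taken from any particular unit:

#include <math.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

typedef struct {
    double b0, b1, b2, a1, a2;      /* normalized coefficients */
    double x1, x2, y1, y2;          /* filter state (delayed samples) */
} biquad;

/* Set up a peaking EQ band: center frequency f0 (Hz), bandwidth
 * control Q, boost/cut in dB, at sample rate fs (Hz). */
void peak_eq_init(biquad *f, double f0, double Q, double gain_db,
                  double fs)
{
    double A     = pow(10.0, gain_db / 40.0);
    double w0    = 2.0 * M_PI * f0 / fs;
    double alpha = sin(w0) / (2.0 * Q);
    double a0    = 1.0 + alpha / A;

    f->b0 = (1.0 + alpha * A) / a0;
    f->b1 = (-2.0 * cos(w0)) / a0;
    f->b2 = (1.0 - alpha * A) / a0;
    f->a1 = (-2.0 * cos(w0)) / a0;
    f->a2 = (1.0 - alpha / A) / a0;
    f->x1 = f->x2 = f->y1 = f->y2 = 0.0;
}

/* One sample in, one sample out: pure time-domain processing. */
double biquad_tick(biquad *f, double x)
{
    double y = f->b0 * x + f->b1 * f->x1 + f->b2 * f->x2
             - f->a1 * f->y1 - f->a2 * f->y2;
    f->x2 = f->x1;  f->x1 = x;
    f->y2 = f->y1;  f->y1 = y;
    return y;
}

A multi-band EQ is just several of these sections in cascade, and
each band needs only four state variables, which is why EQ takes
almost no RAM.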