[comp.sys.next] SoundDriver help

judge@gpu.utcs.utoronto.ca (Peter Judge) (06/29/90)

I have in mind an application that creates two or three sounds 
algorithmically (stuffing them into SoundObjects) and uses the
DSP to scale each one separately and then combine them into one sound
for output to the DAC. The processing of the previously created sounds
should happen in as near to realtime as I can make it.
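
To make that concrete, here is roughly the arithmetic I want the DSP
to do, written as plain host-side C. This is just a sketch of the
scale-and-mix operation itself; the buffer length, sample format, and
function names are placeholders of my own, not Sound Kit or
SoundDriver calls.

#include <math.h>
#include <stddef.h>

#define NSOUNDS 3

/* Fill a buffer with an algorithmically generated tone (a sine here). */
static void
make_tone(short *buf, size_t n, double freq, double srate)
{
    size_t i;
    for (i = 0; i < n; i++)
        buf[i] = (short)(32767.0 * sin(2.0 * M_PI * freq * i / srate));
}

/* Scale each source by its own gain, sum them, and clip into the output.
   This inner loop is the work I want pushed onto the DSP. */
static void
scale_and_mix(short *out, short *in[NSOUNDS], const double gain[NSOUNDS],
              size_t n)
{
    size_t i;
    int s;
    for (i = 0; i < n; i++) {
        double acc = 0.0;
        for (s = 0; s < NSOUNDS; s++)
            acc += gain[s] * in[s][i];
        if (acc >  32767.0) acc =  32767.0;   /* clip to 16-bit range */
        if (acc < -32768.0) acc = -32768.0;
        out[i] = (short)acc;
    }
}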

I understand that this capability will be offered as a Class in the
next version of the OS. In the meantime...

I'm not clear on the convention for downloading more than one sound
(each of which is longer than the DSP hardware buffer) to the DSP and then
invoking the appropriate array-processing dspwraps to do the work. How
can this be coordinated in such a way as to preserve as much realtime
performance as possible? What I picture is something like the sketch below.
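
This reuses scale_and_mix() and NSOUNDS from the sketch above. BLOCK is
an assumed slice size, not the real DSP buffer length, and the
scale_and_mix() call just marks the spot where the slices would
actually be downloaded to the DSP and the dspwrap-generated routine
invoked on them.

#define BLOCK 2048              /* assumed per-pass slice, in samples */

static void
process_in_blocks(short *out, short *in[NSOUNDS],
                  const double gain[NSOUNDS], size_t total)
{
    size_t off;
    short *slice[NSOUNDS];

    for (off = 0; off < total; off += BLOCK) {
        size_t n = (total - off < BLOCK) ? total - off : BLOCK;
        int s;

        /* the next slice of each source sound -- this is what would be
           downloaded to the DSP for this pass */
        for (s = 0; s < NSOUNDS; s++)
            slice[s] = in[s] + off;

        /* stand-in for the DSP array-processing call; the mixed slice
           could go to the output side while the host prepares the next
           one, which is where the realtime margin would come from */
        scale_and_mix(out + off, slice, gain, n);
    }
}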

Is it even reasonable to expect that, using the SoundDriver, I could
output these 'massaged' sounds in near realtime?

Thanks for your help.

Peter Judge 	(judge@credit.erin.utoronto.ca)

-- 
===============================================
judge@credit.erin.utoronto.ca	(Peter Judge) |)
judge@gpu.utcs.utoronto.ca 
===============================================