jkeegan@hawk.ulowell.edu (Jeff Keegan) (07/02/90)
I am currently writing a game where simultaneous sounds will be necessary.
I've observed that Beyond Dark Castle and SoundEdit 2.0 (two programs that
I have seen do this) do not call SndDoCommand, and must be either using the
old StartSound calls or the actual sound driver itself. I need to know
basically where to start if I am going to have to use the old sound driver.
How is it done? How can I play two sounds that I have in some raw format
(for example without 'snd '-type headers, like SoundWave resources used to be)?

I THOUGHT about using the new Sound Manager and having my completion routine
cycle through a sound, copying 512 bytes at a time and playing them as sound
waves, and actually wrote the code, but it sounded scratchy and cut up (as I
thought it would). Any ideas?

..Jeff Keegan
jkeegan@hawk.ulowell.edu [129.63.1.2]
-------------------------------------------------------------------------------
| Jeff Keegan                | I clutch the wire fence until my fingers bleed  |
| jkeegan@hawk.ulowell.edu   | A wound that will not heal                      |
|----------------------------| A heart that cannot feel                        |
| This space intentionally   | Hoping that the horror will recede              |
| left blank                 | Hoping that tomorrow we'll all be freed  -RUSH  |
-------------------------------------------------------------------------------
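For reference, the 512-bytes-at-a-time cycling Jeff describes might look
roughly like the sketch below using the post-6.0.2 Sound Manager: queue a
bufferCmd for one chunk, then a callBackCmd so the Sound Manager calls back
for the next chunk. This is an untested illustration; the globals, the chunk
handling, and the single reusable SoundHeader are assumptions, and a real
interrupt-time callback would also need to set up A5 before touching globals.

    #include <Sound.h>

    #define kChunkSize 512L

    static Ptr          gSoundData;   /* raw 8-bit samples, no 'snd ' header */
    static long         gSoundLen;    /* total length in bytes */
    static long         gOffset;      /* bytes queued so far */
    static SoundHeader  gHeader;      /* reused for each chunk */

    /* Queue one chunk as a bufferCmd, then a callBackCmd so the Sound
       Manager asks for the next chunk once this one has finished playing. */
    static void QueueNextChunk(SndChannelPtr chan)
    {
        SndCommand cmd;
        long       len = gSoundLen - gOffset;

        if (len <= 0)
            return;                          /* whole sound has been queued */
        if (len > kChunkSize)
            len = kChunkSize;

        gHeader.samplePtr     = gSoundData + gOffset;
        gHeader.length        = len;
        gHeader.sampleRate    = rate11khz;   /* Fixed-point, roughly 11 kHz */
        gHeader.loopStart     = 0;
        gHeader.loopEnd       = 0;
        gHeader.encode        = stdSH;       /* standard sound header */
        gHeader.baseFrequency = 60;          /* middle C */
        gOffset += len;

        cmd.cmd    = bufferCmd;              /* play this chunk */
        cmd.param1 = 0;
        cmd.param2 = (long) &gHeader;
        SndDoCommand(chan, &cmd, false);

        cmd.cmd    = callBackCmd;            /* then call MyCallBack for more */
        cmd.param1 = 0;
        cmd.param2 = 0;
        SndDoCommand(chan, &cmd, false);
    }

    /* Runs at interrupt time: no Memory Manager calls, and a real version
       must save/restore A5 before touching the globals above. */
    static pascal void MyCallBack(SndChannelPtr chan, SndCommand *cmd)
    {
        QueueNextChunk(chan);
    }

The channel would be opened with SndNewChannel(&chan, sampledSynth, initMono,
MyCallBack) and playback started by calling QueueNextChunk once from the main
loop. One plausible reason the result sounds cut up is the silent gap between
the end of one bufferCmd and the moment the callback gets to queue the next;
nothing overlaps the chunks.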
hawley@adobe.COM (Steve Hawley) (07/02/90)
In article <1032@swan.ulowell.edu> jkeegan@hawk.ulowell.edu (Jeff Keegan) writes:
>I am currently writing a game where simultaneous sounds will be necessary.
>I've observed that Beyond Dark Castle and SoundEdit 2.0 (two programs that
>I have seen do this) do not call SndDoCommand, and must be either using the
>old StartSound commands or the actual sound driver itself. I need to know
>basically where to start if I am going to have to use the old sound driver.
>How is it done? How can I play two sounds that I have in some format
>(for example without 'snd '-type headers, like SoundWave resources used to be)?

Also consider Studio Session and Super Studio Session, which do 6 and 8 voices
simultaneously.

I had hopes of writing the ultimate video game for the Mac (I still do). I
tried to attack this problem, hoping to end up with a general library when I
was done. I came pretty close, but gave up in frustration.

The general idea is to do all the sound generation yourself. There is a
740-byte buffer (if my memory serves) that is used for sound generation.
Each time the video circuitry finishes drawing a scan line, it blanks, and
during that horizontal blanking a byte is fetched from the buffer and thrown
to the dogs of the DA converter. The whole buffer is consumed once per frame,
so it has to be refilled at every vertical retrace interrupt. The only catch
is that not all the bytes in the buffer are used for sound -- half of them
(the odd ones, I believe) are used to control the disk drive speed.

Here's what I did: I wrote a vertical retrace task that would take requests
to start sound on one of 4 channels (with options for what to do when that
channel is already in use) and play it at 11 kHz until completion or
interruption. When it gets woken up by the VBL manager, it loads the sound
buffer, does housekeeping, and requeues itself. This worked like a charm --
but for only 1 voice.

The problem came in handling multiplexing. Whatever technique I tried failed
miserably (i.e., it played, but with lots of unacceptable distortion). Bummer
deal! I tried averaging, I tried OR'ing, I even tried scrubbing, but they all
worked poorly. I knew it wasn't a timing problem, because I tried each
"channel" running with 3 other "null" channels and it was clean (i.e., it
still did the math for multiplexing, but on data that amounted to a no-op).

The real bad news: even if this code DID work, it would probably only work on
a Mac SE or earlier machine because of the strict hardware dependency
(although it did look up the sound buffer base address correctly). And it was
written in inline assembly for the most part.

Good luck...

Steve Hawley
hawley@adobe.com
--
"A blow on the head is...
 ...worth two in the bush."
        -Basil Fawlty
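A skeleton of the kind of VBL task Steve describes might look like the sketch
below, reduced to one voice. It is untested, assumes the 370-word buffer
layout of the 68000-based Macs with the buffer address in the low-memory
global SoundBase (0x0266), and glosses over things a real version cannot
skip: restoring A5 inside the interrupt routine, header-vintage differences
in the VBLTask field types, and doubling up 11 kHz samples to match the
roughly 22 kHz rate at which the buffer is consumed. The Voice bookkeeping
is purely illustrative.

    #include <OSUtils.h>
    #include <Retrace.h>

    #define kSoundBuffer  (*(unsigned char **) 0x0266)  /* low-memory SoundBase */
    #define kBufferWords  370                           /* one sample per word  */

    typedef struct {
        unsigned char *data;      /* 8-bit samples */
        long           length;
        long           pos;
        Boolean        active;
    } Voice;

    static Voice   gVoice;        /* one voice here; Steve ran four */
    static VBLTask gVBL;

    /* Refill the hardware sound buffer once per vertical retrace. */
    static pascal void SoundVBL(void)
    {
        unsigned char *buf = kSoundBuffer;
        short          i;

        for (i = 0; i < kBufferWords; i++) {
            unsigned char s = 0x80;                   /* midpoint = silence */

            if (gVoice.active && gVoice.pos < gVoice.length)
                s = gVoice.data[gVoice.pos++];
            else
                gVoice.active = false;

            buf[i * 2] = s;   /* even bytes carry sound; odd bytes set disk speed */
        }
        gVBL.vblCount = 1;    /* requeue ourselves for the next retrace */
    }

    static void InstallSoundVBL(void)
    {
        gVBL.qType    = vType;
        gVBL.vblAddr  = (ProcPtr) SoundVBL;   /* VBLUPP in later headers */
        gVBL.vblCount = 1;
        gVBL.vblPhase = 0;
        VInstall((QElemPtr) &gVBL);
    }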
wiechman@athos.rutgers.edu (NightMeower) (07/03/90)
If you haven't yet seen MacDTS Sample Code number 23 or 24 (the exact number
escapes me), you might want to take a look. It has a pretty good example that
uses the Sound Manager that has been available since System 6.0.2. I imagine
things may change for 7.0, but this approach should be more compatible with
Apple's future systems. The sample was modified fairly recently, in late
April or early May, so if you get a copy dated January 1990, know that there
is something more recent. It handles sound in multiple channels nicely.

Kevin
--
===========================================================================
Kevin S. Wiechmann                     arpa: wiechman@rutgers.rutgers.edu
This is only a test... for the next sixty seconds...
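The core of that multi-channel Sound Manager approach is small. Something
along these lines (untested; the resource IDs, channel count, and error
handling are made up for illustration, and each channel would be closed with
SndDisposeChannel at quit time):

    #include <Sound.h>
    #include <Resources.h>
    #include <Errors.h>

    #define kNumChannels 2

    static SndChannelPtr gChan[kNumChannels];

    /* Open one channel per simultaneous sound, letting the Sound Manager
       allocate the channel storage itself. */
    static OSErr OpenChannels(void)
    {
        short i;
        OSErr err;

        for (i = 0; i < kNumChannels; i++) {
            gChan[i] = nil;
            err = SndNewChannel(&gChan[i], sampledSynth, initMono, nil);
            if (err != noErr)
                return err;
        }
        return noErr;
    }

    /* Start an 'snd ' resource playing on one channel without blocking. */
    static OSErr PlayOnChannel(short chanIndex, short sndID)
    {
        Handle snd = GetResource('snd ', sndID);

        if (snd == nil)
            return resNotFound;
        return SndPlay(gChan[chanIndex], snd, true);   /* true = asynchronous */
    }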
krisl@hpindwa.HP.COM (Kris Livingston) (07/03/90)
>Here's what I did: I wrote a vertical retrace task that would take requests
>to start sound on one of 4 channels (with options for what to do when that
>channel is being used) and play it at 11 kHz until completion or
>interruption. When it gets woken up by the VBL manager, it loads the sound
>buffer, does housekeeping and requeues itself. This worked like a charm --
>but for only 1 voice. The problem came in handling multiplexing. Whatever
>technique I tried failed miserably (i.e., it played but with lots of
>unacceptable distortion). Bummer deal! I tried averaging, I tried OR'ing,
>I even tried scrubbing, but they all worked poorly. I knew it wasn't a
>timing problem because I tried each "channel" running with 3 other "null"
>channels, and it was clean (i.e., it still did the math for multiplexing,
>but on data that amounted to a no-op).

You're on the right track. When your VBL task grabs a chunk of bytes from
your sound waveform, it should also grab bytes from 3 other waveforms and ADD
THEM TOGETHER. For example, if your current index into any given waveform is
i, then the byte you want to stuff into the sound hardware buffer is
wave1(i)+wave2(i)+wave3(i)+wave4(i).

The trick to avoid the nasty distortion is to pre-divide the values in each
of the waveforms by 4 (the number of voices being combined). The advantage is
that the only processing your VBL task needs to do is addition. The
disadvantage is that the volume (loudness) of the overall sound is reduced.
You could experiment with dividing the waveform data by 2 (instead of 4) and
then checking for overflow (if sum > 255 then sum = 255). I know it's crude,
but I've used it and it works -- just don't plan on using the CPU for much
else while this is going on. :-)

>The real bad news: Even if this code DID work, it would probably only work
>on a Mac SE or earlier machine because of the strict hardware dependency
>(although it did look up the sound buffer base address correctly). And it
>was written in inline assembly for the most part.

You're right, hardware dependency is a real concern here. Apple has made some
hardware-specific patches to the system software so that programs using this
kind of approach might still work on newer Macs (I've tested this stuff on a
Mac IIx -- works fine), but you can bet that when System 7 shows up, nothing
will work except proper Sound Manager programs.

>Steve Hawley
>hawley@adobe.com

Kris Livingston
krisl@hpindwa.hp.com
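In C, the arithmetic Kris describes boils down to something like the sketch
below (illustrative only; in practice the pre-divide would happen once when a
sound is loaded, and the mix would sit inside the VBL task's buffer-filling
loop, most likely in assembly):

    /* Done once, when a sound is loaded: scale it down so that four voices
       can be summed without overflowing a byte. */
    static void PreDivideBy4(unsigned char *wave, long length)
    {
        long i;

        for (i = 0; i < length; i++)
            wave[i] >>= 2;                       /* each sample is now 0..63 */
    }

    /* Done per output sample in the VBL task: pure addition, no overflow. */
    static unsigned char Mix4(unsigned char a, unsigned char b,
                              unsigned char c, unsigned char d)
    {
        return (unsigned char)(a + b + c + d);   /* at most 4 * 63 = 252 */
    }

    /* The louder, cruder variant: waveforms pre-divided by 2 instead of 4,
       so the sum has to be clamped to 255 before it goes to the hardware. */
    static unsigned char Mix4Clamped(unsigned char a, unsigned char b,
                                     unsigned char c, unsigned char d)
    {
        unsigned short sum = (unsigned short)(a + b + c + d);

        return (unsigned char)(sum > 255 ? 255 : sum);
    }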
eta@ic.Berkeley.EDU (Eric T. Anderson) (07/06/90)
In article <36690003@hpindwa.HP.COM> krisl@hpindwa.HP.COM (Kris Livingston) writes:
>... The trick to avoid the nasty distortion is to pre-divide the values in
>each of the waveforms by 4 (the number of voices being combined). The
>advantage to this is that the only processing your VBL task needs to do is
>addition. The disadvantage is that the volume (loudness) of the overall
>sound is reduced. You could experiment with dividing the waveform data by 2
>(instead of 4) and then check for overflow. (If sum > 255 then sum = 255.)
>I know, it's crude, but I've used it and it works--just don't plan on using
>the CPU for much else while this is going on. :-)

Hey -- be careful. You're throwing away bits for no reason if you pre-divide.
How about adding first and then post-dividing? On a Mac, doing byte addition
is just as quick as 16-bit addition (except when you're adding constants).
Right? Why add to your distortion, especially for soft sounds, where the
lower bits account for a lot of the sound level?

Let me point out another possibility you might consider: you want four sounds
to play, so you add them up. Map a log-like function onto the continuum of
1024 possible values the sum can take. Make it linear at low sound levels (so
it approaches a ramp for small values) and slope it off at higher sound
levels as you approach 1024. It seems to me that this might be a good idea,
and it only costs you a 1K-byte table. (Add + lookup = cheap, right?)

Yeah, okay -- I haven't tried this, I'm just speculating.

Best Wishes,
Eric Anderson
eta@ic.berkeley.edu
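One way to build the table Eric suggests is sketched below. It is untested,
and the knee point of 128 and the logarithmic roll-off are arbitrary choices,
picked only so that the curve is a straight ramp at low levels and still
reaches 255 at the top of the range.

    #include <math.h>

    static unsigned char gMixTable[1024];

    /* Build a 1K lookup table mapping the sum of four raw 8-bit samples
       (0..1020) to an 8-bit output: a ramp up to the knee, then a
       logarithmic roll-off that lands on 255 at the top. */
    static void BuildMixTable(void)
    {
        int    s;
        double knee = 128.0;              /* arbitrary knee point */

        for (s = 0; s < 1024; s++) {
            if (s <= (int) knee) {
                gMixTable[s] = (unsigned char) s;
            } else {
                double y = knee + (255.0 - knee) *
                           log((double) s / knee) / log(1023.0 / knee);
                gMixTable[s] = (unsigned char)(y + 0.5);
            }
        }
    }

    /* Per-sample mix: one chain of adds and one table lookup. */
    static unsigned char MixWithTable(unsigned char a, unsigned char b,
                                      unsigned char c, unsigned char d)
    {
        return gMixTable[a + b + c + d];
    }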
jmunkki@hila.hut.fi (Juri Munkki) (07/09/90)
In <25974@pasteur.Berkeley.EDU> eta@ic.Berkeley.EDU.UUCP (Eric T. Anderson) writes:
>Hey -- be careful. You know you're throwing away bits for no reason
>if you pre-divide. How about add first and then post-divide? On a
>Mac, doing byte-addition is just as quick as 16-bit addition (except
>if you're adding constants). Right?

Every cycle counts when you are doing things like this. I found it optimal
to pre-divide. I also used only 11 kHz sound. Doing two 11 kHz sounds with
pre-divide doesn't add all that much overhead, so my animation routines were
not hurt too badly. Two sound channels is infinitely better than just one.

Juri Munkki                    Macintosh Support              jmunkki@hut.fi
Helsinki University of Technology, Computing Centre
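For what it's worth, the two-voice, pre-divided inner loop Juri describes
could look roughly like this (untested, 68000-era hardware only; both
waveforms are assumed to have been divided by 2 when loaded, length checks
are omitted, and each 11 kHz sample is written twice because the hardware
buffer is consumed at roughly 22 kHz):

    #define kBufferWords 370   /* one 8-bit sample in the high byte of each word */

    /* Fill one frame's worth of the hardware sound buffer from two
       pre-divided waveforms: a single add per output sample. */
    static void FillBufferTwoVoices(unsigned char *buf,       /* from SoundBase */
                                    const unsigned char *a, long *aPos,
                                    const unsigned char *b, long *bPos)
    {
        short i;

        for (i = 0; i < kBufferWords; i += 2) {
            unsigned char s = (unsigned char)(a[(*aPos)++] + b[(*bPos)++]);

            buf[i * 2]       = s;   /* even bytes are the sound samples */
            buf[(i + 1) * 2] = s;   /* repeated: 11 kHz source, ~22 kHz buffer */
        }
    }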