Given an array (of changing length) of frequencies and amplitudes, can I generate a single audio buffer, sample by sample, that contains all of the tones in the array? If not, what is the best way to generate multiple tones in a single audio unit? Should each note render its own buffer, with those then summed into an output buffer? Wouldn't that be equivalent to doing it all at once?
I'm working on an iOS app that generates notes from touches. I've considered using STK, but I don't want to have to send note-off messages; I'd rather just generate sinusoidal tones for the notes I'm currently holding in an array. Each note actually needs to produce two sinusoids, with varying frequency and amplitude. One note may be playing the same frequency as a different note, so a note-off message at that frequency could cause problems. Eventually I want to manage amplitude (ADSR) envelopes for each note outside the audio unit. I also want response time to be as fast as possible, so I'm willing to do some extra work/learning to keep the audio code as low-level as I can.
I've been working from single-tone sine-wave generator examples. I tried essentially doubling one of these, something like:
Buffer[frame] = (sin(theta1) + sin(theta2))/2
incrementing theta1/theta2 by frequency1/frequency2 over the sample rate (I realize calling sin() per sample is not the most efficient approach), but I get aliasing artifacts. I have yet to find an example with multiple frequencies, or with any data source other than reading audio from a file.
Any suggestions/examples? I originally gave each note its own audio unit, but that added too much latency between touch and sound (and seems inefficient, too). I am newer to this level of programming than I am to digital audio in general, so please be gentle if I'm missing something obvious.