I would like to combine SuperCollider with Resonance Audio (the HRTF-based 3D audio SDK from Google).
I would like to generate hundreds of sounds in SuperCollider and send them, together with 3D position information, to Resonance, which would then render the two-channel binaural audio for headphones (using HRTFs).
How could I realize this?
Should I, for instance, write a separate application using the Resonance SDK that SuperCollider connects to? Or should I extend SuperCollider's functionality?
What would be the best approach? Any ideas?
If what you want is just many audio sources rendered through HRTFs, and you don't care about using the Resonance library specifically, the ATK (Ambisonic Toolkit) SuperCollider package can do everything Resonance can and quite a bit more. It might be an easier starting point than trying to send hundreds of channels of audio between applications.
Do you think ATK can handle hundreds of channels? Resonance first maps sources onto a number of surrounding virtual speakers and then applies the HRTFs, which is what lets it handle lots of channels. Can I do that with ATK too?
Yes, ATK can indeed do this. You encode your sources into an ambisonic (spherical) soundfield signal and then decode it using HRTF kernels (or speaker arrays, or whatever). There are many tools for placing individual signals in the ambisonic field and for transforming the field as well. It's quite deep…
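To make this concrete, here is a minimal sketch of that encode-sum-decode pattern. It assumes the ATK quark is installed along with its kernel files; the source count, signals, and the choice of the CIPIC HRTF database are illustrative, not a recommendation. The key point is that however many sources you encode, the field is summed into a single first-order B-format signal and only one HRTF decode runs at the end.

```
// Sketch: encode several mono sources into one first-order ambisonic
// (B-format) field, then decode the summed field binaurally.
// Assumes the ATK quark and its decoder kernels are installed.
(
s.waitForBoot {
    // HRTF decoder kernel (CIPIC database, chosen here for illustration)
    ~decoder = FoaDecoderKernel.newCIPIC;
    s.sync;
    {
        var n = 8, foa;
        // sum n encoded sources into one 4-channel B-format signal
        foa = Mix.fill(n, { |i|
            var src  = PinkNoise.ar(0.05);
            var azim = 2pi * i / n;  // spread sources around the listener
            // encode each source as a planewave arriving from azim
            FoaEncode.ar(src, FoaEncoderMatrix.newDirection(azim, 0));
        });
        // a single HRTF decode of the whole field -> stereo for headphones
        FoaDecode.ar(foa, ~decoder);
    }.play;
};
)
```

Because the per-source cost is only the cheap matrix encode, adding more sources scales gently; the expensive convolution-based HRTF decode happens once regardless of how many sources feed the field.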