I want to build a multi-speaker array for ambisonic experiments (ATK).
But I'm a little confused. Will the result of the ATK calculation only be valid for a listener position exactly in the middle of the speaker array?
What if I am not in the middle? Can I correct this with speaker delays?
Does the ATK only calculate sound directions, so that the speaker array generates the correct sound direction only? And is this position independent?
Yes, with ambisonics you will have to think in terms of a sweet spot. The size of that sweet spot is determined by the ambisonic order (and the decoding method; e.g. AllRAD incorporates elements of VBAP and virtual speakers to make it more forgiving). You can find the perceived size of the sweet spot in Zotter and Frank's Ambisonics book, but it's easier to just experiment.
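A rough rule of thumb (my own illustration, not from the book) is the standard kr ≤ N argument: order-N reconstruction holds roughly out to the radius where the wavenumber-radius product stays below the order. A quick Python sketch:

```python
import math

def sweet_spot_radius(order, freq_hz, c=343.0):
    """Rule-of-thumb radius (metres) of accurate reconstruction for a
    given ambisonic order at a given frequency, from kr <= N,
    i.e. r = N * c / (2 * pi * f)."""
    return order * c / (2.0 * math.pi * freq_hz)

# First order at 1 kHz gives only ~5.5 cm, which is why higher
# orders matter for a usable sweet spot.
```

Note the frequency dependence: at low frequencies the spot is much larger, so localisation degrades from the top of the spectrum down as you move off-centre.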
And yes, AFAIK you can set up your decoder in a way that allows you to sit off the middle, if the decoding method has a built-in distance compensator, or if you follow it with something like IEM's DistanceCompensator plugin.
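The core of that compensation is a per-speaker delay and gain derived from the listener-to-speaker distances. Here is a minimal Python sketch of the idea (my own illustration, not the ATK or IEM implementation; the function name and the 1/r gain law are assumptions):

```python
import numpy as np

C = 343.0  # speed of sound in m/s

def distance_compensation(speaker_positions, listener_position):
    """Per-speaker delay (s) and gain compensating for a listener who
    is not equidistant from all speakers. Delays align arrival times
    at the listener; gains follow a simple 1/r amplitude law."""
    pos = np.asarray(speaker_positions, dtype=float)
    lis = np.asarray(listener_position, dtype=float)
    radii = np.linalg.norm(pos - lis, axis=1)  # listener-to-speaker distances
    r_max = radii.max()
    delays = (r_max - radii) / C               # delay the nearer speakers
    gains = radii / r_max                      # attenuate the nearer speakers
    return delays, gains
```

For a listener equidistant from every speaker this returns zero delays and unit gains, i.e. no correction is needed at the sweet spot.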
Can I adjust this middle-compensation value in real time, without audible artifacts?
I would like to be able to walk around and use tracking to compensate for the listener position.
Just seeing this now… offering a few comments:
Yes, it is possible to update a decoder for realtime listener tracking, but this isn't super trivial when it comes to implementation. (The theory side is easy.) As for artifacts, I expect these would be audible.
Here’s what you’d need to do:
1. calculate / update a new decoder matrix to accommodate the changed perspective / array layout
2. calculate / update new delay lengths to accommodate the changed loudspeaker delays
3. calculate / update new nearfield compensation (NFC) to accommodate the changed loudspeaker radii
Item 2 has to do with time difference; item 3 has to do with waveform curvature.
Item 2 will likely result in audible flanging artifacts, particularly as tracking and updating delay times will likely lag the measurement perceptibly.
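One common way to soften the matrix update in step 1 is to crossfade from the old decoder matrix to the new one over a short ramp rather than switching hard; the delay lines in step 2 need analogous smoothing (interpolated / fractional delays), which is exactly where the flanging risk lives. A minimal sketch of the matrix crossfade (my own illustration, not ATK code):

```python
import numpy as np

def crossfade_update(old_matrix, new_matrix, n_frames):
    """Yield per-frame decoder matrices that linearly crossfade from
    old_matrix to new_matrix over n_frames, avoiding a hard switch
    when the tracked listener position changes."""
    for i in range(1, n_frames + 1):
        t = i / n_frames
        yield (1.0 - t) * old_matrix + t * new_matrix
```

In practice the ramp would run per audio block, and a new tracking measurement arriving mid-ramp would restart the fade from the current interpolated matrix.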