Hello everyone!
I’m writing software to mix my band when we play live.
The concept is simple: every instrument is routed into its own SynthDef, which lets me apply the relevant processing (musical effects such as distortion/delay, and mixing effects such as high-pass filtering/compression).
Then every SynthDef is routed to a ‘master’ SynthDef that handles the overall mix: ducking, compression, limiting, etc. The result goes to the speakers.
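To make the routing concrete, here is a stripped-down sketch of the structure (the def names, bus and effect settings are placeholders, not my actual code):

```supercollider
(
// one audio bus feeding a single master synth
~mixBus = Bus.audio(s, 2);

SynthDef(\violin, { |out = 0|
    var sig = SoundIn.ar(0);                               // violin mic on hardware input 0
    sig = HPF.ar(sig, 80);                                  // per-instrument mixing effects
    sig = Compander.ar(sig, sig, 0.5, 1, 0.5, 0.01, 0.1);   // gentle compression
    Out.ar(out, sig ! 2);
}).add;

SynthDef(\master, { |in, out = 0|
    var sig = In.ar(in, 2);
    sig = Limiter.ar(sig, 0.95);                            // overall limiting lives here
    Out.ar(out, sig);                                       // to the speakers
}).add;
)

// once the defs are on the server: instruments write to the bus, the master reads it
(
~master = Synth(\master, [\in, ~mixBus]);
~violin = Synth.before(~master, \violin, [\out, ~mixBus]);
)
```

In the real setup there is one such SynthDef per instrument, all writing to the master bus.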
Now the problem: we have both a violin (with a wooden body) and a microphone (an SM58), so there is a real risk of acoustic feedback (the Larsen effect) when playing live. Even worse, there are resonators (feedback loops) on the microphone channel.
I’ve naïvely added BRF (band-reject) filters to their SynthDefs, but I’m not sure those are enough, or even the best approach to this problem.
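For reference, this is roughly what that looks like (the notch frequencies and rq values below are made up; in practice I tuned them by ear during soundcheck):

```supercollider
SynthDef(\micNotched, { |out = 0|
    var sig = SoundIn.ar(1);          // SM58 on hardware input 1 (placeholder)
    // static notches at frequencies that seemed to ring
    sig = BRF.ar(sig, 250, 0.3);
    sig = BRF.ar(sig, 630, 0.3);
    sig = BRF.ar(sig, 1600, 0.3);
    Out.ar(out, sig ! 2);
}).add;
```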
I assumed FFT processing could help with this, but it would introduce latency, which I’d like to avoid, and every time I try to manipulate bins to correct the sound, I end up with ugly noise artifacts.
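To give an idea of the kind of bin manipulation I mean, here is a crude sketch along the lines of what I tried (the threshold and window size are arbitrary), which is where both the artifacts and the extra latency come from:

```supercollider
SynthDef(\fftGate, { |out = 0|
    var sig = SoundIn.ar(1);
    var chain = FFT(LocalBuf(2048), sig);  // the FFT window itself adds latency (~2048 samples here)
    chain = PV_MagBelow(chain, 10);        // crudely drop bins above a magnitude threshold
    Out.ar(out, IFFT(chain) ! 2);
}).add;
```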
I also had the intuition that a two-stage comparison would be the best way to ‘detect’ and correct feedback within a SynthDef, but I have no idea how this could be done.
I suppose some of you know more about this than I do, and since the problem seems far from trivial, I preferred to ask first.
Thank you!
Simon