Hello everyone,
I’m a bit sceptical of the DC removal implementation in LeakDC. As I understand it, DC removal is nothing but an HPF, with its cutoff preferably placed below the audible range. For that purpose, LeakDC seems to reach far too high, attenuating audible low frequencies. You can run the following code and watch the server meter to see the difference in level between the two audio channels (note: there is of course no DC offset in this example that would need fixing):
(
play { [
    SinOsc.ar(50),
    LeakDC.ar(SinOsc.ar(50))
] * 0.2 }
)
Also, LeakDC’s behaviour is not consistent across sample rates (as the help file mentions).
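As far as I can tell, LeakDC is the textbook one-pole/one-zero DC blocker, y[n] = x[n] - x[n-1] + coef * y[n-1], i.e. H(z) = (1 - z^-1) / (1 - coef * z^-1). Because coef is given directly rather than as a frequency, the response is pinned to the sample rate. A quick Python sanity check of the magnitude response (pure math, nothing SC-specific; my own sketch, assuming that difference equation is what LeakDC runs):

```python
import cmath, math

def leakdc_db(coef, freq, sr):
    """Magnitude in dB of H(z) = (1 - z^-1) / (1 - coef*z^-1) at freq Hz."""
    z = cmath.exp(1j * 2 * math.pi * freq / sr)
    h = (1 - 1 / z) / (1 - coef / z)
    return 20 * math.log10(abs(h))

print(leakdc_db(0.995, 50, 48000))  # ~ -2.0 dB at 50 Hz, SR 48 kHz
print(leakdc_db(0.995, 50, 96000))  # ~ -5.2 dB at 50 Hz, SR 96 kHz
```

Doubling the sample rate with the same coefficient roughly doubles the attenuation in dB at a given low frequency, which matches what I measured.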
I ran it through Plugin Doctor to back up my suspicions and got these results:

Standard coefficient of 0.995:
SR 48 kHz: the filter starts rolling off at 200 Hz, reaching -2 dB at 50 Hz.
SR 96 kHz: the filter starts rolling off at 500 Hz! It already reaches -2 dB at 100 Hz and -5 dB at 50 Hz.

Coefficient of 0.999:
SR 48 kHz: the filter starts rolling off at 50 Hz, reaching -2 dB at 10 Hz.
SR 96 kHz: the filter starts rolling off at 100 Hz, reaching -2 dB at 20 Hz.
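If I derive it correctly, these measurements follow from a simple rule of thumb: for the one-pole blocker y[n] = x[n] - x[n-1] + coef * y[n-1], the -3 dB corner sits at roughly fc ≈ (1 - coef) * SR / (2π) while (1 - coef) is small. A small Python sketch of that approximation (my own derivation, not something from the SC docs):

```python
import math

def leakdc_corner(coef, sr):
    # Approximate -3 dB corner of the one-pole DC blocker,
    # valid while (1 - coef) is small.
    return (1 - coef) * sr / (2 * math.pi)

print(leakdc_corner(0.995, 48000))  # ~38 Hz
print(leakdc_corner(0.995, 96000))  # ~76 Hz
print(leakdc_corner(0.999, 48000))  # ~7.6 Hz
```

So the corner scales linearly with the sample rate, which is exactly the inconsistency above.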
I wonder why one wouldn’t just use a standard HPF at 10 Hz (or so) instead. It is consistent across sample rates, only starts to roll off below 30 Hz, and reaches only -0.5 dB at 20 Hz. So this actually looks much better in the server meter:
(
play { [
    SinOsc.ar(50),
    HPF.ar(SinOsc.ar(50), 10)
] * 0.2 }
)
So my question is: is there any reason why the LeakDC implementation would be preferable to such a standard Butterworth HPF?
One difference I can see: the HPF still shows an initial offset spike before settling around zero, while LeakDC does not:
{ LeakDC.ar(DC.ar(0.5)) }.plot(0.5);
{ HPF.ar(DC.ar(0.5), 10) }.plot(0.5);
{ LeakDC.ar(SinOsc.ar + DC.ar(0.5)) }.plot(0.02);
{ HPF.ar(SinOsc.ar + DC.ar(0.5), 10) }.plot(0.02);
So this could be a reason, though I don’t know enough about filters to understand why it happens. Any enlightenment would be appreciated.
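My best guess (and it is a guess): LeakDC primes its one-sample input history with the first input sample, so a constant input is cancelled from the very first output, whereas the biquad behind HPF starts with zeroed state and lets the step through until it settles. Here is a quick offline Python sketch of that idea; the priming of LeakDC is an assumption on my part, and the RBJ-cookbook highpass biquad is only standing in for whatever HPF actually computes:

```python
import math

def step_responses(n=2000, sr=48000, step=0.5, coef=0.995, f0=10.0):
    # One-pole DC blocker, input history primed to the first sample
    # (assumption: this is how LeakDC initialises itself).
    leak, x1, y1 = [], step, 0.0
    for _ in range(n):
        y = step - x1 + coef * y1
        leak.append(y)
        x1, y1 = step, y

    # RBJ-cookbook highpass biquad, Q = 1/sqrt(2) (Butterworth),
    # state starting at zero, standing in for SC's HPF.
    w0 = 2 * math.pi * f0 / sr
    alpha = math.sin(w0) / (2 * (1 / math.sqrt(2)))
    a0 = 1 + alpha
    b0 = (1 + math.cos(w0)) / 2 / a0
    b1 = -(1 + math.cos(w0)) / a0
    b2 = b0
    a1 = -2 * math.cos(w0) / a0
    a2 = (1 - alpha) / a0
    hp, x1, x2, y1, y2 = [], 0.0, 0.0, 0.0, 0.0
    for _ in range(n):
        y = b0 * step + b1 * x1 + b2 * x2 - a1 * y1 - a2 * y2
        hp.append(y)
        x2, x1 = x1, step
        y2, y1 = y1, y
    return leak, hp

leak, hp = step_responses()
print(max(abs(v) for v in leak))  # 0.0: the primed one-pole never outputs the step
print(hp[0])                      # ~0.5: the zero-state biquad passes the step at first
```

If the priming story is right, it would explain the plots: the one-pole design can be initialised so a constant input never appears at the output, while the second-order filter needs time for its internal state to catch up.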