I've probably never thought about this in depth, but yesterday I started wondering:
isn't it better to use audio-rate controls as inputs?
It looks much more convenient, but are there any drawbacks? Any sound quality issues?
Here is an example.
First, \in.ar (an audio-rate control) is used as the input signal by mapping it to an audio bus. There is no need to respect order of execution (the verb synth comes first in the node tree):
s.plotTree
(
~s = SynthDef(\src, { |out=0|
var s = SinOsc.ar(LFNoise0.kr(0.3).range(222,444).lag(0.1)) * Pulse.ar(2);
Out.ar(out, s * 0.3!2)
}).play(args: [\out, 4]);
~v = SynthDef(\verb, { |out=0|
var in = \in.ar(0!2);
8.do { |i|
in = AllpassC.ar(in, delaytime: LFNoise1.kr(0.3!2).range(0.005,0.04))
};
Out.ar(out, LeakDC.ar(in) * 0.5)
}).play(addAction: \addToHead );
)
b = Bus.audio(s, 2);
~s.set(\out, b);
~v.map(\in, b);
In the next version, the source synth writes to bus 4 and the verb synth reads from it using the In UGen, so order of execution has to be respected:
(
~s = SynthDef(\src, { |out=0|
var s = SinOsc.ar(LFNoise0.kr(0.3).range(222,444).lag(0.1)) * Pulse.ar(2);
Out.ar(out, s * 0.3!2)
}).play(args: [\out, 4]);
~v = SynthDef(\verb, { |out=0|
var in = In.ar(\in.kr(4), 2);
8.do { |i|
in = AllpassC.ar(in, delaytime: LFNoise1.kr(0.3!2).range(0.005,0.04))
};
Out.ar(out, LeakDC.ar(in) * 0.5)
}).play(addAction: \addToTail ); // with \addToHead it won't work
)
If the order of execution changes, you may get control-block-sized timing glitches at the input. To me this is not acceptable, so I'll never use NamedControl.ar to try to "cheat" node-ordering requirements.
There are a few big advantages to using audio-rate inputs versus reading audio via In.ar(\bus.ir).
When reading with In.ar, there's no reliable way for the input to be null/silent. You have to specify a bus, and there's no real way to specify a sane, reliable default state. With \input.ar(0), you know your value will be 0 if the input hasn't been mapped.
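A minimal sketch of that default behavior (the names \safeIn, ~fx, ~src are illustrative, not from the examples above):

```supercollider
(
// An unmapped audio-rate control reads as its default (here: 0 on both
// channels), so this synth is safe to run before any source exists.
~fx = SynthDef(\safeIn, { |out = 0|
	var in = \in.ar(0 ! 2);   // silent until mapped
	Out.ar(out, in * 0.5)
}).play;
)

~bus = Bus.audio(s, 2);
~src = { SinOsc.ar(440) ! 2 }.play(outbus: ~bus);
~fx.map(\in, ~bus);   // only now does the fx synth receive signal
```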
Audio-rate controls work as expected when set with constant values, mapped to control-rate buses, OR mapped to audio-rate buses. In other words, if you make an \audioIn.ar(0) control, every possible way you can set that control is valid: synth.set(\audioIn, 0), synth.set(\audioIn, controlBus.asMap), synth.set(\audioIn, audioBus.asMap). A control-rate input responds identically to each of these commands, which means client-side code doesn't change; it's solely up to the SynthDef to determine whether an input should be read at audio or control rate. You can even change your SynthDef (e.g. to add audio-rate modulation) and no client-side code has to change.
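A quick sketch of those three equivalent ways of driving a single audio-rate control (\flex, ~syn, ~kbus, ~abus are hypothetical names):

```supercollider
(
~syn = SynthDef(\flex, { |out = 0|
	var freq = \freq.ar(220);           // audio-rate control
	Out.ar(out, SinOsc.ar(freq) * 0.1 ! 2)
}).play;
)

// all three are valid for the same \freq.ar control:
~syn.set(\freq, 330);                   // 1. constant value

~kbus = Bus.control(s, 1);
~kbus.set(440);
~syn.map(\freq, ~kbus);                 // 2. map to a control-rate bus

~abus = Bus.audio(s, 1);
~mod = { Out.ar(~abus, SinOsc.ar(2).range(300, 500)) }.play;
~syn.map(\freq, ~abus);                 // 3. map to an audio-rate bus
```

Swapping \freq.ar for \freq.kr in the SynthDef would leave every one of these client-side lines unchanged.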
It can be easier to differentiate in code what the intended outcome is. If you see code setting a Synth control to 53, it's not clear without looking at the SynthDef whether this is a bus index or just a numeric value. With mapping, a value like `a53` is an unambiguous indication that the input is coming from audio bus 53. This may not matter depending on how you work, but for me this has saved hours of debugging.
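The map symbols are self-documenting (the actual indices depend on what the bus allocator returns; ~synth is a hypothetical synth with an \in.ar control):

```supercollider
b = Bus.audio(s, 2);
b.asMap;                    // e.g. \a4  — "read from audio bus 4"

c = Bus.control(s, 1);
c.asMap;                    // e.g. \c0  — "read from control bus 0"

~synth.set(\in, b.asMap);   // equivalent to ~synth.map(\in, b)
```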
There's one downside I'm aware of: with audio-rate inputs, you can't easily crossfade when changing inputs. This is a bit of an advanced behavior, but if I want to modulate a synth's input dynamically, it's tricky with audio-rate inputs. For me, the above advantages outweigh this downside: if I want modulated inputs, I just crossfade between audio streams on a bus, and then map that bus to my input.
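That workaround might look roughly like this (a sketch; \xfade, \dest and the variable names are made up for illustration):

```supercollider
(
~mixBus = Bus.audio(s, 2);

// writer: crossfades two sources onto a dedicated bus
~xfade = SynthDef(\xfade, { |out = 0, fade = -1|
	var a = SinOsc.ar(220 ! 2);
	var b = PinkNoise.ar(0.5 ! 2);
	Out.ar(out, XFade2.ar(a, b, fade.lag(0.5)))
}).play(args: [\out, ~mixBus]);

// reader: its audio-rate input just sees whatever the bus carries
~dest = SynthDef(\dest, { |out = 0|
	Out.ar(out, \in.ar(0 ! 2) * 0.2)
}).play(addAction: \addToTail);

~dest.map(\in, ~mixBus);
)

~xfade.set(\fade, 1);   // smoothly crossfade from sine to noise
```

The destination synth never changes its mapping; all the fading happens upstream on the bus.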
James is right re: node-ordering problems: it's a bit of a "fake" fix, since you can still have problems if you're re-arranging nodes, and these can be quite subtle. BUT: if you have a relatively simple set of synths, aren't re-ordering them once they're created, and don't care about a tiny bit of extra latency, I'd say this is still a valid way to avoid doing any node ordering at all.
(
s.waitForBoot {
var group = Group.new;
var bus = Bus.audio(s, 1);
var src = {
SinOsc.ar(300);
}.play(group, bus);
var thru = {
(NamedControl.ar(\in, 0) * 0.1).dup
}.play(group, 0, args: [in: bus.asMap]);
var thread = Routine {
loop {
1.0.wait;
thru.moveToTail(group);
1.0.wait;
thru.moveToHead(group);
}
}.play;
var stopFunc = {
bus.free;
CmdPeriod.remove(stopFunc);
};
CmdPeriod.add(stopFunc);
};
)
… where In.ar would alternate between audio and silence (which is no better). The main point is that it's an exaggeration to say you don't have to pay attention to node order at all.
Thank you for the example, James.
Sorry for the late response; today I returned to this subject, and I suspect that Ndef monitoring also uses audio-rate controls as inputs. I see that at times the monitoring nodes' group comes before the source synth.