This is a PSA about a SuperCollider gotcha that is sadly ignored in most SC tutorials (or at least ones I’ve seen).
Properly using OSC scheduling is absolutely critical if you’re working with Routines. Patterns ostensibly take care of this automatically for you, but if you start doing anything with graphics, connecting to external software, etc., then you still have to understand OSC scheduling to deal with potential synchronization issues. So really, every SC user should know this stuff.
Example 1
(
var s;
s = Server.default;
Routine({
    SynthDef(\ping, { Out.ar(\out.kr(0), (SinOsc.ar(440) * -5.dbamp * Env.perc(0.001, 0.1).ar(Done.freeSelf)) ! 2) }).add;
    s.sync;
    loop {
        Synth(\ping); // untimed: the server plays it whenever the message happens to arrive
        0.05.wait;
    };
}).play;
)
(
var s;
s = Server.default;
Routine({
    SynthDef(\ping, { Out.ar(\out.kr(0), (SinOsc.ar(440) * -5.dbamp * Env.perc(0.001, 0.1).ar(Done.freeSelf)) ! 2) }).add;
    s.sync;
    loop {
        s.bind { Synth(\ping); }; // bundled with a time tag s.latency seconds ahead
        0.05.wait;
    };
}).play;
)
The first one sounds jittery and uneven, but the second one sounds nice and regular.
Example 2
(
var s;
s = Server.default;
Routine({
    var synth;
    SynthDef(\ping2, { Out.ar(\out.kr(0), (SinOsc.ar(440) * -5.dbamp * Env.perc(0.001, 0.1).ar(Done.none, \trigger.tr)) ! 2) }).add;
    s.sync;
    synth = Synth(\ping2);
    loop {
        synth.set(\trigger, 1); // untimed /n_set: retriggers arrive unevenly
        0.05.wait;
    };
}).play;
)
(
var s;
s = Server.default;
Routine({
    var synth;
    SynthDef(\ping2, { Out.ar(\out.kr(0), (SinOsc.ar(440) * -5.dbamp * Env.perc(0.001, 0.1).ar(Done.none, \trigger.tr)) ! 2) }).add;
    s.sync;
    s.bind { synth = Synth(\ping2); };
    loop {
        s.bind { synth.set(\trigger, 1); }; // time-tagged /n_set: even retriggers
        0.05.wait;
    };
}).play;
)
Pretty much the same as Example 1, but showing that s.bind { ... } is necessary for .set messages too. Again, the first example is jittery, the second one nice and even.
Example 3
(
var s;
s = Server.default;
Routine({
    SynthDef(\ping, { Out.ar(\out.kr(0), (SinOsc.ar(\freq.kr(440)) * -5.dbamp * Env.perc(0.001, 0.1).ar(Done.freeSelf)) ! 2) }).add;
    s.sync;
    Synth(\ping); // untimed: sounds early relative to the two below
    Pbind(\instrument, \ping, \freq, Pseq([660], 1)).play; // scheduled s.latency ahead
    (instrument: \ping, freq: 880).play; // also scheduled s.latency ahead
}).play;
)
(
var s;
s = Server.default;
Routine({
    SynthDef(\ping, { Out.ar(\out.kr(0), (SinOsc.ar(\freq.kr(440)) * -5.dbamp * Env.perc(0.001, 0.1).ar(Done.freeSelf)) ! 2) }).add;
    s.sync;
    s.bind { Synth(\ping); }; // now all three are scheduled s.latency ahead
    Pbind(\instrument, \ping, \freq, Pseq([660], 1)).play;
    (instrument: \ping, freq: 880).play;
}).play;
)
Both versions attempt to play a Synth, a Pattern, and an Event at the same time. In the first, the Synth arrives early; in the second, all three are on time.
Why?
The client and server communicate by OSC. OSC messages, when in bundles, can be optionally adorned with a “time tag” that indicates the exact time when the message should be executed. If no time tag is specified, or the message is not in a bundle, the receiver must execute the OSC message as soon as it is received. A common use for time tags is to send OSC messages in advance so their timing can be accurate instead of at the mercy of any inherent latency in OSC communication.
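To make this concrete, here’s a sketch at the raw OSC level (it assumes the \ping SynthDef from Example 1 is loaded):

s.sendMsg("/s_new", \ping, s.nextNodeID, 0, 1); // bare message: executed on arrival
s.sendBundle(0.2, ["/s_new", \ping, s.nextNodeID, 0, 1]); // bundle with a time tag 0.2 seconds ahead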
An unadorned Synth.new sends an /s_new message with no time tag, and so the server executes the OSC message whenever it’s received.
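You can watch this for yourself with dumpOSC, which asks scsynth to post every message it receives (a sketch, again assuming a booted server and the \ping SynthDef):

s.dumpOSC(1); // scsynth posts incoming messages; bundles are shown with their time tags
Synth(\ping); // posts as a bare /s_new
s.bind { Synth(\ping) }; // posts as a bundle stamped s.latency seconds ahead
s.dumpOSC(0); // turn posting back off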
s.bind { ... } is shorthand for s.makeBundle(s.latency, { ... }). .makeBundle causes the Server object to temporarily change the behavior of sendMsg so that any OSC messages sent inside the function are instead added to a bundle. The function is executed immediately, and after it returns, the bundled OSC messages are scheduled s.latency seconds ahead. You can change s.latency if you want; the default of 0.2 seconds is rather high. (s.latency is commonly misunderstood to be related to audio latency. It isn’t. In fact, the only place it is used is in OSC scheduling, and scsynth isn’t even aware of it. Maybe it should have been called s.oscLatency?)
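Spelling out the equivalence as a sketch (assuming \ping is loaded):

(
s.bind { Synth(\ping) };                   // this line...
s.makeBundle(s.latency, { Synth(\ping) }); // ...does the same thing as this one
)
s.latency = 0.05; // snappier scheduling, at the risk of bundles arriving late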
It is important to note that s.bind { ... }, despite taking a function argument, is not asynchronous. The function is run immediately, and execution proceeds as soon as it returns. The OSC bundle is also sent immediately, but scsynth sits on it until the scheduled time in the time tag.
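A quick way to observe this (again assuming \ping): both strings post right away, and only the sound itself is deferred by s.latency.

(
s.bind {
    Synth(\ping);
    "inside the function".postln;
};
"after s.bind returned".postln; // posts immediately; nothing blocks
)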
The Patterns system – or more accurately, the default Event type – automatically runs s.makeBundle. You can override this with the \latency key in the default Event type. Try setting it to nil in a pattern, which removes the time tag.
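For instance, a variation on Example 1 along these lines should bring the jitter back (a sketch; \latency, nil is the override described above):

(
Pbind(
    \instrument, \ping,
    \latency, nil, // no time tag: each /s_new is executed on arrival
    \dur, 0.05
).play;
)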
When you shouldn’t use s.bind
There is one case where you shouldn’t use s.bind { ... }: real-time input, such as from a MIDI controller, sensor, or external program. In such cases, it’s preferable to sacrifice timing accuracy for the sake of minimizing latency.
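For instance, a handler along these lines (a sketch assuming the Example 3 version of \ping, which has a freq argument, plus a connected MIDI device) deliberately skips s.bind:

(
MIDIIn.connectAll;
MIDIdef.noteOn(\liveInput, { |vel, num|
    // untimed on purpose: the note sounds as soon as possible,
    // trading timing accuracy for responsiveness
    Synth(\ping, [\freq, num.midicps]);
});
)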
Discussion
The API here is definitely not ideal. My armchair critique is that scheduling a bundle rather than sending an immediate OSC message should have been an option in Synth.new, Synth:set, and any other method that sends OSC. As it stands, the API hides OSC scheduling from the user, which has resulted in a general lack of awareness of the nuances of this feature.
Also, the internal implementation of s.bind { ... } is, uh… something, but I’ll ignore that for now.
The two-process model of SuperCollider is occasionally touted as a benefit, but to my understanding one reason they were separated was because multithreading within a process was not universally and reliably supported on consumer hardware at the turn of the millennium. (I might be wrong though, I’m a Gen Z Fortnite snowflake.) As a result, SuperCollider users, and our tireless developers that we owe everything to, are burdened with many practical issues as a consequence of inter-process communication. The latency-accuracy tradeoff is one of them. Clock drift is another. (Also, on Windows I get OSC messages completely dropped sometimes, especially for rapid music. Maybe Server.default.options.protocol = \tcp would help, but it breaks my server meter.)
As Scott C has eloquently written, pretty much every sufficiently complex real-time audio platform follows some kind of client-server model. But should they be separate processes? Probably not in this decade. (Some have argued that sclang surviving when scsynth crashes is a perk, but I don’t consider any situation where the server crashes to be a benefit.)
I would be interested if someone could explain the exact factors that cause timing nondeterminism in the sending/receiving of OSC messages. I don’t know quite enough about computer architecture, nor the internals of sclang timing, to offer a good explanation for that.
EDIT: fixed incorrect use of the term “pre-emption”