Why you should always wrap Synth(...) and Synth:set in Server.default.bind { ... }

This is a PSA about a SuperCollider gotcha that is sadly ignored in most SC tutorials (or at least ones I’ve seen).

Properly using OSC scheduling is absolutely critical if you’re working with Routines. Patterns mostly take care of this automatically for you, but if you start doing anything with graphics, connecting to external software, etc., then you still have to understand OSC scheduling to deal with potential synchronization issues. So really, every SC user should know this stuff.

Example 1

(
var s;
s = Server.default;
Routine({
	SynthDef(\ping, { Out.ar(\out.kr(0), (SinOsc.ar(440) * -5.dbamp * Env.perc(0.001, 0.1).ar(Done.freeSelf)) ! 2) }).add;
	s.sync;
	loop {
		Synth(\ping);
		0.05.wait;
	};
}).play;
)

(
var s;
s = Server.default;
Routine({
	SynthDef(\ping, { Out.ar(\out.kr(0), (SinOsc.ar(440) * -5.dbamp * Env.perc(0.001, 0.1).ar(Done.freeSelf)) ! 2) }).add;
	s.sync;
	loop {
		s.bind { Synth(\ping); };
		0.05.wait;
	};
}).play;
)

The first one sounds jittery and uneven, but the second one sounds nice and regular.

Example 2

(
var s;
s = Server.default;
Routine({
	var synth;
	SynthDef(\ping2, { Out.ar(\out.kr(0), (SinOsc.ar(440) * -5.dbamp * Env.perc(0.001, 0.1).ar(Done.none, \trigger.tr)) ! 2) }).add;
	s.sync;
	synth = Synth(\ping2);
	loop {
		synth.set(\trigger, 1);
		0.05.wait;
	};
}).play;
)

(
var s;
s = Server.default;
Routine({
	var synth;
	SynthDef(\ping2, { Out.ar(\out.kr(0), (SinOsc.ar(440) * -5.dbamp * Env.perc(0.001, 0.1).ar(Done.none, \trigger.tr)) ! 2) }).add;
	s.sync;
	s.bind { synth = Synth(\ping2); };
	loop {
		s.bind { synth.set(\trigger, 1); };
		0.05.wait;
	};
}).play;
)

Pretty much the same as Example 1, but showing that s.bind { ... } is necessary for .set messages too. Again, the first example is jittery, the second one nice and even.

Example 3

(
var s;
s = Server.default;
Routine({
	SynthDef(\ping, { Out.ar(\out.kr(0), (SinOsc.ar(\freq.kr(440)) * -5.dbamp * Env.perc(0.001, 0.1).ar(Done.freeSelf)) ! 2) }).add;
	s.sync;
	Synth(\ping);
	Pbind(\instrument, \ping, \freq, Pseq([660], 1)).play;
	(instrument: \ping, freq: 880).play;
}).play;
)

(
var s;
s = Server.default;
Routine({
	SynthDef(\ping, { Out.ar(\out.kr(0), (SinOsc.ar(\freq.kr(440)) * -5.dbamp * Env.perc(0.001, 0.1).ar(Done.freeSelf)) ! 2) }).add;
	s.sync;
	s.bind { Synth(\ping); };
	Pbind(\instrument, \ping, \freq, Pseq([660], 1)).play;
	(instrument: \ping, freq: 880).play;
}).play;
)

The first example attempts to play a Synth, a Pattern, and an Event at the same time. The Synth arrives early in the first example, while all are on time in the second example.

Why?

The client and server communicate by OSC. OSC messages, when wrapped in bundles, can optionally carry a “time tag” that indicates the exact time when the message should be executed. If no time tag is specified, or the message is not in a bundle, the receiver must execute the OSC message as soon as it is received. A common use for time tags is to send OSC messages in advance so their timing can be accurate, instead of at the mercy of any inherent latency in OSC communication.

An unadorned Synth.new sends an /s_new message with no time tag, and so the server executes the OSC message whenever it’s received.
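
To see the difference at the raw OSC level, here is a sketch (assuming the \ping SynthDef from the examples above):

// no time tag: the server executes this whenever it happens to arrive
s.sendMsg("/s_new", "ping", s.nextNodeID, 0, 1);

// the same message wrapped in a bundle time-tagged s.latency seconds ahead
s.sendBundle(s.latency, ["/s_new", "ping", s.nextNodeID, 0, 1]);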

s.bind { ... } is shorthand for s.makeBundle(s.latency, { ... }). .makeBundle causes the Server object to temporarily change the behavior of sendMsg so that attempts to send new OSC messages instead add those OSC messages to a bundle. The function is immediately executed, and after it is completed, the OSC messages are scheduled s.latency seconds ahead. You can change s.latency if you want; the default of 0.2 is rather high. (s.latency is commonly misunderstood to be related to audio latency. It isn’t. In fact, the only place it is used is in OSC scheduling, and scsynth isn’t even aware of it. Maybe it should have been called s.oscLatency?)
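
In code, these two lines do exactly the same thing (a sketch, reusing the \ping SynthDef):

s.bind { Synth(\ping) };
s.makeBundle(s.latency, { Synth(\ping) });

// and you can trade scheduling headroom for responsiveness, e.g.:
s.latency = 0.05;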

It is important to note that s.bind { ... }, despite having a callback function, is not asynchronous. The function is run immediately, and execution proceeds when the function returns. The OSC bundle is also sent immediately, but scsynth sits on it until the scheduled time in the time tag.
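
You can verify this with a quick sketch:

s.bind { "inside".postln; Synth(\ping) };
"after".postln; // prints immediately; nothing here waits for s.latency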

The Patterns system – or more accurately, the default Event type – automatically runs s.makeBundle. You can override this with the \latency key in the default Event type. Try setting it to nil in a pattern, which removes the time tag.
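
For example, this removes the time tag from a pattern’s messages, and the jitter from Example 1 comes back:

Pbind(\instrument, \ping, \dur, 0.05, \latency, nil).play;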

When you shouldn’t use s.bind

There is one case where you shouldn’t use s.bind { ... }: real-time input, such as from a MIDI controller, sensor, or external program. In such cases, it’s preferable to sacrifice timing accuracy for the sake of minimizing latency.
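
A minimal sketch, assuming the \ping SynthDef from earlier:

MIDIIn.connectAll;
MIDIdef.noteOn(\pingKey, { |vel, num|
	Synth(\ping); // sent immediately: slight jitter, but minimal latency
});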

Discussion

The API here is definitely not ideal. My armchair critique is that the option to schedule a bundle rather than send an immediate OSC message should have been an option in Synth.new, Synth:set, and every other method that sends OSC. The API hides OSC scheduling from the user, which has resulted in a general lack of awareness of the nuances of this feature.

Also the internal implementation of s.bind { ... } is, uh… something, but I’ll ignore that for now.

The two-process model of SuperCollider is occasionally touted as a benefit, but to my understanding one reason they were separated was because multithreading within a process was not universally and reliably supported on consumer hardware at the turn of the millennium. (I might be wrong though, I’m a Gen Z Fortnite snowflake.) As a result, SuperCollider users, and our tireless developers that we owe everything to, are burdened with many practical issues as a consequence of inter-process communication. The latency-accuracy tradeoff is one of them. Clock drift is another. (Also, on Windows I get OSC messages completely dropped sometimes, especially for rapid music. Maybe Server.default.options.protocol = \tcp would help, but it breaks my server meter.)

As Scott C has eloquently written, pretty much every sufficiently complex real-time audio platform follows some kind of client-server model. But should they be separate processes? Probably not in this decade. (Some have argued that sclang surviving when scsynth crashes is a perk, but I don’t consider any situation where the server crashes to be a benefit.)

I would be interested if someone could explain the exact factors that cause timing nondeterminism in the sending/receiving of OSC messages. I don’t know quite enough about computer architecture, nor the internals of sclang timing, to offer a good explanation for that.


Excellent!

Are there limits to what can be placed in s.bind? You mentioned MIDI, but how about SynthDef(...).play?

This implies that in the Creco question, where he uses 6 different Routines with wait and set, the timing will be out of sync? And does this also mean that when using multiple Routines, one should always use functions that put the timed messages into an OSC bundle?

Wow. This explains a lot. Thank you for this post. It seems this should be the default rather than needing the extra code.

As for the separate processes question, my selfish example is that my own software runs on multiple servers. So something you can do in SC that I don’t think you can do in any other setup is have one language and multiple audio servers. Multicore servers could solve this. I’m not sure how DAWs do it, but they seem to pull it off.

So something you can do in SC that I don’t think you can do in any other setup is have one language and multiple audio servers. Multicore servers could solve this. I’m not sure how DAWs do it, but they seem to pull it off.

Even in a multi-server setup there is no technical reason why each Server instance would need a separate process. It is perfectly possible to imagine a client application that manages several Server instances in the same process. In fact, you can already do that with libscsynth.

So something you can do in SC that I don’t think you can do in any other setup is have one language and multiple audio servers.

IMO, multi-server setups are just a clunky workaround for the lack of proper multi-threading support.

Multicore servers could solve this.

You mean, like, Supernova? :slight_smile:

Hi Nathan,

this is a very interesting topic. Here’s my two cents.

My armchair critique is that the option to schedule a bundle rather than send an immediate OSC message should have been options in Synth.new, Synth:set, and any other method that sends OSC.

I think this would clutter the interface. To be honest, I find the s.bind solution rather elegant. I always assumed it was well known, but apparently it isn’t…

Another approach could have been to enable/disable scheduling on a per-“thread” basis.

The two-process model of SuperCollider is occasionally touted as a benefit, but to my understanding one reason they were separated was because multithreading within a process was not universally and reliably supported on consumer hardware at the turn of the millennium.

I doubt that. scsynth has always used multithreading (network thread, NRT thread, audio callback). I rather think the idea was to separate the two components as much as possible – the total opposite of SuperCollider 2. I also think that multi-client setups and networking have been an important part of the design.

Note there is also an “internal” Server that runs directly in the sclang process. I think it was the default at one point in time, but I may be wrong.

As a result, SuperCollider users are burdened with many practical issues as a consequence of inter-process communication. The latency-accuracy tradeoff is one of them.

Even if the Server ran in the same process, you would still need to schedule OSC bundles with latency. (I will go more into details at the end of this post.)

This assumes that language and audio processing run in separate threads, i.e. they are not tightly synchronized. While this is always true for SuperCollider, it is not necessarily true for all platforms. A prominent example is Pd: the message system and DSP run in the same thread. Moreover, Pd offers two different schedulers:

  • “polling scheduler”: messaging + DSP runs in a dedicated thread; the audio callback just reads/writes audio samples to/from a lockfree FIFO
  • “callback scheduler”: messaging + DSP runs directly in the audio callback

Clock drift is another.

Very true. But note that sclang could follow the audio clock – even if it runs in a separate process. For example, the server may ask the sclang scheduler to advance by sending a message or posting to a process-shared semaphore. Note that this does not mean that sclang would be in sync with the audio clock; it would still run independently, but always trying to catch up. (The idea is very similar to Pd’s polling scheduler.)

(Also, on Windows I get OSC messages completely dropped sometimes, especially for rapid music. Maybe Server.default.options.protocol = \tcp would help, but it breaks my server meter.)

Yep, UDP packet loss because Windows uses a ridiculously small socket receive buffer by default. I should really fix this, but I always forget… Here’s a reminder to myself: Increase UDP socket receive buffer size · Issue #5993 · supercollider/supercollider · GitHub

pretty much every sufficiently complex real-time audio platform follows some kind of client-server model. But should they be separate processes? Probably not in this decade.

IMO, server-client is very specific to SuperCollider. I would rather think in terms of:

  1. UI
  2. interpreter/language/scheduler
  3. audio engine.

In typical audio software, all three live in the same process, but there are outliers. Pd, for example, uses a dedicated process for the GUI.

VST3 plugins are an interesting case: they suggest a clean separation between UI and audio processing to enforce thread safety, but as a consequence the two components may also run in separate processes, or even on different machines (= remote FX processing). Some plugins even run the audio processing on dedicated hardware.

(Some have argued that sclang surviving when scsynth crashes is a perk, but I don’t consider any situation where the server crashes to be a benefit.)

Yeah, I never bought this argument. If the Server crashes, I usually have to restart my project anyway. On the other hand, it totally makes sense to have the IDE in a separate process, as we don’t want to lose our (unsaved) code on a language/server crash.

I would be interested if someone could explain the exact factors that cause timing nondeterminism in the sending/receiving of OSC messages. I don’t know quite enough about computer architecture, nor the internals of sclang timing, to offer a good explanation for that.

Generally, the very fact that scheduling and audio processing run independently requires that messages are scheduled in advance. The exact amount of delay depends on at least 3 factors:

  1. network jitter: only relevant with actual network connections; negligible with localhost (in the order of microseconds)

  2. language jitter: each operation in the language takes time – and, more importantly, different amounts of time. For example, if you write a loop, the elapsed system time between iterations will vary by some degree. Some iterations may do more work than others, or another Routine gets scheduled in between, or there is a garbage collector pause, etc.

  3. hardware buffer size: Generally, audio is always processed in blocks. If you want OSC messages to be interpreted in between blocks, you have to schedule them in advance; otherwise messages would only be interpreted at block boundaries.
    scsynth uses a block size of 64 samples by default. However, the audio callback often uses a buffer size that is larger than the Server block size. For example, if the hardware buffer size is 256 samples, the audio callback executes 4 Server ticks in a row as fast as possible; as a consequence, OSC messages might be interpreted at intervals of 256 samples in the worst case. Generally, if you want to avoid late OSC bundles, the delay must be larger than the duration of the hardware buffer (see the sketch below).
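
To put rough numbers on that last point (hypothetical values, 44.1 kHz sample rate):

(
var hardwareBufferSize = 256, sampleRate = 44100;
// worst-case interval at which OSC messages get interpreted:
(hardwareBufferSize / sampleRate * 1000).postln; // ~5.8 ms
// so the scheduling delay should comfortably exceed ~6 ms in this setup
)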

I would love this to be the solution. But in practice I just don’t get great performance with Supernova. I can run 16000+ SinOsc synths on a single sc_server without any distortion. A ParGroup on supernova can’t even play 2000. Maybe I’m doing it wrong, but proper documentation is…lacking.

I agree the multi-server approach is a kludge, but it is a very powerful one. Proper multi-threading would certainly be welcome though.

Sam

I can run 16000+ SinOsc synths on a single sc_server without any distortion. A ParGroup on supernova can’t even play 2000. Maybe I’m doing it wrong, but proper documentation is…lacking.

Dispatching a Node to a helper thread has a (roughly) constant cost. If your Synths are very lightweight, this cost can easily outweigh the benefits of parallelization. Instead of putting all your tiny Synths directly in a ParGroup, you should rather distribute them into a few Groups and put those into the ParGroup. The Groups will be executed in parallel, but Synths inside each Group will run sequentially.

This is called “data partitioning” and it is an important concept for writing parallel programs. The basic idea is to minimize the ratio between dispatching/synchronization cost and actual workload.
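
A minimal sketch of that layout, assuming some lightweight SynthDef (here called \tiny):

(
var par = ParGroup.new;
8.do {
	var g = Group.new(par);          // the Groups run in parallel...
	100.do { Synth(\tiny, nil, g) }; // ...Synths within each Group run serially
};
)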

Unfortunately, the documentation of ParGroup is very sparse and does not really explain how to use it effectively…

but I’ve never gotten the point of multi-client. Why not have a single server and single client, the latter of which receives and executes messages from performers? This would allow individual pieces to manage ownership of synth nodes, busses, etc. in a way that’s tailored to the work.

I totally agree. I guess multi-client setups are mainly used for free-form collaborative live coding – which is quite niche, TBH.

Same goes for single-client remote server setups; I just don’t see the benefit over a client and server on the same device, with the client receiving and forwarding messages to the server.

Once you have the controller/client and processor/server completely decoupled – which is generally a good thing – you are now able to run them in different processes; the question is whether you really should. Most audio applications decide not to do it. In the case of SuperCollider, I think the rationale was that there is no real downside to running scsynth in a separate process per default (which I think wasn’t always the case), and for some people it even has slight upsides. Of course, there is one big downside: client and server are running on different clocks and it is impossible to achieve deterministic (sub)sample accurate scheduling. But as I sketched out in my last post, this could be solved. (I remember I have written about this in more detail somewhere in the forum or on GitHub, but I can’t find it right now.)

If the context is embedded devices that only have the capacity to run scsynth, that’s more of a problem of the resource usage of sclang than a solid argument in favor of separate processes.

One nice thing about running the sclang process on the client machine is that you can use GUI objects. (You cannot run the GUI separate from sclang, at least not out of the box.)

Another point that I’m sure someone will mention is the ability to develop alternate clients in other languages. I don’t know the specifics of libscsynth, but shouldn’t it be at least theoretically possible to make an equivalent of an internal server in any language with a C FFI?

It surely is possible. I think you could already do this in Python or Lua with libscsynth. Having scsynth in another process is still useful for browser-based clients, though. On the other hand, scsynth can already be compiled for WebAssembly (Add WebAssembly (wasm) target for scsynth (rebased) by dylans · Pull Request #5571 · supercollider/supercollider · GitHub).

I think that for clients written in scripting languages it can still be a good idea to run scsynth in a separate process; otherwise a Server crash would bring down the whole interpreter. (Remember that we don’t want to lose our unsaved project after a Server crash.) Of course, this is not relevant if the “editor” part is already implemented in a dedicated process.

Yeah. I still can’t find the benefit. I made something like this:

(
fork {
	10.do {
		a = ParGroup.new;
		6.do {
			b = Group.new(a);
			10.do { { Out.ar(0, GVerb.ar(SinOsc.ar(rrand(2000, 3000), 0, 0.0001))) }.play(b) };
		};
		0.2.wait;
	}
}
)

and I can maybe get 600 of them going, vs 400 on the normal server, and vs 7 servers, where I can get 2800. I would love it if someone could show a case where supernova just blows scserver out of the water, but I haven’t been able to find that case myself.

Sam

I would love it if someone could show a case where supernova just blows scserver out of the water

Generally, supernova will never be faster than multiple servers, but ideally it should get close. You would trade some performance for much increased flexibility.

(As a side note: you should also give each Group its own Bus and only sum into the hardware outputs after all Groups have completed. The idea is to keep all data access local and avoid synchronization to achieve better scalability. Again, this is not documented…)
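
A rough sketch of that bus layout, with a hypothetical \sum SynthDef doing the final mixdown:

(
fork {
	var par = ParGroup.new, tail = Group.after(par);
	SynthDef(\sum, { |in, out = 0| Out.ar(out, In.ar(in, 2)) }).add;
	s.sync;
	4.do {
		var bus = Bus.audio(s, 2);
		var g = Group.new(par);
		// ... this chunk's Synths go in g, writing to bus instead of bus 0 ...
		Synth(\sum, [\in, bus, \out, 0], tail); // summed after the whole ParGroup
	};
}
)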

I don’t have time right now to test your code, but I will do it later. If you’re interested in investigating this further, can you open a new thread?


Thank you very much!
This is highly informative and helpful!

I have two questions:

  1. Can Synth be used with .onFree and .register inside s.bind { ... }? I assume there is no obstacle, but I would like to know if there is anything I am unaware of.

  2. Which is better when animating with the Pen class or when controlling windows: should the animation be delayed, or should the synths simply not be wrapped in s.bind? There should be no significant difference, but I ask to be sure in case there is anything I am not aware of.

As far as I know, yes. Register and onFree are purely language side; there is no need to send anything to the server except for the normal Synth messages (which are produced by other methods). They wait for replies from the server, nothing else.

With makeBundle and bind, the function runs now, and you get any objects created within the function now – and also the message(s) are sent now! But the outgoing bundle is timestamped to be performed later in the server.
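
For instance, something like this should be fine (a sketch, using the \ping SynthDef from the original post):

s.bind {
	var syn = Synth(\ping);
	syn.onFree { "synth freed".postln }; // language-side; fires when /n_end arrives
};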

Second question, I agree with Nathan completely. To delay the visuals, use { ... GUI stuff ... }.defer(s.latency) (defer already is a delay mechanism – we just normally delay by 0).
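
Concretely, that might look like this (a sketch, assuming a running synth and a UserView stored in view):

s.bind { synth.set(\trigger, 1) };  // sounds s.latency seconds from now
{ view.refresh }.defer(s.latency);  // visuals delayed by the same amount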

hjh


Thank you for your kind answers!
I have more questions:

  1. The s.bind { ... } code in your examples and in the Server help document uses Out.ar to write the signal to the audio bus. Wouldn’t it be better in terms of timing accuracy to write the output of the SynthDef with OffsetOut.ar instead of Out.ar? OffsetOut.ar produces the correct sound in the following example:
(
fork { 
	SynthDef(\testOut, { |freq = 440, out = 0|
		var sig, env;
		sig = SinOsc.ar(freq) * 0.1;
		env = Env.perc(0.01, 0.05, 0.2).ar(Done.freeSelf);
		Out.ar(out, sig * env)
	}
	).add;
	
	s.sync;
	
	200.do { s.bind { Synth(\testOut) }; 0.01.wait } }
)

(
fork { 
	SynthDef(\testOffsetOut, { |freq = 440, out = 0|
		var sig, env;
		sig = SinOsc.ar(freq) * 0.1;
		env = Env.perc(0.01, 0.05, 0.2).ar(Done.freeSelf);
		OffsetOut.ar(out, sig * env)
	}
	).add;
	
	s.sync;
	
	200.do { s.bind { Synth(\testOffsetOut) }; 0.01.wait } }
)
  2. s.bind { ... } is shorter than s.makeBundle(0.2, { ... }), but it is still extra typing. Can it be wrapped in a function to reduce the typing? Enclosing s.bind { ... } in a function seems to work in the following examples, but I am not sure what will happen if the language-side algorithm or the SynthDef is more complex than in the example:
(
fork { s.bind { Synth(\testOffsetOut, [freq: 440, out: 1]) };
	0.1.wait;
	s.bind { Synth(\testOffsetOut, [freq: 660, out: 1]) };
	0.1.wait;
	s.bind { Synth(\testOffsetOut, [freq: 880, out: 1]) } 
} 
)

(
fork { 
	var synth = { |freq| s.bind { Synth(\testOffsetOut, [freq: freq]) } };
	synth.(440);
	0.1.wait;
	synth.(660);
	0.1.wait;
	synth.(880)
} 
)

(
fork { 
	var synth = { |freq| s.bind { Synth(\testOffsetOut, [freq: freq]) } };
	synth.(440);
	s.bind { Synth(\testOffsetOut, [freq: 440, out: 1]) };
	0.1.wait;
	synth.(660);
	s.bind { Synth(\testOffsetOut, [freq: 660, out: 1]) };
	0.1.wait;
	synth.(880);
	s.bind { Synth(\testOffsetOut, [freq: 880, out: 1]) } 
} 
)
  3. Can { ... }.play also be used where SynthDef(...).play can be used? I think not, because { ... }.play takes extra time to be sent to the server when the code block is evaluated. However, in the following examples, { ... }.play seems to work well when the sound length and repeat interval are not extremely short:
( // seems to work
fork { 
	var synth = { |freq|
		s.bind { { SinOsc.ar(freq) * 0.1 * Env.perc(0.01, 0.05, 0.2).ar(Done.freeSelf) }.play }
	};
	synth.(440);
	0.1.wait;
	synth.(660);
	0.1.wait;
	synth.(880);
	0.1.wait;
} 
)

( // does not work correctly:
fork { 
	var synth = { |freq| 
		s.bind { { SinOsc.ar(freq) * 0.1 * Env.perc(0.01, 0.05, 0.2).ar(Done.freeSelf) }.play } 
	};
	200.do { s.bind { synth.(440); 0.01.wait } }
} 
)

( // seems to work
fork { 
	var synth, funcSynth;
	
	SynthDef(\testOffsetOut_, { |freq = 440, out = 0|
		var sig, env;
		sig = SinOsc.ar(freq) * 0.1;
		env = Env.perc(0.01, 0.05, 0.2).ar(doneAction: Done.freeSelf);
		OffsetOut.ar(out, sig * env)
	}
	).add;
	
	s.sync;
	
	funcSynth = { |freq| 
		s.bind { { SinOsc.ar(freq) * 0.1 * Env.perc(0.01, 0.05, 0.2).ar(Done.freeSelf) }.play } 
	};
	synth = { |freq| 
		s.bind { Synth(\testOffsetOut_, [freq: freq, out: 1]) } 
	};
	
	funcSynth.(440);
	synth.(440);
	
	0.1.wait;
	
	funcSynth.(660);	
	synth.(660);
	
	0.1.wait;
	funcSynth.(880);
	synth.(880);
} 
)

Yes, but often it’s not critical.

If something is possible to execute outside of a function, then it’s possible to execute inside a function. (Actually everything runs inside a function. Interactive code gets compiled into a function, and then this function is executed just like any other.)

Here, it’s helpful to understand the message format instead of just regarding server abstractions as black boxes. SynthDef().play and {}.play both send a SynthDef-receive /d_recv message, with a second message (/s_new) embedded in it, to be executed when the SynthDef is ready for use. Whether this is a freestanding message or part of a bundle doesn’t matter.

What is odd about it is that bind is used for timing control, but, because the sounding part (/s_new) is the completion message belonging to an asynchronous command, the sounding part will not be timed precisely. So you can, but it won’t be exact (thus, not really much point to it).

hjh


Have you tested the same code on Linux, where supernova was developed?

Please see Why you should always wrap Synth(...) and Synth:set in Server.default.bind { ... } - #10 by Spacechild1. The fundamental issue of synchronization/scheduling overhead is the same on every OS.


Another issue is that on Supernova every Synth gets its own wire buffers and local busses because it might execute in parallel with other Synths. This may cause significant memory overhead and cache misses. The smaller the Synths, the more pronounced the overhead. 16000 SinOsc synths is probably the point where the model breaks down… But then again, it’s not exactly a real-world test scenario :slight_smile:

However, future parallel server implementations should take this issue into account!

I always wondered why that is. The number of DSP threads is known in advance and parallelism can’t exceed this. Wouldn’t it be enough to have one set of wire buffers per DSP thread?

hjh

Synths are not pinned to specific threads. On every DSP tick, the DSP tasks are pushed to a thread pool and any DSP thread might pop and execute them. The wire buffers, however, have to be set when the Synth is created.