Extending SynthDef

Getting right to the point… TL;DR: I think efforts to extend the features of SynthDef should favor “has-a” composition rather than “is-a” extension (adding methods, or inheriting). But I can imagine different opinions about the boundaries, so I thought it might be worth starting a conversation.

This is motivated by making Synthdef work in many threads by JordanHendersonMusic · Pull Request #6073 · supercollider/supercollider · GitHub, which changes UGen.buildSynthDef from a single global value to a thread-local value. Just on a technical level, that’s a reasonable idea; it’s bothered me at times that UGen.buildSynthDef is this unprotected global thing hanging out there, in a critical core area. (I don’t have an opinion on the implementation; first glance looks pretty good! Just that making it thread-safe(r) would be a win.)

But the other given rationale – “Further, in my own work I often want to make [an] owned resource as a part of the synthdef and synth creation process” – I have doubts, and the PR conversation thread wasn’t the right place for that.

We have a server abstraction object representing a server-side GraphDef, just like we have server abstraction objects representing Groups, Nodes, Buffers and Buses. GraphDef <–> SynthDef. Opinion: I think it’s necessary to have a language-side object corresponding closely to GraphDef.

If that’s a correct view, then there would be three ways to extend SynthDef.

  1. Push the GraphDef companion down into a lower-level class with a different name, and redo SynthDef as a wrapper around that.
  2. Or, add features to SynthDef, ditching some of the SynthDef <–> GraphDef correspondence.
  3. Or, keep SynthDef as it is, and build superstructures on top of it.

It seems pretty clear to me that 1/ is intrusive and risky.

With 2/, it’s more of a matter of opinion, but I think this idea breaks encapsulation. SynthDef already has a large and complex job; it doesn’t need additional complex jobs added into the logic. (That it’s complex is an argument in favor of encapsulation.)

3/ preserves backward compatibility and encapsulation. For me, this is convincing.

So an alternate solution to the stated requirement would be, instead of managing resources from within SynthDef, to create a third party that manages resources and builds SynthDefs, in the right order. I think the intention behind managing resources in SynthDef is convenience, but even a halfway well written resource+SynthDef agent would be just as convenient, and easier to maintain (because everybody has a clearly defined job). This is “has-a” composition.

Loading resources in a SynthDef has that “wouldn’t it be cool if…?” factor – the thrill of seeing SynthDef from a different angle. But this doesn’t ensure that it’s optimal. Considering changes to the core classes on the basis of personal approaches that may or may not be optimal is potentially risky – and SynthDef seems particularly prone to this temptation (probably because it’s the only server abstraction object that executes a user function in normal operation).

In the present case, however, there are other good reasons to improve thread safety; I don’t object to those at all. And of course, a thread-safe(r) SynthDef could be used however one likes. Part of my concern, though, is what we recommend as a best practice; IMO, clear(er) lines between server and client operations are easier to explain to new users, and “has-a” composition supports this better.

Or am I off-base here? I don’t think so…? But maybe there are cases that I’m not seeing.



Agreed. The latest version of the pull request no longer uses inheritance, so I guess as far as the PR is concerned, things don’t look all that bad. Breaking backward compatibility is something I’m not a big fan of, unless it fixes a huge design flaw, which one could argue is currently the case.

“Convenience” is one way to look at it; perhaps “concern for safety” or “avoiding user error” is another: e.g., no danger of forgetting to clean up resources when they are no longer needed.

Having smaller, more modular building blocks is easier to maintain for developers, but often harder to use for an end user (power users have more flexibility and beginners have more headaches). For this reason, the idea of making a superstructure that combines the best of both worlds appears to me to be a sound approach.

Not hindered by much factual knowledge, it appears to me that managing resources in a SynthDef may introduce other problems as well, related to error handling (e.g. permission issues) or unreliable timing (contention, network latency), etc. Probably better to keep those worlds separate.


Ultimately, I just don’t think SynthDef – the only way to make nodes without rewriting all the UGen classes – should have an opinion on how and where it is used; my PR fixes that.

Honestly, I think that’s a strong enough reason; the rest is implementation.

After the large thread on threading, it seems agreed that SuperCollider’s model for client–server synchronisation is bad and should be replaced with promises. This proposal lets you await inside a SynthDef – which isn’t even creating a resource, just accessing it. I’ve moved most of my code over to using promises already, and this issue has come up multiple times. Obviously there are ways around this, e.g. waiting just before the SynthDef… but this is just another thing to remember.

I agree with everything you’ve said, but SynthDef’s current design prevents one from building on top of it if you want to involve threads; this PR tries to fix that.

FWIW, I did a quick grep on the complete Quarks repo (318 Quarks) and found 3 Quarks in which lines with .buildSynthDef (without UGen before it) appear: AlgaLib, cruciallib, and Connection. (It also appears in a few other Quarks – in help files, for example – so I may be off by one or two.) I didn’t inspect the code to see if it would be a problem, but anyhow…
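The exact command wasn’t posted; for the curious, a search of that shape could look something like this (a sketch assuming GNU grep’s `-P` option for the lookbehind; the directory and file contents below are throwaway stand-ins, not the real Quarks):

```shell
# Throwaway stand-in for a Quarks checkout (hypothetical files/paths)
mkdir -p /tmp/quarks_demo/GoodQuark /tmp/quarks_demo/FlaggedQuark
printf 'UGen.buildSynthDef.addUGen(this);\n' > /tmp/quarks_demo/GoodQuark/main.sc
printf 'var def = thisThread.buildSynthDef;\n' > /tmp/quarks_demo/FlaggedQuark/main.sc

# list files containing .buildSynthDef NOT immediately preceded by "UGen"
grep -rlP '(?<!UGen)\.buildSynthDef' /tmp/quarks_demo
# → /tmp/quarks_demo/FlaggedQuark/main.sc
```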

True, but… it remains a fact that the only way to load a buffer is for the language to construct and send a message. Constructing and sending the message is not a server-side operation. Sticking this into a function that represents server-side operations blurs the line between client and server (which is already hard enough for incoming users to understand).
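To make that concrete, here’s roughly what Buffer.read reduces to – a simplified sketch, not the actual implementation (the real method also registers the Buffer object and handles the completion reply):

```
// Sketch: the *client* picks a buffer number and sends an OSC message;
// the server's only involvement is to act on /b_allocRead when it arrives.
(
var bufnum = s.bufferAllocator.alloc(1);
s.sendMsg('/b_allocRead', bufnum,
	Platform.resourceDir +/+ "sounds/a11wlk01.wav");
)
```

Everything before sendMsg – allocating the number, resolving the path – happens in the language, before the server hears anything about it.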

E.g., “I have this SynthDef and I expect each Synth to play a different buffer, but it always plays the same one”:

SynthDef(\bufChooser, {
	var paths = "/some/directory/*.wav".pathMatch;
	var buf = Buffer.read(s, paths.choose).await;
	var sig = PlayBuf.ar(2, buf);
	Out.ar(0, sig);
});

Reply: Well, pathMatch, choose, and read are client-side – the server can’t do that in response to /s_new.

“But ‘we can load buffers in a SynthDef’…”

This conversation will happen :laughing:

So part of my point here is pedagogical. Simple rules with fewer exceptions are easier to explain than complex rules with a lot of fine print. “Initiating the process of loading the buffer is a client-side operation, and doesn’t belong in SynthDef” is a simple rule. “You can load buffers in a SynthDef if it’s within a thread, and if you use promises, and if you don’t do too much other fancy language-side stuff at the same time (and this boundary is not immediately clear, if ever), and if you understand that the buffer will be hardcoded and you won’t be able to change it after the fact” is… a lot to absorb. So to me, the best that “loading resources in SynthDef” can be is possible, but I’m not convinced that it should be recommended.

Here is how I would handle a SynthDef that depends on a buffer loaded from disk. The SynthDef is pseudocode, but the rest is legit, except .await (and even that might be real code, if it lines up with your Promise implementation).

~bufSynthDef = Proto {
	~path = "";  // fill in at run time
	~defName = { ("def" ++ UniqueID.next).asSymbol };
	~prep = { |path|
		~path = path ?? { ~path };
		fork {
			~buf = Buffer.read(s, ~path).await;
			~defName = ~defName.value;
			SynthDef(~defName, {
				... stuff...
				... something with ~buf.numFrames e.g. ...
			}).add /* .await here too? */;
			// or maybe the constructor itself is a Promise
			// ... in that case, it wouldn't be '.changed' here
		};
		currentEnvironment  // 'this'
	};
	// note that we also gain an easy way to release
	~free = {
		s.sendMsg(\d_free, ~defName);
	};
};

~bufdef = ~bufSynthDef.copy.prep(Platform.resourceDir +/+ "sounds/a11wlk01.wav");

So now I have an object that I can invoke with a single expression, which loads a buffer and provides a SynthDef that is bound to that buffer, using threads and promises, without requiring any waiting within SynthDef (fully compatible with the current implementation).
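Teardown is then symmetric – the ~free pseudo-method makes release a one-liner too (assuming Proto’s usual method dispatch; freeing the buffer itself could be added to ~free in the same way):

```
// later, when the process is no longer needed:
~bufdef.free;  // dispatches to the ~free pseudo-method, sending /d_free
```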

This is building on top of SynthDef. It’s “has-a”-style object composition (sort of… it’s not an Adapter, exactly). The object “has a” buffer and a SynthDef, and as such, the object is not required to “is-a” (to be) a SynthDef. (I think the SynthDef is one of the resources to be managed: a sibling of the buffer, not the owner of it.)

FWIW I’ve been managing my resources this way for 18 years now, without needing a radical redesign. I don’t feel like SynthDef’s current single-threaded requirement has prevented me from accomplishing anything.


“Synthdefs behave weirdly if their functions include ‘yield’.” This has been noted before – for example, in this post by @Avid_Reader: Thread safety within SynthDef
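A sketch of the failure mode behind that weirdness, assuming the current single-global implementation (not anyone’s posted code, and nothing you should run in earnest):

```
// Two routines interleave while UGen.buildSynthDef is one global slot.
(
fork {
	SynthDef(\a, {
		0.1.wait;  // suspends the routine in the middle of building \a
		Out.ar(0, SinOsc.ar(440, 0, 0.1));
	}).add;
};
fork {
	// runs while \a's build is suspended: building \b replaces (and then
	// clears) UGen.buildSynthDef, so when \a resumes, its remaining UGens
	// attach to the wrong def, or to nil
	SynthDef(\b, { Out.ar(0, WhiteNoise.ar(0.1)) }).add;
};
)
```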

Is there some upside to this behavior?

As far as advantages to being able to yield inside of synthdef functions, it may be hard to see them yet from this vantage?

But if this proposal is solid technically (which I don’t feel qualified to judge), I don’t see any concrete downsides yet (aside from its being a breaking change).

Regarding clarity, I guess SynthDef functions are already hybrid - you can include any language side data you want by way of constructing the def. You can already load a buffer inside a SynthDef if you want for that matter.

No, and I don’t think anybody in this thread is saying otherwise.

I think James McCartney could “get away with” the thread-unsafe global variable because: 1/ sclang is not preemptive (except for .yield, there’s no way to interrupt SynthDef building) and 2/ he wouldn’t have seen any convincing reason to yield during a SynthDef build. (I’ll go out a little further on the limb and guess that the present arguments in favor of yielding in a SynthDef would not have convinced him either.)

Or such advantages may simply be illusory (in the sense that one can achieve the same result without yielding).

Sure… which is why clear boundaries are more, rather than less, important for them.


This is post-hoc justification of a poor design. There’s no reason for the limitation.


This is what I want to write; it would do exactly what James’ example does, but far, far more clearly.

var synthA = MySynthDefWrapper.play({
    var b = OwnedBuffer.read(s, ~path, owner: \node).await;
    BufWr.ar(b, ...)
});

To be clear, this is doable now, and I have a quark that does something very similar.

The problem comes when you want to load a bunch of these all at once. The issue has nothing to do with the server-client relation and everything to do with defining multiple synthdefs (client) in different threads (client)… it just so happens that the example I gave uses a server side synchronisation, but you might want to get a value from a routine with yield, which would be a solely client side operation.

SynthDef(\bufChooser, {
	var v = [...some numbers...].choose;
	// ... v is fixed at build time, the same for every Synth ...
});

This conversation already happens.

Does that mean it should be impossible? Again, this is a core library class; it shouldn’t prohibit anything that is well defined and safe – and where it does prohibit something, it should do so explicitly.

No. I’m a bit puzzled that this is being inferred here.


Ooohhh! Thanks for that! So it’s about 99% coverage – not bad for a breaking change. I will have a look at those, as it might be as simple as prefixing with UGen., in which case I’d make PRs on them all.

Looking through the grep again, I think I missed one: UGenStructure.

Looking through it, it should be easy fixes in all but Bending - https://github.com/supercollider-quarks/downloaded-quarks/tree/3b0d2f1e405587613e3e5f43103184f231552c42/Bending

Upon further reflection (probably my last reflection), it looks like Jordan and I are, in a way, speaking different languages, with mutual misunderstanding, leading both of us to look negatively at the other’s code. (FWIW the OwnedBuffer helps a lot – I’d assumed it was a standard Buffer, which would raise object-leak problems.)

I realize in hindsight that my example looks arbitrary and overworked. It isn’t. 'Round about 2004-2005, I realized that, when we are composing or performing, we are not really interacting with synth(def)s, groups, buses, buffers etc. – we are interacting with sonic behaviors (“processes”), which use synth(def)s, groups, buses, buffers etc. to get the work done. It would make sense, then, to organize resources and activities into process objects, so that the creative work is addressed more toward the behaviors, and less toward the “guts.”

I wanted to define and redefine these processes freely, individually, without having to recompile the classlib. Hence Proto. In my example, I didn’t use Proto “just because” – being able to fix something in one process without stopping other playing processes is part of a creative workflow.

I wanted processes to handle their own resources. Instantiating and releasing should be one-liners – hence, constructor (~prep) and destructor (~free) pseudo-methods. Since these are user hooks, initialization can be as simple or as complex as needed (scalability), and a correctly-written process object doesn’t leak resources.

This workflow hasn’t gained traction, AFAICS, with other users. I suspect it’s because much SC code is ad-hoc and disorganized. This is part of the fun! When I’m working on a new sound or a new type of sequence, I throw things around willy-nilly too… but then when I get something I like, I codify it into a self-managing process definition, for easy reuse. I turn the initial messy code into an organized, reliable, (more) maintainable object. Then performance code just works with the processes’ public interfaces, no exposed internals, less risk of onstage failure.

So when Jordan says “what I want to write [does it] far far more clearly”, it highlights the difference in perspective. I’m initializing objects in constructor methods – pretty standard, I’d think. Every part of the operation (object init/free, play init, stop cleanup, task definition) has a place, and things are in their places. What is unclear about that :laughing:? This type of coding discipline may be superficially off-putting, but the maintenance gains are real.

As stated earlier, I’m in favor of improving thread safety in SynthDef on technical grounds (and I find it remarkable that I need to say this yet again, having said it at least twice before), and then people can use that how they like (whether or not I think it’s a best practice doesn’t matter for that… there is a difference between “not recommending” and “prohibiting”). I brought it up because it isn’t the first time “why doesn’t SynthDef do x?” has been asked. My opinion is to keep those boundaries fairly tight.


I think I conflated this thread with the conversation about improving thread safety – my fault!

So there is the thread safety improvement, which everyone appears to agree on?.. If anyone has any feedback on the implementation, or comments in general, please let me know – though GitHub is probably the better place for that, as you can reference the code :smile:

The only impactful change to SynthDef would be the ability to define across threads. While there is no performance benefit here, if you need to wait for some value before defining (from the server, OSC, a CLI process…), you will be able to do so inside the SynthDef. While there are ways around this now, it is a needless restriction.


That’s sort of the problem… what do you mean by practice? Any kind of abstraction (grouping of concepts) implies a use case, which in turn enforces/restricts/creates a type of artistic practice. What should be happening (IMHO), so that we encourage people to develop their own ways of working, is that, given a practice, we consider what recommendations/improvements/alternatives might be beneficial to said practice… obviously there are some commonalities across practices: writing clean code, oddities in the language…

In my work, I hit go, run away from the computer, and grab my instrument – I don’t interact with SuperCollider at all during the performance. I use SuperCollider like Max… if Max let you pass objects in patch cables, use key–value data structures with signals, and had better language features so I could manipulate the code at a level that makes artistic sense for my work (if Max had a type system beyond grey and stripy, I’d have switched yesterday).

When composing, I’m dealing with relationships between OSC addresses, often not originating in SuperCollider – here I care about signals, filters, how these signals change in relation to one another, and compositions of those processes. I would call this behaviour, or relationships, but it looks technically very different from your idea of behaviour. These processes and synths are made at the beginning of the work and destroyed at the end; all the connections are static.

The only kind of resource management I need is to ensure that a thing exists before it is used; it is also impossible for me to leak, as I will just be rebooting the interpreter. For this, your Proto is overkill: it adds a great deal of complexity that is just meaningless in the context of my practice, and introduces many cases where I might mess up. For your practice, which (as I understand it) needs all those levels of control, Proto seems perfect.

Also… just to be clear, I was never recommending this as a change to the class library – merely that being able to wait on promises inside a SynthDef would simplify some of my classes, and that the current restriction is needless.

Right… not just “best,” but “best for what.” I have to admit to getting stuck in my own perspective at times.

WRT server abstractions, I think I’m always going to be relatively conservative. For instance, this thread popped up again after seven months, where I see that I said:

The Group class is only a thin abstraction layer representing a group node in the server. Because of its name, we want to press it into service for other purposes, but in my opinion, this is a conceptual error.

At least I’m consistent :laughing:

Reminds me of what motivated my design. I had previously been working with a model that focused on instruments (Voicers), with sequencers attached to the instruments and represented in a GUI. But the GUI was based on the initial state; if I added or deleted sequencers, no updates (because I wasn’t clever enough at the time). So every micro-change to one sequence required a recompile. This was 2004, on a G4 iBook. Each recompile + reload cycle took up to 30 seconds, so 30 changes in the space of an hour would cost 15 minutes of work time: a 25% loss of efficiency. A performance almost failed because of the (probably multiple) hours wasted recompiling and reloading during the composing process. So I had to figure out how to make it dynamic. Maybe today I wouldn’t bother, since on my current machine (thank you, SSD) it’s “compiled 1278 files in 0.43 seconds” :exploding_head: – it was not like this in 2004.

(I keep on with Proto for this on-the-fly flexibility, despite occasional arguments to the effect that “object prototyping in SuperCollider doesn’t work.” If you need dynamically defined objects, then you’re going to find a way to work with the limitations. Overstated pejoratives flow in both directions – I’m guilty of it a bit in this thread too.)

Good that we’ve come around to some positive, interesting reflections. Thanks!

