Units of instruments, examples for controlling an instrument

This is inspired by some experimentation I am doing with physical models, such as those in STK, and by thinking about intuitive control.
Is there a good example/tutorial of something like the following?
I want a way to create a new synth (a SynthDef, or a function that creates the necessary SynthDef), together with some control mechanisms (probably buses and functions).
Consider the following example:
Having a piano, I can produce each sound as a separate event, but all produced sounds should be affected by the muffling/sustain pedals. Think of a single synth having 88 possible sounds to produce, with every sound it produces affected by the pedal being on or off.
I want a synth such that I can send events to trigger it, and I can muffle (not mute) all the sounds being produced by the unit at once.

I am not asking how to make these per se, but how to make them in a modular way. Are there good examples of separate units of sound (strings) being tied together and controlled together (by a pedal), or do I need to let go of this kind of thinking when working in SuperCollider?
The best I can think of is a function that produces a collection of synths plus a collection of controls, with the intuitive controls linked to all the separate sound units by control buses.

What you want is a MIDIdef that changes something on a bus, with the bus mapped to a SynthDef argument. For audio, you can route the output of one synth into a bus and then into another synth. The control instrumentation part is an interesting problem if you have multiple inputs; it is not as straightforward as I would have thought.

In the use case of a piano SynthDef, where a synth is respawned per note, I’d map the felt pedal to a control bus and have each synth read that bus whenever it is instantiated.
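A minimal sketch of that pattern (the \pianoVoice SynthDef name and the CC number are assumptions; CC 64 is the standard sustain pedal):

```supercollider
// a control bus that holds the current pedal position
~pedalBus = Bus.control(s, 1);

// the MIDIdef writes the pedal state onto the bus
MIDIdef.cc(\pedalToBus, { |val| ~pedalBus.set(val / 127) }, 64);

// each newly spawned note synth gets its \pedal arg mapped to the bus,
// so it picks up the current pedal state and follows later changes
x = Synth(\pianoVoice, [\freq, 440]);
x.map(\pedal, ~pedalBus);
```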

Right now I have formed an ad-hoc plan, that goes somewhat like this:

  1. Have a Synth for each key, patterned on a common SynthDef.
  2. Have a class whose instances hold pedal states and the per-string Synths, with methods for key presses and pedal changes. These methods would put signals onto buses that cause excitation in the string synths and route pedal-state changes through to the string synth envelopes.

This raises these questions:
Does this sound reasonable? Is there a better, more idiomatic way?
Is there existing infrastructure in the standard libraries or a quark that could assist with this kind of routing (or the idiomatic kind)?

One way I loosely think of a continuously processing system like SuperCollider with external controls is that the nodes themselves don’t really have state; rather, the state of a subsystem consisting of a node and a controller is embodied by the controller itself. So a controller at 440 Hz should be the state of the synth it is controlling. Another place state can be held is a bus: if you put 120 into a bus, that bus will continuously output 120. So if you have a sustain pedal, the pedal outputs momentary signals that a MIDIdef turns into a value you can set on a bus, and that bus will output it continuously. Or you could have the MIDIdef set the synth argument directly.
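For example, the bus-holds-state idea looks like this (a sketch; the mapped synth is illustrative):

```supercollider
b = Bus.control(s, 1);
b.set(120);               // the bus now continuously outputs 120
b.get { |v| v.postln };   // asynchronously posts the held value

// any synth mapped to the bus follows that held state:
// x = Synth(\someDef); x.map(\freq, b);
```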

I’m not sure what constitutes idiomatic, since there are only a few ways of changing variables in SuperCollider. There’s the Modality toolkit, which can help interface a hardware controller with SuperCollider; I’ve never tried it, but decided to make my own. I found it best to start with small chunks of code and later integrate them, Voltron-style, into a full system of behaviours.


This is the correct way! (Or at least, one of them.)

As a quick example…
The important points are ~synthaphoneShared, which is a nice way to store all the buses,
and the line that begins var mute_control = ..., where the bus is read.

s.waitForBoot {
	~synthaphoneGroup = Group(s);
	~synthaphoneShared = (
		\mute_bus: Bus.control(s, 1)
	);
	SynthDef(\synthaphone, {
		// doneAction: 2 frees the voice once its release has finished
		var sig = Saw.ar(\freq.kr) * EnvGen.ar(Env.adsr, \gate.kr(1), doneAction: 2) * \amp.kr();
		var muted = OnePole.ar(sig, 0.8);
		// read the shared mute bus; its index is baked in at SynthDef build time
		var mute_control = In.kr(~synthaphoneShared[\mute_bus], 1);
		var out = sig.blend(muted, mute_control);
		Out.ar(\out.kr(0), out!2);
	}).add;
	~synthaphoneVoices = ();
	~make_synthaphone_at = {
		|midiKey, velocity|
		// release any voice already sounding at this key
		try { ~synthaphoneVoices[midiKey].set(\gate, 0) } {};
		~synthaphoneVoices[midiKey] = Synth.tail(
			~synthaphoneGroup,
			\synthaphone,
			[\freq, midiKey.midicps, \amp, velocity.linlin(0, 127, 0, 1).pow(2)]
		);
	};
	MIDIdef.noteOn(\synthaphone_noteon, {
		|vel, note|
		~make_synthaphone_at.(note, vel)
	});
	MIDIdef.noteOff(\synthaphone_noteoff, {
		|vel, note|
		try { ~synthaphoneVoices[note].set(\gate, 0) } {};
	});
	MIDIdef.cc(\synthaphone_mute, {
		|val|
		~synthaphoneShared[\mute_bus].set(val.linlin(0, 127, 0, 1))
	}, 2 /* cc number */);
};

// make note
MIDIIn.doNoteOnAction(1, 1, 35, 20); 
MIDIIn.doNoteOffAction(1, 1, 35, 20); 

// change mute bus
MIDIIn.doControlAction(1, 1, 2, 0); 
MIDIIn.doControlAction(1, 1, 2, 127); 

There are a few other ways of doing this that don’t quite work as expected.

You can communicate with many synths at once by addressing their group; however, when you make a new synth, it won’t know what the previous value was.
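A quick sketch of the group approach and its limitation (the names are illustrative):

```supercollider
~group = Group(s);

// every running synth in ~group that has a \pedal arg receives this at once:
~group.set(\pedal, 1);

// but a synth created afterwards starts from the SynthDef's default,
// so you must pass the current state at creation time yourself:
// Synth(\voice, [\pedal, ~currentPedal], ~group);
```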

You could also store all the state in the language and loop through all nodes when the state changes, but this is pretty inefficient.

You could also use a buffer to store some state, this is pretty useful if you are dealing with large amounts of data!

It is also worth mentioning that synths can alter the state of the buffer if you want them to, this can be very powerful, but also very confusing, as execution order really begins to matter there.
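A sketch of buffer-held state, with made-up names throughout:

```supercollider
~state = Buffer.alloc(s, 8);   // eight slots of shared state

// the language writes a slot:
~state.set(0, 0.5);

// a synth reads slot 0 every control period:
SynthDef(\stateReader, {
	var pedal = Index.kr(\buf.kr(0), 0);
	Out.ar(0, SinOsc.ar(220) * pedal ! 2);
}).add;

// pass the buffer in at creation time:
// Synth(\stateReader, [\buf, ~state]);
// synths can also write slots with BufWr.kr, at which point
// the order of nodes on the server starts to matter.
```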


This should be in the IDE tutorial somewhere.

One thing to consider - I haven’t used this extensively, but it’s simplified things for me in a few cases: if you start all of your individual keys paused, and use Done.pauseSelf as your doneAction in them, then you can essentially pre-allocate all 77 of your notes at once and turn them on as needed. Then you don’t need to keep track of e.g. freeing the “old” note=33 when you play a new one, and you probably get better retriggering behavior (depending on the kind of synthesis you want to do). Then, rather than playing new Synths, you can do this for new notes:

~noteSynths[33].run(true).set(\gate, 1);

One nice thing about this is it alleviates the need to use Groups or tricky node ordering - you allocate every node you need once, at start, and the order you allocate them in code is the order they are on the server. If you can do this, it eliminates a whole class of tricky bugs related to node ordering / groups etc.
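A sketch of the pre-allocation idea (the \key SynthDef is made up; run the setup inside a Routine or waitForBoot so that s.sync is legal):

```supercollider
SynthDef(\key, {
	// Done.pauseSelf pauses the synth when the envelope ends, instead of freeing it
	var env = EnvGen.ar(Env.adsr, \gate.kr(0), doneAction: Done.pauseSelf);
	Out.ar(0, SinOsc.ar(\freq.kr(440)) * env * \amp.kr(0.1) ! 2);
}).add;

s.sync;

// allocate every voice once, paused; allocation order fixes node order
~noteSynths = (0..87).collect { |i|
	Synth.newPaused(\key, [\freq, (i + 21).midicps])
};

// new note: wake the voice and open its gate
~noteSynths[33].run(true).set(\gate, 1);
// release: close the gate; the voice pauses itself when the envelope is done
~noteSynths[33].set(\gate, 0);
```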

I think a class here might actually reduce flexibility / transparency, though I don’t think it’s a BAD idea if you find this a helpful way to organize your code. Decide on the data structure where your parameters will ultimately be stored, build this out in its entirety, and then make your connections (a) between MIDIFuncs and that data structure, and (b) between that data structure and your synths. Choosing something like “a dictionary of buses” as your data structure is easy, because part (b) can be accomplished with Synth:map, which is very low overhead. So something like:

~parameters = (
    pedal: Bus.control(s, 1),
    notes: 77.collect {
        (trigger: Bus.control(s, 1))
    }
);

// MIDI -> parameters
MIDIdef.cc(\pedal, {
    |val|
    ~parameters[\pedal].set(val.linlin(0, 127, 0, 1))
}, 64); // 64 is the standard sustain pedal CC number

// parameters -> synths, assuming ~notes is an array of all of your individual note synths
~notes.do {
    |synth, i|
    synth.set(\pedal, ~parameters[\pedal].asMap, \gate, ~parameters[\notes][i][\trigger].asMap);
};

Sticking to a simple data structure like this can be preferable to a class, since you can easily see and reproduce the entire state at a given time without depending on the internal properties of a class.


As in it should be somewhere in there right now and it’s worth looking for, or as in it should be written up as a tutorial?

As in both these examples should be written into the existing tutorials as examples. Maybe it’s there; there are certainly small snippets on how to set arguments and maps, but I didn’t see examples of how others would approach the question you posed.

My opinion is that there should be a Guide style document explaining control routing, and the tutorial should link to it.

If the Getting Started tutorial’s scope expands too much, then it’s no longer Getting Started.

We don’t have enough Guide documents for middle-level topics (basic and important, but a bit beyond beginner level).

I do understand the wish to make it easy for new users to encounter the material by putting it right there in the tutorial, but linking has the advantage of searchability (and if, in 2023, one struggles with the concept of following a link, one is not likely to get very far with SC anyway).


Why not both?

I find the barrier to learning something new is the number of context switches needed to achieve the learning objective. With enough jumps through links, one cannot muster enough attention to learn what is needed to meet the original objective. As the initial point of contact for SuperCollider, the IDE should have a self-contained set of tutorials that encompasses the majority of use cases. Unless the objective is for the user to gel these different concepts into an internalized understanding, which is great if there’s enough time, but not so good if there’s no other resource than the IDE and the internet.

This is why the Processing IDE is such a great resource. Though it doesn’t have tutorial-style help, IIRC it includes code examples from most of the books written about it.

I guess we have a different conception of what a tutorial is.

Perhaps one trouble in this discussion is that term, “the IDE tutorial.” I assume you mean the Getting Started series. I fully agree that we don’t have enough “common use case” documents; I’m just not sure that they belong under a rubric of Getting Started.

I’m wary of a tutorial that ends up having 40 chapters because “we want it to cover all the common use cases.” This could give new users the impression that they will have to study for a year just to Get Started.

I do think the front help page could link to two collections of documents: 1/ Getting Started, 2/ Common problems and solutions. 2/ could be expanded indefinitely.

My other reason for preferring this approach is that you don’t know which common problems and solutions a given user needs. I think a tutorial is to be read in sequence; for intermediate guides, users should be able to pick and choose freely.