Composite notes: Auto-mapping synth controls to events

Here’s something I’ve been interested in for a while – finally made a prototype after a question on the mailing list.

The question was about handling continuous control changes in a pattern context. Patterns are oriented toward discrete events, making it harder to handle arbitrary control signals.

I’ve been saying for a long time that I think the best way to handle it is to allocate, map and deallocate buses automatically. That is, instead of trying to handle everything within one SynthDef, we could have a player that would put several synths together.
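
Done by hand with only core classes, that plumbing looks something like the following sketch: allocate a bus, write a control signal onto it, map it into the note, and deallocate once both nodes are gone.

(
var bus = Bus.control(s, 1);
// modulator: writes a two-second pitch glide onto the bus
var mod = { Out.kr(bus, XLine.kr(300, 600, 2)) }.play;
// note: reads its frequency from the bus
var note = Synth.after(mod, \default, [amp: 0.1]);
note.map(\freq, bus);
fork {
	2.wait;
	note.release;
	mod.free;
	1.wait;  // let the release tail finish before reclaiming the bus index
	bus.free;
};
)

Multiply that bookkeeping by every note in a pattern and it’s obvious why it should be automated.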

The prototype depends on a couple of extensions, which you can find in this gist. (I had hoped to upload the files as attachments, but, inexplicably, a SuperCollider users’ forum does not allow SuperCollider source files to be uploaded!)

With those extensions, you can do something like this:

(
// main voice: ModControl (from the gist) creates the \freq input,
// plus extra modulation inputs for it, in place of a plain NamedControl
SynthDef(\testMod, { |out, gate = 1, amp = 0.1|
	var eg = EnvGen.kr(Env.asr(0.01, 1, 0.1), gate, doneAction: 2);
	var freq = ModControl.kr(\freq, 440, \exp);
	Out.ar(out, (SinOsc.ar(freq) * (eg * amp)).dup);
}).add;

// modulator: writes onto a control bus, and frees itself when gate drops
SynthDef(\lfo, { |out, gate = 1, rate = 2|
	FreeSelf.kr(gate <= 0);
	Out.kr(out, LFTri.kr(rate))
}).add;

// control envelope: the shape is supplied per event through the \env control
SynthDef(\ctlEnv, { |out, levelScale = 1, levelBias = 0, time = 1, connect = 1|
	var env = \env.kr(Env.newClear(12).asArray);
	var init = In.kr(out, 1);
	// candidate start level: the env's first node, or the bus's current value
	// (not yet wired into the EnvGen in this prototype)
	var start = Select.kr(connect, [env[0], init]);
	Out.kr(out, EnvGen.kr(env, 1, levelScale, levelBias, time));
}).add;
)

// sliding pitches played in the default SynthDef
// *without adding sliding logic into the SynthDef*
(
p = Pbind(
	\type, \notemap,
	\dur, Pexprand(0.1, 0.8, inf),
	\legato, Pexprand(0.5, 3.0, inf),
	\freqEndpoints, Pexprand(200, 1200, inf).clump(2),
	\pan, Pwhite(-1.0, 1.0, inf),
	\detunedFreq, Pfunc { |ev|
		var sustain = ev.use { ~sustain.value } / thisThread.clock.tempo;
		(
			instrument: \ctlEnv,
			env: Env(ev[\freqEndpoints], [1], \exp),
			time: sustain,
			addAction: \addBefore
		)
	}
).play;
)

// LFO with depth envelope
(
(
type: \notemap,
instrument: \testMod,
degree: 2,
amp: 0.5,
freqMod: (instrument: \lfo, addAction: \addBefore),
freqModDepth: (
	instrument: \ctlEnv,
	env: Env([1, 1.5], [2], 4),
	time: 1,
	addAction: \addBefore
),
sustain: 3
).play;
)

There are certainly some problems left to deal with. Automatic pitch conversions are not compatible yet (because of some funky handling of ‘detunedFreq’, it doesn’t work to apply a control-signal event to \freq directly – this would have to be cleaned up). Also, I haven’t fully dealt with releasing the auxiliary synths. And currently it’s necessary to specify addAction in the sub-events (this will get boring very quickly).

But, for a rough prototype, it works surprisingly well. It’s not leaking synths or buses AFAICS.

Posting here to gauge how much interest there is in this sort of idiom.

hjh

This is a very interesting topic I never found an optimal solution for, mostly because the code logic becomes cumbersome. Many times I thought of something like this (which of course doesn’t work properly):

Pgroup(
    Ppar([
        Pbind(
            \degree, Pseq((0..7), inf),
            \dur, 1
        ),
        Pbind(
            \type, \set,
            \id, Pkey(\group),
            \args, #[\amp],
            \amp, Pseq((0.1, 0.2..1), inf),
            \dur, 0.1
        )
    ])
).play;

but the interface is clear regarding parallel control processes, maybe something like:

// Imaginary automation like ctrl synth seq of type \mod
PparSomething([
    Pbind(
        \degree, Pseq((0..7), inf),
        \dur, 0.5
    ),
    Pbind(
        \type, \mod,
        \amp, Env([1, 1.5], [2], 4),
        \dur, 1
    )
]).play

that creates the group and sets up the control UGens as you did, but looks clearer. That’s just a blurry idea. “Polyphonic expression” was always a problem, even before MPE (which is better than MIDI itself, pun intended).

I guess that the code logic becomes cumbersome because the normal objects (for buses, groups etc.) are too low-level. So we often end up with hacks upon hacks, and then say “well, it’s just really hard.”

A key requirement for my example is a bus that automatically removes itself when the node(s) using it are finished. So, I wrote a TempBus class – delegate the cleanup logic and the main flow becomes clearer. (Here, I think I should tweak it so that the bus is released after the main node and all sub-event nodes finish – shouldn’t be too hard.)
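
For a rough idea of the mechanics (a sketch of the concept, not the gist’s actual TempBus implementation), Node:onFree makes a self-cleaning bus straightforward:

(
// hypothetical helper: a bus that frees itself when a given node ends
~tempBusFor = { |node, rate = \control, numChannels = 1|
	var bus = Bus.alloc(rate, node.server, numChannels);
	node.onFree { bus.free };  // onFree registers the node with NodeWatcher
	bus
};

// usage: the bus lives exactly as long as the node that owns it
x = Synth(\default, [amp: 0.1]);
b = ~tempBusFor.(x);
// ... map or write to b as needed; when x ends, b is freed automatically
)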

I confess here that I don’t understand what is unclear about using an event-within-an-event to run a control synth…?

Or perhaps we have two different ideas about it. It looks like you might be conceiving of the modulation as a separate process running parallel to the main process. That’s not quite what I have in mind.

The idea that I was trying to express in code is that a single musical event may consist of multiple synths, working together, connected by buses. I’ve talked about this before over the years, but the conversation has never really gone anywhere. One reason might be that I never wrote code for it before.

The other reason (perhaps more important) is that it’s thoroughly, deeply ingrained throughout the SC documentation and tutorials that a note = a synth, and that we manage notes by managing synths. I’m envisioning a superstructure: now we use SynthDef to design complete synths, but I think it would be more powerful to use SynthDef for modules. Then, instead of Synth(\def, ...), we might have Synths or SPatch (being careful here not to collide with the crucial library), where an argument could be a reference to another module: SPatch(\patchDefName, [filtLfo: SPatch(...)]), and it would handle the interconnections for you.

(Crucial library’s Instr and Patch do a lot of this already. I used to use them a lot, but they have their problems. Instr allows non-control arguments, which means you could have multiple SynthDefs for the same Instr, so SynthDef naming and caching become delicate. It might be worth taking a step back from that and having a structure that focuses on the connections.)

Hot-swappable synth components – who doesn’t want that? Except, when I talk about it, it falls flat, and I think part of the reason is that we are very attached to the idea of SynthDef and Synth being the objects to use. But these are very low-level objects, essentially direct representations of server structures.

This Event idea is a proof of concept (I think a successful one).

Actually, though… polyphonic expression is not a problem in this design!

Arrays of sub-events “multichannel-expand” just like arrays of numbers. The events should be distinct ([aControlEvent, aControlEvent] does not work, while [aControlEvent, aControlEvent.copy] does), but that’s an easy requirement to satisfy, and the following works transparently:

(
(type: \notemap,
pan: Array.fill(2, { |i|
	(
		instrument: \ctlEnv,
		env: Env([-1, 1, -1, 1].rotate(i), [1, 1, 1] / 3),
		time: 5,
		addAction: \addBefore
	)
}),
detunedFreq: Array.fill(2, { (
	instrument: \ctlEnv,
	env: Env(
		Array.fill(8, { exprand(200, 1000) }),
		Array.fill(7, { rrand(1.0, 5.0) }).normalizeSum,
		\exp
	),
	time: 5,
	addAction: \addBefore
) }),
sustain: 5,
amp: 0.3
).play;
)

(Sharing one control event over multiple main synths isn’t working yet. I can see how to fix it but, no time at the moment.)

hjh

I just think that the notion of parallel processes is clearer than the event-within-an-event. Such concepts are found in musical notation with hairpins, in contemporary notation with action mode transitions, e.g. sul tasto → sul ponticello, or alternative notations, etc. It’s found in automation in DAWs. In patternland, Ppar already expresses that concept for voices, and Pbind uses it for an event’s coupled keys (discrete). I’m thinking of a way of keeping what is discrete and what is continuous in parallel, because the interface will be clearer if the concepts are well defined given current practices. Regarding SynthDefs as modules, I completely agree with that; even more, I sent the \amp values to the group instead of the synth :wink: I don’t really have much more to say, it’s just an idea I wanted to share with you for consideration.

It’s an interesting concept. I recently had a somewhat similar problem: I wanted to automate the filter and resonance knobs on my analog hardware synth during long-playing MIDI notes generated from SuperCollider patterns. I solved it with parallel patterns, where the parallel pattern (for knob automation) was generated automatically from the note-generating pattern (but with much finer time division, so you could hear the filter/resonance evolve during the sustained note). This was not too hard because it was based on Panola input strings, so I had high-level information to reason on. Everything was precalculated, so I didn’t need anything that could be changed while performing. If I understand correctly, your event-in-event approach could also be adapted for such a use case.
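
A minimal sketch of that parallel-pattern idea, without the Panola layer (the MIDI device name and CC number are illustrative assumptions):

(
MIDIClient.init;
~m = MIDIOut.newByName("MyHWSynth", "MIDI 1");  // substitute your own device

~notes = Pbind(
	\type, \midi, \midiout, ~m, \midicmd, \noteOn,
	\midinote, Pseq([60, 63, 67], inf),
	\dur, 2, \legato, 0.9
);

// knob automation at a much finer time division than the notes
~knobs = Pbind(
	\type, \midi, \midiout, ~m, \midicmd, \control,
	\ctlNum, 74,  // often filter cutoff, but synth-dependent
	\control, Pseg(Pwhite(20, 120, inf), 2, \lin).asInteger,
	\dur, 0.05
);

Ppar([~notes, ~knobs]).play;
)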

If you wanted to use automatic pitch conversions, would it help/be possible to add the event-in-an-event using a Pbindf?

It’s exactly for this use case :grin:

Currently I’m leaning toward a solution that takes inspiration from modular gear, where we would have a V/OCT input for the base pitch, and a further CV input to modulate the frequency.

In fact, the “LFO with depth envelope” example does exactly this – look carefully – there’s degree: 2 in there, and with the default C major scale, it does actually start with E.

  • Isn’t it a pain to create extra control inputs for modulation signals?
    • Not if you swap out NamedControl.kr(\freq...) for ModControl.kr(\freq...). (Another case where we have gotten used to using a lower-level abstraction, instead of building more useful higher-level abstractions.)
  • Where is ModControl?
    • In my gist.

That is, the conventional usage pattern in SC is to have a single frequency input. So the decision here is whether to try to make a more complex modulation scheme fit into this convention, or to expand the concept to include conventions from other synthesizers. (Part of the general thrust of my position here is that SC conventions may be unnecessarily limited, by preferring what is convenient to code over what is commonly available in synth design. It’s consistent, then, for me to propose a new convention for synth inputs.)
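
For illustration, here’s roughly the kind of thing ModControl.kr might expand to. This is my sketch of the idea, not the gist’s implementation; the \freqMod and \freqModDepth control names are inferred from the event keys in the “LFO with depth envelope” example.

(
SynthDef(\testModByHand, { |out, gate = 1, amp = 0.1|
	var eg = EnvGen.kr(Env.asr(0.01, 1, 0.1), gate, doneAction: 2);
	var base = NamedControl.kr(\freq, 440);
	var mod = NamedControl.kr(\freqMod, 0);        // map a modulator bus here
	var depth = NamedControl.kr(\freqModDepth, 1); // map a depth envelope here
	// one plausible \exp warp: a mod of ±1 scales freq by depth^(±1),
	// so depth = 1 means no modulation
	var freq = base * (depth ** mod);
	Out.ar(out, (SinOsc.ar(freq) * (eg * amp)).dup);
}).add;
)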

I think there are a couple of cases being raised:

  • Something like a filter envelope, which resets every note (and which could conceivably be swapped out for every note).

  • Automation, which spans multiple notes.

I’m not convinced that these are the same problem – i.e., that one and only one coding style is the optimal solution for both problems.

You raise an entirely valid point about modulation signals whose timing does not line up exactly with note boundaries – event-within-event would not handle that well. But, if they do line up with note boundaries, then parallel patterns would require the user to duplicate timing patterns, which is not at all transparent – and event-within-event would be clean and clear.

Probably the best programming solution is to build a common architecture to support both use cases, and then express the use cases (slightly) differently.

hjh

I’m wondering if thinking of it as event-within-event is correct? Isn’t it still a parallel event with delta = 0? The distinction here being, though, that these parallel events would be sequenced from a single Pbind, as opposed to separate Pbinds (and of course managing the dependencies and wiring).

Almost, but… check the multivoice expansion example. I can’t think of any way to do that with separate events.

hjh

Looking forward to trying this – curious whether the ModControl and TempBus classes have other uses outside the \notemap Event.

I definitely think that compound Synths with temporary external buses could potentially solve many problems outside the pattern system.

To take a little different tack – ModControl and TempBus would be poorly designed, or even useless, if they could not be used outside of this specific event type.

ModControl has no references to events at all. It creates a couple of extra controls for modulation, if you don’t supply modulation signals, but the class doesn’t (and shouldn’t) put any limits on the ways that you could use the additional controls. And, the \notemap event has no references at all to ModControl. So there is no reason to infer tight coupling here.
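
Concretely, nothing stops you from driving those controls by hand, outside the event system entirely. (Again, \freqMod and \freqModDepth are my inference from the event keys used earlier, not something I’ve checked against the gist.)

(
var bus = Bus.control(s, 1);
var lfo = Synth(\lfo, [out: bus, rate: 5]);
var note = Synth.after(lfo, \testMod, [freq: 330, amp: 0.2]);
note.map(\freqMod, bus);       // plain nodes and buses; no \notemap involved
note.set(\freqModDepth, 1.2);  // give the modulation an audible depth
// later: note.release; lfo.set(\gate, 0); bus.free;
)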

The fact that TempBus defines an event type function doesn’t imply that the event type is the only way to use the class. So, again, no reason to assume tight coupling.

hjh

Because streams are strictly sequential as data structures, that’s the best solution for that particular case; doing it with separate streams would be much more verbose. However, it will not work to represent a second voice with a different rhythm. Recursion of the structure is only possible at the Pbind/Ppar level:

(
Pseq([
    Pbind(\midinote, Pseq([62, 59, 67]), \dur, 0.25),
    Ppar([
        Pbind(\midinote, Pseq([65, 64]), \dur, 0.5),
        Pbind(\midinote, Pseq([60]))
    ]),
    Pbind(\midinote, Pseq([62, 65, 69, 72, 71]), \dur, 0.25),
], 2).play;
)

Sure, which is why I said “Probably the best programming solution is to build a common architecture to support both use cases, and then express the use cases (slightly) differently.”

That is, not either/or but both/and. (If someone were to object that “it’s two ways to do the same thing,” I’d answer that they’re not the same thing.)

hjh

I did some more on this over the weekend – now it’s more flexible about the conditions for releasing the bus and the sub-event synths.

Before posting an update, I wanted to consider Lucas’s case of automation, where the timing of the control events is independent of the notes.

Problems arise immediately. Let’s consider pseudocode such as:

PAutomation(
	parentPattern: Pbind(
		\instrument, \default,
		\dur, Pseq([0.5, 1], 2),
		\legato, 0.99
	),
	autoPatterns: [
		detunedFreq: Pbind(
			\instrument, \ctlEnv,
			\dur, Pseq([0.25, Pn(0.5, inf)]),
			\env, Pfunc { |ev|
				Env(
					Array.fill(2, { exprand(200, 800) }),
					[rrand(0.1, 0.7)],
					\exp
				)
			}
		)
	]
)

If you could be sure that the automation synths all exactly touch, with no gaps and no overlaps, then it would be easy: PAutomation would allocate a TempBus for every autoPattern pair, track synths, and release the bus when all synths have ended.

But you can’t be sure of that.

  • Some of the ctlEnv synths may be shorter than delta – if it’s a control bus, the mapped-in signal would just hold its last value until the next synth starts, which may be OK; see the sketch after this list. (Audio mapping would suddenly go silent.)

  • Some of the ctlEnv synths may be longer than delta – what happens in that case? Switch to a new bus? (In that case, would currently-playing parent synths also switch to the new bus? Or stay on the old one?) Mix the two events onto the same bus (which would be questionable for, say, frequency – you wouldn’t want two ctlEnvs in the range of a few hundred Hz to be summed)? Or, forcibly free the old \ctlEnv node?
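
That “hold the last value” behavior is easy to verify on a control bus (a self-contained sketch; the def names are just for this demo):

(
SynthDef(\holdDemo, { |out, freq = 440|
	Out.ar(out, SinOsc.ar(freq, 0, 0.1).dup)
}).add;

SynthDef(\sweep, { |out|
	// writes for 2 seconds, then frees itself
	Out.kr(out, XLine.kr(200, 800, 2, doneAction: 2))
}).add;
)

(
b = Bus.control(s, 1);
x = Synth(\holdDemo);
x.map(\freq, b);
y = Synth.before(x, \sweep, [out: b]);
// after \sweep ends, x keeps sounding at 800 Hz:
// the control bus simply holds its last written value
)

x.free; b.free;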

I’m curious enough about the problem to sketch out a pattern class for it – but there’s no point in doing so if the semantics aren’t clear. (The overlap case is actually a fairly serious objection: wrong choices here are likely to produce nonsense behavior, and each of the possible solutions could easily be nonsense in one context or another.)

Any further thoughts about that?

hjh
