ProxySpace, Ndefs or Standard?

I’m somewhat new to SC (after several months of serious study).

I’m trying to get a system down for a sort of ‘live-coding’ performance, but also to use it like a DAW, getting stems out from chosen nodes for further processing.

There seems to be a large divide in capabilities between the options (though I’ve also seen people do some really genius stuff).

In my opinion, most of the help written for the non-proxy style offers a lot of options that are either unavailable or pretty cryptic with any kind of Proxy.

I may be looking for a magic bullet that doesn’t exist, but I’d love to have the power and ease of interactive programming and automatic synchronization via ProxySpace or Ndefs, combined with the bus routing capabilities of the standard environment (the way most of the help and examples demonstrate).

I’d like the ability to play samples for beats, much like: https://theseanco.github.io/howto_co34pt_liveCode/

But also use standard Synth code routing without Proxies. And by ‘routing’ I mean sending multiple synths to an FX bus. I generally work that way in a DAW, with some FX in the instrument/track chain, and some on their own buses with various tracks routing to them (which would also be stemmed out).

One of the really intriguing powers of SuperCollider is the audio and control bus capability for dynamically altering signals.

I seem to get 80% of the way there with either path, but run into sync issues (no common time) or routing issues in ProxySpace. I did find this most helpful: Ndef Bus question

And @jamshark70 mentioned the JITLib ProxySubmix quark for ProxySpace (thank you).

But I’d love to hear what everyone has done to get a working system functioning in this way. I feel like I’m missing something terribly crucial, on top of the fact that there are thousands of ways to approach this.


I’ve been mulling this thread over.

It’s definitely a weakness of SC (and probably of many other programming tools with “import”-able libraries) that we end up with a lot of overlapping approaches, none of which does “everything” by itself but which reveal incompatibilities when you try to integrate them. This may be an unsolvable problem. There’s no library that does everything, and perhaps it’s impossible for any library to handle all requirements equally well. (JITLib is brilliant but also creates an expectation that it should handle more than it can.)

It might be useful to take a step back and identify the requirements. You’ve stated two: “common time” and flexible, DAW-like routing. I’m curious to hear more details: Where do you run into sync problems? What are the routing problems that ProxySubmix doesn’t address? Identifying specific requirements leads to new features (conversely, it’s pointless to spend hours developing new features that end up not meeting the requirement).

This is a non-negotiable requirement for me – I can’t use any audio environment that doesn’t support this. Which is why some of my earliest work in SC was the ddwMixerChannel quark.

~hwOut = MixerChannel(\hwOut, s, 2, 2);

~rvb = MixerChannel(\rvb, s, 2, 2, outbus: ~hwOut, completionFunc: { |chan| /* playfx reverb here */ });

~track = MixerChannel(\track, s, 2, 2, outbus: ~hwOut, completionFunc: { |chan| chan.newPostSend(~rvb, -6.dbamp) });

And it transparently handles execution order.
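
For example (a hedged sketch – check the MixerChannel help file for the exact signatures), sources play into a channel instead of writing Out.ar(0, ...) to hardware themselves:

~track.play({ SinOsc.ar(440, 0, 0.2).dup }); // heard via ~track -> ~hwOut
~track.play(Pbind(\degree, Pseq([0, 2, 4], inf), \dur, 0.5)); // patterns play on the channel too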

But it expects to manage nodes’ groups and buses… meanwhile, JITLib expects to manage groups and buses in its own way. So they’re not quite compatible, and making them compatible would break assumptions of one or the other. (I think I might have hacked it at one point to put a NodeProxy’s group into a mixer’s synthgroup and route it to the mixer’s bus, but I didn’t test it very carefully. That approach may or may not be stable.)

In short… this isn’t an easy question. You might have to build something. That’s the good and bad of SC: You can have what you want, with some effort; maybe someone built something that’s a lot of what you want, but doesn’t handle everything you want; but this is all in contrast to a DAW (which convinces you to stop wanting what it doesn’t offer).

hjh


That situation might be better than I thought.

As a proof of concept, I just added this to downloaded-quarks/ddwMixerChannel/playInMixerGroup.sc:

+ NodeProxy {
	playInMixerGroup { |mixer, target, patchType, args|
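		// move the proxy's group inside the mixer's synthgroup, then
		// monitor the proxy onto the mixer's input bus at the tail of that group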
		group.moveToTail(mixer.synthgroup);
		this.play(mixer.inbus, this.numChannels, target, vol: 1, fadeTime: this.fadeTime, addAction: \addToTail);
	}
}

Then you can do something like this:

s.boot;

m = MixerChannel(\test, s, 2, 2);

Ndef(\m, { SinOsc.ar(220, 0, 0.1).dup });

m.play(Ndef(\m));

Now the NodeProxy lives in the mixer’s node structure. When mixers are created or destroyed (updating the order of mixers), the proxy should move along with the mixer.

The caveat is that there is no permanent link between the proxy and its mixer.

Ndef(\m).stop;  // OK, 'monitor' is removed

Ndef(\m).play;  // no sound

It looks like the last line remembers the mixer’s bus to play onto, but not the group. (Maybe that could be considered a bug in JITLib – maybe a proxy’s monitor should be created with \addAfter relative to the proxy’s group, rather than added to the tail of the default group. If that JITLib behavior were changed, this would likely be [more?] transparent.)

I think it should work if you remember not to play the proxy directly – i.e., if you always play using theMixer.play(theProxy) instead of theProxy.play.
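
In code, with the names from the proof of concept above:

m.play(Ndef(\m));  // good: the proxy stays inside the mixer's node structure
// Ndef(\m).play;  // avoid: the monitor would land at the tail of the default group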

Continuing – it works with a reverb postsend too:

n = MixerChannel(\rvb, s, 2, 2, completionFunc: { |chan|
	chan.playfx { |outbus|
		var sig = In.ar(outbus, 2);
		FreeVerb2.ar(sig[0], sig[1], 1, 0.9, 0.2);
	}
});

Ndef(\m, { SinOsc.ar(TExpRand.kr(200, 800, Dust.kr(11))).dup });

m.newPostSend(n, 0.8);

hjh


but this is all in contrast to a DAW (which convinces you to stop wanting what it doesn’t offer).

!


This is all highly valuable insight and what looks to be a workable solution @jamshark70.

I think half the battle is trying to learn how to even know what is possible. I don’t want much, just everything. :slight_smile:

But to answer your first question: I’m trying to leverage the unique and powerful capabilities of SC for synth networking and versatile sequencing (like everyone else, probably). But I also want to know that I can route various outputs to stem files from any node. I had that working in ProxySpace using its clock, so that all stems had a good common starting point to bring into a DAW for mixing. I’ve also successfully controlled recording in REAPER for multitrack input (it just takes more CPU than writing to files from SC).

Timing/syncing may not be as much of an issue as I thought, but the idea is that if I start various Pbinds with sequenced ‘hit’ samples, they should all be on the grid, not just start arbitrarily whenever I execute play.
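
(A minimal sketch of the stock mechanism for this – patterns started on the same clock with the same quant land on a shared grid; the pattern contents here are placeholders:)

(
// both start at the next 4-beat barline of the same (default) clock
Pdef(\h, Pbind(\degree, Pseq([0, 4], inf), \dur, 0.25)).play(quant: 4);
Pdef(\b, Pbind(\degree, Pseq([-7], inf), \dur, 1)).play(quant: 4);
)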

So I’m mostly there on the JITLib side, short of some of the very helpful examples you just sent. I’ve also found this as another nifty timing solution: https://youtu.be/P9QaPtrPJbs

It’d be wonderful to route these to different FX bus nodes (à la your example), which can also be stemmed, with possible routing even between those nodes – i.e., any audio-rate signal sent many-to-one or one-to-many. That’s how I’ve worked in a DAW (like Ableton Live or Bitwig, etc.).

I may be pushing too hard for what won’t work or wanting too much, and I may not be taking enough time building what might. But I needed to get some feedback on if I was even on the right path. Every day I find something new in the help files or examples.

When I get some time, I will experiment with what you sent – I really appreciate the help and knowledge. SC is a wonderful nightmare of opportunity and experimentation (which is why I’ve joined the cult) – but I’m also trying to make sure that, if I spend more time learning, I won’t be painted into a corner, unable to get tracks recorded or in sync. It sounds like all I’m doing is making beats, which isn’t my intent, but it’s a good reference for timing.

Thanks again for taking the time - I’ll be back as I understand more and find a mix that works (no pun intended).

I’m not sure I understand exactly what you’re looking for, but with proxies you can route multiple proxies to a single fx like this:

Ndef(\fx).play;
Ndef(\fx)[0] = \mix -> { Ndef(\synth1).ar };
Ndef(\fx)[1] = \mix -> { Ndef(\synth2).ar };
Ndef(\fx).filter(10, {|in| DelayC.ar(in) });

// you can control the wet and dry levels like this
// (\mixN is the level of the source in slot N; \wet10 belongs to the filter at slot 10)
Ndef(\fx).set(\mix0, 0.5, \mix1, 0.9, \wet10, 0.5);

If you want each Ndef to go to different channels, you can do this:

Ndef(\synth1).play(out:0);
Ndef(\synth2).play(out:2);
Ndef(\fx1).play(out:4);

You can then create a multichannel wav file like this:

s.record(numChannels:6);

Once you’re done recording, you can bring that into Audacity and save the channels into different files – now you have your stems.


The one thing missing here is that a “send” in a DAW has its own independent level control. ProxySubmix does implement send levels per source channel (as do MixerChannel’s sends). The \mix technique as shown here assumes every pseudo-send’s level is 1.0 (though you could implement independent send levels in the \mix synth functions).

hjh

Yeah - I think ProxySubmix.addMix is basically doing something like this:

Ndef(\fx1).put(0, { Ndef(\synth1).ar * \synth1.kr } );

instead of using the \mix role.

Such that you can control the input level like this

Ndef(\fx1).set(\synth1, 0.5);

instead of like this:

Ndef(\fx1).set(\mix0, 0.5);

Having the synth name as a control name is more helpful.

But I reckon they are equivalent.


Just touching this point for now – you’re never actually painted into this corner for recording, not with any framework. There might not be a pre-written method for synced recording, but the basic procedure for using DiskOut always allows the possibility of starting multiple recordings on exactly the same sample.

Recording works like this:

  1. Allocate one buffer for each track.
  2. buffer.write(...), specifying the path to write to and leaveOpen: true.
  3. s.sync, to let the server finish steps 1 and 2.
  4. Run one DiskOut for each buffer. These could be in one synth handling all of them, or one synth per track (if you create the multiple synths in a bundle, they will be sample-synced).
  5. When finished recording, free the synth(s) and close each buffer (and wait a bit, then you can free the buffers too).

record methods package this into one easy step, and usually do the step 3 sync separately for each track, breaking recording sync. But there is never a reason, in any context, why you can’t run the steps of the process on your own and control the sync as you like.
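
A minimal sketch of those five steps (the \diskRec2 SynthDef, the file names, and the ~buses array of stereo buses to capture are assumptions for illustration):

(
SynthDef(\diskRec2, { |bufnum, in|
	// step 4's worker: stream two channels of a bus to disk
	DiskOut.ar(bufnum, In.ar(in, 2));
}).add;
)

(
fork {
	var paths = ["track1.wav", "track2.wav"].collect { |name|
		thisProcess.platform.recordingsDir +/+ name
	};
	~recBufs = paths.collect { |path|
		var buf = Buffer.alloc(s, 65536, 2); // step 1: one buffer per track
		buf.write(path, "wav", "int24", leaveOpen: true); // step 2
		buf
	};
	s.sync; // step 3: wait for the allocations and file-opens
	// step 4: all s_new messages in one timestamped bundle -> sample-synced
	s.makeBundle(s.latency, {
		~recSynths = ~recBufs.collect { |buf, i|
			Synth.tail(s, \diskRec2, [bufnum: buf, in: ~buses[i]])
		};
	});
};
)

// step 5: when finished
(
~recSynths.do(_.free);
~recBufs.do(_.close); // close the files; free the buffers a moment later
)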

MixerChannel has a startRecord method but also a separate prepareRecord, which I could use in a helper function to ensure recording sync (more on that below).

hjh

Thank you all for the valuable info and examples here. It felt like it should be possible, but even some of your file-writing techniques were new to me. When I originally tried this, the files were all in sync with each other but not on any grid (recording just started when I hit record, even with prepareForRecord first). It could have just been some bad setup on my part. I got around that in ProxySpace with a ProxyRecord class (with a modification to use the proxy clock I set up). That worked. I thought I was good to go, then got into the entire audio FX bus routing issue.

Let me chew on these examples for a while. I’ll post back. I can’t tell you enough how helpful it is to have knowledgeable people giving this kind of advice… I need to buy a round of brews for you all :slight_smile:

Then:

  1. Create buffers.
  2. Write the buffers and leave them open.
  3. s.sync
  4. Wait for the next barline (or other quant).
  5. Run the recording synths in a bundle.

For step 4 – at the moment s.sync unblocks the thread, you are at some moment in time (the thread’s clock’s beats – thisThread.clock.beats, or the shortcut thisThread.beats). You want to be at a later moment, calculated according to the quant grid. It happens that there’s already a method to find out how long to wait (timeToNextBeat), but even if there weren’t, you could do it by resolving a quant against the clock’s grid (nextTimeOnGrid), subtracting the current time – thisThread.clock.nextTimeOnGrid(quant) - thisThread.beats – and .wait-ing that amount of time.

(
fork {
	theThings.do { |thing| thing.prepareRecord(...) };
	s.sync;
	// quant == -1 is next barline
	thisThread.clock.timeToNextBeat(-1).wait;
	// now you should be right on the barline
	s.makeBundle(s.latency, {
		theThings.do { |thing| thing.startRecord(...) };
	});
};
)

Let me also go out on a limb and point out a little verbal tic that I’m noticing – the orientation in a few of the posts in this thread has been “I tried x and I can’t make it do what I want.” This is a disempowering position – the focus is on the “can’t.” It might be more empowering to ask how to do x. It’s OK to start from a position of e.g. recording being a black box – just, there’s a difference between “recording doesn’t do x and y” vs “How does recording work in SC? I would like x and y and I don’t see where to fit that in.”

hjh

That is a very good point. My apologies for sounding negative instead of asking positive questions. That’s not exactly what I meant – I was more just trying to explain what my personal roadblocks were. But point well taken; I’ll try to correct that in future posts/questions. One thing I’ve learned working with SC is that everything is possible, often with 100 ways it can be accomplished. Both powerful and confusing – but in a good way. It just takes time to work everything out.

Sticking on topic though: can someone direct me to where I can better understand server bundle messages, as you’ve used them in some of the help above?

And I think I really need to better understand the proxy node groups. I was working with this a bit before and realized that every single proxy gets its own group. I had tried different ways of rearranging, with varying success and failure. I’m looking forward to studying some of the examples here, especially the ProxySubmix quark (which is brilliant!). Thanks for your work, and for sharing and explaining.

@jamshark70

Okay, still using rudimentary examples – this is my straight-up test with ProxySubmix:

//Boilerplate code for basic live coding functionality

(
//increase number of buffers the server has access to for loading samples
s.options.numBuffers = 1024 * 16;
//increase the memory available to the server
s.options.memSize = 8192 * 64;
s.options.numOutputBusChannels = 2;
//boot the server
s.reboot;
//display the oscilloscope
// s.scope;
//start proxyspace
p=ProxySpace.push(s);
//start tempo clock
p.makeTempoClock;
//give proxyspace a tempo
p.clock.tempo = 1;
Task({
	3.wait;
	d = Dictionary.new;
	d.add(\foldernames -> PathName("/home/hypostatic/music/samples/808s_by_SHD/Classic").entries);
	for (0, d[\foldernames].size-1,
		{arg i; d.add(d[\foldernames][i].folderName -> d[\foldernames][i].entries.collect({
			arg sf;
			Buffer.read(s,sf.fullPath);
		});
	)});
	// ("SynthDefs.scd").loadRelative;
	//loads snippets from setup folder
	//("Snippets.scd").loadRelative;
	//wait, because otherwise it won't work for some reason
	3.wait;
	//activate StageLimiter - Part of the BatLib quark
	// StageLimiter.activate;
	"Setup done!".postln;
}).start;
)
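
(A side note on the arbitrary 3.wait calls above: Buffer.read is asynchronous, so the reliable approach inside a Routine/Task is to s.sync after issuing the reads. A sketch of the same buffer loading:)

(
fork {
	s.bootSync; // resumes once the server is ready
	d = Dictionary.new;
	PathName("/home/hypostatic/music/samples/808s_by_SHD/Classic").entries.do { |folder|
		d.add(folder.folderName -> folder.entries.collect { |sf| Buffer.read(s, sf.fullPath) });
	};
	s.sync; // resumes once every Buffer.read has completed
	"Setup done!".postln;
};
)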

(
SynthDef(\bplay,
	{arg out = 0, buf = 0, rate = 1, amp = 0.5, pan = 0, pos = 0, rel=15;
		var sig,env=1 ;
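		// note: the startPos below is scaled by 44100, which assumes 44.1 kHz files;
		// BufFrames.kr(buf) * pos would be sample-rate independent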
		sig = Mix.ar(PlayBuf.ar(2,buf,BufRateScale.ir(buf) * rate,1,BufDur.kr(buf)*pos*44100,doneAction:2));
		env = EnvGen.ar(Env.linen(0.0,rel,0),doneAction:0);
		sig = sig * env;
		sig = sig * amp;
		Out.ar(out,Pan2.ar(sig.dup,pan));
}).add;
)

m = ProxySubmix(\del);
m.ar(2);

p.envir.put(\del, m);

~del = { DelayC.ar(m.ar) };
~del.play;

m.addMix(~h, postVol: true);
m.addMix(~b, postVol: true);

NdefGui(m, 8); // gets these params automagically:

(
~h = Pbind(
	\instrument, \bplay,
	// \out, ~out,
	// \addAction, \addToHead,
	\buf, d["Hats"][2],
	\dur, Pseq([0.25, 0.5], inf),
)
)

// ~h.play;
// ~h.stop;
// ~h.free;
(
~b = Pbind(
	\instrument, \bplay,
	// \out, ~out,
	\buf, d["Bass Drums"][2],
	\dur, Pseq([0.5, 1, 0.5, 1], inf),
	\amp, 0.6
).play;
)

Order of execution:

  1. Run a somewhat commented-out version of the setup code from https://theseanco.github.io/howto_co34pt_liveCode/2-3-Setup-Code---Making-Performance-Easier/ to initiate ProxySpace (mainly to load some test samples for kits) – not strictly necessary, but I’m using Pbinds with the samples…

  2. Create a ProxySubmix (per your help file):

m = ProxySubmix(\del);
m.ar(2);

p.envir.put(\del, m);
  3. Start the ~h and ~b Pbinds (this may be my problem) with definition.play – not ~h.play or ~b.play after defining (based on issues I had in previous tries, in which the only way routing worked was as in my code).

  4. Create an FX channel:

~del = { DelayC.ar(m.ar) };
~del.play;
  5. Add Pbind sources to the ProxySubmix:
m.addMix(~h, postVol: true);
m.addMix(~b, postVol: true);
  6. Initiate the GUI:
NdefGui(m, 8); // gets these params automagically:

This doesn’t seem to work. I assume I’m using ProxySpace with Pbinds incorrectly, as your help-file code works just fine.

I’ve also tried this by replacing the plain SynthDefs with ProxySpace functions, but then I have the problem of the Pbinds not finding \instrument, ~bplay. (I think that syntax is just wrong in general, but I cannot remember where I saw how to properly refer to a proxy – knowing that ~bplay = {...} in ProxySpace is the equivalent of Ndef(\bplay, {...}).)

Again, likely missing some large piece of the puzzle here. Any help would be welcome.

I also have to assume that, since the SynthDefs are wired to Out at bus 0 by default and I’m not supplying anything else, this is breaking too (hence my trying the ProxySpace function method instead).

I haven’t tried the entire Ndef-in-the-default-environment route yet either, with your examples or the ones @droptableuser gave. But at this point I’m trying not to jump rails.

I think I have a lot of work to do with Pbinds associated with proxies. @droptableuser’s code also works well, until I try to call the Ndef sample player with a Pbind:

(
s.reboot;
Task({
	3.wait;
	d = Dictionary.new;
	d.add(\foldernames -> PathName("/home/hypostatic/music/samples/808s_by_SHD/Classic").entries);
	for (0, d[\foldernames].size-1,
		{arg i; d.add(d[\foldernames][i].folderName -> d[\foldernames][i].entries.collect({
			arg sf;
			Buffer.read(s,sf.fullPath);
		});
	)});
	// ("SynthDefs.scd").loadRelative;
	//loads snippets from setup folder
	//("Snippets.scd").loadRelative;
	//wait, because otherwise it won't work for some reason
	3.wait;
	//activate StageLimiter - Part of the BatLib quark
	// StageLimiter.activate;
	"Setup done!".postln;
}).start;
)

Ndef(\fx).play;
Ndef(\fx)[0] = \mix -> { Ndef(\synth1).ar };
Ndef(\fx)[1] = \mix -> { Ndef(\synth2).ar };
Ndef(\fx)[1] = \mix -> { Ndef(\bplay).ar };
Ndef(\fx).filter(10, {|in| DelayC.ar(in) });

// you can control the wet and dry levels like this
Ndef(\fx).set(\mix0, 0.5, \mix1, 0.9, \wet10, 0.5);

Ndef(\synth1, { LFPulse.ar(110, 0, 1, 0.5).dup }).play;
Ndef(\synth2, { LFSaw.ar(5).dup }).play;

(
Ndef(\bplay,
	{ | buf = 0, rate = 1, amp = 0.5, pan = 0, pos = 0, rel=15 |
		var sig,env=1 ;
		sig = Mix.ar(PlayBuf.ar(2,buf,BufRateScale.ir(buf) * rate,1,BufDur.kr(buf)*pos*44100,doneAction:2));
		env = EnvGen.ar(Env.linen(0.0,rel,0),doneAction:0);
		sig = sig * env;
		sig = sig * amp;
});
)

(
~b = Pbind(
	\instrument, \bplay,
	// \out, ~out,
	\buf, d["Bass Drums"][2],
	\dur, Pseq([0.5, 1, 0.5, 1], inf),
	\amp, 0.6
).play;
)

You can play an Ndef with a Pbind in at least two different ways:

using the \set NodeProxy role:

(
Ndef(\bplay)[10] = \set -> Pbind(
    \buf, d["Bass Drums"][2], 
    \dur, Pseq([0.5, 1, 0.5, 1], inf), 
    \amp, 0.6
)
)

alternatively, you can use a Pbind with the \set Event type

~b = Pbind(
    \type, \set,
    \id, Pfunc({ Ndef(\bplay).nodeID}),
    \args, #[\buf, \amp],
    \buf, d["Bass Drums"][2],
    \amp, 0.6,
    \dur, Pseq([0.5, 1, 0.5, 1], inf),
)

droptableuser is right – some background details:

Pbind doesn’t call synths or SynthDefs or NodeProxies by itself. It only stuffs information into events. The events do the work.
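
A quick way to see that (a generic sketch): pull one event out of a Pbind by hand.

p = Pbind(\degree, Pseq([0, 2, 4]), \dur, 0.5).asStream;
e = p.next(()); // -> ( 'degree': 0, 'dur': 0.5 ) – just data
e.play; // the *event* talks to the server, via the default \note type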

The kinds of work that events can do are defined in the default event prototype, built in Event’s class definition. The default event prototype sets up a number of event types, which are different actions to take based on the event’s data.

The default event type is \note, which looks up the SynthDef whose name is given by the \instrument key, and plays a new synth. (If the SynthDef has a gate input, the \note event type will also try to release after \sustain beats.) This is valid only for named SynthDefs.

If you want to do something other than play new synths based on a named SynthDef, then it’s necessary to use a different event type. You can’t just write \instrument, somethingElse – the \note event type doesn’t know what to do with somethingElse.

(When you create a NodeProxy based on a synth function, it’s playing one synth continuously – so this idea is not compatible with the default event type.)

Both of the solutions droptableuser posted work like this: either \type, \set (with \id and \args to control what will be set), or using a NodeProxy role that implicitly sets the event type. These will change the behavior of the existing continuous synth in the node proxy. If that isn’t what you want, then you need a SynthDef.

Your out control input will be 0 by default, but ~aProxy = Pbind(\instrument, \synthdefname, ...) should set out to point to the proxy’s bus automatically.
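
A minimal sketch of that shape (the \ping SynthDef is only an example):

(
SynthDef(\ping, { |out = 0, freq = 440, amp = 0.1|
	var sig = SinOsc.ar(freq) * EnvGen.kr(Env.perc(0.01, 0.2), doneAction: 2);
	Out.ar(out, (sig * amp).dup); // 'out' gets the proxy's bus automatically
}).add;
)

Ndef(\pings, Pbind(\instrument, \ping, \degree, Pwhite(0, 7, inf), \dur, 0.25));
Ndef(\pings).play;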

hjh


Once again, thanks for the detail. I had gone through the Pattern and Event tutorials a while ago and saw most of what you explained – but it makes a lot more sense in context! I also just found your website, HJH. I knew that I recognized the name! I haven’t made it to your chapter in the book yet, but I’ve been studying that as well. I’ve only been working with SuperCollider for maybe 4 or 5 months – I have a long way to go.

So my recent tests stuck with one paradigm for now – JITLib, but not ProxySpace (using Ndef) – with the examples given.

(
s.reboot;
Task({
	3.wait;
	d = Dictionary.new;
	d.add(\foldernames -> PathName("/home/hypostatic/music/samples/808s_by_SHD/Classic").entries);
	for (0, d[\foldernames].size-1,
		{arg i; d.add(d[\foldernames][i].folderName -> d[\foldernames][i].entries.collect({
			arg sf;
			Buffer.read(s,sf.fullPath);
		});
	)});
	// ("SynthDefs.scd").loadRelative;
	//loads snippets from setup folder
	//("Snippets.scd").loadRelative;
	//wait, because otherwise it won't work for some reason
	3.wait;
	//activate StageLimiter - Part of the BatLib quark
	// StageLimiter.activate;
	"Setup done!".postln;
}).start;
)

Ndef(\fx).play;
Ndef(\fx)[0] = \mix -> { Ndef(\synth1).ar };
Ndef(\fx)[1] = \mix -> { Ndef(\synth2).ar };
Ndef(\fx)[1] = \mix -> { Ndef(\bplay).ar };
Ndef(\fx).filter(10, {|in| DelayC.ar(in) });

// you can control the wet and dry levels like this
Ndef(\fx).set(\mix0, 0.5, \mix1, 0.9, \wet10, 0.5);

Ndef(\synth1, { LFPulse.ar(110, 0, 1, 0.5).dup }).play;
Ndef(\synth2, { LFSaw.ar(5).dup }).play;

Ndef(\synth2).stop;
Ndef(\synth1).stop;
Ndef(\fx).stop;

(
Ndef(\bplay,
	{ | buf = 0, rate = 1, amp = 0.5, pan = 0, pos = 0, rel=15 |
		var sig,env=1 ;
		sig = Mix.ar(PlayBuf.ar(2,buf,BufRateScale.ir(buf) * rate,1,BufDur.kr(buf)*pos*44100,doneAction:2));
		// env = EnvGen.ar(Env.linen(0.0,rel,0),doneAction:0);
		// sig = sig * env;
		// sig = sig * amp;
});
)

(
~b = Pbind(
	\type, \set,
	\id, Pfunc({ Ndef(\bplay).nodeID}),
	\args, #[\buf, \amp],
	\buf, d["Bass Drums"][2],
	\dur, Pseq([0.5, 1, 0.5, 1], inf),
	\amp, 0.6
).play;
)

(
Ndef(\bplay)[10] = \set -> Pbind(
    \buf, d["Bass Drums"][1],
    \dur, Pseq([0.5, 1, 0.5, 1], inf),
    \amp, 0.6
).play;
)

As this is using samples that won’t translate, I’ll just say that

(
Ndef(\bplay)[10] = \set -> Pbind(
    \buf, d["Bass Drums"][1],
    \dur, Pseq([0.5, 1, 0.5, 1], inf),
    \amp, 0.6
).play;
)

yields a piano tone. I assume this is because it is somehow playing something like

{SinOsc.ar(note from Pbind)}.play

instead of actually playing the sample. When I run

d["Bass Drums"][1].play

in the same running environment, it plays the sample once.

I seem to be missing something crucial. Is it that the samples are loaded into the standard environment and the proxies have their own? I know you can share between proxy spaces.

If you want to set a NodeProxy, then you need to keep the synth running continuously.

doneAction: 2

So you reach the end of the sample, and this deletes the synth. \set then has nothing to set.

To do this by Ndef, you need to retrigger the PlayBuf (and envelope, if you’re using one).

This is pointing to a gap in the documentation. There’s a “new synth per note” way of doing things, and a “retrigger existing synth” way of doing things. We don’t explain why you would want to do one or the other, or the gotchas inherent in both (such as, Ndef playing a Pbind can do “new synth” style but Ndef playing a function doesn’t, or, if you’re re-triggering, you have to handle discontinuity at the retrigger point – with “new synth” style, you get envelope crossfading for free, but it takes more care with retriggering).

IMO “new synth” style is better for this use case. Retriggering is a bit of a pain. If you think you’re confused now, wait until you try to make retriggering really work properly.
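
For completeness, a hedged sketch of the retriggering style (reusing the d dictionary from above; note it does nothing about the click at the retrigger point, which is exactly the pain mentioned):

(
Ndef(\bplay, { |buf = 0, rate = 1, amp = 0.5, t_trig = 0|
	// t_trig is a trigger-rate control: setting it to 1 restarts PlayBuf
	PlayBuf.ar(2, buf, BufRateScale.kr(buf) * rate, t_trig) * amp
});
Ndef(\bplay).play;
)

(
Ndef(\bplay)[10] = \set -> Pbind(
	\t_trig, 1, // retrigger on every event
	\buf, d["Bass Drums"][1],
	\dur, Pseq([0.5, 1, 0.5, 1], inf)
);
)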

When using patterns with NodeProxies, you should not play the pattern independently.

You play / stop the NodeProxy. The NodeProxy controls the pattern.
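
For example (reusing the hypothetical \ping SynthDef from the sketch further up):

Ndef(\b, Pbind(\instrument, \ping, \degree, Pseq([0, 4, 7], inf), \dur, 0.5)); // assign only – no .play on the Pbind
Ndef(\b).play; // the proxy starts (and owns) the pattern
Ndef(\b).stop;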

Similarly:

// works
~a = { SinOsc.ar(440, 0, 0.1).dup }.play;
~a.release;

// but...
Ndef(\a).ar(2);
Ndef(\a)[0] = { SinOsc.ar(440, 0, 0.1).dup }.play;

^^ The preceding error dump is for ERROR: A synth is no valid source for a proxy.
For instance, ~out = { ... }.play would cause this and should be:
~out = { ... }; ~out.play; or (~out = { ... }).play;

What you’re doing with the pattern here is the equivalent of playing an Ndef function before passing it to the Ndef. When you assign something to an Ndef, you are providing a proxy with a source. The source does not need to play separately (or rather, you should take care never to play the source separately).
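
For reference, the corrected version of the Ndef example above:

Ndef(\a).ar(2);
Ndef(\a)[0] = { SinOsc.ar(440, 0, 0.1).dup }; // assign the function only
Ndef(\a).play; // then monitor the proxy
Ndef(\a).stop;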

hjh