How to call function from Done? (aka how to free buffers from complex synth)

TLDR: this might be the wrong way to go about things but I think I need something like

var sig = PlayBuf.ar(2, bufnum),
doneTrig = Done.kr(sig);
// somehow call buf.free() + output diagnostics when Done fires

Longer version:
I am playing a large number of samples (thousands, too much to fit in RAM at once) for a durational installation. My basic setup with a Routine/SystemClock works, but after a while I run out of bufnums. I’ve discovered that this is because PlayBuf’s doneAction: freeSelf doesn’t free the Buffer, only the synth, presumably because normally you’d want to keep buffers cached for next playback.

with a simple synth I can do what I want fairly easily:

Buffer.read(s, path, 0, -1, { |buf|
	("playing buffer " + buf.bufnum).postln;
	{ PlayBuf.ar(2, buf, 1.0, doneAction: Done.freeSelf) }.play.onFree({
		("freeing buffer " + buf.bufnum).postln;
		buf.free;
	});
})

However, in a more complex patch I can’t call onFree on the whole thing, because only the PlayBuf node gets freed, not the subsequent ones (GVerb, Splay, etc). I don’t want to free the whole chain, because I want reverb tails to remain, etc. I also can’t call onFree on PlayBuf itself, because it isn’t actually a Node yet (not playing), as far as I understand.

The closest thing I can see is the suggestion from here: Msg from Server upon PlayBuf completion?

a = { |bufnum|
	var sig = PlayBuf.ar(2, bufnum),
	doneTrig = Done.kr(sig);
	// [0]: little trick, doneTrig is actually 2 channels; only need 1
	// if it's mono, leave out '[0]'
	SendReply.kr(doneTrig[0], '/sampleDone', [bufnum]);
	FreeSelf.kr(doneTrig);
	sig
}.play(args: [bufnum: b]);

this sends a reply when Done happens on sig; all I want to do instead is to run a block that calls buf.free with the correct buffer reference retained, which is probably not possible with messages like this

so:

(1) is there a way to run a Function from a Done?
(2) is there a much simpler way of doing this? it feels a bit too roundabout to achieve something that should be relatively commonplace (freeing buffers)

SendReply is correct; then you would use OSCFunc to receive the message and act on it.

However, in a more complex patch I can’t call onFree on the whole thing because only the PlayBuf node gets freed, not the subsequent ones (GVerb, Splay, etc). I don’t want to free the whole chain, because I want reverb tails to remain, etc.

Here’s an example that demonstrates how to free the buffer based on SendReply:

s.boot;

b = Buffer.read(s, Platform.resourceDir ++ "/sounds/a11wlk01.wav");

(
a = {
	var sig = PlayBuf.ar(1, b),
	rvb = GVerb.ar(sig * 0.5),
	done = Done.kr(sig);
	// this removes the synth at the end of the reverb tail
	DetectSilence.ar(rvb, doneAction: 2);
	SendReply.kr(done, '/bufDone', [b.bufnum]);
	rvb
}.play;

OSCFunc({ |msg|
	Buffer.cachedBufferAt(s, msg[3].asInteger)
	.debug("freeing")
	.free
}, '/bufDone', s.addr).oneShot;
)

I was concerned that this would give you an error message when PlayBuf is trying to access the buffer after you freed it, but it looks like, after PlayBuf registers as “done,” it doesn’t keep accessing the buffer, so the above might be OK.

The tricky thing (and admittedly, I’m pretty sure this is not documented well) is getting the Buffer object from the buffer number. SendReply sends ['/yourpath', nodeID, replyID, values...], so the buffer number is msg[3] – sent as a float, which needs to be converted to an integer – and then cachedBufferAt can look up the object (the language caches every Buffer it creates, so that its variables can be populated after loading).

it feels a bit too roundabout to achieve something that should be relatively commonplace (freeing buffers)

But it isn’t commonplace, not really. The thing that is not commonplace is knowing exactly when you want to free the buffer. (The commonplace case here would be to do some cleanup when the node ends. For that, we have onFree. Since you wanted to do something more specific, then the code is also more specific.)

SC Server’s core architectural decision is to split server-side audio processing and language-side control calculations, with communication by OSC. Some of the communication is automatic, or prepackaged (onFree). For nonstandard usages, it’s necessary to learn how to send messages (from the server, SendReply) and receive them (in the language, OSCFunc or OSCdef).
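The general shape of that round trip looks something like this (a minimal sketch; the '/ping' path and the values sent are just illustrative):

```
(
// language side: listen for the reply
OSCdef(\ping, { |msg|
	("got reply, value:" + msg[3]).postln;
}, '/ping');

// server side: fire SendReply once per second with an arbitrary value
{
	var trig = Impulse.kr(1);
	SendReply.kr(trig, '/ping', [Line.kr(0, 10, 10)]);
	Silent.ar
}.play;
)
```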

hjh

Hi, there is a way to find out in the language when a Synth is freed, using the '/n_end' notification from the server. For that you need to register the Synth with NodeWatcher. I have done this in my sc-hacks library by defining a method onEnd for Node. This uses my own Notification class, which is in the sc-hacks library. Prompted by your question, I will propose a pull request for Notification and Synth onStart/onEnd, as these are basic and important things to have imho. In the meanwhile you could try adding the following 2 files to your library, to provide onEnd functionality.


The only disadvantage is that you will have to isolate your buffer playback in a synth separate from the effects. (You’ll need to feed the PlayBuf output to a bus from which the effects synth reads.) But that might in fact help tidy up your code, by separating the buffer source mechanism from the effects mechanism.

The relevant code for Node::onEnd is below - you can construct your own substitute of my addNotifier method custom-tailored for your synths, but I believe the solution in the above 2 files should work out of the box. This would be a good example case to test its usefulness in order to propose my pull request. I would appreciate feedback and will help you out if more code is needed for this to run.

Usage: the listener argument can be any object, for example your installation's main class or some other object involved. The action is a function that will be called when your synth stops. This could be something like { MyInstallation.freeBuffer(buffer) } or similar.

+ Node {
	onEnd { | listener, action |
		// register so the language is notified of this node's '/n_end'
		NodeWatcher.register(this);
		listener.addNotifierOneShot(this, \n_end, action);
	}
}
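Assuming the two files mentioned above are installed, usage could look something like this (a sketch; the file path is just SuperCollider's bundled example sound):

```
(
Buffer.read(s, Platform.resourceDir ++ "/sounds/a11wlk01.wav", action: { |buf|
	var synth = { PlayBuf.ar(1, buf, doneAction: Done.freeSelf) }.play;
	// listener is the buffer itself; the action fires once when the synth ends
	synth.onEnd(buf, {
		("freeing buffer" + buf.bufnum).postln;
		buf.free;
	});
});
)
```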

aha, so if I’m understanding correctly, doing things like buffer.free is a “control calculation” (client-side, aka sclang), whereas something like WhiteNoise.ar(Done.kr(line)) is a (server-side, aka scsynth) graph node, therefore it’s not possible to call control actions from Done

this is very illuminating to my (still very basic!) understanding of the Client-Server split in SC, thank you for this explanation

it’s worth noting that I’ve also tried just scheduling the free like this:

{
	("deallocating buffer " + buf.bufnum).postln;
	buf.free;
}.defer(buf.duration + 120);

(where 120 is an arbitrary fudge factor)

however, in addition to being extremely icky feeling from a concurrency perspective (my background is in distributed/asynchronous systems :sweat_smile:), it seems to cut off the samples abruptly sometimes – is that expected?

thanks, these methods definitely feel more familiar to me from an async programming perspective in other languages (being able to call a completion method, decoupled from a Done.freeSelf, in particular), I definitely support adding them!

however, it also seems like I can combine the approach of routing samplers into an effects bus with freeSelf + onFree, but I would not be able to position these sounds across the stereo field with Splay the way I’m doing now – each sample has a randomized spatial position and I don’t see a way to route that via a bus

doing things like buffer.free is a “control calculation” (client-side aka sclang)

Sort of… the server has audio rate and control rate calculations. Language functions are separate from that, so I’d use some term other than “control calculation.”

Buffers are tricky to understand because the functionality is split between server and language.

  • Server: Memory allocation, sound file read/write, wavetable filling etc. Buffers are addressed only by number in the server. The server does nothing to avoid buffer number conflicts.

  • Language: Allocating and releasing buffer numbers, tracking buffer state, sending messages to the server to manipulate buffers (the server can’t send those messages to itself).

To free a buffer and reclaim its number, you do have to go through the Buffer’s free method on the language side.
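In code, the division of labor looks like this (a sketch):

```
(
// language side: allocates a free buffer number, then sends /b_allocRead
// to the server, which does the actual memory allocation and file reading
b = Buffer.read(s, Platform.resourceDir ++ "/sounds/a11wlk01.wav", action: { |buf|
	("allocated buffer number" + buf.bufnum).postln;
});
)

// later: sends /b_free to the server and releases the number for reuse
b.free;
```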

(where 120 is an arbitrary fudge factor)

2 minutes is a pretty long fudge factor. Max and Pd use milliseconds for scheduling. We don’t.

however, in addition to being extremely icky feeling from a concurrency perspective

Certainly that’s not an ideal solution. SC does have an answer for that – the server can send arbitrary messages back to the language based on arbitrary triggers, and the language can receive them and take whatever action is necessary. That’s been demonstrated above.

it seems to cut off the samples abruptly sometimes – is that expected?

I would guess that a buffer number is being reused while one of these deferred ‘free’ functions is still scheduled to execute later. I think messaging is a better solution.

I would not be able to position these sounds across the stereo field with Splay the way I’m doing now – each sample has a randomized spatial position and I don’t see a way to route that via a bus

Can you explain more what you’re after? Certainly there is a way to do this, but I’m not clear how you’re using Splay or what the obstacle is.

hjh

I recommend using Pan2 instead of Splay. Splay is a convenience that spreads an array of channels across the stereo field, whereas Pan2 is a UGen itself and permits setting the position in the stereo field with a numeric parameter - thereby allowing better control. Regarding randomization, I see two options in your case:

  1. Randomly choose which one of two buses you are outputting to.
  2. Use a randomly set parameter to determine the pan position in your buffer playback.

In both cases the output must be to a stereo bus. You can allocate a stereo bus to use. You can use this bus as output for the buffer playback and also as input for the effects synth. I see no problem implementing this.
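A minimal sketch of option 2 (the FreeVerb2 choice and the bus wiring are illustrative assumptions, not the only way to do this):

```
(
var fxBus = Bus.audio(s, 2);

// effects synth at the tail of the default group, reading the shared stereo bus;
// it stays alive, so reverb tails survive when playback synths free themselves
{
	var sig = In.ar(fxBus, 2);
	FreeVerb2.ar(sig[0], sig[1], 0.3, 0.8)
}.play(addAction: \addToTail);

// one playback synth per sample, panned to a random fixed position
Buffer.read(s, Platform.resourceDir ++ "/sounds/a11wlk01.wav", action: { |buf|
	{
		var sig = PlayBuf.ar(1, buf, doneAction: Done.freeSelf);
		Out.ar(fxBus, Pan2.ar(sig, rrand(-1.0, 1.0)))
	}.play.onFree({ buf.free });
});
)
```

The playback synths go to the head of the default group (the default for .play), so they execute before the effects synth at the tail, which is required for the bus routing to work.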

Alternatively you can set the output bus of your playback at the beginning of the synth and notify the effects synth to use that bus. This is also feasible, but it seems to me more prone to errors and glitches. Using a stereo bus is preferable.

Iannis