NRT score & audio rendering issues

Hi, I am relatively new to SuperCollider & having some difficulty setting up non-realtime score & audio rendering.

My primary problem, I think, is in figuring out how to load my synthdef and audio buffer onto the NRT server. I have prepared an example (three .scd files and an example .wav to read into buffer) that presents my issue: http://petervanhaaften.net/files/nrt-example.zip

In ‘main.scd’, I give two examples: 1) working pattern playback with synthdef and buffer on the default server, 2) my first attempt to convert that to work in NRT.

In this example, I have a Pdef pattern (‘score.scd’) controlling a granular Synthdef (‘gransyn.scd’) which operates on a stored buffer (check ‘main.scd’). I know for sure I am incorrectly allocating my buffer and synthdef on the NRT server in my second example in ‘main.scd’, however I have tried a lot of iterations of this without success.

I have also experimented with the CTK library to resolve this, based on info found in old threads like “NRT mode and soundfile buffers”, but I could not get it working and have left those attempts out for conciseness. I am also open to a solution using CTK.

Might anyone be able to offer any pointers on this topic? Thank you in advance.

Hi, I have attempted to simplify my problem, so that it could be clearly shown in a single post.

Below is a working example with default server playback. A sample buffer is loaded, a granular synthdef is loaded, and a pattern performs the synthdef:

(
b = Buffer.read(s, Platform.resourceDir +/+ "sounds/a11wlk01-44_1.aiff");

SynthDef(\buf_grain_test, { |out, gate = 1, amp = 1, pan = 0, sndbuf, envbuf|
    var  env, freqdev;
    env = EnvGen.kr(
        Env([0, 1, 0], [1, 1], \sin, 1),
        gate,
        levelScale: amp,
        doneAction: Done.freeSelf);
    Out.ar(out,
        GrainBuf.ar(2, Impulse.kr(20), 0.1, sndbuf, LFNoise1.kr.range(0.5, 2),
            LFNoise2.kr(0.1).range(0, 1), 2, pan, envbuf) * env)
}).add;

~pattern = (
	Pdef(\scene0, Pdef(\part1,
	Ppar([
			Pmono(
				\buf_grain_test,
				\sndbuf, b,
				\envbuf, Pseq([-1, 0, -0.5, 0.5], 1),
				\dur, Pseq([2, 2, 2, 2], 1),
				\pan, Pseq([-1, 1, -1, 1], 1),
			),
	])
)).play;
);
)

Below is a non-working attempt to convert this to NRT. It throws the error “ERROR: Message ‘at’ not understood”. I am guessing this is because my synthdef and/or audio buffer is not correctly loaded on the NRT server.

(
var score, sndbuf, options;

SynthDef(\buf_grain_test, { |out, gate = 1, amp = 1, pan = 0, sndbuf, envbuf|
    var  env, freqdev;
    env = EnvGen.kr(
        Env([0, 1, 0], [1, 1], \sin, 1),
        gate,
        levelScale: amp,
        doneAction: Done.freeSelf);
    Out.ar(out,
        GrainBuf.ar(2, Impulse.kr(20), 0.1, sndbuf, LFNoise1.kr.range(0.5, 2),
            LFNoise2.kr(0.1).range(0, 1), 2, pan, envbuf) * env)
}).load(s);

~score = Score.new;

// create a Buffer object for adding to the Score
sndbuf = Buffer.new;

// for NRT rendering, the buffer messages must be added to the Score
~score.add([0, sndbuf.allocReadMsg(Platform.resourceDir +/+ "sounds/a11wlk01-44_1.aiff")]);

~pattern = Pdef(\scene0, Pdef(\part1,
	Ppar([
			Pmono(
				\buf_grain_test,
				\sndbuf, b,
				\envbuf, Pseq([-1, 0, -0.5, 0.5], 1),
				\dur, Pseq([2, 2, 2, 2], 1),
				\pan, Pseq([-1, 1, -1, 1], 1),
			),
	])
));

~pattern = ~pattern.asScore(60);

~score.add(~pattern);

// the ServerOptions for rendering the soundfile
options = ServerOptions.new.numOutputBusChannels_(2);

(
// Destination path and file name
~outFile = "~/test.wav";
// Render the score as wav file
~score.recordNRT(outputFilePath: ~outFile.asAbsolutePath, headerFormat: "WAV", options: options);
);
)

So, maybe someone has an idea where I’m making a mistake. I think it is something simple!
Thank you.

Hi,
You’re using the global variable b instead of the local variable sndbuf in Pmono.
And there’s a catch with the .add function: Score’s add appends a single [time, message] entry, so passing a whole Score object nests it rather than merging its events.

// ~score.add(~pattern);
~pattern.score.do{|e| ~score.add(e)}; // I found myself using it this way
~score.score.do{|e| e.postln}; // Posts each OSC-msg as a line. Use this to see the difference. 

Here’s the full code:

(
var sndbuf, options;

SynthDef(\buf_grain_test, { |out, gate = 1, amp = 1, pan = 0, sndbuf, envbuf|
    var  env, freqdev;
    env = EnvGen.kr(
        Env([0, 1, 0], [1, 1], \sin, 1),
        gate,
        levelScale: amp,
        doneAction: Done.freeSelf);
    Out.ar(out,
        GrainBuf.ar(2, Impulse.kr(20), 0.1, sndbuf, LFNoise1.kr.range(0.5, 2),
            LFNoise2.kr(0.1).range(0, 1), 2, pan, envbuf) * env)
}).load(s);

~score = Score.new;

// create a Buffer object for adding to the Score
sndbuf = Buffer.new;

// for NRT rendering, the buffer messages must be added to the Score
~score.add([0, sndbuf.allocReadMsg(Platform.resourceDir +/+ "sounds/a11wlk01-44_1.aiff")]);

~pattern = Pdef(\scene0, Pdef(\part1,
	Ppar([
			Pmono(
				\buf_grain_test,
				\sndbuf, sndbuf,
				\envbuf, Pseq([-1, 0, -0.5, 0.5], 1),
				\dur, Pseq([2, 2, 2, 2], 1),
				\pan, Pseq([-1, 1, -1, 1], 1),
			),
	])
));

~pattern = ~pattern.asScore(60);

// ~score.add(~pattern);
~pattern.score.do{|e| ~score.add(e)}; // I found myself using it this way

// the ServerOptions for rendering the soundfile
options = ServerOptions.new.numOutputBusChannels_(2);

(
// Destination path and file name
~outFile = "~/test.wav";
// Render the score as wav file
~score.recordNRT(outputFilePath: ~outFile.asAbsolutePath, headerFormat: "WAV", options: options);
);
)

Hi Jildert,
Thank you for your solution! I have tested and it is working.

However, my simplified explanation has slightly complicated things.
Do you by chance have a moment to look at the following, which is my attempt to resolve my original issue using my actual instruments and pattern? Again, I think it is something very simple inside the “main.scd” code (bottom of this post).

I have a score [“score.scd”]:

~pattern = Pdef(\scene0, Pdef(\part1,
	Ppar([
			Pmono(
				\gransyn,
				\soundBuf, ~sndbuf,
				\dur, Pseq([1, 5, 4, 5], 1),
				\attack, Pseq([0, 0, 0, 1], 1),
				\release, Pseq([0, 0, 0, 1], 1),
				\posLo, Pseq([0, 0, 0, 1], 1),
				\posHi, Pseq([1, 1, 1, 1], 1),
				\posRateE, Pseq([0, 0, 0, 0], 1),
				\posRateM, Pseq([1, 0.5, 1, 1], 1),
				\posRateMLag, Pseq([0, 0, 0, 0], 1),
				\posRateMCurve, Pseq([0, 0, 0, 0], 1),
			    \overlap, Pseq([1, 2, 1, 1], 1),
				\overlapLag, Pseq([0, 2, 1, 1], 1),
				\overlapCurve, Pseq([0, 0, 0, 0], 1),
				\trigRate, Pseq([0, 10, 2, 2], 1),
				\trigRateLag, Pseq([0, 0, 3, 3], 1),
				\trigRateCurve, Pseq([0, 0, 0, 0], 1),
				\rate, Pseq([1, 3, 1, 1], 1),
			    \rateLag, Pseq([0, 3, 2, 0], 1),
				\rateCurve, Pseq([0, 0, 0, 0], 1),
				\lpFreq, Pseq([20000, 20000, 20000, 20000], 1),
				\lpLag, Pseq([0, 0, 0, 0], 1),
				\lpCurve, Pseq([0, 0, 0, 0], 1),
				\hpFreq, Pseq([10, 10, 10, 10], 1),
				\hpLag, Pseq([0, 0, 0, 0], 1),
				\hpCurve, Pseq([0, 0, 0, 0], 1),
				\panMax, Pseq([0, 0.2, 0, 0], 1),
			    \amp, Pseq([0, 1, 0, 0], 1),
			    \ampLag, Pseq([3, 1, 1/2, 4], 1),
			    \ampCurve, Pseq([0, 0, 0, 0], 1),
			),
	])
));

That controls a synthdef [“gransyn.scd”]:

(
~m = 5;
~n = 2 * ~m;

SynthDef(\gransyn, { |out = 0, soundBuf, gate = 1, attack = 0.01, release = 0.5, posLo = 0.1, posHi = 0.9, posRateE = 0, posRateM = 1, posRateMLag = 0, posRateMCurve = 0, rate = 1, rateLag = 0, rateCurve = 0, panMax = 0, bpRQ = 0.1, bpRQLag = 0, bpRQCurve = 0, bpLo = 50, bpLoLag = 0, bpLoCurve = 0, bpHi = 5000, bpHiLag = 0, bpHiCurve = 0, amp = 1, ampLag = 0, ampCurve = 0, overlap = 2, overlapLag = 0, overlapCurve = 0, trigRate = 1, trigRateLag = 0, trigRateCurve = 0, interp = 2, posRate = 0, lpFreq = 20000, lpLag = 3, lpCurve = 0, hpFreq = 20, hpLag = 0, hpCurve = 0, rateRandomness = 0, rateRandomnessLag = 0, rateRandomnessCurve = 0, overlapRandomness = 0, overlapRandomnessLag = 0, overlapRandomnessCurve = 0, verbMix = 0, verbMixLag = 0, verbMixCurve = 0, verbRoom = 0, verbDamp = 0|

	var sig, sigL, sigR, sigOut, sigLimiter, sigCompressor, env, bpFreq, chan, dUgen, trig, trigs, bufDur, pos, lpfSig, rateNoiseSig, overlapNoiseSig, verbSig;

	//trigger for grains
	trig = Impulse.ar(trigRate);

	//randomness for rate
	rateNoiseSig = PinkNoise.kr(mul: VarLag.kr(rateRandomness, rateRandomnessLag, rateRandomnessCurve), add: 0);
	//rateNoiseSig = rateNoiseSig / 2
	//randomness for overlap
	overlapNoiseSig = PinkNoise.kr(mul: VarLag.kr(overlapRandomness, overlapRandomnessLag, overlapRandomnessCurve), add: 0);

	//define all of your VarLag controlled values here
	trigRate = Demand.ar(trig, 0, VarLag.ar(Demand.ar(trig, 0, trigRate, inf), trigRateLag, trigRateCurve), inf);
	lpFreq = VarLag.kr(lpFreq, lpLag, lpCurve);
	hpFreq = VarLag.kr(hpFreq, hpLag, hpCurve);
	rate = VarLag.ar(Demand.ar(trig, 0, rate + rateNoiseSig, inf), rateLag, rateCurve);
	overlap = VarLag.ar(Demand.ar(trig, 0, overlap + overlapNoiseSig, inf), overlapLag, overlapCurve);
	//trigRate = VarLag.kr(trigRate, trigRateLag, trigRateCurve);
	posRateM = VarLag.kr(posRateM, posRateMLag, posRateMCurve);
	amp = VarLag.kr(amp, ampLag, ampCurve);
	verbMix = VarLag.ar(Demand.ar(trig, 0, verbMix, inf), verbMixLag, verbMixCurve);


    // we need a multichannel trigger that steps through all consecutive channels
    trigs = { |i| PulseDivider.ar(trig, ~n, ~n-1-i) } ! ~n;

    chan = Demand.ar(trig, 0, Dseq((0..~n-1), inf));

	env = Linen.kr(gate, attack, 1, release, 2) * amp;

    posRate = 10 ** posRateE * posRateM;
    bufDur = BufDur.kr(soundBuf);
    pos = Phasor.ar(0, BufRateScale.kr(soundBuf) * posRate * SampleDur.ir / bufDur, posLo, posHi);


	sig = TGrains.ar(~n, trig, soundBuf, Demand.ar(trig, 0, rate, inf), pos * bufDur, Demand.ar(trig, 0, overlap, inf) / Demand.ar(trig, 0, trigRate, inf),
    // Panning convention is that from PanAz,
    // speakers should be from 0 to 2, but (orientation)
    // 1/n has to be subtracted for n speakers.
    // If this isn't done correctly grains are spread onto more than one channel
    // and per-grain application of fxs fails.
	chan.linlin(0, ~n-1, -1/~n, (2*~n - 3)/~n), 1, interp);

    dUgen = Dwhite(0.0, 1);

	sig = sig.collect { |ch, i|
		// this is the place to define fxs per channel/grain
		lpfSig = LPF.ar(in: ch, freq: lpFreq, mul: 1, add: 0);

		HPF.ar(in: lpfSig, freq: hpFreq, mul: 1, add: 0);
	};

    // routing to two channels ...
    sigL = Mix(((0..(~m-1)) * 2).collect(sig[_]));
    sigR = Mix(((0..(~m-1)) * 2 + 1).collect(sig[_]));

	//route stereo sig thru cheap verb
	verbSig = FreeVerb2.ar(sigL, sigR, mix: Demand.ar(trig, 0, verbMix, inf), room: verbRoom, damp: verbDamp);

	//output
	Out.ar(out, Pan2.ar(verbSig[0], panMax.neg) + Pan2.ar(verbSig[1], panMax) * env);

}).store;
)

And this code [“main.scd”] strings them together to render in NRT, based on the working example you provided. However, this version is not working: it renders an empty file:

(
var options;
// create new score
~score = Score.new;

// call external score
Require("score.scd");

// create a Buffer object for adding to the Score
~sndbuf = Buffer.new;

//call synthdef
Require("gransyn.scd");

//~pattern.play;
// for NRT rendering, the buffer messages must be added to the Score
~score.add([0, ~sndbuf.allocReadMsg(Platform.resourceDir +/+ "sounds/a11wlk01-44_1.aiff")]);

~pattern = ~pattern.asScore(100);

// ~score.add(~pattern);
~pattern.score.do{|e| ~score.add(e)}; // I found myself using it this way

// the ServerOptions for rendering the soundfile
options = ServerOptions.new.numOutputBusChannels_(2);

(
// Destination path and file name
~outFile = "~/test.wav";
// Render the score as wav file
~score.recordNRT(outputFilePath: ~outFile.asAbsolutePath, headerFormat: "WAV", options: options);
);
)

Otherwise… I will keep hacking away at this issue. Thank you again for your help.

Here’s a simpler version of the problem:

SynthDef(\bufGrainPan, { |start, time, bufnum, pan, rate = 1, amp = 1,
		attack = 0.001, decay = 0.02, outbus|
	var sig;
	sig = PlayBuf.ar(1, bufnum, rate * BufRateScale.kr(bufnum), 1, start, 0)
		* EnvGen.kr(Env.linen(attack, time, decay), doneAction:2);
	OffsetOut.ar(outbus, Pan2.ar(sig, pan, amp));
}).add;

p = Pbind(
	\instrument, \bufGrainPan,
	\bufnum, b,
	\start, 15000,
	\time, 0.1,
	\dur, 0.2
);

b = Buffer.read(s, Platform.resourceDir +/+ "sounds/a11wlk01.wav");

q = p.trace.play;  // nothing

Many many many users get confused about variables in patterns.

Variables are resolved to their values immediately, but in many pattern cases, users seem to expect that the variable will be resolved only at the moment of getting a value or event from the stream.

In the above, at the moment of creating the Pbind, b is nil.

So the Pbind really means:

p = Pbind(
	\instrument, \bufGrainPan,
	\bufnum, nil,
	\start, 15000,
	\time, 0.1,
	\dur, 0.2
);

nil in Pbind means to stop the stream – so this pattern, by definition, will produce zero events.

It seems that users often expect \bufnum, b to mean “create a reference to either a current or future value of b” – in reality, when you use a variable name, it behaves right now as the variable’s current value, only. (If you want the “reference to future,” you have to write something explicit for it, e.g., \bufnum, Pfunc { b }.)
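To illustrate the difference (a minimal sketch; ~buf is a placeholder name, not from your code):

```supercollider
~buf = nil;

// captures the *current* value of ~buf (nil) -- this stream yields zero events
p = Pbind(\bufnum, ~buf, \dur, 0.2);

// defers the lookup until each event is requested, so it sees later assignments
q = Pbind(\bufnum, Pfunc { ~buf }, \dur, 0.2);

~buf = 0;  // q's events will now get bufnum 0; p still produces nothing
```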

In your example, you’re Require-ing the pattern first, and then populating ~sndbuf after that. You should assign ~sndbuf first, and then Require.
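In main.scd terms, a sketch of the corrected ordering (assuming score.scd reads ~sndbuf at the moment it is Require-d, and reusing the sample path from the earlier example):

```supercollider
~score = Score.new;

// populate ~sndbuf BEFORE Require-ing the pattern,
// so score.scd captures a real Buffer instead of nil
~sndbuf = Buffer.new;
~score.add([0, ~sndbuf.allocReadMsg(Platform.resourceDir +/+ "sounds/a11wlk01-44_1.aiff")]);

Require("gransyn.scd");
Require("score.scd");  // ~pattern can now embed ~sndbuf's value
```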

hjh


Hi jamshark70, thank you very much for your super detailed explanation! With the benefit of your example, I now have this partially working.

I am still having a small issue that is likely due to syntax, or my method of calling ~pattern.asScore. I am using Ppar() to run two Pmono instances in parallel, like this:

(
Pdef(\scene0, Pdef(\part1,
	Ppar([
			Pmono(
				\gransyn,
			    \soundBuf, ~sndbuf,
				\dur, Pseq([1, 5, 4, 5], 1),
				\attack, Pseq([0, 0, 0, 1], 1),
				\release, Pseq([0, 0, 0, 1], 1),
				\posLo, Pseq([0, 0, 0, 1], 1),
				\posHi, Pseq([1, 1, 1, 1], 1),
				\posRateE, Pseq([0, 0, 0, 0], 1),
				\posRateM, Pseq([1, 0.5, 1, 1], 1),
				\posRateMLag, Pseq([0, 0, 0, 0], 1),
				\posRateMCurve, Pseq([0, 0, 0, 0], 1),
			    \overlap, Pseq([1, 2, 1, 1], 1),
				\overlapLag, Pseq([0, 2, 1, 1], 1),
				\overlapCurve, Pseq([0, 0, 0, 0], 1),
				\trigRate, Pseq([0, 10, 2, 2], 1),
				\trigRateLag, Pseq([0, 0, 3, 3], 1),
				\trigRateCurve, Pseq([0, 0, 0, 0], 1),
				\rate, Pseq([1, 3, 1, 1], 1),
			    \rateLag, Pseq([0, 3, 2, 0], 1),
				\rateCurve, Pseq([0, 0, 0, 0], 1),
				\lpFreq, Pseq([20000, 20000, 20000, 20000], 1),
				\lpLag, Pseq([0, 0, 0, 0], 1),
				\lpCurve, Pseq([0, 0, 0, 0], 1),
				\hpFreq, Pseq([10, 10, 10, 10], 1),
				\hpLag, Pseq([0, 0, 0, 0], 1),
				\hpCurve, Pseq([0, 0, 0, 0], 1),
			    \amp, Pseq([0, 1, 0, 0], 1),
			    \ampLag, Pseq([3, 1, 1/2, 4], 1),
			    \ampCurve, Pseq([0, 0, 0, 0], 1),
			),
			Pmono(
				\gransyn,
			    \soundBuf, ~sndbuf,
				\dur, Pseq([1, 5, 4, 7], 1),
				\attack, Pseq([0, 0, 0, 0], 1),
				\release, Pseq([0, 0, 0, 0], 1),
				\posLo, Pseq([0, 0, 0.4, 0.4], 1),
				\posHi, Pseq([1, 1, 0.5, 0.5], 1),
				\posRateE, Pseq([0, 0, 0, 0], 1),
				\posRateM, Pseq([1, 0.5, -1, 1], 1),
				\posRateMLag, Pseq([0, 0, 0, 0], 1),
				\posRateMCurve, Pseq([0, 0, 0, 0], 1),
			    \overlap, Pseq([1, 2, 2, 3], 1),
				\overlapLag, Pseq([0, 2, 0, 0], 1),
				\overlapCurve, Pseq([0, 0, 0, 0], 1),
				\trigRate, Pseq([0, 10, 10, 20], 1),
				\trigRateLag, Pseq([0, 0, 0, 2], 1),
				\trigRateCurve, Pseq([0, 0, 0, 0], 1),
				\rate, Pseq([1, 1, 0.75, 0.6], 1),
			    \rateLag, Pseq([0, 3, 0, 0], 1),
				\rateCurve, Pseq([0, 0, 0, 0], 1),
				\lpFreq, Pseq([20000, 20000, 2000, 1000], 1),
				\lpLag, Pseq([0, 0, 3, 3], 1),
				\lpCurve, Pseq([0, 0, 0, 0], 1),
				\hpFreq, Pseq([10, 10, 10, 10], 1),
				\hpLag, Pseq([0, 0, 0, 0], 1),
				\hpCurve, Pseq([0, 0, 0, 0], 1),
			    \amp, Pseq([0, 1, 1, 0], 1),
				\ampLag, Pseq([0, 3, 2, 3], 1),
			    \ampCurve, Pseq([0, 0, 0, 0.5], 1),
			),
	])
));
);

In this example, only the second Pmono() stream is included in my NRT-rendered audio file. For reference, my current code for NRT recording is as follows. I use Pdef(), Pseq(), and Pfindur() to run through the above score (which is actually in three parts):

(
var options;
// create new score


~score = Score.new;

// create a Buffer object for adding to the Score
~sndbuf = Buffer.new;

// for NRT rendering, the buffer messages must be added to the Score
~score.add([0, ~sndbuf.allocReadMsg(thisProcess.nowExecutingPath.dirname +/+ "voice.wav")]);

//call synthdef
Require("gransyn.scd");

// call external score
Require("score.scd");

~pattern = Pdef(\scene0, Pdef(\main,
	Pseq([
		//start score
		Pfindur(17, Pdef(\part1)),
		Pfindur(20, Pdef(\part2)),
		Pfindur(22, Pdef(\part3)),
	],1),
));

~pattern = ~pattern.asScore(59);

// ~score.add(~pattern);
~pattern.score.do{|e| ~score.add(e)};

// the ServerOptions for rendering the soundfile
options = ServerOptions.new.numOutputBusChannels_(2);

~score.recordNRT(
    outputFilePath: "~/test.wav".standardizePath,
    headerFormat: "wav",
    sampleFormat: "int24",
    options: options,
    duration: 59,
    action: { "done".postln }
);

)

Is there anything obvious I am doing here which would cause .recordNRT to record only one part of two parallel processes in my score? Thank you sincerely for your assistance!