GrainUtils - sub-sample accurate EventScheduler and dynamic VoiceAllocator

Created helpfiles for RampAccumulator, RampIntegrator and HanningWindow, GaussianWindow, TukeyWindow, TrapezoidalWindow, PlanckWindow and ExponentialWindow


With the GrainDelay it's worth playing around with a longer delay time and freezing the captured audio (here with a little pulsar test sequence):

or with really short delay times, higher overlap and feedback, for echo-to-resonator vibes (with the same pulsar test sequence):


Really great stuff, thanks a lot!
One question @dietcv - is there a reason why you always reset your git repo? (There is only one commit: Commits · dietcv/GrainUtils · GitHub)
It would be easier to follow along for others and just recompile as needed if you just commit changes on top.


hey, thanks :slight_smile:
I completely refactored everything behind the scenes, figured out some bugs in several night shifts, and have now nearly completed all the helpfiles and the guides. All the core UGens have helpfiles now, only 2-3 are still missing for the unit shapers.
Once I have added the helpfiles, every new release will get a new tag. With all the different changes I was making I didn't want to keep creating new tags. This started as work in progress and is now 98% done.


ok yeah this is great – had some fun with spectral granulation combining your example with the BufFFT library, lots to explore.

(
~velvet = {|t, density, bias|
    var n = Dseries(0, 0, inf);
    var high = 1 - density;
    var low = density * 2;
    var nx1 = Diwhite(0, 1, inf);
    //bias
    var nx2 = (Dwhite(0, 1, inf) < (0.5 + (bias * 0.5))).if(1, -1);
    var out = Dswitch1([0, nx2], nx1 >= high);
    Demand.ar(t, 0, out);
};
)

(
~maxGrains = 50;
~fftSize = 4096*8;
d = Buffer.read(s, Platform.resourceDir +/+ "sounds/a11wlk01.wav");
e = Array.fill(~maxGrains, {Buffer.alloc(s, ~fftSize)});
)

(
e.do{|item| item.zero};

{
    var numChannels = 50;
    var reset, events, voices, grainWindows, overlap, overlapMod, tFreq, tFreqMod, posRate, posRateMod;
    var pitchRatio, pitchMod;
    var trig, pos, chain, accumChain;
    var polarity, sig;
    
    reset = Trig1.ar(\reset.tr(0), SampleDur.ir);

    tFreqMod = LFDNoise3.ar(\tFreqMF.kr(1));
    tFreq = \tFreq.kr(100) * (2 ** (tFreqMod * \tFreqMD.kr(0)));

    events = SchedulerCycle.ar(tFreq, reset);

    overlapMod = LFDNoise3.ar(\overlapMF.kr(1));
    overlap = \overlap.kr(4) * (2 ** (overlapMod * \overlapMD.kr(0)));

    voices = VoiceAllocator.ar(
        numChannels: numChannels,
        trig: events[\trigger],
        rate: events[\rate] / overlap,
        subSampleOffset: events[\subSampleOffset],
    );

    grainWindows = HanningWindow.ar(
        phase: voices[\phases],
        skew: \windowSkew.kr(0.5),
    );

    posRateMod = LFDNoise3.ar(\posRateMF.kr(0.3));
    posRate = \posRate.kr(0.25) * (1 + (posRateMod * \posRateMD.kr(0)));

    pos = Phasor.ar(
        trig: DC.ar(0),
        rate: posRate * BufRateScale.kr(d) * SampleDur.ir / BufDur.kr(d),
        start: \posLo.kr(0),
        end: \posHi.kr(1)
    );

    pos = Latch.ar(pos, voices[\triggers]) * BufFrames.kr(d);

    //BufFFTTrigger2 receives the triggers, one for each chain buffer, and outputs the correct information for the FFT Chain
    chain = BufFFTTrigger2(e, voices[\triggers]);
    
    pitchMod = LFDNoise3.ar(\pitchMod.kr(1)) * \pitchModDepth.kr(0);
    pitchRatio = (\midipitch.kr(-12) + pitchMod).midiratio;

    chain = BufFFT_BufCopy(chain, d, pos, BufRateScale.kr(d) * pitchRatio);

    //set to rect window because we are using grainWindows later
    chain = BufFFT(chain, wintype: -1);
    
    //whatever
    chain = PV_MagGate(chain, MouseY.kr(0, 1), MouseX.kr(0, 50));
    // chain = PV_BinScramble(chain, MouseX.kr, 0.1, MouseY.kr > 0.5);
    chain = PV_MagSmear(chain, MouseX.kr(0, 100));
    // chain = PV_RectComb(chain, MouseX.kr(0, 32), MouseY.kr, 0.2);
    chain = PV_Compander(chain, 10, 0.1, 1.0);
    
    //turn on or off
    // accumChain = LocalBuf(~fftSize);
    // accumChain = PV_AccumPhase(accumChain, chain);
    // chain = PV_CopyPhase(chain, accumChain);

    sig = BufIFFT(chain, 0);
    //set polarity w polarityMod 0-1
    polarity = ~velvet.(voices[\triggers], \density.kr(1), 1 - \polarityMod.kr(1));
    sig = sig * grainWindows * polarity;
    Mix(Pan2.ar(sig, TRand.kr(-1,1,chain)));
}.play
)

That's awesome, thanks for sharing :slight_smile:

Based on your example in some thread, I have also used the Flucoma toolkit for slicing, analysis and sorting:

(
ProtoDef(\slice, {

	~init = { |self|

		self.data = IdentityDictionary.new();
		self.analysis = FluidDataSet(s);

	};

	~cleanUp = { |self|

		self.data.keysValuesDo{ |key, value|
			if(value.isArray) {
				value.do{ |val| val.clear };
			};
			if(value.isKindOf(Array)) {
				value.clear;
			};
		};

		if(self.soundfile.notNil) {
			self.soundfile.free;
		};

		if(self.analysis.notNil) {
			self.analysis.clear;
		};

		if(self.data.notNil) {
			self.data.clear;
		};

	};

	~loadBuffer = { |self, path, callback|

		// Check if path is a folder or single file
		if(PathName(path).isFolder) {

			// Load folder
			var loader = FluidLoadFolder(path);

			loader.play(s, {
				"folder loaded".postln;
				self.soundfile = loader.buffer;

				// Convert to mono if needed
				if(self.soundfile.numChannels > 1) {

					var monoBuffer = Buffer(s);

					self.soundfile.numChannels.do{ |index|

						FluidBufCompose.processBlocking(
							server: s,
							source: self.soundfile,
							startChan: index,
							numChans: 1,
							gain: -6.dbamp,
							destination: monoBuffer,
							destGain: 1
						);

					};

					"converted to mono".postln;

					self.soundfile = monoBuffer;
				};

				if(callback.notNil) { callback.value };
			});

		} {
			// Single file
			self.soundfile = Buffer.readChannel(s, path, channels: [0]);
			if(callback.notNil) { callback.value };
		};

	};

	~sliceAndAnalyze = { |self, threshold, metric, callback|

		// Get and store analysis buffers
		var indicesBuffer = Buffer(s);
		var specsBuffer = Buffer(s);
		var statsBuffer = Buffer(s);
		var meansBuffer =  Buffer(s);

		// Slicing
		FluidBufOnsetSlice.processBlocking(
			server: s,
			source: self.soundfile,
			metric: 9,
			threshold: threshold,
			indices: indicesBuffer,
			action: { "slices found".postln }
		);

		// Analysis
		indicesBuffer.loadToFloatArray(action: { |indices|

			// Store the indices for later use
			self.data.put(\indices, indices);

			// Iterate through adjacent pairs of indices
			indices.doAdjacentPairs{ |startFrame, endFrame, i|
				var numFrames = endFrame - startFrame;

				// Compute spectral features per fft frame
				FluidBufSpectralShape.processBlocking(
					server: s,
					source: self.soundfile,
					startFrame: startFrame,
					numFrames: numFrames,
					features: specsBuffer,
					select: [metric]
				);

				// Get mean statistics
				FluidBufStats.processBlocking(
					server: s,
					source: specsBuffer,
					stats: statsBuffer,
					select: [\mean]
				);

				// Compose into mean features buffer
				FluidBufCompose.processBlocking(
					server: s,
					source: statsBuffer,
					destination: meansBuffer
				);

				self.analysis.addPoint(i, meansBuffer);
			};

			"analysis complete".postln;

			// Free analysis buffers after use
			indicesBuffer.free;
			specsBuffer.free;
			statsBuffer.free;
			meansBuffer.free;

			// Call the callback when analysis is complete
			if(callback.notNil) { callback.value };
		});
	};

	~sortSlices = { |self, callback|

		self.analysis.dump({ |dict|

			// Get the feature values
			var pointData = dict["data"];

			// Sort by feature value, get the indices
			var sortedIndices = pointData.keys.asArray.sort({ |a, b|
				pointData[a][0] < pointData[b][0]  // Sort keys by their values
			}).collect(_.asInteger);

			// Create serialized array of sorted slice points
			var slices = sortedIndices.collect{ |index|
				var startOnset = self.data[\indices][index];
				var endOnset = self.data[\indices][index + 1];
				[startOnset, endOnset];
			}.flatten;

			self.data.put(\sortedSlices, slices / self.soundfile.numFrames);

			"slices sorted".postln;

			// Call the callback when sorting is complete
			if(callback.notNil) { callback.value };
		});
	};

	~getSortedSlices = { |self, path, threshold = 0.1, metric = \centroid|

		var condition = CondVar.new;
		var done = false;

		Routine({

			self.cleanUp;

			// Step 1: Load buffer
			self.loadBuffer(path, {
				done = true;
				condition.signalAll;
			});
			condition.wait { done };
			done = false;

			// Step 2: Slice and Analyze slices
			self.sliceAndAnalyze(threshold, metric, {
				done = true;
				condition.signalAll;
			});
			condition.wait { done };
			done = false;

			// Step 3: Sort slices
			self.sortSlices({
				done = true;
				condition.signalAll;
			});
			condition.wait { done };

			"done".postln;

		}).play(AppClock);
	};

});
)
x = Prototype(\slice);

(
x.getSortedSlices(
	"C:/Users/.../Folder/",
	0.1,
	\centroid // eg centroid, spread, skewness, kurtosis, rolloff, flatness, crest
);
)

~slices = x.data[\sortedSlices];
~slicesBuf = Buffer.loadCollection(s, x.data[\sortedSlices]);
~sndBuf = x.soundfile;
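The getSortedSlices method above chains three asynchronous steps (load, slice/analyze, sort) by firing each step with a callback and then blocking on a CondVar until the callback signals. The same wait-for-callback pattern can be sketched in Python with threading.Condition; the `run_steps` helper and the step names here are hypothetical, purely to illustrate the flow:

```python
import threading

def run_steps(steps):
    """Run asynchronous steps strictly one after another:
    fire the step, then block until its callback signals completion."""
    cond = threading.Condition()
    state = {"done": False}

    def callback():
        with cond:
            state["done"] = True
            cond.notify_all()

    order = []
    for step in steps:
        state["done"] = False
        step(callback)                      # step calls `callback` when finished
        with cond:
            cond.wait_for(lambda: state["done"])
        order.append(step.__name__)
    return order

# Hypothetical async steps, each finishing on a worker thread
def load(cb):        threading.Timer(0.01, cb).start()
def analyze(cb):     threading.Timer(0.01, cb).start()
def sort_slices(cb): threading.Timer(0.01, cb).start()

order = run_steps([load, analyze, sort_slices])
```

This mirrors the `done = false; step(... { done = true; condition.signalAll }); condition.wait { done }` dance inside the Routine.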

Then you could load a serialised array into ~slicesBuf, pass it to the Synth, and use something like this in the SynthDef:

posRateMod = LFDNoise3.ar(\posRateMF.kr(0.3));
posRate = \posRate.kr(1) * (1 + (posRateMod * \posRateMD.kr(0)));

offset = Ddup(2, Diwhite(0, BufFrames.kr(sndBuf))) * \offsetMD.kr(0);
index = Ddup(2, Dseries(0, 1)) + offset;
startPos = Demand.ar(events[\trigger], DC.ar(0), Dbufrd(slicesBuf, index * 2));
endPos = Demand.ar(events[\trigger], DC.ar(0), Dbufrd(slicesBuf, index * 2 + 1));

//[startPos, endPos].poll(events[\trigger]);

pos = Phasor.ar(
	trig: DC.ar(0),
	rate: posRate * BufRateScale.kr(sndBuf) * SampleDur.ir / BufDur.kr(sndBuf),
	start: startPos,
	end: endPos,
);
pos = Latch.ar(pos, voices[\triggers]) * BufFrames.kr(sndBuf);
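The indexing in the Dbufrd lines relies on the slice buffer being a flat array of interleaved [start, end] pairs, so slice i lives at positions i*2 and i*2+1. A quick Python sketch of that layout (the slice values here are made up):

```python
# Flattened, normalized slice buffer: [start0, end0, start1, end1, ...]
slices = [0.00, 0.10, 0.10, 0.35, 0.35, 1.00]

def slice_bounds(index, buf):
    """Mirror Dbufrd(slicesBuf, index * 2) / Dbufrd(slicesBuf, index * 2 + 1)."""
    return buf[index * 2], buf[index * 2 + 1]

start, end = slice_bounds(1, slices)   # bounds of the second slice
```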

That velvet function is cool :slight_smile: If you are passing a multichannel trigger, you want to make sure your demand sequence is multichannel-expanding correctly, by collecting the triggers and polling from the sequence for every trigger. If you then overlap your grains, you still get a different value per channel. Could .if(1, -1) be replaced by * 2 - 1?

(
~velvet = { |triggers, density = 0.05, bias = 0|
    var nx2 = (Dwhite(0, 1, inf) < (0.5 + (bias * 0.5))) * 2 - 1;
    var out = Dswitch1([0, nx2], Dwhite(0, 1, inf) > (1 - density));
    triggers.collect{ |localTrig|
		Demand.ar(localTrig, DC.ar(0), out);
    };
};
)
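The per-sample logic of that demand chain can be written out in plain Python to see what the density and bias controls do (the function name and seeding are my own, for illustration):

```python
import random

def velvet(n, density=0.05, bias=0.0, seed=1):
    """Velvet-style noise: mostly zeros with sparse +/-1 impulses.
    An impulse fires with probability `density`; `bias` in [-1, 1]
    skews the polarity toward +1 (bias > 0) or -1 (bias < 0)."""
    rng = random.Random(seed)
    out = []
    for _ in range(n):
        polarity = 1 if rng.random() < (0.5 + bias * 0.5) else -1
        fire = rng.random() > (1 - density)
        out.append(polarity if fire else 0)
    return out

sig_pos = velvet(1000, density=0.1, bias=1.0)   # all impulses positive
sig_neg = velvet(1000, density=0.1, bias=-1.0)  # all impulses negative
```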

that seems way better, it's something I ported from gen~ a while ago and it became spaghetti code. I think you actually fixed a bug in the function! The density wasn't working properly before.

slice prototype also looks nice – I will take a look :slight_smile:

Oh also, there is a REALLY good window function hidden in the gen~ book library (in a file called ‘go.lib.genexpr’; I don’t think it’s used in any of the examples) called unitGaussGeneralized. With a triangle input it allows for 4 params that interpolate between most of the shapes used in granular synthesis. Might be worth adding to the windows…

// raise gaussian to power >= 1
// approaches square as power -> inf
// if u < 0.5 and power == 1, the function might not reach zero
unitGaussGeneralized(p, u, power) { 
	u = 1/(1-clip(u, 0, 0.999999));  // 1 to inf
	power = floor(power);
	return exp(-0.5*pow(2*u*(p-1),2*power)); 
}
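Translated to Python for quick experimentation (my transcription of the genexpr above; `p` is the unit-triangle phase, with the window's peak at p == 1):

```python
import math

def unit_gauss_generalized(p, u, power):
    """Generalized Gaussian: `u` controls the width, `power` >= 1 flattens
    the top, approaching a square window as power -> inf."""
    u = 1.0 / (1.0 - min(max(u, 0.0), 0.999999))   # map [0, 1) -> [1, inf)
    power = math.floor(power)
    return math.exp(-0.5 * (2.0 * u * (p - 1.0)) ** (2 * power))

peak = unit_gauss_generalized(1.0, 0.5, 1)   # center of the window
edge = unit_gauss_generalized(0.0, 0.5, 1)   # edge: small but nonzero
flat = unit_gauss_generalized(0.9, 0.5, 8)   # higher power flattens the top
soft = unit_gauss_generalized(0.9, 0.5, 1)
```

The small-but-nonzero edge value is the tapering issue discussed in the next reply.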

cool, the approach however has a problem: it does not taper off to 0 at the sides, because of the exp function. For a clean Gaussian you can multiply it with a Hanning window (that's what I'm doing in the C++ library), but not with this approach when you also want to control the width.

What you can do instead is to combine the unitshapers like this:
First you get a trapezoid via unitTrapezoid. From that trapezoid you can create a Tukey window (which is just a trapezoid shaped via unitHanning) and a Gaussian window (using unitGauss driven by the trapezoid), and then multiply both of them. I could add that to the window functions if you like.

EDIT: In my opinion duty and index are more or less doing the same thing, so we wouldn't need the duty param at all.

(
var unitHanning = { |phase|
	1 - cos(phase * pi) * 0.5;
};

var unitGauss = { |phase, index|
	var cosine = cos(phase * 0.5pi) * index;
	exp(cosine * cosine.neg);
};

var unitTrapezoid = { |phase, width, duty = 1|
	var sustain = 1 - width;
	var offset = phase - (1 - duty);
	var trapezoid = (offset / sustain + (1 - duty)).clip(0, 1);
	var pulse = offset > 0;
	Select.ar(BinaryOpUGen('==', width, 1), [trapezoid, pulse]);
};

var unitUniversal = { |phase, width, index, duty|
	var trapezoid = unitTrapezoid.(phase, width, duty);
	var gaussian = unitGauss.(trapezoid, index);
	var tukey = unitHanning.(trapezoid);
	gaussian * tukey;
};

{
	var phase = Phasor.ar(DC.ar(0), 50 * SampleDur.ir);
	var warpedPhase = UnitTriangle.ar(phase, \skew.kr(0.5));
	unitUniversal.(warpedPhase, \width.kr(0), \index.kr(1), \duty.kr(1));
}.plot(0.02);
)
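The same combination can be checked numerically. Here is a scalar Python transcription of those unit shapers (with duty fixed at 1, and a simple two-segment triangle standing in for UnitTriangle; not the library's actual implementation):

```python
import math

def unit_triangle(phase, skew=0.5):
    """Unit triangle: 0 -> 1 -> 0 over phase in [0, 1], peak at `skew`."""
    if phase < skew:
        return phase / skew
    return (1.0 - phase) / (1.0 - skew)

def unit_trapezoid(tri, width):
    """Scale a unit triangle so it clips at 1: larger `width` = longer flat top."""
    if width >= 1:
        return float(tri > 0)              # degenerate case: rectangular pulse
    return min(max(tri / (1.0 - width), 0.0), 1.0)

def unit_hanning(x):
    return (1.0 - math.cos(x * math.pi)) * 0.5

def unit_gauss(x, index):
    c = math.cos(x * 0.5 * math.pi) * index
    return math.exp(-c * c)

def unit_universal(phase, width, index, skew=0.5):
    trap = unit_trapezoid(unit_triangle(phase, skew), width)
    return unit_gauss(trap, index) * unit_hanning(trap)  # Gaussian * Tukey

center = unit_universal(0.5, 0.25, 1.0)
edges = (unit_universal(0.0, 0.25, 1.0), unit_universal(1.0, 0.25, 1.0))
```

Unlike the raw generalized Gaussian, the product tapers to exactly 0 at both edges, because the Tukey (Hanning-shaped trapezoid) factor is 0 there.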

cute little test sequence using the sorted slices :slight_smile:

The Phasor for the position can be a bit off for a tight loop between startPos and endPos, I guess because it's not in sync with the accumulator, which is necessary for the grain rate param. But we cannot reset the pos phasor with the derived trigger, because it has to be slower than the grain phase (for the worried reader, this has nothing to do with the library).
But I'm sure there could be a different solution; do you know one from other buffer granulation attempts? You can test what I mean by just passing one slice (startPos and endPos), then it's going out of sync over time.


The end is near! I have updated the library with helpfiles for UnitKink, UnitTriangle, UnitCubic, JCurve and SCurve. The only helpfile missing now is the one for the ShiftRegister.

I'm sorry if you fell in love with the PlanckWindow, but it didn't make it into the current release.
I will potentially put out some more unit shapers, including the UnitPlanck, but as a window for granulation it's not needed.

I have additionally prototyped a warped FilterBank based on virtual analog linear trapezoidal SVF bandpass filters by Andrew Simper; it will probably make it into the next release.


I have added a helpfile for the ShiftRegister, now all the helpfiles and the additional guides are available.


Hello @dietcv,

Thank you very much for these wonderful tools.
I’d like to ask you some questions:
In the guide Voice Allocation
2.2) Pulsar Synthesis with Phase shaping for frequency trajectory per grain
Can you explain why the overlap NamedControl has to be audio rate?
I can hear the difference if I change it to kr, but I don't understand why.

(
var lfo = {

    var measurePhase = Phasor.ar(DC.ar(0), \rate.kr(0.5) * SampleDur.ir);
    var stepPhase = (measurePhase * \stepsPerMeasure.kr(2)).wrap(0, 1);

    var measureLFO = HanningWindow.ar(measurePhase, \skewA.kr(0.75));
    var stepLFO = GaussianWindow.ar(stepPhase, \skewB.kr(0.5), \index.kr(1));

    stepLFO * measureLFO;
};

{
    var numChannels = 8;

    var reset, flux, tFreqMod, tFreq, overlap;
    var events, voices, windowPhases, triggers;
    var grainFreq, grainPhases, grainWindows;
    var grainOscs, grains, sig;

    reset = Trig1.ar(\reset.tr(0), SampleDur.ir);

    flux = LFDNoise3.ar(\fluxMF.kr(1));
    flux = 2 ** (flux * \fluxMD.kr(0.5));

    tFreqMod = lfo.().linlin(0, 1, 1, 50);
    tFreq = \tFreq.kr(20) * flux * tFreqMod;

    grainFreq = \freq.kr(1200) * flux;
    overlap = \overlap.ar(5); // has to be audio rate, we will latch that later!

    events = SchedulerCycle.ar(tFreq, reset);

    voices = VoiceAllocator.ar(
        numChannels: numChannels,
        trig: events[\trigger],
        rate: grainFreq / overlap, // grain duration depending on grainFreq scaled by overlap
        subSampleOffset: events[\subSampleOffset],
    );

    grainWindows = HanningWindow.ar(
        phase: voices[\phases],
        skew: \skew.kr(0.01)
    );

    // phase shaping for a frequency trajectory per grain:
    // 1.) using normalized phases into JCurve
    // 2.) scaling to number of cycles by latched overlap before wrapping between 0 and 1
    grainPhases = JCurve.ar(voices[\phases], \shape.kr(0));
    grainPhases = (grainPhases * Latch.ar(overlap, voices[\triggers])).wrap(0, 1);

    grainOscs = SinOsc.ar(DC.ar(0), grainPhases * 2pi);

    grains = grainOscs * grainWindows;

    grains = PanAz.ar(2, grains, \pan.kr(0));
    sig = grains.sum;

    sig = LeakDC.ar(sig);
    sig * 0.1;
}.play;
)

And I don’t understand why you have to multiply your grainPhases with overlap here.

hey, in this example we are using the grain frequency scaled by overlap to set the duration of our multichannel events, so the grain duration is bound to the grain frequency instead of the trigger frequency, which means higher grain frequencies lead to shorter grain durations and vice versa.

We then take these phases between 0 and 1 and shape them with JCurve for a frequency trajectory per grain. To get the desired number of cycles per grain, we then have to multiply the shaped phases between 0 and 1 by the overlap value. But the value we use to multiply our grain phases with has to be latched per multichannel trigger (it should not change in the middle of a grain), and to latch that value with an audio rate trigger, overlap has to be audio rate itself.

Sorry for being a noob on this domain,
I don't understand why SchedulerCycle returns a phase? You never use this phase in your examples, you always use the phases from VoiceAllocator.
I don't understand the rate output either: why do we have to calculate a value that we already know (since we pass it to the first arg of SchedulerCycle)?

here is a simplified example of what we are doing here:

(
{
	var tFreq = 100;
	var trig = Impulse.ar(tFreq);
	var grainFreq = 400;
	var overlap = \overlap.kr(4);

	var windowFreq = grainFreq / overlap;
	var windowPhase = Sweep.ar(trig, windowFreq);
	var rectWindow = windowPhase < 1;

	var phaseShaped = JCurve.ar(windowPhase, \shape.kr(0));
	var phaseShapedScaled = (phaseShaped * overlap).wrap(0, 1);
	var sig = sin(phaseShapedScaled * 2pi);

	[
		windowPhase,
		phaseShaped,
		phaseShapedScaled,
		sig * rectWindow
	];
	
}.plot(0.02);
)

What you want to do here is to change the distribution of all the carrier cycles per grain, not only the shape of one cycle. If you leave out the multiplication with overlap before using that phase for your carrier signal you only get one cycle.
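The effect of that multiplication can be verified numerically: wrapping shapedPhase * overlap produces exactly `overlap` carrier cycles per grain, no matter how the phase is shaped. A small Python sketch (my `j_curve` is a simple exponential warp standing in for JCurve, not the library's actual curve):

```python
import math

def j_curve(phase, shape=0.0):
    """Monotonic phase warp from 0 to 1 (illustrative stand-in for JCurve)."""
    if shape == 0.0:
        return phase
    return (math.exp(shape * phase) - 1.0) / (math.exp(shape) - 1.0)

def cycles_per_grain(overlap, shape=0.0, samples=10000):
    """Count wraps of (shaped phase * overlap) % 1 over one grain."""
    wraps, prev = 0, 0.0
    for i in range(1, samples + 1):
        p = j_curve(i / samples, shape) * overlap % 1.0
        if p < prev:               # wrapped phase jumped back: one cycle completed
            wraps += 1
        prev = p
    return wraps
```

Shaping only redistributes the cycles within the grain (slow start and fast end, or vice versa); the cycle count stays at the overlap value.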

SchedulerCycle does return a phase because you don't necessarily have to plug it into VoiceAllocator. But when using VoiceAllocator, yes, then you don't need the phase from SchedulerCycle / SchedulerBurst; you use the rate, trigger and sub-sample offset outputs.

The rate output of SchedulerCycle / SchedulerBurst is latched per scheduling ramp cycle. This makes sure that when you use their rate output to integrate the window phases, these stay linear while the rate of SchedulerCycle / SchedulerBurst is being modulated. The raw rate value plugged into SchedulerCycle is continuously changing when being modulated, so the phase wouldn't be linear if you used that directly.

That's one of the big things the library solves for you. You can read more about that in the scheduling guide,
especially in chapter 2) Accumulation vs Integration of Ramps from a Scheduling Phasor
and chapter 5) Modulating the Rate of a Scheduling Phasor.

compare the output of SchedulerCycle with Phasor here, when being modulated:

(
{
    var rate = 1000 * (2 ** (SinOsc.ar(50) * 3));
    [
        SchedulerCycle.ar(rate)[\phase],
		Phasor.ar(DC.ar(0), rate * SampleDur.ir),
    ];
}.plot(0.0021).plotMode_(\plines);
)
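The difference can also be demonstrated outside the server. Below is a per-sample Python sketch, not the library's implementation: one phasor accumulates the instantaneous (modulated) rate every sample, the other samples-and-holds the rate once per ramp cycle, which is roughly what the latched rate output gives you:

```python
import math

def phasor_direct(rates):
    """Accumulate the instantaneous rate each sample: ramps bend under modulation."""
    out, p = [], 0.0
    for r in rates:
        out.append(p)
        p = (p + r) % 1.0
    return out

def phasor_latched(rates):
    """Hold the rate for a whole cycle, resampling it at each wrap:
    every ramp stays a straight line even while `rates` is modulated."""
    out, p, held = [], 0.0, rates[0]
    for r in rates:
        out.append(p)
        p += held
        if p >= 1.0:
            p %= 1.0
            held = r               # latch a new rate for the next cycle
    return out

rates = [0.01 * 2 ** math.sin(i * 0.1) for i in range(200)]  # modulated rate
direct = phasor_direct(rates)
latched = phasor_latched(rates)
```

Within one cycle, the latched phasor's per-sample increments are constant (a straight ramp), while the direct phasor's increments follow the modulator.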

EDIT: Maybe to be more precise: if you use VoiceAllocator, the rate input is also sampled and held per trigger, so you could also plug tFreq scaled by overlap directly into it. The reason we output phase, rate, triggers and sub-sample offsets from SchedulerCycle / SchedulerBurst is that the user can decide what to do with these values. There might be some redundancies, but this made the most sense to me, to keep a clear distinction between event scheduling and voice allocation and to output all of the values for a flexible interface.

using events[\rate] into VoiceAllocator:

(
{
    var numChannels = 5;

    var reset, tFreqMD, tFreq;
    var events, voices;

    reset = Trig1.ar(\reset.tr(0), SampleDur.ir);

    tFreqMD = \tFreqMD.kr(2);
    tFreq = \tFreq.kr(400) * (2 ** (SinOsc.ar(50) * tFreqMD));

    events = SchedulerCycle.ar(tFreq, reset);

    voices = VoiceAllocator.ar(
        numChannels: numChannels,
        trig: events[\trigger],
        rate: events[\rate] / \overlap.kr(2),
        subSampleOffset: events[\subSampleOffset],
    );

    voices[\phases];

}.plot(0.041);
)

using tFreq into VoiceAllocator:

(
{
    var numChannels = 5;

    var reset, tFreqMD, tFreq;
    var events, voices;

    reset = Trig1.ar(\reset.tr(0), SampleDur.ir);

    tFreqMD = \tFreqMD.kr(2);
    tFreq = \tFreq.kr(400) * (2 ** (SinOsc.ar(50) * tFreqMD));

    events = SchedulerCycle.ar(tFreq, reset);

    voices = VoiceAllocator.ar(
        numChannels: numChannels,
        trig: events[\trigger],
        rate: tFreq / \overlap.kr(2),
        subSampleOffset: events[\subSampleOffset],
    );

    voices[\phases];

}.plot(0.041);
)

Thank you, for your patience and explanations.

It’s clearer for me thanks to your explanations.
One last question for today:
In your example code "using events[\rate] into VoiceAllocator:"
you set numChannels to 5, but your grains are distributed across 4 channels. Why is that?