GrainUtils - sub-sample accurate EventScheduler and dynamic VoiceAllocator

Hey, while creating the GrainDelay I have figured out a way to implement a dynamic VoiceAllocator. This comes in handy for server-side polyphony (especially for granulation): instead of using the round-robin method (increment a counter for every received trigger and distribute the next voice to the next channel), the VoiceAllocator finds a channel which is currently free and schedules the next event on that channel; if no channel is available, the event gets dropped.
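
As a comparison, here is a minimal round-robin sketch with standard UGens only (not part of GrainUtils, just an illustration of the counting approach): every trigger advances a counter and gets routed to channel (count % numChannels), whether or not the previous grain on that channel has finished.

(
{
	var numChannels = 4;
	var trig = Impulse.ar(400);
	// round-robin: count the triggers and route each one to the next channel in turn
	var count = PulseCount.ar(trig) - 1;
	numChannels.collect { |i| trig * ((count % numChannels - i).abs < 0.5) };
}.plot(0.02);
)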

You can grab the latest release here

The EventScheduler has two arguments, triggerRate and reset, and outputs:

  • a derived trigger on output[0]
  • a linear ramp between 0 and 1 on output[1]
  • a derived and latched rate in Hz on output[2]
  • a derived sub-sample offset on output[3]

Its own rate is sampled and held for every ramp cycle, which makes sure the internal ramps stay linear and between 0 and 1 while being modulated.
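
To illustrate why the latch matters, here is a minimal sketch with standard UGens (not the EventScheduler internals): if you modulate the rate of a naive Phasor continuously, the ramp bends within each cycle, which is exactly the distortion the per-cycle sample-and-hold avoids.

(
{
	var tFreqMD = 2;
	var tFreq = 400 * (2 ** (SinOsc.ar(50) * tFreqMD));
	// naive ramp: the slope follows the modulator inside each cycle,
	// so the ramp is no longer linear between 0 and 1
	Phasor.ar(DC.ar(0), tFreq * SampleDur.ir);
}.plot(0.02);
)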

The VoiceAllocator has four arguments, numChannels, a trigger, a rate and a sub-sample offset, and outputs:

  • an array of sub-sample accurate phases
  • an array of triggers

This setup then allows you to:

  • modulate the trigger rate of the EventScheduler without distorting its phase, and calculate the sub-sample offset for one of its outputs
  • distribute the events across the channels via the VoiceAllocator and make sure the channel your next voice is distributed to is currently free.

These things in combination then enable:

  • trigger frequency modulation for audio ratchets without distorting the phase
  • overlapping grains while the events have different durations without distorting the phase

Currently the interface is a bit convoluted, but I hope I can think this through (try adjusting numChannels, and look at the plot while adjusting tFreqMD and overlapMD):

(
{
	var numChannels = 5;

	var reset, tFreqMD, tFreq;
	var overlapMD, overlap;
	var events, voices, phases, triggers;
	var sig;

	reset = Trig1.ar(\reset.tr(0), SampleDur.ir);

	tFreqMD = \tFreqMD.kr(2);
	tFreq = \tFreq.kr(400) * (2 ** (SinOsc.ar(50) * tFreqMD));

	overlapMD = \overlapMD.kr(0);
	overlap = \overlap.kr(1) * (2 ** (SinOsc.ar(50) * overlapMD));

	events = EventScheduler.ar(tFreq, reset);

	voices = VoiceAllocator.ar(
		numChannels: numChannels,
		trig: events[0],
		rate: events[2] / overlap,
		subSampleOffset: events[3],
	);
	phases = voices[0..numChannels - 1];
	triggers = voices[numChannels..numChannels * 2 - 1];

	phases;
}.plot(0.041);
)


In the context of granular synthesis this setup allows you to generate windowPhases which drive an arbitrary stateless window function (have a look here) from the multichannel phase output of VoiceAllocator, while using the sub-sample offset output of EventScheduler and the multichannel trigger output of VoiceAllocator to accumulate or integrate the grainPhases which drive your carrier oscillator.

The reason we can't put all of this together in one UGen is that we want to be able to use the multichannel windowPhases from VoiceAllocator to drive modulators for FM of the multichannel grain frequencies, and we additionally need the multichannel trigger from VoiceAllocator and the sub-sample offset from EventScheduler to pass to the grainPhase accumulator / integrator, so it can reset its phase and add the sub-sample offset.

Here is a test example (still a bit convoluted), where I have added the multichannel accumulator for the grainPhase manually in sclang:

(
var accumulatorSubSample = { |trig, subSampleOffset|
	// count samples since the last trigger and add the sub-sample offset,
	// only starting once the first trigger has arrived
	var hasTriggered = PulseCount.ar(trig) > 0;
	var accum = Duty.ar(SampleDur.ir, trig, Dseries(0, 1)) * hasTriggered;
	accum + subSampleOffset;
};

var multiChannelAccumulator = { |triggers, subSampleOffsets|
	triggers.collect{ |localTrig, i|
		accumulatorSubSample.(localTrig, subSampleOffsets[i]);
	};
};

var multiChannelDwhite = { |triggers|
	// a single Dwhite, demanded per channel on each local trigger
	var demand = Dwhite(-1.0, 1.0);
	triggers.collect{ |localTrig|
		Demand.ar(localTrig, DC.ar(0), demand)
	};
};

{
	var numChannels = 8;

	var reset, tFreqMD, tFreq;
	var overlapMD, overlap;
	var events, voices, windowPhases, triggers;
	var sig;

	var grainFreqMod, grainFreqs, grainSlopes, grainPhases, sigs;
	var grainWindows;

	reset = Trig1.ar(\reset.tr(0), SampleDur.ir);

	tFreqMD = \tFreqMD.kr(2);
	tFreq = \tFreq.kr(10) * (2 ** (SinOsc.ar(0.3) * tFreqMD));

	overlapMD = \overlapMD.kr(0);
	overlap = \overlap.kr(1) * (2 ** (SinOsc.ar(0.3) * overlapMD));

	events = EventScheduler.ar(tFreq, reset);

	voices = VoiceAllocator.ar(
		numChannels: numChannels,
		trig: events[0],
		rate: events[2] / overlap,
		subSampleOffset: events[3],
	);
	windowPhases = voices[0..numChannels - 1];
	triggers = voices[numChannels..numChannels * 2 - 1];

	grainWindows = HanningWindow.ar(windowPhases, \skew.kr(0.1));

	grainFreqMod = multiChannelDwhite.(triggers);
	grainFreqs = \freq.kr(440) * (2 ** (grainFreqMod * \freqMD.kr(2)));
	grainSlopes = grainFreqs * SampleDur.ir;

	grainPhases = (grainSlopes * multiChannelAccumulator.(triggers, Latch.ar(events[3], triggers))).wrap(0, 1);

	sigs = sin(grainPhases * 2pi);
	sigs = sigs * grainWindows;

	sigs = PanAz.ar(2, sigs, \pan.kr(0));
	sig = sigs.sum;

	sig!2 * 0.1;

}.play;
)

If you have additional ideas let me know :slight_smile:


This specific problem is something we have been trying to solve for 2-3 years now, and finally I have made a first draft @mousaique. I couldn't be more happy :slight_smile:


:fire: :fire: :fire: wooow, what an amazing achievement @dietcv!! Kudos for your patience & persistence in solving this Gordian knot of a problem that could drive one into granular despair :exploding_head::grinning:


This could be a conference paper :+1:

hjh


I think I can encapsulate the grainPhase generation into another UGen, RampIntegrator, which then needs these three arguments:

  • trig, multichannel expands when receiving an array of triggers from our VoiceAllocator
  • rate, multichannel expands when receiving an array of frequencies
  • subSampleOffset, from our EventScheduler

The user can then, for example, use the multichannel windowPhases from VoiceAllocator to drive an oscillator / window function, which can then be used for multichannel FM of the RampIntegrator.

Instead of latching the rates like we do internally for EventScheduler and VoiceAllocator, the RampIntegrator just integrates its frequency, like Sweep does, which enables FM. It additionally adds the sub-sample offset on phase reset from the trigger input, to be sub-sample accurate.
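
As a rough single-channel sketch of that idea with standard UGens (Sweep plus a latched offset, just an approximation, not the actual RampIntegrator):

(
{
	var trig = Impulse.ar(100);
	var rate = 440;
	var subSampleOffset = DC.ar(0.5);
	// Sweep integrates the (possibly modulated) rate and restarts on trig,
	// the sub-sample offset is latched at the trigger and added, scaled by the slope at that moment
	var phase = Sweep.ar(trig, rate) + Latch.ar(subSampleOffset * rate * SampleDur.ir, trig);
	sin(phase * 2pi);
}.plot(0.041);
)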

The potential example would then look like this, where the RampIntegrator is still pseudocode and we use, for example, an ExponentialWindow to modulate its frequency per grain.

(
{
	var numChannels = 8;

	var reset, tFreqMD, tFreq;
	var overlapMD, overlap;
	var events, voices;
	var fmods, grainFreqs, grainPhases, grainWindows;
	var grainOscs, grains, sig;

	reset = Trig1.ar(\reset.tr(0), SampleDur.ir);

	tFreqMD = \tFreqMD.kr(2);
	tFreq = \tFreq.kr(10) * (2 ** (SinOsc.ar(0.3) * tFreqMD));

	overlapMD = \overlapMD.kr(0);
	overlap = \overlap.kr(1) * (2 ** (SinOsc.ar(0.3) * overlapMD));

	events = EventScheduler.ar(tFreq, reset);

	voices = VoiceAllocator.ar(
		numChannels: numChannels,
		trig: events[\trigger],
		rate: events[\rate] / overlap,
		subSampleOffset: events[\subSampleOffset],
	);
	grainWindows = HanningWindow.ar(voices[\phases], \windowSkew.kr(0.1));

	fmods = ExponentialWindow.ar(voices[\phases], \pitchSkew.kr(0.1), \pitchShape.kr(0.5));
	grainFreqs = \freq.kr(440) * (1 + (fmods * \fmIndex.kr(2)));

	grainPhases = RampIntegrator.ar(
		trig: voices[\triggers], 
		rate: grainFreqs, 
		subSampleOffset: events[\subSampleOffset]
	);

	grainOscs = sin(grainPhases * 2pi);
	grains = grainOscs * grainWindows;

	grains = PanAz.ar(2, grains, \pan.kr(0));
	sig = grains.sum;

	sig * 0.1;

}.play;
)

Now you can use the RampIntegrator and the window functions in the following example to do FM and PM of the grainPhases:

(
var multiChannelDwhite = { |triggers|
	var demand = Dwhite(-1.0, 1.0);
	triggers.collect{ |localTrig|
		Demand.ar(localTrig, DC.ar(0), demand)
	};
};

{
	var numChannels = 8;

	var reset, tFreqMD, tFreq;
	var overlapMD, overlap;
	var events, voices, windowPhases, triggers;

	var grainFreqMod, grainFreqs, grainPhases, grainWindows;
	var grainOscs, grains, sig;
	var fmods, modPhases, pmods;

	reset = Trig1.ar(\reset.tr(0), SampleDur.ir);

	tFreqMD = \tFreqMD.kr(1);
	tFreq = \tFreq.kr(10) * (2 ** (SinOsc.ar(0.3) * tFreqMD));

	overlapMD = \overlapMD.kr(1);
	overlap = \overlap.kr(2) * (2 ** (LFDNoise3.ar(0.1) * overlapMD));

	events = EventScheduler.ar(triggerRate: tFreq, reset: reset);

	voices = VoiceAllocator.ar(
		numChannels: numChannels,
		trig: events[0],
		rate: events[1] / overlap,
		subSampleOffset: events[2],
	);
	windowPhases = voices[0..numChannels - 1];
	triggers = voices[numChannels..numChannels * 2 - 1];

	grainWindows = HanningWindow.ar(windowPhases, \skew.kr(0.05));

	grainFreqMod = multiChannelDwhite.(triggers);
	grainFreqs = \freq.kr(440) * (2 ** (grainFreqMod * \freqMD.kr(1)));

	fmods = ExponentialWindow.ar(windowPhases, \pitchSkew.kr(0.03), \pitchShape.kr(0));

	grainPhases = RampIntegrator.ar(
		trig: triggers,
		rate: grainFreqs * (1 + (fmods * \pitchMD.kr(2))),
		subSampleOffset: events[2]
	);

	modPhases = RampIntegrator.ar(
		trig: triggers,
		rate: grainFreqs * \pmRatio.kr(1.5),
		subSampleOffset: events[2]
	);
	pmods = SinOsc.ar(DC.ar(0), modPhases * 2pi);

	grainPhases = (grainPhases + (pmods * \pmIndex.kr(1))).wrap(0, 1);

	grainOscs = SinOsc.ar(DC.ar(0), grainPhases * 2pi);
	grains = grainOscs * grainWindows;

	grains = PanAz.ar(2, grains, \pan.kr(0));
	sig = grains.sum;

	sig = LeakDC.ar(sig);

	sig * 0.1;

}.play;
)

And here via a plot:

(
var multiChannelDwhite = { |triggers|
	var demand = Dwhite(-1.0, 1.0);
	triggers.collect{ |localTrig|
		Demand.ar(localTrig, DC.ar(0), demand)
	};
};

{
	var numChannels = 5;

	var reset, tFreqMD, tFreq;
	var overlapMD, overlap;
	var events, voices, phases, triggers;
	var grainFreqMod, grainFreqs, grainPhases, grainWindows;
	var grainOscs, grains;

	reset = Trig1.ar(\reset.tr(0), SampleDur.ir);

	tFreqMD = \tFreqMD.kr(0);
	tFreq = \tFreq.kr(400) * (2 ** (SinOsc.ar(50) * tFreqMD));

	overlapMD = \overlapMD.kr(0);
	overlap = \overlap.kr(5) * (2 ** (SinOsc.ar(50) * overlapMD));

	events = EventScheduler.ar(tFreq, reset);

	voices = VoiceAllocator.ar(
		numChannels: numChannels,
		trig: events[0],
		rate: events[1] / overlap,
		subSampleOffset: events[2],
	);
	phases = voices[0..numChannels - 1];
	triggers = voices[numChannels..numChannels * 2 - 1];

	grainWindows = HanningWindow.ar(phases, \skew.kr(0.5));

	grainFreqMod = multiChannelDwhite.(triggers);
	grainFreqs = \freq.kr(800) * (2 ** (grainFreqMod * \freqMD.kr(2)));

	grainPhases = RampIntegrator.ar(
		trig: triggers,
		rate: grainFreqs,
		subSampleOffset: events[2]
	);
	
	grainOscs = SinOsc.ar(DC.ar(0), grainPhases * 2pi);
	grains = grainOscs * grainWindows;

}.plot(0.041);
)

I will probably make a selection of window functions which are interesting for granular synthesis (trapezoidal, Tukey, Gaussian, Hanning and exponential windows) and merge the UnitShapers release with GrainUtils, so all the stuff is in one place.


I have added convenience wrappers around the UGens; now you can access the different outputs by their key:

(
var multiChannelDwhite = { |triggers|
	var demand = Dwhite(-1.0, 1.0);
	triggers.collect{ |localTrig|
		Demand.ar(localTrig, DC.ar(0), demand)
	};
};

{
	var numChannels = 8;

	var reset, tFreqMD, tFreq;
	var overlapMD, overlap;
	var events, voices, windowPhases, triggers;

	var grainFreqMod, grainFreqs, grainPhases, grainWindows;
	var grainOscs, grains, sig;
	var fmods, modPhases, pmods;

	reset = Trig1.ar(\reset.tr(0), SampleDur.ir);

	tFreqMD = \tFreqMD.kr(1);
	tFreq = \tFreq.kr(10) * (2 ** (SinOsc.ar(0.3) * tFreqMD));

	overlapMD = \overlapMD.kr(1);
	overlap = \overlap.kr(2) * (2 ** (LFDNoise3.ar(0.1) * overlapMD));

	events = EventScheduler.ar(triggerRate: tFreq, reset: reset);

	voices = VoiceAllocator.ar(
		numChannels: numChannels,
		trig: events[\trigger],
		rate: events[\rate] / overlap,
		subSampleOffset: events[\subSampleOffset],
	);

	grainWindows = HanningWindow.ar(voices[\phases], \skew.kr(0.05));

	grainFreqMod = multiChannelDwhite.(voices[\triggers]);
	grainFreqs = \freq.kr(440) * (2 ** (grainFreqMod * \freqMD.kr(1)));

	fmods = ExponentialWindow.ar(voices[\phases], \pitchSkew.kr(0.03), \pitchShape.kr(0));
	grainPhases = RampIntegrator.ar(
		trig: voices[\triggers],
		rate: grainFreqs * (1 + (fmods * \pitchMD.kr(2))),
		subSampleOffset: events[\subSampleOffset]
	);

	modPhases = RampIntegrator.ar(
		trig: voices[\triggers],
		rate: grainFreqs * \pmRatio.kr(1.5),
		subSampleOffset: events[\subSampleOffset]
	);
	pmods = SinOsc.ar(DC.ar(0), modPhases * 2pi);

	grainPhases = (grainPhases + (pmods * \pmIndex.kr(1))).wrap(0, 1);
	grainOscs = SinOsc.ar(DC.ar(0), grainPhases * 2pi);

	grains = grainOscs * grainWindows;

	grains = PanAz.ar(2, grains, \pan.kr(0));
	sig = grains.sum;

	sig = LeakDC.ar(sig);

	sig * 0.1;

}.play;
)

I have added the window functions as well and started fresh here:

Now everything is in the same place :slight_smile:


This is awesome work. Just interested – why not sum the channels to stereo output at the plugin level? The other grain libraries in SC don't preserve multichannel output - you could get some massive performance gains working in a for loop, right?

If you mix them, then you can’t separate them to control individual grains.

hjh


This setup gives you maximum flexibility, which I have tried to illustrate with the example above :slight_smile: In this plot you can see that we can have individual frequencies for overlapping grains. If you first summed the output of the VoiceAllocator and then applied the Dwhite sequence or FM/PM, then the moment the next event arrives, the frequency would change for all the channels.


Hi @dietcv,
these UGens seem to currently be limited to 32 channels, is that right? Will that eventually become expandable, or is it kept like that for efficiency reasons?
Thanks!

Yes, that's true. numChannels is currently clipped between 1 and 32 channels. This could easily be changed. For my purposes 32 channels is enough; with all the additional stuff I have implemented in the SynthDef, I'm only using about 8 channels right now.

I see! If that could be easily configurable, I'd think it would be great to have the flexibility to keep it open and user-defined!

What's important to note is that this approach can't be as efficient as using GrainBuf with up to 512 channels, for example. There should be an upper limit. It's flexibility vs. performance here.

Yes, absolutely, but I think there's still headroom for channels in realtime, and non-realtime processing could of course handle more if necessary!

So what kind of upper limit would you suggest?

I'd go for 64 or 128 (NRT case), but that's very subjective, based on my synthesis habits ;)

I have updated to 64 channels, added a ShiftRegister UGen, added an EventData UGen, and some additional stuff.

You can get the latest release here

The ShiftRegister is inspired by the Rung Divisions. It has a normalized encoded 3-bit output [\bit3], a normalized and reversed encoded 8-bit output [\bit8] and a ramp output [\phase]. The reversed encoded outputs result in contrapuntal shapes (see plot below). The user can set the trigger frequency via freq, the initial state of the shift register via seed, rotate it to the left or the right via rotate, and XOR it with a random bit via chance (if chance is 1.0, this effectively doubles the shift register length: the register is first filled with 1s and then with 0s, so you get this initial kind of “shark fin” pattern, see plot below). You can additionally set the length, and use either the 3-bit or the 8-bit output via fbSource to modulate its own trigger frequency in a chaotic feedback loop, where the mod index is controlled via fbIndex.

(
{
    var register = ShiftRegister.ar(
        freq: 1000,
        chance: 1.0,
        length: 8,
        rotate: 1,
		fbIndex: 1,
		fbSource: 0,
		seed: 0,
		reset: 0,
	);
	[
		register[\bit3],
		register[\bit8],
		register[\phase]
	];
}.plot(0.021).plotMode_(\plines);
)

What you can do now with the ShiftRegister is use it as your main source of time, since it outputs a ramp signal via its [\phase] output. You can take this ramp and plug it into EventData, which calculates the trigger, slope and sub-sample offset for you and outputs them via [\trigger], [\rate] and [\subSampleOffset]. You can plug those into VoiceAllocator to distribute your events across the channels, and you can use the chance and fbIndex params to modulate your event distribution.

If you make sure your scheduling ramps are linear and between 0 and 1, you can plug any other ramp into the EventData UGen; it then calculates all the necessary data to be used for VoiceAllocator or RampIntegrator.
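
For example, here is a minimal sketch feeding a plain Phasor into EventData instead of the ShiftRegister (assuming the keyed outputs described above):

(
{
	var numChannels = 4;
	// any linear ramp between 0 and 1 will do, here a plain Phasor
	var ramp = Phasor.ar(DC.ar(0), 300 * SampleDur.ir);
	var events = EventData.ar(ramp);
	var voices = VoiceAllocator.ar(
		numChannels: numChannels,
		trig: events[\trigger],
		rate: events[\rate] / \overlap.kr(2),
		subSampleOffset: events[\subSampleOffset]
	);
	HanningWindow.ar(voices[\phases], \skew.kr(0.5));
}.plot(0.03);
)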

(
{
	var numChannels = 5;
	
	var register, events, voices;

	register = ShiftRegister.ar(
		freq: 500,
		chance: 0.5,
		length: 8,
		rotate: 1,
		fbIndex: \fbIndex.kr(2),
		fbSource: 1,
		seed: 0,
		reset: 0
	);
	
	events = EventData.ar(register[\phase]);

	voices = VoiceAllocator.ar(
		numChannels: numChannels,
		trig: events[\trigger],
		rate: events[\rate] / \overlap.ar(1),
		subSampleOffset: events[\subSampleOffset]
	);

	HanningWindow.ar(voices[\phases], \skew.kr(0.5));

}.plot(0.081);
)

You can then additionally use the 3-bit and / or 8-bit output to sequence the pitch of your grains or something else.

(
{
	var numChannels = 8;

	var reset, overlapMod, overlap;
	var register, events, voices;
	var grainFreqMod, grainFreqs;
	var grainPhases, grainPhasesShaped, grainWindows;
	var grainOscs, grains, sig;

	reset = Trig1.ar(\reset.tr(0), SampleDur.ir);

	overlapMod = LFDNoise3.ar(\overlapMF.kr(0.1));
	overlap = \overlap.kr(1) * (2 ** (overlapMod * \overlapMD.kr(0)));

	register = ShiftRegister.ar(
		freq: \tFreq.kr(12),
		chance: \chance.kr(0.5),
		length: 8,
		rotate: 1,
		fbIndex: \feedback.kr(1),
		fbSource: 1,
		seed: 0,
		reset: reset
	);

	events = EventData.ar(register[\phase]);

	voices = VoiceAllocator.ar(
		numChannels: numChannels,
		trig: events[\trigger],
		rate: events[\rate] / overlap,
		subSampleOffset: events[\subSampleOffset],
	);

	grainWindows = GaussianWindow.ar(
		phase: voices[\phases],
		skew: \windowSkew.kr(0.03),
		index: \windowIndex.kr(2)
	);

	grainFreqMod = Latch.ar(register[\bit8], voices[\triggers]);
	grainFreqs = \freq.kr(440) * (2 ** (grainFreqMod * \freqMD.kr(2)));

	grainPhases = RampIntegrator.ar(
		trig: voices[\triggers],
		rate: grainFreqs,
		subSampleOffset: events[\subSampleOffset]
	);	
	grainPhasesShaped = SCurve.ar(grainPhases, \shape.kr(0.85), \inflection.kr(1));
	grainOscs = SinOsc.ar(DC.ar(0), grainPhasesShaped * 2pi);

	grains = grainOscs * grainWindows;

	grains = PanAz.ar(2, grains, \pan.kr(0));
	sig = grains.sum;

	sig = LeakDC.ar(sig);

	sig * 0.1;

}.play
)