Insert fx per grain

hey, i would like to be able to insert a different fx per grain. i have an array of signals which is multiplied by an array of grain windows, and the fx should be inserted before i sum the signals. i would like to map these fx to a bus. is there a way to use In.ar here, for example to insert a BPF? i also considered Select.ar for choosing between different fx hard-coded into the SynthDef, but i would like to keep the setup as modular as possible. is this possible? thanks.

arrayOfSigs = [...];
// ...insert fx per grain here...
arrayOfSigs = arrayOfSigs * arrayOfGrainWindows;
arrayOfSigs = PanAz.ar(2, arrayOfSigs, panning);
signal = arrayOfSigs.sum;

It isn’t possible to dynamically instantiate additional Synth nodes from within a synth node. One option, though, is to generate a list of grain timings, send it back to the client, and have the client build the audio graph, i.e. the network of synth nodes, for each grain.

s.bind { SynthDef(...).play } may allow you to define a different SynthDef graph per grain. It sends an OSC bundle that both defines and instantiates a SynthDef. Be aware that SynthDef compilation takes time on the client, so you may need to bump up s.latency to avoid late messages. I’m not sure if there are other downsides.
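As a rough illustration of this idea (the grain duration, the BPF and all parameter values here are invented placeholders, not from the original post):

```supercollider
// hypothetical sketch: define and play a fresh SynthDef per grain;
// s.bind puts the d_recv and s_new messages into one timestamped bundle
(
s.bind {
    SynthDef(\oneGrain, {
        var env = EnvGen.ar(Env.sine(0.05), doneAction: 2); // 50 ms grain window
        var sig = BPF.ar(PinkNoise.ar, \freq.kr(800), 0.1); // placeholder fx
        Out.ar(0, sig * env * 0.2 ! 2);
    }).play;
};
)
```

Each grain gets its own (tiny) SynthDef compilation on the client, which is where the latency cost comes from.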

a more experimental way is to use the incredibly powerful FrameLib by Alex Harker. i think there is an experimental SC port, with @Sam_Pluta being part of it all, but that is only Huddersfield corridor gossip, so i hope he will confirm the state of it all

thanks for all the replies. the SynthDef structure is fixed: it offsets audio-rate triggers and uses the multichannel trigger to do different things, like triggering several multichannel windows, spatializing per grain etc. so the fx has to be inserted into the SynthDef before i sum the channels. i think the only possibility is to hard-code the different fx into the SynthDef and use Select. this leads to unnecessary UGens for my different presets though. or i just apply them afterwards to the sum.

I’d like to hear how this sounds, because with dense enough grains it would sound like a group effect on one sound source, for many sound sources. Also, isn’t using Select to switch between different segments limited by the switching speed? Whereas the microsound granular part of the technique is a bunch of playheads into a buffer.

I’m not sure if this captures what you need, but I’ve often used a solution like this:

  1. Use one of the TGrains variants with N channels.
  2. Output to an N-channel bus.
  3. Run N effects in parallel, each reading from one channel of the N-channel bus, and writing to the same output.

You can set the per-grain pan setting to control which effect chain is applied to the grain. If you pan only to discrete outputs (e.g. with 2-channel output, only a pan value of -1 or 1), then each grain gets assigned to exactly one effect chain. If your effect chains are “ordered”, i.e. adjacent effects sound coherent together (e.g. a bank of 8 tuned CombC’s), then you can freely choose a pan to cross-fade between adjacent effects.
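A minimal sketch of that routing, assuming a loaded mono Buffer b (the 20 Hz trigger, the CombC chains and all names are invented for illustration):

```supercollider
// hypothetical sketch: TGrains -> n-channel bus -> n parallel fx chains
(
var n = 4;
~grainBus = Bus.audio(s, n);

SynthDef(\grainsToBus, {
    var trig = Impulse.ar(20);
    // snap each grain to one channel; for numChannels > 2 TGrains pans
    // PanAz-style, so the exact value-to-channel mapping may need tweaking
    var pan = TIRand.ar(0, n - 1, trig) * 2 / n;
    var pos = BufDur.kr(b) * LFNoise1.kr(0.1).range(0.2, 0.8);
    var sig = TGrains.ar(n, trig, b, 1, pos, 0.1, pan, 0.5);
    Out.ar(\out.kr(0), sig);
}).play(s, [\out, ~grainBus]);

// one effect per channel of the bus, mixed down afterwards
SynthDef(\fxBank, {
    var in = In.ar(\in.kr(0), n);
    var fx = in.collect { |chan, i|
        CombC.ar(chan, 0.2, 0.05 * (i + 1), 2) // placeholder per-channel fx
    };
    Out.ar(0, Splay.ar(fx));
}).play(s, [\in, ~grainBus], \addToTail);
)
```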

This doesn’t allow you to send a single grain to multiple effects - you could achieve this by using multiple TGrains that have the same inputs and triggers EXCEPT for pan - so if you wanted to be able to route to 4 effect chains at once, you’d need 4 copies. TGrains is still likely to be MUCH more efficient than handling individual grains yourself, so I would suspect you could get 5-10 simultaneous effects routes (e.g. 5-10 duplicated TGrains) before it was less performant than the individual-grains approach.

Something to note re. performance: adding more output channels should not significantly harm performance, since a grain is only ever panned between two adjacent channels. So you can do cool things like running 4x different reverbs with 6x input channels each (panned differently) - then you can spatialize grains in really complex, interesting ways while the cost of the granulation part of the signal chain is the same as regular stereo panning.


This is a great idea! Is this how fennesz does his grain clouds?

hey, thanks a lot. i think using an n-channel output bus is the solution. maybe the DX UGens could also be used for smoothly switching between the effects. my initial idea was not about sequencing different fx, but about not writing a separate source SynthDef for every fx i would like to use, so that i could make different presets using different fx and interpolate between these for composition. but i will investigate some more possibilities. i'm sticking with BufRd instead of the grain UGens because you can do FM / PM per grain with a dedicated frequency window. there is of course an efficiency tradeoff. i will try out the bank of tuned CombCs, sounds great.

No idea, because Fennesz research page has been coming soon since 2006… :wink:

i have for example used an array of overlapping stateless windows driving the filter frequencies of an array of bandpass filters with different min and max values, triggered by the multichannel trigger. that's quite cool already. sounds a bit like self-vocoding.
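A stripped-down sketch of such a window-driven filter bank (all rates, ranges and the noise source are invented placeholders; the triggers here are faked with phase-offset Impulses):

```supercollider
// hypothetical sketch: per-channel windows sweep per-channel BPF frequencies
(
{
    var n = 4;
    var trigs = n.collect { |i| Impulse.ar(8, i / n) };       // round-robin triggers
    var windows = trigs.collect { |t| EnvGen.ar(Env.sine(0.12), t) };
    var src = PinkNoise.ar(0.3 ! n);                          // placeholder source
    var freqs = windows.collect { |w, i|
        // each channel sweeps its own min..max range as its window opens
        w.linexp(0, 1, 200 * (i + 1), 2000 * (i + 1))
    };
    var sig = BPF.ar(src * windows, freqs, 0.05) * 8;         // narrow band + makeup gain
    Splay.ar(sig)
}.play;
)
```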

For my understanding, what you’re describing is not exactly what you’re sketching in pseudocode. Applying fx per grain – separating each grain’s output from the other grain/fx outputs and then mixing – is what PbindFx does (and it needs a lot of bus management behind the scenes).
In your example, you would rather apply fx(s) to grain streams, which is certainly possible and much easier to accomplish. Actually, you already suggested it yourself: you can work with multichannel buses. @scztt suggested such strategies with TGrains.
See also my recent example in this thread: Real-Time Attack and Decay Control of grain envelope for Granular Synthesis - #5 by dkmayer. You could route the signals grains or sig to a multichannel bus (your fx ins). Something similar can be done with DXEnvFan. The Buffer Granulation Ex. 1f does this in one SynthDef, but you can also modularize it (again by sending to multichannel buses before mixing).

thanks, my approach is quite similar to the example you have shared. i'm using Impulse, PulseDivider and Sweep to create a multichannel audio trigger and a multichannel phase to drive BufRds and several stateless windows for amplitude and frequency or phase modulation.
i additionally use the multichannel trigger and some Demand UGens for spatializing per grain in PanAz, and then sum the multichannel signal to stereo afterwards (with an actual multichannel setup you could leave out the sum and set the number of channels in PanAz accordingly). i'm also using Demand UGens for masking the triggers of Impulse to create rhythmic figures at audio rate. i have used some pattern solutions before, but for pulsar synthesis, which i'm mostly using, the triggers have to be at audio rate.
with this setup you can overlap the grains inside the SynthDef up to maxOverlap, which is given by the number of channels you define at SynthDef evaluation.
this is already working really great after a lot of work on the individual parts over the last year.
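A stripped-down sketch of that trigger/phase scheme (assuming b is a loaded mono Buffer; all rates and the playback position are placeholder values):

```supercollider
// hypothetical sketch: one audio-rate clock fanned out round robin with
// PulseDivider; Sweep builds a per-channel sample phase for BufRd
(
{
    var n = 4;
    var grainFreq = 25, grainDur = n / grainFreq;   // overlap up to n
    var clock = Impulse.ar(grainFreq);
    var trigs = n.collect { |i| PulseDivider.ar(clock, n, i) };
    var phases = trigs.collect { |t| Sweep.ar(t, BufSampleRate.kr(b)) };
    var windows = trigs.collect { |t| EnvGen.ar(Env.sine(grainDur), t) };
    var startPos = BufFrames.kr(b) * LFNoise1.kr(0.2).range(0.1, 0.9);
    var grains = BufRd.ar(1, b, phases + startPos, 0, 4);
    Splay.ar(grains * windows)
}.play;
)
```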

when inserting the fx at the point where i have put them in the pseudocode, you can use a multichannel window or a multichannel trigger (all distributed round robin across the channels) to trigger or drive different values of these fx per channel, and if you increase the overlap you can have overlapping fx with different values per grain.
it would be straightforward to just apply the fx to the summed output, but then i can't have overlapping values for, e.g., the center frequency of the BPF. besides the BPF there are several other fx i would like to apply. i don't want to put them all in the grain SynthDef when i'm just using one fx per synth preset, and i don't want to write different grain SynthDefs whose only difference is the fx, because then i can't interpolate between different presets for composition.

i will try sending the signals, windows and triggers out to busses, applying an fx and routing the multichannel signal back into the SynthDef before summing it.

That all makes sense to me and is certainly an interesting approach – one that’s slightly different from the grain + fx variants that I have used myself so far.

i have not been able to route the output of the fx back into the main SynthDef. so i thought i could split source, fx, and the final output stage (with the panning and summing) into three SynthDefs:

(
~numChannels = 5;

SynthDef(\multiChannelGrains, {
	
	[...]
	
	OffsetOut.ar(\trigOut.kr(0), arrayOfTriggers);
	OffsetOut.ar(\windowOut.kr(0), arrayOfWindows);
	OffsetOut.ar(\grainOut.kr(0), arrayOfGrains);
}).add;

SynthDef(\fxPerGrain, {
	
	var arrayOfTrigs = In.ar(\trigIn.kr(0), ~numChannels);
	var arrayOfWindows = In.ar(\windowIn.kr(0), ~numChannels);
	var arrayOfGrains = In.ar(\grainIn.kr(0), ~numChannels);
	
	[...]
	
	OffsetOut.ar(\fxOut.kr(0), sig);
}).add;

SynthDef(\panPerGrain, {
	
	var sig, arrayOfFxGrains;	
	
	arrayOfFxGrains = In.ar(\fxIn.kr(0), ~numChannels);
	arrayOfFxGrains = PanAz.ar(2, arrayOfFxGrains, \pan.kr(0));
	sig = arrayOfFxGrains.sum;
	
	OffsetOut.ar(\out.kr(0), sig);
}).add;
)

// create groups and busses

(
~makeBusses = {
	~bus = Dictionary.new;
	~bus.add(\trigOut -> Bus.audio(s, ~numChannels) );
	~bus.add(\windowOut -> Bus.audio(s, ~numChannels) );
	~bus.add(\grainOut -> Bus.audio(s, ~numChannels) );
	~bus.add(\fxOut -> Bus.audio(s, ~numChannels) );
};
~makeBusses.();
)

(
~makeGroups = {
	~synthGrp = Group.new;
	~fxGrp = Group.new(~synthGrp, \addAfter);
	~finalGrp = Group.new(~fxGrp, \addAfter);
};
~makeGroups.();
)

// routing to busses

(
Routine {

	s.bind {

		Synth(\multiChannelGrains, [

			\trigOut, ~bus[\trigOut],
			\windowOut, ~bus[\windowOut],
			\grainOut, ~bus[\grainOut],

		], target: ~synthGrp);

	};

	s.bind {

		Synth(\fxPerGrain, [

			\trigIn, ~bus[\trigOut],
			\windowIn, ~bus[\windowOut],
			\grainIn, ~bus[\grainOut],
			\fxOut, ~bus[\fxOut],

		], target: ~fxGrp);

	};

	s.bind {

		Synth(\panPerGrain, [

			\fxIn, ~bus[\fxOut],
			\out, 0,

		], target: ~finalGrp);

	};

}.play;
)

what do you think of this approach?

Are you changing the “shape” of the signal graph dynamically?

Could you “modularise” by writing pseudo-Ugens?

Using “array expansion” rules to get the behaviour you want?

Something like the below?

var tr = GenTrig.ar(trigParam...);
var wn = GenWindow.ar(windowParam...);
var gr = GenGrain.ar(grainParam...);
var fx = ApplyFx.ar(tr, wn, gr, fxParam...);
var pn = ApplyPan.ar(fx, panParam...);
Out.ar(0, FinalMix.ar(pn, mixParam...))

Scsynth works perfectly with (very) large Ugen graphs, but of course the shape is fixed.

On the other hand, you can schedule these large graphs in whatever pattern and at whatever “granularity” you like.
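A tiny illustration of that pseudo-UGen style (all names here are invented; each “module” is just an sclang function returning UGens, so the graph shape is fixed once the SynthDef is built):

```supercollider
// hypothetical pseudo-UGens: plain functions composed inside one SynthDef
(
~genTrig = { |freq = 10, n = 4|
    var clock = Impulse.ar(freq);
    { |i| PulseDivider.ar(clock, n, i) } ! n        // round-robin triggers
};
~genWindow = { |trigs, dur = 0.1|
    trigs.collect { |t| EnvGen.ar(Env.sine(dur), t) }
};
~applyFx = { |sig, windows|
    BPF.ar(sig, windows.linexp(0, 1, 300, 3000), 0.1)  // placeholder fx
};

SynthDef(\modularGrains, {
    var trigs = ~genTrig.(\tfreq.kr(10), 4);
    var windows = ~genWindow.(trigs, \dur.kr(0.1));
    var sig = ~applyFx.(PinkNoise.ar(0.2 ! 4) * windows, windows);
    Out.ar(\out.kr(0), Splay.ar(sig));
}).add;
)
```

Swapping a module means rebuilding the SynthDef, but array expansion keeps each module per-grain without any bus plumbing.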

thanks, i have already created different functions for triggers, windows, grains, panning etc. the problem is that you can't exchange the fx, which should be inserted exactly between the “grain module” and the “panning module”. i don't want to use Select.ar(..., fx1, fx2, fx3) here because of the unnecessary UGens. the shape of the synth graph has to change dynamically, which leads me to different synths, busses and groups.

Ah, apologies, I wasn’t sure.

Out of curiosity, if there were a “Select” that didn’t evaluate the “unused” Ugens (a kind of generalised demand system) would that work for you?

Would it be preferable?

(Quite off-topic and speculative, I know…)

i'm trying to build a system which is maximally flexible in creating different timbres, which means different configurations for source, modulation, fx etc., so that you can create different timbral states and then interpolate between those to create musical form. so it's necessary to smoothly exchange the different modules on the fly.
how the interpolation is done is another topic (i would differentiate between interpolation of modules and interpolation of values), but first you need enough flexibility to create the different timbral states by exchanging these building blocks. i've modularized a lot of these building blocks already and use busses for all of the modulators, so the source SynthDef is more of a basic framework which is accompanied by modulation and fx SynthDefs.
i have created a bunch of different musical ideas already using this basic framework, but all of them use different modulators (LFOs or Demand UGens), different fx etc.
to glue these ideas together other than by playing event A, stopping event A and playing event B, all the sounds have to be created from the same instrument by transitioning between these predefined states in different ways, to create an evolving but not totally random form.

I’m sure I’m missing something obvious, but it seems like this is at least vaguely in the area of the kind of thing Ndef does so nicely, in terms of taking care of all the book-keeping and cross-fading?

Ndef('src', { SinOsc.ar({ 220.rrand(440) } ! 8, 0) * 0.1 });
Ndef('trg', { Dust.ar({ 2.0.rand } ! 8) });
Ndef('env', { Decay2.ar(Ndef.ar('trg'), 0.1, 2) });
Ndef('grn', { Ndef.ar('src') * Ndef.ar('env') });
Ndef('fx', { CombC.ar(Ndef.ar('grn'), 0.2, 0.2, 2) });
Ndef('mix', { Splay.ar(Ndef.ar('grn') + Ndef.ar('fx'), 1, 1, 0, true) }).play