DynGen - Dynamic UGen

I think that feature would be pretty straightforward to add to my Voicer quark. (Cue up my standard editorial comment about the ways that we make our code harder to extend by relying directly on low level abstractions, instead of building higher level abstractions… I already have a higher level abstraction to fit this requirement into :wink: )

I’d imagine you can get away with quite a bit, apart from timestamps (which will not be possible to respect if there’s an NRT part of synth instantiation).

That’s not a complaint, btw – really impressed with this. Just noting a boundary condition (and there’s a lot of space inside the boundary).

hjh

I don’t fully understand the timestamp aspect. DynGen includes two UGens: the default DynGen, which offloads compilation to an NRT thread and therefore introduces a delay of at least one block size, and DynGenRT, which compiles blocking on the RT thread and is therefore available from the first sample, but may introduce dropouts. See the DynGenRT help for more information.

The audio sample/snippet in the previous post actually uses DynGenRT, to demonstrate that it can be used without dropouts - but if you don’t need output from the first sample, it is always better to use DynGen.

I don’t see any harm in developing ways to use DynGen with accurate timestamp resolution and no risk of dropouts.

I’m of course aware of DynGenRT – it’s mentioned in the same note that I quoted earlier. But, if I’m onstage, I would like to reduce the risk of xruns as much as possible, and I’m perfectly fine with developing a structure to do that transparently with DynGen. (As noted above, I don’t think it will take long, though it will have to wait until after a late November show.)

hjh


This is amazing! Thanks for sharing!

I have one little question (I am not familiar with EEL2)

The help file states that ‘each variable within EEL2 is already an array, which allows us to use it as a delay line.’ As a matter of fact, I see in your examples that you use buffers instantiated directly in the script, as in:

buffer[writeIndex] = in0;

But what is the default size of such arrays? Are they allocated with a default maximum size, or are they resized at runtime as needed?

Thanks for your kind help!

I had a go at converting a Schroeder reverb I found on the Max forums, but it sounds like the repeated instances of the allpass are sharing the same buffer or something (it’s giving a weird pitch-shifter effect).

anyone have any tips?

(
~genDef = DynGenDef(\funcs, 
        "
        function wrap(x, low, high)local(range, result)
        (
            range = high - low;
            result = range != 0 ? (x - floor((x - low) / range) * range) : low;

            result;
        );

        function mstosamps(ms)( srate * ms * 0.001 );

        function linear_interp(x, y, a) ( x + a * (y-x));

        function lpf_op_simple(in, damp) local(init, prev, lpf)(
            lpf = linear_interp(in, prev, damp);
            prev = lpf;
            lpf;
        );

        function allpass_delay(in, gain, delay_samps, buf) local(init, ptr, max_delay, read_pos, frac, a, b, tap, sig, out)
        (
            !init ? (ptr = 0; init = 1;);
            max_delay = mstosamps(3000);

            read_pos = wrap(ptr - delay_samps, 0, max_delay);
            frac = read_pos - floor(read_pos);
            a = wrap(floor(read_pos), 0, max_delay);
            b = wrap(a + 1, 0, max_delay);
            tap = linear_interp(buf[a|0], buf[b|0], frac);

            sig = (in - tap) * gain;
            out = tap + sig;

            buf[ptr] = sig;
            ptr = wrap(ptr + 1, 0, max_delay);

            out;
        );

        function fbcomb(in, gain, delay_samps, damp, buf) local(init, ptr, read_pos, max_delay, frac, a, b, tap, out)
        (
            
            !init ? (ptr = 0; init = 1;);
            max_delay = mstosamps(3000);

            read_pos = wrap(ptr - delay_samps, 0, max_delay);
            frac = read_pos - floor(read_pos);
            a = wrap(floor(read_pos), 0, max_delay);
            b = wrap(a + 1, 0, max_delay);
            tap = linear_interp(buf[a|0], buf[b|0], frac);

            tap = lpf_op_simple(tap, damp);

            out = (in - tap) * gain;

            buf[ptr] = out;
            ptr = wrap(ptr+1, 0, max_delay);

            out;
        );
        
        size=in1; damp=in2;

        ap1_buf[0]+=0;
        ap2_buf[0]+=0;
        ap3_buf[0]+=0;

        sig = lpf_op_simple(in0, damp);
        ap1 = allpass_delay(sig, 0.7, 347 * size, ap1_buf);
        ap2 = allpass_delay(ap1, 0.7, 113 * size, ap2_buf);
        ap3 = allpass_delay(ap2, 0.7, 370 * size, ap3_buf);
        
        sig = ap3;

        x1_buf[0]+=0;
        x2_buf[0]+=0;
        x3_buf[0]+=0;
        x4_buf[0]+=0;

        x1 = fbcomb(sig, 0.773, 1687, damp, x1_buf);
        x2 = fbcomb(sig, 0.802, 1601, damp, x2_buf);
        x3 = fbcomb(sig, 0.753, 2053, damp, x3_buf);
        x4 = fbcomb(sig, 0.733, 2251, damp, x4_buf);

        s1 = x1 + x3;
        s2 = x2 + x4;
        o1 = s1 + s2;
        o2 = ((s1 + s2) * -1);
        o3 = ((s1 - s2) * -1);
        o4 = s1 - s2;

        left=o1+o3;
        right=o2+o4;

        out0=left;
        out1=right;
    "
).send;

Ndef(\schroeder).clear;
Ndef(\schroeder, {
    var test = Decay.ar(Impulse.ar(1), 0.25, LFCub.ar(1200, 0, 0.1));
    var sig = DynGen.ar(2, ~genDef, test, 100, 0.5).sanitize;
    sig;

});

Ndef(\schroeder).play;

);
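For anyone staring at the same thing: here is the feedback allpass from the EEL2 snippet above sketched in plain Python (not DynGen/EEL2 code; integer delay only, no fractional interpolation, and the class name is just for illustration). The key point is that each instance owns its own buffer, which is what the `ap1_buf`/`ap2_buf`/`ap3_buf` variables are meant to provide:

```python
# Sketch (plain Python, not DynGen/EEL2) of the feedback allpass above:
# out = tap + (in - tap) * gain, writing the scaled input back into the
# delay buffer. Integer delay only; the EEL2 version also interpolates
# fractional delays.
class AllpassDelay:
    def __init__(self, delay_samps, gain, max_delay=8192):
        # each instance owns its own buffer; two instances sharing one
        # buffer produces exactly the smeared pitch-shifty artifact
        self.buf = [0.0] * max_delay
        self.delay = delay_samps
        self.gain = gain
        self.ptr = 0

    def process(self, x):
        read_pos = (self.ptr - self.delay) % len(self.buf)
        tap = self.buf[read_pos]
        sig = (x - tap) * self.gain   # feedback path
        out = tap + sig               # allpass output
        self.buf[self.ptr] = sig
        self.ptr = (self.ptr + 1) % len(self.buf)
        return out

# impulse response: direct 0.7 at n = 0, first echo at the delay length
ap = AllpassDelay(delay_samps=347, gain=0.7)
impulse_response = [ap.process(1.0 if n == 0 else 0.0) for n in range(1000)]
```

With gain 0.7 the impulse response is 0.7 at sample 0 and (0 − 0.7) · 0.7 + 0.7 = 0.21 at sample 347, then further echoes every 347 samples.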

I think you might want to address EEL2 questions to the Cockos forum:

this is the subforum for jsfx/EEL2

I think it might be better to make a new thread for EEL ideas then - moving this stuff off-forum kills the momentum behind the project imo (plus there’s already a FAUST tag on here)


Good thought - please do!

I have noticed that the memory model of EEL2 is a bit trickier than one expects, and my understanding/example was wrong - see https://github.com/capital-G/DynGen/issues/51

I am still working on v0.3.0, which will break existing code b/c I will change how you pass audio inputs and parameters to DynGen - see https://github.com/users/capital-G/projects/4 for more information about the progress of v0.3.0.

I think it would be nice to have a community within this forum which explores the possibilities of DynGen and EEL2 code.
I also intend to make this more interesting for usage within SC, which may cause it to diverge from JSFX (compatibility with which could be added at some point, but is not my priority currently).


Ahhh, that would be why my reverb attempt was doing weird pitch-shifting feedback. Thanks for this, I don’t think I saw the memory boundaries point in the official EEL docs haha

DynGen v0.3.0

This is a big update!

  • Pass parameters by name by prepending variables with _ in DynGen scripts
  • Address input and output channels dynamically using the in() and out() functions. Combined with named parameters, this makes it possible to write effects such as PanAz which operate on an arbitrary number of input and output channels but have fixed parameters (e.g. pos) for each instance
  • Proper multi-channel expansion
  • Write and read SC buffers using bufRead, bufReadL, bufReadC and bufWrite
  • Introduce @init, @block and @sample sections, which allow setting initial values for e.g. dynamical systems
  • Add FFT support via a phase vocoder
  • Add helper methods: clip, fold, wrap, mod, lin, cubic
  • Replace DynGenRT with a modulatable sync parameter on DynGen
  • Add a modulatable update parameter to indicate whether a running DynGen unit should listen for code updates or not
  • Add free and freeAll methods to free scripts from the server

Check the docs for a proper explanation of each feature.

If you are interested in some meta-programming in sclang: there is a discussion to see if DynGen scripts could be replaced with a sclang-native binding, see Add sclang transpiler · Issue #55 · capital-G/DynGen · GitHub

A big shout out to @Spacechild1 for code-reviewing and contributing!

Feedback, bug reports and nice snippets and sounds are always appreciated in this thread!

Here is a small sound snippet, feeding the output of a Lorenz attractor into a complex oscillator:

(
DynGenDef(\lorentz, "
@init
x=1.0;
y=1.0;
z=1.0;
oversample = 16;

@sample
loop(oversample,
  x += _dt * (_a *(y - x));
  y += _dt * ((x * (_b - z)) - y);
  z += _dt * ((x*y) - (_c*z));
);

out0 = x / 22;
out1 = y / 29;
out2 = (z - 27) / 54;
").send;

DynGenDef(\complex, "
@init
oversample = 32;
sRateOS = srate * oversample;
phaseA = 0;
phaseB = 0;

@sample

// calculate subsamples
loop(oversample,
  phaseA += _freqA / sRateOS;
  phaseB += _freqB / sRateOS;

  phaseA -= floor(phaseA);
  phaseB -= floor(phaseB);

  signalA = sin(2 * $pi * phaseA);
  signalB = sin(2 * $pi * phaseB);

  phaseA += _modIndexA * signalB;
  phaseB += _modIndexB * signalA;
);

out0 = signalA;
out1 = signalB;
").send;
)
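For reference, the @sample loop in the \lorentz script above is Euler integration of the Lorenz equations (dx = a(y − x), dy = x(b − z) − y, dz = xy − cz) with oversampling. The same update as a plain Python sketch (illustrative only, not DynGen code; a, b, c match the classic values used in the Ndef below, dt here is just for the demo):

```python
# Euler integration of the Lorenz system, mirroring the EEL2 @sample
# loop above (sequential updates, oversampled). a=10, b=28, c=8/3 are
# the classic chaotic parameters; dt is chosen for this demo.
def lorenz_step(x, y, z, a=10.0, b=28.0, c=8.0 / 3.0, dt=0.001, oversample=16):
    for _ in range(oversample):
        x += dt * (a * (y - x))
        y += dt * (x * (b - z) - y)   # uses the already-updated x, like the script
        z += dt * (x * y - c * z)
    return x, y, z

x, y, z = 1.0, 1.0, 1.0              # same initial state as the @init section
for _ in range(10000):
    x, y, z = lorenz_step(x, y, z)
# the trajectory stays on the attractor: bounded, but never settling
```

The divisions by 22, 29 and 54 in the script then just squeeze the attractor’s typical range into roughly ±1 for audio output.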

(
Ndef(\x, {
	var lorentz = DynGen.ar(3, \lorentz, params: [
		a: \a.kr(10.0, spec: [9.0, 11.0]),
		b: \b.kr(28.0, spec: [26.0, 29.0]),
		c: \c.kr(8/3, spec: [7/3, 9/3]),
		dt: \dt.kr(0.1, spec: [0.001, 1.0, \exp]) * 0.0001,
	]);
	var modDepth = \modDepth.kr(0.01, spec: [0.0001, 0.1, \exp]);
	var sig = DynGen.ar(2, \complex, params: [
		freqA: \freqA.ar(300.0, spec: \freq) * 0.1,
		freqB: (lorentz[2].abs+0.01).exprange(0.01, 1.0, \loFreq.kr(20.0, spec: \freq), \hiFreq.kr(800.0, spec: \freq)),// \freqB.ar(200.0, spec: \freq),
		modIndexA: lorentz[0] * modDepth,
		modIndexB: lorentz[1] * modDepth,
	]);
	
	LeakDC.ar(sig) * 0.2;
}).play.gui;
)

this is SOOOOOO good for prototyping. I’ll try to get my fft_delay going… stay tuned!


Thanks again for what is a fantastic addition to SC. I am excited by the FFT processing, so I went for it straight away… but I have 2 issues with it, probably on my end:

  1. the FFT example sounds very glitchy, with a pulse at the window frequency…

  2. I don’t understand how the FFT size and buffers work, if differently than in EEL2. I tried to recode this simple 90-degree phase shifter I had coded in EEL2 for Reaper (which works) - it seems the buffer sizes and the symmetry of the FFT are not the same in the SC and EEL versions (if I read the demo code accurately.)

desc: FFT 90deg rotation
//tags: FFT PDC filter
//author: PATremblay

slider1:10<6,14,1>FFT size (bits)

@init
fftsize=-1;

@slider
  fftsize != (0|(2^slider1)) ? (
    fftsize=(2^slider1)|0;
    bpos=0;
    curblock=0;
    lastblock=65536;
    window=120000;
    hist=240000;
    invfsize=1/fftsize;
    hfftsize=fftsize*0.5;
    tmp=0;
    tsc=3.14159/hfftsize;
    loop(hfftsize,
      window[tmp]=0.42-0.50*cos(tmp*tsc)+0.08*cos(2*tmp*tsc);
      tmp+=1;
    );
  );
  pdc_top_ch=2;
  pdc_bot_ch=0;
  pdc_delay=fftsize;

@sample

bpos >= fftsize ? (

  t=curblock;
  curblock=lastblock;
  lastblock=t;

  fft(curblock,fftsize);
  fft_permute(curblock,fftsize);
  i=0;
  loop(hfftsize,
    i2=fftsize*2-i-2;
    temp = curblock[i]* invfsize;
    curblock[i]= curblock[i+1]* invfsize * -1;
    curblock[i+1]= temp;
    temp2 = curblock[i2]* invfsize * -1;
    curblock[i2]= curblock[i2+1]* invfsize ;
    curblock[i2+1]= temp2;
    i+=2;
  );
  fft_ipermute(curblock,fftsize);
  ifft(curblock,fftsize);
  bpos=0;
);

// make sample
w=window[bpos*0.5];
iw=1-w;

os0=spl0;
os1=spl1;

spl0=(curblock[bpos]*w + lastblock[fftsize+bpos]*iw);
spl1=(curblock[bpos+1]*w + lastblock[fftsize+bpos+1]*iw);

lastblock[bpos]=hist[bpos];
lastblock[bpos+1]=hist[bpos+1];
lastblock[fftsize+bpos]=os0;
lastblock[fftsize+bpos+1]=os1;

hist[bpos]=os0;
hist[bpos+1]=os1;
bpos+=2;

Maybe it is due to the stereo processing of Reaper’s version? In any case, the Reaper code just above does indeed rotate the phase by 90 degrees, but when I implement it in DynGen I get back my original sample (delayed by fftsize * 2), so I am confused…
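As a sanity check on the math itself (plain Python, not DynGen/EEL2 code): the bin loop in the JSFX above rotates every complex bin’s phase by 90 degrees, i.e. multiplies positive-frequency bins by ±i and their mirrored negative-frequency bins by the conjugate factor (the exact sign depends on the FFT convention and bin ordering). With a tiny naive DFT the effect is easy to verify: a sine comes out as a (negative) cosine, i.e. 90 degrees out of phase:

```python
import cmath
import math

# Naive DFT/IDFT, enough for a tiny verification frame.
def dft(x):
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * math.pi * k * n / N) for k in range(N)).real / N
            for n in range(N)]

N = 64
x = [math.sin(2 * math.pi * 4 * n / N) for n in range(N)]  # 4 exact cycles

# Rotate every bin's phase by 90 degrees: positive frequencies are
# multiplied by -i, their mirrored (negative-frequency) bins by +i.
# DC and Nyquist carry no phase and are left at zero.
X = dft(x)
Y = [0j] * N
for k in range(1, N // 2):
    Y[k] = -1j * X[k]
    Y[N - k] = 1j * X[N - k]

y = idft(Y)
expected = [-math.cos(2 * math.pi * 4 * n / N) for n in range(N)]  # sin shifted 90 deg
```

If a DynGen port returns the original signal instead, the rotation step (or the mirror-half indexing) is effectively being undone somewhere.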

A new patch release: there was a bug where .free removed too many DynGenDefs from the server, which has been fixed.


Thanks for your kind words :slight_smile:

This is to be expected, since no windowing is applied here. Left as an exercise to the reader :wink: I find it difficult to come up with an example that is interesting but also simple enough that the basic principle comes across clearly.
Or maybe someone can provide a better example - I really have only basic knowledge about FFT processing and would love to learn more about it via DynGen. Maybe you can share some resources/books on what to do with FFTs?
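For what it’s worth, the window ramp computed in the JSFX snippet earlier, 0.42 − 0.5·cos(πn/H) + 0.08·cos(2πn/H), is the rising half of a classic Blackman window. Because the script pairs it with iw = 1 − w, each output sample is a crossfade between the current and previous FFT block whose contributions always sum to exactly 1 - which is what suppresses the clicks at the frame rate. A quick Python check (illustrative only):

```python
import math

# The "window" ramp from the JSFX code: 0.42 - 0.5*cos(t) + 0.08*cos(2t)
# over t in [0, pi) is the rising half of a Blackman window.
H = 256  # plays the role of hfftsize in the script
w = [0.42 - 0.5 * math.cos(math.pi * n / H) + 0.08 * math.cos(2 * math.pi * n / H)
     for n in range(H)]

# The script crossfades blocks as w * current + (1 - w) * previous, so
# the two block contributions sum to exactly 1 for every sample -- no
# discontinuity at frame boundaries.
```

The ramp starts at 0, rises monotonically and reaches 1 at the end of the half-period, so the crossfade is smooth in both directions.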

Please note that DynGen is not JSFX compatible - e.g. you set pdc_delay, which defines “[t]he current delay added by the plug-in, in samples”, but this functionality is not (yet?) implemented in DynGen. (edit: this cannot be implemented in DynGen/SC, since there is no way for a Unit to tell other Units to add a delay afaik - maybe this would be possible on a SynthDef level though.)

You can achieve something similar by delaying the input signal with a delay UGen - here is an example which should be pretty close to what you want:

(
DynGenDef(\rot, "
@init
fftsize=1024;
bpos=0;
curblock=0;
lastblock=65536;
window=60000;
hist=120000;
invfsize=1/fftsize;
hfftsize=fftsize*0.5;
tmp=0;
tsc=$pi/hfftsize;

loop(hfftsize,
  window[tmp]=0.42-0.50*cos(tmp*tsc)+0.08*cos(2*tmp*tsc);
  tmp+=1;
);
/*
jsfx variables not available in dyngen
pdc_top_ch=2;
pdc_bot_ch=0;
pdc_delay=fftsize;
*/

@sample

bpos >= fftsize ? (

  t=curblock;
  curblock=lastblock;
  lastblock=t;

  fft(curblock,fftsize);
  fft_permute(curblock,fftsize);
  i=0;
  loop(hfftsize,
    i2=fftsize*2-i-2;
    temp = curblock[i]* invfsize;
    curblock[i]= curblock[i+1]* invfsize * -1;
    curblock[i+1]= temp;
    temp2 = curblock[i2]* invfsize * -1;
    curblock[i2]= curblock[i2+1]* invfsize ;
    curblock[i2+1]= temp2;
    i+=2;
  );
  fft_ipermute(curblock,fftsize);
  ifft(curblock,fftsize);
  bpos=0;
);

// make sample
w=window[bpos*0.5];
iw=1-w;

os0=in0;
os1=in0;

out0=(curblock[bpos]*w + lastblock[fftsize+bpos]*iw);
out1=(curblock[bpos+1]*w + lastblock[fftsize+bpos+1]*iw);

lastblock[bpos]=hist[bpos];
lastblock[bpos+1]=hist[bpos+1];
lastblock[fftsize+bpos]=os0;
lastblock[fftsize+bpos+1]=os1;

hist[bpos]=os0;
hist[bpos+1]=os1;
bpos+=2;
").send;
)

(
{
	var sig = SinOsc.ar(100.0!2);
	var modSig = DynGen.ar(2, \rot, sig, sync: 1.0);
	// apply fft size delay on the sig for delay compensation!
	sig = DelayN.ar(sig, 0.2, 1024/SampleRate.ir);
	[sig[0], modSig[0], sig[0]+modSig[0]];
}.plot(0.1)
)

Hope this helps!

thank you for what I reckon is an immense effort of porting a codebase!

Now, I know a fair share, but I can tell you that EEL2 manipulations are neither trivial nor straightforward compared to SC (which is not the best either), Max (which is slightly better, for starting at least) or Pd (which is brutal but straightforward).

So I would say that Miller’s book for Pd is probably the best for understanding the principles. For me, at the moment, I need to understand how EEL2 is actually laying out the FFT bins - it seems to be a full FFT, hence the mirroring, but stereo seems interleaved, and the permute step is unclear. Until I get their shenanigans with memory and conversion, I cannot help you.

but:

If I get my head around it, a simple spectral gate would give a user all the needed information (bin locations, how to convert real/imag and back) - I know we can do that natively in SC, but that is why I think it is a good example.
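In case it helps as a starting point, here is what such a spectral gate looks like as plain DSP, sketched in pure Python with a naive DFT (illustrative only, not DynGen/EEL2 code; the function name and threshold are made up): bins below a magnitude threshold are zeroed before resynthesis.

```python
import cmath
import math

def spectral_gate(x, threshold):
    """Zero every DFT bin whose magnitude falls below `threshold`,
    then resynthesize (naive O(N^2) DFT, fine for a demo frame)."""
    N = len(x)
    X = [sum(x[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N))
         for k in range(N)]
    X = [Xk if abs(Xk) >= threshold else 0j for Xk in X]
    return [sum(X[k] * cmath.exp(2j * math.pi * k * n / N) for k in range(N)).real / N
            for n in range(N)]

N = 64
loud = [math.sin(2 * math.pi * 4 * n / N) for n in range(N)]            # bin mag 32
quiet = [0.01 * math.sin(2 * math.pi * 11 * n / N) for n in range(N)]   # bin mag 0.32
mixed = [a + b for a, b in zip(loud, quiet)]

gated = spectral_gate(mixed, threshold=1.0)  # keeps only the loud partial
```

A real-time version would of course need the windowed overlap scheme from the phase-shifter example on top of this, but the per-frame bin manipulation is just the threshold line.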

More soon. Maybe I should start another thread entitled DynGen FFT forensics :slight_smile:

I was doing this indeed (like we do in FluCoMa all the time) but still couldn’t get it to be 90 degrees out of phase. I’ll check your code, compare it to my SC port, and see where I went wrong.

thanks again!

I think the problem is that I wasn’t activating the ‘sync’ so 512 of delay and 512 of svs made 1024…

(btw there is a reference to DynGenRT in the code example of sync in the doc, I reckon it is a typo)

(and in your translation of my code you wrote os1=in0; which should be os1=in1; )

With that correction, this patch clearly shows the 90 degrees of phase shift achieved in stereo (at 0.05 sec):

(
{
	var sig = SinOsc.ar([75, 150]);
	var modSig = DynGen.ar(2, \rot, sig, sync: 1.0);
	// apply fft size delay on the sig for delay compensation!
	sig = DelayN.ar(sig, 0.2, 1024/SampleRate.ir);
	sig ++ modSig;
}.plot(0.1)
)

sync=1.0 doesn’t introduce a delay to the signal - it only determines whether the script is JIT-compiled on the audio thread or not. While the latter is safer with regard to audio dropouts, it also introduces a delay of (at least) two block sizes until the code gets executed - so there is a small gap in the beginning where there is no signal. I don’t see how this could add additional delay in this case, since the moment when the process starts simply gets shifted.

Thanks - fixed it.

super excited to try this!!!