Writing FFT-based pseudo-UGens?

Hi all -

I’ve been looking at Trevor Wishart’s Composer’s Desktop Project lately, which offers many inventive spectral transformations to apply to sounds - although they generally render offline, whereas SuperCollider can do a number of FFT processes in real time with only a tiny allocated buffer.

I’m curious about trying to translate between the two programs a little bit (assuming someone hasn’t already done it). I noticed someone had asked about this “path” previously on the forum (though it went unanswered), but I was hoping to talk through what I’d expect to be a fairly straightforward example, to see what more experienced SC users would say.

One effect in CDP is called “Interleave” - the description is: “This process interleaves windows from each input file. The number of windows is set by the chunk size. At low values the sounds will merge together, at high values recognisable chunks from each sound will become audible.”

The overall impression I get is that it alternates back and forth between the two inputs in the frequency domain, with the switching rate depending on the chunk size (in windows) of each.

Is something like this possible in SC? Would this be the kind of thing that could be written entirely with the FFT/IFFT UGens?

Thanks!

Unfortunately, FFT in SC is relatively inflexible because FFT transformations generally must be implemented as UGen plugins. For example, even a trivial operation like multiplying two spectra requires a dedicated UGen plugin (PV_Mul).

In Pure Data, however, an FFT is just a regular audio signal (in a subpatch that has been reblocked to the FFT size), so the algorithm you’ve described would be easy to implement: you would just switch between the two inputs every N blocks.

For this case, I had hoped that one could Select between two FFT chains, but it looks like IFFT, at least, is locked to the FFT buffer it was initialized with, so, no dice.

… though you could use PV_Morph or PV_XFade and push the crossfade factor to 0 or 1.
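A rough sketch of that idea, assuming PV_XFade from sc3-plugins with arguments (chainA, chainB, fade) - the buffer variables ~buf1/~buf2, the chunk size, and the hop value are placeholders you’d fill in yourself:

```supercollider
// Hedged sketch: hard-switch between two FFT chains by snapping
// PV_XFade's fade factor between 0 and 1 at a "chunk" rate.
(
{
    var fftSize = 2048, hop = 0.5, chunkFrames = 8; // chunk size in FFT frames (assumption)
    var frameRate = SampleRate.ir / (fftSize * hop); // one new frame per hop
    var in1 = PlayBuf.ar(1, ~buf1, loop: 1); // ~buf1, ~buf2: your two source buffers
    var in2 = PlayBuf.ar(1, ~buf2, loop: 1);
    var chainA = FFT(LocalBuf(fftSize), in1);
    var chainB = FFT(LocalBuf(fftSize), in2);
    // square wave toggling every chunkFrames frames -- fade is always exactly 0 or 1
    var fade = LFPulse.kr(frameRate / (2 * chunkFrames));
    IFFT(PV_XFade(chainA, chainB, fade)) ! 2;
}.play;
)
```

Note that both sources keep running the whole time here - the toggle only chooses whose frames reach the output.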

But I imagine this won’t sound much different from a regular crossfade – since FFT is really just a special case of granular synthesis, you’d just get the fade between two grains from different sources. So I’d have to conclude that the CDP process is doing something different, and I’ve no idea what that is.

Btw, according to another thread, it will soon be possible to write new PV logic using DynGen.

hjh


I think the idea is that the sources are only advancing when they are “on”, so it would slow the respective sounds, a la phase vocoder, while interleaving them.

Looking forward to checking out DynGen - maybe that’ll do it.

Oh ok – then you could do that in SC with PV_RecordBuf and PV_BufRd.

The FFT UGen can’t freeze its audio source while the other source is active. To alter timing in that way, you need the FFT frames laid out in advance; playback can then scan through them at a different rate (which may be linear - faster or slower - or nonlinear).
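A minimal sketch of that approach, assuming PV_RecordBuf / PV_BufRd from sc3-plugins - the calcPVRecSize helper, the argument order, the fade direction of PV_XFade, and all the ~-variable names are my assumptions, so check the sc3-plugins help files:

```supercollider
// 1. Record the FFT frames of each source into its own buffer.
(
~fftSize = 2048;
~dur = 4; // seconds of source material (assumption)
~recA = Buffer.alloc(s, ~dur.calcPVRecSize(~fftSize, 0.5)); // helper from sc3-plugins
{
    var chain = FFT(LocalBuf(~fftSize), PlayBuf.ar(1, ~srcA));
    PV_RecordBuf(chain, ~recA, run: 1);
    Silent.ar;
}.play;
)
// (repeat for ~recB / ~srcB)

// 2. Read back: each pointer advances only while its source is "on",
//    so each source freezes, phase-vocoder style, while the other plays.
(
{
    var chunkHz = 4; // toggle rate (assumption)
    var gate = LFPulse.kr(chunkHz);
    // Sweep ramps from 0 at the given rate; PV_BufRd's point runs 0..1
    var ptrA = Sweep.kr(0, gate / ~dur);
    var ptrB = Sweep.kr(0, (1 - gate) / ~dur);
    var chainA = PV_BufRd(LocalBuf(~fftSize), ~recA, ptrA);
    var chainB = PV_BufRd(LocalBuf(~fftSize), ~recB, ptrB);
    // assuming fade = 0 selects chainA, so A is heard while gate == 1
    IFFT(PV_XFade(chainA, chainB, 1 - gate)) ! 2;
}.play;
)
```

The key difference from the earlier crossfade idea is in step 2: because each read pointer only advances while its source is selected, the sources are stretched to interleave rather than simply fading over each other.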

hjh