How to implement IIR filter code snippet?

Agreed. In practice, I think you’re better off considering anything below the resolution of a “block” as the responsibility of individual UGens.

Depending on how broad your definition of “SC code” is, the plugin API isn’t too hard to understand and work with even if your C++ isn’t great (mine certainly isn’t!). The basic filter types are implemented here, with their corresponding sclang classes here.
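
For a sense of scale, the sclang side of a basic filter is tiny. From memory (so treat this as a sketch rather than a verbatim copy), OnePole’s class definition is essentially just a thin wrapper; all of the actual DSP lives in the C++ plugin:

OnePole : Filter {
	*ar { arg in = 0.0, coef = 0.5, mul = 1.0, add = 0.0;
		^this.multiNew('audio', in, coef).madd(mul, add)
	}
	*kr { arg in = 0.0, coef = 0.5, mul = 1.0, add = 0.0;
		^this.multiNew('control', in, coef).madd(mul, add)
	}
}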

I mention this because when I started out with SC and was porting tools I’d made in other environments, I was initially concerned about the difficulty of doing single-sample feedback, but I was pleasantly surprised to find that it was a lot easier than I’d expected to drop down to C++ when I needed to work on the single-sample level.

1 Like

Not to send you down a path of pain and torment, but single sample feedback is super “easy” in faust:


import("stdfaust.lib");
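// one-pole lowpass, y(n) = 0.5*x(n) + 0.5*y(n-1); the ~ operator feeds the output back with a one-sample delay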
process = *(0.5):+~*(0.5);

faust is really challenging, but for this kind of application, and anything where you are at the signal level, it is great. Plus, you can compile code to run in sc and other languages.

Just to reiterate - not necessarily recommended.

Sam

1 Like

Check Fb1 from the miSCellaneous_lib quark. You can write it like this (assuming a blockSize of 64):

(
y = { 
	Fb1(
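		// in[0] is the current input sample, in[1] the previous one: a two-point average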
		{ |in| (in[0] + in[1]) / 2 }, 
		in: WhiteNoise.ar(0.3),
		inDepth: 2,
		leakDC: false
	) 
}.play
)

There’s a SMC paper on Fb1:

2 Likes

As much as I like trying to flip my brain into recursive functional land, this is so much easier to look at. I’ve got to start playing with Fb1!

Sam

Could you let me know why single-sample feedback is tricky?

Isn’t that what the Fb1 UGen does?
It’s a bit strange that SuperCollider doesn’t handle single-sample feedback natively.

You can do single sample feedback in SuperCollider, just not in a SynthDef.

SynthDefs are intended for assembling a graph of operations to perform on blocks of audio, because it’s more efficient and closer to what is provided by the audio device - it’s a higher-level abstraction.
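
As an illustration (a minimal sketch of my own, with arbitrary values): you can build feedback at the block level inside a SynthDef with LocalIn / LocalOut, but the loop always has a delay of at least one block, which is exactly why sub-block recursion has to live inside a UGen.

(
{
	var fb = LocalIn.ar(1);
	// feedback comb: the loop delay is one block (blockSize samples)
	var sig = Impulse.ar(2) + (fb * 0.6);
	LocalOut.ar(sig);
	sig * 0.1 ! 2
}.play;
)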

UGens can do single-sample feedback: where SynthDefs operate on blocks, UGens operate on individual samples; they are the lower-level, sub-block abstraction in the SuperCollider environment. These have to be written in C / C++ or another compiled language like Kronos or Faust for performance reasons.

There is a split between these two levels of abstraction (sample level and block level) in almost every audio engine. In the few cases where there is no split (e.g. all operations are on single samples, and no block processing at all), performance is significantly worse. Count the number of SinOsc’s you can run in Reaktor (which iirc is sample-based, no blocks) and SuperCollider and it should be apparent…

You can’t write UGens in SuperCollider code (sclang) probably because LLVM didn’t exist / wasn’t mature for most of SC’s history - and before LLVM, it was rather an ordeal to write a multi-architecture machine code compiler for a new language. :slight_smile:

1 Like

To put it a bit more in context: Fb1 is a pseudo UGen, a compound structure built on native SC UGens. It might result in a large number of basic UGens (possibly hundreds or even thousands). This doesn’t necessarily mean high CPU usage while running, but it might result in a relatively long SynthDef compile time.
If applicable, as in the example given in this thread, I agree: FOS / SOS (or LTI from sc3-plugins, in the case of linear filters of arbitrary length but with fixed coefficients) are a quick and easy solution and likely less CPU-costly (see the sketch below).
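For instance, assuming I have FOS’s coefficient convention right (y[n] = a0*x[n] + a1*x[n-1] + b1*y[n-1]), the two-point averager from the example above reduces to a one-liner:

(
{ FOS.ar(WhiteNoise.ar(0.3), 0.5, 0.5, 0.0) }.play
)
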
On the other hand, Fb1 provides options that are out of their scope: larger lookback-depths with varying coefficients, varying lookback-depths, arbitrary non-linear operations, multichannel feedback / feedforward with cross-channel relations etc.
That’s where, I find, the experimental fun starts; for straight linear filters there are the aforementioned classes (and for special non-linear filters there are dedicated classes too, like NL from sc3-plugins). Two related threads:

TPT filter with Fb1:

On filters in general:

1 Like

The sine osc’s in Reaktor are not sample-based; maybe you’re confusing it with the Max/MSP cycle~ object (which is a 512-sample lookup table), although Reaktor can do that too, in Primary and in Core.
In Reaktor the Primary ones, and certainly the Core ones, are calculated osc’s: a ramp driver (which takes its frequency from the sample clock) into a sine approximation (fr sine).
Reaktor Core only has 34 modules, categorised under: Math, Bit, Flow, Memory, Scoped bus and bundle.
These are hardcoded in the exe and are the lowest level available… all the rest (like oscillators and filters) are macros made out of these modules.
The Max/MSP equivalent is gen~.
With these, everything is made, from amazing-sounding ZDF filters to polyBLEP osc’s etc… (except for GUI stuff), including the awesome-sounding Blocks stuff.

And I easily run 512 to 1024 sine osc’s in parallel, which I have used to build a vocoder, where the input signal is rebuilt using sine waves; that’s 1024 sine osc’s @ 50 percent CPU on a 10-year-old PC :slight_smile: Of course Core osc’s and filters will use more CPU.
And it sounds pretty great; here only 128 sines are used for reconstruction, playing with peak release settings for longer-decaying sines, and with the harmonic spacing between sines:
Everything is totally wet, no dry unprocessed sound.
https://app.box.com/s/br59hdv1rvbiqaqfmq8ameh9397719vg
And here drums, deconstructed with 128 BP filters and reconstructed using sine osc’s:
https://app.box.com/s/cgpe9v8pjo6dopu7vfsia2annfoc8fla

Edit: 1024 osc’s :slight_smile:

Fb1 is indeed impressive – it seems like it handles more cases than previously available workarounds. And it’s definitely simple to use (though the complexity of its implementation supports my initial assertion – if one wants to understand what I meant by “single-sample feedback is tricky,” take a look at what’s required to do it right: miSCellaneous_lib/Classes/Nonlinear/Fb1.sc at master · dkmayer/miSCellaneous_lib · GitHub .)

I think scztt is not talking about the sine oscillator’s implementation, but rather about a sample-by-sample calculation loop (which may be how Reaktor works? I’m not sure) vs the block calculation loop used in scsynth.

hjh

1 Like

Also, why do we need block calculation?

// big synthdef, let's compile just once
(
SynthDef(\test, {
	var n = 1500;
	Out.ar(0, SinOsc.ar(Array.fill(n, { ExpRand(200, 800) })).sum / n)
}).add;
)

x = Synth(\test);

// about 15% with a 64-sample block size

x.free;

s.options.blockSize_(1);
s.reboot;

x = Synth(\test);

// 98-99% with a 1-sample block size

x.free;

(Reaktor must be doing block calculation, then – there’s no way you would get a thousand sine oscillators on a 10-year-old PC, using only 50% of the calculation time.)

This thread also led me to think about what it would take to have the equivalent of Pure Data’s [block~] (where you can mark part of a patch to run with a smaller block size, for a tighter feedback loop).

The first requirement would be that UGens shouldn’t make any assumptions about a global block size – their next functions should just calculate however many samples they are asked for (the numSamples input). It seems the core UGens (except PartitionedConvolution) already do this. I don’t know about sc3-plugins.

It might be fairly straightforward to do it at the synth node level – an entire synth (or group?) running with a different block size. But usually you just want a small part of a graph to be affected. So it made me think – what would it look like to have UGens that override the block size, and then at the end, reblock to the global setting?

This kind of interface would not work:

Reblock.ar(RLPF.ar(Saw.ar(...), ...), 1)

… because the Saw --> RLPF chain (presumably) depends on other UGens – the whole tree gets fed into the input. So there’s no clear beginning to the reblocked section. (By contrast, Pd’s [block~] applies to a subpatch or abstraction, which naturally has clear boundaries, and clear [inlet~] / [outlet~] units where the reblocking occurs.)

Un-reblocking(!?) would be easy: EndReblock.ar(signal).

But the beginning… you would need to be sure a hypothetical Reblock.ar() would be sorted before the first unit to be affected. The only way we can be sure it comes before is by using a UGen input. That would look quite weird, but in theory it would describe the right graph:

arg freq, ffreq, rq;  // OutputProxy from a Control
// reblocking instruction, prepared as an input to the Saw
var startBlock = freq <! Reblock.ar(newBlockSize: 1);
var osc = Saw.ar(startBlock);
var feedback = LocalIn.ar(0);
var filter = RLPF.ar(osc, ffreq, rq) + (0.01 * feedback);
var routeBack = LocalOut.ar(filter);
var result = EndReblock.ar(filter);

LocalOut’s input is reblocked, so LocalOut itself would have to be reblocked as well.

How to implement that… I don’t know.

hjh

Perhaps this could be applied at the SynthDef level (a hypothetical interface):

(
SynthDef(\test, {
	var n = 1500;
	Out.ar(0, SinOsc.ar(Array.fill(n, { ExpRand(200, 800) })).sum / n)
}, blockSize: 1).add;
)

I was also thinking of an RFC with a similar approach for upsampling SynthDefs:

(
SynthDef(\test, {
	Out.ar([0, 1], (SinOsc.ar(\freq.kr(440)) * 100).tanh)
}, upSample: 4).add;
)

Anyway, since we’re talking about single-sample feedback, I also want to highlight the option to use Omni, which is a DSL I have been developing. It already has SuperCollider bindings: omnicollider

1 Like

There are also the Reaktor sine bank and modal bank modules (Primary modules introduced in Reaktor 5.5).
These can do a whopping 10,000 partials for additive synthesis, with independent pan, gain, decay and damping per partial; setting these up and controlling them can be a bit tricky.
Lazerbass and Razor use the sine bank; Prism uses the modal bank.
Although in reality, 10,000 partials will bring any modern CPU to its knees.

The modal bank / sine bank modules in Reaktor are quite cool - for anyone looking for a fun and challenging DSP task in SuperCollider, building a UGen to do something comparable would be straightforward and I’m sure very widely appreciated. When dealing with large, complex SynthDefs, the overall CPU cost tends to be dominated by the overhead of calling each UGen’s calculation function separately. You could achieve a speed-up of probably 20% purely by introducing a new SinOsc implementation that is able to batch parallel SinOsc’s together and calculate them in a single call.

FWIW, and before anyone gets too discouraged by SC - Razor has a bank of something like 320 partials x 2 per voice (so let’s say 640…). On my machine, it can run one voice at ~7% CPU. SC can run a bank of 640 SinOsc’s with independent amplitude control at ~13% CPU. So, we’re off by a factor of 2 in terms of efficiency - significant, but not enough to stop anyone from building a cool additive synth (this is also not accounting for Supernova, where multi-processing buys you a lot more CPU headroom).
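
For anyone who wants to poke at this, here is a rough sketch of the kind of bank described above (my own, numbers purely illustrative): 640 SinOsc’s, each with its own slowly moving amplitude.

(
{
	var n = 640;
	var freqs = Array.fill(n, { ExpRand(50, 8000) });
	// independent, slowly varying amplitude per partial
	var amps = Array.fill(n, { LFNoise1.kr(0.3).range(0, 1) });
	(SinOsc.ar(freqs) * amps).sum / n ! 2
}.play;
)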

1 Like

Definitely, I think additive synthesis is a typical example where the mind trap “more is better” is very tempting.

As additive synthesis is a favourite of some of my students, I did quite a lot of SC experiments over the last years. IMO this synthesis method is a real strength of SC because of its easy multichannel handling. I think this outweighs the limits on the number of oscillators (which I have hardly ever wanted to come close to anyway).
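
As a small illustration of what I mean by the multichannel handling (a minimal sketch, values arbitrary): multichannel expansion does the per-partial bookkeeping, and Splay mixes the partials down to stereo.

(
{
	var freqs = (1..16) * 110;              // harmonic series on 110 Hz
	var amps = (1..16).reciprocal * 0.1;    // 1/n amplitude rolloff
	Splay.ar(SinOsc.ar(freqs, 0, amps))
}.play
)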

Thinking about modulation and modulation control is crucial. BTW I had a lot of fun with additive FM and feedback networks. If I find the time on occasion, I will collect some examples.

1 Like

Wondering if anyone here has tried Omni? It looks as though it will be easier to use than Faust (although the project is still in its infancy, it seems).

Might be a good candidate for making a sine bank…

Well, being the creator myself, I use it daily :slight_smile:

I’ll give this a test and report back.

1 Like

Just to be clear about my comment about faust earlier. I really enjoy faust. It tickles the brain in ways that object-oriented thinking doesn’t. And thinking about sample-level stuff and waveforms is super fun. Working with it really changed how I think. It just isn’t for the faint of heart… but, of course, neither is SC.

Omni looks cool too. Gotta check it out! It is great that you can compile right into SC.

Sam

If you think this is impressive, people have done millions of sinusoids on the GPU… a decade ago, and in real time. The only issue is that there’s not much of a musical application for that many partials, so far. See Skare and Abel (2019) and/or Renney, Gaster, and Mitchell (2020) for recent round-ups of GPU-based audio. The recent on-die (Intel) GPUs make it possible to have low-latency GPU audio too.


Pretty hard to say what the Reaktor compiler really does.

Also worth noting that there are different kinds of block/control clocks in different parts of Reaktor. In Core the control rate is the same as the audio rate by default, but in other parts of the NI Reaktor library it’s not. Quoting from the Reaktor Core manual:

The SR bus is intended to be used for audio processing Modules like oscillators and filters, while the CR bus is intended to be used for control processing Modules like envelopes and LFOs. However, this SR/CR distinction is purely conventional and is made with the sole purpose of being able to provide different audio and control clocks ‘by default’. There is always a possibility of overriding the default connection and/or providing even more clocks by defining further buses with different names and the same Structure.
The default CR source in REAKTOR Core is identical to SR signal-wise, runs at the audio rate and has nothing to do with the Control Rate source in the REAKTOR Primary. If desired, the Primary Control Rate can be ‘imported’ into the CR bus by using the Library Clk Bundle > Control > CR From Prim Macro.

Pd’s [block~] (= temporary reblocking/upsampling/overlapping) is indeed very powerful. Of course, reblocking to 1 sample incurs some CPU overhead, but only for the reblocked subpatch.

Also, there’s the [fexpr~] object, which lets you access previous input and output samples. This would be a better fit for IIR filters.
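
In SC terms, the closest analogue I can think of is again Fb1 with outDepth set, since its function sees previous outputs as well as previous inputs. If I’m reading its indexing convention right (in[0] = current input, out[1] = previous output), a one-pole lowpass y[n] = 0.5*x[n] + 0.5*y[n-1] would look roughly like this:

(
{
	Fb1(
		{ |in, out| (in[0] * 0.5) + (out[1] * 0.5) },
		in: WhiteNoise.ar(0.3),
		outDepth: 2,
		leakDC: false
	)
}.play
)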