Fake resonance, aka windowed sync

For a bit of thread resurrection: I have been evaluating Reaktor myself recently (FYI: you have to pay for this or find an employer that does; there's no eval license for Reaktor Core)… it's also the case that Reaktor (Core) has both hard-sync and non-sync "macros" for the basic oscillators. The latter are called "'Slave' Oscillators" in Reaktor Core; see e.g. p. 129 in the manual. I suppose the slight downside in SC is that there's no core SC class library that does that, but as noted above it's not hard to write one.
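Since it's "not hard to write one": here's a back-of-the-envelope sketch of the windowed-sync idea in Python rather than sclang, just to show the signal math (the function name and the raised-cosine window choice are mine, not Reaktor's; a real SC version would do the same per-sample with a Phasor driving the master phase):

```python
import math

def windowed_sync(f_master, f_slave, n, sr=48000.0):
    """'Slave' oscillator sketch: a sine at f_slave is phase-reset at every
    master-cycle boundary, and a raised-cosine window locked to the master
    period fades the output to zero at each reset, masking the discontinuity
    that plain hard sync would produce ('fake resonance')."""
    out = []
    for i in range(n):
        mphase = (i * f_master / sr) % 1.0             # master phase in [0, 1)
        sphase = mphase * (f_slave / f_master)         # slave restarts with the master
        window = 0.5 - 0.5 * math.cos(2 * math.pi * mphase)  # hits 0 at every reset
        out.append(window * math.sin(2 * math.pi * sphase))
    return out
```

Sweeping f_slave then moves a resonance-like spectral peak while the perceived pitch stays locked to f_master, which is the whole point of the trick.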

Also, a couple of points I'd like to make here about Reaktor Core: it uses some kind of (JIT?) compiler, but the documentation on how feedback loops and mergers are handled seems incredibly obscure to me, even though there's a fair attempt to document it in the aforementioned manual (pp. 100-120). To give some examples of "huh?" moments:

Often in Structure building there is a question of whether a Latch should be used at a particular position. In answering this question the logical function of the Latch and the associated CPU consumption can be considered. As it was already mentioned, the compiler is treating latches (and the ‘read followed by a write’ pattern in general) in a special optimized way. Thus, the relationship between the usage of a Latch at a particular place and the associated change in the CPU load is not straightforward.
• Generally, if it is not necessary to store the value into the memory (because the value is immediately read afterwards anyway), the compiler will not do so. In such situations a Latch by itself will not add to the CPU load.
• On the other hand, not placing a Latch on some signal path may result in a more complicated triggering logic of the downstream Structure and thus in a higher CPU load produced not by the Latch itself, but by this downstream Structure.

Thus, there is no general rule whether using a Latch will increase the CPU load or decrease it. It is best to simply use latches wherever logically appropriate.

As Modulation Macros are simply shortcuts for the mathematical operations combined with Latches, the same applies to the Modulation Macros.

Or

At each mergepoint the compiler attempts to detect whether a splitting endpoint occurs. In cases where ‘chaotic routing’ (when the routing branches are not merged back together, but rather some signals with unrelated triggering sources are mixed) has been used in excessive amounts, the analysis time can grow drastically. In the worst cases this can cause the compiler to appear to ‘hang indefinitely’ (the compiler’s progress bar stops completely).

NI is aware of this issue and is looking for a solution.

In a ‘chaotic merging’ situation practically each Module represents a new mergepoint with a new set of triggering conditions. While the analysis of these triggering conditions consumes the compilation time, the respective runtime check eventually generated by the compiler consumes the runtime.

I think few people other than Zavalishin, whom NI hired to write it, really understand what that means. To me, it seems the compiler attempts to do block processing by merging branches, somewhat similar to how some GPU-based audio processing is done, although Reaktor does it only on the CPU.

Also, the Reaktor compiler does a modicum of peephole optimization, like replacing division with multiplication where appropriate (p. 123). The clearer part is that some feedback loops are automatically handled in Reaktor Core by the compiler inserting a one-sample delay; this is in contrast with Faust, where you have to be explicit about those.
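To spell out what "inserting a one-sample delay" means, here's a minimal one-pole feedback loop written out longhand in Python (the function name is mine; Faust's ~ recursion operator makes the same implicit z⁻¹ in the loop, whereas Reaktor Core's compiler adds it for you):

```python
def one_pole(x, a):
    """y[n] = x[n] + a * y[n-1]: the feedback path reads the *previous*
    output, i.e. a one-sample (z^-1) delay sits inside the loop. Without
    that delay the equation would be unsolvable sample-by-sample."""
    y_prev = 0.0          # the one-sample delay's state
    out = []
    for xn in x:
        yn = xn + a * y_prev
        out.append(yn)
        y_prev = yn       # store for the next sample
    return out
```

Feeding it an impulse gives the expected decaying exponential, e.g. `one_pole([1.0, 0.0, 0.0, 0.0], 0.5)` yields `[1.0, 0.5, 0.25, 0.125]`.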

But, in general, I have not been impressed with the real-world performance of some example instruments like the nGrano sampler (which is also not free, by the way). It’s much slower than most other granulators I’ve seen implemented in other systems, including gen-based stuff in Max 8. Pretty much the same goes for NI’s founder’s pet project, the Kontour synth, which is some kind of mega-FM synth with waveshapers and lots of feedback paths. (To their credit though, the NI synths based on Reaktor tend to have good documentation in terms of block diagrams.)

Faust probably blows Reaktor Core away in terms of optimizations and performance, but it's hard to find true comparative benchmarks due to the walled-garden nature of the Reaktor ecosystem. The Reaktor fanboys also love to rub it in your face that it has been used on "multiple platinum-award works" and the like (and to ask whether Max/gen, SC or Faust have won the same), so real technical discussion of the Reaktor compiler's performance etc. tends to be rather nonexistent on the NI-focused forums.


To come back to the actual topic at hand here: using a BufRd with a Phasor makes this easy enough and also works with arbitrary "wavetables", without having to actually use that special SC wavetable format. I'm honestly not sure if there's any point in doing it with a Sweep and a SinOsc, since the latter is also table-based internally, other than typing a bit less. FSinOsc might be a different matter, but I recall seeing some discussions that it's not actually faster than the table-based SinOsc. I'm not too surprised, since modern processors have decent cache memory, including at L1 (64 KB on Intel ones of the past decade), so unless you exceed that you're probably not going to see much of an issue. SinOsc seems to use an 8K table by default.
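For the record, what the BufRd + Phasor combination boils down to is a wrapped, interpolated table lookup; a minimal Python sketch of that idea, under the assumption of linear interpolation (BufRd defaults to it) and with the table being any single-cycle waveform rather than SC's special wavetable format:

```python
import math

def table_lookup_osc(table, freq, n, sr=48000.0):
    """Phasor/BufRd analogue: a phase ramp in [0, 1) scaled to the table
    length, read with linear interpolation and index wraparound."""
    size = len(table)
    out = []
    phase = 0.0
    for _ in range(n):
        pos = phase * size
        i0 = int(pos) % size
        i1 = (i0 + 1) % size              # wrap at the table boundary
        frac = pos - int(pos)
        out.append(table[i0] * (1.0 - frac) + table[i1] * frac)
        phase = (phase + freq / sr) % 1.0  # the Phasor part
    return out

# an arbitrary single-cycle "wavetable": 8192 samples, like SinOsc's default table size
table = [math.sin(2 * math.pi * i / 8192) for i in range(8192)]
```

With an 8192-point sine table, the linear-interpolation error versus a direct sin() call is far below audibility, which is why the table-based SinOsc holds up so well.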