Continuing a secondary topic from this discussion by @Dasha
That is a limitation of all real-time audio/DSP environments.
If you want to avoid a frame-based approach, there is no problem in rewriting the code as a sample-by-sample operation. But there is a price: you will burn most of your CPU, and you won't be able to vectorize and use SIMD instructions. In the end it won't work; it will not be a professional system.
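To make the tradeoff concrete, here is a toy sketch (Haskell, purely illustrative; all names are hypothetical, and real-time code would be C/C++): the same stateful unit processed per frame and per sample. In C or C++ the frame loop is what the compiler can vectorize; the per-sample version pays a call and a state round-trip on every single sample.

```haskell
import Data.List (mapAccumL)

-- A toy stateful unit: a gain that ramps toward a target a little every sample.
-- (Hypothetical example; it only illustrates call granularity.)
stepSample :: Double -> Double -> Double -> (Double, Double)
stepSample target g x = let g' = g + 0.01 * (target - g) in (g', g' * x)

-- Frame-based: one callback invocation handles a whole frame.
processFrame :: Double -> Double -> [Double] -> (Double, [Double])
processFrame target = mapAccumL (stepSample target)

-- "Sample-based" is the same thing with frames of length 1:
-- one call, and one state round-trip, per sample.
processPerSample :: Double -> Double -> [Double] -> (Double, [Double])
processPerSample target =
  mapAccumL (\g x -> let (g', [y]) = processFrame target g [x] in (g', y))
```

Both paths compute exactly the same samples; the only difference is how often you cross the call boundary.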
You have some options. You can write some units in Faust (they have an option for single-sample processing, but it is still "frame-based"; it's just numFrames=1 [tongue in cheek])
… or write in a lazy functional language, for example using Arrows in Haskell, and render it in non-real-time. You can do fun things with that; it is, de facto, a different paradigm.
https://en.wikibooks.org/wiki/Haskell/Understanding_arrows
import Control.Category
import Control.Arrow
import Prelude hiding (id, (.))

{- | SF represents a signal function that takes an input of type a and produces
an output of type b along with a new signal function for subsequent inputs. -}
newtype SF a b = SF { runSF :: a -> (b, SF a b) }

-- Minimal Category/Arrow instances, needed before ArrowLoop.
instance Category SF where
  id = SF (\x -> (x, id))
  SF g . SF f = SF (\x -> let (y, f') = f x; (z, g') = g y in (z, g' . f'))

instance Arrow SF where
  arr f        = SF (\x -> (f x, arr f))
  first (SF f) = SF (\(x, c) -> let (y, f') = f x in ((y, c), first f'))

-- ArrowCircuit (with its 'delay' method) comes from the "arrows" package;
-- it is defined inline here so the snippet needs no extra dependency.
class ArrowLoop arr => ArrowCircuit arr where
  delay :: b -> arr b b

-- | Feedback loops instance for Arrows.
instance ArrowLoop SF where
  loop sf = SF (g sf)
    where
      g f x = f' `seq` (y, SF (g f'))
        where
          ((y, z), f') = runSF f (x, z)

-- | One-step delay with an initial value i.
instance ArrowCircuit SF where
  delay i = SF (f i)
    where
      f i x = (i, SF (f x))
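Here is a small, self-contained usage example (it repeats the SF definition so it runs on its own; runList, delaySF, and leaky are my own names, not from any library): a one-sample delay and a leaky integrator built with loop, where the feedback path only resolves because evaluation is lazy.

```haskell
import Control.Category
import Control.Arrow
import Prelude hiding (id, (.))

-- The SF type from above, repeated so this snippet stands alone.
newtype SF a b = SF { runSF :: a -> (b, SF a b) }

instance Category SF where
  id = SF (\x -> (x, id))
  SF g . SF f = SF (\x -> let (y, f') = f x; (z, g') = g y in (z, g' . f'))

instance Arrow SF where
  arr f        = SF (\x -> (f x, arr f))
  first (SF f) = SF (\(x, c) -> let (y, f') = f x in ((y, c), first f'))

instance ArrowLoop SF where
  loop sf = SF (g sf)
    where
      g f x = f' `seq` (y, SF (g f'))
        where
          ((y, z), f') = runSF f (x, z)

-- One-step delay (the 'delay' of the ArrowCircuit instance above).
delaySF :: b -> SF b b
delaySF i = SF (\x -> (i, delaySF x))

-- Drive a signal function over a finite list of samples.
runList :: SF a b -> [a] -> [b]
runList _      []     = []
runList (SF f) (x:xs) = let (y, sf') = f x in y : runList sf' xs

-- A leaky integrator, y[n] = x[n] + 0.5 * y[n-1], built from loop + delay.
leaky :: SF Double Double
leaky = loop (arr (\(x, s) -> let y = x + 0.5 * s in (y, y)) >>> second (delaySF 0))
```

For example, `runList leaky [1, 1, 1]` yields `[1.0, 1.5, 1.75]`: each output is fed back, delayed by one sample.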
This can do nice things; I’m writing stuff with this right now.
But remember: Signals, Arrows, and similar types and categories in Haskell are lazily evaluated, which allows higher-level operations on signals, BUT is (apparently) incompatible with the strictness of traditional professional audio code.
How do you build a lazy, complex, real-time system? It doesn’t even make much sense.
Implementing such an extensive system would be difficult in practical terms. However, it is possible (the article in the footnote is about exactly that) to combine laziness and strictness through an equivalence of calculi in a more general category, supported by "translations" built on ideas that were formalized not long ago.
So, any implementation can use a call-by-need context, and the system would be capable of any necessary translation.
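One concrete instance of such a translation, as a toy sketch: call-by-need can be embedded in a strict/effectful setting by turning every suspended expression into a memoized thunk. Everything here (names included) is illustrative, not a real system.

```haskell
import Data.IORef

-- A call-by-need cell: either a pending computation or a cached result.
data Cell a = Pending (IO a) | Done a

newtype Thunk a = Thunk (IORef (Cell a))

-- "delay": suspend a computation without running it.
suspend :: IO a -> IO (Thunk a)
suspend m = Thunk <$> newIORef (Pending m)

-- "force": run the computation at most once, then reuse the cached value.
force :: Thunk a -> IO a
force (Thunk ref) = do
  cell <- readIORef ref
  case cell of
    Done v    -> pure v
    Pending m -> do
      v <- m
      writeIORef ref (Done v)
      pure v
```

Forcing the same thunk twice runs the underlying computation only once, which is exactly the sharing that distinguishes call-by-need from plain call-by-name.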
Still, the design must be very good for that to work: not only must the translations be correctly implemented, in the right places and in the right way, but the design of such a system is a blank page. This is another world in terms of quality of implementation: a formal way to carry lazy evaluation from a functional-programming context into "normal" evaluation, and so on.
When the matter is discussed properly, the frame-based process loop with frameSize=1 is irrelevant. If this is unclear, you should explore this discussion more patiently, because it is something else; otherwise you risk mixing (more or less) unrelated things.
- Reference
- Side note
The frame-based approach, with process loops and audio callbacks in C and C++, is virtually how professional audio is done.
But mixing different frame sizes in the same DSP graph, with some restrictions (say, the number of samples must be a power of 2), is perfectly possible now, and nobody does it. One must develop a good design and a grammar for combining different numFrames.
This idea has yet to inspire anyone to hack together an environment like that. It's not that hard. Someone?
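As a toy illustration of what such a grammar could look like (Haskell again, all names hypothetical): if every block size is a power of two, a unit that wants frames of n samples can sit inside a callback that delivers larger frames, simply by re-blocking.

```haskell
import Data.List (unfoldr)

-- Split a frame into sub-frames of length n (n is assumed to divide the
-- frame length, which holds automatically when all sizes are powers of two).
chunksOf :: Int -> [a] -> [[a]]
chunksOf n = unfoldr (\xs -> if null xs then Nothing else Just (splitAt n xs))

-- Adapt a unit that processes frames of size n so it can run inside
-- a callback whose numFrames is any multiple of n.
adaptTo :: Int -> ([Double] -> [Double]) -> ([Double] -> [Double])
adaptTo n unit bigFrame = concatMap unit (chunksOf n bigFrame)
```

For example, `adaptTo 2 (map (* 0.5))` runs a size-2 unit inside a size-8 callback. A real design also needs the other direction (a small callback driving a larger internal block), which forces buffering and adds latency; that is where the grammar gets interesting.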