next_a, next_k or next, and RTAlloc?

hey,

I would like to improve my SC plugins. I currently don't understand how next_a and next_k are being used, and I have an additional question about RTAlloc.

1.) Let's imagine we have three input params, where any combination of them could be audio rate while the others are control rate. As far as I understand next_a and next_k, they assume that all input params share the same rate, right?

I stumbled across the thread Control rate parameter sounding crunchy - #7 by elgiano and thought I could just use a single next function, test each input param with isAudioRateIn, and either use makeSlope or not, instead of using next_a and next_k. What's the best approach here? Currently I'm brute-forcing a lot of params to be audio rate in the .sc file.

An example:

FilterBank.hpp

#pragma once
#include "SC_PlugIn.hpp"  // needed for SCUnit, which we inherit from
#include "Utils.hpp"

class FilterBank : public SCUnit {
public:
    FilterBank();
    ~FilterBank() = default;
private:
    void next(int nSamples);
    
    Utils::FilterBank filterBank;
    
    // Store sample rate and number of bands
    const float m_sampleRate;
    static constexpr int NUM_BANDS = 24;
    
    // Cache for SlopeSignal state
    float freqPast, spreadPast, warpPast, resonancePast;
    
    // Rate checks (cached in constructor)
    bool isFreqAudioRate, isSpreadAudioRate, isWarpAudioRate, isResonanceAudioRate;
    
    enum InputParams {
        Input,      // Audio input
        Freq,       // Base frequency
        Spread,     // Frequency spread
        Warp,       // Frequency warping
        Resonance   // Filter resonance/Q
    };
    
    enum Outputs {
        Out
    };
};

FilterBank.cpp

#include "SC_PlugIn.hpp"
#include "FilterBank.hpp"

static InterfaceTable *ft;

FilterBank::FilterBank() : m_sampleRate(static_cast<float>(sampleRate()))
{
    // Initialize parameter cache
    freqPast = in0(Freq);
    spreadPast = in0(Spread);
    warpPast = in0(Warp);
    resonancePast = in0(Resonance);
    
    // Check and cache rates
    isFreqAudioRate = isAudioRateIn(Freq);
    isSpreadAudioRate = isAudioRateIn(Spread);
    isWarpAudioRate = isAudioRateIn(Warp);
    isResonanceAudioRate = isAudioRateIn(Resonance);
    
    // Resize vectors to NUM_BANDS
    filterBank.resize(NUM_BANDS);
    
    mCalcFunc = make_calc_function<FilterBank, &FilterBank::next>();
    next(1);
}

void FilterBank::next(int nSamples) {
    // Audio input
    const float* input = in(Input);
    
    // Create slope interpolators for control-rate params
    auto slopedFreq = makeSlope(sc_clip(in0(Freq), 20.0f, 20000.0f), freqPast);
    auto slopedSpread = makeSlope(sc_clip(in0(Spread), 0.0f, 2.0f), spreadPast);
    auto slopedWarp = makeSlope(sc_clip(in0(Warp), -1.0f, 1.0f), warpPast);
    auto slopedResonance = makeSlope(sc_clip(in0(Resonance), 0.0f, 0.99f), resonancePast);
    
    // Output pointer
    float* outbuf = out(Out);
    
    // Process audio
    for (int i = 0; i < nSamples; ++i) {
        // Get parameter values based on rate
        float freq = isFreqAudioRate ? 
            sc_clip(in(Freq)[i], 20.0f, 20000.0f) : slopedFreq.consume();
        float spread = isSpreadAudioRate ? 
            sc_clip(in(Spread)[i], 0.0f, 2.0f) : slopedSpread.consume();
        float warp = isWarpAudioRate ? 
            sc_clip(in(Warp)[i], -1.0f, 1.0f) : slopedWarp.consume();
        float resonance = isResonanceAudioRate ? 
            sc_clip(in(Resonance)[i], 0.0f, 0.99f) : slopedResonance.consume();
        
        // Process filter bank
        outbuf[i] = filterBank.process(
            input[i],
            freq,
            spread,
            warp,
            resonance,
            m_sampleRate,
            NUM_BANDS
        );
    }
    
    // Update parameter cache for next block
    freqPast = isFreqAudioRate ? in(Freq)[nSamples - 1] : slopedFreq.value;
    spreadPast = isSpreadAudioRate ? in(Spread)[nSamples - 1] : slopedSpread.value;
    warpPast = isWarpAudioRate ? in(Warp)[nSamples - 1] : slopedWarp.value;
    resonancePast = isResonanceAudioRate ? in(Resonance)[nSamples - 1] : slopedResonance.value;
}

2.) Another question I have is how to deal with real-time-safe memory allocation when working with an array of signals inside your UGen. One example from my GrainDelay plugin: I'm using RTAlloc for the buffer used by the delay line, but std::vector<GrainData> m_grainData; for my different channels. I haven't had any issues with that, but I guess that's wrong.

GrainDelay.hpp

#pragma once
#include "SC_PlugIn.hpp"
#include "Utils.hpp"
#include <vector>

class GrainDelay : public SCUnit {
public:
    GrainDelay();
    ~GrainDelay();

private:
    void next_aa(int nSamples);
    
    // Constants cached at construction
    const float m_sampleRate;
    const float m_sampleDur;
    const float m_bufFrames;
    const int m_bufSize;
   
    // Core trigger system
    Utils::SchedulerCycle m_scheduler;
    Utils::VoiceAllocator m_allocator;
    Utils::IsTrigger m_resetTrigger;

    // Constants
    static constexpr int NUM_CHANNELS = 32;
    static constexpr float MAX_DELAY_TIME = 5.0f;
   
    // Audio buffer and processing
    float *m_buffer;
    int m_writePos = 0;
   
    // Grain data structure
    struct GrainData {
        float readPos = 0.0f;
        float rate = 1.0f; 
        float sampleCount = 0.0f;
        bool hasTriggered = false;
    };
   
    // grain voices
    std::vector<GrainData> m_grainData;
   
    // Feedback processing filters
    Utils::OnePoleNormalized m_dampingFilter;  // For feedback damping (0-1)
    Utils::OnePoleFilter m_dcBlocker;          // For DC blocking (3Hz)
   
    // Input parameters for audio processing
    enum InputParams {
        Input,          // Audio input
        TriggerRate,    // Grain trigger rate (Hz) - density control
        Overlap,        // Grain overlap amount  
        DelayTime,      // Delay time in seconds
        GrainRate,      // Grain playback rate (0.5-2.0, 1.0=normal)
        Mix,            // Wet/dry mix (0=dry, 1=wet)
        Feedback,       // Feedback amount (0-0.95)
        Damping,        // Feedback filter (0=dark, 1=bright)
        Freeze,         // Freeze buffer (0=record, 1=freeze)
        Reset           // Reset trigger
    };
   
    enum Outputs {
        Output        
    };
};

GrainDelay.cpp

#include "GrainDelay.hpp"
#include "UnitShapers.hpp"
#include "SC_PlugIn.hpp"

static InterfaceTable* ft;

GrainDelay::GrainDelay() : 
    m_sampleRate(static_cast<float>(sampleRate())),
    m_sampleDur(static_cast<float>(sampleDur())),
    m_bufFrames(MAX_DELAY_TIME * m_sampleRate),
    m_bufSize(static_cast<int>(m_bufFrames)),
    m_allocator(NUM_CHANNELS)
{
    // Initialize graindata
    m_grainData.resize(NUM_CHANNELS);

    // Allocate audio buffer
    m_buffer = (float*)RTAlloc(mWorld, m_bufSize * sizeof(float));

    // Check the result of RTAlloc!
    // (ClearUnitIfMemFailed expects a variable named `unit` in scope)
    auto unit = this;
    ClearUnitIfMemFailed(m_buffer);
    
    // Initialize the allocated buffer with zeros
    memset(m_buffer, 0, m_bufSize * sizeof(float));
    
    mCalcFunc = make_calc_function<GrainDelay, &GrainDelay::next_aa>();
    next_aa(1);

    m_scheduler.reset();
    m_resetTrigger.reset();
}

GrainDelay::~GrainDelay() {
    RTFree(mWorld, m_buffer);
}

void GrainDelay::next_aa(int nSamples) {
    
    // Audio-rate parameters
    const float* input = in(Input);
    const float* triggerRateIn = in(TriggerRate);
    const float* overlapIn = in(Overlap);
    const float* delayTimeIn = in(DelayTime);
    const float* grainRateIn = in(GrainRate);
    
    // Control-rate parameters
    float mix = sc_clip(in0(Mix), 0.0f, 1.0f);
    float feedback = sc_clip(in0(Feedback), 0.0f, 0.99f);
    float damping = sc_clip(in0(Damping), 0.0f, 1.0f);
    bool freeze = in0(Freeze) > 0.5f;
    bool reset = m_resetTrigger.process(in0(Reset));

    // Output pointers
    float* output = out(Output);
    
    for (int i = 0; i < nSamples; ++i) {
        
        // Sample audio-rate parameters per-sample
        float triggerRate = triggerRateIn[i];
        float overlap = sc_clip(overlapIn[i], 0.001f, static_cast<float>(NUM_CHANNELS));
        float delayTime = sc_clip(delayTimeIn[i], m_sampleDur, MAX_DELAY_TIME);
        float grainRate = sc_clip(grainRateIn[i], 0.125f, 4.0f);
        
        // 1. Get event data from scheduler
        auto scheduler = m_scheduler.process(triggerRate, reset, m_sampleRate);
        
        // 2. Process voice allocation with scaled rate
        float rateScaled = scheduler.rate / overlap;
        m_allocator.process(
            scheduler.trigger, 
            rateScaled, 
            scheduler.subSampleOffset,
            m_sampleRate
        );
        
        // 3. Process all grains
        float delayed = 0.0f;
   
        for (int g = 0; g < NUM_CHANNELS; ++g) {

            // Trigger new grain if needed
            if (m_allocator.triggers[g]) {

                // Calculate read position
                float normalizedWritePos = static_cast<float>(m_writePos) / m_bufFrames;
                float normalizedDelay = std::max(m_sampleDur, delayTime * m_sampleRate / m_bufFrames);
                float readPos = sc_wrap(normalizedWritePos - normalizedDelay, 0.0f, 1.0f);
                
                // Store grain data
                m_grainData[g].readPos = readPos;
                m_grainData[g].rate = grainRate;
                m_grainData[g].sampleCount = scheduler.subSampleOffset;
                m_grainData[g].hasTriggered = true;
            }
            
            // Process grain if voice allocator says it's active
            if (m_allocator.isActive[g]) {

                // Increment sample count
                m_grainData[g].sampleCount++;
                
                // Calculate grain position: readPos + (accumulator * grainRate)
                float grainPos = (m_grainData[g].readPos * m_bufFrames) + (m_grainData[g].sampleCount * m_grainData[g].rate);
                
                // Get sample with interpolation
                float grainSample = Utils::peekCubicInterp(
                    m_buffer,
                    m_bufSize, 
                    grainPos
                );
                
                // Apply Hanning window using voice allocator's sub-sample accurate phase
                grainSample *= WindowFunctions::hanningWindow(m_allocator.phases[g], 0.5f);
                delayed += grainSample;
            }
        }

        // 4. Apply amplitude compensation based on overlap
        float effectiveOverlap = std::max(1.0f, overlap);
        float compensationGain = 1.0f / std::sqrt(effectiveOverlap);
        delayed *= compensationGain;
        
        // 5. Apply feedback with damping filter
        float dampedFeedback = m_dampingFilter.processLowpass(delayed, damping);
        dampedFeedback = zapgremlins(dampedFeedback); // Prevent feedback buildup
        
        // 6. DC block input and write to delay buffer (only when not frozen)
        float dcBlockedInput = m_dcBlocker.processHighpass(input[i], 3.0f, m_sampleRate);
        
        if (!freeze) {
            m_buffer[m_writePos] = dcBlockedInput + dampedFeedback * feedback;
            m_writePos++;
            m_writePos = sc_wrap(m_writePos, 0, m_bufSize - 1);
        }
        
        // 7. Output with wet/dry mix
        output[i] = Utils::lerp(input[i], delayed, mix);
    }
}

PluginLoad(GrainUtilsUGens) {
    ft = inTable;
    registerUnit<GrainDelay>(ft, "GrainDelay", false);
}

At a quick look, I don't see a problem with the variables or anything, but there is one thing that requires attention: you are using std::vector.

std::vector uses dynamic allocation, which is not real-time safe: std::vector::resize() calls malloc/new, which can block and cause audio dropouts. Excellent for standard code, not good for real-time audio.

Just use a fixed-size array.


Thanks, so just swapping the vector for an array would solve the real-time-safety problem? What is the difference between allocating memory with RTAlloc and using an array, then? I guess I don't understand some fundamentals here, and when one might want to use one over the other. Why do we have to allocate the memory for the delay line with RTAlloc, but can use an array for the different channels?

Additionally, I would still like to know what's up with next_a, next_k and next together with makeSlope etc., whether my approach of using next together with makeSlope makes sense and is in line with SC's best practices, and whether I can forget about next_a and next_k (see my first bullet point).

I think so. As far as I can tell, your RTAlloc usage for the audio buffer is correct.

A vector can give you problems.

Things that should not run on a real-time thread: malloc, new, delete, free, std::vector::push_back(), std::vector::resize().

You can use a traditional fixed-size C array:

static constexpr int NUM_CHANNELS = 12;
GrainData m_grainData[NUM_CHANNELS];

Another option is std::array (C++17).

If on C++20, you can also use std::span


Hey, thanks. I'm just trying to figure out a reusable plugin development template and don't understand some of the details.

1.) I will use RTAlloc for memory allocation for my delay lines (like im already doing)
2.) I will use C++17 style std::array for real-time safe voice allocation

and I still have to figure out how to deal with different combinations of audio- and control-rate inputs. I really hope that the next / makeSlope approach is universal and in line with SC's best practices; then I could encapsulate that into my plugin development.

I don't know if someone wants to look at the source code after I have made an update to my GrainUtils repo, but that would be really awesome. :slight_smile: I'm really trying my best, but certain things take some time.


I would be glad to check it out, homie

Anytime the size is known at compile time, you can just store the array in place (C-style array or std::array).

If the size is only known at runtime, you need to allocate the memory dynamically. Usually, you would do this with std::vector, but as @smoge pointed out, this would use the system allocator by default, which is forbidden on the RT thread. That’s why you have to use RTAlloc, which is essentially a realtime-safe replacement for malloc.

Another option is std::array (C++17).

Nitpick, but std::array is C++11.

If on C++20, you can also use std::span

No, std::span is a view type, similar to std::string_view.


Yeah, it is not another array; it does not "own" anything, but you can iterate over it, map it, etc. You can use it as a "cheap copy" to work with.

Totally useful for realtime audio, isn’t it?

Totally useful for realtime audio, isn’t it?

It’s a glorified pointer + size. It is useful, but I don’t really see the connection to realtime audio. It’s mostly used as a replacement for C-style pointer + size function arguments:

void foo(const float* bufData, size_t bufSize);

VS

void foo(std::span<const float> buf);

You are correct, there is no special connection to realtime audio. It can still be useful, though; whether it is depends on what you are doing.