Trying to understand Wavetable format

From the source code and the documentation, I can see that Wavetable and Signal use different storage formats. I think I understand the conversion between the two, but I'm trying to understand the optimization behind it: the reasoning for the change of format is not obvious to me. This may be a simple gap in my knowledge.
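
For context, here is the conversion as I currently understand it (a sketch of my reading of Signal's wavetable conversion, with my own names; not the actual implementation):

// Sketch: a Signal of N samples becomes a Wavetable of 2N floats,
// storing for each adjacent sample pair (x0, x1) the coefficients
// 2*x0 - x1 and x1 - x0. (My reading; the function name is mine.)
#include <vector>

std::vector<float> signalToWavetable(const std::vector<float>& sig) {
    std::vector<float> wt(2 * sig.size());
    for (size_t i = 0; i < sig.size(); ++i) {
        const float x0 = sig[i];
        const float x1 = sig[(i + 1) % sig.size()]; // the table wraps around
        wt[2 * i] = 2.f * x0 - x1;
        wt[2 * i + 1] = x1 - x0;
    }
    return wt;
}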

The documentation says this is to make float-to-integer conversions more efficient in the underlying C code, but I have yet to locate the code where this optimization actually takes place.

So, at the very least, is there a link to the code that takes advantage of the Wavetable format as opposed to the Signal format? Any reading on related topics would be helpful as well!

Really appreciate any help on this!

I don’t have time to trace out exactly how it works, but the bits you’re looking for are in functions such as Osc_iaa_perform() in OscUGens.cpp. That calls lookupi1(), defined in include/plugin_interface/SC_SndBuf.h, where you find code like this:

// PhaseFrac: extract the fractional part of a fixed-point phase as a float
// in [0.0, 1.0). 0x3F800000 is the IEEE 754 bit pattern for 1.0f; the
// phase's low 16 bits are shifted into the top of the 23-bit mantissa,
// producing a float in [1.0, 2.0), so subtracting 1.0 yields the fraction
// without an integer-to-float conversion.
inline float PhaseFrac(uint32_t inPhase) {
    union {
        uint32_t itemp;
        float ftemp;
    } u;
    u.itemp = 0x3F800000 | (0x007FFF80 & ((inPhase) << 7));
    return u.ftemp - 1.f;
}

// PhaseFrac1: same trick, but returns the value in [1.0, 2.0) directly,
// i.e. 1 + frac, skipping even the subtraction.
inline float PhaseFrac1(uint32_t inPhase) {
    union {
        uint32_t itemp;
        float ftemp;
    } u;
    u.itemp = 0x3F800000 | (0x007FFF80 & ((inPhase) << 7));
    return u.ftemp;
}

// Non-interpolating lookup: the high bits of the phase index the table.
inline float lookup(const float* table, int32_t phase, int32_t mask) { return table[(phase >> 16) & mask]; }

// Phase-split constants used by the oscillator perform functions and lookupi1().
#define xlobits 14
#define xlobits1 13

// Interpolating lookup on a plain (Signal-format) table: ordinary linear
// interpolation, computing (b - a) on every sample.
inline float lookupi(const float* table, uint32_t phase, uint32_t mask) {
    float frac = PhaseFrac(phase);
    const float* tbl = table + ((phase >> 16) & mask);
    float a = tbl[0];
    float b = tbl[1];
    return a + frac * (b - a);
}

// Interpolating lookup on a Wavetable-format table: frac is in [1.0, 2.0),
// and the (b - a) subtraction has been precomputed into the stored
// coefficients, leaving a single multiply-add per sample.
inline float lookupi2(const float* table, uint32_t phase, uint32_t mask) {
    float frac = PhaseFrac1(phase);
    const float* tbl = table + ((phase >> 16) & mask);
    float a = tbl[0];
    float b = tbl[1];
    return a + frac * b;
}

// Wavetable lookup used by Osc_iaa_perform() and friends: table0/table1
// point at the two interleaved coefficient streams, and index is a byte
// offset (lomask arrives pre-shifted from the caller), hence the char*
// arithmetic.
inline float lookupi1(const float* table0, const float* table1, uint32_t pphase, int32_t lomask) {
    float pfrac = PhaseFrac1(pphase);
    uint32_t index = ((pphase >> xlobits1) & (uint32_t)lomask);
    float val1 = *(const float*)((const char*)table0 + index);
    float val2 = *(const float*)((const char*)table1 + index);
    return val1 + val2 * pfrac;
}

It looks like the LSBs of an integer phase are used for the fractional part, and this feeds into a linear interpolation formula. But I haven’t delved into it at the bit level (and won’t).
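
If I read it right, the payoff is twofold: PhaseFrac1() produces 1 + frac with no final subtraction, and the Wavetable format precomputes the (b - a) term, so the inner loop is left with one multiply-add. Here's a quick standalone check of that algebra, assuming the coefficient layout from your sketch above (a = 2*x0 - x1, b = x1 - x0); this is my own sketch, not code from the SC source:

// Check: with a = 2*x0 - x1 and b = x1 - x0, evaluating a + (1 + frac) * b
// should equal plain linear interpolation x0 + frac * (x1 - x0).
#include <cassert>
#include <cmath>

int main() {
    const float x0 = 0.25f, x1 = -0.6f; // two adjacent samples of the original Signal
    const float a = 2.f * x0 - x1;      // even Wavetable slot
    const float b = x1 - x0;            // odd Wavetable slot
    for (float frac = 0.f; frac < 1.f; frac += 0.125f) {
        const float plain = x0 + frac * (x1 - x0); // lookupi() on a Signal
        const float wavet = a + (1.f + frac) * b;  // lookupi1()/lookupi2() on a Wavetable
        assert(std::fabs(plain - wavet) < 1e-6f);
    }
    return 0;
}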

hjh

This is exactly the type of response I was looking for. Much appreciated!
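
Follow-up for anyone who finds this later: the bit trick in PhaseFrac() checks out. 0x3F800000 is the IEEE 754 bit pattern for 1.0f, and OR-ing the phase's low 16 bits into the top of the 23-bit mantissa yields a float in [1.0, 2.0) without any integer-to-float conversion instruction, which I take to be the optimization the documentation alludes to. A small standalone demo (my own sketch, not SC code):

// Demo of the fractional-part bit trick used by PhaseFrac()/PhaseFrac1().
// Type punning via union mirrors what the SC source does.
#include <cstdint>
#include <cstdio>

int main() {
    // A phase whose low 16 bits encode a fraction of 0x4000/0x10000 = 0.25.
    uint32_t phase = (123u << 16) | 0x4000u;

    union { uint32_t i; float f; } u;
    // 0x3F800000 is 1.0f; the low 16 phase bits land in the top 16 bits
    // of the 23-bit mantissa, giving a float in [1.0, 2.0).
    u.i = 0x3F800000 | (0x007FFF80 & (phase << 7));

    std::printf("1 + frac = %f, frac = %f\n", u.f, u.f - 1.f);
    // Prints: 1 + frac = 1.250000, frac = 0.250000
    return 0;
}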