I can see from the source code and the documentation that Wavetable and Signal use different storage formats. I think I understand the conversion between the two formats, but I’m trying to understand the optimization behind them, and the reasoning for the difference in formats is not obvious to me. This may be a simple gap in my knowledge.
The documentation says that this makes float-to-integer conversions more efficient in the underlying C code, although I have yet to locate the code where this optimization takes place.
I’m wondering if, at the very least, there is a link to the code that takes advantage of the Wavetable format as opposed to the Signal format. Pointers to any relevant reading would be helpful as well!
I don’t have time to trace out exactly how it works, but the bits you’re looking for are in functions such as Osc_iaa_perform() in OscUGens.cpp. That calls lookupi1(), defined in include/plugin_interface/SC_SndBuf.h, where there are bits like the following.
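Paraphrasing from memory rather than quoting verbatim (the helper names below are mine, and the real lookupi1() works on raw table pointers and precomputed masks, so check SC_SndBuf.h for the actual code), the trick is roughly this:

```cpp
#include <cstdint>
#include <cstring>

// Build a float in [1.0, 2.0) directly from the low phase bits.
// 0x3F800000 is the bit pattern of 1.0f (exponent set, mantissa zero);
// OR-ing the fractional phase bits into the 23-bit mantissa yields
// 1.0 + frac without any int-to-float conversion instruction.
static inline float phaseFrac1(uint32_t phase, uint32_t tableSizeLog2) {
    uint32_t fracBits = (phase << tableSizeLog2) >> 9; // top 23 fractional bits
    uint32_t bits = 0x3F800000u | fracBits;
    float f;
    std::memcpy(&f, &bits, sizeof f); // bit-cast without aliasing UB
    return f;                         // in [1.0, 2.0)
}

// Interpolating lookup into a wavetable-format buffer: entry i holds the
// pair (2a - b, b - a) for adjacent signal samples a = s[i], b = s[i+1].
static inline float lookupWavetable(const float* table, uint32_t tableSizeLog2,
                                    uint32_t phase) {
    uint32_t index = phase >> (32 - tableSizeLog2); // top bits: table index
    float frac = phaseFrac1(phase, tableSizeLog2);  // low bits: fraction in [1, 2)
    const float* pair = table + 2 * index;
    // (2a - b) + (b - a) * (1 + t)  ==  a + (b - a) * t,
    // i.e. an ordinary lerp, computed as a single multiply-add.
    return pair[0] + pair[1] * frac;
}
```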
It looks like the LSBs of an integer phase are used for the fractional part, and this feeds into a linear interpolation formula. But I haven’t delved into it at the bit level (and won’t).
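If it helps, here is the round trip end to end, reusing lookupWavetable() from the sketch above and assuming the (2a - b, b - a) pair encoding, which is how I understand Signal:asWavetable to pack things. Again, this is a sketch, not the library’s code. It converts a plain table to wavetable format and checks the bit-twiddled lookup against a textbook lerp:

```cpp
#include <cmath>
#include <cstdint>
#include <cstdio>
#include <vector>

// Convert a plain (cyclic, power-of-two size) signal to wavetable format:
// for each adjacent pair (a, b), store (2a - b, b - a). Storage doubles.
std::vector<float> toWavetable(const std::vector<float>& sig) {
    std::vector<float> wt(2 * sig.size());
    for (std::size_t i = 0; i < sig.size(); ++i) {
        float a = sig[i];
        float b = sig[(i + 1) % sig.size()]; // wrap: the table is cyclic
        wt[2 * i]     = 2.0f * a - b;
        wt[2 * i + 1] = b - a;
    }
    return wt;
}

int main() {
    const uint32_t sizeLog2 = 9; // 512-sample table: 32 - 9 = 23 fraction bits
    const std::size_t n = std::size_t(1) << sizeLog2;
    const double pi = std::acos(-1.0);

    std::vector<float> sine(n);
    for (std::size_t i = 0; i < n; ++i)
        sine[i] = float(std::sin(2.0 * pi * double(i) / double(n)));
    std::vector<float> wt = toWavetable(sine);

    // Sample a few arbitrary phases; compare against an ordinary lerp.
    uint32_t phase = 0x10000000u;
    for (int k = 0; k < 8; ++k, phase += 0x13579BDFu) {
        float fast = lookupWavetable(wt.data(), sizeLog2, phase);
        uint32_t i = phase >> (32 - sizeLog2);
        float t = float(phase & ((1u << (32 - sizeLog2)) - 1))
                / float(1u << (32 - sizeLog2));
        float plain = sine[i] + (sine[(i + 1) % n] - sine[i]) * t;
        std::printf("phase %08x: wavetable %+.6f  plain lerp %+.6f\n",
                    phase, double(fast), double(plain));
    }
    return 0;
}
```

The payoff, as far as I can tell, is that the per-sample inner loop has no int-to-float conversion and no subtraction to recover the fraction: just shifts, an OR, and a multiply-add, which was a meaningful saving on the hardware SuperCollider was originally written for.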