Ohhh I just noticed the graphs’ scales: the first is unclear, the second is 1000 ± 0.2, and the third is 1000 ± 1000, which would render the jitter invisible. I bet if you matched the y-axis scaling between graphs 2 and 3, the third would exhibit jitter as well.
1000/48000 (the scaling factor) has a factor of 3 in the denominator, so it isn’t exact in binary, and quantization error will come into play: it’s mathematically impossible for the value to be fully accurate. Therefore there must be jitter in the third graph too.
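For anyone who wants to verify that, here’s a quick standalone check (my own demo, not code from either UGen) that prints the rounded increment with extra digits:

```cpp
// 1000/48000 = 1/48, and 48 = 16 * 3; the factor of 3 makes the
// binary expansion repeat forever, so the double must be rounded.
#include <cstdio>

int main() {
    double step = 1000.0 / 48000.0; // the per-sample increment
    // A true 1/48 would print 3s repeating forever; the double's
    // trailing digits deviate, exposing the quantization.
    printf("step = %.20f\n", step);
}
```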
I tried the changes and only edited a few lines. There is no accumulation of tiny increments; it avoids that entirely by keeping a sample position and calculating each output from it directly (the same idea as Phasor, except that instead of a phase value it’s a sample counter).
It seems to keep the same functionality without all the floating-point accumulation artifacts. I see a win-win. A minimal sketch of the two strategies is below.
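This is plain C++ rather than actual UGen plugin code, and the function names are mine; it’s only meant to show the structural difference described above:

```cpp
#include <cstdint>

// Original Sweep-style: add a tiny increment every sample, so each
// sample inherits all the rounding errors of the samples before it.
void renderAccum(float* out, int n, double& level,
                 double rate, double sampleDur) {
    double slope = rate * sampleDur; // e.g. 1000 * (1/48000)
    for (int i = 0; i < n; ++i) {
        out[i] = (float)level;
        level += slope; // error accumulates here
    }
}

// Sweep2-style, as described above: keep an integer sample counter
// and compute each output directly, so every sample carries only the
// rounding of a single multiply, no matter how long we've been running.
void renderCounter(float* out, int n, int64_t& sampleCount,
                   double rate, double sampleRate) {
    for (int i = 0; i < n; ++i) {
        out[i] = (float)((double)sampleCount * (rate / sampleRate));
        ++sampleCount;
    }
}
```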
Make sure to adjust the scaling of the last plot. Slope.ar(Sweep2.ar()) is initializing to zero, which zooms the y-axis way, way out. If there’s jitter on the order of an eighth of a unit but the plot is jamming 1500–2000 units into a hundred pixels, you won’t see the jitter.
Yes, I saw that. The original initializes with level = frac * rate, where rate is scaled by SAMPLEDUR; I believe that makes it start with a small nonzero initial value. Sweep2 starts by calculating sampleCount * (rate/SAMPLERATE) directly, which gives the value from the first sample. Roughly like this:
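(A hypothetical sketch only: frac stands in for whatever sub-sample offset the original constructor computes, and the numbers are illustrative, not lifted from the SC source.)

```cpp
#include <cstdint>
#include <cstdio>

int main() {
    double rate = 1000.0;
    double sampleDur = 1.0 / 48000.0;  // SAMPLEDUR
    double frac = 0.5;                 // placeholder sub-sample offset

    double levelOriginal = frac * (rate * sampleDur);  // small nonzero start
    int64_t sampleCount = 0;
    double levelSweep2 = (double)sampleCount * (rate * sampleDur); // exactly 0

    printf("original: %.17g\nsweep2:   %.17g\n", levelOriginal, levelSweep2);
}
```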
The real improvement (?) of Sweep2 is that these errors don’t accumulate over time, not that it gets rid of quantization error. That will reappear as soon as we do any floating-point operation.
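The drift is easy to measure. Here’s a small comparison I put together (again my own demo, not UGen code) of the accumulator path against the counter path after one minute of samples at 48 kHz:

```cpp
#include <cstdint>
#include <cstdio>

int main() {
    const double rate = 1000.0, sampleRate = 48000.0;
    const int64_t n = 48000LL * 60; // one minute of samples

    double acc = 0.0;
    const double slope = rate / sampleRate;
    for (int64_t i = 0; i < n; ++i) acc += slope; // accumulator path

    double direct = (double)n * (rate / sampleRate); // counter path

    printf("accum  = %.17g\n", acc);
    printf("direct = %.17g\n", direct);
    printf("drift  = %.3g\n", acc - direct);
}
```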
Something else would be needed to truly improve precision. For example, would doing all operations in integer math until a final conversion be appropriate, something like the sketch below? Even then, the output format has its own limits; that last rounding is unavoidable.
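A hypothetical sketch of that idea (names and structure are mine): keep the position as an exact integer quotient and remainder, so nothing is rounded until the final conversion to double at output time:

```cpp
#include <cstdint>
#include <cstdio>

int main() {
    const int64_t rate = 1000;        // ramp units per second
    const int64_t sampleRate = 48000; // samples per second
    const int64_t samples[] = {0, 1, 48000, 48000 * 600};

    for (int64_t n : samples) {
        // Exact integer split: no rounding happens until the
        // division and addition below, the unavoidable conversion
        // into the floating-point output format.
        int64_t whole = (n * rate) / sampleRate;
        int64_t rem   = (n * rate) % sampleRate;
        double out = (double)whole + (double)rem / (double)sampleRate;
        printf("n = %lld -> %.17g\n", (long long)n, out);
    }
}
```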
Sure, I understand. Just pointing out for other readers that the graphs might make it look like the new approach eliminates the error, when it only scales it down to invisibility.
Observing the changes in the UGen code (Sweep2), we can draw some conclusions. The useful cases would be restricted to precision measurements, not “normal” audio work: the differences (around 10^-15 for short durations) are well below the noise floor for audio processing. But there is a difference; I just don’t know where it would matter in SC, maybe when it runs for long time periods (looking at the accumulated values).
Another difference, and I couldn’t tell whether it was by design or by accident, is a small constant value present from the very beginning, around 2.27e-15; that’s why the original UGen doesn’t initialize at zero.