The derivative of sine is cosine, so the min and max are -1 and 1; in your case they are scaled by 1000, because the frequency is 1000:
(
{
    a = Sweep.ar(0, 1000);  // ramp rising at 1000 units per second
    b = sin(a);
    c = cos(a);
    d = Slope.ar(b) / 1000; // should match cos(a)
    e = Slope.ar(a);        // should be a constant 1000
    [a, b, c, d, e]
}.plot(0.02, minval: -1, maxval: 1, separately: true)
)
The jitter isn’t all that bad. Obviously it should be a straight line at exactly 1000, but the value is off by less than 0.1:
(
{
    a = Sweep.ar(0, 1000);
    b = sin(a);
    c = cos(a);
    d = Slope.ar(b) / 1000;
    e = Slope.ar(a);
    [a, b, c, d, e]
}.plot(0.02, minval: 999.9, maxval: 1000.1, separately: true)
)
Looking at the source code, I’m not sure why it would be off. Could it be because it calculates in double precision and then converts to float? That alone shouldn’t put it off by 0.05…
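One way to see the 32-bit rounding from the language side (a sketch; sclang Floats are 64-bit, while the server computes audio in 32-bit floats) is to round a value through its 32-bit bit pattern using Float’s as32Bits and from32Bits:

(
// quantize a 64-bit sclang Float to 32-bit float precision,
// i.e. the precision the server works at
var to32 = { |x| Float.from32Bits(x.as32Bits) };
var x = 20.0001;
(to32.(x) - x).postln; // rounding error near 20, on the order of 2 ** -20
)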
Interestingly, Phasor gives the expected result without the jitter.
(
{
    var a = Sweep.ar(0, 1000);
    var b = Phasor.ar(0, 1000 / s.sampleRate, 0, inf);
    [a, b, Slope.ar(a), Slope.ar(b)]
}.plot(0.02, separately: true)
)
I tried examining the output of Phasor and Sweep like this:
(
var a, b, c, d;
{
    a = { Sweep.ar(0, 1000) }.asBuffer(0.02);
    b = { Phasor.ar(0, 1000 / s.sampleRate, 0, inf) }.asBuffer(0.02);
    0.05.wait; // let the two 0.02 s recordings finish
    a.loadToFloatArray(action: { |fa| c = fa });
    b.loadToFloatArray(action: { |fa| d = fa });
    0.05.wait; // loadToFloatArray is asynchronous; wait for the data to arrive
    c.differentiate[1..].maxItem.debug("Sweep max difference");  // 0.020833969116211
    c.differentiate[1..].minItem.debug("Sweep min difference");  // 0.020832061767578
    d.differentiate[1..].maxItem.debug("Phasor max difference"); // 0.020833969116211
    d.differentiate[1..].minItem.debug("Phasor min difference"); // 0.020832061767578
}.fork;
)
I don’t understand why Phasor behaves differently from Sweep given the above output.
Jitter (aka quantization noise) should double in magnitude every time the floating-point exponent increases by 1, matching the graph’s appearance (see the sketch after this list):
1 <= x < 2, noise floor should be 2 ** -23
2 <= x < 4, noise floor should be 2 ** -22
4 <= x < 8, noise floor should be 2 ** -21
…etc
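A quick check of those step sizes, just evaluating the spacing formula for 32-bit floats rather than measuring the server:

(
// spacing between adjacent 32-bit floats near magnitude x: 2 ** (floor(log2(x)) - 23)
[1, 2, 4, 8, 20].do { |x|
    "float32 step near %: %".format(x, 2 ** (x.log2.floor - 23)).postln;
};
)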
Edit:
So by the time Sweep reaches 20, the quantization noise should be on the order of 2 ** -19.
Then Slope is multiplying that by the sample rate, on the order of 2 ** 15 or 2 ** 16… so the quantization noise comes up to the order of 3 or 4 bits below the binary point, at best 1/16.
Worst case for floating point is adding a large number and a small number, which is exactly what Sweep does.
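Putting numbers on it, assuming a 48 kHz sample rate (the actual rate isn’t stated in the thread):

(
// float32 step near Sweep's value of ~20, scaled by Slope's multiply by the sample rate
((2 ** -19) * 48000).postln; // -> 0.091552734375, i.e. on the order of 1/16 to 1/8
)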
Just to add: Slope doesn’t output the rate of change per sample. The rate of change per sample is delta = current sample - last sample.
The formula implemented is:
out[i] = (in[i] - in[i-1]) * sampling_rate
which does give you frequency in Hz. If you divide that by the initial frequency, the output for your sin/cos is normalized between -1 and 1.
Thank you for mentioning this. The rate of change is always dx/dt; whether you measure it per sample or per second changes only the units, not the underlying quantity.
To compare:
(
{
    var a, b, c;
    a = Sweep.ar(0, 1000);
    b = Slope.ar(a);                       // built-in differentiation
    c = (a - Delay1.ar(a)) / SampleDur.ir; // manual one-sample difference, per second
    [a, b, c]
}.plot(separately: true)
)
(And (a - Delay1.ar(a)) / SampleDur.ir is the same as (a - Delay1.ar(a)) * SampleRate.ir)
The “problem” is fundamentally about floating-point precision (unavoidable).
Sweep accumulates small values over time, and floating-point precision gets worse as the total grows.
The error shows up when we subtract consecutive values, which is exactly what Slope does.
In the 0-1 range the errors are tiny; in the 1000 range the differences become apparent.
The Slope implementation is one way of expressing the calculus concept of a derivative, and the units work out the same regardless of how you calculate it (important point!).
(Meta-question): in threads like these, which start with a number of oversights (like me misreading the plot), should one correct the first entry? If we do, not all the replies may make sense anymore; if we don’t, it is tedious to find out what the solution was.
A compensated (Kahan) summation on the language side shows how much of the drift comes from naively accumulating a small value:

(
var regularSum = 0.0;
var kahanSum = 0.0;
var c = 0.0; // running compensation: low-order bits lost so far
var y, t;
100000.do {
    var value = 0.1003;
    regularSum = regularSum + value;
    y = value - c;          // correct the addend by the previously lost bits
    t = kahanSum + y;       // big + small: low-order bits of y are lost here
    c = (t - kahanSum) - y; // recover the lost part for the next iteration
    kahanSum = t;
};
["Standard:", regularSum, "Kahan:", kahanSum].postln;
)
// Standard: 10030.000000005
// Kahan: 10030.0
I think we could also fix the unmodulated version of Sweep by just counting up sample frames and multiplying by the rate. Then we don’t do repeated additions of very small values.
But maybe using Phasor is a way around this. Sweep may never have been intended to be an ever-rising slope, but something triggered regularly.
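A minimal sketch of the frame-counting idea, using Phasor with a step of 1 (integer counts are exact in 32-bit floats up to 2 ** 24, so the error no longer accumulates; the final multiply still rounds once per sample):

(
{
    var frames = Phasor.ar(0, 1, 0, inf);       // exact integer frame counter
    var sweep = frames * (1000 * SampleDur.ir); // rounds once per sample, no drift
    [sweep, Slope.ar(sweep)]
}.plot(0.02, separately: true)
)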
I see what you’re thinking, but this would only deal with the drift coming from cumulative rounding error. The multiplication result still has to be quantized to the available precision, and that precision is less than you might think. I would still predict quantization error on the order of 1/16 or maybe even 1/8 in the original test case, just because multiplying by the sample rate eats up 16 bits of fractional precision.
You are right, and the multiplication will be less precise too. A few low bits will be lost, but they will not be as relevant as before. I don’t know if this is a good explanation, but the lost bits are lost uniformly across all calculations; they don’t accumulate into a real problem the way they did before.
I think the multiplication idea is a good balance: it loses some precision, but it will do the job. A compensated sum (in any of its variations) delivers much more precision, but it also requires more operations each time, instead of a single add and a single multiply.