Slope and derivative

I’m sure someone with more signal processing background can explain this:


(
{
	a = Sweep.ar(0, 1000);
	b = sin(a);
	c = cos(a);
	d = Slope.ar(b) * ControlDur.ir;
	e = Slope.ar(a);
	[a, b, c, d, e]
	
}.plot(0.02, separately:true)
)

I wonder what the right scaling would have to be in the case of the slope of sin, and of course why there is so much jitter in the slope of the line.

The derivative of sine is cosine, so the minimum and maximum are -1 and 1, but in your case scaled by 1000, because the phase rises at a rate of 1000 per second:

(
    {
        a = Sweep.ar(0, 1000);
        b = sin(a);
        c = cos(a);
        d = Slope.ar(b)/1000;
        e = Slope.ar(a);
        [a, b, c, d, e]
        
    }.plot(0.02, minval:-1, maxval:1, separately:true)
)

The jitter isn’t all that bad. Obviously it should be a straight line at exactly 1000, but the value is off by less than 0.1:

(
    {
        a = Sweep.ar(0, 1000);
        b = sin(a);
        c = cos(a);
        d = Slope.ar(b)/1000;
        e = Slope.ar(a);
        [a, b, c, d, e]
        
    }.plot(0.02, minval:999.9, maxval:1000.1, separately:true)
)

Looking at the source code, I’m not sure why it would be off. Could it be because it calculates in double and then converts to float? It shouldn’t be off by 0.05…

Sam

Floating point error?

((44100*0.022675736961451*97)-(44100*0.022675736961451*96)).postln;
((44100*0.022675736961451*98)-(44100*0.022675736961451*97)).postln;
((44100*0.022675736961451*99)-(44100*0.022675736961451*98)).postln;

Sam

Ah ok, that was my original thought, but somehow it was something else. Good, this is reliable:

(
    {
        a = Sweep.ar(0, 1000);
        b = sin(a);
        c = cos(a);
        d = Slope.ar(b)/1000;
        [b, c, c-d]
        
    }.plot(0.02, minval:-1, maxval:1, separately:true)
)

You are right, I read the plot window incorrectly and thought it moved between 0 and 1000. Like this it is somewhat more acceptable …

Interestingly, Phasor gives the expected result without the jitter.

(
{
	var a = Sweep.ar(0, 1000);
	var b = Phasor.ar(0, 1000 / s.sampleRate, 0, inf);
	[a, b, Slope.ar(a), Slope.ar(b)]
}.plot(0.02, separately:true)
)

I tried examining the output of Phasor and Sweep like this:

(
var a, b, c, d;
{
	a = { Sweep.ar(0, 1000) }.asBuffer(0.02);
	b = { Phasor.ar(0, 1000 / s.sampleRate, 0, inf) }.asBuffer(0.02);
	0.05.wait;
	a.loadToFloatArray(action: {|fa| c = fa });
	b.loadToFloatArray(action: {|fa| d = fa });
	c.differentiate[1..].maxItem.debug("Sweep max difference"); // 0.020833969116211
	c.differentiate[1..].minItem.debug("Sweep min difference"); // 0.020832061767578
	d.differentiate[1..].maxItem.debug("Phasor max difference"); // 0.020833969116211
	d.differentiate[1..].minItem.debug("Phasor min difference"); // 0.020832061767578		
}.fork;
)

I don’t understand why Phasor behaves differently from Sweep given the above output.

EDIT: or is it just me not reading the plots correctly?

Jitter (aka quantization noise) should double in magnitude every time the floating point exponent increases by 1… matching the graph’s appearance.

1 <= x < 2, noise floor should be 2 ** -23
2 <= x < 4, noise floor should be 2 ** -22
4 <= x < 8, noise floor should be 2 ** -21
…etc
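To put numbers on this, here is a quick Python check (Python rather than sclang, just because it’s easy to poke at raw float bits there; `f32_ulp` is a small helper defined below, not a library call):

```python
import struct

def f32_ulp(x):
    # Distance from x to the next representable 32-bit float:
    # reinterpret the bit pattern as an integer and step up by one.
    bits = struct.unpack('<I', struct.pack('<f', x))[0]
    nxt = struct.unpack('<f', struct.pack('<I', bits + 1))[0]
    return nxt - x

print(f32_ulp(1.0) == 2 ** -23)  # True
print(f32_ulp(2.0) == 2 ** -22)  # True
print(f32_ulp(4.0) == 2 ** -21)  # True
```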

Edit:

So by the time Sweep reaches 20, the quantization noise should be on the order of 2 ** -19.

Then Slope is multiplying that by the sample rate, on the order of 2 ** 15 or 2 ** 16… so the quantization noise comes up to the order of 3 or 4 bits below the binary point, at best 1/16.

Worst case for floating point is adding a large number and a small number, which is exactly what Sweep does.
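Checking that estimate numerically (same caveat: a Python sketch with a hand-rolled ulp helper; Sweep itself accumulates in double, so this is only about the float32 output signal):

```python
import struct

def f32_ulp(x):
    # float32 spacing just above x, via a bit-pattern round trip
    bits = struct.unpack('<I', struct.pack('<f', x))[0]
    return struct.unpack('<f', struct.pack('<I', bits + 1))[0] - x

# After 20 ms, Sweep.ar(0, 1000) has reached 20.0.
noise = f32_ulp(20.0)    # 2 ** -19, as predicted above
print(noise * 44100)     # ~0.084: a few bits below the binary point
```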

hjh


Just to add: Slope doesn’t output the rate of change per sample. The rate of change per sample is simply delta = current sample - last sample.
The formula implemented is:

out[i] = (in[i] - in[i-1]) * sample_rate

which gives you the rate of change per second. If you divide that by the sweep rate (1000), the output for your sin/cos is normalized between -1 and 1.
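As a quick sanity check of that normalization (plain Python doubles here, sketching the same per-sample difference that Slope computes):

```python
import math

SR = 44100
RATE = 1000  # phase rate of Sweep.ar(0, 1000), in units per second

# one second of sin(Sweep.ar(0, RATE))
x = [math.sin(RATE * i / SR) for i in range(SR)]

# Slope-style derivative: per-sample difference times the sample rate
slope = [(x[i] - x[i - 1]) * SR for i in range(1, SR)]

peak = max(abs(s) for s in slope)
print(peak / RATE)  # ~1.0: dividing by the rate normalizes to [-1, 1]
```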

Thank you for mentioning this. The rate of change is always dx/dt, so measuring it per sample or per second doesn’t change what it is, only the unit it is expressed in.

To compare:

(
{
	var a, b, c;
	a = Sweep.ar(0, 1000);
	b = Slope.ar(a);
	c = (a - Delay1.ar(a)) / SampleDur.ir;
	[a, b, c]
	
}.plot(separately:true)

)

(And (a - Delay1.ar(a)) / SampleDur.ir is the same as (a - Delay1.ar(a)) * SampleRate.ir)

Here are the two relevant implementations of Phasor and Sweep:

void Sweep_next_kk(Sweep* unit, int inNumSamples) {
    float* out = ZOUT(0);
    float curin = ZIN0(0);
    double rate = ZIN0(1) * SAMPLEDUR;
    float previn = unit->m_previn;
    double level = unit->mLevel;

    if (previn <= 0.f && curin > 0.f) {
        float frac = -previn / (curin - previn);
        level = frac * rate;
    }

    LOOP1(inNumSamples, level += rate; ZXP(out) = level;);

    unit->m_previn = curin;
    unit->mLevel = level;
}

void Phasor_next_kk(Phasor* unit, int inNumSamples) {
    float* out = ZOUT(0);

    float in = ZIN0(0);
    double rate = ZIN0(1);
    double start = ZIN0(2);
    double end = ZIN0(3);
    float resetPos = ZIN0(4);

    float previn = unit->m_previn;
    double level = unit->mLevel;

    if (previn <= 0.f && in > 0.f) {
        level = resetPos;
    }
    LOOP1(inNumSamples, level = sc_wrap(level, start, end); ZXP(out) = level; level += rate;);

    unit->m_previn = in;
    unit->mLevel = level;
}

The “problem” is fundamentally about floating point precision, which is unavoidable:

Sweep accumulates small values over time, and floating point precision gets worse as the accumulated value grows. The error becomes visible when we subtract consecutive values, which is exactly what Slope does.

In the 0–1 range the differences are tiny. In the 1000 range, however, they become apparent:

~value1 = 1000.123456
~value2 = 1000.123457  
(~value2 - ~value1) * 44100
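In 64-bit that difference survives, but at 32 bits (the precision of a server signal) those two values are not even distinguishable. A quick Python illustration, emulating the float32 rounding with a struct round trip:

```python
import struct

def f32(x):
    # round a Python double to the nearest 32-bit float
    return struct.unpack('<f', struct.pack('<f', x))[0]

a = f32(1000.123456)
b = f32(1000.123457)
# float32 spacing near 1000 is 2 ** -14, about 6.1e-5, so a
# difference of 1e-6 is far below one ulp here
print(a == b)  # True: both collapse onto the same float32 value
```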

The Slope implementation is one way of expressing the calculus concept of a derivative, and the units will work out the same regardless of how you calculate it (important point!)

Yep – I already said something about this but nobody picked up that thread :+1: :laughing:

hjh


I just reinforced your correct (and fundamental) point. :blush: And it is in fact unavoidable.

(meta-question): in threads like these, which start with a number of oversights (like me misreading the plot), should one correct the first entry? If we do, not all the replies may make sense anymore; if we don’t, it is tedious to find out what the solution was.

This is certainly something that one should know, and also know of the alternatives (or we should change it).


I believe there are a few alternatives for compensation:

Example: Kahan summation algorithm - Wikipedia

(
var regularSum = 0.0;
var kahanSum = 0.0;
var c = 0.0;
var y, t;

100000.do { |i|
    var value = 0.1003;
    regularSum = regularSum + value;
    y = value - c;
    t = kahanSum + y;
    c = (t - kahanSum) - y;
    kahanSum = t;
};

["Standard:", regularSum, "Kahan:", kahanSum].postln;
)

//  Standard: 10030.000000005
//  Kahan: 10030.0
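The same comparison runs unchanged in Python (doubles there too), for anyone who wants to check it without booting sclang:

```python
regular = 0.0
kahan = 0.0
c = 0.0  # running compensation for lost low-order bits

for _ in range(100000):
    value = 0.1003
    regular += value
    y = value - c          # apply the compensation
    t = kahan + y
    c = (t - kahan) - y    # recover what the addition just dropped
    kahan = t

print(regular)  # drifts a few billionths away from 10030
print(kahan)    # stays at (essentially) exactly 10030.0
```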

Interesting.

I think we could also fix the unmodulated version of Sweep by just counting up sample frames and multiplying by the rate. Then we avoid repeated additions of very small values.
But maybe using Phasor is a way around this. Sweep may never have been intended as an ever rising slope, but as something triggered regularly.
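A rough Python sketch of why counting frames helps, with a caveat: this exaggerates the effect by accumulating in float32, while scsynth’s Sweep actually accumulates in double and only quantizes on output (see the source above). The `f32` helper emulates the 32-bit rounding:

```python
import struct

def f32(x):
    # round a double to the nearest 32-bit float
    return struct.unpack('<f', struct.pack('<f', x))[0]

SR = 44100
inc = f32(1000.0 / SR)  # per-sample increment as a float32

acc = 0.0
acc_err = 0.0  # worst error of repeated addition
mul_err = 0.0  # worst error of count * increment
for i in range(1, SR + 1):
    acc = f32(acc + inc)            # Sweep-style accumulation
    mul = f32(i * (1000.0 / SR))    # frame counter times rate
    exact = i * 1000.0 / SR
    acc_err = max(acc_err, abs(acc - exact))
    mul_err = max(mul_err, abs(mul - exact))

print(acc_err, mul_err)  # accumulation drifts far beyond one rounding step
```

The multiplied version never carries error from one sample to the next; its worst case stays within one rounding step of the output precision.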


Phasor is the best practical solution, that’s clear. What I posted delivers more precision, I believe.

I see what you’re thinking, but – this would deal with drift coming from cumulative rounding error. The multiplication result still has to be quantized to the available precision, and that precision is less than you think. I would still predict quantization error on the order of 1/16 or maybe even 1/8 in the original test case, just because multiplying by the sample rate eats up 16 bits of fractional precision.

hjh


What I meant is that multiplication wouldn’t accumulate, because its result is just the output and not reused internally. Here is a comparison:


(
{
	var rate = 1000;
	a = Sweep.ar(0, rate);
	b = Sweep.ar(0) * rate;
	c = Duty.ar(SampleDur.ir, 0, Dseries(0,1)) * (SampleDur.ir * rate);
	e = Slope.ar(a);
	f = Slope.ar(b);
	g = Slope.ar(c);
	[e, f, g]
	
}.plot(0.02, separately:true)
)


You are right that the multiplication will be less precise: a few low bits are lost. But they are not as relevant as before, because the bits are lost uniformly across all samples rather than accumulating into a growing error, which is what made it a real problem.

I think the multiplication idea is a good balance: it loses some precision, but it does the job. The compensated sum (in all its variations) delivers much more precision, but also requires several operations per sample instead of a single add or a single multiply.