The odd (bad) case of Line

For the lulz :-/

play {
	Line.kr(0, 2000000, 2000000).poll(1, "line");
}

play {
	Line.kr(0, 3000000, 3000000).poll(1, "oh noes");
}

Will there ever be a day when this software doesn't surprise me?

If you don’t need doneAction:

~lineAr = { arg start, end, dur;
	// Sweep rises from 0 at end/dur per second; clip then holds the end value
	// (matches Line when start is 0)
	var sw = Sweep.ar(0, end/dur);
	sw.clip(start, end);
};

play {
	~lineAr.(0, 3e6, 3e6).poll(4, "ok");
	0.0;
}

If you do need it:

~lineAr = { arg start, end, dur;
	var sw = Sweep.ar(0, end/dur);
	// free the enclosing synth once the raw sweep passes end (the doneAction: 2 equivalent)
	FreeSelf.kr(A2K.kr(sw > end));
	sw.clip(start, end);
};

hjh


BTW both examples post the same thing at 44.1 kHz.

    int counter = (int)(dur * unit->mRate->mSampleRate + .5f);

At kr, unit->mRate->mSampleRate is block rate = sr / blocksize, so:

3000000 * (48000 / 64) = 2250000000

(signed) 32 bit integer limit = 2147483647 – overflow

At 44.1 kHz, (3000000 * 44100.0) / 64 = 2067187500, no overflow.
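You can sanity-check those numbers in sclang (a quick sketch, assuming the default block size of 64; floats are used so the products themselves can't overflow):

48000 / 64;                  // 750 control periods per second
3000000 * (48000 / 64.0);    // 2250000000.0 -> exceeds 2^31 - 1 = 2147483647
3000000 * (44100 / 64.0);    // 2067187500.0 -> still fits in a signed 32-bit int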

I suppose James McCartney could be forgiven for failing to anticipate that someone might want to run a Line continuously for 34.7 days :no_mouth:

hjh

A safe maximum would be 25,550 days = three score and ten years

hmm, I’d always assume that something might need to run for a year or more :wink:

Anyway, in my system the virtual sampling rate is 14112000 Hz, so you can see how far that will get me here.

Using BufRd, you’ll hit the precision limit in less than two seconds that way.

hjh

I’m just curious: why would you use such a high sampling rate?

This is not the rate at which any interface runs, but the model resolution of points in time. It’s simply a number that can represent all common sampling rates up to 96K without rounding errors (*) when translating to any other common rate. And since I use 64-bit integers anyway, this has zero chance of causing any issues, unless of course they are translated into a 32-bit float system like scsynth.

Those time positions essentially never appear on scsynth (correction: they never appear in my case), but are usually translated to seconds beforehand. The problem with Line only appeared because I was using inf for unbounded durations, and after some problems I started seeing the resolution issue the initial post was about.

I like integers better than floating point numbers for representing time in a computer music system. You get precise arithmetic for free.


(*) 960 lcm 882 = 141120
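Just to spell out the footnote (960 and 882 presumably being 96000 and 88200 divided by 100), this can be checked directly in sclang:

lcm(960, 882);          // 141120
lcm(960, 882) * 100;    // 14112000, the virtual rate above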

but the model resolution of points in time. It’s simply a number that can represent all common sampling rates up to 96K without rounding errors (*) when translating to any other common rate.

I see! Actually, that’s also how Pd internally represents time: logical time in Pd is counted in units of 1/14112000 of a second (32 * 441 units per millisecond).

Since Pd uses double precision floating point numbers for time values, it doesn’t cause issues, either - unless your patch runs longer than 10 years :slight_smile:

Ha, interesting, I wasn’t aware it chose the same number.

Out of curiosity: that gives a time resolution of 7.086167800453515e-08 seconds, while OSC time has a resolution of 2.3283064365386963e-10; it uses doubles anyway, and the numbers grow faster for higher rates, say 192 kHz. I don’t get why it would be better to represent time that way.

Generally, we have two options for representing quantities: a) fixed point (Fixed-point arithmetic - Wikipedia) and b) floating point (Floating-point arithmetic - Wikipedia).

OSC time (= NTP time stamps) is a 64-bit fixed point format with a scaling factor of 1/(2^32).

@Sciss uses a fixed point format with a scaling factor of 1/(14112000)

So both are fixed point formats, the difference is just the choice of the scaling factor:

NTP uses 1/(2^32) because it is a natural choice when working in a binary number system (e.g. you can use bit shifting + masking to get the integer and fractional parts).

@Sciss uses 1/14112000 because it is the LCM of all common sample rates, so the duration of a single sample can be represented as an integer. This means you can work with time on a sample level without rounding errors or precision problems.
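For illustration (not from the original post), here is that property in sclang: with 14112000 ticks per second, one sample at each common rate is a whole number of ticks.

// ticks per sample at the common rates, given 14112000 ticks per second
[44100, 48000, 88200, 96000].collect { |sr| 14112000 div: sr };
// -> [ 320, 294, 160, 147 ]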

However, fixed point arithmetic has a big problem: it’s very easy to lose precision or even cause overflow/underflow in intermediate computations. Integer overflow is especially bad since it will yield a completely different number. In C/C++, signed integer overflow is even undefined behavior. Therefore fixed point arithmetic requires special care by the programmer.
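As a small sclang illustration of how little headroom a 32-bit fixed point format would leave (computed in floats here, so the example itself doesn't overflow):

// five minutes expressed in 1/14112000-second ticks
300 * 14112000.0;    // 4233600000.0 -> already beyond 2^31 - 1 = 2147483647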

Pd has chosen a clever compromise: it represents time as double precision floating point numbers, but with a scaling factor of 1/14112000! This means it can accurately represent sample durations while avoiding the dangers of fixed point arithmetic. Also, Pd uses the fractional part to support sub-sample accurate messaging, e.g. in the [vline~] object. (Sub-sample accuracy deteriorates as the number of seconds increases because there will be fewer available bits in the mantissa to represent the fractional part.)

The obvious disadvantage of doubles over 64-bit integers is the reduced integer range. With a scaling factor of 1/14112000 a 64-bit integer can hold 653,583,616,780 seconds (ca. 20,000 years) while a double can only hold 319,132,653 seconds (ca. 10 years) sample-accurately. (It can, of course, hold a larger number of seconds, but the “pseudo-fractional” part [= the number of fractional samples] will suffer because there are not enough bits in the mantissa.)


On a completely different note, NTP time stamps are unsigned, so they are generally only suited for representing time points (assuming they are always positive) but they can’t represent durations (which can be negative). So in practice it is not even an option.


Some more trivia:

One interesting property of using doubles as “pseudo fixed-point” numbers is that you don’t have to use the actual LCM as the scaling factor. Instead, you can take the LCM and divide it by any power of 2. For example, you can just as well use 14112000 / 256 (= 55125) or even 14112000 / 1048576 (= 13.458251953125).

Now, when you calculate the duration of a single sample in such smaller time units, you might get fractional results. However, this is not a problem because the result can always be represented precisely. You can add/subtract such values and still not run into precision problems or cumulative errors.

Two examples:

  1. scaling factor: 1/14112000; duration of 1 sample @ 44100 Hz: 14112000 / 44100 = 320

  2. scaling factor: 1/55125; duration of 1 sample @ 44100 Hz: 55125 / 44100 = 1.25

If you type those two values into the IEEE-754 Floating Point Converter, you will see that the bit pattern in the mantissa actually stays the same; only the exponent changes! This is true for any multiple of 320 or 1.25, respectively. (This won’t surprise anyone familiar with the IEEE 754 floating point format.)
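A minimal sclang check of that relationship (not in the original post): 320 and 1.25, like the two scaling factors themselves, differ only by a power of two, which is exactly why the mantissa bits are identical.

320 / 1.25;      // 256.0, i.e. 2 ** 8
55125 * 256;     // 14112000 -> the two scaling factors differ by the same factor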

The range of possible values that can be represented sample-accurately with doubles is really independent of the actual scaling factor as long as the ratio between the scaling factor and the LCM of the sampling rates is a power of two. (This makes sense because it is really only about the size of the mantissa.) This means that you don’t really have to care about sampling rates that are just factors of 2 apart, like 176400 or 192000; they just work out of the box.

The same is not true when using integers: you have to set the scaling factor large enough so that you don’t get a fractional result for a single sample duration at any sampling rate. This means that you have to pay upfront for the mere possibility of higher sampling rates. (Larger scaling factors lead to a reduced range.) On the other hand, 64-bit integers have such a large range that it doesn’t really matter in practice, I guess.


@Spacechild1 Killin’ it

I do understand that now. However, I guess the conversion is done only as the last step of the processing chain, because doing it per operation would be too expensive; and if the processing chain works in floating point, the rounding error will be packed into the final result when the conversion is done. But I know nothing about the implementation details.