OSCFunc latency testing

I am a little surprised by the results of running this test code:

(
SynthDef(\onsets, {
	var time = Sweep.ar(Impulse.ar(0));
	SendReply.ar(Impulse.ar(1), '/reply', time);
}).add;
);

(
~initTime = Main.elapsedTime;
o = OSCFunc({ |msg, timeStamp|
	var time = timeStamp - ~initTime;
	var time2 = msg[3];
	("oscFuncTime" + time).postln;
	("oscFuncTime deviation in ms" + ((time - time.round(1)) * 1000).round(0.001)).postln;
	("SynthDefTime" + time2).postln;
	("SynthDefTime deviation in ms" + ((time2 - time2.round(1)) * 1000).round(0.001)).postln;
	"".postln;
},'/reply', s.addr);
x = Synth(\onsets);
)

Here is what I found:

  1. “SynthDefTime deviation in ms”, stemming from Sweep.ar, seems to be running 1 sample late. Is this expected? I am not concerned with sample accuracy at this point, just wondering.

  2. “oscFuncTime deviation in ms”, stemming from the OSCFunc timestamp, is producing both positive (expected) and negative values (unexpected). The range of deviations scales inversely with the current samplerate: with a samplerate of 44100 I get values varying by upwards of 10 ms, e.g. in the -2 to 10 ms range, and if I double the samplerate, the deviations are cut in half. The deviations also seem to have some periodicity (see the post window when running the code).

Can other users verify the behaviour described above?

How can the negative values be explained? How can the timestamp happen before the trigger?

Why are the deviations so relatively large and why do they seem periodic?

The testing leads me to the conclusion that you cannot rely on the time stamp of OSCFunc for precise timing applications; rather, one has to get the time from the SynthDef using Sweep.ar or similar UGens.

Any thoughts?

SendReply only sends single OSC messages, not bundles. The difference is that bundles have a time stamp, but individual messages do not. So the time you’re seeing in timeStamp is really the time at which the OSC message is processed by the language. The message send/receive process with SendReply, I believe, looks something like this:

  1. The SendReply UGen is triggered, the input value is captured, and a job is queued to send the OSC message on a background thread.
  2. The background thread picks up the job from #1, constructs the (non-timestamped) OSC message and sends it.
  3. A TCP or UDP thread in the language wakes up and sees a new message - a timestamp is added to the message at this point.
  4. The TCP/UDP thread waits for the language to finish anything it’s doing (waits to lock a mutex).
  5. Once it can lock the language, the TCP/UDP thread constructs an sclang object with the timestamp etc. and executes the OSCFunc callback.

There are async operations at every step of the above chain, so any jitter you see is accumulated between all these steps.
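You can see the timeStamp difference directly by sending sclang both a plain message and a timestamped bundle. This is a minimal sketch of my own (not part of the test above); if I understand the incoming-OSC handling correctly, the bundled message’s timeStamp should read roughly one second later than the plain one:

(
var addr = NetAddr.localAddr;   // sclang's own address, so we receive what we send

OSCdef(\stampTest, { |msg, timeStamp|
	("got" + msg[1] + "- timeStamp:" + timeStamp).postln;
}, '/stampTest');

// plain message: timeStamp is just the time at which the language processes it
addr.sendMsg('/stampTest', \plain);

// bundle with a time tag 1 second in the future: timeStamp carries the time tag
addr.sendBundle(1.0, ['/stampTest', \bundled]);
)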

To be totally honest, the 10-30 ms values that I see seem shockingly large to me - but I realize there may be a small measurement mistake. You’re capturing your start time before you even send the Synth; once you send the Synth, it is only scheduled to start some time later (affected by the server latency, audio block size, and other factors). You can get better results by reading the start time only when the first packet arrives:

(
~initTime = nil;
o = OSCFunc({ |msg, timeStamp|
	var time = timeStamp;
	var time2 = msg[3];
	
	~initTime = ~initTime ?? { Main.elapsedTime };
	time = time - ~initTime;
	
	("oscFuncTime" + time).postln;
	("oscFuncTime deviation in ms" + ((time - time.round(1)) * 1000).round(0.001)).postln;
	("SynthDefTime" + time2).postln;
	("SynthDefTime deviation in ms" + ((time2 - time2.round(1)) * 1000).round(0.001)).postln;
	"".postln;
},'/reply', s.addr);
x = Synth(\onsets);
)

With this, I see jitter of around ±5 ms. This still seems very large, to the point of it possibly being a bug… but much better. Negative values make more sense in this case, because they just indicate that the initial message was slightly later than the median.


Thanks for the insights. I ran your code several times on my computer and it only produces negative “oscFuncTime deviation in ms” values, which still baffles me - no positive values at all across several runs. The jitter is still in the 10+ ms range, from approximately -12 to -1 ms.

One thing to be aware of here is that you are measuring the server’s sample clock against the language’s system clock. These are almost certainly not running at the same rate.

I ran scztt’s modified test (with one exception – I replaced Main.elapsedTime [physical time] with SystemClock.seconds [logical time]) and I got negative values consistently increasing in magnitude (becoming more negative). This would be explained by the server running fast. The time between messages would then be less than 1s, so time would fall further and further behind the rounded value.

Then:

s.actualSampleRate
→ 44311.018217177

That’s quite fast, actually.
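To put a rough number on that reading (back-of-envelope, using the value above): one “server second” of 44100 samples at an actual rate of ~44311 Hz takes less than a real second, so every reply lands a few more milliseconds early against the language clock.

(
// back-of-envelope: wall-clock duration of one "server second" (44100 samples)
// when the hardware actually runs at ~44311 Hz
var nominal = 44100, actual = 44311.018217177;
var wallPerServerSecond = nominal / actual;
("wall-clock time per server second:" + wallPerServerSecond + "s").postln;
("shortfall per message:" + ((1 - wallPerServerSecond) * 1000).round(0.01) + "ms").postln;
// -> about 4.76 ms short per message, hence the steadily growing negative deviation
)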

There’s not a good way to control this. The server has to follow the sample clock. The language doesn’t have access to the sample clock (there was some talk a while back about how to have a sample-synced language clock, but I’m not sure of the current status).

In that sense, your original conclusion was correct – you cannot measure anything meaningful about server timing from the incoming OSCFunc time.

This could depend on OS.

A few years ago, before LinkClock stabilized, I was working on OSC syncing clocks. It almost worked. It worked very well under some limited circumstances, but when I was testing outside those limitations, I observed a similar quite weird pattern. When I would measure OSC round-trip between the follower clock and the leader, even on different machines, I got very small values, less than 1 ms. But the timing jitter coming from, for instance, a Windows machine into my Linux machine was 10, 20, 30 times larger – and also showed this weird modulo pattern, as if message timing was somehow being quantized.

At one point, I had some time in my college’s Mac media lab. I tested Mac versus Mac and… timing was near perfect. Then I tested Mac (sending) and Linux (receiving) and saw exactly the same pattern as with Windows (sending) and Linux (receiving).

That’s when I had to give up the project… because, if you see the same bad behavior with one receiving environment and two different sending environments, then the consistent factor in the bad behavior is the receiver. Incoming OSC timing in Linux seems unreliable in some way that I was never able to explain (and which is not consistent across all OSes, as Mac seemed fine).

That’s a long way of saying – the behavior you’re observing may simply be an obscure, difficult to analyze and probably unsolvable bug.

hjh


Thanks @jamshark70. The difference between the ideal and actual sample rate you are describing is BIG. On my machine, right now (OSX, M1):

s.sampleRate → 44100.0
s.actualSampleRate → 44100.333685179

That is obviously much better, although not perfect. In your experience, will s.actualSampleRate change a lot during a session, or is it pretty steady? If the latter, would it make sense to adjust for the difference by scaling by s.sampleRate/s.actualSampleRate, or is that just another huge can of worms?

I wonder if there are similar issues with MIDIFunc. When I get some more time I will test what values are picked up from a DAW sending MIDI note-on messages at regular intervals. I understand that the clock interpretation in e.g. Logic X and SC is most likely different, but it could still be interesting to examine the amount of drift between the two clock interpretations, and also whether the drift is consistent across several tests. Constant drift seems more manageable than erratic fluctuations.

That 44.3 kHz reading was pretty extreme – this is the only time I’ve ever seen such a far-out reading. It was on a built-in sound card (i.e. cheap hardware) and it did settle later back into a much closer value.

I’d say +/- 10 Hz is pretty typical for a ThinkPad built-in soundcard. I believe USB interfaces would be more stable.

I wouldn’t bother going down this road. You have no assurance that the transport time for OSC messages will be consistent.

I just thought to try a slightly simpler test – just differencing the incoming time values. The earlier test assumed that 1.0 second means the same thing to the language and the server, but we can forget about that and instead find 1) the average time between OSC messages and 2) the amount of fluctuation.

(
var lastTime = nil;

a = List.new;

OSCdef(\x, { |msg, time|
	if(lastTime.isNil) {
		lastTime = time
	} {
		a.add(time - lastTime);
		lastTime = time;
		if(a.size == 200) {
			z.free;
			OSCdef(\x).free;
			"done".postln;
		};
	};
}, '/time');

z = {
	var time = Sweep.ar;
	// '4' because it evenly divides 44100
	// integer number of samples between triggers
	SendReply.ar(Impulse.ar(4), '/time', time);
	Silent.ar(1)
}.play;
)

a = a.array;
a.plot;

On this machine, the plot shows a pattern of 3-4 values slightly longer than true, followed by one value that is much shorter than true. This might be different on other machines, or with different soundcards.

// some stats: start with the obvious = average
m = a.mean;
-> 0.249975831345  // quite close!

[a.maxItem - m, a.minItem - m]
-> [ 0.0068162446550105, -0.01864244534504 ]

// standard deviation
// (not using the '-1' correction bc this isn't a constructed random sample)
(a.sum { |y| (y - m).squared } / a.size).sqrt
-> 0.0098690657995142

So the maximum error in this test is (a.minItem absdif: m) / m = about 7.46%, which is kind of a lot. A 9 ms standard deviation also doesn’t inspire confidence.

I recall that you were trying to grab Onsets times in a buffer – just use the server’s time values. OSCFunc time will not be reliable for this, not at all.
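Something along these lines would do it - a minimal sketch (the SoundIn source, FFT size and buffer length are just placeholders), capturing the Sweep time into a buffer on each detected onset, entirely on the server:

// run this first: a buffer to hold up to 100 onset times
b = Buffer.alloc(s, 100);

(
y = {
	var sig = SoundIn.ar(0);
	var chain = FFT(LocalBuf(512), sig);
	var trig = Onsets.kr(chain);
	var time = Sweep.kr;                    // seconds since the synth started
	var index = PulseCount.kr(trig) - 1;    // advances by one per onset
	var latched = Latch.kr(time, trig);     // hold the time measured at the onset
	BufWr.kr(latched, b, index.clip(0, 99), loop: 0);
	Silent.ar(1)
}.play;
)

// later: fetch the collected onset times back into the language
b.loadToFloatArray(action: { |arr| ~onsetTimes = arr; arr.postln });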

I would expect jitter here too.

hjh

I ran your code and got similar but slightly better results: a standard deviation of around 5-6 ms. As far as I can tell, the variable ‘time’ inside the z function never comes into play in your code, since the ‘time’ variable inside the OSCdef takes its value from the OSC timestamp and not from msg[3], where the Sweep.ar time is located. Below I am using Sweep.ar(Impulse.ar(0)) instead of just Sweep.ar. When I initially tested the Sweep UGen I got slightly better results using Impulse.ar(0) - something I don’t really understand. In any case the difference was slight: Sweep.ar(Impulse.ar(0)) would be 1 sample delayed, and Sweep.ar about 3 or 4 samples delayed.

When testing with msg[3] (the time from Sweep.ar), I get results very close to ‘perfect’: a standard deviation of 1.5996892845977e-06 s, i.e. less than 1 sample at 44100.

(
var lastTime = nil;

a = List.new;

OSCdef(\x, { |msg, time|
	if(lastTime.isNil) {
		lastTime = msg[3]
	} {
		a.add(msg[3] - lastTime);
		lastTime = msg[3];		
		if(a.size == 200) {
			z.free;
			OSCdef(\x).free;
			"done".postln;
		};
	};
}, '/time');

z = {
	var time = Sweep.ar(Impulse.ar(0));
	// '4' because it evenly divides 44100
	// integer number of samples between triggers
	SendReply.ar(Impulse.ar(4), '/time', time);
	Silent.ar(1)
}.play;
)

m = a.mean;
-> 0.25000011444092
[a.maxItem - m, a.minItem - m]
-> [ 2.2565126419072e-05, -1.44243240352e-07 ]

// standard deviation
// (not using the '-1' correction bc this isn't a constructed random sample)
(a.sum { |y| (y - m).squared } / a.size).sqrt
-> 1.5996892845977e-06

That is a correct observation – but we are measuring network timing here. I had thought it was fairly safe to assume that the server’s time measurement between impulses would naturally be accurate, because all server timing is based on the sample clock, so there would not be much need to check it (and your results confirm that, in fact, there was nothing to worry about with the Sweep values, at least not with the time deltas).

I’m puzzled by the few samples’ delay but for myself, I don’t see that as a major issue.

hjh


I get:

  • With the trigger, it starts at 0 (“-0”) and then subsequent samples are one sample’s duration too large.

    • This is explained by the trigger causing a reset-to-0 on the first sample, but then later samples are calculated by the logic of first increment, then output. Probably output-then-increment would be better.
  • Without the trigger, increment-then-output logic causes the first output sample to be one sample’s duration too large. Then the rest are two samples too large!

So it seems there’s a double-counting somewhere at Sweep init, but I don’t see where in the source code this is happening. (Also, it’s probably better to set ZXP(out) first, before updating the unit, but I’m not sure if that’s always feasible – and it didn’t fix the double counting.)
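(For reference, one way to inspect this - a quick sketch, not the exact test I ran - is to record Sweep’s first few output samples into a small buffer and read them back in units of samples:)

// run this first
b = Buffer.alloc(s, 8);

(
{
	var sweep = Sweep.ar(Impulse.ar(0));   // or plain Sweep.ar for the untriggered case
	RecordBuf.ar(sweep, b, loop: 0, doneAction: 2);
	Silent.ar(1)
}.play;
)

// nominally 0, 1, 2, ... - any offset shows the off-by-one(s) directly
b.loadToFloatArray(action: { |arr| (arr * s.sampleRate).postln });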

(FWIW this is not the only off-by-one bug in the UGen suite – so one rational option is to just accept it and move on.)

hjh

Being 1 or 2 samples off does not bother me too much at this point, but it is good to understand why things like this can happen inside a UGen. I tested MIDIdef.noteOn receiving 200 sixteenth notes at 60 BPM from Ableton Live over the IAC bus on OS X - the equivalent of your OSC test. I was running on the internal clock of my M1 laptop. The result was very pleasing, using the same calculations as before:

a.mean → 0.249997928335
max/min deviation → [ 0.00013715466480335, -0.00013330333490102 ]
standard deviation → 5.8836234046698e-05
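
The measurement itself can be set up with a MIDIdef analogous to the OSCdef test above - a minimal sketch, assuming the DAW is already sending note-ons on the IAC bus:

(
var lastTime = nil;

a = List.new;

MIDIClient.init;
MIDIIn.connectAll;

MIDIdef.noteOn(\deltaTest, { |vel, num, chan, src|
	var now = SystemClock.seconds;
	if(lastTime.notNil) { a.add(now - lastTime) };
	lastTime = now;
	if(a.size == 200) {
		MIDIdef(\deltaTest).free;
		"done".postln;
	};
});
)

// afterwards, the same stats as before:
a = a.array;
m = a.mean;
[a.maxItem - m, a.minItem - m];
(a.sum { |y| (y - m).squared } / a.size).sqrt;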