Server clock vs. Language clock

What I find striking in your example, though, is the exact period of 4 seconds!

Yes, and this also is the case if I let it run for longer periods of time.

Here I am testing rates which are all related: [1, 2, 4, 8, 16].

I have never used Supernova before. What is the easiest way of testing this in Supernova?

There’s a long history of server-language timing issue discussions, some of them going into elusive details. Maybe you find something helpful in these old threads (also see the update from May '21)

Server.supernova;
s.boot;
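
(For context: Server.supernova just switches which server program sclang will boot, so the test code itself runs unchanged; Server.scsynth followed by s.reboot switches back to scsynth.)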

Would it be technically possible to ever have synchronized server-language clocks, maybe in SC4?

Thanks for the heads up, I will look into these old threads.

Would it be technically possible to ever have synchronized server-language clocks, maybe in SC4?

Yes, I think it would be possible. See the following discussion: Keeping sclang and scsynth in hard sync - #7 by Spacechild1

The principles outlined in the discussion could, of course, be applied to any new computer music system. (Personally, I wouldn’t bet on “SC4” to ever happen.)


I just tested the code with Supernova - basically the same result. There is still a periodicity of exactly 4 seconds for rate = 1, also when tested over a period of 60 seconds. The other rates also show a similar periodicity, but with occasional outliers.

Turns out that macOS vs. Windows was a red herring.

I have just tested again, but with an ASIO driver and a very small block size (64 samples). Now I also get the same periodicity:

To be clear: it is expected that there is some difference between language and server time. The only thing that does surprise me is that ominous 4 second period 🧐

To be clear: it is expected that there is some difference between language and server time. The only thing that does surprise me is that ominous 4 second period

Yes, but isn’t it also surprising that any rate shows periodicity and that periodicities for different rates don’t align?

All plots do seem to have the same period of approx. 4 seconds. They only differ in their phase, which is not surprising as the Synths are just created via messages and not scheduled as bundles.

More specifically:

x = rates.collect{|rate, i| Synth(\time, [rate: rate, id: i]) };

This does not guarantee that the Synths are started at the same time, but the following would:

s.bind {
    x = rates.collect{|rate, i| Synth(\time, [rate: rate, id: i]) };
};
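
(Under the hood, s.bind collects the messages generated inside the function into a single bundle stamped with s.latency, so the server starts all the Synths at the same logical time.)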

All plots do seem to have the same period of approx. 4 seconds. They only differ in their phase, which is not surprising as the Synths are just created via messages and not scheduled as bundles.

Yes, that was an oversight on my part. I just tested the opposite - sending triggers from a pattern:

(
s.latency = 0.2;
Pdef(\test).clear;
t = TempoClock(1);
x = Synth(\serverTime); // \serverTime SynthDef defined earlier in the thread (not shown here)
Pdef(\test,
	Pbind(
		\type, \set,
		\id, x,
		\args, #[\serverTime],
		\serverTime, Pseq([1], inf),
		\dur, Pseq([1], inf),
)).play(t, quant: 1);

l = List.new;
o = OSCdef(\o, {|msg|
	l.add(msg[3].postln - msg[3].round);
}, '/reply');
)

(
Pdef(\test).clear;
o.free;
l.array.plot
)
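
The \serverTime SynthDef itself is defined earlier in the thread and not shown here. Purely as an illustrative guess at its shape (names and arguments are assumptions, not the original code), something along these lines would reply with the server's elapsed time each time the control is set:

// hypothetical sketch only (not the actual SynthDef from the thread)
SynthDef(\serverTime, {
	// \serverTime is a trigger-rate control, so every \set fires SendReply once,
	// sending back Sweep's output (seconds since the synth started) on '/reply'
	SendReply.kr(\serverTime.tr(0), '/reply', Sweep.kr(0, 1));
}).add;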

The time stamps from the server are really steady, so it seems that there are no timing issues with language-to-server communication, only with server-to-language.

Again: I have observed considerable timing inaccuracy in sclang when receiving UDP packets. I don’t think you can gather any accurate data about server-client sync by measuring the timing of messages coming into sclang, because you have no guarantee of good timing on the incoming messages.

hjh


so it seems that there are no timing issues with language-to-server communication

The Event stream player schedules its messages in advance as OSC bundles, exactly to enable precise timing.
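
As a minimal illustration of the difference (the \ping SynthDef name here is just a placeholder, and the default server s is assumed to be booted):

// plain message: the server acts on it as soon as it arrives
s.sendMsg("/s_new", \ping, s.nextNodeID, 0, 1);

// timestamped bundle: the server waits until (logical now + s.latency) before
// acting on it, which is what the Event stream player does for every event
s.sendBundle(s.latency, ["/s_new", \ping, s.nextNodeID, 0, 1]);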

only with server-to-language.

Which is expected because these are sent as plain messages.
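
A reply like /tr or SendReply therefore carries no timestamp; the only timing the language gets is the moment the message happens to arrive, e.g.:

// the second argument of an OSCdef function is the arrival time in seconds;
// for plain messages it reflects audio, network and language jitter combined
OSCdef(\arrival, { |msg, time|
	[time, msg].postln;
}, '/reply');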

It would be an interesting idea to support timestamped OSC bundles, though. In fact, I’m planning a plugin API extension – for entirely unrelated reasons – that would allow sending arbitrary OSC messages back to the client; this would also include bundles! This could be quite useful for sending data back to the client without losing all the timing information.


That would be amazing!

You are right, I misspoke when counting the packet sending as part of the server-to-client timing.

One of the first projects of miSCellaneous_lib was dedicated to a special case of this problem. I wanted to have synth values as control data in Pbinds, so I designed a mechanism that introduces some extra latency for the response and allows synchronisation – though obviously not sample-exact, which is also not necessary for control. This framework was made obsolete by the introduction of synchronous buses, and I haven’t used it since, but it still works and might be useful in some contexts. See “Guide to HS and HSpar” as a starting point.
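
(For readers who haven’t used them: synchronous buses let the language read a control-bus value directly through shared memory, with no OSC round trip. A minimal illustration, assuming a server on the same machine with the shared-memory interface available:)

b = Bus.control(s, 1);
x = { Out.kr(b, LFNoise0.kr(10)) }.play;
b.getSynchronous.postln;  // reads the current bus value immediately, no round trip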

When I was working on my OSC sync clock quark (which I ended up abandoning because of these findings), I found that an “as soon as possible” ping roundtrip between two machines on the same LAN completed very quickly (on the order of 0.5 ms), but that the sync messages from the leader to follower clocks (which were timestamped, by the way) exhibited timing jitter greater than the roundtrip time, but oddly, not when the Mac was the receiver 🤨

  • Mac or Windows sending, Linux receiving: Weird pattern of jitter. (Also if Windows and Linux switch places.)
  • Anything sending, Mac receiving: Significantly less jitter.

Sync messages are going out to a broadcast address, which may affect the behavior – but it was puzzling to me that the only variable that mattered was the OS of the receiver.

This may not be immediately relevant to this specific question, but I guess, if we do get timestamps for server → client messages, there’s a chance that timing may still not be super accurate on Linux or Windows. But this was all years ago and hazy in my memory.

OSCBundle is supposed to support something like this, but the implementation has a couple of mistakes. My ddwPlug quark fixes that with a method “sendOnTime” – it’s pretty neat: it can send SynthDefs immediately, sync to the server, and then adjust the latency so that the synth is exactly on time.

hjh

I went on to test the server time stamps and found the same oddity as when testing the language clock in my first example: if latency is non-nil, the server time stamps are sample locked, but when latency is set to nil, the same 4-second periodicity appears in the server time stamps and is also seen when inspecting audio produced by quantized patterns - values are off by pretty much the same amount within a 4-second cycle. I would have assumed that latency = nil would result in a more randomized offset from the quantized value, but the values are very consistent.

So then I thought I could use this information to offset the delta times in a pattern with Prout (like the swingify Prout from the help files) to get better timing with latency = nil. The code I am working on forces me to work with latency set to nil, since the extra latency added by e.g. latency = 0.05 (which has been the smallest I could go in the past without getting late messages) breaks the real-time feel.

For this idea to work for e.g. a pattern quantized to 16th notes, the deviation between the server time stamp and the quantized value for any given 16th note in the current 4-second cycle should be the same as, or very close to, that of the previous 4-second cycle.

Below is the test, which on my setup shows that the majority (something like 95%) of 16th notes have the same deviation as in the previous cycle.

The test was done on a first-gen M1, SC 3.13.0, macOS 13.6.4, Apollo Solo interface, sr = 48000. I would be very curious to know if other SC users get similar results.

There are some odd cases where ‘this offset’ significantly differs from ‘previous offset’ (see post window). I don’t really know what to make of these cases.

(
s.latency = nil;
Pdef.removeAll;
~beats = 4;
~tatum = 1/~beats;
l = Array.newFrom(0!(~beats * 4));
t = TempoClock(1);
x = Synth(\time); // \time SynthDef from the first example in the thread (not shown here)

Pdef(\analyze,
	Pbind(
		\type, \set,
		\id, x,
		\args, #[\trig],
		\trig, 1,
		\dur, Pseq([0.25], inf),
)).play(t, quant: 1);

OSCdef(\o, {|msg| 
	// feedback starts after 4 seconds of play
	var i = (t.beats.round(~tatum) * ~beats).asInteger%(~beats * 4);
	var val =  msg[3] - msg[3].round(~tatum); 
	if (t.beats >= 4) 
	{ 
		var dev = (val - l[i]);
		dev.debug('Deviation from previous cycle');
		if (dev.abs > 0.001) 
		{ 
			val.debug('This offset'); 
			l[i].debug('Previous offset');
			i.debug('index of 16th note in 4-second cycle')
		}
	};
	l[i] = msg[3] - msg[3].round(~tatum); 
}, '/time');
)

At the risk of stating the obvious: it just isn’t possible to get predictable timing – without scheduling messages in advance as timestamped bundles – due to several factors:

  1. quantization effects due to the audio hardware blocksize
  2. jitter in the audio callback
  3. jitter in the language
  4. network jitter (on localhost, that’s actually the least problem!)

Make sure that you understand each of these factors!
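
For factor 1, the granularity is easy to put a number on. A small sketch (assumes the default server s is booted; hardwareBufferSize may be nil if you never set it explicitly, so the 512 fallback is just an assumption for illustration):

// with latency = nil, a message can only take effect at the next hardware
// buffer boundary, so its timing is quantized to roughly this duration
(
var bufSize = s.options.hardwareBufferSize ? 512;
var bufDur = bufSize / s.sampleRate;
("hardware buffer:" + bufSize + "samples =" + (bufDur * 1000).round(0.01) + "ms").postln;
)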


Minimum usable latency will depend on the soundcard’s hardware buffer size. Reducing the HW buffer should also reduce the lower limit for messaging latency (although I haven’t tested to see what the practical relationship is).

EDIT: Did test. With HW = 2048 samples (at 48 kHz), 2048/48000 = 42.6667 ms, and I could go to about s.latency = 0.067 but no lower. With HW = 512, 512/48000 = 10.6667 ms, and s.latency = 0.0175 was OK 👍
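
In code that relationship is simply the following (the concrete numbers are the ones reported above; your interface and driver may allow different values):

s.options.hardwareBufferSize = 512;  // request a 512-sample HW buffer (~10.7 ms at 48 kHz)
s.reboot;                            // the new buffer size takes effect on (re)boot
s.latency = 0.0175;                  // the lowest setting reported as OK above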

hjh
