Server clock vs. Language clock

The internal server might help because it doesn’t go through UDP, but I didn’t try that.

What is the internal server? Isn’t it just the default server in s?

It’s a server that runs within the sclang process (loaded like a library rather than run as a fully separate process). Because it’s in the same process, messages can be sent by direct function calls instead of going through a network protocol. (It doesn’t even open a UDP port, which is why the IDE can’t report its status.)

Server.default = Server.internal;

hjh

I tried running the above line, rebooting the server, and running the initial code again. The result was pretty much the same, except for one ‘rogue value’ in the middle plot.

Here on Windows, the resulting plot looks pretty random (as expected).

Since you are on macOS, I guess what you are seeing is the effect of clock drift. Sclang runs on the system clock, but the server runs on the audio clock. These two clocks do not run at the same speed. To enable precise OSC bundle scheduling, the server must somehow estimate the current system time for every control period.

On macOS, scsynth periodically resyncs the time every 20 seconds (see syncOSCOffsetWithTimeOfDay in server/scsynth/SC_CoreAudio.cpp). This means that the two clocks gradually drift apart for 20 seconds before being readjusted.

On Windows and Linux, however, the OSC time is continuously estimated with a time DLL (delay-locked loop) filter. This means that there is no gradual drift.
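For intuition, here is a minimal sketch of such a time DLL in sclang, loosely after Fons Adriaensen’s paper “Using a DLL to filter time” (illustrative only; the names and constants are not taken from the scsynth/supernova sources):

(
// Sketch of a second-order time DLL: predict the system time of the
// next control period and correct the prediction with a feedback loop.
var nominalPeriod = 64 / 48000;     // one control period in seconds
var bandwidth = 0.1;                // loop bandwidth in Hz
var omega = 2pi * bandwidth * nominalPeriod;
var b = 2.sqrt * omega;             // proportional gain
var c = omega * omega;              // integral gain
var period = nominalPeriod;
var predicted = Main.elapsedTime + period;
~timeFilter = { |measuredTime|
	var error = measuredTime - predicted;          // prediction error
	predicted = predicted + period + (b * error);  // correct the next prediction
	period = period + (c * error);                 // adapt the period estimate
	predicted                                      // smoothed time estimate
};
)

Each call feeds in a fresh system-clock measurement and returns a smoothed estimate; since the correction is continuous, there is no 20-second sawtooth as with scsynth’s periodic resync on macOS.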

You can try running the same code on Supernova, which uses a time DLL filter on all platforms (unless, of course, useSystemClock is set to false). Do you still get similar results or do they look more like mine?

What I find striking in your example, though, is the exact period of 4 seconds!

What I find striking in your example, though, is the exact period of 4 seconds!

Yes, and this is also the case if I let it run for longer periods of time.

Here I am testing rates that are all related by factors of two: [1, 2, 4, 8, 16].

I have never used Supernova before. What is the easiest way of testing this in Supernova?

There’s a long history of discussions about server-language timing issues, some of them going into elusive details. Maybe you’ll find something helpful in these old threads (also see the update from May ’21).

// switch the default server program to supernova, then boot it
Server.supernova;
s.boot;

Would it be technically possible to ever have synchronized server-language clocks, maybe in SC4?

Thanks for the heads-up, I will look into these old threads.

Would it be technically possible to ever have synchronized server-language clocks, maybe in SC4?

Yes, I think it would be possible. See the following discussion: Keeping sclang and scsynth in hard sync - #7 by Spacechild1

The principles outlined in the discussion could, of course, be applied to any new computer music system. (Personally, I wouldn’t bet on “SC4” to ever happen.)

I just tested the code with Supernova: basically the same result. There is still a periodicity of exactly 4 seconds for rate = 1, also when tested over a period of 60 seconds. The other rates also show a similar periodicity, but with occasional outliers.

Turns out that macOS vs. Windows was a red herring.

I have just tested again, but with an ASIO driver and a very small block size (64 samples). Now I also get the same periodicity.

To be clear: it is expected that there is some difference between language and server time. The only thing that does surprise me is that ominous 4 second period 🧐

To be clear: it is expected that there is some difference between language and server time. The only thing that does surprise me is that ominous 4 second period

Yes, but isn’t it also surprising that any rate shows periodicity and that periodicities for different rates don’t align?

All plots do seem to have the same period of approx. 4 seconds. They only differ in their phase, which is not surprising as the Synths are just created via messages and not scheduled as bundles.

More specifically:

x = rates.collect{|rate, i| Synth(\time, [rate: rate, id: i]) };

This does not guarantee that the Synths are started at the same time, but the following would:

s.bind {
    // wraps the generated messages in one timestamped bundle,
    // so all Synths start at the same logical time
    x = rates.collect{|rate, i| Synth(\time, [rate: rate, id: i]) };
};
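For reference, s.bind { ... } is shorthand for the following: the messages generated inside the function are collected into a single bundle timestamped s.latency seconds ahead.

s.makeBundle(s.latency, {
    x = rates.collect{|rate, i| Synth(\time, [rate: rate, id: i]) };
});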

All plots do seem to have the same period of approx. 4 seconds. They only differ in their phase, which is not surprising as the Synths are just created via messages and not scheduled as bundles.

Yes, that was an oversight on my part. I just tested the opposite direction, sending triggers from a pattern:

(
s.latency = 0.2;
Pdef(\test).clear;
t = TempoClock(1);
x = Synth(\serverTime);
Pdef(\test,
	Pbind(
		\type, \set,
		\id, x,
		\args, #[\serverTime],
		\serverTime, Pseq([1], inf),
		\dur, Pseq([1], inf)
	)
).play(t, quant: 1);

l = List.new;
o = OSCdef(\o, { |msg|
	// deviation of the reported server time from the nearest beat
	l.add(msg[3].postln - msg[3].round);
}, '/reply');
)

(
Pdef(\test).clear;
o.free;
l.array.plot
)

The timestamps from the server are really steady, so it seems that there are no timing issues with language-to-server communication, only with server-to-language.

Again: I have observed considerable timing inaccuracy in sclang when receiving UDP packets. I don’t think you can gather accurate data about server-client sync by measuring the timing of messages coming into sclang, because you have no guarantee of good timing on the incoming messages.
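As a quick check of that receive-side jitter (a sketch, with made-up names): let a synth send a trigger at exactly 10 Hz and log how much sclang’s receive times deviate from the ideal 100 ms spacing.

(
var last;
x = { SendTrig.kr(Impulse.kr(10)) }.play;    // a trigger exactly every 0.1 s
OSCdef(\jitter, { |msg, time|
	last !? { (time - last - 0.1).postln };  // deviation from 100 ms
	last = time;
}, '/tr');
)

// clean up when done:
// x.free; OSCdef(\jitter).free;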

hjh

so it seems that there are no timing issues with language-to-server communication

The Event stream player schedules its messages in advance as OSC bundles, exactly to enable precise timing.

only with server-to-language.

Which is expected because these are sent as plain messages.
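The difference is easy to see at the sending level (a sketch; x stands for any running Synth):

s.sendMsg("/n_set", x.nodeID, \amp, 0.1);            // plain message: applied as soon as it arrives
s.sendBundle(0.2, ["/n_set", x.nodeID, \amp, 0.1]);  // bundle: timestamped 0.2 s ahead, scheduled precisely by the server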

It would be an interesting idea to support timestamped OSC bundles, though. In fact, I’m planning a plugin API extension – for entirely unrelated reasons – that would allow sending arbitrary OSC messages back to the client; this would also include bundles! This could be quite useful for sending data back to the client without losing all the timing information.

That would be amazing!

You are right, I misspoke when counting the packet sending as part of the server-to-client timing.

One of the first projects of miSCellaneous_lib was dedicated to a special case of this problem. I wanted to have synth values as control data in Pbinds. Therefore, I designed a mechanism that introduces some extra latency for the response and allows a synchronisation – though obviously not sample-exact, and this is also not necessary for control. I found that this framework has become obsolete by the invention of synchronous buses and didn’t use it since then. However, it still works and might be useful in some contexts. See “Guide to HS and HSpar” as a starting point.