LinkClock.tempo differing on clients? Timing when using a common remote server?

Hi list,

I just discovered that LinkClock.tempo differs slightly between hosts.

On the first client (Debian/GNU Linux):

~myLinkClock = LinkClock(130/60).latency_(Server.default.latency).permanent_(true);
~myLinkClock.tempo

gives 2.1666666666667

on three other clients (Mac OS or Windows):

~myLinkClock = LinkClock.new.latency_(Server.default.latency).permanent_(true);
~myLinkClock.tempo

gives 2.1666688333355

When the tempo on the first client is changed:

~myLinkClock.tempo=110/60;

it gives 1.8333333333333
while on all other clients it is 1.8333318055568

I wonder what the reason for this may be, and if the clocks are really
running in sync or not.

When all clients, including the first, used a remote scserver on yet
another machine, Patterns such as

Pbind(\degree, 8.rand, \legato, 0.1).play(~myLinkClock, quant: 4);

played in sync, but with their phases offset from each other. According
to the "Scheduling and Server timing" help file, Patterns apply Server
latency automatically.

I wonder whether Server.default.latency also accounts for (possibly
varying) network latency when using remote servers? The remote server
was made the default server before instantiating LinkClock:

r = Server.remote(name: \remote1, addr: NetAddr("192.168.2.1", 57110), options: Server.default.options.copy);
Server.default = r;

Is there a (recommended) way to get multiple sclang clients, synced via
LinkClock and playing via a common remote server, to play in sync and in
phase?
Or is it simply impossible due to network latency and the fact that
LinkClock only affects sclang?

Thanks to all!
P

I’d guess that sclang internally stores the tempo as a double-precision float, and that LinkClock probably transmits it to other peers as a single-precision float.
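You could check the single-precision half of this guess from sclang itself by round-tripping the value through 32 bits (a sketch, assuming Float’s as32Bits/from32Bits, which exist for OSC support):

```supercollider
// Round-trip the tempo through a single-precision (32-bit) float:
(130/60).postln;                               // double precision: 2.1666666666667
Float.from32Bits((130/60).as32Bits).postln;    // nearest single-precision value
```

If the value your peers report doesn’t match the second line exactly, that would point to something other than pure float truncation, e.g. Link’s own adjustments.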

Different computers’ hardware clocks run at slightly different speeds, so software timers on separate machines will drift, even if they are set to the same high-precision tempo. So even if you did get exactly matching double-precision tempi, that would not be sufficient to guarantee sync. (Hence the differing precision you’re seeing is not relevant; don’t worry about it.)

Link works by having all the peers make micro-adjustments to their timing to match each other. Everything is approximate, but coordinated. You do not get sample-accurate tempo precision with Link! You get togetherness that is close enough for onstage use. The slight tempo difference you’re seeing is one to three orders of magnitude smaller than the micro-adjustments, so again, not relevant.
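If you’re curious, you can watch those micro-adjustments directly by polling the clock’s tempo for a while (a quick sketch, assuming your ~myLinkClock from above):

```supercollider
// Poll the Link tempo a few times per second; expect tiny fluctuations
// around the nominal value rather than one fixed number.
(
Routine {
    20.do {
        ~myLinkClock.tempo.postln;
        0.25.wait;
    };
}.play(SystemClock);
)
```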

I believe the machines need to be NTP-synced for server message timestamps to work correctly (even without Link in the picture). But I haven’t done much with this.

hjh

Thanks James,

I do now understand that LinkClock adjusts constantly. Will test with
local scservers on each of the clients next.

Does anyone else have any experience with LinkClock-ed sclangs and a
common remote server?

Furthermore, with the remote server, evaluating single lines of code
such as
().play
or
{SinOsc.ar * 0.1}.play;
would sometimes take up to 4-5 seconds to produce a sound. The remote
server runs on an Apple Mac mini and is accessed via WLAN from clients
running all three operating systems.

best, P

The ().play problem could happen if the server machine’s clock is a few seconds behind the client machine’s. NTP sync should help with that.
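The mechanism: ().play goes out as a timestamped bundle. You can reproduce this with makeBundle (a sketch; the latency value here is just the server’s normal message latency, nothing measured):

```supercollider
// ().play sends a bundle timestamped "now + latency". If the server's system
// clock is N seconds behind the client's, that timestamp lies N + latency
// seconds in the server's future, so the sound comes out N seconds late.
(
var s = Server.default;
s.makeBundle(s.latency, {
    (degree: 0).play;   // the event's messages get the bundle's timestamp
});
)
```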

But that wouldn’t explain the {}.play problem, because the messages aren’t timestamped in that case – not sure there.

hjh

Thanks again James, will check whether NTP is enabled on all machines. Do
you think it would make sense to increase Server.default.latency so that
it is larger than the worst network round trip measured via Server.ping?

What kind of ping results are you getting? On a LAN, the actual network transmission time should be less than a millisecond; if you’re measuring language-to-server time, then the audio hardware block size factors in too.

If it’s networking latency, I can’t imagine that would be 4 seconds considering that online video chat has much lower latency over longer distances.
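To your latency question: you could measure first and then decide. A sketch (as far as I recall, s.ping calls its function with the worst measured round trip; the 0.05 s headroom is an arbitrary choice, not a recommendation):

```supercollider
// Measure 10 language->server round trips, then raise the latency only if
// the worst round trip (plus some arbitrary headroom) exceeds it.
(
var s = Server.default;
s.ping(10, 0.1, { |worst|
    ("worst round trip: " ++ worst ++ " s").postln;
    if(worst + 0.05 > s.latency) { s.latency = worst + 0.05 };
});
)
```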

hjh