How to sync between clock and running synth

hi,

Consider the following example: a pulse oscillator on the server at 2.2 Hz, and a pattern in the language, with a clock at 2.2 Hz, creating events:



(
~myClock = TempoClock.new(2.2);
SynthDef(\beep, { Out.ar(0, SinOsc.ar(\freq.kr(440), mul:\amp.kr)!2*EnvGate()) }).add;
{
	Ndef(\a).proxyspace.clock = ~myClock;
	Ndef(\a).proxyspace.quant = 1;
	Ndef(\b).proxyspace.clock = ~myClock;
	Ndef(\b).proxyspace.quant = 1;
	
	Ndef(\a, Pbind(\instrument, \beep,
		\dur, Prand([1,1.5,3], inf),
		\degree, Prand((0..14),inf),
		\amp, 0.4,
		\legato, 0.01));
	Ndef(\b, {LFPulse.ar([2.2,2.2], 1, mul:0.5)});
	
	Ndef(\a).play;
	Ndef(\b).play;
	
}.fork(~myClock);
)

The code above causes the pulses and the notes from the Pbind to go out of sync, which to me is unexpected. I read the Scheduling and Server timing doc, but I'm not sure I understand how to force the clock in the language to stay in sync with an oscillation on the server. Could anyone provide any insight? Is this drift inherent in the architecture, so that it needs to be overcome in a different way, e.g. a Synth on the server controlling events in the language? I might be getting some things completely wrong.

First, I don't encounter any drift with your example here on my system (SC 3.9.3 on macOS 10.10).

There are two points that confused me a bit in this example: the doubling of the LFPulse frequency (causing 2 DC clicks per period, which is kind of harsh) and the random durations in the pattern. OffsetOut improves accuracy, but even with Out this variant doesn't run out of sync here:

(
~myClock = TempoClock.new(2.2);
SynthDef(\beep, { OffsetOut.ar(0, SinOsc.ar(\freq.kr(440), mul:\amp.kr)!2*EnvGate()) }).add;
{
	Ndef(\a).proxyspace.clock = ~myClock;
	Ndef(\a).proxyspace.quant = 1;
	Ndef(\b).proxyspace.clock = ~myClock;
	Ndef(\b).proxyspace.quant = 1;
	
	Ndef(\a, Pbind(\instrument, \beep,
		\dur, 1,
		\degree, Prand((0..14),inf),
		\amp, 0.4,
		\legato, 0.03));
	Ndef(\b, { Decay.ar(Impulse.ar(2.2!2), 1, mul:0.2).lag(0.001) });
	
	Ndef(\a).play;
	Ndef(\b).play;
	
}.fork(~myClock);
)

How long does it take to get out of sync on your machine?

Beyond the concrete example: yes, syncing between patterns and the server can be demanding, and the reason for this is a necessary realtime adjustment that probably can't be overcome. I have given an example here:

Apologies. Maybe the following is a better example:

////////////////////////////////////////////////////////////////////////
(
~tempo = 8;
~myClock = TempoClock.new(~tempo);
SynthDef(\beep, { Out.ar(0, Impulse.ar(\freq.kr(440), mul:\amp.kr)!2*EnvGate()) }).add;
{
	Ndef(\a).proxyspace.clock = ~myClock;
	Ndef(\a).proxyspace.quant = 1;
	Ndef(\b).proxyspace.clock = ~myClock;
	Ndef(\b).proxyspace.quant = 1;
	
	Ndef(\a, Pbind(\instrument, \beep,
		\dur, 1,
		\freq, 1,
		\amp, 0.4,
		\legato, 0.2));
	Ndef(\b, {Impulse.ar(~tempo, 1, mul:0.5)!2});
	
	Ndef(\a).play;
	Ndef(\b).play;
	
}.fork(~myClock);
)

After a minute it's pretty clear that what started as well-aligned impulses (sounding as one) has become two separate impulses. After 5 minutes it's very audible.

Here are two screenshots of a recording, one at the beginning and the other after 1 min 20 sec:

I tried to use TempoBusClock (a clock that synchronizes its tempo with the server):

(
~tempo = 8;
~tempoSynth = { |tempo=8| Impulse.ar(tempo)*0 }.play;
~myClock = TempoBusClock.new(~tempoSynth);
SynthDef(\beep, { Out.ar(0, Impulse.ar(\freq.kr(440), mul:\amp.kr)!2*EnvGate()) }).add;
{
	Ndef(\a).proxyspace.clock = ~myClock;
	Ndef(\a).proxyspace.quant = 1;
	Ndef(\b).proxyspace.clock = ~myClock;
	Ndef(\b).proxyspace.quant = 1;
	
	Ndef(\a, Pbind(\instrument, \beep,
		\dur, 1,
		\freq, ~tempo,
		\amp, 1));
	Ndef(\b, {Impulse.ar(~tempo, 1, mul:1)!2});
	
	Ndef(\a).play;
	Ndef(\b).play;
	
}.fork(~myClock);
)

But I'm not sure I really understand how it's supposed to work. It drifts just the same.

Ok, that's pretty much the same example I did in the test in the mentioned thread. Did you check it out? Short summary: to improve accuracy, use OffsetOut; beyond that, it's impossible to avoid this kind of drift with patterns in RT mode. Using NRT would help, if that's an option in your use case.

We commonly assume that the soundcard’s sample clock is accurate.

It isn’t.

Seeing that much drift after 80 seconds is extremely unusual though. I typically see 2-5 samples/second difference. I’m estimating up to 20 samples/second difference for you.

Could you take a running average of s.actualSampleRate?

hjh

Yes. Is this, what you wanted to see?:

-> 44097.45827975
-> 44095.851406917
-> 44094.620051697
-> 44094.540066577
-> 44094.475766213
-> 44094.475766213
-> 44094.177057743
-> 44094.177057743
-> 44094.095323192
-> 44094.095323192
-> 44094.090065454
-> 44094.090065454
-> 44094.138458763
-> 44094.138458763
-> 44093.755708634
-> 44093.755708634
-> 44094.050871133
-> 44093.860142959
-> 44093.819505445

Pretty much.

On my system, I measure:

(
var sum = 0, count = 0;

OSCdef(\avg, { |msg| 
	sum = sum + msg[9];
	count = count + 1;
	if(count == 100) {
		"Average sample rate = %\n".postf(sum / count);
		OSCdef(\avg).free;
	};
}, '/status.reply', s.addr);
)

// after about a minute...
Average sample rate = 44099.416424442

So on my system, after 80 seconds, I would expect to be about (44100 - 44099.416424442) / 44100 * 80 = 0.0010586404680225 seconds late.
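The same arithmetic can be wrapped in a small sclang helper (a sketch; the name ~drift is just for illustration):

```supercollider
// expected drift in seconds after t seconds of playback,
// given the nominal and the measured (actual) sample rate
~drift = { |nominal, actual, t| (nominal - actual) / nominal * t };

~drift.(44100, 44099.416424442, 80); // ca. 0.00106 s (my average above)
~drift.(44100, 44093.8195, 80);      // ca. 0.0112 s, i.e. around 11 ms
```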

What’s strange about your readings is that it seems to be getting progressively worse (which I don’t see on my machine).

What sound card are you using? I wonder if it’s defective. Those numbers are well out of typical bounds.

Also, what OS? Maybe the OS is not handling sound card interrupts with sufficient priority.

I’m not an expert on this so I don’t know how to fix it… but this does explain the drift that you’re seeing. (44100 - 44093.8195) / 44100 * 80 = 0.011211791383224 and by eye, it looks like 10-15 ms drift in your screenshot.

hjh

thank you both @dkmayer and @jamshark70 for your responses!

soundcard: Behringer UMC204
OS: Ubuntu Studio
I'm starting jackd from the command line.
I think I have set up realtime priorities correctly. I've never had (any/many) xruns.
How can I determine whether the card is defective? Are there more tests I can do?

Ubuntu Studio is pretty good about that.

Maybe ask the manufacturer. I’ve never seen actualSampleRate be so far off on any of my systems over the years.

hjh

You can check by setting different values of hardwareBufferSize, which must be a multiple of blockSize, e.g.:

s.options.hardwareBufferSize = 256;
s.reboot;

Pretty late reply here, but this is a common issue, and the thread mostly devolved into hardware troubleshooting rather than what the solutions should be in general. Basically, there are only two simple solutions: master clock on the client, or master clock on the server.

The former has been explained in some threads here, e.g. search for TempoClick; it issues "clicks", meaning triggers, on a server bus, which you then use to trigger stuff over there as appropriate. It's also more or less what the usual "synth spam" via pattern.play does: synths are short-lived, so the long-term master clock of the composition is essentially on the client.
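A minimal sketch of that client-as-master idea, assuming made-up names (~clickBus, \clicked): the language clock schedules .set messages onto a control bus, and a synth on the server treats that bus as a trigger.

```supercollider
(
// client as master clock: the language writes "clicks" to a bus
~clickBus = Bus.control(s, 1);
~clock = TempoClock(2.2);

// every beat: raise the bus, then lower it shortly after
~clock.sched(0, {
	~clickBus.set(1);
	~clock.sched(0.1, { ~clickBus.set(0) });
	1
});

// server side: retrigger an envelope from the bus
Ndef(\clicked, {
	var trig = In.kr(~clickBus.index);
	SinOsc.ar(880) * EnvGen.kr(Env.perc(0.001, 0.15), trig) ! 2 * 0.2
}).play;
)
```

Note that each .set still travels client-to-server as an OSC message, so this inherits the usual language-side timing jitter unless you send it as a latency-stamped bundle.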

If the synths need to be longer-lived, for example a "manual" turntable scratch effect that varies the play rate of a synth under direct user control (e.g. MouseX etc.), then you might need the master clock on the server. This means that you send SendTrig events from there to the client, and no longer play patterns but instantiate them asStream and pull next values from them in an OSCdef, values that you then use e.g. to set something on the server. This OSCdef basically replaces the EventStreamPlayer with an "on demand" version, where the demands now come from the server. (If this is not clear enough, I can post some code… just ask.)
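To make the server-as-master idea concrete, here is a hedged sketch (not the poster's actual code; \serverClock, \tick and ~stream are invented names): an Impulse-driven SendTrig on the server fires once per beat, and an OSCdef on the client pulls one event from a pattern stream per trigger.

```supercollider
(
SynthDef(\serverClock, { |tempo = 8|
	// one /tr message per beat, generated on the server
	SendTrig.kr(Impulse.kr(tempo), 0, 0);
}).add;

~stream = Pbind(
	\degree, Prand((0..14), inf),
	\amp, 0.4,
	\legato, 0.1
).asStream;

// this OSCdef replaces the EventStreamPlayer:
// each server trigger pulls exactly one event from the stream
OSCdef(\tick, { |msg|
	~stream.next(Event.default).play;
}, '/tr', s.addr);
)

// once the SynthDef has reached the server:
Synth(\serverClock);
```

Keep in mind that event.play itself schedules with s.latency on top of the server-to-client round trip, which leads directly to the latency trade-off mentioned next.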

The biggest issue with this is client-server latency (s.latency). With the default settings, you have 0.2s latency, which is pretty huge in some cases. If you lower it, you’ll get some “late” messages. Typically 10 to 80 milliseconds jitter are to be expected on a zero-latency setting, at least on a Windows box.

I suppose you could try to concoct some kind of two-way synchronization by combining the above somehow, but it's going to be harder. While stuff like NTP (which does this to some extent, though there are ultimate atomic time sources in the network) works wonderfully between computers, it's probably not very feasible to implement it in real time so as to sync the client and server clocks, even at a virtual level. If you look at some VST-related discussions, having to choose the master clock is basically how things are done in the world of audio.

TempoBusClock is a fake solution for this kind of synchronization, by the way. If you look at its trivial source, it does nothing in terms of synchronization except set a tempo parameter server-side, which your synths are supposed to deal with. That's not real synchronization if the clocks drift apart.

Actually, the conclusion that the soundcard clock is bad and the OS or mainboard clock is good isn't warranted from this, unless you have an atomic clock on your mainboard (you don't). The OS sometimes slows its clock down gradually due to NTP; that's how it prevents sudden backward jumps, i.e. it slows the clock down over (real) time instead. Comparing two clocks doesn't tell you which one is good, only how they differ.

Frankly, I suspect that people with integrated audio will report less drift, because the audio clock is probably derived from the mainboard clock by some divisor or similar mechanism; see e.g. (2010-era) Intel documentation for "High Definition Audio", in which the "codec derives its sample rate clock from a clock broadcast (BCLK) on the link"; BCLK is a 24.0 MHz clock basically in the southbridge (at the time). They comment later (p. 92) that 48 kHz or even 192 kHz sampling is therefore more accurate on their HDA because of the simple divisor involved. On the next page they discuss the more complicated method of getting 44.1 kHz from that, using a "12-11 cadence". The OS clock that the client uses can still drift from that due to software adjustments, even if it's ultimately sourced from the same oscillator, which it probably isn't anyway. I'm not too sure whether BCLK was supposed to use its own oscillator or in turn derive one from the higher clock of the northbridge.