Server sync messages sampled at hardware buffer size

Hi,

does anybody here know why server sync messages return only after the next hardware buffer has been processed, and not already after the next block? When running with larger hardware buffers, one may lose quite some time waiting for sync messages. The following code demonstrates the question:

(
Server.default.quit();
Server.default.options.hardwareBufferSize = 64;
Server.default.options.sampleRate = 48000;
Server.default.waitForBoot({
	{
		1000.do({
			Server.default.sync();
		});
	}.bench(); // => ~ 2.6s, i.e. 2.6 ms / sync
});
)

(
Server.default.quit();
Server.default.options.hardwareBufferSize = 512;
Server.default.options.sampleRate = 48000;
Server.default.waitForBoot({
	{
		100.do({
			Server.default.sync();
		});
	}.bench(); // => ~ 2.1s, i.e. 21 ms / sync
});
)

Best regards,

Gerhard

When the soundcard driver calls into the audio system to get the next hardware block, it requires the complete hardware buffer right now. The audio driver doesn’t know or care that SC further subdivides that duration into smaller blocks. The mandate is for SC to return the whole hardware buffer ASAP.

Assuming 64-sample blocks and a 512-sample hardware buffer, it goes like this:

  • Audio driver says “gimme 512 samples”
    • SC does block 0, 1, 2, 3, 4, 5, 6, 7 immediately
  • Audio driver waits for next hardware interrupt…
  • Audio driver says “gimme 512 samples”
    • SC does block 8, 9, 10, 11, 12, 13, 14, 15 immediately
  • Audio driver waits for next hardware interrupt…

At 44.1 kHz, a 64-sample block represents about 1.45 ms of audio, but there is no thread in scsynth that wakes up every 1.45 ms to calculate one block at a time. Calculations are quantized to hardware buffer boundaries.
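To put rough numbers on that quantization, here is a back-of-the-envelope sketch in sclang, using the sample rate and buffer sizes from the test above:

(
var sampleRate = 48000, blockSize = 64, hwBufferSize = 512;
var blocksPerCallback = hwBufferSize / blockSize;        // 8 blocks computed back to back
var callbackPeriod = hwBufferSize / sampleRate * 1000;   // ~10.7 ms between hardware interrupts
"% blocks per callback, callback period ~ % ms".format(blocksPerCallback, callbackPeriod.round(0.1)).postln;
)

For comparison, the ~21 ms per sync measured in the first post is roughly two such callback periods.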

That’s probably a disappointing answer in some ways, but it is the way it is (and not worse than the behavior of expen$$$$$ive DAWs).

hjh

Yes, I get this. But how does this affect OSC messaging? Are all messages handled only at buffer boundaries?

About this, I’m not completely sure. I’d have to read the source code to be certain, and I don’t have time at the moment.

But – if messages were handled immediately upon receipt, I can imagine cases where, for instance, a b_set command might modify buffer data in the middle of a DSP cycle. Now, you can already modify buffer data in the middle of a hardware block, by applying an OSC timestamp to the message, or by using BufWr etc. – but in those cases, you have control over the order of evaluation. If the same modification could happen from the OSC thread contending with the DSP thread, the result would be indeterminate. In theory that shouldn’t happen, because the DSP thread is supposed to run at very high priority; even so, the risk of indeterminate behavior would (IMO) be worth disallowing explicitly.
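For example (a minimal sketch; here 'b' is assumed to be a Buffer that has already been allocated, and 's' is the default server), a timestamped bundle fixes where on the server’s timeline the modification lands:

(
// wrap the message in a bundle stamped 0.2 s ahead; the server schedules it
// on its own sample clock instead of executing it whenever the packet arrives
s.makeBundle(0.2, {
	b.set(0, 0.5);  // /b_set: write 0.5 into sample index 0 of the buffer
});
)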

I suspect the solution to your problem may be to reduce the number of sync calls – clump the messages into bundles of 10-20 messages each, and issue one sync per bundle.
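Something along these lines (an untested sketch; '~allMsgs' is just a placeholder for whatever messages you are generating):

(
fork {
	~allMsgs.clump(16).do { |msgs|
		s.listSendBundle(nil, msgs);  // send ~16 messages as a single bundle
		s.sync;                       // one round trip per bundle instead of one per message
	};
};
)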

hjh

I thought that generally OSC messages were handled every processing block… I need to go back and check. I routinely set control buses from OSC at around 200 Hz, which would be fine for a hardware buffer size of 128, but would get totally messed up for a buffer of 1024 if the messages were only processed once per hardware buffer.
I also wonder if sync messages are handled less frequently than other OSC messages (like bus set, for example).

Yes! OSC packets are received on the network thread and pushed to a lock-free queue. In the audio callback, before calculating a new block of audio (= control period), the server takes packets from the queue. If a packet is an OSC message, it is executed immediately. If it is an OSC bundle with a future time stamp, it is put on a priority queue, so it gets executed later at the desired time. As @jamshark70 has pointed out, the callback might compute several blocks in a row: number of blocks = hardware buffer size / block size.
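From the sclang side, the two cases look like this (just a sketch; 's' is the default server and control bus 0 is picked as an arbitrary example):

s.sendMsg("/c_set", 0, 0.5);             // plain message: executed as soon as the callback dequeues it
s.sendBundle(0.5, ["/c_set", 0, 0.5]);   // bundle stamped 0.5 s ahead: scheduled on the server’s priority queue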

I also wonder if sync messages are handled less frequently than other OSC messages (like bus set, for example).

/sync is an asynchronous command. Asynchronous commands travel between the RT and NRT threads and look like this:

(stage 0: receive packet on network thread)
stage 1: RT thread → interpret OSC message and dispatch command
stage 2: NRT thread → do work (e.g. read a soundfile into a buffer)
stage 3: RT thread → exchange results (e.g. swap buffer data)
stage 4: NRT thread → send /done message + NRT cleanup (e.g. free old buffer data)
(stage 5: RT thread → RT cleanup)

Unlike “real” async commands, /sync doesn’t do any real work. Its sole purpose is to travel through all stages to make sure that all previous async commands have finished.
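A minimal sketch of what that means in practice (the sound file is the one shipped with SC; any longer-running async command would do):

(
fork {
	var buf = Buffer.read(s, Platform.resourceDir +/+ "sounds/a11wlk01.wav");
	s.sync;  // only returns once the read (an async command) has gone through all the stages above
	{ PlayBuf.ar(1, buf, doneAction: 2) }.play;  // now it is safe to use the buffer
};
)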

Generally, the duration of a Server.sync call is indeterminate, as it depends on pending asynchronous commands. If sync becomes a bottleneck, you’re likely using it in a non-optimal way :)

I can second @jamshark70’s recommendation:

I suspect the solution to your problem may be to reduce the number of sync calls – clump the messages into bundles of 10-20 messages each, and issue one sync per bundle.

Thank you for this very helpful explanation!
Best regards,
Gerhard