When the Client wakes up, the Server might already have posted to the semaphore several times. Or the Server might post to the semaphore while the Client is running a Routine. (This is almost guaranteed to happen because of language jitter and because the Server tends to process blocks in batches.) In that case, the Client can simply decrement the semaphore and run the next time slice. Maybe I was not precise enough: what I meant was that the Client does not have to sleep between time slices. Anyway, I am pretty sure that wake-up latency is not a thing to worry about.
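For illustration, here is a minimal sketch of that loop, assuming a POSIX semaphore; `runTimeSlice()` and the globals are hypothetical placeholders, not actual sclang code:

```cpp
#include <semaphore.h>

extern sem_t* gTickSemaphore;  // semaphore posted by the Server once per block
extern bool gRunning;

void runTimeSlice();           // hypothetical: advance logical time by one block

void clientLoop() {
    while (gRunning) {
        sem_wait(gTickSemaphore);              // sleep until the Server posts at least once
        runTimeSlice();
        // the Server may have posted several times already (batching, jitter);
        // consume the remaining posts immediately, one time slice per post,
        // without sleeping in between
        while (sem_trywait(gTickSemaphore) == 0)
            runTimeSlice();
    }
}
```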
Maybe to clarify, here’s what I imagine would happen with your proposal - maybe you can check if my assumptions here are correct?
To be clear: I don't aim to drive the language client with OSC messages - at least not with the RT Server. Instead, I would use a named semaphore together with a lock-free FIFO in a shared memory segment. The language scheduler would run independently from the network thread, just like in the current system. There is no need for a SampleClock, either: SystemClock and TempoClock would all run on logical sample time.
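A rough sketch of what that IPC setup could look like on the Client side, assuming POSIX named semaphores and shared memory; all names (`"/sc_tick"`, `"/sc_fifo"`, `TickFifo`) are made up for illustration and the FIFO is reduced to a bare ring-buffer header:

```cpp
#include <fcntl.h>
#include <semaphore.h>
#include <sys/mman.h>
#include <unistd.h>
#include <atomic>
#include <cstdint>

struct TickFifo {                       // lives inside the shared memory segment
    std::atomic<uint32_t> readPos;      // consumer index (Client)
    std::atomic<uint32_t> writePos;     // producer index (Server)
    uint8_t data[65536];                // message payload area
};

sem_t* gTickSemaphore = nullptr;
TickFifo* gFifo = nullptr;

bool openSharedState() {
    // the Server would create both objects; the Client just opens them
    gTickSemaphore = sem_open("/sc_tick", 0);
    if (gTickSemaphore == SEM_FAILED)
        return false;

    int fd = shm_open("/sc_fifo", O_RDWR, 0);
    if (fd < 0)
        return false;
    void* mem = mmap(nullptr, sizeof(TickFifo), PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    close(fd);                          // the mapping keeps the segment alive
    if (mem == MAP_FAILED)
        return false;
    gFifo = static_cast<TickFifo*>(mem);
    return true;
}
```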
With an NRT server, on the other hand, it would make sense to drive the language with OSC messages, because it gives us an easy way to sync with Server reply messages! Asynchronous commands are executed synchronously and the `/done` message is guaranteed to be delivered in the same time slice - before computing any audio. This means that people can use action functions and OSC responders with the NRT Server and get deterministic results! Note that this would only work reliably with a TCP connection:
- the Server sends `/tick` to the Client and waits for incoming messages
- the Client receives `/tick` and dispatches Routines
- the Client might send OSC messages/bundles to the Server
- [for each asynchronous command, the Client waits for the `/done` message]
- finally, the Client sends `/tick_done` to the Server and waits for more messages
- the Server reads all incoming messages up to the `/tick_done` message
- finally, the Server computes a block of audio (a rough sketch of this handshake follows below)
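Here is how that handshake could look from the Server's point of view; `Connection`, `OscMessage` and the helper functions are hypothetical placeholders, not existing scsynth API:

```cpp
#include <string>

struct Connection;                                          // hypothetical TCP connection
struct OscMessage { std::string address; /* arguments... */ };

void sendMessage(Connection& c, const char* address);       // hypothetical TCP send
OscMessage receiveMessage(Connection& c);                   // hypothetical blocking TCP receive
void scheduleOrExecute(const OscMessage& msg);              // handle bundles/commands for this block
void computeAudioBlock();                                   // render one block of audio

void nrtProcessBlock(Connection& client) {
    sendMessage(client, "/tick");                 // 1. ask the Client to run one time slice
    for (;;) {
        OscMessage msg = receiveMessage(client);  // blocking read; TCP preserves order
        if (msg.address == "/tick_done")
            break;                                // 2. Client is done with this time slice
        scheduleOrExecute(msg);                   //    async commands run synchronously,
    }                                             //    so /done replies were already sent
    computeAudioBlock();                          // 3. only now compute the block of audio
}
```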
Since the buffer size can easily change in ways that are not visible to the user / between audio devices, will this create user scenarios where a patch might have NO late messages with one audio device, and MANY late messages with another audio device?
The audio hardware buffer size already plays a role when trying to find the minimum workable Server latency! The sad answer is: the latency has to be adjusted per system.
It looks like it will send MIDI messages at a rate of 30/second, but with a hardware buffer size of 4092 @ 44100 you'll get only 10 wake-ups per second, resulting in three MIDI messages being sent at once, every 1/10 of a second.
Uhhh, I forgot about MIDI. Thanks for pointing this out! Actually, Pd has this very problem. However, sclang has a similar problem: although MIDI messages are scheduled with latency to compensate for language jitter, this latency value is only actually used in the CoreMIDI backend - in the portmidi backend it is completely ignored! Check the implementation of `prSendMIDIOut` in `SC_CoreMIDI.cpp` and `SC_PortMIDI.cpp`. I remember discussing this issue on the mailing list 1-2 years ago.
For both kinds of schedulers, the solution could be to use a dedicated MIDI send thread for the portmidi backend.
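For illustration, such a send thread could look roughly like this (a sketch, not the actual implementation): messages are queued with an absolute deadline and a worker thread sleeps until each deadline before handing the message to the backend, e.g. via `Pm_WriteShort` in the portmidi case. All class and function names here are hypothetical:

```cpp
#include <chrono>
#include <condition_variable>
#include <cstdint>
#include <functional>
#include <mutex>
#include <queue>
#include <vector>

using Clock = std::chrono::steady_clock;

struct ScheduledMidi {
    Clock::time_point when;              // absolute deadline (latency already applied)
    int32_t message;                     // packed status/data bytes
    bool operator>(const ScheduledMidi& other) const { return when > other.when; }
};

class MidiSendThread {
public:
    void push(ScheduledMidi msg) {
        std::lock_guard<std::mutex> lock(mutex_);
        queue_.push(msg);
        cond_.notify_one();              // wake the worker in case this deadline is earlier
    }
    void run() {                         // body of the dedicated send thread
        std::unique_lock<std::mutex> lock(mutex_);
        for (;;) {
            if (queue_.empty()) { cond_.wait(lock); continue; }
            auto deadline = queue_.top().when;
            if (Clock::now() < deadline) {
                cond_.wait_until(lock, deadline);  // may wake early on a new push
                continue;                          // re-evaluate the (possibly new) top
            }
            ScheduledMidi msg = queue_.top();
            queue_.pop();
            lock.unlock();
            sendToDevice(msg.message);   // hypothetical: would wrap e.g. Pm_WriteShort()
            lock.lock();
        }
    }
private:
    void sendToDevice(int32_t message);  // hypothetical backend call
    std::priority_queue<ScheduledMidi, std::vector<ScheduledMidi>, std::greater<>> queue_;
    std::mutex mutex_;
    std::condition_variable cond_;
};
```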
For the sample-time language scheduler, we can do the same as for OSC bundle scheduling/dispatching: for each tick, the Server estimates the current NTP time with a DLL filter (like it currently does for OSC bundle dispatching) and sends it to the Client together with the logical sample time. In the Client, we would then know the (estimated) NTP time for each logical time point, and both MIDI backends could use it for their scheduling.
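In code, that mapping would be as simple as this (a sketch with illustrative names):

```cpp
#include <cstdint>

// Per-tick time info as it could be sent from the Server to the Client.
struct TickTimeInfo {
    int64_t sampleTime;   // logical time of this tick, in samples
    double ntpTime;       // DLL-estimated NTP time of this tick, in seconds
    double sampleRate;
};

// Estimated NTP time for an arbitrary logical time point, e.g. for MIDI scheduling.
double ntpTimeForSample(const TickTimeInfo& tick, int64_t logicalSampleTime) {
    return tick.ntpTime + (logicalSampleTime - tick.sampleTime) / tick.sampleRate;
}
```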