Client wakes at audioDeviceTime=1000+n with a message that logicalTime=1000.
Waking up another process is a matter of microseconds. If the Client is currently idle, it will be woken up more or less immediately. If it is still busy with a previous task, that is just the typical language jitter, which can be compensated for with Server latency.
Note that even in the current implementation, the Client might wake up a bit later than the desired time point because std::condition_variable::wait_for is not 100% precise, either.
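For illustration, here is a minimal sketch of that hand-off, assuming a hypothetical global logical time and a Client thread waiting on a condition variable. A real implementation would avoid locking in the audio callback (e.g. atomics plus a semaphore); this only shows the basic mechanism.

```cpp
// Minimal sketch (not the actual sclang/scsynth code): the audio callback
// advances the logical sample time and signals the Client thread.
// g_logicalTime, audioCallback() and clientLoop() are hypothetical names.
#include <condition_variable>
#include <cstdint>
#include <mutex>

std::mutex g_mutex;
std::condition_variable g_cond;
int64_t g_logicalTime = 0; // logical time in samples, advanced by the callback
bool g_newBlock = false;

// called once per audio block (e.g. every 64 samples)
void audioCallback(int blockSize) {
    {
        std::lock_guard<std::mutex> lock(g_mutex);
        g_logicalTime += blockSize; // advance logical time by one block
        g_newBlock = true;
    }
    g_cond.notify_one(); // waking the Client typically takes microseconds
}

// Client scheduler thread: waits for the next block boundary.
void clientLoop() {
    for (;;) {
        std::unique_lock<std::mutex> lock(g_mutex);
        g_cond.wait(lock, [] { return g_newBlock; });
        g_newBlock = false;
        int64_t now = g_logicalTime; // may be observed a bit late, like wait_for
        lock.unlock();
        // ... dispatch all Routines scheduled up to 'now' ...
        (void)now;
    }
}
```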
- In [3], when does the client schedule “now” events, e.g. messages meant for the current logical time=1000? It can’t schedule them for time=1000 because this time has already passed on the server.
Yes, the Client would simply use the current logical time as the basis for scheduled OSC bundles. Of course, if latency is zero, the bundle will always be late, but this is also the case with the current implementation!
Generally, the Client can only schedule things for the future, never for the present. The only exception is an NRT Server that can wait on the Client (as outlined in my other post above).
OSC messages, on the other hand, are not scheduled; they are sent immediately, so the time base doesn’t matter.
Probably it schedules them for logicalTime + server.latency?
It simply schedules for logicalTime + delay. The latter can be Server.latency or any other value, just like with the current system.
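As a rough sketch (names are assumptions, not the actual implementation): an outgoing bundle is stamped with the current logical time plus a user-chosen delay, e.g. Server.latency. With a delay of zero the target time has already passed once the Server reads it, i.e. the bundle is late, just like in the current system.

```cpp
// Illustrative only: bundle target time = current logical time + delay.
struct Bundle {
    double targetTime; // logical time in seconds
};

Bundle makeBundle(double logicalNow, double delay /* e.g. Server.latency, or 0 */) {
    return Bundle{ logicalNow + delay };
}

bool isLate(const Bundle& b, double serverLogicalNow) {
    return b.targetTime < serverLogicalNow; // the Server would report "late" here
}
```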
But then, for debugging and code purposes, the server is processing events at audioDeviceTime=1200 that are nonetheless meant for logicalTime=1000
This only happens if the OSC bundle arrives late, just like with the current system.
- this seems like a recipe for confusion? It would be nice to have an implementation where the logical time on both the client and the server were semantically identical, and the latency was implemented by waking up the client early and effectively hiding the offset from latency from most code?
What would waking early mean? For OSC messages it would make no difference. For OSC bundles the only effect would be that you get some hidden extra latency. This is not necessary because we already control the latency in the Client.
In [4], the client needs to tell SOMETHING about the logical time of its next wake-up (we can’t wake the client every 16 samples…). This presumably happens as an OSC message?
The Client simply dispatches all scheduled Routines that fall into the current time slice (= duration of a block). Something similar already happens in the current language scheduler: when the Client wakes up, it reads the current time, compares it against the desired schedule time - it might have been woken up late! - and dispatches all ready Routines.
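A minimal sketch of that dispatch step, with a hypothetical Task type and priority queue standing in for sclang's Routine queue:

```cpp
// Sketch only: pop and run every scheduled task whose time falls into the
// current time slice. If the Client was woken up late, this naturally
// catches up on everything that is already due.
#include <cstdint>
#include <functional>
#include <queue>
#include <vector>

struct Task {
    int64_t dueTime;             // logical time in samples
    std::function<void()> run;
};

struct Later {
    bool operator()(const Task& a, const Task& b) const { return a.dueTime > b.dueTime; }
};

using TaskQueue = std::priority_queue<Task, std::vector<Task>, Later>;

// Dispatch everything due up to and including 'now' (the end of the block).
void dispatchReady(TaskQueue& queue, int64_t now) {
    while (!queue.empty() && queue.top().dueTime <= now) {
        Task task = queue.top();
        queue.pop();
        task.run();              // a Routine may reschedule itself here
    }
}
```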
In addition to scheduler-based wake-ups, the server needs to regularly update the sclang client’s logicalTime even if no events are being processed. This is fine, but these updates will happen with SOME granularity - either one of our choosing (so that we avoid flooding with updates every 16 samples) or the hardware buffer size.
We only have this problem if we want to introduce a dedicated SampleClock that should coexist with SystemClock and TempoClock. But why would we need this in the first place? I think it would just make things more complicated.
Personally, I still think it makes more sense to drive the entire Client scheduler from the audio callback - as an option, of course! - so that there is only one logical time (= sample time) that is used by SystemClock and TempoClock.
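To illustrate the "one logical time" idea (all names and conversions below are assumptions): SystemClock-style scheduling in seconds and TempoClock-style scheduling in beats would both resolve to the same sample-based logical time.

```cpp
// Illustrative sketch: both clock styles map onto sample time.
#include <cmath>
#include <cstdint>

constexpr double kSampleRate = 48000.0; // assumed

int64_t secondsToSamples(double seconds) {
    return static_cast<int64_t>(std::llround(seconds * kSampleRate));
}

// SystemClock-style: schedule 'deltaSeconds' from the current logical time
int64_t scheduleSeconds(int64_t logicalNow, double deltaSeconds) {
    return logicalNow + secondsToSamples(deltaSeconds);
}

// TempoClock-style: schedule 'deltaBeats' from now at a tempo in beats/second
int64_t scheduleBeats(int64_t logicalNow, double deltaBeats, double tempoBps) {
    return scheduleSeconds(logicalNow, deltaBeats / tempoBps);
}
```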
In sclang there’s no fundamental reason why events cannot be scheduled on different clocks, e.g. AppClock - or threads, e.g. a network callback thread. In order to reasonably schedule server events (meaning: send any OSC message at all…), we need SOME notion of what the current logical time is.
The AppClock implementation does not have to change at all: after receiving a signal, it reads the current logical time and compares it against the top of the priority queue to schedule the next UI clock event. Logical sample time and system time are close enough, so unless you schedule an event several hours in the future, this should be ok.
For sending/receiving OSC bundles (to other applications) we need to know the NTP time. This can be done by sampling the NTP time on the Server and sending it to the Client. This means that for every block we know the logical sample time + the corresponding NTP time. We might use a time DLL filter to estimate time points between blocks.
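Here is a rough sketch of such a mapping (names are made up): the Server reports a (sample time, NTP time) pair once per block, and the Client can then estimate the NTP time of any sample position in between. The one-pole smoothing below merely stands in for a proper time DLL filter.

```cpp
// Sketch only: map logical sample time to NTP time from per-block samples.
#include <cstdint>

struct TimeMapping {
    double  sampleRate  = 48000.0; // smoothed estimate of the effective rate
    int64_t lastSample  = 0;       // sample time at the last block boundary
    double  lastNtp     = 0.0;     // NTP time (in seconds) at that boundary
    bool    initialized = false;

    // called once per block with the pair reported by the Server
    void update(int64_t sampleTime, double ntpTime) {
        if (initialized && sampleTime > lastSample && ntpTime > lastNtp) {
            double measuredRate = (sampleTime - lastSample) / (ntpTime - lastNtp);
            sampleRate += 0.1 * (measuredRate - sampleRate); // crude smoothing
        }
        lastSample = sampleTime;
        lastNtp = ntpTime;
        initialized = true;
    }

    // estimate the NTP time of an arbitrary sample position (between blocks)
    double ntpAt(int64_t sampleTime) const {
        return lastNtp + (sampleTime - lastSample) / sampleRate;
    }
};
```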
Multiple (local) Servers are a bit tricky. One solution would be to set one Server as the master that drives the language scheduler. Each Server samples the current system time at the very first callback and uses it as an offset. Local servers running on the same audio device should not drift apart, so the respective logical sample times wouldn’t drift, either. However, there would be a slight constant offset - depending on the accuracy of the system clock and the time difference between the Server starts.
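A sketch of that per-Server offset (illustrative names only): each Server samples the system time at its very first callback and derives its logical time from its own sample counter. Servers on the same audio device advance at the same rate, so their logical times differ only by a small constant offset.

```cpp
// Sketch only: per-Server time base anchored at the first callback.
#include <chrono>
#include <cstdint>

struct ServerTimeBase {
    double  startSystemTime = 0.0; // system time (seconds) at the first callback
    int64_t sampleCounter   = 0;
    double  sampleRate      = 48000.0;
    bool    started         = false;

    // called from the audio callback, once per block
    void onBlock(int blockSize) {
        if (!started) {
            using namespace std::chrono;
            startSystemTime =
                duration<double>(system_clock::now().time_since_epoch()).count();
            started = true;
        }
        sampleCounter += blockSize;
    }

    // this Server's logical time expressed on the shared system timeline
    double logicalSystemTime() const {
        return startSystemTime + sampleCounter / sampleRate;
    }
};
```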