Server clock vs. Language clock

Assume that two processes A and B are running independently of each other. Process A might send a message to B just after B has started or finished its current cycle, in which case the message will only be dispatched on B's next cycle.
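
Here is a tiny sclang sketch of that situation – a toy model only (the 0.1 s cycle and the send time are arbitrary values, and this is not how sclang or the Server actually exchange messages): B polls a “mailbox” once per cycle, so whatever A drops in is only handled on B's next wakeup.

```
(
var mailbox = [];

// "process B": wakes up every 0.1 s and handles whatever arrived since its last cycle
fork {
    20.do {
        mailbox.do { |msg|
            "B handled % at %".format(msg, Main.elapsedTime.round(0.001)).postln;
        };
        mailbox = [];
        0.1.wait;
    }
};

// "process A": sends a message at an arbitrary time; B only sees it on its next cycle
fork {
    0.317.wait;
    mailbox = mailbox.add(\hello);
    "A sent at %".format(Main.elapsedTime.round(0.001)).postln;
};
)
```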

Note that the Server computes control blocks in batches. For example, with a hardware buffer size of 256 samples, every audio callback will compute 4 blocks of 64 samples in a row. Now, if you’re out of luck, your message might be received just while or after the last block has been computed, after which the audio thread goes to sleep for the remaining time slice. This is the reason why message dispatching is quantized to the hardware buffer size (in the worst case) and not to the Server block size (typically 64 samples).

The actual quantization depends on the CPU load. If the CPU load is low, the audio callback finishes very quickly and spends most of its time sleeping, so the quantization approaches the hardware buffer period (e.g. ~5.3 ms for 256 samples @ 48 kHz). If the CPU load is high, the quantization is less pronounced: the callback spends less time sleeping and more time processing blocks, giving messages the opportunity to “sneak in”.
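
For the numbers used above, the worst-case quantization is easy to work out in sclang itself (256 and 64 are just the example values from this post, not something to hard-code):

```
(
var sampleRate = 48000, bufferSize = 256, blockSize = 64;
"control block duration: % ms".format(blockSize / sampleRate * 1000).postln;   // ~1.33 ms
"hardware buffer period: % ms".format(bufferSize / sampleRate * 1000).postln;  // ~5.33 ms
"blocks per audio callback: %".format(bufferSize.div(blockSize)).postln;       // 4
)
```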


To illustrate further why you cannot “directly” turn MIDI data into audio in sclang:

First the MIDI timer thread needs to wake up and read the incoming MIDI messages. Then it needs to obtain the global interpreter lock – which might currently be held by another sclang thread! Only then can it dispatch the MIDI message to the user code, which may in turn send messages to the Server. The Server finally receives the message and dispatches it in an upcoming control block. Depending on the “phase” of the audio callback, the message may be dispatched in a few microseconds – or in a few milliseconds.
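
A minimal example of that chain, roughly in the spirit of the description above (it assumes a \ping SynthDef already exists on the server; MIDIdef is simply the usual way to register a language-side MIDI callback):

```
(
// register a language-side callback for incoming note-on messages
MIDIClient.init;
MIDIIn.connectAll;

MIDIdef.noteOn(\play, { |vel, num|
    // this runs in sclang, some variable amount of time after the MIDI event,
    // and only sends a message to the Server – which picks it up in an upcoming control block
    Synth(\ping, [\freq, num.midicps, \amp, vel / 127]);
});
)
```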
