None of this would change the fact that the client needs to send messages to the Server, and the exact time of reception cannot be controlled. Even if the Server were to wake up the Client, the current audio callback might already have finished before the Client could send its messages. This delay is inherent in asynchronous processes. The upside is that the Client cannot block the Server. That’s the fundamental trade-off!
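For illustration, here’s a minimal sketch of the standard mitigation on the language side: timestamped bundles trade a fixed, known latency for jitter-free scheduling on the Server (the 50 ms value is just an example):

```
// Minimal sketch: absorb delivery jitter with timestamped OSC bundles.
s.latency = 0.05; // 50 ms of headroom for messages to reach the Server

// s.bind wraps the enclosed messages in a bundle stamped "now + s.latency";
// the Server holds them until that time, so variable delivery is absorbed.
s.bind {
    Synth(\default, [\freq, 440]);
};
```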
The SuperCollider Server has pretty good latency, to my understanding. I like this design. One could just try to mitigate scheduling problems on the language side as seriously as the server is optimized. That’s all I’m saying. )))))
I usually use a system with a minimal desktop, a real-time kernel, and low latency. I’ve never had problems with that playing live. But I would be very concerned if I could perceive jitter, irregularities, or latency in the language when I’m playing. I would maybe look for another option.
I don’t care much about how large the std lib is or anything like that; I’m more concerned that the language implementation is starting to become “legacy code” we got used to, but not up to the engineering excellence of the server(s).
Even in that case, a message can arrive at the server just a few samples too late, and have to wait for the next callback.
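A quick back-of-the-envelope sketch of what that worst-case wait amounts to (sample rate and buffer sizes here are just examples):

```
// A message that just misses the current callback waits, in the worst
// case, one full hardware buffer. Assuming 44.1 kHz:
[64, 256, 1024].do { |bufSize|
    "% samples -> % ms worst-case wait".format(
        bufSize, (bufSize / 44100 * 1000).round(0.1)
    ).postln;
};
// -> 64 samples: ~1.5 ms; 256: ~5.8 ms; 1024: ~23.2 ms
```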
Set your soundcard to a large buffer size, then set up a basic MIDI synth in VCV Rack and you’ll hear the same quantization (i.e. unplayability). Or just about any DAW. These systems are synchronous, unlike SC, so synchronicity by itself isn’t enough to solve this problem.
FWIW even two decades ago, when I was running SC on an underpowered G4 iBook, I didn’t notice significant delays in the handling of incoming MIDI messages. If MIDI hasn’t been optimized, it’s because it already works well enough.
True! The more important thing is to decrease the hardware buffer size as much as possible.
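For anyone following along, a minimal sketch of how to request that in SC (assuming your audio device supports the requested size):

```
// Request a smaller hardware buffer before (re)booting the Server.
s.options.hardwareBufferSize = 64; // samples per audio callback
s.reboot;
```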
My point is rather that in a synchronous system, once the MIDI event is received by the audio thread, it is guaranteed to be processed in that audio block. If it is received in another process, you don’t know when it will eventually make its way into the audio callback, so it adds another layer of uncertainty. Probably not a big deal for most people, though.
BTW, there has been a recent discussion about adding MIDI functionality to the Server, but I can’t find it… In general, I can imagine a plugin API that registers UGens for MIDI events. For example, you could have a UGen that outputs the current value of a given MIDI CC channel. Or, with the appropriate API functions, you could even have UGens that spawn/destroy other Synths based on MIDI note-on and note-off messages, respectively. Of course, this would be significantly less flexible than handling MIDI events on the Client side, but you could shave off some extra latency. Just to point out some possibilities. I don’t really expect something like this to ever be implemented in SC3.
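Just to make the idea concrete, here’s what usage could hypothetically look like – note that MIDICC is an invented name for this sketch, not an existing UGen:

```
(
// MIDICC is NOT a real UGen -- purely an illustration of the proposed API.
SynthDef(\ccFilter, {
    var cutoff = MIDICC.kr(channel: 0, cc: 74)  // hypothetical server-side CC UGen
        .linexp(0, 127, 100, 8000);             // map 0..127 to a cutoff in Hz
    Out.ar(0, LPF.ar(Saw.ar(110, 0.2), cutoff));
}).add;
)
```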
I thought it was unavoidable when you’re trying to react As Fast As Possible to incoming MIDI and the hardware block size is too large.
MIDI’s designed for hardware devices with minimal latency, not general-purpose computers with latency on the order of a dozen or more ms. (This is why, when I performed with a friend who was using his Elektron drum machine, SC was the MIDI clock source.)
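For reference, a rough sketch of what that clock-master setup can look like in SC (device and port names are placeholders, substitute your own):

```
// Sketch: SC as MIDI clock master, 24 ticks per quarter note.
MIDIClient.init;
~clockOut = MIDIOut.newByName("Elektron", "MIDI 1"); // placeholder names
~clockOut.latency = 0; // send clock ticks immediately

~ticks = Pbind(
    \type, \midi,
    \midicmd, \midiClock, // raw MIDI clock tick message
    \midiout, ~clockOut,
    \dur, 1/24            // 24 ticks per beat
).play;
```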
I was trying to produce some empirical findings related to the notion that optimizing the MIDI receipt chain would make a noticeable difference. What I found is that it doesn’t make a big difference, not even in Pd.
I tried:

- SC MIDI pattern (with outgoing message latency – in theory allowing Pd to take advantage of the timing information; see the sketch below) → Pd. Jittery timing.
- Pd [noteout] → Pd [notein]. Jittery timing.
- SC MIDI pattern → VSTPluginMIDISender (with latency – VSTPluginMIDISender is my own way of dealing with latency for VSTPlugin MIDI communication). Timing is near-perfect.
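The SC side of the first test looked roughly like this sketch (port names and timing values here are assumptions; I’m leaving VSTPluginMIDISender out since it’s my own ad-hoc code):

```
// Sketch: SC MIDI pattern with outgoing message latency, aimed at Pd.
MIDIClient.init;
~pdOut = MIDIOut.newByName("Pure Data", "Pure Data Midi-In 1"); // assumed ALSA port names
~pdOut.latency = 0.2; // outgoing message latency

~test = Pbind(
    \type, \midi,
    \midicmd, \noteOn,
    \midiout, ~pdOut,
    \midinote, Pseq([60, 64, 67], inf),
    \dur, 0.25
).play;
```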
This doesn’t eliminate the variable of the ALSA MIDI layer, which could be introducing jitter…? But if the jitter were that bad, it would be unusable and they would have had to fix it a long time ago. (And I got good timing with the aforementioned SC-generated MIDI clock.) What does reduce the weight of that variable, though, is that the jitter was considerably reduced by pulling down the HW buffer size.
So my conclusion is that Pd is not immune to the hardware buffer size limitation… which I think is reasonable: if MSPuckette had a magic solution to the latency problem, everyone would have stolen it by now.
Yes, but 25 ms seems exceedingly high. This would roughly correspond to a HW buffer size of 1024 samples at 44.1 kHz. I would be curious to know your audio settings. However, I have already gone very off-topic, so if you are interested in working this out (I am!) then maybe either open a ticket on GitHub or send me a PM.
Yes, I was deliberately testing a poor performance case, to demonstrate that this isn’t uniquely SC’s problem. I do know how to configure my system for good realtime response, but that wasn’t the point of this test. I was concerned that some folklore might get started that Pd’s timing is substantially better and that we just need to fix timing in sclang… nope.
> Yes, I was deliberately testing a poor performance case,

Ahhh! You should have said that!
> to demonstrate that this isn’t uniquely SC’s problem
Of course, the hardware buffer problem is not unique to SC! I am sorry if I gave the opposite impression!
> I was concerned that some folklore might get started that Pd’s timing is substantially better and that we just need to fix timing in sclang… nope.
Yes! But that’s not the point I was trying to make. The thing that Pd can do – and SC can’t – is deterministic sequencing without additional latency. In particular, the issue that @Thor_Madsen is trying to solve wouldn’t exist in Pd in the first place. See again Server clock vs. Language clock - #39 by Spacechild1.