What I was thinking goes more like this: scztt raised the case of MIDI, where a large hardware buffer (say, nearly 100 ms) would effectively quantize outgoing MIDI to block boundaries, because a tick at 1000, covering roughly 1000.0 to 1000.1, would immediately run a task scheduled for 1000.002 and one scheduled for 1000.07 at the same time.
And you’re saying that they would run with logical time set appropriately – but you haven’t said whether they physically wake up as soon as possible after the tick time or not (though it sounds to me like they would).
If you get a tick for 1000.0, could the 1000.07 task wait 70 ms before firing?
I’m ignoring jitter in the tick transport layer, but is there any other reason why it has to be “pop, set logical time, go” instead of “pop, schedule for the real wake-up time and let that thread handle it”?
I guess that would be less efficient for small block sizes (which could be a deal-breaker), but more accurate for large blocks.
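To make the comparison concrete, here’s a rough Python sketch of the two strategies. The function and variable names (`pop_due_events`, `fire_immediately`, `fire_at_real_time`) are hypothetical illustrations, not the actual scheduler code – this just models a min-heap of scheduled tasks and the delay each strategy would apply within one tick’s block.

```python
import heapq

def pop_due_events(queue, tick_time, block_dur):
    """Pop every event whose logical time falls inside this tick's block.

    `queue` is a min-heap of (logical_time, payload) tuples, a stand-in
    for the scheduler's priority queue."""
    due = []
    while queue and queue[0][0] < tick_time + block_dur:
        due.append(heapq.heappop(queue))
    return due

def fire_immediately(due, tick_time):
    # "pop, set logical time, go": every event wakes at tick time,
    # so physical spacing inside the block is lost (outgoing MIDI
    # gets quantized to block boundaries).
    return [(t, 0.0) for t, _ in due]

def fire_at_real_time(due, tick_time):
    # "pop, schedule for the real wake-up time": each event is
    # deferred by its offset into the block, preserving the
    # intra-block physical timing.
    return [(t, max(0.0, t - tick_time)) for t, _ in due]

q = []
for t in (1000.002, 1000.07, 1000.25):
    heapq.heappush(q, (t, "midi-note"))

due = pop_due_events(q, 1000.0, 0.1)  # tick covering 1000.0 .. 1000.1
print(fire_immediately(due, 1000.0))  # both events fire with zero delay
print(fire_at_real_time(due, 1000.0)) # delays of ~2 ms and ~70 ms
```

With a 100 ms block, the first strategy fires both events up to 70 ms early; the second keeps them 68 ms apart at the cost of managing per-event timers, which is the small-block-size efficiency concern above.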
FWIW I’m well out of my depth here – speculating. I wouldn’t be surprised if there’s a very good reason why not to do that.
hjh