Patterns with external MIDI timing

This is something I’ve been struggling with for a few years now, but at least I’ve found a workaround, which may give a clue to more advanced users as to what goes wrong.

Here’s the thing: I often create scores by first calculating a large number of patterns and combining them with Pseq/Ppar/Ptpar. All these calculations happen before .play is called on the final pattern. The patterns use (\type, \midi) to send the MIDI to external synths.
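
To give an idea of the shape of the code (the names and values below are just placeholders, not my actual score), each little fragment looks roughly like this, and a few hundred of them get combined:

MIDIClient.init;
~midiOut = MIDIOut.newByName("MySynth", "MIDI In"); // hypothetical device/port names

~fragment = Pbind(
    \type, \midi,       // send events as MIDI rather than to the server
    \midiout, ~midiOut,
    \midicmd, \noteOn,
    \chan, 0,
    \midinote, Pseq([60, 64, 67], 1),
    \dur, 0.25
);

// hundreds of such fragments get combined, e.g.:
~score = Pseq([~fragment, Ppar([~fragment, ~fragment])], 1);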

Yet, once the number of patterns I calculate becomes large enough, the first notes of the first pattern end up completely mistimed or even go missing, as if the calculations were still ongoing when .play was activated: the first notes get crammed into whatever time is left in the first beat (often producing some kind of chord where I expect separate notes).

The workaround I have found is to play my patterns as follows:

~score = ~calculate_score.();
fork {
	1.wait;
	~player = ~score.play;
}

as opposed to what I’d intuitively do:

~score = ~calculate_score.(); // generates a few hundred short patterns and Pseq/Ppars them together
~player = ~score.play;

Since the calculations should be completely finished by the time .play is called, I cannot really make heads or tails of this behavior. Does this happen for anyone else as well?
Changing the MIDIOut latency (both increasing and decreasing it) has no observable effect on this phenomenon.
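
For clarity, by changing the latency I mean something like this (~midiOut standing in for whatever MIDIOut the patterns use):

~midiOut.latency = 0.0; // tried lowering it...
~midiOut.latency = 0.5; // ...and raising it; neither makes any difference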

I think it has to do with the difference between logical time and physical time.

In your second example, without wait, physical time has advanced but event timing is calculated relative to logical time. Logical time has not advanced. If it took one second to calculate the score, then the first second’s worth of events are late according to logical time.

The missing piece of the puzzle is that logical time is still the same as it was when you started calculating (and it needs to be so).

The wait is what allows the logical time to advance (and this needs to be at least as long as the time the calculations took – which you could find with Main.elapsedTime - thisThread.clock.seconds and convert to beats if needed).
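
A sketch of that idea (assuming the routine is forked on a TempoClock, and ~score holds your combined pattern):

fork {
    // physical time (Main.elapsedTime) is ahead of this routine's logical time
    // by however long the score calculation took; wait that long before playing
    var lateByBeats = thisThread.clock.secs2beats(Main.elapsedTime) - thisThread.clock.beats;
    lateByBeats.wait;
    ~player = ~score.play;
}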

Edit: Meant to include Scheduling and Server timing | SuperCollider 3.12.2 Help

hjh

Ah! That link is very useful. I tried this:

fork {
        [thisThread.clock.beats, thisThread.clock.elapsedBeats].postln;
        ~player = ~score.play;
}

and get: [ 4740.134994767, 4740.626046031 ]
so indeed it took about half a second to generate the score, and that would be causing the problem.

Would it be correct from now on to write something like the following? Maybe there’s a cleaner way. (I was hoping to find something like TempoClock.sync.)

fork {
    (TempoClock.beats2secs(thisThread.clock.elapsedBeats)
        - TempoClock.beats2secs(thisThread.clock.beats))
    .debug("latency compensation").wait;
    ~player = ~score.play;
}

I noticed in the past that the problem didn’t occur as quickly when using internal synthesis with the default instrument, but that must have been because the default server latency can compensate for at least some of the setup time.

It looks like the right idea. If you’re forking on a TempoClock, it’s likely better to keep everything in beats rather than converting to seconds – it may be enough just to write (thisThread.clock.elapsedBeats - thisThread.clock.beats).
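
For example (a sketch, staying entirely in beats):

fork {
    // elapsedBeats follows physical time, beats is the routine's logical time;
    // their difference is how far behind the routine already is
    (thisThread.clock.elapsedBeats - thisThread.clock.beats).wait;
    ~player = ~score.play;
}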

hjh

I had wrongly assumed that wait works in seconds, but it’s in beats. Thanks!

It’s the clock that controls how the wait duration is interpreted. If a routine is running on SystemClock, then the wait time will be seconds. If a TempoClock, then it’s in beats.
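
A quick way to see the difference (the tempo and messages here are arbitrary):

// 1.wait lasts one second on SystemClock...
Routine({ 1.wait; "one second later".postln }).play(SystemClock);

// ...but one beat (0.5 seconds at tempo 2) on a TempoClock
Routine({ 1.wait; "one beat later".postln }).play(TempoClock(2));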

hjh
