How to use s.makeBundle with a specific timestamp

Hello

So I love the ability to specify a latency as the first argument of s.makeBundle and get precise timing… but I’d like to be able to make it absolute instead of relative. In my case, a synth triggers a function, which receives a [time] tag sent from the server. I’d like that function to trigger another synth at a specific time after that server-given timestamp (i.e. an absolute time in the future).

Is there a way to do that?

thanks!

The timestamps that are sent over are absolute, per the OSC standard.

In SC, the absolute time is calculated as “now” + latency, where “now” is the current logical time in seconds, i.e. SystemClock.seconds.

So if you know the absolute time in seconds within this time base (which should match the time base used when sclang receives OSC messages), then you can subtract SystemClock.seconds and that should do it.
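
For example (a minimal sketch; ~targetTime stands for the absolute time you computed from the server-given time tag, and \ping is a hypothetical SynthDef):

    s.makeBundle(~targetTime - SystemClock.seconds, {
        Synth(\ping);  // the bundle is stamped so this starts at ~targetTime
    });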

hjh

ok great thanks. There is one thing though:

I thought that the timestamp would have been provided by the server, not the language. Are they synced somehow?

p

It’s not possible to do it that way.

One of the main points of timestamps is to control network jitter. If the message is sent first, and then the timestamp gets resolved in the server relative to the time of receiving the message, the receipt time is affected by network jitter and then you almost might as well not use timestamps at all. The timestamp has to be generated in the language, relative to a stable time base (which internally is a “high resolution clock” IIRC).
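
For illustration, compare a timestamped bundle with a bare message (both using the built-in \default SynthDef):

    s.sendBundle(0.2, ["/s_new", \default, s.nextNodeID, 0, 1]); // executes 0.2 s after logical "now"
    s.sendMsg("/s_new", \default, s.nextNodeID, 0, 1);           // executes on receipt, subject to network jitter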

Here I forget some details, but basically, yes.

OSC timestamps are always absolute: “Time tags are represented by a 64 bit fixed point number. The first 32 bits specify the number of seconds since midnight on January 1, 1900, and the last 32 bits specify fractional parts of a second to a precision of about 200 picoseconds.” So the question then is, how do sclang and scsynth know how many seconds have elapsed since 1900? On the same machine, this is based (I think) on the same OS-level clock (“std chrono high resolution clock” or something like that). On separate machines, we assume both machines’ clocks are NTP-synced (if the client machine thinks the time is 9:00:30 and the server machine thinks it’s 9:00:25, timestamps would execute 5 seconds late, and there’s nothing SC can do about that).
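
For illustration, sclang can derive that count from Unix time; 2208988800 is the standard offset between the 1900 (NTP) epoch and the 1970 (Unix) epoch:

    (Date.getDate.rawSeconds + 2208988800).postln; // seconds elapsed since 1900-01-01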

There’s a bit more to it than that (macOS has functions in both language and server to resync the OSC offset to the system clock every 20 seconds or so, Linux has another mechanism) but that’s the gist.

A few of the details are a bit fuzzy in my memory this morning, but I think that’s more or less how it works.

hjh

I enjoy reading posts about SuperCollider timing on this forum. Discussions about improving the timing for hard sync (perhaps sample-accurate) between scsynth and the language have come up periodically for over a decade.

https://github.com/supercollider/supercollider/issues/2939
https://scsynth.org/t/server-clock-vs-language-clock/9135
https://scsynth.org/t/keeping-sclang-and-scsynth-in-hard-sync/5526/24

Have I missed any improvements?

Personally, I distrust macOS’s syncOSCOffsetWithTimeOfDay. Situations where NTP jumps certainly exist, and the server and the language recognize and correct a jump at different times, which results in late messages. This phenomenon does not occur often, but it does occur occasionally.

I’d like to defend SC’s current approach, but… I’m also having problems now in Linux. Will have to reboot and hope that fixes it.

  1. I was using my LC environment quite OK.
  2. I put the computer to sleep to have a snack and a cup of coffee.
  3. Woke from sleep, connected a different audio device.
  4. BOOM… “node not found” during the LC environment’s startup script which had worked 15 minutes prior.

Uh. What.

FAILURE IN SERVER /s_new Group 14 not found

OK, let’s backtrack:

latency 0.2	SysClock logical time 641.285197649	thisThread's logical time 641.285197649
	[21, 12, 1, 1]
	[21, 13, 1, 12]
	[21, 14, 1, 12]
	[9, 'mixers/Mxb2x2', 15, 1, 12, 'busin', 12, 'busout', 8, 'pan', 'c8', 'level', 'c10', 'clip', 'c9']
	[23, 1, 4]

So group 14 should be up and running at 641.285 + 0.2 = 641.485.

latency nil	SysClock logical time 641.652887002	thisThread's logical time 641.652887002
	["/d_recv", "data[ 509 ]", [9, 'temp1008', 1000, 1, 14, 'i_out', 12, 'out', 12, 'outbus', 12]]

The failing command is running well after 641.485.

ON: [644.469629545, [/n_go, 14, 12, 13, -1, 1, -1, -1]]

/n_go for 14 is received FULLY THREE SECONDS LATE.

Uh. What.

Remember, this code worked fine at 2 pm and failed at 2:30 pm. Rebooted the interpreter, no luck. Multiple tests.

Intellectual curiosity aside, I have to reboot (or at least reboot ALSA and Pipewire) now to see if I can get back to work… can’t stay in this condition just to investigate.

But… uh. What.

hjh

PS The issue didn’t immediately resolve after a reboot (“uh, what”), but it did resolve after unplugging the other audio device, booting my LC system on the built-in hardware (no timing errors), recompiling the class lib, reconnecting the audio device, and rebooting the LC system; then no errors. I mean…

rant of dubious hinged-ness

like, work has got a lot of extra junk tasks lately and I’ve got a show in one week. I don’t have time for SC to flip out for half an hour in the only window I’ve really had in over a month to work on this show.

Just reminding myself that I’d be less happy using Max… :laughing:

Indeed, that sort of dark magic to get things working… though I never dare to put the machine to sleep between soundcheck and gig anymore - the iPad-Mac connection over link-local tends to get ‘creative’ - but so far SC has behaved like other software still doesn’t.

Anyway, good luck with the gig!

Ideally, the CoreAudio backend should gradually ramp the OSC offset to the new value. This is called “clock slew” and it’s also what good NTP clients should do. However, they might do a jump if the time difference is deemed too large. JMC himself has mentioned this in one of his talks.
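
To illustrate the idea (a toy sketch, not the actual CoreAudio code): instead of jumping to a new offset, a slewed clock moves toward it in bounded steps:

    (
    var offset = 0.0, target = 0.5, maxStep = 0.001; // illustrative values
    // each sync tick moves the offset by at most maxStep toward the target
    1000.do {
        var err = target - offset;
        offset = offset + err.clip(maxStep.neg, maxStep);
    };
    offset.postln; // -> 0.5, reached gradually rather than in one jump
    )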

At the very least we could print a warning when the difference between the new and the old OSC offset is above a certain threshold.

On Linux and Windows, we sample the current NTP time in every audio callback and use a DLL filter to get rid of the jitter and estimate the “real” sample rate and control period. However, if the NTP client does a jump, this can seriously mess up the DLL filter. I think it would be good to detect such jumps and reset the DLL filter.
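
For the curious, here’s a rough sketch of what such a filter does, modeled on the second-order DLL from F. Adriaensen’s paper “Using a DLL to filter time” (the periods and gains below are illustrative assumptions, not SC’s actual values):

    (
    var tper = 64 / 48000;         // nominal control period: 64 samples at 48 kHz (assumed)
    var omega = 2pi * 0.1 * tper;  // loop bandwidth parameter (assumed ~0.1 Hz)
    var b = 2.sqrt * omega;        // phase-correction gain
    var c = omega * omega;         // period-correction gain
    var nper = tper;               // running estimate of the true callback period
    var t1 = 0.0;                  // predicted system time of the next callback
    ~dllUpdate = { |tMeasured|     // tMeasured: system time sampled in the callback
        var e = tMeasured - t1;    // error between measurement and prediction
        var t0 = t1;               // smoothed (jitter-free) time for this callback
        t1 = t1 + nper + (b * e);  // nudge the next prediction toward the measurement
        nper = nper + (c * e);     // nudge the period estimate
        // a very large |e| (e.g. after an NTP jump) should reset the
        // filter state instead of being filtered; that's the point made above
        t0
    };
    )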

Shouldn’t clock slew be implemented on both the server and the client? Even if the server gradually moves to a new offset value, it seems like there would be an issue if the client sends a jumped NTP value… or am I wrong?

For now, for my personal use, I’m building and running a version of scsynth that doesn’t use gOSCoffset. I’ve removed gOSCoffset from the CoreAudioHostTimeToOSC function, and on the client side I’m using AudioGetCurrentHostTime as the timestamp for the OSC bundles. (I use my own clients rather than sclang.)

I consider this merely a temporary measure. Sometimes I need to use Windows or Linux (like on a Raspberry Pi). For short events like performances, I might simply turn off the system’s NTP synchronization, but for a long-running sound installation project, I’d be a bit worried. I thought the time DLL filter on Windows and Linux was a better solution, and I wondered why it wasn’t applied to macOS, but it seems it isn’t completely flawless either.

Could it be that most NTP jump issues aren’t a big problem in the community because they appear intermittently and usually resolve through resynchronization within a short time (up to 20 seconds)? Yet, a reliable timing system is surely the most critical component in music software.

Because of this, I previously worked exclusively with server-side sequencing. I did it as a way to work around the issue, but the server-side approach has its own advantages and appeal (creating sequences using only UGens feels genuinely like a modular system), so I was immersed in that method for a while. https://www.youtube.com/watch?v=P9QaPtrPJbs
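
As a flavor of what that looks like (a generic sketch, not my actual setup):

    (
    {
        var trig = Impulse.kr(8);  // the clock is generated on the server itself
        var freq = Demand.kr(trig, 0, Dseq([220, 330, 440, 550], inf));
        (SinOsc.ar(freq) * Decay2.kr(trig, 0.01, 0.2) * 0.2) ! 2
    }.play;
    )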

In any case, I believe that SampleClock will resolve latent timing-related issues while simultaneously opening up new functional possibilities. Am I right?

Caught in the act…

late 0.000213978
late 0.054429375
late 0.062910356
... snip
late 0.762310006
late 0.763666950
late 0.768959200
late 0.770127025
late 0.769154586
late 0.769885076
late 0.770846420
late 0.771802519
late 0.773511679
late 0.772478317
late 0.770858653
late 0.770437122
late 0.770506473
late 0.769611206
late 0.767849014
late 0.765777899
late 0.765355574
late 0.764849787
... snip
late 0.020505189
late 0.020806794
late 0.021485828
late 0.016405815
late 0.016756707
late 0.012587336
late 0.012008915
late 0.007875672
late 0.007557952
late 0.003338960
late 0.004758438
late 0.002486279
late 0.002258766

That’s cutting out a lot, but… it just suddenly started making late messages, and it got later and later until about 0.77-something seconds late, then it gradually reduced back down to 0.

hjh

Shouldn’t clock slew be implemented on both the server and the client? Even if the server gradually moves to a new offset value, it seems like there would be an issue if the client sends a jumped NTP value… or am I wrong?

Clock slew only makes sense when you’re trying to synchronize one time source with another. In the Server we’re trying to synchronize sample time to NTP time. In sclang, however, everything is scheduled on (logical) NTP time. In that case, how would you even know that the clock has jumped (as long as the new timestamp is larger than the last one)?

I thought the time DLL filter on Windows and Linux was a better solution, and I wondered why it wasn’t applied to macOS, but it seems it isn’t completely flawless either.

In theory, the time DLL approach is not realtime-safe because it involves calling a system function like gettimeofday on the audio thread. That’s why JMC’s CoreAudio backend polls the time on a dedicated thread. In practice, the implementations seem to be non-blocking; at least on Linux we could verify this in the source, but on Windows we can only make assumptions.

One potential issue with the time DLL (which I think I have already mentioned somewhere) is that sudden larger jumps in time can mess up the filter state. IMO we should detect such jumps and reset the filter. (We also get the same problem when something is blocking the audio thread for an extended period of time.)

In any case, I believe that SampleClock will resolve latent timing-related issues while simultaneously opening up new functional possibilities. Am I right?

Yes, but if you just want to address the issue of NTP time jumps there is a much simpler solution:

Just make sure that the server and the client use the same monotonic clock source.

There is no reason for using NTP time to synchronize processes on the same machine. We can just as well use the CPU clock.


BTW, I would not create a new Clock object, but instead have a global option that makes all Clocks use the sample time as their time source.

Yes! So, I have removed gOSCoffset and am building and using a version that utilizes AudioGetCurrentHostTime() on both the server and the client side. Since then, I have not encountered a timing issue.

I think it would be great if there were an option to just use the system’s monotonic time when the server and client are on the same machine, across all platforms. However, I suspect the developers wouldn’t want to implement features that are specific to certain server/client scenarios.

But aren’t use cases involving communication between multiple computers actually the more specialized scenario… apart from things like the “Stanford Laptop Orchestra”? :sweat_smile:

Shouldn’t SuperCollider focus more on perfect timing within a single machine? If I wanted tight synchronization for multi-user ensemble performances across several computers right now, I would probably use LinkClock.

I’ve had bad experiences with LinkClock when NTP was disabled. It’s no magic bullet.

To be honest (and frank), I was super excited about LinkClock round about… 2018, 2019? But the track record hasn’t really panned out.

Even the first time I tried it, doing a duo concert in 2019 with a student (best student I ever had), “let’s use LinkClock” but it just didn’t work at all. We tried Ableton ↔ SuperCollider, mobile metronome app ↔ SuperCollider, multiple wifi routers, they just couldn’t see each other.

I had some successes with it, at least twice it worked perfectly on stage.

But there were times it just wouldn’t connect, or times when it connected but beat sync drifted off by half a beat over time, or times when it worked pretty well for a while and then just dropped the connection mid-show (even using a dedicated router).

It’s left me rather disenchanted with the approach. Ableton did it just about as well as it’s possible to do, but… not good enough :man_shrugging: . Not reliable.

So when I worked with a guy playing an Elektron drum machine, or with modular folks, I used MIDI clock out from SC. No :cow::poop:, just run it and it works.

hjh

Thank you for sharing your experience. In that case, Link must be difficult to try out in an important performance. It’s truly ironic that MIDI Clock is still the primary choice for device synchronization, just like it was 20 years ago when I first went to music college. …in the age of AI, no less! :smiling_face_with_tear:

That’s what I have been saying.

Note that you still have to relate the monotonic clock time to the sample time. This is particularly true on platforms other than macOS where the audio callback does not have a precise OS timestamp.

In practice, we could keep all the existing synchronization code and just add an option that replaces all system clock calls (gettimeofday, std::chrono::system_clock, etc.) with a monotonic clock source.

Shouldn’t SuperCollider focus more on perfect timing within a single machine?

I would say yes!

I’ve never had many problems with Link so far - it is limited in regard to its musical parameters, but I’ve never had problems with its network discovery or clock drift. It does demand that it can broadcast (using 255.255.255.255, which some routers and especially public wifis have disabled) and that the operating system has NTP set up properly. Maybe the SC LinkClock implementation is a bit faulty; Ableton provides a test suite for Link, and I think we never checked our implementation against that test suite…

I’ve always thought about wrapping the UDP packets into WebSockets so that Link could also be used for synchronization with devices over the internet / in browsers - could be interesting, but I don’t know if this would even work…

Regarding syncing → maybe the WebAudio implementation in Firefox (see firefox/media/libcubeb in the mozilla-firefox/firefox repo on GitHub) could help as a source of guidance - they also account for the drift between multiple devices (e.g. different I/O devices) and for failover if e.g. the device gets unplugged - two things which would be great for SC.

I also know that WebAudio will support access to precise CPU timing within the audio thread next year. I always thought that this would be an expensive operation, but it seems this is stored in some register? At least the developer told me that it isn’t too hard or too demanding. But I don’t know how relevant this is; clocks are really…

I bought a wifi router to use only for performance networking, and found that it works sometimes and fails at other times (e.g. the electronic ensemble concert where my SC LinkClock got separated from the other machines, after working correctly for some time – had broadcast been blocked, or had a firewall been up, it would have simply not worked at all).

Maybe I’m just unlucky? I’m still willing to try it but my results have been mixed.

The sclang test suite for LinkClock is based on Ableton’s testing specs, but you’re correct that it might not rule out bugs in the underlying implementation.

I think, though, that this wouldn’t affect networking. In sclang, we just create an instance of the Link object and repeatedly ask it what time it is. The Link object is fully responsible for networking; I’m quite sure sclang isn’t doing anything to interfere with that.
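
For reference, the sclang-side surface really is small; a minimal sketch of typical usage (assuming the default server’s latency):

    l = LinkClock.new.latency_(s.latency); // joins (or starts) a Link session
    l.numPeers;                            // peers currently visible on the network
    l.play({ |beats| beats.postln; 1 });   // schedule on it like any TempoClock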

FWIW there are Ableton forum threads about Link failures across multiple software packages, not unique to SC. My conclusion is that the network is a weak link, pardon the pun. There are some wifi configuration options that may help (which I admit I haven’t tried).

hjh