Realtime software considerations

This is unquestionably very cool. I think the main thing to note about domain-specific vs. general-purpose languages is the presence or absence of things like reliable RT performance. I don’t really know how that works in recent Python, so it may be fine, but I know it is something some people have run into when trying to write scsynth clients in general-purpose languages. Of course, usage has a big effect on that.

Sclang does not really have the reliable performance characteristics that would make it suitable for RT use. “Language jitter” is a known issue. That’s why sclang does not run directly in the audio callback and we need to schedule bundles into the future (-> Server latency).
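As a concrete illustration of that last point, here is a minimal sketch (assuming s is a booted Server.default with the stock \default SynthDef): s.bind wraps the enclosed messages in a bundle timestamped s.latency seconds ahead, so scsynth can start the synth sample-accurately even though the language itself jitters.

```supercollider
(
s.bind {                           // bundle is timestamped s.latency seconds in the future
    Synth(\default, [\freq, 440]); // scsynth applies the timestamp, absorbing language jitter
};
)
```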

Sclang uses a global method table to get O(1) method calls, but that’s about it. It still allocates memory and has a garbage collector.

1 Like

Okay. But incremental garbage collection is an example of improving real-time reliability, since it amortises the work? Likewise reliable recovery to logical-time scheduling when overloaded, etc.? Not all general-purpose languages have these sorts of features, especially if they are optimised for throughput. Not scheduling anything risky within the audio callback was actually highlighted as a ‘realtime’ feature when SC3 first came out, IIRC. But I am aware that there are many design decisions from more than 20 years ago, and that some of the choices may not make sense today.

I’m no specialist in this, and I’m aware that you’ve been dealing with this quite a lot in your work @Spacechild1. So I would be curious what you see most important in RT software in 2025. (Maybe we can start a separate thread or DM if you’re happy to continue, as I realise this is drifting off-topic. Mea culpa!)

This sounds very cool! I’m positive there are examples in the sc help files that are broken.


I’m not a specialist here either, but if I remember correctly CPython uses reference counting (alongside a cyclic collector), meaning memory reclamation is more deterministic than sc’s, at least for short-lived objects.

1 Like

Thanks! I’d be curious to know more. Reference counting handles the tracking, but I think what matters more is how the GC runs? SC does a bounded amount of GC work with each object allocation. There are disadvantages to this (and to reference counting as well, of course), but it avoids big collection cycles. (Certainly an old design now, though.)

In general, my understanding has been that the big divisions are real-time, soft-real-time (e.g. Erlang), and not real-time. I thought scsynth fell into the last category, but maybe it can be considered soft real-time? My intuition has always been that any efficiency gains Sclang has in its design are going to be swamped by the person-centuries (or -millennia) invested into speeding up many general-purpose languages, even accounting for the good fit incremental GC has for live coding music.

But that’s just an intuition! It would be fun to have a sort of Computer Language Shootout/Benchmarks Game for SC-compatible languages, where we measure average and worst-case latency for common simple programs or musical tasks. I’d be willing to submit some Vivid code if other users had interest.

It would be interesting to see, in particular, if there are any cases where sclang consistently avoids latency hiccups, while other languages trip up. My guess is there aren’t any/many, but there’s only one way to find out!
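To make that concrete, here is a rough sketch of the kind of measurement such a shootout could start from (sclang side only; the same loop would have to be ported to each client language): wake up repeatedly on SystemClock and record how late each wakeup fires relative to its logical time.

```supercollider
(
var lateness = List.new;
Routine {
    200.do {
        // physical time minus logical time = how late this wakeup fired
        lateness.add(Main.elapsedTime - thisThread.seconds);
        0.01.wait;
    };
    "mean lateness: % ms, worst case: % ms".format(
        (lateness.mean * 1000).round(0.01),
        (lateness.maxItem * 1000).round(0.01)
    ).postln;
}.play(SystemClock);
)
```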

2 Likes

Interesting the mention of Erlang… it gives each process its own completely isolated memory space - including heap, stack etc. This design choice means that garbage collection happens at the process level. When GC runs on one process, others continue uninterrupted. Per-process heaps are smaller, making collection pauses shorter and more consistent. This approach ensures that critical processes remain responsive even while others undergo garbage collection.

There is a ticket to implement this idea in Haskell, but I don’t know how it is going.

1 Like

I guess that depends on the use case. In many circumstances a late synth would still be useful (soft); in some cases it is not useful, but the failure is tolerable (firm). I don’t think most computer music RT systems would be hard? I’m not sure why one might categorise SC as not real-time, but perhaps there are different definitions in use?

I’m more or less thinking of these:

  • Hard – missing a deadline is a total system failure.
  • Firm – infrequent deadline misses are tolerable, but may degrade the system’s quality of service. The usefulness of a result is zero after its deadline.
  • Soft – the usefulness of a result degrades after its deadline, thereby degrading the system’s quality of service.

Yes, this is sort of what I was getting at. Again, the design choices were made a long time ago; what might one do differently now? I tend to think of Ross’s text about RT audio, but it is also over a decade old now, so I’d be curious about more recent sources! Ross Bencina » Real-time audio programming 101: time waits for nothing

1 Like

I’m more or less thinking of these:

  • Hard – missing a deadline is a total system failure.
  • Firm – infrequent deadline misses are tolerable, but may degrade the system’s quality of service. The usefulness of a result is zero after its deadline.
  • Soft – the usefulness of a result degrades after its deadline, thereby degrading the system’s quality of service.

Yes, that’s the classic categorization. Here’s how you would map it to SuperCollider:

  • Server RT thread (audio processing): firm realtime; a missed deadline results in an audio dropout, i.e. not fatal, but also not usable.

  • Server NRT thread (asynchronous commands): non-realtime; no deadlines need to be met.

  • sclang: soft realtime; late OSC bundles degrade timing accuracy, but the resulting audio may still be usable.

2 Likes

Yes, that’s pretty much what I thought.

Ok, just found the reference on the topic: this and this. He says this approach extends beyond a garbage collection model, so it is a bit unfair to lump it in with the others: it is in fact a new concurrency design and “failure model”. Comparing it with other GCs is more nuanced than it first appears.

Here are some things I’ve noticed about SC’s performance, and hence its latency, particularly regarding scheduled routines.

The performance can easily vary by 10% between otherwise identical evaluations. I suspect this is due to the GC being unpredictable.

The performance can vary by an even larger degree each time you recompile the language. This makes benchmarking changes to the class library hard/impossible. I don’t know what causes this; perhaps it’s memory fragmentation?

Performance degrades after allocating a large amount of memory: if you bench something, then allocate a large amount of small objects, then bench again, performance often decreases. This must be GC related, but I’m not too sure of the specifics.
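For anyone who wants to reproduce that last observation, here is a small sketch along those lines (the exact numbers will of course vary per machine and per run):

```supercollider
(
var test = { 100000.do { 2.0.sqrt } };
test.bench;                          // first measurement
1000000.do { Array.fill(4, 0) };     // allocate lots of small, short-lived objects
test.bench;                          // often measurably slower afterwards
)
```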

2 Likes

One thing we fail to do when benchmarking sclang is provide a better breakdown of how predictable a piece of code is. We usually just measure the average performance of a short block of code. Compare that to the additional information given by other benchmark libraries: criterion report
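Even without a proper benchmarking library, a slightly richer report is easy to hack together: run the block many times and post the min / mean / max per-run times instead of a single average. (The sorting snippet below is just an arbitrary stand-in for the code under test.)

```supercollider
(
var times = 100.collect {
    var t = Main.elapsedTime;
    100.do { (1..100).scramble.sort };   // code under test (placeholder)
    (Main.elapsedTime - t) * 1000;       // per-run time in milliseconds
};
"min % / mean % / max % ms".format(
    times.minItem.round(0.01),
    times.mean.round(0.01),
    times.maxItem.round(0.01)
).postln;
)
```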

Okay. But incremental garbage collection is an example of improving real time reliability since it amortises the work?

Yes. But I don’t know of any modern popular scripting language that uses naive stop-the-world GC. They typically use incremental GC (e.g. Lua) or generational GC (e.g. Ruby).

As already mentioned, reference counting is an interesting alternative. (Note that you still need some kind of tracing GC to deal with reference cycles.) Python is the most famous example, but it is also used by some embeddable scripting languages such as Fabrice Bellard’s QuickJS (QuickJS Javascript Engine) or Squirrel (http://squirrel-lang.org/).

The advantage is that memory management is fully deterministic. The major downside is that deletion of large objects or collections can in turn trigger the finalizers of many other objects and thus cause a long pause. There is no silver bullet…

Reliable recovery to logical time scheduling when overloaded, etc. likewise?

This type of scheduler can be written in any language though.
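To illustrate what that recovery looks like in sclang itself, here is a toy sketch: one iteration deliberately blocks the interpreter, yet the following wakeups land back on the original logical-time grid instead of drifting.

```supercollider
(
Routine {
    8.do { |i|
        var late = Main.elapsedTime - thisThread.seconds;    // how late this wakeup is
        "beat %: % ms late".format(i, (late * 1000).round(0.1)).postln;
        if(i == 3) {
            var t = Main.elapsedTime;
            while { Main.elapsedTime - t < 0.3 } { };         // simulate an overload
        };
        0.25.wait;                                            // wait in logical time
    };
}.play(SystemClock);
)
```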

Not scheduling anything risky within the audio callback was actually highlighted as a ‘realtime’ feature when SC3 first came out IIRC.

But that’s only the Server!


Since you asked about languages that are suitable for RT use, it might be good to distinguish again between “firm realtime” and “soft realtime”.

While sclang can handle “soft realtime” tasks, it is not suited for “firm realtime”, mainly because of memory allocations and garbage collection. This is the reason why JMC separated the language from the Server in SC3. The same is true of basically any other general-purpose scripting language.

However, there are indeed languages suited for “firm realtime”. This includes audio DSLs like gen~, Faust or JSFX. These languages are very restrictive in what you can do and they often compile to native code.

Another, maybe surprising, example would be Pd’s message system: all objects are allocated upfront and all messages are allocated on the stack (up to a certain size). As long as you don’t pass huge lists of atoms around, the message system itself is fully deterministic and therefore suitable for “firm realtime”. If you avoid certain non-realtime-safe objects or operations, you can safely run Pd’s message system in the audio callback.

2 Likes

(Side note for other readers: Erlang “processes” are lightweight threads, not OS processes)

That’s an interesting technique indeed. You can achieve a similar thing in Lua with Lua Lanes (Lua Lanes - multithreading in Lua). In fact, you can do this with every scripting language that supports multiple interpreter instances and does not have a global interpreter lock.

I have thought about this from time to time. You could have a main interpreter and one or more “helper interpreters” that run in separate threads; they can exchange messages but are fully isolated otherwise. Users could defer expensive tasks to a helper interpreter and asynchronously wait for the result, similar to asynchronous Server commands, but on the language level.

(sclang only supports a single interpreter instance, so “helper interpreters” would need to run as separate processes.)

The tricky part is finding a reasonably efficient way to exchange large objects between threads/interpreters, otherwise the serialization overhead defeats the whole idea.
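A rough sketch of what that could look like with today’s sclang, i.e. a second sclang process reached over OSC. The port number (57200) and the /helper/* addresses are made up for the example, and ~doExpensiveWork is a hypothetical placeholder.

```supercollider
// main interpreter: fire off a job and asynchronously wait for the answer
(
var helper = NetAddr("127.0.0.1", 57200);   // assumed port of the helper sclang process

OSCdef(\helperResult, { |msg|
    "helper finished: %".format(msg[1]).postln;
}, '/helper/result');

helper.sendMsg('/helper/run', 42);
)

// in the helper process, something along these lines:
// OSCdef(\helperRun, { |msg, time, addr|
//     var result = ~doExpensiveWork.(msg[1]);    // hypothetical expensive task
//     addr.sendMsg('/helper/result', result);
// }, '/helper/run');
```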

Exactly! :slight_smile:

Yes, it can be. Though I think some music libraries have not done this?

Fair of course. Though it’s probably fair to say that SC also gets a lot more stable if you are less dynamic, and avoid certain operations.

Alongside these common formal definitions of realtime we might consider more informal dynamic/musical/improvisatory ones as well. They’re relevant I think. If you think of what you’re doing as building an instrument that’s one thing, but I always felt one of the most beautiful things about SC was that at least to some extent that wasn’t the main design metaphor. Spawning processes in SC always felt so much more flexible and musical to me than triggering your synth; designing music more than designing a fixed instrument. Of course this makes things ‘softer’, but in some sense (maybe an artistic one) this is a strength.

1 Like

@Jordan I sometimes wonder if something hasn’t changed with this at some point.

It could just be overall system load varying? But in what sense do you understand the GC as being unpredictable? It should do the same amount of GC work for a given number of allocations, so presumably not that?

I did notice that bench often seemed slower the first time you try to test something. I’ve never figured out why, but I just did some tests and that doesn’t seem to be as extreme as it was. Maybe something at OS level?

Yes, I would agree that the dynamic nature of SC is one of its major selling points!

Just to be clear: sclang, and in particular the Class Library, has lots of features that make it particularly suitable for musical applications. After all, that’s what it has been designed for :slight_smile:

The most important language feature IMO is stackful coroutines (Routine) because they are the backbone of SC’s musical sequencing/scheduling model. Unfortunately, most modern languages went for stackless coroutines (in the form of generators or async/await). One notable exception is Lua.
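A tiny sketch of why that matters: a Routine can suspend anywhere inside its (arbitrarily nested) call stack with wait, and a clock resumes it in logical time, which is exactly what musical sequencing needs.

```supercollider
(
Routine {
    [60, 64, 67].do { |note|
        "note %".format(note).postln;   // stand-in for starting a synth
        0.5.wait;                       // suspends the whole call stack; the clock resumes it
    };
    "done".postln;
}.play(TempoClock.default);
)
```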

Then there are important Class Library features such as array operations, multi-channel expansion, the Pattern library, the Event Stream Player, etc.
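For example, those pieces combine into a sequenced event stream in a single line (assuming a booted default server); the array inside the degree pattern expands into a chord per event:

```supercollider
Pbind(\degree, Pseq([0, 2, [4, 7], 2], inf), \dur, 0.25).play;
```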

I just wanted to challenge the notion that sclang is particularly suited for (soft) realtime operations. In this respect it is not necessarily better than other scripting languages.

2 Likes
  • sclang: soft realtime; late OSC bundles degrade timing accuracy, but the resulting audio may still be usable

This labeling of “soft realtime” would apply to all languages that talk to scsynth though, correct? I.e., this is a definition of the requirements of the system, not a guarantee that a given language will meet a performance threshold?

I certainly had a lot more OSC bundles miss their deadlines back when I wrote complex sclang code, vs. now when I use Vivid (Haskell) for the same tasks. I can’t remember the last time I saw a late OSC message with Vivid.

I agree sclang has its advantages, but I’m skeptical that performance is one of them.

1 Like

This labeling of “soft realtime” would apply to all languages that talk to scsynth though, correct?

Yes!

I.e., this is a definition of the requirements of the system, not a guarantee that a given language will meet a performance threshold?

Modern CPUs are so fast that language performance itself is not that much of an issue anymore. Things like setting the right thread priorities can have a much bigger impact. I just remembered this issue for example: Bad sclang timer granularity on Windows 10 · Issue #5972 · supercollider/supercollider · GitHub