What limits the maximum server workload in SC?


Since I couldn’t quite find the information I was looking for on the internet, I am asking you:

When I see the server workload of SC getting close to 100%, my CPU and RAM are only at maybe one third of capacity. Is it possible to make SC use more of my notebook’s capabilities? What options do I have?
Just kinda curious.


The relevant factor is not memory or sustained CPU throughput, so, no. Also, scsynth uses one core for DSP while the OS generally measures the average over all cores, so the OS measurement will be lower. (If you have 4 cores, a single-threaded app will max out at 25%.)

Your audio hardware configures a buffer for a frame of audio data. Let’s say it’s 512 samples. If your sample rate is 44100, then the duration of that frame is 512/44100 ≈ 0.0116 seconds, or about 11.6 ms.

The audio hardware issues a system interrupt roughly every 11.6 ms, and the interrupt handler calls into every audio app to get the next block of audio. So SC, plus every other audio-producing app, has about 11.6 ms to produce the 512 samples. If SC reports 5% CPU usage, it means it took 5% of that 11.6 ms to complete its block. By 70-80%, you’re at risk of an occasional frame running slower than the deadline and dropping out.
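The arithmetic above can be sketched as a quick calculation (using the hypothetical 512-sample buffer and 44100 Hz sample rate from the example):

```python
SAMPLE_RATE = 44100
BUFFER_SIZE = 512

# Duration of one hardware buffer: the deadline for producing each block of audio.
deadline_ms = BUFFER_SIZE / SAMPLE_RATE * 1000
print(f"deadline per callback: {deadline_ms:.2f} ms")  # ~11.61 ms

def time_used(load_percent, buffer_size=BUFFER_SIZE, sample_rate=SAMPLE_RATE):
    """Milliseconds of DSP time per callback implied by a reported server load."""
    return load_percent / 100 * buffer_size / sample_rate * 1000

print(f"{time_used(5):.2f} ms")   # ~0.58 ms of the ~11.61 ms budget
print(f"{time_used(80):.2f} ms")  # ~9.29 ms -- little headroom left for a slow frame
```

So the reported percentage is a fraction of the per-block deadline, not of the machine’s total sustained throughput.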

OS CPU measurements look for sustained high activity, but real-time audio processing happens in short bursts that are highly time sensitive. So the OS doesn’t know to prioritize it.

About the only things you can do to get more juice are: 1/ disable hyperthreading (common in recent Intel chips – it’s good for throughput workloads like databases, but not for real-time audio) – search online; this is a BIOS setting, you won’t find an OS control panel for it; 2/ disable CPU frequency scaling (a hardware setting) and set the machine to a high-performance mode (the OS might have a control panel for this). Frequency scaling is bad for real-time audio because, as you noticed, the CPU doesn’t look busy when measured by throughput, so the system may throttle to a lower clock speed, and then your audio calculations run slower.
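On Linux, you can check which frequency-scaling governor each core is currently using before changing anything. A minimal sketch, assuming a Linux system that exposes the cpufreq interface under /sys/devices/system/cpu (it returns an empty result elsewhere):

```python
from pathlib import Path

def read_governors(base="/sys/devices/system/cpu"):
    """Return {cpu_name: governor} from the Linux cpufreq sysfs interface.
    Returns an empty dict if the cpufreq interface is not present."""
    result = {}
    for gov in sorted(Path(base).glob("cpu[0-9]*/cpufreq/scaling_governor")):
        # .../cpu0/cpufreq/scaling_governor -> cpu name is two levels up
        result[gov.parent.parent.name] = gov.read_text().strip()
    return result

if __name__ == "__main__":
    for cpu, governor in read_governors().items():
        print(cpu, governor)
```

If the governor reads `powersave` or `ondemand`, switching it to `performance` (e.g. via your distribution’s cpupower tooling) is what the advice above is about.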



Wow, that really made a huge difference!

After doing the two things you suggested, sounds that used to push the server workload to about 100% now stay between 20% and 40%.

Now only two CPU cores seem to be active, instead of the four that showed when hyperthreading was enabled. I don’t know whether that makes any sense.

I am using a ThinkPad T430 with Manjaro Linux.
Regarding frequency scaling, I consulted this: https://wiki.archlinux.org/index.php/CPU_frequency_scaling#Scaling_governors

Would you say that using a real-time kernel adds any benefit?
I don’t really sense a difference.

It does – Google hyperthreading. With it enabled, each physical core presents itself as two logical cores, so disabling it halves the core count the OS reports.

I’m using something similar in Ubuntu Studio; it seems to work well.

Low latency kernel, yes. Real-time kernel, no. A low latency kernel is likely to handle real-time audio noticeably better than a vanilla kernel. A real-time kernel will not be much better than a low latency kernel.



Interesting topic – I will also post in a separate thread because I’m on macOS. Just to report: I deactivated hyperthreading (MacBook Pro, Catalina) and saved roughly 10% CPU. Concerning frequency scaling, I haven’t found consistent information yet. It seems to be more problematic with MacBooks because of temperature, no?

You might want to check this thread. @scztt made some valuable suggestions and I ran some quick tests.
