There’s something that’s been a little hazy in my understanding of SuperCollider for a while now, and I was hoping to get a bit of an explanation here.
If I’m on a quad-core machine, SuperCollider will, by default, use all four cores – correct? So I’m curious what supernova does, exactly. My understanding is that it somehow improves CPU headroom, but I’m unclear about how – and by how much?
Technically, the multiple threads of the three SuperCollider executables may run on any available cores at any time. Don’t forget that SC consists of a code editor, a language compiler + virtual machine, and an audio engine, and these three components run in separate processes – there is no single SuperCollider process! Since each of these runs multiple threads, it is possible for the SuperCollider environment to be using all four cores simultaneously.
But, usually what people mean when they say “using cores” is real-time signal processing. You didn’t specify this, but I’ll assume that’s what you meant.
scsynth, the default server, has one DSP thread only. It will not use multiple cores simultaneously for signal processing. So your assumption was not correct. (Note, though, that it’s good for performance if language operations, GUI interactions etc. are running on other available cores instead of competing with the DSP thread for CPU cycles. So the fact that scsynth uses only one core for DSP does not mean that a multicore machine is useless for SC! Far from it – you want non-DSP to be running alongside, not competitively against, DSP.)
Supernova manages multiple DSP threads, up to the number of cores. If you use ParGroups correctly, it can parallelize signal processing. The performance gain depends on the OS, system configuration, use case, etc. Some users get quite good results from it; others have reported slower performance with supernova.
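To make that concrete, here’s a minimal sketch of the ParGroup pattern (~par is just an illustrative name; this assumes a supernova build is installed and selected before booting). Nodes inside a ParGroup have no guaranteed execution order, which is what lets supernova distribute them across its DSP threads:
//switch the server program from scsynth to supernova, then boot
Server.supernova;
s.boot;
//then, once the server is up: independent synths in a ParGroup may be
//processed in parallel; on scsynth, a ParGroup behaves like a plain Group
(
~par = ParGroup.new(s);
8.do {
    { SinOsc.ar(exprand(200, 800), 0, 0.01).dup }.play(~par);
};
)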
Just to add to this: if you run multiple servers, not just the default server, the OS will most likely put these on separate cores. This takes some work and planning, and you cannot send audio between servers (very easily), so it is probably a last resort for tackling CPU usage issues.
Hey @Sam_Pluta, would you mind sharing how you get audio between servers? Maybe you’ve already posted something in the forum? I LOVE this approach of using multiple servers instead of supernova on Apple silicon, and it’s working much better for me than supernova is (arm64 release). But I failed today when trying to pipe the output from one server into a reverb Synth running on another with a Bus.audio(), which is something I’d like to be able to do. Thanks so much for any help you can offer!
First of all, this requires BlackHole and the Aggregate Device feature of macOS. I don’t know if there is an equivalent in Linux or PC land.
You need to make an Aggregate Device that has BlackHole (16ch) as the first 16 channels and your audio interface after that. You can use the 2-channel version of BlackHole, but the channel numbering will then be different. You can make as many Aggregate Devices as you like, so I have different ones set up to work with my Mac audio, my UFX, my UCX, etc.
Then the following should work:
//set the local options to use the Aggregate device of BlackHole plus Mac outs (which you make in Audio Midi Setup)
Server.local.options.device_("BlackMac");
//make 4 servers, each with its own id
(
var id = 57100;
~servers = 4.collect { |i|
    Server.new("server" ++ i, NetAddr("localhost", id + i), Server.local.options).boot
};
)
//see the servers in the array - you should also look in Activity Monitor and see a bunch of scsynth instances
~servers.postln;
//each of the servers sends its audio out on channels [0,1], which are the first two channels of BlackHole (you won't hear anything at this point)
(
{SinOsc.ar(200, 0, 0.1).dup}.play(~servers[0]);
{SinOsc.ar(300, 0, 0.1).dup}.play(~servers[1]);
{SinOsc.ar(400, 0, 0.1).dup}.play(~servers[2]);
)
//on the fourth server (~servers[3]), we take the input from [0,1] on BlackHole and output to channels [16,17], which are the first two channels of the audio interface
{Out.ar(16, SoundIn.ar([0,1]))}.play(~servers[3]);
My whole system is set up to work this way. I used TotalMix to do this for years, but switched to this method because it allows me to send audio between Servers even when I am just using the stock mac audio interface. The latency is no different.
On Linux you can do this with JACK; qjackctl gives a very nice node-based layout of all the active audio programs and their respective inputs and outputs.
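If you want to script those JACK connections from sclang rather than clicking in qjackctl, a hedged sketch (the client and port names here are assumptions – check qjackctl’s Connections window for the actual names on your system):
//wire the first server's outputs into the fourth server's inputs
"jack_connect server0:out_1 server3:in_1".unixCmd;
"jack_connect server0:out_2 server3:in_2".unixCmd;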
Hey, thanks so much! I really appreciate the help. I’ll do some more research and tinkering now that I have a (BlackHole) direction to head in. And I’m very glad to know that JACK will work for Linux! I wonder if SC would benefit from having this type of capability built in, so that it could be platform-independent. There are lots of wonderful things about using multiple servers for parallel audio. I haven’t discovered any downsides yet, but I’m still exploring.
I think the complexity of implementing a SC-specific inter-process communication protocol for audio is likely to be prohibitive, considering developer resources. Also, by using inter-app audio tools for your OS, you can interface with non-SC software. (Some of my live sets incorporate VCV Rack, which pipes a stereo feed into a couple of scsynth input buses.)
However, SC is a “do-ocracy” and if someone implemented it in C++, it would be reviewed.
Yeah, I understand how limited coding resources can be in community projects; I’ve done a lot with openFrameworks over the past several years, which is similar in nature. I sure wish I could just pass some shared_ptrs amongst servers! But the BlackHole solution will work OK too.
Anecdotally, supernova seemed to work better with the x64 release (3.13.0-rc1) and Rosetta than with the arm64 release.
A trick I learned from @Hunter_Brown is that the servers can also have different blockSizes. In my setup (and I believe Hunter’s), I am running almost everything with a blockSize of 64, but some single-sample feedback stuff goes on its own server with a blockSize of 1. See the example below. In this example, one of the servers running 400 oscillators will have a much higher CPU level than the other, since single-sample processing is much less efficient. But they both sum to the fourth server just fine.
//set the local options to use the Aggregate device of BlackHole plus Mac outs
Server.local.options.device_("BlackMac");
//make 4 servers, each with its own id
(
var id = 57100;
~servers = [64, 64, 1, 64].collect { |item, i|
    var options;
    options = Server.local.options.copy;
    options.blockSize_(item);
    Server.new("server" ++ i, NetAddr("localhost", id + i), options).boot;
};
)
//400 oscillators on a blockSize-64 server
{Splay.ar(SinOsc.ar(Array.fill(400, {rrand(1000, 2000)}), 0, 0.001))}.play(~servers[0]);
//the same 400 oscillators on the blockSize-1 server - compare the CPU readouts
{Splay.ar(SinOsc.ar(Array.fill(400, {rrand(1000, 2000)}), 0, 0.001))}.play(~servers[2]);
//as before, route BlackHole channels [0,1] out to the audio interface
{Out.ar(16, SoundIn.ar([0, 1]))}.play(~servers[3]);
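To show why the blockSize-1 server earns its keep, here is a minimal single-sample feedback sketch of my own (not Hunter’s actual patch). LocalIn/LocalOut feedback is delayed by one block, so with a blockSize of 1 the loop closes every sample:
//one-sample feedback loop - only truly single-sample on ~servers[2]
(
{
    var fb = LocalIn.ar(1);    //read the previous block's (here: previous sample's) output
    var sig = SinOsc.ar(200 + (fb * 2000), 0, 0.1);
    LocalOut.ar(sig);          //write it back for the next sample
    sig.dup
}.play(~servers[2]);
)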
Hey @Sam_Pluta, this is an awesome example of a benefit of multiple servers! I think this individualized approach, and the flexibility and opportunities it affords, is really powerful with modern multicore processors. There are added benefits beyond basic multithreading; the servers can become another component in a composition.
It seems like each server has its own set of non-sharable stuff. So, for example, I have to load a Buffer on each server that plays a GrainBuf synth. Multiple servers do seem to output their own audio on a common [0, 1]. I tried expanding this with ~server01.options.numOutputBusChannels = 16;, but only [0, 1] were common amongst them.
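For example, loading a separate copy of the same sound file on every server – a minimal sketch, using the example file that ships with SC and the ~servers array from earlier:
//buffers are per-server, so each server gets its own copy
(
~bufs = ~servers.collect { |srv|
    Buffer.read(srv, Platform.resourceDir +/+ "sounds/a11wlk01.wav");
};
)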
Fortunately, Pbind supports a \server key that determines which server each Event is sent to. And keeping several servers in an Array in an environment variable means they can easily be fed to any of the list patterns (e.g. Pseq).
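A minimal sketch of that (assuming the ~servers array from above; the default SynthDef is normally available on every booted server, so \degree-style Events need no extra setup):
//round-robin Events across the first three servers
(
Pbind(
    \server, Pseq(~servers[0..2], inf),
    \degree, Pwhite(0, 7),
    \dur, 0.25
).play;
)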
And a multiple-server approach seems like it could translate pretty easily to different hardware. Something written on a multicore laptop might be fairly easy to adapt to several RPis, for instance, with each Pi running just one server.
I’d think of this in terms of virtual buses. Each server has its own set of virtual input and output buses – SC calls them “hardware” buses but that’s not necessarily the case. These buses are separate for each server.
Then: the “hardware” buses may be connected to an audio device. By default, the first two output buses are connected to hardware outputs, and the first two input buses to the hardware inputs. If you have every server connected in this way, then one server’s output to these buses will be mixed with other servers’ output to their own “first-two” buses, and they come out the speaker “in common” – but the signals within each server are not shared. The mixing happens outside of SC.
Basically nothing is shared. Connections between the IO buses are made outside of SC, and that’s the only data sharing. They’re separate processes; common structures shouldn’t be expected.
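Concretely: a private bus allocated on one server means nothing to another, because the index only makes sense inside that server’s own process. Something like this illustrates the separation:
//a private bus exists only within the server that allocated it
~fxBus = Bus.audio(~servers[0], 2);
//writing to it on ~servers[0] works; the "same" index on ~servers[1]
//refers to that server's own, unrelated bus
{ Out.ar(~fxBus.index, PinkNoise.ar(0.1).dup) }.play(~servers[0]);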