All these comparisons were a (possibly avoidable) detour to get to the bottom of the phenomenon that triggered this thread and its predecessor:
When using a small Jack blocksize (e.g. 64), the update of the CPU load numbers in scide causes xruns, at least on my machine.
I could verify this in two ways:
- Running the same code from the IDE versus loading it directly into sclang (see the example below). In the former case, xruns appear at a much lower CPU load than in the latter.
- Disabling the load display in the IDE source code. With the IDE modified this way, I got almost the same performance as when loading the test code directly into sclang. (A sketch for watching the load numbers without the IDE follows this list.)
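If you want to keep an eye on the load numbers while running outside the IDE, they can also be polled from plain sclang. This is only a minimal sketch, and it assumes the readout in scide is fed by the same /status replies that update Server's avgCPU and peakCPU in sclang; the one-second print interval is arbitrary:

(
fork {
    loop {
        "avg CPU: %, peak CPU: %".format(
            Server.default.avgCPU.round(0.1),
            Server.default.peakCPU.round(0.1)
        ).postln;
        1.wait; // print once per second
    }
};
)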
Below you find my test code, which is designed to load the CPU unevenly. This can be verified by comparing it to another Jack client that produces a very even load: that evenly loading client can sustain higher CPU loads without causing xruns. Everything depends, of course, on the Jack blocksize, since xruns are more likely with smaller blocks. See the test results earlier in this thread.
My initial goal was to see how much I could do in SC on Linux with a small blocksize, and I was frustrated by the fact that, if I ran my code in the IDE, I could do very little (num = 200 in the code below). When run directly in sclang from the command line, I can do ten times as much: sclang test.scd 2000. I tuned my numbers (200 and 2000) so that the code runs for at least one minute without an xrun.
( // file: test.scd
// Take the synth count from the command line ("sclang test.scd 2000"),
// or fall back to 200 when there is no argument (e.g. when run in the IDE).
var arg1, num;
arg1 = thisProcess.argv[0];
arg1.isNil.if({
    num = 200;
}, {
    num = arg1.asInteger;
});

// Quit any running server so the new options take effect on the next boot.
Server.default.quit;
Server.default.options.maxNodes = 4096;
Server.default.options.memSize = 16384;

Server.default.waitForBoot({
    // Start num independent synths; each synth's amplitude is 1/num,
    // so the summed output stays within a reasonable range.
    num.do({ { SinOsc.ar([200, 202], 0, num.reciprocal) }.play });
});
)
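For reference: evaluating the block in the IDE leaves thisProcess.argv empty, so argv[0] is nil and num falls back to 200; running sclang test.scd 2000 from a terminal passes the string "2000" as thisProcess.argv[0], and 2000 synths are started.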