Server-based performance distinctions

This thread is dedicated to the differences between the two main classes of server objects in SC.

In Mapping & Visualization in SuperCollider, there is a piece which explicitly modifies the default server arrangement, with the provided explanation that, at the time it was written, the code could only run using the internal server.

With the visual performance distinctions between AppClock and SystemClock being all but undocumented, this subtle nuance comes with a grain of curiosity, given that the only distinction made between server objects in the Server Guide, under Local vs. Internal, is:

Regarding Server.local:


Which leaves one with the impression that the internal server is all but deprecated… the light of its service extending only to legacy compositions from a past era (or some aspect of SuperCollider’s past we’re not entirely aware of)… and so why would one ever use it?

Remembering the piece from Mapping & Visualization in SuperCollider… in which only the internal server could reliably produce results for ScopeOut, I recently decided to start using the internal server, instead of the local server, as the default setup… finding it to be:

  • More stable (crashes less)
  • Faster on initial / startup boot
  • Faster reboot (before and after crash)
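For anyone who wants to reproduce the comparison, switching the default server is a one-liner (both Server.local and Server.internal are predefined instances) — a minimal sketch:

```supercollider
// Make the internal server the default and boot it.
// (As I understand it, the internal server runs inside the sclang
// process, while the local server is a separate scsynth process.)
Server.default = s = Server.internal;
s.boot;
```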

…and even though the common position is that Server.local is generally more robust (given that server-side compositions will persist in the light of client / interpreter derailment…), I’ve since come to the realization that this is not the only argument… and the internally bound server (using the dedicated IDE) has held every competing advantage over the local server (which is, to my understanding, a separate server process sharing the same networking address as one’s current (local) IP). After some consideration, this would seem only to make sense… that the former would yield a more stable performance than the latter, given that it is accessible through one less layer of abstraction… or:

“The less that can go wrong, will…”


Certain (undocumented) performance distinctions exist between two classes of server objects in SC.

Also worthy of mention are performance-based distinctions between the two remaining classes of Server objects: remote servers, and server objects made using custom setup values.

I can’t imagine why that should be the case. After all, it’s the same application.

From the SC docs:

The local server, and any other server apps running on your local machine, have the advantage that if the language app crashes, it (and thus possibly your piece) will continue to run. It is thus an inherently more robust arrangement. But note that even if the synths on the server continue to run, any language-side sequencing and control will terminate if the language app crashes.

I don’t think this is really useful in practice. If the interpreter crashes, you lose any control over the Synths running on the Server (unless you use fixed Node/Buffer IDs), so you probably have to restart the Server anyway.
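A rough sketch of the fixed-ID approach mentioned here (node ID 2000 is an arbitrary illustrative choice, not anything prescribed):

```supercollider
// Create a synth with a fixed node ID instead of a language-allocated one:
s.sendMsg("/s_new", "default", 2000, 0, 1, "freq", 440);

// If the interpreter crashes and restarts, the running node can still be
// addressed by the same fixed ID:
s.sendMsg("/n_set", 2000, "freq", 220);
s.sendMsg("/n_free", 2000);
```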

The answer is: it depends on your work.

If you were to compose everything up front, hit it, step back & voilà… then the ideal arrangement is a separate machine dedicated to a single process; the next best thing being two exclusive processes existing independently of & in direct communication with each other,

unless you’re a live coder, in which case control is understood to be such a necessity that no technical advantage could ever outweigh one’s control & stability, if such an exchange were to exist.

These two applications are indeed, in terms of their powers, separate & separated… explicitly referenced in the only standalone tutorial across the language:

SuperCollider exists in a potentially infinite number of powersets.

SuperCollider is comprised of many different applications. Potentially, many different machines will run SuperCollider compositions, connections and networks of power and expression, to and beyond extents no less infinite than it is itself… truly infinite.

I would like a 32nd portion of whatever they put in this man’s kool-aid


The internal-server requirement has been outdated for about a decade. The server’s shared memory interface, which enables scoping on localhost servers, dates back to roughly 2011.

At this point, any inferences or assertions that the internal server is somehow better for Scope-anything are simply not correct.
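For instance, this should work on the ordinary local server in any recent SC version — a minimal sketch:

```supercollider
// Scoping on the local server, which has worked via the shared memory
// interface since roughly 2011 -- no internal server required:
s = Server.local;
s.waitForBoot {
	{ SinOsc.ar(440, 0, 0.1) }.scope;
};
```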

Do you have evidence of this?

(By the way, did you notice that the IDE’s server status bar doesn’t function at all with the internal server? Before you ask… no, there is no way to fix that.)


So, I’ve moved around a lot over the past 5-7, and I can remember, all the way back at seven, having a MacBook Pro & spending all… of my time writing server architecture, ever since the day I bought Live / M4L, only to discover you can’t control the frequency of any periodic signal with M4L objects, not even a sine wave.

…and after 3 years of heavy client scripting & breaking on OS X (only in a few manual pages have I ever seen any evidence that anyone other than I invented “server architecture”), the local server was always chosen… of course… and I recall having issues with persistent server objects when the client would freeze (I admire sclang; it almost never breaks, it is usually you that has power). There was also, I remember:

…and so, what I’ve come to realize:

  • The client and s.local talk through the shared memory interface
  • s.internal returns false for hasShmInterface
  • Each link in a chain is a place where breakage is possible
  • A single chain link is a loop, which is perfect
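The hasShmInterface point is easy to verify, since it is a real instance method on Server (what it returns before boot, or on other platforms, may differ):

```supercollider
// Compare the shared memory interface flag on the two servers:
Server.local.hasShmInterface;     // true once the local server has booted
Server.internal.hasShmInterface;  // reported false in this thread
```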

…and so is that also why the internal server doesn’t update graphically?

I can easily see that as another advantage.

Moving on with the timeline, after losing about 6 G’s in equipment for a backdrop I’ll leave absent… going broke, hitting the streets of LA, learning the art of writing/developing new architecture by using a method I’m nearly certain I invented:

and after 2 & 1/2, then 3 years, of what might be the hardest “vision quest” in history… following a final return to SC, on two PCs, both running 10:

  • A Getac B300
  • A modern Samsung / sleek HP

…the last one occasionally borrowed from a roommate… the Getac was used for 7-8 months before being run over by a vehicle (you know how you feel when you lose your car keys? …and they might be in the car…) I didn’t sweat a thing… these are basically out-of-service Military/CIA hardbooks with the OS wiped slate and auctioned off on eBay for $300 bucks, whereas an X500 new will run you 5 G’s… they load any boot drive like magic (all Getacs)… mine rebooted immediately, and lasted about 3 more months, before retiring to its “legacy” state… I’m ninety percent sure all it needs is a new battery.

Here is where I started to use the Samsung (both machines running Windows 10), and after developing an early prototype for a system I’m currently finishing, there was an initial server setup process where the user could easily switch servers by changing a single number on a single line… to either a one or a zero… for local vs. internal, as well as scsynth vs. supernova.
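A hypothetical sketch of that one-number switch (the variable names and the 0/1 convention here are my own illustration, not the actual system; Server.scsynth and Server.supernova are real class methods for choosing the server program):

```supercollider
~useInternal = 0;   // 0 = local, 1 = internal
~useSupernova = 0;  // 0 = scsynth, 1 = supernova

if(~useSupernova == 1) { Server.supernova } { Server.scsynth };
Server.default = s = if(~useInternal == 1) { Server.internal } { Server.local };
s.boot;
```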

Remembering that one ScopeOut piece that is, as James has enlightened me, entirely irrelevant (I mean this without even a hint of sarcasm, thank you as always)… I thought:

What if there are other advantages to the internal server?

I was still in the habit of using Server.local as always, & I’m sure 99% of us are.

But in this early version, the lines local & internal are explicitly visible at all times, regardless of which one you select as the default server, and so I kept staring at the internal option from time to time… wondering.

So far… it has all but missed every mark.

But remember, it still depends on your work; I’m probably a rare example… But it leaves what, at this point, deserves to be a valid discussion, if not already overly obvious: which setup might work better in any particular situation…

I’m completely unclear which server architecture you mean.

If it happens that you’re referring to SuperCollider’s server architecture, this was described in a 2002 paper by James McCartney, “Rethinking the Computer Music Language: SuperCollider”.

My reason for asking about the relative crashability of the internal vs local servers is that the server crashes so seldom for me that I don’t have any sort of a reasonable sample – my margin of error for any conclusion would be huge. If the local server already almost never crashes for me, then it’s unlikely I would discover a situation where the internal server crashes less.

In any case, a crash is a bug. Bugs should be investigated, diagnosed and fixed. Speculation is less useful here.


Uhm… it was a joke.

What I meant was, I had to go through the exhaustive and difficult process of teaching myself how to set up a fully capable and functional system, with no guidance or point of reference… other than the help source for SC.

…I thought you wrote those pages

So, I find that crashes are usually one of three things:

  • OS threading or RAM allocation discrepancies
  • Poor coding skills
  • Losing that connected “shared memory interface”, in itself or in combination with number one

And perhaps, through greater advancement of one’s development skills, as well as one’s greater understanding of computer science, software engineering, networking practices, etc… one can simply take the proper measures to ensure that there is never a crash, and never too much going on at once. That is really, I think in an abstract way, the root problem we’re somehow getting closer to here: stability & performance hinging not on bugs or logical discrepancies, but on the timeless interplay between the cosmic ebb & flow of order and madness / chaos.

In my experience, the internal server has been more, agreeable, in general… There’s nothing I would label a bug…

I also think that perhaps… it is possible that, at around the same time as I started to use s.internal, I simultaneously started to ramp up the server’s memory allocation in kilobytes, and I’ve always wondered if this allocation is in any way affected by the “shared memory interface”… an instance variable that returns true for local, and false for internal.

Apparently that joke is on me :laughing:

Of crashes: I’m wary of a “feeling” that one or the other is more stable, without quantifying. Such as, use the internal server for a week, count how many crashes. Then use the local server for a week and count crashes. That doesn’t control for hours of usage or types of activity, but it’s something.

I’m a little concerned about folklore getting started – “well, I heard someone say the internal server is more stable” and several generations of the rumor later, nobody can remember who said it or why, or if the reasons were solid. At this point, we have nothing but one person’s gut feeling – no numbers, no analysis of causes, nothing. There’s some speculative musing but nothing that demonstrates a significant difference from a placebo effect.

(I did confirm your finding that the internal server boots faster.)

Server crashes are often in plugins. There is no reason why internal vs local would make a difference.

The disconnected shm needs to be investigated. It may well be a result of a crash rather than the cause (obviously a shm interface can’t be kept open if the process at one end dies).


Server.default = s = Server.local;
-> 65705

("kill -9 " ++ s.pid).unixCmd;
-> 65727
Server 'localhost' exited with exit code 0.
server 'localhost' disconnected shared memory interface

Here, I’m forcing a “crash” by killing the process – that is, the cause of the scsynth process terminating is known to be a kill signal, and definitively not an error internal to the process.

And… we get the “disconnected shared memory interface” message.

So, indeed, the message is the result of a server process going down.

The message is printed in Server:disconnectSharedMemory, and this is called from ServerStatusWatcher:serverRunning_ – if the server was running and is now not running, then try to disconnect shm.

That’s conditional on this.hasShmInterface – which, as you noted, is never true for the internal server – so, this message will never be printed for the internal server. But, as it’s a result of the process going down, the message has no diagnostic value… red herring.

Forgot this one yesterday… the reason why the IDE’s status bar doesn’t update for the internal server is because the IDE sends its own /status messages – it doesn’t rely on the language. (Why? If the language crashes unexpectedly and a server remains running, with the current design, the IDE will continue to show an active local server. But if the IDE depends on the language for status updates, then it would look like both the language and server went down – when a server might still be running.)

The internal server doesn’t open a network socket. So there is no way for the IDE to request its status. It’s as simple as that.



Thank you so much James.

I honestly opened this thread more in the spirit of asking questions / seeking expert advice… and not trying to seem like I had all the answers.

That being said… would you or someone else be able to give perhaps one final word, regarding s.options.memSize ?

…as in, how good an idea is it to allocate what percentage of our current machine’s RAM, to make our setup as efficient & ideal as possible?

Do local and internal processes treat this value any differently?

And what about setting .memoryLocking to true? Will this cause the server to use physical drive space just like RAM…?

Again, thank you so much as always,


OK, I see… in light of that, I’d have to read the thread differently.

I was reacting to the fact that the thread began with assertions about the heightened complexity of the local server and the benefits of a more streamlined data flow with the internal server.

Some of those assertions can be substantiated (faster boot time).

f = { |server|
	fork {
		var cond = Condition.new;
		var start;
		if(server.serverRunning) {
			server.quit({ cond.unhang });
			cond.hang;
		};
		start = SystemClock.seconds;
		server.waitForBoot {
			"Boot time = %\n".postf(SystemClock.seconds - start);
		};
	};
};

f.value(Server.local);     // first set of timings below
f.value(Server.internal);  // second set of timings below

Boot time = 2.107849731
Boot time = 2.1137933719999
Boot time = 2.0902503100001

Boot time = 1.69532076
Boot time = 1.4399132220001
Boot time = 1.2536909820001

… which is an interesting finding, though tbh it doesn’t matter that much to me because, in my typical use cases, I’m not rebooting the server very often.

Of the other assertions, I have real doubts.

I set mine to 512 MB (2**19 KB).
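For reference, that setting looks like this in code (memSize is specified in kilobytes and only takes effect when the server next boots):

```supercollider
// 524288 KB = 2**19 KB = 512 MB of real-time memory:
s.options.memSize = 524288;
s.reboot;  // takes effect on the next (re)boot
```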

If you want to be more scientific about it: real-time memory is used for LocalBuf, and for UGens that may consume a variable amount of memory (delay lines, some PV units, convolution, reverbs). So you could estimate how much you need (though this is difficult, because UGen documentation generally doesn’t give a detailed formula for real-time memory use), and then for safety, add another, oh, 25%…?

Buffer allocates further memory from the operating system, NOT from real-time memory. So, if you allocate almost all of available RAM for real-time use, then there would be no memory left for regular buffers. So it wouldn’t be a good idea to set memSize to a very large value.

With 512 MB RT memory, I haven’t had a single problem (and I do use e.g. JPverb, whose help recommends at least 256 MB). If you’re using the above-mentioned UGens heavily, maybe bump up to 1 GB RT. I personally wouldn’t go any higher than that.

No… That really wouldn’t be sensible, if they did :grin:

AFAICS, exactly the opposite: memoryLocking = false permits swapping, memoryLocking = true forbids swapping.

Again, here, I’ve never messed around with this setting, and never had a problem. “If it ain’t broke, don’t fix it.” – For years, 8 GB physical RAM, and I never had a problem with scsynth going into swap space.
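For completeness, the setting itself — as I read it, true pins the server’s memory in physical RAM (forbidding swap), rather than the reverse:

```supercollider
// memoryLocking = true asks the OS to lock the server's memory into
// physical RAM so it is never swapped out to disk:
s.options.memoryLocking = true;
s.reboot;
```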


For .memSize, I’ve been leaving it at 2 ** 18… it’s interesting to discover you mostly use 2 ** 19.

I only found the internal server to be somewhat more… conducive to certain strictly unique modes and methods of operation.

In no way was it meant to offend the very pinnacle & beauty of SC’s composition.

It is all the more appreciated, just by exploring what lies beyond the current documentation, or perhaps our current comprehension, if only for a moment.