Reconnecting to scsynth after sclang crashes

Hi!
After sclang crashes, scsynth keeps running, and I'm trying to reconnect to it – but I can't do so without the new sclang killing the old scsynth.

Here is a sequence of steps for what I’m doing:

  1. sclang and scsynth are working and making sound
  2. I crash sclang (my bad), sound is still playing
  3. I restart sclang, sound is still playing
  4. If I boot now, it will send a /quit to the old server…

Is there any way I can connect to the old scsynth from a rebooted sclang?

Thanks


If I boot now, it will send a /quit to the old server…

I guess the first issue is – if the server process is already running, then of course you don’t want to boot it (because it’s already booted).

Is there any way I can connect to the old scsynth from a rebooted sclang?

As far as I can see, currently the answer is no.

When you boot the server, sclang registers using a /notify, id command. (Usually id is 0.) Now that ID belongs to that particular client. If the client crashes, then it hasn’t unregistered – so, nobody else can use that ID (including a new sclang). That’s a good thing – in a multi-client situation, you don’t want one user messing around with another user’s nodes.

If you try s.startAliveThread without booting, then you get "localhost - could not register, too many users" errors.

If you increase the number of clients and connect under a different ID, then you don’t have access to the nodes that the old client created.
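A sketch of that route, assuming the server was originally booted with room for a second client (maxLogins is fixed when scsynth boots, so it has to be set before the first boot):

```supercollider
// before the first boot: allow a second client to register later
s.options.maxLogins = 2;
s.boot;

// ...after sclang crashes and restarts: connect without booting;
// the new sclang registers under a different client ID, with its
// own node/bus ID ranges – the old client's nodes aren't "yours"
s.startAliveThread;
```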

But even if you did have access to them – proper functioning of SC depends on the sclang objects being in sync with the server’s state. You don’t have any of the sclang objects anymore – so there is no sync, so your code won’t work anyway.

Crashing either sclang or scsynth is, really, a catastrophic failure, and there’s pretty much no alternative but to start over.

hjh

Thanks, very precise as always!

This is what I was looking for, because by reconnecting to the server it’s actually possible to access the nodes, wrap them into Synth objects, and for example, release them gracefully.
Here is some example code, in case anybody is interested:

// first run: set maxLogins, create some synths, crash interpreter
s.options.maxLogins_(2);
s.reboot;
SynthDef(\asrSine){|freq=440,gate=1,amp=0.1|
	Out.ar(0,
		SinOsc.ar(freq)
		*EnvGen.kr(Env.asr,gate,doneAction:2)
		*amp
	)
}.add;
10.do{Synth(\asrSine,[freq:exprand(20,2000),amp:0.01])};
"killall sclang".unixCmd;

// second run: reboot interpreter, get synths, release them:
s.startAliveThread; // connect to the running server (registers as a new client)
(
// parse the /g_queryTree.reply message and release every synth found
OSCdef(\releaseTree,{|msg|
	var reply = msg.drop(2); // drop the address and the controls flag
	var groups = Order[];
	var nodes = Order[]; // nodeID -> defName

	var parseChildren = {|parent|
		var numChildren = reply[1];
		if(numChildren == -1){
			// a synth: [nodeID, -1, defName]
			parent[reply.first] = reply[2];
			nodes[reply.first] = reply[2];
			reply = reply.drop(3);
		}{
			// a group: [nodeID, numChildren], followed by its children
			var newParent = Order[];
			parent[reply[0]] = newParent;
			reply = reply.drop(2);
			numChildren.do{
				parseChildren.(newParent);
			};
		}
	};
	while{reply.size > 1}{
		parseChildren.(groups)
	};
	// wrap each node ID in a client-side Synth object and release it
	nodes.indices.postln.do{|i| Synth.basicNew(nodes[i], s, i).release(10) }
},'/g_queryTree.reply').oneShot;
s.sendMsg('/g_queryTree', 0, 0); // query the whole tree from the root group
)

It’s not nice to crash the language and occupy client IDs like that… especially if, after the first time, I manage to crash it again and again… but it is nice to know that something can be done about it – especially in a single-client live-coding situation, where I might have scheduled too many events, too fast, on the language side, and I’m left with a drone of static synths playing.
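(As a blunter fallback, if graceful release isn’t needed: the server’s /g_freeAll command frees every node in a group regardless of which client created it – sketched below, at the risk of silencing everything at once.)

```supercollider
// free all nodes in the root group (ID 0) immediately –
// this includes nodes created by the crashed client
s.sendMsg('/g_freeAll', 0);
```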


It’s not nice to crash the language and occupy clientIDs like that

But, in a multi-client situation, it would not be nice if some random machine could log into your session and start messing around with your nodes.

It would be worth discussing whether there’s some change that could make this better, but currently the design is intended to keep clients using separate resources – client A and client B both log in, and they should not step on each other’s toes. Clients can exit their session gracefully (btw I had the “notify” interface wrong earlier – it’s ["/notify", 1, id] to log in and ["/notify", 0] to log out), but if they don’t log out cleanly, then their resources are still reserved.

I had the thought that you could perhaps send ["/notify", 0] before starting the alive thread, but for this to work, sclang would have to get the same network port number that it had before. This is impossible because the still-running scsynth process is still bound to the old port! Because clients are identified by IP and port, the new sclang process is considered a new client and can’t log the old session out.
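For what it’s worth, the port side of that identity can be inspected from sclang (NetAddr.langPort is the UDP port the current interpreter listens on):

```supercollider
// clients are identified by (IP, port); this prints the port
// on which this sclang process receives OSC
NetAddr.langPort.postln;
```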

One possible feature request (not possible right now) would be to add a command-line switch to identify clients only by IP address – assuming only one sclang process per machine per server. It would have to be a command-line switch because there are cases (especially for testing) where you want to run multiple sclang processes on one machine connecting to one server.

You raise a good point – the fact that it’s currently impossible to reconnect to a running server is not ideal. It’s going to take a feature request, though.

As a workaround, you might try setting s.options.maxLogins higher. If you log in with a new client ID, you might be able to get the old client’s node IDs from /g_queryTree (but I’m not sure – I haven’t tried it).

hjh

That’s exactly what I did in the code above… and I can confirm it works, at least on my local machine (running Linux), hosting both clients and the server.

Thanks for all the insights!