Switching unit tests to use TCP and increase parallelization

There are a few discussions going on over at GitHub about speeding up tests by switching to TCP and increasing parallelization in unit tests.

I wanted to get more visibility on these issues, because this is where some historical knowledge can be useful. I'm not sure whether it's best to consolidate discussion here or just comment on GH, but the PRs are beginning to sprawl a bit.

These will get you started:

and related: wrt memory allocation

Update: @dscheiba has suggested centralizing the discussion here.

also this one


I’ll post the benchmarks I made here rather than on GitHub.

Here is the test.
It has three variables: the protocol, the number of calls to loadToFloatArray, and the batch size, i.e. how many calls are made before s.sync is called (set this to inf to disable syncing). It would be good if others could rerun this, as the results could be machine specific; I’m on current dev on Manjaro Linux.


// boot one of these
(
s.options.protocol = \tcp;
s.options.maxSynthDefs = 2000;
s.boot;
)

(
s.options.protocol = \udp;
s.options.maxSynthDefs = 2000;
s.boot;
)

// test
(
fork {
	var batch_size = 50;  // change me 
	var num_synths = 2000; // change me
	
	var cond = CondVar();
	var completed = 0;
	var success = false;
	
	var time_taken = {
		var noTimeout = false;
		num_synths.do{ |n|
			{ SinOsc.ar }.loadToFloatArray(0.1, s, { |data|
				completed = completed + 1;
				cond.signalOne;
			});
			
			// sync after each batch, unless batching is disabled with inf
			if (batch_size != inf and: { n % batch_size == 0 }) {
				s.sync
			};
		};
		
		// waitFor returns false if the 1-second timeout is hit
		noTimeout = cond.waitFor(1, { completed == num_synths });
		success = (completed == num_synths) and: noTimeout;
	}.bench(false);
	
	if (success) {
		"YIPPEE! protocol: %, num_synths: %, batch size: %, duration: %"
	} {
		"OH NO! protocol: %, num_synths: %, batch size: %, duration: %"
	}
	.format(s.options.protocol, num_synths, batch_size, time_taken).postln
}
)

Results:

// 50
YIPPEE! protocol: upd, num_synths: 50, batch size: 1, duration: 0.23709218199997
YIPPEE! protocol: upd, num_synths: 50, batch size: inf, duration: 0.12756647099991 // unstable

YIPPEE! protocol: tcp, num_synths: 50, batch size: 1, duration: 0.6725414460002
YIPPEE! protocol: tcp, num_synths: 50, batch size: inf, duration: 0.20374475600011

// 500
YIPPEE! protocol: upd, num_synths: 500, batch size: 1, duration: 1.446075911
OH NO! protocol: upd, num_synths: 500, batch size: inf, duration: 1.0025891390001 // unstable

YIPPEE! protocol: tcp, num_synths: 500, batch size: 1, duration: 2.5703848850001
YIPPEE! protocol: tcp, num_synths: 500, batch size: inf, duration: 0.51924312599999

// 2000
YIPPEE! protocol: upd, num_synths: 2000, batch size: 1, duration: 6.044210629
YIPPEE! protocol: upd, num_synths: 2000, batch size: 2, duration: 2.7730885449998 // unstable
YIPPEE! protocol: upd, num_synths: 2000, batch size: 5, duration: 1.431585868 // unstable
YIPPEE! protocol: upd, num_synths: 2000, batch size: 10, duration: 1.3675357319999 // unstable
// going higher than 20 always hangs.

YIPPEE! protocol: tcp, num_synths: 2000, batch size: 1, duration: 8.8403414470001
YIPPEE! protocol: tcp, num_synths: 2000, batch size: 2, duration: 4.4968370539998
YIPPEE! protocol: tcp, num_synths: 2000, batch size: 5, duration: 1.8096767680001
YIPPEE! protocol: tcp, num_synths: 2000, batch size: 10, duration: 1.4560599450001
YIPPEE! protocol: tcp, num_synths: 2000, batch size: 15, duration: 1.4959977839999
YIPPEE! protocol: tcp, num_synths: 2000, batch size: 50, duration: 1.6424540979999
// tcp with 2000 and inf batch_size runs out of buffers

// 10000
YIPPEE! protocol: upd, num_synths: 10000, batch size: 1, duration: 100.095942255
YIPPEE! protocol: upd, num_synths: 10000, batch size: 5, duration: 6.832033951
OH NO! protocol: upd, num_synths: 10000, batch size: 10, duration: 7.594547819

YIPPEE! protocol: tcp, num_synths: 10000, batch size: 10, duration: 6.64938284

Conclusions:

There are huge benefits to be had by batching the calls.
Take this perverse case:

YIPPEE! protocol: upd, num_synths: 10000, batch size: 1, duration: 100.095942255
YIPPEE! protocol: upd, num_synths: 10000, batch size: 5, duration: 6.832033951
OH NO! protocol: upd, num_synths: 10000, batch size: 10, duration: 7.594547819

YIPPEE! protocol: tcp, num_synths: 10000, batch size: 10, duration: 6.64938284

UDP is always unstable when the batch size is greater than one. It often works for small numbers, then all of a sudden it won’t. Unit tests should not do this. Hopefully this can be improved somehow? Perhaps it’s machine dependent? So don’t write parallel code without syncing over UDP: it will randomly fail, sometimes nicely, sometimes hanging forever.
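For clarity, here is a minimal sketch of the safe, fully synced pattern (the batch-size-1 case from the benchmark above). The loop count and the postln messages are just placeholders:

(
fork {
	// Sketch: safe sequential pattern over UDP (batch size 1).
	// Each async request is followed by s.sync, so the language waits
	// for the server to catch up before sending the next request.
	// Slow, but it doesn't flood the server with unanswered messages.
	10.do { |i|
		{ SinOsc.ar }.loadToFloatArray(0.1, s, { |data|
			"request % done".format(i).postln;
		});
		s.sync; // wait for the server before queuing the next request
	};
	"all requests sent".postln;
}
)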

Even small batch sizes (2) have a huge performance impact.

TCP always works, unless you run out of buffers or threads.

TCP is slower with a batch size of 1. It appears to have some fixed per-call cost.

TCP is mostly equivalent to UDP at a batch size between 5 and 15, but it always works.

TCP doesn’t favour inf batch sizes.

This is complicated.

Minor typo in your code:

s.options.protocol = \upd;

This should obviously be \udp… although it happens to make no difference, because the protocol handling in ServerOptions just checks whether the value is \tcp, and any other value defaults to UDP. (-;
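In other words, the effective behaviour presumably reduces to something like this sketch (assumed, not the actual class library source):

(
// Sketch of the assumed protocol check: only \tcp is tested for,
// so every other symbol, including the \upd typo, falls back to UDP.
~effectiveProtocol = { |protocol|
	if (protocol == \tcp) { \tcp } { \udp }
};
~effectiveProtocol.(\tcp).postln;  // tcp
~effectiveProtocol.(\upd).postln;  // udp, despite the typo
)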


Thank you! I’ll fix the code :smiling_face:
