Limit to the maximum number of OSC messages that can be received in a short time?

Hi, I seem to have hit some kind of limit in the number of OSC messages that SC is able to receive in a short period. When running the code below on my machine with the latest SC, it stops at [ /test, 4099 ], even though 10000 messages are sent. It’s not a posting issue; if I collect the values in an array I hit the same limit. When I send the same number of messages from SC to Max they are all received, so the problem is not in the sending part. The slightly frightening thing is that on my Debian Linux virtual machine I get a much lower count, 278, and I have indications that on other (actual) Linux machines the count is also quite low. The result is the same every time I try. Can you guys please test how much you can receive on your machines? And perhaps explain why this happens (and ideally propose a fix for it :wink: )?

cheers & thanks,
Wouter


(
n = NetAddr( "127.0.0.1", NetAddr.langPort );

// post every incoming /test message
OSCFunc({ |msg|
	msg.postln;
}, "/test" );

// send 10000 messages back to sclang as fast as possible
10000.do({ |i|
	n.sendMsg( "/test", i );
});

// last post in post window should be [ /test, 9999 ]
// but on (my) macOS is [ /test, 4099 ] and on 
// virtual debian Linux [ /test, 277 ]
)

UDP (the default protocol) doesn’t make any guarantees that the message will actually be delivered.

You could use bundles and play with the sizes until you find something that works.

The following works for me on Linux.

(
n = NetAddr( "127.0.0.1", NetAddr.langPort );

~results = [];

// collect the received values so we can count them afterwards
OSCFunc({ |msg|
	~results = ~results.add( msg[1] );
}, "/mytest" );

~count = 10000;
~bundle_size = 1000;

// send the messages in bundles of ~bundle_size instead of one by one
(~count / ~bundle_size).do({ |batch|
	var bundle = ~bundle_size.collect({ |i|
		[ "/mytest", ((batch * ~bundle_size) + i).asInteger ]
	});
	n.sendBundle( 0.0, *bundle );
});

)

// wait a moment.
~results
~results.size
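
To double-check that nothing was dropped, you could compare the collected values against the expected sequence. A minimal check, assuming the ~results and ~count variables from the example above:

// true if every value from 0 to ~count - 1 arrived
// (sorting a copy first, since UDP does not guarantee ordering)
~results.copy.sort == (0..~count - 1)

// or simply compare the counts
~results.size == ~count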

You could also use TCP, but I am not sure whether the language supports it; I know the server does…

You are experiencing dropped packets. On Linux you can check this by running

ss -ulm

This will list the listening UDP sockets. Look for the tuple starting with skmem; the number prefixed with a d shows the count of dropped packets. You can solve this by resizing the socket receive buffer, see https://www.baeldung.com/linux/udp-socket-buffer. I assume that macOS has a similar mechanism and a somewhat larger default buffer.


Well, the problem is that I encountered this limit after problems with bookkeeping of synths on the server, i.e. the number of messages sent back to sclang exceeded this invisible limit. With the 4000-something on macOS it’s not often a problem, but with only 278 on the Linux one it really is. Also, it’s not unpredictable but exactly the same every time I try. And Max doesn’t seem to have this issue, so it’s not impossible to receive 10000 OSC messages at once… Looks like a bug to me. I’ll also raise an issue on GitHub for it.

Looks like a bug to me. I’ll also raise an issue on GitHub for it

This is not a bug, but a fundamental problem with UDP. Sockets have a receive buffer. If this buffer is full, UDP typically drops incoming messages. The (default) buffer size varies across systems and is often too low.

See also sclang: udp messages dropped when they are too fast/too many · Issue #5870 · supercollider/supercollider · GitHub

In general, when a UDP application receives messages at a faster rate than it consumes, messages will get lost. There are a few things you can do:

  1. increase the socket receive buffer size (Increase UDP socket receive buffer size · Issue #5993 · supercollider/supercollider · GitHub)
  2. if you are in control of the sender, insert waits between every N messages (see the sketch after this list)
  3. switch to a reliable protocol, such as TCP
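
For option 2, here is a minimal sketch in sclang, assuming the NetAddr n and the /mytest OSCFunc from the bundle example earlier in this thread; the batch size and wait time are arbitrary and will likely need tuning for your machine:

(
// send from a Routine so we can pause between bursts
fork {
	var count = 10000;
	var batchSize = 500;   // messages per burst; tune for your machine
	var pause = 0.01;      // pause between bursts (seconds at default tempo); tune as needed

	count.do { |i|
		n.sendMsg( "/mytest", i );
		// give the receiving end a chance to drain its socket buffer
		if( (i + 1) % batchSize == 0 ) { pause.wait };
	};
}
)

The wait between bursts trades throughput for reliability: smaller batches and longer pauses make it less likely that the receiver’s socket buffer overflows.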

This is great to know. I was having issues with dropped OSC messages just from newer iPads sending way more messages than older ones.

I ran:

sudo sysctl net.inet.udp.recvspace=1573792

on the command line in macOS, and my received message count went from 8197 to 10000. I did have to recompile the class library after running that command.

you can see the current setting with:

sysctl -a|grep net.inet.udp

Sam


Yes, indeed it fixed the issue here too. I increased the buffer by a factor of 64 via sudo sysctl -w net.core.rmem_max=13631488 (and …_default=13631488) and can now receive exactly 17750 messages (which roughly checks out, being about 64 times the 278 I could receive before) on the virtual Linux machine. Still I wonder why Max doesn’t have this issue; did they solve it internally somehow?

They are likely buffering the packets on a dedicated IO thread, which would be a very sensible solution.


They are likely buffering the packets on a dedicated IO thread, which would be a very sensible solution.

Indeed! That’s what I suggested in the other thread: sclang: udp messages dropped when they are too fast/too many · Issue #5870 · supercollider/supercollider · GitHub

The general idea is to have a dedicated thread that does nothing but drain the socket buffer as fast as possible and throw the packets onto an (unbounded) queue. Then the main application can pop packets at its own pace, without having to worry about data loss. I regularly do this in my own applications when I expect a certain amount of incoming traffic.
