OSC messages in SuperCollider

Hi,

I am working on a quite ambitious project. I would like to build my own groovebox. I already have all the hardware, including a couple of analog endless encoders.

They have two wipers and send values from 1 to 1024 per rotation. After they reach 1024, they start again from 1. I managed to build a script that basically only gives me a delta. When I rotate to the right, I get +1, and when I rotate to the left, I get -1.
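The delta logic is roughly this (a simplified Python sketch of the idea, not my actual script — the function name and the "less than half a turn between polls" assumption are mine):

```python
def encoder_delta(prev, curr, steps=1024):
    """Signed delta between two raw readings on a 1..steps circular scale.

    Assumes the encoder moves less than half a revolution between polls,
    so the shorter way around the circle is always the real direction.
    """
    d = (curr - prev) % steps          # forward distance, 0..steps-1
    if d > steps // 2:
        d -= steps                     # shorter to go backwards
    return d
```

Crossing the wrap point then just falls out of the modulo: going from 1023 to 2 yields +3, not -1021.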

So far, everything is working. I am sending OSC messages for each of the 8 encoders to SuperCollider. In SuperCollider, I decide what to do with those values, and then I send an OSC message back to the display so I can see the changes visually.

My issue is that, with this approach, I am constantly sending and receiving OSC messages, depending on how fast I poll the encoders. This seems to create a lot of traffic, and my encoders sometimes lag quite a bit. Occasionally, it even feels like OSC messages are getting dropped.

I was wondering if there is a better solution for this, or if my workflow is completely wrong. It would probably be better to send the encoder data directly to the display, but then I wouldn’t be able to tell SuperCollider what to do with it. Especially since each encoder will serve multiple purposes, depending on which display page I’m on. That’s why I went with endless encoders in the first place.

You could switch to TCP and see if it helps.

s.options.protocol = \tcp;

https://doc.sccode.org/Classes/ServerOptions.html#-protocol

There may also be some additional info in this thread:

Hm… first question, then, is, how often are you polling the encoders vs how often are they actually changing? If, for example, you’re not touching the encoders for 10 seconds but you’re sending 20 updates per second during that whole time, it’s 200x8 = 1600 wasted messages.

I’m not sure how your hardware works. If it requires polling, you could keep track of the current value of each encoder and send a message only when it really changed. Or maybe there’s a mode where it would call a handler (callback) when it’s moved.
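The "send only when it really changed" idea can be sketched in Python (hypothetical names — the actual script isn't shown in this thread):

```python
# Remember the last value sent per encoder and emit a message
# only when a new reading actually differs from it.
last_sent = {}

def maybe_send(encoder_id, value, send):
    """Call send(encoder_id, value) only if the value changed.

    Returns True if a message was sent, False if it was suppressed.
    """
    if last_sent.get(encoder_id) != value:
        last_sent[encoder_id] = value
        send(encoder_id, value)
        return True
    return False
```

With that in place, polling fast costs almost nothing while the controls are idle, because no OSC traffic is generated.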

I think sclang can only open UDP ports, so (AFAIK) you’re stuck with that. UDP can drop messages when under stress.

TCP: A NetAddr object can try to connect to a TCP port. I created a Pd patch that ostensibly listens to TCP on port 55150 and tried:

p = NetAddr("127.0.0.1", 55150);
p.tryConnectTCP({ "ok".postln }, { "failed".postln }, 20);
// says 'ok', and Pd reports one connected peer, so far so good

But when I p.sendMsg('/hello', 1);, Pd doesn’t receive the message. I don’t know if I’ve configured the Pd receiver incorrectly, or if there’s a bug in SC sending TCP (but if there were, then a Server in TCP mode wouldn’t work, so that’s unlikely I guess?). Anyway I’ve done precious little with TCP so it’s wholly possible that I just don’t know what I’m doing lol.

Broadcast messages (sent from the device to 255.255.255.something) are much more likely to drop messages, so definitely don’t use broadcast in this context.

On the sclang side, my ddwSpeedLim quark can reduce the rate of sending OSC back to the device. That might not improve responsiveness (unless the lag is because of the high traffic).

Not relevant because the issue involves OSC communication with a device over the network, not the server.

hjh


@jamshark70 I wasn’t aware of the tryConnectTCP method before reading your post. I just tried connecting to TouchOSC via TCP, and lo and behold, it works like a charm, as long as you configure TouchOSC to act as a TCP server. Not sure what’s going on with Pd in your case, but maybe it’s not acting as a server but rather as a client? Anyway, thanks for the tip, this may prove very useful…

That’s probably it.

tryConnectTCP is as yet undocumented (at least in 3.14) so there’s bound to be confusion.

hjh

The issue is that OSC is a content format, not a transport protocol. UDP is packet-based, so it automatically preserves message boundaries.

However, when sending OSC messages over stream-based protocols such as TCP or serial, you need some strategy to delimit messages. The original OSC 1.0 specification mandates prefixing each message with its size as a 32-bit integer (see the section “OSC Packets” in the OSC 1.0 spec). While this works with reliable streams such as TCP, it is not practical for unreliable streams such as serial connections, because you cannot recover from a temporary failure. That is why OSC 1.1 (which only exists in the form of a NIME paper) recommends the SLIP protocol instead (see section 4 in https://opensoundcontrol.stanford.edu/files/2009-NIME-OSC-1.1.pdf).
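Both framings are small enough to sketch in a few lines of Python (function names are mine, not from any library):

```python
import struct

# OSC 1.0 framing for reliable streams (TCP): prefix each packet
# with its size as a big-endian 32-bit integer.
def frame_osc10(packet: bytes) -> bytes:
    return struct.pack(">i", len(packet)) + packet

# SLIP framing, as recommended by OSC 1.1 for streams (double-END
# variant: the frame both starts and ends with END, so a receiver
# can resynchronize after corruption or a dropped byte).
END, ESC, ESC_END, ESC_ESC = 0xC0, 0xDB, 0xDC, 0xDD

def slip_encode(packet: bytes) -> bytes:
    out = bytearray([END])
    for b in packet:
        if b == END:
            out += bytes([ESC, ESC_END])   # escape END inside payload
        elif b == ESC:
            out += bytes([ESC, ESC_ESC])   # escape ESC inside payload
        else:
            out.append(b)
    out.append(END)
    return bytes(out)
```

The key difference: if a size prefix is corrupted, the TCP-style framing can never find the next message boundary again, whereas a SLIP receiver simply discards bytes until the next END marker.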

Pd has no built-in capability for parsing OSC messages sent over TCP. There is an unpackOSCstream abstraction in the osc library, but it actually assumes a SLIP-encoded stream. If you want to send/receive OSC over TCP with the OSC 1.0 encoding, you have to build your own abstractions.

scsynth, just like sclang, uses the OSC 1.0 encoding, that’s why it works out of the box.


Just to demonstrate that Pd’s TCP networking is not broken :wink: here’s how you can send FUDI *) messages over TCP from sclang to Pd:

~toFUDI = { |str| (str ++ $\n).collectAs(_.asInteger, Int8Array) };

// This assumes a matching [netreceive 8000] object in Pd.
a = NetAddr("localhost", 8000);

a.tryConnectTCP;

a.sendRaw(~toFUDI.("list foo bar 1 2 3;"));

*) FUDI is Pd’s own message protocol. It is mostly used in patch files but it can also be used as a network protocol. (In fact, Pd’s GUI process uses FUDI to send messages back to the core process.)

Ok! Then that was my misunderstanding. Thanks!

hjh

Thanks everyone for the tips :slightly_smiling_face:

I’m only polling when the encoder value changes.

But I think my mistake was setting both the polling rate and osc_send to 20 updates per second. I can actually set the polling rate much higher, like 500 updates per second, without causing more OSC traffic, since that is basically just for my delta calculation. Then osc_send still only sends 20 updates per second. That mostly solved it.
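Roughly, the idea is to poll fast but rate-limit the sends. A minimal Python sketch of such a throttle (not my actual script; class and method names are made up):

```python
import time

class Throttle:
    """Pass through at most one update per `interval` seconds,
    always keeping the most recent value."""

    def __init__(self, interval=1 / 20):
        self.interval = interval
        self.last_time = 0.0
        self.pending = None            # latest value not yet sent

    def update(self, value, now=None):
        """Feed a new reading; returns the value to send, or None."""
        now = time.monotonic() if now is None else now
        self.pending = value
        if now - self.last_time >= self.interval:
            self.last_time = now
            out, self.pending = self.pending, None
            return out                 # caller sends this over OSC
        return None                    # suppressed for now
```

One caveat of this sketch: a value suppressed just before the knob stops turning stays in `pending`, so a real loop should flush `pending` on the next tick to avoid ending on a stale display value.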

Still, 20 updates per second feels a bit low. I am basically looking at updates at 20 Hz, which is quite noticeable and a bit jaggy on my display. But is that normally the standard?

I could probably go higher, but I’m worried it would use quite a lot of CPU and generate a lot of OSC traffic again.

I was also just wondering what the right workflow is in general. I assumed OSC would be the standard approach. I haven’t tried TCP yet, though.

20 updates per second is not too much. One tip: try to put all 8 values in a single message. This will save quite a bit of bandwidth. But even with 8 separate messages, this should be fine.
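To show what a single combined message looks like on the wire, here is a hand-rolled OSC 1.0 encoder for int32 arguments (illustration only — in a real project you would use a library such as python-osc; the `/encoders` address is made up):

```python
import struct

def osc_message(address: str, *ints) -> bytes:
    """Minimal OSC 1.0 message: address, type tags, int32 arguments."""
    def pad(b: bytes) -> bytes:
        return b + b"\x00" * (-len(b) % 4)   # pad to 4-byte boundary
    tags = "," + "i" * len(ints)
    return (pad(address.encode() + b"\x00")
            + pad(tags.encode() + b"\x00")
            + b"".join(struct.pack(">i", v) for v in ints))

# One 56-byte message for all 8 encoder deltas,
# instead of 8 messages with their own address and header each.
msg = osc_message("/encoders", 3, -1, 0, 0, 5, 0, -2, 1)
```

Each separate message repeats the address and type-tag overhead, so bundling the 8 values cuts both byte count and per-packet processing.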

In SuperCollider, I decide what to do with those values, and then I send an OSC message back to the display so I can see the changes visually.

What is the app that is sending OSC messages to SuperCollider, or rather, that receives the replies from SuperCollider?

One thing that is important to keep in mind: UDP sockets have a buffer for sending/receiving messages. If that buffer overflows, e.g. because the application is receiving messages at a faster rate than it can process, packets will be dropped. If you are in control of the app, try to increase the UDP socket buffers as the default values are often quite small.
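In Python, for instance, the socket buffers can be enlarged with `setsockopt` (the OS may clamp or round the requested size, so check the result):

```python
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# Request a ~1 MiB receive buffer; the kernel may clamp this
# (e.g. to net.core.rmem_max on Linux) or double it internally.
sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 1 << 20)
actual = sock.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)
sock.close()
```

A larger buffer only absorbs bursts, of course; if messages arrive faster than they are processed on average, the buffer will eventually overflow anyway.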

Actually, we have recently increased the UDP socket buffers in SuperCollider itself: https://github.com/supercollider/supercollider/pull/6989

It’s not an app, just a Python script.
I basically built a groovebox with a Raspberry Pi, so I need a Python script to communicate between SuperCollider, the encoders, the buttons and the display. I’m not sure if there’s already an app that could help me with that. It would probably save quite a lot of headaches :smile:

Ah great, I will try increasing the socket buffer. Thank you :ok_hand:

Oh yes, sending all 8 encoders at once is actually a great idea. But I think for now it’s fine, because I never really touch all 8 encoders at once, maybe 2 at most.

Would you mind sharing the Python script, or at least the networking part?

Movies in a theater run at 24 frames per second. There might be some benefit to 30 fps but going significantly higher would probably become a diminishing returns problem (i.e. increased resource usage for a negligible perceptual effect). (VCV Rack claims to update its display at 60 Hz, but I always thought that was not really meaningful to the user.)

If it doesn’t seem smooth at 20 Hz, I’d guess that the actual update rate is less than 20 Hz. So there’s a bottleneck somewhere.

Are the OSC messages transmitted between two physically distinct machines, or both scripts on the pi?

Btw I’m still puzzled about “polling only when the encoder changes… at 20 Hz.” Does that mean the script responds to all encoder changes, at any speed, but caps the rate of outgoing messages? Is it sending messages when the controls are idle, or not?

hjh

20 Hz is not always enough for time-sensitive tasks like dynamic control of sound :slight_smile: The film analogy is not fully relevant here: movement and audible feedback work at a finer time resolution than the eye.
But I think 50 Hz is good for most cases.

I forward accelerometer data from custom-built sensors to sclang at 200 Hz and there’s no issue parsing that, even when working with multiple sensors (packets get dropped occasionally, but more often because of wifi than language being busy). I do pack all the messages (x, y, z, sometimes other values) into a single OSC message though. On the receiving end I set control buses with their values, with further processing happening on the server.

I find 200Hz to be enough for capturing most movement. IIRC, for haptic feedback applications I heard that sensors sometimes work at higher sampling rates (over 1kHz I think?).

Marcin

That’s fair. I guess it depends on the use case. I don’t see how I could twiddle a fader manually faster than 4 or 5 cycles per second, which 20-30 Hz should be able to capture; with a smoothing function, it’s probably more or less indistinguishable to the ear at 20 Hz vs 50 Hz. (Without a smoothing function, 20 Hz will definitely sound rough.) Other types of gesture recognition might need a higher rate.

I still think there isn’t enough information in this thread about the rate of data transmission. I don’t understand what it means to poll at x Hz but only when it changes. A person can touch only so many controls at the same time. If it’s sending updates only when a control is actually changing, I don’t quite see how the data rate is so high as to cause backlogs or dropouts. If it’s hitting real bottlenecks in message transmission, then it’s probably sending redundant messages (which could be eliminated). I think that troubleshooting this issue really depends on the conditions under which messages are sent, but this is still a bit confusing. I think whether the message rate is 20 Hz or 50 Hz is less important than this.

hjh


Yes, absolutely!

I just meant to provide context from my experience with controllers. Definitely there’s no reason to have bottlenecks at this rate of data - the underlying issue should be resolved before considering limitations to message rate.

I might be totally wrong with my mathematics, but let’s say my encoder is 10-bit and can produce values from 1 to 1023 over a full 360-degree rotation. If I quickly wiggle it by half a rotation, that would still be about 500 value changes.

So even if you say we won’t wiggle faster than 4–5 cycles per second, and that we would need maybe half a second to rotate 180 degrees (which is probably still a low estimate), that would already be about 1,000 value changes per second, way above 20 Hz.

Let’s say I’m only working with a 1–127 resolution. That would still be 127 value changes per second, which is also much higher than 20 Hz. Or am I completely wrong here?

So basically my issue is this: when I rotate slowly, let’s say 180 degrees, I get my 64 value changes. But if I rotate quickly, I end up with much fewer values because I’m only sending OSC at 20 Hz.

But I actually just realized that the real issue is bigger. In hardware, I’m sharing the same SPI bus between my encoders and the display. I have two MCP3008s: one uses a hardware CS pin, and the other uses a software CS because I ran out of hardware CS lines.
To run the display and the encoders without issues, they have to run in the same main loop; otherwise I get graphical glitches on the display. That means the encoder polling rate is competing with the display update rate. I can’t poll the encoders faster than the display refresh. I think that’s the real problem.
If I don’t use the display, everything works fine.

Changing hardware would be a real pain though, I would need to re-do my pcb and lots of other things.

That’s true. But, assuming that those 1000 value changes follow a more or less linear pattern, then those changes can be sampled at a much lower rate. Then if the playback end applies a smoothing function, since we are talking about control parameter automation, the result would be practically indistinguishable to the ear from the full 1000-value series.

This is what happens, for instance, if you record a MIDI CC curve into a DAW, and after/during the recording, it thins out the data. (IIRC Cubase is especially assertive about data reduction.)

The thing that downsampling and linear smoothing can’t handle is a change of direction. If it’s just one direction, 0 to 1000 over 1 second, you could decimate that down to 100 or even 50 values and get an easily acceptable result. But if it’s 0 - 1000 - 0 - 1000 etc. for, say, 10 cycles in one second, if it’s decimated down to 100 values, that’s 10 per cycle, which might be acceptable but that’s pushing it, I’d think.

But, practically speaking, no human can wiggle a knob at 10 Hz, even on 5 cups of coffee. It’s probably more like 3 or 4 Hz. Then, even for 100 values per second, it’s 25-33 values per cycle, which is probably enough. (20 Hz technically can handle oscillation up to 10 Hz, but linear or one-pole smoothing wouldn’t accurately reconstruct the curve above probably 3 or 4 Hz anyway.)
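To illustrate the one-directional case: decimating a 1000-step ramp down to 50 samples and reconstructing it by linear interpolation loses essentially nothing (a quick Python sketch; numbers chosen just for this example):

```python
# A "one second" ramp of 1000 encoder values, 0..999.
ramp = list(range(1000))
decimated = ramp[::20]          # keep every 20th sample, i.e. 50 Hz

def lerp_reconstruct(samples, factor):
    """Linearly interpolate `factor` points between adjacent samples."""
    out = []
    for a, b in zip(samples, samples[1:]):
        for k in range(factor):
            out.append(a + (b - a) * k / factor)
    out.append(samples[-1])
    return out

recon = lerp_reconstruct(decimated, 20)
# For a straight ramp, the reconstruction matches the original exactly;
# only direction changes between samples would be lost.
```

The same decimation applied to a fast zig-zag would miss any reversal that falls between two kept samples, which is the failure mode described above.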

Indeed… that would count as a bottleneck. (I’m afraid I don’t have any ideas to fix it, though.)

hjh

One additional comment which may be useful, @jamshark70: to connect to TouchOSC via TCP, it’s essential for TouchOSC to be acting as a server with framing OSC 1.0. Messages from SC don’t go through if TouchOSC is using OSC 1.1 framing.