Alternative to OSC communication between multiple machines

Hi, I want to build a proof of concept combining visuals/sound with physical/biological modelling data.
Before this, I used OSC to send/receive data when I worked on audiovisual projects.

But this model needs to send/receive a vast amount of data.
Is there any recommendation or alternative for stable communication between two or more Mac/Windows machines?

Wirelessly sending vast amounts of data between different operating systems with latency as low as UDP's? Not that I know of, but if you compromise on one of those parameters, you might be able to find something. Which one could your project give up?

How big is “vast”?

Would it be possible to copy the data across the machines in advance and only share some other data to synchronise the systems, e.g. a time offset?

Also, OSC does not provide stable communication; instead, UDP (the underlying transport) optimises for latency. If you want stability at the cost of latency, you should use TCP.

A wired OSC connection might work here - again, depending on how big “vast” is.

Are you sure that OSC really is your bottleneck? Most of the size overhead of OSC comes from the address pattern. If you really need to save bandwidth, try to avoid sending many small messages and instead try to pack your data in a single message.

One thing to watch out with large OSC UDP traffic is that you can easily overflow the UDP receive buffer (even on a local machine!) and consequently lose packets. OSC over TCP would be the safer option, but it is a bit slower.

In any case, I would recommend abstracting away the actual transport mechanism, so you can swap it out for something more efficient once it actually becomes a problem.

Around 1 million balls in a box. Each ball sends a trigger signal when it contacts the boundary of the box, and OSC forwards the trigger to a SynthDef.

So far I haven’t tried more than 100 balls; with one trigger signal detected at the border and sent to SC (one SynthDef), it doesn’t work properly.
So I’m trying to build a 1-to-1 system:
100 balls each send a trigger → 100 SynthDefs each receive a trigger.
Before that, I always wanted to find a good way to communicate with another PC, so I asked on the forum.

You can try to batch multiple triggers into a single message. Then the OSC message overhead (and UDP + IP header overhead!) becomes negligible.
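
To illustrate the saving, here is a rough back-of-the-envelope sketch in Python. The 28-byte figure is the UDP + IPv4 header overhead per packet; representing each trigger as a single 32-bit ball ID is my assumption for the example:

```python
import struct

# Hypothetical sketch: instead of sending one OSC message per ball trigger,
# pack all triggered ball IDs into a single binary payload.
UDP_IP_HEADER = 28  # UDP (8) + IPv4 (20) header bytes per packet

def one_message_per_trigger(ball_ids):
    # each packet carries a 4-byte int plus per-packet header overhead
    return sum(UDP_IP_HEADER + 4 for _ in ball_ids)

def batched_message(ball_ids):
    # single packet: one header, then 4 bytes per ball ID
    payload = struct.pack(f">{len(ball_ids)}i", *ball_ids)
    return UDP_IP_HEADER + len(payload)

triggers = list(range(100))
print(one_message_per_trigger(triggers))  # 3200 bytes across 100 packets
print(batched_message(triggers))          # 428 bytes in a single packet
```

The OSC message framing itself (address pattern, type tags) would add a few more bytes per message, which makes batching even more attractive.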

Again, sending large data over UDP can be risky. If packet loss is unacceptable, you have to use TCP (or implement your own retransmission scheme).

signal detected at the border and sent to SC (one SynthDef), it doesn’t work properly.
So I’m trying to build a 1-to-1 system:
100 balls each send a trigger → 100 SynthDefs each receive a trigger.

You want to send trigger messages over the network. What you actually do with these triggers shouldn’t make a difference…

Interesting. So is such an overflow the main or even the only reason why OSC UDP messages can be lost?
When does such an overflow happen, and how many messages are we talking about in that scenario?
The UDP socket has a buffer, I assume. Could adding an extra buffer in the receiving program help to avoid such an overflow?

Does SuperCollider use UDP by default? Do people use it with TCP as well? When?

So is such an overflow the main or even the only reason why OSC UDP messages can be lost?

Yes, buffer overflows are the main reason for UDP packet loss. When sending messages between programs on the same machine, there are only two buffers involved: the UDP socket send buffer and the UDP socket receive buffer. If the UDP send buffer is full, what happens depends on the OS: some may drop the packet, others may block until space is available. Same with the UDP socket receive buffer, but on most systems incoming packets are dropped if the buffer is full. The receive buffer can overflow if it is not big enough and the receiving application is too slow.
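
As a sketch, you can provoke such an overflow on the loopback interface. This assumes POSIX-style sockets and an OS that drops datagrams when the receive buffer is full (as Linux does); the exact numbers are system-dependent:

```python
import socket

# Shrink the receive buffer, flood the socket without reading, then drain it:
# datagrams that arrived while the buffer was full are simply gone.
recv_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv_sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 4096)  # tiny buffer
recv_sock.bind(("127.0.0.1", 0))
recv_sock.setblocking(False)
addr = recv_sock.getsockname()

send_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
SENT = 1000
for _ in range(SENT):
    try:
        send_sock.sendto(b"x" * 512, addr)  # flood faster than anyone reads
    except OSError:
        pass  # some systems signal full buffers on the send side instead

received = 0
while True:
    try:
        recv_sock.recvfrom(2048)
        received += 1
    except BlockingIOError:
        break  # buffer drained

print(f"sent {SENT}, received {received}")  # typically far fewer than 1000
recv_sock.close()
send_sock.close()
```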

When sending UDP packets over the internet, there are many intermediate buffers involved - and consequently many more opportunities for packet loss.

Could adding an extra buffer in the receiving program help to avoid such an overflow?

First, you can increase the socket receive buffer size, either in the source code of the program (see SO_SNDBUF and SO_RCVBUF socket options), or globally with some OS-specific configurations. On some systems, the default socket buffer size is rather small, particularly on Windows.
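
For example, in Python the socket options mentioned above can be queried and set like this. Note the OS has the final say (Linux, for instance, doubles the requested value and caps it at net.core.rmem_max), so always read the size back to see what you actually got:

```python
import socket

# Query the default UDP receive buffer size, then ask for a bigger one.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
default_size = sock.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)

sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 1 << 20)  # request 1 MiB
actual_size = sock.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)

print(f"default: {default_size} bytes, after request: {actual_size} bytes")
sock.close()
```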

Also, you can write your program in a way that the network receive thread receives packets as fast as possible and only puts them on the queue for another thread to process. This is how scsynth works. sclang, on the other hand, interprets OSC messages directly on the network thread; if the message handler takes too long, subsequent incoming messages might get lost…
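
A minimal sketch of that receive-thread-plus-queue pattern in Python (the general idea, not scsynth's actual code; the message contents are made up):

```python
import queue
import socket
import threading

packet_queue = queue.Queue()

def network_thread(sock):
    # cheap loop: pull datagrams off the socket and hand them over immediately
    while True:
        try:
            data, _ = sock.recvfrom(65536)
        except socket.timeout:
            break  # safety net so the sketch always terminates
        if data == b"quit":
            break
        packet_queue.put(data)
    packet_queue.put(None)  # sentinel: tell the worker to stop

def worker_thread(results):
    # potentially slow message handling happens here, off the network thread
    while True:
        data = packet_queue.get()
        if data is None:
            break
        results.append(data.decode())

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("127.0.0.1", 0))
sock.settimeout(2)
addr = sock.getsockname()

results = []
receiver = threading.Thread(target=network_thread, args=(sock,))
worker = threading.Thread(target=worker_thread, args=(results,))
receiver.start()
worker.start()

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
for msg in (b"/tr 1", b"/tr 2", b"quit"):
    sender.sendto(msg, addr)
receiver.join()
worker.join()
print(results)  # ['/tr 1', '/tr 2']
sock.close()
sender.close()
```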

Does SuperCollider use UDP by default? Do people use it with TCP as well? When?

Yes, it uses UDP by default. You can switch the Client/Server communication to TCP with the ServerOptions.protocol option. I don’t think many people do this, but it can help in situations where you send many, many messages in batches and you don’t want to bother with adding appropriate wait times.

Currently, sclang itself cannot act as a TCP server (i.e. accept incoming connections), but it can connect to TCP servers as a TCP client and subsequently receive messages. See also TCP input to SuperCollider.


Thanks a lot for providing this info; those were questions I’d had on my mind for some time already. I’m using Linux. I found this:
https://access.redhat.com/documentation/en-us/jboss_enterprise_application_platform/5/html/administration_and_configuration_guide/jgroups-perf-udpbuffer

Makes you wonder whether it’s worth changing that buffer size and what the downsides are.

Interesting.

Is there anything to say about how much the length of such an address pattern affects 1) reliability and 2) speed?
If there is an effect, it might be good to choose short addresses in general when using OSC.

That’s what scsynth does. It even introduced non-standard “integer address” patterns, where the first byte is 0 (to distinguish it from “real” address patterns, which start with “/” or “#”) and the whole 4-byte sequence can be parsed as a big-endian 32-bit integer. If you want to send a command to scsynth, you can either use an OSC address pattern string or directly use the command number as an integer address. For example, the address pattern for the “s_new” command may be either the string “/s_new” (8 bytes with padding) or the byte sequence “0 0 0 9”.
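
A small Python sketch of the two encodings (the padding rule follows OSC's four-byte alignment; the command number 9 for “s_new” is taken from the text above):

```python
import struct

# A standard OSC string address is null-terminated and padded to a multiple
# of 4 bytes; the non-standard integer address is always exactly 4 bytes.
def osc_string(s):
    b = s.encode() + b"\x00"    # null terminator
    pad = (4 - len(b) % 4) % 4  # pad up to the next 4-byte boundary
    return b + b"\x00" * pad

string_address = osc_string("/s_new")  # 8 bytes with padding
int_address = struct.pack(">i", 9)     # b'\x00\x00\x00\x09'

print(len(string_address))  # 8
print(int_address)          # first byte is 0, so it can't be a string address
```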

Note that for typical use cases the address pattern overhead shouldn’t matter too much; just don’t make them excessively long. To give some perspective: the overhead of every UDP message is 28 bytes for IPv4 or 48 bytes for IPv6. This also shows that it can be rather inefficient to send many messages that each contain only a single argument, as the actual payload would be just a fraction of the protocol overhead. That’s typically not much of a concern with loopback connections or local (Ethernet) networks, but it can matter when sending data over the internet or in crappy WLAN networks.
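
As a quick back-of-the-envelope example, consider a hypothetical message “/tr” carrying a single float over UDP/IPv4 (the address name is made up for illustration):

```python
# Byte accounting for a minimal single-argument OSC message.
addr = 4      # "/tr" + null terminator, padded to 4 bytes
typetags = 4  # ",f" + null terminator, padded to 4 bytes
payload = 4   # one 32-bit float argument
osc_message = addr + typetags + payload  # 12 bytes of OSC data
udp_ipv4_overhead = 28

total = osc_message + udp_ipv4_overhead
print(payload / total)  # 0.1 -> only 10% of the bytes on the wire are data
```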


And using bundles with time tags? Does that have any (significant) effect on the performance of OSC, compared to using OSC messages without a time tag?

If there is no such effect, it might be good design to always use time tags when using OSC.

OSC bundles have two features that are both orthogonal and optional:

  1. timetag (typically used for scheduling)
  2. the contained messages must be dispatched atomically and in the exact order in which they appear

If you don’t need either of these two features, there is just no point in using OSC bundles. Bundles are not free, because they have an overhead of at least 20 bytes: 8 bytes for the “#bundle” string, 8 bytes for the timetag (even if unused), plus 4 bytes for the size of every contained message.
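
A Python sketch counting the bytes of a bundle that wraps one 12-byte message, to make the 20-byte figure concrete (the message contents are made up):

```python
import struct

# Build a minimal OSC bundle: "#bundle" string, timetag, then each message
# prefixed with its 4-byte size.
def bundle(timetag, *messages):
    out = b"#bundle\x00"               # 8 bytes
    out += struct.pack(">Q", timetag)  # 8-byte timetag (even if unused)
    for msg in messages:
        out += struct.pack(">i", len(msg)) + msg  # 4-byte size per message
    return out

message = b"/tr\x00,i\x00\x00" + struct.pack(">i", 1)  # a 12-byte message
packed = bundle(1, message)
print(len(packed) - len(message))  # 20 bytes of pure bundle overhead
```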


But for reliable sequencing with OSC (as an alternative to MIDI sequencing), one should use time tags, right? To adjust for the varying latencies of the OSC messages (notes)? Like SuperCollider does.

scsynth checks the time tag of every OSC message it gets from sclang and just adds 2 ms of latency to that time tag, so every OSC message is scheduled in the future and their timing is automatically adjusted for the delay in arrival?

You get a “late” message when the time tag + 2 ms latency is in the past?

Well, yes, for proper relative timing you’d need bundles with timetags. But this is not the only use case of OSC. For example, if you play a live instrument, you want the messages to be dispatched immediately.

scsynth checks the time tag of every OSC message it gets from sclang and just adds 2 ms of latency to that time tag

I don’t know where you got the 2 ms number from. Anyway, the latency is not added by the Server but by the Client!

When you schedule OSC bundles in sclang, you typically add a fixed latency (Server.latency) to every timetag. This is done automatically by the EventStreamPlayer (Patterns) and Server.bind. The Server then stores incoming bundles in a priority queue and dispatches the contained messages at the corresponding time. If the timetag is already in the past, the bundle is considered “late”.
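
A hedged Python sketch of that scheme; the names and the 0.2 s value are illustrative stand-ins, not sclang's or scsynth's actual implementation:

```python
import heapq

LATENCY = 0.2  # stands in for sclang's Server.latency default

def client_stamp(logical_time):
    # the Client adds a fixed latency to every outgoing timetag
    return logical_time + LATENCY

server_queue = []  # the Server's priority queue of pending bundles

def server_receive(timetag, msg, now):
    if timetag < now:
        print(f"late: {msg}")  # timetag already in the past -> "late" warning
    heapq.heappush(server_queue, (timetag, msg))

def server_dispatch(now):
    # pop everything whose timetag has been reached
    done = []
    while server_queue and server_queue[0][0] <= now:
        done.append(heapq.heappop(server_queue)[1])
    return done

server_receive(client_stamp(10.0), "noteA", now=10.05)  # arrives in time
server_receive(client_stamp(10.0), "noteB", now=10.25)  # network hiccup: late
played = server_dispatch(now=10.3)
print(played)  # ['noteA', 'noteB'] - a late bundle is still dispatched
```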


2 ms should be 200 ms, I guess:
https://doc.sccode.org/Guides/ServerTiming.html

Ok, so live instruments are handled immediately, and the differences in latency aren’t adjusted. Hm, I would have expected the added latency to be a smaller number, but yeah, in this case you want the latency to be as small as possible, I guess.

2 ms should be 200 ms, I guess:

First, 0.2 seconds is just the default – and much too high for modern systems! Note that this is not automatically applied to all outgoing OSC bundles. It is just a variable that is used by certain subsystems, such as Pbind/EventStreamPlayer and Server.bind.

The reason why it does not make sense to forward incoming MIDI/network messages to the Server as bundles is that doing so would only preserve the exact jitter present on the Client, which is rather pointless.

Note that for live input you also want to reduce the hardware buffer size as much as possible. See also the last few posts in this recent thread: Server clock vs. Language clock - #41 by Spacechild1. For sequencing, the hardware buffer size does not matter, so you can set it to a higher value to play it safe. It’s a bit like recording vs. mixing in a DAW.


For example, assume the server is set to the default 200 ms latency, and that an event is scheduled for beat 100. The clock wakes up at beat 100, a timestamp is calculated as beat 100 + 0.2 sec, and you hear the event at this (delayed) time.

There’s no further adjustment.

So you hear the event “late” relative to the clock – but this doesn’t matter, because you never hear the clock itself. If everything is late by the same amount, then everything sounds together, even if it is “behind” the inaudible clock. So the “lateness” is a non-issue. There is no absolute time reference – the only thing that matters is that the events’ timing relative to each other is consistent.

Now, that’s not true of LinkClock, where we need to sound on time relative to other peers. To handle this, we subtract latency from the beat’s seconds-value. SC’s clock runs “early” relative to other peers but the sound is late relative to the early clock, and comes out on time relative to the peers.
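
The two cases above can be put into a tiny numeric sketch (purely illustrative; 0.2 s stands in for Server.latency):

```python
latency = 0.2

# Normal clock: an event scheduled at beat-time t sounds at t + latency.
# Everything shifts by the same amount, so relative timing is preserved.
clock_beat_time = 100.0
sounding_time = clock_beat_time + latency  # heard 0.2 s "behind" the clock

# LinkClock: to sound together with other peers, SC's clock runs early by
# `latency`; adding the output latency back lands the sound on peer time.
peer_time = 100.0
sc_clock_time = peer_time - latency  # the clock wakes up early
timestamp = sc_clock_time + latency
print(abs(timestamp - peer_time) < 1e-9)  # True: on time for the peers
```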

Basically… it’s a bit mind-bending, but, if it sounds good, don’t worry about it.

hjh
