Why would b_getn be much slower than b_setn?

Hi there,

I’m experimenting with workarounds for scsynth.wasm’s lack of libsndfile support. I am transferring buffer contents back and forth between server buffers and audio files stored in IndexedDB. Performance is good when filling a long buffer by sending contents from the client, but the other direction, receiving the buffer contents in the client, is a bottleneck. Here are some quick numbers in milliseconds for a 42-second stereo buffer at 48 kHz:

BUF READ - time spent reading 1113, spent sending 1871
BUF WRITE - time spent writing 3139, spent receiving 25661

The first line shows that file I/O (in the client, reading from IndexedDB) takes around 1 second and OSC transmission takes around 2 seconds. The second shows that file I/O (in the client, writing to IndexedDB) takes around 3 seconds, but receiving the contents from scsynth.wasm takes a whopping 25 seconds.

So I wonder if anyone has a clue why the b_getn part is so much slower than the b_setn part. Both are asynchronous buffer commands, and the code for the two directions is more or less symmetrical (no parallelism; I always wait for the OSC commands to complete).

Both are asynchronous buffer commands

/b_set[n] is synchronous!


Ha. Ok, that would explain it. What is the technical reason that b_setn can operate synchronously, but b_getn cannot?

Any ideas to speed up the b_getn part, then? What if I “clump” multiple b_getn requests per cycle; would that help? (Implementing that is a bit of work, which is why I’d like some opinions beforehand.)

By the way, I do add a /sync after the /b_setn part, so I suppose it does perform a full roundtrip through the non-realtime thread.
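For reference, the write pattern described above can be sketched as follows. This is a minimal sketch with a mock server; `makeMockScsynth`, `send`, and `setReplyHandler` are hypothetical stand-ins for the real OSC transport, but the message shapes (`/b_setn`, `/sync`, `/synced`) follow the SuperCollider server command reference:

```javascript
// Mock scsynth: applies /b_setn writes and answers /sync with /synced,
// simulating the asynchronous NRT-thread roundtrip with a small delay.
function makeMockScsynth() {
  const data = [];
  let onReply = null;
  return {
    setReplyHandler(fn) { onReply = fn; },
    send(msg) {
      const [addr, ...args] = msg;
      if (addr === "/b_setn") {
        const [, start, , ...samples] = args; // bufnum, start, count, samples...
        samples.forEach((s, i) => { data[start + i] = s; });
      } else if (addr === "/sync") {
        setTimeout(() => onReply(["/synced", args[0]]), 1);
      }
    },
    data,
  };
}

// Stream `samples` in /b_setn chunks, then append ONE /sync at the end
// and resolve when the matching /synced reply arrives.
function writeBuffer(server, bufnum, samples, chunkSize, syncId) {
  return new Promise((resolve) => {
    server.setReplyHandler((msg) => {
      if (msg[0] === "/synced" && msg[1] === syncId) resolve();
    });
    for (let start = 0; start < samples.length; start += chunkSize) {
      const chunk = samples.slice(start, start + chunkSize);
      server.send(["/b_setn", bufnum, start, chunk.length, ...chunk]);
    }
    server.send(["/sync", syncId]);
  });
}
```

The single trailing /sync acts as the “natural throttle” mentioned here: the client only blocks once per transfer, not once per chunk.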

What is the technical reason that b_setn can operate synchronously, but b_getn does not?

In both cases, the reading/writing of buffer data happens synchronously in stage1 (= RT thread). It’s just that b_getn also needs to send a reply message, which happens in stage2 (= NRT thread), so there’s always an additional NRT thread roundtrip.

no parallelism, always waiting for the OSC commands to complete

I think that might be a problem. It’s not necessary to sync for every b_getn message. You can just fire them in batches and then collect the OSC reply messages. However, if you’re using UDP, you would need to add some time between batches, otherwise you risk overflowing the UDP receive buffer and losing messages.

For b_setn you don’t need to sync at all! But the same caveat about UDP applies.
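The batching idea could look like the sketch below. The transport is mocked; `makeMockServer` and `bGetn` are hypothetical names (a real client would send `/b_getn` and match the reply messages by buffer number and start index), but the structure, several requests in flight, one wait per batch, is the point:

```javascript
// Hypothetical async transport: resolves each /b_getn request with the
// requested slice after a small delay, simulating the NRT roundtrip.
function makeMockServer(buffer) {
  return {
    bGetn(bufnum, start, count) {
      return new Promise((resolve) =>
        setTimeout(() => resolve(buffer.slice(start, start + count)), 1)
      );
    },
  };
}

// Read `total` samples in chunks of `chunkSize`, keeping `batchSize`
// requests in flight at a time (the "clumping" discussed above).
async function batchedRead(server, bufnum, total, chunkSize, batchSize) {
  const out = new Float32Array(total);
  for (let base = 0; base < total; base += chunkSize * batchSize) {
    const requests = [];
    for (let i = 0; i < batchSize; i++) {
      const start = base + i * chunkSize;
      if (start >= total) break;
      const count = Math.min(chunkSize, total - start);
      requests.push(
        server.bGetn(bufnum, start, count).then((chunk) => out.set(chunk, start))
      );
    }
    await Promise.all(requests); // one wait per batch, not per request
  }
  return out;
}
```

With batchSize = 1 this degenerates to the original one-request-at-a-time pattern; larger batches amortize the per-request roundtrip.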

I see. Well, I didn’t want to “clot” the server; that’s why I added the sync as a “natural” throttle for the client-to-server direction. I will try sending multiple b_getn requests independently and see what happens. Perhaps I’ll add a minimum duration between requests, again so as not to clot the NRT queue, since other tasks should still be processed without much delay. I use TCP on the desktop (by the way, the receive duration there is 12 seconds, so twice as fast as in the browser, which gives a good indication of the current wasm performance; still too slow IMHO), so there is no problem with losing UDP packets. In the browser, no network is involved; messages are passed directly between JS and WASM, so again no danger of losing packets.

You can check out the implementation of Buffer.sendCollection and Buffer.getToFloatArray for ideas.


I’m now clumping ten requests per cycle, which translates quite directly into a 10x speedup. I’m seeing occasional xruns in the browser now, but nothing dramatic (I should move to AudioWorklets at some point in any case).
