Joining two mono buffers to one stereo buffer?

For some reason I need stereo buffers, but I only have mono sound files and buffers.
Is there a fast server-side way to join two mono buffers into one stereo buffer?
Or something like Buffer.writeChannel?

thank you

Get the FluCoMa stuff and use FluidBufCompose?

Sam

@sam: thank you very much! I will have a look at FluCoMa. It seems to be quite a complex thing in itself.
So there are no (other) onboard solutions besides the language’s .loadToFloatArray() and .sendCollection()?

As far as I know, no, there isn’t (not server-side).

Merging multiple mono files into a single multichannel buffer requires interleaving the channels: L0, R0, L1, R1, L2, R2, L3, R3… I’m not aware of any direct way in the server to spread out the samples from a mono sound file with a “number of channels” multiplier.
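As a quick illustration of what that interleaved ordering looks like in language-side code (the sample values here are placeholders, not real file data), sclang’s Array .lace method produces exactly this ordering:

```supercollider
// two hypothetical mono sample arrays
l = [ 0.1, 0.2, 0.3, 0.4 ];  // L0, L1, L2, L3
r = [ 0.5, 0.6, 0.7, 0.8 ];  // R0, R1, R2, R3

// .lace interleaves the subarrays: L0, R0, L1, R1, ...
i = [l, r].lace(l.size * 2);
// -> [ 0.1, 0.5, 0.2, 0.6, 0.3, 0.7, 0.4, 0.8 ]
```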

If you absolutely must convert the data, I’d suggest SoundFile and its .readData method to get the samples into the language rather than loadToFloatArray. Compare:

readData way:

  1. Open the SoundFile.
  2. d = Signal.newClear(numSamplesToRead)
  3. readData(d) to get the samples.

loadToFloatArray way:

  1. Language tells the server to load the entire sound file into a server Buffer.
  2. Language tells the server to write the buffer to a temporary file.
  3. Language uses SoundFile to read the temporary file.

… well… you already have the data in a disk file, so it’s a bit of a waste of time to have the server copy that to a different temporary file.
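If you do go the readData route, a minimal language-side sketch might look like this (the file paths are hypothetical, and it assumes both mono files have the same number of frames):

```supercollider
(
var lFile = SoundFile.openRead("left.wav");   // hypothetical paths
var rFile = SoundFile.openRead("right.wav");
var n = lFile.numFrames;
var lData = Signal.newClear(n);
var rData = Signal.newClear(n);
var interleaved, buf;

lFile.readData(lData); lFile.close;
rFile.readData(rData); rFile.close;

// interleave the two channels: L0, R0, L1, R1, ...
interleaved = [lData.as(Array), rData.as(Array)].lace(n * 2);

// load the interleaved data into a 2-channel server buffer
buf = Buffer.loadCollection(s, interleaved, numChannels: 2);
)
```

Note that Buffer.loadCollection writes the data to a temporary file and reads it on the server, so for large files this is still not free; it just avoids the extra server-to-language round trip.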

But…

You might not actually need stereo buffers.

In a SynthDef, a multichannel signal is just an array of UGens.

When you PlayBuf.ar(2, ...) a stereo buffer, you get [ an OutputProxy, an OutputProxy ].

There is no reason why this should be the only way to get a stereo signal from buffers. You could PlayBuf the mono buffers and assemble them into a stereo array.

SynthDef(\monox2ToStereo, { |out, gate = 1, lBuf, rBuf, amp = 0.1|
	var eg = EnvGen.kr(Env.asr(0.01, 1, 0.01), gate, doneAction: 2);
	var left = PlayBuf.ar(1, lBuf);
	var right = PlayBuf.ar(1, rBuf);
	Out.ar(out, [left, right] * (amp * eg));
}).add;

And there’s an easier way to write this – multichannel expansion means that if you give PlayBuf an array of buffer indices, it will produce an array of buffer players.

SynthDef(\monox2ToStereo, { |out, gate = 1, lBuf, rBuf, amp = 0.1|
	var eg = EnvGen.kr(Env.asr(0.01, 1, 0.01), gate, doneAction: 2);
	var stereo = PlayBuf.ar(1, [lBuf, rBuf]);
	Out.ar(out, stereo * (amp * eg));
}).add;
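A usage sketch, assuming two mono buffers have already been read into b1 and b2 (the variable names and paths are placeholders):

```supercollider
// hypothetical mono buffers
b1 = Buffer.read(s, "left.wav");
b2 = Buffer.read(s, "right.wav");

// play them as a stereo pair
x = Synth(\monox2ToStereo, [lBuf: b1, rBuf: b2]);

// later: release via the gate envelope
x.set(\gate, 0);
```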

I guess there might be cases where this wouldn’t work. But for playing back, there is no significant difference between playing a stereo buffer vs playing two mono buffers and distributing them in stereo (except, the latter you can do right now, without waiting for new server features).

hjh


@jamshark: thanks (again) for the explanation!
Initially I was trying to feed .recordNRT with a multichannel file via its “inputFilePath” parameter. For that, I tried to join mono buffers into a multichannel one.
Then I learned on this forum how to load multiple sound files into Score().

Ah ok, that makes sense – but I think you’ve found the right solution (multiple buffers).

In some ways, a stereo buffer is a convenience (e.g., Pure Data doesn’t support them at all – the only way to handle a stereo file is to load the channels into separate arrays, and then it’s up to you to build the parallel signal chain). Conveniences don’t apply in every case, at which point (in any language) it’s necessary to dig a little deeper.

Glad you got it working :+1:

hjh