Buffer.allocConsecutive

Hi,

I have a couple of questions about Buffer (and VOsc), which I’ve been using for years without quite understanding some of the finer points.

When working with soundfiles using Buffer.alloc, numFrames is easy to understand: you can set it to s.sampleRate * x, where x is the number of seconds needed.

However, when working with several envelopes (or whatever), building Signals (.asSignal), and then loading into buffers using Buffer.allocConsecutive, I see that numFrames is described as:

The number of frames to allocate in each buffer. Actual memory use will correspond to numFrames * numChannels.

How can we know what the number of frames should be?

Here’s a working example (of what I’m testing), built from tutorials and past code. It’s a bit basic (and probably not correct), but at least it gives someone a basis on which to explain and reply.

(
	~sigArrays = 4.collect({
		arg i;
		i = i.linexp(0, 3, 4, 40).round(1);
		Env(
			{ rrand(0.0, 1.0) }.dup(i),
			{ rrand(0.5, 1) }.dup(i - 1),
			{ rrand(-5, 5) }.dup(i - 1)
		).asSignal(512); // I've used 512, but why not 1, 2, 16, etc.? They're also pow2.
	});
)

~sigArrays.plot;

// Same question for numFrames: how do I calculate the frames from the above Envs?
// I notice I can use any pow2 number in the Buffer,
// except that very low figures just 'click' - too small, I imagine.
// So how do I find the numFrames to use (numChannels is 1 here, I see)?

(
	b = Buffer.allocConsecutive(4, s, 512*2, 1, {
		arg buf, index;
		buf.setnMsg(0, ~sigArrays[index].asWavetable);
	});
)

(
	a = { VOsc.ar(LFNoise1.kr(5).range(b.first.bufnum+0.1, b.last.bufnum-0.1),
		LFNoise0.kr(10).range(120, 240), 0, 0.2) }.play;
)

b.free;

Oh, and a supplementary question about pow2 numbers. Why do the examples mostly use 512 (or 513) and above? As I understand it, VOsc, Signal, etc., need pow2 numbers. But what’s wrong with smaller pow2 numbers (or even much bigger ones, e.g. 1.2676506002282e+30)?

Thanks in advance - joesh

It’s exactly the same as “frames” for alloc.

This comment in the help may not be stated for alloc, but it’s true (and exactly the same) for both alloc and allocConsecutive.

If you alloc 1 channel of 44100 frames, memory use is 44100 x 1 x 4 (bytes per sample).

If you alloc 2 channels of 44100 frames, memory use is 44100 x 2 x 4 (bytes per sample).

So memory use depends on the number of channels, but with buffer UGens, you’re not dealing with memory directly, only with frames. So there’s nothing here to worry about.
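That arithmetic is easy to check directly in sclang (the 4 comes from the server storing samples as 32-bit floats, 4 bytes each):

// 1 channel of 44100 frames:
(44100 * 1 * 4) // -> 176400 bytes, about 172 KB
// 2 channels of 44100 frames:
(44100 * 2 * 4) // -> 352800 bytes, about 345 KB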

hjh

Thanks James for your answer, however, I guess (as always) my questions aren’t clear.

Firstly, what I don’t understand is (using the example I gave): what should I put in the 3rd argument of Buffer.allocConsecutive, and why? The argument asks for numFrames, but as I said, how do we know how many frames are needed? Secondly, I notice that unless my numFrames is a pow2, (in this case) VOsc throws an error telling me as much. So does this simply mean that I should use any pow2 number, and if so, what’s the connection between my pow2 number and the number of frames?

Lastly, in a Shaper example, I notice that Buffer.alloc uses numFrames 512 (pow2 again). In that example the buffer is allocated first, but why 512 and not 16, or 1.2676506002282e+30? BTW, I notice that if one changes the numFrames argument in alloc (in the example below), 1 doesn’t work, 2 produces an interesting wave in the scope with few sidebands, and 16 starts to produce a wave much like 512, although not as refined, etc.

// Interesting to see scopes results using 2, 4, 16, 512 or 512*2
b = Buffer.alloc(s, 512, 1, { |buf| buf.chebyMsg([1,0,1,1,0,1])});

(
{
    Shaper.ar(
        b,
        SinOsc.ar(300, 0, Line.kr(0,1,6)),
        0.5
    )
}.scope;
)

b.free;

Thanks again - joesh

Oh ok, sorry that I misunderstood.

How many frames depends on what you’re using it for. It seems like you’re after wavetables. Usually a wavetable buffer is 1024, 2048 or 4096 frames, corresponding to a 512, 1024 or 2048 point wavetable. (BTW, for wavetable use, you need to turn the array into a Signal and then call asWavetable on it: .asSignal(512).asWavetable will give you a 1024 point collection, and the Buffer should be this size.)
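A quick way to confirm those sizes (a minimal sketch; ~env, ~sig and ~table are just illustrative names):

~env = Env([0, 1, 0], [1, 1]);
~sig = ~env.asSignal(512);  // a 512 sample Signal
~table = ~sig.asWavetable;  // wavetable format doubles it
~sig.size;   // -> 512
~table.size; // -> 1024
// so the matching buffer is Buffer.alloc(s, 1024, 1)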

For wavetables, it’s a balance between memory and fidelity.

If you have a one sample wavetable, then you don’t have a wave at all (“1 doesn’t work”).

If you have two samples, then you get at best a triangle wave – no matter what you were trying to put into it. Almost every point you try to look up in the table will be interpolated. Interpolation is always an approximation, so you don’t want to rely on it too heavily.
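You can see this directly with a minimal sketch (assuming a running server; ~b2 is just an illustrative name). Whatever two values you put in, linear interpolation between them produces a triangle-like shape:

(
~b2 = Buffer.sendCollection(s, Signal[-1, 1].asWavetable, 1);
)
{ Osc.ar(~b2, 220, 0, 0.2) }.scope; // triangle-ish, regardless of the two values
~b2.free;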

But if you have 512 samples, then the effect of interpolation is greatly reduced. More samples, more accuracy, but also more memory use. Not a big deal for a few tables, maybe a big deal for a thousand tables.

Per the sampling theorem, the highest harmonic you can represent needs at least 2 samples per cycle. So a 512 sample table gives you up to 256 harmonics, etc. (but there's no need to go up that high).
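As a quick worked example of that limit (plain arithmetic, using the 240 Hz figure from the VOsc example above):

// highest harmonic a 512 sample table can hold:
(512 / 2) // -> 256
// at a 240 Hz fundamental, harmonic 256 would sit at:
(240 * 256) // -> 61440, i.e. 61440 Hz, far above hearing (and usually above Nyquist)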

It’s a long way around to say: the number of frames is up to you, but not too small (inaccurate waveform) and not too big either (wasted memory).

Hope that helps –
hjh

Great, that seems to be clearer.

Just for info, in my example (the 1st one), I called .asSignal directly on the collected Envs, and then in Buffer.allocConsecutive used .asWavetable directly. Is that not a good idea, or should I somehow convert my Envs to Signals first, before loading them into the Buffer?

Thanks - Joesh

That’s perfectly fine!

Each Env needs to go through asSignal → asWavetable → transmit to buffer (setnMsg). collect is a loop and allocConsecutive is implicitly a loop. It doesn’t matter which loop does asSignal and asWavetable (could be both in the first loop, or both in the second, or as you have it).
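For example, both of these placements do the same thing (a minimal sketch; the Env here is just a stand-in for your collected envelopes):

// (a) asSignal in the first loop, asWavetable in the second (as you have it):
~sigs = 4.collect({ Env([0, 1, 0], [1, 1]).asSignal(512) });
b = Buffer.allocConsecutive(4, s, 1024, 1, { arg buf, index;
	buf.setnMsg(0, ~sigs[index].asWavetable);
});

// (b) both conversions in the first loop:
~tables = 4.collect({ Env([0, 1, 0], [1, 1]).asSignal(512).asWavetable });
b = Buffer.allocConsecutive(4, s, 1024, 1, { arg buf, index;
	buf.setnMsg(0, ~tables[index]);
});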

I was looking for them to be together; reading the code on my phone, I just overlooked it.

Also the 512*2 is correct.

hjh
