Splay - not getting spread, wrong use?

Hi,

I was trying to use Splay to spread the 4 channels produced in this code evenly across the stereo field, but all I get is a mono mixdown duplicated to both channels (as if by .dup).
I either don’t understand how Splay works or …

example:

b = Buffer.read(s, Platform.resourceDir +/+ "sounds/a11wlk01.wav");

(
x = {
	arg out = 0, bufnum, dur=0.2, freq=140, amp=0.5;
	var snd;

	snd = GrainBuf.ar(numChannels: 2,
		trigger: Impulse.kr(freq * LFNoise0.kr(freq).range(0.5,1.5)),
		dur: dur,
		sndbuf: bufnum,
		rate: [0.2,0.3,0.4,0.5],
		pos:LFTri.ar(0.01, iphase:[0.5,1,0,2]).range(0,1),
		interp: 4
	);
	// this works:
	// snd = [snd.at(0),snd.at(1)] + [snd.at(3),snd.at(2)] * amp;

	// this is like mixdown to mono and .dup:
	snd = Splay.ar(snd * amp, spread: 1);
	
	//Out.ar(out, snd);
}.play(s, args: [\bufnum, b]);
)

I would be grateful for any pointer to what I’m doing wrong.

Splay expects an array of the items to spread out. Instead, you might be getting an array of 4 left channels and 4 right channels. You might need to .flop the array before Splay…? Not at the computer, so I can’t check.
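
To illustrate what .flop does with a nested array (a quick sketch, not from the original reply): it transposes the nesting, e.g. turning 4 stereo pairs into 2 arrays of 4 channels.

[[1, 2], [3, 4], [5, 6], [7, 8]].flop;
// -> [ [ 1, 3, 5, 7 ], [ 2, 4, 6, 8 ] ]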

hjh

Yes, it’s a nested-array quirk. You are producing 4 totally correlated stereo pairs. Check with a poll of GrainBuf. So it could be done like this:

(
x = {
	arg out = 0, bufnum, dur=0.2, freq=140, amp=0.5;
	var snd;

	snd = [[0.2,0.3,0.4,0.5], [0.5,1,0,2]].flop.collect { |vals| 
		GrainBuf.ar(numChannels: 1,
			trigger: Impulse.kr(freq * LFNoise0.kr(freq).range(0.5,1.5)),
			dur: dur,
			sndbuf: bufnum,
			rate: vals[0],
			pos: LFTri.ar(0.01, iphase: vals[1]).range(0, 1),
			interp: 4
		)
	};
	
	snd = Splay.ar(snd * amp, spread:1);
	
	Out.ar(out, snd.poll);
}.play(s, args: [\bufnum, b]);
)

But as you are not panning with GrainBuf, it can also be done like this:

(
x = {
	arg out = 0, bufnum, dur=0.2, freq=140, amp=0.5;
	var snd;

	snd = GrainBuf.ar(numChannels: 1,
		trigger: Impulse.kr(freq * LFNoise0.kr(freq).range(0.5,1.5)),
		dur: dur,
		sndbuf: bufnum,
		rate: [0.2,0.3,0.4,0.5],
		pos:LFTri.ar(0.01, iphase:[0.5,1,0,2]).range(0,1),
		interp: 4
	).poll;
	snd = Splay.ar(snd * amp, spread: 1);
	
	Out.ar(out, snd);
}.play(s, args: [\bufnum, b]);
)

This line from your workaround …

snd = [snd.at(0), snd.at(1)] + [snd.at(3),snd.at(2)] * amp;

does something perhaps a bit surprising: each component is a correlated stereo signal which is mixed down, so in the end it gives what you expect, but louder than with the “original” mono sources:

snd = [snd.at(0).at(0), snd.at(1).at(0)] + [snd.at(3).at(0),snd.at(2).at(0)] * amp;

thank you @jamshark70 and @dkmayer.

I can see now that my confusion comes from the fact that I completely forgot that my code produces two-channel stereo output, with GrainBuf.ar(numChannels: 2), but panned to the center.

Using rate: [0.2,0.3,0.4,0.5] on top of that creates 4 center-panned stereo pairs. When Splay takes each of those, it puts them left and right, and since they are centered/correlated, they sound like mono, just duplicated.

So simply adding, for example, pan: [-1,1,-1,1] to GrainBuf.ar would (and does) perform the spreading (and the mixing of 8 channels down to 2), as sketched below.

Or, of course, as Daniel suggested, simply using numChannels: 1.
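
For reference, one way to write the pan variant down (a sketch based on the original code above, not from the thread; here the pan argument does the spatialisation and a plain sum handles the 8-to-2 mixdown):

(
x = {
	arg out = 0, bufnum, dur = 0.2, freq = 140, amp = 0.5;
	var snd;
	snd = GrainBuf.ar(numChannels: 2,
		trigger: Impulse.kr(freq * LFNoise0.kr(freq).range(0.5, 1.5)),
		dur: dur,
		sndbuf: bufnum,
		rate: [0.2, 0.3, 0.4, 0.5],
		pos: LFTri.ar(0.01, iphase: [0.5, 1, 0, 2]).range(0, 1),
		interp: 4,
		pan: [-1, 1, -1, 1]	// each grain stream gets its own stereo position
	);
	Out.ar(out, snd.sum * amp);	// sum the 4 stereo pairs down to 2 channels
}.play(s, args: [\bufnum, b]);
)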

Hey, following Luka’s post I noticed that his example only works with the GrainBuf rate in an array (a kind of detune).

Is it possible to get the Splay spread to work without detuning the sound?


b = Buffer.read(s, Platform.resourceDir +/+ "sounds/a11wlk01.wav");



(
x = { arg spread = 1, level = 0.2, center = 0.0;
	Splay.arFill(10,
		GrainBuf.ar(numChannels: 1,
			trigger: Impulse.kr(1.2875),
			dur: Impulse.kr(1.2875),
			sndbuf: b,
			pos: 0,
			interp: 1
		),
		spread,
		level,
		center
	);
}.play;
)


x.set(\spread, 1,   \center, 0);  // full stereo
x.set(\spread, 0,   \center, 0);  // mono center

In this example, spread does not work.

thanks for your help =)

Re-edit: it works better with Splay.ar instead of Splay.arFill, but the effect is very subtle. Is there a way to get a stronger result, such as in the example in the Splay documentation?

If I read your code correctly, it seems you are creating 10 identical signals.

However you spread them, it will all sound like mono.

They have to be different somehow in order to hear any difference.

Introduce a little rand or Rand on some parameter and you should hear the difference.
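
For example, a sketch along those lines (not from the original posts: the GrainBuf is wrapped in a function so that Splay.arFill evaluates it ten times, dur is fixed at 0.2, and Rand gives each channel its own grain start position):

(
x = { arg spread = 1, level = 0.2, center = 0.0;
	Splay.arFill(10, {
		GrainBuf.ar(numChannels: 1,
			trigger: Impulse.kr(1.2875),
			dur: 0.2,
			sndbuf: b,
			pos: Rand(0, 0.9),	// a different start position per channel
			interp: 1
		)
	}, spread, level, center);
}.play;
)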

Please Luka, where would you introduce the Rand?

I want to send a Bus to the Splay, which means the “in” of my function is In.ar(bus, 1).

thanks

This is the actual version I want to develop the spread for:

SynthDef(\stereo, { |out = 0, in, spread = 0, level = 1, center = 0.0|
	Out.ar(out,
		Splay.ar(
			In.ar(in, 1),
			spread, level, center
		)
	);
}).add;

OK, maybe using:

In.ar(in, 1)* [0.1,0.5,0.6,0.2,0.3,0.4,0.12,30.43,0.23,0.11]

If you have only one input channel, then it doesn’t matter how many output channels or how many amplitudes in your array: all output channels will be 100% phase correlated and you will just get a change in volume, but NO stereo effect.

You must introduce phase decorrelation if you want any sense of stereo at all.

Very short delays (20-60 ms) often will do the trick.
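
For instance, a minimal sketch of that idea (the SynthDef name, input bus, and delay times are placeholders, not taken from this thread):

(
SynthDef(\stereoShortDelays, { |out = 0, in = 2, spread = 1, level = 1, center = 0|
	var src = In.ar(in, 1);
	// four copies of the input, each with a different short delay,
	// so the channels handed to Splay are no longer phase correlated
	var chans = [0.02, 0.035, 0.05, 0.06].collect { |t| DelayN.ar(src, 0.08, t) };
	Out.ar(out, Splay.ar(chans, spread, level, center));
}).add;
)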

hjh

Please James, how do I do it? Is it possible to do it with Splay?

Is it done by somehow offsetting one of the channels?

thanks

You need to understand that if you have two identical signals coming from the left and right, it will always be mono. No stereo spread is possible. So you need to introduce (at least) two different signals, and just different amplification will not really work.

To expand your first example, we can introduce an array of values for the parameter pos:

(
x = { arg spread=1, level=0.2, center=0.0;
	Splay.ar(
		GrainBuf.ar(numChannels: 1,
			trigger: Impulse.kr(1.2875),
			dur:  Impulse.kr(1.2875),
			sndbuf: b,
			pos:[0,0.5],
			interp: 1
		),
		spread,
		level,
		center
	);
}.play;
)

x.set(\spread, 1,   \center, 0);  // full stereo
x.set(\spread, 0,   \center, 0);  // mono center

By way of Multichannel Expansion this creates two different signals (channels, an array of 2 UGen graphs) that you can hear as different, and therefore you can observe the stereo spread.
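
(A quick aside, not from the original post, to illustrate multichannel expansion: an array argument turns one UGen into an array of UGens, which then occupy adjacent output channels.)

{ SinOsc.ar([440, 443], 0, 0.1) }.play;	// expands to two SinOscs, on channels 0 and 1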

And similarly, here is your other example with In.ar, where I have added in = 2 so that it takes my mic input (but you can specify another bus) and wrapped it in a feedback delay with an array of two values for delayTime. Multichannel expansion thus creates two channels that are different because they have different delay times, while the rest of the UGen graph is the same (a copy). Changing spread now works:


(
SynthDef(\stereo, { |out = 0, in = 2, spread = 1, level = 1, center = 0.0|
	Out.ar(out,
		Splay.ar(
			CombN.ar(In.ar(in, 1), 1, [0.3, 0.4], 2),
			spread, level, center
		)
	);
}).play;
)

I hope that clarifies the issue better.

A little something though: it seems that with the comb the “center” argument gets disabled, and positioning the sound left or right only works while it stays in stereo.

thanks again geniuses =D

Re-edit: reducing decayTime to a very low value reduces the stereo/delay effect.
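
That is, something like this variant of the SynthDef above (a sketch, not from the original post), with only decayTime lowered:

(
SynthDef(\stereo, { |out = 0, in = 2, spread = 1, level = 1, center = 0.0|
	Out.ar(out,
		Splay.ar(
			CombN.ar(In.ar(in, 1), 1, [0.3, 0.4], 0.1),	// decayTime lowered from 2 to 0.1
			spread, level, center
		)
	);
}).play;
)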

A chorus effect is good for sustained sounds:

(
a = { |freq = 220, amp = 0.1, center = 0, spread = 1,
	chorusDelay = 0.01, chorusPct = 0.7|
	var n = 11;
	var osc = Saw.ar(freq);  // one channel source
	
	// chorus effect: multiple delay lines
	// with lfo on delay time
	// delay + (delay * pct * lfo) = delay * (1 + (pct * lfo))
	var lfos = Array.fill(n, { SinOsc.kr(Rand(0.08, 0.11)) * chorusPct + 1 });
	var delays = DelayC.ar(osc, 0.2, chorusDelay * lfos);
	
	// now you have 11 signals with different delays/lfos
	// so these are not phase correlated
	// and it's meaningful to Splay them
	Splay.ar(delays, spread, amp, center)
}.play;
)


// use XY plot for an imaging meter
// if there is no stereo image, the plot will be a vertical line
b = Bus.audio(s, 2);

(
c = {
	var sig = In.ar(0, 2);
	var mid = sig[0] + sig[1];
	var side = sig[1] - sig[0];
	[side, mid] * 0.5
}.play(a, outbus: b, addAction: \addAfter);
)

z = s.scope(2, index: b.index).style_(2).yZoom_(16);

Read up on mid-side encoding to understand the use of scope here.

hjh

I don’t understand this. Can you explain?

This sounds awfully sarcastic and toxic.

Again, I don’t understand. Can you explain this better?

I think that when somebody answers you with a solution to a problem you have, it means they are trying to help you, by volunteering and spending their time. It’s common courtesy to somehow acknowledge that in your response. Or am I not getting this right? I might have misunderstood something.

Tbh I don’t read it that way… “=D” I think is like :grin: which never seemed sarcastic to me.

I don’t understand the other questions either, so I didn’t try to answer them… I still suspect there is some misunderstanding of what stereo imaging is, but if so, the mid-side article that I posted should help (Sound On Sound, generally an excellent resource).

hjh

Hey Luka, I wrote a first answer to say thank you but accidentally deleted it while writing the comment about the panning, my apologies!!

I think each person who ever took the time to answer me is smart, which makes them geniuses from my point of view. No sarcasm, but I understand things can be read in another way than they were written.

You really solved my question about stereo with the phase decorrelation/CombN, but when it is used, the “center” argument does not respond very well anymore.

I found that by reducing the comb value it was possible again to bring the sound more to the left or right.

These are mutually contradictory goals. You can’t have a wide stereo image and precise center localization at the same time.

If you want to position the aggregate sound more precisely within the stereo field, then you would have to narrow the width of the field. In my example above, a.set(\spread, 0.25, \center, -0.75) positions the sound very clearly to the left.

If you want the image to be wide, then naturally it will seem to be coming from a wide area, and it won’t be clear where the center is.
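
For instance, with the chorus example above (the second line is just the default wide setting, added here for contrast):

a.set(\spread, 0.25, \center, -0.75);	// narrow image, clearly placed to the left
a.set(\spread, 1, \center, 0);		// wide image, no precise localization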

hjh

Thanks for clearing it up. All good.