Using a Signal window as an envelope?

Hello,

Sorry for asking a beginner's question (after spending time trying to find an answer, without success!):

Is it possible to use windows such as
Signal.hanningWindow(1024) or Signal.blackmanHarrisWindow(1024, 0)
as an envelope? I know the possibility exists in GrainBuf, for example, but I'd like to use one to envelope fragments played by PlayBuf.
Thanks
:slightly_smiling_face:

That's possible, but you can also use a simple function for the Hanning window, driven by your phase:

(
var hanningWindow = { |phase|
	(1 - (phase * 2pi).cos) / 2 * (phase < 1);
};

{
	var phase = Sweep.ar;
	hanningWindow.(phase);
}.plot(1);
)


Here is a granular approach with the Hanning window using PlayBuf:

(
// distribute an incoming trigger round-robin across numChannels parallel voices
var multiChannelTrigger = { |numChannels, trig|
	numChannels.collect{ |chan|
		PulseDivider.ar(trig, numChannels, chan);
	};
};

// one window ramp per voice, running at windowRate and held at 0 until its first trigger
var multiChannelPhase = { |triggers, windowRate|
	triggers.collect{ |localTrig, i|
		var hasTriggered = PulseCount.ar(localTrig) > 0;
		var localPhase = Sweep.ar(localTrig, windowRate * hasTriggered);
		localPhase * (localPhase < 1);
	};
};

var hanningWindow = { |phase|
	(1 - (phase * 2pi).cos) / 2 * (phase < 1);
};

SynthDef(\granular, { |sndBuf|

	var numChannels = 8;

	var tFreq, trig, windowRate, triggers, windowPhases, grainWindows, pos, sig;

	tFreq = \tFreq.kr(10);
	trig = Impulse.ar(tFreq);
	windowRate = tFreq / \overlap.kr(1);

	triggers = multiChannelTrigger.(numChannels, trig);
	windowPhases = multiChannelPhase.(triggers, windowRate);
	grainWindows = hanningWindow.(windowPhases);

	pos = Phasor.ar(
		trig: DC.ar(0),
		rate: \posRate.kr(1) * BufRateScale.kr(sndBuf) * SampleDur.ir / BufDur.kr(sndBuf),
		start: \posLo.kr(0),
		end: \posHi.kr(1)
	);

	sig = PlayBuf.ar(
		numChannels: 1,
		bufnum: sndBuf,
		rate: \playBackRate.kr(1),
		trigger: triggers,
		startPos: pos * BufFrames.kr(sndBuf),
		loop: 1
	);

	sig = sig * grainWindows;

	sig = Pan2.ar(sig, \pan.kr(0));
	sig = sig.sum;

	sig = sig * \amp.kr(-10.dbamp);

	sig = LeakDC.ar(sig);
	OffsetOut.ar(\out.kr(0), sig);
}).add;
)

b = Buffer.read(s, Platform.resourceDir +/+ "sounds/a11wlk01.wav");

(
x = Synth(\granular, [
	\tFreq, 1000,
	\overlap, 8,
	\sndBuf, b,
	\amp, -25.dbamp
]);
)

x.free;

(
x = Synth(\granular, [
	\tFreq, 10,
	\posRate, 1,
	\playBackRate, 1,
	\overlap, 0.5,
	\sndBuf, b,
	\amp, -5.dbamp
]);
)

x.free;

Thank you very much, this is a very high-level solution and I'll have to study it very carefully to understand it properly.
However, I'm looking to avoid using the UGen Impulse for grain reading. I would like to use patterns for the playback control of each "grain", indicating for each note the playback duration of a function table that will act as an envelope (with an integrated Done.freeSelf). Also, the idea of starting from the Signal class to obtain the shape of the envelope is that it already offers a wide variety of windows. Maybe my logic is too marked by the use of Csound or Max?

You could load the signal into a buffer and use a Line UGen to drive a BufRd, releasing the synth when done:

b = Buffer.loadCollection(s, Signal.hanningWindow(1024))

(
SynthDef(\grain, { |buf, freq=440, sustain=0.1|
	var phase = Line.ar(0, BufFrames.kr(buf), sustain, doneAction: Done.freeSelf);
	var snd = SinOsc.ar(freq) * BufRd.ar(1, buf, phase);
	Out.ar(0, snd * \amp.kr(0.1))
}).add.play(args: [buf: b, sustain: 0.01]);
)

Pbind(\instrument, \grain, \dur, 0.01, \legato, 2, \freq, Pexprand(220, 440)).play
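
The same idea extends to buffer fragments: a minimal sketch (assuming a sound file loaded into a hypothetical ~snd buffer; \bufGrain and its arguments are my own names), swapping the SinOsc carrier for PlayBuf and letting the pattern set the read position:

~snd = Buffer.read(s, Platform.resourceDir +/+ "sounds/a11wlk01.wav")

(
SynthDef(\bufGrain, { |buf, snd, startPos = 0, rate = 1, sustain = 0.1|
	// Line reads once through the window buffer, then frees the synth
	var phase = Line.ar(0, BufFrames.kr(buf), sustain, doneAction: Done.freeSelf);
	var sig = PlayBuf.ar(1, snd, rate * BufRateScale.kr(snd),
		startPos: startPos * BufFrames.kr(snd));
	Out.ar(0, sig * BufRd.ar(1, buf, phase) * \amp.kr(0.1))
}).add;
)

Pbind(\instrument, \bufGrain, \dur, 0.02, \legato, 2, \buf, b, \snd, ~snd, \startPos, Pseg([0, 1], [8], \lin, inf)).play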

The ways suggested by @dietcv and @jpburstrom are already great, but here is my attempt to implement your intention (sclang reports no error, but I am not sure this is the correct way, and I think it is too complicated for an envelope; the other two options seem better):

(
~envXYC = { |time = 1, levelScale = 1|
	var sampleSize = 1024, temp;
	temp = Signal.hanningWindow(sampleSize);
	temp = temp.collect { |item, index| [index / sampleSize * time, item * levelScale, \lin]};
	temp = temp.insert(sampleSize + 1, [time, 0, \lin])
}
)

x = ~envXYC.(0.1)

Env.xyc(~envXYC.(0.1)).test.plot

{ SinOsc.ar * Env.xyc(~envXYC.(0.1)).ar(Done.freeSelf) }.play

I would like to see how you could use it to wrap fragments in a PlayBuf. Would that be possible?

For granulation the standard window is the Hanning window, because of its symmetry and its continuous slope. Besides the Hanning window, I think the only ones that are interesting for granulation are the Tukey window, with control over its width, and exponential or reversed-exponential window shapes.

You could create those shapes and load them into buffers, but I find this very inflexible:

(
~getEnvBufs = {

	var exponential = Env(
		levels: [0, 1, 0],
		times: [0.01, 0.99],
		curve: [4.0, -4.0]
	);

	var exponentialReversed = Env(
		levels: [0, 1, 0],
		times: [0.99, 0.01],
		curve: [4.0, -4.0]
	);

	var envBufs = [
		exponential,
		exponentialReversed
	];

	envBufs.collect{ |envBuf|
		Buffer.sendCollection(s, envBuf.discretize(4096));
	};
};
~envBufs = ~getEnvBufs.();
)

~envBufs[0].plot;
~envBufs[1].plot;

It's better if you have control over the shapes in real time.
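
For example, a Tukey-like window can be computed statelessly from the phase, with the flat portion as a modulatable parameter (a sketch, my own approximation of the shape):

(
var tukeyWindow = { |phase, width|
	// cosine-tapered flanks around a flat top of relative size `width`
	var fade = (1 - width).max(0.001) / 2;
	var ramp = (phase / fade).clip(0, 1) * ((1 - phase) / fade).clip(0, 1);
	(1 - (ramp * pi).cos) / 2 * (phase < 1);
};

{
	var phase = Sweep.ar;
	tukeyWindow.(phase, MouseX.kr(0, 0.9));
}.plot(1);
)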

Note that patterns run at control rate, so you won't get a pitched tone when using them for granulation. The carrier waveform should reset its phase for each trigger, and the window / envelope you are using should be stateless. So using patterns with an arbitrary carrier oscillator and EnvGen is at least three deviations away from granulation, imo. One optional refinement is sub-sample accuracy: the phase of the carrier should not reset to zero but to the sub-sample offset.
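
To illustrate the last point, a rough sketch (the SynthDef and its phase correction are my own approximation): OffsetOut aligns the output to the nearest sample, and SubsampleOffset.ir reports the remaining fraction of a sample, which is added to the carrier phase instead of resetting it to exactly zero:

(
SynthDef(\subGrain, { |freq = 440, sustain = 0.01, amp = 0.1|
	// only meaningful when the synth is started from a time-stamped bundle (as patterns do)
	var subPhase = SubsampleOffset.ir * SampleDur.ir * freq;
	var phase = Sweep.ar(0, freq) + subPhase;
	var windowPhase = Sweep.ar(0, sustain.reciprocal);
	var window = (1 - (windowPhase * 2pi).cos) / 2 * (windowPhase < 1);
	Line.kr(0, 1, sustain, doneAction: Done.freeSelf);
	OffsetOut.ar(0, sin(phase * 2pi) * window * amp);
}).add;
)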


One other option is to use IEnvGen to create stateless window shapes, which, like the buffer approach, is also not modulatable:

(
{
	var phase = Sweep.ar;
	IEnvGen.ar(Env([0, 1, 0], [0.5, 0.5], \sin), phase);
}.plot(1);
)


Thank you very much for your answers, which helped me a lot to understand SC.
My idea is not really to create a new granular synthesis program; GrainBuf works very well, and I haven't finished exploring all the existing capabilities in SC. I'm just doing an exercise, translating a tool I made in Csound and then in Max (something between a shuffler, a freeze and a spatializer) which I often use in my pieces, to familiarize myself with SC. You can see the current state here.

(
Buffer.freeAll;
(b = (sample: Buffer.read(s, Platform.resourceDir +/+ "sounds/a11wlk01.wav"))));

(~window = (
	triangle: Buffer.loadCollection(s, Signal.bartlettWindow(1024)),
	blackman: Buffer.loadCollection(s, Signal.blackmanHarrisWindow(1024)),
	tukey: Buffer.loadCollection(s, Signal.tukeyWindow(1024)),
	hanning: Buffer.loadCollection(s, Signal.hanningWindow(1024)),
	saw: Buffer.loadCollection(s, Signal.interpolation(1024, 1, 0))));

(
SynthDef.new(\play, {
	arg outs = 0, duree = 1, buf, startPos = 0, win;
	var sig, amp = 1, phase = Line.ar(0, BufFrames.kr(win), duree, doneAction: Done.freeSelf);
	sig = PlayBuf.ar(
		numChannels: 1,
		bufnum: buf,
		rate: \rate.kr(1.0),
		trigger: \trig.kr(1.0),
		startPos: startPos * BufSampleRate.ir(buf),
		loop: \loop.kr(0),
		doneAction: Done.freeSelf
	);
	sig = sig * BufRd.ar(1, win, phase);
	sig = PanAz.ar(6, sig, \pan.kr(0));
	Out.ar(outs, sig * amp);
}).add;
);

(
~sound = b.sample;
~durSound = ~sound.duration;
~speed = Pseq(Array.interpolation(30, 0.05, 0.3).mirror1, inf);
~step = 0.01;
Pbind(
	\instrument, \play,
	\dur, ~speed,
	\buf, ~sound,
	\duree, 2 * ~speed,
	\startPos, Pseq([Pseries(1, ~step, 50)].mirror2, inf),
	\rate, 1,
	\amp, 1,
	\win, ~window.triangle,
	\pan, Pseq([1/6, 3/6, 5/6, 7/6, 9/6, 11/6], inf),
	\outs, 0,
).play
)

Now I need to optimize it, following your advice. I'm also looking at different ways of controlling audio processing, like here, in order to compose with streams. I understand that there are basically two possibilities: patterns and signal modulation.
In fact, I've posted another question about making envelopes better suited to Pbind, without using time. I'll keep looking…

It could be worth looking into Demand UGens. All the pattern classes you have been using in your example have a Demand equivalent.
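
For example, Dseq is the server-side counterpart of Pseq (a minimal sketch, assuming the buffer event b from above):

(
x = {
	var tFreq = \tFreq.kr(20);
	var trig = Impulse.ar(tFreq);
	// Dseq steps through grain start positions per trigger, entirely on the server
	var startPos = Demand.ar(trig, 0, Dseq((0, 0.01 .. 0.5).mirror, inf));
	var phase = Sweep.ar(trig, tFreq);
	var window = (1 - (phase * 2pi).cos) / 2 * (phase < 1);
	var sig = PlayBuf.ar(1, b.sample, BufRateScale.kr(b.sample), trig, startPos * BufFrames.kr(b.sample), loop: 1);
	Pan2.ar(sig * window * 0.2);
}.play;
)

x.free;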

The grain UGens like GrainBuf, GrainSin etc. are really outdated, imo. They sample and hold each parameter per grain, so it's not possible, for example, to do FM / PM or to modulate the window shape at audio rate per grain. The only advantage you get is that they take care of the overlapping inside the SynthDef, but you can do this by hand. I find using BufRd / PlayBuf way more flexible.

For now, I’m just getting started, so this example is very basic. The idea is to go further in working on patterns, applied to different processing programs and evolving over time, in a much more complex architecture than what you see here. It seems to me that the Demand classes are interesting within a single SynthDef, for example, but if you need to control several of them, and develop more complex temporal evolutions, patterns seem better suited? I don't know, I'll have to dig deeper. I also don't know how the different techniques compare in terms of computational cost.

In case it’s not clear, synthesis nodes can be scheduled “sample accurately”.

Cf.

Things like Waveset synthesis etc. rely on this:

I don't think that this is true for a pitched tone:

(
SynthDef(\trig_test, {
	var trig = \trig.tr(0);
	var freq = \freq.kr(440);
	var phase = Sweep.ar(trig, freq);
	var window = IEnvGen.ar(Env([0, 1, 0], [0.01, 0.99], [4.0, -4.0]), phase);
	var sig = sin(phase * 2pi);
	sig = sig * window;
	sig = Pan2.ar(sig, \pan.kr(0), \amp.kr(0.25));
	OffsetOut.ar(\out.kr(0), sig);
}).add;
)

// nothing new, sounds awful at audio rate!!!
(
x = Pmono(\trig_test,
	\trig, 1,
	\dur, 50.reciprocal,
	\freq, 440,
	\amp, 0.25,
).play;
)

x.stop;

(
SynthDef(\impulse_test, {
	var tFreq = \tFreq.kr(1);
	var trig = Impulse.ar(tFreq);
	var freq = \freq.kr(440);
	var phase = Sweep.ar(trig, freq);
	var window = IEnvGen.ar(Env([0, 1, 0], [0.01, 0.99], [4.0, -4.0]), phase);
	var sig = sin(phase * 2pi);
	sig = sig * window;
	sig = Pan2.ar(sig, \pan.kr(0), \amp.kr(0.25));
	OffsetOut.ar(\out.kr(0), sig);
}).add;
)

// works fine!!!
x = Synth(\impulse_test, [\tFreq, 50]);
x.free;

See below for putting a sine tone back together? You can swap Out & OffsetOut while the pattern is running to hear/see the difference, which will depend, of course, on the block size.

SynthDef('sin') { |out=0 freq=440 sustain=1 amp=0.1|
	var osc = SinOsc.ar(freq, 0);
	var env = EnvGen.ar(Env.sine(sustain, amp), 1, 1, 0, 1, 2);
	OffsetOut.ar(out, osc * env);
}.add

Pbind(
	'instrument', 'sin',
	'dur', 1 / 100,
	'legato', 2,
	'freq', 800
).play

s.scope

I am a bit confused by this part:

  1. Line.ar(0, BufFrames.kr(win), sustain, doneAction: Done.freeSelf);
  2. Line.ar(0, BufFrames.kr(win) - 1, sustain, doneAction: Done.freeSelf);

Which one is correct?

The first seems to be correct: with BufRd's default loop: 1, a phase of BufFrames wraps around to sample 0 and completes the window's full period, while BufFrames - 1 stops one sample short:

~triangle = Buffer.loadCollection(s, Signal.hammingWindow(8))

{ var phase = Line.ar(0, BufFrames.kr(~triangle), 1); BufRd.ar(1, ~triangle, phase) }.plot(1)

{ var phase = Line.ar(0, BufFrames.kr(~triangle) - 1, 1); BufRd.ar(1, ~triangle, phase) }.plot(1)

I remembered that the second one is used in some tutorials, so I was confused for a moment.

For the moment, I'm following the logic of separating the "instrument" part (SynthDef) from the "score" part (patterns), whereas it would probably be more efficient not to make this separation and to use modulators (like Impulse, EnvGen, etc.) instead of patterns. Working with .ar is more efficient, without doubt.
The question is: can we avoid working with patterns at a certain level of complexity in "algorithmic composition"?

For example, if we need to describe evolutions of various parameters (like, in my example, the fragments of a sound), which can be applied to different SynthDefs at different times in a piece, and then describe correlations between different evolutions, use a multiplicity of clocks, etc.

I’ll have to find a compromise between computational efficiency and the readability of complex processes.

rdd:

See below for putting a sine tone back together? You can swap Out & OffsetOut while the pattern is running to hear/see the difference, which will depend, of course, on the block size.

So what's the advantage of using Out instead of OffsetOut…? Might as well always use the latter, right?

Even though I am not the one being asked, I will try to explain it to test myself:

OffsetOut outputs signals that are scheduled faster than the control rate (by default, 1/64th of the audio rate) with sample accuracy, by buffering (delaying) them.
(<- Please correct me if the explanation is not correct word for word!)

The server prints messages like the following in the post window if it could not output the signal with sample accuracy:

late 0.08973143
late 0.007235696
...

Here are some listening examples:

(
SynthDef(\sin, { |out=0, freq=440, sustain=1, amp=0.1|
	var osc = SinOsc.ar(freq);
	var env = EnvGen.ar(Env.sine(sustain, amp), doneAction: Done.freeSelf);
	Out.ar(out, osc * env);
}).add;

SynthDef(\sin_offset, { |out=0, freq=440, sustain=1, amp=0.1|
	var osc = SinOsc.ar(freq);
	var env = EnvGen.ar(Env.sine(sustain, amp), doneAction: Done.freeSelf);
	OffsetOut.ar(out, osc * env);
}).add;
)

// ex. 1: noisy (dirty) and the pitch does not correspond to the 440 Hz frequency.
(
Pbind(
	\instrument, \sin,
	\dur, 1 / 440,
	\freq, 440
).play;
)

// ex. 2: normal SinOsc with a frequency of 440 Hz.
{ SinOsc.ar(440) * 0.1 ! 2 }.play

// ex. 3: a clean pitch with the same frequency as ex. 2.
(
Pbind(
	\instrument, \sin_offset,
	\out, 1,
	\dur, 1 / 440,
	\freq, 440
).play
)

Well, it depends on the situation.

If you have an input to the synth, it will be coming in at its normal time, then mixed in your synth, and then delayed with the output. So you shouldn't use OffsetOut for effects or gating. (from the note in the OffsetOut help document)

You may only need OffsetOut if both of the following conditions are met at the same time:

  • you want to control a synth faster than the control rate, and
  • your application is not one of the cases mentioned in the notes of the OffsetOut documentation.

As mentioned above, OffsetOut may not send the signal at the correct time.

In general I think it's good practice to separate the instrument from the composition, whether the composition is made with patterns or with Synths and Routines. This is more flexible in terms of composition.

I think my point here is not so much efficiency; it's more about accuracy and sound quality.
With these microsound instruments you are often moving between different time scales.
I find using audio rate and Demand UGens the most reliable option for this.

In my opinion you have this conflict between accuracy and flexibility, and you have to choose one.

But I'm also happy that @rdd showed the A/B example with OffsetOut. I'm still wondering why that's not true for my former Pmono example.

When creating a "one-shot" SynthDef and using Pbind to play it, the evolution can only happen per step. That might be fine for certain types of static sounds, but if timbral transformations are the core of the composition, I think you should add some continuous modulation sources on a different time scale. This can be done, for example, by using Pmono instead, or by having a separate modulation synth and routing it to the Pbind via a Bus, as sketched below.
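
For example, a sketch reusing the \play SynthDef and ~window buffers from above (the bus and modulation synth are my own additions):

(
~rateBus = Bus.control(s, 1);
// a separate modulation synth writing a slow random curve to a control bus
~mod = { Out.kr(~rateBus, LFNoise2.kr(0.3).range(0.5, 2)) }.play;

Pbind(
	\instrument, \play,
	\dur, 0.1,
	\duree, 0.2,
	\buf, b.sample,
	\win, ~window.hanning,
	\rate, ~rateBus.asMap, // \rate now evolves continuously within each grain
	\outs, 0
).play;
)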

At what trigger rate (for instance, triggers per second) do you begin to feel the difference in accuracy/quality between the server-side and language-side approaches?