Using a Signal.window as an envelope?

I am a bit confused with this part:

  1. Line.ar(0, BufFrames.kr(win), sustain, doneAction: Done.freeSelf);
  2. Line.ar(0, BufFrames.kr(win) - 1, sustain, doneAction: Done.freeSelf);

Which one is correct?

The first seems to be correct:

~triangle = Buffer.loadCollection(s, Signal.hammingWindow(8)) // (despite the name, a Hamming window)

// sweep the read phase from 0 to BufFrames:
{ var phase = Line.ar(0, BufFrames.kr(~triangle), 1); BufRd.ar(1, ~triangle, phase) }.plot(1)

// sweep the read phase from 0 to the last frame index, BufFrames - 1:
{ var phase = Line.ar(0, BufFrames.kr(~triangle) - 1, 1); BufRd.ar(1, ~triangle, phase) }.plot(1)

I remembered that the second one is used in some tutorials, so I was confused for a moment.

For the moment, I’m following the logic of separating the “instrument” part (SynthDef) from the “score” part (Patterns), whereas it would probably be more efficient not to make this separation and to use server-side modulators (like Impulse, EnvGen, etc.) instead of Patterns. Working at audio rate (.ar) is no doubt more efficient.
The question is: can we avoid working with patterns once “algorithmic composition” reaches a certain complexity?

For example, if we need to describe the evolution of various parameters (like, in my example, the fragments of a sound), which can be applied to different SynthDefs at different times in a piece, and then describe correlations between different evolutions, use a multiplicity of clocks, etc.

I’ll have to find a compromise between computational efficiency and the readability of complex processes.

rdd:

See below for putting a sine tone back together? You can swap Out & OffsetOut while the pattern is running to hear/see the difference, which will depend, of course, on the block size.

So what’s the advantage of using Out instead of OffsetOut…? Might as well use OffsetOut all the time, right?

Even though I am not the one being asked, I will try to explain it to test myself:

OffsetOut outputs signals with sample accuracy, even when events are scheduled faster than the control rate (by default, 1/64th of the audio rate), by buffering (delaying) them within the control block.
(<- Please correct me if any part of this explanation is wrong!)

The server prints messages like the following in the Post window when a timestamped bundle arrives too late to be performed at its scheduled time:

late 0.08973143
late 0.007235696
...

Here are some listening examples:

(
SynthDef(\sin, { |out=0, freq=440, sustain=1, amp=0.1|
	var osc = SinOsc.ar(freq);
	var env = EnvGen.ar(Env.sine(sustain, amp), doneAction: Done.freeSelf);
	Out.ar(out, osc * env);
}).add;

SynthDef(\sin_offset, { |out=0, freq=440, sustain=1, amp=0.1|
	var osc = SinOsc.ar(freq);
	var env = EnvGen.ar(Env.sine(sustain, amp), doneAction: Done.freeSelf);
	OffsetOut.ar(out, osc * env);
}).add;
)

// ex. 1: noisy (dirty) and the pitch does not correspond to the 440 Hz frequency.
(
Pbind(
	\instrument, \sin,
	\dur, 1 / 440,
	\freq, 440
).play;
)

 // ex. 2: normal SinOsc with a frequency of 440 Hz.
{ SinOsc.ar(440) * 0.1 ! 2 }.play

 // ex. 3: a clean pitch with the same frequency as ex. 2.
(
Pbind(
	\instrument, \sin_offset,
	\out, 1,
	\dur, 1 / 440,
	\freq, 440
).play
)

Well, it depends on the situation.

If you have an input to the synth, it will be coming in at its normal time, then mixed in your synth, and then delayed with the output. So you shouldn’t use OffsetOut for effects or gating. (from the note in the OffsetOut help document)

You may only need OffsetOut if both of the following conditions are met at the same time:

  • you want to control a synth faster than the control rate, and
  • your application is not one of the cases mentioned in the notes of the OffsetOut documentation.

As the late messages above show, even OffsetOut may not place the signal at the correct time if the message arrives late.

In general I think it’s good practice to separate the instrument from the composition, whether the latter is made with Patterns or with Synths and Routines. This is more flexible in terms of composition.
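
For example, a minimal sketch of the “Synths and Routines” route, reusing the \sin SynthDef from above:

(
// language-side sequencing without Patterns: a Routine spawning one-shot synths
Routine {
	20.do {
		Synth(\sin, [\freq, exprand(200, 800), \sustain, 0.3]);
		0.25.wait;
	};
}.play;
)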

I think my point here is not so much efficiency; it’s more about accuracy and sound quality.
With these microsound instruments you are often moving between different time scales.
I find using audio-rate triggers and Demand UGens the most reliable option for this.

In my opinion you have this conflict between accuracy and flexibility, and you have to choose one.

But I’m also happy that @rdd showed the A/B example with OffsetOut. I’m still wondering why that’s not true for my former Pmono example.

When creating a “one-shot” SynthDef and using Pbind to play it, parameters can only evolve per step. That might be fine for certain types of static sounds, but if the timbral transformations are the core of the composition, I think you should add some continuous modulation sources on a different time scale. This can be done, for example, by using Pmono instead, or by having a separate modulation synth and routing it to the Pbind via a Bus, as sketched below.
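
A minimal sketch of that bus idea; the names \lfo, \ping and ~modBus are made up for illustration:

(
~modBus = Bus.control(s, 1);

// continuous modulation source, written to a control bus
SynthDef(\lfo, { |out|
	Out.kr(out, SinOsc.kr(0.1).range(300, 900));
}).add;

// one-shot voice
SynthDef(\ping, { |out = 0, freq = 440, sustain = 0.2, amp = 0.1|
	var env = EnvGen.ar(Env.perc(0.01, sustain), doneAction: Done.freeSelf);
	Out.ar(out, SinOsc.ar(freq) * env * amp ! 2);
}).add;
)

~mod = Synth(\lfo, [\out, ~modBus]);

// mapping \freq to the bus makes every event follow the LFO continuously
p = Pbind(
	\instrument, \ping,
	\dur, 0.125,
	\freq, ~modBus.asMap
).play;

p.stop; ~mod.free; ~modBus.free;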

At what trigger rate (triggers per second, for instance) do you begin to feel the difference in accuracy/quality between the server-side and language-side approaches?

To clarify “faster than control rate”:

A synth can begin/end only on a control block boundary. This means, if a SynthDef uses Out.ar, its sound can begin only on a control block boundary, quantized to the block size / sample rate interval.

If you want the sound to begin mid-block, the only way is to start the synth on the preceding boundary, and delay the output to compensate. That’s what OffsetOut does.

“Faster,” then, means “higher timing resolution” than control rate.
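
For reference, the size of that timing quantum on a running server:

s.options.blockSize / s.sampleRate; // 64 / 44100 ≈ 0.00145 s with the defaults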

One possible issue with OffsetOut is, if the audio source is In.ar, the input will not be delayed but the output will (I think). Mixing them back together could produce phase cancellation.

OSC message scheduling isn’t sample-accurate in any case.

hjh


Confirmed:

(
SynthDef(\noise, { |out, amp = 0.1|
	Out.ar(out, PinkNoise.ar(amp).dup);
}).add;

SynthDef(\offsetout, { |out, gate = 1|
	var in = In.ar(out, 1);
	var eg = EnvGen.kr(Env.asr(0.01, 1, 0.01), gate, doneAction: 2);
	OffsetOut.ar(out, (in * eg).dup);
}).add;
)

n = Synth(\noise);

(
p = Pbind(
	\instrument, \offsetout,
	\dur, Pwhite(1, 4, inf) * 0.2,
	\addAction, \addAfter,
	\group, n
).play;
)

p.stop; n.free;

hjh


@jpburstrom
I noticed that there is no example using Line with BufFrames in the help. I think it would be nice if your example could be added to the help document. (In the Pbind part, \buf, b should be added.) Not sure where the best place would be.

@jamshark70
The last example of the OffsetOut help document does not work. I think it would be nice if your example could be added to the help document (maybe replacing the last one?).

Sorry for these comments, but I don’t think I can make these changes myself, because the code is not mine.

Many thanks to all of you for these very clear explanations. I had no idea of the advantages of using OffsetOut in certain cases, which is an important point.

If I understand correctly, when using a Pbind or a Pmono, the whole program is transmitted to the server and the data is calculated “internally”, on the server side, and not, as I thought (and as the diagram in Client vs Server | SuperCollider 3.12.2 Help suggests), transmitted by OSC successively from the client according to the times indicated by \dur? Using OffsetOut, my example works remarkably well at periods of around 5 ms, which seems very fast for OSC…

No, it isn’t like this. NRT (non-real-time synthesis) is more like that.

It is definitely transmitted successively, as you’d see by using the server dump OSC option (Server menu).
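
You can watch them go out, for example:

s.dumpOSC(1); // post each message as the server receives it
s.dumpOSC(0); // turn the dump off again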

hjh

Following your advice, I’m looking into Demand UGens, but I have to say that it’s not easy for me to understand how to replace patterns with them. For example, with patterns I can separate the duration of each event from the rate of the events, as in a polyphonic sampler. I’m trying to do something similar here, with a very simple example, but I can’t figure out how to do it…

Is there a tutorial on how to use Demand in depth?


(
SynthDef(\ptitsynt, {
	arg freq = 440;
	var env, sig, amp = 0.1;
	env = EnvGen.ar(
		Env.new(
			levels: [0, 1, 1, 0],
			times: [0.9, 0.2, 0.9],
			curve: \lin),
		doneAction: Done.freeSelf);
	sig = SinOsc.ar(freq);
	Out.ar(0, sig * env * amp);
}).add;
)

// my non-working attempt: Demand expects demand-rate UGens here,
// not a language-side Synth.new call
z = {
	var a = Impulse.kr(3);
	Demand.kr(a, 0, Synth.new(\ptitsynt, [\freq, rrand(100, 1000)]));
};

z.play;

Okay, here is a basic example using Impulse.ar to create a trigger at a rate specified by the named control \tFreq.kr, which is then used to trigger an envelope generator. I have normalized the durations of the envelope segments to sum to 1 (you can use .normalizeSum for this) and scaled the duration of the envelope by 1 / trigger frequency. If you specify a control parameter, here \overlap.kr, instead of using 1, you can change the legato between events. Try different values for tFreq and overlap with the example below and you will see what I mean:

(
SynthDef(\test, {

	var tFreq, trig, env, sig, duration;
	
	tFreq = \tFreq.kr(10);
	trig = Impulse.ar(tFreq);
	
	duration = \overlap.kr(1) / tFreq;
	
	env = EnvGen.ar(Env([0, 1, 0], [0.03, 0.97], \lin), trig, timeScale: duration);

	sig = SinOsc.ar(\freq.kr(440));
	
	sig = sig * env;
	
	sig = sig * \amp.kr(-10.dbamp);
	
	sig = Pan2.ar(sig, \pan.kr(0));
	
	Out.ar(\out.kr(0), sig);
}).add;
)

x = Synth(\test, [\tFreq, 4, \freq, 440, \overlap, 1]);

x.set(\tFreq, 2);
x.set(\overlap, 0.2);

x.free;

One simple example of using Demand is a sequence of ratios by which to multiply your frequency. For each trigger Demand receives, the next item in the Dseq sequence is played, over and over again.

(
SynthDef(\test, {

	var tFreq, trig, env, sig, duration, ratios, freq;
	
	tFreq = \tFreq.kr(4);
	trig = Impulse.ar(tFreq);
	
	duration = \overlap.kr(1) / tFreq;
	
	env = EnvGen.ar(Env([0, 1, 0], [0.03, 0.97], \lin), trig, timeScale: duration);
	
	ratios = [1.0, 1.5258371159539, 1.6603888559977, 1.8068056703405];
	freq = \freq.kr(440) * Demand.ar(trig, 0, Dseq(ratios, inf));

	sig = SinOsc.ar(freq);
	
	sig = sig * env;
	
	sig = sig * \amp.kr(-10.dbamp);
	
	sig = Pan2.ar(sig, \pan.kr(0));
	
	Out.ar(\out.kr(0), sig);
}).add;
)

x = Synth(\test, [\tFreq, 4, \freq, 440, \overlap, 0.5]);

x.free;

You could additionally use a binary sequence of 0 and 1 to gate the triggers.

(
SynthDef(\test, {

	var tFreq, trig, env, sig, duration, ratios, freq;
	
	tFreq = \tFreq.kr(4);
	trig = Impulse.ar(tFreq);
	
	trig = trig * Demand.ar(trig, 0, Dseq([1, 0, 0, 1, 0, 0, 1, 0], inf));
	
	duration = \overlap.kr(1) / tFreq;
	
	env = EnvGen.ar(Env([0, 1, 0], [0.03, 0.97], \lin), trig, timeScale: duration);
	
	ratios = [1.0, 1.5258371159539, 1.6603888559977, 1.8068056703405];
	freq = \freq.kr(440) * Demand.ar(trig, 0, Dseq(ratios, inf));

	sig = SinOsc.ar(freq);
	
	sig = sig * env;
	
	sig = sig * \amp.kr(-10.dbamp);
	
	sig = Pan2.ar(sig, \pan.kr(0));
	
	Out.ar(\out.kr(0), sig);
}).add;
)

x = Synth(\test, [\tFreq, 8, \freq, 440, \overlap, 1]);

x.free;

Thanks again for your quick reply, dietcv. The thing is that in your examples I can’t have an event duration greater than the Impulse period either, so I can’t get the equivalent of a polyphonic instrument. I could do this using a Routine, but is there a simple solution using Demand UGens?

The name overlap is a bit misleading in my example. You will find out that you can’t specify an overlap bigger than 1 with this approach. This is because the envelope is created for each trigger, and the previous envelope would need to exist in parallel with the new one.

This is the same as playing notes on a piano: if you want to play three notes simultaneously, you need at least three fingers. The same principle applies here. For three simultaneously existing EnvGens, you would need at least three voices.

If you instead use a “one-shot” SynthDef, where you have specified doneAction: 2 for your envelope, and use a Pbind as a recipe for an EventStreamPlayer to create events on the server, the polyphony is taken care of automatically.

If you, for example, take the example by @rdd below, where legato > 1 is specified, and open the Node Tree, you can see that two Synth nodes are created on the server. If you adjust legato to 3, you will create three Synth nodes on the server.

(
SynthDef('sin') { |out=0 freq=440 sustain=1 amp=0.1|
	var osc = SinOsc.ar(freq, 0);
	var env = EnvGen.ar(Env.sine(sustain, amp), 1, 1, 0, 1, 2);
	OffsetOut.ar(out, osc * env);
}.add
)

(
Pbind(
	'instrument', 'sin',
	'dur', 0.5,
	'legato', 2,
	'freq', 800
).play;
)

If you would like to have polyphonic behaviour inside your SynthDef, you have to use multichannel expansion. The easiest way to be able to overlap the EnvGens inside your SynthDef is to use something called “the round-robin method”. For this you have to create multiple channels and distribute your triggers round robin across these channels, here done with PulseDivider.

(
var multiChannelTrigger = { |numChannels, trig|
	numChannels.collect{ |chan|
		PulseDivider.ar(trig, numChannels, numChannels - 1 - chan);
	};
};

{
	var numChannels = 4;
	var tFreq = \tFreq.kr(400);
	var trig = Impulse.ar(tFreq);
	var triggers = multiChannelTrigger.(numChannels, trig);
	var durations = min(\overlap.kr(1), numChannels) / tFreq;
	EnvGen.ar(Env([0, 1, 0], [0.5, 0.5], \lin), triggers, timeScale: durations); 
}.plot(0.02);
)

(plot: the four envelope channels, triggered round robin)

If you now increase overlap, you can see that the channels overlap:

overlap = 2; (plot omitted)

overlap = 3; (plot omitted)

etc.

The maximum overlap possible with this approach is the number of channels, which has to be defined when the SynthDef is evaluated and can’t be changed afterwards. This is because the SynthDef graph is fixed once it is built.
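
One workaround, sketched here with made-up def names, is to build several defs with different voice counts from the same function:

(
// bake the voice count in at build time: one def per channel count
[4, 8].do { |numChannels|
	SynthDef(("roundRobin" ++ numChannels).asSymbol, {
		var tFreq = \tFreq.kr(4);
		var trig = Impulse.ar(tFreq);
		var triggers = numChannels.collect { |chan|
			PulseDivider.ar(trig, numChannels, numChannels - 1 - chan);
		};
		var envs = EnvGen.ar(Env.perc(0.001, \overlap.kr(1) / tFreq), triggers);
		Out.ar(\out.kr(0), Splay.ar(SinOsc.ar(\freq.kr(440)) * envs) * 0.1);
	}).add;
};
)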

If you instead run the full example below with overlap > 1 and open the Node Tree, you can see that there is just one Synth node on the server. The polyphony happens inside the SynthDef:

(
var multiChannelTrigger = { |numChannels, trig|
	numChannels.collect{ |chan|
		PulseDivider.ar(trig, numChannels, numChannels - 1 - chan);
	};
};

SynthDef(\multiChannelTest, {
	
	var numChannels = 5;

	var tFreq, trig, triggers, envs, sigs, sig, durations, ratios, freqs;
	
	tFreq = \tFreq.kr(4);
	trig = Impulse.ar(tFreq);
	
	triggers = multiChannelTrigger.(numChannels, trig);
	
	durations = min(\overlap.kr(3), numChannels) / tFreq;
	
	envs = EnvGen.ar(Env([0, 1, 0], [0.03, 0.97], \lin), triggers, timeScale: durations); 
	
	ratios = Dseq([1.0, 1.5258371159539, 1.6603888559977, 1.8068056703405], inf);
	freqs = \freq.kr(440) * Demand.ar(triggers, 0, ratios);

	sigs = SinOsc.ar(freqs);
	
	sigs = sigs * envs;
	sig = sigs.sum;
	
	sig = sig * \amp.kr(-10.dbamp);
	
	sig = Pan2.ar(sig, \pan.kr(0));
	
	Out.ar(\out.kr(0), sig);
}).add;
)

x = Synth(\multiChannelTest, [\tFreq, 4, \freq, 440, \overlap, 1]);

x.free;

I have renamed all the variables so it is clearer that you have multiple instances of envs, freqs and SinOscs in your SynthDef; you can use .debug to see that in the post window. If you want to play these five channels of audio on a single pair of stereo speakers, you have to sum the signals to a single channel and use Pan2.ar afterwards to create a stereo output.
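
For instance, a quick check of multichannel expansion at build time (the def name \mcDebug is made up):

(
SynthDef(\mcDebug, {
	var sigs = SinOsc.ar([440, 550, 660]);
	// .debug posts its receiver with a label when the def is built;
	// here it shows an array of three OutputProxies
	sigs.debug("sigs");
	Out.ar(0, Pan2.ar(sigs.sum * 0.05, 0));
}).add;
)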

There is a more sophisticated method for polyphony described in the gen~ book, called “overdubbing the future”, where you have one single-sample buffer reader and several single-sample buffer writers to throw events into the future and specify when to trigger them, without having to specify the maximum polyphony. But that’s not possible in SC.

Okay, that’s very clear. It’s less intuitive than using Pbind in this case, but it’s a programming technique worth exploring further!

I doubt that. RecordBuf, for instance, can overdub. BufWr can’t do it directly, but there’s nothing to stop you from reading the buffer first, mixing in the new signal, and writing it back.
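
A minimal sketch of that read-mix-write idea, assuming a one-second mono buffer ~buf (the names are made up):

~buf = Buffer.alloc(s, s.sampleRate, 1);

(
{
	var phase = Phasor.ar(0, 1, 0, BufFrames.kr(~buf));
	var oldSig = BufRd.ar(1, ~buf, phase, interpolation: 1);
	var newSig = Dust.ar(2) * 0.2;
	// feed-forward within one synth: read, mix, write back at the same phase;
	// the 0.9 keeps the accumulated layer from growing without bound
	BufWr.ar((oldSig * 0.9) + newSig, ~buf, phase);
	oldSig ! 2; // listen to the accumulating loop
}.play;
)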

hjh

Okay, but with single-sample precision? I guess only with a minimum delay of one block between reading and writing.

I’m perhaps misunderstanding your intent, but BufRd → + (mixing old and new data) → BufWr is a 100% feed-forward graph, so there is no need for a block delay. (But those “future events” would need to be at least a block into the future.)

hjh

Cool! In the context of granulation, one block into the future might be too long for the future events.