Why does EnvGen restart at every loop iteration?

I am trying to generate sound piecewise, play the pieces consecutively, and have the concatenation as a whole modulated by an envelope. I am running a loop inside a Task, in which I play one piece per iteration and then wait for the duration of the piece. I also create the EnvGen outside the loop and expect it to “play itself out” in its entirety as the loop spins, but instead the EnvGen seems to be restarting at each iteration of the loop. How can I avoid this?

This very simplified example may not seem to make much sense at first sight because I am just cutting 100ms pieces of signal from a single SinOsc, but in my original attempt each piece was different and involved crossfading between two Klangs over the 100ms, which is why it had to be piecewise.

//Exponential decay over 1 second
var envelope = {EnvGen.kr(Env.new([1,0.001],[1],curve: 'exp'), timeScale: 1, doneAction: 2)};

var myTask = Task({

    //A simple tone
    var oscillator = {SinOsc.ar(880,0,1);};

    var scissor;

    //Prepare a scissor that will cut 100ms of the oscillator signal
    scissor = {EnvGen.kr(Env.new([1,0],[1],'hold'),timeScale: 0.1)};

    10.do({

	    var scissored,modulated;

	    //Cut the signal with the scissor
	    
	    scissored = oscillator*scissor;

	    //Try modulating with the envelope. The goal is to get a single 1s exponentially decaying ping.
	    
	    modulated = {scissored*envelope};

	    //NASTY SURPRISE: envelope seems to restart here every iteration!!!
	    //How do I prevent this and allow the envelope to live its whole
	    //one-second life while the loop and the Task dance around it in 100ms steps?
	    
	    modulated.play;
	    
	    0.1.wait;
    });
});

myTask.play;

A side note: this is a weirdness I originally struggled with for a few MONTHS without making an inch of progress, and it eventually forced me to shelve my efforts at learning SuperCollider for two years. Now I am trying to pick up where I left off.

When you do {...}.play, you are creating a synth node. One synth node.

This synth node has a lifespan.

At the beginning of its lifespan, everything within the synth node starts from time 0.

At the end of its lifespan, everything within the synth node is deleted from memory – gone forever.

Within your loop, you’re doing modulated.play 10 times. That’s 10 synth nodes, and every one of those synth nodes starts from time 0. They have to. They are all independent synth nodes. There is really no other way to do it.

You’re building an EnvGen into every one of these 10 nodes. So you get 10 independent EnvGens, and by definition, they all have to start at the beginning at the moment of being created.
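To see this concretely, here is a minimal demonstration (my own illustration, not from the original post): each evaluation of {...}.play creates a brand-new synth node, and every node's EnvGen starts from time 0.

```supercollider
(
// each iteration creates an independent synth node;
// every node's EnvGen starts from time 0, so you hear
// three separate pings rather than one continuing envelope
fork {
    3.do {
        { SinOsc.ar(440, 0, EnvGen.kr(Env.perc(0.01, 0.3), doneAction: 2)) }.play;
        0.5.wait;
    };
};
)
```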

The solution is, if you want a single EnvGen to span multiple pieces, then the EnvGen has to be a separate synth node, with a longer lifespan. Probably the best way is to write the EnvGen signal onto a control bus, and then the “piece” synths read from that bus.

s.boot;

c = Bus.control(s, 1);

(
fork {
	SynthDef(\boop, { |out = 0, freq = 440, amp = 0.1, envbus|
		var sig = SinOsc.ar(freq),
		eg = EnvGen.kr(Env.perc(0.01, 0.1), doneAction: 2);
		amp = amp * In.kr(envbus, 1);  // HERE, get the envelope's value
		Out.ar(out, (sig * eg * amp).dup);
	}).add;
	s.sync;
	
	// global envelope
	{
		EnvGen.kr(Env.linen(1.5, 0, 1.5), doneAction: 2)
	}.play(outbus: c);
	
	Pfindur(3.0, Pbind(
		\instrument, \boop,
		\freq, Pexprand(200, 800, inf),
		\dur, 0.125,
		\envbus, c
	)).play;
}
)

hjh

I am sorry but I struggle to see any equivalence between what I was trying to do and your answer, and I am afraid I am months away from being able to understand it. It also doesn’t seem to be producing any sound, but maybe I am running it wrong. How exactly should I run it?

Edit: apologies, I failed to copy the first two lines that set up the control bus. Now I can hear the music, but this is not what I am trying to achieve. I am trying to create one sound that is a concatenation-in-time of several pieces. Something like each segment being a short interpolation step between two waveforms, first one interpolating between waveform 1 and waveform 2, second between waveform 2 and waveform 3 etc., so it is seamless. However I want the idea of “concatenating pieces in time” to be completely decoupled from what the “pieces” are, and ideally I want it to work in a way that doesn’t necessitate actually running the synthesis of all the pieces at the same time.

I also create the EnvGen outside the loop and expect it to “play itself out” in its entirety as the loop spins, but instead the EnvGen seems to be restarting at each iteration of the loop. How can I avoid this?

^^ This is the question in the title of the thread, and it’s the question that I answered. As it is the title question, I assumed that it was your main question. Now I find that you are somewhat upset because I didn’t also answer a different question.

In my answer, I assumed that the pieces would be up to you. You can take the global envelope mechanism from my answer and apply it to any piecewise audio sources.

Something like each segment being a short interpolation step between two waveforms, first one interpolating between waveform 1 and waveform 2, second between waveform 2 and waveform 3 etc., so it is seamless. However I want the idea of “concatenating pieces in time” to be completely decoupled from what the “pieces” are, and ideally I want it to work in a way that doesn’t necessitate actually running the synthesis of all the pieces at the same time.

I think you should re-factor the approach.

Original proposal:

  • Synth A xfades wave1 → wave2
  • Synth B xfades wave2 → wave3 such that wave2 sounds completely uninterrupted
  • etc.

It’s going to be incredibly difficult to coordinate that. If you insist on doing it ontologically such that the crossfade must be in one synth, and the next crossfade in the next synth, it is only going to make you angry.

But, if you produce a synth for wave1 with a falling envelope, and wave2 with a rising-and-falling envelope, and wave3 with a rising-and-falling envelope, you can arrange the synth onsets so that wave1 is fading out while wave2 is fading in, and so on.

  • Synth A fades out wave1
  • Synth B starts at the same time, fades up wave2 and then fades down
  • Synth C starts halfway through synth B, fades up wave3 and then fades down

This approach would be trivially easy in SC.
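A minimal sketch of that scheduling (my own example, not from the post; a single hypothetical \piece SynthDef stands in for the different waves):

```supercollider
(
SynthDef(\piece, { |out = 0, freq = 440, sustain = 0.2, fade = 0.05|
    // rise over fade, hold, then fall over fade; free the node when done
    var env = EnvGen.kr(Env.linen(fade, sustain - (2 * fade), fade), doneAction: 2);
    Out.ar(out, SinOsc.ar(freq) * env * 0.2 ! 2);
}).add;
)

(
fork {
    // each piece starts before the previous one has finished,
    // so its fade-in overlaps the previous piece's fade-out
    [440, 550, 660, 880].do { |freq|
        Synth(\piece, [\freq, freq]);
        0.15.wait;   // 0.2 s piece minus 0.05 s of overlap
    };
};
)
```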

Use the approach that works with, rather than against, the tool’s architecture.

hjh

Thanks! I’m all for refactoring. By “Synth C starts halfway through Synth B”, do you mean it actually starts using up cycles halfway through B, or is it running all the time and just producing silence before its “window” comes?

Also, in your example you are using incantations from the School of Sequencing - a school I haven’t even begun exploring yet, hoping that it is possible to separate these concerns: first learn how to produce individual notes and gradually gain as much strength in this area as possible, including the ability to build synths that are rather involved while reasonably optimal, and only then start learning how to sequence notes into music. I was hoping I wouldn’t have to actually employ sequencing techniques to play a single note. Nothing is trivially easy for me in SC.

I’m not entirely clear on your goal, but let me rephrase what I think you’re looking for, and then propose a relatively simple place to start. If I got it a little wrong, sorry - talking about this stuff is difficult!

  1. There are individual, separate sounds (Synths) played back-to-back
  2. There are short crossfades between these sounds
  3. There is a long envelope controlling the amplitude of the WHOLE THING.

Here’s a prototype:

  1. Here’s the back-to-back, which I think you’ve already figured out (this assumes you have some synth named \sound).
Task({
	10.do {
		~duration = rrand(1, 4);
		Synth(\sound, [\bus, 0, \duration, ~duration, \freq, rrand(100, 400)]);
		~duration.wait;
	};	
}).play
  2. Here’s a simple crossfading synth:
SynthDef(\sound, {
	|bus, duration, freq|
	var sig, fadeEnv;
	
	// A short 0.1 fade in, and then hold at 1 for the duration. Use doneAction to free automatically
	fadeEnv = Env([0, 1, 1], [0.1, duration]).kr(doneAction:Done.freeSelf);
	
	// The sound
	sig = SinOsc.ar(freq);
	
	// Now, we want to use fadeEnv to fade in our new sound,
	// replacing whatever was playing back before
	XOut.ar(bus, fadeEnv, sig);
}).add;

We need to modify our Task slightly: we want new synth nodes to come AFTER the previous ones, because they’ll be overwriting the sound the previous ones made (for the crossfade) - so we need to change to:

		~synth = Synth(\sound, 
			[\bus, 0, \duration, ~duration, \freq, rrand(100, 400)],
			target:~synth, addAction:\addAfter
		);
  3. Here’s our overall envelope synth - we want to read in the sound from our \sound synths, apply the envelope, and write it back out.
SynthDef(\envelope, {
	|bus|
	var sig, env;
	
	// Some random envelope...
	env = Env([0, 1, 0.4, 0.6, 1, 0], [4, 1, 3, 8, 4]).kr.poll;
	sig = In.ar(bus, 1);
	sig = env * sig;
	
	// We need to use ReplaceOut because we want to REPLACE the sound 
	// that was already with our version that is modulated by the envelope
	ReplaceOut.ar(bus, sig);
}).add;

Finally, we need the envelope to come AFTER all of our sounds in the signal chain, because it’s modifying them - the easiest way to do this is to put our \sounds in a Group, and then put the envelope synth after that group. After this, our task looks like:

Task({
	~bus = 0;
	~synthGroup = Group();
	~envelopeSynth = Synth(\envelope, 
		[\bus, ~bus],
		target: ~synthGroup, addAction: \addAfter  
	);
	10.do {
		~duration = rrand(1, 4);
		~synth = Synth(\sound, 
			[\bus, ~bus, \duration, ~duration, \freq, rrand(100, 400)],
			target:~synthGroup, addAction:\addToTail    // add new sounds to the end of our group instead....
		);
		~duration.wait;
	};
    ~envelopeSynth.free;
}).play

I think you can probably imagine now how you could do the above, but use a different synth for each \sound iteration - you’d still want to do the crossfade part for each of the Synths. It would also be straightforward to e.g. add some additional effects or modulation to your \envelope synth, which would be applied to all your synths. You’ll need to do a little more work to e.g. make sure your envelope matches the overall duration of your pieces, and of course if you want to do anything more complex there will be problems to solve :).

In the code you originally posted, you’ve got Synth code mixed into Task code etc. - this is a common misunderstanding… It’s best to start from a basis of thinking of SynthDefs as relatively fixed objects you define ahead of time, and then e.g. run via Synth(\foo, ...) inside your task. You CAN generate SynthDefs more dynamically, and be a little more fluid with these distinctions - but that is definitely advanced territory… You don’t need it (yet) for what you’re trying to do, and you’ll almost definitely get stuck if you try to push that edge too soon.

Making sure nodes are in the right order is critical - you can’t modify (e.g. crossfade) a sound before you’ve generated it! If you are careful about ordering, however, you can make the synth graph do a lot of work for you - the server’s node tree can help you visualize what’s happening if you’re not sure (check the Server menu).

Finally: if solving this problem is a deeper dive into SuperCollider than you are looking for, and you want to just be moving your piece forward, you might consider reading the help files for NodeProxy and Ndef - these objects wrap up some of the bookkeeping to make things a little easier, and they can do things like crossfading automatically.
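For example, crossfading on re-evaluation is built into Ndef (a minimal sketch; the two sound functions are just placeholders):

```supercollider
Ndef(\piece).fadeTime = 0.1;                    // 100 ms automatic crossfade
Ndef(\piece, { SinOsc.ar(440, 0, 0.2) }).play;

// later: swap in a new source; Ndef crossfades over fadeTime
Ndef(\piece, { Saw.ar(330, 0.2) });
```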


In fact I have the crossfade thing already solved. Please assume that the individual synths are already performing the crossfade - each between two different “input” synths. The real problem is only the concatenation in time. I am thinking of it in a very abstract way:

Given a sequence of “signals”, i.e. functions in time domain:

F₁(t), F₂(t), F₃(t), .... , Fn(t)

each defined for 0 <= t <= T, I need a general, universal method of concatenating them into signal:

        /       F₁(t)           for  0 <= t < T
       |        F₂(t-T)         for  T <= t < 2T
C(t)= <         F₃(t-2T)        for 2T <= t < 3T
       |        ...
        \       Fn(t-(n-1)T)    for (n-1)T <= t <= nT

defined for 0 <= t <= nT, with the additional implementation requirement that each “stage” signal is to be evaluated only at the time when it is needed.

If I could concatenate signals and obtain the result in the form of a new signal, then modulating that signal with an envelope would become trivial. The “EnvGen in a loop” problem only arose because I was trying to approximate the desired functionality imperatively.

A possible answer has already been given by @scztt: play to a dedicated bus and read with a synth that places an envelope on the incoming signal (like in his SynthDef(\envelope))

The example I posted is more or less the correct form to solve the concatenation problem. If each of your Fn’s is represented by a SynthDef (I assume that there are significant differences in processing between each F - if the F’s are just parameterizable variations on the same graph, then this is even easier), then the Task can schedule them by running with Synth(\f1, ...) and waiting with ~duration.wait. In my example, your C(t) in this case is simply the output bus (AFTER the point in the synth graph where the F’s have been executed). You can improve the start time accuracy of your Synths by using OffsetOut, if it is required for your use-case.

Also, I should note that the Pattern system may be a somewhat better abstraction for this as well. Describing your concatenation scenario with patterns might be as simple as:

Pbind(
   \instrument, Pseq([\f1, \f2, \f3, \f4, \f5, \f6], inf),
   \dur, 0.4
)

If you want a strict function C(t) encapsulating many F’s, this is simply not possible in SuperCollider, nor in most other audio frameworks, without calculating all samples for all F’s. For most audio dsp algorithms (anything using e.g. IIR filters), the t=0...9T-1 samples are required in order to calculate F(9T), so you are stuck running all F’s all the time and concatenating by simply crossfading. For practical purposes, you can get 90% of the way by using something like the technique I described, plus OffsetOut for time accuracy. You can get 98% of the way by doing this PLUS providing some pre-roll for each synth (e.g. running it for a few ms or s before it actually starts, in order to fill delay lines), and e.g. supplying time as a synth argument to calculate correct starting phase values for oscillators. These won’t be acoustically relevant in the vast majority of use cases, but I’m not sure exactly what kind of synthesis you’re wanting to do, so it’s hard to say.
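As a sketch of the “time as a synth argument” idea (my own illustration, not from the post; t0 is a hypothetical argument giving the piece’s start offset within the whole sound):

```supercollider
(
SynthDef(\phased, { |out = 0, freq = 440, t0 = 0, amp = 0.1|
    // start the oscillator at the phase it would have reached
    // after running continuously for t0 seconds
    var phase = (2pi * freq * t0).mod(2pi);
    Out.ar(out, SinOsc.ar(freq, phase, amp).dup);
}).add;
)
```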

If your goal is mainly to compose pure functions in order to generate audio sample-by-sample, SuperCollider is probably not the best tool? SC is optimized for producing hot-swappable (but generally stable) signal graphs that run in real-time. For very good reasons, SC does not optimize for cases where parts of the graph are conditionally used / turned on and off - the SC signal graph is comparable to shader computation on the GPU, where rather than branching both branches are calculated and the result is AND’d into place based on the branch value.


Thank you! This is exactly the kind of response I hoped for. The no-go statements are actually the most helpful. Knowing with 100% certainty that something is impossible is a huge time saver. Yes, what I hoped for was a combinator that takes a sequence of strict functions and provides a strict function.