How do you make a song?

No really…

I’ve only been working with SC for a few months now, but I’ve noticed something. It seems like the most common way to write actual songs is to make SynthDefs, then Patterns, then either use Tasks or Spawners to put it all together. Sometimes people use homebrew functions, Proxies, Ndefs, etc… (it usually looks like advanced programmers use these)

How do you compose? What do you think is the most flexible and dynamic method? I haven’t seen any tutorials or help files that help put it all together. Like making a full stereotypical song where you have fades, chord changes, effects changes… high-level abstraction. I’m just not getting it. I can make Patterns until I’m blue in the face, but after that I don’t really see how to control the macro of the micro. I can make a Tdef that changes aspects of patterns over time, for example… is that the best / most preferred way? I hope I make sense, and I’m not asking for hand-holding or an essay. Just any insights or suggested code to look at. (PS. I’m talking about just SC + sc3-plugins, no extra quarks… I’m trying to stay away from quarks until I feel fluent in SC.)

SO much appreciated


I would never make a full stereotypical song.
I honestly don’t think I could, even if I wanted to.
Strictly weird music for me.
If the survival of every living thing depended on it, I would use a DAW for my undoubtedly failed attempt.

Drawing from the things I am aware of, I think that you may enjoy Ptpar.


I don’t think there’s a predefined recipe to make a “song”. But e.g. you could think a bit about form/structure of your composition. You can model small building blocks of your piece in patterns, and then sequence different patterns one after another with Pseq (or simultaneously using Ppar). Earlier patterns could e.g. be less “intensive” and subsequent patterns could be more “intensive”, making sure that each new pattern adds or varies something substantial compared to the previously scheduled patterns, and building up to some climax.
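As a minimal sketch of that idea (using only the built-in \default instrument, with made-up material):

```supercollider
(
// two small building blocks
var quiet = Pbind(\degree, Pseq([0, 2, 4], 4), \dur, 0.5, \amp, 0.05);
var loud = Pbind(\degree, Pseq([0, 2, 4, 7], 4), \dur, 0.25, \amp, 0.2);

// sequence them: the calmer block first, then the more "intensive" one,
// then both together (Ppar) as a small climax
Pseq([
    quiet,
    loud,
    Ppar([quiet, loud])
], 1).play;
)
```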

Obviously these are very generic/abstract instructions, and you may need to interpret them a bit in the context of the style of music you want to make. SuperCollider is often used in the context of more “experimental” stuff, but it’s certainly possible to make more conventional things as well. For making more conventional music, it usually pays off to create your own abstractions (e.g. a simple language for drum patterns, or for interpreting note names or chord symbols).
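A toy version of such an abstraction might look like this (the chord dictionary and progression are invented for illustration):

```supercollider
(
// a tiny "language": chord symbols mapped to semitone sets
var chords = (c: [0, 4, 7], am: [9, 12, 16], f: [5, 9, 12], g: [7, 11, 14]);

// a progression written as symbols, interpreted into note arrays
var progression = [\c, \am, \f, \g];

Pbind(
    \note, Pseq(progression.collect { |sym| chords[sym] }, 2),
    \dur, 1
).play;
)
```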


Eli Fieldsteel has these 3 videos on composing a piece:

I think those videos use the Pseq([Ppar([Pbinds]), ...]) approach that others have mentioned in this thread, and that is generally what I do as well, but you may find some other useful tips in there.

Hey, here you have a system I’ve been using over the last year for a composition.
The example is not really meaningful from a musical perspective, but it gets the point across, I guess.

1.) First you define the functions.
2.) You define your SynthDefs.
3.) You define your sound and FX patterns with Pdef + Pbind and combine them via
~pbindFx, like Pdef(\sinOsc_fx, ~pbindFx.(\sinOsc2,\comb_fx)); you can also combine different Pdefs inside a Pspawner.

(
Pdef(\sinOsc_par, Pspawner {|sp|
    sp.par(Pdef(\sinOsc1));
    sp.par(Pdef(\sinOsc2), 2);
});
)

4.) For the composition you put all your Pdefs inside a Pspawner and use sp.par or sp.seq and some wait times with the corresponding beats (durSeconds mapped to \dur * \legato with ~utils).
You can also use ~transC to make a transition over time from one Pdef to another. Et voilà :slight_smile:

(
Pspawner({|sp|

	sp.par( Pfindur(28, Pdef(\sinOsc1)));

	sp.wait(8);

	sp.par( Pfindur(4, Pdef(\sinOsc2)));

	sp.par( ~transC.(
		\sinOsc2, 4, (
			overlap: Env([1,3],1,\exp),
			trigRate: Env([5,15],1,\exp),
			panMax: Env([0.90,0.10],1,\lin),
	), 8));

	sp.wait(16);

	sp.seq( Pfindur(16, Pdef(\sinOsc_fx)));

}).play(t, quant:1);
)

You can also multichannel-record the piece with ~rec (see the example at the bottom).

hope that helps :slight_smile:

/////////////////////////////////////////////////
///////////////////////functions/////////////////
/////////////////////////////////////////////////
(
// create a new event type called hasEnv
// which checks every parameter whose key ends in Env or env:
// - convert non-env values to envs (e.g. 0 becomes Env([0,0],[dur]))
// - stretch envelope to last for the event's sustain (converted from beats to seconds)
~utils = ();
~utils.hasEnv = {
    // calc this event's duration in seconds
    var durSeconds = ~dur * ~legato / thisThread.clock.tempo;
	//var durSeconds = ~sustain.value / thisThread.clock.tempo;
    // find all parameters ending in env or Env
    var envKeys = currentEnvironment.keys.select{|k|"[eE]nv$".matchRegexp(k.asString)};
    envKeys.do{|param|
        var value = currentEnvironment[param];
        if (value.isArray.not) { value = [value] };
        value = value.collect {|v|
            // pass rests along...
            if (v.isRest) { v } {
                // convert non-env values to a continuous, fixed value env
                if (v.isKindOf(Env).not) { v = Env([v, v], [1]) }
            };
            // stretch env's duration
            v.duration = durSeconds;
        };
        currentEnvironment[param] = value;
    };
};

Event.addParentType(\hasEnv,(
    finish: ~utils[\hasEnv]
));

// assist data-sharing in pbindfx creation
// pbindfx is wrapped in a private environment (Penvir)
// arguments are just pdef names
~pbindFx = {|srcName ... fxNames|
    // add private environment, shared between source and fxs
    Penvir(Event.new(parent:currentEnvironment),
        PbindFx(
            // source: record latest event in ~src
            *[Pdef(srcName).collect(~src=_)]
            // add all fx: they can access source event saved in ~src
            ++ fxNames.collect(Pdef(_))
        )
    )
};

// recorder
~rec = (
    new: {|self, maxChannels=16, path|
        self.stopAll;
        self.maxChannels = maxChannels;
        self.bus = Bus.audio(s, maxChannels);
        self.monitor = Monitor().play(self.bus.index, maxChannels, 0, 2 );
        self.recorder = Recorder(s);
        self.recorder.record(path, self.bus, maxChannels);

    },

    pdef: {|self,pdefName,dur|
        self.pat(Pdef(pdefName), dur);
    },
    pat: {|self,pat,dur|
        if(dur.notNil){ pat = Pfindur(dur, pat) };
        pat <> (out: self.getChannel)
    },
    getChannel: {|self, numChannels=2|
      if((self.nextFreeChannel + numChannels) > self.maxChannels){
            Error("[Rec] no more free channels! Increase maxChannels when calling ~rec.new").throw;
        }{
            var ch = self.bus.subBus(self.nextFreeChannel).index;
            self.nextFreeChannel = self.nextFreeChannel + numChannels;
            ch;
        }
    },
    stopAll: {|self|
        self.recorder !? {
            self.recorder.stopRecording;
        };
        [self.bus, self.monitor].do{|it|
            it !? {it.free};
        };
        self.bus = nil;
        self.monitor = nil;
        self.nextFreeChannel = 0;
    }

);

// pattern transitions:

/* transD: Penv based, parameters are updated discretely for every successive event
- define transitions as Pdefs
e.g. Pdef(\a2b, Pbind(\amp, Penv([0.1,1],1,'exp'), ...))
- trans duration is defined in each Penv, and total trans duration is provided as arg
- durTrans doesn't affect individual Penv's dur

~transD.(\a,3,\a2b, 1) // \a for 3 seconds, then \a2b for 1 second
*/
~transD = {|patA, durA, patTrans, durTrans|
    Pspawner{|sp|
        var trans = PatternProxy();
        trans.source = Pbind();
        sp.par(trans<>Pdef(patA));
        durA.wait;
        trans.source = Pdef(patTrans);
        durTrans.wait;
        sp.suspendAll();
    }
};

/* transD example
Pdef(\a, Pbind(\note,Pseq((0..10)), \amp, 0.1));
Pdef(\b, Pbind(\note,Pseq((0..10).reverse), \amp, 0.5));
Pdef(\trans_a2b, Pbind(\amp, Pn(Penv([0.1,0.5],5))));

~transD.(\a, 3, \trans_a2b, 5).play
*/

/* transC:
continuous transition, pars are set using a custom synth that writes to busses
- trans is defined as a dictionary of envelopes
e.g (amp: Env([0.1,0.5],1), ...)
- all envelopes are stretched to last transDur
- event-specific parameters like \legato are converted to Penvs and not written to busses

// \a for 3 seconds, then trans for 5 second
~transC.(\a,3,(
amp: Env([0.1,0.5],1)
),5)
*/

~transC = {|patA, durA, transDef, transDur|
    Pspawner{|sp|
        var trans = PatternProxy();
        trans.source = Pbind();
        sp.par(trans<>Pdef(patA));
        durA.wait;
        trans.source = Pbind(*~mapTrans.(transDef,transDur).asKeyValuePairs);
        transDur.wait;
        sp.suspendAll();
    }
};

// used by transC
~mapTrans = {|parEnvs, transDur= 1|
    var penvs = parEnvs.select{|v|v.class===Penv}.collect{|penv|
        penv.times = penv.times*transDur
    };
    var busses = parEnvs
    .select{|v,k| penvs.keys.includes(k).not}.collect{Bus.control(s,1)};

    {
        busses.collect{|bus, parName|
            Out.kr(bus, EnvGen.kr(parEnvs[parName],timeScale:transDur));
        };
        Line.kr(0,1,transDur,doneAction:2);
        Silent.ar;
    }.play.onFree{
        busses do: _.free
    };

    busses.collect(_.asMap) ++ penvs
};

/* transC example
Pdef(\a, Pbind(\note,Pseq((0..10)), \amp, 0.1, \pan, -1));
Pdef(\b, Pbind(\note,Pseq((0..10)), \amp, 0.5, \pan, 1));

// \a for 3 seconds, then trans for 5 second, then \b for 3 seconds
Pspawner{|sp|
sp.seq(~transC.(\a,3,(
amp: Env([0.1,0.5],1),
pan: Env([-1,1])
),5));
sp.seq(Pfindur(3,Pdef(\b)))
}.play
*/
)


/////////////////////////////////////////////////
///////////////////////SynthDefs/////////////////
/////////////////////////////////////////////////

(
u = Signal.sineFill(512, [1]);
b = Buffer.loadCollection(s, u, 1);

t = TempoClock.new(60/60).permanent_(true);
)

(
SynthDef(\sinOsc, {
	arg out=0, amp=0.1, sndBuf=0, trigRate=1, shapeAmount=0.2,
	rate=1, freq=20, overlap=2, panMax=0.5, minGrainDur=0.001, syncRatio=2, time=1;

	var gainEnv = \gainEnv.kr(Env.newClear(8).asArray);

	var sig, pos, trig, pan, grainDur;
	var k = 2 * shapeAmount / (1 - shapeAmount);

	// amp envelope
	gainEnv = EnvGen.kr(gainEnv, doneAction:2);

	// Granulation
	trig = Impulse.ar(trigRate);
	grainDur = max(trigRate.reciprocal * overlap, minGrainDur);
	pan = Demand.ar(trig, 0, Dseq([-1, 1], inf) * panMax);

	pos = Phasor.ar(trig, freq * BufFrames.ir(sndBuf) * SampleRate.ir.reciprocal, 0, BufFrames.ir(sndBuf));

    sig = GrainBuf.ar(
			numChannels: 2,
			trigger: trig,
			dur: grainDur,
			sndbuf: sndBuf,
			rate: pos,
			pos: 0,
			interp: 4,
			pan: pan
	);

	// waveshaper
	sig = ((1 + k) * sig / (1 + (k * sig.abs)));

	sig = sig * amp * gainEnv;
	sig = Limiter.ar(sig, 0.95);
	OffsetOut.ar(out, sig);
}).add;
)

(
SynthDef.new(\combL, {
  arg in=0, out=0, mix=(-0.5), decay=1, amp=1, delHz=0.55, delStereoRatio=0.9, delMin=0.001, delMax=0.4;
  var sig, comb;
  sig = In.ar(in, 2);
    delHz = delHz * [1,delStereoRatio];
  comb = CombL.ar(
    sig,
    delMax,
    LFPar.kr(delHz,[0,pi/2]).exprange(delMin,delMax),
    decay
  );
  sig = XFade2.ar(sig, comb, mix) * amp;
  Out.ar(out, sig);
}).add;
)

/////////////////////////////////////////////////
///////////////////////Patterns/////////////////
/////////////////////////////////////////////////


(
Pdef(\sinOsc1,
	Pbind(
		\type, \hasEnv,
		\instrument, \sinOsc,

		\sndBuf, b,

		//waveshaper
		\shapeAmount, 0.3,

		\overlap, 15,
		\trigRate, 5,
		\panMax, 0.80,

		\midinote, Pseq([
			[31,43],
		],inf),

		\dur, 28,

		\legato, 0.80,
		
		\atk, 0.01,
		\sus, (1 - Pkey(\atk)) * Pexprand(0.55,0.85,inf),

		\gainEnv, Pfunc{|e|
			var rel = (1 - e.atk - e.sus);
			var c1 = exprand(2,6);
			var c2 = exprand(-2,-6);
			Env([0,1,1,0],[e.atk, e.sus, rel],[c1,0,c2])
		},

		\amp, 0.10,

		\out, 0,
		\finish, ~utils[\hasEnv],
		\cleanupDelay, Pkey(\dur) * Pkey(\legato),
		\fxOrder, [1]
	)
);
)

(
Pdef(\sinOsc2,
	Pbind(
		\type, \hasEnv,
		\instrument, \sinOsc,

		\sndBuf, b,

		//waveshaper
		\shapeAmount, 0.3,

		\overlap, 1,
		\trigRate, 5,
		\panMax, 0.90,

		\midinote, Pseq([
			[57,64,70,76,77],
			[53,58,64,72,76,81],
			[55,62,64,69,74],
			[57,60,64,65,70],
		],inf),

		\dur, 8,

		\atk, 0.01,
		\sus, (1 - Pkey(\atk)) * Pexprand(0.55,0.85,inf),

		\gainEnv, Pfunc{|e|
			var rel = (1 - e.atk - e.sus);
			var c1 = exprand(2,6);
			var c2 = exprand(-2,-6);
			Env([0,1,1,0],[e.atk, e.sus, rel],[c1,0,c2])
		},

		\amp, 0.03,

		\out, 0,
		\finish, ~utils[\hasEnv],
		\cleanupDelay, Pkey(\dur) * Pkey(\legato),
		\fxOrder, [1]
	)
);
)

(
Pdef(\comb_fx,
	Pbind(
		\fx, \combL,
		\mix, 1,
		\amp, 1,
		\delStereoRatio, 0.9,
		\delHz, Pfunc{ thisThread.clock.tempo.reciprocal/2 },
		\delMin, Pwhite(0.25,0.50,inf),
		\delMax, Pwhite(0.25,1,inf),
		\decay, 2,
		\cleanupDelay, Pkey(\decay)
	),
);
)

Pdef(\sinOsc_fx, ~pbindFx.(\sinOsc2,\comb_fx));

Pdef(\sinOsc1).play;
Pdef(\sinOsc2).play;
Pdef(\sinOsc_fx).play;

(
Pdef(\sinOsc_par, Pspawner {|sp|
    sp.par(Pdef(\sinOsc1));
    sp.par(Pdef(\sinOsc2), 2);
});
)

Pdef(\sinOsc_par).play;

////////////////////////////////////////////////////
///////////////////Make a Piece/////////////////////     
///////////////////////////////////////////////////

(
Pspawner({|sp|

	sp.par( Pfindur(28, Pdef(\sinOsc1)));

	sp.wait(8);

	sp.par( Pfindur(4, Pdef(\sinOsc2)));

	sp.par( ~transC.(
		\sinOsc2, 4, (
			overlap: Env([1,3],1,\exp),
			trigRate: Env([5,15],1,\exp),
			panMax: Env([0.90,0.10],1,\lin),
	), 8));

	sp.wait(16);

	sp.seq( Pfindur(16, Pdef(\sinOsc_fx)));

}).play(t, quant:1);
)


////////////////////////////////////////////////////
///////////////////Multi Channel Recording/////////   
///////////////////////////////////////////////////


(
Pspawner({|sp|
	~rec.new(14, Platform.recordingsDir+/+"piece1_4-%.aiff".format(Date.localtime.stamp));
	
	sp.par(~rec.pdef(\sinOsc1,28));

	sp.wait(8);
	
	sp.par(~rec.pdef(\sinOsc2,4));
	
	sp.par(~rec.pat(~transC.(
		\sinOsc2, 4, (
			overlap: Env([1,3],1,\exp),
			trigRate: Env([5,15],1,\exp),
			panMax: Env([0.90,0.10],1,\lin),
	), 8)));
	
	sp.wait(16);
	
	sp.seq(~rec.pdef(\sinOsc_fx,16));
	
	3.wait;
    sp.suspendAll();
    10.wait; // wait a bit for eventual tails
    ~rec.stopAll;

}).play(t, quant:1);
)

There’s a higher-level way to consider this problem as well, and even though I personally only use sc for more “abstract” (i.e. electroacoustic) music, I think this is good to keep in mind even if you’re trying to make some sort of techno or something else on the more “idiomatic” side of things.

Unless you’re a live coding type (I’m not), ultimately the top-level sclang construct that realizes your composition can always be expressed as a Function evaluated in some Environment, which maps its inputs and the contents of that Environment to some set of outputs and/or side effects. If we limit our idea of a “composition” to some series of things happening over some period of time, our top-level Function needs to at least generate a Routine and play it on a Clock. So the basic form of a “composition” is:

(
    {
        // things happen here
    }.fork
)

That might seem reductive and unhelpful when you’re deep into problems like “how do I get all these chord changes to hit at the same time” or “how do I get the shapes of all these crossfades right”, but I’ve found it extremely helpful and clarifying to think of things in these terms. Usually once I’m looking at that outline, I can start filling it in by asking myself “what resources do I need and how do I want to initialize them?” and go from there.
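To make that concrete, a hypothetical filled-in version of that outline (resource names and timings are made up, and it assumes a booted server) might look like:

```supercollider
(
{
    // 1. resources: a group to hold the piece's synths, and a bus
    //    reserved for (hypothetical) effects routing
    var group = Group.new;
    var bus = Bus.audio(s, 2);

    // 2. things happening in time
    Synth(\default, [\freq, 220], group);
    4.wait;
    Synth(\default, [\freq, 330], group);
    4.wait;

    // 3. cleanup
    group.free;
    bus.free;
}.fork;
)
```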

To me the great strength of sc is that unlike Max it actually has a usable conceptual model of events taking place in time, and unlike a daw it doesn’t force you to constantly hack around someone else’s idea of what constitutes musical form. sc enabled me to stop composing in daws and use them for what they’re good at, which imo is postproduction.

I make a lot of music that proceeds from rhythms of speech rather than more usual “beats”.

I made a custom class called Song which has an array of lyrics and associated pitches. I call a method to record rhythms for the tunes by tapping on the ‘j’ key. Those are associated with the lyric strings in an object called Dur. I then make instances of another class called Part that starts on a given syllable and Song passes the associated rhythm into that Part’s “music” function.

This lets me reflow the tempo of a whole composition after the fact, and refer to sections and “beats” using words instead of numbers.

@semiquaver that sounds pretty handy. I personally need to learn to make something like a virtual piano that I can mess with, and i can record an array of midi notes…

@dietcv I wish I understood that more. You are using that example and implying that you declared those global variables beforehand, right? Like ~dur and ~legato? Or are those symbols, \dur and \legato, able to be expressed as global variables like that already?

@shiihs One of my favorite things about SC is the fact that it’s more for abstract electronica and sound design. That’s my favorite kind of music actually: avant-garde / IDM, ambient, etc… I’m mostly using the stereotypical song idea as a way to relate the processes, like changing chords at the same time, fading… stuff like that.

@PitchTrebler I will go back over those. I think I remember them not explaining the exact things I am looking for though

@t36s Nice, that actually is going to help me… Honestly I have ADD, so it’s hard for me to remember all these classes. Or rather I just get focused on one, then another, etc… can’t stay still for any length of time… haha. Give it a year though… I’ll get it :slight_smile:

Thank you all!!!

The example is using passed envelopes in the Pbinds, which are scaled to \dur * \legato by ~utils with \type, \hasEnv.
Recently I switched this to \sustain because of advice by @jamshark70; maybe the whole pattern infrastructure has to be adjusted for this case, I’m not sure.

OH, heh, I was only looking at about 1/10th of your code because it didn’t all copy over. So nothing made sense at all. Thanks! I can chew on this for some time. I do like this method as it seems really flexible.

Maybe you will find our CuePlayer Quark useful. It is meant for live performance using cues but each cue can also be a separate timeline. Not a conventional song structure for sure but I thought I would share :slightly_smiling_face:


I had the same question about how to compose a song when I started using SC. For me, TempoClocks turned out to be the key. Here are some code examples of songs that I’ve written in SC…

and some audio so you can hear how the songs sound…
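As a rough, hypothetical illustration of the TempoClock idea (not taken from the songs above), sections can be scheduled at absolute beats:

```supercollider
(
var clock = TempoClock(120/60); // 120 BPM

// "intro" at beat 0, "verse" at beat 16, "outro" at beat 32
// (each function returns nil so the clock doesn't reschedule it)
clock.schedAbs(0,  { Pbind(\degree, Pseq([0, 2, 4, 7], 4), \dur, 0.5).play(clock); nil });
clock.schedAbs(16, { Pbind(\degree, Pseq([7, 5, 4, 2], 8), \dur, 0.25).play(clock); nil });
clock.schedAbs(32, { Pbind(\degree, Pseq([0, -3], 2), \dur, 1).play(clock); nil });
)
```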


Great stuff @wigglytendrils, really good example. Listening to your tracks now :slight_smile: I like Space Bike! Thanks. I see you don’t really use any Busses. Do you do all processing beforehand? This is one of the main problems I’m having when scheduling: getting the order of execution right for a larger piece.

@Dionysis This is probably dumb, but I’m trying to stick to vanilla SC right now since I’m so new. I will definitely check that out sometime soon-ish though thank you


The SynthDef / Patterns / Pspawner [or fork] workflow, which is what you described in the original post, is how I teach my students to create their first SC projects. In the class we unfortunately don’t have time to go into additional structures like effects, crossfades, mixing controls, fancier pattern composition etc. That said, this basic template has proven to be solid ground for most of my students to get started. Here’s one very good example: Judges by Michael Noonan


@poison0ak, how would you ‘normally’ go about making a track? I’m also thinking about how to start putting stuff together and go from little sketches to something more thought out, or thorough, or, I dunno, substantial. The hardest part for me, as a long-time DAW user, is that there isn’t an inherent workflow that I’ve found SuperCollider push me toward, and there aren’t really preset modes of the software in which to ‘noodle’ without first building up all that noodle-able stuff as well.

But, that being said, I think one of the great things about the fact that there is no inherent built-in structure to do this is that you can also mimic other structures you like.

For instance it’s possible to create several independent patterns and/or routines, and then either use code, a GUI, or some sort of controller to trigger them, making a sort of Ableton-style clip launcher. That’s relatively straightforward and maybe not so interesting to explore, so perhaps an avenue of exploration in that paradigm is having one pattern affect another, or triggering conditions, so that at the end of one pattern a different one can play, and that behavior can be coded, or randomized, etc. (like follow actions in Ableton).
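A minimal sketch of that clip-launcher idea with Pdefs (clip names and material invented):

```supercollider
(
// "clips" as named Pdefs; quant makes launches land on a 4-beat boundary
Pdef(\clipA, Pbind(\degree, Pseq([0, 2, 4, 7], inf), \dur, 0.25)).quant = 4;
Pdef(\clipB, Pbind(\degree, Pseq([7, 4, 2, 0], inf), \dur, 0.5)).quant = 4;

Pdef(\clipA).play; // launch "clip" A
)

// later, a hand-rolled "follow action": stop A and launch B on the next bar
// Pdef(\clipA).stop; Pdef(\clipB).play;
```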

For me personally, the large disconnect in this workflow of pattern and routine sequencing is the missing ability to play in material live, loop it, and record. It’s absolutely possible, but much more difficult I think; and since I’m in a similar boat as you, working exclusively with patterns and the like, even with randomization, it still feels a bit like entering values by hand, with some of the expressivity missing.

I’m also not so interested in ‘progressions’ of notes or strict parameter values, and so I’ve been mainly exploring and thinking about grouping various parameters in SynthDefs and sound making and modifying structures to allow for gestural maneuvering. Create some interesting correlations between various parameters, and play those live and record, and/or using some sort of sequenced structure to manipulate it over time and record that. Another approach, though maybe not leading toward more traditional song structures… Though I don’t see why it can’t as well.

Anyway, just some thoughts. Interesting thread! Thanks for starting it. I’m looking forward to digging into everyone’s responses!

I would suggest looking for ideas in the examples in the /examples/pieces/ subdirectory of the SuperCollider installation directory (on my Linux machine it’s in /usr/share/SuperCollider; not sure what it is on Windows/macOS).

Thank you, yes, these are the best examples I can find of the Pbind based composition.

I’m pretty curious about the people that say Routines are the most flexible. Any pieces to share?
I’ve mostly just found snippets. Then there is otophilia (Yamato Yoshioka, the Japanese coder). His work really is pretty awesome, but I only have what’s in the MIT SC book and a few scattered remnants. It is also older code.

Side note, but wondering if anyone has any non-English SC forums they know about that are still going?

Hello Bruno

I have one confusion with the example you posted.
In the section with the Pbinds there is written:
.play.stop;
What’s the reason for writing both of them, one right after another?

Thanks

Hello,

The typical template for code organization that I offer to my students in that particular class is 1) Big block of SynthDefs at the top; 2) Big block of Patterns where they compose their ‘score’; and 3) Pspawner or fork at the end to play it all in time.

The use of .play.stop in the Pattern block is a pragmatic ‘trick’ to avoid creating two separate layers of variables (one for Pbind ‘scores’ and another for the EventStreamPlayers that play them).

Keeping Pbind and EventStreamPlayer in separate variables typically looks like this:

~melody1 = Pbind(\degree, Pseq([0, 1, 6, 3], inf), \dur, 0.3);
~player = ~melody1.play;
~player.stop;
~player.play;

While the above is very clear, I found it beneficial to my students to reduce the number of variables in this context, so I purposefully ‘obscure’ the difference between Pbind and EventStreamPlayer. Using .play.stop; within this kind of code organization lets you work with fewer variables and looks more straightforward:

// keeping only the EventStreamPlayer in a variable
~melody1 = Pbind(\degree, Pseq([0, 1, 6, 3], inf), \dur, 0.3).play.stop;
~melody1.reset.play;
~melody1.stop;

The EventStreamPlayers are created right away and saved into the variables with meaningful names like “melody1”, “kick2”, etc. I could use just .play, but they would all actually make a jumble of sound right away when I evaluate the big block of Patterns. Adding the .stop right after the .play just does the job of preventing that.

Then actual playing is left for the third and last code block within a fork or Pspawner.

In short, it is a pedagogical hack :slight_smile:

PS. Pbindefs would be another possible solution here (as each Pbindef has its own ‘name’ so you don’t need to put it into a variable), but I have opted to stay with vanilla Pbinds in that course.


Thanks a lot for clarification! :slightly_smiling_face: