Tips for organising polyphonic compositions?


I’m composing a long piece of electroacoustic music and I was wondering if you had any tips on how to code the different voices so that it’s easy to control (and adjust) when a musical event starts in relation to another one on another voice.

Working with patterns, monophonic writing seems relatively easy, but with a polyphonic composition I find it very difficult to keep track.

So if you can spare a few minutes to share your techniques, that’d be really useful!

Thanks :slight_smile:




An interesting general question: how can polyphony be defined in the electronic domain? I think the pattern system is flexible enough to cope with a rather “instrumental” understanding as well as with alternative concepts of dependency and parallel structure.

Some thoughts:

.) Relations can be established e.g. with data sharing (see this chapter in James’ guide). A special kind of data sharing could, for example, be done with one (or more) master patterns for whatever you like (say a timed harmonic sequence) and voice patterns which get their harmonic content from the master. This can of course be done separately for arbitrary voice parameters.
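A minimal sketch of the master/voice idea, using Pdefn as the shared source (this is just one way to wire it up, not necessarily the exact technique from the guide; all names and values are placeholders):

```supercollider
// a "master" harmonic sequence, shared by name
Pdefn(\harmony, Pseq([[0, 2, 4], [1, 3, 5], [0, 3, 5]], inf));

// two voices pulling their pitch content from the master;
// redefining Pdefn(\harmony) later updates both voices at once
Pdef(\voiceA, Pbind(\degree, Pdefn(\harmony), \dur, 1)).play;
Pdef(\voiceB, Pbind(\degree, Pdefn(\harmony).collect(_ + 7), \dur, 0.5)).play;
```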

.) Pspawner is great for sprouting multiple processes in many ways. You could use it for polyphony in the more traditional sense of the word, or for arbitrary nestings and dependencies (a Pspawner can sprout other Pspawners; if you’re interested in that, you could also check the “recursive_phrasing” help file).
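For orientation, a tiny Pspawner sketch: par sprouts a pattern and continues immediately, seq blocks until its pattern has ended, and wait advances the spawner’s own time (the degrees and durations here are just for illustration):

```supercollider
(
Pspawner({ |sp|
	// two voices sprouted in parallel
	sp.par(Pbind(\degree, Pseq((0..7)), \dur, 0.25));
	sp.par(Pbind(\degree, Pseq((7..0)), \dur, 0.5));
	sp.wait(4);  // advance the spawner's own time by 4 beats
	// a sequential coda, sprouted only after the wait
	sp.seq(Pbind(\degree, Pseq([0, 2, 4]), \dur, 1));
}).play;
)
```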

miSCellaneous_lib contains some Patterns for polyphonic tasks:

.) PSPdiv, a wrapper class around Pspawner, is for polyrhythmic structures

.) PmonoPar and PpolyPar generalize the Pmono paradigm to multiple synths and/or setting streams

.) If polyphonic fx processing is a topic, there are options to do this with PbindFx (help file examples 3, 4)

.) Recursive functions can be used to define polyphonic behaviour (PSrecur)

.) PLbindefPar (like PLbindef) is a Pbindef wrapper, but for parallel Pbindefs




The question is very general (but also very interesting).

If your music uses some kind of time signature, it’s just a matter of counting beats to ensure two events happen simultaneously. E.g. last week I created a simple timeline to trigger the start and stop of (infinitely) running patterns on given beats; I’ve demonstrated it in a tutorial video.
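In that spirit, a rough sketch of such a timeline (the beat numbers and the pattern are placeholders):

```supercollider
(
// a fresh clock, so that beat 0 is "now"
t = TempoClock(1);
~pat = Pbind(\degree, Pseq((0..3), inf), \dur, 0.5);

// start the (infinite) pattern at beat 8, stop it at beat 24
t.schedAbs(8, { ~player =, quant: 0); nil });  // nil: don't reschedule
t.schedAbs(24, { ~player.stop; nil });
)
```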

But maybe you want something more dynamic - where one voice is triggered or generated based on what happened in a different voice some time ago? A more concrete example of what you want to achieve could help in getting more answers that are likely to benefit your goal.


I use a global array of segment durations (which in my case are associated with syllables of a libretto) and have a scheduling function that hangs musical gestures on that grid. I have an object that lets me capture performances of the rhythm (which I perform on the j-key) and then all the music reflows…

So I’ll write something like Part(sentence: 3, syl: 5, music: { arg song; Pseq(*[dur: song.parseRhythm(3, [1, 1, 1/3]) etc etc)

where parseRhythm subdivides my grid and spits out dur Pseqs…

and my Part class takes care of the scheduling


But more concretely you can create arrays of times where things happen, and then just index into them in your different musical items to make things coincide…
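For example (a hypothetical sketch; the point is that both voices index the same shared array, so editing one entry moves every event that refers to it):

```supercollider
(
// shared timeline: onsets in beats
~onsets = [0, 2.5, 4, 7.25];

// both voices look up their start times in ~onsets
Ptpar([
	~onsets[0], Pbind(\degree, Pseq([0, 2, 4]), \dur, 0.5),
	~onsets[2], Pbind(\degree, Pseq([7, 9]), \dur, 1)
]).play;
)
```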

@dkmayer Thanks for the pointers towards data sharing and Pspawner, I really need to wrap my head around these. A bit tricky!

This sounds very much like the sort of thing I need. I normally compose with graphical DAWs and I’m struggling with the text based approach at the moment.

Any chance you could share some of your code? I have an idea of what you mean, but it’d be easier with code.

To specify my question a bit more: coming from the graphical DAW world, I miss the ability to move a section or just a few bars around (or just delay them by a few seconds) while keeping the structure of the rest of the piece. I was wondering if people have come up with practical solutions for dealing with this.
Apologies for not sharing any code, I’m working on my first large scale SC project, and at the moment I’m a bit stuck as I’m not sure how to organise it. I think this discussion is helping though!

my own code is pretty chaotic as well but let me try to find something to share…

while I look around, though, you’ll want to get familiar with calling the sched and schedAbs methods on clocks…

instead of using routines with .wait you can schedule a function on a clock, like TempoClock.default.schedAbs(5, { /* play an awesome sound */ });

this will play the awesome sound at beat 5; in this case it’s an absolute time, meaning not 5 beats from now but when the clock reaches beat 5 (on the default clock that’s one beat per second). So you can have a dictionary of important times (or an event), for example ~times = (climax: 50, windDown: 75)

then you can schedule events on these points, like TempoClock.default.schedAbs(~times.windDown, { /* fade all the reverbs out */ })

then if you want the second section to go longer you just change windDown to 90, i.e. ~times.windDown = 90;

to delay something 5 seconds from now you can write SystemClock.sched(5, { /* play awesome sound */ })…

it’s really much more flexible than in the DAW world, as functions can reference their own clocks yet be scheduled on another clock, and all of these can be warped or fast-forwarded programmatically
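Putting those pieces together, a sketch of the dictionary-of-times idea (note that schedAbs captures the time value at the moment it is called, so after editing ~times you re-run the scheduling block for the next performance):

```supercollider
(
~times = (climax: 50, windDown: 75);
~schedule = {
	var t = TempoClock(1);  // fresh clock: beat 0 is "now"
	t.schedAbs(~times[\climax], { "climax!".postln; nil });
	t.schedAbs(~times[\windDown], { "fade the reverbs out".postln; nil });
};
~schedule.value;
)

// stretch the form, then re-run ~schedule for the next run-through
~times[\windDown] = 90;
```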


In a DAW, you’re working with fixed data. What is happening right now, at this point in the timeline, depends on nothing except for the data that exist at that point in the timeline. So, changing the timeline-position of some of the data is easy.

In SC, you’re manipulating the state of a system. The state of the system depends on everything that the system has done up to this point. Consequences are that it becomes much more difficult to start in the middle (because a piece of code that should run at, say, bar 100 might depend on something that was initialized in bar 67 – if you skipped over 67 to start at bar 92, then bar 100 may very well fail). It’s also harder to change the time relationships between these code blocks.

Tools for handling those challenges are IMO underdeveloped in SC, probably because few users seriously confront them (instead choosing different working methods that mesh better with SC’s capabilities and limitations).

I have some undocumented code that organizes a piece into “command” objects, which can then be grouped into sections. With some limitations, I was able to start at the beginning of any section. But it was never fully transparent. If a “command” needed to span multiple sections, I had to write the command once for each section: if one element should start in the middle of section B and play through C and D, there was no way to start at D and automatically look back to see that something was still active from section B. There might have been a way with a different data structure, but I never worked on it beyond that.


For a “fixed enough” composition, what I do to “move around” sections is change the timingOffset of all the events in the track/instrument being moved. E.g.

c = Pbind(*[dur: 0.3, degree: Pseries(0, 1, 8)])
d = Pbind(*[dur: 0.3, ctranspose: 12, degree: Pseries(0, 1, 8)])

Ppar([c, d]).play // in sync

Ppar([c, d <> (timingOffset: 0.9)]).play // 2nd one delayed 3 "beats"
Ppar([c, d <> (timingOffset: 0.1)]).play // 2nd one delayed "a little"

Ptpar basically does the same, but you pass the offsets interleaved with the patterns:

Ptpar([0, c, 0.9, d]).play 
Ptpar([0, c, 0.1, d]).play 

ScTimeLine (linked in a post above) basically uses this in combo with Pfin or Pfindur to also limit how long a “track” plays, e.g.

Ptpar([0, c, 0.9, Pfin(3, d)]).play 

There’s a more complicated way I do this for things “cued” symbolically from other events… i.e. timingOffset can be calculated if the durations are known in advance. But let’s say durs are random:

// riddle me a sync on this, dur not known until it plays!
c = Pbind(*[dur: Pfunc { 0.1 * rrand(1, 3) }, degree: Pseries(0, 1, 8)])

// a basic idea (a bit like Spawner), but this won't change durs in d
(c <> Pbind(*[cnt: Pseries(0, 1), callback: { if(~cnt == 3) { } } ])).play

// pull events from d and replace durs with what c is at
// this will not fully "consume" d though
q = d.asStream;
(c <> Pbind(*[cnt: Pseries(0, 1),
	callback: { if(~cnt >= 3) { ([\dur] = ~dur).play } } ])).play

You could make that more encapsulated with a Plazy. Also, combine it with the previous (fixed offset) idea, and/or use Pdrop to skip some beats from the “slave” pattern, etc.:

(
var narr = [7, 4, 1];
Pn(Plazy({
	var n = narr.pop;
	var q = Pdrop(n, d <> (amp: 0.05, timingOffset: 0.04)).asStream;
	(c <> Pbind(*[cnt: Pseries(0, 1),
		callback: { if(~cnt >= n) { ([\dur] = ~dur).play } } ]))
}), narr.size).play;
)

Another way to trigger streams is with Pgate, but you have to use a “fake” first event in the stream, e.g. a rest, which makes it a bit annoying to use (because the actual start time is somewhere within that rest).

e = ()
Pgate(Pseq([Rest(0.1), d]), 1, \go_d).play(protoEvent: e)
(Pfunc{|ev| if(ev[\degree] == 4) {e[\go_d] = true}; ev} <> c).play

Since a stream made with Prout is a Routine, you could in theory “hang” it (with a Condition), but this doesn’t quite work in combination with the EventStreamPlayer:

q =;

// hang doesn't know how to "playAndDelta"; ibid if you use q.wait
(Prout({|ev| q.hang; loop { ev = ev.yield } }) <> d).play // err

This might be fixable; I don’t know for sure, but probably not, because you cannot hang a nested routine that’s just having values pulled from it with next (which is what EventStreamPlayer does to a stream):

q =;
fork { 0.5.wait; "started ...".postln; Routine { q.hang }.next; "... and finished.".postln };

// to make that work, the outer routine would have to yield the value received
fork { 0.5.wait; "started ...".postln; Routine { q.hang }.next.yield; "... and finished.".postln };

Actually that wasn’t too hard to make work:

EventStreamPlayerH : EventStreamPlayer {

	prNext { arg inTime;
		var nextTime;
		var outEvent =;
		case
		{ outEvent.isNil } {
			streamHasEnded = stream.notNil;
			cleanup.clear;
			this.removedFromScheduler;
			^nil
		}
		{ outEvent === \hang } { ^outEvent } // the only addition basically
		{
			nextTime = outEvent.playAndDelta(cleanup, muteCount > 0);
			if (nextTime.isNil) { this.removedFromScheduler; ^nil };
			nextBeat = inTime + nextTime;	// inval is current logical beat
			^nextTime
		}
	}
}

Test that with

d = Pbind(*[dur: 0.3, ctranspose: 12, degree: Pseries(0, 1, 8)])

q =;
p = (Prout({|ev| q.hang; loop { ev = ev.yield } }) <> d)

r = EventStreamPlayerH(p.asStream, ()).play

// later, evaluate to release the hung stream:
q.unhang

Actually, that change/hack into ESP is not entirely necessary, as you can do this instead:

d = Pbind(*[dur: 0.3, degree: Pseries(0, 1, 8)])
q =;
(Prout({|ev| (play: { q.hang }, dur: 0).yield; loop { ev = ev.yield } }) <> d).play;


Actually, here’s a 2nd workaround, which avoids even generating an extra event, but does change the \finish of the first event; although you could even chain in the old finish function. I’m not doing that in the example below, for simplicity (you’d have to test that the old one is not nil before executing it).

d = Pbind(*[dur: 0.3, degree: Pseries(0, 1, 8)]);
q =;
(Prout({|ev| ev[\finish] = { q.hang }; loop { ev = ev.yield } }) <> d).play;


More sophisticated ideas were discussed in Time-aware merging of two Event Pattern streams.


Useful insight, thanks. I’ve been trying to run SC like I’d run a DAW, and it isn’t one. Let’s see if this helps me compose differently.

That’s a good one, I always tried to write using .wait, but this allows for more flexibility.

Great post all over, but I particularly like this one, because the conditional can be absolutely anything related to the other pattern.

Thank you all for your feedback. I’ll get coding and hopefully post some snippets when I figure out some interesting ways of doing what I want to do!