Limitations of Composition in SC

Hey, I've made this piece entirely in SC, which made me think about the limitations of this workflow:

10 Likes

it sounds great! - what was the biggest challenge with this?

With “entirely in SC”, do you mean that you used no hardware interfaces/control surfaces? Or do you mean that everything is synthesis and no samples are used? Something else?

Recently I have been reading "Parallel Minds" by Laura Tripaldi, a great book.
She discusses some concepts of nanotechnology for synthesizing complex compound structures and describes complexity as being made of two things: "All definitions, however, agree on the fact that there are two fundamental characteristics of a complex system: a multiplicity of constituent elements, and the presence of non-negligible interactions between them",
and points out that "Newtonian physics, in fact, although able to study systems of bodies interacting with one another, through gravitational or electrostatic force for example, can provide exact predictions only when that interaction is limited to a maximum of two objects", a limitation known as the "three-body problem".
So complex compound structures are made of at least three individual parts interacting with each other.

I have also been listening to this podcast series called Objecthood and have read a lot about sound as an object, for example this thesis.

Coming from these ideas, my main interest is to create complex compound structures of sound objects and gradually transform their timbre over time.
For this to work, I'm trying to come up with SynthDefs which have a lot of degrees of freedom in terms of timbral transformation.

For this piece I used two SynthDefs: this squeaky FM instrument and a pulsar-ish microsound instrument (I say pulsar-ish because the pulsar synthesis implementation was not 100% correct at the time I wrote the instrument).

I'm using this setup for composing (I've also shared it in another post), which enables me to make transitions between two Patterns using ~runTrans. ~runTrans plays an Envelope on the server, writes it to a control bus, and maps the corresponding pattern keys to that bus. I'm normally using Pdefs with Pmono.

// utility functions for making transitions and organizing rhythm
(
~pParTracks = { |...tracks|
	// each track is an array: [dataPatternName, durPattern]
	// one PatternProxy per track, later filled by ~runTrans with the transition mappings
	var fxTracks = tracks.collect { PatternProxy(()) };
	var evPatternFns = tracks.collect { |trackArgs, n|
		var dataPatternName = trackArgs[0];
		var durs = trackArgs[1];
		// chain: transition proxy <> data Pdef <> rhythm
		fxTracks[n] <> Pdef(dataPatternName) <> Pbind(\dur, durs)
	};
	var rhythmPars = Ppar(evPatternFns);
	// return the combined rhythm pattern and the proxies for ~runTrans
	(\rhythm: rhythmPars, \fxs: fxTracks);
};

~mapTrans = { |parEnvs, transDur = 1|
	// Penv entries: scale their segment times to the transition duration
	var penvs = parEnvs.select { |v| v.class === Penv }.collect { |penv|
		penv.times = penv.times * transDur
	};
	// for every remaining entry (an Env), allocate a control bus
	var busses = parEnvs
	.select { |v, k| penvs.keys.includes(k).not }.collect { Bus.control(s, 1) };

	// play the Envs on the server, writing them to the buses,
	// and free the buses once the transition is over
	{
		busses.collect { |bus, parName|
			Out.kr(bus, EnvGen.kr(parEnvs[parName], timeScale: transDur));
		};
		Line.kr(0, 1, transDur, doneAction: 2);
		Silent.ar;
	}.play.onFree {
		busses do: _.free
	};

	// the result is turned into Pbind key/value pairs by ~runTrans
	busses.collect(_.asMap) ++ penvs
};

~runTrans = { |transProxy, transDef, transDur|
	// point the fx proxy at a Pbind which maps the transitioned keys to the buses
	transProxy.source = Pbind(*~mapTrans.(transDef, transDur).asKeyValuePairs);
};
)

// test SynthDef
(
SynthDef(\test, {
	var trig = \trig.tr(0);
	var gainEnv = EnvGen.ar(Env.perc(\atk.kr(0.01), \rel.kr(0.5)), trig, doneAction: Done.none);
	var sig = SinOsc.ar(\freq.kr(440));
	sig = sig * gainEnv * \amp.kr(0.25);
	sig = Pan2.ar(sig, \pan.kr(0));
	Out.ar(\out.kr(0), sig);
}).add;
)

// Patterns

(
Pdef(\patternA,
	Pmono(\test,
		\trig, 1,
		\freq, 440,
		\pan, -1,
	);
);

Pdef(\patternB,
	Pmono(\test,
		\trig, 1,
		\freq, 880,
		\pan, 1,
	);
);
)

// compose a piece inside Pspawner

(
Pdef(\player,
	Pspawner({ |sp|

		var partA, partB;

		\partA.postln;

		// play both Pdefs with a different rhythm pattern
		partA = ~pParTracks.(
			[\patternA, 0.25 * Pseq([3, 1, 2, 2], inf)],
			[\patternB, 0.25 * Pseq([2, 1, 2, 1, 2], inf)]
		);
		sp.par(Pfindur(8, partA.rhythm));

		// make individual transitions for \freq over 8 secs for both Pdefs
		~runTrans.(partA.fxs[0], (
			freq: Env([440, 220], 1, \exp),
		), 8);

		~runTrans.(partA.fxs[1], (
			freq: Env([880, 1760], 1, \exp),
		), 8);

		sp.wait(8);

		\partB.postln;

		// play both Pdefs with a different rhythm pattern
		partB = ~pParTracks.(
			[\patternA, 0.25 * Pseq([3, 3, 2], inf)],
			[\patternB, 0.25 * Pseq([2, 1, 3, 2], inf)]
		);
		sp.par(Pfindur(2, partB.rhythm));

		sp.wait(2);

		\done.postln;
		sp.wait(2);
		sp.suspendAll;
	});
).play;
)

This is the actual composition. The SynthDefs, Patterns and utility functions are stored somewhere else in the project:

(
Pdef(\player,
	Pspawner { |sp|

		// Part I
		var glisson_I_sequence;
		var pulsar_ornament;
		var vowel_A;

		// Part II
		var glisson_liquid_fx;
		var glisson_Interlude;

		// Part III
		var glisson_III_sequence;

		// Part IV
		var vowel_bass_A;
		var vowel_bass_B;
		var glisson_high_glitch;

		// Part V
		var vowel_trans, pulsar_rhythm;

		///////////////////////////////////////////////////////////

		\part1.postln;

		~absArray = ~getAbs.(~blend.(5, 8, 1), 8);
		~resetL.(~absArray);

		~index = 4;

		glisson_I_sequence = ~pParTracks.(
			[\glisson_Sieve, 0.1875 * Pdict(~patDict, PL(\index)) ],
		);
		sp.par(Pfindur(3.6, glisson_I_sequence[0]));
		sp.wait(3.6);

		glisson_I_sequence = ~pParTracks.(
			[\glisson_I_sequence_A, Pseg([10, 5.1913], [9], \exp, 1).reciprocal ],
		);
		sp.par(Pfindur(9, glisson_I_sequence[0]));

		sp.wait(3);

		vowel_A = ~pParTracks.(
			[\vowel_A, Pseg([51.913, 5.1913], [6], \exp, 1).reciprocal ],
		);
		sp.par(Pfindur(5.9, vowel_A[0]));
		sp.wait(6);
		sp.wait(0.2);

		vowel_A = ~pParTracks.(
			[\vowel_A2,  Pseg([5.55386, 55.5386], [2/3], \exp, 1).reciprocal ],
		);
		sp.par(Pfindur(2/3, vowel_A[0]));
		sp.wait(2/3);
		sp.wait(1/3);

		pulsar_ornament = ~pParTracks.(
			[\ornament, 0.0625 * Pdict(~patDict, PL(\index)) ],
		);
		sp.par(Pfindur(1.0, pulsar_ornament[0]));
		sp.wait(0.5);

		///////////////////////////////////////////////////////////

		\part2.postln;

		~absArray = ~getAbs.(~blend.(5, 8, 1), 8);
		~resetL.(~absArray);

		~index = Pseq(~fibTrans.(12, [4, 0]), inf);

		glisson_I_sequence = ~pParTracks.(
			[\glisson_I_sequence_A, 0.0625 * Pdict(~patDict, PL(\index))],
		);
		sp.par(Pfindur(18, glisson_I_sequence[0]));

		sp.wait(6);

		vowel_A = ~pParTracks.(
			[\vowel_A, Pseg([51.913, 5.1913], [12], \exp, 2).reciprocal ],
		);
		sp.par(Pfindur(25, vowel_A[0]));

		sp.wait(6);

		~runTrans.(vowel_A[1][0], (
			tFreqMF: Env([1.0, 5.00], 1, \sqr),
			tFreqEnvAmount: Env([0.0, 5.00], 1, \sqr),
		), 6);

		sp.wait(13);

		~runTrans.(vowel_A[1][0], (
			tFreqEnvAmount: Env([5.0, 3.00], 1, \sqr),
		), 5);

		sp.wait(5.6);

		vowel_A = ~pParTracks.(
			[\vowel_A2, Pseg([5.55386, 55.5386], [2/3], \exp, 1).reciprocal ],
		);
		sp.par(Pfindur(2/3, vowel_A[0]));
		sp.wait(2/3);
		sp.wait(1/3);

		pulsar_ornament = ~pParTracks.(
			[\ornament, 0.0625 * Pdict(~patDict, 4)],
		);
		sp.par(Pfindur(0.8, pulsar_ornament[0]));
		sp.wait(0.5);

		///////////////////////////////////////////////////////////

		\part3.postln;

		~absArray = ~getAbs.(~blend.(5, 8, 1), 8);
		~resetL.(~absArray);

		~index = 4;

		glisson_liquid_fx = ~pParTracks.(
			[\glisson_liquid_fx, 0.0625 * Pdict(~patDict, PL(\index)) ]
		);
		sp.par(Pfindur(16.25, glisson_liquid_fx[0]));

		sp.wait(4.0);

		glisson_Interlude = ~pParTracks.(
			[\glisson_Interlude, 0.25 * Polybjorklund2([[7,16], [9,16]], _|_, inf, false)],
		);

		sp.par(Pfindur(1.80, glisson_Interlude[0]));
		sp.wait(7.0);

		glisson_Interlude = ~pParTracks.(
			[\glisson_Interlude, 0.25 * Polybjorklund2([[7,16], [9,16]], _|_, inf, false)],
		);

		sp.par(Pfindur(5.00, glisson_Interlude[0]));
		sp.wait(5.25);

		pulsar_ornament = ~pParTracks.(
			[\ornament, 0.0625 * Pdict(~patDict, PL(\index))],
		);
		sp.par(Pfindur(0.8, pulsar_ornament[0]));
		sp.wait(0.8);

		///////////////////////////////////////////////////////////

		\part4.postln;

		~absArray = ~getAbs.(~blend.(5, 8, 1), 8);
		~resetL.(~absArray);

		~index = 4;

		~envBufIndex = 43;

		glisson_III_sequence = ~pParTracks.(
			[\glisson_III_sequence_A, 0.25 * Polybjorklund2([[7,16], [9,16]], _|_, inf, false)],
		);
		sp.par(Pfindur(3.90, glisson_III_sequence[0]));
		sp.wait(4.00);

		glisson_III_sequence = ~pParTracks.(
			[\glisson_III_sequence_B, 0.1875 * Pdict(~patDict, PL(\index))],
		);
		sp.par(Pfindur(41.60, glisson_III_sequence[0]));

		sp.wait(12.60);

		~runTrans.(glisson_III_sequence[1][0], (
			fmFreq: Env([5.0, 4.0, 3.00], [0.5, 0.5], \sqr),
		), 6);

		sp.wait(6);

		~index = Pseq(~fibTrans.(12, [4, 3]), inf);

		sp.wait(23);

		glisson_III_sequence = ~pParTracks.(
			[\glisson_III_sequence_B, 0.1875 * Pdict(~patDict, PL(\index))],
		);

		sp.par(Pfindur(4.20, glisson_III_sequence[0]));

		~envBufIndex = Pseq(~fibTrans.(12, [43, 12]), inf);

		~runTrans.(glisson_III_sequence[1][0], (
			freqEnvAmount: Env([1000.0, 8190.0], 1, \exp),
			fmFreq: Env([3.0, 4.0, 5.00], [0.5, 0.5], \sqr),
			iEnvAmount: Env([5.0, 0.0], 1, \exp),
			overlap: Env([1.0, 0.1], 1, \exp),
		), 4.20);

		sp.wait(4.20);

		///////////////////////////////////////////////////////////

		\part5.postln;

		~absArray = ~getAbs.(~blend.(5, 8, 1), 8);
		~resetL.(~absArray);

		~index = 4;

		vowel_bass_A = ~pParTracks.(
			[\vowel_bass_A, Pseq([51.913], inf).reciprocal],
		);
		sp.par(Pfindur(30, vowel_bass_A[0]));

		glisson_high_glitch = ~pParTracks.(
			[\glisson_high_glitch, 0.1875 * Pdict(~patDict, PL(\index))],
			[\grains, Pseq([0.5], inf)],
		);
		sp.par(Pfindur(30, glisson_high_glitch[0]));

		sp.wait(6);

		\glisson_transition.postln;

		~runTrans.(glisson_high_glitch[1][0], (
			freq: Env([25.957, 103.826], 1, \sqr),
			overlap: Env([0.1, 0.5], 1, \sqr),
			fmFreq: Env([5.0, 3.0], 1, \sqr),
			index: Env([1, 0.125], 1, \sqr),
			iEnvAmount: Env([0.0, 5.0], 1, \sqr),
			clipEnvAmount: Env([0.0, 0.5], 1, \sqr),
		), 24);

		sp.wait(12);

		\bass_transition_A.postln;

		~runTrans.(vowel_bass_A[1][0], (
			formIndex: Env([0.0, 1.0], 1, \wel),
			lpfCutoff: Env([51.913, 500.0], 1, \exp),
			lpfEnvAmount: Env([0.0001, 1000.0], 1, \exp),
		), 12);

		sp.wait(12);
		sp.wait(0.001);

		\bass_transition_B.postln;

		vowel_bass_B = ~pParTracks.(
			[\vowel_bass_B, Pseq([51.913], inf).reciprocal],
		);
		sp.par(Pfindur(20, vowel_bass_B[0]));

		sp.wait(2);

		~runTrans.(vowel_bass_B[1][0], (
			peak: Env([0.0, 1.0], 1, \sqr),
			combDensity: Env([0.0, 1.0], 1, \sqr),
			combEnvAmount: Env([0.0, 2.0], 1, \sqr),
		), 18);

		sp.wait(18);
		sp.wait(0.1);

		///////////////////////////////////////////////////////////

		\part6.postln;

		vowel_trans = ~pParTracks.(
			[\vowel_B, Pseg([51.913, 5.1913, 51.913], [8, 9], \exp, inf).reciprocal ],
		);
		sp.par(Pfindur(17, vowel_trans[0]));

		sp.wait(8);

		~runTrans.(vowel_trans[1][0], (
			overlap: Env([0.50, 1.00], 1, \exp),
			formEnvAmount: Env([0.00, 1.00], 1, \lin),
		), 9);

		sp.wait(9);
		sp.wait(0.1);

		///////////////////////////////////////////////////////////

		\part7.postln;

		~absArray = ~getAbs.(~blend.(5, 8, 1), 8);
		~resetL.(~absArray);

		~index = 4;

		pulsar_rhythm = ~pParTracks.(
			[\rhythm_A, (0.0625 * Pdict(~patDict, PL(\index)))]
		);

		sp.par(Pfindur(36, pulsar_rhythm[0]));
		sp.wait(27);

		\rhythm_transition.postln;

		~runTrans.(pulsar_rhythm[1][0], (
			overlap: Env([0.50, 0.75], 1, \exp),
			lpfCutoff: Env([2000, 500], 1, \exp),
			burst: Env([5, 0], 1, \lin),
			rest: Env([0, 3], 1, \lin),
		), 9);

		sp.wait(8.3);

		///////////////////////////////////////////////////////////

		\part8.postln;

		~absArray = ~getAbs.(~blend.(5, 8, 1), 8);
		~resetL.(~absArray);

		~index = Pseq(~fibTrans.(15, [4, 2, 3, 1], 0, 1), inf);

		pulsar_rhythm = ~pParTracks.(
			[\rhythm_B, 0.0625 * Pdict(~patDict, PL(\index))],
		);

		sp.par(Pfindur(17, pulsar_rhythm[0]));

		sp.wait(17);
		sp.wait(0.3);

		pulsar_ornament = ~pParTracks.(
			[\ornament, 0.0625 * Pdict(~patDict, 4)],
		);
		sp.par(Pfindur(0.8, pulsar_ornament[0]));
		sp.wait(0.8);

		///////////////////////////////////////////////////////////

		\done.postln;
		sp.suspendAll;
	}
).play(t, quant:1);
)

1.) Complexity / Scalability
Most of the time the piece just has two planes of tone; there is only one moment, beginning at 1:53, after the transformation of the squeaky FM instrument, which has three different instruments playing at the same time. I was trying to develop this moment to transform it into the last part but gave up on that. That's the reason why two of the instruments end abruptly at 2:23. There would have been some potential to transform some of these sounds to accompany the ending of the piece, but I lost track of the timeline and gave up on that.
When you scroll through the code of the actual composition you can see that it's already quite long.
The piece is only 4 minutes and there is not that much going on at the same time.
So I think this approach does not scale to larger pieces with more instruments playing at the same time.
I also think that keeping track of all the individual parts on the timeline in one Pspawner is not possible.
I have recorded the output of SC multichannel into Ableton Live. When you see the actual music laid out in the DAW you realize how difficult and laborious it is to work like that in SC; it looks kind of cute in the DAW :slight_smile:

The reason to use this ~pParTracks function is to separate rhythm from all the other SynthDef arguments and to be able to access them individually for transitions using ~runTrans.
When I first started using ~pParTracks I thought of putting several Patterns in there and making individual transitions for each of them. This works in theory, but it turns out that in music not everything starts and stops at the same time. Therefore I end up with parallel ~pParTracks on the timeline which start and stop at different moments, determined by Pfindur, plus their transitions, all of which you have to track somehow.

2.) Programming Skills
I started using SC about three years ago and have spent every free minute I have with it. I have no background in programming. My skill level in programming and DSP has increased rapidly in the last three years, but the amount of time I'm spending on this is crazy.
For some of the functions I have been using to create the piece I got a lot of help from members of the community, especially the ~runTrans function.
I think the structure I'm using is not beginner-level SC and is really hard to maintain with my intermediate SC skillset.
I've run into some problems lately with my setup and have not been able to solve them on my own; see that post.
Both of the SynthDefs I have been using for the composition use the hybrid approach of Impulse.ar for audio-rate triggering together with \tFreq, 1 / Pkey(\dur), which I'm describing in the post above. This leads to timing inaccuracy between language and server and has a negative side effect when using ~runTrans: I often had to adjust the time of Pfindur by some small amount to make sure the Synth node is still on the server for as long as the transition lasts.
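To make that hybrid approach concrete, here is a minimal toy reduction (not one of the actual instruments; the SynthDef and Pdef names are just for illustration): the SynthDef triggers its grain envelope with Impulse.ar, while the Pmono keeps \tFreq locked to the language-side \dur.

(
SynthDef(\hybridGrain, {
	// Impulse.ar triggers the grain envelope on the server
	var trig = Impulse.ar(\tFreq.kr(10));
	var grainEnv = EnvGen.ar(Env.perc(0.001, 0.05), trig);
	var sig = SinOsc.ar(\freq.kr(440)) * grainEnv * \amp.kr(0.25);
	Out.ar(\out.kr(0), Pan2.ar(sig, \pan.kr(0)));
}).add;
)

(
Pdef(\hybridTest,
	Pmono(\hybridGrain,
		\dur, 0.25,
		\tFreq, 1 / Pkey(\dur),   // one grain per language-side event
		\freq, Pseq([440, 550, 660], inf)
	)
).play;
)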
Besides the issues described in that post, one cannot transition keys which are defined in the pattern itself:

\myFreq, 440,
\freq, Pkey(\myFreq),

You cannot make a transition for \myFreq because it cannot be sent to a bus?! I don't know.
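My current understanding of why this fails, as a minimal sketch reusing the \test SynthDef from above (\patternC and the commented-out ~runTrans call are hypothetical):

(
Pdef(\patternC,
	Pmono(\test,
		\trig, 1,
		\myFreq, 440,           // exists only in the language-side event
		\freq, Pkey(\myFreq),   // derived in the language from the literal value
		\pan, 0
	)
);
)

// ~runTrans.(someFxTrack, (myFreq: Env([440, 220], 1, \exp)), 8);
// -> seems to have no effect: \myFreq is not a SynthDef control, so the bus map never
//    reaches the server, and \freq has already been computed inside the Pmono by the
//    time the outer PatternProxy adds the mapping.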

But in general, when you compare it with a DAW, using automation to gradually change parameters over time is really basic stuff. My point here is that the skill level you need to work on a composition in SC is really high, and I think the rare appearance of music made in SC underlines this.
I started studying composition two years ago and got really stuck because of programming. There are so many issues I have to fix right now; I have just pointed out a few which are keeping me away from composing.

Simplify it?

One could make an argument for having a 30-minute jam, recording the audio to the DAW and chopping that up.
I think using SC as a "noise box" like that does not really help when you want to make gradual transitions between states of musical objects and compose longer forms out of them.
Especially when these transitions are the core of the composition and not foreign objects which are just used to fill in the gaps between other musical material. There are some transitions in the piece one would not be able to do with this approach, IMO.

I've just seen these two other examples lately which use the idea of transitioning between events: How to create a popup menu whose menu options are dynamic - #4 by jordan
and Alga.
It seems that besides these two attempts nobody is really missing this feature in SC; I'm really curious why that is.

This was a lot. If you have any other questions feel free to ask :slight_smile:

3 Likes

Hey, I mean that everything from synthesis to composition was made in SC. I hit play and recorded the piece multichannel into Ableton Live.

nice piece!

Simplify it?

perhaps not immediately useful, but for non-interactive processes “offline” models can be quite a lot simpler to work with

and scsynth in non-real time mode can be very fast (i.e. much faster than real-time)

and there are libraries for generating reaper projects procedurally

but, it’s a very different model!

ps. offline as in https://www1.icsi.berkeley.edu/pubs/techreports/TR-92-044.pdf, “an off-line algorithm knows the future, but an on-line algorithm does not”

pps. non-strict systems (i.e. haskell) are particularly nice for this, because of the process-as-value model (i.e. the “known future” is infinite), but of course sclang is nice too

1 Like

Thanks for pointing out NRT processing. I've never worked that way, but I think the immediate response of sounds is probably missing then; I mean trying out parameter sets and listening to how they sound right away. But I could be wrong.
Indeed my workflow right now is pretty much deterministic.
I think at some point it would be nice to loosen this compositional approach to be more "outside of time".
But I cannot think of a way right now that could lead to thoughtful development of musical form via algorithmic composition in general.

Some thoughts on this:

There is a quote from the book I mentioned above which is really interesting: "when you work in nanotechnology it's best practice to not shift every atom around",
which could be a plea for algorithmic composition in general.

There is also a nice talk by RM Francis where he describes his workflow with the FluCoMa toolkit.
What stuck with me from this presentation is the analytical language of John Wilkins, which you can also find in the introduction of The Order of Things by Michel Foucault.
There animals are divided into these categories:
(a) belonging to the emperor
(b) embalmed
(c) tame
(d) sucking pigs
(e) sirens
(f) fabulous
(g) stray dogs
(h) included in the present classification
(i) frenzied
(j) innumerable
(k) drawn with a very fine camelhair brush
(l) et cetera
(m) having just broken the water pitcher
(n) that from a long way off look like flies

This taxonomy shows the limitations of our own thinking and rationality and paves the way for approaches which could be more delightful in terms of coming up with “new things”.
I think a great example for this is the 2D Corpus Explorer where your next decision is based on the model you have chosen for analysing your data.
In the presentation he shares this really great piece, IMO, which was made using the FluCoMa toolkit: Form I Edit, S-Mapped | RM Francis | $ pwgen 20.

I've also been investigating Xenakis' sieves lately and read, for example, this PhD thesis.
What I found really interesting is that all the things mentioned across the 400 pages of the thesis, for example the sieves' periodicity, symmetry, etc., do not explain the composer's decision to actually choose one set of pitches followed by another one to lay out a specific harmonic progression.

One example from the thesis:
For Xenakis' "Akea", the thesis points out that sieves can be used to create non-octave-repeating scales. So far so good.
The piece opens with piano arpeggios where the pitch content is based on the sieve and is accompanied by strings which take their pitch content from the complement of the sieve (all integer values which are not in the sieve). So basically he is dealing with all chromatic notes here at the same time. But what is not mentioned in the analysis is the reason why Xenakis picked these actual pitches in this moment and these other pitches at that moment.
Some of the progressions of pitch clusters look really geometric (look at bars 71-78), so maybe cellular automata could be used to select pitches from the sieve, but who knows.
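For anyone who has not played with sieves yet, here is a rough toy sketch in sclang (my own, not taken from the thesis), building a sieve as the union of two residue classes and auditioning it as a non-octave-repeating scale:

(
// union of the residue classes (11, 2) and (7, 4) over four octaves of semitones
var residue = { |modulus, shift, range| (0..range).select { |n| n % modulus == shift } };
var sieve = residue.(11, 2, 48) union: residue.(7, 4, 48);
var scale = sieve.sort;
scale.postln;
Pbind(\note, Pseq(scale, 1), \dur, 0.2).play;
)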

To wrap it up, I think that algorithmic composition can really enlarge the thinking one can have about musical material. But actual music always happens "in time", and I haven't found a better approach that leads to sophisticated musical results for creating long forms (not saying that the piece I've shared is one of those; I've just started).

4 Likes

trying out parameter sets and listening to how they sound right away

again, not immediately useful, but:

lots of quite interactive systems are structured around “finite” types, i.e. where the future is known up to some “horizon”

i.e. the keykit “phrase” type, the tidal “pattern” type, the kyma “sound” type, &etc.

transforms that require traversing or querying structure can be much simpler to write at such types
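a tiny sclang sketch of that idea (my own toy, not taken from any of those systems): a finite pattern can be fully realised and queried before it is played

(
var pat = Pbind(\degree, Pseq([0, 2, 4, 7], 2), \dur, 0.25);
var events = pat.asStream.all(Event.default);   // realise the finite future as a list
events.collect { |e| e[\degree] }.postln;       // traverse / query the structure before playback
)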

but yes, algorithms for music are a deep well, i only meant that with regards simplicity, simpler types can help!

1 Like

Thanks for your suggestions. I think this whole process of developing a musical language, partly based on algorithms, to come up with sophisticated long formal structures will take a lot more time on my end.

I’m not sure I understood the problem you are facing, but if you haven’t already, you may have a look at the AlgaLib framework where transitions come for free.

1 Like

Hey, thanks for your reply. I've already managed to find a way to do transitions when not wanting to compose audio-rate triggers (which would be really cool, and is actually also not possible with Alga).
My main point is that when it comes to developing long formal structures and making individual transitions of parameter sets, I haven't seen a single example in SC which would help me out with that. I mean, besides all the tech talk and some ideological ideas about algorithmic composition, composition itself is hardly discussed. This is not meant to be offensive; I'm just trying to open up the discussion about finding ways of composing in SC.
I would be really interested in approaches other people might have.

2 Likes

Another option could be to use an OSC timeline editor such as Ossia or Vezér to work on a long structure, while SuperCollider would react to the OSC messages to render the audio.

2 Likes

I haven't explored these personally yet, but Bjarni Gunnarsson's repositories may be of interest, for example: GitHub - bjarnig/OF: SuperCollider sound streams and operation sequences. Models for shaping algorithmic processes and sharing information are partially inspired by data transformation pipelines and signal flows.. These are libraries that he develops as a conceptual grounding for the means of creating music. He typically documents them in a paper, and possibly videos too.

There seems to be a lot of control of algorithmic processes at play in Gunnarsson's compositions. He's also an educator, so perhaps there's something in his presentation of the underlying elements of his work that provides a pathway for discovery.

6 Likes

His music is great!

I've checked out the GitHub repository and some videos on YouTube.
OF and his music are pretty cool. Unfortunately the "interpolation between states" feature is also missing there, and I think you can hear that, for example, at 2:43 on his latest release on Superpang: some textures end abruptly and are being replaced by others.
It's based on Ndef filtering as far as I understand it. Maybe it could be applied to individual SynthDefs instead of using the factory ones.
I liked the use of BBandStop in the synthesis part of it and the DemandEnvGen gendy-like stuff.
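For reference, this is roughly what I mean by gendy-like DemandEnvGen stuff; just my own toy guess, not taken from OF:

(
{
	// random amplitude breakpoints reached over random (audio-rate) durations,
	// loosely in the spirit of dynamic stochastic synthesis
	var sig = DemandEnvGen.ar(
		Dwhite(-1.0, 1.0, inf),       // next breakpoint level
		Dwhite(1/3000, 1/300, inf),   // time to reach it, in seconds
		3                             // sine-shaped segments
	);
	Pan2.ar(LeakDC.ar(sig) * 0.1, 0);
}.play;
)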

“interpolation between states feature”

Are you looking for a preset morpher? JITLib has a one-dimensional form, see ProxyPresetGui & NdefPreset.

The two-dimensional form is nice, too, i.e.

http://www.audiomulch.com/downloads/articles/Metasurface_nime2005.pdf

There’s also Polansky’s work on “Morphological Metrics”, i.e.

http://hdl.handle.net/2027/spo.bbp2372.1987.028

I don't have "big ideas" for this thread at the moment, but regarding LFOs for compositional control, one helpful gadget is an object that bridges values between client and server.

With the ddwCommon quark:

s.boot;

~dur = GenericGlobalControl(\dur, nil, 0.2, [0.05, 1, \exp]);

(
p = Pbind(
	\dur, ~dur.asPattern,
	\freq, Pexprand(200, 800, inf)
).play;
)

~dur.watch.automate { SinOsc.kr(0.05).linexp(-1, 1, 0.05, 0.4) };

p.stop;
~dur.stopAuto.stopWatching;

So then it isn’t necessary to rewrite LFO logic in the client.

hjh

6 Likes

hey, that looks great :slight_smile:

I think the setup which I'm using and described above suits my needs when composing with control-rate triggers. I just use Pmonos with \trig.tr inside of Pspawner and use ~runTrans to transition between parameter sets. It's deterministic but quite straightforward.
This is indeed not really handy for building complexity (a longer piece with a lot of instruments), which I have described above, but I haven't seen any better way yet that gives the same control over the formal structure of a piece of music.

But my whole pattern infrastructure of course does not work when using audio-rate triggers with Impulse.ar to sequence different parameters per grain and to interpolate between them.
How should one then go about composition with the same flexibility as the Pmono / Pspawner / ~runTrans system when using audio-rate triggers?
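Just to illustrate what I mean by sequencing parameters per grain (a toy reduction, not one of my actual instruments): Demand UGens read by the audio-rate trigger give a new value for every grain, but they live outside my Pmono / Pspawner / ~runTrans infrastructure and cannot be transitioned with it.

(
{
	var trig = Impulse.ar(\tFreq.kr(20));
	// a new frequency for every grain, chosen on the server at audio rate
	var freq = Demand.ar(trig, 0, Dseq([440, 550, 660, 880], inf));
	var grain = EnvGen.ar(Env.perc(0.001, 0.02), trig) * SinOsc.ar(freq);
	Pan2.ar(grain * 0.2, 0);
}.play;
)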

Really personal opinion below:

IMO the problematic limitations of SC are more related to the somewhat longer boilerplate code that it sometimes demands, or to not providing live in/out tools as easily as Pd/Max offer them.

This idea of doing smooth sound transformation reminds me of some ideas of the composer Trevor Wishart, who made several acousmatic pieces based on the idea of sound metamorphosis. He is definitely a master of this kind of thing, not because of the tools he uses but because of the way he imagines and implements a sound transformation. For him, a good, convincing sound metamorphosis often depends strongly on the source/destination material rather than on the transformation path. Some stuff works, other stuff just doesn't.

What is incredible is that he managed to implement these sound metamorphoses both in the analog studio (cutting tape and turning filter knobs) and using a set of C functions on old PCs (1994)…

Every time I find myself struggling with some kind of tool, I think of this example for some kind of relief.

4 Likes

Hey, thanks :slight_smile: I'm aware of the work of Trevor Wishart and his use of waveset transformations, and also of Curtis Roads' writings on microsound and electronic composition, where you divide time into micro, meso and macro time scales. You can already find this in Stockhausen's Kontakte and his famous essay "…how time passes…".
In this piece you can, for example, hear the continuous transition between rhythm and tone in one of its freely improvised electronic parts (16:50 - 17:55).
This idea of organising music across multiple time scales has lately come to new life with the nuPG and was also presented in this talk by Marcin Pietruszewski.

I think it's difficult for a lot of people to translate the lower levels of organisation like synthesis (micro time) and patterns (meso time) into musical form, which is the essence of composition, and I haven't seen any good examples in SC which do that. For some people sonification seems to work, but I often find this only conceptually appealing, not perceptually appealing. I think there is no shortcut.

If you want to be an author, painter, composer etc., you have to know what others have written, painted or composed, so you can place your own work in the historical context of art. What is of interest to me is how one would then, at some point, formulate one's own ideas instead of pure imitation.

In this thesis by Peter Hoffmann, which deals with Xenakis' Gendy3, he defines Explicit Computer Music like this:

This attitude, put forward by some computer music composers aims at creating music which is specific to machines, stressing the computational aspect in its composition, by using rigorous formalisms, machine sounds which have no equivalent in Nature, and by conceptualizing and problematizing the use of computers in music.

as opposed to Disguised Computer Music, which he defines like this:

This majority trend in computer music strives at emulating human music making by computers, e.g. by using Artificial Intelligence, Expert Systems, Neural Networks, Psychoacoustics, and Cognitive Sciences. These people want the machines to do what humans do. Humans are supposed to appreciate the machine's artifacts within their inherited cultural framework.

Already in the post-war era, old musical forms like the sonata, which are not only based on but can only exist within tonal harmony, were abolished, and timbre, with its negative definition (everything which is not pitch or amplitude), became the main interest in composition.
I think one key concept of creating musical form within this Explicit Computer Music definition is to make gradual transitions of musical parameters across multiple time scales.
The setup which I'm currently using for composition, and which I have explained in detail above, solves some of these issues, but not in the best way I can imagine. Interestingly enough, because it does not work with audio-rate triggers, one could not use it for the continuous transition between rhythm and tone as in Stockhausen's Kontakte.
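As a purely server-side illustration of that continuum (my own toy example, nothing from Kontakte itself): sweeping an impulse train from a few Hz into the audio range turns a pulse rhythm into a pitched tone.

(
{
	// 4 Hz is heard as rhythm, 160 Hz as tone; the sweep crosses the boundary
	var rate = XLine.kr(4, 160, 20, doneAction: 2);
	var grain = EnvGen.ar(Env.perc(0.001, 0.03), Impulse.ar(rate)) * SinOsc.ar(880);
	Pan2.ar(grain * 0.2, 0);
}.play;
)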

1 Like