Limitless Abstraction

The really cool thing about Pure Data is that you can just keep wrapping your patches into abstractions ad infinitum.

I found this aspect made Pure Data really fun and powerful to work with.

The process of continually folding patches into abstractions seems more difficult in SuperCollider.

Take, for example, the following code:

(
~minor_eleven = [ 0, 3, 7, 10, 14, 17 ];
~major_thirteen = [ 0, 4, 7, 14, 18, 21 ];
~chord_shapes = [ ~minor_eleven, ~major_thirteen ];
~chords = Array.fill (4) {|i| 60 - i + ~chord_shapes[i % 2] };

SynthDef (\simple_sine) {
	var env = EnvGen.kr (Env.perc, doneAction:2);
	var sig = SinOsc.ar (\freq.ir (440)) * env;
	sig = Pan2.ar (sig, \pan.ir (0), \amp.ir (0.2));
	Out.ar (0, sig);
}.add;

~roll_chord = {
	arg chord, roll_amount = 0.1;
	Task {
		chord.do {
			arg note;
			Synth (\simple_sine, [
				\freq, note.midicps,
			]);
			roll_amount.rand.wait;
		};
	}.play;
};
)

~roll_chord.(~chords[0])

Task { 64.do {|i| ~roll_chord.(~chords[i % 4]); 1.wait } }.play

Here I have used Task to schedule a chord roll. But it would be nice to have the roll itself (~roll_chord) as a synth that lives on the server, which could itself be controlled by a pattern or what have you.

Similarly, it would be convenient for the whole chord progression (the last line of code) to live on the server as an invokable synth.

Or is this the correct way to approach abstraction in SuperCollider: have sclang deal with the structures that pertain to scheduling and sequencing, and invoke scsynth as needed to deal with DSP?

Personally I don't see anything wrong with what you have here. I don't think there is any conceptual difference between invoking some compiled object on the server and invoking a piece of saved client-side code that is subsequently JITed.

This particular example can be expressed in patterns along these lines:

(
Pbind(
	\instrument, \simple_sine,
	\root, Pseq([0, 1, 2, 3], inf),
	\scale, Scale.chromatic,
	\degree, Pxrand([
		Pn(~minor_eleven, 1),
		Pn(~major_thirteen, 1),
	], inf),
	\sustain, 0.1,
	\strum, Pfuncn({0.1.rand}, inf),
	\dur, 1
).play
)

Overall, I think so, but it's also a matter of taste.
@droptableuser showed the possibility with strum for that specific case, and for more complex nesting see Pspawner, which is a big hit: unlimited nesting of parallel and sequential sequencing, a kind of language of its own within the pattern language!
In general I'm convinced that SC is hard to beat concerning abstraction; that's the power of a text-based programming language! E.g. compare doing massive additive synthesis with different LFOs for all params in Pd and in SC …
Indeed, as always, there are limitations, and one of them is that you cannot change a defined synth structure. But there are ways to circumvent that in practice, so that you don't feel the limitation exists. E.g. the number of channels in a SynthDef is immutable; you can, however, build a SynthDef factory which produces different SynthDefs for i channels (i = 1 … n), or define a maximum number of channels within the SynthDef and do zero padding.
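As an illustration, a minimal sketch of such a SynthDef factory might look like this (the def name player_ and the channel range are my own, hypothetical choices):

```supercollider
// Hypothetical sketch of a SynthDef factory: one def per channel count.
(
(1..8).do { |numChans|
	SynthDef(("player_" ++ numChans).asSymbol, {
		// one sine per channel, all sharing the same amp control
		var sig = SinOsc.ar(\freq.kr(440 ! numChans)) * \amp.kr(0.1);
		Out.ar(\out.kr(0), sig);
	}).add;
};
)

// pick the matching def at call time:
Synth(\player_2, [freq: [330, 440]]);
```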

Concerning server-side sequencing: IMO there are only special cases where it's really an advantage, the most important one being very fast sequencing (more than some hundred events per second). Often you can combine language- and server-side sequencing.

Daniel


There's another aspect of this besides sequencing. You can use Functions as helpers for SynthDef building. This is an enormously powerful option, but it is often overlooked and partly difficult to grasp. Wrapping the functional operations into classes turns them into "pseudo ugens", which are similar to macros in other programming languages. To give an example, the last update of the miSCellaneous_lib quark contains two pseudo ugens, Fb1 and GFIS; the first enables single sample feedback, the second a generalization of Agostino Di Scipio's functional iteration synthesis.
With GFIS, arbitrary operators are applied iteratively; audio comes from changing init or parametrization data. Have a look at the implementation, it's surprisingly simple and short. The core is a straight do loop; func contains the operator combo you pass to the UGen:

		n.do { |i|
			sigs[i] = (i == 0).if {
				func.(init, 0)
			}{
				func.(sigs[i-1], i)
			}
		};

Another one: last term a student asked me about iterative FM. This was a bit more effort, but the same principle: define an iteration depth, and a Function (or pseudo ugen) is the workhorse that turns it into a complex ugen graph.
No idea how much effort that'd be in other environments.
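For flavor, here is a rough sketch of what I mean (my own toy version, not the student's code): a Function acting as a pseudo ugen that unrolls an FM chain of a given depth:

```supercollider
// Hypothetical sketch: each oscillator in the chain modulates the next.
(
~iterFM = { |depth = 4, baseFreq = 200, index = 100|
	var sig = SinOsc.ar(baseFreq);
	(depth - 1).do {
		sig = SinOsc.ar(baseFreq + (sig * index));
	};
	sig
};

{ ~iterFM.(5, 150, 300) * 0.1 ! 2 }.play;
)
```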

Daniel


@capogreco

If you havenā€™t read:

  • A Practical Guide to Patterns by James Harkins (an excellent tutorial).
  • The JITLib Tutorial (not as good, but JITLib is probably what you want)

then you should. You might also then want to read [How I Live Code](https://theseanco.github.io/howto_co34pt_liveCode/) to see a practical way in which you might apply these things.

As a general point, you can do sequencing on the server just like you can in Pure Data, using Demand UGens. People mostly don't do this for two reasons: first, the documentation isn't very good, so it can be hard to wrap your head around; secondly, there's mostly no point. Unless you're doing something where you need sample-accurate sequencing, or you're firing off a huge number of events a second, there's no need.
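To make that concrete, here is a minimal sketch of server-side sequencing with Demand UGens (the note values and tempo are arbitrary):

```supercollider
// The whole sequence lives inside one synth, clocked on the server.
(
{
	var trig = Impulse.kr(8);  // 8 steps per second
	var freq = Demand.kr(trig, 0,
		Dseq([60, 63, 67, 70, 74].midicps, inf));  // looping note sequence
	var env = Decay2.kr(trig, 0.01, 0.2);
	SinOsc.ar(freq) * env * 0.1 ! 2
}.play;
)
```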

Patterns are just way better for most tasks (on a side note, I wish the tutorials didn't emphasize tasks/routines, which really should be an advanced topic for the rare cases where patterns are the wrong abstraction). Everything you want to do there can be done with patterns. Patterns can call other patterns and use other patterns. They can also (so long as a small amount of latency isn't an issue) sync with server UGens fairly easily. And there are some incredibly powerful abstractions built into patterns that far exceed anything that Pure Data can do. My advice would be to forget the Pure Data way of doing things for the moment, immerse yourself in JITLib and patterns, and just see what you can do. At the end of it, if there are things you feel you're missing, post a question. But I feel in this instance it's best to familiarize yourself with the SuperCollider way of working, because it is generally better.
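As a small sketch of what "patterns can call other patterns" looks like in practice (names are mine):

```supercollider
// Event patterns embedded inside another pattern: Pseq plays each
// phrase's whole event stream in turn.
(
~melody = Pseq([60, 64, 67, 71], 2);
~phraseA = Pbind(\midinote, ~melody, \dur, 0.25);
~phraseB = Pbind(\midinote, ~melody + 5, \dur, 0.125);
Pseq([~phraseA, ~phraseB], inf).play;
)
```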

I'll post some concrete examples of pseudo ugens, SynthDef factories and wrap when I get a chance.

Here's a refactoring of your original example that captures some of the modularity of Pd you're talking about, but does it on the event/pattern side, which is something that's very much outside the capabilities of Pd.

( // changes here: added a \release arg and switched Out to OffsetOut
SynthDef (\simple_sine) {
	var env = EnvGen.kr (Env.perc(0.01, \release.kr(1)), doneAction:2);
	var sig = SinOsc.ar (\freq.ir (440)) * env;
	sig = Pan2.ar (sig, \pan.ir (0), \amp.ir (0.2));
	OffsetOut.ar (0, sig);
}.add;
)
(
~minor_eleven = [ 0, 3, 7, 10, 14, 17 ];
~major_thirteen = [ 0, 4, 7, 14, 18, 21 ];
~chord_shapes = [ ~minor_eleven, ~major_thirteen ];
~chords = Array.fill (4) {|i| 60 - i + ~chord_shapes[i % 2] };

Pdef(\rollChord, {
	|chordIndex, rollDelta, count|
	
	Pbind(
		\instrument,	\simple_sine,
		\amp,			Pfunc({ rrand(-30, -20).dbamp }),
		\midinote, 		Pser(~chords[chordIndex], count),
		\dur, 			Pfunc({ rollDelta * rrand(0.5, 1) }),
	)
});

Pdef(\rollBase, Pbind(
	\instrument, 	\rollChord,
	\type, 			\phrase,
	\legato, 		1,
	\rollDelta, 	0.1,
	\count, 		16,
	\chordIndex, 	1,
	\release, 		1,
	\chords, 		~chords,
	\dur, 			Pkey(\rollDelta, inf) * Pkey(\count) / 2,
));	

Pdef(\fast, Pbind(
	\rollDelta, 0.02,
	\dur, Pkey(\rollDelta, inf) * Pkey(\count) / 2,
));

Pdef(\slow, Pbind(
	\rollDelta, 0.5,
	\dur, Pkey(\rollDelta, inf) * Pkey(\count) / 2,
));

Pdef(\fastSlow, Pbind(
	\rollDelta, Pn(Penv([0.15, 0.25, 0.15], [30, 30]), inf),
	\dur, Pkey(\rollDelta, inf) * Pkey(\count) / 2,
));

Pdef(\clipped, Pbind(
	\count, Pfunc({ rrand(4, 8) })
));

Pdef(\long, Pbind(
	\dur, 8,
	\count, inf,
));

Pdef(\strum, Pbind(
	\dur, 4,
	\count, 8,
	\release, 10,
	\rollDelta, 0.05
));

Pdef(\chordSequence, Pbind(
	\chordIndex, Pstutter(2, Pseq([
		0, 1, 2, 3, 2, 3, 2, 2, 1
	], inf))
));


Pdef(\rollA, Pdef(\slow) <> Pdef(\clipped) <> Pdef(\chordSequence) <> Pdef(\rollBase));
Pdef(\rollB, Pdef(\long) <> Pdef(\fast) <> Pdef(\chordSequence) <> Pdef(\rollBase));
Pdef(\rollC, Pdef(\strum) <> Pdef(\chordSequence) <> Pdef(\rollBase));

Pdef(\roll).source = Pdef(\rollA);
Pdef(\roll).play;
)

This breaks several of the gestural properties apart into separate Pdefs. Then, it uses the chain operator <> to compose several of these abstractions together into a composite Pdef, and plays it. You can change the source of your final Pdef(\roll) by switching the source = ... line. In addition, you can make modifications to any of the Pdefs being composed and the changes will make their way into your already-playing output Pdef. For example, adding a \release, 5 key to Pdef(\slow) will hold the note longer in a really nice way.
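For instance, that \release tweak might look like this (a hypothetical live edit, re-evaluated while Pdef(\roll) is playing):

```supercollider
// Redefine Pdef(\slow) with a \release key; the already-playing
// Pdef(\roll) picks up the change without stopping.
Pdef(\slow, Pbind(
	\rollDelta, 0.5,
	\release, 5,
	\dur, Pkey(\rollDelta, inf) * Pkey(\count) / 2,
));
```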

This is obviously more verbose than your original, but it's factored into semantically meaningful chunks. With experimentation, you can build out a library of interesting Pdefs for a particular SynthDef or scenario, and then work at a compositional level more like the Pdef(\rollA) definitions at the bottom. If you are smart about using Pkey in your base patterns, you can parameterize them to some extent. For example:

Pdef(\fastBase, Pbind(
	\rollDelta, 0.5 / Pkey(\howFast),
	\dur, Pkey(\rollDelta, inf) * Pkey(\count) / 2,
));
Pdef(\fast, Pbind(\howFast, 4) <> Pdef(\fastBase));
Pdef(\faster, Pbind(\howFast, 6) <> Pdef(\fastBase));
Pdef(\fastest, Pbind(\howFast, 8) <> Pdef(\fastBase));

And, just a note about performance - the Pdef(\rollB) def is generating probably ~70 notes a second, and the CPU impact on the sclang side is pretty inconsequential.


This is a nice addition to Pdef(\rollB):

SynthDef (\simple_sine) {
	var env = EnvGen.kr (Env.perc(0.01, \release.kr(1)), doneAction:2);
	var sig = LFSaw.ar (\freq.ir (440)) * env;
	sig = BLowPass4.ar(sig, \filt.ir(200), 0.9);
	sig = Pan2.ar (sig, \pan.ir (0), \amp.ir (0.2));
	OffsetOut.ar (0, sig);
}.add;
Pdef(\slow, Pbind(
	\rollDelta, 0.5,
	\release, Prand([5, 5, 5, 5, 5, 5, 12], inf),
	\detune, Pfunc({ rrand(-2, 2) }),
	\filt, (Pfunc({ rrand(0, 350) }) + Pn(Penv([200, 1200, 200], [25, 5], [4, -4]), inf)),
	\dur, Pkey(\rollDelta, inf) * Pkey(\count) / 2,
));

The \filt variations are also nice inside of \rollChord instead, so they vary note-by-note, instead of phrase-by-phrase.


Thanks to everyone who contributed to this thread.

It looks to me like a workflow of continuous abstraction is possible along two dimensions:

1. with patterns (with the extra benefits of being stateless & lazy)
2. with pseudo ugens

This has given me plenty to think about, learn, and explore for the time being.

I do have a specific question about how a pattern-based workflow might integrate with a CV clock, which I have posted here.

Again, many thanks!

From recursive_phrasing:

Pdef can be used as a global storage for event patterns. Here a way is provided by which these definitions can be used as an instrument that consists of several events (a phrase), such as a cloud of short grains. Furthermore, this scheme can be applied recursively, so that structures like a cloud of clouds can be constructed.

Without getting into the initial question about different languages, I'd like to express my gratitude to you, since the code you wrote proved to be very instructive and useful to me.

Thanks and cheers from Italy!

kriyananda


I'd just like to add that SCLang would lend itself to encapsulation much better if it had classes in the usual sense. I know that standard class behaviour can be emulated in the language using events and functions, but it's not very natural. This is nevertheless how I abstract things away and get class-like functionality.

If it had classes in the usual sense (not ones that need to be compiled into the class library, but classes that can be defined in the language itself at runtime), abstraction as you describe it would become as natural as it is anywhere else, I think.

It's your lucky day! SCLang has classes!
http://doc.sccode.org/Guides/WritingClasses.html :slight_smile:

I know, I know, but you have to recompile every time, etc.

FWIW, I have been using prototype-based programming in SC for about 14 years now.

Prototype-based object-oriented programming uses generalized objects, which can then be cloned and extended. Using fruit as an example, a "fruit" object would represent the properties and functionality of fruit in general. A "banana" object would be cloned from the "fruit" object, and properties specific to bananas would be appended. Each individual "banana" object would be cloned from the generic "banana" object. Compare to the class-based paradigm, where a "fruit" class would be extended by a "banana" class.
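The fruit/banana idea can be sketched with plain Events, no quarks required (the names and fields here are my own illustration):

```supercollider
// Minimal prototype-style cloning with Events. When an Event receives
// an unknown message, it looks the name up as a key; a Function value
// is evaluated with the event itself passed as the first argument.
(
~fruit = (
	color: \unknown,
	describe: { |self| ("a " ++ self.color ++ " fruit").postln }
);
~banana = ~fruit.copy.put(\color, \yellow);  // clone and extend
~banana.describe;  // prints "a yellow fruit"
)
```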

ddwPrototype quark: Proto class – the basic concept of a prototype

ddwChucklib quark: PR class – global storage for prototypes that are to be used like classes (that is, prototypes stored in the PR collection are not to be modified for specific uses, but only copied/cloned)

Protos are a bit slower than standard classes (they use SC environments – environment variable lookup is slower than local variable access – and method calls have an extra layer of dispatch through doesNotUnderstand). In practice, for my work, this has never been a major problem.

(
~operator = Proto {
	~a = 2;
	~b = 3;
	~op = '+';
	~prep = { |a, b, op|
		if(a.notNil) { ~a = a };
		if(b.notNil) { ~b = b };
		if(op.notNil) { ~op = op };
		currentEnvironment  // i.e. 'self' -- must return this!
	};
	~calc = { ~a.perform(~op, ~b) };
};

// inheritance
~mul = ~operator.clone {
	~op = '*';
};
)

~mul.copy.prep(5, 7).calc
-> 35

It's possible to build large systems this way – almost all of my live coding dialect (the ddwChucklib-livecode quark) is implemented in prototypes. This means that if I find a bug in a generator, I can change the code, re-execute the definition, rerun the pattern-set statement that used the generator, and check the results immediately – no library recompilation.

E.g., here's an abstract syntax tree, all in prototypes: https://github.com/jamshark70/ddwChucklib-livecode/blob/master/parsenodes.scd

hjh


I'm almost certainly nowhere near smart enough :smile: but if I can, I will!

Okay, I think that with plenty of time to get to grips with it all I may be able to help, but this kind of thing is a bit of a step up for me, so you may receive a number of questions. As long as that's cool, I'm 'in,' as they say.

One more thing to keep in mind: there's another version of "dynamically compilable classes" that solves most (not all) of the same workflow problems but is easier and safer, namely improving the library compile time so that it's near instantaneous, via caching and trickery. This is more like a couple of months of work, and is highly testable / far less risky to implement. This means you still lose state when you recompile, but you gain months and months of developer time to work on a different sclang project :)

Regarding contributing to either of these "dynamic classlib" proposals: much more of the work required is in testing rather than engineering work on the C++ internals (I would guess 80% testing/validation, 20% actual sclang internals work), so there's lots of room for anyone to contribute to a project like this even if you're just an sclang user.


Actually, one of the reasons I wrote Proto back in 2005(?) is that the biggest time sink when recompiling the library is reloading the state that you lost, not so much the library compilation time itself.

If I hit "recompile" while working on some music, it goes like this:

  1. "compiled 1522 files in 1.45 seconds"
  2. Reload the composition/performance environment (5-10 seconds).
  3. Reload instruments and musical processes – because most of my work now is live-coded improvisation, of course I won't have prepared instant-loading scripts, so this could take quite some time.

So, if I estimate 30-40 seconds to reload the environment and get back to it, library compilation time amounts to 3-5% of that; speeding up library compilation would really not make much difference for me.

By contrast, using runtime-defined objects in chucklib-livecode, fixing a bug in one of those pseudo-classes goes like this:

  1. Reevaluate the code with the changed Proto definition (2 seconds).
  2. Rerun the relevant cll statement (2-3 seconds).

While admitting that Proto is sometimes awkward to use, it does solve a problem (by a factor of 6-8x) that class library caching wouldn't, for my use cases.

hjh

Here's a fun C++ experiment for anyone to try, if you want to learn a little about sclang or the possibility of dynamic class modifications: add two primitives (probably to Kernel? but it doesn't really matter…) that directly call buildBigMethodMatrix and traverseFullDepTree2. Then call those primitives from sclang (with as little stack / extra wrapping code as possible). If either one works without completely exploding, then we are very, very close to at least runtime modification of existing classes, and possibly even adding new classes.

Apart from some glaring memory management issues, I don't see any conceptual reason why the method table rebuild (at least…) shouldn't be runnable at any point, without a full recompile. I haven't tried this myself; I would be very curious to hear the result if anyone else tries!
