Little live-coding improvisation

A bit of texture-building practice – from zero to the start of this video took maybe 20 minutes (and that’s why I love SC), though I wasn’t counting.




And, almost diametrically opposed – same live-coding instrument, completely different sonic material and treatment. Here I was looking for more things to do with a live rapid-fire crossfader process.



Also I’m gonna add here (with permission) a comment thread from Facebook about this – one takeaway: real programmers look at SC and go “holy …” –

Friend: “As someone who has been coding for most of my adult life, this is weird, wild code. It was as if an alien species invented it.”


“I’m glad you asked” (nobody ever did before)… one confusing thing is that there are two languages in the middle window.

The lines starting with / are my own dialect. Some of those are utility commands to instantiate instruments and players. The longer statements are for the materials being played. These use a set of functions to manipulate lists of events located at specific times within a measure. This is pretty bizarre from an imperative-code viewpoint. For example:

/sw = "\ins("2", 3..5, 0.25)::\stuttN("2", 1..4, 0.125, 0.9, 4..12, "3")::\choke({rrand(0.2,0.6)}, "1")";

\ins("2", 3..5, 0.25) – insert 3 to 5 “2” items, on a 16th note grid

::\stuttN("2", 1..4, 0.125, 0.9, 4..12, "3") – then locate 1 to 4 of those and stutter them in 32nd notes, with 90% probability of a note (10% rest), in bursts of 4 to 12 notes, and use item “3”

::\choke({rrand(0.2,0.6)}, "1") – then, if there are any notes longer than 0.2 to 0.6 beats, choke them off with item “1”

… and 1, 2, 3 are defined by the /sw process. That’s evaluated on the downbeat of every bar, and the result gets streamed out over the duration of the bar. There are definitely results that are not obtainable this way, but there’s enough room to move around that I’m still discovering new things to do with it. (So it’s not a general-purpose language – more like a “sequencer on steroids.”)
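To give a rough sense of how a pipeline like that might work outside the dialect, here’s a hypothetical Python sketch. None of these names or signatures come from SuperCollider or my dialect – they just mirror the three stages described above, with events modeled as (onset, item, duration) tuples within one bar:

```python
import random

# Hypothetical re-imagining of the /sw pipeline; events are
# (onset_in_beats, item, duration_in_beats) tuples within one bar.

def ins(item, count_range, grid, bar_len=4.0):
    """Insert some copies of `item` on a `grid`-beat grid."""
    n = random.randint(*count_range)
    slots = [i * grid for i in range(int(bar_len / grid))]
    return sorted((t, item, grid) for t in random.sample(slots, n))

def stutt_n(events, pick_range, grid, prob, burst_range, sub_item):
    """Replace a few events with bursts of `grid`-length notes/rests."""
    events = list(events)
    n = min(random.randint(*pick_range), len(events))
    # Replace from the back so earlier indices stay valid.
    for idx in sorted(random.sample(range(len(events)), n), reverse=True):
        t, _, _ = events[idx]
        burst = random.randint(*burst_range)
        events[idx:idx + 1] = [
            (t + j * grid, sub_item, grid)
            for j in range(burst)
            if random.random() < prob  # prob chance of a note, else a rest
        ]
    return sorted(events)

def choke(events, max_dur_fn, choke_item):
    """Cut off any note longer than max_dur_fn() beats with `choke_item`."""
    out = []
    for t, item, dur in events:
        limit = max_dur_fn()
        if dur > limit:
            out.append((t, item, limit))
            out.append((t + limit, choke_item, 0.0))
        else:
            out.append((t, item, dur))
    return out

# Roughly parallel to the /sw example above:
bar = ins("2", (3, 5), 0.25)
bar = stutt_n(bar, (1, 4), 0.125, 0.9, (4, 12), "3")
bar = choke(bar, lambda: random.uniform(0.2, 0.6), "1")
```

The point of the sketch is the shape, not the details: each stage takes an event list and returns a new one, so the `::` chaining in the dialect reads naturally as function composition over the bar’s contents.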

The other statements, beginning with BP or VC, are plugging signal processors into specific places. These are straight-up SuperCollider code, defining DAGs of signal processing units and transmitting them to the audio engine to be evaluated continuously. So this part is more like functional programming (not exactly FP – it’s using imperative method calls to build the graph, but the meaning is in the graph’s structure, and this has more in common with FP).
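That “meaning is in the graph’s structure” point can be shown with a toy example. This is not SuperCollider’s actual API – just an illustration, in Python, of imperative calls whose only job is to wire up a DAG that something else then evaluates:

```python
class Node:
    """One unit in a toy signal-flow graph (not a real SC UGen)."""
    def __init__(self, op, *inputs, **params):
        self.op = op          # unit name, e.g. "Saw" or "LPF"
        self.inputs = inputs  # upstream Nodes
        self.params = params  # scalar parameters

def describe(node):
    """Flatten the DAG to a nested textual form; the evaluation order
    falls out of the graph structure, not the order of the calls."""
    parts = [describe(n) for n in node.inputs]
    parts += [f"{k}={v}" for k, v in sorted(node.params.items())]
    return f"{node.op}({', '.join(parts)})"

# The calls below are imperative, but all they do is build structure;
# an audio engine would then evaluate that structure continuously.
osc = Node("Saw", freq=110)
filt = Node("LPF", osc, cutoff=800)
graph = Node("Out", filt, bus=0)
```

Reordering or refactoring the construction code changes nothing as long as the resulting graph is the same – which is why this style, despite the method calls, feels closer to declarative/functional programming than to a sequence of commands.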

Friend (emphasis mine):

Thanks for the detailed explanation. What I was specifically referring to is the synchronous and asynchronous execution of the audio (that is, without getting glitches or interruptions or pauses in the continuous feed of the audio). There are a lot of asynchronous coding steps, involving reserving large amounts of buffer, garbage collection, and making sure the timing and placing the right audio is played. (I could go on but I’m going to stop – time for bed. But, I will add, as someone who works with a lot of people under 30, this code reminds me of the cloud computing code. The function ‘async’, which is ubiquitous on AWS, is very sophisticated and is responsible for a lot of perfectly timed requests between the front and backend. My point: the only people who really know how sophisticated it is are people like me, who had to hand code asynchronous timing and garbage collection algorithms.)

(Me, silently: “Someone noticed!”)


That’s an excellent observation. Basically this boils down to the original creator of SuperCollider, James McCartney, being 1/ brilliant and 2/ an absolute stickler for doing things Correctly when it comes to timing and low-latency responsiveness of the DSP code. (For instance, larger memory buffers are allocated only in a lower priority thread, safely away from the DSP loop; DSP units that need memory are expected to go through his own real-time pool, which is pre-allocated at audio-engine startup. Stuff like that. He left very little to chance in this regard.)
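The pre-allocated pool idea is simple to sketch, even if the real thing is far more involved. This is a deliberately simplified illustration (in Python, which you would never use for actual real-time audio) of the principle: do the expensive system allocation once at startup, so the audio thread only ever pops and pushes free blocks:

```python
class RealTimePool:
    """Toy pre-allocated block pool, illustrating the principle only –
    not SuperCollider's actual real-time allocator."""

    def __init__(self, block_size, block_count):
        # All memory is reserved up front, outside the DSP loop.
        self._free = [bytearray(block_size) for _ in range(block_count)]

    def alloc(self):
        # Constant-time; returns None rather than blocking or calling
        # the OS allocator, so the DSP loop can degrade gracefully.
        return self._free.pop() if self._free else None

    def free(self, block):
        # Constant-time; the block goes back on the free list.
        self._free.append(block)
```

The key property is that `alloc` and `free` are O(1) and never touch the operating system’s allocator, whose latency is unpredictable – exactly what you cannot afford inside a callback that has a hard deadline every few milliseconds.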

Yes. It’s actually pretty shocking that SuperCollider was already getting glitch-free results back in the early aughts, and it’s only gotten better as processors have gotten faster.

Project MUSE - Rethinking the Computer Music Language: SuperCollider

That is – let’s not forget how much stuff is going on under the hood here, which we often take for granted.