Podcast with James McCartney

Cool to hear his ideas about his own music (and of course about computer science!):

All the best,


Definitely an interesting insight. I wonder if anybody has more information about his new project, what he calls his own “SuperCollider successor”?

I could not find anything relevant online other than the name: “sapf (sound as pure form)”.

Great podcast. I learned a lot about the history of SuperCollider and about the technological / cultural / musical environment that led to its creation. It’s really mind-boggling to hear how fast audio technology and computer music evolved during the 80s and 90s, when programs became able to compute faster than sound and when the music tech industry wasn’t nearly as developed as it is nowadays.

I would love to hear more about that new language too. It seems like a really fun project. Does anyone have something to share about it? It looks like James McCartney has already presented it to small audiences.

Totally agreed. The historical development is super interesting. The late 80s, 90s and early 00s were a “gold rush” period of computer music. The majority of realtime environments emerged just then – and have kept their position! Ever since, I’ve wondered when, thanks to the progress in technology, something new would appear and replace the proven workhorses SC, Pd, ChucK etc., but no big alternative has shown up so far. And yes, on the other hand, audio technology is much more ubiquitous.

James McCartney has appeared at past SC symposia. A web search will turn up some info snippets about his ideas. He was to have been a speaker (on his new projects) at the cancelled 2020 symposium:

You can find some stuff here:

Looks like he did a demo in 2015 at pdcon

A screen cap on Twitter showed what looked like a ton of arrays of integers being entered or echoed at an interactive prompt.

Here’s the text from McCartney’s SoundCloud:
This programming language is called:

“A tool for the expression of sound as pure form.”

It is an interpreter for a language for creating and transforming sound. The
language is mostly functional, stack based and uses postfix notation similar to
FORTH. It represents audio and control events using lazy, possibly infinite
sequences. It intends to do for lazy sequences what APL does for arrays: provide
very high level functions with pervasive automatic mapping, scanning, and
reduction operators. This makes for a language where short programs can achieve
results out of proportion to their size. Because nearly all of the programmer
accessible data types are immutable, the language can easily run multiple
threads without deadlock or corruption.
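sapf itself wasn’t public when this was written, so here is only a conceptual sketch in Python of the lazy-sequence idea described above: a signal as a possibly infinite generator, where samples are computed only when demanded (all names here are invented for illustration).

```python
import itertools
import math

def sine(freq, sr=44100):
    """An infinite, lazy sine wave: samples are produced only on demand."""
    for n in itertools.count():
        yield math.sin(2 * math.pi * freq * n / sr)

def take(sig, n):
    """Force only the first n samples of an otherwise infinite signal."""
    return list(itertools.islice(sig, n))

# Take three samples of a 440 Hz sine; the rest is never computed.
first = take(sine(440), 3)
```

Because the generator is immutable from the consumer’s point of view and produces values on demand, several consumers in separate threads could each pull from their own instance without coordination, which is roughly the deadlock-free property claimed above.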


Other languages that inspired this one:
APL, Joy[1], Haskell, Piccola[2], Nyquist[3], SuperCollider[4].

APL and FORTH (from which Joy derives) are both widely derided for being
write-only languages. Nevertheless, there has yet to be a language of such
concise expressive power as APL or its descendants. APL is powerful not because
of its bizarre symbols or syntax, but due to the way it automatically maps
operations over arrays and allows iterations at depth within arrays. This means
one almost never needs to write a loop or think about operations one-at-a-time.
Instead one can think about operations on whole structures.
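That APL property can be sketched in Python (the `pervasive` helper is invented for illustration): lift a scalar function so it automatically maps at any depth of a nested list, and the explicit loop disappears.

```python
def pervasive(f):
    """Lift a scalar function so it recurses into nested lists."""
    def apply(x):
        if isinstance(x, list):
            return [apply(e) for e in x]  # map at depth, no user-written loop
        return f(x)
    return apply

double = pervasive(lambda n: n * 2)
doubled = double([1, [2, 3], [[4], 5]])  # -> [2, [4, 6], [[8], 10]]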

Here is a great quote from Alan Perlis[5] on APL, which also reflects my
interest in this way of programming:
“What attracted me, then, to APL was a feeling that perhaps through APL one
might begin to acquire some of the dimensions in programming that we revere in
natural language — some of the pleasures of composition; of saying things
elegantly; of being brief, poetic, artistic, that makes our natural languages
so precious to us.”

The Joy language introduced concatenative functional programming. This generally
means a stack based virtual machine, and a program consisting of words which are
functions taking an input stack and returning an output stack. The natural
syntax that results is postfix. Over a very long time I have come to feel that
syntax gets in between me and the power in a language. Postfix is the least
syntax possible.

There are several reasons I like the concatenative style of programming:
Function composition is concatenation.
Pipelining values through functions to get new values is the most natural way to work.
Functions are applied from left to right instead of inside out.
Support for multiple return values comes for free.
No need for operator precedence.
Fewer delimiters are required:
Parentheses are not needed to control operator precedence.
Semicolons are not needed to separate statements.
Commas are not needed to separate arguments.

When I am programming interactively, I most often find myself in the situation
where I have a value and I want to transform it to something else. The thing to
do is apply a function with some parameters. With concatenative programming this
is very natural. You string along several words and get a new value.
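The stack-based, postfix model described above can be sketched as a tiny evaluator in Python (a hedged illustration, not sapf’s actual machinery): each word is a function that transforms the stack, and a program is just their concatenation, with no precedence, parentheses, or commas needed.

```python
def evaluate(program, words):
    """Run a whitespace-separated postfix program against a word table."""
    stack = []
    for token in program.split():
        if token in words:
            words[token](stack)         # a word transforms the stack
        else:
            stack.append(float(token))  # literals push themselves
    return stack

words = {
    "+":   lambda s: s.append(s.pop() + s.pop()),
    "*":   lambda s: s.append(s.pop() * s.pop()),
    "dup": lambda s: s.append(s[-1]),
}

result = evaluate("3 4 + dup *", words)  # -> [49.0]
```

Note how `3 4 + dup *` squares the sum of 3 and 4 reading strictly left to right, with function composition being nothing more than concatenation of words.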


Coincidentally enough, one of the biggest changes I made in my live coding dialect was to switch from function composition by argument-passing toward a function-composition (“chain”) operator:

// old way, IMO hard to read
/snare = "\shift(\ins(" - -", ".", 1..2, 0.5), ".", 1..2, 0.25)";

// new way
/snare = "[ - -]::\ins(".", 1..2, 0.5)::\shift(".", 1..2, 0.25)";

The latter reads naturally:

1. Start with strong strokes on 2 and 4;
2. Insert one or two ghost notes onto any empty 8ths;
3. Shift one or two of those ghost notes forward or back by a 16th.
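The dialect above is the poster’s own, but the same refactoring can be sketched generically in Python (the `chain` helper is invented for illustration): replace inside-out nesting with a left-to-right pipeline that reads in the order the steps actually happen.

```python
from functools import reduce

def chain(value, *fns):
    """Pipe a value through a sequence of functions, left to right."""
    return reduce(lambda acc, f: f(acc), fns, value)

# inside-out nesting: read from the innermost call outward
nested = str(len("hello"))        # -> "5"

# chained: reads in execution order
piped = chain("hello", len, str)  # -> "5"
```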

Also, over the years I’ve gravitated toward a style of writing SynthDefs that uses much less nesting. This is not strictly concatenative (which isn’t possible in the current UGen system), but flattening the nesting out into separate vars (where a line may add an operation to what was done before) is IMO a million times easier to read:

// totally nested style (like in some older tutorials)
SynthDef(\analog, { |out, gate = 1, amp = 0.1,
	freq = 440, detun = 1.008, ffreq = 2000, rq = 0.5,
	ffDecay = 0.1, ffEgMul = 3,
	atk = 0.01, dcy = 0.1, sus = 0.6, rel = 0.1|
	Out.ar(out,
		RLPF.ar(
			Saw.ar(freq * [1, detun]),
			(ffreq *
				(EnvGen.kr(Env.perc(0.01, ffDecay)) * ffEgMul + 1)
			).clip(20, 20000),
			rq
		) * (
			EnvGen.kr(Env.adsr(atk, dcy, sus, rel), gate, doneAction: 2)
			* amp
		)
	)
});
// pseudo-concatenative:
// each var is max 1 line. NOOOO long expressions!
SynthDef(\analog, { |out, gate = 1, amp = 0.1,
	freq = 440, detun = 1.008, ffreq = 2000, rq = 0.5,
	ffDecay = 0.1, ffEgMul = 3,
	atk = 0.01, dcy = 0.1, sus = 0.6, rel = 0.1|
	var osc = Saw.ar(freq * [1, detun]);
	var feg = EnvGen.kr(Env.perc(0.01, ffDecay)) * ffEgMul + 1;
	var filt = RLPF.ar(osc, (ffreq * feg).clip(20, 20000), rq);
	var eg = EnvGen.kr(Env.adsr(atk, dcy, sus, rel), gate, doneAction: 2);
	Out.ar(out, filt * (eg * amp));
});

Both of those SynthDefs define the same graph! But… honestly… if you structure your code like the first one, you’re going to scratch your head years later when you come back to it. It’s a mess. So we can take some ideas from “concatenative style” and apply them even in ()-arg syntax.


I use a ChucK-like operator => defined as => { |that| that.value(this) } so I can write

{Saw.ar(400) => LPF.ar(_,800)}.play


3 + 4 => _.postln
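A hedged Python analogue of that => operator (the `Pipe` wrapper here is invented for illustration): overload >> so a value flows left to right into the next function, instead of being wrapped inside a call.

```python
class Pipe:
    """Wrap a function so a value can be pushed into it with >>."""
    def __init__(self, f):
        self.f = f
    def __rrshift__(self, value):  # evaluated for `value >> Pipe(f)`
        return self.f(value)

doubled = (3 + 4) >> Pipe(lambda x: x * 2)  # -> 14
```

The reflected `__rrshift__` hook fires because the left-hand value’s own type doesn’t know how to shift by a `Pipe`, which gives the same left-to-right reading as the sclang version.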

Hello everyone !

Do you think this new language will replace SC / Csound / Pure Data… or is it just another beautiful one coming to the party?

@VIRTUALDOG @madskjeldgaard and others: I’m worried about the availability of Csound and SuperCollider in a few decades – am I wrong? If not, which one would have the best longevity / long-term compatibility for you, and why?

All the Best,

Hi and welcome,

As I understand it from the podcast, he’s doing this mainly for his own usage.

Well, Csound has been around for 35 years, SC for ca. 25. As I argued in my post above, no big alternatives have shown up since 2003 (ChucK). Nobody knows the future, but worrying about the two mentioned, at least for the next 10 years or so, seems unnecessary to me. If you are familiar with both of them, I think you’re on the safe side concerning the application of your skills. Concerning the reusability of certain scripts, that’s a bit different: things might always change in details. In general you can’t expect the same audio code to run decades from now with the exact same results. The sustainability of live electronics is limited in that sense, as proven by many complaints on the SC mailing list in the past: slight implementation changes as well as operating-system specifics can totally change the sounding results.

Thank you for your time !

So I understand there is no difference between Csound and SC on the long-term development side? Won’t the fact that the SC user base seems larger than the Csound one affect system availability, in your view?

In the worst-case scenario… as a developer, do you think SC would be the easiest to maintain if development had to stop, or would it be Csound or Pure Data, since they’re not written in C++ with a complex server architecture? By the way, apart from third-party components like PortAudio… what would break system compatibility with a new OS? Does SC have more dependencies than Csound?

Please excuse my (too many!) beginner questions, but I would like to know what you think about that.

Many thanks! And please excuse my poor English…

Some time ago, people compared activity on mailing lists etc. (you’d find threads in the archives). My impression is that Csound had a comeback a few years ago, after it looked like it would decline, but I’m not really following it. I think activity in SC has risen as well since the mid-10s, but I might be biased.

Thank you for posting this interview! Interesting!

There is also a transcript: http://www.darwingrosse.com/AMT/transcript-0350.html


Russell Pinkston likes to tell the story about getting the prof gig at UT, as a hot CM composer/programmer coming out of Columbia U as a newly minted doctor, and then his first student was James… and he had no idea what to do with him. A lot of us studied with Russell: Eli, Brian, Jo, me. But having someone like JM as a (first!) student must have been intimidating.


There’s a talk by Miller Puckette floating around somewhere, where he argues that the computer music world needs more interesting DSP but maybe doesn’t need another platform (or rather that there is much less to gain at this point from a new language/environment than from new DSP). If his feeling was right, that may have been borne out in the lack of new environments.

It’s a general problem with computers… I’m pretty sure I can do nothing now with my Digital Performer projects from graduate school.

Currently, SC development practice is to review every change before merging it into the main line. One of the considerations is, “will this break existing code?” vs “how bad is the bug?” where, if the change is going to break user code, it had better be a really bad bug. That is, we’re trying to prioritize stability as much as possible, and avoid changes along the lines of “well, this is a little annoying, let’s change it and… oh, now these 300 user scripts have changed behavior” :man_facepalming:

Future proofing is difficult, but we’re trying to be sensitive to that.



About more DSP instead of more environments: that’s fine by me.

Apart from code changes (SC syntax, improving internal SC or Csound components…), what would make an environment obsolete with regard to new OSes (not counting backward compatibility of SC / Csound projects)? If it’s just about dependencies (PortAudio…), it seems easy to maintain Csound or SC in the long term, right? Maybe one of the two is less dependent on third-party components? Simple curiosity :wink:

Thank you !

Best regards,

In 2014 Miller Puckette gave a lecture here in Graz (as fascinating as James McCartney’s podcast) and I asked him about his expectations for new platforms. I remember that he didn’t expect anything groundbreakingly new, but rather a better connection between existing worlds. Looking at certain developments that have happened meanwhile (e.g. Christof Ressi’s VST integration with SC and Pd), he was absolutely right.
Funny side note from the event: my colleague Winfried Ritsch unpacked a NeXTstation from the early 90s. They performed an old Pd patch, which worked like a charm!
For those of you speaking German: I even found a blog about that evening! Mostly about sustainability:


This is a very good question – I think it would take looking at how OTHER projects become obsolete. Especially in open source, popularity and support can grow exponentially (whoo!), but it’s easy to forget that they can shrink just as exponentially when a project loses support. SuperCollider is lucky right now to have a pretty diverse and skilled group of developers working on it, but plenty of OS projects have only 1 or 2 core developers – if one leaves, that means fewer or zero updates, and the days tick away until a major change is needed and no one is around for the task.

The first thing you can do if you’re wondering about the future of a project: check its GitHub page. How many people are making changes to the code? Was there a main developer who made most of the commits and then left the project? Are people reporting bugs? Are the bugs getting fixed? Are best practices in place – unit tests, contributor guides, codes of conduct, etc.? These are the things that make a project last.

It’s worth keeping in mind that SC is on its third complete UI framework (Cocoa, Java, and now Qt) and its third major CPU architecture (PPC -> Intel -> ARM) – I think there’s SOMETHING about its user community and structure that has managed to be persistent over the years?
