SuperCollider 4: First Thoughts

Hi Luke, this sounds amazing!

I am particularly excited about the following:

The Hadron C API will hopefully enable a wide variety of interoperability and extensibility options, while also allowing the core library to remain small, delegating many functions (such as UI and MIDI frameworks) to extensions.

One reason I’ve personally stayed away from sclang is the missing interoperability with other languages/libraries, so this would be a real game changer!

Sweet, thanks for your interest! By making SC code run faster, and by allowing some kind of bridge to C code, I’m hoping that it will allow SuperCollider users to do more with the language without having to learn C++. I’ll keep hacking away, and report back here when I have something that can do things a bit more interesting than compute the Fibonacci sequence.

1 Like

Hi all,

Just to add to what has already been said re: documentation:

In another context (I think the Tidal Cycles forum?) someone shared this resource for thinking about documentation

I find the four-part distinction between Tutorials, How-Tos, Explanations, and Reference to be a very useful way of thinking about what to include in and exclude from any documentation system. Here’s the graphic used to explain these distinctions:

4 Likes

Currently there are:

  • 1124 (excluding Meta) classes in the core library

  • 524 undocumented classes (…half of the library!)

  • 2502 doc / reference pages in total

…there must be some way to streamline the process.

Yeah, apropos documentation: I would love to see SC4 be documented in markdown. It makes the docs easy to convert to any format and easy to maintain, and a static site generator could then create links between documents, tags, etc. automatically (and “inherit” documentation in subclasses from superclasses).

3 Likes

Extremely exciting!!! keep up the good work!

1 Like

Super interesting project, thanks for the sneak peek!

Do you have a sense at the moment of what the potential may or may not be for Hadron on embedded platforms and single-board computers? (For reference, http://bela.io support was recently merged into SC 3.12 after extensive work.)

I don’t really know enough about the domain, but my intuition is that JIT compilation is for the most part not well suited for embedded systems, where the resources for continuous JIT compiling on-the-fly don’t necessarily exist. But perhaps I’m misinterpreting how you anticipate Hadron being used in such a context.

Best of luck with the next stages of development! :slight_smile:

@lnihlen this sounds great! I would love for the next version of SuperCollider to be mostly written in SuperCollider. (As an aside, I’d flirted with moving to doing most of my work in Julia as it is a “modern” JIT platform.)

Speeding up the language would make a huge difference in how I work with SC. In practice, I’d like to be able to speedily do the kind of “out of time” signal processing tasks that are super easy to do in NumPy/SciPy and/or MATLAB, rather than moving to those platforms.

Anyway, great to hear you’re working on this!

1 Like

@jarm, thanks for the kind words, responses below:

I’m currently targeting x86 and ARM backends, both 32- and 64-bit, on macOS, Linux, and Windows. An important requirement for Hadron is that it can run everywhere sclang can run. If it didn’t, I don’t think I could claim that it is a suitable replacement for sclang.

It looks like the Bela is a Cortex-A8, which is 32-bit ARM, with 512 MB of RAM, running a Linux distro. So I would expect that Hadron would be able to run well on that. I haven’t started yet on the 32-bit builds, but I’m going to be targeting a first-generation Raspberry Pi as my 32-bit ARM reference development platform.

The term “embedded systems” covers a wide variety of situations, from single-board computers running Linux all the way down to tiny microprocessors with a few kB of on-chip RAM running bare-metal code with no host OS. I think setting the bar for Hadron at “everywhere sclang runs” is a good first step. My intuition is that compiling to machine code may allow Hadron to run acceptably fast on even less powerful hardware, but there’s a lot of fine print to that, and faster compiled code usually comes at the cost of longer compilation times, so it may actually be the case that getting Hadron to run almost as well as sclang on lower-end hardware becomes an aspirational goal of the project. Time will tell.

1 Like

Hey @joslloand, thanks for your interest in Hadron.

I’ve definitely been looking at Julia for inspiration, there’s a lot there to like. And I have some thoughts about possible speedups for floating-point code in Hadron, but am deferring questions about optimizations like that until after the basic runtime is operational.

I’ll do my best to avoid a long digression into comparing speeds of language implementations; it’s a complex topic that I’ve been doing some reading on recently, mostly in JavaScript land, where there has been much debate about which JS interpreter is the “fastest.” I want to be very careful when talking about Hadron vs. sclang, and only make assertions about things when they are evidence-based and relatively clear. What is clear from my reading is that it is possible to design code that will run extremely fast, and also code that will run extremely poorly, on any language implementation. The goal is to build an implementation that runs very well for a broad variety of program inputs.

My intuition suggests that for some use cases Hadron may be significantly faster than sclang. I believe this because of some design decisions Hadron has made in this direction. Along with myriad smaller decisions, there are three big design approaches that Hadron takes specifically for speed of compiled code:

a) Type deduction. Hadron does its best to deduce types for every variable in a block of code. So if you’re using a loop counter, for example, and it starts as an Integer and only interacts with other Integers, Hadron may be able to inline all of the binop calls down to a single instruction (or a few). This has a widely varying impact on compiled code. Because of method calls (SuperCollider is very much a message-driven language), much type determination has to happen at runtime. So, for example, if that aforementioned loop counter is inside a function that takes the number of iterations as an argument, we can’t assume that that argument will always be an Integer, and so the type ambiguity creeps into the rest of the operations on the loop counter. There may be extensions to the SuperCollider language down the road to allow Hadron to determine types and optimize further, but those are for later. There’s also the realm of speculative (or profile-driven) optimization, but that’s for even further later. :slight_smile:
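To make the loop-counter point concrete, here’s a toy, language-agnostic sketch in Python (this is illustrative only, not Hadron code): a tiny pass that propagates known types through a sequence of operations, falling back to “unknown” whenever an operand’s type can’t be proven, as with an untyped function argument.

```python
# Toy type-deduction pass: propagate known types through a sequence of
# binary operations. When both operands are provably Integer, the result
# is Integer (safe to inline as a machine add); otherwise the result is
# "unknown" (must fall back to a dynamic '+' message send).

def deduce(ops, env):
    """ops: list of (dest, lhs, rhs) add operations.
    env: dict mapping variable names to 'Integer' or 'unknown'."""
    for dest, lhs, rhs in ops:
        if env.get(lhs) == 'Integer' and env.get(rhs) == 'Integer':
            env[dest] = 'Integer'
        else:
            env[dest] = 'unknown'
    return env

# A loop counter initialised from a literal stays an Integer...
env = deduce([('i2', 'i', 'one')], {'i': 'Integer', 'one': 'Integer'})
print(env['i2'])   # Integer

# ...but one initialised from an untyped argument stays ambiguous,
# and the ambiguity flows into everything computed from it.
env = deduce([('i2', 'n', 'one')], {'n': 'unknown', 'one': 'Integer'})
print(env['i2'])   # unknown
```

A real compiler does this over a proper IR with many more types and operations, but the propagation-and-poisoning dynamic is the same.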

b) Register-driven vs. stack-based. I think sclang could fairly be described as a stack-driven language, where most operations happen on values stored as Slots on the program stack. Stack-driven interpreters have a variety of advantages, but they operate mostly on values that, as they reside on the stack, therefore reside in memory. Hadron takes pains to keep as many values as possible in CPU registers, saving values out to memory only when sending messages or in situations of register overflow. Registers are the fastest form of storage on a computer, and so the hope is that by keeping values in registers Hadron won’t face memory bandwidth limitations as often as sclang might.
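For anyone curious what the distinction looks like, here’s a minimal Python sketch (again purely illustrative, not Hadron’s actual design): the same expression, (a + b) * c, evaluated stack-style, where every intermediate value passes through an in-memory stack, versus register-style, where intermediates live in named slots standing in for CPU registers.

```python
# The same expression, (a + b) * c, in two interpreter styles.

def run_stack(a, b, c):
    """Stack style: every intermediate is pushed to and popped from
    an in-memory stack."""
    stack = []
    stack.append(a)                          # PUSH a
    stack.append(b)                          # PUSH b
    stack.append(stack.pop() + stack.pop())  # ADD
    stack.append(c)                          # PUSH c
    stack.append(stack.pop() * stack.pop())  # MUL
    return stack.pop()

def run_register(a, b, c):
    """Register style: intermediates live in named slots (stand-ins
    for CPU registers); nothing touches a stack in memory."""
    r0 = a + b    # ADD r0, a, b
    r1 = r0 * c   # MUL r1, r0, c
    return r1

print(run_stack(2, 3, 4))     # 20
print(run_register(2, 3, 4))  # 20
```

Both produce the same answer; the difference is how much traffic goes through memory along the way.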

c) JIT compiling to machine code. From a certain point of view sclang is also a JIT compiler: it compiles input code down to virtual machine bytecode, which is run on the VM interpreter. Hadron takes this a step further by generating host machine code. In theory this should be faster from an instruction-cache perspective: straight-line Hadron code will not branch, whereas sclang has to go through a jump table after every opcode.
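A toy sketch of that difference (Python again, purely illustrative): a bytecode interpreter branches through a dispatch table once per instruction, whereas compiled output is just the operations laid out in sequence with no dispatch at all.

```python
# Bytecode interpretation vs. "compiled" straight-line code for (a + b) * c.

def interpret(bytecode, a, b, c):
    """Dispatch loop: an indirect branch through a table per instruction."""
    regs = {'a': a, 'b': b, 'c': c}
    table = {
        'add': lambda d, x, y: regs.__setitem__(d, regs[x] + regs[y]),
        'mul': lambda d, x, y: regs.__setitem__(d, regs[x] * regs[y]),
    }
    for op, dest, x, y in bytecode:
        table[op](dest, x, y)   # the per-instruction branch
    return regs['r1']

def compiled(a, b, c):
    """What JIT output resembles: the operations, nothing else."""
    r0 = a + b
    r1 = r0 * c
    return r1

program = [('add', 'r0', 'a', 'b'), ('mul', 'r1', 'r0', 'c')]
print(interpret(program, 2, 3, 4))  # 20
print(compiled(2, 3, 4))            # 20
```

The dispatch branch is cheap individually, but it sits between every pair of operations and is hard for the CPU’s branch predictor and instruction cache to love.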

Now for the bad news: I haven’t done comparisons yet, but Hadron does quite a bit more work on input code than sclang does, so intuition suggests that compilation times may be noticeably slower for Hadron than they are for sclang. This could be a real problem, particularly for live coding use cases, where the performer may care a great deal more that their snippets are executed quickly after being sent to the interpreter than that those snippets are heavily optimized. I want to include some options users can set to disable some optimizations, but there’s a baseline amount of processing required just to lower code from SuperCollider input down to machine code, so disabling optimizations might not be enough to make compilation speeds comparable.

Also, as I stated earlier, it’s actually easy to conceive of code for which points (a), (b), and (c) are either neutral or possibly even bad for performance. For example very branch-heavy SuperCollider code with a lot of method calls is likely to reduce the impact of all three optimization approaches.

The sclang interpreter has been worked on by a lot of smart people for a very long time, and a lot of optimizations have already been applied to it. Furthermore, it’s compiled C++ code, built ahead of time by an optimizing compiler (MSVC, Clang, or GCC) written by large teams of compiler experts, and that compiler can take all the time it needs to generate optimized code for the sclang runtime. It’s a tall order to try to build something that can beat that, and I have a lot of respect for things like the Second System Effect that might come into play here as well. So I’m hopeful, and I have some ideas for design approaches that may have merit. But we’re a long way from even being able to characterize Hadron’s runtime performance, and even further from any sort of credible claim of it being “faster.”

Sorry for the long screed! I want to ensure expectations are set correctly here, and also to show appropriate respect for the work of the sclang developers. They’ve built a great piece of software, no doubt about it.

2 Likes

In my system, there’s a lot of support code that is compiled only once but executed many times (which would be great to optimize as much as possible), and there are instructions issued in performance (which will run once and probably never again).

It would be useful for live coding to profile what people are actually doing. My LC system compiles expressions into data structures – if I dumpByteCodes on a compiled statement in my system, there will be a lot of Something.new and put, and some glue (except passthrough expressions, which do generate closures). Pbind is similar – the key-value pairs are just data. A Pfunc, OTOH, is likely to run many times (so closures should probably enable more optimization, even in an interactively submitted block).

hjh

Thanks for this additional context, I think it will be super interesting to work with Hadron on these platforms!

I am now wondering if there might also be opportunities for Hadron that make use of some form of cross-compilation toolchain. In the case of Bela, we frequently use distcc for doing compilation on a non-embedded machine (Using the `distcc` distributed compiler with Bela · GitHub), and this has also been used with SuperCollider (Cross-compiling SuperCollider | TAI Studio).

Yeah, so immediately I think this all sounds great, but I wanted to ask if there are any RT safety/performance issues? I can think of lots of use cases (large-scale number crunching, pre-optimising demanding functions that will be extensively reused), but of course part of the reason that SC performs the way it does is that it’s designed for RT. I know people have struggled sometimes with that aspect when implementing other SC clients, especially within general-purpose languages. Anyway, this sounds very exciting, Luke!

So far, for my own limited use, I have no requests for new features in supercollider.

I am often upset by software projects that decide to make some sort of great leap forward instead of incremental improvements. I highly value backward compatibility, i.e. old code should always continue to work. One of my heroes in this is the TeX typesetting system, whose current release is 3.141592653. A possibly more relevant example would be JavaScript, where a lot of features have been added over the years without breaking anything old (as far as I know; I’m only a casual JavaScript user). There is some discussion of improving the documentation in the previous comments, which I think is very important.

In summary: my vote would be for an effort toward complete documentation, coupled with comprehensive testing of examples from that documentation, which would serve as a permanent compatibility reference.

2 Likes

I’m interested to see this happening too. I hope it goes very well!

And yes, careful attention to latency seems to explain many aspects of SuperCollider’s design.

In some contexts latency concerns seem to call for multiple compiler tiers?

(C.f. the “new non-optimising JavaScript compiler”, https://v8.dev/blog/sparkplug)

I’m also a little bit curious how the sclang argument/arity semantics, which seem intricate to me, affect making things very fast?

f = {arg x=1, y; [x,y]}
f.value == [1,nil]
f.value() == [1,nil]
f.value(3) == [3,nil]
f.value(x:3) == [3,nil]
f.value(3,2,1) == [3,2]
f.value(z:1,x:3) == [3,nil]
-3.abs == 3
-3.abs() == 3
-3.abs(1) == 3
-3.abs(x:0) == 0
-3.performList(selector: 'abs', arglist: [1,2,3]) == 3

I know the usage is very idiomatic:

[1,2,3].collect({'x'.postln}) == ['x','x','x']
[1,2,3].collect({arg i; i * i}) == [1,4,9]
[1,2,3].collect({arg i, j; i * j}) == [2,4,6]
[1,2,3].collect({arg i, j, k = 3; i * j * k}) == [0,6,18]

Is this a non-issue for a compiler? (Lisps are a little bit like this and they can be very fast!)

Best,
Rohan

Ps. I notice .perform is like .if about keyword arguments: it doesn’t like them…

-3.perform(selector: 'abs') // => error

This is a bug for sure; it seems the failure to match the keyword arg doesn’t pop values off the stack when it should.

hjh

I have… an idea, which is more of an ideal vision, of how the docs could be organized:

Client / Server / SC / X

  • Client: Core, Scheduling, Live Coding, Utilities
  • Server: Bus, Buffer, Ugens, Server
  • SC: Development, Platform, SC4, Walkthroughs
  • X: Quarks, Plugins, Interfaces, Qt/GUI

…refined to ensure all existing doc entries have their place

Thoughts?

Your work on Tidal documentation sounds fantastic! I also echo your and several other people’s sentiments here that a documentation overhaul does not need a version 4.

There is a tutorial on how to contribute to the documentation on the SuperCollider GitHub wiki: the entire Developer Reference > General Workflow is great, with the Creating Pull Requests section specifically describing branch naming conventions, formatting pull request messages, etc. Creating pull requests · supercollider/supercollider Wiki · GitHub.

Eli Fieldsteel also put together a great tutorial on this:

It might be good to break out documentation refactoring into a new thread. There’s the one that @Rainer started recently (Writing Help: Workflow), though it seems geared toward a few specific pages, or we can start a new one specifically for refactoring and reorganizing on a larger scale…

1 Like

So I discovered something interesting just now while actually navigating the help documentation for your plugins extension. In the help you link to your GitHub page, and, well, the link works: the IDE help browser displays GitHub and renders the markdown. Navigating to GitHub links found at the bottom of the page was also possible. Likely not helpful on its own; I just happened to notice this today and thought of your post. Not sure if others know about this.

2 Likes

I would love to see SC4 be documented in markdown.

I’m not entirely sure SC4 is a good idea, but I do think making SC3 nicer is an excellent one!

So, if people are working on the help system, and are collecting thoughts, here are mine:

Would it make sense to implement more of the usual Smalltalk infrastructure?

It’s not very complicated, and it provides a super clear design model to follow.

Also I think some aspects of the current help system are a bit confusing.

For instance, how it treats all the classes the same way.

There are lots and lots (and lots) of UGen classes, but they basically all have the same structure and follow the same protocol.

It doesn’t seem to make sense to document SinOsc and String (for instance) using the same template.

I think it might be clearer if the “Language” documentation lived in the class/method tree (as in a standard Smalltalk system) and was accessed using standard “Browser” and “Finder” tools.

Then the “Help” could be a separate system altogether, “Help” files being more like little essays on what something does and how to use it.

At which point, of course, you could just resurrect the old SC3 help system!

For people who don’t remember, SC could formerly read and display and edit rich text files.

So as well as ordinary plain text .scd files you could open .rtf files, in the editor.

The help files were just .rtf files that followed some simple conventions, and lived in a known place for looking up.

You navigated the Help system like a sort of implicit wiki, following references to other help files.

Because they were just ordinary files in the ordinary editor all of the ordinary editor commands worked in them, for running and editing examples, and so on.

Also you could annotate the help files as you liked and save the changes.

It was a very nice literate programming system!

I miss it, it would be lovely to have it back…

Markdown is nice too, but it requires an edit/compile/render/view cycle, and a mode switch.

It seems Qt has a rich text editor now? So the old style might be feasible?

My understanding is the old system was discarded because it was too hard to get it to work anywhere other than on Apples.

Maybe things have changed?

3 Likes