Keeping SuperCollider evolving with minimal impact on users’ work

Tbh, hooking up OSC-enabled interfaces to SuperCollider is (fairly) easy already. The problem with such an approach is that if the existing OSC UIs do not provide what you want, you now have to learn a completely new system for creating UIs and sending OSC.

Last summer, I did a quick’n’dirty experiment with AvaloniaUI on Linux (think C# and WPF, but it works truly cross-platform). Not much trouble there, until I wanted a knob control and suddenly found myself having to learn the ins and outs of Avalonia to figure out how to create user-defined controls (interesting stuff really, but far from trivial for a beginner in the C#/Avalonia ecosystem).

In my brainstorm, the OSC GUI app is like a canvas where you put in your knobs / faders / sliders and set the values. An already existing tool; no need to program anything here. Maybe there is also a graphical patchbay where you can connect the outs from that OSC tool into the ins of the synthdef (using some exposing / metadata mechanism). No idea if something like this is realistic from the SC side.

(Zynaddsubfx is doing something with an OSC port in JACK: https://github.com/zynaddsubfx/zynaddsubfx/blob/806674849b276a560d2996f25ba0ddbfe50838ef/src/Nio/jack_osc.h#L81)
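
For context on how small the receiving side already is in sclang, here is a minimal sketch. It assumes a hypothetical external OSC app sending a normalized 0–1 value to the address /knob1 on sclang’s default port 57120; the app, address, and scaling are made up for illustration.

```
(
// Minimal receiving end for an external OSC controller app.
// Assumes a hypothetical app sending [/knob1, 0.0 .. 1.0] to port 57120.
s.waitForBoot {
    SynthDef(\oscDemo, { |out = 0, freq = 440, amp = 0.1|
        Out.ar(out, SinOsc.ar(freq) * amp ! 2);
    }).add;
    s.sync;
    ~synth = Synth(\oscDemo);
    // Map the incoming normalized value onto an exponential frequency range.
    OSCdef(\knob1, { |msg|
        ~synth.set(\freq, msg[1].linexp(0, 1, 100, 2000));
    }, '/knob1');
};
)
```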

Did you know SC4 already came, went, and is in the process of being born again from the ashes? (s/o @lnihlen, and thx @semiquaver for the heads up!)

Lucile made an appeal for feedback and support in v1, but it seemed the community wasn’t ready to embrace it at that time. For those ready for a revolution, there’s your opportunity :wink:

To the contrary, @jamshark70 has suggested that agreeing on “standard” usage (in terms of docs and guiding new users) would bring more focus to the core. The quarks will still be there for all those excited by the expressive possibilities and dialects that SC has.

Both Scott and Nathan have explained well that “officially supported” quarks (dev-maintained) are not at odds with a smaller core; rather, they’re part of the solution.

In the above post, Scott has given the roadmap for how to do it. Lucile has even made a pretty picture (in addition to a great explication of the problem!).

I mention this because unless the nuance is embraced, a false dichotomy will keep us chasing our tails. The solutions are there; what remains is a lot of legwork…

2 Likes

9 posts were split to a new topic: James McCartney’s new language

I’ve heard a lot of things… but so far, just trying to maintain my own little contribution without killing off the few remaining artists using those tools is already too much, so I am just waiting to see how this will go.

(and the extensive verbosity of the Hadron README gives me a lot to think about :slight_smile: )

OK, I found this, and it is a little more explanatory than the GitHub repo.

indeed.

1 Like

For a person who puts personal time and effort towards maintaining SC3, reading a comment like that feels rather disrespectful. I know that it personally affected at least one other developer.

There are many valid points in this thread, and the way forward is not clear. The forum is useful for some things, but having a verbal conversation on these matters might be more beneficial for moving things forward. I’d really encourage everybody who wants to move SC forward to come to the dev meetings! I’d be more than happy to do my best to find times that accommodate folks, if the current weekend times don’t work.

7 Likes

If there’s going to be any project focusing on backwards-compatible, incremental advancement of SuperCollider, it will require that (1) we can break a given piece of SC while leaving other pieces unchanged, (2) we can support several versions of SOME pieces where there is demand (and people to maintain them), and (3) we can provide reasonable paths to update user code to newer versions. This is not “SC3” nor is it “SC4” - it’s “SC4, SC5, SC6…SC99” - no flashy all-at-once redesign, just incremental forward progress that walks the line between tender and ruthless.

I proposed iterating Quarks into a proper versioned module system that would support the kind of piecemeal breaking changes I’m thinking of. I would consider this a hard requirement for making breaking changes, if we want to avoid simply forking development and the community in ways that are gonna be very disruptive. I would be very happy to turn this into a more concrete and detailed formal roadmap, if it would be helpful to organize work on this.
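
For what it’s worth, a small seed of this already exists: the current Quarks class can pin individual quarks to git refs and save/restore the installed set per project (if I’m remembering the API correctly). A hedged sketch using today’s interface, with placeholder quark name, tag, and path; the proposal above would extend this kind of pinning to the runtime and core library modules themselves.

```
// Pin a quark to a specific git tag (name and tag are placeholders).
Quarks.install("SomeQuark", "tags/2.1.0");

// Record the full set of installed quarks (with refs) for this project...
Quarks.save("~/myProject/quarks.txt".standardizePath);

// ...and later restore exactly that set, e.g. on another machine.
Quarks.load("~/myProject/quarks.txt".standardizePath);
```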

IMO this work (or something equivalent) has to be done before any other “future of SC” conversations can bear fruit - there’s a lot of very clear and straightforward engineering required here, and I’d estimate moving this forward is going to be a 6+ month effort - which is plenty of time to continue discussing exactly how to break apart and push the class library forward. Even when this groundwork is in place, there are some very obvious and non-controversial class lib re-organization tasks (e.g. a separate module for GUI) that will be great starting points for working out the kinks of the “SC-next modularization” workflow. Even the most radical “SC-next” version, full of cool breaking changes, will still need a proper versioned module system, else it will run aground on the same problems we have now.

I’d gently encourage anyone invested in this topic to focus on these immediate engineering challenges, and let the more challenging topics simmer for a while. These near-term projects are interesting and challenging, and have some open questions, but are nonetheless pretty solvable. They are also achievable with the engineering resources and expertise we have right now - no utopian transformation or huge influx of contributors is needed, just some proposals, pull requests, some polite discussion, and some people writing code to make it work. We will learn a LOT from doing this work - we will probably learn things that will invalidate much of the “future of the class library” conversations we’re having now anyway.

6 Likes

Regarding the specific topic of this thread (impact on work), this can be very concrete. Here’s a proposal off the top of my head - obviously up for debate. This breaks historical conventions a little, but I think it is coherent and makes clear commitments. Some of this is basic semver, but it probably helps to spell it out for SC specifically.

  1. Any major version of the SuperCollider runtime (e.g. C++ internals + CORE class library) should be backwards compatible with code written against that version. So, sclang 4.6 should be able to run code written against 4.0 → 4.6 with no breakages or significant changes in behavior or audio, apart from things that are obvious bugs. I’d say small visual or functional differences in UI are acceptable, broken code or different audio is not (except in case of fixing obvious bugs).
  2. New major versions (e.g. SC4 → SC5) can break compatibility - in fact, breaking compatibility is the DEFINITION of a major version.
  3. Breaking changes should be clumped together - major changes should be collected across the runtime and core class library modules and released e.g. as SC5 with a slow periodicity (1-2 years), else too much dependency management is pushed to users.
  4. Code can be deprecated at any time. Deprecated code should be removed in the next major version, unless there are extenuating circumstances. (The class library’s existing deprecation mechanism covers this case; see the sketch after this list.)
  5. When a new version is released, older versions can be frozen, apart from back-porting fixes that are deemed important. More active maintenance of parallel versions is feasible, but it should be considered a labor of love. NEW features for old versions should be discouraged as much as possible (though not banned), and would definitely need to be de-prioritized in terms of core dev team support.
  6. There should be a concrete time-based support window, meaning: code written on Oct 15th 2023, on the current version of SC, on a current operating system version, should be easily runnable in that version of SC by a non-dev user for X years. This does not have to be forever, and it does not entail that code can make use of any new features.
  7. Large breaking changes can be mitigated with compatibility quarks (e.g. a new Buffer class can be introduced, and the old Buffer class can be moved to a compat quark to allow “old” code to more easily be upgraded).
  8. Some “future” changes (e.g. UGen fixes, breaking class lib changes) can be added to a future quark, to allow incremental adoption and testing, and soften the boundary between major versions.
  9. User projects should be able to pin concrete dependencies to the core runtime and core / non-core modules and quarks, and be able to assemble a coherent and working build automatically and without technical knowledge, if that configuration is inside the support window (e.g. NOT if they’re depending on a 15 year old runtime version).
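
On point 4, the sketch below shows how the class library’s existing deprecation mechanism could carry that policy. The class and method names are invented for illustration; the file would need to live in an Extensions folder and be compiled with the class library.

```
// Invented example showing the existing deprecation pattern (point 4).
// Save as a .sc file in your Extensions folder and recompile to try it.
MyThing {
    // the replacement
    duration { ^2.0 }

    // the deprecated method: post a warning, then forward to the replacement
    length {
        this.deprecated(thisMethod, MyThing.findMethod(\duration));
        ^this.duration
    }
}
```
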
6 Likes

This would be a huge help :pray:, either as a Discussion or RFC over on GH.

It seems pretty clear that this is a necessary project to clear the way for other large-scale future development.

2 Likes

I’m glad to see @scztt taking the lead in organizing the dependency and backward compatibility situation. I read a lot of good ideas in there.

Couple of points:

a) Yes I’m working on a new version of Hadron, written in Rust. It’s early days but I have some ideas about how I could get some help for those interested. I’m very busy at the moment but will try to write up some documentation (if it’s not too verbose! That’s the first time I’ve ever read about someone complaining something of mine is overdocumented! ;))

b) So, one thing about my (now >1 year old!) modular class library proposal - reading the reactions to it now, I think using the word “core” was probably a poor choice. I think most people are reading it as a value judgement about what is “essential” for the “average” SuperCollider developer. But that’s not at all what I meant. SuperCollider allows for many, many different workflows and usage styles. Folks may agree or not on what idiomatic sclang looks like, but as an interpreter/compiler writer, I don’t have that luxury. Any valid sclang interpreter needs to accommodate all syntactically and semantically valid programs.

Instead, my definition of “core” is explicit and technical. It turns out there are very few classes, and very few methods on some of those classes, that are required for the compiler/interpreter to work correctly. I think “kernel” is a better name for those things that must forever be distributed with the language. I’m talking about literal data types, things like String, Symbol, Array, Function, and Integer, plus the things the interpreter uses to manage its own internals, like everything in Kernel.sc, which includes Class and the Interpreter itself. That’s what I meant by core/kernel: basically, what you’d get if you pared the SC class library down to the absolute minimum possible without breaking the compiler.
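
A quick way to see why these particular classes can’t be split out: every literal in source code compiles directly to an instance of one of them, and the interpreter’s own machinery is made of objects defined in Kernel.sc.

```
// Literals compile directly to instances of the kernel classes...
"hello".class;        // -> String
\freq.class;          // -> Symbol
[1, 2, 3].class;      // -> Array
{ |x| x * 2 }.class;  // -> Function
42.class;             // -> Integer

// ...and the interpreter manages itself through classes from Kernel.sc.
thisProcess.interpreter.class;  // -> Interpreter
Integer.class.class;            // -> Class
```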

c) The big objective of my proposal was to break the class library down into smaller, more manageable components, so that we could all have an easier time understanding it and evolving it. I think it’s too much to ask a single person (or even a small committee) to try to re-design the entire class library. Rather, I’d like to see it broken down by areas of responsibility and interest. Again, not a value judgement on importance. I also wanted to give the maintainers a lever to remove untested and unmaintained code from their maintenance burden. There’s so much cruft in there! So, why not find someone who depends on a given piece being there, and get them to maintain it? If you can’t find anyone who depends on it, then you can delete it.

d) Super stoked to hear about other folks developing new music languages! I think that’s 3 or 4 I’ve heard about now by my own counting. Yay for a world with more, different takes on music programming!

e) I don’t think SC3 is dead, far from it! SC3 will only die when nobody cares about it anymore. I think SC3 is suffering from severe gridlock brought on by technical debt, maintainer burnout, and a monolithic architecture, but folks are motivated to find paths forward. I’m going to publish more of my “extensive verbosity” on hadron-sclang.org, but the new Hadron roadmap I’ve been envisioning is somewhat compatible with @scztt’s proposal, and could run in parallel. Essentially, I want to version 3 different pieces of the SuperCollider interpreter:

  1. The compiler, also sometimes called “the frontend”, which produces bytecode for execution on a virtual machine, either Ahead-of-Time (AoT) when compiling the class library or Just-in-Time (JIT) when interpreting code (see the small example after this list),
  2. The virtual machine (VM), which executes that bytecode, and
  3. The “runtime environment” which supports the VM - think things like the garbage collector and all 700+ primitives in the class library (more on that in a minute).
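
You can already peek at the seam between (1) and (2) from within sclang, because the bytecode the compiler produces for any function can be dumped and inspected:

```
// Dump the compiler's bytecode for a small function;
// this is what the virtual machine actually executes.
{ |x| x + 1 }.def.dumpByteCodes;
```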

By versioning each piece, testing each individually, and documenting each one, we could end up with some very interesting options for moving stuff forward.

For instance, I’m working on (1) right now, the frontend, in Rust. My goal is to produce VM bytecode that could be suitable for consumption by the current SuperCollider VM in sclang. That means we could start testing and deploying the Hadron frontend much earlier. It also means that we can test Hadron’s frontend against sclang’s.

This modular design exposes some interesting possibilities, like being able to pretty freely mix SC3 and SC4 code, on the file or even smaller level, or automatic translation of SC3 into SC4 code, or other fun things like new language frontends for the virtual machine.

For (2) I have some ideas for a byte-for-byte re-implementation of the SC3 VM, except for maybe that thing I complained about before with exception handling. I’d like to fix that in the existing SC implementation, too. More on that later, as well.

(3) is the sticky wicket. The monolithic primitives inside of SC3 are a real challenge to think about. There are over 700 primitives now in sclang. They do all sorts of stuff, from core language functions like allocating memory for objects, to calling exotic math functions, to MIDI, file, and network access, and the Qt UI system.
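
For readers who haven’t looked under the hood: on the sclang side, a primitive is just an underscore-prefixed name in a method body that hands control to a C++ function compiled into the interpreter. The snippet below paraphrases the pattern used throughout the class library (ArrayedCollection’s constructor is essentially this); it’s an excerpt-style illustration, not new code to install.

```
// General shape of a primitive-backed method in the class library:
// the _PrimitiveName line dispatches into C++ inside sclang, and the
// code after it only runs if the primitive fails or is missing.
ArrayedCollection {
    *new { arg maxSize = 0;
        _BasicNew
        ^this.primitiveFailed
    }
}
```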

I think the primitives are holding the language back more than any other single problem. They’re not so easy to abandon; they represent a lot of value to sclang. I think @jamshark70 was saying on another thread that primitives are where the work really happens in sclang, and I 95% agree with him. The problem with the primitives is that they are completely non-portable. If Hadron made really different choices with its compiler and virtual machine, I would have to re-write all of those primitives.

(3) is also tough to talk about because it’s back in the class library, and so gets mired in some of those discussions about class library reorganization. As if the problem weren’t complex enough, there’s a whole other substrate of complexity on the C++ side of the primitives that will also require attention, and it impacts some of the design choices folks have on the SC side.

Sure, we could break all of that with SC4, but I think then we’re really losing out on a ton of work that has gone into adding important functionality to the primitives over the years. I’d like to see more of a gentle on-ramp to the next thing. What if we could do that slowly? Port some things to SC4, deprecate some others, keep some things legacy but maintain a compatibility mode. I’m thinking about the break from Python 2 → Python 3. There will always be breakage, of course, but I’d rather we take our users along with us and carve a path forward for everyone.

I’ve been going through the primitives and breaking them into buckets representing the different needs they address. I’m going to propose that we slowly evolve the primitives into a much smaller required set inside the kernel, but again I need either help or more time to finish the write-up.

So as you can see, lots of plans and thoughts, and I was caught a bit off-guard by this attention to my old proposal and new activity. Happy to talk more 1-1 with folks who have questions or want to contribute, but I don’t monitor the forums actively so if you really want to get my attention either DM me or @ me in the thread. Otherwise, I’ll be back in a while with more articulated thoughts and plans for Hadron.

Toodles!

Lucile

8 Likes

I certainly didn’t intend to imply that SC3 is currently dead (on the contrary) or that the efforts that go into it are not appreciated or highly valuable - more that, from a user’s perspective, it would be beneficial to accept that something needs to change in the future and that this will break backwards compatibility, meaning some works made in SC3 might not run at some point. Poor choice of words on my part!

I meant it more like…
SC3 is dead, long live SC4, and 5, and 6, and ...

Sadly, I see the same thing happening to a lot of software. Graceful ageing is difficult, partly because of progressing (changing) technology and partly because of the discrepancy between users’ wishes and developers’ wishes.

There may be ways out.
Software of this kind should not depend on an IDE. It has to be usable with the simplest tools. If IDEs are built, there should be language independence in doing so.

The difficult one: the DSL scripting language should be usable in various ways, both interpreted and compilable. That way a user does not have to deal with C++ for “something extra”. From user scripts, libraries can be built, compiled for speed, etc. Such a thing could, in theory, be done with the macro systems of Lisp or Nim.

A related comment (though it doesn’t de-mud any waters) – I’m at ICMC in Shenzhen. The Tuesday morning keynote was Pamela Z. During Q&A, someone asked about preservation of works. In her answer, she admitted that she has not upgraded to Max 8 – she still uses Max 5(!) onstage because of some externals that aren’t compatible with 8. So it’s not only a question for us – it’s everybody. Progress leaves some things behind… but you can’t freeze the software because toolchains and OS libraries keep changing.

She may be an uncommon case of a live electronic performer with a decades-long body of fixed-form work to maintain (where most of us a/ have a shorter history or b/ abandoned earlier works or c/ updated earlier works). Still, that was a remarkable comment.

hjh

2 Likes

When I first met Clarence Barlow in The Hague, he was actively using his Amiga/Atari programs and delving into simulators. Over time, he rewrote certain things, though I believe not all.

The availability of free software and platforms like Linux allows us to rethink our perspective on long-term software archiving. With proprietary software, such an option is often not even on the table.

http://www.musikinformatik.uni-mainz.de/Autobusk/

Mike M. and jrsurge had an interesting discussion about this in a dev meeting.

Apparently, there’s a point in time where SC’s git history was modified in such a gigantic way that it prevents understanding the changes easily.

jrsurge pointed out that modifying a file and moving some of the changes into a new file inside the same commit (I’m really bad at git, those are probably the wrong terms, please correct me if you see where I’m wrong) prevents you from seeing some of the modifications, and that this should be avoided. New file > commit > modifications > commit seems to be the right way to make the changes visible.

The git history is an invaluable resource. It allows one to trace the philosophy behind the software’s development. It also allows fetching older versions of the software. If the git history is clean, I think it would allow motivated people to reconstruct the software step by step without including breaking changes (maybe Pamela Z could slowly ‘get back to Max 8’ while retaining compatibility with her old externals).

jrsurge advocated making commits as small as possible, and separating them when they include several steps (e.g. if I need to install something in order to fix something, don’t push everything at once: install > commit > fix > commit). Sorry if this is unclear.

1 Like

There was a mass reformatting to unify whitespace style, meaning that for a non-trivial percentage of the C++ code base, the most recent change to a line is a “Clang Format” commit. (To check, I randomly picked a lang cpp file – PyrArrayPrimitives.cpp – and it looks like well over 90% of the git blame reports this commit.)

GitHub’s “blame” handles it pretty well though, AFAICS – it’s an extra click to go earlier than that point in the history, but (most of? all of?) the history is still there.

This is true. Don’t “git mv” (rename or relocate a file) and change contents in the same commit. “git diff” should show 100% identical contents for a “git mv” commit or problems will follow later. (But the big reformat didn’t do this.)

hjh

Relevant: Linux devs talking about “how not to do changes:”

Since v2.23, git blame supports --ignore-rev to hide giant commits like that, and before this there were CLI tools built on top of git to handle it. This and older formatting commits (e.g. the many “remove trailing whitespace” commits) could be added to a .git-blame-ignore-revs file at the base of the project.

I don’t know the details of the other issue, but -M/-C/-CCC may be able to help track down moves/copies.

I agree it’s good to approach your git commits with discipline and not mix too many operations together. There has to be some flexibility, though, since many people cannot perform basic git tasks beyond “commit all changes”.

It seems that every time I try to rejoin the community around this thing I created, I am made to feel that my contributions are no longer wanted. E.g., I tried to argue against the reformatting of the code, and was told that it had already been decided and my opinion didn’t count. Now this:

Hello,

This is an automated message from scsynth to let you know that your post was hidden.

Keeping SuperCollider evolving with minimal impact on users’ work - #26

Your post was flagged as off-topic: the community feels it is not a good fit for the topic, as currently defined by the title and the first post.