SuperCollider 4: First Thoughts

I totally agree.
I like SC precisely because it isn’t a technicolor slot-machine DAW with deeply embedded workflows that inevitably steer the user toward making generic four-on-the-floor techno bangers.

Also, the GUI stuff is there in SC, so when folks feel they need it, it’s available to them. And Ndef-style “throwaway GUIs” are nice for that kind of thing, I think.

Enough of this negative style rebuttal tho!
I also want to say that I am totally about increasing all kinds of creative freedoms of expression to the absolute maximum!

I did a course for SKH a while back called SC4Reaper. In that material I walk through setting up SC and Reaper to be BFFs and to work together in different ways, the hawtest of which I feel is OSC control. For me that is the ticket. Controlling Reaper from SuperCollider using OSC. This way I feel like I am getting the best of two worlds.
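
To give a taste, here is a minimal sketch of what that can look like (assuming REAPER’s stock OSC control surface, set up under Preferences → Control/OSC/web with the Default.ReaperOSC pattern file and listening on port 8000):

~reaper = NetAddr("127.0.0.1", 8000); // the port configured in REAPER

~reaper.sendMsg("/track/1/volume", 0.7); // move track 1’s fader (normalized 0..1)
~reaper.sendMsg("/record", 1);           // punch in
~reaper.sendMsg("/stop", 1);             // stop the transport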

5 Likes

As a side-note: an interesting intersection of DAW meets programming seems to be the Radium tracker (https://users.notam02.no/~kjetism/radium/). I’ve just started looking at it, but it seems very powerful, with built-in live Faust for synthesis (in addition to loading soundfonts and supporting MIDI), scriptability in Scheme and Python 2 (the programming part), and a user interface for entering notes and automation curves (the DAW part). Independent of how cool it all looks and how flexible it probably is, I don’t see something like this as a desirable future for SuperCollider, since it imposes certain preconceptions about how music is supposed to be organized (it’d be cool if they could also include SuperCollider in addition to Faust, of course :)).

3 Likes

Part of the reason for creating Mellite in the first place was to be able to program like in SC, and also to be able to arrange materials like in an electroacoustic composition, thus resulting in a DAW/programming crossover (timeline editor), even though it is not (and doesn’t aim to be) as comfortable and complete as a native DAW. So you can place audio-file snippets and sound objects on a timeline, but still open them to code the actual DSP.

1 Like

Reaper’s scripting is not limited to DSP; it also allows full-fledged DAW scripting with EEL2, Lua, and Python: https://www.reaper.fm/sdk/reascript/reascript.php

You can also add new functionality with C/C++ extensions: REAPER | Extensions SDK

REAPER also offers custom API extensions for VST plugins, so a plugin can control the DAW itself.

For an extreme example, have a look at the “Playtime” plugin, which turns Reaper into something like Ableton Live: Playtime - Home

I totally agree with that. DAWs are fine for doing some standard tasks smoothly (I didn’t want to be forced to do mastering in SC) – but they are structured in a way that tends to exclude many things and strongly suggest others (e.g. the preference for sequential FX processing in most of them), not least with aesthetic implications (a preference for bars/measures in many cases). In other words, they tend to narrow musical thinking – in contrast to the super-abstract and versatile concepts of a programming language like SC.

Very likely!

I’m a bit surprised by that observation; polls partially confirm, partially contradict it (https://sleepfreaks-dtm.com/en/dtm-materials/2020-daw-ranking/, GitHub - smadgerano/DAW-Usage-2020: Results of Reddit polls for how people are using their Digital Audio Workstations). To my impression, at universities and electronic studios it has become a quasi-standard in many places, especially for people working with 3D audio, as it’s very supportive in this regard.

3 Likes

Side note while we’re here - NRT is one area where SC falls short of DAW users’ expectations. In DAW-land, the ability to render or freeze tracks, compositions, or items, and to pre-process FFT or other info in the background, is standard. Recording in SC is not so bad (the interface is still a little clunky), but NRT requires a different syntax and is opaque - Ctk and ScoreClock are both possible ideas to make this friendlier - it should be a low-friction operation.

And FWIW, REAPER is gaining ground in the audio-post world (where I earn the odd dollar). I drive it via OSC from SC for scoring.

@Rainer check out LNX_Studio (see the LNX studio poll results) or Mellite (GitHub - Sciss/Mellite: An environment for creating experimental computer-based music and sound art).

2 Likes

Perspective from a relative neophyte here… I’ve used SC for maybe four years, and I’m not a particularly heavy user nor by any stretch a power user. I am not a programmer by training, though I have done some programming in various engineering contexts. Some things that I have liked about SC3 that I would miss if they went away…

  1. The integrated IDE is immediately intuitive and useful. Adding extra steps for non-programmers to get a different IDE up and running seems like an unnecessary deterrent; I probably wouldn’t have screwed around with SC at all.
  2. There is a huge wealth of existing material, tutorials, and guides to the language. The Patterns guide alone saved me countless hours of grief and opened up tons of new ideas for me. Anything that is not backwards-compatible both wipes out existing work people have done and eradicates the value of an enormous amount of existing knowledge for educating new users.
  3. The idea of DAWifying SC is horrifying to me. A big part of the reason I like using it is specifically because it’s not a DAW–it’s not graphical, you don’t have to click around in a bunch of stuff, etc. It allows me to work in music without a strong visual component, so I can hear with my ears instead of my eyes, as they say, and I vastly prefer sequencing and arranging by typing than by clicking and dragging stuff.

The biggest thing I’d like to see changed is simply, like @jamshark70 said, better syncing capabilities between SC and other software. Right now I get SC output into my DAW by either MIDI or audio loopback software. MIDI timing is often horrible, because it runs language-side (as I have recently been educated), and audio cannot be synced to anything. Or at least I haven’t figured out how. So a lot of the time I record a bunch of stuff into my DAW and then use tempo detection to match the DAW tempo to the audio and whatnot, and while that is a workaround that gets the job done, it’s a pain in the ass and must be repeated every time I change the arrangement or whatever.
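
One partial mitigation I’ve seen suggested (an assumption on my part, not a guaranteed fix) is giving MIDIOut the same scheduling latency as the server, so outgoing MIDI is stamped on the logical clock instead of being sent immediately:

MIDIClient.init;
m = MIDIOut(0); // device index 0 is a placeholder; pick your actual port
m.latency = Server.default.latency; // match the server’s default 0.2 s

// sequenced MIDI then goes out with consistent timestamps, e.g.:
Pbind(\type, \midi, \midiout, m, \midinote, Pseq([60, 64, 67], inf), \dur, 0.25).play;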

Since I’m not a programmer by trade or by training, and my SC projects are not very large, I can’t speak to the value of changing the language; but I sure appreciate the existing language and I would miss it if it were retired.

9 Likes

I also agree. Keep SC slender and code-oriented. Code is less comfortable for those with little time on their hands, but more than compensates with its immense extensibility and clarity. A main advantage of SC is its power as a language to model many very different approaches to composition and performance. We should keep that at all costs.

Iannis Zannos

5 Likes

My 2 cents on new features I would like to see:

having non-realtime (NRT) mode fully working like the realtime (RT) mode.

Of course NRT works properly, but its language-side implementation is too verbose compared with RT (e.g. there is no SCTweet made using NRT, nor do we have an NRT {}.render comparable to {}.play), and some crucial features are missing (e.g. it is not really possible to interchange data back and forth between the server and the client). Moreover, although it supports patterns, IMO its general usage is too Csound-oriented.
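
To illustrate the asymmetry with a minimal sketch (file paths and values are placeholders; this is the stock Score.recordNRT route):

// RT: one line.
{ SinOsc.ar(440, 0, 0.2) * Line.kr(1, 0, 1, doneAction: 2) }.play;

// NRT: build a Score by hand and render it offline.
(
var def = SynthDef(\ping, { |out = 0|
    Out.ar(out, SinOsc.ar(440, 0, 0.2) * Line.kr(1, 0, 1, doneAction: 2));
});
Score([
    [0.0, ['/d_recv', def.asBytes]],     // ship the SynthDef inside the score
    [0.0, ['/s_new', \ping, 1000, 0, 0]]
]).recordNRT(
    "/tmp/ping.osc", "~/ping.aiff".standardizePath,
    options: ServerOptions.new.numOutputBusChannels_(2),
    duration: 1
);
)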

IMO, having more flexible, easy, and well-documented ways of blurring the boundary between NRT and RT (e.g. doing soft-RT analysis of an instrumentalist’s buffered performance, or playing samples from a big audio dataset while it is being analysed) would make SuperCollider the perfect playground!

1 Like

This is noted here: a new NRT mode with full OSC socket · Issue #279 · supercollider/supercollider · GitHub, but it was closed for unknown reasons (I guess because it was stale - I think it should still be an active issue).

1 Like

It needs someone to take the lead on implementing it. About 9 years and no one has stepped forward.

hjh

Well, I think there needs to be a distinction between “closed - solved / won’t solve” and “closed - for future exploration”. The comment suggests it’s part of a future project, but the GH project page doesn’t have long-term tickets either. :woman_shrugging:

1 Like

Perfect argument… a lot of excellent points have been made here.

There are (monthly?) developer meetings announced here - Developer meeting polls - and minutes show up around here somewhere as well. Seems like 3-5 folks, not always the same, attend. I’d encourage any of you in this thread with chops and time to jump in.

I’d be interested to know what the required skills are. I have literally zero knowledge of software development and also zero time at the moment to make a contribution, but can imagine taking part at some time in the future…SC has really changed my life and I guess it’d be nice to give back somehow!

Here’s a great vid from @elifieldsteel targeted at non-developers that shows how to make contributions (a simple correction of a typo in the help, in this case).

1 Like

Sciss (June 7):

fmiramar:

and some crucial features are missing (e.g. it is not really possible to interchange data back and forth between the server and the client)

This is noted here: a new NRT mode with full OSC socket · Issue #279 · supercollider/supercollider · GitHub, but it was closed for unknown reasons (I guess because it was stale - I think it should still be an active issue).

This is perfectly possible without changing the server. In fact, I have implemented it in (muttering) another language that starts with p, inspired by the NRTClock by Jonathan L., by changing the timing mechanism of the library to run in logical time only. The problem with implementing it in sclang is that it involves changes in low-level C primitives, and we are back to the development-support problem (note: I couldn’t do it at that low level in sclang). To have an NRT thread or subprocess in the server communicating with sclang would be a plus for sure.

But there are also other technical problems if you want to mix RT and NRT in the same interpreter session: the library keeps global state regarding resources on the server; for example, it is not the same to define a synth, bus, or buffer for RT and for NRT. That is solvable, but then there is yet another problem: running the same code in RT and NRT without modifications, given that timing and asynchronous behaviour differ between the two. So I opted not to mix them. In any case, it is work that requires some design; it is not as trivial as it seems.
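
To make the resource point concrete, a rough sketch (file names are placeholders): in RT the client-side allocator and the server’s reply hand you the buffer, while in NRT there is no reply, so IDs typically get hard-coded into the score.

// RT: the allocator picks a bufnum and the server’s reply confirms it
b = Buffer.read(s, "sound.aiff");

// NRT: the read is just another timed message, with a bufnum chosen by hand
~score = Score([
    [0.0, ['/b_allocRead', 0, "sound.aiff"]] // bufnum 0, fixed by us
]);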

1 Like

To have fully compatible code, I think you would need to modify the server (unless I’ve misunderstood you) to wait on ‘advance messages’ from the lang. For example, the NRT server sends a /tr message to the lang, and then must wait before continuing calculation to see whether any new work has been added. I think the pattern would be: the NRT clock sends a message saying the server can calculate up to the next time in its queue, but this can be overridden by return messages that generate new tasks.
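
Sketched in hypothetical sclang terms (neither /nrt_advance nor a lang-facing NRT socket exists today; the names just spell out the handshake above):

(
~nrt = NetAddr("127.0.0.1", 57117); // hypothetical socket of a modified NRT server

// the lang grants a horizon: “compute up to logical time 4.0”
~nrt.sendMsg("/nrt_advance", 4.0);

// if a trigger fires before that horizon, the lang can add work...
OSCdef(\nrtTrig, { |msg|
    ~nrt.sendMsg("/s_new", "moreWork", -1, 0, 0); // ...schedule something new
    ~nrt.sendMsg("/nrt_advance", 8.0);            // ...and grant the next slice
}, '/tr');
)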

2 Likes

muellmusik (June 7):

lucas:

This is perfectly possible without changing the server. In fact, I have implemented it in (muttering) another language that starts with p, inspired by the NRTClock by Jonathan L., by changing the timing mechanism of the library to run in logical time only.

To have fully compatible code, I think you would need to modify the server (unless I’ve misunderstood you) to wait on ‘advance messages’ from the lang. For example, the NRT server sends a /tr message to the lang, and then must wait before continuing calculation to see whether any new work has been added. I think the pattern would be: the NRT clock sends a message saying the server can calculate up to the next time in its queue, but this can be overridden by return messages that generate new tasks.

That would be a nice plus. What I did was to run everything offline. It generates the score in logical time, as fast as possible, and then renders by calling the server command. It’s like having:


s.options = ...;
s.boot;

// this happens at time zero

SynthDef(\a, ...).add;
SynthDef(\b, ...).add;

b = Buffer(...);
c = Buffer(...);

// always starts from time zero (absolute time); can spawn other timed routines or resources at different moments

fork {
    loop {
        Synth(\a, [\buf, b]);
        1.wait;
        // etc.
    }
};

// always starts from time zero (absolute time)

Pbind(...);

s.render(); // set file name etc.

Then you can run sclang script.sc and get the audio file back. Suppose that s.render runs the script in NRT, generates the score file, and then calls the server command that writes the sound file. The only difference is that everything is synchronous.
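
Purely hypothetically (s.render is not an existing method), I imagine such a call could reduce to writing the collected bundles and shelling out to scsynth’s NRT mode; ~collected is an assumed variable holding the bundles gathered in logical time:

Score(~collected).write("/tmp/score.osc");
"scsynth -N /tmp/score.osc _ out.aiff 44100 AIFF int16 -o 2".unixCmd;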

A problem I imagine with sending a stream of NRT commands is how to tell the server how much time to compute, since after that it will not be possible to go back. It would need a non-sequential rendering strategy for that, and that is something beyond my reach; an NRT score, by contrast, sorts timed events with ease.