SuperCollider 4: First Thoughts

Okay. Though for what it’s worth I’ve been doing exactly that sort of thing with SC for more than 20 years, and I never found that an issue. I always thought that was very much one of its strengths, especially compared to other RT systems.

If you run into a problem again post it here and tag me in. Maybe we can help! :slight_smile:

That has not been my experience, or apparently that of other people over the years.

Okay. Might be worth us taking a look at. VDiskIn seems a little more ‘modern’ than DiskIn, for instance, and I think there could be some low-cost improvements. At the very least a few notes in the docs…
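
For reference, streaming via VDiskIn looks something like this (a minimal sketch; the path and channel count are placeholders):

```
(
s.waitForBoot {
    // cue a stereo file for streaming; the buffer holds only a small window of the file
    ~buf = Buffer.cueSoundFile(s, "~/sounds/example.wav".standardizePath, 0, 2);
    // VDiskIn streams from the cued buffer; BufRateScale corrects for sample-rate mismatches
    ~player = { VDiskIn.ar(2, ~buf, BufRateScale.kr(~buf)) }.play;
};
)
```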

With my moderator hat on: I think disagreeing about this in the absence of specifics is probably not very productive.

If we have tangible examples there might be more to discuss, but otherwise, people have had their experiences, and there’s not much more to be said.

I’d struggle to discuss it here as it’s not a recent problem so I can’t remember the details particularly well. Given I did discuss it at the time, I’m not sure there’d be much benefit in me trying.

My experience generally with SuperCollider is that it struggles with lots of events in a very short period of time (unsurprising), or with creating lots of synths (1000+) in a very short period of time (the 10–100 ms range). And SCLang is really not very good when you’re working at that level (again, not hugely surprising).
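
To make that concrete, here’s a rough sketch of the kind of load I mean (the \grain def and the numbers are just illustrative):

```
(
SynthDef(\grain, { |freq = 440|
    var env = EnvGen.ar(Env.perc(0.001, 0.05), doneAction: 2);
    // keep amplitudes tiny: many grains will overlap
    Out.ar(0, SinOsc.ar(freq) * env * 0.002 ! 2);
}).add;
)
(
// request 1000 short synths, all landing within a ~50 ms window
1000.do { |i|
    s.makeBundle(0.05.rand, { Synth(\grain, [freq: 200 + i]) });
};
)
```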

None of this is a criticism particularly - it’s using SuperCollider in a way it was never meant to be used.

There are limits to what is possible in any RT system, and of course it is possible to overwhelm it. Is Csound (for example, since that’s where this started) better at this in RT mode?

I don’t think it’s the RT audio so much as the scheduling that’s the problem. The separation between synthesis and scheduling in SuperCollider is a weakness of the environment imho (albeit one that makes some of its strengths possible). SuperCollider is great for music that works in what are essentially fixed architectures. Music that requires flexible architectures that are essentially sequenced is far trickier to get right in SuperCollider.
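
To illustrate what I mean by the separation: the language decides when, the server renders the audio, and timestamped OSC bundles bridge the two. A trivial sketch:

```
(
// sclang decides *when*; scsynth renders; s.bind wraps each message
// in a bundle timestamped s.latency ahead to absorb scheduling jitter
fork {
    4.do { |i|
        s.bind { Synth(\default, [freq: 220 * (i + 1)]) };
        0.25.wait;
    };
};
)
```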

For me it’s just about choosing the right tool for the job. No tool is perfect at everything, and general-purpose tools are usually mediocre at everything. :person_shrugging: SuperCollider is a very good tool for certain things, not so good for others. Csound is an archaic tool that is surprisingly useful in certain situations, but I certainly don’t prefer it.

Though now the part of my brain that likes to try impossible things is wondering what livecoding would be like in it…

Such generalisations are of course questionable.

But you make specific points, and those are interesting, and maybe useful to explore. There are limits of course. If you’re talking about scheduling, then I’d guess the limits on the number of events in a given time period are network bandwidth, CPU, or the size of the OSC message queue?

If the cases you’re referring to are hitting one of those and that’s an issue for significant numbers of users / use cases, it would be great to figure out what the bottleneck is. Similarly, if Csound or another environment can perform better, it would be super instructive to figure out why/how.

Again, a simple real-world example would be super useful if you can dig one up! :slight_smile:

I’d be curious to hear about your experience if you do!

Don’t agree at all. Just to give one counterexample: I think live coding would be a prime example of a “flexible architecture that is essentially sequenced”, and scsynth seems to be the platform of choice.

In general, I’m a bit surprised by your experience because I would have thought that SC is rather excellent at dynamically creating (and destroying) large numbers of nodes in real time. Which language (or rather: audio engine) does a better job at this? Of course you will always hit a limit, but so will any other language. Keep in mind that you can also create your Synths upfront and run/stop them as needed (= the Max/Pd way), although that should really only be necessary in extreme cases.
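
To sketch that pre-allocation idea (with a made-up \voice def): create paused nodes once, then toggle them with run, which sends /n_run under the hood:

```
(
SynthDef(\voice, { |freq = 440, amp = 0.1|
    Out.ar(0, SinOsc.ar(freq) * amp ! 2);
}).add;
)
(
// allocate a pool of paused voices once...
~voices = 8.collect { |i| Synth.newPaused(\voice, [freq: 220 * (i + 1)]) };
)
~voices[0].run(true);   // ...then start a voice when needed
~voices[0].run(false);  // and pause it again; the node stays allocated
```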

As @muellmusik said, it would be good to see a practical example.

FWIW, I once attended a workshop about live coding in Csound by Joachim Heintz, so it’s very much possible! But probably not my platform of choice :slight_smile:


But you make specific points, and those are interesting, and maybe useful to explore. There are limits of course. If you’re talking about scheduling, then I’d guess the limits on the number of events in a given time period are network bandwidth, CPU, or the size of the OSC message queue?

I think it’s the first and third (network bandwidth and the size of the OSC message queue), along with limitations in the performance of SCLang. Often the solution has been to use demand-rate sequencing, but then you have a new problem :wink:
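
For anyone following along, demand-rate sequencing moves the pattern onto the server, so there’s no per-event language round trip. A minimal sketch:

```
(
{
    var trig = Impulse.kr(8);  // 8 events per second, entirely server-side
    var freq = Demand.kr(trig, 0, Dseq([60, 63, 67, 70].midicps, inf));
    SinOsc.ar(freq) * Decay2.kr(trig, 0.005, 0.2) * 0.1 ! 2;
}.play;
)
```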

If the cases you’re referring to are hitting one of those and that’s an issue for significant numbers of users / use cases, it would be great to figure out what the bottleneck is. Similarly, if Csound or another environment can perform better, it would be super instructive to figure out why/how.

It’s largely an architectural thing. Csound doesn’t separate the control language from the DSP language, so I can write scheduler code inside a DSP graph. This makes scheduler code more flexible and faster (no latency, no issues with syncing different clocks, you’re not limited by k-rate, etc.), but at the cost of RT DSP performance and probably some stability. Tradeoffs, basically. I haven’t used Csound for live coding, but I suspect that if I did, there are things I would not dare try in Csound that I take for granted in SuperCollider.

In general, I’m a bit surprised by your experience because I would have thought that SC is rather excellent at dynamically creating (and destroying) large numbers of nodes in real time. Which language (or rather: audio engine) does a better job at this?

The limitation isn’t the server (SuperCollider probably has the best RT audio performance of anything out there, I’d guess), but rather the limitations imposed by the separation of control and audio into separate processes. The bottleneck is generating and parsing the events themselves. I found that I tended to run into issues when generating a lot of events in a short period of time. Maybe with faster processors/multicore this is less of a problem than it used to be; I haven’t tried recently.

I also generally find it annoying to simulate a single instrument with lots of synths. It’s just hard to manage; the bookkeeping code becomes a bit of a mess.

Listen folks, this thread is now coming up on its third anniversary and its 500th post. I think it’s become essentially unusable and impossible to digest, especially for newcomers.

As such, I’m going to close it.

There were a huge number of interesting ideas and discussions here. I think the most useful thing for the forum as a whole would be to pick up any of those of interest and restart them in their own dedicated threads, so that users can find them easily and the history doesn’t get buried further.

If anyone strongly objects to this, please do write to me or one of the other mods and we can discuss reopening.

Hope that’s okay with everyone, and thanks for so many fascinating contributions here!
