SuperCollider 4: First Thoughts

I believe there’s nothing, but I was curious about recent developments, mostly out of convenience and curiosity, since there was a very good library out there.

But thinking a bit more, several factors could shape users’ musical “philosophy”. For instance, Patterns have a significant presence in SuperCollider, while non-real-time (NRT) synthesis hasn’t been developed in a user-friendly manner, although it could be. And I remember software like Paul Berg’s AC Toolbox, which created NRT output for SuperCollider and felt like a different approach to using SC. Unfortunately, he passed away, and his system was not free software.
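For context, here is roughly what the NRT route looks like today, via the usual Score.recordNRT idiom; a minimal sketch, where the \beep SynthDef and the output path are placeholders:

```supercollider
(
// render a Pbind to an audio file with no real-time server involved
var def = SynthDef(\beep, { |out = 0, freq = 440, amp = 0.1, sustain = 0.2|
    var env = EnvGen.kr(Env.perc(0.01, sustain), doneAction: 2);
    Out.ar(out, (SinOsc.ar(freq) * env * amp) ! 2);
});

// timeOffset keeps the first note after the setup bundles at time 0
var score = Pbind(
    \instrument, \beep,
    \dur, 0.25,
    \degree, Pseq([0, 2, 4, 7], 8)
).asScore(8, timeOffset: 0.001);

// the NRT server knows nothing about our defs or groups, so put them in the score
score.add([0.0, [\d_recv, def.asBytes]]);
score.add([0.0, [\g_new, 1, 0, 0]]);
score.sort;

score.recordNRT(
    outputFilePath: "~/nrt-out.aiff".standardizePath,
    headerFormat: "AIFF",
    sampleFormat: "int16",
    options: ServerOptions.new.numOutputBusChannels_(2),
    duration: 8
);
)
```

It works, but compared with how polished Patterns are in real time, the ergonomics clearly haven’t had the same attention.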

I wrote a bit here about interval algebra and how it could be done in SC: Interval Algebra and Event Scheduling?

But those are just random thoughts. Nothing that would shift a paradigm.

EDIT: the only thing I know CSound can do and SC can’t is play SoundFonts :grin:

Been many years since I looked at CSound. What can it do that SC can’t?

  • Single sample graphs within block graphs.
  • You can define ‘ugens’ using CSound (and these again run at single sample level).
  • Better sequencing if you’re running stuff non-live (e.g. single sample accuracy).
  • You can create VSTs using Cabbage (including the UI).
  • Some pretty high quality ‘ugens’ as a number of the users are DSP researchers, and so use it as a prototyping environment (e.g. it’s had very good ‘analog’ filters for a long time).
  • Much more embeddable. E.g. it can easily be used inside iOS apps.
  • For NRT stuff it’s generally superior (not surprising, as this was its original use case).
  • 64 bit, so audio buffers can play until you die of old age.
  • For connecting MIDI controllers to predesigned instruments, I’ve generally had better experiences with stuff like latency (as MIDI is built into the synth server).
  • Runs on the web. Pretty flawlessly best I can tell.

It also has weaknesses compared to SuperCollider:

  • The language is, um, primitive (it’s improved a lot, but still).
  • I’ve never checked, but I’d guess that its realtime performance is probably inferior to SuperCollider’s, as realtime was kind of a hack added many years later, and 64 bit also imposes some costs.
  • Sequencing in CSound directly is brutal (just lists of numbers), though there are numerous front ends, and it is very easy to write your own front end to generate these if you want.
  • I’d guess it’s generally less flexible than SuperCollider when it comes to livecoding, though I don’t know for sure, as I’ve never tried. I wouldn’t be surprised if it suffered from some of the same problems as PureData, but I could be wrong.
  • Generally you’re going to have to do more work to get it to do what you want.

Both environments have strengths and weaknesses, and I wouldn’t say either is better. But depending upon your use case, CSound can be a better fit. And certainly for some of the use cases people say they want for SuperCollider 4, CSound can already do those things.


Also some of the stuff that SuperCollider has recently gained (such as support for VST plugins) has been in CSound for many years.

Two areas where I’d say CSound is probably superior/easier to work with are granular synthesis and physical modelling. You can do things in CSound that would require writing a custom UGen (in C++) in SuperCollider.

Thanks, sort of what I thought, plus a few things I didn’t know about. FWIW…

This is possible in SC with correct handling, isn’t it?

That would be a straightforward change, I’d guess, FWIW. At the moment the limit is, what, 13.5 hours at 44.1 kHz? (2^31 frames / 44,100 frames per second ≈ 48,700 seconds ≈ 13.5 hours.)

Interesting. What can’t you do in SC in terms of granular stuff without writing a UGen?

SC could use 64-bit double-precision floating-point numbers for Signal, as is done in DoubleArray. It’s a design decision: it was considered a waste.

@cian

All those things (single sample, for example) are being done in real time with newer systems. Check this one out: Architecture

That’s true, SuperCollider does support it, but you have to do some weird stuff in the synthdef to make use of it (or maybe I just didn’t understand the documentation). It just works in CSound.

I don’t think 64 bit in SuperCollider would be a straightforward change, but it’s certainly possible. 64 bit isn’t something I particularly care about, but I know people do.

Signal is a lang-side representation. It is true that it supports 32-bit rather than 64-bit samples. But that doesn’t constrain duration, which is what I understood @cian to be referring to. Size and access are represented using the 32-bit sclang Integer, which does constrain that on the lang side.

The situation is similar on the server side: frames in SndBuf is a 32-bit int, and all values in a UGen graph (at least between, and as input to, UGens) are 32-bit floats. You could support very long files for playback by changing the former, though you would have less precision when jumping in a buffer the farther in you go.
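To make the precision point concrete: a 32-bit float represents integers exactly only up to 2^24 = 16,777,216, which at 44.1 kHz is only about 380 seconds’ worth of frame indices. You can see the rounding lang-side with FloatArray, which stores 32-bit floats:

```supercollider
// FloatArray holds 32-bit floats, so the rounding is directly visible
a = FloatArray[16777216.0, 16777217.0];  // 2^24 and 2^24 + 1
a.postln;  // both elements read back as 16777216; the odd frame index is unrepresentable
```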

What can’t you do in SC in terms of granular stuff without writing a UGen?

You’re limited by what the UGens that exist do. That gives you a lot of flexibility (as the existing ones are pretty good), but if there’s something you want to do that is not supported then you’ll need to write your own UGen.

It’s not very hard. Just use OffsetOut and schedule with enough latency to land after the next audio interrupt. It should be sample accurate. (There’s some jiggery-pokery around clock-drift correction, but that’s another discussion.)
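A rough sketch of that idiom (the \click def is just an example):

```supercollider
(
SynthDef(\click, { |out = 0, freq = 1000|
    var env = EnvGen.kr(Env.perc(0.001, 0.05), doneAction: 2);
    // OffsetOut delays the signal within the control block so onset lands on the exact sample
    OffsetOut.ar(out, (SinOsc.ar(freq) * env) ! 2);
}).add;
)

(
Routine {
    16.do { |i|
        // s.latency (0.2 s by default) timestamps the bundle safely past the next interrupt
        s.makeBundle(s.latency, {
            Synth(\click, [\freq, 500 + (i * 100)]);
        });
        0.125.wait;
    };
}.play;
)
```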

To be clear, I just meant if you wanted to be able to play really long files “until you die of old age” that wouldn’t be hard to support. Actually IIRC VDiskIn may already allow for that…

SC could use 64-bit double-precision floating-point numbers for Signal, as is done in DoubleArray. It’s a design decision: it was considered a waste.

I’m aware of the tradeoffs. One of those tradeoffs is that audio buffers have a max playback time which has caused people doing installations problems in the past.

All those things (single sample, for example) are being done in real time with newer systems. Check this one out: Architecture

I’m not sure what point you’re making here. He literally says he’s copying CSound’s architecture (unsurprising given he’s one of the developers). Pink runs in block size, unless the user requests single sample for an individual block. There are very good performance reasons for this - you take a big hit when you run a graph sample by sample.

What’s wrong with a lang side approach (i.e. make a synth for each grain)? It’s super flexible, and tbh I’ve always thought it was superior for granular stuff.
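For illustration, a sketch of the synth-per-grain approach; the \grain def is an example, and b is assumed to be a mono Buffer you have already loaded:

```supercollider
(
SynthDef(\grain, { |out = 0, buf, rate = 1, pos = 0, dur = 0.05, pan = 0, amp = 0.2|
    var env = EnvGen.kr(Env.sine(dur), doneAction: 2);  // each grain frees itself
    var sig = PlayBuf.ar(1, buf, rate * BufRateScale.kr(buf),
        startPos: pos * BufFrames.kr(buf));
    OffsetOut.ar(out, Pan2.ar(sig * env * amp, pan));
}).add;
)

(
// b = Buffer.read(s, Platform.resourceDir +/+ "sounds/a11wlk01.wav");
Routine {
    200.do {
        s.bind {  // shorthand for makeBundle(s.latency, ...)
            Synth(\grain, [
                \buf, b,
                \pos, 0.3 + 0.2.rand,       // jittered read position
                \rate, [0.5, 1, 2].choose,  // per-grain transposition
                \dur, 0.08,
                \pan, 1.0.rand2
            ]);
        };
        0.02.wait;  // 50 grains per second
    };
}.play;
)
```

Because every grain is its own synth, any parameter can change per grain from the language, which is hard to match inside a single granular UGen.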

Do you have a specific example of something you can’t do?

To be clear, I just meant if you wanted to be able to play really long files “until you die of old age” that wouldn’t be hard to support. Actually IIRC VDiskIn may already allow for that…

I don’t, but I know that in the past people who’ve done installations have run into issues with this, and there doesn’t seem to be a good solution. If I were designing an audio system I wouldn’t personally want it to run at 64 bit, as that seems like a waste of resources, but I know people have run into this limitation. Certainly for installation work I can see why you might want it.

True, those are different things. Would running the server and UGens at 64 bit, even in NRT, give a benefit?

Unrelated: file formats like RF64 have practical benefits, supporting file sizes greater than 4 GB and up to 18 audio channels.

Yes, I think with VDiskIn you may be able to stream in really long CAF and RF64 files. You couldn’t do this in memory, but you probably wouldn’t want to.
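Something along these lines (the path is hypothetical):

```supercollider
(
// stream from disk in chunks rather than loading the whole file into memory
var path = "~/really-long-file.wav".standardizePath;
b = Buffer.cueSoundFile(s, path, startFrame: 0, numChannels: 2);
x = { VDiskIn.ar(2, b, rate: 1) }.play;
)
// when finished: x.free; b.close; b.free;
```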

Allowing a 64 bit frame index is trivial in terms of resources. Agree about 64 bit samples being unnecessary for the large majority of use cases.

Do you have a specific example of something you can’t do?

No, it’s been too long. I just remember running into issues, doing some research (including asking on the mailing list), and deciding that SuperCollider wasn’t going to work well for what I wanted to do. In my experience SuperCollider doesn’t work well if you are creating a lot of synths in a short period of time, or sending a high volume of OSC messages. It’s just a limitation of the architecture.

It’s not my problem, so I’m not super familiar with it. But there have been discussions both here and on the previous mailing list about it, and there doesn’t seem to be a solution that people are happy with. It may be an overflow issue. I don’t think it’s a problem with long audio files per se, but more when you have to keep a count going over time.

But like I said - it’s not my problem. I just know it’s a thing because it keeps coming up.