Interfacing SuperCollider: what is your setup?

Hi everyone!

IMO one of the best features of SuperCollider is its flexibility and the ease with which it communicates, integrates and interfaces with other systems and platforms. There are plenty of examples: laptop orchestras, multiple servers on a single machine, MIDI communication with music controllers, OSC communication through mobile apps, scripting with other programming languages, data sonification, controlling DAWs, interfacing with video and animation environments, using plugins, etc…

The information and practical examples about these topics are spread across the web, and interfacing is strongly tied to each user's style and artistic preferences. So I would like to open this topic to gather the possibilities and point out some directions for those trying to solve their own interfacing problems.

What is your interfacing setup? Which hardware do you use? Preferred protocols? Which other programming/scripting languages? DAWs? What are the problems, and what works seamlessly?

Here is the list of (“most common”) possibilities from the official website:
https://supercollider.github.io/community/systems-interfacing-with-sc

All the best,
Fellipe


MacBook Air, just perfect. I'm a Java programmer, and the only problems I encounter always get solved here, hehe. Many thanks and love.


Although this paper is 11 years old, it is a really nice resource for those thinking about the variety of setups used in sound installations. I have not seen much text/documentation like this, especially regarding SC, which is a pity, because it would help me think through many conceptual and practical problems regarding sound installations.

Marije A.J. Baalman. 5 years of using SuperCollider in real-time interactive performances and installations – retrospective analysis of Schwelle, Chronotopia and Semblance. In Linux Audio Conference (LAC) 2010, Hogeschool voor de Kunsten, Utrecht, The Netherlands, May 1–4, 2010.


Nice idea (this thread) and thanks for the link to the Systems interfacing with SC page. I’m not sure if I misread the page, but shouldn’t FoxDot be on the Clients Using SC Server list?


Hi,

I will take the opportunity to talk briefly about the environment I’ve been using and working on for about ten years, Mellite, which interfaces with the SuperCollider server for the real-time sound synthesis part. In brief, it is my attempt to bundle together multiple abstractions that represent different practices I’m engaged in—sound installation (Control/Ex language), live improvisation (Wolkenpumpe interface), electroacoustic music (Timeline interface and FScape processing)—defining ways to combine them in a common object model (called SoundProcesses) and through a hybrid text/graphical interface.

Like any quite personal environment, it has grown along the directions needed to conduct specific projects with it (artworks and research). It originally started as research into preserving the working history of compositional processes by introducing a fine-grained automatic versioning system, something that is still part of Mellite but that I don’t use very actively now. The text interface is based on fragments written in the Scala programming language, although I should mention that this is strongly shaped by the embedded domain-specific models you work with (there is one for real-time sound synthesis through SuperCollider, one for offline processing loosely based on the UGen concept, and one for writing actions and control flows as a sort of patcher-like glue language).

Running on the JVM (recently also on JS, without the IDE), it is relatively resource-hungry, although I have been able to use it on small computers like the Raspberry Pi 3 (the Pi 4 is much more suitable in terms of speed and RAM). I am not that invested in classical live coding or digital musical instruments, so I have put relatively little effort into questions of performance and latency—which you can get away with, since scsynth runs in its own process.

Since I am basically its only developer, it is more or less shaped by my experiences and priorities, and as such I enjoy using it and it has become second nature. The most challenging bit is probably the mix of text and GUI, and the fragmentation of the text. This is especially true if you are used to powerful editors and IDEs for regular programming languages; for several years I kept going back to writing my sound installations in IntelliJ IDEA, using just the framework (SoundProcesses) and generic Scala, with little use of the custom IDE (Mellite). It’s very hard to compete with the functionality of a commercial IDE once you get used to navigating, moving and renaming symbols at the touch of a key press. I’d say the comfort of editing code fragments in Mellite is probably on par with other custom IDEs such as Processing.

A big issue is making a network of objects that interoperate. For example, when I write a sound installation in generic Scala, I put configuration, constants, parameters and utility functions in one object that I can reuse throughout the code. This is currently difficult in Mellite, so it’s high on my list of things to look at. How can you “import” text modules into several objects, and how can you make sure that, when you touch a text module, the dependent modules know about it and can be updated? This is due to the peculiarity that objects are always compiled and rendered into serialised trees in order to avoid on-the-fly compilation as a performance bottleneck. I guess that is also the reason why most other hybrid systems embed a dynamically typed language instead, interpreted ad hoc.

The interface between different objects is currently “stringly” typed (naming ports or dictionary keys), so there is a point where you lose the comfort that a statically typed language normally provides. Another UI question is whether to hang on to the old-school multi-window system (think of the old Smalltalk-ish SC 3 GUI before the SCIDE came along), or to succumb to the fashion of one-window applications. I still like the multi-window interface, but sometimes it gets in your way, and some workflows could be faster and more streamlined. In general, I think it requires quite a disciplined way of working not to produce chaos and head-scratching in the workspace as soon as you do non-trivial projects.

Three other questions I find interesting:

  1. When working on distributed pieces, say you run a workspace (Mellite’s bundle of objects that make up a project) on a dozen Raspberry Pis: how do you evolve them together? Workspaces can store the state of a piece (e.g. a sound installation), which I find beautiful and very fascinating; it’s sad having to “reset” that state by overwriting the twelve copies of the workspace. So how can I update twelve workspaces, each with their own state? How can I designate the parts that need to change, and how can the changes be applied consistently across multiple workspaces that are not identical?

  2. How to work collaboratively on a piece. Could there be a way to identify different users on a workspace? Or could there be ways to connect multiple workspaces on a network (other than just sending user-defined OSC messages)? This could go back to the versioning system research. It could also begin in a particular sub-system, like allowing multiple people to improvise together in Wolkenpumpe.

  3. Browser-based sound pieces could be interesting (I am working on one at the moment). Right now you export a workspace to put it up in the browser, but could you also persist its state, and could you retrieve it again on the desktop? And what would be a good way to work with graphics? I am not convinced by the schism between Processing and p5.js. Java2D and HTML5 Canvas are too similar not to think of a solution that could work in both, and Scala already compiles to both JVM and JS, solving the problem that Processing has (and will never solve if it stays with its Java-ish language). But then, is Processing such a great way of writing graphics code? Would it not make more sense to use the patcher-like Control/Ex language in Mellite, based on reactive expressions? Like thinking of a reactive SVG scene, or a bundle of Pen-like functions (like JPen used to work in SwingOSC)?

Many, many questions and question marks :slight_smile:

Best, .h.h.


One recent addition to my live-coding setup is VCV Rack (mainly because it has some really juicy oscillators and filters).

MIDI: I faced a catch-22. If I open the Rack patch first, then the SC MIDI port doesn’t exist yet, and Rack’s MIDI input modules revert to an empty device selection (no input). This requires manual intervention to correct (i.e. one more mistake you can make while loading the environment on stage).

On the other hand, if I run MIDIClient.init first, then the Rack input ports don’t exist yet and SC will never see them.

In Linux, there is a neat solution (suggested to me by a Rack user): I run an a2jmidid process first, which creates virtual thru ports. Rack connects to one of the JACK-MIDI ports, and SC to the corresponding ALSA-MIDI port. Because a2jmidid was launched first, both Rack and SC can find the ports with no trouble.
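For reference, the SC side of this can look roughly like the sketch below; the port-name matching string is a placeholder and needs to be checked against MIDIClient.sources on the actual machine.

```
// sketch only: list the available sources, then connect to the virtual thru port
MIDIClient.init;
MIDIClient.sources.do { |src, i| [i, src.device, src.name].postln };

(
// "rack" is a placeholder for whatever the a2jmidid port is actually called
var idx = MIDIClient.sources.detectIndex { |src|
	src.device.containsi("rack") or: { src.name.containsi("rack") }
};
if(idx.notNil) {
	MIDIIn.connect(0, MIDIClient.sources[idx]);
} {
	"virtual MIDI port not found -- check MIDIClient.sources".warn;
};
)
```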

On the Mac, the IAC bus should exist prior to launching both, so it shouldn’t be a big problem. On Windows, I guess something like Tobias Erichsen’s loopMIDI would do the same.

Audio: I’m piping a stereo signal from Rack into two extra scsynth hardware input channels. This is for multitrack recording: I have a function that will record several mixer channels with a sample-synchronized onset. If I just piped Rack to the audio hardware, I could record its audio within Rack but I wouldn’t be confident that the audio file is in sync with the SC audio files. (Also I can drop effects onto the SC MixerChannel for Rack, on the fly, in a show – didn’t build a delay or glitch module into the Rack patch? No problem – SC can do it.)
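In case a concrete starting point helps, here is a rough sketch of that kind of routing (not hjh’s actual code): the input channel numbers are assumptions, and a plain audio bus stands in for the MixerChannel mentioned above.

```
// sketch: Rack's stereo output assumed on hardware inputs 3/4,
// i.e. SoundIn channels 2 and 3 -- adjust to the actual device
s.options.numInputBusChannels = 4;   // set before booting the server

(
// tap Rack onto a private bus so SC effects can be dropped on it live
~rackBus = Bus.audio(s, 2);
~rackTap = { Out.ar(~rackBus, SoundIn.ar([2, 3])) }.play;

// an effect added on the fly, e.g. the delay that wasn't built into the Rack patch
~fx = { |mix = 0.3|
	XOut.ar(~rackBus, mix, CombC.ar(In.ar(~rackBus, 2), 0.5, 0.375, 3))
}.play(target: ~rackTap, addAction: \addAfter);

// monitor the (processed) bus on the main outputs
~monitor = { Out.ar(0, In.ar(~rackBus, 2)) }.play(addAction: \addToTail);
)
```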

It took some fiddling, but I now have a launch script for my live setup that makes all the connections automatically. (That’s one benefit of Linux – JACK connections can be scripted!)
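The connections don’t have to live in a shell script, by the way; as a sketch, the same thing could be done from sclang at boot time. The Rack port names below are made up (jack_lsp lists the real ones); scsynth’s own JACK ports are usually named SuperCollider:in_N on Linux.

```
// sketch: make the JACK connections from sclang once the server is up
s.waitForBoot {
	[
		"jack_connect 'VCV Rack:audio_out_1' 'SuperCollider:in_3'",
		"jack_connect 'VCV Rack:audio_out_2' 'SuperCollider:in_4'"
	].do { |cmd| cmd.unixCmd };
};
```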

hjh
