Hi everyone,
I hope this message finds you well!
I’m reaching out to ask if anyone in the community knows of a SuperCollider-based project that provides functionality similar to platforms like myNoise.net or Ambient-Mixer.com. These platforms allow users to create rich, evolving soundscapes by mixing large libraries of sounds, often contributed by the community.
The features I’m particularly interested in are:
- The ability to specify parameters such as the frequency of appearance of specific effects,
- Control over the relative intensity and panning of each sound element,
- A user-friendly way to interact with these settings.
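To make the question concrete, here is a minimal sketch (not an existing project) of how those three parameters might look in plain SuperCollider: each soundscape element is a one-shot buffer player with its own amplitude and pan, and a routine triggers events at a user-set average rate. The variable names and the jitter scheme are illustrative assumptions.

```supercollider
(
s.waitForBoot {
    // Standard sound file shipped with SuperCollider
    var buf = Buffer.read(s, Platform.resourceDir +/+ "sounds/a11wlk01.wav");

    SynthDef(\oneShot, { |out = 0, bufnum, amp = 0.3, pan = 0|
        var sig = PlayBuf.ar(1, bufnum, BufRateScale.kr(bufnum), doneAction: 2);
        Out.ar(out, Pan2.ar(sig * amp, pan));
    }).add;

    s.sync;

    ~rate = 0.2;  // average events per second: the "frequency of appearance"

    Routine {
        loop {
            // Per-event intensity and panning, randomized within user-set bounds
            Synth(\oneShot, [\bufnum, buf, \amp, rrand(0.1, 0.4), \pan, 1.0.rand2]);
            (1 / ~rate * exprand(0.5, 2.0)).wait;  // jittered inter-event interval
        }
    }.play;
};
)
```

A "user-friendly way to interact" could then be a few `EZSlider`s (or OSC/MIDI mappings) writing into `~rate` and the amp/pan bounds.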
I’d love to know if there’s a project, tool, or even an approach within SuperCollider that aligns with this kind of functionality. Alternatively, if there isn’t an existing project, I’d also be interested in discussing ideas on how such a system could be implemented in SuperCollider.
Looking forward to hearing your thoughts, suggestions, or experiences. Thank you in advance for your insights!
Warm regards,
Maybe two projects of mine that I am working on could be relevant:
The first one is https://gencaster.org/ which spawns multiple SuperCollider instances on a web server and makes those instances available via WebRTC streaming. One or multiple listeners get assigned a stream, and what happens on such a stream can be controlled via a mixture of sclang/Python (organized in a graph-based structure). It is also possible to get the state of a listener (e.g. transmit GPS location, or create and send slider values via the frontend).
The other project is called Stecker (German for "plug", currently in an alpha state), which is basically a stripped-down version of Gencaster: send or receive an audio signal from a local SuperCollider instance to another SuperCollider instance or another client (i.e. a browser) via the internet using WebRTC. Smartphones or laptops can therefore become mobile loudspeakers or microphones. It can also be used to transmit/share control signals via the web.
The idea is to use Stecker to foster some kind of bi-directional community radio with remix capabilities for SuperCollider, but let’s see how it resonates with the community once this is finished.
DM me if you are interested in a demo, access or a paper of the projects.
I also still have the integration of the scsynth WASM build in my backlog; this would probably be the most suitable approach for mimicking and extending the mentioned websites, as it would allow running scsynth directly in the browser.
From what I’ve seen, those websites simply loop "texture recordings" and provide a mixer to balance the tracks. I didn’t see any effects (except an overall equalisation), so I’m not sure what you’re referring to when you say "the ability to specify parameters such as the frequency of appearance of specific effects".
My point is that, even though SuperCollider provides everything you need to set up such a project, if you’re only after volume, stereo placement, and global equalisation on recorded loops, you could also achieve this with a Python/JavaScript/C/… audio library, which would probably be easier to use than SC itself. But I might be wrong here (and SC is by far the best programming language). I think SC really comes in handy when you’re not looping pre-recorded tracks but generating them in real time.
Thanks for the interest, Dindoleon. Indeed, they are simple loopers with mix controls and possibly a layer of equalization on top. My idea was precisely to build an equivalent environment in SuperCollider in order to incorporate more functionality, including, as you point out, the use of synthesizers in addition to recorded samples, or the possibility of controlling the environment through OSC/MIDI messages from external devices or environmental sensors, etc.
By "frequency of appearance of effects" I meant the possibility of parameterizing, for example, how often a certain sample appears in the mix, or how frequently reverb or distortion is applied to the output of a SynthDef, all from a simple UI layout.
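One hedged sketch of that idea in sclang: a percussive SynthDef whose output is routed through a reverb bus only on a fraction of events, with the probability as the user-facing parameter. The SynthDef names and the 30% figure are illustrative, not an existing API.

```supercollider
(
s.waitForBoot {
    var revBus = Bus.audio(s, 2);

    SynthDef(\ping, { |out = 0, amp = 0.2, pan = 0|
        var sig = SinOsc.ar(ExpRand(300, 900))
            * EnvGen.kr(Env.perc(0.01, 0.4), doneAction: 2);
        Out.ar(out, Pan2.ar(sig * amp, pan));
    }).add;

    SynthDef(\verb, { |in, out = 0|
        // Simple stereo reverb reading from a private bus
        Out.ar(out, FreeVerb2.ar(*In.ar(in, 2)));
    }).add;

    s.sync;
    Synth(\verb, [\in, revBus]);

    Routine {
        loop {
            // 30% of events are routed through the reverb; the rest go out dry
            var dest = 0.3.coin.if(revBus, 0);
            Synth(\ping, [\out, dest, \pan, 1.0.rand2]);
            exprand(0.5, 3.0).wait;
        }
    }.play;
};
)
```

The same pattern generalizes: any "how often does X happen" control becomes a probability or rate feeding a routine or pattern.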
Wow! These projects have truly impressed me, thank you so much for sharing. I had never thought about this before, and it seems like a fantastic idea. At the moment, I believe I’m in a different league, and the application for creating environments is merely an excuse to learn and develop or collaborate on a SuperCollider project. However, I’ll definitely take note of this “Gencaster” and will try to study the project - I have no doubt it offers enormous learning opportunities.
In fact, my initial intention in developing an environment generator was to dynamically overlap or mask unwanted noisy environments, creating a more pleasant sound experience (for example, when faced with annoying music from neighbors in an outdoor area). In this scenario, the environment should adapt to the rhythmic and frequency content of the hostile disturbance, and also to the time of day and the specific area in which the speakers are located (there are places where the noise is heard with less intensity).
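A hedged sketch of the adaptive part of this idea, assuming a single microphone on input 0: track the disturbance's loudness and rough pitch, and map them onto a band-limited masking texture. Real-world tuning (lag times, frequency clipping range, gain) would need calibration on site.

```supercollider
(
s.waitForBoot {
    SynthDef(\adaptiveMask, { |out = 0|
        var mic = SoundIn.ar(0);
        var amp = Amplitude.kr(mic, 0.1, 2.0);    // follow disturbance loudness
        var freq = Pitch.kr(mic)[0];              // rough pitch estimate
        // Center filtered noise near the disturbance's pitch region,
        // and make the masker louder when the disturbance is louder.
        var mask = BPF.ar(PinkNoise.ar, freq.lag(1).clip(100, 4000), 0.3);
        Out.ar(out, Pan2.ar(mask * amp.lag(0.5) * 2, 0));
    }).add;
};
)
```

This is deliberately naive (a proper masker would consider critical bands and rhythm), but it shows the analysis-to-synthesis mapping in one SynthDef.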
An obstacle I encounter is how to distinguish with ambient microphones between the hostile disturbance and the generated sound meant to counteract its presence. If any ideas emerge from this opportunity, I would be delighted to hear them.
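One possible direction, sketched with the caveat that it is crude: since the counteracting sound is generated by the system itself, it is a known signal, so a delayed and scaled copy of it can be subtracted from the microphone input before analysis (the same idea, done properly, is the adaptive echo cancellation used in telephony, e.g. LMS/NLMS filters). The delay and gain here would need manual calibration for the room and speaker placement; the bus layout is an assumption.

```supercollider
(
s.waitForBoot {
    SynthDef(\noiseMonitor, { |micIn = 0, maskBus = 0, delay = 0.05, gain = 0.5|
        var mic, mask, residual, level;
        mic = SoundIn.ar(micIn);
        // Subtract a delayed, scaled copy of our own masking output
        mask = DelayN.ar(In.ar(maskBus), 0.5, delay) * gain;
        residual = mic - mask;
        // What remains should mostly be the hostile disturbance
        level = Amplitude.kr(residual);
        SendReply.kr(Impulse.kr(4), '/residualLevel', level);
    }).add;
};
)
```

An sclang-side `OSCFunc` on `'/residualLevel'` could then drive the masker's parameters from the residual rather than from the raw microphone signal.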
In any case, thank you very much for this contribution - I find it an incredibly interesting project.