Over the weekend I put together a little project that I’ve been thinking about for a while, and I thought I’d share it here since SuperCollider plays a central role in it.
It started with the idea of visualizing SuperCollider tweets (for example, a Twitter bot replying to #sc140 tweets with a spectrogram image or a rendered .mp3). After experimenting a bit with rendering code snippets in non-real time using File.readAllString(inputFilePath).interpret on .scd files and rendering spectrograms with SoX, I gravitated more and more towards the idea of writing SC code not with sound as the end result, but specifically for visual interpretation through sound: looking for visually interesting synthesis, amplitude, frequency and timbral shapes, and so on.
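For the curious, a minimal sketch of that pipeline might look like the following. This is an assumption on my part, not the exact script used: it records a booted server in real time rather than doing a true NRT render, the file paths and the fixed 10-second duration are placeholders, and it assumes SoX is installed and on the PATH.

```supercollider
(
// Hypothetical sketch: interpret an .scd snippet while recording the
// server output, then shell out to SoX to draw a spectrogram.
var inputFilePath = "snippet.scd";   // placeholder path to the code snippet
var wavPath = "snippet.wav";         // placeholder path for the rendered audio
var pngPath = "snippet.png";         // placeholder path for the spectrogram image

s.waitForBoot {
    s.prepareForRecord(wavPath);
    s.record;
    // run the snippet, as described in the post
    File.readAllString(inputFilePath).interpret;
    10.wait;                         // assumed fixed render length in seconds
    s.stopRecording;
    // SoX turns the rendering into a spectrogram image
    ("sox" + wavPath + "-n spectrogram -o" + pngPath).unixCmd;
};
)
```

A proper batch setup would presumably use Score and recordNRT instead of real-time recording, but the above is closer in spirit to the interpret-and-record experiment described.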
The result is a GitHub repo with a growing collection of SC snippets, which are rendered into spectrograms and posted alongside the code on a GitHub page. The whole thing is still fairly rough around the edges; my goal was to first get it off the ground and see where it goes over time. Of course I’m open to contributions and pull requests adding new sketches, which will be published on the site. I’m also very interested in contributions to the design of the page (specifically the formatting/syntax highlighting etc. of the code blocks), since I’m quite inexperienced with that.
To cut a long story short: here is a very loose and unsystematic exploration of visual sound synthesis. Feel free to add posts.