Hi all!
Today I recorded some of my WIP for an upcoming installation. For now it is just noise-modulated custom SynthDefs, mixed down to stereo, but the final versions of these sounds will be multichannel and macro-structurally organized.
Even though it isn't final (is generative music ever “final”?), I wanted to share it with you and hear what you think…
Well… the way I work with SuperCollider is not very idiomatic, and hence it is a bit complicated to share the code that unfolds this music.
Nevertheless, you can find my personal synthdef repository here:
And to explain why sharing the code is not really an option here: I sequence the SynthDefs and modulate their parameters in a custom C++ nodal environment we have been developing for years in the studio. I have already shared this on the forum before, but in case it helps illustrate why…
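Still, to give a rough idea of what I mean above by “noise-modulated SynthDefs”, here is a minimal, stand-alone sketch (not the installation code, and all names are just placeholders): a single voice whose pitch, filter cutoff and panning are driven by low-frequency noise UGens.

```
(
// toy example only; assumes the server is booted (s.boot)
SynthDef(\noiseMod, { |out = 0, freq = 200, amp = 0.1|
	var pitchMod, cutoffMod, sig;
	pitchMod  = LFNoise2.kr(0.3).range(0.5, 2);        // slow random pitch drift
	cutoffMod = LFNoise1.kr(0.8).exprange(300, 4000);  // wandering filter cutoff
	sig = Saw.ar(freq * pitchMod);
	sig = RLPF.ar(sig, cutoffMod, 0.3);
	Out.ar(out, Pan2.ar(sig * amp, LFNoise1.kr(0.2))); // noise also moves the panning
}).add;
)

Synth(\noiseMod, [\freq, 110]);
```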
Playing with an adaptation of the flute waveguide code here (Physical Modelling | scoring), and with the hexagonal membranes. How cool is physical modelling in SC!
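For anyone who wants to try the idea without digging into the linked code, here is a toy waveguide sketch of my own (a plain noise burst into a tuned feedback delay, closer to Karplus-Strong than to the flute model from that post):

```
(
// toy waveguide, not the code from the linked post; assumes the server is booted
SynthDef(\toyWaveguide, { |out = 0, freq = 220, amp = 0.2, decay = 0.995|
	var excitation, fb, sig;
	excitation = WhiteNoise.ar(0.5) * EnvGen.ar(Env.perc(0.001, 0.05)); // short burst
	fb  = LocalIn.ar(1);
	sig = excitation + (fb * decay);
	sig = LPF.ar(sig, 4000);  // loop filter: damps the high partials over time
	// LocalIn/LocalOut add one block of delay, so subtract it to stay in tune
	LocalOut.ar(DelayC.ar(sig, 0.1, freq.reciprocal - ControlDur.ir));
	Out.ar(out, Pan2.ar(sig * amp));
}).add;
)

Synth(\toyWaveguide, [\freq, 330]);
```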
Just finished this piece for a museum in Amsterdam, on display until January.
All sound comes from raster-scanning sonification of the image, using additive synthesis in SuperCollider (a bank of 1000 sinusoidal oscillators). This is similar to how MetaSynth worked.
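To make that a bit more concrete, here is a heavily scaled-down sketch of the sine-bank part (64 partials instead of 1000, with one amplitude control per partial on a control bus; the names \sineBank and ~ampBus are placeholders, not my actual patch):

```
(
// assumes the server is booted (s.boot)
~numPartials = 64;
~ampBus = Bus.control(s, ~numPartials);  // one amplitude value per partial

SynthDef(\sineBank, { |out = 0, baseFreq = 55, amp = 0.1|
	var freqs, amps, sig;
	// semitone spacing, roughly matching a "melodic scale" spectrogram axis
	freqs = Array.fill(~numPartials, { |i| baseFreq * (2 ** (i / 12)) });
	amps  = Lag.kr(In.kr(~ampBus.index, ~numPartials), 0.05); // smooth frame changes
	sig   = SinOsc.ar(freqs, 0, amps).sum;
	Out.ar(out, Pan2.ar(sig * amp));
}).add;
)

x = Synth(\sineBank);
```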
I generate the images in Processing (Java), and use Max/MSP as a quick GUI builder to control the whole thing…
The trick here is that, over the years, I have learned to create synthetic visual spectrograms: I manually code timbres, pitches, rhythms… which give rise to images that “sound good” when sonified.
In fact, if you visualize the resulting sound in a melodic-scale spectrogram, you get the exact same image as the one that was used to generate the sound…
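And the raster-scanning part, continuing the sketch above (it needs ~numPartials, ~ampBus and a running \sineBank synth): step through the image column by column, left to right, and write each column's brightness values to the oscillator amplitudes. Here ~image is just random placeholder data; in the real piece the columns come from the Processing image.

```
(
// placeholder "image": 200 columns of ~numPartials brightness values in 0..1
~image = Array.fill(200, { Array.fill(~numPartials, { 1.0.rand * 0.02 }) });

~scan = Routine {
	~image.do { |column|
		~ampBus.setn(column);  // one image column = one spectral frame
		0.05.wait;             // scan rate: 20 columns per second
	};
	~ampBus.setn(Array.fill(~numPartials, 0));  // silence when the scan ends
}.play;
)
```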