I’m here again with another season of SuperCollider meetups, hosted by our friends at Notam! The meetings will take place online approximately once a month on Zoom at 7pm CET (Oslo Time):
At these meetups, SuperCollider users (usually 2 per meetup) present a project, class library, instrument, or artistic practice featuring our favourite audio programming environment. These presentations are informal, vary in their format, and are intended to showcase the diversity and flexibility of expression our beloved SC permits.
If you’re interested in presenting a project/workflow/tool/whatever at one of the Meetups, send me a DM and I’ll find a slot for you - absolutely everyone is welcome to share!
All community events at Notam fall under the Notam Code of Conduct to make them as inclusive as possible. Please read the full Code of Conduct before joining an event. If you have accessibility-related requests or questions about the meetup, please send me a message here and I’ll do what I can to address them!
In the week before each meetup I’ll return here to post info about the upcoming presenters, so be sure to follow this thread via the notification button on the right!
A graphic design graduate with a passion for sound art, Santi Vilanova combines these two disciplines through the catalyst of “creative technologies”, of which he is a self-taught developer.
(De)formed in the rave scene of the early 2000s, his sound work has evolved to integrate this influence into new territories. His recent research combines digital algorithms and sonification engines with classical staves and acoustic ensembles, focusing on the idea of visual music.
Since 2006 he has run the audiovisual research lab Playmodes in collaboration with Eloi Maduell. At Playmodes they develop artistic installations and multimedia performances which are showcased around the world.
//////////////
In this talk, Santi Vilanova will go through the creative insights behind Playmodes’ artistic practice in the fields of installation and performance. The second half of the talk will focus on the technical details of their own software framework, developed in C++ and SuperCollider, which features a visual nodal environment that allows flexible mapping of control data for multimedia devices (video, light, kinetics…) onto custom multichannel SynthDef parameters.
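For anyone curious what the SuperCollider end of such a setup can look like, here is a minimal sketch of routing external control data onto multichannel SynthDef parameters over OSC. This is not Playmodes’ actual framework, and all names here (`\bank`, `\amps`, `/ctrl/amps`) are hypothetical.

```
// A minimal sketch (assumes the server is booted).
(
SynthDef(\bank, { arg out = 0;
    var amps = \amps.kr(0 ! 8);                 // 8 externally controlled lanes
    var sig = SinOsc.ar((1..8) * 110) * amps;   // one partial per lane
    Out.ar(out, Splay.ar(sig, level: 0.2));
}).add;
)

~synth = Synth(\bank);

// Route incoming control data (e.g. from a C++ node editor) onto the synth.
OSCdef(\ampsIn, { |msg|
    ~synth.setn(\amps, msg[1..8]);              // expects "/ctrl/amps", v1 ... v8
}, '/ctrl/amps');
```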
Our second presenter next week will be @tedmoore - for those of you that expressed interest in the FluCoMa project earlier this spring, this will be worth checking out!
//////////////
Ted Moore (he / him) is a composer, improviser, and intermedia artist whose work fuses sonic, visual, physical, and acoustic elements, often incorporating technology to create immersive, multidimensional experiences.
After completing a PhD in Music Composition at the University of Chicago, Ted served as a postdoctoral Research Fellow at the University of Huddersfield as part of the FluCoMa project, where he investigated the creative potential of machine learning algorithms. Ted’s music has been presented by leading cultural institutions such as MassMoCA, South by Southwest, The Walker Art Center, The American Academy in Rome, and National Sawdust.
//////////////
I’ll be presenting on how I use FluCoMa in SuperCollider in coordination with openFrameworks to create audio-visual works such as quartet and saccades.
Our next SC meetup is already in a week! Our first presenter is @nammedit:
Martin Tidemann Kvalø is an Oslo-based composer working at the intersection of electronics and acoustic score-based music. He is currently doing his master’s in composition at the Norwegian Academy of Music, exploring the aesthetic potential of the dialogue between his computer-assisted systems developed in SuperCollider and intuitive, hands-on compositional strategies.
In this meetup I will showcase my use of SuperCollider both as a compositional tool and as a musical instrument in a work-in-progress for piano and live electronics. On the computer-assisted-composition side this includes a class for generating cell-based patterns, which I’ve named StateChanger(), and an early-stage, buggy but sometimes functional Lilypond class which converts data in SuperCollider to auto-generated notation as PDFs or SVGs. If there’s time, I will also showcase the live-electronics system of the piece, which consists of a piano sample instrument and ways of controlling and writing code for this instrument.
Sam Pluta is a composer, electronics performer, and sound artist. Though his work has a wide breadth, his central focus is on using the computer as a performance instrument capable of sharing the stage with groups ranging from new music ensembles to world-class improvisers. By creating musical systems of shared agency, Pluta’s vibrant sonic universe focuses on the visceral interaction of instrumental performers with reactive computerized sound worlds. He has worked with Wet Ink Ensemble, Peter Evans Ensemble, Rocket Science, and PANG, and his performances and production work can be found on over 50 albums of new music and jazz.
Sam Pluta will be presenting on his recent SuperCollider UGens: BufFFT, NessStretch, Oversampling Oscillators, Proteus, RTNeural, and Onnx. These UGens extend SC’s functionality to allow FFTs on buffers, complex synthesis with high partial-count oscillators, wavetable synthesis, and audio-rate neural processing.
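For orientation, this is what plain wavetable synthesis looks like with SuperCollider’s built-in Osc UGen. The sketch below uses core SC only and does not show the plugin APIs themselves, which (among other things) tackle the aliasing that the built-in oscillators exhibit at high partial counts and high frequencies.

```
// Core-SC wavetable synthesis for reference (assumes the server is booted).
(
b = Buffer.alloc(s, 2048);                     // power-of-two size required
b.sine1(1.0 / (1..32), asWavetable: true);     // 32 partials in wavetable format
{ Osc.ar(b, MouseX.kr(55, 880, \exponential)) * 0.1 ! 2 }.play;
)
```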
Ah, I realized there was a typo in the dates listed above - the next meetup is 2024-10-24T17:00:00Z. Sorry for the confusion - looking forward to seeing many of you there!
For those interested, here is a quick GitHub repository for the StateChanger class, with some examples to get started.
Thanks for having me, thanks for tuning in, and thanks to @Sam_Pluta for some brain-melting DSP and machine-learning stuff!
New month, new meetup! Our first presenter next week is @Eric_Sluyter:
ERIC SLUYTER is a New York-based sound artist. His work intricately collages disparate sonic material (memories) into living sculpture, and investigates musical human/computer collaboration. He is interested in time as experienced individually and collectively, in relation to the clock and to the natural world. He has been a member of the downtown theater company The Wooster Group since 2015, and has collaborated on many other theater, film, art, and sound works. He has been a SuperCollider user since 2010.
/========/
I will be presenting a work-in-progress timeline system / DAW that I started developing this summer, aiming for an alternative to my frequent workflow of “improvise in SC and then arrange the recordings in Reaper”, and ideally something that could become a flexible real-time tool, allowing for determinism and total control as well as randomness and non-linearity. So far I’ve prioritized sequencing SC-specific things (synths, envs, patterns, routines) over things other DAWs can handle (audio clips, MIDI piano roll), although I would like to add some of those features in the future. The GUI is written entirely in sclang, for better and worse. I’m just beginning to tackle live recording of envelopes, and ideally it could somehow record an entire improv session. It can fast-forward envelopes and certain routines and patterns to (optimistically) start playing correctly from any point in time, and there is a mixing interface built with DDWMixerChannel. I will show where it’s at, maybe share some interesting problems I’ve encountered, and will be grateful for feedback / ideas / discussion.
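As a side note on the fast-forwarding idea: one hypothetical way to resume an Env from an arbitrary point (an illustration only, not Eric’s implementation) is to query its level at the offset with .at and start a fresh EnvGen over the remaining time.

```
(
var env = Env([0, 1, 0.3, 0], [0.5, 1, 2], \lin);
var offset = 1.2;                      // seconds into the envelope
var level = env.at(offset);            // level reached by that point
var remaining = env.duration - offset;

{
    // Crude resume: a single segment from the recovered level to 0.
    // A real system would rebuild the remaining breakpoints instead.
    var resume = Env([level, 0], [remaining], \lin);
    SinOsc.ar(440, 0, 0.1) * EnvGen.kr(resume, doneAction: 2);
}.play;
)
```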
Our second presenter this coming Wednesday is @dkmayer:
Daniel Mayer (*1967) is a composer who focuses on works involving electro-acoustics. Numerous international electronic and contemporary music festivals have programmed his pieces. In 2007, he received the Giga-Hertz production prize for electronic music at ZKM Karlsruhe. Daniel Mayer completed master’s studies in pure mathematics, philosophy, and music composition. Since 2011, he has worked at IEM Graz, teaching electro-acoustic composition as a visiting professor since October 2016. In the winter term 2022/23, he was Edgard-Varèse guest professor of the DAAD at TU Berlin.
The talk addresses a topic that occupies me from a compositional, scientific, and didactic perspective: the tension between a canonical corpus of knowledge and techniques and the need for the transgression that drives any artistic endeavor. The nature of an environment like SuperCollider particularly supports the latter, offering a vast playground for experimentation and exploration.
Our last SC meetup in 2024 (2024-12-18T18:00:00Z) will feature @Thor_Madsen:
Thor Madsen is a Danish guitarist, producer, and composer renowned for his versatility across jazz, electronic, and fusion genres. His innovative approach blends technical prowess with creative experimentation, contributing significantly to the global music scene. As an associate professor at the Danish National Academy of Music, he imparts his extensive knowledge to aspiring musicians. Thor’s latest creation, Thormulator, is an interactive performance app that enhances solo performances through real-time, responsive musical interactions, exemplifying his commitment to advancing music technology.
////
As the creator of Thormulator, I am excited to introduce this innovative tool for improvisation. Thormulator enhances solo performances with real-time, interactive musical responses. By interpreting your MIDI and audio inputs, it dynamically modifies sound, alters structures, and generates new audio, fostering a playful musical dialogue.
In this presentation I will talk about the thoughts behind Thormulator and demonstrate its use through videos and a walkthrough of the user interface.
Due to a sudden cancellation, we have one empty slot for tomorrow’s Notam SC Meetup! If anyone has anything they’d like to share, feel free to send me a DM! Otherwise we can allocate more time for the discussion after Thor’s presentation, I can present some of my recent work, or we can let things flow in whatever direction they want to. Looking forward to seeing you tomorrow!!