NOTAM Meetups: Fall 2023

Greetings fellow SuperColliderers!

I have the pleasure of hosting Notam’s monthly SuperCollider meetups for the coming season! The meetings will take place online on Zoom at 7pm CET (Oslo time):

SuperCollider meetup:
Meeting ID: 974 3258 0111
Link: Launch Meeting - Zoom

At these meetups, SuperCollider users of all skill levels get together to share ideas and frustrations, help each other out, and show off projects and workflows in an inspiring and friendly way.

If you have accessibility-related requests or questions about the meetup, please send me a message here and I’ll do what I can to address them!

All community events at Notam fall under the NOTAM Code of Conduct to make them as inclusive as possible. Please follow the link and read the full Code of Conduct before joining an event: Notam Code of Conduct - Notam

I’ll return here every once in a while to advertise forthcoming meetups, follow up on discussions, and so on, so be sure to follow this thread via the :bell: on the right!

The meetup dates for the fall will be:

2023-09-20T17:00:00Z
2023-10-17T17:00:00Z
2023-11-13T18:00:00Z
2023-12-11T18:00:00Z

Happy SuperCollisions in the meantime!


I’m very pleased to be able to host @nathan at our meetup next week - here’s what he plans to share:

“In this talk, I will discuss my album Haywire Frontier, released on September 9th on the record label tokinogake. There are lots of strands to the album, but in particular I’ll talk about a personal approach to composing rhythmic material that coalesced during the project. This approach was borne out of an interest in rhythms that are neither completely grid-based nor completely random, and that are also very easy to write in sclang.”


Nathan Ho (he/him) is an SF Bay Area-based electronic musician, educator, and specialist in digital signal processing. He draws from his mathematical and scientific background, using a vocabulary of custom synthesis and algorithmic sequencing techniques to make aggressive, maximalist music spanning academia to rave. He is a former SuperCollider developer, and now maintains a YouTube channel (SynthDef) and blog which host educational materials on sound design, synthesis, and DSP.


Looking forward to seeing you all in a few hours! :nerd_face:


Here is the code I used for the audio examples. Sorry about the mixed tabs and spaces.

(
SynthDef(\kick, {
	var snd, duration, velocity;
	duration = \duration.kr(1.0);
	velocity = duration.linlin(1, 0, 1, 0);
	snd = SinOsc.ar(
		60
		* (1 + (8 * Env.perc(0, 0.001).ar * velocity))
		* (1 + (8 * Env.perc(0, 0.03).ar * velocity))
		* (1 + (0.5 * Env.perc(0, 0.3).ar * velocity))
		* ([1, -1] * 0.1).midiratio
	);
	snd = snd * (1 + (Env.perc(0, 0.03).ar * velocity));
	snd = snd + (BPF.ar(Hasher.ar(Sweep.ar), 8321, 0.3) * Env.perc(0.001, 0.003).ar * 1.dbamp * velocity);
	snd = snd.tanh;
	snd = snd + (BPF.ar(Hasher.ar(Sweep.ar), 3321, 0.3) * Env.perc(0.03, 0.05).ar * -10.dbamp * velocity);
	snd = snd * velocity.sqrt;
	snd = snd + GVerb.ar(snd.sum * -30.dbamp, 30, 1);
	snd = snd * Env.perc(0.001, duration.min(0.6)).ar(Done.freeSelf);
	snd = snd * -3.dbamp;
	Out.ar(\out.kr(0), snd);
}).add;

SynthDef(\snare, {
	var snd;
	snd = SinOsc.ar(
		260
		* (1 + (3 * Env.perc(0.001, 0.04, curve: -6).ar))
		* [1, 4.3, 8.4]
	);
	snd = snd * [0, -8, -12].dbamp;
	snd = snd * Env.perc(0.001, [0.3, 0.1, 0.03]).ar;
	snd = snd.sum;
	snd = snd + (BPF.ar(WhiteNoise.ar, 2310, 0.25) * Env.perc(0.03, 0.3).ar * 12.dbamp);
	snd = snd + (BPF.ar(WhiteNoise.ar, 7310, 0.3) * Env.perc(0.003, 0.04).ar * 8.dbamp);
	snd = snd.tanh;
	snd = snd + PitchShift.ar(snd, 0.06, 2.4);
	snd = snd + PitchShift.ar(snd * -5.dbamp, 0.08, 1.3);
	snd = snd * Env.linen(0.001, 0.23, 0.01).ar(Done.freeSelf);
	snd = snd * -7.dbamp;
	snd = snd ! 2;
	Out.ar(\out.kr(0), snd);
}).add;
)

// Pwhite-into-dur rhythm: BORING
(
Routine({
    20.do {
        s.bind { Synth(\kick) };
        rrand(0.03, 0.6).wait;
    };
}).play;
)

// Accelerating rhythm
(
Routine({
    3.do {
        (0.75 ** (0..10)).do { |duration|
            s.bind { Synth(\kick) };
            duration.wait;
        };
    };
}).play;
)

// Recursion, level 1
(
Routine({
    ((0.8 ** (0..8)).normalizeSum * 15.0).do { |phraseDuration|
        ((0.75 ** (0..10)).normalizeSum * phraseDuration).do { |duration|
            s.bind { Synth(\kick) };
            duration.wait;
        };
    };
}).play;
)

// Velocity variation
(
Routine({
    ((0.8 ** (0..8)).normalizeSum * 15.0).do { |phraseDuration|
        ((0.75 ** (0..10)).normalizeSum * phraseDuration).do { |duration|
            s.bind { Synth(\kick, [duration: duration]) };
            duration.wait;
        };
    };
}).play;
)

// Interruption with snare
(
Routine({
    ((0.8 ** (0..8)).normalizeSum * 15.0).do { |phraseDuration|
        ((0.75 ** (0..10)).normalizeSum * phraseDuration).do { |duration|
            s.bind { Synth(\kick, [duration: duration]) };
            duration.wait;
        };
        if(0.5.coin) {
            s.bind { Synth(\snare) };
            0.28.wait;
        };
    };
}).play;
)

// Three levels of acceleration
(
Routine({
	((0.8 ** (0..8)).normalizeSum * 50.0).do { |sectionDuration|
		((0.8 ** (0..8)).normalizeSum * sectionDuration).do { |phraseDuration|
			((0.75 ** (0..10)).normalizeSum * phraseDuration).do { |duration|
				s.bind { Synth(\kick, [duration: duration]) };
				duration.wait;
			};
			if(0.5.coin) {
				s.bind { Synth(\snare) };
				0.28.wait;
			};
		};
	};
}).play;
)

// Hierarchy of mixed acceleration/deceleration
(
Routine({
	((0.8 ** (0..8)).normalizeSum * 50.0).do { |sectionDuration|
		((0.8 ** (0..8)).normalizeSum.reverse * sectionDuration).do { |phraseDuration|
			((0.75 ** (0..10)).normalizeSum * phraseDuration).do { |duration|
				s.bind { Synth(\kick, [duration: duration]) };
				duration.wait;
			};
		};
	};
}).play;
)

// Sorted random rhythm
(
Routine({
    (({ exprand(0.01, 0.5) } ! 20).sort.normalizeSum * 3.5).do { |duration|
        s.bind { Synth(\kick, [duration: duration]) };
        duration.wait;
    };
}).play;
)

// Self-similar rhythms, a la John Cage
(
Routine({
    var row;
    row = [0.2, 1, 1.4, 0.8, 1].normalizeSum;
    (row * 20.0).do { |sectionDuration|
        (row * sectionDuration).do { |phraseDuration|
            (row * phraseDuration).do { |duration|
                s.bind { Synth(\kick, [duration: duration]) };
                duration.wait;
            };
        };
    };
}).play;
)

// Sinusoidal rhythm
(
Routine({
    30.do { |i|
        var duration;
        duration = cos(i * 2pi / 20).linlin(-1, 1, 0.05, 0.2);
        s.bind { Synth(\kick, [duration: duration]) };
        duration.wait;
    };
}).play;
)

// Gridless discrete rhythms
(
Routine({
    30.do { |i|
        var duration;
        duration = [0.02, 0.1, 0.432].choose;
        s.bind { Synth(\kick, [duration: duration]) };
        duration.wait;
    };
}).play;
)

For anyone who missed the talk, I’ve expanded it into a blog post.


Hello again!

The next Notam SC Meetup is this coming Tuesday, 2023-10-17T17:00:00Z! We’re going to continue with SC user presentations, but feel free to bring any questions, works-in-progress, or discussion topics - there will be plenty of time for them!

Call it shameless self-promotion or leading by example: our next meetup will feature a presentation by me! (I promise I have other users scheduled later this fall and into the spring.) I’m going to share the work I do with the experimental metal band YAWN and the various ways SC has helped shape both our sound in the studio and our live performances. Our live setup relies on a big ol’ SC program that synchronizes our click tracks, backing tracks, light show, and digital amplifier automation, and also gives us access to a collection of digital instruments we use in improvised passages. I’m currently in the process of redesigning the whole setup, so feedback and suggestions are very welcome!

Bandcamp
Instagram
My website
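
To give a flavour of what I mean by “synchronize”: here’s a toy sketch of the basic idea - one TempoClock driving both a click-track pattern and the entry of a backing track. It is nothing like the actual YAWN program (and the file path is obviously hypothetical), just the minimal plumbing:

(
s.waitForBoot {
	// a simple click and a simple stereo buffer player
	SynthDef(\click, {
		var snd = SinOsc.ar(\freq.kr(1500)) * Env.perc(0.001, 0.05).ar(Done.freeSelf);
		Out.ar(\out.kr(0), snd ! 2 * -10.dbamp);
	}).add;
	SynthDef(\track, {
		var snd = PlayBuf.ar(2, \bufnum.kr, BufRateScale.kr(\bufnum.kr), doneAction: Done.freeSelf);
		Out.ar(\out.kr(0), snd);
	}).add;

	~backing = Buffer.read(s, "/path/to/backing-track.wav"); // hypothetical path
	s.sync;

	~clock = TempoClock(132 / 60); // one clock for everything

	// click track in 4/4 with an accented downbeat, quantized to the bar
	~click = Pbind(
		\instrument, \click,
		\dur, 1,
		\freq, Pseq([2000, 1500, 1500, 1500], inf)
	).play(~clock, quant: 4);

	// the backing track enters on the next bar boundary of the same clock
	~clock.play({ Synth(\track, [bufnum: ~backing]); nil }, quant: 4);
};
)

In the real setup the light cues and amp automation are scheduled on that same clock as well, but that’s a story for the meetup.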


Mike McCormick (he/him) is an artist and programmer working with sound, text, and visual media. Equally inspired by the constrained writing techniques of the Oulipo and the technomaterialism espoused by Xenofeminism, his work combines custom algorithms with human performers and autobiographical material to explore human intimacy, male vulnerability, and our relationship with emerging technologies. He grew up in Canada’s subarctic, lived nomadically for a decade, and has been based in Oslo since 2017.


…and if SC-powered metal isn’t your thing - fret not (that’s a guitar joke)! Tomorrow’s meetup will also feature a presentation by @scztt - what a treat!

I’ll share my overall setup for performance and music-making, with a walk-through of some custom classes and tools, and how I organize projects. And, if all goes well, I’ll play some cool sound examples as well.


Scott Carver (they/them) is a sound artist, musician, and programmer living in Berlin.
https://www.artificia.org


Thanks so much to @scztt and all those in attendance for a great Meetup yesterday! Someone asked about the noisy synth I showed briefly; here’s the SynthDef:

SynthDef(\xFeedNoise,{
	var bufnum = \bufnum.kr;
	var val    = FluidBufToKr.kr(bufnum,0,33); // the 33 control values read from the buffer (expected 0..1)
	var sin    = SinOsc.ar(val[1].linexp(0,1,1,12000),                mul: val[2]);
	var saw    = VarSaw.ar(val[3].linexp(0,1,1,12000), width: val[4], mul: val[5]);
	var square = LFPulse.ar(val[6].linexp(0,1,1,12000),width: val[7], mul: val[8] * 2,add:-1);
	var tri    = LFTri.ar(val[9].linexp(0,1,1,12000),                 mul: val[10]);
	var osc    = SelectX.ar(val[0].linlin(0,1,0,3),[sin,saw,square,tri]);
	var noise0 = SelectX.ar(val[11].linlin(0,1,0,2),[
		LFNoise0.ar(val[12].linlin(0,1,0.2,10)),
		LFNoise1.ar(val[13].linlin(0,1,0.2,10)),
		LFNoise2.ar(val[14].linlin(0,1,0.2,10))
	]);
	var noise1 = SelectX.ar(val[15].linlin(0,1,0,2),[
		LFNoise0.ar(val[16].linlin(0,1,0.2,10)),
		LFNoise1.ar(val[17].linlin(0,1,0.2,10)),
		LFNoise2.ar(val[18].linlin(0,1,0.2,10))
	]);
	var sig, sigL, sigR;
	
	var local = LocalIn.ar(2); // feedback from the previous block (paired with LocalOut below)
	
	sigL = VarSaw.ar(
		freq: osc.linexp(-1,1,20,10000) * local[0].linlin(-1,1,0.01,200) + (val[19].linexp(0,1,80,2000) * noise0.range(1,val[20].linlin(0,1,2,10))),
		width:local[1].linlin(-1,1,0.01,0.8),
		mul: val[21]
	);
	sigL = RLPF.ar(sigL,val[22].linexp(0,1,20,20000),val[23].linlin(0,1,2.sqrt,0.01)).tanh;
	sigL = sigL + CombC.ar(sigL,0.25,val[24].linexp(0,1,0.01,0.25).lag(0.01),val[25]);
	
	sigR = VarSaw.ar(
		freq: osc.linexp(-1,1,20,10000) * local[1].linlin(-1,1,0.01,200) + (val[26].linexp(0,1,80,2000) * noise1.range(1,val[27].linlin(0,1,2,10))),
		width:local[0].linlin(-1,1,0.01,0.8),
		mul: val[28]
	);
	sigR = RLPF.ar(sigR,val[29].linexp(0,1,20,20000),val[30].linlin(0,1,2.sqrt,0.01)).tanh;
	sigR = sigR + CombC.ar(sigR,0.25,val[31].linlin(0,1,0.01,0.25).lag(0.01),val[32]);
	
	sig = [sigL, sigR];
	LocalOut.ar(sig); // send the stereo signal back for cross-channel feedback
	sig = LeakDC.ar(sig).tanh * -6.dbamp;
	sig = sig * Env.asr().ar(2,\gate.kr(1));
	Out.ar(\out.kr(), sig  * \amp.kr(0));
}).add

The FluidBufToKr class comes from the brilliant FluCoMa library (s/o @tremblap), allowing me to manipulate 33 control values of this synth via a single XY pad - super convenient when one of my hands is busy wrangling a guitar/wiping the sweat out of my eyes…
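
If you want to poke at the synth without the XY-pad mapping, here’s the minimal plumbing: allocate a 33-frame buffer, write values into it, and the synth picks them up via FluidBufToKr. (This assumes the SynthDef above has been added and that FluCoMa is installed; note that \amp defaults to 0, so you have to turn it up.)

(
~params = Buffer.alloc(s, 33);        // one frame per control value
~params.setn(0, { 1.0.rand } ! 33);   // start from a random setting
~noise = Synth(\xFeedNoise, [bufnum: ~params, amp: -12.dbamp]);
)

// "move the pad" by hand: overwrite individual values while the synth runs,
// e.g. val[22] (left-channel filter cutoff) and val[24] (left comb delay time)
~params.setn(22, [0.8]);
~params.setn(24, [0.1]);

// fade out (the asr envelope has doneAction 2) and clean up
~noise.set(\gate, 0);
~params.free;

The actual two-values-to-33-values mapping is the part covered in the recommendations below.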

For some insight into this technique, I recommend @tedmoore’s demo of the basic principle and thereafter @Sam_Pluta’s research with parallel models (an approach I’m also using).

See you all next month! :slight_smile:


@Mike_McCormick it was me who asked - thank you for posting the code. @nathan thank you for your earlier code too. I’m finding them a great learning tool for seeing how others code and for getting ideas to steal/borrow/develop :smiley:


On Monday’s meetup we have the pleasure of hearing first from @semiquaver:

“Return to Tomorrow” is a kind of opera based on a Star Trek episode, written over a period of years in sclang, with “vocals” rendered by SynthesizerV. A voice emanates from deep within a dead planet - a voice in need of a body… a drama about possession and oblivion ensues. Technically, the piece required developing some code for manually reflowing rhythms (since the composition is organized around speech rhythms rather than beats/bars), as well as tools for making video stills and SynthesizerV project files.

///

Michael Webster is a composer and recording artist living in Los Angeles with a special interest in the rhythm of language. In addition to making his own records, operas and concert music, he has collaborated with many poets and visual artists such as Eileen Myles and Mungo Thomson, as well as musicians like Van Dyke Parks, the Circle Jerks, Tracy Chapman and Winnie the Pooh.


On Monday we can also look forward to a presentation by @t36s:

I’d like to talk about using Patterns in SuperCollider with SuperClean.

https://danielmkarlsson.com/superclean-installparty/
https://www.youtube.com/watch?v=7pRIzkU8fzg

///

Daniel M Karlsson is a composer focused primarily on texture and timbre. He works extensively with algorithmic composition. He is a Marxist transhumanist singularitarian, meaning his biggest short-term goal is Economic Democracy for everyone. A simpler wording would be “everything for everyone”. The practical breakdown includes, but is not limited to: free access to food, housing, clothing, education, transportation, and computation. He is entirely anti-capitalist and regards any and all speculation as fundamentally unacceptable. His biggest long-term goal is a fun-times fully automated luxury space communism with optional mind uploading for all.
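
SuperClean itself deserves Daniel’s links above, but if Patterns are new to you, here’s the plain-vanilla starting point: the “Pwhite-into-dur” rhythm from Nathan’s examples earlier in this thread, written as a Pbind (it assumes his \kick SynthDef has been added):

(
p = Pbind(
	\instrument, \kick,
	\dur, Pwhite(0.03, 0.6, inf),  // seconds between kicks (on the default 60 bpm clock)
	\duration, Pkey(\dur)          // also pass the value to the SynthDef's \duration control
).play;
)

p.stop;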


The last SC Meetup of 2023 is gonna be a banger!! First up is @Forces:

Joonas Siren (b. 1983) is a Helsinki-based interdisciplinary artist and experimental electronic musician. They release music as Forces and frequently perform live-coding concerts in Finland and abroad. So far there have been six Forces albums on various international record labels; the latest, Chimæras, was released in August 2023 on the Slovakian label mappa.

SuperCollider has been Siren’s main tool for composition since 2016, and also a frequent tool for building sound installations in gallery settings. At the Notam SuperCollider meetup, Joonas will show some of the methods and workflows of this installation practice, as well as the various SuperCollider techniques used on the Forces albums.


After Joonas, our last SC presenter of 2023 is none other than Mads Kjeldgaard!!

Mads Kjeldgaard (b. 1988 Horsens, Denmark) is an electronic music composer. His main fields of interest are computer music, interactive sound, and algorithmic composition. He has studied Electronic Music Composition at the Danish Institute of Electronic Music (DIEM) at the Royal Academy of Music and has a degree in journalism from the Danish School of Media and Journalism. He worked for a number of years at The Norwegian Center for Arts and Technology (aka Notam) and is now employed as a software developer at Torso Electronics. He is a member of The Danish Composers’ Society.

In this talk, Mads will explore the idea of the composition as an instrument. He will showcase various techniques and code snippets for creating playable, large-scale musical structures that you can perform with on stage using SuperCollider.


Starts in less than an hour!