Musical form, creative tools

I’d like to pick up a couple of the posts from the Dec '23 sound design thread. It’s an extremely interesting topic, about something that’s often assumed to be transparent but isn’t, and it doesn’t quite fit into the sound design conversation.

I’ve been giving this exact advice to students for years, and have sometimes even demonstrated it in lessons, but I find that students are highly resistant to the idea. Not because they actively disagree with it – they acknowledge that it’s a good idea – they just… don’t do it. Every new project begins from bar 1, and then they ask “I’m not sure how to continue” – never “here’s where I want to go with this piece, let’s talk about ways to get there.” So I’ve had plenty of chances to think about why.

When you open a new DAW project, one of the first things you see is a timeline, beginning with bar 1, proceeding to bar 2 etc. This conditions the mind to think in terms of linear time.

My first composition teacher used to say things like, “Hm, here I think you need about three more beats to let this breathe a little.” Then he would take a sheet of staff paper from me, physically cut off just enough for those three beats, and scotch-tape it onto my sketch, so there was a little barn door: you read the old music, then the insertion, then flip the taped bit over to get back to what I had written before. This image of the malleability of musical material is burned into my brain: nothing is set in stone. The little flippy piece of paper looks kind of dumb, but it makes the point better than an hour of talking.

DAWs, on the other hand, promise infinite malleability, but in practice, cut/paste is quite clumsy in them, isn’t it? Your MIDI part has a keyswitch just before the barline – oops, it didn’t cut/move. Automation channels – don’t forget to set your selection options before selecting! So we just don’t. I tell the students to go out to bar 150 and build the climax, but, “eh, here’s bar 1 right in front of me, guess I’ll start here, then I don’t have to deal with those cut/paste problems.”

Also, imagining new variants on musical material is a process of active listening. Hitting space bar on the computer is passive listening. I saw a Hollywood guy in a video say, “When you listen to your intro a hundred times, what you’re really doing is training your mind to believe that the music stops there.” Get away from the machine – sing it in your head, and let the mental energy carry forward. (One nice side effect of practicing live coding is the ability to think while noise is happening.)

And SC… gosh. SC excels at texture. Form is quite hard. I won’t even say I have good solutions for that. I did a sectional sort-of timeline once for a piece in 2009… then used the same framework in 2010 and quickly pushed it a bit past its limits (with two timelines running concurrently). But now I’m mostly improvising, meaning I’m working in linear time – exactly what I tell my students not to do :laughing:
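
(Purely to illustrate the idea, not that framework: a minimal sketch of two concurrent timelines in plain sclang might look something like this, with every section length and bit of material invented for the example.)

(
// two independent "timelines", each a Routine stepping through its own sections
~makeTimeline = { |name, durs, degrees|
	Routine {
		durs.do { |dur, i|
			var player = Pbind(
				\degree, Pseq(degrees, inf),
				\octave, 4 + i,    // each section a little higher, just to hear the boundary
				\dur, 0.25,
				\amp, 0.1
			).play;
			"% section %".format(name, i).postln;
			dur.wait;
			player.stop;
		}
	}
};

// both run concurrently on the default clock
~timelineA = ~makeTimeline.("A", [8, 8, 16], [0, 2, 4, 7]).play;
~timelineB = ~makeTimeline.("B", [12, 20], [7, 5, 3]).play;
)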

hjh


This is an interesting topic indeed, and probably itself consists of multiple topics:

  • one could be the topic of compositional strategies, e.g.

    • working backwards from a climax (as mentioned above)
    • communication/feedback between small “generative kernels”
    • strategies for creating a gradual mood shift (overlapping of sections? gradually introducing/reducing randomness over time?)
    • approaches to generating larger scale form/longer time coherence (e.g. “recording” certain generative decisions and later reusing them to create something like an A-B-A’ form; see the first sketch below)
  • another one could be about tools/constructs/quarks one can use in the supercollider language to organize larger generative pieces over time, things like

    • patterns scheduled on timelines
    • tasks which are activated/paused (see the second sketch below)
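
To make the last strategy in the first group a bit more concrete, here is a hedged sketch (all names and material are invented here) of one way to “record” a generative decision in plain sclang: fix the random seed of the A material with Pseed, so the same random choices can be re-embedded later as A’.

(
~a = Pseed(2024, Pbind(
	\degree, Pwhite(0, 7, 16),   // 16 random scale degrees, reproducible thanks to the seed
	\dur, 0.25
));
~b = Pbind(\degree, Pseq([7, 5, 3, 2], 2), \dur, 0.5);

// A - B - A': the reprise repeats the same "decisions", shifted up an octave
Pseq([~a, ~b, Pbindf(~a, \octave, 6)]).play;
)

The same trick works with any of the random patterns, since Pseed just resets the random generator of the thread that embeds it.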
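
And for the second group, a similarly hedged sketch (again, everything here is invented for illustration) of patterns scheduled at fixed points on a clock, plus a task that is paused and resumed:

(
var clock = TempoClock(2);    // 120 bpm

// a looping task that can be paused and resumed later
~pad = Task {
	loop {
		(degree: [0, 4], sustain: 3, amp: 0.08).play;
		4.wait;
	}
}.play(clock);

// timeline: melody enters at beat 16, pad pauses at beat 32 and resumes at beat 48
clock.schedAbs(16, { ~melody = Pbind(\degree, Pbrown(0, 7, 1), \dur, 0.5).play(clock); nil });
clock.schedAbs(32, { ~pad.pause; nil });
clock.schedAbs(48, { ~pad.resume; nil });
)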

In SC, I often find that working forwards in time involves just whacking new code on the end: it is easy and requires little effort, though it gets messy. Working backwards, by contrast, usually involves some form of refactoring and is quite painful.

A little while ago, I made a quark thing (GitHub - JordanHendersonMusic/JX-supercollider: A supercollider quark for realtime interactive system.) that lets you specify relationships between data sources and data sinks (over OSC, usually cross-program but also within supercollider), designed for live audio-visual + instrumental works. Each set of relationships between sources and sinks is called a map, and maps can themselves be composed together. Here is a little example of that: each map contains numerous mappings between source and sink (or generated data) and is expressed in UGens, so it can be arbitrarily complex.

JXOscMapperSynth({ |src|
	// four map makers (defined elsewhere), each producing a full set of
	// source -> sink mappings, i.e. a "section"
	var a = mapAmk.makeMap(src);
	var b = mapBmk.makeMap(src);
	var c = mapCmk.makeMap(src);
	var d = mapDmk.makeMap(src);

	// interpolate between the four maps with the mouse position (index 0..3)
	var lerp = JXOscMapLinSelectX(MouseX.kr(0, 3), a, b, c, d);

	JXOscMapOutput.kr(lerp);
});

This lets you build up structure in quite an interesting way, as you can also use data sources to interpolate and otherwise mutate maps (sections). This means it is possible to have nested structure, and to control the structural changes with both live and fixed data. Further, rearranging the structure is trivial.

The negative is that you lose a lot of control over the smaller details of the piece (here is a link to a piece made with it: https://youtu.be/tRFPDGTPHSk): since all the maps are evaluated on the server, you can’t use patterns or the other usual structural mechanisms.
