Avoiding straight lines in electronic sounds

I’ve been reading a lot of critiques of the Industrial Revolution recently. I’d previously been a bit dismissive of these ideas, thinking they were a forlorn attempt to stop industrialisation; I now realise they are more subtle. This was all provoked by going to a visual arts exhibition that showed the influence of the Industrial Revolution on fine art. Two people in particular I am researching, John Ruskin and Frank Lloyd Wright, both advocated the influence of nature on architectural design and art. Frank Lloyd Wright called this ‘organic architecture’, long before Whole Foods embraced the term to sell mangos. Even though at a glance his architecture might look quite angular, his approach was more subtle.

This preamble serves a point. I’m not simply treating this as a technical problem but as a method of exploration. The critique that has really hit me is the use of straight lines, which rarely if ever appear in nature (light I suppose is an exception). This made me realise that straight lines are everywhere in electronic sound. Even when we use something like an LFO, it is modulating around a straight line. When oscillators are detuned, they are essentially detuned into straight parallel lines.

What strikes me is: why are we doing this? A string quartet wouldn’t dream of playing music this way. The fact that percussionists often have to play like this is not a strength of percussion. It seems like something that is built into our engineering practice because it’s simpler and more akin to how we think computers work.

I am keen to carry out some experiments. I thought I would put the question out there into this community, as other people may have thought, or be thinking, along similar lines, and likewise, there may be critiques of my critique.

My question is: where does one start with lines whose curves do not repeat in periodic cycles? I am immediately drawn to the idea of Perlin noise, for which there is a UGen. However, I also thought perhaps some forms of randomisation with an Env might work as well. If it is of interest to people, I will try to post some results here. Please share any ideas or thoughts you have on this subject.

3 Likes

This is a really interesting topic!

Just some thoughts.

Straightness here seems to mean Euclidean; are you familiar with the concept of a geodesic and different spatial geometries? For example, on a sphere there are no parallel lines, and light bends around mass (gravitational lensing), meaning ‘straightness’ is a function of space’s topology.

There is a history of computer music that asks, ‘what would music sound like if made by a computer rather than a human?’ — a kind of alien phenomenology. It is an interesting question, but not the only interesting question concerning computers! Particularly given that ‘we’ have made computers and have at least some agency over what form they take.

I think this is a little more nuanced. Small ensembles do follow very strict rules. If your goal (metric) is to draw a line between two points in the shortest distance, then the solution is a straight line (or more broadly a geodesic). Here, it sounds like you are equating metronomic playing with ‘Euclidean straightness’. Ensembles are not aiming to play metronomically, but together. That is their metric. To push the metaphor, the ‘lines’ (correct paths) run between and across the ensemble, navigated through communication and our embodied place within the physical-social world. The social dynamics and content of the music form this topology. Playing together, then, requires navigating this space and drawing lines, which from inside the space (inside the ensemble) will appear straight. The point I’m making is that straightness is relative.

4 Likes

Thanks for your very thoughtful response Jordan.

Straightness here seems to mean Euclidean; are you familiar with the concept of a geodesic and different spatial geometries? For example, on a sphere there are no parallel lines, and light bends around mass (gravitational lensing), meaning ‘straightness’ is a function of space’s topology.

I did think that spacetime contradicts the idea of straight lines, but I suppose I am thinking about this on the level of human perception, rather than the deep nature of the universe. If anything, that lends even more credence to the idea that straight lines exist in the human mind rather than in the world we came from. My maths isn’t great, but Euclidean geometry and geodesic design are familiar to me.

Here, it sounds like you are equating metronomic playing with ‘Euclidean straightness’. Ensembles are not aiming to play metronomically, but together.

I didn’t really have timing in mind. I was more thinking in terms of amplitude and intonation. A string quartet is only as good as the player with the worst intonation. While it is much more obvious in Indian classical music, where players use gamakas to express and move towards swaras (notes), a good string player (and excuse me if this seems patronising, because if I remember correctly you are a string player) will often approach notes ‘imperfectly’ by coming in sharp or flat.

In Sheku’s interpretation of Elgar’s Nimrod, he glides into each note in a very distinctive way. Perhaps it’s overdone for some people’s tastes, but I have never been able to forget the way he plays this piece, nor his incredible ability to navigate the microtonality of intonation.

Let me pass over the question of computers making music. I think that is a rabbit hole I want to avoid, although it is undoubtedly interesting.

What I am really questioning is an aesthetic we seem to have unconsciously developed around computer music. Perhaps some of this comes from the restrictions of previous technologies that have kept our thinking constrained. Perhaps some of it comes from us internalising an aesthetic that is reinforced by living in a post-industrial society.

What I am curious about, is if one can make ‘organic architecture’ from concrete, glass, and steel, can one make ‘organic sounds’ from electronic mediums?

For example, could an amplitude envelope look more like a branch or a tree? Or the frequency of a sound look more like the gentle curve of an uneven horizon? What would that sound like, and what would it look like?

To take this out of the abstract for a moment, here’s some simple code that attempts just this. It’s not sophisticated, but it’s something.


(
a = Array.fill(1000, {rrand(0, 1000)}).sort; // 1000 random levels, sorted into a jagged but monotonic rise
Env.new(a).plot;
)

Now, my question is: why are we so often using straight lines in our amplitude envelopes, or at best simple curves or periodic oscillators, when we could be using lines with a more organic and imperfect nature? Have we developed an aesthetic without having experimented with alternatives, or without considering why we use the techniques we do?

1 Like

Use lowpass filters with extremely low cutoff values to smooth control signals, or even physical models (mass-spring models) instead of linear ramps?
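To sketch the first suggestion (a minimal, hypothetical example; the rates and ranges here are my own choices, not the poster’s): a stepped random control signal smoothed by a sub-audio lowpass filter, so the corners between values become curves.

(
// LFNoise0 alone would jump between values in flat steps;
// a lowpass at 0.5 Hz bends the steps into curves.
{
	var steps = LFNoise0.kr(4);
	var smooth = LPF.kr(steps, 0.5);
	SinOsc.ar(smooth.linexp(-1, 1, 200, 800)) * 0.1;
}.play;
)

A cheaper one-pole alternative is steps.lag(0.5), which smooths in a similar spirit.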

2 Likes

Do we really? I hope I don’t, and I tend to encourage students at every possible opportunity to avoid, or at least rethink, static repetitions/frequencies and simple linear developments, for the reasons you describe. Having said that, each musical situation is different, and sometimes you might want something very regular, static, empty or even boring, e.g., just as a contrast or as a minimalist thing for its own sake. Aesthetic aspects are dialectic, in a historical sense and possibly within artworks themselves.

To be more technical: my absolute favorite workhorse in this regard is LFDNoise3; you can make almost everything more organic with it, yet keep some overall directionality if you want to. Regarding Event Patterns, Pseg is one of my tools of choice: e.g., one can connect random points with sine segments. You can check out miSCellaneous_lib’s tutorial ‘Event patterns and LFOs’, which shows some control possibilities that can employ organic developments. BTW, all server-side possibilities (from physical modeling to LFDNoise3) can be used in SC-lang realtime control as well, via synchronous buses.
There also exist some dedicated interpolation extensions, including Bezier curves, which you can use in SC-lang or server-side (via buffers).
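A minimal sketch of both suggestions (the rates, ranges and durations are arbitrary choices of mine):

(
// LFDNoise3: smooth, non-repeating, cubic-interpolated noise as a pitch curve
{ SinOsc.ar(LFDNoise3.kr(0.3).linexp(-1, 1, 200, 800)) * 0.1 }.play;
)

(
// Pseg: connect random pitch points with sine-shaped segments
Pbind(
	\freq, Pseg(Pwhite(200, 600, inf), Pwhite(0.5, 2.0, inf), \sin),
	\dur, 0.1
).play;
)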

3 Likes

I have found adding a bodily perspective to computer music really difficult, as most of the time we rely on digital signals communicated via keyboard and mouse instead of using an interface designed as an instrument.
Recently I have been really affected by music driven by slow breathing or bird sounds. I found Robert Fripp’s mid-1990s Soundscapes a good inspiration for this; they are essentially live guitar loops with some Eventide H9000 magic, and they have something ethereal for me.

One way of taking over some aspects of this music was not to loop a sound (which of course is also very appealing), but instead to loop control signals. I found this can give a feeling of “reminiscence” and a natural but mutating repetition, which I find important for something I want to listen to over a longer period of time.
Although I enjoy techniques such as LFDNoise, I often found them too “chaotic”: they provide movement but no repetition. I start to get lost and either dissect the mixture of the moment or listen to the modulation, instead of adopting the breathing of the music and getting in sync with it, feeling and adapting to its cycles.

Looping or delaying some control signals, combined with some basic phasing techniques can already give some diverse outcome which keeps on giving over a longer period of time.

We can combine this with a simple technique, such as repeatedly switching between two slow-running SinOscs as an LFO:

(
Ndef(\lfo, {
	// simply select between two SinOsc
	// running at different frequencies
	var lfo = SelectX.ar(
		which: ToggleFF.ar(CombC.ar(
			in: Impulse.ar(0.1),
			maxdelaytime: 6.0,
			delaytime: 6.0,
			decaytime: 22.0,
		)).lag(0.01),
		array: [
			// some basic oscillators
			SinOsc.ar(8.reciprocal),
			SinOsc.ar(10.reciprocal),
	]);
	lfo;
}).scope;
)

We can now use this LFO multiple times on the same oscillator, which gets ring modulated twice - so nothing too fancy going on here.

(
Ndef(\schwalben, {
	var sig = PMOsc.ar(
		carfreq: 40,
		modfreq: CombC.ar(
			in: Ndef.ar(\lfo).linexp(-1.0, 1.0, 3000.0, 3400.0),
			maxdelaytime: 5.0,
			delaytime: [5.0, 4.7],
			decaytime: 40.0,
		),
		pmindex: CombC.ar(
			in: Ndef.ar(\lfo),
			maxdelaytime: 2.0,
			delaytime: [2.0, 3.0],
			decaytime: 38.0
		) * 2,
		modphase: CombC.ar(
			in: Ndef.ar(\lfo),
			maxdelaytime: 3.0,
			delaytime: [3.0, 2.7],
			decaytime: 32.0
		) * 2,
	) * SinOsc.ar(0.5, phase: Ndef.ar(\lfo) + [0, pi/2]);
	var amp = CombC.ar(
		in: Ndef.ar(\lfo),
		maxdelaytime: [4.0],
		delaytime: [3.3, 3.9],
		decaytime: 48.0
	).linexp(-1, 1, 0.1, 1.0).clip2(1);
	sig * amp * \amp.kr(0.4);
}).play(fadeTime: 4.0);
)

// listen to it - when you've had enough,
// remove action from the system;
// it will take some time to cool down
Ndef(\lfo, {Silent.ar})

// alternatively use a simple lfo
Ndef(\lfo, {SinOscFB.ar(0.1, 1.3)});

// start lfo again to get more action
// or stop here

Ndef(\schwalben).stop(fadeTime: 10.0);

I found that by adding the state of a delayed sound, and therefore memory, to the control of a sound, it is possible to create mutating sounds which do not drift away like purely random ones. The sound above is completely deterministic; it does not have a random element in it.
And because everything within the system derives from a single source, it becomes a manageable, and therefore reactive and playable, sound, while still being a simple LFO at its core.

Additionally, I found that “computer music” is a genre where linearity can often be omitted, because there is no need to attach a controller, a physical movement, a GUI, or modulation curves to control the sound, most of which do not allow for discrete steps but instead provide an ordered sequence to step through.
Using code, or abstraction in general, allows you to overcome the limitation of physical movement to excite sound; at the same time, this also creates an uncanniness that may be hard to embrace.

If you are more into non-linear stuff, I can strongly recommend Daniel’s Fb1 class; see the Fb1 helpfile (SuperCollider 3.13.0 Help) and the miSCellaneous_lib repository on GitHub (dkmayer/miSCellaneous_lib): SuperCollider extensions and tutorials covering patterns, fx sequencing, granulation, demand-rate controlled (half) wavesets, wave folding, sieves, combined lang and server GUI control, live coding, single-sample feedback, ordinary differential equation audification, and generalized functional iteration synthesis.

2 Likes

I was trying to argue that trees do grow in straight lines, i.e., in the most energy-efficient path up towards the sun; if they didn’t, evolution would punish them. It is just that the world the tree finds itself in is changing and full of external stimuli — I imagine I would not agree with that critique of the Industrial Revolution you mentioned.

In my work, I’ve been using this idea of a behaviour map, where you can express signals as mappings between sources and sinks of data, and then interpolate between maps with some signal, nesting them arbitrarily. I wrote a quark for this, but it requires a static server graph, so it probably won’t be what you are looking for.

i.e., something like this (in pseudocode):

var map1 = { |srcs|
  (
    '/amp' : Env.perc.kr(trigger: Dust.kr(0.1)),
    '/freq' : Waveshaper.kr(srcs['/someOtherThing/amp'], somebuf)
       // waveshaping models some curved space.
  )
};

var map2 = { |srcs|
  (
    '/amp' : Env.sine.kr(trigger: Dust.kr(0.1)),
    '/freq' : srcs['/someOtherThingElse/freq'].tanh
  )
};

var map3 = { |srcs|
  map1.(srcs).blend(map2.(srcs), srcs['/someOtherOtherThing'])
    // interpolate between the maps depending on some other signal
};

This approach is about making signals and groupings of signals (behaviours) dependent on other signals or behaviours, creating complex links between systems that evolve over time. There is some randomness (LFNoise3 is great), but most of the behaviour comes from signals being contingent on others.
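Outside the quark, the basic idea of one signal being contingent on another can be sketched in plain SuperCollider (a hypothetical example of mine, not the quark itself; rates and ranges are arbitrary):

(
// One LFDNoise3 sets the rate of another, so the character of the
// pitch curve emerges from the coupling rather than from randomness alone.
{
	var rate = LFDNoise3.kr(0.1).linexp(-1, 1, 0.2, 8);
	var freq = LFDNoise3.kr(rate).linexp(-1, 1, 200, 1000);
	SinOsc.ar(freq) * 0.1;
}.play;
)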

Thanks dkmayer, there are some really helpful tips in there. I wasn’t aware of Pseg; that sounds very helpful for concepts such as gamakas. The same goes for LFDNoise, which I am going to experiment with.

Please don’t take anything I have written as a criticism of teachers. It is more a general cultural bias I tend to perceive. I intend this to be constructive rather than negative: e.g., what possible opportunities might we be overlooking?

Very nice. I agree with the use of randomisation. I’ve used it in a recent patch and, although the effect is nice, it sounds decidedly ‘digital’, as it’s not how organic sounds evolve. Your technique is interesting.

I was trying to argue that trees do grow in straight lines, i.e., in the most energy-efficient path up towards the sun; if they didn’t, evolution would punish them. It is just that the world the tree finds itself in is changing and full of external stimuli

How I read this is: the world would be simpler if it weren’t more complex. My point is, it is more complex. Similarly, a standard ADSR envelope might model the amplitude of a trumpet very well, but only if the trumpet, its player, and the room it was being played in were much simpler.

However, I can’t prove what I am saying, as it comes down to aesthetics. I can sort of see in my mind’s eye, and hear in my mind’s ear, something I would like to realise, but I also accept it could be imaginary, and perhaps I’m wrong. I will try to create something that demonstrates what I am referring to and takes it out of the abstract.

I love the code you’ve shared. I think such interactions between elements could lead to very interesting and unexpected macro-behaviours.

Okay, I have a very basic patch for exploring these ideas. All roads seem to lead to Perlin noise. What was interesting, after working with Perlin noise for about 30 minutes, was that when I stopped the patch I thought it was still going, because there was a sound in the street so similar in its modulation.

(
SynthDef(\seascape, {|out, freq=1200, pan=0, master=0.25|
	var perlin = Perlin3.ar(*{Line.ar(0, 1000, Rand(6000,30000))}.dup(3));
	var noise = LPF.ar(WhiteNoise.ar, freq + (perlin*1000));
	var sig = noise * master;
	Out.ar(out, Pan2.ar(sig, pan));
}).add
)

(
3.do({|i|
	Synth("seascape", [\pan, -1+i]);
});
)

I’d also draw attention to the SimplexNoise Quark, which is my go-to tool for periodic noise (in sclang, or in scsynth via buffers). It’s also handy for producing periodic noise that evolves over time, so the pattern gradually changes into something else.

I’ve often created melodies using this, or rhythms, or filter/control variations that repeat but evolve (slowly or quickly).

1 Like

Sure, all fine. I get the point of articulating and maybe exaggerating things; I did so myself. BTW, teachers should and must bear criticism as well :)

1 Like