Linking Parameters Together in Meaningful Ways

This isn’t specifically a question about SC, so no code is included, though if you’d find it helpful to look at some, let me know and I’ll upload! It’s more of a technical, theoretical, or process-based question: below I’ll be asking how to (and how you) go about linking parameters and values in meaningful ways, how to allow certain parameters to have ‘influence’ in deriving values for others, and generally how to go about researching and learning these techniques. It sits at a cross between aesthetic and practical decision making and a technical knowledge of math.

There was that thread on here recently about how important math skills are when it comes to making musical things (in SC), and I feel like my lack of knowledge of math beyond simple algebra is what’s impeding me here. (If you disagree and think it’s something else, I’d like to hear that too!)

My typical process is to bypass this issue by breaking out every parameter and dealing with it on a case-by-case, or moment-by-moment, basis. But I don’t want to do that anymore: eventually none of those parameters get tweaked, and all transformative aspects of sound manipulation tend to get pushed further down the chain, additively… adding elements for nuance. It becomes unwieldy, not to mention computationally expensive. So I’m thinking about how to minimize the number of controls for dialing in different timbres, achieving different timbre transformations, or generally some sort of musical expression, while under the hood coupling, correlating, etc. parameters.

For example, I’m working on coding up a SynthDef - more a synth than a synth-patch - that’s intended to be a bass drum synthesizer. It’s composed of two parallel signal chains that are eventually mixed down. One chain is intended for the low-end oomph and the other for the clicky and noisy part of the timbre. Each chain has its own envelope, and I’d like to have the release times as controls. Instead of an independent amplitude argument for each chain, I’m thinking of having a balance control that lets you tweak the ratio between them.
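As an aside for anyone reading along, here’s a minimal sketch of one common way such a balance control is often done - an equal-power crossfade via XFade2, which keeps the perceived loudness roughly constant across the sweep. The SynthDef name, control names, and the two toy “chains” here are purely illustrative, not the poster’s actual patch:

```supercollider
// Hypothetical sketch: equal-power balance between two parallel chains.
// `oomph` and `click` are stand-ins for the two chains described above.
(
SynthDef(\kickBalance, {
    var oomph, click, sig;
    oomph = SinOsc.ar(55) * Env.perc(0.005, \relBass.kr(0.4)).ar(2);
    click = HPF.ar(WhiteNoise.ar, 2000) * Env.perc(0.001, \relNoise.kr(0.05)).ar;
    // \balance: -1 = all oomph, +1 = all click; XFade2 is equal-power
    sig = XFade2.ar(oomph, click, \balance.kr(0));
    Out.ar(0, (sig * \amp.kr(0.5)).dup);
}).add;
)

Synth(\kickBalance, [\balance, -0.3]);
```

An equal-power law (rather than a plain linear mix) already compensates for part of the loudness dip in the middle of the balance range.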

Due to the way I’ve currently coded the chains, as well as the perception of loudness in different frequency ranges, when I increase the release time on the clicky/noisy chain it becomes quantitatively and qualitatively louder. So to compensate, I want to bring up the oomph chain a bit and generally lower the volume of the mixed signal just before output. I suppose I’m sort of trying to make an auto-gain controller, or some sort of compressor, but instead of just sticking a compressor toward the end of the SynthDef, I’m specifically interested in creating this behavior through what I hope will be (semi-)elegant math. The balance control should also have a nuanced effect at different release times, to further compensate for the volume.

I feel that my lack of formal math knowledge is making it far more difficult to deal with these perceptual aspects and aesthetic choices.

So… I know that I don’t know calculus, much about functions, etc. This I’m not too concerned with, because there are lots of resources out there for finding these things and explanations of them…

What I don’t know that I don’t know - and the reason I never clicked with math in school - is the practical application of things, the whys; meaning I don’t know, or have context for, creative reapplication, or the obvious choices of functions or equations.

I understand that this is a rather abstract question… what are some good resources that you’ve found for either learning this applied form of math, ideally in a musical context, or generally what’s your process for correlating and linking together parameters like this?

I’ve got a computer science intro to algorithms book coming that’s supposed to be a very practical take on explanations, but generally… I feel that there’s just something I’m missing here.

Up till now, I’ve been using brute stupidity, creating formulas of more and more complexity (read: longer and more inefficient) to make the numbers move in the direction, and (toward, more than at) the rate, that I want them to.

For instance, when relNoise increases, I also want to increase ampBass, but not linearly. Thus far I keep tacking on extra parenthesized blocks, which makes the formula longer and longer and far more difficult to read, and for all that it still doesn’t do exactly what I want. Then I look at the math equation for linear-to-exponential mapping, and it’s far more powerful, a fraction of the size of what I’ve got, etc. Unfortunately I seem to need a different type of curve.
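For reference, SC ships the linear-to-exponential mapping (and a more general curved one) as methods, so no hand-rolled formula is needed. A sketch, where the parameter names and ranges are made up for illustration:

```supercollider
// Illustrative mappings from relNoise (say 0.05..0.5 s) to ampBass (0.2..0.8).
// linexp gives an exponential-shaped curve; lincurve lets you dial in the
// shape via a curvature argument (same convention as Env's curve).
(
var relNoise = 0.25;
var a = relNoise.linexp(0.05, 0.5, 0.2, 0.8);      // exponential shape
var b = relNoise.lincurve(0.05, 0.5, 0.2, 0.8, 4); // adjustable curvature
[a, b].postln;
)
```

Both methods also work on UGens inside a SynthDef, so the same mapping can run at control rate.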

And/or, it’s a different type of problem. Perhaps I’m compensating in the wrong parts of the signal chain, or should be compensating in multiple places.

Sorry for the long post here! Any insights, personal rules of thumb, resources, links to this is typically solved via x-formula, anything is greatly appreciated! Currently this is a SynthDef that I’m working on, but it’s actually very important to my compositional process as I’m thinking about similar things for the higher level process too.

Boris

Maybe you would benefit from reading something about “easing” functions.
Here’s something to get you started: https://easings.net/
The pictures and animations try to show how the easing works when applied to a movement, but of course nothing stops you from applying the exact same equations to an amplitude or a release time (or whatever you are trying to modulate).
If you click one of the pictures, it will lead to a page with extra information, including the math equation by which you could implement it yourself.
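To make that concrete, here is a sketch of translating one of the site’s equations (easeInOutQuad, as listed on easings.net) into sclang and then mapping the eased value onto a parameter range. The target range is invented for illustration:

```supercollider
// easeInOutQuad from easings.net, translated to sclang; x runs 0..1.
(
var easeInOutQuad = { |x|
    if(x < 0.5) { 2 * x * x } { 1 - ((-2 * x + 2).squared / 2) }
};
// Push a normalized control through the easing, then rescale it -
// here onto a hypothetical release-time range of 0.05..0.5 seconds.
var x = 0.3;
easeInOutQuad.(x).linlin(0, 1, 0.05, 0.5).postln;
)
```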


A nice resource. Not all, but a lot of these can be done or approached with SC’s options for Env, Pseg, VarLag, DemandEnvGen, lincurve, curvelin etc.

Thank you @shiihs! What a great resource. Those animations are fantastic for getting a sense of actual motion of each curve.

This is very helpful too. It seems an obvious thing to try now that you’ve said it: using envelopes, which disguise some of the more complicated math, and many of which also have a settable curve parameter. That’s very helpful.
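For anyone following along, the curve argument mentioned here can be seen directly by plotting the same segment with different curvatures - a quick sketch:

```supercollider
// An Env segment's curve argument reshapes the trajectory without extra math.
// Compare a few curves for the same 0 -> 1 segment over 1 second:
Env([0, 1], [1], 0).plot;    // linear
Env([0, 1], [1], 4).plot;    // slow start, fast finish
Env([0, 1], [1], -4).plot;   // fast start, slow finish
```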

This seems like a good place to start and explore. I get how I can take every parameter I’d like to correlate and apply those types of functions to get levels to rise and fall to taste, while summing the results to control some other thing - in this case amplitude.

From an aesthetic standpoint, are there common groupings of parameters or sorts of gestural algorithms that you both find yourselves using commonly? Or is it a case by case basis? And how do you go about picking these different parameters to group together?

Thinking about this some more with the thing I’m currently working on: the amount of overall signal amplitude change is, on the one hand, very large, increasing in linear proportion to the increase in the envelope’s release time. This isn’t so good, because quantitatively doubling the release time, from 0.1 seconds to 0.2, will double the amplitude, while qualitatively this additional release time is barely noticeable in how the sound decays over time! Things very quickly get out of hand!

And on the other hand, due to the frequency range, with certain settings of release time and ratio between the two signal chains, it’s very small: the quantitative drop in amplitude (both as shown on the level meters and when using Peak or RunningMax) is very small, but qualitatively the sound from the speakers becomes very, very quiet.

So when I modify only one parameter at a time, I get linear jumps that cause too much change; but when changing several parameters set up to influence each other, I still get huge perceptual changes from tiny numerical changes.

This tells me that the way I currently have the correlations set up is pretty awful, of course; but because I have 3 or 4 correlated parameters, and thus so many possible permutations, I don’t think just adding curves to change the rate of increase will solve everything. So would negative feedback help here?

I don’t think I need anything as complicated as PID control; a simpler proportional control function might be a starting point, unless the feedback sensor is so overwhelmed by the energy of the frequency content that its readings aren’t anywhere close to what’s perceived, as in the second example - huge changes in perceived volume with very little change shown on the meters. But tuning/filtering the feedback output would likely be simpler.
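To illustrate the proportional-control idea (not the poster’s actual patch - the signal, target level, and clip/lag values below are all placeholder choices): measure the signal’s envelope with an amplitude follower, compare it to a target, and apply a correction proportional to the error ratio.

```supercollider
// Sketch of a simple proportional auto-gain (no integral/derivative terms):
// the Amplitude follower is the "feedback sensor", and the gain nudges the
// measured level toward a fixed target.
(
{
    var sig, level, gain, target = 0.25;
    sig = WhiteNoise.ar(LFNoise1.kr(0.3).range(0.05, 0.8)); // wobbling test source
    level = Amplitude.kr(sig, 0.01, 0.2);      // envelope follower
    gain = target / level.max(0.001);          // proportional correction
    gain = gain.clip(0.1, 4).lag(0.1);         // bound and smooth the correction
    sig * gain ! 2
}.play;
)
```

The `.max(0.001)` guards against division blowing up during silence, and the clip/lag is the “tuning/filtering the feedback output” step mentioned above.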

Just writing some of this out is very helpful for me, but still, it seems like a rather complex problem to solve for in a fairly simple signal chain. Perhaps I’m overthinking all this. Still the results I’m currently getting aren’t satisfactory.

I’m curious about strategies that others use to solve these types of situations… Conceptual strategies are also very welcome.

The things you describe sound quite abstract, and so advice will necessarily also be rather abstract.
I think it helps to try to get additional insights into “how things work” to decide on how to modulate them.

As an example: our ears perceive the logarithm of frequency as “pitch”. This means that if you try to modulate the height of a tone, by default you should start by trying an exponential mapping on frequency to get a linear effect in pitch. There’s a similar law for amplitude, where our ears do not perceive a linear increase in amplitude as a linear increase in volume. Instead, decibels better describe our perception of loudness.
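A minimal sketch of both mappings in SC (the control sources and ranges here are arbitrary examples):

```supercollider
// Linear control -> exponential frequency gives perceptually even pitch steps;
// specifying loudness in decibels and converting with .dbamp gives
// perceptually even loudness steps.
(
{
    var pitchCtl = LFSaw.kr(0.2).range(0, 1);
    var freq = pitchCtl.linexp(0, 1, 110, 880);     // two octaves, even to the ear
    var amp = LFNoise1.kr(0.5).range(-30, -6).dbamp; // modulate in dB, not raw amp
    SinOsc.ar(freq) * amp ! 2
}.play;
)
```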

Apart from such “obvious” relations between things, a lot of interactions (especially where feedback is involved) are pretty unpredictable, may even lead to chaos. In such cases, I think as an experimental composer/sound designer it’s an option to embrace the unknown - just experiment and accept what works, throw out what doesn’t work.

In more traditional sound design, some tricks exist that are derived from how nature works. E.g. a typical thing that happens is for a filter to open up more (giving a brighter sound) when the volume gets louder (and since a filter opening up relates to frequency, a good default is to start with some exponential mapping).
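A sketch of that trick, with MouseY standing in for whatever volume or velocity control would drive it in a real patch (the cutoff range is an arbitrary choice):

```supercollider
// Brightness tracks loudness: the louder the note, the more the filter opens,
// with an exponential map on the cutoff frequency.
(
{
    var amp = MouseY.kr(0.05, 1);                 // stand-in volume control
    var cutoff = amp.linexp(0.05, 1, 400, 8000);  // louder -> brighter
    LPF.ar(Saw.ar(110), cutoff) * amp ! 2
}.play;
)
```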

I remember that in the Csound Book there is a chapter called “Designing Acoustically Viable Instruments” that talks specifically about things like connecting performance and timbre to the synthesis process. Maybe interesting, although the examples are of course given in Csound syntax, which looks quite different if you are used to SuperCollider. But I guess the ideas are transferable.

On 23 June 2021 at 00:52, Daniel Mayer via scsynth <noreply@m.scsynth.org> wrote:

shiihs:

Maybe you would benefit from reading something about “easing” functions.
Here’s something to get you started: https://easings.net/

A nice resource. Not all, but a lot of these can be done or approached with SC’s options for Env, Pseg, VarLag, DemandEnvGen, lincurve, curvelin etc.

Since 2011 the rest are readily available too…
Quarks.install("Ease");

https://github.com/redFrik/Ease

_f


I’ve got that book, so I’ll definitely take a look! Thank you shiihs.

One thing about feedback - sorry if I wasn’t clear - I meant using it as a way to hone the controls of elements, basically the way a compressor uses feedback to modulate volume.

And yes, I think I’m muddling the point a little bit - I have the one current problem, which I’m using as an archetype for asking about practical solutions to (my) creative problems! But also, in my experience, these types of decisions are exactly what help shape aesthetics, and exposure to different (technical) techniques opens up new possibilities for creative pursuit. I do suppose it would be helpful to post code and be able to point to specifics. Lemme see how far I get in cleaning up the patch to get rid of all the extra stuff that’s outside the scope of this discussion.

Thanks everyone for taking the time!

Wow, Fredrik, didn’t know that! Blow the horn louder! :slightly_smiling_face:

Definitely case by case, in the sense of work by work. I approach it like this: I’m experimenting with setups that seem to me promising soundwise. Their development might take weeks or months. I am most happy if it turns out that there’s a few crucial parameters left (say 3-7) that are able to transform the sound into a variety of directions. That gives me confidence that I can build larger forms out of it. Then I tend to control the parameters independently, with EnvGens, Patterns etc. These parameters are rather global, so more related to gestures or layers than to single sound events (like frequency and amplitude of a “note”).

Thinking about your question, parameter linkage might be of greater relevance if you have many more parameters. Maybe I’m searching for situations where it is not necessary, or is already implicitly in place.

BTW, Alberto de Campo has worked a lot in the field of parameter linkage and meta-mapping strategies. I don’t know the most recent resources, but you can start here:

https://quod.lib.umich.edu/i/icmc/bbp2372.2014.034/1


Linking parameters with Markov chains, or with morphing/changing splines?

Compositionally, this is also where I’d like to head, gestural manipulations for sonic transformations. Still getting my bearings in SC, and generally using an environment like this for making music with, so I feel like my little bass drum exercise may offer interesting parallels on a much smaller scale.

Thanks very much for that paper as well! I printed it out today at work and will read through it this evening. The abstract is interesting. Will explore de Campo’s other work. I haven’t yet gotten to his chapters in the SuperCollider Book, but they definitely caught my eye in the table of contents.

How do you typically find out about these papers and research? Is there an online community for discussing these topics and research? I’ve been planning to subscribe to CMJ this paycheck or next. Haven’t been exposed to ICMC before, so thanks for that as well! I’ll comb through JSTOR for other papers from their conferences.

I’ve always heard Markov chains mentioned in (semi-)random contexts. From a quick Wikipedia read, the idea of various possible choices based on the current choice sounds like a very interesting avenue to explore. Still, I feel like if I were to just stick my parameters as variables into a Markov chain, it’d largely be a semi-arbitrary decision-making process, plus screwing around with the results to taste. All of which I’m fine with! But one of my hopes for this thread is to get insight into how to do less guess-and-check - where/how to build up enough conceptual understanding to be a bit more purposeful in application.

Would you mind sharing some examples of practical applications of Markov chains, or a good resource on them, and importantly why to use them/when to think about using them as opposed to some other method?

A trick I commonly use to find the optimal scaling between 2 correlated variables is to assume they are related by a power law having some unknown exponent. Then I experiment with different exponents until I have found the best-sounding one. To put it more concretely, here is an example SynthDef of a kick drum sound that allows you to test different exponents and release times using the mouse:

(
SynthDef(\testdrum, {
    var sig, env, trig, amp, rel, relpow, normalRel = 0.5;
    rel = MouseY.kr(0.1, 1.0);
    relpow = MouseX.kr(-1.5, 1.5).poll;
    trig = Impulse.kr(\trigrate.kr(1));
    env = Env.perc(\atk.kr(0.001), rel, curve: \curve.kr(-5)).ar(gate: trig);
    sig = BPF.ar(PinkNoise.ar, \freq.kr(110), \rq.kr(0.3)) + (SinOsc.ar(\freq.kr) * 0.1);
    amp = \amp.kr(0.5) * (rel / normalRel).pow(relpow); // normalize rel to the "typical" value to avoid extreme amp values
    sig = sig * env * amp;
    Out.ar(0, sig!2);
}).add;
)

Synth(\testdrum);

MouseX is mapped to relpow, which is the exponent of the power-law relationship, and MouseY is mapped to rel. So, you can move the mouse horizontally to try out different relpow values, then move the mouse vertically to compare how it sounds with different release times. Once you have found the sweet spot, make note of the MouseX value in the post window. Now you can plug this number in place of MouseX. In this example, I think -0.2 is a good-sounding exponent, so my final SynthDef might look something like this:

(
SynthDef(\testdrum, {
    var sig, env, amp, rel, relpow, normalRel = 0.5;
    rel = \rel.kr(0.5);
    relpow = \relpow.kr(-0.2);
    env = Env.perc(\atk.kr(0.001), rel, curve: \curve.kr(-5)).ar(2);
    sig = BPF.ar(PinkNoise.ar, \freq.kr(110), \rq.kr(0.3)) + (SinOsc.ar(\freq.kr) * 0.1);
    amp = \amp.kr(0.5) * (rel / normalRel).pow(relpow);
    sig = sig * env * amp;
    Out.ar(0, sig!2);
}).add;
)

And if there are multiple variables correlated with loudness, you can just chain them together with multiplication. For example if I wanted amp to scale with both freq and rel, I might do:

...
amp = \amp.kr(0.5) * (rel / normalRel).pow(relpow);
amp = amp * (freq / normalFreq).pow(freqpow);
...

I hope you find this useful, and I’m curious what some of your techniques for dealing with this are as well, if you care to share…


In this case, I know Alberto and he showed me examples of his controls at UdK. In general, a keyword search at Google Scholar should provide some hints.

This is a great tip, thank you @PitchTrebler! I will definitely be using this. Exponentiating values seems like a very straightforward way to scale things, whereas before I was using polynomial expressions that kept getting longer and longer as I worked to force different scalings. And using the mouse to find the right amounts is a much more elegant way to quickly test out a grouping of two different parameters. Thanks a bunch!

And I definitely would love to, but currently I haven’t got any techniques of my own, aside from the various things to explore further that others have shared in this thread.

Sorry, I completely forgot this. One can see Markov chains as an extension of state machines - in a simplistic way, if… then… rules. State machines are nice for linking parameters: “If crescendo: increase overtones.”

In your state machines you can add more choices: “If crescendo: increase overtones; or add more bass.” You can use pure randomness to select one or the other, or both, results. Markov now adds an extra parameter - the current state - that adds a weight to the possible choices.

“If crescendo: increase overtones; or add more bass. If the current state is silence: overtones = 60%, bass = 40%.” “If the current state is crescendo: overtones = 10%, bass = 90%.”

In the end it is all about building a framework to tame chance to your liking. You could link the sliders of a mixer this way, or use it for algorithmic composition. You can make decisions based not only on the current state, but also on a previous state, or an average of several historic states. You could use the state of a different process as a decider.
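The weighted-choice step described above can be sketched in sclang with a transition table and wchoose - the states, choices, and weights here are taken straight from the example percentages, but the structure is just one illustrative way to code it:

```supercollider
// State-dependent weighted choice: the current state selects which weight
// set governs the next decision.
(
var transitions = (
    silence:   [[\overtones, \bass], [0.6, 0.4]],
    crescendo: [[\overtones, \bass], [0.1, 0.9]]
);
var state = \silence;
5.do {
    var next = transitions[state][0].wchoose(transitions[state][1]);
    ("state % -> choose %".format(state, next)).postln;
};
)
```

A higher-order chain would simply key the table on the last N states instead of only the current one.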

This pattern of spheres was created with a 3rd-order Markov chain (looking back three steps).
