Setting multiple parameters

I am spinning this off from the recent FM synthesis thread (FM Synthesis algorithms - #23 by dietcv). I am wondering about approaches to setting multiple parameters simultaneously and in a somewhat automated way, as opposed to manually changing every single one. Dietcv mentioned the MLPRegressor from FluCoMa. I’ve tried this approach and will keep looking into it. So far I’ve found it difficult to figure out the right training settings for achieving the kinds of results I was looking for. I know there’s Alberto de Campo’s Influx too, which I still need to try. And Tom Mudd’s thesis, which Dietcv linked to.
One thing I am thinking of would be some kind of non-linear mapping with Dietcv’s transfer functions. But I am curious if you have additional suggestions for how to approach this.

Difficult to give a general answer, as workflows and intentions can differ a lot. I like to end up with setups of only a few flexible parameters (a handful or so), where that handful can still give me a relatively large variety of sounds or textures.

I’m using mostly these two categories of strategies to achieve such “reduced input”:

.) Choosing synthesis or processing techniques that are simple and yet have a perceptually wide variety of outputs, e.g. those based on recordings, buffer manipulation procedures in general, certain feedback setups.

.) Reducing parameter groups by linear or non-linear mappings or statistical distributions with only a few basic inputs. This was already mentioned in the other thread; additive synthesis would be a classic use case.
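As a minimal sketch of the second strategy (my own SynthDef and parameter names, just for illustration): one "tilt" meta-parameter sets the amplitudes of 16 partials in an additive patch, so a single control replaces 16 individual amplitude parameters.

```supercollider
// Hypothetical sketch: one "tilt" meta-parameter controls the
// spectral rolloff of 16 partials (higher tilt = darker sound).
(
SynthDef(\addTilt, { |freq = 110, tilt = 1, amp = 0.1|
    var n = 16;
    var sig = Mix.fill(n, { |i|
        // non-linear mapping: partial amplitude falls off as 1/(i+1)^tilt
        SinOsc.ar(freq * (i + 1)) * ((i + 1) ** tilt.neg)
    });
    Out.ar(0, (sig * amp / n).dup);
}).add;
)
// x = Synth(\addTilt); x.set(\tilt, 2); // darker
```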

Switching between presets with many parameters is another strategy. Personally, I don’t like it that much, as I feel I have less control over shaping transitions in a way that I’m happy with.


Another related approach that I really like is to limit yourself to N parameters (when designing a system) and then write your sound processes to use only those meta-parameters. Internally, each sound process does different (possibly non-linear) transformations with those parameter inputs, but in the end the control interface is the same for all sound processes, and you learn to play with them. This can be combined with Influx, for instance, or applied to groups of sounds that are then controlled together by a single control interface.
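A rough sclang sketch of that idea (my own naming, not from the thesis): two very different SynthDefs share the same two meta-parameters, \energy and \color, but each maps them internally in its own way.

```supercollider
(
// Shared interface: \energy and \color, both 0..1 for every process.
SynthDef(\grainy, { |energy = 0.5, color = 0.5, amp = 0.1|
    var trig = Dust.kr(energy.linexp(0, 1, 1, 80)); // energy -> grain density
    var sig = SinGrain.ar(trig, 0.05, color.linexp(0, 1, 200, 4000));
    Out.ar(0, (sig * amp).dup);
}).add;

SynthDef(\noisy, { |energy = 0.5, color = 0.5, amp = 0.1|
    // same interface, different internal (non-linear) mapping
    var sig = BPF.ar(PinkNoise.ar, color.linexp(0, 1, 100, 8000),
        energy.linlin(0, 1, 1.0, 0.05)); // more energy -> narrower band
    Out.ar(0, (sig * amp * energy.sqrt * 4).dup);
}).add;
)
```

Since both defs expose the identical interface, one controller (or one Influx instance) can drive either, or a whole group of them at once.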

For an example of this approach, see Bjarni Gunnarsson’s thesis: https://sonology.org/wp-content/uploads/2019/10/Bjarni-Gunnarsson.pdf


As Daniel says - it very much depends upon your workflow.

I tend to start off with a synth (usually in a proxy) that has all the controls, and then experiment with it until I get a sense of which controls would work well together. And then I’ll gradually experiment with different linkages until over time I end up with an instrument that I like, that has fewer parameters. And then I’ll turn that into a synthdef. Often one synthesis technique/approach can result in a number of different synths.
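A small Ndef sketch of that workflow (the synth and parameter names are just illustrative): start with all controls independent, then try a linkage by deriving one control from another.

```supercollider
(
// step 1: everything exposed, play with controls separately
Ndef(\probe, { |freq = 200, index = 2, ratio = 1.5, amp = 0.1|
    var mod = SinOsc.ar(freq * ratio) * freq * index;
    SinOsc.ar(freq + mod) * amp ! 2
});
)
Ndef(\probe).play;
Ndef(\probe).set(\index, 5, \ratio, 2.01);

// step 2: try a linkage, e.g. modulation index follows freq
(
Ndef(\probe, { |freq = 200, amp = 0.1|
    var index = freq.explin(100, 2000, 8, 1); // lower notes buzz more
    var mod = SinOsc.ar(freq * 1.5) * freq * index;
    SinOsc.ar(freq + mod) * amp ! 2
});
)
```

Once a set of linkages has settled, the proxy body can be frozen into a SynthDef with the reduced parameter set.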

I also have some code (written in Lisp - sorry) that will generate random parameters and save them to a running list. And then I can just listen as it plays, and grab any that work. That can be quite effective as a way of generating ideas/possibilities. I suspect I stole this idea from James Harkins.
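The same idea can be sketched in sclang (the poster's original is in Lisp; this version, including the assumed SynthDef \fmSimple, is mine): roll random settings, keep a running log, and recall the trials that worked.

```supercollider
(
// assumes some existing SynthDef, here called \fmSimple
~log = List.new;
~roll = {
    var params = [\freq, exprand(60, 2000), \index, rrand(0.5, 10.0)];
    ~log.add(params);
    Synth(\fmSimple, params ++ [\amp, 0.1]);
    "trial %: %".format(~log.size, params).postln;
};
~keep = { |i| ~log[i - 1].postln }; // recall trial i when something worked
)
~roll.value; // run repeatedly while listening
```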

And TBH, I also do a lot of stuff intuitively. For example with FM synthesis I have a pretty good idea about how it works, and so I usually start with a shape of what I want and work within that. Theory doesn’t necessarily generate ideas, but it does help you reduce the space of bad ideas.
