Using nanokontrol2 MIDI faders for synths

Hello everyone,

I want to use the faders of my Korg nanokontrol2 for controlling parameters (like amp, rate, pan, …) for my granular synth via MIDI.

Sorry if this has been discussed before, but I can only find functions for noteOn/Off messages.

Thanks for your help!

Fader values should come in as MIDI CCs - you’ll want to use MIDIFunc.cc or MIDIdef.cc. If you need to figure out which cc/channel/etc. values to listen for, you can use MIDIFunc.trace to dump all MIDI input to the post window.
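
A minimal sketch of what that looks like (the def name \nano here is just a placeholder):

```supercollider
// print every incoming MIDI message to the post window
MIDIIn.connectAll;
MIDIFunc.trace(true);
// ... move a fader, note its cc number, then turn tracing off:
MIDIFunc.trace(false);

// respond to cc 0 on any channel
MIDIdef.cc(\nano, { |val, num, chan|
	[val, num, chan].postln;
}, ccNum: 0);
```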


I am working with a live buffer for granular synthesis. Is it possible to change parameters while recording and playing back?

There seems to be a system overload. I get this message in the post window: “exception in real time: alloc failed, increase server’s memory allocation (e.g. via ServerOptions)”
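
That error means the server’s real-time memory pool is exhausted, which can happen with granular setups. As the message suggests, one fix is to raise the allocation via ServerOptions before (re)booting; the size below is just an example, adjust to taste:

```supercollider
// increase the real-time memory pool (in KB; the default is 8192)
s.options.memSize = 8192 * 4;
s.reboot;  // takes effect after a (re)boot
```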

The first step for working with controllers is to determine the code that produces the effect you want. Then you can put the code into an external-control responder.

Before you can control the parameters by MIDI, you need to be able to control the parameters by code. The magic word here is set. This is the method that changes parameters of an existing synth without creating a new one. 99% of the time, “controlling a parameter” is some variant on this.
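
For example (assuming a hypothetical \grainSynth with \rate, \amp and \pan arguments):

```supercollider
x = Synth(\grainSynth, [\rate, 1]);
x.set(\rate, 0.5);             // changes the running synth; no new synth is created
x.set(\amp, 0.2, \pan, -0.3);  // several parameters at once
```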

When you say “I can only find functions for noteOn/Off messages” and “There seems to be a system overload,” it makes me think that you’re trying to control parameters by making new synths (but, without any posted example, it’s only a guess – to get more specific advice, it’s a good idea to show what you’re doing in code).



Hi and welcome,


I’m posting a simple modulation example I used with the nanoKontrol, which you could adapt to your granulation synth.

// start Synth silently

(
x = { arg minCarrFr = 300, maxCarrFr = 500, freqModFr = 1, widthModFr = 1, ampModFr = 1, panModFr = 50, amp = 0;
	// modulator choices are illustrative; amp = 0 keeps the Synth silent at first
	var sig = Pulse.ar(LFDNoise3.kr(freqModFr).range(minCarrFr, maxCarrFr),
		LFDNoise3.kr(widthModFr).range(0.1, 0.5), LFDNoise3.kr(ampModFr).range(0, 0.1)) * amp;
	Pan2.ar(sig, LFDNoise3.kr(panModFr))
}.play;
)

// connect MIDI
MIDIIn.connectAll;

// suppose nanoKontrol sliders mapped to cc 0-6

// amplitude controlled by second slider from right


// add global Specs

Spec.add(\minCarrFr, [50, 500, \lin]);
Spec.add(\maxCarrFr, [50, 500, \lin]);
Spec.add(\freqModFr, [0, 200, \lin]);
Spec.add(\widthModFr, [0, 50, \lin]);
Spec.add(\ampModFr, [0, 200, \lin]);
Spec.add(\panModFr, [30, 100, \lin]);

// a global Spec for \amp is already defined by default, so no Spec.add is needed

[\minCarrFr, 0, \maxCarrFr, 1, \freqModFr, 2, \widthModFr, 3, \ampModFr, 4, \panModFr, 5, \amp, 6].pairsDo { |sym, num|
	MIDIdef.cc(sym, { |val|
		x.set(sym, sym.asSpec.map(val / 127));
	}, ccNum: num);
};





@dkmayer, thanks a lot for the example, really helpful! When dealing with a MIDI controller, do you use a control bus somewhere, or do you always prefer to use Spec and ControlSpec?

Is there any case, when dealing with a MIDI controller, where you think that using control buses is the best solution? Could you please post an example if so?

One last question: do you think that storing MIDI values in environment variables tends to be a bad idea?

There is no right answer here, but it is best to use control busses if the slider (etc) information is going to be used for multiple Synths or if lots of synths are being spawned and need to share some kind of information. For example, if you have a polyphonic keyboard, and every time you press a key there is a new synth for that key, then you probably want the volume to be read from a bus. Moving the slider writes to a control bus and each new synth just looks at the control bus for the volume information. Plus, if the volume changes while the synth is live, it is reading that information from a control bus, and its volume adjusts accordingly.
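
Sketched in code (all names here are made up for illustration):

```supercollider
(
~volBus = Bus.control(s, 1).set(0.2);

SynthDef(\key, { |freq = 440, volBus, gate = 1|
	var env = EnvGen.kr(Env.adsr, gate, doneAction: 2);
	Out.ar(0, Saw.ar(freq) * In.kr(volBus) * env ! 2);
}).add;
)

// the fader writes to the bus...
MIDIdef.cc(\vol, { |val| ~volBus.set(val / 127) }, ccNum: 7);

// ...and every new note reads its volume from the bus
Synth(\key, [\freq, 440, \volBus, ~volBus]);
```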



@Sam_Pluta thanks a lot! (and sorry for the late reply)

So it’s more a matter of style than optimization or best practice. This question about writing MIDI values to a bus vs. storing them in a variable came up because the MIDIIn help file has one example with the following:

writing to the bus rather than directly to the synth

//i used this and got acceptable latency for triggering synths live.
//The latency might actually be less than sc2, but i haven’t used it enough
//to tell for sure yet.
//Powerbook G4, 512mb ram.


(I know this class is deprecated but this put me in doubt…)

I think it’s a difference in behavior too.

If you keep the controller values in variables, and use the variables when creating new synths (but not otherwise), then each synth does a sample-and-hold on the controller at the moment of synth creation. Usually this isn’t what you want, but there could be a legitimate use for this (for example, if synths are being produced rapidly like grains, and it’s important for each grain to have a fixed sound).
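
In code, the sample-and-hold version looks like this (the names are illustrative):

```supercollider
~rate = 1;
MIDIdef.cc(\rateKnob, { |val| ~rate = val.linlin(0, 127, 0.5, 2) }, ccNum: 2);

// each grain reads the variable once, at creation time,
// and keeps that value for its whole lifetime
Synth(\grain, [\rate, ~rate]);
```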

Typical MIDI controller behavior is, when you move the knob, all notes update to the new value immediately. That is not the same as plugging isolated values into new synths.
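
With a control bus and Node’s map, by contrast, every sounding note follows the knob continuously (again a sketch with made-up names):

```supercollider
~cutoff = Bus.control(s, 1).set(1000);
MIDIdef.cc(\cutKnob, { |val| ~cutoff.set(val.linexp(0, 127, 200, 8000)) }, ccNum: 1);

y = Synth(\note);          // assumes \note has a \cutoff argument
y.map(\cutoff, ~cutoff);   // the note now tracks the bus while it sounds
```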

I agree with Sam that control buses are the easiest way to do this.

In my own work, I use a wrapper class for a control bus (GenericGlobalControl, in the ddwCommon quark) which also has a handy interface for automating the control by playing a kr synth on it (and automatically syncing those values back to the client, including GUIs). Every synth gets the corresponding control mapped to the bus, and after that, it Just Works™. (Most of the time, my Voicer class is mapping the controls automatically – once I mapGlobal a synth argument name, then I don’t have to think about it at all.)