Loop Station for Midi Input

Hello everybody.

I’ve been thinking about this concept where you basically record all your MIDI input for a set number of frames, then trigger it to reconstruct those signals periodically. Basically it would work like a loop station, just for MIDI input.

I think the possibilities with this idea are endless, but straight away I’m a bit clueless about how to approach it. Does anyone have an idea of how to record/store some kind of MIDI input sequence and then play it back in a loop after a certain point?

I’m open to any inspiration or code snippets.
Thank you very much!

I put something together quickly a while ago as a proof of concept for a motorized fader box that I have. It is able to track new input values (after sensing a touch) and write those values into an array. After the touch is no longer sensed, it reverts to playback.

It’s longish code, and I haven’t actually looked at it since writing it a while ago, but I did leave lengthy comments and commentary. It was left in a working-enough state. MIDI Touch Automation · GitHub

Likely a lot of it can be cut out or modified for MIDI controllers without capacitive sensing, and it might be worth looking into writing into a buffer rather than an array, but this is mostly at the proof-of-concept stage.

I used a Pbind for the actual playback and an array to store and retrieve values from. The Pbind’s \dur key sets the frame rate, and the array size sets the number of frames.
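A minimal sketch of that array-plus-Pbind idea might look like this (all names here are mine, not from the linked gist, and the \set event type assumes a synth x is already running):

```supercollider
(
// dummy automation data standing in for recorded fader values (0..127)
~frames = Array.fill(64, { |i| (i / 63 * 127).round });
~frameRate = 0.1; // \dur: seconds per frame; loop length = ~frames.size * ~frameRate

Pdef(\automation,
	Pbind(
		\type, \set,              // set controls on an already running synth
		\id, Pfunc { x.nodeID },  // assumes x = Synth(...) exists
		\args, #[\freq],
		\freq, Pseq(~frames, inf).linlin(0, 127, 110, 880),
		\dur, ~frameRate
	)
).play;
)
```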

In my case, I just needed the fader to roughly show me its position, so in the code I linked to, the frame rate is very coarse; it should be changed for real playback, though I suppose there are some interesting possibilities to be found in coarse reproduction of values.

A routine/task should be able to also work just as well for this application…

And for closer-to-real-time resolution, I suppose writing to and reading from a buffer might be even better, though I haven’t tried that myself yet. And I do like being able to grow an array fairly simply in case I want to increase the automation time, whereas a buffer seems to require a lot more effort to grow in length, or has to be preallocated at a very large size, ultimately still constrained by some maximum.
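To illustrate that trade-off (names here are illustrative): growing a client-side array is a one-liner, while a server-side Buffer has to be allocated at a fixed size and "grown" by copying:

```supercollider
// client-side array: grows as needed, one frame at a time
~values = ~values ? [];
~values = ~values.add(0.5);

// server-side buffer: fixed size, preallocated
~maxFrames = 44100 * 60; // e.g. one minute at audio rate
~buf = Buffer.alloc(s, ~maxFrames, 1);

// "growing" it later means allocating a larger buffer and copying over:
~bigger = Buffer.alloc(s, ~maxFrames * 2, 1);
~buf.copyData(~bigger); // copy old contents into the new, larger buffer
```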


Funny, I was thinking about exactly this yesterday. Maybe it’s time I got my hands dirty and did some experiments :) I too think it could be an interesting and rewarding project.

I imagine a basic system would need:

  • record both noteOn/noteOff and MIDI CC messages on all MIDI channels
  • have some count-down mechanism, or wait for a first note or event to arrive before starting to record
  • have some metronome functionality (either visual or audible)
  • have a way to signal to the system that recording is done and looping should start (with or without recording a new track). I’m guessing the system needs some configurability here: listen for a specific MIDI control change message or a special note value that can be interpreted as a command, to cater for different MIDI devices (or, for a really basic system, perhaps a fixed length can be configured and it could always be recording)
  • nice to haves: change tempo, dump recorded MIDI to file, selectively mute recorded tracks, optional automatic quantization, some GUI support

Anything else?


Thanks for the inspiration; I implemented some of your ideas, though in a slightly different style.

I tried to make a simple, easy-to-try use case for demonstration purposes with a sine synth. You can use any MIDI controller’s .cc/control (pots, faders) output with my code. I append a [vel, note, ch] array to another array at position ~playhead, then I just play the array back in a loop after the recording is finished.

Can you tell me the advantages of using a buffer instead of an array? Is there any problem when creating a very large array? I assume there are some performance issues? When I set the ~wait period in the code below to a value of 0.001, it gets buggy once the array grows larger than about 25000 entries. Maybe it’s also not a good idea to use a two-dimensional array; would it generally be better with buffers?

@shiihs yeah, that also came to mind spontaneously; I was experimenting with different looper approaches a little earlier. I didn’t try to implement the extras you have, since I think code gets too complicated to share if too much is implemented. I was more curious about how to effectively implement the core of this idea. I feel everything else, like start, stop, and overdub, needs to be adapted to what you want to do with it anyway.

here is the code:


//this is a MIDI recorder & playback for MIDI control
//it is written for demonstration purposes, so no special extras are implemented
//this code has more .postln commands than strictly required, for testing and showcase reasons
//for a better overview (at the cost of coarser time resolution), set the ~wait period to a larger value
~wait = 0.01;
// ~wait = 0.2; //for a good overview
//~wait = 0.001; //gets buggy for array sizes of >30000

//initiate midi client and connect everything
(
MIDIClient.init;
//MIDI out on Linux
~midiOut = MIDIOut(0, MIDIClient.destinations[0].uid);
//on Windows this form may be needed instead (untested):
// ~midiOut = MIDIOut(0);
MIDIIn.connectAll;
)

//a simple sine synth
(SynthDef(\sinhit,{
	|freq=440, amp=0.1|
	Out.ar(0,amp*SinOsc.ar(freq.lag(0.2)!2));
}).add;
);
x = Synth(\sinhit);

//simple sine synth control
//takes any MIDI control input and changes the pitch of the sine synth
(
y = MIDIdef.cc(\cc_sinhit,{
	arg vel, note, ch, src;
	[vel,note,ch,src].postln;
	x.set(\freq,vel/127*770+110);
})
)

//recording the MIDI input
//the routine below advances ~playhead every ~wait period;
//incoming control messages are written at the current ~playhead index
(
//control for recording
y = MIDIdef.cc(\cc_record,{
	arg vel, note, ch, src;
	"record".postln;
	~recordControl[~playhead] = [vel,note,ch];
});
//set up a 2D array; if an entry is [-1], there is no playback for that index
~recordControl = [[-1]];
~playhead = 0;
r = Routine({{
	~recordControl = ~recordControl ++ [[-1]];
	~playhead= ~playhead+1;
	~playhead.postln;
	~wait.wait;
}.loop}).play;
)

//stop the recording routine and free the recording MIDIdef
(
r.stop;
y.free;
)

//playback of the recorded MIDI in a loop; each array entry is [vel, note, ch]
(
~playhead = 0;
p = Routine({{
	var action;
	action = ~recordControl[~playhead];
	[~playhead, action].postln;
	//only send output when an action was recorded at this index
	if(action[0] >= 0){
		~midiOut.control(action[2], action[1], action[0])
	};
	//read first, then advance, so playback starts at index 0
	~playhead = (~playhead + 1) % ~recordControl.size;
	~wait.wait;
}.loop}).play;
)

//stop the looping of the recorded midi input
p.stop;

I had had this idea some years ago, but never finished it.

The crux was server latency.

  1. Press “record loop” at a time that sounds like a downbeat. (The clock time is really downbeat + latency.)
  2. Play MIDI.
  3. Hit the key to stop recording and start playing, again at a time that sounds like a downbeat. (Realistically, the downbeat’s sounding time is the one that will matter to you while you’re interacting with the system.)

So let’s say it’s “stop record, start play” at sounding beat 100, and server latency is the default 0.2 seconds.

So the clock time is already 100.2.

If you recorded a note on the downbeat, it’s too late to play it for this state transition. The next time through the loop, you can play it, but not right now.

It seems to me there will always be a small hiccup when transitioning from recording to loop-playing, and the code will have to account for this. That should be possible, but be aware that you can’t assume that SC clock time is exactly synchronous with what you hear (rather, you should assume that clock time is never synchronous with what you hear).

ddwMIDI/MIDIRecSocket-support.sc at master · supercollider-quarks/ddwMIDI · GitHub may give you some hints about deriving note durations from note-on/off messages.

hjh

I haven’t tried it with a buffer, so this is just conjecture, but I think with a buffer it would be more practical to work with “real-time” automation data. An array will always quantize to your .wait (or equivalent) value. Also, for anything other than very short loops, with the array you have to do some math to figure out how long the loop actually is, e.g. Array.size * waitTime. Seems a bit clunky to me.
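Concretely, with the ~recordControl/~wait names from the code earlier in the thread, that math is:

```supercollider
// total loop duration = frame count * frame period
~loopDur = ~recordControl.size * ~wait; // e.g. 25000 frames * 0.01 s = 250 s
// and the timing resolution can never be finer than ~wait itself
```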

As an aside, thinking about this like a looper rather than automation playback, it might be fun to consider a “sound-on-sound” mode, i.e. the first take records the absolute values of the controllers, but on overdubbing the controls make incremental modifications, or the automation playback instead transforms into the difference, or something like that.

Or if it’s a true-ish overdub, instead of a single stream of values per parameter, something like an overdub decay mode could also be interesting… maybe? I guess it depends on the parameters being controlled.

Isn’t this a non-issue if the thing the MIDI is controlling is some sort of process running on the server as well (if the server latency is set to be the same as the MIDI latency)?

If that’s not the case, perhaps a workaround for this might be (maybe too expensive computationally?) to always record into a small buffer of a few seconds, or some number of ms, without actually saving that data until the “record” action is called in your patch. Once you hit record, though, it prepends that buffered pre-record material to match the timing… I think I’m doing a horrendous job of explaining this… basically something akin to Ableton’s Capture MIDI functionality.
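A rough sketch of that “always listening” idea, keeping the last couple of seconds of timestamped CC events in a rolling list and prepending them when recording actually starts (all names here are hypothetical, not anyone’s actual implementation):

```supercollider
(
~preRollDur = 2.0; // seconds of pre-record history to keep
~history = List.new;

MIDIdef.cc(\cc_capture, { |vel, note, ch|
	var now = SystemClock.seconds;
	~history.add([now, vel, note, ch]);
	// drop anything older than the pre-roll window
	while { ~history.notEmpty and: { ~history.first[0] < (now - ~preRollDur) } } {
		~history.removeAt(0);
	};
});

// on "record", seed the take with the buffered events,
// re-timed relative to the record start (negative times = pre-roll)
~startRecord = {
	var t0 = SystemClock.seconds;
	~take = ~history.collect { |ev| [ev[0] - t0] ++ ev[1..] };
};
)
```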

For it not to be an issue, you would have to play the MIDI keys earlier than the beats you’re hearing. That’s not going to feel natural for any musician (or, can you explain how you would do it and have it feel natural?).

That is, the problem is the recording cycle, not really the playback cycle.

hjh

Ah yeah. I suppose this would be the case if playing along with something already sounding.

I’m spitballing here, but would it be possible to do something like:

  1. press record slightly ahead of time, which is quantized/scheduled to actually start recording on the clock that the existing content is already playing relative to
  2. play notes, twiddle knobs, etc
  3. press playback, also slightly ahead of time, which is quantized/scheduled to start playing back on that same clock
    3a. I suppose this might truncate the last s.latency seconds of the recording loop (I think Ableton does something similar to this)

I’m still a bit under-caffeinated this morning, but it seems like this may resolve the latency issues?
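The press-early-and-quantize idea in steps 1 and 3 could be sketched with TempoClock scheduling (hypothetical names):

```supercollider
(
~clock = TempoClock.default;

// arm recording: actually starts on the next barline of the shared clock
~armRecord = {
	~clock.schedAbs(~clock.nextBar, {
		~recording = true;
		"recording started on the barline".postln;
		nil // don't reschedule
	});
};

// arm playback: stops recording and starts looping on the same grid
~armPlayback = {
	~clock.schedAbs(~clock.nextBar, {
		~recording = false;
		"playback started on the barline".postln;
		nil
	});
};
)
```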

Or, patterns, which schedule themselves automatically, right? In the example I linked above, I record MIDI data into an array and have a pattern reference that array for playback. I haven’t gotten back to that test yet, so there’s been no extensive testing or situations where critical timing was necessary, but recording values into an array on the client starts to circumvent some of these issues as well, yeah? If so, as I said above, arrays aren’t necessarily practical for long stretches of time, but maybe with some interpolation methods it can be workable…

I’ve made a proof of concept here: GitHub - shimpe/sc-midi-recorder: experiment with midi recording

It’s certainly not perfect (in fact, far from it :) ), but I’ve already had some fun with it.
It has a flaw related to keeping the original note length when changing quantization, which would best be resolved by changing the internal representation of the recorded MIDI events (maybe some other day), and some other flaws that perhaps need a very different approach (I need to think about it).
Timing-wise, although there are reasons to doubt it based on how it works, so far I haven’t really seen huge problems.

The system does not boot the SuperCollider server at all, as it’s purely MIDI. It’s intended to be used with a keyboard or synth that can simultaneously send MIDI to SuperCollider and receive MIDI (and make it audible) from SuperCollider.

It has some of the features I mentioned above.

  • MIDI device selection in the GUI (do this before you use the system).
  • Parameter changes only take effect on the next iteration. Things that can be changed include:
    • transposition,
    • quantization (changes in quantization are not permanent; you can “unquantize” at will),
    • time signature (not perfect),
    • number of bars in the loop,
    • playback tempo.
  • You can record to different tracks (a single track can contain events from multiple MIDI channels).
  • You can selectively mute such tracks or completely erase them.
  • There’s a visual metronome.
  • It will store one program change per MIDI channel per track. You can change the program change while the system is looping.
  • It has a recording mode where what you play is added to the recorded information, and a play mode where you can play something without recording it (while the earlier recorded material is playing back in a loop).

Having said this, it’s not perfect and it might simply explode when you try to use it, but I found it fun to play with.


I don’t think that will work in practice. We’re more or less hardwired to play in time with what we hear, not with some arbitrary time base shifted earlier.

I think this might be better:

  • For record/play and incoming MIDI, subtract latency * tempo from the receipt time, and quantize this.
  • When switching from record mode to loop-play mode, “fast-forward” the first fraction of a beat (possibly skipping notes within the first latency * tempo beats), and then play the recorded notes after that in sync. (Reducing server latency would reduce the impact.)
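Those two steps might be sketched like this (my names, untested; s is the default server):

```supercollider
(
// server latency expressed in beats at the current tempo
~latencyBeats = { |clock| s.latency * clock.tempo };

// 1. timestamp incoming events: shift earlier by the latency, then quantize
~stampEvent = { |clock, quantum = 0.25|
	(clock.beats - ~latencyBeats.(clock)).round(quantum)
};

// 2. on the record -> play transition, start reading the loop this many
//    beats in, so playback lands in sync despite the already elapsed latency
~playStartOffset = { |clock| ~latencyBeats.(clock) };
)
```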

Huh. If I’d thought of this years ago…

hjh

I meant this in a “cue up the recording” kind of way, like a pre-roll, e.g. so I don’t have to press record and play the first note simultaneously.

Ah, I see, if you have a known quant. Sure, that would be fine. (If it were me, I’d keep it relatively small, 1 or 0.5, to allow recording polyrhythms.)

hjh