In heavily metrical collaborative performance, the expectation is always for the computer musician/live coder to follow the tempo of an instrumental performer, rather than vice versa. Has anyone managed to achieve this with SuperCollider, and how? Something that could track the tempo of an instrumental performer and share it over e.g. the Link protocol would be awesome.
I’ve seen Nick Collins’s beat tracking stuff but not managed to get it to work in practice. I wouldn’t mind if it was a bit glitchy or took a while to converge, the tempo tracking could have its own behaviour and aesthetic of uncertainty to play with…
I don’t know of existing code, but I’ve implemented tap tempo UIs a bunch of times before. This would make a nice quark.
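The tap-tempo part is simple enough to sketch. Here’s a minimal Python version of the usual approach (class and parameter names are mine, not from any existing quark): average the last few inter-tap intervals, and restart if the player pauses too long.

```python
import time

class TapTempo:
    """Minimal tap-tempo sketch: estimate BPM from the mean of the
    most recent inter-tap intervals. A gap longer than `timeout`
    seconds resets the history."""

    def __init__(self, history=4, timeout=2.0):
        self.history = history    # number of intervals to average
        self.timeout = timeout
        self.taps = []

    def tap(self, t=None):
        t = time.monotonic() if t is None else t
        if self.taps and (t - self.taps[-1]) > self.timeout:
            self.taps = []        # long silence: start a new measurement
        self.taps.append(t)
        self.taps = self.taps[-(self.history + 1):]

    @property
    def bpm(self):
        if len(self.taps) < 2:
            return None           # need at least two taps
        intervals = [b - a for a, b in zip(self.taps, self.taps[1:])]
        return 60.0 / (sum(intervals) / len(intervals))
```

In SC you’d do the same thing in sclang and set `TempoClock.default.tempo` from the result; the averaging logic is identical.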
Yeah, I tried for ages to get the Nick Collins beat tracking to work, but it couldn’t really accurately handle even basic 4/4 techno, and besides that it was crashy enough that I would never use it in a live setting anyway. Honestly, the Ableton Link support in SC is good enough that it might be best to find another Link-capable thing that can beat sync, and then sync to that? I don’t know if this exists - I’ve paired SuperCollider with Traktor in a DJing context and it works quite well, but that’s because Traktor is handling Link directly.
EDIT: With a bit more poking about, I find that the available-software situation for SuperCollider is not worse than Max/MSP (where most beat-tracking externals have disappeared from the web, and the one remaining [ibt~] is somewhat stable with 120-130 bpm techno but drifts and recovers occasionally – probably not significantly better than Nick’s BeatTrack) or PD (AFAICS nothing is available)… which means that recent innovations haven’t been picked up anywhere in the real-time audio space.
Hm, Essentia has Pd bindings, but pretty much just onset detection. Librosa has beat tracking based on 2007 research, and predominant-local-pulse tracking based on 2011 research… which tbh is kinda odd, isn’t it? It’s like beat tracking research hit a wall 10-15 years ago and everybody just gave up.
Poking around today, I did run across a paper or two about machine-learning approaches, but those seemed to be oriented toward file-based batch processing (“here’s a file, find the beats”); real-time tools hadn’t emerged yet. That seems promising, perhaps? Auto-correlation based approaches have pretty much failed, so the field is in need of a paradigm shift. Since this is a topic where humans do this very easily and computers don’t, that seems like a good case for machine learning.
Btw I’m considering trying to exploit deeply flawed beat analysis to see what kind of twisted concept of meter comes out… “aesthetics of failure.” The beat-cli Python thingy sends over a sinusoidal LFO whose peak is supposed to land on a beat. If I can map that back onto a phase (acos with awareness of the derivative to determine whether we’re in the first or second half of the cycle) then I could use the phase to adjust a tempo clock (or, maybe better, manually run a Scheduler?). With the material I was testing today, it will not be stable, not even close (and there’s some uncomfortable jitter to remove too…), but the result is likely to be a weird concept of elastic tempo, occasionally locking onto an external pulse. Could make for interesting, chaotic listening.
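For the phase-recovery step, here’s a minimal Python sketch of what I mean (function and argument names are mine, not from beat-cli): assuming the LFO is cos(2π·phase) with the peak on the beat, acos gives the position within a half-cycle, and the sign of the derivative tells you which half you’re in.

```python
import math

def lfo_to_phase(v, rising):
    """Recover beat phase in [0, 1) from a cosine LFO sample.

    Assumes the LFO is cos(2*pi*phase), peaking (v = 1) on the beat.
    `rising` is True when the derivative is positive, i.e. we're in
    the second half of the cycle, climbing toward the next beat.
    """
    v = max(-1.0, min(1.0, v))            # clamp jitter outside [-1, 1]
    half = math.acos(v) / (2 * math.pi)   # position in [0, 0.5]
    return (1.0 - half) % 1.0 if rising else half
```

So a sample of 1.0 with a falling derivative maps to phase 0 (on the beat), -1.0 to 0.5 (the mid-point), and 0.0 to either 0.25 or 0.75 depending on direction. That phase stream is what I’d feed into the clock adjustment.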
Well, it’s quite possible, because I remember getting decent results with a Sonic Visualiser plugin. (Not tempo, but attack times)
I believe if you need real-time beat tracking, you will need some latency to look a bit “ahead”?
Also, if most MIR algorithms are expensive, would it make sense to write a program that runs as a separate process in parallel to scsynth? Overkill? Maybe it’s a silly idea, but if you need it in scsynth, is there anything preventing you from sharing a buffer between a UGen and a parallel process that does the heavy lifting?
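For the inter-process part at least, the plumbing is straightforward. Here’s a toy Python sketch of the idea using POSIX-style shared memory (the segment name and the single-float layout are illustrative assumptions, not anything scsynth actually provides): the analysis process writes its latest BPM estimate into a named segment, and the consumer polls it.

```python
import struct
from multiprocessing import shared_memory

SEG_NAME = "tempo_estimate"   # hypothetical segment name

# Writer side (the heavy analysis process): publish the latest BPM
# estimate as a single little-endian double at offset 0.
shm = shared_memory.SharedMemory(name=SEG_NAME, create=True, size=8)
struct.pack_into("<d", shm.buf, 0, 126.5)

# Reader side (stand-in for whatever polls at control rate):
# attach to the same segment by name and read the current value.
reader = shared_memory.SharedMemory(name=SEG_NAME)
bpm, = struct.unpack_from("<d", reader.buf, 0)

reader.close()
shm.close()
shm.unlink()
```

A real UGen would do the reader side in C++ against the same named segment, but the design question is the same: one writer, lock-free single-value reads, and the UGen never blocks on the analysis.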