Prompt:
Hi, I’m trying to connect SuperCollider with a Rust/Nannou visualizer to display any sound event in real time during live coding sessions. My goal is for any sound I trigger (from ProxySpace, Ndef, SynthDef, Pbind, Routines, etc.) to be automatically visualized, without having to modify the Rust code each time.
What I’ve done so far:
- I have a Rust/Nannou visualizer that listens for OSC messages and visualizes `/note_on`, `/drone_on`, `/cluster`, etc.
- In SuperCollider, I created global functions like `~sendNote`, `~sendDrone`, and `~sendCluster` that send OSC messages to the visualizer. If I call these functions from routines or patterns, the visualization works.
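For context, the sender functions look roughly like this (a minimal sketch; the visualizer's address, port, and message arguments here are assumptions, not my exact code):

```supercollider
// Sketch only: address/port and argument lists are assumed.
~visualizer = NetAddr("127.0.0.1", 9000);

~sendNote = { |freq = 440, amp = 0.1|
    ~visualizer.sendMsg('/note_on', freq, amp);
};
~sendDrone = { |freq = 100|
    ~visualizer.sendMsg('/drone_on', freq);
};
```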
- I mainly work with ProxySpace and define my sounds like this:

```supercollider
~padmod = { LFSaw.kr(0.5.neg).exprange(0.01, 1).lincurve(0.01, 1, 0.01, 1, 5) };
~pad = {
    var sig;
    sig = LFTri.ar([100, 101] * 2) * 0.1;
    sig = sig * ~padmod.kr(1);
    sig
};
```
- If I put the call to `~sendDrone` inside the proxy function, it only runs when the proxy is re-evaluated, not every time it sounds.
- If I use an external trigger (a routine, function, etc.) that calls `~sendDrone` and then `~pad.play`, it does visualize, but this is not practical for all live-coding situations.
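Concretely, the external-trigger workaround looks like this (a sketch under the assumption that `~sendDrone` takes no required arguments):

```supercollider
// Workaround: manually notify the visualizer before playing the proxy.
(
Routine({
    ~sendDrone.value;   // send the OSC message to the visualizer
    ~pad.play;          // then start the proxy
}).play;
)
```

This works, but it couples every `play` call to a manual send, which is exactly what I want to avoid.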
Problem: I want a robust, universal way for any sound event (triggered from ProxySpace, Ndef, patterns, routines, etc.) to be automatically visualized in the Rust visualizer, without triggering it manually or modifying the Rust code. Is there a technique, pattern, or deeper integration (maybe using callbacks, OSCdef, or some ProxySpace extension) that would let me intercept all sound events and send the corresponding OSC message automatically?
Any ideas or concrete examples to achieve this?