Just got to play very briefly with this, and it works great! (I will be away from the computer entirely for the next 3 weeks…)
Yes, it seems like there's a lot of inefficiency going on here: turning a buffer into a kr signal just to see if it has changed (scztt previously suggested there's an easy way to make a BufChanged UGen, but as far as I know nobody has done it yet: New class VisualBuffer - #3 by scztt ),
then turning the result buffer into a kr signal to detect whether any parameters have changed, in order to send the reply back via OSC… I wonder if, for starters, it's a reasonable assumption that if the XY position has changed, the parameters will also have changed?
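If that assumption holds, one option might be to trigger the reply directly from the XY input rather than comparing the whole result buffer. A rough sketch using the existing Changed and SendReply UGens (the SynthDef name, bus argument, and OSC path are all made up for illustration):

```supercollider
(
// Hypothetical sketch: gate the OSC reply on the XY input moving,
// instead of scanning the result buffer at kr.
SynthDef(\xyReply, { |xyBus, replyID = 0|
	var xy = In.kr(xyBus, 2);
	// Changed.kr outputs 1 for one control block whenever its input moves;
	// summing the two channels gives a single trigger for either axis.
	var trig = Changed.kr(xy).sum > 0;
	SendReply.kr(trig, '/xyChanged', xy, replyID);
}).add;
)
```

The language side would then respond to `/xyChanged` via an OSCFunc and fetch or recompute the parameters only when it fires.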
…
Now, I would suggest completely separating the interface to the regressor from the destination SynthDef / Ndef / NodeProxy presets / etc., the MIDI mapping, the LFO, and the GUI.
Then perhaps we wind up with (each of these would be a separate class):
- a robust MLP object that handles training and manages a Synth mapping from some inputs to some outputs. This would stay close in spirit to FluidMLPRegressor but be more SC-idiomatic and user-friendly: conveniences for training and for arranging presets in space; accepting both set messages for setting inputs and bus mapping for connecting arbitrary LFOs; and providing both language-side output via OSC (as you have) and writes to buses for direct server-side mapping. It would be super cool if it could run at audio rate! We could work on brainstorming a full spec while I'm away.
- a generic GUI for this object, which could e.g. use a Slider2D for 2D input and a MultiSliderView for higher-dimensional input
- an interface to SynthDef / NodeProxy, which would be straightforward to plug into this object
- a GUI for this interface (already done: NodeProxyGui2)
- either classes or simple templates for MIDI and LFO mapping
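To make the idea concrete, here's a purely hypothetical usage sketch of how these pieces might fit together — every class and method name below is invented for discussion, nothing here exists yet:

```supercollider
(
// Invented API: a regressor wrapper that manages training and a mapping Synth
m = MLPMapper(inDims: 2, outDims: 10);
m.addPreset([0.2, 0.8], paramsA);   // arrange presets in the input space
m.addPreset([0.7, 0.3], paramsB);
m.train;

m.set(\in, [0.5, 0.5]);             // language-side input via set messages
m.map(\in, lfoBus);                 // ...or map an arbitrary LFO bus instead
m.outBus;                           // server-side output bus for direct mapping
m.action = { |params| /* language-side output via OSC */ };

MLPMapperGui(m);                    // generic GUI: Slider2D for 2D input,
                                    // MultiSliderView for higher dimensions
)
```

Part of the spec discussion would be deciding which of these belong on the core object and which on the separate interface classes.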
So, taken together, these could plug together to recreate exactly what you have already shared, while opening up much more flexibility for different applications as well…
what do you think?