Heyhey,
something I’ve wanted to do for quite a while - it was actually a motivating factor in starting with SC to begin with, although a lot of other things got “in the way” - is to get to a point with SC where I can play live gigs as a “duo” with my laptop. As in, I play, and my laptop plays with me - kinda like this, although I guess I have pretty different aesthetic ideals from George Lewis’s.
My first attempts at doing this were writing a bunch of patches (granular sampler, reverb, drum machine) and then randomising which patch was cued and for how long - aside from the drum machine, the synths recorded live input from my guitar and then manipulated it, again randomly. I did a gig with this setup last Tuesday and it wasn’t too bad - I thought the music we made together sounded more or less OK. However, it became super clear that the patches were not really capable of listening or interaction - I felt like I was playing with a quite talented teenager, who had great ideas, but wasn’t really much of a listener or particularly tasteful.
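In case it helps to see what I mean, the cueing logic was roughly something like this (a simplified sketch - the patch names here are made up and the SynthDefs themselves aren’t shown; it assumes each def has a gated envelope so `.release` works):

```supercollider
(
// Hypothetical patch names; each one reads live guitar from the input bus.
~patches = [\granularSampler, \reverbPatch, \drumMachine];

Routine({
    loop {
        var name = ~patches.choose;    // randomly pick which patch to cue
        var dur  = rrand(5.0, 30.0);   // ...and for how long to run it
        var synth = Synth(name);
        dur.wait;
        synth.release;                 // assumes a gated envelope with doneAction: 2
    }
}).play;
)
```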
Does anybody have any experience making this happen in SC? Or any ideas how to “train” SC to listen and respond in real time? I don’t have any experience (yet!) with neural networks or machine learning, but would be super happy to hear about any resources or ideas people have. This project is probably one I’ll work on for quite a while, so it doesn’t bother me if things are outside my skill/understanding level right now.
Long post, hope a few people are interested anyway.
Cheers,
Jordan