Why can’t we manipulate air pressure waves on a speaker in software without using standard audio objects like filters, oscillators, etc.? I mean directly manipulating the instructions that move the speaker cone, so that maybe, by accident, it approximates acoustic events: trees, chairs moving, doors closing. By working directly on the instructions, maybe we’d come up with something we don’t know about, surprises. It’d be random or unexpected, but if sound is just a speaker moving air, why can’t we just control that? Maybe it would sound like crap. Or some familiar movements of a speaker, like metal being bent, could be made into a paintbrush, much like Photoshop: different brushes, different sound types.
I don’t mean MetaSynth; that app just sounds like sine waves. I don’t know, it’s a thought.
Samples become voltage that becomes air pressure. Some speakers will block some things (DC signals, etc.), but basically it’s a one-to-one correspondence.
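To make the one-to-one point concrete, here’s a minimal sketch in standard-library Python (the filename `cone_walk.wav` and the step size are arbitrary choices of mine, not anything canonical): each sample is treated as a literal cone-position instruction, and the “composition” is just a random walk nudging the cone around, with no oscillators or filters anywhere.

```python
import random
import struct
import wave

SAMPLE_RATE = 44100  # cone-position instructions per second
DURATION = 2         # seconds

# Each sample is a direct instruction for where the cone should sit,
# scaled between -1.0 (fully in) and 1.0 (fully out). No oscillators,
# no filters: just small random nudges to the cone.
position = 0.0
samples = []
for _ in range(SAMPLE_RATE * DURATION):
    position += random.uniform(-0.05, 0.05)
    position = max(-1.0, min(1.0, position))  # the cone has physical limits
    samples.append(position)

# Write the positions out as a 16-bit mono WAV file.
with wave.open("cone_walk.wav", "wb") as wav:
    wav.setnchannels(1)
    wav.setsampwidth(2)  # 16-bit samples
    wav.setframerate(SAMPLE_RATE)
    wav.writeframes(b"".join(
        struct.pack("<h", int(s * 32767)) for s in samples
    ))
```

Incidentally, a random walk like this drifts slowly, and that slow drift is exactly the kind of near-DC content some speakers will filter out, as noted above.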
You can also use any object as a “speaker,” such as the chair that you mentioned. You just need a transducer.
A recent ensemble piece I wrote used many objects as “speakers” instead of the hi-fi speakers that were also there in other moments. It sounds like another dimension, and it’s quite straightforward to do.
I guess this is a machine learning thing. I’m into this idea of Photoshop for audio: one brush for metal bending, one for wood bending or cracking, one for the timbre of a cello, in a UPIC-style layout. A rough sketch of the idea is below.
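Here’s a hedged sketch of what a “brush” could mean in code, again in standard-library Python. Everything here is my own invented placeholder (`metal_brush`, `render_stroke`, the bell-like partial ratios), not an existing tool: a brush is a function producing a short grain of sound, and a UPIC-style stroke is a list of (time, pitch) points, as if drawn on a canvas with x = time and y = pitch.

```python
import math
import struct
import wave

SAMPLE_RATE = 44100

def metal_brush(freq, length):
    """Hypothetical 'bent metal' brush: an inharmonic cluster of
    partials with a sharp decay, loosely evoking struck metal."""
    partials = [freq * r for r in (1.0, 2.76, 5.40, 8.93)]  # bell-like ratios
    grain = []
    for n in range(length):
        t = n / SAMPLE_RATE
        env = math.exp(-t * 30.0)  # fast exponential decay
        s = sum(math.sin(2 * math.pi * p * t) for p in partials) / len(partials)
        grain.append(s * env)
    return grain

def render_stroke(points, brush, duration):
    """points: (time_seconds, frequency_hz) pairs from a UPIC-style
    canvas. Each point stamps one grain onto the output."""
    canvas = [0.0] * int(SAMPLE_RATE * duration)
    grain_len = int(0.1 * SAMPLE_RATE)  # 100 ms grains
    for t, f in points:
        start = int(t * SAMPLE_RATE)
        for i, s in enumerate(brush(f, grain_len)):
            if start + i < len(canvas):
                canvas[start + i] += s * 0.3
    return canvas

# A diagonal "stroke": rising pitch over one second, like dragging a brush.
stroke = [(i * 0.05, 200 + i * 40) for i in range(20)]
audio = render_stroke(stroke, metal_brush, duration=2.0)

with wave.open("brush_stroke.wav", "wb") as wav:
    wav.setnchannels(1)
    wav.setsampwidth(2)
    wav.setframerate(SAMPLE_RATE)
    wav.writeframes(b"".join(
        struct.pack("<h", int(max(-1.0, min(1.0, s)) * 32767)) for s in audio
    ))
```

Swapping in a different grain function is all it takes to get a “wood” or “cello” brush; the layout stays the same.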