Anyone who has been following the music industry will have seen both Bandcamp and Pitchfork take a turn for the worse in the last 6 months. It feels like things are falling apart, and that many people are tuning out of music. It’s not hard to see why: the way music is created and presented feels out of tune with the times we are living in.
I recently created a generative music system in SuperCollider that I like, and I would like other people to be able to enjoy it too. However, the only way I can plausibly achieve this is by recording its output and presenting the recordings as an album. I have done this, and while I’m happy with the result, it feels like it sells the whole thing short. The audio is presented as the core of the project, when really the core is the code.
It would be a much more powerful experience if the music varied each time for a listener, and if they had some control over it. There could be some live form of visualisation that an album can’t achieve. It could be updated over time, or extended in ways that have not yet been envisioned. It could be presented within an app or webpage that is in artistic concordance with the overall project, rather than just being shoehorned into Spotify.
Does anyone else share this frustration? My coding is decent, but I wouldn’t know where to begin with creating and maintaining a phone app. One could build for the web, but I’m not sure people naturally want to listen to music on the web.
Does anyone imagine - or have experience with - a plausible way for low-profile artists to present their algorithmic music to the world, beyond recording outputs, sharing code, or building installations?
I realise people like Brian Eno have done this very successfully, but he seems to work with highly skilled programmers and probably has the budget to do so. It feels like there must be a way, but I can’t currently see it. Or is it still a matter of becoming an expert in a programming language like Python, and designing your app?
I am curious to hear people’s thoughts and to know if anyone else shares this frustration.
I might add that one of the things that provoked this was seeing this at-least-decade-old video (nothing to do with AI) of Brian Eno talking about generative systems, and realising how little progress these ideas seem to have made in popular culture in the decade or so since: https://www.youtube.com/watch?v=JFsUQpP1afM
Both are interesting projects. Neither has much in the way of interaction, but I do like the way Generative.fm is presented. It’s a nice twist on radio.
Appreciate you taking the time to reply.
I am not sure whether this answer is within the range of what you were expecting… anyway, here is an attempt to formulate an idea I have had for a while but never realised.
Creating a website where users could do the following:
Composers upload the code or patch of a piece, along with the necessary data.
- The code or patch should automatically play music when the main file of the piece is loaded/opened.
- The site administrator should take care of security.
- To protect authorship, the composer should do one of the following:
  - make the code open source, or
  - upload a binary (in Max this would be an mxf file; what might the equivalent format be in another language?)
After manually installing the appropriate software for a piece, listeners can go to the website and press the play button for that patch or code.
- The web browser will then download the code or patch and launch the appropriate software, but the patch or code should not be visually exposed unless its composer has uploaded it as open source.
- If the software supports OSC, the main code can be sent from the website to the local software as an OSC message: the web browser sends the message, and the local software interprets it to play the music. (The composer should also finalise the code and the associated project accordingly.)
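To make the OSC idea concrete, here is a minimal sketch in Python of how a helper running alongside the browser could encode and send such a message using only the standard library. The `/play` address and the piece name are hypothetical; the port 57120 is SuperCollider’s default sclang port, but any OSC-capable software could listen instead.

```python
import socket
import struct

def osc_message(address, *args):
    """Build a minimal OSC packet: padded address, type-tag string, arguments."""
    def pad(b):
        # OSC strings are null-terminated and padded to a multiple of 4 bytes
        return b + b"\x00" * (4 - len(b) % 4)

    tags = ","
    payload = b""
    for a in args:
        if isinstance(a, int):
            tags += "i"
            payload += struct.pack(">i", a)   # 32-bit big-endian int
        elif isinstance(a, float):
            tags += "f"
            payload += struct.pack(">f", a)   # 32-bit big-endian float
        else:
            tags += "s"
            payload += pad(str(a).encode())   # OSC string
    return pad(address.encode()) + pad(tags.encode()) + payload

# Send a hypothetical /play message to a local listener on
# SuperCollider's default language port (57120). UDP, fire-and-forget.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(osc_message("/play", "pieceName", 1), ("127.0.0.1", 57120))
```

In practice one would probably reach for an existing OSC library, but the point is that the wire format is simple enough that the website-to-local-software bridge is not a large engineering problem; the harder questions are the security and authorship ones above.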
Blockchain could be used for payment. Perhaps it could be useful for authorship and collaborative works.
Thanks for your reply.
Sounds interesting. Kind of like a record label for generative music. I think such a thing would be a positive move. I’m not sure I see the need for blockchain, either for payment or authorship, given that there are working alternatives. Please post back if you create this.
Thanks semiquaver, interesting to know. Will keep an eye out for examples of this.
Unfortunately, my programming skills are not strong enough to do this by myself. If some programmers and composers are interested in the outline I suggested above, I can apply to a foundation in my country for a budget to run the project for several months. My role would be project manager, one of the composers involved, a reviewer of the program, and a designer providing basic code examples. Unfortunately, at the moment I have no one around me who seems able or interested in doing this. It would require a group of programmers who understand the basics of SuperCollider, Max, Pd, Csound, Processing, Python, etc., and one of them should be a website developer, I suppose.
From another point of view, SuperCollider has WebView, and the HelpBrowser itself is a web browser. So any SC user can browse the sccode.org site and evaluate much of the uploaded code right out of the box. In my humble opinion, this can already be seen as something like the platform @domaversano would like to have.
I am building something similar, called gencaster.org. It is still in beta, but it allows you to upload SuperCollider code (or samples, etc…) to a server, which can then stream it via WebRTC with low latency to a user’s devices; no additional software beyond an internet connection and a browser is necessary.
One goal of Gencaster is to provide an easy-to-setup (or use) platform to share generative pieces.
Additionally, listeners can dynamically navigate through the piece via movement (GPS position can be requested/streamed), auditory surroundings (the microphone can be requested/streamed to the server as well), or user input (one can script user popups/modals via Python). I’ve started to write down the concepts in the Tutorial section of the Gencaster documentation.
Additionally, as mentioned, I am trying to get the WebAssembly (wasm) build of scsynth into the next SC release.
If anyone is interested in beta-testing, please send me a PM.
I’m very interested in this question of the nature of a specific musical form in the Internet age. I did a piece (in the tradition of acousmatic music) along these lines ten years ago, which still works: Flux æterna - Vincent-Raphaël Carinola
At the time, I used Max for it. I’m just starting to work with SuperCollider to continue this approach in other projects with a better-adapted tool. It gives me great pleasure to discover this forum and find that others share this line of thought!
Wow, this looks very interesting. Right along the lines that I am thinking, especially given the potential for interaction. When I have more time I might be able to help with Beta testing. I will read through more of your documentation.
Great work, I hope it thrives.
This is great too! Well done for keeping it going for such a long time. I’m listening now, very interesting and dynamic. I will look into it more and explore in greater detail.
Very glad I started this thread. Thank you for all the fascinating contributions.
Out of curiosity, why are people opting for Web over Apps?
Thank you domaversano for your interest in Flux æterna. Don’t hesitate to ask if you have any questions about this strange work (strange for me too); I’m not sure all the information is translated into English. This project exists, of course, as an application, here realised with Max. The Web is a space of distribution, and I have modestly tried to explore what is specific to it.
This sounds really great, I love it! Also, the fact that it has been running for 10 years is pretty fascinating, as we typically think of the Internet as a rather ephemeral medium.
Very interested in this. SC in wasm seems like a huge win.