Perhaps ‘bogus’ is the wrong term. I wanted to say that the “intelligence” in the current wave of AI is not really intelligence. Most of the models are machine learning, where a great deal of the learning is more or less statistical analysis. There’s no intelligence involved in there; it’s more a combination of engineering by developers that uses results from machine learning. As I see it, these are highly specialized functions, and the “AI” does more or less precisely what it’s been programmed to do. Very little intelligence here, and a lot of programming aimed at very specific goals. That’s why I’m saying (or wanted to say) that it is much more important to be afraid of how this software is used by humans.
Personally I would be very careful in using the “robotic popular music” terminology. This is very cultural, and subjective. Techno music, an intentionally machinic and robotic music, was born in the mid-80s already, and it spawned the culture of raves: a subculture of people simply dancing to electronic beats and other synth sounds. Furthermore, if you listen to contemporary internet-born music genres, it’s all heavily autotuned, often very hyperactive and fast (glitchcore, hyperpop, etc.), but it’s made by humans and it’s expressing the contemporary condition. On the other hand, you can listen to music coming out of labels like Hyperdub, and that is very alien stuff. But someone else might find Stockhausen or even Varèse very alien and almost machinic?
Robots themselves will never be able to be creative if not programmed by humans. But whose agency is it if a composer-programmer writes a complex mix of algorithms whose output is a very human-like stream of beats, melodies, harmonies, more or less pop songs? Aren’t these just tools in human hands, always?
There’s an interesting theory that music is prophetic: it often reflects where things are and where we’re going as a society, including governance, economic systems, laws of repression, etc.
One interesting thought is to fight fire with fire: use machine learning to aid with search results, search query and keyword suggestions, and question formulation. Of course this would require a significant effort from either the SuperCollider community or, more plausibly, the people who run the Discourse platform.
Have you all tried using GitHub’s Copilot for SuperCollider? It works surprisingly well (it also suggests a lot of mistakes, but you just need to learn how to work with it). I use it extensively now and it is a huge help for writing classes and unit tests, for example.
Just to follow up on this, ChatGPT was released nearly a year ago. It’s crazy how it’s completely transformed our society at this point and effectively made human intelligence obsolete.
I have the impression that the discussion about this new technological phase has become more rational and less based on emotions and fear. In the end, we will discover that the issue is not the technology itself, but the social system that encompasses its use.
ChatGPT has been called a mansplaining machine (in that it writes with a tone of absolute confidence, even when the contents are largely fabricated). I wish I had thought of that, but I didn’t (I’m stealing it from somewhere). Some LLMs have also famously become erratic when questioned too much, which also kinda fits the profile.
Yeah, it’s gone from “omg, it can do that?” to “omg, it’s doing that again”.
One aspect is that AI and neural network frameworks require computational power to “train their models” that individuals rarely have. In other words, the technology would end up being exclusive to capitalist monopolies. But this is not the only possible future, and people are aware of that.
This is very funny. Although I guess it’s Botsplaining.
I find it hard to evaluate what exactly has happened over the last year. The language is so washy: what does AGI or super-intelligence even mean, and how do you measure that? How much of this is our innate desire to project agency, intelligence, and even morality onto our surroundings? It’s sort of animist, or even reminiscent of Roman gods. Then again, there seems to be some really impressive stuff, like ML being used to decode animal calls and possibly open up the possibility of interspecies communication.
I wish there were more sober and informed writing on this. Perhaps I’m looking in the wrong places. Grady Booch seems quite good, and Stephen Diehl links to some good anti-hype thinkers.
There is some great thinking going on, just not in the news. I can send a few items I was sent as part of the (fantastic) MusAI project. It is academic, but the ethics of technology will always be quite niche and deep. We just need to make sure it is talked and thought about by a wider group of people so democracy can work its magic (the ever-optimist talking here).
I would be delighted if you sent some of that on to me, thank you. Yes, the news is ill-equipped for this moment. The desire for sensationalism/clicks, combined with few journalists having technical backgrounds, seems to make for a mystification of the technology, with doomer/utopia narratives becoming predominant.
On a more techno-optimist angle, I love the fantastic Ollie Bown’s view (yes, one half of the amazing IDM act Icarus) in the open-access book he has written. I’m yet to read it, but I have heard him talk (and spoken to him about it) extensively. I disagree with some of it, but it is a great framework for thinking about it all.