and of course you can also convert any other MuseScore or MusicXML scores - so I’m now listening to the worst performance of the Art of Fugue ever - but goddamn, that is a LOT of notes to get into SuperCollider.
I know this is “not what SuperCollider is for”, but while there are good reasons for that, a BAD reason is that typing in the notes is hard and very error-prone. Sometimes when you make something accessible you discover that the grapes weren’t so sour after all. Or not. But I think it opens up possibilities.
It does mean I’ve lost my excuse for not putting all the Ring Cycle motifs into SC.
This would be a monumental task for a single author but manageable for a team of two to four people. With a larger team, the Quark would also have a better chance of arriving at a more general and versatile design that serves a broader range of users, rather than becoming yet another narrowly specialized subset of music theory or notation.
From my experience implementing a small subset of MusicXML in Haskell, I can confirm that MusicXML is not only an enormous standard but also quite frustrating in many ways. Some particularly tricky compatibility issues need addressing.
That said, it’s certainly possible to achieve—it’s just that it hasn’t been done yet. Previously, music notation with SuperCollider was typically driven by individual composers who tailored the work to their specific needs.
A preliminary discussion about it was held in this forum one or two years ago. The conversations are easy to find.
PS: Someone mentioned GUIDO + INScore, and I agree that it would bring a fantastic interactive experience to the user. The problem is that, at least for the music notation I need, it hasn’t caught up with some more advanced notation elements. But if someone decides to integrate INScore into SC, that would be pretty fun.
MusicXML looks massive - and it has to be, because there are so many variables. But I’m not trying to produce something that plays scores - just something that makes it easier to enter notes and durations into a pattern, and the script above does that.
It’s very simple, but it’s an 80/20 thing. Entering notes into patterns is just SO HARD that this dumb script makes a big difference (for me).
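To make the “notes and durations into a pattern” idea concrete, here is a minimal sketch of the kind of target such a script can emit. The note and duration arrays are illustrative placeholders, not the script’s actual output format:

```supercollider
(
// Suppose the conversion script emits parallel arrays of MIDI note
// numbers and durations in beats (hypothetical data shown here).
var notes = [60, 62, 64, 65, 67];
var durs  = [1, 0.5, 0.5, 1, 1];

// Feeding them into a Pbind is then one line per key.
Pbind(
    \midinote, Pseq(notes),
    \dur, Pseq(durs)
).play;
)
```

Once the data is in array form, it is also easy to transpose, reverse, or splice it with other patterns - which is the point of getting the score into SC at all.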
Hi,
concerning INScore, there is no need to implement anything, since it exposes an OSC API that works very reliably with SC. In contrast, the GUIDO engine can be developed further, since it is open source. What can be done on the SC side is to develop some conversion tools, like for instance the ones I wrote for my personal use (see lines 286 to 391 of gsa.sc).
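As a minimal illustration of that OSC route (not taken from gsa.sc): the scene name below is hypothetical, and port 7000 with the /ITL address space are INScore’s documented defaults, but check the docs for your version:

```supercollider
(
// Hedged sketch: send a short GUIDO string to a locally running
// INScore viewer over plain OSC. Assumes INScore's default UDP
// port (7000) and its /ITL address space; "score" is an arbitrary
// scene object name chosen for this example.
var inscore = NetAddr("127.0.0.1", 7000);
inscore.sendMsg("/ITL/scene/score", "set", "gmn", "[ c d e f g ]");
)
```

Because it is just OSC, the same `sendMsg` call can be driven from a pattern or a routine, which is what makes the real-time score idea attractive from SC.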
I converted the schema to a Haskell source file with all the corresponding types. It was a monumental text file. With more people and some automated tools, something could be done.
There is precedent for this approach: the GTK+ bindings for Haskell are generated automatically by a very similar process.
I never fully understood how much can be notated, even with the extended GUIDO standard. For example, tuplets are always notated with fractions attached to each note, right? How can ambiguous situations be avoided in more real-world music with tuplets and nested tuplets, alongside all the other musical indications? I tend to get unpredictable results here, or I simply mess things up.
Indeed, tuplets seem limited in that they cannot be nested, but you can play with time synchronization. In any case, for a real-time music score you must anticipate events some time before they are actually played. Overly complex annotations can be counterproductive for the reader, but they are still flexible and could be improved on the GUIDO engine side, I guess. Have a look at the examples here, which give an overview of what you can write with GUIDO. BTW, INScore can display MusicXML files if needed, and it can be scripted with JavaScript, among other things I forget.
With some minor changes, GUIDO could be an excellent small language. Its interaction with INScore is quite something. I like it, but it does not fit my style that much.
Is there something like (MusicXML/MEI/? + INScore)? Maybe some web-based apps can do something similar nowadays.
Ctk’s Ntk works best with GUIDO, OK with MusicXML, and pretty poorly with LilyPond. That being said, I was always pretty happy with GUIDO → NoteAbility Pro.
I do think I would probably make one big change in Ntk if I were to rewrite it - I wanted the system to handle filling in rests for you (you just gave it note onsets and durations, and Ntk was supposed to fill in the rests) - in hindsight, I’m not sure that was the best approach. Float → Fraction conversion caused a lot of problems in general.
It does work - but Ntk isn’t really maintained on my part any more.
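The rest-filling idea can be sketched directly in sclang. This is not Ntk’s actual code, just a toy reconstruction of the approach, using Rest() to stand for the gaps between note onsets:

```supercollider
(
// Hedged sketch: given sorted [onset, dur] pairs (in beats),
// build a duration list where every gap between the end of one
// note and the onset of the next becomes a Rest. Data is made up.
var events = [[0, 1], [1.5, 0.5], [3, 1]];
var out = List.new;
var pos = 0;
events.do { |ev|
    var onset = ev[0], dur = ev[1];
    if (onset > pos) { out.add(Rest(onset - pos)) };  // fill the gap
    out.add(dur);
    pos = onset + dur;
};
out.postln;
)
```

With floats, the `onset > pos` comparison is exactly where rounding bites - two durations that should sum to an onset may miss it by 1e-16 and spawn a spurious tiny rest - which is presumably why exact rational durations would have helped.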
@josh We have rational types in a quark now (maybe that solves some of the issues). Since GUIDO is well suited to real-time score manipulation, including INScore and other smaller programs, I can see a future there.
I never used NoteAbility and thought GUIDO had limitations for some notation styles. But I’m sure many SC community members would quickly adopt the possibility of interacting with the score in real time.
Thank you. I think it would be nice to have a collection of acoustic pieces that people algorithmically composed using SC. I know there have to be a bunch of them.
Yes, you end up with a raw list of numbers, but that raw list can be the entirety of a Bach toccata, which would be daunting to input any other way.
What you do with the list is up to you. I like using Prouts for swing and for dynamic variation based on meter, but you’re not going to get to any kind of decent “performance”. For me this is raw material - like a sample.
But to be honest I don’t really know what can be done with it. For now I’m aiming for video game music!
Are you aiming at generative music? Does it have something to do with the database idea?
What can SuperCollider patterns do in this context? I’m curious to know; I have almost no experience with games. C++ dominates the industry, and it seems those guys are the real masters of high-performance C++ today. For most people, C++ is pretty opaque, I’d guess.
I imagine more structured music data would be better in this context.
I have no idea how you’d use SC in an actual video game (I think its licensing would be more of a problem than its code/runtime integration). Yes, generative “music” (organized sound) is my interest.
I started with an interest in sonification, back before I knew that was a thing. I was listening to a steamship engine, and I thought “that sound tells the engineers everything they need to know - wouldn’t it be cool if that was available for other things?”
For a long time I thought the problem was how to do it technically; of course, the actual problem is aesthetic and psychological, not technical.
But I really like the idea of crossover generative and interactive sound, where environmental changes affect the sound. Not so much as a practical thing, much more as a fun, useless thing (or “Art”, if you prefer).
The basic structure I’ve been using is “event-driven sound” - something happens and the sound changes. The “something that happens” is communicated over OSC - a (possibly parameterized) event gets sent to SC, which responds by
playing a pattern
changing a parameter of something that’s playing
which is vague enough to be sort of useless, so you have to make it more concrete.
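One concrete form of “make it more concrete” is a single OSC responder that dispatches on the event name. The /gameEvent address, the event symbols, and the ~player variable below are all hypothetical illustrations, not part of my actual setup:

```supercollider
(
// Hedged sketch of the event-driven structure described above:
// an OSCdef listens for a hypothetical /gameEvent message and
// either starts a pattern or nudges a parameter of a running synth.
OSCdef(\gameEvent, { |msg|
    var which = msg[1];  // first OSC argument; strings arrive as Symbols
    switch(which,
        \theme,  { Pbind(\degree, Pseq([0, 2, 4, 7], 2), \dur, 0.25).play },
        \faster, { ~player !? { |p| p.set(\rate, 1.5) } }  // ~player: a Synth, if set
    );
}, '/gameEvent');
)
```

The game (or any OSC-capable program) then just fires `/gameEvent theme` or `/gameEvent faster`, and all the musical decisions stay on the SC side.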
The video game model - play a theme; change its key, instrumentation, or speed; change what else it’s playing with; change the effects - is one well-tried approach.
I’ve got that working reasonably well in SC now (using motives from the Ring, because it’s kind of funny). The trick now is to make something that’s bearable to listen to. Always the hard part.