# Algorithmic composition of musical style //INFO

Hello everyone,
I have been trying for a while to work out which approach would be suitable for generating rhythmic sequences that emulate the style of konnakol rhythmic singing, from a compositional perspective rather than from the standpoint of interpretation or sound phenomena.
There are several tutorials and books that describe the rules behind Carnatic music, and specifically konnakol compositions/improvisations, from the micro to the macro structure.
I would be grateful if someone could suggest an approach that has worked for them in the field of algorithmic composition to emulate a predetermined style.

For instance:

• generative grammars?
• controlled randomness?
• AI / machine learning?
• Markov chains?
• etc.

I would really love to hear some musical results.
Thank you very much!


I think that the Bjorklund quark, which implements the Euclidean algorithm for generating rhythms, would be a good first step.

The algorithm is clumsy to explain in words, but when you try the quark's examples everything becomes super clear. It provides two basic types of Patterns for dealing with rhythms: a traditional one in which you specify the durations, and another, more "synth-style" one, in which you specify the amplitude in terms of zeros and ones.

https://fredrikolofsson.com/f0blog/bjorklund/
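To make the even-distribution idea concrete outside of SC, here is a minimal sketch in Python. This is one of several equivalent formulations (a Bresenham-style accumulator), not the quark's actual implementation, and the rotation convention (starting on an onset) is an assumption; it produces both representations mentioned above, the 0/1 "synth style" and the traditional duration list.

```python
def euclidean_rhythm(pulses, steps):
    """Distribute `pulses` onsets as evenly as possible over `steps` slots.

    Returns a list of 1s (onset) and 0s (rest) -- the "synth style"
    amplitude representation.
    """
    if pulses <= 0:
        return [0] * steps
    pattern = []
    bucket = 0
    for _ in range(steps):
        bucket += pulses
        if bucket >= steps:       # accumulator overflows: place an onset
            bucket -= steps
            pattern.append(1)
        else:
            pattern.append(0)
    # rotate so the pattern starts on an onset
    first = pattern.index(1)
    return pattern[first:] + pattern[:first]


def to_durations(pattern):
    """Convert the 0/1 pattern to inter-onset durations
    (the traditional representation)."""
    onsets = [i for i, v in enumerate(pattern) if v]
    gaps = [b - a for a, b in zip(onsets, onsets[1:])]
    return gaps + [len(pattern) - onsets[-1]]
```

For example, `euclidean_rhythm(3, 8)` yields the familiar tresillo figure, and `to_durations` turns it into `[3, 2, 3]`, which maps naturally onto a `\dur` stream in a Pattern.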

Thank you @fmiramar,
https://theseanco.github.io/howto_co34pt_liveCode/3-4-Euclidean-Rhythms/
https://reprimande.github.io/euclideansequencer/

and in a way it's pretty cool!
But I'm looking for something quite different, much more oriented toward a system that emulates the style of pre-existing music.

Are there Quarks for composing in this direction?
Thanks again

Nice!

Well, I think this question embraces two big fields: musical style modelling and automated music generation. On the musical style modelling side, you are also asking about two big branches: the first is the method for encoding a musical style, and the second is the algorithmic structure that will hold your style.

Regarding the process of encoding a musical style, I see two main approaches: human analysis and machine analysis (AI). Both have their pros and cons, but given the info you have provided, you already have the style humanly encoded (you have the set of all possible rules for Carnatic music / konnakol). In contrast, to use AI you would need tons of data (recordings, scores, MIDI files, etc.) to train your model.

Secondly, you can translate that set of rules into many types of computational structures, such as Markov chains, grammars, controlled randomness, genetic algorithms, constraint-based composition, etc.
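As a concrete illustration of the first of those structures: below is a toy first-order Markov chain over konnakol-style syllable groups, sketched in Python so it stays language-agnostic. The groups and the transition probabilities are invented for illustration, not the result of any real stylistic analysis; in practice you would derive them from your rule set or from transcriptions.

```python
import random

# Toy transition table over konnakol-style syllable groups.
# Both the states and the weights are illustrative assumptions.
transitions = {
    "ta":          [("ta-ka", 0.5), ("ta-ki-ta", 0.3), ("ta-ka-di-mi", 0.2)],
    "ta-ka":       [("ta", 0.4), ("ta-ka-di-mi", 0.6)],
    "ta-ki-ta":    [("ta-ki-ta", 0.5), ("ta", 0.5)],
    "ta-ka-di-mi": [("ta", 0.7), ("ta-ka", 0.3)],
}

def generate(start, length, rng=random):
    """Walk the chain for `length` steps, starting from `start`."""
    state = start
    out = [state]
    for _ in range(length - 1):
        groups, weights = zip(*transitions[state])
        state = rng.choices(groups, weights=weights)[0]
        out.append(state)
    return out
```

The same table translates directly into SC as a dictionary driving a `Prout` or a recursive `Pfunc`, with each state mapped to a sub-Pattern of durations.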

Given that you want to do this in the SuperCollider environment, one suggestion for the automated music generation is to convert your set of rules into Patterns (e.g. Bjorklund, Pseq, Pslide, etc.) and then play around with them. However, this also depends on whether you are really looking for a real-time solution or for a typical NRT (non-real-time) solution ("give me all possible counterpoint results for a given melody").
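Before porting such rules into Patterns, it can help to prototype the grammar idea on its own. Here is a minimal sketch in Python of a generative grammar that expands a beat count into syllable cells; the cell inventory and the splitting rule are illustrative assumptions (real konnakol has far richer constraints, e.g. gati and mora/korvai structure), and only the invariant that an n-beat cell carries n syllables is enforced.

```python
import random

# Illustrative cell inventory: each n-beat cell has n syllables.
CELLS = {
    1: ["ta"],
    2: ["ta ka"],
    3: ["ta ki ta"],
    4: ["ta ka di mi", "ta ka ju nu"],
}

def expand(beats, rng=random):
    """Recursively split `beats` into cells of 1-4 beats,
    then map each cell to a syllable group."""
    if beats <= 4:
        return [rng.choice(CELLS[beats])]
    split = rng.randint(1, min(4, beats - 1))
    return [rng.choice(CELLS[split])] + expand(beats - split, rng)
```

Each resulting cell could then become a `Pseq` of syllable/duration pairs, with the top-level expansion choosing among them, which is essentially the rule-to-Pattern conversion suggested above.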

Maybe have a look at this paper; it is not SC-specific, but it helps in approaching the problem from different angles:
Dubnov, S., Assayag, G., Lartillot, O., & Bejerano, G. (2003). Using machine-learning methods for musical style modeling. Computer, 36(10), 73-80.

Maybe this SC counterpoint machine can provide some insights
