Well, I think this question embraces two big fields: musical style modelling and automated music generation. Within style modelling, you are also asking about two branches: first, the method for encoding a musical style, and second, the algorithmic structure that will hold that style.
Regarding the process of encoding a musical style, I see two main approaches: human analysis and machine analysis (AI). Both have their pros and cons, but given the info you have provided, you already have the style humanly coded (you have the set of all possible rules for Carnatic music / konnakol). In contrast, to use AI you would need tons of data (recordings, music sheets, MIDI files, etc.) to train your model.
Secondly, you can translate that set of rules into many kinds of computer structures: Markov chains, grammars, controlled randomness, genetic algorithms, constraint-based composition, etc.
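To make the Markov-chain option concrete, here is a minimal language-agnostic sketch in Python (the same idea translates directly to SC). The syllables and all the transition weights are made up for illustration; you would derive the real table from your own konnakol rule set:

```python
import random

# Hypothetical toy example: a first-order Markov chain over konnakol
# syllables. Every weight here is invented for illustration only.
transitions = {
    "ta": {"ka": 0.5, "di": 0.3, "ta": 0.2},
    "ka": {"di": 0.6, "ta": 0.4},
    "di": {"mi": 0.7, "ta": 0.3},
    "mi": {"ta": 1.0},
}

def generate(start="ta", length=8, seed=None):
    """Walk the chain, choosing each next syllable by its weight."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        choices = transitions[out[-1]]
        nxt = rng.choices(list(choices.keys()),
                          weights=list(choices.values()))[0]
        out.append(nxt)
    return out

print(" ".join(generate(seed=1)))
```

The nice property of this encoding is that your humanly coded rules become data (the transition table), so refining the style means editing weights rather than rewriting logic.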
Given that you want to do this in the SuperCollider environment, one suggestion for the automated generation side is to convert your set of rules into Patterns (e.g. Pseq, Pslide, Pbjorklund from the Bjorklund quark, etc.) and then play around with them. However, this also depends on whether you are really looking for a real-time solution or a typical NRT solution ("give me all possible counterpoint results of a given melody").
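As a sketch of the kind of onset pattern a Bjorklund-style Pattern produces, here is a compact Euclidean-rhythm formula in Python; it yields the evenly spread distribution up to rotation (the function name is mine):

```python
def euclid(onsets, steps):
    """Spread `onsets` hits as evenly as possible across `steps` slots.
    Compact formula: gives the Euclidean rhythm up to rotation."""
    return [1 if (i * onsets) % steps < onsets else 0 for i in range(steps)]

# E(3, 8): the familiar tresillo shape
print(euclid(3, 8))  # -> [1, 0, 0, 1, 0, 0, 1, 0]
```

Patterns like this can then be mapped onto konnakol syllables or used as gate sequences inside Pbind, in real time or NRT alike.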
Maybe have a look at this paper; it is not SC-specific, but it helps with facing the problem from different angles:
Dubnov, S., Assayag, G., Lartillot, O., & Bejerano, G. (2003). Using machine-learning methods for musical style modeling. Computer, 36(10), 73-80.
Maybe this SC counterpoint machine can provide some insights as well.