Proposal: SynthDefs could eliminate duplicate UGens

The snippet updates the entire graph (everything is done with recursion in Haskell; there's really nothing special there). I did this with my own 'ugens', which I wrote from scratch in Haskell and C++. We can have a video call about it. I design better in Haskell, but it's possible to just use C++; the topological sorting, buffer pool, and other things I wrote are in C++.

Please, I'm not sure if there was sarcasm in your message, but let's learn from each other; it's better that way.

Ah, that's cheating! The challenge is to do it on the 20-year-old legacy code.

No, I'm not being sarcastic, but there has perhaps been a miscommunication. I thought the point of this thread was improving SC without breaking backwards compatibility (given it's tagged as development), not designing a new system from scratch, which, with the benefit of hindsight, would of course make this task significantly easier.

It's not about easier or harder. The idea that worked in my toy engine also fits the GraphDef structure; it's the same. That's my starting point. I may add something more, but that's irrelevant here. The matter under discussion is working with graphs and using specific techniques.

It's development and experimentation on the same data structure, so the same algorithms will work in Haskell, C++20, or C99. If you can't touch the GraphDef, that's a design issue that needs to be resolved.

Sure, I’m just trying to keep this discussion on track, which is about a specific proposal that targets the current codebase. That being said, there are indeed other ways of optimizing the SynthDef graph – I have my own thoughts about this topic – but this should be discussed in another thread.

The only thing I mentioned was suggesting pattern matching on graphs instead of using regex, which, frankly speaking, is a bit too hackish. If that doesn't add anything, I don't know what does.

I'm not pushing the idea; it just seems like a good fit for this.

I will leave this thread now. Thanks

… continuing as normal I guess??


Perhaps this could be its own topic, but I think it's deeply connected to what is being discussed; let me know and I'll move it if needed!

I’ve figured out how the optimisation pass works a little better now and have been able to get it to reduce more additions into Sum3s and Sum4s.

It was a simple matter of altering BinaryOpUGen:optimize(Sum3|Sum4|MulAdd...) to:

  1. stop checking that there is only one descendant,
  2. stop removing the 'input', and
  3. rerun optimizeGraph in SynthDef to retrigger the dead-code elimination.
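
The first two steps can be sketched in miniature. This is a hypothetical Python model, not the sclang class-library code (the tuple-based node layout and names are invented for illustration): the Sum3 rewrite now fires even when the inner `+` has several descendants, and it leaves that inner `+` in the graph instead of deleting it, letting a later dead-code pass collect it.

```python
# Nodes are (op, inputs); an int input is the index of an earlier node,
# a string input stands for a control/parameter.

def fuse_to_sum3(nodes):
    """Rewrite ('+', [inner, c]) into ('Sum3', [a, b, c]) when node `inner`
    is ('+', [a, b]). The inner '+' is left in place even if this orphans it;
    dead-code elimination is expected to sweep it up afterwards."""
    changed = False
    for i, (op, ins) in enumerate(nodes):
        if op == '+' and isinstance(ins[0], int) and nodes[ins[0]][0] == '+':
            a, b = nodes[ins[0]][1]
            nodes[i] = ('Sum3', [a, b, ins[1]])
            changed = True
    return changed

# sig = a + b, used twice (as in the first dump below): both users fuse,
# and node 0 becomes dead code.
nodes = [('+', ['a', 'b']),   # 0: sig, two descendants
         ('+', [0, 'd']),     # 1: sig + d
         ('+', [0, 'c'])]     # 2: sig + c
fuse_to_sum3(nodes)
```

Note that nothing here tries to clean up node 0; that separation is what makes rerunning the dead-code pass afterwards necessary.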

This runs the risk of creating duplicates, which pairs well with the rest of @jamshark70's duplicate-removal code!

I think the optimisation passes could be rewritten. It's very odd that the graph (SynthDef) delegates to the nodes (UGen) to do all of the optimisation and mutate the state of the entire graph.

Dead code elimination is one such optimisation that could probably be done at the graph level.
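
To make that concrete, here is a sketch of what a graph-level pass could look like (a hypothetical Python model with an invented node layout, not the current sclang implementation): the graph marks everything reachable from its side-effecting roots (the Outs) and drops the rest in one sweep, with no individual UGen mutating the graph.

```python
def eliminate_dead_code(nodes, roots):
    """Keep only nodes reachable from `roots`, re-indexing int inputs.
    Nodes are (op, inputs); int inputs are node indices, strings are params."""
    live, stack = set(), list(roots)
    while stack:                               # mark phase: walk from the roots
        i = stack.pop()
        if i not in live:
            live.add(i)
            stack.extend(x for x in nodes[i][1] if isinstance(x, int))
    remap = {old: new for new, old in enumerate(sorted(live))}
    kept = [(op, [remap[x] if isinstance(x, int) else x for x in ins])
            for k, (op, ins) in enumerate(nodes) if k in live]
    return kept, [remap[r] for r in roots]

# After a Sum3 rewrite, the old '+' at index 0 has no path to the Out root:
nodes = [('+', ['a', 'b']),            # orphaned by the rewrite
         ('Sum3', ['a', 'b', 'd']),
         ('Out', [1])]
kept, roots = eliminate_dead_code(nodes, [2])
```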


Here are some examples.

A clear win.

SynthDef(\s, {
	|a, b, c, d|
	var sig = a + b;
	Out.kr(1, sig + d);
	Out.kr(0, sig + c);
}).dumpUGens

// Old
[0_Control, control, nil]
[1_+, control, [0_Control[0], 0_Control[1]]]
[2_+, control, [1_+, 0_Control[3]]]
[3_Out, control, [1, 2_+]]
[4_+, control, [1_+, 0_Control[2]]]
[5_Out, control, [0, 4_+]]

// New
[0_Control, control, nil]
[1_Sum3, control, [0_Control[3], 0_Control[1], 0_Control[0]]]
[2_Out, control, [1, 1_Sum3]]
[3_Sum3, control, [0_Control[2], 0_Control[1], 0_Control[0]]]
[4_Out, control, [0, 3_Sum3]]

A draw, perhaps?

SynthDef(\s, {
	|a, b, c, d|
	var sig = a + b;
	Out.kr(1, sig + d);
	Out.kr(0, sig + c);
	Out.kr(4, sig);
}).dumpUGens

// old
[0_Control, control, nil]
[1_+, control, [0_Control[0], 0_Control[1]]]
[2_+, control, [1_+, 0_Control[3]]]
[3_Out, control, [1, 2_+]]
[4_+, control, [1_+, 0_Control[2]]]
[5_Out, control, [0, 4_+]]
[6_Out, control, [4, 1_+]]

// new
[0_Control, control, nil]
[1_+, control, [0_Control[0], 0_Control[1]]]
[2_Out, control, [4, 1_+]]
[3_Sum3, control, [0_Control[3], 0_Control[1], 0_Control[0]]]
[4_Out, control, [1, 3_Sum3]]
[5_Sum3, control, [0_Control[2], 0_Control[1], 0_Control[0]]]
[6_Out, control, [0, 5_Sum3]]

A clear win, but it needs @jamshark70's duplicate removal to improve further.

SynthDef(\t, {
	|a, b, c, d|
	var com = a + b;
	Out.kr(0, com + c);
	Out.kr(1, com + c);
}).dumpUGens;

// old
[0_Control, control, nil]
[1_+, control, [0_Control[0], 0_Control[1]]]
[2_+, control, [1_+, 0_Control[2]]]
[3_Out, control, [0, 2_+]]
[4_+, control, [1_+, 0_Control[2]]]
[5_Out, control, [1, 4_+]]

// new 
[0_Control, control, nil]
[1_Sum3, control, [0_Control[2], 0_Control[1], 0_Control[0]]]
[2_Out, control, [0, 1_Sum3]]
duplicate>> [3_Sum3, control, [0_Control[2], 0_Control[1], 0_Control[0]]]
[4_Out, control, [1, 3_Sum3]]

Again, all this is based on the assumption that Sum3 and Sum4 are cheap and that UGen calls are expensive.
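
That `duplicate>>` line is exactly what a common-subexpression pass would catch. A hypothetical sketch of such duplicate removal (again a Python model with an invented node layout, not @jamshark70's actual code): key each node by its operator and resolved inputs, and point later identical nodes at the first occurrence.

```python
def dedupe(nodes):
    """Merge structurally identical nodes (common-subexpression elimination).
    Assumes `nodes` is in topological order, so int inputs refer to earlier
    nodes. Side-effecting ops like Out are never merged."""
    seen, remap, kept = {}, {}, []
    for i, (op, ins) in enumerate(nodes):
        resolved = tuple(remap[x] if isinstance(x, int) else x for x in ins)
        key = (op, resolved)
        if op != 'Out' and key in seen:
            remap[i] = seen[key]          # reuse the earlier identical node
        else:
            remap[i] = len(kept)
            if op != 'Out':
                seen[key] = len(kept)
            kept.append((op, list(resolved)))
    return kept

# The third example's "new" dump: two identical Sum3s feeding two Outs.
nodes = [('Sum3', ['c', 'b', 'a']),
         ('Out',  ['bus0', 0]),
         ('Sum3', ['c', 'b', 'a']),   # the duplicate>> node
         ('Out',  ['bus1', 2])]
kept = dedupe(nodes)
# Both Outs now share the single surviving Sum3.
```

Excluding Out from merging is deliberate: two identical writes are still two writes, so only pure nodes are safe to collapse.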


Nice work!

I have thought the same! If the optimization passes were done “on” the UGens, instead of “in” the UGens, it would be much easier to reason about. Whenever I’m looking at the current graph optimization code, my head immediately begins to hurt :smiley:

Perhaps this could be its own topic

IMO it would warrant a new topic. Also because this thread is already quite long.