# CPU usage of additive synthesis (beginner)

As most of you know, you can create some pretty interesting waveforms with additive synthesis. I really like the possibilities it opens up, but it seems to use a lot of CPU. Are there ways to reduce CPU usage, for example at the cost of synth flexibility? Here's an example of a SynthDef.
``````
(
{
    var sig;
    sig = Array.fill(200, { |i|
        var j = i * 3 + 1;
        SinOsc.ar(MouseX.kr(1, 200) * j, 0, 1 / (j * 2))
    }).sum;
    Out.ar(0, sig);
}.play;
)
``````


Have you tried this with Klang instead? It means you won’t be able to alter the frequencies in quite the same way you are doing now.

> Klang is a bank of fixed frequency sine oscillators. Klang is more efficient than creating individual oscillators but offers less flexibility.
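
As a minimal sketch of the Klang variant (the 100 Hz base frequency and the 0.1 output scaling are arbitrary illustration values, not from the original post), with the frequencies fixed at synth creation time, so the MouseX control is no longer possible:

``````
(
{
    var freqs = Array.fill(200, { |i| (i * 3 + 1) * 100 });
    var amps = Array.fill(200, { |i| 1 / ((i * 3 + 1) * 2) });
    // Klang takes a ref array of [freqs, amps, phases]; nil phases default to 0
    Klang.ar(`[freqs, amps, nil]) * 0.1
}.play;
)
``````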

However, given that your spectrum remains the same, you should be able to turn this into a fixed buffer for wave-table synthesis for use with `Shaper`.

Like this for shaper…

``````
~sz = 2048;
~sig_sz = ~sz / 2 + 1;                         // must be (2^n) + 1 - see Shaper docs

(
~sig = Signal.fill(~sig_sz, { |i|
    var t = i.linlin(0, ~sig_sz, 0, 2pi);      // t is our time index - we only need one full wave cycle, that's 0 to 2pi
    200.collect({ |i|
        var j = i * 3 + 1;                     // your code - calc harmonics
        sin(t * j) * (j * 2).reciprocal
    }).sum                                     // calculate all 200 partials and sum them together
}).normalize;                                  // make sure |s| <= 1
)

~b = Buffer.alloc(s, ~sz);
~b.sendCollection(~sig.asWavetableNoWrap);     // send to buffer as a wavetable - see Shaper docs

~b.plot;

(
x = {
    var lerp = Saw.ar(MouseX.kr(100, 1400, 1)); // lerp reads through the buffer - try a sine wave? lerp must be some f in [-1, 1]
    Shaper.ar(~b, lerp)
}.play
)
x.free;
s.scope;
``````

Is IFFT an option? Fill an array in the right spots with amplitudes and then transform. I have not tried it yet. (Thinking of what Virtual ANS does: the Virtual ANS Spectral Synthesizer from WarmPlace.ru.)

Thanks for the answer. I haven't had time to fully understand the Shaper part, but Klang offered me sort of what I was asking for. My original idea for the SynthDef was actually this:

``````
(
{
    var sig;
    sig = Array.fill(200, { |i|
        var j = i * 3 + 1;
        SinOsc.ar(MouseX.kr(1, 200) * j, 0, 1 / (j * 2))
    }).sum;
    sig = sig * SinOsc.ar(MouseY.kr(1, 200));
    sig = sig * 0.5;
    Out.ar(0, sig ! 2);
}.play;
)
``````

I wanted to explore the sounds with my mouse, and once I found a set of parameters I liked, I would hardcode them into a SynthDef. Klang is exactly the solution to this, since it has lower CPU usage. This is what I came up with:

``````
(
~freqs = Array.fill(200, { |i|
    var j = i * 3 + 1;
    j
});

~amps = Array.fill(200, { |i|
    var j = i * 3 + 1;
    1 / (j * 2)
});
)

~mul = 2;
{ Klang.ar(`[~mul * ~freqs, ~amps, nil], 1, 0) * 0.5 * SinOsc.ar(MouseY.kr(1, 200)) }.play;
``````

However, when ~mul is lower than 1.8, the audio level starts to grow indefinitely. It's a shame, because it's exactly those low freqs I want to use. Do you have any idea why that is happening?
I'm sorry that the topic shifted.

This produces a near-zero frequency component. You can't hear it; it's effectively a DC offset. Do this instead…

``````
~freqs = Array.fill(200, { |i|
    var j = (i + 1) * 3 + 1;
    j
});
``````

For complex additive synthesis, it’s better to write a buffer which stores the result of the complex equation, and have a synthdef simply play the buffer stored in memory, rather than computing the equation each time.

There are two classes in the core library which support this approach: Signal and Wavetable.

Since this happens to be a recent area of interest for me, I can't claim this is the be-all and end-all of any one pursuit in digital synthesis, or in creating SynthDefs.

I can however, offer this code I’ve written (slightly modified to make things simple) which uses `Buffer.loadCollection` and `Osc.ar` to play the signal created inside the function for the `collection:` argument.

There’s also a plot function thrown in there to visualize the waveform…this can be removed:

``````
(
Buffer.loadCollection(
    server: s,
    numChannels: 1,
    collection: {
        var sig = Wavetable.sineFill(1024, 1 / (1..21)); // classic saw

        sig.plot;

        sig
        // sig.asWavetable // convert if starting from a Signal
    }.value,
    action: { |b|
        SynthDef(\x, { |freq|
            Out.ar(0, Pan2.ar( // creates balanced stereo output from a single channel
                // correct method for wavetable playback
                in: Osc.ar(b, BufRateScale.kr(b) * freq),
                pos: 0, // -1..1
                level: Env.perc.ar(Done.freeSelf)
            ))
        }).play(s, [\freq, 172])
    }
)
)
``````

One of the rationales for additive synthesis is to modulate the partials’ amplitudes (and perhaps frequencies too) over time, which a steady-state wavetable wouldn’t do.
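
As a minimal sketch of that idea (the partial count, base frequency, and LFO rates here are arbitrary illustration values): each partial gets its own slowly varying amplitude, so the spectrum evolves over time in a way a fixed wavetable cannot.

``````
(
{
    var sig = Array.fill(40, { |i|
        var j = i * 3 + 1;
        // per-partial amplitude envelope: a slow random LFO, scaled by the 1/(2j) weighting
        var amp = LFNoise1.kr(rrand(0.2, 1.0)).range(0, 1 / (j * 2));
        SinOsc.ar(60 * j, 0, amp)
    }).sum;
    (sig * 0.5) ! 2
}.play;
)
``````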

hjh

Yes, that fixed it, thank you. Although I don't quite understand why, since the first element of the freqs array is actually 1, not 0.

Are you aware of anything from sccode.org demonstrating this way of modulation?

Frequencies lower than 20 Hz are inaudible and only negatively affect the sound. TL;DR: you should never play something below 20 Hz; use LeakDC to remove it.

The speaker cone moves between two maximal positions (in and out). The faster its position changes, the higher the pitch; the further in or out it moves, the louder the sound. If you add frequencies in such a way that they interfere constructively, you need to move the speaker cone further than for either frequency by itself, and since there is a limit on how far the cone can move, you will get distortion if you don't compensate for this. Lower frequencies take longer to return to zero (a frequency that takes infinitely long to return is at 0 Hz, which is called a DC offset), so you are creating a worst-case scenario and will always get clipping.

You might want to keep the higher harmonics of such a low fundamental, but remove the fundamental itself.
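
A sketch along those lines (the 1.5 Hz fundamental and the 25 Hz cutoff are arbitrary illustration values): LeakDC removes the DC-like drift, and a high-pass filter attenuates the sub-audio fundamental while the audible harmonics pass through.

``````
(
{
    var fund = 1.5; // sub-audio fundamental, as in the ~mul < 1.8 case
    var sig = Array.fill(50, { |i|
        var j = i * 3 + 1;
        SinOsc.ar(fund * j, 0, 1 / (j * 2))
    }).sum;
    sig = LeakDC.ar(sig);  // remove the DC offset
    sig = HPF.ar(sig, 25); // attenuate sub-audio partials, keep the higher harmonics
    (sig * 0.5) ! 2
}.play;
)
``````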


Thank you for the extensive explanations!

No (I’m not aware).

The question is independent of SuperCollider – enveloped control over frequency and amplitude of multiple partials is a general idea that can be realized in any audio environment. Start with some articles about additive synthesis (1 minute search found Sound on Sound’s treatment), then implement the ideas in SC.

From that perspective, it isn’t really necessary to be able to point to a specific example at sccode.

hjh

There have been two examples of doing exactly that: one by @PitchTrebler and one by me.

This topic seems to come up every so often - interestingly, there was a talk on implementing sine oscillator (banks) efficiently at the last ADC - the speaker reports 2048 oscillators running at ~4% CPU on an i7:

Would be amazing if we had something like that in SC. Unfortunately, `DynKlang` (the modulatable "sine bank" pseudo-UGen) is only a wrapper around `SinOsc` and therefore not very efficient, and `Klang` is not as flexible (compared to, say, Reaktor's sine bank implementation, which is what's powering LazerBass and Razor under the hood).

Obviously that figure pales in comparison to a GPU implementation (one million sines in real-time):

On a semi-related note, I did a small, nonscientific benchmark of `FSinOsc` and `SinOsc` yesterday (using a couple thousand random sines) and it seems that `FSinOsc` is actually marginally slower than `SinOsc` now.


Yes, please. This would be great!

The key parts of this presentation are between 20 and 25 minutes – in short, handling the phase increment of a sinusoidal oscillator by complex multiplication by a vector (faster than repeated `sin` or `cos` calls).

``````
(
var point = Complex(0.9, 0);

// let's do a hundred steps per cycle
var phaseIncrement = 2pi / 100;
var vector = Complex(cos(phaseIncrement), sin(phaseIncrement));

var color = Color(0.5, 0.5, 1);

u = UserView(nil, Rect(800, 200, 400, 400)).front;

u.drawFunc = { |view|
    var b = view.bounds.moveTo(0, 0);
    var center = b.center;

    Pen.color_(color);
    Pen.fillOval(Rect.aboutPoint(
        Point(
            (point.real + 1) * center.x,
            (1 - point.imag) * center.y
        ),
        10, 10
    ));
};

r = {
    loop {
        u.refresh;
        point = point * vector;  // no sin / cos here!
        0.05.wait;
    }
}.fork(AppClock);

u.onClose = { r.stop };
)
``````

The algorithm would not be difficult to implement in C++. SC’s plug-in interface even includes complex-number operators already. And the cookiecutter template makes it straightforward to release for multiple platforms.

Edit: Of course, minutes after posting, some details come to mind.

I’m not sure if a complex-multiply approach would be faster or slower than SinOsc’s lookup table approach. The presentation suggests that multiply-add is highly optimized in modern CPUs (+1 for complex multiply) and that removing a conditional to wrap the phase back around would also improve compiler optimization (+1), but the performance killer in the presentation’s native sine oscillator is the sin() function. SC uses a lookup instead. I can’t predict which would be faster or slower.

Modulating frequency would require updating the vector, requiring a cos and a sin. I didn’t watch through the whole thing to see how they handle that. Lookup would help but now that I think about it, continuously changing frequencies could be slower with complex multiply.

So maybe I was a bit hasty there. In any case, the real killer with DynKlang is that the multiply-adds are done in separate UGens – the function call overhead really adds up when you have a couple hundred partials. So a modulatable single-UGen Klang would help a lot.

hjh


As an alternative implementation (to doing the addition within one SynthDef) you can work with Patterns. Here are some examples (bottom of the post): Implementing filters with additive synthesis - #4 by dkmayer

Both methods have their pros and cons. From teaching I can say that many find the Pattern approach more intuitive. E.g., with SynthDefs you often run into situations where good handling of arrays is the key to making things practical, but this can be technically more demanding (e.g., the use of array args, zero-padding, etc.). These things often cause confusion.

Does the use of patterns greatly reduce cpu usage when doing additive synthesis?

No. What you see in the status bar is only the server's CPU usage, so language-side calculations and the extra OSC traffic add to the bill. You might thus end up with more CPU usage than if you mimicked the example with a server-only variant. However, that is often not exactly possible, or only with an immense amount of programming. Seen from the positive side: both approaches can lead to unique solutions that, practically, one wouldn't find the other way.

So, CPU usage is only one part of the considerations, and often one that is less important. I think in synthesis there exists the mind trap of "more == better", and granular and additive synthesis are prone to it. Interesting synthesis does not automatically result from more grains or more sine (or other) components overlaid; often it's differentiated and/or unusual control that is worth exploring.

BTW, another option – in additive and granular synthesis – is a hybrid control approach. You might, e.g., start with a SynthDef being able to deal with a lot of sines and sequence such synths with Patterns. What I personally like about such setups is that you have the “mass control” encapsulated in the SynthDef and can do the differentiated sequencing control with Patterns (what they can do very well).
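
A minimal sketch of such a hybrid setup (the SynthDef name, partial count, and pattern values are all made up for illustration): the SynthDef encapsulates the mass of sines, and a Pbind does the sequencing.

``````
(
SynthDef(\sines, { |out = 0, freq = 100, amp = 0.1, rel = 2|
    var sig = Array.fill(50, { |i|
        var j = i * 3 + 1;
        SinOsc.ar(freq * j, 0, 1 / (j * 2))
    }).sum;
    var env = Env.perc(0.01, rel).ar(Done.freeSelf);
    Out.ar(out, Pan2.ar(sig * env * amp));
}).add;

Pbind(
    \instrument, \sines,
    \freq, Pseq([50, 75, 60], inf),
    \rel, Pwhite(1.0, 3.0, inf),
    \dur, 0.5
).play;
)
``````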
