Please forgive me if I don't have all the details of the implementation in mind; this was quite demanding stuff.
It’s all hidden in Fb1_ODE.sc. Due to the required accuracy of calculation and additional options (e.g. providing the differential data, blurring the intermediate numerical results with further operators, etc.), the passed input Function has to be extended, or rather wrapped, in another Function.
These two parts of the source code plus comments might give a hint:
size = odeDef.size;
stepDepth = odeIntDef.stepDepth;
sizeFactor = odeIntDef.sizeFactor;
rawIntSize = sizeFactor * size;
// rawIntSize: size of arrays used in the iteration process (without accumulated time)
// this is either the size of the ODE system or its product with a factor 2,
// then the ODE Function values (= the derivative) are stored for each sample
// the x_d intTypes exist for cases where we want these values, but
// where they are not buffered by default (e.g. 'rk3', 'eum', etc.)
outSize = rawIntSize + 1; // time is stored too!
outDepth = stepDepth + 1; // for Fb1 the depth value must be lookback + 1!
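In other words (a language-agnostic Python sketch of the arithmetic above, with hypothetical names mirroring the variables in the snippet; not the actual implementation):

```python
def buffer_sizes(ode_size, stores_derivatives):
    # mirrors the size arithmetic commented above (sketch, not the real code):
    # ode_size: dimension of the ODE system
    # stores_derivatives: True for the x_d intTypes, which also buffer the
    # ODE Function values (the derivatives) for each sample
    size_factor = 2 if stores_derivatives else 1
    raw_int_size = size_factor * ode_size   # arrays used in the iteration
    out_size = raw_int_size + 1             # accumulated time is stored too
    return raw_int_size, out_size

# a 2-dimensional system, e.g. a harmonic oscillator:
print(buffer_sizes(2, False))   # (2, 3)
print(buffer_sizes(2, True))    # (4, 5)
```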
During the whole process, multiple evaluations take place. Whether this is avoidable, I cannot tell; at least I suppose it does no harm, as the Function provided for Fb1_ODEdef is meant for exactly that purpose and nothing else.
Regarding accumulated time: theoretically, this could be calculated by summing up the sample durations. As you can imagine, this leads to drifting, so I had to introduce a calibration mechanism, and Sweep turned out to be a good way to do it (around line 200 in Fb1_ODE.sc).
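The drift can be demonstrated outside of SC as well; a minimal Python sketch of the effect (my own illustration, not code from Fb1_ODE.sc):

```python
sr = 44100.0
dt = 1.0 / sr      # sample duration; not exactly representable in binary

t = 0.0
n = 10 * 44100     # ten seconds worth of samples
for _ in range(n):
    t += dt        # per-sample accumulation, as a naive solver would do

exact = n * dt     # one multiplication instead, far less rounding error
print(t - exact)   # small but nonzero, and it keeps growing with n
```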
Regarding the number of evaluations: this depends on the integration type. The default \sym2 applies 4 * 8 evaluations in this case (the recommended symplectic integration can be applied iteratively for each sample calculation; higher orders are possible, but \sym2 seems to be a good default). E.g., with intType \pec, the Function in the above example is applied only 8 times – the minimum to expect, according to blockSize.
I can imagine. Anyway, the system is really very useful to us, and we use it a lot, so thanks!
I see. Why is it necessary? Don’t we already have time as the first argument to the function? I actually stumbled across it when I wanted to derive the dimensionality of the phase space by writing y.size.
It is also a little contradictory because (according to the reasonable assumption in the documentation) we are expected to return an array of the same size as the one we receive in y. Maybe it could be dropped before being passed in?
As long as it doesn’t build the UGen graph four times, it is ok. If it does, getting around that would be a massive performance improvement.
I’d have to spend some time reading to understand what is happening; it is a complex process. But if you are sure that it doesn’t do this, I won’t worry.
That’s a different thing, as it really means the time in the ODE definition!
You might have time on the right-hand side of the equation (the non-autonomous case).
This is independent of the implementational aspect I’m referring to. Maybe it’s possible to do the implementation differently, but this seemed to me the most practical way when I designed it.
This reminds me of an ancient discussion with James on the list. I forget the topic, but James’ argument was that when you define the thing yourself, you know its properties. So why would you ask for the size when you have defined it yourself? It’s very easy to keep that number stored in a variable before applying the ODE solver.
The passed function is assumed to reflect the ODE and should not do anything else – because of the way it is used. Maybe this is a bit similar to the case of a UGen graph function, which is also used in a very specific way. All kinds of confusion come up when it is expected – as often happens with new users – to also perform some language-only operation “as normal”.
Maybe that could be emphasized more clearly in the documentation.
This was the statement that you should please ignore after my fresh air break
As I pointed out in my follow-up, it is in fact used multiple times for the graph building – and purposely so!
You can snoop into the file Fb1_ODEintdef.sc to compare the different implementations. The number of applications of odeDef in the Fb1_ODEintdefs gives a hint. For the default integration type \sym2, the helper function symplecticOp is called with order 2, thus it uses odeDef 2 x 2 times, which explains the overall 32 = 8 (blockSize) * 4 calls in your example.
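The call counts can be checked generically by wrapping the ODE function in a counter. A Python sketch of the bookkeeping (my own illustration, not the quark's code; the iterated scheme here is the implicit midpoint rule as a hypothetical stand-in, chosen because it is symplectic and of order 2 and, with 4 evaluations per sample, reproduces the same totals):

```python
calls = 0

def ode(t, y):
    # right-hand side of y' = f(t, y); here a harmonic oscillator
    global calls
    calls += 1
    x, v = y
    return [v, -x]

def euler_step(f, t, y, dt):
    # explicit Euler: a single evaluation of f per sample
    k = f(t, y)
    return [a + b * dt for a, b in zip(y, k)]

def implicit_midpoint_step(f, t, y, dt, iterations=4):
    # implicit midpoint rule (symplectic, order 2), solved by fixed-point
    # iteration: 'iterations' evaluations of f per sample -- a hypothetical
    # stand-in for symplecticOp's bookkeeping, where order 2 applied twice
    # likewise amounts to 2 * 2 = 4 evaluations per sample
    y_next = y
    for _ in range(iterations):
        y_mid = [(a + b) * 0.5 for a, b in zip(y, y_next)]
        k = f(t + dt * 0.5, y_mid)
        y_next = [a + b * dt for a, b in zip(y, k)]
    return y_next

block_size, dt = 8, 0.001

y = [0.0, 1.0]
for i in range(block_size):
    y = euler_step(ode, i * dt, y, dt)
euler_calls = calls
print(euler_calls)   # 8: one call per sample, the minimum, as with \pec

calls = 0
y = [0.0, 1.0]
for i in range(block_size):
    y = implicit_midpoint_step(ode, i * dt, y, dt)
print(calls)         # 32 = 8 samples * 4 calls per sample, as with \sym2
```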
// variants of prediction-evaluation-correction
// using explicit Euler and trapezoidal rule
Fb1_ODEintdef(\pec,
    { |odeDef, t, y, dt, size ... args|
        var tn = t + dt;
        var y0 = y[0..size-1];
        var y1 = y[size..2*size-1];
        var p = y1 * dt + y0;
        var pe = odeDef.(nil, tn, p, *args);
        var pec = (y1 + pe) * dt * 0.5 + y0;
        pec ++ pe
    }, 1, 2);
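Transcribed to Python for readability (a sketch mirroring the \pec definition above; the nil first argument of odeDef is dropped, and plain lists stand in for UGen arrays):

```python
def pec_step(ode, t, y, dt, size, *args):
    # one PEC step: predict with explicit Euler, evaluate the ODE at the
    # prediction, correct with the trapezoidal rule -- mirrors the \pec
    # Fb1_ODEintdef above; y holds the state plus the buffered derivative
    tn = t + dt
    y0 = y[:size]            # current state
    y1 = y[size:2 * size]    # derivative buffered from the previous step
    p = [a + b * dt for a, b in zip(y0, y1)]    # predict (explicit Euler)
    pe = ode(tn, p, *args)                      # evaluate at the prediction
    pec = [a + (b + c) * dt * 0.5
           for a, b, c in zip(y0, y1, pe)]      # correct (trapezoidal rule)
    return pec + list(pe)    # new state ++ new derivative, as in pec ++ pe

# usage: harmonic oscillator x'' = -x as a 2-dimensional first-order system
osc = lambda t, y: [y[1], -y[0]]
state = [0.0, 1.0] + osc(0.0, [0.0, 1.0])   # init state ++ init derivative
state = pec_step(osc, 0.0, state, 0.01, 2)  # near [sin(0.01), cos(0.01), ...]
```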
But as said, the symplectic variants are far superior in terms of stability – a trade-off with CPU usage that, if it is a concern, could be decided case by case.
I understand this: the time on the right-hand side would be an external time, which is passed in as the first argument. What I don’t understand yet is what the extra time entry in the phase state does.
This value is always zero in my attempts:
s.options.blockSize = 1; s.reboot;
(
Fb1_ODEdef(\x, { |t, state|
    var x, y, innerTime;
    x = state[0];
    y = state[1];
    innerTime = state[2];
    "state size is: %".format(state.size).postln;
    innerTime.poll(label: "time in phase state");
    t.poll(label: "time passed in");
    [
        { y },
        { x.neg },
    ]
}, 0, [0, 1]);
)
// when withTimeChannel is false (default), the extra state value equals 0 for both default and sym6 integrators:
{ Fb1_ODE.ar(\x, [], 400, 0, [0, 1]) }.play;
{ Fb1_ODE.ar(\x, [], 400, 0, [0, 1], intType: \sym6) }.play;
// when withTimeChannel is true and with the sym6 integrator, the extra state value equals t (which is passed in)
{ Fb1_ODE.ar(\x, [], 400, 0, [0, 1], intType: \sym6, withTimeChannel: true) }.play;
I am sure I am missing something simple and basic.
You don’t have to know the size when you define the system in Fb1_ODEdef. The function you pass to the solver may be general and adjust itself to different input sizes. The actual size may be given later in the synth, that is in Fb1_ODE.
If the function depends on the size of the phase state, it needs some way to derive it. This is state.size - 1, and that works; I would just like to be able to explain why the extra value is there …
Sure, we can’t expand SynthDef graphs on a per-Synth basis. But the ODE solver provides a graph for a synth definition, which may have different shapes depending on circumstances. Just to make clear what I mean, the analogy in synth defs would be this:
(
~uGenGraphFunction = { |freqs|
    freqs.collect { |freq, i|
        SinOsc.ar(freq) / (i + 1)
    }
};

SynthDef(\x, { |out|
    var f = { 1.0.rand2 } ! 5;
    var in = LocalIn.ar(f.size, f);
    // feed the feedback input, not the static init values, into the graph function
    var sig = ~uGenGraphFunction.(in.linexp(-1, 1, 300, 800));
    LocalOut.ar(sig.rotate(1) * 0.01);
    Out.ar(out, sig.mean * 0.2)
}).add;
)
Synth(\x);
Ah yes, very good, thank you – actually, here I see that the state is extended by some of the solvers, e.g. in \rk3h_d (yNew ++ fNew), so I suspect that this is the reason we are seeing extra state coming in (ref. the question above)? If this is the case, it might be better to drop that extra state before passing it to the actual function:
// instead of
fb1_func = { |in, out|
    fb1_intFunc.(out, in[1] /* last time */, basicDelta * tMul, argList)
        .miSC_fb1_perform(compose, [], size, outSize);
};

// keep only the size
fb1_func = { |in, out|
    fb1_intFunc.(out, in[1].keep(size) /* last time */, basicDelta * tMul, argList)
        .miSC_fb1_perform(compose, [], size, outSize);
};
For polling, you’d need the poll in the returned expression, as this is the one merged into the graph. But keep in mind that this is not the intended way to get the time – for that there is Fb1_ODE’s withTimeChannel arg.
// just for the sake of getting the numbers, no functionality of t in ODE (autonomous)
(
Fb1_ODEdef(\time_poll_ex_1, { |t, state|
    var x, y;
    x = state[0];
    y = state[1];
    [
        { (t * 0.001).poll(1); y }, // t in milliseconds
        { x.neg },
    ]
}, 0, [0, 1]);
)
{ Fb1_ODE.ar(\time_poll_ex_1, [], 1000, 0, [0, 1]) * 0.3 }.scope;
// functionality of t in ODE (non-autonomous)
(
Fb1_ODEdef(\time_poll_ex_2, { |t, state|
    var x, y;
    x = state[0];
    y = state[1];
    [
        { y },
        { x.neg * exp((t.poll / 1000).neg) },
    ]
}, 0, [0, 1]);
)
{ Fb1_ODE.ar(\time_poll_ex_2, [], 1000, 0, [0, 1]) * 0.3 }.scope;
While it’s possible to avoid defining the size in Fb1_ODEdef, that was not my base case, for the reasons described in the above thread (adding, IMO, unnecessary complexity with init vectors etc.).
However, if it makes sense in your application, you can request the size from the Fb1_ODE. Under normal circumstances (no time channel or diff channels returned), you get the state size. With the time channel, or for certain integration types that return the diff channels (like \pec), you get these in addition and could just subtract 1 and/or divide by 2. Try uncommenting the lines in this example of state size 3:
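The subtract/divide logic can be written down as a small helper (hypothetical; it merely restates the arithmetic just described and is not an API of the quark):

```python
def ode_size(channel_count, with_time_channel=False, with_diff_channels=False):
    # recover the ODE system size from the number of returned channels:
    # optionally drop the time channel, then halve if the integration type
    # (like \pec) also returns the diff channels -- hypothetical helper
    n = channel_count - (1 if with_time_channel else 0)
    return n // 2 if with_diff_channels else n

# a system of state size 3:
print(ode_size(3))                                # plain case: 3
print(ode_size(4, with_time_channel=True))        # 4 - 1 = 3
print(ode_size(6, with_diff_channels=True))       # 6 / 2 = 3
print(ode_size(7, True, True))                    # (7 - 1) / 2 = 3
```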