ODEdef state vector size

Working with ODEdef (from miSCellaneous Quark), I noticed two things I could not explain.

@dkmayer maybe you have an idea?

phase state size (+1 ?)

I would expect the phase state vector to have the same size as the init state vector.
But it is one larger.

This is clearest with the first example from the help file. It posts a y size of 3, while y0 is [0, 1]:

(
s = Server.local;
Server.default = s;
s.options.blockSize = 8;
s.reboot;
)
(
Fb1_ODEdef(\harmonic_ext_1, { |t, y, k|
	"y size: %".format(y.size).postln;
    [
        y[1], 
        y[0].neg * (1 + (k * y[1]))
    ]
}, 0, [0, 1], 1, 1); 
)
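For checking the math outside of SuperCollider, here is a rough Python sketch of the same system (a naive forward-Euler loop, purely illustrative and unrelated to the quark's solvers), which shows that the mathematical state has dimension 2, matching y0 = [0, 1]:

```python
# Sketch (not the quark's code) of the ODE defined above:
# y0' = y1, y1' = -y0 * (1 + k * y1); k = 0 gives the plain
# harmonic oscillator.

def harmonic_ext(t, y, k):
    """Right-hand side of the extended harmonic oscillator."""
    return [y[1], -y[0] * (1 + k * y[1])]

def euler(f, t0, y0, dt, steps, *args):
    """Naive forward Euler, only for illustration."""
    t, y = t0, list(y0)
    for _ in range(steps):
        dy = f(t, y, *args)
        y = [yi + dt * di for yi, di in zip(y, dy)]
        t += dt
    return y

y = euler(harmonic_ext, 0, [0, 1], 0.001, 1000, 0)
print(len(y))  # the state stays 2-dimensional
```

With k = 0 the exact solution from [0, 1] is (sin t, cos t), so after integrating to t = 1 the result is close to (sin 1, cos 1), up to the Euler method's first-order error.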


// brassy sound

(
x = {
    var sig = Fb1_ODE.ar(\harmonic_ext_1,
        [1], 1500, 0, [0, 1]
    ) * 0.1;
    sig
}.play
)

number of function calls

The second thing: I just wondered why the function is called 4 times for every frame that needs to be built.

Hi Julian,

I won’t have time to look into this before vacation (which starts on the weekend).
Until then, best

Daniel


Hi again,

please forgive me if I don’t have all the details of the implementation in my mind; this was quite demanding stuff :slight_smile:

It’s all hidden in Fb1_ODE.sc. Due to accuracy requirements and additional options (e.g. providing the differential data, blurring the numerical intermediate results with further operators, etc.), the passed input Function has to be extended, i.e. wrapped in another Function.

These two parts of the source code plus comments might give a hint:

		size = odeDef.size;
		stepDepth = odeIntDef.stepDepth;
		sizeFactor = odeIntDef.sizeFactor;
		rawIntSize = sizeFactor * size;

		// rawIntSize: size of arrays used in the iteration process (without accumulated time)
		// this is either the size of the ODE system or its product with a factor 2,
		// then the ODE Function values (= the derivative) are stored for each sample

		// the x_d intTypes exist for cases where we want these values, but
		// where they are not buffered by default (e.g. 'rk3', 'eum', etc.)

		outSize = rawIntSize + 1; // time is stored too !
		outDepth = stepDepth + 1; // for Fb1 the depth value must be lookback + 1 !

and

			// core: everything packed into the Fb1 function, which constructs the graph iteratively
			fb1_func = { |in, out|
				var lastDt, count = composeArIn.size - 1;
				composeIns = composeArIn.size.collect { |i| in[0][i] };

				#argIns, count = argList.miSC_adaptFb1ArgList(in[0], count);

				tSumInOld = in[1][count + 1];
				tMulIn = tMul.miSC_isAr.if { in[0][count + 2] }{ tMul };

				fb1_intFunc.(out, tSumInOld, basicDelta * tMulIn, argIns)
					.miSC_fb1_perform(compose, composeIns, size, outSize);
			};
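The bookkeeping from the first snippet can be made concrete with a small sketch (in Python, with names mirroring the quoted variables; the example values for a 2-dimensional ODE are assumptions for illustration, not the quark's code):

```python
# Sketch of the size bookkeeping quoted above: the iteration arrays hold
# the ODE state (possibly doubled, to buffer the derivative values per
# sample), plus one extra slot for the accumulated time.

def out_size(ode_size, size_factor):
    raw_int_size = size_factor * ode_size  # arrays used in the iteration
    return raw_int_size + 1                # time is stored too!

# For the 2-dimensional harmonic oscillator with sizeFactor 1 this gives 3,
# matching the y size of 3 posted in the first example.
print(out_size(2, 1))
```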

During the whole process, multiple evaluations take place. Whether this is avoidable I cannot tell; at least I suppose it does no harm, as the Function provided to Fb1_ODEdef is intended for exactly that purpose and nothing else.

Hope that helps.

Cheers

Daniel

A bit more context after some fresh air :slight_smile:

Ad accumulated time: theoretically, this could be calculated by summing up the sample durations. As you can imagine, this leads to drifting, so I needed to introduce a calibration mechanism, and Sweep turned out to be a good way to do it (around line 200 in Fb1_ODE.sc).
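The drift of naive summation can be illustrated with a tiny Python example (the specific dt here is just for demonstration, not an audio rate):

```python
# Repeatedly adding a binary-inexact step size accumulates rounding error,
# whereas n * dt computed directly does not drift.

dt = 0.1        # not exactly representable in binary floating point
acc = 0.0
for _ in range(10):
    acc += dt   # accumulated time, as a naive solver would compute it

print(acc == 1.0)       # False: the sum has already drifted
print(abs(acc - 1.0))   # tiny here, but it grows with the number of steps
```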

Ad number of evaluations: this depends on the integration type. The default \sym2 applies 4 * 8 evaluations in this case (the recommended symplectic integration can be applied iteratively for each sample calculation; higher orders are possible, but \sym2 seemed to be a good default). E.g., with intType \pec, the Function in the above example is applied only 8 times, the minimum to expect given the blockSize.

I can imagine. Anyway, the system is really very useful to us, and we use it a lot, so thanks!

I see. Why is it necessary? Don’t we already have time as the first argument to the function? I actually stumbled across it when I wanted to derive the dimensionality of the phase space by writing y.size.

It is also a little contradictory because (according to the reasonable assumption in the documentation) we are expected to return an array of the same size as the one we receive in y. Maybe the extra element could be dropped before passing it in?

As long as it doesn’t build the UGen graph four times, it is ok. If it does, getting around that would be a massive performance improvement.

I’d have to spend some time reading to understand what is happening; it is a complex process. But if you are sure it doesn’t do this, I won’t worry.

That’s a different thing, as it really means the time in the ODE definition!
You might have time on the right-hand side of the equation (the non-autonomous case).
This is independent of the implementation aspect I’m referring to. Maybe it’s possible to do the implementation differently, but this seemed to me the most practical way when I designed it.

This reminds me of an old discussion with James on the list. I forget the topic, but James’ argument was that when you define the thing yourself, you know its properties. So why would you ask for the size when you have defined it yourself? It’s very easy to keep that number stored in a variable before applying the ODE solver.

The passed Function is assumed to reflect the ODE and should not do anything else, because of the way it is used. Maybe this is a bit similar to the case of a UGenGraphFunction, which is also used in a very specific way. All kinds of confusion come up when it is expected, as often happens with new users, to also perform some language-only operation “as normal”.

Maybe that could be emphasized more clearly in the documentation.

This was the statement that you should please ignore after my fresh air break :slight_smile:

As I pointed out in my follow-up, it is in fact used multiple times for the graph building, and purposely so!

You can snoop into the file Fb1_ODEintdef.sc to compare the different implementations. The number of applications of odeDef in the Fb1_ODEintdefs gives a hint. For the default integration type \sym2, the helper function symplecticOp is called with order 2, thus it uses odeDef 2 x 2 times, which explains the overall 32 = 8 (blockSize) * 4 calls in your example.

		symplecticOp = { |order, odeDef, t, y, dt, size, args|
			var newArgs = y.copy, yMid = y.copy, yNew = 0!size, or = 1/order;

			div(order, 2).do { |k|
				(k > 0).if {
					newArgs = yNew.copy;
					yMid = yNew.copy;
				};
				for(0, size-1, { |i|
					(i > 0).if { newArgs[i-1] = yMid[i-1] };
					yMid[i] = odeDef.(i, t, newArgs, *args) * dt * or + yMid[i];
				});
				for(size-1, 0, { |i|
					(i + 1 < size).if { newArgs[i+1] = yNew[i+1] };
					yNew[i] = odeDef.(i, t, newArgs, *args) * dt * or + yMid[i];
				});
			};
			yNew
		};

		Fb1_ODEintdef(\sym2,
			{ |odeDef, t, y, dt, size ... args|
				symplecticOp.(2, odeDef, t, y, dt, size, args);
			}, 1, 1);
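To see where the factor 4 comes from, here is a hypothetical Python transcription of symplecticOp (order 2) with a call counter; the ode function below is the extended harmonic oscillator from the first post, evaluated component-wise. This is a sketch under those assumptions, not the quark's actual code:

```python
# One \sym2 sample step evaluates the ODE function 2 * size times:
# a forward and a backward sweep, each touching every component once.
# For the 2-dimensional oscillator that is 4 calls per sample, hence
# 8 (blockSize) * 4 = 32 calls per control block.

calls = 0

def ode(i, t, y, k):
    """Component i of the extended harmonic oscillator, counting calls."""
    global calls
    calls += 1
    return [y[1], -y[0] * (1 + k * y[1])][i]

def symplectic_step(order, ode_def, t, y, dt, size, args):
    new_args, y_mid, y_new = list(y), list(y), [0.0] * size
    r = 1.0 / order
    for k in range(order // 2):
        if k > 0:
            new_args, y_mid = list(y_new), list(y_new)
        for i in range(size):                # forward sweep
            if i > 0:
                new_args[i - 1] = y_mid[i - 1]
            y_mid[i] = ode_def(i, t, new_args, *args) * dt * r + y_mid[i]
        for i in reversed(range(size)):      # backward sweep
            if i + 1 < size:
                new_args[i + 1] = y_new[i + 1]
            y_new[i] = ode_def(i, t, new_args, *args) * dt * r + y_mid[i]
    return y_new

y = symplectic_step(2, ode, 0.0, [0.0, 1.0], 0.001, 2, (0.0,))
print(calls)  # 4 evaluations for one sample of a 2-dim system
```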

With \pec, though, there are only 8 (blockSize) calls:

		// variants of prediction-evaluation-correction
		// using explicit Euler and trapezoidal rule
		Fb1_ODEintdef(\pec,
			{ |odeDef, t, y, dt, size ... args|
				var tn = t + dt;
				var y0 = y[0..size-1];
				var y1 = y[size..2*size-1];
				var p = y1 * dt + y0;
				var pe = odeDef.(nil, tn, p, *args);
				var pec = (y1 + pe) * dt * 0.5 + y0;
				pec ++ pe
			}, 1, 2);
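Analogously, a hypothetical Python transcription of the \pec scheme (again a sketch, not the quark's code): the state buffer carries both y and the last derivative (which is why sizeFactor is 2 here), and each sample needs only one evaluation of the whole ODE Function.

```python
# Prediction-evaluation-correction: explicit Euler predictor, trapezoidal
# corrector. One ODE evaluation per sample -> 8 calls per block of 8.

calls = 0

def ode(t, y, k):
    """Whole right-hand side at once, counting evaluations."""
    global calls
    calls += 1
    return [y[1], -y[0] * (1 + k * y[1])]

def pec_step(ode_def, t, y, dt, size, args):
    tn = t + dt
    y0 = y[:size]                # current state
    y1 = y[size:2 * size]        # previously stored derivative
    p = [a * dt + b for a, b in zip(y1, y0)]      # Euler predictor
    pe = ode_def(tn, p, *args)                    # the single evaluation
    pec = [(a + b) * dt * 0.5 + c
           for a, b, c in zip(y1, pe, y0)]        # trapezoidal corrector
    return pec + pe              # new state ++ new derivative

state = [0.0, 1.0] + ode(0.0, [0.0, 1.0], 0.0)   # seed the derivative half
calls = 0
for n in range(8):               # one control block of 8 samples
    state = pec_step(ode, n * 0.001, state, 0.001, 2, (0.0,))

print(calls)  # 8: one ODE evaluation per sample
```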

But as said, the symplectic variants are far superior in terms of stability; the trade-off with CPU usage, if it is a concern, can be decided from case to case.

Cheers

Daniel