Sclang Clocks stop during code execution?

I was surprised to realize today that all SC clocks (SystemClock, TempoClock, AppClock) act as if time simply stops during code execution: their internal time neither advances while code runs nor catches up afterwards. These clocks report exactly the same time before and after a code block, even if that block took seconds to execute.

Process.elapsedTime, however, correctly records the passage of time.

So, unless there’s something I’m missing, we should always expect these clocks to run slower than actual time, since all code takes some amount of time to run. This must be by design, yes? Maybe it would be worth noting in the Clock docs that SC’s clocks cannot be used to reliably measure the passage of time?

This affects not only the ability to measure the passage of time, but also scheduling, as seen in the example below.

There’s clearly something I’m misunderstanding about the way these clocks work. Any insights are welcome.

(
var clockFuncs, startTimes, endTimes;
clockFuncs = [{Process.elapsedTime},{SystemClock.seconds},{TempoClock.beats},{AppClock.seconds}];
startTimes = clockFuncs.collect{|clockFunc| clockFunc.value};

SystemClock.sched(1.0,{
	("Here's the scheduled function. It was scheduled for 1 second into the future, but"+(Process.elapsedTime-startTimes[0])+"have elapsed since it was scheduled.").postln;
});

1000000.do{ |i|
	var isPrime = true;
	var div = 2;
	while {isPrime && (div <= sqrt(i))}
	{ if(i%div == 0,{isPrime = false}); div = div+1 };
	// if(isPrime, {i.postln});
};

endTimes = clockFuncs.collect{|clockFunc| clockFunc.value};
clockFuncs.do{|clockFunc, i|
	clockFunc.asCompileString.postln;
	("   start:"+startTimes[i]).postln; 
	("   end:"+endTimes[i]).postln; 
	("   elapsed:"+(endTimes[i] - startTimes[i])).postln;
};
)

/* Results on my computer:

{Process.elapsedTime}
   start: 1629.762018209
   end: 1634.669503834
   elapsed: 4.9074856249999
{SystemClock.seconds}
   start: 1629.761879334
   end: 1629.761879334
   elapsed: 0.0
{TempoClock.beats}
   start: 1629.5189145
   end: 1629.5189145
   elapsed: 0.0
{AppClock.seconds}
   start: 1629.761879334
   end: 1629.761879334
   elapsed: 0.0
-> [a Function, a Function, a Function, a Function]
Here's the scheduled function. It was scheduled for 1 second into the future, but 4.907576458 have elapsed since it was scheduled.

*/

This is logical time.

Without logical time, you cannot have accurate musical timing.

There is actually a help file about this – Scheduling and Server timing | SuperCollider 3.14.0 Help

However, the conclusion is the opposite of yours. If code takes x seconds to execute, and a 0.25.wait were then measured against physical time, the next event would happen not 0.25 beats later but 0.25 + x beats later, and all rhythmic threads would drift late, each by a different amount. This is the bad outcome, which logical time prevents. Scheduling is reliable precisely because logical time does not advance during code execution.
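A toy sketch of this in Python rather than sclang (the function and all its values are invented for illustration): scheduling each wait against an accumulated logical deadline absorbs the per-beat processing time, while sleeping the raw interval lets it pile up.

```python
import time

def run(beats, interval, use_logical_time):
    """Play `beats` events `interval` seconds apart; return the total drift
    past the ideal duration of beats * interval seconds."""
    start = time.monotonic()
    deadline = start
    for _ in range(beats):
        time.sleep(0.01)  # stand-in for code that takes time to execute
        if use_logical_time:
            deadline += interval  # next deadline computed from logical time
            time.sleep(max(0.0, deadline - time.monotonic()))
        else:
            time.sleep(interval)  # naive: waits interval + x, accumulating drift
    return (time.monotonic() - start) - beats * interval

print(run(10, 0.05, use_logical_time=False))  # roughly 0.1 s late (10 * 0.01)
print(run(10, 0.05, use_logical_time=True))   # only the last few ms of jitter
```

The physical-time version drifts by the sum of all the processing times; the logical-time version stays within one sleep's worth of jitter of the ideal schedule.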

hjh

James is 100% correct.

Actually, someone :wink: asked about logical time just a few weeks ago and I gave the following explanation: SC-like clocks in Python? - #12 by Spacechild1

Missed this one – this is because SC threads are non-preemptive. The SystemClock-scheduled function cannot run until the long loop finishes. The 1.0 second already passed – SystemClock can’t jump back into the past to do it “on time.”

Non-preemptive threading means it's the user's responsibility to insert short wait times into long-running tasks, if other threads need an opportunity to wake up. That's not an issue in preemptive languages (but there the code must handle thread safety explicitly, so there's always a tradeoff).
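A minimal model of non-preemptive scheduling, sketched with Python generators (the scheduler, task names, and chunk counts are all invented): each task runs uninterrupted until it yields, so a long task that never yielded would starve every other task, exactly as the prime loop starves the SystemClock-scheduled function.

```python
from collections import deque

def scheduler(tasks):
    """Round-robin cooperative scheduler: a task runs until it yields;
    nothing can interrupt it in between."""
    queue = deque(tasks)
    log = []
    while queue:
        task = queue.popleft()
        try:
            log.append(next(task))  # runs until the task's next `yield`
            queue.append(task)
        except StopIteration:
            pass
    return log

def long_job():
    for i in range(6):
        pass                     # a chunk of "work"
        yield f"job chunk {i}"   # without this yield, watcher never runs

def watcher():
    for i in range(3):
        yield f"watcher {i}"

print(scheduler([long_job(), watcher()]))
# interleaved: job chunk 0, watcher 0, job chunk 1, watcher 1, ...
```

Delete the `yield` inside `long_job` and it becomes a single uninterruptible block, which is the situation in the original example.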

hjh

not something I had understood before - thx!

…is this suggestion somewhere in the docs? Should be if not!

Thanks very much, @Spacechild1 and @jamshark70. This topic continues to be confusing to me. Part of my confusion might stem from the fact that for a lot of what I do, I don't use the server. Considerations of logical vs. physical time seem particularly geared towards communication with the server, since logical time determines the sent timestamps, which the server then honors as accurately as possible, especially if you include latency.

But I'll get back to that. For now, I just want to note that doing the prime search within a fork, and introducing a short (0.00001 s) wait in each loop iteration (which increases the length of the search from 5 s to 10 s, but I digress), does allow the scheduled function to execute at the planned time, 1 second after scheduling. The various SC clocks also now advance as I would originally have expected (see results below). However, if I make the wait time 10x shorter (0.000001 s), it stops working: everything behaves as if the function weren't yielding at all. This stuff is pretty subtle!

(
var clockFuncs, startTimes, endTimes;
clockFuncs = [{Process.elapsedTime},{SystemClock.seconds},{TempoClock.beats},{AppClock.seconds}];
startTimes = clockFuncs.collect{|clockFunc| clockFunc.value};

SystemClock.sched(1.0,{
	("Here's the scheduled function. It was scheduled for 1 second into the future, but"+(Process.elapsedTime-startTimes[0])+"have elapsed since it was scheduled.").postln;
});

fork{
	1000000.do{ |i|
		var isPrime = true;
		var div = 2;
		while {isPrime && (div <= sqrt(i))}
		{ if(i%div == 0,{isPrime = false}); div = div+1 };
		// if(isPrime, {i.postln});
		0.00001.wait;
	};
	
	endTimes = clockFuncs.collect{|clockFunc| clockFunc.value};
	clockFuncs.do{|clockFunc, i|
		clockFunc.asCompileString.postln;
		("   start:"+startTimes[i]).postln;
		("   end:"+endTimes[i]).postln;
		("   elapsed:"+(endTimes[i] - startTimes[i])).postln;
	};
};
)

//results:
Here's the scheduled function. It was scheduled for 1 second into the future, but 1.0008432920004 have elapsed since it was scheduled.
{Process.elapsedTime}
   start: 65096.409127833
   end: 65106.408993708
   elapsed: 9.9998658749973
{SystemClock.seconds}
   start: 65096.40895975
   end: 65106.408959725
   elapsed: 9.9999999747524
{TempoClock.beats}
   start: 393.221658917
   end: 403.22165889175
   elapsed: 9.9999999747524
{AppClock.seconds}
   start: 65096.40895975
   end: 65106.408959725
   elapsed: 9.9999999747524

Or if(i % 1000 == 0) { 0.00001.wait }, which should reduce the waiting time to 0.005 seconds (while adding a small amount of overhead for the mod check – or maybe i bitAnd: 1023 == 0 could be slightly faster? not at the computer now to check)… It isn’t necessary to yield every time, only within some acceptable time granularity.
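The power-of-two mask trick is easy to sanity-check (sketched in Python here for brevity; `should_yield` and the constants are illustrative): for N a power of two, `i & (N - 1) == 0` agrees with `i % N == 0` at every i, so the cheaper mask can stand in for the mod, provided you use 1024 rather than 1000.

```python
def should_yield(i, every=1024):
    """True every `every` iterations; `every` must be a power of two
    for the bitmask to be equivalent to the modulo test."""
    return i & (every - 1) == 0  # same truth value as i % 1024 == 0

hits = [i for i in range(5000) if should_yield(i)]
print(hits)  # [0, 1024, 2048, 3072, 4096]
```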

I think this is because, under the hood, all scheduling takes place in terms of absolute seconds (where exactly what counts as "absolute" is, I think, not visible to the user). Baseline seconds don't start at zero but at some large number. Adding very large and very small numbers is the worst case for floating point, so you've found a wait time that is below the threshold for double-precision addition at the order of magnitude that exists internally in the clock (though this delta is still 3 or 4 orders of magnitude below the human attack-detection threshold: good enough for music, not good enough for particle physics).
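A quick way to see the magnitude dependence, sketched in Python (the base value is simply borrowed from the results posted above): the spacing between adjacent doubles grows with the size of the number, so whether a tiny increment survives addition depends on how large the base is.

```python
import math

# The spacing between adjacent doubles (one ULP) grows with magnitude, so a
# tiny increment added to a large base can be rounded away entirely.
base = 65096.408959750   # same magnitude as the clock readings posted above
print(math.ulp(base))    # about 7.3e-12: a 1e-6 increment still survives here
assert base + 1e-6 != base

huge = 2.0 ** 53         # at this magnitude, even adding 1.0 is lost
assert huge + 1.0 == huge
```

So at the seconds values visible in sclang, 1e-6 is comfortably representable; increments would only vanish if the internal base were many orders of magnitude larger.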

EDIT: Peeking into the source code, there's hrTimeOfInitialization = high_resolution_clock::now(); during language startup. Later, in the scheduler loop, the check for "is there something to run now?" is based on schedPoint = hrTimeOfInitialization + schedSecs; so indeed, if hrTimeOfInitialization amounts to a high-magnitude number, the clock would be insensitive to the addition of very small increments. However, I'm a C++ idiot and couldn't figure out how to print a chrono time point as a double, so I don't know the exact order of magnitude.

hjh

As I think more about logical vs. physical time, it occurs to me that the first is a prescriptive measure of time, and the second a descriptive one. Another way to put it: the first is for assignment, the second for measurement.