Deep dive into threading, blocking, sync/async, scheduler internals (split from scztt quarks thread)

I’m going to have to redesign TreeSnapshot.get, though, for it to be usable in a non-streaming fashion. Right now it returns nothing useful, and it can’t, because the stream is parsed later in the OSCFunc. It should probably be called do, doNodes or perhaps even doDeferred instead. The stream interface is not enough for my purposes because I need a bottom-up accumulation of controls from the leaves to the root, to emulate how the server sees the control broadcast groups. This is the bit that’s sadly missing from Ndef.
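For illustration, here’s the rough shape of the bottom-up pass I have in mind. This is a minimal sketch; ~accumulateUp, children, and the combine function are hypothetical names, not the quark’s actual API:

// post-order walk: visit children first, then fold their results into the
// parent, mirroring how control values propagate up to enclosing groups
~accumulateUp = { |node, combine|
	var childResults = node.children.collect { |child|
		~accumulateUp.(child, combine)
	};
	combine.(node, childResults)  // a leaf arrives here with an empty array
};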

I forgot how annoying sync can be in SC, because for some reason not every Thread is a Routine… (maybe I should ask a separate question about why it’s like that)

	*getSync { // needs Routine context, alas
		arg node, ignore=[];
		var server, flv = FlowVar.new;
		node = node ?? { RootNode(Server.default) };
		server = node.server;

		OSCFunc({
			arg msg;
			var snapshot, parsed;
			if (dump) { msg.postln }; // 'dump' is presumably a classvar of TreeSnapshot
			snapshot = TreeSnapshot(server, msg);
			{ flv.value = snapshot }.defer;
		}, '/g_queryTree.reply').oneShot;

		server.sendMsg("/g_queryTree", node.nodeID, 1);

		^flv.value;
	}

And of course:

TreeSnapshot.getSync
// ERROR: yield was called outside of a Routine.

fork { t = TreeSnapshot.getSync } //ok

If every thread were a Routine, how would interpretPrintCmdLine know which result to print? A thread may yield a potentially unlimited number of times, and yielded values are the only way to get results out of a Routine. So should interpretPrintCmdLine print all of the yielded values? Only the last one (and how would it identify the last one)?
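To make the ambiguity concrete, stepping a Routine by hand hands out one value per yield, and nil once it has ended:

r = Routine { 1.yield; 2.yield };
r.next;  // -> 1
r.next;  // -> 2
r.next;  // -> nil: the routine has ended

Which of those should the REPL print on your behalf?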

The FlowVar must yield a non-numeric value to block the thread (there is no other way for the thread to block itself). That will be returned to the caller. If the caller is interpretPrintCmdLine, then the interaction with auto-printing results should be decided carefully.

EDIT: A hack:

thisProcess.interpreter.preProcessor = { |str|
	"Routine { \"-> %\\n\".postf({" + str + "}.value) }.play(AppClock)"
};

1+1
-> a Routine   // huh, a mild irritant
-> 2

(
f = { |bus|
	var cond = CondVar.new;
	var value;
	bus.get { |data|
		value = data;
		cond.signalOne;
	};
	cond.wait;
	value
};
)

s.boot;

b = Bus.control(s, 1).set(100);

f.value(b);
-> a Routine
-> 100.0  // well, it did actually work

hjh

How about the first one?

See for example CondVar:prWait:

The only way to block a Routine for an asynchronous operation is to yield something.

If interpretPrintCmdLine were to print the first yielded value, then you would see this dummy “wait” symbol and not the thing you wanted.
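You can see that dummy symbol directly by stepping a blocked Routine by hand (Condition yields \hang, as its source quoted later in the thread shows; CondVar yields a private symbol of its own):

c = Condition.new;
r = Routine { c.wait; \resumed };
r.next;  // -> \hang, the internal sentinel, not anything you care about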

The hack takes perhaps a better approach (which I didn’t think of at first): let the routine post the block’s result. There’s a redundant “a Routine” which can’t be turned off in the class library at present, but after that, it does produce the right printed output.

hjh

Oh, I see now: it’s because any yield “pops back” across any number of stack frames. So if the code invoked from the REPL line called anything that yields, then that would be printed. This restriction comes down to sclang being less thread safe than usual, though.
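A tiny demonstration of that frame-crossing (nothing hypothetical here):

f = { \fromDeepInside.yield };  // yields from within a nested call
r = Routine { f.value; \afterResume.yield };
r.next;  // -> \fromDeepInside (the yield crossed f's stack frame)
r.next;  // -> \afterResume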

Are Exceptions also implemented with yield?

I’ve been exploring ways to make receiving (i.e. waiting for) synchronous OSC less kludgy, but the funny thing is that Main.recvOSCmessage, which is called directly from C++, actually runs in the context of Process.mainThread, which is exactly the same as the REPL thread. So it seems you cannot receive OSC while the main REPL thread is doing something, which is also why it would probably be unsafe to hack it to be able to yield.
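This is easy to verify from sclang itself:

OSCdef(\whichThread, {
	postln(thisThread === thisProcess.mainThread);
}, '/whichThread');

NetAddr.localAddr.sendMsg('/whichThread');
// posts: true, i.e. the responder ran on the same Thread as the REPL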


I found the correct idiom for this, but it’s somewhat obscure. Basically, what you have to do is use FlowVar, as if you were using a Ref. Nearly the same code:

	*refGet {
		arg node, ignore=[];
		var server, result = Ref.new;
		node = node ?? { RootNode(Server.default) };
		server = node.server;

		OSCFunc({
			arg msg;
			var snapshot, parsed;
			if (dump) { msg.postln };
			snapshot = TreeSnapshot(server, msg);
			defer {	result.value = snapshot; };
		}, '/g_queryTree.reply').oneShot;
		server.sendMsg("/g_queryTree", node.nodeID, 1);
		^result
	}


	*flowGet {
		arg node, ignore=[];
		var server, flv = FlowVar.new;
		node = node ?? { RootNode(Server.default) };
		server = node.server;

		OSCFunc({
			arg msg;
			var snapshot, parsed;
			if (dump) { msg.postln };
			snapshot = TreeSnapshot(server, msg);
			defer {
				flv.value = snapshot;
			};
		}, '/g_queryTree.reply').oneShot;
		server.sendMsg("/g_queryTree", node.nodeID, 1);
		^flv;
	}

If used directly from REPL the above two behave the same:

t = TreeSnapshot.refGet // -> `(nil)
// By the time you type the next cmd it's updated of course
t.value // -> TreeSnapshot  + Group: 0

// Likewise
t = TreeSnapshot.flowGet // -> a FlowVar
t.value // -> TreeSnapshot  + Group: 0

But if used from a forked thread, the first one (using Ref) has a race of course, while the 2nd one doesn’t.

fork { t = TreeSnapshot.refGet.value }; 
t // nil

fork { t = TreeSnapshot.flowGet.value }
t // -> TreeSnapshot  + Group: 0

And finally, why does flowGet not bomb when you call .value on it in REPL?

The reason is simple: it doesn’t .yield if the value has already been set, which happens in the time it takes to type the next cmd:

FlowVar { // ...
	value {
		condition.wait;
		^value
	}
}

Condition { // ...
	wait {
		if (test.value.not, {
			waitingThreads = waitingThreads.add(thisThread.threadPlayer);
			\hang.yield;
		});
	}
}

Morning coffee stuff. But of course, if you actually try in REPL

t = TreeSnapshot.flowGet.value

it still ERRORs. The amazing usability of SC.

So I’m not sure that receiving OSC messages in the context of Process.mainThread, which is how it’s currently done, was a particularly smart decision. For clarity, this is a sclang-wide problem; it’s not particular to this quark.

The almost funny part is that Thread has a terminalValue field, which is used to store the result for alwaysYield. But the problem is that there’s no way to join a Thread to another in sclang, i.e. wait for it to finish and get the return value, even though there’s a slot for that in Thread. (By the way, the whole CondVar shebang does nothing to improve on this problem. It addresses something else.)

Even if I hacked some thread join, it still wouldn’t work for OSC, because the C++ function that receives it (localServerReplyFunc) wants to acquire the global interpreter lock. So from the REPL it’s impossible to get the OSC in one command. You need two, so that the interpreter lock is released in between and the OSC can actually go through the interpreter. So we’re stuck with the explicit continuation-passing style for this and OSC in general.

Routine:yield freezes the state of the current thread (that is, a stack and an instruction pointer, which is all a Thread is), and resumes the parent thread where it was last stopped (presumably where Routine:next was called). If the OSC message callback supported yielding, it would need (a) its own Thread (i.e. stack and instruction pointer), and (b) a parent thread that it would yield to. It’s very feasible to use the main AppClock thread as the main thread, and create a Routine for processing OSC messages:

+Main {
  // Imagine something like.... oscRoutine = Routine({ |nextMsg| nextMsg = processMsg(nextMsg).yield })
  recvOSCmessage {
     |...msg|
     oscRoutine.next(msg)
  }
}

However, if a Condition / FlowVar is yielded from inside processMsg (assuming this is your custom OSC processing function), recvOSCmessage would need to handle that. Specifically, since oscRoutine is mid-yield, it would need to keep it “paused” (i.e. refrain from calling next) until the Condition resolved, at which point it could provide the value to the Routine via oscRoutine.next(resolvedCondition). This means any OSC messages received before the Condition is resolved would be queued and left unprocessed until oscRoutine is resumed with a resolved value and finishes processMsg.
In effect: if you yield in your OSC responder, you block new messages until that condition is resolved (or forever, if the condition is never resolved).

Since this is not a workable solution, we can imagine another implementation where EVERY message response gets its own Routine:

+Main {
  // Imagine something like.... oscRoutine = Routine({ |nextMsg| nextMsg = processMsg(nextMsg).yield })
  recvOSCmessage {
     |...msg|
     Routine({ processMsg(msg) }).play;
  }
}

This will allow yields correctly, but entails creating one new Thread per OSC message processed, regardless of whether it yields or not. Threads are not massively heavyweight objects, but this is still orders of magnitude more expensive than calling a responder function on the stack, making this a bad option for a case where you may need to process hundreds of messages per second.

There’s another imaginable solution that would look like option 1, except that when oscRoutine has yielded and is waiting for a Condition to resolve, a new Routine is created to process incoming messages. This avoids the problem of blocking all OSC responders, leaving you with an on-demand “stack” of Routines to replace the awaiting ones. Implementing this solution is straightforward, but it would introduce a lot of tricky code into the core OSC responder callback chain. If you’re interested in pushing callbacks into Routines rather than processing them on the stack directly, this is probably a good approach to investigate.

The solution that sclang implements now is slightly manual, but avoids the requirement of wrapping all callbacks in complex Routine-wrapping logic. If you want to yield in a callback, you wrap it in a Routine yourself:

OSCdef(\yieldingResponder, {
  |...args|
  fork {
     processMsg(args);
  }
})

I would ignore this - this definitely isn’t a reachable value, or used in any functional way. It may be vestigial, or may be a way of preventing a return value from being garbage collected (in which case, someone forgot to add a comment to this rather obscure detail :slight_smile: ).

It is used by alwaysYield, as I said somewhere further above. It can be used to hack a Thread.join if you think a bit… since normal yields do not set it! (Basically you can treat alwaysYield as a threadExit, and you can find out from outside when that point is reached.) But it won’t solve the global interpreter lock in localServerReplyFunc.
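For the record, here’s a join-like idiom that sidesteps terminalValue entirely, a minimal sketch using FlowVar (~join and the example body are hypothetical):

~join = { |body|
	var flv = FlowVar.new;
	fork { flv.value = body.value };  // run the body in its own Routine
	flv  // the caller blocks on flv.value until the body finishes
};

fork { ~join.({ 1.wait; \done }).value.postln };  // posts 'done' after ~1s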

Nor would your solution with per-callback function threads solve that latter issue. You’d need to move the true dispatcher into the interpreter and have localServerReplyFunc just fill some FIFO buffer. Quite a bit of work.

The way OSC reception works now is like this:

  1. interp-lock-acquire
    your first REPL cmd; it returns something, which can’t use OSC data
    interp-lock-release

  2. Server sends OSC which triggers localServerReplyFunc that does
    interp-lock-acquire
    calls the sclang OSCfunc eventually
    interp-lock-release

  3. then either
    another manual REPL command that does
    interp-lock-acquire
    accesses OSC-received stuff on main thread
    interp-lock-release
    or
    similar stuff done via the clock-scheduler:
    interp-lock-acquire
    your CPS-style function that was deferred
    interp-lock-release

So yeah, there are 3 pairs of interpreter entries and exits involved in a typical OSC reception in the present implementation. For the (even more) curious:

	defer { arg delta;
		if (delta.isNil and: { this.canCallOS }) {
			this.value
		} {
			AppClock.sched(delta ? 0, { this.value; nil })
		}
	}

But this.canCallOS is false in an OSCFunc. So there’s no coalescing of 2 & 3 above. In that respect it reminds me of the split in Linux “top-half” (2) vs “bottom-half” (3) interrupts of olde.

I was curious what else uses FlowVars under that name. Apparently Nim does; I’m guessing they copied from SC. However, the somewhat funny story is that JMC committed FlowVars in 2005, but they were broken until 2015. I guess that’s why hardly any SC code uses these.

I (partially) take back what I said yesterday – it may not be a “big” problem, but it’s worth it to make it easier to handle asynchronous actions.

I think I would do it this way:

+ Interpreter {
	interpretPrintCmdLine {
		var res, func, code = cmdLine, doc, ideClass = \ScIDE.asClass;
		preProcessor !? { cmdLine = preProcessor.value(cmdLine, this) };
		func = this.compile(cmdLine);
		if (ideClass.notNil) {
			thisProcess.nowExecutingPath = ideClass.currentPath
		} {
			if(\Document.asClass.notNil and: {(doc = Document.current).tryPerform(\dataptr).notNil}) {
				thisProcess.nowExecutingPath = doc.tryPerform(\path);
			}
		};
		Routine {  // 'nowExecutingPath' will be in force within the Routine btw
			res = func.value;
			codeDump.value(code, res, func, this);
			("-> " ++ res).postln;
		}.play(AppClock);
		thisProcess.nowExecutingPath = nil;
	}
}

+ Function {
	parallelify { |clock(AppClock), limit = 100|
		var active = IdentitySet.new;
		^{ |... args|
			var thread;
			if(active.size < limit) {
				thread = Routine {
					protect {
						this.valueArray(args);
					} {
						active.remove(thread);
					};
				}.play(clock);
				active.add(thread);
			} {
				Error("Too many parallel invocations of this function").throw;
			};
		}
	}

	queueify { |clock(AppClock), limit = 100|
		var queue = LinkedList.new;
		var thread = Routine {
			var args;
			while { queue.notEmpty } {
				args = queue.popFirst;
				try {
					this.valueArray(args);
				};
			};
			status = \idle;
		};
		var status = \idle;
		^{ |... args|
			if(queue.size < limit) {
				queue.add(args);
				if(status == \idle) {
					status = \running;
					thread.reset.play(clock);
				};
			} {
				Error("Too many queued invocations of this function").throw;
			};
		}
	}
}
  1. The only change to interpretPrintCmdLine is to wrap the execution in a Routine, and handle the result within the Routine. That is, what I said before about yielded values actually isn’t necessary, and the solution is dramatically simpler than the over-engineering suggested in the preceding half dozen posts or so. (I’m not worried about the performance impact of creating a Routine per code block, because submitting code blocks is, in terms of CPU cycles, a rare event :grin: )

  2. Response functions divide into three categories: A/ completely synchronous, B/ asynchronous and appropriate to run in parallel, C/ asynchronous and better to run in series. This discussion began with the idea of making everything run in a Routine, but I can’t fully agree, because the performance of A-type responder functions would be much worse, unnecessarily. But we could give the choice to the user. B would create and destroy a lot of Routine objects (which could be a drain on the GC); C creates one Routine per queueified function, but later invocations have to wait for preceding ones to finish (this might be better, though, for instance, if you want to load buffers based on an OSC command or GUI action).

s.boot;

b = Bus.control(s, 1).set(100);

(
var cond = CondVar.new, value;
b.get { |data| value = data; cond.signalOne };
cond.wait;
value
)
-> 100.0  // we 'wait'ed but interpretPrintCmdLine just handled it, no fuss

// also...
thisProcess.nowExecutingPath;
-> /home/--redacted--/share/SC/scd/tests/21-1226-threadify.scd
// ^^ so execpath isn't broken by the Routine either, this is looking kinda OK

// responder type B, in parallel
(
g = Slider(nil, Rect(800, 200, 200, 30))
.action_(parallelify { |view|
	var value = view.value;
	rrand(1.05, 1.25).wait;  // simulate async
	value.postln;
})
.front;
)

// responder type C, in series
(
OSCdef(\test, queueify { |msg|
	rrand(0.05, 0.25).wait;
	msg.postln
}, '/test');
)

// now simulate a flood of messages
(
fork {
	var addr = NetAddr.localAddr;
	50.do {
		addr.sendMsg('/test', "testing".scramble);
		0.01.wait;
	};
};
)

hjh

That’s not a bad idea. As I said in a different context, the only real difference in terms of internals between a Thread and a Routine seems to be that a “pure” Thread (like Process.mainThread) always has a nil parent. I’m not sure if any Threads that are not Routines are even created by sclang, besides Process.mainThread.
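A quick check consistent with that (parent is readable, as used further below):

thisProcess.mainThread.isKindOf(Routine);  // -> false: a plain Thread
thisProcess.mainThread.parent;             // -> nil
fork { postln(thisThread.isKindOf(Routine)) };  // posts true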

I’ll have to digest the rest of the stuff you wrote for a more complete response. I don’t know what GC performance impact making every REPL command a new Routine would have.
But you could test this pretty easily by spamming, say, 1000 commands directly and also 1000 commands wrapped in Routine.run { }, and seeing what GC impact it had. I don’t know how to evaluate the latter myself at the moment, i.e. which GC internals or performance counters to look at.
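Something like this rough sketch, assuming wall-clock time is a good-enough proxy for the GC cost (which it may not be):

bench { 1000.do { "1 + 1".interpret } };                   // plain evaluation
bench { 1000.do { Routine { "1 + 1".interpret }.next } };  // one Routine per evaluation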

Worth noting that the above won’t allow OSC to be received while the Routine executes, though, because of the global interpreter lock, as I detailed in my previous post in this discussion thread. The routine would still have to defer the part that it wants to execute after the OSC response comes back. But you’ll at least fix the yields barfing at the user.

For the latter, I’m thinking that _Routine_yield and its two near-copypasta friends (that implement alwaysYield and yieldAndReset) would have to release the global interpreter lock iff there is no other Thread that can execute immediately.
They might even do that right now via the scheduler code, but I haven’t really checked that part of the code.

Scratch that. The per-REPL-cmd routine that you just started will have its parent main thread set as immediately executable. The per-REPL-cmd Routine would end up yielding to the main Thread. So the latter would have to give up the interpreter lock without returning something, which is presently not possible, as I understand it.

Nobody types this fast! :laughing: I mean, c’mon, sure, there’s a theoretical concern. But let’s not forget what interpretPrintCmdLine is really used for. “If I put my car on top of my living room coffee table, then the table will break.”

No. If the interpretPrintCmdLine Routine yields, then it yields, and it is no different from any other Routine yielding. It will not block other stuff.

There is in principle a risk of thread-unsafety here, but it’s not any different from the risk of thread-unsafety if you explicitly forked a Routine and blocked it. The Routine I proposed for interpretPrintCmdLine is just a normal Routine, nothing special about it. It has no magical power to lock the whole interpreter for its duration.

hjh

Oh, I see you’re actually making the forked Routine do the postln work of the REPL

		Routine {  // 'nowExecutingPath' will be in force within the Routine btw
			res = func.value;
			codeDump.value(code, res, func, this);
			("-> " ++ res).postln;
		}.play(AppClock);
		thisProcess.nowExecutingPath = nil;

So no thread joining needed :smiley:, but…

I don’t understand why thisProcess.nowExecutingPath = nil; doesn’t actually execute before the Routine starts (so before the func). I mean, you somehow avoid the race, since I’ve actually run your extension to check that, but I don’t understand why it works like that with respect to thisProcess.nowExecutingPath.

I mean if I do this instead:

		Routine {  // 'nowExecutingPath' will be in force within the Routine btw
			res = func.value;
			codeDump.value(code, res, func, this);
			("-> " ++ res).postln;
		}.play(AppClock);
		"Setting nowExecutingPath to nil".postln;
		thisProcess.nowExecutingPath = nil;

And try it from a saved file like

thisProcess.nowExecutingPath

it prints the messages in the order in which I think the execution happens, namely:

Setting nowExecutingPath to nil
-> M:/sc/interphack.scd

So I don’t understand why the routine didn’t see nowExecutingPath as nil, even though it clearly executed later.

Actually, I suspect I know what probably happens. Routine probably copies the state of nowExecutingPath as it was when it was constructed.

The Thread class has a field for that (in fact has two)

var <executingPath, <oldExecutingPath;

I’d have to look at _Thread_Init to confirm the copying happens.

Yeah this is where the copy happens.

void initPyrThread( // ...
        slotCopy(&thread->executingPath, &g->process->nowExecutingPath);

It also must be the case that switchToThread does the opposite, i.e. sets nowExecutingPath from the thread’s own executingPath. Yeah, the copy in the opposite direction happens in prRoutineResume, at the fabled line number 3333 :D.
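A quick check of that capture-and-restore behavior (the path here is made up):

thisProcess.nowExecutingPath = "/tmp/hypothetical.scd";  // made-up path
r = Routine { thisProcess.nowExecutingPath.postln };
thisProcess.nowExecutingPath = nil;
r.next;  // posts /tmp/hypothetical.scd, restored on the thread switch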


There’s a wee bit of an issue if you do this from the REPL:

\foo.yield

with this approach. There’s no error, but nothing is printed, of course, since the routine just “hung”, i.e. was de-scheduled, and there’s no way to unhang it (i.e. reschedule it) since there’s no reference to it anywhere. It’s more of an issue if you type

s.sync 

as that behaves the same way if there’s no server running. If there is one, however, it will work and print

-> localhost

Something like

3.yield

also has a bit of a funny effect, in that it will execute something in the future, but that’s perhaps more of a feature… because something like

(
"hey".postln;
3.wait;
"later".postln;
)

actually works now as a newbie might expect. Also

v = FlowVar.new
v.value // hangs
v.value = 12 // resumes previously hanged and prints

works too with your patch, albeit it’s slightly counterintuitive because it will print two things after the last line:

-> a FlowVar
-> 12

But I think this is a decent price to pay for the simplicity of the solution.

Also something like

(
SynthDef(\zzxzxd, { Out.ar(0, 0.1 ! 2 * SinOsc.ar) }).add;
s.sync;
Synth(\zzxzxd)
)

works OK too if there is a server running, because the whole parenthesized expression executes in the same routine, so serial order is ensured. But with no server, it will just hang, and alas, booting a server later doesn’t resuscitate it. So it’s not really a substitute for s.waitForBoot by itself. On the other hand,

(
SynthDef(\zzxzxd, { Out.ar(0, 0.1 ! 2 * SinOsc.ar) }).add;
s.bootSync;
Synth(\zzxzxd)
)

does work (with your patch applied) even when the server is not booted.

Given that it’s mostly an ok feature/approach, I guess it could be made optional whether the interpreter spawns a routine or not, i.e. have some classvar boolean in Interpreter that picks the old or new behavior.

You see, I did think of this, and test it, before posting :wink:

hjh

Well, besides not knowing the deeper code, I was a bit confused because schedRunFunc calls runAwakeMessage, which on the sclang side calls awake, which then calls next, which then calls _Routine_Resume, going back in C++ to prRoutineResume.

I don’t quite understand why that much round-tripping is involved, i.e. why runAwakeMessage can’t call into prRoutineResume directly. I’m guessing it’s because it was envisaged that there may be other kinds of Threads (besides Routines) that don’t have next but have an awake, although Thread itself does not define awake, so it can’t be woken up like that. (Thread does define next as next { ^this }, but that doesn’t enable any scheduler magic.)

awake and next aren’t actually synonyms – see PauseStream and subclasses.

There are a few places where JMc made design mistakes… but not many. I think this is not one of those places.

hjh

Can a PauseStream be awoken by the scheduler? It’s not a Routine, but inherits from Stream directly. I guess it can, because while PauseStream is undocumented, the user-visible API is its (documented) Task subclass. To be honest, I’ve never used Task myself, and I’m not sure exactly when you’d want it instead of Routine. Maybe I should ask that separately. I guess the difference is that you can more easily pause a Task externally by calling its pause method, while a Routine just runs, so you’d need some additional yield-based protocol to make a routine pause.

Also

PauseStream {
	awake { arg beats, seconds, inClock;
		clock = inClock;
		^this.next(beats)
	}
}

and

Routine {
	awake { arg inBeats, inSeconds, inClock;
		var temp = inBeats; // prevent optimization
		
		^this.next(inBeats)
	}

}

They look identical to me in what they actually do, but of course there’s the difference that they have different next methods; PauseStream’s doesn’t try to resume a routine.

There’s also the threadPlayer business that I don’t quite understand, but apparently JMC did not either, as Julian added that. I’m guessing it has something to do with the issue that there are two kinds of schedulable (“awake-able”) entities, Routines and PauseStreams, but there’s no Thread context for the latter, as they subclass Stream directly.

I’m probably missing something, since Tasks apparently do create threads:

thisThread === thisProcess.mainThread
// -> true (without your patch, of course)

t = Task({ postln(thisThread === thisProcess.mainThread) }, AppClock)
t.start;
// posts false

t = Task({ postln(thisThread.parent === thisProcess.mainThread) }, AppClock)
t.start;
// posts true

Ah, yes:

Task : PauseStream {
	*new { arg func, clock;
		^super.new(Routine(func), clock)
	}
}

There’s also EventStreamPlayer, but that also creates a Routine:

EventStreamPlayer : PauseStream {
	*new { arg stream, event;
		^super.new(stream).event_(event ?? { Event.default }).init;
	}

	init {
		cleanup = EventStreamCleanup.new;
		routine = Routine{ | inTime | loop { inTime = this.prNext(inTime).yield } };
	}

I guess the issue solved by threadPlayer was to associate these objects “in reverse” with the Routine they start, so given a Routine that was started by a PauseStream you can find the latter.

Yeah, since there are Streams like FuncStream which are not Routines, and since PauseStream can wrap any Stream, you can have streams running on the main thread!

~peek = { postln(thisThread === thisProcess.mainThread) };
f = FuncStream({ ~peek.(); 4.rand })
p = PauseStream(f, AppClock)
p.start // posts true (every time)

But it seems a pretty obscure feature to have schedulable entities that run on the main thread. I don’t recall seeing this used.

Although it’s documented, I didn’t know that Object itself has an awake. So you can actually schedule (on the main thread) anything that does something sensible in next. Object itself not so much, as it just returns itself from next.

FuncStream.findRespondingMethodFor(\awake)
// -> Object:awake
FuncStream.findRespondingMethodFor(\next)
// -> FuncStream:next
FuncStream.findRespondingMethodFor(\play)
// -> Stream:play

f = FuncStream({ ~peek.(); 4.rand })
f.play(AppClock) // also runs on main thread

// Unlike when used via PauseStream, this loses the ability to pause it.
FuncStream.findRespondingMethodFor(\pause)
// -> nil

This is in fact the third “version” of an identical awake method in the library.

Object {
	// scheduling
	awake { arg beats, seconds, clock;
		var time;
		time = seconds; // prevent optimization
		^this.next(beats)
	}
}

I guess the other two copies (in Routine and PauseStream) might have been added first and then someone decided to make it work for everything.

Interestingly, Function implements its own awake. And this one is different from the other three I listed above, in that it calls value on itself, not next (which would just return the function again).

Function : AbstractFunction {
	awake { arg beats, seconds, clock;
		var time = seconds; // prevent optimization
		^this.value(beats, seconds, clock)
	}
}

So you can do

AppClock.play({ ~peek.(); "Hmm".postln });

// or even with rescheduling:
AppClock.play({ ~peek.(); "Hmm".postln; 2 });

Strangely enough, Function.awake is not documented, even though it stands apart. Also AbstractFunction doesn’t have such a value-based awake, for some reason. And some JITLib classes (NodeProxy and ProxySpace) change the meaning of awake to a boolean. I guess that’s ok since these classes don’t define an interesting next so one wouldn’t think of scheduling instances of them on the sclang clocks.

If it couldn’t, then it would be impossible to play patterns, so… yes, it can.
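Concretely, every pattern you play runs through this mechanism (assuming a booted server for the sound):

p = Pbind(\degree, Pseq([0, 2, 4], inf), \dur, 0.25).play;
p.class;  // -> EventStreamPlayer, a PauseStream subclass awoken by the clock
p.stop;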

As for the necessity of PauseStream:

(
r = Routine {
	inf.do { |i|
		i.postln;
		1.0.wait;
	}
}.play;
)

r.stop;

r.play;  // nothing!

Once a Routine is in the “stopped” state, it cannot be resumed. You can only reset it to the beginning. You don’t always want this. McCartney solved this by wrapping the Routine in another object (object composition = good OOP design, instead of making Routine into a god object):

(
// PauseStream(Routine...) is basically Task
p = PauseStream(Routine {
	inf.do { |i|
		i.postln;
		1.0.wait;
	}
}).play;
)

p.stop;

p.play;  // resumes where it left off

p.stop;

This is necessary for thread-blocking mechanisms such as CondVar and Semaphore.

At this point, I’m going to split this thread because we stopped talking about the scztt quark(s) a long time ago :laughing:

As a subjective observation, I think that maybe the features of routines were not well defined yet: the low-level code enumerates different states that aren’t used, and just adding pause as a state would have saved yet another level of abstraction for something that is ubiquitous. PS: making it a good object (sorry, I couldn’t resist the pun :D).

To be honest, I don’t fully agree.

OOP generally recommends object composition over adding features/methods to base objects. That’s the subject (pun?) of the Design Patterns book: it’s usually easier to imagine how to extend an object’s functionality by adding features, but you get painted into a corner more quickly that way. So this book sought to encourage programmers to develop the habit of extension by composition, looser coupling etc.

The SC community often shies away from this. For example, IMO SynthDef is a low-level, base class, a direct reflection of scsynth’s GraphDef. IMO if we want to extend SynthDefs to handle more dynamic patching, hot-swapping etc., this is best done in a superstructure that uses SynthDefs. But usually discussions about this are about the “weaknesses” of SynthDef.

Similarly, here, I think Routine is a low-level class. It’s suitable for single-use, disposable threads such as fork { serverSomething...; s.sync; moreServerStuff... } but I wouldn’t use it for anything serious. The documentation should probably recommend Task as the default, go-to threading class, with Routine as the exception for simple cases.

hjh

The deal with those PauseStream wrappers and synchronization is that you want the awake to go to the PauseStream wrapper after the original object or stream yields. This is why the commit that added that made this kind of change:

waitingThreads.add(thisThread);

was changed to

waitingThreads.add(thisThread.threadPlayer);

This is necessary because a paused PauseStream needs to “eat up” the awake and not resume the original stream, which is what would happen if the awake went to the original stream directly.

So basically that substitutes the awake method of the original stream with that of the PauseStream wrapper in the scheduler’s queue.

And when you wrap an originalStream in a PauseStream it does

stream_ { arg argStream;
	originalStream.threadPlayer_(nil);  // not owned any more
	originalStream = argStream.threadPlayer_(this);
	if (stream.notNil, { stream = argStream; streamHasEnded = argStream.isNil; });
}

Basically, only one PauseStream wrapper can own a stream in this way. The Thread of the originalStream gets its threadPlayer slot pointed at its wrapper PauseStream.
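To make the hand-off concrete, a quick sketch using the stream_ setter quoted above (behavior as I read the code; untested):

r = Routine { 1.yield };
p = PauseStream(r);
p.stream_(r);          // transfers ownership: r's threadPlayer now points to p
r.threadPlayer === p;  // -> true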