Async ops (again)

Off topic, but just as a reminder of how simple everything could be with a proper async programming model:

fork {
    try {
        ~fx = VSTPluginController(Synth(\vst));
        ~fx.open("foo").await;
    } { |e|
        // handle error
    }
}

sigh

Sure… devil in the details, as always.

Now I want to be careful here – it's going to sound like I'm trashing the idea – I'm not, I'd love to have this. But it's also worth looking at what's involved.

await then depends on the return value of open, so open would have to return a proxy for the task being waited upon, and no longer this. That's fine in itself. Then we'll want the same interface for Buffer operations, where simply changing the methods to return a deferred thingy would break compatibility, while adding new syncable methods needs extensive documentation revisions (to encourage use of the new way over the old). I'd argue that the latter is preferable, FWIW.

There's also a bit of amusement with asynchronous object constructors – b = Buffer.read(...).await – where we need the instance for b but also need the deferred thingy for await. Solvable to be sure (maybe b = Buffer.readWait(...)), just needs to be worked out.
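For illustration, a rough sketch of what a readWait could look like – readWait is a hypothetical name, and this is just a class-extension wrapping the existing callback interface, assuming it's called from inside a Routine:

```supercollider
// hypothetical sketch: an async constructor that blocks the calling
// Routine until the read completes, then returns the instance directly
+ Buffer {
	*readWait { |server, path, startFrame = 0, numFrames = -1|
		var cond = CondVar.new, done = false;
		var buf = Buffer.read(server, path, startFrame, numFrames, {
			done = true;
			cond.signalAll;
		});
		cond.wait { done };  // only this Routine waits; others keep running
		^buf
	}
}

// usage, inside a Routine:
// b = Buffer.readWait(s, "foo.wav");
```

This sidesteps the instance-vs-deferred tension by hiding the wait inside the constructor, at the cost of not exposing a waitable object at all.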

There was Jordan's idea of making all async methods auto-sync. I sort of remember that this code sketch needed a lot of Object methods supporting this…? Maybe faulty memory. Since namespace pollution in Object has been cited as a problem, it would be nice to address async without adding a lot to Object.

If these async methods then require a Routine – it's possible for interpretPrintCmdLine to spawn every code block in an AppClock routine, which I think is low-cost but high-gain, in that it removes the need for fork when it's used only for a one-off async operation.

Veering off the original topic. I'm happy to split this if it's too far afield.

hjh

Yeah, decided to move these out of the Question thread.

Proof of concept – change interpretPrintCmdLine like so:

	interpretPrintCmdLine {
		var res, func, code = cmdLine, doc, ideClass = \ScIDE.asClass;
		preProcessor !? { cmdLine = preProcessor.value(cmdLine, this) };
		func = this.compile(cmdLine);
		if (ideClass.notNil) {
			thisProcess.nowExecutingPath = ideClass.currentPath
		} {
			if(\Document.asClass.notNil and: {(doc = Document.current).tryPerform(\dataptr).notNil}) {
				thisProcess.nowExecutingPath = doc.tryPerform(\path);
			}
		};
		{
			res = func.value;
			thisProcess.nowExecutingPath = nil;
			codeDump.value(code, res, func, this);
			("-> " ++ res).postln;
		}.fork(AppClock);
	}

AFAICS the only potentially negative impact is that, with this change, code that is executed directly reports thisThread = "a Routine" instead of "a Thread" – probably nobody is dependent on that…?

Then, fold the CondVar stuff into VSTPluginController:open (note, just using VSTPluginController as an example, since that's what the original thread was about) –

	open { arg path, editor=true, verbose=false, action, multiThreading=false, mode, timeout = 5;
		var intMode = 0;
		var condVar = CondVar.new;

		... snip ...

					loading = false;
					deferred = multiThreading || (mode.asSymbol != \auto) || info.bridged;
					this.changed(\open, path, loaded);
					action.value(this, loaded);
					// report latency (if loaded)
					latency !? { latencyChanged.value(latency); };
					condVar.signalOne;
				}, '/vst_open').oneShot;
				// don't set 'info' property yet; use original path!
				this.sendMsg('/open', path.asString.standardizePath,
					editor.asInteger, multiThreading.asInteger, intMode);
			} {
				"couldn't open '%'".format(path).error;
				// just notify failure, but keep old plugin (if present)
				loading = false;
				action.value(this, false);
				condVar.signalOne;
			};
	... snip ...

Then:

(
SynthDef(\vst, { |out = 0|
	Out.ar(out, VSTPlugin.ar(numOut: 2));
}).add;
)

// no Routine, no fork!
(
a = Synth(\vst);
c = VSTPluginController(a);
c.open("sfizz.vst3"/*, mode: \sandbox*/);
c.isOpen.debug("plugin loaded");
)

read cache file /home/dlm/.local/share/vstplugin/sc/cache_amd64.ini
wine-6.0.3 (Ubuntu 6.0.3~repack-1)
wine-6.0.3 (Ubuntu 6.0.3~repack-1)
plugin loaded: true

Thread-unblocking is properly deferred.

Buffer reading would benefit from a true Deferred, though, because we don't want to block on each individual read.

hjh

There was Jordan's idea of making all async methods auto-sync

IMO auto syncing would be rather bad. There is a reason why other languages return a Promise/Task object and require the caller to await.

First, it makes it explicit which methods are async and which are not.

Second, by returning Promises/Tasks you can wait on multiple operations in parallel. You can do interesting things like wait for all tasks to complete, a single task to complete, or just any number of tasks to complete.
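Sketching what such combinators might look like in sclang – Promise, await, awaitAll and awaitAny are all hypothetical here, assuming each Promise has a blocking await method:

```supercollider
// hypothetical combinators over a collection of Promises
+ SequenceableCollection {
	// wait until every promise has resolved; since all operations were
	// started before we began waiting, total wait is the slowest one
	awaitAll { ^this.collect(_.await) }

	// wait until the first promise resolves; return its value
	awaitAny {
		var cond = CondVar.new, done = false, result;
		this.do { |p|
			fork {
				var value = p.await;
				if(done.not) { done = true; result = value; cond.signalAll };
			}
		};
		cond.wait { done };
		^result
	}
}
```

The point is that the waiting policy (all, any, first) lives with the caller, not with the individual async operation.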

See, for example, Promise.all, Promise.race and Promise.any in JavaScript, or Task.WhenAll and Task.WhenAny in C#.

Generally, I would strongly advise anyone interested in implementing a better async programming for SC to have a look at other programming languages first, in particular JS and C#.


Yeah, I'm not even sure this can be reasonably done in SC3. We can't change the existing methods, so we have to duplicate them, which would further bloat the codebase. Also, it would probably cause lots of confusion because all the existing examples and tutorials would still use the old methods…

I was more like daydreaming :slight_smile:

I'm not sure how much of SC is driven by C++ properties and how much by Smalltalk, but when I read about coroutines I think about channels to communicate between them. So one Routine listens on a channel, the other sends a message, like "I'm done"; "here is an error"; "no error"; "got some data for you":

It's a different way of thinking. Erlang and Go seem to handle this well; for Smalltalk it seems to be harder to find good information about it, though.

Imho, this would be worth the breaking change due to the amount of confusion new users understandably have over this issue.

It is peculiar that evaluating each line individually has a different behaviour from evaluating a block - that I think is the most surprising and difficult to understand when starting.

Sort of. Ideally you'd just override doesNotUnderstand, but methods like size need to sync on the buffer, so I reimplemented all of Object, letting you pass a function to evaluate before every method, allowing the promise to wait itself when first used.
Without making a separate thread for each execution block it gets a little more complex, as methods like printing to the console might need to sync.

I don't see the benefit of this, because you are actually just waiting on them sequentially.
Auto promises just wait in the order you use them. True, this isn't explicit, but for server communication, which usually happens very fast, I don't think that matters as much.

Another point about promises is that they hide their held class – there's no Promise of T – which makes documentation a little harder to look up. In the autopromise I did a silly thing and overloaded the class method so it deferred to its held; I thought this would cause bugs, but couldn't find any?!

You could also turn a promise into an auto promise with a method call pretty easily!

You could also go the other way, and convert from an autopromise to a promise.

The benefit of autopromise is that it requires only small changes to the code and doesn't change the method signature, nor how the user uses the return object – it doesn't break everything.

This means that if all current methods returned an autopromise (which doesn't break anything) and the user wants a normal promise, they could just ask for it. This is a change that we can have today.

It is understandable confusion – but if we take every buffer read from the last two decades' worth of code, even considering only those that are in current use, and require every one of those to be updated for a new programming interface – that's going to be a lot of angry existing users. I think you're significantly underestimating the amount of upset this would cause. The live-coding environment that I use has a ten-year history – not a small amount of code. I'd probably miss a couple of Buffer usages and then get caught out on stage.

Spacechild1 objects to duplicating the interface with new syncing methods. While it would be bloat, I find this to be less awful than breaking everybody's existing code – and we could slowly deprecate the non-sync usage. Deprecation would be a lot more polite.

It would take some work, but if we go with the bloat way, the documentation could (should) be updated to point incoming users to the new, more usable methods.

At present, I don't see a serious downside to my suggestion of wrapping every code-block execution in an AppClock routine. There would be a small amount of overhead, but this is just for interactive code execution, which is necessarily low-traffic because we just can't hit the keys that fast. There's no way that the extra cost would be invoked more than, oh, 10-15 times per minute in typical usage (probably more like 3-5 times per minute).

That's true if .await is at the point of initiating the async op. But (and maybe I'm wrong) I thought the point of a Promise is that the caller gets the Promise immediately, and if the caller tries to access a concrete value that's only been Promise-d, that's the point where the caller would be asked to wait. In that case, a whole array of Promises could be made at once.

But the catch is at the point of access – because Object is so heavyweight, there's not really an easy way to capture and redirect method calls on the Promise. I don't have a solution for that.

hjh

(I'm sure I have already written all of this in that other massive thread, but somehow I can't find it… Maybe it's not bad to rehash it, though.)

You are assuming that async operations would be only for Server commands, but that is not necessarily true! Async APIs can be also applied to pure language features:

  • web requests
  • file reading/writing
  • OS commands (i.e. async version of unixCmd)
  • compute-heavy tasks (think AI models) that could be deferred to background threads
  • etc.

By returning Promises you have the option to truly execute them in parallel. With auto-syncing you would have to start a separate Routine for each and then somehow manually sync them – that's exactly what we're trying to avoid!

Quick example:

// ask to fetch all web resources in parallel, returns an array of Promises
~results = ~listOfUrls.collect { |url| WebRequest.fetch(url) };
// (possibly do other things in between)
// wait for all requests to complete
~results.awaitAll;

Also, I would like to point out again that in theory even Server commands could be executed in parallel. The only reason why that is currently not possible is because it would break s.sync, which relies on the fact that there is only a single NRT thread. In the distant future we might have managed to deprecate s.sync and have multiple NRT threads.

I would really like to have a consistent async programming model, i.e. a common API style for language and server operations. Auto-syncing is not that.

Again, I would suggest everyone look deeply at what other programming languages are doing. They have some pretty good reasons for their design choices. That doesn't mean that we can't do things better, but if we deviate from the (now) standard async/await pattern, there need to be good arguments.

That's where await comes into play! await would be just a method on Promise that would wait for its completion – if not completed yet – and return the underlying value. No need for any magic! Explicit is better than implicit.

And most importantly: it couldn't possibly break any existing code because the feature would be purely additive.

The implementation for Promise would actually be quite simple: just the result (possibly nil), a boolean and a CondVar. I guess I should do a proof of concept.
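A minimal sketch along those lines – none of this exists in the class library yet, and the names are assumed:

```supercollider
// a Promise reduced to its essentials: a result slot, a flag, a CondVar
Promise {
	var result, done = false, cond;

	*new { ^super.new.init }
	init { cond = CondVar.new }

	// producer side: store the value and wake up any waiters
	resolve { |value|
		result = value;
		done = true;
		cond.signalAll;
	}

	// consumer side: block the current Routine until resolved, then
	// return the underlying value (returns immediately if already resolved)
	await {
		cond.wait { done };
		^result
	}
}
```

An async method would then create a Promise, return it immediately, and call resolve from its completion callback.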

Perhaps this is my fault, and I'll think through what you've said in more detail throughout the day, but I still don't understand.

The autopromise implementation I made does this.

// launch a routine for each request.
~results = ~listOfUrls.collect { |url| WebRequest.fetch(url) }; //returns autopromise

// (possibly do other things in between)

~results.do{|wr| 
    // waits for each request to be completed, if already complete, does not wait.
    wr.something 
};

// alternatively, in the implementation I made, you could do this.
~results.do{|wr| 
    wr.then({|r| r.something }) 
};
// which adds a function that is executed in another thread 
//    and must be completed before await returns.
//    This could also be added to the 'manual' promise api

Autopromise is not auto-sync, it is just a promise that inserts await before each message…
If we had a minimal Object implementation, the code would basically be this.

AutoPromise {
    var promiseOfHeld; // 'normal' promise of held
    doesNotUnderstand { |sel, args|
        ^promiseOfHeld.await.perform(sel, args);
    }
    promise { ^promiseOfHeld }
}

Meaning things like Buffer.read can just return an AutoPromise and, if the user had placed s.sync everywhere, nothing should change (might be missing something here though)

Sorry, I completely misunderstood how your auto-syncing solution works! I thought it would just block the Routine. Now I understand it would only (potentially) block on first method access, via a proxy object.

That's neat, but are you sure that this is transparent for all cases? What if something does isKindOf(Buffer)? Is an AutoPromise an AutoPromise or a Buffer? Now we are getting into ontological realms :wink: How would it handle respondsTo and other "meta" Object methods?

Again, your AutoPromise is pretty clever, but for me it's too much magic and I see lots of potential pitfalls. I'd rather prefer a solution that has no potential of breaking existing code. (I am assuming this is still for SC3; for SC4 we would be free to break the API anyway.)

As I said, I'm really in favor of making it explicit that an operation is asynchronous (by returning a Promise) and also making the wait point explicit (with await). That's what all mainstream languages are doing and IMO we should not deviate from standard patterns unless there is a very compelling reason.

isKindOf involves a method call and just defers to the held. In this case the autopromise is a Buffer – this one is confusing, but necessary to not break anything.

respondsTo, in my opinion, is broken and bad design. Smalltalk allows you to wrap objects like this; it's a part of its core philosophy. Same reason you can't have a structural type system, and interfaces make no sense (there were threads about these issues recently).

Nope, no change needed (there might be a few oddities with the "meta" Object methods, as mentioned).

I definitely get the point that this is magic and best avoided without a good reason, but again, the fact that we could have this today in SC3 I think is a great reason to accept a little magic, especially since if you want a real promise, you can just ask for it.

I say today because this needed doesNotUnderstandWithKeys to be truly transparent and it took me a while to figure out how to do that.

– Beckett should have called it Waiting for SC4.


I definitely get the point that this is magic and best avoided without a good reason, but again, the fact that we could have this today in SC3 I think is a great reason to accept a little magic, especially since if you want a real promise, you can just ask for it.

The other alternative is to just add dedicated methods that return a Promise, e.g. Buffer.readAsync etc. The only downside is a bit of API bloat. (The actual implementations would be just simple one-liners that wrap the callback versions.)

Again, the problem I have with the automagic solution is that it's not clear what's happening. That's even more problematic when we consider that there are countless existing tutorials that already use these methods, but in a different way (e.g. with s.sync or action). I think there's a huge potential for confusion. With Buffer.readAsync it's at least entirely clear that it behaves differently than Buffer.read. The tutorials would need to be updated anyway.

– Beckett should have called it Waiting for SC4.

:smiley:


Small side note: sclang is actually the perfect fit for async/await because it is built on top of coroutines and therefore does not require function coloring *), so it's ironic – and a bit sad – that it is still stuck in callback hell and obtuse sync patterns.

*) In most languages, e.g. Python, JS, C#, any function that awaits asynchronous functions must be specifically marked as async. As a consequence, any callers of that function must be async as well. That's what people mean when they say that async APIs are viral. See "What Color is Your Function?" – journal.stuffwithstuff.com. Sclang does not have this problem!


Actually, I don't think the AutoPromise is entirely backwards compatible: if you write code in the "new" style, relying on auto-syncing, it wouldn't run correctly on older SC versions. You'd have to add a big disclaimer to each method's documentation where you need to

  1. explain that this returns an AutoPromise
  2. tell that this only works on SC version 3.x or later

How would you go about updating the tutorials? If you change them to rely on auto-syncing, they would only be valid for SC version 3.x or later. (And again, you will have the problem of different tutorial versions using the same methods but in different ways.)

Changing the behavior of an existing method will always be a breaking change, no matter how transparent you try to make it.

I think you are talking about the forwards compatibility of older versions, not the backwards compatibility of autopromise – requiring all changes to preserve the forwards compatibility of old versions seems unrealistic, and a user's expectation that code written on 3.10 should work on 3.2 is also unrealistic; the other way around, though, is reasonable.

What is important is that old (but correct) code still runs, which it should.

Now that I've got doesNotUnderstandWithKeys, I will put together a proof of concept that people can try when I get a chance. @jamshark70, if I make a fork with this change, might you be interested in seeing if your older code still works with it (not asking for detailed bug reports), as I don't have a large collection of projects to test it on?

I'll also put forward a proper write-up of all the pros and cons and make a formal PR after that.

requiring all changes to preserve the forwards compatibility of old versions seems unrealistic, and a user's expectation that code written on 3.10 should work on 3.2 is also unrealistic; the other way around, though, is reasonable.

You're right in general that we can't expect newer features to be available in old versions.

Adding new methods does not pose a problem because old versions will fail with an obvious error.

Adding new arguments to a method is already a slippery slope. Ideally, methods already have a forward compatibility layer in place. For example, VSTPlugin.search takes an Event with additional options and I post a warning if that Event contains an option that the current VSTPlugin version does not understand. Another example: some Pd objects or methods take flags and you get an error message if a flag is not recognized.

Changing the behavior of an existing method – even if backwards compatible – is problematic if it can lead to silent failures on older systems. That's the issue I'm having with your proposal.

For example, a user might see the following in a new tutorial:

b = Buffer.read(s, "foo");
b.numFrames.postln;

Then they might try this in a previous SC version and it would fail silently. How would they even know what's going on? If they tried to call Buffer.readAsync, they would at least get a clear error and also quickly realize that the method simply does not exist in their version.

All I want to say is that even with a seemingly backwards compatible solution like AutoPromise, you'd need a clear migration path.


Again, I would suggest everyone to deeply look at what other programming languages are doing.

Most programming languages handle this very poorly. Elixir (and Erlang) probably handle this well, but then async is kind of the building block of that environment (hey kids - SC4 exists, it just runs on the BEAM and has a different name).

If you don't care about performance (and for SuperCollider there's no particular reason that you should), then I'd use yield and threads. Run your "async" code in threads, and use yield to allow the user to get the result from that code at a later date. Then you can add all your other semantics on top of that.

So you could have a method in the base class that is something like async. Pass async any method and it wraps it in a thread and yields a "promise" (some way to check later whether the value has completed, and if so get it, otherwise do some kind of wait). Then you can add your other semantics on top of that.
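A sketch of that wrapper in sclang – the names are assumed, and the returned Event stands in for a proper promise class:

```supercollider
// wrap any function: run it on its own Routine, return a waitable handle
+ Function {
	async {
		var cond = CondVar.new, done = false, result;
		fork {
			result = this.value;
			done = true;
			cond.signalAll;
		};
		// an Event as a poor man's promise: poll completion, or block for it
		^(
			isDone: { done },
			await: { |self|
				cond.wait { done };
				result
			}
		)
	}
}

// usage (the waiting itself must happen inside a Routine):
// ~task = { ~someSlowFunction.value }.async;
// ... other work ...
// ~result = ~task.await;
```

With this in place, awaitAll/awaitAny-style combinators and the explicit Promise API discussed above could both be layered on top of the same handle.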

2 Likes