Async lang behaviour - how to this could be made easier for new users

No programmer would think that the current API design for async stuff in SuperCollider is anything other than a hot mess by contemporary standards :slight_smile: - and no musician is likely to think it’s anything other than confusing and fragile. Probably everyone in this thread can agree on that at least…

3 Likes

FWIW I agree with all the responses – I phrased my original post super poorly ^^ I was trying to say you can’t avoid the learning curve with any sufficiently expressive tool, and that point’s now been made more clearly by other posts in this thread.

1 Like

I find your proposed solution to be the most elegant one. It involves employing SynthDef and Buffer in a manner akin to how Option types in F# or Maybe types in Haskell operate.

At first glance it doesn’t seem like a good idea to create a wrapper for classes as central and commonly used as SynthDef and Buffer, if there is a cleaner way to make them handle the different situations.

just my 2 cents ))

1 Like

Yes, but when it comes to delving into Church’s old problem/idea of “program synthesis” from non-algorithmic definitions, music emerges as a particularly fertile domain. :upside_down_face:

Sorry, I’m not sure what Church thing you’re referring to or how it relates to what I said.

1 Like

Of course, it’s not the same thing; we’re talking music, not math. But it surely relates to the idea of defining high-level descriptions of ideas and tasks.

Just wanted to share a lighthearted note, my friend. :smiling_face_with_three_hearts:

For completeness, there’s also (outside of core) VSTPlugin, with two stages of initialization: plug-in loading and preset loading. A promise-based approach would greatly simplify its use!
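For readers who haven’t used it, here is roughly what those two stages look like with the current callback API. This is only a sketch (it assumes the VSTPlugin extension; the plug-in name and preset path are placeholders, and the signatures are recalled from its docs rather than guaranteed):

// a SynthDef hosting the plug-in, as in the VSTPlugin examples
SynthDef(\insert, { |bus = 0|
	ReplaceOut.ar(bus, VSTPlugin.ar(In.ar(bus, 2), 2));
}).add;

// stage one: load the plug-in; stage two can only start inside its callback
~fx = VSTPluginController(Synth(\insert));
~fx.open("SomePlugin", action: { |ctl, success|
	if (success) {
		ctl.readProgram("path/to/preset.fxp"); // stage two: load a preset
	};
});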

I can’t think of others offhand, except maybe NRT (waiting for any offline process). The LADSPA UGen is long unmaintained.

hjh

3 Likes

Definitely!

Note that asynchronous programming is not limited to Server interaction. I already mentioned unixCmd in Async lang behaviour - how to this could be made easier for new users - #38 by Spacechild1.
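For example, unixCmd returns immediately and invokes its action only when the child process exits:

// runs asynchronously: the action fires when the process finishes
"sleep 1".unixCmd({ |exitCode, pid|
	"child % exited with code %".format(pid, exitCode).postln;
});
"this line posts right away".postln;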

2 Likes

One more thought on this.

Since we really only care that a server resource (a Buffer / SynthDef) is available ON THE SERVER, we only need to wait on our promise at the very last stage, just before sending to the server. As a result, we can keep even more of this abstraction out of the core by placing our read-barrier for server promises in NetAddr, the last point before sending to the server. This would look something like:

  1. Subclass our generic promise class as e.g. ServerDeferred, ServerPromise, etc.
  2. Add an override for asControlInput that simply returns this. This means outgoing OSC messages will still contain unresolved promises:
ServerDeferred : Deferred
{
  asControlInput { ^this }
}
  3. Create a new NetAddr implementation that resolves all promises in an OSC message before sending (a minimal sketch follows below). This could be done on a separate thread without halting the current thread, if that is desired; that would avoid the problems of introducing new waits into existing threaded code AND make it work properly when executing directly from the interpreter.
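A minimal sketch of step 3, written as a plain wrapper function rather than a full NetAddr subclass to keep it short. It assumes the ServerDeferred from step 2 and a blocking value accessor on the promise (as in the Deferred pattern discussed earlier); both names are proposals from this thread, not existing classes:

// resolve any ServerDeferred arguments right before the message leaves sclang,
// then send as usual; run inside a Routine so the waits can yield
~sendResolved = { |netAddr, msg|
	var resolved = msg.collect { |item|
		if (item.isKindOf(ServerDeferred)) {
			item.value   // assumption: blocks the current Routine until fulfilled
		} {
			item
		}
	};
	netAddr.sendMsg(*resolved);
};

// e.g. (inside a Routine): ~sendResolved.(s.addr, ["/b_query", bufnumPromise]);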

There are some non-ideal things about this solution, but apart from the hack of leaving a theoretically invalid object in an OSC packet, it may actually constrain the required changes pretty nicely. With this solution, you could implement e.g. an AsyncSafeServer that uses the read-barrier NetAddr; Buffer and SynthDef could then check for the presence of this server and return promises ONLY when used on that server. But asControlInput is a pretty internal, undocumented method, and IMO it makes no obvious guarantees about what it returns, other than “this can be used in an OSC message” (which, for our case, is true).

2 Likes

For fun, here’s what it looks like to just append a sync flag to the creation methods, using some of the Deferred pattern @scztt introduced, without wrappers and with minimal bookkeeping. (Though the OSC-intercepting method could be more broadly applicable/extensible.)

This example uses the *read creator and numFrames as the server-state-dependent variable.

// Showing only the modifications to Buffer
Buffer { 

	var >numFrames; // removed getter
	var <>loadState, >isSynchronous = false; // new

	// adding the sync flag
	*read { arg server, path, startFrame = 0, numFrames = -1, action, bufnum, sync = false;
		server = server ? Server.default;
		bufnum ?? { bufnum = server.nextBufferNumber(1) };
		^super.newCopyArgs(server, bufnum)
		.doOnInfo_(action).cache
		.loadState_(Condition()).isSynchronous_(sync)    // <<< new
		.allocRead(path, startFrame, numFrames, {|buf|["/b_query", buf.bufnum] }, sync)
	}

	// new getter, this is the pattern for any server state-dependent vars
	numFrames {
		if (loadState.test) {
			^numFrames
		} { 
			^this.prGetSynchronous(thisMethod) 
		}
	}

	// new dispatch method
	prGetSynchronous { |method|
		if (isSynchronous) {
			if (thisThread.isKindOf(Routine)) {
				loadState.wait;
				^this.perform(method.name) // request again
			} {
				Error("Buffer hasn't loaded - synchronous access needs to be done in a Routine.").throw
			}
		} {
			Error("Buffer hasn't loaded - use Buffer's sync arg and a Routine to ensure Buffer is loaded before accessing.").throw
		}
	}

	queryDone {
		doOnInfo.value(this);
		doOnInfo = nil;
		loadState.test_(true).signal;   // <<< new
	}
}

and now in use…

( // a collection of buffers
s.boot;
p = Platform.resourceDir +/+ "sounds/a11wlk01.wav";
p = p.dup(25); // make large enough for significant delay
)

// Try immediate access inside a routine
(
fork {
	b = Buffer.read(s, p.first, sync: true); 
	b.numFrames.postln; // ok!
}
)
b.free; // cleanup

// Try access without synchronous flag
b = Buffer.read(s, p.first); b.numFrames;
    >> ERROR: Buffer hasn't loaded - use Buffer's sync arg and a Routine to ensure Buffer is loaded before accessing.

// Try immediate access with sync=true, but outside a routine
b = Buffer.read(s, p.first, sync: true); b.numFrames;
    >> ERROR: Buffer hasn't loaded - synchronous access needs to be done in a Routine.

// one line at a time, routine or not, same as original
b = Buffer.read(s, p.first, sync: true);
// wait a moment
b.numFrames;     // ok, no error, regardless of sync arg, it's loaded anyway
b.free;          // cleanup

// Load the whole collection
// Buffers load asynchronously, only the request is delayed!
(
fork {
	var bufs, askIdx = 14; // interact with whichever buffer

	// load the bufs - can actually be done before the routine
	bufs = p.collect{ |path|
		Buffer.read(s, path, sync: true);
	};
	// access - invokes a wait
	"buffer % numframes: %\n".postf(askIdx, bufs[askIdx].numFrames);
}
)

Buffer.freeAll; // clean up

and if the loadState condition is visible, waiting for all buffers to load is straightforward:

( // a collection of buffers
s.boot;
p = Platform.resourceDir +/+ "sounds/a11wlk01.wav";
p = p.dup(50); // make large enough for significant delay
)

(
fork {
	var bufs = p.collect{ |path|
		Buffer.read(s, path, sync: true);
	};
	// you effectively wait only as long as the longest-loading buffer
	bufs.do{ |b| b.loadState.wait }; 
	"All buffers are loaded.\n".postln;
}
)

Buffer.freeAll; // clean up

So hopefully that prGetSynchronous dispatch method would handle most of the boilerplate.
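For instance, the other server-state-dependent getters would only need the same two-line body, assuming their default getters are removed from the var declarations in the same way as numFrames’ was:

// hypothetical additional getters following the numFrames pattern above
// (assumes `var >numChannels, >sampleRate;` in the modified Buffer)
numChannels {
	if (loadState.test) { ^numChannels } { ^this.prGetSynchronous(thisMethod) }
}

sampleRate {
	if (loadState.test) { ^sampleRate } { ^this.prGetSynchronous(thisMethod) }
}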

1 Like

Regarding the interface, perhaps one could distinguish sync/async requests by setting the “on completion” block to a special token, say ‘sync’ (sketched after the list below), i.e.

  • nil = async, no completion block (as is)
  • aBlock = async, with completion block (as is)
  • ‘sync’ = sync, no completion block (new case)
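A hypothetical call-site sketch of that, reusing *read’s existing action argument as the carrier for the token (none of this exists today):

// 'sync' as a token in place of the completion block (illustrative only)
~path = Platform.resourceDir +/+ "sounds/a11wlk01.wav";
b = Buffer.read(s, ~path);                                          // async, no completion block
b = Buffer.read(s, ~path, action: { |buf| buf.numFrames.postln });  // async, with completion block
b = Buffer.read(s, ~path, action: 'sync');                          // sync (new case); needs a Routine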

It’s perhaps a bit confusing to have both a completion block and a sync flag?

I agree it’s not ideal, and no one likes adding new arguments :stuck_out_tongue_closed_eyes:. The rationale for it is that

  • sync: is more explicit, thinking of the new user (though there could be a better name?)
  • depending on the creation method, the callback is either an action (*read, *readChannel) or a completionMessage (*alloc/Consecutive, *cueSoundFile, etc.), unfortunately, and they have slightly different purposes (the two styles are contrasted just below). I haven’t fully thought through whether these three use cases are mutually exclusive, but a consistent and explicit keyword arg for this (helpful/common?) use case seems like a plus.
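To make that difference concrete, here are the two existing callback styles side by side (both from the standard class library):

// *read's `action` is an sclang function, called after the /b_info reply arrives
b = Buffer.read(s, Platform.resourceDir +/+ "sounds/a11wlk01.wav",
	action: { |buf| buf.numFrames.postln });

// *alloc's `completionMessage` is an OSC message (or a function returning one)
// that the server itself executes once the allocation has completed
b = Buffer.alloc(s, 512, 1, completionMessage: { |buf| buf.sine1Msg([1]) });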

I don’t feel strongly one way or the other, just want to make sure we don’t drift too far from the original intention.

Also still very curious to see the discussion on the Deferred approach continue, and this option :smiling_imp:

Off the back of this thread, I’ve been making an async quark with the idea to reimplement Buffer and (eventually) SynthDef/Synth. It is called Smart (as in smart pointer).

Here are a few examples.

SmartPromise

The basic promise type is very similar to Deferred, but when you add an action with .then, it is added to a pipeline which is executed when the promise is fulfilled.
There are a few cases where the pipeline can be restarted, but it does what you would expect (I hope).

fork{
    ~promise = SmartPromise()
    .then(_ * 10)
    .then({ Error("meow").throw })
    .catch(Error, { |er| "got an error".postln; });

    fork { 1.wait; ~promise.fulfil(10) };

    // ... other stuff ...

    ~promise.await.postln; // got an error
}

SmartBuffer

This is a complete reimplementation; it is not ‘awaitable’ by itself, but returns SmartPromises.

Buffer messages have been pulled out into a new class.

It is impossible to access after .free or a server quit.

~b = SmartBuffer.read("/path")
.then(_.normalize)
.await;

~b[3120, 3125..]  // get every 5th index from 3120 to the end
.then(_ * 2)
.await
.postln

One thing that hasn’t been mentioned in this thread so far is what happens if the OSC messages arrive out of order.
Say, if you normalise and then get the values of a buffer?
There is an option in SmartBuffer that makes all mutations return a promise to a “new” buffer, invalidating the existing SmartBuffer instance. This is not enabled by default, but it’s the only solution (other than using TCP) I could think of.
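(For reference, the TCP route is just a server option, since messages sent over TCP arrive in order; set it before booting.)

// boot the server with TCP instead of the default UDP
Server.default.options.protocol = \tcp;
Server.default.boot;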

SmartBarrier

Like a SmartPromise of an array, but it doesn’t require waiting on all values before the next action in the pipeline is evaluated.

(
s.waitForBoot {

	~pathsToRead = (Platform.resourceDir +/+ "sounds/a11wlk01.wav") ! 10; // replace with array of paths

	~bp = SmartBarrier(
		~pathsToRead.collect({|p|
			{ SmartBuffer.read(p) } // array of functions loading buffers
		})
	)
	.then(_.normalize)                                 // normalize each one when it is done
	.thenCleanup(_.get(44100 + 1000.rand), _.free)     // get a sample value, and free buffer
	.reject( _ > 0 );                                  // reject samples greater than 0

	// ... do some other stuff not involving the sample values

	~bs = ~bp.await.postln;  // await
}
)