Async lang behaviour - how this could be made easier for new users

It would still be a major change.

It’s fine to add breaking changes, but not in a minor version*). Such a big change would certainly require a major version bump, i.e. SC 4.

*) Of course, this only applies to classic semver and there are other versioning schemes (e.g. Lua is notorious for doing major language revisions in “minor” versions).

Here’s my two cents: I don’t think it is a good idea to try to hide the asynchronous nature of SC’s client-server architecture; we rather need to highlight it and make it more explicit! It might be tempting to hide the complexity, but there is a point where the facade will break down eventually. Instead, let’s just be honest from the beginning.

What we actually need are decent modern async programming patterns. The SC Class Library was written long before async programming became mainstream – and it shows. Callbacks (action arguments) just don’t compose well and easily lead to callback hell. Instead we need a promise-based API, so that we can write pseudo-sequential code while still communicating clearly when things happen asynchronously.

SynthDef.add, for example, is problematic because it returns a half-initialized object without an obvious way to sync at all – except for the mysterious (and problematic *) s.sync command.
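For reference, the current idiom is roughly this (a minimal sketch, assuming a booted default server s):

// today: add the SynthDef, then block on a full server roundtrip with s.sync
fork {
	SynthDef(\foo, { |out = 0| Out.ar(out, SinOsc.ar(440, 0, 0.1)) }).add;
	s.sync; // waits for *all* pending async commands, not just this one
	Synth(\foo);
};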

Buffer.read is a bit better because at least it has an action function, but that one may still be overlooked. If all asynchronous methods returned promises, it would be much harder to use them incorrectly; with factory methods, such as Buffer.read, it would be even impossible because you couldn’t access the object without resolving the promise first.

In an alternate universe:

// SynthDef.add returns a promise for a SynthDef and also stores it in the SynthDescLib
~foo = SynthDef.add(\foo, { ... });
// resolve explicitly (will wait if necessary)
~foo.await;
// Synth looks up and awaits the SynthDef; it is not possible for a Synth to
// instantiate before the SynthDef is ready.
Synth(\foo);

 // read soundfile to buffer and wait for completion
~buf = Buffer.read("foo.wav").await;

// read a big list of soundfiles and wait until all have finished.
// [].await is just a shortcut for [].collect(_.await).
// On the Server, the soundfiles may be loaded in parallel!
~bufs = (~files.collect { |f| Buffer.read(f) }).await;

To be clear: I am not suggesting that any of this can be reasonably implemented in SC3 without adding lots of extra complexity or baggage; it would only make sense in the context of a redesign of the Class Library. So it’s rather something for the much longed-for, yet impalpable SC4 :slight_smile:


*) s.sync is bad for mainly two reasons:

  1. it only works for asynchronous operations that require a single server roundtrip; for example, it does not work with VSTPluginController.open.

  2. it imposes a severe limitation on scsynth: for s.sync to work, all async commands must execute in sequence. This means that there can only ever be a single NRT thread. Without the s.sync pattern, async commands would be able to execute in parallel, so that short commands are not blocked by longer commands. (Compare this with my proposed async-task-API for Pd: New asynchronous task API by Spacechild1 · Pull Request #1357 · pure-data/pure-data · GitHub)
    In general, ordering should be enforced client-side – with the help of a proper async programming model – and not by forcing the worker system to be single-threaded…


I had anticipated a somewhat different problem with “building on top of” – namely, that you could end up with a SyncBuffer, and an XyzOtherFeatureBuffer, and there’s no good way in SC to integrate those.

I think the antidote to both forms of slippery slope is to go through a proper design process: gather requirements, and then design an architecture to support the requirements. Gathering requirements would need to be thorough but also reasonable – some ideas might not make the cut, but try to work in the bulk of it.

Design is… something the SC community has often done in a haphazard way, if at all (and a good amount of the technical debt derives from this). One notable exception is the AbstractResponderFunc hierarchy (OSCFunc, MIDIFunc, HIDFunc), where Scott Wilson in particular put a lot of thought into the requirements and designed a structure which handles pretty much every scenario, not just the typical ones. As a result, since its introduction 12 1/2 years ago, this object hierarchy has seen minor fixes but no major overhaul, and no new-object-creep… which emphasizes to me that when the SC community does approach bigger problems in terms of proper design, then the process works and the result is sustainable.

I think here is where Spacechild1 is getting closer to the core of it. If async is a solved problem in computer science, then design based on that.

Violating my own thoughts about careful design – what if everything runs in a thread? Then you could await freely, anywhere, anytime (where scztt has already done Futures, I think?).

Actually it’s always been possible to run all interactive blocks in their own threads, without any backend changes.

	interpretPrintCmdLine {
		var res, func, code = cmdLine, doc, ideClass = \ScIDE.asClass;
		preProcessor !? { cmdLine = preProcessor.value(cmdLine, this) };
		func = this.compile(cmdLine);
		if (ideClass.notNil) {
			thisProcess.nowExecutingPath = ideClass.currentPath
		} {
			if(\Document.asClass.notNil and: {(doc = Document.current).tryPerform(\dataptr).notNil}) {
				thisProcess.nowExecutingPath = doc.tryPerform(\path);
			}
		};
		{
			res = func.value;
			thisProcess.nowExecutingPath = nil;
			codeDump.value(code, res, func, this);
			("-> " ++ res).postln;
		}.fork(AppClock);
	}

Then:

// no fork!
(
b = Buffer.read(s, Platform.resourceDir +/+ "sounds/a11wlk01.wav");
s.sync;  // I'm aware that this isn't a Future... just POC'ing the fork
b.numFrames;  // also no postln needed here either
)

-> 188893

There must be a drawback somewhere :laughing: but I think any such drawbacks are likely to be highly exotic. (Except… nowExecutingPath would have to use the thread’s executingPath, not the Interpreter’s.)

(I guess scheduling would also need to fork… conceptually not a terrible thing, though it would drive up GC load for things like defer { gui.update(...) } because Routines have a lot of slots.)

hjh


Definitely agree with this, but I think having this called manually in all cases is a little problematic.

Consider a new user, say a classical oboist, who has no idea what programming is, never mind (a)synchronous programming. For them to simply play back a soundfile on the server, they would need to be taught all about the server/client split, then what a promise is, and that they should always remember to call await. Now, I don’t actually think this is that complex; the issue comes when they start writing code: how do they know whether to await or not? That is a lot to ask of a new user.

In other words, I am suggesting supercollider should be for musicians and make musical action as simple as possible, applying some ‘automagic’ to smooth over those complex areas. I understand the argument against this… it ‘implicitly’ does stuff, and doesn’t represent ‘good’ programming practice… but, in this specific case, having to learn about async and promises is an awful lot of prerequisite learning just to play a sound file. That is not to say they shouldn’t eventually have to deal with those concepts, but they should arise later in the pedagogical experience.

Making this easier might also increase supercollider user numbers and expand the community.

I think there might be a little bit of survivor bias (the one with the bullet marks on all the non-critical parts of an aeroplane) with regard to people who have transitioned from classical musician to supercollider user, in that we forget how hard it actually is.

To phrase this in another way, is supercollider for: developers who want to make music? musicians who want to make music with a computer? or musicians who want to learn to be developers?

Obviously it’s all three, but the first two are in clear conflict and I’d rather see supercollider simplify certain things (so long as it can be done safely, which in this case it can) to further engage with musicians.

Another alternative that springs to mind — and I don’t know what other people think about this? — is we make breaking changes, but split the class library into a separate git repo, and in the IDE let the user choose which version to use. There could be a big red box that indicates when supercollider is running in legacy mode.

I don’t think making the user learn the new way of doing things is a problem; the only issue is when their piece of music no longer works. Allowing them to change version would mostly solve this. The downside being quarks — which I think might be worth moving to a project-local install folder.

Yup this might just simplify a whole lot of stuff! I’m gonna give it a go, thanks!

In other words, I am suggesting supercollider should be for musicians and make musical action as simple as possible, applying some ‘automagic’ to smooth over those complex areas. I understand the argument against this… it ‘implicitly’ does stuff, and doesn’t represent ‘good’ programming practice… but, in this specific case, having to learn about async and promises is an awful lot of prerequisite learning just to play a sound file.

A programming language is for programmers, just like an oboe is for musicians. I could learn all about correct reed embouchure, fingering charts, breathing technique, and oboe maintenance, or I could pick up a kazoo. If you just want to play a sound file, use VLC. If you insist on doing it in SuperCollider, there is Buffer:-play.
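For reference, the bare-bones version of that today, using the classic help-file sound (note that the second step still assumes the read has completed):

b = Buffer.read(s, Platform.resourceDir +/+ "sounds/a11wlk01.wav");
// ...once the file has loaded:
b.play;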

The oboe has changed a lot throughout the past 300 years. Most of these changes were ergonomic, meaning the performer gets to spend less time worrying about technical issues and more time making music — I am simply trying to understand this musical instrument’s strategy for improvement and am concerned there doesn’t seem to be one.


boo hiss!

A workshop this month: SuperCollider | CCRMA

this five-day workshop welcomes participants of any discipline, with or without prior programming or musical experience. (The class is for musicians, composers, sound artists, programmers, researchers, and anyone else who’s excited to learn more about SuperCollider!)

(sorry for the excursus, let’s get back to our regular programming!)

Sure, there’s some truth in that. Although by that argument, why not just write raw DSP code in C++ or assembly? The language is targeted towards making sound and music, and I think there’s something to be said for making accomplishing that goal as ergonomic as possible, but without taking away the possibility to do less obvious things.

I’m thinking of this quote: “Simple things should be simple, complex things should be possible.”

If I wanted to load not 1 but 100 buffers from disk, would the autosync version be a lot slower because it essentially serializes loading each buffer? If so, having a SmartBuffer with autosync and a “dumb” Buffer without autosync would both have their merit.


No, it doesn’t serialise them; it only waits when the buffer is used. This makes it a little complicated to compare.

If you declare them all upfront and use them in the same order, then it should be significantly faster, as s.sync syncs with all of them, whereas this only waits for the one being accessed.

The basic case should always be faster if the buffer has loaded in the background, as it just checks a bool, whereas sync always makes the round trip. If it has to wait, then it should be about the same.

I’ll put together some benchmarks tomorrow if I get a chance to check I’m not talking out my backside…

But there certainly will be something this can’t do, I’ll also put together a list when I make a proper proposal.


Partly for giggles: Pd can provide a nice contrast between what it looks like when “simple things are simple” (using some of my abstractions) vs when the system doesn’t do anything to facilitate simple things (Pd vanilla).

Task: Load a sound file into memory upon initialization (side-stepping sync, fwiw), and upon a trigger, play the entire file at its normal rate (i.e., no tabplay~, no readsf~, because those assume the system sample rate), with a 10 ms fade in and out at the ends.

Part of the point is that there are at least half a dozen places on the left-hand side where you can make trivial errors that will break playback. I “know what I’m doing” but at first, I forgot the [* 1000] and also messed up the ASR envelope formula. If a relatively experienced user gets some details wrong, it’s certainly going to throw off new users.

So I’m sympathetic to the idea of making it easier. It’s just about, what is the right design for this?

This is probably a bit of a risky suggestion, as it’s a little bit “magical” (meaning that users would struggle with it when it doesn’t come into play) but… if SC has a Future object, and it’s an AbstractFunction, then:

Future : AbstractFunction {
	var value;

	// ...

	value { this.await; ^value }  // assumes we're in a thread

	// auto-evaluate upon math
	composeUnaryOp { |selector|
		^this.value.perform(selector)
	}
	composeBinaryOp { arg aSelector, b, adverb;
		^this.value.perform(aSelector, b, adverb)
	}
	// ... there are a few other methods to fill in like this...

	// OSC messaging support
	asControlInput {
		^this.value.asControlInput
	}
}

So e.g. numFrames could be a Future, and then using it in a Synth arg list would automatically await. Also math operations would automatically await. The drawback is that in the absence of a math operation or asControlInput translation, the user would be responsible for evaluating, and it might not always be clear when that’s the case. So some of the confusion would be pushed to a different place. But it might make the bare-bones case easier, e.g.:

fork {
	b = Buffer.read(...);
	x = Synth(\bufPlayer, [bufnum: b, time: b.duration]);
}

… should be fine under that scheme.

hjh


Original impl

Realised the original implementation isn’t here. It’s pretty hairy at the moment, but it is essentially an auto-promise wrapper. It works with all classes (except that named args don’t work, which was why I started this thread) and just calls wait if needed. It’s definitely a work in progress.

Impl
ObjectPreCaller  {
	var <>impl_underlyingObject;
	var <>impl_preFunc;

	// DO NOT WAIT ON THIS METHOD AS THE INTERPRETER USES IT TO PRINT
	asString { |limit| ^impl_underlyingObject.asString(limit)  }
	class { ^impl_underlyingObject.class() }

	dump { impl_preFunc.(); ^impl_underlyingObject.dump() }
	post { impl_preFunc.(); ^impl_underlyingObject.post()}
	postln { impl_preFunc.(); ^impl_underlyingObject.postln()}
	postc { impl_preFunc.(); ^impl_underlyingObject.postc()}
	postcln { impl_preFunc.(); ^impl_underlyingObject.postcln()}
	postcs { impl_preFunc.(); ^impl_underlyingObject.postcs()}
	totalFree { impl_preFunc.(); ^impl_underlyingObject.totalFree() }
	largestFreeBlock { impl_preFunc.(); ^impl_underlyingObject.largestFreeBlock() }
	gcDumpGrey { impl_preFunc.(); ^impl_underlyingObject.gcDumpGrey() }
	gcDumpSet { impl_preFunc.(); ^impl_underlyingObject.gcDumpSet() }
	gcInfo { impl_preFunc.(); ^impl_underlyingObject.gcInfo() }
	gcSanity { impl_preFunc.(); ^impl_underlyingObject.gcSanity() }
	canCallOS { impl_preFunc.(); ^impl_underlyingObject.canCallOS() }
	size { impl_preFunc.(); ^impl_underlyingObject.size()}
	indexedSize { impl_preFunc.(); ^impl_underlyingObject.indexedSize()}
	flatSize { impl_preFunc.(); ^impl_underlyingObject.flatSize()}
	functionPerformList { impl_preFunc.(); ^impl_underlyingObject.functionPerformList() }
	copy { impl_preFunc.(); ^impl_underlyingObject.copy()}
	contentsCopy { impl_preFunc.(); ^impl_underlyingObject.contentsCopy()}
	shallowCopy { impl_preFunc.(); ^impl_underlyingObject.shallowCopy()}
	copyImmutable { impl_preFunc.(); ^impl_underlyingObject.copyImmutable() }
	deepCopy { impl_preFunc.(); ^impl_underlyingObject.deepCopy() }
	poll { impl_preFunc.(); ^impl_underlyingObject.poll()}
	value { impl_preFunc.(); ^impl_underlyingObject.value()}
	valueArray { impl_preFunc.(); ^impl_underlyingObject.valueArray()}
	valueEnvir { impl_preFunc.(); ^impl_underlyingObject.valueEnvir()}
	valueArrayEnvir { impl_preFunc.(); ^impl_underlyingObject.valueArrayEnvir()}
	basicHash { impl_preFunc.(); ^impl_underlyingObject.basicHash()}
	hash { impl_preFunc.(); ^impl_underlyingObject.hash()}
	identityHash { impl_preFunc.(); ^impl_underlyingObject.identityHash()}
	next { impl_preFunc.(); ^impl_underlyingObject.next()}
	reset { impl_preFunc.(); ^impl_underlyingObject.reset()}
	iter { impl_preFunc.(); ^impl_underlyingObject.iter()}
	stop { impl_preFunc.(); ^impl_underlyingObject.stop()}
	free { impl_preFunc.(); ^impl_underlyingObject.free()}
	clear { impl_preFunc.(); ^impl_underlyingObject.clear()}
	removedFromScheduler { impl_preFunc.(); ^impl_underlyingObject.removedFromScheduler()}
	isPlaying { impl_preFunc.(); ^impl_underlyingObject.isPlaying()}
	embedInStream { impl_preFunc.(); ^impl_underlyingObject.embedInStream()}
	loop { impl_preFunc.(); ^impl_underlyingObject.loop()}
	asStream { impl_preFunc.(); ^impl_underlyingObject.asStream()}
	eventAt { impl_preFunc.(); ^impl_underlyingObject.eventAt()}
	finishEvent { impl_preFunc.(); ^impl_underlyingObject.finishEvent()}
	atLimit { impl_preFunc.(); ^impl_underlyingObject.atLimit()}
	isRest { impl_preFunc.(); ^impl_underlyingObject.isRest()}
	threadPlayer { impl_preFunc.(); ^impl_underlyingObject.threadPlayer()}
	threadPlayer_ { arg player; impl_preFunc.(); ^impl_underlyingObject.threadPlayer_(player) }
	isNil { impl_preFunc.(); ^impl_underlyingObject.isNil()}
	notNil { impl_preFunc.(); ^impl_underlyingObject.notNil()}
	isNumber { impl_preFunc.(); ^impl_underlyingObject.isNumber()}
	isInteger { impl_preFunc.(); ^impl_underlyingObject.isInteger()}
	isFloat { impl_preFunc.(); ^impl_underlyingObject.isFloat()}
	isSequenceableCollection { impl_preFunc.(); ^impl_underlyingObject.isSequenceableCollection()}
	isCollection { impl_preFunc.(); ^impl_underlyingObject.isCollection()}
	isArray { impl_preFunc.(); ^impl_underlyingObject.isArray()}
	isString { impl_preFunc.(); ^impl_underlyingObject.isString()}
	containsSeqColl { impl_preFunc.(); ^impl_underlyingObject.containsSeqColl()}
	isValidUGenInput { impl_preFunc.(); ^impl_underlyingObject.isValidUGenInput()}
	isException { impl_preFunc.(); ^impl_underlyingObject.isException()}
	isFunction { impl_preFunc.(); ^impl_underlyingObject.isFunction()}
	trueAt { impl_preFunc.(); ^impl_underlyingObject.trueAt()}
	mutable { impl_preFunc.(); ^impl_underlyingObject.mutable()}
	frozen { impl_preFunc.(); ^impl_underlyingObject.frozen()}
	halt { impl_preFunc.(); ^impl_underlyingObject.halt() }
	prHalt { impl_preFunc.(); ^impl_underlyingObject.prHalt() }
	primitiveFailed { impl_preFunc.(); ^impl_underlyingObject.primitiveFailed() }
	reportError { impl_preFunc.(); ^impl_underlyingObject.reportError() }
	mustBeBoolean { impl_preFunc.(); ^impl_underlyingObject.mustBeBoolean()}
	notYetImplemented { impl_preFunc.(); ^impl_underlyingObject.notYetImplemented()}
	dumpBackTrace { impl_preFunc.(); ^impl_underlyingObject.dumpBackTrace() }
	getBackTrace { impl_preFunc.(); ^impl_underlyingObject.getBackTrace() }
	throw { impl_preFunc.(); ^impl_underlyingObject.throw() }
	species { impl_preFunc.(); ^impl_underlyingObject.species()}
	asCollection { impl_preFunc.(); ^impl_underlyingObject.asCollection()}
	asSymbol { impl_preFunc.(); ^impl_underlyingObject.asSymbol()}
	asCompileString { impl_preFunc.(); ^impl_underlyingObject.asCompileString() }
	cs { impl_preFunc.(); ^impl_underlyingObject.cs()}
	storeArgs { impl_preFunc.(); ^impl_underlyingObject.storeArgs()}
	dereference { impl_preFunc.(); ^impl_underlyingObject.dereference()}
	reference { impl_preFunc.(); ^impl_underlyingObject.reference()}
	asRef { impl_preFunc.(); ^impl_underlyingObject.asRef()}
	dereferenceOperand { impl_preFunc.(); ^impl_underlyingObject.dereferenceOperand()}
	asArray { impl_preFunc.(); ^impl_underlyingObject.asArray()}
	asSequenceableCollection { impl_preFunc.(); ^impl_underlyingObject.asSequenceableCollection()}
	rank { impl_preFunc.(); ^impl_underlyingObject.rank()}
	slice { impl_preFunc.(); ^impl_underlyingObject.slice()}
	shape { impl_preFunc.(); ^impl_underlyingObject.shape()}
	unbubble { impl_preFunc.(); ^impl_underlyingObject.unbubble()}
	yield { impl_preFunc.(); ^impl_underlyingObject.yield() }
	alwaysYield { impl_preFunc.(); ^impl_underlyingObject.alwaysYield() }
	dependants { impl_preFunc.(); ^impl_underlyingObject.dependants() }
	release { impl_preFunc.(); ^impl_underlyingObject.release() }
	releaseDependants { impl_preFunc.(); ^impl_underlyingObject.releaseDependants() }
	removeUniqueMethods { impl_preFunc.(); ^impl_underlyingObject.removeUniqueMethods() }
	inspect { impl_preFunc.(); ^impl_underlyingObject.inspect()}
	inspectorClass { impl_preFunc.(); ^impl_underlyingObject.inspectorClass()}
	inspector { impl_preFunc.(); ^impl_underlyingObject.inspector() }
	crash { impl_preFunc.(); ^impl_underlyingObject.crash() }
	stackDepth { impl_preFunc.(); ^impl_underlyingObject.stackDepth() }
	dumpStack { impl_preFunc.(); ^impl_underlyingObject.dumpStack() }
	dumpDetailedBackTrace { impl_preFunc.(); ^impl_underlyingObject.dumpDetailedBackTrace() }
	freeze { impl_preFunc.(); ^impl_underlyingObject.freeze() }
	beats_ { arg beats; impl_preFunc.(); ^impl_underlyingObject.beats_(beats) }
	isUGen { impl_preFunc.(); ^impl_underlyingObject.isUGen()}
	numChannels { impl_preFunc.(); ^impl_underlyingObject.numChannels()}
	clock_ { arg clock; impl_preFunc.(); ^impl_underlyingObject.clock_(clock) }
	asTextArchive { impl_preFunc.(); ^impl_underlyingObject.asTextArchive() }
	asBinaryArchive { impl_preFunc.(); ^impl_underlyingObject.asBinaryArchive() }
	help { impl_preFunc.(); ^impl_underlyingObject.help()}
	asArchive { impl_preFunc.(); ^impl_underlyingObject.asArchive() }
	initFromArchive { impl_preFunc.(); ^impl_underlyingObject.initFromArchive()}
	archiveAsCompileString { impl_preFunc.(); ^impl_underlyingObject.archiveAsCompileString()}
	archiveAsObject { impl_preFunc.(); ^impl_underlyingObject.archiveAsObject()}
	checkCanArchive { impl_preFunc.(); ^impl_underlyingObject.checkCanArchive()}
	isInputUGen { impl_preFunc.(); ^impl_underlyingObject.isInputUGen()}
	isOutputUGen { impl_preFunc.(); ^impl_underlyingObject.isOutputUGen()}
	isControlUGen { impl_preFunc.(); ^impl_underlyingObject.isControlUGen()}
	source { impl_preFunc.(); ^impl_underlyingObject.source()}
	asUGenInput { impl_preFunc.(); ^impl_underlyingObject.asUGenInput()}
	asControlInput { impl_preFunc.(); ^impl_underlyingObject.asControlInput()}
	asAudioRateInput { impl_preFunc.(); ^impl_underlyingObject.asAudioRateInput()}
	slotSize { impl_preFunc.(); ^impl_underlyingObject.slotSize() }
	getSlots { impl_preFunc.(); ^impl_underlyingObject.getSlots() }
	instVarSize { impl_preFunc.(); ^impl_underlyingObject.instVarSize()}

	do { arg function; impl_preFunc.(); ^impl_underlyingObject.do( function ); }
	generate { arg function, state; impl_preFunc.(); ^impl_underlyingObject.generate( function, state ); }
	isKindOf { arg aClass; impl_preFunc.(); ^impl_underlyingObject.isKindOf( aClass );  }
	isMemberOf { arg aClass; impl_preFunc.(); ^impl_underlyingObject.isMemberOf( aClass ); }
	respondsTo { arg aSymbol; impl_preFunc.(); ^impl_underlyingObject.respondsTo( aSymbol ); }
	performMsg { arg msg; impl_preFunc.(); ^impl_underlyingObject.performMsg( msg );  }
	perform { arg selector ... args; impl_preFunc.(); ^impl_underlyingObject.perform( selector, *args );  }
	performList { arg selector, arglist; impl_preFunc.(); ^impl_underlyingObject.performList( selector, arglist );  }
	superPerform { arg selector ... args; impl_preFunc.(); ^impl_underlyingObject.superPerform( selector, *args );  }
	superPerformList { arg selector, arglist; impl_preFunc.(); ^impl_underlyingObject.superPerformList( selector, arglist );  }
	tryPerform { arg selector ... args; impl_preFunc.(); ^impl_underlyingObject.tryPerform( selector, *args );  }
	multiChannelPerform { arg selector ... args; impl_preFunc.(); ^impl_underlyingObject.multiChannelPerform( selector, *args );  }
	performWithEnvir { arg selector, envir; impl_preFunc.(); ^impl_underlyingObject.performWithEnvir( selector, envir );  }
	performKeyValuePairs { arg selector, pairs; impl_preFunc.(); ^impl_underlyingObject.performKeyValuePairs( selector, pairs );  }
	dup { arg n ; impl_preFunc.(); ^impl_underlyingObject.dup( n );  }
	! { arg n; impl_preFunc.(); ^(impl_underlyingObject !  n);  }
	== { arg obj; impl_preFunc.(); ^(impl_underlyingObject ==  obj);  }
	!= { arg obj; impl_preFunc.(); ^(impl_underlyingObject !=  obj);  }
	=== { arg obj; impl_preFunc.(); ^(impl_underlyingObject ===  obj); }
	!== { arg obj; impl_preFunc.(); ^(impl_underlyingObject !==  obj); }
	equals { arg that, properties; impl_preFunc.(); ^impl_underlyingObject.equals( that, properties );  }
	compareObject { arg that, instVarNames; impl_preFunc.(); ^impl_underlyingObject.compareObject( that, instVarNames );  }
	instVarHash { arg instVarNames; impl_preFunc.(); ^impl_underlyingObject.instVarHash( instVarNames );  }
	|==| { arg that; impl_preFunc.(); ^(impl_underlyingObject |==|  that);  }
	|!=| { arg that; impl_preFunc.(); ^(impl_underlyingObject |!=|  that);  }
	prReverseLazyEquals { arg that; impl_preFunc.(); ^impl_underlyingObject.prReverseLazyEquals( that );  }
	-> { arg obj; impl_preFunc.(); ^(impl_underlyingObject ->  obj);  }
	first { arg inval; impl_preFunc.(); ^impl_underlyingObject.first( inval ); }
	cyc { arg n; impl_preFunc.(); ^impl_underlyingObject.cyc( n );  }
	fin { arg n; impl_preFunc.(); ^impl_underlyingObject.fin( n );  }
	repeat { arg repeats; impl_preFunc.(); ^impl_underlyingObject.repeat( repeats ); }
	nextN { arg n, inval; impl_preFunc.(); ^impl_underlyingObject.nextN( n, inval );  }
	streamArg { arg embed; impl_preFunc.(); ^impl_underlyingObject.streamArg( embed );  }
	composeEvents { arg event; impl_preFunc.(); ^impl_underlyingObject.composeEvents( event ); }
	? { arg obj; impl_preFunc.(); ^(impl_underlyingObject ?  obj); }
	?? { arg obj; impl_preFunc.(); ^(impl_underlyingObject ??  obj); }
	!? { arg obj; impl_preFunc.(); ^(impl_underlyingObject !?  obj); }
	matchItem { arg item; impl_preFunc.(); ^impl_underlyingObject.matchItem( item ); }
	falseAt { arg key; impl_preFunc.(); ^impl_underlyingObject.falseAt( key );  }
	pointsTo { arg obj; impl_preFunc.(); ^impl_underlyingObject.pointsTo( obj ); }
	subclassResponsibility { arg method; impl_preFunc.(); ^impl_underlyingObject.subclassResponsibility( method );  }
	doesNotUnderstand { arg selector ... args; impl_preFunc.(); ^impl_underlyingObject.doesNotUnderstand( selector, *args );  }
	shouldNotImplement { arg method; impl_preFunc.(); ^impl_underlyingObject.shouldNotImplement( method );  }
	outOfContextReturn { arg method, result; impl_preFunc.(); ^impl_underlyingObject.outOfContextReturn( method, result );  }
	immutableError { arg value; impl_preFunc.(); ^impl_underlyingObject.immutableError( value );  }
	deprecated { arg method, alternateMethod; impl_preFunc.(); ^impl_underlyingObject.deprecated( method, alternateMethod );  }
	printClassNameOn { arg stream; impl_preFunc.(); ^impl_underlyingObject.printClassNameOn( stream );  }
	printOn { arg stream; impl_preFunc.(); ^impl_underlyingObject.printOn( stream );  }
	storeOn { arg stream; impl_preFunc.(); ^impl_underlyingObject.storeOn( stream );  }
	storeParamsOn { arg stream; impl_preFunc.(); ^impl_underlyingObject.storeParamsOn( stream );  }
	simplifyStoreArgs { arg args; impl_preFunc.(); ^impl_underlyingObject.simplifyStoreArgs( args );  }
	storeModifiersOn { arg stream; impl_preFunc.(); ^impl_underlyingObject.storeModifiersOn( stream ); }
	as { arg aSimilarClass; impl_preFunc.(); ^impl_underlyingObject.as( aSimilarClass ); }
	deepCollect { arg depth, function, index, rank ; impl_preFunc.(); ^impl_underlyingObject.deepCollect( depth, function, index , rank ); }
	deepDo { arg depth, function, index , rank ; impl_preFunc.(); ^impl_underlyingObject.deepDo( depth, function, index , rank ); }
	bubble { arg depth, levels; impl_preFunc.(); ^impl_underlyingObject.bubble( depth, levels);  }
	obtain { arg index, default; impl_preFunc.(); ^impl_underlyingObject.obtain( index, default ); }
	instill { arg index, item, default; impl_preFunc.(); ^impl_underlyingObject.instill( index, item, default );  }
	addFunc { arg ... functions; impl_preFunc.(); ^impl_underlyingObject.addFunc(*functions );  }
	removeFunc { arg function; impl_preFunc.(); ^impl_underlyingObject.removeFunc( function );  }
	replaceFunc { arg find, replace; impl_preFunc.(); ^impl_underlyingObject.replaceFunc( find, replace );  }
	addFuncTo { arg variableName ... functions; impl_preFunc.(); ^impl_underlyingObject.addFuncTo( variableName, *functions );  }
	removeFuncFrom { arg variableName, function; impl_preFunc.(); ^impl_underlyingObject.removeFuncFrom( variableName, function );  }
	while { arg body; impl_preFunc.(); ^impl_underlyingObject.while( body );  }
	switch { arg ... cases; impl_preFunc.(); ^impl_underlyingObject.switch(*cases );  }
	yieldAndReset { arg reset ; impl_preFunc.(); ^impl_underlyingObject.yieldAndReset( reset );  }
	idle { arg val; impl_preFunc.(); ^impl_underlyingObject.idle( val );  }
	changed { arg what ... moreArgs; impl_preFunc.(); ^impl_underlyingObject.changed( what, *moreArgs );  }
	addDependant { arg dependant; impl_preFunc.(); ^impl_underlyingObject.addDependant( dependant );  }
	removeDependant { arg dependant; impl_preFunc.(); ^impl_underlyingObject.removeDependant( dependant );  }
	update { arg theChanged, theChanger; impl_preFunc.(); ^impl_underlyingObject.update( theChanged, theChanger ); }
	addUniqueMethod { arg selector, function; impl_preFunc.(); ^impl_underlyingObject.addUniqueMethod( selector, function );  }
	removeUniqueMethod { arg selector; impl_preFunc.(); ^impl_underlyingObject.removeUniqueMethod( selector );  }
	& { arg that; impl_preFunc.(); ^(impl_underlyingObject &  that); }
	| { arg that; impl_preFunc.(); ^(impl_underlyingObject |  that); }
	% { arg that; impl_preFunc.(); ^(impl_underlyingObject %  that); }
	** { arg that; impl_preFunc.(); ^(impl_underlyingObject **  that); }
	<< { arg that; impl_preFunc.(); ^(impl_underlyingObject <<  that); }
	>> { arg that; impl_preFunc.(); ^(impl_underlyingObject >>  that); }
	+>> { arg that; impl_preFunc.(); ^(impl_underlyingObject +>>  that); }
	<! { arg that; impl_preFunc.(); ^(impl_underlyingObject <!  that); }
	blend { arg that, blendFrac; impl_preFunc.(); ^impl_underlyingObject.blend( that, blendFrac );  }
	blendAt { arg index, method; impl_preFunc.(); ^impl_underlyingObject.blendAt( index, method);  }
	blendPut { arg index, val, method; impl_preFunc.(); ^impl_underlyingObject.blendPut( index, val, method);  }
	fuzzyEqual { arg that, precision; impl_preFunc.(); ^impl_underlyingObject.fuzzyEqual( that, precision); }
	pair { arg that; impl_preFunc.(); ^impl_underlyingObject.pair( that ); }
	pairs { arg that; impl_preFunc.(); ^impl_underlyingObject.pairs( that );  }
	awake { arg beats, seconds, clock; impl_preFunc.(); ^impl_underlyingObject.awake( beats, seconds, clock );  }
	performBinaryOpOnSomething { arg aSelector, thing, adverb; impl_preFunc.(); ^impl_underlyingObject.performBinaryOpOnSomething( aSelector, thing, adverb );  }
	performBinaryOpOnSimpleNumber { arg aSelector, thing, adverb; impl_preFunc.(); ^impl_underlyingObject.performBinaryOpOnSimpleNumber( aSelector, thing, adverb );  }
	performBinaryOpOnSignal { arg aSelector, thing, adverb; impl_preFunc.(); ^impl_underlyingObject.performBinaryOpOnSignal( aSelector, thing, adverb );  }
	performBinaryOpOnComplex { arg aSelector, thing, adverb; impl_preFunc.(); ^impl_underlyingObject.performBinaryOpOnComplex( aSelector, thing, adverb );  }
	performBinaryOpOnSeqColl { arg aSelector, thing, adverb; impl_preFunc.(); ^impl_underlyingObject.performBinaryOpOnSeqColl( aSelector, thing, adverb );  }
	performBinaryOpOnUGen { arg aSelector, thing, adverb; impl_preFunc.(); ^impl_underlyingObject.performBinaryOpOnUGen( aSelector, thing, adverb );  }
	writeDefFile { arg name, dir, overwrite; impl_preFunc.(); ^impl_underlyingObject.writeDefFile( name, dir, overwrite );  }
	slotAt { arg index; impl_preFunc.(); ^impl_underlyingObject.slotAt( index );  }
	slotPut { arg index, value; impl_preFunc.(); ^impl_underlyingObject.slotPut( index, value );  }
	slotKey { arg index; impl_preFunc.(); ^impl_underlyingObject.slotKey( index );  }
	slotIndex { arg key; impl_preFunc.(); ^impl_underlyingObject.slotIndex( key );  }
	slotsDo { arg function; impl_preFunc.(); ^impl_underlyingObject.slotsDo( function );  }
	slotValuesDo { arg function; impl_preFunc.(); ^impl_underlyingObject.slotValuesDo( function );  }
	setSlots { arg array; impl_preFunc.(); ^impl_underlyingObject.setSlots( array );  }
	instVarAt { arg index; impl_preFunc.(); ^impl_underlyingObject.instVarAt( index );  }
	instVarPut { arg index, item; impl_preFunc.(); ^impl_underlyingObject.instVarPut( index, item );  }
	writeArchive { arg pathname; impl_preFunc.(); ^impl_underlyingObject.writeArchive( pathname );  }
	writeTextArchive { arg pathname; impl_preFunc.(); ^impl_underlyingObject.writeTextArchive( pathname );  }
	getContainedObjects { arg objects; impl_preFunc.(); ^impl_underlyingObject.getContainedObjects( objects );  }
	writeBinaryArchive { arg pathname; impl_preFunc.(); ^impl_underlyingObject.writeBinaryArchive( pathname );  }
}



AutoPromise : ObjectPreCaller {
	var <>priv_condVar;
	var <>priv_isSafe;

	*new{
		var self = super.new()
		.priv_isSafe_(false)
		.priv_condVar_(CondVar());

		self.impl_preFunc = {
			if(self.priv_isSafe.not, {
				try
				{ self.priv_condVar.wait({ self.priv_isSafe }) }
				{ |er|
					if((er.class == PrimitiveFailedError) && (er.failedPrimitiveName == '_RoutineYield'),
						{ AutoPromise.prGenerateError(self.class.name).error.throw },
						{ er.throw } // some other error
					)
				}
			})
		};
		^self
	}

	// only call this once in normal use
	impl_addUnderlyingObject { |obj|
		impl_underlyingObject = obj
	}

	impl_markSafe {
		priv_isSafe = true;
		priv_condVar.signalAll;
	}

	// used to wrap the functions that the child class explicitly defines
	doesNotUnderstand { |selector ... args|
		this.impl_preFunc.();
		^this.impl_underlyingObject.perform(selector.asSymbol, *args)
	}

	*prGenerateError { |className|
		^className ++ "'s value has not completed,"
		+ "either use it in a Routine/Thread, or,"
		+ "literally wait until the resource has loaded and try again"
	}
}



+ Buffer {
	*readAP { |server, path, startFrame=0, numFrames=(-1), action|
		var r = AutoPromise();
		var buffer = Buffer.read(
			server: server,
			path: path,
			startFrame: startFrame,
			numFrames: numFrames,
			action: { |buf|
				r.impl_markSafe();
				action !? {action.(buf)};
			}
		);
		r.impl_addUnderlyingObject(buffer);
		^r
	}

	at {|index|
		var r = AutoPromise();
		this.get(index, action: {|v|
			r.impl_addUnderlyingObject(v);
			r.impl_markSafe();
		});
		^r
	}
}



Usage

This is what it looks like to use; I’ve also made it work for Buffer.get, as it can seamlessly wrap any type.

Basic forked

fork {
	~b = Buffer.readAP(s, ~path);
	~b.numFrames.postln; // waits automatically
}

Basic no fork

~b =  Buffer.readAP(s, ~path);

// wait a second

~b.numFrames; //has been updated behind the scenes, does not wait

Wrapping Buffer.get

fork {
	~b = Buffer.readAP(s, ~path);
	~result = ~b[41234]; // waits on ~b, calls Buffer.get -- returns an AutoPromise
	format("result + 1 = %", ~result + 1).postln; // waits on ~result
}

~b = Buffer.readAP(s, ~path);
// wait a second
~result = ~b[41234]; 
// wait a second
format("result + 1 = %", ~result + 1).postln; 

There is an issue here though…

fork {
	~b = Buffer.readAP(s, ~path);
	~result = ~b[41234]; // waits on ~b, calls Buffer.get -- returns an AutoPromise
	format("1 + result = %", 1 + ~result).postln; // waits on ~result
}

… doing 1 + ~result does not work, as no message is sent to the AutoPromise. I don’t know if this is a simple change or not, as you might just be able to call value inside the number’s + method with little consequence? Ultimately the issue is that a primitive is used here. Instead, you get an error, as the current ‘value’ of ~result is nil.

Benchmarks

@shiihs
Okay turns out I was … sort of…

Basic single buffer = the same


s.waitForBoot {
	{
		var b = Buffer.readAP(s, ~path) ;
		b.numFrames.postln;
		b.free;
	}.bench // time to run: 0.12787021299999 seconds.
}

s.waitForBoot {
	{
		var b = Buffer.read(s, ~path) ;
		s.sync;
		b.numFrames.postln;
		b.free;
	}.bench // time to run: 0.12784386300001 seconds.
}

10 Buffers = the same.
(This comparison assumes you call sync in the plain Buffer.read version — it breaks on my system if you don’t.)
This is the one that I thought would be faster, but for some reason it isn’t? Not a big deal, as it is at least no slower.

s.waitForBoot {
	{
		~bufs = 10.collect({
			Buffer.readAP(s, ~path)
		});
		~bufs[0].numFrames.postln;
	}.bench; //time to run: 1.227028303 seconds.
	~bufs.do(_.free);
}

s.waitForBoot {
	{
		~bufs = 10.collect({
			Buffer.read(s, ~path)
		});
		s.sync;
		~bufs[0].numFrames.postln;
	}.bench; // time to run: 1.230603833 seconds.
	~bufs.do(_.free);
}

Drawbacks

  • Doesn’t work with keyword arg function calls (the original purpose of this thread) — solvable, but definitely not trivial.
  • If you wrap it in normal parentheses, (...), it will only sometimes throw an error — I’ve added a custom error message to make this more obvious. @jamshark70’s suggestion of changing the interpreter to always fork fixes this, but might have a performance cost, although since this is only ever evaluated once at a time, it might be minor?
  • Hides the true nature of the server/client relationship — I don’t think this is a drawback at all, and you might as well argue that Buffer hides the fact that everything is sending OSC messages.
  • One downside is that things like SynthDef do not work if you have many being defined in parallel, and the AutoPromise approach can be unclear about whether something is run in parallel. I don’t think this is AutoPromise’s fault; it is SynthDef’s, and the change should be there.
  • When passing an AutoPromise’d value to a method that calls a primitive, this doesn’t count as a message, so no sync is done — it might not be possible to solve this, meaning the commutative property is broken in some cases (this does not apply to Buffer, as it isn’t a primitive or used in any primitive calls).

I still think this is the best solution for Buffer (with always fork in interpreter).

  • User just writes the code as they would as if s.sync didn’t exist.
  • It is always safe.
  • The performance is okay.
  • Almost completely backwards compatible — we don’t have to wait for the messiah of SC4 to arise.
  • If this is applied to other classes, there would be no reason to teach the client/server split or even mention synchronous/asynchronous programming in an introduction to supercollider — that is the biggest win in my mind.

As a way to fix the primitives, there could be an extra method added to Object called impl_touch, which just does nothing and would need to be called before any _XPrimitive primitive is called… Otherwise primitive types might just have to be explicitly waited on.

Why not just call wait automatically? In supercollider (unlike many other languages) we have a very clear definition of what it means to ‘use’/‘access’ an object — it’s when you send a message.

I’m not a fan of your example (when applied to Buffer) as it means the user must remember to call .value, which is essentially the same as remembering to call s.sync, and having certain built-in things do it automatically just confuses things. In my solution, the user has to do nothing (except make sure it’s being called in a forked context, or leave sufficient time, but that is true of both approaches).

JMc’s math implementation (likely cribbed from Smalltalk) should definitely support this.

1 + ~result first dispatches to Integer:+. If the primitive fails (which it will in this case), then it calls performBinaryOpOnSimpleNumber on the second operand – this method specifically, because the first operand is known to be a simple number at this point. AutoPromise’s performBinaryOpOnSimpleNumber should then await.

I.e., a message is sent to the AutoPromise: performBinaryOpOnSimpleNumber.

There’s a suite of other performBinaryOp methods that should be included for completeness.
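A minimal sketch of that, assuming the AutoPromise/impl_* names from the code above — resolve first, then re-dispatch the operation on the resolved value, the way other classes implement this fallback:

+ AutoPromise {
	performBinaryOpOnSimpleNumber { |aSelector, aNumber, adverb|
		this.impl_preFunc.value; // wait until the wrapped value is ready
		^aNumber.perform(aSelector, this.impl_underlyingObject, adverb)
	}
}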

It does take some time to understand math ops in the class library, but it’s IMO beautiful: handles every case with minimalistic, elegant factoring. (One reason why I suggested making Promise/Future a subclass of AbstractFunction is to take advantage of the compose***Op interface.)

While I don’t have a big investment in the eventual outcome, I’d note that both lnihlen and spacechild1 have raised concerns about “hairy” implementations. Balancing user needs against forward maintainability is a difficult question.

hjh

I tend to agree with this approach. I think most are sold on the idea of making things like async easier for the new user. I’ll just point out that if the main objection is in the name of looking out for the new user, then adopting the latest and greatest in the culture should come naturally (they’re new, after all, there’s no culture shift for them).

And for experienced users (who likely share OP’s pain in learning about async processes), the latest abstraction will be a breath of fresh air. But also for experienced users, who have fully internalized this at-times-useful async behavior, this

might lead to some confusing behavior (though maybe quick to adapt to?).

Thanks @jordan for starting this discussion. It will be useful to refer back to in the future… would you consider changing the name of the thread to capture the theme of async lang behavior?

If you start to learn sclang with no prior experience, there are lots of hurdles to overcome. I can vividly remember that when I learned sclang at university I didn’t know anything about object-oriented programming; I didn’t even know what a method call was! Learning a complex object-oriented programming language like sclang will require a large amount of effort from a novice programmer.

I agree that we should strive to keep the learning curve flat, but we should also be realistic. Unpopular opinion: a classical oboist who has no idea about programming and wants to dabble in live-electronics should probably start with Max/MSP.

never mind (a)synchronous programming.

Actually, the core idea around asynchronous programming itself is rather trivial:

  1. certain operations can take an unbounded amount of time
  2. we have to wait for such operations to complete
  3. it might be nice if we could do something else in between

The problem is rather how asynchronous programming is typically done in sclang. Just a short recapitulation:

// for a single buffer, we can pass a callback function:
~buf = Buffer.read('foo.wav', action: { ... });

// For multiple buffers we need to use s.sync instead
~bufs = ~files.collect({ |x| Buffer.read(x) });
s.sync;

// Oh, but this only works if the async operation involves a *single* Server roundtrip,
// so the following does not work:
~data = [];
~bufs.do { |b| b.getToFloatArray(action: { |data| ~data = ~data.add(data) }) };
s.sync; // nope...

// Also, s.sync only works for asynchronous operations that involve the Server, so the following doesn't work either:
~cmds.do(_.unixCmd);
s.sync; // nope...

// So how do we actually synchronize in the last two examples? Go figure...

// Finally, getting data asynchronously with callbacks is awkward:
~buf.getn(0, 128, action: { |data| ~data = data });
s.sync;

In a promise-based model, on the other hand, all of these operations would look the same and they would be much simpler to use:

// read a single buffer and wait for completion
~buf = Buffer.read('foo.wav').await;

// Wait for multiple buffers to load
~bufs = ~files.collect({ |x| Buffer.read(x) }).await;

// Naturally, this also works for operations with several Server roundtrips:
~data = ~bufs.collect(_.getToFloatArray).await;

// Same for async operations that don't involve the Server:
~cmds.collect(_.unixCmd).await;

// Getting data asynchronously looks the same:
~data = ~buf.getn(0, 128).await;

I hope this illustrates the point I’m trying to make. Asynchronous programming does not have to be hard!

For them to simply playback a soundfile on the server they would need to be taught all about the server/client split, then what a promise is, and that they should always remember to call await.

You don’t really need to know much about the actual client/server-architecture. The only thing you do need to know is that some operations are asynchronous and you need to await them. IMO it is not more difficult than remembering to call SynthDef.add.

how do they know whether to await or not? That is a lot to ask of a new user.

Documentation and examples.


One important thing I forgot to mention: a promise-based model also simplifies error handling because you can use exceptions, just like with ordinary synchronous method calls!

try {
	~buf = Buffer.read('foo.txt').await;
} { |error|
	...
}

In general, I think there is lots of truth in the adage “explicit is better than implicit” (PEP 20 – The Zen of Python | peps.python.org).

I’m not saying that your autopromise approach is bad per se, but I don’t think such “magic” belongs in basic server abstractions like Buffer. A method like numFrames should just return a value and not do some funky stuff behind the scenes, such as blocking the calling thread. Waiting for completion should be done explicitly.

Actually, you can keep your internal promise object and just let the user await it with an explicit method call:

~buf = Buffer.read('foo.wav').await;
~buf.numFrames;

This way we can get close to a “real” promise-based model. It is not perfect because we don’t really return a promise, but it’s probably the best we can get without breaking backwards compatibility or adding lots of new dedicated methods (which would just further bloat the Class Library).

Anyway, thanks indeed for starting this discussion!


Assuming we’re not changing the current behavior of Buffer, I still think something like this

could be quite useful, though a warning is probably more appropriate.
The getters for state vars depending on async resources could be guarded by a flag that is set when the Buffer (or whatever) is loaded. See Buffer:-queryDone

// called from Server when b_info is received
queryDone {
	doOnInfo.value(this);
	doOnInfo = nil;
	stateLoaded = true; // new
}

// new getter
numFrames {
	stateLoaded.not.if{
		"Tried to access Buffer before the server had finished loading it.".warn
	};
	^numFrames // still return the var
}

I’m not sure how rude we’re willing to be with warnings/errors, but in this case it’s clearly bad behavior to access these uninitialized vars.

Okay, a follow-up about server commands.

I’m writing a promise and making a new SmartBuffer class (for lack of a better name). I will eventually add all these to a quark, with the hope they might be added to the standard class library.

To synchronise with any server message, can a sync command be used? The documentation says,

Replies with a /synced message when all asynchronous commands received before this one have completed.

Is that because the server only has one thread executing these, and therefore it will only ever be done once all previous commands have completed, or does it mean it will end up waiting longer than needed? This isn’t too much of a problem with SmartBuffer, as other messages can be used, but for something like SmartSynthDef, the \d_recv message doesn’t respond with the names of the defined synthdefs in \done.


Yes, that’s exactly how it works.


Coming to this thread quite late (good discussion though!), but a couple notes:

Probably it’s slightly better to do your sync via the completionMessage rather than an explicit s.sync call. It’s unlikely to be THAT important, but it removes one set of abstractions (the sync mechanism) from your implementation. The implementation I’m using with Deferred is just:

    *doRead {
        |server, path, startFrame = 0, numFrames = -1, bufnum|
        var d = Deferred();
        Buffer.read(server, path, startFrame, numFrames, d.valueCallback, bufnum);
        ^d
    }

Though it involves a lot of boilerplate, I think you could do a full Buffer implementation that’s transparently async. There are only two server objects that really require synchronization: Buffers and SynthDefs - I think it might be overkill to go too far down the route of any uber-generic solution, when you could just fix probably five or ten methods on these objects. I’ll sketch out what I’m thinking of (making use of Deferred as my promise, but could be translated to other mechanisms) - this is just a mockup / example.

AsyncBuffer : Buffer {
    var <>async;
    
    *read {
        |argpath, fileStartFrame = 0, numFrames = -1, bufStartFrame = 0, leaveOpen = false, action|
        var deferred, newBuffer;
        deferred = Deferred();
        newBuffer = super.read(argpath, fileStartFrame, numFrames, bufStartFrame, leaveOpen, deferred.valueCallback());
        deferred.then(action);
        newBuffer.async = deferred;
        ^newBuffer
    }
    
    // basically: wrap this, asUGenInput, and every other read operation that depends on
    // the buffer being loaded - there should only be 5 or 6?
    asControlInput {
        if (thisThread.isKindOf(Routine)) {
            async.wait;
        } {
            if (async.hasValue.not) {
                Error("Buffer is waiting to be loaded").throw
            }
        };
        ^this.bufnum
    }
}

+Buffer {
    *new { arg server, numFrames, numChannels, bufnum;
        ^AsyncBuffer.new(server, numFrames, numChannels, bufnum)
    }
    // and other Buffer creation methods.
}

Thinking about the behavior and compatibility here:

  1. If we ARE in a Routine:
    1.1 …and our Buffer is already loaded: behavior is identical to before
    1.2 …our buffer is not loaded: we wait implicitly, and when we return, behavior is identical to before. This is pretty unlikely to have negative effects, except in cases where we are e.g. reading a bufnum for a message we schedule in the future (this would have worked before, and would now have an extra pause, which could be disruptive)
  2. If we’re NOT in a Routine:
    2.1. …and the buffer is not loaded: this is now an explicit error instead of undefined behavior as it was before
    2.2. …and the buffer is loaded: this is the same as before

It’s likely that any user code that’s even remotely sophisticated is already managing these server sync issues somehow, so adding this behavior by default (or without care…) is going to be somewhere between not-beneficial and a bug risk - this feels muuuuuuch better as a Quark than a core library change. If there was momentum to introduce better async API’s in the core library, we might as well just skip backwards compatibility entirely and redesign ALL the server object API’s, and then move internal uses over to the new API’s - there’s lots of good clean-up work to do anyways…
