Depending on how you think about it, sclang already has a limited, ad-hoc implementation of type-based dynamic dispatch. You can imagine the collection of all methods defined in SuperCollider as a map between tuples that look like <Symbol, Type, Type, ...Type> and method implementations. SuperCollider does runtime dispatching based on only the first two entries in the tuple, and all the rest are assumed to be Object (i.e. derived from Object, which is just any value at all). So, a table of play methods might look like:
{
    <'play', Ndef, ...Object>: { ...Ndef:play method... },
    <'play', Pdef, ...Object>: { ...Pdef:play method... },
}
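To make that mental model concrete, here’s a minimal C++ sketch of such a table. This is only an illustration of the idea - Class, Selector, and Method are hypothetical stand-ins, not the actual sclang runtime structures:

#include <map>
#include <string>
#include <utility>

struct Class;                  // hypothetical stand-in for a sclang class object
using Selector = std::string;  // hypothetical stand-in for a Symbol
using Method = void (*)();     // hypothetical stand-in for a compiled method

// The key covers only the selector and the receiver's class; every argument
// after the receiver is implicitly treated as Object.
std::map<std::pair<Selector, const Class*>, Method> methodTable;

Method lookup(const Selector& selector, const Class* receiverClass) {
    // (the real runtime also falls back to superclasses when the receiver's
    // exact class has no entry)
    auto found = methodTable.find({selector, receiverClass});
    return found == methodTable.end() ? nullptr : found->second;
}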
Secondly, sclang does another limited form of runtime dispatching with unary and binary operators. The primitive implementations of these look like this (in pseudocode):
// operatorPlus, left.type == Integer
if (right.type == Integer) {
    plusIntInt(left, right);
} else if (right.type == Float) {
    plusIntFloat(left, right);
} // ... etc ...
This is just the same as:
{
    <'plus', Integer, Integer>: { ...int+int implementation... },
    <'plus', Integer, Float>: { ...int+float implementation... },
    <'plus', Float, Integer>: { ...float+int implementation... },
    ... etc ...
}
It’s not so hard to imagine typed dispatch bolted onto the existing sclang runtime. We keep the current “big table” dispatching for the symbol + first argument, and then dispatch on the remaining arguments in the same way we do binop dispatching (in pseudocode):
for (auto method : possibleMethods) {
    if (signatureMatches(method, argumentTypes)) {
        invoke(method, arguments);
        break; // stop at the first matching overload
    }
}
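Here’s one possible shape for signatureMatches, sketched in C++. The Class and Method structures are hypothetical, but the core idea is just checking each argument’s runtime class against the declared class of the corresponding parameter, where unannotated parameters default to Object:

#include <cstddef>
#include <vector>

struct Class { const Class* superclass; };    // hypothetical class metaobject

struct Method {
    std::vector<const Class*> parameterTypes; // declared class per parameter; Object when unannotated
};

// Walk the superclass chain, analogous to isKindOf in sclang.
bool isKindOf(const Class* actual, const Class* declared) {
    for (const Class* c = actual; c != nullptr; c = c->superclass) {
        if (c == declared) { return true; }
    }
    return false;
}

// A signature matches when every argument's class is, or derives from, the
// declared class of the corresponding parameter.
bool signatureMatches(const Method& method, const std::vector<const Class*>& argumentTypes) {
    if (argumentTypes.size() > method.parameterTypes.size()) { return false; }
    for (std::size_t i = 0; i < argumentTypes.size(); ++i) {
        if (!isKindOf(argumentTypes[i], method.parameterTypes[i])) {
            return false;
        }
    }
    return true;
}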
All existing sclang methods work fine with an implementation like this, since they can be assumed to have a signature that expects all Objects (i.e. they take any input) - cases where methods overload based on arguments beyond the first are just cases where possibleMethods.size > 1.
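Continuing the sketch above, an untyped legacy method behaves as if every parameter were declared Object, so it matches any argument list, while a typed overload only matches when the argument actually derives from the declared class (the class names here are illustrative):

Class objectClass { nullptr };               // root of the hypothetical class tree
Class collectionClass { &objectClass };
Class arrayClass { &collectionClass };

Method legacyDoSomething { {&objectClass} }; // untyped |what| - matches anything
Method arrayDoSomething { {&arrayClass} };   // typed |what: Array|

// signatureMatches(legacyDoSomething, {&arrayClass})  -> true
// signatureMatches(legacyDoSomething, {&objectClass}) -> true
// signatureMatches(arrayDoSomething,  {&arrayClass})  -> true
// signatureMatches(arrayDoSomething,  {&objectClass}) -> false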
Practically, this would do two things -
First, it would remove the need for most type checks in sclang, as well as most type checks in primitives. Rather than writing:
doSomething {
    |what|
    if (what.isKindOf(Array)) {
        what.do { |value| value.something }
    } {
        what.something;
    }
}
you could write this:
doSomething {
    |what: Array|
    what.do { |value| value.something }
}

doSomething {
    |what: Object|
    what.something;
}
Dispatch now takes care of branching based on the type of what.
It’s worth noting that, while this kind of dispatch seems like it would be slow, it’s guaranteed to be no slower than the type-based dispatching we do NOW (which is, in the end, just some form of type comparison in a sequence of if-else blocks). FWIW the Nim language, which does dispatch similar to what I’m describing for sclang, is iirc still just doing if-else checking for all the types past the first one - there’s no particularly magical optimization going on here.
This kind of dispatch could even be implemented purely as syntactic sugar, with no runtime changes required at all - we can simply compile the two separate overloads of doSomething in the previous example back to the old-school single implementation with branching isKindOf checks. In this case, we have all the functionality of typed dispatch for every argument, with more or less the same performance as we’d get now for comparable code.
There’s a whole pile of optimizations that can be done with a system like this - actually, many of these optimizations are things sclang is already doing, just not in a systematic way. For example: control flow is not a fundamental feature of sclang - it only gets control flow via dispatch. “If” branching is basically implemented like this:
+True {
    if {
        |trueFunc, falseFunc|
        trueFunc.value()
    }
}

+False {
    if {
        |trueFunc, falseFunc|
        falseFunc.value()
    }
}

+Object {
    if {
        |trueFunc, falseFunc|
        MustBeBooleanError().throw;
    }
}
In a subset of common cases, if blocks are inlined - this effectively ends up looking like:
if (condition.isKindOf(True)) {
    trueFunc.value()
} else if (condition.isKindOf(False)) {
    falseFunc.value()
} else {
    MustBeBooleanError().throw // implicitly condition.isKindOf(Object)
}
In short: the sclang compiler is just inlining the dispatch code for efficiency. Generalizing this optimization would mean we could potentially apply it to other similar cases across the language (for example, maybe this could be applied to ANY case where a method name has 2 or fewer implementations) - it would also eliminate a bunch of special-case optimization code for control flow structures like if and while.
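As a rough illustration of the generalized version (hypothetical names, not the actual sclang compiler API), the compiler-side decision could be as simple as counting a selector’s implementations and emitting the inline isKindOf-style branching shown above whenever there are at most two:

#include <cstddef>
#include <map>
#include <string>
#include <vector>

struct Method;  // hypothetical compiled-method record

// Hypothetical index from selector name to every implementation of that selector.
using MethodIndex = std::map<std::string, std::vector<const Method*>>;

// Mirrors the special-cased `if` inlining: with two or fewer implementations,
// dispatch collapses to at most one isKindOf branch plus a fallback, so the
// call site can be inlined instead of doing a full dispatch.
bool shouldInlineDispatch(const MethodIndex& index, const std::string& selector) {
    auto found = index.find(selector);
    std::size_t count = (found == index.end()) ? 0 : found->second.size();
    return count > 0 && count <= 2;
}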