It is just that since sclang tries to be rank polymorphic (multichannel expansion), i.e. almost all functions take either a T or an array of T, without some way to specify this you would have to write every function at least twice.
Generic definitions in constructs like classes enable abstraction across types. For instance, the Collection hierarchy could be expressed as generic classes:
• Collection[T] → a subclass of Object parameterized by T.
• Set[E] → a subclass of Collection parameterized by E.
For example, Collection[Character] or Set[Int] specifies concrete types replacing the formal arguments. Formal arguments can have bounds [a range, or a condition (odd numbers)?]; without explicit bounds, they are assumed to be subtypes of Object.
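Since the typed-sclang syntax is only hypothetical at this point, here is a rough sketch of the same ideas in TypeScript (all class and method names here are illustrative, not part of any existing API):

```typescript
// Unbounded formal argument: without an explicit bound, T is effectively
// "any subtype of Object" (closest TypeScript analogue: unconstrained T).
class Collection<T> {
  protected items: T[] = [];
  add(item: T): this { this.items.push(item); return this; }
  size(): number { return this.items.length; }
}

// Bounded formal argument: E must satisfy a structural constraint.
// Note that bounds here are *type* bounds; a value-level condition like
// "odd numbers" would need refinement types, which ordinary generics lack.
interface Comparable { compareTo(other: this): number; }
class SortedSet<E extends Comparable> extends Collection<E> {}

// Concrete types replace the formal arguments at the use site,
// cf. Collection[Character] or Set[Int] in the proposal above:
const chars = new Collection<string>();
chars.add("a").add("b");
console.log(chars.size()); // 2
```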
function syntax extended? Parametric polymorphism? Union types?
[^Symbol] denotes a block that takes no arguments and returns a Symbol.
[Character, ^Integer] describes a block/function taking a Character and returning an Integer.
[Boolean, Integer, ^Boolean] specifies a function accepting a Boolean and an Integer, returning a Boolean.
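For comparison, the three bracketed block signatures above map directly onto function types in an existing typed language; a TypeScript sketch (type names invented for illustration):

```typescript
// [^Symbol]: no arguments, returns a Symbol.
type SymbolThunk = () => symbol;

// [Character, ^Integer]: takes a Character, returns an Integer.
// (TypeScript has no Character type, so a one-char string stands in.)
type CharToInt = (c: string) => number;

// [Boolean, Integer, ^Boolean]: takes a Boolean and an Integer, returns a Boolean.
type BoolIntToBool = (b: boolean, i: number) => boolean;

const ord: CharToInt = (c) => c.charCodeAt(0);
console.log(ord("A")); // 65
```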
Could a method have a return type that depends on the types of its arguments, with the actual types being inferred?
specify an object’s type as a combination of several types (e.g., Symbol | UndefinedObject). Parametric polymorphism is useful for methods where the result type depends on the argument types. For instance, Collection[T]’s collect: method uses the passed function to determine the return type of the collection operation.
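The collect: case can be sketched in TypeScript, where the result element type of a map-style operation is inferred from the passed function rather than declared by the caller (function names here are illustrative):

```typescript
// The result type R is inferred from the function argument,
// which is exactly what a typed collect: would want.
function collect<T, R>(xs: T[], fn: (x: T) => R): R[] {
  return xs.map(fn);
}

// Inferred as number[] because (s) => s.length returns number.
const lengths = collect(["ab", "cde"], (s) => s.length);
console.log(lengths); // [2, 3]
```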
That specific code can still be expressed as foo { |x| x + 1 } - it just inherits the type constraints of + (which already supports Number and Collection).
If the implementation of foo was not JUST delegating to other methods that are also polymorphic, then it means that its current sclang implementation already has something like: if (x.isArray) { x.collect(foo(_)) } { x + 1 }. User code already has to deal with polymorphism in an annoying way. From a language design / UX perspective, I guess it would be good to make sure that the “new” statically typed code is clearer and easier to write than the old “type checking” code.
With an idealized compiler, the compiled result should be identical whether user code has the two overloaded implementations, OR a type check in code like x.isKindOf(Array). It’s probably an open question how difficult it would really be to make a compiler that could understand that both of these cases are the same, but the information is definitely there.
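The two formulations can be put side by side in TypeScript (a sketch, with hypothetical names; sclang's isKindOf check is approximated with Array.isArray): the runtime type check carries the same information as the pair of overloads, which is why an idealized compiler could treat them as the same program.

```typescript
// (a) runtime type check, as in current sclang-style user code:
function fooDynamic(x: number | number[]): number | number[] {
  return Array.isArray(x) ? x.map((e) => e + 1) : x + 1;
}

// (b) the same function with the union split at the signature level
// into two overloads; the body is identical.
function fooTyped(x: number): number;
function fooTyped(x: number[]): number[];
function fooTyped(x: number | number[]): number | number[] {
  return Array.isArray(x) ? x.map((e) => e + 1) : x + 1;
}

console.log(fooDynamic(1));    // 2
console.log(fooTyped([1, 2])); // [2, 3]
```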
I’m not sure we would need a way for users to specify a return type? The return type of a function is just the union of the types of every ^something statement in that function:
So, the return type here is Union<Integer, String> - adding a return type to the signature doesn’t add anything. What if you WANT to constrain the return type? I think this might be better expressed in code rather than in the signature - for example:
We can infer a return type of Number for foo because:
numberOrString → Union<Number, String>
validateNumber(Number) → Number
validateNumber(String) → Error
So, foo → Union<Number, Error>
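The inference chain above can be sketched in TypeScript (names follow the example; a sketch, not a proposed implementation): the return type is the union of all return statements, and the "constraint" lives in a validating function rather than in the signature.

```typescript
// Return type is the union of both return statements: number | string.
function numberOrString(flag: boolean): number | string {
  if (flag) { return 42; }  // contributes number
  return "forty-two";       // contributes string
}

// validateNumber(Number) -> Number; validateNumber(String) -> Error.
function validateNumber(x: number | string): number | Error {
  return typeof x === "number" ? x : new Error("not a number");
}

// So foo -> number | Error, inferred with no annotation needed.
function foo(flag: boolean) {
  return validateNumber(numberOrString(flag));
}

console.log(foo(true)); // 42
```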
I think an assumption would be that every return type would implicitly be a union with Error, but ofc there could be some kind of noexcept clause that changes this.
I can’t really think of a case where specifying a return type feels significantly better than specifying that constraint in code?
Yes, one reason I could mention: it could evolve into a better way to deal with failures. The Option type in F# (actually a type constructor, or union, because it is always paired with a type; also known as a Maybe monad) seems like an influential idea. A failure doesn’t have to be catastrophic all the time: it can be Some or None (or an Error could just mean something different, a value of type Option), giving the program a chance to deal with it elegantly.
(or even bind functions that return an option type together, as if they were regular functions, but lifted)
This can be useful, for example, in async situations that may succeed or fail, lookup tables, partial functions, and all sorts of things, treating failure as just one of the possibilities.
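A minimal Option/Maybe sketch in TypeScript, showing the bind operation mentioned above (all names are illustrative; this is the F#-style idea, not an sclang API):

```typescript
// Option as a tagged union: failure is just one of the possibilities.
type Option<T> = { kind: "some"; value: T } | { kind: "none" };

const some = <T>(value: T): Option<T> => ({ kind: "some", value });
const none = <T>(): Option<T> => ({ kind: "none" });

// bind: chain Option-returning functions as if they were regular
// functions, short-circuiting on None.
function bind<T, R>(o: Option<T>, fn: (x: T) => Option<R>): Option<R> {
  return o.kind === "some" ? fn(o.value) : none();
}

// Example: a lookup table (a partial function) that may fail,
// composed with another partial function.
const table: Record<string, number> = { freq: 440 };
const lookup = (k: string): Option<number> =>
  k in table ? some(table[k]) : none();
const halve = (n: number): Option<number> =>
  n % 2 === 0 ? some(n / 2) : none();

console.log(bind(lookup("freq"), halve)); // { kind: "some", value: 220 }
console.log(bind(lookup("amp"), halve));  // { kind: "none" }
```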
Rather than a noexcept, it might be possible to go explicit: there is a nice hole in the syntax after the argument pipes where a question mark could go to mark the function as returning a Union<T, Error>.
Error would need an overload for ?? that might collide with nil. Since nil is indicated only by the tag in the PyrSlot, it could be made to also attach a PyrSymbol containing the message to throw.
Having an LSP hint for unhandled errors would be nice.
I think possibly no extra syntax is even needed? In your example, it’s trivial to infer that the return type is Union<Float, Error>. But it’s also important to point out that that code could ALSO implicitly throw, unless we know at compile time that every possible if implementation, as well as Error.new, is non-throwing.
> Having an LSP hint for unhandled errors would be nice.
Yes, the more of this we could do the better. It’s a bit hard because e.g. any method called on a variable whose type we don’t know could potentially throw - this would mean essentially every method call in sclang is a warning - probably not useful information at that point? I’m wondering what kinds of likely-or-for-sure errors WOULD be useful and actionable?
That is interesting. This language also has the option type, and it uses null. But in its case, null is always paired with a type when you assign it to a variable; it can’t exist alone. So it’s not exactly our nil, but similar to Nothing, or None. I’m not sure one can bind functions using it (I think not; that would be an even closer implementation), but it means it can be useful even without binding.
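TypeScript with strict null checks behaves the same way, and can serve as a concrete sketch of "null always paired with a type" (variable names are illustrative):

```typescript
// null cannot exist alone: the variable must be declared as T | null,
// which behaves like an option type even without a bind operation.
// Compare the Symbol | UndefinedObject union mentioned earlier.
let maybeName: string | null = null;
maybeName = "sclang";

// The compiler forces a null check before the value can be used:
const len = maybeName !== null ? maybeName.length : 0;
console.log(len); // 6
```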