I’m building a GUI to control my UnoSynth using sysex. I have noticed that the UnoSynth’s sysex schema represents bipolar parameters (such as an oscillator’s tune) as something that I think is a signed 16-bit integer (although there are only 128 possible values). (Obviously, since SC’s sysex works with Int8Arrays, these parameter values can be thought of as two 8-bit integers.)

Some example values are as follows:

[127,64] // Minimum value, i.e. tuned fully down
[127,74] // Ten units higher than the minimum
[127,127] // Just below the centre
[0,0] // Just above the centre
[0,63] // Maximum value, i.e. tuned fully up

My question is this: is there a natural way I can express the values of this parameter in SuperCollider, so that I don’t have to write some horrible conversion code every time I do arithmetic on them?
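For what it’s worth, since every byte in the example pairs is at most 127, they look like standard 7-bit MIDI sysex data bytes forming a 14-bit two’s-complement value, and that interpretation matches all five example pairs. Here is a sketch of that guess in Python (the function names are mine, and the 14-bit assumption is inferred from the data, not from the UnoSynth documentation; the same arithmetic ports directly to sclang):

```python
def decode_pair(msb, lsb):
    """Read two 7-bit data bytes as a signed 14-bit two's-complement value."""
    raw = (msb << 7) | lsb          # 0 .. 16383
    return raw - 16384 if raw >= 8192 else raw

def encode_pair(value):
    """Split a signed value back into [msb, lsb], wrapping into 14 bits."""
    raw = value % 16384             # two's-complement wrap
    return [raw >> 7, raw & 0x7F]

print(decode_pair(127, 64))   # the "minimum" pair from the examples
print(encode_pair(decode_pair(127, 64) + 10))  # ten units up
```

With this reading, [127,64] decodes to -64, [0,63] to 63, and adding ten units to the minimum lands on the [127,74] pair from the examples.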

One thing that has been talked about for a long time, but not actually done, is to reduce, rather than expand, the surface area of the class library (to address the criticism that there are too many ways of doing things, and that this is confusing to new users).

One obstacle to this effort is the constant temptation to add new features – “oh it’s just one little method.”

It’s reasonable, of course, to consider it – but it should be noted that the cost (in terms of maintenance) of adding features is not zero.

I don’t know - users can be overwhelmed by “too many ways” but also frustrated by reasonable expectations not being met.

I’m not sold on the idea of an .as16bit method myself - for me, the “as” methods should be reserved for Classes. But we do have an Int16Array with putInt16 and getInt16 methods so…

I would say, though, that if a small suggestion is indeed “reasonable to consider”, let’s consider!

I think there should be a dedicated thread for this:

Thread title: A wish list of methods and classes that could/should be deprecated in future SC releases to reduce multiple ways to do the same thing.

However, your ~as16bit function should be added as a method – or otherwise it should appear somewhere in the tutorials or help documents. It explains the difference between 16-bit and 32-bit integers, and the wrap method, in a very simple and impressive way.

The best solution for signed 16-bit would be to implement it as a type (a class). One design decision that would have to be made: if the result of an operation between two signed 16-bit ints would overflow, should it follow the current behavior of Integer ‘+’ and just overflow, or of Integer ‘/’ and automatically cast upward? I tend to think overflow (if you’re using this as a type, then presumably you know what you’re doing) – just that the decision should be considered, not assumed.

Int16(a) + b, where b is an Integer: what should the output type be? (There is already precedent for casting to the “bigger” type, e.g. anInt + aFloat → Float. But then Int16(32767) + 1 would be 32768, which is probably not what’s wanted… so Int16 would have to follow a different principle.)

a + Int16(b), where a is an Integer: what should the output type be?
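To make the two questions concrete, here is a toy sketch in Python (not SC, but the design question is language-agnostic) in which both Int16(a) + b and a + Int16(b) stay Int16 and wrap on overflow. That is one possible answer, not a recommendation, and all names here are hypothetical:

```python
def wrap16(x):
    """Wrap an integer into the signed 16-bit range -32768..32767."""
    return (x + 0x8000) % 0x10000 - 0x8000

class Int16:
    """Toy signed 16-bit type: arithmetic wraps, mixed operands stay Int16."""
    def __init__(self, value):
        self.value = wrap16(int(value))

    def __add__(self, other):
        # Policy choice: Int16 + anything integer-like yields a wrapped Int16.
        return Int16(self.value + int(other))

    # Same policy for plain-int + Int16 (the second question above).
    __radd__ = __add__

    def __int__(self):
        return self.value

    def __repr__(self):
        return f"Int16({self.value})"

print(Int16(32767) + 1)   # wraps instead of casting upward
print(1 + Int16(32767))   # same policy in the reversed order
```

Under this policy Int16(32767) + 1 is Int16(-32768) rather than 32768 – which is exactly the “different principle” from the anInt + aFloat → Float precedent.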

An Int16 class could be a quark.

as16bit is a bit misleading as a name, because it isn’t converting to a 16-bit type. Maybe wrapToSigned16Bits, or a wrapToSignedBinaryPrecision that takes the number of bits as an argument.
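Generalized over bit width as suggested, such a method amounts to a one-liner. A hypothetical Python version (the name mirrors the wrapToSignedBinaryPrecision suggestion; in SC the body would simply be x.wrap(-(2 ** (bits - 1)), 2 ** (bits - 1) - 1)):

```python
def wrap_to_signed(x, bits=16):
    """Wrap an integer into the signed two's-complement range of `bits` bits."""
    half = 1 << (bits - 1)          # e.g. 32768 for 16 bits
    return (x + half) % (1 << bits) - half

print(wrap_to_signed(32768))        # one past the 16-bit maximum
print(wrap_to_signed(-65, bits=7))  # one past the 7-bit minimum
```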

I don’t have a very strong objection to adding – what I’m really talking about is a sort of mental habit in the SC community where “oh, that’s a neat way to do it” is often reflexively followed by “that should be in the class library.” Well… maybe it should. Or maybe not. Maybe there’s a better way, or maybe it’s a niche feature that wouldn’t be widely used (meaning it might not be worth a permanent maintenance burden). Within the community, it needs to be OK to say “ooh, we’d like that in core” and it also needs to be OK to say “sure, it’s cool, but it maybe doesn’t make the cut.”

~~

Another thing where I’m not sure what exactly is the right thing to do is: if a point of confusion is not SC-specific, to what degree is it SC’s responsibility to (re-)document something that is either a general convention in classical computing, or a known general formula in DSP?

That wrap operation is based on understanding:

that unsigned fixed-precision integers effectively take every result modulo the size of the type. In 2-digit decimal, (99 + 1) == 0 because everything is % 100 and 100 % 100 == 0. (You can check this in SC as well: 0xFFFFFFFF + 1 is 0.)

that subtraction is done by finding the complementary positive number matching up to the negative number. In unsigned 2-digit decimal, -1 matches up to 99 (because + 100 is a no-op in this number system, plus, -1 is 1 less than 0 and 99 is 1 less than 100). x - 1 and x + 99 behave the same – take the mod example and flip the operands: 1 + 99 == 1 + (-1) == 0.

that signed types then, effectively, shift the modulo range so that half the possible values are negative, and half are non-negative (>= 0). In 2-digit decimal, the 50 negative numbers would be -50 to -1, and the 50 non-negative would be 0 to 49.

x % n is the same as x.wrap(0, n-1) for integers. Shifting the operation is then just x.wrap(lowestNegativeValue, highestPositiveValue).
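The observations above can be checked directly. A small sketch in Python (the concepts are not SC-specific, as noted), with an integer wrap helper modeled on SC’s x.wrap(lo, hi):

```python
# 2-digit unsigned decimal: every result is taken % 100.
assert (99 + 1) % 100 == 0

# Subtraction as addition of the complement: x - 1 behaves like x + 99.
x = 42
assert (x - 1) % 100 == (x + 99) % 100

def wrap(x, lo, hi):
    """Integer wrap into the inclusive range lo..hi (like SC's x.wrap(lo, hi))."""
    span = hi - lo + 1
    return (x - lo) % span + lo

# x % n is x.wrap(0, n - 1)...
assert wrap(123, 0, 99) == 123 % 100

# ...and the signed type just shifts the range: -50..-1 and 0..49.
assert wrap(99, -50, 49) == -1    # 99 "means" -1 in signed 2-digit decimal
assert wrap(50, -50, 49) == -50   # one past the positive maximum wraps around
```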

Certainly, a method named asInt16 might suggest the existence of an Int16 type, even though that wouldn’t be true. It could be a good idea to implement Int8 and Int16 for performance reasons, or even a “scientific” type with arbitrary widths; that is all fine. But how a polymorphic integer type would work, and how its values behave when they exceed its width, has to be well documented.

Out of curiosity, I checked the performance difference between fixed-size Ints and modern implementations of a type representing the entire infinite range of integers. The difference is not that significant (and in some cases the arbitrary-precision type can even be faster).