SuperCollider 4: First Thoughts

Yes, I already had this idea. Reblocking and resampling would definitely be possible.

It would even allow multi-threading / multi-processing in scsynth, because you could run the sub-instance in a separate thread or process with a double buffer or ring buffer, similar to Pd’s [pd~] object. However, that wouldn’t make much sense since we already have Supernova.

Is it theoretically possible for a future version of SC to include only the core server & client architecture, along with a bare-metal internal package manager (integrated with git & networking) serving as the Quarks/extension interface?

And as a compromise to resolve the problem of “backwards compatibility”: if one attempts to run a script containing class names that are not found in the local source, then (perhaps) the interpreter would use git to check a dedicated directory file (a table mapping package names to arrays of the class names each package installs) and, in such an event, print something useful, such as the code that would need to be run in order to download any/all necessary packages not found in the local source?

And would something like this ruin the prospect of a future SC running with performance comparable to headless Pd on an ESP32 with only 512 kB of RAM?

If you’ve used Pure Data (as I do in the classroom) then you already know what this is like: 1/ features that you thought were core might not be (e.g., in Pd, signal rate comparators are external :woozy_face: … so it’s not sig > 0.5, it’s ((sig - 0.5) * 1e34).clip(0, 1) – I’m afraid I’m not joking about that; without cyclone, that is really what you have to do), and 2/ because extras aren’t mentioned in the core documentation, the only way to find the right extension is to ask on the user forum and wait for someone to answer. To Pd’s credit, with deken, they have made externals even easier to install than SC.
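For contrast, a minimal sketch of the same thing in sclang: comparison operators on UGens are built in (they produce BinaryOpUGens that output 1 or 0 at signal rate), so no external or rescaling trick is needed:

// signal-rate comparison in SC: outputs 1 while sig > 0.5, else 0
{ var sig = SinOsc.ar(440); sig > 0.5 }.play;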

There may be good reasons to go that way, but it’s no utopia.

hjh

In the endeavor of envisioning such a utopia, I wonder if we could make the first level an advancement beyond where typical usage currently starts, at the second:

  • Kernel
  • Core
  • Quarks (Verified)
  • Quarks (…in the wild)

By comparison, with Pure Data, one is essentially cornered into Pd-Vanilla, since Pd-Extended has been abandoned for nearly a decade.

The level of performance Pd achieves on microprocessors with limited resources is a considerably advanced feature, though it comes standard in its core distribution.

I propose that we may achieve the same level of versatility, while remaining accessible on first impression.


If SC4 can effectively achieve the raw deployability of Pure Data and compatibility with as many platforms as Faust, while earning a reputation as the premier open-source solution for interfacing OSC communications between any and all platforms & applications (a central control hub that makes OSC routing between apps robustly simple)…

…if utopia may be achieved, then I see no purpose in driving towards a vision of anything different, for SC.

That’s just a problem with languages that don’t support proper modules with namespaces.
Actually, Pd does have some kind of namespaces: you can prepend the library path, e.g. [cyclone/zl]. However, it’s entirely optional and many users are too lazy to type those extra characters, so you end up with the very problems you’re describing.
Sclang, on the other hand, just drops all classes into the global namespace. A very common solution is to prepend every class/function with a library prefix. Unfortunately, this is up to the library author and can’t be enforced at the language level…
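A sketch of that convention, with hypothetical class names: since all sclang classes share one global namespace, a library author might prefix every class with an abbreviation of the library name, e.g.

// hypothetical library "FooLib": every class carries the "Foo" prefix
FooReverb { *version { ^1 } }
FooDelay { *version { ^1 } }

Nothing stops another library from defining an unprefixed Reverb class, which is exactly the clash this convention tries to avoid.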

(e.g., in Pd, signal rate comparators are external :woozy_face:

This has always been a pet peeve of mine. There really is no convincing reason why signal comparison operators are not part of Pd vanilla. The only explanation is that Miller just forgot about it and nobody cared enough to make a PR – including myself. I’ve just opened an issue (https://github.com/pure-data/pure-data/issues/1449) so I don’t forget :slight_smile:

the only way to find the right extension is to ask on the user forum and wait for someone to answer.

In Deken you can also search for individual objects and it will show you the library/libraries. There even is a website: http://deken.puredata.info/. Unfortunately, it doesn’t work with objects containing “forbidden” characters, like [>~], [<~] or [||~] (unless the author provided an object list).

To Pd’s credit, with deken, they have made externals even easier to install than SC.

I also think that Deken is a nice package manager. One thing I am missing is a list of all available externals, like you get with Quarks.gui. Ideally it would also show the download stats, so people can sort the list by popularity.

EDIT: you can get a list of all available externals by doing an empty search, but the resulting list is a bit unwieldy.

I think I could see the server and sclang being two separate packages/installs. Both would still have to be “official”, though. I just can’t see how releasing SuperCollider without a Patterns library or JITLib, for example, could be a solution to anything.


To clarify, the implication was that there would be a bare-metal version of a future release of SC, which would contain:

  • Server
  • Client
  • Quarks (Extension Manager)

…and not much else, as a way to achieve the raw deployability that is a trademark of Pd.


It would be available as a more advanced feature, for anyone who wanted their own custom or headless SC to run on a downsized system.


Food for thought


Well, if you add a name mangler that sort of imitates what a linker does, then it actually is usable, for me. Otherwise you can’t include the same function more than once in the same SynthDef if it has any control args of its own. Name conflicts are horribly handled right now by the standard SC machinery, but at least for func args there is a straightforward fix to make them visible across lexical units. The dynamically generated names of the old Control.names interface have rather horrible corner cases that I personally didn’t find worthwhile supporting, rather than patching the few places in JITLib where they were still used for some reason. (There is also use of NamedControls in nearly the same library, e.g. in GraphBuilder.)


Here’s another feature to ponder: names for control busses, and perhaps for all busses.

Inside a single SynthDef you can give your controls, connecting wires, and in fact all your signals meaningful names (which pretend to be variables), at least during Synth development – except for outputs.

So a logic control synth has decent names as inputs and just a largish array as output. I can’t think of any EDA package or circuit/HDL language (e.g. Verilog) from the last 40 years that was like this, where the inputs in your circuit can be named but the outputs cannot.

Yeah, I’m probably going to get one of those “I’ve been using SC for 20 years and never needed this” replies or “this is music/dsp software, not circuit design, so we’re used to bus numbers because we only use a handful”.

By the way, CSound added a signal flow graph facility around 2010. And it has multiple named outputs, much like an HDL. But it falls short of SC’s flexibility because CSound uses a syntactically specified graph, as far as I can tell, whereas in SC the graph is obtained by some OOP magic: running the user’s SynthDef function on some special objects (OutputProxies) passed as inputs. (Really, it’s the simplest way to use OOP to generate an abstract syntax tree. It was like assignment #2 in a compiler class I took 25 years ago, or so.)

Perhaps in keeping with CSound philosophy, one is expected to run an external program to generate a graph from an OOP-like paradigm, as for CSound scores in general there are (too) numerous external generators, CMask and what not. There’s some discussion on their dev list on philosophical differences in that regard, where it was even noted that “one can not define [a] Reverb instrument and use multiple instances of it with the inlet/outlet/connect [CSound] opcodes. In a system like SuperCollider, one might instantiate multiple Synth instances of SynthDefs but tell them which channels of the bus to read from/write to, and that’s done at the instance level, not the definition.” Well, if you use my fairly simple SC mangler extension, you can easily have multiple function instances compiled into the same SynthDef (without control name conflicts), so you can even have it as one SynthDef in SC.


Another thing that could perhaps be done as a UGen: some kind of address decoder, so if you have e.g. 128 synths you don’t need 128 * (number of params) busses to map them to some control logic. Instead, the synths can select themselves based on an address bus (only 7 bits wide in this case) and read from the data bus (which only needs to be as wide as the params) when their built-in address matches what’s on the address bus. If this sounds too “circuity” for music, just think how MIDI works: it doesn’t use hundreds of wires for interconnect, which is what you’d have with one wire per instrument to select them.

Pause.kr can do a wee bit of this, as it’s the only UGen I know of that can do something to another node based on its id. But it’s not flexible enough, because it can only turn some other node on or off.

I actually overestimated how difficult it is to do this part in current SC when I wrote the above. It turns out it’s actually fairly easy… because SC busses pass floats, not just bits… duh.
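Something along these lines already works in current SC – a rough, hypothetical sketch (the bus arguments, synth id scheme, and UGen choices are only illustrative): each synth compares the shared address bus against its own id and latches the data bus only while it is being addressed.

// hypothetical address-decoding synth: reads dataBus only when addrBus == myId
SynthDef(\addrDecoded, { |myId = 0, addrBus, dataBus, out = 0|
	// 1 while the value on the address bus matches this synth's id
	var selected = InRange.kr(In.kr(addrBus), myId - 0.5, myId + 0.5);
	// sample the data bus when selected; hold the last value otherwise
	var param = Latch.kr(In.kr(dataBus), selected);
	Out.ar(out, SinOsc.ar(param.linexp(0, 127, 110, 880), 0, 0.1));
}).add;

With 128 such synths, you need only the two shared busses (address + data) instead of one bus per synth per parameter.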

I am not sure if this was mentioned (probably yes), but a more homogeneous syntax would be a good point to fix, as discussed here.

Type conversion works through heterogeneous methods:

("some"++"string").asArray; // -> somestring
("some"++"string").asArray.class; // -> String
("some"++"string").as(Array); // -> [ s, o, m, e, s, t, r, i, n, g ]
("some"++"string").as(Array).class; // -> Array

Moreover:

1.asInteger.class //returns Integer
1.0.asInteger.class //returns Integer

[1.0, 1.0].asInteger // returns an array of Integers: [ 1, 1 ]

1.0.as(Integer); // ERROR: Message 'newFrom' not understood.
1.as(Float); // ERROR: Message 'newFrom' not understood.
\a.as(Char); // ERROR: Message 'newFrom' not understood.

However:

[1.0, 2.0].asInteger // -> [ 1, 2 ]
$a.asInteger // -> 97
"a".asInteger // -> 0
[$a, $b].asInteger // ERROR: Message 'asInteger' not understood.

This is really confusing for those beginning with SC, and I guess it is inconvenient/troublesome for advanced users.

A similar problem is related to the OOP capabilities of the base classes. Array2D, for instance, responds to an admittedly small number of methods compared to Array. Why have Array2D at all in such a limited implementation, if most users end up creating 2D arrays by nesting Arrays due to the lack of methods? Thinking of beginners’ usage, I guess many people have spent some time trying to use Array2D until they realized this and went back to nested Arrays. IMO this steepens the learning curve…

Moreover, Array2D syntax seems to be too close to the syntax of arrays of arrays. If this class is kept in SC4, wouldn’t it make more sense to have an alternative syntax? E.g. [1 2 3; 4 5 6; 7 8 9]?

One issue here is that there is not only one kind of type conversion.

If I understand your point correctly, you’re proposing that there is a category of operations called “type conversion” and that all members of this category should behave the same – to homogenize.

But I think the category “type conversion” is itself heterogeneous. For instance, type conversion may be divided into “if-necessary” conversion (where the conversion method prefers to return the receiver if that’s appropriate) and “forced” conversion (always create and return a new object based on the receiver).

In general, method names of the format asSomething are if-necessary, while as(Something) is “forced” (since it’s a synonym for Something.newFrom(x)).
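That equivalence is easy to check directly (reusing the String example from above):

("some"++"string").as(Array); // -> [ s, o, m, e, s, t, r, i, n, g ]
Array.newFrom("some"++"string"); // same result: as(Array) just calls Array.newFrom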

The argument seems to be that the two syntaxes asSomething and as(Something) are very similar, so they should be unified. But they mean different things. You can’t unify operations that are not the same.

A possible solution here would be to remove as(Something) and require users always to write Something.newFrom(x), and reserve asXXX for the if-necessary case – that is, to make the syntax more divergent for different operations. This also makes the problem with as(Integer) explicit – because there is no concept of a “new” Integer (or any atomic type). I guess the best you could do is:

+ Integer {
	*newFrom { |x| ^x.asInteger }
}

… but I can easily imagine someone reading this and thinking “what-the-xxxx is that for?” … and there would be no point anyway to writing Integer.newFrom(aNumber) vs aNumber.asInteger.

I think this can be explained as a case of multichannel expansion. Many places in SC accept either a single number or an array of numbers (multichannel expansion), so it isn’t completely outrageous for numeric type conversion to multichannel-expand also.

Consider this:

f = { |numberOrArrayOfNumbers|
	// now I want all of them to be floats
	if(numberOrArrayOfNumbers.isArray) {
		numberOrArrayOfNumbers = numberOrArrayOfNumbers.collect(_.asFloat);
	} {
		numberOrArrayOfNumbers = numberOrArrayOfNumbers.asFloat;
	};
	... etc...
};

I don’t think anybody really wants to be forced to do that… but if asFloat didn’t multichannel expand, this is what everyone would have to do.
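And because asFloat does multichannel-expand, the branching above collapses to a single call that handles both cases:

3.asFloat; // -> 3.0
[1, 2, 3].asFloat; // -> [ 1.0, 2.0, 3.0 ]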

Fully agreed – Array2D is not a useful class. We should deprecate it.

It’s a worthy goal to address sources of confusion. But, I’m skeptical of the idea that a confusing area in a programming language is necessarily the fault of the language. (For instance, multichannel expansion of numeric type conversion is a little confusing, but it’s also very useful! And, consistent with other idioms in SC.)

Programming language acquisition mirrors natural language acquisition. A child learning English might say “dad goed to the store” and then find out that the general rule “-ed = past tense” doesn’t apply to all verbs. Similarly, when encountering asArray and as(Array), it’s understandable to overgeneralize and collapse them into one category – but they don’t actually belong to one category. This process of overgeneralizing and then refining distinctions is a natural part of learning programming. (The opposite process – undergeneralizing and gradually coming to understand a more general principle – is another natural part of it.)

There is a temptation to say “if I have to refine my understanding, then something is badly designed in the language” but this may not always be true (or, redesigning might introduce different confusion). Lately I notice this kind of thinking in myself (“I’ve been doing this a long time, I know what I’m doing, I shouldn’t suffer confusion so it’s the software’s fault” :laughing: ) and I’m working at recognizing it and questioning whether design could really have avoided confusion.

With that said, it might actually be a good idea to get rid of as(aClass) – another case of syntax sugar sometimes backfiring.

hjh


Can you explain this inconsistency as noted by @fmiramar?

[1.0, 1.0].asInteger // works
[$a, $b].asInteger   // throws error
$a.asInteger         // -> 97

I’m kind of stumped, to be honest.

Edit:

$a.class.findRespondingMethodFor('asInteger') // -> nil

So why doesn’t $a.asInteger complain?

asInteger on a SequenceableCollection is basically implemented as
^this.collect({ arg item; item.perform(\asInteger) });

For some reason, $a.asInteger works, but $a.perform(\asInteger) doesn’t. Don’t know why, but it’s certainly a bug.

Typical methodology for this type of question is to check the byte codes:

{ $a.asInteger }.def.dumpByteCodes

BYTECODES: (4)
  0   40       PushLiteral Character 97 'a'
  1   B0       TailCallReturnFromFunction
  2   D7       SendSpecialUnaryArithMsg 'asInteger'
  3   F2       BlockReturn

At this point, then, the short answer is that SendSpecialUnaryArithMsg handles some types internally in the C++ function, and falls back to a normal method call only if the type isn’t handled. (The longer answer is that opcode 0xD7 = 215 calls handleSendSpecialUnaryArithMsg() and this calls doSpecialUnaryArithMsg(), and here there is a long section for case tagChar:.)

When you .perform(\asInteger), it’s bypassing the special unary arithmetic message and looking for a method to implement.

But my other question about this is, why use .asInteger when a more canonical way to get the ASCII code of the character would be .ascii? (That is, it may not be good that there is an inconsistency about .asInteger, but I’m not sure we can exactly call it correct usage either. Also nobody found this for almost two decades… agreed that this is a rough edge, but it’s also a rough edge that is not sticking into anyone’s eye.)
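For reference, the accessor mentioned above, which goes through a normal method rather than the special unary arithmetic opcode:

$a.ascii; // -> 97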

hjh


Thank you for the explanation!

Definitely agree, I was just curious about how one would go about figuring out the source of the behaviour - what kind of debugging strategies exist for more low-level things like this, basically. I’ve used dumpByteCodes before, but didn’t think of using it here, and I didn’t know how to connect the dots between the byte code output and where exactly in the C++ source these things are handled. Your post led me to PyrInterpreter3.cpp, so that seems at least a bit more clear to me now!

Very little of this is documented – here, I was proceeding on experience (realizing that the operation could be done by a primitive called from a class library method definition, or by the C++ implementation of the opcode, and no other way – and we know in this case that it couldn’t be the former, so it must be the latter).

I guess that’s not a very forward-looking answer but documenting all of that is a massive job and other issues are more pressing.

hjh

The .asArray vs .as(Array) issue is also about the names being too similar.

To illustrate an analogous confusion with a funny history: a friend of mine spent hours trying to find a radians converter in SuperCollider because he thought that .degrad was a bitcrusher/decimator :rofl:

I know this can be too much about personal preference, but wouldn’t .degrad and .raddeg be clearer if renamed to .degrees and .radians? I know that SC conversion methods are named in this style, but the .degrad name resembles Max’s metaphorical naming “convention”, which I find extremely confusing…

That’s quite an understatement. The in-code GUI graphs, which borrow a bit from what some editors like VSCode do with their inline widgets (e.g. for git), are quite revolutionary, IMHO. Unfortunately on Windows, Gibber puts the DWM into some mode in which I can’t take screenshots – they come out all black – or I would have pasted some here…


Yep, love those in-code animations – they could add interest to livecoding for sure.

I wonder if something like that could be hacked together for those of us using nvim …