POC of simplified responders

Hmmm, I don’t know, I didn’t mean to drift so far off topic, it’s not ideal…

I did realise that by writing NamedControl.new(name, values, spec: ControlSpec.new(...)) a little more concisely, and by generating a name array when values is an array, the notation:

{var freq = R.ctl(\freq, [193, 195], 110, 220, 'exp')
;var amp = R.ctl(name: \amp, default: 0.1, minval: 0, maxval: 0.2, warp: 'amp') // with keywords
;SinOsc.ar(freq, 0) * amp}

makes a spiffy Ui with three nicely named controls.

(It also makes a kind of sense to write freq etc. twice, since the control names need to be freq:1 and freq:2 or such, but the variable name is freq, and these are not the same…)
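
A rough sketch of what R.ctl could look like, following the description above; the R class and its exact argument handling are this post’s invention, not an existing API:

R {
	*ctl { |name, default = 0, minval = 0, maxval = 1, warp = 'lin'|
		var spec = ControlSpec(minval, maxval, warp);
		^if(default.isSequenceableCollection) {
			// array defaults get generated names: freq:1, freq:2, ...
			default.collect { |v, i|
				NamedControl.kr((name.asString ++ ":" ++ (i + 1)).asSymbol, v, spec: spec)
			}
		} {
			NamedControl.kr(name, default, spec: spec)
		}
	}
}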

Baby steps…

Also, more off-topic drift below…


If a basic difficulty is that SC has lots of different kinds of Sound objects that don’t “compose” (in the maths sense) then…

I wonder if SC could move towards a more Kyma-ish “unified Sound object model” incrementally, starting from where it already is, without breaking anything along the way.

That is, if a Sound object could be placed in the class hierarchy (where?) and people could slowly teach it how to compose (in the maths sense) different kinds of sound structures?

Maybe starting with rather simple things where the current notations are considered non-ideal.

For instance, if u is a UGen graph (which is a Sound) and f is a Soundfile (which is also, obviously, a Sound), then the nice notation for the composite Sound that mixes them together would be u + f.

But that’s not really the standard SC notation?

A Soundfile would be a sort of BracketedSound, one that needs to do some initial “before” work and some final “after” work.

u + f would be a little like 0.1 + SinOsc.ar, in that the left hand side would be lifted to compose with the right hand side, which has more structure.

I.e. the idea is not to add lots of eccentric syntactic sugar, but to make it a proper semantic relation, the kind SC already has in some cases, but not very uniformly.
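
To make the lifting idea concrete, here is a very rough sclang sketch; every class name is hypothetical (none of this is in the class library), and asSound is the single place where existing objects get lifted:

Sound {
	+ { |other| ^MixSound(this, other.asSound) }
	asSound { ^this }
}

UGenSound : Sound {
	var <func;
	*new { |func| ^super.newCopyArgs(func) }
}

MixSound : Sound {
	var <left, <right;
	*new { |left, right| ^super.newCopyArgs(left.asSound, right.asSound) }
}

+ Function {
	asSound { ^UGenSound(this) }
}

With something like this, u + f only requires that f answers asSound, and that method is where the Buffer-versus-disk question below would be decided.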

Soundfile is more complicated to lift than Float because there are two distinct ways to do it.

Either load the file into a Buffer or stream it from disk.

But usually we just decide on one or the other by looking at the size of the SoundFile?

So the SoundCompiler could do the same thing.

There could be a parameter, say SoundCompiler.soundFileMaxLoadBufferSize, so people could tune things?

But also any existing approaches would still just work, so it’s not essential that it be perfect for all use cases, just acceptable for most.
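
A sketch of that size-based decision; SoundCompiler, BufferSound and DiskSound are hypothetical names (continuing the sketch above), and the default threshold is arbitrary:

SoundCompiler {
	classvar <>soundFileMaxLoadBufferSize = 1048576; // frames, tunable

	*liftSoundFile { |path|
		var sf = SoundFile.openRead(path);
		var frames = sf.numFrames;
		sf.close;
		^if(frames <= soundFileMaxLoadBufferSize) {
			BufferSound(path) // small: load the whole file into a Buffer
		} {
			DiskSound(path) // large: stream from disk via DiskIn/VDiskIn
		}
	}
}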

Also, if r is a filter (which is a Sound processor) then the nice notation is r(u + f), i.e. as we write r(u).

And so on.
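
In current sclang the closest spelling of r(u) is r.(u), i.e. Function:value, and a processor could be lifted in just the same way; ProcSound is as hypothetical as the classes above:

ProcSound : Sound {
	var <func, <input;
	*new { |func, input| ^super.newCopyArgs(func, input.asSound) }
}

// so that r.(u + f) could be spelled, say:
// ProcSound({ |in| RLPF.ar(in, 800, 0.3) }, u + f)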

Initially {...}.compileSound.play could be the notation for making a sound.

Eventually, in the distant future, if it worked out, {}.play could defer to this, since it’d be completely backwards compatible?
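
Hypothetical usage, assuming all of the sketches above (the file path is only illustrative):

(
var u = { SinOsc.ar(440, 0) * 0.1 }.asSound;
var f = SoundCompiler.liftSoundFile("sndfile.flac");
(u + f).play; // play is where the Sound would be compiled and any brackets sent
)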

SC could borrow as much “structure” from Kyma as seems sensible.

If f1 and f2 are SoundFiles, to play one and then the other the nice notation is f1 ++ f2 (also r(f ++ u) etc.).
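
That would be the same lifting pattern once more; SeqSound is hypothetical, like the classes above:

+ Sound {
	++ { |other| ^SeqSound(this, other.asSound) }
}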

But perhaps no-one wants that particular way of composing Sounds and it just doesn’t get implemented.

Sound could just accumulate things people actually want over a possibly very extended time frame.

(Some Patterns are Sounds so perhaps some people would want to write r(p + (f ++ u)) where p is a Pattern, or maybe no one will…)

I’m definitely not the correct person to try and design the fundamentals of this (though I would like to help): firstly, I just don’t know enough of SC (I know a rather narrow part of it moderately well), and secondly, I have no experience of this kind of “surgery” on a big object system.

But perhaps it could be a nice “community” project.

It’d need to be something like that, since it would be lots of work, and it’d perhaps need to tinker with the SCClassLibrary.

If the “interface” for what something needs to do to be a Sound were made very clear, then it might be workable as a distributed project: there’d be a kind of “logical” criterion for whether an implementation of a thing was correct or not.

(This is kind of how classes work in Haskell, which I know a little better. Classes come with a set of rules, which aren’t checked by the compiler, but which instances need to follow. Sometimes there are multiple “correct” ways to write an instance, but at least anyone can look at an instance and see whether it follows the rules or not. The rules are written in comments, and are things like: to be a Functor, map id x == x; to be a Monoid, x + zero == zero + x == x; and so on. There could be these kinds of rules for what it means to be a Sound, so that there would be a way of saying whether someone has made a mistake and what would count as a correction.)
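
For instance, the rules for Sound might read something like the following (these particular laws are only guesses at what the community might settle on):

// hypothetical laws for Sound, unchecked by the compiler:
// mixing is commutative and associative:
//   p + q "sounds the same as" q + p
//   (p + q) + r "sounds the same as" p + (q + r)
// sequencing is associative:
//   (f1 ++ f2) ++ f3 "sounds the same as" f1 ++ (f2 ++ f3)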

On the other hand I know there are already many, many different approaches to this kind of higher level “composable sound” problem, so perhaps it’s both too late and unnecessary to try and make a shared model for it.

Also partly I do think it’s nice having so many different “schools” within SC!

I really don’t know…

Ps. To be clearer about the last part, perhaps Jitlib is already this thing! It’s quite close in many ways. It’s completely possible I just need to learn more Jitlib, that it’s a deep well I’ve not properly fallen into…

The crucial-library Instr and Patch system aimed to be this sort of composable system. For instance, it has a Sample class that could be passed into an argument of a Patch; playing the patch would automatically load the soundfile if needed. As I recall the design was pretty good, though there were really difficult problems with caching resources.

I believe that this was designed in SC2 and was pretty transparent in SC2. Chris made a heroic effort to translate it to SC Server but it wasn’t quite 100% successful.

At least there is some architecture to review.

hjh

At least there is some architecture to review.

Yes, exactly, which is part of what I mean by all this.

It’s not that people can’t do all of these things in SC.

Everyone does, but just in quite different ways.

To get a little bit back to the topic again:

To make an oscillator with a frequency control and a Ui with a slider and a randomise button:

In Kyma would everyone write roughly the same thing, SinOsc(!freq)?

In SC would different people write this differently to one another?

To make a sound that mixes a sine wave and a sound file:

In Kyma would everyone write roughly the same thing, SinOsc + SoundFile?

In SC would different people write this differently to each other?

And I think it’s so interesting because Kyma and SC are almost the same thing.

They’re both Smalltalk systems that compile Smalltalk objects to scheduled DSP graphs for a dedicated synthesiser.

So to the degree there is a difference, and to the degree the difference is a technical thing, it’s a very particular kind of technical thing.

In Kyma it seems a lot of the really complicated, difficult parts of the system are located in a relatively small number of places.

For instance in the Sound object, in the DSP compiler and scheduler, in the interface builder.

Deep system things.

It’s always interesting how things fall out once basic structures are set.

If there is only one Sound object, and if the mixing operator is +, then to mix any two sounds p and q you will write p + q.

If every time you play a Sound you get a “Virtual Control Surface” then that’s how you make Ui’s.

It’s nice there are so many ways people work in SC, but it also seems to mean a certain amount of duplicate work.

It also seems to make it a bit hard for non-experts (like me) to work out how to do things, sometimes even very simple things.

The other thing I think is very interesting is that in interviews with musicians and sound designers who use Kyma, it often arises that people tend to think Kyma is very complicated!

(Interesting in the sense of “is SC even more complicated than Kyma?”)

Also, I know Kyma is not perfect, and I know SC is very (very, very) good, so I hope this isn’t seen as unhelpful criticism!

Having written all of this I also know that at this stage in their respective lives it might well be the case that SC is just how it is, and Kyma is how it is, and there’s not really much to be gained by considering them in relation to one another.

But then again, perhaps they’re both still infants, it’s hard to tell.

I think Game of Life’s Unit Lib quark is also along these same lines: https://github.com/GameOfLife/Unit-Lib (“The Unit Library is a system that provides high level abstractions on top of the SuperCollider language.”)

The question Rohan seems to be asking is whether what’s wanted is perhaps a jiggering of the class tree structure rather than just bolting on another branch. I like the idea of setting the scene for a migration to a composable system. JitLib does go halfway there.

My opinion remains that we need the low level abstractions as they are, but that new structures on top of those would be helpful.

Wrt Scott’s comment about a new design becoming just one more style and a maintenance burden: I think we have a cultural bias against privileging one working method over another. The result is that we have unwittingly privileged a weak set of abstractions! We could try 1/ designing something to meet common needs and 2/ encouraging its use in documentation. There’s never been a centralized attempt at either (so, naturally, nothing has ever succeeded in filling the position of a superstructure that most users should be looking to, first).

hjh

I don’t know if this is helpful for SuperCollider proper, but I did implement the “Bracketed” idea sketched above in a simpler context.

It’s very straightforward and has some nice aspects.

For instance, brackets can be added to existing graphs without changing the graphs themselves, so they still work in whatever context they’re currently used.

Brief notes below, in case anyone’s curious.


ScSynth is controlled by sending instructions in the form of Open Sound Control (OSC) messages. One family of messages allocates, sets and frees Buffers. UGen graphs that utilise Buffers don’t contain the messages to manage them; these messages are ordinarily written and sent outside of the graph context.
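
For comparison, the usual sclang version of this “before”/“after” work, using only existing API (the file path is illustrative):

// "before" work: allocate a buffer and load the file (/b_allocRead)
b = Buffer.read(s, "sndfile.flac");

// the graph itself only refers to the buffer, not to its management
x = { PlayBuf.ar(2, b, BufRateScale.kr(b), loop: 1) * 0.1 }.play;

// "after" work: stop the synth, then free the buffer (/b_free)
x.free;
b.free;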

The bracketUGen function attaches a pair of OSC message sequences to a UGen value. The first sequence is to be sent before the graph the UGen belongs to is started, the other after it has ended. The messages are stored in the UGen type, but aren’t written to the SynthDef file representing the UGen graph. The scsynthPlayAt function reads and sends UGen bracket messages, in addition to the UGen graph itself.

sinOsc ar 440 0 * 0.05 + sndfileIn ("dsk", 0, [0, 1]) "sndfile.flac" Loop

“dsk” is the name of the control holding the buffer identifier; 0 is the default value for the buffer identifier and the identifier used for the bracketing messages; [0, 1] is the list of channels to read, which also sets the number of channels at the diskIn UGen (an empty list reads all channels); and “sndfile.flac” is the name of the sound file to load (searched for at SFDIR and SFPATH).

This particular argument structure makes the above graph equivalent to:

sinOsc ar 440 0 * 0.05 + diskIn 2 (control kr "dsk" 0) Loop

Likewise the graph:

osc ar (bGenSine1Tbl ("tbl", 0, 8192) [1, 1/2, 1/3, 1/4, 1/5]) 220 0 * 0.1

is equivalent to:

osc ar (control kr "tbl" 0) 220 0 * 0.1

https://gitlab.com/rd--/hsc3/-/blob/master/Sound/SC3/UGen/Bracketed.hs


I believe a beginner would be confused right now installing the quark and trying to implement MVB.

Considering that GUI and controllers (MIDI, and all the things a musician needs to build an instrument) are central to the domain sclang addresses, we should have an MVB implementation that is as simple as possible. Ideally, it should be trivial to use, be part of the default library, be used in most help examples, etc.