Why does this { } work this way?

var slider, w;

{
	var x = 5;
	w = Window("test").front;
	slider = Slider(w, Rect(10, 10, 180, 30));
}.defer;


If I remove the .defer, the slider doesn’t appear. Why not?

Another thing I found out was:

f = { arg start, end;
	"start:% end:%\n".postf(start, end);
};
f.set(\start, 900);
f.value(20, 560);



start:start end:900
start:20 end:560

The set seems to be evaluating the inputs like value.

Replying, just to keep my thoughts in place. The { } would indicate a function. Somehow a defer is scheduling this function to be called in some other context, and when it is called, the slider appears.

You are right that the curly braces make a function, so you define a function.
But that is all. In order for the function to do something, it needs to be called.
The defer message calls the function, which causes the code in it to be executed.

A simple function call in sclang looks like this: .() or .value

GUI operations need to be scheduled on the AppClock. .defer (with an optional delay argument) schedules a function on the AppClock. You can also schedule functions on the SystemClock or a TempoClock. .value will evaluate immediately.
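For example (a minimal sketch; the window and slider here are illustrative, not from the original code):

```supercollider
// Called from a routine running on TempoClock, GUI code must be
// deferred to the AppClock; calling it directly from here would fail.
fork {
	1.wait;
	{
		var w = Window("deferred", Rect(100, 100, 220, 60)).front;
		Slider(w, Rect(10, 10, 200, 30));
	}.defer;  // schedules the function on AppClock
};
```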

This is a different phenomenon than the original question.

A function is actually an object, or rather, an instance of the Function class. That class responds to a series of messages. If you look at the source code you can see where set is implemented. Why does it do this? I don’t know. It probably makes sense in some context to call set on a function (there is a comment above it mentioning ControlView), but for the most part, set is just very confusing. This is a problem with object-oriented design: particularly when you have large hierarchies and need polymorphic behaviour, you end up with many methods, and some, as is the case here, make little sense outside some specific context. Basically, set just calls value.
supercollider/Function.sc at aa93ffcf409b0c979b2a2b983b1d75807c6b7ede · supercollider/supercollider · GitHub
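In other words (a minimal sketch of the behavior described above, using a throwaway function):

```supercollider
f = { |a, b| a + b };
f.value(1, 2).postln;  // 3
f.set(1, 2).postln;    // also 3: Function:set just forwards its arguments to value
```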

Two things might help:

  1. Git history shows the Function:set method has been around for a long time (at least 21 years, before even my time).
  2. The comment “// ControlView support” refers to something that’s been dropped ages ago.

So it was added to support a protocol that didn’t end up making the cut (or perhaps it’s an old SC2 protocol that ended up being deleted in SC3), but the method was never deprecated. Perhaps it should be.

FWIW I would agree that Function:set is not useful. Function:get is useful for one case, but probably not the original case for which it was added lol


The IDE tutorials generally use .set in conjunction with a Synth/Bus object, so my assumption was that the semantics are the same across all .set verbs. But yeah, something to look out for, for anyone following the tutorials. Another thing that kind of grinds my goat: are the IDE suggestions context-aware?

I didn’t think git existed pre dotcom boom.

That’s not quite a safe assumption.

// make a list of all classes that answer .set

var stream = Post;

c = Class.allClasses.select { |cl|
	cl.findRespondingMethodFor(\set).notNil
}
.quickSort { |a, b| a.name < b.name };

c.do { |class|
	var m = class.findRespondingMethodFor(\set);
	stream << class.name << ".set";
	if(m.argNames.size > 1) {  // because 'this' is an argName
		stream << "(";
		m.argNames.drop(1).do { |name, i|
			if(i > 0) { stream << ", " };
			stream << name;
		};
		stream << ")";
	};
	stream << "\n";
};

Skimming over this list, it turns out that most of them do loosely follow the same semantics.

EZ GUIs use set for properties, but not for the value.

EZKnob.set(label, spec, argAction, initVal, initAction)
EZNumber.set(label, spec, argAction, initVal, initAction)
EZRanger.set(label, spec, argAction, initVal, initAction)
EZSlider.set(label, spec, argAction, initVal, initAction)
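For example (a hedged sketch; the label, spec and action here are illustrative):

```supercollider
(
var w = Window("EZ", Rect(100, 100, 320, 60)).front;
var ez = EZSlider(w, 300@30, "freq", \freq.asSpec);
// set reconfigures label, spec and action -- not the current value:
ez.set("cutoff", \freq.asSpec, { |ez| ez.value.postln });
)
```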

Rect is similar:

Rect.set(argLeft, argTop, argWidth, argHeight)

Ref just sets the value, without an arg name: Ref.set(thing).
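A quick check of that (assuming the semantics just described):

```supercollider
r = Ref(5);
r.set(10);       // no property name involved: just replaces the value
r.value.postln;  // 10
```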

Now, one could choose to complain about that. But I also think: Would you expect an EZSlider and a synth/group Node to have broadly compatible behavior? Not really.

One of the things that really came home to me in the last couple of years (picking up Pd for teaching a class, and then transferring that knowledge to Max) is that the following expectations simply aren’t realistic:

  • Computer languages or programming environments will be carefully, systematically thought through. (In reality, they are all full of inconsistencies and “gotcha”-s. Go ahead, get me started on all the dumb little rough edges that even the professional development team at Cycling '74 didn’t/couldn’t smooth over.)
  • “I’ve been programming for some time and I think I know what I’m doing” translates into easy, rapid fluency. (I do know what I’m doing in SC, but it took over a year, year-point-5 to feel reasonably competent in Pure Data… and this semester is Max, where I routinely find “now why the bleepity-bleep did they do it this way” types of things.)

The reality is, no matter where you go, you just have to learn to handle the little quirks.

SC 3’s repository was originally hosted on SourceForge using CVS, then SVN (if I recall correctly). The commit logs were brought over into git later.


I agree about the quirks that happen during development and the added technical debt, which tends to surface when expected semantics aren’t consistent.

Here’s another one:

var k = (
	ck: { |self| "what is self:%\n".postf(self) }
);
k.ck;          // self is the event
k[\ck].value;  // self is nil


In this case, k[\ck] should semantically behave like k.ck, but one has self passed into it and the other doesn’t. Pure Data has a similar problem with inconsistencies in how it handles message passing: messages can have list prepended to denote a list, or symbol prepended to specify the type of the message data, but sometimes the downstream object requires this to be stated explicitly.

Nice usage of stream.

I wonder if this would be different (improved) in a language with a full type system? The error in this thread would probably not have occurred if the function had its inputs typed. Not to mention all the benefits to tooling.

I don’t think it should, k[\ck] is shorthand for k.at(\ck) which does what it’s supposed to do: it returns the value of the ck key which in this case is a function. If we didn’t have that, there would be no easy way to access the function itself.
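A quick check of that equivalence, reusing the earlier example:

```supercollider
k = (ck: { "hello".postln });
(k[\ck] === k.at(\ck)).postln;  // true: both return the Function itself
k[\ck].value;                   // evaluates it explicitly: posts "hello"
```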

If you would like to bind self to the Event no matter how you call the function, you could do something like this:

var k = ().make {
	var self = currentEnvironment;
	~ck = { "what is self:%\n".postf(self) };
};


Or perhaps it’s cleaner to bind the function to its environment:

var k = ().make {
	~ck = { "what is self:%\n".postf(currentEnvironment) }.inEnvir;
};


To me, both of these comments reflect different aspects of the delusion: 1/ that technical debt is extra (I think it’s inherent and unavoidable – efforts to reduce it are well-spent but there is no debt-free programming environment anywhere) and 2/ that there exists somewhere a programming technology that will save us from ourselves (type checking has certain benefits but I don’t think it makes self-consistent systems a more likely outcome – there are too many ways to mess up strict typing too).

Speaking only for myself, I chafe against strict type checking, I love the relative freedom of duck typing, and I accept the occasional interface inconsistency (some of them should be fixed of course, but tbh the above Function:set seems relatively harmless) and I also accept that debugging is a somewhat different process without type checking.


Interesting, I haven’t looked at the context/environment part yet.

But, if k[\ck] returns the function, then shouldn’t k[\ck].value have a self passed into it? Because it seems to indicate two different ways of looking at a function, i.e. a function consists of { }, versus a function consists of { } plus an environment. It seems like { } is different, and the semantics with which it operates change.

Yes, I know about operator overloading, but in terms of [ ] and { }, it’s usually consistent.

But yeah, I accept that languages have different ways of dealing with things. Except that this isn’t mentioned in the IDE tutorial, or in most online tutorials.

value should not magically add arguments that are not written.

Edit: The reason for the event prototyping behavior is so that ~xyz syntax inside a prototype’s pseudo-method will access the outside environment… in which case, how do you access the inside environment? Answer: pass it in as an argument. The pseudo-method-call behavior only automates passing it in. My ddwPrototype quark binds ~xyz vars to the inside environment – so there is no need for a self argument. But (there’s always a catch) then you can’t aProto[\abc].value because this doesn’t use the prototype’s environment. That is:

  • aProto.abc(1) (automatically uses the environment, ok)
  • aProto.use { ~abc.value(1) } (also ok)
  • aProto[\abc].value(1) (not ok)

So, choose your poison :wink:


The function is the same, but the usage is different: The self-passing is a somewhat obscure feature of IdentityDictionary and subclasses (eg Event), allowing for event prototyping. The function doesn’t know anything about it.

Events are basically a prototypical inheritance system bolted on top of sclang’s class-based inheritance. Implicitly passing the receiver to the method is pretty common in such systems as a form of syntactic sugar. It is much nicer to write ~obj.add(5) than ~obj.add(~obj, 5).

On the other hand, some languages handle this more explicitly. Lua, for example, has two different ways of calling a function:
a) foo.bar(5) just calls the function bar in foo as is
b) foo:bar(5) automatically passes foo itself as the first argument

This is another one of those weird syntactic inconsistencies.
a.c.(8) vs a.c(8)

As in, if a prototype member is a function, and functions are usually called with f.(x), then why is it a.c(8), when a.c is a function?

(edit-- @Spacechild1 said most of this more clearly above!)

…well it’s not just any function – it’s a function that “knows” the state of its parent Event. Since pseudo-method calls have access to the pseudo-object’s state, the syntax is like a method call rather than like function evaluation. Note that an Event has to have know: true for pseudo-method calls to work this way.

I agree that this is a bit klunky and was clearly sort-of “bolted on” – since sclang can’t generate Class definitions on the fly, this added an important bit of functionality (you can generate and modify Class-like Events programmatically), and once you see it as motivated that way, the syntax does make sense imo.
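A hedged sketch of the know: flag’s effect (the key and argument names are illustrative):

```supercollider
e = Event.new(know: true);
e[\greet] = { |self| "self is: %".format(self).postln };
e.greet;  // pseudo-method call: e itself is passed in as self

f = Event.new(know: false);
f[\greet] = { |self| "never reached".postln };
// f.greet;  // would raise doesNotUnderstand, because know is false
```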


I had this explained thusly:

a[\c] returns a function. Hence a[\c].(). But a.c means call c on a. So a.c(8) implicitly calls the function, and doesn’t require the . of the .(8).
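Putting the two call styles side by side (illustrative names):

```supercollider
a = (c: { |self, x| x * 2 });
a.c(8).postln;  // method style: a is passed as self, x = 8, posts 16
// a[\c].(8);   // function style: nothing is prepended, so 8 would bind
//              // to 'self' and x would be nil (x * 2 then errors)
```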