ProxySpace vs NodeProxy - confused

So I'm new to ProxySpace and I went through the examples. What is the difference between ProxySpace and NodeProxy? I'm a little unsure. Also, I was wondering whether the following three things are different from each other.

Is there a difference between initiating this:

~ex.play 

vs doing this first:

~ex = NodeProxy.new;
~ex.play;

and thirdly, what about doing this first?

p = ProxySpace.push(s);
~ex.play; 

Finally, I'm sure this is related: what do the 1 and 2 refer to in the following Post Window entries?

-> NodeProxy.audio(localhost, 2)
-> NodeProxy.control(localhost, 1)

Thank you!

A NodeProxy is an object that can stand in for an audio or control signal. NodeProxies have two particularly nice properties:

  1. They handle a lot of the boilerplate, resource management, and setup required to, for example, play a Synth, and instead let you just describe what you want (e.g. a sine oscillator: NodeProxy.audio(s, 1).source = { SinOsc.ar(220) }).
  2. They can be transparently swapped out. You can define one synth in a NodeProxy and then swap it out later, and everything it's connected to stays intact (with the new synth) - you can even have it cross-fade (see the sketch after this list).
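
Here is a minimal sketch of that swapping behaviour (the variable n and the particular oscillators are just for illustration):

n = NodeProxy.audio(s, 2);
n.source = { SinOsc.ar(220, 0, 0.1) ! 2 };  // a quiet stereo sine
n.play;
n.fadeTime = 2;                             // cross-fade duration in seconds
n.source = { Saw.ar(110, 0.1) ! 2 };        // swaps in a saw, cross-fading over 2 s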

A ProxySpace - when it's push()ed - intercepts all references to ~environmentVariables and automatically creates NodeProxies for them. This means you can do something like ~reverb.map(\input, ~synth) before you have assigned anything to ~reverb and ~synth yourself; they will be automatically created as (empty) NodeProxies. If you later assign synths to ~reverb and ~synth, they will be mapped automatically.
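
For instance (a sketch; ~reverb, ~synth, and the particular UGens are placeholders, and <<> is JITLib's patching operator, which maps one proxy into another's \in control):

p = ProxySpace.push(s);
~reverb = { FreeVerb.ar(\in.ar(0 ! 2), mix: 0.5) };  // reads from its \in control
~reverb <<> ~synth;   // ~synth springs into existence as an empty proxy, patched in
~reverb.play;
~synth = { Impulse.ar(2) ! 2 };  // assigning a source now sounds through the reverb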

There is also a class called Ndef which is, for the most part, a way that you can get ProxySpace functionality without having to create and push a ProxySpace - it’s a slightly different syntax but achieves basically the same thing. These two are equivalent:

// with a pushed proxyspace
~tone = { SinOsc.ar(220) };
~tone.play;

// with Ndefs, no need to push
Ndef(\tone, { SinOsc.ar(220) }).play;

When you have pushed a ProxySpace, every time you refer to an ~environmentVariable, a NodeProxy is automatically constructed. So yes, in this case setting ~ex = NodeProxy.new by hand wouldn't really do anything extra - the proxy is created anyway the moment you call ~ex.play.
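
You can see this in action (a sketch; assumes the server is booted):

p = ProxySpace.push(s);
~ex.class;  // -> NodeProxy: created automatically on first reference
~ex.play;   // plays silence until a source is assigned
~ex = { PinkNoise.ar(0.1) ! 2 };  // now it makes sound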

If you don't have a ProxySpace pushed, then ~environmentVariables behave exactly like normal variables. ~ex.play would call play on whatever you previously set ~ex to - nothing fancy, no automatic creation of a NodeProxy. A ProxySpace only intercepts environment variables while it is pushed.
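
For example, outside any ProxySpace (a sketch):

~ex = { SinOsc.ar(220, 0, 0.1) };
~ex.class;  // -> Function: a plain assignment, no NodeProxy involved
~ex.play;   // works, but via Function:play - a one-off Synth, no proxy features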

The 1 and 2 are the number of channels the NodeProxy has. Generally, the default for audio NodeProxies is 2 (i.e. stereo), and for control NodeProxies it's 1 (since you'd use these for parameters and the like). The channel count is set automatically by whatever you put into your NodeProxy / ~environmentVariable. If you did ~sine = { SinOsc.ar(100) }, it would be a single channel, because you've only described one channel of audio: a single sine tone.
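
A quick illustration (a sketch, with a ProxySpace pushed; numChannels reports the proxy's channel count):

~sine = { SinOsc.ar(100) };
~sine.numChannels;  // -> 1: a single channel of audio
~pair = { SinOsc.ar([100, 101]) };
~pair.numChannels;  // -> 2: the array expands to two channels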


Thanks for your answer! Just to clarify:

Can't you do these things without a ProxySpace anyway? I mean, if I set them up as NodeProxies myself, without creating a ProxySpace, do they work the same way? When do you need a ProxySpace and when not?

Thanks again!

You don't need a ProxySpace - it just makes the NodeProxy syntax simpler and more readable. All three of the following are essentially equivalent:

// 1. plain NodeProxy, managed by hand
~tone = NodeProxy.audio(s, 1);
~tone.source = { SinOsc.ar(220) };
~tone.play;

// 2. pushed ProxySpace
p = ProxySpace().push;
~tone = { SinOsc.ar(220) };
~tone.play;

// 3. Ndef
Ndef(\tone, { SinOsc.ar(220) }).play;

One thing that was initially confusing to me is this very fact: a pushed ProxySpace is a new environment, and you CANNOT use a ~ variable without it becoming a proxy. You can still use the interpreter variables a-z (probably minus p and s, at least), and ProxySpace does simplify creating new proxies and is much less verbose.

After struggling a bit, I personally decided that Ndef works better for most of my needs. It requires more typing, but it lets you keep using normal ~ environment variables, which seems to keep things more readable (at least for me). I'm not overriding anything (I hope) that @scztt has said - he's the real source - but hopefully this clarifies something that took me a while to understand. Once it makes sense, it's pretty simple. A small sketch of the point follows.
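
For example (a sketch; the variable and Ndef names are just placeholders):

~freqs = [220, 330, 440];  // a plain environment variable, untouched by JITLib
Ndef(\tone, { SinOsc.ar(~freqs.choose, 0, 0.1) ! 2 }).play;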

I will add that I think one benefit of using Ndefs is that you can still use much of the learning code you'll find elsewhere along with some JITLib techniques. That really helped me to understand. But as always, there is more than one way to do things - I had special needs. If you are just live coding, ProxySpace seems to work better. If you need certain other things, like routing track outputs, I find Ndefs more usable. And I believe you can technically use Ndefs inside a ProxySpace (but why?), though not the reverse.

One other note that started to make sense to me, @askevob: much of the JITLib toolkit is a set of abstractions for live coding. Much like patterns are abstractions over complex events, both ProxySpace and Ndef simplify what you have to type to get sound out, just in different ways. Both greatly reduce the typing needed, and both allow live re-evaluation of code so that changes land in clock time. I think of this a bit like the clip launchers in Ableton or Bitwig - not a perfect analogy, but similar in that you can quantize changes as well (see the sketch below).
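
A minimal sketch of quantized changes (the name \beat and the rates are just examples):

Ndef(\beat).quant = 4;                       // changes land on 4-beat boundaries
Ndef(\beat, { Impulse.ar(4, 0, 0.2) ! 2 }).play;
Ndef(\beat, { Impulse.ar(8, 0, 0.2) ! 2 });  // the swap waits for the next boundary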

Thanks everyone. Good points. I found this tutorial which was also helpful in describing the distinction.

P.S. How do I mark this as answered?
