UGen to signal on the client side

Currently I'm writing enveloped UGens to buffers. By enveloped UGens I mean UGens that have a finite output. However, I constantly have to wait until the writing process is done before I can manipulate the sounds. So I wondered if there is any way to convert an enveloped UGen to a signal or collection, without having to go through every sample of the UGen at sample rate on the server.

I'm pretty sure this isn't possible in real time, or rather, it's not possible to run real-time and non-real-time processes at once. Perhaps someone else might know more, as the server does allow something to be run in the background, but I don't think this is exposed in sclang, only in the C++ library.

However, you could make these ahead of time (assuming they are fixed) and save them as audio files, loading them in as needed. Would that work?

The reason I would like to do this is, for example, to create a sine perc and instantly play it back in reverse.
With the buffer method I'd have to wait the length of the sine perc in seconds before the buffer writing is finished, and only then can I play it back in reverse. This period of time is what I'd like to bypass in some way.
I could save them as audio files, but then I lose flexibility in synthesis.

UGens calculate in the server. So there is no way to run UGens “faster than light” in the language.

If the server is running in real time, then it can’t calculate “faster than light.” You could prepare a buffer in advance but you wouldn’t be able to start preparing it at the moment when you need it.

A non-real-time server can run faster than light, but the server bootup is a bit heavy – it’s not instantaneous.

Or you could reimplement specific UGens’ logic in sclang (waste of effort IMO).

Basically the idea isn’t practical in SC.



You are thinking about the problem the wrong way around.

Instead of reversing the envelope, just make a flexible one…

(
var time = 0.3;
var ratio = 0.99; // try me at 0.01
var env =
	levels: [0, 1, 0],
	times: time * [ratio, 1 - ratio],
	curve: [10, -10]
);
{ * env.ar(doneAction: 2) * 0.2 }.play;
)

I see, so there is really no way to emulate things like .toArray, .asSignal for UGens? I guess I got hopeful because sometimes functions or UGens that take another UGen as a parameter denote that param as 'inputArray'. I also sometimes hear the term UGen graph. All these terms make it seem that UGen output can be predetermined on the client side. But that's probably because I don't understand these concepts or terms.

You're right in this case, but for more complex sounds this wouldn't quite cut it. For example, when reversing a reverberated sound, just reversing the envelope will not reverse all elements of the sound. But if there is really no way of bypassing the writing period of UGens like BufWr, RecordBuf and the like, I'd just have to take that amount of time into account every time I reverse a finite UGen output.
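For reference, here's a minimal sketch of that "record, wait, then reverse" workflow being described. The buffer length, envelope times and the scheduling margin are illustrative, not a recommendation:

```supercollider
(
// record one second of a sine perc into a buffer, then, only after the
// recording has finished, play the buffer backwards (rate: -1)
var dur = 1;
b = Buffer.alloc(s, (s.sampleRate * dur).asInteger, 1);
{
	var sig = * Env.perc(0.01, dur - 0.01).ar;, b, loop: 0);
	Silent.ar
}.play;
// the unavoidable wait: the reversed copy is only valid after 'dur' seconds
SystemClock.sched(dur + 0.1, {
	{, b, rate: -1, startPos: b.numFrames - 1, doneAction: 2) }.play;
	nil
});
)
```

The `dur + 0.1` in the schedule is exactly the waiting period the question is about; there's no way to shrink it on a real-time server.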

This is maybe a bit off topic, but I've once played around with FDNs, and probably this is just a matter of the reverb implementation.

(
// householder matrix
var getHouseholder = { |n|
	Matrix.fill(n, n, { |rows, cols| 2 / n - if(rows == cols, { 1 }, { 0 }) }).asArray;
};

// hadamard matrix
var getHadamard = { |order|

	var n = order.log2 - 1;

	var h2 = [
		[ 1,  1 ],
		[ 1, -1 ]
	];

	var kronecker = { |a, b|
		a.collect { |x|
			x.collect { |y| b * y }.reduce('+++')
		};
	};

	var matrix = h2;
	n.do {
		matrix = kronecker.(matrix, h2);
	};
	matrix * sqrt(order).reciprocal;
};

var matrix = { |rotate|

	var matrix = getHadamard.(4);
	//var matrix = getHouseholder.(4);

	var angle = rotate.wrap(0, 1) * 2pi;

	var sine = angle.sin;
	var cosine = angle.cos;

	var givens2x2 = [
		[ cosine, sine.neg ],
		[ sine,  cosine ]
	];

	var kronecker = { |a, b|
		a.collect { |x|
			x.collect { |y| b * y }.reduce('+++')
		};
	};

	kronecker.(givens2x2, matrix);
};

var primePowerDelays = { |delays|
	(delays.collect{ |delay, i|
		var prime = i.nthPrime;
		prime ** ((log(delay) / log(prime)) + 0.5).floor;
	}).asInteger / s.sampleRate;
};

var delayLengths = { |n, dmin, dmax|
	var nm1 = n - 1;
	var d = dmin * ((dmax / dmin) ** ((0..nm1) / nm1));
	(d * s.sampleRate).round(1.0).asInteger;
};

SynthDef(\zigzag, {

	var ffreq = \;
	var fq = \;
	var feedback = \;

	var lfo =\ * (1 + ( * 0.5)));

	var rotate = \ + lfo.lincurve(-1, 1, 0, \, \;
	var delTimesMod = \ + lfo.lincurve(-1, 1, 0, \, \;
	var reverbTime = \ + lfo.lincurve(-1, 1, 0, \, \;

	var sig, inSig, localOut, decayCoef, delTimes, order;

	matrix = matrix.(rotate);
	order = matrix.size;

	//inSig =\, 2);
	//inSig.collect { |it| it.source }.debug(\input);

	inSig = * Env.perc(0.01, 1).ar;

	sig = inSig +;

	// multiplying signals by matrix
	sig = matrix.collect({ |row, i| row.collect({ |item, j| item * sig[j] }).sum });

	delTimes = primePowerDelays.(delayLengths.(order, 0.03, 0.06).debug(\delTimes));
	delTimes = delTimes * delTimesMod;

	decayCoef = 0.001.pow(delTimes * reverbTime.reciprocal);

	localOut = order.collect({ |i|[i], delTimes[i], delTimes[i] - });

	localOut = order.collect({ |i|[i], ffreq, fq, feedback) * decayCoef[i] });

	sig = localOut.size.div(2).collect({ |i|
		i = i * 2;
			localOut[i] * -1,
			localOut[i + 1]
		);
	});

	sig =, ffreq, fq, feedback.neg);

	sig = sig * sqrt(reverbTime).reciprocal;

	sig = Mix([
		inSig * \,
		sig * \
	]);

	sig = sig.tanh;;\, sig);
}).add;
)

Synth(\zigzag, [

	\ffreq, 8000,
	\fq, 0.5,
	\feedback, -3,

	\modFreq, 0.5,

	\rotate, 0.75,
	\rotateModAmount, 2,
	\rotateModCurve, 4,

	\size, 0.1,
	\sizeModAmount, 0.1,
	\sizeModCurve, 0,

	\reverbTime, 5,
	\timeModAmount, 0,
	\timeModCurve, 0,

	\dry, 0,
	\wet, 1,

	\out, 0,
]);


I think what you are asking for would be an important addition to SC.

As a hack, you could maybe use @scztt's OfflineProcesses quark to write to a temp file and then read that.

You'll probably need his Deferred quark too, to rig up some kind of class.

That’s referring to an array of channels, not an array of samples.

Oversampling would be more general, and would at least allow speeding up this type of pre-calculation.

The design spec would need to be carefully considered… a million ways that design errors could make scsynth a lot less stable.


So whilst the marked solution does provide good advice, it doesn’t quite solve the problem with an example.

Below is an example that uses an NRT (non-real-time) server to render the audio very quickly. Whilst there is bootup cost, it seems negligible as it takes 0.09 seconds on my machine to render 2 seconds of audio.

~render_to_buffer_offline = {
	|numoutputs, synthDef, argsArray, duration, outputFile, action = ({})|
	var condVar =;
	var r_ = Routine({
		var server = Server(\nrt,,;
		var score = Score([
			[0.0, ['/d_recv', synthDef.asBytes ] ],
			[0.0, (x = Synth.basicNew(, server, 1000)).newMsg(args: argsArray)],
			[duration, x.freeMsg]
		]);
		score.recordNRT(
			outputFilePath: outputFile,
			headerFormat: "wav",
			sampleFormat: "int32",
			options: server.options,
			duration: duration,
			action: { server.remove; condVar.signalOne }
		);
	}).play;
	var wait_ = condVar.wait;
	var buf =, outputFile, action: action);
	buf
};

s.waitForBoot {
	var startTime = Clock.seconds;
	var path = "~/Music/nrt-help2.wav".standardizePath;
	var buffer = ~render_to_buffer_offline.(
		numoutputs: 1,
		synthDef: SynthDef(\a, {
			var snd =;
			var env = Env.perc(0.01, 1.99).ar;, snd * env)
		}),    // note: there is no .add here!
		argsArray: [],
		duration: 2,
		outputFile: path
	);
	var endTime = Clock.seconds;
	~x = {, buffer, rate: -1, loop: 1) * -20.dbamp }.play;
	postf("loaded audio in % seconds\n", endTime - startTime);
};

There are many caveats with this approach. For example, if you wanted to load a buffer you would need to rewrite the function (see below), and if you wanted to use a large SynthDef this would fail; see the NRT docs for that.

An example using a buffer, lifted and modified from the docs…

(
var server = Server(\nrt,;
var bufnum = server.bufferAllocator.alloc(1);
a = Score([
	[0.0, [ 'b_allocRead', bufnum, (Platform.resourceDir +/+ "sounds/a11wlk01.wav").asString, 0, 0 ] ],
	[0.0, [ '/d_recv',
		SynthDef(\NRTsine, { |out, freq = 440|
			var buf =, bufnum);
			var env = Env.perc(0.01, 2 - 0.01).ar;, buf * env )
		}).asBytes
	] ],
	[0.0, (x = Synth.basicNew(\NRTsine, server, 1000)).newMsg()],
	[2.0, x.freeMsg]
]);
a.recordNRT(
	outputFilePath: "~/Music/nrt-help2.wav".standardizePath,
	headerFormat: "wav",
	sampleFormat: "int16",
	options: server.options,
	duration: 2,
	action: { "done".postln }
);
)
All of this being said, you should not need to do this in most cases. Usually it is better to rethink the sound processing. However, in the situations where it is needed, I don't think it's that hard to set up, and the example above hopefully provides a good starting point.


Usually this refers to a multichannel array of UGens, e.g. ` ! 10)`, which is an array of 10 sine waves.
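To make that concrete, here's a quick sketch of multichannel expansion (the mix-down and amplitude are just for illustration):

```supercollider
// an array of 10 frequencies expands to an array of 10 SinOsc UGens;
// each element of 'inputArray' is a channel, not a sample
{ ! 10)) * 0.05 }.play;
```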

This refers to how the UGens are connected. Each UGen is a vertex, and each connection an edge; see Graph (discrete mathematics) - Wikipedia.
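If it helps, you can inspect that graph from sclang: `dumpUGens` prints each vertex in the order the server will calculate it. The SynthDef below is just an arbitrary example:

```supercollider
SynthDef(\graphDemo, { |out = 0, freq = 440|
	var osc =;           // one vertex in the graph
	var env = Env.perc(0.01, 1).ar;    // adds EnvGen (another vertex), osc * env);           // edges: osc and env feed into Out
}).dumpUGens;
```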

Technically, some of them are determinable (like a sine wave), but not all of them, therefore this isn't implemented. It isn't possible for the computer to know ahead of time what you will sing into a microphone, and therefore what the value of SoundIn will be, so no attempt is made to compute the value of any UGen ahead of time. Personally, I'd like to see this changed a little, with a generic language-side version of some of the UGens implemented… but that is beyond this thread.

Hopefully that clears some things up; if not, ask away!



O/T but I’d love to hear more about this idea in another thread sometime!

This could be fine as an individual project (quark), but (IMO) not an ideal candidate for the core distribution.

p = UGen.filenameSymbol.asString.dirname;

UGen.allSubclasses.count { |class|
	class.filenameSymbol.asString.dirname == p
};

-> 416

Now, you wouldn’t have to do all 400. But to reimplement even 25% of them is a hundred new classes, some of which might not be simple. So the development effort would be large, for signal processing that already exists in the server.

In 20 years, this feature (of pre-calculating signals Right Now) just hasn’t been hotly demanded. So the benefit:cost ratio doesn’t look good.

But… a significant, general feature which many audio environments have and SC does not is: oversampling. Pure Data has [block~]; we have nothing similar. If we had oversampling, then you could multiply frequencies by n (and divide time values by the same n), and render a synth at n times oversampling. If n = 32, then you could get 2 seconds of audio in 63 ms – while not instantaneous, it’s a lot better than waiting 2000 ms for the same result – and you could write oversampling anti-aliasing oscillators, etc. etc. I’m willing to bet that this would be less of a development effort than copying hundreds of UGens’ logic, and it would cover a larger area of potential cases.
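To illustrate that arithmetic with what's available today: you can run a synth n times faster in real time, record for duration/n seconds, and play the buffer back at rate 1/n. Note this is the quality-wise opposite of oversampling (the playback only has 1/n of the original bandwidth), which is exactly why a real oversampling feature would be the better tool. The numbers below are illustrative:

```supercollider
(
// run the synth 8x faster: 2 s of "musical time" recorded in 0.25 s of real time
var n = 8, dur = 2;
b = Buffer.alloc(s, (s.sampleRate * dur / n).asInteger, 1);
{
	// frequencies multiplied by n, time values divided by n
	var sig = * n) * Env.perc(0.01 / n, (dur / n) - (0.01 / n)).ar;, b, loop: 0);
	Silent.ar
}.play;
// after dur/n seconds the buffer is ready; rate 1/n restores the original pitch
SystemClock.sched((dur / n) + 0.1, {
	{, b, rate: n.reciprocal, doneAction: 2) }.play;
	nil
});
)
```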

Again, just my opinion, but I’ve seen in the past where we were sometimes not careful about thinking through features, ended up implementing something not quite mature, and then had to continue to maintain it. That’s why I think “language-side UGens” would be fine as a quark, but not necessarily for core.


I'm not so sure about this. An implementation like this…

SinOsc.fn(samplerate: 1000, in: [...], freq: [...])
OnePole.fn(samplerate: 500, in: [...], coef: [...])

Would allow for auto generating interfaces like this…

	dur: 0.01,
	freq: SinOsc.p(freq: Pseries(...))
)

Where the sample rate is deduced from dur. I think I've seen a quark that implements such UGens as patterns, and I've definitely rolled my own one-pole as a pattern before. So I think these would have implementations beyond just preprocessing buffers.
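As an illustration of what a language-side "UGen as pattern" can look like, here's a minimal one-pole lowpass written as a pattern transform. The `~onepole` name and coefficient convention are mine, not an existing quark's API:

```supercollider
// y[k] = (1 - coef) * x[k] + coef * y[k-1], applied lazily to a pattern
~onepole = { |pattern, coef = 0.9|
	var last = 0; // note: this state is shared if the result is streamed twice
	pattern.collect { |x| last = ((1 - coef) * x) + (coef * last) };
};

// a step input smoothed by the filter:
~onepole.(Pseq([0, 1, 1, 1], 1), 0.5).asStream.nextN(4).postln;
// values: 0, 0.5, 0.75, 0.875
```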

Sure, most UGens won't have implementations (and couldn't), but I don't think that's a problem. There could easily be a subclass (DeterministicUGen, perhaps) that only certain UGens inherit from and implement. I'm just thinking of the ones that already have well-defined, stable DSP algorithms.

It's not uncommon for C++ UGen code to have a method that does its calculations just on numbers (extracting all the signal-rate conversions). I wonder if that could be linked up to a _Method primitive and made easy to generate in sclang. It would be a bit of effort to get the linking working, but you could create a C++ interface where you write it once and it becomes available in both the server and the language.

Admittedly, this is quite a lot of work, but I think it creates a more streamlined language with fewer features for the user to worry about, and once they have learnt about UGens it is easier to apply them elsewhere/everywhere. Which is what this thread is ultimately about: how SuperCollider has too many ways of doing the same thing, and how some approaches only work in certain contexts which often seem arbitrary to a beginner.

@jordan has already given an example, here’s another one that takes a UGenGraphFunction as argument:

The other factor is that DSP in the language will block the interpreter – dkmayer’s link to the case of “I want to plot 120 seconds, right now” would certainly lock up the interpreter for a noticeable fraction of a second at least. This is fine if you aren’t sequencing anything, but not fine if you’re running anything sensitive to timing.

In any case, my point is that I’m skeptical of this as a core feature (while I and I’m sure others sorely lack oversampling).

With regard to unifying patterns and UGens (reducing the number of ways to do things), that would be worth discussing for a hypothetical SC4, but in SC3.x, it would only be adding yet another way to do something (unless you delete old ways and break compatibility).