Object Attributes

Let me catch up on the threads suggested by you and @smoge. I think I can get some good insights there.

thanks :slight_smile:

This is a good implementation but it has the problem I was pointing out here:

My main problem is that I see computer music as a stream of numbers, and the timing should always be sample-based to ensure synchronicity. But SuperCollider’s architecture makes this impossible, because sending a message and retrieving the reply delays the information. Or, in your example, you would have to trigger the SendReply every sample to be sure the information you are getting is the one you wanted. And of course this is not possible, because you would saturate the server with thousands of requests.

Because you have to send a reply to read the values you are tracking, if you want the actual value at the moment of the request, you would need a SendReply every sample to have accurate information. For example, in your implementation the modulation happens faster than the SendReply trigger, so the information about this frequency is discretized at the trigger frequency, and all the points in between are lost.
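To illustrate the discretization (a minimal sketch, with hypothetical names like \discretized and '/modValue'): SendReply fires at a fixed trigger rate, so the language only ever sees the modulator at those instants, never at every audio sample.

(
SynthDef(\discretized, {
	var mod = SinOsc.ar(0.1, 0, 10, 440);   // modulator sweeping 430..450 Hz
	var sig = SinOsc.ar(mod) * 0.1;
	// sample the modulator only 10 times per second:
	SendReply.kr(Impulse.kr(10), '/modValue', mod);
	Out.ar(0, sig ! 2);
}).add;

OSCdef(\modWatcher, { |msg|
	msg[3].postln;   // everything between two triggers is lost
}, '/modValue');
)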

But apart from this, this is a good solution :slight_smile:

In this post I have explained three ways of subdividing ramps:

1 Like

You don’t want to try to process audio rate data in sclang.

Sclang is not optimized to process streams of data with tens of thousands of data points per second. Scsynth is.

If you went on the Max or Pd forums and asked how to get every audio sample into the control layer without dropping a sample, they’d tell you the same thing I’m saying here: that it isn’t recommended. (It may be possible in Pd actually, but I bet you’d get a ton of audio dropouts while the control layer is struggling to keep up with the high rate of data.)

James McCartney (creator of SC) once explained to a user that if your control mechanism runs at a higher bandwidth than the output signal, then you need to rethink the control mechanism. The thing you’re asking about here is the same, but in reverse.

There are two questions in this thread, which are getting confused together. One is how to .trace and get the data; .trace discretizes, and this question has been asked and answered.

You’ve rejected the solution because of question 2: how to subdiv~. Dietcv has just pointed you to a post that explains how to manage ramps without needing to get audio rate data into the language. Strongly suggest that you read this carefully, because it is much better to handle this problem fully in the signal domain, where “attributes” are not necessary. You do not need “attributes” to do subdiv (and Max doesn’t use “attributes” to implement subdiv either).
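As a minimal sketch of what handling this in the signal domain can look like (one of several approaches, not dietcv’s exact code): derive sub-ramps from a master phasor with multiply-and-wrap, and detect the wrap point sample-accurately.

(
{
	var master = Phasor.ar(0, 1 / SampleRate.ir, 0, 1);   // 1 Hz master ramp
	var subdiv = (master * 4).wrap(0, 1);                 // 4 synced sub-ramps per cycle
	var trig = HPZ1.ar(subdiv) < 0;                       // sample-accurate wrap detection
	var env = EnvGen.ar(Env.perc(0.001, 0.05), trig);
	SinOsc.ar(440) * env * 0.1 ! 2;
}.play;
)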

hjh

I am reading the dietcv thread and it is what I’m looking for. I think I will dive deep into all his posts.

In my view, the main subject of this thread (beyond the title) is how to get sample-accurate information about the UGens, nodes, functions or synths within your performance. The attribute idea came up because I actually wanted to subdivide Phasor to have different clocks synced within my program. So my idea was to have an attribute value that, when you ask the server, returns the value of that attribute at the moment you ask.

Here the solutions are backwards: we are “sampling” this value at a certain rate with SendReply and then reading these samples to get the attribute. But by sampling at a certain rate, you lose granularity of the variable.

In any case, I think dietcv has expertise in handling these “quantum” measurements, haha. I will study his posts to get a clearer understanding of this topic.

thanks :slight_smile:

Actually your solution is backward (sorry for being blunt).

You have access to the phasor’s frequency input by looking at the input – not by asking the phasor to invent another output for information that’s available somewhere else.

See the Pd screenshot I posted earlier – multiple cables out of the frequency input. The frequency is right there – you don’t need a new feature to get it.
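In SuperCollider terms, the same idea is just keeping the frequency in its own variable (a minimal sketch; MouseX stands in for any frequency source):

(
{
	var freq = MouseX.kr(1, 10);                       // the phasor’s frequency input
	var ramp = Phasor.ar(0, freq / SampleRate.ir, 0, 1);
	// freq is available right here for anything else, no round-trip needed:
	SinOsc.ar(freq * 100) * ramp * 0.1 ! 2;
}.play;
)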

Audio rate polling into the language is not a good idea.

hjh

Um… The two code snippets I made were conceived based on your comment, which I quoted with each snippet.

I am writing my experience without having read @dietcv’s post and the one linked by @jamshark70 (sorry):

I tried three or four times to extract values at the audio sample rate, but ran into problems each time. Even displaying some values continuously could affect the application.
Continuously reading real-time values from the audio synthesis at the audio sample rate is not recommended. To do this without problems, the CPU and GPU would need to be much faster than current hardware.

If you don’t need a sclang-side process for these values, I would recommend building a server-side process using a large UGen network. This is the same concept as your first code in this thread. You can see values when you need them using .poll, SendReply or other tools. Using .poll is a good way to monitor values, but its default rate is also not the audio sample rate.
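A minimal sketch of that kind of monitoring (the label \modFreq is just illustrative):

(
{
	var mod = SinOsc.ar(0.1, 0, 10, 440);
	mod.poll(10, \modFreq);          // post the modulator’s value 10 times per second
	SinOsc.ar(mod) * 0.1 ! 2;
}.play;
)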

If you need a sclang-side process or control while the scserver (scsynth or supernova) is producing sound, it is not recommended to continuously get values at the server-side audio sample rate.

But what if you want to modulate this input frequency? Then the input will no longer work, because it depends on an oscillator. The thing is that I don’t want audio-rate polling; I want sample-accurate communication with the server, meaning that when I ask, I get. No need for audio-rate polling, just a simple ask and return. I’m aware that there would be a delay between the request and the response, but within this architecture I think that is the best approximation.

Example:
Let’s say we have an oscillator, modulated by another oscillator:
x = {SinOsc.ar(SinOsc.ar(0.1, 0, 10, 440))}

In this case we can’t know the actual value of the input frequency, because it’s being modulated by another one; it would be somewhere between 430 and 450.

If I want to apply some logic to the output of that function, let’s say if x.freq > 445 { “trigger something” }, then x.freq would return the number that x is outputting at the moment of the request, to perform the logic operation (no need for audio-rate polling, just a single request). But “if” doesn’t work like that: it returns true when x == 1. So in this case it would trigger all the time, because the frequency of x is always greater than one.

In any case, I see that there is a workaround for this “attributes” thing. I’m going to check all of dietcv’s posts to understand how he implements his sample-based calculations.

thanks

1 Like

The SC client-server design sacrifices sample-accurate communication in exchange for the flexibility of having multiple clients on one server, or different languages controlling the server. James McCartney mentions in his initial paper on SC Server that he is aware this is a trade-off, and he made it for other reasons.

If you absolutely need sample-accurate communication, then SC in its current form might not be the right platform for you. Maybe ChucK? In ChucK you can 1::samp => now in your control loop – this will be slow (high CPU use), but sample accurate.

BUT… there may be other ways to accomplish what you’re after, that don’t depend on sample-accurate communication, and you might be overlooking those approaches because of a single-minded fixation on one and only one methodology.

This is why Jordan asked at the beginning what specifically you’re trying to do. Your answers have been a bit of a moving target on this, so it’s difficult to proceed.

Let’s say we have an oscillator, modulated by another oscillator:
x = {SinOsc.ar(SinOsc.ar(0.1, 0, 10, 440))}

In this case we can’t know the actual value of the input frequency, because it’s being modulated by another one; it would be somewhere between 430 and 450.

Correct – a feature of DSP thinking is that it focuses on abstract characteristics of large data sets, rather than on specific data. It’s not only that you can’t know the actual value, but that most of the time, you don’t need to.

If I want to apply some logic to the output of that function, let’s say if x.freq > 445

First, consider not embedding the modulator into the carrier.

{
	var xfreq = SinOsc.ar(0.1, 0, 10, 440);
	x = SinOsc.ar(xfreq);
}

This is why we keep saying that attributes are not needed. You can choose to make the frequency available in its own variable. Problem solved. Now let’s move on. (Note that, in Max or Pd, you can’t bury a modulator inside another processor.)

Now, in the SynthDef function, you can write xfreq > 445 and this will produce an audio rate signal that is 0.0 when false, and 1.0 when true. The transition from false to true is fully sample accurate and you can use this for sample accurate triggering of envelopes, or resetting of ramps, or pulse counting, or many other signal-domain operations – same as in Max or Pd (e.g., in Max, a threshold [>~ 445] can serve as a gate for an adsr~).

By itself, this doesn’t support all of the language-side logic. To do that, you need a bridge between the server and the language. This is SendReply → OSCFunc (or OSCdef). Max and Pd also need bridge objects for this: [edge~] or [threshold~] respectively.
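A minimal sketch of that bridge (the names \thresholdWatch and '/overThreshold' are illustrative): the comparison runs at audio rate on the server, and only its false-to-true transitions cross over into the language.

(
SynthDef(\thresholdWatch, {
	var xfreq = SinOsc.ar(0.1, 0, 10, 440);
	var x = SinOsc.ar(xfreq);
	// SendReply fires once per false-to-true transition of the comparison:
	SendReply.ar(xfreq > 445, '/overThreshold', xfreq);
	Out.ar(0, x * 0.1 ! 2);
}).add;

OSCdef(\overThreshold, { |msg|
	"xfreq crossed 445: %".format(msg[3]).postln;   // language-side logic goes here
}, '/overThreshold');
)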

DSP thinking is functional rather than imperative. You seem to be stuck in an imperative mode. Dietcv’s thread suggests a lot of ways to break out of imperative-code style.

hjh

1 Like

@Dasha I think the code structure you want might look like this. Pitch works at control rate, and its default rate is s.sampleRate / 64. Even this rate produces unwanted noise at the transition; to reduce it you need lag. Audio rate is needed to produce sound, but for controlling other sounds it does not always give the best result.

(
{
	var lowFreqM = 430;
	var highFreqM = 450;
	
	// var modulator = SinOsc.ar(0.1, 0, 10, 440);
	var modulator = SinOsc.ar(0.1, 0).range(lowFreqM, highFreqM);
	
	var carrier = SinOsc.ar(modulator) * 0.1;
	
	var freqModulator = Pitch.kr(carrier, lowFreqM)[0].lag(0.7).poll; // you need lag to get a better transition. The lag time would be different for each system. 0.7 gives me a stable sound change.
	// var freqModulator = Pitch.kr(carrier, lowFreqM)[0];
	// var freqModulator = K2A.ar(Pitch.kr(carrier, lowFreqM)[0]);
	
	var triggered = WhiteNoise.ar * 0.05;
	var which = freqModulator > 440;
	
	Select.ar(which, [carrier, triggered]);
	// SelectX.ar(which, [carrier, triggered]);
	// SelectXFocus.ar(which, [carrier, triggered]);
	// XFade2.ar(carrier, triggered, which.linlin(0, 1, -1, 1));
	// if(which, carrier, triggered)
}.play;
)

1 Like

@prko Nice!!! Hahaha, you made it! This is a really good solution, thanks man!

Yes, reading dietcv’s posts, I agree that I have to change paradigms or I will be fighting the architecture forever.

thanks.

1 Like

Please don’t thank me, thank the other long-time SC users, including @jamshark70. This code snippet comes from learning from many users.

3 Likes

You are right, thanks everyone for your insights! @prko @jamshark70 @dietcv @smoge.