let me catch up with the suggested threads by you and @smoge. I think I can get good insights there.
thanks
This is a good implementation but it has the problem I was pointing out here:
My main problem is that I see computer music as a stream of numbers, and the timing should always be sample-based to ensure synchronicity. But SuperCollider's architecture makes this impossible, because sending a message and retrieving the reply delays the information. Or, in your example, you would have to trigger the SendReply every sample to be sure the information you are getting is the one you wanted. And of course this is not possible, because you would saturate the server with thousands of requests.
Because you have to send a reply to read the values you are tracking, getting the actual value at the moment of the request would require a SendReply every sample. For example, in your implementation the modulation happens faster than the SendReply trigger, so the frequency information is discretized at the trigger rate, and all the points in between are lost.
But apart from this, this is a good solution
In this post I have explained three ways of subdividing ramps:
You don't want to try to process audio-rate data in sclang.
Sclang is not optimized to process streams of data with tens of thousands of data points per second. Scsynth is.
If you went on the Max or Pd forums and asked how to get every audio sample into the control layer without dropping a sample, they'd tell you the same thing I'm saying here: that it isn't recommended. (It may be possible in Pd actually, but I bet you'd get a ton of audio dropouts while the control layer is struggling to keep up with the high rate of data.)
James McCartney (creator of SC) once explained to a user that if your control mechanism runs at a higher bandwidth than the output signal, then you need to rethink the control mechanism. The thing you're asking about here is the same, but in reverse.
There are two questions in this thread, which are getting confused together. One is how to .trace and get the data; .trace discretizes, and this question has been asked and answered.
You've rejected the solution because of question 2: how to subdiv~. Dietcv has just pointed you to a post that explains how to manage ramps without needing to get audio-rate data into the language. Strongly suggest that you read this carefully, because it is much better to handle this problem fully in the signal domain, where "attributes" are not necessary. You do not need "attributes" to do subdiv (and Max doesn't use "attributes" to implement subdiv either).
hjh
I am reading the dietcv thread and it is what I'm looking for. I think I will dive deep into all his posts.
In my view the main subject of this thread (besides the title) is how to get sample-accurate information about the UGens, nodes, functions or synths within your performance. The attribute thing was because I actually wanted to subdivide phasor to have different clocks synched within my program. So my idea was to have an attribute value that, when you ask the server for it, returns the value of that attribute at the moment you ask.
Here the solutions are backwards: we are "sampling" this value at a certain rate with SendReply and then reading these samples to get the attribute. But by sampling at a certain rate you lose granularity of the variable.
In any case I think dietcv has expertise in how to handle these "quantum" measures jaja. I will study his posts to get a clearer understanding of this topic.
thanks
Actually your solution is backward (sorry for being blunt).
You have access to the phasor's frequency input by looking at the input, not by asking the phasor to invent another output for information that's available somewhere else.
See the Pd screenshot I posted earlier: multiple cables out of the frequency input. The frequency is right there; you don't need a new feature to get it.
Audio rate polling into the language is not a good idea.
hjh
Um... The two code snippets I made were conceived based on your comment, which I quoted with each code.
I am writing my experience without having read @dietcv's post and the one linked by @jamshark70 (sorry):
I tried three or four times to extract values at the audio sample rate, but ran into problems each time. Even displaying some values continuously could affect the application.
It is not recommended to continuously read real-time values from the audio synthesis at the audio sample rate. To do this without problems, the CPU and GPU would need to be much faster than current hardware.
If you don't need a sclang-side process for these values, I would recommend building a server-side process using a large UGen network. This is the same concept as your first code in this thread. You could see some values when you need them using .poll, SendReply, or other things. Using .poll is a good way to monitor values, but its default rate is also not the audio sample rate.
If you need a sclang-side process or control while the scserver (scsynth or supernova) is producing sound, it is not recommended to continuously get values at the server-side audio sample rate.
But what if you want to modulate this input frequency? Then the input will no longer work, because it depends on an oscillator. The thing is that I don't want audio-rate polling; I want sample-accurate communication with the server, meaning that when I ask, I get. No need to poll at audio rate, just a simple ask and return. I'm aware that there would be a delay between the request and the response, but with this architecture I think that is the best approximation.
Example:
Let's say we have an oscillator, modulated by another oscillator:
x = {SinOsc.ar(SinOsc.ar(0.1, 0, 10, 440))}
In this case we can't know the actual value of the input frequency because it's being modulated by the other one; it would be somewhere between 430 and 450.
If I want to apply some logic to the output of that function, let's say if x.freq > 445 { "trigger something" }, then x.freq would get the number that x is outputting at the moment of the request, to perform the logic operation (no need for audio-rate polling, just a single request). But "if" doesn't work like that: it outputs true when x = 1. So in this case it would trigger all the time, because the frequency of x is always greater than one.
In any case I see that there is a workaround for this "attributes" thing. I'm going to check all of dietcv's posts to understand how he implements his sample-based calculations.
thanks
The SC client-server design sacrifices sample-accurate communication in exchange for the flexibility of having multiple clients on one server, or different languages controlling the server. James McCartney mentions in his initial paper on SC Server that he was aware of this trade-off, and that he made it for other reasons.
If you absolutely need sample-accurate communication, then SC in its current form might not be the right platform for you. Maybe ChucK? In ChucK you can write 1::sample => now in your control loop; this will be slow (high CPU use), but sample accurate.
BUT... there may be other ways to accomplish what you're after that don't depend on sample-accurate communication, and you might be overlooking those approaches because of a single-minded fixation on one and only one methodology.
This is why Jordan asked at the beginning what specifically you're trying to do. Your answers have been a bit of a moving target on this, so it's difficult to proceed.
Let's say we have an oscillator, modulated by another oscillator:
x = {SinOsc.ar(SinOsc.ar(0.1, 0, 10, 440))}
In this case we can't know the actual value of the input frequency cause it's being modulated by another one, it would be between 430 and 450.
Correct: a feature of DSP thinking is that it focuses on abstract characteristics of large data sets, rather than on specific data. It's not only that you can't know the actual value, but that most of the time, you don't need to.
If I want to make some logic with the output of that function, let's say if x.freq > 445
First, consider not embedding the modulator into the carrier.
{
var xfreq = SinOsc.ar(0.1, 0, 10, 440);
x = SinOsc.ar(xfreq);
}
This is why we keep saying that attributes are not needed. You can choose to make the frequency available in its own variable. Problem solved. Now let's move on. (Note that, in Max or Pd, you can't bury a modulator inside another processor.)
Now, in the SynthDef function, you can write xfreq > 445
and this will produce an audio-rate signal that is 0.0 when false, and 1.0 when true. The transition from false to true is fully sample accurate, and you can use this for sample-accurate triggering of envelopes, resetting of ramps, pulse counting, or many other signal-domain operations, same as in Max or Pd (e.g., in Max, a threshold [>~ 445] can serve as a gate for an adsr~).
By itself, this doesn't support all of the language-side logic. To do that, you need a bridge between the server and the language. This is SendReply -> OSCFunc (or OSCdef). Max and Pd also need bridge objects for this: [edge~] or [threshold~] respectively.
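A minimal sketch of that bridge, assuming the xfreq threshold from the snippet above (the '/xfreq' reply path and the 445 threshold are arbitrary choices for illustration):

```supercollider
// Server side: the comparison is an audio-rate 0/1 signal; SendReply
// fires once per rising edge (0 -> 1), not continuously.
(
x = {
	var xfreq = SinOsc.ar(0.1, 0, 10, 440); // modulator, 430..450
	var above = xfreq > 445;                // sample-accurate threshold signal
	SendReply.ar(above, '/xfreq', xfreq);   // notify the language on each crossing
	SinOsc.ar(xfreq) * 0.1;
}.play;

// Language side: react whenever the threshold is crossed.
OSCdef(\xfreqWatcher, { |msg|
	// msg is ['/xfreq', nodeID, replyID, value...]
	"xfreq crossed 445: %".format(msg[3]).postln;
}, '/xfreq');
)
```

The triggering itself stays sample accurate on the server; only the language-side notification arrives with messaging latency, which is usually fine for control logic.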
DSP thinking is functional rather than imperative. You seem to be stuck in an imperative mode. Dietcv's thread suggests a lot of ways to break out of imperative-code style.
hjh
@Dasha I think the code structure you want might look like this:
Pitch works at control rate, and its default rate is 1 / 64 * s.sampleRate. Even this rate produces unwanted noise at the transition; to reduce this you need lag. I think that audio rate is needed to produce sound, but to control other sounds it does not always give the best result.
(
{
var lowFreqM = 430;
var highFreqM = 450;
// var modulator = SinOsc.ar(0.1, 0, 10, 440);
var modulator = SinOsc.ar(0.1, 0).range(lowFreqM, highFreqM);
var carrier = SinOsc.ar(modulator) * 0.1;
var freqModulator = Pitch.kr(carrier, lowFreqM)[0].lag(0.7).poll; // you need lag to get a better transition. The lag time would be different for each system. 0.7 gives me a stable sound change.
// var freqModulator = Pitch.kr(carrier, lowFreqM)[0];
// var freqModulator = K2A.ar(Pitch.kr(carrier, lowFreqM)[0]);
var triggered = WhiteNoise.ar * 0.05;
var which = freqModulator > 440;
Select.ar(which * 2, [carrier, triggered]);
// SelectX.ar(which * 2, [carrier, triggered]);
// SelectXFocus.ar(which * 2, [carrier, triggered]);
// XFade2.ar(carrier, triggered, which.linlin(0, 1, -1, 1));
// if(which, carrier, triggered)
}.play;
)
@prko Nice!!! jajaja you made it! this is a really good solution, thanks man!
Yes, reading dietcv's posts I agree that I have to change paradigms, or I will be fighting the architecture forever.
thanks.
Please donât thank me, thank other long-time sc users, including @jamshark70. This code snippet is from learning from many users.
you are right, thanks everyone for your insights! @prko @jamshark70 @dietcv @smoge.