Problem controlling a PlayBuf with Patterns

Hi!

I want to use a PlayBuf in order to play a wavetable that I have created in a buffer.
When using just .play, the PlayBuf works as expected.

When I try to make it work with patterns, things do not go as expected…

This is the example code:

// 1) Create a table at buffer 10:

(
p = 2.pow(11);                                      // table size: 2048 frames
v = Signal.sineFill(p, 1.0 / [1, 2, 3, 4, 5, 6]);   // six partials at 1/n amplitudes
v.plot;
n = Buffer.alloc(s, p, 1, bufnum: 10);
n.loadCollection(v);
)

// 2) This plays normally…

(
SynthDef(\help_PlayBuf, { |out = 0, buf = 10, trig = 1000|
    // sweep the trigger rate with the mouse; scale the playback rate so that
    // one pass through the 2048-frame table takes exactly 1/trig seconds
    trig = MouseX.kr(1, 10000);
    Out.ar(out,
        PlayBuf.ar(1, buf, (2048 / SampleRate.ir) * trig, Impulse.ar(trig), 0.0, 0)
    )
}).play;
)

// 3) When adding a pattern, things behave differently…

(
SynthDef(\help_PlayBuf, { |out = 0, buf = 10, trig = 1, sustain = 1|
    // hold at full level for \sustain seconds, then free the synth
    var env = EnvGen.ar(Env([1, 1, 0], [sustain, 0]), doneAction: 2);
    Out.ar(out,
        PlayBuf.ar(1, buf, (2048 / SampleRate.ir) * trig, Impulse.ar(trig), 0.0, 0) * env
    )
}).add;
)

(
Pdef(\f,
    Pbind(
        \instrument, \help_PlayBuf,
        \trig, Pbrown(1, 10000, 100, inf),
        \buf, 10,
        \dur, 1 / Pkey(\trig)   // one event per cycle of the table
    )
).play;
)

How could I use patterns to trigger the synth and make it play the table correctly, like in the first example without patterns? I tried to use \dur in order to free the synth every time a trig hits: I pass the sustain to an envelope which releases the synth after each trigger. It seems that the wavetable is not played to the end, but leaves a small gap between grains (is it scaling the table?). Why is this happening and how could I correct it? I think it has something to do with \dur, \sustain etc., but I am not sure how to figure it out…

any help would be appreciated!
thank you people!!!

I’ve not looked into this in detail, but maybe it is because the default \legato value for an event is 0.8? Try \legato, 1.0?
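
For reference, that is just one added line in the Pbind above:

(
Pdef(\f,
    Pbind(
        \instrument, \help_PlayBuf,
        \trig, Pbrown(1, 10000, 100, inf),
        \buf, 10,
        \legato, 1.0,   // override the event default of 0.8
        \dur, 1 / Pkey(\trig)
    )
).play;
)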

Hi!
Thanks for replying! Unfortunately this was one of the first things I tried, but it does not fix it…
If you put a Pseries in \trig, you will notice that the gap between grains increases as the frequency increases, so the gap is not constant in size either…
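
For example, replacing the \trig line with something like this (values chosen just to make the trend audible) shows the widening gap clearly:

\trig, Pseries(100, 100, inf),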

Hi,

there are a number of interesting and critical topics related to these examples and I'm sorry that I can't go into detail with all of them. For the SynthDef variant I'd definitely recommend not going with PlayBuf; Osc and BufRd are the right tools for this (see their help files). E.g. you already get distortion with PlayBuf at frequencies where the sixth partial is still below Nyquist (check with FreqScope).
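
A minimal sketch of the Osc approach (note that Osc expects the buffer in wavetable format, which doubles the frame count, hence asWavetable and the 2 * size allocation):

(
var size = 2048;
var sig = Signal.sineFill(size, 1.0 / [1, 2, 3, 4, 5, 6]);
b = Buffer.alloc(s, size * 2, 1);    // wavetable format needs 2 * size frames
b.loadCollection(sig.asWavetable);
)

(
// frequency from the mouse, as in the PlayBuf example; kept quiet
{ Osc.ar(b, MouseX.kr(1, 10000, 1), 0, 0.1) }.play;
)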

To get the same flexibility as Osc (up to rather high frequencies) with language-based triggering is impossible because of limited OSC bandwidth and the imprecision of language-based timing in realtime. Here are some remarks on the latter:

Taking Out instead of OffsetOut in the pattern variant is a further reason for imprecision, but as said, if you want sample accuracy + realtime control, see Osc.
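
For what it's worth, the OffsetOut version of the pattern SynthDef would look like this (a sketch; it removes block-boundary jitter, but doesn't change the fundamental limits above):

(
SynthDef(\help_PlayBuf, { |out = 0, buf = 10, trig = 1, sustain = 1|
    var env = EnvGen.ar(Env([1, 1, 0], [sustain, 0]), doneAction: 2);
    // OffsetOut starts the signal at the exact sample within the control block
    // that corresponds to the bundle's timestamp
    OffsetOut.ar(out,
        PlayBuf.ar(1, buf, (2048 / SampleRate.ir) * trig, Impulse.ar(trig), 0.0, 0) * env
    )
}).add;
)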
BTW if you post an example, especially with a potentially rather unpleasant sound, please consider lowering the amplitude.

Hope that helps, best

Daniel

I think this comment hits the nail on the head.

@eskay You might have assumed that language-side sequencing provides sample-accurate timing. When using a real-time server, it doesn’t and it can’t. Without sample accuracy, there’s no guarantee that the end of one node will line up exactly with the beginning of the next.

Sample accuracy is not possible in the current design because the audio server must follow the sample clock from the hardware interface, and the language client doesn't have access to that. So there are two ways to proceed:

A/ Both the language and the server resolve timestamps against the system clock. Server and client stay together, but at the expense of sample accuracy.

B/ The language resolves timestamps against the system clock while the server resolves them against the sample clock. This does give sample accuracy, but perceived messaging latency drifts over time. (In fact, this option is available using supernova.)

In theory, there's a third option, C/ to connect the language to the same audio device and run both clocks off of the hardware sample clock, but that's much more complex and not implemented in SC.

If you try option B (Server.supernova; s.useSystemClock = false; s.boot;), you might get better timing accuracy in the short term, but the drift will be apparent within 15-20 minutes.

If you really require sample-accurate language timing, you might be better off with ChucK (which is designed from the ground up for that).

But even if SC could give you sample accuracy, there's another reason not to try to write an oscillator using patterns: what if your frequency does not divide the sample rate exactly? If freq = 100 at a 44.1 kHz sample rate, then you know each cycle is exactly 441 samples. If freq = 101, then sample accuracy alone will not help you, because the cycle is not an integer number of samples. The first synth could start at phase 0, but the next synth would have to start at a non-integer phase.
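
The arithmetic (assuming 44.1 kHz):

44100 / 100;   // 441.0 samples per cycle: every grain can start at phase 0
44100 / 101;   // 436.6336... samples per cycle: every grain after the first
               // would need to start at a fractional sample position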

Meanwhile Osc.ar takes care of that for you.
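
Patterns remain fine for ordinary note-level sequencing, where Osc handles the per-cycle phase on the server. A sketch, reusing the wavetable buffer b from the earlier example:

(
SynthDef(\oscTable, { |out = 0, buf, freq = 440, sustain = 1, amp = 0.1|
    var env = EnvGen.ar(Env.linen(0.01, sustain, 0.01), doneAction: 2);
    OffsetOut.ar(out, Osc.ar(buf, freq) * env * amp)
}).add;
)

(
Pbind(
    \instrument, \oscTable,
    \buf, b,
    \freq, Pbrown(100, 1000, 50, inf),
    \dur, 0.2
).play;
)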

In short: “How could I use patterns to trigger the synth and make it play the table correctly like in the first example without patterns?” – Don’t.

hjh