SoundIn and multiple patterns

Hello, I’m trying out a new way of using SoundIn with streams/patterns/events.

I’m still very far from what I want, so here is the newbie code I wrote, and I’m already stuck. The Pbind code is based on Eli’s tutorial.

My idea is to create one or two SynthDefs and control the audio input with patterns (using any of the pattern classes), so I can change the timbre, effects, sustain, etc. of an audio input. Any help will be much appreciated.

Thank you so much for your time.

~b3 = Buffer.alloc(s, 44800 * 0.6, 2);

SynthDef(\hubSine, { |out = 0, freq = 0.0, phase = pi, atk = 0.5, rls = 0.5, pan = 0, amp = 0.0, gate = 1, outBus = 0, level = 1, run = 1, loop = 1|
	var sig, rec, play;
	sig = * level);
	// sig = sig +, 2, [atk, rls], 0.7);
	sig = sig +, 1,, 1) * [0.13, 0.17], 7);
	rec =, ~b3, \, \, \, run, loop);
	play =, ~b3.bufnum, loop: 1);
	play =, 80);
	play =, 12000);,, play * 0.2, level: level));
}).add;

d = Synth(\hubSine);

~note = 40;
~pat = Pbind(
	\instrument, \hubSine,
	\dur, Pexprand(0.02, 1),
	\atk, Pexprand(0.5, 2),
	\sus, 0,
	\rel, 4,
	\relcrv, -2,
	\freqdev, Pwhite(-0.2, 0.2),
	\midinote, Pfunc({ ~note }), // Pfunc to the rescue
	#[harmonic, amp], Pfunc({
		var h, a;
		h = exprand(1, 40).round(1);
		a = h.lincurve(1, 40, 1, 0.02, -8) * 0.03;
		[h, a];
	}),
).play;

~note = 52; // changeable


Hi velma,

At a glance, my feeling is that you might be going about this in a way that’s going to make life unnecessarily difficult for yourself. There are many situations where a SynthDef/Pbind combo will produce good results, but I don’t think this is one of them.

Some general observations:

  • the first argument of SoundIn is the bus index, but you seem to be treating it as an amplitude parameter
  • freq is defined in your SynthDef argument declaration, but not used in the SynthDef code, so using midinote, harmonic, etc. in your Pbind will have no effect.
  • similarly, you’re manipulating atk, sus, rel, relcrv in your Pbind, but these parameters are not present in your SynthDef algorithm, so they’ll also have no effect.
  • you’ve hard-coded ~b3 into your SynthDef, which means every Synth generated by the Pbind will use this buffer. this might cause problems, depending on the specifics of what you end up doing.
  • your XFade2 is crossfading between your signal and a quieter version of the same thing. this won’t have any noticeable or interesting effects.
  • there’s no envelope applying a small fade-in to your signal when it’s passed to RecordBuf, so the PlayBuf signal will likely contain a bunch of clicks and glitches.

This is all sort of vague; it would be helpful to understand your goals with more specificity. Are you trying to do a live granular synthesis effect? An echo effect? A pitch resonator effect? I’m not totally convinced that you actually need Pbind or the RecordBuf/PlayBuf combo at all. It feels like you’re throwing about seven different techniques at a problem when you might only need one or two.

I would also strongly recommend breaking your SynthDef into smaller, dedicated units that each have a separate task, and which pass signal from one to the other using busses. For instance, a \mic SynthDef that writes SoundIn to a bus, a \comb SynthDef that applies your comb filter effect, a \filter SynthDef that applies your LPF/HPF combo, etc. I believe this will allow you to make your Pbind much simpler, and it’ll improve CPU efficiency.
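To make the bus-routing idea concrete, here is a minimal sketch (the SynthDef and variable names are made up for illustration, not code from this thread): one Synth writes the mic to a private bus, and a second Synth reads from that bus, applies a comb filter, and writes to the hardware output.

```supercollider
// a private mono bus carrying the raw mic signal
~micBus = Bus.audio(s, 1);

// stage 1: capture the microphone and write it to the bus
SynthDef(\mic, { |out = 0|, 1));

// stage 2: read the bus, apply a comb filter, send to speakers
SynthDef(\comb, { |in = 0, out = 0, delay = 0.25, decay = 3|
	var sig =, 1);
	sig =, 0.5, delay, decay);, sig ! 2); // duplicate mono to stereo

~micSynth = Synth(\mic, [\out, ~micBus]);
// \addAfter keeps the effect downstream of the mic in the node tree
~combSynth = Synth(\comb, [\in, ~micBus, \out, 0], ~micSynth, \addAfter);
```

The node order matters: the reader must sit after the writer on the server, which is why the \comb Synth is added with \addAfter relative to the \mic Synth.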

If you can be more specific with your goals, I will probably be able to write some code that should point you in the right direction.


Hello Eli,

Thank you so much for your insightful comments. I really appreciate it. What I’m trying to do is experiment with live granular and delay effects. I have an instrument, a lyre, feeding its signal in, and I would like to manipulate its sound (changing its sustain, delay, etc.) while maintaining a short loop (say, 30 seconds long). This way, the musical loop can evolve over time as I adjust those effects.

I’m considering the possibility of using multiple audio inputs if that’s feasible.

I’m very new to SC, so it’s a trial-and-error process for me. However, I’d truly love to understand what code would make sense and not cause any unpleasant sounds. I’m not sure if my intention is clear enough. It would be greatly appreciated if I could have a good example to get started.

Thank you very much

Have you watched videos 4-6 in my Spring 2021 lecture playlist? They cover the topics you mention: specifically, live looping, live granular synthesis, and delays. These videos might give you some concrete ideas of how to proceed.

If you are just using one lyre and one microphone, I don’t really see why you’d need to use multiple audio inputs.

I would also recommend Pmono instead of Pbind. Pmono creates exactly one Synth and calls ‘set’ messages on that Synth using its internal patterns. When the stream ends or is stopped, the Synth automatically receives a (\gate, 0) message. This approach might make more sense than creating a sequence of multiple Synths.
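As a toy illustration of that behavior (using the built-in \default SynthDef, not code from this thread): the pattern below creates a single Synth and glides it through random pitches via .set messages, rather than spawning a new Synth every 0.2 seconds.

```supercollider
// Pmono: one Synth, updated by .set messages on each event
p = Pmono(\default,
	\dur, 0.2,
	\midinote, Pwhite(48, 72)

// stopping the stream sends (\gate, 0), so the Synth releases cleanly
// p.stop;
```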

Here is a very simple example, loosely based on your original post, with many aspects that could be improved or modified.


b = Buffer.alloc(s, s.sampleRate * 10, 1);

SynthDef(\live_fx, {
	var mic, sig, env;
	// get mic signal, with a short fade-in to avoid clicks
	mic =\, \, 1) *, 1, 0.02);
	// record and loop playback, \, preLevel: \, 0.9), run: \, 1), loop: 1);
	sig =, \, loop: 1);
	// overall envelope, released when the stream stops
	env =\, 0.02), 1, \, 1), \, 0)).kr(doneAction: 2, gate: \, 1));
	// echo effect
	sig =, 0.5, \, 0.25), \, 3));
	// granular effect
	sig =,\, 20)),\, 0.1), sig, \, 0));
	sig =, 80), 12000);
	sig = sig * env;\, sig);
}).add;

p = Pmono(\live_fx,
	\buf, b,
	\dur, 0.1,
	\dens, Env([20, 1, 20], [10, 10]).asPseg.repeats_(inf).trace,
);

~pat =;


Hello Eli,

Fantastic, I went over the video lectures and learned a lot from them. I’ll try your code sample and will post something next week.

Thank you very much for your time. It’s extremely helpful.