Reverb design: sounding good with percussive and sustained sounds

I’m working on a bigger project involving a number of different reverbs. One thing I’m struggling with is that a given reverb design, or a specific set of settings, tends to sound very good on either percussive or sustained sounds, but rarely both. I think this is a difficult problem to solve; in studio settings it’s pretty common to use different reverbs on different instruments, but that isn’t possible for the project I’m currently working on.

Here’s a semi-OK FDN I knocked together this morning. It’s based on the classic “FDN of order N” by Jot and Chaigne, with a few modifications to get it sounding a bit better. It sounds fine, nothing special: the saw sound in the routine sounds decent through the reverb; the kick drum sounds terrible. I’m wondering if anyone here has experience with this problem, or creative solutions for dealing with it.

Cheers,
Jordan

(
SynthDef(\jotFDN, { |feedback = -3, modFreq = 0.2, modAmp = 0.0002, scale = 1, damping = 4000, amp = -10, rt = 10|
    var input, output;
    var early;
    var fb, fdn, matrix, delays;

    input = In.ar(0, 2); 

    early = input.sum; 
    
    // early reflections
    4.do { |i| 
        early = AllpassC.ar(early, 0.01, 0.00237 + (i * 0.0002), 1) ;
        early = AllpassC.ar(early, 0.01, 0.00337 - (i * 0.00015), 1) ;
    };

    // predelay
    early = DelayC.ar(early, 0.1, 0.02);


    // fb, fb matrix, fdn function
    fb = LocalIn.ar(4);
    matrix = [
        [1, 1, 1, 1],
        [1, -1, 1, -1],
        [1, 1, -1, -1],
        [1, -1, -1, 1]
    ];
    fb = fb * matrix.flop;
    fb = fb * sqrt(2).reciprocal;

    fdn = { |input, delayTime, fbIndices = #[]| 
        var sig, mod, decayCoef;
        decayCoef = 0.001.pow(delayTime/rt);
        mod = LFNoise2.ar(modFreq) * modAmp;
        sig = input + fb[fbIndices[0]] + (fb[fbIndices[1]] * -5.dbamp);
        sig = DelayC.ar(sig, delayTime + 0.1, (delayTime * scale) + mod - ControlDur.ir);
        sig = LPF.ar(sig, damping); 
        sig;
    };

    delays = Array.newClear(4);
    delays[0] = fdn.(early, delayTime: 0.065103, fbIndices: [0, 3]); 
    delays[1] = fdn.(early, delayTime: 0.037335, fbIndices: [1, 2]);
    delays[2] = fdn.(early, delayTime: 0.036431, fbIndices: [2, 1]);
    delays[3] = fdn.(early, delayTime: 0.064091, fbIndices: [3, 0]);

    delays = delays * feedback.dbamp;
    delays = LeakDC.ar(delays); 

    LocalOut.ar(delays);

    delays = LPF.ar(delays, damping * 3);
    delays = HPF.ar(delays, 100);
    delays = LeakDC.ar(delays);

    output = delays * amp.dbamp;
    output = Splay.ar(output);
    Out.ar(0, output);
        
}).add;

SynthDef(\saw, {
    var sig;
    sig = Saw.ar(\freq.kr(660));
    sig = sig * Env.perc(0.5, 0.5, curve: -3).ar(Done.freeSelf);
    sig = sig ! 2 * -8.dbamp;
    Out.ar(0, sig);
}).add;

SynthDef(\kick, {
    var sig;
    sig = SinOsc.ar(57 + (1 + Env.perc(0, 0.07, 26, -50).ar));
    sig = sig + (BPF.ar(PinkNoise.ar * -23.dbamp, 733));
    sig = (sig * 3.3).tanh;
    sig = sig * Env.perc(0, 0.13).ar(Done.freeSelf);
    sig = sig ! 2;
    Out.ar(0, sig);
}).add;
)

x = Synth(\jotFDN, [scale: 0.9, feedback: -9.5, amp: 0, damping: 3000, \modFreq, 0.01, \modAmp, 0.002])
Synth(\kick)

(
Routine {
    x = Synth(\jotFDN, [scale: 0.9, feedback: -9.5, amp: 0, damping: 3000, \modFreq, 0.01, \modAmp, 0.002]);
    loop {

        rrand(3,6).do{
            Synth(\saw, [freq: ( -3 + [0, 12, -12].choose +  [50, 55, 57, 60, 62, 63, 65, 67, 69].choose).midicps]);
            rrand(0.3, 0.8).wait;
        };
        3.wait;
    }

}.play;
)


why not?

The idea is to play live and use the same reverb for multiple instruments, both conceptually (we’re in the “same room”) and for CPU reasons: I’m not sure about running multiple reverbs at the same time on my laptop, so I’ve been planning to use one reverb per piece.

Maybe try a few instances before writing that off. Modern CPUs are pretty fast. Even on my old laptop a couple of years ago, when the sclang benchmark numbers I got seemed slower than other people’s, I was routinely running two FreeVerbs and a JPverb alongside a chunk of synthesis and rarely hitting 50%. (Those weren’t home-grown reverbs, and it depends how much synthesis is going on, of course.)

That doesn’t say anything about the conceptual reason – just pointing out that “just a laptop” is a lot less of a distinction than it used to be (except for, like, hardcore Hollywood peeps using 200 Kontakt instances for orchestration).

hjh


I’d start with an impulse response and PartConv, maybe with some filters to thin the sound. Designing reverbs is hard.
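
A minimal sketch of that approach, for reference. The IR path, FFT size, and filter frequency below are all placeholders I’ve chosen for illustration, not values from this thread:

```supercollider
// Sketch of the PartConv (partitioned convolution) approach.
(
s.waitForBoot {
    var fftSize = 2048; // partition size: smaller = less latency, more CPU
    var ir = Buffer.read(s, "path/to/ir.wav"); // hypothetical mono IR file
    s.sync;
    // allocate and fill the partitioned spectrum buffer from the IR
    ~spectrum = Buffer.alloc(s, PartConv.calcBufSize(fftSize, ir), 1);
    ~spectrum.preparePartConv(ir, fftSize);
    s.sync;
    SynthDef(\irVerb, { |irBuf, amp = -12|
        var dry = In.ar(0, 2);
        var wet = PartConv.ar(dry.sum, fftSize, irBuf);
        wet = HPF.ar(wet, 150); // "some filters to thin the sound"
        Out.ar(0, wet ! 2 * amp.dbamp);
    }).add;
    s.sync;
    Synth(\irVerb, [irBuf: ~spectrum], addAction: \addToTail);
};
)
```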

I’ve been working on it pretty consistently for the better part of a year and am starting to be more or less okay at it (still a long way away from good…). I was hoping someone might have encountered and solved this specific problem.

We had an aesthetics discussion about reverb on live performances here once, nice chat

If we are talking aesthetics, I would ask why the first artificial reverb inside a naturally reverberating room is any more “natural” than a second artificial reverb. You’ve crossed half the bridge and are wondering whether to go all the way, and to which side :)

Have you tried to use a bigger matrix?

(
var getHadamard = { |order|
	var matrix0 = [
		[ 1,  1 ],
		[ 1, -1 ]
	];
	var kronecker = { |a, b|
		a.collect { |x|
			x.collect { |y| b * y }.reduce('+++')
		}.reduce('++')
	};
	var matrixN = matrix0;
	(order.log2 - 1).do{
		matrixN = kronecker.(matrixN, matrix0);
	};
	matrixN * sqrt(order).reciprocal;
};

getHadamard.(16);
)

or different delay times?

(
var delayLengths = { |order, dmin, dmax|
	var nm1 = order - 1;
	var d = dmin * ((dmax / dmin) ** ((0..nm1) / nm1));
	(d * s.sampleRate).round(1.0).asInteger;
};

var primePowerDelays = { |order, dmin, dmax|
	var delays = delayLengths.(order, dmin, dmax).debug(\delTimes);
	var powerDelays = delays.collect{ |delay, i|
		var prime = i.nthPrime;
		prime ** ((log(delay) / log(prime)) + 0.5).floor;
	};
	powerDelays.asInteger / s.sampleRate;
};

primePowerDelays.(16, 0.03, 0.06);
)

or swap the LPF in your feedback path for OnePole?

sig = OnePole.ar(sig, exp(-2pi * (\lpf.kr(16000) * SampleDur.ir)));
LocalOut.ar(sig);
sig = (sig - OnePole.ar(sig, exp(-2pi * (\hpf.kr(100) * SampleDur.ir))));

or HighShelf?

sig = HighShelf.ar(localOut, ffreq, fq, feedback) * decayCoef;
LocalOut.ar(sig);
sig = HighShelf.ar(sig, ffreq, fq, feedback.neg);

or change the blockSize?

or instead of using Splay a different distribution of channels across the stereo field?
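
On the last point, here’s one possible reading, as a drop-in replacement for the Splay lines in \jotFDN above. The even/odd pairing is just one illustration of distributing lines across channels:

```supercollider
// Hypothetical alternative to Splay: sum even-indexed delay lines to the
// left channel and odd-indexed ones to the right, so each channel gets a
// decorrelated subset of the network rather than a panned mixdown.
output = [
    delays[0] + delays[2],  // left
    delays[1] + delays[3]   // right
] * amp.dbamp;
Out.ar(0, output);
```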

I’m not too worried about the purity or naturalness or whatever. It’s more that, because the musicians will be improvising, using in-ear monitors etc., I think it makes intuitively more sense for interaction purposes if everyone feels like their sounds are moving through the same air. If that makes sense? It’s about creating a vaguely comfortable situation, I think.

@dietcv I’ll check this stuff out on the weekend. I tried the bigger matrix and it obviously sounds better :) I need to work out whether my computer can handle running two summed 16th-order matrices to achieve a nice stereo effect, but I think it will work. Thanks for your suggestions!


Some initial results:

  • increasing matrix size definitely works (obviously); the question of performance remains

  • different delay times in my experience change the colour, which isn’t really my major concern here, but also interesting of course

  • HighShelf has a nice effect! A OnePole (regardless of the coefficient) inside the feedback loop generates NaNs, I’m not really sure why.

  • changing block size didn’t help in this case, although it has helped me in the past. One of the more fun things to do is to include a sample-rate reduction in the feedback loop, getting down to around 28k or 24k (or lower); this can sound surprisingly good

  • the biggest, most important thing I’ve worked out is that I was using the matrix incorrectly. I’m not really sure why, as a few different versions of code that seem to do the same thing produce different audio results, but replacing fb = fb * matrix.flop with delays = early + fb; delays = matrix.collect { |item, i| var sig; sig = item * delays; sig.sum } * 0.5; has made a major difference. That’s not just because I’m now mixing the input signal into the feedback matrix, but because, as far as I can tell, SuperCollider actually processes the signals differently. If this interests anyone I have more concrete examples, but I don’t want to just shout into the void here :)

tl;dr increasing to a 16th-order matrix has had the biggest impact, although I’m still not totally convinced it will work well with both percussive and sustained sounds; only time will tell.
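
For anyone following along, here is my reading of that replacement written out, with variable names as in the \jotFDN SynthDef above:

```supercollider
// Replacement for `fb = fb * matrix.flop`: mix the input into the
// network, then form a true matrix-vector product, so every delay line
// receives a weighted sum of ALL lines rather than an elementwise scale.
delays = early + fb;
delays = matrix.collect { |row, i|
    (row * delays).sum  // dot product of one matrix row with the line vector
} * 0.5;                // 1/sqrt(4) normalization for the 4x4 matrix
```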

Do you have a minimal example of this, could be a bug?

Quickly looking at this again, I can’t reproduce my own bug. I’ll have another look in the next few days and see if I can…

From what I know when dealing with feedback delay networks:

  • By increasing the size of the matrix, you can pick shorter delay times without getting a metallic comb-filter response.
  • The delay times colour the sound; they are best chosen using prime numbers to avoid stacking of frequencies.
  • The OnePole / HighShelf is meant to attenuate the high frequencies inside the feedback loop and then attenuate the low frequencies after the feedback loop, similar to guitar distortion design with shapers and filters.
  • Before summing the result you could change the delay weights of your matrix by attenuating their amplitudes; I don’t think you have to create and sum a matrix for both channels of your stereo signal.
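
The last bullet could look something like this inside the \jotFDN SynthDef; the weight values here are arbitrary illustrations, not a recommendation:

```supercollider
// Two different output-weight vectors tapping the same four delay lines,
// giving decorrelated left/right channels from a single matrix.
var leftWeights  = [ 1.0,  0.7, -0.5,  0.3 ];  // arbitrary example weights
var rightWeights = [ 0.3, -0.5,  0.7,  1.0 ];
output = [
    (delays * leftWeights).sum,
    (delays * rightWeights).sum
] * amp.dbamp;
Out.ar(0, output);
```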

Here are some additional thoughts: