@jamshark70 is probably right re. coloration - but this is still a real issue and there are pragmatic solutions… Mixing / mastering are fundamentally about balancing frequency distribution over the spectrum (e.g. EQ), and balancing amplitude distribution over time (e.g. compression) - you've got to make intentional decisions about how you adjust along these two axes, and be prepared to weigh the desirable changes you introduce against the undesirable ones.
You should be able to get this by using two very slow-moving Amplitude UGens to balance the amplitudes of the two signals. Alternatively, you can use Loudness UGens, or any other amplitude measurement, depending on what exactly you want to balance (e.g. peaks, overall energy, or perceptual loudness).
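To make the measurement choices concrete, here's a rough sketch (variable names and parameter values are mine, not a recommendation) comparing the three trackers side by side - Amplitude follows peaks with separate attack/release times, RMS tracks energy with a smoothing frequency, and Loudness estimates perceptual loudness in sones from an FFT chain:

```supercollider
// Post peak, RMS, and perceptual loudness measurements of the input.
// RMS is from sc3-plugins; Loudness needs an FFT chain.
(
{
    var sig = SoundIn.ar(0);
    var peak = Amplitude.kr(sig, 0.01, 1);          // fast attack, slow release
    var energy = RMS.kr(sig, 1);                    // 1 Hz smoothing
    var sones = Loudness.kr(FFT(LocalBuf(1024), sig));
    [peak.ampdb, energy.ampdb, sones].poll(2, [\peak_dB, \rms_dB, \sones]);
    Silent.ar;
}.play;
)
```

Which one you pick is the "what, exactly, do you want to balance" decision: peaks for limiting-style matching, RMS for energy, sones if you care about perceived loudness.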
The fundamental trade-off here is responsiveness: if your tracking and adjustments are applied slowly enough, they will be perceptible only as volume changes. The furthest end-point in this direction is to simply measure the overall amplitude of two sound files and normalize them - this is perceptible as a volume change only. As you increase the responsiveness from this extreme, you'll start to apply an uneven envelope over time, which will subtly reshape the envelopes/dynamics of your sound. These effects will eventually become perceptible - which may be acceptable, or undesirable. As you make your tracking MORE responsive (with response times in the audio range, e.g. milliseconds), you can introduce differences in the actual timbre of the sound - again, either acceptable or undesirable.
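The "slowest possible" end-point can be done entirely offline, language-side. A minimal sketch, assuming two hypothetical file paths (and choosing, as one possible convention, to meet both files at the geometric mean of their RMS levels):

```supercollider
// Measure each file's overall RMS and compute a single static gain for each,
// so both end up at the same level - perceptible only as a volume change.
(
var pathA = "a.wav", pathB = "b.wav";   // hypothetical paths
var rmsOf = { |path|
    var file = SoundFile.openRead(path);
    var data = FloatArray.newClear(file.numFrames * file.numChannels);
    file.readData(data);
    file.close;
    (data.squared.sum / data.size).sqrt;
};
var rmsA = rmsOf.(pathA), rmsB = rmsOf.(pathB);
var target = (rmsA * rmsB).sqrt;        // meet in the middle
("gain for A: " ++ (target / rmsA).ampdb.round(0.1) ++ " dB").postln;
("gain for B: " ++ (target / rmsB).ampdb.round(0.1) ++ " dB").postln;
)
```

Everything else in this thread is about moving away from this static extreme toward real-time tracking, and deciding how far is too far.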
Of course, it's pragmatically… not so useful if your response time is so slow that your two signals become balanced only after 60 seconds of playback - so the challenge is to find a good middle-ground where they become balanced quickly after a change, but not so quickly that it distorts the sound in an undesirable way. These are very subjective boundaries, so it really depends on your sound and your intention. Autechre has made a career (and a great new album) out of sticking really hard, short-response-time compression on reverb, so… sometimes this coloration or distortion is just another piece of the character of your sound.
Here's an example, with both RMS and Amplitude measurements. Note that the responsiveness is controlled by the attack and release times of Amplitude, or the lpFreq argument of RMS.
(
{
    var a, b, ampA, ampB, ampAvg, ampAdjustA, ampAdjustB;
    a = Decay.ar(Impulse.ar(1/1), 5);
    a = a * LPF.ar(LFSaw.ar(52.midicps), 300);
    b = Decay.ar(Impulse.ar(1/1.125), 5);
    b = b * LPF.ar(LFSaw.ar(55.midicps), 400);
    b = -20.dbamp * b;               // start b 20 dB quieter than a
    ampA = RMS.ar(a, 1).ampdb;       // slow (1 Hz) energy tracking, in dB
    ampB = RMS.ar(b, 1).ampdb;
    // ampA = Amplitude.ar(a, 0.01, 1).ampdb;  // peak-style alternative
    // ampB = Amplitude.ar(b, 0.01, 1).ampdb;
    ampAvg = [ampA, ampB].mean;
    ampAdjustA = (ampAvg - ampA).min(36);      // cap boosts at +36 dB
    ampAdjustB = (ampAvg - ampB).min(36);
    (ampA - ampB).round(0.1).poll(label: "difference");  // measured gap in dB
    a = a * ampAdjustA.dbamp;
    b = b * ampAdjustB.dbamp;
    Amplitude.ar([a, b], 1, 6).poll; // post the balanced levels
    [a, b] * 0.5
}.play;
)