Visualising or simulating Intersample Peak in SC

Dear SC-Users,

Could the so-called “intersample peaks” (or “inter-sample peaks”) be simulated (or visualised) using UGens in SC?

“Intersample peaks” arising while recording are easy to conceive: the true peaks of the analogue waveform may fall between sample instants during A/D conversion, so they are never captured in the samples. Normalising such a recording to 0 dBFS can therefore produce intersample peaks on playback.
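Here is a quick Python sketch (outside SC, standard library only) of what I mean: sampling a full-scale sine at fs/4 with a 45-degree phase offset makes every sample land at ±sin(45°), so normalising those samples to full scale pushes the true waveform peak about 3 dB over 0 dBFS.

```python
import math

fs = 44100.0
f = fs / 4           # one cycle every 4 samples
phase = math.pi / 4  # 45 degrees: the sample instants miss the crests

# sample a full-scale sine; the samples never reach +/- 1.0
samples = [math.sin(2 * math.pi * f * n / fs + phase) for n in range(16)]
sample_peak = max(abs(x) for x in samples)

# "normalise" the samples to 0 dBFS
gain = 1.0 / sample_peak
# the continuous waveform still peaks at 1.0, so after the gain:
true_peak = gain * 1.0

print(round(sample_peak, 4))                 # 0.7071
print(round(20 * math.log10(true_peak), 2))  # 3.01 (dB over full scale)
```

In other words, the reconstructed analogue waveform peaks at √2 ≈ 1.414 even though no individual sample value exceeds 1.0.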

Similarly, generator UGens seem to skip some points in their wavetables. For example, SinOsc uses a table of 8192 samples. Supposing scsynth runs at a 44100 Hz sample rate, any frequency greater than 5.38330078125 Hz (= 44100/8192) makes the read pointer advance by more than one table entry per output sample, so some table values are stepped over. I am not sure, though.
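To put numbers on that, the read pointer advances by freq × tableSize / sampleRate entries per output sample, so above 44100/8192 Hz some entries are indeed stepped over (though, as far as I understand, SinOsc interpolates between table entries, so skipping by itself need not distort the output). A small sketch of the arithmetic:

```python
# Phase increment through an 8192-point wavetable at a 44100 Hz sample rate.
# At exactly 44100/8192 Hz the pointer advances one table entry per sample;
# above that, entries are stepped over.
table_size = 8192
sr = 44100.0

def entries_per_sample(freq):
    """Number of wavetable entries the read pointer advances per output sample."""
    return freq * table_size / sr

print(entries_per_sample(44100.0 / 8192))  # 1.0
print(round(entries_per_sample(440.0), 2)) # 81.73 -- most entries are skipped
```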

Even if my reasoning is correct, no ideas come to mind for visualising intersample peaks resulting from single or multiple instances of generator UGens.

Any ideas or code examples would be deeply appreciated!

This question has the following background:

Recently I watched a YouTube video by a specialist in acoustics and audio systems. A subscriber of the channel asked about normalisation: “Is normalising to -3 or -6 dBFS better than normalising to 0 dBFS?” The specialist’s answer in that video was that normalising to 0 dBFS causes no clipping problems, and that we can ignore the red light on the level meter.

I was disappointed with the answer for the following reasons:

  1. Before studying electroacoustic music, I experienced that playing back sound files normalised to 0 dBFS sometimes turns on the clipping indicators of the level meter. It was horrible to listen to such a sound again just to check whether any unwanted clipped sounds were present.

  2. After having learned about headroom, I always reserve some headroom when normalising. The amount varies: from 90 % to 99 %, or from -1 dBFS to -0.1 dBFS (or even -0.01 dBFS). As soon as I have normalised a sound file, I listen around its loudest passage to check for unwanted clipping. (Sometimes I renormalise the file to a smaller value when I run into problems while working further with the normalised sound.)

Of course, some clipping is sometimes ignorable, and clipped sound usually has a noisy timbral character. However, when I hear unwanted clipping, I can usually see that the waveform has been normalised to 0 dBFS or clipped at some particular value.

I did not share my opinion with the YouTuber, but I asked about it later in a closed forum. Unfortunately, a forum member had the same view as the YouTuber, and I was entirely perplexed. I listed most of the experiences mentioned above and pointed to the “intersample peak” as a theory that summarises them well. However, he thinks my explanation is insufficient to convince him why headroom is necessary, and that the “inter-sample peak” is only some people’s opinion on the matter.

What do you think about it?

You could perhaps put an LFPulse through FFT, PV_BrickWall it, and IFFT. That might do it…?
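The brickwall idea can also be sketched offline. This is a hedged Python equivalent (a naive O(N²) DFT, standard library only, not the PV_BrickWall UGen itself): band-limiting a ±1 square wave immediately produces ripples that overshoot the sample values.

```python
import cmath

N = 256
square = [1.0 if n < N // 2 else -1.0 for n in range(N)]

def dft(x):
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N) for k in range(N)).real / N
            for n in range(N)]

X = dft(square)
cutoff = 16
# brickwall: zero every bin above the cutoff (and its mirror image)
for k in range(N):
    if cutoff < k < N - cutoff:
        X[k] = 0

limited = idft(X)
peak = max(abs(v) for v in limited)
print(round(peak, 3))  # well above 1.0 -- the Gibbs ripples overshoot +/- 1
```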

I have a nice client-side graphical demonstration that I’ll be happy to share, next time I’m at the computer.

This “specialist” is definitively incorrect. (It might not cause clipping if the DAC’s reconstruction filter is 100% analog, which is basically not done anymore because of phase distortion, or if the DAC does zero-order hold and digital filtering using floats… but if the DAC is using integers for this, then the digitally oversampled signal would definitely be clipped. If that person denies this, then they don’t know what they’re talking about.)
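To make the integer case concrete, a minimal sketch (assuming a typical 16-bit fixed-point conversion with saturation): an intersample value around 1.414, which a float pipeline would carry through unharmed, saturates at full scale in the integer stage.

```python
def to_int16(x):
    """Typical fixed-point conversion: scale to 16-bit range, then saturate."""
    v = int(round(x * 32767))
    return max(-32768, min(32767, v))

print(to_int16(1.0))              # 32767 -- exactly full scale
print(to_int16(1.414))            # 32767 -- clipped: the true peak is lost
print(to_int16(1.414) / 32767.0)  # 1.0, not 1.414
```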

hjh


OK, here’s that example. Run the big block first (pasted below), then manipulate the e object:

e.data = Array.fill(z div: 2, [0.7, -0.7]).lace(z);  // square

And you’ll see the Gibbs effect prominently, with the interpolated curve extending outside the samples’ range.

The upsampling technique here uses FFT: with 32 points, you can have a spectrum of up to 16 cosine cycles per window. I extend that spectrum by a factor of 16, with zeros in the extra bins. So you get exactly the original bandlimited function, just with more samples, and the IFFT resynthesis gives you an accurate curve between the samples. (That’s in case this “expert” claims that I’m doing something wrong. I’m not.)
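The same zero-padding scheme can be sketched in plain Python (a naive O(N²) DFT, standard library only), including the Nyquist-bin halving used in the SC code below, to verify that the upsampled curve passes exactly through the original samples:

```python
import cmath, math

def dft(x):
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N) for k in range(N)).real / N
            for n in range(N)]

def upsample(x, factor):
    # zero-pad the spectrum between the positive- and negative-frequency
    # halves; split the Nyquist bin so its mirror copy is not doubled
    N = len(x)
    half = N // 2
    X = dft(x)
    big = [0j] * (N * factor)
    for k in range(half):
        big[k] = X[k] * factor
    big[half] = X[half] * factor * 0.5
    big[N * factor - half] = X[half] * factor * 0.5
    for k in range(half + 1, N):
        big[N * factor - N + k] = X[k] * factor
    return idft(big)

# a bandlimited test signal with two partials
x = [math.sin(2 * math.pi * 3 * n / 32) + 0.5 * math.cos(2 * math.pi * 7 * n / 32)
     for n in range(32)]
up = upsample(x, 16)
# the upsampled curve passes exactly through the original 32 samples
err = max(abs(up[n * 16] - x[n]) for n in range(32))
print(err < 1e-9)  # True
```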

hjh

// James Harkins -- upsampling demo
(
s.waitForBoot {
	var size = 32, oversampling = 16,
	shiftFactor = 16,  // total shift range: +/- 8 samples at the base rate
	fftCosTable = Signal.fftCosTable(size),
	bigCosTable = Signal.fftCosTable(size * oversampling);
	var data;
	var userView, sampleView, spectrumLayouts, spectrumViews,
	shiftSlider, syncButton;

	// [DC, bin1, bin2, nyquist]
	// --> [DC, bin1, bin2, nyquist, 0, 0, 0, nyquist, bin2, bin1]
	var mirrorExpandArray = { |array, outSize|
		array.extend(outSize div: 2 + 1, 0).foldExtend(outSize)
	};
	var ifft = { |polar, outSize(size * oversampling)|
		var halfSize = size div: 2,
		halfExpanded = outSize div: 2,
		factor = outSize / size,
		rho, expanded,
		table = case
		{ factor == 1 } { fftCosTable }
		{ factor == oversampling } { bigCosTable }
		{ Signal.fftCosTable(outSize) };
		rho = mirrorExpandArray.(polar.rho[0 .. halfSize] * factor, outSize);
		// if factor > 1, then the nyquist bin is copied by foldExtend
		// but the base fft does not copy -- so the expanded one is doubling it, we must halve it
		// if factor <= 1, nyquist is not doubled so leave it alone
		if(factor > 1.0) {
			rho[halfSize] = rho[halfSize] * 0.5;
			rho[outSize - halfSize] = rho[halfSize];
		};
		expanded = Polar(
			rho,
			// mirror image should invert the phases
			// 'lace' takes all the first elements, then all the second elements
			// so you get first half = 1, second half = -1
			// doesn't this mess with Nyquist? No...
			// Nyquist phase must be one of 0, pi or -pi.
			// 0.0 * -1 = 0.0, no problem
			// cos(k * theta - pi) == cos(k * theta + pi) so again, no problem
			Array.fill(halfExpanded, [1, -1]).lace(outSize)
			* mirrorExpandArray.(polar.theta[0 .. halfSize], outSize)
		).asComplex;
		expanded.real.as(Signal).ifft(expanded.imag.as(Signal), table)
		.real
	};

	var showSpectrum = { |polar|
		spectrumViews.do { |column, i|
			column[0].value = polar.rho[i] / size * 2;
			column[1].value = polar.theta[i] / 2pi + 0.5;
		};
	};

	var setGuiEnabled = { |bool|
		sampleView.enabled = bool;
		syncButton.enabled = bool.not;
		spectrumViews.do { |column|
			column.do { |view| view.enabled = bool };
		};
	};

	var buffer = Buffer.alloc(s, size * oversampling * 2, 1), synth, ampSlider, freqSlider;

	z = size;  // data size, make available outside

	// data/fft interface, available outside
	e = data = (
		data: nil,
		upsampleShift: 0,
		upsampleShift_: { |self, shift = 0|
			setGuiEnabled.(shift == 0);
			// update display data without syncing everything (copy/paste code, sorry)
			self[\expanded] = self[\baseExpanded].rotate(shift);
			self[\data] = self[\expanded][0, oversampling ..];
			self[\fft] = self[\data].fft(
				Signal.newClear(self[\data].size),
				fftCosTable
			).asPolar;
			self.changed(\data);
		},
		syncShifted: { |self|
			shiftSlider.value = 0.5;
			self[\upsampleShift] = 0;  // clear shift without updating data
			setGuiEnabled.(true);
			self['data_'].value(self, self[\data].debug("set to"));  // set to shifted samples
		},
		data_: { |self, array|
			if(array.size == size) {
				self[\data] = array.as(Signal);
				self[\baseData] = self[\data].copy;
				self[\fft] = self[\data].fft(
					Signal.newClear(array.size),
					fftCosTable
				).asPolar;
				self[\basefft] = self[\fft].deepCopy;
				self[\expanded] = ifft.(self[\fft], size * oversampling);
				self[\baseExpanded] = self[\expanded].copy;
				if(shiftSlider.notNil and: { shiftSlider.value != 0.5 }) {
					shiftSlider.value = 0.5;
					setGuiEnabled.(true);
				};
				self.changed(\data);
			} {
				Error("Container got % values, expected %".format(array.size, size)).throw;
			};
			self
		},
		fft_: { |self, polar|
			self[\fft] = polar;
			self[\data] = ifft.(polar, size);
			self[\expanded] = ifft.(polar, size * oversampling);
			self[\basefft] = self[\fft].deepCopy;
			self[\baseData] = self[\data].copy;
			self[\baseExpanded] = self[\expanded].copy;
			self.changed(\data);
		},
	);

	data.data = Array.fill(size, 0);

	w = Window("Signal lab", Rect.aboutPoint(Window.screenBounds.center, 400, 300));
	w.layout = VLayout(
		HLayout(
			StaticText().fixedWidth_(50).align_(\center).string_("amp:"),
			ampSlider = Slider().orientation_(\horizontal).fixedHeight_(24),
			StaticText().fixedWidth_(50).align_(\center).string_("freq:"),
			freqSlider = Slider().orientation_(\horizontal).fixedHeight_(24)
		),
		HLayout(
			StaticText().fixedWidth_(50).align_(\center).string_("shift:"),
			shiftSlider = Slider().orientation_(\horizontal).fixedHeight_(24)
			.value_(0.5)
			.action_({ |view|
				// up to +/- 8 base-rate samples shift either way
				data.upsampleShift = ((view.value - 0.5) * (oversampling * shiftFactor)).round.asInteger;
			}),
			syncButton = Button().fixedWidth_(50).states_([["sync"]]).enabled_(false)
			.action_({ data.syncShifted })
		),
		StackLayout(
			sampleView = MultiSliderView(),
			userView = UserView()
		).mode_(\stackAll),
		HLayout(*(
			spectrumLayouts = Array.fill(size div: 2 + 1, {
				VLayout().spacing_(2)
			})
		)).spacing_(2)
	);

	spectrumViews = Array.fill(spectrumLayouts.size, { |i|
		var out = [
			Slider().action_({ |view|
				var rho = data[\fft].rho;
				rho[i] = view.value * size * 0.5;
				if(i.inclusivelyBetween(1, size div: 2 - 1)) {
					rho[size - i] = rho[i];
				};
				data.fft = Polar(rho, data[\fft].theta);
			}),
			Knob().mode_(\vert).action_({ |view|
				var theta = data[\fft].theta;
				theta[i] = (view.value - 0.5) * 2pi;
				if(i.inclusivelyBetween(1, size div: 2 - 1)) {
					theta[size - i] = theta[i].neg;
				};
				data.fft = Polar(data[\fft].rho, theta);
			}),
			StaticText().fixedHeight_(18).string_("k=" ++ i).align_(\center)
		];
		out.do { |view| spectrumLayouts[i].add(view) };
		out
	});

	userView
	.background_(Color.white)
	.drawFunc_({ |view|
		var extent = view.bounds.extent,
		hardcodedMultiSliderMargin = 6,  // don't change this
		xSize = extent.x, ySize = extent.y - hardcodedMultiSliderMargin - sampleView.valueThumbSize,
		bigSize = size * oversampling,
		xWidth = (xSize - hardcodedMultiSliderMargin) / bigSize,
		adjustment = ((xSize - hardcodedMultiSliderMargin) / size + hardcodedMultiSliderMargin) * 0.5,
		yAdjustment = (hardcodedMultiSliderMargin + sampleView.valueThumbSize) * 0.5,
		polar = data[\fft],
		halfSize = size div: 2,
		timeDomain,
		unmapX = { |x|
			((x - adjustment) / xWidth) % bigSize
		},
		mapY = { |y|
			(0.5 - (0.5 * y)) * ySize + yAdjustment
		},
		// new func: x is horizontal view position
		mapPoint = { |x|
			Point(x, mapY.(timeDomain.blendAt(unmapX.(x), \wrapAt)))
		};

		timeDomain = data[\expanded];

		Pen.moveTo(mapPoint.(hardcodedMultiSliderMargin div: 2));
		(hardcodedMultiSliderMargin div: 2 .. xSize - (hardcodedMultiSliderMargin div: 2)).do { |x|
			Pen.lineTo(mapPoint.(x))
		};
		Pen.stroke;
	});

	sampleView.background_(Color(1, 1, 1, 0))  // Color.clear has a bug
	.fillColor_(Color.black).strokeColor_(Color.black)
	.elasticMode_(true)
	.drawRects_(true).drawLines_(false)
	.thumbSize_(8)
	.gap_(0)
	.value_(data[\data] * 0.5 + 0.5)
	.action_({ |view|
		data.data = view.value * 2 - 1;
		showSpectrum.(data[\fft]);
	});

	data.addDependant {
		sampleView.value = data[\data] * 0.5 + 0.5;
		showSpectrum.(data[\fft]);
		buffer.setn(0, data[\expanded].as(Signal).asWavetable);
		userView.refresh;
	};

	data.changed(\data);  // force refresh all views

	w.onClose = { data.releaseDependants; synth.release; buffer.free };

	w.front;

	synth = { |bufnum, freq = 100, amp = 0|
		(Osc.ar(bufnum, freq) * amp).dup
	}.play(args: [bufnum: buffer]);

	freqSlider.value_(100.explin(5, 500, 0, 1))
	.action_({ |view| synth.set(\freq, view.value.linexp(0, 1, 5, 500)) });

	ampSlider.value_(0)
	.action_({ |view| synth.set(\amp, view.value.lincurve(0, 1, 0, 1, 3)) });

	~ifft = ifft;
};
)

Thank you for your kind and quick responses.
Your explanations and code helped me!

Warm regards,