Wavetable.sineFill

Going back to the Generating unique saw wave thread… the simplest method for generating the standard formula can be found in the documentation for Wavetable:

The following function serves the same purpose… for a custom/classic saw:

{ |res, n_harmonics| Wavetable.sineFill(res, 1/(1..n_harmonics)) }

For anyone who’s interested, here’s a method of playback:

Buffer.loadCollection(
	server: s,
	numChannels: 1,
	collection: Wavetable.sineFill(1024, 1/(1..6)),
	action: { |b|
		SynthDef(\x, { |n|
			// fixed-shape envelope: [time, level, curve] breakpoints
			var xyc = Env.xyc([
				[0, 0, \sin],
				[0.23, 1, \sin],
				[0.58, 0.64, \wel],
				[1, 0]
			]);
			Out.ar(0, Pan2.ar(
				in: Osc.ar(b, BufRateScale.kr(b) * n)
				* xyc.ar(Done.freeSelf, \gate.kr(1), \x.kr(0.26), \y.kr(1)),
				pos: 0,
				level: AmpComp.kr(n, 54, 0.36)
			))
		})
		// .add
		.play(s, [\n, 172])
	}
)

There’s very little out there on the subject… which makes one wonder whether there’s any material at all on creating interesting (or experimental) wavetables.

Does anyone use them?

Hello @Rainer,

If you haven’t already become familiar with the SignalBox quark, I’m going to point you to Signal:*readWave.

This method will instantiate a Signal filled with a periodic wave read from a sound file. (You’ll need to supply the source frequency.) The method works directly in the time domain with the sampled sound file via windowing and folding.

Once you’ve got the Signal, you can then convert for use as a Wavetable via Signal:-asWavetable.
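
For example, a minimal sketch of that conversion step (using sineFill here as a stand-in for a Signal returned by readWave):

(
var sig = Signal.sineFill(1024, 1/(1..6)); // stand-in for readWave output
var table = sig.asWavetable; // wavetable format, twice the size
b = Buffer.loadCollection(s, table); // ready for Osc / Shaper
)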

A single Wavetable isn’t necessarily super interesting on its own. To do something interesting you’ll want to create multiple Wavetables from various parts of the source sound file, and then crossfade.
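
And a minimal sketch of the crossfading idea, using VOsc over consecutively allocated buffers (the tables below are illustrative stand-ins, not SignalBox output):

(
// four consecutive buffers, each holding a 1024-sample wavetable (2048 floats)
~bufs = Buffer.allocConsecutive(4, s, 2048);
~bufs.do { |buf, i|
	buf.loadCollection(Signal.sineFill(1024, 1/(1..((i + 1) * 2))).asWavetable);
};
)

(
{
	// sweep the buffer position to crossfade between adjacent tables
	var pos = ~bufs.first.bufnum + LFTri.kr(0.1).range(0, 2.999);
	VOsc.ar(pos, 220, mul: 0.1) ! 2
}.play;
)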

@joslloand Excellent recommendation regarding the SignalBox quark; there are a ton of interesting options for manipulating waveforms. Very well done.

A single Wavetable isn’t necessarily super interesting on its own

I’m rather drawn to the prospect of generating single-frame wavetables from a mathematically sound formula as their source of origin.

Regarding the notion that every sound can be reproduced using only sine waves at various amplitudes and frequencies:

Is it theoretically possible for Wavetable.sineFill to be just as capable of producing any sound, using only the amplitudes and phases of a number of sine waves?

Yes.
You can sample the values of a function and fill the table with those. You can scan a path across an image and take the grayscale values and fill it with those.
For example, here I scan a function in a spiraling pattern and play it with DynKlang in real time. You could just as well fill a wavetable in a similar way.
You could also draw a (bezier) curve and take values from that, or even fill it with data from some sensor (weather data over a longer period).
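
For instance, the function-sampling case is just a Signal.fill; a small sketch (the decaying sine is an arbitrary choice):

(
var size = 1024;
var table = Signal.fill(size, { |i|
	var x = i / size;
	sin(2pi * x) * exp(-3 * x) // one decaying sine cycle
});
table.plot; // table.asWavetable could then be loaded into a buffer for Osc
)
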
Or with FFT data of existing sounds:

// or any input signal
a = Signal.fill(32, { 1.0.rand2 });

// discrete Fourier transform
b = a.fft(Signal.newClear(a.size), Signal.fftCosTable(a.size));

// complex form is less useful for resynthesis
// --> Polar
b = b.asPolar;

// sineFill adds sines, not cosines,
// so it needs a 90-degree phase shift
t = (b.theta + 0.5pi).wrap(-pi, pi);

// it won't be exactly the same
// 1. rho[0] and theta[0] are DC component. sineFill can't do that.
// 2. sineFill normalizes so the amps won't match.
// but the shapes should be pretty close

// top graph = sineFill; lower = IFFT
// slider --> right, and compare against `a.plot`, should be uncanny

(
var indexSl, indexView, plotSignal, plotIfft;

c = Array(32);
d = Array(32);

(1..32).do { |i|
	var fft, ifft;

	c = c.add(Signal.sineFill(a.size, b.rho[1..i], t[1..i]));
	
	fft = Polar(b.rho.copy, b.theta.copy);
	fft.rho.putSeries(i + 1, nil, a.size - 1, 0);
	fft.theta.putSeries(i + 1, nil, a.size - 1, 0);
	
	fft = fft.asComplex;
	ifft = fft.real.ifft(fft.imag, Signal.fftCosTable(a.size));
	d = d.add(ifft.real);
};

w = Window("test", Rect(800, 200, 400, 600)).front;
w.layout = VLayout(
	indexView = View().fixedHeight_(24),
	plotSignal = MultiSliderView(),
	plotIfft = MultiSliderView()
);

indexSl = EZSlider(indexView, indexView.bounds.moveTo(0, 0), "index", [0, 31.999], {
	plotSignal.value = c[indexSl.value] * 0.5 + 0.5;
	plotIfft.value = d[indexSl.value] * 0.5 + 0.5;
});

[plotSignal, plotIfft].do { |sl|
	sl.elasticMode_(true).drawRects_(false).drawLines_(true);
};

indexSl.action.value;
)

hjh

Excellent responses everyone.

Many sincere thanks.

One final thought… is there a general advantage/disadvantage between the Signal and Wavetable classes?

Perhaps a lower-level explanation may help to further our understanding of the relationship between the two.

Why are they separate in the first place?

Wavetable and the associated format of data within a buffer are only a math optimization for Osc (and siblings) and Shaper. Nothing more than that.
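
Concretely (the pair encoding below is from memory; check the Wavetable help file for the exact format):

(
var sig = Signal[0.0, 0.5, 1.0, 0.5];
sig.asWavetable.postln;
// each consecutive sample pair (a, b) is stored as (2a - b, b - a), which lets
// Osc and Shaper do linear interpolation with a single multiply-add per sample
)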

hjh


Any recommendations for playback of a Signal?

Without converting to Wavetable?

Hi @joslloand, excited to try this, but I get "Message 'kaiserWindow' not understood". Does that come from another quark that I might not have?

EDIT: found it, in the ExtraWindows quark.

Likely there are some hidden assumptions here… In context, I’m assuming Signal playback is being contrasted against Wavetable playback.

To my recollection, Wavetable playback takes place in the Osc family of UGens (or Shaper – incidentally, if you map a Phasor onto -1 … +1, and feed this into Shaper, it should sound like Osc with 0 frequency, and the same Phasor mapped onto 0 … 2pi in the phase input). This implies cyclical playback. I don’t recall other UGens that use wavetable format.
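
A quick sketch of that equivalence (assuming b holds wavetable-format data; 220 Hz is an arbitrary choice):

(
{
	var ramp = Phasor.ar(0, 220 * SampleDur.ir, 0, 1); // one 0..1 ramp per cycle
	[
		Shaper.ar(b, ramp.linlin(0, 1, -1, 1)), // ramp mapped onto -1..+1
		Osc.ar(b, 0, ramp.linlin(0, 1, 0, 2pi)) // same ramp as Osc's phase input
	] * 0.1
}.play;
)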

Take the same Phasor, map it onto 0 … numFrames, and use it in BufRd, and you’ve got cyclical playback. (This is the technique in my recent wavetable quark – so you could also have read the code, maybe gotten your answer faster than waiting for a reply.)
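
A sketch of that BufRd approach (here b holds a plain Signal, not wavetable format, and 220 Hz is again arbitrary):

(
{
	var frames = BufFrames.kr(b);
	var phase = Phasor.ar(0, 220 * frames * SampleDur.ir, 0, frames);
	BufRd.ar(1, b, phase, loop: 1, interpolation: 4) * 0.1 ! 2
}.play;
)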

PlayBuf with looping is also cyclical playback.

(The other hidden assumption is that the Signal simply contains a series of samples – almost certainly the case, but Signal is really just a FloatArray and doesn’t insist that the data only be consecutive audio samples.)
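
For instance, a quick check:

Signal[0.1, 0.2, 0.3].isKindOf(FloatArray); // -> true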

hjh

So then Buffer.loadCollection is a sensible choice? Along with PlayBuf within the SynthDef?

Hello @semiquaver

If you installed SignalBox via any of the standard methods, the ExtraWindows quark should have been auto-magically installed, as it is listed as a SignalBox dependency.

If this didn’t happen, then it sounds like it might be worth filing an issue.

I’m assuming the answer is rather apparent… though now I’m experiencing an issue with the rate: argument in PlayBuf

This is a modified version of the code I’ve been using to play the Wavetable, refactored for Signal… the .play tests at the end sound incorrect:

(
Buffer.loadCollection(
	server: s,
	numChannels: 1,
	collection: Signal.sineFill(1024, 1/(1..6)), // a Signal now, not a Wavetable
	action: { |b|
		SynthDef(\x, { |n|
			var xyc = Env.xyc([
				[0, 0, \sin],
				[0.23, 1, \sin],
				[0.58, 0.64, \wel],
				[1, 0]
			]);
			Out.ar(0, Pan2.ar(
				in: PlayBuf.ar(
					numChannels: 1,
					bufnum: b,
					rate: BufRateScale.kr(b) * n, // ?
					startPos: 0,
					loop: 1,
					doneAction: 0
				),
				pos: 0,
				level: xyc.ar(Done.freeSelf, \gate.kr(1), \x.kr(0.26), \y.kr(1))
			))
		})
		.play(s, [\n, 172])
		// .play(s, [\n, 472])
		// .play(s, [\n, 872])
	}
)
)
BufRateScale.kr(b) * n

But what is n?

In any case, I’d start by thinking through the math conversions.

What we have is, I’ll assume, cycles/sec.

What we want is, buffer samples to cover per second.

It will eventually be BufRateScale’d but the frequency conversion is key, and independent of file sample rate.

Let’s say your buffer is n samples. Then normal playback rate would produce sr/n cycles per second – sr = sample rate. (We advance by sr samples in one second, and there are n samples per cycle, so sr/n is (samples / second) / (samples / cycle) = samples/second * cycles/sample = cycles/second.)
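
(Worked example: with sr = 44100 and a 1024-sample buffer, normal playback gives 44100 / 1024 ≈ 43.07 cycles per second, so reaching 172 Hz needs a rate of about 172 / 43.07 ≈ 3.99.)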

So we want to scale this baseline sr/n to get freq Hz. Freq is in cycles/sec, and so is sr/n, so the scaling factor would be their quotient = freq * n / sr. Then this needs to be scaled for buffer rate, so it’s probably BufRateScale.kr(bufnum) * freq * n * SampleDur.ir. (SampleDur.ir just allows the division by sample rate to be expressed as a multiplication, which is more efficient.)

I admit I haven’t tested this – I could be off base – but I’m spelling out all the unit conversions as reassurance that the basic idea makes sense, and also as a pedagogical device. Just hope I got it right :laughing:

hjh

I understand the n in your derivation to be the number of samples in the buffer, i.e. BufFrames.kr(bufnum)...

Thank you very much.

One more small detail… why is loop: 1 instead of loop: 0 required when using a one-shot envelope?

Out.ar(0, Pan2.ar(
	in: PlayBuf.ar(
		numChannels: 1,
		bufnum: b,
		rate: SampleDur.ir * BufFrames.kr(b) * BufRateScale.kr(b) * freq,
		startPos: 0,
		loop: 1, // one-shot output
		// loop: 0, // no sound...?
		doneAction: 0
	),
	pos: 0,
	level: Env.perc.ar(Done.freeSelf)
))

How long is the envelope?

How long is the duration of one wave cycle?

How many wave cycles do you need to fill the envelope duration? I’m willing to bet that it’s greater than 1.
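
(Working it through: Env.perc defaults to a 0.01 s attack plus a 1 s release, about 1.01 s total. One pass through the table at the computed rate lasts 1/freq seconds, e.g. about 5.8 ms at 172 Hz, so the envelope spans roughly 174 cycles. With loop: 0, the buffer plays its single cycle once and then outputs silence for the rest of the envelope.)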

hjh