This method will instantiate a Signal filled with a periodic wave read from a sound file. (You’ll need to supply the source frequency.) The method works directly in the time domain with the sampled sound file via windowing and folding.
Once you’ve got the Signal, you can convert it for use as a Wavetable via Signal: -asWavetable.
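A minimal sketch of that conversion chain, assuming the server `s` is booted — `Signal.sineFill` stands in here for whatever single cycle the method would actually extract from your sound file:

```supercollider
(
// Signal -> Wavetable -> Buffer -> Osc (untested sketch)
var n = 1024;                               // table size: must be a power of two
var sig = Signal.sineFill(n, [1]);          // placeholder for the extracted cycle
Buffer.loadCollection(s, sig.asWavetable, action: { |b|
    { Osc.ar(b, 220, 0, 0.2) ! 2 }.play;    // Osc requires wavetable format
});
)
```

Note that `asWavetable` doubles the size: a Signal of n samples becomes a Wavetable of 2n floats, so the buffer ends up with 2n frames.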
A single Wavetable isn’t necessarily all that interesting on its own. More likely you’ll want to create multiple Wavetables from various parts of the source sound file, and then crossfade between them.
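VOsc is the usual UGen for that kind of crossfading — it needs consecutive buffer numbers and equal table sizes. A sketch, with `sineFill` tables standing in for cycles you’d actually extract from different parts of the file:

```supercollider
(
// Untested sketch: crossfade a set of wavetables with VOsc
var n = 1024, numTables = 4;
var bufs = Buffer.allocConsecutive(numTables, s, n * 2);  // wavetable = 2n floats
bufs.do { |b, i|
    b.loadCollection(Signal.sineFill(n, Array.fill(i + 1, 1)).asWavetable);
};
{
    // sweep the buffer position to crossfade between adjacent tables;
    // stay just below the last bufnum so VOsc never reads past the set
    var pos = LFTri.kr(0.1).range(bufs.first.bufnum, bufs.last.bufnum - 0.001);
    VOsc.ar(pos, 220, 0, 0.2) ! 2
}.play;
)
```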
You can sample the values of a function and fill the table with them. You can scan a path across an image and fill it with the grayscale values.
For example, here I scan a function in a spiraling pattern and play it with DynKlang in real time. You could fill a wavetable in a similar way.
You could also draw a Bézier curve and take values from that. Or even fill it with sensor data (say, weather data collected over a longer period). Or use FFT data of existing sounds.
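The function-sampling idea in particular is only a few lines. An untested sketch — swap in any function (or external data) you like; the particular waveform below is arbitrary:

```supercollider
(
// Fill a Signal by sampling a function over one cycle, then play it as a wavetable
var n = 1024;
var sig = Signal.fill(n, { |i|
    var x = i / n * 2pi;
    sin(x) + (0.3 * sin((3 * x) + 1.5))   // any function of one cycle
});
sig = sig.normalize;                      // keep the table within -1 .. +1
Buffer.loadCollection(s, sig.asWavetable, action: { |b|
    { Osc.ar(b, 110, 0, 0.2) ! 2 }.play;
});
)
```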
Likely there are some hidden assumptions here… In context, I’m assuming Signal playback is being contrasted against Wavetable playback.
To my recollection, Wavetable playback takes place in the Osc family of UGens (or Shaper – incidentally, if you map a Phasor onto -1 … +1, and feed this into Shaper, it should sound like Osc with 0 frequency, and the same Phasor mapped onto 0 … 2pi in the phase input). This implies cyclical playback. I don’t recall other UGens that use wavetable format.
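A sketch of that claimed equivalence, side by side on two channels — untested, and assuming `b` is a wavetable-format Buffer already on the server:

```supercollider
(
{
    var phase = Phasor.ar(0, 220 * SampleDur.ir, 0, 1);  // 0..1 ramp at 220 Hz
    [
        Shaper.ar(b, phase.linlin(0, 1, -1, 1), 0.2),    // left: Shaper, -1 .. +1
        Osc.ar(b, 0, phase.linlin(0, 1, 0, 2pi), 0.2)    // right: Osc, freq 0, phase 0 .. 2pi
    ]
}.play;
)
```

If the equivalence holds, the two channels should sound identical.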
Take the same Phasor, map it onto 0 … numFrames, and use it in BufRd, and you’ve got cyclical playback. (This is the technique in my recent wavetable quark – so you could also have read the code, maybe gotten your answer faster than waiting for a reply.)
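In code, that looks something like this — a sketch assuming `b` holds exactly one cycle of the waveform in an ordinary (non-wavetable) buffer:

```supercollider
(
{
    var freq = 220;
    // advance BufFrames per cycle, freq cycles per second
    var phase = Phasor.ar(0, freq * BufFrames.kr(b) * SampleDur.ir,
        0, BufFrames.kr(b));
    BufRd.ar(1, b, phase, interpolation: 4) * 0.2 ! 2   // 4 = cubic interpolation
}.play;
)
```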
PlayBuf with looping is also cyclical playback.
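For completeness, again assuming `b` holds one cycle:

```supercollider
// Looping PlayBuf repeats the buffer once every n frames
{ PlayBuf.ar(1, b, BufRateScale.kr(b), loop: 1) * 0.2 ! 2 }.play;
```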
(The other hidden assumption is that the Signal simply contains a series of samples – almost certainly the case, but Signal is really just a FloatArray and doesn’t insist that the data only be consecutive audio samples.)
In any case, I’d start by thinking through the math conversions.
What we have is, I’ll assume, cycles per second (Hz).
What we want is buffer samples to cover per second.
It will eventually be BufRateScale’d but the frequency conversion is key, and independent of file sample rate.
Let’s say your buffer is n samples. Then normal playback rate would produce sr/n cycles per second – sr = sample rate. (We advance by sr samples in one second, and there are n samples per cycle, so sr/n is (samples / second) / (samples / cycle) = samples/second * cycles/sample = cycles/second.)
So we want to scale this baseline sr/n to get freq Hz. Freq is in cycles/sec, and so is sr/n, so a scaling factor would be their quotient = freq * n / sr. Then this needs to be scaled for buffer rate, so it’s probably BufRateScale.kr(bufnum) * freq * n * SampleDur.ir. (SampleDur.ir just allows the division by sample rate to be expressed as a multiplication, which is more efficient.)
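Putting the formula into a playable shape — an untested sketch, again assuming `b` holds one cycle:

```supercollider
(
// Worked check of the units: n = 1024, sr = 48000, freq = 440
//   baseline: sr / n = 46.875 cycles/sec at normal rate
//   scale:    freq * n / sr = 440 * 1024 / 48000 ~= 9.387 frames per output sample
{
    var freq = 440;
    var rate = BufRateScale.kr(b) * freq * BufFrames.kr(b) * SampleDur.ir;
    BufRd.ar(1, b, Phasor.ar(0, rate, 0, BufFrames.kr(b))) * 0.1 ! 2
}.play;
)
```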
I admit I haven’t tested this – I could be off base – but I’m spelling out all the unit conversions as reassurance that the basic idea makes sense, and also as a pedagogical device. I just hope I got it right.