Does this class exist?

Hi All,

For my project I need to visualize the shape of the waveform in real time while recording (like in Audacity, for example). I tried it with SoundFileView's setData method in a loop, but it is too slow and eats a lot of resources, so something simpler would be better. Perhaps somebody here knows whether such a class exists.
Thank you.

paum

Have you tried using the server scope?

Hi Josh, thanks for quick response.

Maybe I was not as clear as I should have been. Sorry for that. Scope is something different.

I probably should have written that I need something very much like recording in e.g. Audacity: you can see the whole recorded buffer, you can watch it grow in real time, and you can select parts of it, as in SoundFileView, to process them.

My buffer would hold tens of minutes of audio. One hour, for example.

My laptop has a horizontal screen resolution of 1366 pixels. Let’s be generous and say maybe you have 1920.

An hour is 60 min x 60 sec/min = 3600 sec. So each horizontal pixel would represent 3600 sec / 1920 pixels = 1.875 sec/pixel, which at 44100 samp/sec is 1.875 x 44100 = 82,687.5 samp/pixel.

So any approach based on getting all the sample data into sclang memory and letting SoundFileView figure out what to display will be inefficient by almost 5 orders of magnitude :astonished: . (You didn’t say specifically that this is what you tried, but you did say it’s too slow, so I’m guessing there is some inefficiency involved.)
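As a sanity check, here is the same arithmetic in a few lines of sclang (assuming 44.1 kHz and a 1920-pixel-wide display, as above):

```supercollider
(
var sampleRate = 44100, seconds = 60 * 60, pixels = 1920;
var secPerPixel = seconds / pixels;            // 1.875 sec/pixel
var sampPerPixel = secPerPixel * sampleRate;   // 82687.5 samp/pixel
"% sec/pixel, % samp/pixel".format(secPerPixel, sampPerPixel).postln;
)
```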

I’d suggest calculating a triggered running RMS in the recording synth and, at some interval correlated to the zoom level, using SendReply to send a single value per channel back to the language for display. I’m not at the computer right now but could probably hack up an example later.

Do you really want to show the whole hour on screen at once, or would you prefer more detail with a scrolling display?

hjh

Dear James, thank you for your generous reply.
Yes, I know. I think I can manage. I was just curious whether some work had already been done on this topic. If anybody is interested, here is my current state; I think I will take it further, but it still needs to be optimized.


s.boot;
~b = Buffer.alloc(s, 44100 * 60*60  , 1); // one hour
~l = List.new; // for storing RMS


(
Ndef(\rec, {
    var sig = SoundIn.ar(0);
    RecordBuf.ar(sig, ~b, loop: 1);
    SendPeakRMS.kr(sig, replyRate: 5.0, peakLag: 3, cmdName: '/reply', replyID: -1);
}).play;
)


(
OSCdef(\r, { |m|
    ~l.add(m[4]);       // metered RMS value
    ~l.add(m[4].neg);   // my hack to have the signal in the negative part too
}, '/reply');
)




( // make a simple SoundFileView
	y = Window.screenBounds.height - 120;
	w = Window.new("soundfile test", Rect(200, y, 740, 100)).alwaysOnTop_(true);
	w.front;
	a = SoundFileView.new(w, Rect(20,10, 700, 80));

	f = SoundFile.new;

	a.soundfile = f;            // set soundfile
	a.refresh;                  // refresh to display the file.
	a.gridOn = false
)

// realtime display of audio signal being recorded
(
Tdef(\t, {
    loop {
        // note: this loads the whole buffer, but the array is not
        // actually used for display below
        ~b.loadToFloatArray(action: { |array|
            {
                a.setData(
                    FloatArray.newFrom(~l * 10) // scaling
                )
            }.defer;
        });
        0.5.wait;
    }
}).play;
)

Tdef(\t).stop;
Ndef(\rec).stop;
Ndef(\rec).clear;
OSCdef(\r).free;





This pretty much works:

(
var width = Window.screenBounds.width;
var channels = 2;
var wc = width * channels, wc2 = wc * 2;
var shift = (wc2 * 0.9).round(2).asInteger;
var index = 0;

d = FloatArray.newClear(wc2);

v = SoundFileView(nil, Rect(800, 200, 500, 400)).front
.alloc(width * 2, 2)
.gridOn_(false);

OSCdef(\recSummary, { |msg|
	var chan = msg[3..];
	chan = (chan ++ chan.neg).as(FloatArray);
	d.overWrite(chan, index);
	index = index + (channels * 2);
	if(index >= wc2) {
		// shift everything left
		d.overWrite(d[shift..], 0);
		d.overWrite(FloatArray.fill(shift, 0), wc2 - shift);
		index = wc2 - shift;
		defer { v.setData(d, startFrame: 0, channels: channels) };
	} {
		defer { v.set(index div: 2, chan) };
	};
}, '/recSummary', s.addr);

a = {
	var sig = SoundIn.ar(Array.series(channels, 0, 1));
	var trig = Impulse.ar(32);  // speed of updates
	var sampleCount = Phasor.ar(trig, 1, 1, 10e34, 1);
	// using FOS here as an integrator
	// because Integrator's coefficient is control rate only
	var runningSum = FOS.ar(sig.squared, DC.ar(1), DC.ar(0), Delay1.ar(trig) <= 0);
	var rms = (runningSum / sampleCount).sqrt;

	SendReply.ar(trig, '/recSummary', rms);

	// RecordBuf.ar( /* you fill in this part */ );

	sig * 0.1
}.play;

v.onClose = { a.release; OSCdef(\recSummary).free };
)

PS I think you absolutely do not want to ~b.loadToFloatArray an hour’s worth of audio every half second. Really, don’t do this.
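To put a rough number on that (assuming a mono buffer at 44.1 kHz, stored as 32-bit floats):

```supercollider
(
var frames = 44100 * 60 * 60;       // 158,760,000 samples in one hour
var megabytes = frames * 4 / 1e6;   // ~635 MB as 32-bit floats
// at two loads per second, that is over a gigabyte per second
// shuttled from the server to the language, every second, forever
(megabytes * 2).postln;
)
```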

EDIT: Replaced the synth, after finding a trick to do the RMS at audio rate. Now it reports a higher (more accurate) level.

hjh


Wow James,

this solution is really great! I have to study it more. Why did you not just use SendPeakRMS? Instead there is some magic I can't quite see into yet.

thanks again.


No good reason – I just forgot that the unit exists :flushed: You could replace those few lines in the SynthDef with a SendPeakRMS – just be sure to use only the peak or the RMS values in the OSCdef, not both (because that section expects one value per channel).
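An untested sketch of what I mean, reusing the `channels` variable and the `/recSummary` OSCdef from the earlier example (SendPeakRMS replies with the command name, node ID and reply ID followed by a peak/RMS pair per channel – that's why m[4] was the mono RMS in the first snippet in this thread):

```supercollider
// sketch: SendPeakRMS in place of the trig / Phasor / FOS lines
a = {
    var sig = SoundIn.ar(Array.series(channels, 0, 1));
    // reply: [cmdName, nodeID, replyID, peak0, rms0, peak1, rms1, ...]
    SendPeakRMS.kr(sig, replyRate: 32, peakLag: 3, cmdName: '/recSummary');
    sig * 0.1
}.play;

// and in the OSCdef, keep only the RMS values (every second one,
// starting at index 4):
// var chan = msg[4, 6..];
```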

Tbh I wasn’t sure how good the GUI would look, but it turned out quite well.

hjh