Thanks to all for your help. @jamshark70, I don’t know if it’s what you’re suggesting but I will use lazy loading.
I’ve tested it and it works well.
Anyway, I have to confess that I’m a bit disappointed.
I followed the advice for debugging and it’s not working at all in my case: the exit is not reproducible in debug mode, and something clearly behaves differently there.
I find that a bit scary.
@smoge, can you explain the difference between debugging by opening scsynth in gdb and debugging by attaching gdb to scsynth, please?
I’d be happy to avoid wasting time recompiling SC etc. only to find that I can’t reproduce the exit that way either.
I have a bit of a feeling of sweeping the dust under the carpet here.
SC isn’t exactly a light build, but it isn’t very, very heavy either (a complete build on my machine takes, oh, 15 minutes, or less?). I’d suggest changing your build_type to RelWithDebInfo, rebuilding once, and leaving it at that setting. I don’t think there’s a strong benefit to building as Release.
Re: attaching – it’s a bit of a stab in the dark. I know you’re not going to like that, but the fact is, this is a very weird problem.
I was able to attach by asking sclang s.pid, then plugging that pid into sudo gdb -p xxxxxx. I could get it to work only with sudo. If you don’t use sudo, there’s some message about changing a ptrace setting. I changed the setting, but wasn’t willing to reboot, so I don’t know if that’s entirely effective.
After attaching with sudo, quitting the server caused the JACK server to freeze – had to kill -9 it. That’s not wonderful, but it’s worth it if it gets more information than you got before.
I think I would look at it from a different perspective.
Bombarding the server with 2017* b_allocRead messages all at once, without any sort of partitioning or interim waiting, doesn’t strike me as an ideal thing to do. (As I noted in an earlier message, I take pains in my code to avoid doing anything like this. And… 2017? You’re never going to use all 2017 samples in one set. That’s a lot of wasted effort.)
Doing something that is not ideal may or may not work. You might get away with it on some machines, and then find another machine or environment where it fails.
The fact that a high stress use case didn’t fail in x and y environments isn’t a guarantee that it won’t fail in z.
“Sweeping it under the rug”… well ok, I haven’t exactly refuted that. But … speaking for myself, if SuperDirt were mine, I would bend over backward to avoid mass loading that many samples without any breaks in the process. Load them a hundred at a time, s.sync in between each chunk, I bet the problem goes away.
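A minimal sketch of that chunked approach, assuming `~dirt` is a running SuperDirt instance and using the quark’s `loadSoundFileFolder` method (the folder glob and chunk size here are illustrative, not prescriptive):

```supercollider
// sketch: load sample folders in chunks of 100, syncing with the server in between
(
fork {
    var folders = "~/Dirt-Samples/*".standardizePath.pathMatch;
    folders.clump(100).do { |chunk|
        chunk.do { |folder|
            ~dirt.loadSoundFileFolder(folder, folder.basename);
        };
        s.sync;  // wait until the server has processed this chunk's b_allocRead's
    };
    "all sample banks loaded".postln;
};
)
```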
hjh
* For giggles:
p = "808 (6) 808bd (25) 808cy (25) 808hc (5) 808ht (5) 808lc (5) 808lt (5) 808mc (5) 808mt (5) 808oh (5) 808sd (25) 909 (1) ab (12) ade (10) ades2 (9) ades3 (7) ades4 (6) alex (2) alphabet (26) amencutup (32) armora (7) arp (2) arpy (11) auto (11) baa (7) baa2 (7) bass (4) bass0 (3) bass1 (30) bass2 (5) bass3 (11) bassdm (24) bassfoo (3) battles (2) bd (24) bend (4) bev (2) bin (2) birds (10) birds3 (19) bleep (13) blip (2) blue (2) bottle (13) breaks125 (2) breaks152 (1) breaks157 (1) breaks165 (1) breath (1) bubble (8) can (14) casio (3) cb (1) cc (6) chin (4) circus (3) clak (2) click (4) clubkick (5) co (4) coins (1) control (2) cosmicg (15) cp (2) cr (6) crow (4) d (4) db (13) diphone (38) diphone2 (12) dist (16) dork2 (4) dorkbot (2) dr (42) dr2 (6) dr55 (4) dr_few (8) drum (6) drumtraks (13) e (8) east (9) electro1 (13) em2 (6) erk (1) f (1) feel (7) feelfx (8) fest (1) fire (1) flick (17) fm (17) foo (27) future (17) gab (10) gabba (4) gabbaloud (4) gabbalouder (4) glasstap (3) glitch (8) glitch2 (8) gretsch (24) gtr (3) h (7) hand (17) hardcore (12) hardkick (6) haw (6) hc (6) hh (13) hh27 (13) hit (6) hmm (1) ho (6) hoover (6) house (8) ht (16) if (5) ifdrums (3) incoming (8) industrial (32) insect (3) invaders (18) jazz (8) jungbass (20) jungle (13) juno (12) jvbass (13) kicklinn (1) koy (2) kurt (7) latibro (8) led (1) less (4) lighter (33) linnhats (6) lt (16) made (7) made2 (1) mash (2) mash2 (4) metal (10) miniyeah (4) monsterb (6) moog (7) mouth (15) mp3 (4) msg (9) mt (16) mute (28) newnotes (15) noise (1) noise2 (8) notes (15) numbers (9) oc (4) odx (15) off (1) outdoor (6) pad (3) padlong (1) pebbles (1) perc (6) peri (15) pluck (17) popkick (10) print (11) proc (2) procshort (8) psr (30) rave (8) rave2 (4) ravemono (2) realclaps (4) reverbkick (1) rm (2) rs (1) sax (22) sd (2) seawolf (3) sequential (8) sf (18) sheffield (1) short (5) sid (12) sine (6) sitar (8) sn (52) space (18) speakspell (12) speech (7) speechless (10) speedupdown (9) stab 
(23) stomp (10) subroc3d (11) sugar (2) sundance (6) tabla (26) tabla2 (46) tablex (3) tacscan (22) tech (13) techno (7) tink (5) tok (4) toys (13) trump (11) ul (10) ulgab (5) uxay (3) v (6) voodoo (5) wind (10) wobble (1) world (3) xmas (1) yeah (31)"
.findRegexp("\\([0-9]+\\)")
p.sum { |row| row[1].select(_.isDecDigit).asInteger }
-> 2017
Perhaps there is some room for a slight enhancement in the SuperDirt quark to avoid problems like this, i.e., partitioning the buffer loading into chunks with s.sync in between. That’s a simple change and would prevent an uncommon edge case.
var <>syncAfter = 10; // or nil to load all-at-once
And change loadSoundFiles to:
loadSoundFiles { |paths, appendToExisting = false, namingFunction = (_.basename), action| // paths are folderPaths
    var folderPaths, memory;
    paths = paths ?? { "../../Dirt-Samples/*".resolveRelative };
    folderPaths = if(paths.isString) { paths.pathMatch } { paths.asArray };
    folderPaths = folderPaths.select(_.endsWith(Platform.pathSeparator.asString));
    if(folderPaths.isEmpty) {
        "no folders found in paths: '%'".format(paths).warn; ^this
    };
    memory = this.memoryFootprint;
    "\n\n% existing sample bank%:\n".postf(folderPaths.size, if(folderPaths.size > 1) { "s" } { "" });
    fork {
        var i = 0;
        folderPaths.do { |folderPath|
            this.loadSoundFileFolder(folderPath, namingFunction.(folderPath), appendToExisting);
            i = i + 1;
            // guard against syncAfter == nil ("load all at once"): nil doesn't respond to >=
            if(doNotReadYet.not and: { syncAfter.notNil } and: { i >= syncAfter }) {
                i = 0;
                server.sync;
            };
        };
        if(doNotReadYet) {
            "\n ... sample banks registered, will read files as necessary".postln;
        } {
            "\n... file reading complete. Required % MB of memory.\n\n".format(
                this.memoryFootprint - memory div: 1e6
            ).post
        };
        action.value(this);
    };
}
There were just over 200 sample libraries referenced in the original post. Pausing after every 10 libraries ~= 20 sync calls. The sync round trip, worst case, shouldn’t exceed the audio hardware driver’s period (right now I’m running with a relatively large buffer; scsynth reports 42.7 ms maximum latency, and when I measure the time s.sync takes, it’s always less than this). Worst case, then, it would add ~= 40 ms/chunk * 20 chunks ~= 800 ms: sample loading would finish less than a second later than it would without syncing, and less still with a smaller HW buffer size. It will take longer than that second to switch back to the Tidal window and type an expression, so the sync time shouldn’t make a difference to human users.
After this change, though, loadSoundFiles becomes asynchronous. If other parts of SuperDirt code assume that it’s synchronous, those would have to be updated to use the new action function.
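Since the quoted method already takes an `action` argument, callers that need to run code after loading would move their completion logic into that callback; a sketch (assuming `~dirt` is the SuperDirt instance):

```supercollider
// sketch: react to completion through the action callback instead of assuming
// that loadSoundFiles returns only after all files are read
~dirt.loadSoundFiles(action: { |dirt|
    "sample loading finished".postln;
    // safe to start playing patterns that rely on the samples here
});
```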
Set the doNotReadYet instance variable to true. Then your samples will only be read when you use them. This means that the first time you play a sample it may not sound, but it will load in the background.
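For example (a sketch; the setup lines are the usual SuperDirt startup and may differ from yours):

```supercollider
// sketch: enable lazy loading before registering sample banks
~dirt = SuperDirt(2, s);
~dirt.doNotReadYet = true;   // register banks now, read each file on first use
~dirt.loadSoundFiles;        // fast: no b_allocRead flood at startup
```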
You can read your sound files explicitly and amortize the cost over time, either by waiting or, as James has suggested, by calling s.sync in between. The method to use for this is loadSoundFile(path, name, appendToExisting) – the methods loadSoundFiles etc. are just convenience wrappers around it.
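A sketch of that amortized variant, following the signature given above (the folder glob and the choice between syncing and waiting are illustrative):

```supercollider
// sketch: read sound files explicitly, pausing between folders
(
fork {
    "~/Dirt-Samples/*".standardizePath.pathMatch.do { |folder|
        ~dirt.loadSoundFile(folder, folder.basename, true);
        s.sync;  // or e.g. 0.05.wait, to simply spread the load over time
    };
};
)
```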
Maybe it would be good to add the sync as an argument to the method, but doNotReadYet was intended to solve exactly these kinds of issues.