Loading buffers with samples exits server with code 0

Hello all,

I really love SuperDirt and Tidal; I've used them on several machines and never had any problems, but I'm currently having trouble with a fresh install of SuperDirt:
I can't call the loadSoundFiles method on a SuperDirt instance (and can't execute SuperDirt.start) without crashing the server with this message:

Server 'localhost' exited with exit code 0.
server 'localhost' disconnected shared memory interface

I'm on Ubuntu 24.04, with SuperCollider 3.14.0-dev (built from source) and sc3-plugins (built from source), all freshly installed.
I had this problem on a first install; I reinstalled everything but it's the same.
I have no quarks installed except SuperDirt 1.7.3, Vowel and Dirt-Samples.
This is my startup file:

s.options.memSize = 2.pow(20) * 2;
s.options.numBuffers = 1024 * 256;
s.options.maxNodes = 1024 * 32;
s.options.numWireBufs = 64 * 8;
s.options.maxSynthDefs = 1024 * 4;

// rec options
s.options.recChannels = 2;
s.options.recHeaderFormat = "aiff";

I have not installed Tidal yet because I'd like to be able to run SuperDirt first; I don't know if this could be related.
I need help please; if someone can point me in the right direction, I would be very grateful.
@julian have you ever encountered this? I searched the SuperDirt GitHub issues and the Tidal ones without success.

This is the code that produces the error:

s.boot;
SuperDirt.start; // problem here
// Server 'localhost' exited with exit code 0.
// server 'localhost' disconnected shared memory interface

s.boot;
~dirt = SuperDirt(2, s); // ok
~dirt.start(57120, [0, 0]); // ok

~dirt.loadSoundFiles; // problem here
// Server 'localhost' exited with exit code 0.
// server 'localhost' disconnected shared memory interface

and here is the whole post window with the error message:

compiling class library...
	Found 880 primitives.
	Compiling directory '/usr/local/share/SuperCollider/SCClassLibrary'
	Compiling directory '/usr/local/share/SuperCollider/Extensions'
	Compiling directory '/home/fabien/.local/share/SuperCollider/Extensions'
	Compiling directory '/home/fabien/.local/share/SuperCollider/downloaded-quarks/Vowel'
	Compiling directory '/home/fabien/.local/share/SuperCollider/downloaded-quarks/Dirt-Samples'
	Compiling directory '/home/fabien/.local/share/SuperCollider/downloaded-quarks/SuperDirt'
	numentries = 1307510 / 20832240 = 0.063
	6070 method selectors, 3432 classes
	method table size 21504576 bytes, big table size 166657920
	Number of Symbols 15663
	Byte Code Size 471296
	compiled 570 files in 0.46 seconds

Info: 4 methods are currently overwritten by extensions. To see which, execute:
MethodOverride.printAll

compile done
localhost : setting clientID to 0.
internal : setting clientID to 0.
Class tree inited in 0.01 seconds


*** Welcome to SuperCollider 3.14.0-dev. *** For help press Ctrl-D.
SCDoc: Indexing help-files...
SCDoc: Indexed 1955 documents in 0.95 seconds
Booting server 'localhost' on address 127.0.0.1:57110.
Found 237 LADSPA plugins
JackDriver: client name is 'SuperCollider'
SC_AudioDriver: sample rate = 44100.000000, driver's block size = 512
JackDriver: connected  Built-in Audio Analog Stereo:capture_FL to SuperCollider:in_1
JackDriver: connected  Built-in Audio Analog Stereo:capture_FR to SuperCollider:in_2
JackDriver: connected  SuperCollider:out_1 to Built-in Audio Analog Stereo:playback_FL
JackDriver: connected  SuperCollider:out_2 to Built-in Audio Analog Stereo:playback_FR
SuperCollider 3 server ready.
Requested notification messages from server 'localhost'
localhost: server process's maxLogins (1) matches with my options.
localhost: keeping clientID (0) as confirmed by server process.
Shared memory server interface initialized
-> SuperDirt
loading synthdefs in /home/fabien/.local/share/SuperCollider/downloaded-quarks/SuperDirt/classes/../synths/core-modules.scd
---- core synth defs loaded ----
loading synthdefs in /home/fabien/.local/share/SuperCollider/downloaded-quarks/SuperDirt/classes/../synths/core-synths-global.scd
loading synthdefs in /home/fabien/.local/share/SuperCollider/downloaded-quarks/SuperDirt/classes/../synths/core-synths.scd
loading synthdefs in /home/fabien/.local/share/SuperCollider/downloaded-quarks/SuperDirt/classes/../synths/default-synths.scd
loading synthdefs in /home/fabien/.local/share/SuperCollider/downloaded-quarks/SuperDirt/classes/../synths/try-load-extra-synths.scd
loading synthdefs in /home/fabien/.local/share/SuperCollider/downloaded-quarks/SuperDirt/classes/../synths/tutorial-synths.scd
loading synthdefs in /home/fabien/.local/share/SuperCollider/downloaded-quarks/SuperDirt/classes/../synths/zzzzz-core-modules-that-come-last.scd


217 existing sample banks:
808 (6) 808bd (25) 808cy (25) 808hc (5) 808ht (5) 808lc (5) 808lt (5) 808mc (5) 808mt (5) 808oh (5) 808sd (25) 909 (1) ab (12) ade (10) ades2 (9) ades3 (7) ades4 (6) alex (2) alphabet (26) amencutup (32) armora (7) arp (2) arpy (11) auto (11) baa (7) baa2 (7) bass (4) bass0 (3) bass1 (30) bass2 (5) bass3 (11) bassdm (24) bassfoo (3) battles (2) bd (24) bend (4) bev (2) bin (2) birds (10) birds3 (19) bleep (13) blip (2) blue (2) bottle (13) breaks125 (2) breaks152 (1) breaks157 (1) breaks165 (1) breath (1) bubble (8) can (14) casio (3) cb (1) cc (6) chin (4) circus (3) clak (2) click (4) clubkick (5) co (4) coins (1) control (2) cosmicg (15) cp (2) cr (6) crow (4) d (4) db (13) diphone (38) diphone2 (12) dist (16) dork2 (4) dorkbot (2) dr (42) dr2 (6) dr55 (4) dr_few (8) drum (6) drumtraks (13) e (8) east (9) electro1 (13) em2 (6) erk (1) f (1) feel (7) feelfx (8) fest (1) fire (1) flick (17) fm (17) foo (27) future (17) gab (10) gabba (4) gabbaloud (4) gabbalouder (4) glasstap (3) glitch (8) glitch2 (8) gretsch (24) gtr (3) h (7) hand (17) hardcore (12) hardkick (6) haw (6) hc (6) hh (13) hh27 (13) hit (6) hmm (1) ho (6) hoover (6) house (8) ht (16) if (5) ifdrums (3) incoming (8) industrial (32) insect (3) invaders (18) jazz (8) jungbass (20) jungle (13) juno (12) jvbass (13) kicklinn (1) koy (2) kurt (7) latibro (8) led (1) less (4) lighter (33) linnhats (6) lt (16) made (7) made2 (1) mash (2) mash2 (4) metal (10) miniyeah (4) monsterb (6) moog (7) mouth (15) mp3 (4) msg (9) mt (16) mute (28) newnotes (15) noise (1) noise2 (8) notes (15) numbers (9) oc (4) odx (15) off (1) outdoor (6) pad (3) padlong (1) pebbles (1) perc (6) peri (15) pluck (17) popkick (10) print (11) proc (2) procshort (8) psr (30) rave (8) rave2 (4) ravemono (2) realclaps (4) reverbkick (1) rm (2) rs (1) sax (22) sd (2) seawolf (3) sequential (8) sf (18) sheffield (1) short (5) sid (12) sine (6) sitar (8) sn (52) space (18) speakspell (12) speech (7) speechless (10) speedupdown (9) stab (23) stomp (10) subroc3d (11) sugar (2) sundance (6) tabla (26) tabla2 (46) tablex (3) tacscan (22) tech (13) techno (7) tink (5) tok (4) toys (13) trump (11) ul (10) ulgab (5) uxay (3) v (6) voodoo (5) wind (10) wobble (1) world (3) xmas (1) yeah (31) 
... file reading complete. Required 444 MB of memory.

Server 'localhost' exited with exit code 0.
server 'localhost' disconnected shared memory interface

Thank you

Just a simple check: is there a problem with the increased memory limits? Can you load the samples in smaller batches?
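
For example, something along these lines (just a sketch: the quark path and the appendToExisting argument name are from memory, so check them against your install):

~dirt = SuperDirt(2, s);
~dirt.start(57120, [0, 0]);
~path = "~/.local/share/SuperCollider/downloaded-quarks/Dirt-Samples".standardizePath;
// load only part of the sample folders first...
~dirt.loadSoundFiles(~path ++ "/[a-e]*");
// ...then add more later, without replacing what is already loaded
~dirt.loadSoundFiles(~path ++ "/[f-m]*", appendToExisting: true);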

I’d second smoge’s question: is there an OS-level memory limit applied to the process? (It would have to be OS-level: s.options.memSize does not govern buffer allocation.)
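
As a rough sanity check, you could also estimate up front how much buffer memory the whole folder will need (a sketch, assuming the usual Dirt-Samples quark location; scsynth stores buffers as 32-bit floats):

(
var root = "~/.local/share/SuperCollider/downloaded-quarks/Dirt-Samples".standardizePath;
var bytes = 0;
(root +/+ "*/*").pathMatch.do { |path|
    var sf = SoundFile.openRead(path);
    if(sf.notNil) {
        bytes = bytes + (sf.numFrames * sf.numChannels * 4); // 4 bytes per sample frame on the server
        sf.close;
    };
};
"approx. % MB of buffer memory needed".format((bytes / 1e6).round(1)).postln;
)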

If that isn’t the issue… For crashes, the final word wrt diagnosis is to run scsynth in gdb (or to attach gdb to it – though I found that attaching gdb ended up borking the JACK server, oops).

In SC: s.options.asOptionsString to get the server command line parameters.
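
i.e. evaluate this in SC and copy the output:

s.options.asOptionsString; // returns the flags as a string, e.g. "-u 57110 -a 1024 -i 2 -o 2 ..."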

Then, in a terminal:

gdb --args scsynth (and paste the cmd line args here)

... gdb info stuff...

// at gdb prompt:
run

Then, back in SC:

Server.default = s = Server.remote(\debug, NetAddr("127.0.0.1", 57110), s.options);

... should see:
Requested notification messages from server 'debug'
debug: server process's maxLogins (1) matches with my options.
debug: keeping clientID (0) as confirmed by server process.

Maybe also need to do:

s.connectSharedMemory;

Then try to initialize SuperDirt.
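
(i.e. run the same calls as in your original snippet:)

~dirt = SuperDirt(2, s);
~dirt.start(57120, [0, 0]);
~dirt.loadSoundFiles;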

When the server crashes, you’ll get a gdb prompt.

thread apply all bt

… which will print out a ton of stuff, which you can copy and bring here. The most important bit will be whichever thread crashed.

Then exit in gdb.

hjh

@kesey Other simple checks (it won’t hurt to check them; they are probably fine, the JACK packages usually take care of these things)

# system limits
ulimit -a     #  all process limits
ulimit -v     # virtual mem 
ulimit -m     # Max mem 
ulimit -l     # max locked mem
# scsynth specific
pid=$(pgrep scsynth)
cat /proc/${pid}/limits
cat /proc/${pid}/status | grep -i mem
cat /proc/${pid}/status | grep VmSize    # virtual mem 
cat /proc/${pid}/statm                   # mem stats
#  see config ( check subfolder limits.d for other config files)
cat /etc/security/limits.conf 

# Edit it
sudo nano /etc/security/limits.conf

# check these lines for your group (can vary, like jackuser, for example)
@audio          -       memlock         unlimited
@audio          -       rtprio          95
@audio          -       nice            -19
# see if your user is in the audio (or similar name) group
groups $USER

# add user to audio (or another name) group if needed
sudo usermod -a -G audio $USER

Thank you very much for your answers @smoge & @jamshark70.
I tried your suggestion @smoge (loading fewer samples) by removing the contents of the Dirt-Samples folder except for one folder, and SuperDirt.start ran without error.
After that, I added more folders of samples, and I can go up to about 260 MB of samples without crashing.
Above 260 MB it crashes as described earlier.
I'd like to be able to load the entire Dirt-Samples folder.

Thank you very much for your time and advice, I really appreciate it.
I don't know how to check whether there is an OS-level memory limit applied to the process, so I can't really answer that for the moment.
I will try to run scsynth in gdb tomorrow; I don't have enough time now.

Thank you for your time.

system limits

ulimit -a
real-time non-blocking time  (microseconds, -R) unlimited
core file size              (blocks, -c) 0
data seg size               (kbytes, -d) unlimited
scheduling priority                 (-e) 0
file size                   (blocks, -f) unlimited
pending signals                     (-i) 127511
max locked memory           (kbytes, -l) unlimited
max memory size             (kbytes, -m) unlimited
open files                          (-n) 1024
pipe size                (512 bytes, -p) 8
POSIX message queues         (bytes, -q) 819200
real-time priority                  (-r) 95
stack size                  (kbytes, -s) 8192
cpu time                   (seconds, -t) unlimited
max user processes                  (-u) 127511
virtual memory              (kbytes, -v) unlimited
file locks                          (-x) unlimited

ulimit -v
unlimited

ulimit -m 
unlimited

ulimit -l
unlimited

scsynth specific (it doesn't seem to work)

pid=$(pgrep scsynth)
cat /proc/${pid}/limits
cat /proc/${pid}/status | grep -i mem
cat /proc/${pid}/status | grep VmSize    # virtual mem 
cat /proc/${pid}/statm                   # mem stats
cat: /proc//limits: No such file or directory
cat: /proc//status: No such file or directory
cat: /proc//status: No such file or directory
cat: /proc//statm: No such file or directory

config

cat /etc/security/limits.conf
# /etc/security/limits.conf
#
#This file sets the resource limits for the users logged in via PAM.
#It does not affect resource limits of the system services.
#
#Also note that configuration files in /etc/security/limits.d directory,
#which are read in alphabetical order, override the settings in this
#file in case the domain is the same or more specific.
#That means, for example, that setting a limit for wildcard domain here
#can be overridden with a wildcard setting in a config file in the
#subdirectory, but a user specific setting here can be overridden only
#with a user specific setting in the subdirectory.
#
#Each line describes a limit for a user in the form:
#
#<domain>        <type>  <item>  <value>
#
#Where:
#<domain> can be:
#        - a user name
#        - a group name, with @group syntax
#        - the wildcard *, for default entry
#        - the wildcard %, can be also used with %group syntax,
#                 for maxlogin limit
#        - NOTE: group and wildcard limits are not applied to root.
#          To apply a limit to the root user, <domain> must be
#          the literal username root.
#
#<type> can have the two values:
#        - "soft" for enforcing the soft limits
#        - "hard" for enforcing hard limits
#
#<item> can be one of the following:
#        - core - limits the core file size (KB)
#        - data - max data size (KB)
#        - fsize - maximum filesize (KB)
#        - memlock - max locked-in-memory address space (KB)
#        - nofile - max number of open file descriptors
#        - rss - max resident set size (KB)
#        - stack - max stack size (KB)
#        - cpu - max CPU time (MIN)
#        - nproc - max number of processes
#        - as - address space limit (KB)
#        - maxlogins - max number of logins for this user
#        - maxsyslogins - max number of logins on the system
#        - priority - the priority to run user process with
#        - locks - max number of file locks the user can hold
#        - sigpending - max number of pending signals
#        - msgqueue - max memory used by POSIX message queues (bytes)
#        - nice - max nice priority allowed to raise to values: [-20, 19]
#        - rtprio - max realtime priority
#        - chroot - change root to directory (Debian-specific)
#
#<domain>      <type>  <item>         <value>
#

#*               soft    core            0
#root            hard    core            100000
#*               hard    rss             10000
#@student        hard    nproc           20
#@faculty        soft    nproc           20
#@faculty        hard    nproc           50
#ftp             hard    nproc           0
#ftp             -       chroot          /ftp
#@student        -       maxlogins       4

# End of file

user

groups $USER
fabien : fabien adm cdrom sudo audio dip plugdev users lpadmin

I don't know if it can be useful, but I set up my system with rtcqs;
this is the output:

rtcqs
rtcqs - version 0.6.2

Root User
=========
[ OK ] Not running as root.

Group Limits
============
[ OK ] User fabien is member of a group that has sufficient rtprio (95) and memlock (unlimited) limits set.

CPU Frequency Scaling
=====================
[ OK ] The scaling governor of all CPUs is set to performance.

Kernel Configuration
====================
[ OK ] Valid kernel configuration found.

High Resolution Timers
======================
[ OK ] High resolution timers are enabled.

Tickless Kernel
===============
[ OK ] System is using a tickless kernel.

Preempt RT
==========
[ OK ] Kernel 6.8.0-47-lowlatency is using threaded IRQs.

Spectre/Meltdown Mitigations
============================
[ OK ] Spectre/Meltdown mitigations are disabled. Be warned that this makes your system more vulnerable to Spectre/Meltdown attacks.

RT Priorities
=============
[ OK ] Realtime priorities can be set.

Swappiness
==========
[ OK ] Swappiness is set at 10.

Filesystems
===========
[ OK ] The following mounts can be used for audio purposes: /

IRQs
====
[ OK ] USB port xhci_hcd with IRQ 124 does not share its IRQ.
[ OK ] Soundcard snd_hda_intel:card0 with IRQ 129 does not share its IRQ.

Power Management
================
[ OK ] Power management can be controlled from user space. This enables DAWs like Ardour and Reaper to set CPU DMA latency which could help prevent xruns.

I have Ubuntu Studio.

It only works when scsynth is actually running (otherwise pgrep finds no PID and the /proc paths are empty).

Your system seems fine overall. It is hard to tell what the problem is with this input. You will need to investigate more and send back what you find.

Check what happens to the scsynth process; you must run it to see that. What kind of computer are you running this on?

And finally, as James said, a debugger would also give more context to what the problem is.

you could try lazy loading your samples, check:
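
(If I remember right, recent SuperDirt versions have a lazy-loading switch roughly like the following; the flag name here is from memory, so please check the SuperDirt README/hacks before relying on it:)

// set this before loading the samples; files are then registered
// but only read into buffers the first time they are actually used
~dirt.doNotReadYet = true; // or ~dirt.soundLibrary.doNotReadYet, depending on the version
~dirt.loadSoundFiles;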

Ahah, sorry, yes, it's better when the server is booted.

scsynth specific

pid=$(pgrep scsynth)
cat /proc/${pid}/limits
cat /proc/${pid}/status | grep -i mem
cat /proc/${pid}/status | grep VmSize    # virtual mem 
cat /proc/${pid}/statm                   # mem stats
Limit                     Soft Limit           Hard Limit           Units     
Max cpu time              unlimited            unlimited            seconds   
Max file size             unlimited            unlimited            bytes     
Max data size             unlimited            unlimited            bytes     
Max stack size            8388608              unlimited            bytes     
Max core file size        0                    unlimited            bytes     
Max resident set          unlimited            unlimited            bytes     
Max processes             127511               127511               processes 
Max open files            8192                 1048576              files     
Max locked memory         unlimited            unlimited            bytes     
Max address space         unlimited            unlimited            bytes     
Max file locks            unlimited            unlimited            locks     
Max pending signals       127511               127511               signals   
Max msgqueue size         819200               819200               bytes     
Max nice priority         0                    0                    
Max realtime priority     95                   95                   
Max realtime timeout      200000               200000               us        
RssShmem:	    4224 kB
Mems_allowed:	00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000001
Mems_allowed_list:	0
VmSize:	 2579068 kB
644767 19009 7181 117 0 550514 0

Do you see something wrong here?
I use a ThinkPad T460p, an i7 with 32 GB of RAM, on Ubuntu 24.04 with Ubuntu Studio.
Do you need more details?
I will investigate more and send back what I find; next step: gdb.
Thank you for your patience.

Thank you very much.
I will use that as a workaround, but I'll continue to investigate for now, because I'd really like to understand what's going on here.
I'd never encountered this problem before; I've used SuperDirt for about 4 years now and did the same setup on 4 (Linux) machines without any trouble.

@jamshark70, I followed your advice with gdb.
Everything works as expected:
I ran gdb, passing it scsynth with the options I got from s.options.asOptionsString.
After doing:

Server.default = s = Server.remote(\debug, NetAddr("127.0.0.1", 57110), s.options);

I can see:

Requested notification messages from server 'debug'
debug: server process's maxLogins (1) matches with my options.
debug: keeping clientID (0) as confirmed by server process.

I do s.connectSharedMemory;
but when I try to initialize SuperDirt under these conditions, I can't reproduce the error anymore: everything works fine, and the Dirt-Samples folder (which is now full of samples again, like the default) loads without the server exiting.
So in the terminal, in gdb, I see nothing (nothing to see, as expected, since there is no error):

gdb --args scsynth -u 57110 -a 1024 -i 2 -o 2 -b 262144 -n 32768 -d 4096 -m 2097152 -w 512 -R 0 -C 1 -l 1
GNU gdb (Ubuntu 15.0.50.20240403-0ubuntu1) 15.0.50.20240403-git
Copyright (C) 2024 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.
Type "show copying" and "show warranty" for details.
This GDB was configured as "x86_64-linux-gnu".
Type "show configuration" for configuration details.
For bug reporting instructions, please see:
<https://www.gnu.org/software/gdb/bugs/>.
Find the GDB manual and other documentation resources online at:
    <http://www.gnu.org/software/gdb/documentation/>.

For help, type "help".
Type "apropos word" to search for commands related to "word"...
Reading symbols from scsynth...

This GDB supports auto-downloading debuginfo from the following URLs:
  <https://debuginfod.ubuntu.com>
Enable debuginfod for this session? (y or [n]) y
Debuginfod has been enabled.
To make this setting permanent, add 'set debuginfod enabled on' to .gdbinit.
(No debugging symbols found in scsynth)
(gdb) run
Starting program: /usr/local/bin/scsynth -u 57110 -a 1024 -i 2 -o 2 -b 262144 -n 32768 -d 4096 -m 2097152 -w 512 -R 0 -C 1 -l 1
warning: could not find '.gnu_debugaltlink' file for /usr/lib/x86_64-linux-gnu/pipewire-0.3/jack/libjack.so.0
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
warning: could not find '.gnu_debugaltlink' file for /usr/lib/x86_64-linux-gnu/spa-0.2/support/libspa-support.so
warning: could not find '.gnu_debugaltlink' file for /usr/lib/x86_64-linux-gnu/spa-0.2/support/libspa-journal.so
[New Thread 0x7ffff6a006c0 (LWP 8593)]
[New Thread 0x7ffff5e006c0 (LWP 8595)]
no more csLADSPA plugins
Found 237 LADSPA plugins
[New Thread 0x7fff6cc006c0 (LWP 8597)]
warning: could not find '.gnu_debugaltlink' file for /usr/lib/x86_64-linux-gnu/spa-0.2/support/libspa-dbus.so
warning: could not find '.gnu_debugaltlink' file for /usr/lib/x86_64-linux-gnu/pipewire-0.3/libpipewire-module-rt.so
warning: could not find '.gnu_debugaltlink' file for /usr/lib/x86_64-linux-gnu/pipewire-0.3/libpipewire-module-protocol-native.so
warning: could not find '.gnu_debugaltlink' file for /usr/lib/x86_64-linux-gnu/pipewire-0.3/libpipewire-module-client-node.so
warning: could not find '.gnu_debugaltlink' file for /usr/lib/x86_64-linux-gnu/pipewire-0.3/libpipewire-module-metadata.so
[New Thread 0x7fff6c2006c0 (LWP 8598)]
[Thread 0x7fff6c2006c0 (LWP 8598) exited]
[New Thread 0x7fff6c2006c0 (LWP 8599)]
[New Thread 0x7fff6b8006c0 (LWP 8600)]
JackDriver: client name is 'SuperCollider'
SC_AudioDriver: sample rate = 44100.000000, driver's block size = 512
[New Thread 0x7fff6ae006c0 (LWP 8601)]
[New Thread 0x7fff6a4006c0 (LWP 8602)]
SuperCollider 3 server ready.

I started over from the beginning, following every step, maybe 5 times, but it works every time.
Between each try, I tried again the old-fashioned way, without gdb and without doing (I don't know if this is related):

Server.default = s = Server.remote(\debug, NetAddr("127.0.0.1", 57110), s.options);

and every time it fails: the server exits with code 0…
It seems like something behaves differently in debugging mode.
It's not really convenient for debugging.
I don't know what to do from here.

If you can share ways to check if there is an OS-level memory limit applied to the process, it would be helpful to me.

Thank you

Oh, I think I misunderstood. It does not happen when you compile with the debug flag?

It doesn't happen when I follow @jamshark70's steps using gdb and do:

Server.default = s = Server.remote(\debug, NetAddr("127.0.0.1", 57110), s.options);

etc.
but it happens every time I try it by just booting the server and running SuperDirt.start;

You can also start the scsynth process and attach the debugger using its PID number. (I don’t know if that would change anything)

Is it compiled with a debug flag?

It is a bit strange indeed.

I don't know if this answers your question, but I built SuperCollider with this flag (and a couple of others) for cmake:

cmake -DCMAKE_BUILD_TYPE=Release ..

Yeah, you would need to use the Debug flag.

If this is actually a bug, maybe you should team up with a core server developer in a chat and trace it.

There are more things you could try.

OK, so I need to rebuild SuperCollider with this flag for cmake:
-DCMAKE_BUILD_TYPE=RelWithDebInfo .. ?

And after that, what can I do with a debugger if I can't reproduce the error?

Yes, or just use Debug

:face_with_raised_eyebrow: I’m stumped then.

It’s unlikely to be a memory allocation error because scsynth already traps that condition.

In my own code, I’ve tended to avoid mass preloading of samples because a large number of requests submitted very quickly is a bit of a stress condition – in theory that shouldn’t be an issue, but if that were always the case in practice, software QA departments wouldn’t need to devise stress tests. But I don’t remember cases from the past where a rapid series of buffer loads caused the server to crash… new one on me.
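
If you want to rule that out, you could stagger the folder loads instead of firing them all in one burst. Something along these lines (just a sketch; the quark path and the appendToExisting argument name are from memory, so adjust to your setup):

(
fork {
    var root = "~/.local/share/SuperCollider/downloaded-quarks/Dirt-Samples".standardizePath;
    var folders = (root +/+ "*").pathMatch.select { |p| p.endsWith("/") }; // keep only the sample folders
    folders.do { |folder, i|
        ~dirt.loadSoundFiles(folder, appendToExisting: i > 0);
        s.sync;   // wait until the server has finished this folder's async buffer loads
        0.2.wait; // small breather before the next folder
    };
    "all sample folders loaded".postln;
};
)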

hjh