Let's please enforce the stack size limit

Thread *new:

*new { arg func, stackSize = (512);

But in reality, infinite recursion consumes all available system memory until the machine goes into swap and the whole system becomes unresponsive, forcing a hard shutdown and reboot.

If the stack size limit really were 512, then on my 16 GB system each frame would have to occupy 16 GB / 512 frames = 32 MB, which is not a realistic estimate. So the more likely explanation is that we declare a stack size limit but then ignore it.

… which is unwise, isn’t it?

Surely we can put something in the interpreter to say, if we’ve blown past the declared stack limit, just stop evaluating…? (Why do we not do this already?)



PyrInterpreter3.cpp defines dispatch_opcode with checkStackDepth(g, sp); assert(checkStackOverflow(g, sp)); – so perhaps we are choosing not to halt on a failed assert? Or perhaps this macro is not used everywhere? (After changing the macro to hard-exit on !checkStackOverflow(g, sp), I find that recursion still doesn't terminate – so the check must be missing in some places?)


Note that assertions are only evaluated in debug builds. Release builds typically define NDEBUG, which disables assertions.

I did eventually figure that out.

It still leaves the question, though – why do sclang threads have a stack size limit if this parameter is ignored?