I was also thinking this – plus, the fact that nearly all CPUs (except smaller ARM units) are multicore basically eliminates DSP’s drag on language processing, and vice versa.
I started with SC on a G4 iBook – I distinctly remember that, when DSP load was high, GUI objects’ updates could be delayed by up to half a second. That was with SC Server’s multiple-process design, but on a single core (and before Apple decided that PowerPC chips were never going to be fast enough). That problem has disappeared entirely, purely because of hardware.
I’m curious about the level of complexity and the s.latency setting you were using here. I use the default latency most of the time (because most of my event triggers are sequencers) and I don’t get “late” messages unless the system clock gets NTP-ed in a funny way.
I usually stick with the default s.latency, but I do often write code with nontrivial computation to determine, e.g., which notes to play or which control values to set. In my case, sclang is definitely more hiccup-y than Haskell. For someone writing fairly normal patterns in sclang, I’d expect less of an issue, or none at all.
I also recall experiencing missed deadlines with trivial computations, when there was GUI interaction involved, but I can’t make an apples-to-apples comparison.
I wouldn’t advise anyone to switch away from sclang for reasons of code performance (unless performance was actually becoming a problem for them). My reason for responding to this thread was to make sure we weren’t encouraging the opposite: avoiding non-sclang clients for fear of performance losses or of losing real-time behavior. General-purpose languages absolutely do open up universes of possibilities that a small, time-strapped community couldn’t hope to recreate.
Yes, that’s why SPJ emphasized those factors in the forum. One of them is to create a model for dealing with failures; if one instance fails, your system has to do better than just crash. The other model you will have to design is concurrency. You mentioned one possibility, with a “master” instance. But I believe there are many things to think about, and even Erlang must have tried different things and evolved this over time as well.
Reference counting is not real-time, because decrementing a reference count to zero can cause an unbounded cascade of object frees. SC’s incremental garbage collector has an O(1) sweep phase, because the dead-object list is moved to the free list in a single step.
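A minimal Python sketch of that cascade (assuming CPython’s reference-counted deallocation): dropping the last reference to a long chain frees every link synchronously, so the pause grows with the size of the structure.

```python
import time

# Build a chain of nested single-element lists: [[[...[None]...]]]
head = None
for _ in range(500_000):
    head = [head]

t0 = time.perf_counter()
del head  # refcount hits zero: every link in the chain is freed right here
print(f"freed chain in {time.perf_counter() - t0:.3f}s")
```

(CPython’s “trashcan” mechanism makes the teardown iterative rather than recursive, but all the work still happens at that single decrement – which is exactly the real-time problem.)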
If my understanding of this comment in cpython is correct…
I believe that only objects which reference exclusively immutable objects are reference counted, meaning there can’t be a cascade, as the tree is only one level deep; whereas objects that reference other mutable objects are handed over to the GC.
Obviously I have no data, but my understanding is that this makes each individual block slower, but more predictable.
Unless things have changed since I last looked, Python uses reference counting for all objects, plus a cycle collector (which is what that comment calls the garbage collector?) to reclaim cyclic garbage. So the comment would be referring to the fact that some objects cannot participate in cycles and therefore don’t need to be traced.
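You can check this distinction from Python itself: `gc.is_tracked()` reports whether the cycle collector traces an object. Atomic objects (and, as an optimization, containers holding only atomic values) are left to pure reference counting.

```python
import gc

# Atomic objects can never participate in a cycle: not traced.
print(gc.is_tracked(42))         # False
print(gc.is_tracked("hello"))    # False

# Containers can form cycles, so the cycle collector traces them...
print(gc.is_tracked([]))         # True

# ...except CPython untracks dicts that hold only atomic values.
print(gc.is_tracked({"a": 1}))   # False
print(gc.is_tracked({"a": []}))  # True
```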
Ah I see, yes, I’d misunderstood. I thought it reference-counted immutable objects that couldn’t reference others (like a string) and used the GC for everything else. Thanks!
I was sort of thinking about a hypothetical language that used lifetime analysis to determine whether an object escapes its scope, so that instructions to allocate and free the memory without triggering the GC could be emitted in the bytecode. This could apply in places like collection.collect{...}.collect{...}.collect{...}, where all those intermediate temporary objects could actually reuse the same piece of memory (see the sketch below). I incorrectly assumed Python did something similar to (but not as complete as) this with reference counting.
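To illustrate the idea – a hypothetical transformation, not something Python or sclang actually does – each step of a chained map allocates a fresh intermediate, but if analysis proves none of the intermediates escape, a single buffer can be reused:

```python
xs = list(range(1000))

# Naive chained transforms: one fresh allocation per step.
step1 = [x + 1 for x in xs]
step2 = [y * 2 for y in step1]
out_a = [z - 3 for z in step2]

# What escape analysis could emit instead: the intermediates never
# escape, so one buffer is allocated once and mutated in place.
out_b = list(xs)
for i in range(len(out_b)):
    out_b[i] = ((out_b[i] + 1) * 2) - 3

assert out_a == out_b
```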
If anyone wants to try how these things might work in Python, I implemented them all a few years ago:
I haven’t touched the project in a long time; it used to work well up to Python 3.10. You have to clone it to get the latest version – the PyPI release is out of date.
In that repo there is also a non-realtime mode for Routines and Clocks that generates the OSC score for scsynth -nrt, which I think could be awesome to have in sclang. There is some documentation with examples about NRT.
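For reference, once a score file exists (however it was generated – by this repo or by sclang), the offline render uses scsynth’s documented non-realtime options. A sketch, with placeholder file names:

```python
import subprocess

# Render an OSC command score offline. scsynth's documented NRT options:
#   -N <cmd-file> <in-file> <out-file> <sample-rate> <header> <sample-format>
# where "_" means no input sound file.
subprocess.run([
    "scsynth",
    "-N", "score.osc",  # the OSC score to render (placeholder name)
    "_",                # no input sound file
    "out.aiff",         # rendered output file (placeholder name)
    "44100",            # sample rate
    "AIFF",             # header format
    "int16",            # sample format
    "-o", "2",          # two output channels
], check=True)
```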