It’s kind of aggravating when you submit a simple fix for a bug in some code that doesn’t have a unit test, and then you’re expected to write a unit test for a complex bit of code in order to get your perfectly valid fix accepted.
In legacy code bases I think it’s a good idea to add documentation and unit tests to code that doesn’t have them - but that should always be a separate task from actually fixing stuff. Otherwise you end up not wanting to fix stuff.
And I agree about the unit testing framework in SC. It’s pretty clunky.
The whole point of unit tests is that you don’t do them at that level.
Really they should be used for utilities and library code that your application depends upon. There are different (and far more involved, and frankly annoying) approaches to doing systems testing.
Whether the codebase of SuperCollider supports unit tests well, I couldn’t say. It may not. But in principle there’d be nothing preventing them.
Not to mention that if the same developer writes both the code and the unit test, they carry over all the same preconceptions and frame of mind, and end up writing a kind of test that just doesn’t work in practice.
Yes, I mentioned that a system as dynamic as SC would require a model of the system to test anything. A unit test exercising a class in isolation gives only weak inference about the system as a whole.
I did a post on property-based tests here, but it didn’t generate any discussion, unfortunately.
Well it’s more of an architecture thing. A system’s architecture has to be designed in such a way as to make it possible.
Unit tests written in isolation can be very effective. I have a vector library that I wrote which has a ton of tests on it. A lot of the code that uses it could not be tested with unit testing, but because that code relies upon simple and composable libraries, this is less of a problem. Once IO/system becomes a thing, the code that’s being tested is pretty thin, and so stress/random testing is generally sufficient. Not perfect (what is), but pretty good for the amount of effort exerted.
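To sketch what testing a small, pure library in isolation looks like (Python stands in for whatever language the library is in; the `Vec2` type and its tests here are hypothetical, not the actual library):

```python
import math

class Vec2:
    """A tiny 2-D vector: the kind of small, pure, composable
    library code that unit tests suit well."""
    def __init__(self, x, y):
        self.x, self.y = x, y

    def __add__(self, other):
        return Vec2(self.x + other.x, self.y + other.y)

    def scale(self, k):
        return Vec2(self.x * k, self.y * k)

    def length(self):
        return math.hypot(self.x, self.y)

def test_add():
    v = Vec2(1, 2) + Vec2(3, 4)
    assert (v.x, v.y) == (4, 6)

def test_scale_is_homogeneous():
    # |k * v| should equal |k| * |v|
    v = Vec2(3, 4)
    assert math.isclose(v.scale(2).length(), 2 * v.length())

test_add()
test_scale_is_homogeneous()
```

No framework needed: the library has no IO or system dependencies, so each test is just a pure call and an assertion, which is exactly why this layer is cheap to test while the application code above it is not.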
Given they’re not in C++ or Rust, I don’t think they’d be hugely useful.
There is some property based testing in there, but not a huge amount because the languages I use tend not to have a good system for that, and for most use cases (obviously not a vector library) I find setting up the properties is more trouble than it’s worth.
Maybe I should try and write one for Zig as an exercise.
A short time ago I was told there was strong guidance against anything property-based, or even any randomness, in tests. The arguments I was given were not correct: property-based tests are deterministic, since the input generator runs from a fixed seed. It’s easy to confuse them with something else.
Unless you didn’t. And it’s really annoying if you later realize that your properties were fundamentally misconceived, which is a pretty common programming experience. And once software gets sufficiently complex, it’s pretty much impossible to specify it completely enough for these tools to work.
Testing is a useful tool, but it also comes with a ton of issues. The trick is to know when these tools are useful, and when they’re not worth (the considerable) effort required to make use of them.
I think this would be the result of bad design. Composition is the essence of programming, and if your properties are impossible to write, maybe there is something wrong. The thing is that it’s very difficult to “prove” computations using operational semantics. It would be like showing a property of a program by “running it”.
That’s where denotational semantics comes to the rescue and offers an alternative, and it has gotten attention in the last decade. I’m sure it would be a long discussion to explain why, but I already shared materials in this forum, so no need to do it here.