SC Mailing Lists archive

The static archives for the (now closed) sc-users and sc-dev mailing lists are up here, complete with Google Custom Search :slight_smile::

Sorry this took a little longer than I imagined, but please take a look and let me know if there are any issues.

Currently it’s http, but https should come very soon; they’re just sorting out certificates. For the moment I have not edited messages to correct author-pasted links to other messages, or links in the auto-signatures. I may do this in the future, but it’s a big job, and I wanted to get these available. The indices are all correct (I hope! :slight_smile:).

Enjoy browsing almost two decades of SuperColliding!


Thanks a lot!

Unfortunately, the search is still suboptimal. I ran a quick test and searched for “VSTPlugin”, but I only get 10 results, and half of those are unrelated. For example, I get “Re: [sc-users] How to use SynthDescLib?” only because “VSTPlugin” appears in the title of the next thread. However, I see none of my actual release mails except this one: Re: [sc-users] VSTPlugin v0.5.0 - final release!

I guess this is a problem with Google search and I don’t know if it’s even possible to fix this…


Yes, it’s a Google issue, and we’ve been trying to tweak it gradually. I gather indexing is an ongoing process, especially as producing a sitemap for tens of thousands of pages did not seem straightforwardly possible, at least not without paying. So I’m hoping it will steadily improve.

I am not an expert on Google Custom Search, though, so I’m happy to take any advice anyone has! Note that at the moment the thread and date indices are excluded from the search results.

Is the archive available for download? I would like to keep a copy on my hard drive. I am preparing a PhD in musicology on live coding / live programming, and I’m sure this archive could help complete my sources on the history / development of SC. I would totally understand if you were not willing to let people download it.

Thanks! It’s a great resource.

Um, I’m not sure about that. Although I realise anyone with a web crawler could do it, I’m not sure I am legally allowed to ‘give’ it to people. GDPR is tricky.

Looking at the indexing info, looks like there may be a problem with redirects. I will see what I can do…

Understandable! I will use the online version; it’s all right.
Thanks for answering so quickly.

I’ve tweaked a little. I’ll see if that improves the page count in the next crawl.

This is great, thanks! I’m not sure how hard it would be to print the number of matches, and to add “next”/“previous” buttons (the usual) for paging through the search results.


An update: I’ve tweaked this and managed to generate a sitemap. Coverage is now slowly climbing and is up to about 10K pages. There are 216K pages in the sitemap, though, so it may take some time. Heading in the right direction!


Sounds great, thanks Scott!