SC Mailing Lists archive

The static archives for the (now closed) sc-users and sc-dev mailing lists are up here, complete with Google Custom Search :slight_smile::

Sorry this took a little longer than I imagined, but please take a look and let me know if there are any issues.

Currently it's http, but https should come very soon; they're just sorting out certificates. For the moment I have not edited messages to correct author-pasted links to other messages, or links in the auto-signatures. I may do this in the future, but it's a big job, and I wanted to get these available. The indices are all correct (I hope! :slight_smile:).

Enjoy browsing almost two decades of SuperColliding!


Thanks a lot!

Unfortunately, the search is still suboptimal. I did a little test and searched for “VSTPlugin”, but I only get 10 results, and half of those are unrelated. For example, I get “Re: [sc-users] How to use SynthDescLib?” only because “VSTPlugin” is in the title of the next thread. However, I see none of my actual release mails except for this one: Re: [sc-users] VSTPlugin v0.5.0 - final release!

I guess this is a problem with Google search, and I don’t know if it’s even possible to fix…


Yes, it’s a Google issue, and we’ve been trying to gradually tweak it. I gather indexing is an ongoing process, especially as producing a sitemap for tens of thousands of pages did not seem to be straightforwardly possible, at least without paying. So I’m hoping it will steadily improve.

I am not an expert on Google Custom Search though, so I’m happy to take any advice anyone has! Note that at the moment the thread and date indices are excluded from the search results.
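As an aside, another common way to keep index pages out of a crawl (in addition to excluding them in the search engine settings) is a robots.txt at the site root. The paths below are purely illustrative guesses at the archive layout, not the real ones:

```text
# robots.txt — hypothetical paths, adjust to the actual archive layout
User-agent: *
Disallow: /sc-users/threads/
Disallow: /sc-users/dates/
```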

Is the archive available for download? I would like to archive it on my hard drive. I am preparing a PhD in musicology on live coding / live programming, and I’m sure this archive could be useful to complete my sources on the history and development of SC. I would totally understand if you were not willing to let people download it.

Thanks! It’s a great resource.

Um, I’m not sure about that. Although I realise anyone with a web crawler could do it, I’m not sure I am legally allowed to ‘give’ it to people. GDPR is tricky.

Looking at the indexing info, it looks like there may be a problem with redirects. I will see what I can do…

Understandable! I will use the online version, it’s all right.
Thanks for answering so quickly.

I’ve tweaked a little. I’ll see if that improves the page count in the next crawl.

This is great, thanks! I’m not sure how hard it would be to print the number of matches, plus “next”/“previous” buttons (the usual) to page through the search results.


An update: I’ve tweaked this, and managed to generate a sitemap. The coverage is now slowly climbing and is up to about 10K pages. There are 216K pages in the sitemap though, so it may take some time. Heading in the right direction though!
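For anyone curious how a sitemap for a static archive this size can be produced without paid tools: the sitemaps.org protocol caps each sitemap file at 50,000 URLs, so 216K pages need several sitemap files tied together by a sitemap index. A minimal sketch in Python — the archive path and base URL are made-up placeholders, not the real site:

```python
from pathlib import Path

# Per the sitemaps.org protocol, one sitemap file holds at most 50,000 URLs.
SITEMAP_LIMIT = 50_000

# Illustrative placeholders — not the real archive location.
ARCHIVE_ROOT = Path("archive")
BASE_URL = "https://example.org"

def chunk(urls, limit=SITEMAP_LIMIT):
    """Split a URL list into sitemap-sized chunks."""
    return [urls[i:i + limit] for i in range(0, len(urls), limit)]

def sitemap_xml(urls):
    """Render one sitemap file for a chunk of URLs."""
    entries = "\n".join(f"  <url><loc>{u}</loc></url>" for u in urls)
    return ('<?xml version="1.0" encoding="UTF-8"?>\n'
            '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">\n'
            f"{entries}\n</urlset>\n")

def index_xml(n_files, base=BASE_URL):
    """Render a sitemap index pointing at sitemap-1.xml .. sitemap-N.xml."""
    entries = "\n".join(
        f"  <sitemap><loc>{base}/sitemap-{i + 1}.xml</loc></sitemap>"
        for i in range(n_files))
    return ('<?xml version="1.0" encoding="UTF-8"?>\n'
            '<sitemapindex xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">\n'
            f"{entries}\n</sitemapindex>\n")

def build(root=ARCHIVE_ROOT, base=BASE_URL):
    """Collect every .html page under root and write the sitemap files."""
    urls = sorted(f"{base}/{p.relative_to(root)}" for p in root.rglob("*.html"))
    parts = chunk(urls)
    for i, part in enumerate(parts):
        Path(f"sitemap-{i + 1}.xml").write_text(sitemap_xml(part))
    Path("sitemap.xml").write_text(index_xml(len(parts)))
```

The sitemap index (`sitemap.xml`) is what gets submitted to Search Console; the numbered files only need to be reachable at the listed URLs.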


Sounds great, thanks Scott!

I’ve updated the link above, as the site now (finally!) has https support! I’ve also dropped the www, so the link is now
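For reference, the https-plus-no-www setup described here usually comes down to two permanent redirects at the web server. A hedged sketch in nginx config — the domain and certificate paths are placeholders, and the actual server may well be configured differently:

```nginx
# Redirect all plain-http traffic to the canonical https host.
server {
    listen 80;
    server_name example.org www.example.org;
    return 301 https://example.org$request_uri;
}

# Redirect https://www to the bare domain.
server {
    listen 443 ssl;
    server_name www.example.org;
    ssl_certificate     /etc/ssl/example.org.pem;   # placeholder path
    ssl_certificate_key /etc/ssl/example.org.key;   # placeholder path
    return 301 https://example.org$request_uri;
}
```

Redirecting (rather than serving both hosts) also helps the crawler settle on a single canonical URL per page.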


Invaluable resource, thanks Scott!


Thanks @muellmusik, looks good. Unfortunately, when I click a search result link, I get this error:

403 - Forbidden: Access is denied.

You do not have permission to view this directory or page using the credentials that you supplied.

I’m using Safari 15.2 on macOS 10.15.7.

Looks like a great resource, hope the access is fixable. Thanks anyway. :smiley:

Same here, Firefox on Android.


Okay, I’m guessing I’ll need to restart the Google index. Annoying but not the end of the world. Thanks for letting me know!

Okay, I’ve made a few tweaks to the search engine and added the new property. Unfortunately I had to delete the old one, which means that the crawl will have to start from scratch. So at the moment it returns no results but it should build gradually over the coming days and months.


Sorry for the inconvenience!

I am still getting the 403 - Forbidden: Access is denied error; however, this happens when I search directly from Google with

site: someKeywordsToSearch SinOsc SynthDef

I am doing this because this search method apparently gives more (and better) matches than the Google box on the archive site. Some topics were quite difficult to find through that search box.

Is this also a matter of waiting for the crawl to finish?

I think so. I had asked for the http://www… property to be removed, but it seems those URLs still show up in the global results; I’m not sure why. The custom search excludes them to avoid problems. I’m trying to get them to redirect.

I’ve set up the new indexing slightly differently, so I’m hoping this will get better coverage, but at 200k+ pages to index it will take some time to fully crawl.
