Experiences with RAVE UGen?

Hi everyone,
I haven’t found any mention of this in the forum yet, but I’d like to collect experiences (if any!) with the RAVE SuperCollider UGen by Victor Shepardson.

We’re working in the same lab and I’ve become increasingly interested in using this UGen to augment audio in SuperCollider. However, I’m very much a beginner and am mostly hoping to hear what other people have been doing, to learn some more before approaching the task myself.
(more about RAVE is here:
GitHub - acids-ircam/RAVE: Official implementation of the RAVE model: a Realtime Audio Variational autoEncoder)

Hi Robin,

Thanks for the link and to Victor Shepardson for porting Rave to SuperCollider!

I just compiled it and it works! I have only tested it with a model that comes with IRCAM’s Max version. Apparently there are several pre-trained models available, for example at https://neutone.space, but I don’t know if they are compatible with this version. Maybe the documentation could be expanded to match the Max version (for example the “advanced” tab in Max, with a GUI)?
In general this process has a lot of latency, but it can be used to generate interesting sounds, or in situations where latency is not important.
I will continue to test it!

All the best,


Thanks @Robin_Morabito

For anyone who saw rave-supercollider before last week, it’s now been completely overhauled with separate Encoder and Decoder UGens and a more straightforward interface.

@Jose I just pushed support for Neutone models! RAVEEncoder and RAVEDecoder only, since the Neutone export process replaces the forward method and discards the prior (?). After downloading models in the Neutone app, they can apparently be found at /Users/<user>/Library/Application Support/Qosmo/Neutone/models/ (on a Mac). If you can get an original RAVE export, that’s still better.
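A minimal sketch of running a downloaded Neutone model end to end: the model filename, the latent size of 8, and the username in the path are all placeholder assumptions; check the actual file under the Neutone models folder above.

```supercollider
(
// hypothetical path and filename -- substitute your own model
~neutone = "/Users/yourname/Library/Application Support/Qosmo/Neutone/models/some-model.ts";
{
	// encode live input into latents (8 is an assumed latent size)
	var z = RAVEEncoder.new(~neutone, 8, SoundIn.ar(0));
	// decode straight back -- no prior is available in Neutone exports
	RAVEDecoder.new(~neutone, z)
}.play;
)
```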

I haven’t tried rave-supercollider with a low-latency RAVE model yet – if anyone does please share!


Wow! I had seen it before last week, so I’m definitely looking forward to trying the new gear :smiley:

I’ve been looking into RAVE and training some models… do you know of any resources to help with tuning the training? Like choosing parameters, making good datasets, and understanding the relationship between the two?

And also, I usually train at 44100 Hz. Does that mean I can’t use those models in SC, or would they work if SC is also running at 44100?

Thanks @victor-shepardson !!

I just tested the Neutone models and they work great. Here’s an audio example with a violin model, along with an SC example :slight_smile:

In RAVEEncoder, if the latentSize is different from the model’s, the server crashes. Is there any way to retrieve the latentSize of a model before sending it to RAVEEncoder?

Otherwise, here’s an SC version of the Max nn~ “advanced” example, “to get control over the generation”. Of course you can drive any of the latent dimensions with different techniques…

b = Buffer.read(s, Platform.resourceDir +/+ "sounds/a11wlk01.wav");

~synth = {
	var z = RAVEEncoder.new(
		"/Users/jose/nn_tilde/help/wheel.ts", 8,
		PlayBuf.ar(1, b, BufRateScale.kr(b), loop: 1.0) // input for latent embedding
	);
	z[1] = MouseX.kr(-3, 3); // override latent 1 with the mouse
	z[4] = MouseY.kr(-3, 3); // override latent 4 with the mouse
	RAVEDecoder.new("/Users/jose/nn_tilde/help/wheel.ts", z) // latent input
}.play;


Thanks again for your work!

All the best,


1 Like

I think it should work fine with different sample rates and block sizes as long as scsynth matches the RAVE model. Let me know if it doesn’t!
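For example, a minimal sketch of matching scsynth to the model; 44100 here is just an assumed training rate, substitute whatever rate your model was trained at:

```supercollider
// reboot scsynth at the model's training rate before loading it
s.options.sampleRate = 44100;
s.reboot;
```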

It’s tricky: you have to supply the latent size to RAVEPrior and RAVEEncoder because sclang needs to create the OutputProxy for each latent before scsynth actually loads the torchscript file. I’m not sure how to get around this – reading the torchscript file from sclang, for example, looks hard. I will fix it so it doesn’t crash the server, though.
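To illustrate the constraint (a sketch; the path and the latent size of 8 are placeholder assumptions):

```supercollider
// sclang must know the latent count up front: the multi-output UGen
// expands into one OutputProxy per latent at SynthDef build time,
// before scsynth ever opens the torchscript file.
{
	var z = RAVEEncoder.new("/path/to/model.ts", 8, SoundIn.ar(0));
	z.size.postln; // 8 -- fixed by the argument, not read from the file
	RAVEDecoder.new("/path/to/model.ts", z)
}
```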

Thanks for the example!

In case anyone is interested, I made a TidalCycles interface for this here: GitHub - jarmitage/tidal-rave: Live coding RAVE real-time models using TidalCycles and SuperDirt

You can see it in use at Algorithmic Arts Assembly 2022 (Lil Data): Lil Data, Eloi El Bon Noi, Alicia Champlin, Qbrnthss - YouTube

1 Like

Hi @victor-shepardson,

I just saw that PyTorch is now compatible with Apple M1. Have you tried training models with this version?

And thanks (a little late) for the new version!