SC vs Modular Synths (Eurorack)

Kryananda, firstly, no laughing, but I’ve just listened to Blip 3 and realised that in my earlier post I was talking about another piece (ouch) when I mentioned the gongs. But the blippy/beeping drums are still the same.

Apart from synth melodies (often made using SinOsc), the other sounds are piano notes played randomly backwards, forwards, at different speeds and pitches, and the lot. I spent an afternoon at work - I’m a music teacher - meticulously recording each note on the piano, and then cleaning them up (in Audacity). The result is that I now have a great file of all sorts of piano sounds, hits, scrapes and all - they’re up on Freesound.org. As for the rest of the effects in there, I’ve forgotten what they were!
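Something along these lines would reproduce that kind of random backwards/forwards playback at varying speed and pitch; the file path, SynthDef name and value ranges below are just placeholders, not the actual recordings or patch:

// Rough sketch of random reversed/varispeed sample playback (placeholder names/values).
(
b = Buffer.read(s, "~/sounds/piano-C4.wav".standardizePath);  // hypothetical sample file

SynthDef(\pianoBit, { |out = 0, bufnum, rate = 1, amp = 0.3|
	// when rate is negative, start from the end so there is something to play backwards
	var start = (rate < 0) * (BufFrames.kr(bufnum) - 1);
	var sig = PlayBuf.ar(1, bufnum, rate * BufRateScale.kr(bufnum), startPos: start, doneAction: 2);
	Out.ar(out, (sig * amp).dup);
}).add;
)

(
Pbind(
	\instrument, \pianoBit,
	\bufnum, b,
	\rate, Pwhite(0.25, 2.0) * Prand([1, -1], inf),  // random speed/pitch, sometimes reversed
	\dur, Pwhite(0.2, 1.0)
).play;
)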

So, what I’ve learned from your post is that I need to be more organised about saving my code in the right places, and not making endless changes. I can’t find Blip 3, which sounds much more interesting than Blip 2 (no joking) - the one I was talking about with the gongs! As you might guess, I’m off to clean up the SC folder on my computer!

Since we’re talking about modular-inspired working methods, I have a couple of projects that are relevant.

JITModular makes it easier to use JITLib NodeProxies as synth modules. It was hard to build (the JIT part was easy; saving/restoring and buffer/MIDI management were hard) and I had to do some hacky things to get there, but I can now experiment with different oscillators, filters and other processors, swapping them in and out dynamically, and save the whole thing to disk to be restored later. After that, it does take a little work to copy the pieces into a unified SynthDef, but overall it saves me time because I can test out small changes to the graph by just switching out one module. https://github.com/jamshark70/JITModular

Unfortunately I haven’t had time to write complete documentation … but maybe try it out and ask questions on the forum.

p = JITModPatch.new;

… and then, in the code window, create ~module = { ... synthesis ... }. A module can have an audio input by writing JMInput.ar, and then you supply a signal to it by writing ~source <>> ~target.

For example, I just made a synth with a gapped wavetable oscillator (with layered detuning) --> HPF/LPF pair acting like a band pass filter --> Shaper adding higher harmonics --> reverb. At each stage, I could play with it to get the sound right. Then rewrite it as a SynthDef, add a slow envelope, sequence it, and: New Age magic.
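For anyone wanting a non-JITModular reference point, a bare-bones sketch of a similar chain might look like this; the def name, wavetable contents and parameter values are placeholders, not the actual patch:

// Rough sketch: detuned wavetable oscillators -> HPF/LPF acting as a band pass
// -> Shaper adding harmonics -> reverb. All names and values are placeholders.
(
~wt = Buffer.alloc(s, 1024);
~wt.sine1([1, 0.5, 0.3, 0.2]);       // a simple wavetable for Osc
~shape = Buffer.alloc(s, 1024);
~shape.cheby([1, 0, 0.3, 0, 0.1]);   // Chebyshev transfer function for Shaper
)

(
SynthDef(\chainSketch, { |out = 0, freq = 220, amp = 0.1|
	var detune = [0, 3, -4];                           // layered detuning in Hz
	var sig = Osc.ar(~wt, freq + detune).sum * 0.3;
	sig = LPF.ar(HPF.ar(sig, freq * 0.5), freq * 4);   // HPF/LPF pair as a crude band pass
	sig = Shaper.ar(~shape, sig);                      // waveshaping adds higher harmonics
	sig = FreeVerb.ar(sig, mix: 0.3, room: 0.8);
	Out.ar(out, (sig * amp).dup);
}).add;
)

Synth(\chainSketch);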

hjh

6 Likes

joesh lol :smiley: yes, artists can be sort of chaotic sometimes, but that’s fine! The piano recording episode sounds interesting (you’re a music teacher!!! you lucky man! :slight_smile: ) and reminds me that we are surrounded daily by an amazing array (no pun intended!) of wonderful sound sources, if only we are attentive enough to realise it :wink:

That sounds really interesting, though maybe it’s too advanced for my current level of knowledge.

The sound file sounds amazing, and I hope I can close the gap soon so that I can fully appreciate what you’re describing.

Thank you!

James, that sounds great. It also reminds me that I seem to remember a very simple line of code that could be added at the end of a SynthDef and which turned the various args into sliders. However, I can’t remember what it was, or where I found it - either in the SC Book, or maybe in an example?

BTW, it isn’t varGui - this one I remember.

Thanks in advance - if my explanation brings up any ideas?

1 Like

You probably mean the metadata argument, used with the key ‘specs’. I’ve meanwhile gotten into the habit of almost always writing SynthDefs with metadata specs.

With the standard library you can use ‘makeGui’ (it doesn’t seem to be widely used this way); with miSCellaneous installed, ‘sVarGui’.

// using 'myFreq' here, as for 'freq' and 'amp' there are already global specs defined
(
SynthDef(\test, { |myFreq, amp|
	Out.ar(0, SinOsc.ar(myFreq, 0, amp))
}, metadata: (
	specs: (
		myFreq: [20, 10000, \exp, 0, 400]	
	)
)
).add
)


SynthDescLib.global[\test].makeGui

\test.sVarGui.gui
3 Likes

Daniel, you’re my saviour, brilliant!

Big thanks - Joe

I have a fairly large modular synth setup and also spend a lot of time with SuperCollider. I find myself using SC a lot for control, sending MIDI to various modules, as well as for signal generation, particularly playback of samples. My modular ends up seeing more use for effects, processing and mixing. But these are trends, not necessarily rules. And as I get better at writing synthesis code, perhaps more will end up coming from SC.
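For what it’s worth, that control side can be as small as a Pbind using the \midi event type; the device and port names here are placeholders for whatever MIDI interface actually feeds the modular:

// Sketch: sequencing an external synth/module from SC over MIDI.
// "MyInterface" / "Port 1" are placeholder names, not a real device.
(
MIDIClient.init;
~mOut = MIDIOut.newByName("MyInterface", "Port 1");

Pbind(
	\type, \midi,          // send MIDI notes instead of playing a Synth
	\midiout, ~mOut,
	\chan, 0,
	\degree, Pseq([0, 2, 4, 7], inf),
	\dur, 0.25
).play;
)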

I think I introduced it badly… the intent is actually to keep each module as simple as possible.

(It’s mostly a documentation problem – when the semester is over, then I can work on a manual of sorts – until then, it looks “magical” just because it isn’t explained properly.)

For a quick example, using that system, you would run p = JITModPatch.new and then, in the patch code window:

// JMInput.ar is part of JITModular: Module's audio input
// .play() connects this module to the speakers
~out = { JMInput.ar }; ~out.play(vol: 0.2);

~osc = { Saw.ar(440) };  // simple oscillator

~osc <>> ~out;  // connect to output

~filter = { LPF.ar(JMInput.ar, 2000) };

// modules can be chained in series
// you can read the synth structure directly from the code
~osc <>> ~filter <>> ~out;

// add a control input
~osc = { |freq = 440| Saw.ar(freq) };

// swap in a different type of oscillator
~osc = { |freq = 440, width = 0.5| Pulse.ar(freq, width) };

// for GUI: set an appropriate range for 'width'
~osc.addSpec(\width, [0.01, 0.5]);

// swap in a different filter
~filter = { |ffreq = 2000, rq = 1| RLPF.ar(JMInput.ar, ffreq, rq) };
~filter.addSpec(\ffreq, \freq, \rq, [1, 0.02, \exp]);

Each line is very simple. (The fact that I used it yesterday to build something more complicated doesn’t mean that you are required to do lots of complicated things with it.) What I wanted to accomplish with this system is to break up the process of designing a synth so you can focus on one piece at a time, and experiment with each piece as its own unit.

It works well for that (though there are still some issues trying to use it in the classroom…).

hjh

2 Likes

Concerning the original question, trying to emulate the specific sound quality of a modular device by software means is probably the most difficult thing, because of analogue idiosyncrasies.

But it’s debatable whether this is desirable or necessary. Personally, I like to use SC to obtain things that are hardly possible, or impossible, with analogue devices; the digital space of possibilities feels infinite, but this alone is not an argument pro or contra per se. Luckily, however, the two worlds can also be linked.

Not being an expert with analogue devices, I find them inspiring; e.g. I wrote the smooth wavefolding classes contained in the miSCellaneous quark because I stumbled across analogue wavefolding in the Buchla tradition. They are an anti-aliasing variant of the main library’s Fold UGen; pictures of some options applied to a sine source might be self-explanatory:

https://github.com/dkmayer/miSCellaneous_lib/blob/master/HelpSource/Tutorials/attachments/Smooth_Clipping_and_Folding/fold_examples.png
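If you haven’t tried wavefolding at all, a bare-bones sketch with the core Fold UGen shows the basic effect, though without the anti-aliasing these classes provide (values here are arbitrary):

// Plain (aliasing) wavefolding with the core Fold UGen, for comparison only.
(
{
	var drive = MouseX.kr(1, 8);          // push the sine further past the fold thresholds
	var sig = SinOsc.ar(200) * drive;
	(Fold.ar(sig, -1, 1) * 0.2).dup
}.play;
)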

Furthermore, feedback behaviour is different in the digital domain; e.g. I’d be interested in phase-locked loop examples in SC. I asked on the mailing list some days ago and got no response. Maybe anyone here has some experience?

Dear James, thanks a lot for your explanation. Things are clearer now, and I can grasp the sense of it all much better.

Thanx! :smile:

Yes Daniel… absolutely useful and amazing :)))) thaaaaaanks!

I bought my first hardware synth 10 years ago, and 5 years later I got my first Eurorack modules; the system kept growing until 6 months ago, when I started to learn SC and Pd.

On the functionality side, one of the big advantages of SC is that you can do polyphony with ease, and even multichannel work, while Eurorack leans more toward monophonic patches, or you may need a wall of modules to do the task.
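A minimal sketch of the polyphony point: one SynthDef and a pattern give you as many simultaneous voices as you like (the def name and values below are arbitrary):

// One small SynthDef, unlimited voices; doing this in Eurorack needs a lot of hardware.
(
SynthDef(\polySketch, { |out = 0, freq = 440, amp = 0.1, pan = 0|
	var env = EnvGen.ar(Env.perc(0.01, 1.5), doneAction: 2);
	Out.ar(out, Pan2.ar(SinOsc.ar(freq) * env * amp, pan));
}).add;
)

// six-note chords, each event spawning its own synth nodes
Pbind(
	\instrument, \polySketch,
	\degree, [0, 2, 4, 7, 9, 11],
	\dur, 2,
	\strum, 0.05
).play;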

In my experience, it’s all about the form factor. If I sit in front of my Eurorack system, I might think about how to do things within its constraints, or, most of the time, I’m randomly patching with no actual goal; sometimes I end up with something cool, sometimes with nothing, and to me that is the most rewarding part of Eurorack. However, like many Eurorack people, I would think of getting new modules to do something more, or something my current system is unable to do. If you can afford it, that’s okay, but it might lead you down a rabbit hole I’ve watched so many others fall into: buying new modules for inspiration.

My opinion is: if you can afford it, get started with Eurorack, but be patient, one step at a time.

1 Like

Yes Vinc, I absolutely do agree with you when you write that relying ONLY on the tools isn’t enough:

you don’t play and compose as J. Hendrix did by simply buying the same guitar, do you? :slight_smile:

Hi Jam,

I just wanted to tell you that I’ve been fooling around with the JITModular library a bit… fantastic!

Yes, now I really understand the “modular” thing and it seems to offer a totally different approach, very rich and stimulating…

… thank you once again for your precious advice :slight_smile:

Wow! Looks very handy. Can you have several outputs with JITModular? Like a sequencer which would output triggers and notes?

Have a look at AE Modular.

Currently, JITModular assumes audio-rate proxies should be stereo, but you can pre-define the number of output channels:

~audio2 = { DC.ar(0) ! 4 };
// post window warning: StereoNodeProxy.nil(localhost, nil): wrapped channels from 4 to 2 channels
~audio2
// -> StereoNodeProxy.audio(localhost, 2)

~audio4.ar(4);  // pre-define the channel count
~audio4 = { DC.ar(0) ! 4 };
~audio4
// -> StereoNodeProxy.audio(localhost, 4)

.kr proxies are unconstrained: they take as many channels as you output.

But IMO the better way to sequence in JITModular is with patterns.

~player = \psSet -> Pbind(
	\skipArgs, [\list, \of, \symbols, \to, \ignore],
	\degree, ...,
	\dur, ...
);

… which will treat any gt inputs as gates and t_trig inputs as envelope triggers.

hjh

Thanks a lot James for your detailed explanations!

Concerning sequencing and modular design, you might have a look at the PbindFx class from miSCellaneous_lib. My primary concern was sequencing events and fx data, but PbindFx is not restricted to the notion of a source plus fxs; together with fx data you can sequence arbitrary graphs of nodes (patches of modules in a general sense, though it needs SynthDefs that follow some conventions). As the syntax only uses indices, their content can be exchanged on the fly, as can the graph pattern itself (see the PbindFx help, Ex. 10 for modulation graph sequencing and Ex. 7 for replacement).

Syntax example for sequencing graphs with keyword ‘fxOrder’:

\fxOrder, Pn(Pshuf([
	`(0:1, 4:1, 1:6),
	`(0:1, 5:1, 1:6),
	`(0:2, 3:2, 2:6),
	`(0: 1, 1: [2, 3], 3: 4),  // arrays cause parallel routing
	`(0: [1, 2, 3], 2: [4, \o], 3: 2)  // graph example below, 1 and 4 go to out as not defined separately
]))

[Image: PbindFx_graph_3b, the node graph for the last fxOrder example above]

Bus management is done automatically; all buses can be multichannel. Sizes should fit, but they are checked per event.

2 Likes