Deriving a SynthDef from a sound source

Hello, I was wondering: is there any way to derive a complete SynthDef from a sound source?

You mean: given a sound, generate a SynthDef that produces that sound?

You’d win the Nobel Prize for music if you came up with an algorithm for that, I think. Although … probably DeepMind has a team of grad students working on this.

Cheers!
eddi https://alln4tural.bandcamp.com

OK, but I’ve read the code of many SynthDefs, and I can’t believe they were coded exclusively by trial and error. So I thought there must be something in the middle: at least some tool with a GUI and multiple knobs, or some other software.

Well, trial and error plus a good amount of knowledge about audio theory in general and DSP theory in particular. Audio theory says that any sound can be broken down into a (possibly infinite) number of sine waves. This theory dates back to Joseph Fourier about 200 years ago and is not specific to digital audio; it is a general theory about the propagation of waves through a medium like air. Virtually all forms of complex audio analysis are built on this theory, most commonly via the FFT (Fast Fourier Transform).

So theoretically, with an infinitely fast computer of infinite capacity, you could synthesize any sound in real time. This is (infinite) additive synthesis: stacking sine waves with modulating amplitudes. It would not give you any insight into the design or the ‘sound recipe’, though. And you wouldn’t be able to capture every aspect of the sound from just one note or one chord, because of modulation, nonlinearities, etc.; you would need a lot of instances to analyze. So it’s not very practical for a sound that was synthesized to begin with; better to get the recipe.
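
To make that concrete, here is a minimal additive sketch in SuperCollider. (Just my own illustration: the SynthDef name, partial frequencies, and the 1/2^n amplitude rolloff are arbitrary examples, not derived from any analyzed sound; a real resynthesis would use hundreds of time-varying partials taken from analysis.)

```
(
// Additive synthesis: sum a handful of sine-wave partials.
SynthDef(\additive, {
    var freqs = [220, 440, 660, 880, 1100];           // harmonics of 220 Hz
    var amps  = [0.5, 0.25, 0.125, 0.0625, 0.03125];  // simple 1/2^n rolloff
    var sig = (SinOsc.ar(freqs) * amps).sum;          // mix the partials
    var env = EnvGen.kr(Env.perc(0.01, 2), doneAction: 2);
    Out.ar(0, (sig * env) ! 2);
}).add;
)

Synth(\additive);
```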

I am pretty sure it is possible right now to create an AI that can come up with very convincing code from ‘listening’ to synthesized sounds. It would be trained on a gazillion examples of audio code in all sorts of programming languages, paired with the sound each example produces. Maybe it already exists or is being built?

Sure, some SynthDefs may be the result of trial and error, or “happy accidents”, but to hear or imagine a sound and then make it happen in SuperCollider? That requires an understanding of sound design, synthesis, and DSP theory, as Thor said, plus good old-fashioned practice. It takes a long time to get really good at sound design in SuperCollider (I’m not there yet myself), so it can seem daunting at first, but there are a lot of resources out there that can help you along. @nathan’s videos and blog do indeed contain great examples of how to design SynthDefs from scratch, but I would consider them intermediate to advanced level, for the most part.

If you are truly a beginner at synthesis, I would recommend learning to design sounds on a simple software synth first. Not just twiddling knobs until it sounds good, but actually learning the theory behind it so that you really understand what is happening when you turn each knob. For example, do you know what to expect from the sound when you turn up the resonance, or “Q”, on a filter? Do you know which harmonics are present in a sawtooth wave vs. a square wave? This is all theory stuff you need to know if you want to get really good at this. A good place to start learning theory is Synth Secrets: https://drive.google.com/open?id=12SM0SAOvMq166gc8B1b81Y_S7HPym3Iy
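
To hear both of those questions answered in SC itself, here is a little sketch you can play with (my own example; the SynthDef name and parameter values are arbitrary):

```
(
// Saw contains every harmonic (amplitude ~ 1/n); a 50%-width Pulse
// contains only the odd harmonics. RLPF's rq is the reciprocal of Q,
// so smaller rq means a sharper, more resonant peak at the cutoff.
SynthDef(\filterDemo, { |freq = 110, cutoff = 1200, rq = 0.2, useSquare = 0|
    var raw = Select.ar(useSquare, [Saw.ar(freq), Pulse.ar(freq, 0.5)]);
    var sig = RLPF.ar(raw, cutoff, rq) * 0.2;
    Out.ar(0, sig ! 2);
}).add;
)

x = Synth(\filterDemo);
x.set(\rq, 0.05);       // turn up the "Q": hear ringing near the cutoff
x.set(\useSquare, 1);   // square wave: hollower, odd-harmonics-only timbre
x.free;
```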

Back at the first SuperCollider symposium (2006 I believe), Dan Stowell demoed a genetic algorithm to generate and mutate UGen graphs randomly and compare the resulting spectrum to an audio source. It was able to capture some features of the input audio but it wasn’t “good” in the sense of producing a usable and elegant synthesizer from an audio file. The code was never released IIRC (it had been done as a “wonder what would happen if…” type of thing, not as a “let me create something generally usable” thing).
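
For flavor, here is a toy, language-side sketch of the general idea. To be clear, this is not Dan’s code, just my own illustration: a greatly simplified mutate-and-select loop with no crossover, evolving harmonic amplitudes over a fixed sine stack rather than whole UGen graphs. Everything runs in sclang via the Signal class, no server needed.

```
(
var size = 512, cosTable = Signal.fftCosTable(size);
var nHarm = 16, popSize = 20, nGen = 100;

// Render a genome (array of harmonic amplitudes) to one waveform
// cycle and return its magnitude spectrum.
var spectrum = { |amps|
    var sig = Signal.sineFill(size, amps);
    sig.fft(Signal.newClear(size), cosTable).magnitude
};

// The hidden "target sound": a sawtooth-like harmonic series.
var targetSpec = spectrum.(Array.fill(nHarm, { |i| 1 / (i + 1) }));

// Fitness: negative squared spectral distance to the target.
var fitness = { |amps| (spectrum.(amps) - targetSpec).squared.sum.neg };

// Mutation: jitter each amplitude slightly.
var mutate = { |amps| amps.collect { |a| (a + 0.05.rand2).clip(0, 1) } };

var pop = Array.fill(popSize, { Array.fill(nHarm, { 1.0.rand }) });

nGen.do { |gen|
    var best;
    pop = pop.sort { |a, b| fitness.(a) > fitness.(b) };
    best = pop.first;
    // Elitism: keep the best genome, refill the rest with mutants.
    pop = [best] ++ Array.fill(popSize - 1, { mutate.(best) });
    if (gen % 20 == 0) { "gen % fitness %".format(gen, fitness.(best)).postln };
};

// Evolved amplitudes, hopefully approaching 1, 1/2, 1/3, ...
pop.first.round(0.01).postln;
)
```

A serious attempt would evolve entire UGen graphs and compare time-varying spectra, which is a vastly harder search problem.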

To the main topic… “Trial and error” – this is partly a misconception but partly accurate. To create sounds in SC, you need to learn about signal processing, period. SC is not Serum. Serum is successful for a lot of electronic musicians because it curates the search space where sounds happen, opening up many useful options but closing off others. The closing-off is just as important as the opening-up.

The “search space” in SC is larger and less organized by comparison – meaning that, to navigate this space, you have to learn more.

I’d call it “informed trial and error” – very often, the coolest sounds are made not by trying to achieve exactly that sound, but rather by interacting with the machine and discovering (by accident) something that you didn’t expect, and then using DSP knowledge to refine it.

There is something to be said for putting a bunch of knobs on screen and allowing them to take extreme settings. I sometimes find my SC sound design hampered rather than enabled by the thought of “writing the synth technique correctly”, when it’s sometimes better to do some things wrong.

hjh

It is amazing to see how a stupid question can generate such smart answers.

I must say, I was more interested in reproducing the sound of a digital device, not a “real” acoustic guitar or a cello, but something like this: GitHub - everythingwillbetakenaway/DX7-Supercollider: My accurate Yamaha DX-7 clone. Programmed in Supercollider. (Of course, I mean one single sound per SynthDef, not the entire keyboard.)

I love the plugs, thanks. 🙂

The FM7 is amazing. I don’t own a hardware DX7, so I can’t tell you exactly how close it is to the original, but my ‘sound memory’ of playing DX7s in the past tells me the FM7 is very close to the original sound. Keep in mind that the design of the FM7 is not based on analyzing the sound of a DX7; rather, it is based on analyzing the design of the DX7, which is public thanks to the original patent.
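
To connect this back to SC: here is a minimal two-operator FM pair, the basic building block the DX7 chains six of (in 32 different routings, or “algorithms”). The SynthDef name and parameter values are just my illustration, not taken from the linked clone.

```
(
// One modulator driving one carrier.
// freq: carrier frequency; ratio: modulator/carrier frequency ratio;
// index: modulation index (higher = brighter, more sidebands).
SynthDef(\fmPair, { |freq = 220, ratio = 2, index = 3, amp = 0.2|
    var mod = SinOsc.ar(freq * ratio) * freq * index;
    var car = SinOsc.ar(freq + mod);
    var env = EnvGen.kr(Env.perc(0.01, 1.5), doneAction: 2);
    Out.ar(0, (car * env * amp) ! 2);
}).add;
)

Synth(\fmPair, [\index, 5]);    // brighter
Synth(\fmPair, [\ratio, 1.01]); // slight detune: chorus-like beating
```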
