Very much in progress, but here's a scene - the "singing" is Synthesizer V (project files generated in SuperCollider and rendered via an annoying AppleScript) - yes, the pronunciation is crazy -
Eventual presentation is likely to be multiple monitors and an ambisonic ring.
This is completely amazing, it’s so beautiful… Thanks kindly for sending. Also the SynthV.fandom pages for AiKO and Genbu are very good, I’d no idea about this.
Thanks Rohan - appreciate the kind words! Aiko and Genbu are older, sample-based databases - the new hotness is the AI voices, which are more lifelike. …I feel like I ought to do my time with the (free) lite gen1 stuff for now… The project files are just JSON, so it's easy enough to generate them and then render to get audio back into SC.
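To give a flavor of what "just JSON" means here, a minimal sketch of writing an .svp-style project file from sclang. This assumes the JSONlib quark for serialization, and the key names and time values below are illustrative only, not the complete Synthesizer V schema:

```supercollider
// Hypothetical sketch: building a minimal .svp-style JSON project in
// sclang and writing it to disk. Assumes the JSONlib quark; field
// names and durations are illustrative, not the full .svp format.
(
var quarter = 705600000;  // SynthV counts time in "blicks" per quarter
var notes = [
    (onset: 0 * quarter, duration: 1 * quarter, pitch: 60, lyrics: "space"),
    (onset: 1 * quarter, duration: 1 * quarter, pitch: 62, lyrics: "the"),
    (onset: 2 * quarter, duration: 2 * quarter, pitch: 64, lyrics: "final")
];
var project = (tracks: [(name: "vocal", notes: notes)]);
File.use("~/scene.svp".standardizePath, "w", { |f|
    f.write(JSONlib.convertToJSON(project));
});
)
```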
Happy to report that I have finished the complete episode - run time: 58 minutes and that my 2020 MacBook can cheerfully “perform” it from top to tail.
…now to look for exhibition opportunities, maybe seek a grant to make a working distributable app version … any advice from the community would be welcome…
I’ve shown this to my family and friends and we all enjoy it very much.
Would you be willing to explain a bit how you composed the parlando rhythms and managed the alternation between the speech sections and the more a tempo, rhythmic bits?
I’m also curious whether you’ve had any response from the Trek fan community? I feel like there should be a whole sub-genre of stuff like this.
Re: writing - I put the script on the music stand of the piano and just read it in my head until I land on a rhythmic performance of a line or two… I get a feel for the shape that produces the emphases I want. Often there are little melodicules that get reused as part of each sentence or to stress relationships. In this style, chords attach to individual notes rather than a melody floating over a steady harmonic rhythm - chords are like punctuation or markdown styling that attaches to words. I write a lot of tunes on paper, usually without stems, just to remind myself.
Syllables go by at a dependable rate as in speech - around 5 per second but generally slowing down as phrases progress - and chords have a similar behavior one level out.
When I hit on a bit of language that sounds like a slogan or a song, I allow it to have tempo, though if it goes on too long it breaks the time-texture. I think about Monteverdi a lot… The big tricks for me are controlling the harmony to set up the cycling moment and then seeing how to set up a new harmonic moment for the breath and recommencement after.
I’ve been working on this non-metered style of writing for a loooong time, originally in a sort-of rock style: https://youtu.be/pT_rHitarOY
I haven’t figured out how to reach the Star Trek community yet, sadly - I'm not on social media etc.! Cheers and thanks again for asking.
Thanks for the info. I guess I’m wondering whether you are doing note entry directly in SuperCollider against a clock of some kind or whether the performances are recorded separately in a MIDI sequencer or another kind of tool.
I enter the pitches for the tunes manually (using 1-indexed notation like [1, 3, 5].df(\c, \mixolydian)) -
…then I wrote a little tool which, when I tap the shift key, steps through the vocal line - if I like it, it stores the rhythms.
Then I pass those rhythms into all the other music functions (again, I enter all the pitches manually) so that the rest of the music hangs on the syllables of the vocal line. If I want to re-flow it, I just tap again and everyone moves.
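A minimal sketch of how such a tap-capture tool could work (my guess, not the author's actual code): a GUI window grabs key events and stores the gaps between successive taps as durations. I use the space bar here rather than shift, since a bare modifier press isn't guaranteed to reach keyDownAction on every platform:

```supercollider
// Sketch of a tap-to-capture rhythm tool: each press of the space bar
// records the time since the previous press as a duration.
(
var last, durs = List[];
var w = Window("tap rhythms").front;
w.view.keyDownAction = { |view, char|
    if(char == $ ) {
        var now = Main.elapsedTime;
        if(last.notNil) { durs.add(now - last) };
        last = now;
        durs.postln;  // the growing list of inter-tap durations
    };
};
)
```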
One of the difficulties is the more tempo-ey sections - for those I wrote a TempoMap object that lets me assign durations to the tapped beats. Once that's done, I have methods to warp arrays of "normal" durations onto those maps. Again, all of this can be reflowed just by stepping through the piece with the shift key…
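I don't know the internals of the TempoMap object, but the warping step might look something like this sketch: notated durations (in beats) get stretched by the tapped length of the beat in which they start:

```supercollider
// Hypothetical duration-warping: scale each notated duration (in beats)
// by the tapped length of the beat in which it starts.
(
var warp = { |durs, beatDurs|
    var pos = 0;
    durs.collect { |d|
        var idx = pos.floor.asInteger.min(beatDurs.size - 1);
        pos = pos + d;
        d * beatDurs[idx];
    };
};
// four notated beats of music, tapped at uneven lengths:
warp.([0.5, 0.5, 1, 1], [0.6, 0.55, 0.7, 0.8]).postln;
)
```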
I don’t leave SuperCollider except to render the vocals. SuperCollider builds the project files for the vocal synth, then calls an AppleScript to render, and grabs the resulting WAV into a buffer. These also reflow automatically, although I have to re-render, which is annoying…
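The render round trip can be sketched roughly like this (the paths, the AppleScript itself, and the callback shape are placeholders, not the author's actual setup):

```supercollider
// Rough sketch of the render round trip: shell out to osascript so the
// desktop app renders the project, then read the WAV into a buffer.
(
~renderVocal = { |svpPath, wavPath, done|
    "osascript ~/render_svp.applescript %".format(svpPath.shellQuote)
        .unixCmd({ |exitCode|
            if(exitCode == 0) {
                Buffer.read(s, wavPath, action: done);
            } {
                "render failed (exit %)".format(exitCode).postln;
            };
        });
};
)
```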
Thank you so much for sharing! I really enjoyed your installation at the SC symposium!
I don’t leave SuperCollider except to render the vocals. SuperCollider builds the project files for the vocal synth, then calls an AppleScript to render, and grabs the resulting WAV into a buffer.
I’m wondering, why do you need to leave SC to render the vocals? Are you running Synthesizer V in SC with VSTPlugin? If yes, you can render it with NRT synthesis. I know someone who does this.
Thanks @Spacechild1 for the kind words - no, I have not been using VSTPlugin for this, for whatever reason, but the desktop version of the synth; perhaps this has been very foolish of me! I would have to figure out whether it's possible to get the VST instance to load or open with the generated .svp file… I will report back…
I loved the opera at the SC symposium - it was one of my absolute favorites (it also works if you’ve never watched Star Trek before) - it gave me some Robert Ashley / Perfect Lives vibes.
The number of lines of code in the scripts gave me shivers, and my non-understanding of what’s going on in nvim added to the amazement/alienation.
I think you can’t do this directly, but you can first load it in the GUI and then save the whole plugin state with writeProgram. Then you can dynamically load different scores/settings with readProgram. I don’t know if that would improve your workflow, though.
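Concretely, the suggested workflow might look like this, assuming ~sv is a VSTPluginController already opened on Synthesizer V (the paths are placeholders):

```supercollider
// 1. load a project in the plugin's own editor GUI, then snapshot the
//    whole plugin state to disk:
~sv.writeProgram("~/presets/scene1.fxp".standardizePath);

// 2. later (or in another session), restore that state programmatically:
~sv.readProgram("~/presets/scene1.fxp".standardizePath, { |ctl, success|
    if(success) { "scene1 restored".postln };
});
```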
@dscheiba thanks for the kind words - when you are a venerable professor invite me to Germany and I will buy you a beer!
@Spacechild1 Inspecting the .fxp files, I see that they are mostly just JSON - I think I can build these and load them, then just send .setPlaying(1) and I’m off to the races! Thanks for the nudge; this is a great tip. The issue for me will then be whether it's OK to have very many of these loaded at once, or whether I have to manage by loading programs (and whether doing so is performant). But right away this seems like a big time-saver, and it avoids AppleScript, which is flaky…
You can of course create several plugins upfront and bypass those that you currently don’t need.
However, I think that loading the programs at runtime should be ok. The loading is done on the NRT thread, so it doesn’t interrupt audio processing. If loading takes too long, you can use the “double buffering” trick: create two plugin instances and keep an index in a variable; the “active” instance is used for playback, the “inactive” instance already loads the program for the next section; on every section, just toggle the index and repeat.
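As a sketch of that double-buffering trick, assuming two VSTPluginController instances (~svA, ~svB) are already open on the vocal synth and a program file exists per section:

```supercollider
// Double buffering: one instance plays while the other preloads the
// next section's program; at the boundary, swap roles.
(
~plugins = [~svA, ~svB];
~active = 0;  // index of the instance currently playing

// preload the next section's program on the idle instance
~preloadNext = { |programPath|
    ~plugins[1 - ~active].readProgram(programPath, { |ctl, success|
        if(success) { "preloaded %".format(programPath.basename).postln };
    });
};

// at the section boundary: the preloaded instance becomes active
~switchSection = {
    ~active = 1 - ~active;
    // (un)bypassing / mixer routing of the two instances goes here
};
)
```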