Preliminary timeline system

Happy Friday all,

I’ve been working lately on a proof-of-concept vanilla SC timeline with GUI editing. Although I’m trying to keep my sights achievable (not trying to make the ultimate whatever, just something of use to myself), I thought I’d put it up for general feedback. So here is a preliminary but usable attempt. Lots of rough edges, and many more clip types to be made, but the concept, I think, is strong. Much more information, including a tutorial/example workflow, is in the GitHub readme.

Almost all editable fields can take any valid SC code, and timelines by default create their own Environment and set ~timeline to refer to themselves. They can still access interpreter variables in this mode, or you can turn off useEnvir, in which case they access the parent Environment.
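For anyone curious what this looks like in vanilla sclang, here is a hedged sketch of the mechanism using a plain Environment — the actual timeline class API may differ, and the `(name: "demo")` event is just a stand-in for the real timeline object:

```supercollider
(
// a timeline's private environment, sketched with a plain Environment
var timelineEnvir = Environment.new;
var timeline = (name: "demo");              // stand-in for the timeline object
timelineEnvir.use { ~timeline = timeline };

// a clip's code, evaluated inside the timeline's environment:
timelineEnvir.use {
    ~timeline[\name].postln;                // -> demo
    x = 42;                                 // interpreter variables (a-z) still work
};
x.postln;                                   // -> 42
)
```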

If you do try it out, I would love to know your thoughts, ideas, and critiques, and if you find bugs etc., please report them here or on GitHub with steps to reproduce.

big thanks to the community here <3


Looks cool, will give it a test when I have a chance. The problem of how to mix things of fairly different natures in a timeline is not a simple one.

Years ago I had a go at a minimalist approach to this. Not much used, and not sure if it still works, but it’s here: GitHub - muellmusik/SST

I tried to move away from the ‘track’ metaphor. It seems ultimately derived from analog multitrack tape, and embeds a notion that tracks are scarce resources. Instead I tried to make a sort of grouping concept. Not sure if I got it right, but seems worth trying to reimagine it. I know there has been some experimental work on new DAW concepts which might be worth looking at too…


Oh cool, thanks, I will look at it more, but just scanning it, it’s crazy how similar a lot of the code looks… I guess similar solutions to similar problems. Looking forward to trying it out. I guess my version of grouping is embedding one timeline in another. I did try abandoning tracks at one point, but I came back around because they do actually make sense for a lot of musical purposes (e.g. mute/solo, and eventually effects and track-level parameters like volume or panning), and they also simplify the code. In my timeline you can have clips overlap on a single track, but they’re not drawn very well yet.

I’ve been sort of following Blockhead’s development; I haven’t tried it out yet, but it looks super interesting.


I think Jan Trutzschler also tried to code a timeline/DAW back in the mid-2000s. I’m not sure whether the code is archived anywhere.

EDIT: also, I think some inspiration can be found in the Csound world re: mixing audio, code, and “scores” in the same timeline. For example: Blue


Thanks for sharing Blue, I hadn’t seen it before. Just reading the manual, it’s definitely useful inspiration! And a lot of the concepts are similar or compatible with the way I’ve been thinking. In Blue/Csound, though, I think the end result is always a text file in a particular format used to generate an audio file, whereas SC of course is real-time and can do many other things as well, which adds to the complexity of this timeline project. On the other hand, a lot of the extra features in Blue, like note processors, we get for free because of the expressiveness of sclang.


Blue includes the Clojure programming language, which is both compiled and interpreted. Both are very expressive.

I like the idea of various language alternatives, but one underlying semantics.

I think in his other project, “Pink”, he developed further how to use Clojure for music.

Hm. I’ve been thinking about this timeline the opposite way: all sclang, but not forcing any kind of output mechanism/semantics. All sclang, because I think integrating other languages would be a mess unless it’s done brilliantly, and I don’t have the knowledge or skills for that. And no forced output semantics, because I want it to be able to sequence anything that SC is capable of, which is pretty broad (including responding to real-time input). If I limited it to just generating notes in a particular format, the range would narrow significantly…

Do you plan to output OSC scores for scsynth to render in non-realtime? SuperCollider has yet to develop that area.

Some software tried to do that, but it’s not developed anymore.


For now my focus is real time and not necessarily linear. And not everything that can be done in real time even makes sense in NRT (like waiting for an OSC trigger from someone else, or sending one). But it would be possible for at least some clip types (so far synth, pattern, env) to generate score OSC messages, and once a routine has been played through in real time, it would probably be possible to compile a score from the generated OSC server commands, which could then be used for NRT rendering in the future… bus management would need to be considered, though. Something to play with. I agree this would be a great feature to develop.
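As a point of reference, vanilla SC already provides the target format for this: a Score is just a list of `[time, message]` pairs that can be rendered offline. A minimal sketch (not the timeline project’s API, just the built-in Score mechanism that such an export would presumably feed into):

```supercollider
(
// a hand-built score: send a SynthDef, play two notes, mark the end time
var score = Score([
    [0.0, [\d_recv, SynthDef(\beep, {
        Out.ar(0, SinOsc.ar(440) * Env.perc.kr(doneAction: 2) * 0.2);
    }).asBytes]],
    [0.0, [\s_new, \beep, 1000, 0, 0]],
    [1.0, [\s_new, \beep, 1001, 0, 0]],
    [2.0, [\c_set, 0, 0]]   // dummy message marking the score's end
]);

// render offline: writes the OSC file, then the audio file
score.recordNRT("/tmp/timeline.osc", "/tmp/timeline.aiff", duration: 2);
)
```

Compiling such a list from messages captured during a real-time pass would be the interesting part; node IDs and bus allocations would need to be made deterministic for the NRT pass to match.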


@muellmusik just tried this; there is a missing UGen, PlayBufSendIndex, but I get the gist.

To me the groups look exactly like tracks: each is on its own line, and an item either belongs to a group or is ungrouped, which just displays as its own track. Am I missing something?

(btw, in my no-track experiments I had clips freely positionable on the x and y axes… But I realized that the benefits of free y positioning, e.g. getting direct access to another sound parameter, didn’t outweigh the benefits of tracks for me, since for every new synth, for example, I’m setting many parameters anyway. Another analogy for tracks is instruments in a score, which are only as finite as your imagination and budget… and once the score gets too big to manage in your head, you can just wrap it up and put it as a clip on a new timeline. Or if you want to manipulate a subset of instruments as a group, wrap them in their own timeline.)

This is buried in the GitHub page, but here is an extreme example of embedded timelines, where some timeline clips have their own clocks, and some automate the tempo of the main timeline. It’s just me testing the extremes of the timing system. The milder form is that they all just play together on one playhead, as displayed. But I think this concept might be broadly useful…
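The underlying idea — independent clocks per embedded timeline, plus clips that set the parent clock’s tempo — can be sketched in plain sclang with TempoClock and Routine (this is not the project’s code, just the vanilla mechanism it builds on):

```supercollider
(
// hypothetical stand-ins for a main timeline and an embedded one
var mainClock = TempoClock(1);     // the "main timeline" plays at tempo 1
var innerClock = TempoClock(2);    // an embedded timeline runs its own clock

// a clip scheduled on the embedded timeline's clock:
Routine { 4.do { "inner beat".postln; 1.wait } }.play(innerClock);

// a clip that automates the main timeline's tempo over three beats:
Routine {
    [1, 1.5, 2].do { |t| mainClock.tempo = t; 1.wait };
}.play(mainClock);
)
```

With one shared clock instead, everything collapses to the milder “one playhead” behavior described above.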


@Eric_Sluyter what do you think about https://ossia.io/ ?

@jordan Reading the manual, I really like a lot of ossia’s philosophy, especially the approaches to non-linearity. But having just downloaded it, I find it really buggy: I’m having a hard time just making sound-file clips and moving them, and sometimes a clip changes sound and I have to quit and reopen ossia for it to sound right again. Have you used it for music making?

My experience has been very similar, but I also couldn’t understand what half the things did. There are some really cool technical things, like its own DSP language, but I don’t really know why it does sound and visual processing at all, rather than just being a really good timeline that sequences everything.

I only mention it because it also does nested timelines and has a load of really cool ideas regarding looping and triggering that might be inspirational for your timeline (which looks awesome, by the way!).

This software is precisely the opposite of what I’m looking for. We should go back to the Unix philosophy a little, with modular systems that already work well alone and can also be combined in many ways when one wants.

Working on a serious project with that software feels like a nightmare.

Having just tried it a bit, I agree. The most important thing is reliability. And making music in software that feels more designed for video (maybe, or just not very musical) is rough in uninspiring ways.

The parts of the philosophy I like, as Jordan says, have to do with solving problems around real-time non-linearity, which they have put some good thought into.