Using Pianoteq

Hi,

I am currently playing around with SuperCollider and the Pianoteq trial, and as a total noob I have a few questions:

I have found two ways of using Pianoteq: you can either run Pianoteq as its own process and control it via MIDI, or run it as a VST plugin and control it via \vst_midi events.
It seems to me that when you run it as a plugin you have more control over it (you can use the set method) - is that correct, or does it not matter? When would you prefer to run it in its own process?
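For reference, here is roughly what I mean by the plugin approach, as a minimal sketch (the SynthDef name, the variable names and the plugin name "Pianoteq" are just placeholders for whatever VSTPlugin.search finds on your system):

    (
    SynthDef(\pianoteq, { |out = 0|
        // Pianoteq is an instrument, so the UGen gets no audio input
        Out.ar(out, VSTPlugin.ar(nil, 2));
    }).add;
    )

    VSTPlugin.search;                     // scan the system for plugins
    ~synth = Synth(\pianoteq);
    ~pt = VSTPluginController(~synth);    // handle for controlling the plugin
    ~pt.open("Pianoteq", editor: true);   // load Pianoteq (with its native editor)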

When you run it as a plugin you have the gui() and editor() methods to tweak it. There seems to be no programmatic control over what you can do with the editor (is that right?), whereas what you can do via gui() also seems to be controllable programmatically with the get and set methods.
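For example (continuing the sketch above; the parameter index and value are arbitrary):

    ~pt.editor;                          // native plugin editor window
    ~pt.gui;                             // generic parameter GUI built by sclang
    ~pt.set(0, 0.5);                     // set a parameter by index (normalized 0..1)
    ~pt.get(0, { |val| val.postln });    // get is asynchronous and hands a float to the action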

Some parameters are displayed as strings in the gui (e.g. “440 Hz”), whereas when I “get” the corresponding parameter I receive a float. My theory is that all parameters are floats and it is the gui that translates some parameters into strings - is that correct?

It seems to me that when you run it as a plugin you have more control over it

Yes! Also, with the \vst_midi event type you get precise timing, just like with “regular” Pbinds. When you send MIDI to another process, there will always be some timing jitter.
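A minimal sketch of that, assuming ~pt is a VSTPluginController with Pianoteq loaded as above:

    (
    Pbind(
        \type, \vst_midi,      // event type provided by VSTPlugin
        \vst, ~pt,             // the VSTPluginController instance
        \midinote, Pseq([60, 64, 67, 72], inf),
        \dur, 0.25,
        \amp, 0.5              // translated to MIDI velocity
    ).play;
    )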

There seems to be no programmatic control over what you can do with the editor

Yes, the set method only works for VST parameters, but plugins often have additional functionality that is only accessible via the plugin UI. However, you can tweak the settings in the plugin UI and save the current state as a program (writeProgram) or a list of bytes (getProgramData); you can then recall the state at any time with readProgram or setProgramData, respectively.
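For example (the file path and variable names are just for illustration):

    // tweak things in the plugin editor, then capture the full state:
    ~pt.writeProgram("~/pianoteq-state.fxp".standardizePath);    // save to a file
    ~pt.getProgramData { |data| ~state = data };                  // or keep it in memory as bytes

    // ... and restore it later:
    ~pt.readProgram("~/pianoteq-state.fxp".standardizePath);
    ~pt.setProgramData(~state);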

My theory is that all parameters are floats and it is the gui that translates some parameters into strings - is that correct?

Correct!

Many thanks so far, but now I have a follow-up question:

What is the difference between a preset and a program - i.e. between savePreset and writeProgram?

Is it that savePreset just saves the parameter values that are accessible via the gui, whereas writeProgram saves the full state of the plugin, even state that can only be set via the editor and is not exposed as a gui parameter?

And guessing what things do is not my preferred style of learning - is there any systematic explanation of all of this for noobs like me somewhere?

What is the difference between a preset and a program - i.e. between savePreset and writeProgram?

In short: savePreset saves the preset file to some standard location, whereas writeProgram takes a file path. This is all explained in more detail in the section “Preset Management” in VSTPluginController.schelp and the respective method documentation.
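In code, roughly (the preset name and file path are placeholders):

    ~pt.savePreset("my pianoteq sound");    // written to the standard preset location for this plugin
    ~pt.loadPreset("my pianoteq sound");    // recall it by name later

    ~pt.writeProgram("~/somewhere/else.fxp".standardizePath);    // here you choose the path yourself
    ~pt.readProgram("~/somewhere/else.fxp".standardizePath);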

Is it that savePreset just saves the parameter values that are accessible via the gui, whereas writeProgram saves the full state of the plugin, even state that can only be set via the editor and is not exposed as a gui parameter?

No, both save the full plugin state.

The help files? If anything is unclear in the documentation, feel free to open an issue (Issues · Pure Data libraries / vstplugin · GitLab) and I’ll try to improve it.

Aha, so a preset and a program are essentially the same concept (the saved state of a plugin).

And “write” and “save” are also the same - as are “load” and “read”.

It is (at least for me) a little bit surprising to find that “savePreset” differs from “writeProgram” only in where the state file is written, but I assume this is an irritation only first-time users will have…

It’s a bit muddy. In the context of VSTPlugin, I use the term “preset” to refer to preset files written to standard preset locations.

I could have used writePreset and getPresetData instead of writeProgram and getProgramData to make it clear that these are all closely related.

The thing is that with VST2 plugins there is a difference between “program” (.fxp) and “bank” (.fxb) files, whereas with VST3 plugins there are only .vstpreset files.

In the future I might rename writeProgram/readProgram/getProgramData/setProgramData to writePreset/readPreset/getPresetData/setPresetData and keep the old names as (deprecated) aliases.

Also when using sample-accurate MIDI, like JACK MIDI?

I guess JACK MIDI would solve the jitter issue, but you’d still need to compensate for the timing difference between the standalone and scsynth. If you only use Pianoteq and nothing else, that wouldn’t be an issue, of course.

Generally, I don’t see any upsides of using the standalone when you can use it as a plugin inside scsynth.

Is that true, assuming scsynth would use JACK MIDI and all standalone synths in the session would use sample-accurate MIDI (JACK MIDI)?

Let’s swap that around. What are the upsides of the plugin inside scsynth compared to standalone?

scsynth does not use MIDI at all… scsynth and your standalone would not be synchronized in any way. You need to find out the timing difference empirically and then adjust the MIDIOut latency accordingly. Again, this assumes that you are using scsynth in the first place.
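A sketch of what that compensation could look like (the device/port names are hypothetical, and the right offset can only be found by ear):

    s.latency = 0.2;            // scheduling latency of the Server (the default)

    MIDIClient.init;
    ~mOut = MIDIOut.newByName("Pianoteq", "Pianoteq MIDI Input");  // hypothetical device/port names
    ~mOut.latency = 0.2;        // start from the Server latency...
    // ...then nudge it up or down until the standalone lines up with scsynth by ear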

What are the upsides of the plugin inside scsynth compared to standalone?

  • keeping everything in a single application
  • no need for inter-app MIDI or audio routing
  • you can automate everything from sclang (see the sketch after this list)
  • sample-accurate timing relative to other Synths on the Server
  • …
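As a small sketch of the automation point (the parameter name \Volume is just an example; something like ~pt.info.printParameters should list the actual Pianoteq parameter names):

    // sweep a parameter from sclang
    (
    fork {
        20.do { |i|
            ~pt.set(\Volume, i / 19);
            0.1.wait;
        };
    };
    )

    // or map it to a control bus and drive it from other Synths
    ~bus = Bus.control(s, 1);
    ~pt.map(\Volume, ~bus);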

I won’t try to convince you, though. Just do what you find more comfortable.

I can see that this is a potential downside, although the default latency setting (0.2) is high for modern systems, if I recall a comment somewhere on this forum board correctly.

I certainly do see some benefits.

The downsides I can think of are that you rely on VSTPlugin working correctly for your particular synth and being actively maintained. Your plugin needs to be available in a supported plugin format to work in SC with VSTPlugin. If sclang / scsynth crashes or needs to be rebooted, you need to reboot your plugin as well.

Me neither, I just find this an interesting topic and am trying to grasp some of the technical consequences of both approaches.

although the default latency setting (0.2) is high for modern systems,

I’m not sure what you’re trying to say here. Of course, you can change the latency. The point is that you have to manually fiddle around with Server and MIDIOut latency if you want scsynth and your standalone to be in sync. Again, this is of course not an issue if you only use the standalone.

The downsides I can think of are that you rely on VSTPlugin working correctly for your particular synth and being actively maintained.

That’s true. But doesn’t the same apply to SC? :wink:

Your plugin needs to be available in a supported plugin format to work in SC with VSTPlugin.

Sure, but if your instrument does not have a VST plugin version, the question does not even arise…

If sclang / scsynth crashes or needs to be rebooted, you need to reboot your plugin as well.

Well, if you need to reboot sclang/scsynth, you need to restart your piece anyway. Creating the VSTPlugin instance would be just part of your code. It also makes it easier to open the project because you just need to evaluate some code. With the standalone you need to open a separate program, manually load your settings in the plugin UI, etc.
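For instance, restoring a whole setup can be a few lines you evaluate at the start of a session (paths and names are again just placeholders):

    (
    s.waitForBoot {
        SynthDef(\pianoteq, { |out = 0| Out.ar(out, VSTPlugin.ar(nil, 2)) }).add;
        s.sync;
        ~pt = VSTPluginController(Synth(\pianoteq));
        ~pt.open("Pianoteq", action: {
            ~pt.readProgram("~/my-piece.fxp".standardizePath);
        });
    };
    )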

AFAICT, the only real “downside” is that you’d have to “trust” VSTPlugin. That’s a fair point. I can just say that VSTPlugin is used by many people and is generally very stable. I’d suggest giving it a try at least!

FWIW, here’s a video of @madskjeldgaard playing around with Pianoteq with VSTPlugin: https://www.youtube.com/watch?v=EyDi0ehGSko

If the latency is very low in general across the session with SC and standalone synths, it might not be that much of a concern and you wouldn’t have to fiddle around with it.

Also Ableton Link could play a role in syncing the sequencing part.

Makes me wonder how the latency of OSC compares to (software) MIDI. Some synths like SurgeXT and Zynaddsubfx can be played / controlled via OSC.

On Linux you have audio / music session managers like NSM. You would only need to restart sclang, I think.

It does not matter what the latencies are; the point is that they have to be matched – manually and by ear! The Server latency will always be larger than the threshold for accurate musical timing.

Ableton Link could help, but then you rely on yet another external component.

On Linux you have audio / music session managers like NSM. You would only need to restart sclang, I think.

If you’d prefer that, sure.
