MPE FM Synth
During 2019 and 2020 I built a simple FM synth. Fortuitously, I was able to use this same synth as a testbed for several different features. I made the original version for my talk at ADC 2019 on strong unit types, to prove that the concepts were useful in real-world applications. I used JUCE's voice management system to handle all of the MIDI processing and note on/off handling. Each voice of the synth was a simple FM oscillator pair whose output was scaled by an ADSR envelope. All voices were then routed into a single resonant lowpass filter.
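For a sense of what that looked like, here is a minimal sketch of such a voice built on JUCE's SynthesiserVoice and ADSR classes. The class names (FMVoice, FMSound) and the operator ratio and FM depth values are placeholders rather than the original code; in the real plugin the summed voice output then went through the shared resonant lowpass filter.

```cpp
// Minimal sketch of a 2-operator FM voice in the style described above.
// FMVoice/FMSound and the ratio/depth values are illustrative placeholders.
#include <juce_audio_basics/juce_audio_basics.h>
#include <cmath>

struct FMSound : public juce::SynthesiserSound
{
    bool appliesToNote (int) override    { return true; }
    bool appliesToChannel (int) override { return true; }
};

class FMVoice : public juce::SynthesiserVoice
{
public:
    bool canPlaySound (juce::SynthesiserSound* s) override
    {
        return dynamic_cast<FMSound*> (s) != nullptr;
    }

    void startNote (int midiNote, float velocity, juce::SynthesiserSound*, int) override
    {
        carrierFreq = juce::MidiMessage::getMidiNoteInHertz (midiNote);
        level = velocity;
        carrierPhase = modPhase = 0.0;

        adsr.setSampleRate (getSampleRate());
        adsr.setParameters ({ 0.01f, 0.1f, 0.8f, 0.3f });   // attack, decay, sustain, release
        adsr.noteOn();
    }

    void stopNote (float, bool allowTailOff) override
    {
        adsr.noteOff();
        if (! allowTailOff)
            clearCurrentNote();
    }

    void pitchWheelMoved (int) override {}
    void controllerMoved (int, int) override {}

    void renderNextBlock (juce::AudioBuffer<float>& out, int start, int num) override
    {
        const double sr = getSampleRate();

        for (int i = start; i < start + num; ++i)
        {
            // The modulator phase-modulates the carrier (classic 2-op FM),
            // and the ADSR scales the voice's output.
            const double mod    = std::sin (modPhase) * fmDepth;
            const float  sample = (float) std::sin (carrierPhase + mod) * level * adsr.getNextSample();

            for (int ch = 0; ch < out.getNumChannels(); ++ch)
                out.addSample (ch, i, sample);

            carrierPhase += juce::MathConstants<double>::twoPi * carrierFreq / sr;
            modPhase     += juce::MathConstants<double>::twoPi * carrierFreq * modRatio / sr;
        }

        if (! adsr.isActive())
            clearCurrentNote();
    }

private:
    juce::ADSR adsr;
    double carrierFreq = 440.0, carrierPhase = 0.0, modPhase = 0.0;
    double modRatio = 2.0, fmDepth = 1.5;   // placeholder operator ratio and modulation depth
    float  level = 0.0f;
};
```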
Later Versions
In early 2020 I wanted to build a header bar with preset management, logos, and metering/volume control that I could use in all of my plugins. I decided to build it on top of the FM synth. This turned out to be a good choice, as a later redesign of the synth exposed problems in the way I was saving and loading presets, which I wouldn't have caught in my other plugins.
Later in 2020, I decided to try making a synthesizer that could respond to MPE input and, if successful, improve it with a system that could automatically detect whether incoming data was standard MIDI or MPE and trigger the appropriate kind of synthesizer voice. Because MPE allows for dynamic expression, I built a new kind of synth voice. This voice has no ADSR, because MPE's continuous control over different parameters removes the need for an envelope. I also replaced the single filter at the output of all the voices with a filter in each voice. Taking advantage of MPE's three "dimensions", I made the filter cutoff, the FM depth, and the volume controllable through MPE.

The concept worked well, but I had to use hacks to get my controller (two Seaboard Blocks) working correctly: neither keyboard could accurately detect my finger's vertical position when it was above or below the keys, outputting values that bore no relation to where my finger actually was, beyond the fact that they also came from outside the key area. This makes me worried about the ecosystem of MPE controllers. At the time, ROLI was the major force behind MPE adoption and at one point the biggest manufacturer of MPE devices, so having two of their controllers behave inconsistently is disappointing. Building workarounds for every device is unfeasible, which leaves the user to tune their controller's setup themselves. Virtues of doing so aside, I don't think it should be required for glitch-free operation.
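Going back to the voice design itself: one plausible shape for it, using JUCE's MPESynthesiserVoice, is sketched below. The mapping here (pressure to volume and FM depth, timbre/CC74 to filter cutoff, per-note pitch bend to pitch) is an assumption for the example, and the FM pair and per-voice filter are elided.

```cpp
// Minimal sketch of an MPE voice along the lines described above, built on
// juce::MPESynthesiserVoice. The dimension-to-parameter mapping is assumed.
#include <juce_audio_basics/juce_audio_basics.h>
#include <cmath>

class MPEFMVoice : public juce::MPESynthesiserVoice
{
public:
    void noteStarted() override
    {
        frequency = currentlyPlayingNote.getFrequencyInHertz();
        notePressureChanged();     // pick up the initial expression values
        noteTimbreChanged();
    }

    void noteStopped (bool) override
    {
        // No release envelope: pressure normally falls to zero before the note ends.
        level = 0.0f;
        clearCurrentNote();
    }

    void notePressureChanged() override    // dimension 1: channel pressure
    {
        level   = currentlyPlayingNote.pressure.asUnsignedFloat();
        fmDepth = level * 4.0f;                            // placeholder scaling
    }

    void notePitchbendChanged() override   // dimension 2: per-note pitch bend
    {
        frequency = currentlyPlayingNote.getFrequencyInHertz();
    }

    void noteTimbreChanged() override      // dimension 3: timbre ("slide", CC74)
    {
        cutoffHz = juce::jmap (currentlyPlayingNote.timbre.asUnsignedFloat(), 200.0f, 12000.0f);
    }

    void noteKeyStateChanged() override {}

    void renderNextBlock (juce::AudioBuffer<float>& out, int start, int num) override
    {
        for (int i = start; i < start + num; ++i)
        {
            // The real voice runs the FM pair here (using fmDepth) and feeds the
            // result through its own lowpass filter at cutoffHz; a bare sine
            // scaled by pressure stands in for that.
            const float sample = (float) std::sin (phase) * level;

            for (int ch = 0; ch < out.getNumChannels(); ++ch)
                out.addSample (ch, i, sample);

            phase += juce::MathConstants<double>::twoPi * frequency / getSampleRate();
        }
    }

private:
    double frequency = 440.0, phase = 0.0;
    float  level = 0.0f, fmDepth = 0.0f, cutoffHz = 1000.0f;
};
```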
Nevertheless, I decided to try to build a system that could detect whether a controller is sending MIDI or MPE data and switch which kind of voice it triggers accordingly. MPE is essentially a hack on top of normal MIDI: it sends note data along with channel pressure, pitch bend, and CC 74 to signify input in various "dimensions", and which physical gesture maps to a given dimension is decided by the controller manufacturer. By itself, handling this is straightforward: these messages are easy to detect, so you can let the operator map them to whatever controls make the most sense for them and their controller(s). However, this hack means that when normal MIDI notes and MPE notes arrive in the same stream, it is impossible to tell which incoming data belongs to which controller. One can guess based on whether the data looks like what an MPE controller would send, but that requires compromising audio performance and/or increasing the controller's latency in order to confirm that an MPE event has started or ended while the MIDI event has not (or vice versa). And if the "normal" MIDI stream contains data that is also used in MPE processing, there is no way to tell which data is coming from which controller at all. Whilst my implementation worked with my Seaboard and my computer keyboard sending MIDI, I wouldn't ship this, or anything that tries to do this, in a product.
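To make the guesswork concrete, below is a hypothetical heuristic of that kind: it flags per-note expression messages on member channels as "MPE-like". The zone layout it assumes (master on channel 1, members on 2-16) is itself an assumption, and, as described above, a regular MIDI controller sending the same messages defeats it. Watching for the MPE Configuration Message (RPN 6) is a stronger signal, but only helps if the controller actually sends one.

```cpp
// Illustrative heuristic only: classify a single message as "MPE-like".
// Assumes a lower zone with master channel 1 and member channels 2-16.
#include <juce_audio_basics/juce_audio_basics.h>

static bool looksLikeMPE (const juce::MidiMessage& m)
{
    const int channel = m.getChannel();           // 1-based; 0 for non-channel messages
    const bool onMemberChannel = channel >= 2;    // assumed zone layout

    if (! onMemberChannel)
        return false;

    if (m.isChannelPressure() || m.isPitchWheel())
        return true;

    if (m.isController() && m.getControllerNumber() == 74)   // "timbre" / slide
        return true;

    // Note-ons spread across member channels are also suggestive, but a
    // multi-channel non-MPE rig produces exactly the same pattern.
    return false;
}
```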
Some Retrospective
One thing that came up during development of the MIDI/MPE detection was the difficulty of saving data in JUCE without using the built-in parameter system. I wanted to save some of the UI state so that it could be restored when the plugin's host is re-opened. JUCE offers two mechanisms for saving data. The first is the AudioProcessorValueTreeState, a monolithic object for managing different kinds of plugin parameters, syncing them to the GUI, syncing them to a plugin host (like a DAW), serialization and deserialization, and more. The other is the ValueTree, which is meant for application state rather than plugin parameters and so can hold most built-in data types. I used a ValueTree to save the currently stored preset, whether I was listening for MIDI, MPE, or both, and the output gain. While the last two could have been represented as parameters, I didn't want them exposed to the host, which all parameters are (in JUCE).

An AudioProcessorValueTreeState contains a ValueTree as a way of storing non-parameter data, but this is not immediately obvious. Also non-obvious is how to get data in and out of a ValueTree. Accessing or storing data goes through JUCE's Value class, essentially a single class that can hold most primitive C++ types. Conceptually, the value is wrapped in something like a shared pointer so that changes can propagate to and from the tree. Unfortunately, this whole system adds a lot of complexity, both by introducing an object that must translate between built-in data and type-erased data, and by making a ValueTree nearly impossible to inspect in a debugger, since debuggers can't display type-erased data meaningfully. This made it hard to figure out whether, or how, I was using it wrong. Getting it working most of the time was easy, but making it rock-solid took much longer than it needed to because of this.
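For anyone hitting the same wall, this is roughly how non-parameter data can be stashed in the ValueTree that the AudioProcessorValueTreeState already owns. The property names and listen-mode encoding here are made up for the example, not the plugin's actual identifiers.

```cpp
// Minimal sketch of storing non-parameter state (current preset, MIDI/MPE
// listen mode, output gain) in the APVTS's own ValueTree.
#include <juce_audio_processors/juce_audio_processors.h>

void saveUiState (juce::AudioProcessorValueTreeState& apvts,
                  const juce::String& presetName, int listenMode, float outputGain)
{
    // ValueTree properties are type-erased juce::var values.
    apvts.state.setProperty ("currentPreset", presetName, nullptr);
    apvts.state.setProperty ("listenMode",    listenMode, nullptr);   // e.g. 0 = MIDI, 1 = MPE, 2 = both
    apvts.state.setProperty ("outputGain",    outputGain, nullptr);
}

void loadUiState (juce::AudioProcessorValueTreeState& apvts)
{
    // getProperty returns a var; the second argument is the default if missing.
    auto preset = apvts.state.getProperty ("currentPreset", "Init").toString();
    int  mode   = (int)    apvts.state.getProperty ("listenMode", 0);
    auto gain   = (double) apvts.state.getProperty ("outputGain", 1.0);

    // A juce::Value obtained this way stays attached to the tree, so GUI widgets
    // can follow later changes instead of polling.
    juce::Value gainValue = apvts.state.getPropertyAsValue ("outputGain", nullptr);

    juce::ignoreUnused (preset, mode, gain, gainValue);
}
```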
Overall, this synth was a great platform for testing out different ideas. Whilst not a very interesting synthesizer in itself, it was a great way to learn and test different features.