General
The world of computer processing is vast and virtually limitless, and for audio processing it opens up a world of possibilities. The table below summarizes some of the pros and cons of using a computer on stage.
| pros | cons |
| --- | --- |
| unlimited in options | unlimited in options |
| light-weight | valuable |
| compact | generally requires more preparation |
| lots of freeware and open source software can be found | external soundcard + controller is often indispensable |
Notice that we consider the lack of limitations to be both an advantage and a disadvantage. In reality every audio software program has its limitations, but its scope extends far beyond the capabilities of pedals. Sometimes, however, there is so much control that it obstructs your intentions and creativity. We look for the right amount of limitation: the less time you spend thinking, the more time you spend actually making music.
Note to self: when using a computer, try to work as intuitively as possible. Efficient work starts with not losing yourself in too many details at first.

The word flow refers to a state we look for when working creatively. It has to do with working close to your skill level, where you are rewarded after a certain amount of challenge. Tasks that are too challenging lead to frustration, while tasks that do not challenge you at all lead to boredom; both kill the flow. In the book ‘Flow’ (Handbook of Competence and Motivation), studies are presented showing that the level of difficulty should match or slightly exceed the skill level of the user/musician, as shown in the graph. This applies not only to working with a computer but to working and learning in general.
The process of developing setups, testing them interactively during multiple sessions and evaluating them was a key procedure throughout this research.

This flow chart shows the steps we went through when developing our extended instruments. An idea leads to a prototype of a tool that is subsequently auditioned in different test cases. After that we can objectively evaluate the functionality and usability of the prototype. Usability has to do with control but also with simplicity. The subjective evaluation criteria include fun, pleasure and rewarding inspiration, which give meaning to the tool and a reason to keep using it.
Ableton Live
We are accustomed to using Ableton Live as our DAW (Digital Audio Workstation), since it is specifically built for use in live situations.
Routing
The interface of Ableton Live is very practical for handling multiple audio inputs and outputs, especially when it comes to flexibility. The starting point of this research project was to make improvised music with extended setups. Because of this improvisational aspect, it seemed logical to search for a flexible setup: which audio source goes to which effect chain, and which effect chain goes to which audio output? This process of software routing can become quite complex. The big question on the software level is the following:
‘What kind of routing setup is versatile and intuitive at the same time?’
The answer is mainly that creating a versatile setup is perfectly possible, as long as you ensure that your instrument remains intuitive to you. This evaluation is subjective, personal and can change over time.
The flow chart is also applicable to the consideration of the signal path. Whereas effect pedals (mostly) have a fixed routing with jack-to-jack patch cables, Ableton makes it easy to send everything anywhere, which does not always benefit simplicity.
Anecdote: during a first public performance of our ensemble in 2018, in which we passed along solo improvisations, Vitja found himself stuck in routing options. He had given himself the freedom to send any element to any effect, while also being able to send anything back to a looper and a sampler channel. The experience was not at all satisfying, because he could not let go of a focus on how his signal was currently routed. At that point it seemed that his setup lacked simplicity and room for intuition.
More information about how our setups are routed can be found in the tab interface / midi.

Effects and plug-ins
Ableton Live is equipped with a bundle of handy audio effects. A big difference from hardware effects is that you can use as many effects as you like in one set. As far as we know there is no limit on the number of effects, tracks, etc. Compared to the old analog studio environment this must have been mind-blowing. Another crucial difference from stompbox effects is that they are not necessarily made for a specific input or a specific outcome. They generally have less character and some of them can sound dull at first; they require a bit more attention for specific applications.
We also used other effect plug-ins. Third-party plug-ins are small programs that add functionality to your DAW; sometimes they also work as standalone programs. We make use of iZotope, Sound Toys, GRM Tools, FabFilter and Waves plug-ins. Most of them are intended for post-production, but as long as they do not introduce latency or demand too much processing power, they can be used for live processing as well.
This is what a violin can sound like when processed with ring-modulation and overdrive.
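For reference, both processes named above are easy to sketch in code. The snippet below is a minimal Python/NumPy illustration, not the plug-ins used on the recording: ring modulation multiplies the input by a sine carrier, and the overdrive is a simple tanh soft clipper; the carrier frequency and drive amount are arbitrary illustration values.

```python
import numpy as np

def ring_modulate(x, sr, carrier_hz=300.0):
    """Multiply the input by a sine carrier: classic ring modulation."""
    t = np.arange(len(x)) / sr
    return x * np.sin(2 * np.pi * carrier_hz * t)

def overdrive(x, drive=4.0):
    """Simple soft clipping: tanh saturation with input gain."""
    return np.tanh(drive * x)

# Example: process one second of a 196 Hz sine (open G string) at 44.1 kHz.
sr = 44100
t = np.arange(sr) / sr
violin_like = 0.5 * np.sin(2 * np.pi * 196.0 * t)
processed = overdrive(ring_modulate(violin_like, sr))
```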
Effect alternation
The environment of Ableton Live is already so vast that it deserves focussed attention. The following example was made with a setup using only the guitar and Ableton Live. An audio effect chain is set up with multiple artificial spaces through which the guitar signal can pass. The artificial spaces are programmed to alternate at random moments.
On a technical level, this random alternation between effect chains is possible thanks to Ableton’s Audio Effect Rack. Different chains can be placed next to each other in parallel, and every chain can be switched on and off using automation clips. This concept can be found online under the term dummy clips.
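As a rough sketch of the same idea outside Ableton, the following Python snippet (hypothetical, not the actual rack) routes successive audio blocks through one of several parallel ‘space’ chains and jumps to a randomly chosen chain after a random number of blocks, much like dummy-clip automation enabling one chain at a time. The three toy ‘spaces’ are placeholders, not the artificial spaces used in the example.

```python
import random
import numpy as np

# Toy stand-ins for the parallel chains in the Audio Effect Rack.
def space_a(block): return block * 0.8                 # dry-ish room
def space_b(block): return np.flip(block) * 0.6        # reversed wash
def space_c(block): return np.tanh(3.0 * block) * 0.5  # saturated space

chains = [space_a, space_b, space_c]

def alternate_randomly(blocks, min_hold=4, max_hold=16):
    """Send each audio block through one chain at a time, switching to a
    randomly chosen chain after a random number of blocks (the software
    equivalent of dummy clips toggling parallel chains on and off)."""
    active = random.randrange(len(chains))
    hold = random.randint(min_hold, max_hold)
    out = []
    for block in blocks:
        if hold == 0:
            active = random.randrange(len(chains))
            hold = random.randint(min_hold, max_hold)
        out.append(chains[active](block))
        hold -= 1
    return out
```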

Here is another audio example of a similar setup processing an incoming drum computer, voice and piano. The main difference from the previously described setup is that here the alternation between effect chains is done manually with a MIDI controller, resulting in carefully chosen moments in which, for example, the delay is frozen. In this way, certain events in the playing can be accentuated by adding an effect or by drying the signal up. This setup is highly inspired by Arthur Russell’s magnificent album World of Echo.
Max For Live and Max/MSP
People who use Ableton Live extensively may have noticed that the DAW has certain limitations. Sometimes specific ideas occurred to us that turned out not to be possible in Ableton Live itself. We distinguish two sorts of limitations:
- limitations regarding controllability: for instance, if you MIDI-map a toggle switch (on any MIDI controller) to turn on one track while the same switch turns off another track, a mismatch arises. These kinds of problems are discussed in interface / midi.
- limitations regarding functionality: this has to do with specific features that audio effects lack and that we wanted to add by developing our own versions.
For the development of these ideas, Max/MSP by Cycling ’74 was the go-to software. Max can be implemented in Ableton Live via the Max For Live (M4L) pack, which makes it very easy to build a hybrid setup using the best of both programs. The setups we ended up making music with always used both Ableton and Max/MSP.

When installed, Max For Live comes with a couple of devices. Two very basic but important ones are LFO and Envelope Follower. These tools can make any parameter in Ableton move in different ways: LFO makes any parameter you assign it to move along a waveform (sine, square, triangle, noise, …), while Envelope Follower makes any assigned parameter react to the volume envelope of an incoming audio signal.
These building blocks give easy access to making ‘smart’ effects, connecting different sound parameters adaptively. For instance, when mapping the Envelope Follower to a reverb’s dry/wet knob, it is possible to make the reverb appear only when playing softly and disappear at louder volumes. This can result in much more dynamic and organic relations between players and their effects.
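As a rough illustration of this adaptive mapping, here is a minimal Python sketch (not the M4L device itself): a simple attack/release envelope follower and an inverted mapping to a reverb dry/wet amount, so the wet level rises as the playing gets quieter. The attack, release and threshold values are arbitrary.

```python
import numpy as np

def envelope_follower(x, sr, attack_ms=10.0, release_ms=200.0):
    """Track the volume envelope with a one-pole attack/release smoother."""
    att = np.exp(-1.0 / (sr * attack_ms / 1000.0))
    rel = np.exp(-1.0 / (sr * release_ms / 1000.0))
    env = np.zeros(len(x))
    level = 0.0
    for i, s in enumerate(np.abs(x)):
        coef = att if s > level else rel
        level = coef * level + (1.0 - coef) * s
        env[i] = level
    return env

def inverse_wet_mapping(env, threshold=0.3):
    """Map the envelope to a reverb wet amount: fully wet when playing
    softly, fully dry once the level exceeds the threshold."""
    return np.clip(1.0 - env / threshold, 0.0, 1.0)
```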
Note to self: when learning Max/MSP or Max For Live, it is recommended to have an idea of what you want to create. The environment is so extensive that you could explore it aimlessly for years.
From freezer to looper/sampler

One of the first things we developed in Max/MSP was a stereo freeze plug-in with controls similar to the Gamechanger Audio PLUS pedal. It was designed for the simple reason that hardware freeze pedals only exist in mono versions (made for guitar in the first place). To freeze an audio signal means to artificially sustain the frequencies present at the specific moment the freeze button is pushed. This Max freeze device has controllable attack and release parameters, allowing the frozen sound to gently fade in and out; without this fading, the effect sounds digital and brute. The last version of the freeze plug-in (called XFreeze) uses an algorithm by Jean-François Charles to freeze the sound and works with two freeze buffers that can crossfade into each other.
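This is not the XFreeze patch itself, but a minimal Python sketch of an FFT-based freeze in the spirit of that approach: capture the magnitudes of one analysis frame, resynthesize them over and over with randomized phases, overlap-add the result, and apply attack/release fades so the frozen sound does not start or stop abruptly. Frame size, hop and fade times are arbitrary illustration values.

```python
import numpy as np

def spectral_freeze(captured, sr, duration_s=4.0, frame=2048, hop=512):
    """Sustain the spectrum of a captured snippet: keep the magnitudes of one
    FFT frame and resynthesize them repeatedly with randomized phases."""
    window = np.hanning(frame)
    mags = np.abs(np.fft.rfft(captured[:frame] * window))
    n_frames = int(duration_s * sr / hop)
    out = np.zeros(n_frames * hop + frame)
    for i in range(n_frames):
        phases = np.random.uniform(0, 2 * np.pi, len(mags))
        grain = np.fft.irfft(mags * np.exp(1j * phases)) * window
        out[i * hop : i * hop + frame] += grain
    return out / np.max(np.abs(out))

def attack_release(x, sr, attack_s=0.5, release_s=1.0):
    """Fade the frozen sound in and out so it does not sound digital and brute."""
    env = np.ones(len(x))
    a, r = int(attack_s * sr), int(release_s * sr)
    env[:a] = np.linspace(0.0, 1.0, a)
    env[-r:] = np.linspace(1.0, 0.0, r)
    return x * env
```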
Fragulator & Stutter by Pluggo Devices: these devices record little pieces of audio and spit them out in real time. Fragulator is triggered by transients in an incoming audio signal; Stutter is triggered by a MIDI message (a sustain pedal, for example). Parameters like buffer time (the length of the recorded audio), pitch and volume are variable.
When set up right, Fragulator feels like an organic companion of the musician, because it follows the dynamics of the player without those dynamics having to be controlled manually. An example using the M4L LFO in random mode, controlling the pitch of the effected signal:
Fragulator automatically generates different effect signals on the left and right sides of the stereo spectrum, while Stutter just loops one captured stereo fragment. With Fragulator the attack is automatically captured and looped (always yielding somewhat percussive sounds), whereas with Stutter it is also possible to capture a more static audio fragment. In the following example Fragulator and Stutter are used in combination to obtain one thick, digital computer cloud (some distortion is used on the Fragulator to make it rougher).
In fact Fragulator can be seen as an ‘always-listening’ device, meaning that you can grab/repeat a moment right after it happened (its buffer constantly renews itself until you recall the sound). This concept fascinated us. Vitja made an M4L patch as an extension of Fragulator, using an iRig keyboard for transposing: hitting the C1 key repeats a moment that just happened with no transposition, while the C2 key repeats that same moment one octave higher, and so on. It is programmed to hold a sample for 2 seconds after pressing (so you can play melodies with it) and to ‘forget’ the sample afterwards.
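A minimal sketch of the always-listening idea and the keyboard transposition (the class name and interface are illustrative, not Vitja’s patch): a ring buffer keeps the last two seconds of input, and recalling it with a semitone offset repitches the captured moment by naive resampling, so pitch and speed stay linked.

```python
import numpy as np

class AlwaysListening:
    """Ring buffer that keeps the last few seconds of input so a moment can
    be recalled right after it happened (the buffer constantly renews itself)."""
    def __init__(self, sr, seconds=2.0):
        self.sr = sr
        self.buffer = np.zeros(int(sr * seconds))
        self.pos = 0

    def write(self, block):
        for s in block:                      # naive sample-by-sample write
            self.buffer[self.pos] = s
            self.pos = (self.pos + 1) % len(self.buffer)

    def recall(self, semitones=0):
        """Return the buffered moment, transposed like the iRig extension:
        0 semitones for C1, +12 (one octave up) for C2, and so on."""
        snapshot = np.roll(self.buffer, -self.pos)   # oldest sample first
        rate = 2 ** (semitones / 12.0)               # repitch by resampling
        idx = np.arange(0, len(snapshot) - 1, rate)
        return np.interp(idx, np.arange(len(snapshot)), snapshot)
```

Calling recall(12) plays the captured moment one octave higher and, because this naive resampling links pitch and speed, twice as fast.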
There are many Max patches to be found for free online. This stutter engine was used in combination with an extension device, so that the stuttering (with a randomized division) only engages when a certain volume threshold is exceeded. While recording leap/detach, Vitja used this setup for the track Myrte.
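The gating logic itself is small; a hedged sketch of the idea (function name, threshold and division values are illustrative): engage the stutter with a randomly chosen division whenever a block’s peak level exceeds a threshold, and disengage below it.

```python
import random
import numpy as np

def threshold_gate(blocks, threshold=0.2):
    """Decide per audio block whether the stutter is engaged, picking a new
    random division each time the level rises above the threshold."""
    division = None
    for block in blocks:
        if np.max(np.abs(block)) > threshold:
            if division is None:                       # (re)engage
                division = random.choice([2, 4, 8, 16])
            yield ("stutter", division, block)
        else:
            division = None                            # disengage
            yield ("bypass", None, block)
```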

As described in the section about stompbox effects, we became specifically fascinated by looping and sampling fragments of audio on the fly while playing. Having worked with the different features of various pedals, we wanted to bring their most interesting features together. When working with Ableton, there are two devices that come close to what we aim for: the looper and the sampler.

The looper has many useful controls, including:
- Standard record/play/overdub/stop buttons
- Speed knob for linked speed-pitch sliding
- Arrows that allow you to multiply the speed down or up by a factor of 2, resulting in half-time (octave down) and double-speed (octave up) interventions (see the short sketch after this list)
- Forward and reverse playing
- Loop length manipulation
- Quantization/tempo/song control (which we consider less interesting for our purpose)
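The octave arrows follow directly from the linked speed/pitch behaviour: changing the playback rate by a factor r transposes the sound by 12 · log2(r) semitones, as this small snippet illustrates.

```python
import math

def rate_to_semitones(rate):
    """Linked speed/pitch: playback rate r transposes by 12 * log2(r) semitones."""
    return 12.0 * math.log2(rate)

rate_to_semitones(2.0)   # +12.0 -> double speed is one octave up
rate_to_semitones(0.5)   # -12.0 -> half time is one octave down
```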
What it lacked for us was the ability to launch one-shots of a recording and the option to play one-shots with a keyboard, transposing our sample along a tempered scale. That is exactly what the sampler is good at, along with numerous other features. The sampler works with any audio sample, but it in turn lacks an option to record something straight into its buffer without having to click and drag a sample on your computer screen. A record button in the device itself would be crucial for live use, because we wanted to be able to manipulate audio that had happened just before the manipulation. In other words: we were desperate for a good live-sampling device with looping features.
Until now, Ableton Live does not offer a way to capture audio and sample it in an intuitive manner. There is, however, one well-known Max For Live instrument called Granulator II, developed by Robert Henke: a powerful textural sampler with a wide range of possibilities.

This tool can be a very handy companion for fast sound design. You can grab audio out of the air, weave and shape it, and control it with a keyboard. Its input device is an always-listening device (it actually keeps recording while bypassed), allowing you to recall a moment in time. However, we aimed for a sampler with less textural qualities and more precise playability, similar to Ableton’s built-in looper.
When we started combining Max and Ableton, we tried to implement every self-made Max patch as an M4L device in our Ableton session. After a while it became clear that this way of working is not always practical because of built-in limitations in the M4L ‘circuitry’: an M4L device has to be either an audio device or a MIDI device. An audio device cannot receive MIDI messages and vice versa [there is a workaround possible via the lmh-module pack, though]. For more extensive Max patches it is better to run Max as a standalone program, Max/MSP, next to Ableton.
Hence, while conducting this research, we used Max for the development of different kinds of samplers. Here’s an evolution of Vitja’s sampler prototypes:



The last sampler/looper device can record two independent loops with options similar to Ableton’s built-in looper. We can skip octaves up and down (and slide in between), and it is possible to change the pitch independently of the loop time. Vice versa, using the timestretch knob, we can stretch or squeeze the loop without affecting the pitch. We can trigger forward, reversed and re-pitched one-shots of recordings, to make events recur without feeling stuck in a loop. Via a MIDI allocation, we can play the sampled recordings with a keyboard. The red box processes the samplers with some post-fader effects (delay – reverb – overdrive), allowing us to make the sounds develop over time. The blue box has some utilities for panning purposes (more info on this in output / amplification). For the sake of simplicity, this sampler works completely independently from Ableton, with both sending to the same pair of outputs.
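Decoupling pitch from loop time is typically done granularly. The sketch below is not Vitja’s patch, just a minimal Python illustration of the principle: grain read positions advance according to a stretch factor (loop time), while the samples inside each grain are resampled according to a pitch factor (transposition); grain size and overlap are arbitrary.

```python
import numpy as np

def granular_loop(loop, sr, pitch=1.0, stretch=1.0, grain_ms=80.0):
    """Play a loop with pitch and time decoupled: `stretch` controls how fast
    the read position advances (loop time), `pitch` controls the resampling
    inside each grain (transposition)."""
    grain = int(sr * grain_ms / 1000.0)
    hop_out = grain // 2                       # 50 % overlap between grains
    hop_in = hop_out / stretch                 # stretch > 1 gives a longer loop
    n_grains = int((len(loop) - grain) / hop_in)
    out = np.zeros(n_grains * hop_out + grain)
    window = np.hanning(grain)
    for i in range(n_grains):
        start = i * hop_in
        idx = (start + np.arange(grain) * pitch) % (len(loop) - 1)
        g = np.interp(idx, np.arange(len(loop)), loop)
        out[i * hop_out : i * hop_out + grain] += g * window
    return out
```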
Hendrik’s 4-track looper: allows the player to record up to 4 loops that are not synced with each other. The idea is inspired by the piece Cascade by William Basinski. The loops are never equally long, so they start shifting from the moment they repeat. Each looper can be set to serial (looper 2 records the output of looper 1) or to parallel (every looper receives the live input), and this choice of input signal can be made separately for each of the 4 loopers. In addition, there is an optional sync button that gives all four loopers a fixed pulse. The whole device is programmed to be mapped onto a Novation SL MkII MIDI controller.
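The serial/parallel choice boils down to a small piece of routing logic. A hypothetical sketch (names are illustrative): each of the four loopers listens either to the live input or to the previous looper’s output.

```python
def route_inputs(live_input, looper_outputs, serial_flags):
    """Return the input signal for each of the 4 loopers; serial_flags[i] is
    True when looper i should record the output of looper i - 1."""
    inputs = []
    for i in range(4):
        if i > 0 and serial_flags[i]:
            inputs.append(looper_outputs[i - 1])   # serial: chain the loopers
        else:
            inputs.append(live_input)              # parallel: record live input
    return inputs
```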

Hendrik’s POLY-BOMBA 8000: a ‘classic’ sampler with only one buffer to record an audio fragment, but with 8-note polyphony. Developed for use with the band Bombataz.

This sampler is linked to an SM57 placed in front of the electric guitar amp. Here is how that sounds in practice: