Audio production Part 3 - Sound generation
From LXF Wiki
Audio and Music Production - part 3
(Original version written by Graham Morrison for Linux Format magazine issue 65.)
After building a simple synthesizer in the previous tutorial, we investigate other possibilities for sound generation and synthesis...
Just towards the end of the previous tutorial we got to the point where we could start to construct a project using the audio sequencer Rosegarden. While most of the article was concerned with building a basic synthesizer, the final stages involved recording the audio output from AMS into Rosegarden. This is a typical way of working with synthesizers and audio applications, though with escalating processor speeds it's increasingly possible to run more and more virtual instruments without ever having to render the output to an audio file.
Most projects still start with a glut of various MIDI sequences though. Whether they're captured from an external controller, programmed manually, or automatically generated by external MIDI processors (and it's usually a mixture of all three) creativity still starts with these building blocks of a project. Audio is also going under the knife, often being used as an elastic loop that can be wrapped and warped using software without going anywhere near either MIDI or a synthesizer, but despite this, there's still a need to generate the original source material.
Rosegarden originated as a MIDI sequencer, and it's from this foundation that the rest of the audio handling functionality has been added. The difference now is that you don't need external equipment to be able to generate any sound. As we've seen in the previous tutorials, software synthesis is becoming a viable alternative to owning the hardware, with the added advantage of also being self-contained.
One of the best ways of generating MIDI data automatically is by using an arpeggiator. Back in the dark ages of modular synthesis, the arpeggiator became the performance version of the step-sequencer. Where a sequencer typically featured 8 or 16 pre-defined notes played in order, an arpeggiator generated notes automatically depending on the input, with various user-configurable rules defining the relationship between the incoming notes and those the arpeggiator produced. Modern varieties work on the same principle, and at its simplest, an arpeggiator mirrors the input notes to the output in either ascending or descending order, with predefined values for delay and duration.
One of the most comprehensive arpeggiators available for Linux is by the prodigious Matthias Nagorni of Alsa Modular Synthesizer fame. Succinctly called QMidiArp, this application epitomizes the less-is-more approach to user-interface design to such an extent that it belies its powerful capabilities, and may put the average user off. A typical arpeggiator generally provides plenty of hands-on control, with switches and sliders giving immediate access to the most important parameters such as octave, range and direction. This isn't the case with QMidiArp, which takes a typically Linux-like approach by using a small pattern language to describe the input-to-output transformation. To hear anything, it also needs to be wired between a MIDI input (such as an external MIDI keyboard or vkeybd) and a useful MIDI output, like the input of a synthesizer whose audio outputs are routed to the soundcard. An arpeggiator is usually a single instance, but with QMidiArp it's possible to create as many as needed, all running concurrently.
A complete list of QMidiArp's command set
- 0..9 Note indices in buffer
- + Octave up
- - Octave down
- = Reset to input octave
- > Double tempo
- < Half tempo
- . Reset to standard tempo
- ( ) Enclosed chord
- / Volume up by 20%
- \ Volume down by 20%
- d Double length
- h Half length
- p Pause
To generate a classic up-down arpeggiation in QMidiArp, press the 'Add Arp' button and then enter '0' into the Pattern box. This is the hardest part to get your head around, but the '0' simply tells the software to output the note at the first position in the input buffer (all incoming notes are added to the buffer sequentially). As the first note is played, it is removed from the buffer, moving the previously second note to the first position (FIFO). If the pattern '01' is entered, the second note of the sequence is repeated as it's played: first while it's the second note in the buffer, and once more after it has become the first. To make this clearer, with an input sequence of CDEF and '0' for the Pattern, the output would be CDEFCDEFCDEF. With a Pattern of '01', the output would be CDDEEF; the final note isn't repeated because by the time it's played there's no other note left in the buffer.
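The pattern behaviour described above can be sketched in a few lines of Python. This is purely an illustrative model of the rules as explained here, not QMidiArp's actual implementation: each digit in the pattern indexes into the note buffer, one pass plays every indexed note, and the front note is then removed (FIFO) until the pattern's highest index no longer fits in the buffer.

```python
# Toy model of the arpeggiator buffer logic described in the text.
# Not QMidiArp source code -- just a sketch of the same rules.
def arpeggiate(notes, pattern):
    buffer = list(notes)
    highest = max(int(ch) for ch in pattern)
    output = []
    # Keep passing over the pattern while its highest index is valid
    while len(buffer) > highest:
        for ch in pattern:
            output.append(buffer[int(ch)])
        buffer.pop(0)   # the played front note leaves the buffer (FIFO)
    return output

print("".join(arpeggiate("CDEF", "0")))   # CDEF (one pass of the loop)
print("".join(arpeggiate("CDEF", "01")))  # CDDEEF
```

Running it with the CDEF input from the text reproduces the outputs given above: '0' yields one plain pass, while '01' repeats each inner note and plays the last one only once.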
QMidiArp also has a direction selection, which by default is set to 'Up', meaning that it works its way through the note buffer from the lowest note to the highest; the opposite is obviously the 'Down' setting. For more complex arrangements, using more than one arpeggiator, especially with a combination of upward- and downward-running sequences, can create almost complete tracks (think Tangerine Dream). For a good example of this, create three arpeggiators, with the first one containing the simple pattern '>>0' going up ('>>' quadruples the tempo, since each '>' doubles it). The second one's pattern should be up an octave ('>>+0') but otherwise the same. To generate a bass line, the third could be something like '--0012', which drops the sound down a couple of octaves and changes a three-note chord into an arbitrary four-beats-to-the-bar arpeggiation.
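To see how the octave modifiers combine with the note indices in a bass pattern like '--0012', here is a hypothetical single-pass sketch (again, an illustration of the rules from the command table, not QMidiArp source): '+' and '-' adjust a running transposition in octaves, and each digit emits the buffer note at that index.

```python
# Illustrative one-pass interpreter for the '+', '-' and digit commands.
# MIDI note numbers are used, so an octave is 12 semitones.
def arp_pass(midi_notes, pattern):
    offset = 0
    out = []
    for ch in pattern:
        if ch == '+':
            offset += 12       # octave up
        elif ch == '-':
            offset -= 12       # octave down
        elif ch.isdigit():
            idx = int(ch)
            if idx < len(midi_notes):
                out.append(midi_notes[idx] + offset)
    return out

# A held C major chord (MIDI 60, 64, 67) with the bass pattern above:
print(arp_pass([60, 64, 67], "--0012"))  # [36, 36, 40, 43]
```

The three held notes become four bass notes per bar, dropped two octaves, which is exactly the effect described for the third arpeggiator.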
While it's great to be able to play all this auto-generation live, the output becomes a lot more useful when recorded into a sequencer such as Rosegarden. Firstly, it's important to make sure that both the sequencer and the arpeggiator are set to the same tempo. This ensures that the arpeggiator notes can be made to synchronize with an entire Rosegarden project. In QMidiArp this is configured on the settings page, while in Rosegarden it's a little less straightforward. As it's possible to define changes in tempo at any point within a project, a tempo change that applies to the whole project must be placed at the beginning of the arrangement's timeline. To make sure the current position is set to the beginning, press the 'Rewind to beginning' button, then set the tempo either by double-clicking on the tempo in the transport window (by default it's 120 beats per minute) or via the menu (Composition->Tempo and Time Signature->Add Tempo Change). For a moderately slow tempo, try something like 95.
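As a quick sanity check on why the tempos must match, the relationship between tempo and the arpeggiator's step length is simple arithmetic (the values here are illustrative):

```python
# At 95 BPM one beat lasts 60/95 seconds. With the arpeggiator's tempo
# doubled twice ('>>'), there are 16 steps to a 4/4 bar, so each step
# is a quarter of a beat.
bpm = 95
beat = 60.0 / bpm        # seconds per beat (about 0.632 s)
step = beat / 4          # seconds per 1/16-note step (about 0.158 s)
bar = beat * 4           # seconds per 4/4 bar (about 2.53 s)
print(round(step, 3), round(bar, 3))
```

If the two programs disagree on the tempo, these step lengths drift against Rosegarden's bars, and the recorded notes won't line up with the timeline.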
To record QMidiArp's MIDI output into Rosegarden, it firstly needs to be routed to Rosegarden's MIDI input from qjackctl. To make things clearer in Rosegarden, it's also wise to rename the inputs and outputs from the 'Manage MIDI Devices' dialog to reflect the QMidiArp connections and those of any other synthesizers. It's also essential to make sure that the Record Devices list has the QMidiArp input activated. QMidiArp actually has two separate outputs that can be assigned to different arpeggiators, such as for splitting the bass and lead sounds, but for now it's easier to stick to the first one.
The next step is to make sure the intended track is connected to the synthesizer by clicking on the channel name and selecting the corresponding output. If this is connected to QMidiArp by mistake, it can generate feedback errors. The final thing to do is make sure the metronome is disabled by clicking on its icon in the main transport window (otherwise it would be recorded). Next, record-enable the track and press record on the transport control before playing something creative on the keyboard.
After stopping the recording, the track should contain a single block full of notes. If there are spaces in the recording, it can be trimmed down to a more manageable size using the split tool (F7), which is one of Rosegarden's best features. Playing back the track should produce a sequence identical to the recording, and as long as QMidiArp and Rosegarden shared the same tempo, the notes should fit exactly into the bars and beats on the timeline. After opening the block (called a segment in Rosegarden), the notes won't fall exactly onto the time grid, but this can be solved by using the quantize tool. Quantization shifts notes onto the closest note division, and with the arpeggiator doubling the tempo twice to give 16 beats in a bar, selecting 1/16 from the quantize drop-down menu should move the notes to the correct divisions. To show the notes at their correct positions, the matrix editor's grid also needs to be changed to 1/16.
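Quantization itself is easy to picture. The sketch below is a minimal illustration of the idea, not Rosegarden's algorithm: note start times, measured in beats, are snapped to the nearest grid division, here a 1/16 note, which in 4/4 time is a quarter of a beat.

```python
# Snap note start times (in beats) to the nearest grid division.
# grid=0.25 beats corresponds to a 1/16-note grid in 4/4.
def quantize(times, grid=0.25):
    return [round(t / grid) * grid for t in times]

played = [0.02, 0.26, 0.51, 0.74]   # slightly off-grid, as recorded
print(quantize(played))             # [0.0, 0.25, 0.5, 0.75]
```

Each time is divided by the grid step, rounded to the nearest whole step, and multiplied back, which is all "shifting notes onto the closest note division" really means.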
To make this block of notes more usable, it's necessary to cut the superfluous notes from the beginning and end before selecting everything and moving all the notes to the start of a bar. To make the sequence a little more versatile, it's better to move the bass notes to another track so they can trigger a different sound. This can be done easily by selecting all the lower notes with the selection marquee and cutting them from the block. After this, close the matrix editor and draw a new block onto an adjacent track. Opening the matrix editor for this block and pasting the clipboard contents should insert the bass notes from the previous track.
As Rosegarden was developed to compete with similar applications available for other systems, one of the main goals was to create an integrated audio environment. Not only did this mean providing for both audio and MIDI tracks, but also internal effects, synthesizers and mixer routing, all from a single application. While Linux doesn't share the same snappy acronyms as Windows or Mac users, such as VST or VSTi, the same functionality is becoming available with the emerging LADSPA (Linux Audio Developer's Simple Plugin API) and DSSI (Disposable Soft Synth Interface) standards, designed for plug-in effects and synthesizers respectively.
The best way to work with software synthesizers in Rosegarden is to use its dssi integration. It's important to note that Rosegarden needs to be configured to support these plugins: both the dssi library and its header files must be installed and specified when compiling Rosegarden (using '--with-dssi' with the configure script). If the dssi installation has been successful, then choosing the output (right-click on the channel name) for a Rosegarden channel should include not only the MIDI and audio channels, but 'Synth plugin' devices as well.
Dssi is an API for audio plugins, aimed at bringing some of the power of VST instruments to Linux audio applications. So far only a simple stand-alone host and Rosegarden support dssi plugins, but the standard is certainly building enough momentum for other applications to follow. The main reason for using dssi plugins over other options is integration. Compatible software (such as Rosegarden) is able to control dssi plugins directly through its own interface. Not only does this give the impression that the dssi synth is actually part of the same software, it also means that projects containing dssi plugins can be saved and restored without any concern for loading external synthesizers and their associated Jack connections; it even goes as far as remembering patch configuration. Another great advantage is that the audio chain stays within a single program, making it much easier to manage and opening the way for chains of effects to be added on an integrated dssi channel, as with Rosegarden.
By default, the dssi API comes with several example synthesizers, but the two most useful are developed independently. Xsynth takes a more traditional approach to synthesis, consisting of the typical VCO-VCF-VCA chain - basically the same as the AMS synth built in last month's tutorial. It's still comprehensive though, and boasts the excellent overall sound of a classic analogue-style synthesizer. The other big contender for best dssi synth is called hexter, and it's an almost identical copy of the best-selling Yamaha DX7 from the 1980s. The thing that made the DX7 so different, and successful, was that it used a form of synthesis called frequency modulation that is quite different to typical subtractive synthesis. The sounds are never as thick, but instead have a much more distinct and percussive quality especially suited to electric pianos and soft string timbres. While it doesn't feature much of a GUI, hexter is totally compatible with the system-exclusive data used to program and store DX7 presets. As a result, the thousands of sounds available on the internet should work with hexter, along with any editors made for programming the machine in the 20 years since its release.
One of the more integral parts of a software studio is the sampler. This is the virtual descendant of the old hardware stalwarts responsible for many modern genres of music and many more that can't really be called music at all. While Linux doesn't have a native software sampler that can compete with some of the more serious applications for other platforms, there is a dssi version of the popular FluidSynth, a close relation to a sampler.
FluidSynth is a SoundFont synthesizer based on version 2 of the SoundFont specification, and is basically the sample-based equivalent of the synth featured on many current and older Creative soundcards. This means that the simple oscillators typically used in analogue-style synthesizers are replaced with an audio sample. In all other ways the sample is treated the same as an analogue oscillator, in that it can be passed through envelopes and filters, and modulated with LFOs, before finding its way to the audio output. These components are built into the software and hardware versions of each player, hopefully generating the same sound regardless of the platform.
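The signal path described here - a stored sample shaped by an envelope and an LFO on the way to the output - can be sketched as follows. This is a deliberately simplified, illustrative model (a linear decay envelope and a sine LFO on amplitude), not FluidSynth's actual DSP:

```python
import math

RATE = 8000   # sample rate in Hz, kept low for brevity

def render(sample, lfo_hz=5.0, lfo_depth=0.3, release=0.5):
    """Shape a stored sample with an amplitude envelope and an LFO."""
    out = []
    for i, s in enumerate(sample):
        t = i / RATE
        env = max(0.0, 1.0 - t / release)                    # linear decay
        lfo = 1.0 + lfo_depth * math.sin(2 * math.pi * lfo_hz * t)
        out.append(s * env * lfo)
    return out

# A short sine burst standing in for a recorded sample:
sample = [math.sin(2 * math.pi * 440 * i / RATE) for i in range(RATE // 2)]
shaped = render(sample)
```

The sample plays the role of the oscillator; the envelope and LFO are the components that a SoundFont player applies identically whether implemented in hardware or software.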
Over the years, people have used cheap SoundFont hardware as a replacement for expensive samplers, and as a result there are a great many pre-prepared sound banks covering almost every genre of music. It is also perfectly possible to create custom sound banks using a SoundFont editor, the most comprehensive for Linux being Swami. The only downside to using a SoundFont synthesizer is usually the quality of its output. This isn't so much a result of the source material as of the basic filter and LFO designs, which can be found lacking in complexity compared to their retro counterparts. This isn't true for drum sounds, which seldom need these components and therefore fit into the SoundFont framework perfectly. In fact, Swami makes a perfect editor for creating drum kits, as its graphical interface makes the usually arduous task of assigning drum sounds to keyboard notes relatively painless.
DSSI versus VST
Beyond dssi's native synths, though, the area with the most potential on the x86 architecture is VST (Virtual Studio Technology). VST is one of the primary technologies responsible for the incredible proliferation of software effects and instruments in recent years. Developed by Steinberg, the VST API basically builds a virtual studio protocol capable of replacing the wires and MIDI cables of a typical studio setup. Over the years, VST effects and instruments have grown to such an extent that they can now seriously compete with their hardware counterparts, and in many ways improve on their often cumbersome and unreliable designs. There are now VST versions of many of the great synthesizers of a bygone age, from Yamaha's CS-80 to the Moog Modular.
The trouble with VST is that it's only available for Windows and OS X machines (with a brief sojourn to BeOS at one point). Some clever people, though, have developed several approaches to getting some level of VST functionality onto x86 Linux. As the mention of the processor architecture suggests, they all use Wine. Initial attempts focused on building a wrapper around LADSPA, the Linux effects API. This was followed by a more successful effort, jack_fst, which connects directly to Jack. The dssi API, however, proves to be a much better container for VST instruments and effects, being purpose-built to provide a much more VST-like protocol than LADSPA, which was primarily designed for effects. While it is still in the early stages of development, dssi-vst proves to be a truly worthwhile endeavour.
Like hexter and Xsynth, dssi-vst is available from the dssi repository, and only needs to be compiled against a recent version of Wine. Some synthesizers also require msvcp60.dll to be present in Wine's Windows directory. VST instruments and effects are usually supplied as DLLs (Windows libraries, for the uninitiated). Most come with executable installers that can be run successfully from Wine, after which the corresponding DLL can be extracted from Wine's installation and moved to the VST_PATH location (usually /usr/lib/vst). The new synth should then become available to dssi hosts as a dssi client. Part of the dssi-vst package that isn't installed by default is called vsthost. This can either be executed directly or moved to /usr/bin, and with a single argument naming the VST DLL it provides an independent audio and MIDI host for VST plugins. This enables it to work in the same way as ZynAddSubFX, for example, and is a versatile way of bringing VSTs to other applications.
Rosegarden is currently the only audio application to feature support for dssi directly. Each track can be assigned to a Synth plugin in the same way that it can be set to audio or MIDI, and the sound generated by the plugin is routed directly back into the Rosegarden mixer. This makes it perfect for experimenting with the arpeggiator tracks recorded earlier. Sadly, one of the most powerful aspects of working with virtual instruments isn't yet implemented in Rosegarden, and that's automation. While it does currently support rather primitive editing of some of the more important controller data, such as volume and pan for example, it doesn't go the extra mile necessary to provide the custom controlling and routing to be able to change virtual instrument parameters automatically.
While it often becomes essential to record the output from external software synthesizers, as with AMS last month, there's a slightly different approach to recording Rosegarden's internal dssi instruments, and that involves the mixer interface. Mixing consoles and audio production go hand in hand, and in studios worldwide the console is often the heart of the whole process. A mixing console's primary function is obviously to mix sound. This can range from taking several inputs and mixing them down to a single output, to taking a hundred inputs and routing them to different busses, some heading out of the console while others are re-routed to different destinations. Modern consoles have become functional powerhouses, and often impart a considerable degree of their character on a recording's overall sound. They function part as audio patch bay, part as effects router and part as parametric equalizer, and are usually directly linked to a recording device.
It's the same philosophy that's made the virtual mixer the centre of software audio production, and it basically serves the same function. Not only do software mixers provide a means of mixing the various audio related channels to a soundcard's available outputs, but they often provide effect channels, sub-grouping and equalization in the same way as their hardware counterparts. Routing and recording audio through the mixer is often called sub mixing, and it's using this approach that makes recording the internal dssi instruments possible.
The first stage is to disconnect any external connections to Rosegarden from the qjackctl Connect window. These connections usually come directly from a soundcard's inputs, and while they make the recording of external hardware possible, they can introduce unwanted noise into the audio signal path. Currently, Rosegarden doesn't support recording from the dssi channels directly, and the way around this is to send the audio signal to a different bus. A bus, in audio terminology, is basically a shared path into which several channels can be summed, and busses are useful for all kinds of reasons. One typical example is recording a drum kit where each sound is recorded to a different channel. Often the whole kit needs to be treated as a single channel, and this is easily achieved by routing every drum channel to the same bus and controlling them as one channel from there.
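The drum-kit example can be pictured with a toy model: several channels are summed onto one bus, and a single gain on that bus then controls the whole kit. This illustrates the concept only, not Rosegarden's internals; the names and sample values are made up.

```python
# Sum several channels onto one bus, then apply a single bus gain --
# the "control the whole kit as one channel" idea from the text.
def mix_to_bus(channels, bus_gain=0.5):
    length = max(len(c) for c in channels)
    bus = [0.0] * length
    for ch in channels:
        for i, s in enumerate(ch):
            bus[i] += s                    # sum each channel onto the bus
    return [s * bus_gain for s in bus]     # one gain for the summed signal

kick  = [0.5, 0.0, 0.0, 0.0]   # hypothetical per-channel sample values
snare = [0.0, 0.0, 0.5, 0.0]
hats  = [0.25, 0.25, 0.25, 0.25]
print(mix_to_bus([kick, snare, hats]))  # [0.375, 0.125, 0.375, 0.125]
```

Turning `bus_gain` up or down changes the level of the entire kit at once, which is exactly what routing all the drum channels to a sub bus achieves in the mixer.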
By default, Rosegarden doesn't have any additional busses, but they can be added from the audio mixer by selecting Settings->Number of Submasters->2. In Rosegarden they all output to the master bus (which is where the sound finally emerges) and are added to the mixer window. To re-route one of the dssi channels to a different bus, simply left-click on the Output destination button just above the channel's fader. Choosing Sub 1 will send the audio to the first bus, and after starting playback the audio should be shown on Sub 1's level meter.
The next stage is to set the input source to the new Sub bus, and this is done with the button above the output-destination button on an unused audio channel. This is helpfully called the Input Record Source and needs to be changed from In 1 (the default audio input) to Sub 1, which should be the output from the Sub 1 bus. After record-enabling the destination track, Sub 1 can be recorded by pressing the Record button, after which, the dssi synth's output should be rendered to the track.
The final stages of music production are as important as the composition itself. In the next tutorial we'll cover various effects and the final mastering of a project.