FUNDAMENTALS OF ELECTRONIC MUSIC
This article presents some of the pillars that sustain the world of electronically generated music. The first musical instruments operating on electric or electronic principles appeared around the turn of the 20th century, the earliest of them being the Telharmonium, patented in 1897: a sort of electromechanical organ that was the precursor of the Hammond organ. In more practical terms, we can consider that the history of electronically generated music began in the 1960s, when the first portable synthesizers were introduced.
Compared with more modern standards, those were relatively rudimentary devices built as separate external modules, which produced simple but energetic sounds, far from the complex and subtle multi-layered effects that the most modern synthesizers can generate. In the 1980s, the introduction of a large number of technologies that had been under experimentation during the previous decades brought many new possibilities to the field of electronically generated music. That decade saw the appearance of the MIDI protocol, digital sound synthesis and countless software-based devices (both sound processors and sound generators).
Hardware-based sequencers became omnipresent, and the new electronic music, besides being performed in real time through hardware devices, could also be created in a small workshop, where a composition could be modified countless times before being exposed to the public. With the arrival of software-based sequencers and Virtual Studio Technology during the 1990s, the production of electronic music became very affordable in both economic cost and effort, as a basic knowledge of musical structures and a personal computer fitted with the proper software could be enough to start a musical project of standard quality.
Synthesizers and samplers
A sound synthesizer (commonly called a synthesizer and often abbreviated as synth) is an electronic instrument capable of producing a more or less wide range of sounds. Synthesizers generate electric signals, known as waveforms, which are simple and monotonous in nature but can be turned into complex and rich sounds when mixed and processed. Synthesizers may imitate the sound of traditional instruments with more or less success, depending on the capabilities of the instrument and the skill of the operator, but they rarely excel at this; their true strength is the ability to generate sonic timbres that have not been explored by traditional music.
Synthesizers can be controlled through different input devices, such as keyboards, music sequencers or instrument controllers. They use analogue or digital technologies, based either on hardware or software, and programmed algorithms to generate electric signals, each system having its own strengths and weaknesses. The most common techniques of sound synthesis are subtractive synthesis, additive synthesis, wavetable synthesis, frequency modulation synthesis, phase distortion synthesis, physical modelling synthesis and sample-based synthesis. Other, more rarely used techniques include subharmonic synthesis and granular synthesis.
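As a toy illustration of the subtractive approach named above, the following sketch generates a harmonically rich sawtooth wave and then carves part of its spectrum away with a simple low-pass filter. It assumes the NumPy package is available, and all constants and helper names are invented for the example; real synthesizers use far more refined oscillators and filters.

    import numpy as np

    SAMPLE_RATE = 44100  # samples per second (CD quality)

    def sawtooth(freq, duration):
        """Generate a harmonically rich (naive, possibly aliasing) sawtooth."""
        t = np.arange(int(SAMPLE_RATE * duration)) / SAMPLE_RATE
        return 2.0 * (t * freq - np.floor(t * freq + 0.5))

    def one_pole_lowpass(signal, cutoff):
        """Remove high harmonics with a simple one-pole low-pass filter."""
        alpha = 1.0 - np.exp(-2.0 * np.pi * cutoff / SAMPLE_RATE)
        out = np.zeros_like(signal)
        y = 0.0
        for i, x in enumerate(signal):
            y += alpha * (x - y)  # the filter "forgets" fast changes
            out[i] = y
        return out

    # Subtractive synthesis: start rich, then subtract harmonics.
    raw = sawtooth(220.0, 1.0)           # bright, buzzy oscillator
    tone = one_pole_lowpass(raw, 800.0)  # darker, mellower result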
Synthesizers are often controlled through a piano-style keyboard, which has led these instruments to be referred to simply as keyboards. Several other forms of controller have been devised to resemble fingerboards, guitars (guitar synthesizer), violins, wind instruments (wind controller) and drums and percussion (electronic drum), among others. Synthesizers lacking controllers are often called sound modules, and they can be controlled through MIDI or through Control Voltage/Gate methods, in which the voltage signal typically controls the pitch of notes while the gate signal controls their on/off status.
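To make the Control Voltage/Gate idea concrete, here is a small sketch assuming the common (though not universal) 1 volt-per-octave convention; the function and its reference points are invented for the example, and actual hardware may use other scalings, such as hertz-per-volt.

    def midi_to_cv(note, reference_note=60, reference_volts=0.0):
        """Map a MIDI note number to a pitch control voltage.

        Assumes the common 1 V/octave convention with middle C
        (note 60) at 0 V; real hardware may differ.
        """
        return reference_volts + (note - reference_note) / 12.0

    # The gate signal is simply on/off: high while the note is held.
    print(midi_to_cv(69))  # A above middle C -> 0.75 V
    print(midi_to_cv(72))  # one octave above middle C -> 1.0 V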
Unlike synthesizers, which generate sound by their own means, samplers have to be supplied with waveforms stored in audio files. Because of this, samplers are much better suited than synthesizers to emulate the sound of a traditional instrument. Beyond this, these instruments are particularly useful for adding sound effects to a composition; in fact, they are designed to create musical notes from any kind of sound, not just that of musical instruments. For example, one could record a single sample of the sound of a gas engine and supply it to the sampler to build a whole range of musical notes from it.
However, such a simplistic approach is advisable only for adding occasional sound effects to a composition, as output generated this way would sound artificial and monotonous. To achieve the rich quality of the sound of a real instrument, the musician would have to make the effort of recording the sound of every note into individual audio files, then prepare these files in audio editing software before supplying them to the sampler, following a rather tedious process known as mapping. After this effort, though, the musician would have a sound patch in their sound library that faithfully resembles the sound of the real instrument.
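The repitching trick behind the engine example can be sketched as follows. This is a naive illustration assuming NumPy, using simple linear-interpolation resampling; real samplers use more sophisticated interpolation, and this naive version also changes the length of the sample, which is part of why single-sample patches sound artificial.

    import numpy as np

    def repitch(sample, semitones):
        """Shift a recorded sample's pitch by playing it back at another speed.

        A shift of n semitones corresponds to a speed ratio of 2 ** (n / 12);
        faster playback means higher pitch (and a shorter sound).
        """
        ratio = 2.0 ** (semitones / 12.0)
        positions = np.arange(0, len(sample) - 1, ratio)
        # Linear interpolation between neighbouring sample points.
        return np.interp(positions, np.arange(len(sample)), sample)

    # One recorded note can be mapped across a keyboard range:
    engine = np.random.randn(44100)   # stand-in for a recorded sample
    octave_up = repitch(engine, 12)   # half as long, twice the pitch
    fifth_down = repitch(engine, -7)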
Sequencers
A music sequencer is a hardware device or a software application that can record, edit or play back music by handling note and performance information in several forms, typically MIDI or CV/Gate, and possibly audio and automation data for plugins and DAWs (Digital Audio Workstations). Sequencers can be categorized by the data types they handle, as follows:
MIDI data: MIDI sequencers.
Control Voltage/Gate data: analogue sequencers.
Automation data: plugins and DAWs, which include software-based sequencers with software-based plugins (either processors or generators) connected to them, as well as sequencers which mainly work with audio waveforms.
Audio data: audio sequencers, including DAWs, loop-based music software, phrase samplers and groove machines, among other types.
Sequencers can also be categorized by their construction and mode of operation, as follows:
Real-time sequencers: devices in which musical notes played by the musician are recorded in real time, as in a handheld recorder, and played back with designated pitch, tempo and quantization. For real-time editing, punch in/out features are often supported; however, to edit details another editing mode may be needed. Built-in sequencers on electronic keyboards often support real-time mode.
Analogue sequencers: devices that generate sound using analogue electronic circuitry. They are designed to assist the musician either while composing in a studio or when taking part in a live performance. The user may change the notes in a sequence without activating record mode, and the steps (the time interval between each note) are independently adjustable.
Step sequencers: devices in which musical notes are divided into steps, and each step is recorded without exact timing. The timing of each step is designated separately, either following the order in which the steps are programmed (used in bass machines) or through the selection of buttons arranged in columns (used in drum machines). Analogue bass machines and drum machines often use this mode, along with a semi-real-time mode. A minimal code sketch of the idea follows this list.
Software sequencers: software applications that provide the functionality of a hardware sequencer within the scope of a personal computer. A software sequencer may emulate the characteristics of an analogue sequencer. The user may control the software sequencer by means of either a graphical user interface or a specialized input device, such as a MIDI controller.
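As referenced in the step sequencer entry above, here is a minimal sketch of the concept: the pattern is a fixed grid of on/off steps, and timing comes from a step clock derived from the tempo rather than from the player. The pattern, names and printed output are invented for the example.

    import time

    TEMPO_BPM = 120
    STEPS_PER_BEAT = 4  # sixteenth-note grid
    STEP_SECONDS = 60.0 / TEMPO_BPM / STEPS_PER_BEAT

    # A 16-step drum pattern: 1 = trigger, 0 = silence.
    kick  = [1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0]
    snare = [0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0]

    def play_pattern(bars=1):
        """Step through the grid; the clock, not the player, sets the timing."""
        for _ in range(bars):
            for step in range(16):
                if kick[step]:
                    print(f"step {step:2d}: kick")
                if snare[step]:
                    print(f"step {step:2d}: snare")
                time.sleep(STEP_SECONDS)  # advance the step clock

    play_pattern()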
With the advent of MIDI technology, programmers were able to write software that could record and play back the notes played by a musician. Unlike early sequencers, which played back mechanical-sounding sequences with exactly equal note length and spacing, the newer sequencers allowed irregular quantization, making it possible to record and play back expressive performances in which the actual timing applied by the musician was preserved.
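What quantization amounts to can be sketched in a few lines: recorded note-on times are pulled toward the nearest grid line, and a strength parameter below 1.0 illustrates how a sequencer can keep part of the musician's original feel. The function and its parameters are invented for the example.

    def quantize(times, grid=0.25, strength=1.0):
        """Snap recorded note-on times (in beats) toward the nearest grid line.

        strength=1.0 gives rigid, mechanical timing; lower values
        preserve some of the performer's original timing.
        """
        quantized = []
        for t in times:
            nearest = round(t / grid) * grid
            quantized.append(t + strength * (nearest - t))
        return quantized

    recorded = [0.02, 0.27, 0.49, 0.77]      # slightly loose sixteenth notes
    print(quantize(recorded))                # [0.0, 0.25, 0.5, 0.75]
    print(quantize(recorded, strength=0.5))  # halfway: tighter but expressive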
Sequencers were typically used to control external synthesizers, especially rack-mounted sound modules, as it was no longer necessary for each synthesizer to have its own keyboard. As this technology matured, sequencers incorporated new features and the ability to record multi-track compositions. Modern software sequencers are usually capable of incorporating virtual instruments implemented as software plugins, allowing musicians to replace hardware-based synthesizers with software-based equivalents. Along with audio tracks, sequencers incorporate a mixer in which the properties of individual tracks can be adjusted.

Sequencers work basically with two types of audio tracks: instrument tracks, which work with musical notes drawn on a piano roll and to which virtual instruments are connected, and waveform tracks, which work with waveforms stored in audio files and are commonly used to add sung performances or sound effects that instruments cannot provide. Some sequencers allow waveforms to be loaded directly into audio tracks (and even support basic editing of the waveform), while others do not, requiring that waveforms be supplied to a sampler connected to an instrument track.
Nowadays the term "sequencer" is most often used to describe software-based sequencers. However, hardware-based sequencers are still in use. Workstation keyboards have their own proprietary built-in MIDI sequencers, and drum machines and some classic synthesizers have their own built-in step sequencers. Standalone hardware-based MIDI sequencers also remain in use, although the market demand for these devices has greatly diminished due to the greater set of features offered by their software-based counterparts. Modern virtual music studios are praised for having brought the physical possibility of creating music to nearly everyone, but they have also been criticized because their ease of use contributes to filling the spectrum of music creation with loads of low-quality productions made by users with little knowledge or talent.
Virtual Studio Technology
Commonly known by its acronym, VST, Virtual Studio Technology is an interface created for integrating software-based plugins into modern software-based sequencers, audio editors and virtual music studios in general that support this feature. These plugins can emulate sound processors (widely known as "effects") or sound generators (in the form of synthesizers, samplers or drum machines). Sound generators powered by the VST interface are called VSTi (VST instruments).
VST and similar technologies use digital signal processing to emulate in software the hardware found in a traditional recording studio. Many thousands of these plugins have been created, both as commercial and freeware products of very diverse quality and appearance, and the VST interface is nowadays supported by a large number of audio applications, especially commercial ones. VST plugins are generally run within a Digital Audio Workstation, providing the host application with additional functionality, and they may coexist with DX plugins powered by DirectX, which serve the same purpose through a different standard.
VST and DX plugins generally feature a custom Graphical User Interface (GUI) displaying fancy controls that resemble the physical switches and knobs present on audio hardware. Some plugins (generally older ones) do not have a built-in GUI, so they rely on the host application to display one. Since the GUIs of VST and DX plugins are not responsive, an inconvenience has emerged with the adoption of much larger screen resolutions in recent years: the fancy graphics and labels of sometimes costly plugins designed a decade ago are now hardly readable. Some VST instruments are emulations of classic hardware-based instruments and processors, and portray their visual and sonic characteristics with more or less fidelity. This enables musicians to use virtual versions of devices that are difficult or impossible to obtain in their physical form.
Virtual instruments require notes to be sent via the MIDI interface in order to output audio, whereas other plugins are used to process the audio thus generated. However, some of these plugins require a MIDI input as well; for example, they might use MIDI synchronization to modulate the effect applied to the audio in time with the tempo set in the composition. MIDI signals can also be used to control the parameters of both instrument and effect plugins. Generally, host applications allow the output of a plugin to be routed to the input of another plugin, a functionality known as chaining. For example, the output from a VST synthesizer could be sent to a VST reverb effect for further processing.
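The chaining idea can be sketched as follows, assuming NumPy. The "plugins" here are deliberately trivial stand-ins with invented names, but the host-side logic of feeding one plugin's output into the next plugin's input is the essential point.

    import numpy as np

    # Hypothetical minimal "plugins": each takes and returns an audio buffer.
    def gain(buffer, amount=0.5):
        """A trivial effect: scale the signal level."""
        return buffer * amount

    def echo(buffer, delay_samples=11025, feedback=0.4):
        """A simple feedback delay, standing in for a reverb."""
        out = buffer.copy()
        for i in range(delay_samples, len(out)):
            out[i] += feedback * out[i - delay_samples]
        return out

    def run_chain(buffer, chain):
        """What a host does with chained plugins: output feeds next input."""
        for plugin in chain:
            buffer = plugin(buffer)
        return buffer

    audio = np.random.randn(44100)  # stand-in for a synthesizer's output
    processed = run_chain(audio, [gain, echo])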
MIDI
MIDI, which stands for Musical Instrument Digital Interface, is an international standard developed to establish communication between electronic musical instruments and computers. However, users do not necessarily have to plug an external musical instrument into their computers, since a sound synthesizer is built into virtually every sound card. In fact, this is key to the way MIDI files (which have the .mid extension) work and explains why such files are so small: they do not contain any sound, only compact instructions telling the sound card which built-in instruments and sequences must be played, as well as the properties of each note contained in a sequence (pitch, velocity and duration).
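These instructions are tiny indeed. Per the MIDI specification, a note-on channel message is just three bytes: a status byte (0x90 plus the channel number), the pitch and the velocity; the duration is implied by the time that elapses before the matching note-off. A minimal sketch, with invented helper names:

    def note_on(channel, pitch, velocity):
        """Build a raw MIDI note-on message (three bytes)."""
        return bytes([0x90 | (channel & 0x0F), pitch & 0x7F, velocity & 0x7F])

    def note_off(channel, pitch):
        """Build a raw MIDI note-off message; duration is simply the
        time elapsed between the note-on and this note-off."""
        return bytes([0x80 | (channel & 0x0F), pitch & 0x7F, 0])

    # Middle C (pitch 60) struck fairly hard on channel 0:
    print(note_on(0, 60, 100).hex())  # '903c64' -- a few bytes, no audio
    print(note_off(0, 60).hex())      # '803c00'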
Put simply, we can think of MIDI files as digital sheet music and of the MIDI synthesizer built into every sound card as a digital music band or orchestra. The quality of these built-in MIDI synthesizers is as variable as the quality of the sound cards, and the sounds (known as voices) they produce will not be the same across computers equipped with different sound cards. Besides, MIDI files cannot carry a sung vocal performance, so their output will always be purely instrumental; although sound patches imitating the human voice are available, there is no possibility for them to articulate words.
The overall concept behind MIDI files is that they allow a musical composition to be stored and later played back or loaded into a virtual studio, where it can be modified or integrated into a larger composition. Within these handy virtual studios the user can draw a MIDI composition directly onto a piano roll just by clicking on it, or record the composition from the output generated by an electronic musical instrument connected to the computer. In either case, the MIDI composition can later be "rendered" into a sound file by means of sound generators, typically VST instruments.
WAV and MP3
When recorded, sound is electronically represented as waveforms. Waveforms are analogue in nature, and in order to store sound in a computer they must be converted to digital format. This is done by sampling the analogue waveform many thousands of times per second; the data acquired from this process is stored in sound files (which typically have the .wav extension). The higher the sampling rate, measured in hertz (Hz), the more accurate the representation of the waveform will be. For example, CD quality is standardized to a sampling rate of 44100 samples per second.
Each sample is 2 bytes (16 bits) in size, and for stereo sound both the left and right channels have to be sampled. Doing the math (44100 samples x 60 seconds x 2 channels x 2 bytes), we can see that one minute of sound sampled at CD quality generates a .wav file of about 10.5 MB. With the average music track being three to four minutes long, it is clear that .wav files claim abundant space on a drive, and such large files are also cumbersome to transfer over the internet. Reducing the sampling rate reduces the size of the files, but only at the expense of sound quality, since a lower sampling rate ignores a greater amount of the variations that constitute the waveform.
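The same arithmetic as code, for quick experimentation:

    SAMPLE_RATE = 44100   # samples per second (CD quality)
    BYTES_PER_SAMPLE = 2  # 16-bit samples
    CHANNELS = 2          # stereo

    def wav_size_bytes(seconds):
        """Uncompressed PCM size: every sample of every channel is stored."""
        return SAMPLE_RATE * BYTES_PER_SAMPLE * CHANNELS * seconds

    one_minute = wav_size_bytes(60)
    print(one_minute)              # 10584000 bytes
    print(one_minute / 1_000_000)  # about 10.5 MB per minute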
[Figure: the full shape of a waveform generated by a plucked-string synthesizer, with a detail view in which small squares mark the individual sampling points.]
The MP3 (short for MPEG-1 Audio Layer III) codec is a compression standard that reduces the size of raw sound files with the minimum possible loss of quality. Most of the music available for download on the internet can be found in this very popular format, along with a few others developed by different companies. The reduction in size is accomplished by compression, elimination of inaudible data and encoding of duplicated data. The encoding process discards data that is above or below the range of human hearing, as well as quiet sounds that are drowned out during loud passages. If data is duplicated in both the left and right channels, the data from one of the channels can be discarded and reconstructed during playback. The result is an audio file that sounds nearly the same as the original but occupies much less disk space.
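The reconstruction of duplicated stereo data can be illustrated with the classic mid/side idea, which is a simplification of what MP3's joint-stereo modes actually do: store the shared content once, keep only the difference, and rebuild both channels on playback. The function names and sample values are invented for the example.

    def encode_mid_side(left, right):
        """Store the shared content (mid) and only the difference (side)."""
        mid = [(l + r) / 2 for l, r in zip(left, right)]
        side = [(l - r) / 2 for l, r in zip(left, right)]
        return mid, side  # side is near-silent when the channels match

    def decode_mid_side(mid, side):
        """Rebuild the original left and right channels on playback."""
        left = [m + s for m, s in zip(mid, side)]
        right = [m - s for m, s in zip(mid, side)]
        return left, right

    left = [0.5, 0.3, -0.2]
    right = [0.5, 0.3, -0.2]  # identical channels
    mid, side = encode_mid_side(left, right)
    print(side)               # [0.0, 0.0, 0.0] -- cheap to compress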
The MP3 algorithm can encode at different bit rates, measured in kilobits per second (kbps). Sound encoded at 128 kbps or more is generally considered to be of good quality, and at that rate an audio file shrinks to roughly a tenth of its original size (CD-quality raw audio runs at about 1411 kbps). However, there is a drawback with lossy compression that we should not forget: loss of quality accumulates each time a compressed file is edited and saved. Because of this, important sound data should be stored in raw sound files - and not only in compressed files - so that each time editing is performed a full-quality file is used as the source.
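The figures behind that ratio, as a quick check:

    # CD-quality PCM: 44100 samples/s x 16 bits x 2 channels
    pcm_kbps = 44100 * 16 * 2 / 1000  # 1411.2 kbps
    mp3_kbps = 128

    print(pcm_kbps / mp3_kbps)  # about 11: a 128 kbps MP3 is roughly
                                # a tenth of the raw file's size
    four_minutes = 4 * 60
    print(mp3_kbps * four_minutes / 8 / 1000)  # about 3.8 MB for a
                                               # four-minute track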