SOUND SYNTHESIS TUTORIAL (VI)
A tutorial on sound synthesis: theory of sound synthesis (phase) and the keyboard as a modulator.
Theory on the synthesis of sound - Phase
It is time to concentrate on the concept of phase and the role it plays in wave shaping. Phase becomes meaningful when two or more waves are combined. In the simplest example, as shown in the following graphic, combining two identical waves with no delay between them generates a wave of double amplitude. But what would happen if one of these identical waves were delayed half a cycle with respect to the other? They would cancel each other out and the result would be silence. As seen in the first chapter of this tutorial, a single sine wave can be described by specifying just its frequency and amplitude, but when two or more waves are combined we also have to consider their relative offset (delay). This offset, usually called phase, can be measured in time or in degrees. As we can see, the concept of phase has serious implications for sound wave theory and sound synthesis.
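The reinforcement and cancellation described above can be sketched numerically. The following Python snippet (an illustrative sketch; the function name and sampling scheme are my own) sums two identical sine waves at a given phase offset, expressed in cycles, and reports the peak amplitude of the result:

```python
import math

def sample_sum(freq, amp, phase_offset, n=1000):
    """Sum two identical sine waves, the second delayed by phase_offset
    (measured in cycles), and return the peak amplitude of the result."""
    peak = 0.0
    for i in range(n):
        t = i / n  # sample one cycle of the wave
        a = amp * math.sin(2 * math.pi * freq * t)
        b = amp * math.sin(2 * math.pi * (freq * t + phase_offset))
        peak = max(peak, abs(a + b))
    return peak

# No offset: the waves reinforce each other and the peak doubles.
in_phase = sample_sum(freq=1, amp=1.0, phase_offset=0.0)

# Half a cycle (180 degrees) out of phase: complete cancellation, silence.
out_of_phase = sample_sum(freq=1, amp=1.0, phase_offset=0.5)
```

With no offset the peak is 2.0 (double amplitude); at half a cycle it is 0 (silence), exactly as the graphic illustrates.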
The concept of harmonics has a lot of importance in the following explanation. Harmonics are essential in any field related to the generation and manipulation of sound; in fact, harmonics are what sound is made of. It is essential to understand harmonics and their role, so if you need to review this concept you will find information about it in the first chapter of this tutorial.
Combining complex out-of-phase signals does not necessarily lead to complete cancellation. Let us take as an example a saw wave, a type of wave which contains every harmonic. If the first harmonic (the fundamental frequency) is located at 100 Hz, the second harmonic will be at 200 Hz, the third at 300 Hz, and so on. Combining two of these saw waves with their fundamental frequencies offset (delayed) by half a cycle would cancel the fundamentals. The second harmonics, however, located at 200 Hz, would be shifted by a whole cycle and would therefore add; the third harmonics, at 300 Hz, would be cancelled; the fourth harmonics, at 400 Hz, would be added, and so on. Hence, odd harmonics are cancelled whereas even harmonics are reinforced. The practical result is that the saw wave doubles its frequency (what was previously the second harmonic becomes the new fundamental) while keeping the same amplitude (half of the harmonics have been eliminated, but the remaining ones have doubled in amplitude).
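The harmonic-by-harmonic reasoning can be verified with a short calculation. Delaying a wave by half a cycle of its fundamental shifts harmonic n by n half-cycles (n·π radians), and summing sin(x) with sin(x + n·π) yields an amplitude of 2·|cos(n·π/2)|: zero for odd n, doubled for even n. A sketch in Python (the 1/n harmonic weights are the standard saw-wave series; the function name is my own):

```python
import math

def shifted_harmonic_amplitude(n):
    """Amplitude of harmonic n after summing a saw wave (harmonic
    amplitudes 1/n) with a copy delayed half a cycle of the fundamental.
    The delay shifts harmonic n by n half-cycles (n*pi radians), and
    sin(x) + sin(x + n*pi) has amplitude 2*|cos(n*pi/2)|."""
    return 2 * abs(math.cos(n * math.pi / 2)) / n

# Odd harmonics cancel; even harmonics double in amplitude.
amps = {n: round(shifted_harmonic_amplitude(n), 6) for n in range(1, 7)}
```

Note that the surviving harmonics (amplitudes 1.0, 0.5, 0.333… at 2f, 4f, 6f) follow exactly the 1/n amplitude series of a saw wave, confirming that the result is a saw wave at double the frequency.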
Fourier analysis states that any complex signal can be described as a (possibly infinite) sum of sine waves representing all the frequencies present in the signal. Hence, at any given offset between two identical signals, each frequency component is phase shifted by a different amount. But what does this mean for sound synthesis? One conclusion is that filtering leads to changes in phase: since filters alter the amplitudes of individual frequency components, they also alter their phases. It is also interesting to note that phase modulation is the basis of a whole type of sound synthesis. The Yamaha DX7 synthesizer, which is usually referred to as an FM (frequency modulation) synthesizer, is actually a PM (phase modulation) synthesizer. Both types of modulation sound very similar, but phase modulation is usually easier to implement in a synthesizer.
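A two-operator phase-modulation voice of the kind mentioned above can be sketched as follows. This is a minimal illustration, not the DX7's actual implementation; the function and parameter names are my own:

```python
import math

def pm_sample(t, fc, fm, index):
    """One sample of a two-operator phase-modulation voice: the
    modulator's output, scaled by the modulation index, is added
    directly to the carrier's phase."""
    modulator = math.sin(2 * math.pi * fm * t)
    return math.sin(2 * math.pi * fc * t + index * modulator)

# With index 0 the modulator has no effect: a plain sine carrier.
unmodulated = pm_sample(t=0.25, fc=1, fm=2, index=0.0)
```

In true FM the modulator would instead be integrated into the carrier's frequency; for a sine modulator this differs from PM only in the modulator's phase and scaling, which is one reason the two techniques sound so similar.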
The keyboard as modulator
Apart from the standard control of notes (pitch), the keyboard can be assigned to different parameters of a synthesizer (if the synthesizer allows it) to act as a modulator, achieving a range of additional effects. This technique is known as keyboard tracking or keyboard scaling. For example, the keyboard can be set to modulate the filter's cut-off frequency; as you play up the keyboard, the filter opens more and more and the sound becomes brighter, whereas playing down the keyboard has the opposite effect. This particular technique is known as filter tracking. Another common form of keyboard tracking involves setting the keyboard to modulate the amplitude of the sound, so the sound becomes louder as you play up the keyboard and quieter as you play down. Synthesizers with a more extensive modulation matrix will yield a larger variety of results.
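Filter tracking amounts to a simple mapping from key number to cut-off frequency. The sketch below assumes MIDI note numbers (60 = middle C); the function and parameter names are illustrative, not any particular synthesizer's:

```python
def tracked_cutoff(note, base_cutoff=1000.0, base_note=60, amount=1.0):
    """Keyboard tracking applied to a filter: scale the cut-off
    frequency with the key played. amount=1.0 means the cut-off
    follows the keyboard exactly (doubling per octave played up);
    amount=0.0 disables tracking entirely."""
    octaves = (note - base_note) / 12.0
    return base_cutoff * 2.0 ** (octaves * amount)
```

Playing an octave above the base note (note 72) doubles the cut-off to 2000 Hz, so the sound brightens; an octave below (note 48) halves it to 500 Hz.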
Another aspect of keyboards is velocity, which represents how hard a key is struck. The oldest synthesizers often lacked a way of measuring velocity, but in modern synthesizers velocity control is a serious manipulation tool. Velocity can be assigned to a number of parameters (through the modulation matrix) depending on the possibilities offered by the synthesizer. It is usually assigned to amplitude, so the keyboard imitates the natural behavior of a non-electronic keyboard: the harder a key is struck, the louder the sound (and vice versa). For example, piano samples are often patched so that velocity controls both amplitude and filter cut-off, so the harder a note is struck the louder and brighter the sound becomes, achieving a more natural presence. Velocity is thus used both for expression and as an additional modulation tool.
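The piano-style patch described above can be sketched as a mapping from velocity to both destinations at once. This assumes MIDI velocity values (0–127); the curve shape and cut-off range are illustrative choices, and all names are my own:

```python
def velocity_response(velocity, curve=2.0, max_cutoff_boost=4000.0):
    """Map a MIDI velocity (0-127) to an amplitude and a filter
    cut-off boost, as in a typical piano patch. A power curve makes
    soft strikes noticeably quieter than a straight line would."""
    norm = max(0.0, min(1.0, velocity / 127.0))
    amplitude = norm ** curve               # quieter for soft strikes
    cutoff_boost = max_cutoff_boost * norm  # brighter when struck harder
    return amplitude, cutoff_boost
```

At full velocity (127) the note sounds at full amplitude with the cut-off raised by the full 4000 Hz; softer strikes are both quieter and darker, imitating an acoustic piano.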
Up to this point I have presented the basic knowledge needed to make use of a synthesizer without always having to rely on factory patches. Programming a sound patch can be as interesting and creative as composing a piece of music. If you are a newcomer to synthesizer programming, I recommend starting with a basic, simple synthesizer such as the classic analogue Wahnsyn Type I.