The University of Surrey

EE2.LabA: L2 Digital Audio Synthesis and Effects

Sections: 1, 2, 3, 4, 5

Teams: 2 members per group
Software: Pure Data (pd), v. 0.39 or later
Timetable: Two weeks allocated.
If you have not reached the end of Section 3 by the end of the first session, continue from Section 4 in the second session.

Aims of the Experiment

  1. To understand and implement several principles of digital audio synthesis and effects: sinusoidal additive synthesis, vibrato, FM synthesis, ADSR/ADSHR envelopes, noise shaping, stereo-delay effects, and single-tap and multiple-tap delay lines.
  2. To construct the DSP component of a note synthesizer based mainly upon FM synthesis, with an attached delay-effects unit.
  3. To provide exposure to patcher languages such as Max/MSP, jMax, and pd, which are used for audio/video processing, and specifically to familiarise oneself with the graphical programming environment Pure Data (pd).

Required reading

1. Overview

This laboratory aims to implement some well-known digital audio synthesis methods and effects algorithms, including additive and FM synthesis, stereo widening using delays, and single-tap and multiple-tap delay lines. Patches (programs) will be built for each of these synthesis/effects units in the graphical programming environment Pure Data (pd), using simple objects such as oscillators, delay lines and DACs. Pd allows real-time control of processing parameters, so the experiment should also provide an opportunity to explore creative uses of digital synthesis/effects. NB: a large number of related audio example patches exist in the pd Browser; these can be used to gain familiarity with the pd environment. Help for any pd object can be obtained by cmd-clicking the object and selecting Help. A useful glossary and summary of objects can be found online.

2. Preparation

Fig 1: Square wave
The preparation work covers the material for both weeks of the lab and will be assessed at the beginning of the first session.
  1. Visit the pd community site (http://puredata.info) and see its guide for an overview of Pure Data; further tutorial overviews are also available online.
  2. Derive an analytical expression for the Fourier series (i.e. frequencies and amplitudes of harmonics) of the square wave in fig. 1 with period T (start from the general expression for a Fourier Series, and find the values of the coefficients).
  3. Sketch the power spectrum for a sinusoid oscillating at f Hz, where f < f_s/2, labelling f and f_s. For an N-point FFT, write down an expression for the width of each FFT bin, and comment on the implications of the choice of N for the resolution/performance of the transform.
  4. By expanding the expression for the amplitude modulated (AM) signal below, simplify it into a sum of pure sinusoidal components (single sine or cosine terms only). What are their frequencies and amplitudes (for a_AM non-zero)? You will need to use the trigonometric identity for cos A cos B.
    y[t] = A (1+m(t)) cos(2π f_c t) = A (1+a_AM cos (2π f_AM t)) cos(2π f_c t)
  5. In frequency modulation (FM) synthesis:
    y[t] = a_c cos( 2π ∫ [ f_c + a_m cos(2π f_m t) ] dt )
    sinusoidal components are generated at frequencies |f_c + n*f_m|, where n is any integer, f_c is the carrier frequency, and f_m is the modulator frequency. A harmonic spectrum (i.e. one where all components are integer multiples of a fundamental frequency component) clearly results when the ratio f_c:f_m = 1:1. At what other ratios of f_c:f_m will a harmonic spectrum result?
  6. Sketch an ADSHR envelope, labelling each section and describing its influence on a note.

3. Experiments

3.1 Sinusoidal additive synthesis

3.2 Amplitude/frequency modulation and vibrato

Fig 2: Frequency vibrato
Vibrato is a slight undulation in pitch and/or variation in intensity of a sound, at a rate of around 4 to 10 Hz for most acoustic instruments. It is brought about by rapid movements affecting the excitation mechanism (e.g. rocking the position of a finger on a fretboard or, for some wind instruments, varying the air pressure from the lungs using the diaphragm), and is generally a combination of amplitude and frequency modulation. In frequency vibrato (see fig. 2), the frequency of the tone can be heard to sweep up and down repetitively. Vibrato can also be induced by amplitude modulation; this is often heard in wind instruments (e.g. the flute) and the voice (although it is not always a sign of good playing technique!).
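For illustration (outside pd), the sketch below generates a frequency-vibrato tone in Python/numpy. The carrier frequency, vibrato rate and depth are arbitrary example values, not ones prescribed by the lab; in pd the same structure would be built from two osc~ objects.

    import numpy as np

    fs = 44100                        # sample rate (Hz)
    t = np.arange(2 * fs) / fs        # 2 seconds of time samples

    f0 = 440.0                        # nominal pitch (Hz)
    f_vib = 6.0                       # vibrato rate (Hz), typically 4-10 Hz
    depth = 10.0                      # peak frequency deviation (Hz)

    # The instantaneous frequency sweeps up and down around f0;
    # integrating it (cumulative sum) gives the oscillator phase.
    f_inst = f0 + depth * np.sin(2 * np.pi * f_vib * t)
    phase = 2 * np.pi * np.cumsum(f_inst) / fs
    y = 0.5 * np.sin(phase)           # frequency-vibrato tone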

3.3 FM synthesis

The basic concept of frequency modulation (FM) synthesis is much the same as frequency vibrato. The centre frequency of the carrier, f_c, is modulated at a frequency f_m and by an amount a_m, to yield the FM signal:
y[t] = a_c cos( 2π ∫ [ f_c + a_m cos(2π f_m t) ] dt )
However, whereas vibrato is characterised by a modulator frequency of around 10 Hz, at much higher rates the modulation becomes too rapid to discern as a pitch variation, and the effect instead becomes timbral. This was discovered by John Chowning at Stanford University around 1967; the technique was later patented and became the trademark sound of the hugely successful Yamaha DX7 synthesizer in the 1980s. In FM synthesis, the carrier and modulator frequencies are usually set to be equal, or at least of a similar order.
We define the modulation index as:
B = a_m / f_m
Notice that when f_c = f_m and the modulation index is non-zero, a harmonic spectrum is produced with fundamental frequency f_c. In general, FM introduces components at the frequencies |f_c + n*f_m|, where n is any integer (components at negative values of f_c + n*f_m fold back as inverted positive-frequency components). Hence, when f_c = f_m, harmonics exist at f_c, 2 f_c, 3 f_c, ... The amplitudes of the harmonics depend on a_m, and can be derived in a non-trivial manner, involving Bessel functions, by expanding the Fourier series of the FM signal. As a general rule though, keeping f_m constant whilst increasing the modulation index increases the amplitudes of the sidebands but does not change their frequencies. As a rule of thumb, Carson's rule states that nearly all (~98%) of the power of a frequency-modulated signal lies within a bandwidth of 2(f_Δ + f_m), where f_Δ is the peak deviation of the instantaneous frequency f(t) from the carrier centre frequency f_c, and f_m is the highest modulating frequency.
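As a minimal sketch of the above (in Python/numpy rather than pd, with illustrative parameter values), the integral in the FM equation can be evaluated analytically, giving a phase term B sin(2π f_m t):

    import numpy as np

    fs = 44100
    t = np.arange(fs) / fs            # 1 second of time samples

    f_c = 440.0                       # carrier frequency (Hz)
    f_m = 440.0                       # modulator frequency (f_c:f_m = 1:1)
    a_m = 880.0                       # peak frequency deviation (Hz)
    B = a_m / f_m                     # modulation index

    # y[t] = a_c cos(2*pi * integral of (f_c + a_m cos(2*pi*f_m*t)) dt)
    # evaluates to cos(2*pi*f_c*t + B*sin(2*pi*f_m*t)).
    y = np.cos(2 * np.pi * f_c * t + B * np.sin(2 * np.pi * f_m * t))

    # Inspect the sidebands at |f_c + n*f_m| in the magnitude spectrum.
    spectrum = np.abs(np.fft.rfft(y * np.hanning(len(y))))

Increasing a_m (and hence B) while holding f_m fixed should reproduce the behaviour described above: stronger sidebands at unchanged frequencies.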

4. Effects

4.1 Single-tap delays

Fig 3: Single-tap feedforward delay line
A wide range of effects based upon delays can be implemented in pd. A delay line is basically a circular buffer in memory, which is sequentially filled at regular sampling intervals with sample values. A write pointer points to the location in memory where the current sample is to be stored. When the end of the buffer is reached, the write pointer returns to the start of the buffer and begins to overwrite old samples. Samples can be read from the buffer using one or more taps or read pointers at specified lags behind the write pointer (i.e. a tap will read samples that are n samples old if it points to a location n memory locations behind the write pointer). In pd, delwrite~ can be used to allocate memory for a delay line, and delread~ creates a tap from the delay line (more than one tap can be created using multiple instances of delread~).
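The circular-buffer mechanism behind delwrite~ and delread~ can be modelled as follows (a simplified Python sketch, not pd's actual implementation):

    import numpy as np

    class DelayLine:
        """Circular buffer with one write pointer and arbitrary read taps."""
        def __init__(self, max_delay):
            self.buf = np.zeros(max_delay)
            self.w = 0                              # write pointer

        def write(self, x):                         # role of delwrite~
            self.buf[self.w] = x                    # store current sample
            self.w = (self.w + 1) % len(self.buf)   # wrap at end of buffer

        def tap(self, n):                           # role of one delread~
            """Read the sample written n samples ago (0 <= n < max_delay)."""
            return self.buf[(self.w - 1 - n) % len(self.buf)]

Here write() plays the role of delwrite~, and each tap() call behaves like a delread~ at a lag of n samples.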
Fig 4: Amplitude response of a comb filter

We are going to create some simple digital audio effects using single-tap finite impulse response (FIR) filters (see fig. 3). The difference equation and transfer function for a single-tap FIR comb filter are:

y[n] = x[n] + a x[n-M]
H(z) = 1 + a z^{-M}

where M is the delay length in samples, and a is the amplitude of the delayed signal relative to x[n]. For a particular value of M, adding the signal to a delayed version of itself causes some frequency components to interfere constructively and others to interfere destructively. The result is a comb-shaped magnitude response like that shown in fig. 4; for a = 1 the peaks lie at integer multiples of f_s/M, while for a = -1 the peaks and notches swap positions.
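The difference equation can be applied directly in Python/numpy as a quick check of the comb behaviour (the values of M and a suggested in the comment are illustrative):

    import numpy as np

    def fir_comb(x, M, a):
        """Single-tap feedforward comb: y[n] = x[n] + a*x[n-M]."""
        y = x.astype(float)            # copy of the input
        y[M:] += a * x[:-M]            # add the M-sample-delayed signal
        return y

    # The peaks/notches are spaced fs/M Hz apart: e.g. M = 100 at
    # fs = 44100 Hz gives peaks every 441 Hz when a = 1.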

Build the following effects:
Fig 5: Single-tap feedback (IIR) delay line
A simple delay effect can be used to enhance stereo placement of instruments, and to make the mix sound wider.
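Two hedged Python sketches of these ideas follow: the feedback comb of fig. 5 (stable for |a| < 1), and a basic stereo widener that pans the dry signal left and a delayed copy right. The 15 ms delay is a typical illustrative value, not a prescribed one.

    import numpy as np

    def iir_comb(x, M, a):
        """Single-tap feedback comb (fig. 5): y[n] = x[n] + a*y[n-M]."""
        y = np.zeros(len(x))
        for n in range(len(x)):
            fb = a * y[n - M] if n >= M else 0.0   # feedback from the tap
            y[n] = x[n] + fb
        return y

    def widen(mono, fs, delay_ms=15.0):
        """Dry signal on the left channel, delayed copy on the right."""
        M = int(fs * delay_ms / 1000)
        right = np.concatenate([np.zeros(M), mono[:-M]])
        return np.stack([mono, right], axis=1)      # (N, 2) stereo array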

4.2 Multiple-tap delays

Fig 6: Multi-tap feedforward delay line
Multiple-tap delays (fig. 6) allow more flexibility in the design of the effects unit, and can be used to add a rhythmic quality to the instrument. The principle is essentially the same as in single-tap delays.
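A multi-tap version of the feedforward sketch from section 4.1 follows; the tap delays and gains in the comment are arbitrary examples, chosen to suggest a rhythmic echo pattern.

    import numpy as np

    def multitap(x, taps):
        """Multi-tap feedforward delay: y[n] = x[n] + sum_k a_k * x[n - M_k].
        `taps` is a list of (delay_in_samples, gain) pairs."""
        y = x.astype(float)
        for M, a in taps:
            y[M:] += a * x[:-M]
        return y

    # e.g. echoes on quaver and crotchet boundaries at 120 bpm, fs = 44100:
    # y = multitap(x, [(11025, 0.6), (22050, 0.4), (44100, 0.25)])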

5. Note Synthesiser

5.1 ADSR / ADSHR Envelopes

Many temporal characteristics of synthesised instrument notes can be summarised using ADSR (attack-decay-sustain-release) or ADSHR (attack-decay-sustain-hold-release) envelopes. These include changes in the volume of a note over its duration (e.g. piano notes have very sharp attacks followed by long release times, whereas in the trumpet, for example, the release is more rapid), but also other parameters such as fundamental frequency, filter frequencies, etc. ADSHR is an acronym for:

  Attack: the time taken to rise from zero to the peak level at the note onset.
  Decay: the time taken to fall from the peak to the sustain level.
  Sustain: the level held for the main body of the note.
  Hold: the time for which the sustain level is held.
  Release: the time taken to fall from the sustain level back to zero.

In Section 3.3, you will have noticed that 'brighter' spectra (i.e. those with a larger bandwidth or spectral centroid) result at larger values of the modulation index. In many instruments, brighter spectra also tend to occur at the onsets of notes, accompanied by a sharp increase in dynamics.
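A piecewise-linear ADSHR envelope can be sketched as below (all segment durations and the sustain level are illustrative; in pd such ramps would typically be generated with line~). The same envelope, suitably scaled, can drive both the note amplitude and the modulation index, so that notes are brighter at their onsets.

    import numpy as np

    def adshr(fs, a=0.01, d=0.1, s=0.6, h=0.5, r=0.3):
        """Piecewise-linear ADSHR envelope.
        a, d, h, r are durations in seconds; s is the sustain level."""
        ramp = lambda v0, v1, dur: np.linspace(v0, v1, int(fs * dur),
                                               endpoint=False)
        return np.concatenate([
            ramp(0.0, 1.0, a),          # attack: rise to peak
            ramp(1.0, s, d),            # decay: fall to sustain level
            np.full(int(fs * h), s),    # sustain held for the hold time
            ramp(s, 0.0, r),            # release: fall back to zero
        ])

    # env = adshr(44100)
    # note = env * np.sin(2 * np.pi * 440 * np.arange(len(env)) / 44100)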

Design a note synthesizer that permits the following functionality, making notes about the effect of varying each parameter:

If time permits: To account for some of the impulsive or transient characteristics of the note attack, a white-noise component can also be incorporated.
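One way to sketch this in Python/numpy (the mix levels and 20 ms decay constant below are illustrative assumptions): add a short, exponentially decaying burst of white noise, so that the noise colours only the onset of the note.

    import numpy as np

    fs = 44100
    n = fs // 2                                     # half-second note
    tone = np.sin(2 * np.pi * 440 * np.arange(n) / fs)

    # Noise burst with a fast exponential decay, audible only at the attack.
    noise = np.random.randn(n) * np.exp(-np.arange(n) / (0.02 * fs))
    y = 0.8 * tone + 0.2 * noise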




© 2004-13, written by Mark Every, maintained by Philip Jackson, last updated by Phil Coleman on 12 Nov 2013.

