Teams: 2-3 members per group
Timetable: one week allocated.
Audio distortion refers to any deformation of an input audio waveform, such as hard or soft clipping, compression, non-linear behaviour of electronic components, modulation, aliasing, and so on. The aim of this laboratory experiment is to characterise an unknown audio signal processing algorithm, black_box, using several measures of audio distortion. When publicly releasing a signal processing algorithm or piece of hardware, the standardised conditions and input signals under which the distortion measurements were made should be recorded, to allow objective comparisons with other products.
Matlab will be the preferred software environment for performing numerical and frequency analyses of the input/output signals. To get help on any Matlab function, use help nameoffunction or search the Help files.
What is tested? Total harmonic distortion (THD) is a form of nonlinearity that adds unwanted signals to the input signal that are harmonically related to it. The output spectrum shows added components at integer multiples of the input frequency (2x, 3x, 4x, 5x, and so on), but no components at fractional multiples of the input frequency.
How is it measured? To measure THD, we pass a sinusoid through the DSP algorithm and examine the output for evidence of any frequencies other than the one applied. Performing a spectral analysis on this signal using a discrete Fourier transform (DFT) shows that, in addition to the original input sinusoid, there are components at integer multiples of the input frequency. The amplitude spectrum of a signal x can be obtained using:
Fx = abs(fft(x)); plot(Fx);
To plot this on a decibel scale use:
plot(20*log10(Fx));
THD is then defined as the ratio of the RMS amplitude of the added harmonics to that of the fundamental component, and is expressed in percent. Measuring individual harmonics with precision is difficult, tedious, and not commonly done; consequently, THD+N (see below) is the more common test. Caveat Emptor: THD+N is always going to be a larger number than just plain THD. For this reason, unscrupulous (or clever, depending on your viewpoint) manufacturers choose to spec just THD, instead of the more meaningful and easily compared THD+N.
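Although the lab measurements themselves are done in Matlab, the THD computation can be sketched equivalently in Python/NumPy. Everything below is an illustrative assumption, not the lab's black_box: the tanh() soft clipper stands in for a distorting device, and the tone frequency and sample rate are chosen so that each harmonic lands exactly on a single DFT bin.

```python
# Illustrative sketch: estimate THD of a soft-clipped sinusoid from its DFT.
# The tanh() nonlinearity, tone frequency and sample rate are arbitrary
# choices for demonstration -- they are not the lab's black_box.
import numpy as np

fs = 48000          # sample rate (Hz)
f0 = 1000           # test tone (Hz)
n = fs              # one second of signal -> 1 Hz bin spacing
t = np.arange(n) / fs
x = np.tanh(2.0 * np.sin(2 * np.pi * f0 * t))   # soft clipping adds odd harmonics

spectrum = np.abs(np.fft.rfft(x)) / n           # one-sided amplitude spectrum
fund = spectrum[f0]                              # bin index equals frequency (1 Hz bins)
harmonics = [spectrum[k * f0] for k in range(2, 6)]  # 2nd to 5th harmonics

# THD: RMS sum of the harmonics relative to the fundamental, in percent
thd = 100 * np.sqrt(sum(h**2 for h in harmonics)) / fund
print(f"THD (5th-order): {thd:.2f}%")
```

Because tanh() is an odd function, the even harmonics are essentially absent and the result is dominated by the 3rd and 5th harmonics.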
Required Conditions. Since individual harmonic amplitudes are measured, the manufacturer must state the test signal frequency, its level, and the gain conditions set on the tested unit, as well as the number of harmonics measured. Hopefully, it's obvious to the reader that the THD of a 10 kHz signal at a +20 dBu level using maximum gain, is apt to differ from the THD of a 1 kHz signal at a -10 dBV level and unity gain. And more different yet, if one manufacturer measures two harmonics while another measures five.
Full disclosure specs will test harmonic distortion over the entire 20 Hz to 20 kHz audio range (this is done easily by sweeping and plotting the results), at the pro audio level of +4 dBu. For all signal processing equipment, except mic preamps, the preferred gain setting is unity. For mic preamps, the standard practice is to use maximum gain. Too often THD is spec'd only at 1 kHz, or worse, with no mention of frequency at all, and nothing about level or gain settings, let alone harmonic count.
Correct: THD (5th-order) less than 0.01%, +4 dBu, 20-20 kHz, unity gain.
Wrong: THD less than 0.01%
What is tested? Similar to the THD test above, except instead of measuring individual harmonics this test measures everything added to the input signal. This is a wonderful test since everything that comes out of the unit that isn't the pure test signal is measured and included -- harmonics, hum, noise, RFI, buzz ... everything.
How is it measured? THD+N is the RMS summation of all signal components (excluding the fundamental) over some prescribed bandwidth. Distortion analyzers make this measurement by removing the fundamental (using a deep and narrow notch filter) and measuring what's left using a bandwidth filter (typically 22 kHz, 30 kHz or 80 kHz). The remainder contains harmonics as well as random noise and other artifacts.
The narrow notch filter can be designed using Matlab's filter design and analysis tool, fdatool. Design the filter and export its coefficients; an arbitrary signal can then be processed with them using filter (for an FIR design, this is equivalent to convolving the signal with the filter's impulse response).
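For readers without Matlab, the same notch-and-measure procedure can be sketched in Python with SciPy. The test signal, notch Q and noise level below are illustrative assumptions; a real distortion analyzer uses a much deeper notch plus a defined measurement-bandwidth filter.

```python
# Illustrative sketch: THD+N by notching out the fundamental and measuring
# the RMS of everything that remains. The tanh() distortion and added noise
# are arbitrary stand-ins for a device under test.
import numpy as np
from scipy.signal import iirnotch, filtfilt

fs = 48000
f0 = 1000
t = np.arange(fs) / fs
rng = np.random.default_rng(0)
clean = np.sin(2 * np.pi * f0 * t)
y = np.tanh(1.5 * clean) + 1e-4 * rng.standard_normal(fs)  # distortion + noise

b, a = iirnotch(f0, Q=30, fs=fs)       # narrow notch at the fundamental
residual = filtfilt(b, a, y)           # remove the fundamental

rms = lambda s: np.sqrt(np.mean(s**2))
thd_n = 100 * rms(residual) / rms(y)   # residual relative to total output level
print(f"THD+N: {thd_n:.3f}%")
```

The residual here contains the harmonics and the added noise together, which is exactly why THD+N reads higher than plain THD.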
Weighting filters are rarely used. When they are used, too often it is to hide pronounced AC mains hum artifacts. An exception is the strong argument to use the ITU-R (CCIR) 468 curve because of its proven correlation to what is heard. However, since it adds 12 dB of gain in the critical midband (the whole point) it makes THD+N measurements bigger, so marketeers prevent its widespread use.
[Historical Note: Many old distortion analyzers labeled "THD" actually measured THD+N.]
Required Conditions. Same as THD (frequency, level & gain settings), except instead of stating the number of harmonics measured, the residual noise bandwidth is spec'd, along with whatever weighting filter was used. The preferred value is a 20 kHz (or 22 kHz) measurement bandwidth, and "flat," i.e., no weighting filter.
Conflicting views exist regarding THD+N bandwidth measurements. One argument goes: it makes no sense to measure THD at 20 kHz if your measurement bandwidth doesn't include the harmonics. Valid point, and one supported by the IEC, which says that THD should not be tested any higher than 6 kHz, if measuring five harmonics using a 30 kHz bandwidth, or 10 kHz, if only measuring the first three harmonics. Another argument states that since most people can't even hear the fundamental at 20 kHz, let alone the second harmonic, there is no need to measure anything beyond 20 kHz. Fair enough. However, the case is made that using an 80 kHz bandwidth is crucial, not because of 20 kHz harmonics, but because it reveals other artifacts that can indicate high frequency problems. All true points, but competition being what it is, standardizing on publishing THD+N figures measured flat over 22 kHz seems justified, while still using an 80 kHz bandwidth during the design, development and manufacturing stages.
Correct: THD+N less than 0.01%, +4 dBu, 20-20 kHz, unity gain, 20 kHz BW
Wrong: THD less than 0.01%
What is tested? A more meaningful test than THD, intermodulation distortion gives a measure of distortion products not harmonically related to the pure input signal. This is important since these artifacts make music sound harsh and unpleasant.
Intermodulation distortion testing was first adopted in the U.S. as a practical procedure in the motion picture industry in 1939 by the Society of Motion Picture Engineers (SMPE -- no "T" [television] yet) and made into a standard in 1941.
How is it measured? The test signal is a low frequency (60 Hz) and a non-harmonically related high frequency (7 kHz) tone, summed together in a 4:1 amplitude ratio. (Other frequencies and amplitude ratios are used; for example, DIN favors 250 Hz & 8 kHz.) This signal is applied to the unit, and the output signal is examined for modulation of the upper frequency by the low frequency tone. As with harmonic distortion measurement, this is done with a spectrum analyzer or a dedicated intermodulation distortion analyzer. The modulation components of the upper signal appear as sidebands spaced at multiples of the lower frequency tone. The amplitudes of the sidebands are RMS summed and expressed as a percentage of the upper frequency level.
[Noise has little effect on SMPTE measurements because the test uses a low pass filter that sets the measurement bandwidth, thus restricting noise components; therefore there is no need for an "IM+N" test.]
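As a hedged illustration of the SMPTE procedure in Python/NumPy: the polynomial nonlinearity below is an arbitrary stand-in for a device under test, while the tone frequencies and 4:1 amplitude ratio follow the SMPTE choice. The one-second signal gives 1 Hz DFT bins, so each sideband lands exactly on one bin.

```python
# Illustrative sketch of the SMPTE two-tone IMD measurement: 60 Hz and
# 7 kHz mixed 4:1, passed through an assumed polynomial nonlinearity,
# then the sidebands around 7 kHz are read off the DFT.
import numpy as np

fs = 48000
t = np.arange(fs) / fs                      # 1 second -> 1 Hz bin spacing
x = 0.8 * np.sin(2 * np.pi * 60 * t) + 0.2 * np.sin(2 * np.pi * 7000 * t)
y = x + 0.1 * x**2 + 0.1 * x**3             # nonlinearity creates intermodulation

spectrum = np.abs(np.fft.rfft(y)) / fs
upper = spectrum[7000]                      # 7 kHz carrier level
sidebands = [spectrum[7000 + k * 60] for k in (-2, -1, 1, 2)]  # 7 kHz +/- 60, 120 Hz

# RMS sum of the sidebands as a percentage of the upper frequency level
imd = 100 * np.sqrt(sum(s**2 for s in sidebands)) / upper
print(f"IMD (SMPTE): {imd:.2f}%")
```

The even-order (x squared) term produces the sidebands at 7 kHz +/- 60 Hz, while the odd-order (x cubed) term produces those at +/- 120 Hz.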
Required Conditions. SMPTE specifies this test use 60 Hz and 7 kHz combined in a 12 dB ratio (4:1) and that the peak value of the signal be stated along with the results. Strictly speaking, all that needs stating is "SMPTE IM" and the peak value used. However, measuring the peak value is difficult. Alternatively, a common method is to set the low frequency tone (60 Hz) to +4 dBu and then mix in the 7 kHz tone at -8 dBu (12 dB less).
Correct: IMD (SMPTE) less than 0.01%, 60Hz/7kHz, 4:1, +4 dBu
Wrong: IMD less than 0.01%
What is tested? The unit's bandwidth, or the range of frequencies it passes. All frequencies above and below this range are attenuated -- sometimes severely.
How is it measured? A 1 kHz tone of high purity and precise amplitude is applied to the unit and the output measured using a dB-calibrated rms voltmeter. This value is set as the 0 dB reference point. Next, the generator is swept upward in frequency (from the 1 kHz reference point) keeping the source amplitude precisely constant, until it is reduced in level by the amount specified. This point becomes the upper frequency limit. The test generator is then swept down in frequency from 1 kHz until the lower frequency limit is found by the same means.
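The stepped-sine procedure above can be sketched in Python with SciPy. The Butterworth bandpass filter below is a hypothetical stand-in for the unit under test, and the frequency grid is far coarser than a real swept measurement.

```python
# Illustrative sketch of a stepped-sine frequency response measurement:
# measure the output RMS level at each test frequency, referenced to the
# level at 1 kHz. The bandpass filter is an assumed stand-in for the unit.
import numpy as np
from scipy.signal import butter, sosfilt

fs = 96000
sos = butter(2, [30, 18000], btype="bandpass", fs=fs, output="sos")

def level_db(f):
    """Output RMS level for a 1 s tone at frequency f, in dB."""
    t = np.arange(fs) / fs
    y = sosfilt(sos, np.sin(2 * np.pi * f * t))[fs // 2:]  # discard transient
    return 20 * np.log10(np.sqrt(np.mean(y**2)))

ref = level_db(1000)                            # 1 kHz sets the 0 dB reference
freqs = [20, 100, 1000, 10000, 20000]
response = {f: level_db(f) - ref for f in freqs}
for f, db in response.items():
    print(f"{f:>6} Hz: {db:+.2f} dB")
```

With this stand-in unit the response is flat through the midband and falls away at the extremes, which is the shape the sweep is meant to reveal.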
Required Conditions. The reduction in output level is relative to 1 kHz; therefore, the 1 kHz level establishes the 0 dB point. What you need to know is how far down is the response where the manufacturer measured it. Is it 0.5 dB, 3 dB, or (among loudspeaker manufacturers) 10 dB?
Note that there is no discussion of an increase, that is, no mention of the amplitude rising. If a unit's frequency response rises at any point, especially the endpoints, it indicates a fundamental instability problem and you should run from the store. Properly designed solid-state audio equipment does not ever gain in amplitude when set for flat response (tubes or valve designs using output transformers are a different story and are not dealt with here). If you have ever wondered why manufacturers state a limit of "+0 dB", that is why. The preferred condition here is at least 20 Hz to 20 kHz measured +0/-0.5 dB.
Correct: Frequency Response = 20-20 kHz, +0/-0.5 dB
Wrong: Frequency Response = 20-20 kHz
What is tested? The input stage is measured to establish the maximum signal level in dBu that causes clipping or a specified level of distortion.
How is it measured? During final product testing, the design engineer uses an adjustable 1 kHz input signal, an oscilloscope and a distortion analyzer. In the field, apply a 1 kHz source and, while viewing the output, increase the input signal until visible clipping is observed. It is essential that all downstream gain and level controls be set low enough that you are assured the applied signal is clipping just the first stage. Check this by turning each level control and verifying that the clipped waveform simply gets bigger or smaller, and never reduces the clipping.
Required Conditions. Whether the applied signal is balanced or unbalanced and the amount of distortion or clipping used to establish the maximum must be stated. The preferred value is balanced and 1% distortion, but often manufacturers use "visible clipping," which is as much as 10% distortion, and creates a false impression that the input stage can handle signals a few dB hotter than it really can. No one would accept 10% distortion at the measurement point, so to hide it, it is not stated at all -- only the max value given without conditions. Buyer beware.
The results are assumed constant for all frequencies within the unit's bandwidth and for all levels of input, unless stated otherwise.
Correct: Maximum Input Level = +20 dBu, balanced, <1% THD
Wrong: Maximum Input Level = +20 dBu
Finally, based upon the measured audio specifications and the observed behaviour, predict what kind(s) of audio effects are applied within the "black box".
[Acknowledgements: Thanks to RaneNote for compiling this list of audio specifications.]
© 2004-7, written by Mark Every, maintained by Philip Jackson, last updated on 17 Jan 2007.