Nonlinear Distortion and Perception at Low Frequencies
As was explained on the previous page, nonlinear distortion depends on the level and frequency content of the input signal. To put it another way, some aspect of the signal has to ‘activate’ a mechanism of distortion. A blunt example is a hard limiter in a signal processing chain: the signal may be reproduced accurately until a passage with sufficiently large amplitude triggers the limiter, which then clips the output (thereby distorting it) until the input level drops back below the limiter’s threshold. This is much like what happens when a speaker driver bottoms out. When the moving assembly of the driver hits the limit of its excursion, whether from the voice coil striking the backplate or the suspension being stretched to its limit, the driver can no longer reproduce amplitudes above a certain point, though below that point it may play back the signal with reasonable fidelity. The driver’s maximum mechanical excursion is the limiter, and it is not activated until the input signal surpasses a specific amplitude at a given frequency: the distortion depends on an aspect of the input signal and does not occur until then. These are rather blatant examples of nonlinear distortion; in this article we will focus on the types that occur under more typical conditions, namely harmonic distortion and intermodulation distortion. Let’s begin with harmonic distortion, the simpler of the two.
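The hard-limiter analogy can be sketched in a few lines of code. This is a toy illustration, not a model of any real driver; the sample rate, tone frequency, and threshold are all arbitrary choices for the example.

```python
import numpy as np

# Toy hard limiter: the signal passes through unchanged until it exceeds
# a threshold, then gets clipped -- analogous to a driver reaching the
# limit of its mechanical excursion.
fs = 48000                                   # sample rate in Hz (assumed)
t = np.arange(fs) / fs                       # one second of samples
quiet = 0.5 * np.sin(2 * np.pi * 50 * t)     # passage below the threshold
loud = 1.5 * np.sin(2 * np.pi * 50 * t)      # passage above the threshold

threshold = 1.0
limited_quiet = np.clip(quiet, -threshold, threshold)
limited_loud = np.clip(loud, -threshold, threshold)

# The quiet passage is reproduced exactly; the loud one is distorted.
print(np.allclose(limited_quiet, quiet))     # True
print(np.allclose(limited_loud, loud))       # False
```

Until the input crosses the threshold, the limiter is inert and the output is a faithful copy; once the threshold is crossed, the waveform is flattened at the peaks and distortion appears.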
Every sound has a fundamental frequency (usually just called the fundamental), which is the lowest frequency component of that sound. A harmonic is a frequency component that is an integer multiple of the fundamental. So, if the fundamental is 100 Hz, the harmonics are whole-number multiples of it: 200 Hz, 300 Hz, 400 Hz, and so on. Harmonics are numbered by their multiple of the fundamental: for a 100 Hz fundamental, 200 Hz is the second harmonic, 300 Hz the third harmonic, 400 Hz the fourth harmonic, and so on. Most sounds we hear have harmonic components, and the harmonic series is an immensely important principle in sound reproduction, not to mention music, physics, and engineering.
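The numbering scheme above can be stated as a one-liner; the 100 Hz fundamental is just the example used in the text.

```python
# Harmonic series of a 100 Hz fundamental: the nth harmonic is n times
# the fundamental frequency.
fundamental = 100  # Hz
harmonics = {n: n * fundamental for n in range(2, 6)}
print(harmonics)  # {2: 200, 3: 300, 4: 400, 5: 500}
```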
So how does the principle of harmonics tie into distortion? Well, what if your speaker produced harmonic components that were not in the original signal? This addition of extra harmonics is called harmonic distortion, and every speaker does it to some degree. Our concern in this article is the point at which harmonic distortion becomes audible at bass frequencies. This is not a simple matter, because harmonic distortion can be heard in a number of different ways. That is because it adds entirely new frequency content to the output, unlike the distortions discussed up to this point, which only changed the loudness of existing frequencies. Energy must be conserved, however: the amplitude of the fundamental is reduced, with the missing energy transformed into harmonics.
Let’s take a look at a simple example of harmonic distortion in action. Figure 5 is a graph of the input signal, which is a very simple sound: a 50 Hz tone. Keep in mind that we are not measuring the nonlinearity itself; we are only measuring how it responds to a single frequency.
Fig. 5. Frequency graph of the input signal.
Now let’s run our input through a typical speaker, and see what happens in Figure 6:
Fig. 6. Frequency graph of output demonstrating harmonic distortion.
A whole new set of sounds has been added to the signal: harmonics. They can be generated in many different ways and are generally a byproduct of nonlinear cone travel. Recall our earlier discussion of the mechanisms of distortion: the variation in magnetic field due to excursion, the change in the compliance of the suspension due to excursion, and the change of magnetic field due to inductance. These are typically the major sources of harmonic distortion in bass output, but any nonlinearity will create harmonics.
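The way a nonlinearity spawns harmonics can be demonstrated numerically. The quadratic and cubic terms below are an arbitrary toy transfer function, not a model of a real driver; the point is that any curvature in the input-output relationship puts energy at integer multiples of the input frequency.

```python
import numpy as np

# Pass a pure 50 Hz tone through a mild polynomial nonlinearity and
# inspect the output spectrum for newly created harmonics.
fs = 8000
t = np.arange(fs) / fs                 # 1 second -> 1 Hz FFT bin spacing
x = np.sin(2 * np.pi * 50 * t)         # clean input: a single 50 Hz tone
y = x + 0.1 * x**2 + 0.05 * x**3       # toy nonlinear "driver"

# Magnitude spectrum; with a 1 s window, bin f corresponds to f Hz.
spectrum = np.abs(np.fft.rfft(y)) / len(y)
peaks = [f for f in range(1, 400) if spectrum[f] > 1e-3]
print(peaks)  # [50, 100, 150]
```

The input contained only 50 Hz, yet the output spectrum shows new components at 100 Hz (from the even-order term) and 150 Hz (from the odd-order term): second and third harmonic distortion, created entirely by the nonlinearity.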
Before we specify the levels at which harmonic distortion becomes audible in bass, we must first go over two important concepts for this subject, which are the threshold of audibility of low frequencies and auditory masking.
Audibility of Low Frequencies
Refer back to Figure 3, the equal loudness contour chart, and note how rapidly low frequencies become inaudible to human hearing as frequency decreases. For example, a 20 Hz tone only becomes audible at just under 75 dB, while a 60 Hz tone at that same 75 dB already has a perceived loudness of approximately 40 phons. Even though the two tones share the same absolute sound pressure level of 75 dB, there is roughly a 40 dB difference in perceived loudness between them (a 40 dB increase corresponds to 100 times the sound pressure). In other words, the equal loudness contours are telling us that, at the same sound pressure level, bass frequencies below roughly 150 Hz sound much louder to human hearing for even small increases in frequency. It should be noted that the equal loudness contours were created using pure tones in an anechoic chamber, with 1 kHz as the reference frequency; they apply to broadband sounds only in a very approximate manner.
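The parenthetical arithmetic above is worth spelling out. Decibels relate to pressure by a factor of 20 per decade and to intensity by a factor of 10 per decade, so the same 40 dB step means very different ratios depending on which quantity you track:

```python
# A level difference in decibels maps to a pressure ratio of 10^(dB/20)
# and an intensity (power) ratio of 10^(dB/10).
delta_db = 40
pressure_ratio = 10 ** (delta_db / 20)    # 100x the sound pressure
intensity_ratio = 10 ** (delta_db / 10)   # 10,000x the sound intensity
print(pressure_ratio, intensity_ratio)    # 100.0 10000.0
```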
Now consider this heightened sensitivity to low-frequency level changes when viewing Figure 6, the graph of harmonic distortion products. It suggests that the harmonic products would be perceived as much louder than their measured levels relative to the fundamental, since the fundamental sits at a low frequency, and therefore at a lower perceived loudness than the harmonics once the equal loudness curves are taken into account. This quirk of human hearing would render us unusually sensitive to harmonic distortion, were it not for another trait of human hearing, ‘auditory masking’, which acts almost as a counterweight.
Anyone with normal hearing will be familiar with the idea of masking, whereby a loud sound renders a softer sound inaudible. Try holding a conversation in a bar on karaoke night for an experience of the masking effects of loud noise. However, most people don’t realize just how extensive auditory masking is. Masking happens at every loudness level, although at louder levels the bandwidth of the masker widens and a great deal more is concealed. In auditory masking, the ‘masker’ is the loudest sound within the band of frequencies it conceals, and the ‘maskee’ is the sound that cannot be heard beneath it. The masker creates a frequency band around itself within which softer sounds are hidden, but its influence diminishes as frequencies move further away from it. This means that a loud sound at one frequency disguises sounds close to it in frequency far better than sounds much further away. Another important aspect of masking is that at louder levels, and also at lower frequencies, the masking band spreads upward over higher frequencies much more than downward, so higher frequencies are masked far more heavily, although frequencies below the masker can be masked as well. Let’s take a look at what that means for harmonic distortion in Figure 7:
Fig. 7. Effects of masking on harmonics
As we can see from our crude illustration, most of the harmonic distortion has been masked; however, a couple of the higher-order harmonics were far enough away in frequency, and loud enough, to be heard. So in order to determine the audibility of harmonic distortion, we have to know how much masking is done by different tones at different loudness levels. Many researchers have investigated masking, but the most extensive study with respect to low frequencies was conducted by Louis Fielder and Eric Benjamin. They determined masking thresholds for a number of frequencies and loudness levels; Figure 8, from their research, shows the thresholds for a 50 Hz tone played at 80 dB, 100 dB, and 110 dB. A 50 Hz tone at the sound pressure level indicated by a dashed line will mask any sound whose loudness level and frequency fall below that line.
Fig. 8. Masking thresholds of a 50 Hz tone. 0 dB = 20 μPa. Reprinted by permission of AES.
It’s worth mentioning at this point that auditory masking happens not only during the masking sound itself, but also before and after it, in phenomena known as backward masking and forward masking. Backward masking can hide sounds that occur as much as 25 ms before the masking sound itself; forward masking conceals sounds after the masker has ended. Forward masking covers up other sounds for a longer duration than backward masking, and low frequencies forward mask more than high frequencies.