“Let our rigorous testing and reviews be your guidelines to A/V equipment – not marketing slogans”

Why Audio Amplifiers Can Sound Different

by Steve Feinstein, March 10, 2015
Emotiva XPR-1 Power Amplifier

This is a topic that is unlikely to be resolved to everyone’s satisfaction anytime soon. And to whatever extent there are real differences in the sound of different amplifiers, it’s very possible that those differences have nothing to do with the obvious visual cues that subliminally influence our opinions, like seeing big banks of output devices nestled snugly in a massive cast-aluminum heatsink farm. That’s just “got” to sound better than that pedestrian featherweight mass-market receiver, right?

Maybe yes and maybe no.

Do All Amplifiers Sound the Same?

I have had the pleasure and privilege, over the past several decades, of working alongside some of the most talented and creative people you can imagine in the audio business. People who routinely reject conventional wisdom as being, well, not-so-wise. People who look at things, whether a design consideration, an engineering approach, a sales/marketing strategy, or anything else, from a fresh perspective.

I’ve also had the distinct privilege and learning opportunity of working with people who have been singularly unimaginative, predictable, and clichéd in their approach to virtually every product-development and sales/marketing situation, which has resulted in some of the most spectacular failures imaginable, even when some of those products looked like sure-bet, “can’t-possibly-miss” affairs. They missed. They flopped. They set the company back a mile or, in some cases, absolutely ruined it.

The point is, you learn a ton from observing people at both ends of the spectrum.

Without question, one of the most talented, innovative, practical, intelligent, clever, sane, clear-thinking, analytical, and, most importantly, most uproariously funny people I’ve ever worked with is the Senior Electrical Engineer at Atlantic Technology, Paul Ceurvels. (By the way, although his primary role was that of EE, Paul happens to be an expert in mechanical engineering, acoustics, and materials. This unusually wide range of expertise gives Paul an authoritative engineering perspective to draw upon during the design and evaluation stage that is unequalled by anyone else I’ve encountered. Just don’t ask him about the history of the Phillips screw or he’ll keep you there for an hour.)

I worked with Paul for about a decade in the early 2000s. In addition to doing straight design work, Paul would also evaluate and tweak incoming samples from overseas vendors.

There are virtually no companies offering powered subwoofers that actually make their own subwoofer amplifiers. Those are provided by overseas (mostly Chinese) vendors. The U.S. brand goes to them and says, “We need a 300-watt plate amp with these features, these knobs and controls, and we don’t want to pay more than $X.”

300W BASH plate amplifier. Can we get these with a high pass filter, two channels of PEQ, and a notch filter at 50Hz for $25 apiece???

The vendor sends in a sample for the US brand to evaluate. This is where the U.S. company’s engineering skill really comes into play, because as sent by the vendor, most of these things are unusable. Unusable. They have bad turn-on/turn-off thumps (no muting circuitry). The distortion limiters, if there are any to begin with, don’t work worth a darn. The crossover control? The printed numbers have only a passing relation to the actual frequency. A silk-screened “80Hz” is just as likely to be an actual 40 or 120Hz. The RCA inputs are flimsy and break off the PC board if you look at them the wrong way. Heatsinking? Oh, you wanted heatsinking?

You had to virtually re-design the amplifier, both mechanically and electrically. With every change you made, the overseas vendor would say, “Sure, we can do that. It’ll cost you…” Pretty soon the amp you’d budgeted $X for in the preliminary cost estimate is now $X + $24. Figure the usual cost-to-retail multiplier of 4 or 5, and now your killer $499 sub is $599 or $649. Not so good after all. These are the real-world headaches that drive companies crazy.
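To make that arithmetic concrete, here is a tiny back-of-envelope sketch in Python. The $24 figure and the 4-5x multiplier come from the paragraph above; treating the entire bill-of-materials increase as flowing straight through the multiplier is a simplifying assumption, and the results then get rounded up to typical retail price points.

```python
# Back-of-envelope cost-to-retail math, assuming the BOM increase flows
# straight through the usual 4-5x multiplier (a simplification).
def new_retail(base_retail: float, added_bom_cost: float, multiplier: float) -> float:
    """Estimate the new retail price after a bill-of-materials increase."""
    return base_retail + added_bom_cost * multiplier

for m in (4, 5):
    print(f"{m}x multiplier: about ${new_retail(499, 24, m):.0f}")
# -> roughly $595-$619, which then gets rounded up to a $599 or $649 price point
```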

See: An Insider's View to Product Development Part I and Part II

Paul was the guy at AT who could re-work a shoddy vendor sample with cost-effective mods and fixes and do it in such a way that it was actually manufacturable by the vendor. He categorized his fixes into two groups: must-haves and luxury mods. Must-haves were just that: without these, the product wouldn’t function up to our brand standards. Luxury mods were in the really-nice-to-have-but-OK-we-can-live-without-it-if-we-must category.

No one in my long experience was better at this than Paul. No one.

Yet he is so down to earth and unpretentious. Paul would alternate during his lunch hour between downloading data sheets on the latest digital ICs “just for fun to see what those sons of b’s were up to” and watching old cartoons from the ‘50s and ‘60s. He rode his Honda Gold Wing to work virtually all the time except for the coldest New England winter days. On those days, he’d drive his ‘95 Saturn, which he kept running perfectly as it approached 300k miles through one only-Paul fix after another. “Why not?” he’d say. Why not indeed.

This is one of the great characters of our industry. Looks-wise, if you can imagine “Henry Kloss-on-a-bike,” you wouldn’t be too far off.

So with that as background, it’s not really any surprise that Paul has developed an intriguing, remarkably accurate, and reliable way to predict an amplifier’s sound character from a simple test of his own devising.

In Paul’s own words:

Thanks for the opportunity to write on a subject I find interesting. At the end, see if you can guess which model amps I’m talking about.

Actually, this is a longer story than you'd think, dating back to the early 1980s.  My employer had made a big splash with an inexpensive low-powered integrated amp from a small offbeat company, and they wanted to bring out a larger version.  After all, can’t leave well enough alone; gotta increase those sales, right? However, the golden-eared  honchos said prototypes of the new one sounded too forward, verging on harshness.

I had been hired just a short while before this, and my responsibilities included checking the performance of new prototypes developed at the main lab, which was located elsewhere.

Both of these amps had user-bypassable sub- and supersonic filters in their main amp sections, and the geekophiles’ ears blamed these filters, although the amps measured identically in the audible range (the same ±0.25 dB, 20 Hz–20 kHz) whether the filters were in or out.  This time, as it worked out, they were right (amazing!).

We had long been aware that tone control functionality could be quickly checked by observing square-wave behavior (leading edge spike means treble boost, "flat-top" bulged up means bass boost, opposites mean roll-off).  Part of my testing on the new amp included this 1kHz square-wave check.

I noticed a slight, rounded overshoot, maybe 4-5%, in the filtered output that was absent from the bypassed signal; this got me curious.  I asked the Head Engineer about this, but he saw no problem, as the amp measured fine for him.  He had no use for square-waves.
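If you want to see roughly what this check looks like, here is a minimal simulation sketch in Python/SciPy. It is not the original bench setup: a 1 kHz square wave is run through a hypothetical, slightly underdamped second-order low-pass standing in for a supersonic filter, and the leading-edge overshoot is read off as a percentage of the transition. The 40 kHz corner and the Q value are illustrative assumptions; raising the Q raises the overshoot.

```python
# Minimal sketch of a 1 kHz square-wave overshoot check (simulation only).
# The filter below is a hypothetical, slightly underdamped 40 kHz low-pass;
# its corner and Q are assumptions, not the values of any real amplifier.
import numpy as np
from scipy import signal

fs = 1_000_000                                   # 1 MHz sample rate, so edges are well resolved
t = np.arange(0, 0.01, 1 / fs)                   # 10 ms of signal
square = signal.square(2 * np.pi * 1_000 * t)    # 1 kHz square wave, swinging +/-1

# Hypothetical "supersonic" 2nd-order low-pass: H(s) = w0^2 / (s^2 + (w0/Q)s + w0^2)
f0, q = 40_000, 0.75
w0 = 2 * np.pi * f0
b, a = signal.bilinear([w0**2], [1, w0 / q, w0**2], fs)
filtered = signal.lfilter(b, a, square)

# Leading-edge overshoot, as a percentage of the +/-1 transition (2 units peak to peak)
overshoot_pct = (filtered.max() - 1.0) / 2.0 * 100
print(f"Leading-edge overshoot: {overshoot_pct:.1f}%")   # a few percent with Q = 0.75
```

On a real amplifier you would of course look at the output on a scope rather than simulate it, but the idea is the same: a clean edge with no spike or rounded bump is what you want to see.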

Marantz's 2013 HDAM module versus the 2014 update on the scope.

Notice that the purple and green traces show a perfect square wave, while the blue trace exhibits overshoot.

I had a fellow working with me who had spent time at AR (among other labs), and he mentioned that they were developing one of the first mainframe-based FFT analyzers, just around the corner.  The AR guy in charge of this project (Bob Berkowitz, I think) was kind enough to run our amp through his Frankensteinian lashup, and we found that the filtered signal showed a 2dB rise across the 20-20k band when given an impulse signal, while the bypass mode's FFT was flat.  Nowadays, a rise as gradual as 2dB across the entire 10-octave range would be called "spectral tilt.”
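The same behavior shows up if you mimic that impulse-plus-FFT measurement in software. The sketch below feeds a unit impulse through a hypothetical, deliberately peaky (higher-Q) version of the same kind of filter and reads the gain difference between 20 Hz and 20 kHz off the FFT. The filter values are my illustrative assumptions, chosen so the rise toward the top of the band is plainly visible; this is not a reconstruction of the amp in the story.

```python
# Impulse-response FFT check (simulation only). The filter is a hypothetical,
# deliberately peaky 40 kHz low-pass; Q is chosen so the rise toward 20 kHz
# is visible, roughly on the order of the 2 dB described in the text.
import numpy as np
from scipy import signal

fs = 1_000_000
f0, q = 40_000, 1.7                              # assumed values, not from the article
w0 = 2 * np.pi * f0
b, a = signal.bilinear([w0**2], [1, w0 / q, w0**2], fs)

impulse = np.zeros(1 << 20)                      # ~1 s of samples
impulse[0] = 1.0
h = signal.lfilter(b, a, impulse)                # impulse response of the filter
spectrum = np.abs(np.fft.rfft(h))
freqs = np.fft.rfftfreq(len(impulse), 1 / fs)

def gain_db(f_hz):
    """Gain (dB) at the FFT bin nearest f_hz."""
    return 20 * np.log10(spectrum[np.argmin(np.abs(freqs - f_hz))])

print(f"Rise from 20 Hz to 20 kHz: {gain_db(20_000) - gain_db(20):+.2f} dB")
```

The time-domain and frequency-domain views are two faces of the same problem: the higher the Q, and hence the bigger the rise toward the top of the band, the bigger the leading-edge overshoot on a square wave.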

The filter was redesigned a bit, the overshoot was greatly reduced, and everyone was happy.  The Head Engineer wasn't totally convinced, but he went along with it anyway.

Since then, we've never failed to make this check, and on rare occasion, we have degraded sine-wave performance by maybe 0.2–0.3dB at the very top end in order to get the overshoot right.  From time to time, a certain amount of overshoot is left in, just to make the sound more, um, “interesting.”  And no, I'll not say how much.  However, whatever we do in this regard is both very careful and intentional. So far, nobody has called us on it for having “bad”-sounding amps. Actually, just the opposite.

During the past several years, I’ve worked on a few subwoofer amps; most of these have had user-adjustable low-pass filters.  Like most small companies, we don’t design them from scratch; instead, we take a manufacturer’s submitted sample and tweak it, sometimes a little, sometimes a lot.  Initially, these were tested with sine waves only, because they weren’t considered audiophile material.

One such amp measured great—flat up to the lowpass cutoff frequency, then a sharp rolloff to prevent voice leakage.  But when we listened to it, the overshoot from the filter made it sound as if there were a hand-held school bell being rung somewhere ("Good morning, children!").  So now, sub amps are given the same square wave test, but at 50 and 100Hz also.  Multi-stage filters are tested individually.

You see, the rounded overshoot itself approximates half of a sine wave, whose relation to the input signal is that it occurs at every half-cycle.  If the overshoot's shape is that of something within the audible range, it can be easily heard as such; if it's outside the audible range, it is perceived by some listeners as a generalized unpleasant presence.  When it’s bad enough, it becomes "grit" riding on the music—you know what I mean.
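A rough way to put numbers on that idea: if you treat the overshoot bump as half a sine cycle of width t, its equivalent frequency is 1/(2t). The little helper below is just my own back-of-envelope framing of the point, not a formula from the article.

```python
# Back-of-envelope: treat the overshoot bump as half a sine cycle of width
# t_half seconds, so its equivalent sine frequency is 1 / (2 * t_half).
def overshoot_equiv_freq_hz(t_half_seconds: float) -> float:
    """Equivalent sine frequency of a half-sine overshoot bump of width t_half."""
    return 1.0 / (2.0 * t_half_seconds)

print(f"{overshoot_equiv_freq_hz(50e-6):.0f} Hz")   # a 50 us bump -> 10 kHz, squarely audible
print(f"{overshoot_equiv_freq_hz(5e-6):.0f} Hz")    # a  5 us bump -> 100 kHz, well above the audible band
```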

Now of course, it isn't only misaligned filters which cause overshoot; some amp designs just do it all by themselves, particularly when pushed extremely hard.  But they can generally be fixed also, with a bit of judicious tweaking.  There are a number of good amp designers who could (and do) write volumes about this, so I won't.  My point is that this is a matter that is easily checked, which we do as a matter of course now. And so, we can rectify any problems.

I've only written about leading-edge overshoot, because this can be such an annoying thing.  Of course there's leading-edge undershoot also, and distortions of the square-wave's "flat-top" which represent low-frequency misbehaviors. However, a good designer will generally have already addressed these as part of the sine-wave frequency response testing, and so these should normally be non-issues.

Before I forget, one more thing—ADCs and DACs.  In the early days of digital audio, these overshot and rang horribly, and were primarily responsible for the rotten "CD sound" we all hated.  Thankfully, oversampling and other tricks have reduced this to acceptable (to me) limits.  Please note that I didn't say "perfect", but it's OK by me.  And I still don't love MP3s, but I do listen to them.

In a similar vein, switch-mode amps and their output filters have similar characteristics and have a tendency to ring.  Higher switching speeds and better matching of the filter to the load have helped a lot.  Perfect, no; better, yes.  If you want a kilowatt in the palm of your hand for pennies a watt, you’ve got to compromise a little; that’s reality.  Sorry folks, no silver bullet. If you want cleaner sound, stick with linear (Class A/B), at least for now.

Oh, the amps—the NAD 3020 and 3140.

The NAD 3140 stereo amplifier.

Addition by Peter Tribeman of Atlantic Technology (another practical, down-to-earth guy):

Paul is one of the best analytical guys I know in AV electronics and I credit him with isolating and identifying a number of issues with our stuff and of course NAD's when we were there back in the day.

This square wave test is easily demonstrable. Even with my well-worn “senior” hearing, I can detect the differences when an amp is misbehaving in this area. Very few people know about this, and even fewer pay attention to it when they hear about it. The audible differences are not subtle.

I honestly believe that if Gene digs into this with a real world test, and a listening panel confirms it, it could be a breakthrough.

One final story: Years ago at NAD we were dealing with a Japanese OEM who happened to be building both for us and Adcom. The factory had a listening room and we asked what they thought of the NAD Power Envelope design (one of our first models that used it). They said that compared to the Adcom our amp was "sharper—more detailed." The Adcom, to their way of listening, was more "mellow" or smoother. I knew right away what they were hearing: our overshoot with 1k square waves. We had lost the battle with London (NAD corporate headquarters) on that amp, since corporate paid little attention to what Paul was saying.

Going forward, since Paul was now being asked by London to debug all new amps shipped to Boston, I asked him to quietly slip in his modification that would take care of the square-wave issue, innocently buried along with his list of other suggested component value changes. It worked. We never again (at least while Paul and I were at NAD) had an NAD amp that sounded "sharp."

Lonnie Vaughn, VP/CTO of Emotiva Audio:

Amplifiers do sound different for a number of reasons.  Paul cites a good one, the square wave response.  On a personal note, I believe the power supply plays a big part in the way an amp sounds.  I get that the power supply accounts for the biggest part of an amplifier’s cost.  But so many companies take it down to the bare minimum required to meet the specs that there is no headroom in the system at all, and the amp itself just sounds flat as a board.  In these cases, all the designer had to do was put in a few dollars more in storage and it would have been a completely different unit.

For more info on this topic, see Trading Amplifier Quality for Features: A New Trend in AV Receivers?

To Hear Or Not To Hear?

So, there you have it: a very interesting take on one cause of audible differences among amplifiers, as documented by the experiences of two well-known and highly regarded industry veterans. Another log on the fire, so to speak, but one that’s definitely worth giving full consideration.

Based on these consistent, repeatable experiences, it seems logical to conclude that there are tangible reasons why amplifiers can and do sound different, even when operating well below their distortion limits.  Obviously, the average user is not going to have the opportunity to put an amplifier on an oscilloscope and see whether there is a correlation between its sound character and its square-wave overshoot/undershoot behavior.  Nonetheless, this phenomenon could explain many of the reported stark differences in amplifier sound that have previously gone unexplained. Maybe the golden ears at the tweak magazines have really been hearing something all along—they just couldn’t identify it or measure it. Maybe this is it.

What do you think? Have you heard differences in amplifiers?  Share your experiences on our forum.

Acknowledgements

We'd like to thank Paul Ceurvels, Senior Electrical Engineer at Atlantic Technology, for his contributions to this article.


About the author:

Steve Feinstein is a long-time consumer electronics professional, with extended tenures at Panasonic, Boston Acoustics, and Atlantic Technology. He has authored historical and educational articles for us, as well as occasional loudspeaker reviews.


Recent Forum Posts:

jkenny posts on May 14, 2015 16:40
I followed up this guy's (Ultmusicsnob) posts on Head-fi to see if he had done other listening tests as he is an interesting expert listener. I wanted to see what else he may have differentiated in ABX tests & found his posts on a “jitter audibility” thread

What all his posts demonstrate to me is something that I've reckoned about ABX tests - very difficult to do them successfully :
- it's a different type of listening to “normal” listening
- it requires very good skills at forensic analysis of the two audio samples before being able to pick out the audio snippet that allows consistent identification
- it requires very focussed concentration to maintain the characteristic being listened for
- it requires specific, selected audio files to use as source.

Look at the number of others on these two threads that are also able to differentiate the differences ultmusicsnob can - none

Further quotes from him:
I have to think it's not about fidelity of the equipment, it's figuring out what to listen for. Listening for jitter is *unlike* other ABX comparisons I've done before. If it helps, I try to imagine the sharpest focus of sound in terms of how “narrow” I can hear the piano attack, as though it were a spatial measure. The narrower attack is ‘n’. It is difficult because I'm continually tempted to chase mirages of differences in other details. If I stick to “focus” and “narrow” I get a result.

Congrats, UltMusicSnob, for doing so well on that test. I can't help but notice that you had to “game” the test to achieve your results. That is, you had to “listen” in an unusual way pretty much wholly removed from how one would normally listen to music. With all respect, is this a tacit admission that the “added jitter” track would have been indistinguishable absent the gaming?

Well, I will insist on the caveat that *all* ABX testing is of a sort pretty much wholly removed from how one would normally listen to music. The protocol can't be completed otherwise. The *only* time I ever listened like that in real life was when I was trying to hear John Lennon say “I bury Paul” at the end of “Strawberry Fields”. That said,

Yes, my first research question is usually “Is differentiation possible at all???”, and so I use the tools available to hunt for the differences.

It was particularly difficult in this case, as I don't have a good sense of what problematic jitter *ought* to sound like, and it matters what testers are listening for.

Since I can pick out a difference on one snare hit, a further refinement would be to listen more ‘casually’, and see if the drum set sounds different throughout.



I'm guessing that the added jitter track would have been indistinguishable for this particular music, but it's faintly conceivable that interested listeners could learn to hear the difference without the procedures I described.
jkenny posts on May 14, 2015 14:57
Here's an example of what I'm talking about - “Perceptual differences that affect the whole music portrayal may well be very difficult to isolate & identify as a specific audible difference in a small snippet of music.”

In this case, the ABX tester was/is a recording professional & yet here are some comments from his successful identification of 16/44 Vs 24/192 differences

I suspect that a lot of the audible differences between devices fall into this category. No longer are there too many examples of frequency/amplitude differences between devices.

Keeping my attention focused for a proper aural listening posture is brutal. It is VERY easy to drift into listening for frequency domains–which is usually the most productive approach when recording and mixing. Instead I try to focus on depth of the soundstage, the sound picture I think I can hear. The more 3D it seems, the better.

Caveats–Program material is crucial. Anything that did not pass through the air on the way to the recording material, like ITB synth tracks, I'm completely unable to detect; only live acoustic sources give me anything to work with. So for lots of published material, sample rates really don't matter–and they surely don't matter to me for that material. However, this result is also strong support for a claim that I'm detecting a phenomenon of pure sample rate/word length difference, and not just incidental coloration induced by processing. The latter should be detectable on all program material with sufficient freq content.
Also, these differences ARE small, and hard to detect. I did note that I was able to speed up my decision process as time went on, but only gradually. It's a difference that's analogous to the difference between a picture just barely out of focus, and one that's sharp focused throughout–a holistic impression. For casual purposes, a picture that focused “enough” will do–in Marketing, that's ‘satisficing’. But of course I always want more.

I can't post my actual files here without copyright violation, but I'll give the info:
For the first two, I just used a track from a CD I purchased recently, “Groove Tube” by a Japanese artist who goes by “MEG”, from her album ‘Room Girl’. It's Redbook Audio, of course, and I used SoundForge 10 which comes with a resampler by Izotope that I used to go to 192 kHz, and another tool also by Izotope which I used to go from 16 to 24 bits. There are some individual settings within those tools, I'll follow up with details. The usefulness of the program content was that it was 1) live miked and 2) complex with many elements carefully placed within a large soundstage.
In re “kind of artefact”, I tried to listen for soundstage depth and accurate detail. It took a lot of training repetitions, and remains a holistic impression, not any single feature I can easily point to. It seems to me that the 192 files have the aural analogue of better focus. To train, I would try to hear *precisely* where in front of me particular sound features were located, in two dimensions: left-to-right, and closer-to-further away–the foobar tool would then allow me to match up which two were easier to precisely locate. I know it muddies the waters, but I also had a very holistic impression of sound (uhhhhhh) ‘texture’??–in which the 192 file was smoother/silkier/richer. The 192 is easier on the ears (just slightly) over time; with good sound reproduction through quality headphones (DT 770) through quality interface (RME Babyface) I can listen for quite a while without ear fatigue, even on material that would normally be considered pretty harsh (capsule's ‘Starry Sky’, for example), and which *does* wear me out over time when heard via Redbook audio.

I realize that the ABX only reveals that *something* is detected that allows me to identify the proper pairs. No one need take my word for it that I'm listening for and hearing spatial detail–but that is in fact what I'm doing, so folks can take it or leave it in that respect.

I will note that IF it were the case that a consistent artifact/distortion is being added to the signal, then it would also have to be the case that this artifact would be detectable in all tested content. But this is not the case. If there's not soundstage depth present in a live-recorded signal on the disk, then I can't score above random guessing in foobar, period. It IS the fact that I can detect the difference on some, but not others.

Practice improves performance. To reach 99.8% statistical reliability, and to do so more quickly (this new one was done in about 1/3 the time required for the trials listed above in the thread), I mainly have to train my concentration.

It is *very* easy to get off on a tangent, listening for a certain brightness or darkness, for the timbre balance in one part, several parts, or all–this immediately introduces errors, even though this type of listening is much more likely to be what I am and need to be doing when recording and mixing a new track.

Once I am able to repeatedly focus just on spatial focus/accuracy–4 times in a row, for X & Y, and A & B–then I can hit the target. Get lazy even one time, miss the target.

It took me a **lot** of training. I listened for a dozen wrong things before I settled on the aspects below.

The difference I hear is NOT tonal quality (I certainly don't claim to hear above 22 kHz). I would describe it as spatial depth, spatial precision, spatial detail. The higher resolution file seems to me to have a dimensional soundstage that is in *slightly* better focus. I have to actively concentrate on NOT looking for freq balance and tonal differences, as those will lead you astray every time. I actively try to visualize the entire soundstage and place every musical element in it. When I do that, I can get the difference. It's *very* easy to drift into mix engineer mode and start listening for timbres–this ruins the series every time. Half the battle is just concentrating on spatial perception ONLY

I initially found training my ears to find a difference very difficult. It's *very* easy to go toward listening for tonal changes, which does not help. I get reliable results only when trying to visualize spatial detail and soundstage size, and I tend to get results in streaks. I get distracted by imaginary tonal differences, and have to get back on track by concentrating only on the perceived space and accuracy of the soundstage image.
jkenny posts on May 09, 2015 13:13
PENG, post: 1083004, member: 6097
I tend to agree to a point but I believe most people can tell the difference in sq between mp3 and cd sound quality without even trying. I can also tell the difference between my different speakers when comparing them side by side without trying hard, yet I still won't call it night and day. At the end of the day, I accept the fact that it is just an expression, one that has a range of meaning.
Sure, people involved in a hobby tend to exaggerate differences - it's part & parcel of how hobbyists differentiate themselves from others

I'll give you an example of what I'm talking about - a while ago, I heard DSD played through a Lampizator Big 7 & compared to the same recording in PCM played through the lampizator, the DSD was more realistic & solid sounding - just much more musical sounding. Loud & subtle sounds each seemed to be independent in the soundstage whereas, I wouldn't have complained about the PCM playback but compared to DSD it sounded strident in HF when things got busy - it seemed the busyness affected all of the sounds in the sound field
PENG posts on May 09, 2015 12:37
jkenny, post: 1083003, member: 73303
Yes, I agree, the phrase needs qualification - which is it “night” or “day”

Perceptual differences that affect the whole music portrayal may well be very difficult to isolate & identify as a specific audible difference in a small snippet of music.

I'm of the opinion that blind testing is only really successful when a specific difference has been isolated & can be heard in a small audio snippet. I believe that trying to compare two full tracks is doomed to failure due to the nature of how our perceptions work. It's like trying to compare the visual difference between two colours - the most successful way is to have the two colours side by side. The least successful way is to show one colour, remove it & then show another colour.

I tend to agree to a point but I believe most people can tell the difference in sq between mp3 and cd sound quality without even trying. I can also tell the difference between my different speakers when comparing them side by side without trying hard, yet I still won't call it night and day. At the end of the day, I accept the fact that it is just an expression, one that has a range of meaning.
jkenny posts on May 09, 2015 11:30
PENG, post: 1082982, member: 6097
jk, so in short when people say night and day we should ask them to narrow their meaning down first.
Yes, I agree, the phrase needs qualification - which is it “night” or “day”

Perceptual differences that affect the whole music portrayal may well be very difficult to isolate & identify as a specific audible difference in a small snippet of music.

I'm of the opinion that blind testing is only really successful when a specific difference has been isolated & can be heard in a small audio snippet. I believe that trying to compare two full tracks is doomed to failure due to the nature of how our perceptions work. It's like trying to compare the visual difference between two colours - the most successful way is to have the two colours side by side. The least successful way is to show one colour, remove it & then show another colour.