
Shootout Method Continued

There were two different tests conducted - blind and sighted. For the blind test (which was done first) I had the entire front of the room blocked off with a large piece of speaker grill fabric. This fabric is acoustically transparent and very thin. From behind it, I could clearly see each of the participants, to the point that I could tell when they were looking over each other's shoulders. Once I blocked the light coming in from the window behind the screen, the participants could see neither me nor the speakers. The screen was set up before the participants arrived on Friday and stayed up until after the blind listening tests were completed on Saturday. After each listening session, I switched the speaker cables (each marked with an A or B, as were the amps), came out from behind the curtain, and let the listeners rest their ears/minds as long as they wanted. I did not answer any questions about which speakers were in the previous tests, even after all the blind tests were complete. In fact, as of this writing, they still don't know. They are finding out at the same time as you.

[Images: a view through the screen fabric; the front of the room with the screen in place]

After the blind tests were done, I took down the screen and asked the participants to examine the speakers. They asked about the grills (which were removed for all the listening tests), looked closely at the finishes and build quality, and even picked up the speakers. I then allowed them to switch the speakers individually and compare them as they wished. I asked for an individual writeup of each speaker for the sighted portion. They were encouraged to compare and contrast the speakers in any way/order they wanted. During the sighted test, I was not in the room (for most of it, I was taking a nap).

My first big concern was ensuring the signal chain didn't affect the sound. Since I was running one pair of speakers through the "main" analogue inputs and the second pair through the "surround" inputs, there was a concern that the sound quality would be affected. Before the participants arrived, I hooked up one of the speakers and took a measurement with the Sencore SP395A FFT Audio Analyzer. I then switched the speaker wire to the second amp and took a separate measurement.

[Image: measured frequency response through the two amp/input paths]

As you can see, the measurements are identical. The second big question was how the speaker grill fabric I was using for a screen would affect the sound. With the same setup, I took a measurement, then held up the fabric and took a second measurement. This second measurement is probably worse than what you'd experience in the actual room, as there may have been more folds in the fabric, and my proximity to the speaker/mic could have made a difference. As you can see, the two measurements are very close.

[Image: measured frequency response with and without the screen fabric]
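If you want to run the same sanity check on your own gear, here's a minimal sketch of the comparison in Python. It assumes the two response curves have been exported as simple two-column frequency/dB text files - the filenames and export format here are hypothetical, not something the SP395A necessarily provides.

```python
# Minimal sketch: compare two exported frequency-response curves.
# Assumes two-column CSV exports (frequency in Hz, level in dB);
# the filenames below are hypothetical.
import numpy as np

def load_response(path):
    """Load a two-column (frequency in Hz, level in dB) text export."""
    data = np.loadtxt(path, delimiter=",")
    return data[:, 0], data[:, 1]

def compare_responses(path_a, path_b):
    """Print the worst-case and average level difference between two curves."""
    freq_a, db_a = load_response(path_a)
    freq_b, db_b = load_response(path_b)
    # Interpolate curve B onto curve A's frequency points so they line up.
    db_b_on_a = np.interp(freq_a, freq_b, db_b)
    diff = np.abs(db_a - db_b_on_a)
    print(f"Max deviation:  {diff.max():.2f} dB")
    print(f"Mean deviation: {diff.mean():.2f} dB")

# Main vs. surround input paths, or with vs. without the screen fabric.
compare_responses("main_input.csv", "surround_input.csv")
```

The same comparison works for the with/without-fabric curves; if the deviations are only a fraction of a dB across the band, the two signal paths are, for practical purposes, the same.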

My room is 14.5 feet long and 12.5 feet wide, and it is open on two sides. It has undergone the Auralex Room Analysis Plus process, which has you take measurements of your room and uses the data to suggest room treatments and their locations. My room is treated with my own DIY acoustical absorbers and six GiK Tri-Traps. The room is not at all dead, nor is it 100% flat. For a complete discussion of my room, please see the review of the Auralex Room Analysis Plus (forthcoming). The big problem with this setup and the room is not so much the acoustics (which, thanks to Auralex and GiK, are quite good) but the number of speakers. Four pairs of floorstanding speakers are a lot in a 12.5-foot-wide room. What this meant was that the best spot for listening was in one of the two center seats. The side seats aren't that great even with one pair of speakers, but with four, they go from tolerable to terrible. This required the participants to switch seats - something I would have asked them to do anyway.

When picking the pairs, I went to random.org and generated pairs of numbers, each between one and four. I numbered the speakers based on the order I received them (that seemed about as random as anything else). I ignored pairs that were identical (1 and 1, etc.) or that were repeated (in either order). This left me with the following order of pairs (a sketch of the selection logic follows the table):

Blind

a            b
Krix         Salk
Infinity     Krix
DALI         Infinity
DALI         Krix
Salk         DALI
Salk         Infinity
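For the curious, here's a minimal sketch of that selection procedure in Python. It stands in Python's random module for random.org, and the speaker labels are placeholders - the real number-to-speaker assignment simply followed delivery order.

```python
# Minimal sketch of the pair-selection procedure described above.
# random.Random stands in for random.org; the label mapping is a placeholder.
import random

SPEAKERS = {1: "Speaker 1", 2: "Speaker 2", 3: "Speaker 3", 4: "Speaker 4"}

def draw_listening_pairs(seed=None):
    """Draw random pairs of speaker numbers, discarding identical pairs and
    any matchup already drawn (in either order), until all six unique
    pairings of the four speakers have been scheduled."""
    rng = random.Random(seed)
    order, seen = [], set()
    while len(order) < 6:  # C(4, 2) = 6 unique pairings
        a, b = rng.randint(1, 4), rng.randint(1, 4)
        if a == b or frozenset((a, b)) in seen:
            continue  # identical or repeated pair: ignore and redraw
        seen.add(frozenset((a, b)))
        order.append((a, b))
    return order

for a, b in draw_listening_pairs():
    print(f"{SPEAKERS[a]} vs {SPEAKERS[b]}")
```

Because identical and repeated draws are simply thrown away, every run ends with all six matchups in a random order, which is the whole point: nobody, including the person running the test, can "know" which pairing comes up first.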

One of the joys of randomness is that it takes all of the guesswork out of trying to "fool" the listeners. Of course, Clint said afterward that he "knew" I'd pair up the Krix and Salk first. The fact is that I didn't plan that pairing, and he knew that I had chosen the pairs randomly, so he didn't "know" anything. At the end of each of the listening tests, I would switch the speaker cables and reset all the amps. At the end of the break, I would re-check all the speaker/amp connections and start the next test.

[Image: the front left of the listening room]

In order to facilitate data collection, I decided to draw up two forms - one for the sighted test and one for the blind. The blind form was very specific as to what the listeners should be paying attention to - Highs, Mids, Lows, Soundstage, Imaging, etc. After they filled out the form on their laptops, I had them email it to me. This way, they couldn't go back and make changes after talking during the breaks. For the sighted form, all I listed were build quality and aesthetic concerns. At first I had drawn up a second set of listening pairs like the Blind table above, but in the end I decided just to let them have at it. It had already been a long day, and six more listening tests really wouldn't have proven anything. Better, I thought, to get their individual reactions to each speaker in a sighted test and let them compare and contrast as they wished. The only addition I made to the sighted test was to ask them to do a listening test with the Krix Phoenix speakers without the port plugs. During my review of the Phoenix speakers, I found that the bass response was extremely overbearing without the port plugs, so I left them in during the blind tests. I could have included a second configuration of the Krix in the blind tests, but I didn't think it would do much good (as I figured they'd just get creamed) and it would have added considerable time. The evaluation of the Krix without the plugs in the sighted test backed that up.
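Since the emailed forms eventually have to be collated, here's a minimal sketch of one way that could be tabulated. The category names come from the list above; the data structure and CSV layout are my own assumptions, not the actual forms.

```python
# Minimal sketch: collate emailed blind-test forms into one CSV.
# The categories come from the form description above; everything else
# (field names, file layout) is assumed for illustration.
import csv
from dataclasses import dataclass

CATEGORIES = ["highs", "mids", "lows", "soundstage", "imaging"]

@dataclass
class BlindResponse:
    listener: str
    test_number: int
    speaker: str   # "A" or "B" only - identities stay hidden until the end
    notes: dict    # category -> the listener's comments

def write_responses(responses, path="blind_results.csv"):
    """Flatten each response into one CSV row per category for later review."""
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["listener", "test", "speaker", "category", "notes"])
        for r in responses:
            for cat in CATEGORIES:
                writer.writerow([r.listener, r.test_number, r.speaker,
                                 cat, r.notes.get(cat, "")])

# Example: one listener's notes for speaker "A" in the first blind test.
write_responses([BlindResponse("Listener 1", 1, "A",
                               {"highs": "slightly forward", "lows": "tight"})])
```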

The participants were asked to bring their own music for the shootout. We burned a CD with six songs on it (the first track was a 45-second clip, the rest full songs) to eliminate having to switch CDs mid-test.

Track 1 - 45-second clip of Seal's Crazy off his self-titled 1991 album
Track 2 - Happier Girl by Wellville off their self-titled release
Track 3 - Toy Matinee from the band of the same name off the album of the same name
Track 4 - Willie and Laura Mae Jones by Shelby Lynne off Just a Little Lovin'
Track 5 - South Texas Girl by Lyle Lovett off It's Not Big It's Large
Track 6 - Tonight by Kate Walsh off Tim's House

All in all, the CD was 25 minutes long, though they had the ability to fast-forward, rewind, and jump around as they wished.

 


engtaz posts on April 05, 2009 15:45
Nice write-up. I liked how you discussed the lessons learned about music length and music choice, and I really liked the blind listening. The individual review you did before it was cool. I also liked how you compared your personal testing with what you were hearing during the shootout.

Thanks,
Roy
krabapple posts on April 05, 2009 14:34
jinjuku, post: 546806
Dinging ppl since they need Dinging since 1988:

I am all aflutter waiting for the krabappleholics.com audio enthusiast web site where blind listening is done with 500 people wired to machines that record over 200 different biometric functions. Most importantly when they wet themselves.

end of rant.

Your wait is over. You can stop fluttering. The site is called Hydrogenaudio.

Also odd that you'd jeer about blind testing….since the Audioholics shootout under discussion was, you know…*blind*. And I applauded them for that. Audioholics markets itself under this banner: “Let our rigorous testing and reviews be your guidelines to A/V equipment—not marketing slogans”. If the testing is only ‘somewhat’ rigorous….some of us notice that sort of thing.

Anyway, you ‘dinged’ me just out of spite, not because anything I wrote was incorrect or malicious. It just made widdle baby jinjuku *annoyed*.

Better get a change of diapers ready before you read the following; it's a post from yesterday by Sean Olive – the guy who's been running all those ‘scientifical’ tests on loudspeakers at NRC and Harman for a decade or more, with the same goal as Audioholics – ‘the pursuit of the truth in audio’ – but using better methods. You might notice it says *kinda the same thing I did*.

http://www.avsforum.com/avs-vb/showthread.php?p=16198521#post16198521

First of all, I'd like to say that it is extremely difficult and often misleading to make comparisons between different speakers without the use of a speaker mover to control positional effects. The positional effects can easily swamp out any true audible differences between the speakers, particularly if the measured differences among them are very small. I used to find that positional effects alone could change the preference rating of a loudspeaker by 20%. That is why Harman spent considerable money on a multichannel and in-wall speaker mover – so that positional biases are removed from the test.

Secondly, you cannot correlate what you hear to what you measure unless you have comprehensive on- and off-axis data like the kind we advocate. You could be hearing brightness in a loudspeaker that has a dip at 2-4 kHz on-axis because of its off-axis response. The upper-treble brightness you are hearing could also be due to the 2-4 kHz dip, since the dip could produce a release in upward masking - emphasizing the frequency range above the dip. In our listener training exercises, listeners commonly mistake dips for peaks located higher in frequency.

Comprehensive anechoic loudspeaker measurements combined with the right set of in-room measurements can usually explain what you are hearing.
jinjuku posts on April 01, 2009 14:35
krabapple, post: 546779
Bravo that you did the test blind, but I can't put too much stock in the results – Harman built an expensive ‘loudspeaker turntable’ and an acoustically optimized room for a reason. It's just not fair to compare loudspeakers situated in different positions in the room, given the effect loudspeaker position has on room interactions.

Also, three listeners is a mighty small sample….and I wouldn't venture to guess what statistical analysis would say about the results. It would also have been interesting to allow repeats of pairs. And allowing the listeners to discuss their impressions amongst themselves during the test (which appears to be the case) is a definite no-no.

Dinging ppl since they need Dinging since 1988:

I am all aflutter waiting for the krabappleholics.com audio enthusiast web site where blind listening is done with 500 people wired to machines that record over 200 different biometric functions. Most importantly when they wet themselves.

end of rant.
krabapple posts on April 01, 2009 13:02
Bravo that you did the test blind, but I can't put too much stock in the results – Harman built an expensive ‘loudspeaker turntable’ and an acoustically optimized room for a reason. It's just not fair to compare loudspeakers situated in different positions in the room, given the effect loudspeaker position has on room interactions.

Also, three listeners is a mighty small sample….and I wouldn't venture to guess what statistical analysis would say about the results. It would also have been interesting to allow repeats of pairs. And allowing the listeners to discuss their impressions amongst themselves during the test (which appears to be the case) is a definite no-no.