Originally Posted by Nuance
I disagree and think winners and/or losers based on ratings is a good idea (so do Sean Olive and Floyd Toole, for what it's worth. They didn't assign number scores, but they did rank them first, second, third, etc). The important thing to remember is the results should not be taken as "speaker A won so it will always be better than speaker B, so buy speaker A." The results only apply to those listeners in that room using those electronics. If you keep that in mind then there is nothing to get "fired up" about. Obviously that didn't stop some people, though...
My final "piece of advice" is to ignore those who do
"get fired up." They are usually the type of people who are either upset because their brand lost (angry based on biases) or simply love to argue and won't stop until they've exhausted themselves doing so. There is a major difference between that type of person and the person who gives constructive criticism to help in the future. Don't get those two types of people confused, as constructive criticism is good. Again, ignore those who only want to wine and argue for the sake of it and don't let them ruin your fun. Those same type of people even bash Toole and Olive's tests.
Say what you will, but when you have a large number of listeners (I'm not sure of the exact count, but with four rows and probably at least 6 seats per row, that's more than 24 listeners), reducing every listener's impressions to a single score, such as 55, and then presenting that number as a speaker's performance is worthless. Especially when a large number of the listening positions (the majority?) were inadequate. Such blind tests should be limited to a reasonable number of listeners.
If there's room for, say, 6 or 8 good listening positions, cramming in 40 people across 4 rows, from one wall to the other, isn't beneficial for the test at all; it's actually detrimental. And when people say things after the test like "If you were in one particular seat and had determined a particular speaker was noticeably better, in another seat your opinion became the opposite. And more than likely, in yet another seat, your opinion would change again," how can you put any worth on the results?
You cannot average the scores of, say, 10 abysmal listening positions, 10 bad ones, 6 not-so-bad ones, and 6 decent ones to get a definitive score for speaker performance. It's ridiculous.
But hey, what else can you do? If there were 30 listeners and the notes read something like:
A) I like these, good and tight bass, not bright at all
B) I disliked these, no bass at all, bright at times
C) I like them, good bass, bit recessed highs and mids, overall not bad
D) I hated them, way too much bass, recessed treble...
So what do you do, list all 30, many of which contradict one another? No... you average the results (well, if you're lazy and don't want to do what you should be doing):
A) 60 vs 55
B) 40 vs 65
C) 50 vs 40
D) 30 vs 35
Average: 45 pts vs 49 pts
So is this the same as seeing every comment above? Does it mean the same thing? Does it have the same worth? No. Looking only at the final averages, 45 vs 49, you might think the second speaker was better, but looking at the four individual scores tells a different story. (And as I said, speaker preference is subjective, so knowing that a listener scored one speaker 40 and the other 65 because he prefers more bass, and one speaker simply had more bass, is worth mentioning.)
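To make that concrete, here is a minimal sketch (Python, using only the made-up scores from the example above, not data from the actual test) showing how the averages can hide the fact that the listeners were split 2-2:

```python
# Hypothetical per-listener scores for two speakers (same listeners A-D as above).
scores_x = {"A": 60, "B": 40, "C": 50, "D": 30}
scores_y = {"A": 55, "B": 65, "C": 40, "D": 35}

avg_x = sum(scores_x.values()) / len(scores_x)   # 45.0
avg_y = sum(scores_y.values()) / len(scores_y)   # 48.75, i.e. ~49

print(f"Average: {avg_x:.1f} vs {avg_y:.1f}")

# The averages say speaker Y "wins", but per listener it's a 2-2 split,
# and the size of each listener's preference varies a lot.
for listener in scores_x:
    diff = scores_y[listener] - scores_x[listener]
    winner = "Y" if diff > 0 else "X"
    print(f"Listener {listener}: X={scores_x[listener]} Y={scores_y[listener]} -> prefers {winner} by {abs(diff)}")
```

Two numbers, 45 vs 49, tell you almost nothing about why each listener preferred what they preferred, which is exactly the information the comments above contain.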
Again, you have some people trying to discredit others: "people who are either upset because their brand lost (angry based on biases) or simply love to argue," but that's the same old story on these forums. It's much easier to try to discredit someone than to rebut their arguments.
What is also easy is building straw man arguments: "I disagree and think winners and/or losers based on ratings is a good idea." Basically, you present someone's argument as something it's not, then knock down that made-up argument. It's simply a logical fallacy, mostly employed by people with old grudges...
"the best thing to do is present your data and results honestly, being careful not to draw incorrect conclusions."
Again, if you had, say, 30 listeners, an honest way of presenting the results would be to publish every listener's test sheet. That would be interesting to see, because it would give the reader a decent idea of what actually transpired in the test. It's simply presenting your data in an honest and correct fashion, while being careful not to draw incorrect conclusions. Presenting a single mark that averages together the good and bad listening positions definitely isn't the best way to do it, imho.