

·
Registered
Joined
·
470 Posts
On-axis response is only one of the five lines on a spinorama plot. I don't know how heavily it's weighted in the "Harman preference equation," but based on that, I would assume the answer is: not very.
I could be wrong, but I think he and Dennis were talking more about directivity index consistency vs y-axis height.

A good contrasting example would be the Genelec 8341 vs the Ascend Sierra 2. The Genelec has a much straighter (more consistent) directivity index, but the line goes all the way up to ~55. The Ascend Sierra 2 has a much more jagged (less consistent) directivity index, but the line only goes up to ~48. The question is: which is better, a straighter line or a line with a smaller slope? I think Dennis was saying that if forced to compromise in some way, he'd rather compromise the consistency in favor of an overall lower DI.
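For anyone who wants to make that trade-off concrete, here's a rough Python sketch of how I'd boil a DI curve down to "height" vs "straightness." The function, the frequency limits, and the metric names are my own invention (not from Harman or any standard); DI here is just the usual listening-window-minus-sound-power difference:

```python
import numpy as np

def di_summary(freqs_hz, listening_window_db, sound_power_db, f_lo=300.0, f_hi=10000.0):
    """Summarize a directivity index curve by its average height,
    its overall slope, and how much it wiggles around that slope."""
    freqs_hz = np.asarray(freqs_hz, dtype=float)
    di = np.asarray(listening_window_db) - np.asarray(sound_power_db)

    band = (freqs_hz >= f_lo) & (freqs_hz <= f_hi)
    x = np.log10(freqs_hz[band])
    di_band = di[band]

    mean_di = di_band.mean()                             # overall "height" of the line
    slope, intercept = np.polyfit(x, di_band, 1)         # dB per decade of frequency
    wiggle = (di_band - (slope * x + intercept)).std()   # smaller = more consistent DI

    return mean_di, slope, wiggle
```

On numbers like these, the 8341-style speaker would score low on `wiggle` but high on `mean_di`, and the Sierra-2-style speaker roughly the other way around, which is exactly the trade-off in question.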
 

·
Registered
Joined
·
473 Posts
Stereo tests didn't produce significantly different results than mono tests. Between Olive and Toole, they must have tested almost 250 speakers over the course of a decade. Tests in mono, stereo and surround yielded the same preference rankings; it just took longer to get those results as the tests went from mono to stereo to 5.1. AFTER finding out that the rankings didn't change, they stuck with testing in mono. Why waste time & resources delaying the same result? Turns out that if you prefer a particular speaker when listening to one of them, then it will continue to be your preference when listening to two of them or five of them. The research is described and footnoted in Toole's book.

Once the testing found which speakers were preferred and which were not, Toole/Olive measured them to find out if measurements correlated to preference. They found common aspects to the most preferred speakers: smooth on-axis response, similar off-axis response that changed gently, wide dispersion, extended frequency response, etc. With enough samples, just looking at the measurements allowed them to predict (fairly well) where a speaker would end up in the preference rankings. Not trying to "convince" you of anything, merely pointing out how measurements correlated to preference. And they did so with monotonous consistency.

Keep in mind that these are preference tests. Just because a song is sitting atop the Top 40 charts doesn't mean that it will be your favourite song. That chart can tell you what people like, but it cannot predict your personal preference. Same with Harman's testing. Most preferred is an aggregate, not an absolute for every single listener.
This is important to note.

Sean Olive in his blog says as much:

"The early studies involved comparison of different speakers that varied more than bass and treble balance. Some speakers had resonances that produced serious colorations, distortions, differences in directivity. The headphone study basically takes a flat neutral headphone and asks people to adjust the bass and treble. That's where experience and age seem to take over. The same holds true for loudspeakers when we did a similar experience. "

"Prior to this study, I nor anyone I know had published a study where trained and untrained listeners were given a bass and treble control and asked to adjust to taste. In previous studies, trained and untrained listeners were asked to give preference ratings to speakers that varied in ways other than bass and treble. It seems that given some finite choices people will pick the most neutral speaker or headphone (no resonances), wide bandwidth. However, given some tone controls they will adjust for variations in program and taste. "

Source: http://seanolive.blogspot.com/2015/11/factors-that-influence-listeners.html

Here is the link to the white paper "Listener Preferences for In-Room Loudspeaker and Headphone Target Responses": http://www.aes.org/e-lib/browse.cfm?elib=17042 The abstract itself says the following: "There were significant variations in the preferred bass and treble levels due to differences in individual taste and listener training. "

The conclusion is that under the overarching umbrella of a speaker being "well behaved enough," listener preferences take over, and from there, there is "significant variation" - this is the definition of subjective preferences.
 

·
Registered
Joined
·
470 Posts
Stereo tests didn't produce significantly different results than mono tests. Between Olive and Toole, they must have tested almost 250 speakers over the course of a decade. Tests in mono, stereo and surround yielded the same preference rankings; it just took longer to get those results as the tests went from mono to stereo to 5.1. AFTER finding out that the rankings didn't change, they stuck with testing in mono. Why waste time & resources delaying the same result? Turns out that if you prefer a particular speaker when listening to one of them, then it will continue to be your preference when listening to two of them or five of them. The research is described and footnoted in Toole's book.

Once the testing found which speakers were preferred and which were not, Toole/Olive measured them to find out if measurements correlated to preference. They found common aspects to the most preferred speakers: smooth on-axis response, similar off-axis response that changed gently, wide dispersion, extended frequency response, etc. With enough samples, just looking at the measurements allowed them to predict (fairly well) where a speaker would end up in the preference rankings. Not trying to "convince" you of anything, merely pointing out how measurements correlated to preference. And they did so with monotonous consistency.

Keep in mind that these are preference tests. Just because a song is sitting atop the Top 40 charts doesn't mean that it will be your favourite song. That chart can tell you what people like, but it cannot predict your personal preference. Same with Harman's testing. Most preferred is an aggregate, not an absolute for every single listener.
Mostly agree with what you're saying, but I do disagree somewhat on some things. I've read Toole's book, as well as this thread, as well as the M2 vs Salon 2 thread. I still don't believe the research is sufficient to say that "the speaker that is preferred in mono will ALWAYS be preferred in 5.1, or 7.1, or 9.1". Like I said, I do think it's sufficient to say a softer statement like "the speaker that is preferred in mono will be preferred in 5.1, 7.1 or 9.1 most of the time". I have slight doubts that a mono vs 9.1 test would hold consistently with a shootout between something like a Danley SH50 and an Ohm Walsh. I'd love for my doubts to be proven wrong, though, and I mean that seriously.

I also don't agree with stereo testing with the speakers in the same spot. Optimal speaker positioning depends on the speaker and on the room. Some need to be closer together, some farther apart, some toed in more, some toed in less, some closer to the listener, some closer to the wall. Again, I get why they can't test this way, but it's a limitation that makes me trust their stereo tests less than I trust the mono tests.

I actually used to have much more confidence in measurements' ability to predict user preference than I do now. I was very convinced after reading this thread, but it was due to a misunderstanding on my part related to what "86% correlation" really means. One thing that Amir at Audio Science Review is showing us now is that speakers really have to be quite far apart on the preference scale in order for the formula to be able to predict user preference with great confidence. It seems that if two speakers are even somewhat close in the spins, user preference takes over, and the formula is not really that great at predicting for individual use. Like you said, it's an aggregate.
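For context, here's roughly what that formula looks like. This is the Olive preference model as it's usually quoted; I'm writing the coefficients from memory, and the two example speakers below are completely made up, so treat it as a sketch rather than gospel:

```python
def olive_preference_score(nbd_on, nbd_pir, lfx, sm_pir):
    """Olive's loudspeaker preference model as commonly quoted:
    nbd_on  - narrow-band deviation of the on-axis curve (dB)
    nbd_pir - narrow-band deviation of the predicted in-room response (dB)
    lfx     - log10 of the low-frequency extension (Hz)
    sm_pir  - smoothness (regression fit) of the predicted in-room response
    Coefficients are from memory of the published 2004 model; verify before relying on them."""
    return 12.69 - 2.49 * nbd_on - 2.99 * nbd_pir - 4.31 * lfx + 2.32 * sm_pir

# Two hypothetical speakers that "spin" almost the same:
a = olive_preference_score(0.40, 0.35, lfx=1.60, sm_pir=0.85)
b = olive_preference_score(0.45, 0.40, lfx=1.65, sm_pir=0.80)
print(round(a, 2), round(b, 2))  # about 5.72 vs 5.12, well under a point apart
```

The point being: when two speakers spin similarly, the predicted scores land within a fraction of a point of each other, which is well inside the spread you'd see between individual listeners.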
 

·
Registered
Joined
·
2,905 Posts
That's not true. Notice I said measure the same "in terms of frequency response". Speakers can have similar frequency responses, yet different directivity indexes. These speakers will sound different. Look at the M2 vs Salon 2. They both measure as very neutral in terms of frequency response, but have different dispersion patterns.
It's not? If you take two speakers with similar on-axis frequency response with different dispersion patterns, the power response will not be the same. Since the power response governs what the speaker will actually sound like in a room, it's not irrelevant. When I look at the spinoramas between the M2 and Salon 2, the power response starts deviating. What is "similar"? To which frequency response were you referring?

That being said, Dr Toole clearly states that dispersion effects are a subjective preference, which certainly appears to be the case given the various opinions expressed by users on AVS over the years. If you have two different speakers that measure similarly well, but have different dispersion, then it becomes a subjective decision whether you prefer precision or a bigger soundstage. Of course, you can "narrow" the dispersion by applying sound absorption at the first reflection points. Would the two speakers produce different imaging in that case?

Importantly, dispersion effects can be heard in mono as well as stereo. There are multiple problems judging "imaging" between individual pairs of speakers -- are they positioned/aimed identically acoustically in the room (unlikely); is there a bad "draw" and one pair has some QC inconsistencies/variation that result in poorer imaging than another pair with better "matching"; is the listener positioned the same acoustically (unlikely with multiple listeners)?

No scientific experiments are going to determine the subjective preference of every single listener. We have all witnessed many people who make radical trade-offs to prioritize their personal preference (such as picking speakers that have a bigger soundstage but frequency response flaws); you can't design an experiment to accommodate every individual.
 

·
Registered
Joined
·
457 Posts
Mostly agree with what you're saying, but I do disagree somewhat on some things. I've read Toole's book, as well as this thread, as well as the M2 vs Salon 2 thread. I still don't believe the research is sufficient to say that "the speaker that is preferred in mono will ALWAYS be preferred in 5.1, or 7.1, or 9.1". Like I said, I do think it's sufficient to say a softer statement like "the speaker that is preferred in mono will be preferred in 5.1, 7.1 or 9.1 most of the time". I have slight doubts that a mono vs 9.1 test would hold consistently with a shootout between something like a Danley SH50 and an Ohm Walsh. I'd love for my doubts to be proven wrong, though, and I mean that seriously.
I believe Toole has said that directivity matters less in a multi-channel setup, if all channels are driven simultaneously. I think the explanation was that the direct sound from the additional surround channels will provide the spaciousness and dominate the reflected sound of the other speakers.

Someone correct me if I'm wrong.
 

·
Registered
Joined
·
470 Posts
It's not? If you take two speakers with similar on-axis frequency response with different dispersion patterns, the power response will not be the same. Since the power response governs what the speaker will actually sound like in a room, it's not irrelevant. When I look at the spinoramas between the M2 and Salon 2, the power response starts deviating. What is "similar"? To which frequency response were you referring?

That being said, Dr Toole clearly states that dispersion effects are a subjective preference, which certainly appears to be the case given the various opinions expressed by users on AVS over the years. If you have two different speakers that measure similarly well, but have different dispersion, then it becomes a subjective decision whether you prefer precision or a bigger soundstage. Of course, you can "narrow" the dispersion by applying sound absorption at the first reflection points. Would the two speakers produce different imaging in that case?

Importantly, dispersion effects can be heard in mono as well as stereo. There are multiple problems judging "imaging" between individual pairs of speakers -- are they positioned/aimed identically acoustically in the room (unlikely); is there a bad "draw" and one pair has some QC inconsistencies/variation that result in poorer imaging than another pair with better "matching"; is the listener positioned the same acoustically (unlikely with multiple listeners)?

No scientific experiments are going to determine the subjective preference of every single listener. We have all witnessed many people who make radical trade-offs to prioritize their personal preference (such as picking speakers that have a bigger soundstage but frequency response flaws); you can't design an experiment to accommodate every individual.
I'm not sure we're really understanding each other, as we seem to be somehow disagreeing while saying the exact same thing.

A spinorama is a much more comprehensive measurement than a basic on-axis frequency response. I agree with the point that if they spin the same, they will sound basically the same. The Salon 2 and M2 have different spins, so it's no surprise that they don't sound the same. The Salon 2 puts a higher percentage of its overall energy off axis, hence the difference in DI.
 

·
Registered
Joined
·
470 Posts
I believe Toole has said that directivity matters less in a multi-channel setup, if all channels are driven simultaneously. I think the explanation was that the direct sound from the additional surround channels will provide the spaciousness and dominate the reflected sound of the other speakers.

Someone correct me if I'm wrong.
That's been my finding as well. I've found that I prefer wider dispersion, and that preference grows stronger and stronger as the number of channels decreases.
 

·
Registered
Joined
·
1,882 Posts
We hear the instruments on the left side of the stage as being on the left (center is center, right is right and there is an obvious difference between left-center, center and right-center). Stereo imaging is a term that was used by audio/recording professionals decades ago when they went from mono to stereo. It is not just a "made up audiophile" term.
Symphony is probably a good exception that proves the rule, but as you said it is dependent on the mic technique. I don't listen to a whole lot of symphonic stuff, but when I do, I don't recall much in the way of stereo imaging. Most of what people listen to are studio recordings though, where there isn't a stage at all and the effects are things like drums panning to each speaker or vocals alternating between speakers; it sounds cool sometimes but isn't realistic. Either way, all of this discussion relates to what is in the recording and has nothing to do with the speakers playing it. Each individual speaker may have its own unique signal to create all sorts of effects, imaging, etc., but all that matters is how well the speaker plays each signal, unless there is something I'm missing. This is likely why the decades of double-blind tests have never had a speaker win in mono but lose in stereo, or vice versa.

I believe Toole has said that directivity matters less in a multi-channel setup, if all channels are driven simultaneously. I think the explanation was that the direct sound from the additional surround channels will provide the spaciousness and dominate the reflected sound of the other speakers.

Someone correct me if I'm wrong.
I believe I've heard that too and it makes sense. A single speaker in mono is going to have the toughest time filling a room with regard to its dispersion pattern, but as you add more speakers that becomes less of a problem.

I'm not sure we're really understanding each other, as we seem to be somehow disagreeing while saying the exact same thing.

A spinorama is a much more comprehensive measurement than a basic on-axis frequency response. My point was that if they spin the same, they will sound basically the same. The Salon 2 and M2 have different spins, so it's no surprise that they don't sound the same. The Salon 2 puts a higher percentage of its overall energy off to the sides, hence the difference in DI.
I believe what he was saying is that earlier you said 2 speakers can have similar frequency response but different directivity. If 2 speakers have quite different directivity, then by definition they will have a different frequency response in the early reflection and sound power curves.
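To put a rough number on that relationship: the estimated in-room curve is basically a weighted blend of the spin curves, so if the early reflections and sound power differ, the in-room result has to differ too, even with identical on-axis response. A minimal sketch, assuming the commonly cited 12/44/44 CTA-2034-style weighting (some tools combine the curves in the pressure domain instead of dB, and the numbers below are made up):

```python
def estimated_in_room(lw_db, er_db, sp_db):
    # Weighted blend of spinorama curves into an "estimated in-room response".
    # The 12/44/44 split is the commonly cited CTA-2034-style weighting; some
    # tools do the combination in the pressure domain rather than dB, so treat
    # this as illustrative rather than the official recipe.
    return 0.12 * lw_db + 0.44 * er_db + 0.44 * sp_db

# Made-up single-frequency example: identical listening-window level,
# different directivity (different early-reflection and sound-power levels).
narrow = estimated_in_room(lw_db=85.0, er_db=80.0, sp_db=76.0)  # higher-DI speaker
wide   = estimated_in_room(lw_db=85.0, er_db=83.0, sp_db=81.0)  # lower-DI speaker
print(round(narrow, 1), round(wide, 1))  # 78.8 vs 82.4: same on-axis, different in-room
```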
 

·
Registered
Joined
·
470 Posts
I believe what he was saying was earlier you said that 2 speakers can have similar frequency response but different directivity. If 2 speakers have quite different directivity then by definition they will have a different frequency response in the early reflection and sound power curves.
By frequency response, I mean on-axis frequency response. Directivity comes into play when you start measuring at various angles off axis and comparing those to the on-axis frequency response. My point was that 2 speakers can have the same on-axis frequency response, yet sound very different depending on how much energy they put off axis (which would show up in the DI).
 

·
Registered
Joined
·
28,977 Posts
I still don't believe the research is sufficient to say that "the speaker that is preferred in mono will ALWAYS be preferred in 5.1, or 7.1, or 9.1".
Again, no one is talking in absolutes. Harman has done enough tests with enough speakers that the results have pointed them to testing in mono. Waiting for someone to produce similar data that demonstrates enough of a difference in ranking to justify testing in stereo or surround.
 

·
Registered
Joined
·
470 Posts
Again, no one is talking in absolutes. Harman has done enough tests with enough speakers that the results have pointed them to testing in mono. Waiting for someone to produce similar data that demonstrates enough of a difference in ranking to justify testing in stereo or surround.
When you say it's "useless" to compare in stereo, that to me means that there is 0 use to testing in stereo, and that to me is an absolute.

I agree with Harman's decision to test in mono over stereo. I think it leads to more meaningful results.
 

·
Registered
Joined
·
568 Posts
When you say it's "useless" to compare in stereo, that to me means that there is 0 use to testing in stereo, and that to me is an absolute.

I agree with Harman's decision to test in mono over stereo. I think it leads to more meaningful results.
First off, I just want to say that I think your posts have been a model of reason, balance, and civility. That's not saying you're right and everyone else is wrong, because this is a super complex topic. I for one think this thread has been very productive. I'm not sure where the truth lies, but I suspect the program material plays an important role in the Harman tests. If a speaker is well engineered and neutral, I don't have much doubt that it will be preferred both in mono and stereo if the source material is studio recordings of a popular nature. I'm still not convinced the same would hold if high-quality recordings of orchestras and live jazz were the basis for evaluation.

I would love to see my BMR go through the Harman listening protocol, although I doubt that will ever happen. I don't know whether there would be preference reversals in the rankings, but I'm pretty sure the BMR would do better in stereo than mono. The BMR is a fairly extreme case, however, since the main basis for its being is very broad dispersion.
 

·
Registered
Joined
·
28,977 Posts
When you say it's "useless" to compare in stereo, that to me means that there is 0 use to testing in stereo, and that to me is an absolute.
Except I didn't say that, especially the part in quotes. I merely pointed out why Harman tests in mono. In fact, the very first sentence in my reply to you acknowledged that there were differences between stereo and mono tests, just not significantly different. Hardly absolutist.
Stereo tests didn't produce significantly different results than mono tests.
 

·
Registered
Joined
·
470 Posts
Except I didn't say that, especially the part in quotes. I merely pointed out why Harman tests in mono. In fact, the very first sentence in my reply to you acknowledged that there were differences between stereo and mono tests, just not significantly different. Hardly absolutist.
I misread you then, I apologize. I thought I saw someone say something like "Harman's mono vs stereo...therefore it's useless to waste time". Dang, I could have sworn you used that word, my bad.
 

·
Registered
Joined
·
470 Posts
Also, I want to clarify. This discussion came from another thread where an individual user is comparing multiple speakers in his own home.

My opinion on the mono vs stereo thing is quite different for individuals vs the aggregate.

Harman is trying to find the speaker that is preferred by the most people in the most settings. They have no idea what the end user's room will look like, nor do they have any idea how much flexibility the end user will have in positioning the speakers. Their research has shown them that mono preference is consistent with stereo and multichannel preference, and that mono testing is quicker and more repeatable, and that's why they test in mono. I completely agree with that logic.

My opinion for an individual user testing multiple speakers in his/her room is different. For that situation, I tend to believe that you should conduct the test in a manner that is most similar to the way you will actually end up using the speakers. If you're gonna be listening in stereo, do the test in stereo. If you're gonna have freedom to move the speakers all about, then take time to find each speaker's best position, and compare them that way. If you're gonna be forced to put them up against the wall, then test them up against the wall. If you're gonna be using subs, run the tests crossed over to subs. If you're gonna be using EQ, then run the test with EQ. I would use mono testing to help hear differences when I'm having trouble in stereo.
 

·
Registered
Joined
·
9,748 Posts
No one is going to be able to listen to all of the many speakers on the market in their own homes. Everyone is going to have to rely on factors other than their own ears just to narrow down the number of different speakers they might realistically audition in their own homes. So there will always be a factor of "the one I didn't hear that got away might have been better."
 

·
Registered
Joined
·
470 Posts
No one is going to be able to listen to all of the many speakers on the market in their own homes. Everyone is going to have to rely on factors other than their own ears just to narrow down the number of different speakers they might realistically audition in their own homes. So there will always be a factor of "the one I didn't hear that got away might have been better."
I agree. This is where measurements become so important, imo.
 

·
Registered
Joined
·
2,865 Posts
I agree. This is where measurements become so important, imo.
Well, IMO, measurements are "all" that is important because with sufficient measurements (of the speaker, the room, and possibly even the listener's ear) we can (eventually, if not today) simulate the entire signal chain and predict the outcome. I'd much prefer this methodology to constantly swapping crap in and out of my room wondering whether there is something else out there that could do better. In fact, when something "better" came out, I would much prefer to toss it into my simulation and see if there was a "new optimum" I could achieve. Then I could decide if the cost to upgrade was worth the differences. My oh my, that would be a lovely day! Well, possibly only because my "hobby" is "listening to music," not "dorking around with my gear."
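At its core that kind of simulation is just convolution, so here's a toy sketch of the idea. The file names are placeholders, I'm assuming the `soundfile` and `scipy` libraries, all files share one sample rate, and a real auralization would need per-angle speaker data, listener HRTFs, and so on:

```python
import numpy as np
from scipy.signal import fftconvolve
import soundfile as sf  # assumed dependency; any WAV reader/writer would do

def mono(x):
    # Collapse a multi-channel file to mono so the convolutions line up.
    return x.mean(axis=1) if x.ndim > 1 else x

# Placeholder file names: dry program material, an anechoic speaker impulse
# response, and a measured room impulse response at the listening position.
music, fs = sf.read("dry_track.wav")
speaker_ir, _ = sf.read("speaker_anechoic_ir.wav")
room_ir, _ = sf.read("room_ir.wav")

# Chain the transfer functions: source -> speaker -> room -> listener.
at_speaker = fftconvolve(mono(music), mono(speaker_ir), mode="full")
at_listener = fftconvolve(at_speaker, mono(room_ir), mode="full")

# Normalize to avoid clipping and write out the preview.
at_listener /= np.max(np.abs(at_listener)) + 1e-12
sf.write("simulated_in_room.wav", at_listener, fs)
```

Crude as it is, swapping in a different speaker impulse response and re-running is basically the "toss it into my simulation" workflow described above.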
 