

·
Registered
Joined
·
8,986 Posts
I had a Phase 400 back in the 70's and never had a problem with it. I had the thermal cutout shut it down several times playing Also Sprach Zarathustra into Infinity 2000As, though. I don't know what would happen if you tried that with one of today's receivers.

Cheers,
OldMovieNut

For whatever reason, the 400 was nowhere near the problem that the 700 was.
 

·
Registered
Joined
·
8,986 Posts
It looks like you're measuring both speakers simultaneously. Measure them one at a time, otherwise the data will be confused. Post those results.

Cheers,
OldMovieNut
I measure from the MLP. I don't know of any other way that would mean anything for in-room measurements of dipoles.
 

·
Registered
Joined
·
72 Posts
One problem that would fit into your closed-system argument would be overfitting the data. This would be reflected in the number of degrees of freedom remaining. Probably this is reported in the papers, but I don't know that. Otherwise, I don't think it is necessary to test additional speakers to verify the model. There won't be a future change in people's hearing. Of course, the model is bounded by the types of variation in speaker response present in the initial data set, so I would propose that it will remain predictive as long as an untested speaker falls within those bounds. A case where a previously fitted model can become less predictive would be stock market trends, where new data coming in may actually be influenced by investors' knowledge of the previous analysis.
I've mentioned previously in this thread that the low number of features used is promising. However, that is not sufficient to prove they haven't overfit the data. In this case, the data is dozens of speakers, not tens of thousands.
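
To make that concrete, here's a minimal sketch of the kind of check that guards against it - leave-one-out cross-validation of a few-feature linear model. The numbers are made-up placeholders, not the Harman data:

```python
# Minimal sketch (placeholder data, not the Harman measurements) of a check
# against overfitting when the sample is only dozens of speakers:
# leave-one-out cross-validation of a few-feature linear preference model.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
n_speakers, n_features = 70, 4                      # illustrative sizes
X = rng.normal(size=(n_speakers, n_features))       # placeholder feature scores
y = X @ np.array([2.0, 1.5, 1.0, 3.0]) + rng.normal(scale=1.0, size=n_speakers)  # placeholder ratings

model = LinearRegression()
r2_in_sample = model.fit(X, y).score(X, y)                      # optimistic: same data fit and scored
y_held_out = cross_val_predict(model, X, y, cv=LeaveOneOut())   # each speaker predicted without itself
r2_held_out = r2_score(y, y_held_out)                           # honest estimate of predictive power

print(f"in-sample R^2 = {r2_in_sample:.2f}, leave-one-out R^2 = {r2_held_out:.2f}")
# A large gap between the two numbers is the overfitting warning sign,
# even when the feature count is small.
```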

One specific methodological question I would have: what about removing the bass component with a high-pass filter and redoing the double-blind tests and the resulting fitting? Since deep, well-behaved bass is obviously important to the subjective listener, yet people will generally have subwoofers, removing it would improve the precision of fitting the other parameters.
The practical issue with that is putting the filtered speakers on a level playing field for the blind testing. There's been a lot of discussion about this, because they are testing speakers in a way that is not always the intended use: running a speaker full range when it is expected to be operated with a subwoofer supplementing the lows. I certainly agree that it's not ideal. However, the best way to get around it remains elusive to this point.

Of course, I would want more disclosure of what comparison speakers were tested (for Harman double blind tests) and ideally I would want to be able to hypothesize speaker improvement ideas (or different curves) and be able to test them, but Harman doesn't owe us this. This would be more of a job for an independent organization, like Floyd Toole's prior research job.
Agreed. There are a lot of cool engineering ideas that aren't well represented in commercial speakers, like dipoles and line arrays, and leaving them out of the testing is a rather significant omission in my eyes.
 

·
Registered
Joined
·
683 Posts
...
The practical issue with that is putting the filtered speakers on a level playing field for the blind testing. There's been a lot of discussion about this, because they are testing speakers in a way that is not always the intended use: running a speaker full range when it is expected to be operated with a subwoofer supplementing the lows. I certainly agree that it's not ideal. However, the best way to get around it remains elusive to this point. ...
I believe Dr. Toole posted something earlier about how the model matches the data in 80+ percent of cases and 99 percent of cases when low bass is taken out of the equation, implying to me that they have run double-blind tests where the low bass was filtered out.

I requested copies of the research papers related to this model via ResearchNet. When/if I get them, I'll let you know.
 

·
Registered
Joined
·
8,937 Posts
One problem that would fit into your closed-system argument would be overfitting the data. This would be reflected in the number of degrees of freedom remaining. Probably this is reported in the papers, but I don't know that. Otherwise, I don't think it is necessary to test additional speakers to verify the model. There won't be a future change in people's hearing. Of course, the model is bounded by the types of variation in speaker response present in the initial data set, so I would propose that it will remain predictive as long as an untested speaker falls within those bounds. A case where a previously fitted model can become less predictive would be stock market trends, where new data coming in may actually be influenced by investors' knowledge of the previous analysis.

One specific methodological question I would have: what about removing the bass component with a high-pass filter and redoing the double-blind tests and the resulting fitting? Since deep, well-behaved bass is obviously important to the subjective listener, yet people will generally have subwoofers, removing it would improve the precision of fitting the other parameters.

Of course, I would want more disclosure of what comparison speakers were tested (for Harman double blind tests) and ideally I would want to be able to hypothesize speaker improvement ideas (or different curves) and be able to test them, but Harman doesn't owe us this. This would be more of a job for an independent organization, like Floyd Toole's prior research job.
I've mentioned previously in this thread that the low number of features used is promising. However, that is not sufficient to prove they haven't overfit the data. In this case, the data is dozens of speakers, not tens of thousands.



The practical issue with that is putting the filtered speakers on a level playing field for the blind testing. There's been a lot of discussion about this, because they are testing speakers in a way that is not always the intended use: running a speaker full range when it is expected to be operated with a subwoofer supplementing the lows. I certainly agree that it's not ideal. However, the best way to get around it remains elusive to this point.



Agreed. There are a lot of cool engineering ideas that aren't well represented in commercial speakers, like dipoles and line arrays, and leaving them out of the testing is a rather significant omission in my eyes.
Have either of you read Dr. Toole's book or the research papers referenced in it? Or Dr. Toole's prior posts in this thread?

I enjoy discussing problems and issues, but not when they seem disconnected from existing research. For instance, testing has been done using high pass filters to eliminate bass response as a variable in blind comparos. This is why Dr. Toole has stated multiple times that bass accounts for 30% of preference and that the correlation to the spinorama data is somewhere around 99% when speakers are compared with similar bass extension. There is nothing "elusive" about it as far as I can determine.
 

·
Registered
Joined
·
72 Posts
Have either of you read Dr. Toole's book or the research papers referenced in it? Or Dr. Toole's prior posts in this thread?

I enjoy discussing problems and issues, but not when they seem disconnected from existing research. For instance, testing has been done using high pass filters to eliminate bass response as a variable in blind comparos. This is why Dr. Toole has stated multiple times that bass accounts for 30% of preference and that the correlation to the spinorama data is somewhere around 99% when speakers are compared with similar bass extension. There is nothing "elusive" about it as far as I can determine.
Yes, I own and have read the book. I have read the patent. I have requested the papers. From what I can tell, they have derived an equation to predict preferences, and it combines scores from many features. The bass extension score probably has a weight of around 30%. That's not the same thing as actually doing the tests with filters in place.
 

·
Registered
Joined
·
683 Posts
How do you think they determined the impact of bass extension on preference?
I'm still waiting on copies of the papers, but now that I think about it, the impact of bass extension would have been calculated via linear regression regardless of whether or not they tested it specifically. Of course this doesn't mean they didn't test it, but they might not have.
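
For illustration only (placeholder numbers, not the actual Olive data), this is roughly how a factor's weight falls out of a regression: fit the ratings against standardized feature scores and take each coefficient's share of the total.

```python
# Sketch with placeholder data: a factor's contribution emerges from the
# regression coefficients even without a dedicated filtered-bass test.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
features = ["smoothness", "flatness", "directivity", "lf_extension"]   # illustrative names
X = rng.normal(size=(70, len(features)))             # placeholder feature scores for 70 speakers
y = X @ np.array([1.0, 1.2, 0.8, 1.8]) + rng.normal(scale=0.5, size=70)  # placeholder ratings

Xz = StandardScaler().fit_transform(X)               # put all features on a common scale
coef = LinearRegression().fit(Xz, y).coef_
share = np.abs(coef) / np.abs(coef).sum()            # each factor's share of the model weighting

for name, s in zip(features, share):
    print(f"{name:>12s}: {100 * s:4.1f}% of the weighting")
```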
 

·
Member
Joined
·
844 Posts
How do you think they determined the impact of bass extension on preference?
You need to read the Olive papers, or at least the summary in my book.

All listening was done in the same room using our positional substitution apparatus - active loudspeakers were all moved to the same location in about 3 seconds. The single listener was always in the same location. The room was included, but it was a constant factor for all loudspeakers. If bass is a factor in these experiments one can only believe that in the real world of speaker evaluations - different rooms, different times, different setups - it is even more consequential. Hence the need for in-situ low-frequency measurements, equalization and if possible multiple subs to deliver bass with somewhat predictable quality.

All anechoic data were available - 70 curves (20 Hz to 20 kHz) at 2 m, plus the spinorama-processed version.

There were two experiments.

One experiment used 13 bookshelf speakers, having somewhat similar low-frequency performances. The factors "low-frequency extension" and "low-frequency quality" (uniformity of frequency response) both contributed 25% of the overall factor weighting.

A second experiment used 70 loudspeakers of all sizes and prices accumulated from competitive analysis tests over many months. These differed greatly in bass extension and this factor, probably because it was subjectively so obvious, alone accounted for 30.5% of the factor weighting.

The cost of accumulating these data was never assessed, but it was considerable, not counting the immense cost of the anechoic measurement and subjective evaluation facilities. It would be great if someone were to take the exercise further, but as I said in an earlier post, what has been done has served its purpose: accurate and comprehensive anechoic measurements were shown to be reliable guides to perceived sound quality - in a room! Designing loudspeakers to exhibit neutral sound quality is now a science-based engineering exercise.
 

·
Premium Member
Joined
·
312 Posts
Have either of you read Dr. Toole's book or the research papers referenced in it? Or Dr. Toole's prior posts in this thread?

I enjoy discussing problems and issues, but not when they seem disconnected from existing research. For instance, testing has been done using high pass filters to eliminate bass response as a variable in blind comparos. This is why Dr. Toole has stated multiple times that bass accounts for 30% of preference and that the correlation to the spinorama data is somewhere around 99% when speakers are compared with similar bass extension. There is nothing "elusive" about it as far as I can determine.
I read and enjoyed his book. And my comment was not criticism at all. Obviously the bass extension was considered.
What the statement above means is that when bass is considered as part of the model, it accounts for 30% of the model. Now, knowing that it is important but also well modeled by the measured data, take it out. Then, any less well known behavior would be more clearly targeted. I suspect that something like the 99% fit could be obtained, but over all speakers, not just ones with similar bass extension. The question would be whether any other parameters start to appear to be important beyond what was already identified as statistically significant.
 

·
Member
Joined
·
844 Posts
I read and enjoyed his book. And my comment was not criticism at all. Obviously the bass extension was considered.
What the statement above means is that when bass is considered as part of the model, it accounts for 30% of the model. Now, knowing that it is important but also well modeled by the measured data, take it out. Then, any less well known behavior would be more clearly targeted. I suspect that something like the 99% fit could be obtained, but over all speakers, not just ones with similar bass extension. The question would be whether any other parameters start to appear to be important beyond what was already identified as statistically significant.
This is a good observation - to render all bass the same, or eliminate all bass below some frequency. Then focus on what is left.

First, because transducers are minimum-phase devices, we have already included the linear distortions - amplitude and phase, which is to say frequency- and time-domain performance. Directivity vs frequency remains, but this will be another situation-dependent variable - and it may also evoke some personal preference biases.
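
A generic numerical illustration of the minimum-phase point (not from the papers): the phase response of a minimum-phase system can be reconstructed from its magnitude response alone, here via the standard real-cepstrum method.

```python
# Generic illustration: for a minimum-phase system, phase is implied by magnitude.
import numpy as np
from scipy.signal import freqz

n_fft = 4096
b, a = [1.0], [1.0, -0.9]                      # a known minimum-phase system (pole inside the unit circle)
w, H = freqz(b, a, worN=n_fft, whole=True)     # response around the full unit circle

log_mag = np.log(np.abs(H))
cep = np.fft.ifft(log_mag).real                # real cepstrum of the magnitude
fold = np.zeros(n_fft)                         # fold it to get the causal (minimum-phase) cepstrum
fold[0] = 1.0
fold[1:n_fft // 2] = 2.0
fold[n_fft // 2] = 1.0
phase_from_mag = np.imag(np.fft.fft(cep * fold))

print(np.max(np.abs(phase_from_mag - np.angle(H))))   # ~0: magnitude alone determined the phase
```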

Beyond this there are non-linear, level dependent, distortions. Sadly, right now, there are no metrics that reliably correlate with audibility or annoyance. Only zero is reliable. So, more very basic psychoacoustic research is necessary before any comprehensive modeling can incorporate this factor.

Not my department - not now anyway :).
 

·
Registered
Joined
·
25 Posts
Folks need to remember that what Dr. Toole, Dr. Olive, and the other bright folks at Harman have come up with is not a law of acoustics but rather a theory of loudspeaker design. A theory is just a way of thinking about how the world works. The more closely your theory comports with observations of the world, the better it is. If your theory allows for prediction of future observations that are thereafter confirmed, it is even better.



That's not to say that theories cannot be later changed to provide even closer descriptions of reality. To date, even Einstein's thoughts known as "General Relativity" are only called a "Theory." A future "theory" may well provide additional answers.



Until such time, the current theory of General Relativity has proven itself to be and continues to be a hugely useful tool.



Similarly, what Harman, et al. have come up with is a hugely useful tool for speaker designers. Follow their guidance and most folks will agree that in a double blind listening evaluation your speakers sound good. The corollary of this is if you defy their guidance, in a double blind listening evaluation most folks may agree that other speakers sound better than yours.



All that is happening here is that Harman, et al., are raising the table stakes for deciding which speakers are in the running to be considered "good."


In barbecued pork spare rib competitions, if you do not peel off the membrane on the back of the ribs before preparing them, it does not matter how tasty your rub is. Your ribs will be shuttled away to the second round because you did not meet the foundational qualifications to be in the first round. There is more to a good rib than a tasty rub.


Similarly, there is more to a loudspeaker than an exotic driver, an over-damped enclosure, or fancy cables inside the box. First get the foundational preparations right and then tell us about your tweaks.
 

·
Registered
Joined
·
6,549 Posts
Folks need to remember that what Dr. Toole, Dr. Olive, and the other bright folks at Harman have come up with is not a law of acoustics but rather a theory of loudspeaker design. A theory is just a way of thinking about how the world works. The more closely your theory comports with observations of the world, the better it is. If your theory allows for prediction of future observations that are thereafter confirmed, it is even better.

That's not to say that theories cannot be later changed to provide even closer descriptions of reality. To date, even Einstein's thoughts known as "General Relativity" are only called a "Theory." A future "theory" may well provide additional answers.

Until such time, the current theory of General Relativity has proven itself to be and continues to be a hugely useful tool.
And it will still be called a theory - contrary to what some people believe, theories do not "graduate to become laws" once they are "proven". Gravity, for example, is described by a law: we can predict where it will act and calculate its strength, yet we still don't completely understand it. Rather, the various physical laws, observations, facts, etc. are contained within the theory that explains the phenomenon.

Similarly, what Harman, et al. have come up with is a hugely useful tool for speaker designers. Follow their guidance and most folks will agree that in a double blind listening evaluation your speakers sound good. The corollary of this is if you defy their guidance, in a double blind listening evaluation most folks may agree that other speakers sound better than yours.

All that is happening here is that Harman, et al., are raising the table stakes for deciding which speakers are in the running to be considered "good."
+1.
 

·
Registered
Joined
·
124 Posts
So a question about ports. When placing a speaker close to a wall (rear and side), is front ported better than rear ported, or is the answer not that simple?
 

·
Registered
Joined
·
359 Posts
So a question about ports. When placing a speaker close to a wall (rear and side), is front ported better than rear ported, or is the answer not that simple?
There are also bottom-ported speakers.
 

·
Member
Joined
·
844 Posts
So a question about ports. When placing a speaker close to a wall (rear and side), is front ported better than rear ported, or is the answer not that simple?
The port is part of a tuned resonance at very low frequencies. The mass of air in the port is a key factor, so anything you do that changes this will detune the speaker as designed. So, to answer your question, if the port opening is too close to a wall some of the air in the narrow space will be included in the effective mass of the port and mistuning can occur. Normally, a space of a few inches is sufficient to avoid this, a foot would be totally safe.

Bottom ports are designed to function with the known space below the speaker.
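
For a back-of-the-envelope feel for the numbers (illustrative values, not any particular speaker), the port behaves as a Helmholtz resonance, and anything that adds effective length - or effective air mass - to the port lowers the tuning:

```python
# Back-of-the-envelope sketch with illustrative values: Helmholtz tuning of a
# ported box, and how extra effective port length (e.g. air loading from a wall
# very close to the port opening) lowers the tuning frequency.
import math

c = 343.0                        # speed of sound, m/s
V = 0.030                        # box volume, m^3 (about 30 litres)
r = 0.025                        # port radius, m
L = 0.15                         # physical port length, m
S = math.pi * r ** 2             # port cross-sectional area, m^2

def tuning(extra_length=0.0):
    # 1.7 * r is a commonly used combined end-correction approximation;
    # extra_length stands in for additional air mass coupled in by a nearby boundary.
    L_eff = L + 1.7 * r + extra_length
    return (c / (2 * math.pi)) * math.sqrt(S / (V * L_eff))

print(f"port in free space:          {tuning(0.0):5.1f} Hz")
print(f"+5 cm effective (near wall): {tuning(0.05):5.1f} Hz")
```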
 

·
Registered
Joined
·
124 Posts
The port is part of a tuned resonance at very low frequencies. The mass of air in the port is a key factor, so anything you do that changes this will detune the speaker as designed. So, to answer your question, if the port opening is too close to a wall some of the air in the narrow space will be included in the effective mass of the port and mistuning can occur. Normally, a space of a few inches is sufficient to avoid this, a foot would be totally safe.



Bottom ports are designed to function with the known space below the speaker.
OK, so as long as I don't have it right up against the wall I should be fine with a rear port. Thanks.
 

·
Registered
Joined
·
274 Posts
So a question about ports. When placing a speaker close to a wall (rear and side), is front ported better than rear ported, or is the answer not that simple?
As Dr. Toole said, a few inches will generally do the trick, and a foot is more than enough. The actual minimum distance varies widely with the speaker, though.

It's actually not too hard to hear the effects of this. Sit at the listening position and have someone else hold something flat and rigid to serve as a "wall" (a small piece of plywood or a clipboard or something). Play something with a lot of content near the port tuning frequency, turn up the volume so the speaker is working hard, and have the other person gradually move the "wall" closer to the port. When it gets close enough, the airflow is affected and the response of the speaker changes - this is usually quite audible, and it's kind of a fun exercise. With some speakers you can hear it quite clearly even 6-7" away from the port; with others you won't hear much change until the "wall" is almost in contact with the speaker.
 

·
Registered
Joined
·
683 Posts
So a question about ports. When placing a speaker close to a wall (rear and side), is front ported better than rear ported, or is the answer not that simple?
I assume that rear-ported is better than front-ported in general. Presumably that's why all (?) of Harman's speakers are rear-ported now. The Infinity Primus line was front-ported, but it was replaced with the Reference line, which is rear-ported. When people complain about front-ported speakers, the complaint is usually about leakage and phase cancellation. I know that the bigger Primus bookshelf speakers had some phase issues at around 700 Hz. I assume that sort of thing is lessened with a rear port.
 

·
Registered
Joined
·
108 Posts
Beyond this there are non-linear, level dependent, distortions. Sadly, right now, there are no metrics that reliably correlate with audibility or annoyance. Only zero is reliable. So, more very basic psychoacoustic research is necessary before any comprehensive modeling can incorporate this factor.
What do you think of the Rnonlin metric? That does look to predict subjective ratings.

"Predicting the Perceived Quality of Nonlinearly Distorted Music and Speech Signals"
Tan, Chin-Tuan; Moore, Brian C. J.; Zacharov, Nick; Mattila, Ville-Veikko
JAES 2004
 