Do Audio Measurements Correlate With Sound Quality? - Page 8 - AVS Forum
View Poll Results: Do Audio Measurements Correlate With Sound Quality?
Yes, they are strongly correlated: 102 (35.66%)
Yes, but they are only weakly correlated: 51 (17.83%)
No, they are not correlated at all: 13 (4.55%)
It depends on the type of product, testing, and environment: 120 (41.96%)
Voters: 286

post #211 of 219, 12-03-2013, 02:02 PM
CruelInventions (AVS Special Member)
You crane your neck for car accidents too, don't you? :P

Mourning the disappearing usage of the -ly suffix. Words being cut off before they've had a chance to fully form, left incomplete, with their shoelaces untied and their zippers undone. If I quote your post (or post in your thread) without comment, please check your zipper.
 
post #212 of 219, 12-03-2013, 03:12 PM
Cyrano
Is that how this thread looks to you? An accident? ;)
I do look to see if people need help, but I know what you mean. It's better if other drivers mostly mind their own business.

As for this thread, I have read lots of interesting thoughts. It just seems as if it has become a battle of words more than ideas.
I find myself agreeing with both sides at times.
But it is entertaining. AVS is cool that way.
post #213 of 219, 12-03-2013, 05:29 PM
palmfish (Advanced Member)
The primary reason I don't believe extended listening is a reliable measure is that the brain adapts to what it is hearing. If you are used to what your warm, lush speakers sound like, then a more neutral pair of speakers will sound intolerably bright to you. But enjoy those neutral speakers for a few days or weeks, and eventually you'll notice that they aren't as bright as they once were. If you then switch back to your old warm, lush speakers, they will sound veiled and muffled to you.

The ear/brain has a horrible auditory memory, and a listener's personal sound quality "benchmark" is a constantly moving target.
post #214 of 219, 12-03-2013, 08:14 PM
Cyrano
Quote:
Originally Posted by palmfish

The primary reason I don't believe extended listening is a reliable measure is that the brain adapts to what it is hearing. [...]

Yes, I think you are correct. I do know that it is easier to tell when one set of speakers is better than another when specific passages of music are obviously cleaner.
But speakers do seem to be a very subjective area.
I have speakers that I've found in thrift stores that I love to listen to: pairs of AR17s, DYNACO A25s, KEF 120s. The pairs are all a little different from each other, but each pair is great for listening to music. (I wish I had three pairs of one of them for HT use.)

Your thought that our mind sets a benchmark while we listen is interesting to consider.
post #215 of 219, 12-04-2013, 09:48 AM
nvidio (Member)
Quote:
Originally Posted by palmfish

The primary reason I don't believe extended listening is a reliable measure is that the brain adapts to what it is hearing. [...]
You mean extended sighted listening is in fact reliable because it allows you to look at the frequency response curve? :D
post #216 of 219, 12-05-2013, 05:12 PM
ArtNoxon (Newbie)

Hello,

 

This is a very interesting field. My work has been to upgrade the performance of the last link in the audio chain, the listening room. I found out nearly 30 years ago that improvements in the frequency response curve did not correspond to the thrill our customers got from putting TubeTraps in the front corners of their listening rooms. The musical improvements they described had nothing to do with the minor shifts in sound level across the frequency response curve. I wrote and presented a white paper at the 2011 Cal Audio Show covering the evolution of my experience with this question and the conclusion I reached. It can be read at: http://www.acousticsciences.com/how-tubetraps-opened-whole-new-realm-precision-performance-audio-playback-systems

 

To summarize, I found a strong correlation between test results and listening impressions only when I used MTF (Modulation Transfer Function) type tests. An MTF test is a test of dynamic clarity. In photography, MTF is a test of visual clarity, or image sharpness: clarity in the rapid variation of tonal brightness over space. In audio, MTF is a test of sonic clarity, or dynamic sharpness: clarity in the rapid variation of tonal loudness over time.

 

The ASC version of the MTF test is the MATT (Musical Articulation Test Tones), a gated frequency sweep between 28 and 780 Hz. The test signal is available as Track 19 of Stereophile Test CD II. The on/off gating rate is 8 Hz: a 1/16th-second tone burst followed by 1/16th second of silence, and so on, up and back down the musical scale. For more info, including a tutorial and a free download of the test, see: http://www.acousticsciences.com/matt
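For anyone who wants to experiment, a MATT-style gated burst sweep is easy to approximate in software. The Python sketch below is illustrative only: it follows the 28-780 Hz range and 8 Hz gating described above, but it is not ASC's actual test signal.

[code]
# Sketch of a MATT-style gated tone-burst sweep (illustrative, not ASC's signal):
# semitone steps from 28 Hz to 780 Hz and back, each gated at 8 Hz
# (1/16 s tone burst followed by 1/16 s of silence).
import numpy as np

def matt_style_signal(f_lo=28.0, f_hi=780.0, fs=48000, burst_s=1/16, gap_s=1/16):
    freqs = []
    f = f_lo
    while f <= f_hi:
        freqs.append(f)
        f *= 2 ** (1 / 12)                          # one semitone per step
    freqs += freqs[-2::-1]                          # back down the scale
    chunks = []
    for f in freqs:
        t = np.arange(int(fs * burst_s)) / fs
        chunks.append(np.sin(2 * np.pi * f * t))    # tone burst
        chunks.append(np.zeros(int(fs * gap_s)))    # silent gap
    return np.concatenate(chunks)

test_signal = matt_style_signal()   # play this through the system under test
[/code]

Listening for which bursts smear into the silent gaps is the articulation test in a nutshell; a measurement version simply reports how much the gaps fill in.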

 

A very acceptable acoustic upgrade might only produce 1/2 dB changes in some of the peaks in the frequency response curve, while at the same time it produces a +6 dB improvement in the Modulation Transfer Function or MATT test.  

 

The MTF test results correlate directly with musical clarity, just as they do with speech intelligibility. The B&K RASTI system uses two octave-wide noise bands and a variety of gating (modulation) rates to replicate the salient features of speech. It measures the MTF of these gated sounds at the listening position and produces a rating that correlates extremely well with measured speech intelligibility, which is essentially a measurement of sound quality for speech.
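As a rough illustration of how a modulation-depth measurement becomes an intelligibility-style rating, here is a simplified, STI-like calculation in Python; this is a sketch of the general math, not B&K's implementation.

[code]
# Simplified STI-style index (illustrative): each measured modulation-depth
# ratio m (0..1) is converted to an apparent signal-to-noise ratio, clipped
# to +/-15 dB, and normalized to a 0..1 transmission index.
import numpy as np

def transmission_index(m):
    m = np.clip(np.asarray(m, dtype=float), 1e-6, 1 - 1e-6)
    snr = 10 * np.log10(m / (1 - m))         # apparent SNR in dB
    snr = np.clip(snr, -15.0, 15.0)          # limit to the useful range
    return np.mean((snr + 15.0) / 30.0)      # 0 = unintelligible, 1 = perfect

print(transmission_index([0.9, 0.7, 0.5]))   # deeper preserved modulation -> higher score
[/code]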

 

Adding acoustic treatment to a room generally improves the EFT (Energy Frequency Time) decay curves by making the RT60 decay rates more uniform. In general, we seem to prefer having the full spectrum of sound drop into the background noise floor at about the same time, with the bass lingering a little longer than the midrange and upper treble. We especially don't like to see low-frequency ringers, typically slowly decaying room modes.
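For reference, the RT60 number itself is usually estimated from a measured impulse response by Schroeder backward integration; here is a minimal Python sketch, assuming you already have the impulse response as an array.

[code]
# Minimal RT60 estimate from an impulse response via Schroeder backward
# integration (sketch only; real tools band-filter first and report T20/T30).
import numpy as np

def rt60_from_ir(ir, fs):
    energy = np.cumsum(ir[::-1] ** 2)[::-1]        # Schroeder integral
    edc_db = 10 * np.log10(energy / energy[0])     # energy decay curve in dB
    t = np.arange(len(ir)) / fs
    fit = (edc_db <= -5) & (edc_db >= -35)         # fit the -5 to -35 dB region
    slope, _ = np.polyfit(t[fit], edc_db[fit], 1)  # decay rate in dB per second
    return -60.0 / slope                           # time to decay 60 dB
[/code]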

 

The problem with FFT testing is that although the printouts are very interesting to look at, they have very little meaning when it comes to translating the test data into some rating of musical satisfaction. When we listen to the sine sweep of an FFT test, we can't hear anything that tells us how good music is going to sound.

 

Angelo Farina, an acoustics professor at the University of Parma in Italy, initially took up the ASC-MATT test because of how directly it demonstrates to the listener the improvement in musical performance of a room that has had acoustic treatment added. He later wrote equations that transform traditional FFT measurements into ASC MATT-type response curves, so he could use the MATT format to demonstrate to the listener, quite literally, what the improvements in the room sounded like.

 

He has addressed this question formally with the AQT (Acoustic Quality Test); see http://pcfarina.eng.unipr.it/Public/Papers/153-AES110.PDF for an AES paper on the subject. What is so interesting is that he was able to transform the FFT test of a room into an MTF test of the same room. When he plays the FFT sine sweep, no one is impressed, but when he plays the MTF version of the same test data for the listener, everyone is impressed.
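The core of that transformation can be sketched very simply: once you have a measured impulse response, you can predict how the room smears a gated burst and how far the level falls in the silent gaps. The Python below is an illustration of the idea only, not Farina's actual AQT processing, and the test frequency and gating times are arbitrary choices.

[code]
# Sketch: predict how a room blurs a gated burst by convolving a burst train
# with a measured impulse response, then checking how far the level drops in
# the silent gap (0 dB would mean the gap is completely filled in).
import numpy as np

def gap_depth_db(ir, fs, freq=100.0, burst_s=1/16, gap_s=1/16):
    t = np.arange(int(fs * burst_s)) / fs
    cycle = np.concatenate([np.sin(2 * np.pi * freq * t),
                            np.zeros(int(fs * gap_s))])
    out = np.abs(np.convolve(np.tile(cycle, 8), ir))   # room-smeared burst train
    n = len(cycle)
    on = out[6 * n : 6 * n + int(fs * burst_s)]        # a steady-state burst
    off = out[6 * n + int(fs * burst_s) : 7 * n]       # the gap that follows it
    return 20 * np.log10((np.max(off) + 1e-12) / np.max(on))
[/code]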

 

Essentially what the listener hears with the ASC-MATT test is Musical Clarity, and what gets improved with the addition of acoustic treatment to a listening room is Musical Clarity. Musical Clarity is a well-defined term in the evaluation of concert halls. It is essentially a signal-to-noise ratio of the direct plus early-reflected energy to the late and reverberant energy in the room, and it describes the masking effect that takes place when late reflections are too loud.
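In concert-hall work that ratio is usually reported as C80, clarity computed from the impulse response with an 80 ms early/late split; a minimal sketch of the standard calculation:

[code]
# C80 "clarity" from an impulse response: energy arriving in the first 80 ms
# versus everything after (ISO 3382-style definition; no band filtering here).
import numpy as np

def c80(ir, fs):
    split = int(0.080 * fs)              # 80 ms early/late boundary
    early = np.sum(ir[:split] ** 2)
    late = np.sum(ir[split:] ** 2)
    return 10 * np.log10(early / late)   # dB; higher means clearer
[/code]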

 

Siegfried Linkwitz has also worked on a variation of the MATT test in his approach to evaluating room acoustics.

 

We see all sorts of response curves in speaker and room acoustic testing, but in the end, we are still left hanging with the unanswered question..."What does this mean to me?"  

 

I've found that with the MATT test, the audiophile listens to the test signal, hits pause, jumps up, pulls out an album, sets the needle down two-thirds of the way into track 3 and says, "Just listen. That blurred section of the MATT test is the same blur that has been bothering me for years. Here it comes..." The golden ear has already cataloged the defects in his room, but until the MATT test, the only way those room defects could be demonstrated was to play short sections of certain musical passages. The audiophile is ten times more impressed with improved musical clarity, less dynamic blur, than with any amount of uniform RT60s or a flat response curve.

 

In conclusion, MTF audio measurements directly correlate with audio quality, while EFT measurements do not, until they are transformed into the equivalent of an MTF measurement, whereupon they do.

 

Thank you, Art Noxon   

post #217 of 219, 12-06-2013, 04:49 AM
arnyk (AVS Addicted Member)
Quote:
Originally Posted by ArtNoxon

This is a very interesting field. My work has been to upgrade the performance of the last link in the audio chain, the listening room. [...] When we listen to the sine sweep of an FFT test, we can't hear anything that tells us how good music is going to sound.

Once upon a time there were a lot of published tone burst tests of loudspeakers, with the oft-criticized Julian Hirsch being one proponent who published many of them. However, they were very limited in scope, often being done at just one base frequency.

With the ascendancy of FFTs, swept-sine or chirp testing became something close to the be-all and end-all of acoustical audio testing. If you assume that loudspeakers are minimum-phase this makes some sense, and individual drivers tend to fit this model. As soon as you put loudspeakers into multi-way systems and/or step out into the room, standard chirp tests start losing relevance and diagnostic power.
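For context, the chirp method boils down to playing a sweep, recording the result, and deconvolving to recover the impulse response. Here is a minimal Python sketch, assuming you already have the excitation sweep and the recorded response as arrays; real measurement tools add windowing and better regularization.

[code]
# Minimal sketch of swept-sine (chirp) measurement: deconvolve the recorded
# response by the sweep in the frequency domain to recover the impulse response.
import numpy as np

def impulse_response(sweep, recorded, eps=1e-8):
    n = len(sweep) + len(recorded)
    S = np.fft.rfft(sweep, n)
    R = np.fft.rfft(recorded, n)
    H = R * np.conj(S) / (np.abs(S) ** 2 + eps)   # regularized deconvolution
    return np.fft.irfft(H, n)
[/code]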

Audio is first and foremost about speech articulation, and it is clear that MTF tests come several country miles closer to speech than chirps do. As has been pointed out many times, if you do a really good job on speech SNR, tonal accuracy, and articulation (in that order), music tends to fall into place.
post #218 of 219, 12-08-2013, 06:19 AM
SoNic67 (Advanced Member)
Measurements can tell A LOT about how the equipment will sound. BUT they need to be done or verified by a third party (some manufacturers like to embellish the datasheet) and need to be interpreted by somebody who knows more than '32 bits are more than 24 bits, therefore more is better'.
The interpreting factor is where measurements don't correlate. Some people like to think that they are good at reading data, but they have no formal training in the field. When their conclusions are proved wrong, they jump to blame the measurements.
For example, some might consider that the SNR of a product is all they need to make a judgment, and that definitely leads to wrong conclusions. Ignoring the THD+N number is the first mistake most beginners make, and many manufacturers have taken advantage of it: they either don't show the THD+N or they show it in a form that is harder to compare (say, % instead of dB).
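The percent-to-dB conversion is simple enough to do yourself (dB = 20 log10 of the ratio); for example:

[code]
# Convert a THD+N spec from percent to dB so figures quoted in different
# forms can be compared directly.
import math

def thdn_percent_to_db(pct):
    return 20 * math.log10(pct / 100.0)

print(thdn_percent_to_db(0.005))   # 0.005 %  -> about -86 dB
print(thdn_percent_to_db(1.0))     # 1 %      -> -40 dB
[/code]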
There are numerous other measurements that can PREDICT how a device will sound in a perfect room or with headphones.
But listening to it in a random room sometimes changes everything. Professionally, I have to use dedicated software to calculate speech intelligibility for different rooms (for mass notification); the results are sometimes surprising, depending on room size, shape, and speaker placement. Thinking that anybody can interpret those variables/measurements without a software tool is just unrealistic.
At home, I use headphones for critical listening because speakers in a room add so many variables.

Will measurements for a given product be 100% relevant? Of course not, because usually most/all the measurements are done with static signals, not with real music. But for a 70-80% evaluation, measurements are good - if they are interpreted by somebody educated in the field.
For me that is a strong correlation.
post #219 of 219, 12-09-2013, 05:55 AM
arnyk (AVS Addicted Member)
Quote:
Originally Posted by SoNic67

Measurements can tell A LOT about how the equipment will sound. [...] But for a 70-80% evaluation, measurements are good - if they are interpreted by somebody educated in the field. For me that is a strong correlation.

IME most published measurements don't correlate with audible differences because most products like AVRs and BD players perform so well that the audible differences are vanishingly small.

Published measurements on speakers show differences that can easily be audible, but some of them, such as efficiency, tend to be fudged a lot, as you suggest.