Originally Posted by Bigus
But you have said that is the only frequency value necessary.
I had. I was just commenting on you asking "what graph." Ethan posted such a graph on the first page;
As you see, it is not a single number. Since you asked for the procedure to be outlined, it is useful to describe the right process for obtaining that piece of data.
Indeed, lower frequencies are probably entirely useless, and even 500Hz is questionable, but for "general liveliness" I'll grant you that, and that is all you said was necessary.
It is what Dr. Toole, Olive, Bradley, Yang, etc. have said is sufficient. This is a quote from the latter two's research on speech: "It was desired to create test conditions with T60 values of 0.3, 0.6, 0.9, and 1.2 s, which were thought to correspond to the full range of likely conditions in typical elementary school classrooms. A T60 of 0.6 s is often thought to be near optimum and is referred to in the ANSI S12.60 classroom acoustics standard. A T60 of 0.3 s is representative of the lowest T60 values likely to be found in a normal classroom. T60 values of 0.9 and 1.2 s could occur in real classrooms but were expected to lead to increasingly less suitable conditions with lower speech intelligibility scores."
As long as experiments confirm and correlate listening preferences in this regard, the measure is a reliable one to use, especially in its coarse-grained application.
I'm curious as to the process you believe will get me from point A (any point A) to point B which is an optimized listening environment given whatever limitations are imposed (cost, geometry, ability to strip bare, etc.). If it confused you that I provided a hypothetical RT60 value, I'm sorry. If you really think the conditions as I described them are out in left field, however, I'd like to know why you think that, as I put thought into what I was saying to make sure the pieces fit together and were potentially representative of a real space.
Yes, it is an odd piece of data you put forward. Dr. Bradley performed a research project measuring noise level, transmission loss and, in our case, reverberation time as a function of frequency in 600 Canadian homes (he works for the NRC there). The median was 0.4 seconds and the standard deviation 0.1 seconds. Assuming a normal distribution, about 68% of homes fall in the range of 0.3 to 0.5 seconds, which means your hypothetical scenario at 0.3 seconds sits in roughly the bottom 16% of the homes surveyed. That is one reason I said the value you picked was "curious." The second reason is that, lacking late reflections, it is odd to suspect the room is at fault if you continue to have speech quality issues. What could cause a bit of that is if you had tried to absorb early reflections, but your scenario did not indicate that. So as you see, you did paint a rather odd scenario with that one number!
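To show where those percentages come from, here is a minimal sketch that evaluates the normal CDF at the survey's median and standard deviation. The 0.4 s median and 0.1 s deviation are from Bradley's survey as quoted above; the assumption that the homes are normally distributed is the simplification made here.

```python
# Sketch: where does an RT60 of 0.3 s fall in Bradley's survey distribution
# (median 0.4 s, standard deviation 0.1 s), assuming normality?
from math import erf, sqrt

def normal_cdf(x, mean, sd):
    """Cumulative probability of a normal distribution at x."""
    return 0.5 * (1.0 + erf((x - mean) / (sd * sqrt(2.0))))

mean, sd = 0.4, 0.1
within_one_sd = normal_cdf(0.5, mean, sd) - normal_cdf(0.3, mean, sd)
below_0_3 = normal_cdf(0.3, mean, sd)

print(f"Share of homes between 0.3 and 0.5 s: {within_one_sd:.0%}")  # ~68%
print(f"Share of homes at or below 0.3 s: {below_0_3:.0%}")          # ~16%
```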
I provided you with a general overview of just the sort of typical "well furnished" room you say can be great, with a reasonable RT60 value that you say is both in the expected range and consistent with the description of the room, and I am asking you to describe a similar process that, whatever the problems are, will identify them and derive appropriate solutions.
Well, at 0.3 seconds, you are definitely "well furnished."
So yes, if you are at 0.3 I would be very careful and not deploy any more absorption in the room. Take a look at Ethan's graph above: the "before" numbers are in the 0.8 second range, while the "after" numbers are in the 0.2 to 0.25 second range. I expect that room to be rather dead, which for his music mixing applications may be fine but is not recommended for home use. It is certainly not a good number for a living room where you want people to feel comfortable talking.
The "symptoms" could be anything. I said poor speech intelligibility, ringing bass (those often go hand in hand), soundstage collapsing to one side, etc. It could be HF harshness, amorphous image, lack of soundstage depth, or whatever subjective descriptors you wish to apply. Doesn't matter. You have a room and an RT60 value. How do I proceed?
So you didn't mean for me to address those specific issues? I am confused about what you are asking me to do. Is there something wrong with the sound in your room or not? If there is something wrong, what is it if it is not those? If there is nothing wrong, then what are you chasing?
I will address some of those comments for the sake of discussion but they would be random tidbits:
Speech Intelligibility. If you have killed a lot of early reflections, that could cause it, per the Bradley/Yang NRC research. Another cause would be a poorly designed speaker. The center channel level could be too low or out of calibration. If it is a movie soundtrack, the dialog could simply have been mixed poorly that way. You could also be thinking there is a problem when in reality there is none.
Soundstage collapsing. This is really odd. You mean it is fine and then all of a sudden it flattens? What are the conditions when it has not collapsed? If it is always shifted, then be sure to check levels and timing. It is surprising how much this helps to get things right, and of course how easy it is to do. This is one of the reasons auto room correction features work as well as they do, since they set these parameters accurately.
High Frequency Harshness. This is very likely a speaker problem. You would need to rule it out by getting both on-axis and off-axis data, including directivity. Sadly this is rarely provided. Another source could be amps clipping. Don't think of fixing either by messing with the room. Bad speakers need to be fixed by replacing them with good speakers. Above the transition frequency, the speaker dominates the overall sound you hear, not the room. Compare two speakers, and then compare a speaker with and without an absorber. I am confident you will find the former to be the far larger difference.
Lack of soundstage depth. I would look to recording as the first issue. Maybe speakers.
Ringing. I assume you are not really hearing ringing but rather measured the low frequency impulse response and arrived at that observation. Run your analysis tool and capture a frequency response graph. Set the smoothing to 1/12 or 1/24 octave and pay attention to frequencies below 100 Hz. No doubt you have some massive peaks in there. They can be pulled down with simple parametric EQs. If you are using REW, it has the ability to automatically generate filter coefficients for a number of low cost DSPs. You can either use that or do it visually based on what the graph says. Measure afterward to see if you have pulled the peaks down, then listen to the effect. Boominess will likely go way down.
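For the curious, here is a minimal sketch of the kind of coefficients such a parametric EQ computes, using the well-known Audio EQ Cookbook peaking-filter formulas (the same family of biquad filters that REW exports for DSP hardware). The 45 Hz center frequency, -8 dB cut, and Q of 5 are made-up example values, not measurements from any real room.

```python
# Sketch: peaking (parametric EQ) biquad coefficients per the Audio EQ
# Cookbook, for cutting a hypothetical room-mode peak at 45 Hz by 8 dB.
from math import cos, pi, sin

def peaking_eq(fs, f0, gain_db, q):
    """Return normalized biquad coefficients (b0, b1, b2, a1, a2)."""
    a = 10.0 ** (gain_db / 40.0)        # amplitude factor
    w0 = 2.0 * pi * f0 / fs             # normalized center frequency
    alpha = sin(w0) / (2.0 * q)         # bandwidth parameter
    b0, b1, b2 = 1.0 + alpha * a, -2.0 * cos(w0), 1.0 - alpha * a
    a0, a1, a2 = 1.0 + alpha / a, -2.0 * cos(w0), 1.0 - alpha / a
    return (b0 / a0, b1 / a0, b2 / a0, a1 / a0, a2 / a0)

# Hypothetical example: -8 dB cut at 45 Hz, Q = 5, 48 kHz sample rate.
coeffs = peaking_eq(fs=48000, f0=45.0, gain_db=-8.0, q=5.0)
print(coeffs)
```

A nice property of this filter shape is that gain is unity at DC and Nyquist, so only the narrow band around the mode is affected.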
And oh, this reminds me: you didn't say if you had a sub. I assume since you have a center channel that this is a 5.1 system, so you have one. If so, the DSP goes in the path of that. If you don't have one and have folded the bass into the mains, you have to tell me whether you can loop a DSP in there or not.
The other thing to do is optimize your seating position. Since your room is not a rectangular box, we can't tell you what that location is, but you can measure it with REW or your favorite tool: overlay the graphs and see which position is smoothest. Similarly, speaker positioning can be very helpful. Again, nothing can be given to you remotely, as your room does not fit a standard box. If you are open to more subs, then I recommend getting at least one more. Then, if the funds allow it, get the JBL BassQ. This box is unique in that it optimizes multiple subs using a special algorithm that varies level, timing *and* filter settings for each sub. It iteratively searches for an optimal solution for the smoothest bass response. The number of permutations it tries is huge, so no human can easily accomplish the same. It works very well in this scenario since, as I said, none of the standard speaker/sub placement recommendations apply. A much higher end version of that is in the JBL Synthesis SDEC-4500, which is what we have in our theater and what I have purchased for mine. Harman has patented this algorithm, so for this bit you either go with them or do without. You can read more about it in Dr. Toole's book (I think it is covered there) or the AES paper: "In-Room Low Frequency Optimization" by Todd S. Welti and Allan Devantier.
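To give a feel for the search idea (and only the idea; this is emphatically not Harman's patented algorithm), here is a toy brute-force sketch: try combinations of per-sub gain and delay, and keep the one whose summed response is flattest. The two "measured" sub responses below are made-up stand-ins; in practice they would come from measurements at the seats with a tool like REW.

```python
# Toy sketch of multi-sub optimization: exhaustively search gain/delay
# combinations for the flattest summed low-frequency response.
import cmath
import itertools
from math import pi

FREQS = [20, 25, 31, 40, 50, 63, 80, 100]  # Hz, coarse grid below ~100 Hz

# Hypothetical complex responses of two subs at one seat (stand-in data).
SUB_RESPONSES = [
    [cmath.exp(1j * 2 * pi * f * 0.003) * 1.0 for f in FREQS],  # sub 1
    [cmath.exp(1j * 2 * pi * f * 0.007) * 0.8 for f in FREQS],  # sub 2
]

def flatness_error(gains, delays):
    """Spread (max - min magnitude) of the summed response; lower is flatter."""
    mags = []
    for i, f in enumerate(FREQS):
        total = sum(g * r[i] * cmath.exp(-1j * 2 * pi * f * d)
                    for g, d, r in zip(gains, delays, SUB_RESPONSES))
        mags.append(abs(total))
    return max(mags) - min(mags)

GAIN_STEPS = [0.5, 0.75, 1.0]
DELAY_STEPS = [0.0, 0.002, 0.004, 0.006]  # seconds

# Exhaustive search over (gain1, gain2, delay1, delay2) combinations.
best = min(itertools.product(GAIN_STEPS, GAIN_STEPS, DELAY_STEPS, DELAY_STEPS),
           key=lambda p: flatness_error(p[:2], p[2:]))
print("best (gain1, gain2, delay1, delay2):", best)
```

Even this toy version tries 144 combinations per seat; the real algorithm also varies filter settings and optimizes across multiple seats at once, which is why no human can match it by ear.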
Slightly different take on this topic is covered in my WSR article. Shorter version here: http://www.madronadigital.com/Library/BassOptimization.html
Note that SFM optimizes for multiple seats, so your wife will be happier too.
As you can tell, once you have general absorption right in the room, the main focus should be on low frequency optimization. There is really nice low hanging fruit there: you can trust your meter and rely on a good bit of science to improve your experience. As I explained in a previous post, reducing the ringing time will subjectively make higher frequencies sound better, because the overhang from the bass no longer covers the natural rhythm of a lot of music, which is around 0.4 seconds.