I was initially irritated that this study did not identify which signal alteration systems (I heartily agree that calling them "room correction" is a misnomer) produced which results. After reading this thread and thinking it over, I realize that withholding the identities was the right call; if anything, the degree of identification you have already provided is excessive. The reason is that people tend to reify studies like this, when in actuality this study has virtually zero relevance to the experience of listeners enjoying music or movies in their home.
In delineating the reasons for my conclusion, I will acknowledge that some of this repeats points that others have made or alluded to.
1. The n was woefully small, and the description of the subjects was quite limited. This makes it impossible for any reader to determine whether the subjects were similar enough to him or her for comparisons to be relevant. These people positively correlated "boomy" with their overall satisfaction with their listening experience. Is that true of most people? Certainly not of audiophiles. I'm also skeptical that you had sufficient statistical power for the ANOVA to be valid.
2. You conducted the study in a room tweaked to within an inch of its life. Do you have a dollar figure for the cost of the acoustic treatment employed? It is surely far beyond the means of the average listener, and unlikely to be encountered in the real world.
3. A limited and homogeneous sample of music. It would be hard to find three more similar artists. There is no jazz or classical, let alone death metal, ambient, rap, etc.
4. A lack of specificity in the subjective dependent variables. Do you have any data on whether people can reliably determine whether music is "thin" or "muffled"? Without reliability, of course, the question of validity is moot. I had to smile when I saw you employ the terms "forward" and "colored": when audiophiles use such terms, AES types greet them with sneering and derision.
5. The music employed was downmixed to mono. Nobody listens to mono anymore except audiophiles and people who listen to music on clock radios. Audiophiles are, of course, self-deluded, and those who listen on clock radios have no need for the signal processing tested.
6. Only one set of components was used. There are myriad types of speakers that real people use. Would you have achieved similar results with Magnepans? Lowthers? Klipschorns? How about different amplifiers? Oh, wait. All amplifiers sound the same, right? So where does Harman get off charging so much for Levinsons?
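To make point 1 concrete: the power problem can be illustrated with a small Monte Carlo sketch (standard-library Python only). The group means, spread, subject counts, and number of conditions below are invented for illustration, not taken from the study.

```python
import random
import statistics

def f_stat(groups):
    """One-way ANOVA F statistic for a list of per-group rating lists."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = sum(sum(g) for g in groups) / n
    ssb = sum(len(g) * (statistics.mean(g) - grand) ** 2 for g in groups)
    ssw = sum(sum((x - statistics.mean(g)) ** 2 for x in g) for g in groups)
    return (ssb / (k - 1)) / (ssw / (n - k))

def power(n_per_group, group_means, sigma=1.0, alpha=0.05, sims=2000):
    """Estimate ANOVA power by simulation: null runs set the critical value,
    alternative runs (with the hypothesized means) count rejections."""
    rng = random.Random(0)
    k = len(group_means)
    null_f = sorted(
        f_stat([[rng.gauss(0, sigma) for _ in range(n_per_group)] for _ in range(k)])
        for _ in range(sims)
    )
    crit = null_f[int((1 - alpha) * sims)]
    hits = sum(
        f_stat([[rng.gauss(m, sigma) for _ in range(n_per_group)] for m in group_means])
        > crit
        for _ in range(sims)
    )
    return hits / sims

# Hypothetical: three conditions, one shifted by half a rating-scale SD.
small = power(5, [0.0, 0.0, 0.5])    # a handful of listeners per condition
large = power(40, [0.0, 0.0, 0.5])   # the n you'd want for the same effect
print(f"power with n=5: {small:.2f}, with n=40: {large:.2f}")
```

With five listeners per condition, a moderate effect is detected only a small fraction of the time; a null result from such a design tells you almost nothing.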
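And on point 4: inter-rater reliability for a term like "thin" is straightforward to quantify, for example with Cronbach's alpha computed across raters. A minimal sketch, with entirely hypothetical ratings (four raters, six program items, 1-7 scale):

```python
import statistics

def cronbach_alpha(ratings_by_rater):
    """Cronbach's alpha, treating each rater as an 'item':
    k/(k-1) * (1 - sum of per-rater variances / variance of per-item totals)."""
    k = len(ratings_by_rater)
    item_totals = [sum(col) for col in zip(*ratings_by_rater)]
    rater_vars = sum(statistics.variance(r) for r in ratings_by_rater)
    total_var = statistics.variance(item_totals)
    return (k / (k - 1)) * (1 - rater_vars / total_var)

# Invented "thinness" ratings; each row is one rater over the same six items.
consistent = [
    [1, 2, 5, 6, 3, 7],
    [2, 2, 5, 7, 3, 6],
    [1, 3, 4, 6, 2, 7],
    [2, 1, 5, 6, 3, 6],
]
print(f"alpha = {cronbach_alpha(consistent):.2f}")
```

If a study never reports a figure like this for its adjectives, the reader has no way to know whether "thin" and "muffled" meant anything consistent to the panel at all.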
I understand that some of this is endemic to the process. You have to have sufficient control of variables to rule out confounds in your results. However, the greater this control, the less the relevance to real-world experience. I also understand the need to limit the factors assessed. Other approaches are impractical. Who wants to try to interpret a 10-way interaction?
So the responsible stance is not to overinterpret these results. Revealing the results more fully would have been more than "bad manners"; it would have been misleading. I did notice that a concern for manners didn't prevent you from tooting the horn of your own system or bum-rapping a competitor's speaker (and a discontinued model at that) in this thread.
In other areas of psychology research (which is what you are doing, whether you recognize it or not), both laboratory research and naturalistic observation are considered important. The strengths and weaknesses of each approach are weighed, with the hope that, somewhere in the middle, consistent and generalizable conclusions can be reached. It is therefore puzzling that organizations like the AES are so dismissive of the experiences of audiophiles. This isn't just engineering you are dealing with; it's human perception, a far more ephemeral, quixotic, and emotionally tinged endeavor.