Originally Posted by tuxedocivic
Assuming you have the frequency response you have the impulse response. But why talk about CSD as a function of FR when it's actually a function of IR. CSD is a series of slices from the impulse response plotted in a 3 axis graph. Which is why I said that's not how CSD works. You aren't wrong that it's all related, but it's kind of like saying an apple is basically a part of a tree. Well, yes it is but it's a piece of fruit. Which is why your original statement didn't make sense to me. Making FR dead flat does not make CSD razor sharp.
Slicing the impulse response is a convenient method for generating a CSD, but methods that act directly on the frequency response are possible too. Impulse response and frequency response are just two different ways of representing the same thing. A tree and an apple is the wrong analogy. The actual "thing" is a complete characterization of the behavior of a single-input, single-output linear dynamic system. That thing can be represented as a frequency response, an impulse response, or one of many time-frequency representations. However, some time-frequency transformations discard information and cannot be reversed. Likewise, frequency response smoothing discards information: you cannot construct a CSD from smoothed frequency response data.
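For the curious, the slicing method is easy to sketch. This is just a minimal illustration in Python/numpy of the idea, not any particular measurement package's algorithm; the window length, step size, and fade-out choices here are arbitrary assumptions of mine:

```python
import numpy as np

def csd_slices(ir, fs, n_slices=30, window_len=256, step=16):
    """Cumulative spectral decay: FFT a series of progressively
    later windows of the impulse response, each faded out with a
    half-Hann taper to reduce truncation artifacts."""
    fade = np.hanning(2 * window_len)[window_len:]
    slices = []
    for k in range(n_slices):
        seg = ir[k * step:k * step + window_len]
        if len(seg) < window_len:
            break
        spec = np.fft.rfft(seg * fade)
        slices.append(20 * np.log10(np.abs(spec) + 1e-12))
    freqs = np.fft.rfftfreq(window_len, d=1.0 / fs)
    times = np.arange(len(slices)) * step / fs
    return freqs, times, np.array(slices)
```

Each row of the returned array is one "slice" of the waterfall; a resonance shows up as a ridge along the time axis that decays slowly compared to the rest of the spectrum.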
If you make the un-smoothed frequency response flat with zero phase, then you will make the CSD pristine. If the response is minimum phase, as is typically the case with a raw driver response, then any minimum-phase EQ you use to flatten the magnitude response will also zero the phase. The ringing from break-up can be completely eliminated at a single spatial location (or far-field angle) this way. The trouble with break-up is that (1) it can behave non-linearly; and (2) you'll get a different frequency response and CSD at different angles, so you can only eliminate it at one location or reduce its impact "on average" over multiple spatial angles. The question of how to weight the data is interesting, important, and not clearly answered. In a listening room with early reflections, a listener will hear sound produced by the speaker at different angles, so an averaging approach may be better even if the application calls for optimization of a single seat only.
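The magnitude/phase link for minimum-phase systems can be demonstrated numerically. Below is a sketch of the standard real-cepstrum construction: given only a magnitude response, it recovers the unique minimum-phase spectrum with that magnitude, so an EQ of 1/H built this way flattens the magnitude and zeros the phase in one step. The function name and grid conventions are my own:

```python
import numpy as np

def minimum_phase_from_magnitude(mag):
    """Recover the minimum-phase spectrum that has the given magnitude,
    via the real cepstrum.  `mag` is sampled on a full FFT grid of even
    length N (bins 0..N-1, with the usual conjugate symmetry)."""
    N = len(mag)
    cep = np.fft.ifft(np.log(np.maximum(mag, 1e-12))).real
    fold = np.zeros(N)
    fold[0] = cep[0]
    fold[1:N // 2] = 2.0 * cep[1:N // 2]   # fold anti-causal part forward
    fold[N // 2] = cep[N // 2]
    return np.exp(np.fft.fft(fold))        # minimum-phase H at each bin
```

If a driver response H really is minimum phase, applying the EQ 1/minimum_phase_from_magnitude(abs(H)) leaves flat magnitude and zero phase, which is exactly the "pristine CSD" case described above.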
Anyway, I like looking at frequency data more than impulse response data. Frequency data can be presented on a logarithmic axis, which is the perceptually relevant scale to use. This can't be done in a meaningful way with the impulse response, where information from the highest frequencies dominates the visual presentation of the data. When looking at frequency data, the narrower the peak or dip, the greater the time required for it to manifest. Tall, narrow minimum-phase peaks are high Q and ring a lot. Knock those peaks down with some EQ and the ringing will be reduced too. What minimum-phase EQ can't correct is the reflected sound, unless it is indistinguishable from the direct sound in the frequency range of interest.
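To put a number on the Q/ringing connection: a minimum-phase resonance at f0 with quality factor Q decays with envelope exp(-pi*f0*t/Q), so its 60 dB decay time is Q*ln(1000)/(pi*f0), roughly 2.2*Q/f0. A tiny sketch (the function name is mine):

```python
import numpy as np

def ring_time_60dB(f0, Q):
    """Time for a minimum-phase resonance at f0 Hz with quality
    factor Q to decay by 60 dB.  Envelope ~ exp(-pi * f0 * t / Q)."""
    return Q * np.log(1000.0) / (np.pi * f0)

# e.g. a Q = 10 break-up peak at 5 kHz takes ~4.4 ms to fall 60 dB;
# ring time is linear in Q, so EQ that halves the effective Q of the
# peak also halves how long it rings.
```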
Originally Posted by tuxedocivic
It still could be placebo. After all, you know what change you've made, therefore you have a preconception that such adjustment should make such a difference. Have a friend make the changes, tell you he changed it without, etc. If you can still get it right then I'm impressed. 0.1db can be audible to me over a range of an entire decade. Very subtle though, and passive can easily do that. When you are talking about 0.1db difference only active can handle I get suspect
Also, in my experience, and based on research, you are far better off getting your response right using anechoic measurements. You are talking about 0.05 dB changes based on room measurements where changing your head position 6" can have a bigger impact. Heck, sometimes I leave the mic in the same spot and two back-to-back sweeps have a 0.25 dB difference, even in a dead silent room. You're talking about changes the measurement setup really can't even quantify right.
Hmm, it could be placebo, but probably not, because a lot of times I change something and I don't hear it. Indeed, not all 0.1 dB changes are equally audible. I believe it has to do with masking thresholds. The change is likely to be audible if some content suddenly becomes masked or unmasked as a consequence. Now, if the thing you are changing is far out of balance, then small changes won't really be audible: if you are boosting or attenuating something that's completely masked, or something that's completely masking something else, then you don't hear any change.
There are undoubtedly advantages to using anechoic measurements to do calibration. But I am doing it the "hard" way on purpose, because I am very interested in the subject of room calibration. I also believe that even in the high frequencies, an in-room calibration may be able to give superior results to an anechoic one, even though existing room EQ technology hasn't really gotten this right yet. The challenge is to figure out a good psychoacoustic model for analyzing the impulse response and to design a system around it that is flexible enough to meet the needs of both single-seat and multi-seat applications.
My approach is based on modeling a high-quality anechoic speaker in what I imagine a high-quality mastering room to be like. I seem to be able to get results that work great for most music, but movies are a total mess. Recent home mixes are decent, but I end up basically re-EQing material that is likely either the theatrical original or a poorly re-mixed/re-mastered home version. It's possible that I'm more tolerant of variation in music just because I accept that some tonal imbalance is introduced on purpose for style.
Originally Posted by tuxedocivic
I question your claims because I've been there and did not find it to be audible. You aren't more enlightened based on your hours obsessing over a SEOS 15 design. Like I said, 0.25db over a decade is obvious. But you sound like you're talking about pretty narrow band stuff (breakup). Heck, research doesn't support your claims. And I haven't even mentioned the aberrations in the recording that you can't compensate for.
Sorry if I'm nit picking. I just called out someone in another thread for doing that and here I am doing it. But if you are going to claim Be is a waste of time until you've eq'ed your speaker to 0.05 dB then we should discuss it. I found Be to be well worth the money. It doesn't change the low end capability, so you should reconsider that point of your argument. The Radian 475 may have less low end than your CD choice, but that's separate from Be. Which is why I actually backed you up earlier, because that was my concern going with the Radian, despite me wanting Be.
I'm not sweating 0.05 dB for anything narrower than 1/3rd octave. For bands as wide as 1/3rd octave or wider, I definitely do perceive very small changes, particularly in the vicinity of masking thresholds.
Your point about variation between content is interesting. I absolutely do have to listen to a wide variety of content to subjectively evaluate my configurations. However, the closer I get to an ideal response, the better everything sounds on average. I have a few thoughts on why this is true. Note that these apply more to music than to movies:
1) The negative effects of flaws in the production vs. playback system are largely additive. If your system has imbalances like most systems do, you may occasionally find content that was produced in a very similar environment that will sound better on your system than it would on an ideally neutral system, but this is a lot less likely than the converse.
2) Skilled mix and master engineers are aware of the flaws and shortcomings of their systems and are able to compensate for them. These flaws do impact the ability of an engineer to make adjustments to the sound that may be needed, but the good engineers know when to doubt what they hear and avoid doing harm to the sound.
3) When dealing with difficult areas like bass, engineers rely on reference material to make judgments. Ironically, the behavior of mixing to match established reference material has been described by Floyd Toole of NRC and Harman as being part of a "circle of confusion", even though it seems to work out better in practice for music compared to how movies are produced.
In general, I believe a lot of sound passes through the production chain with little to no alteration. That's usually for the best because it means that the recording was high quality in the first place.
Anyway, this discussion may be straying off topic. I am still posting here because I am legitimately interested in Be. Even though I've cured at least 90% of the break-up problem in my DNA-360, I am left to wonder if it could be better. Of course, I probably still haven't done all I can with the 360 alone. I'm quite certain that my methods have room for improvement.
What I'd really like is if I could get a Be diaphragm for my DNA-360 or even something of that size that plays even lower like the BA-750, but I reckon this is wishful thinking.