Thanks for the feedback guys! I'm fairly new to the calibration side of things, so I'm trying to learn as much as I can as I go. I can't afford to hire a professional calibrator, so I think this will be a fun hobby to pick up.
Originally Posted by MarkHotchkiss
But where's the source?
The concept of an all-pass FIR filter is strange to me. The advantage of a FIR filter over an IIR filter is linear phase (a constant delay). So a filter that doesn't affect frequency response (such as an all-pass filter) and also doesn't affect phase (such as a FIR filter) would be a filter that does nothing at all. Well, it would create a delay, but there are much easier ways to do that, since a FIR filter is computationally expensive while a plain delay is computationally free.
Now, it is possible to create a FIR filter that does affect phase, and maybe that is what that 'source' of yours is discussing. So I'm interested in that source.
Randomizing phase is an interesting concept, but I'm not sure how that would be done in a DSP. It seems to me that it would require randomizing the filter coefficients in an IIR filter (randomizing coefficients in a FIR filter would not change the phase). Maybe a simpler approach might be a variable delay (a rubber-buffer) and maybe some reverb (as mentioned in your other thread).
I assume that this has been addressed by the big-boys, and that there is an algorithm out there that does what you want.
Well, that's embarrassing. Here's the article I intended to link to. I'll update the above post as well. I'm not sure how it's actually implemented in a DSP, but decorrelation filters (also called whitening filters) are pretty common in other signal processing applications. Transforming these signals to the frequency domain using FFTs can be done in near real time. I haven't read it in a while, but IIRC, that paper suggests manipulating the phase in the frequency domain because it can be done independently of the amplitude. By randomizing the phase, they claim
The timbral coloration and combing associated with constructive and destructive interference of multiple delayed signals is perceptually eliminated.
which, unless I miss my guess, is what I'm after.
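To make the idea concrete for myself, I put together a rough numpy sketch of what I understand the paper to be describing: keep the magnitude at unity, assign random phase to each frequency bin, and take the inverse FFT to get a real FIR impulse response. The function name and tap count here are just my own choices, not anything from the paper's actual implementation:

```python
import numpy as np

def decorrelation_filter(n_taps, rng):
    """All-pass FIR sketch: unit magnitude paired with random phase.

    Phases are assigned to the positive-frequency bins only; irfft
    supplies the conjugate symmetry needed for a real impulse response.
    """
    n_bins = n_taps // 2 + 1
    phase = rng.uniform(-np.pi, np.pi, n_bins)
    phase[0] = 0.0          # DC bin must be real
    if n_taps % 2 == 0:
        phase[-1] = 0.0     # Nyquist bin must be real for even lengths
    spectrum = np.exp(1j * phase)   # magnitude 1 at every frequency
    return np.fft.irfft(spectrum, n_taps)

h = decorrelation_filter(1024, np.random.default_rng(0))

# The magnitude response stays flat (all-pass), but the impulse
# response is smeared out in time instead of being a single spike.
mag = np.abs(np.fft.rfft(h))
print(np.allclose(mag, 1.0))  # True
```

The check at the end is the point: the frequency response is untouched, only the phase changes, which (if I'm reading the paper right) is exactly the independence of amplitude and phase they exploit.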
Also, I believe I've read that this is the technique used to simulate multiple channels from one source. Mono to stereo for example. Here's a Dolby patent that Dennis linked to that describes a method of simulating multiple channels from a single channel using decorrelation filters.
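As a toy illustration of the mono-to-stereo idea (again just my own numpy sketch under my own assumptions, not the patent's method): run one mono signal through two different random-phase all-pass filters, and the two outputs keep the same spectrum but end up largely decorrelated from each other:

```python
import numpy as np

def random_allpass(n_taps, rng):
    # Unit-magnitude spectrum with random phase -> real all-pass FIR.
    phase = rng.uniform(-np.pi, np.pi, n_taps // 2 + 1)
    phase[0] = 0.0
    if n_taps % 2 == 0:
        phase[-1] = 0.0
    return np.fft.irfft(np.exp(1j * phase), n_taps)

rng = np.random.default_rng(1)

# One second of noise at 48 kHz stands in for a mono audio signal.
mono = rng.standard_normal(48000)

# Two independently randomized filters give two pseudo-channels.
left = np.convolve(mono, random_allpass(2048, rng), mode="same")
right = np.convolve(mono, random_allpass(2048, rng), mode="same")

# Correlation between the channels is small compared to 1.
corr = np.corrcoef(left, right)[0, 1]
```

Both channels sound like the source (same spectrum), but because their phase relationships differ everywhere, summing them doesn't produce the stable comb pattern that two identical delayed copies would.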
I suppose I need to reread that as well. I have a feeling there is a simple answer to this, and I may just be using the wrong terms to describe the problem.
Originally Posted by Ivan Beaver
When you have two signals that arrive at the same location and have a different phase response (either by delay or "randomized phase" - WHATEVER that is???????????) you will have cancellations. PERIOD.
Unless the phase is exactly the same (and the only way to have that is to have both loudspeakers in exactly the same location -NOT side by side or on top of each other-but IN the same physical position-which is impossible), there will be interference which will lower the overall quality.
NO way around it-no matter how "fancy" the term-sorry.
The use of delay for speakers that are covering different locations - and trying to get the signal arrivals the same - is a very real tool - BUT ONLY if done properly and designed properly.
Just doing "random things" without a specific approach is NOT a good idea.
I'm not sure what "random things" you are suggesting that I am intending to try, but I'm just trying to learn how this particular situation is addressed during calibration. Based on the documents that I've linked to and referenced, it's clear that more can be done than just adding a delay to these signals to improve things. I'm just not clear on how it's done in a DSP. Dennis recommended QSC components to do this. If time aligning the signals is all that's required, there's certainly no need for that much processing power, so I think it's safe to assume they're doing a little more.
While I agree that there will always be some constructive or destructive interference where multiple signals are present, I think the intent of this is to simulate a sound field that would have originated from a much larger space (destructive interference included).