Originally Posted by enricoclaudio
It says the FV25HP should be +7dB more powerful at 20Hz than the FV15HP; however, data-bass shows it is only +5.3dB ahead. The discrepancy between our numbers and data-bass's is 1.7dB.
It says the F18 should be -6.5dB from the FV18; according to data-bass it is -8.1dB. The discrepancy there is 1.6dB.
Did you know that Type 2/Class 2 SPL meters, even the most expensive ones like my $1500 Casella CEL-620, have an accuracy of +/-1.5dB? If you take into account differences in software, ambient temperature, mic models, wind, external noise, you name it, a difference of 1.7dB is totally acceptable and even inside the ANSI and IEC 651 Type 2/Class 2 accuracy standards. In fact, that difference is even inside the Class 1/Type 1 accuracy standards. And the lower the frequency, the larger the tolerance. For instance, at 20Hz the tolerances are +/-2.5dB for Class 1 and +/-3.5dB for Class 2.
Hello Enrico. You're citing tolerances for the mic, which is fair; however, that same deviation should appear consistently across all of his measurements. And typically, a calibration file is used to correct for all but a very small remnant of this deviation. All things considered, this shouldn't be a huge factor. But it's possible.
I would like to think that he controls for many variables by averaging runs together and using standard protocols, so that wind or weather can't influence the results. He follows the CEA-2010 subwoofer measurement standard. I don't think he's that much of an amateur.
And if he were, that would be a problem, since Rythmik.com cites his results.
Speaking of ANSI, the correct way to do these tests would be to take multiple samples. That is, you pick a sub, say an FV15HP or FV25HP, test more than one unit from different production runs, with at least two different microphones, and average the results together. That would account for sample variance in both the product and the measuring equipment. I know that isn't practical for someone who isn't paid by the industry as a full-time quality tester, so his methods will have to suffice.

But if there is variation in his results of up to 2dB, whether from sample variation or mic accuracy, then the conclusion has to be that I can't rely on his measurements. I just have a hard time believing he hasn't controlled for most of this, apart from testing multiple samples. So is there a sample-to-sample variance of up to 2dB? It's possible, but I'd have a hard time believing it: knowing how much excursion, efficiency, or amp power has to change to produce a 2dB difference, that would be a very large swing, and I would conclude there were production problems.
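As an aside on the averaging itself: SPL readings shouldn't be averaged as a plain arithmetic mean of the dB values; you convert to the power domain, average there, and convert back (for readings a couple of dB apart the two methods land close, but not identical). A minimal Python sketch, with made-up readings just to show the arithmetic:

```python
import math

def average_spl(levels_db):
    """Average SPL readings the proper way: convert each dB value
    to a power ratio, take the mean, and convert back to dB."""
    powers = [10 ** (db / 10) for db in levels_db]
    return 10 * math.log10(sum(powers) / len(powers))

# Two runs of the "same" sub differing by 2dB (made-up numbers):
print(round(average_spl([100.0, 102.0]), 2))  # 101.11, vs 101.0 arithmetic
```

The gap between the two methods grows as the spread between readings grows, which is part of why multi-sample averaging matters when units vary.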
So what I'm saying is, I am unsure where the variances are coming from.
But Rythmik.com provides relative numbers, so we need an absolute figure for at least one of the subs in order to line up the others. If I can't use data-bass.com, maybe you have an internal measurement? For example, if we knew the 20Hz output of the F12, we could fill in the rest of the blanks.
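The idea is simple enough to sketch: given a table of relative offsets and one absolute anchor, every other model's output follows by addition. All the numbers below are hypothetical placeholders, not real Rythmik or data-bass figures:

```python
# Hypothetical relative 20Hz offsets (dB) against the F12, plus a
# hypothetical absolute 20Hz value for the F12 itself as the anchor.
relative_db = {"F12": 0.0, "FV15HP": 5.0, "FV25HP": 10.3}  # placeholders
f12_20hz_db = 105.0  # placeholder anchor, NOT a real measurement

# One known absolute value pins down all the others.
absolute_db = {model: f12_20hz_db + off for model, off in relative_db.items()}
print(absolute_db["FV25HP"])  # 115.3 with these placeholder numbers
```

That's all the anchoring amounts to; the hard part is getting one trustworthy absolute number in the first place.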
Thanks for helping to shed some light on this.