Originally Posted by LastButNotLeast
The "rule of thumb" is that it's never a good idea. It is, however, sometimes a necessary evil.
Like when? And is looking at the distortion a good way to know whether the trade-off was worth it (and if so, what thresholds do you use to judge that)?
You can boost a dip, you just can't boost a NULL.
Right. But at what point does a dip become a null? Isn't a null essentially just a very large dip? Is a -20 dB dip a null, a -40 dB dip, etc.?
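One way to think about the dip-vs-null distinction is in terms of the power it would take to flatten it. A quick sketch (the function names here are my own, not anything from REW) converting a dip's depth into the voltage and power multipliers a corrective boost implies:

```python
def boost_required_db(dip_db):
    """Boost (dB) needed to flatten a dip of the given depth."""
    return -dip_db

def voltage_ratio(gain_db):
    """Linear voltage multiplier corresponding to a dB gain."""
    return 10 ** (gain_db / 20)

for dip in (-6, -12, -20, -40):
    g = boost_required_db(dip)
    print(f"{dip:>4} dB dip -> {g} dB boost -> "
          f"{voltage_ratio(g):.0f}x voltage, {10 ** (g / 10):.0f}x power")
```

A -40 dB notch would need a 100x voltage (10,000x power) boost to fill, which no amp or driver will deliver. That's the practical sense in which a deep cancellation null is "unboostable" while a modest dip isn't.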
So if you apply a boost and nothing happens, you may as well un-apply it, since it's not doing any good, and probably doing some harm.
Makes total sense. Of course, there are times when boosting a dip can make the response look smooth, without one realizing what the boost may have done in terms of clipping or distortion. Which is why I asked above about determining the effectiveness of such a boost.
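The clipping side of that concern is at least easy to reason about: any EQ gain eats into headroom, so a signal already peaking near full scale will clip if boosted further. A minimal (and deliberately simplified, hypothetical) check, assuming peak levels in dBFS:

```python
def clips_after_boost(peak_dbfs, boost_db):
    """True if applying boost_db of gain pushes the peak above 0 dBFS."""
    return peak_dbfs + boost_db > 0.0

print(clips_after_boost(-3.0, 6.0))   # 6 dB boost on a -3 dBFS peak: clips
print(clips_after_boost(-12.0, 6.0))  # 12 dB of headroom: no clipping
```

In practice most EQ tools handle this by applying a matching pre-attenuation before any boost filters, which trades overall output level for headroom.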
You're overthinking (one of my favorite pastimes) distortion.
This is AVS. That's what we do around here eh?
REW has a nice help file:
Yes, I looked at that yesterday, but I was unsure what it was referring to. I just looked again; it's pretty technical and not sinking in yet.
This is one of my graphs. Notice the THD at 20Hz:
Nothing really sticks out at 20Hz, at least compared to what's to the left of it. Unless that's distortion too at 10 Hz and 15 Hz.
This is another. Notice more THD at 20Hz (less desirable):
Now this hump at 20 Hz is much more noticeable. What is the scale for the THD? I don't think it's dB on the left, and the right-hand axis only shows a 200 Hz label at the bottom. When looking at the THD, where do you draw the line between an acceptable and an unacceptable level?
Interestingly, on either of those, if you go to the null at 70-ish Hz, you can see the distortion go up. So I certainly don't want to try to add any boost there.
Yes, I see it go up at 70 Hz. However, it only does so a tiny bit compared to the large hump at 20 Hz. So why is this movement at 70 Hz noteworthy?