I've come to the conclusion that this is actually a rather complicated issue with no clear answer. What do we consider a "real" lag value to be? How do we normalize values across the multitude of test methods, so we're comparing meaningful values? With the data we have now, I think some TVs appear better or worse than they really are, depending on the test methods that were used.
For example, SMTT is precise enough to let you distinguish input lag from pixel lag. With other timer tests it's not clear which one you're actually measuring, since you usually end up reading partially visible digits on a single timer.
In my SMTT tests of my LH20, if I consider only the input lag (using the highest barely visible number), I get 21ms. This is lower than the usual timer tests, which average somewhere around 25ms. If I consider the time to a fully fired pixel (using the highest fully visible number), I end up with 30ms. What is the real, meaningful value to put on the list of TV models?
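Just to illustrate the comparison problem, here's a rough sketch using my LH20 readings from above. The "conventions" in it are purely hypothetical, not any accepted standard; the point is only that the same panel produces noticeably different figures depending on which definition of "lag" you pick, and a crude midpoint between the two SMTT numbers happens to land close to the typical single-timer reading.

```python
# Rough sketch: the same TV yields different "lag" figures depending on the
# definition used. Values are my LH20 SMTT readings from above; the
# convention names are hypothetical, not an accepted standard.

signal_lag_ms = 21.0          # SMTT: highest barely visible number (input lag only)
full_transition_ms = 30.0     # SMTT: highest fully visible number (pixel fully fired)
typical_timer_test_ms = 25.0  # rough average from ordinary single-timer tests

conventions = {
    "input lag only": signal_lag_ms,
    "lag to full pixel transition": full_transition_ms,
    # midpoint between the two, as one possible way to normalize
    "lag to mid-transition (midpoint)": (signal_lag_ms + full_transition_ms) / 2,
}

for name, value in conventions.items():
    print(f"{name:35s} {value:5.1f} ms")

print(f"{'typical single-timer reading':35s} {typical_timer_test_ms:5.1f} ms")
```

Pick a different definition and the same TV is suddenly several milliseconds "faster" or "slower", which is exactly the kind of thing a standard would have to pin down.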
Every time I look at numbers on the model list, it's hard to keep the test differences in mind, guess at their effects, and then do the mental math to compare results. If there were at least some "standard" to reference, it would help a lot.
Naturally, I've got ideas about it, but I'd rather hear what other people think before I blather on further...