Without a CRT you can only measure differences between TFTs, and as long as you don't know the input lag of the TFT monitor that you use as reference, you can only tell which one is faster, not how large its lag actually is.
So if you compare two TFTs and get a difference of 3 ms, this does not mean that either TFT has 3 ms of lag.
This may mean...
Monitor A has 0 ms and Monitor B has 3 ms lag. (Not very likely)
Monitor A has x ms lag and Monitor B has (x+3) ms lag
Monitor A has (x+3) ms lag and Monitor B has x ms lag (depending on the direction in which you subtract the values)
And the problem is that you don't know anything about x. "x+3" could be anything: 0+3, 9+3, 16+3, 32+3, 110+3... Since x is part of the measurements on both screens, it cancels out of the difference and you will never be able to measure it this way.
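A minimal sketch in Python (illustrative numbers and a made-up function name, not part of any real tool) of why x drops out of every two-monitor comparison:

```python
# Sketch: the unknown baseline lag x cancels out of a differential
# measurement, so very different absolute-lag scenarios all look the same.

def measured_difference(lag_a_ms, lag_b_ms):
    """All a side-by-side photo of two TFTs can give you."""
    return lag_b_ms - lag_a_ms

# A has x ms lag, B has (x+3) ms lag, for several hypothetical values of x:
for x in (0, 9, 16, 32, 110):
    print(f"x = {x:3d} ms -> measured difference = {measured_difference(x, x + 3)} ms")
# Every line reports a 3 ms difference; x itself is unobservable.
```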
The free tools are often worthless. Sad but true. You may get an error of up to 16 ms just because you use a single stopwatch / time code, even if the stopwatch itself is perfect. As soon as it is not (e.g. Flash based, like a well known and often used stopwatch with bright green letters) it simply can't update its output as often as it should: Flash runs the code once per frame and is limited to low framerates. That adds almost random errors of several milliseconds.

Most stopwatches are also bound to plain 2D output (no hardware accelerated OpenGL or Direct3D). But Windows forces v-sync on that output and you can't avoid it. You can do much faster calculations in the background, but the output of the screen buffer itself is limited to the vertical refresh rate. (At 60 Hz this alone may force up to ~16 ms of lag.)
These are reasons why SMTT 2.0 is superior - but again: Still not perfect.
It uses DirectX 11 based hardware acceleration, avoids the v-sync problem (you can disable v-sync in 3D mode within your graphics driver), updates its high-precision counters several thousand times a second, and the output shows up-to-date values independent of the actual screen refresh.
All these key features are needed to avoid systematic errors of up to 16 ms.
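As a rough sketch of the v-sync quantization problem described above (assuming a 60 Hz display whose stopwatch output can only change once per refresh; the function is hypothetical):

```python
import math

FRAME_MS = 1000 / 60  # one refresh period at 60 Hz, ~16.7 ms

def displayed_time(true_time_ms):
    """A v-synced stopwatch can only show the value that was current
    at the start of the last completed frame, so the on-screen number
    trails the true time by up to one full frame."""
    return math.floor(true_time_ms / FRAME_MS) * FRAME_MS

for t in (5.0, 16.0, 20.0, 33.0):
    shown = displayed_time(t)
    print(f"true {t:5.1f} ms -> displayed {shown:5.1f} ms (error {t - shown:4.1f} ms)")
```

A photo taken at t = 16.0 ms would still show 0.0 ms on screen, i.e. a 16 ms reading error from v-sync alone.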
Some of the mentioned errors of plain stopwatches vary over time because they are not synchronized with the monitor refresh. So you will measure several different input lag values and hope that the average will give you the correct result.
But what is the result when you take 3 measurements with the following amounts of error for each measurement:
1. 16ms error
2. 8ms error
3. 0ms error
Average: (16+8+0)ms/3 = 8ms.
Even if you take 30 measurements, or 300 measurements: you will always get a wide spread of results, without knowing whether this spread comes from the monitor itself or from your measurement method.
If it is caused by the bogus measurement technique, taking the average of your measurements will just add an average error to your results; it will never solve the problem.
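This can be simulated. The sketch below assumes (as a model, not measured data) that each photo's stopwatch error is uniformly distributed over one 60 Hz frame period; averaging then converges to a constant bias of about half a frame, never to zero:

```python
import random

FRAME_MS = 1000 / 60  # assumed error range per shot: [0, ~16.7) ms

def average_measurement_error(n_shots, seed=1):
    """Average reading error of n unsynchronized shots of a
    frame-locked stopwatch (uniform-error model)."""
    rng = random.Random(seed)
    errors = [rng.uniform(0, FRAME_MS) for _ in range(n_shots)]
    return sum(errors) / len(errors)

for n in (3, 30, 300, 30000):
    print(f"{n:6d} shots -> average error ~ {average_measurement_error(n):.1f} ms")
# With many shots the average settles near FRAME_MS / 2 (~8.3 ms):
# a constant offset added to every averaged result, not an error
# that averaging removes.
```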
SMTT has the very same problem, but it minimizes the errors or eliminates some of their sources completely.
If you are happy with your monitor: don't try so hard to find a reason to dislike it.
Sometimes it might be better to stay at "I like it" instead of ending up at "I liked it, but then I found a tool that showed weaknesses..."
Of course it may also show that everything IS great - but you never know. And as long as there is nothing to worry about: Don't worry.

Edited by ThomasSMTT - 10/30/12 at 9:51am