Originally Posted by xrox
No, generally this is actually called instrument resolution (along with the inclusion of instrument noise). And just because an instrument has low resolution does not mean it has low precision (a Snellen eye chart is a great example).
Yes it does. An eye chart with more character lines in between 20/20 and 20/15 would be more precise not because of better repeatability, but because of finer graduations between the existing ones.
Like I said, 1.012, 1.013, and 1.011 are more precise measurements of a quantity than 1, 1, and 1. According to you, the second data set is more precise. I've got a plethora of physics books with explanations of measurement precision vs. accuracy that agree. It also happens to be the first lesson I teach to my grade 11 physics class every semester before we do a lab (although that doesn't mean I'm right).
You can call it 'resolution' if you want...it works. Nevertheless, the number of sig. figs in a measurement is a factor in 'precision' as well as reproducibility.
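To make the numeric example above concrete, here is a small sketch (the values are the ones from the discussion; the use of sample standard deviation as the spread measure is my choice, not anything from the thread). Rounding the fine readings to whole units produces the coarse data set, and its spread collapses to zero only because the resolution hides the variation:

```python
import statistics

# Three readings of the same quantity from an instrument that resolves 0.001
fine = [1.012, 1.013, 1.011]

# The same quantity read on a whole-unit scale: every reading rounds to 1
coarse = [round(x) for x in fine]

print(coarse)                     # [1, 1, 1]
print(statistics.stdev(fine))     # small but nonzero spread (0.001)
print(statistics.stdev(coarse))   # 0.0 -- the variation is hidden, not absent
```

The coarse set is perfectly "repeatable", but only because the rounding threw away a digit of information.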
Regarding the ruler example, I concede that the numerical precision is better on the mm ruler according to the "other" definition.
Then you also concede that precision is not mere reproducibility if the other definition has any validity.
If I asked you to hand me the more precise ruler...most would accept that as a valid identifier of the ruler with the smaller graduation marks.
Anyway, it's now down to semantics...you say 'resolution'...I say 'precision'...but we are talking about the same statistical aspect of the data set.
IMO High precision, low resolution
The problem here is that I can't fathom anyone referring to an instrument with the largest graduations...or lowest resolution...as 'high precision' merely because the rounding artificially eliminates the variance in the data.
Who would argue that the odometer in your car is more precise than a tape measure?
Now, using the numerical sense of "precise", I would say that the 20/x measurement is not very precise.
Think of it this way: A chart with 20/20, 20/19, 20/18, 20/17, 20/16, 20/15 lines has more precision because we have added another significant digit. If you round the value to the old scale of either 20/20 or 20/15 then you will find that the reproducibility of the un-rounded data set is not diminished by the improved graduations.
So if a patient scores 20/18, 20/17, and 20/18 on the improved chart, we can round to 20/20, 20/20, and 20/20 and see that we have not lost any previous repeatability. The new data set is just as precise as the old one with the same sig. figs. It is more precise than the old one when we add the extra sig fig.
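A quick sketch of that rounding step (the scoring rule here is my assumption about how a chart is read: you are credited with the smallest line you could still read, so each fine score maps up to the smallest old-chart denominator at or above it):

```python
# Scores recorded as the denominator: 20/18 -> 18, 20/17 -> 17, etc.
fine_scores = [18, 17, 18]
old_lines = [20, 15]  # the old chart only has the 20/20 and 20/15 lines

def to_old_chart(score, lines=old_lines):
    # smallest old-chart line the patient could still read
    return min(d for d in lines if d >= score)

coarse_scores = [to_old_chart(s) for s in fine_scores]
print(coarse_scores)  # [20, 20, 20] -- the old repeatability is fully recovered
```

Rounding the improved-chart data back to the old graduations reproduces the old data set exactly, so the finer chart cost nothing in repeatability.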
It's like the typical darts-on-a-target example. If we put large segments on the board, then a shooter can claim he hit target X 5 times...very precise. If we further subdivide target X and the shooter now hits different subdivisions of X, would it make any sense to say that the shooter has less precision than before? He still hit target X every time...he just can't hit the same division of X every time. The additional segments give us more information on where he actually hits within target X...and therefore make our target measurement more precise (higher resolution works too).
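The dart example can be sketched the same way (the throw data is made up for illustration): scoring each hit as a (target, subdivision) pair and then collapsing to the coarse board shows that subdividing the target never removes a hit from X, it only adds information about where in X each throw landed.

```python
# Five throws scored on the subdivided board: (coarse target, subdivision)
throws = [("X", 1), ("X", 3), ("X", 2), ("X", 3), ("X", 1)]

# Collapse to the coarse board by dropping the subdivision
coarse = [target for target, _ in throws]

print(coarse)                 # ['X', 'X', 'X', 'X', 'X']
print(len(set(coarse)) == 1)  # True -- still hit target X every single time
```

The coarse-scale repeatability is untouched; only the finer scale reveals that the throws scatter across subdivisions.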