Now how could you measure this psychophysically without a photodiode...
Pursuit cameras can do it.
Millimeter #1 of the photographed motion blur trail is millisecond #1 of the pixel transition
Millimeter #2 of the photographed motion blur trail is millisecond #2 of the pixel transition
Millimeter #3 of the photographed motion blur trail is millisecond #3 of the pixel transition
Millimeter #4 of the photographed motion blur trail is millisecond #4 of the pixel transition
Millimeter #5 of the photographed motion blur trail is millisecond #5 of the pixel transition
(assuming a photo scale where one millimeter of trail corresponds to one millisecond of tracking motion)
So, along a linear axis of the pursuit camera photograph, you can create a pixel response curve, just like a photodiode. The pursuit camera photograph is a linear record of the pixel response curve. It's beautifully simple.
Tickmarks (e.g. the Pursuit Camera Sync Track) provide the calibration needed to map the physical millimeters of the captured photo to the actual pixels, letting you build a pixel response graph from the pursuit camera photograph.
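The millimeter-to-millisecond mapping above can be sketched in code. This is a hypothetical illustration, not Blur Busters or TestUFO code: the tickmark positions, the `calibrate_scale` and `response_curve` helpers, and all numbers are invented for the example. It just shows how sync-track tickmarks at a known time interval give you a photo-pixels-per-millisecond scale, which then turns a line of blur-trail luminance samples into a time-stamped response curve.

```python
# Hypothetical sketch: turning a pursuit-camera blur trail into a pixel
# response curve. Names and numbers are illustrative, not real Blur Busters code.

def calibrate_scale(tick_positions_px, tick_interval_ms):
    """Average photo-pixels per millisecond, from sync-track tickmark spacing."""
    spans = [b - a for a, b in zip(tick_positions_px, tick_positions_px[1:])]
    return (sum(spans) / len(spans)) / tick_interval_ms

def response_curve(trail_luminance, px_per_ms):
    """Map each photo pixel along the blur trail to a time offset in ms."""
    return [(i / px_per_ms, lum) for i, lum in enumerate(trail_luminance)]

# Example: tickmarks every 8.33 ms (one 120 Hz frame) land 50 photo pixels apart.
scale = calibrate_scale([0, 50, 100, 150], 8.33)   # ~6 photo px per ms
curve = response_curve([0, 10, 120, 240, 250, 255], scale)
```

Once the samples carry time coordinates, the "linear record" of the photo really is the same object a photodiode trace would give you, just read off a spatial axis instead of a time axis.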
Beautifully simple, assuming you properly adjust your camera's dynamic range to cover the whole gamut of your monitor (a matter of basic photographer's / scientist's skill).
Commercial MPRT cameras already do this. The homebrew pursuit camera method I invented won't be as accurate as certain photodiode or commercial methods, but in my tests it still graphs pixel response curves accurate to roughly 0.25ms (e.g. when adding an undocumented experimental Sync Track to TestUFO Blur Trail), while simultaneously providing a WYSIWYG photo that can be studied. That creates several other advantages a photodiode cannot provide.
I believe that once a person finally understands when display motion blur CAN be interpreted as the pixel transition curve, and understands how framerate, sample-and-hold, and strobing interact with it, they can finally be considered a hobbyist vision researcher who "gets it".
Originally Posted by spacediver
Hm... perhaps one approach is to try and quantify motion blur as the area under the decay function curve. This would avoid the cutoff issue you discussed earlier. Think of it as "luminance energy per unit time". On one extreme, you'd have a pixel that instantaneously reached full brightness, remained at that level for the entire duration of the frame, and then took its sweet time decaying. On the other extreme you'd have a pixel that instantaneously reached full brightness and stayed on for an infinitesimal duration.
The area under the entire off-on-off curve is an excellent suggestion. We're talking about the entire curve, not just the rise and fall. That means sample-and-hold is accounted for, whether the curve is square or gaussian. Thinking about this more: the 50% cutoff method will often approximate the area method quite well, but can fall apart for strange curves (e.g. plasma's ramp-up-and-then-decay). The area method is more complex and leans more heavily on the pursuit camera photograph method.
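Spacediver's area idea is easy to sketch numerically. A minimal illustration, assuming evenly spaced (time, luminance) samples and using the trapezoidal rule; the `blur_area` helper and the sample pulses are invented for this example, not part of any real measurement tool:

```python
# Sketch of the area-under-the-curve metric: integrate the full off-on-off
# luminance curve over time with the trapezoidal rule. Data is made up.

def blur_area(times_ms, luminance):
    """Area under luminance-vs-time ("luminance energy per unit time")."""
    area = 0.0
    points = list(zip(times_ms, luminance))
    for (t0, l0), (t1, l1) in zip(points, points[1:]):
        area += (l0 + l1) / 2 * (t1 - t0)
    return area

# Square sample-and-hold pulse: full brightness for the whole 8 ms frame...
square = blur_area([0, 0, 8, 8], [0, 1, 1, 0])
# ...versus a brief strobe: full brightness for only 1 ms.
strobe = blur_area([0, 0, 1, 1], [0, 1, 1, 0])
```

The square pulse integrates to 8 units and the strobe to 1, matching the intuition that the strobed pixel contributes far less smear despite hitting the same peak brightness.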
The photodiode method is very simple, yes -- much simpler than a pursuit camera. Technically, it could be done as a photodiode reading of a one-frame flash (off-on-off). The curve of that flash will often accurately predict the display's perceived motion blur, though not always (due to complications like complex plasma processing, different pixel transitions running at different speeds, etc). And you don't get blogger-friendly photographic proof.
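For completeness, here is a hedged sketch of reading one blur number off such a one-frame photodiode flash trace using the 50% cutoff mentioned above: measure the time between the first and last samples at or above half brightness. The `half_level_width` helper and the trace values are purely illustrative assumptions, not any instrument's actual API:

```python
# Illustrative 50%-cutoff reading of a photodiode flash trace:
# width between the first and last samples at/above half brightness.

def half_level_width(times_ms, luminance, lo=0.0, hi=1.0):
    """Duration spent at or above the halfway luminance level, in ms."""
    half = lo + (hi - lo) / 2
    above = [t for t, l in zip(times_ms, luminance) if l >= half]
    return above[-1] - above[0] if above else 0.0

# One-frame flash sampled every 1 ms: rise, hold, decay (invented data).
trace_t = [0, 1, 2, 3, 4, 5, 6, 7]
trace_l = [0.0, 0.2, 0.8, 1.0, 1.0, 0.6, 0.3, 0.0]
width = half_level_width(trace_t, trace_l)
```

On this invented trace the half-brightness window runs from t=2 to t=5, giving a 3 ms figure; the same function would mis-rank a plasma-style ramp-then-decay curve, which is exactly where the area method above does better.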
Nonetheless, motion test patterns for determining MPRT are still useful, and are a much-needed improvement to motion testing.