Originally Posted by csundbom
I would like to chime in on this one, as I have some experience with it. As we all know, consumer models implement a large number of "features" to make them stand out in a showroom in comparison with other sets. Commercial/industrial models normally do not implement these features, and are much more geared towards picture accuracy and following the NTSC/ATSC standards. The reason is that industrial monitors are used in critical viewing applications (medical, manufacturing, broadcast studios, etc.). You don't want a situation where the spot on the lung from the MRI scan was really a DNIe edge enhancement artifact (for obvious reasons).
The consumer models have features that have been refined over a long time to attract and seduce the viewer into believing they are seeing a "better" picture. The features work extremely well, and that's the reason they are implemented. It's similar to over-salting your food and adding MSG: it will taste great, but it won't be the actual taste of the food.
I can't count the number of times I finished a calibration and the client asked me to show him the "before" picture. I switched the set back to "Vivid", and the client exclaimed "This is much better!". These features really, really work in creating an illusion of "better".
"Better" for the eye/brain translates in a brighter picture with higher contrast and very sharp edges. I'm sure there are evolutionary reasons for this. Features such as very blue grayscale (the eye sees bluer as brighter), floating black level (to introduce an illusion of higher contrast), sharpness control (to add false outlines to objects) all contribute to fooling us into the picture is indeed better. However, they have nothing to do following the standards we have for television. The standards are very well defined (not like in the audio world), and by implementing them correctly, you will get the correct picture.
I'm being very categorical here: there is only one correct picture. There is no room for personal preference or taste when it comes to following a standard. It's either right, or it is not.
What the television manufacturers have realised is that by deviating from the standard they sell more sets. It's just like adding a "Loudness" button to a stereo: it makes people believe it sounds better, because it boosts frequencies the ear likes to hear. The sound becomes "cleaner" (less complex).
What you are losing with these features is, of course, color accuracy and detail. It takes a little while to get used to watching a properly set up television that implements the standards correctly. After a few minutes, however, you will start seeing details you never realised were there: subtle shadings and colors you never saw before. Switch the set back to "Vivid" and the brain snaps right back into panic mode, with the set now demanding your attention with all its noise, sharp edges and bluish whites.
So this, of course, makes side-by-side comparison of sets very hard. I've done several of these, and the client always prefers the Vivid/most inaccurate set over the properly calibrated one. However, take the bad set out of the room, give the client a couple of days to actually watch the calibrated set, and he/she almost always comes around to appreciating the calibrated picture. There is a difference between actually watching a set and having your primal brain instincts stimulated by "features".
That said, I'm not sure if this helps explain what Rich is seeing or not, but it's a very common situation. There could well be a bad set or something else involved, but please be aware of the pitfalls of side-by-side comparisons. A better test is to watch one set for a day or two, then swap it out for the other set and watch that one for a couple of days. Assuming the industrial model is calibrated correctly, it should give you a more accurate and pleasing picture in my experience.