Originally Posted by Doug Blackburn
...I mean... deciding d75 for video calibration for daytime viewing is just a total and complete crapshoot and is absolutely no better than picking any other d-point for calibration. But because it went through some sort of thought process you devised, you are predisposed to prefer it to a different and more appropriate d65 calibration...
I work in the audio industry and completely understand the need for correct signal duplication.
There's an entire pipeline involved in getting a movie to the TV in someone's home. Throughout that process, all of the equipment involved must be calibrated as closely as possible to exact standards, or errors will creep into the signal. If some technician makes a subjective call that the image needs more green, that's an ignorant mistake.
If someone is paying you to calibrate their home set, it is your responsibility as a professional to calibrate it exactly to standards. They could tinker with the settings all they wish to make it look subjectively pretty. That's not what you're being paid to do.
But when I'm watching my television in my den I'm not passing my image errors down a production pipeline, and I'm not a professional being paid to ensure that it's accurate. D75 looks more accurate to me and others watching. All day. So why not?
Now, I titled the thread a hair provocatively (you have to on the internet to get a response), but in fact I was merely curious whether anyone was aware of any validity to this.
Especially since manufacturers across the board seem to believe D65 is not good for viewing in a lit room.
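For concreteness, the actual difference between those two white points can be computed from the CIE daylight-locus formula, which gives the 1931 chromaticity coordinates of any D-series illuminant from its nominal correlated color temperature. This is a standard colorimetry formula, not anything from a particular calibration package; the small 1.4388/1.4380 factor is the conventional adjustment from nominal CCT to the value the formula expects. A minimal sketch:

```python
def daylight_xy(cct_nominal: float) -> tuple[float, float]:
    """CIE 1931 (x, y) chromaticity of a CIE daylight illuminant from its
    nominal correlated color temperature (e.g. 6500 for D65, 7500 for D75)."""
    # The CIE daylight formula is defined against a slightly adjusted CCT.
    t = cct_nominal * 1.4388 / 1.4380
    if 4000 <= t <= 7000:
        x = 0.244063 + 0.09911e3 / t + 2.9678e6 / t**2 - 4.6070e9 / t**3
    elif 7000 < t <= 25000:
        x = 0.237040 + 0.24748e3 / t + 1.9018e6 / t**2 - 2.0064e9 / t**3
    else:
        raise ValueError("CCT outside the CIE daylight-locus range")
    # The daylight locus pins y to x by a fixed quadratic.
    y = -3.000 * x**2 + 2.870 * x - 0.275
    return x, y

print("D65:", daylight_xy(6500))  # ~ (0.3127, 0.3291)
print("D75:", daylight_xy(7500))  # ~ (0.2990, 0.3150)
```

So the whole argument is over a shift of about 0.014 in x toward blue; that shift is exactly what reads as a "cooler" picture in a brightly lit room.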
It reminds me of audio and the "smiley face" EQ. In this business everyone knows that consumers do not like the sound of a correct "flat" EQ response; they want gain in the bass and treble. Engineers find it pathetic, but manufacturers give the people what they want. You pretty much have to buy professional studio equipment to get true sound reproduction.
But this was researched long ago. The subjective perception of loudness is not uniform across the frequency spectrum, and the effect changes with playback level (the equal-loudness contours first measured by Fletcher and Munson). Music is almost always mixed at a louder level than it's normally listened to (most popular music must at least be loud enough to compete with the drums). So when you play back an accurate reproduction of that audio at a "normal" listening level, it still measures as accurate on any equipment, but subjectively you don't hear it accurately: you hear too much mid-range, and the desire to boost bass and treble (the smiley face) is really a desire to restore the correct perceived balance.
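You can see the size of that effect with the standard A-weighting curve, which is a rough single-curve approximation of how much less sensitive the ear is away from 1 kHz at moderate listening levels (a sketch of the idea, not the full level-dependent equal-loudness contours):

```python
import math

def a_weight_db(f_hz: float) -> float:
    """A-weighting in dB relative to 1 kHz (standard analytic form):
    roughly how much quieter a tone at f_hz sounds than its measured level
    suggests, at moderate playback levels."""
    f2 = f_hz ** 2
    ra = (12194.0 ** 2 * f2 ** 2) / (
        (f2 + 20.6 ** 2)
        * math.sqrt((f2 + 107.7 ** 2) * (f2 + 737.9 ** 2))
        * (f2 + 12194.0 ** 2)
    )
    return 20.0 * math.log10(ra) + 2.0  # +2.0 dB normalizes 1 kHz to ~0 dB

for f in (100, 1000, 10000):
    print(f"{f:>5} Hz: {a_weight_db(f):6.1f} dB")
```

At 100 Hz the ear is roughly 19 dB less sensitive than at 1 kHz, and a few dB down again at 10 kHz; turn the bass and treble knobs up by about that much and you've drawn the smiley face.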
So my better-phrased question here was whether there is a similar "subjectively valid" reason for using a higher color temperature in a lit room. It looks better to me, and all the manufacturers do it, just like the smiley face sounds better to most people and all the manufacturers do it.
But - eh - the answer seems to be a resounding "NO!"