Originally Posted by lovingdvd
First question: In this scenario is ColorFacts using 0-255 or 16-235?
ColorFacts running on the desktop will always be using 0-255.
And is the Ruby using 0-255 or 16-235?
It depends. You'd always want it to be in the 0-255 space, but over DVI it will only be there IF you are running 1080p and IF you have the input mode set to Computer. In fact, once you have the HTPC running 1080p over DVI and the Ruby set to that input, you can freely switch colorspace by changing the input mode between Video GBR (16-235) and Computer (0-255).
Now the calibration is over, so I hook the Ruby back up to my Bravo D1 outputting 720p. Further assume, just for discussion purposes, that the Bravo is perfectly accurate and doesn't introduce any errors into the equation.
In this above scenario, is my GRAYSCALE part of the calibration still applicable for the Bravo? Or is the fact that I calibrated grayscale with 0-255 and am now using 16-235 for my DVD player going to throw off my grayscale?
Grayscale is really another term for color balance, color tracking, etc., and is a global setting (as far as inputs are concerned) so it would be fine.
As far as colorspace is concerned, and assuming that the projector is properly designed (which it seems to be in this regard) it would make no difference as long as you match colorspace settings on source and projector. So if your Bravo was outputting in the 16-235 space and you were set to a Video GBR mode on your Ruby (which is 16-235) it would track correctly.
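To make that concrete, here's a rough sketch of what a levels mismatch does to the picture. This is my own illustration with assumed simple 8-bit scaling, not anything ColorFacts or the Ruby actually does internally: if a 16-235 source is read as 0-255, black comes out as dark grey and white never reaches full output; the opposite mismatch crushes shadows and clips highlights.

```python
# Rough illustration (assumed 8-bit codes, not actual projector processing):
# what happens when the display interprets incoming levels with the wrong range.

def interpret(code, display_expects_full_range):
    """Return the displayed light level (0.0 = black, 1.0 = white) for an
    incoming 8-bit code, given how the display maps codes to output."""
    if display_expects_full_range:          # display assumes 0-255
        return code / 255.0
    # display assumes 16-235: 16 -> black, 235 -> white, outside is clipped
    return min(max(code - 16, 0), 219) / 219.0

# A 16-235 source fed to a display set for 0-255 (e.g. Bravo in video levels,
# Ruby in Computer mode): blacks are raised, whites are dim.
print(interpret(16, True))    # ~0.063 -> "black" shows as dark grey
print(interpret(235, True))   # ~0.922 -> "white" falls short of full output

# A 0-255 source fed to a display set for 16-235: shadows crush, highlights clip.
print(interpret(8, False))    # 0.0 -> near-black detail is lost
print(interpret(245, False))  # 1.0 -> near-white detail is lost

# Matched settings on both ends: the endpoints line up and tracking is correct.
print(interpret(16, False), interpret(235, False))   # 0.0 1.0
print(interpret(0, True), interpret(255, True))      # 0.0 1.0
```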
Also I assume that grayscale aside, I cannot properly calibrate brightness using ColorFacts and the BC for Video sources, since CF presumably will be using 0-255 and my player using 16-235?
If I understand this question correctly, that assumption would be wrong. Remember that in the 16-235 space 16 is black, equivalent to 0-255's 0, and 235 in the 16-235 space is white, just like 255 in the 0-255 space. So as long as your Ruby input colorspace matches the source colorspace your calibrations will stand.
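If it helps to see the arithmetic, here's a quick sketch (again my own illustration, assuming plain 8-bit scaling and ignoring rounding subtleties) of how the two ranges line up: the span from 16 to 235 covers exactly the same black-to-white range as 0 to 255, so the endpoints map onto each other and your black and white calibration targets stay the same in either colorspace.

```python
# Quick sketch (assumed simple 8-bit scaling): converting between
# video levels (16-235) and PC levels (0-255).

def video_to_full(code):
    """Expand a 16-235 code to 0-255; values outside 16-235 are clipped."""
    return round(min(max(code - 16, 0), 219) * 255 / 219)

def full_to_video(code):
    """Compress a 0-255 code into the 16-235 range."""
    return 16 + round(code * 219 / 255)

# The endpoints map onto each other, so black and white are the same targets
# whichever colorspace the source and projector agree on.
print(video_to_full(16), video_to_full(235))   # 0 255
print(full_to_video(0), full_to_video(255))    # 16 235
```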
I've heard this a lot lately but I do not follow the logic here. Please help me understand this. I understand the point that ideally the pj is calibrated to a spec, and that way it works great as long as the source adheres perfectly to the spec. If I were a pj manufacturer calibrating for D65 out of the factory, that is how I would do it.
But to me it makes more sense just to calibrate directly from the DVD player. For instance I use the Bravo D1 to output 720p or 1080i. If I use the Bravo to display the test patterns (instead of CF) and calibrate grayscale off of that, don't I then wind up with a calibration at D65 that is corrected for any errors in my source?
This way, if the Bravo has errors, I am correcting for them to get back to D65. Now granted this calibration is only applicable for use with my DVD player, but as long as I can use different settings for different sources, logically this seems to me to be the better way to go.
Then how would you calibrate for your HD receiver? See, the whole thing is about standardizing and having a point of reference. It's done this way professionally in both audio and video worlds so you can judge the balance of what you're viewing. In a home environment it's less rigid, of course, but the concept still makes sense there too if you have or plan to have more than one source.
DVDs vary significantly in calibration during production and transfer (mastering), and to a lesser extent, so do HD broadcasts. If your display is calibrated to a standard, you'll find there is much less objectionable deviation between sources. What you'll see more clearly is differences in production style, differences in transfers, obvious faults on the broadcaster's side, etc. If you then choose to fine tune, you can, but you still know where the reference settings are. Without a standard you would be adjusting all the time and never really know how far off or in what manner the source is incorrect.
Does this help any?