Originally Posted by JohnAd
It would be great to find out what actually happens. Would it also be possible to clarify the approval process for the various formats? Is the grading step the last time material changes are made to a typical master, or are significant manual adjustments made later in the process?
From what I can gather (and this is by no means concrete yet):
Film negative gets scanned to 10-bit log. This is then graded with reference to a print color model (profiled display plus 3D LUTs of the desired end print stock).
DCI and video versions are then created with conversion LUTs that go from the print stock to the desired standard target.
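For reference, the 10-bit log curve in that pipeline is usually Cineon-style. Here's a minimal sketch of the log-to-linear relationship; the constants (reference white at code 685, 0.002 density per code, 0.6 gamma) are the common Kodak reference values, and I'm deliberately ignoring black subtraction and soft clip, so treat this as an illustration rather than anyone's production maths:

```python
import math

def cineon_log_to_lin(cv, ref_white=685, density_per_code=0.002, gamma=0.6):
    """Simplified Cineon-style 10-bit log code value -> scene linear.
    Reference white (code 685) maps to linear 1.0; no black subtraction."""
    return 10.0 ** ((cv - ref_white) * density_per_code / gamma)

def lin_to_cineon_log(lin, ref_white=685, density_per_code=0.002, gamma=0.6):
    """Inverse of the above: scene linear -> log code value (unrounded)."""
    return ref_white + (gamma / density_per_code) * math.log10(lin)

print(cineon_log_to_lin(685))           # reference white -> 1.0
print(round(cineon_log_to_lin(95), 4))  # typical black point -> ~0.0108
```

The point to take away is how much of the code range sits below linear 1.0: the log encoding spends most of its codes on shadows and mids, which is exactly what makes it a good grading space for film scans.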
That's the hypothetical perfect model.
In the real world, what happens is that the print doesn't always fit nicely into the video range, and some compromises have to be subjectively decided upon.
However, I have come across some differing methods (excuse the rambling):
Some places convert to linear right at the start and view with suitable LUTs to give them an end-print environment for linear material. This makes zero difference to the first method, assuming the colorist knows what colorspace they are operating in (in my experience, companies that enforce a linear workflow tend to do it to stop the operator having to get involved in colorspace considerations, rather than for any immediate quality improvement).
I regularly come across people who tell me you can't grade in log, and it just makes me laugh: you need to know the pros and cons of each working space and move in and out accordingly. I also regularly come across work from companies that have enforced a linear workflow and find that the work carries a 10-code-value rounding error as a result of imprecise conversion and/or too many successive conversions compounding rounding errors. You could argue that 10 code values is nothing... not sure a good colorist would agree with that.
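To put a toy number on that: squeeze 10-bit log codes through a 10-bit *integer* linear intermediate and see how far the deep-shadow codes drift on the way back. The curve constants below are the usual Cineon-style reference values (white at code 685, 0.6 gamma, no black subtraction), which is my assumption about the encoding, not a claim about any specific facility's pipeline:

```python
import math

REF_WHITE, STEP, GAMMA = 685, 0.002, 0.6  # assumed Cineon-style constants

def log_to_lin(cv):
    return 10.0 ** ((cv - REF_WHITE) * STEP / GAMMA)

def lin_to_log(lin):
    return REF_WHITE + (GAMMA / STEP) * math.log10(lin)

def round_trip(cv):
    """Log code -> linear quantised to a 10-bit integer grid -> back to log."""
    lin_q = round(log_to_lin(cv) * 1023) / 1023
    return round(lin_to_log(max(lin_q, 0.5 / 1023)))  # clamp avoids log10(0)

errors = [(abs(cv - round_trip(cv)), cv) for cv in range(1, 1024)]
worst_err, worst_cv = max(errors)
print(worst_err, worst_cv)  # deep-shadow codes lose multiple code values
```

One integer linear intermediate is already enough to smear around ten code values together at the bottom of the range, because that's where log packs its codes and uniform linear has almost none; each extra conversion on a fresh grid just compounds it. Float intermediates are what avoid this.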
I've come across one place that uses rec.709 monitors, and a colorist there told me that to convert the film to video they just "take the film LUT off to see what it looks like as video", which is either a massive oversimplification of the process or the guy didn't really know what was going on. Interestingly enough, that company had the DI pulled a few months into production.
I also had a colorist talk to me about "some sort of linear video format for final delivery", which is of course an oxymoron. They also used the terms "2k" and "1080p" interchangeably, and managed to badly clip some material (and I mean badly!).
To be honest, you could take any colorspace as a working environment, as long as you were able to move into any others that were required and still get the desired controlled result. Some people may work in a limited video environment like rec.709 at float precision (maintaining all the data from the scan) and then bodge a conversion to whatever they need, but I kinda doubt it, as you are then never appraising with the largest set of visual precision regardless of the data you are shuttling about.
I work with log input through a print LUT, and move into linear and back out when I need to for specific types of operations. I reference it all back in log as print, though. I also linearise video as and when required, for the same reasons, and I hardly ever see other operators doing this (quite a few still seem to think video is linear anyway, purely because it's not log).
What I/we need to find out is where they are shooting for white when they convert to video. Unfortunately, a lot of the people you would think might know this either don't, or will answer 235 or 254 without explicitly knowing that for a fact.
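For anyone following along, the 235-versus-254 question is about which 8-bit convention reference white gets scaled to: video-legal white sits at 235 (black at 16), while a full-range encode tops out at 254 with codes 0 and 255 reserved in serial interfaces. A hypothetical sketch of the two scalings (the function name and the normalised 0-1 input are mine, purely for illustration):

```python
def to_8bit(v, full_range=False):
    """Scale a normalised 0.0-1.0 video value to 8-bit code values.
    Legal/studio range puts black at 16 and white at 235; the 'full'
    range here spans 1-254 (0 and 255 reserved for sync/timing)."""
    lo, hi = (1, 254) if full_range else (16, 235)
    return round(lo + v * (hi - lo))

print(to_8bit(1.0))                   # legal-range white -> 235
print(to_8bit(1.0, full_range=True))  # full-range white  -> 254
print(to_8bit(0.0))                   # legal-range black -> 16
```

A master scaled for one convention and delivered under the other ends up either clipped or washed out, which is exactly why the answer matters and why "235 or 254" without knowing which is not good enough.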