Originally Posted by Wendell R. Breland
I can't speak for D5, but for HDCam the proper way to make a clone/dub using a studio deck is to use a special dubbing board. It takes the error-corrected data from one machine and feeds it to the second machine (this may be built into recent machines). As long as there are no errors in the data, the number of generations can be very large.
Every time you retrieve and save a computer file you are performing a similar process. Error correction is the key element in most digital storage processes. Take some of your CDs and back-light them; you have a good chance of seeing a pinhole or two. Each one means a chunk of data is lost, but with error correction the data will be fully recovered in most cases.
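To make the idea concrete, here's a toy sketch of single-error correction using a Hamming(7,4) code. This is not what CDs actually use (they use much stronger cross-interleaved Reed-Solomon codes), but the principle is the same: redundant parity bits let the reader rebuild a lost or flipped bit exactly.

```python
# Toy Hamming(7,4) code: 4 data bits + 3 parity bits, corrects any
# single flipped bit. Real optical discs use cross-interleaved
# Reed-Solomon (CIRC), but the recovery principle is identical.

def hamming74_encode(d):
    """d: list of 4 data bits -> 7-bit codeword [p1,p2,d1,p3,d2,d3,d4]."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4
    p2 = d1 ^ d3 ^ d4
    p3 = d2 ^ d3 ^ d4
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_decode(c):
    """c: 7-bit codeword, possibly with one flipped bit -> 4 data bits."""
    c = list(c)
    # The syndrome bits spell out the 1-based position of the error
    # (0 means no error detected).
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    pos = s1 + 2 * s2 + 4 * s3
    if pos:
        c[pos - 1] ^= 1          # flip the bad bit back
    return [c[2], c[4], c[5], c[6]]

data = [1, 0, 1, 1]
word = hamming74_encode(data)
word[4] ^= 1                      # simulate a "pinhole": flip one bit
assert hamming74_decode(word) == data   # data fully recovered
```

That last assertion is the whole point of the quoted post: as long as the damage stays within what the code can correct, the dub is bit-exact.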
I agree, with one exception: whoever is in control of the overall production and post process must adhere to all of the rules of good video engineering. I find all too often that someone "assumes" something is true, only to discover after the fact that it wasn't, or that someone else downstream "assumed" something different. It happens every day in Hollywood.
I'll give you a good example. There is a very good high-end professional video device which will go unmentioned. Its software I/O GUI has two settings, one named something like "Full" and the other named "Head" (or something like that; I can never remember what they were talking about). It has 10-bit HD-SDI I/O intended to patch into a mainstream router or D5 machines.
There is a difference in mapping from reference black to peak white. Is it 16-235 (8-bit) or 64-940 (10-bit)? Do you clip off the LSBs, or do you round? What happens with video overshoot and undershoot? Note that throwing away the undershoots and overshoots is NOT the right thing to do with video, if you've ever studied signal processing. I could go on.
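Here's a rough sketch of the ambiguity (function names are mine, not from any real device). Three perfectly plausible ways to take a 10-bit SMPTE-range sample (black at 64, white at 940) down to 8-bit (black at 16, white at 235), and they don't agree, especially on the under/overshoots:

```python
# Three "reasonable" 10-bit -> 8-bit conversions that disagree.
# (Hypothetical illustration; not any particular device's behavior.)

def shift_lsbs(v10):
    """Just drop the two LSBs: 64 -> 16, 940 -> 235. Truncates."""
    return v10 >> 2

def round_lsbs(v10):
    """Round instead of truncating."""
    return (v10 + 2) >> 2

def clip_range(v10):
    """Drop LSBs AND clip to 16-235 -- destroys under/overshoots."""
    return max(16, min(235, v10 >> 2))

sample = 39                    # a legal 10-bit undershoot below black
assert shift_lsbs(sample) == 9    # undershoot preserved
assert clip_range(sample) == 16   # undershoot thrown away for good
```

Each function is defensible in isolation; the trouble starts when two boxes in the same signal chain pick different ones.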
If you patch through this mess even once during a dub, you can get horrible picture quality. That has nothing to do with the movie, the compression, or the source content.
You might think this is a trivial problem, but when you only have 8-10 bits to work with, the lookup tables introduce considerable quantization noise. Each piece of equipment may make a different assumption, and the noise accumulates with each pass, differently each time, when different equipment interprets the code values in different ways or rounds up versus down.
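The accumulation is easy to demonstrate. In this hypothetical sketch, device A converts 10-bit to 8-bit assuming full range (0-1023 maps to 0-255), while device B converts back assuming SMPTE range (16-235 maps to 64-940). Neither is "wrong" on its own, but every generation through the mismatched pair pushes the code value further off:

```python
# Hypothetical mismatched pair: A assumes full-range levels,
# B assumes SMPTE-range levels. Watch a code value drift per generation.

def a_to_8bit(v10):
    """Device A: full-range assumption, 0-1023 -> 0-255."""
    return round(v10 * 255 / 1023)

def b_to_10bit(v8):
    """Device B: SMPTE-range assumption, 16 -> 64 and 235 -> 940."""
    return round((v8 - 16) * (940 - 64) / (235 - 16)) + 64

v = 700                        # a 10-bit luma code near peak white
for gen in range(5):
    v = b_to_10bit(a_to_8bit(v))
    print(gen + 1, v)          # 696, 692, 688, 684, 680 -- darker each pass
```

After five generations the value has dropped from 700 to 680, and that's before any compression touches it. This is exactly the kind of error that never shows up in a single dub but ruins a long chain.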
HD-SDI is not deterministic, despite what anyone might tell you. Digital isn't perfect; it is quantized analog information that can be interpreted in any one of several different ways depending on the make and model.
Then add the inherent compression of a D5 machine. I can see D5 compression artifacts on a good Sony CRT monitor. It isn't a transparent master transfer to begin with, even if everything is calibrated off the telecine.
Then, if you're tweaking the bits coming in because of all the things I mentioned above (let alone color correction, grading, and grain), the compression on the way out to the high-def DVD is going to do completely weird things.