Originally Posted by P Smith
As I wrote before, there is no need for manipulations with buffers and insertions (I think the station's muxer/network is to blame): ROVI must change the firmware so that the TTL of the time-stamp packets is less than 15 seconds, since 15 seconds is the regular interval.
Then any delayed time-stamp packets would be discarded at a network router and never reach our devices, TVs and DVRs.
Perhaps adding a priority flag to the packets would also help; the packets are so small (~50 bytes, sent once per 15 seconds) that prioritizing them really would not harm the video/audio of the main channels.
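(For reference: the IP TTL field is a hop count, not a time value, so the proposal above effectively amounts to an application-level expiry check on each time-stamp packet. A minimal receiver-side sketch of that rule, with the 15-second limit taken from the quoted post and the function and parameter names being my own assumptions:)

```python
import time

MAX_AGE = 15  # seconds: one time-stamp packet is expected per 15 s interval

def is_stale(packet_timestamp, now=None):
    """Return True if a time-stamp packet is older than one 15 s interval
    and should be dropped instead of being applied to the device clock."""
    if now is None:
        now = time.time()
    return now - packet_timestamp > MAX_AGE
```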
I disagree. I think the skew is occurring inside the ROVI box, in a FIFO buffer, and not downstream of the ROVI box within the station infrastructure. Here are my reasons:
1) The KEYE Tech Director told me explicitly that there is no MUX or buffer of any kind downstream of the ROVI box. He said that ROVI provided two installation options for their encoder: A) UPSTREAM of the station MUX, with a dedicated bandwidth allocation assigned to the encoder within the MUX; or B) DOWNSTREAM of the station MUX, with no need to pre-assign any ATSC bandwidth to the encoder. Not surprisingly, KEYE and apparently many other stations chose the DOWNSTREAM option because it requires much less effort on their part and imposes no burden on their ATSC bandwidth.
2) The published FIX from ROVI is to RELOCATE their encoder from the DOWNSTREAM position to the UPSTREAM position within the station infrastructure and assign the required bandwidth to the encoder within the station MUX. Essentially they are admitting that their original instructions to the stations were erroneous: there really is only one robust installation option for their encoder, not two as originally advised. If the clock skew observed at KEYE and other stations were occurring within the station infrastructure downstream of the encoder, as you suggest, then physically relocating the encoder further upstream would be meaningless.
3) As WillN937 pointed out, we never see skew greater than 7 minutes in any broadcast market. If the skew were occurring downstream of the encoder in some station-owned piece of hardware, I would expect to see more variation in the maximum magnitude of skew between stations. That would require every station to use the exact same model of ATSC MUX, or every model of ATSC MUX on the market to have the exact same buffer capacity, both of which seem very unlikely.
4) The fact that the stream logs from Austin show timestamps present every 15 seconds, regardless of whether they are "fresh" (real-time) or "stale" (skewed), means there is some kind of dual-mode behavior at the point of buffering (wherever it is). If a downstream MUX or other station-owned hardware were doing the buffering, how could it reliably insert a timestamp at perfect 15-second intervals and yet never let the maximum skew exceed 7 minutes?
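A fixed-capacity FIFO inside the encoder would reproduce both observations at once: emission at perfect 15-second intervals, and a hard ceiling on skew set by the buffer depth. A minimal simulation sketch, where the 28-slot capacity is my own assumption (7 minutes / 15 seconds = 28) and the "stall" that fills the buffer is hypothetical:

```python
from collections import deque

INTERVAL = 15                  # seconds between time-stamp packets (observed)
CAPACITY = 7 * 60 // INTERVAL  # 28 slots -- assumed: 7 min max skew / 15 s

def simulate(stall_ticks, total_ticks):
    """Model a fixed-depth FIFO inside the encoder. For the first
    `stall_ticks` the output is blocked (hypothetical), so fresh timestamps
    pile up and the oldest is dropped once the buffer is full. After that,
    one packet is emitted every 15 s. Returns the skew of each emission."""
    fifo = deque(maxlen=CAPACITY)  # deque drops the oldest entry when full
    skews = []
    for tick in range(total_ticks):
        now = tick * INTERVAL
        fifo.append(now)           # a fresh timestamp is generated every 15 s
        if tick >= stall_ticks:    # output unblocked: emit the oldest entry
            skews.append(now - fifo.popleft())
    return skews

# A long stall drives the skew up to the buffer depth, but never beyond it,
# while the emission interval stays a perfect 15 s by construction:
print(max(simulate(stall_ticks=40, total_ticks=60)))  # plateaus below 7 min
```

With no stall, every emitted timestamp is fresh (skew 0); with any stall longer than the buffer depth, the skew plateaus just under 7 minutes. That is exactly the dual-mode, capped behavior seen in the Austin logs.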
As I said, all we have is a working THEORY about what goes on within the station infrastructure to generate the skew, but I think the idea that the buffering occurs internal to the ROVI encoder is the simplest explanation that accounts for ALL the observations made to date. And that is how the scientific method works, right? The simplest theory that fits the data is assumed correct until either new data contradicts it, or somebody comes up with an even simpler explanation that still explains all the observations...