Radeon and GeForce GPUs launched since 2006 feature an independent VPU (Video Processing Unit) embedded within the die. ATI calls it AVIVO/UVD and NVIDIA calls it PureVideo/PureVideo HD. I thought you knew, which is why I didn't mention it.
BTW, those VPUs are exclusive to GPUs. So when someone mentions a GPU (in the context of video processing), they mean the embedded VPU.
Processing video solely on the GPU is very difficult and requires advanced software and extensive user configuration (unless it is done at the hardware level, governed by the VPU). Before ATI and NVIDIA introduced VPUs, obtaining a high-quality image from a GPU was extremely difficult, so most people opted to use the CPU instead.
Originally Posted by specuvestor
So are you saying the TV chips are better than GPUs as VPs at this point in time? I am genuinely confused by rogo's statement, as I thought Nielo was implying otherwise. And I do not think the majority of 2010 VPs pass cadence tests, inverse telecine, etc. with flying colors, else there would be little use for external processors. Like I said, even Oppo gave up doing inverse telecine in their BDP93.
Which links to how we can say we are indifferent to i/p transmission. We know that some stations actually scale their 4:3 interlaced transmission to fit 16:9, and that is not going to make deinterlacing easier at the receiving backend.
All 2009/2010 Samsung sets we reviewed passed 2:3, and the ones with the Valencia processor passed 2:2 (which is more difficult to detect). All 2010 Sony TVs we reviewed passed 2:2 and 2:3 (and all 09 models passed 2:3).
LG and Philips didn't perform so well, and Panasonic completely failed the test. However, 2011 Panasonic sets can in fact detect and correctly process 2:2 and 2:3.
Modern GPUs can detect and process much more than just 2:2 and 2:3. That is why certain users opt for an HTPC instead of investing in expensive dedicated video processors.
http://www.xbitlabs.com/articles/vid...hd6850_11.html
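For anyone curious what a cadence test actually checks: 2:3 pulldown repeats one field out of every five, so a deinterlacer can detect the cadence by watching for a duplicate field recurring with period 5. A minimal sketch, purely illustrative (real VPUs compare actual field data in hardware; here the function name and the per-field difference scores are made up for the example):

```python
# Hypothetical sketch of 2:3 (3:2) cadence detection.
# Input: one "difference score" per field, where a near-zero score
# means the field duplicates an earlier one. Telecined 24fps film
# in 60i produces one duplicate field in every group of five.

def detect_23_cadence(field_diffs, threshold=1.0):
    """Return True if duplicate (low-difference) fields recur
    with a period of 5, the signature of 2:3 pulldown."""
    dup_positions = [i for i, d in enumerate(field_diffs) if d < threshold]
    if len(dup_positions) < 2:
        return False
    # In a locked 2:3 cadence, each duplicate field is exactly
    # 5 fields after the previous one.
    gaps = [b - a for a, b in zip(dup_positions, dup_positions[1:])]
    return all(g == 5 for g in gaps)

# Simulated scores: one near-zero (duplicate) field per 5-field group.
diffs = [9, 8, 0.2, 7, 9] * 4
print(detect_23_cadence(diffs))  # True
```

A 2:2 cadence is harder for exactly the reason hinted at above: progressive 25fps material in 50i has *no* repeated fields, so the processor has to judge whether paired fields weave together cleanly rather than spot an obvious duplicate.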