Originally Posted by lag0a
I just find it stupid that any HW decoder, whether NVIDIA, Intel, or AMD, needs built-in instructions to know which file format to decode. Shouldn't it be able to decode anything as long as it is a video file?
There are no built-in instructions exposed, only the DXVA (1, 2, HD) interfaces as well as the DDI (device driver interface).
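For illustration (not from the original discussion), here's a minimal C sketch using FFmpeg's libavutil that opens a DXVA2 device context. Everything below that interface belongs to the driver and HW; an application never sees "instructions", only whether the request succeeds:

[CODE]
/* Sketch: open a DXVA2 hardware device context via FFmpeg's libavutil.
 * Link against libavutil; error handling kept minimal for brevity. */
#include <stdio.h>
#include <libavutil/hwcontext.h>

int main(void)
{
    AVBufferRef *hw_device = NULL;

    /* Ask the OS/driver for a DXVA2 device; the driver only exposes
     * the decode profiles the fixed-function HW actually implements. */
    int err = av_hwdevice_ctx_create(&hw_device, AV_HWDEVICE_TYPE_DXVA2,
                                     NULL, NULL, 0);
    if (err < 0) {
        fprintf(stderr, "No DXVA2 device available (err %d)\n", err);
        return 1;
    }

    printf("DXVA2 device context created\n");
    av_buffer_unref(&hw_device);
    return 0;
}
[/CODE]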
Regarding supported file formats: these are usually container formats that hold zero or more video streams, zero or more audio streams, and zero or more subtitle streams. Parsing/demuxing the container format is computationally cheap and is done in SW. HW acceleration is reserved for CPU-intensive operations like decoding compressed video, in order to save power and gain speed at the expense of a larger chip die area.
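As a concrete example of how cheap demuxing is in SW (a sketch of my own, assuming FFmpeg's libavformat), this just opens a file and lists whatever streams the container happens to hold:

[CODE]
/* Sketch: demuxing is plain CPU work in libavformat; the container simply
 * lists its streams, which may be video, audio or subtitles. */
#include <stdio.h>
#include <libavformat/avformat.h>
#include <libavcodec/avcodec.h>

int main(int argc, char **argv)
{
    if (argc < 2) {
        fprintf(stderr, "usage: %s <file>\n", argv[0]);
        return 1;
    }

    AVFormatContext *fmt = NULL;
    if (avformat_open_input(&fmt, argv[1], NULL, NULL) < 0)
        return 1;
    if (avformat_find_stream_info(fmt, NULL) < 0)
        return 1;

    /* Walk the streams the demuxer found: zero or more of each type. */
    for (unsigned i = 0; i < fmt->nb_streams; i++) {
        const AVCodecParameters *par = fmt->streams[i]->codecpar;
        const char *type = av_get_media_type_string(par->codec_type);
        printf("stream %u: type=%s codec=%s\n", i,
               type ? type : "unknown",
               avcodec_get_name(par->codec_id));
    }

    avformat_close_input(&fmt);
    return 0;
}
[/CODE]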
Since each video format (H.264, MPEG-2, etc.) uses different algorithms to decode the video stream, adding more supported video formats requires more HW. That means more architects, HW designers, validation and enabling effort, as well as increased die area, which translates into a chip that is more expensive to manufacture.
So HW manufacturers decide which video formats are most important (useful) and implement those in HW.
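If you want to see what your vendor actually chose to put in silicon, here's a rough sketch, again assuming FFmpeg's libavcodec, that lists the HW decode configs exposed for a few codecs (the codec IDs are picked purely for illustration):

[CODE]
/* Sketch: ask FFmpeg which HW decode methods the driver exposes for a
 * given codec. Whatever the vendor left out of silicon won't show up. */
#include <stdio.h>
#include <libavcodec/avcodec.h>
#include <libavutil/hwcontext.h>

static void list_hw_configs(enum AVCodecID id)
{
    const AVCodec *codec = avcodec_find_decoder(id);
    if (!codec)
        return;

    printf("%s:\n", codec->name);
    for (int i = 0;; i++) {
        const AVCodecHWConfig *cfg = avcodec_get_hw_config(codec, i);
        if (!cfg)
            break;                          /* no more HW configs */
        if (cfg->methods & AV_CODEC_HW_CONFIG_METHOD_HW_DEVICE_CTX)
            printf("  hw device type: %s\n",
                   av_hwdevice_get_type_name(cfg->device_type));
    }
}

int main(void)
{
    /* The common formats vendors chose to put in fixed-function HW... */
    list_hw_configs(AV_CODEC_ID_H264);
    list_hw_configs(AV_CODEC_ID_HEVC);
    /* ...versus an older format that is typically left to the CPU. */
    list_hw_configs(AV_CODEC_ID_MSMPEG4V3);
    return 0;
}
[/CODE]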
A strong GPU may implement some or all of the decoding in a programmable fashion (CUDA, OpenCL, etc.), but this solution has a major drawback: it uses a lot of power.
Implementing in fixed function/ASIC means more effort to produce a decoder and increased die area, but the speed and power are fully optimized. There's a very big difference between the two approaches.
Many older video formats are limited in resolution and bitrate. Enabling them in HW would only increase GPU size/cost without reducing power significantly, since the CPU can decode them easily anyway.
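Putting it together, a typical player strategy looks roughly like the sketch below (FFmpeg-based and simplified; open_decoder is a hypothetical helper of mine, not a library function). It tries the HW path first, and if the driver doesn't expose that format, the same open call simply falls back to SW decoding on the CPU:

[CODE]
/* Sketch of the usual player strategy: prefer the fixed-function HW path,
 * fall back silently to CPU decoding for formats the vendor never
 * implemented in silicon. Simplified; real code also sets codec params. */
#include <libavcodec/avcodec.h>
#include <libavutil/hwcontext.h>

/* Returns an opened decoder context; *hw_dev stays NULL on the SW path. */
static AVCodecContext *open_decoder(enum AVCodecID id, AVBufferRef **hw_dev)
{
    const AVCodec *codec = avcodec_find_decoder(id);
    if (!codec)
        return NULL;

    AVCodecContext *ctx = avcodec_alloc_context3(codec);
    if (!ctx)
        return NULL;

    /* Prefer HW: only effective if the driver exposes this codec. */
    if (av_hwdevice_ctx_create(hw_dev, AV_HWDEVICE_TYPE_DXVA2,
                               NULL, NULL, 0) == 0)
        ctx->hw_device_ctx = av_buffer_ref(*hw_dev);

    /* With no usable HW config, this still opens a SW decoder. */
    if (avcodec_open2(ctx, codec, NULL) < 0) {
        avcodec_free_context(&ctx);
        return NULL;
    }
    return ctx;
}
[/CODE]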