AVS Forum banner
Status
Not open for further replies.
1 - 20 of 37 Posts

·
Registered
Joined
·
43 Posts
Discussion Starter · #1 ·
So, for some reason (which I can't even remember now), I was trying to do the math related to the bandwidth of uncompressed HDTV. Suddenly, I realized that, even though I know the party line concerning how much bandwidth an HD image needs, I had no idea how it got there. Let me demonstrate:


The top HDTV resolution (1920x1080) has about 2 million pixels per screen. Every full, uncompressed frame needs to send color information for those 2 million pixels. Assume for now that it's sending that information at 30 frames per second. Also assume that DTV uses 8-bit color resolution (probably wrong, but I couldn't easily scare up the right number). So, by my math:


2,000,000 pixels/frame * 30 frames/second = 60,000,000 pixels/second

60,000,000 pixels/second * 8 bits/pixel = 480,000,000 bits/second.


So, as you can see, I get 480 Mb/s. Now, this is a far cry from the 1.5 Gb/s that I hear bandied around a great deal. Sure, there's going to be some overhead to the data (sync signals, packet ordering, etc), but is it really 1 Gb/s of overhead? I doubt it. Clearly I'm missing something. Am I just estimating too low of a color density? I suppose going up to 24 bit color would make the difference. Anybody?
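As a quick check on the arithmetic, here is the same estimate in a few lines of Python, using the exact 1920x1080 pixel count rather than the rounded 2 million:

```python
# Raw bitrate for 1920x1080 at 30 fps, at 8 and 24 bits per pixel.
pixels = 1920 * 1080          # ~2.07 million pixels per frame
fps = 30

for bpp in (8, 24):
    bps = pixels * fps * bpp
    print(f"{bpp:2d} bpp -> {bps / 1e6:.0f} Mb/s")
# 8 bpp gives ~498 Mb/s; 24 bpp lands at ~1493 Mb/s, near the 1.5 Gb/s figure.
```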


Nathan
 

·
Registered
Joined
·
43 Posts
Discussion Starter · #3 ·
Quote:
Originally posted by jckessler
24 bits is more likely. 8 bit color is only 256 different colors, and would look terrible.
Hmm, of course you're right. I saw something about 8 bit color in another post and didn't even think about it. I suppose it is probably just 24 bit color that makes up the extra space, then.


Nathan
 

·
Registered
Joined
·
114 Posts
I don't think it's that bad. The raw capture may be 1920x1080 x FPS x 24 bits, but it's quickly reduced. The actual encoding is 4:2:2: the luma and the two color-difference channels are encoded at different resolutions. Apparently the eye is less sensitive to differences in shade than to differences in brightness.
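The effect of that subsampling on average bits per pixel can be made concrete. In the usual J:a:b notation, a block J pixels wide and two rows tall carries one Y sample per pixel plus a+b samples each of Cb and Cr. A short sketch (the helper name is mine, and it assumes 8 bits per sample; 10-bit professional formats scale the same way):

```python
# Average bits per pixel under common chroma-subsampling schemes.
# J:a:b describes a block J pixels wide by 2 rows tall:
#   J luma samples per row, a chroma samples in row 1, b in row 2.
def avg_bits_per_pixel(j, a, b, bits=8):
    luma = 2 * j                  # one Y sample per pixel, two rows
    chroma = 2 * (a + b)          # Cb and Cr each contribute a + b samples
    return bits * (luma + chroma) / (2 * j)

for scheme in ((4, 4, 4), (4, 2, 2), (4, 2, 0)):
    print(scheme, avg_bits_per_pixel(*scheme), "bits/pixel")
# 4:4:4 -> 24, 4:2:2 -> 16, 4:2:0 -> 12 bits/pixel
```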
 

·
Registered
Joined
·
43 Posts
Discussion Starter · #5 ·
Yeah, I understand that compression can very quickly reduce the amount of bandwidth necessary. I was pretty much just curious about the raw capture, since that's the number that comes up when folks are talking about working with a DVI feed.


Video compression does pretty impressive things with the size of that bitstream, but it would still be nice to lose the artifacts, either through using a raw bitstream or a lossless compression mechanism. Sadly, I think that sort of bandwidth and storage are still well into the future.


Nathan
 

·
Registered
Joined
·
9,223 Posts
Here are the numbers from SMPTE 274M, the standard for 1080i HDTV:


You actually have 2200x1125. 1920x1080 is only the visible picture.


Y is filtered to 30 MHz and sampled at 74.25 MHz. Pb and Pr are limited to 15 MHz and sampled at half that, or 37.125 MHz. Yes, Nyquist would allow the Y to go as high as 37.125 MHz, but no filter is a perfect cliff, so 30 MHz is the 0 dB point. Likewise 15 MHz for the half-sampled chroma channels.


There are 10 bits of Y and 10 bits of Pb/Pr muxed together, so the combined data rate is 148.5 MHz in 10-bit parallel form. Serializing that is a simple multiplication by 10, so we get 1.485 Gb/s, rounded to the 1.5 Gb/s you hear about. As for framing control of the serial stream, that is encoded in the data as reserved words.
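Those sample clocks can be cross-checked in a few lines (a sketch of the arithmetic, not anything from the standard's text):

```python
# Checking the SMPTE 274M serial rate from the sample clocks quoted above.
y_rate = 74.25e6              # Y sample clock in Hz
c_rate = 37.125e6             # Pb and Pr each sampled at half the Y clock
bits = 10                     # 10-bit samples

assert 2200 * 1125 * 30 == y_rate          # full raster x frame rate = Y clock
total_samples = y_rate + 2 * c_rate        # 148.5 Msamples/s muxed together
serial_bps = total_samples * bits          # serialize: multiply by 10
print(serial_bps / 1e9, "Gb/s")            # 1.485 Gb/s
```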


1280x720 uses exactly the same bandwidth because you have 60 frames versus 30.


1920x1080/24p simply inserts padding words (null data) to make it look like 1920x1080i, so it's compatible with 1080i distribution equipment. That's not the same as adding 3:2 pulldown, which actually turns 24p into 1080/30i.
 

·
Registered
Joined
·
43 Posts
Discussion Starter · #7 ·
Well, I guess I really SHOULD be careful what I ask for! Glimmie, thanks for the info. This whole thing is much more complicated than I had realized. I'll trust the 1.5 Gb/s figure a little more now.


Nathan
 

·
Banned
Joined
·
3,772 Posts
Thanks Glimmie!


We computer types have to cross our eyes a bit to make sense of the video/broadcast terminology you guys use on the production side.


The numbers glimmie uses are for the raw feeds from HD cameras, and are sent over a different interface than Nathan describes.


If we capture ATSC 1080i into a computer and start to talk about datarates for "decompressed" dataflow turned into RGB and bytes in a computer, the terminology is different but we are still dealing with similar magnitudes of bandwidth.
 

·
Registered
Joined
·
9,884 Posts
This thread sort of points out again the cats and dogs difference in HDTV between the computer and TV engineers. ;)


TV types tend to talk about signal bandwidths and frequencies. And they count the cycles and bits in places like overscan that don't have any useful picture information. But they have to be concerned with the electrical characteristics and timing of real world devices, so that makes sense.


Computer types tend to think about the actual data, with the idea that it magically gets to go perfectly from one place to another, with no errors except program bugs. This leads them to think in terms of bytes and data structures.


Like PVR I'm more of a computer type. I've worked with various open source MPEG-2 and MPEG-4 software decoders and while the original HDTV signal may have 10 or more bits I can tell you that all the software decoders I've seen will first decode to an intermediate format called planar 4:2:0. This is a YUV format that effectively for each pixel contains 8 bits each for the Y, U, and V components.


But for the U and V components there is only one byte each for every 4 visible pixels. So it all averages out to 12 bits / pixel. This is even true for the MPEG-2 decoder sample source code at www.mpeg.org, put out there by the folks that developed the standard.


But then, at least in computer storage, you have 12 bits / pixel. HDTV 1080i is actually sent as 1920x1088 at 30 frames per second. So, not counting other ATSC overhead, since we are talking about uncompressed (received) data, we have:


1920 x 1088 x 30 x 12 = 752 Mbps


or 94 megabytes / second if we wanted to store it raw.
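The same arithmetic, sketched in Python:

```python
# Decoded planar-4:2:0 storage rate for ATSC 1080i, as computed above.
w, h, fps = 1920, 1088, 30    # coded height is 1088 (a multiple of 16)
bpp = 12                      # planar 4:2:0 averages 12 bits per pixel

bps = w * h * fps * bpp
print(round(bps / 1e6), "Mbps")        # ~752 Mbps
print(round(bps / 8 / 1e6), "MB/s")    # ~94 MB/s if stored raw
```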


This is about half the RGB estimate of 1.5 Gbps that is sometimes posted but the only difference is the average number of bits / pixel. And of course it would be more if you could accurately extract 10 bit components from the Y, U, and V transmitted. I don't know if that's possible or not but after the compression (quantization) it's gone through I suspect there would be nothing more there anyway.


- Tom
 

·
Registered
Joined
·
2,171 Posts
Excellent discussion, guys. Glimmie, Tom - thanks for the detailed posts.


OK, so how do you get from the 1.5 Gbps (as it's presented to the antenna - and that's not counting the AC3 audio stream) to the 19.1 Mbps "Transport Stream" (which includes the audio) that HDTV PCI cards write to hard disk?
 

·
Registered
Joined
·
3,594 Posts
Quote:
Originally posted by ElvisIncognito
Excellent discussion, guys. Glimmie, Tom - thanks for the detailed posts.


OK, so how do you get from the 1.5 Gbps (as it's presented to the antenna - and that's not counting the AC3 audio stream) to the 19.1 Mbps "Transport Stream" (which includes the audio) that HDTV PCI cards write to hard disk?
The 19.3 Mbps Transport Stream is what's being broadcast (and what your antenna is receiving). The 1.5 Gbps is compressed to whatever video rate is being used (up to 17 Mbps) by an HD MPEG-2 encoder at the transmitter.


Ron
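Ron's numbers imply a very large compression ratio, in the same ballpark as the 75:1 figure quoted later in the thread. A quick sketch of the arithmetic:

```python
# Rough compression ratio implied by the rates above: the 1.485 Gb/s
# studio feed squeezed into an ATSC video payload of up to ~17 Mb/s.
raw_bps = 1.485e9
video_bps = 17e6

ratio = raw_bps / video_bps
print(f"about {ratio:.0f}:1")   # about 87:1
```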
 

·
Registered
Joined
·
2,171 Posts
Interesting... Thanks for the response, Ron.


So, apparently, if I want to effectively process (external) DVI-D (which, as of now, is unlikely to exceed the bandwidth requirements of an HD broadcast) and bring it into an HTPC at a rate that's compatible with the PCI bus, then all I need to do is provide compression similar to that used by ATSC broadcasters?


Presumably this is not some mystical black-box de/compression algorithm, right? The spec (or at least a chipset) must be available to anyone that wants to create an HDTV decoder card, right? There could be no top-secret encryption/decryption algorithms or HDCP would be unnecessary...?


Sorry for the barrage of questions, but in the 2nd ("LexiClone") link in my sig, you'll see that we're trying to do something fairly "groundbreaking" over in the HTPC forum - we're trying to create an external peripheral that will serve not only as a modular audio/video input/output switcher, but also as a high-end sound card and video capture device... It's an ambitious project, yes, but if we can pull it off, you'll not only be able to replace that $10K Faroudja video processor with an HTPC, but that $10K Lexicon (audio) preamp/processor as well.
 

·
Registered
LG 55" C9 OLED, Yamaha RX-A660, Monoprice 5.1.2 Speakers, WMC HTPC, TiVo Bolt, X1
Joined
·
45,766 Posts
Quote:
Originally posted by dr1394
The 19.3 Mbps Transport Stream is what's being broadcast (and what your antenna is receiving). The 1.5 Gbps is compressed to whatever video rate is being used (up to 17 Mbps) by an HD MPEG-2 encoder at the transmitter.


Ron
If I remember what I was told correctly, the major networks do one (or more?) level of compression before sending the signal to the affiliates. I think it goes to the uplink at 45 Mbps, but will check to be sure.
 

·
Banned
Joined
·
3,772 Posts
dr1394 knows all about "mezzanine compression" and such. 45Mbit/sec is a typical uplink datarate (since it matches ATM fiber networks, and some full sat transponders). Not all content goes through a step at that rate, but a good chunk does.

He was just giving us the short version: it starts at 1.5 Gbps and works its way down to 19.3 for ATSC OTA (and most DBS sat).


ElvisIncognito:

Getting from "uncompressed HD" (e.g. what you might see on DVI-D) down to broadcast rates is not really practical on the consumer side. Currently the HD MPEG2 encoders needed to go down to ATSC datarates are very expensive. They were probably $500,000 a few years ago, and now some

Another wrinkle: most devices which have DVI outputs are likely to encrypt with HDCP/HDMI. Unless your device is a "display only" monitor you are not likely to get a license to be able to decrypt the DVI stream into anything useful.
 

·
Registered
Joined
·
2,171 Posts
Quote:
Originally posted by PVR
Another wrinkle: most devices which have DVI outputs are likely to encrypt with HDCP/HDMI. Unless your device is a "display only" monitor you are not likely to get a license to be able to decrypt the DVI stream into anything useful.
I'm not sure I'd say "most", but at any rate, we are well aware of the HDCP issues. The plan is to support a DVI-I interface... for analog signals on the DVI-I cable, the plan is to
  1. pass the analog signal to an ADC/video capture device and from there to the PCI bus for software processing
  2. output to the switch matrix
  3. if the user chooses to do so, we would output from the switch matrix to the display device via DVI-D (assuming the display device can/will accept the signal) otherwise...
  4. convert the signal back to analog (using a module designed to do so) and send RGB out to the display device

We were hoping to (hardware) compress (non-HDCP-encrypted) DVI-D so that it could be PVR'ed, etc. but apparently that's not possible, so I guess we'll have to handle unencrypted DVI-D the same way we had planned to handle HDMI - pass it straight through (literally, via relays or other solid connections that will not fail the continuous re-authentication process) to the output and from there to the viewing device.
 

·
Registered
Joined
·
9,223 Posts
Quote:
Originally posted by PVR
Currently the HD MPEG2 encoders needed to go down to ATSC datarates are very expensive. They were probably $500,000 a few years ago, and now some
Today you can buy a Harmonic (ex Divicom encoder) for $40K. A Tandberg E5821 which also does SD goes for $50K. JVC announced a $15K black box based hardware encoder, probably based on their card described below.


I saw two PCI card hardware based HD encoders at NAB. One from JVC and one from Doremi Labs. No price point yet but I would imagine they will be >$10K.


The Doremi card looked pretty good from an integration perspective. It takes in HDSDI with embedded audio and will output DVB ASI on another BNC connector. In addition it will place the transport stream on the PCI bus. A full developer kit will be available.


No info on the JVC card, no one knew anything about it and I don't know if there will be an open developer program.
 

·
Registered
Joined
·
2,171 Posts
This is all very surprising (and puzzling to me)...


If you can *decode* MPEG2 in software (and still have PLENTY of CPU cycles to spare) and/or you can buy an MPEG2 hardware decoder card for next to nothing, why is ENcoding so crazy expensive?
 

·
Registered
Joined
·
9,223 Posts
Quote:
Originally posted by ElvisIncognito
This is all very surprising (and puzzling to me)...


If you can *decode* MPEG2 in software (and still have PLENTY of CPU cycles to spare) and/or you can buy an MPEG2 hardware decoder card for next to nothing, why is ENcoding so crazy expensive?
I wouldn't call north of $20K "next to nothing". Likewise, how much these days for an SD MPEG encoder? Well, considering every non-satellite TiVo has one, they are pretty cheap. Even professional DVD encoder PCI cards are less than $5K these days. It's just market demand. Once there is more call for HD encoders, the price will fall even more. But keep in mind there won't be much of a consumer push for an HD MPEG encoder, as content arrives at the home pre-compressed.


As for the CPU horsepower, MPEG is very complex to encode and very easy to decode. That's the way it was designed, so the consumer part is cheap and simple.
 

·
Registered
Joined
·
2,171 Posts
Quote:
Originally posted by Glimmie
I wouldn't call north of $20K "next to nothing".
I was referring to the inexpensive decoder cards that provide hardware decoding for DVD-ROM drives.
 

·
Banned
Joined
·
3,772 Posts
That's the way MPEG2 works - decoding is "easy" and encoding is "hard".


Basically the encoder has to queue up a whole bunch of frames and analyze them. It can do "forward and reverse temporal compression" (AKA "predictive encoding"), so the compressed data is actually taking notes on what is going to happen in the future. It can work on a frame and say things like "this frame is similar to the one I encoded 3 frames ago, or to the one 3 frames ahead". I am over-simplifying, but basically the encoding process can be very complex. The higher-priced hardware can do a better job as well. That is the price to pay to get a 75:1 compression ratio.


If all you needed to do was compress about 3 to 1, then some much simpler encoding could be done. Digital VCRs (like D5) do this kind of simple compression. DV camcorders all do some sort of "low" compression (approx 5:1) to make the data more manageable. These simple/low-compression systems also produce data that is easier to edit, since it isn't so tricky to reconstruct the frames.


Doing HD MPEG2 decoding on a PC is still not a piece of cake with CPU to spare. A top of the line 2.4GHz PC doing 1920x1080 software decoding is likely to be using 90% of the CPU cycles.


The MPEG2 hardware decoder chips aren't exactly free. Cards like the HiPix, MyHD, etc are still over $200 each. HD set top boxes also start at a few hundred.


Another reason why the encoders cost roughly 100 times more than the decoders is just a matter of volume. They easily sell 100 HD decoders for every 1 encoder they can sell.

Not many consumers have HD source material that they need to encode...


By the way - fractal encoding takes this even further. The encoding is so difficult that prototype encoders have to spend hours chewing on the data for every minute of output.

The playback is much easier.


Sort of like the guy who spends years studying a problem... He finally finds the answer. It doesn't take long to tell others the result once the work has been done.
 