The frequency of the channel doesn't affect picture quality (aside from blocking/tiling if you have levels issues). Picture quality is dependent on the bitrate of the video, and the bitrate of the video is dependent on who is doing the encoding. Since the encoding is generally handled by the service provider or satellite provider, it can vary. Some digital channels will look better than others because of the bitrates. If you look at HITS' service offering on their webpage, you'll see that in some cases they cram 10-12 services into a 64 QAM pod... at 10 services, that's around 2.5 Mbps per stream. For decent video quality, you'll usually find 3.5 - 4.0 Mbps of video stream bandwidth.
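To put rough numbers on that, here's a quick back-of-the-envelope sketch in Python. The ~27 Mbps payload figure for a 64 QAM channel (ITU-T J.83 Annex B) is my assumption; your plant's actual modulation and overhead may differ:

```python
# Rough math for carving a 64 QAM mux into services.
# Assumed payload: ~26.97 Mbps (ITU-T J.83 Annex B, 64 QAM).
QAM64_PAYLOAD_MBPS = 26.97

def per_stream_mbps(payload_mbps: float, num_services: int) -> float:
    """Average bandwidth available to each service in the mux."""
    return payload_mbps / num_services

for services in (10, 12):
    print(f"{services} services -> "
          f"{per_stream_mbps(QAM64_PAYLOAD_MBPS, services):.2f} Mbps each")
```

At 10 services that works out to about 2.7 Mbps per stream, and at 12 services about 2.25 Mbps, both well under the 3.5 - 4.0 Mbps you'd want for decent PQ.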
When dealing with digital video from Sat (DirecTV/Dish) or from Cable Cos, there is no compression in the ZIP/RAR sense. Digital video in this form isn't compressed that way; the bitrate is just changed to raise or lower the overall bandwidth that the stream uses. Sat and Cable providers use MPEG-2 as their encoding standard. There are other encoding standards out there like MPEG-4 and DivX, but those are primarily computer formats and you won't find them in use by Sat/Cable for quite some time to come. In the end, it's all about the manipulation of bitrates and encoding, not compression. Compression would be taking that 3.5 Mbps video stream, dumping it to a file on a computer, and then ZIPing or RARing it into a compressed file (which of course can't be played until it's uncompressed).
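For a concrete picture of what I mean by compression, here's a minimal Python sketch using zlib (the same DEFLATE algorithm ZIP uses). The filename capture.ts is hypothetical; substitute any stream dump you have on disk:

```python
import zlib

# "Compression" in the file sense: take the bytes of a dumped stream
# and deflate them into a smaller blob. capture.ts is a hypothetical
# transport stream dump sitting on disk.
with open("capture.ts", "rb") as f:
    raw = f.read()

packed = zlib.compress(raw, 9)
print(f"original: {len(raw)} bytes, compressed: {len(packed)} bytes")

# The compressed blob is useless to a player until it's reversed:
restored = zlib.decompress(packed)
assert restored == raw
```

That round trip is the key difference: a player can't do anything with the packed bytes until they're decompressed back to the original stream, whereas a lower-bitrate encode plays directly.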
Without getting deep into the technical details of how MPEG video works, this is about the best explanation I can give.
You can usually tell the difference between a picture quality (PQ) issue and a levels issue affecting the digital data itself by what you see.
If you are seeing tiling, where parts of the screen (or the whole screen) break up into a checkerboard and square pieces of the picture come and go (or you get something like "One Moment Please" on Moto boxes), you have a levels issue.
If you are seeing things like color blending... this one is harder to explain. Take, for instance, looking at someone's teeth. In real life you can see their individual white teeth with tiny black gaps/lines in between. At a lower bitrate, the encoding process averages the pixels more, so instead of seeing individual teeth, you might see those black lines become larger gaps. Or look at someone's eyes: normally you can see differences in color, but the lower the bitrate, the more pixel averaging the encoder does, and you'll start to see a more uniform color (generally black, as the encoding process averages the pupil to be wider). The lower the bitrate, the fuzzier and/or grainier the picture will appear.
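Here's a toy Python/numpy sketch of that averaging idea. It's not how MPEG-2 actually works internally (MPEG-2 quantizes DCT coefficients rather than literally averaging pixels), but it shows how fine detail inside a block collapses toward one uniform value. The 8x8 block size mirrors MPEG-2's DCT blocks:

```python
import numpy as np

def block_average(frame: np.ndarray, block: int = 8) -> np.ndarray:
    """Replace each block x block tile with its mean value -- a crude
    stand-in for the detail loss from coarse low-bitrate quantization."""
    h, w = frame.shape
    out = frame.astype(float).copy()
    for y in range(0, h, block):
        for x in range(0, w, block):
            out[y:y+block, x:x+block] = out[y:y+block, x:x+block].mean()
    return out

# Fake 64x64 luma frame with lots of fine detail (random values).
frame = np.random.randint(0, 256, (64, 64))
smeared = block_average(frame)

# Detail within a block (e.g. thin dark lines between teeth, color
# variation in an eye) goes to zero after averaging.
print("detail before:", frame[:8, :8].std())
print("detail after: ", smeared[:8, :8].std())  # prints 0.0
```

The harder the encoder has to squeeze the bitrate, the coarser this kind of averaging effectively gets, which is why the teeth blur together and the eyes lose their color variation.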