|Originally posted by MrGonk
|I was referring to the 34" viewable 16:9. I've never seen a PG monitor but I keep hearing that they're the reference standard for tube TVs.
I dunno. I think the only real reference-level video monitor they produced was the AF3.0HD. The AS and Ai series displays were apparently custom-modded by PG (the fact that the vision boxes for the Ai/AS series are supposedly incompatible with their AR series monitors would suggest they made modifications internal to the monitor, or even used a completely different housing), and those were perhaps pretty good, too.
However, I bought an AR2.7FTX, because it was still the best 4:3 direct-view I could find.
|I know the masking limits the resolution on tubes, but from what I understand the Aconda is capable of somewhere around 960 horizontal lines of resolution (though this is probably in large part due to its extreme size -- a smaller monitor simply would have less room for holes in the mask, correct?)
Yes, the limiting factor is the shadow mask. I think you may be confusing that resolution metric, though. Did they really say "960 horizontal lines of resolution" or "960 lines of horizontal resolution"? The first would be a vertical-resolution figure (a count of scanlines); the second is the horizontal one. It should also be noted that the figure may be quoted "per picture height" or "per picture width", to make matters even more confusing.
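To make the distinction concrete, here's my own arithmetic (not from any spec sheet): "horizontal resolution" in TV lines is conventionally counted across a horizontal span equal to the picture height, so converting to lines across the full width is just a multiplication by the aspect ratio:

[code]
# TVL arithmetic sketch (mine). "Horizontal resolution" in TV lines (TVL) is
# conventionally quoted per picture height: the number of distinct vertical
# lines resolvable across a horizontal span equal to the screen height.

def lines_per_full_width(tvl_per_height: float, aspect_w: int, aspect_h: int) -> float:
    return tvl_per_height * aspect_w / aspect_h

# If "960" meant TVL per picture height on a 16:9 set:
print(lines_per_full_width(960, 16, 9))   # ~1707 lines across the full width

# If "960" instead meant lines across the full width, the per-height figure
# would only be 960 * 9 / 16 = 540 TVL -- a very different claim.
[/code]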
Your assumption that dot pitch improves as monitor size increases is incorrect (past a point probably somewhere around 20"). On large displays, the mask resolution seems to decrease (i.e. larger dot pitch) faster than the viewable size of the picture tube increases. This would suggest that the structural integrity of the mask becomes the limiting factor past a certain point. The fact that exotic materials and technologies are employed to help maintain mask rigidity would tend to support the assertion that mask distortion and alignment are problems at these sizes.
For example, consider that the AR2.7FTX has a dot pitch of 0.68mm (I personally verified that it's within about +/- 0.03mm of this) and the AR3.2FTX has one of 0.9mm (based on the information I got from Monivision). Even taking into account the size difference, that's still worse. Furthermore, I assume Monivision used the best CRTs available at all sizes, given that the intent of these products is to display high-resolution computer graphics and HD material. However, at 29", a dot pitch of 0.68mm can't touch the 0.25mm dot pitch of the average 21" monitor. The two aren't even in the same ballpark. The 29" monitor can't fully display 1024x768, while the 21" monitor does better at 1600x1200 or more.
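For a rough back-of-envelope check (my own simplification, not anything from Monivision): treat the dot pitch as the horizontal spacing between phosphor triads -- it's usually quoted diagonally, so this slightly flatters the mask -- and the triad count across the viewable width caps the horizontal pixel count the tube can cleanly resolve:

[code]
# Back-of-envelope (mine): how many phosphor triads fit across a 4:3 tube?
# Simplification: treat dot pitch as the horizontal triad spacing. Real dot
# pitch is usually quoted diagonally, so these counts are a bit optimistic.

MM_PER_INCH = 25.4

def max_horizontal_triads(viewable_diag_in: float, dot_pitch_mm: float) -> int:
    width_in = viewable_diag_in * 4 / 5      # 4:3 tube: width = diagonal * 4/5
    return int(width_in * MM_PER_INCH / dot_pitch_mm)

# ~29" viewable at 0.68mm (the AR2.7FTX figures above):
print(max_horizontal_triads(29, 0.68))   # ~866 triads -- short of 1024

# ~20" viewable at 0.25mm (a typical 21" computer monitor):
print(max_horizontal_triads(20, 0.25))   # ~1625 triads -- roughly 1600-class
[/code]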
|I keep hearing that the PG monitors are only good for roughly 800x600, which kind of makes me wonder whether the Sony and Toshiba models even live up to THAT...
It'd be a stretch (because the display isn't quite linear to that point), but you could probably measure 960 lines of horizontal resolution (per full picture width) on my AR2.7FTX. However, when you increase the number of scanlines past the point where each covers at least two lines of the mask, your maximum horizontal resolution drops by half (since dots in contiguous lines of the mask are staggered). It is for this reason that I think either something like 1024x480 or 800x600 would be better than 1024x768, on the AR2.7FTX.
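Here's a toy model of that staggering effect, to make the threshold explicit. All the numbers are hypothetical placeholders of mine (I don't know any tube's actual triad or mask row counts), and the full-or-half cutoff is a simplification of the behavior described above:

[code]
# Toy model (mine) of the staggered-mask effect described above. The triad
# and row counts are hypothetical placeholders, not measurements of any tube.

def effective_h_resolution(triads_across: int, mask_rows: int, scanlines: int) -> int:
    """If each scanline spans at least two mask rows, the staggered dots in
    adjacent rows interleave and the full triad count is resolvable; once
    scanlines outnumber half the mask rows, each line only sees every other
    dot and horizontal resolution roughly halves."""
    return triads_across if mask_rows / scanlines >= 2 else triads_across // 2

# Hypothetical tube: 960 triads across, 1200 mask rows top to bottom.
for scanlines in (480, 600, 768):
    print(scanlines, effective_h_resolution(960, 1200, scanlines))
# 480 -> 960  (each scanline covers 2.5 mask rows)
# 600 -> 960  (exactly 2.0 -- right at the threshold)
# 768 -> 480  (under the threshold; horizontal resolution halves)
[/code]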
By the same token, you could increase the number of scanlines to the point that the Aconda's horizontal resolution is halved in the same way.
As for technologies such as LCD and plasma, I don't think they're worth considering until they have at least twice as many pixels as the peak resolution you want to watch. This is because TV formats are rated in samples, while LCDs (and, I think, plasmas) natively produce pixels.
If video formats used pixels instead of samples, you'd have to throw away bandwidth in order not to get aliasing. Samples make the best use of available bandwidth, but must be properly reconstructed, which CRTs do pretty naturally. Doing a dumb 1:1 sample -> pixel transform is a usable approximation, but results in pretty bad aliasing. Therefore, having something like 2 pixels per sample would allow you to reproduce the highest frequencies and keep those aliases further away from the signal band (you'd still get some aliasing, though).
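To show what "properly reconstructed" buys you, here's a small numpy sketch of mine (nothing vendor-specific): it compares sinc reconstruction of a sampled sine wave, a dumb 1 pixel-per-sample hold, and a 2 pixels-per-sample display fed by interpolation, all against the true band-limited signal:

[code]
# Sketch (mine, numpy): video samples need reconstruction; copying them 1:1
# onto pixels is only an approximation, and extra pixels per sample let a
# fixed-pixel display get closer to the ideal band-limited signal.
import numpy as np

fs = 48                     # samples across one picture width, say
f = 10                      # a mid-band signal frequency (Nyquist is fs/2 = 24)
t = np.linspace(0, 1, 4800, endpoint=False)     # "continuous" reference axis
truth = np.sin(2 * np.pi * f * t)

n = np.arange(fs)
samples = np.sin(2 * np.pi * f * n / fs)        # what the format stores

# Ideal sinc reconstruction -- roughly what a CRT spot approximates:
sinc_rec = samples @ np.sinc(fs * t[None, :] - n[:, None])

# Dumb 1:1 sample -> pixel (each sample shown as one flat pixel):
pixels_1x = samples[(t * fs).astype(int)]

# 2 pixels per sample: interpolate the samples onto a 2x-dense pixel grid:
grid_2x = np.arange(2 * fs) / (2 * fs)
pixels_2x = np.interp(grid_2x, n / fs, samples, period=1.0)[(t * 2 * fs).astype(int)]

for name, sig in [("sinc", sinc_rec), ("1 px/sample", pixels_1x), ("2 px/sample", pixels_2x)]:
    print(f"{name}: RMS error vs truth = {np.sqrt(np.mean((sig - truth) ** 2)):.3f}")
# The 2 px/sample version lands closer to the truth than the 1:1 staircase,
# and the sinc reconstruction is closer still.
[/code]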
It should be noted that the same issue & considerations apply to CRT shadow-mask dot resolution (though to a slightly lesser degree, since there's space between the dots).
Most people - even some Home Theater magazine reviewers - seem to miss the distinction. I cringe to read posts from people trying to optimize their system to see each "pixel" of an HDTV image clearly delineated (it's okay for the bandwidth of the system to be optimized to allow the harmonics through, but they just shouldn't be generated in the first place). A quick read through the first couple chapters of a decent signal processing book would probably clear up the issue for many.
It's actually the same misunderstanding that caused millions of hi-fi enthusiasts to believe that even a properly designed CD player would naturally sound harsh. The truth is that a properly designed CD player is transparent up to near 22 kHz. On such a system, if the CD sounds harsh, it's the fault of the recording/mastering - not the format.