While I didn't vote in the shootout, I certainly formed some opinions about the contenders. I'd also like to address some of the comments I've read and heard about the event.
After 10 years of being held at the Value Electronics store in Scarsdale, NY, the shootout took place at a neutral location for the first time, which helps address the concern I've heard that the event is tainted by the sales angle. I asked Robert Zohn about the impact of the shootout on his dealer relationships with the manufacturers, and he said it was difficult—after all, there's one winner, and the other companies aren't happy they didn't win. But he is committed to allowing consumers and industry professionals to see as many production samples of flagship TVs as possible—including those he doesn't sell—side by side and calibrated to the best of the calibrators' ability.
Another allegation has been that the TVs were supplied by the manufacturers, which is patently false—with one exception this year. The LG, Samsung, and Sony TVs were randomly selected from Robert's store inventory, while the Panasonic was supplied by the manufacturer because the very first production samples arrived in the US only days before the event. If he could have, Robert would have purchased a 65CX850 at another store, but they were not available at retail yet. (Our deepest thanks to Panasonic for moving heaven and earth—almost literally—to get that TV to us in time for the shootout.)
What about having screens of different sizes? It was impossible to have all screens the same size—the Sony X940C is available only at 75 inches, and the LG EG9600 and Panasonic CX850 max out at 65 inches. We could have chosen the Samsung UN65JS9500 or UN78JS9500, which would have meant having either three 65-inchers and one 75-incher or two 65-inchers and two larger screens (75 and 78 inches). We decided to go with the latter, but that was a judgment call.
Many AVS members have commented that price and value should be taken into account in the scoring. However, this shootout has always been exclusively about picture quality; it does not take price, features, user interface, or any other criteria into consideration. This is how Robert originally designed the event, and that orientation has not changed in its 11-year history.
Another related comment is that the shootout should include step-down or mid-level models that more people can actually afford. But Robert's explicit goal is to present only flagship TVs, under the assumption that flagships represent the best that a manufacturer can do in terms of picture quality. I know that isn't always true, but it is mostly true. A shootout of mid-level TVs is a great idea, and one I hope to pursue at some point in the future, but by definition, that's not what this event is about.
I've heard and read many comments that the ballot was designed to give OLED an unfair advantage. I don't see how—the ballot includes various picture-quality attributes that should be examined with any TV, regardless of its underlying technology. Of course, OLED will be superior in the depth of its blacks and its off-axis performance compared with LCD. And perceived contrast is likely to be better as well, though with full-array local dimming and greater light output, high-end LCDs can exhibit superb—perhaps even better—contrast. But those are attributes that any reviewer would look at with any TV, and OLED naturally excels in these areas. Does that mean we shouldn't include them in the ballot? Not in my book.
This year, we added screen uniformity as a separate category, and it happened to put the LG OLED at a distinct disadvantage because of the weird darkening along the side edges. This problem manifested itself at low APLs (average picture levels), though it was also clearly visible in some cases of not-so-dark images—for example, in the orange menu of the AVS HD 709 program, the sides were darker orange than the center. Should we not have included screen uniformity as a ballot category because it gave the LCDs an advantage in that respect? Not as far as I'm concerned.
LG sent an engineer from Korea to the event, who said the side-edge darkening has something to do with how the electronics address the pixels, but we did not get a definitive explanation. Calibrator David Mackenzie confirmed that the issue could not be mitigated by adjusting the picture controls. Tim Alessi, LG's product-development manager, acknowledged that the problem is endemic—which was demonstrated by trying two samples from Robert's inventory—and that LG is working diligently to correct it.
Some have noted that HDR content was not used in the shootout, but that is because only one of the TVs—the Samsung JS9500—currently has HDR capabilities; the others will get these capabilities in a future firmware update. A brilliant demonstration of HDR and a comparison with SDR was presented by Joe Kane on two Samsung TVs, a UN65JS9500 and UN65JU7100, at CE Week—in the same room—but not during the shootout. Shootout participants had an opportunity to learn about HDR from a true master of the art and see it under excellent conditions, and I hope they took advantage of that.
Another common complaint is that we didn't evaluate UHD content on these UHDTVs. (We did play some UHD content from a Sony FMP-X10 server before and after the evaluation process.) Certainly, there is a growing amount of UHD content available via streaming and downloading, but as of today, very little of it includes high dynamic range (HDR) or wide color gamut (WCG), so we decided to calibrate the TVs to BT.709 color and BT.1886 gamma and present mostly Blu-ray content. Blu-ray has the same dynamic range and color gamut as most currently available UHD content, so things like color accuracy and contrast would be the same whether we played HD or UHD content. Next year, I expect there will be sufficient HDR/WCG content—and TVs that can reproduce it—to warrant including it in the evaluation content.
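For readers curious what the BT.1886 gamma target actually specifies, here is a minimal sketch of its electro-optical transfer function (EOTF)—the curve the calibrators were matching. The white and black luminance values below are hypothetical examples, not measurements taken at the shootout:

```python
# Sketch of the ITU-R BT.1886 reference EOTF. The Lw/Lb defaults are
# hypothetical illustration values, not shootout measurements.

GAMMA = 2.4

def bt1886_eotf(v, lw=120.0, lb=0.05):
    """Map a normalized video signal v (0..1) to screen luminance in cd/m^2.

    lw = luminance of reference white, lb = luminance of black.
    """
    lw_root = lw ** (1 / GAMMA)
    lb_root = lb ** (1 / GAMMA)
    a = (lw_root - lb_root) ** GAMMA      # gain term
    b = lb_root / (lw_root - lb_root)     # black-level lift
    return a * max(v + b, 0.0) ** GAMMA

# Full white lands on lw, video black lands on lb:
print(bt1886_eotf(1.0))  # ~120.0
print(bt1886_eotf(0.0))  # ~0.05
```

Note how the black luminance `lb` shapes the whole lower end of the curve—one reason why, as described above for the Panasonic, aggressively deepening blacks with local dimming can pull a display's measured gamma away from this target.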
Many have asked why we didn't include upscaling from HD to UHD in the evaluation. We discussed adding this category to the ballot, but we decided not to because the upscaling in virtually all high-end sets is quite good these days. Also, each session was unavoidably time-limited, because the shootout had to share the room with two other demos in an alternating fashion, so adding a category that we believed all the contenders would do well at seemed impractical. However, I can see a valid argument for including it anyway, which we may well do next year.
This year, the ballot was changed to more accurately reflect subjective impressions; for example, unlike past years, this year's ballot replaced "black level" with "black quality" (which includes depth of black and shadow detail), "contrast ratio" with "perceived contrast," and "motion resolution" with "motion clarity." The older categories are actually objective measurements, which are included in the calibration results.
We also added off-axis performance and screen uniformity, which used to be part of "overall picture quality," a voting category that was dropped from this year's ballot because it is too vague. As I mentioned earlier, we felt that these attributes are not unreasonable to examine with any TV; in particular, off-axis performance is important for many buyers, even though it might not be important to true videophiles, who sit as close as possible to the on-axis position.
Even though we omitted overall picture quality from the scoring, we wanted to get a sense of which TV each participant thought was best overall. So we added this question to the ballot: "Which TV do you think is the best, and why?" This allowed each participant to apply their own weighting—some might put blacks at the top of their priorities, while others (say, sports fans) might think motion clarity is most important, which would influence which TV they thought was the best. (Some voters took things like flat versus curved screens into account as well.) We did not include this info in the tabulated results, but we did count how many chose each TV, and you can see a summary of those results in the comment above.
I was actually somewhat surprised that the LG OLED won the ballot tabulation and the "which is best" question among the non-experts despite its screen-uniformity issue. (It also won the ballot tabulation among the professionals, but not the "which is best" question.) Of course, the OLED's blacks and contrast were spectacular—though the black level seemed to float a bit in some dark scenes—and its off-axis performance killed the LCDs, as you would expect.
For me, the LG's uniformity problem is a deal-breaker, and its color was often clearly different from that of all the LCDs, with too much cyan in the blues. Of course, OLED technology is extremely promising; without those two flaws, it would have beaten the LCDs by a mile instead of the whisker it managed this year.
The Panasonic had the best color accuracy and widest maximum color gamut, but its blacks were quite a bit less deep than the others, taking it out of contention for me. According to David Mackenzie, he could have set the local-dimming control to its maximum strength, which would have deepened the blacks considerably, but that also took the gamma far from the target of BT.1886, thus compromising shadow detail. The mandate of the shootout was to get as close to that gamma as possible, so he set the local-dimming control one step down from maximum, which allowed him to nail the BT.1886 gamma at the expense of deeper blacks.
The Sony and Samsung were extremely close in my view; I found it difficult to select one over the other. The Sony had better blacks and motion clarity, while the Samsung had better color accuracy and screen uniformity. The Sony does not have a CMS (color-management system) to adjust the colors—which were slightly oversaturated—and maximizing its backlight scanning to improve motion clarity reduced the brightness dramatically.
Speaking of brightness, the Sony's peak brightness could not be reduced to 35 fL with the backlight-level control alone, and David did not want to do it by lowering the contrast control, which would have reduced the overall contrast of the picture without lowering the black level. So he decided to set the backlight-flashing control, called Clearness, to "1," which reduced the peak brightness to around 35 fL and improved motion clarity as a bonus.
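Since the shootout's luminance targets are quoted in foot-lamberts while most modern display specs use nits (cd/m²), here is the standard conversion for reference. The 35 fL figure comes from the text above; everything else is just the unit definition:

```python
import math

# 1 foot-lambert = (1/pi) candela per square foot
#                = 10.7639104 / pi cd/m^2 (about 3.4263 nits)
NITS_PER_FL = 10.7639104 / math.pi

def fl_to_nits(fl):
    """Convert foot-lamberts to cd/m^2 (nits)."""
    return fl * NITS_PER_FL

# The 35 fL SDR calibration target works out to roughly 120 nits:
print(round(fl_to_nits(35), 1))  # 119.9
```

That ~120-nit figure puts the SDR targets used here in context against the much higher peak-luminance numbers quoted for HDR displays.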
I don't typically watch with the lights on in my black-hole theater, so I don't need high peak luminance—in fact, I prefer 30 fL to 35. Plus, I like the fact that the Samsung has HDR capabilities now rather than having to wait for a firmware upgrade, and the TV's hardware can be updated with a new One Connect Box. Given all that, I decided I liked the Samsung just a smidge better than the Sony. But the Sony also paints a beautiful picture, so I can easily see someone else making the other choice based on their preferences.
As with any event that undergoes major changes, I recognize that improvements can be made, and if I'm involved next year, I intend to make as many as I can. For example, I want to find a way to increase the amount of time for each session to allow for longer consideration of each category and more Q&A. Other likely changes include adding an HD-to-UHD upscaling test and native UHD content with HDR/WCG.
Above all, I want this event to be as fair and transparent as possible, and if we fell short of that ideal, I can assure you that I will work to raise that bar as much as I can. My only interest is the truth about how well each TV performs to bring consumers the best possible visual experience. The Value Electronics Flat-Panel Shootout provides a rare opportunity for consumers to examine the best TV from each manufacturer, side by side and fully calibrated, showing the same content in a well-controlled viewing environment. To be involved in such an event is an honor, and I look forward to making it even better next year.
Last edited by Scott Wilkinson; 07-01-2015 at 11:03 AM.