|Originally posted by Jack Gilvey
Does owning the review system somehow "taint" the objective measurements? How might they be biased?
Most if not all reviewers worth their salt (Mike Fremer, Gary Reber, Fred Mantaghian, and almost all others) have a reference system against which they compare and contrast the equipment under review. Widescreen Review magazine faithfully details the reference systems against which all DVDs, D-Theater tapes, and equipment are reviewed. This is very helpful. Stereophile, Perfect Vision, and Ultimate A/V likewise list the reference equipment in every review. There is little sense in doing a review without a reference. Reviewers know their system(s) well, they know their source/test material well, and so forth. They know all parts of the chain, such that when a new (review) piece is added, differences can not only be measured but subjectively heard and sensed as well.
There must be a reference, and one that is as close to the ideal as practicality allows. This is the real world, and few reviewers have excellent systems...but the best reviewers do have incredible systems and treated rooms.
Think of a reference system as analogous to Joe Kane's Digital Video Essentials: a cornucopia of 'ideal' patterns, colors, gray scales, etc., that an expert uses to set up an uncalibrated monitor. The goal is to reproduce the reference patterns on the new monitor as closely and fully as possible using reference equipment. Without such a tool, calibration would have to be done by eye...no good, not reproducible, and probably only the eyeballer thinks it looks great! Yes, there will always be a certain amount of bias in any review, but the good reviewer works to limit this as much as humanly possible, knowing that obvious bias can taint one's own words. The less bias, the more valid the conclusions.
Let me give you a scientific analogy [bear with me, I am simplifying] for the importance of a reference system in science, particularly in drug efficacy testing (as in a drug being tested for human use).
The basic experiment for determining efficacy tests the drug against a negative control (a placebo, with absolutely zero known ability to produce the desired effect the experimental drug is designed to produce) AND a positive control. The positive control gives a 100% positive response and is considered the reference for obtaining the drug's desired effect. The positive control is like the theater reference system in that it is 100% effective in producing the desired response. It is the effect that all others are compared to because it does so faithfully and unfalteringly, time after time.
Keep that in mind.
Now, in testing the experimental drug, you have a scale from 0 (placebo effect) to 100 (positive-control effect). Typically the product will fall somewhere on that scale, and its efficacy can be determined and reproduced. Now you really have some info to chew on. Not only can you tell if it works, but how well it works and how close to the ideal (positive control) it comes. You can now confidently say whether it is near the best of the best in performance, whether it is middle of the road...or whether it just sucks! Since you have a reference, comparisons are better made. Comparisons of this kind, coupled into the conclusions, are critical for validity.
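The 0-to-100 scaling described above is just a normalization against the two controls. Here is a minimal sketch of that arithmetic (a hypothetical illustration with made-up numbers, not an actual clinical formula):

```python
# Hypothetical illustration of the 0-100 efficacy scale described above.
def efficacy_score(drug_response, placebo_response, positive_response):
    """Map a measured drug response onto a scale where the placebo
    (negative control) is 0 and the positive control is 100."""
    span = positive_response - placebo_response
    if span == 0:
        raise ValueError("positive control must differ from placebo")
    return 100.0 * (drug_response - placebo_response) / span

# Made-up numbers: placebo produces 5 units of effect, the positive
# control produces 55, and the test drug produces 30.
print(efficacy_score(30, 5, 55))  # 50.0: squarely middle of the road
```

With both controls in hand, any measured response lands at a meaningful point on the scale, which is exactly why the reference (positive control) makes the comparison reproducible.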
Hence, a reference system that the reviewer knows well can aid not only the reviewer but also the astute reader in that he can properly weight the reviewer's conclusions and recommendations.
Good reviews can be entertaining as well; read Mikey Fremer or Keith Yates. I am not trying to beat up Ed; he seems to have a loyal following and is educated in what he is doing. I am only making observations as to what I look for in reviews.