We went over this many years ago in thousands of test photos. Some members here, myself included, set up test standards and even made still images with areas cut out to project pure white and the best black into the image. We could then measure an ANSI contrast of sorts within the photo by doing an A vs. B comparison. The white areas were also used to illuminate a known, calibrated color and grayscale test card, so the colors and intensities could be measured in the photo with a color-sampling software tool; that took the monitor and our eyes out of comparing screen to screen. We also went through careful steps to characterize the projector's calibration, since you can't calibrate a projector to two samples at once and that difference had to be taken into account. We then took the photos in a way that kept the digital camera from correcting the image on its own, since newer cameras have different ways of center-weighting an image, for example. We also did A, B, C tests where we rotated the samples from side to side and center to remove variability. Because a big factor in what we were after was ambient light's influence on the image, and some materials that help with that also use angular gain, we ran the tests at many angles from left to right across the screen. We also set up known sources of ambient light so we could repeat a known problem light and see how different paints and screen surfaces handled it under controlled conditions.
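For anyone who wants to try the in-photo measurement themselves, here's a minimal sketch of the idea in Python with Pillow. The file name and patch coordinates are made up; you'd point them at wherever the pure-white and pure-black patches land in your own test photo. And since the camera applies its own gamma, this gives a relative comparison between the two sides, not a true ANSI number.

```python
from PIL import Image, ImageStat

img = Image.open("split_screen_test.jpg").convert("RGB")

def mean_luma(box):
    # Average the RGB over a patch and fold it into a rough
    # Rec. 709-weighted luma value.
    r, g, b = ImageStat.Stat(img.crop(box)).mean
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

# Hypothetical patch locations (left, upper, right, lower) in pixels:
white_a = mean_luma((100, 100, 200, 200))   # white patch, screen A
black_a = mean_luma((100, 400, 200, 500))   # black patch, screen A
white_b = mean_luma((900, 100, 1000, 200))  # white patch, screen B
black_b = mean_luma((900, 400, 1000, 500))  # black patch, screen B

print(f"Screen A contrast (of sorts): {white_a / max(black_a, 1):.0f}:1")
print(f"Screen B contrast (of sorts): {white_b / max(black_b, 1):.0f}:1")
```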
The forum broke into two schools: those taking a scientific approach to what was going on, and those who went by what the eye saw. To complicate matters further, everyone has a different room, a different projector, and a different screen size. People were also using different throw distances, at different points in bulb life, with different projector heights and keystone adjustments set differently. We did a lot of measurements looking at central warm spots in the image and at what level the eye could no longer pick up on them.
Keep in mind that when you watch a split image, just as the projector can't be calibrated to both halves at once, your eyes can't adjust to both either. The pupil adjusts to control the overall light entering the eye, letting the brain see the most contrast possible. I forget exactly how many f-stops the eye covers; I think it's around 22. Each f-stop doubles the one before it: 2, 4, 8, 16, 32, and so on, 22 times over. That's how we can walk around in almost total darkness or in bright noon sun.
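To put a number on that doubling (taking the hedged 22-stop figure at face value), the ratio between the dimmest and brightest conditions the eye can adapt to works out like this:

```python
# If the eye really spans about 22 f-stops, each stop doubling the last,
# the adaptation range is 2 raised to the number of stops.
stops = 22
ratio = 2 ** stops
print(f"{stops} stops = {ratio:,}:1")  # 22 stops = 4,194,304:1
```

That's roughly four million to one, which is why the same eye can handle both near-darkness and noon sun.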
Google something like "color picker tool"; some free ones are online. You can then take one of your test photos and use the picker to see the RGB value of any pixel. 0, 0, 0 is pure black and 255, 255, 255 is the full white of the monitor turned on. Somewhere in your image, go to each side of the split line and take a sample: find a bright spot and a dark spot. You will most likely find that the image that looks better is cranked up on both ends of the spectrum: a brighter white, but also a brighter black in the same sample. If that's the case, the two screens are actually equal and one is just over-calibrated. (There's a quick sketch of doing this in code at the end of the post.) This is a little oversimplified, but it's basically what you need to understand.

The next step is the eye and the perception of contrast. When there is some brightness in the image, your eye's f-stop closes and sets its reference to that white, and the grays on the screen then read as inky black. The image we view is part real and part processed, and improved, by our brain. The projector companies are building better and better machines with better native contrast, but you have to understand what those numbers mean in terms of lumens too, and then factor in the room and ambient light to get the full picture.
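Circling back to the color-picker step: here's a minimal sketch of sampling single pixels on each side of the split with Python and Pillow, in place of an online picker tool. The file name and coordinates are placeholders for wherever your own bright and dark spots sit.

```python
from PIL import Image

img = Image.open("your_test_photo.jpg").convert("RGB")

# Placeholder (x, y) coordinates: pick a bright and a dark spot
# on each side of the split line in your own photo.
samples = {
    "left bright":  (300, 250),
    "left dark":    (300, 700),
    "right bright": (1300, 250),
    "right dark":   (1300, 700),
}

for label, (x, y) in samples.items():
    r, g, b = img.getpixel((x, y))
    print(f"{label:>12}: RGB = ({r}, {g}, {b})")
# (0, 0, 0) is pure black; (255, 255, 255) is the full white of the monitor.
```

If the "brighter" side shows higher numbers at both its bright and dark spots, that's the over-calibration described above, not a real contrast advantage.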