These are really good questions, and I don't have a solid grasp on them, but I'll attempt to shed some light, and hopefully others who have sharper knowledge can improve upon or correct my reply.
Originally Posted by michael10003
So, what's the point in using the AdobeRGB color space - for image capture, editing, or output - if my monitor can't resolve those colors? Would I be working blind? And would that be better than working more or less 1:1 with an sRGB color space?
When it comes to image capture, I think this is how it works. The camera captures the exact same raw information regardless of what color space you set the camera to. The value of each pixel in the raw image is determined by the spectral transmission of that pixel's particular filter combined with the spectral power distribution (SPD) of the light falling on that pixel. Using various algorithms, groups of these raw pixel values (each group contains red-, green-, and blue-filtered values) are combined with a weighted sum* to produce a single triplet of numbers, such as [50 70 200] (a small code sketch of this step follows the two points below). If the camera is set to sRGB, the idea is that when an sRGB display device renders the red channel at a value of 50, the green channel at 70, and the blue channel at 200, the color the display produces will match the color of the portion of the scene represented by that pixel. Of course, this is an imperfect process for at least two reasons:
1) The camera has no way of knowing what the spectral power distribution of the incoming light from the scene is, and as described earlier, the values of the raw pixels depend on the SPD of this incoming light. So assumptions have to be made, and the further those assumptions are from reality, the less accurate the color encoding will be.
2) Even if the camera knew what the spectral power distribution of the incoming light was, and adjusted its weighted averaging accordingly, parts of the scene may contain colors that are more saturated than a particular color space, such as sRGB, can represent. In that case, the color information has to be compressed (gamut-mapped) to fit inside that color space.
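Here is a minimal sketch of the encoding step mentioned above: a demosaiced, linear camera-RGB value is rotated into sRGB primaries with a 3x3 matrix and then gamma-encoded into an 8-bit triplet. Both the matrix and the pixel value below are made up purely for illustration; a real camera derives its matrix from the sensor's measured spectral sensitivities and its white-balance assumptions.

```python
import numpy as np

def srgb_gamma(x):
    """Apply the standard sRGB transfer curve to linear values in [0, 1]."""
    x = np.clip(x, 0.0, 1.0)
    return np.where(x <= 0.0031308, 12.92 * x, 1.055 * x ** (1 / 2.4) - 0.055)

# A demosaiced, white-balanced pixel in the camera's own linear RGB space
# (made-up value, chosen only so the output matches the example triplet).
cam_rgb = np.array([0.108, 0.122, 0.447])

# Hypothetical 3x3 camera-to-sRGB matrix (each row sums to 1 so white stays white).
cam_to_srgb = np.array([
    [ 1.8, -0.6, -0.2],
    [-0.3,  1.5, -0.2],
    [ 0.0, -0.4,  1.4],
])

linear_srgb = cam_to_srgb @ cam_rgb                        # rotate into sRGB primaries
triplet = np.round(srgb_gamma(linear_srgb) * 255).astype(int)
print(triplet)                                             # [ 50  70 200]
```

Setting the camera to Adobe RGB just swaps in a different matrix and transfer curve, so the same raw data gets stored as a different triplet.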
With respect to the second point, suppose you are imaging a very saturated green surface, one whose color exactly matches the top corner of the triangle defined by the Adobe RGB color space, and suppose that the camera's assumptions about the SPD are spot on. In such a situation, if you set the camera to Adobe RGB, then the resulting value for all pixels capturing that green surface would be [0 255 0] (in an 8-bit context), and, on a display that is capable of rendering the full Adobe RGB color space, the displayed surface would match the color of the surface in the scene. However, if the display rendering this image is an sRGB display, the resulting color will now correspond to the top corner of the sRGB color space. If you look at the image from the previous link, you will note that the hues (in addition to the saturation) of the top corners of Adobe RGB and sRGB differ. So rendering this image on an sRGB display will not only show a less saturated green, but will also get the hue wrong. However, if you had selected sRGB on the camera, the weighted averaging could instead encode an sRGB coordinate that matches the hue of the surface. Of course, by representing the hue accurately, the saturation of the green is now even lower than the most saturated green the sRGB display is capable of, but this tradeoff may be acceptable. Different schemes may prioritize different aspects of color, so other weights may favor saturation over getting the hue just right. And don't forget point 1 above: the assumptions about the SPD of the incoming light are just that, assumptions.
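To make that green example concrete, here is a rough sketch of what a naive conversion (simple clipping, rather than a proper gamut-mapping rendering intent) does to that most-saturated Adobe RGB green. The two matrices are the commonly published D65 ones; color-managed software would use ICC profiles and a chosen rendering intent instead of just clipping.

```python
import numpy as np

# Standard matrices (D65 white point): Adobe RGB (1998) -> XYZ, and XYZ -> sRGB.
ADOBE_TO_XYZ = np.array([
    [0.5767309, 0.1855540, 0.1881852],
    [0.2973769, 0.6273491, 0.0752741],
    [0.0270343, 0.0706872, 0.9911085],
])
XYZ_TO_SRGB = np.array([
    [ 3.2404542, -1.5371385, -0.4985314],
    [-0.9692660,  1.8760108,  0.0415560],
    [ 0.0556434, -0.2040259,  1.0572252],
])

def srgb_encode(x):
    """Clip to the sRGB gamut and apply the sRGB transfer curve."""
    x = np.clip(x, 0.0, 1.0)
    return np.where(x <= 0.0031308, 12.92 * x, 1.055 * x ** (1 / 2.4) - 0.055)

# The Adobe RGB green primary, [0 255 0] in 8-bit; decode with its ~2.2 gamma.
adobe_8bit = np.array([0, 255, 0])
adobe_linear = (adobe_8bit / 255.0) ** 2.2

linear_srgb = XYZ_TO_SRGB @ (ADOBE_TO_XYZ @ adobe_linear)
print(linear_srgb)   # roughly [-0.40  1.00 -0.04]: outside the sRGB gamut

srgb_8bit = np.round(srgb_encode(linear_srgb) * 255).astype(int)
print(srgb_8bit)     # [  0 255   0] again, but now it means the *sRGB* green primary
```

The negative red and blue components show that the Adobe RGB green lies outside the sRGB gamut; after clipping, the stored triplet is again [0 255 0], but on an sRGB display that triplet now means the sRGB green primary, which differs from the original surface in both hue and saturation.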
Hopefully this discussion will give you the tools to think about what happens with other conditions, such as when the camera is set to sRGB and the image is rendered on an Adobe RGB display.
As for image editing and output, I'm delving more into speculation here, but I think the general idea is the same. Say you load up that image that was captured using the Adobe RGB camera setting and render it on a display capable of showing the full Adobe RGB space. You now want to save it encoded in sRGB (on the assumption that most people use sRGB displays). If every pixel in the image is the same [0 255 0], then after saving in sRGB the pixels will have a different triplet: one that, when rendered on an sRGB display, will match the hue of the original image (and again, some software will let you make different tradeoffs when re-encoding the image).
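In practice this re-encoding is usually handled by a color-management library rather than by hand. Here is a hedged sketch using Pillow's ImageCms module; the file names and the path to the Adobe RGB ICC profile are assumptions, and you would point it at whatever profile is installed on your system.

```python
from PIL import Image, ImageCms

# Hypothetical image that was saved with an Adobe RGB profile (assumption).
im = Image.open("photo_adobergb.tif").convert("RGB")

adobe_profile = "AdobeRGB1998.icc"              # path to an Adobe RGB ICC profile (assumption)
srgb_profile = ImageCms.createProfile("sRGB")   # Pillow's built-in sRGB profile

# Convert the pixel values so they render correctly on an sRGB display.
# The rendering intent is where the "different tradeoffs" mentioned above live:
# perceptual compresses the whole gamut, relative colorimetric mostly clips.
converted = ImageCms.profileToProfile(
    im,
    adobe_profile,
    srgb_profile,
    renderingIntent=ImageCms.INTENT_PERCEPTUAL,
)
converted.save("photo_srgb.jpg")
```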
As for ppi, I don't have much experience with raster programs, but in a vector-based program it's easier to think about. The ppi you choose when exporting (rasterizing) determines the true image resolution: the pixel dimensions are just the physical dimensions times the ppi. Of course, if your display can't render that pixel density, it has to magnify the image to compensate, so the image appears physically larger on screen than it would in print.
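A quick arithmetic check of that relationship, with made-up numbers:

```python
width_px, height_px = 3000, 2000   # hypothetical export size
export_ppi = 300                   # resolution chosen when rasterizing
display_ppi = 96                   # typical desktop monitor density

# Physical size at the export resolution (i.e., in print):
print(width_px / export_ppi, "x", height_px / export_ppi, "inches")    # 10.0 x ~6.7 inches

# The same pixels shown 1:1 on a lower-density display cover a larger area:
print(width_px / display_ppi, "x", height_px / display_ppi, "inches")  # ~31.3 x ~20.8 inches
```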
*I've skipped a step for brevity. After this averaging, a demosaicing process uses a sliding window so that the resulting image does not suffer a loss of resolution. See http://www.cambridgeincolour.com/tut...ra-sensors.htm for more information.
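For anyone curious what that sliding-window step looks like, here is a toy bilinear demosaic, assuming an RGGB Bayer layout. Real cameras and raw converters use much more sophisticated, edge-aware algorithms, but the idea of producing a full RGB triplet at every photosite (so no resolution is lost) is the same.

```python
import numpy as np
from scipy.ndimage import convolve

def demosaic_bilinear(raw):
    """Toy bilinear demosaic of a Bayer mosaic, assuming an RGGB layout.

    Each output pixel gets a full RGB triplet by averaging the nearest
    same-colored photosites in a small sliding window, so the output keeps
    the mosaic's full pixel resolution."""
    h, w = raw.shape
    masks = {c: np.zeros((h, w), bool) for c in "RGB"}
    masks["R"][0::2, 0::2] = True
    masks["G"][0::2, 1::2] = True
    masks["G"][1::2, 0::2] = True
    masks["B"][1::2, 1::2] = True

    # 3x3 weights for the sliding-window average.
    kernel = np.array([[1, 2, 1],
                       [2, 4, 2],
                       [1, 2, 1]], float)

    out = np.zeros((h, w, 3))
    for i, c in enumerate("RGB"):
        plane = np.where(masks[c], raw, 0.0)
        weight = masks[c].astype(float)
        # Weighted average of the known samples of this color around each pixel.
        out[..., i] = convolve(plane, kernel, mode="mirror") / \
                      np.maximum(convolve(weight, kernel, mode="mirror"), 1e-12)
    return out

# Example: demosaic a small random "raw" frame (values in [0, 1]).
raw = np.random.rand(6, 8)
rgb = demosaic_bilinear(raw)
print(rgb.shape)   # (6, 8, 3): full resolution, one RGB triplet per photosite
```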
edit: fhoech beat me to it, while I was writing this post