Substantive debate over film resolution versus digital image resolution is clouded by the fact that it is difficult to measure either one meaningfully and objectively.
Unlike a digital sensor, a film frame does not have a regular grid of discrete pixels. Rather, it has an irregular pattern of differently sized grains. As a film frame is scanned at higher and higher resolutions, image detail is increasingly masked by grain, but it is difficult to determine at what point there is no more useful detail to extract. Moreover, different film stocks have widely varying ability to resolve detail.
Determining resolution in digital acquisition seems straightforward, but is significantly complicated by the way digital camera sensors work in the real world. This is particularly true of high-end digital cinematography cameras that use a single large Bayer-pattern CMOS sensor. A Bayer-pattern sensor does not sample full RGB data at every point; each photosite is biased toward red, green, or blue, and a full-color image is assembled from this checkerboard by processing it through a demosaicing algorithm. Consequently, the actual resolution of a Bayer-pattern sensor falls somewhere between its "native" pixel count and half that figure, with different demosaicing algorithms producing different results. Additionally, most digital cameras (both Bayer and three-chip designs) employ optical low-pass filters to avoid aliasing, and such filters further reduce resolution.
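To make the demosaicing step concrete, the sketch below reconstructs full RGB from an RGGB Bayer mosaic by simple bilinear interpolation (a minimal illustration assuming NumPy and SciPy; production cameras use far more sophisticated, often proprietary, algorithms). Because two of each pixel's three color channels are interpolated from neighbors, effective resolution ends up below the sensor's native photosite count.

    import numpy as np
    from scipy.signal import convolve2d

    def bilinear_demosaic(raw):
        """Bilinear demosaic of an RGGB Bayer mosaic (H x W float array)."""
        h, w = raw.shape
        # Masks marking where each color was actually sampled (RGGB layout).
        r_mask = np.zeros((h, w), bool); r_mask[0::2, 0::2] = True
        b_mask = np.zeros((h, w), bool); b_mask[1::2, 1::2] = True
        g_mask = ~(r_mask | b_mask)
        rgb = np.empty((h, w, 3))
        kernel = np.ones((3, 3))
        for c, mask in enumerate((r_mask, g_mask, b_mask)):
            # Normalized convolution: average the sampled neighbors of each
            # pixel to fill in the color channels it never measured directly.
            total = convolve2d(np.where(mask, raw, 0.0), kernel, mode="same")
            count = convolve2d(mask.astype(float), kernel, mode="same")
            rgb[..., c] = total / np.maximum(count, 1e-9)
        return rgb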
In general, it is widely accepted that film exceeds the resolution of HDTV formats and the 2K digital cinema format, but significant debate remains over whether 4K digital acquisition can match the results of scanning 35mm film at 4K, and over whether a 4K scan actually extracts all the useful detail from 35mm film in the first place. However, as of 2007 the majority of films that use a digital intermediate are finished at 2K because of the costs of working at higher resolutions. Additionally, 2K projection is chosen for almost all permanent digital cinema installations, often even when 4K projection is available.
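For context, the pixel counts behind these format labels (using the standard HDTV raster and the full DCI containers) can be tallied directly:

    # Pixel counts of common formats (full-container dimensions).
    formats = {
        "HDTV 1080": (1920, 1080),
        "DCI 2K":    (2048, 1080),
        "DCI 4K":    (4096, 2160),
    }
    for name, (w, h) in formats.items():
        print(f"{name}: {w} x {h} = {w * h / 1e6:.1f} megapixels")
    # DCI 4K holds four times the pixels of DCI 2K: double in each dimension.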
Notably, the optical duplication process used to produce theatrical release prints, for movies that originate both on film and digitally, causes a significant loss of resolution. If a 35mm negative does capture more detail than 4K digital acquisition, that extra detail may, ironically, only be visible when the movie is scanned and projected on a 4K digital projector.
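This generational loss is commonly described with the modulation transfer function (MTF): the response of a chain of optical stages is the product of the stages' individual responses, so each duplication generation compounds the loss. The figures below are purely hypothetical, chosen only to show the multiplicative effect:

    # Hypothetical MTF values at one spatial frequency; illustrative only.
    stages = [
        ("camera negative", 0.80),
        ("interpositive",   0.90),
        ("internegative",   0.90),
        ("release print",   0.85),
    ]
    system_mtf = 1.0
    for name, mtf in stages:
        system_mtf *= mtf  # responses multiply through the duplication chain
        print(f"after {name}: {system_mtf:.2f}")
    # Roughly 0.55 of the original contrast survives at this frequency,
    # which is why a release print resolves noticeably less than the negative.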
Grain & Noise
Film has a characteristic grain structure, which many people view positively, either for aesthetic reasons or because it has become associated with the look of 'real' movies. Different film stocks have different grain, and cinematographers may use this for artistic effect.
Digitally acquired footage lacks this grain structure. Electronic noise is sometimes visible instead, particularly in dark areas of an image or in footage shot under low light with gain applied. Some people consider such noise a workable aesthetic substitute for film grain, while others find it harsher and feel it detracts from the image.
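Why this noise concentrates in the shadows follows from a simple sensor model: photon shot noise grows only as the square root of the signal, so the signal-to-noise ratio collapses as light drops, and applying gain afterwards amplifies the noise along with the signal. A rough sketch, assuming Poisson shot noise plus Gaussian read noise (the noise figure is an illustrative value, not any specific camera's specification):

    import numpy as np

    rng = np.random.default_rng(0)
    read_noise = 4.0  # electrons RMS; illustrative, not a real camera spec

    for electrons in (10_000, 1_000, 100, 10):  # bright area -> deep shadow
        # Shot noise is Poisson in the signal; read noise is additive Gaussian.
        samples = (rng.poisson(electrons, 100_000)
                   + rng.normal(0.0, read_noise, 100_000))
        print(f"{electrons:>6} e-: SNR ~ {samples.mean() / samples.std():.1f}")
    # SNR falls from ~100 to ~2 across this range; raising gain (ISO) scales
    # signal and noise together, so underlit footage stays visibly noisy.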
Well-shot, well-lit images from high-end digital cinematography cameras can look almost eerily clean. Some people believe this makes them look "plasticky" or computer-generated, while others find it an interesting new look and argue that film grain can be emulated in post-production if desired.
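One simple form such grain emulation can take is overlaying softened, midtone-weighted noise on the image. The sketch below (assuming NumPy and SciPy; the function and its parameters are illustrative, not drawn from any particular grading tool) shows the basic idea; commercial grain tools model grain size, per-channel response, and density dependence far more carefully:

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def add_synthetic_grain(img, strength=0.04, grain_size=1.2, seed=0):
        """Overlay film-grain-like noise on a float RGB image in [0, 1]."""
        rng = np.random.default_rng(seed)
        noise = gaussian_filter(rng.standard_normal(img.shape[:2]),
                                sigma=grain_size)
        noise /= noise.std()  # blurring gives the noise grain-like clumps
        # Weight toward midtones, where film grain reads most strongly.
        midtones = 1.0 - np.abs(2.0 * img.mean(axis=-1) - 1.0)
        return np.clip(img + (strength * noise * midtones)[..., None], 0.0, 1.0)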
Since most theatrical exhibition still occurs via film prints, the super-clean look of digital acquisition is often lost before moviegoers ever see it, obscured by the grain of the release print's film stock.