How does the dynamic range of the human eye compare to that of digital cameras?

  • According to DxO tests, cameras have 10 to 12 stops of dynamic range. Is that correct? Noise can completely screw up some lower values (easily resulting in the loss of some stops).

    Also Norman Koren says that a digital camera's original dynamic range can be 9 to 11 stops, but prints have "only" 6.5 stops.

    In its section on dynamic range, Wikipedia says the human eye covers a contrast range of around 6.5 stops. If that is the case, why is the human eye clearly much better than cameras at recording scenes with high dynamic range?

    The question of dynamic range is asked as part of How does the human eye compare to modern cameras and lenses?, but this specific part didn't really get answered. I think it's a reasonable stand-alone follow-up question since the broader question may be _too_ broad.

  • This is a very good question, and the answer could fill hundreds of pages - and, in fact, the answer already DOES fill hundreds of pages.

    The short answer is that the figures you are citing do not agree with apparent reality because the commonly quoted figures are wrong :-). Read on ...

    Much is available on the internet on this subject and the quality is, as ever, widely variable. There is also a lot of parroting of "facts" between sites, and figures like those in Wikipedia seem common enough. BUT there are some well-reasoned arguments suggesting that the Wikipedia figure is extremely wrong and underestimates the eye's range very substantially.

    It's important to note that the eye acts as a contrast detector rather than an absolute level detector (which is what a digital camera sensor is), so comparisons need care.

    With irising, chemical adaptation and every other trick it can pull, the absolute dynamic range of the whole eye system seems to be well over 20 stops. As each stop is a factor of 2, 20 stops is 2^20, or just over 1,000,000:1. At the top end, the sun is too bright! At the bottom end, the dark-adapted eye can detect a single photon. A D3S (better low-light performance than a D4) may have trouble with that. (Note that this is not EVERY photon - when you get down to a few photons per second, many will hit non-sensitive areas and go undetected. But when one DOES strike a sensitive retinal area, it will produce a signal that can be recorded.)
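    To make the stop arithmetic concrete, here is a small Python sketch (the function names are mine, purely illustrative):

```python
import math

# Each photographic stop is a factor of 2 in light, so a range of
# n stops corresponds to a contrast ratio of 2**n.

def stops_to_ratio(stops):
    """Contrast ratio spanned by a given number of stops."""
    return 2 ** stops

def ratio_to_stops(ratio):
    """Stops needed to span a given contrast ratio."""
    return math.log2(ratio)

print(stops_to_ratio(20))                    # 1048576 -> just over 1,000,000:1
print(round(ratio_to_stops(1_000_000), 2))   # 19.93 stops
```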

    But, I digress :-). An extremely good (it seems) page discusses eye dynamic range and more. Its paragraph headings are worth noting:

    Notes on the Resolution of the Human Eye
    Visual Acuity and Resolving Detail on Prints
    How many megapixels equivalent does the eye have?
    The Sensitivity of the Human Eye (ISO Equivalent)
    The Dynamic Range of the Eye
    The Focal Length of the Eye

    The writer argues that the dynamic range of the eye, without changing sensitivity by adaptation or irising, is about 1,000,000:1 in low-light conditions - that is, as great as the "well over" lower limit mentioned above. He then justifies this claim as copied below. It sounds fairly convincing at first glance; there may be flaws in the argument, but it seems OK, though it does not necessarily apply at all light levels.

    Here is a simple experiment you can do. Go out with a star chart on a clear night with a full moon. Wait a few minutes for your eyes to adjust. Now find the faintest stars you can detect while the full moon is in your field of view. Try to limit the moon and stars to within about 45 degrees of straight up (the zenith).

    If you have clear skies away from city lights, you will probably be able to see magnitude 3 stars.

    The full moon has a stellar magnitude of -12.5.

    If you can see magnitude 2.5 stars, the magnitude range you are seeing is 15.

    Every 5 magnitudes is a factor of 100, so 15 is 100 * 100 * 100 = 1,000,000.

    Thus, the dynamic range in this relatively low light condition is about 1 million to one, perhaps higher!
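    The magnitude arithmetic in the quoted passage can be replayed in a few lines of Python (the helper name is mine; the numbers are the ones quoted above):

```python
import math

def magnitude_range_to_ratio(delta_m):
    # Every 5 stellar magnitudes is a factor of 100 in brightness,
    # so delta_m magnitudes correspond to 100 ** (delta_m / 5).
    return 100 ** (delta_m / 5)

# Full moon at magnitude -12.5, faintest visible stars at +2.5:
delta_m = 2.5 - (-12.5)                 # 15 magnitudes
ratio = magnitude_range_to_ratio(delta_m)

print(f"{ratio:,.0f}:1")                # 1,000,000:1
print(f"{math.log2(ratio):.1f} stops")  # about 19.9 stops
```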

    But, here's a suggestion from me for an experiment at normal daylight light levels.

    • Find a scene that has a good mixture of dark areas and very bright areas - ideally with some dark areas as isolated islands near islands of brightness. An example may be sunlight shining through trees into a heavily shaded area - a few cavelets or deeply shaded areas will help.

    • Allow your eyes to adapt to the general lighting level - do not stare at the bright spots near where the sun is shining through and do not focus on any especially dark areas.

    • Note how well you can see detail in the darkest of dark areas - at what level of darkness does it fade to black?

    • Try the same with bright areas - as you look toward the sun there will be a point where detail washes out and you cannot reasonably see more.

    • Cast your eyes to and fro across the scene between dark and light to try to stop your adaptation mechanism changing f-stop on you.

    • Now, take photos of the scene. Expose "correctly" first, then expose so that the darkest areas you could see are visible in the photo, and then so that the brightest highlights you could distinguish are not washed out.

    • If you have the equipment, take an HDR photo with maximum f-stop variation between photos. (My Sony A77 allows 5ev steps.)
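    The bracketing arithmetic for that HDR step can be sketched in Python (a toy calculation, not any camera's actual firmware; the 5 EV step is the one mentioned for the A77):

```python
def bracket_shutter_speeds(base_seconds, ev_step, frames):
    """Shutter speeds for a symmetric exposure bracket around a base
    exposure. Each EV is a factor of 2 in light, so stepping the
    shutter time by 2**ev_step shifts exposure by ev_step stops."""
    half = frames // 2
    return [base_seconds * 2 ** (ev_step * i) for i in range(-half, half + 1)]

# Three frames at -5 / 0 / +5 EV around a 1/60 s base exposure:
for t in bracket_shutter_speeds(1 / 60, 5, 3):
    print(f"{t:.5f} s")
```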

    My experience is that my eye can always see a wider brightness range than my cameras (Minolta 7Hi, A200, 5D, 7D, A700, A77, others).

    With a maximum-range HDR image (10 EV between the outermost frames), my eye can see as well as or better than the camera.

    The area where this does not APPEAR to hold is extremely low light, when I may need to allow the eye to integrate (which it does for up to about 4 seconds!), whereas I can look at a low-light photo and see the image immediately. The fact that the photo may have needed a 10-second exposure is then irrelevant for viewing.

    Other variably good stuff:

    Wow :) this is really fascinating.

    It's even worse than this; the brain makes up the periphery of the mental image using what it sees as you move your focus around the scene. So you see all the highlight detail of a lighter area when your eye adjusts for that, and then see all the shadow detail of a darker area. This all happens in milliseconds, so you don't realise that the scene is being reconstructed for you.

    +1 Good answer, and when you add to it the fact that we don't "see" with our eyes, but with our brain, it gets even more complicated.

    Interesting stuff. I think there may be some conflation of terms here, however. I've read things in the past (I'll need to find links) indicating the eye has a **dynamic** range of about 24 stops or so, but a **contrast** range of about 20 or less. Dynamic range is the ENTIRE sensitivity range of a sensing device, whereas contrast range usually indicates the part of the total dynamic range being utilized at a given moment. That would make sense, given that the eye can detect as little as a single photon (its lower DR limit) as well as millions of photons under bright sunlight.

    It would make sense, then, that the DR of the human eye is more like 2^24 (about 16 million:1). However, much like the DR of a camera, one cannot make use of all the dynamic range the hardware is capable of all the time. You have to compress the available DR into a narrower contrast range to fit the viewing device - about 8-10 stops for computer screens, and 5-7 stops for print. The way contrast varies within a device's total dynamic range should enlighten readers as to why it is called ***dynamic***.
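    A quick check of the figures in that comment, in Python (the stop counts are the comment's own estimates, not measurements):

```python
# Contrast ratios implied by the stop counts discussed above.
claimed_stops = {
    "eye, total dynamic range": 24,
    "computer screen": 10,   # upper end of the quoted 8-10
    "print": 7,              # upper end of the quoted 5-7
}

for device, n in claimed_stops.items():
    print(f"{device}: {n} stops = {2 ** n:,}:1")
# The eye's claimed 24 stops works out to 16,777,216:1, i.e. the
# "2^24 (16 million)" figure above.
```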

    Really - all that? In actuality, for a given scene, it's more like 11 stops.

License under CC-BY-SA with attribution

Content dated before 7/24/2021 11:53 AM