How many colors and shades can the human eye distinguish in a single scene?

  • How many distinct colors, shades, hues, and tints can the average person distinguish in a single scene? In other words, what's the theoretical bit-depth required to be sure of recording a photograph with all of the visual information a human would perceive?

    I've seen answers ranging from 200,000 to 20,000,000, and it's hard to sort out authority. And the term "color" is ambiguous — is just hue meant, or are differences in saturation and lightness also included?

    I'm sure statistics have been collected for the "Farnsworth–Munsell 100 Hue Test." Here's a crappy online version which I am sure is affected by monitor calibration: http://www.xrite.com/custom_page.aspx?PageID=77&Lang=en

  • jrista (Correct answer, 10 years ago)

    When discussing the number of colors perceptible to the human eye, I tend to refer to the 2.4 million colors of the CIE 1931 XYZ color space. It is a fairly solid, scientifically founded number, although I do admit it may be limited in context. I think it may be possible for the human eye to be sensitive to 10-100 million distinct "colors" when referring to both chromaticity and luminosity.


    I'll base my answer on the work done by the CIE, which began in the 1930s and progressed again in the 1960s, with some algorithmic and accuracy improvements to the formulas over the last couple of decades. When it comes to the arts, including photography and print, I think the work done by the CIE is particularly relevant, as it is the basis of color correction and of modern mathematical color models and color space conversion.

    The CIE, or Commission internationale de l'éclairage, established the "CIE 1931 XYZ color space" in 1931. This color space is a plot of full-purity color, mapped from 700nm (on the near-infrared border) through 380nm (near-UV), progressing through all the wavelengths of "visible" light. It is based on human vision, which is a tristimulus response created by the three types of cones in our eyes: short-, medium-, and long-wavelength cones, whose peak sensitivities map to 420-440nm, 530-540nm, and 560-580nm. These wavelengths correspond to blue, green, and yellow-red (or orangish-red) primary colors. (The red cones are a bit unique in that their sensitivity has two peaks: the primary one in the 560-580nm range, and a second one in the 410-440nm range. This double-peaked sensitivity suggests that our "red" cones may actually be "magenta" cones in terms of actual sensitivity.) The tristimulus response curves are derived from a 2° field of view centered on the fovea, where our cones are most concentrated and where our color vision, under medium to high lighting intensity, is at its best.

    The actual CIE 1931 color space is mapped from XYZ tristimulus values, which are generated from red, green, and blue derivatives based on actual red, green, and blue color values (an additive model). The XYZ tristimulus values are adjusted for a "standard illuminant", normally a sunlight-balanced white of 6500K (although the original CIE 1931 color space was created for three standardized illuminants: A at 2856K, B at 4874K, and C at 6774K), and weighted according to a "standard observer" (based on that 2° foveal field of view). The standard CIE 1931 XYZ color plot is horseshoe-shaped and filled with a "chromaticity" diagram of pure 'colors', covering the hue range from 700nm through 380nm and ranging in saturation from 0% at the central white point to 100% along the periphery. This is a "chromaticity" plot: color without regard to intensity (or, to be most accurate, color at maximum intensity). This color plot, according to some studies (references pending), represents about 2.38 million colors that the human eye can detect under moderately high-intensity lighting of approximately the color temperature and brightness of daylight (not direct sunlight, which is closer to 5000K, but sunlight plus blue sky light, about 6500K).
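    As a minimal sketch of the projection described above (not part of the original answer): chromaticity coordinates are obtained by normalizing the XYZ tristimulus values, which is exactly what discards intensity. The D65 white point values below are standard published approximations.

```python
def xyz_to_xy(X, Y, Z):
    """Project XYZ tristimulus values onto the CIE 1931 chromaticity plane.

    Chromaticity (x, y) discards absolute intensity: scaling X, Y, Z by
    any constant yields the same (x, y), which is why the horseshoe
    diagram shows color without regard to intensity.
    """
    s = X + Y + Z
    return X / s, Y / s

# D65 (~6500K daylight) white point tristimulus values, Y normalized to 1.0
x, y = xyz_to_xy(0.95047, 1.0, 1.08883)
print(round(x, 4), round(y, 4))  # lands at the familiar (0.3127, 0.329) white point
```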


    So, can the human eye detect only 2.4 million colors? According to the work done by the CIE in the 1930s, under a specific illuminant that matches the intensity and color temperature of daylight, and factoring in only the 2° of cones concentrated in the fovea of our eyes, it seems we can indeed see about 2.4 million colors.

    The CIE specifications are limited in scope, however. They do not account for varying levels of illumination, illuminants of differing intensity or color temperature, or the fact that we have more cones spread across at least a 10° area of our retinas around the fovea. They also do not account for the fact that peripheral cones seem to be more sensitive to blues than the cones concentrated in the fovea (which are primarily red and green cones).

    Refinements to the CIE chromaticity plots were made in the 1960s and again in 1976, updating the "standard observer" to include a full 10° color-sensitive spot on our retinas. These refinements never came into much use, however, and the extensive color-sensitivity research done in relation to the CIE's work has been largely limited to the original CIE 1931 XYZ color space and chromaticity plot.

    Given the limitation of color sensitivity to only a 2° spot in the fovea, there is a strong likelihood that we can see more than 2.4 million colors, particularly extending into the blues and violets. This is corroborated by the 1960's refinements to CIE color spaces.


    Tone, perhaps better labeled luminosity (the brightness or intensity of a color), is another aspect of our vision. Some models blend chromaticity and luminosity together, while others keep the two distinct. The retina contains both cones, the "color"-sensitive receptors, and rods, which are color-agnostic but sensitive to changes in luminosity. The human eye has about 20 times as many rods (94 million) as cones (4.5 million). Rods are also about 100 times as sensitive to light as cones, capable of detecting a single photon. Rods seem to be most sensitive to blueish-green wavelengths of light (around 500nm) and have lower sensitivity to reddish and near-UV wavelengths. It should be noted that a rod's sensitivity is cumulative, so the longer one observes a static scene, the more clearly the levels of luminosity in that scene will be perceived by the mind. Rapid changes in a scene, or panning motion, will reduce the ability to differentiate fine tonal gradation.

    Given the rods' far greater sensitivity to light, it seems logical to conclude that humans have a finer, and distinct, sensitivity to variations in light intensity than to changes in hue and saturation when observing a static scene for a time. Exactly how this factors into our perception of color, and how it affects the number of colors we can see, I can't exactly say. A simple test of tonal sensitivity can be done on a clear evening, just as the sun sets. The blue sky can range from near white-blue to deep, dark midnight blue. While the hue of such a sky covers a very small range, the tonal gradient is immense and very fine. Observing such a sky, one can see a seemingly infinitely smooth change from bright white-blue to sky blue to dark midnight blue.


    Studies unrelated to the CIE's work have indicated a wide range of "maximum colors" that the human eye can perceive. Some put the upper limit at 1 million colors, while others put it at 10 million. More recent studies have shown that some women have a unique fourth cone type, an "orange" cone, that could possibly extend their sensitivity to 100 million, though that study counted both chromaticity and luminosity in its calculation of "color".

    That ultimately raises the question: can we separate chromaticity from luminosity when determining "color"? Do we prefer to define the term "color" to mean the hue, saturation, and luminosity of the light we perceive? Or is it better to keep chromaticity distinct from luminosity? How many levels of intensity can the eye really see, versus how many distinct differences in chromaticity? I am not sure these questions have actually been answered scientifically yet.


    Another aspect of color perception involves contrast. It is easy to perceive a difference in two things when they contrast well with each other. When trying to visually determine how many "colors" one sees when looking at varying shades of red, it can be rather difficult to tell if two similar shades are different or not. However, compare a shade of red with a shade of green, and the difference is very clear. Compare that shade of green in sequence with each shade of red, and the eye can more easily pick up the differences in the red shades in peripheral relation to each other as well as in contrast with the green. These factors are all facets of the vision of our mind, which is a far more subjective device than the eye itself (which makes it hard to scientifically gauge color perception beyond the scope of the eye itself.) Given a setting with appropriate contrast, one may be able to detect far more distinct colors in context than a setting without any contrast at all.

    Anyway: 10-100 million distinct colors works out to 24-27 bits, of which about 22 bits cover hue and saturation.
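    To make the arithmetic behind those figures explicit (my own worked example, not from the answer): the bit depth needed to index N distinguishable values is the base-2 logarithm of N, rounded up.

```python
import math

# Bit depth needed to index N distinguishable colors: ceil(log2(N)).
# 2.4 million is the CIE 1931 chromaticity estimate discussed above;
# 10-100 million is the range including luminosity.
bits = {n: math.ceil(math.log2(n)) for n in (2_400_000, 10_000_000, 100_000_000)}
print(bits)  # {2400000: 22, 10000000: 24, 100000000: 27}
```

    So 2.4 million chromaticities need ~22 bits, and the 10-100 million totals land at the quoted 24-27 bits.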

    The sad thing about the RGB color model is that it mixes chromaticity and luminosity. You can't change luminosity independently of chromaticity; you have to change chromaticity at the same time, since they are intrinsically linked. That link inherently limits how much fineness we can extract from RGB until we reach bit depths beyond 8bpc... 16bpc is pretty adequate, but still not ideal. A real bummer about a lot of vision testing is that it's done with computers and computer screens, USING the RGB color model. I think that has limited our measurement of human vision in some ways.
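    A small illustration of that entanglement (my sketch, assuming linear RGB and the standard Rec. 709 luminance weights): because the three channels contribute unequally to luminance, any single-channel edit moves both luminance and chromaticity; only a uniform scale of all three preserves the R:G:B ratio.

```python
# Rec. 709 / sRGB luminance coefficients, applied to *linear* RGB values.
def luminance(r, g, b):
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

orange = (1.0, 0.5, 0.0)
print(luminance(*orange))            # 0.5702

# Scaling every channel equally halves luminance but keeps the
# R:G:B ratio, i.e. the chromaticity, unchanged.
dim = tuple(0.5 * c for c in orange)
print(luminance(*dim))               # 0.2851

# Halving just the red channel changes luminance AND chromaticity
# at once: the result is both darker and a different color.
shifted = (0.5, 0.5, 0.0)
print(luminance(*shifted))           # 0.4639
```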

    @jrista: how does the Bezold–Brücke shift relate to that?

    I believe Bezold–Brücke is based only on extrafoveal perceptual tests, or tests that involve the outer 10° color sensitive spot, but ignoring (or under-weighting) the 2° foveal spot (which has more red and green cones). The greater concentration of blue cones in the extrafoveal area could account for the blue/yellow weighted shift. I don't know that much about their studies, though, so I can't say anything definitively.

    @jrista: how are tests like that conducted? The papers I see reference studies on human subjects giving subjective responses, rather than measurements or anything. Right now I am too tired to understand anything I'm reading, but I'm developing the sneaking suspicion that a model which separates color into hue, saturation, and value also has limitations. Not that that necessarily relates directly to my question here. :)

    @mattdm: Every color model we have ever devised has limitations... it's the nature of the beast. We're only "modeling" reality, nothing more. Until the day we can actually take direct measurements from within the eye, or even directly off the cones and rods themselves, we will probably never fully understand color vision, and therefore all of our models will be flawed in one way or another. To keep things simple, both HSV/B/L and RGB color models have their limitations. RGB is a horrid way to *model* color, but a great way to *emit* or *sense* color, as it matches the hardware-level design.

    Color systems that separate hue and saturation from luminosity are better ways to *model* color, as they more closely match how we see. Technically speaking, the Lab model is the most perceptually relevant, as it has a green/magenta color *axis* and a blue/yellow color axis: diametrically opposed poles that can't represent both of their primary colors at the same time (i.e. you can have magenta or green, but not both simultaneously). Luminosity is represented by a third axis in a different plane. Cont ->

    L*a*b* is about as close as it gets to the two color axes and the intensity axis of human eyesight, which is why it is the sole color model/space used to compute chromatically adapted color difference, which is critical in color matching, color space conversion, perceptually accurate color model development (such as inkjet color models), etc. On the flip side, HSL, HSB, YCC, etc. are not great for *emitting* or *sensing* color, as it is difficult or impossible to design a device that way. Sensing specific colors (i.e. RGB) is easier and more cost-effective.
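    For the curious, the standard XYZ to L*a*b* conversion (a sketch I'm adding for illustration, assuming a D65 white point) shows the opponent structure directly in the formulas: L* depends only on Y, while a* and b* are differences between channels.

```python
def xyz_to_lab(X, Y, Z, white=(0.95047, 1.0, 1.08883)):
    """Standard CIE XYZ -> L*a*b* conversion (D65 white point assumed).

    L* carries lightness alone; a* is the green(-)/magenta(+) axis and
    b* is the blue(-)/yellow(+) axis, matching the opponent axes
    described above.
    """
    def f(t):
        d = 6 / 29
        # Cube root with a linear toe near black, per the CIE definition.
        return t ** (1 / 3) if t > d ** 3 else t / (3 * d * d) + 4 / 29

    fx, fy, fz = (f(v / n) for v, n in zip((X, Y, Z), white))
    return 116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)

# The white point itself lands at L*=100 with both color axes at zero.
L, a, b = xyz_to_lab(0.95047, 1.0, 1.08883)
print(round(L), round(a), round(b))  # 100 0 0
```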

    As for how vision tests are conducted, it has to do with field of vision. Many color vision tests use a dark screen and emit points of light at certain angles in a specific field of view. You can stimulate just the 2-10° area around the foveal spot, stimulate just the foveal spot, or stimulate extreme peripheral regions of the retina. Go to an ophthalmologist and complain about extensive vision problems... you'll learn a lot about vision and vision testing that way. ;P I have something called fibrous cloudy masses in my eyes, which wreak havoc on my vision. I've done all the tests.

    @jrista - awesome, fascinating answer. I usually skim or skip long-winded answers (by the first paragraph you know little will be said), but ended up wishing this answer was much longer and *more* detailed. Any other links to articles about this? Either way, thanks for this superb explanation.

    @bill: There is a lot of information on this topic. There are at least hundreds, if not thousands, of sites and pages on the net that cover color, color models and modeling, color spaces, vision, the design and capabilities of the eye, the subjective aspects of how our brains process what our eyes pick up, etc. I may have more links in old archives of my bookmarks at home...I haven't browsed through that stuff in a while, though.

Licensed under CC BY-SA with attribution


Content dated before 7/24/2021 11:53 AM