A few years ago we were visiting friends in the high country of Colorado, and I took a morning excursion by myself to do a little photography. I managed to find a hole that gave me a look inside an old mine that was all boarded up. My eyes had trouble seeing in the dark, but I could make out the dimly lit form of something that seemed interesting. Everything appeared black and white, so I thought it might make a good black and white photograph.
It was quite a contortionist affair to get my tripod set up just right so that the lens of my camera could poke into the hole. A tripod was necessary because the darkness inside the mine was going to require a long exposure. More difficulties arose in trying to get the image composed, but I finally managed it. I tripped the shutter and waited. The exposure took five seconds, and, much to my surprise, this is what appeared on the screen of my digital camera:

Now I consider myself somewhat scientifically minded, so I needed to figure out why the image produced by the camera was so different from what I saw when I peered into the mine myself! But apparently it wasn’t that urgent, because I’ve pondered this incongruity off and on since making the image. Here’s what I came up with on my own:
Our eyes and a digital camera work in very similar ways. Both have a lens that focuses an image; in the eye the image is focused onto the retina, and in a digital camera it is focused onto a sensor. The retina has a bunch of cells that are sensitive to light, and the sensor of a camera does as well. In each case, light hitting a cell causes an electrical impulse, which travels to the brain or to the camera’s processing hardware, where it is interpreted. All of this is simplified a bit, but it is commonly accepted scientific knowledge.
So my theory was that our brains have to process this information very fast, because we can’t wait five seconds to act in the event of danger, or even just for ordinary functioning. There isn’t enough time to process the color information, so we simply see in black and white when the light is low. The camera, on the other hand, is in no hurry, and can extract the color information even in low light.
I thought it all sounded pretty good, but it is in fact WRONG! It turns out that the light-sensing cells on a camera sensor are all the same (the color comes from a mosaic of tiny red, green, and blue filters laid over them), but the retinas of our eyes have two kinds of cells: rods and cones. The cones work in adequate light and give us our color perception, whereas the rods function well in low light but deliver only brightness information, not color. Inside that dark mine my cones had nothing to work with, my rods took over, and so I saw the scene in black and white. Mystery solved!
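If you’d like to see the effect without crawling into a mine, here is a little Python sketch that fakes rod vision by draining the color out of a photo as an assumed light level drops. The file names, the luminance weights, and the dimming curve are just illustrative stand-ins, not anything my camera actually does:

```python
# Simulate rod-dominated (scotopic) vision: blend a photo toward
# grayscale, and dim it, as the assumed light level falls.
# Requires Pillow and NumPy. File names here are hypothetical.
import numpy as np
from PIL import Image

def simulate_low_light(img: Image.Image, light_level: float) -> Image.Image:
    """light_level in [0, 1]: 1.0 = daylight (cones), 0.0 = near dark (rods)."""
    rgb = np.asarray(img.convert("RGB"), dtype=np.float32)
    # Luminance, weighted roughly the way the eye weights color (Rec. 601).
    luma = rgb @ np.array([0.299, 0.587, 0.114], dtype=np.float32)
    gray = np.repeat(luma[:, :, None], 3, axis=2)
    # Less light means less color: blend toward grayscale, then dim.
    out = light_level * rgb + (1.0 - light_level) * gray
    out *= 0.3 + 0.7 * light_level  # crude overall dimming; an assumed curve
    return Image.fromarray(np.clip(out, 0.0, 255.0).astype(np.uint8))

if __name__ == "__main__":
    photo = Image.open("mine.jpg")  # hypothetical input file
    simulate_low_light(photo, 0.15).save("mine_as_my_eyes_saw_it.jpg")
```

With light_level down around 0.15, a colorful frame comes out as roughly the murky monochrome my eyes reported from inside the mine, while the camera, patiently gathering light for five seconds, kept the color.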