
What an unprocessed photo looks like: (Maurycy's blog)

Your camera's "unedited" photo is actually the result of massive computational processing—raw sensor data looks like gray mush until algorithms apply black point compensation, demosaicing, white balance, and non-linear curves to match human perception.

· photography
Summary

• Raw sensor data is monochromatic gray (not even black-and-white) because it only uses ~13% of the available dynamic range and each pixel captures just one color through a Bayer filter
• Demosaicing reconstructs full RGB by averaging neighboring pixels, but the Bayer pattern's twice-as-many green pixels produce a green cast that white balance correction must remove
• Human perception is non-linear, so linear sensor data looks too dark—color spaces like sRGB allocate extra bins to darker tones, requiring gamma curves to display correctly
• The processing order matters: white balance must happen before applying non-linear curves, or you'll get incorrect color shifts
• There's no such thing as an "unedited" photo—your camera's JPEG is just one interpretation of the sensor data, not inherently more "real" than manual edits

The article dismantles the myth of "unedited" photography by walking through the computational pipeline that transforms raw sensor data into viewable images. Starting with actual sensor data from a Christmas tree photo, the author reveals it looks like uniform gray—not because the sensor is broken, but because the raw ADC values only span 2110-13600 out of a possible 0-16382 range. After black point compensation stretches this to full range, the image is still monochromatic because camera sensors can't see color—they measure light intensity through a Bayer filter grid where each pixel captures only red, green, or blue.
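The black point compensation step described above can be sketched as a simple linear stretch. This is a minimal illustration, not the article's actual code; the black and white points (2110 and 13600) come from the article's example raw file, and the function name is my own.

```python
import numpy as np

# Black/white points taken from the article's example raw file.
BLACK_POINT = 2110
WHITE_POINT = 13600

def black_point_compensate(raw):
    """Stretch raw ADC values so BLACK_POINT maps to 0.0 and
    WHITE_POINT maps to 1.0, clipping anything outside that span."""
    raw = np.asarray(raw, dtype=np.float64)
    out = (raw - BLACK_POINT) / (WHITE_POINT - BLACK_POINT)
    return np.clip(out, 0.0, 1.0)

# A value at the black point becomes 0.0, at the white point 1.0,
# and the midpoint of the span lands at 0.5.
print(black_point_compensate([2110, 7855, 13600]))
```

Without this stretch, the whole image sits in a narrow gray band of the display range, which is exactly the "uniform gray mush" the article shows.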

The reconstruction process involves demosaicing (averaging each pixel with its neighbors to fill in missing color channels), but this creates problems: the Bayer pattern has twice as many green pixels, causing a green color cast. White balance correction multiplies each channel by constants to equalize them, but this must happen before applying non-linear curves. These curves are necessary because human perception is logarithmic—linear sensor data looks far too dark on displays. Color spaces like sRGB allocate more quantization bins to darker tones to match our perception, so displaying linear data directly wastes dynamic range on imperceptible white variations while crushing shadows.
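The three steps in this paragraph — demosaicing, white balance, then the non-linear curve — can be sketched as follows. This is a deliberately crude half-resolution demosaic (each 2×2 RGGB tile becomes one RGB pixel), not the interpolation a real camera uses; the white balance gains are illustrative placeholders, and all function names are my own. Only the sRGB transfer curve follows a published definition.

```python
import numpy as np

def demosaic_2x2(bayer):
    """Half-resolution demosaic of an RGGB mosaic: each 2x2 tile
    yields one RGB pixel, averaging the tile's two green samples
    (the Bayer pattern has twice as many green sites)."""
    r  = bayer[0::2, 0::2]
    g1 = bayer[0::2, 1::2]
    g2 = bayer[1::2, 0::2]
    b  = bayer[1::2, 1::2]
    return np.stack([r, (g1 + g2) / 2.0, b], axis=-1)

def white_balance(rgb, gains=(2.0, 1.0, 1.5)):
    """Multiply each channel by a constant to equalize the channels.
    The gains here are made-up example values; green is the
    reference channel, so its gain is 1.0."""
    return np.clip(rgb * np.asarray(gains), 0.0, 1.0)

def srgb_encode(linear):
    """Standard sRGB transfer curve. Applied AFTER white balance,
    so the non-linearity does not skew the channel ratios."""
    a = 0.055
    return np.where(linear <= 0.0031308,
                    12.92 * linear,
                    (1 + a) * np.power(linear, 1 / 2.4) - a)
```

For example, linear mid-gray 0.5 encodes to roughly 0.74 — which is why linear sensor data displayed without the curve looks far too dark. Swapping the order (curve first, then white balance) would multiply already-compressed values and shift the hues, which is the ordering mistake the article warns about.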

The final insight is philosophical: every photo involves extensive mathematical interpretation to translate sensor physics into human perception. The camera's automated JPEG processing makes thousands of decisions about curves, white balance, and color rendering. Manual editing isn't making photos "faker"—it's just making different choices about the same underlying data. Understanding this pipeline empowers photographers to override the camera's automated decisions when they don't match the scene or artistic intent.