Your camera probably doesn’t actually have the resolution it claims on the box. Here’s why — and why it probably doesn’t matter.

Top image via Shutterstock.

In the video below from CookeOpticsTV, cinematographer Geoff Boyle explains the science and theory behind the Color Filter Array (CFA) and Bayer pattern approach to digital camera sensors. The video is chock-full of great nuggets of information, and I'd suggest watching it three or seven times.

In the video, Boyle argues that the constant race for higher and higher resolution in today's camera scene is nonsense. I have to admit, he makes a very interesting case.

The fact is that almost all modern camera sensors use what's called a Bayer pattern or Bayer filter array (some stills cameras, like Fujifilm's X-Trans models, use slightly different patterns). As a result, the footage captured by a digital filmmaking camera samples its color using 50% green, 25% red, and 25% blue photosites. So a 4K camera, put extremely simply, captures 2K of green data and 1K each of red and blue, even though it's made up of a 4K resolution's worth of photosites (tiny cavities in the sensor that capture light and measure its intensity).
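
To make that arithmetic concrete, here's a quick back-of-the-envelope sketch in Python. The 4096 × 2160 grid is just an illustrative "4K" photosite layout, not the spec of any particular camera:

```python
# Back-of-the-envelope photosite counts for a hypothetical 4K Bayer sensor.
# (Illustrative numbers only -- not the spec of any real camera.)
width, height = 4096, 2160
total = width * height          # each photosite measures exactly one color

green = total // 2              # 50% of a Bayer array is green
red   = total // 4              # 25% red
blue  = total // 4              # 25% blue

print(f"total: {total:,}")      # 8,847,360 photosites
print(f"green: {green:,}, red: {red:,}, blue: {blue:,}")
# green: 4,423,680, red: 2,211,840, blue: 2,211,840
```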

A Bayer Pattern diagram via RED.

In the diagram above, you can see how a Bayer color array works. Look closely and you'll see green photosites on every row, staggered with either blue or red photosites. As a result, there are twice as many green photosites as blue or red ones.

Each photosite has a small color filter on top of it that blocks all other colors of light, so only light of the desired color can get through. The light that passes is then measured, and the sensor determines how much of each color to represent in the image, based on the amount of light reaching each photosite.
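
As a rough illustration of that sampling step, here's a minimal NumPy sketch that applies an RGGB Bayer mask to a full-color image, so each simulated photosite keeps only the one channel its filter passes. The bayer_mosaic helper is a toy written for this post, and the RGGB layout is just one common arrangement; real sensors vary:

```python
import numpy as np

def bayer_mosaic(rgb):
    """Simulate a Bayer CFA (RGGB layout): each photosite records one color."""
    h, w, _ = rgb.shape
    mosaic = np.zeros((h, w), dtype=rgb.dtype)
    mosaic[0::2, 0::2] = rgb[0::2, 0::2, 0]  # red:   even rows, even columns
    mosaic[0::2, 1::2] = rgb[0::2, 1::2, 1]  # green: even rows, odd columns
    mosaic[1::2, 0::2] = rgb[1::2, 0::2, 1]  # green: odd rows, even columns
    mosaic[1::2, 1::2] = rgb[1::2, 1::2, 2]  # blue:  odd rows, odd columns
    return mosaic

rgb = np.random.rand(8, 8, 3)        # stand-in for the light hitting the sensor
print(bayer_mosaic(rgb).shape)       # (8, 8): one brightness value per photosite
```

Note that green lands on every row of the mask, exactly as in the diagram above, which is where the 2:1 green ratio comes from.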

According to Boyle, you can multiply your digital camera's stated resolution by 0.7 to determine the true resolution of its image. However, I have read in a few places that RED cameras are actually closer to 0.8 because of some creative things they do with their OLPF (optical low-pass filter) technology, but I digress.
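
Taking that rule of thumb at face value, the math is simple. Both multipliers are rough guides, not measured figures:

```python
# Boyle's rule of thumb: true resolution ~ stated resolution x 0.7
stated_width = 4096
print(round(stated_width * 0.7))   # 2867 -- a "4K" camera resolving closer to 2.8K
print(round(stated_width * 0.8))   # 3277 -- the multiplier sometimes quoted for RED
```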

Debayering or Demosaicing


Images via RED.

In the image above, you can see an example of how the Bayer pattern sees the colors of an image (zoomed in at 400%). The portion on the left is exactly what the sensor sees (notice the abundance of green). The portion on the right is how the colors resolve after a process called debayering (also known as demosaicing). This is when interpolation math and filtering more or less "guess" what the colors between the sampled photosites should be and fill in the gaps. Obviously, modern cameras have gotten very good at doing this.
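
Here is a minimal sketch of the simplest form of that guessing, bilinear demosaicing, which fills each missing color sample by averaging the nearest samples of that color. The bilinear_demosaic function is a toy version written for illustration; real cameras and RAW developers use far more sophisticated (and proprietary) algorithms:

```python
import numpy as np
from scipy.ndimage import convolve

def bilinear_demosaic(mosaic):
    """Naive bilinear demosaic of an RGGB mosaic: fill the 'holes' each color
    filter left behind by averaging the nearest samples of that color."""
    h, w = mosaic.shape
    # Where each color was actually sampled (RGGB layout).
    r_mask = np.zeros((h, w)); r_mask[0::2, 0::2] = 1
    b_mask = np.zeros((h, w)); b_mask[1::2, 1::2] = 1
    g_mask = 1 - r_mask - b_mask

    # Classic bilinear interpolation kernels.
    k_rb = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]]) / 4.0
    k_g  = np.array([[0, 1, 0], [1, 4, 1], [0, 1, 0]]) / 4.0

    r = convolve(mosaic * r_mask, k_rb)
    g = convolve(mosaic * g_mask, k_g)
    b = convolve(mosaic * b_mask, k_rb)
    return np.dstack([r, g, b])

mosaic = np.random.rand(16, 16)          # stand-in for raw sensor data
print(bilinear_demosaic(mosaic).shape)   # (16, 16, 3): full color at every pixel
```

Because green is sampled twice as densely, its gaps can be filled from the four direct neighbors, while red and blue have to borrow from farther away. That's one reason the Bayer layout privileges green, the channel our eyes are most sensitive to.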

They use some really neat maths, which is different for every camera or every post program — to guess what’s in the holes between the bits. — Geoff Boyle, cinematographer

It is during the debayering process that you get some of the differences between camera manufacturers and sensors. Every camera company uses a different debayering algorithm to create the image; hence you have people complaining that Sony cameras are too green, or that Canon renders reddish skin tones, and so on. When shooting RAW, generally speaking, the debayering happens later in computer software rather than in the camera's firmware the moment the image is captured, and the camera's color decisions are stored as metadata alongside the raw sensor data.

So there are holes in the picture, and those holes are filled in with maths. So someone, who doesn’t make their living making images, someone who’s a mathematician — decides what’s in those holes. So, if I want to get a 2K image, I’ll probably shoot at 4K. — Geoff Boyle, cinematographer

Does It Even Matter?


Image via The Tiffen Company.

The first thing any DP with a really sharp lens does is put a diffusion filter on or a net on the back. — Geoff Boyle, cinematographer

I think perhaps my favorite part of Boyle's argument is that resolution matters far less than we assume in the discussion of what creates a good cinematic image. In the video, Boyle points out that, in the cinematography world, the first thing a DP does to a sharp, highly detailed image is add some diffusion in front of it. In stark contrast to the race for higher resolution, there is also a race for a more filmic (or film-like) digital image. Digital sensors can now capture sharply detailed images, which has given us beautiful sports and nature photography (like the 6K footage the RED Epic Dragon captured for much of BBC's Planet Earth II), but in digital narrative and commercial filmmaking, we're generally seeing a more stylized approach aimed at a softer, more organic image.

Whether it's a Fogal net, a Pro-Mist filter (or any other diffusion filter), or vintage Russian lenses, filmmakers like to soften their images. Diffusion makes skin look better, it can make your lights and bokeh bloom, and it helps draw viewers into the world you're creating. So if you're going to soften your image anyway, why stress so much about the resolution of your camera? At the end of the day, many people will still opt for an Alexa (which tops out at 2.8K when shooting RAW) over a RED Weapon Helium 8K, based purely on how they prefer the image to look.

So, more than anything, I think it boils down to what it always boils down to: your personal taste and what the story calls for. Is arguing over resolution pointless? To Geoff Boyle it is. To me, it sometimes is, and other times it isn't. What's more important is understanding how your camera creates its images, and why that's different from other cameras.