How Pixel Binning Makes Your Samsung, Apple and Google Photos Better

Flagship phones rely on this technology to offer good low-light performance when it’s dark and high-resolution photos when it’s bright. Here’s how it works.

Megapixels used to be so much simpler: A bigger number meant your camera could capture more photo detail as long as the scene had enough light. But a technology called pixel binning, now universal on flagship smartphones, is changing the old photography rules for the better. In short, pixel binning gives you a camera that offers lots of detail when it’s bright out, without becoming useless when it’s dim.

The necessary hardware changes bring some tradeoffs and interesting details, though, and different phone makers are trying different pixel binning recipes, which is why we’re taking a closer look.

Read more: Check out CNET’s Google Pixel 7 Pro review, iPhone 14 Pro review and Galaxy S22 Ultra review

Pixel binning arrived in 2018, spread widely in 2020 with models like Samsung’s Galaxy S20 Ultra and Xiaomi’s Mi 10 Pro, and came to Apple and Google hardware with the iPhone 14 Pro and Pixel 7 phones in 2022. The top-end model from Samsung, the Galaxy S22 Ultra, features a 108-megapixel main camera sensor, and pixel binning could take the next technological leap with the S23 Ultra’s expected 200-megapixel main camera, set to debut Feb. 1.

Here’s your guide to what’s going on.

What is pixel binning?
Pixel binning is a technology that’s designed to make an image sensor more adaptable to different conditions by grouping pixels in different ways. When it’s bright you can shoot at the full resolution of the sensor, at least on some phones. When it’s dark, sets of pixels (2×2, 3×3, or 4×4, depending on the sensor) can be grouped into larger virtual pixels that gather more light but take lower-resolution shots.

For example, Samsung’s Isocell HP2 sensor can take 200-megapixel shots, 50-megapixel shots with 2×2 pixel groups, and 12.5-megapixel shots with 4×4 pixel groups.
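
To make that resolution math concrete, here’s a minimal sketch in Python with NumPy of how binning collapses the pixel count. The bin_pixels function, the simple averaging and the scaled-down array are illustrative stand-ins; real sensors combine charge or voltage in hardware or in the image signal processor.

```python
import numpy as np

def bin_pixels(raw, factor):
    """Average each factor x factor block into one larger 'virtual' pixel."""
    h, w = raw.shape
    h, w = h - h % factor, w - w % factor               # trim to a clean multiple of factor
    blocks = raw[:h, :w].reshape(h // factor, factor, w // factor, factor)
    return blocks.mean(axis=(1, 3))                     # one value per block

# Scaled-down stand-in for a sensor readout; a real 200-megapixel frame is far larger.
raw = np.random.rand(1_000, 2_000)
print(bin_pixels(raw, 2).shape)   # (500, 1000): a quarter of the pixels, like 200MP -> 50MP
print(bin_pixels(raw, 4).shape)   # (250, 500): one sixteenth, like 200MP -> 12.5MP
```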

Pixel binning offers another advantage that arrived in 2020 phones: virtual zoom. Phones can crop a shot to use only the central pixels of the iPhone 14 Pro’s 48-megapixel main camera or the Google Pixel 7’s 50-megapixel camera. That turns a 1x main camera into a 2x zoom that takes 12-megapixel photos. It’ll only work well with relatively good light, but it’s a great option, and 12 megapixels has been the prevailing resolution for years now, so it’s still a useful shot.
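
Here’s a similarly minimal sketch of the cropping idea, assuming the full-resolution frame is already in memory as a NumPy array. The center_crop_2x function and the 6,000 x 8,000 stand-in frame are hypothetical, but the arithmetic mirrors a 48-megapixel sensor yielding a 12-megapixel, 2x-zoomed image.

```python
import numpy as np

def center_crop_2x(frame):
    """Keep only the central quarter of the frame: half the width, half the height."""
    h, w = frame.shape[:2]
    return frame[h // 4 : 3 * h // 4, w // 4 : 3 * w // 4]

frame = np.zeros((6_000, 8_000), dtype=np.uint8)   # stand-in for a 48-megapixel frame
zoomed = center_crop_2x(frame)
print(zoomed.shape)                                 # (3000, 4000): 12 megapixels at 2x zoom
```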

With such a high base resolution, pixel-binning sensors can also be more adept at high-resolution video, in particular extremely high 8K resolution.

Pixel binning requires some fancy changes to the sensor itself and the image-processing algorithms that transform the sensor’s raw data into a photo or video.

Is pixel binning a gimmick?
No. Well, mostly no. It does let phone makers brag about megapixel numbers that vastly exceed what you’ll see even on professional-grade DSLR and mirrorless cameras. That’s a bit silly, since high-end cameras have much larger pixels that gather vastly more light, along with better optics than smartphones offer. But few of us haul those big cameras around, and pixel binning can wring more photo quality out of your smartphone camera.

How does pixel binning work?
To understand pixel binning better, you have to know what a digital camera’s image sensor looks like. It’s a silicon chip with a grid of millions of pixels (technically called photosites) that capture the light that comes through the camera lens. Each pixel registers only one color: red, green or blue.

The colors are staggered in a special checkerboard arrangement called a Bayer pattern that lets a digital camera reconstruct all three color values for each pixel, a key step in generating that JPEG you want to share on Instagram.
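
As a rough illustration, this toy snippet lays out the repeating red-green-green-blue tile of a Bayer mosaic; the letters simply stand in for the color filter over each photosite.

```python
import numpy as np

bayer_tile = np.array([['R', 'G'],
                       ['G', 'B']])
sensor_colors = np.tile(bayer_tile, (3, 4))   # repeat the tile across a tiny 6x8 "sensor"
print(sensor_colors)
# Each photosite records one color; demosaicing software interpolates the other
# two colors from neighboring pixels to build the full-color image.
```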

Combining data from multiple small pixels on the image sensor into one larger virtual pixel is really useful in lower-light situations, where bigger pixels are better at keeping image noise at bay and capturing color. When it’s brighter out, there’s enough light for the individual pixels to work on their own, offering the higher-resolution shot or a zoomed-in view.

Pixel binning commonly combines four real pixels into one virtual pixel “bin.” But Samsung’s Galaxy S Ultra line has combined 3×3 groups of real pixels into single virtual pixels, and the South Korean company is likely to adopt 4×4 binning with the Galaxy S23 Ultra.

When should you use high resolution vs. pixel binning?
Most people will be happy with lower-resolution shots, and that’s the default my colleagues Jessica Dolcourt and Patrick Holland recommend after testing the new Samsung Galaxy phones. Apple’s iPhones won’t even take 48-megapixel shots unless you specifically enable the option while shooting with Apple’s high-end ProRaw image format, and Google’s Pixel 7 Pro doesn’t offer full 50-megapixel photos at all.

The 12-megapixel shots offer better low-light performance, but they also avoid the monster file sizes of full-resolution images that can gobble up storage on your device and online services like Google Photos and iCloud. For example, a sample shot my colleague Lexy Savvides took was 3.6MB at 12 megapixels with pixel binning and 24MB at 108 megapixels without.

Photo enthusiasts are more likely to want to use full resolution when it’s feasible. That could help you identify distant birds or take more dramatic nature photos of faraway subjects. And if you like to print large photos (yes, some people still make prints), more megapixels matter.

Does a 108-megapixel Samsung Galaxy S22 Ultra take better photos than a 61-megapixel Sony A7r V professional camera?
No. The size of each pixel on the image sensor also matters, along with other factors like lenses and image processing. There’s a reason the Sony A7r V costs $3,898 while the S22 Ultra costs $1,200 and can also run thousands of apps and make phone calls.

Image sensor pixels are squares whose width is measured in millionths of a meter, or microns. A human hair is about 75 microns across. On Samsung’s Isocell HP2, a virtual pixel in a 12-megapixel shot is 2.4 microns across. In 200-megapixel mode, a pixel measures just 0.6 microns. On a Sony A7r V, though, a pixel is 3.8 microns across. That means the Sony can gather about two and a half times more light per pixel than a phone with the HP2 in its 12-megapixel binning mode, and 39 times more than in its 200-megapixel full-resolution mode, a major advantage for image quality.
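
Since the light a photosite collects scales roughly with its area, you can sanity-check those ratios by squaring the width ratios. This back-of-the-envelope snippet uses the rounded figures above; the second result lands near 40 with these rounded widths, in the same ballpark as the 39 times quoted.

```python
phone_binned   = 2.4   # microns: HP2 virtual pixel in 12-megapixel binned mode
phone_full_res = 0.6   # microns: HP2 pixel in 200-megapixel mode
camera_pixel   = 3.8   # microns: roughly the pixel pitch of a 61-megapixel full-frame sensor

print(round((camera_pixel / phone_binned) ** 2, 1))     # ~2.5x more light per pixel
print(round((camera_pixel / phone_full_res) ** 2, 1))   # ~40x with these rounded widths
```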

Phones are advancing faster than traditional cameras, though, and closing the image quality gap. Computational photography technology like combining multiple frames into one shot and other software processing tricks made possible by powerful phone chips are helping, too. That’s why my colleague and professional photographer Andrew Lanxon can take low-light smartphone photos handheld that would take a tripod with his DSLR. And image sensors in smartphones are getting bigger and bigger to improve quality.

Why is pixel binning popular?
Because miniaturization has made ever-smaller pixels possible. “What has propelled binning is this new trend of submicron pixels,” meaning pixels less than a micron wide, said Devang Patel, a senior marketing manager at Omnivision, a top image sensor manufacturer. Having lots of those pixels lets phone makers, desperate to make this year’s phone stand out, brag about big megapixel numbers and 8K video. Binning lets them make that boast without sacrificing low-light sensitivity.

Can you shoot raw with pixel binning?
That depends on the phone. Photo enthusiasts like the flexibility and image quality of raw photos — the unprocessed image sensor data, packaged as a DNG file. But not all phones expose the raw photo at full resolution. The iPhone 14 Pro does, but the Pixel 7 Pro does not, for example.

The situation is complicated by the fact that raw processing software like Adobe Lightroom expects raw images whose color data comes in a traditional Bayer pattern, not pixel cells grouped into 2×2 or 3×3 patches of the same color.
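
To picture the difference, this toy comparison contrasts a standard Bayer layout with the grouped layout a 2×2 binning sensor uses (often called a quad Bayer arrangement); remosaicing has to turn the second back into the first before conventional raw converters can make sense of it.

```python
import numpy as np

base = np.array([['R', 'G'],
                 ['G', 'B']])
standard_bayer = np.tile(base, (2, 2))                             # colors alternate every pixel
grouped_layout = np.repeat(np.repeat(base, 2, axis=0), 2, axis=1)  # 2x2 blocks share one color

print(standard_bayer)
print(grouped_layout)
```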

The Isocell HP2 has a clever trick here, though: it uses AI technology to “remosaic” the 4×4 pixel groups to construct the traditional Bayer pattern color checkerboard. That means it can shoot raw photos at full 200-megapixel resolution, though it remains to be seen whether that will be an option exposed in shipping smartphones.

What are the downsides of pixel binning?
For the same size sensor, 12 real megapixels would perform a bit better than 12 binned megapixels, says Judd Heape, a senior director at Qualcomm, which makes chips for mobile phones. The sensor would likely be less expensive, too. And when you’re shooting at full resolution, more image processing is required, which shortens your battery life.

Indeed, pixel binning’s sensor costs and battery and processing horsepower requirements are reasons it’s an option mostly on higher-end phones.

For high-resolution photos, you’d get better sharpness with a regular Bayer pattern than with a binning sensor using 2×2 or 3×3 groups of same-color pixels. But that isn’t too bad a problem. “With our algorithm, we’re able to recover anywhere from 90% to 95% of the actual Bayer image quality,” Patel said. Comparing the two approaches in side-by-side images, you probably couldn’t tell a difference outside lab test scenes with difficult situations like fine lines.

If you forget to switch your phone to binning mode and then take high-resolution shots in the dark, image quality suffers. Apple automatically uses pixel binning to take lower-resolution shots, sidestepping that risk.

Could regular cameras use pixel binning, too?
Yes, and judging by some full-frame sensor designs from Sony, the top image sensor maker right now, they may someday do just that.

What’s the future of pixel binning?
Several developments are possible. Very high-resolution sensors with 4×4 pixel binning could spread to more premium phones, while less exotic 2×2 pixel binning reaches lower-end phones.

Another direction is better HDR, or high dynamic range, photography that captures a better span of bright and dark image data. Small phone sensors struggle to capture a broad dynamic range, which is why companies like Google and Apple combine multiple shots to computationally generate HDR photos.

But pixel binning means new pixel-level flexibility. In a 2×2 group, you could devote two pixels to regular exposure, one to a darker exposure to capture highlights like bright skies, and one to a brighter exposure to capture shadow details.

Indeed, Samsung’s HP2 can divvy up pixel duties this way for HDR imagery.
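
Here’s a loose sketch of how a merge like that could work for a single 2×2 group, with one brighter, two regular and one darker exposure. The gain values and the simple averaging are purely illustrative and aren’t Samsung’s actual processing.

```python
import numpy as np

def merge_hdr_group(bright_px, regular_px1, regular_px2, dark_px,
                    bright_gain=4.0, dark_gain=0.25):
    """Scale each exposure back to a common reference, then average them."""
    samples = np.array([
        bright_px / bright_gain,     # longer/brighter exposure: recovers shadow detail
        regular_px1, regular_px2,    # two regular exposures
        dark_px / dark_gain,         # shorter/darker exposure: preserves bright highlights
    ])
    return samples.mean()

print(merge_hdr_group(bright_px=0.9, regular_px1=0.30, regular_px2=0.28, dark_px=0.07))
```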

Omnivision also expects autofocus improvements. In traditional designs, each pixel is capped with its own microlens to gather more light. But now a single microlens sometimes spans an entire 2×2, 3×3, or 4×4 group, too. Each pixel under the same microlens gets a slightly different view of the scene, depending on its position, and that difference lets a digital camera calculate focus distance. That should help your camera keep photo subjects in sharp focus.
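
The focusing idea can be sketched in one dimension: pixels on opposite sides of a microlens see slightly shifted versions of the same detail, and searching for the offset that lines the two views up tells the camera which way, and how far, focus is off. The signal, the shift and the search below are all made up for illustration.

```python
import numpy as np

def estimate_shift(left, right, max_shift=10):
    """Find the offset (in samples) that best aligns the two views."""
    best_shift, best_err = 0, float('inf')
    for s in range(-max_shift, max_shift + 1):
        if s >= 0:
            a, b = left[s:], right[:len(right) - s]
        else:
            a, b = left[:s], right[-s:]
        err = np.mean((a - b) ** 2)
        if err < best_err:
            best_shift, best_err = s, err
    return best_shift

scene = np.sin(np.linspace(0, 20, 200))   # stand-in for image detail along one row
left_view = scene[6:]                      # the "left" pixels see the detail offset by 6 samples
right_view = scene[:-6]
print(estimate_shift(left_view, right_view))   # -6: the offset the camera would correct for
```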
