How does the Face Unblur feature work on the Google Pixel?

Face Unblur Google (Image credit: Google)

Best answer: Face Unblur is a new feature on the Pixel 6 and 6 Pro that relies on Google's machine learning algorithms to ensure that when you shoot a moving subject, their face isn't blurry. It does this by capturing photos from the primary and wide-angle lenses simultaneously and stitching them together to deliver images with clear detail.

Face Unblur makes great use of machine learning

Go through your phone's gallery and you'll find at least a few photos where the subject you're trying to capture is moving too fast, leading to a blurry face. Google aims to solve this problem with the Face Unblur feature on the Pixel 6 and 6 Pro. The feature relies on the custom Tensor hardware and Google's machine learning expertise: when it detects a subject moving too fast, it automatically takes images from both the primary and wide-angle lenses and stitches them together. Here's how Google describes it:

Pixel 6 and Pixel 6 Pro simultaneously take a darker but sharper photo on the Ultra Wide camera and a brighter but blurrier photo on the main camera. Google Tensor then uses machine learning to automatically combine the two images, giving you a well-exposed photo with a sharp face.

Basically, the main 50MP camera on the Pixels usually defaults to a higher ISO and a slower shutter speed. While this leads to bright images full of detail, it doesn't work well for moving subjects, because a longer exposure lets the subject move while the shutter is open. That's why when you try to take photos of your kids as they're playing, their faces often look blurry.
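The tradeoff can be illustrated with a rough exposure model: brightness scales with shutter time and ISO, while motion blur scales with shutter time alone. This is a minimal sketch with illustrative numbers, not the Pixel's actual camera settings:

```python
def relative_exposure(shutter_s: float, iso: int) -> float:
    """Rough model: light gathered is proportional to shutter time x ISO."""
    return shutter_s * iso

def motion_blur_px(subject_speed_px_per_s: float, shutter_s: float) -> float:
    """Blur length in pixels: how far the subject moves during the exposure."""
    return subject_speed_px_per_s * shutter_s

# Main camera: slower shutter, bright frame but blurry for a fast subject.
main = relative_exposure(1 / 30, 100)
# Wide camera: much faster shutter, darker frame but sharp.
wide = relative_exposure(1 / 250, 100)

print(main / wide)                   # main frame gathers roughly 8.3x more light
print(motion_blur_px(500, 1 / 30))   # roughly 16.7 px of blur on the main frame
print(motion_blur_px(500, 1 / 250))  # only 2 px of blur on the wide frame
```

At the same ISO, the faster shutter cuts blur by the same factor it cuts light, which is exactly why the sharp frame comes out darker.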

What Google is doing here is using both lenses. Even before you take a photo, the camera identifies the subject and determines whether they're moving too fast for the primary lens. If so, it automatically switches to Face Unblur mode, and when you press the shutter, the camera captures two pictures, one from the primary lens and one from the wide-angle lens.
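The trigger step above can be sketched as tracking the subject's position across preview frames and arming the dual capture when it moves quickly. All names and the threshold here are assumptions for illustration, not Google's actual implementation:

```python
# Hypothetical threshold: how many pixels per preview frame counts as "fast".
FAST_MOTION_PX_PER_FRAME = 8.0

def subject_speed(prev_xy, curr_xy):
    """Face displacement in pixels between two consecutive preview frames."""
    dx = curr_xy[0] - prev_xy[0]
    dy = curr_xy[1] - prev_xy[1]
    return (dx * dx + dy * dy) ** 0.5

def should_dual_capture(face_track):
    """Arm a Face Unblur-style dual capture when the tracked face moves fast."""
    return any(
        subject_speed(a, b) > FAST_MOTION_PX_PER_FRAME
        for a, b in zip(face_track, face_track[1:])
    )

print(should_dual_capture([(100, 100), (102, 101), (103, 102)]))  # False: slow drift
print(should_dual_capture([(100, 100), (115, 108), (131, 117)]))  # True: fast subject
```

Because the decision happens on preview frames, the mode is already armed by the time you press the shutter, which is why there's nothing for the user to toggle.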

Face Unblur works unerringly well, and it's automatic — you don't have to do anything.

The primary lens contributes most of the detail and uses a slower shutter speed, while the wide-angle shot is taken at a lower ISO and a much faster shutter speed. Because of that fast shutter, the wide-angle frame captures a clean face even while the subject is in motion, and Google then turns to its machine-learning algorithm to stitch the two photos together.

Most of the details that you'll find in the final image are from the primary lens, with the face data taken from the wide-angle lens. It's not particularly difficult for Google's ML algorithms to stitch these images together; after all, Google has been doing facial recognition for nearly a decade now.
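The fusion described above can be sketched in miniature: take the face pixels from the darker but sharper wide frame, brighten them to match the main frame's exposure, and keep everything else from the well-exposed main frame. This toy uses tiny grayscale "images" as nested lists; the gain-matching and hard mask are crude stand-ins for Google's ML merge, which also handles alignment and seam blending:

```python
def fuse(main_img, wide_img, face_mask, gain):
    """Composite: inside the face mask, use the sharp wide-frame pixels
    brightened by `gain`; everywhere else, keep the main-frame pixels."""
    h, w = len(main_img), len(main_img[0])
    out = [row[:] for row in main_img]  # start from the bright main frame
    for y in range(h):
        for x in range(w):
            if face_mask[y][x]:
                out[y][x] = min(255, round(wide_img[y][x] * gain))
    return out

main_img = [[200, 200], [200, 200]]  # bright, but the face region is blurred
wide_img = [[20, 20], [20, 20]]      # dark but sharp
mask     = [[1, 0], [0, 0]]          # face occupies the top-left pixel

print(fuse(main_img, wide_img, mask, gain=8.0))  # [[160, 200], [200, 200]]
```

Only the masked face region changes, which matches the article's point: most of the final image is the main frame, with just the face lifted from the wide-angle shot.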

The result is that when you take a photo of a moving subject on your Pixel 6 or 6 Pro, you get a photo with all the details and a clear face. The best part is that all of this is done automatically. There's no Face Unblur mode to select on the phone or any other changes you need to make while taking a photo. Face Unblur is one of several machine learning-based features Google has introduced with these phones, and it helps solidify the Pixel 6 and 6 Pro as the best Android phones you can buy today.

Harish Jonnalagadda
Senior Editor - Asia

Harish Jonnalagadda is a Senior Editor overseeing Asia at Android Central. He leads the site's coverage of Chinese phone brands, contributing to reviews, features, and buying guides. He also writes about storage servers, audio products, and the semiconductor industry. Contact him on Twitter at @chunkynerd.