Throw your camera in the trash. AI is here to make photography better

Taking a photo of Irish Dancers with the Honor Magic 6 Pro
(Image credit: Nicholas Sutrich / Android Central)

Our perception of photography just changed radically, but you may not have even noticed. Seemingly overnight, many major Android manufacturers have started using AI to enhance photos taken with your smartphone, but that doesn’t mean you’ll suddenly have generative AI inserting giraffes wearing party hats into your pictures.

Instead, super-intelligent AI algorithms trained by companies like Honor, Oppo, and, of course, Google are looking at photos that your phone takes and helping fill in the missing details that tiny smartphone camera sensors simply cannot capture. They’re also solving a huge problem that has plagued photography since the dawn of time: blurry moving subjects.

I’ve written about that particular subject multiple times and nearly always point it out in reviews. When you take a picture of something moving - be it a kid, a pet, or something else - the end result is almost always a blurry picture. Thankfully for us, MWC 2024 has proven that this particular problem has been solved.

What is a picture, anyway?

Official promo shots of the Honor Magic 6 Pro's new Sportography mode

(Image credit: Honor)

When the Google Pixel 8 Pro debuted in the fall, features like Best Take and Magic Editor caused many people to wonder about the nature of a photo. Is a photo the image that a camera sensor captures in that moment, or should a photo be a proper way to relive a memory? After all, people and pets moving in real life don’t look blurry to our eyes, so why should they look blurry in a photo?

I’ve been using the Honor Magic 6 Pro for the past two weeks and have been enamored with its camera in a way that I haven’t felt in a long time. Previously, the Pixel 6 Pro blew me away with its clever motion mode that keeps a subject crisp and the surrounding areas intentionally motion blurred for cinematic effect.

Honor seems to have tapped into the same kind of AI magic that has made Google’s Pixel cameras feel magical for years, and it’s combining that AI with its own years of hardware expertise.

The company calls its latest innovation “Sportography,” taking after similar efforts from companies like Samsung to highlight a particular feature with a cute and memorable name. In Samsung’s case, that name is Nightography, used to describe the Galaxy S series’ penchant for taking great low-light photos.

The Honor Magic 6 Pro can automatically take photos whenever specific kinds of movement are detected, often capturing better shots than I could manually.

Honor is using a new breed of on-device AI that’s been trained on millions of images so it can do all sorts of new things. In addition to clearing up potentially blurry photos of moving subjects, it can take a photo for you when it recognizes specific patterns of movement.
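
Honor hasn't published how Sportography decides when to fire the shutter, so take the snippet below as a rough, hypothetical sketch of the general idea rather than Honor's actual method. It uses simple frame differencing where the real feature presumably relies on a model trained to recognize specific movement patterns, and the threshold and burst size are made-up numbers.

```python
# Hypothetical sketch of motion-triggered auto-capture (not Honor's code).
# Simple frame differencing stands in for a trained movement-recognition model.
import cv2

MOTION_THRESHOLD = 12.0  # made-up tuning value
BURST_LIMIT = 10         # made-up burst size for this demo

def should_capture(prev_gray, curr_gray) -> bool:
    """Fire when the average pixel change between consecutive frames is large."""
    diff = cv2.absdiff(prev_gray, curr_gray)
    return float(diff.mean()) > MOTION_THRESHOLD

def auto_capture_loop(source=0):
    cam = cv2.VideoCapture(source)
    ok, prev = cam.read()
    if not ok:
        return []
    prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
    shots = []
    while len(shots) < BURST_LIMIT:
        ok, frame = cam.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        if should_capture(prev_gray, gray):
            shots.append(frame)  # a phone would kick off a full-quality capture here
        prev_gray = gray
    cam.release()
    return shots
```

On a phone, that trigger would feed into the same multi-frame processing pipeline described later in this piece, rather than simply saving preview frames.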

To test this out, I brought my Honor Magic 6 Pro to our local Celtic Fest to capture something amazing. While the Irish dancers were on stage, I held my son in my right arm and the phone in my left hand.

All I did was open the camera app and point toward the stage. The Magic 6 Pro handled the rest.

The result is a series of photos that are all usable. Some are entirely blur-free, while most simply look like good action shots: clear faces, with motion blur left on fast-moving feet. I know I could never have captured these images on a Samsung Galaxy S24 Ultra, for instance, as my S24 Ultra camera review proved.

Traditional photography methods can't keep up with what AI enables in smartphone cameras.

This kind of feature might sound silly at first, but I know without a doubt that millions of parents worldwide will jump for joy at the thought of being able to capture the moment even with a squirming kid in their arms. It’s the sort of promise we’ve heard about AI before, but we’ve rarely seen it work as effectively as this.

And it’s not just Honor working on this problem. At its MWC 2024 conference, Oppo unveiled a new AI algorithm designed to enhance photos by analyzing a series of frames, finding all the nitty-gritty details, and ensuring they stay visible in the final image.

(Image credit: Nicholas Sutrich / Android Central)

This side-by-side showcases the difference quite nicely. Oppo had two identical phones on a dual tripod, each snapping a photo of a spinning wheel. The phone on the left uses antiquated photography methods that rely entirely on shutter speed and ISO to create a balanced exposure. The one on the right enhances things with AI processing, and the difference couldn’t be more stark.
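
To put rough numbers on why those older methods struggle: freezing a fast-moving subject means a shorter shutter time, and keeping the exposure constant means raising ISO by the same factor, and that extra gain is exactly what turns a tiny smartphone sensor's output noisy. The starting values in this back-of-the-envelope calculation are hypothetical, but the relationship holds:

```python
import math

def iso_needed(base_iso, base_shutter_s, target_shutter_s):
    """Keep exposure constant: halving the shutter time doubles the ISO required."""
    return base_iso * (base_shutter_s / target_shutter_s)

# Hypothetical scenario: a phone metering 1/30 s at ISO 100 indoors.
base_iso, base_shutter = 100, 1 / 30
for shutter in (1 / 60, 1 / 125, 1 / 250, 1 / 500):
    iso = iso_needed(base_iso, base_shutter, shutter)
    stops = math.log2(iso / base_iso)
    print(f"1/{round(1 / shutter)} s needs ISO {iso:.0f} ({stops:.1f} stops more gain, hence more noise)")
```

Multi-frame AI processing is essentially a way around that trade-off: each individual frame stays short and relatively clean, and detail is recovered by combining many of them.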

AI examines dozens of frames captured in an instant and assembles them in an intelligent way to create the "perfect" picture.

Briefly, here's how it works: while the camera app is open, it's constantly capturing image data and lightly processing it. Once you tap the shutter button, the app takes the series of frames it has already been capturing and runs them through an AI algorithm that, thanks to its training, understands what looks best. The photo you see when you open your gallery is actually a composite of many different frames, stitched together pixel by pixel to bring out the best details, color, and lighting.
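
None of these companies publish their pipelines, but the general shape described above can be sketched. In this simplified, hypothetical Python version, a rolling buffer of preview frames stands in for the pre-shutter capture, and a crude per-pixel sharpness score stands in for the trained model that decides which frame "wins" each pixel; real pipelines also align the frames to cancel hand shake, which is skipped here.

```python
from collections import deque

import numpy as np

BUFFER_SIZE = 15  # hypothetical: how many preview frames to keep around pre-shutter
frame_buffer = deque(maxlen=BUFFER_SIZE)

def on_preview_frame(frame: np.ndarray) -> None:
    """Called for every preview frame while the camera app is open."""
    frame_buffer.append(frame.astype(np.float64))

def sharpness_map(frame: np.ndarray) -> np.ndarray:
    """Crude per-pixel sharpness score (local gradient magnitude).

    A stand-in for the vendors' trained models, which also weigh motion,
    faces, and expression when deciding which frame contributes each pixel.
    """
    gy, gx = np.gradient(frame.mean(axis=-1))  # luminance gradients
    return np.hypot(gx, gy) + 1e-6             # avoid all-zero weights

def on_shutter_pressed() -> np.ndarray:
    """Merge the already-captured burst into one composite, pixel by pixel."""
    burst = list(frame_buffer)  # frames captured *before* the tap
    if not burst:
        raise RuntimeError("no frames buffered yet")
    weights = np.stack([sharpness_map(f) for f in burst])  # (N, H, W)
    stacked = np.stack(burst)                               # (N, H, W, 3)
    composite = (stacked * weights[..., None]).sum(axis=0) / weights.sum(axis=0)[..., None]
    return np.clip(composite, 0, 255).astype(np.uint8)
```

The timeline is the key thing the sketch illustrates: the frames are already in memory when you tap the shutter, so the finished "photo" is assembled largely from data captured before your tap ever lands.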

All of this takes 2-3 seconds, depending on how complex the image is. If you're quick enough, you can tap the thumbnail button and watch the photo transform before your eyes. This process isn't exceptionally new - Google pioneered it ages ago, and many companies have picked up similar methods since then - but companies like Oppo, Honor, and Google are now using even more complex algorithms that understand motion and expression.

As someone who has long critiqued smartphone cameras for their general inability to capture any kind of motion, I find this a dream come true. The best part is that it's coming to even more phones by way of the Qualcomm Snapdragon 8 Gen 3, a processor that has already powered plenty of flagship phones since the end of 2023.

Qualcomm is harnessing the power of Metavision, made by a company called Prophesee, to bring substantial improvements to photography on Snapdragon-powered phones. I don't specifically know when these updates will make their way to other phones, but I know how much they improve things.

Better still, it's not just for photos. Metavision also works for video, and the results can be nothing short of incredible. In fact, you might have already been using AI on your smartphone's camera without knowing it. Those great telephoto advancements in recent years are all AI-powered, proving that AI isn't just a buzzword.

It's here to stay, and it's making things better piece by piece.

Nicholas Sutrich
Senior Content Producer — Smartphones & VR
Nick started with DOS and NES and uses those fond memories of floppy disks and cartridges to fuel his opinions on modern tech. Whether it's VR, smart home gadgets, or something else that beeps and boops, he's been writing about it since 2011. Reach him on Twitter or Instagram @Gwanatu
  • Mooncatt
    Throw my camera in the trash? In the words of Dr. Evil:

    (embedded YouTube clip: 2HJxya0CWco)
    And then there's this from the article:

    They’re also solving a huge problem that has plagued photography since the dawn of time: blurry moving subjects.

    I'll refer you back to Dr. Evil above. That problem has been solved for decades. There's a reason photographers use what are known as "fast lenses," full frame cameras, and artificial lighting. It's only a problem if you are unwilling to put in the effort to improve your photography skill set. Maybe blurry images are considered a problem plaguing phone cameras, but I guess that's what happens when marketing is too influential. It's like making you think an econobox car should be able to compete against a top fuel dragster at the drag strip.

    People need to ask themselves: Do you want a photo you took, or do you want an image that some computer only *thinks* you want to see? It's not much different than the manual vs. auto mode on cameras. One gives you the photo you took based on your settings, and the other one gives you a photo based on the settings developed by a programmer at the manufacturer.

    And if you want an AI based image, I don't really care. Just stop basically lying to us to promote it.
  • nwh212
    Whoever wrote this must be joking or not as competent about electronics as they think. There are praises to be sung about smartphone cameras, don't get me wrong - there is an unbeatable convenience, they've gotten usable after 15 years of development, and AI is definitely improving photos taken with smartphones. Counterpoint? No matter how much tweaking, how much time spent editing, no matter what you do, you WON'T get the same pictures out of a smartphone. Even with AI, photos have artifacts and noise in many of the pictures you take, and produce a JPEG that is what it is. A DSLR, on the other hand, lets you step down to ISO 50, let in heaps of light with a big lens, and gives you a raw image to work with that is sharp from edge to edge. And don't even get me started on video - phone videos are HORRIBLE compared to cameras, because the AI or image cleaning happening on phones takes heaps of information for a photo. It wouldn't be able to do that consistently for every video frame on the fly. Not to mention that heavily relying on AI can give you inconsistent images back to back. No wedding is gonna be shot on iPhone/Android, and Hollywood is not going to approve making a movie with a phone camera. The fact is, DSLRs have a sensor at least 40x bigger, lenses that have true, actual depth of field, and colors that you control. People said this back when the iPhone 5 came out, and they'll say it for another 20 years, but the fact is, the moment you replace my Sony A7IV with a phone camera is the moment I'll take you out back and we'll go 1v1. I might lose, but I won't go down without a fight.
  • Ranger Ric
    Suggestion: when capturing video, always orient the camera horizontally (landscape). Every TV screen, monitor screen, computer screen, and movie theater screen is oriented horizontally. When a video shot with the camera vertical (portrait orientation) is played back on those devices, the left one-third and right one-third of the screen are cut off, causing the video to appear as a stick video, tall and narrow. We humans, maybe all lifeforms, view our surroundings considerably more horizontally than vertically. Holding a cellphone or camera vertically produces video similar to looking through a tight vertical keyhole or wearing horse blinders. In addition, when the camera is held vertically, the camera person must pan left and right more to view the entire scene, which creates a lot of camera movement; not good. Also, because of the way we view our world more horizontally, a video captured vertically becomes spatially disorienting.
  • Village_Idiot
    Meh...another clickbait article written by someone who thinks that technology didn't exist before he was born.