Samsung wins in all areas but the most important: The camera
Samsung's Galaxy S phones are always among the best money can buy. Year after year, with very few exceptions, the Galaxy S or Galaxy Note has the best display, the best processor, the features most buyers want, and above all the availability that builds a recipe for success. The line's tenth anniversary paints an even bigger picture of success for a company that's had a lot of it, and if you were to say the Galaxy S10 is the best phone you can buy, I probably wouldn't argue with you.
But there is one area where Samsung no longer leads: its camera. The Galaxy S10 has a great camera, but the past few releases have delivered a Galaxy phone with a great camera that isn't the best camera, a claim you could make right up until, say, the Galaxy S7. There's a fairly simple explanation why, and it's something Samsung is trying to address: artificial intelligence.
Samsung is the undisputed king of mobile phone hardware. Some people will disagree, but no other phone maker releases a product chock full of its own in-house components, or sells those same components to other phone makers, at the level Samsung does. And those components are really darn good. This same philosophy has been the forte of the Galaxy S camera throughout its lifetime; the company depends on the best parts to make its camera as good as it can be. Those parts may be sourced from another company or built in-house, as we've seen with camera sensors over the past few years.
But there's one thing that two or three lenses and adjustable apertures over large CMOS sensors can't do on their own, and that's computational photography bolstered by machine learning. Think of what we see from Google's Pixel, the Huawei Mate 20 Pro, and the iPhone. Using comparable sensors, these phones can make you a better photographer because of the complex algorithms programmed to turn a mediocre photo into a good one, and a great photo into a masterpiece.
How they do it isn't magical. Feed a monstrous computer filled with specialized ML (machine learning) cores enough photos of a thing and eventually that machine will "know" what that thing is supposed to look like. This is the same kind of machine learning used in self-driving cars, facial biometric systems, and robotic vacuums, so it's not specific to camera software. But it works very well with camera software, specifically the portion that builds an image from a cluster of pixels and points of colored light.
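To make that a little more concrete, here's a toy sketch of the idea. This is not anything Google, Huawei, or Samsung actually ships; it just leans on an off-the-shelf PyTorch/torchvision model that has already been fed millions of labeled photos, and the file name is made up. The point is only that a trained network can look at a new photo and tell you what's in it.

```python
# Purely illustrative: a pretrained image classifier "recognizing" what's in a photo.
# Assumes PyTorch and torchvision are installed; the model is a stand-in for whatever
# proprietary networks phone makers actually run on their AI co-processors.
import torch
from torchvision import models
from PIL import Image

weights = models.MobileNet_V3_Small_Weights.DEFAULT   # trained on millions of labeled images
model = models.mobilenet_v3_small(weights=weights)
model.eval()
preprocess = weights.transforms()                      # resize / crop / normalize for this model

def what_is_this(path: str) -> str:
    """Return the label the network is most confident about for one photo."""
    image = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        scores = model(image)
    return weights.meta["categories"][scores.argmax().item()]

print(what_is_this("cat.jpg"))   # hypothetical file; prints something like "tabby"
```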
Google's Pixel camera and Pixel Visual Core "know" what a fence or a cat or a pair of glasses on a person's face is supposed to look like. So does Huawei's Kirin AI engine and, to a lesser but still competent extent, so do Apple's A-series AI co-processors. They know what those things look like in bright light, where the image being captured is crisp and clear, but they also know them in poor light, or even when there isn't enough light to build a photo without this ML component at all, as we see in the Pixel's Night Sight.
When you open the camera app on the Pixel 3 or Mate 20 Pro, it starts processing what's in front of the sensor long before you tap the shutter button. It's collecting data to make sure it can recognize the objects in front of it, and when it's time to take the shot, the camera software (which here also encompasses the firmware on the chips that process the capture) uses data collected before, during, and just after the button was tapped to build the photo or series of photos you see once processing is complete. Usually this makes for a better photo, and sometimes a much better one.
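Here's a minimal sketch of that "frames before and after the tap" idea, assuming a simple rolling buffer and a naive average as the merge step. Real pipelines like HDR+ align, weight, and tone-map the burst far more carefully, and the sensor read here is faked with random noise, but the shape of the trick is the same: the photo is built from frames the camera was already collecting before you pressed anything.

```python
# Illustrative only: a ring buffer of frames captured before the shutter tap,
# merged with frames captured after it, to build one cleaner photo.
from collections import deque
import numpy as np

BUFFER_SIZE = 8          # frames kept from *before* the tap
FRAMES_AFTER_TAP = 4     # frames grabbed *after* the tap

def capture_frame() -> np.ndarray:
    """Stand-in for reading one noisy frame from the sensor."""
    scene = np.full((480, 640), 120.0)                     # the "true" scene
    return scene + np.random.normal(0, 25, scene.shape)    # per-frame sensor noise

def merge(frames: list) -> np.ndarray:
    """Naively average the burst; real pipelines align and weight each frame."""
    return np.clip(np.mean(frames, axis=0), 0, 255).astype(np.uint8)

# The viewfinder fills the buffer continuously while the app is open...
ring = deque(maxlen=BUFFER_SIZE)
for _ in range(30):
    ring.append(capture_frame())

# ...so when the shutter is tapped, the shot uses frames from before, during, and after.
burst = list(ring) + [capture_frame() for _ in range(FRAMES_AFTER_TAP)]
photo = merge(burst)
print(photo.shape, photo.std())   # the merged frame is far less noisy than any single frame
```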
Samsung also collects sensor data the same way, and it's used for some really cool tricks in the photo app like finding the best framing of your shot. What Samsung isn't doing as well as its competition in the camera space is the computational AI part. The "engine" just isn't as good as Google's or Huawei's.
Imagine a Galaxy S10 with the same tri-camera arrangement and the same sort of powerful AI that understands the shapes, edges, and colors of the things in front of the lens. These ML components don't need to replace the excellent hardware or the extra data Samsung can collect by having multiple cameras and things like time-of-flight sensors; they can fill in the blanks when those hardware components don't give enough data. What Google can do with one lens is amazing: not only do you get a crisp photo the majority of the time, its color will be correct, and you can even adjust a synthetic depth of field effect after the shot.
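For a rough sense of how a synthetic depth of field effect works once you have an estimated depth map (from dual pixels, a second camera, or a time-of-flight sensor), here's a toy sketch. The depth estimation itself is faked with hard-coded values, and real portrait modes render the blur far more convincingly; this only shows the principle of blurring pixels more the farther they sit from the chosen focal plane.

```python
# Toy synthetic-bokeh sketch: blur is driven by distance from the focal plane.
import numpy as np
from scipy.ndimage import gaussian_filter

def synthetic_bokeh(image: np.ndarray, depth: np.ndarray,
                    focus_depth: float, max_blur: float = 6.0) -> np.ndarray:
    """Blend the sharp image with a blurred copy, weighted by defocus amount."""
    defocus = np.clip(np.abs(depth - focus_depth), 0, 1)   # 0 = in focus, 1 = far from focus
    blurred = gaussian_filter(image, sigma=max_blur)
    return (1 - defocus) * image + defocus * blurred

# Fake data: a gradient "scene" and a depth map where the top half is far away.
image = np.tile(np.linspace(0, 255, 640), (480, 1))
depth = np.vstack([np.ones((240, 640)), np.zeros((240, 640))])
portrait = synthetic_bokeh(image, depth, focus_depth=0.0)   # keep the bottom half sharp
```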
I want the next Galaxy phone to have this. Either Samsung's photo algorithms need to get up to snuff, or Google needs to open up its chest of secrets, or someone needs to break into a hotel room late at night and steal them Watergate-style. I don't care how it happens; I just want it to happen. Google is never going to build a phone with camera hardware of the quality Samsung does, even though both companies use the same sensors. Huawei phones are never going to be a thing in the U.S. no matter how many times the company sues the government, so the burden lands on Samsung's shoulders to give us the best of everything.
The Galaxy Note 10 is already in production, and it probably isn't going to be the unicorn we deserve. The Galaxy S11 is probably also deep into its design and might not be the one, either. But one day Samsung will get it sorted, and just like a few years ago, Galaxy phones will have a camera that every other company strives to copy. I'm counting on it.