
Here's why the Pixel 4's Neural Core chip could be a photography game-changer

There's a reason why the Pixel 3 has been lauded as the best camera phone. Google uses software algorithms inside its HDR+ package to process pixels, and when combined with a little bit of machine learning, some really spectacular photos can come from a phone that may have standard-issue hardware.

To help process these algorithms, Google used a specialized processor called the Pixel Visual Core, a chip we first saw in 2017 with the Pixel 2. This year, it appears that Google has replaced the Pixel Visual Core with something called the Pixel Neural Core.

Google may be using neural network techniques to make photos even better on the Pixel 4.

The original Pixel Visual Core was designed to accelerate the algorithms used by Google's HDR+ image processing, which makes photos taken with the Pixel 2 and Pixel 3 look so great. It used machine learning and what's called computational photography to intelligently fill in the parts of a photo that weren't quite perfect. The effect was really good; it allowed a phone with an off-the-shelf camera sensor to take pictures as good as, or better than, any other phone available.
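To make the computational photography idea concrete, here's a toy sketch. It is not Google's actual HDR+ pipeline (which also aligns tiles and tone-maps), but it shows one of the core tricks: merging a burst of frames so that random sensor noise averages away while the real scene detail survives. The pixel values below are made up for illustration.

```python
def merge_frames(frames):
    """Average per-pixel values across a burst of frames to reduce noise."""
    count = len(frames)
    return [sum(pixels) / count for pixels in zip(*frames)]

# Three noisy "frames" of the same 4-pixel scene (true values: 10, 20, 30, 40).
burst = [
    [12, 18, 31, 41],
    [9, 21, 28, 39],
    [9, 21, 31, 40],
]
print(merge_frames(burst))  # averages land back on the true scene values
```

Random noise in each frame is independent, so averaging N frames shrinks it while the underlying signal stays put, which is why burst photography can outperform a single long exposure.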

If the Pixel Neural Core is what we believe it is, the Pixel 4 will once again be in a fight for the top spot when it comes to smartphone photography. Here's why.

Neural Networks

It seems that Google is using a chip modeled after neural network techniques to improve the image processing inside its 2019 Pixel phone. A neural network is something you may have seen mentioned a time or two, but the concept isn't explained very often. Instead, it can seem like Google-level computer mumbo-jumbo that resembles magic. It's not, and the idea behind a neural network is actually pretty easy to wrap your head around.

Neural networks collect and process information in a way that resembles the human brain.

Neural networks are groups of algorithms modeled after the human brain. Not how a brain looks or even works, but how it processes information. A neural network takes sensory data through what's called machine perception — data collected and transferred through external sensors, like a camera sensor — and recognizes patterns.

These patterns are represented as lists of numbers called vectors. All the outside data from the "real" world, including images, sounds, and text, is translated into vectors and classified and cataloged as data sets. Think of a neural network as an extra layer on top of things stored on a computer or phone, a layer that contains data about what it all means: how it looks, what it sounds like, what it says, and when it happened. Once a catalog is built, new data can be classified and compared against it.
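The "catalog and compare" idea can be sketched in a few lines. This is an illustrative toy, not how a real neural network is implemented: the catalog entries, feature names, and labels below are invented, and a real network learns its own features rather than using hand-picked ones. Here, classifying new data just means finding the closest cataloged vector.

```python
import math

# A tiny made-up catalog: each entry is a feature vector plus a label.
# (The features here are invented: [brightness, edge_count].)
catalog = [
    ([0.9, 0.2], "sky"),
    ([0.3, 0.8], "text"),
    ([0.5, 0.5], "face"),
]

def classify(vector):
    """Return the label of the closest cataloged vector (nearest neighbor)."""
    def distance(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(catalog, key=lambda entry: distance(entry[0], vector))[1]

print(classify([0.85, 0.25]))  # lands nearest the "sky" entry
```

A real network replaces the hand-built catalog with millions of learned weights, but the shape of the job is the same: turn raw sensor data into vectors, then decide which known pattern they most resemble.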

A real-world example helps it all make more sense. NVIDIA makes processors that are very good at running neural networks. The company fed an enormous number of cat photos into a network, and once training finished, the cluster of computers could identify a cat in any photo that had one in it. Small cats, big cats, white cats, calico cats, even mountain lions or tigers were cats because the neural network had so much data about what a cat "was".

With that example in mind, it's not difficult to understand why Google would want to harness this power inside a phone. A Neural Core that is able to interface with a large catalog of data would be able to identify what your camera lens is seeing and then decide what to do. Maybe the data about what it sees and what it expects could be passed to an image processing algorithm. Or maybe the same data could be fed to Assistant to identify a sweater or apple. Or maybe you could translate written text even faster and more accurately than Google does it now.

It's not a stretch to think that Google could design a small chip that interfaces with a neural network and the image processor inside a phone, and it's easy to see why it would want to do it. We're not sure exactly what the Pixel Neural Core is or what it might be used for, but we will certainly know more once we see the phone and the actual details when it's officially announced.

Jerry Hildenbrand

Jerry is an amateur woodworker and struggling shade tree mechanic. There's nothing he can't take apart, but many things he can't reassemble. You'll find him writing and speaking his loud opinion on Android Central and occasionally on Twitter.

24 Comments
  • Meh, it can only do so much...
    Between all of the "flagship" phones...
    They all take darn near the same quality pictures And now with the budget phones taking very decent pictures...
Picture taking has hit a plateau with incremental upgrades... Next "big" area is going to be foldable phones...not so much what's on the market in its current iteration... Just my 2 cents..
  • You're right! The next "BIG THING" will be big chunky foldable phones... 🤣
  • Actually Corning is working on better foldable glass...it's just a matter of time before the candy bar is not the only design... https://youtu.be/ToYJrT21vk8
  • I agree completely. I do think we are 4-5 years out from a foldable phone we would all drool over.
  • Sorry, as someone who's been using mid-range phones the past 4 years, coming to the Pixel 3 I would hardly call this a small jump. The cameras on the high-end phones are light years ahead of the mid-range phones. Between the stability and the speed of taking photos, higher-end phones blow mid-range and lower out of the water.
  • Yeah you gotta laugh when people who can't afford top of the line products/handsets are delusional about reality. No mid tier handset will compare to any flagship device on the market today.
  • Unless whatever camera team takes the time to perfect the software.
  • I wonder if they should just skip foldable phones. Don't get me wrong, I like sci-fi movies that use rollable, foldable displays for communication or for information. Done correctly it would be so much better than using the current 'candy bar' approach.
    I don't think a flexible screen should be used also as a hinge - I think that is a bad fundamental design with a high stress point. They need to move me beyond that concept and embrace something different.
    But what about having the receiver and transmitter in a different component than the display? In other words, have the phone 'cast' the image to another device or screen, be it a rollable, foldable, projected display, etc.
    Now I'm all over that...
    I guess baby steps first...
  • Meh, it only does so much better already. Amongst all the "flagship" phones, Google Pixels have been consistently at the #1 spot without 19 phone wobbling sensors. Pixel 4a (a budget phone) camera leaves those other budget phones in the dust. Next big thing will be time traveling...not so much what's currently on the market.
  • Phone makers nowadays can't invent new things for phones anymore...so they just keep adding cameras and sensors. In my opinion, these things won't make me give up my Canon mirrorless camera (yes, with a phone I can snap photos and videos in an instant, but actual camera quality isn't there yet)
  • I would guess the average Joe will not take pictures any better than they did with the Pixel 2 or 3 with this phone. Yes, camera upgrades are nice but at this point the camera isn't going to carry the weight of a $900 upgrade.
  • Well except you get unlimited original quality photos and videos stored on Google Photos forever. That's right. Plug my Nikon Z6 in, transfer photos and videos to my Pixel, and it uploads them for free forever. I have terabytes of data up there that I pay nothing for. If I have to buy a Pixel every couple years for that, I'm in (not that I have to as I have an original Pixel which has no expiration date on original quality backup but the battery doesn't hold a charge well but good enough for this kind of backup). It's so worth it.
  • Let's see some improvements to video now. Please. Purdy please!
  • Is this really something completely different from the Visual Core chip, or is it just the same chip renamed, or maybe just upgraded?
  • Ok, not to troll here, but... Apple is on its 3rd generation of Neural Processor with the A13 having on-chip integrated dispatch across GPU, CPU, and NPU.
    Let's hope the Pixel gets faster, cause the Pixel 3 has worse CPU performance than the 5-year-old iPhone 6s (Geekbench)
  • Not everything is about "Geekbench scores." I've seen many videos where the Pixel 3 XL opened apps faster than the iPhone X and Xs Max. But, aside from that, this isn't about Geekbench scores. This is about how Google's Neuro Processor will handle algorithms for photo production. Apple has nothing like what Google is releasing here, which is why the Pixel 4 will crush iPhone 11 Pro Max in picture quality.
  • Not sure the Pixel 4 will "crush" the iPhone 11 camera, as Apple got it pretty darn good this time. But, it is likely the Pixel will be better in some aspects.
    Geekbench scores are not cross-platform comparable, despite what Geekbench says.
    The iPhone X benchmarked with single-core and multi-core scores of 3226 and 11951, while its Android contemporary, the U12+, got a dismal 2427 and 8815.
    But in the real world speed tests, the U12+ destroyed the iPhone X by an embarrassing margin.
  • Was this used in the Pixel 4 samples we've seen so far?
    If it has, there's certain aspects I'm not impressed with.
    We'll wait for more full resolution samples, but as it is, my current device can match a lot of what I've seen.
    Plus it has good battery life.
    Hopefully the Neural Core is different from the same Google processes that enhance photos now, which often deliver hilarious results.
  • I hope that chip can make wide angle shots
  • Lack of wide angle shots is my only concern with pulling the trigger when this device is available. Really enjoy using mine w the LG V30.
  • It's a bit ludicrous that people fall for terms like "Neural Core"... marketing tripe.
  • better than retina display.
  • They're BOTH better than "slofie"!
  • I get that they want to sell phones and all, but how about bringing this processing online, so anyone shooting raw can take advantage of it?