
Six months later, Google Lens still isn't great

For a company that aims to be the knowledge graph of the entire planet, image-based AI services are an obvious thing for Google to want to get right. And anyone who has used Google Photos over the last couple of years knows huge strides have been made in enhancing these capabilities. Facial and object recognition in Google Photos can be incredible, and there's a lot of good that can come from using these features in the real world. Being able to offer visually impaired people a camera that can rapidly identify storefronts and street signs is, on its own, incredible.

Google Lens is headed in the right direction, but it's clearly not ready for daily use just yet.

As a Google Pixel owner, I've had access to Lens for six months now. This beta period for Lens has been a little clumsy, which is to be expected. I point Lens at an unusual book a friend of mine owns, and instead of telling me where I can buy a copy for myself, it merely identifies a snippet of text from the cover. I ask Lens to scan a photo of a movie theater marquee; it has no idea what's in the photo and doesn't offer me the ability to buy tickets for the show like it's supposed to. I take a photo of my Shetland Sheepdog; Lens identifies her as a Rough Collie. Alright, that last one is nearly impossible to get right from a photo, but the point stands: Google Lens doesn't yet reliably do most of the things it claims to be able to do.

To Google's credit, the things Lens gets right, it gets right fast. I love being able to use Lens for real-time language translation. Point Lens at a menu written in another language and you get immediate translations right on the page, as though the menu were in English the whole time. Snap a photo of a business card and Lens is ready to add that information to your contacts. I've used individual apps for these features that worked reasonably well in the past, but unifying them in the same place I access all of my photos is excellent.

I'm also aware that these are still very early days for Lens. It says 'Preview' right in the app, after all. While Pixel owners have had access to the feature for half a year, most of the Android world has only had it for a little over a month at this point. And when you understand how this software works, that's an important detail. Google's machine learning relies heavily on massive contributions of data, so it can quickly sift through it all and use things that have been properly identified to better identify the next thing. It could be argued that Google Lens has only just begun its beta test, now that everyone has access to it.

At the same time, Lens was announced a full year ago at this point, and I still can't reliably point it at a flower and have it tell me which kind it is. It's a cool thing to have access to, but I sincerely hope Google is able to make this feature something special in the not-too-distant future.

Russell is a Contributing Editor at Android Central. He's a former server admin who has been using Android since the HTC G1, and quite literally wrote the book on Android tablets. You can usually find him chasing the next tech trend, much to the pain of his wallet. Find him on Facebook and Twitter.

20 Comments
  • Is it me, or is everything that came from the Pixel 2 and 2 XL just "Meh"?? Because Lens is "Meh", and outside of the software and its camera the new Pixels are "Meh". Just "Meh".
  • Well software and camera are like the most important.
  • AR stickers are quite impressive, and so is the way they interact with the environment around them.
  • I'll second that. I had way more fun with those than I expected to.
  • While they do look similar, the funniest Google Lens moment I had was pointing it at an ASUS C302CA Chromebook and it coming back with "MacBook Pro"
  • As the owner of a C302CA... I WISH.
  • I don't know, maybe it's because Google Assistant is not yet available locally; I still have to claim US English to get Assistant to work. But Lens is absolutely terrible for me, recognizing almost nothing I point it at. Even Bixby Vision, as often off the mark as it is, is better.
  • I know what the tone of this article would be like if it were about Bixby Vision.
  • You can use Lens to scan QR codes, though. That's pretty cool.
  • Out of the roughly dozen times I've tried it, only once did it give me a correct answer... Needless to say, I haven't wasted any more time with it. Yet another example of Google garbage...
  • Not great because it's still in beta. It will hit 1.0 sometime in early 2019, only to be renamed Google Duo Lens because it merged with Duo for reasons. Then in early fall, it will be announced that it's being killed off in favor of a fresh new app called Google Bi-focal Lens that will have nothing in common with the original Google Lens.....
  • Lol 😂, that's very good, and easily believable.
  • Thank you, you made my day...🤣
  • That's funny as h%ll!
  • AI is really hard; it doesn't take an expert to know this. You could probably copy and paste this article for next year.
  • I get the "Not seeing this clearly yet" message so often, it reminds me of Spaceballs and Dark Helmet... "I can't see them, Sandurz."
  • Bixby Vision killer! At the least.
  • Well, it absolutely excels at the one thing I want from it. And it's the only thing I care about: converting text from pics.
  • I've never found any such service or app that works exactly the way I want. The reason is that even humans have trouble with some of what Lens is trying to do. For example, imagine someone suddenly dropping a book on your desk and demanding you say exactly what they want to know about it without ever asking the question. 99.999% of the time, you're gonna be wrong. Expecting AI to read users' minds is, for want of a better word, stupid. This is why Facebook collects so much data: the only way to know what you might be thinking is to quite literally read your mind. But I digress. As far as AI/automated assistance goes, I tend to stick to apps that do single, well-defined things very competently, e.g. IFTTT. An IFTTT recipe will always work as written because its behavior is well defined. The other example is OCR, my favorite apps for which are GT Text and OneNote on the desktop. Both are very good at turning (printed) text in images into words.