In 2013, the world was not ready for Google Glass when the company let a few thousand developers (and bloggers) take a pair into the wild. Neither was the software that powered it. But five years is a lifetime when it comes to technology, and I think both our readiness and Android's capabilities have changed enough to bring the idea back.
It's easy to call Google Glass yet another failed idea from Google, which loves to try outlandish ideas and is willing to see them fail. The reality is that Google wasn't exactly sure how Glass could and would be useful, so it crowdsourced ideas from people who were willing to try them. Fast forward a few years and Google saw the potential of Glass in the enterprise. Lo and behold, Glass Enterprise Edition was born, and collaboration and teleconferencing for professionals like surgeons and security specialists are better than ever. These are the kinds of projects a company with money to burn can do when executives and board members have a bit of dreamer in them.
We — the collective we that includes you, me, and anyone else who pays attention — didn't have enough of that dreamer in 2013 and didn't see a need for a computer you wear like a pair of glasses. We decided it needed to die, and there was (and remains) a pervasive insistence that wearable cameras are a bad idea. Never mind that everyone using Glass also carried a phone with a much better camera, or that the government and businesses across the country were recording our every move; a camera you could wear was outrageous. Selfies while wearing Glass in the shower didn't help, either.
I'm not sure how much ill-informed consumer backlash had to do with the cancellation of a true consumer version of Glass. I do know that said consumer edition would have failed on its own, though. I know this because I used the product, and it was clunky, distracting, and nowhere near fluid and friendly enough for anyone but the true tech hardcore to love. Android was nowhere near ready to be 100 percent hands-free, and the pseudo-Google Now interface that powered Glass was a typical version-one product that nobody liked. I had this epiphany one evening while driving in winter weather, trying not to be blinded by oncoming headlights, as Glass told me about a package Amazon had sent two days prior.
Android Oreo and Google Assistant changed all of that. Like it or not, you can do almost anything you would want to do on a face-mounted computer by talking to Google Assistant. More importantly, Google's AI can adapt both to a wearable and to your needs. Of course, more powerful and more miniaturized hardware wouldn't hurt when it comes to making a user experience that flows without being distracting because it knows you're going 30 miles per hour in a mix of rain and ice at 10 p.m. on a Sunday. Assistant isn't perfect, but it's smart enough to know not to bother me with all that going on.
Better still, Assistant's ability to work with home automation products and be contextually aware could make Glass amazing. Front door locked and you have a bag or two of groceries in your hands? Look at the lock and blink to open the door. Glare from a window hitting at just the right angle to make you miserable? Glass could know this and lower the blinds. Glass could do these sorts of things and more, and my novelty ideas are just the tip of an AI-powered, automated future where everyone has a tiny computer strapped to their head.
Maybe my ideas of a utopian future aren't the best example of how Glass could be a product we want in 2019. That's why I don't work at Google thinking up ways to change the world and stuff. The people who do work at Google to change the world can think of ways to integrate Glass into real life, though. If Snapchat was willing to try it, Google should do the same.