
Does a digital personal assistant have ethical responsibilities?

Google Assistant (Image credit: Android Central)

A friend of mine finally joined the ranks and bought a Google Home. Largely because they were on sale, but also because she has seen my wife and me use ours on occasion and her curiosity finally got the best of her. In any case, she now has a little robotic AI on her coffee table that can tell her the weather or her schedule for the day, play music on demand, and turn on the porch light by voice command, and she's having fun with it. I assume everyone who has a Google Home or Amazon Echo device had a similar honeymoon phase before it became routine.

Should we trust an AI to make the right call to the right people?

She also knows I'm pretty savvy when it comes to how Google "stuff" works (her words) and wanted to know if it could call the cops because it "thought" (this time the quotes are from me) there was a crime being committed or a dangerous situation at play. I let her know it couldn't, and that unless it recognized a hotword it didn't process or upload anything it could hear, but that I would still be careful about asking it the kind of questions that could get a person in trouble, just on principle. She seemed ... disappointed.

More: What does an "Always Listening" smart speaker really mean?
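For anyone curious what that hotword gate actually looks like, the general pattern can be sketched in a few lines of Python. This is only an illustration of the idea, not Google's or Amazon's actual implementation; the audio frames are simulated, and names like local_hotword_detected and CloudAssistant are made up for the example.

```python
# Minimal sketch of hotword-gated audio handling, under assumed (not real) APIs.
from collections import deque
import random


def microphone_frames():
    """Simulate a stream of short audio frames (a stand-in for real mic input)."""
    while True:
        yield bytes(random.getrandbits(8) for _ in range(160))


def local_hotword_detected(frame):
    """Stand-in for an on-device hotword model ("Hey Google" / "Alexa")."""
    return random.random() < 0.001  # pretend the hotword is heard rarely


class CloudAssistant:
    """Hypothetical cloud client; audio only reaches it after a hotword fires."""
    def handle_query(self, frames):
        print(f"Uploading {len(frames)} frames for speech recognition...")


def run(assistant, query_frames=50):
    buffer = deque(maxlen=10)      # short rolling buffer that stays on the device
    stream = microphone_frames()
    for frame in stream:
        buffer.append(frame)       # older audio is silently discarded as it rolls over
        if local_hotword_detected(frame):
            # Only now does any audio leave the device: the buffered snippet
            # plus the next few frames that make up the spoken query.
            query = list(buffer) + [next(stream) for _ in range(query_frames)]
            assistant.handle_query(query)
            break


if __name__ == "__main__":
    run(CloudAssistant())
```

The point of the gate is that everything heard before the hotword sits in that small on-device buffer and is thrown away as it rolls over; nothing leaves the house until the wake word fires. That is exactly the property a "call 911 on its own" feature would have to give up.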

Why on earth would you want it to be able to report anything suspicious, I asked. Do you not value your privacy more than that? She quickly responded with a "yes, but" and proceeded to explain that she lives alone and would be perfectly fine if it could assess a potentially dangerous situation and call some sort of dispatch. We discussed the pros and cons and listened to each other's arguments — it eventually came around to child safety, as a lot of these things do — and I admit her rationale was solid, even though I still disagree.

I do not want a little gadget that sits on a table deciding whether it needs to call 911. Period, full stop. I do not trust that it would be right often enough, nor do I want that decision taken away from me, even if I would not make the right call every time. I consider myself fairly intelligent and able to make a quick judgment call, even though I know that call will sometimes be wrong. I also think an algorithm could be developed that would recognize specific situations and make the right call most of the time. Nevertheless, I still don't want technology that overrides the little bit of privacy I have in my own home, regardless of the circumstances.

Her arguments, which I will once again admit are sound, hit unsavory topics like an abusive partner or parent and the idea of a home-invasion scenario. The point where I had to pause was when she asked whether I would want that feature for my wife when she's home alone, or for my daughters. I try to be open-minded in all things, but I'm still a husband and father and can't help being a bit protective, so I had to admit that I wouldn't hate it as much then.

I value privacy over all else, but I have to pause when it comes to my family members.

I'm not torn — I don't think a digital personal assistant should have any ability to listen in other than when a hotword is heard and processed. I understand that someone in trouble may not be able to ask Google or Alexa to call for help, but I still think not having the ability is the lesser evil. But as technology becomes more and more personal, these are the types of conversations that need to be had. Lawmakers and the companies that build these devices need to hash it out, but we also need to discuss how much intervention any sort of smart electronics should have in our lives.

I'm OK with the current state of things and hope that "none" is always the answer to that question. I'm curious to hear what you think. Should a Google Home be able to call the police and report abuse? What about other situations, like overheard discussions of violence? A bigger question may be whether it should be able to check if someone is wanted and report them, since it knows our identity. I expect plenty of knee-jerk reactions to these questions — I certainly had my own — but hopefully, we can have a serious discussion, too.

Hit the comments and tell me what you think.

Jerry Hildenbrand
Senior Editor — Google Ecosystem

Jerry is an amateur woodworker and struggling shade tree mechanic. There's nothing he can't take apart, but many things he can't reassemble. You'll find him writing and speaking his loud opinion on Android Central and occasionally on Twitter.

11 Comments
  • Alexa now has an optional feature where it can notify the account holder through the Alexa app for Android or iOS if an Echo device detects the sound of breaking glass or a smoke detector going off. This isn't the same as calling 911, of course, but they seem to have added these particular sounds to what their devices can detect locally. Sounds of something like people fighting would probably have to be identified in the cloud. There's also the problem of false alarms, such as when the sounds are coming from the TV. Perhaps there could be an emergency command, like "Hey Google, call 911 now!" that would try to contact emergency services and wouldn't stop if someone just says, "Hey Google, stop." Of course, then you might have the problem of small children using this when they throw temper tantrums.
  • Personal voice detection could just send an alert to a parent's phone first when kids try to call 911 with it, instead of outright making the call.
  • I think it should be an optional feature for those who want it... but disabled by default in the settings, and something that has to be properly set up for certain scenarios: glass breaking, vibration/motion sensor detection, temperature changes (from fire or even a broken, open window), smoke/carbon monoxide detection, potentially body-impact or pulse detection for the elderly. For example, if a device detects a rapid temperature rise plus a smoke alarm tone, it alerts 911 (a rough version of that flow is sketched after these comments).
  • It is really a moot point. None of the digital assistants has the ability to dial 911, and they don't really have the ability to distinguish between what is heard live and what is broadcast. One thing is for sure: even if they did have the ability, emergency services would be overworked. It is bad enough that they get all the false alarms from home security systems; think how much worse it could be. As far as the ethics of it, naw, it should never be in play for AI, at least not with current technology.
  • No thanks. I use personal voice detection and my kids still manage to spoof it. Not to mention it being activated by tv sounds. I don't want to deal with the authorities anytime I'm watching a tv show with gunshots, people arguing or glass breaking.
  • In theory, I think there are some good conversations to have. But in practice? A thousand times no. It's not just that the technology is nowhere near being able to get this right on any consistent level. It's that we need a lot of other "conversations" first. We know that deeply and dangerously flawed software is being deployed, and there's a lot of handwaving about the flaws because "safety". Fortunately, there has also been pushback, and SOME of it has been effective. But there is still a lot of this floating around. There is more to the issue than how good the software is. The first question is what actually constitutes "good" and "working well"? According to Amazon, their software works well - so well that they are comfortable selling it to law enforcement. But we also know that this software has a significant error rate, and that the error rate is even higher for women and minorities. And they are also comfortable selling the software set to default to making matches at 80% confidence, and not bothering to make it crystal clear to their law enforcement clients that this is asking for a very high rate of false positives. The problem is not just Amazon; they are not the only ones, and none of the vendors of these systems would be able to sell them if their customers were thinking this through. Until we get VERY clear about what sufficient accuracy looks like, and what kinds of safeguards we need to have in place to deal with the possible failures, this is just too dangerous, even without the privacy implications.
  • Having fine-grained control over such an ability would be fine with me, if you could set a schedule or manually activate/deactivate its security function the way many home security systems let you.
  • The other problem at play is the routine police mistreatment of people of colour, mentally unwell people, etc. Assistants having this ability would only exacerbate that problem.
  • The potential for abuse is enormous. This would be yet another weapon for the surveillance state to wield. Edit: oops this was supposed to be a new comment not reply
  • Something like this already exists. The newest Apple Watch has fall detection and can call for assistance. It gives the user a time window during which they can stop the call if it was a false detection or just not needed, and the whole function can be turned off entirely, of course. This isn't to the degree proposed here, but it moves in that direction. An AI calling for help already exists.
  • Agree with a lot of the comments, and although it could serve a valid purpose on occasion, the negatives and potential problems far outweigh the benefits. Last week I specifically avoided asking Google Assistant a question because I knew it would raise flags. Even more so now with people listening in.
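Several of the comments above converge on the same basic design: an opt-in feature, off by default, that reacts to a short list of locally detected events (breaking glass, a smoke alarm tone, a sharp temperature rise) and, like the Apple Watch fall detection mentioned above, gives the person a window to cancel before anything is sent. Here is a minimal sketch of that flow; every name, event label, and timing in it is made up for illustration, and notify_emergency_services is just a stub.

```python
# Rough sketch of the opt-in "trigger plus cancel window" idea from the comments.
import time
from dataclasses import dataclass, field


@dataclass
class EmergencyTriggerConfig:
    enabled: bool = False                 # disabled by default, as the comment suggests
    cancel_window_seconds: int = 30       # time the user gets to wave it off
    trigger_events: set = field(default_factory=lambda: {
        "glass_break",
        "smoke_alarm_tone",
        "rapid_temperature_rise",
    })


def notify_emergency_services(event):
    """Stub: a real product would go through some monitored dispatch service."""
    print(f"Dispatch notified about: {event}")


def handle_event(event, config, user_cancelled=lambda: False):
    """Fire only on configured events, and give the user a chance to cancel."""
    if not config.enabled or event not in config.trigger_events:
        return False
    deadline = time.time() + config.cancel_window_seconds
    while time.time() < deadline:
        if user_cancelled():              # e.g. the user says "false alarm"
            print(f"{event}: cancelled by user, nothing sent")
            return False
        time.sleep(1)
    notify_emergency_services(event)
    return True


if __name__ == "__main__":
    config = EmergencyTriggerConfig(enabled=True, cancel_window_seconds=3)
    handle_event("smoke_alarm_tone", config)   # escalates after the short window
    handle_event("dog_barking", config)        # ignored: not a configured trigger
```

Even a design like this only blunts the false-alarm problem the commenters raise; it says nothing about the accuracy and accountability questions in the longer comment above.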