Additions to the U.K.'s Online Safety Bill seem more worrisome than the problems they try to solve

Twitter Schefter Lifestyle (Image credit: Andrew Myrick / Android Central)

Most Android Central readers are in North America, so you might not know about the U.K. Online Safety Bill. It's a proposal working its way through Parliament that is intended to improve user safety online. It's well-intentioned, not yet a law or set of laws, and may or may not have any meaningful effect.

Case in point: the latest proposed additions to the bill are designed to "protect people from anonymous trolls online." Yes, that's an actual quote from the Right Honorable Nadine Dorries, U.K. Secretary of State for Digital, Culture, Media and Sport; not something I have paraphrased. Besides the candid title, the additions also have the distinction of being absurd.

I'll let you read through the actual text here, but I'll also summarize it for those who won't bother:

  • Online platforms must give users a way to verify their identity and block unverified accounts or users.
  • Online platforms must develop tools for filtering out content that is "legal but harmful" (again an actual quote).
  • Protecting users from harmful or illegal content is the responsibility of the online platform.

Welcome to your grandma's internet where we share recipes and inspirational quotes, but only the ones we like.

I'm not a citizen of the U.K. and am smart enough to know that those who are should decide how valid this proposal is. But I do know that this would lead to a specialized version of the internet for the U.K., one I would never visit. It's also one of those things that could act as a blueprint for similar legislation where you live and where I live. That's where I have an opinion.

No "safe spaces" can exist on the internet

Maybe there should be, especially when it comes to children, but there can never be a safe space on the internet. The people who would seek to ruin a safe space are smart (often smarter than those who would try to create one) and will always find a way. We have 30 years of internet history that proves this simple fact.

Trying to force people to identify themselves via some sort of official method isn't going to change this, and marginalizing those who don't want to submit said ID by shuffling them into the "troll section" doesn't make anything better. You should be allowed to use your real name and verify your identity on any social platform if you like, but doing so doesn't make you special or your content more important. Only the content itself determines its quality. Under this proposal, I would be safe from you (for example) if I verify myself and simply block you until you do the same.

"Legal but harmful" filtering is a good idea that's impossible to implement. You either have to depend on AI — and we've seen how well that works — or have a system that over or under reacts. Before we even get there, though, who decides what is harmful? The platform? The user? The underpaid person writing the tools for filtering?

"Legal but harmful" filtering is a good idea that's impossible to implement.

The last thing I want to see whenever I go on social media is people talking about white cheddar popcorn. It's harmful — people choke to death on it every year, and the ingredients contain whey, which is a proven killer for those allergic to it. I demand the ability to filter out all text and imagery about white cheddar popcorn. The second-to-last thing I want to see is people arguing about which Android phone is best, but there is no way to hide from that.

Yes, that's about the dumbest paragraph you will ever read. But replace white cheddar popcorn with religion, Democrats, immigration, or any number of everyday ideas and you see how this idea can't work without someone, somewhere deciding what is harmful but perfectly legal. You don't want that person to be me. I do not want that person to be you. Hell, I don't want that person to be anyone.

Only you are responsible for you


I am a professional when it comes to acting stupid on the internet.

The crux of these proposed changes is potentially the most dangerous idea: the online platform is responsible. This is also known here in the States as the Section 230 black hole. Section 230 of the Communications Decency Act says that Twitter isn't responsible for the garbage I post, but it can take it down if it violates the terms of service. Ditto for Facebook, YouTube, and any other online platform that isn't part of the U.S. government.

Making an online platform responsible for keeping the peace is the opposite idea, and it's wrong in the U.K. for the same reasons it's wrong in the U.S. Every instance where you think Section 230 offers too much protection to "big tech" can be countered by an instance where you would lose your mind if an online platform were responsible for what its users post. It doesn't matter if you are conservative or liberal, Caucasian or a person of color, college-educated or still in high school. People have different opinions, and Facebook isn't responsible for the shitty memes that come from having any of them.


Unfortunately, that also means Facebook isn't responsible for the hateful things a person or group might have to say about another person or group. For some people, a barrage of this sort of content could genuinely be harmful, but the people posting it are ultimately responsible for saying it.

What we need here, and this goes for the U.K. as well in my opinion, are better tools to filter out accounts we find problematic, better teams of real people to monitor reports of actually illegal content, and firm terms of service that are enforced uniformly rather than based on how many followers or how much influence someone may have.

What we don't need is to have to show our ID at the door lest we be labeled an anonymous online troll, or for Instagram, YouTube, or Gab to be responsible for our commentary.

Jerry Hildenbrand
Senior Editor — Google Ecosystem

Jerry is an amateur woodworker and struggling shade tree mechanic. There's nothing he can't take apart, but many things he can't reassemble. You'll find him writing and speaking his loud opinion on Android Central and occasionally on Twitter.