Artificial intelligence is an often misunderstood term. It sounds like a claim that machines, whether robots or your washing machine, are sentient and able to think for themselves, but that's not really the case. Machine learning (something of a misnomer itself) is a technique in which programmers set up software to recognize a pattern, such as a shape, a color, or a specific phrase, and then trigger an action whenever it "sees" that pattern again.
A great example was how an NVIDIA engineer "taught" one of its AI machines by feeding it photos of cats. All sorts of cats in all sorts of different situations. Eventually, the machine was able to recognize a cat in any photo or even a live feed. It didn't need any more programming to find a cat, no matter the situation, because it had "learned" what a cat was and what it looked like.
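That cat example boils down to a simple idea: reduce each input to a handful of features, learn from labeled examples, and match new inputs against what was learned. Here's a toy sketch in Python of that idea. It is nothing like NVIDIA's actual system, and the "ear pointiness" and "whisker" features are invented purely for illustration:

```python
# Toy illustration of "learning" a pattern from labeled examples.
# Each "photo" is reduced to two made-up features (ear pointiness
# and whisker prominence, both on a 0-1 scale); real systems learn
# features from raw pixels instead.

def train(examples):
    """Average the feature vectors of each label's examples."""
    sums, counts = {}, {}
    for features, label in examples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, value in enumerate(features):
            acc[i] += value
        counts[label] = counts.get(label, 0) + 1
    return {label: [s / counts[label] for s in acc]
            for label, acc in sums.items()}

def classify(model, features):
    """Pick the label whose averaged example is closest."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(model, key=lambda label: dist(model[label], features))

training_photos = [
    ([0.9, 0.8], "cat"), ([0.8, 0.9], "cat"),
    ([0.2, 0.1], "dog"), ([0.1, 0.2], "dog"),
]
model = train(training_photos)
print(classify(model, [0.85, 0.75]))  # a new, cat-like "photo" → cat
```

Once trained, no new code is needed to recognize another cat; the program just compares fresh inputs against the pattern it already holds, which is the whole point of the technique.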
We've moved well beyond cats. As Google CEO Sundar Pichai mentions in his Financial Times editorial, Google can predict the weather in India better than a meteorologist can, and some companies and groups have trained machines to act when a face is recognized.
Identifying a person in a Facebook photo, for example, can allow a machine to get a name, address, phone number, financial information, and email address. If it's a famous person, it can probably find even more information, including things that one would rather not be made public.
This is bad. Maybe it's not the same level of bad as a Terminator traveling through time like we see in the movies, but still, do you want someone finding things out about you because a friend posted a photo with your face in it on social media?
And that's not the worst of it. AI that has learned exactly what a person looks and sounds like can create an electronic duplicate, called a deepfake, in a photo or video. Imagine a 90-second video of a head of state in some sort of compromising position, or saying something off-color, that is 100% fake and computer-generated, and you couldn't tell it wasn't real.
These are real problems, whether it's someone getting your credit score and selling it to fly-by-night creditors (don't you hate getting those letters?), a movie star inserted into a fake porn film, or a presidential candidate giving a fake speech that racks up millions of views on Facebook. Just because AI can detect cancer really well doesn't mean everything done with it will be beneficial.
There needs to be some sort of oversight. That's obvious. It's also obvious that the companies building the machines or individuals writing software aren't capable of keeping it all in check. But having "the government" be the watchdog is insane.
Governments are created to take care of people, but exist to take better care of some people. Even the most benevolent governments in the world are staffed by humans, and humans cannot be trusted to always do the right thing. In a perfect world it might work, but in the real world, government officials care more about being re-elected than about fixing the potholes in the roads or not starting World War III.
These are not the people who should be regulating something that's potentially more powerful than any other tool (or weapon) the world has ever seen. Do you want the Pentagon or the NSA to have technology that can run 24/7 to keep citizens under even more surveillance or to determine who is a threat to our freedom? Or an "enemy" country to have a system in place that can recognize the right time to make a first strike and how to inject the most fear and chaos into your daily life? And have it all be OK under the law because the fox is guarding the henhouse?
I've been reminded that not every government official is evil. E.U. Competition Commissioner Margrethe Vestager is a great example. It is her job to make sure that businesses great and small — including Google, Apple, Amazon, Microsoft, Facebook and Volkswagen — play fairly and follow E.U. law when it comes to data, honesty, and privacy. And to date, she has done an excellent job and made valuable changes.
But things that happen in the E.U. don't always have such a far-reaching effect, especially when it comes to tech that can be weaponized. I don't expect Syria or Libya or the U.S. to take a well-meaning E.U. regulation about AI technology seriously when the heads of those countries know how powerful ignoring regulation can be. This leads to a world where powerful and, depending on your point of view, aggressive nations have even more power to be aggressive. Or one where countries break their own laws to develop the same kinds of smart weaponry that countries without such laws will.
An appointed official that the world would listen to, either willingly or by force, could develop rules for how AI can be used both in the private sector and by the world's governments.
Sundar Pichai knows how his words will be perceived, and it's good to hear one of the people responsible for the mess propose regulation. But simply saying that something affecting every single one of us needs government regulation is almost as bad as saying nothing at all.
It's obvious that someone needs to take the reins and control who has access to powerful online servers that can be used for machine learning and what they are allowed to do with it once they have proper access. I don't know who that should be, though. One thing I do know is that passing the buck to "the government" means you want someone else to figure it all out for you.
Who is going to regulate the government? You are! With your vote! That's what it means to have a democracy! You Yanks need to rein in your fear of "government" and instead make it your own...!
The people. With guns, if need be. Hence the 2nd Amendment.
Finally someone who gets why we have the 2nd Amendment.
This reads like someone who has no business owning a gun. Threatening violence because of privacy concerns? Good god, the 2Aers have no business owning guns. What's your little AR-15 going to do against the US military, which has bigger guns, drones, and 1.5 million soldiers? Hint: Not a damn thing. Maybe your post should be reported to the FBI.
Lol, really, do you think your over-glorified Super Soakers are any match for the US military?
Good luck fighting somebody who can shoot at you from 100 miles away in the comfort of an office building.
No matter who's voted in. Governments are able to regulate companies, but there's nothing stopping governments from implementing deadly AI in the military. And sooner or later, rogues will let the public have access to the tech.
Please, the government cannot successfully regulate anything. They can barely protect their citizens' rights and freedoms or fulfill pretty much any part of the social contract. Do you think they can protect consumers' rights or privacy?! How could they, when oftentimes the very same people in charge of all the regulatory bodies and agencies work both sides of the fence? There are still, for example, some US Senators currently serving who think of the Internet as a series of interconnected tubes... yeah, those guys... they also probably think that AI is short for someone they slept with and forgot to pay off, and that Google is their favourite steakhouse, you know, the one with that world-famous chef, what was his name? Oh yeah, his name was Android...