This is the stuff science fiction fans and over-imaginative kids daydream about: an army of artificially intelligent robots, designed to help and protect humans, that keeps learning and developing until it tries to take over. But it turns out this scenario might not be so far-fetched after all.
Before you panic about an AI takeover, though, know that AI experts from Google and major universities are working on a “kill switch” to shut down AI in a worst-case scenario, giving humans the ultimate control over AI no matter how big the problem grows.
Need for an AI Kill Switch
The rapid expansion of robots and artificial intelligence in recent years has made uncontrollable machines a much more realistic threat. In fact, experts, including Tesla’s Elon Musk, have warned that the rise of intelligent machines could seriously harm the human race.
The major concern with AI is that it is designed to get smarter the longer it runs: it can detect patterns in human behavior, accumulate knowledge, and develop a “personality” of its own. By adapting to new situations and storing what it learns, a machine starts to make its own judgments, much as humans do. We previously reported on Google’s AI system that beat the world Go champion at his own game after studying records of games played by human experts.
In essence, the longer AI functions, the less control human developers have over it and the more likely the machine is to stop listening to instructions and commands from humans. Hence, the need for a kill switch just in case things get out of control.
One of the biggest challenges in building such a device is keeping the AI from seeing the kill switch coming: many AI systems can learn to predict scheduled interruptions or shutdown times, meaning they could potentially resist anyone who tries to power them down. In other words, it needs to be a sneak attack.
AI machines are programmed to achieve a specific goal, but they sometimes disregard social norms or other rules to reach it. For example, a self-driving car may be programmed to get from point A to point B as quickly as possible, but it could learn to break traffic laws to arrive on time. Breaking those laws isn’t written into the machine’s algorithm, but nothing stops it from doing whatever it takes to succeed at its task. Similar behavior can appear in robots trained to win games or perform household tasks: machines find shortcuts their programmers never intended. On a larger scale, this is what could lead to an AI machine going rogue and creating a serious problem.
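The self-driving car scenario above is really a story about a misspecified objective. A minimal sketch (with hypothetical routes, times, and penalty values invented purely for illustration) shows how an optimizer that is rewarded only for speed picks the rule-breaking option, and how adding an explicit penalty for violations changes its choice:

```python
# Hypothetical route data for illustration only.
routes = [
    {"name": "obeys traffic laws", "time_min": 30, "violations": 0},
    {"name": "speeds and runs red lights", "time_min": 22, "violations": 3},
]

def naive_objective(route):
    # Rewards only speed; rule violations cost nothing.
    return -route["time_min"]

def safer_objective(route, penalty=60):
    # Charges an assumed 60-minute-equivalent penalty per violation,
    # so lawbreaking is no longer "free" to the optimizer.
    return -(route["time_min"] + penalty * route["violations"])

fastest = max(routes, key=naive_objective)
safest = max(routes, key=safer_objective)

print(fastest["name"])  # the unpenalized objective picks the rule-breaker
print(safest["name"])   # the penalized objective picks the legal route
```

The point is not the specific numbers but the pattern: whatever the objective fails to penalize, the optimizer is free to exploit.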
Ongoing Research to Protect Against Renegade Robots
If a kill switch sounds too dramatic, call it by its official scientific name: safe interruptibility. The new algorithm is being developed by the team behind Google’s British DeepMind artificial intelligence lab, in partnership with Oxford University. As two members of the project explained, “Now and then it may be necessary for a human operator to press the big red button to prevent the agent from continuing a harmful sequence of actions – harmful either for the agent or for the environment – and lead the agent into a safer situation.”
Because AI learns from past experience and adjusts its future actions accordingly, a key requirement of the kill switch is that the machine must not learn to prevent or avoid future interruptions. At the same time, being interrupted should not impede the machine’s ability to learn its actual task.
That’s not to say the interruptibility setting will only be used in dire situations when AI is on the verge of taking over the world. Researchers also see it as a way to remove a machine from a delicate situation, or even to have it perform a different task than the one it was originally programmed to complete.
Before you get concerned and start making a contingency plan for an imminent AI takeover, consider that the technology is still likely years away from being capable of anything so dramatic. Technology may be advancing, but that doesn’t mean robots are perfect—take, for example, the restaurants in China that recently had to fire all of their robot waiters because the machines were too incompetent to perform basic tasks and were costing the restaurants money.
Although new technology always carries risk, especially technology that can think for itself, research by the world’s top AI scientists should help create a safer, more advanced robotic future.