Lately, Silicon Valley has been shaken by a dystopian piece of news: Google is reportedly designing killer robots. Two years ago, a very discreet convention in Switzerland gathered Japan and other countries interested in purchasing these special devices.
While many conspiracy websites had long claimed that GAFAM were building killer robots, a former engineer from Mountain View has now confirmed the story. In an exclusive interview with The Guardian, she revealed that the U.S. military had placed an order with Google.
According to the former Google employee, this could be a grave threat to humanity.
Will robots kill humans?
“Killer robots could do calamitous things that they were not originally programmed for,” confessed Laura Nolan, a former top engineer at Google.
Her brilliant career ended the day she was assigned to work on killer robots for the U.S. Army while still a full-time employee at Google. And while killer robots have so far remained a dystopian fantasy confined to science fiction movies, they might soon become a reality.
do “calamitous things not originally programmed for”…
— Ramona (@desderamona) September 16, 2019
As The Guardian revealed, “last month a UN panel of government experts debated autonomous weapons and found Google to be eschewing AI for use in weapons systems and engaging in best practice.” To many, killer robots could nonetheless cause human casualties.
The limits of AI
Before Nolan’s testimony, other specialists had already warned about the dangers of unregulated artificial intelligence in Silicon Valley.
Last year, Elon Musk and more than 50 tech experts signed a joint statement online urging tech giants to halt the development of autonomous weapons.
Nolan joined Musk’s movement and said that “there could be large-scale accidents because these things will start to behave in unexpected ways. Which is why any advanced weapons systems should be subject to meaningful human control, otherwise they have to be banned because they are far too unpredictable and dangerous.”
Should note I came across this tool while writing about Google’s Project Maven and the military-industrial fantasy of using algorithms to automate the targeting of humans. More on Maven and mechanical turks: https://t.co/c204nVGike
— 𝙹.𝙳. 𝚂𝚌𝚑𝚗𝚎𝚙𝚏 (@jd_schnepf) September 15, 2019
After The Intercept revealed earlier this week that Google was developing algorithms to better target humans under “Project Maven”, more than 3,000 employees protested and many of them resigned.
“I realized that I was still part of the kill chain; that this would ultimately lead to more people being targeted and killed by the US military in places like Afghanistan,” concluded Laura Nolan.
3 AI weapons already launched
While killer robots are still in the making, The Guardian has listed autonomous weapons that are already in use.
The U.S. Navy’s Anaconda gunboat is a warship with AI-related capabilities that can shoot and “loiter in an area for long periods of time without human intervention”, the newspaper relates.
We should be skeptical of the Kalashnikov's claims, but the overall thrust of Russia's #AI efforts & full weapon autonomy goals are concerning. Using #AI for national security offers a number of benefits, but a lot of risks too. https://t.co/bD5ZazAvLy
— Mike Rogers (@RepMikeRogers) November 28, 2017
The American army has confirmed that it owns another autonomous weapon, the “Sea Hunter” warship, designed for robotic warfare. The 40-meter-long prototype has already been launched and can operate without any crew on board for up to three months at a time.
Last but not least, the Russian army has the third publicly known automated weapon: the Armata tank, which is still under development.