The luminaries believe that for the last 20 years the field has been dangerously preoccupied with building autonomous AI endowed with decision-making powers.
“Artificial Intelligence (AI) technology has reached a point where the deployment of [autonomous] systems is — practically if not legally — feasible within years, not decades, and the stakes are high: autonomous weapons have been described as the third revolution in warfare, after gunpowder and nuclear arms,” they write in a letter presented at the International Joint Conference on Artificial Intelligence in Buenos Aires.
The danger stems from the combination of capabilities machines have gained over the years, including advances in integration, learning, processing power and a range of component tasks, as evidenced in their previous letter published by the Future of Life Institute.
The researchers acknowledge that sending robots into war could produce the positive result of reducing casualties. The flipside, they write, is that it “[lowers] the threshold for going to battle.”
The authors have no doubt that AI weapons development by any major military power will result in a “virtually inevitable global arms race,” with autonomous weapons becoming “the Kalashnikovs of tomorrow.”
“Unlike nuclear weapons, they require no costly or hard-to-obtain raw materials, so they will become ubiquitous and cheap for all significant military powers to mass-produce,” they write.
Nothing will then stop these weapons from reaching the black market, the researchers argue. “Autonomous weapons are ideal for tasks such as assassinations, destabilizing nations, subduing populations and selectively killing a particular ethnic group,” they continue.
The message is not that AI is bad in and of itself, but that there’s a danger of using it in a way that is diametrically opposed to how it should be used – which is for good, for healing and for alleviating human suffering.
The group speaks against a select few who would “tarnish” a field that, like biology or chemistry, has no interest in using its knowledge to make weapons. Any such misuse could provoke a lasting public backlash against AI, denying the field the chance to deliver real benefits in the future, the visionaries write.
None of this comes as a surprise to anyone who follows their careers or keeps up to date with AI news. Wozniak, who co-founded Apple with Steve Jobs, is known to be very vocal on the issue. In June, he posited that a smart AI would wish to control nature itself – and therefore, humans – as “pets.” He even joked about feeding his dog fillet steak, because “do unto others…”
“They’ll be so smart by then that they’ll know they have to keep nature, and humans are part of nature. So I got over my fear that we’d be replaced by computers. They’re going to help us. We’re at least the gods originally,” he told an Austin audience at the Freescale Technology Forum 2015.
Musk is another skeptic. A recently authorized biography outlines how he worries that his friend, Google CEO Larry Page, could “produce something evil by accident” (the company has been making huge strides in robotics and AI, including its recent purchase of Boston Dynamics, the company that scared everybody with its four-legged, animal-like robots).
“The risk of something seriously dangerous happening is in the five-year timeframe. Ten years at most,” Musk wrote a few months ago in a leaked private comment to an internet publication about the dangers of AI. “Please note that I am normally super pro-technology and have never raised this issue until recent months. This is not a case of crying wolf about something I don’t understand,” he added.
Like Wozniak, Musk has been donating millions to the cause of steering AI in a direction beneficial to humanity.
World-famous physicist Stephen Hawking, who relies on a form of artificial intelligence to communicate, told the BBC that if technology could match human capabilities, “it would take off on its own, and re-design itself at an ever increasing rate.”
He also said that, owing to biological limitations, humans would have no way of matching the pace of technological development.
“Humans, who are limited by slow biological evolution, couldn’t compete and would be superseded,” the physicist said. “The development of full artificial intelligence could spell the end of the human race.”
Here is the full letter:
Autonomous Weapons: an Open Letter from AI & Robotics Researchers
Autonomous weapons select and engage targets without human intervention. They might include, for example, armed quadcopters that can search for and eliminate people meeting certain pre-defined criteria, but do not include cruise missiles or remotely piloted drones for which humans make all targeting decisions. Artificial Intelligence (AI) technology has reached a point where the deployment of such systems is — practically if not legally — feasible within years, not decades, and the stakes are high: autonomous weapons have been described as the third revolution in warfare, after gunpowder and nuclear arms.
Many arguments have been made for and against autonomous weapons, for example that replacing human soldiers by machines is good by reducing casualties for the owner but bad by thereby lowering the threshold for going to battle. The key question for humanity today is whether to start a global AI arms race or to prevent it from starting. If any major military power pushes ahead with AI weapon development, a global arms race is virtually inevitable, and the endpoint of this technological trajectory is obvious: autonomous weapons will become the Kalashnikovs of tomorrow. Unlike nuclear weapons, they require no costly or hard-to-obtain raw materials, so they will become ubiquitous and cheap for all significant military powers to mass-produce. It will only be a matter of time until they appear on the black market and in the hands of terrorists, dictators wishing to better control their populace, warlords wishing to perpetrate ethnic cleansing, etc. Autonomous weapons are ideal for tasks such as assassinations, destabilizing nations, subduing populations and selectively killing a particular ethnic group. We therefore believe that a military AI arms race would not be beneficial for humanity. There are many ways in which AI can make battlefields safer for humans, especially civilians, without creating new tools for killing people.
Just as most chemists and biologists have no interest in building chemical or biological weapons, most AI researchers have no interest in building AI weapons — and do not want others to tarnish their field by doing so, potentially creating a major public backlash against AI that curtails its future societal benefits. Indeed, chemists and biologists have broadly supported international agreements that have successfully prohibited chemical and biological weapons, just as most physicists supported the treaties banning space-based nuclear weapons and blinding laser weapons.
In summary, we believe that AI has great potential to benefit humanity in many ways, and that the goal of the field should be to do so. Starting a military AI arms race is a bad idea, and should be prevented by a ban on offensive autonomous weapons beyond meaningful human control.
And while we certainly agree that no one wants to live in a world where autonomous marauding robots roam the earth indiscriminately eliminating targets of their own choosing, we fear the revolution may have already begun… in Connecticut: