Deployment of Weaponized Artificial Intelligence and the “Terminator Conundrum”


August 14, 2017 | Originally published August 14, 2017

Earlier this month, the Russian weapons manufacturer Kalashnikov Group made a low-key announcement with frightening implications. The company revealed it had developed a range of combat robots that are fully automated and use artificial intelligence to identify targets and make independent decisions. The revelation rekindled the simmering, and controversial, debate over autonomous weaponry, raising the question: at what point do we hand control of lethal weapons over to artificial intelligence (AI)?

In 2015, over one thousand robotics and artificial intelligence researchers, including Elon Musk and Stephen Hawking, signed an open letter urging the United Nations to impose a ban on the development and deployment of weaponized AI. The wheels of bureaucracy move slowly, though, and the UN didn't respond until December 2016. The UN has now formally convened a group of government experts as a step toward implementing a formal global ban, but realistically speaking this could still be several years away.

The question of whether we should remove human oversight from any automated military operation has been hotly debated for some time, and in the US there is no official consensus on the dilemma. Known informally inside the corridors of the Pentagon as "the Terminator conundrum," the debate turns on two competing risks: would stifling the development of these weapons simply allow other, less ethically minded countries to leap ahead? Or is the greater danger ultimately allowing machines the ability to make life-or-death decisions?