Are we entering an age of killer robots?

ASU researcher among those clarifying what is, and isn't, possible at UN conference


ASU's Dr. Heather Roff testifies at the UN

Dr. Heather Roff, researcher at ASU's Global Security Initiative, testifies at a UN meeting of experts considering lethal autonomous weapons. Her testimony focuses on the unintended risks of autonomous weapons systems; her research centers on artificial intelligence and human control. The meeting is being held under the auspices of the Convention on Certain Conventional Weapons from April 11-15 in Geneva. Photo courtesy of Heather Roff

April 13, 2016

The world’s militaries are close enough to wielding weapons that decide on their own whether to kill — called Lethal Autonomous Weapons Systems — that the United Nations this week gathered a select few experts, including an ASU researcher, to separate fact from fear of robot overlords.

Dr. Heather Roff, research scientist with ASU’s Global Security Initiative and currently a senior research fellow at Oxford, is testifying on the unintended risks of such weapons and on where autonomous or semi-autonomous weapons already exist. Her research focuses on artificial intelligence and human control.

Question: Why would we want weapons systems that control themselves? Do we have some already?

Answer: There are real potential advantages for the use of autonomous weapon systems (AWS), including force protection and low cost — in both monetary and human terms. There is also the increased ability to maneuver in “denied” environments, like underwater or in areas with jammed communications. In this way, AWS could be a “force multiplier” — that is, maximizing the effects and impacts of force without having a person to do it. However, the concerns outweigh these potential benefits. Use of AWS could lead to an artificial-intelligence “arms race,” where states attempt to realize stronger and stronger AI to outmatch their opponents. This alone is troublesome, but there are other grave risks too, such as the risk of malfunction or fratricide, the escalation of a conflict, increased levels of deniability, stressors on accountability and the potential for mass surveillance of populations. The potential for mass human-rights abuses overshadows short-lived advantages in a denied warfighting space. 

Q: Are we at risk of a real-life Skynet?

A: The key issue here is whether states want to delegate the decision to kill to a machine. If they do not want to relinquish that authority to a tool, and reserve that only for human beings, then they cannot support AWS. I understand that many people may think of AWS as some sort of precursor to “Skynet,” but I don’t think that is a helpful analogy. If we jump from AWS — something that can engage and fire without human intervention — to Skynet, then we fail to see the issues that are in front of us now: systems that may not be easily testable under combat conditions, are not verifiable, may not be predictable or reliable, systems that would breed incentives for human-rights abuses, or could trigger conflicts due to unintended engagements.  

Q: When we think about autonomous weapon systems, people may think of “drones.” Are they right?  

A: AWS are not “drones.” A remotely piloted aircraft — or RPA — is piloted. In other words, there is a human there from the outset. Not only is a human piloting the aircraft, but many other humans are engaged in its operation and in the decision to use lethal force. As to whether “drones” could become autonomous, the answer is “sure.” Autonomy is a capability. With the right software, we can make unmanned systems autonomous. Look, for example, to the U.S. Navy’s swarm boats or Google’s self-driving car. The difference, however, is that those vehicles are not armed and not making lethal decisions. I think autonomy will challenge diplomacy because of difficulties with transparency, signaling intent between states, showing resolve and permitting confidence-building measures. The ubiquitous use of these systems could change the face of conflict in ways we don’t yet fully understand.

To schedule an interview with Dr. Roff, please contact Logan Clark at the ASU Office of Media Relations and Strategic Communications at mediarelations@asu.edu.
