Moral agency and meaningful human control: Exploring military ethical values for alignment in the use of autonomous weapons systems
Lead Researcher: Dr Elke Schwarz
 
Funding Agency: Leverhulme/British Academy
Advances in autonomous technology and Artificial Intelligence (AI) will shape civic and military futures in significant ways. Despite this, the focus on promoting innovation in these areas means that ethical considerations often take a back seat. There is broad consensus in current debates that ethical issues must be addressed in the development of robotic AI systems, but it is less clear what kinds of ethical values (as distinct from legal requirements) should factor into this enterprise. This is particularly crucial for the use of robotic AI systems in military operations, where human-machine teams will shape key aspects of decision-making and operational conduct in future defence operations. This project examines how technologically advanced militaries view moral agency and ethical values vis-à-vis new autonomous and intelligent technologies. It seeks to:
(1) provide a clarification of ethical values and moral agency in military operations, and
(2) open an interdisciplinary dialogue on the topic to help shape policy and industry guidelines.
 
 
