Is it right to separate humanity from warfare? Governments and organizations across the world have struggled with this question throughout the 21st century, as technological advancement has increasingly removed people from the reality of war. That progress is culminating in what are called autonomous weapon systems, or AWS. Imagine, for example, drones that could identify targets autonomously, without a human operator, and then act on that information. This is the future that major actors in international security and defense are grappling with as it rapidly approaches.
These weapon systems clearly bring a host of ethical issues to the table. What is an acceptable level of accuracy? Who gets to evaluate the performance of these systems? Should we really allow the removal of humans, and more importantly our empathy, from the act of killing? With these questions in mind, countries and organizations from across the world have voiced their concerns about these systems. The proposed approaches differ: a core group of countries advocates for an outright ban, while the United States and others have refused to commit to one.
It needs to be stressed that these systems are not yet a reality. No country or company has developed an AWS, but given the rapid progress of machine learning and the massive boom in investment, nations must recognize that the matter needs to be addressed sooner rather than later. And that's the crux of the issue: countries have been stagnant in facilitating meaningful conversations on either a ban or a regulatory approach. There have been several conferences, but no real progress has been made, and there is a clear reason why: there is no standard legal definition for identifying an AWS.
It's daunting to develop a definition for a (currently) theoretical system, but countries like the US and China have each submitted one. Unfortunately, these definitions vary significantly, and much of that variation appears to be intentional ambiguity. As countries propose their definitions without real commitment from other nations, loopholes are written in to allow for their own continued advancement. That's the problem: a lack of serious commitment has perpetuated a myriad of definitions that are often too vague to act on. Previous weapons agreements, like the Chemical Weapons Convention (CWC), show that regulatory success is rooted in precise, extensive definitions that enable actual international collaboration. Unfortunately for AWS, the conversation has now shifted from classifying these systems to whether or not to ban them, which puts the cart before the horse: without a standard way of identifying these systems, a ban is hopeless.
Whether or not to ban these systems is an altogether different question. Personally, I believe a regulatory approach is needed: the massive influx of money into machine learning is quickly turning into an arms race that can likely only be mitigated through regulation. But whether you favor regulation or an outright ban, the first step toward either is standardization in identifying and classifying these systems. That will require serious commitment from the US and other nations, but it is crucial to averting devastation beyond comprehension.