Branka Marijan: Overcoming the Challenges of Regulating Lethal Autonomous Weapons Systems (LAWS)

The following is an edited transcript of an interview with Branka Marijan.

Could LAWS make conflict more ethical? 

Fully autonomous weapon systems would not make war more ethical. The arguments made in support of developing these systems tend to highlight precision. However, there have been many instances of more precise weapons not being used precisely, often due to a lack of care. Precision is important, but it does not address a lack of care, nor does it tell us whether a target was legitimate or whether a military engagement was permissible in the first place. 

In general, there is a tendency to discuss autonomous weapons systems as if they would exist in a vacuum, but obviously that is not the case. Nations and non-state actors will have access to these technologies and will use them in different ways to achieve particular military and political aims. This reality will not change, regardless of the sophistication of the systems. Moreover, we are likely to see mistakes, accidents, and hacking of these systems. This is immensely concerning, particularly the potential for proliferation to non-state groups and authoritarian states. Experts such as Michael C. Horowitz have pointed out that our ability to control these systems will be much lower than is the case with other technologies. We have to consider so many variables, and that is before we even get into any moral considerations.

Should there be concern regarding the ability of non-state actors, lone wolves, or terrorists to acquire and potentially modify commercially available capabilities? 

That is definitely a concern: commercial, off-the-shelf capabilities are very adaptable and malleable. We have already seen how readily terrorist organizations like ISIS adapt these kinds of systems. Drones and surveillance technologies have also been used by criminal networks, cartels for instance, and by terrorists. Another aspect to consider is the practice of states assisting non-state actors in acquiring these sophisticated systems, or directly providing them with the technology. Diversion is another area of concern: weapons are exported within the guidelines and framework of international regulations but are then misused, or are intercepted and redirected for uses the exporting state never intended, putting them in the wrong hands. These weapons would allow non-state actors to scale the impact of their objectives in a way that was not possible before; they are not difficult to acquire and are easy to modify to maximize their potential for harm. We tend to think about hardware, but we also have to think about software and the less tangible ways that new technologies, and the know-how behind them, could be acquired by non-state groups. Cyber attacks are a well-noted concern as well.

At what point should computer systems not be permitted to make autonomous decisions? To what extent should human oversight, engagement, and control factor in, and what kinds of decisions need to remain in the human domain? 

Decisions over human lives should remain firmly in human hands. We do not want machines deciding whether someone should live or die. There are many technical, ethical, and legal reasons for that. It is very clear to me that we need to have meaningful human control over that level of decision-making. All of our norms and international laws are tied to the responsibility of human beings for the decisions they make. There would be no accountability: no one can be held responsible for a decision made by an autonomous system functioning without a significant degree of human control. The International Committee of the Red Cross recently raised the concern that existing international humanitarian law is not sufficient to tackle the challenges posed by autonomous weapons. We need clarity. We need to make new laws, and we have to be certain about where we draw the lines. This is in everyone’s best interest: states, companies, industry, individuals, and society. There needs to be a way to establish clear accountability whenever these weapons are used, and that requires a meaningful level of human control. Users need to be highly specialized decision-makers. Operating a drone is not the same as playing a video game; this is real life. You are not just pressing buttons and hitting targets; there has to be a full understanding of the decisions being made as well as their consequences.

Could the struggle to define control and autonomy pose challenges to regulation? How can future norms account for the gray zone between autonomous and automatic weapons? 

Once again, countries need to be clear on these terms, and they have the capacity to be. There are plenty of tools and forums available to hash out these issues, negotiate, and highlight different areas of concern. These issues need to be discussed at length, and we need to develop precise terminology. How do we define autonomy, and why does that definition matter? Why do we need to understand human-machine interaction? University of Ottawa professor Jason Miller raised an excellent consideration at a United Nations discussion on autonomous weapons: systems can be designed in ways that nudge the user toward certain decisions, so an operator can be ‘nudged’ to accept or enable a particular command based on system design. States need to familiarize themselves with the nuances of this discussion. I expect that more progress can be made at the United Nations Convention on Certain Conventional Weapons talks on autonomous weapons in Geneva. These discussions have been going on since 2014, and progress has been slow, but they are important nonetheless. Countries recently had an opportunity to take part in remote, informal discussions and exchange views. We are starting to develop a better understanding of how to put parameters on these terms. There is a convergence of views on the need for human control and on establishing which types of systems would be permissible and which would cross a certain line. 

It will not be perfect, none of our regulations are, but it is a start. We just need to have more of these conversations so we can begin establishing norms and regulations that capture the top concerns. We need to ensure that meaningful human control becomes a requirement and is captured in some form of regulation. We also need to future-proof the regulations we agree upon, because the technology will advance very quickly. There has to be certainty that what we are regulating is the use, not a specific system that will no longer be relevant a few years down the road. Right now, I think it is a matter of political will: do we have it? I am not sure all of us fully comprehend the impact these types of systems will have on global security. That is a big challenge.

How can Canada collaborate with allies and multilateral institutions to ensure these technologies and weapons are used ethically and responsibly? Do you have any policy recommendations? 

I think Canada can really step up and play a strong role in this space. Until recently, we were in the wait-and-see camp, but that is changing. Our current Foreign Minister has a mandate to support a ban on fully autonomous weapon systems in these international discussions, so the political support exists. Canada can become much more engaged and informed, and we are starting to see this little by little. DND is certainly thinking about this a lot because there is going to be a great deal of focus on interoperability. The US, in particular, is very interested in working with allies on the uses of artificial intelligence for defence applications. The US-led “AI Partnership for Defense,” which includes Canada and some twelve other states, is a space where Canada can promote new norms and offer thoughtful interjections.

Our AI community is talented, vibrant, and engaged on a global level. Diplomatically, we also have the Global Partnership for Artificial Intelligence. While it has sectioned off security and defence uses of AI, it can still be a venue for Canada to engage. AI is a multi-use technology: even if a system is not specifically designed for lethal purposes, the same technology can still be used dangerously. There needs to be much more engagement and a whole-of-government approach. 

There is also the option of export controls, an important consideration given the multi-use nature of these technologies. As stated previously, many of these technologies will be acquired by undesirable actors. We must be very thoughtful about using export controls, but we should use them to ensure these tools do not end up in the wrong hands. With multi-use technologies, there is also the concern that they may, at some point, be used to undermine our domestic security. We need to be thoughtful about the technologies that are being created in Canada, and about where those technologies end up.

Ultimately, it is important to get beyond the broad discussions surrounding autonomous weapons. We sometimes get bogged down in our discourse: the international community will not respond, the speed of technology is outpacing our ability to create policy, and so on. That latter point is true. Regulation is nearly always outpaced by technology, so we need a much more proactive response from governments, as well as a willingness to have these discussions. Yes, the current global political reality is challenging for arms control; still, that is all the more reason to work on it. If we think back to the Cold War, there were exchanges between different states. I think we are missing that element today, that openness to dialogue. Canada needs a real techno-diplomatic policy, one that looks at these technologies and the diplomacy required to tackle this global challenge. 


Branka Marijan is a Senior Researcher at Project Ploughshares. At Ploughshares, Branka leads the research on the military and security implications of emerging technologies. Her work examines concerns regarding the development of autonomous weapons systems and the impact of artificial intelligence and robotics on security provision and trends in warfare. Her research interests include trends in warfare, civilian protection, use of drones, and civil-military relations.

She holds a PhD from the Balsillie School of International Affairs with a specialization in conflict and security. She has conducted research on post-conflict societies and published academic articles and reports on the impacts of conflict on civilians and diverse issues of security governance, including security sector reform. Branka closely follows United Nations disarmament efforts and attends international and national consultations and conferences. Branka is a board member of the Peace and Conflict Studies Association of Canada (PACS-Can).
