Leveraging AI for the Canadian Armed Forces

Daniel Araya

In this episode, Daniel Araya explores the Canadian Armed Forces’ integration of artificial intelligence, highlighting its role in enhancing military capabilities through drones, robotics, and cyber operations.

He addresses both the advantages and challenges of AI, including ethical considerations and the need for interoperability with allies. Araya also emphasizes the critical need for a robust tech sector and startup ecosystem in Canada. This discussion provides valuable insights into how AI is reshaping the future of military strategy and national defence.

What does AI look like in practice for the Canadian Armed Forces (CAF)? What do we mean when we say that the CAF will pursue AI integration?  

I want to answer that question in two different ways. One is to define what we mean by AI as a general-purpose technology. Two is to get into basic examples of how it is being deployed. Not just by the Canadian military, but by militaries around the world.  

I think it is a misnomer to say that AI is literally artificial intelligence. What it is is machine learning: machine learning that leverages data, algorithms, and massive computing capabilities to mimic human cognition. AI is software. In that sense, it is software designed to align as closely as possible with human reasoning and, in that way, to give us human beings the capacity to scale up our own reasoning capabilities.

It is very early in terms of where AI is at. It would not be accurate to say that we know where AI is going; fundamentally, we do not understand where this is taking us. We have some frames of reference in history. The steam engine, the printing press, the internal combustion engine, electricity: all these general-purpose technologies have had a massive transformational impact on industrial society. AI is one of these. It is not only going to be deployed in the military. It is going to be part of telecommunications, transportation, logistics, healthcare, and education. Every institution that uses information, data, and intelligence in some form will be leveraging AI.

That is the first basic framing for understanding what we are talking about. To build on that in terms of military application, the most obvious example is drones. Drones have been a key feature of the Ukraine War. The U.S. military has been deploying drones for decades. The kind of drones we now talk about as a tipping point are low-cost and battery-powered, particularly Chinese DJI drones. Those drones have had an outsized impact on warfighting because they are cheap and relatively silent. In that sense, they provide an asymmetrical advantage compared to the Joint Strike Fighter, which costs tens of millions of dollars. The difference in what you can do, on a cost basis, is significant. There is a certain economic advantage to using AI-powered technologies like drones.

The next generation is robotics. We talk about drones that are air-based, sea-based, and land-based; all these drones are just early-generation robotics. These kinds of tools are going to be significantly impactful in terms of augmenting mass, making it possible for the military to deploy at scale while keeping vulnerability to personnel very limited. This is a working debate. I know many people in the military are dubious about the potential of drones and robotics, in terms of their accuracy and in terms of leveraging AI effectively, but I would say these are very early days. This is a nascent technology that will improve over time. I think it is important to work with the technology within limits.

There are other applications of AI around cyber operations, defensive and offensive information processing, and recommendation systems: for example, pattern recognition on satellite imagery, and logistics and maintenance of vehicles and resources, particularly predictive maintenance. All of these involve cost reductions. They are valuable because they reduce the need for personnel, so you can put those personnel elsewhere.

One thread that runs through the CAF is the need for a strategy around human-machine teaming. The critical caveat is cloud computing, or the platform-centric, software-centric military. It is very difficult to attach AI to already structured military operations. What is more likely to happen over time is that software will become the centrepiece upon which the whole architecture of military operations is built. Transitioning from the vertically integrated, bureaucratic military systems we have today to a much more agile, networked, distributed military is a huge challenge. This is something that large multinationals have had to deal with over the past decade or more. It is not a simple challenge.

In “Strong, Secure, Engaged,” Canada’s 2017 defence policy, Canada has taken some steps to start this process. Canada has said it will try to integrate new AI-based capabilities like remotely piloted drones, cyber technologies, and space-based surveillance assets. Where is the CAF currently with its implementation of these capabilities, and what progress is being made on that front?

I mean, there’s a long way to go.  

The CAF is strapped for financial resources. There are significant issues around retention and bringing new people into the military. The Canadian military is not in a good place right now. That is the base case for understanding what comes next, which is augmenting the military with more advanced data-driven or AI-driven technologies. There is a lot of discussion in the Canadian military around what to buy, where to procure from, and which tools are going to be the most valuable.

The thing that comes up first is drones. Drones are a given. There is already a process in place to procure drones. DND is experimenting with new drone technologies, but this technology is well established on the battlefield already. Not just in Ukraine, but Azerbaijan, Ethiopia, and lots of other small wars around the world.  

Cyber operations are also a big part of what the Canadian military is trying to do. It is doing information processing and human-machine teaming, but one area where it is apprehensive is robotics. If you look at the US military, there is a lot of investment in working with startups to leverage the next generation of robotics, both to supplement personnel and to make the military fundamentally more data-driven.

Ultimately, you want smart robots to back up the creative intuition of human beings in a human-machine teaming environment. The CAF knows this and that is the direction they want to go in. 

We have talked about the benefits and what potential AI implementation would look like. However, there are obvious challenges that come with that. One major problem, especially with AI implementation, is ethical, legal and safety concerns. Could you elaborate a bit more on what these challenges are?  

This is not unique to the military. This is the nature of machine learning. Machine learning models, or AI models, are driven by data, and sometimes that data has bias built into it. The data comes from human experience. Humans are biased, and that bias feeds through the data into the algorithms of AI models.

Bias is a problem, but it is not the scale of problem the media makes it out to be. However, if you want to deploy AI models on the battlefield alongside soldiers, it must be resolved. The only caveat is that the kind of AI we are talking about today is a moving target. What we call AI in one decade is no longer considered AI in the next. Rather than looking at AI in terms of the machine learning we have on the ground or in deployment now, we should be thinking in ten-to-twenty-year horizons. We need to come to understand that AI is a fundamental part of what software is doing to systems and processes.

So, the question of how to govern or manage AI is a big one. It is not one that can be answered easily. I say this because I do not think AI is a singular or static thing. AI is a process, and it is a moving process. I think we need to guide that process. It is like raising a child: you need to train or discipline that child so that it is coherent with the social norms in which it operates and engages in the world. AI is similar. It will mature, and as it matures it is going to be a much more robust and sophisticated technology than what we have seen so far. It is important to bear that in mind as we move down the road of governance.

Could you crystallize some of the exact harms that would occur if we had AI trained on biased data deployed in the field? What might be some of the harms to Canadian military operations if there was a biased data set in there?  

Well, obviously. I mean, let’s talk about image recognition. If you are hunting a particular target, someone who is in fact friendly may resemble the images the machine-learning model was trained on, and the system could kill that person. It is not supposed to kill or target in a way that is dangerous to the mission, and if it does, that is not technology you want to be in the field with.

If it does not respond in ways that are predictable or provide an advantage to soldiers on the ground, then why have it there? That must be dealt with in practical terms. That is a challenge for AI developers more than for the military. However, innovation here is coming from the private sector, not from government or the public sector. As the military engages with tech firms, there must be a mandate that these problems get worked through before systems are deployed, so no unnecessary harm is done. That is not to say no harm will ever be done; new technologies always have consequences. If we think about the general-purpose technologies of the past, we could not imagine a military being deployed without them, right? There is no way forward outside of using AI, or, more specifically, software and ultimately robotics, to make the Canadian military more robust and capable. It is an unavoidable challenge ahead of us, not something we can dismiss or retreat from.

You have noted that most of these challenges stem from industry rather than the CAF. Is there anything that the CAF itself can do, rather than just relying on industry? 

The United States, which has the largest military in the world, often procures from tech firms. I would like to see Canada mirror this – placing more value on startups to augment the procurement process and make it more efficient. The military procurement process is notoriously slow and inefficient.  

When we are talking about software and robotics, however, the cycles of development are so much faster that if the procurement process is not accelerated, we do not get the best quality. The software we then deploy in our military becomes redundant very quickly. If you look at startups like Palantir, Anduril, or even SpaceX, they have become fundamental pieces of the US military. We need the equivalent in Canada, but our startups have not been given the kind of financial oxygen they need to grow. It is important for the military to begin wrapping its head around how to incubate new tech companies, and not just go to large providers in the way it might have done in a conventional era.

How could the CAF work better with the traditional industry in the absence of this kind of startup environment to take advantage of new AI developments? 

That is a very good question because it is not the military’s role, in some respects – it is the federal government’s. The federal government should earmark resources to build out our startup ecosystem to be competitive.  

We have challenges around productivity and economic growth that are fundamentally rooted in a lack of investment in the tech sector. One consequence we are now seeing is that Canada is falling behind its peers across the G7, especially the United States. That is a horizontal problem across institutions, and across the national system. We need to rethink how we do technology strategically as a nation and begin to appreciate that we need to move away from older neoliberal models in which we focus on competition for its own sake. We need to think strategically about how to build up a tech ecosystem that is competitive, one that will facilitate both economic growth and national security.

We have previously approached this issue as an opportunity for CAF advancement. Are there risks to Canadian security we will face if we do not keep up with technological advancement? What will happen if our adversaries or allies potentially get more advanced in this space than we do?  

This is a fundamental question. To be blunt, we are already there.  

The United States does not look at us in a serious way, in military terms. We have been riding on the security umbrella that the United States provides for us. We are very fortunate in that respect, but it does sap our sovereignty and self-confidence as a nation. 

So, it is not about reaching the 2% goal that is talked about in NATO. It is about using resources more wisely, reducing bureaucracy. It is about minimizing the sluggishness of procurement and taking our tech sector more seriously, understanding on a strategic level where this is all going. 

One big difference in how innovation is done is that the tech sector plans around what you might call S-curves: exponential changes that emerge from software evolution. Government has a very hard time understanding this concept. I have found that government operates in linear terms; it assumes the future will be just more of the present. The consequence is that it is constantly falling behind the curve of change. If you are in a regulatory environment, that is par for the course; you are always going to be chasing technology. If you look at other nations, say, in Scandinavia, or even South Korea, which are small nations as well, they often do a better job of planning for innovation than we do. I think we can learn from peers like them, whether it is Australia or the British, which are our common parallel models, or more dynamic examples like South Korea, Finland, or Estonia. We need to get better at software, and we need to deploy that software across our bureaucracies in a way that makes our systems far less sluggish and, to a certain degree, more automated.

Are there any other challenges the CAF faces with implementing AI that we have not really covered, or areas government could improve with its application of this technology?  

One, we should get back to manufacturing. These technologies need to be built, and we need to get back to that as a nation.

The other thing is, I want to see more creativity in strategic planning in the military. If you look at examples of deploying AI in games like Go or chess, I think the pace of war is going to accelerate dramatically. Now, none of us want war when it is all said and done. We want a system of peace, but you must be prepared as a nation.

We need to appreciate that war will change, just as it has always changed with new general-purpose technologies. An example that comes to mind is Polish cavalry attacking Nazi tanks in World War Two. We do not want to be the cavalry of the 21st century. We want to keep up with technological change and deploy it intelligently. That is about leveraging resources, and so governments in Canada, at both the provincial and federal levels, need to appreciate that tech is critical to our future, both economically and in terms of security.

How would you say that Canada can work together with and learn from its allies to develop a truly interoperable AI system that is ethical, legal, and safe, as well as serving the Canadian military’s needs?

That question is not exclusive to Canada; it is a question across NATO. I would park it under the category of cloud computing and the platform military. What you want is an extensible platform that is, in a sense, plug-and-play for various militaries, so that all the data streams can be integrated and the systems can work as one whole. You need interoperable training as well. It is not just a tech question, but the fact is that as we move into a software-centric military environment, having our software be effectively interoperable with the United States and our allies in Europe is going to be the basic minimum requirement for a comprehensive security architecture.

The age of the people in government matters. If someone is not familiar with software and does not understand how it works, I think there should be a certain bias in deciding who manages and organizes decision-making. Tech-savvy millennials and Gen Z will be very valuable over time. It is important that we make this generational shift and move beyond the old industrial, conventional way of thinking about the military, towards a much more software-centric, digitally savvy institution.

Are there any final thoughts that you want to share with our audience about AI implementation? 

Bear with the technology. I think it makes people nervous. They look at the Terminator movies and they think that is the future. I don’t think that is the future. Obviously, because it is a nascent technology, we are going to have some challenges ahead of us in terms of managing and governing it.  

One example that comes to mind is when electricity was discovered, everyone thought we were going to have ray guns, and the military would be turned into an electricity fighting system. In reality, it was just communications that really benefited. I think AI is similar. I think it will be more profound than electricity, but it is similar in that we do not know how to leverage it yet.  

Do not put the cart before the horse. Let us experiment and figure out how this works with guardrails, with governance, with institutions that have knowledgeable personnel, and have some level of confidence that we can get this done.  
