Danielle C. Tarraf: The Digital Battlefield: Unpacking the Military Adoption of AI

What developments do you expect to see on digital battlefields in the near future?

I like to remind people that AI encompasses many different technologies. What constitutes AI has changed since the term was first coined at the Dartmouth workshop in 1956. AI has been a part of the digital battlefield for decades—consider the Aegis Combat System. Where we have seen a lot of recent advances, which I think has fueled the current excitement, is in the realm of deep learning (which has applications for image, speech, and text processing, among others) and deep reinforcement learning. When people talk about AI these days, the focus is on these particular methods and new advances: deep learning and the technology surrounding it.

Last year I led a congressionally mandated study to assess the DoD's AI posture and provide recommendations for improvement. We wanted to address some fundamental questions, including: What is the current state of AI relevant to DoD? How do we distill that into a succinct message about what senior DoD people should know about AI?

We laid out our thinking about DoD applications of AI along a spectrum characterized by four factors: operating environment, resources, tempo, and implications of failure. On one end of this spectrum we have what's called enterprise AI—financial management systems and systems for managing personnel and health records. On the other end you have operational AI—autonomous tanks and vehicles, command and control systems, fire control systems, and other such systems. At that end there is limited control over the environment and how it's evolving; it's generally adversarial and resource constrained. Our ability to collect data is limited, and the tempo of synthesis and decision-making based on data is very quick. In between are applications of what we call mission support—AI-enabled support for logistics, predictive maintenance, and some aspects of cybersecurity.

A central message of our report was the need to maintain realistic expectations regarding the performance and timelines of AI, from demonstrating the art of the possible to deploying and using these technologies at scale. If DoD were to make investments in deep learning advances starting today, we should get to at-scale deployments on the enterprise side in the near term (within five years). On the operational side, we're looking at 10 years or beyond, with mission support somewhere in between.


What are some of the capabilities and limitations of future weapons systems using AI/machine learning, e.g., fully autonomous combat vehicles and vehicles with AI/ML-enabled situational awareness?

In the 16 years following the DARPA Grand Challenge, anywhere from $16 billion to $80 billion has been spent on R&D for civilian and government applications of autonomous vehicles. Whatever the actual figure, we've spent on the order of $1 billion to $5 billion or more per year on self-driving vehicles, yet we're still not riding around in them, because we still have a number of technical challenges to work through. When these systems work, it is under narrow operating conditions, and they tend to fall apart once placed in certain real-world situations. Navigation alone is a big issue. Going from the civilian world to the battlefield is a big jump. There are things we take for granted in the civilian world, such as reliable GPS and maps, that are core to these capabilities but will not be available in the battlefield environment.

There are many technological hurdles to overcome when moving from the civilian world to the military realm. We are no doubt living in interesting times though. There have been rapid advances, beyond self-driving cars, in computer vision, object recognition, image processing, natural language processing, and decision-making systems. With a little imagination, one can certainly think of interesting applications of all of these advances for battlefield environments. The leap in imagination is small, but the leap in technology required to get us there is significant. The battlefield environment is highly unstructured, dynamic, and adversarial by nature. It faces computation, communications, and data constraints not seen in the civilian realm on so many levels, and I haven’t even addressed the non-technical challenges yet.


What challenges do militaries face when integrating AI into their systems? What is required for AI to be successfully adopted by militaries?

On the enterprise side, the technology gap is small, but it grows as you approach the operational side. In enterprise AI, the challenges right now are not necessarily the technology itself but rather the surrounding ecosystem needed to enable it: access to the right data sets in sufficient quantity and quality, access to talent, and alignment of incentives. On the operational side, technically we are a long way off. Think of the advances in strategy games: what do they really say about command and control and the digital battlefield? Devising algorithms to beat humans at a strategy game like Go is a different matter. Go involves two players, it's a full-information game, there's a neat order of play (I play, then you play, and so on), and it takes place in a discrete world. None of that is true in the real world out there—especially not in a battlefield environment.

I think there are at least three things that would be required beyond overcoming technological hurdles. The first is establishing trust in the systems. The engineers and developers who build these systems need to be able to trust that they will perform as desired; part of that means advancing the science and practice of verification, validation, testing, and evaluation (VVT&E). The other aspect is whether users and operators can trust the system. Part of that is including them in the design process, which serves two purposes: it establishes familiarity and buy-in, and it improves the design of the system itself.

I also think it's important, to get the best use out of the technology, to revise processes and procedures depending on whether you're on the enterprise or operational side of AI. You don't just want to automate every step of a process by using AI and replacing humans with machines. How people interact with and use these new capabilities is important to consider as well. RAND ran a tactical wargame experiment where some players were given autonomous vehicle capabilities. How the players tended to use these vehicles varied. On more than one occasion, the players decided to use these robotic vehicles to draw out the opponent during battles, which is something you would never do if you actually had a real vehicle full of soldiers.

Finally, talent is an important consideration. In financial markets we see autonomous decision systems that use AI. The individuals who designed these systems are usually also present to monitor the systems, adjust if needed as market conditions change, and potentially take them offline completely if need be. Extrapolating from here to a battlefield setting raises interesting questions about the skill sets necessary to field, employ, and sustain these systems.

What is robustness, and what happens when safety-critical AI-enabled systems lack robustness?

I would characterize robustness as a desirable property of a system. Let's think of a system as a box that processes inputs and produces outputs or some desired performance. If the inputs of the system change slightly but the outputs and performance don't change drastically, you have a pretty robust system. If your system lacks robustness, then you have a very fragile system—one that's likely incapable of operating outside a lab environment where everything is very tightly controlled. Robustness is a desirable property of all engineered systems, not just AI. We can mitigate a lot of operational risks by building systems that are robust and verifiable. More broadly, we need to focus on trust at every stage of the research, development, and fielding process.
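For readers who like to see the idea in code, here is a minimal Python sketch of this input-output notion of robustness. It is purely illustrative: the generic `model` function, the perturbation size, the tolerance, and the two toy systems are assumptions for the example, not any particular deployed system or method from the study.

```python
import numpy as np

def robustness_check(model, x, epsilon=0.01, tolerance=0.1, trials=100, seed=0):
    """Empirically probe robustness: nudge the input x by small random
    perturbations (magnitude <= epsilon) and record how far the output
    moves from its nominal value. A robust system stays within tolerance."""
    rng = np.random.default_rng(seed)
    nominal = np.asarray(model(x), dtype=float)
    worst = 0.0
    for _ in range(trials):
        noise = rng.uniform(-epsilon, epsilon, size=np.shape(x))
        output = np.asarray(model(x + noise), dtype=float)
        worst = max(worst, float(np.linalg.norm(output - nominal)))
    return worst <= tolerance, worst

# Toy illustration: a smooth response passes, while a system with a hard
# threshold near its operating point "falls apart" under tiny perturbations.
smooth = lambda x: np.tanh(x)
fragile = lambda x: (x > 0).astype(float)
x0 = np.array([0.001])
print(robustness_check(smooth, x0))    # (True, small worst-case deviation)
print(robustness_check(fragile, x0))   # (False, output flips to 0 at the threshold)
```

Under these assumptions, the fragile toy system captures the point above: it behaves perfectly under nominal conditions, yet its output can change entirely for an imperceptibly different input.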

Suppose we were living in a world where car wheels didn't exist and then somebody invented one. It's great, except this wheel only performs well in a building where the terrain is flat and the temperature is controlled between 69 and 71 degrees Fahrenheit. Would you put that wheel on your car and drive off with it? Some current deep learning approaches have a very narrow window in which we know they operate well. Going back to the wheel analogy, we don't even know whether it's just the temperature and the terrain that matter. Perhaps there are other things that matter for the performance of the system, but we haven't been able to figure that out yet.

China announced its ambitions to become the world leader in AI by 2030. What are some concerning and/or promising aspects of potential Chinese AI leadership? Should the U.S. and its allies prioritize collaborative AI research to stay ahead of China?

The People's Republic of China has identified AI as key to enhancing its competitiveness in national security. China has put forth a national AI plan, which is essentially a top-down, centrally coordinated, whole-of-society approach backed by significant investment. There are a number of cultural and structural factors that work in China's favor on this path towards becoming the global leader in AI. That can certainly give rise to concern. Partnership is important at every level. What fueled the current progress in deep learning for object recognition was the sharing of a curated data set made available to all researchers; the competition that ensued spurred innovation and enabled rapid advances. I cannot overemphasize the importance of partners and an open environment.

Danielle C. Tarraf is a Senior Information Scientist at the RAND Corporation. Her research interests are in control theory, particularly as it interfaces with theoretical computer science, machine learning, and optimization, with motivating applications in autonomy as well as defense strategy and technology. She was previously on the Electrical and Computer Engineering faculty at Johns Hopkins, and has been a visiting/summer faculty fellow at the Air Force Research Lab. She is the recipient of multiple awards, including an NSF CAREER award in 2010 and an AFOSR Young Investigator award in 2011. She received her Ph.D. from MIT in 2006.
