Q: In a piece you wrote for CIGI you challenged the claim that "data is the new oil". What does this statement mean and why don't you agree with it?

A: Everything from search results to image recognition relies on data; data is what makes algorithms go. The underlying idea is that data will be as important to the future of international politics as oil and coal were a century ago. This sentiment is problematic because when people say "data is the new oil", they tend to be referring to quantities of data, e.g., "China has the most people, therefore it has more data, therefore it will rule AI". But there are all kinds of algorithms, and they do different things. What matters is not the quantity of data but its quality. That China has a huge population doesn't mean it has an advantage in designing algorithms for a variety of military purposes, because population data doesn't necessarily translate into the data needed for military operations. You need the right data, not just lots of it.

The best way to think about AI is as a general-purpose technology (GPT) that will impact economics, society, the military, and politics in ways we both can and can't anticipate. We are used to thinking about technology as either military or commercial, with dual-use technologies having applications in both spheres. GPTs are even broader: they touch huge portions of our lives. Examples from the past include electricity, the combustion engine, and computing.


Q: What shifts in the character of warfare and military technology will be ushered in by the use of AI?

A: AI has the potential to have a substantial impact on the balance of power in international conflict. There are several applications of AI, and most will be far from the battlefield, such as logistics and training. There will be pattern recognition algorithms that help with everything from planning truck routes to analyzing drone footage and feeds, as in the U.S.-based Project Maven. There will also be autonomous systems integrated into submarines and airplanes. There are applications in the decision science arena: decision aids that could shape the way commanders make choices on a complicated multidomain battlefield. We're also looking at potential force structure consequences. If advances in AI mean the best way to project air power is through large numbers of cheaper platforms linked together in a swarm, then that's a major shift away from the current model of air warfare, which relies on small numbers of really expensive platforms.

You have algorithms that come from the commercial sector, and then military applications, where an algorithm must be more secure for cybersecurity reasons. However, a pattern recognition algorithm is a pattern recognition algorithm. Those kinds of algorithms are likely to proliferate widely because lots of countries will have companies that can develop algorithms in that vein. Then there are algorithms, like a decision science battlefield management algorithm or a swarming algorithm, that only militaries will want. If they're effective, they could really reshape the balance of power because, even though they are software, they will be difficult to acquire.

Another interesting area to think about is the potential use of AI in nuclear weapons systems, and how the United States has thought about it thus far. The U.S. generally doesn't like to say it won't do things, but multiple senior U.S. military commanders in the last five years have said they think integrating AI into these platforms is a bad idea. You don't want to lose positive control over the use of nuclear weapons. I do think there are risks, though: countries with less secure second-strike capabilities that fear decapitation in a conflict might be more likely to take those kinds of chances. In the nuclear arena it is essential to have a human in the loop, both on the early warning side and on the use side.

Q: China announced its ambition to become the world leader in AI by 2030. What are the implications of potential Chinese AI leadership?

A: China is an autocratic regime, and the character of that regime, which is inherently untrusting of its people, will impact its use of AI. China envisions the technology in an entirely different way than the U.S. does. When Western countries think about AI, they think about using it to improve human life and the ability of humans to make complex decisions: algorithms that help decrease the cognitive load on busy decision makers. I think Chinese leaders imagine AI as a tool of control, because they don't trust their people in the first place. China has a different vision that grows out of its politics, which underscores the importance of the U.S. and other free societies working together to ensure leadership in AI.

More generally, I think national power in the world of AI will be based more on information than we've ever seen before. It is not just a matter of writing algorithms, but a question of what companies, people, and government bureaucracies do with those algorithms. One of the big lessons from military history and the history of national power is that technical advances only take you so far. The countries most likely to succeed during periods of technological transition aren't necessarily those that invent technologies, but those that employ them best, whether the goal is economic efficiency, jobs, or military power. I would expect the same to be true in the era of AI. That is important because it means that while we should focus on AI leadership from the perspective of technical advances, those advances are only one piece of the broader puzzle when it comes to national power. It will be more about how we act than about specific techniques.

Q: What policy-making challenges does AI present and how can we bridge the policy-tech gap?

A: You have two simultaneous challenges. On one hand, you have policy makers who aren't trained in coding and don't necessarily understand the details of how algorithms work, trying to make decisions about their application and use. On the other hand, you have programmers, AI researchers, and scientists who haven't traditionally been as deeply engaged in public policy questions. I think it is only natural that there would be some tension, miscommunication, and issues similar to what we saw in the early biotech era. I think the gap between the tech and policy worlds has shrunk a bit in the last few years. People were worried when Google decided not to renew its Project Maven contract; it was interpreted as the tech industry not wanting to work with Washington. But the past two years have demonstrated that that is not the case, and I think things could improve moving forward. With better communication, more cooperation is possible. The number of individuals being trained in AI is growing as well, which will help.


Michael C. Horowitz is the Richard Perry Professor of Political Science and Director of Perry World House at the University of Pennsylvania. He is the author of The Diffusion of Military Power: Causes and Consequences for International Politics and the co-author of Why Leaders Fight. He is affiliated with the Center for a New American Security, the Center for Strategic and International Studies, and the Foreign Policy Research Institute, and is a member of the Council on Foreign Relations. Professor Horowitz received his Ph.D. in Government from Harvard University and his B.A. in political science from Emory University.
