Artificial Intelligence and Military Engagement


The Common Good had the honor of hosting Major General (Ret.) Dr. Robert H. Latiff on May 10th, 2019 at The Common Good Forum & American Spirit Awards, an annual program presenting headline issues and the most important, forward-looking ideas affecting public policy and our lives. Latiff is a retired U.S. Air Force Major General and Adjunct Faculty Member with the John J. Reilly Center for Science, Technology, and Values at the University of Notre Dame. He is also the author of Future War: Preparing for the New Global Battlefield, and his scholarly and military experience makes him an expert on the topic of Artificial Intelligence (AI) in the military.

Automated drones are commonplace on the modern battlefield.

Latiff spoke about the role of AI in warfare, beginning his talk by acknowledging that the hype around AI is often overblown; the development of AI does not necessitate the reign of robots over human beings depicted in popular science-fiction movies. However, he did admit that AI has the power to alter “the fabric of our society”: it is an “incredibly powerful tool that needs careful examination and deliberation before being used in some things like the military, which have life and death consequences.”

AI is the science of making intelligent machines capable of achieving set goals through their own computation. The type of AI relevant to military applications is Artificial Narrow Intelligence (ANI), which excels at well-defined tasks such as image recognition or playing chess and Jeopardy at a superhuman level. In warfare, it can learn from decades of combat injury data to devise means of avoiding similar injuries, improve military logistics by determining what combatants need to succeed in battle, and free humans from boring, repetitive, and dangerous jobs.

Weaponry may be integrated with facial recognition, but can it be trusted to find the correct targets?

AI has seized the attention of leaders around the globe. The apparent potential for AI to increase military and political power prompted Russian President Vladimir Putin to state that “whoever becomes the leader in this sphere will become the ruler of the world.” In 2017, Putin declared AI a priority in Russia’s effort to compete directly with the world’s great powers. The United States (US) has signaled a similar priority: in June 2018, the Pentagon established the Joint Artificial Intelligence Center (JAIC), which oversees the roughly 600 AI projects currently underway across the department at a reported cost of $1.7 billion.

According to a 2019 World Intellectual Property Organization study on the global upsurge of AI in inventive activity, the US currently leads in AI-related invention, but Latiff warned that China may soon overtake the US as the global leader in AI innovation. China released an AI plan in 2017, declaring AI a national priority and announcing the leadership’s vision for a new economic model driven by AI. Indeed, according to a study by the Sinovation Ventures AI institute, the number of AI research authors in China is increasing. With citation output growing rapidly, Chinese researchers may outpace their American counterparts as the main producers of AI research in the near future.


“While the military says it won’t let a computer pull the trigger, it is developing target recognition systems to suggest when to do so and algorithms to turn a commander’s general intent into a detailed combat plan.”

Latiff warned that racing to implement AI systems, particularly in the military, comes with significant drawbacks. First, he noted that AI systems cannot explain why they make their decisions, making it hard to understand their successes and failures or to adjust for future decisions. Second, he cited AI’s vulnerability to spoofing and false data, whether from adversarial input or stray information; this vulnerability leaves AI systems prone to extreme or unpredictable conclusions that may harm their users. Third, he argued that AI systems are hard to trust: because they cannot be fully explained, they cannot be fully tested. According to the retired Major General, trustworthiness matters to military minds because commanders “do not very much like surprises”; the often unexpected or even aggressive behaviors exhibited by AIs are incompatible with such values. Ultimately, since AI systems may provide unpredictable answers, they could sow more confusion on a battlefield, not less.

In addition to these shortcomings, Latiff explained that AI could usher in an ominous future for global warfare. He argued that the most concerning use of AI today is its implementation in military command and control systems, which decision makers rely on both for knowledge about threats and for assistance in acting on that knowledge.

“Indeed, human beings are capable of empathetic and good, sometimes counterintuitive decisions. Machines, even smart ones, are unlikely to be capable of such nuance.”

Latiff warned that introducing AI into control centers may compound military aggression. In recent years, the US military has adopted increasingly bellicose, forward-leaning approaches. AIs in the control center would register such trends and, in turn, generate more aggressive combat plans. Latiff explained that AI systems are not capable of producing empathetic or counterintuitive solutions.

If Soviet military leaders had given such power to AI in 1983, the world might have suffered nuclear devastation. That year, Soviet military officer Stanislav Petrov spotted a warning from computers that the U.S. had launched several missiles. Had the warning been true, the Soviets could have launched missiles of their own under the doctrine of mutually assured destruction. Petrov, who determined upon closer inspection that the warning was illegitimate, has since been credited as the man who saved the world from nuclear ruin. It is unlikely that an AI system would have made the same decision. Latiff stressed that such detrimental shortcomings must be considered before AI is implemented in the higher offices of the military.

 

AI could also erode geopolitical stability and undermine the deterrent value of nuclear weapons. The idea that possessing nuclear weapons maintains peace rests on the notion that one country will not use its nuclear weapons on another country it knows to possess such weapons, as doing so would ensure its own destruction. According to a research paper by the Rand Corporation, the potential for AI and machine learning to dictate military actions could cause this assurance of stability to break down: countries would become less confident in their ability to predict their adversaries’ moves and thus less secure in their own restraint.

Maintaining peace as AI becomes more widespread will require strong diplomacy. Latiff argued that agreements among nations may be even more important than military strength and preparedness in the struggle to maintain peace; the Rand Corporation paper suggests the same. Meanwhile, the US has withdrawn from the ABM treaty, the INF treaty, and the Iran nuclear agreement while refusing to consider treaties on space or cyber warfare. For now, AI weapon development seems likely to continue at a rapid pace, but an international agreement may not be far behind. According to Latiff, there is a broad, perhaps tacit, agreement that AI systems must be used wisely, and the UN must come together to agree on how to do so responsibly.

Author: Francesca Martini [TCG Intern]


 

Watch Dr. Latiff’s full discussion below:

The views and opinions expressed in this article are those of the authors and do not necessarily reflect the official policy or position of The Common Good.