Implications of AI in Military Applications: A Call for Regulation



The rapid development of Artificial Intelligence (AI) systems over the past decade has led to a paradigm shift across sectors, including the military. Autonomous weapons systems (AWS), commonly called “killer robots”,[1] can independently track, select, and engage targets without further human intervention.[2] Integrating AI into military applications could revolutionize warfare through capabilities such as autonomous decision-making, potentially enhanced precision, and much faster response times. However, these weapons also raise considerable international security and ethical concerns. This article delves into some of these issues and underscores the need for comprehensive global regulation within the United Nations framework.


AI in Military Systems: A Double-Edged Sword

OpenAI’s GPT models have caused societal turmoil in recent months. Their release to the public sparked discussions about risks and ethical problems, yet companies have swiftly adopted them to accelerate and improve processes, and individuals have found many ways in which such systems can facilitate their work or day-to-day activities. This duality of benefits versus risks is not new in the AI context.[3] In the military, AWS offer potential advantages but also present significant risks.

Firstly, there are questions regarding the capability of AWS to comply with the rules of International Humanitarian Law (IHL). This body of law establishes the key principles of distinction and proportionality. Distinction requires a clear divide between combatants and civilians, who must not be attacked during battle, while proportionality prohibits excessive use of force. Some researchers suggest that AWS, being potentially more objective and more reliable than humans (as they can be programmed not to prioritize self-protection), might better abide by these principles.[4] However, there is not yet a way to translate these principles into algorithms, especially since applying them requires complex, context-sensitive assessments beyond the capability of current AI systems. One way of mitigating the risk of civilian harm and excessive force is to ensure that AWS only target military objects, not individuals. Until IHL can be reliably encoded into AI, military weapons should remain under human control, so that operators can prevent malfunctions and guarantee compliance with IHL principles, acting as ‘fail-safe’ mechanisms in the system.

Secondly, the technological aspects are also relevant to this discussion. Because the battlefield is an unstructured and unpredictable environment, it is challenging to design software that can adapt to constant change and ensure ethical engagements in such scenarios. In addition, AWS rely on massive amounts of data to train their algorithms. This data might be biased or unrepresentative, leading to unpredictable or harmful outcomes. Crafting legislation around the collection, treatment, and use of data sets for military algorithms could be a beneficial first step.

In addition to these issues, there are also challenges involving political and operational perspectives.[5]

The Ethical Quandary

The use of AI systems in military weapons raises profound ethical considerations. Central to these is the question of whether machines should be making life-and-death decisions. Currently, such decisions are informed by soldiers’ contextual interpretation, together with human emotions (e.g., compassion and empathy) and reasoning (e.g., assessing nuances and the broader social, cultural, and environmental factors at play). AI systems cannot yet understand or replicate these qualities. Moreover, if such important decisions are left to algorithms, humans are reduced to data points to be eliminated rather than treated as human subjects.

The Need for Regulation

Could legislation stifle technological development? That has been one of the critiques of regulating technology for decades, and it is now levelled at the EU’s Artificial Intelligence Act (AIA). Recently, executives from over 150 European companies signed an open letter urging the EU to rethink the AIA, arguing that its rules could jeopardize competitiveness without truly addressing the risks. Regarding AWS, the discussion at the global level involves countries that spend billions on military R&D and consider staying at the forefront of technological development necessary for national security. While the argument is understandable, it is vital to evaluate whether society is willing to compromise its safety and fundamental rights in favour of AI-enabled or AI-enhanced weaponry.

Robust regulation that focuses on the product itself (AWS) and covers the development and deployment of these weapons is paramount to mitigating risks. A regulatory framework centred on adherence to IHL and designed to ensure transparency and accountability can increase security while still making it possible to harness the benefits of the technology.

If regulation comes to fruition, it will need wide and meaningful adoption among States. The States considered major players in military development should take the lead in crafting regulation that favours international security. Otherwise, we could see AWS used whenever convenient, as recent developments with other internationally regulated weapons have shown.[6]


Developments in the field of AI advance at an exponential pace. Humans hold the power to decide whether the deployment of AI will be inhumane or not.[7] That is true for all sectors of society, including the military.

Within the United Nations framework, the international community can act to regulate the technology. Even though consensus has been difficult to reach, dialogue should continue so that international cooperation can manage the risks AWS pose. The task is complex, but the consequences of inaction are too concerning to ignore.

[1] Stop Killer Robots, Less Autonomy, More Humanity, 2021.

[2] C. Heyns, Report of the Special Rapporteur on Extrajudicial, Summary or Arbitrary Executions, 9 April 2013.

[3] T.F. Blauth – O.J. Gstrein – A. Zwitter, Artificial Intelligence Crime: An Overview of Malicious Use and Abuse of AI, in IEEE Access, 10, 2022, 77110 ff.

[4] R.C. Arkin, Governing Lethal Behavior in Autonomous Robots, Boca Raton FL, 2009, 29–30.

[5] T.F. Blauth, Autonomous Weapons Systems in Warfare: Is Meaningful Human Control Enough?, in A. Zwitter – O. Gstrein (eds), Handbook on the Politics and Governance of Big Data and Artificial Intelligence, Cheltenham, 2023, 489 ff.

[6] D.E. Sanger – E. Schmitt, Biden Weighs Giving Ukraine Weapons Banned by Many U.S. Allies, 6 July 2023.

[7] R. Chowdhury – S. Hendrickson, Artificial Intelligence Doesn’t Have to Be Inhumane, 14 June 2023.
