Imagining an International Liability Framework for Autonomous and Semi-Autonomous Weapons
Author
Rebecca Dragusin
Editor
Angel Liang
Publications Lead
Artjom Gavryshev
I. Introduction
Autonomous and semi-autonomous weapons systems employing Artificial Intelligence (AI) possess technical capabilities that can remove humans from decision-making. For this reason, AI-enabled weapons generate a legal ‘accountability gap’ across their life cycle that complicates efforts to assign responsibility or secure compensation for AI-related harm. This brief introduces approaches to ascribing liability for AI-enabled weapons despite this gap. Throughout, the term “weapons systems” refers collectively to autonomous and semi-autonomous weapons.
II. The accountability gap
a. Distinguishing autonomous and semi-autonomous weapons
The accountability gap arises from the technical capabilities of autonomous and semi-autonomous weapons: surveillance of, selection of,[i] or engagement with a target for the purpose of destroying or neutralizing it. Autonomous weapons systems lack “meaningful human control”[ii] and can “independently [select and engage] targets”.[iii] These systems are also described as operating without a ‘man-in-the-loop’[iv] because humans are generally unable to intervene once the system initiates its decision-making process.[v] Semi-autonomous systems have similar capabilities to autonomous ones but retain a ‘man-in-the-loop’, allowing human operators to override or direct the AI’s decisions.[vi]
Although both autonomous and semi-autonomous systems disrupt existing understandings of liability, they do so to different degrees.[vii] Autonomous weapons are challenging because there is no person to whom liability for decision-making can be ascribed. Semi-autonomous weapons, in contrast, are challenging within command structures, where decision-making is distributed across multiple actors.
b. The theoretical context of the accountability gap
The accountability gap is the gap in liability created because weapons systems’ capabilities alter the way we conceptualize liability. Specifically, weapons systems “expand familiar sources of error and complicate causal analyses”.[viii] In turn, it is more difficult to attribute liability for damages and, relatedly, harder for victims to pursue recourse for wrongs committed against them.
The accountability gap for weapons systems is created throughout their life cycle, including at the “design, development, training and testing” stage.[ix][x] The gap therefore implicates multiple actors, including states, militaries, private military contractors, and the corporations that design and supply the underlying technologies. For corporations, an accountability framework must determine who within the corporation is responsible for the technologies. Functionally, addressing the accountability gap is important because attaching consequences to misuse[xi] and poor oversight of weapons systems can deter both at every stage of their life cycle.
III. Legal theories of liability for autonomous and semi-autonomous weapons
a. Vicarious liability
Under strict liability models, intent is irrelevant to ascribing legal liability. Vicarious liability is a form of strict liability that reflects “the idea that employees can cause wrongs that are nevertheless incidental to their employment”.[xii] In this way, employees who work on weapons systems are not solely responsible for damages; instead, corporations absorb responsibility for their employees’ actions when those employees act within the scope of their employment.[xiii] At a theoretical level, AI parallels “the employer-employee relation” because this technology “obeys its owner”.[xiv] A vicarious liability approach thus ensures that victims can seek meaningful compensation from actors that have substantive funds.
b. AI as the ‘agent’ of a human principal
This liability regime likens AI to “an agent” with “the human or corporate deployer as the agent’s principal”.[xv] This approach is beneficial because it ensures that a human is responsible for the AI’s actions by always maintaining a person in a supervisory role. Furthermore, the degree to which a person is responsible for a weapon under this model depends on “the extent to which the defendant has delegated responsibility to the system” (a “graded agency” model).[xvi] In this way, the agency approach helps to distinguish between autonomous and semi-autonomous weapons.
c. Negligence model
Under a negligence model of liability, “anyone who is… involved with an algorithm would be liable” if they act negligently.[xvii] For weapons systems, a negligent act is one in which an actor involved in the AI life cycle breaches a standard of care.[xviii] Because the standard of care is defined differently in every field, it is necessary to consider what such a standard would require in each of the industries involved in the AI life cycle. The benefit of this model is that there are “no limits on what sort of connection could implicate a defendant”.[xix] Under such an expansive model, victims would have recourse for negligent acts committed by anyone involved in a weapons system’s life cycle. A negligence approach would also focus on “what the system actually did, rather than what made it act as it did”.[xx] In this way, a more holistic conception of liability is adopted, one that favours victims and their experience of harm.[xxi]
d. War torts regime
A ‘war torts’ regime combines aspects of domestic and international liability frameworks, providing a mechanism through which to create a governance framework for liability that meaningfully incorporates all relevant actors in a weapons system’s life cycle. The notion of ‘war torts’ helps expand state liability beyond the willful acts that existing war crimes frameworks require.[xxii] Instead, individuals can be held liable for “reckless” conduct.[xxiii]
For example, both militaries (which operate under a command responsibility structure) and private military companies (PMCs) can be held liable under a war torts regime. Under military command, soldiers are “held responsible for failing to uphold” the laws of war, whether through independent decisions “or by obeying unlawful commands”.[xxiv] A PMC differs from a traditional military because command structures might not exist in the same way within these private corporations.[xxv] PMCs are also held liable for crimes as civilians,[xxvi] despite completing functions similar to those of militaries. A war torts framework, by contrast, can consider PMC and military responsibility similarly.
IV. International law on party obligations
Notably, private companies are governed by the UN Guiding Principles on Business and Human Rights (UNGPs) and the Organization for Economic Co-operation and Development’s (OECD) Guidelines for Multinational Enterprises, which define what due diligence requires with respect to human rights violations.[xxvii] Generally, due diligence is defined as a “standard of care with which to assess… implementation and compliance with obligations of conduct”.[xxviii]
V. Recommendations
a. Create an international governance framework based on war torts
Elements of domestic and international liability approaches should be integrated to create a practical international governance framework. Such a framework is beneficial because it would create “specific conditions and procedures” that eliminate ambiguities about liability.[xxix] In turn, states are more likely to be deterred from committing wrongful acts through their use of weapons systems.
b. Who bears responsibility under what conditions
A strict liability approach should be adopted in the governance framework because, with intent irrelevant, there is no need to trace decision-making to a particular person or point. Although strict liability models can lead to over-deterrence of innovation, distinguishing between high-risk and lower-risk applications can overcome this limitation. Under a war torts model, autonomous systems should be defined as ‘high-risk’. In high-risk applications, the corporations that create the technology should be liable under vicarious liability. To further distinguish approaches by risk, corporations should be responsible under a negligence model for semi-autonomous weapons. Using a negligence model for semi-autonomous weapons encourages ‘on-the-loop’ human engagement by reducing the relative cost of this behaviour compared with strict liability for autonomous weapons.
State representatives will continue to be covered by international law in their use of weapons systems. However, militaries should be responsible under a negligence standard for their actions (instead of strict liability) because they engage with the technology at the use stage of its life cycle; that is, the governance framework focuses on the military’s actions and on what the weapons system actually did. A negligence standard is also applicable to PMCs because they engage with weapons systems similarly to militaries.
The standard of care for negligence has to be defined independently for each sector. For the companies that create the technology, the negligence standard should be informed by the UNGPs and the OECD guidelines on human rights violations. For militaries, negligence standards should be informed by the international law that governs state responsibility. The standard of care for PMCs should begin from the commonalities between the private-sector and military approaches to negligence standards.
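To make the recommended allocation easier to scan, the sketch below encodes it as a simple lookup. This is a purely illustrative schematic of this brief’s recommendations written in Python, not an existing legal instrument or software tool; the SystemType and Actor enums and the liability_standard function are hypothetical constructs chosen for readability.

```python
from enum import Enum, auto

class SystemType(Enum):
    AUTONOMOUS = auto()       # no 'man-in-the-loop'; treated as high-risk
    SEMI_AUTONOMOUS = auto()  # a human operator can override or direct the AI

class Actor(Enum):
    CORPORATION = auto()      # designs and supplies the underlying technology
    MILITARY = auto()         # engages the system at the use stage
    PMC = auto()              # private military company

def liability_standard(actor: Actor, system: SystemType) -> str:
    """Illustrative mapping of the liability standards recommended above."""
    if actor is Actor.CORPORATION:
        # Autonomous (high-risk) systems attract vicarious (strict) liability;
        # semi-autonomous systems attract negligence, which rewards
        # on-the-loop design by lowering the relative cost of oversight.
        if system is SystemType.AUTONOMOUS:
            return "vicarious (strict) liability"
        return "negligence, informed by the UNGPs and OECD guidelines"
    if actor is Actor.MILITARY:
        return "negligence, informed by international law on state responsibility"
    # PMCs: negligence, starting from private-sector and military commonalities.
    return "negligence, drawing on private-sector and military standards"

for actor in Actor:
    for system in SystemType:
        print(f"{actor.name} / {system.name}: {liability_standard(actor, system)}")
```

Note that only the corporation’s standard varies with the system’s degree of autonomy; this captures the brief’s core distinction between high-risk (autonomous) and lower-risk (semi-autonomous) applications.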
[i] Bode et al., 2023, p. 1
[ii] Fuzaylova, 2019
[iii] Crootof, 2016, p. 1367
[iv] Fuzaylova, 2019
[v] Sari & Celik, 2021, p. 5
[vi] Fuzaylova, 2019
[vii] Fuzaylova, 2019
[viii] Crootof, 2022
[ix] British Columbia Law Institute, 2024, p. 47
[x] British Columbia Law Institute, 2024, p. 47
[xi] Lemley & Casey, 2019
[xii] Glavaničová & Pascucci, 2022, p. 5
[xiii] Glavaničová & Pascucci, 2022, p. 5
[xiv] Glavaničová & Pascucci, 2022, p. 6
[xv] British Columbia Law Institute, 2024, p. 36
[xvi] British Columbia Law Institute, 2024, p. 116
[xvii] Diamantis, 2021, p. 331
[xviii] British Columbia Law Institute, 2024, p. 121
[xix] Diamantis, 2021, p. 332
[xx] British Columbia Law Institute, 2024, p. 121
[xxi] British Columbia Law Institute, 2024, p. 27
[xxii] Fuzaylova, 2019
[xxiii] Fuzaylova, 2019
[xxiv] Roff et al., 2013, p. 354
[xxv] Lehnardt, 2008, p. 1015
[xxvi] Lehnardt, 2008, p. 1017
[xxvii] Kanetake & Ryngaert, 2023, p. 6
[xxviii] Kanetake & Ryngaert, 2023, p. 12
[xxix] Kanetake & Ryngaert, 2023, pp. 37-38
References
Abbott, R. B. (2020). The Reasonable Robot: Artificial Intelligence and the Law. Cambridge University Press. https://ssrn.com/abstract=3611370
Anup, S. S. & A. (2023). Resolving the Liability Dilemma in AI Caused Harms. RSRR. https://www.rsrr.in/post/resolving-the-liability-dilemma-in-ai-caused-harms
Batallas, C. (2024). When AI Meets the Laws of War. ieUniversity. https://www.ie.edu/insights/articles/when-ai-meets-the-laws-of-war/
Beckers, A., & Teubner, G. (2021). Conclusion: Three Liability Regimes and Their Interrelations. In Three Liability Regimes for Artificial Intelligence: Algorithmic Actants, Hybrids, Crowds (pp. 138–166). Oxford: Hart Publishing. Retrieved April 15, 2025, from http://dx.doi.org/10.5040/9781509949366.ch-006
Bode, I., Huelss, H., Nadibaidze, A., Qiao-Franco, G., & Watts, T. F. A. (2023). Algorithmic Warfare: Taking Stock of a Research Programme. Global Society, 38(1), 1–23.
British Columbia Law Institute. (2024). Artificial Intelligence and Civil Liability. British Columbia Law Institute, 96, 1-113.
Cops, D. Countering the diversion of the components of conventional weapons. An assessment of potential technological solutions. Flemish Peace Institute, 1-28. https://vlaamsvredesinstituut.eu/wp-content/uploads/2024/08/4010848-VVI-ANALYSE-02-DTECT-WEB.pdf
Crootof, R. (2016). War Torts: Accountability For Autonomous Weapons. University of Pennsylvania Law Review, 164(6), 1347–1402.
Crootof, R. (2022). AI and the Actual IHL Accountability Gap. CIGI. https://www.cigionline.org/articles/ai-and-the-actual-ihl-accountability-gap/
Crootof, R. (2022b). Implementing War Torts. Forthcoming, Virginia Journal of International Law, No. 63, 2023, http://dx.doi.org/10.2139/ssrn.4268691
Crootof, R. (2022c). The Case for War Torts—for Ukraine and Beyond. Lawfare. https://www.lawfaremedia.org/article/case-war-torts%E2%80%94-ukraine-and-beyond
Crootof, R. (2022d). War torts. New York University Law Review (1950), 97(4), 1063–1142. https://doi.org/10.2139/ssrn.4040075
Diamantis, M. (2021). Vicarious Liability for AI (SSRN Scholarly Paper 3850418). Social Science Research Network. https://papers.ssrn.com/abstract=3850418
Dickinson, L. A. (2007). Accountability of Private Security Contractors under International and Domestic Law. American Society of International Law, 11(31). https://www.asil.org/insights/volume/11/issue/31/accountability-private-security-contractors-under-international-and
EU Artificial Intelligence Act. Article 6: Classification Rules for High-Risk AI Systems. https://artificialintelligenceact.eu/article/6/
Fuzaylova, E. (2019). War Torts, Autonomous Weapon Systems, and Liability: Why a Limited Strict Liability Tort Regime Should Be Implemented. Cardozo Law Review, 40(3).
Glavaničová, D., & Pascucci, M. (2022). Vicarious liability: A solution to a problem of AI responsibility? Ethics and Information Technology, 24(3), 28. https://doi.org/10.1007/s10676-022-09657-8
Gluckstein LLP. (2024, March 18). When Is a Company Liable in Cases of AI Negligence? Gluckstein LLP. https://www.gluckstein.com/news-item/ai-negligence--when-is-a-company-liable-for-damages
Goldfarb, A., & Lindsay, J. (2020). Artificial Intelligence In War: Human Judgment As An Organizational Strength And A Strategic Liability. Brookings. https://www.brookings.edu/wp-content/uploads/2020/11/fp_20201130_artificial_intelligence_in_war.pdf
Haim, A. Tort Liability in War. University of Essex. https://www.essex.ac.uk/research-projects/tort-liability-in-war
Hill, L. (2024). Artificial Intelligence & Legal Liability Using AI: What Happens When Artificial Intelligence Goes Wrong?. Ward Hadaway. https://www.wardhadaway.com/insights/updates/artificial-intelligence-legal-liability-using-ai-what-happens-when-artificial-intelligence-goes-wrong/
Holistic AI White Paper. (2023). Regulation of AI in Biometrics: The key laws you need to know. Holistic AI, 2-22. https://cdn.prod.website-files.com/6305e5d52c28356b4fe71bac/64c7efe775d6584959e41068_Holistic-AI-White-Paper-Regulation-of-AI-in-Biometrics-Compressed.pdf
Huberman, P. (2021). A Theory of Vicarious Liability for Autonomous-Machine-Caused Harm. Osgoode Hall Law Journal, 58(2), 233-284. https://doi.org/10.60082/2817-5069.3678
Kanetake, M., & Ryngaert, C. (2023). Due diligence and corporate liability of the defence industry: Arms exports, end use and corporate responsibility. Flemish Peace Institute. https://vlaamsvredesinstituut.eu/wp-content/uploads/2023/05/VVI-Rapport-Due-Dilligence-WEB-new.pdf
Karkason, D. (2024). Vicarious Liability in International Law: How Does It Function? Transnational Matters. https://www.transnationalmatters.com/vicarious-liability-in-international-law-how-does-it-function/
Kraska, J. (2021). Command Accountability for AI Weapon Systems in the Law of Armed Conflict. International Law Studies 97, 408-447. ISSN 2375-2831
Lehnardt, C. (2008). Individual Liability of Private Military Personnel under International Criminal Law. European Journal of International Law, 19(5), 1015–1034. https://doi.org/10.1093/ejil/chn058
Lemley, M. A., & Casey, B. (2019). Remedies for Robots. The University of Chicago Law Review, 86(5). https://lawreview.uchicago.edu/print-archive/remedies-robots
Lonsdorf, K. (2024). Eyewitnesses in Gaza say Israel is using sniper drones to shoot Palestinians. NPR. https://www.npr.org/2024/11/26/g-s1-35437/israel-sniper-drones-gaza-eyewitnesses
Mann, J-K. (2019). Autonomous Weapons Systems and the Liability Gap, Part Two: Civil Liability and State Responsibility. Rethinking SLIC. https://rethinkingslic.org/blog/53-autonomous-weapons-systems-and-the-liability-gap-part-two-civil-liability-and-state-responsibility
Marijan, B. AI-Guided Weapons Must Be Curbed by Global Rules - and Soon: Battlefield use of AI is clearly visible in the war in Ukraine. CIGI. https://www.cigionline.org/articles/ai-guided-weapons-must-be-curbed-by-global-rules-and-soon/
Merwe, M. van der, Ramakrishnan, K., & Anderljung, M. (2024). Tort Law and Frontier AI Governance. Lawfare. https://www.lawfaremedia.org/article/tort-law-and-frontier-ai-governance
Morgan, P. (2024). Tort Law and AI: Vicarious Liability. In The Cambridge Handbook of Private Law and Artificial Intelligence (pp. 135–171). Cambridge University Press. https://doi.org/10.1017/9781108980197.008
Osmani, N. (2020). The Complexity of Criminal Liability of AI Systems. Masaryk University Journal of Law and Technology, 14(1), 53–82. https://doi.org/10.5817/MUJLT2020-1-3
Ramakrishnan, K., Smith, G., & Downey, C. (2024). U.S. Tort Liability for Large-Scale Artificial Intelligence Damages. RAND. https://www.rand.org/pubs/research_reports/RRA3084-1.html
Roff, H. M., Henschke, A., Allhoff, F., & Evans, N. G. (2013). Killing in War: Responsibility, liability, and lethal autonomous robots. In Routledge Handbook of Ethics and War (1st ed., pp. 352–364). Routledge. https://doi.org/10.4324/9780203107164-38
Sari, O., & Celik, S. (2021). Legal evaluation of the attacks caused by artificial intelligence-based lethal weapon systems within the context of Rome statute. The Computer Law and Security Report, 42. https://doi.org/10.1016/j.clsr.2021.105564
Shah, R. (2014). Beating Blackwater: Using Domestic Legislation to Enforce the International Code of Conduct for Private Military Companies. The Yale Law Journal, 123(7). https://www.yalelawjournal.org/comment/beating-blackwater-using-domestic-legislation-to-enforce-the-international-code-of-conduct-for-private-military-companies
Tadros, V. (2014). Orwell’s Battle with Brittain: Vicarious Liability for Unjust Aggression. Philosophy & Public Affairs, 42(1), 42–77. https://doi.org/10.1111/papa.12025
Tannenbaum, W. A., Song, K., & Malek, L. A. (2022). Theories of AI liability: It's still about the human element. Reuters. https://www.reuters.com/legal/litigation/theories-ai-liability-its-still-about-human-element-2022-09-20/
Weil, G. (2024). Tort Law Should Be the Centerpiece of AI Governance. Lawfare. https://www.lawfaremedia.org/article/tort-law-should-be-the-centerpiece-of-ai-governance
Wilson, I., & Falokun, T. Liability for Damage Caused by Artificial Intelligence. Templars Law. https://www.templars-law.com/app/uploads/2021/05/LIABILITY-FOR-DAMAGE-CAUSED-BY-ARTIFICAL-INTELLIGENCE.pdf