An increasing number of States are developing military AI capabilities, which may include using AI to enable autonomous systems.1 Military use of AI can and should be ethical, responsible, and enhance international security. Use of AI in armed conflict must be in accord with applicable international humanitarian law, including its fundamental principles. Military use of AI capabilities needs to be accountable, including by employing such capabilities during military operations within a responsible human chain of command and control. A principled approach to the military use of AI should include careful consideration of risks and benefits, and it should also minimize unintended bias and accidents. States should take appropriate measures to ensure the responsible development, deployment, and use of their military AI capabilities, including those enabling autonomous systems. These measures should be applied across the life cycle of military AI capabilities.
The following statements reflect best practices that the endorsing States believe should be implemented in the development, deployment, and use of military AI capabilities, including those enabling autonomous systems:
- States should take effective steps, such as legal reviews, to ensure that their military AI capabilities will only be used consistent with their respective obligations under international law, in particular international humanitarian law.
- States should maintain human control and involvement for all actions critical to informing and executing sovereign decisions concerning nuclear weapons employment.
- States should ensure that senior officials oversee the development and deployment of all military AI capabilities with high-consequence applications, including, but not limited to, weapon systems.
- States should adopt, publish, and implement principles for the responsible design, development, deployment, and use of AI capabilities by their military organizations.
- States should ensure that relevant personnel exercise appropriate care, including appropriate levels of human judgment, in the development, deployment, and use of military AI capabilities, including weapon systems incorporating such capabilities.
- States should ensure that deliberate steps are taken to minimize unintended bias in military AI capabilities.
- States should ensure that military AI capabilities are developed with auditable methodologies, data sources, design procedures, and documentation.
- States should ensure that personnel who use or approve the use of military AI capabilities are trained so they sufficiently understand the capabilities and limitations of those systems and can make context-informed judgments on their use.
- States should ensure that military AI capabilities have explicit, well-defined uses and that they are designed and engineered to fulfill those intended functions.
- States should ensure that the safety, security, and effectiveness of military AI capabilities are subject to appropriate and rigorous testing and assurance within their well-defined uses and across their entire life cycles. Self-learning or continuously updating military AI capabilities should also be subject to a monitoring process to ensure that critical safety features have not been degraded.
- States should design and engineer military AI capabilities so that they possess the ability to detect and avoid unintended consequences and the ability to disengage or deactivate deployed systems that demonstrate unintended behavior. States should also implement other appropriate safeguards to mitigate risks of serious failures. These safeguards may be drawn from those designed for all military systems as well as those for AI capabilities not intended for military use.
- States should pursue continued discussions on how military AI capabilities are developed, deployed, and used in a responsible manner, to promote the effective implementation of these practices and the establishment of other practices that the endorsing States find appropriate. These discussions should include consideration of how to implement these practices in the context of their exports of military AI capabilities.
The endorsing States will:
- implement these practices when developing, deploying, or using military AI capabilities, including those enabling autonomous systems;
- publicly describe their commitment to these practices;
- support other appropriate efforts to ensure that such capabilities are used responsibly and lawfully; and
- further engage the rest of the international community to promote these practices, including in other fora on related subjects, without prejudice to ongoing discussions in those fora.
1. The concepts of artificial intelligence and autonomy are subject to a range of interpretations. For the purposes of this Declaration, artificial intelligence may be understood to refer to the ability of machines to perform tasks that would otherwise require human intelligence — for example, recognizing patterns, learning from experience, drawing conclusions, making predictions, or taking action — whether digitally or as the smart software behind autonomous physical systems. Similarly, autonomy may be understood to involve a system operating without further human intervention after activation.